Nov 29 01:15:56 np0005539552 kernel: Linux version 5.14.0-642.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025
Nov 29 01:15:56 np0005539552 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 29 01:15:56 np0005539552 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 29 01:15:56 np0005539552 kernel: BIOS-provided physical RAM map:
Nov 29 01:15:56 np0005539552 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 29 01:15:56 np0005539552 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 29 01:15:56 np0005539552 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 29 01:15:56 np0005539552 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Nov 29 01:15:56 np0005539552 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Nov 29 01:15:56 np0005539552 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 29 01:15:56 np0005539552 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 29 01:15:56 np0005539552 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Nov 29 01:15:56 np0005539552 kernel: NX (Execute Disable) protection: active
Nov 29 01:15:56 np0005539552 kernel: APIC: Static calls initialized
Nov 29 01:15:56 np0005539552 kernel: SMBIOS 2.8 present.
Nov 29 01:15:56 np0005539552 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Nov 29 01:15:56 np0005539552 kernel: Hypervisor detected: KVM
Nov 29 01:15:56 np0005539552 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 29 01:15:56 np0005539552 kernel: kvm-clock: using sched offset of 3252605232 cycles
Nov 29 01:15:56 np0005539552 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 29 01:15:56 np0005539552 kernel: tsc: Detected 2800.000 MHz processor
Nov 29 01:15:56 np0005539552 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Nov 29 01:15:56 np0005539552 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 29 01:15:56 np0005539552 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 29 01:15:56 np0005539552 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Nov 29 01:15:56 np0005539552 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Nov 29 01:15:56 np0005539552 kernel: Using GB pages for direct mapping
Nov 29 01:15:56 np0005539552 kernel: RAMDISK: [mem 0x2d83a000-0x32c14fff]
Nov 29 01:15:56 np0005539552 kernel: ACPI: Early table checksum verification disabled
Nov 29 01:15:56 np0005539552 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 29 01:15:56 np0005539552 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 01:15:56 np0005539552 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 01:15:56 np0005539552 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 01:15:56 np0005539552 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Nov 29 01:15:56 np0005539552 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 01:15:56 np0005539552 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 01:15:56 np0005539552 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Nov 29 01:15:56 np0005539552 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Nov 29 01:15:56 np0005539552 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Nov 29 01:15:56 np0005539552 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Nov 29 01:15:56 np0005539552 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Nov 29 01:15:56 np0005539552 kernel: No NUMA configuration found
Nov 29 01:15:56 np0005539552 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Nov 29 01:15:56 np0005539552 kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Nov 29 01:15:56 np0005539552 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Nov 29 01:15:56 np0005539552 kernel: Zone ranges:
Nov 29 01:15:56 np0005539552 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 29 01:15:56 np0005539552 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 29 01:15:56 np0005539552 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Nov 29 01:15:56 np0005539552 kernel:  Device   empty
Nov 29 01:15:56 np0005539552 kernel: Movable zone start for each node
Nov 29 01:15:56 np0005539552 kernel: Early memory node ranges
Nov 29 01:15:56 np0005539552 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 29 01:15:56 np0005539552 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Nov 29 01:15:56 np0005539552 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Nov 29 01:15:56 np0005539552 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Nov 29 01:15:56 np0005539552 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 29 01:15:56 np0005539552 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 29 01:15:56 np0005539552 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Nov 29 01:15:56 np0005539552 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 29 01:15:56 np0005539552 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 29 01:15:56 np0005539552 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 29 01:15:56 np0005539552 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 29 01:15:56 np0005539552 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 29 01:15:56 np0005539552 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 29 01:15:56 np0005539552 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 29 01:15:56 np0005539552 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 29 01:15:56 np0005539552 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 29 01:15:56 np0005539552 kernel: TSC deadline timer available
Nov 29 01:15:56 np0005539552 kernel: CPU topo: Max. logical packages:   8
Nov 29 01:15:56 np0005539552 kernel: CPU topo: Max. logical dies:       8
Nov 29 01:15:56 np0005539552 kernel: CPU topo: Max. dies per package:   1
Nov 29 01:15:56 np0005539552 kernel: CPU topo: Max. threads per core:   1
Nov 29 01:15:56 np0005539552 kernel: CPU topo: Num. cores per package:     1
Nov 29 01:15:56 np0005539552 kernel: CPU topo: Num. threads per package:   1
Nov 29 01:15:56 np0005539552 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Nov 29 01:15:56 np0005539552 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 29 01:15:56 np0005539552 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 29 01:15:56 np0005539552 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 29 01:15:56 np0005539552 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 29 01:15:56 np0005539552 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 29 01:15:56 np0005539552 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Nov 29 01:15:56 np0005539552 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 29 01:15:56 np0005539552 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 29 01:15:56 np0005539552 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 29 01:15:56 np0005539552 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 29 01:15:56 np0005539552 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 29 01:15:56 np0005539552 kernel: Booting paravirtualized kernel on KVM
Nov 29 01:15:56 np0005539552 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 29 01:15:56 np0005539552 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Nov 29 01:15:56 np0005539552 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Nov 29 01:15:56 np0005539552 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 29 01:15:56 np0005539552 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 29 01:15:56 np0005539552 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64", will be passed to user space.
Nov 29 01:15:56 np0005539552 kernel: random: crng init done
Nov 29 01:15:56 np0005539552 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 29 01:15:56 np0005539552 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 29 01:15:56 np0005539552 kernel: Fallback order for Node 0: 0 
Nov 29 01:15:56 np0005539552 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Nov 29 01:15:56 np0005539552 kernel: Policy zone: Normal
Nov 29 01:15:56 np0005539552 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 29 01:15:56 np0005539552 kernel: software IO TLB: area num 8.
Nov 29 01:15:56 np0005539552 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Nov 29 01:15:56 np0005539552 kernel: ftrace: allocating 49313 entries in 193 pages
Nov 29 01:15:56 np0005539552 kernel: ftrace: allocated 193 pages with 3 groups
Nov 29 01:15:56 np0005539552 kernel: Dynamic Preempt: voluntary
Nov 29 01:15:56 np0005539552 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 29 01:15:56 np0005539552 kernel: rcu: 	RCU event tracing is enabled.
Nov 29 01:15:56 np0005539552 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Nov 29 01:15:56 np0005539552 kernel: 	Trampoline variant of Tasks RCU enabled.
Nov 29 01:15:56 np0005539552 kernel: 	Rude variant of Tasks RCU enabled.
Nov 29 01:15:56 np0005539552 kernel: 	Tracing variant of Tasks RCU enabled.
Nov 29 01:15:56 np0005539552 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 29 01:15:56 np0005539552 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Nov 29 01:15:56 np0005539552 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 29 01:15:56 np0005539552 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 29 01:15:56 np0005539552 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 29 01:15:56 np0005539552 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Nov 29 01:15:56 np0005539552 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 29 01:15:56 np0005539552 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 29 01:15:56 np0005539552 kernel: Console: colour VGA+ 80x25
Nov 29 01:15:56 np0005539552 kernel: printk: console [ttyS0] enabled
Nov 29 01:15:56 np0005539552 kernel: ACPI: Core revision 20230331
Nov 29 01:15:56 np0005539552 kernel: APIC: Switch to symmetric I/O mode setup
Nov 29 01:15:56 np0005539552 kernel: x2apic enabled
Nov 29 01:15:56 np0005539552 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 29 01:15:56 np0005539552 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 29 01:15:56 np0005539552 kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Nov 29 01:15:56 np0005539552 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 29 01:15:56 np0005539552 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 29 01:15:56 np0005539552 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 29 01:15:56 np0005539552 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 29 01:15:56 np0005539552 kernel: Spectre V2 : Mitigation: Retpolines
Nov 29 01:15:56 np0005539552 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 29 01:15:56 np0005539552 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 29 01:15:56 np0005539552 kernel: RETBleed: Mitigation: untrained return thunk
Nov 29 01:15:56 np0005539552 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 29 01:15:56 np0005539552 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 29 01:15:56 np0005539552 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 29 01:15:56 np0005539552 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 29 01:15:56 np0005539552 kernel: x86/bugs: return thunk changed
Nov 29 01:15:56 np0005539552 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 29 01:15:56 np0005539552 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 29 01:15:56 np0005539552 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 29 01:15:56 np0005539552 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 29 01:15:56 np0005539552 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 29 01:15:56 np0005539552 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 29 01:15:56 np0005539552 kernel: Freeing SMP alternatives memory: 40K
Nov 29 01:15:56 np0005539552 kernel: pid_max: default: 32768 minimum: 301
Nov 29 01:15:56 np0005539552 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Nov 29 01:15:56 np0005539552 kernel: landlock: Up and running.
Nov 29 01:15:56 np0005539552 kernel: Yama: becoming mindful.
Nov 29 01:15:56 np0005539552 kernel: SELinux:  Initializing.
Nov 29 01:15:56 np0005539552 kernel: LSM support for eBPF active
Nov 29 01:15:56 np0005539552 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 29 01:15:56 np0005539552 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 29 01:15:56 np0005539552 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 29 01:15:56 np0005539552 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 29 01:15:56 np0005539552 kernel: ... version:                0
Nov 29 01:15:56 np0005539552 kernel: ... bit width:              48
Nov 29 01:15:56 np0005539552 kernel: ... generic registers:      6
Nov 29 01:15:56 np0005539552 kernel: ... value mask:             0000ffffffffffff
Nov 29 01:15:56 np0005539552 kernel: ... max period:             00007fffffffffff
Nov 29 01:15:56 np0005539552 kernel: ... fixed-purpose events:   0
Nov 29 01:15:56 np0005539552 kernel: ... event mask:             000000000000003f
Nov 29 01:15:56 np0005539552 kernel: signal: max sigframe size: 1776
Nov 29 01:15:56 np0005539552 kernel: rcu: Hierarchical SRCU implementation.
Nov 29 01:15:56 np0005539552 kernel: rcu: 	Max phase no-delay instances is 400.
Nov 29 01:15:56 np0005539552 kernel: smp: Bringing up secondary CPUs ...
Nov 29 01:15:56 np0005539552 kernel: smpboot: x86: Booting SMP configuration:
Nov 29 01:15:56 np0005539552 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Nov 29 01:15:56 np0005539552 kernel: smp: Brought up 1 node, 8 CPUs
Nov 29 01:15:56 np0005539552 kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Nov 29 01:15:56 np0005539552 kernel: node 0 deferred pages initialised in 11ms
Nov 29 01:15:56 np0005539552 kernel: Memory: 7765868K/8388068K available (16384K kernel code, 5787K rwdata, 13900K rodata, 4192K init, 7172K bss, 616276K reserved, 0K cma-reserved)
Nov 29 01:15:56 np0005539552 kernel: devtmpfs: initialized
Nov 29 01:15:56 np0005539552 kernel: x86/mm: Memory block size: 128MB
Nov 29 01:15:56 np0005539552 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 29 01:15:56 np0005539552 kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Nov 29 01:15:56 np0005539552 kernel: pinctrl core: initialized pinctrl subsystem
Nov 29 01:15:56 np0005539552 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 29 01:15:56 np0005539552 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Nov 29 01:15:56 np0005539552 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 29 01:15:56 np0005539552 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 29 01:15:56 np0005539552 kernel: audit: initializing netlink subsys (disabled)
Nov 29 01:15:56 np0005539552 kernel: audit: type=2000 audit(1764396954.491:1): state=initialized audit_enabled=0 res=1
Nov 29 01:15:56 np0005539552 kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 29 01:15:56 np0005539552 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 29 01:15:56 np0005539552 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 29 01:15:56 np0005539552 kernel: cpuidle: using governor menu
Nov 29 01:15:56 np0005539552 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 29 01:15:56 np0005539552 kernel: PCI: Using configuration type 1 for base access
Nov 29 01:15:56 np0005539552 kernel: PCI: Using configuration type 1 for extended access
Nov 29 01:15:56 np0005539552 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 29 01:15:56 np0005539552 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 29 01:15:56 np0005539552 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 29 01:15:56 np0005539552 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 29 01:15:56 np0005539552 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 29 01:15:56 np0005539552 kernel: Demotion targets for Node 0: null
Nov 29 01:15:56 np0005539552 kernel: cryptd: max_cpu_qlen set to 1000
Nov 29 01:15:56 np0005539552 kernel: ACPI: Added _OSI(Module Device)
Nov 29 01:15:56 np0005539552 kernel: ACPI: Added _OSI(Processor Device)
Nov 29 01:15:56 np0005539552 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 29 01:15:56 np0005539552 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 29 01:15:56 np0005539552 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 29 01:15:56 np0005539552 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 29 01:15:56 np0005539552 kernel: ACPI: Interpreter enabled
Nov 29 01:15:56 np0005539552 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Nov 29 01:15:56 np0005539552 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 29 01:15:56 np0005539552 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 29 01:15:56 np0005539552 kernel: PCI: Using E820 reservations for host bridge windows
Nov 29 01:15:56 np0005539552 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 29 01:15:56 np0005539552 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 29 01:15:56 np0005539552 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 29 01:15:56 np0005539552 kernel: acpiphp: Slot [3] registered
Nov 29 01:15:56 np0005539552 kernel: acpiphp: Slot [4] registered
Nov 29 01:15:56 np0005539552 kernel: acpiphp: Slot [5] registered
Nov 29 01:15:56 np0005539552 kernel: acpiphp: Slot [6] registered
Nov 29 01:15:56 np0005539552 kernel: acpiphp: Slot [7] registered
Nov 29 01:15:56 np0005539552 kernel: acpiphp: Slot [8] registered
Nov 29 01:15:56 np0005539552 kernel: acpiphp: Slot [9] registered
Nov 29 01:15:56 np0005539552 kernel: acpiphp: Slot [10] registered
Nov 29 01:15:56 np0005539552 kernel: acpiphp: Slot [11] registered
Nov 29 01:15:56 np0005539552 kernel: acpiphp: Slot [12] registered
Nov 29 01:15:56 np0005539552 kernel: acpiphp: Slot [13] registered
Nov 29 01:15:56 np0005539552 kernel: acpiphp: Slot [14] registered
Nov 29 01:15:56 np0005539552 kernel: acpiphp: Slot [15] registered
Nov 29 01:15:56 np0005539552 kernel: acpiphp: Slot [16] registered
Nov 29 01:15:56 np0005539552 kernel: acpiphp: Slot [17] registered
Nov 29 01:15:56 np0005539552 kernel: acpiphp: Slot [18] registered
Nov 29 01:15:56 np0005539552 kernel: acpiphp: Slot [19] registered
Nov 29 01:15:56 np0005539552 kernel: acpiphp: Slot [20] registered
Nov 29 01:15:56 np0005539552 kernel: acpiphp: Slot [21] registered
Nov 29 01:15:56 np0005539552 kernel: acpiphp: Slot [22] registered
Nov 29 01:15:56 np0005539552 kernel: acpiphp: Slot [23] registered
Nov 29 01:15:56 np0005539552 kernel: acpiphp: Slot [24] registered
Nov 29 01:15:56 np0005539552 kernel: acpiphp: Slot [25] registered
Nov 29 01:15:56 np0005539552 kernel: acpiphp: Slot [26] registered
Nov 29 01:15:56 np0005539552 kernel: acpiphp: Slot [27] registered
Nov 29 01:15:56 np0005539552 kernel: acpiphp: Slot [28] registered
Nov 29 01:15:56 np0005539552 kernel: acpiphp: Slot [29] registered
Nov 29 01:15:56 np0005539552 kernel: acpiphp: Slot [30] registered
Nov 29 01:15:56 np0005539552 kernel: acpiphp: Slot [31] registered
Nov 29 01:15:56 np0005539552 kernel: PCI host bridge to bus 0000:00
Nov 29 01:15:56 np0005539552 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 29 01:15:56 np0005539552 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 29 01:15:56 np0005539552 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 29 01:15:56 np0005539552 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 29 01:15:56 np0005539552 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Nov 29 01:15:56 np0005539552 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 29 01:15:56 np0005539552 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 29 01:15:56 np0005539552 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 29 01:15:56 np0005539552 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 29 01:15:56 np0005539552 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Nov 29 01:15:56 np0005539552 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Nov 29 01:15:56 np0005539552 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Nov 29 01:15:56 np0005539552 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Nov 29 01:15:56 np0005539552 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Nov 29 01:15:56 np0005539552 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 29 01:15:56 np0005539552 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Nov 29 01:15:56 np0005539552 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 29 01:15:56 np0005539552 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Nov 29 01:15:56 np0005539552 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Nov 29 01:15:56 np0005539552 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 29 01:15:56 np0005539552 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 29 01:15:56 np0005539552 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 29 01:15:56 np0005539552 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Nov 29 01:15:56 np0005539552 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Nov 29 01:15:56 np0005539552 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 29 01:15:56 np0005539552 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 29 01:15:56 np0005539552 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Nov 29 01:15:56 np0005539552 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Nov 29 01:15:56 np0005539552 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 29 01:15:56 np0005539552 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Nov 29 01:15:56 np0005539552 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 29 01:15:56 np0005539552 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Nov 29 01:15:56 np0005539552 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Nov 29 01:15:56 np0005539552 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 29 01:15:56 np0005539552 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 29 01:15:56 np0005539552 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Nov 29 01:15:56 np0005539552 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 29 01:15:56 np0005539552 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 29 01:15:56 np0005539552 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Nov 29 01:15:56 np0005539552 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 29 01:15:56 np0005539552 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 29 01:15:56 np0005539552 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 29 01:15:56 np0005539552 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 29 01:15:56 np0005539552 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 29 01:15:56 np0005539552 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 29 01:15:56 np0005539552 kernel: iommu: Default domain type: Translated
Nov 29 01:15:56 np0005539552 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 29 01:15:56 np0005539552 kernel: SCSI subsystem initialized
Nov 29 01:15:56 np0005539552 kernel: ACPI: bus type USB registered
Nov 29 01:15:56 np0005539552 kernel: usbcore: registered new interface driver usbfs
Nov 29 01:15:56 np0005539552 kernel: usbcore: registered new interface driver hub
Nov 29 01:15:56 np0005539552 kernel: usbcore: registered new device driver usb
Nov 29 01:15:56 np0005539552 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 29 01:15:56 np0005539552 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Nov 29 01:15:56 np0005539552 kernel: PTP clock support registered
Nov 29 01:15:56 np0005539552 kernel: EDAC MC: Ver: 3.0.0
Nov 29 01:15:56 np0005539552 kernel: NetLabel: Initializing
Nov 29 01:15:56 np0005539552 kernel: NetLabel:  domain hash size = 128
Nov 29 01:15:56 np0005539552 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 29 01:15:56 np0005539552 kernel: NetLabel:  unlabeled traffic allowed by default
Nov 29 01:15:56 np0005539552 kernel: PCI: Using ACPI for IRQ routing
Nov 29 01:15:56 np0005539552 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 29 01:15:56 np0005539552 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 29 01:15:56 np0005539552 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 29 01:15:56 np0005539552 kernel: vgaarb: loaded
Nov 29 01:15:56 np0005539552 kernel: clocksource: Switched to clocksource kvm-clock
Nov 29 01:15:56 np0005539552 kernel: VFS: Disk quotas dquot_6.6.0
Nov 29 01:15:56 np0005539552 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 29 01:15:56 np0005539552 kernel: pnp: PnP ACPI init
Nov 29 01:15:56 np0005539552 kernel: pnp: PnP ACPI: found 5 devices
Nov 29 01:15:56 np0005539552 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 29 01:15:56 np0005539552 kernel: NET: Registered PF_INET protocol family
Nov 29 01:15:56 np0005539552 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 29 01:15:56 np0005539552 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 29 01:15:56 np0005539552 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 29 01:15:56 np0005539552 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 29 01:15:56 np0005539552 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 29 01:15:56 np0005539552 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 29 01:15:56 np0005539552 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Nov 29 01:15:56 np0005539552 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 29 01:15:56 np0005539552 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 29 01:15:56 np0005539552 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 29 01:15:56 np0005539552 kernel: NET: Registered PF_XDP protocol family
Nov 29 01:15:56 np0005539552 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 29 01:15:56 np0005539552 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 29 01:15:56 np0005539552 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 29 01:15:56 np0005539552 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 29 01:15:56 np0005539552 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Nov 29 01:15:56 np0005539552 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 29 01:15:56 np0005539552 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 29 01:15:56 np0005539552 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 29 01:15:56 np0005539552 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 101631 usecs
Nov 29 01:15:56 np0005539552 kernel: PCI: CLS 0 bytes, default 64
Nov 29 01:15:56 np0005539552 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 29 01:15:56 np0005539552 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Nov 29 01:15:56 np0005539552 kernel: ACPI: bus type thunderbolt registered
Nov 29 01:15:56 np0005539552 kernel: Trying to unpack rootfs image as initramfs...
Nov 29 01:15:56 np0005539552 kernel: Initialise system trusted keyrings
Nov 29 01:15:56 np0005539552 kernel: Key type blacklist registered
Nov 29 01:15:56 np0005539552 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Nov 29 01:15:56 np0005539552 kernel: zbud: loaded
Nov 29 01:15:56 np0005539552 kernel: integrity: Platform Keyring initialized
Nov 29 01:15:56 np0005539552 kernel: integrity: Machine keyring initialized
Nov 29 01:15:56 np0005539552 kernel: Freeing initrd memory: 85868K
Nov 29 01:15:56 np0005539552 kernel: NET: Registered PF_ALG protocol family
Nov 29 01:15:56 np0005539552 kernel: xor: automatically using best checksumming function   avx       
Nov 29 01:15:56 np0005539552 kernel: Key type asymmetric registered
Nov 29 01:15:56 np0005539552 kernel: Asymmetric key parser 'x509' registered
Nov 29 01:15:56 np0005539552 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 29 01:15:56 np0005539552 kernel: io scheduler mq-deadline registered
Nov 29 01:15:56 np0005539552 kernel: io scheduler kyber registered
Nov 29 01:15:56 np0005539552 kernel: io scheduler bfq registered
Nov 29 01:15:56 np0005539552 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 29 01:15:56 np0005539552 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 29 01:15:56 np0005539552 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 29 01:15:56 np0005539552 kernel: ACPI: button: Power Button [PWRF]
Nov 29 01:15:56 np0005539552 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 29 01:15:56 np0005539552 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 29 01:15:56 np0005539552 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 29 01:15:56 np0005539552 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 29 01:15:56 np0005539552 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 29 01:15:56 np0005539552 kernel: Non-volatile memory driver v1.3
Nov 29 01:15:56 np0005539552 kernel: rdac: device handler registered
Nov 29 01:15:56 np0005539552 kernel: hp_sw: device handler registered
Nov 29 01:15:56 np0005539552 kernel: emc: device handler registered
Nov 29 01:15:56 np0005539552 kernel: alua: device handler registered
Nov 29 01:15:56 np0005539552 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 29 01:15:56 np0005539552 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 29 01:15:56 np0005539552 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 29 01:15:56 np0005539552 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Nov 29 01:15:56 np0005539552 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 29 01:15:56 np0005539552 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 29 01:15:56 np0005539552 kernel: usb usb1: Product: UHCI Host Controller
Nov 29 01:15:56 np0005539552 kernel: usb usb1: Manufacturer: Linux 5.14.0-642.el9.x86_64 uhci_hcd
Nov 29 01:15:56 np0005539552 kernel: usb usb1: SerialNumber: 0000:00:01.2
Nov 29 01:15:56 np0005539552 kernel: hub 1-0:1.0: USB hub found
Nov 29 01:15:56 np0005539552 kernel: hub 1-0:1.0: 2 ports detected
Nov 29 01:15:56 np0005539552 kernel: usbcore: registered new interface driver usbserial_generic
Nov 29 01:15:56 np0005539552 kernel: usbserial: USB Serial support registered for generic
Nov 29 01:15:56 np0005539552 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 29 01:15:56 np0005539552 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 29 01:15:56 np0005539552 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 29 01:15:56 np0005539552 kernel: mousedev: PS/2 mouse device common for all mice
Nov 29 01:15:56 np0005539552 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 29 01:15:56 np0005539552 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 29 01:15:56 np0005539552 kernel: rtc_cmos 00:04: registered as rtc0
Nov 29 01:15:56 np0005539552 kernel: rtc_cmos 00:04: setting system clock to 2025-11-29T06:15:55 UTC (1764396955)
Nov 29 01:15:56 np0005539552 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 29 01:15:56 np0005539552 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 29 01:15:56 np0005539552 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 29 01:15:56 np0005539552 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 29 01:15:56 np0005539552 kernel: usbcore: registered new interface driver usbhid
Nov 29 01:15:56 np0005539552 kernel: usbhid: USB HID core driver
Nov 29 01:15:56 np0005539552 kernel: drop_monitor: Initializing network drop monitor service
Nov 29 01:15:56 np0005539552 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 29 01:15:56 np0005539552 kernel: Initializing XFRM netlink socket
Nov 29 01:15:56 np0005539552 kernel: NET: Registered PF_INET6 protocol family
Nov 29 01:15:56 np0005539552 kernel: Segment Routing with IPv6
Nov 29 01:15:56 np0005539552 kernel: NET: Registered PF_PACKET protocol family
Nov 29 01:15:56 np0005539552 kernel: mpls_gso: MPLS GSO support
Nov 29 01:15:56 np0005539552 kernel: IPI shorthand broadcast: enabled
Nov 29 01:15:56 np0005539552 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 29 01:15:56 np0005539552 kernel: AES CTR mode by8 optimization enabled
Nov 29 01:15:56 np0005539552 kernel: sched_clock: Marking stable (1293004430, 148372500)->(1599694569, -158317639)
Nov 29 01:15:56 np0005539552 kernel: registered taskstats version 1
Nov 29 01:15:56 np0005539552 kernel: Loading compiled-in X.509 certificates
Nov 29 01:15:56 np0005539552 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 29 01:15:56 np0005539552 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 29 01:15:56 np0005539552 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 29 01:15:56 np0005539552 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Nov 29 01:15:56 np0005539552 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Nov 29 01:15:56 np0005539552 kernel: Demotion targets for Node 0: null
Nov 29 01:15:56 np0005539552 kernel: page_owner is disabled
Nov 29 01:15:56 np0005539552 kernel: Key type .fscrypt registered
Nov 29 01:15:56 np0005539552 kernel: Key type fscrypt-provisioning registered
Nov 29 01:15:56 np0005539552 kernel: Key type big_key registered
Nov 29 01:15:56 np0005539552 kernel: Key type encrypted registered
Nov 29 01:15:56 np0005539552 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 29 01:15:56 np0005539552 kernel: Loading compiled-in module X.509 certificates
Nov 29 01:15:56 np0005539552 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 29 01:15:56 np0005539552 kernel: ima: Allocated hash algorithm: sha256
Nov 29 01:15:56 np0005539552 kernel: ima: No architecture policies found
Nov 29 01:15:56 np0005539552 kernel: evm: Initialising EVM extended attributes:
Nov 29 01:15:56 np0005539552 kernel: evm: security.selinux
Nov 29 01:15:56 np0005539552 kernel: evm: security.SMACK64 (disabled)
Nov 29 01:15:56 np0005539552 kernel: evm: security.SMACK64EXEC (disabled)
Nov 29 01:15:56 np0005539552 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 29 01:15:56 np0005539552 kernel: evm: security.SMACK64MMAP (disabled)
Nov 29 01:15:56 np0005539552 kernel: evm: security.apparmor (disabled)
Nov 29 01:15:56 np0005539552 kernel: evm: security.ima
Nov 29 01:15:56 np0005539552 kernel: evm: security.capability
Nov 29 01:15:56 np0005539552 kernel: evm: HMAC attrs: 0x1
Nov 29 01:15:56 np0005539552 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 29 01:15:56 np0005539552 kernel: Running certificate verification RSA selftest
Nov 29 01:15:56 np0005539552 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 29 01:15:56 np0005539552 kernel: Running certificate verification ECDSA selftest
Nov 29 01:15:56 np0005539552 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Nov 29 01:15:56 np0005539552 kernel: clk: Disabling unused clocks
Nov 29 01:15:56 np0005539552 kernel: Freeing unused decrypted memory: 2028K
Nov 29 01:15:56 np0005539552 kernel: Freeing unused kernel image (initmem) memory: 4192K
Nov 29 01:15:56 np0005539552 kernel: Write protecting the kernel read-only data: 30720k
Nov 29 01:15:56 np0005539552 kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Nov 29 01:15:56 np0005539552 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 29 01:15:56 np0005539552 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 29 01:15:56 np0005539552 kernel: usb 1-1: Product: QEMU USB Tablet
Nov 29 01:15:56 np0005539552 kernel: usb 1-1: Manufacturer: QEMU
Nov 29 01:15:56 np0005539552 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Nov 29 01:15:56 np0005539552 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 29 01:15:56 np0005539552 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Nov 29 01:15:56 np0005539552 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 29 01:15:56 np0005539552 kernel: Run /init as init process
Nov 29 01:15:56 np0005539552 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 29 01:15:56 np0005539552 systemd: Detected virtualization kvm.
Nov 29 01:15:56 np0005539552 systemd: Detected architecture x86-64.
Nov 29 01:15:56 np0005539552 systemd: Running in initrd.
Nov 29 01:15:56 np0005539552 systemd: No hostname configured, using default hostname.
Nov 29 01:15:56 np0005539552 systemd: Hostname set to <localhost>.
Nov 29 01:15:56 np0005539552 systemd: Initializing machine ID from VM UUID.
Nov 29 01:15:56 np0005539552 systemd: Queued start job for default target Initrd Default Target.
Nov 29 01:15:56 np0005539552 systemd: Started Dispatch Password Requests to Console Directory Watch.
Nov 29 01:15:56 np0005539552 systemd: Reached target Local Encrypted Volumes.
Nov 29 01:15:56 np0005539552 systemd: Reached target Initrd /usr File System.
Nov 29 01:15:56 np0005539552 systemd: Reached target Local File Systems.
Nov 29 01:15:56 np0005539552 systemd: Reached target Path Units.
Nov 29 01:15:56 np0005539552 systemd: Reached target Slice Units.
Nov 29 01:15:56 np0005539552 systemd: Reached target Swaps.
Nov 29 01:15:56 np0005539552 systemd: Reached target Timer Units.
Nov 29 01:15:56 np0005539552 systemd: Listening on D-Bus System Message Bus Socket.
Nov 29 01:15:56 np0005539552 systemd: Listening on Journal Socket (/dev/log).
Nov 29 01:15:56 np0005539552 systemd: Listening on Journal Socket.
Nov 29 01:15:56 np0005539552 systemd: Listening on udev Control Socket.
Nov 29 01:15:56 np0005539552 systemd: Listening on udev Kernel Socket.
Nov 29 01:15:56 np0005539552 systemd: Reached target Socket Units.
Nov 29 01:15:56 np0005539552 systemd: Starting Create List of Static Device Nodes...
Nov 29 01:15:56 np0005539552 systemd: Starting Journal Service...
Nov 29 01:15:56 np0005539552 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 29 01:15:56 np0005539552 systemd: Starting Apply Kernel Variables...
Nov 29 01:15:56 np0005539552 systemd: Starting Create System Users...
Nov 29 01:15:56 np0005539552 systemd: Starting Setup Virtual Console...
Nov 29 01:15:56 np0005539552 systemd: Finished Create List of Static Device Nodes.
Nov 29 01:15:56 np0005539552 systemd: Finished Apply Kernel Variables.
Nov 29 01:15:56 np0005539552 systemd: Finished Create System Users.
Nov 29 01:15:56 np0005539552 systemd-journald[306]: Journal started
Nov 29 01:15:56 np0005539552 systemd-journald[306]: Runtime Journal (/run/log/journal/6fbde64ad9784f1aa29da77e1f5a1987) is 8.0M, max 153.6M, 145.6M free.
Nov 29 01:15:56 np0005539552 systemd-sysusers[310]: Creating group 'users' with GID 100.
Nov 29 01:15:56 np0005539552 systemd-sysusers[310]: Creating group 'dbus' with GID 81.
Nov 29 01:15:56 np0005539552 systemd-sysusers[310]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 29 01:15:56 np0005539552 systemd: Started Journal Service.
Nov 29 01:15:56 np0005539552 systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 29 01:15:56 np0005539552 systemd[1]: Starting Create Volatile Files and Directories...
Nov 29 01:15:56 np0005539552 systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 29 01:15:56 np0005539552 systemd[1]: Finished Setup Virtual Console.
Nov 29 01:15:56 np0005539552 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 29 01:15:56 np0005539552 systemd[1]: Starting dracut cmdline hook...
Nov 29 01:15:56 np0005539552 systemd[1]: Finished Create Volatile Files and Directories.
Nov 29 01:15:56 np0005539552 dracut-cmdline[325]: dracut-9 dracut-057-102.git20250818.el9
Nov 29 01:15:56 np0005539552 dracut-cmdline[325]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 29 01:15:56 np0005539552 systemd[1]: Finished dracut cmdline hook.
Nov 29 01:15:56 np0005539552 systemd[1]: Starting dracut pre-udev hook...
Nov 29 01:15:56 np0005539552 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 29 01:15:56 np0005539552 kernel: device-mapper: uevent: version 1.0.3
Nov 29 01:15:56 np0005539552 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Nov 29 01:15:56 np0005539552 kernel: RPC: Registered named UNIX socket transport module.
Nov 29 01:15:56 np0005539552 kernel: RPC: Registered udp transport module.
Nov 29 01:15:56 np0005539552 kernel: RPC: Registered tcp transport module.
Nov 29 01:15:56 np0005539552 kernel: RPC: Registered tcp-with-tls transport module.
Nov 29 01:15:56 np0005539552 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 29 01:15:56 np0005539552 rpc.statd[444]: Version 2.5.4 starting
Nov 29 01:15:56 np0005539552 rpc.statd[444]: Initializing NSM state
Nov 29 01:15:56 np0005539552 rpc.idmapd[449]: Setting log level to 0
Nov 29 01:15:56 np0005539552 systemd[1]: Finished dracut pre-udev hook.
Nov 29 01:15:56 np0005539552 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 29 01:15:56 np0005539552 systemd-udevd[462]: Using default interface naming scheme 'rhel-9.0'.
Nov 29 01:15:56 np0005539552 systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 29 01:15:56 np0005539552 systemd[1]: Starting dracut pre-trigger hook...
Nov 29 01:15:57 np0005539552 systemd[1]: Finished dracut pre-trigger hook.
Nov 29 01:15:57 np0005539552 systemd[1]: Starting Coldplug All udev Devices...
Nov 29 01:15:57 np0005539552 systemd[1]: Created slice Slice /system/modprobe.
Nov 29 01:15:57 np0005539552 systemd[1]: Starting Load Kernel Module configfs...
Nov 29 01:15:57 np0005539552 systemd[1]: Finished Coldplug All udev Devices.
Nov 29 01:15:57 np0005539552 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 29 01:15:57 np0005539552 systemd[1]: Finished Load Kernel Module configfs.
Nov 29 01:15:57 np0005539552 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 29 01:15:57 np0005539552 systemd[1]: Reached target Network.
Nov 29 01:15:57 np0005539552 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 29 01:15:57 np0005539552 systemd[1]: Starting dracut initqueue hook...
Nov 29 01:15:57 np0005539552 systemd[1]: Mounting Kernel Configuration File System...
Nov 29 01:15:57 np0005539552 systemd[1]: Mounted Kernel Configuration File System.
Nov 29 01:15:57 np0005539552 systemd[1]: Reached target System Initialization.
Nov 29 01:15:57 np0005539552 systemd[1]: Reached target Basic System.
Nov 29 01:15:57 np0005539552 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Nov 29 01:15:57 np0005539552 kernel: scsi host0: ata_piix
Nov 29 01:15:57 np0005539552 kernel: scsi host1: ata_piix
Nov 29 01:15:57 np0005539552 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Nov 29 01:15:57 np0005539552 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Nov 29 01:15:57 np0005539552 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Nov 29 01:15:57 np0005539552 kernel: vda: vda1
Nov 29 01:15:57 np0005539552 kernel: ata1: found unknown device (class 0)
Nov 29 01:15:57 np0005539552 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 29 01:15:57 np0005539552 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 29 01:15:57 np0005539552 systemd-udevd[488]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 01:15:57 np0005539552 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 29 01:15:57 np0005539552 systemd[1]: Found device /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Nov 29 01:15:57 np0005539552 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 29 01:15:57 np0005539552 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 29 01:15:57 np0005539552 systemd[1]: Reached target Initrd Root Device.
Nov 29 01:15:57 np0005539552 systemd[1]: Finished dracut initqueue hook.
Nov 29 01:15:57 np0005539552 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 29 01:15:57 np0005539552 systemd[1]: Reached target Remote Encrypted Volumes.
Nov 29 01:15:57 np0005539552 systemd[1]: Reached target Remote File Systems.
Nov 29 01:15:57 np0005539552 systemd[1]: Starting dracut pre-mount hook...
Nov 29 01:15:57 np0005539552 systemd[1]: Finished dracut pre-mount hook.
Nov 29 01:15:57 np0005539552 systemd[1]: Starting File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253...
Nov 29 01:15:57 np0005539552 systemd-fsck[555]: /usr/sbin/fsck.xfs: XFS file system.
Nov 29 01:15:57 np0005539552 systemd[1]: Finished File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Nov 29 01:15:57 np0005539552 systemd[1]: Mounting /sysroot...
Nov 29 01:15:58 np0005539552 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 29 01:15:58 np0005539552 kernel: XFS (vda1): Mounting V5 Filesystem b277050f-8ace-464d-abb6-4c46d4c45253
Nov 29 01:15:58 np0005539552 kernel: XFS (vda1): Ending clean mount
Nov 29 01:15:58 np0005539552 systemd[1]: Mounted /sysroot.
Nov 29 01:15:58 np0005539552 systemd[1]: Reached target Initrd Root File System.
Nov 29 01:15:58 np0005539552 systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 29 01:15:58 np0005539552 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 29 01:15:58 np0005539552 systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 29 01:15:58 np0005539552 systemd[1]: Reached target Initrd File Systems.
Nov 29 01:15:58 np0005539552 systemd[1]: Reached target Initrd Default Target.
Nov 29 01:15:58 np0005539552 systemd[1]: Starting dracut mount hook...
Nov 29 01:15:58 np0005539552 systemd[1]: Finished dracut mount hook.
Nov 29 01:15:58 np0005539552 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 29 01:15:58 np0005539552 rpc.idmapd[449]: exiting on signal 15
Nov 29 01:15:58 np0005539552 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 29 01:15:58 np0005539552 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 29 01:15:58 np0005539552 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 29 01:15:58 np0005539552 systemd[1]: Stopped target Network.
Nov 29 01:15:58 np0005539552 systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 29 01:15:58 np0005539552 systemd[1]: Stopped target Timer Units.
Nov 29 01:15:58 np0005539552 systemd[1]: dbus.socket: Deactivated successfully.
Nov 29 01:15:58 np0005539552 systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 29 01:15:58 np0005539552 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 29 01:15:58 np0005539552 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 29 01:15:58 np0005539552 systemd[1]: Stopped target Initrd Default Target.
Nov 29 01:15:58 np0005539552 systemd[1]: Stopped target Basic System.
Nov 29 01:15:58 np0005539552 systemd[1]: Stopped target Initrd Root Device.
Nov 29 01:15:58 np0005539552 systemd[1]: Stopped target Initrd /usr File System.
Nov 29 01:15:58 np0005539552 systemd[1]: Stopped target Path Units.
Nov 29 01:15:58 np0005539552 systemd[1]: Stopped target Remote File Systems.
Nov 29 01:15:58 np0005539552 systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 29 01:15:58 np0005539552 systemd[1]: Stopped target Slice Units.
Nov 29 01:15:58 np0005539552 systemd[1]: Stopped target Socket Units.
Nov 29 01:15:58 np0005539552 systemd[1]: Stopped target System Initialization.
Nov 29 01:15:58 np0005539552 systemd[1]: Stopped target Local File Systems.
Nov 29 01:15:58 np0005539552 systemd[1]: Stopped target Swaps.
Nov 29 01:15:58 np0005539552 systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 29 01:15:58 np0005539552 systemd[1]: Stopped dracut mount hook.
Nov 29 01:15:58 np0005539552 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 29 01:15:58 np0005539552 systemd[1]: Stopped dracut pre-mount hook.
Nov 29 01:15:58 np0005539552 systemd[1]: Stopped target Local Encrypted Volumes.
Nov 29 01:15:58 np0005539552 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 29 01:15:58 np0005539552 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 29 01:15:58 np0005539552 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 29 01:15:58 np0005539552 systemd[1]: Stopped dracut initqueue hook.
Nov 29 01:15:58 np0005539552 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 29 01:15:58 np0005539552 systemd[1]: Stopped Apply Kernel Variables.
Nov 29 01:15:58 np0005539552 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 29 01:15:58 np0005539552 systemd[1]: Stopped Create Volatile Files and Directories.
Nov 29 01:15:58 np0005539552 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 29 01:15:58 np0005539552 systemd[1]: Stopped Coldplug All udev Devices.
Nov 29 01:15:58 np0005539552 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 29 01:15:58 np0005539552 systemd[1]: Stopped dracut pre-trigger hook.
Nov 29 01:15:58 np0005539552 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 29 01:15:58 np0005539552 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 29 01:15:58 np0005539552 systemd[1]: Stopped Setup Virtual Console.
Nov 29 01:15:58 np0005539552 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 29 01:15:58 np0005539552 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 29 01:15:58 np0005539552 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 29 01:15:58 np0005539552 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 29 01:15:58 np0005539552 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 29 01:15:58 np0005539552 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 29 01:15:58 np0005539552 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 29 01:15:58 np0005539552 systemd[1]: Closed udev Control Socket.
Nov 29 01:15:58 np0005539552 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 29 01:15:58 np0005539552 systemd[1]: Closed udev Kernel Socket.
Nov 29 01:15:58 np0005539552 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 29 01:15:58 np0005539552 systemd[1]: Stopped dracut pre-udev hook.
Nov 29 01:15:58 np0005539552 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 29 01:15:58 np0005539552 systemd[1]: Stopped dracut cmdline hook.
Nov 29 01:15:58 np0005539552 systemd[1]: Starting Cleanup udev Database...
Nov 29 01:15:58 np0005539552 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 29 01:15:58 np0005539552 systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 29 01:15:58 np0005539552 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 29 01:15:58 np0005539552 systemd[1]: Stopped Create List of Static Device Nodes.
Nov 29 01:15:58 np0005539552 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 29 01:15:58 np0005539552 systemd[1]: Stopped Create System Users.
Nov 29 01:15:58 np0005539552 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 29 01:15:58 np0005539552 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Nov 29 01:15:58 np0005539552 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 29 01:15:58 np0005539552 systemd[1]: Finished Cleanup udev Database.
Nov 29 01:15:58 np0005539552 systemd[1]: Reached target Switch Root.
Nov 29 01:15:58 np0005539552 systemd[1]: Starting Switch Root...
Nov 29 01:15:58 np0005539552 systemd[1]: Switching root.
Nov 29 01:15:58 np0005539552 systemd-journald[306]: Received SIGTERM from PID 1 (systemd).
Nov 29 01:15:58 np0005539552 systemd-journald[306]: Journal stopped
Nov 29 01:15:59 np0005539552 kernel: audit: type=1404 audit(1764396958.740:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Nov 29 01:15:59 np0005539552 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 01:15:59 np0005539552 kernel: SELinux:  policy capability open_perms=1
Nov 29 01:15:59 np0005539552 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 01:15:59 np0005539552 kernel: SELinux:  policy capability always_check_network=0
Nov 29 01:15:59 np0005539552 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 01:15:59 np0005539552 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 01:15:59 np0005539552 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 01:15:59 np0005539552 kernel: audit: type=1403 audit(1764396958.862:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 29 01:15:59 np0005539552 systemd: Successfully loaded SELinux policy in 125.519ms.
Nov 29 01:15:59 np0005539552 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.166ms.
Nov 29 01:15:59 np0005539552 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 29 01:15:59 np0005539552 systemd: Detected virtualization kvm.
Nov 29 01:15:59 np0005539552 systemd: Detected architecture x86-64.
Nov 29 01:15:59 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:15:59 np0005539552 systemd: initrd-switch-root.service: Deactivated successfully.
Nov 29 01:15:59 np0005539552 systemd: Stopped Switch Root.
Nov 29 01:15:59 np0005539552 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 29 01:15:59 np0005539552 systemd: Created slice Slice /system/getty.
Nov 29 01:15:59 np0005539552 systemd: Created slice Slice /system/serial-getty.
Nov 29 01:15:59 np0005539552 systemd: Created slice Slice /system/sshd-keygen.
Nov 29 01:15:59 np0005539552 systemd: Created slice User and Session Slice.
Nov 29 01:15:59 np0005539552 systemd: Started Dispatch Password Requests to Console Directory Watch.
Nov 29 01:15:59 np0005539552 systemd: Started Forward Password Requests to Wall Directory Watch.
Nov 29 01:15:59 np0005539552 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 29 01:15:59 np0005539552 systemd: Reached target Local Encrypted Volumes.
Nov 29 01:15:59 np0005539552 systemd: Stopped target Switch Root.
Nov 29 01:15:59 np0005539552 systemd: Stopped target Initrd File Systems.
Nov 29 01:15:59 np0005539552 systemd: Stopped target Initrd Root File System.
Nov 29 01:15:59 np0005539552 systemd: Reached target Local Integrity Protected Volumes.
Nov 29 01:15:59 np0005539552 systemd: Reached target Path Units.
Nov 29 01:15:59 np0005539552 systemd: Reached target rpc_pipefs.target.
Nov 29 01:15:59 np0005539552 systemd: Reached target Slice Units.
Nov 29 01:15:59 np0005539552 systemd: Reached target Swaps.
Nov 29 01:15:59 np0005539552 systemd: Reached target Local Verity Protected Volumes.
Nov 29 01:15:59 np0005539552 systemd: Listening on RPCbind Server Activation Socket.
Nov 29 01:15:59 np0005539552 systemd: Reached target RPC Port Mapper.
Nov 29 01:15:59 np0005539552 systemd: Listening on Process Core Dump Socket.
Nov 29 01:15:59 np0005539552 systemd: Listening on initctl Compatibility Named Pipe.
Nov 29 01:15:59 np0005539552 systemd: Listening on udev Control Socket.
Nov 29 01:15:59 np0005539552 systemd: Listening on udev Kernel Socket.
Nov 29 01:15:59 np0005539552 systemd: Mounting Huge Pages File System...
Nov 29 01:15:59 np0005539552 systemd: Mounting POSIX Message Queue File System...
Nov 29 01:15:59 np0005539552 systemd: Mounting Kernel Debug File System...
Nov 29 01:15:59 np0005539552 systemd: Mounting Kernel Trace File System...
Nov 29 01:15:59 np0005539552 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 29 01:15:59 np0005539552 systemd: Starting Create List of Static Device Nodes...
Nov 29 01:15:59 np0005539552 systemd: Starting Load Kernel Module configfs...
Nov 29 01:15:59 np0005539552 systemd: Starting Load Kernel Module drm...
Nov 29 01:15:59 np0005539552 systemd: Starting Load Kernel Module efi_pstore...
Nov 29 01:15:59 np0005539552 systemd: Starting Load Kernel Module fuse...
Nov 29 01:15:59 np0005539552 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Nov 29 01:15:59 np0005539552 systemd: systemd-fsck-root.service: Deactivated successfully.
Nov 29 01:15:59 np0005539552 systemd: Stopped File System Check on Root Device.
Nov 29 01:15:59 np0005539552 systemd: Stopped Journal Service.
Nov 29 01:15:59 np0005539552 systemd: Starting Journal Service...
Nov 29 01:15:59 np0005539552 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 29 01:15:59 np0005539552 systemd: Starting Generate network units from Kernel command line...
Nov 29 01:15:59 np0005539552 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 29 01:15:59 np0005539552 systemd: Starting Remount Root and Kernel File Systems...
Nov 29 01:15:59 np0005539552 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 29 01:15:59 np0005539552 systemd: Starting Apply Kernel Variables...
Nov 29 01:15:59 np0005539552 systemd: Starting Coldplug All udev Devices...
Nov 29 01:15:59 np0005539552 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Nov 29 01:15:59 np0005539552 systemd: Mounted Huge Pages File System.
Nov 29 01:15:59 np0005539552 systemd: Mounted POSIX Message Queue File System.
Nov 29 01:15:59 np0005539552 systemd: Mounted Kernel Debug File System.
Nov 29 01:15:59 np0005539552 systemd-journald[678]: Journal started
Nov 29 01:15:59 np0005539552 systemd-journald[678]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Nov 29 01:15:59 np0005539552 systemd[1]: Queued start job for default target Multi-User System.
Nov 29 01:15:59 np0005539552 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 29 01:15:59 np0005539552 systemd: Started Journal Service.
Nov 29 01:15:59 np0005539552 systemd[1]: Mounted Kernel Trace File System.
Nov 29 01:15:59 np0005539552 kernel: ACPI: bus type drm_connector registered
Nov 29 01:15:59 np0005539552 systemd[1]: Finished Create List of Static Device Nodes.
Nov 29 01:15:59 np0005539552 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 29 01:15:59 np0005539552 systemd[1]: Finished Load Kernel Module configfs.
Nov 29 01:15:59 np0005539552 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 29 01:15:59 np0005539552 systemd[1]: Finished Load Kernel Module drm.
Nov 29 01:15:59 np0005539552 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 29 01:15:59 np0005539552 systemd[1]: Finished Load Kernel Module efi_pstore.
Nov 29 01:15:59 np0005539552 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Nov 29 01:15:59 np0005539552 kernel: fuse: init (API version 7.37)
Nov 29 01:15:59 np0005539552 systemd[1]: Finished Generate network units from Kernel command line.
Nov 29 01:15:59 np0005539552 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 29 01:15:59 np0005539552 systemd[1]: Finished Load Kernel Module fuse.
Nov 29 01:15:59 np0005539552 systemd[1]: Finished Remount Root and Kernel File Systems.
Nov 29 01:15:59 np0005539552 systemd[1]: Finished Apply Kernel Variables.
Nov 29 01:15:59 np0005539552 systemd[1]: Mounting FUSE Control File System...
Nov 29 01:15:59 np0005539552 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 29 01:15:59 np0005539552 systemd[1]: Starting Rebuild Hardware Database...
Nov 29 01:15:59 np0005539552 systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 29 01:15:59 np0005539552 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 29 01:15:59 np0005539552 systemd[1]: Starting Load/Save OS Random Seed...
Nov 29 01:15:59 np0005539552 systemd[1]: Starting Create System Users...
Nov 29 01:15:59 np0005539552 systemd-journald[678]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Nov 29 01:15:59 np0005539552 systemd-journald[678]: Received client request to flush runtime journal.
Nov 29 01:15:59 np0005539552 systemd[1]: Mounted FUSE Control File System.
Nov 29 01:15:59 np0005539552 systemd[1]: Finished Flush Journal to Persistent Storage.
Nov 29 01:15:59 np0005539552 systemd[1]: Finished Load/Save OS Random Seed.
Nov 29 01:15:59 np0005539552 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 29 01:15:59 np0005539552 systemd[1]: Finished Coldplug All udev Devices.
Nov 29 01:15:59 np0005539552 systemd[1]: Finished Create System Users.
Nov 29 01:15:59 np0005539552 systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 29 01:15:59 np0005539552 systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 29 01:15:59 np0005539552 systemd[1]: Reached target Preparation for Local File Systems.
Nov 29 01:15:59 np0005539552 systemd[1]: Reached target Local File Systems.
Nov 29 01:15:59 np0005539552 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Nov 29 01:15:59 np0005539552 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Nov 29 01:15:59 np0005539552 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 29 01:15:59 np0005539552 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Nov 29 01:15:59 np0005539552 systemd[1]: Starting Automatic Boot Loader Update...
Nov 29 01:15:59 np0005539552 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Nov 29 01:15:59 np0005539552 systemd[1]: Starting Create Volatile Files and Directories...
Nov 29 01:15:59 np0005539552 bootctl[696]: Couldn't find EFI system partition, skipping.
Nov 29 01:15:59 np0005539552 systemd[1]: Finished Automatic Boot Loader Update.
Nov 29 01:15:59 np0005539552 systemd[1]: Finished Create Volatile Files and Directories.
Nov 29 01:15:59 np0005539552 systemd[1]: Starting Security Auditing Service...
Nov 29 01:15:59 np0005539552 systemd[1]: Starting RPC Bind...
Nov 29 01:15:59 np0005539552 systemd[1]: Starting Rebuild Journal Catalog...
Nov 29 01:15:59 np0005539552 auditd[702]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Nov 29 01:15:59 np0005539552 auditd[702]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Nov 29 01:15:59 np0005539552 systemd[1]: Started RPC Bind.
Nov 29 01:15:59 np0005539552 systemd[1]: Finished Rebuild Journal Catalog.
Nov 29 01:15:59 np0005539552 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Nov 29 01:15:59 np0005539552 augenrules[707]: /sbin/augenrules: No change
Nov 29 01:15:59 np0005539552 augenrules[722]: No rules
Nov 29 01:15:59 np0005539552 augenrules[722]: enabled 1
Nov 29 01:15:59 np0005539552 augenrules[722]: failure 1
Nov 29 01:15:59 np0005539552 augenrules[722]: pid 702
Nov 29 01:15:59 np0005539552 augenrules[722]: rate_limit 0
Nov 29 01:15:59 np0005539552 augenrules[722]: backlog_limit 8192
Nov 29 01:15:59 np0005539552 augenrules[722]: lost 0
Nov 29 01:15:59 np0005539552 augenrules[722]: backlog 3
Nov 29 01:15:59 np0005539552 augenrules[722]: backlog_wait_time 60000
Nov 29 01:15:59 np0005539552 augenrules[722]: backlog_wait_time_actual 0
Nov 29 01:15:59 np0005539552 augenrules[722]: enabled 1
Nov 29 01:15:59 np0005539552 augenrules[722]: failure 1
Nov 29 01:15:59 np0005539552 augenrules[722]: pid 702
Nov 29 01:15:59 np0005539552 augenrules[722]: rate_limit 0
Nov 29 01:15:59 np0005539552 augenrules[722]: backlog_limit 8192
Nov 29 01:15:59 np0005539552 augenrules[722]: lost 0
Nov 29 01:15:59 np0005539552 augenrules[722]: backlog 3
Nov 29 01:15:59 np0005539552 augenrules[722]: backlog_wait_time 60000
Nov 29 01:15:59 np0005539552 augenrules[722]: backlog_wait_time_actual 0
Nov 29 01:15:59 np0005539552 augenrules[722]: enabled 1
Nov 29 01:15:59 np0005539552 augenrules[722]: failure 1
Nov 29 01:15:59 np0005539552 augenrules[722]: pid 702
Nov 29 01:15:59 np0005539552 augenrules[722]: rate_limit 0
Nov 29 01:15:59 np0005539552 augenrules[722]: backlog_limit 8192
Nov 29 01:15:59 np0005539552 augenrules[722]: lost 0
Nov 29 01:15:59 np0005539552 augenrules[722]: backlog 2
Nov 29 01:15:59 np0005539552 augenrules[722]: backlog_wait_time 60000
Nov 29 01:15:59 np0005539552 augenrules[722]: backlog_wait_time_actual 0
Nov 29 01:15:59 np0005539552 systemd[1]: Started Security Auditing Service.
Nov 29 01:15:59 np0005539552 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Nov 29 01:15:59 np0005539552 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Nov 29 01:16:00 np0005539552 systemd[1]: Finished Rebuild Hardware Database.
Nov 29 01:16:00 np0005539552 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 29 01:16:00 np0005539552 systemd[1]: Starting Update is Completed...
Nov 29 01:16:00 np0005539552 systemd[1]: Finished Update is Completed.
Nov 29 01:16:00 np0005539552 systemd-udevd[730]: Using default interface naming scheme 'rhel-9.0'.
Nov 29 01:16:00 np0005539552 systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 29 01:16:00 np0005539552 systemd[1]: Reached target System Initialization.
Nov 29 01:16:00 np0005539552 systemd[1]: Started dnf makecache --timer.
Nov 29 01:16:00 np0005539552 systemd[1]: Started Daily rotation of log files.
Nov 29 01:16:00 np0005539552 systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 29 01:16:00 np0005539552 systemd[1]: Reached target Timer Units.
Nov 29 01:16:00 np0005539552 systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 29 01:16:00 np0005539552 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Nov 29 01:16:00 np0005539552 systemd[1]: Reached target Socket Units.
Nov 29 01:16:00 np0005539552 systemd-udevd[733]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 01:16:00 np0005539552 systemd[1]: Starting D-Bus System Message Bus...
Nov 29 01:16:00 np0005539552 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 29 01:16:00 np0005539552 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Nov 29 01:16:00 np0005539552 systemd[1]: Starting Load Kernel Module configfs...
Nov 29 01:16:00 np0005539552 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 29 01:16:00 np0005539552 systemd[1]: Finished Load Kernel Module configfs.
Nov 29 01:16:00 np0005539552 systemd[1]: Started D-Bus System Message Bus.
Nov 29 01:16:00 np0005539552 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Nov 29 01:16:00 np0005539552 systemd[1]: Reached target Basic System.
Nov 29 01:16:00 np0005539552 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 29 01:16:00 np0005539552 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 29 01:16:00 np0005539552 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 29 01:16:00 np0005539552 dbus-broker-lau[769]: Ready
Nov 29 01:16:00 np0005539552 systemd[1]: Starting NTP client/server...
Nov 29 01:16:00 np0005539552 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Nov 29 01:16:00 np0005539552 systemd[1]: Starting Restore /run/initramfs on shutdown...
Nov 29 01:16:00 np0005539552 systemd[1]: Starting IPv4 firewall with iptables...
Nov 29 01:16:00 np0005539552 systemd[1]: Started irqbalance daemon.
Nov 29 01:16:00 np0005539552 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Nov 29 01:16:00 np0005539552 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 01:16:00 np0005539552 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 01:16:00 np0005539552 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 01:16:00 np0005539552 systemd[1]: Reached target sshd-keygen.target.
Nov 29 01:16:00 np0005539552 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Nov 29 01:16:00 np0005539552 systemd[1]: Reached target User and Group Name Lookups.
Nov 29 01:16:00 np0005539552 systemd[1]: Starting User Login Management...
Nov 29 01:16:00 np0005539552 chronyd[793]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 29 01:16:00 np0005539552 systemd[1]: Finished Restore /run/initramfs on shutdown.
Nov 29 01:16:00 np0005539552 chronyd[793]: Loaded 0 symmetric keys
Nov 29 01:16:00 np0005539552 chronyd[793]: Using right/UTC timezone to obtain leap second data
Nov 29 01:16:00 np0005539552 chronyd[793]: Loaded seccomp filter (level 2)
Nov 29 01:16:00 np0005539552 systemd[1]: Started NTP client/server.
Nov 29 01:16:00 np0005539552 systemd-logind[788]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 29 01:16:00 np0005539552 systemd-logind[788]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 29 01:16:00 np0005539552 systemd-logind[788]: New seat seat0.
Nov 29 01:16:00 np0005539552 systemd[1]: Started User Login Management.
Nov 29 01:16:00 np0005539552 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Nov 29 01:16:00 np0005539552 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Nov 29 01:16:00 np0005539552 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 29 01:16:00 np0005539552 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 29 01:16:00 np0005539552 kernel: Console: switching to colour dummy device 80x25
Nov 29 01:16:00 np0005539552 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 29 01:16:00 np0005539552 kernel: [drm] features: -context_init
Nov 29 01:16:00 np0005539552 kernel: [drm] number of scanouts: 1
Nov 29 01:16:00 np0005539552 kernel: [drm] number of cap sets: 0
Nov 29 01:16:00 np0005539552 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Nov 29 01:16:00 np0005539552 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 29 01:16:00 np0005539552 kernel: Console: switching to colour frame buffer device 128x48
Nov 29 01:16:00 np0005539552 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 29 01:16:00 np0005539552 kernel: kvm_amd: TSC scaling supported
Nov 29 01:16:00 np0005539552 kernel: kvm_amd: Nested Virtualization enabled
Nov 29 01:16:00 np0005539552 kernel: kvm_amd: Nested Paging enabled
Nov 29 01:16:00 np0005539552 kernel: kvm_amd: LBR virtualization supported
Nov 29 01:16:00 np0005539552 iptables.init[782]: iptables: Applying firewall rules: [  OK  ]
Nov 29 01:16:00 np0005539552 systemd[1]: Finished IPv4 firewall with iptables.
Nov 29 01:16:00 np0005539552 cloud-init[839]: Cloud-init v. 24.4-7.el9 running 'init-local' at Sat, 29 Nov 2025 06:16:00 +0000. Up 6.62 seconds.
Nov 29 01:16:01 np0005539552 systemd[1]: run-cloud\x2dinit-tmp-tmp9zueatj5.mount: Deactivated successfully.
Nov 29 01:16:01 np0005539552 systemd[1]: Starting Hostname Service...
Nov 29 01:16:01 np0005539552 systemd[1]: Started Hostname Service.
Nov 29 01:16:01 np0005539552 systemd-hostnamed[853]: Hostname set to <np0005539552.novalocal> (static)
Nov 29 01:16:01 np0005539552 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Nov 29 01:16:01 np0005539552 systemd[1]: Reached target Preparation for Network.
Nov 29 01:16:01 np0005539552 systemd[1]: Starting Network Manager...
Nov 29 01:16:01 np0005539552 NetworkManager[857]: <info>  [1764396961.3999] NetworkManager (version 1.54.1-1.el9) is starting... (boot:0f1c0a59-83fa-405b-b772-82c2e4852e7b)
Nov 29 01:16:01 np0005539552 NetworkManager[857]: <info>  [1764396961.4004] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 29 01:16:01 np0005539552 NetworkManager[857]: <info>  [1764396961.4117] manager[0x56288775d080]: monitoring kernel firmware directory '/lib/firmware'.
Nov 29 01:16:01 np0005539552 NetworkManager[857]: <info>  [1764396961.4168] hostname: hostname: using hostnamed
Nov 29 01:16:01 np0005539552 NetworkManager[857]: <info>  [1764396961.4169] hostname: static hostname changed from (none) to "np0005539552.novalocal"
Nov 29 01:16:01 np0005539552 NetworkManager[857]: <info>  [1764396961.4177] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 29 01:16:01 np0005539552 NetworkManager[857]: <info>  [1764396961.4349] manager[0x56288775d080]: rfkill: Wi-Fi hardware radio set enabled
Nov 29 01:16:01 np0005539552 NetworkManager[857]: <info>  [1764396961.4352] manager[0x56288775d080]: rfkill: WWAN hardware radio set enabled
Nov 29 01:16:01 np0005539552 NetworkManager[857]: <info>  [1764396961.4416] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 29 01:16:01 np0005539552 NetworkManager[857]: <info>  [1764396961.4417] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 29 01:16:01 np0005539552 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 29 01:16:01 np0005539552 NetworkManager[857]: <info>  [1764396961.4418] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 29 01:16:01 np0005539552 NetworkManager[857]: <info>  [1764396961.4422] manager: Networking is enabled by state file
Nov 29 01:16:01 np0005539552 NetworkManager[857]: <info>  [1764396961.4427] settings: Loaded settings plugin: keyfile (internal)
Nov 29 01:16:01 np0005539552 NetworkManager[857]: <info>  [1764396961.4448] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 29 01:16:01 np0005539552 NetworkManager[857]: <info>  [1764396961.4483] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 29 01:16:01 np0005539552 NetworkManager[857]: <info>  [1764396961.4508] dhcp: init: Using DHCP client 'internal'
Nov 29 01:16:01 np0005539552 NetworkManager[857]: <info>  [1764396961.4514] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 29 01:16:01 np0005539552 NetworkManager[857]: <info>  [1764396961.4542] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 01:16:01 np0005539552 NetworkManager[857]: <info>  [1764396961.4555] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 29 01:16:01 np0005539552 NetworkManager[857]: <info>  [1764396961.4571] device (lo): Activation: starting connection 'lo' (62c00a15-19b2-47c5-a13c-15bc1da7e4d2)
Nov 29 01:16:01 np0005539552 NetworkManager[857]: <info>  [1764396961.4590] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 29 01:16:01 np0005539552 NetworkManager[857]: <info>  [1764396961.4597] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 01:16:01 np0005539552 NetworkManager[857]: <info>  [1764396961.4637] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 29 01:16:01 np0005539552 NetworkManager[857]: <info>  [1764396961.4641] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 29 01:16:01 np0005539552 NetworkManager[857]: <info>  [1764396961.4644] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 29 01:16:01 np0005539552 NetworkManager[857]: <info>  [1764396961.4645] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 29 01:16:01 np0005539552 NetworkManager[857]: <info>  [1764396961.4648] device (eth0): carrier: link connected
Nov 29 01:16:01 np0005539552 NetworkManager[857]: <info>  [1764396961.4651] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 29 01:16:01 np0005539552 NetworkManager[857]: <info>  [1764396961.4657] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 29 01:16:01 np0005539552 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 01:16:01 np0005539552 NetworkManager[857]: <info>  [1764396961.4663] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 01:16:01 np0005539552 NetworkManager[857]: <info>  [1764396961.4667] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 01:16:01 np0005539552 NetworkManager[857]: <info>  [1764396961.4668] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 01:16:01 np0005539552 NetworkManager[857]: <info>  [1764396961.4670] manager: NetworkManager state is now CONNECTING
Nov 29 01:16:01 np0005539552 NetworkManager[857]: <info>  [1764396961.4672] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 01:16:01 np0005539552 NetworkManager[857]: <info>  [1764396961.4681] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 01:16:01 np0005539552 NetworkManager[857]: <info>  [1764396961.4688] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 01:16:01 np0005539552 systemd[1]: Started Network Manager.
Nov 29 01:16:01 np0005539552 systemd[1]: Reached target Network.
Nov 29 01:16:01 np0005539552 systemd[1]: Starting Network Manager Wait Online...
Nov 29 01:16:01 np0005539552 systemd[1]: Starting GSSAPI Proxy Daemon...
Nov 29 01:16:01 np0005539552 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 01:16:01 np0005539552 NetworkManager[857]: <info>  [1764396961.4909] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 29 01:16:01 np0005539552 NetworkManager[857]: <info>  [1764396961.4911] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 29 01:16:01 np0005539552 NetworkManager[857]: <info>  [1764396961.4919] device (lo): Activation: successful, device activated.
Nov 29 01:16:01 np0005539552 systemd[1]: Started GSSAPI Proxy Daemon.
Nov 29 01:16:01 np0005539552 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 29 01:16:01 np0005539552 systemd[1]: Reached target NFS client services.
Nov 29 01:16:01 np0005539552 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 29 01:16:01 np0005539552 systemd[1]: Reached target Remote File Systems.
Nov 29 01:16:01 np0005539552 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 29 01:16:03 np0005539552 NetworkManager[857]: <info>  [1764396963.1893] dhcp4 (eth0): state changed new lease, address=38.102.83.189
Nov 29 01:16:03 np0005539552 NetworkManager[857]: <info>  [1764396963.1915] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 29 01:16:03 np0005539552 NetworkManager[857]: <info>  [1764396963.1954] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 01:16:03 np0005539552 NetworkManager[857]: <info>  [1764396963.1985] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 01:16:03 np0005539552 NetworkManager[857]: <info>  [1764396963.1988] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 01:16:03 np0005539552 NetworkManager[857]: <info>  [1764396963.1992] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 01:16:03 np0005539552 NetworkManager[857]: <info>  [1764396963.1996] device (eth0): Activation: successful, device activated.
Nov 29 01:16:03 np0005539552 NetworkManager[857]: <info>  [1764396963.2003] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 29 01:16:03 np0005539552 NetworkManager[857]: <info>  [1764396963.2006] manager: startup complete
Nov 29 01:16:03 np0005539552 systemd[1]: Finished Network Manager Wait Online.
Nov 29 01:16:03 np0005539552 systemd[1]: Starting Cloud-init: Network Stage...
Nov 29 01:16:03 np0005539552 cloud-init[920]: Cloud-init v. 24.4-7.el9 running 'init' at Sat, 29 Nov 2025 06:16:03 +0000. Up 9.23 seconds.
Nov 29 01:16:03 np0005539552 cloud-init[920]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 29 01:16:03 np0005539552 cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 29 01:16:03 np0005539552 cloud-init[920]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Nov 29 01:16:03 np0005539552 cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 29 01:16:03 np0005539552 cloud-init[920]: ci-info: |  eth0  | True |        38.102.83.189         | 255.255.255.0 | global | fa:16:3e:fb:de:6a |
Nov 29 01:16:03 np0005539552 cloud-init[920]: ci-info: |  eth0  | True | fe80::f816:3eff:fefb:de6a/64 |       .       |  link  | fa:16:3e:fb:de:6a |
Nov 29 01:16:03 np0005539552 cloud-init[920]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Nov 29 01:16:03 np0005539552 cloud-init[920]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Nov 29 01:16:03 np0005539552 cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 29 01:16:03 np0005539552 cloud-init[920]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Nov 29 01:16:03 np0005539552 cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 29 01:16:03 np0005539552 cloud-init[920]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Nov 29 01:16:03 np0005539552 cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 29 01:16:03 np0005539552 cloud-init[920]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Nov 29 01:16:03 np0005539552 cloud-init[920]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Nov 29 01:16:03 np0005539552 cloud-init[920]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Nov 29 01:16:03 np0005539552 cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 29 01:16:03 np0005539552 cloud-init[920]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Nov 29 01:16:03 np0005539552 cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 29 01:16:03 np0005539552 cloud-init[920]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Nov 29 01:16:03 np0005539552 cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 29 01:16:03 np0005539552 cloud-init[920]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Nov 29 01:16:03 np0005539552 cloud-init[920]: ci-info: |   3   |    local    |    ::   |    eth0   |   U   |
Nov 29 01:16:03 np0005539552 cloud-init[920]: ci-info: |   4   |  multicast  |    ::   |    eth0   |   U   |
Nov 29 01:16:03 np0005539552 cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 29 01:16:04 np0005539552 cloud-init[920]: Generating public/private rsa key pair.
Nov 29 01:16:04 np0005539552 cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Nov 29 01:16:04 np0005539552 cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Nov 29 01:16:04 np0005539552 cloud-init[920]: The key fingerprint is:
Nov 29 01:16:04 np0005539552 cloud-init[920]: SHA256:6l+b1IPbLelBx/kEe0TTAR9oiWs7xMV+hXuCOsO/OSk root@np0005539552.novalocal
Nov 29 01:16:04 np0005539552 cloud-init[920]: The key's randomart image is:
Nov 29 01:16:04 np0005539552 cloud-init[920]: +---[RSA 3072]----+
Nov 29 01:16:04 np0005539552 cloud-init[920]: |            o.+=+|
Nov 29 01:16:04 np0005539552 cloud-init[920]: |           . *o.=|
Nov 29 01:16:04 np0005539552 cloud-init[920]: |          . =..oo|
Nov 29 01:16:04 np0005539552 cloud-init[920]: |           =.oo*.|
Nov 29 01:16:04 np0005539552 cloud-init[920]: |        S.o.o *oo|
Nov 29 01:16:04 np0005539552 cloud-init[920]: |       .  =* . + |
Nov 29 01:16:04 np0005539552 cloud-init[920]: |      .   ++=o  .|
Nov 29 01:16:04 np0005539552 cloud-init[920]: |     .   oE=*=   |
Nov 29 01:16:04 np0005539552 cloud-init[920]: |      ... +o=+.  |
Nov 29 01:16:04 np0005539552 cloud-init[920]: +----[SHA256]-----+
Nov 29 01:16:04 np0005539552 cloud-init[920]: Generating public/private ecdsa key pair.
Nov 29 01:16:04 np0005539552 cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Nov 29 01:16:04 np0005539552 cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Nov 29 01:16:04 np0005539552 cloud-init[920]: The key fingerprint is:
Nov 29 01:16:04 np0005539552 cloud-init[920]: SHA256:lwDh5//H7Hbl7WjcyuzIymr7ym5CjzGgBMtRfOzvT6Q root@np0005539552.novalocal
Nov 29 01:16:04 np0005539552 cloud-init[920]: The key's randomart image is:
Nov 29 01:16:04 np0005539552 cloud-init[920]: +---[ECDSA 256]---+
Nov 29 01:16:04 np0005539552 cloud-init[920]: |  o.. o.         |
Nov 29 01:16:04 np0005539552 cloud-init[920]: |.. . + .         |
Nov 29 01:16:04 np0005539552 cloud-init[920]: |.o. o . o        |
Nov 29 01:16:04 np0005539552 cloud-init[920]: |... .. o . .     |
Nov 29 01:16:04 np0005539552 cloud-init[920]: | . . .. S.o      |
Nov 29 01:16:04 np0005539552 cloud-init[920]: |  .   +.oo      .|
Nov 29 01:16:04 np0005539552 cloud-init[920]: |     ..E ..  + oo|
Nov 29 01:16:04 np0005539552 cloud-init[920]: |      oo=. o +B.=|
Nov 29 01:16:04 np0005539552 cloud-init[920]: |       =B*+.+*B+.|
Nov 29 01:16:04 np0005539552 cloud-init[920]: +----[SHA256]-----+
Nov 29 01:16:04 np0005539552 cloud-init[920]: Generating public/private ed25519 key pair.
Nov 29 01:16:04 np0005539552 cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Nov 29 01:16:04 np0005539552 cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Nov 29 01:16:04 np0005539552 cloud-init[920]: The key fingerprint is:
Nov 29 01:16:04 np0005539552 cloud-init[920]: SHA256:TLt1yDg/uISRnaPdM707dMllOt+GSegeynKBkxWPC9A root@np0005539552.novalocal
Nov 29 01:16:04 np0005539552 cloud-init[920]: The key's randomart image is:
Nov 29 01:16:04 np0005539552 cloud-init[920]: +--[ED25519 256]--+
Nov 29 01:16:04 np0005539552 cloud-init[920]: |       .         |
Nov 29 01:16:04 np0005539552 cloud-init[920]: |      . E .      |
Nov 29 01:16:04 np0005539552 cloud-init[920]: |       ..  +     |
Nov 29 01:16:04 np0005539552 cloud-init[920]: |       =.=o..   o|
Nov 29 01:16:04 np0005539552 cloud-init[920]: |      o S=+..o = |
Nov 29 01:16:04 np0005539552 cloud-init[920]: |       =+Oooo B  |
Nov 29 01:16:04 np0005539552 cloud-init[920]: |      o =.*+oo =.|
Nov 29 01:16:04 np0005539552 cloud-init[920]: |       ..o.=ooo +|
Nov 29 01:16:04 np0005539552 cloud-init[920]: |        .oo.+o . |
Nov 29 01:16:04 np0005539552 cloud-init[920]: +----[SHA256]-----+
Nov 29 01:16:04 np0005539552 systemd[1]: Finished Cloud-init: Network Stage.
Nov 29 01:16:04 np0005539552 systemd[1]: Reached target Cloud-config availability.
Nov 29 01:16:04 np0005539552 systemd[1]: Reached target Network is Online.
Nov 29 01:16:04 np0005539552 systemd[1]: Starting Cloud-init: Config Stage...
Nov 29 01:16:04 np0005539552 systemd[1]: Starting Crash recovery kernel arming...
Nov 29 01:16:04 np0005539552 systemd[1]: Starting Notify NFS peers of a restart...
Nov 29 01:16:04 np0005539552 systemd[1]: Starting System Logging Service...
Nov 29 01:16:04 np0005539552 sm-notify[1003]: Version 2.5.4 starting
Nov 29 01:16:04 np0005539552 systemd[1]: Starting OpenSSH server daemon...
Nov 29 01:16:04 np0005539552 systemd[1]: Starting Permit User Sessions...
Nov 29 01:16:04 np0005539552 systemd[1]: Started Notify NFS peers of a restart.
Nov 29 01:16:04 np0005539552 systemd[1]: Finished Permit User Sessions.
Nov 29 01:16:04 np0005539552 systemd[1]: Started Command Scheduler.
Nov 29 01:16:04 np0005539552 systemd[1]: Started Getty on tty1.
Nov 29 01:16:04 np0005539552 systemd[1]: Started Serial Getty on ttyS0.
Nov 29 01:16:04 np0005539552 systemd[1]: Reached target Login Prompts.
Nov 29 01:16:04 np0005539552 systemd[1]: Started OpenSSH server daemon.
Nov 29 01:16:04 np0005539552 rsyslogd[1004]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1004" x-info="https://www.rsyslog.com"] start
Nov 29 01:16:04 np0005539552 rsyslogd[1004]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Nov 29 01:16:04 np0005539552 systemd[1]: Started System Logging Service.
Nov 29 01:16:04 np0005539552 systemd[1]: Reached target Multi-User System.
Nov 29 01:16:04 np0005539552 systemd[1]: Starting Record Runlevel Change in UTMP...
Nov 29 01:16:04 np0005539552 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 29 01:16:04 np0005539552 systemd[1]: Finished Record Runlevel Change in UTMP.
Nov 29 01:16:04 np0005539552 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 01:16:05 np0005539552 kdumpctl[1014]: kdump: No kdump initial ramdisk found.
Nov 29 01:16:05 np0005539552 kdumpctl[1014]: kdump: Rebuilding /boot/initramfs-5.14.0-642.el9.x86_64kdump.img
Nov 29 01:16:05 np0005539552 cloud-init[1127]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Sat, 29 Nov 2025 06:16:05 +0000. Up 10.84 seconds.
Nov 29 01:16:05 np0005539552 systemd[1]: Finished Cloud-init: Config Stage.
Nov 29 01:16:05 np0005539552 systemd[1]: Starting Cloud-init: Final Stage...
Nov 29 01:16:05 np0005539552 dracut[1266]: dracut-057-102.git20250818.el9
Nov 29 01:16:05 np0005539552 cloud-init[1284]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Sat, 29 Nov 2025 06:16:05 +0000. Up 11.26 seconds.
Nov 29 01:16:05 np0005539552 dracut[1268]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-642.el9.x86_64kdump.img 5.14.0-642.el9.x86_64
Nov 29 01:16:05 np0005539552 cloud-init[1299]: #############################################################
Nov 29 01:16:05 np0005539552 cloud-init[1303]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 29 01:16:05 np0005539552 cloud-init[1310]: 256 SHA256:lwDh5//H7Hbl7WjcyuzIymr7ym5CjzGgBMtRfOzvT6Q root@np0005539552.novalocal (ECDSA)
Nov 29 01:16:05 np0005539552 cloud-init[1314]: 256 SHA256:TLt1yDg/uISRnaPdM707dMllOt+GSegeynKBkxWPC9A root@np0005539552.novalocal (ED25519)
Nov 29 01:16:05 np0005539552 cloud-init[1320]: 3072 SHA256:6l+b1IPbLelBx/kEe0TTAR9oiWs7xMV+hXuCOsO/OSk root@np0005539552.novalocal (RSA)
Nov 29 01:16:05 np0005539552 cloud-init[1322]: -----END SSH HOST KEY FINGERPRINTS-----
Nov 29 01:16:05 np0005539552 cloud-init[1325]: #############################################################
Nov 29 01:16:05 np0005539552 cloud-init[1284]: Cloud-init v. 24.4-7.el9 finished at Sat, 29 Nov 2025 06:16:05 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 11.46 seconds
Nov 29 01:16:05 np0005539552 systemd[1]: Finished Cloud-init: Final Stage.
Nov 29 01:16:05 np0005539552 systemd[1]: Reached target Cloud-init target.
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: memstrack is not available
Nov 29 01:16:06 np0005539552 dracut[1268]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 29 01:16:06 np0005539552 dracut[1268]: memstrack is not available
Nov 29 01:16:06 np0005539552 dracut[1268]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 29 01:16:07 np0005539552 dracut[1268]: *** Including module: systemd ***
Nov 29 01:16:07 np0005539552 dracut[1268]: *** Including module: fips ***
Nov 29 01:16:07 np0005539552 dracut[1268]: *** Including module: systemd-initrd ***
Nov 29 01:16:07 np0005539552 dracut[1268]: *** Including module: i18n ***
Nov 29 01:16:07 np0005539552 dracut[1268]: *** Including module: drm ***
Nov 29 01:16:08 np0005539552 chronyd[793]: Selected source 149.56.19.163 (2.centos.pool.ntp.org)
Nov 29 01:16:08 np0005539552 chronyd[793]: System clock TAI offset set to 37 seconds
Nov 29 01:16:08 np0005539552 dracut[1268]: *** Including module: prefixdevname ***
Nov 29 01:16:08 np0005539552 dracut[1268]: *** Including module: kernel-modules ***
Nov 29 01:16:08 np0005539552 kernel: block vda: the capability attribute has been deprecated.
Nov 29 01:16:08 np0005539552 dracut[1268]: *** Including module: kernel-modules-extra ***
Nov 29 01:16:08 np0005539552 dracut[1268]: *** Including module: qemu ***
Nov 29 01:16:08 np0005539552 dracut[1268]: *** Including module: fstab-sys ***
Nov 29 01:16:08 np0005539552 dracut[1268]: *** Including module: rootfs-block ***
Nov 29 01:16:08 np0005539552 dracut[1268]: *** Including module: terminfo ***
Nov 29 01:16:08 np0005539552 dracut[1268]: *** Including module: udev-rules ***
Nov 29 01:16:09 np0005539552 dracut[1268]: Skipping udev rule: 91-permissions.rules
Nov 29 01:16:09 np0005539552 dracut[1268]: Skipping udev rule: 80-drivers-modprobe.rules
Nov 29 01:16:09 np0005539552 dracut[1268]: *** Including module: virtiofs ***
Nov 29 01:16:09 np0005539552 dracut[1268]: *** Including module: dracut-systemd ***
Nov 29 01:16:09 np0005539552 dracut[1268]: *** Including module: usrmount ***
Nov 29 01:16:09 np0005539552 dracut[1268]: *** Including module: base ***
Nov 29 01:16:09 np0005539552 dracut[1268]: *** Including module: fs-lib ***
Nov 29 01:16:09 np0005539552 dracut[1268]: *** Including module: kdumpbase ***
Nov 29 01:16:10 np0005539552 irqbalance[786]: Cannot change IRQ 25 affinity: Operation not permitted
Nov 29 01:16:10 np0005539552 irqbalance[786]: IRQ 25 affinity is now unmanaged
Nov 29 01:16:10 np0005539552 irqbalance[786]: Cannot change IRQ 31 affinity: Operation not permitted
Nov 29 01:16:10 np0005539552 irqbalance[786]: IRQ 31 affinity is now unmanaged
Nov 29 01:16:10 np0005539552 irqbalance[786]: Cannot change IRQ 28 affinity: Operation not permitted
Nov 29 01:16:10 np0005539552 irqbalance[786]: IRQ 28 affinity is now unmanaged
Nov 29 01:16:10 np0005539552 irqbalance[786]: Cannot change IRQ 32 affinity: Operation not permitted
Nov 29 01:16:10 np0005539552 irqbalance[786]: IRQ 32 affinity is now unmanaged
Nov 29 01:16:10 np0005539552 irqbalance[786]: Cannot change IRQ 30 affinity: Operation not permitted
Nov 29 01:16:10 np0005539552 irqbalance[786]: IRQ 30 affinity is now unmanaged
Nov 29 01:16:10 np0005539552 irqbalance[786]: Cannot change IRQ 29 affinity: Operation not permitted
Nov 29 01:16:10 np0005539552 irqbalance[786]: IRQ 29 affinity is now unmanaged
Nov 29 01:16:10 np0005539552 dracut[1268]: *** Including module: microcode_ctl-fw_dir_override ***
Nov 29 01:16:10 np0005539552 dracut[1268]:  microcode_ctl module: mangling fw_dir
Nov 29 01:16:10 np0005539552 dracut[1268]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Nov 29 01:16:10 np0005539552 dracut[1268]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Nov 29 01:16:10 np0005539552 dracut[1268]:    microcode_ctl: configuration "intel" is ignored
Nov 29 01:16:10 np0005539552 dracut[1268]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Nov 29 01:16:10 np0005539552 dracut[1268]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Nov 29 01:16:10 np0005539552 dracut[1268]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Nov 29 01:16:10 np0005539552 dracut[1268]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Nov 29 01:16:10 np0005539552 dracut[1268]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Nov 29 01:16:10 np0005539552 dracut[1268]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Nov 29 01:16:10 np0005539552 dracut[1268]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Nov 29 01:16:10 np0005539552 dracut[1268]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Nov 29 01:16:10 np0005539552 dracut[1268]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Nov 29 01:16:10 np0005539552 dracut[1268]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Nov 29 01:16:10 np0005539552 dracut[1268]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Nov 29 01:16:10 np0005539552 dracut[1268]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Nov 29 01:16:10 np0005539552 dracut[1268]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Nov 29 01:16:10 np0005539552 dracut[1268]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Nov 29 01:16:10 np0005539552 dracut[1268]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Nov 29 01:16:10 np0005539552 dracut[1268]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Nov 29 01:16:10 np0005539552 dracut[1268]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Nov 29 01:16:10 np0005539552 dracut[1268]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Nov 29 01:16:10 np0005539552 dracut[1268]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Nov 29 01:16:10 np0005539552 dracut[1268]: *** Including module: openssl ***
Nov 29 01:16:10 np0005539552 dracut[1268]: *** Including module: shutdown ***
Nov 29 01:16:10 np0005539552 dracut[1268]: *** Including module: squash ***
Nov 29 01:16:11 np0005539552 dracut[1268]: *** Including modules done ***
Nov 29 01:16:11 np0005539552 dracut[1268]: *** Installing kernel module dependencies ***
Nov 29 01:16:11 np0005539552 dracut[1268]: *** Installing kernel module dependencies done ***
Nov 29 01:16:11 np0005539552 dracut[1268]: *** Resolving executable dependencies ***
Nov 29 01:16:13 np0005539552 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 01:16:13 np0005539552 dracut[1268]: *** Resolving executable dependencies done ***
Nov 29 01:16:13 np0005539552 dracut[1268]: *** Generating early-microcode cpio image ***
Nov 29 01:16:13 np0005539552 dracut[1268]: *** Store current command line parameters ***
Nov 29 01:16:13 np0005539552 dracut[1268]: Stored kernel commandline:
Nov 29 01:16:13 np0005539552 dracut[1268]: No dracut internal kernel commandline stored in the initramfs
Nov 29 01:16:13 np0005539552 dracut[1268]: *** Install squash loader ***
Nov 29 01:16:14 np0005539552 dracut[1268]: *** Squashing the files inside the initramfs ***
Nov 29 01:16:15 np0005539552 dracut[1268]: *** Squashing the files inside the initramfs done ***
Nov 29 01:16:15 np0005539552 dracut[1268]: *** Creating image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' ***
Nov 29 01:16:15 np0005539552 dracut[1268]: *** Hardlinking files ***
Nov 29 01:16:15 np0005539552 dracut[1268]: *** Hardlinking files done ***
Nov 29 01:16:16 np0005539552 dracut[1268]: *** Creating initramfs image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' done ***
Nov 29 01:16:17 np0005539552 kdumpctl[1014]: kdump: kexec: loaded kdump kernel
Nov 29 01:16:17 np0005539552 kdumpctl[1014]: kdump: Starting kdump: [OK]
Nov 29 01:16:17 np0005539552 systemd[1]: Finished Crash recovery kernel arming.
Nov 29 01:16:17 np0005539552 systemd[1]: Startup finished in 1.710s (kernel) + 2.779s (initrd) + 18.311s (userspace) = 22.801s.
Nov 29 01:16:27 np0005539552 systemd[1]: Created slice User Slice of UID 1000.
Nov 29 01:16:27 np0005539552 systemd[1]: Starting User Runtime Directory /run/user/1000...
Nov 29 01:16:27 np0005539552 systemd-logind[788]: New session 1 of user zuul.
Nov 29 01:16:27 np0005539552 systemd[1]: Finished User Runtime Directory /run/user/1000.
Nov 29 01:16:27 np0005539552 systemd[1]: Starting User Manager for UID 1000...
Nov 29 01:16:28 np0005539552 systemd[4300]: Queued start job for default target Main User Target.
Nov 29 01:16:28 np0005539552 systemd[4300]: Created slice User Application Slice.
Nov 29 01:16:28 np0005539552 systemd[4300]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 01:16:28 np0005539552 systemd[4300]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 01:16:28 np0005539552 systemd[4300]: Reached target Paths.
Nov 29 01:16:28 np0005539552 systemd[4300]: Reached target Timers.
Nov 29 01:16:28 np0005539552 systemd[4300]: Starting D-Bus User Message Bus Socket...
Nov 29 01:16:28 np0005539552 systemd[4300]: Starting Create User's Volatile Files and Directories...
Nov 29 01:16:28 np0005539552 systemd[4300]: Listening on D-Bus User Message Bus Socket.
Nov 29 01:16:28 np0005539552 systemd[4300]: Reached target Sockets.
Nov 29 01:16:28 np0005539552 systemd[4300]: Finished Create User's Volatile Files and Directories.
Nov 29 01:16:28 np0005539552 systemd[4300]: Reached target Basic System.
Nov 29 01:16:28 np0005539552 systemd[4300]: Reached target Main User Target.
Nov 29 01:16:28 np0005539552 systemd[4300]: Startup finished in 152ms.
Nov 29 01:16:28 np0005539552 systemd[1]: Started User Manager for UID 1000.
Nov 29 01:16:28 np0005539552 systemd[1]: Started Session 1 of User zuul.
Nov 29 01:16:28 np0005539552 python3[4383]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:16:31 np0005539552 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 01:16:34 np0005539552 python3[4413]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:16:41 np0005539552 python3[4471]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:16:42 np0005539552 python3[4511]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Nov 29 01:16:44 np0005539552 python3[4537]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDCudHL3tHiIrGUdr3CZx/jgOAB+sTyj6z0B6SJoPEhADY63ZPdbzjdQhpgyhpnTdwlh6+Z4xPQ+DxOd+FPfH9ETfjTAZyODPGBr+U3/aWYFrr1YsSqkwWe+DI0V25XzOJIl8WeH42Z3m8/2jQ1VE7oAtX0LFpiSM33D5G6jGs1zirRd3I823HIkLEWTOQev3Et6zuPF/J/lkKUMsa94htQ/yvthrhpk7+QsWEk8T5uet2LZvnIsjZFIgfCCgTeGtE4eqcC9tdVxfYIwVhUeu3eCkwwBkVi0t0HhAh3qbiXsTIErO5yg2fPPye0mC6UjHMgSqc5crO5b4VU6uuoKLqXHXfoyrjf1PG1bb3S1A7UO9fs+mG8UJ2N53kHSyQ5YcQ+hZyyXqVeKPIQFPvwTxYMEb+rxzq5f56DdR8ruRmocVTqpGu1VTEdGIWkU8IaEB9kOEu7t8oFgiym6LUXBbbd9a6AkVCauAPe7Kq0Q4VHZVxtWSFjTAEi5x3CFG2qzn0= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:16:45 np0005539552 python3[4561]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:16:45 np0005539552 python3[4660]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:16:46 np0005539552 python3[4731]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764397005.399726-254-205028133804069/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=e891a9c3f1e64b45bc756f48ef3ae3aa_id_rsa follow=False checksum=03e35f16cd901f940500378f2e2f2ebf2de0be9d backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:16:46 np0005539552 python3[4854]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:16:47 np0005539552 python3[4925]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764397006.3583586-309-25833926056191/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=e891a9c3f1e64b45bc756f48ef3ae3aa_id_rsa.pub follow=False checksum=b1e1e3a6e20142e56b32029e5e58b508f3db73ab backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:16:48 np0005539552 python3[4973]: ansible-ping Invoked with data=pong
Nov 29 01:16:49 np0005539552 python3[4997]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:16:52 np0005539552 python3[5055]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Nov 29 01:16:53 np0005539552 python3[5087]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:16:53 np0005539552 python3[5111]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:16:53 np0005539552 python3[5135]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:16:54 np0005539552 python3[5159]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:16:54 np0005539552 python3[5183]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:16:54 np0005539552 python3[5207]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:16:56 np0005539552 python3[5233]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:16:57 np0005539552 python3[5311]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:16:57 np0005539552 python3[5384]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764397016.5833857-34-132995692301290/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:16:58 np0005539552 python3[5432]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:16:58 np0005539552 python3[5456]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:16:58 np0005539552 python3[5480]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:16:59 np0005539552 python3[5504]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:16:59 np0005539552 python3[5528]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:16:59 np0005539552 python3[5552]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:00 np0005539552 python3[5576]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:00 np0005539552 python3[5600]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:00 np0005539552 python3[5624]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:00 np0005539552 python3[5648]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:01 np0005539552 python3[5672]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:01 np0005539552 python3[5696]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:01 np0005539552 python3[5720]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:02 np0005539552 python3[5744]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:02 np0005539552 python3[5768]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:02 np0005539552 python3[5792]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:02 np0005539552 python3[5816]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:03 np0005539552 python3[5840]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:03 np0005539552 python3[5864]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:03 np0005539552 python3[5888]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:04 np0005539552 python3[5912]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:04 np0005539552 python3[5936]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:04 np0005539552 python3[5960]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:05 np0005539552 python3[5984]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:05 np0005539552 python3[6008]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:05 np0005539552 python3[6032]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:08 np0005539552 python3[6058]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 29 01:17:08 np0005539552 systemd[1]: Starting Time & Date Service...
Nov 29 01:17:08 np0005539552 systemd[1]: Started Time & Date Service.
Nov 29 01:17:08 np0005539552 systemd-timedated[6060]: Changed time zone to 'UTC' (UTC).
Nov 29 01:17:08 np0005539552 python3[6089]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:17:09 np0005539552 python3[6165]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:17:09 np0005539552 python3[6236]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764397028.9098053-254-207422843110549/source _original_basename=tmpn84m178e follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:17:10 np0005539552 python3[6336]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:17:10 np0005539552 python3[6407]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764397029.807139-304-77625723105715/source _original_basename=tmp1nr4bz_7 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:17:11 np0005539552 python3[6509]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:17:11 np0005539552 python3[6582]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764397030.8766518-384-118503020658127/source _original_basename=tmpu0kgqamp follow=False checksum=b9ea63fb38f50d3257ec076159ca59d9b4b7fe2c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:17:12 np0005539552 python3[6630]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:17:12 np0005539552 python3[6656]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:17:13 np0005539552 python3[6736]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:17:13 np0005539552 python3[6809]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764397032.6503515-454-184921847610474/source _original_basename=tmp5_3gx74q follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:17:14 np0005539552 python3[6860]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e3b-3c83-2a81-8810-00000000001f-1-compute2 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:17:14 np0005539552 python3[6888]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-2a81-8810-000000000020-1-compute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Nov 29 01:17:16 np0005539552 python3[6916]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:17:38 np0005539552 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 29 01:18:03 np0005539552 python3[6944]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:18:33 np0005539552 systemd[4300]: Starting Mark boot as successful...
Nov 29 01:18:33 np0005539552 systemd[4300]: Finished Mark boot as successful.
Nov 29 01:19:03 np0005539552 systemd-logind[788]: Session 1 logged out. Waiting for processes to exit.
Nov 29 01:19:35 np0005539552 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 29 01:19:35 np0005539552 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Nov 29 01:19:35 np0005539552 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Nov 29 01:19:35 np0005539552 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Nov 29 01:19:35 np0005539552 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Nov 29 01:19:35 np0005539552 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Nov 29 01:19:35 np0005539552 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Nov 29 01:19:35 np0005539552 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Nov 29 01:19:35 np0005539552 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Nov 29 01:19:35 np0005539552 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Nov 29 01:19:35 np0005539552 NetworkManager[857]: <info>  [1764397175.7077] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 29 01:19:35 np0005539552 systemd-udevd[6946]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 01:19:35 np0005539552 NetworkManager[857]: <info>  [1764397175.7339] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 01:19:35 np0005539552 NetworkManager[857]: <info>  [1764397175.7366] settings: (eth1): created default wired connection 'Wired connection 1'
Nov 29 01:19:35 np0005539552 NetworkManager[857]: <info>  [1764397175.7368] device (eth1): carrier: link connected
Nov 29 01:19:35 np0005539552 NetworkManager[857]: <info>  [1764397175.7370] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 29 01:19:35 np0005539552 NetworkManager[857]: <info>  [1764397175.7374] policy: auto-activating connection 'Wired connection 1' (ce98208b-4e6d-3203-a620-f2de9eba9ec1)
Nov 29 01:19:35 np0005539552 NetworkManager[857]: <info>  [1764397175.7377] device (eth1): Activation: starting connection 'Wired connection 1' (ce98208b-4e6d-3203-a620-f2de9eba9ec1)
Nov 29 01:19:35 np0005539552 NetworkManager[857]: <info>  [1764397175.7378] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 01:19:35 np0005539552 NetworkManager[857]: <info>  [1764397175.7379] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 01:19:35 np0005539552 NetworkManager[857]: <info>  [1764397175.7382] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 01:19:35 np0005539552 NetworkManager[857]: <info>  [1764397175.7385] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 29 01:19:36 np0005539552 systemd-logind[788]: New session 3 of user zuul.
Nov 29 01:19:36 np0005539552 systemd[1]: Started Session 3 of User zuul.
Nov 29 01:19:36 np0005539552 python3[6977]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e3b-3c83-37f5-2ec2-0000000001f6-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:19:47 np0005539552 python3[7057]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:19:47 np0005539552 python3[7130]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764397186.7535343-206-182525031182545/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=e0efae4bf62fb21ad442f6f2ad0090e49ed5bbf5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:19:48 np0005539552 python3[7180]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 01:19:48 np0005539552 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 29 01:19:48 np0005539552 systemd[1]: Stopped Network Manager Wait Online.
Nov 29 01:19:48 np0005539552 systemd[1]: Stopping Network Manager Wait Online...
Nov 29 01:19:48 np0005539552 systemd[1]: Stopping Network Manager...
Nov 29 01:19:48 np0005539552 NetworkManager[857]: <info>  [1764397188.1072] caught SIGTERM, shutting down normally.
Nov 29 01:19:48 np0005539552 NetworkManager[857]: <info>  [1764397188.1088] dhcp4 (eth0): canceled DHCP transaction
Nov 29 01:19:48 np0005539552 NetworkManager[857]: <info>  [1764397188.1090] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 01:19:48 np0005539552 NetworkManager[857]: <info>  [1764397188.1091] dhcp4 (eth0): state changed no lease
Nov 29 01:19:48 np0005539552 NetworkManager[857]: <info>  [1764397188.1093] manager: NetworkManager state is now CONNECTING
Nov 29 01:19:48 np0005539552 NetworkManager[857]: <info>  [1764397188.1247] dhcp4 (eth1): canceled DHCP transaction
Nov 29 01:19:48 np0005539552 NetworkManager[857]: <info>  [1764397188.1248] dhcp4 (eth1): state changed no lease
Nov 29 01:19:48 np0005539552 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 01:19:48 np0005539552 NetworkManager[857]: <info>  [1764397188.1322] exiting (success)
Nov 29 01:19:48 np0005539552 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 01:19:48 np0005539552 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 29 01:19:48 np0005539552 systemd[1]: Stopped Network Manager.
Nov 29 01:19:48 np0005539552 systemd[1]: NetworkManager.service: Consumed 1.842s CPU time, 9.9M memory peak.
Nov 29 01:19:48 np0005539552 systemd[1]: Starting Network Manager...
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.1949] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:0f1c0a59-83fa-405b-b772-82c2e4852e7b)
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.1950] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.2013] manager[0x561d56c2f070]: monitoring kernel firmware directory '/lib/firmware'.
Nov 29 01:19:48 np0005539552 systemd[1]: Starting Hostname Service...
Nov 29 01:19:48 np0005539552 systemd[1]: Started Hostname Service.
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3071] hostname: hostname: using hostnamed
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3072] hostname: static hostname changed from (none) to "np0005539552.novalocal"
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3081] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3089] manager[0x561d56c2f070]: rfkill: Wi-Fi hardware radio set enabled
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3089] manager[0x561d56c2f070]: rfkill: WWAN hardware radio set enabled
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3147] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3148] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3149] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3150] manager: Networking is enabled by state file
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3154] settings: Loaded settings plugin: keyfile (internal)
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3162] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3210] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3229] dhcp: init: Using DHCP client 'internal'
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3234] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3243] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3257] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3275] device (lo): Activation: starting connection 'lo' (62c00a15-19b2-47c5-a13c-15bc1da7e4d2)
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3290] device (eth0): carrier: link connected
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3301] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3312] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3313] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3328] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3344] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3358] device (eth1): carrier: link connected
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3367] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3378] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (ce98208b-4e6d-3203-a620-f2de9eba9ec1) (indicated)
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3379] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3392] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3408] device (eth1): Activation: starting connection 'Wired connection 1' (ce98208b-4e6d-3203-a620-f2de9eba9ec1)
Nov 29 01:19:48 np0005539552 systemd[1]: Started Network Manager.
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3427] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3437] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3443] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3448] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3453] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3461] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3467] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3473] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3479] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3493] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3498] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3511] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3516] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3543] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3551] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3561] device (lo): Activation: successful, device activated.
Nov 29 01:19:48 np0005539552 systemd[1]: Starting Network Manager Wait Online...
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3575] dhcp4 (eth0): state changed new lease, address=38.102.83.189
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3590] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3704] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3729] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3732] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3737] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3742] device (eth0): Activation: successful, device activated.
Nov 29 01:19:48 np0005539552 NetworkManager[7192]: <info>  [1764397188.3747] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 29 01:19:48 np0005539552 python3[7264]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e3b-3c83-37f5-2ec2-0000000000d3-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:19:58 np0005539552 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 01:20:18 np0005539552 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 01:20:33 np0005539552 NetworkManager[7192]: <info>  [1764397233.2487] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 01:20:33 np0005539552 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 01:20:33 np0005539552 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 01:20:33 np0005539552 NetworkManager[7192]: <info>  [1764397233.2752] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 01:20:33 np0005539552 NetworkManager[7192]: <info>  [1764397233.2755] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 01:20:33 np0005539552 NetworkManager[7192]: <info>  [1764397233.2766] device (eth1): Activation: successful, device activated.
Nov 29 01:20:33 np0005539552 NetworkManager[7192]: <info>  [1764397233.2775] manager: startup complete
Nov 29 01:20:33 np0005539552 NetworkManager[7192]: <info>  [1764397233.2777] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Nov 29 01:20:33 np0005539552 NetworkManager[7192]: <warn>  [1764397233.2785] device (eth1): Activation: failed for connection 'Wired connection 1'
Nov 29 01:20:33 np0005539552 NetworkManager[7192]: <info>  [1764397233.2796] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Nov 29 01:20:33 np0005539552 systemd[1]: Finished Network Manager Wait Online.
Nov 29 01:20:33 np0005539552 NetworkManager[7192]: <info>  [1764397233.2967] dhcp4 (eth1): canceled DHCP transaction
Nov 29 01:20:33 np0005539552 NetworkManager[7192]: <info>  [1764397233.2967] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 29 01:20:33 np0005539552 NetworkManager[7192]: <info>  [1764397233.2968] dhcp4 (eth1): state changed no lease
Nov 29 01:20:33 np0005539552 NetworkManager[7192]: <info>  [1764397233.2982] policy: auto-activating connection 'ci-private-network' (9a5d5aa1-edde-56cb-a5da-0684f967617f)
Nov 29 01:20:33 np0005539552 NetworkManager[7192]: <info>  [1764397233.2987] device (eth1): Activation: starting connection 'ci-private-network' (9a5d5aa1-edde-56cb-a5da-0684f967617f)
Nov 29 01:20:33 np0005539552 NetworkManager[7192]: <info>  [1764397233.2988] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 01:20:33 np0005539552 NetworkManager[7192]: <info>  [1764397233.2991] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 01:20:33 np0005539552 NetworkManager[7192]: <info>  [1764397233.2998] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 01:20:33 np0005539552 NetworkManager[7192]: <info>  [1764397233.3006] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 01:20:33 np0005539552 NetworkManager[7192]: <info>  [1764397233.4244] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 01:20:33 np0005539552 NetworkManager[7192]: <info>  [1764397233.4248] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 01:20:33 np0005539552 NetworkManager[7192]: <info>  [1764397233.4261] device (eth1): Activation: successful, device activated.
Nov 29 01:20:43 np0005539552 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 01:20:48 np0005539552 systemd[1]: session-3.scope: Deactivated successfully.
Nov 29 01:20:48 np0005539552 systemd[1]: session-3.scope: Consumed 1.780s CPU time.
Nov 29 01:20:48 np0005539552 systemd-logind[788]: Session 3 logged out. Waiting for processes to exit.
Nov 29 01:20:48 np0005539552 systemd-logind[788]: Removed session 3.
Nov 29 01:21:07 np0005539552 systemd-logind[788]: New session 4 of user zuul.
Nov 29 01:21:08 np0005539552 systemd[1]: Started Session 4 of User zuul.
Nov 29 01:21:08 np0005539552 python3[7373]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:21:08 np0005539552 python3[7446]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764397268.1058667-373-137942230943961/source _original_basename=tmpw72rc41z follow=False checksum=8b8510474e05b0641525b6d1881e4eebcca5fe7b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:21:11 np0005539552 systemd[1]: session-4.scope: Deactivated successfully.
Nov 29 01:21:11 np0005539552 systemd-logind[788]: Session 4 logged out. Waiting for processes to exit.
Nov 29 01:21:11 np0005539552 systemd-logind[788]: Removed session 4.
Nov 29 01:21:33 np0005539552 systemd[4300]: Created slice User Background Tasks Slice.
Nov 29 01:21:33 np0005539552 systemd[4300]: Starting Cleanup of User's Temporary Files and Directories...
Nov 29 01:21:33 np0005539552 systemd[4300]: Finished Cleanup of User's Temporary Files and Directories.
Nov 29 01:27:16 np0005539552 systemd-logind[788]: New session 5 of user zuul.
Nov 29 01:27:16 np0005539552 systemd[1]: Started Session 5 of User zuul.
Nov 29 01:27:16 np0005539552 python3[7508]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-dbf6-85dc-000000000ca8-1-compute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:27:17 np0005539552 python3[7537]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:27:18 np0005539552 python3[7563]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:27:18 np0005539552 python3[7590]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:27:18 np0005539552 python3[7616]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:27:19 np0005539552 python3[7642]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:27:20 np0005539552 python3[7720]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:27:20 np0005539552 python3[7793]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764397640.0873551-368-36410787873186/source _original_basename=tmp_7l0m65x follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:27:21 np0005539552 python3[7843]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 01:27:21 np0005539552 systemd[1]: Reloading.
Nov 29 01:27:21 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:27:23 np0005539552 python3[7898]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Nov 29 01:27:23 np0005539552 python3[7924]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:27:24 np0005539552 python3[7952]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:27:24 np0005539552 python3[7980]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:27:24 np0005539552 python3[8008]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:27:25 np0005539552 python3[8035]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-dbf6-85dc-000000000caf-1-compute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:27:25 np0005539552 python3[8065]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 01:27:28 np0005539552 systemd[1]: session-5.scope: Deactivated successfully.
Nov 29 01:27:28 np0005539552 systemd[1]: session-5.scope: Consumed 4.345s CPU time.
Nov 29 01:27:28 np0005539552 systemd-logind[788]: Session 5 logged out. Waiting for processes to exit.
Nov 29 01:27:28 np0005539552 systemd-logind[788]: Removed session 5.
Nov 29 01:27:30 np0005539552 systemd-logind[788]: New session 6 of user zuul.
Nov 29 01:27:30 np0005539552 systemd[1]: Started Session 6 of User zuul.
Nov 29 01:27:30 np0005539552 python3[8100]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 29 01:28:03 np0005539552 kernel: SELinux:  Converting 385 SID table entries...
Nov 29 01:28:03 np0005539552 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 01:28:03 np0005539552 kernel: SELinux:  policy capability open_perms=1
Nov 29 01:28:03 np0005539552 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 01:28:03 np0005539552 kernel: SELinux:  policy capability always_check_network=0
Nov 29 01:28:03 np0005539552 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 01:28:03 np0005539552 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 01:28:03 np0005539552 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 01:28:12 np0005539552 kernel: SELinux:  Converting 385 SID table entries...
Nov 29 01:28:12 np0005539552 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 01:28:12 np0005539552 kernel: SELinux:  policy capability open_perms=1
Nov 29 01:28:12 np0005539552 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 01:28:12 np0005539552 kernel: SELinux:  policy capability always_check_network=0
Nov 29 01:28:12 np0005539552 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 01:28:12 np0005539552 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 01:28:12 np0005539552 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 01:28:22 np0005539552 kernel: SELinux:  Converting 385 SID table entries...
Nov 29 01:28:22 np0005539552 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 01:28:22 np0005539552 kernel: SELinux:  policy capability open_perms=1
Nov 29 01:28:22 np0005539552 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 01:28:22 np0005539552 kernel: SELinux:  policy capability always_check_network=0
Nov 29 01:28:22 np0005539552 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 01:28:22 np0005539552 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 01:28:22 np0005539552 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 01:28:24 np0005539552 setsebool[8169]: The virt_use_nfs policy boolean was changed to 1 by root
Nov 29 01:28:24 np0005539552 setsebool[8169]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Nov 29 01:28:35 np0005539552 kernel: SELinux:  Converting 388 SID table entries...
Nov 29 01:28:35 np0005539552 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 01:28:35 np0005539552 kernel: SELinux:  policy capability open_perms=1
Nov 29 01:28:35 np0005539552 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 01:28:35 np0005539552 kernel: SELinux:  policy capability always_check_network=0
Nov 29 01:28:35 np0005539552 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 01:28:35 np0005539552 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 01:28:35 np0005539552 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 01:28:57 np0005539552 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 29 01:28:57 np0005539552 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 01:28:57 np0005539552 systemd[1]: Starting man-db-cache-update.service...
Nov 29 01:28:57 np0005539552 systemd[1]: Reloading.
Nov 29 01:28:57 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:28:57 np0005539552 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 01:29:20 np0005539552 irqbalance[786]: Cannot change IRQ 27 affinity: Operation not permitted
Nov 29 01:29:20 np0005539552 irqbalance[786]: IRQ 27 affinity is now unmanaged
Nov 29 01:29:53 np0005539552 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 01:29:53 np0005539552 systemd[1]: Finished man-db-cache-update.service.
Nov 29 01:29:53 np0005539552 systemd[1]: man-db-cache-update.service: Consumed 1min 1.273s CPU time.
Nov 29 01:29:53 np0005539552 systemd[1]: run-r52a70679e2b64e70affa74dc6b39729f.service: Deactivated successfully.
Nov 29 01:29:58 np0005539552 systemd[1]: session-6.scope: Deactivated successfully.
Nov 29 01:29:58 np0005539552 systemd[1]: session-6.scope: Consumed 59.791s CPU time.
Nov 29 01:29:58 np0005539552 systemd-logind[788]: Session 6 logged out. Waiting for processes to exit.
Nov 29 01:29:58 np0005539552 systemd-logind[788]: Removed session 6.
Nov 29 01:30:32 np0005539552 systemd-logind[788]: New session 7 of user zuul.
Nov 29 01:30:32 np0005539552 systemd[1]: Started Session 7 of User zuul.
Nov 29 01:30:32 np0005539552 python3[29489]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-fce3-968d-00000000000c-1-compute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:30:34 np0005539552 kernel: evm: overlay not supported
Nov 29 01:30:35 np0005539552 systemd[4300]: Starting D-Bus User Message Bus...
Nov 29 01:30:35 np0005539552 dbus-broker-launch[29543]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Nov 29 01:30:35 np0005539552 dbus-broker-launch[29543]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Nov 29 01:30:35 np0005539552 systemd[4300]: Started D-Bus User Message Bus.
Nov 29 01:30:35 np0005539552 dbus-broker-lau[29543]: Ready
Nov 29 01:30:35 np0005539552 systemd[4300]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 29 01:30:35 np0005539552 systemd[4300]: Created slice Slice /user.
Nov 29 01:30:35 np0005539552 systemd[4300]: podman-29526.scope: unit configures an IP firewall, but not running as root.
Nov 29 01:30:35 np0005539552 systemd[4300]: (This warning is only shown for the first unit using IP firewalling.)
Nov 29 01:30:35 np0005539552 systemd[4300]: Started podman-29526.scope.
Nov 29 01:30:36 np0005539552 systemd[4300]: Started podman-pause-a180178f.scope.
Nov 29 01:30:36 np0005539552 python3[29573]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.89:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.89:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:30:36 np0005539552 python3[29573]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Nov 29 01:30:37 np0005539552 systemd[1]: session-7.scope: Deactivated successfully.
Nov 29 01:30:37 np0005539552 systemd-logind[788]: Session 7 logged out. Waiting for processes to exit.
Nov 29 01:30:37 np0005539552 systemd-logind[788]: Removed session 7.
Nov 29 01:31:02 np0005539552 systemd[1]: Starting Cleanup of Temporary Directories...
Nov 29 01:31:02 np0005539552 systemd-logind[788]: New session 8 of user zuul.
Nov 29 01:31:02 np0005539552 systemd[1]: Started Session 8 of User zuul.
Nov 29 01:31:02 np0005539552 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Nov 29 01:31:02 np0005539552 systemd[1]: Finished Cleanup of Temporary Directories.
Nov 29 01:31:02 np0005539552 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Nov 29 01:31:02 np0005539552 python3[29614]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJrjhnp5BPERL248LVqF1CJ8EP1wq/Z56Tsyt80ETMnfUDtvWUz47K0wXbpz5P79ut5MVJjWHtBnsg3Wj8zK7v0= zuul@np0005539549.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:31:03 np0005539552 python3[29640]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJrjhnp5BPERL248LVqF1CJ8EP1wq/Z56Tsyt80ETMnfUDtvWUz47K0wXbpz5P79ut5MVJjWHtBnsg3Wj8zK7v0= zuul@np0005539549.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:31:04 np0005539552 python3[29666]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005539552.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 29 01:31:04 np0005539552 python3[29700]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJrjhnp5BPERL248LVqF1CJ8EP1wq/Z56Tsyt80ETMnfUDtvWUz47K0wXbpz5P79ut5MVJjWHtBnsg3Wj8zK7v0= zuul@np0005539549.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:31:05 np0005539552 python3[29778]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:31:05 np0005539552 python3[29851]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764397864.7782772-170-74017389789153/source _original_basename=tmpylm6d0h3 follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:31:06 np0005539552 python3[29901]: ansible-ansible.builtin.hostname Invoked with name=compute-2 use=systemd
Nov 29 01:31:06 np0005539552 systemd[1]: Starting Hostname Service...
Nov 29 01:31:06 np0005539552 systemd[1]: Started Hostname Service.
Nov 29 01:31:06 np0005539552 systemd-hostnamed[29905]: Changed pretty hostname to 'compute-2'
Nov 29 01:31:06 np0005539552 systemd-hostnamed[29905]: Hostname set to <compute-2> (static)
Nov 29 01:31:06 np0005539552 NetworkManager[7192]: <info>  [1764397866.6736] hostname: static hostname changed from "np0005539552.novalocal" to "compute-2"
Nov 29 01:31:06 np0005539552 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 01:31:06 np0005539552 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 01:31:07 np0005539552 systemd-logind[788]: Session 8 logged out. Waiting for processes to exit.
Nov 29 01:31:07 np0005539552 systemd[1]: session-8.scope: Deactivated successfully.
Nov 29 01:31:07 np0005539552 systemd[1]: session-8.scope: Consumed 2.509s CPU time.
Nov 29 01:31:07 np0005539552 systemd-logind[788]: Removed session 8.
Nov 29 01:31:16 np0005539552 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 01:31:36 np0005539552 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 01:33:33 np0005539552 systemd[1]: Starting dnf makecache...
Nov 29 01:33:34 np0005539552 dnf[29925]: Failed determining last makecache time.
Nov 29 01:33:34 np0005539552 dnf[29925]: CentOS Stream 9 - BaseOS                         47 kB/s | 7.3 kB     00:00
Nov 29 01:33:34 np0005539552 dnf[29925]: CentOS Stream 9 - AppStream                      81 kB/s | 7.4 kB     00:00
Nov 29 01:33:35 np0005539552 dnf[29925]: CentOS Stream 9 - CRB                            26 kB/s | 7.2 kB     00:00
Nov 29 01:33:35 np0005539552 dnf[29925]: CentOS Stream 9 - Extras packages                78 kB/s | 8.3 kB     00:00
Nov 29 01:33:35 np0005539552 dnf[29925]: Metadata cache created.
Nov 29 01:33:35 np0005539552 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 29 01:33:35 np0005539552 systemd[1]: Finished dnf makecache.
Nov 29 01:36:12 np0005539552 systemd-logind[788]: New session 9 of user zuul.
Nov 29 01:36:12 np0005539552 systemd[1]: Started Session 9 of User zuul.
Nov 29 01:36:13 np0005539552 python3[30007]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:36:15 np0005539552 python3[30123]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:36:16 np0005539552 python3[30196]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398175.607157-34059-51397903472954/source mode=0755 _original_basename=delorean.repo follow=False checksum=a16f090252000d02a7f7d540bb10f7c1c9cd4ac5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:36:16 np0005539552 python3[30222]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:36:17 np0005539552 python3[30295]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398175.607157-34059-51397903472954/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:36:17 np0005539552 python3[30321]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:36:18 np0005539552 python3[30394]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398175.607157-34059-51397903472954/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:36:18 np0005539552 python3[30420]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:36:18 np0005539552 python3[30493]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398175.607157-34059-51397903472954/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:36:18 np0005539552 python3[30519]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:36:19 np0005539552 python3[30592]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398175.607157-34059-51397903472954/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:36:19 np0005539552 python3[30618]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:36:19 np0005539552 python3[30691]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398175.607157-34059-51397903472954/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:36:20 np0005539552 python3[30717]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:36:20 np0005539552 python3[30790]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398175.607157-34059-51397903472954/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=25e801a9a05537c191e2aa500f19076ac31d3e5b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:36:33 np0005539552 python3[30838]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:41:32 np0005539552 systemd[1]: session-9.scope: Deactivated successfully.
Nov 29 01:41:32 np0005539552 systemd[1]: session-9.scope: Consumed 5.386s CPU time.
Nov 29 01:41:32 np0005539552 systemd-logind[788]: Session 9 logged out. Waiting for processes to exit.
Nov 29 01:41:32 np0005539552 systemd-logind[788]: Removed session 9.
Nov 29 02:00:15 np0005539552 systemd-logind[788]: New session 10 of user zuul.
Nov 29 02:00:15 np0005539552 systemd[1]: Started Session 10 of User zuul.
Nov 29 02:00:16 np0005539552 python3.9[31022]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:00:17 np0005539552 python3.9[31203]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:00:32 np0005539552 systemd-logind[788]: Session 10 logged out. Waiting for processes to exit.
Nov 29 02:00:32 np0005539552 systemd[1]: session-10.scope: Deactivated successfully.
Nov 29 02:00:32 np0005539552 systemd[1]: session-10.scope: Consumed 7.885s CPU time.
Nov 29 02:00:32 np0005539552 systemd-logind[788]: Removed session 10.
Nov 29 02:00:48 np0005539552 systemd-logind[788]: New session 11 of user zuul.
Nov 29 02:00:48 np0005539552 systemd[1]: Started Session 11 of User zuul.
Nov 29 02:00:48 np0005539552 python3.9[31413]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 29 02:00:50 np0005539552 python3.9[31587]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:00:51 np0005539552 python3.9[31739]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:00:52 np0005539552 python3.9[31892]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:00:53 np0005539552 python3.9[32044]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:00:54 np0005539552 python3.9[32196]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:00:54 np0005539552 python3.9[32319]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764399653.4481719-185-48688109705691/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:00:55 np0005539552 python3.9[32471]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:00:56 np0005539552 python3.9[32627]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:00:57 np0005539552 python3.9[32779]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:00:58 np0005539552 python3.9[32929]: ansible-ansible.builtin.service_facts Invoked
Nov 29 02:01:02 np0005539552 python3.9[33197]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:01:03 np0005539552 python3.9[33347]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:01:04 np0005539552 python3.9[33501]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:01:06 np0005539552 python3.9[33659]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:01:07 np0005539552 python3.9[33743]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:01:48 np0005539552 systemd[1]: Reloading.
Nov 29 02:01:48 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:01:48 np0005539552 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Nov 29 02:01:49 np0005539552 systemd[1]: Reloading.
Nov 29 02:01:49 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:01:49 np0005539552 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Nov 29 02:01:49 np0005539552 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 29 02:01:49 np0005539552 systemd[1]: Reloading.
Nov 29 02:01:49 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:01:49 np0005539552 systemd[1]: Listening on LVM2 poll daemon socket.
Nov 29 02:01:50 np0005539552 dbus-broker-launch[769]: Noticed file-system modification, trigger reload.
Nov 29 02:01:50 np0005539552 dbus-broker-launch[769]: Noticed file-system modification, trigger reload.
Nov 29 02:01:50 np0005539552 dbus-broker-launch[769]: Noticed file-system modification, trigger reload.
Nov 29 02:02:54 np0005539552 kernel: SELinux:  Converting 2718 SID table entries...
Nov 29 02:02:54 np0005539552 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 02:02:54 np0005539552 kernel: SELinux:  policy capability open_perms=1
Nov 29 02:02:54 np0005539552 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 02:02:54 np0005539552 kernel: SELinux:  policy capability always_check_network=0
Nov 29 02:02:54 np0005539552 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 02:02:54 np0005539552 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 02:02:54 np0005539552 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 02:02:54 np0005539552 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Nov 29 02:02:54 np0005539552 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 02:02:54 np0005539552 systemd[1]: Starting man-db-cache-update.service...
Nov 29 02:02:54 np0005539552 systemd[1]: Reloading.
Nov 29 02:02:54 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:02:54 np0005539552 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 02:02:55 np0005539552 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 02:02:55 np0005539552 systemd[1]: Finished man-db-cache-update.service.
Nov 29 02:02:55 np0005539552 systemd[1]: man-db-cache-update.service: Consumed 1.371s CPU time.
Nov 29 02:02:55 np0005539552 systemd[1]: run-rcc06b24458ab45cdabd474717f026193.service: Deactivated successfully.
Nov 29 02:04:09 np0005539552 python3.9[35270]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:04:11 np0005539552 python3.9[35551]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 29 02:04:12 np0005539552 python3.9[35703]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 29 02:04:16 np0005539552 python3.9[35856]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:04:16 np0005539552 python3.9[36008]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 29 02:04:18 np0005539552 python3.9[36160]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:04:22 np0005539552 python3.9[36312]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:04:24 np0005539552 python3.9[36437]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764399858.7845752-674-553302432629/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1960c13778c50062ca07f689a187e0cd26c6ab56 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:04:32 np0005539552 python3.9[36591]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:04:33 np0005539552 python3.9[36743]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:04:34 np0005539552 python3.9[36896]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:04:35 np0005539552 python3.9[37048]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 29 02:04:35 np0005539552 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 02:04:36 np0005539552 python3.9[37202]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 02:04:39 np0005539552 python3.9[37360]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-2 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 02:04:41 np0005539552 python3.9[37520]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 29 02:04:42 np0005539552 python3.9[37673]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 02:04:44 np0005539552 python3.9[37831]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 29 02:04:45 np0005539552 python3.9[37983]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:04:50 np0005539552 python3.9[38136]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:04:51 np0005539552 python3.9[38288]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:04:51 np0005539552 python3.9[38411]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764399890.6051798-1030-135768092514005/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:04:52 np0005539552 python3.9[38563]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:04:52 np0005539552 systemd[1]: Starting Load Kernel Modules...
Nov 29 02:04:53 np0005539552 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 29 02:04:53 np0005539552 kernel: Bridge firewalling registered
Nov 29 02:04:53 np0005539552 systemd-modules-load[38567]: Inserted module 'br_netfilter'
Nov 29 02:04:53 np0005539552 systemd[1]: Finished Load Kernel Modules.
Nov 29 02:04:53 np0005539552 python3.9[38723]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:04:54 np0005539552 python3.9[38846]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764399893.3607984-1101-202832872134399/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:04:55 np0005539552 python3.9[38998]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:04:58 np0005539552 dbus-broker-launch[769]: Noticed file-system modification, trigger reload.
Nov 29 02:04:58 np0005539552 dbus-broker-launch[769]: Noticed file-system modification, trigger reload.
Nov 29 02:04:59 np0005539552 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 02:04:59 np0005539552 systemd[1]: Starting man-db-cache-update.service...
Nov 29 02:04:59 np0005539552 systemd[1]: Reloading.
Nov 29 02:04:59 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:04:59 np0005539552 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 02:05:03 np0005539552 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 02:05:03 np0005539552 systemd[1]: Finished man-db-cache-update.service.
Nov 29 02:05:03 np0005539552 systemd[1]: man-db-cache-update.service: Consumed 5.330s CPU time.
Nov 29 02:05:03 np0005539552 systemd[1]: run-rcfae51dd9bc44a3bafbb04177c1b5583.service: Deactivated successfully.
Nov 29 02:05:05 np0005539552 python3.9[42735]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:05:05 np0005539552 python3.9[42887]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 29 02:05:06 np0005539552 python3.9[43037]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:05:07 np0005539552 python3.9[43189]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:05:07 np0005539552 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 29 02:05:08 np0005539552 systemd[1]: Starting Authorization Manager...
Nov 29 02:05:08 np0005539552 polkitd[43406]: Started polkitd version 0.117
Nov 29 02:05:08 np0005539552 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 29 02:05:08 np0005539552 systemd[1]: Started Authorization Manager.
Nov 29 02:05:09 np0005539552 python3.9[43577]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:05:09 np0005539552 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 29 02:05:10 np0005539552 systemd[1]: tuned.service: Deactivated successfully.
Nov 29 02:05:10 np0005539552 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 29 02:05:10 np0005539552 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 29 02:05:10 np0005539552 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 29 02:05:11 np0005539552 python3.9[43738]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 29 02:05:14 np0005539552 python3.9[43890]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:05:14 np0005539552 systemd[1]: Reloading.
Nov 29 02:05:15 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:05:16 np0005539552 python3.9[44079]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:05:17 np0005539552 systemd[1]: Reloading.
Nov 29 02:05:17 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:05:18 np0005539552 python3.9[44268]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:05:19 np0005539552 python3.9[44421]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:05:19 np0005539552 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Nov 29 02:05:20 np0005539552 python3.9[44574]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:05:22 np0005539552 python3.9[44736]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:05:23 np0005539552 python3.9[44889]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:05:23 np0005539552 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 29 02:05:23 np0005539552 systemd[1]: Stopped Apply Kernel Variables.
Nov 29 02:05:23 np0005539552 systemd[1]: Stopping Apply Kernel Variables...
Nov 29 02:05:23 np0005539552 systemd[1]: Starting Apply Kernel Variables...
Nov 29 02:05:23 np0005539552 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 29 02:05:23 np0005539552 systemd[1]: Finished Apply Kernel Variables.
Nov 29 02:05:24 np0005539552 systemd[1]: session-11.scope: Deactivated successfully.
Nov 29 02:05:24 np0005539552 systemd[1]: session-11.scope: Consumed 2min 16.202s CPU time.
Nov 29 02:05:24 np0005539552 systemd-logind[788]: Session 11 logged out. Waiting for processes to exit.
Nov 29 02:05:24 np0005539552 systemd-logind[788]: Removed session 11.
Nov 29 02:05:29 np0005539552 systemd-logind[788]: New session 12 of user zuul.
Nov 29 02:05:29 np0005539552 systemd[1]: Started Session 12 of User zuul.
Nov 29 02:05:30 np0005539552 python3.9[45072]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:05:33 np0005539552 python3.9[45228]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 29 02:05:34 np0005539552 python3.9[45381]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 02:05:35 np0005539552 python3.9[45539]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-2 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 02:05:37 np0005539552 python3.9[45699]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:05:38 np0005539552 python3.9[45783]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 02:05:42 np0005539552 python3.9[45947]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:05:54 np0005539552 kernel: SELinux:  Converting 2730 SID table entries...
Nov 29 02:05:54 np0005539552 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 02:05:54 np0005539552 kernel: SELinux:  policy capability open_perms=1
Nov 29 02:05:54 np0005539552 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 02:05:54 np0005539552 kernel: SELinux:  policy capability always_check_network=0
Nov 29 02:05:54 np0005539552 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 02:05:54 np0005539552 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 02:05:54 np0005539552 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 02:05:54 np0005539552 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Nov 29 02:05:54 np0005539552 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Nov 29 02:05:57 np0005539552 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 02:05:57 np0005539552 systemd[1]: Starting man-db-cache-update.service...
Nov 29 02:05:57 np0005539552 systemd[1]: Reloading.
Nov 29 02:05:57 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:05:57 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:05:57 np0005539552 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 02:05:59 np0005539552 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 02:05:59 np0005539552 systemd[1]: Finished man-db-cache-update.service.
Nov 29 02:05:59 np0005539552 systemd[1]: run-r77592f8c8ed34749a026ae85e1644be6.service: Deactivated successfully.
Nov 29 02:06:18 np0005539552 python3.9[47048]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 02:06:18 np0005539552 systemd[1]: Reloading.
Nov 29 02:06:18 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:06:18 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:06:18 np0005539552 systemd[1]: Starting Open vSwitch Database Unit...
Nov 29 02:06:18 np0005539552 chown[47091]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Nov 29 02:06:18 np0005539552 ovs-ctl[47097]: /etc/openvswitch/conf.db does not exist ... (warning).
Nov 29 02:06:18 np0005539552 ovs-ctl[47097]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Nov 29 02:06:18 np0005539552 ovs-ctl[47097]: Starting ovsdb-server [  OK  ]
Nov 29 02:06:18 np0005539552 ovs-vsctl[47146]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Nov 29 02:06:19 np0005539552 ovs-vsctl[47162]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"479f969f-dbf7-4938-8979-b8532eb113f6\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Nov 29 02:06:19 np0005539552 ovs-ctl[47097]: Configuring Open vSwitch system IDs [  OK  ]
Nov 29 02:06:19 np0005539552 ovs-vsctl[47172]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-2
Nov 29 02:06:19 np0005539552 ovs-ctl[47097]: Enabling remote OVSDB managers [  OK  ]
Nov 29 02:06:19 np0005539552 systemd[1]: Started Open vSwitch Database Unit.
Nov 29 02:06:19 np0005539552 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Nov 29 02:06:19 np0005539552 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Nov 29 02:06:19 np0005539552 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 29 02:06:19 np0005539552 kernel: openvswitch: Open vSwitch switching datapath
Nov 29 02:06:19 np0005539552 ovs-ctl[47217]: Inserting openvswitch module [  OK  ]
Nov 29 02:06:19 np0005539552 ovs-ctl[47186]: Starting ovs-vswitchd [  OK  ]
Nov 29 02:06:19 np0005539552 ovs-ctl[47186]: Enabling remote OVSDB managers [  OK  ]
Nov 29 02:06:19 np0005539552 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 29 02:06:19 np0005539552 ovs-vsctl[47235]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-2
Nov 29 02:06:19 np0005539552 systemd[1]: Starting Open vSwitch...
Nov 29 02:06:19 np0005539552 systemd[1]: Finished Open vSwitch.
Nov 29 02:06:21 np0005539552 python3.9[47386]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:06:22 np0005539552 python3.9[47538]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 29 02:06:23 np0005539552 kernel: SELinux:  Converting 2744 SID table entries...
Nov 29 02:06:23 np0005539552 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 02:06:23 np0005539552 kernel: SELinux:  policy capability open_perms=1
Nov 29 02:06:23 np0005539552 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 02:06:23 np0005539552 kernel: SELinux:  policy capability always_check_network=0
Nov 29 02:06:23 np0005539552 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 02:06:23 np0005539552 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 02:06:23 np0005539552 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 02:06:26 np0005539552 python3.9[47693]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:06:27 np0005539552 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Nov 29 02:06:27 np0005539552 python3.9[47851]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:06:30 np0005539552 python3.9[48004]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:06:31 np0005539552 python3.9[48291]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 02:06:32 np0005539552 python3.9[48441]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:06:33 np0005539552 python3.9[48595]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:06:35 np0005539552 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 02:06:35 np0005539552 systemd[1]: Starting man-db-cache-update.service...
Nov 29 02:06:35 np0005539552 systemd[1]: Reloading.
Nov 29 02:06:35 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:06:35 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:06:35 np0005539552 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 02:06:35 np0005539552 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 02:06:35 np0005539552 systemd[1]: Finished man-db-cache-update.service.
Nov 29 02:06:35 np0005539552 systemd[1]: run-r597df5fb3a95435888ff66d735a9493a.service: Deactivated successfully.
Nov 29 02:06:41 np0005539552 python3.9[48914]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:06:41 np0005539552 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 29 02:06:41 np0005539552 systemd[1]: Stopped Network Manager Wait Online.
Nov 29 02:06:41 np0005539552 systemd[1]: Stopping Network Manager Wait Online...
Nov 29 02:06:41 np0005539552 systemd[1]: Stopping Network Manager...
Nov 29 02:06:41 np0005539552 NetworkManager[7192]: <info>  [1764400001.0988] caught SIGTERM, shutting down normally.
Nov 29 02:06:41 np0005539552 NetworkManager[7192]: <info>  [1764400001.1007] dhcp4 (eth0): canceled DHCP transaction
Nov 29 02:06:41 np0005539552 NetworkManager[7192]: <info>  [1764400001.1008] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 02:06:41 np0005539552 NetworkManager[7192]: <info>  [1764400001.1008] dhcp4 (eth0): state changed no lease
Nov 29 02:06:41 np0005539552 NetworkManager[7192]: <info>  [1764400001.1011] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 02:06:41 np0005539552 NetworkManager[7192]: <info>  [1764400001.1103] exiting (success)
Nov 29 02:06:41 np0005539552 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 02:06:41 np0005539552 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 02:06:41 np0005539552 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 29 02:06:41 np0005539552 systemd[1]: Stopped Network Manager.
Nov 29 02:06:41 np0005539552 systemd[1]: NetworkManager.service: Consumed 19.710s CPU time, 4.1M memory peak, read 0B from disk, written 28.0K to disk.
Nov 29 02:06:41 np0005539552 systemd[1]: Starting Network Manager...
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.2013] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:0f1c0a59-83fa-405b-b772-82c2e4852e7b)
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.2017] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.2087] manager[0x557152fce090]: monitoring kernel firmware directory '/lib/firmware'.
Nov 29 02:06:41 np0005539552 systemd[1]: Starting Hostname Service...
Nov 29 02:06:41 np0005539552 systemd[1]: Started Hostname Service.
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.2942] hostname: hostname: using hostnamed
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.2944] hostname: static hostname changed from (none) to "compute-2"
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.2950] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.2955] manager[0x557152fce090]: rfkill: Wi-Fi hardware radio set enabled
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.2955] manager[0x557152fce090]: rfkill: WWAN hardware radio set enabled
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.2976] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.2985] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.2986] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.2987] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.2987] manager: Networking is enabled by state file
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.2989] settings: Loaded settings plugin: keyfile (internal)
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.2992] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.3013] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.3023] dhcp: init: Using DHCP client 'internal'
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.3025] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.3030] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.3034] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.3040] device (lo): Activation: starting connection 'lo' (62c00a15-19b2-47c5-a13c-15bc1da7e4d2)
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.3046] device (eth0): carrier: link connected
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.3049] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.3053] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.3053] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.3058] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.3063] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.3068] device (eth1): carrier: link connected
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.3072] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.3076] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (9a5d5aa1-edde-56cb-a5da-0684f967617f) (indicated)
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.3076] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.3081] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.3086] device (eth1): Activation: starting connection 'ci-private-network' (9a5d5aa1-edde-56cb-a5da-0684f967617f)
Nov 29 02:06:41 np0005539552 systemd[1]: Started Network Manager.
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.3093] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.3102] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.3105] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.3106] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.3108] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.3110] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.3112] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.3115] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.3118] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.3130] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.3133] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.3142] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.3153] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.3161] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.3164] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.3169] device (lo): Activation: successful, device activated.
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.3176] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.3179] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.3182] manager: NetworkManager state is now CONNECTED_LOCAL
Nov 29 02:06:41 np0005539552 NetworkManager[48926]: <info>  [1764400001.3184] device (eth1): Activation: successful, device activated.
Nov 29 02:06:41 np0005539552 systemd[1]: Starting Network Manager Wait Online...
Nov 29 02:06:42 np0005539552 NetworkManager[48926]: <info>  [1764400002.9478] dhcp4 (eth0): state changed new lease, address=38.102.83.189
Nov 29 02:06:42 np0005539552 NetworkManager[48926]: <info>  [1764400002.9491] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 29 02:06:42 np0005539552 NetworkManager[48926]: <info>  [1764400002.9563] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 02:06:42 np0005539552 NetworkManager[48926]: <info>  [1764400002.9602] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 02:06:42 np0005539552 NetworkManager[48926]: <info>  [1764400002.9604] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 02:06:42 np0005539552 NetworkManager[48926]: <info>  [1764400002.9608] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 02:06:42 np0005539552 NetworkManager[48926]: <info>  [1764400002.9611] device (eth0): Activation: successful, device activated.
Nov 29 02:06:42 np0005539552 NetworkManager[48926]: <info>  [1764400002.9616] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 29 02:06:42 np0005539552 NetworkManager[48926]: <info>  [1764400002.9618] manager: startup complete
Nov 29 02:06:42 np0005539552 systemd[1]: Finished Network Manager Wait Online.
Nov 29 02:06:43 np0005539552 python3.9[49127]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:06:50 np0005539552 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 02:06:50 np0005539552 systemd[1]: Starting man-db-cache-update.service...
Nov 29 02:06:50 np0005539552 systemd[1]: Reloading.
Nov 29 02:06:50 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:06:50 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:06:50 np0005539552 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 02:06:51 np0005539552 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 02:06:51 np0005539552 systemd[1]: Finished man-db-cache-update.service.
Nov 29 02:06:51 np0005539552 systemd[1]: run-r3a5a0290fa004be1b74416041a3db0ac.service: Deactivated successfully.
Nov 29 02:06:53 np0005539552 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 02:07:09 np0005539552 python3.9[49598]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:07:11 np0005539552 python3.9[49750]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:07:11 np0005539552 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 02:07:12 np0005539552 python3.9[49906]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:07:13 np0005539552 python3.9[50058]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:07:14 np0005539552 python3.9[50210]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:07:14 np0005539552 python3.9[50362]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:07:15 np0005539552 python3.9[50514]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:07:16 np0005539552 python3.9[50637]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400035.1170754-654-208199072085295/.source _original_basename=.z09wrivv follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:07:17 np0005539552 python3.9[50789]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:07:18 np0005539552 python3.9[50941]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Nov 29 02:07:19 np0005539552 python3.9[51093]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:07:22 np0005539552 python3.9[51520]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Nov 29 02:07:23 np0005539552 ansible-async_wrapper.py[51695]: Invoked with j952716288060 300 /home/zuul/.ansible/tmp/ansible-tmp-1764400042.7928412-852-126266139232057/AnsiballZ_edpm_os_net_config.py _
Nov 29 02:07:23 np0005539552 ansible-async_wrapper.py[51698]: Starting module and watcher
Nov 29 02:07:23 np0005539552 ansible-async_wrapper.py[51698]: Start watching 51699 (300)
Nov 29 02:07:23 np0005539552 ansible-async_wrapper.py[51699]: Start module (51699)
Nov 29 02:07:23 np0005539552 ansible-async_wrapper.py[51695]: Return async_wrapper task started.
Nov 29 02:07:23 np0005539552 python3.9[51700]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Nov 29 02:07:24 np0005539552 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Nov 29 02:07:24 np0005539552 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Nov 29 02:07:24 np0005539552 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Nov 29 02:07:24 np0005539552 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Nov 29 02:07:24 np0005539552 kernel: cfg80211: failed to load regulatory.db
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.8320] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51701 uid=0 result="success"
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.8338] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51701 uid=0 result="success"
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.8878] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.8879] audit: op="connection-add" uuid="f4889dc2-eba6-4810-8021-92b9454834d3" name="br-ex-br" pid=51701 uid=0 result="success"
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.8893] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.8894] audit: op="connection-add" uuid="35c3ee11-0730-436e-ab66-aa8ce3854c83" name="br-ex-port" pid=51701 uid=0 result="success"
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.8904] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.8906] audit: op="connection-add" uuid="9e3273f2-45d0-4849-8417-1d97183591df" name="eth1-port" pid=51701 uid=0 result="success"
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.8916] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.8917] audit: op="connection-add" uuid="f4bd9152-7da3-4629-afa1-ec311a88edb2" name="vlan20-port" pid=51701 uid=0 result="success"
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.8927] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.8929] audit: op="connection-add" uuid="b2fe2d0d-a5b3-4b7f-9016-925e59f7d4fc" name="vlan21-port" pid=51701 uid=0 result="success"
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.8938] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.8940] audit: op="connection-add" uuid="2c311b74-08e7-439d-b857-91eb9cef09a0" name="vlan22-port" pid=51701 uid=0 result="success"
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.8950] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.8951] audit: op="connection-add" uuid="c2a3ef06-a2cb-42ba-a007-a0807e6568d9" name="vlan23-port" pid=51701 uid=0 result="success"
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.8968] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="802-3-ethernet.mtu,ipv6.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode,connection.timestamp,connection.autoconnect-priority,ipv4.dhcp-timeout,ipv4.dhcp-client-id" pid=51701 uid=0 result="success"
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.8983] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.8984] audit: op="connection-add" uuid="e8451649-62d3-4f2a-8d41-3689e84ca1ca" name="br-ex-if" pid=51701 uid=0 result="success"
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9035] audit: op="connection-update" uuid="9a5d5aa1-edde-56cb-a5da-0684f967617f" name="ci-private-network" args="ovs-interface.type,ipv6.routing-rules,ipv6.dns,ipv6.method,ipv6.addr-gen-mode,ipv6.routes,ipv6.addresses,ovs-external-ids.data,connection.controller,connection.timestamp,connection.master,connection.port-type,connection.slave-type,ipv4.routes,ipv4.dns,ipv4.method,ipv4.routing-rules,ipv4.never-default,ipv4.addresses" pid=51701 uid=0 result="success"
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9051] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9053] audit: op="connection-add" uuid="67665af3-9c83-4c61-a651-b8bbf1b65df1" name="vlan20-if" pid=51701 uid=0 result="success"
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9067] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9068] audit: op="connection-add" uuid="b7e48131-6a79-4788-a11d-99b998f621de" name="vlan21-if" pid=51701 uid=0 result="success"
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9082] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9084] audit: op="connection-add" uuid="c02513b7-0001-443d-95b3-13da3ad97095" name="vlan22-if" pid=51701 uid=0 result="success"
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9099] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9100] audit: op="connection-add" uuid="2ede0af0-f4ae-438a-bd3e-64829b48d981" name="vlan23-if" pid=51701 uid=0 result="success"
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9112] audit: op="connection-delete" uuid="ce98208b-4e6d-3203-a620-f2de9eba9ec1" name="Wired connection 1" pid=51701 uid=0 result="success"
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9122] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9132] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9135] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (f4889dc2-eba6-4810-8021-92b9454834d3)
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9135] audit: op="connection-activate" uuid="f4889dc2-eba6-4810-8021-92b9454834d3" name="br-ex-br" pid=51701 uid=0 result="success"
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9137] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9144] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9147] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (35c3ee11-0730-436e-ab66-aa8ce3854c83)
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9149] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9154] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9158] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (9e3273f2-45d0-4849-8417-1d97183591df)
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9159] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9165] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9169] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (f4bd9152-7da3-4629-afa1-ec311a88edb2)
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9170] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9176] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9180] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (b2fe2d0d-a5b3-4b7f-9016-925e59f7d4fc)
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9181] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9187] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9191] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (2c311b74-08e7-439d-b857-91eb9cef09a0)
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9192] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9199] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9202] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (c2a3ef06-a2cb-42ba-a007-a0807e6568d9)
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9203] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9205] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9207] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9213] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9218] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9221] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (e8451649-62d3-4f2a-8d41-3689e84ca1ca)
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9222] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9225] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9227] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9228] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9229] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9238] device (eth1): disconnecting for new activation request.
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9239] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9241] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9243] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9253] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9258] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9262] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9266] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (67665af3-9c83-4c61-a651-b8bbf1b65df1)
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9267] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9269] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9271] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9272] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9275] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9279] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9284] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (b7e48131-6a79-4788-a11d-99b998f621de)
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9284] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9287] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9289] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9290] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9292] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9297] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9301] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (c02513b7-0001-443d-95b3-13da3ad97095)
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9301] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9303] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9305] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9307] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9309] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9313] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9318] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (2ede0af0-f4ae-438a-bd3e-64829b48d981)
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9318] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9321] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9322] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9324] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9325] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9337] audit: op="device-reapply" interface="eth0" ifindex=2 args="802-3-ethernet.mtu,ipv6.method,ipv6.addr-gen-mode,connection.autoconnect-priority,ipv4.dhcp-timeout,ipv4.dhcp-client-id" pid=51701 uid=0 result="success"
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9339] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9341] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9343] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9348] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9351] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9354] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 kernel: ovs-system: entered promiscuous mode
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9378] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9380] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9385] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 kernel: Timeout policy base is empty
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9389] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9392] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9393] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9398] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9402] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9406] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 systemd-udevd[51707]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9408] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9412] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9416] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9419] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9421] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9426] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9430] dhcp4 (eth0): canceled DHCP transaction
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9430] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9430] dhcp4 (eth0): state changed no lease
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9431] dhcp4 (eth0): activation: beginning transaction (no timeout)
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9441] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9444] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51701 uid=0 result="fail" reason="Device is not activated"
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9484] device (eth1): disconnecting for new activation request.
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9485] audit: op="connection-activate" uuid="9a5d5aa1-edde-56cb-a5da-0684f967617f" name="ci-private-network" pid=51701 uid=0 result="success"
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9488] dhcp4 (eth0): state changed new lease, address=38.102.83.189
Nov 29 02:07:25 np0005539552 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9533] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9545] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9549] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51701 uid=0 result="success"
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9549] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 29 02:07:25 np0005539552 kernel: br-ex: entered promiscuous mode
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9694] device (eth1): Activation: starting connection 'ci-private-network' (9a5d5aa1-edde-56cb-a5da-0684f967617f)
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9698] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9703] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9710] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9713] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9719] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9723] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9732] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9733] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9734] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9736] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9737] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9739] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9741] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9749] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9756] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9760] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9763] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9767] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9771] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9775] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9779] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9782] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9785] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9789] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9793] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9797] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9801] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9806] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9819] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Nov 29 02:07:25 np0005539552 kernel: vlan22: entered promiscuous mode
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9838] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 systemd-udevd[51706]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9875] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9877] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9878] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9884] device (eth1): Activation: successful, device activated.
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9888] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9893] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 02:07:25 np0005539552 kernel: vlan21: entered promiscuous mode
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9967] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Nov 29 02:07:25 np0005539552 NetworkManager[48926]: <info>  [1764400045.9980] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:07:25 np0005539552 kernel: vlan20: entered promiscuous mode
Nov 29 02:07:26 np0005539552 NetworkManager[48926]: <info>  [1764400046.0005] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539552 NetworkManager[48926]: <info>  [1764400046.0007] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539552 NetworkManager[48926]: <info>  [1764400046.0013] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 02:07:26 np0005539552 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Nov 29 02:07:26 np0005539552 NetworkManager[48926]: <info>  [1764400046.0057] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Nov 29 02:07:26 np0005539552 NetworkManager[48926]: <info>  [1764400046.0071] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539552 NetworkManager[48926]: <info>  [1764400046.0088] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539552 NetworkManager[48926]: <info>  [1764400046.0089] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539552 kernel: vlan23: entered promiscuous mode
Nov 29 02:07:26 np0005539552 NetworkManager[48926]: <info>  [1764400046.0096] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 02:07:26 np0005539552 NetworkManager[48926]: <info>  [1764400046.0144] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Nov 29 02:07:26 np0005539552 NetworkManager[48926]: <info>  [1764400046.0153] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539552 NetworkManager[48926]: <info>  [1764400046.0170] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539552 NetworkManager[48926]: <info>  [1764400046.0172] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539552 NetworkManager[48926]: <info>  [1764400046.0180] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 02:07:26 np0005539552 NetworkManager[48926]: <info>  [1764400046.0231] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Nov 29 02:07:26 np0005539552 NetworkManager[48926]: <info>  [1764400046.0242] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539552 NetworkManager[48926]: <info>  [1764400046.0264] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539552 NetworkManager[48926]: <info>  [1764400046.0265] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539552 NetworkManager[48926]: <info>  [1764400046.0273] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 02:07:27 np0005539552 NetworkManager[48926]: <info>  [1764400047.1514] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51701 uid=0 result="success"
Nov 29 02:07:27 np0005539552 NetworkManager[48926]: <info>  [1764400047.3304] checkpoint[0x557152fa4950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Nov 29 02:07:27 np0005539552 NetworkManager[48926]: <info>  [1764400047.3307] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51701 uid=0 result="success"
Nov 29 02:07:27 np0005539552 python3.9[52059]: ansible-ansible.legacy.async_status Invoked with jid=j952716288060.51695 mode=status _async_dir=/root/.ansible_async
Nov 29 02:07:27 np0005539552 NetworkManager[48926]: <info>  [1764400047.6892] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51701 uid=0 result="success"
Nov 29 02:07:27 np0005539552 NetworkManager[48926]: <info>  [1764400047.6910] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51701 uid=0 result="success"
Nov 29 02:07:27 np0005539552 NetworkManager[48926]: <info>  [1764400047.9355] audit: op="networking-control" arg="global-dns-configuration" pid=51701 uid=0 result="success"
Nov 29 02:07:27 np0005539552 NetworkManager[48926]: <info>  [1764400047.9392] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Nov 29 02:07:27 np0005539552 NetworkManager[48926]: <info>  [1764400047.9428] audit: op="networking-control" arg="global-dns-configuration" pid=51701 uid=0 result="success"
Nov 29 02:07:27 np0005539552 NetworkManager[48926]: <info>  [1764400047.9454] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51701 uid=0 result="success"
Nov 29 02:07:28 np0005539552 NetworkManager[48926]: <info>  [1764400048.0878] checkpoint[0x557152fa4a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Nov 29 02:07:28 np0005539552 NetworkManager[48926]: <info>  [1764400048.0884] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51701 uid=0 result="success"
Nov 29 02:07:28 np0005539552 ansible-async_wrapper.py[51699]: Module complete (51699)
Nov 29 02:07:28 np0005539552 ansible-async_wrapper.py[51698]: Done in kid B.
Nov 29 02:07:31 np0005539552 python3.9[52164]: ansible-ansible.legacy.async_status Invoked with jid=j952716288060.51695 mode=status _async_dir=/root/.ansible_async
Nov 29 02:07:31 np0005539552 python3.9[52264]: ansible-ansible.legacy.async_status Invoked with jid=j952716288060.51695 mode=cleanup _async_dir=/root/.ansible_async
Nov 29 02:07:32 np0005539552 python3.9[52416]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:07:33 np0005539552 python3.9[52541]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400052.179231-933-26870159771264/.source.returncode _original_basename=.nkv5md8s follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:07:34 np0005539552 python3.9[52693]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:07:34 np0005539552 python3.9[52817]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400053.5849173-981-46256842911754/.source.cfg _original_basename=.bwz7j2dv follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:07:35 np0005539552 python3.9[52969]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:07:35 np0005539552 systemd[1]: Reloading Network Manager...
Nov 29 02:07:35 np0005539552 NetworkManager[48926]: <info>  [1764400055.7069] audit: op="reload" arg="0" pid=52973 uid=0 result="success"
Nov 29 02:07:35 np0005539552 NetworkManager[48926]: <info>  [1764400055.7076] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Nov 29 02:07:35 np0005539552 systemd[1]: Reloaded Network Manager.
Nov 29 02:07:36 np0005539552 systemd[1]: session-12.scope: Deactivated successfully.
Nov 29 02:07:36 np0005539552 systemd[1]: session-12.scope: Consumed 51.485s CPU time.
Nov 29 02:07:36 np0005539552 systemd-logind[788]: Session 12 logged out. Waiting for processes to exit.
Nov 29 02:07:36 np0005539552 systemd-logind[788]: Removed session 12.
Nov 29 02:07:40 np0005539552 irqbalance[786]: Cannot change IRQ 26 affinity: Operation not permitted
Nov 29 02:07:40 np0005539552 irqbalance[786]: IRQ 26 affinity is now unmanaged
Nov 29 02:07:41 np0005539552 systemd-logind[788]: New session 13 of user zuul.
Nov 29 02:07:41 np0005539552 systemd[1]: Started Session 13 of User zuul.
Nov 29 02:07:42 np0005539552 python3.9[53157]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:07:43 np0005539552 python3.9[53311]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:07:45 np0005539552 python3.9[53505]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:07:45 np0005539552 systemd[1]: session-13.scope: Deactivated successfully.
Nov 29 02:07:45 np0005539552 systemd[1]: session-13.scope: Consumed 2.200s CPU time.
Nov 29 02:07:45 np0005539552 systemd-logind[788]: Session 13 logged out. Waiting for processes to exit.
Nov 29 02:07:45 np0005539552 systemd-logind[788]: Removed session 13.
Nov 29 02:07:45 np0005539552 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 02:07:50 np0005539552 systemd-logind[788]: New session 14 of user zuul.
Nov 29 02:07:50 np0005539552 systemd[1]: Started Session 14 of User zuul.
Nov 29 02:07:51 np0005539552 python3.9[53687]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:07:53 np0005539552 python3.9[53841]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:07:54 np0005539552 python3.9[53998]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:07:55 np0005539552 python3.9[54082]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:07:57 np0005539552 python3.9[54236]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:07:59 np0005539552 python3.9[54431]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:08:00 np0005539552 python3.9[54583]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:08:00 np0005539552 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck3253012495-merged.mount: Deactivated successfully.
Nov 29 02:08:00 np0005539552 podman[54584]: 2025-11-29 07:08:00.163147039 +0000 UTC m=+0.062700170 system refresh
Nov 29 02:08:01 np0005539552 systemd[1]: var-lib-containers-storage-overlay-opaque\x2dbug\x2dcheck3092884397-merged.mount: Deactivated successfully.
Nov 29 02:08:01 np0005539552 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:08:01 np0005539552 python3.9[54747]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:08:02 np0005539552 python3.9[54870]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400080.8411653-205-196052475591877/.source.json follow=False _original_basename=podman_network_config.j2 checksum=0f63160e41f3c5431f423f8156902762894443f6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:08:02 np0005539552 python3.9[55022]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:08:03 np0005539552 python3.9[55145]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764400082.3657918-249-131453496429823/.source.conf follow=False _original_basename=registries.conf.j2 checksum=197bf6e1388aca01b529f5e8d08286f263a7fb81 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:08:04 np0005539552 python3.9[55297]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:08:05 np0005539552 python3.9[55449]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:08:05 np0005539552 python3.9[55601]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:08:06 np0005539552 python3.9[55753]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:08:07 np0005539552 python3.9[55905]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:08:10 np0005539552 python3.9[56058]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:08:10 np0005539552 python3.9[56212]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:08:11 np0005539552 python3.9[56364]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:08:12 np0005539552 python3.9[56516]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:08:13 np0005539552 python3.9[56669]: ansible-service_facts Invoked
Nov 29 02:08:13 np0005539552 network[56686]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 02:08:13 np0005539552 network[56687]: 'network-scripts' will be removed from distribution in near future.
Nov 29 02:08:13 np0005539552 network[56688]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 02:08:19 np0005539552 python3.9[57140]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:08:22 np0005539552 python3.9[57293]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 29 02:08:24 np0005539552 python3.9[57445]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:08:24 np0005539552 python3.9[57570]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400103.6570992-683-46845854645624/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:08:25 np0005539552 python3.9[57724]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:08:26 np0005539552 python3.9[57849]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400105.1841145-729-22813238058227/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:08:28 np0005539552 python3.9[58003]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:08:29 np0005539552 python3.9[58157]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:08:30 np0005539552 python3.9[58241]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:08:32 np0005539552 python3.9[58395]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:08:33 np0005539552 python3.9[58479]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:08:33 np0005539552 chronyd[793]: chronyd exiting
Nov 29 02:08:33 np0005539552 systemd[1]: Stopping NTP client/server...
Nov 29 02:08:33 np0005539552 systemd[1]: chronyd.service: Deactivated successfully.
Nov 29 02:08:33 np0005539552 systemd[1]: Stopped NTP client/server.
Nov 29 02:08:33 np0005539552 systemd[1]: Starting NTP client/server...
Nov 29 02:08:33 np0005539552 chronyd[58488]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 29 02:08:33 np0005539552 chronyd[58488]: Frequency -26.822 +/- 0.152 ppm read from /var/lib/chrony/drift
Nov 29 02:08:33 np0005539552 chronyd[58488]: Loaded seccomp filter (level 2)
Nov 29 02:08:33 np0005539552 systemd[1]: Started NTP client/server.
Nov 29 02:08:34 np0005539552 systemd[1]: session-14.scope: Deactivated successfully.
Nov 29 02:08:34 np0005539552 systemd-logind[788]: Session 14 logged out. Waiting for processes to exit.
Nov 29 02:08:34 np0005539552 systemd[1]: session-14.scope: Consumed 25.960s CPU time.
Nov 29 02:08:34 np0005539552 systemd-logind[788]: Removed session 14.
Nov 29 02:08:40 np0005539552 systemd-logind[788]: New session 15 of user zuul.
Nov 29 02:08:40 np0005539552 systemd[1]: Started Session 15 of User zuul.
Nov 29 02:08:40 np0005539552 python3.9[58669]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:08:41 np0005539552 python3.9[58821]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:08:42 np0005539552 python3.9[58944]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400121.1928918-69-185706129656334/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:08:43 np0005539552 systemd[1]: session-15.scope: Deactivated successfully.
Nov 29 02:08:43 np0005539552 systemd[1]: session-15.scope: Consumed 1.634s CPU time.
Nov 29 02:08:43 np0005539552 systemd-logind[788]: Session 15 logged out. Waiting for processes to exit.
Nov 29 02:08:43 np0005539552 systemd-logind[788]: Removed session 15.
Nov 29 02:08:48 np0005539552 systemd-logind[788]: New session 16 of user zuul.
Nov 29 02:08:48 np0005539552 systemd[1]: Started Session 16 of User zuul.
Nov 29 02:08:49 np0005539552 python3.9[59123]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:08:50 np0005539552 python3.9[59279]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:08:51 np0005539552 python3.9[59454]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:08:52 np0005539552 python3.9[59577]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764400131.1699634-90-248207931526849/.source.json _original_basename=.s5397_lk follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:08:53 np0005539552 python3.9[59729]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:08:54 np0005539552 python3.9[59852]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400133.012124-160-114029623028266/.source _original_basename=.ioomfhd2 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:08:55 np0005539552 python3.9[60004]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:08:55 np0005539552 python3.9[60156]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:08:56 np0005539552 python3.9[60279]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764400135.262648-231-118487214902920/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:08:57 np0005539552 python3.9[60431]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:08:57 np0005539552 python3.9[60554]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764400136.5110457-231-273736319682138/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:08:58 np0005539552 python3.9[60706]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:08:59 np0005539552 python3.9[60858]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:08:59 np0005539552 python3.9[60981]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400138.7526746-343-83254888770957/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:00 np0005539552 python3.9[61133]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:09:01 np0005539552 python3.9[61256]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400140.1549542-388-5484221827731/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:02 np0005539552 python3.9[61408]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:09:02 np0005539552 systemd[1]: Reloading.
Nov 29 02:09:02 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:09:02 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:09:02 np0005539552 systemd[1]: Reloading.
Nov 29 02:09:02 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:09:02 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:09:02 np0005539552 systemd[1]: Starting EDPM Container Shutdown...
Nov 29 02:09:02 np0005539552 systemd[1]: Finished EDPM Container Shutdown.
Nov 29 02:09:03 np0005539552 python3.9[61635]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:09:04 np0005539552 python3.9[61760]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400142.9703908-457-189949813161791/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:04 np0005539552 python3.9[61912]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:09:05 np0005539552 python3.9[62035]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400144.212852-502-101812927826507/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:06 np0005539552 python3.9[62187]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:09:06 np0005539552 systemd[1]: Reloading.
Nov 29 02:09:06 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:09:06 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:09:06 np0005539552 systemd[1]: Reloading.
Nov 29 02:09:06 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:09:06 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:09:06 np0005539552 systemd[1]: Starting Create netns directory...
Nov 29 02:09:06 np0005539552 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 02:09:06 np0005539552 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 02:09:06 np0005539552 systemd[1]: Finished Create netns directory.
Nov 29 02:09:07 np0005539552 python3.9[62415]: ansible-ansible.builtin.service_facts Invoked
Nov 29 02:09:07 np0005539552 network[62432]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 02:09:07 np0005539552 network[62433]: 'network-scripts' will be removed from distribution in near future.
Nov 29 02:09:07 np0005539552 network[62434]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 02:09:12 np0005539552 python3.9[62696]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:09:12 np0005539552 systemd[1]: Reloading.
Nov 29 02:09:12 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:09:12 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:09:13 np0005539552 systemd[1]: Stopping IPv4 firewall with iptables...
Nov 29 02:09:13 np0005539552 iptables.init[62736]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Nov 29 02:09:13 np0005539552 iptables.init[62736]: iptables: Flushing firewall rules: [  OK  ]
Nov 29 02:09:13 np0005539552 systemd[1]: iptables.service: Deactivated successfully.
Nov 29 02:09:13 np0005539552 systemd[1]: Stopped IPv4 firewall with iptables.
Nov 29 02:09:14 np0005539552 python3.9[62932]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:09:15 np0005539552 python3.9[63086]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:09:15 np0005539552 systemd[1]: Reloading.
Nov 29 02:09:15 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:09:15 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:09:15 np0005539552 systemd[1]: Starting Netfilter Tables...
Nov 29 02:09:15 np0005539552 systemd[1]: Finished Netfilter Tables.
Nov 29 02:09:16 np0005539552 python3.9[63278]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:09:18 np0005539552 python3.9[63431]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:09:18 np0005539552 python3.9[63556]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400157.9921465-709-158000462090097/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:19 np0005539552 python3.9[63709]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:09:19 np0005539552 systemd[1]: Reloading OpenSSH server daemon...
Nov 29 02:09:19 np0005539552 systemd[1]: Reloaded OpenSSH server daemon.
Nov 29 02:09:20 np0005539552 python3.9[63865]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:21 np0005539552 python3.9[64017]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:09:21 np0005539552 python3.9[64140]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400160.9571514-802-53531347875587/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:23 np0005539552 python3.9[64292]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 29 02:09:23 np0005539552 systemd[1]: Starting Time & Date Service...
Nov 29 02:09:23 np0005539552 systemd[1]: Started Time & Date Service.
Nov 29 02:09:24 np0005539552 python3.9[64448]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:24 np0005539552 python3.9[64600]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:09:25 np0005539552 python3.9[64723]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400164.271214-907-664587347806/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:25 np0005539552 python3.9[64875]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:09:26 np0005539552 python3.9[64998]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400165.51531-952-57340911469158/.source.yaml _original_basename=.fym4i9ee follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:27 np0005539552 python3.9[65150]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:09:27 np0005539552 python3.9[65273]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400166.7530408-997-7197086635160/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:28 np0005539552 python3.9[65425]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:09:29 np0005539552 python3.9[65578]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:09:30 np0005539552 python3[65731]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 02:09:30 np0005539552 python3.9[65883]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:09:31 np0005539552 python3.9[66006]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400170.4132605-1114-276719482657415/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:32 np0005539552 python3.9[66158]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:09:32 np0005539552 python3.9[66281]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400171.811118-1159-158881071775682/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:33 np0005539552 python3.9[66433]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:09:34 np0005539552 python3.9[66556]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400173.150676-1204-152258527380672/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:35 np0005539552 python3.9[66708]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:09:35 np0005539552 python3.9[66831]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400174.5916433-1249-126041245107826/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:36 np0005539552 python3.9[66983]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:09:37 np0005539552 python3.9[67106]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400175.9896016-1294-130209828684750/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:37 np0005539552 python3.9[67258]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:38 np0005539552 python3.9[67410]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:09:39 np0005539552 python3.9[67569]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:40 np0005539552 python3.9[67722]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:41 np0005539552 python3.9[67874]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:42 np0005539552 python3.9[68026]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 29 02:09:42 np0005539552 python3.9[68179]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 29 02:09:43 np0005539552 systemd[1]: session-16.scope: Deactivated successfully.
Nov 29 02:09:43 np0005539552 systemd[1]: session-16.scope: Consumed 34.456s CPU time.
Nov 29 02:09:43 np0005539552 systemd-logind[788]: Session 16 logged out. Waiting for processes to exit.
Nov 29 02:09:43 np0005539552 systemd-logind[788]: Removed session 16.
Nov 29 02:09:52 np0005539552 systemd-logind[788]: New session 17 of user zuul.
Nov 29 02:09:52 np0005539552 systemd[1]: Started Session 17 of User zuul.
Nov 29 02:09:53 np0005539552 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 29 02:09:53 np0005539552 python3.9[68362]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 29 02:09:54 np0005539552 python3.9[68514]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:09:56 np0005539552 python3.9[68666]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:09:57 np0005539552 python3.9[68818]: ansible-ansible.builtin.blockinfile Invoked with block=compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC8DvicBdqy7dEZlHZpy7m/TwUChtVXFipP55AL4//M7HIh4A4ZWW0M0pb4E4WsXc1Y99eeNf5R+fmafWv5Z2x8Tq9KiRM9wQGSEJo1Sp7Ant8TcIyfbWCUIhmGAfkYUT2iUTjyyBrBL7iGVxJbYtCagodoXoIL4MSkgeZpadFa4XI4DieFBF95zOzXF6Z9RVUiocOG6vaogo3k/wTemQxQ/dlVV7SPrtj+GoZEUpeNlAKRbkAB8PNee/Ne+abzClpRp50s2pAh7smZFmL0O+wDOgWwFImPpxCkh4nR/3IJq6O53KXSl9jR4X/vmJHpFEHC6oZX5/hfwaJTfvvELB5cjzaFh3mzFweGkQq82VhAAxVksDTO2+aUZFGDJbMSvjPTSTEl+qx+GAl7E0KnzST+NMnd5qplw0KIj+BBZgkZtKK8kAsxxRU3zDMDotlvIDG1KYN+wOGRG2Cy2afXmGFIFYdzOFlvkAwmv9yhY5u5OlWxzuiZEOcqJ0dGS1e0hk8=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFq0l7tgdUK0C+AqSmZJQ8Y9Z17ynv3L7Gso+BnrUJe7#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLWT8H4lhVkE+892UU3HiUydE/Wuy5lmeTLAJzcPPkEmKKDZLorB5daY+peHiUZWU/JHax1i6VTJiGCUcfBK9Vw=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCfIZbQlJSY8OFW9gaKZpL5AOJYgHeGcUU4xMMLWNL/xUPPZkDRJ+0oOBxm1GBsA8W/sQVZWDc//tIOaPRg0Ts5mepXlfGs0Url+hpuUxGZNLWaIiPfHq1tUx7zM7eWeUlVhlBayXU+bDoHZDE1TezLFLi49CXlrQuy/1Fb5Ju8aYVVJNoRltLwGKo8JrHv8UnYQ29iZPFO7+AEqgSmsEyz9hjMO7qStFsK0Z4RYJrbTZ/AMj8FNebCRWGtc2weikdIjLid5Z20teORSzpJW4jLDvRkyg92/WdI7iFDyHhslm5uNGHqqE2uRPqQFTZ7tdP6IJzfhJms7WfRdsOS7qJdAeOLzhn/EcmLaKoST1KzKZYzMdAtqrHDPDth+ERDeHtT8CEHNFNgwH4Drtp7YWlKZyVPsv6dK3iVC5WQ4Smet9VXXpZhT8JcQr97oS6/QJ/gT2yzHqH9vE62bRuuVM3lwDNiZkdn1nVbxa8d58RY3T49As7qmlP5Y43puhyXDWU=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDvBaB2c/CSsrpPIGSKo/yIA8NKQbrk/1m+GY/Ma4/XG#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCX/VzLQPSOCPDMQMb838UxHYaVIDkLBboGMSvw1EX6MmRkAHKbJbJizg3TXu8nfZimb1PW1TRaFLHQkljXQfhA=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDZ3gJW4xxSNpckw2TbtUBxTZruxTxiPlDkOB8Y4ICZA576sHCsss1Ph5y2zOkXYsz9fpf2TwDKPQIVDfUxQL2k42AS2PWqcJCelaMaAxDGDVmytzhvJO+0vO0kZSFoRnDYDxt2IUjJS2VV4xS4L9mRqjK8zsSYyINET0BAxRep9xLeUV0pztWwkopYucpBL9nU+ZMkA5y3nRMxInQNfxZwW5O2P7v+HScnTy2CUe+79l+0TMU0N6uM79jmcAAH5zDqSdRx1VS+lr4cWeNOPxGiXzEepk+MRml6Y0uGKdtdlboqK6kvYfSNkkhFmtXsnvtNQyA8UDSAercKYAeSPfJftqXmHbVvAY+Ky5R22RivRx7jpubqimyS4Tab95yEzsLi6hEQ2OW1pZleLTnr31vNLojOAxtrIY7YgkPSo3yrbURsfLyldLo3LfSlYfkTpkQFE2CajUrAitfcz+uMi9UVw0jCs+cC6uvKZdzu9Flnc8SDq2rMPIHuEP+9CACVSTU=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOxCaPCuKLUncOQ8c8c4/3OodUXgAR3WjvU4uCVk4XkO#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA7zHYLiINcKCNo52qkzrmctOgzvnHIchoPMaZyVaf/Aonhb5ntaWhlnHGxOVN+ZUQQOMPIjt7zIO4FB9IYg2xw=#012 create=True mode=0644 path=/tmp/ansible.rnbd6s9d state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:58 np0005539552 python3.9[68970]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.rnbd6s9d' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:09:59 np0005539552 python3.9[69124]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.rnbd6s9d state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:59 np0005539552 systemd[1]: session-17.scope: Deactivated successfully.
Nov 29 02:09:59 np0005539552 systemd[1]: session-17.scope: Consumed 3.327s CPU time.
Nov 29 02:09:59 np0005539552 systemd-logind[788]: Session 17 logged out. Waiting for processes to exit.
Nov 29 02:09:59 np0005539552 systemd-logind[788]: Removed session 17.
Nov 29 02:10:06 np0005539552 systemd-logind[788]: New session 18 of user zuul.
Nov 29 02:10:06 np0005539552 systemd[1]: Started Session 18 of User zuul.
Nov 29 02:10:07 np0005539552 python3.9[69302]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:10:08 np0005539552 python3.9[69458]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 29 02:10:09 np0005539552 python3.9[69612]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:10:13 np0005539552 python3.9[69765]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:10:13 np0005539552 python3.9[69918]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:10:15 np0005539552 python3.9[70072]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:10:16 np0005539552 python3.9[70227]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:10:17 np0005539552 systemd[1]: session-18.scope: Deactivated successfully.
Nov 29 02:10:17 np0005539552 systemd[1]: session-18.scope: Consumed 4.414s CPU time.
Nov 29 02:10:17 np0005539552 systemd-logind[788]: Session 18 logged out. Waiting for processes to exit.
Nov 29 02:10:17 np0005539552 systemd-logind[788]: Removed session 18.
Nov 29 02:10:22 np0005539552 systemd-logind[788]: New session 19 of user zuul.
Nov 29 02:10:22 np0005539552 systemd[1]: Started Session 19 of User zuul.
Nov 29 02:10:23 np0005539552 python3.9[70405]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:10:24 np0005539552 python3.9[70561]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:10:25 np0005539552 python3.9[70645]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 02:10:28 np0005539552 python3.9[70796]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:10:29 np0005539552 python3.9[70947]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 02:10:30 np0005539552 python3.9[71097]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:10:30 np0005539552 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 02:10:30 np0005539552 python3.9[71248]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:10:31 np0005539552 systemd[1]: session-19.scope: Deactivated successfully.
Nov 29 02:10:31 np0005539552 systemd[1]: session-19.scope: Consumed 5.795s CPU time.
Nov 29 02:10:31 np0005539552 systemd-logind[788]: Session 19 logged out. Waiting for processes to exit.
Nov 29 02:10:31 np0005539552 systemd-logind[788]: Removed session 19.
Nov 29 02:10:39 np0005539552 systemd-logind[788]: New session 20 of user zuul.
Nov 29 02:10:39 np0005539552 systemd[1]: Started Session 20 of User zuul.
Nov 29 02:10:43 np0005539552 chronyd[58488]: Selected source 199.182.221.110 (pool.ntp.org)
Nov 29 02:10:47 np0005539552 python3[72016]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:10:49 np0005539552 python3[72111]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 29 02:10:50 np0005539552 python3[72138]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 02:10:51 np0005539552 python3[72164]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:10:51 np0005539552 kernel: loop: module loaded
Nov 29 02:10:51 np0005539552 kernel: loop3: detected capacity change from 0 to 14680064
Nov 29 02:10:52 np0005539552 python3[72199]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:10:52 np0005539552 lvm[72202]: PV /dev/loop3 not used.
Nov 29 02:10:52 np0005539552 lvm[72211]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 02:10:52 np0005539552 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Nov 29 02:10:52 np0005539552 lvm[72213]:  1 logical volume(s) in volume group "ceph_vg0" now active
Nov 29 02:10:52 np0005539552 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Nov 29 02:10:53 np0005539552 python3[72291]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 02:10:54 np0005539552 python3[72364]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764400253.5442903-36983-138247291288298/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:10:55 np0005539552 python3[72414]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:10:55 np0005539552 systemd[1]: Reloading.
Nov 29 02:10:55 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:10:55 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:10:55 np0005539552 systemd[1]: Starting Ceph OSD losetup...
Nov 29 02:10:55 np0005539552 bash[72455]: /dev/loop3: [64513]:4327948 (/var/lib/ceph-osd-0.img)
Nov 29 02:10:55 np0005539552 systemd[1]: Finished Ceph OSD losetup.
Nov 29 02:10:55 np0005539552 lvm[72457]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 02:10:55 np0005539552 lvm[72457]: VG ceph_vg0 finished
Nov 29 02:10:58 np0005539552 python3[72482]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:13:36 np0005539552 systemd-logind[788]: New session 21 of user ceph-admin.
Nov 29 02:13:36 np0005539552 systemd[1]: Created slice User Slice of UID 42477.
Nov 29 02:13:36 np0005539552 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 29 02:13:36 np0005539552 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 29 02:13:36 np0005539552 systemd[1]: Starting User Manager for UID 42477...
Nov 29 02:13:36 np0005539552 systemd[72536]: Queued start job for default target Main User Target.
Nov 29 02:13:36 np0005539552 systemd-logind[788]: New session 23 of user ceph-admin.
Nov 29 02:13:36 np0005539552 systemd[72536]: Created slice User Application Slice.
Nov 29 02:13:36 np0005539552 systemd[72536]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 02:13:36 np0005539552 systemd[72536]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 02:13:36 np0005539552 systemd[72536]: Reached target Paths.
Nov 29 02:13:36 np0005539552 systemd[72536]: Reached target Timers.
Nov 29 02:13:36 np0005539552 systemd[72536]: Starting D-Bus User Message Bus Socket...
Nov 29 02:13:36 np0005539552 systemd[72536]: Starting Create User's Volatile Files and Directories...
Nov 29 02:13:36 np0005539552 systemd[72536]: Listening on D-Bus User Message Bus Socket.
Nov 29 02:13:36 np0005539552 systemd[72536]: Reached target Sockets.
Nov 29 02:13:36 np0005539552 systemd[72536]: Finished Create User's Volatile Files and Directories.
Nov 29 02:13:36 np0005539552 systemd[72536]: Reached target Basic System.
Nov 29 02:13:36 np0005539552 systemd[72536]: Reached target Main User Target.
Nov 29 02:13:36 np0005539552 systemd[72536]: Startup finished in 131ms.
Nov 29 02:13:36 np0005539552 systemd[1]: Started User Manager for UID 42477.
Nov 29 02:13:36 np0005539552 systemd[1]: Started Session 21 of User ceph-admin.
Nov 29 02:13:36 np0005539552 systemd[1]: Started Session 23 of User ceph-admin.
Nov 29 02:13:36 np0005539552 systemd-logind[788]: New session 24 of user ceph-admin.
Nov 29 02:13:36 np0005539552 systemd[1]: Started Session 24 of User ceph-admin.
Nov 29 02:13:37 np0005539552 systemd-logind[788]: New session 25 of user ceph-admin.
Nov 29 02:13:37 np0005539552 systemd[1]: Started Session 25 of User ceph-admin.
Nov 29 02:13:37 np0005539552 systemd-logind[788]: New session 26 of user ceph-admin.
Nov 29 02:13:37 np0005539552 systemd[1]: Started Session 26 of User ceph-admin.
Nov 29 02:13:38 np0005539552 systemd-logind[788]: New session 27 of user ceph-admin.
Nov 29 02:13:38 np0005539552 systemd[1]: Started Session 27 of User ceph-admin.
Nov 29 02:13:38 np0005539552 systemd-logind[788]: New session 28 of user ceph-admin.
Nov 29 02:13:38 np0005539552 systemd[1]: Started Session 28 of User ceph-admin.
Nov 29 02:13:39 np0005539552 systemd-logind[788]: New session 29 of user ceph-admin.
Nov 29 02:13:39 np0005539552 systemd[1]: Started Session 29 of User ceph-admin.
Nov 29 02:13:39 np0005539552 systemd-logind[788]: New session 30 of user ceph-admin.
Nov 29 02:13:39 np0005539552 systemd[1]: Started Session 30 of User ceph-admin.
Nov 29 02:13:39 np0005539552 systemd-logind[788]: New session 31 of user ceph-admin.
Nov 29 02:13:39 np0005539552 systemd[1]: Started Session 31 of User ceph-admin.
Nov 29 02:13:40 np0005539552 systemd-logind[788]: New session 32 of user ceph-admin.
Nov 29 02:13:40 np0005539552 systemd[1]: Started Session 32 of User ceph-admin.
Nov 29 02:13:40 np0005539552 systemd-logind[788]: New session 33 of user ceph-admin.
Nov 29 02:13:40 np0005539552 systemd[1]: Started Session 33 of User ceph-admin.
Nov 29 02:13:41 np0005539552 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:14:39 np0005539552 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:14:40 np0005539552 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:14:40 np0005539552 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:14:41 np0005539552 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:14:41 np0005539552 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 73559 (sysctl)
Nov 29 02:14:41 np0005539552 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Nov 29 02:14:41 np0005539552 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Nov 29 02:14:42 np0005539552 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:14:42 np0005539552 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:14:45 np0005539552 systemd[1]: var-lib-containers-storage-overlay-compat1066876060-lower\x2dmapped.mount: Deactivated successfully.
Nov 29 02:14:58 np0005539552 podman[73837]: 2025-11-29 07:14:58.501738053 +0000 UTC m=+15.889696812 container create dc8041157dc196b05684bde12368486b68f52e900d2172330e97dcce7283ec3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_albattani, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:14:58 np0005539552 systemd[1]: Created slice Virtual Machine and Container Slice.
Nov 29 02:14:58 np0005539552 systemd[1]: Started libpod-conmon-dc8041157dc196b05684bde12368486b68f52e900d2172330e97dcce7283ec3a.scope.
Nov 29 02:14:58 np0005539552 podman[73837]: 2025-11-29 07:14:58.4867853 +0000 UTC m=+15.874744079 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:14:58 np0005539552 systemd[1]: Started libcrun container.
Nov 29 02:14:58 np0005539552 podman[73837]: 2025-11-29 07:14:58.60473328 +0000 UTC m=+15.992692059 container init dc8041157dc196b05684bde12368486b68f52e900d2172330e97dcce7283ec3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:14:58 np0005539552 podman[73837]: 2025-11-29 07:14:58.61382427 +0000 UTC m=+16.001783019 container start dc8041157dc196b05684bde12368486b68f52e900d2172330e97dcce7283ec3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_albattani, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:14:58 np0005539552 podman[73837]: 2025-11-29 07:14:58.617141069 +0000 UTC m=+16.005099848 container attach dc8041157dc196b05684bde12368486b68f52e900d2172330e97dcce7283ec3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_albattani, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 02:14:58 np0005539552 compassionate_albattani[73902]: 167 167
Nov 29 02:14:58 np0005539552 systemd[1]: libpod-dc8041157dc196b05684bde12368486b68f52e900d2172330e97dcce7283ec3a.scope: Deactivated successfully.
Nov 29 02:14:58 np0005539552 conmon[73902]: conmon dc8041157dc196b05684 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dc8041157dc196b05684bde12368486b68f52e900d2172330e97dcce7283ec3a.scope/container/memory.events
Nov 29 02:14:58 np0005539552 podman[73837]: 2025-11-29 07:14:58.621812637 +0000 UTC m=+16.009771406 container died dc8041157dc196b05684bde12368486b68f52e900d2172330e97dcce7283ec3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 02:14:58 np0005539552 systemd[1]: var-lib-containers-storage-overlay-35ba5fbb691ea1131932fd402c0c831b2318ecbeec14e093d253e6462a1d3937-merged.mount: Deactivated successfully.
Nov 29 02:14:58 np0005539552 podman[73837]: 2025-11-29 07:14:58.664992239 +0000 UTC m=+16.052950998 container remove dc8041157dc196b05684bde12368486b68f52e900d2172330e97dcce7283ec3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_albattani, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:14:58 np0005539552 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:14:58 np0005539552 systemd[1]: libpod-conmon-dc8041157dc196b05684bde12368486b68f52e900d2172330e97dcce7283ec3a.scope: Deactivated successfully.
Nov 29 02:14:58 np0005539552 podman[73925]: 2025-11-29 07:14:58.807589932 +0000 UTC m=+0.041346919 container create 1974c544b5b2971f2b410c7380eeb22772ef91c4807b916c2c2ee106b7f4bb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:14:58 np0005539552 systemd[1]: Started libpod-conmon-1974c544b5b2971f2b410c7380eeb22772ef91c4807b916c2c2ee106b7f4bb8c.scope.
Nov 29 02:14:58 np0005539552 systemd[1]: Started libcrun container.
Nov 29 02:14:58 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b04ec55d08a717a70932b256f067167a79a5c012caa898b32b9c35fe461f373/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:58 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b04ec55d08a717a70932b256f067167a79a5c012caa898b32b9c35fe461f373/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:58 np0005539552 podman[73925]: 2025-11-29 07:14:58.869254112 +0000 UTC m=+0.103011089 container init 1974c544b5b2971f2b410c7380eeb22772ef91c4807b916c2c2ee106b7f4bb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hawking, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 02:14:58 np0005539552 podman[73925]: 2025-11-29 07:14:58.877071794 +0000 UTC m=+0.110828781 container start 1974c544b5b2971f2b410c7380eeb22772ef91c4807b916c2c2ee106b7f4bb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hawking, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Nov 29 02:14:58 np0005539552 podman[73925]: 2025-11-29 07:14:58.880941969 +0000 UTC m=+0.114698976 container attach 1974c544b5b2971f2b410c7380eeb22772ef91c4807b916c2c2ee106b7f4bb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hawking, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 02:14:58 np0005539552 podman[73925]: 2025-11-29 07:14:58.792057691 +0000 UTC m=+0.025814678 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:15:00 np0005539552 gallant_hawking[73941]: [
Nov 29 02:15:00 np0005539552 gallant_hawking[73941]:    {
Nov 29 02:15:00 np0005539552 gallant_hawking[73941]:        "available": false,
Nov 29 02:15:00 np0005539552 gallant_hawking[73941]:        "ceph_device": false,
Nov 29 02:15:00 np0005539552 gallant_hawking[73941]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 02:15:00 np0005539552 gallant_hawking[73941]:        "lsm_data": {},
Nov 29 02:15:00 np0005539552 gallant_hawking[73941]:        "lvs": [],
Nov 29 02:15:00 np0005539552 gallant_hawking[73941]:        "path": "/dev/sr0",
Nov 29 02:15:00 np0005539552 gallant_hawking[73941]:        "rejected_reasons": [
Nov 29 02:15:00 np0005539552 gallant_hawking[73941]:            "Has a FileSystem",
Nov 29 02:15:00 np0005539552 gallant_hawking[73941]:            "Insufficient space (<5GB)"
Nov 29 02:15:00 np0005539552 gallant_hawking[73941]:        ],
Nov 29 02:15:00 np0005539552 gallant_hawking[73941]:        "sys_api": {
Nov 29 02:15:00 np0005539552 gallant_hawking[73941]:            "actuators": null,
Nov 29 02:15:00 np0005539552 gallant_hawking[73941]:            "device_nodes": "sr0",
Nov 29 02:15:00 np0005539552 gallant_hawking[73941]:            "devname": "sr0",
Nov 29 02:15:00 np0005539552 gallant_hawking[73941]:            "human_readable_size": "482.00 KB",
Nov 29 02:15:00 np0005539552 gallant_hawking[73941]:            "id_bus": "ata",
Nov 29 02:15:00 np0005539552 gallant_hawking[73941]:            "model": "QEMU DVD-ROM",
Nov 29 02:15:00 np0005539552 gallant_hawking[73941]:            "nr_requests": "2",
Nov 29 02:15:00 np0005539552 gallant_hawking[73941]:            "parent": "/dev/sr0",
Nov 29 02:15:00 np0005539552 gallant_hawking[73941]:            "partitions": {},
Nov 29 02:15:00 np0005539552 gallant_hawking[73941]:            "path": "/dev/sr0",
Nov 29 02:15:00 np0005539552 gallant_hawking[73941]:            "removable": "1",
Nov 29 02:15:00 np0005539552 gallant_hawking[73941]:            "rev": "2.5+",
Nov 29 02:15:00 np0005539552 gallant_hawking[73941]:            "ro": "0",
Nov 29 02:15:00 np0005539552 gallant_hawking[73941]:            "rotational": "1",
Nov 29 02:15:00 np0005539552 gallant_hawking[73941]:            "sas_address": "",
Nov 29 02:15:00 np0005539552 gallant_hawking[73941]:            "sas_device_handle": "",
Nov 29 02:15:00 np0005539552 gallant_hawking[73941]:            "scheduler_mode": "mq-deadline",
Nov 29 02:15:00 np0005539552 gallant_hawking[73941]:            "sectors": 0,
Nov 29 02:15:00 np0005539552 gallant_hawking[73941]:            "sectorsize": "2048",
Nov 29 02:15:00 np0005539552 gallant_hawking[73941]:            "size": 493568.0,
Nov 29 02:15:00 np0005539552 gallant_hawking[73941]:            "support_discard": "2048",
Nov 29 02:15:00 np0005539552 gallant_hawking[73941]:            "type": "disk",
Nov 29 02:15:00 np0005539552 gallant_hawking[73941]:            "vendor": "QEMU"
Nov 29 02:15:00 np0005539552 gallant_hawking[73941]:        }
Nov 29 02:15:00 np0005539552 gallant_hawking[73941]:    }
Nov 29 02:15:00 np0005539552 gallant_hawking[73941]: ]
Nov 29 02:15:00 np0005539552 systemd[1]: libpod-1974c544b5b2971f2b410c7380eeb22772ef91c4807b916c2c2ee106b7f4bb8c.scope: Deactivated successfully.
Nov 29 02:15:00 np0005539552 systemd[1]: libpod-1974c544b5b2971f2b410c7380eeb22772ef91c4807b916c2c2ee106b7f4bb8c.scope: Consumed 1.188s CPU time.
Nov 29 02:15:00 np0005539552 podman[73925]: 2025-11-29 07:15:00.051404158 +0000 UTC m=+1.285161145 container died 1974c544b5b2971f2b410c7380eeb22772ef91c4807b916c2c2ee106b7f4bb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 02:15:00 np0005539552 systemd[1]: var-lib-containers-storage-overlay-4b04ec55d08a717a70932b256f067167a79a5c012caa898b32b9c35fe461f373-merged.mount: Deactivated successfully.
Nov 29 02:15:00 np0005539552 podman[73925]: 2025-11-29 07:15:00.925908674 +0000 UTC m=+2.159665681 container remove 1974c544b5b2971f2b410c7380eeb22772ef91c4807b916c2c2ee106b7f4bb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hawking, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 02:15:00 np0005539552 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:15:00 np0005539552 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:15:00 np0005539552 systemd[1]: libpod-conmon-1974c544b5b2971f2b410c7380eeb22772ef91c4807b916c2c2ee106b7f4bb8c.scope: Deactivated successfully.
Nov 29 02:15:05 np0005539552 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:15:05 np0005539552 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:15:05 np0005539552 podman[76770]: 2025-11-29 07:15:05.766450105 +0000 UTC m=+0.037895481 container create 188137a29654d9aec49bdd609174214800b5fa4bf3b96d73ba858fafe0f6eefa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_sinoussi, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 02:15:05 np0005539552 systemd[1]: Started libpod-conmon-188137a29654d9aec49bdd609174214800b5fa4bf3b96d73ba858fafe0f6eefa.scope.
Nov 29 02:15:05 np0005539552 systemd[1]: Started libcrun container.
Nov 29 02:15:05 np0005539552 podman[76770]: 2025-11-29 07:15:05.837909043 +0000 UTC m=+0.109354499 container init 188137a29654d9aec49bdd609174214800b5fa4bf3b96d73ba858fafe0f6eefa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_sinoussi, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:15:05 np0005539552 podman[76770]: 2025-11-29 07:15:05.751167529 +0000 UTC m=+0.022612925 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:15:05 np0005539552 podman[76770]: 2025-11-29 07:15:05.849847962 +0000 UTC m=+0.121293348 container start 188137a29654d9aec49bdd609174214800b5fa4bf3b96d73ba858fafe0f6eefa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:15:05 np0005539552 podman[76770]: 2025-11-29 07:15:05.854576205 +0000 UTC m=+0.126021641 container attach 188137a29654d9aec49bdd609174214800b5fa4bf3b96d73ba858fafe0f6eefa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:15:05 np0005539552 serene_sinoussi[76786]: 167 167
Nov 29 02:15:05 np0005539552 systemd[1]: libpod-188137a29654d9aec49bdd609174214800b5fa4bf3b96d73ba858fafe0f6eefa.scope: Deactivated successfully.
Nov 29 02:15:05 np0005539552 podman[76770]: 2025-11-29 07:15:05.857062309 +0000 UTC m=+0.128507725 container died 188137a29654d9aec49bdd609174214800b5fa4bf3b96d73ba858fafe0f6eefa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_sinoussi, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 02:15:06 np0005539552 podman[76770]: 2025-11-29 07:15:06.003052925 +0000 UTC m=+0.274498331 container remove 188137a29654d9aec49bdd609174214800b5fa4bf3b96d73ba858fafe0f6eefa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_sinoussi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 02:15:06 np0005539552 systemd[1]: libpod-conmon-188137a29654d9aec49bdd609174214800b5fa4bf3b96d73ba858fafe0f6eefa.scope: Deactivated successfully.
Nov 29 02:15:06 np0005539552 podman[76807]: 2025-11-29 07:15:06.106760328 +0000 UTC m=+0.064781887 container create 55441eb75effa06a4e891f956ea76b824f995002a867f8e692d910086adae459 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 02:15:06 np0005539552 systemd[1]: Started libpod-conmon-55441eb75effa06a4e891f956ea76b824f995002a867f8e692d910086adae459.scope.
Nov 29 02:15:06 np0005539552 podman[76807]: 2025-11-29 07:15:06.081522055 +0000 UTC m=+0.039543694 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:15:06 np0005539552 systemd[1]: Started libcrun container.
Nov 29 02:15:06 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2f045f5643e2d0299ca5d90fb0a089538227c6d6ebcbccfafc5177cb79374e8/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:06 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2f045f5643e2d0299ca5d90fb0a089538227c6d6ebcbccfafc5177cb79374e8/merged/tmp/config supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:06 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2f045f5643e2d0299ca5d90fb0a089538227c6d6ebcbccfafc5177cb79374e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:06 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2f045f5643e2d0299ca5d90fb0a089538227c6d6ebcbccfafc5177cb79374e8/merged/var/lib/ceph/mon/ceph-compute-2 supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:06 np0005539552 podman[76807]: 2025-11-29 07:15:06.214392032 +0000 UTC m=+0.172413621 container init 55441eb75effa06a4e891f956ea76b824f995002a867f8e692d910086adae459 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_hopper, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:15:06 np0005539552 podman[76807]: 2025-11-29 07:15:06.224635417 +0000 UTC m=+0.182656976 container start 55441eb75effa06a4e891f956ea76b824f995002a867f8e692d910086adae459 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_hopper, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:15:06 np0005539552 podman[76807]: 2025-11-29 07:15:06.228819466 +0000 UTC m=+0.186841045 container attach 55441eb75effa06a4e891f956ea76b824f995002a867f8e692d910086adae459 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_hopper, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 02:15:06 np0005539552 systemd[1]: libpod-55441eb75effa06a4e891f956ea76b824f995002a867f8e692d910086adae459.scope: Deactivated successfully.
Nov 29 02:15:06 np0005539552 podman[76807]: 2025-11-29 07:15:06.31558081 +0000 UTC m=+0.273602379 container died 55441eb75effa06a4e891f956ea76b824f995002a867f8e692d910086adae459 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_hopper, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:15:06 np0005539552 podman[76807]: 2025-11-29 07:15:06.469227284 +0000 UTC m=+0.427248883 container remove 55441eb75effa06a4e891f956ea76b824f995002a867f8e692d910086adae459 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_hopper, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 02:15:06 np0005539552 systemd[1]: libpod-conmon-55441eb75effa06a4e891f956ea76b824f995002a867f8e692d910086adae459.scope: Deactivated successfully.
Nov 29 02:15:06 np0005539552 systemd[1]: Reloading.
Nov 29 02:15:06 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:15:06 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:15:06 np0005539552 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:15:06 np0005539552 systemd[1]: Reloading.
Nov 29 02:15:06 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:15:06 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:15:07 np0005539552 systemd[1]: Reached target All Ceph clusters and services.
Nov 29 02:15:07 np0005539552 systemd[1]: Reloading.
Nov 29 02:15:07 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:15:07 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:15:07 np0005539552 systemd[1]: Reached target Ceph cluster b66774a7-56d9-5535-bd8c-681234404870.
Nov 29 02:15:07 np0005539552 systemd[1]: Reloading.
Nov 29 02:15:07 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:15:07 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:15:07 np0005539552 systemd[1]: Reloading.
Nov 29 02:15:07 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:15:07 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:15:07 np0005539552 systemd[1]: Created slice Slice /system/ceph-b66774a7-56d9-5535-bd8c-681234404870.
Nov 29 02:15:07 np0005539552 systemd[1]: Reached target System Time Set.
Nov 29 02:15:07 np0005539552 systemd[1]: Reached target System Time Synchronized.
Nov 29 02:15:07 np0005539552 systemd[1]: Starting Ceph mon.compute-2 for b66774a7-56d9-5535-bd8c-681234404870...
Nov 29 02:15:07 np0005539552 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:15:07 np0005539552 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:15:08 np0005539552 podman[77101]: 2025-11-29 07:15:08.11935947 +0000 UTC m=+0.050770394 container create 2e449e743ed8b622f7d948671af16b72c407124db8f8583030f534a9757aa05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 02:15:08 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c361465d20a661f3e8df67c1d74e12e0526f246314ae3e9e63ec4b40d009441c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:08 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c361465d20a661f3e8df67c1d74e12e0526f246314ae3e9e63ec4b40d009441c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:08 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c361465d20a661f3e8df67c1d74e12e0526f246314ae3e9e63ec4b40d009441c/merged/var/lib/ceph/mon/ceph-compute-2 supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:08 np0005539552 podman[77101]: 2025-11-29 07:15:08.094924468 +0000 UTC m=+0.026335432 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:15:08 np0005539552 podman[77101]: 2025-11-29 07:15:08.198772325 +0000 UTC m=+0.130183279 container init 2e449e743ed8b622f7d948671af16b72c407124db8f8583030f534a9757aa05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-2, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:15:08 np0005539552 podman[77101]: 2025-11-29 07:15:08.204220426 +0000 UTC m=+0.135631360 container start 2e449e743ed8b622f7d948671af16b72c407124db8f8583030f534a9757aa05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: pidfile_write: ignore empty --pid-file
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: load: jerasure load: lrc 
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: RocksDB version: 7.9.2
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: Git sha 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: DB SUMMARY
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: DB Session ID:  AE0I8NBVSMFLPQVY2DCL
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: CURRENT file:  CURRENT
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-2/store.db dir, Total Num: 0, files: 
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-2/store.db: 000004.log size: 511 ; 
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                         Options.error_if_exists: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                       Options.create_if_missing: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                                     Options.env: 0x560bf00a5c40
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                                Options.info_log: 0x560bf13c4fc0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                              Options.statistics: (nil)
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                               Options.use_fsync: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                              Options.db_log_dir: 
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                                 Options.wal_dir: 
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                    Options.write_buffer_manager: 0x560bf13d4b40
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                  Options.unordered_write: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                               Options.row_cache: None
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                              Options.wal_filter: None
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:             Options.two_write_queues: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:             Options.wal_compression: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:             Options.atomic_flush: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:             Options.max_background_jobs: 2
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:             Options.max_background_compactions: -1
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:             Options.max_subcompactions: 1
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:             Options.max_total_wal_size: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                          Options.max_open_files: -1
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:       Options.compaction_readahead_size: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: Compression algorithms supported:
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: 	kZSTD supported: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: 	kXpressCompression supported: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: 	kBZip2Compression supported: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: 	kLZ4Compression supported: 1
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: 	kZlibCompression supported: 1
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: 	kSnappyCompression supported: 1
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-2/store.db/MANIFEST-000005
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:           Options.merge_operator: 
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:        Options.compaction_filter: None
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560bf13c4c00)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560bf13bd1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:        Options.write_buffer_size: 33554432
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:  Options.max_write_buffer_number: 2
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:          Options.compression: NoCompression
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:             Options.num_levels: 7
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-2/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: fb0bb604-2277-410b-a16a-74e952f23481
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400508244292, "job": 1, "event": "recovery_started", "wal_files": [4]}
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400508288193, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1648, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400508288362, "job": 1, "event": "recovery_finished"}
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Nov 29 02:15:08 np0005539552 bash[77101]: 2e449e743ed8b622f7d948671af16b72c407124db8f8583030f534a9757aa05d
Nov 29 02:15:08 np0005539552 systemd[1]: Started Ceph mon.compute-2 for b66774a7-56d9-5535-bd8c-681234404870.
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x560bf13e6e00
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: DB pointer 0x560bf1470000
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: mon.compute-2 does not exist in monmap, will attempt to join an existing cluster
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.044       0      0       0.0       0.0#012 Sum      1/0    1.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.044       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.044       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.044       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560bf13bd1f0#2 capacity: 512.00 MB usage: 0.86 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1,0.64 KB,0.00012219%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: using public_addr v2:192.168.122.102:0/0 -> [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: starting mon.compute-2 rank -1 at public addrs [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] at bind addrs [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-2 fsid b66774a7-56d9-5535-bd8c-681234404870
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: mon.compute-2@-1(???) e0 preinit fsid b66774a7-56d9-5535-bd8c-681234404870
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: mon.compute-2@-1(synchronizing).mds e1 new map
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: mon.compute-2@-1(synchronizing).mds e1 print_map#012e1#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: -1#012 #012No filesystems configured
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: mon.compute-2@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: mon.compute-2@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: mon.compute-2@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: mon.compute-2@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: mon.compute-2@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: mon.compute-2@-1(synchronizing).osd e4 e4: 1 total, 0 up, 1 in
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: mon.compute-2@-1(synchronizing).osd e5 e5: 2 total, 0 up, 2 in
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: mon.compute-2@-1(synchronizing).osd e6 e6: 2 total, 0 up, 2 in
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: mon.compute-2@-1(synchronizing).osd e7 e7: 2 total, 0 up, 2 in
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: mon.compute-2@-1(synchronizing).osd e8 e8: 2 total, 0 up, 2 in
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: mon.compute-2@-1(synchronizing).osd e9 e9: 2 total, 0 up, 2 in
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: mon.compute-2@-1(synchronizing).osd e10 e10: 2 total, 1 up, 2 in
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: mon.compute-2@-1(synchronizing).osd e11 e11: 2 total, 2 up, 2 in
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: mon.compute-2@-1(synchronizing).osd e12 e12: 2 total, 2 up, 2 in
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: mon.compute-2@-1(synchronizing).osd e13 e13: 2 total, 2 up, 2 in
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: mon.compute-2@-1(synchronizing).osd e14 e14: 2 total, 2 up, 2 in
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: mon.compute-2@-1(synchronizing).osd e15 e15: 2 total, 2 up, 2 in
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: mon.compute-2@-1(synchronizing).osd e16 e16: 2 total, 2 up, 2 in
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: mon.compute-2@-1(synchronizing).osd e17 e17: 2 total, 2 up, 2 in
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: mon.compute-2@-1(synchronizing).osd e18 e18: 2 total, 2 up, 2 in
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: mon.compute-2@-1(synchronizing).osd e19 e19: 2 total, 2 up, 2 in
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: mon.compute-2@-1(synchronizing).osd e20 e20: 2 total, 2 up, 2 in
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: mon.compute-2@-1(synchronizing).osd e21 e21: 2 total, 2 up, 2 in
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: mon.compute-2@-1(synchronizing).osd e22 e22: 2 total, 2 up, 2 in
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: mon.compute-2@-1(synchronizing).osd e23 e23: 2 total, 2 up, 2 in
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: mon.compute-2@-1(synchronizing).osd e24 e24: 2 total, 2 up, 2 in
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: mon.compute-2@-1(synchronizing).osd e25 e25: 2 total, 2 up, 2 in
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: mon.compute-2@-1(synchronizing).osd e26 e26: 2 total, 2 up, 2 in
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: mon.compute-2@-1(synchronizing).osd e27 e27: 2 total, 2 up, 2 in
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: mon.compute-2@-1(synchronizing).osd e28 e28: 2 total, 2 up, 2 in
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: mon.compute-2@-1(synchronizing).osd e29 e29: 2 total, 2 up, 2 in
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: mon.compute-2@-1(synchronizing).osd e29 crush map has features 3314933000852226048, adjusting msgr requires
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: mon.compute-2@-1(synchronizing).osd e29 crush map has features 288514051259236352, adjusting msgr requires
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: mon.compute-2@-1(synchronizing).osd e29 crush map has features 288514051259236352, adjusting msgr requires
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: mon.compute-2@-1(synchronizing).osd e29 crush map has features 288514051259236352, adjusting msgr requires
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: from='client.? 192.168.122.100:0/1619350831' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: from='client.? 192.168.122.100:0/1619350831' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: from='client.? 192.168.122.100:0/1899703371' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: from='client.? 192.168.122.100:0/1899703371' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: from='client.? 192.168.122.100:0/3603270334' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: from='client.? 192.168.122.100:0/3603270334' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: from='client.? 192.168.122.100:0/1623730272' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: Updating compute-2:/etc/ceph/ceph.conf
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: from='client.? 192.168.122.100:0/1623730272' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: Updating compute-2:/var/lib/ceph/b66774a7-56d9-5535-bd8c-681234404870/config/ceph.conf
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: from='client.? 192.168.122.100:0/3234575411' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: from='client.? 192.168.122.100:0/3234575411' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: Updating compute-2:/var/lib/ceph/b66774a7-56d9-5535-bd8c-681234404870/config/ceph.client.admin.keyring
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: from='client.? 192.168.122.100:0/86337951' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: Deploying daemon mon.compute-2 on compute-2
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: from='client.? 192.168.122.100:0/86337951' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Nov 29 02:15:08 np0005539552 ceph-mon[77121]: mon.compute-2@-1(synchronizing).paxosservice(auth 1..7) refresh upgraded, format 0 -> 3
Nov 29 02:15:10 np0005539552 ceph-mon[77121]: mon.compute-2@-1(probing) e1  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Nov 29 02:15:10 np0005539552 ceph-mon[77121]: mon.compute-2@-1(probing) e1  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Nov 29 02:15:10 np0005539552 ceph-mon[77121]: mon.compute-2@-1(probing) e2  my rank is now 1 (was -1)
Nov 29 02:15:10 np0005539552 ceph-mon[77121]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Nov 29 02:15:10 np0005539552 ceph-mon[77121]: paxos.1).electionLogic(0) init, first boot, initializing epoch at 1 
Nov 29 02:15:10 np0005539552 ceph-mon[77121]: mon.compute-2@1(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:15:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Nov 29 02:15:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:15:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e2 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Nov 29 02:15:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e2 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Nov 29 02:15:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:15:13 np0005539552 ceph-mon[77121]: mgrc update_daemon_metadata mon.compute-2 metadata {addrs=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-2,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,created_at=2025-11-29T07:15:06.261167Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-2,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864316,os=Linux}
Nov 29 02:15:14 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Nov 29 02:15:14 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e30 e30: 2 total, 2 up, 2 in
Nov 29 02:15:14 np0005539552 ceph-mon[77121]: Deploying daemon mon.compute-1 on compute-1
Nov 29 02:15:14 np0005539552 ceph-mon[77121]: mon.compute-0 calling monitor election
Nov 29 02:15:14 np0005539552 ceph-mon[77121]: from='client.? 192.168.122.100:0/206725884' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 29 02:15:14 np0005539552 ceph-mon[77121]: mon.compute-2 calling monitor election
Nov 29 02:15:14 np0005539552 ceph-mon[77121]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 29 02:15:14 np0005539552 ceph-mon[77121]: Health detail: HEALTH_WARN 4 pool(s) do not have an application enabled
Nov 29 02:15:14 np0005539552 ceph-mon[77121]: [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled
Nov 29 02:15:14 np0005539552 ceph-mon[77121]:    application not enabled on pool 'backups'
Nov 29 02:15:14 np0005539552 ceph-mon[77121]:    application not enabled on pool 'images'
Nov 29 02:15:14 np0005539552 ceph-mon[77121]:    application not enabled on pool 'cephfs.cephfs.meta'
Nov 29 02:15:14 np0005539552 ceph-mon[77121]:    application not enabled on pool 'cephfs.cephfs.data'
Nov 29 02:15:14 np0005539552 ceph-mon[77121]:    use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Nov 29 02:15:14 np0005539552 ceph-mon[77121]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Nov 29 02:15:14 np0005539552 ceph-mon[77121]: paxos.1).electionLogic(10) init, last seen epoch 10
Nov 29 02:15:14 np0005539552 ceph-mon[77121]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:15:15 np0005539552 ceph-mon[77121]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:15:20 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:15:20 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:20 np0005539552 ceph-mon[77121]: mon.compute-0 calling monitor election
Nov 29 02:15:20 np0005539552 ceph-mon[77121]: mon.compute-2 calling monitor election
Nov 29 02:15:20 np0005539552 ceph-mon[77121]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 29 02:15:20 np0005539552 ceph-mon[77121]: Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Nov 29 02:15:21 np0005539552 podman[77300]: 2025-11-29 07:15:21.022335836 +0000 UTC m=+0.044293796 container create 4d6a033406057ea71999c356ef62b37bfb70889b70120a7b909233a28cbde1e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sutherland, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 02:15:21 np0005539552 systemd[1]: Started libpod-conmon-4d6a033406057ea71999c356ef62b37bfb70889b70120a7b909233a28cbde1e1.scope.
Nov 29 02:15:21 np0005539552 podman[77300]: 2025-11-29 07:15:20.999946167 +0000 UTC m=+0.021904147 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:15:21 np0005539552 systemd[1]: Started libcrun container.
Nov 29 02:15:21 np0005539552 podman[77300]: 2025-11-29 07:15:21.117296123 +0000 UTC m=+0.139254133 container init 4d6a033406057ea71999c356ef62b37bfb70889b70120a7b909233a28cbde1e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 02:15:21 np0005539552 podman[77300]: 2025-11-29 07:15:21.124453788 +0000 UTC m=+0.146411748 container start 4d6a033406057ea71999c356ef62b37bfb70889b70120a7b909233a28cbde1e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:15:21 np0005539552 podman[77300]: 2025-11-29 07:15:21.128647587 +0000 UTC m=+0.150605597 container attach 4d6a033406057ea71999c356ef62b37bfb70889b70120a7b909233a28cbde1e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sutherland, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 02:15:21 np0005539552 elated_sutherland[77316]: 167 167
Nov 29 02:15:21 np0005539552 systemd[1]: libpod-4d6a033406057ea71999c356ef62b37bfb70889b70120a7b909233a28cbde1e1.scope: Deactivated successfully.
Nov 29 02:15:21 np0005539552 conmon[77316]: conmon 4d6a033406057ea71999 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4d6a033406057ea71999c356ef62b37bfb70889b70120a7b909233a28cbde1e1.scope/container/memory.events
Nov 29 02:15:21 np0005539552 podman[77300]: 2025-11-29 07:15:21.131931011 +0000 UTC m=+0.153889011 container died 4d6a033406057ea71999c356ef62b37bfb70889b70120a7b909233a28cbde1e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 02:15:21 np0005539552 systemd[1]: var-lib-containers-storage-overlay-6ed244830dda5952f3cc594f8441a789f21a490cfce77d47536881367543bb29-merged.mount: Deactivated successfully.
Nov 29 02:15:21 np0005539552 podman[77300]: 2025-11-29 07:15:21.172802009 +0000 UTC m=+0.194759969 container remove 4d6a033406057ea71999c356ef62b37bfb70889b70120a7b909233a28cbde1e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sutherland, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 02:15:21 np0005539552 systemd[1]: libpod-conmon-4d6a033406057ea71999c356ef62b37bfb70889b70120a7b909233a28cbde1e1.scope: Deactivated successfully.
Nov 29 02:15:21 np0005539552 systemd[1]: Reloading.
Nov 29 02:15:21 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:15:21 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:15:21 np0005539552 systemd[1]: Reloading.
Nov 29 02:15:21 np0005539552 ceph-mon[77121]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Nov 29 02:15:21 np0005539552 ceph-mon[77121]: paxos.1).electionLogic(12) init, last seen epoch 12
Nov 29 02:15:21 np0005539552 ceph-mon[77121]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:15:21 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:15:21 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:15:21 np0005539552 systemd[1]: Starting Ceph mgr.compute-2.zfrvoq for b66774a7-56d9-5535-bd8c-681234404870...
Nov 29 02:15:22 np0005539552 podman[77460]: 2025-11-29 07:15:22.063603552 +0000 UTC m=+0.045706884 container create 41f46c9921edb4caf4d30cdb41a9a2e2b1e83544a135db217157c0f688707a05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-2-zfrvoq, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:15:22 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53969680043bf663f8520dd2c427cb31cd7808e67a1e2d16cea09616f5cf89c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:22 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53969680043bf663f8520dd2c427cb31cd7808e67a1e2d16cea09616f5cf89c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:22 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53969680043bf663f8520dd2c427cb31cd7808e67a1e2d16cea09616f5cf89c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:22 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53969680043bf663f8520dd2c427cb31cd7808e67a1e2d16cea09616f5cf89c1/merged/var/lib/ceph/mgr/ceph-compute-2.zfrvoq supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:22 np0005539552 podman[77460]: 2025-11-29 07:15:22.124753074 +0000 UTC m=+0.106856426 container init 41f46c9921edb4caf4d30cdb41a9a2e2b1e83544a135db217157c0f688707a05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-2-zfrvoq, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 02:15:22 np0005539552 podman[77460]: 2025-11-29 07:15:22.1296356 +0000 UTC m=+0.111738922 container start 41f46c9921edb4caf4d30cdb41a9a2e2b1e83544a135db217157c0f688707a05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-2-zfrvoq, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 02:15:22 np0005539552 bash[77460]: 41f46c9921edb4caf4d30cdb41a9a2e2b1e83544a135db217157c0f688707a05
Nov 29 02:15:22 np0005539552 podman[77460]: 2025-11-29 07:15:22.041053118 +0000 UTC m=+0.023156460 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:15:22 np0005539552 systemd[1]: Started Ceph mgr.compute-2.zfrvoq for b66774a7-56d9-5535-bd8c-681234404870.
Nov 29 02:15:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Nov 29 02:15:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:15:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_auth_request failed to assign global_id
Nov 29 02:15:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_auth_request failed to assign global_id
Nov 29 02:15:22 np0005539552 ceph-mgr[77480]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 02:15:22 np0005539552 ceph-mgr[77480]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 29 02:15:22 np0005539552 ceph-mgr[77480]: pidfile_write: ignore empty --pid-file
Nov 29 02:15:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_auth_request failed to assign global_id
Nov 29 02:15:22 np0005539552 ceph-mgr[77480]: mgr[py] Loading python module 'alerts'
Nov 29 02:15:23 np0005539552 ceph-mgr[77480]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 29 02:15:23 np0005539552 ceph-mgr[77480]: mgr[py] Loading python module 'balancer'
Nov 29 02:15:23 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-2-zfrvoq[77476]: 2025-11-29T07:15:23.224+0000 7f52117e0140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 29 02:15:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e30 _set_new_cache_sizes cache_size:1019933567 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:15:23 np0005539552 ceph-mgr[77480]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 29 02:15:23 np0005539552 ceph-mgr[77480]: mgr[py] Loading python module 'cephadm'
Nov 29 02:15:23 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-2-zfrvoq[77476]: 2025-11-29T07:15:23.510+0000 7f52117e0140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 29 02:15:25 np0005539552 ceph-mgr[77480]: mgr[py] Loading python module 'crash'
Nov 29 02:15:25 np0005539552 ceph-mgr[77480]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 29 02:15:25 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-2-zfrvoq[77476]: 2025-11-29T07:15:25.707+0000 7f52117e0140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 29 02:15:25 np0005539552 ceph-mgr[77480]: mgr[py] Loading python module 'dashboard'
Nov 29 02:15:27 np0005539552 ceph-mgr[77480]: mgr[py] Loading python module 'devicehealth'
Nov 29 02:15:27 np0005539552 ceph-mgr[77480]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 29 02:15:27 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-2-zfrvoq[77476]: 2025-11-29T07:15:27.481+0000 7f52117e0140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 29 02:15:27 np0005539552 ceph-mgr[77480]: mgr[py] Loading python module 'diskprediction_local'
Nov 29 02:15:27 np0005539552 ceph-mon[77121]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 02:15:27 np0005539552 ceph-mon[77121]: Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2; 4 pool(s) do not have an application enabled
Nov 29 02:15:27 np0005539552 ceph-mon[77121]: [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Nov 29 02:15:27 np0005539552 ceph-mon[77121]:    mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Nov 29 02:15:27 np0005539552 ceph-mon[77121]: [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled
Nov 29 02:15:27 np0005539552 ceph-mon[77121]:    application not enabled on pool 'backups'
Nov 29 02:15:27 np0005539552 ceph-mon[77121]:    application not enabled on pool 'images'
Nov 29 02:15:27 np0005539552 ceph-mon[77121]:    application not enabled on pool 'cephfs.cephfs.meta'
Nov 29 02:15:27 np0005539552 ceph-mon[77121]:    application not enabled on pool 'cephfs.cephfs.data'
Nov 29 02:15:27 np0005539552 ceph-mon[77121]:    use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Nov 29 02:15:27 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:27 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:27 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.zfrvoq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 02:15:27 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.zfrvoq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 29 02:15:27 np0005539552 ceph-mon[77121]: Deploying daemon mgr.compute-2.zfrvoq on compute-2
Nov 29 02:15:27 np0005539552 ceph-mon[77121]: from='client.? 192.168.122.100:0/665227318' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 29 02:15:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:15:28 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-2-zfrvoq[77476]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 29 02:15:28 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-2-zfrvoq[77476]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 29 02:15:28 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-2-zfrvoq[77476]:  from numpy import show_config as show_numpy_config
Nov 29 02:15:28 np0005539552 ceph-mgr[77480]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 29 02:15:28 np0005539552 ceph-mgr[77480]: mgr[py] Loading python module 'influx'
Nov 29 02:15:28 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-2-zfrvoq[77476]: 2025-11-29T07:15:28.061+0000 7f52117e0140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 29 02:15:28 np0005539552 ceph-mgr[77480]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 29 02:15:28 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-2-zfrvoq[77476]: 2025-11-29T07:15:28.302+0000 7f52117e0140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 29 02:15:28 np0005539552 ceph-mgr[77480]: mgr[py] Loading python module 'insights'
Nov 29 02:15:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e30 _set_new_cache_sizes cache_size:1020053241 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:15:28 np0005539552 ceph-mgr[77480]: mgr[py] Loading python module 'iostat'
Nov 29 02:15:28 np0005539552 ceph-mgr[77480]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 29 02:15:28 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-2-zfrvoq[77476]: 2025-11-29T07:15:28.856+0000 7f52117e0140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 29 02:15:28 np0005539552 ceph-mgr[77480]: mgr[py] Loading python module 'k8sevents'
Nov 29 02:15:29 np0005539552 ceph-mon[77121]: mon.compute-1 calling monitor election
Nov 29 02:15:29 np0005539552 ceph-mon[77121]: mon.compute-1 calling monitor election
Nov 29 02:15:29 np0005539552 ceph-mon[77121]: mon.compute-2 calling monitor election
Nov 29 02:15:29 np0005539552 ceph-mon[77121]: from='client.? 192.168.122.100:0/665227318' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 29 02:15:29 np0005539552 ceph-mon[77121]: mon.compute-0 calling monitor election
Nov 29 02:15:29 np0005539552 ceph-mon[77121]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:15:29 np0005539552 ceph-mon[77121]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Nov 29 02:15:29 np0005539552 ceph-mon[77121]: Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled
Nov 29 02:15:29 np0005539552 ceph-mon[77121]: [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled
Nov 29 02:15:29 np0005539552 ceph-mon[77121]:    application not enabled on pool 'images'
Nov 29 02:15:29 np0005539552 ceph-mon[77121]:    application not enabled on pool 'cephfs.cephfs.meta'
Nov 29 02:15:29 np0005539552 ceph-mon[77121]:    application not enabled on pool 'cephfs.cephfs.data'
Nov 29 02:15:29 np0005539552 ceph-mon[77121]:    use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Nov 29 02:15:29 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:29 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:29 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:29 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:29 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.fchyan", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 02:15:29 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.fchyan", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 29 02:15:29 np0005539552 ceph-mon[77121]: Deploying daemon mgr.compute-1.fchyan on compute-1
Nov 29 02:15:29 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e31 e31: 2 total, 2 up, 2 in
Nov 29 02:15:30 np0005539552 podman[77657]: 2025-11-29 07:15:30.479207215 +0000 UTC m=+0.023746736 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:15:30 np0005539552 ceph-mgr[77480]: mgr[py] Loading python module 'localpool'
Nov 29 02:15:30 np0005539552 ceph-mgr[77480]: mgr[py] Loading python module 'mds_autoscaler'
Nov 29 02:15:31 np0005539552 podman[77657]: 2025-11-29 07:15:31.394679266 +0000 UTC m=+0.939218797 container create ac5182d8d9f9d1d67e39c124b35f940a700f7edb1150e674838b6b89a09a9fc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_brahmagupta, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:15:31 np0005539552 ceph-mgr[77480]: mgr[py] Loading python module 'mirroring'
Nov 29 02:15:31 np0005539552 ceph-mon[77121]: from='client.? 192.168.122.100:0/665227318' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 29 02:15:31 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:31 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:31 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:31 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:31 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 02:15:31 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 29 02:15:31 np0005539552 ceph-mgr[77480]: mgr[py] Loading python module 'nfs'
Nov 29 02:15:31 np0005539552 systemd[1]: Started libpod-conmon-ac5182d8d9f9d1d67e39c124b35f940a700f7edb1150e674838b6b89a09a9fc4.scope.
Nov 29 02:15:31 np0005539552 systemd[1]: Started libcrun container.
Nov 29 02:15:32 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e32 e32: 2 total, 2 up, 2 in
Nov 29 02:15:32 np0005539552 ceph-mon[77121]: Deploying daemon crash.compute-2 on compute-2
Nov 29 02:15:32 np0005539552 ceph-mon[77121]: from='client.? 192.168.122.100:0/790893646' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 29 02:15:32 np0005539552 podman[77657]: 2025-11-29 07:15:32.030799722 +0000 UTC m=+1.575339263 container init ac5182d8d9f9d1d67e39c124b35f940a700f7edb1150e674838b6b89a09a9fc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_brahmagupta, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 02:15:32 np0005539552 podman[77657]: 2025-11-29 07:15:32.039310222 +0000 UTC m=+1.583849763 container start ac5182d8d9f9d1d67e39c124b35f940a700f7edb1150e674838b6b89a09a9fc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_brahmagupta, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:15:32 np0005539552 podman[77657]: 2025-11-29 07:15:32.045752468 +0000 UTC m=+1.590291979 container attach ac5182d8d9f9d1d67e39c124b35f940a700f7edb1150e674838b6b89a09a9fc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 02:15:32 np0005539552 brave_brahmagupta[77673]: 167 167
Nov 29 02:15:32 np0005539552 systemd[1]: libpod-ac5182d8d9f9d1d67e39c124b35f940a700f7edb1150e674838b6b89a09a9fc4.scope: Deactivated successfully.
Nov 29 02:15:32 np0005539552 podman[77657]: 2025-11-29 07:15:32.048363796 +0000 UTC m=+1.592903307 container died ac5182d8d9f9d1d67e39c124b35f940a700f7edb1150e674838b6b89a09a9fc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:15:32 np0005539552 systemd[1]: var-lib-containers-storage-overlay-4a3771fdf5f2fe896fb0479bfd55651bafe10e770d0b5b0c5aaeb9901c0f5b64-merged.mount: Deactivated successfully.
Nov 29 02:15:32 np0005539552 podman[77657]: 2025-11-29 07:15:32.129050563 +0000 UTC m=+1.673590104 container remove ac5182d8d9f9d1d67e39c124b35f940a700f7edb1150e674838b6b89a09a9fc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_brahmagupta, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 02:15:32 np0005539552 systemd[1]: libpod-conmon-ac5182d8d9f9d1d67e39c124b35f940a700f7edb1150e674838b6b89a09a9fc4.scope: Deactivated successfully.
Nov 29 02:15:32 np0005539552 systemd[1]: Reloading.
Nov 29 02:15:32 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:15:32 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:15:32 np0005539552 systemd[1]: Reloading.
Nov 29 02:15:32 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:15:32 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:15:32 np0005539552 ceph-mgr[77480]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 29 02:15:32 np0005539552 ceph-mgr[77480]: mgr[py] Loading python module 'orchestrator'
Nov 29 02:15:32 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-2-zfrvoq[77476]: 2025-11-29T07:15:32.587+0000 7f52117e0140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 29 02:15:32 np0005539552 systemd[1]: Starting Ceph crash.compute-2 for b66774a7-56d9-5535-bd8c-681234404870...
Nov 29 02:15:33 np0005539552 podman[77818]: 2025-11-29 07:15:33.021885379 +0000 UTC m=+0.094579877 container create 805bc584ce25c3ddefc2d4a448c39d0660ae1fe46bd3d8c54cf9ff99c167bf49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:15:33 np0005539552 podman[77818]: 2025-11-29 07:15:32.948912072 +0000 UTC m=+0.021606580 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:15:33 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5a1732d77e22b0ef81983cd398793bcde15c4e2e5a75b2a8d3ae81971c88783/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:33 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5a1732d77e22b0ef81983cd398793bcde15c4e2e5a75b2a8d3ae81971c88783/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:33 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5a1732d77e22b0ef81983cd398793bcde15c4e2e5a75b2a8d3ae81971c88783/merged/etc/ceph/ceph.client.crash.compute-2.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:33 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5a1732d77e22b0ef81983cd398793bcde15c4e2e5a75b2a8d3ae81971c88783/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:33 np0005539552 podman[77818]: 2025-11-29 07:15:33.1312979 +0000 UTC m=+0.203992388 container init 805bc584ce25c3ddefc2d4a448c39d0660ae1fe46bd3d8c54cf9ff99c167bf49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Nov 29 02:15:33 np0005539552 podman[77818]: 2025-11-29 07:15:33.136924125 +0000 UTC m=+0.209618593 container start 805bc584ce25c3ddefc2d4a448c39d0660ae1fe46bd3d8c54cf9ff99c167bf49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:15:33 np0005539552 bash[77818]: 805bc584ce25c3ddefc2d4a448c39d0660ae1fe46bd3d8c54cf9ff99c167bf49
Nov 29 02:15:33 np0005539552 systemd[1]: Started Ceph crash.compute-2 for b66774a7-56d9-5535-bd8c-681234404870.
Nov 29 02:15:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e32 _set_new_cache_sizes cache_size:1020054712 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:15:33 np0005539552 ceph-mgr[77480]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 29 02:15:33 np0005539552 ceph-mgr[77480]: mgr[py] Loading python module 'osd_perf_query'
Nov 29 02:15:33 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-2-zfrvoq[77476]: 2025-11-29T07:15:33.333+0000 7f52117e0140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 29 02:15:33 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-2[77833]: INFO:ceph-crash:pinging cluster to exercise our key
Nov 29 02:15:33 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-2[77833]: 2025-11-29T07:15:33.524+0000 7f99a8f6d640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 29 02:15:33 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-2[77833]: 2025-11-29T07:15:33.524+0000 7f99a8f6d640 -1 AuthRegistry(0x7f99a4067150) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 29 02:15:33 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-2[77833]: 2025-11-29T07:15:33.525+0000 7f99a8f6d640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 29 02:15:33 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-2[77833]: 2025-11-29T07:15:33.525+0000 7f99a8f6d640 -1 AuthRegistry(0x7f99a8f6c000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 29 02:15:33 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-2[77833]: 2025-11-29T07:15:33.526+0000 7f99a2d76640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Nov 29 02:15:33 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-2[77833]: 2025-11-29T07:15:33.527+0000 7f99a1d74640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Nov 29 02:15:33 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-2[77833]: 2025-11-29T07:15:33.527+0000 7f99a2575640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Nov 29 02:15:33 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-2[77833]: 2025-11-29T07:15:33.527+0000 7f99a8f6d640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Nov 29 02:15:33 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-2[77833]: [errno 13] RADOS permission denied (error connecting to the cluster)
Nov 29 02:15:33 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-2[77833]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Nov 29 02:15:33 np0005539552 ceph-mgr[77480]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 29 02:15:33 np0005539552 ceph-mgr[77480]: mgr[py] Loading python module 'osd_support'
Nov 29 02:15:33 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-2-zfrvoq[77476]: 2025-11-29T07:15:33.618+0000 7f52117e0140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 29 02:15:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e33 e33: 2 total, 2 up, 2 in
Nov 29 02:15:33 np0005539552 ceph-mon[77121]: from='client.? 192.168.122.100:0/790893646' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 29 02:15:33 np0005539552 ceph-mgr[77480]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 29 02:15:33 np0005539552 ceph-mgr[77480]: mgr[py] Loading python module 'pg_autoscaler'
Nov 29 02:15:33 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-2-zfrvoq[77476]: 2025-11-29T07:15:33.854+0000 7f52117e0140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 29 02:15:34 np0005539552 ceph-mgr[77480]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 29 02:15:34 np0005539552 ceph-mgr[77480]: mgr[py] Loading python module 'progress'
Nov 29 02:15:34 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-2-zfrvoq[77476]: 2025-11-29T07:15:34.133+0000 7f52117e0140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 29 02:15:34 np0005539552 ceph-mgr[77480]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 29 02:15:34 np0005539552 ceph-mgr[77480]: mgr[py] Loading python module 'prometheus'
Nov 29 02:15:34 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-2-zfrvoq[77476]: 2025-11-29T07:15:34.383+0000 7f52117e0140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 29 02:15:34 np0005539552 ceph-mon[77121]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 02:15:34 np0005539552 ceph-mon[77121]: from='client.? 192.168.122.100:0/1712256818' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 29 02:15:34 np0005539552 ceph-mon[77121]: from='client.? 192.168.122.100:0/1712256818' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 29 02:15:34 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:34 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:34 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:34 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:34 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:34 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:15:34 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:15:35 np0005539552 podman[77991]: 2025-11-29 07:15:35.030714083 +0000 UTC m=+0.036983437 container create 3e9e923fdbcd68741a30146b6dac7ee7430cc1b5329576646a2b4f04112938ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:15:35 np0005539552 systemd[1]: Started libpod-conmon-3e9e923fdbcd68741a30146b6dac7ee7430cc1b5329576646a2b4f04112938ea.scope.
Nov 29 02:15:35 np0005539552 systemd[1]: Started libcrun container.
Nov 29 02:15:35 np0005539552 podman[77991]: 2025-11-29 07:15:35.013842007 +0000 UTC m=+0.020111381 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:15:35 np0005539552 podman[77991]: 2025-11-29 07:15:35.138990924 +0000 UTC m=+0.145260298 container init 3e9e923fdbcd68741a30146b6dac7ee7430cc1b5329576646a2b4f04112938ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_kapitsa, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:15:35 np0005539552 podman[77991]: 2025-11-29 07:15:35.147645188 +0000 UTC m=+0.153914542 container start 3e9e923fdbcd68741a30146b6dac7ee7430cc1b5329576646a2b4f04112938ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_kapitsa, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:15:35 np0005539552 podman[77991]: 2025-11-29 07:15:35.151631591 +0000 UTC m=+0.157900945 container attach 3e9e923fdbcd68741a30146b6dac7ee7430cc1b5329576646a2b4f04112938ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_kapitsa, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 02:15:35 np0005539552 vigilant_kapitsa[78008]: 167 167
Nov 29 02:15:35 np0005539552 systemd[1]: libpod-3e9e923fdbcd68741a30146b6dac7ee7430cc1b5329576646a2b4f04112938ea.scope: Deactivated successfully.
Nov 29 02:15:35 np0005539552 conmon[78008]: conmon 3e9e923fdbcd68741a30 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3e9e923fdbcd68741a30146b6dac7ee7430cc1b5329576646a2b4f04112938ea.scope/container/memory.events
Nov 29 02:15:35 np0005539552 podman[77991]: 2025-11-29 07:15:35.157606626 +0000 UTC m=+0.163875990 container died 3e9e923fdbcd68741a30146b6dac7ee7430cc1b5329576646a2b4f04112938ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:15:35 np0005539552 systemd[1]: var-lib-containers-storage-overlay-84880430726bd4f3756eb6878e25dafd8b54702657eba74141976e2e3791ec3c-merged.mount: Deactivated successfully.
Nov 29 02:15:35 np0005539552 podman[77991]: 2025-11-29 07:15:35.191822481 +0000 UTC m=+0.198091835 container remove 3e9e923fdbcd68741a30146b6dac7ee7430cc1b5329576646a2b4f04112938ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:15:35 np0005539552 systemd[1]: libpod-conmon-3e9e923fdbcd68741a30146b6dac7ee7430cc1b5329576646a2b4f04112938ea.scope: Deactivated successfully.
Nov 29 02:15:35 np0005539552 podman[78032]: 2025-11-29 07:15:35.356872391 +0000 UTC m=+0.045586961 container create cb30a8b8cfabff342925a51d563a9fac2c5725e39dbd708e2f16b7b0eea796e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_murdock, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 02:15:35 np0005539552 systemd[1]: Started libpod-conmon-cb30a8b8cfabff342925a51d563a9fac2c5725e39dbd708e2f16b7b0eea796e8.scope.
Nov 29 02:15:35 np0005539552 ceph-mgr[77480]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 29 02:15:35 np0005539552 ceph-mgr[77480]: mgr[py] Loading python module 'rbd_support'
Nov 29 02:15:35 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-2-zfrvoq[77476]: 2025-11-29T07:15:35.427+0000 7f52117e0140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 29 02:15:35 np0005539552 podman[78032]: 2025-11-29 07:15:35.337721485 +0000 UTC m=+0.026436075 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:15:35 np0005539552 systemd[1]: Started libcrun container.
Nov 29 02:15:35 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96346c0f0c2d59ab71766997e5bee52456a665baf9005cc17c0a84eb9165dfe0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:35 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96346c0f0c2d59ab71766997e5bee52456a665baf9005cc17c0a84eb9165dfe0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:35 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96346c0f0c2d59ab71766997e5bee52456a665baf9005cc17c0a84eb9165dfe0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:35 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96346c0f0c2d59ab71766997e5bee52456a665baf9005cc17c0a84eb9165dfe0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:35 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96346c0f0c2d59ab71766997e5bee52456a665baf9005cc17c0a84eb9165dfe0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:35 np0005539552 podman[78032]: 2025-11-29 07:15:35.497923329 +0000 UTC m=+0.186637919 container init cb30a8b8cfabff342925a51d563a9fac2c5725e39dbd708e2f16b7b0eea796e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 02:15:35 np0005539552 podman[78032]: 2025-11-29 07:15:35.506974343 +0000 UTC m=+0.195688913 container start cb30a8b8cfabff342925a51d563a9fac2c5725e39dbd708e2f16b7b0eea796e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_murdock, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:15:35 np0005539552 podman[78032]: 2025-11-29 07:15:35.516408247 +0000 UTC m=+0.205122837 container attach cb30a8b8cfabff342925a51d563a9fac2c5725e39dbd708e2f16b7b0eea796e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_murdock, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:15:35 np0005539552 ceph-mgr[77480]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 29 02:15:35 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-2-zfrvoq[77476]: 2025-11-29T07:15:35.802+0000 7f52117e0140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 29 02:15:35 np0005539552 ceph-mgr[77480]: mgr[py] Loading python module 'restful'
Nov 29 02:15:36 np0005539552 nifty_murdock[78049]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:15:36 np0005539552 nifty_murdock[78049]: --> relative data size: 1.0
Nov 29 02:15:36 np0005539552 nifty_murdock[78049]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 29 02:15:36 np0005539552 nifty_murdock[78049]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 681bc90e-5cd2-4106-9be9-9995623a17e0
Nov 29 02:15:36 np0005539552 ceph-mgr[77480]: mgr[py] Loading python module 'rgw'
Nov 29 02:15:36 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd new", "uuid": "681bc90e-5cd2-4106-9be9-9995623a17e0"} v 0) v1
Nov 29 02:15:36 np0005539552 ceph-mon[77121]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/1211735059' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "681bc90e-5cd2-4106-9be9-9995623a17e0"}]: dispatch
Nov 29 02:15:36 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e34 e34: 3 total, 2 up, 3 in
Nov 29 02:15:36 np0005539552 ceph-mon[77121]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 29 02:15:36 np0005539552 ceph-mon[77121]: Cluster is now healthy
Nov 29 02:15:36 np0005539552 ceph-mon[77121]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "681bc90e-5cd2-4106-9be9-9995623a17e0"}]: dispatch
Nov 29 02:15:36 np0005539552 ceph-mon[77121]: from='client.? 192.168.122.102:0/1211735059' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "681bc90e-5cd2-4106-9be9-9995623a17e0"}]: dispatch
Nov 29 02:15:37 np0005539552 nifty_murdock[78049]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 29 02:15:37 np0005539552 nifty_murdock[78049]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Nov 29 02:15:37 np0005539552 lvm[78097]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 02:15:37 np0005539552 lvm[78097]: VG ceph_vg0 finished
Nov 29 02:15:37 np0005539552 nifty_murdock[78049]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Nov 29 02:15:37 np0005539552 nifty_murdock[78049]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 29 02:15:37 np0005539552 nifty_murdock[78049]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Nov 29 02:15:37 np0005539552 nifty_murdock[78049]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Nov 29 02:15:37 np0005539552 systemd[72536]: Starting Mark boot as successful...
Nov 29 02:15:37 np0005539552 systemd[72536]: Finished Mark boot as successful.
Nov 29 02:15:37 np0005539552 ceph-mgr[77480]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 29 02:15:37 np0005539552 ceph-mgr[77480]: mgr[py] Loading python module 'rook'
Nov 29 02:15:37 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-2-zfrvoq[77476]: 2025-11-29T07:15:37.299+0000 7f52117e0140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 29 02:15:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 29 02:15:37 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3558326765' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 29 02:15:37 np0005539552 nifty_murdock[78049]: stderr: got monmap epoch 3
Nov 29 02:15:37 np0005539552 nifty_murdock[78049]: --> Creating keyring file for osd.2
Nov 29 02:15:37 np0005539552 nifty_murdock[78049]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Nov 29 02:15:37 np0005539552 nifty_murdock[78049]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Nov 29 02:15:37 np0005539552 nifty_murdock[78049]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 681bc90e-5cd2-4106-9be9-9995623a17e0 --setuser ceph --setgroup ceph
Nov 29 02:15:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:15:38 np0005539552 ceph-mon[77121]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "681bc90e-5cd2-4106-9be9-9995623a17e0"}]': finished
Nov 29 02:15:39 np0005539552 ceph-mgr[77480]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 29 02:15:39 np0005539552 ceph-mgr[77480]: mgr[py] Loading python module 'selftest'
Nov 29 02:15:39 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-2-zfrvoq[77476]: 2025-11-29T07:15:39.505+0000 7f52117e0140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 29 02:15:39 np0005539552 ceph-mon[77121]: from='client.? 192.168.122.100:0/534318413' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 29 02:15:39 np0005539552 ceph-mon[77121]: from='client.? 192.168.122.100:0/534318413' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 29 02:15:39 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:39 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-2-zfrvoq[77476]: 2025-11-29T07:15:39.793+0000 7f52117e0140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 29 02:15:39 np0005539552 ceph-mgr[77480]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 29 02:15:39 np0005539552 ceph-mgr[77480]: mgr[py] Loading python module 'snap_schedule'
Nov 29 02:15:40 np0005539552 ceph-mgr[77480]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 29 02:15:40 np0005539552 ceph-mgr[77480]: mgr[py] Loading python module 'stats'
Nov 29 02:15:40 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-2-zfrvoq[77476]: 2025-11-29T07:15:40.047+0000 7f52117e0140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 29 02:15:40 np0005539552 ceph-mgr[77480]: mgr[py] Loading python module 'status'
Nov 29 02:15:40 np0005539552 ceph-mgr[77480]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 29 02:15:40 np0005539552 ceph-mgr[77480]: mgr[py] Loading python module 'telegraf'
Nov 29 02:15:40 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-2-zfrvoq[77476]: 2025-11-29T07:15:40.600+0000 7f52117e0140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 29 02:15:40 np0005539552 nifty_murdock[78049]: stderr: 2025-11-29T07:15:37.632+0000 7f1ce0a60740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 02:15:40 np0005539552 nifty_murdock[78049]: stderr: 2025-11-29T07:15:37.632+0000 7f1ce0a60740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 02:15:40 np0005539552 nifty_murdock[78049]: stderr: 2025-11-29T07:15:37.633+0000 7f1ce0a60740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 02:15:40 np0005539552 nifty_murdock[78049]: stderr: 2025-11-29T07:15:37.633+0000 7f1ce0a60740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Nov 29 02:15:40 np0005539552 nifty_murdock[78049]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Nov 29 02:15:40 np0005539552 ceph-mgr[77480]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 29 02:15:40 np0005539552 ceph-mgr[77480]: mgr[py] Loading python module 'telemetry'
Nov 29 02:15:40 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-2-zfrvoq[77476]: 2025-11-29T07:15:40.873+0000 7f52117e0140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 29 02:15:40 np0005539552 nifty_murdock[78049]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 29 02:15:40 np0005539552 nifty_murdock[78049]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Nov 29 02:15:40 np0005539552 nifty_murdock[78049]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Nov 29 02:15:40 np0005539552 nifty_murdock[78049]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Nov 29 02:15:40 np0005539552 nifty_murdock[78049]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 29 02:15:40 np0005539552 nifty_murdock[78049]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 29 02:15:40 np0005539552 nifty_murdock[78049]: --> ceph-volume lvm activate successful for osd ID: 2
Nov 29 02:15:40 np0005539552 nifty_murdock[78049]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Nov 29 02:15:40 np0005539552 systemd[1]: libpod-cb30a8b8cfabff342925a51d563a9fac2c5725e39dbd708e2f16b7b0eea796e8.scope: Deactivated successfully.
Nov 29 02:15:40 np0005539552 systemd[1]: libpod-cb30a8b8cfabff342925a51d563a9fac2c5725e39dbd708e2f16b7b0eea796e8.scope: Consumed 2.418s CPU time.
Nov 29 02:15:40 np0005539552 podman[78032]: 2025-11-29 07:15:40.99009245 +0000 UTC m=+5.678807050 container died cb30a8b8cfabff342925a51d563a9fac2c5725e39dbd708e2f16b7b0eea796e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_murdock, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 02:15:41 np0005539552 systemd[1]: var-lib-containers-storage-overlay-96346c0f0c2d59ab71766997e5bee52456a665baf9005cc17c0a84eb9165dfe0-merged.mount: Deactivated successfully.
Nov 29 02:15:41 np0005539552 podman[78032]: 2025-11-29 07:15:41.119333654 +0000 UTC m=+5.808048244 container remove cb30a8b8cfabff342925a51d563a9fac2c5725e39dbd708e2f16b7b0eea796e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:15:41 np0005539552 systemd[1]: libpod-conmon-cb30a8b8cfabff342925a51d563a9fac2c5725e39dbd708e2f16b7b0eea796e8.scope: Deactivated successfully.
Nov 29 02:15:41 np0005539552 ceph-mgr[77480]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 29 02:15:41 np0005539552 ceph-mgr[77480]: mgr[py] Loading python module 'test_orchestrator'
Nov 29 02:15:41 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-2-zfrvoq[77476]: 2025-11-29T07:15:41.514+0000 7f52117e0140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 29 02:15:41 np0005539552 podman[79162]: 2025-11-29 07:15:41.750677045 +0000 UTC m=+0.057489028 container create 51acdff240bcdf15d77ff972e190ccb9d6151f64d653be8a1e670d30fe03dcb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:15:41 np0005539552 systemd[1]: Started libpod-conmon-51acdff240bcdf15d77ff972e190ccb9d6151f64d653be8a1e670d30fe03dcb8.scope.
Nov 29 02:15:41 np0005539552 systemd[1]: Started libcrun container.
Nov 29 02:15:41 np0005539552 podman[79162]: 2025-11-29 07:15:41.719831347 +0000 UTC m=+0.026643400 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:15:41 np0005539552 podman[79162]: 2025-11-29 07:15:41.826032265 +0000 UTC m=+0.132844248 container init 51acdff240bcdf15d77ff972e190ccb9d6151f64d653be8a1e670d30fe03dcb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_pascal, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:15:41 np0005539552 podman[79162]: 2025-11-29 07:15:41.837220454 +0000 UTC m=+0.144032407 container start 51acdff240bcdf15d77ff972e190ccb9d6151f64d653be8a1e670d30fe03dcb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:15:41 np0005539552 podman[79162]: 2025-11-29 07:15:41.84092203 +0000 UTC m=+0.147734013 container attach 51acdff240bcdf15d77ff972e190ccb9d6151f64d653be8a1e670d30fe03dcb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 02:15:41 np0005539552 zealous_pascal[79178]: 167 167
Nov 29 02:15:41 np0005539552 systemd[1]: libpod-51acdff240bcdf15d77ff972e190ccb9d6151f64d653be8a1e670d30fe03dcb8.scope: Deactivated successfully.
Nov 29 02:15:41 np0005539552 podman[79162]: 2025-11-29 07:15:41.846270238 +0000 UTC m=+0.153082231 container died 51acdff240bcdf15d77ff972e190ccb9d6151f64d653be8a1e670d30fe03dcb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_pascal, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:15:41 np0005539552 systemd[1]: var-lib-containers-storage-overlay-e82ff1b0d30e7e344c316871b9b6aff541d7f7c01ac38fe4c6181a311f6dabee-merged.mount: Deactivated successfully.
Nov 29 02:15:41 np0005539552 podman[79162]: 2025-11-29 07:15:41.892876384 +0000 UTC m=+0.199688377 container remove 51acdff240bcdf15d77ff972e190ccb9d6151f64d653be8a1e670d30fe03dcb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_pascal, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:15:41 np0005539552 systemd[1]: libpod-conmon-51acdff240bcdf15d77ff972e190ccb9d6151f64d653be8a1e670d30fe03dcb8.scope: Deactivated successfully.
Nov 29 02:15:42 np0005539552 podman[79201]: 2025-11-29 07:15:42.074263386 +0000 UTC m=+0.058240037 container create 4cea4b5cdc6dc53e36502da2d2a7402fdf07d873d5238de2f1f10ce62ef47d85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lehmann, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 02:15:42 np0005539552 systemd[1]: Started libpod-conmon-4cea4b5cdc6dc53e36502da2d2a7402fdf07d873d5238de2f1f10ce62ef47d85.scope.
Nov 29 02:15:42 np0005539552 systemd[1]: Started libcrun container.
Nov 29 02:15:42 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3f81db2b9b40c0fb17a76296f16250136792463dfae432399538939ceff0ead/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:42 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3f81db2b9b40c0fb17a76296f16250136792463dfae432399538939ceff0ead/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:42 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3f81db2b9b40c0fb17a76296f16250136792463dfae432399538939ceff0ead/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:42 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3f81db2b9b40c0fb17a76296f16250136792463dfae432399538939ceff0ead/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:42 np0005539552 podman[79201]: 2025-11-29 07:15:42.051799815 +0000 UTC m=+0.035776466 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:15:42 np0005539552 podman[79201]: 2025-11-29 07:15:42.16257188 +0000 UTC m=+0.146548541 container init 4cea4b5cdc6dc53e36502da2d2a7402fdf07d873d5238de2f1f10ce62ef47d85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lehmann, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:15:42 np0005539552 ceph-mon[77121]: from='client.? 192.168.122.100:0/278235459' entity='client.admin' 
Nov 29 02:15:42 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:42 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:42 np0005539552 podman[79201]: 2025-11-29 07:15:42.168686079 +0000 UTC m=+0.152662750 container start 4cea4b5cdc6dc53e36502da2d2a7402fdf07d873d5238de2f1f10ce62ef47d85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:15:42 np0005539552 podman[79201]: 2025-11-29 07:15:42.172567129 +0000 UTC m=+0.156543800 container attach 4cea4b5cdc6dc53e36502da2d2a7402fdf07d873d5238de2f1f10ce62ef47d85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lehmann, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 02:15:42 np0005539552 ceph-mgr[77480]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 29 02:15:42 np0005539552 ceph-mgr[77480]: mgr[py] Loading python module 'volumes'
Nov 29 02:15:42 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-2-zfrvoq[77476]: 2025-11-29T07:15:42.218+0000 7f52117e0140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 29 02:15:42 np0005539552 cool_lehmann[79217]: {
Nov 29 02:15:42 np0005539552 cool_lehmann[79217]:    "2": [
Nov 29 02:15:42 np0005539552 cool_lehmann[79217]:        {
Nov 29 02:15:42 np0005539552 cool_lehmann[79217]:            "devices": [
Nov 29 02:15:42 np0005539552 cool_lehmann[79217]:                "/dev/loop3"
Nov 29 02:15:42 np0005539552 cool_lehmann[79217]:            ],
Nov 29 02:15:42 np0005539552 cool_lehmann[79217]:            "lv_name": "ceph_lv0",
Nov 29 02:15:42 np0005539552 cool_lehmann[79217]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:15:42 np0005539552 cool_lehmann[79217]:            "lv_size": "7511998464",
Nov 29 02:15:42 np0005539552 cool_lehmann[79217]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lKdC8x-4CpL-VfcD-RdXE-5pd5-63RY-13zJYz,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=681bc90e-5cd2-4106-9be9-9995623a17e0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:15:42 np0005539552 cool_lehmann[79217]:            "lv_uuid": "lKdC8x-4CpL-VfcD-RdXE-5pd5-63RY-13zJYz",
Nov 29 02:15:42 np0005539552 cool_lehmann[79217]:            "name": "ceph_lv0",
Nov 29 02:15:42 np0005539552 cool_lehmann[79217]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:15:42 np0005539552 cool_lehmann[79217]:            "tags": {
Nov 29 02:15:42 np0005539552 cool_lehmann[79217]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:15:42 np0005539552 cool_lehmann[79217]:                "ceph.block_uuid": "lKdC8x-4CpL-VfcD-RdXE-5pd5-63RY-13zJYz",
Nov 29 02:15:42 np0005539552 cool_lehmann[79217]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:15:42 np0005539552 cool_lehmann[79217]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:15:42 np0005539552 cool_lehmann[79217]:                "ceph.cluster_name": "ceph",
Nov 29 02:15:42 np0005539552 cool_lehmann[79217]:                "ceph.crush_device_class": "",
Nov 29 02:15:42 np0005539552 cool_lehmann[79217]:                "ceph.encrypted": "0",
Nov 29 02:15:42 np0005539552 cool_lehmann[79217]:                "ceph.osd_fsid": "681bc90e-5cd2-4106-9be9-9995623a17e0",
Nov 29 02:15:42 np0005539552 cool_lehmann[79217]:                "ceph.osd_id": "2",
Nov 29 02:15:42 np0005539552 cool_lehmann[79217]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:15:42 np0005539552 cool_lehmann[79217]:                "ceph.type": "block",
Nov 29 02:15:42 np0005539552 cool_lehmann[79217]:                "ceph.vdo": "0"
Nov 29 02:15:42 np0005539552 cool_lehmann[79217]:            },
Nov 29 02:15:42 np0005539552 cool_lehmann[79217]:            "type": "block",
Nov 29 02:15:42 np0005539552 cool_lehmann[79217]:            "vg_name": "ceph_vg0"
Nov 29 02:15:42 np0005539552 cool_lehmann[79217]:        }
Nov 29 02:15:42 np0005539552 cool_lehmann[79217]:    ]
Nov 29 02:15:42 np0005539552 cool_lehmann[79217]: }
Nov 29 02:15:42 np0005539552 ceph-mgr[77480]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 29 02:15:42 np0005539552 ceph-mgr[77480]: mgr[py] Loading python module 'zabbix'
Nov 29 02:15:42 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-2-zfrvoq[77476]: 2025-11-29T07:15:42.954+0000 7f52117e0140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 29 02:15:42 np0005539552 systemd[1]: libpod-4cea4b5cdc6dc53e36502da2d2a7402fdf07d873d5238de2f1f10ce62ef47d85.scope: Deactivated successfully.
Nov 29 02:15:43 np0005539552 podman[79226]: 2025-11-29 07:15:43.005869695 +0000 UTC m=+0.027327688 container died 4cea4b5cdc6dc53e36502da2d2a7402fdf07d873d5238de2f1f10ce62ef47d85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:15:43 np0005539552 systemd[1]: var-lib-containers-storage-overlay-a3f81db2b9b40c0fb17a76296f16250136792463dfae432399538939ceff0ead-merged.mount: Deactivated successfully.
Nov 29 02:15:43 np0005539552 ceph-mgr[77480]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 29 02:15:43 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-2-zfrvoq[77476]: 2025-11-29T07:15:43.212+0000 7f52117e0140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 29 02:15:43 np0005539552 ceph-mgr[77480]: ms_deliver_dispatch: unhandled message 0x555d866eb600 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 29 02:15:43 np0005539552 podman[79226]: 2025-11-29 07:15:43.215785575 +0000 UTC m=+0.237243538 container remove 4cea4b5cdc6dc53e36502da2d2a7402fdf07d873d5238de2f1f10ce62ef47d85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lehmann, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 02:15:43 np0005539552 ceph-mgr[77480]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1950343944
Nov 29 02:15:43 np0005539552 systemd[1]: libpod-conmon-4cea4b5cdc6dc53e36502da2d2a7402fdf07d873d5238de2f1f10ce62ef47d85.scope: Deactivated successfully.
Nov 29 02:15:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:15:43 np0005539552 ceph-mon[77121]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 29 02:15:43 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:43 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:44 np0005539552 podman[79381]: 2025-11-29 07:15:44.053768362 +0000 UTC m=+0.063288668 container create 686b1d092b63703731b9a101da7f44bcc3439fbb09ebdcd2ac64f0234876edc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:15:44 np0005539552 systemd[1]: Started libpod-conmon-686b1d092b63703731b9a101da7f44bcc3439fbb09ebdcd2ac64f0234876edc8.scope.
Nov 29 02:15:44 np0005539552 podman[79381]: 2025-11-29 07:15:44.008627665 +0000 UTC m=+0.018148001 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:15:44 np0005539552 systemd[1]: Started libcrun container.
Nov 29 02:15:44 np0005539552 podman[79381]: 2025-11-29 07:15:44.229131748 +0000 UTC m=+0.238652084 container init 686b1d092b63703731b9a101da7f44bcc3439fbb09ebdcd2ac64f0234876edc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 02:15:44 np0005539552 podman[79381]: 2025-11-29 07:15:44.239931597 +0000 UTC m=+0.249451913 container start 686b1d092b63703731b9a101da7f44bcc3439fbb09ebdcd2ac64f0234876edc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_vaughan, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:15:44 np0005539552 sharp_vaughan[79397]: 167 167
Nov 29 02:15:44 np0005539552 systemd[1]: libpod-686b1d092b63703731b9a101da7f44bcc3439fbb09ebdcd2ac64f0234876edc8.scope: Deactivated successfully.
Nov 29 02:15:44 np0005539552 podman[79381]: 2025-11-29 07:15:44.475887311 +0000 UTC m=+0.485407647 container attach 686b1d092b63703731b9a101da7f44bcc3439fbb09ebdcd2ac64f0234876edc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_vaughan, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 02:15:44 np0005539552 podman[79381]: 2025-11-29 07:15:44.476217499 +0000 UTC m=+0.485737815 container died 686b1d092b63703731b9a101da7f44bcc3439fbb09ebdcd2ac64f0234876edc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Nov 29 02:15:44 np0005539552 ceph-mon[77121]: Saving service ingress.rgw.default spec with placement count:2
Nov 29 02:15:44 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 29 02:15:44 np0005539552 ceph-mon[77121]: Deploying daemon osd.2 on compute-2
Nov 29 02:15:44 np0005539552 systemd[1]: var-lib-containers-storage-overlay-d884bd8bbcfff2e70b73b7bb3f5b59767da1aae15df47774394013bd8d481f56-merged.mount: Deactivated successfully.
Nov 29 02:15:44 np0005539552 podman[79381]: 2025-11-29 07:15:44.955012635 +0000 UTC m=+0.964532941 container remove 686b1d092b63703731b9a101da7f44bcc3439fbb09ebdcd2ac64f0234876edc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 02:15:45 np0005539552 systemd[1]: libpod-conmon-686b1d092b63703731b9a101da7f44bcc3439fbb09ebdcd2ac64f0234876edc8.scope: Deactivated successfully.
Nov 29 02:15:45 np0005539552 podman[79429]: 2025-11-29 07:15:45.337762936 +0000 UTC m=+0.083701936 container create 4afa3bb9db04977d61a932dacc5cd3acc8f13e2651268ea2f2251e68b4ea5163 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-2-activate-test, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:15:45 np0005539552 podman[79429]: 2025-11-29 07:15:45.28995743 +0000 UTC m=+0.035896440 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:15:45 np0005539552 systemd[1]: Started libpod-conmon-4afa3bb9db04977d61a932dacc5cd3acc8f13e2651268ea2f2251e68b4ea5163.scope.
Nov 29 02:15:45 np0005539552 systemd[1]: Started libcrun container.
Nov 29 02:15:45 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7da19a05a24d6f84565ef06e860786d33ecf56714d6e65c5aea4e0129259188a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:45 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7da19a05a24d6f84565ef06e860786d33ecf56714d6e65c5aea4e0129259188a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:45 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7da19a05a24d6f84565ef06e860786d33ecf56714d6e65c5aea4e0129259188a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:45 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7da19a05a24d6f84565ef06e860786d33ecf56714d6e65c5aea4e0129259188a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:45 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7da19a05a24d6f84565ef06e860786d33ecf56714d6e65c5aea4e0129259188a/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:45 np0005539552 podman[79429]: 2025-11-29 07:15:45.458066438 +0000 UTC m=+0.204005468 container init 4afa3bb9db04977d61a932dacc5cd3acc8f13e2651268ea2f2251e68b4ea5163 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-2-activate-test, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 02:15:45 np0005539552 podman[79429]: 2025-11-29 07:15:45.467365059 +0000 UTC m=+0.213304069 container start 4afa3bb9db04977d61a932dacc5cd3acc8f13e2651268ea2f2251e68b4ea5163 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-2-activate-test, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:15:45 np0005539552 podman[79429]: 2025-11-29 07:15:45.520152044 +0000 UTC m=+0.266091054 container attach 4afa3bb9db04977d61a932dacc5cd3acc8f13e2651268ea2f2251e68b4ea5163 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-2-activate-test, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:15:46 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-2-activate-test[79446]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 29 02:15:46 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-2-activate-test[79446]:                            [--no-systemd] [--no-tmpfs]
Nov 29 02:15:46 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-2-activate-test[79446]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 29 02:15:46 np0005539552 systemd[1]: libpod-4afa3bb9db04977d61a932dacc5cd3acc8f13e2651268ea2f2251e68b4ea5163.scope: Deactivated successfully.
Nov 29 02:15:46 np0005539552 podman[79429]: 2025-11-29 07:15:46.120433253 +0000 UTC m=+0.866372253 container died 4afa3bb9db04977d61a932dacc5cd3acc8f13e2651268ea2f2251e68b4ea5163 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-2-activate-test, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 02:15:46 np0005539552 systemd[1]: var-lib-containers-storage-overlay-7da19a05a24d6f84565ef06e860786d33ecf56714d6e65c5aea4e0129259188a-merged.mount: Deactivated successfully.
Nov 29 02:15:46 np0005539552 podman[79429]: 2025-11-29 07:15:46.421049739 +0000 UTC m=+1.166988739 container remove 4afa3bb9db04977d61a932dacc5cd3acc8f13e2651268ea2f2251e68b4ea5163 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-2-activate-test, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 02:15:46 np0005539552 systemd[1]: libpod-conmon-4afa3bb9db04977d61a932dacc5cd3acc8f13e2651268ea2f2251e68b4ea5163.scope: Deactivated successfully.
Nov 29 02:15:46 np0005539552 systemd[1]: Reloading.
Nov 29 02:15:46 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:15:46 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:15:47 np0005539552 systemd[1]: Reloading.
Nov 29 02:15:47 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:15:47 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:15:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).mds e2 new map
Nov 29 02:15:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).mds e2 print_map#012e2#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-29T07:15:46.469688+0000#012modified#0112025-11-29T07:15:46.469738+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 
Nov 29 02:15:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e35 e35: 3 total, 2 up, 3 in
Nov 29 02:15:47 np0005539552 systemd[1]: Starting Ceph osd.2 for b66774a7-56d9-5535-bd8c-681234404870...
Nov 29 02:15:47 np0005539552 podman[79607]: 2025-11-29 07:15:47.898187231 +0000 UTC m=+0.071046239 container create d9bbb3ce00c23167348b5afe3a35b949765771ddc11cab43eb845cb874345097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-2-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 02:15:47 np0005539552 podman[79607]: 2025-11-29 07:15:47.852101448 +0000 UTC m=+0.024960466 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:15:47 np0005539552 systemd[1]: Started libcrun container.
Nov 29 02:15:47 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cac9fbfa2a5446eb254a88fab1593f2b1f714935bcb1900a80d59f8d2c84806/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:47 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cac9fbfa2a5446eb254a88fab1593f2b1f714935bcb1900a80d59f8d2c84806/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:47 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cac9fbfa2a5446eb254a88fab1593f2b1f714935bcb1900a80d59f8d2c84806/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:47 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cac9fbfa2a5446eb254a88fab1593f2b1f714935bcb1900a80d59f8d2c84806/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:47 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cac9fbfa2a5446eb254a88fab1593f2b1f714935bcb1900a80d59f8d2c84806/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:48 np0005539552 podman[79607]: 2025-11-29 07:15:48.044409333 +0000 UTC m=+0.217268371 container init d9bbb3ce00c23167348b5afe3a35b949765771ddc11cab43eb845cb874345097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-2-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Nov 29 02:15:48 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 29 02:15:48 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 29 02:15:48 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 29 02:15:48 np0005539552 ceph-mon[77121]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 29 02:15:48 np0005539552 ceph-mon[77121]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 29 02:15:48 np0005539552 podman[79607]: 2025-11-29 07:15:48.055775017 +0000 UTC m=+0.228634025 container start d9bbb3ce00c23167348b5afe3a35b949765771ddc11cab43eb845cb874345097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 02:15:48 np0005539552 podman[79607]: 2025-11-29 07:15:48.111675843 +0000 UTC m=+0.284534941 container attach d9bbb3ce00c23167348b5afe3a35b949765771ddc11cab43eb845cb874345097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-2-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 02:15:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:15:48 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-2-activate[79622]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 29 02:15:48 np0005539552 bash[79607]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 29 02:15:48 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-2-activate[79622]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 29 02:15:48 np0005539552 bash[79607]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 29 02:15:49 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-2-activate[79622]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 29 02:15:49 np0005539552 bash[79607]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 29 02:15:49 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-2-activate[79622]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 29 02:15:49 np0005539552 bash[79607]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 29 02:15:49 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-2-activate[79622]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Nov 29 02:15:49 np0005539552 bash[79607]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Nov 29 02:15:49 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-2-activate[79622]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 29 02:15:49 np0005539552 bash[79607]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 29 02:15:49 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-2-activate[79622]: --> ceph-volume raw activate successful for osd ID: 2
Nov 29 02:15:49 np0005539552 bash[79607]: --> ceph-volume raw activate successful for osd ID: 2
Nov 29 02:15:49 np0005539552 systemd[1]: libpod-d9bbb3ce00c23167348b5afe3a35b949765771ddc11cab43eb845cb874345097.scope: Deactivated successfully.
Nov 29 02:15:49 np0005539552 systemd[1]: libpod-d9bbb3ce00c23167348b5afe3a35b949765771ddc11cab43eb845cb874345097.scope: Consumed 1.042s CPU time.
Nov 29 02:15:49 np0005539552 podman[79607]: 2025-11-29 07:15:49.079647952 +0000 UTC m=+1.252506980 container died d9bbb3ce00c23167348b5afe3a35b949765771ddc11cab43eb845cb874345097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Nov 29 02:15:49 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 29 02:15:49 np0005539552 ceph-mon[77121]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 29 02:15:49 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:49 np0005539552 systemd[1]: var-lib-containers-storage-overlay-5cac9fbfa2a5446eb254a88fab1593f2b1f714935bcb1900a80d59f8d2c84806-merged.mount: Deactivated successfully.
Nov 29 02:15:49 np0005539552 podman[79607]: 2025-11-29 07:15:49.383225725 +0000 UTC m=+1.556084733 container remove d9bbb3ce00c23167348b5afe3a35b949765771ddc11cab43eb845cb874345097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-2-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:15:49 np0005539552 podman[79780]: 2025-11-29 07:15:49.55736559 +0000 UTC m=+0.021612910 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:15:49 np0005539552 podman[79780]: 2025-11-29 07:15:49.834792346 +0000 UTC m=+0.299039686 container create b3beb659a1c189f7513ac7d010393b99d28c34611bbfcc54f2ab6dc895c4896f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-2, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:15:49 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/811e9507745b665eecb9321e505c5685e6c9140b9c98dba6e1030b65df33cd5c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:49 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/811e9507745b665eecb9321e505c5685e6c9140b9c98dba6e1030b65df33cd5c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:49 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/811e9507745b665eecb9321e505c5685e6c9140b9c98dba6e1030b65df33cd5c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:49 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/811e9507745b665eecb9321e505c5685e6c9140b9c98dba6e1030b65df33cd5c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:49 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/811e9507745b665eecb9321e505c5685e6c9140b9c98dba6e1030b65df33cd5c/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:50 np0005539552 podman[79780]: 2025-11-29 07:15:50.007310279 +0000 UTC m=+0.471557629 container init b3beb659a1c189f7513ac7d010393b99d28c34611bbfcc54f2ab6dc895c4896f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 02:15:50 np0005539552 podman[79780]: 2025-11-29 07:15:50.013102389 +0000 UTC m=+0.477349699 container start b3beb659a1c189f7513ac7d010393b99d28c34611bbfcc54f2ab6dc895c4896f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 02:15:50 np0005539552 ceph-osd[79800]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 02:15:50 np0005539552 ceph-osd[79800]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 29 02:15:50 np0005539552 ceph-osd[79800]: pidfile_write: ignore empty --pid-file
Nov 29 02:15:50 np0005539552 ceph-osd[79800]: bdev(0x55cbab5c9c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 29 02:15:50 np0005539552 ceph-osd[79800]: bdev(0x55cbab5c9c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 29 02:15:50 np0005539552 ceph-osd[79800]: bdev(0x55cbab5c9c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 02:15:50 np0005539552 ceph-osd[79800]: bdev(0x55cbab5c9c00 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 02:15:50 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 02:15:50 np0005539552 ceph-osd[79800]: bdev(0x55cbac3df000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 29 02:15:50 np0005539552 ceph-osd[79800]: bdev(0x55cbac3df000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 29 02:15:50 np0005539552 ceph-osd[79800]: bdev(0x55cbac3df000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 02:15:50 np0005539552 ceph-osd[79800]: bdev(0x55cbac3df000 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 02:15:50 np0005539552 ceph-osd[79800]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 7.0 GiB
Nov 29 02:15:50 np0005539552 ceph-osd[79800]: bdev(0x55cbac3df000 /var/lib/ceph/osd/ceph-2/block) close
Nov 29 02:15:50 np0005539552 ceph-osd[79800]: bdev(0x55cbab5c9c00 /var/lib/ceph/osd/ceph-2/block) close
Nov 29 02:15:50 np0005539552 bash[79780]: b3beb659a1c189f7513ac7d010393b99d28c34611bbfcc54f2ab6dc895c4896f
Nov 29 02:15:50 np0005539552 systemd[1]: Started Ceph osd.2 for b66774a7-56d9-5535-bd8c-681234404870.
Nov 29 02:15:50 np0005539552 ceph-osd[79800]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Nov 29 02:15:50 np0005539552 ceph-osd[79800]: load: jerasure load: lrc 
Nov 29 02:15:50 np0005539552 ceph-osd[79800]: bdev(0x55cbac464c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 29 02:15:50 np0005539552 ceph-osd[79800]: bdev(0x55cbac464c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 29 02:15:50 np0005539552 ceph-osd[79800]: bdev(0x55cbac464c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 02:15:50 np0005539552 ceph-osd[79800]: bdev(0x55cbac464c00 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 02:15:50 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 02:15:50 np0005539552 ceph-osd[79800]: bdev(0x55cbac464c00 /var/lib/ceph/osd/ceph-2/block) close
Nov 29 02:15:50 np0005539552 ceph-mon[77121]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 29 02:15:50 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:50 np0005539552 ceph-osd[79800]: bdev(0x55cbac464c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 29 02:15:50 np0005539552 ceph-osd[79800]: bdev(0x55cbac464c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 29 02:15:50 np0005539552 ceph-osd[79800]: bdev(0x55cbac464c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 02:15:50 np0005539552 ceph-osd[79800]: bdev(0x55cbac464c00 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 02:15:50 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 02:15:50 np0005539552 ceph-osd[79800]: bdev(0x55cbac464c00 /var/lib/ceph/osd/ceph-2/block) close
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: bdev(0x55cbac464c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: bdev(0x55cbac464c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: bdev(0x55cbac464c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: bdev(0x55cbac464c00 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: bdev(0x55cbac465400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: bdev(0x55cbac465400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: bdev(0x55cbac465400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: bdev(0x55cbac465400 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 7.0 GiB
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: bluefs mount
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: bluefs mount shared_bdev_used = 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: RocksDB version: 7.9.2
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Git sha 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: DB SUMMARY
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: DB Session ID:  2812XRWN63JIDQNL7YOX
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: CURRENT file:  CURRENT
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                         Options.error_if_exists: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                       Options.create_if_missing: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                                     Options.env: 0x55cbac467ce0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                                Options.info_log: 0x55cbab654ea0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                              Options.statistics: (nil)
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                               Options.use_fsync: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                              Options.db_log_dir: 
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                                 Options.wal_dir: db.wal
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                    Options.write_buffer_manager: 0x55cbac53c460
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.unordered_write: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                               Options.row_cache: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                              Options.wal_filter: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.two_write_queues: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.wal_compression: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.atomic_flush: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.max_background_jobs: 4
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.max_background_compactions: -1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.max_subcompactions: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                          Options.max_open_files: -1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Compression algorithms supported:
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: 	kZSTD supported: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: 	kXpressCompression supported: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: 	kBZip2Compression supported: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: 	kLZ4Compression supported: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: 	kZlibCompression supported: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: 	kSnappyCompression supported: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.compaction_filter: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cbab655560)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55cbab63cdd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.compression: LZ4
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.num_levels: 7
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:           Options.merge_operator: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.compaction_filter: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cbab655560)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55cbab63cdd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.compression: LZ4
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.num_levels: 7
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:           Options.merge_operator: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.compaction_filter: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cbab655560)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55cbab63cdd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.compression: LZ4
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.num_levels: 7
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:           Options.merge_operator: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.compaction_filter: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cbab655560)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55cbab63cdd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.compression: LZ4
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.num_levels: 7
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:           Options.merge_operator: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.compaction_filter: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cbab655560)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55cbab63cdd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.compression: LZ4
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.num_levels: 7
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:           Options.merge_operator: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.compaction_filter: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cbab655560)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55cbab63cdd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.compression: LZ4
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.num_levels: 7
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:           Options.merge_operator: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.compaction_filter: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cbab655560)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55cbab63cdd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.compression: LZ4
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.num_levels: 7
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:           Options.merge_operator: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.compaction_filter: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cbab655520)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55cbab63c430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.compression: LZ4
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.num_levels: 7
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:           Options.merge_operator: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.compaction_filter: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cbab655520)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55cbab63c430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.compression: LZ4
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.num_levels: 7
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:           Options.merge_operator: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.compaction_filter: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cbab655520)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55cbab63c430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.compression: LZ4
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.num_levels: 7
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 7da67937-375e-4d80-97a3-6d39ffc86279
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400551185809, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400551186060, "job": 1, "event": "recovery_finished"}
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: freelist init
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: freelist _read_cfg
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: bluefs umount
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: bdev(0x55cbac465400 /var/lib/ceph/osd/ceph-2/block) close
Nov 29 02:15:51 np0005539552 podman[80160]: 2025-11-29 07:15:51.293874289 +0000 UTC m=+0.079334894 container create 12dee6cbdd7a181703936d01674c69d3d780fc3c24e14cde55bbf89c7ae1db45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 02:15:51 np0005539552 podman[80160]: 2025-11-29 07:15:51.235456068 +0000 UTC m=+0.020916753 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:15:51 np0005539552 systemd[1]: Started libpod-conmon-12dee6cbdd7a181703936d01674c69d3d780fc3c24e14cde55bbf89c7ae1db45.scope.
Nov 29 02:15:51 np0005539552 systemd[1]: Started libcrun container.
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: bdev(0x55cbac465400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: bdev(0x55cbac465400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: bdev(0x55cbac465400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: bdev(0x55cbac465400 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 7.0 GiB
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: bluefs mount
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: bluefs mount shared_bdev_used = 4718592
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: RocksDB version: 7.9.2
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Git sha 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: DB SUMMARY
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: DB Session ID:  2812XRWN63JIDQNL7YOW
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: CURRENT file:  CURRENT
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                         Options.error_if_exists: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                       Options.create_if_missing: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                                     Options.env: 0x55cbac467f80
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                                Options.info_log: 0x55cbab6589e0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                              Options.statistics: (nil)
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                               Options.use_fsync: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                              Options.db_log_dir: 
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                                 Options.wal_dir: db.wal
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                    Options.write_buffer_manager: 0x55cbac53c6e0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.unordered_write: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                               Options.row_cache: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                              Options.wal_filter: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.two_write_queues: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.wal_compression: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.atomic_flush: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.max_background_jobs: 4
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.max_background_compactions: -1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.max_subcompactions: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                          Options.max_open_files: -1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Compression algorithms supported:
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: 	kZSTD supported: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: 	kXpressCompression supported: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: 	kBZip2Compression supported: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: 	kLZ4Compression supported: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: 	kZlibCompression supported: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: 	kSnappyCompression supported: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.compaction_filter: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cbab623920)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55cbab63d350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.compression: LZ4
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.num_levels: 7
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:           Options.merge_operator: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.compaction_filter: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cbab623920)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55cbab63d350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.compression: LZ4
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.num_levels: 7
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:           Options.merge_operator: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.compaction_filter: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cbab623920)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55cbab63d350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.compression: LZ4
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.num_levels: 7
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:           Options.merge_operator: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.compaction_filter: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cbab623920)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55cbab63d350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.compression: LZ4
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.num_levels: 7
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:           Options.merge_operator: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.compaction_filter: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cbab623920)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55cbab63d350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.compression: LZ4
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.num_levels: 7
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:           Options.merge_operator: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.compaction_filter: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cbab623920)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55cbab63d350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.compression: LZ4
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.num_levels: 7
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:           Options.merge_operator: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.compaction_filter: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cbab623920)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55cbab63d350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.compression: LZ4
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.num_levels: 7
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:           Options.merge_operator: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.compaction_filter: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cbac45f300)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55cbab63d4b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.compression: LZ4
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.num_levels: 7
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:           Options.merge_operator: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.compaction_filter: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cbac45f300)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55cbab63d4b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.compression: LZ4
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.num_levels: 7
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:           Options.merge_operator: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.compaction_filter: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cbac45f300)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55cbab63d4b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.compression: LZ4
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.num_levels: 7
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 7da67937-375e-4d80-97a3-6d39ffc86279
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400551427584, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 29 02:15:51 np0005539552 podman[80160]: 2025-11-29 07:15:51.435493082 +0000 UTC m=+0.220953677 container init 12dee6cbdd7a181703936d01674c69d3d780fc3c24e14cde55bbf89c7ae1db45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 02:15:51 np0005539552 podman[80160]: 2025-11-29 07:15:51.443950831 +0000 UTC m=+0.229411416 container start 12dee6cbdd7a181703936d01674c69d3d780fc3c24e14cde55bbf89c7ae1db45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 02:15:51 np0005539552 stupefied_franklin[80176]: 167 167
Nov 29 02:15:51 np0005539552 systemd[1]: libpod-12dee6cbdd7a181703936d01674c69d3d780fc3c24e14cde55bbf89c7ae1db45.scope: Deactivated successfully.
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400551454422, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400551, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7da67937-375e-4d80-97a3-6d39ffc86279", "db_session_id": "2812XRWN63JIDQNL7YOW", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:15:51 np0005539552 podman[80160]: 2025-11-29 07:15:51.482402106 +0000 UTC m=+0.267862701 container attach 12dee6cbdd7a181703936d01674c69d3d780fc3c24e14cde55bbf89c7ae1db45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 02:15:51 np0005539552 podman[80160]: 2025-11-29 07:15:51.48371752 +0000 UTC m=+0.269178135 container died 12dee6cbdd7a181703936d01674c69d3d780fc3c24e14cde55bbf89c7ae1db45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_franklin, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400551500398, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400551, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7da67937-375e-4d80-97a3-6d39ffc86279", "db_session_id": "2812XRWN63JIDQNL7YOW", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400551537296, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400551, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7da67937-375e-4d80-97a3-6d39ffc86279", "db_session_id": "2812XRWN63JIDQNL7YOW", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400551558584, "job": 1, "event": "recovery_finished"}
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 29 02:15:51 np0005539552 systemd[1]: var-lib-containers-storage-overlay-88e430d242690b71e484a939384b738a1daa0cdbb679db7670f8857e8d1ff6a9-merged.mount: Deactivated successfully.
Nov 29 02:15:51 np0005539552 podman[80160]: 2025-11-29 07:15:51.719254033 +0000 UTC m=+0.504714658 container remove 12dee6cbdd7a181703936d01674c69d3d780fc3c24e14cde55bbf89c7ae1db45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_franklin, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55cbac627c00
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: DB pointer 0x55cbab677a00
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.3 total, 0.3 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.3 total, 0.3 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55cbab63d350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.3 total, 0.3 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55cbab63d350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.3 total, 0.3 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55cbab63d350#2 capacity: 460.80 MB usag
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: _get_class not permitted to load lua
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: _get_class not permitted to load sdk
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: _get_class not permitted to load test_remote_reads
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: osd.2 0 load_pgs
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: osd.2 0 load_pgs opened 0 pgs
Nov 29 02:15:51 np0005539552 ceph-osd[79800]: osd.2 0 log_to_monitors true
Nov 29 02:15:51 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-2[79796]: 2025-11-29T07:15:51.761+0000 7f9f49a95740 -1 osd.2 0 log_to_monitors true
Nov 29 02:15:51 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Nov 29 02:15:51 np0005539552 ceph-mon[77121]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.102:6800/2082573902,v1:192.168.122.102:6801/2082573902]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 29 02:15:51 np0005539552 systemd[1]: libpod-conmon-12dee6cbdd7a181703936d01674c69d3d780fc3c24e14cde55bbf89c7ae1db45.scope: Deactivated successfully.
Nov 29 02:15:51 np0005539552 podman[80415]: 2025-11-29 07:15:51.872369213 +0000 UTC m=+0.046232537 container create e2c0967ea2b608f669e67c95a5c0461a78d71d0eb91fea729089a389fb8eebf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_nash, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 02:15:51 np0005539552 podman[80415]: 2025-11-29 07:15:51.846998327 +0000 UTC m=+0.020861661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:15:51 np0005539552 systemd[1]: Started libpod-conmon-e2c0967ea2b608f669e67c95a5c0461a78d71d0eb91fea729089a389fb8eebf7.scope.
Nov 29 02:15:51 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:51 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:51 np0005539552 systemd[1]: Started libcrun container.
Nov 29 02:15:51 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb94e12f0cd1818fa8dd95b4b6b8386559cb45a4d2de9c53d1868afaea344854/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:51 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb94e12f0cd1818fa8dd95b4b6b8386559cb45a4d2de9c53d1868afaea344854/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:51 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb94e12f0cd1818fa8dd95b4b6b8386559cb45a4d2de9c53d1868afaea344854/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:51 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb94e12f0cd1818fa8dd95b4b6b8386559cb45a4d2de9c53d1868afaea344854/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:52 np0005539552 podman[80415]: 2025-11-29 07:15:52.404468348 +0000 UTC m=+0.578331692 container init e2c0967ea2b608f669e67c95a5c0461a78d71d0eb91fea729089a389fb8eebf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_nash, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 02:15:52 np0005539552 podman[80415]: 2025-11-29 07:15:52.410855494 +0000 UTC m=+0.584718818 container start e2c0967ea2b608f669e67c95a5c0461a78d71d0eb91fea729089a389fb8eebf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:15:52 np0005539552 podman[80415]: 2025-11-29 07:15:52.705196557 +0000 UTC m=+0.879059971 container attach e2c0967ea2b608f669e67c95a5c0461a78d71d0eb91fea729089a389fb8eebf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_nash, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:15:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e36 e36: 3 total, 2 up, 3 in
Nov 29 02:15:52 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 29 02:15:52 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 29 02:15:53 np0005539552 nostalgic_nash[80432]: {
Nov 29 02:15:53 np0005539552 nostalgic_nash[80432]:    "681bc90e-5cd2-4106-9be9-9995623a17e0": {
Nov 29 02:15:53 np0005539552 nostalgic_nash[80432]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:15:53 np0005539552 nostalgic_nash[80432]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:15:53 np0005539552 nostalgic_nash[80432]:        "osd_id": 2,
Nov 29 02:15:53 np0005539552 nostalgic_nash[80432]:        "osd_uuid": "681bc90e-5cd2-4106-9be9-9995623a17e0",
Nov 29 02:15:53 np0005539552 nostalgic_nash[80432]:        "type": "bluestore"
Nov 29 02:15:53 np0005539552 nostalgic_nash[80432]:    }
Nov 29 02:15:53 np0005539552 nostalgic_nash[80432]: }
Nov 29 02:15:53 np0005539552 systemd[1]: libpod-e2c0967ea2b608f669e67c95a5c0461a78d71d0eb91fea729089a389fb8eebf7.scope: Deactivated successfully.
Nov 29 02:15:53 np0005539552 podman[80415]: 2025-11-29 07:15:53.310856155 +0000 UTC m=+1.484719519 container died e2c0967ea2b608f669e67c95a5c0461a78d71d0eb91fea729089a389fb8eebf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:15:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:15:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]} v 0) v1
Nov 29 02:15:53 np0005539552 ceph-mon[77121]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.102:6800/2082573902,v1:192.168.122.102:6801/2082573902]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Nov 29 02:15:53 np0005539552 ceph-mon[77121]: from='osd.2 [v2:192.168.122.102:6800/2082573902,v1:192.168.122.102:6801/2082573902]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 29 02:15:53 np0005539552 ceph-mon[77121]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 29 02:15:53 np0005539552 ceph-mon[77121]: from='client.? 192.168.122.100:0/927647266' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 29 02:15:53 np0005539552 ceph-mon[77121]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 29 02:15:53 np0005539552 systemd[1]: var-lib-containers-storage-overlay-bb94e12f0cd1818fa8dd95b4b6b8386559cb45a4d2de9c53d1868afaea344854-merged.mount: Deactivated successfully.
Nov 29 02:15:54 np0005539552 podman[80415]: 2025-11-29 07:15:54.136717378 +0000 UTC m=+2.310580712 container remove e2c0967ea2b608f669e67c95a5c0461a78d71d0eb91fea729089a389fb8eebf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_nash, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 02:15:54 np0005539552 systemd[1]: libpod-conmon-e2c0967ea2b608f669e67c95a5c0461a78d71d0eb91fea729089a389fb8eebf7.scope: Deactivated successfully.
Nov 29 02:15:54 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e37 e37: 3 total, 2 up, 3 in
Nov 29 02:15:54 np0005539552 ceph-osd[79800]: osd.2 0 done with init, starting boot process
Nov 29 02:15:54 np0005539552 ceph-osd[79800]: osd.2 0 start_boot
Nov 29 02:15:54 np0005539552 ceph-osd[79800]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 29 02:15:54 np0005539552 ceph-osd[79800]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 29 02:15:54 np0005539552 ceph-osd[79800]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 29 02:15:54 np0005539552 ceph-osd[79800]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 29 02:15:54 np0005539552 ceph-osd[79800]: osd.2 0  bench count 12288000 bsize 4 KiB
Nov 29 02:15:54 np0005539552 ceph-mon[77121]: from='osd.2 [v2:192.168.122.102:6800/2082573902,v1:192.168.122.102:6801/2082573902]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Nov 29 02:15:54 np0005539552 ceph-mon[77121]: from='client.? 192.168.122.100:0/927647266' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 29 02:15:54 np0005539552 ceph-mon[77121]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Nov 29 02:15:54 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:54 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:54 np0005539552 ceph-mon[77121]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Nov 29 02:15:55 np0005539552 podman[80692]: 2025-11-29 07:15:55.641942715 +0000 UTC m=+0.198749763 container exec 2e449e743ed8b622f7d948671af16b72c407124db8f8583030f534a9757aa05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 02:15:55 np0005539552 podman[80692]: 2025-11-29 07:15:55.961142362 +0000 UTC m=+0.517949390 container exec_died 2e449e743ed8b622f7d948671af16b72c407124db8f8583030f534a9757aa05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-2, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 02:15:58 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:58 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:58 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:58 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:58 np0005539552 systemd[1]: session-20.scope: Deactivated successfully.
Nov 29 02:15:58 np0005539552 systemd[1]: session-20.scope: Consumed 8.415s CPU time.
Nov 29 02:15:58 np0005539552 systemd-logind[788]: Session 20 logged out. Waiting for processes to exit.
Nov 29 02:15:58 np0005539552 systemd-logind[788]: Removed session 20.
Nov 29 02:15:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:15:58 np0005539552 podman[81050]: 2025-11-29 07:15:58.498921249 +0000 UTC m=+0.055477886 container create eaa98f4f05ee5a13da195b9639e4680b5504b48bcd7f4e6579dd375ffa926a04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:15:58 np0005539552 podman[81050]: 2025-11-29 07:15:58.465751971 +0000 UTC m=+0.022308628 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:15:58 np0005539552 systemd[1]: Started libpod-conmon-eaa98f4f05ee5a13da195b9639e4680b5504b48bcd7f4e6579dd375ffa926a04.scope.
Nov 29 02:15:58 np0005539552 systemd[1]: Started libcrun container.
Nov 29 02:15:58 np0005539552 podman[81050]: 2025-11-29 07:15:58.604209513 +0000 UTC m=+0.160766150 container init eaa98f4f05ee5a13da195b9639e4680b5504b48bcd7f4e6579dd375ffa926a04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_wright, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:15:58 np0005539552 podman[81050]: 2025-11-29 07:15:58.610819284 +0000 UTC m=+0.167375921 container start eaa98f4f05ee5a13da195b9639e4680b5504b48bcd7f4e6579dd375ffa926a04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_wright, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 02:15:58 np0005539552 gracious_wright[81066]: 167 167
Nov 29 02:15:58 np0005539552 systemd[1]: libpod-eaa98f4f05ee5a13da195b9639e4680b5504b48bcd7f4e6579dd375ffa926a04.scope: Deactivated successfully.
Nov 29 02:15:58 np0005539552 podman[81050]: 2025-11-29 07:15:58.631611151 +0000 UTC m=+0.188167798 container attach eaa98f4f05ee5a13da195b9639e4680b5504b48bcd7f4e6579dd375ffa926a04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_wright, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:15:58 np0005539552 podman[81050]: 2025-11-29 07:15:58.632282419 +0000 UTC m=+0.188839056 container died eaa98f4f05ee5a13da195b9639e4680b5504b48bcd7f4e6579dd375ffa926a04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_wright, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 02:15:58 np0005539552 systemd[1]: var-lib-containers-storage-overlay-db9b0ade6dae9c7e7680a755e8cd88c3a2578029a6253d6c92978c36b4089345-merged.mount: Deactivated successfully.
Nov 29 02:15:58 np0005539552 podman[81050]: 2025-11-29 07:15:58.872924614 +0000 UTC m=+0.429481261 container remove eaa98f4f05ee5a13da195b9639e4680b5504b48bcd7f4e6579dd375ffa926a04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 02:15:58 np0005539552 systemd[1]: libpod-conmon-eaa98f4f05ee5a13da195b9639e4680b5504b48bcd7f4e6579dd375ffa926a04.scope: Deactivated successfully.
Nov 29 02:15:59 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e38 e38: 3 total, 2 up, 3 in
Nov 29 02:15:59 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e38 crush map has features 3314933000854323200, adjusting msgr requires
Nov 29 02:15:59 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e38 crush map has features 432629239337189376, adjusting msgr requires
Nov 29 02:15:59 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e38 crush map has features 432629239337189376, adjusting msgr requires
Nov 29 02:15:59 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e38 crush map has features 432629239337189376, adjusting msgr requires
Nov 29 02:15:59 np0005539552 podman[81090]: 2025-11-29 07:15:59.038051655 +0000 UTC m=+0.054656414 container create 507c4ffc68a63f109f0e6c2fa961976b504b06b2c84ebc3dcc827ff034584871 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 02:15:59 np0005539552 podman[81090]: 2025-11-29 07:15:59.004007005 +0000 UTC m=+0.020611784 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:15:59 np0005539552 systemd[1]: Started libpod-conmon-507c4ffc68a63f109f0e6c2fa961976b504b06b2c84ebc3dcc827ff034584871.scope.
Nov 29 02:15:59 np0005539552 systemd[1]: Started libcrun container.
Nov 29 02:15:59 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f527ba166db4a01fe64a0c37febc9d83ec7c45d81e59808d69122e8f317773f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:59 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f527ba166db4a01fe64a0c37febc9d83ec7c45d81e59808d69122e8f317773f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:59 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f527ba166db4a01fe64a0c37febc9d83ec7c45d81e59808d69122e8f317773f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:59 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f527ba166db4a01fe64a0c37febc9d83ec7c45d81e59808d69122e8f317773f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:59 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.4", "id": [0, 2]}]: dispatch
Nov 29 02:15:59 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.d", "id": [1, 2]}]: dispatch
Nov 29 02:15:59 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.18", "id": [0, 2]}]: dispatch
Nov 29 02:15:59 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.10", "id": [0, 2]}]: dispatch
Nov 29 02:15:59 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.4", "id": [0, 2]}]': finished
Nov 29 02:15:59 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.d", "id": [1, 2]}]': finished
Nov 29 02:15:59 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.18", "id": [0, 2]}]': finished
Nov 29 02:15:59 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.10", "id": [0, 2]}]': finished
Nov 29 02:15:59 np0005539552 podman[81090]: 2025-11-29 07:15:59.469928517 +0000 UTC m=+0.486533326 container init 507c4ffc68a63f109f0e6c2fa961976b504b06b2c84ebc3dcc827ff034584871 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:15:59 np0005539552 podman[81090]: 2025-11-29 07:15:59.479408723 +0000 UTC m=+0.496013502 container start 507c4ffc68a63f109f0e6c2fa961976b504b06b2c84ebc3dcc827ff034584871 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Nov 29 02:15:59 np0005539552 podman[81090]: 2025-11-29 07:15:59.681044988 +0000 UTC m=+0.697649747 container attach 507c4ffc68a63f109f0e6c2fa961976b504b06b2c84ebc3dcc827ff034584871 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_banach, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:16:00 np0005539552 hopeful_banach[81106]: [
Nov 29 02:16:00 np0005539552 hopeful_banach[81106]:    {
Nov 29 02:16:00 np0005539552 hopeful_banach[81106]:        "available": false,
Nov 29 02:16:00 np0005539552 hopeful_banach[81106]:        "ceph_device": false,
Nov 29 02:16:00 np0005539552 hopeful_banach[81106]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 02:16:00 np0005539552 hopeful_banach[81106]:        "lsm_data": {},
Nov 29 02:16:00 np0005539552 hopeful_banach[81106]:        "lvs": [],
Nov 29 02:16:00 np0005539552 hopeful_banach[81106]:        "path": "/dev/sr0",
Nov 29 02:16:00 np0005539552 hopeful_banach[81106]:        "rejected_reasons": [
Nov 29 02:16:00 np0005539552 hopeful_banach[81106]:            "Insufficient space (<5GB)",
Nov 29 02:16:00 np0005539552 hopeful_banach[81106]:            "Has a FileSystem"
Nov 29 02:16:00 np0005539552 hopeful_banach[81106]:        ],
Nov 29 02:16:00 np0005539552 hopeful_banach[81106]:        "sys_api": {
Nov 29 02:16:00 np0005539552 hopeful_banach[81106]:            "actuators": null,
Nov 29 02:16:00 np0005539552 hopeful_banach[81106]:            "device_nodes": "sr0",
Nov 29 02:16:00 np0005539552 hopeful_banach[81106]:            "devname": "sr0",
Nov 29 02:16:00 np0005539552 hopeful_banach[81106]:            "human_readable_size": "482.00 KB",
Nov 29 02:16:00 np0005539552 hopeful_banach[81106]:            "id_bus": "ata",
Nov 29 02:16:00 np0005539552 hopeful_banach[81106]:            "model": "QEMU DVD-ROM",
Nov 29 02:16:00 np0005539552 hopeful_banach[81106]:            "nr_requests": "2",
Nov 29 02:16:00 np0005539552 hopeful_banach[81106]:            "parent": "/dev/sr0",
Nov 29 02:16:00 np0005539552 hopeful_banach[81106]:            "partitions": {},
Nov 29 02:16:00 np0005539552 hopeful_banach[81106]:            "path": "/dev/sr0",
Nov 29 02:16:00 np0005539552 hopeful_banach[81106]:            "removable": "1",
Nov 29 02:16:00 np0005539552 hopeful_banach[81106]:            "rev": "2.5+",
Nov 29 02:16:00 np0005539552 hopeful_banach[81106]:            "ro": "0",
Nov 29 02:16:00 np0005539552 hopeful_banach[81106]:            "rotational": "1",
Nov 29 02:16:00 np0005539552 hopeful_banach[81106]:            "sas_address": "",
Nov 29 02:16:00 np0005539552 hopeful_banach[81106]:            "sas_device_handle": "",
Nov 29 02:16:00 np0005539552 hopeful_banach[81106]:            "scheduler_mode": "mq-deadline",
Nov 29 02:16:00 np0005539552 hopeful_banach[81106]:            "sectors": 0,
Nov 29 02:16:00 np0005539552 hopeful_banach[81106]:            "sectorsize": "2048",
Nov 29 02:16:00 np0005539552 hopeful_banach[81106]:            "size": 493568.0,
Nov 29 02:16:00 np0005539552 hopeful_banach[81106]:            "support_discard": "2048",
Nov 29 02:16:00 np0005539552 hopeful_banach[81106]:            "type": "disk",
Nov 29 02:16:00 np0005539552 hopeful_banach[81106]:            "vendor": "QEMU"
Nov 29 02:16:00 np0005539552 hopeful_banach[81106]:        }
Nov 29 02:16:00 np0005539552 hopeful_banach[81106]:    }
Nov 29 02:16:00 np0005539552 hopeful_banach[81106]: ]
Nov 29 02:16:00 np0005539552 systemd[1]: libpod-507c4ffc68a63f109f0e6c2fa961976b504b06b2c84ebc3dcc827ff034584871.scope: Deactivated successfully.
Nov 29 02:16:00 np0005539552 podman[81090]: 2025-11-29 07:16:00.656707117 +0000 UTC m=+1.673311886 container died 507c4ffc68a63f109f0e6c2fa961976b504b06b2c84ebc3dcc827ff034584871 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_banach, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 02:16:00 np0005539552 systemd[1]: libpod-507c4ffc68a63f109f0e6c2fa961976b504b06b2c84ebc3dcc827ff034584871.scope: Consumed 1.175s CPU time.
Nov 29 02:16:01 np0005539552 systemd[1]: var-lib-containers-storage-overlay-f527ba166db4a01fe64a0c37febc9d83ec7c45d81e59808d69122e8f317773f7-merged.mount: Deactivated successfully.
Nov 29 02:16:01 np0005539552 podman[81090]: 2025-11-29 07:16:01.602047692 +0000 UTC m=+2.618652441 container remove 507c4ffc68a63f109f0e6c2fa961976b504b06b2c84ebc3dcc827ff034584871 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_banach, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:16:01 np0005539552 systemd[1]: libpod-conmon-507c4ffc68a63f109f0e6c2fa961976b504b06b2c84ebc3dcc827ff034584871.scope: Deactivated successfully.
Nov 29 02:16:01 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e39 e39: 3 total, 2 up, 3 in
Nov 29 02:16:01 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:16:01 np0005539552 ceph-mon[77121]: from='client.? 192.168.122.100:0/3782324994' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 29 02:16:02 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:16:02 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Nov 29 02:16:02 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:02 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:02 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 29 02:16:02 np0005539552 ceph-mon[77121]: Adjusting osd_memory_target on compute-2 to 127.9M
Nov 29 02:16:02 np0005539552 ceph-mon[77121]: Unable to set osd_memory_target on compute-2 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Nov 29 02:16:02 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:16:02 np0005539552 ceph-mon[77121]: Updating compute-0:/etc/ceph/ceph.conf
Nov 29 02:16:02 np0005539552 ceph-mon[77121]: Updating compute-1:/etc/ceph/ceph.conf
Nov 29 02:16:02 np0005539552 ceph-mon[77121]: Updating compute-2:/etc/ceph/ceph.conf
Nov 29 02:16:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e40 e40: 3 total, 2 up, 3 in
Nov 29 02:16:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:03 np0005539552 ceph-osd[79800]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 21.624 iops: 5535.864 elapsed_sec: 0.542
Nov 29 02:16:03 np0005539552 ceph-osd[79800]: log_channel(cluster) log [WRN] : OSD bench result of 5535.863781 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 29 02:16:03 np0005539552 ceph-osd[79800]: osd.2 0 waiting for initial osdmap
Nov 29 02:16:03 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-2[79796]: 2025-11-29T07:16:03.725+0000 7f9f45a15640 -1 osd.2 0 waiting for initial osdmap
Nov 29 02:16:03 np0005539552 ceph-osd[79800]: osd.2 40 crush map has features 432629239337189376, adjusting msgr requires for clients
Nov 29 02:16:03 np0005539552 ceph-osd[79800]: osd.2 40 crush map has features 432629239337189376 was 288232575208792577, adjusting msgr requires for mons
Nov 29 02:16:03 np0005539552 ceph-osd[79800]: osd.2 40 crush map has features 3314933000854323200, adjusting msgr requires for osds
Nov 29 02:16:03 np0005539552 ceph-osd[79800]: osd.2 40 check_osdmap_features require_osd_release unknown -> reef
Nov 29 02:16:03 np0005539552 ceph-osd[79800]: osd.2 40 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 29 02:16:03 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-2[79796]: 2025-11-29T07:16:03.760+0000 7f9f4103d640 -1 osd.2 40 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 29 02:16:03 np0005539552 ceph-osd[79800]: osd.2 40 set_numa_affinity not setting numa affinity
Nov 29 02:16:03 np0005539552 ceph-osd[79800]: osd.2 40 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Nov 29 02:16:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e41 e41: 3 total, 3 up, 3 in
Nov 29 02:16:03 np0005539552 ceph-osd[79800]: osd.2 41 state: booting -> active
Nov 29 02:16:03 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 41 pg[3.18( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=41) [2] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:03 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 41 pg[4.10( empty local-lis/les=0/0 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=41) [2] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:03 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 41 pg[2.10( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:03 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 41 pg[2.13( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:03 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 41 pg[2.c( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:03 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 41 pg[4.14( empty local-lis/les=0/0 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=41) [2] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:03 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 41 pg[2.d( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:03 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 41 pg[2.a( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:03 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 41 pg[3.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=24/24 les/c/f=25/25/0 sis=41) [2] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:03 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 41 pg[4.6( empty local-lis/les=0/0 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=41) [2] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:03 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 41 pg[3.4( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=41) [2] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:03 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 41 pg[4.3( empty local-lis/les=0/0 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=41) [2] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:03 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 41 pg[3.8( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=41) [2] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:03 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 41 pg[2.1b( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:03 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 41 pg[2.15( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:03 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 41 pg[4.1c( empty local-lis/les=0/0 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=41) [2] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:03 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 41 pg[4.2( empty local-lis/les=0/0 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=41) [2] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:03 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 41 pg[4.1d( empty local-lis/les=0/0 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=41) [2] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:03 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 41 pg[3.1b( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=41) [2] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:03 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 41 pg[4.19( empty local-lis/les=0/0 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=41) [2] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:03 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:03 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Nov 29 02:16:03 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:16:03 np0005539552 ceph-mon[77121]: Updating compute-1:/var/lib/ceph/b66774a7-56d9-5535-bd8c-681234404870/config/ceph.conf
Nov 29 02:16:03 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 29 02:16:03 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:16:03 np0005539552 ceph-mon[77121]: Updating compute-2:/var/lib/ceph/b66774a7-56d9-5535-bd8c-681234404870/config/ceph.conf
Nov 29 02:16:03 np0005539552 ceph-mon[77121]: Updating compute-0:/var/lib/ceph/b66774a7-56d9-5535-bd8c-681234404870/config/ceph.conf
Nov 29 02:16:03 np0005539552 ceph-mon[77121]: OSD bench result of 5535.863781 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 29 02:16:04 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e42 e42: 3 total, 3 up, 3 in
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.1f( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.8( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.1b( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.b( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.17( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.6( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.a( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.d( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.10( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.e( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.1c( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:04 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:16:04 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 29 02:16:04 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:16:04 np0005539552 ceph-mon[77121]: osd.2 [v2:192.168.122.102:6800/2082573902,v1:192.168.122.102:6801/2082573902] boot
Nov 29 02:16:04 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:04 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:04 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:04 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:04 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:04 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:04 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:04 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:04 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[4.10( empty local-lis/les=41/42 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=41) [2] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=41/42 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[2.10( empty local-lis/les=41/42 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=41/42 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[2.c( empty local-lis/les=41/42 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=41/42 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=41) [2] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=41/42 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=41/42 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[4.14( empty local-lis/les=41/42 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=41) [2] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=41/42 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=41) [2] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[3.0( empty local-lis/les=41/42 n=0 ec=16/16 lis/c=24/24 les/c/f=25/25/0 sis=41) [2] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[4.6( empty local-lis/les=41/42 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=41) [2] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[4.2( empty local-lis/les=41/42 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=41) [2] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.0( empty local-lis/les=41/42 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[4.3( empty local-lis/les=41/42 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=41) [2] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[3.4( empty local-lis/les=41/42 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=41) [2] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=41/42 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[4.1d( empty local-lis/les=41/42 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=41) [2] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=41/42 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=41) [2] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[4.19( empty local-lis/les=41/42 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=41) [2] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[3.1b( empty local-lis/les=41/42 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=41) [2] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.8( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.1f( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.b( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.17( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.6( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.a( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.1b( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.10( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.d( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.e( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.1c( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=0 lpr=42 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:05 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[3.1a( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=0 lpr=42 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:05 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=41) [2] r=0 lpr=42 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:05 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[3.15( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=0 lpr=42 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:05 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[2.12( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=41) [2] r=0 lpr=42 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:05 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[4.1f( empty local-lis/les=0/0 n=0 ec=24/19 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=0 lpr=42 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:05 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=0 lpr=42 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:05 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=0 lpr=42 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:05 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=41) [2] r=0 lpr=42 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:05 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[4.15( empty local-lis/les=0/0 n=0 ec=24/19 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=0 lpr=42 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:05 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[4.9( empty local-lis/les=0/0 n=0 ec=24/19 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=0 lpr=42 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:05 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[4.8( empty local-lis/les=0/0 n=0 ec=24/19 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=0 lpr=42 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:05 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[3.d( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=0 lpr=42 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:05 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=41) [2] r=0 lpr=42 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:05 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[3.9( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=0 lpr=42 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:05 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=41) [2] r=0 lpr=42 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:05 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=0/0 n=0 ec=24/19 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=0 lpr=42 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:05 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=0 lpr=42 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:05 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=41) [2] r=0 lpr=42 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:05 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=41) [2] r=0 lpr=42 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:05 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[3.1a( empty local-lis/les=41/42 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=0 lpr=42 pi=[29,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:05 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=41/42 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=41) [2] r=0 lpr=42 pi=[22,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:05 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[3.15( empty local-lis/les=41/42 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=0 lpr=42 pi=[29,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:05 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[4.1f( empty local-lis/les=41/42 n=0 ec=24/19 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=0 lpr=42 pi=[29,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:05 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=41/42 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=0 lpr=42 pi=[29,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:05 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[2.12( empty local-lis/les=41/42 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=41) [2] r=0 lpr=42 pi=[22,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:05 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=41/42 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=41) [2] r=0 lpr=42 pi=[22,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:05 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=41/42 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=0 lpr=42 pi=[29,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:05 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=41/42 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=41) [2] r=0 lpr=42 pi=[22,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:05 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[4.15( empty local-lis/les=41/42 n=0 ec=24/19 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=0 lpr=42 pi=[29,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:05 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[4.8( empty local-lis/les=41/42 n=0 ec=24/19 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=0 lpr=42 pi=[29,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:05 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[4.9( empty local-lis/les=41/42 n=0 ec=24/19 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=0 lpr=42 pi=[29,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:05 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=41/42 n=0 ec=24/19 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=0 lpr=42 pi=[29,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:05 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=41/42 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=41) [2] r=0 lpr=42 pi=[22,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:05 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=41/42 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=41) [2] r=0 lpr=42 pi=[22,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:05 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=41/42 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=0 lpr=42 pi=[29,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:05 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=41/42 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=41) [2] r=0 lpr=42 pi=[22,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:05 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[3.9( empty local-lis/les=41/42 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=0 lpr=42 pi=[29,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:05 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 42 pg[3.d( empty local-lis/les=41/42 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=41) [2] r=0 lpr=42 pi=[29,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:05 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Nov 29 02:16:05 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Nov 29 02:16:06 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:16:06 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e43 e43: 3 total, 3 up, 3 in
Nov 29 02:16:07 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:16:07 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e44 e44: 3 total, 3 up, 3 in
Nov 29 02:16:07 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 2.c scrub starts
Nov 29 02:16:07 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 2.c scrub ok
Nov 29 02:16:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:09 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Nov 29 02:16:09 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 2.a scrub starts
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 2.a scrub ok
Nov 29 02:16:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e45 e45: 3 total, 3 up, 3 in
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[5.f( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=9.198799133s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 30.337612152s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[5.c( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=9.198204041s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 30.337032318s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[5.1f( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=9.198974609s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 30.337690353s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[5.f( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=9.198708534s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 30.337612152s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[5.1b( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=9.198025703s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 30.336996078s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[5.1f( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=9.198733330s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 30.337690353s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[5.c( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=9.198075294s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 30.337032318s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[5.1b( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=9.197971344s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 30.336996078s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[5.18( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=9.197464943s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 30.336692810s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[5.3( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=9.198425293s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 30.337657928s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[5.18( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=9.197434425s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 30.336692810s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[5.3( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=9.198396683s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 30.337657928s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[5.1d( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=9.198269844s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 30.337591171s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[5.1d( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=9.198246002s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 30.337591171s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[5.1c( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=9.197703362s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 30.337211609s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[5.1e( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=9.197012901s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 30.336528778s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[5.1c( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=9.197678566s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 30.337211609s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[5.1e( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=9.196991920s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 30.336528778s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[5.11( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=9.197316170s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 30.336877823s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[5.2( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=9.197797775s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 30.337444305s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[5.11( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=9.197291374s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 30.336877823s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[5.17( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=9.197710037s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 30.337396622s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[5.2( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=9.197766304s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 30.337444305s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[5.14( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=9.197371483s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 30.337085724s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[5.17( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=9.197683334s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 30.337396622s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[5.14( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=9.197347641s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 30.337085724s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[5.15( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=9.197838783s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 30.337663651s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[5.6( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=9.197410583s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 30.337251663s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[5.a( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=9.197653770s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 30.337537766s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[5.6( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=9.197381020s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 30.337251663s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[5.a( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=9.197632790s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 30.337537766s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[5.7( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=9.197135925s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 30.337059021s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[5.7( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=9.197111130s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 30.337059021s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[5.10( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=9.196991920s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 30.336977005s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[5.15( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=9.197793961s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 30.337663651s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[5.16( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=9.196973801s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 30.337030411s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[5.1( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=9.196484566s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 30.336542130s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[5.10( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=9.196929932s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 30.336977005s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[5.16( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=9.196949959s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 30.337030411s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[5.1( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=9.196460724s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 30.336542130s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[5.9( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=9.197092056s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 30.337198257s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[5.9( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=9.197031975s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 30.337198257s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[5.5( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=9.196289062s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 30.336553574s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[5.5( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=9.196255684s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 30.336553574s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[5.19( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=9.197001457s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 30.337339401s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[5.19( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=9.196971893s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 30.337339401s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:12 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[6.1( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:13 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[7.1d( empty local-lis/les=0/0 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:13 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:13 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[7.16( empty local-lis/les=0/0 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:13 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[7.1f( empty local-lis/les=0/0 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:13 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[7.14( empty local-lis/les=0/0 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:13 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:13 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:13 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:16:13 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 02:16:13 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:16:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:13 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 3.0 deep-scrub starts
Nov 29 02:16:13 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 3.0 deep-scrub ok
Nov 29 02:16:14 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e46 e46: 3 total, 3 up, 3 in
Nov 29 02:16:14 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 46 pg[7.14( empty local-lis/les=45/46 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 46 pg[7.11( empty local-lis/les=45/46 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 46 pg[7.5( empty local-lis/les=45/46 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 46 pg[7.16( empty local-lis/les=45/46 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 46 pg[6.1( empty local-lis/les=45/46 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 46 pg[7.1f( empty local-lis/les=45/46 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 46 pg[7.a( empty local-lis/les=45/46 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 46 pg[7.1d( empty local-lis/les=45/46 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:16:14 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 02:16:14 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:16:15 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:15 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:15 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.gstlru", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 29 02:16:16 np0005539552 podman[83067]: 2025-11-29 07:16:16.181572871 +0000 UTC m=+0.039206119 container create 8f95f2f6df57450ac33e4fbf4b8376e4b609c007a03fc787dd472a8955442c4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_taussig, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:16:16 np0005539552 systemd[1]: Started libpod-conmon-8f95f2f6df57450ac33e4fbf4b8376e4b609c007a03fc787dd472a8955442c4e.scope.
Nov 29 02:16:16 np0005539552 systemd[1]: Started libcrun container.
Nov 29 02:16:16 np0005539552 podman[83067]: 2025-11-29 07:16:16.255172727 +0000 UTC m=+0.112806045 container init 8f95f2f6df57450ac33e4fbf4b8376e4b609c007a03fc787dd472a8955442c4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_taussig, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 02:16:16 np0005539552 podman[83067]: 2025-11-29 07:16:16.162370352 +0000 UTC m=+0.020003630 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:16:16 np0005539552 podman[83067]: 2025-11-29 07:16:16.26370181 +0000 UTC m=+0.121335068 container start 8f95f2f6df57450ac33e4fbf4b8376e4b609c007a03fc787dd472a8955442c4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_taussig, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:16:16 np0005539552 podman[83067]: 2025-11-29 07:16:16.268566351 +0000 UTC m=+0.126199669 container attach 8f95f2f6df57450ac33e4fbf4b8376e4b609c007a03fc787dd472a8955442c4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_taussig, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:16:16 np0005539552 dazzling_taussig[83083]: 167 167
Nov 29 02:16:16 np0005539552 systemd[1]: libpod-8f95f2f6df57450ac33e4fbf4b8376e4b609c007a03fc787dd472a8955442c4e.scope: Deactivated successfully.
Nov 29 02:16:16 np0005539552 podman[83067]: 2025-11-29 07:16:16.270888239 +0000 UTC m=+0.128521457 container died 8f95f2f6df57450ac33e4fbf4b8376e4b609c007a03fc787dd472a8955442c4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:16:16 np0005539552 systemd[1]: var-lib-containers-storage-overlay-5185f5c445304c670592cd524a99eaa67792ae2effff19b56f8d3e3eb82d57cc-merged.mount: Deactivated successfully.
Nov 29 02:16:16 np0005539552 podman[83067]: 2025-11-29 07:16:16.315376848 +0000 UTC m=+0.173010076 container remove 8f95f2f6df57450ac33e4fbf4b8376e4b609c007a03fc787dd472a8955442c4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_taussig, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 02:16:16 np0005539552 systemd[1]: libpod-conmon-8f95f2f6df57450ac33e4fbf4b8376e4b609c007a03fc787dd472a8955442c4e.scope: Deactivated successfully.
Nov 29 02:16:16 np0005539552 systemd[1]: Reloading.
Nov 29 02:16:16 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:16:16 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:16:16 np0005539552 systemd[1]: Reloading.
Nov 29 02:16:16 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:16:16 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.gstlru", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 29 02:16:16 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:16 np0005539552 ceph-mon[77121]: Deploying daemon rgw.rgw.compute-2.gstlru on compute-2
Nov 29 02:16:16 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:16:16 np0005539552 systemd[1]: Starting Ceph rgw.rgw.compute-2.gstlru for b66774a7-56d9-5535-bd8c-681234404870...
Nov 29 02:16:17 np0005539552 podman[83229]: 2025-11-29 07:16:17.152560276 +0000 UTC m=+0.043901786 container create eb1478c032aef07db01cb40ab53924dfeda0919b41fb22680ca8d15393ee1e96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-rgw-rgw-compute-2-gstlru, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:16:17 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e69a2c450a1c537267491510335308990b01552c938219da44851c3268a81cd3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:17 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e69a2c450a1c537267491510335308990b01552c938219da44851c3268a81cd3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:17 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e69a2c450a1c537267491510335308990b01552c938219da44851c3268a81cd3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:17 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e69a2c450a1c537267491510335308990b01552c938219da44851c3268a81cd3/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-2.gstlru supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:17 np0005539552 podman[83229]: 2025-11-29 07:16:17.133116211 +0000 UTC m=+0.024457741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:16:17 np0005539552 podman[83229]: 2025-11-29 07:16:17.234360727 +0000 UTC m=+0.125702277 container init eb1478c032aef07db01cb40ab53924dfeda0919b41fb22680ca8d15393ee1e96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-rgw-rgw-compute-2-gstlru, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 02:16:17 np0005539552 podman[83229]: 2025-11-29 07:16:17.241522416 +0000 UTC m=+0.132863916 container start eb1478c032aef07db01cb40ab53924dfeda0919b41fb22680ca8d15393ee1e96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-rgw-rgw-compute-2-gstlru, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:16:17 np0005539552 bash[83229]: eb1478c032aef07db01cb40ab53924dfeda0919b41fb22680ca8d15393ee1e96
Nov 29 02:16:17 np0005539552 systemd[1]: Started Ceph rgw.rgw.compute-2.gstlru for b66774a7-56d9-5535-bd8c-681234404870.
Nov 29 02:16:17 np0005539552 radosgw[83248]: deferred set uid:gid to 167:167 (ceph:ceph)
Nov 29 02:16:17 np0005539552 radosgw[83248]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Nov 29 02:16:17 np0005539552 radosgw[83248]: framework: beast
Nov 29 02:16:17 np0005539552 radosgw[83248]: framework conf key: endpoint, val: 192.168.122.102:8082
Nov 29 02:16:17 np0005539552 radosgw[83248]: init_numa not setting numa affinity
Nov 29 02:16:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e47 e47: 3 total, 3 up, 3 in
Nov 29 02:16:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Nov 29 02:16:17 np0005539552 ceph-mon[77121]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/1509958396' entity='client.rgw.rgw.compute-2.gstlru' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 29 02:16:17 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:17 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:17 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:17 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.wmgqmg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 29 02:16:17 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.wmgqmg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 29 02:16:17 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e48 e48: 3 total, 3 up, 3 in
Nov 29 02:16:19 np0005539552 ceph-mon[77121]: Deploying daemon rgw.rgw.compute-1.wmgqmg on compute-1
Nov 29 02:16:19 np0005539552 ceph-mon[77121]: from='client.? 192.168.122.102:0/1509958396' entity='client.rgw.rgw.compute-2.gstlru' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 29 02:16:19 np0005539552 ceph-mon[77121]: from='client.? ' entity='client.rgw.rgw.compute-2.gstlru' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 29 02:16:19 np0005539552 ceph-mon[77121]: from='client.? ' entity='client.rgw.rgw.compute-2.gstlru' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 29 02:16:20 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 2.d scrub starts
Nov 29 02:16:20 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 2.d scrub ok
Nov 29 02:16:20 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e49 e49: 3 total, 3 up, 3 in
Nov 29 02:16:20 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Nov 29 02:16:20 np0005539552 ceph-mon[77121]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/1509958396' entity='client.rgw.rgw.compute-2.gstlru' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 29 02:16:21 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Nov 29 02:16:21 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Nov 29 02:16:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e50 e50: 3 total, 3 up, 3 in
Nov 29 02:16:22 np0005539552 ceph-mon[77121]: from='client.? 192.168.122.102:0/1509958396' entity='client.rgw.rgw.compute-2.gstlru' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 29 02:16:22 np0005539552 ceph-mon[77121]: from='client.? ' entity='client.rgw.rgw.compute-2.gstlru' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 29 02:16:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:23 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Nov 29 02:16:23 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Nov 29 02:16:24 np0005539552 ceph-mon[77121]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 02:16:24 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:24 np0005539552 ceph-mon[77121]: from='client.? ' entity='client.rgw.rgw.compute-2.gstlru' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 29 02:16:24 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:24 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:24 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.lkiqxb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 29 02:16:24 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.lkiqxb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 29 02:16:24 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:24 np0005539552 ceph-mon[77121]: Deploying daemon rgw.rgw.compute-0.lkiqxb on compute-0
Nov 29 02:16:25 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e51 e51: 3 total, 3 up, 3 in
Nov 29 02:16:25 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Nov 29 02:16:25 np0005539552 ceph-mon[77121]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/1509958396' entity='client.rgw.rgw.compute-2.gstlru' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 02:16:25 np0005539552 ceph-mon[77121]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 29 02:16:25 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Nov 29 02:16:25 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Nov 29 02:16:26 np0005539552 podman[83447]: 2025-11-29 07:16:26.29348793 +0000 UTC m=+0.038852910 container create 033b555ff0f1e4182b85170d264a85093aebd479fea0b1ab7ff94f697fbad397 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 02:16:26 np0005539552 systemd[1]: Started libpod-conmon-033b555ff0f1e4182b85170d264a85093aebd479fea0b1ab7ff94f697fbad397.scope.
Nov 29 02:16:26 np0005539552 systemd[1]: Started libcrun container.
Nov 29 02:16:26 np0005539552 podman[83447]: 2025-11-29 07:16:26.274436865 +0000 UTC m=+0.019801865 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:16:26 np0005539552 podman[83447]: 2025-11-29 07:16:26.380594353 +0000 UTC m=+0.125959363 container init 033b555ff0f1e4182b85170d264a85093aebd479fea0b1ab7ff94f697fbad397 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bose, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:16:26 np0005539552 podman[83447]: 2025-11-29 07:16:26.387146777 +0000 UTC m=+0.132511767 container start 033b555ff0f1e4182b85170d264a85093aebd479fea0b1ab7ff94f697fbad397 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bose, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:16:26 np0005539552 podman[83447]: 2025-11-29 07:16:26.390655034 +0000 UTC m=+0.136020044 container attach 033b555ff0f1e4182b85170d264a85093aebd479fea0b1ab7ff94f697fbad397 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bose, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:16:26 np0005539552 systemd[1]: libpod-033b555ff0f1e4182b85170d264a85093aebd479fea0b1ab7ff94f697fbad397.scope: Deactivated successfully.
Nov 29 02:16:26 np0005539552 heuristic_bose[83464]: 167 167
Nov 29 02:16:26 np0005539552 conmon[83464]: conmon 033b555ff0f1e4182b85 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-033b555ff0f1e4182b85170d264a85093aebd479fea0b1ab7ff94f697fbad397.scope/container/memory.events
Nov 29 02:16:26 np0005539552 podman[83447]: 2025-11-29 07:16:26.394450329 +0000 UTC m=+0.139815309 container died 033b555ff0f1e4182b85170d264a85093aebd479fea0b1ab7ff94f697fbad397 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bose, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 02:16:26 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e52 e52: 3 total, 3 up, 3 in
Nov 29 02:16:26 np0005539552 systemd[1]: var-lib-containers-storage-overlay-10a326e1a3e15b25bbd5bc3434d4532681c13aeec3c98325e4d00a2e4adf5a68-merged.mount: Deactivated successfully.
Nov 29 02:16:26 np0005539552 podman[83447]: 2025-11-29 07:16:26.429593996 +0000 UTC m=+0.174958976 container remove 033b555ff0f1e4182b85170d264a85093aebd479fea0b1ab7ff94f697fbad397 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bose, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 02:16:26 np0005539552 systemd[1]: libpod-conmon-033b555ff0f1e4182b85170d264a85093aebd479fea0b1ab7ff94f697fbad397.scope: Deactivated successfully.
Nov 29 02:16:26 np0005539552 systemd[1]: Reloading.
Nov 29 02:16:26 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:16:26 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:16:26 np0005539552 ceph-mon[77121]: from='client.? 192.168.122.100:0/4210256520' entity='client.rgw.rgw.compute-0.lkiqxb' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 02:16:26 np0005539552 ceph-mon[77121]: from='client.? 192.168.122.102:0/1509958396' entity='client.rgw.rgw.compute-2.gstlru' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 02:16:26 np0005539552 ceph-mon[77121]: from='client.? 192.168.122.101:0/4041558685' entity='client.rgw.rgw.compute-1.wmgqmg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 02:16:26 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:26 np0005539552 ceph-mon[77121]: from='client.? ' entity='client.rgw.rgw.compute-2.gstlru' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 02:16:26 np0005539552 ceph-mon[77121]: from='client.? ' entity='client.rgw.rgw.compute-1.wmgqmg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 02:16:26 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:26 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:26 np0005539552 ceph-mon[77121]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 29 02:16:26 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:26 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:26 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.mmoati", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 29 02:16:26 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.mmoati", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 29 02:16:26 np0005539552 ceph-mon[77121]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 02:16:26 np0005539552 ceph-mon[77121]: from='client.? 192.168.122.100:0/4210256520' entity='client.rgw.rgw.compute-0.lkiqxb' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 29 02:16:26 np0005539552 ceph-mon[77121]: from='client.? ' entity='client.rgw.rgw.compute-2.gstlru' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 29 02:16:26 np0005539552 ceph-mon[77121]: from='client.? ' entity='client.rgw.rgw.compute-1.wmgqmg' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 29 02:16:26 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Nov 29 02:16:26 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Nov 29 02:16:26 np0005539552 systemd[1]: Reloading.
Nov 29 02:16:26 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:16:26 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:16:27 np0005539552 systemd[1]: Starting Ceph mds.cephfs.compute-2.mmoati for b66774a7-56d9-5535-bd8c-681234404870...
Nov 29 02:16:27 np0005539552 podman[83616]: 2025-11-29 07:16:27.302138845 +0000 UTC m=+0.040915622 container create f22ed9f7c3cf0811f9c8d1be9cc4287e45486002ccac62960dd06f601bf309ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mds-cephfs-compute-2-mmoati, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 02:16:27 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c19b7211c6f061b5525e3b6b609ddb146ca84d5817be41af780ce30faca32ce7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:27 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c19b7211c6f061b5525e3b6b609ddb146ca84d5817be41af780ce30faca32ce7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:27 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c19b7211c6f061b5525e3b6b609ddb146ca84d5817be41af780ce30faca32ce7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:27 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c19b7211c6f061b5525e3b6b609ddb146ca84d5817be41af780ce30faca32ce7/merged/var/lib/ceph/mds/ceph-cephfs.compute-2.mmoati supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:27 np0005539552 podman[83616]: 2025-11-29 07:16:27.361149928 +0000 UTC m=+0.099926735 container init f22ed9f7c3cf0811f9c8d1be9cc4287e45486002ccac62960dd06f601bf309ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mds-cephfs-compute-2-mmoati, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 02:16:27 np0005539552 podman[83616]: 2025-11-29 07:16:27.366539062 +0000 UTC m=+0.105315849 container start f22ed9f7c3cf0811f9c8d1be9cc4287e45486002ccac62960dd06f601bf309ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mds-cephfs-compute-2-mmoati, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Nov 29 02:16:27 np0005539552 podman[83616]: 2025-11-29 07:16:27.285190052 +0000 UTC m=+0.023966859 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:16:27 np0005539552 ceph-mds[83636]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 02:16:27 np0005539552 ceph-mds[83636]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Nov 29 02:16:27 np0005539552 ceph-mds[83636]: main not setting numa affinity
Nov 29 02:16:27 np0005539552 ceph-mds[83636]: pidfile_write: ignore empty --pid-file
Nov 29 02:16:27 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-mds-cephfs-compute-2-mmoati[83632]: starting mds.cephfs.compute-2.mmoati at 
Nov 29 02:16:27 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati Updating MDS map to version 2 from mon.1
Nov 29 02:16:27 np0005539552 bash[83616]: f22ed9f7c3cf0811f9c8d1be9cc4287e45486002ccac62960dd06f601bf309ad
Nov 29 02:16:27 np0005539552 systemd[1]: Started Ceph mds.cephfs.compute-2.mmoati for b66774a7-56d9-5535-bd8c-681234404870.
Nov 29 02:16:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e53 e53: 3 total, 3 up, 3 in
Nov 29 02:16:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Nov 29 02:16:27 np0005539552 ceph-mon[77121]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/2673003238' entity='client.rgw.rgw.compute-2.gstlru' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 02:16:27 np0005539552 ceph-mon[77121]: Deploying daemon mds.cephfs.compute-2.mmoati on compute-2
Nov 29 02:16:27 np0005539552 ceph-mon[77121]: from='client.? 192.168.122.100:0/4213096877' entity='client.rgw.rgw.compute-0.lkiqxb' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 02:16:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).mds e3 new map
Nov 29 02:16:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).mds e3 print_map#012e3#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-29T07:15:46.469688+0000#012modified#0112025-11-29T07:15:46.469738+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-2.mmoati{-1:24154} state up:standby seq 1 addr [v2:192.168.122.102:6804/1598903637,v1:192.168.122.102:6805/1598903637] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 02:16:27 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati Updating MDS map to version 3 from mon.1
Nov 29 02:16:27 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati Monitors have assigned me to become a standby.
Nov 29 02:16:27 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Nov 29 02:16:27 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Nov 29 02:16:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).mds e4 new map
Nov 29 02:16:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).mds e4 print_map#012e4#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-29T07:15:46.469688+0000#012modified#0112025-11-29T07:16:27.688749+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24154}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-2.mmoati{0:24154} state up:creating seq 1 addr [v2:192.168.122.102:6804/1598903637,v1:192.168.122.102:6805/1598903637] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Nov 29 02:16:28 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati Updating MDS map to version 4 from mon.1
Nov 29 02:16:28 np0005539552 ceph-mds[83636]: mds.0.4 handle_mds_map i am now mds.0.4
Nov 29 02:16:28 np0005539552 ceph-mds[83636]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Nov 29 02:16:28 np0005539552 ceph-mds[83636]: mds.0.cache creating system inode with ino:0x1
Nov 29 02:16:28 np0005539552 ceph-mds[83636]: mds.0.cache creating system inode with ino:0x100
Nov 29 02:16:28 np0005539552 ceph-mds[83636]: mds.0.cache creating system inode with ino:0x600
Nov 29 02:16:28 np0005539552 ceph-mds[83636]: mds.0.cache creating system inode with ino:0x601
Nov 29 02:16:28 np0005539552 ceph-mds[83636]: mds.0.cache creating system inode with ino:0x602
Nov 29 02:16:28 np0005539552 ceph-mds[83636]: mds.0.cache creating system inode with ino:0x603
Nov 29 02:16:28 np0005539552 ceph-mds[83636]: mds.0.cache creating system inode with ino:0x604
Nov 29 02:16:28 np0005539552 ceph-mds[83636]: mds.0.cache creating system inode with ino:0x605
Nov 29 02:16:28 np0005539552 ceph-mds[83636]: mds.0.cache creating system inode with ino:0x606
Nov 29 02:16:28 np0005539552 ceph-mds[83636]: mds.0.cache creating system inode with ino:0x607
Nov 29 02:16:28 np0005539552 ceph-mds[83636]: mds.0.cache creating system inode with ino:0x608
Nov 29 02:16:28 np0005539552 ceph-mds[83636]: mds.0.cache creating system inode with ino:0x609
Nov 29 02:16:28 np0005539552 ceph-mds[83636]: mds.0.4 creating_done
Nov 29 02:16:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:29 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e54 e54: 3 total, 3 up, 3 in
Nov 29 02:16:29 np0005539552 ceph-mon[77121]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 29 02:16:29 np0005539552 ceph-mon[77121]: from='client.? 192.168.122.101:0/4122975246' entity='client.rgw.rgw.compute-1.wmgqmg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 02:16:29 np0005539552 ceph-mon[77121]: from='client.? 192.168.122.102:0/2673003238' entity='client.rgw.rgw.compute-2.gstlru' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 02:16:29 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:29 np0005539552 ceph-mon[77121]: from='client.? ' entity='client.rgw.rgw.compute-2.gstlru' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 02:16:29 np0005539552 ceph-mon[77121]: from='client.? ' entity='client.rgw.rgw.compute-1.wmgqmg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 02:16:29 np0005539552 ceph-mon[77121]: daemon mds.cephfs.compute-2.mmoati assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 29 02:16:29 np0005539552 ceph-mon[77121]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 29 02:16:29 np0005539552 ceph-mon[77121]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 29 02:16:29 np0005539552 ceph-mon[77121]: Cluster is now healthy
Nov 29 02:16:29 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:29 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:29 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.qcwnhf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 29 02:16:29 np0005539552 ceph-mon[77121]: daemon mds.cephfs.compute-2.mmoati is now active in filesystem cephfs as rank 0
Nov 29 02:16:29 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.qcwnhf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 29 02:16:29 np0005539552 ceph-mon[77121]: Deploying daemon mds.cephfs.compute-0.qcwnhf on compute-0
Nov 29 02:16:29 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Nov 29 02:16:29 np0005539552 ceph-mon[77121]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/2673003238' entity='client.rgw.rgw.compute-2.gstlru' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 02:16:29 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).mds e5 new map
Nov 29 02:16:29 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).mds e5 print_map#012e5#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-29T07:15:46.469688+0000#012modified#0112025-11-29T07:16:29.351992+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24154}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-2.mmoati{0:24154} state up:active seq 2 addr [v2:192.168.122.102:6804/1598903637,v1:192.168.122.102:6805/1598903637] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Nov 29 02:16:29 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati Updating MDS map to version 5 from mon.1
Nov 29 02:16:29 np0005539552 ceph-mds[83636]: mds.0.4 handle_mds_map i am now mds.0.4
Nov 29 02:16:29 np0005539552 ceph-mds[83636]: mds.0.4 handle_mds_map state change up:creating --> up:active
Nov 29 02:16:29 np0005539552 ceph-mds[83636]: mds.0.4 recovery_done -- successful recovery!
Nov 29 02:16:29 np0005539552 ceph-mds[83636]: mds.0.4 active_start
Nov 29 02:16:29 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Nov 29 02:16:29 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Nov 29 02:16:30 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e55 e55: 3 total, 3 up, 3 in
Nov 29 02:16:30 np0005539552 ceph-mon[77121]: from='client.? 192.168.122.100:0/4213096877' entity='client.rgw.rgw.compute-0.lkiqxb' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 29 02:16:30 np0005539552 ceph-mon[77121]: from='client.? ' entity='client.rgw.rgw.compute-2.gstlru' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 29 02:16:30 np0005539552 ceph-mon[77121]: from='client.? ' entity='client.rgw.rgw.compute-1.wmgqmg' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 29 02:16:30 np0005539552 ceph-mon[77121]: from='client.? 192.168.122.100:0/4213096877' entity='client.rgw.rgw.compute-0.lkiqxb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 02:16:30 np0005539552 ceph-mon[77121]: from='client.? 192.168.122.102:0/2673003238' entity='client.rgw.rgw.compute-2.gstlru' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 02:16:30 np0005539552 ceph-mon[77121]: from='client.? 192.168.122.101:0/4122975246' entity='client.rgw.rgw.compute-1.wmgqmg' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 02:16:30 np0005539552 ceph-mon[77121]: from='client.? ' entity='client.rgw.rgw.compute-2.gstlru' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 02:16:30 np0005539552 ceph-mon[77121]: from='client.? ' entity='client.rgw.rgw.compute-1.wmgqmg' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 02:16:30 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:30 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 5.0 deep-scrub starts
Nov 29 02:16:30 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 5.0 deep-scrub ok
Nov 29 02:16:31 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).mds e6 new map
Nov 29 02:16:31 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).mds e6 print_map#012e6#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-29T07:15:46.469688+0000#012modified#0112025-11-29T07:16:29.351992+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24154}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-2.mmoati{0:24154} state up:active seq 2 addr [v2:192.168.122.102:6804/1598903637,v1:192.168.122.102:6805/1598903637] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.qcwnhf{-1:14382} state up:standby seq 1 addr [v2:192.168.122.100:6806/4251203860,v1:192.168.122.100:6807/4251203860] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 02:16:31 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).mds e7 new map
Nov 29 02:16:31 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).mds e7 print_map#012e7#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-29T07:15:46.469688+0000#012modified#0112025-11-29T07:16:29.351992+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24154}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.mmoati{0:24154} state up:active seq 2 addr [v2:192.168.122.102:6804/1598903637,v1:192.168.122.102:6805/1598903637] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.qcwnhf{-1:14382} state up:standby seq 1 addr [v2:192.168.122.100:6806/4251203860,v1:192.168.122.100:6807/4251203860] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 02:16:32 np0005539552 ceph-mon[77121]: from='client.? 192.168.122.100:0/4213096877' entity='client.rgw.rgw.compute-0.lkiqxb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 29 02:16:32 np0005539552 ceph-mon[77121]: from='client.? ' entity='client.rgw.rgw.compute-2.gstlru' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 29 02:16:32 np0005539552 ceph-mon[77121]: from='client.? ' entity='client.rgw.rgw.compute-1.wmgqmg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 29 02:16:32 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:32 np0005539552 radosgw[83248]: LDAP not started since no server URIs were provided in the configuration.
Nov 29 02:16:32 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-rgw-rgw-compute-2-gstlru[83244]: 2025-11-29T07:16:32.915+0000 7fedc38bb940 -1 LDAP not started since no server URIs were provided in the configuration.
Nov 29 02:16:32 np0005539552 radosgw[83248]: framework: beast
Nov 29 02:16:32 np0005539552 radosgw[83248]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Nov 29 02:16:32 np0005539552 radosgw[83248]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Nov 29 02:16:32 np0005539552 radosgw[83248]: starting handler: beast
Nov 29 02:16:32 np0005539552 radosgw[83248]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 02:16:33 np0005539552 radosgw[83248]: mgrc service_daemon_register rgw.24148 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-2,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.102:8082,frontend_type#0=beast,hostname=compute-2,id=rgw.compute-2.gstlru,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864316,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=dee0d062-b680-4e01-91de-db2466760c83,zone_name=default,zonegroup_id=135cb529-a24d-489d-8cfe-9e045cfed63b,zonegroup_name=default}
Nov 29 02:16:33 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:33 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:33 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.ldsugj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 29 02:16:33 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.ldsugj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 29 02:16:33 np0005539552 ceph-mon[77121]: Deploying daemon mds.cephfs.compute-1.ldsugj on compute-1
Nov 29 02:16:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:33 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Nov 29 02:16:33 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Nov 29 02:16:34 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 5.12 deep-scrub starts
Nov 29 02:16:34 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 5.12 deep-scrub ok
Nov 29 02:16:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:39 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 5.b scrub starts
Nov 29 02:16:39 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 5.b scrub ok
Nov 29 02:16:40 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Nov 29 02:16:40 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Nov 29 02:16:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).mds e8 new map
Nov 29 02:16:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).mds e8 print_map#012e8#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-29T07:15:46.469688+0000#012modified#0112025-11-29T07:16:29.351992+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24154}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.mmoati{0:24154} state up:active seq 2 addr [v2:192.168.122.102:6804/1598903637,v1:192.168.122.102:6805/1598903637] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.qcwnhf{-1:14382} state up:standby seq 1 addr [v2:192.168.122.100:6806/4251203860,v1:192.168.122.100:6807/4251203860] compat {c=[1],r=[1],i=[7ff]}]#012[mds.cephfs.compute-1.ldsugj{-1:24146} state up:standby seq 1 addr [v2:192.168.122.101:6804/830016470,v1:192.168.122.101:6805/830016470] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 02:16:42 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:44 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:44 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:46 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:47 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Nov 29 02:16:47 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Nov 29 02:16:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).mds e9 new map
Nov 29 02:16:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).mds e9 print_map#012e9#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-29T07:15:46.469688+0000#012modified#0112025-11-29T07:16:29.351992+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24154}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.mmoati{0:24154} state up:active seq 2 addr [v2:192.168.122.102:6804/1598903637,v1:192.168.122.102:6805/1598903637] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.qcwnhf{-1:14382} state up:standby seq 5 join_fscid=1 addr [v2:192.168.122.100:6806/4251203860,v1:192.168.122.100:6807/4251203860] compat {c=[1],r=[1],i=[7ff]}]#012[mds.cephfs.compute-1.ldsugj{-1:24146} state up:standby seq 1 addr [v2:192.168.122.101:6804/830016470,v1:192.168.122.101:6805/830016470] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 02:16:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:48 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:48 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 5.d scrub starts
Nov 29 02:16:48 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 5.d scrub ok
Nov 29 02:16:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).mds e10 new map
Nov 29 02:16:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).mds e10 print_map#012e10#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#01110#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-29T07:15:46.469688+0000#012modified#0112025-11-29T07:16:47.848009+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#01156#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=14382}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-0.qcwnhf{0:14382} state up:replay seq 5 join_fscid=1 addr [v2:192.168.122.100:6806/4251203860,v1:192.168.122.100:6807/4251203860] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-1.ldsugj{-1:24146} state up:standby seq 1 addr [v2:192.168.122.101:6804/830016470,v1:192.168.122.101:6805/830016470] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 02:16:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e56 e56: 3 total, 3 up, 3 in
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati Updating MDS map to version 10 from mon.1
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati Map removed me [mds.cephfs.compute-2.mmoati{0:24154} state up:active seq 2 addr [v2:192.168.122.102:6804/1598903637,v1:192.168.122.102:6805/1598903637] compat {c=[1],r=[1],i=[7ff]}] from cluster; respawning! See cluster/monitor logs for details.
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati respawn!
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command assert hook 0x55d353feacc0
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command abort hook 0x55d353feacc0
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command leak_some_memory hook 0x55d353feacc0
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command perfcounters_dump hook 0x55d353feacc0
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command 1 hook 0x55d353feacc0
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command perf dump hook 0x55d353feacc0
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command perfcounters_schema hook 0x55d353feacc0
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command perf histogram dump hook 0x55d353feacc0
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command 2 hook 0x55d353feacc0
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command perf schema hook 0x55d353feacc0
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command counter dump hook 0x55d353feacc0
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command counter schema hook 0x55d353feacc0
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command perf histogram schema hook 0x55d353feacc0
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command perf reset hook 0x55d353feacc0
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command config show hook 0x55d353feacc0
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command config help hook 0x55d353feacc0
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command config set hook 0x55d353feacc0
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command config unset hook 0x55d353feacc0
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command config get hook 0x55d353feacc0
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command config diff hook 0x55d353feacc0
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command config diff get hook 0x55d353feacc0
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command injectargs hook 0x55d353feacc0
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command log flush hook 0x55d353feacc0
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command log dump hook 0x55d353feacc0
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command log reopen hook 0x55d353feacc0
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command dump_mempools hook 0x55d354020328
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: build_initial for_mkfs: 0
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x55d354d940d0) adding auth protocol: cephx
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x55d354d940d0) adding auth protocol: cephx
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x55d354d940d0) adding auth protocol: cephx
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x55d354d940d0) adding auth protocol: none
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x55d354d940d0) adding con mode: secure
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x55d354d940d0) adding con mode: crc
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x55d354d940d0) adding con mode: secure
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x55d354d940d0) adding con mode: crc
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x55d354d940d0) adding con mode: secure
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x55d354d940d0) adding con mode: crc
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x55d354d940d0) adding con mode: crc
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x55d354d940d0) adding con mode: secure
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x55d354d940d0) adding con mode: crc
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x55d354d940d0) adding con mode: secure
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x55d354d940d0) adding con mode: crc
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x55d354d940d0) adding con mode: secure
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: auth: KeyRing::load: loaded key file /var/lib/ceph/mds/ceph-cephfs.compute-2.mmoati/keyring
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x7ffed0b4f7b0) adding auth protocol: cephx
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x7ffed0b4f7b0) adding auth protocol: cephx
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x7ffed0b4f7b0) adding auth protocol: cephx
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x7ffed0b4f7b0) adding auth protocol: none
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x7ffed0b4f7b0) adding con mode: secure
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x7ffed0b4f7b0) adding con mode: crc
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x7ffed0b4f7b0) adding con mode: secure
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x7ffed0b4f7b0) adding con mode: crc
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x7ffed0b4f7b0) adding con mode: secure
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x7ffed0b4f7b0) adding con mode: crc
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x7ffed0b4f7b0) adding con mode: crc
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x7ffed0b4f7b0) adding con mode: secure
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x7ffed0b4f7b0) adding con mode: crc
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x7ffed0b4f7b0) adding con mode: secure
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x7ffed0b4f7b0) adding con mode: crc
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x7ffed0b4f7b0) adding con mode: secure
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: auth: KeyRing::load: loaded key file /var/lib/ceph/mds/ceph-cephfs.compute-2.mmoati/keyring
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: auth: KeyRing::load: loaded key file /var/lib/ceph/mds/ceph-cephfs.compute-2.mmoati/keyring
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command rotate-key hook 0x7ffed0b4f8f8
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: monclient: found mon.noname-c
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: monclient: authenticate success, global_id 24151
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: set_mon_vals no callback set
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) unregister_commands rotate-key
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: main not setting numa affinity
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: pidfile_write: ignore empty --pid-file
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) init /var/run/ceph/ceph-mds.cephfs.compute-2.mmoati.asok
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) bind_and_listen /var/run/ceph/ceph-mds.cephfs.compute-2.mmoati.asok
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command 0 hook 0x55d3540413d8
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command version hook 0x55d3540413d8
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command git_version hook 0x55d3540413d8
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command help hook 0x55d353feac40
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command get_command_descriptions hook 0x55d353feac50
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command raise hook 0x55d354024870
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x55d354d940d0) adding auth protocol: cephx
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x55d354d940d0) adding auth protocol: cephx
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x55d354d940d0) adding auth protocol: cephx
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x55d354d940d0) adding auth protocol: none
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x55d354d940d0) adding con mode: secure
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x55d354d940d0) adding con mode: crc
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x55d354d940d0) adding con mode: secure
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x55d354d940d0) adding con mode: crc
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x55d354d940d0) adding con mode: secure
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x55d354d940d0) adding con mode: crc
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x55d354d940d0) adding con mode: crc
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x55d354d940d0) adding con mode: secure
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x55d354d940d0) adding con mode: crc
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x55d354d940d0) adding con mode: secure
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x55d354d940d0) adding con mode: crc
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x55d354d940d0) adding con mode: secure
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) entry start
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: auth: KeyRing::load: loaded key file /var/lib/ceph/mds/ceph-cephfs.compute-2.mmoati/keyring
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: build_initial for_mkfs: 0
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x7ffed0b50b30) adding auth protocol: cephx
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x7ffed0b50b30) adding auth protocol: cephx
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x7ffed0b50b30) adding auth protocol: cephx
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x7ffed0b50b30) adding auth protocol: none
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x7ffed0b50b30) adding con mode: secure
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x7ffed0b50b30) adding con mode: crc
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x7ffed0b50b30) adding con mode: secure
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x7ffed0b50b30) adding con mode: crc
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x7ffed0b50b30) adding con mode: secure
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x7ffed0b50b30) adding con mode: crc
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x7ffed0b50b30) adding con mode: crc
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x7ffed0b50b30) adding con mode: secure
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x7ffed0b50b30) adding con mode: crc
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x7ffed0b50b30) adding con mode: secure
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x7ffed0b50b30) adding con mode: crc
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: AuthRegistry(0x7ffed0b50b30) adding con mode: secure
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: auth: KeyRing::load: loaded key file /var/lib/ceph/mds/ceph-cephfs.compute-2.mmoati/keyring
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: auth: KeyRing::load: loaded key file /var/lib/ceph/mds/ceph-cephfs.compute-2.mmoati/keyring
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command rotate-key hook 0x7ffed0b50c78
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: monclient: found mon.noname-c
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: monclient: authenticate success, global_id 24154
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: set_mon_vals no callback set
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command status hook 0x55d353febd50
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command dump_ops_in_flight hook 0x55d353febd50
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command ops hook 0x55d353febd50
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command dump_blocked_ops hook 0x55d353febd50
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command dump_blocked_ops_count hook 0x55d353febd50
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command dump_historic_ops hook 0x55d353febd50
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command dump_historic_ops_by_duration hook 0x55d353febd50
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command scrub_path hook 0x55d353febd50
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command scrub start hook 0x55d353febd50
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command scrub abort hook 0x55d353febd50
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command scrub pause hook 0x55d353febd50
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command scrub resume hook 0x55d353febd50
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command scrub status hook 0x55d353febd50
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command tag path hook 0x55d353febd50
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command flush_path hook 0x55d353febd50
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command export dir hook 0x55d353febd50
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command dump cache hook 0x55d353febd50
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command cache drop hook 0x55d353febd50
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command lock path hook 0x55d353febd50
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command cache status hook 0x55d353febd50
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command dump tree hook 0x55d353febd50
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command dump loads hook 0x55d353febd50
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command dump snaps hook 0x55d353febd50
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command session ls hook 0x55d353febd50
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command client ls hook 0x55d353febd50
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command session evict hook 0x55d353febd50
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command client evict hook 0x55d353febd50
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command session kill hook 0x55d353febd50
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command session config hook 0x55d353febd50
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command client config hook 0x55d353febd50
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command damage ls hook 0x55d353febd50
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command damage rm hook 0x55d353febd50
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command osdmap barrier hook 0x55d353febd50
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command flush journal hook 0x55d353febd50
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command force_readonly hook 0x55d353febd50
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command get subtrees hook 0x55d353febd50
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command dirfrag split hook 0x55d353febd50
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command dirfrag merge hook 0x55d353febd50
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command dirfrag ls hook 0x55d353febd50
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command openfiles ls hook 0x55d353febd50
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command dump inode hook 0x55d353febd50
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command dump dir hook 0x55d353febd50
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command exit hook 0x55d353febd50
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command respawn hook 0x55d353febd50
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command heap hook 0x55d353febd50
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command cpu_profiler hook 0x55d353febd50
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati Updating MDS map to version 2 from mon.1
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati Sending beacon up:boot seq 1
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati Updating MDS map to version 3 from mon.1
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati Monitors have assigned me to become a standby.
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati set_want_state: up:boot -> up:standby
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati received beacon reply up:boot seq 1 rtt 0.288007
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mgrc handle_mgr_map Got map version 11
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1950343944,v1:192.168.122.100:6801/1950343944]
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1950343944,v1:192.168.122.100:6801/1950343944]
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mgrc handle_mgr_configure stats_period=5
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mgrc handle_mgr_configure updated stats threshold: 5
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati Updating MDS map to version 4 from mon.1
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.purge_queue operator():  data pool 7 not found in OSDMap
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: asok(0x55d35406e000) register_command objecter_requests hook 0x55d353febf80
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.purge_queue operator():  data pool 7 not found in OSDMap
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.0 apply_blocklist: killed 0, blocklisted sessions (0 blocklist entries, 0)
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.4 handle_mds_map i am now mds.0.4
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati set_want_state: up:standby -> up:creating
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.4 boot_create
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.log create empty log
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.journaler.mdlog(ro) set_writeable
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.journaler.mdlog(rw) created blank journal at inode 0x0x200, format=1
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.4 boot_create creating fresh hierarchy
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache creating system inode with ino:0x1
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.4 boot_create creating mydir hierarchy
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache creating system inode with ino:0x100
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache creating system inode with ino:0x600
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache creating system inode with ino:0x601
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache creating system inode with ino:0x602
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache creating system inode with ino:0x603
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache creating system inode with ino:0x604
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache creating system inode with ino:0x605
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache creating system inode with ino:0x606
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache creating system inode with ino:0x607
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache creating system inode with ino:0x608
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache creating system inode with ino:0x609
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.4 boot_create creating global snaprealm
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.purge_queue create: creating
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.journaler.pq(ro) set_writeable
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.journaler.pq(rw) created blank journal at inode 0x0x500, format=1
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.log _submit_thread 4194304~872 : ESubtreeMap 2 subtrees , 0 ambiguous [metablob 0x1, 2 dirs]
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache Memory usage:  total 251344, rss 39508, heap 190740, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.4 apply_blocklist: killed 0, blocklisted sessions (0 blocklist entries, 0)
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.4 creating_done
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.4 request_state up:active
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati set_want_state: up:creating -> up:active
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati Sending beacon up:active seq 2
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.4 apply_blocklist: killed 0, blocklisted sessions (0 blocklist entries, 0)
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache Memory usage:  total 251344, rss 39980, heap 190740, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati Updating MDS map to version 5 from mon.1
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.4 handle_mds_map i am now mds.0.4
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.4 handle_mds_map state change up:creating --> up:active
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.4 recovery_done -- successful recovery!
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.4 active_start
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.4 set_osd_epoch_barrier: epoch=54
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati received beacon reply up:active seq 2 rtt 1.11703
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache Memory usage:  total 267736, rss 40212, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.4 apply_blocklist: killed 0, blocklisted sessions (0 blocklist entries, 0)
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache Memory usage:  total 267736, rss 40220, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache Memory usage:  total 267736, rss 40228, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati Sending beacon up:active seq 3
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati received beacon reply up:active seq 3 rtt 0.0300007
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache Memory usage:  total 267736, rss 40508, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache Memory usage:  total 267736, rss 40512, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache Memory usage:  total 267736, rss 40520, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache Memory usage:  total 267736, rss 40524, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati Sending beacon up:active seq 4
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache Memory usage:  total 267736, rss 40532, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache Memory usage:  total 267736, rss 40400, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache Memory usage:  total 267736, rss 40404, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache Memory usage:  total 267736, rss 40412, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati received beacon reply up:active seq 4 rtt 3.73809
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati Sending beacon up:active seq 5
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati received beacon reply up:active seq 5 rtt 0.193005
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache Memory usage:  total 267736, rss 40420, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache Memory usage:  total 267736, rss 40428, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache Memory usage:  total 267736, rss 40432, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache Memory usage:  total 267736, rss 40436, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati Sending beacon up:active seq 6
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati received beacon reply up:active seq 6 rtt 0.680017
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache Memory usage:  total 267736, rss 40444, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: set_mon_vals no callback set
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache Memory usage:  total 267736, rss 40468, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache Memory usage:  total 267736, rss 40464, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache Memory usage:  total 267736, rss 40468, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati Sending beacon up:active seq 7
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati Updating MDS map to version 10 from mon.1
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati Map removed me [mds.cephfs.compute-2.mmoati{0:24154} state up:active seq 2 addr [v2:192.168.122.102:6804/1598903637,v1:192.168.122.102:6805/1598903637] compat {c=[1],r=[1],i=[7ff]}] from cluster; respawning! See cluster/monitor logs for details.
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati respawn!
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati  e: '/usr/bin/ceph-mds'
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati  0: '/usr/bin/ceph-mds'
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati  1: '-n'
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati  2: 'mds.cephfs.compute-2.mmoati'
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati  3: '-f'
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati  4: '--setuser'
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati  5: 'ceph'
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati  6: '--setgroup'
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati  7: 'ceph'
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati  8: '--default-log-to-file=false'
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati  9: '--default-log-to-journald=true'
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati  10: '--default-log-to-stderr=false'
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati respawning with exe /usr/bin/ceph-mds
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati  exe_path /proc/self/exe
Nov 29 02:16:48 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-mds-cephfs-compute-2-mmoati[83632]: ignoring --setuser ceph since I am not root
Nov 29 02:16:48 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-mds-cephfs-compute-2-mmoati[83632]: ignoring --setgroup ceph since I am not root
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: main not setting numa affinity
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: pidfile_write: ignore empty --pid-file
Nov 29 02:16:48 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-mds-cephfs-compute-2-mmoati[83632]: starting mds.cephfs.compute-2.mmoati at 
Nov 29 02:16:48 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati Updating MDS map to version 10 from mon.1
Nov 29 02:16:49 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:49 np0005539552 ceph-mon[77121]: Dropping low affinity active daemon mds.cephfs.compute-2.mmoati in favor of higher affinity standby.
Nov 29 02:16:49 np0005539552 ceph-mon[77121]: Replacing daemon mds.cephfs.compute-2.mmoati as rank 0 with standby daemon mds.cephfs.compute-0.qcwnhf
Nov 29 02:16:49 np0005539552 ceph-mon[77121]: Health check failed: 1 filesystem is degraded (FS_DEGRADED)
Nov 29 02:16:49 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:49 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:49 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Nov 29 02:16:49 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Nov 29 02:16:49 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).mds e11 new map
Nov 29 02:16:49 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).mds e11 print_map#012e11#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#01111#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-29T07:15:46.469688+0000#012modified#0112025-11-29T07:16:49.781453+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#01156#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=14382}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-0.qcwnhf{0:14382} state up:reconnect seq 6 join_fscid=1 addr [v2:192.168.122.100:6806/4251203860,v1:192.168.122.100:6807/4251203860] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-1.ldsugj{-1:24146} state up:standby seq 3 join_fscid=1 addr [v2:192.168.122.101:6804/830016470,v1:192.168.122.101:6805/830016470] compat {c=[1],r=[1],i=[7ff]}]#012[mds.cephfs.compute-2.mmoati{-1:24160} state up:standby seq 1 join_fscid=1 addr [v2:192.168.122.102:6804/3065985027,v1:192.168.122.102:6805/3065985027] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 02:16:49 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati Updating MDS map to version 11 from mon.1
Nov 29 02:16:49 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati Monitors have assigned me to become a standby.
Nov 29 02:16:51 np0005539552 ceph-mon[77121]: Deploying daemon haproxy.rgw.default.compute-0.uyfjya on compute-0
Nov 29 02:16:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).mds e12 new map
Nov 29 02:16:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).mds e12 print_map#012e12#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#01112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-29T07:15:46.469688+0000#012modified#0112025-11-29T07:16:50.823341+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#01156#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=14382}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-0.qcwnhf{0:14382} state up:rejoin seq 7 join_fscid=1 addr [v2:192.168.122.100:6806/4251203860,v1:192.168.122.100:6807/4251203860] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-1.ldsugj{-1:24146} state up:standby seq 3 join_fscid=1 addr [v2:192.168.122.101:6804/830016470,v1:192.168.122.101:6805/830016470] compat {c=[1],r=[1],i=[7ff]}]#012[mds.cephfs.compute-2.mmoati{-1:24160} state up:standby seq 1 join_fscid=1 addr [v2:192.168.122.102:6804/3065985027,v1:192.168.122.102:6805/3065985027] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 02:16:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).mds e13 new map
Nov 29 02:16:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).mds e13 print_map#012e13#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#01113#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-29T07:15:46.469688+0000#012modified#0112025-11-29T07:16:53.325309+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#01156#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=14382}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-0.qcwnhf{0:14382} state up:active seq 8 join_fscid=1 addr [v2:192.168.122.100:6806/4251203860,v1:192.168.122.100:6807/4251203860] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-1.ldsugj{-1:24146} state up:standby seq 3 join_fscid=1 addr [v2:192.168.122.101:6804/830016470,v1:192.168.122.101:6805/830016470] compat {c=[1],r=[1],i=[7ff]}]#012[mds.cephfs.compute-2.mmoati{-1:24160} state up:standby seq 1 join_fscid=1 addr [v2:192.168.122.102:6804/3065985027,v1:192.168.122.102:6805/3065985027] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 02:16:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:53 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 5.e scrub starts
Nov 29 02:16:53 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 5.e scrub ok
Nov 29 02:16:55 np0005539552 ceph-mon[77121]: daemon mds.cephfs.compute-0.qcwnhf is now active in filesystem cephfs as rank 0
Nov 29 02:16:55 np0005539552 ceph-mon[77121]: Health check cleared: FS_DEGRADED (was: 1 filesystem is degraded)
Nov 29 02:16:55 np0005539552 ceph-mon[77121]: Cluster is now healthy
Nov 29 02:16:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:58 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Nov 29 02:16:58 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Nov 29 02:17:02 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 3.15 deep-scrub starts
Nov 29 02:17:02 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 3.15 deep-scrub ok
Nov 29 02:17:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:17:03 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 3.e scrub starts
Nov 29 02:17:03 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 3.e scrub ok
Nov 29 02:17:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.002000050s ======
Nov 29 02:17:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:03.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Nov 29 02:17:05 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Nov 29 02:17:05 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Nov 29 02:17:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:05.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:06 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e57 e57: 3 total, 3 up, 3 in
Nov 29 02:17:06 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:17:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:07.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e58 e58: 3 total, 3 up, 3 in
Nov 29 02:17:08 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:17:08 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:17:08 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:17:08 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:17:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:17:08 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Nov 29 02:17:08 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Nov 29 02:17:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:17:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:09.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:17:10 np0005539552 podman[84369]: 2025-11-29 07:17:10.215339398 +0000 UTC m=+2.521382369 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Nov 29 02:17:10 np0005539552 podman[84369]: 2025-11-29 07:17:10.231792838 +0000 UTC m=+2.537835799 container create ce3317ad0e84a0151c7d726fcb33de6525d4caed51fd4cd796535f46bd1aa50d (image=quay.io/ceph/haproxy:2.3, name=confident_heyrovsky)
Nov 29 02:17:10 np0005539552 systemd[1]: Started libpod-conmon-ce3317ad0e84a0151c7d726fcb33de6525d4caed51fd4cd796535f46bd1aa50d.scope.
Nov 29 02:17:10 np0005539552 systemd[1]: Started libcrun container.
Nov 29 02:17:10 np0005539552 podman[84369]: 2025-11-29 07:17:10.325898816 +0000 UTC m=+2.631941797 container init ce3317ad0e84a0151c7d726fcb33de6525d4caed51fd4cd796535f46bd1aa50d (image=quay.io/ceph/haproxy:2.3, name=confident_heyrovsky)
Nov 29 02:17:10 np0005539552 podman[84369]: 2025-11-29 07:17:10.334313276 +0000 UTC m=+2.640356247 container start ce3317ad0e84a0151c7d726fcb33de6525d4caed51fd4cd796535f46bd1aa50d (image=quay.io/ceph/haproxy:2.3, name=confident_heyrovsky)
Nov 29 02:17:10 np0005539552 podman[84369]: 2025-11-29 07:17:10.338213403 +0000 UTC m=+2.644256364 container attach ce3317ad0e84a0151c7d726fcb33de6525d4caed51fd4cd796535f46bd1aa50d (image=quay.io/ceph/haproxy:2.3, name=confident_heyrovsky)
Nov 29 02:17:10 np0005539552 confident_heyrovsky[84483]: 0 0
Nov 29 02:17:10 np0005539552 systemd[1]: libpod-ce3317ad0e84a0151c7d726fcb33de6525d4caed51fd4cd796535f46bd1aa50d.scope: Deactivated successfully.
Nov 29 02:17:10 np0005539552 podman[84369]: 2025-11-29 07:17:10.341313031 +0000 UTC m=+2.647356022 container died ce3317ad0e84a0151c7d726fcb33de6525d4caed51fd4cd796535f46bd1aa50d (image=quay.io/ceph/haproxy:2.3, name=confident_heyrovsky)
Nov 29 02:17:10 np0005539552 systemd[1]: var-lib-containers-storage-overlay-b7b97c0d86efec394f08b8cbe90f73b15ccc94324594a8bc1b50f7c408f40a21-merged.mount: Deactivated successfully.
Nov 29 02:17:10 np0005539552 podman[84369]: 2025-11-29 07:17:10.38778947 +0000 UTC m=+2.693832471 container remove ce3317ad0e84a0151c7d726fcb33de6525d4caed51fd4cd796535f46bd1aa50d (image=quay.io/ceph/haproxy:2.3, name=confident_heyrovsky)
Nov 29 02:17:10 np0005539552 systemd[1]: libpod-conmon-ce3317ad0e84a0151c7d726fcb33de6525d4caed51fd4cd796535f46bd1aa50d.scope: Deactivated successfully.
Nov 29 02:17:10 np0005539552 systemd[1]: Reloading.
Nov 29 02:17:10 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:17:10 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:17:10 np0005539552 systemd[1]: Reloading.
Nov 29 02:17:10 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:17:10 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:17:10 np0005539552 systemd[1]: Starting Ceph haproxy.rgw.default.compute-2.efzvmt for b66774a7-56d9-5535-bd8c-681234404870...
Nov 29 02:17:11 np0005539552 podman[84628]: 2025-11-29 07:17:11.171526917 +0000 UTC m=+0.022614744 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Nov 29 02:17:11 np0005539552 podman[84628]: 2025-11-29 07:17:11.293895264 +0000 UTC m=+0.144983101 container create fbc97e64df6fe35a26c07fd8827c68b935db0d4237087be449d0fd015b40e005 (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-2-efzvmt)
Nov 29 02:17:11 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ca5a385127acd185bbb1e59c2d239ddfeff2d5c576c67692da8f56dfa173fe9/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Nov 29 02:17:11 np0005539552 podman[84628]: 2025-11-29 07:17:11.68359113 +0000 UTC m=+0.534679027 container init fbc97e64df6fe35a26c07fd8827c68b935db0d4237087be449d0fd015b40e005 (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-2-efzvmt)
Nov 29 02:17:11 np0005539552 podman[84628]: 2025-11-29 07:17:11.692783039 +0000 UTC m=+0.543870846 container start fbc97e64df6fe35a26c07fd8827c68b935db0d4237087be449d0fd015b40e005 (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-2-efzvmt)
Nov 29 02:17:11 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-2-efzvmt[84643]: [NOTICE] 332/071711 (2) : New worker #1 (4) forked
Nov 29 02:17:11 np0005539552 bash[84628]: fbc97e64df6fe35a26c07fd8827c68b935db0d4237087be449d0fd015b40e005
Nov 29 02:17:11 np0005539552 systemd[1]: Started Ceph haproxy.rgw.default.compute-2.efzvmt for b66774a7-56d9-5535-bd8c-681234404870.
Nov 29 02:17:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:11.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:12 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:17:12 np0005539552 ceph-mon[77121]: Deploying daemon haproxy.rgw.default.compute-2.efzvmt on compute-2
Nov 29 02:17:12 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:12 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:17:12 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:17:12 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:12 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:17:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:13.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:17:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:17:13 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 4.15 deep-scrub starts
Nov 29 02:17:13 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 4.15 deep-scrub ok
Nov 29 02:17:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e59 e59: 3 total, 3 up, 3 in
Nov 29 02:17:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:13.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:14 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Nov 29 02:17:14 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Nov 29 02:17:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:15.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:15 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Nov 29 02:17:15 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Nov 29 02:17:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:15.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:17:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:17.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:17:17 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 3.1d deep-scrub starts
Nov 29 02:17:17 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 3.1d deep-scrub ok
Nov 29 02:17:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:17.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:17:18 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Nov 29 02:17:18 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Nov 29 02:17:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:19.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:19 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e60 e60: 3 total, 3 up, 3 in
Nov 29 02:17:19 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:19 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:19 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:19 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:19 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:17:19 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:17:19 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:17:19 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:17:19 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:17:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:17:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:19.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:17:20 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 3.d scrub starts
Nov 29 02:17:20 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 3.d scrub ok
Nov 29 02:17:20 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e61 e61: 3 total, 3 up, 3 in
Nov 29 02:17:20 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:17:20 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:20 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:17:20 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:20 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:17:20 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:17:20 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:17:20 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:17:20 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:17:20 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:20 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:21.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:21.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:23.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e62 e62: 3 total, 3 up, 3 in
Nov 29 02:17:23 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:17:23 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:17:23 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:17:23 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:17:23 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:17:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:17:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:23.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:25.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:25 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Nov 29 02:17:25 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Nov 29 02:17:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:25.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:27.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:27 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Nov 29 02:17:27 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Nov 29 02:17:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:17:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:27.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:17:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:17:28 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:17:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:29.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:29 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Nov 29 02:17:29 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Nov 29 02:17:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:29.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:30 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Nov 29 02:17:30 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Nov 29 02:17:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:17:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:31.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:17:31 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Nov 29 02:17:31 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Nov 29 02:17:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:17:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:31.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:17:32 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:17:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:33.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:17:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).paxos(paxos updating c 1..530) lease_timeout -- calling new election
Nov 29 02:17:33 np0005539552 ceph-mon[77121]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Nov 29 02:17:33 np0005539552 ceph-mon[77121]: paxos.1).electionLogic(16) init, last seen epoch 16
Nov 29 02:17:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:17:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:33.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:17:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:35.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:17:35 np0005539552 ceph-mon[77121]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:17:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:35.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:36 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:17:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:17:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:37.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:17:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:17:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:37.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:17:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:39.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:39.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:40 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:17:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:41.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:41 np0005539552 ceph-mon[77121]: paxos.1).electionLogic(19) init, last seen epoch 19, mid-election, bumping
Nov 29 02:17:41 np0005539552 ceph-mon[77121]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:17:41 np0005539552 ceph-mon[77121]: mon.compute-2@1(electing) e3 handle_timecheck drop unexpected msg
Nov 29 02:17:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:41.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:17:42 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Nov 29 02:17:42 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Nov 29 02:17:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:17:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:43.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:17:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:43.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:44 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e63 e63: 3 total, 3 up, 3 in
Nov 29 02:17:44 np0005539552 ceph-mon[77121]: mon.compute-2 calling monitor election
Nov 29 02:17:44 np0005539552 ceph-mon[77121]: mon.compute-0 calling monitor election
Nov 29 02:17:44 np0005539552 ceph-mon[77121]: mon.compute-1 calling monitor election
Nov 29 02:17:44 np0005539552 ceph-mon[77121]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 29 02:17:44 np0005539552 ceph-mon[77121]: overall HEALTH_OK
Nov 29 02:17:44 np0005539552 ceph-mon[77121]: mon.compute-0 calling monitor election
Nov 29 02:17:44 np0005539552 ceph-mon[77121]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:17:44 np0005539552 ceph-mon[77121]: overall HEALTH_OK
Nov 29 02:17:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:45.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:45.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:47.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:47 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 7.a scrub starts
Nov 29 02:17:47 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 7.a scrub ok
Nov 29 02:17:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:47.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:17:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:17:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:49.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:17:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:49.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:50 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Nov 29 02:17:50 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Nov 29 02:17:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:51.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:17:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:51.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:17:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:53.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:53 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:17:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:17:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:53.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:54 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e64 e64: 3 total, 3 up, 3 in
Nov 29 02:17:54 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 64 pg[10.1( empty local-lis/les=0/0 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64) [2] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:54 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 64 pg[10.12( empty local-lis/les=0/0 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64) [2] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:54 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 64 pg[10.11( empty local-lis/les=0/0 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64) [2] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:54 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 64 pg[10.10( empty local-lis/les=0/0 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64) [2] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:54 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 64 pg[10.1e( empty local-lis/les=0/0 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64) [2] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:54 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 64 pg[10.4( empty local-lis/les=0/0 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64) [2] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:54 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 64 pg[10.f( empty local-lis/les=0/0 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64) [2] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:54 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 64 pg[10.3( empty local-lis/les=0/0 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64) [2] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:17:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:55.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:17:55 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 64 pg[11.17( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64) [2] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 64 pg[11.16( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64) [2] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 64 pg[8.15( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [2] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 64 pg[8.3( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [2] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 64 pg[11.a( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64) [2] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 64 pg[8.9( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [2] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 64 pg[8.a( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [2] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 64 pg[11.e( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64) [2] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 64 pg[8.d( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [2] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 64 pg[11.8( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64) [2] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 64 pg[8.c( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [2] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 64 pg[11.3( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64) [2] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 64 pg[8.b( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [2] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 64 pg[8.6( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [2] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 64 pg[11.19( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64) [2] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 64 pg[8.1c( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [2] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 64 pg[8.5( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [2] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 64 pg[8.2( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [2] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 64 pg[11.13( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64) [2] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 64 pg[8.1f( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [2] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 64 pg[8.11( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [2] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 64 pg[8.f( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [2] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 64 pg[8.16( empty local-lis/les=0/0 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [2] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:17:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:55.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:17:56 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e65 e65: 3 total, 3 up, 3 in
Nov 29 02:17:56 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:56 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:56 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 29 02:17:56 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:56 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:56 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:56 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 29 02:17:56 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:56 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:56 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:56 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 29 02:17:56 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:56 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:56 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:56 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 29 02:17:56 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:56 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:17:56 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:17:56 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:17:56 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 29 02:17:56 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:17:56 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:17:56 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:17:56 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 29 02:17:56 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:17:56 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 65 pg[8.c( v 48'4 (0'0,48'4] local-lis/les=64/65 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [2] r=0 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:56 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 65 pg[11.17( empty local-lis/les=64/65 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64) [2] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:56 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 65 pg[8.5( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=64/65 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [2] r=0 lpr=64 pi=[59,64)/1 crt=48'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:56 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 65 pg[8.2( v 48'4 (0'0,48'4] local-lis/les=64/65 n=1 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [2] r=0 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:56 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 65 pg[8.d( v 48'4 (0'0,48'4] local-lis/les=64/65 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [2] r=0 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:56 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 65 pg[11.e( empty local-lis/les=64/65 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64) [2] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:56 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 65 pg[8.16( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=64/65 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [2] r=0 lpr=64 pi=[59,64)/1 crt=48'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:56 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 65 pg[11.16( empty local-lis/les=64/65 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64) [2] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:56 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 65 pg[10.f( v 60'49 (0'0,60'49] local-lis/les=64/65 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64) [2] r=0 lpr=64 pi=[61,64)/1 crt=60'49 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:56 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 65 pg[11.13( empty local-lis/les=64/65 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64) [2] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:56 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 65 pg[8.15( v 48'4 (0'0,48'4] local-lis/les=64/65 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [2] r=0 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:56 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 65 pg[10.12( v 60'49 (0'0,60'49] local-lis/les=64/65 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64) [2] r=0 lpr=64 pi=[61,64)/1 crt=60'49 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:56 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 65 pg[10.10( v 60'49 (0'0,60'49] local-lis/les=64/65 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64) [2] r=0 lpr=64 pi=[61,64)/1 crt=60'49 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:56 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 65 pg[8.1f( v 48'4 (0'0,48'4] local-lis/les=64/65 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [2] r=0 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:56 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 65 pg[8.11( v 48'4 (0'0,48'4] local-lis/les=64/65 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [2] r=0 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:56 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 65 pg[8.6( v 48'4 (0'0,48'4] local-lis/les=64/65 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [2] r=0 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:56 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 65 pg[8.1c( v 48'4 (0'0,48'4] local-lis/les=64/65 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [2] r=0 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:56 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 65 pg[10.1e( v 60'49 (0'0,60'49] local-lis/les=64/65 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64) [2] r=0 lpr=64 pi=[61,64)/1 crt=60'49 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:56 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 65 pg[10.4( v 60'49 (0'0,60'49] local-lis/les=64/65 n=1 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64) [2] r=0 lpr=64 pi=[61,64)/1 crt=60'49 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:56 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 65 pg[8.f( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=64/65 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [2] r=0 lpr=64 pi=[59,64)/1 crt=48'4 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:56 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 65 pg[8.b( v 48'4 (0'0,48'4] local-lis/les=64/65 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [2] r=0 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:56 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 65 pg[10.11( v 60'49 (0'0,60'49] local-lis/les=64/65 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64) [2] r=0 lpr=64 pi=[61,64)/1 crt=60'49 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:56 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 65 pg[8.a( v 48'4 (0'0,48'4] local-lis/les=64/65 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [2] r=0 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:56 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 65 pg[8.3( v 48'4 (0'0,48'4] local-lis/les=64/65 n=1 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [2] r=0 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:56 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 65 pg[11.8( empty local-lis/les=64/65 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64) [2] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:56 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 65 pg[11.a( empty local-lis/les=64/65 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64) [2] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:56 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 65 pg[10.1( v 60'49 (0'0,60'49] local-lis/les=64/65 n=1 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64) [2] r=0 lpr=64 pi=[61,64)/1 crt=60'49 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:56 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 65 pg[11.3( empty local-lis/les=64/65 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64) [2] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:56 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 65 pg[8.9( v 48'4 (0'0,48'4] local-lis/les=64/65 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64) [2] r=0 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:56 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 65 pg[11.19( empty local-lis/les=64/65 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64) [2] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:56 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 65 pg[10.3( v 63'54 lc 63'53 (0'0,63'54] local-lis/les=64/65 n=1 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64) [2] r=0 lpr=64 pi=[61,64)/1 crt=63'54 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:57.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:57 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Nov 29 02:17:57 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Nov 29 02:17:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:17:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:58.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:17:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:17:58 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Nov 29 02:17:58 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Nov 29 02:17:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:17:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:17:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:59.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:17:59 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e66 e66: 3 total, 3 up, 3 in
Nov 29 02:17:59 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 29 02:17:59 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:17:59 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:18:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:18:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:00.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:18:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:01.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:01 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:18:01 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 29 02:18:01 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:18:01 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:18:01 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:18:01 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 29 02:18:01 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:18:01 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:01 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:01 np0005539552 ceph-mon[77121]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 29 02:18:01 np0005539552 ceph-mon[77121]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 29 02:18:01 np0005539552 ceph-mon[77121]: Deploying daemon keepalived.rgw.default.compute-2.gntzbr on compute-2
Nov 29 02:18:01 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 29 02:18:01 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Nov 29 02:18:01 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Nov 29 02:18:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:02.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:02 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 29 02:18:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:03.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e67 e67: 3 total, 3 up, 3 in
Nov 29 02:18:03 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 67 pg[9.b( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=67) [2] r=0 lpr=67 pi=[59,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:03 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 67 pg[9.f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=67) [2] r=0 lpr=67 pi=[59,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:03 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 67 pg[9.7( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=67) [2] r=0 lpr=67 pi=[59,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:03 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 67 pg[9.1b( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=67) [2] r=0 lpr=67 pi=[59,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:03 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 67 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=67) [2] r=0 lpr=67 pi=[59,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:03 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 67 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=67) [2] r=0 lpr=67 pi=[59,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:03 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 67 pg[9.3( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=67) [2] r=0 lpr=67 pi=[59,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:03 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 67 pg[9.17( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=67) [2] r=0 lpr=67 pi=[59,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:04 np0005539552 podman[84797]: 2025-11-29 07:18:04.154435549 +0000 UTC m=+4.721110275 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Nov 29 02:18:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:04.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:04 np0005539552 podman[84797]: 2025-11-29 07:18:04.178014798 +0000 UTC m=+4.744689494 container create 6738b2f1b03566dd2df4bf214576b47395be7331f0275137cca1b55eb3b6a804 (image=quay.io/ceph/keepalived:2.2.4, name=upbeat_hodgkin, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, release=1793, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, name=keepalived)
Nov 29 02:18:04 np0005539552 systemd[1]: Started libpod-conmon-6738b2f1b03566dd2df4bf214576b47395be7331f0275137cca1b55eb3b6a804.scope.
Nov 29 02:18:04 np0005539552 systemd[1]: Started libcrun container.
Nov 29 02:18:04 np0005539552 podman[84797]: 2025-11-29 07:18:04.394147358 +0000 UTC m=+4.960822084 container init 6738b2f1b03566dd2df4bf214576b47395be7331f0275137cca1b55eb3b6a804 (image=quay.io/ceph/keepalived:2.2.4, name=upbeat_hodgkin, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, vcs-type=git, io.openshift.expose-services=, description=keepalived for Ceph, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container)
Nov 29 02:18:04 np0005539552 podman[84797]: 2025-11-29 07:18:04.401650592 +0000 UTC m=+4.968325288 container start 6738b2f1b03566dd2df4bf214576b47395be7331f0275137cca1b55eb3b6a804 (image=quay.io/ceph/keepalived:2.2.4, name=upbeat_hodgkin, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, vcs-type=git, architecture=x86_64, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph)
Nov 29 02:18:04 np0005539552 podman[84797]: 2025-11-29 07:18:04.405351952 +0000 UTC m=+4.972026678 container attach 6738b2f1b03566dd2df4bf214576b47395be7331f0275137cca1b55eb3b6a804 (image=quay.io/ceph/keepalived:2.2.4, name=upbeat_hodgkin, architecture=x86_64, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, description=keepalived for Ceph, name=keepalived, vendor=Red Hat, Inc., io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 29 02:18:04 np0005539552 upbeat_hodgkin[84894]: 0 0
Nov 29 02:18:04 np0005539552 systemd[1]: libpod-6738b2f1b03566dd2df4bf214576b47395be7331f0275137cca1b55eb3b6a804.scope: Deactivated successfully.
Nov 29 02:18:04 np0005539552 podman[84797]: 2025-11-29 07:18:04.410999265 +0000 UTC m=+4.977673981 container died 6738b2f1b03566dd2df4bf214576b47395be7331f0275137cca1b55eb3b6a804 (image=quay.io/ceph/keepalived:2.2.4, name=upbeat_hodgkin, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, io.openshift.expose-services=, name=keepalived, build-date=2023-02-22T09:23:20, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, vcs-type=git, release=1793, description=keepalived for Ceph, vendor=Red Hat, Inc., io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Nov 29 02:18:04 np0005539552 systemd[1]: var-lib-containers-storage-overlay-bfc3d2eb547a631e5ab20aeff9b96591d97809e5a5c5014f42278f89e4f12310-merged.mount: Deactivated successfully.
Nov 29 02:18:04 np0005539552 podman[84797]: 2025-11-29 07:18:04.668771704 +0000 UTC m=+5.235446420 container remove 6738b2f1b03566dd2df4bf214576b47395be7331f0275137cca1b55eb3b6a804 (image=quay.io/ceph/keepalived:2.2.4, name=upbeat_hodgkin, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, description=keepalived for Ceph, distribution-scope=public, name=keepalived, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Nov 29 02:18:04 np0005539552 systemd[1]: libpod-conmon-6738b2f1b03566dd2df4bf214576b47395be7331f0275137cca1b55eb3b6a804.scope: Deactivated successfully.
Nov 29 02:18:04 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Nov 29 02:18:04 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Nov 29 02:18:04 np0005539552 systemd[1]: Reloading.
Nov 29 02:18:04 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:18:04 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:18:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:05.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:05 np0005539552 systemd[1]: Reloading.
Nov 29 02:18:05 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:18:05 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:18:05 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e68 e68: 3 total, 3 up, 3 in
Nov 29 02:18:05 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 29 02:18:05 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 29 02:18:05 np0005539552 systemd[1]: Starting Ceph keepalived.rgw.default.compute-2.gntzbr for b66774a7-56d9-5535-bd8c-681234404870...
Nov 29 02:18:05 np0005539552 podman[85037]: 2025-11-29 07:18:05.707104395 +0000 UTC m=+0.025045220 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Nov 29 02:18:05 np0005539552 podman[85037]: 2025-11-29 07:18:05.944160662 +0000 UTC m=+0.262101477 container create 6c20d6486ea31b9565e62504d583e2d8c9512b707e8e2015e3cef66c15f3b3e6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, vcs-type=git, io.openshift.expose-services=, release=1793, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived)
Nov 29 02:18:06 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52b442776109f41ad2418f9a5bbbabbe82fe6481b514870a5a1405e93ba22028/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:18:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:06.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:06 np0005539552 podman[85037]: 2025-11-29 07:18:06.332015757 +0000 UTC m=+0.649956622 container init 6c20d6486ea31b9565e62504d583e2d8c9512b707e8e2015e3cef66c15f3b3e6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=Ceph keepalived, version=2.2.4, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Nov 29 02:18:06 np0005539552 podman[85037]: 2025-11-29 07:18:06.33802634 +0000 UTC m=+0.655967155 container start 6c20d6486ea31b9565e62504d583e2d8c9512b707e8e2015e3cef66c15f3b3e6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, distribution-scope=public, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9)
Nov 29 02:18:06 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr[85055]: Sat Nov 29 07:18:06 2025: Starting Keepalived v2.2.4 (08/21,2021)
Nov 29 02:18:06 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr[85055]: Sat Nov 29 07:18:06 2025: Running on Linux 5.14.0-642.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025 (built for Linux 5.14.0)
Nov 29 02:18:06 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr[85055]: Sat Nov 29 07:18:06 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Nov 29 02:18:06 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr[85055]: Sat Nov 29 07:18:06 2025: Configuration file /etc/keepalived/keepalived.conf
Nov 29 02:18:06 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr[85055]: Sat Nov 29 07:18:06 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Nov 29 02:18:06 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr[85055]: Sat Nov 29 07:18:06 2025: Starting VRRP child process, pid=4
Nov 29 02:18:06 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr[85055]: Sat Nov 29 07:18:06 2025: Startup complete
Nov 29 02:18:06 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr[85055]: Sat Nov 29 07:18:06 2025: (VI_0) Entering BACKUP STATE (init)
Nov 29 02:18:06 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr[85055]: Sat Nov 29 07:18:06 2025: VRRP_Script(check_backend) succeeded
Nov 29 02:18:06 np0005539552 bash[85037]: 6c20d6486ea31b9565e62504d583e2d8c9512b707e8e2015e3cef66c15f3b3e6
Nov 29 02:18:06 np0005539552 systemd[1]: Started Ceph keepalived.rgw.default.compute-2.gntzbr for b66774a7-56d9-5535-bd8c-681234404870.
Nov 29 02:18:07 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e69 e69: 3 total, 3 up, 3 in
Nov 29 02:18:07 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 69 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[59,69)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:07 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 69 pg[9.3( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[59,69)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:07 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 69 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[59,69)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:07 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 69 pg[9.17( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[59,69)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:07 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 69 pg[9.3( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[59,69)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:07 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 69 pg[9.17( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[59,69)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:07 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 69 pg[9.f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[59,69)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:07 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 69 pg[9.f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[59,69)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:07 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 69 pg[9.7( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[59,69)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:07 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 69 pg[9.7( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[59,69)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:07 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 69 pg[9.1b( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[59,69)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:07 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 69 pg[9.1b( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[59,69)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:07 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 69 pg[9.b( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[59,69)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:07 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 69 pg[9.b( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[59,69)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:07 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 69 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[59,69)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:07 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 69 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[59,69)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:07.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:07 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Nov 29 02:18:07 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Nov 29 02:18:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:08.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:09.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:09 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 29 02:18:10 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr[85055]: Sat Nov 29 07:18:10 2025: (VI_0) Entering MASTER STATE
Nov 29 02:18:10 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr[85055]: Sat Nov 29 07:18:10 2025: (VI_0) Master received advert from 192.168.122.100 with higher priority 100, ours 90
Nov 29 02:18:10 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr[85055]: Sat Nov 29 07:18:10 2025: (VI_0) Entering BACKUP STATE
Nov 29 02:18:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:10.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:18:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:11.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:18:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:12.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e70 e70: 3 total, 3 up, 3 in
Nov 29 02:18:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:13.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e70 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:14 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:14.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:15.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:16.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:16 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e71 e71: 3 total, 3 up, 3 in
Nov 29 02:18:16 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:16 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:16 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 71 pg[9.3( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=6 ec=59/49 lis/c=69/59 les/c/f=70/60/0 sis=71) [2] r=0 lpr=71 pi=[59,71)/1 luod=0'0 crt=55'1153 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:16 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 71 pg[9.3( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=6 ec=59/49 lis/c=69/59 les/c/f=70/60/0 sis=71) [2] r=0 lpr=71 pi=[59,71)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:16 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 71 pg[9.13( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=5 ec=59/49 lis/c=69/59 les/c/f=70/60/0 sis=71) [2] r=0 lpr=71 pi=[59,71)/1 luod=0'0 crt=55'1153 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:16 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 71 pg[9.17( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=5 ec=59/49 lis/c=69/59 les/c/f=70/60/0 sis=71) [2] r=0 lpr=71 pi=[59,71)/1 luod=0'0 crt=55'1153 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:16 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 71 pg[9.13( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=5 ec=59/49 lis/c=69/59 les/c/f=70/60/0 sis=71) [2] r=0 lpr=71 pi=[59,71)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:16 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 71 pg[9.17( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=5 ec=59/49 lis/c=69/59 les/c/f=70/60/0 sis=71) [2] r=0 lpr=71 pi=[59,71)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:16 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 71 pg[9.f( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=6 ec=59/49 lis/c=69/59 les/c/f=70/60/0 sis=71) [2] r=0 lpr=71 pi=[59,71)/1 luod=0'0 crt=55'1153 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:16 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 71 pg[9.f( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=6 ec=59/49 lis/c=69/59 les/c/f=70/60/0 sis=71) [2] r=0 lpr=71 pi=[59,71)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:16 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 71 pg[9.7( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=6 ec=59/49 lis/c=69/59 les/c/f=70/60/0 sis=71) [2] r=0 lpr=71 pi=[59,71)/1 luod=0'0 crt=55'1153 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:16 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 71 pg[9.7( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=6 ec=59/49 lis/c=69/59 les/c/f=70/60/0 sis=71) [2] r=0 lpr=71 pi=[59,71)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:16 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 71 pg[9.1b( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=5 ec=59/49 lis/c=69/59 les/c/f=70/60/0 sis=71) [2] r=0 lpr=71 pi=[59,71)/1 luod=0'0 crt=55'1153 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:16 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 71 pg[9.1f( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=5 ec=59/49 lis/c=69/59 les/c/f=70/60/0 sis=71) [2] r=0 lpr=71 pi=[59,71)/1 luod=0'0 crt=55'1153 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:16 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 71 pg[9.1f( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=5 ec=59/49 lis/c=69/59 les/c/f=70/60/0 sis=71) [2] r=0 lpr=71 pi=[59,71)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:16 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 71 pg[9.b( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=6 ec=59/49 lis/c=69/59 les/c/f=70/60/0 sis=71) [2] r=0 lpr=71 pi=[59,71)/1 luod=0'0 crt=55'1153 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:16 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 71 pg[9.1b( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=5 ec=59/49 lis/c=69/59 les/c/f=70/60/0 sis=71) [2] r=0 lpr=71 pi=[59,71)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:16 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 71 pg[9.b( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=6 ec=59/49 lis/c=69/59 les/c/f=70/60/0 sis=71) [2] r=0 lpr=71 pi=[59,71)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:18:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:17.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:18:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:18.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e72 e72: 3 total, 3 up, 3 in
Nov 29 02:18:18 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:18 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 29 02:18:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:19.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:19 np0005539552 podman[85342]: 2025-11-29 07:18:19.852522811 +0000 UTC m=+0.128938908 container exec 2e449e743ed8b622f7d948671af16b72c407124db8f8583030f534a9757aa05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 02:18:19 np0005539552 podman[85342]: 2025-11-29 07:18:19.956567603 +0000 UTC m=+0.232983680 container exec_died 2e449e743ed8b622f7d948671af16b72c407124db8f8583030f534a9757aa05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 02:18:20 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 29 02:18:20 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:20 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 29 02:18:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:20.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:20 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 10.f scrub starts
Nov 29 02:18:20 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 10.f scrub ok
Nov 29 02:18:21 np0005539552 podman[85500]: 2025-11-29 07:18:21.116910285 +0000 UTC m=+0.515545614 container exec fbc97e64df6fe35a26c07fd8827c68b935db0d4237087be449d0fd015b40e005 (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-2-efzvmt)
Nov 29 02:18:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:18:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:21.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:18:21 np0005539552 podman[85500]: 2025-11-29 07:18:21.134002722 +0000 UTC m=+0.532638021 container exec_died fbc97e64df6fe35a26c07fd8827c68b935db0d4237087be449d0fd015b40e005 (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-2-efzvmt)
Nov 29 02:18:21 np0005539552 podman[85567]: 2025-11-29 07:18:21.423010999 +0000 UTC m=+0.108418830 container exec 6c20d6486ea31b9565e62504d583e2d8c9512b707e8e2015e3cef66c15f3b3e6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, version=2.2.4, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2)
Nov 29 02:18:21 np0005539552 podman[85567]: 2025-11-29 07:18:21.436054418 +0000 UTC m=+0.121462219 container exec_died 6c20d6486ea31b9565e62504d583e2d8c9512b707e8e2015e3cef66c15f3b3e6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr, vendor=Red Hat, Inc., io.buildah.version=1.28.2, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=keepalived-container, release=1793, vcs-type=git, version=2.2.4)
Nov 29 02:18:21 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e73 e73: 3 total, 3 up, 3 in
Nov 29 02:18:21 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 73 pg[9.13( v 55'1153 (0'0,55'1153] local-lis/les=71/73 n=5 ec=59/49 lis/c=69/59 les/c/f=70/60/0 sis=71) [2] r=0 lpr=71 pi=[59,71)/1 crt=55'1153 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:21 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 73 pg[9.3( v 55'1153 (0'0,55'1153] local-lis/les=71/73 n=6 ec=59/49 lis/c=69/59 les/c/f=70/60/0 sis=71) [2] r=0 lpr=71 pi=[59,71)/1 crt=55'1153 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:21 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 73 pg[9.17( v 55'1153 (0'0,55'1153] local-lis/les=71/73 n=5 ec=59/49 lis/c=69/59 les/c/f=70/60/0 sis=71) [2] r=0 lpr=71 pi=[59,71)/1 crt=55'1153 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:21 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 73 pg[9.7( v 55'1153 (0'0,55'1153] local-lis/les=71/73 n=6 ec=59/49 lis/c=69/59 les/c/f=70/60/0 sis=71) [2] r=0 lpr=71 pi=[59,71)/1 crt=55'1153 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:21 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 73 pg[9.1b( v 55'1153 (0'0,55'1153] local-lis/les=71/73 n=5 ec=59/49 lis/c=69/59 les/c/f=70/60/0 sis=71) [2] r=0 lpr=71 pi=[59,71)/1 crt=55'1153 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:21 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 73 pg[9.f( v 55'1153 (0'0,55'1153] local-lis/les=71/73 n=6 ec=59/49 lis/c=69/59 les/c/f=70/60/0 sis=71) [2] r=0 lpr=71 pi=[59,71)/1 crt=55'1153 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:21 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 73 pg[9.b( v 55'1153 (0'0,55'1153] local-lis/les=71/73 n=6 ec=59/49 lis/c=69/59 les/c/f=70/60/0 sis=71) [2] r=0 lpr=71 pi=[59,71)/1 crt=55'1153 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:21 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 73 pg[9.1f( v 55'1153 (0'0,55'1153] local-lis/les=71/73 n=5 ec=59/49 lis/c=69/59 les/c/f=70/60/0 sis=71) [2] r=0 lpr=71 pi=[59,71)/1 crt=55'1153 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:22.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:22 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 73 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2] r=0 lpr=73 pi=[59,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:22 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 73 pg[9.d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2] r=0 lpr=73 pi=[59,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:22 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 73 pg[9.5( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2] r=0 lpr=73 pi=[59,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:22 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 73 pg[9.1d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2] r=0 lpr=73 pi=[59,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:23.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:24.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:18:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:25.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:18:25 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 29 02:18:25 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:25 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:25 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Nov 29 02:18:25 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Nov 29 02:18:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:18:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:26.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:18:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:27.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e74 e74: 3 total, 3 up, 3 in
Nov 29 02:18:27 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 74 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=74) [2]/[0] r=-1 lpr=74 pi=[59,74)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:27 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 74 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=74) [2]/[0] r=-1 lpr=74 pi=[59,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:27 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 74 pg[9.d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=74) [2]/[0] r=-1 lpr=74 pi=[59,74)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:27 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 74 pg[9.d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=74) [2]/[0] r=-1 lpr=74 pi=[59,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:27 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 74 pg[9.1d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=74) [2]/[0] r=-1 lpr=74 pi=[59,74)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:27 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 74 pg[9.5( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=74) [2]/[0] r=-1 lpr=74 pi=[59,74)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:27 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 74 pg[9.5( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=74) [2]/[0] r=-1 lpr=74 pi=[59,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:27 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 74 pg[9.1d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=74) [2]/[0] r=-1 lpr=74 pi=[59,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:28 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:28.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e75 e75: 3 total, 3 up, 3 in
Nov 29 02:18:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:29.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:30 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:30 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:30 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:30.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:30 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Nov 29 02:18:30 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Nov 29 02:18:30 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e76 e76: 3 total, 3 up, 3 in
Nov 29 02:18:30 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 76 pg[9.d( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=6 ec=59/49 lis/c=74/59 les/c/f=75/60/0 sis=76) [2] r=0 lpr=76 pi=[59,76)/1 luod=0'0 crt=55'1153 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:30 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 76 pg[9.d( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=6 ec=59/49 lis/c=74/59 les/c/f=75/60/0 sis=76) [2] r=0 lpr=76 pi=[59,76)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:30 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 76 pg[9.15( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=5 ec=59/49 lis/c=74/59 les/c/f=75/60/0 sis=76) [2] r=0 lpr=76 pi=[59,76)/1 luod=0'0 crt=55'1153 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:30 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 76 pg[9.15( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=5 ec=59/49 lis/c=74/59 les/c/f=75/60/0 sis=76) [2] r=0 lpr=76 pi=[59,76)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:30 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 76 pg[9.1d( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=5 ec=59/49 lis/c=74/59 les/c/f=75/60/0 sis=76) [2] r=0 lpr=76 pi=[59,76)/1 luod=0'0 crt=55'1153 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:30 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 76 pg[9.1d( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=5 ec=59/49 lis/c=74/59 les/c/f=75/60/0 sis=76) [2] r=0 lpr=76 pi=[59,76)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:30 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 76 pg[9.5( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=6 ec=59/49 lis/c=74/59 les/c/f=75/60/0 sis=76) [2] r=0 lpr=76 pi=[59,76)/1 luod=0'0 crt=55'1153 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:30 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 76 pg[9.5( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=6 ec=59/49 lis/c=74/59 les/c/f=75/60/0 sis=76) [2] r=0 lpr=76 pi=[59,76)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:18:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:31.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:18:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:32.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:33.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e76 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:33 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Nov 29 02:18:33 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Nov 29 02:18:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:34.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:35.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:18:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:36.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:18:36 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Nov 29 02:18:36 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Nov 29 02:18:36 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:18:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:37.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e77 e77: 3 total, 3 up, 3 in
Nov 29 02:18:37 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:37 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:18:37 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 77 pg[9.15( v 55'1153 (0'0,55'1153] local-lis/les=76/77 n=5 ec=59/49 lis/c=74/59 les/c/f=75/60/0 sis=76) [2] r=0 lpr=76 pi=[59,76)/1 crt=55'1153 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:37 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 77 pg[9.1d( v 55'1153 (0'0,55'1153] local-lis/les=76/77 n=5 ec=59/49 lis/c=74/59 les/c/f=75/60/0 sis=76) [2] r=0 lpr=76 pi=[59,76)/1 crt=55'1153 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:37 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 77 pg[9.5( v 55'1153 (0'0,55'1153] local-lis/les=76/77 n=6 ec=59/49 lis/c=74/59 les/c/f=75/60/0 sis=76) [2] r=0 lpr=76 pi=[59,76)/1 crt=55'1153 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:37 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 77 pg[9.d( v 55'1153 (0'0,55'1153] local-lis/les=76/77 n=6 ec=59/49 lis/c=74/59 les/c/f=75/60/0 sis=76) [2] r=0 lpr=76 pi=[59,76)/1 crt=55'1153 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:38.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e77 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:39.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:39 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:39 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 29 02:18:39 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:18:39 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 29 02:18:39 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 29 02:18:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e78 e78: 3 total, 3 up, 3 in
Nov 29 02:18:39 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Nov 29 02:18:39 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Nov 29 02:18:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:40.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:18:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:41.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:18:41 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e79 e79: 3 total, 3 up, 3 in
Nov 29 02:18:41 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 29 02:18:42 np0005539552 systemd-logind[788]: New session 34 of user zuul.
Nov 29 02:18:42 np0005539552 systemd[1]: Started Session 34 of User zuul.
Nov 29 02:18:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:42.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e80 e80: 3 total, 3 up, 3 in
Nov 29 02:18:42 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 29 02:18:42 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:43 np0005539552 python3.9[85946]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:18:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:43.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:43 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 11.a scrub starts
Nov 29 02:18:43 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 11.a scrub ok
Nov 29 02:18:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e81 e81: 3 total, 3 up, 3 in
Nov 29 02:18:44 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 81 pg[9.8( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=81) [2] r=0 lpr=81 pi=[59,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:44 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 81 pg[9.18( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=81) [2] r=0 lpr=81 pi=[59,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:44 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 29 02:18:44 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:44 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 29 02:18:44 np0005539552 ceph-mon[77121]: Reconfiguring mon.compute-0 (monmap changed)...
Nov 29 02:18:44 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 02:18:44 np0005539552 ceph-mon[77121]: Reconfiguring daemon mon.compute-0 on compute-0
Nov 29 02:18:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:44.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:44 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Nov 29 02:18:44 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Nov 29 02:18:44 np0005539552 python3.9[86211]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:18:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:45.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:45 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e82 e82: 3 total, 3 up, 3 in
Nov 29 02:18:45 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 29 02:18:45 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 82 pg[9.8( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2]/[0] r=-1 lpr=82 pi=[59,82)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:45 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 82 pg[9.8( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2]/[0] r=-1 lpr=82 pi=[59,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:45 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 82 pg[9.18( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2]/[0] r=-1 lpr=82 pi=[59,82)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:45 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 82 pg[9.18( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2]/[0] r=-1 lpr=82 pi=[59,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:45 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 8.a scrub starts
Nov 29 02:18:45 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 8.a scrub ok
Nov 29 02:18:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:46.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:47.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:48.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:48 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Nov 29 02:18:48 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Nov 29 02:18:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:49.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:50.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:50 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:50 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:50 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.pdhsqi", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 02:18:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:51.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:51 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e83 e83: 3 total, 3 up, 3 in
Nov 29 02:18:51 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Nov 29 02:18:51 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Nov 29 02:18:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:52.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:18:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:53.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:18:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:18:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:54.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:18:55 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e84 e84: 3 total, 3 up, 3 in
Nov 29 02:18:55 np0005539552 ceph-mon[77121]: Reconfiguring mgr.compute-0.pdhsqi (monmap changed)...
Nov 29 02:18:55 np0005539552 ceph-mon[77121]: Reconfiguring daemon mgr.compute-0.pdhsqi on compute-0
Nov 29 02:18:55 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 84 pg[9.18( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=5 ec=59/49 lis/c=82/59 les/c/f=83/60/0 sis=84) [2] r=0 lpr=84 pi=[59,84)/1 luod=0'0 crt=55'1153 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:55 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 84 pg[9.18( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=5 ec=59/49 lis/c=82/59 les/c/f=83/60/0 sis=84) [2] r=0 lpr=84 pi=[59,84)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:18:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:55.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:18:55 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 11.e scrub starts
Nov 29 02:18:55 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 11.e scrub ok
Nov 29 02:18:56 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e85 e85: 3 total, 3 up, 3 in
Nov 29 02:18:56 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 85 pg[9.8( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=7 ec=59/49 lis/c=82/59 les/c/f=83/60/0 sis=85) [2] r=0 lpr=85 pi=[59,85)/1 luod=0'0 crt=55'1153 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:56 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 85 pg[9.8( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=7 ec=59/49 lis/c=82/59 les/c/f=83/60/0 sis=85) [2] r=0 lpr=85 pi=[59,85)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:56 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 85 pg[9.18( v 55'1153 (0'0,55'1153] local-lis/les=84/85 n=5 ec=59/49 lis/c=82/59 les/c/f=83/60/0 sis=84) [2] r=0 lpr=84 pi=[59,84)/1 crt=55'1153 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:56 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:56 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:56 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:56 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 02:18:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:56.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:56 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 8.d scrub starts
Nov 29 02:18:56 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 8.d scrub ok
Nov 29 02:18:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:57.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:57 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 11.3 deep-scrub starts
Nov 29 02:18:57 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e86 e86: 3 total, 3 up, 3 in
Nov 29 02:18:57 np0005539552 ceph-mon[77121]: Reconfiguring crash.compute-0 (monmap changed)...
Nov 29 02:18:57 np0005539552 ceph-mon[77121]: Reconfiguring daemon crash.compute-0 on compute-0
Nov 29 02:18:57 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 11.3 deep-scrub ok
Nov 29 02:18:57 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 86 pg[9.8( v 55'1153 (0'0,55'1153] local-lis/les=85/86 n=7 ec=59/49 lis/c=82/59 les/c/f=83/60/0 sis=85) [2] r=0 lpr=85 pi=[59,85)/1 crt=55'1153 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:58.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:58 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:58 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:58 np0005539552 ceph-mon[77121]: Reconfiguring osd.0 (monmap changed)...
Nov 29 02:18:58 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 29 02:18:58 np0005539552 ceph-mon[77121]: Reconfiguring daemon osd.0 on compute-0
Nov 29 02:18:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:18:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:59.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:59 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 8.c scrub starts
Nov 29 02:18:59 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 8.c scrub ok
Nov 29 02:18:59 np0005539552 systemd[1]: session-34.scope: Deactivated successfully.
Nov 29 02:18:59 np0005539552 systemd[1]: session-34.scope: Consumed 9.253s CPU time.
Nov 29 02:18:59 np0005539552 systemd-logind[788]: Session 34 logged out. Waiting for processes to exit.
Nov 29 02:18:59 np0005539552 systemd-logind[788]: Removed session 34.
Nov 29 02:19:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:00.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:00 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 8.6 deep-scrub starts
Nov 29 02:19:00 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 8.6 deep-scrub ok
Nov 29 02:19:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:01.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:02.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:02 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 8.b deep-scrub starts
Nov 29 02:19:02 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 8.b deep-scrub ok
Nov 29 02:19:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:03.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:19:03 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Nov 29 02:19:03 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Nov 29 02:19:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:19:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:04.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:19:04 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:04 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:04 np0005539552 ceph-mon[77121]: Reconfiguring crash.compute-1 (monmap changed)...
Nov 29 02:19:04 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 02:19:04 np0005539552 ceph-mon[77121]: Reconfiguring daemon crash.compute-1 on compute-1
Nov 29 02:19:04 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e87 e87: 3 total, 3 up, 3 in
Nov 29 02:19:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 87 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=87) [2] r=0 lpr=87 pi=[59,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:19:04 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 87 pg[9.9( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=87) [2] r=0 lpr=87 pi=[59,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:19:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:19:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:05.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:19:06 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 29 02:19:06 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 29 02:19:06 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 29 02:19:06 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 29 02:19:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:06.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:06 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e88 e88: 3 total, 3 up, 3 in
Nov 29 02:19:07 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 88 pg[9.9( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=88) [2]/[0] r=-1 lpr=88 pi=[59,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:19:07 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 88 pg[9.9( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=88) [2]/[0] r=-1 lpr=88 pi=[59,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:19:07 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 88 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=88) [2]/[0] r=-1 lpr=88 pi=[59,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:19:07 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 88 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=88) [2]/[0] r=-1 lpr=88 pi=[59,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:19:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:07.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:07 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 29 02:19:07 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 29 02:19:07 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:07 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e89 e89: 3 total, 3 up, 3 in
Nov 29 02:19:07 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Nov 29 02:19:07 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Nov 29 02:19:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:08.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:19:08 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 8.f scrub starts
Nov 29 02:19:08 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 8.f scrub ok
Nov 29 02:19:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:19:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:09.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:19:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:19:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:10.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:19:10 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e90 e90: 3 total, 3 up, 3 in
Nov 29 02:19:10 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 90 pg[9.9( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=5 ec=59/49 lis/c=88/59 les/c/f=89/60/0 sis=90) [2] r=0 lpr=90 pi=[59,90)/1 luod=0'0 crt=55'1153 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:19:10 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 90 pg[9.9( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=5 ec=59/49 lis/c=88/59 les/c/f=89/60/0 sis=90) [2] r=0 lpr=90 pi=[59,90)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:19:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:11.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:12.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:12 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:12 np0005539552 ceph-mon[77121]: Reconfiguring osd.1 (monmap changed)...
Nov 29 02:19:12 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 29 02:19:12 np0005539552 ceph-mon[77121]: Reconfiguring daemon osd.1 on compute-1
Nov 29 02:19:12 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 29 02:19:12 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 29 02:19:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:13.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e91 e91: 3 total, 3 up, 3 in
Nov 29 02:19:13 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 91 pg[9.19( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=7 ec=59/49 lis/c=88/59 les/c/f=89/60/0 sis=91) [2] r=0 lpr=91 pi=[59,91)/1 luod=0'0 crt=55'1153 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:19:13 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 91 pg[9.19( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=7 ec=59/49 lis/c=88/59 les/c/f=89/60/0 sis=91) [2] r=0 lpr=91 pi=[59,91)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:19:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:19:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:14.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:14 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 29 02:19:14 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 29 02:19:14 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Nov 29 02:19:14 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e92 e92: 3 total, 3 up, 3 in
Nov 29 02:19:14 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Nov 29 02:19:15 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 92 pg[9.19( v 55'1153 (0'0,55'1153] local-lis/les=91/92 n=7 ec=59/49 lis/c=88/59 les/c/f=89/60/0 sis=91) [2] r=0 lpr=91 pi=[59,91)/1 crt=55'1153 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:19:15 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 92 pg[9.9( v 55'1153 (0'0,55'1153] local-lis/les=90/92 n=5 ec=59/49 lis/c=88/59 les/c/f=89/60/0 sis=90) [2] r=0 lpr=90 pi=[59,90)/1 crt=55'1153 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:19:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:15.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:15 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:15 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:15 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 02:19:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:16.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:16 np0005539552 systemd-logind[788]: New session 35 of user zuul.
Nov 29 02:19:16 np0005539552 systemd[1]: Started Session 35 of User zuul.
Nov 29 02:19:16 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e93 e93: 3 total, 3 up, 3 in
Nov 29 02:19:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:17.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:17 np0005539552 python3.9[86488]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 29 02:19:18 np0005539552 ceph-mon[77121]: Reconfiguring mon.compute-1 (monmap changed)...
Nov 29 02:19:18 np0005539552 ceph-mon[77121]: Reconfiguring daemon mon.compute-1 on compute-1
Nov 29 02:19:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:18.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:19 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:19:19 np0005539552 python3.9[86663]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:19:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:19:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:19.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:19:19 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:19 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:19 np0005539552 ceph-mon[77121]: Reconfiguring mon.compute-2 (monmap changed)...
Nov 29 02:19:19 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 02:19:19 np0005539552 ceph-mon[77121]: Reconfiguring daemon mon.compute-2 on compute-2
Nov 29 02:19:19 np0005539552 podman[86783]: 2025-11-29 07:19:19.242923466 +0000 UTC m=+0.050492411 container create 9061076629de112b15cf65b9e515d0b673999ebcf145e74daed39865d2bd0a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:19:19 np0005539552 systemd[72536]: Created slice User Background Tasks Slice.
Nov 29 02:19:19 np0005539552 systemd[72536]: Starting Cleanup of User's Temporary Files and Directories...
Nov 29 02:19:19 np0005539552 systemd[72536]: Finished Cleanup of User's Temporary Files and Directories.
Nov 29 02:19:19 np0005539552 podman[86783]: 2025-11-29 07:19:19.221829002 +0000 UTC m=+0.029397937 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:19:19 np0005539552 systemd[1]: Started libpod-conmon-9061076629de112b15cf65b9e515d0b673999ebcf145e74daed39865d2bd0a56.scope.
Nov 29 02:19:19 np0005539552 systemd[1]: Started libcrun container.
Nov 29 02:19:19 np0005539552 podman[86783]: 2025-11-29 07:19:19.829280763 +0000 UTC m=+0.636849718 container init 9061076629de112b15cf65b9e515d0b673999ebcf145e74daed39865d2bd0a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_feynman, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:19:19 np0005539552 podman[86783]: 2025-11-29 07:19:19.839041544 +0000 UTC m=+0.646610479 container start 9061076629de112b15cf65b9e515d0b673999ebcf145e74daed39865d2bd0a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_feynman, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:19:19 np0005539552 vibrant_feynman[86875]: 167 167
Nov 29 02:19:19 np0005539552 systemd[1]: libpod-9061076629de112b15cf65b9e515d0b673999ebcf145e74daed39865d2bd0a56.scope: Deactivated successfully.
Nov 29 02:19:19 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Nov 29 02:19:19 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Nov 29 02:19:20 np0005539552 python3.9[87018]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:19:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:19:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:20.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:19:20 np0005539552 podman[86783]: 2025-11-29 07:19:20.797610141 +0000 UTC m=+1.605179096 container attach 9061076629de112b15cf65b9e515d0b673999ebcf145e74daed39865d2bd0a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_feynman, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:19:20 np0005539552 podman[86783]: 2025-11-29 07:19:20.799983506 +0000 UTC m=+1.607552461 container died 9061076629de112b15cf65b9e515d0b673999ebcf145e74daed39865d2bd0a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_feynman, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:19:21 np0005539552 systemd[1]: var-lib-containers-storage-overlay-ad4287a052c03406e50e39f32d4bed88a647e83d883e912e00f08c6afc0df769-merged.mount: Deactivated successfully.
Nov 29 02:19:21 np0005539552 podman[86783]: 2025-11-29 07:19:21.04392279 +0000 UTC m=+1.851491715 container remove 9061076629de112b15cf65b9e515d0b673999ebcf145e74daed39865d2bd0a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_feynman, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 02:19:21 np0005539552 systemd[1]: libpod-conmon-9061076629de112b15cf65b9e515d0b673999ebcf145e74daed39865d2bd0a56.scope: Deactivated successfully.
Nov 29 02:19:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:21.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:21 np0005539552 python3.9[87175]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:19:22 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 29 02:19:22 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:22 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:22.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:22 np0005539552 python3.9[87429]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:19:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e94 e94: 3 total, 3 up, 3 in
Nov 29 02:19:22 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 94 pg[9.d( v 55'1153 (0'0,55'1153] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=94 pruub=11.200847626s) [1] r=-1 lpr=94 pi=[76,94)/1 crt=55'1153 mlcod 0'0 active pruub 221.984344482s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:19:22 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 94 pg[9.d( v 55'1153 (0'0,55'1153] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=94 pruub=11.200755119s) [1] r=-1 lpr=94 pi=[76,94)/1 crt=55'1153 mlcod 0'0 unknown NOTIFY pruub 221.984344482s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:19:22 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 94 pg[9.1d( v 55'1153 (0'0,55'1153] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=94 pruub=11.200127602s) [1] r=-1 lpr=94 pi=[76,94)/1 crt=55'1153 mlcod 0'0 active pruub 221.984298706s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:19:22 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 94 pg[9.1d( v 55'1153 (0'0,55'1153] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=94 pruub=11.200098038s) [1] r=-1 lpr=94 pi=[76,94)/1 crt=55'1153 mlcod 0'0 unknown NOTIFY pruub 221.984298706s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:19:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e95 e95: 3 total, 3 up, 3 in
Nov 29 02:19:22 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 95 pg[9.1d( v 55'1153 (0'0,55'1153] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=95) [1]/[2] r=0 lpr=95 pi=[76,95)/1 crt=55'1153 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:19:22 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 95 pg[9.1d( v 55'1153 (0'0,55'1153] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=95) [1]/[2] r=0 lpr=95 pi=[76,95)/1 crt=55'1153 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:19:22 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 95 pg[9.d( v 55'1153 (0'0,55'1153] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=95) [1]/[2] r=0 lpr=95 pi=[76,95)/1 crt=55'1153 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:19:22 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 95 pg[9.d( v 55'1153 (0'0,55'1153] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=95) [1]/[2] r=0 lpr=95 pi=[76,95)/1 crt=55'1153 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:19:22 np0005539552 podman[87528]: 2025-11-29 07:19:22.755739679 +0000 UTC m=+0.084026781 container exec 2e449e743ed8b622f7d948671af16b72c407124db8f8583030f534a9757aa05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-2, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 02:19:22 np0005539552 podman[87528]: 2025-11-29 07:19:22.888588004 +0000 UTC m=+0.216875116 container exec_died 2e449e743ed8b622f7d948671af16b72c407124db8f8583030f534a9757aa05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 02:19:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:23.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:23 np0005539552 python3.9[87742]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:19:23 np0005539552 podman[87833]: 2025-11-29 07:19:23.582672736 +0000 UTC m=+0.075355643 container exec fbc97e64df6fe35a26c07fd8827c68b935db0d4237087be449d0fd015b40e005 (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-2-efzvmt)
Nov 29 02:19:23 np0005539552 podman[87833]: 2025-11-29 07:19:23.623238045 +0000 UTC m=+0.115920932 container exec_died fbc97e64df6fe35a26c07fd8827c68b935db0d4237087be449d0fd015b40e005 (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-2-efzvmt)
Nov 29 02:19:23 np0005539552 podman[87949]: 2025-11-29 07:19:23.87556575 +0000 UTC m=+0.057080773 container exec 6c20d6486ea31b9565e62504d583e2d8c9512b707e8e2015e3cef66c15f3b3e6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr, io.buildah.version=1.28.2, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, name=keepalived, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., release=1793, distribution-scope=public)
Nov 29 02:19:23 np0005539552 podman[87949]: 2025-11-29 07:19:23.91649512 +0000 UTC m=+0.098010143 container exec_died 6c20d6486ea31b9565e62504d583e2d8c9512b707e8e2015e3cef66c15f3b3e6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr, io.buildah.version=1.28.2, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, version=2.2.4, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, distribution-scope=public, architecture=x86_64, name=keepalived, release=1793, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 29 02:19:23 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 29 02:19:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e96 e96: 3 total, 3 up, 3 in
Nov 29 02:19:24 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:19:24 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 96 pg[9.d( v 55'1153 (0'0,55'1153] local-lis/les=95/96 n=6 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=95) [1]/[2] async=[1] r=0 lpr=95 pi=[76,95)/1 crt=55'1153 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:19:24 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 96 pg[9.1d( v 55'1153 (0'0,55'1153] local-lis/les=95/96 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=95) [1]/[2] async=[1] r=0 lpr=95 pi=[76,95)/1 crt=55'1153 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:19:24 np0005539552 python3.9[88056]: ansible-ansible.builtin.service_facts Invoked
Nov 29 02:19:24 np0005539552 network[88073]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 02:19:24 np0005539552 network[88074]: 'network-scripts' will be removed from distribution in near future.
Nov 29 02:19:24 np0005539552 network[88075]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 02:19:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:24.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:25.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:25 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Nov 29 02:19:25 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Nov 29 02:19:25 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:25 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:25 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:25 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:25 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:25 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:25 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:19:25 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:25 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:19:26 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e97 e97: 3 total, 3 up, 3 in
Nov 29 02:19:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:26.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:26 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 97 pg[9.d( v 55'1153 (0'0,55'1153] local-lis/les=95/96 n=6 ec=59/49 lis/c=95/76 les/c/f=96/77/0 sis=97 pruub=13.648149490s) [1] async=[1] r=-1 lpr=97 pi=[76,97)/1 crt=55'1153 mlcod 55'1153 active pruub 228.261291504s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:19:26 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 97 pg[9.d( v 55'1153 (0'0,55'1153] local-lis/les=95/96 n=6 ec=59/49 lis/c=95/76 les/c/f=96/77/0 sis=97 pruub=13.648069382s) [1] r=-1 lpr=97 pi=[76,97)/1 crt=55'1153 mlcod 0'0 unknown NOTIFY pruub 228.261291504s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:19:26 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 97 pg[9.1d( v 55'1153 (0'0,55'1153] local-lis/les=95/96 n=5 ec=59/49 lis/c=95/76 les/c/f=96/77/0 sis=97 pruub=13.648262978s) [1] async=[1] r=-1 lpr=97 pi=[76,97)/1 crt=55'1153 mlcod 55'1153 active pruub 228.261367798s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:19:26 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 97 pg[9.1d( v 55'1153 (0'0,55'1153] local-lis/les=95/96 n=5 ec=59/49 lis/c=95/76 les/c/f=96/77/0 sis=97 pruub=13.647994041s) [1] r=-1 lpr=97 pi=[76,97)/1 crt=55'1153 mlcod 0'0 unknown NOTIFY pruub 228.261367798s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:19:26 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Nov 29 02:19:26 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Nov 29 02:19:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:27.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:28.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:28 np0005539552 python3.9[88338]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:19:28 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Nov 29 02:19:28 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Nov 29 02:19:29 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 29 02:19:29 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:19:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:29.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:29 np0005539552 python3.9[88488]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:19:29 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e98 e98: 3 total, 3 up, 3 in
Nov 29 02:19:29 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Nov 29 02:19:30 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Nov 29 02:19:30 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 29 02:19:30 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 29 02:19:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:30.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:30 np0005539552 python3.9[88643]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:19:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:31.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:32 np0005539552 python3.9[88801]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:19:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:32.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:32 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Nov 29 02:19:32 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Nov 29 02:19:32 np0005539552 python3.9[88886]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:19:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:33.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:33 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Nov 29 02:19:33 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Nov 29 02:19:34 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:19:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:34.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:35.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:35 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e99 e99: 3 total, 3 up, 3 in
Nov 29 02:19:35 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Nov 29 02:19:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:36.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:36 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Nov 29 02:19:36 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 9.b deep-scrub starts
Nov 29 02:19:36 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 9.b deep-scrub ok
Nov 29 02:19:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:37.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:38.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:19:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:39.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:19:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:40.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:19:40 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e100 e100: 3 total, 3 up, 3 in
Nov 29 02:19:41 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 02:19:41 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 02:19:41 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 29 02:19:41 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:41 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 100 pg[9.f( v 55'1153 (0'0,55'1153] local-lis/les=71/73 n=6 ec=59/49 lis/c=71/71 les/c/f=73/73/0 sis=100 pruub=8.562214851s) [1] r=-1 lpr=100 pi=[71,100)/1 crt=55'1153 mlcod 0'0 active pruub 237.905685425s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:19:41 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 100 pg[9.f( v 55'1153 (0'0,55'1153] local-lis/les=71/73 n=6 ec=59/49 lis/c=71/71 les/c/f=73/73/0 sis=100 pruub=8.561785698s) [1] r=-1 lpr=100 pi=[71,100)/1 crt=55'1153 mlcod 0'0 unknown NOTIFY pruub 237.905685425s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:19:41 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 100 pg[9.1f( v 55'1153 (0'0,55'1153] local-lis/les=71/73 n=5 ec=59/49 lis/c=71/71 les/c/f=73/73/0 sis=100 pruub=8.561243057s) [1] r=-1 lpr=100 pi=[71,100)/1 crt=55'1153 mlcod 0'0 active pruub 237.905578613s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:19:41 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 100 pg[9.1f( v 55'1153 (0'0,55'1153] local-lis/les=71/73 n=5 ec=59/49 lis/c=71/71 les/c/f=73/73/0 sis=100 pruub=8.560987473s) [1] r=-1 lpr=100 pi=[71,100)/1 crt=55'1153 mlcod 0'0 unknown NOTIFY pruub 237.905578613s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:19:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:19:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:41.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:19:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:42.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:42 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Nov 29 02:19:42 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Nov 29 02:19:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:43.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e101 e101: 3 total, 3 up, 3 in
Nov 29 02:19:43 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 02:19:43 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 02:19:43 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 02:19:43 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 02:19:43 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:43 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 29 02:19:43 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 101 pg[9.1f( v 55'1153 (0'0,55'1153] local-lis/les=71/73 n=5 ec=59/49 lis/c=71/71 les/c/f=73/73/0 sis=101) [1]/[2] r=0 lpr=101 pi=[71,101)/1 crt=55'1153 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:19:43 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 101 pg[9.f( v 55'1153 (0'0,55'1153] local-lis/les=71/73 n=6 ec=59/49 lis/c=71/71 les/c/f=73/73/0 sis=101) [1]/[2] r=0 lpr=101 pi=[71,101)/1 crt=55'1153 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:19:43 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 101 pg[9.f( v 55'1153 (0'0,55'1153] local-lis/les=71/73 n=6 ec=59/49 lis/c=71/71 les/c/f=73/73/0 sis=101) [1]/[2] r=0 lpr=101 pi=[71,101)/1 crt=55'1153 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:19:43 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 101 pg[9.1f( v 55'1153 (0'0,55'1153] local-lis/les=71/73 n=5 ec=59/49 lis/c=71/71 les/c/f=73/73/0 sis=101) [1]/[2] r=0 lpr=101 pi=[71,101)/1 crt=55'1153 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:19:44 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:19:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:44.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:44 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Nov 29 02:19:44 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Nov 29 02:19:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:19:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:45.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:19:45 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Nov 29 02:19:45 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Nov 29 02:19:46 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 02:19:46 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 02:19:46 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 29 02:19:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:46.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:46 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e102 e102: 3 total, 3 up, 3 in
Nov 29 02:19:46 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 102 pg[9.1f( v 55'1153 (0'0,55'1153] local-lis/les=101/102 n=5 ec=59/49 lis/c=71/71 les/c/f=73/73/0 sis=101) [1]/[2] async=[1] r=0 lpr=101 pi=[71,101)/1 crt=55'1153 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:19:46 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 102 pg[9.f( v 55'1153 (0'0,55'1153] local-lis/les=101/102 n=6 ec=59/49 lis/c=71/71 les/c/f=73/73/0 sis=101) [1]/[2] async=[1] r=0 lpr=101 pi=[71,101)/1 crt=55'1153 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:19:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:19:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:47.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:19:47 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Nov 29 02:19:47 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Nov 29 02:19:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:48.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:19:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:49.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:19:49 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:19:49 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e103 e103: 3 total, 3 up, 3 in
Nov 29 02:19:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:19:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:50.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:19:50 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 103 pg[9.f( v 55'1153 (0'0,55'1153] local-lis/les=101/102 n=6 ec=59/49 lis/c=101/71 les/c/f=102/73/0 sis=103 pruub=12.130395889s) [1] async=[1] r=-1 lpr=103 pi=[71,103)/1 crt=55'1153 mlcod 55'1153 active pruub 250.792587280s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:19:50 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 103 pg[9.f( v 55'1153 (0'0,55'1153] local-lis/les=101/102 n=6 ec=59/49 lis/c=101/71 les/c/f=102/73/0 sis=103 pruub=12.130276680s) [1] r=-1 lpr=103 pi=[71,103)/1 crt=55'1153 mlcod 0'0 unknown NOTIFY pruub 250.792587280s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:19:50 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 103 pg[9.1f( v 55'1153 (0'0,55'1153] local-lis/les=101/102 n=5 ec=59/49 lis/c=101/71 les/c/f=102/73/0 sis=103 pruub=12.122555733s) [1] async=[1] r=-1 lpr=103 pi=[71,103)/1 crt=55'1153 mlcod 55'1153 active pruub 250.785812378s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:19:50 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 103 pg[9.1f( v 55'1153 (0'0,55'1153] local-lis/les=101/102 n=5 ec=59/49 lis/c=101/71 les/c/f=102/73/0 sis=103 pruub=12.122286797s) [1] r=-1 lpr=103 pi=[71,103)/1 crt=55'1153 mlcod 0'0 unknown NOTIFY pruub 250.785812378s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:19:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:51.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e104 e104: 3 total, 3 up, 3 in
Nov 29 02:19:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:19:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:52.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:19:53 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 29 02:19:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:53.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:54.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:55 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e105 e105: 3 total, 3 up, 3 in
Nov 29 02:19:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:19:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:55.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:19:55 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Nov 29 02:19:56 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:19:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:56.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:19:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:57.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:19:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:58.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:58 np0005539552 ceph-osd[79800]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Nov 29 02:19:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:19:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:59.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:00.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:00 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:20:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:01.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:01 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:02.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:03.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e106 e106: 3 total, 3 up, 3 in
Nov 29 02:20:04 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 29 02:20:04 np0005539552 ceph-mon[77121]: overall HEALTH_OK
Nov 29 02:20:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.002000055s ======
Nov 29 02:20:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:04.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000055s
Nov 29 02:20:04 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e107 e107: 3 total, 3 up, 3 in
Nov 29 02:20:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:20:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:05.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:20:06 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e108 e108: 3 total, 3 up, 3 in
Nov 29 02:20:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:06.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:06 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:07 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e109 e109: 3 total, 3 up, 3 in
Nov 29 02:20:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:07.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:08.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:20:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:09.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:20:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:10.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:20:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:11.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:20:11 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:12.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:13.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:14 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e110 e110: 3 total, 3 up, 3 in
Nov 29 02:20:14 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 29 02:20:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:14.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:15.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:16.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:16 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e111 e111: 3 total, 3 up, 3 in
Nov 29 02:20:16 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 29 02:20:16 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 29 02:20:16 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 29 02:20:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:17.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e112 e112: 3 total, 3 up, 3 in
Nov 29 02:20:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:18.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:18 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 29 02:20:18 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 29 02:20:18 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 29 02:20:18 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 29 02:20:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:20:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:19.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:20:19 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e113 e113: 3 total, 3 up, 3 in
Nov 29 02:20:20 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 29 02:20:20 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 29 02:20:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:20:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:20.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:20:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:21.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:22.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e114 e114: 3 total, 3 up, 3 in
Nov 29 02:20:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:23.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:24.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:25.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:26.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:27.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e115 e115: 3 total, 3 up, 3 in
Nov 29 02:20:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:28.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:29.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:20:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:30.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:20:30 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e116 e116: 3 total, 3 up, 3 in
Nov 29 02:20:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:31.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:32 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e117 e117: 3 total, 3 up, 3 in
Nov 29 02:20:32 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 117 pg[9.15( v 55'1153 (0'0,55'1153] local-lis/les=76/77 n=4 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117 pruub=13.707606316s) [1] r=-1 lpr=117 pi=[76,117)/1 crt=55'1153 mlcod 0'0 active pruub 293.985321045s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:20:32 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 117 pg[9.15( v 55'1153 (0'0,55'1153] local-lis/les=76/77 n=4 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117 pruub=13.707465172s) [1] r=-1 lpr=117 pi=[76,117)/1 crt=55'1153 mlcod 0'0 unknown NOTIFY pruub 293.985321045s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:20:32 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 29 02:20:32 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 29 02:20:32 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 29 02:20:32 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:32.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e118 e118: 3 total, 3 up, 3 in
Nov 29 02:20:33 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 118 pg[9.15( v 55'1153 (0'0,55'1153] local-lis/les=76/77 n=4 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=118) [1]/[2] r=0 lpr=118 pi=[76,118)/1 crt=55'1153 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:20:33 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 118 pg[9.15( v 55'1153 (0'0,55'1153] local-lis/les=76/77 n=4 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=118) [1]/[2] r=0 lpr=118 pi=[76,118)/1 crt=55'1153 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:20:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:33.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:33 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 29 02:20:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:34.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:34 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e119 e119: 3 total, 3 up, 3 in
Nov 29 02:20:34 np0005539552 python3.9[89417]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:20:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:35.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:35 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 119 pg[9.15( v 55'1153 (0'0,55'1153] local-lis/les=118/119 n=4 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=118) [1]/[2] async=[1] r=0 lpr=118 pi=[76,118)/1 crt=55'1153 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:20:36 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e120 e120: 3 total, 3 up, 3 in
Nov 29 02:20:36 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 120 pg[9.15( v 55'1153 (0'0,55'1153] local-lis/les=118/119 n=4 ec=59/49 lis/c=118/76 les/c/f=119/77/0 sis=120 pruub=15.067551613s) [1] async=[1] r=-1 lpr=120 pi=[76,120)/1 crt=55'1153 mlcod 55'1153 active pruub 299.733581543s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:20:36 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 120 pg[9.15( v 55'1153 (0'0,55'1153] local-lis/les=118/119 n=4 ec=59/49 lis/c=118/76 les/c/f=119/77/0 sis=120 pruub=15.067327499s) [1] r=-1 lpr=120 pi=[76,120)/1 crt=55'1153 mlcod 0'0 unknown NOTIFY pruub 299.733581543s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:20:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:36.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:36 np0005539552 python3.9[89705]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 29 02:20:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:20:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:37.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:20:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:38.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:38 np0005539552 python3.9[89857]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 29 02:20:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:39.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:39 np0005539552 python3.9[90010]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:20:40 np0005539552 python3.9[90162]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 29 02:20:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:40.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:41.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:41 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e121 e121: 3 total, 3 up, 3 in
Nov 29 02:20:41 np0005539552 python3.9[90332]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:20:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:42.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:42 np0005539552 podman[90660]: 2025-11-29 07:20:42.446870073 +0000 UTC m=+0.147813801 container exec 2e449e743ed8b622f7d948671af16b72c407124db8f8583030f534a9757aa05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 02:20:42 np0005539552 podman[90660]: 2025-11-29 07:20:42.548754283 +0000 UTC m=+0.249698011 container exec_died 2e449e743ed8b622f7d948671af16b72c407124db8f8583030f534a9757aa05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 02:20:42 np0005539552 python3.9[90701]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:20:43 np0005539552 python3.9[90857]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:20:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:43.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:43 np0005539552 podman[90923]: 2025-11-29 07:20:43.942543455 +0000 UTC m=+0.802163992 container exec fbc97e64df6fe35a26c07fd8827c68b935db0d4237087be449d0fd015b40e005 (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-2-efzvmt)
Nov 29 02:20:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:44.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:44 np0005539552 python3.9[91105]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:20:45 np0005539552 podman[91019]: 2025-11-29 07:20:45.168495313 +0000 UTC m=+1.198190641 container exec_died fbc97e64df6fe35a26c07fd8827c68b935db0d4237087be449d0fd015b40e005 (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-2-efzvmt)
Nov 29 02:20:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:45.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:45 np0005539552 podman[90923]: 2025-11-29 07:20:45.381015585 +0000 UTC m=+2.240636112 container exec_died fbc97e64df6fe35a26c07fd8827c68b935db0d4237087be449d0fd015b40e005 (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-2-efzvmt)
Nov 29 02:20:45 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:20:46 np0005539552 podman[91217]: 2025-11-29 07:20:46.098474912 +0000 UTC m=+0.087584130 container exec 6c20d6486ea31b9565e62504d583e2d8c9512b707e8e2015e3cef66c15f3b3e6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., version=2.2.4, com.redhat.component=keepalived-container, io.buildah.version=1.28.2)
Nov 29 02:20:46 np0005539552 podman[91261]: 2025-11-29 07:20:46.175921171 +0000 UTC m=+0.056534534 container exec_died 6c20d6486ea31b9565e62504d583e2d8c9512b707e8e2015e3cef66c15f3b3e6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, name=keepalived, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, distribution-scope=public, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, release=1793, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, io.buildah.version=1.28.2)
Nov 29 02:20:46 np0005539552 podman[91217]: 2025-11-29 07:20:46.186097102 +0000 UTC m=+0.175206310 container exec_died 6c20d6486ea31b9565e62504d583e2d8c9512b707e8e2015e3cef66c15f3b3e6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr, name=keepalived, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, io.buildah.version=1.28.2, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20)
Nov 29 02:20:46 np0005539552 python3.9[91326]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 29 02:20:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:46.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:46 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:20:46 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:20:46 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:20:46 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:20:46 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:20:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:47 np0005539552 python3.9[91580]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 29 02:20:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:47.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:47 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:20:47 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:20:47 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:20:48 np0005539552 python3.9[91764]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 02:20:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:20:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:48.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:20:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:20:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:49.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:20:49 np0005539552 python3.9[91917]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 29 02:20:49 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e122 e122: 3 total, 3 up, 3 in
Nov 29 02:20:49 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 122 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=80/80 les/c/f=81/81/0 sis=122) [2] r=0 lpr=122 pi=[80,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:20:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:20:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:50.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:20:50 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 29 02:20:50 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 29 02:20:50 np0005539552 python3.9[92070]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:20:50 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e123 e123: 3 total, 3 up, 3 in
Nov 29 02:20:50 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 123 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=80/80 les/c/f=81/81/0 sis=123) [2]/[1] r=-1 lpr=123 pi=[80,123)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:20:50 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 123 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=80/80 les/c/f=81/81/0 sis=123) [2]/[1] r=-1 lpr=123 pi=[80,123)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:20:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:51.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:51 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 29 02:20:51 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e124 e124: 3 total, 3 up, 3 in
Nov 29 02:20:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:52.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:52 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 29 02:20:52 np0005539552 python3.9[92224]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:20:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e125 e125: 3 total, 3 up, 3 in
Nov 29 02:20:52 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 125 pg[9.16( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=4 ec=59/49 lis/c=123/80 les/c/f=124/81/0 sis=125) [2] r=0 lpr=125 pi=[80,125)/1 luod=0'0 crt=55'1153 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:20:52 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 125 pg[9.16( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=4 ec=59/49 lis/c=123/80 les/c/f=124/81/0 sis=125) [2] r=0 lpr=125 pi=[80,125)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:20:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:53.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:53 np0005539552 python3.9[92376]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:20:53 np0005539552 python3.9[92454]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:20:54 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 29 02:20:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:54.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:54 np0005539552 python3.9[92607]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:20:55 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e126 e126: 3 total, 3 up, 3 in
Nov 29 02:20:55 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 126 pg[9.16( v 55'1153 (0'0,55'1153] local-lis/les=125/126 n=4 ec=59/49 lis/c=123/80 les/c/f=124/81/0 sis=125) [2] r=0 lpr=125 pi=[80,125)/1 crt=55'1153 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:20:55 np0005539552 python3.9[92685]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:20:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:55.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:56 np0005539552 python3.9[92837]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:20:57 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:57.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:57.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:58 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 29 02:20:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:59.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:20:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:59.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:00 np0005539552 python3.9[92990]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:21:01 np0005539552 python3.9[93193]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 29 02:21:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:01.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:01.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:01 np0005539552 python3.9[93343]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:21:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:21:03 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:21:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:03.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e127 e127: 3 total, 3 up, 3 in
Nov 29 02:21:03 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 127 pg[9.19( v 55'1153 (0'0,55'1153] local-lis/les=91/92 n=7 ec=59/49 lis/c=91/91 les/c/f=92/92/0 sis=127 pruub=11.664223671s) [0] r=-1 lpr=127 pi=[91,127)/1 crt=55'1153 mlcod 0'0 active pruub 323.308563232s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:21:03 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 127 pg[9.19( v 55'1153 (0'0,55'1153] local-lis/les=91/92 n=7 ec=59/49 lis/c=91/91 les/c/f=92/92/0 sis=127 pruub=11.664044380s) [0] r=-1 lpr=127 pi=[91,127)/1 crt=55'1153 mlcod 0'0 unknown NOTIFY pruub 323.308563232s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:21:03 np0005539552 python3.9[93546]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:21:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:03.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:03 np0005539552 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 29 02:21:03 np0005539552 systemd[1]: tuned.service: Deactivated successfully.
Nov 29 02:21:03 np0005539552 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 29 02:21:03 np0005539552 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 29 02:21:03 np0005539552 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 29 02:21:04 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:21:04 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 29 02:21:04 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 29 02:21:04 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 29 02:21:04 np0005539552 python3.9[93709]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 29 02:21:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:05.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:21:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:05.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:21:06 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e128 e128: 3 total, 3 up, 3 in
Nov 29 02:21:06 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 128 pg[9.19( v 55'1153 (0'0,55'1153] local-lis/les=91/92 n=7 ec=59/49 lis/c=91/91 les/c/f=92/92/0 sis=128) [0]/[2] r=0 lpr=128 pi=[91,128)/1 crt=55'1153 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:21:06 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 128 pg[9.19( v 55'1153 (0'0,55'1153] local-lis/les=91/92 n=7 ec=59/49 lis/c=91/91 les/c/f=92/92/0 sis=128) [0]/[2] r=0 lpr=128 pi=[91,128)/1 crt=55'1153 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:21:07 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 29 02:21:07 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:21:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:21:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:07.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:21:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:21:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:07.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:21:08 np0005539552 python3.9[93863]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:21:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e129 e129: 3 total, 3 up, 3 in
Nov 29 02:21:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:09.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:09.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:09 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 129 pg[9.19( v 55'1153 (0'0,55'1153] local-lis/les=128/129 n=7 ec=59/49 lis/c=91/91 les/c/f=92/92/0 sis=128) [0]/[2] async=[0] r=0 lpr=128 pi=[91,128)/1 crt=55'1153 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:21:09 np0005539552 python3.9[94017]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:21:09 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 29 02:21:11 np0005539552 systemd[1]: session-35.scope: Deactivated successfully.
Nov 29 02:21:11 np0005539552 systemd[1]: session-35.scope: Consumed 1min 6.305s CPU time.
Nov 29 02:21:11 np0005539552 systemd-logind[788]: Session 35 logged out. Waiting for processes to exit.
Nov 29 02:21:11 np0005539552 systemd-logind[788]: Removed session 35.
Nov 29 02:21:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:11.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:11.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e130 e130: 3 total, 3 up, 3 in
Nov 29 02:21:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:21:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:21:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:13.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:21:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:13.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:13 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 29 02:21:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:15.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:15.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:16 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e131 e131: 3 total, 3 up, 3 in
Nov 29 02:21:16 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 131 pg[9.19( v 55'1153 (0'0,55'1153] local-lis/les=128/129 n=7 ec=59/49 lis/c=128/91 les/c/f=129/92/0 sis=131 pruub=9.557481766s) [0] async=[0] r=-1 lpr=131 pi=[91,131)/1 crt=55'1153 mlcod 55'1153 active pruub 333.811584473s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:21:16 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 131 pg[9.19( v 55'1153 (0'0,55'1153] local-lis/les=128/129 n=7 ec=59/49 lis/c=128/91 les/c/f=129/92/0 sis=131 pruub=9.557334900s) [0] r=-1 lpr=131 pi=[91,131)/1 crt=55'1153 mlcod 0'0 unknown NOTIFY pruub 333.811584473s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:21:16 np0005539552 systemd-logind[788]: New session 36 of user zuul.
Nov 29 02:21:16 np0005539552 systemd[1]: Started Session 36 of User zuul.
Nov 29 02:21:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:17.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:17.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:21:17 np0005539552 python3.9[94203]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:21:19 np0005539552 python3.9[94360]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 29 02:21:19 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e132 e132: 3 total, 3 up, 3 in
Nov 29 02:21:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:21:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:19.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:21:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:19.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:20 np0005539552 python3.9[94514]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:21:20 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e133 e133: 3 total, 3 up, 3 in
Nov 29 02:21:20 np0005539552 python3.9[94599]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 02:21:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:21.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:21.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:21:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:21:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:23.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:21:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:23.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:24 np0005539552 python3.9[94804]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:21:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:25.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:25.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:26 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e134 e134: 3 total, 3 up, 3 in
Nov 29 02:21:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:27.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:27.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:21:28 np0005539552 python3.9[94959]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 02:21:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:29.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:21:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:29.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:21:29 np0005539552 python3.9[95112]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:21:30 np0005539552 python3.9[95266]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 29 02:21:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:31.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:31.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:32 np0005539552 python3.9[95416]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:21:32 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:21:32 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:21:33 np0005539552 python3.9[95575]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:21:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:21:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:33.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:21:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:33.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:21:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:35.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:21:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:21:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:35.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:21:36 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:21:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:21:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:37.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:21:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:37.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:21:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).paxos(paxos updating c 1..738) lease_timeout -- calling new election
Nov 29 02:21:38 np0005539552 ceph-mon[77121]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Nov 29 02:21:38 np0005539552 ceph-mon[77121]: paxos.1).electionLogic(24) init, last seen epoch 24
Nov 29 02:21:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:21:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:21:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:21:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:39.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:21:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:21:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:39.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:21:40 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:21:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:21:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:41.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:21:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:41.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:21:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:43.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:21:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:21:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:43.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:45.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:45.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:46 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e135 e135: 3 total, 3 up, 3 in
Nov 29 02:21:46 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 02:21:46 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 02:21:46 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 02:21:46 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 02:21:46 np0005539552 ceph-mon[77121]: mon.compute-2 calling monitor election
Nov 29 02:21:46 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 02:21:46 np0005539552 ceph-mon[77121]: mon.compute-0 calling monitor election
Nov 29 02:21:46 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 02:21:46 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 02:21:46 np0005539552 ceph-mon[77121]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 29 02:21:46 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 02:21:46 np0005539552 ceph-mon[77121]: Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Nov 29 02:21:46 np0005539552 ceph-mon[77121]: Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2
Nov 29 02:21:46 np0005539552 ceph-mon[77121]: [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Nov 29 02:21:46 np0005539552 ceph-mon[77121]:    mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Nov 29 02:21:46 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 135 pg[9.1b( v 55'1153 (0'0,55'1153] local-lis/les=71/73 n=2 ec=59/49 lis/c=71/71 les/c/f=73/73/0 sis=135 pruub=11.515113831s) [0] r=-1 lpr=135 pi=[71,135)/1 crt=55'1153 mlcod 0'0 active pruub 365.908447266s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:21:46 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 135 pg[9.1b( v 55'1153 (0'0,55'1153] local-lis/les=71/73 n=2 ec=59/49 lis/c=71/71 les/c/f=73/73/0 sis=135 pruub=11.515023232s) [0] r=-1 lpr=135 pi=[71,135)/1 crt=55'1153 mlcod 0'0 unknown NOTIFY pruub 365.908447266s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:21:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:47.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:47.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:21:47 np0005539552 python3.9[95785]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:21:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:21:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:49.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:21:49 np0005539552 python3.9[96073]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 02:21:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:49.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:49 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e136 e136: 3 total, 3 up, 3 in
Nov 29 02:21:49 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 02:21:49 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 02:21:49 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 02:21:49 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 02:21:49 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 02:21:49 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 02:21:49 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 02:21:49 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 02:21:49 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 136 pg[9.1b( v 55'1153 (0'0,55'1153] local-lis/les=71/73 n=2 ec=59/49 lis/c=71/71 les/c/f=73/73/0 sis=136) [0]/[2] r=0 lpr=136 pi=[71,136)/1 crt=55'1153 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:21:49 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 136 pg[9.1b( v 55'1153 (0'0,55'1153] local-lis/les=71/73 n=2 ec=59/49 lis/c=71/71 les/c/f=73/73/0 sis=136) [0]/[2] r=0 lpr=136 pi=[71,136)/1 crt=55'1153 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:21:50 np0005539552 python3.9[96223]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:21:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:51.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:21:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:51.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:21:51 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e137 e137: 3 total, 3 up, 3 in
Nov 29 02:21:51 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 29 02:21:51 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 29 02:21:51 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 02:21:51 np0005539552 python3.9[96378]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:21:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:21:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e138 e138: 3 total, 3 up, 3 in
Nov 29 02:21:53 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 29 02:21:53 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 29 02:21:53 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 29 02:21:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:21:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:53.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:21:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:53.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:54 np0005539552 python3.9[96532]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:21:54 np0005539552 ceph-mon[77121]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Nov 29 02:21:54 np0005539552 ceph-mon[77121]: paxos.1).electionLogic(27) init, last seen epoch 27, mid-election, bumping
Nov 29 02:21:54 np0005539552 ceph-mon[77121]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:21:54 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 138 pg[9.1b( v 55'1153 (0'0,55'1153] local-lis/les=136/138 n=2 ec=59/49 lis/c=71/71 les/c/f=73/73/0 sis=136) [0]/[2] async=[0] r=0 lpr=136 pi=[71,136)/1 crt=55'1153 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:21:54 np0005539552 ceph-mon[77121]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:21:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:55.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:55.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:21:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:57.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:21:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:21:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:57.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:21:57 np0005539552 python3.9[96687]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:21:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:21:58 np0005539552 python3.9[96842]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Nov 29 02:21:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:59.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:21:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:59.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:00 np0005539552 systemd-logind[788]: Session 36 logged out. Waiting for processes to exit.
Nov 29 02:22:00 np0005539552 systemd[1]: session-36.scope: Deactivated successfully.
Nov 29 02:22:00 np0005539552 systemd[1]: session-36.scope: Consumed 19.892s CPU time.
Nov 29 02:22:00 np0005539552 systemd-logind[788]: Removed session 36.
Nov 29 02:22:00 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:22:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:22:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:01.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:22:01 np0005539552 podman[97041]: 2025-11-29 07:22:01.480724244 +0000 UTC m=+0.070408938 container exec 2e449e743ed8b622f7d948671af16b72c407124db8f8583030f534a9757aa05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-2, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Nov 29 02:22:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:01.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:01 np0005539552 podman[97041]: 2025-11-29 07:22:01.604099195 +0000 UTC m=+0.193783859 container exec_died 2e449e743ed8b622f7d948671af16b72c407124db8f8583030f534a9757aa05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-2, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:22:01 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e139 e139: 3 total, 3 up, 3 in
Nov 29 02:22:01 np0005539552 ceph-mon[77121]: mon.compute-2 calling monitor election
Nov 29 02:22:01 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 02:22:01 np0005539552 ceph-mon[77121]: mon.compute-0 calling monitor election
Nov 29 02:22:01 np0005539552 ceph-mon[77121]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:22:01 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 02:22:01 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 02:22:01 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 02:22:01 np0005539552 ceph-mon[77121]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Nov 29 02:22:01 np0005539552 ceph-mon[77121]: Cluster is now healthy
Nov 29 02:22:01 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 139 pg[9.1b( v 55'1153 (0'0,55'1153] local-lis/les=136/138 n=2 ec=59/49 lis/c=136/71 les/c/f=138/73/0 sis=139 pruub=8.630207062s) [0] async=[0] r=-1 lpr=139 pi=[71,139)/1 crt=55'1153 mlcod 55'1153 active pruub 378.655517578s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:22:01 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 139 pg[9.1b( v 55'1153 (0'0,55'1153] local-lis/les=136/138 n=2 ec=59/49 lis/c=136/71 les/c/f=138/73/0 sis=139 pruub=8.629883766s) [0] r=-1 lpr=139 pi=[71,139)/1 crt=55'1153 mlcod 0'0 unknown NOTIFY pruub 378.655517578s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:22:02 np0005539552 podman[97197]: 2025-11-29 07:22:02.296809887 +0000 UTC m=+0.081737696 container exec fbc97e64df6fe35a26c07fd8827c68b935db0d4237087be449d0fd015b40e005 (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-2-efzvmt)
Nov 29 02:22:02 np0005539552 podman[97197]: 2025-11-29 07:22:02.331294342 +0000 UTC m=+0.116222131 container exec_died fbc97e64df6fe35a26c07fd8827c68b935db0d4237087be449d0fd015b40e005 (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-2-efzvmt)
Nov 29 02:22:02 np0005539552 podman[97313]: 2025-11-29 07:22:02.565491896 +0000 UTC m=+0.064830326 container exec 6c20d6486ea31b9565e62504d583e2d8c9512b707e8e2015e3cef66c15f3b3e6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, version=2.2.4, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, com.redhat.component=keepalived-container, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph.)
Nov 29 02:22:02 np0005539552 podman[97313]: 2025-11-29 07:22:02.580009195 +0000 UTC m=+0.079347615 container exec_died 6c20d6486ea31b9565e62504d583e2d8c9512b707e8e2015e3cef66c15f3b3e6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, release=1793, vcs-type=git, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=)
Nov 29 02:22:02 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 139 pg[9.1d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=97/97 les/c/f=98/98/0 sis=139) [2] r=0 lpr=139 pi=[97,139)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:22:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:22:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:03.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:22:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:03.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:22:04 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e140 e140: 3 total, 3 up, 3 in
Nov 29 02:22:04 np0005539552 ceph-mon[77121]: overall HEALTH_OK
Nov 29 02:22:04 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 02:22:04 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 02:22:04 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 02:22:04 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 02:22:04 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 02:22:05 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0.
Nov 29 02:22:05 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:22:05.042918) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:22:05 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Nov 29 02:22:05 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400925043018, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 8051, "num_deletes": 257, "total_data_size": 16633974, "memory_usage": 16894128, "flush_reason": "Manual Compaction"}
Nov 29 02:22:05 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Nov 29 02:22:05 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400925123419, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 10180022, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 257, "largest_seqno": 8056, "table_properties": {"data_size": 10146956, "index_size": 21815, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10181, "raw_key_size": 97712, "raw_average_key_size": 24, "raw_value_size": 10068067, "raw_average_value_size": 2480, "num_data_blocks": 953, "num_entries": 4059, "num_filter_entries": 4059, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 1764400508, "file_creation_time": 1764400925, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:22:05 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 80584 microseconds, and 23118 cpu microseconds.
Nov 29 02:22:05 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:22:05.123512) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 10180022 bytes OK
Nov 29 02:22:05 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:22:05.123534) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Nov 29 02:22:05 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:22:05.126384) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Nov 29 02:22:05 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:22:05.126409) EVENT_LOG_v1 {"time_micros": 1764400925126403, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Nov 29 02:22:05 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:22:05.126427) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Nov 29 02:22:05 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 16590584, prev total WAL file size 16610832, number of live WAL files 2.
Nov 29 02:22:05 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:22:05 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:22:05.131082) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Nov 29 02:22:05 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Nov 29 02:22:05 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(9941KB) 8(1648B)]
Nov 29 02:22:05 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400925131209, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 10181670, "oldest_snapshot_seqno": -1}
Nov 29 02:22:05 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 3805 keys, 10176182 bytes, temperature: kUnknown
Nov 29 02:22:05 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400925199884, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 10176182, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10143832, "index_size": 21744, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9541, "raw_key_size": 93472, "raw_average_key_size": 24, "raw_value_size": 10068125, "raw_average_value_size": 2646, "num_data_blocks": 952, "num_entries": 3805, "num_filter_entries": 3805, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764400925, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:22:05 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:22:05 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:22:05.200148) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 10176182 bytes
Nov 29 02:22:05 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:22:05.201814) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 148.1 rd, 148.0 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(9.7, 0.0 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 4064, records dropped: 259 output_compression: NoCompression
Nov 29 02:22:05 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:22:05.201842) EVENT_LOG_v1 {"time_micros": 1764400925201830, "job": 4, "event": "compaction_finished", "compaction_time_micros": 68754, "compaction_time_cpu_micros": 24127, "output_level": 6, "num_output_files": 1, "total_output_size": 10176182, "num_input_records": 4064, "num_output_records": 3805, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:22:05 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:22:05 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400925203741, "job": 4, "event": "table_file_deletion", "file_number": 14}
Nov 29 02:22:05 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:22:05 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400925203811, "job": 4, "event": "table_file_deletion", "file_number": 8}
Nov 29 02:22:05 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:22:05.130916) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:22:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:22:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:05.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:22:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:05.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:05 np0005539552 systemd-logind[788]: New session 37 of user zuul.
Nov 29 02:22:05 np0005539552 systemd[1]: Started Session 37 of User zuul.
Nov 29 02:22:06 np0005539552 python3.9[97568]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:22:06 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e141 e141: 3 total, 3 up, 3 in
Nov 29 02:22:06 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:22:06 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 29 02:22:06 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 02:22:06 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:22:06 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:22:06 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 141 pg[9.1d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=97/97 les/c/f=98/98/0 sis=141) [2]/[1] r=-1 lpr=141 pi=[97,141)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:22:06 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 141 pg[9.1d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=97/97 les/c/f=98/98/0 sis=141) [2]/[1] r=-1 lpr=141 pi=[97,141)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:22:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:07.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:07.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:07 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:22:07 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 29 02:22:07 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:22:07 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:22:07 np0005539552 python3.9[97784]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:22:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e142 e142: 3 total, 3 up, 3 in
Nov 29 02:22:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:22:09 np0005539552 python3.9[97978]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:22:09 np0005539552 ceph-mon[77121]: Health check failed: 3 slow ops, oldest one blocked for 36 sec, mon.compute-1 has slow ops (SLOW_OPS)
Nov 29 02:22:09 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:22:09 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:22:09 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:22:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:09.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:09.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:09 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e143 e143: 3 total, 3 up, 3 in
Nov 29 02:22:09 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 143 pg[9.1d( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=5 ec=59/49 lis/c=141/97 les/c/f=142/98/0 sis=143) [2] r=0 lpr=143 pi=[97,143)/1 luod=0'0 crt=55'1153 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:22:09 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 143 pg[9.1d( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=5 ec=59/49 lis/c=141/97 les/c/f=142/98/0 sis=143) [2] r=0 lpr=143 pi=[97,143)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:22:09 np0005539552 systemd[1]: session-37.scope: Deactivated successfully.
Nov 29 02:22:09 np0005539552 systemd[1]: session-37.scope: Consumed 2.461s CPU time.
Nov 29 02:22:09 np0005539552 systemd-logind[788]: Session 37 logged out. Waiting for processes to exit.
Nov 29 02:22:09 np0005539552 systemd-logind[788]: Removed session 37.
Nov 29 02:22:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:22:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:11.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:22:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:11.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:11 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e144 e144: 3 total, 3 up, 3 in
Nov 29 02:22:11 np0005539552 ceph-osd[79800]: osd.2 pg_epoch: 144 pg[9.1d( v 55'1153 (0'0,55'1153] local-lis/les=143/144 n=5 ec=59/49 lis/c=141/97 les/c/f=142/98/0 sis=143) [2] r=0 lpr=143 pi=[97,143)/1 crt=55'1153 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:22:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e145 e145: 3 total, 3 up, 3 in
Nov 29 02:22:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:13.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:13 np0005539552 ceph-mon[77121]: Health check cleared: SLOW_OPS (was: 3 slow ops, oldest one blocked for 36 sec, mon.compute-1 has slow ops)
Nov 29 02:22:13 np0005539552 ceph-mon[77121]: Cluster is now healthy
Nov 29 02:22:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:13.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:22:15 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e146 e146: 3 total, 3 up, 3 in
Nov 29 02:22:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:15.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:15.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:15 np0005539552 systemd-logind[788]: New session 38 of user zuul.
Nov 29 02:22:15 np0005539552 systemd[1]: Started Session 38 of User zuul.
Nov 29 02:22:16 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e147 e147: 3 total, 3 up, 3 in
Nov 29 02:22:16 np0005539552 python3.9[98161]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:22:16 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:22:16 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:22:16 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:22:16 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:22:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:17.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:17.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:18 np0005539552 python3.9[98365]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:22:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e148 e148: 3 total, 3 up, 3 in
Nov 29 02:22:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:22:19 np0005539552 python3.9[98522]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:22:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:22:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:19.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:22:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:22:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:19.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:22:19 np0005539552 python3.9[98606]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:22:20 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e149 e149: 3 total, 3 up, 3 in
Nov 29 02:22:20 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e150 e150: 3 total, 3 up, 3 in
Nov 29 02:22:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:21.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:22:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:21.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:22:22 np0005539552 python3.9[98760]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:22:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 e151: 3 total, 3 up, 3 in
Nov 29 02:22:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:22:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:23.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:22:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:22:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:23.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:22:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:22:24 np0005539552 python3.9[99006]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:22:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:25.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:25.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:25 np0005539552 python3.9[99159]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:22:26 np0005539552 python3.9[99325]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:22:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:27.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:27.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:27 np0005539552 python3.9[99403]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:22:28 np0005539552 python3.9[99556]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:22:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:22:29 np0005539552 python3.9[99634]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:22:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:22:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:29.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:22:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:22:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:29.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:22:30 np0005539552 python3.9[99786]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:22:31 np0005539552 python3.9[99941]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:22:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:22:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:31.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:22:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:31.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:31 np0005539552 python3.9[100093]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:22:32 np0005539552 python3.9[100245]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:22:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:33.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:33.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:33 np0005539552 python3.9[100398]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:22:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:22:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:22:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:35.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:22:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:35.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:36 np0005539552 python3.9[100552]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:22:37 np0005539552 python3.9[100707]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:22:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:22:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:37.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:22:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:22:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:37.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:22:38 np0005539552 python3.9[100859]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:22:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:22:39 np0005539552 python3.9[101012]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:22:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:22:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:39.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:22:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:39.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:40 np0005539552 python3.9[101165]: ansible-service_facts Invoked
Nov 29 02:22:40 np0005539552 network[101183]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 02:22:40 np0005539552 network[101184]: 'network-scripts' will be removed from distribution in near future.
Nov 29 02:22:40 np0005539552 network[101185]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 02:22:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:41.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:41.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:43.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:43.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:22:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:22:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:45.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:22:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:45.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:47.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:47.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:49.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:49.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:51 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:22:51 np0005539552 python3.9[101692]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:22:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 02:22:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:51.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 02:22:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:51.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:53.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:53.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:54 np0005539552 python3.9[101848]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 29 02:22:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 02:22:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:55.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 02:22:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:55.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:56 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:22:56 np0005539552 python3.9[102001]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:22:57 np0005539552 python3.9[102080]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:22:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:57.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:57.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:58 np0005539552 python3.9[102232]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:22:58 np0005539552 python3.9[102311]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:22:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:22:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:59.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:22:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:22:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:59.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:01 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:23:01 np0005539552 python3.9[102464]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:01.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:01.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:03 np0005539552 python3.9[102667]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:23:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 02:23:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:03.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 02:23:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:03.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:04 np0005539552 python3.9[102751]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:23:05 np0005539552 systemd-logind[788]: Session 38 logged out. Waiting for processes to exit.
Nov 29 02:23:05 np0005539552 systemd[1]: session-38.scope: Deactivated successfully.
Nov 29 02:23:05 np0005539552 systemd[1]: session-38.scope: Consumed 26.745s CPU time.
Nov 29 02:23:05 np0005539552 systemd-logind[788]: Removed session 38.
Nov 29 02:23:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:05.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:05.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:06 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:23:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:07.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:07.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:09.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:09.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:10 np0005539552 systemd-logind[788]: New session 39 of user zuul.
Nov 29 02:23:10 np0005539552 systemd[1]: Started Session 39 of User zuul.
Nov 29 02:23:11 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:23:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:23:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:11.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:23:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 02:23:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:11.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 02:23:11 np0005539552 python3.9[102937]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:12 np0005539552 python3.9[103089]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:13 np0005539552 python3.9[103168]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:13 np0005539552 systemd[1]: session-39.scope: Deactivated successfully.
Nov 29 02:23:13 np0005539552 systemd[1]: session-39.scope: Consumed 1.729s CPU time.
Nov 29 02:23:13 np0005539552 systemd-logind[788]: Session 39 logged out. Waiting for processes to exit.
Nov 29 02:23:13 np0005539552 systemd-logind[788]: Removed session 39.
Nov 29 02:23:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:13.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:13.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:15.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:15.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:16 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:23:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 02:23:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:17.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 02:23:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:17.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:18 np0005539552 podman[103367]: 2025-11-29 07:23:18.046078046 +0000 UTC m=+0.091840705 container exec 2e449e743ed8b622f7d948671af16b72c407124db8f8583030f534a9757aa05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 02:23:18 np0005539552 podman[103367]: 2025-11-29 07:23:18.160377461 +0000 UTC m=+0.206139950 container exec_died 2e449e743ed8b622f7d948671af16b72c407124db8f8583030f534a9757aa05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:23:18 np0005539552 podman[103519]: 2025-11-29 07:23:18.850544085 +0000 UTC m=+0.059181134 container exec fbc97e64df6fe35a26c07fd8827c68b935db0d4237087be449d0fd015b40e005 (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-2-efzvmt)
Nov 29 02:23:18 np0005539552 podman[103519]: 2025-11-29 07:23:18.862338064 +0000 UTC m=+0.070975123 container exec_died fbc97e64df6fe35a26c07fd8827c68b935db0d4237087be449d0fd015b40e005 (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-2-efzvmt)
Nov 29 02:23:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:19.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 02:23:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:19.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 02:23:19 np0005539552 systemd-logind[788]: New session 40 of user zuul.
Nov 29 02:23:19 np0005539552 systemd[1]: Started Session 40 of User zuul.
Nov 29 02:23:19 np0005539552 podman[103584]: 2025-11-29 07:23:19.877234747 +0000 UTC m=+0.071052215 container exec 6c20d6486ea31b9565e62504d583e2d8c9512b707e8e2015e3cef66c15f3b3e6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, release=1793, build-date=2023-02-22T09:23:20, name=keepalived, vendor=Red Hat, Inc., architecture=x86_64, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, io.buildah.version=1.28.2)
Nov 29 02:23:19 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:23:19 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:23:19 np0005539552 podman[103584]: 2025-11-29 07:23:19.895428323 +0000 UTC m=+0.089245771 container exec_died 6c20d6486ea31b9565e62504d583e2d8c9512b707e8e2015e3cef66c15f3b3e6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, distribution-scope=public, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, architecture=x86_64, io.openshift.expose-services=)
Nov 29 02:23:21 np0005539552 python3.9[103766]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:23:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:23:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:21.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:23:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:23:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:21.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:23:22 np0005539552 python3.9[103922]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:23:23 np0005539552 python3.9[104148]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:23.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:23.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:23 np0005539552 python3.9[104226]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=._20m72u_ recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:24 np0005539552 python3.9[104379]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:25 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:23:25 np0005539552 python3.9[104515]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.l7q_mgsa recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:25.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:25.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:26 np0005539552 python3.9[104740]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:23:26 np0005539552 python3.9[104893]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:27 np0005539552 python3.9[104971]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:23:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:27.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:27 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:23:27 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:23:27 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:23:27 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:23:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:27.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:23:27 np0005539552 python3.9[105123]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:28 np0005539552 python3.9[105201]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:23:29 np0005539552 python3.9[105354]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:23:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:29.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:23:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:29.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:30 np0005539552 python3.9[105506]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:30 np0005539552 python3.9[105584]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:31 np0005539552 python3.9[105737]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:23:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:31.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:23:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:31.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:31 np0005539552 python3.9[105815]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:33 np0005539552 python3.9[105968]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:23:33 np0005539552 systemd[1]: Reloading.
Nov 29 02:23:33 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:23:33 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:23:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:23:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:33.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:33.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:34 np0005539552 python3.9[106158]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:34 np0005539552 python3.9[106237]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:23:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:35.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:23:35 np0005539552 python3.9[106389]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 02:23:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:35.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 02:23:36 np0005539552 python3.9[106467]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:36 np0005539552 python3.9[106620]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:23:36 np0005539552 systemd[1]: Reloading.
Nov 29 02:23:37 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:23:37 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:23:37 np0005539552 systemd[1]: Starting Create netns directory...
Nov 29 02:23:37 np0005539552 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 02:23:37 np0005539552 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 02:23:37 np0005539552 systemd[1]: Finished Create netns directory.
Nov 29 02:23:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:37.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:37.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:38 np0005539552 python3.9[106810]: ansible-ansible.builtin.service_facts Invoked
Nov 29 02:23:38 np0005539552 network[106828]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 02:23:38 np0005539552 network[106829]: 'network-scripts' will be removed from distribution in near future.
Nov 29 02:23:38 np0005539552 network[106830]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 02:23:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:23:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:39.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 02:23:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:39.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 02:23:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:23:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:41.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:23:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:41.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:23:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:43.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:23:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:43.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:23:44 np0005539552 python3.9[107144]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:44 np0005539552 python3.9[107223]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:23:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:45.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:23:45 np0005539552 python3.9[107375]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:45.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:46 np0005539552 python3.9[107528]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:47 np0005539552 python3.9[107606]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:23:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:47.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:23:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:47.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:48 np0005539552 python3.9[107758]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 29 02:23:48 np0005539552 systemd[1]: Starting Time & Date Service...
Nov 29 02:23:48 np0005539552 systemd[1]: Started Time & Date Service.
Nov 29 02:23:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:23:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:49.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:49.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:50 np0005539552 python3.9[107915]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:50 np0005539552 python3.9[108068]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:51 np0005539552 python3.9[108146]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:51.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:51.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:51 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:23:51 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:23:51 np0005539552 python3.9[108348]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:52 np0005539552 python3.9[108426]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.s7w98l1f recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:53 np0005539552 python3.9[108579]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:23:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:53.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:23:53 np0005539552 python3.9[108657]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:23:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:53.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:55 np0005539552 python3.9[108810]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:23:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:55.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:23:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:55.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:23:55 np0005539552 python3[108963]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 02:23:57 np0005539552 python3.9[109116]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:57.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:57 np0005539552 python3.9[109194]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:23:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:57.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:23:58 np0005539552 python3.9[109346]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:59 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:23:59 np0005539552 python3.9[109425]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:23:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:59.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:59 np0005539552 python3.9[109577]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:24:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:00.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:00 np0005539552 python3.9[109655]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:01 np0005539552 python3.9[109808]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:24:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:01.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:01 np0005539552 python3.9[109886]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:02.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:02 np0005539552 python3.9[110039]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:24:03 np0005539552 python3.9[110124]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:03.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:04.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:04 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:24:04 np0005539552 python3.9[110319]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:24:05 np0005539552 python3.9[110475]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:05.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:06.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:06 np0005539552 python3.9[110627]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:06 np0005539552 python3.9[110780]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:24:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:07.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:24:07 np0005539552 python3.9[110932]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 29 02:24:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:08.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:08 np0005539552 python3.9[111084]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 29 02:24:08 np0005539552 systemd[1]: session-40.scope: Deactivated successfully.
Nov 29 02:24:08 np0005539552 systemd[1]: session-40.scope: Consumed 31.321s CPU time.
Nov 29 02:24:08 np0005539552 systemd-logind[788]: Session 40 logged out. Waiting for processes to exit.
Nov 29 02:24:08 np0005539552 systemd-logind[788]: Removed session 40.
Nov 29 02:24:09 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:24:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:09.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:10.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:11.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:24:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:12.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:24:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:13.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:14.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:14 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:24:14 np0005539552 systemd-logind[788]: New session 41 of user zuul.
Nov 29 02:24:14 np0005539552 systemd[1]: Started Session 41 of User zuul.
Nov 29 02:24:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:24:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:15.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:24:15 np0005539552 python3.9[111270]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 29 02:24:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:16.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:16 np0005539552 python3.9[111422]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:24:17 np0005539552 python3.9[111577]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Nov 29 02:24:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:17.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:18.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:18 np0005539552 python3.9[111729]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.49lv_sq1 follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:24:18 np0005539552 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 29 02:24:19 np0005539552 python3.9[111857]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.49lv_sq1 mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401057.7070615-109-164949900731843/.source.49lv_sq1 _original_basename=.20giijs4 follow=False checksum=a1a59eb28f721bfa8fb748cb88539cd5cab8b099 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:19 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:24:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:19.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:20.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:20 np0005539552 python3.9[112009]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:24:21 np0005539552 python3.9[112162]: ansible-ansible.builtin.blockinfile Invoked with block=compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCfIZbQlJSY8OFW9gaKZpL5AOJYgHeGcUU4xMMLWNL/xUPPZkDRJ+0oOBxm1GBsA8W/sQVZWDc//tIOaPRg0Ts5mepXlfGs0Url+hpuUxGZNLWaIiPfHq1tUx7zM7eWeUlVhlBayXU+bDoHZDE1TezLFLi49CXlrQuy/1Fb5Ju8aYVVJNoRltLwGKo8JrHv8UnYQ29iZPFO7+AEqgSmsEyz9hjMO7qStFsK0Z4RYJrbTZ/AMj8FNebCRWGtc2weikdIjLid5Z20teORSzpJW4jLDvRkyg92/WdI7iFDyHhslm5uNGHqqE2uRPqQFTZ7tdP6IJzfhJms7WfRdsOS7qJdAeOLzhn/EcmLaKoST1KzKZYzMdAtqrHDPDth+ERDeHtT8CEHNFNgwH4Drtp7YWlKZyVPsv6dK3iVC5WQ4Smet9VXXpZhT8JcQr97oS6/QJ/gT2yzHqH9vE62bRuuVM3lwDNiZkdn1nVbxa8d58RY3T49As7qmlP5Y43puhyXDWU=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDvBaB2c/CSsrpPIGSKo/yIA8NKQbrk/1m+GY/Ma4/XG#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCX/VzLQPSOCPDMQMb838UxHYaVIDkLBboGMSvw1EX6MmRkAHKbJbJizg3TXu8nfZimb1PW1TRaFLHQkljXQfhA=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDZ3gJW4xxSNpckw2TbtUBxTZruxTxiPlDkOB8Y4ICZA576sHCsss1Ph5y2zOkXYsz9fpf2TwDKPQIVDfUxQL2k42AS2PWqcJCelaMaAxDGDVmytzhvJO+0vO0kZSFoRnDYDxt2IUjJS2VV4xS4L9mRqjK8zsSYyINET0BAxRep9xLeUV0pztWwkopYucpBL9nU+ZMkA5y3nRMxInQNfxZwW5O2P7v+HScnTy2CUe+79l+0TMU0N6uM79jmcAAH5zDqSdRx1VS+lr4cWeNOPxGiXzEepk+MRml6Y0uGKdtdlboqK6kvYfSNkkhFmtXsnvtNQyA8UDSAercKYAeSPfJftqXmHbVvAY+Ky5R22RivRx7jpubqimyS4Tab95yEzsLi6hEQ2OW1pZleLTnr31vNLojOAxtrIY7YgkPSo3yrbURsfLyldLo3LfSlYfkTpkQFE2CajUrAitfcz+uMi9UVw0jCs+cC6uvKZdzu9Flnc8SDq2rMPIHuEP+9CACVSTU=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOxCaPCuKLUncOQ8c8c4/3OodUXgAR3WjvU4uCVk4XkO#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA7zHYLiINcKCNo52qkzrmctOgzvnHIchoPMaZyVaf/Aonhb5ntaWhlnHGxOVN+ZUQQOMPIjt7zIO4FB9IYg2xw=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC8DvicBdqy7dEZlHZpy7m/TwUChtVXFipP55AL4//M7HIh4A4ZWW0M0pb4E4WsXc1Y99eeNf5R+fmafWv5Z2x8Tq9KiRM9wQGSEJo1Sp7Ant8TcIyfbWCUIhmGAfkYUT2iUTjyyBrBL7iGVxJbYtCagodoXoIL4MSkgeZpadFa4XI4DieFBF95zOzXF6Z9RVUiocOG6vaogo3k/wTemQxQ/dlVV7SPrtj+GoZEUpeNlAKRbkAB8PNee/Ne+abzClpRp50s2pAh7smZFmL0O+wDOgWwFImPpxCkh4nR/3IJq6O53KXSl9jR4X/vmJHpFEHC6oZX5/hfwaJTfvvELB5cjzaFh3mzFweGkQq82VhAAxVksDTO2+aUZFGDJbMSvjPTSTEl+qx+GAl7E0KnzST+NMnd5qplw0KIj+BBZgkZtKK8kAsxxRU3zDMDotlvIDG1KYN+wOGRG2Cy2afXmGFIFYdzOFlvkAwmv9yhY5u5OlWxzuiZEOcqJ0dGS1e0hk8=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFq0l7tgdUK0C+AqSmZJQ8Y9Z17ynv3L7Gso+BnrUJe7#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLWT8H4lhVkE+892UU3HiUydE/Wuy5lmeTLAJzcPPkEmKKDZLorB5daY+peHiUZWU/JHax1i6VTJiGCUcfBK9Vw=#012 create=True mode=0644 path=/tmp/ansible.49lv_sq1 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:21.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:22.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:22 np0005539552 python3.9[112314]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.49lv_sq1' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:24:23 np0005539552 python3.9[112469]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.49lv_sq1 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:23.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:23 np0005539552 systemd[1]: session-41.scope: Deactivated successfully.
Nov 29 02:24:23 np0005539552 systemd[1]: session-41.scope: Consumed 5.526s CPU time.
Nov 29 02:24:23 np0005539552 systemd-logind[788]: Session 41 logged out. Waiting for processes to exit.
Nov 29 02:24:23 np0005539552 systemd-logind[788]: Removed session 41.
Nov 29 02:24:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:24.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:24 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:24:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:25.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:26.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:27.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:28.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:29.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:29 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:24:30 np0005539552 systemd-logind[788]: New session 42 of user zuul.
Nov 29 02:24:30 np0005539552 systemd[1]: Started Session 42 of User zuul.
Nov 29 02:24:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:30.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:31 np0005539552 python3.9[112702]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:24:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:31.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:32.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:32 np0005539552 python3.9[112858]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 29 02:24:33 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0.
Nov 29 02:24:33 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:24:33.016103) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:24:33 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16
Nov 29 02:24:33 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401073016255, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1560, "num_deletes": 250, "total_data_size": 3812152, "memory_usage": 3851816, "flush_reason": "Manual Compaction"}
Nov 29 02:24:33 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started
Nov 29 02:24:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:33.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:33 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401073618896, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 1599504, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8061, "largest_seqno": 9616, "table_properties": {"data_size": 1594022, "index_size": 2750, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13153, "raw_average_key_size": 20, "raw_value_size": 1582367, "raw_average_value_size": 2441, "num_data_blocks": 125, "num_entries": 648, "num_filter_entries": 648, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400925, "oldest_key_time": 1764400925, "file_creation_time": 1764401073, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:24:33 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 602879 microseconds, and 5951 cpu microseconds.
Nov 29 02:24:33 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:24:33 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:24:33.618986) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 1599504 bytes OK
Nov 29 02:24:33 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:24:33.619013) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started
Nov 29 02:24:33 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:24:33.960379) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done
Nov 29 02:24:33 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:24:33.960428) EVENT_LOG_v1 {"time_micros": 1764401073960418, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:24:33 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:24:33.960453) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:24:33 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 3804870, prev total WAL file size 3820311, number of live WAL files 2.
Nov 29 02:24:33 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:24:33 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:24:33.961735) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Nov 29 02:24:33 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:24:33 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(1562KB)], [15(9937KB)]
Nov 29 02:24:33 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401073961788, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 11775686, "oldest_snapshot_seqno": -1}
Nov 29 02:24:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:34.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:34 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 3990 keys, 9765805 bytes, temperature: kUnknown
Nov 29 02:24:34 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401074427406, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 9765805, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9733866, "index_size": 20885, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9989, "raw_key_size": 97691, "raw_average_key_size": 24, "raw_value_size": 9656448, "raw_average_value_size": 2420, "num_data_blocks": 917, "num_entries": 3990, "num_filter_entries": 3990, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764401073, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:24:34 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:24:34 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:24:34.427754) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 9765805 bytes
Nov 29 02:24:34 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:24:34.519008) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 25.3 rd, 21.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 9.7 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(13.5) write-amplify(6.1) OK, records in: 4453, records dropped: 463 output_compression: NoCompression
Nov 29 02:24:34 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:24:34.519070) EVENT_LOG_v1 {"time_micros": 1764401074519048, "job": 6, "event": "compaction_finished", "compaction_time_micros": 465741, "compaction_time_cpu_micros": 22790, "output_level": 6, "num_output_files": 1, "total_output_size": 9765805, "num_input_records": 4453, "num_output_records": 3990, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:24:34 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:24:34 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401074519579, "job": 6, "event": "table_file_deletion", "file_number": 17}
Nov 29 02:24:34 np0005539552 python3.9[113013]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:24:34 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:24:34 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401074521876, "job": 6, "event": "table_file_deletion", "file_number": 15}
Nov 29 02:24:34 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:24:33.961649) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:24:34 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:24:34.521977) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:24:34 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:24:34.521986) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:24:34 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:24:34.521988) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:24:34 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:24:34.521990) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:24:34 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:24:34.521991) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:24:35 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:24:35 np0005539552 python3.9[113167]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:24:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:35.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:36.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:36 np0005539552 python3.9[113320]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:24:37 np0005539552 python3.9[113473]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:37.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:37 np0005539552 systemd[1]: session-42.scope: Deactivated successfully.
Nov 29 02:24:37 np0005539552 systemd[1]: session-42.scope: Consumed 4.005s CPU time.
Nov 29 02:24:37 np0005539552 systemd-logind[788]: Session 42 logged out. Waiting for processes to exit.
Nov 29 02:24:37 np0005539552 systemd-logind[788]: Removed session 42.
Nov 29 02:24:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:38.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:39.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:40.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:40 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:24:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:41.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:42.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:42 np0005539552 systemd-logind[788]: New session 43 of user zuul.
Nov 29 02:24:43 np0005539552 systemd[1]: Started Session 43 of User zuul.
Nov 29 02:24:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:43.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:44.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:44 np0005539552 python3.9[113704]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:24:45 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:24:45 np0005539552 python3.9[113861]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:24:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:45.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:24:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:46.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:24:46 np0005539552 python3.9[113945]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 02:24:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:47.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:48.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:48 np0005539552 python3.9[114098]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:24:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:24:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:49.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:24:50 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:24:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:50.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:50 np0005539552 python3.9[114249]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 02:24:51 np0005539552 python3.9[114400]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:24:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:51.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:51 np0005539552 python3.9[114646]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:24:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:52.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:52 np0005539552 systemd-logind[788]: Session 43 logged out. Waiting for processes to exit.
Nov 29 02:24:52 np0005539552 systemd[1]: session-43.scope: Deactivated successfully.
Nov 29 02:24:52 np0005539552 systemd[1]: session-43.scope: Consumed 6.294s CPU time.
Nov 29 02:24:52 np0005539552 systemd-logind[788]: Removed session 43.
Nov 29 02:24:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:53.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:53 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 02:24:53 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:24:53 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:24:53 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:24:53 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:24:53 np0005539552 podman[114868]: 2025-11-29 07:24:53.732980774 +0000 UTC m=+0.244843764 container exec 2e449e743ed8b622f7d948671af16b72c407124db8f8583030f534a9757aa05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:24:53 np0005539552 podman[114868]: 2025-11-29 07:24:53.876324929 +0000 UTC m=+0.388187929 container exec_died 2e449e743ed8b622f7d948671af16b72c407124db8f8583030f534a9757aa05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-2, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:24:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:54.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:55 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:24:55 np0005539552 podman[115027]: 2025-11-29 07:24:55.360294845 +0000 UTC m=+0.890272935 container exec fbc97e64df6fe35a26c07fd8827c68b935db0d4237087be449d0fd015b40e005 (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-2-efzvmt)
Nov 29 02:24:55 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 02:24:55 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Nov 29 02:24:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:55.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:56 np0005539552 podman[115027]: 2025-11-29 07:24:56.002390878 +0000 UTC m=+1.532369008 container exec_died fbc97e64df6fe35a26c07fd8827c68b935db0d4237087be449d0fd015b40e005 (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-2-efzvmt)
Nov 29 02:24:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:56.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:57.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:24:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:58.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:24:58 np0005539552 podman[115092]: 2025-11-29 07:24:58.235840284 +0000 UTC m=+0.331806724 container exec 6c20d6486ea31b9565e62504d583e2d8c9512b707e8e2015e3cef66c15f3b3e6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr, io.buildah.version=1.28.2, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, version=2.2.4, description=keepalived for Ceph, architecture=x86_64, build-date=2023-02-22T09:23:20, distribution-scope=public, com.redhat.component=keepalived-container, io.openshift.expose-services=, name=keepalived, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Nov 29 02:24:58 np0005539552 podman[115114]: 2025-11-29 07:24:58.318826078 +0000 UTC m=+0.053907719 container exec_died 6c20d6486ea31b9565e62504d583e2d8c9512b707e8e2015e3cef66c15f3b3e6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., description=keepalived for Ceph, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, name=keepalived, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 29 02:24:58 np0005539552 podman[115092]: 2025-11-29 07:24:58.326088102 +0000 UTC m=+0.422054452 container exec_died 6c20d6486ea31b9565e62504d583e2d8c9512b707e8e2015e3cef66c15f3b3e6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr, build-date=2023-02-22T09:23:20, distribution-scope=public, io.openshift.tags=Ceph keepalived, vcs-type=git, io.buildah.version=1.28.2, release=1793, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, name=keepalived, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9)
Nov 29 02:24:58 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:24:58 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:24:58 np0005539552 systemd-logind[788]: New session 44 of user zuul.
Nov 29 02:24:59 np0005539552 systemd[1]: Started Session 44 of User zuul.
Nov 29 02:24:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:24:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:59.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:59 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:24:59 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:24:59 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:24:59 np0005539552 python3.9[115412]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:25:00 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:25:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:00.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:01.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:01 np0005539552 python3.9[115569]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:25:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:02.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:02 np0005539552 python3.9[115721]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:25:03 np0005539552 python3.9[115874]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:03.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:25:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:04.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:25:04 np0005539552 python3.9[116047]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401102.8062434-162-124046921270498/.source.crt _original_basename=compute-2.ctlplane.example.com-tls.crt follow=False checksum=a546bc854500145cf9be5f18ad73a849b891366d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:04 np0005539552 python3.9[116200]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:05 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:25:05 np0005539552 python3.9[116323]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401104.3308985-162-62478475130217/.source.crt _original_basename=compute-2.ctlplane.example.com-ca.crt follow=False checksum=96f02fb504411b0b161adf414c18934dfa96b5b7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:05.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:06.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:06 np0005539552 python3.9[116475]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:06 np0005539552 python3.9[116599]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401105.7309115-162-245128056214124/.source.key _original_basename=compute-2.ctlplane.example.com-tls.key follow=False checksum=e39eb41377ecdb77614a4fc7f0179145fe5e20ce backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:07.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:07 np0005539552 python3.9[116751]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:25:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:25:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:08.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:25:08 np0005539552 python3.9[116903]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:25:08 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:25:08 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.1 total, 600.0 interval
Cumulative writes: 1519 writes, 9992 keys, 1519 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 1519 writes, 1519 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1519 writes, 9992 keys, 1519 commit groups, 1.0 writes per commit group, ingest: 20.48 MB, 0.03 MB/s
Interval WAL: 1519 writes, 1519 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     15.4      0.73              0.03         3    0.242       0      0       0.0       0.0
  L6      1/0    9.31 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.7     39.2     35.6      0.53              0.05         2    0.267    8517    722       0.0       0.0
 Sum      1/0    9.31 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.7     16.6     24.0      1.26              0.08         5    0.252    8517    722       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.7     17.2     24.8      1.22              0.08         4    0.304    8517    722       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     39.2     35.6      0.53              0.05         2    0.267    8517    722       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     16.4      0.68              0.03         2    0.342       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.044       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.011, interval 0.011
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.03 GB write, 0.05 MB/s write, 0.02 GB read, 0.03 MB/s read, 1.3 seconds
Interval compaction: 0.03 GB write, 0.05 MB/s write, 0.02 GB read, 0.03 MB/s read, 1.2 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x560bf13bd1f0#2 capacity: 304.00 MB usage: 693.72 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 4.2e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(34,586.56 KB,0.188426%) FilterBlock(5,31.98 KB,0.0102746%) IndexBlock(5,75.17 KB,0.024148%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Nov 29 02:25:08 np0005539552 python3.9[117056]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:09 np0005539552 python3.9[117229]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401108.4497411-336-62033089375932/.source.crt _original_basename=compute-2.ctlplane.example.com-tls.crt follow=False checksum=b84cbf91e627769b0b676b723fbfe9900d4a5154 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:09.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:25:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:10.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:25:10 np0005539552 python3.9[117381]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:11 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:25:11 np0005539552 python3.9[117505]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401109.796012-336-25446785743280/.source.crt _original_basename=compute-2.ctlplane.example.com-ca.crt follow=False checksum=061972f8260f53205da8541ffc96c3e0cb49837b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:11 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:25:11 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:25:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:25:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:11.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:25:11 np0005539552 python3.9[117657]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:25:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:12.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:25:12 np0005539552 python3.9[117780]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401111.3666883-336-93235212333367/.source.key _original_basename=compute-2.ctlplane.example.com-tls.key follow=False checksum=9f6e21187bddf3213b68f9a600bd40af908cd878 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:13 np0005539552 python3.9[117933]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:25:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:13.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:13 np0005539552 python3.9[118085]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:25:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:14.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:14 np0005539552 python3.9[118238]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:15 np0005539552 python3.9[118361]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401114.2041638-511-102322782697236/.source.crt _original_basename=compute-2.ctlplane.example.com-tls.crt follow=False checksum=a656bcdd69c6e78a86479c8d66d161c652184b2a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:15.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:15 np0005539552 python3.9[118513]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:16 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:25:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:16.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:16 np0005539552 python3.9[118636]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401115.460987-511-59943466278238/.source.crt _original_basename=compute-2.ctlplane.example.com-ca.crt follow=False checksum=061972f8260f53205da8541ffc96c3e0cb49837b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:17 np0005539552 python3.9[118789]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:17.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:17 np0005539552 python3.9[118912]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401116.7717047-511-258467193968660/.source.key _original_basename=compute-2.ctlplane.example.com-tls.key follow=False checksum=6099d6caab7e14e5773b200ae9e9a23d11c07efe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:18.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:19 np0005539552 python3.9[119065]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:25:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:19.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:19 np0005539552 python3.9[119217]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:20.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:20 np0005539552 python3.9[119341]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401119.4653428-716-202297118907359/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1960c13778c50062ca07f689a187e0cd26c6ab56 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:21 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:25:21 np0005539552 python3.9[119493]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:25:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:21.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:22 np0005539552 python3.9[119645]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:22.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:22 np0005539552 python3.9[119770]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401121.5727544-792-142114526857904/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1960c13778c50062ca07f689a187e0cd26c6ab56 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:23 np0005539552 python3.9[119922]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:25:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:23.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:24.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:24 np0005539552 python3.9[120124]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:24 np0005539552 python3.9[120248]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401123.7548265-864-191166752892412/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1960c13778c50062ca07f689a187e0cd26c6ab56 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:25 np0005539552 python3.9[120400]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:25:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:25.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:26 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:25:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:26.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:26 np0005539552 python3.9[120552]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:26 np0005539552 python3.9[120676]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401125.8031287-937-269703171468793/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1960c13778c50062ca07f689a187e0cd26c6ab56 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:27.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:27 np0005539552 python3.9[120828]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:25:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:28.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:28 np0005539552 python3.9[120980]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:29 np0005539552 python3.9[121104]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401127.9537199-1011-87285203490348/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1960c13778c50062ca07f689a187e0cd26c6ab56 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:29.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:30.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:30 np0005539552 python3.9[121256]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:25:31 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:25:31 np0005539552 python3.9[121409]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:25:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:31.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:25:31 np0005539552 python3.9[121532]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401130.859831-1099-196834212346284/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1960c13778c50062ca07f689a187e0cd26c6ab56 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:32.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:32 np0005539552 systemd[1]: session-44.scope: Deactivated successfully.
Nov 29 02:25:32 np0005539552 systemd[1]: session-44.scope: Consumed 24.411s CPU time.
Nov 29 02:25:32 np0005539552 systemd-logind[788]: Session 44 logged out. Waiting for processes to exit.
Nov 29 02:25:32 np0005539552 systemd-logind[788]: Removed session 44.
Nov 29 02:25:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:33.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:34.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:35.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:36 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:25:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:36.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:37.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:38.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:38 np0005539552 systemd-logind[788]: New session 45 of user zuul.
Nov 29 02:25:38 np0005539552 systemd[1]: Started Session 45 of User zuul.
Nov 29 02:25:39 np0005539552 python3.9[121716]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:39.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:40.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:40 np0005539552 python3.9[121868]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:41 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:25:41 np0005539552 python3.9[121992]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401139.762542-69-232114333585290/.source.conf _original_basename=ceph.conf follow=False checksum=dcade63291eb6ea0d49dedd3c47047e031c2100b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:25:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:41.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:25:41 np0005539552 python3.9[122144]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:42.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:42 np0005539552 python3.9[122267]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401141.3350387-69-256034584668639/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=ced193c31d6b83611be924c31eabde34732ad5bc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:42 np0005539552 systemd[1]: session-45.scope: Deactivated successfully.
Nov 29 02:25:42 np0005539552 systemd[1]: session-45.scope: Consumed 2.847s CPU time.
Nov 29 02:25:42 np0005539552 systemd-logind[788]: Session 45 logged out. Waiting for processes to exit.
Nov 29 02:25:42 np0005539552 systemd-logind[788]: Removed session 45.
Nov 29 02:25:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:43.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:44.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:45.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:46.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:46 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:25:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:47.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:48.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:48 np0005539552 systemd-logind[788]: New session 46 of user zuul.
Nov 29 02:25:48 np0005539552 systemd[1]: Started Session 46 of User zuul.
Nov 29 02:25:49 np0005539552 python3.9[122501]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:25:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:49.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:50.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:50 np0005539552 python3.9[122658]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:25:51 np0005539552 python3.9[122810]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:25:51 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:25:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:51.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:51 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:25:51 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.3 total, 600.0 interval#012Cumulative writes: 5212 writes, 23K keys, 5212 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5212 writes, 747 syncs, 6.98 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5212 writes, 23K keys, 5212 commit groups, 1.0 writes per commit group, ingest: 18.74 MB, 0.03 MB/s#012Interval WAL: 5212 writes, 747 syncs, 6.98 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55cbab63d350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55cbab63d350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Nov 29 02:25:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:52.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:52 np0005539552 python3.9[122960]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:25:53 np0005539552 python3.9[123113]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 29 02:25:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:25:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:53.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:25:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:54.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 02:25:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:55.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 02:25:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:56.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:56 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:25:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:57.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:25:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:58.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:25:58 np0005539552 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Nov 29 02:25:58 np0005539552 python3.9[123271]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:25:59 np0005539552 python3.9[123356]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:25:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:25:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:59.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 02:26:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:00.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 02:26:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:01.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:02 np0005539552 python3.9[123510]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 02:26:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:26:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:02.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:03 np0005539552 python3[123666]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Nov 29 02:26:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:03.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:03 np0005539552 python3.9[123818]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:04.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:04 np0005539552 python3.9[124021]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:26:05 np0005539552 python3.9[124099]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:05.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:05 np0005539552 python3.9[124251]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:26:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:06.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:06 np0005539552 python3.9[124329]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.dfw29zaa recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:07 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:26:07 np0005539552 python3.9[124482]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:26:07 np0005539552 python3.9[124560]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:07.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:08.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:08 np0005539552 python3.9[124712]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:26:09 np0005539552 python3[124866]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 02:26:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:09.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:10 np0005539552 python3.9[125202]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:26:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:10.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:10 np0005539552 python3.9[125328]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401169.6070917-439-112238360430749/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:11.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:12.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:26:12 np0005539552 podman[125183]: 2025-11-29 07:26:12.772686939 +0000 UTC m=+2.855021071 container exec 2e449e743ed8b622f7d948671af16b72c407124db8f8583030f534a9757aa05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 02:26:12 np0005539552 podman[125183]: 2025-11-29 07:26:12.883314998 +0000 UTC m=+2.965649040 container exec_died 2e449e743ed8b622f7d948671af16b72c407124db8f8583030f534a9757aa05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:26:12 np0005539552 python3.9[125480]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:26:13 np0005539552 python3.9[125695]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401171.085706-483-55147736477305/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:13.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:14 np0005539552 python3.9[125909]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:26:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:14.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:14 np0005539552 python3.9[126035]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401173.7153385-528-71423819051360/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:15 np0005539552 python3.9[126187]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:26:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:15.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:16 np0005539552 python3.9[126312]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401174.9991763-574-248110772789257/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:26:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:16.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:26:17 np0005539552 python3.9[126465]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:26:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:17.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:17 np0005539552 python3.9[126590]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401176.4320261-618-112483230774056/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:26:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:18.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:26:19 np0005539552 python3.9[126743]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:19.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:20 np0005539552 python3.9[126895]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:26:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:26:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:20.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:26:20 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:26:20 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-2[77117]: 2025-11-29T07:26:20.858+0000 7f88aff6a640 -1 mon.compute-2@1(peon).paxos(paxos updating c 252..1002) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 1.822183132s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Nov 29 02:26:20 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).paxos(paxos updating c 252..1002) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 1.822183132s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Nov 29 02:26:20 np0005539552 python3.9[127051]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:21 np0005539552 podman[125745]: 2025-11-29 07:26:21.013960947 +0000 UTC m=+7.430997522 container exec fbc97e64df6fe35a26c07fd8827c68b935db0d4237087be449d0fd015b40e005 (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-2-efzvmt)
Nov 29 02:26:21 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:26:21 np0005539552 podman[125745]: 2025-11-29 07:26:21.441474234 +0000 UTC m=+7.858510779 container exec_died fbc97e64df6fe35a26c07fd8827c68b935db0d4237087be449d0fd015b40e005 (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-2-efzvmt)
Nov 29 02:26:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:21.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:21 np0005539552 python3.9[127221]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:26:21 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Nov 29 02:26:21 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:26:21.955447) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:26:21 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Nov 29 02:26:21 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401181955580, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 1129, "num_deletes": 252, "total_data_size": 2570613, "memory_usage": 2610928, "flush_reason": "Manual Compaction"}
Nov 29 02:26:21 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Nov 29 02:26:22 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401182071843, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 1686982, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9621, "largest_seqno": 10745, "table_properties": {"data_size": 1681932, "index_size": 2574, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10714, "raw_average_key_size": 19, "raw_value_size": 1671684, "raw_average_value_size": 3028, "num_data_blocks": 117, "num_entries": 552, "num_filter_entries": 552, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401073, "oldest_key_time": 1764401073, "file_creation_time": 1764401181, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:26:22 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 116415 microseconds, and 5999 cpu microseconds.
Nov 29 02:26:22 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:26:22 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:26:22.071884) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 1686982 bytes OK
Nov 29 02:26:22 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:26:22.071902) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Nov 29 02:26:22 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:26:22.181183) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Nov 29 02:26:22 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:26:22.181231) EVENT_LOG_v1 {"time_micros": 1764401182181223, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:26:22 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:26:22.181253) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:26:22 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 2565134, prev total WAL file size 2565134, number of live WAL files 2.
Nov 29 02:26:22 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:26:22 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:26:22.183204) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Nov 29 02:26:22 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:26:22 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(1647KB)], [18(9536KB)]
Nov 29 02:26:22 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401182183241, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 11452787, "oldest_snapshot_seqno": -1}
Nov 29 02:26:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:22.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:22 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 4020 keys, 9584957 bytes, temperature: kUnknown
Nov 29 02:26:22 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401182546580, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 9584957, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9553618, "index_size": 20157, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10117, "raw_key_size": 99068, "raw_average_key_size": 24, "raw_value_size": 9476440, "raw_average_value_size": 2357, "num_data_blocks": 876, "num_entries": 4020, "num_filter_entries": 4020, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764401182, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:26:22 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:26:22 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:26:22.547328) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 9584957 bytes
Nov 29 02:26:22 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:26:22.550892) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 31.5 rd, 26.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 9.3 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(12.5) write-amplify(5.7) OK, records in: 4542, records dropped: 522 output_compression: NoCompression
Nov 29 02:26:22 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:26:22.550923) EVENT_LOG_v1 {"time_micros": 1764401182550910, "job": 8, "event": "compaction_finished", "compaction_time_micros": 363888, "compaction_time_cpu_micros": 24882, "output_level": 6, "num_output_files": 1, "total_output_size": 9584957, "num_input_records": 4542, "num_output_records": 4020, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:26:22 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:26:22 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401182551449, "job": 8, "event": "table_file_deletion", "file_number": 20}
Nov 29 02:26:22 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:26:22 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401182552955, "job": 8, "event": "table_file_deletion", "file_number": 18}
Nov 29 02:26:22 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:26:22.183064) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:26:22 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:26:22.553021) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:26:22 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:26:22.553025) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:26:22 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:26:22.553026) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:26:22 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:26:22.553028) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:26:22 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:26:22.553029) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:26:22 np0005539552 podman[127283]: 2025-11-29 07:26:22.589887004 +0000 UTC m=+0.344444102 container exec 6c20d6486ea31b9565e62504d583e2d8c9512b707e8e2015e3cef66c15f3b3e6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, vcs-type=git, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, version=2.2.4, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, name=keepalived, description=keepalived for Ceph)
Nov 29 02:26:22 np0005539552 podman[127386]: 2025-11-29 07:26:22.726932555 +0000 UTC m=+0.112773037 container exec_died 6c20d6486ea31b9565e62504d583e2d8c9512b707e8e2015e3cef66c15f3b3e6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, name=keepalived, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, release=1793, com.redhat.component=keepalived-container, description=keepalived for Ceph, vcs-type=git, distribution-scope=public)
Nov 29 02:26:22 np0005539552 python3.9[127438]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:26:23 np0005539552 podman[127283]: 2025-11-29 07:26:23.072597709 +0000 UTC m=+0.827154807 container exec_died 6c20d6486ea31b9565e62504d583e2d8c9512b707e8e2015e3cef66c15f3b3e6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=keepalived for Ceph, io.buildah.version=1.28.2, name=keepalived, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=2.2.4, architecture=x86_64, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9)
Nov 29 02:26:23 np0005539552 python3.9[127592]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:26:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:23.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:24.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:24 np0005539552 python3.9[127772]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:25 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:26:25 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:26:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:25.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:25 np0005539552 python3.9[128079]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:26:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:26.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:26 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:26:27 np0005539552 python3.9[128233]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-2.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:1e:0a:c6:22:5a:f7" external_ids:ovn-encap-ip=172.19.0.102 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:26:27 np0005539552 ovs-vsctl[128234]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-2.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:1e:0a:c6:22:5a:f7 external_ids:ovn-encap-ip=172.19.0.102 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Nov 29 02:26:27 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 02:26:27 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:26:27 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:26:27 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:26:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:27.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:28 np0005539552 python3.9[128386]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:26:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:28.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:29 np0005539552 python3.9[128542]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:26:29 np0005539552 ovs-vsctl[128543]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Nov 29 02:26:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:26:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:29.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:26:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:30.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:30 np0005539552 python3.9[128693]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:26:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:31.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:31 np0005539552 python3.9[128848]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:26:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:32.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:32 np0005539552 python3.9[129001]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:26:32 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:26:33 np0005539552 python3.9[129079]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:26:33 np0005539552 python3.9[129231]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:26:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:33.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:34 np0005539552 python3.9[129309]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:26:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:26:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:34.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:26:34 np0005539552 radosgw[83248]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Nov 29 02:26:35 np0005539552 radosgw[83248]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Nov 29 02:26:35 np0005539552 radosgw[83248]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Nov 29 02:26:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:26:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:35.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:26:35 np0005539552 python3.9[129462]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:36.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:36 np0005539552 python3.9[129637]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:26:37 np0005539552 radosgw[83248]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Nov 29 02:26:37 np0005539552 radosgw[83248]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Nov 29 02:26:37 np0005539552 radosgw[83248]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Nov 29 02:26:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:37.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:38 np0005539552 python3.9[129743]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:38 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:26:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:38.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:38 np0005539552 python3.9[129896]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:26:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:26:39 np0005539552 python3.9[129974]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:39 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:26:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:39.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:40.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:40 np0005539552 python3.9[130126]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:26:40 np0005539552 systemd[1]: Reloading.
Nov 29 02:26:40 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:26:40 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:26:41 np0005539552 python3.9[130316]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:26:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:41.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:42 np0005539552 python3.9[130394]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:26:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:42.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:26:42 np0005539552 python3.9[130547]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:26:43 np0005539552 python3.9[130625]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:43.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:44 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:26:44 np0005539552 python3.9[130777]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:26:44 np0005539552 systemd[1]: Reloading.
Nov 29 02:26:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:26:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:44.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:26:44 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:26:44 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:26:44 np0005539552 systemd[1]: Starting Create netns directory...
Nov 29 02:26:44 np0005539552 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 02:26:44 np0005539552 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 02:26:44 np0005539552 systemd[1]: Finished Create netns directory.
Nov 29 02:26:45 np0005539552 python3.9[131021]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:26:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:45.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:26:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:46.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:26:46 np0005539552 python3.9[131173]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:26:46 np0005539552 python3.9[131297]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401205.8391128-1371-152470346583832/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:26:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:47.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:48.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:48 np0005539552 python3.9[131449]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:26:49 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:26:49 np0005539552 python3.9[131602]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:26:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:49.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:50 np0005539552 python3.9[131725]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401208.7435021-1446-218697716486706/.source.json _original_basename=.zl0sp3h3 follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:50.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:50 np0005539552 python3.9[131878]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:51.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:52.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:53 np0005539552 python3.9[132306]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Nov 29 02:26:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:53.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:54 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:26:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:54.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:55 np0005539552 python3.9[132459]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 02:26:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:55.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:56.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:56 np0005539552 python3.9[132611]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 29 02:26:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:57.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:58 np0005539552 python3[132791]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 02:26:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:58.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:26:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:59.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:59 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:27:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:00.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:27:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:01.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:27:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:27:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:02.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:27:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:03.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:04.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:04 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:27:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:27:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:05.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:27:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:06.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:07 np0005539552 podman[132805]: 2025-11-29 07:27:07.20178304 +0000 UTC m=+9.035076873 image pull 52cb1910f3f090372807028d1c2aea98d2557b1086636469529f290368ecdf69 quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 29 02:27:07 np0005539552 podman[132977]: 2025-11-29 07:27:07.335806861 +0000 UTC m=+0.027582096 image pull 52cb1910f3f090372807028d1c2aea98d2557b1086636469529f290368ecdf69 quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 29 02:27:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:07.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:07 np0005539552 podman[132977]: 2025-11-29 07:27:07.843889145 +0000 UTC m=+0.535664360 container create 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 02:27:07 np0005539552 python3[132791]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 29 02:27:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:08.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:09.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:09 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:27:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:27:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:10.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:27:11 np0005539552 python3.9[133170]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:27:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:11.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:27:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:12.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:27:12 np0005539552 python3.9[133324]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:27:13 np0005539552 python3.9[133401]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:27:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:13.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:14 np0005539552 python3.9[133552]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764401233.4691496-1710-12495360295767/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:27:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:27:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:14.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:27:14 np0005539552 python3.9[133628]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 02:27:14 np0005539552 systemd[1]: Reloading.
Nov 29 02:27:14 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:27:14 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:27:14 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:27:15 np0005539552 python3.9[133740]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:27:15 np0005539552 systemd[1]: Reloading.
Nov 29 02:27:15 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:27:15 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:27:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:15.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:15 np0005539552 systemd[1]: Starting ovn_controller container...
Nov 29 02:27:16 np0005539552 systemd[1]: Started libcrun container.
Nov 29 02:27:16 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15ccb95222ad2c9823372fc1fa2b87d72af73799a49435a4c103d969884fe8ce/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 29 02:27:16 np0005539552 systemd[1]: Started /usr/bin/podman healthcheck run 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535.
Nov 29 02:27:16 np0005539552 podman[133782]: 2025-11-29 07:27:16.159733856 +0000 UTC m=+0.139730164 container init 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:27:16 np0005539552 ovn_controller[133798]: + sudo -E kolla_set_configs
Nov 29 02:27:16 np0005539552 podman[133782]: 2025-11-29 07:27:16.20342445 +0000 UTC m=+0.183420738 container start 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 29 02:27:16 np0005539552 edpm-start-podman-container[133782]: ovn_controller
Nov 29 02:27:16 np0005539552 systemd[1]: Created slice User Slice of UID 0.
Nov 29 02:27:16 np0005539552 systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 29 02:27:16 np0005539552 systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 29 02:27:16 np0005539552 systemd[1]: Starting User Manager for UID 0...
Nov 29 02:27:16 np0005539552 edpm-start-podman-container[133781]: Creating additional drop-in dependency for "ovn_controller" (6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535)
Nov 29 02:27:16 np0005539552 podman[133805]: 2025-11-29 07:27:16.285180598 +0000 UTC m=+0.071242499 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 29 02:27:16 np0005539552 systemd[1]: 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535-756b85c195064953.service: Main process exited, code=exited, status=1/FAILURE
Nov 29 02:27:16 np0005539552 systemd[1]: 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535-756b85c195064953.service: Failed with result 'exit-code'.
Nov 29 02:27:16 np0005539552 systemd[1]: Reloading.
Nov 29 02:27:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:16.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:16 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:27:16 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:27:16 np0005539552 systemd[133833]: Queued start job for default target Main User Target.
Nov 29 02:27:16 np0005539552 systemd[133833]: Created slice User Application Slice.
Nov 29 02:27:16 np0005539552 systemd[133833]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Nov 29 02:27:16 np0005539552 systemd[133833]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 02:27:16 np0005539552 systemd[133833]: Reached target Paths.
Nov 29 02:27:16 np0005539552 systemd[133833]: Reached target Timers.
Nov 29 02:27:16 np0005539552 systemd[133833]: Starting D-Bus User Message Bus Socket...
Nov 29 02:27:16 np0005539552 systemd[133833]: Starting Create User's Volatile Files and Directories...
Nov 29 02:27:16 np0005539552 systemd[133833]: Listening on D-Bus User Message Bus Socket.
Nov 29 02:27:16 np0005539552 systemd[133833]: Finished Create User's Volatile Files and Directories.
Nov 29 02:27:16 np0005539552 systemd[133833]: Reached target Sockets.
Nov 29 02:27:16 np0005539552 systemd[133833]: Reached target Basic System.
Nov 29 02:27:16 np0005539552 systemd[133833]: Reached target Main User Target.
Nov 29 02:27:16 np0005539552 systemd[133833]: Startup finished in 150ms.
Nov 29 02:27:16 np0005539552 systemd[1]: Started User Manager for UID 0.
Nov 29 02:27:16 np0005539552 systemd[1]: Started ovn_controller container.
Nov 29 02:27:16 np0005539552 systemd[1]: Started Session c1 of User root.
Nov 29 02:27:16 np0005539552 ovn_controller[133798]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 02:27:16 np0005539552 ovn_controller[133798]: INFO:__main__:Validating config file
Nov 29 02:27:16 np0005539552 ovn_controller[133798]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 02:27:16 np0005539552 ovn_controller[133798]: INFO:__main__:Writing out command to execute
Nov 29 02:27:16 np0005539552 systemd[1]: session-c1.scope: Deactivated successfully.
Nov 29 02:27:16 np0005539552 ovn_controller[133798]: ++ cat /run_command
Nov 29 02:27:16 np0005539552 ovn_controller[133798]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 29 02:27:16 np0005539552 ovn_controller[133798]: + ARGS=
Nov 29 02:27:16 np0005539552 ovn_controller[133798]: + sudo kolla_copy_cacerts
Nov 29 02:27:16 np0005539552 systemd[1]: Started Session c2 of User root.
Nov 29 02:27:16 np0005539552 ovn_controller[133798]: + [[ ! -n '' ]]
Nov 29 02:27:16 np0005539552 ovn_controller[133798]: + . kolla_extend_start
Nov 29 02:27:16 np0005539552 ovn_controller[133798]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 29 02:27:16 np0005539552 ovn_controller[133798]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Nov 29 02:27:16 np0005539552 ovn_controller[133798]: + umask 0022
Nov 29 02:27:16 np0005539552 ovn_controller[133798]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Nov 29 02:27:16 np0005539552 systemd[1]: session-c2.scope: Deactivated successfully.
Nov 29 02:27:16 np0005539552 ovn_controller[133798]: 2025-11-29T07:27:16Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 29 02:27:16 np0005539552 ovn_controller[133798]: 2025-11-29T07:27:16Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 29 02:27:16 np0005539552 ovn_controller[133798]: 2025-11-29T07:27:16Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Nov 29 02:27:16 np0005539552 ovn_controller[133798]: 2025-11-29T07:27:16Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Nov 29 02:27:16 np0005539552 ovn_controller[133798]: 2025-11-29T07:27:16Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 29 02:27:16 np0005539552 ovn_controller[133798]: 2025-11-29T07:27:16Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Nov 29 02:27:16 np0005539552 NetworkManager[48926]: <info>  [1764401236.7568] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Nov 29 02:27:16 np0005539552 NetworkManager[48926]: <info>  [1764401236.7576] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:27:16 np0005539552 NetworkManager[48926]: <info>  [1764401236.7586] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Nov 29 02:27:16 np0005539552 NetworkManager[48926]: <info>  [1764401236.7590] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Nov 29 02:27:16 np0005539552 NetworkManager[48926]: <info>  [1764401236.7592] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 29 02:27:16 np0005539552 ovn_controller[133798]: 2025-11-29T07:27:16Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 29 02:27:16 np0005539552 kernel: br-int: entered promiscuous mode
Nov 29 02:27:16 np0005539552 ovn_controller[133798]: 2025-11-29T07:27:16Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 02:27:16 np0005539552 ovn_controller[133798]: 2025-11-29T07:27:16Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 02:27:16 np0005539552 ovn_controller[133798]: 2025-11-29T07:27:16Z|00010|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 29 02:27:16 np0005539552 ovn_controller[133798]: 2025-11-29T07:27:16Z|00011|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 02:27:16 np0005539552 ovn_controller[133798]: 2025-11-29T07:27:16Z|00012|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 02:27:16 np0005539552 ovn_controller[133798]: 2025-11-29T07:27:16Z|00013|features|INFO|OVS Feature: ct_zero_snat, state: supported
Nov 29 02:27:16 np0005539552 ovn_controller[133798]: 2025-11-29T07:27:16Z|00014|features|INFO|OVS Feature: ct_flush, state: supported
Nov 29 02:27:16 np0005539552 ovn_controller[133798]: 2025-11-29T07:27:16Z|00015|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Nov 29 02:27:16 np0005539552 ovn_controller[133798]: 2025-11-29T07:27:16Z|00016|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 29 02:27:16 np0005539552 ovn_controller[133798]: 2025-11-29T07:27:16Z|00017|main|INFO|OVS feature set changed, force recompute.
Nov 29 02:27:16 np0005539552 ovn_controller[133798]: 2025-11-29T07:27:16Z|00018|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 02:27:16 np0005539552 ovn_controller[133798]: 2025-11-29T07:27:16Z|00019|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Nov 29 02:27:16 np0005539552 ovn_controller[133798]: 2025-11-29T07:27:16Z|00020|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Nov 29 02:27:16 np0005539552 ovn_controller[133798]: 2025-11-29T07:27:16Z|00021|main|INFO|OVS feature set changed, force recompute.
Nov 29 02:27:16 np0005539552 ovn_controller[133798]: 2025-11-29T07:27:16Z|00022|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 02:27:16 np0005539552 ovn_controller[133798]: 2025-11-29T07:27:16Z|00023|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Nov 29 02:27:16 np0005539552 ovn_controller[133798]: 2025-11-29T07:27:16Z|00024|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Nov 29 02:27:16 np0005539552 systemd-udevd[133934]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:27:16 np0005539552 ovn_controller[133798]: 2025-11-29T07:27:16Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 02:27:16 np0005539552 ovn_controller[133798]: 2025-11-29T07:27:16Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 02:27:16 np0005539552 ovn_controller[133798]: 2025-11-29T07:27:16Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 02:27:16 np0005539552 ovn_controller[133798]: 2025-11-29T07:27:16Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 02:27:16 np0005539552 ovn_controller[133798]: 2025-11-29T07:27:16Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 02:27:16 np0005539552 ovn_controller[133798]: 2025-11-29T07:27:16Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 02:27:16 np0005539552 NetworkManager[48926]: <info>  [1764401236.8002] manager: (ovn-755ad2-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Nov 29 02:27:16 np0005539552 NetworkManager[48926]: <info>  [1764401236.8007] manager: (ovn-a37d86-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Nov 29 02:27:16 np0005539552 kernel: genev_sys_6081: entered promiscuous mode
Nov 29 02:27:16 np0005539552 NetworkManager[48926]: <info>  [1764401236.8184] device (genev_sys_6081): carrier: link connected
Nov 29 02:27:16 np0005539552 NetworkManager[48926]: <info>  [1764401236.8188] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/21)
Nov 29 02:27:16 np0005539552 NetworkManager[48926]: <info>  [1764401236.9775] manager: (ovn-a63f2f-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Nov 29 02:27:16 np0005539552 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 02:27:17 np0005539552 python3.9[134070]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:27:17 np0005539552 ovs-vsctl[134071]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Nov 29 02:27:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:17.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:18 np0005539552 python3.9[134223]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:27:18 np0005539552 ovs-vsctl[134225]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Nov 29 02:27:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:18.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:19 np0005539552 python3.9[134379]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:27:19 np0005539552 ovs-vsctl[134380]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Nov 29 02:27:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:19.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:19 np0005539552 systemd[1]: session-46.scope: Deactivated successfully.
Nov 29 02:27:19 np0005539552 systemd[1]: session-46.scope: Consumed 59.981s CPU time.
Nov 29 02:27:19 np0005539552 systemd-logind[788]: Session 46 logged out. Waiting for processes to exit.
Nov 29 02:27:19 np0005539552 systemd-logind[788]: Removed session 46.
Nov 29 02:27:19 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:27:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:27:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:20.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:27:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:21.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:22.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:23.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:24.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:24 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:27:25 np0005539552 systemd-logind[788]: New session 48 of user zuul.
Nov 29 02:27:25 np0005539552 systemd[1]: Started Session 48 of User zuul.
Nov 29 02:27:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:25.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:26.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:26 np0005539552 python3.9[134611]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:27:26 np0005539552 systemd[1]: Stopping User Manager for UID 0...
Nov 29 02:27:26 np0005539552 systemd[133833]: Activating special unit Exit the Session...
Nov 29 02:27:26 np0005539552 systemd[133833]: Stopped target Main User Target.
Nov 29 02:27:26 np0005539552 systemd[133833]: Stopped target Basic System.
Nov 29 02:27:26 np0005539552 systemd[133833]: Stopped target Paths.
Nov 29 02:27:26 np0005539552 systemd[133833]: Stopped target Sockets.
Nov 29 02:27:26 np0005539552 systemd[133833]: Stopped target Timers.
Nov 29 02:27:26 np0005539552 systemd[133833]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 29 02:27:26 np0005539552 systemd[133833]: Closed D-Bus User Message Bus Socket.
Nov 29 02:27:26 np0005539552 systemd[133833]: Stopped Create User's Volatile Files and Directories.
Nov 29 02:27:26 np0005539552 systemd[133833]: Removed slice User Application Slice.
Nov 29 02:27:26 np0005539552 systemd[133833]: Reached target Shutdown.
Nov 29 02:27:26 np0005539552 systemd[133833]: Finished Exit the Session.
Nov 29 02:27:26 np0005539552 systemd[133833]: Reached target Exit the Session.
Nov 29 02:27:26 np0005539552 systemd[1]: user@0.service: Deactivated successfully.
Nov 29 02:27:26 np0005539552 systemd[1]: Stopped User Manager for UID 0.
Nov 29 02:27:26 np0005539552 systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 29 02:27:26 np0005539552 systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 29 02:27:26 np0005539552 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 29 02:27:26 np0005539552 systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 29 02:27:26 np0005539552 systemd[1]: Removed slice User Slice of UID 0.
Nov 29 02:27:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:27.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:27 np0005539552 python3.9[134770]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:27:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:28.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:28 np0005539552 python3.9[134922]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:27:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:29.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:29 np0005539552 python3.9[135080]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:27:29 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:27:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:27:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:30.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:27:30 np0005539552 python3.9[135234]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:27:31 np0005539552 python3.9[135386]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:27:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:31.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:32 np0005539552 python3.9[135536]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:27:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:27:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:32.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:27:32 np0005539552 python3.9[135690]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 29 02:27:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:33.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:27:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:34.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:27:34 np0005539552 python3.9[135840]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:27:35 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:27:35 np0005539552 python3.9[135963]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401253.8587995-225-165671557496449/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:27:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:35.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:36.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:36 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:27:37 np0005539552 python3.9[136245]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:27:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:37.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:38 np0005539552 python3.9[136366]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401257.2200103-271-249020288108858/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:27:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:27:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:38.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:27:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:39.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:40 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:27:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:40.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:40 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:27:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:27:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:41.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:27:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:42.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).paxos(paxos active c 503..1068) lease_timeout -- calling new election
Nov 29 02:27:43 np0005539552 ceph-mon[77121]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Nov 29 02:27:43 np0005539552 ceph-mon[77121]: paxos.1).electionLogic(30) init, last seen epoch 30
Nov 29 02:27:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:27:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:43.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:27:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:44.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:27:44 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:27:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:45.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:27:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:46.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:27:46 np0005539552 ovn_controller[133798]: 2025-11-29T07:27:46Z|00025|memory|INFO|16384 kB peak resident set size after 30.1 seconds
Nov 29 02:27:46 np0005539552 ovn_controller[133798]: 2025-11-29T07:27:46Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:2
Nov 29 02:27:46 np0005539552 podman[136545]: 2025-11-29 07:27:46.927513059 +0000 UTC m=+0.150638554 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 02:27:47 np0005539552 python3.9[136584]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:27:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:47.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:48 np0005539552 python3.9[136682]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:27:48 np0005539552 ceph-mon[77121]: paxos.1).electionLogic(31) init, last seen epoch 31, mid-election, bumping
Nov 29 02:27:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:27:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:48.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:48 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:27:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:49.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:27:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:50.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:27:51 np0005539552 ceph-mon[77121]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:27:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:51.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:52.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:52 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:27:52 np0005539552 python3.9[136838]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 02:27:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:27:53 np0005539552 python3.9[136991]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:27:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:53.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:54 np0005539552 python3.9[137112]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401273.195016-381-261364695013105/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:27:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:54.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:54 np0005539552 python3.9[137263]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:27:55 np0005539552 python3.9[137384]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401274.2896354-381-194272124508802/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:27:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:55.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:55 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:27:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:56.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:56 np0005539552 ceph-mon[77121]: mon.compute-2 calling monitor election
Nov 29 02:27:56 np0005539552 ceph-mon[77121]: mon.compute-0 calling monitor election
Nov 29 02:27:56 np0005539552 ceph-mon[77121]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:27:56 np0005539552 ceph-mon[77121]: overall HEALTH_OK
Nov 29 02:27:56 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:27:57 np0005539552 python3.9[137535]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:27:57 np0005539552 python3.9[137656]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401276.5477612-514-275031824313887/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:27:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:57.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:58 np0005539552 python3.9[137806]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:27:58 np0005539552 ceph-mon[77121]: mon.compute-1 calling monitor election
Nov 29 02:27:58 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:27:58 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:27:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:58.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:58 np0005539552 python3.9[137928]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401277.7393942-514-133570850922142/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:27:59 np0005539552 python3.9[138078]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:27:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:27:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:27:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:59.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:28:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:00.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:00 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:28:01 np0005539552 python3.9[138233]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:28:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:01.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:02.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:02 np0005539552 python3.9[138385]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:02 np0005539552 python3.9[138464]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:28:03 np0005539552 python3.9[138616]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:03.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:04 np0005539552 python3.9[138694]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:28:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:04.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:05 np0005539552 python3.9[138847]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:05.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:05 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:28:06 np0005539552 python3.9[139049]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:06.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:06 np0005539552 python3.9[139128]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:07 np0005539552 python3.9[139280]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:07.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:07 np0005539552 python3.9[139358]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:08.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:08 np0005539552 python3.9[139511]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:28:08 np0005539552 systemd[1]: Reloading.
Nov 29 02:28:08 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:28:08 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:28:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:28:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:09.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:28:09 np0005539552 python3.9[139701]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:10 np0005539552 python3.9[139779]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:10.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:10 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:28:11 np0005539552 python3.9[139932]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:11 np0005539552 python3.9[140010]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:11.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:12.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:12 np0005539552 python3.9[140162]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:28:12 np0005539552 systemd[1]: Reloading.
Nov 29 02:28:12 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:28:12 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:28:12 np0005539552 systemd[1]: Starting Create netns directory...
Nov 29 02:28:12 np0005539552 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 02:28:12 np0005539552 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 02:28:12 np0005539552 systemd[1]: Finished Create netns directory.
Nov 29 02:28:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:13.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:13 np0005539552 python3.9[140356]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:28:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:14.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:14 np0005539552 python3.9[140508]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:15 np0005539552 python3.9[140632]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401294.1216662-967-155388477806258/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:28:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:28:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:15.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:28:15 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:28:16 np0005539552 python3.9[140784]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:28:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:16.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:17 np0005539552 podman[140909]: 2025-11-29 07:28:17.199347292 +0000 UTC m=+0.108452447 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 02:28:17 np0005539552 python3.9[140956]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:28:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:17.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:28:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:28:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:18.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:28:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:19.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:20 np0005539552 python3.9[141087]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401296.7825296-1041-224882594774189/.source.json _original_basename=.psongmr6 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:28:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:20.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:28:20 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:28:21 np0005539552 python3.9[141241]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:21.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:22.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:23 np0005539552 python3.9[141719]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Nov 29 02:28:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:23.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:24 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:28:24 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:28:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:24.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:24 np0005539552 python3.9[141872]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 02:28:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:25.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:25 np0005539552 python3.9[142074]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 29 02:28:25 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:28:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:26.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:28:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:27.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:28:28 np0005539552 python3[142254]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 02:28:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:28:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:28.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:28:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:29.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:30.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:30 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:28:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:31.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:32.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:28:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:33.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:28:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:34.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:35.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:35 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:28:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:36.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:37.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:38.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:39.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:28:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:40.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:28:40 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:28:40 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:28:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:41.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:42.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:43.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:44 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).paxos(paxos updating c 503..1101) lease_timeout -- calling new election
Nov 29 02:28:44 np0005539552 ceph-mon[77121]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Nov 29 02:28:44 np0005539552 ceph-mon[77121]: paxos.1).electionLogic(34) init, last seen epoch 34
Nov 29 02:28:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:44.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:44 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:28:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:45.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:28:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:46.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:28:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:28:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:47.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:28:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:28:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:28:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:48.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:28:48 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:28:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:49.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:50.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:51.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:52.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:52 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:28:53 np0005539552 ceph-mon[77121]: log_channel(cluster) log [INF] : mon.compute-2 is new leader, mons compute-2,compute-1 in quorum (ranks 1,2)
Nov 29 02:28:53 np0005539552 podman[142465]: 2025-11-29 07:28:53.550919275 +0000 UTC m=+5.626417431 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 29 02:28:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:53.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:28:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:54.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:28:55 np0005539552 podman[142267]: 2025-11-29 07:28:55.637247741 +0000 UTC m=+27.528551999 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:28:55 np0005539552 podman[142517]: 2025-11-29 07:28:55.772501548 +0000 UTC m=+0.031023025 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:28:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:55.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:56 np0005539552 podman[142517]: 2025-11-29 07:28:56.170983352 +0000 UTC m=+0.429504799 container create 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:28:56 np0005539552 python3[142254]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:28:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:28:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:56.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:28:56 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:28:56 np0005539552 ceph-mon[77121]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 02:28:56 np0005539552 ceph-mon[77121]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:28:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:28:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:57.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:28:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:28:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:58.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:28:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:28:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:59.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:00 np0005539552 ceph-mon[77121]: mon.compute-2@1(electing) e3 handle_timecheck drop unexpected msg
Nov 29 02:29:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:00.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:00 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:29:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:01.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:29:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:02.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:29:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:03.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:04.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:04 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:29:05 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:29:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:05.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:06 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:29:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:06.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:07.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:08 np0005539552 ceph-mon[77121]: mon.compute-2 calling monitor election
Nov 29 02:29:08 np0005539552 ceph-mon[77121]: mon.compute-2 is new leader, mons compute-2,compute-1 in quorum (ranks 1,2)
Nov 29 02:29:08 np0005539552 ceph-mon[77121]: mon.compute-0 calling monitor election
Nov 29 02:29:08 np0005539552 ceph-mon[77121]: overall HEALTH_OK
Nov 29 02:29:08 np0005539552 ceph-mon[77121]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:29:08 np0005539552 ceph-mon[77121]: overall HEALTH_OK
Nov 29 02:29:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:29:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:08.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:29:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:09.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:10 np0005539552 python3.9[142764]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:29:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:10.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:11.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:29:12 np0005539552 python3.9[142919]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:12.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:12 np0005539552 python3.9[142996]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:29:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:13.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:14.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:15 np0005539552 python3.9[143148]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764401353.051659-1305-256246673531153/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:15.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:16 np0005539552 python3.9[143224]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 02:29:16 np0005539552 systemd[1]: Reloading.
Nov 29 02:29:16 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:29:16 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:29:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:16.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:17 np0005539552 python3.9[143337]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:29:17 np0005539552 systemd[1]: Reloading.
Nov 29 02:29:17 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:29:17 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:29:17 np0005539552 ceph-mon[77121]: Health check failed: 3 slow ops, oldest one blocked for 33 sec, daemons [mon.compute-0,mon.compute-1] have slow ops. (SLOW_OPS)
Nov 29 02:29:17 np0005539552 systemd[1]: Starting ovn_metadata_agent container...
Nov 29 02:29:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:17.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:29:18 np0005539552 systemd[1]: Started libcrun container.
Nov 29 02:29:18 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a2c30a2cec576b531115aa01ea3ea16a47f0c33ef4072a03af94219f0f15f13/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Nov 29 02:29:18 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a2c30a2cec576b531115aa01ea3ea16a47f0c33ef4072a03af94219f0f15f13/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:29:18 np0005539552 systemd[1]: Started /usr/bin/podman healthcheck run 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad.
Nov 29 02:29:18 np0005539552 podman[143378]: 2025-11-29 07:29:18.426416801 +0000 UTC m=+0.742450414 container init 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 02:29:18 np0005539552 ovn_metadata_agent[143394]: + sudo -E kolla_set_configs
Nov 29 02:29:18 np0005539552 podman[143378]: 2025-11-29 07:29:18.463351294 +0000 UTC m=+0.779384897 container start 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 29 02:29:18 np0005539552 edpm-start-podman-container[143378]: ovn_metadata_agent
Nov 29 02:29:18 np0005539552 ovn_metadata_agent[143394]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 02:29:18 np0005539552 ovn_metadata_agent[143394]: INFO:__main__:Validating config file
Nov 29 02:29:18 np0005539552 ovn_metadata_agent[143394]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 02:29:18 np0005539552 ovn_metadata_agent[143394]: INFO:__main__:Copying service configuration files
Nov 29 02:29:18 np0005539552 ovn_metadata_agent[143394]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Nov 29 02:29:18 np0005539552 ovn_metadata_agent[143394]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Nov 29 02:29:18 np0005539552 ovn_metadata_agent[143394]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Nov 29 02:29:18 np0005539552 ovn_metadata_agent[143394]: INFO:__main__:Writing out command to execute
Nov 29 02:29:18 np0005539552 ovn_metadata_agent[143394]: INFO:__main__:Setting permission for /var/lib/neutron
Nov 29 02:29:18 np0005539552 ovn_metadata_agent[143394]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Nov 29 02:29:18 np0005539552 ovn_metadata_agent[143394]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Nov 29 02:29:18 np0005539552 ovn_metadata_agent[143394]: INFO:__main__:Setting permission for /var/lib/neutron/external
Nov 29 02:29:18 np0005539552 ovn_metadata_agent[143394]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Nov 29 02:29:18 np0005539552 ovn_metadata_agent[143394]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Nov 29 02:29:18 np0005539552 ovn_metadata_agent[143394]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Nov 29 02:29:18 np0005539552 ovn_metadata_agent[143394]: ++ cat /run_command
Nov 29 02:29:18 np0005539552 ovn_metadata_agent[143394]: + CMD=neutron-ovn-metadata-agent
Nov 29 02:29:18 np0005539552 ovn_metadata_agent[143394]: + ARGS=
Nov 29 02:29:18 np0005539552 ovn_metadata_agent[143394]: + sudo kolla_copy_cacerts
Nov 29 02:29:18 np0005539552 podman[143402]: 2025-11-29 07:29:18.530351266 +0000 UTC m=+0.055527194 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Nov 29 02:29:18 np0005539552 edpm-start-podman-container[143377]: Creating additional drop-in dependency for "ovn_metadata_agent" (44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad)
Nov 29 02:29:18 np0005539552 ovn_metadata_agent[143394]: + [[ ! -n '' ]]
Nov 29 02:29:18 np0005539552 ovn_metadata_agent[143394]: + . kolla_extend_start
Nov 29 02:29:18 np0005539552 ovn_metadata_agent[143394]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Nov 29 02:29:18 np0005539552 ovn_metadata_agent[143394]: Running command: 'neutron-ovn-metadata-agent'
Nov 29 02:29:18 np0005539552 ovn_metadata_agent[143394]: + umask 0022
Nov 29 02:29:18 np0005539552 ovn_metadata_agent[143394]: + exec neutron-ovn-metadata-agent
Nov 29 02:29:18 np0005539552 systemd[1]: Reloading.
Nov 29 02:29:18 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:29:18 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:29:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:18.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:18 np0005539552 ceph-mon[77121]: Health check cleared: SLOW_OPS (was: 3 slow ops, oldest one blocked for 33 sec, daemons [mon.compute-0,mon.compute-1] have slow ops.)
Nov 29 02:29:18 np0005539552 ceph-mon[77121]: Cluster is now healthy
Nov 29 02:29:18 np0005539552 systemd[1]: Started ovn_metadata_agent container.
Nov 29 02:29:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:19.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:20 np0005539552 systemd[1]: session-48.scope: Deactivated successfully.
Nov 29 02:29:20 np0005539552 systemd[1]: session-48.scope: Consumed 59.650s CPU time.
Nov 29 02:29:20 np0005539552 systemd-logind[788]: Session 48 logged out. Waiting for processes to exit.
Nov 29 02:29:20 np0005539552 systemd-logind[788]: Removed session 48.
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.534 143400 INFO neutron.common.config [-] Logging enabled!#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.534 143400 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.534 143400 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.535 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.535 143400 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.535 143400 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.535 143400 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.535 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.535 143400 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.535 143400 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.535 143400 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.535 143400 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.536 143400 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.536 143400 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.536 143400 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.536 143400 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.536 143400 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.536 143400 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.536 143400 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.536 143400 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.536 143400 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.537 143400 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.537 143400 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.537 143400 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.537 143400 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.537 143400 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.537 143400 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.537 143400 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.537 143400 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.537 143400 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.538 143400 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.538 143400 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.538 143400 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.538 143400 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.538 143400 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.538 143400 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.538 143400 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.538 143400 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.539 143400 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.539 143400 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.539 143400 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.539 143400 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.539 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.539 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.539 143400 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.539 143400 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.539 143400 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.539 143400 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.540 143400 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.540 143400 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.540 143400 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.540 143400 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.540 143400 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.540 143400 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.540 143400 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.540 143400 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.540 143400 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.541 143400 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.541 143400 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.541 143400 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.541 143400 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.541 143400 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.541 143400 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.541 143400 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.541 143400 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.541 143400 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.542 143400 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.542 143400 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.542 143400 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.542 143400 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.542 143400 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.542 143400 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.542 143400 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.542 143400 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.543 143400 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.543 143400 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.543 143400 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.543 143400 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.543 143400 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.543 143400 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.543 143400 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.543 143400 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.543 143400 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.543 143400 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.544 143400 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.544 143400 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.544 143400 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.544 143400 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.544 143400 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.544 143400 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.544 143400 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.544 143400 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.544 143400 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.545 143400 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.545 143400 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.545 143400 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.545 143400 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.545 143400 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.545 143400 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.545 143400 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.545 143400 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.545 143400 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.545 143400 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.546 143400 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.546 143400 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.546 143400 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.546 143400 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.546 143400 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.546 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.546 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.546 143400 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.546 143400 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.547 143400 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.547 143400 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.547 143400 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.547 143400 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.547 143400 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.547 143400 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.547 143400 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.548 143400 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.548 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.548 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.548 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.548 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.548 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.548 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.548 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.549 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.549 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.549 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.549 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.549 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.549 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.549 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.549 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.549 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.550 143400 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.550 143400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.550 143400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.550 143400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.550 143400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.550 143400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.550 143400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.550 143400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.550 143400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.551 143400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.551 143400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.551 143400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.551 143400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.551 143400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.551 143400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.551 143400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.552 143400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.552 143400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.552 143400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.552 143400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.552 143400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.552 143400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.552 143400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.552 143400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.552 143400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.553 143400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.553 143400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.553 143400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.553 143400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.553 143400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.553 143400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.553 143400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.553 143400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.553 143400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.554 143400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.554 143400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.554 143400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.554 143400 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.554 143400 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.554 143400 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.555 143400 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.555 143400 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.555 143400 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.555 143400 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.555 143400 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.555 143400 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.555 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.556 143400 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.556 143400 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.556 143400 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.556 143400 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.556 143400 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.556 143400 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.556 143400 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.557 143400 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.557 143400 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.557 143400 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.557 143400 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.557 143400 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.557 143400 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.557 143400 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.557 143400 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.558 143400 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.558 143400 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.558 143400 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.558 143400 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.558 143400 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.558 143400 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.558 143400 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.558 143400 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.559 143400 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.559 143400 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.559 143400 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.559 143400 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.559 143400 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.559 143400 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.559 143400 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.559 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.559 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.560 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.560 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.560 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.560 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.560 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.560 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.560 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.560 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.560 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.561 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.561 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.561 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.561 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.561 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.561 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.561 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.561 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.561 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.561 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.562 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.562 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.562 143400 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.562 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.562 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.562 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.562 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.562 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.563 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.563 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.563 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.563 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.563 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.563 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.563 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.564 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.564 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.564 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.564 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.564 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.564 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.564 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.564 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.565 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.565 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.565 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.565 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.565 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.565 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.565 143400 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.565 143400 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.565 143400 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.566 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.566 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.566 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.566 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.566 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.566 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.566 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.566 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.567 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.567 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.567 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.567 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.567 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.567 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.567 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.567 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.567 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.568 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.568 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.568 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.568 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.568 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.568 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.568 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.568 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.568 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.568 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.569 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.569 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.569 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.569 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.569 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.569 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.569 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.569 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.569 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.569 143400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.570 143400 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.579 143400 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.579 143400 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.579 143400 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.580 143400 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.580 143400 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.594 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 479f969f-dbf7-4938-8979-b8532eb113f6 (UUID: 479f969f-dbf7-4938-8979-b8532eb113f6) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.615 143400 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.615 143400 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.615 143400 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.615 143400 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.619 143400 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.625 143400 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.630 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '479f969f-dbf7-4938-8979-b8532eb113f6'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], external_ids={}, name=479f969f-dbf7-4938-8979-b8532eb113f6, nb_cfg_timestamp=1764401244773, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.632 143400 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fddc7007f70>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.633 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.633 143400 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.633 143400 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.633 143400 INFO oslo_service.service [-] Starting 1 workers
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.638 143400 DEBUG oslo_service.service [-] Started child 143505 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.641 143505 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-164831'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.642 143400 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp1mvwajb7/privsep.sock']
Nov 29 02:29:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:20.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.664 143505 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.664 143505 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.665 143505 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.667 143505 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.674 143505 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 29 02:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:20.681 143505 INFO eventlet.wsgi.server [-] (143505) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Nov 29 02:29:21 np0005539552 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Nov 29 02:29:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:21.321 143400 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 29 02:29:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:21.323 143400 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp1mvwajb7/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 29 02:29:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:21.188 143510 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 29 02:29:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:21.195 143510 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 29 02:29:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:21.199 143510 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Nov 29 02:29:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:21.200 143510 INFO oslo.privsep.daemon [-] privsep daemon running as pid 143510
Nov 29 02:29:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:21.327 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[770e6785-043d-4d13-aadd-7133463ba411]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:29:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:21.886 143510 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:29:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:21.887 143510 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:29:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:21.887 143510 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:29:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:21.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:22 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:22.434 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[269c659c-d46c-46a4-9051-62e2691694d4]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:29:22 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:22.446 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, column=external_ids, values=({'neutron:ovn-metadata-id': 'ec4007eb-bf76-5d15-8bf6-b7e5f324995c'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 02:29:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:22.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:23.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:24 np0005539552 podman[143647]: 2025-11-29 07:29:24.04689442 +0000 UTC m=+0.119778251 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:29:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:24.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:25 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:29:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:25.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:26.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:27.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:28.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:29 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:29:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:29.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:30.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:31.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:32.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:33.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:34.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:35.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:36.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:36 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:29:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:37.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:38.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:29:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:39.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:40.002 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 02:29:40 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:29:40 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:29:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:40.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:41.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:42.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.668 143400 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.668 143400 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.668 143400 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.668 143400 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.668 143400 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.669 143400 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.669 143400 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.669 143400 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.669 143400 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.669 143400 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.670 143400 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.670 143400 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.670 143400 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.670 143400 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.670 143400 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.670 143400 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.670 143400 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.671 143400 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.671 143400 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.671 143400 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.671 143400 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.671 143400 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.671 143400 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.672 143400 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.672 143400 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.672 143400 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.672 143400 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.672 143400 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.672 143400 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.672 143400 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.673 143400 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.673 143400 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.673 143400 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.673 143400 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.673 143400 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.673 143400 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.673 143400 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.674 143400 DEBUG oslo_service.service [-] host                           = compute-2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.674 143400 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.674 143400 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.674 143400 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.674 143400 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.675 143400 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.675 143400 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.675 143400 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.675 143400 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.675 143400 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.675 143400 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.675 143400 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.675 143400 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.676 143400 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.676 143400 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.676 143400 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.676 143400 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.676 143400 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.676 143400 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.676 143400 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.676 143400 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.676 143400 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.677 143400 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.677 143400 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.677 143400 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.677 143400 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.677 143400 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.677 143400 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.677 143400 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.677 143400 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.678 143400 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.678 143400 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.678 143400 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.678 143400 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.678 143400 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.678 143400 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.678 143400 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.679 143400 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.679 143400 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.679 143400 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.679 143400 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.679 143400 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.679 143400 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.679 143400 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.680 143400 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.680 143400 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.680 143400 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.680 143400 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.680 143400 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.680 143400 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.680 143400 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.680 143400 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.680 143400 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.681 143400 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.681 143400 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.681 143400 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.681 143400 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.681 143400 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.681 143400 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.681 143400 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.681 143400 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.682 143400 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.682 143400 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.682 143400 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.682 143400 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.682 143400 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.682 143400 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.682 143400 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.683 143400 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.683 143400 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.683 143400 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.683 143400 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.683 143400 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.684 143400 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.684 143400 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.684 143400 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.684 143400 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.684 143400 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.684 143400 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.684 143400 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.684 143400 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.685 143400 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.685 143400 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.685 143400 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.685 143400 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.685 143400 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.685 143400 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.685 143400 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.685 143400 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.686 143400 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.686 143400 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.686 143400 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.686 143400 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.686 143400 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.686 143400 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.686 143400 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.686 143400 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.687 143400 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.687 143400 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.687 143400 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.687 143400 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.687 143400 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.687 143400 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.687 143400 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.687 143400 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.687 143400 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.688 143400 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.688 143400 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.688 143400 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.688 143400 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.688 143400 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.688 143400 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.688 143400 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.688 143400 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.688 143400 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.688 143400 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.689 143400 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.689 143400 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.689 143400 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.689 143400 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.689 143400 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.689 143400 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.689 143400 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.689 143400 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.689 143400 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.690 143400 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.690 143400 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.690 143400 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.690 143400 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.690 143400 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.690 143400 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.690 143400 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.691 143400 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.691 143400 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.691 143400 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.691 143400 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.691 143400 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.691 143400 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.691 143400 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.691 143400 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.692 143400 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.692 143400 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.692 143400 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.692 143400 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.692 143400 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.692 143400 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.692 143400 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.692 143400 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.693 143400 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.693 143400 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.693 143400 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.693 143400 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.693 143400 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.693 143400 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.693 143400 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.693 143400 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.693 143400 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.694 143400 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.694 143400 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.694 143400 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.694 143400 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.694 143400 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.694 143400 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.694 143400 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.694 143400 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.695 143400 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.695 143400 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.695 143400 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.695 143400 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.695 143400 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.695 143400 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.695 143400 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.695 143400 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.695 143400 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.695 143400 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.696 143400 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.696 143400 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.696 143400 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.696 143400 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.696 143400 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.696 143400 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.696 143400 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.696 143400 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.696 143400 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.696 143400 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.697 143400 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.697 143400 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.697 143400 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.697 143400 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.697 143400 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.697 143400 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.697 143400 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.698 143400 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.698 143400 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.698 143400 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.698 143400 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.698 143400 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.698 143400 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.698 143400 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.698 143400 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.699 143400 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.699 143400 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.699 143400 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.699 143400 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.699 143400 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.699 143400 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.699 143400 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.700 143400 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.700 143400 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.700 143400 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.700 143400 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.700 143400 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.700 143400 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.700 143400 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.701 143400 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.701 143400 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.701 143400 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.701 143400 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.701 143400 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.701 143400 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.701 143400 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.702 143400 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.702 143400 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.702 143400 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.702 143400 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.702 143400 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.702 143400 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.702 143400 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.703 143400 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.703 143400 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.703 143400 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.703 143400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.703 143400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.703 143400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.704 143400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.704 143400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.704 143400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.704 143400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.704 143400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.704 143400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.705 143400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.705 143400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.705 143400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.705 143400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.705 143400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.705 143400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.706 143400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.706 143400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.706 143400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.706 143400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.706 143400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.706 143400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.706 143400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.707 143400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.707 143400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.707 143400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.707 143400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.707 143400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.707 143400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.707 143400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.708 143400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.708 143400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.708 143400 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.708 143400 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.708 143400 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.708 143400 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:29:43.708 143400 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 29 02:29:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:43.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:44.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:44 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:29:45 np0005539552 systemd-logind[788]: New session 49 of user zuul.
Nov 29 02:29:45 np0005539552 systemd[1]: Started Session 49 of User zuul.
Nov 29 02:29:45 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:29:45 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:29:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:45.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:46 np0005539552 python3.9[143938]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:29:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:46.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:47.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:48.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:48 np0005539552 podman[144071]: 2025-11-29 07:29:48.9821196 +0000 UTC m=+0.065154863 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 02:29:49 np0005539552 python3.9[144165]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:29:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:49.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:50.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:51 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:29:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:51.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:52.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:53.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:54 np0005539552 python3.9[144332]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 02:29:54 np0005539552 systemd[1]: Reloading.
Nov 29 02:29:54 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:29:54 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:29:54 np0005539552 podman[144335]: 2025-11-29 07:29:54.279917824 +0000 UTC m=+0.147619290 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 02:29:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:54.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:55 np0005539552 python3.9[144545]: ansible-ansible.builtin.service_facts Invoked
Nov 29 02:29:55 np0005539552 network[144562]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 02:29:55 np0005539552 network[144563]: 'network-scripts' will be removed from distribution in near future.
Nov 29 02:29:55 np0005539552 network[144564]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 02:29:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:55.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:56 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:29:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:56.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:57.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:58.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:29:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:59.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:00.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:00 np0005539552 ceph-mon[77121]: overall HEALTH_OK
Nov 29 02:30:01 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:30:01 np0005539552 python3.9[144829]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:30:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:01.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:02 np0005539552 python3.9[144982]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:30:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:02.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:02 np0005539552 python3.9[145136]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:30:03 np0005539552 python3.9[145289]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:30:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:03.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:04 np0005539552 python3.9[145442]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:30:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:30:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:04.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:30:05 np0005539552 python3.9[145596]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:30:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:05.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:06 np0005539552 python3.9[145749]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:30:06 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:30:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:06.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:07 np0005539552 python3.9[145953]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:07 np0005539552 python3.9[146105]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:07.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:08 np0005539552 python3.9[146257]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:08.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:09 np0005539552 python3.9[146410]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:09 np0005539552 python3.9[146562]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:09.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:10 np0005539552 python3.9[146715]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:30:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:10.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:30:11 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:30:11 np0005539552 python3.9[146867]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:11.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:12.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:12 np0005539552 python3.9[147020]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:13 np0005539552 python3.9[147172]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:13.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:14 np0005539552 python3.9[147324]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:30:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:14.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:30:14 np0005539552 python3.9[147479]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:15 np0005539552 python3.9[147631]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:15.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:16 np0005539552 python3.9[147783]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:16 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:30:16 np0005539552 python3.9[147936]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:16.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:17 np0005539552 python3.9[148088]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:30:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:17.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:18 np0005539552 python3.9[148241]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 02:30:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:18.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:19 np0005539552 podman[148365]: 2025-11-29 07:30:19.509046241 +0000 UTC m=+0.160608161 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 02:30:19 np0005539552 python3.9[148399]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 02:30:19 np0005539552 systemd[1]: Reloading.
Nov 29 02:30:19 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:30:19 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:30:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:19.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:30:20.573 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:30:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:30:20.573 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:30:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:30:20.573 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:30:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:20.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:20 np0005539552 python3.9[148600]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:30:21 np0005539552 python3.9[148753]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:30:21 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:30:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:21.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:22 np0005539552 python3.9[148906]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:30:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:30:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:22.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:30:23 np0005539552 python3.9[149060]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:30:23 np0005539552 python3.9[149213]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:30:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:23.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:24 np0005539552 python3.9[149366]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:30:24 np0005539552 podman[149369]: 2025-11-29 07:30:24.554897537 +0000 UTC m=+0.093705110 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 02:30:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:24.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:25 np0005539552 python3.9[149546]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:30:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:25.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:26 np0005539552 python3.9[149749]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Nov 29 02:30:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:26.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:26 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:30:27 np0005539552 python3.9[149903]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 02:30:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:30:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:27.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:30:28 np0005539552 python3.9[150061]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-2 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 02:30:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:28.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:29.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:30.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:30 np0005539552 python3.9[150223]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:30:31 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:30:31 np0005539552 python3.9[150307]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:30:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:31.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:32.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:34.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:34.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:36.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:36.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:36 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:30:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:38.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:38.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:40.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:40.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:41 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:30:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:42.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:30:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:42.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:30:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:44.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:44 np0005539552 ceph-mgr[77480]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1950343944
Nov 29 02:30:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:44.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:44 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:30:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:46.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:46.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:30:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:48.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:48.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:50 np0005539552 podman[150509]: 2025-11-29 07:30:50.015958614 +0000 UTC m=+0.106391043 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 02:30:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:30:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:50.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:30:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:50.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:52.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:30:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:30:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:52.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:30:53 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:30:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:30:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:54.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:30:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:54.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:55 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:30:55 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:30:55 np0005539552 podman[150531]: 2025-11-29 07:30:55.276913678 +0000 UTC m=+0.211591252 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:30:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:56.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:56.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:57 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:30:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:58.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:30:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:58.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:00.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:00.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:02.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:31:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:02.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:04.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:04.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:06.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:06.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:07 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:31:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:08.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:08.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:10.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:10 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:31:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:10.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:12 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:31:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:12.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:31:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:12.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:14.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:14.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:16.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:16.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:31:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:18.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:18.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:20.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:31:20.574 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:31:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:31:20.575 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:31:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:31:20.575 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:31:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:20.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:21 np0005539552 podman[150790]: 2025-11-29 07:31:21.027899851 +0000 UTC m=+0.086591710 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 02:31:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:22.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:31:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:22.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:24.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:24.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:26.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:26 np0005539552 podman[150866]: 2025-11-29 07:31:26.088699581 +0000 UTC m=+0.117157143 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 02:31:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:26.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:31:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:28.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:28.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:30.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:30.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:32.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:32.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:34.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:34.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:35 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:31:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:36.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:36.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:38.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:38.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:40.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:40.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:31:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:42.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:31:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:42.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:31:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:44.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:44.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:46.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:46.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:48.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:48 np0005539552 kernel: SELinux:  Converting 2770 SID table entries...
Nov 29 02:31:48 np0005539552 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 02:31:48 np0005539552 kernel: SELinux:  policy capability open_perms=1
Nov 29 02:31:48 np0005539552 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 02:31:48 np0005539552 kernel: SELinux:  policy capability always_check_network=0
Nov 29 02:31:48 np0005539552 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 02:31:48 np0005539552 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 02:31:48 np0005539552 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 02:31:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:31:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:48.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:50 np0005539552 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Nov 29 02:31:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:50.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:50.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:52 np0005539552 podman[151019]: 2025-11-29 07:31:52.012277471 +0000 UTC m=+0.072372068 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 02:31:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:52.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:52 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:31:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:52.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:31:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:54.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:54.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:55 np0005539552 ceph-mon[77121]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:31:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:56.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:56 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:31:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:56.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:57 np0005539552 podman[151041]: 2025-11-29 07:31:57.029324455 +0000 UTC m=+0.108158640 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Nov 29 02:31:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:58.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:31:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:59.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:32:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:00.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:00 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:32:00 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:32:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:01.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:01 np0005539552 kernel: SELinux:  Converting 2770 SID table entries...
Nov 29 02:32:01 np0005539552 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 02:32:01 np0005539552 kernel: SELinux:  policy capability open_perms=1
Nov 29 02:32:01 np0005539552 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 02:32:01 np0005539552 kernel: SELinux:  policy capability always_check_network=0
Nov 29 02:32:01 np0005539552 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 02:32:01 np0005539552 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 02:32:01 np0005539552 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 02:32:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:02.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:32:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:03.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:32:03 np0005539552 ceph-mon[77121]: mon.compute-0 calling monitor election
Nov 29 02:32:03 np0005539552 ceph-mon[77121]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:32:03 np0005539552 ceph-mon[77121]: overall HEALTH_OK
Nov 29 02:32:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:32:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 02:32:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:04.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 02:32:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:05.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:06.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:07.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:08.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:32:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:09.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:10 np0005539552 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Nov 29 02:32:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:10.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:11.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:12.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:32:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:13.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:32:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:32:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:14.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:15.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:32:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:16.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:32:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:32:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:17.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:32:17 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:32:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:18.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:32:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:19.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:20.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:32:20.576 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:32:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:32:20.577 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:32:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:32:20.577 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:32:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:21.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:22.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:23 np0005539552 podman[156472]: 2025-11-29 07:32:23.004494661 +0000 UTC m=+0.063604362 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:32:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:23.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:32:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:24.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:24 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:32:24 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:32:24 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:32:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:25.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:32:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:26.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:32:26 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:32:26 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:32:26 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:32:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:27.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:28 np0005539552 podman[159319]: 2025-11-29 07:32:28.049944394 +0000 UTC m=+0.138506848 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 02:32:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:32:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:28.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:32:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:32:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:32:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:29.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:32:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:30.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:32:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:31.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:32:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:32:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:32.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:32:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:33.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:32:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:34.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:35.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:36.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:37.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:32:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:38.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:32:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:32:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:32:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:39.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:32:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:40.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:40 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:32:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:41.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:32:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:42.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:32:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:43.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:32:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:44.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:45.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:46.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:32:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:47.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:32:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:48.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:32:48 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:32:48 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:32:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:32:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:49.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:32:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:50.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:51.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:52.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:52 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0.
Nov 29 02:32:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:32:52.959883) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:32:52 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22
Nov 29 02:32:52 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401572960553, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2400, "num_deletes": 250, "total_data_size": 6223872, "memory_usage": 6317040, "flush_reason": "Manual Compaction"}
Nov 29 02:32:52 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started
Nov 29 02:32:53 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401573006566, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 4036729, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10750, "largest_seqno": 13145, "table_properties": {"data_size": 4026709, "index_size": 6386, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20264, "raw_average_key_size": 19, "raw_value_size": 4006458, "raw_average_value_size": 3927, "num_data_blocks": 283, "num_entries": 1020, "num_filter_entries": 1020, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401182, "oldest_key_time": 1764401182, "file_creation_time": 1764401572, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:32:53 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 46755 microseconds, and 9223 cpu microseconds.
Nov 29 02:32:53 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:32:53 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:32:53.006648) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 4036729 bytes OK
Nov 29 02:32:53 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:32:53.006670) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started
Nov 29 02:32:53 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:32:53.008490) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done
Nov 29 02:32:53 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:32:53.008505) EVENT_LOG_v1 {"time_micros": 1764401573008500, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:32:53 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:32:53.008522) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:32:53 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 6213242, prev total WAL file size 6213242, number of live WAL files 2.
Nov 29 02:32:53 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:32:53 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:32:53.010005) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Nov 29 02:32:53 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:32:53 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(3942KB)], [21(9360KB)]
Nov 29 02:32:53 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401573010045, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 13621686, "oldest_snapshot_seqno": -1}
Nov 29 02:32:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:53.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:53 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 4520 keys, 13069317 bytes, temperature: kUnknown
Nov 29 02:32:53 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401573097520, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 13069317, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13031702, "index_size": 25217, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11333, "raw_key_size": 111819, "raw_average_key_size": 24, "raw_value_size": 12942679, "raw_average_value_size": 2863, "num_data_blocks": 1079, "num_entries": 4520, "num_filter_entries": 4520, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764401573, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:32:53 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:32:53 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:32:53.097913) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 13069317 bytes
Nov 29 02:32:53 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:32:53.099422) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.3 rd, 149.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 9.1 +0.0 blob) out(12.5 +0.0 blob), read-write-amplify(6.6) write-amplify(3.2) OK, records in: 5040, records dropped: 520 output_compression: NoCompression
Nov 29 02:32:53 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:32:53.099485) EVENT_LOG_v1 {"time_micros": 1764401573099436, "job": 10, "event": "compaction_finished", "compaction_time_micros": 87712, "compaction_time_cpu_micros": 28770, "output_level": 6, "num_output_files": 1, "total_output_size": 13069317, "num_input_records": 5040, "num_output_records": 4520, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:32:53 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:32:53 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401573100580, "job": 10, "event": "table_file_deletion", "file_number": 23}
Nov 29 02:32:53 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:32:53 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401573102280, "job": 10, "event": "table_file_deletion", "file_number": 21}
Nov 29 02:32:53 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:32:53.009959) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:32:53 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:32:53.102381) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:32:53 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:32:53.102385) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:32:53 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:32:53.102386) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:32:53 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:32:53.102388) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:32:53 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:32:53.102389) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:32:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:32:54 np0005539552 podman[168281]: 2025-11-29 07:32:54.021936898 +0000 UTC m=+0.091123886 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 29 02:32:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:54.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:55.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:56.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:57.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:58.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:32:59 np0005539552 podman[168304]: 2025-11-29 07:32:59.039949208 +0000 UTC m=+0.132444434 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 02:32:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:32:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:32:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:59.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:00.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:00 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:33:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:01.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:01 np0005539552 kernel: SELinux:  Converting 2771 SID table entries...
Nov 29 02:33:01 np0005539552 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 02:33:01 np0005539552 kernel: SELinux:  policy capability open_perms=1
Nov 29 02:33:01 np0005539552 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 02:33:01 np0005539552 kernel: SELinux:  policy capability always_check_network=0
Nov 29 02:33:01 np0005539552 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 02:33:01 np0005539552 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 02:33:01 np0005539552 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 02:33:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:02.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).paxos(paxos updating c 503..1238) lease_timeout -- calling new election
Nov 29 02:33:02 np0005539552 ceph-mon[77121]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Nov 29 02:33:02 np0005539552 ceph-mon[77121]: paxos.1).electionLogic(40) init, last seen epoch 40
Nov 29 02:33:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:33:03 np0005539552 dbus-broker-launch[769]: Noticed file-system modification, trigger reload.
Nov 29 02:33:03 np0005539552 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Nov 29 02:33:03 np0005539552 dbus-broker-launch[769]: Noticed file-system modification, trigger reload.
Nov 29 02:33:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:03.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:33:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:04.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:04 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0.
Nov 29 02:33:04 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:33:04 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:33:04.512781) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:33:04 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25
Nov 29 02:33:04 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401584512876, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 273, "num_deletes": 250, "total_data_size": 71358, "memory_usage": 78168, "flush_reason": "Manual Compaction"}
Nov 29 02:33:04 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started
Nov 29 02:33:04 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:33:04 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401584691747, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 47119, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13147, "largest_seqno": 13418, "table_properties": {"data_size": 45261, "index_size": 87, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 3829, "raw_average_key_size": 14, "raw_value_size": 41670, "raw_average_value_size": 156, "num_data_blocks": 4, "num_entries": 267, "num_filter_entries": 267, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401574, "oldest_key_time": 1764401574, "file_creation_time": 1764401584, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:33:04 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 179034 microseconds, and 1577 cpu microseconds.
Nov 29 02:33:04 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:33:04 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:33:04.691822) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 47119 bytes OK
Nov 29 02:33:04 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:33:04.691847) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started
Nov 29 02:33:04 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:33:04.726586) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done
Nov 29 02:33:04 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:33:04.726676) EVENT_LOG_v1 {"time_micros": 1764401584726665, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:33:04 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:33:04.726731) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:33:04 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 69281, prev total WAL file size 69281, number of live WAL files 2.
Nov 29 02:33:04 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:33:04 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:33:04.727273) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Nov 29 02:33:04 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:33:04 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(46KB)], [24(12MB)]
Nov 29 02:33:04 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401584727316, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 13116436, "oldest_snapshot_seqno": -1}
Nov 29 02:33:04 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 4527 keys, 13110864 bytes, temperature: kUnknown
Nov 29 02:33:04 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401584878310, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 13110864, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13073201, "index_size": 25265, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11333, "raw_key_size": 111986, "raw_average_key_size": 24, "raw_value_size": 12984037, "raw_average_value_size": 2868, "num_data_blocks": 1081, "num_entries": 4527, "num_filter_entries": 4527, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764401584, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:33:04 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:33:04 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:33:04.878612) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 13110864 bytes
Nov 29 02:33:04 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:33:04.883889) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 86.8 rd, 86.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 12.5 +0.0 blob) out(12.5 +0.0 blob), read-write-amplify(556.6) write-amplify(278.3) OK, records in: 4787, records dropped: 260 output_compression: NoCompression
Nov 29 02:33:04 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:33:04.883941) EVENT_LOG_v1 {"time_micros": 1764401584883919, "job": 12, "event": "compaction_finished", "compaction_time_micros": 151101, "compaction_time_cpu_micros": 41047, "output_level": 6, "num_output_files": 1, "total_output_size": 13110864, "num_input_records": 4787, "num_output_records": 4527, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:33:04 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:33:04 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401584884213, "job": 12, "event": "table_file_deletion", "file_number": 26}
Nov 29 02:33:04 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:33:04 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401584886796, "job": 12, "event": "table_file_deletion", "file_number": 24}
Nov 29 02:33:04 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:33:04.727210) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:33:04 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:33:04.886831) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:33:04 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:33:04.886835) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:33:04 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:33:04.886836) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:33:04 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:33:04.886838) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:33:04 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:33:04.886840) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:33:05 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:33:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:05.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:06.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:07.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:08.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:08 np0005539552 ceph-mon[77121]: mon.compute-2 calling monitor election
Nov 29 02:33:08 np0005539552 ceph-mon[77121]: mon.compute-0 calling monitor election
Nov 29 02:33:08 np0005539552 ceph-mon[77121]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:33:08 np0005539552 ceph-mon[77121]: overall HEALTH_OK
Nov 29 02:33:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:33:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:09.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:10.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:11.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:12.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:13.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:33:13 np0005539552 systemd[1]: Stopping OpenSSH server daemon...
Nov 29 02:33:13 np0005539552 systemd[1]: sshd.service: Deactivated successfully.
Nov 29 02:33:13 np0005539552 systemd[1]: Stopped OpenSSH server daemon.
Nov 29 02:33:13 np0005539552 systemd[1]: sshd.service: Consumed 3.502s CPU time, read 564.0K from disk, written 88.0K to disk.
Nov 29 02:33:13 np0005539552 systemd[1]: Stopped target sshd-keygen.target.
Nov 29 02:33:13 np0005539552 systemd[1]: Stopping sshd-keygen.target...
Nov 29 02:33:13 np0005539552 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 02:33:13 np0005539552 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 02:33:13 np0005539552 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 02:33:13 np0005539552 systemd[1]: Reached target sshd-keygen.target.
Nov 29 02:33:13 np0005539552 systemd[1]: Starting OpenSSH server daemon...
Nov 29 02:33:13 np0005539552 systemd[1]: Started OpenSSH server daemon.
Nov 29 02:33:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:14.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:15.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:15 np0005539552 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 02:33:15 np0005539552 systemd[1]: Starting man-db-cache-update.service...
Nov 29 02:33:15 np0005539552 systemd[1]: Reloading.
Nov 29 02:33:15 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:33:15 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:33:15 np0005539552 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 02:33:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:16.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:17.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:18.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:33:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:19.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:20.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:33:20.577 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:33:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:33:20.579 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:33:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:33:20.579 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:33:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:21.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:22.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:23.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:33:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:24.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:24 np0005539552 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 02:33:24 np0005539552 systemd[1]: Finished man-db-cache-update.service.
Nov 29 02:33:24 np0005539552 systemd[1]: man-db-cache-update.service: Consumed 11.061s CPU time.
Nov 29 02:33:24 np0005539552 systemd[1]: run-rd842aa02ee7142128967f4a244d82dec.service: Deactivated successfully.
Nov 29 02:33:24 np0005539552 podman[177936]: 2025-11-29 07:33:24.753696033 +0000 UTC m=+0.060525169 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 02:33:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:33:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:25.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:33:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:26.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:27.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:28.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:33:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:29.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:29 np0005539552 podman[177957]: 2025-11-29 07:33:29.99408999 +0000 UTC m=+0.080049177 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Nov 29 02:33:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:30.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:31.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:32.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:32 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:33:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:33.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:33:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:34.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:35.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:36.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:36 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:33:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:37.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).paxos(paxos updating c 503..1253) lease_timeout -- calling new election
Nov 29 02:33:37 np0005539552 ceph-mon[77121]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Nov 29 02:33:37 np0005539552 ceph-mon[77121]: paxos.1).electionLogic(44) init, last seen epoch 44
Nov 29 02:33:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:33:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:33:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:38.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:39.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:40.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:40 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:33:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:41.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:42.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:33:42 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0.
Nov 29 02:33:42 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:33:42.411832) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:33:42 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28
Nov 29 02:33:42 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401622411933, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 420, "num_deletes": 251, "total_data_size": 466408, "memory_usage": 475864, "flush_reason": "Manual Compaction"}
Nov 29 02:33:42 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started
Nov 29 02:33:42 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401622416851, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 293355, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13424, "largest_seqno": 13838, "table_properties": {"data_size": 290968, "index_size": 487, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5882, "raw_average_key_size": 18, "raw_value_size": 286135, "raw_average_value_size": 891, "num_data_blocks": 22, "num_entries": 321, "num_filter_entries": 321, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401585, "oldest_key_time": 1764401585, "file_creation_time": 1764401622, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:33:42 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 5065 microseconds, and 2228 cpu microseconds.
Nov 29 02:33:42 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:33:42 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:33:42.416904) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 293355 bytes OK
Nov 29 02:33:42 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:33:42.416928) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started
Nov 29 02:33:42 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:33:42.418461) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done
Nov 29 02:33:42 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:33:42.418491) EVENT_LOG_v1 {"time_micros": 1764401622418483, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:33:42 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:33:42.418510) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:33:42 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 463757, prev total WAL file size 463757, number of live WAL files 2.
Nov 29 02:33:42 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:33:42 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:33:42.419115) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Nov 29 02:33:42 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:33:42 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(286KB)], [27(12MB)]
Nov 29 02:33:42 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401622419154, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 13404219, "oldest_snapshot_seqno": -1}
Nov 29 02:33:42 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 4329 keys, 10268345 bytes, temperature: kUnknown
Nov 29 02:33:42 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401622498567, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 10268345, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10234626, "index_size": 21769, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10885, "raw_key_size": 108890, "raw_average_key_size": 25, "raw_value_size": 10151511, "raw_average_value_size": 2345, "num_data_blocks": 916, "num_entries": 4329, "num_filter_entries": 4329, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764401622, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:33:42 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:33:42 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:33:42.498882) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 10268345 bytes
Nov 29 02:33:42 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:33:42.500150) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 168.5 rd, 129.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 12.5 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(80.7) write-amplify(35.0) OK, records in: 4848, records dropped: 519 output_compression: NoCompression
Nov 29 02:33:42 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:33:42.500229) EVENT_LOG_v1 {"time_micros": 1764401622500165, "job": 14, "event": "compaction_finished", "compaction_time_micros": 79543, "compaction_time_cpu_micros": 34173, "output_level": 6, "num_output_files": 1, "total_output_size": 10268345, "num_input_records": 4848, "num_output_records": 4329, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:33:42 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:33:42 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401622501118, "job": 14, "event": "table_file_deletion", "file_number": 29}
Nov 29 02:33:42 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:33:42 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401622504028, "job": 14, "event": "table_file_deletion", "file_number": 27}
Nov 29 02:33:42 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:33:42.419034) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:33:42 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:33:42.504118) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:33:42 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:33:42.504125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:33:42 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:33:42.504127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:33:42 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:33:42.504128) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:33:42 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:33:42.504130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:33:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:43.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:43 np0005539552 ceph-mon[77121]: mon.compute-2 calling monitor election
Nov 29 02:33:43 np0005539552 ceph-mon[77121]: mon.compute-0 calling monitor election
Nov 29 02:33:43 np0005539552 ceph-mon[77121]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 29 02:33:43 np0005539552 ceph-mon[77121]: Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Nov 29 02:33:43 np0005539552 ceph-mon[77121]: Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2
Nov 29 02:33:43 np0005539552 ceph-mon[77121]: [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Nov 29 02:33:43 np0005539552 ceph-mon[77121]:    mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Nov 29 02:33:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:33:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:44.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:45.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:45 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:33:45 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:33:45 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:33:45 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:33:45 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:33:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:46.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:47.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:48.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:33:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:49.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:50.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:51.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:52.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:52 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:33:52 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:33:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:53.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:33:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:54.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:54 np0005539552 podman[178282]: 2025-11-29 07:33:54.967147288 +0000 UTC m=+0.057243429 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:33:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:55.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:56.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:56 np0005539552 ceph-mon[77121]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Nov 29 02:33:56 np0005539552 ceph-mon[77121]: paxos.1).electionLogic(49) init, last seen epoch 49, mid-election, bumping
Nov 29 02:33:56 np0005539552 ceph-mon[77121]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:33:56 np0005539552 ceph-mon[77121]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:33:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:57.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:57 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:33:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:58.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:33:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:33:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:59.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:00.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:01 np0005539552 podman[178305]: 2025-11-29 07:34:01.024597878 +0000 UTC m=+0.113637295 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:34:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:01.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:02.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:34:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:03.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:04.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:04 np0005539552 ceph-mon[77121]: mon.compute-0 calling monitor election
Nov 29 02:34:04 np0005539552 ceph-mon[77121]: mon.compute-2 calling monitor election
Nov 29 02:34:04 np0005539552 ceph-mon[77121]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:34:04 np0005539552 ceph-mon[77121]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Nov 29 02:34:04 np0005539552 ceph-mon[77121]: Cluster is now healthy
Nov 29 02:34:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:05.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:06.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:07.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:08.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:34:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:09.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:10.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:11 np0005539552 ceph-mon[77121]: overall HEALTH_OK
Nov 29 02:34:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:11.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:12.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:34:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:13.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:14.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:15.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:16.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:16 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:34:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:17.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:18.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:34:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:19.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:20.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:20 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:34:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:34:20.579 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:34:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:34:20.580 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:34:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:34:20.580 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:34:20 np0005539552 ceph-mon[77121]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:34:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:21.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:22.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:23.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:24.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:24 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:34:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:25.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:25 np0005539552 podman[178393]: 2025-11-29 07:34:25.9545345 +0000 UTC m=+0.049312095 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent)
Nov 29 02:34:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:26.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:26 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:34:27 np0005539552 ceph-mon[77121]: mon.compute-0 calling monitor election
Nov 29 02:34:27 np0005539552 ceph-mon[77121]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 29 02:34:27 np0005539552 ceph-mon[77121]: Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Nov 29 02:34:27 np0005539552 ceph-mon[77121]: Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2
Nov 29 02:34:27 np0005539552 ceph-mon[77121]: [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Nov 29 02:34:27 np0005539552 ceph-mon[77121]:    mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Nov 29 02:34:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:27.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:28.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:34:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:29.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:30.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:31 np0005539552 podman[178440]: 2025-11-29 07:34:31.435385145 +0000 UTC m=+0.079382991 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 02:34:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:31.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:32.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:34:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:33.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:34.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:35.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:36.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:37.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:38.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:34:39 np0005539552 systemd[1]: session-49.scope: Deactivated successfully.
Nov 29 02:34:39 np0005539552 systemd[1]: session-49.scope: Consumed 2min 9.014s CPU time.
Nov 29 02:34:39 np0005539552 systemd-logind[788]: Session 49 logged out. Waiting for processes to exit.
Nov 29 02:34:39 np0005539552 systemd-logind[788]: Removed session 49.
Nov 29 02:34:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:39.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:40.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:34:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:41.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:34:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:42.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:34:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:43.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:44.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:45.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:46.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:47.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:48.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:34:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:49.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:50.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:51.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:52.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:34:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:53.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:54.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:55 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 02:34:55 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:34:55 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:34:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:55.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:56.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:56 np0005539552 podman[178806]: 2025-11-29 07:34:56.985943914 +0000 UTC m=+0.069919508 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:34:57 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:34:57 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:34:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:57.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:58.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:58 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 02:34:58 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:34:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:34:59 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:34:59 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:34:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:34:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:59.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:35:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:00.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:35:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:35:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:01.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:35:02 np0005539552 podman[178828]: 2025-11-29 07:35:02.02833701 +0000 UTC m=+0.113103161 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 02:35:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:02.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:35:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:35:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:03.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:35:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:04.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:05.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:06.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:07.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:08.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:08 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:35:08 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 2405 writes, 14K keys, 2405 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.03 MB/s#012Cumulative WAL: 2405 writes, 2405 syncs, 1.00 writes per sync, written: 0.03 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 886 writes, 4552 keys, 886 commit groups, 1.0 writes per commit group, ingest: 9.99 MB, 0.02 MB/s#012Interval WAL: 886 writes, 886 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     15.8      1.07              0.05         7    0.154       0      0       0.0       0.0#012  L6      1/0    9.79 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.7     57.7     51.7      1.22              0.18         6    0.203     27K   2543       0.0       0.0#012 Sum      1/0    9.79 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.7     30.6     34.9      2.29              0.22        13    0.176     27K   2543       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   8.6     47.8     48.3      1.03              0.15         8    0.129     19K   1821       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     57.7     51.7      1.22              0.18         6    0.203     27K   2543       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     16.5      1.03              0.05         6    0.172       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.044       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.017, interval 0.006#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.08 GB write, 0.07 MB/s write, 0.07 GB read, 0.06 MB/s read, 2.3 seconds#012Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 1.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560bf13bd1f0#2 capacity: 304.00 MB usage: 2.09 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 5.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(98,1.83 MB,0.601056%) FilterBlock(13,84.11 KB,0.0270191%) IndexBlock(13,181.64 KB,0.0583498%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 02:35:08 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:35:08 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:35:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:35:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:09.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:10 np0005539552 ceph-mon[77121]: Health check failed: 13 slow ops, oldest one blocked for 60 sec, mon.compute-1 has slow ops (SLOW_OPS)
Nov 29 02:35:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:35:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:10.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:35:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:35:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:11.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:35:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:35:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:12.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:35:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:35:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:13.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:14.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:15.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:16 np0005539552 ceph-mon[77121]: Health check cleared: SLOW_OPS (was: 13 slow ops, oldest one blocked for 60 sec, mon.compute-1 has slow ops)
Nov 29 02:35:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:16.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:17.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:18.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:18 np0005539552 ceph-mon[77121]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Nov 29 02:35:18 np0005539552 ceph-mon[77121]: paxos.1).electionLogic(55) init, last seen epoch 55, mid-election, bumping
Nov 29 02:35:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:35:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:35:19 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:35:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:19.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:20.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:35:20.581 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:35:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:35:20.581 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:35:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:35:20.581 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:35:20 np0005539552 ceph-mon[77121]: mon.compute-2 calling monitor election
Nov 29 02:35:20 np0005539552 ceph-mon[77121]: mon.compute-0 calling monitor election
Nov 29 02:35:20 np0005539552 ceph-mon[77121]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:35:20 np0005539552 ceph-mon[77121]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Nov 29 02:35:20 np0005539552 ceph-mon[77121]: Cluster is now healthy
Nov 29 02:35:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:21.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:22.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:35:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:35:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:23.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:35:24 np0005539552 ceph-mon[77121]: overall HEALTH_OK
Nov 29 02:35:24 np0005539552 ceph-mon[77121]: Health check failed: 16 slow ops, oldest one blocked for 70 sec, mon.compute-1 has slow ops (SLOW_OPS)
Nov 29 02:35:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:24.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:25.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:26.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:27 np0005539552 ceph-mon[77121]: Health check cleared: SLOW_OPS (was: 16 slow ops, oldest one blocked for 70 sec, mon.compute-1 has slow ops)
Nov 29 02:35:27 np0005539552 ceph-mon[77121]: Cluster is now healthy
Nov 29 02:35:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:27.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:28 np0005539552 podman[178969]: 2025-11-29 07:35:28.01074664 +0000 UTC m=+0.084899398 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:35:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:28.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:35:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:29.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:30.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:31.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:35:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:32.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:35:33 np0005539552 podman[179042]: 2025-11-29 07:35:33.002437411 +0000 UTC m=+0.096690752 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller)
Nov 29 02:35:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:35:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:33.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:34.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:35.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:35:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:36.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:35:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:37.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:35:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:38.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:35:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:35:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:35:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:39.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:35:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:40.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:41.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:42.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:35:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:43.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:44.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:45.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:46.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:35:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:47.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:35:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:48.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:35:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:35:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:49.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:35:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:50.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:51 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:35:51 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.3 total, 600.0 interval#012Cumulative writes: 5586 writes, 23K keys, 5586 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5586 writes, 914 syncs, 6.11 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 374 writes, 595 keys, 374 commit groups, 1.0 writes per commit group, ingest: 0.19 MB, 0.00 MB/s#012Interval WAL: 374 writes, 167 syncs, 2.24 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55cbab63d350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55cbab63d350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Nov 29 02:35:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:51.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:52.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:35:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:53.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:54.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:55.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:35:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:56.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:35:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:57.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:35:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:58.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:35:58 np0005539552 podman[179131]: 2025-11-29 07:35:58.983471935 +0000 UTC m=+0.072392905 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Nov 29 02:36:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:36:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:00.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:36:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:36:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:00.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:36:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:36:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:02.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:36:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:02.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:36:03 np0005539552 podman[179153]: 2025-11-29 07:36:03.974995203 +0000 UTC m=+0.069753424 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 29 02:36:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:04.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:04.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:36:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:06.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:36:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:06.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:08.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:36:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:08.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:36:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:36:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:10.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:10.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:10 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:36:11 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:36:11 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:36:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:12.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:12.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:36:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:36:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:14.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:36:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:14.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:14 np0005539552 systemd-logind[788]: New session 50 of user zuul.
Nov 29 02:36:14 np0005539552 systemd[1]: Started Session 50 of User zuul.
Nov 29 02:36:15 np0005539552 python3.9[179498]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 02:36:15 np0005539552 systemd[1]: Reloading.
Nov 29 02:36:15 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:36:15 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:36:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:16.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:16.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:16 np0005539552 python3.9[179690]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 02:36:16 np0005539552 systemd[1]: Reloading.
Nov 29 02:36:16 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:36:16 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:36:17 np0005539552 python3.9[179880]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 02:36:18 np0005539552 systemd[1]: Reloading.
Nov 29 02:36:18 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:36:18 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:36:18 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:36:18 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:36:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:18.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:18.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:36:19 np0005539552 python3.9[180120]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 02:36:19 np0005539552 systemd[1]: Reloading.
Nov 29 02:36:19 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:36:19 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:36:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:20.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:20.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:36:20.582 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:36:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:36:20.583 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:36:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:36:20.583 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:36:21 np0005539552 python3.9[180312]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:21 np0005539552 systemd[1]: Reloading.
Nov 29 02:36:22 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:36:22 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:36:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:22.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:22.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:23 np0005539552 python3.9[180503]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:23 np0005539552 systemd[1]: Reloading.
Nov 29 02:36:23 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:36:23 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:36:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:36:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:24.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:24 np0005539552 python3.9[180693]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:24.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:24 np0005539552 systemd[1]: Reloading.
Nov 29 02:36:24 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:36:24 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:36:25 np0005539552 python3.9[180883]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:26.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:26.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:26 np0005539552 python3.9[181038]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:26 np0005539552 systemd[1]: Reloading.
Nov 29 02:36:26 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:36:26 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:36:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:28.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:28.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:36:28 np0005539552 python3.9[181230]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 02:36:28 np0005539552 systemd[1]: Reloading.
Nov 29 02:36:29 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:36:29 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:36:29 np0005539552 systemd[1]: Listening on libvirt proxy daemon socket.
Nov 29 02:36:29 np0005539552 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Nov 29 02:36:29 np0005539552 podman[181270]: 2025-11-29 07:36:29.325550692 +0000 UTC m=+0.076445963 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 02:36:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:30.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:36:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:30.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:36:30 np0005539552 python3.9[181443]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:32 np0005539552 python3.9[181599]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:32.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:36:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:32.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:36:33 np0005539552 python3.9[181805]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:36:34 np0005539552 python3.9[181960]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:34 np0005539552 podman[181962]: 2025-11-29 07:36:34.186193424 +0000 UTC m=+0.114532359 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller)
Nov 29 02:36:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:36:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:34.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:36:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:34.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:34 np0005539552 python3.9[182143]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:35 np0005539552 python3.9[182298]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:36.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:36.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:36 np0005539552 python3.9[182453]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:37 np0005539552 python3.9[182609]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:38.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:38.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:36:38 np0005539552 python3.9[182764]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:39 np0005539552 python3.9[182920]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:40.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:40 np0005539552 python3.9[183075]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:40.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:41 np0005539552 python3.9[183231]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:42 np0005539552 python3.9[183386]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:42.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:42.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:43 np0005539552 python3.9[183542]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:36:44 np0005539552 python3.9[183697]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:36:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:36:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:44.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:36:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:44.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:44 np0005539552 python3.9[183850]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:36:45 np0005539552 python3.9[184002]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:36:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:46.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:46 np0005539552 python3.9[184154]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:36:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:46.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:47 np0005539552 python3.9[184307]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:36:47 np0005539552 python3.9[184459]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:36:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:48.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:36:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:48.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:36:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:36:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:50.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:50.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:51 np0005539552 python3.9[184613]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:36:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:52.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:52.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:52 np0005539552 python3.9[184761]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401811.0105712-1636-181088394238886/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:36:53 np0005539552 python3.9[184941]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:36:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:36:53 np0005539552 python3.9[185066]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401812.728481-1636-93715430698077/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:36:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:54.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:54.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:54 np0005539552 python3.9[185218]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:36:55 np0005539552 python3.9[185344]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401813.9480822-1636-170140443404660/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:36:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:56.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:56.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:56 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:36:56 np0005539552 python3.9[185497]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:36:57 np0005539552 python3.9[185622]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401816.154184-1636-209786144384892/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:36:58 np0005539552 python3.9[185774]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:36:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:58.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:36:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:58.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:36:58 np0005539552 python3.9[185900]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401817.4957857-1636-192088097786084/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:36:59 np0005539552 python3.9[186052]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:36:59 np0005539552 podman[186055]: 2025-11-29 07:36:59.540913826 +0000 UTC m=+0.064721832 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 02:36:59 np0005539552 python3.9[186196]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401818.899144-1636-258087930130413/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:00.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:37:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:00.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:37:00 np0005539552 python3.9[186348]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:01 np0005539552 python3.9[186472]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401820.1176732-1636-222347946487406/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:01 np0005539552 python3.9[186624]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:37:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:02.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:37:02 np0005539552 python3.9[186749]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401821.2928615-1636-114077036058096/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:37:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:02.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:37:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:37:04 np0005539552 podman[186874]: 2025-11-29 07:37:04.343554117 +0000 UTC m=+0.079237654 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 02:37:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:37:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:04.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:37:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:37:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:04.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:37:04 np0005539552 python3.9[186923]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Nov 29 02:37:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:06.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:37:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:06.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:37:07 np0005539552 python3.9[187084]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:07 np0005539552 python3.9[187236]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.002000052s ======
Nov 29 02:37:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:08.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Nov 29 02:37:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:08.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:37:08 np0005539552 python3.9[187389]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:09 np0005539552 python3.9[187541]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:09 np0005539552 python3.9[187693]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:10.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:10.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:10 np0005539552 python3.9[187846]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:11 np0005539552 python3.9[187998]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:12 np0005539552 python3.9[188150]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:37:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:12.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:37:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:12.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:12 np0005539552 python3.9[188326]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:13 np0005539552 python3.9[188505]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:37:13 np0005539552 python3.9[188657]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:14.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:14.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:14 np0005539552 python3.9[188809]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:14 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0.
Nov 29 02:37:14 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:37:14.770376) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:37:14 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31
Nov 29 02:37:14 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401834770489, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1940, "num_deletes": 252, "total_data_size": 5422009, "memory_usage": 5498992, "flush_reason": "Manual Compaction"}
Nov 29 02:37:14 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started
Nov 29 02:37:14 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401834785383, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 2346669, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13843, "largest_seqno": 15778, "table_properties": {"data_size": 2340477, "index_size": 3135, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16348, "raw_average_key_size": 21, "raw_value_size": 2326657, "raw_average_value_size": 3021, "num_data_blocks": 140, "num_entries": 770, "num_filter_entries": 770, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401623, "oldest_key_time": 1764401623, "file_creation_time": 1764401834, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:37:14 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 15113 microseconds, and 6512 cpu microseconds.
Nov 29 02:37:14 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:37:14 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:37:14.785495) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 2346669 bytes OK
Nov 29 02:37:14 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:37:14.785519) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started
Nov 29 02:37:14 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:37:14.786864) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done
Nov 29 02:37:14 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:37:14.786883) EVENT_LOG_v1 {"time_micros": 1764401834786877, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:37:14 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:37:14.786903) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:37:14 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 5413265, prev total WAL file size 5413265, number of live WAL files 2.
Nov 29 02:37:14 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:37:14 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:37:14.788374) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353032' seq:0, type:0; will stop at (end)
Nov 29 02:37:14 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:37:14 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(2291KB)], [30(10027KB)]
Nov 29 02:37:14 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401834788420, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 12615014, "oldest_snapshot_seqno": -1}
Nov 29 02:37:14 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 4645 keys, 9906978 bytes, temperature: kUnknown
Nov 29 02:37:14 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401834845524, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 9906978, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9873448, "index_size": 20812, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11653, "raw_key_size": 116205, "raw_average_key_size": 25, "raw_value_size": 9786956, "raw_average_value_size": 2106, "num_data_blocks": 876, "num_entries": 4645, "num_filter_entries": 4645, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764401834, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:37:14 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:37:14 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:37:14.845931) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 9906978 bytes
Nov 29 02:37:14 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:37:14.847691) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 220.3 rd, 173.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 9.8 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(9.6) write-amplify(4.2) OK, records in: 5099, records dropped: 454 output_compression: NoCompression
Nov 29 02:37:14 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:37:14.847719) EVENT_LOG_v1 {"time_micros": 1764401834847704, "job": 16, "event": "compaction_finished", "compaction_time_micros": 57256, "compaction_time_cpu_micros": 21907, "output_level": 6, "num_output_files": 1, "total_output_size": 9906978, "num_input_records": 5099, "num_output_records": 4645, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:37:14 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:37:14 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401834848547, "job": 16, "event": "table_file_deletion", "file_number": 32}
Nov 29 02:37:14 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:37:14 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401834851093, "job": 16, "event": "table_file_deletion", "file_number": 30}
Nov 29 02:37:14 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:37:14.788282) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:37:14 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:37:14.851202) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:37:14 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:37:14.851210) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:37:14 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:37:14.851212) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:37:14 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:37:14.851214) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:37:14 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:37:14.851217) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:37:15 np0005539552 python3.9[188962]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:15 np0005539552 python3.9[189114]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:16.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:16.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:17 np0005539552 python3.9[189267]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:17 np0005539552 python3.9[189390]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401836.623859-2298-31989813588024/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:18.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:18 np0005539552 python3.9[189642]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:37:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:18.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:37:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:37:18 np0005539552 podman[189766]: 2025-11-29 07:37:18.707094702 +0000 UTC m=+0.072254201 container exec 2e449e743ed8b622f7d948671af16b72c407124db8f8583030f534a9757aa05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 02:37:18 np0005539552 podman[189766]: 2025-11-29 07:37:18.79831527 +0000 UTC m=+0.163474709 container exec_died 2e449e743ed8b622f7d948671af16b72c407124db8f8583030f534a9757aa05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Nov 29 02:37:18 np0005539552 python3.9[189856]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401837.911483-2298-85013598813444/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:19 np0005539552 podman[190091]: 2025-11-29 07:37:19.386842621 +0000 UTC m=+0.060922903 container exec fbc97e64df6fe35a26c07fd8827c68b935db0d4237087be449d0fd015b40e005 (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-2-efzvmt)
Nov 29 02:37:19 np0005539552 podman[190091]: 2025-11-29 07:37:19.396956387 +0000 UTC m=+0.071036649 container exec_died fbc97e64df6fe35a26c07fd8827c68b935db0d4237087be449d0fd015b40e005 (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-2-efzvmt)
Nov 29 02:37:19 np0005539552 podman[190202]: 2025-11-29 07:37:19.623238465 +0000 UTC m=+0.053265761 container exec 6c20d6486ea31b9565e62504d583e2d8c9512b707e8e2015e3cef66c15f3b3e6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr, release=1793, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, description=keepalived for Ceph, name=keepalived, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, io.openshift.expose-services=, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20)
Nov 29 02:37:19 np0005539552 python3.9[190171]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:19 np0005539552 podman[190202]: 2025-11-29 07:37:19.635058136 +0000 UTC m=+0.065085422 container exec_died 6c20d6486ea31b9565e62504d583e2d8c9512b707e8e2015e3cef66c15f3b3e6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, io.openshift.tags=Ceph keepalived, name=keepalived, vendor=Red Hat, Inc., distribution-scope=public, build-date=2023-02-22T09:23:20, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, version=2.2.4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Nov 29 02:37:20 np0005539552 python3.9[190358]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401839.1242857-2298-62638782720087/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:20.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:37:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:20.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:37:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:37:20.583 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:37:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:37:20.584 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:37:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:37:20.585 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:37:20 np0005539552 python3.9[190611]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:21 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:37:21 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:37:21 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 02:37:21 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:37:21 np0005539552 python3.9[190766]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401840.363979-2298-171785759506475/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:21 np0005539552 python3.9[190918]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:37:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:22.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:37:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:37:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:22.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:37:22 np0005539552 python3.9[191041]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401841.4533691-2298-78648784238909/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:22 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:37:22 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:37:23 np0005539552 python3.9[191194]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:37:23 np0005539552 python3.9[191317]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401842.708652-2298-185286999827276/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:37:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:24.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:37:24 np0005539552 python3.9[191469]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:24.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:24 np0005539552 python3.9[191593]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401843.9508355-2298-119778577493623/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:25 np0005539552 auditd[702]: Audit daemon rotating log files
Nov 29 02:37:25 np0005539552 python3.9[191745]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:26 np0005539552 python3.9[191868]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401845.1244912-2298-62493336081481/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:37:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:26.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:37:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:37:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:26.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:37:26 np0005539552 python3.9[192021]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:27 np0005539552 python3.9[192144]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401846.3401618-2298-199380040274046/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:28 np0005539552 python3.9[192296]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:28.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:28.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:37:28 np0005539552 python3.9[192419]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401847.5610523-2298-50791257802129/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:29 np0005539552 python3.9[192572]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:29 np0005539552 podman[192667]: 2025-11-29 07:37:29.700929284 +0000 UTC m=+0.087822680 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:37:29 np0005539552 python3.9[192714]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401848.7995396-2298-9764840064803/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:30 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:37:30 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:37:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:37:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:30.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:37:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:30.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:30 np0005539552 python3.9[192916]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:30 np0005539552 python3.9[193040]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401850.0280297-2298-114672286826170/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:31 np0005539552 python3.9[193192]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:32 np0005539552 python3.9[193315]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401851.1475387-2298-218849257411048/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:32.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:32.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:32 np0005539552 python3.9[193472]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:33 np0005539552 python3.9[193641]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401852.3172588-2298-53108099632812/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:37:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:34.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:34.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:35 np0005539552 podman[193762]: 2025-11-29 07:37:35.015683786 +0000 UTC m=+0.103357948 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Nov 29 02:37:35 np0005539552 python3.9[193809]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:37:36 np0005539552 python3.9[193972]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Nov 29 02:37:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:36.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:36.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:38.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:38.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:37:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:40.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:37:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:40.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:37:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:37:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:42.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:37:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:42.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:37:44 np0005539552 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Nov 29 02:37:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:44.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:44.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:44 np0005539552 python3.9[194132]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:45 np0005539552 python3.9[194285]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:45 np0005539552 python3.9[194437]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:37:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:46.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:37:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:46.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:46 np0005539552 python3.9[194589]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:47 np0005539552 python3.9[194742]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:48 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:37:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:37:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:48.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:37:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:37:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:48.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:37:48 np0005539552 python3.9[194894]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:37:49 np0005539552 python3.9[195047]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:49 np0005539552 python3.9[195199]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:50.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:37:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:50.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:37:50 np0005539552 python3.9[195352]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:51 np0005539552 python3.9[195504]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:52 np0005539552 python3.9[195656]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:37:52 np0005539552 systemd[1]: Reloading.
Nov 29 02:37:52 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:37:52 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:37:52 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:37:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:37:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:52.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:37:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:37:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:52.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:37:52 np0005539552 systemd[1]: Starting libvirt logging daemon socket...
Nov 29 02:37:52 np0005539552 systemd[1]: Listening on libvirt logging daemon socket.
Nov 29 02:37:52 np0005539552 systemd[1]: Starting libvirt logging daemon admin socket...
Nov 29 02:37:52 np0005539552 systemd[1]: Listening on libvirt logging daemon admin socket.
Nov 29 02:37:52 np0005539552 systemd[1]: Starting libvirt logging daemon...
Nov 29 02:37:52 np0005539552 systemd[1]: Started libvirt logging daemon.
Nov 29 02:37:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).paxos(paxos updating c 754..1469) lease_timeout -- calling new election
Nov 29 02:37:53 np0005539552 ceph-mon[77121]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Nov 29 02:37:53 np0005539552 ceph-mon[77121]: paxos.1).electionLogic(58) init, last seen epoch 58
Nov 29 02:37:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:37:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:37:53 np0005539552 python3.9[195900]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:37:53 np0005539552 systemd[1]: Reloading.
Nov 29 02:37:53 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:37:53 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:37:53 np0005539552 systemd[1]: Starting libvirt nodedev daemon socket...
Nov 29 02:37:53 np0005539552 systemd[1]: Listening on libvirt nodedev daemon socket.
Nov 29 02:37:53 np0005539552 systemd[1]: Starting libvirt nodedev daemon admin socket...
Nov 29 02:37:53 np0005539552 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Nov 29 02:37:53 np0005539552 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Nov 29 02:37:53 np0005539552 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Nov 29 02:37:53 np0005539552 systemd[1]: Starting libvirt nodedev daemon...
Nov 29 02:37:53 np0005539552 systemd[1]: Started libvirt nodedev daemon.
Nov 29 02:37:54 np0005539552 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Nov 29 02:37:54 np0005539552 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Nov 29 02:37:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:54.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:37:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:54.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:37:54 np0005539552 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Nov 29 02:37:54 np0005539552 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Nov 29 02:37:54 np0005539552 python3.9[196117]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:37:54 np0005539552 systemd[1]: Reloading.
Nov 29 02:37:54 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:37:54 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:37:54 np0005539552 systemd[1]: Starting libvirt proxy daemon admin socket...
Nov 29 02:37:55 np0005539552 systemd[1]: Starting libvirt proxy daemon read-only socket...
Nov 29 02:37:55 np0005539552 systemd[1]: Listening on libvirt proxy daemon admin socket.
Nov 29 02:37:55 np0005539552 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Nov 29 02:37:55 np0005539552 systemd[1]: Starting libvirt proxy daemon...
Nov 29 02:37:55 np0005539552 systemd[1]: Started libvirt proxy daemon.
Nov 29 02:37:55 np0005539552 setroubleshoot[196041]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 086043d3-593f-4a07-a851-e754adcd099b
Nov 29 02:37:55 np0005539552 setroubleshoot[196041]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Nov 29 02:37:55 np0005539552 setroubleshoot[196041]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 086043d3-593f-4a07-a851-e754adcd099b
Nov 29 02:37:55 np0005539552 setroubleshoot[196041]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Nov 29 02:37:55 np0005539552 python3.9[196339]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:37:55 np0005539552 systemd[1]: Reloading.
Nov 29 02:37:55 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:37:55 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:37:56 np0005539552 systemd[1]: Listening on libvirt locking daemon socket.
Nov 29 02:37:56 np0005539552 systemd[1]: Starting libvirt QEMU daemon socket...
Nov 29 02:37:56 np0005539552 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 29 02:37:56 np0005539552 systemd[1]: Starting Virtual Machine and Container Registration Service...
Nov 29 02:37:56 np0005539552 systemd[1]: Listening on libvirt QEMU daemon socket.
Nov 29 02:37:56 np0005539552 systemd[1]: Starting libvirt QEMU daemon admin socket...
Nov 29 02:37:56 np0005539552 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Nov 29 02:37:56 np0005539552 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Nov 29 02:37:56 np0005539552 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Nov 29 02:37:56 np0005539552 systemd[1]: Started Virtual Machine and Container Registration Service.
Nov 29 02:37:56 np0005539552 systemd[1]: Starting libvirt QEMU daemon...
Nov 29 02:37:56 np0005539552 systemd[1]: Started libvirt QEMU daemon.
Nov 29 02:37:56 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:37:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:56.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:56.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:57 np0005539552 python3.9[196555]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:37:57 np0005539552 systemd[1]: Reloading.
Nov 29 02:37:57 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:37:57 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:37:57 np0005539552 systemd[1]: Starting libvirt secret daemon socket...
Nov 29 02:37:57 np0005539552 systemd[1]: Listening on libvirt secret daemon socket.
Nov 29 02:37:57 np0005539552 systemd[1]: Starting libvirt secret daemon admin socket...
Nov 29 02:37:57 np0005539552 systemd[1]: Starting libvirt secret daemon read-only socket...
Nov 29 02:37:57 np0005539552 systemd[1]: Listening on libvirt secret daemon admin socket.
Nov 29 02:37:57 np0005539552 systemd[1]: Listening on libvirt secret daemon read-only socket.
Nov 29 02:37:57 np0005539552 systemd[1]: Starting libvirt secret daemon...
Nov 29 02:37:57 np0005539552 systemd[1]: Started libvirt secret daemon.
Nov 29 02:37:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:37:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:58.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:37:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:37:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:58.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:59 np0005539552 ceph-mon[77121]: paxos.1).electionLogic(59) init, last seen epoch 59, mid-election, bumping
Nov 29 02:37:59 np0005539552 ceph-mon[77121]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:37:59 np0005539552 ceph-mon[77121]: mon.compute-2@1(electing) e3 handle_timecheck drop unexpected msg
Nov 29 02:37:59 np0005539552 podman[196641]: 2025-11-29 07:37:59.965482797 +0000 UTC m=+0.058001496 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 29 02:38:00 np0005539552 ceph-mon[77121]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:38:00 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:38:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:00.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:00.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:00 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:38:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:02.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:02.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:03 np0005539552 ceph-mon[77121]: mon.compute-2 calling monitor election
Nov 29 02:38:03 np0005539552 ceph-mon[77121]: mon.compute-0 calling monitor election
Nov 29 02:38:03 np0005539552 ceph-mon[77121]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 29 02:38:03 np0005539552 ceph-mon[77121]: overall HEALTH_OK
Nov 29 02:38:03 np0005539552 ceph-mon[77121]: mon.compute-0 calling monitor election
Nov 29 02:38:03 np0005539552 ceph-mon[77121]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:38:03 np0005539552 ceph-mon[77121]: overall HEALTH_OK
Nov 29 02:38:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:38:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:04.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:04.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:05 np0005539552 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Nov 29 02:38:05 np0005539552 systemd[1]: setroubleshootd.service: Deactivated successfully.
Nov 29 02:38:05 np0005539552 podman[196663]: 2025-11-29 07:38:05.678038488 +0000 UTC m=+0.106575963 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:38:06 np0005539552 python3.9[196816]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:38:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:06.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:38:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:38:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:06.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:38:07 np0005539552 python3.9[196969]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 02:38:07 np0005539552 python3.9[197121]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:38:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:08.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:08.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:08 np0005539552 python3.9[197276]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 02:38:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:38:09 np0005539552 python3.9[197426]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:38:10 np0005539552 python3.9[197547]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401889.194034-3372-80196435422119/.source.xml follow=False _original_basename=secret.xml.j2 checksum=adf02dc8f6a63a8cc45a7e93e335963254ff5ce7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:38:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:10.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:38:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:38:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:10.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:38:11 np0005539552 python3.9[197700]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine b66774a7-56d9-5535-bd8c-681234404870#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:38:11 np0005539552 python3.9[197862]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:12.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:12.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:38:14 np0005539552 python3.9[198376]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:14.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:38:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:14.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:38:14 np0005539552 python3.9[198529]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:38:15 np0005539552 python3.9[198652]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401894.513106-3537-235342430528316/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:16.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:38:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:16.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:38:16 np0005539552 python3.9[198804]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:17 np0005539552 python3.9[198957]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:38:17 np0005539552 python3.9[199035]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:18 np0005539552 python3.9[199187]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:38:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:18.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:18.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:38:18 np0005539552 python3.9[199266]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.l_vv2g6v recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:19 np0005539552 python3.9[199418]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:38:20 np0005539552 python3.9[199496]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:20.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:20.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:38:20.584 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:38:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:38:20.585 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:38:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:38:20.585 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:38:21 np0005539552 python3.9[199649]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:38:22 np0005539552 python3[199802]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 02:38:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:22.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965576f0 =====
Nov 29 02:38:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965576f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:38:22 np0005539552 radosgw[83248]: beast: 0x7fec965576f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:22.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:38:22 np0005539552 python3.9[199955]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:38:23 np0005539552 python3.9[200033]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:38:24 np0005539552 python3.9[200185]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:38:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965576f0 =====
Nov 29 02:38:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:24.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965576f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:24 np0005539552 radosgw[83248]: beast: 0x7fec965576f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:24.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:24 np0005539552 python3.9[200263]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:25 np0005539552 python3.9[200416]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:38:25 np0005539552 python3.9[200494]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965576f0 =====
Nov 29 02:38:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:38:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:26.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:38:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965576f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:38:26 np0005539552 radosgw[83248]: beast: 0x7fec965576f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:26.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:38:26 np0005539552 python3.9[200646]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:38:27 np0005539552 python3.9[200725]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:28 np0005539552 python3.9[200877]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:38:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965576f0 =====
Nov 29 02:38:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:28.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965576f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:38:28 np0005539552 radosgw[83248]: beast: 0x7fec965576f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:28.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:38:28 np0005539552 python3.9[201003]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401907.5338454-3913-127015453500955/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:38:30 np0005539552 podman[201052]: 2025-11-29 07:38:30.361735254 +0000 UTC m=+0.076902153 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 02:38:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:38:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:30.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:38:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:30.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:31 np0005539552 python3.9[201305]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:31 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:38:31 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:38:31 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:38:32 np0005539552 python3.9[201457]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:38:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:32.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:32.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:32 np0005539552 python3.9[201613]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:38:33 np0005539552 python3.9[201815]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:38:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:38:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:34.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:38:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:38:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:34.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:38:34 np0005539552 python3.9[201968]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:38:35 np0005539552 python3.9[202123]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:38:35 np0005539552 podman[202250]: 2025-11-29 07:38:35.817672567 +0000 UTC m=+0.083623459 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 02:38:36 np0005539552 python3.9[202299]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:36.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:36.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:36 np0005539552 python3.9[202457]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:38:37 np0005539552 python3.9[202580]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401916.2479377-4129-207894336421887/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:37 np0005539552 python3.9[202732]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:38:38 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:38:38 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:38:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:38:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:38.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:38:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:38.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:38 np0005539552 python3.9[202905]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401917.4990907-4174-279087578479442/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:38:39 np0005539552 python3.9[203058]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:38:39 np0005539552 python3.9[203181]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401918.7891579-4218-159630839795055/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:38:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:40.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:38:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:38:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:40.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:38:40 np0005539552 python3.9[203334]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:38:40 np0005539552 systemd[1]: Reloading.
Nov 29 02:38:40 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:38:40 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:38:41 np0005539552 systemd[1]: Reached target edpm_libvirt.target.
Nov 29 02:38:42 np0005539552 python3.9[203526]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 29 02:38:42 np0005539552 systemd[1]: Reloading.
Nov 29 02:38:42 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:38:42 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:38:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:38:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:42.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:38:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:38:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:42.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:38:42 np0005539552 systemd[1]: Reloading.
Nov 29 02:38:42 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:38:42 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:38:43 np0005539552 systemd[1]: session-50.scope: Deactivated successfully.
Nov 29 02:38:43 np0005539552 systemd[1]: session-50.scope: Consumed 1min 31.905s CPU time.
Nov 29 02:38:43 np0005539552 systemd-logind[788]: Session 50 logged out. Waiting for processes to exit.
Nov 29 02:38:43 np0005539552 systemd-logind[788]: Removed session 50.
Nov 29 02:38:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:38:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:44.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:44.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:46.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:46.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:48.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:48.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:38:50 np0005539552 systemd-logind[788]: New session 51 of user zuul.
Nov 29 02:38:50 np0005539552 systemd[1]: Started Session 51 of User zuul.
Nov 29 02:38:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:50.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:38:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:50.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:38:51 np0005539552 python3.9[203783]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:38:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:52.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:38:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:52.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:38:53 np0005539552 python3.9[203938]: ansible-ansible.builtin.service_facts Invoked
Nov 29 02:38:53 np0005539552 network[203955]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 02:38:53 np0005539552 network[203956]: 'network-scripts' will be removed from distribution in near future.
Nov 29 02:38:53 np0005539552 network[203957]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 02:38:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:38:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:54.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:54.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:38:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:56.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:38:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:56.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:58 np0005539552 python3.9[204281]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:38:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:58.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:38:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:58.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:38:59 np0005539552 python3.9[204366]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:39:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:00.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:39:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:00.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:39:01 np0005539552 podman[204369]: 2025-11-29 07:39:01.000999543 +0000 UTC m=+0.084447552 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125)
Nov 29 02:39:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:39:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:02.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:39:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:02.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:04 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:39:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:04.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:39:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:04.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:39:06 np0005539552 podman[204466]: 2025-11-29 07:39:06.071258447 +0000 UTC m=+0.150146514 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 02:39:06 np0005539552 python3.9[204567]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:39:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:06.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:06.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:07 np0005539552 python3.9[204720]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:39:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:08.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:08.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:08 np0005539552 python3.9[204873]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:39:09 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:39:09 np0005539552 python3.9[205026]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:39:10 np0005539552 python3.9[205179]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:39:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:10.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:39:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:10.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:39:10 np0005539552 python3.9[205303]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401949.6937253-253-175817558135599/.source.iscsi _original_basename=.209xfq2s follow=False checksum=8fd8ae8b864fc8b475dbb1ae75da9f4662cb68e9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:11 np0005539552 python3.9[205455]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:12.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:12.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:12 np0005539552 python3.9[205608]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:14 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:39:14 np0005539552 python3.9[205760]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:39:14 np0005539552 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 29 02:39:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:14.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:14.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:15 np0005539552 python3.9[205967]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:39:15 np0005539552 systemd[1]: Reloading.
Nov 29 02:39:15 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:39:15 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:39:15 np0005539552 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 29 02:39:15 np0005539552 systemd[1]: Starting Open-iSCSI...
Nov 29 02:39:15 np0005539552 kernel: Loading iSCSI transport class v2.0-870.
Nov 29 02:39:15 np0005539552 systemd[1]: Started Open-iSCSI.
Nov 29 02:39:15 np0005539552 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Nov 29 02:39:15 np0005539552 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Nov 29 02:39:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:16.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:16.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:17 np0005539552 python3.9[206169]: ansible-ansible.builtin.service_facts Invoked
Nov 29 02:39:17 np0005539552 network[206186]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 02:39:17 np0005539552 network[206187]: 'network-scripts' will be removed from distribution in near future.
Nov 29 02:39:17 np0005539552 network[206188]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 02:39:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:18.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:39:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:18.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:39:19 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:39:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:39:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:20.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:39:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:20.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:39:20.585 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:39:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:39:20.586 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:39:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:39:20.586 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:39:22 np0005539552 python3.9[206462]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 02:39:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:22.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:22.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:23 np0005539552 python3.9[206615]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Nov 29 02:39:23 np0005539552 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 02:39:23 np0005539552 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 02:39:23 np0005539552 python3.9[206772]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:39:24 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:39:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:24.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:24.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:24 np0005539552 python3.9[206895]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401963.4721088-483-261899335783786/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:25 np0005539552 python3.9[207048]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:26.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:26.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:26 np0005539552 python3.9[207200]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:39:26 np0005539552 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 29 02:39:26 np0005539552 systemd[1]: Stopped Load Kernel Modules.
Nov 29 02:39:26 np0005539552 systemd[1]: Stopping Load Kernel Modules...
Nov 29 02:39:26 np0005539552 systemd[1]: Starting Load Kernel Modules...
Nov 29 02:39:26 np0005539552 systemd[1]: Finished Load Kernel Modules.
Nov 29 02:39:27 np0005539552 python3.9[207357]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:39:28 np0005539552 python3.9[207509]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:39:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:39:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:28.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:39:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:39:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:28.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:39:29 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:39:29 np0005539552 python3.9[207662]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:39:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:39:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:30.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:39:30 np0005539552 python3.9[207814]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:39:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:39:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:30.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:39:31 np0005539552 python3.9[207938]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401969.9891121-657-224519591363405/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:31 np0005539552 podman[208062]: 2025-11-29 07:39:31.74445683 +0000 UTC m=+0.075371650 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 02:39:31 np0005539552 python3.9[208107]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:39:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:32.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:32.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:32 np0005539552 python3.9[208261]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:33 np0005539552 python3.9[208413]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:34 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:39:34 np0005539552 python3.9[208570]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:34.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:39:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:34.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:39:35 np0005539552 python3.9[208768]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:35 np0005539552 python3.9[208920]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:36 np0005539552 podman[209044]: 2025-11-29 07:39:36.410407045 +0000 UTC m=+0.098239661 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 02:39:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:39:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:36.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:39:36 np0005539552 python3.9[209091]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:39:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:36.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:39:37 np0005539552 python3.9[209249]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:38 np0005539552 python3.9[209401]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:39:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:38.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:38.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:38 np0005539552 python3.9[209673]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:39:39 np0005539552 python3.9[209840]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:39:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:40.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:39:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:40.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:39:40 np0005539552 python3.9[209993]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:39:41 np0005539552 python3.9[210071]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:39:41 np0005539552 python3.9[210223]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:39:41 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:39:41 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:39:41 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:39:42 np0005539552 python3.9[210301]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:39:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:42.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:42.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:43 np0005539552 python3.9[210454]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:44 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:39:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:39:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:44.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:39:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:44.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:44 np0005539552 python3.9[210607]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:39:45 np0005539552 python3.9[210685]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:46 np0005539552 python3.9[210837]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:39:46 np0005539552 python3.9[210915]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:39:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:46.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:39:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:46.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:47 np0005539552 python3.9[211068]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:39:47 np0005539552 systemd[1]: Reloading.
Nov 29 02:39:47 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:39:47 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:39:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:48.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:48.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:49 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:39:49 np0005539552 python3.9[211257]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:39:49 np0005539552 python3.9[211335]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:50 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:39:50 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:39:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:39:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:50.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:39:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:50.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:50 np0005539552 python3.9[211537]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:39:51 np0005539552 python3.9[211616]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:51 np0005539552 python3.9[211768]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:39:51 np0005539552 systemd[1]: Reloading.
Nov 29 02:39:52 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:39:52 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:39:52 np0005539552 systemd[1]: Starting Create netns directory...
Nov 29 02:39:52 np0005539552 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 02:39:52 np0005539552 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 02:39:52 np0005539552 systemd[1]: Finished Create netns directory.
Nov 29 02:39:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:52.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:52.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:53 np0005539552 systemd[1]: virtnodedevd.service: Deactivated successfully.
Nov 29 02:39:54 np0005539552 python3.9[211965]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:39:54 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:39:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:54.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:54.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:54 np0005539552 python3.9[212168]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:39:55 np0005539552 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 29 02:39:55 np0005539552 python3.9[212292]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401994.3493993-1279-212794566284590/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:39:56 np0005539552 python3.9[212444]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:39:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:56.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:56.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:57 np0005539552 python3.9[212597]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:39:57 np0005539552 python3.9[212720]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401996.7743537-1353-57651606334034/.source.json _original_basename=.8vn0qx9w follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:39:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:58.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:39:58 np0005539552 python3.9[212872]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:39:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:58.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:59 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:00.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:00.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:01 np0005539552 python3.9[213301]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Nov 29 02:40:01 np0005539552 podman[213302]: 2025-11-29 07:40:01.96554173 +0000 UTC m=+0.054327959 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 02:40:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:02.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:40:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:02.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:40:03 np0005539552 python3.9[213474]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 02:40:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:04.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:04.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:04 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:40:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:06.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:40:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:06.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:06 np0005539552 podman[213600]: 2025-11-29 07:40:06.745355202 +0000 UTC m=+0.081651588 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 02:40:06 np0005539552 python3.9[213649]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 29 02:40:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:08.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:08.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:09 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:09 np0005539552 python3[213834]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 02:40:10 np0005539552 ceph-mon[77121]: overall HEALTH_OK
Nov 29 02:40:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:10.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:40:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:10.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:40:11 np0005539552 podman[213846]: 2025-11-29 07:40:11.563638118 +0000 UTC m=+1.531851080 image pull f275b8d168f7f57f31e3da49224019f39f95c80a833f083696a964527b07b54f quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 29 02:40:11 np0005539552 podman[213904]: 2025-11-29 07:40:11.711412118 +0000 UTC m=+0.023936739 image pull f275b8d168f7f57f31e3da49224019f39f95c80a833f083696a964527b07b54f quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 29 02:40:12 np0005539552 podman[213904]: 2025-11-29 07:40:12.020762595 +0000 UTC m=+0.333287176 container create 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:40:12 np0005539552 python3[213834]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 29 02:40:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:40:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:12.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:40:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:12.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:13 np0005539552 systemd[1]: virtqemud.service: Deactivated successfully.
Nov 29 02:40:13 np0005539552 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 29 02:40:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:14.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:40:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:14.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:40:14 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:16.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:16.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:18.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:40:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:18.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:40:19 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:20.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:40:20.586 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:40:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:40:20.587 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:40:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:40:20.587 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:40:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:20.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:22.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:22.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:24 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:40:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:24.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:24 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:24.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:26.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:26.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:28.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:40:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:28.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:40:29 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:30.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:30.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:32 np0005539552 python3.9[214155]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:40:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:32.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:40:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:32.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:40:32 np0005539552 podman[214282]: 2025-11-29 07:40:32.851040924 +0000 UTC m=+0.078678259 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 02:40:33 np0005539552 python3.9[214330]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:40:33 np0005539552 python3.9[214406]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:40:34 np0005539552 python3.9[214557]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764402033.5502024-1617-131885426543809/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:40:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:40:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:34.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:40:34 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:40:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:34.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:40:34 np0005539552 python3.9[214634]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 02:40:34 np0005539552 systemd[1]: Reloading.
Nov 29 02:40:34 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:40:34 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:40:35 np0005539552 python3.9[214795]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:40:35 np0005539552 systemd[1]: Reloading.
Nov 29 02:40:36 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:40:36 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:40:36 np0005539552 systemd[1]: Starting multipathd container...
Nov 29 02:40:36 np0005539552 systemd[1]: Started libcrun container.
Nov 29 02:40:36 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97936c37383ce87a429342cbaa8c938eb669ff41f32442b55e59fdf70ae2cf90/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 02:40:36 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97936c37383ce87a429342cbaa8c938eb669ff41f32442b55e59fdf70ae2cf90/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 02:40:36 np0005539552 systemd[1]: Started /usr/bin/podman healthcheck run 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0.
Nov 29 02:40:36 np0005539552 podman[214834]: 2025-11-29 07:40:36.584389506 +0000 UTC m=+0.349579981 container init 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:40:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:36.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:36 np0005539552 multipathd[214850]: + sudo -E kolla_set_configs
Nov 29 02:40:36 np0005539552 podman[214834]: 2025-11-29 07:40:36.615972087 +0000 UTC m=+0.381162572 container start 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 02:40:36 np0005539552 podman[214834]: multipathd
Nov 29 02:40:36 np0005539552 systemd[1]: Started multipathd container.
Nov 29 02:40:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:36.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:36 np0005539552 multipathd[214850]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 02:40:36 np0005539552 multipathd[214850]: INFO:__main__:Validating config file
Nov 29 02:40:36 np0005539552 multipathd[214850]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 02:40:36 np0005539552 multipathd[214850]: INFO:__main__:Writing out command to execute
Nov 29 02:40:36 np0005539552 multipathd[214850]: ++ cat /run_command
Nov 29 02:40:36 np0005539552 multipathd[214850]: + CMD='/usr/sbin/multipathd -d'
Nov 29 02:40:36 np0005539552 multipathd[214850]: + ARGS=
Nov 29 02:40:36 np0005539552 multipathd[214850]: + sudo kolla_copy_cacerts
Nov 29 02:40:36 np0005539552 podman[214857]: 2025-11-29 07:40:36.71172667 +0000 UTC m=+0.078919415 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd)
Nov 29 02:40:36 np0005539552 multipathd[214850]: + [[ ! -n '' ]]
Nov 29 02:40:36 np0005539552 multipathd[214850]: + . kolla_extend_start
Nov 29 02:40:36 np0005539552 multipathd[214850]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 29 02:40:36 np0005539552 multipathd[214850]: Running command: '/usr/sbin/multipathd -d'
Nov 29 02:40:36 np0005539552 multipathd[214850]: + umask 0022
Nov 29 02:40:36 np0005539552 multipathd[214850]: + exec /usr/sbin/multipathd -d
Nov 29 02:40:36 np0005539552 systemd[1]: 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0-4a27a85ee4c84330.service: Main process exited, code=exited, status=1/FAILURE
Nov 29 02:40:36 np0005539552 systemd[1]: 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0-4a27a85ee4c84330.service: Failed with result 'exit-code'.
Nov 29 02:40:36 np0005539552 multipathd[214850]: 5082.492120 | --------start up--------
Nov 29 02:40:36 np0005539552 multipathd[214850]: 5082.492140 | read /etc/multipath.conf
Nov 29 02:40:36 np0005539552 multipathd[214850]: 5082.498601 | path checkers start up
Nov 29 02:40:37 np0005539552 podman[214914]: 2025-11-29 07:40:37.041971735 +0000 UTC m=+0.119216190 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 02:40:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:38.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:38.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:40:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:40.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:40:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:40.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:42.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:42.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:42 np0005539552 python3.9[215069]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:40:43 np0005539552 python3.9[215223]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:40:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:40:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:44.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:40:44 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:44 np0005539552 python3.9[215387]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:40:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:44.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:44 np0005539552 systemd[1]: Stopping multipathd container...
Nov 29 02:40:44 np0005539552 multipathd[214850]: 5090.562073 | exit (signal)
Nov 29 02:40:44 np0005539552 multipathd[214850]: 5090.562490 | --------shut down-------
Nov 29 02:40:44 np0005539552 systemd[1]: libpod-1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0.scope: Deactivated successfully.
Nov 29 02:40:44 np0005539552 podman[215392]: 2025-11-29 07:40:44.852786054 +0000 UTC m=+0.096410741 container died 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 29 02:40:44 np0005539552 systemd[1]: 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0-4a27a85ee4c84330.timer: Deactivated successfully.
Nov 29 02:40:44 np0005539552 systemd[1]: Stopped /usr/bin/podman healthcheck run 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0.
Nov 29 02:40:44 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0-userdata-shm.mount: Deactivated successfully.
Nov 29 02:40:44 np0005539552 systemd[1]: var-lib-containers-storage-overlay-97936c37383ce87a429342cbaa8c938eb669ff41f32442b55e59fdf70ae2cf90-merged.mount: Deactivated successfully.
Nov 29 02:40:44 np0005539552 podman[215392]: 2025-11-29 07:40:44.91300885 +0000 UTC m=+0.156633537 container cleanup 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 29 02:40:44 np0005539552 podman[215392]: multipathd
Nov 29 02:40:44 np0005539552 podman[215421]: multipathd
Nov 29 02:40:44 np0005539552 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Nov 29 02:40:44 np0005539552 systemd[1]: Stopped multipathd container.
Nov 29 02:40:44 np0005539552 systemd[1]: Starting multipathd container...
Nov 29 02:40:45 np0005539552 systemd[1]: Started libcrun container.
Nov 29 02:40:45 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97936c37383ce87a429342cbaa8c938eb669ff41f32442b55e59fdf70ae2cf90/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 02:40:45 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97936c37383ce87a429342cbaa8c938eb669ff41f32442b55e59fdf70ae2cf90/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 02:40:45 np0005539552 systemd[1]: Started /usr/bin/podman healthcheck run 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0.
Nov 29 02:40:45 np0005539552 podman[215434]: 2025-11-29 07:40:45.137557267 +0000 UTC m=+0.130547902 container init 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 02:40:45 np0005539552 multipathd[215450]: + sudo -E kolla_set_configs
Nov 29 02:40:45 np0005539552 podman[215434]: 2025-11-29 07:40:45.168276145 +0000 UTC m=+0.161266761 container start 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 02:40:45 np0005539552 podman[215434]: multipathd
Nov 29 02:40:45 np0005539552 systemd[1]: Started multipathd container.
Nov 29 02:40:45 np0005539552 podman[215456]: 2025-11-29 07:40:45.252385378 +0000 UTC m=+0.071351343 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 29 02:40:45 np0005539552 multipathd[215450]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 02:40:45 np0005539552 multipathd[215450]: INFO:__main__:Validating config file
Nov 29 02:40:45 np0005539552 multipathd[215450]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 02:40:45 np0005539552 multipathd[215450]: INFO:__main__:Writing out command to execute
Nov 29 02:40:45 np0005539552 systemd[1]: 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0-27da71a57cb73bee.service: Main process exited, code=exited, status=1/FAILURE
Nov 29 02:40:45 np0005539552 systemd[1]: 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0-27da71a57cb73bee.service: Failed with result 'exit-code'.
Nov 29 02:40:45 np0005539552 multipathd[215450]: ++ cat /run_command
Nov 29 02:40:45 np0005539552 multipathd[215450]: + CMD='/usr/sbin/multipathd -d'
Nov 29 02:40:45 np0005539552 multipathd[215450]: + ARGS=
Nov 29 02:40:45 np0005539552 multipathd[215450]: + sudo kolla_copy_cacerts
Nov 29 02:40:45 np0005539552 multipathd[215450]: + [[ ! -n '' ]]
Nov 29 02:40:45 np0005539552 multipathd[215450]: + . kolla_extend_start
Nov 29 02:40:45 np0005539552 multipathd[215450]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 29 02:40:45 np0005539552 multipathd[215450]: Running command: '/usr/sbin/multipathd -d'
Nov 29 02:40:45 np0005539552 multipathd[215450]: + umask 0022
Nov 29 02:40:45 np0005539552 multipathd[215450]: + exec /usr/sbin/multipathd -d
Nov 29 02:40:45 np0005539552 multipathd[215450]: 5091.064298 | --------start up--------
Nov 29 02:40:45 np0005539552 multipathd[215450]: 5091.064443 | read /etc/multipath.conf
Nov 29 02:40:45 np0005539552 multipathd[215450]: 5091.070537 | path checkers start up
Nov 29 02:40:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:40:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:46.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:40:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:40:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:46.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:40:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:48.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:48.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:49 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:50.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:50.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:52.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:52.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:54.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:54 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:40:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:54.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:40:55 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:40:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:56.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:56.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:57 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:40:57 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:40:58 np0005539552 python3.9[215830]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:40:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:58.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:40:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:58.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:59 np0005539552 python3.9[215983]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 02:40:59 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:00 np0005539552 python3.9[216135]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Nov 29 02:41:00 np0005539552 kernel: Key type psk registered
Nov 29 02:41:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:00.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:00.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:01 np0005539552 python3.9[216297]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:41:01 np0005539552 python3.9[216420]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764402060.695355-1858-247051458212093/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:02 np0005539552 python3.9[216572]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:02.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:02.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:02 np0005539552 podman[216699]: 2025-11-29 07:41:02.983653618 +0000 UTC m=+0.065701903 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 02:41:03 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:41:03 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:41:03 np0005539552 python3.9[216794]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:41:03 np0005539552 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 29 02:41:03 np0005539552 systemd[1]: Stopped Load Kernel Modules.
Nov 29 02:41:03 np0005539552 systemd[1]: Stopping Load Kernel Modules...
Nov 29 02:41:03 np0005539552 systemd[1]: Starting Load Kernel Modules...
Nov 29 02:41:03 np0005539552 systemd[1]: Finished Load Kernel Modules.
Nov 29 02:41:04 np0005539552 python3.9[216950]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:41:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:04.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:04 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:04.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:06.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:06 np0005539552 systemd[1]: Reloading.
Nov 29 02:41:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:41:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:06.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:41:06 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:41:06 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:41:06 np0005539552 systemd[1]: Reloading.
Nov 29 02:41:07 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:41:07 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:41:07 np0005539552 systemd-logind[788]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 29 02:41:07 np0005539552 podman[217030]: 2025-11-29 07:41:07.380090758 +0000 UTC m=+0.110876047 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251125)
Nov 29 02:41:07 np0005539552 systemd-logind[788]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 29 02:41:07 np0005539552 lvm[217090]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 02:41:07 np0005539552 lvm[217090]: VG ceph_vg0 finished
Nov 29 02:41:07 np0005539552 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 02:41:07 np0005539552 systemd[1]: Starting man-db-cache-update.service...
Nov 29 02:41:07 np0005539552 systemd[1]: Reloading.
Nov 29 02:41:07 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:41:07 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:41:07 np0005539552 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 02:41:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:08.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:08.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:09 np0005539552 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 02:41:09 np0005539552 systemd[1]: Finished man-db-cache-update.service.
Nov 29 02:41:09 np0005539552 systemd[1]: man-db-cache-update.service: Consumed 1.791s CPU time.
Nov 29 02:41:09 np0005539552 systemd[1]: run-r4a8dc9a7025145d6a6ff0151e6389add.service: Deactivated successfully.
Nov 29 02:41:09 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:10.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:10.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:12.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:12.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:14 np0005539552 python3.9[218438]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:41:14 np0005539552 systemd[1]: Stopping Open-iSCSI...
Nov 29 02:41:14 np0005539552 iscsid[206007]: iscsid shutting down.
Nov 29 02:41:14 np0005539552 systemd[1]: iscsid.service: Deactivated successfully.
Nov 29 02:41:14 np0005539552 systemd[1]: Stopped Open-iSCSI.
Nov 29 02:41:14 np0005539552 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 29 02:41:14 np0005539552 systemd[1]: Starting Open-iSCSI...
Nov 29 02:41:14 np0005539552 systemd[1]: Started Open-iSCSI.
Nov 29 02:41:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:41:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:14.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:41:14 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:14.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:15 np0005539552 podman[218644]: 2025-11-29 07:41:15.386565313 +0000 UTC m=+0.065518048 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Nov 29 02:41:15 np0005539552 python3.9[218594]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:41:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:16.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:16 np0005539552 python3.9[218821]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:16.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:18 np0005539552 python3.9[218973]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 02:41:18 np0005539552 systemd[1]: Reloading.
Nov 29 02:41:18 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:41:18 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:41:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:18.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:41:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:18.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:41:19 np0005539552 python3.9[219158]: ansible-ansible.builtin.service_facts Invoked
Nov 29 02:41:19 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:19 np0005539552 network[219175]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 02:41:19 np0005539552 network[219176]: 'network-scripts' will be removed from distribution in near future.
Nov 29 02:41:19 np0005539552 network[219177]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 02:41:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:41:20.589 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:41:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:41:20.590 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:41:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:41:20.590 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:41:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:20.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:41:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:20.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:41:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:22.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:22.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:24.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:24.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:24 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:25 np0005539552 python3.9[219455]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:41:26 np0005539552 python3.9[219608]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:41:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:26.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:26.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:28 np0005539552 python3.9[219762]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:41:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:28.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:28.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:28 np0005539552 python3.9[219916]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:41:29 np0005539552 python3.9[220069]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:41:29 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:30 np0005539552 python3.9[220222]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:41:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:30.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:30.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:31 np0005539552 python3.9[220376]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:41:32 np0005539552 python3.9[220529]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:41:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:32.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:32.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:33 np0005539552 podman[220655]: 2025-11-29 07:41:33.320026212 +0000 UTC m=+0.060846144 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent)
Nov 29 02:41:33 np0005539552 python3.9[220700]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:34 np0005539552 python3.9[220854]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:34.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:34 np0005539552 python3.9[221007]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:34.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:34 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:35 np0005539552 python3.9[221159]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:36 np0005539552 python3.9[221361]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:36 np0005539552 python3.9[221513]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:36.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:36.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:37 np0005539552 python3.9[221666]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:37 np0005539552 podman[221790]: 2025-11-29 07:41:37.812094071 +0000 UTC m=+0.084298368 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 02:41:38 np0005539552 python3.9[221841]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:41:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:38.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:41:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:41:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:38.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:41:38 np0005539552 python3.9[221997]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:39 np0005539552 python3.9[222149]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:40 np0005539552 python3.9[222301]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:41:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:40.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:41:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:40.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:40 np0005539552 python3.9[222454]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:41 np0005539552 python3.9[222606]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:42 np0005539552 python3.9[222758]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:42.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:42 np0005539552 python3.9[222910]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 02:41:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:42.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 02:41:43 np0005539552 python3.9[223063]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:44 np0005539552 python3.9[223215]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:41:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:44.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:41:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:44.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:41:44 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:45 np0005539552 python3.9[223368]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 02:41:45 np0005539552 podman[223393]: 2025-11-29 07:41:45.991366683 +0000 UTC m=+0.078063793 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 02:41:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:46.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:46 np0005539552 python3.9[223541]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 02:41:46 np0005539552 systemd[1]: Reloading.
Nov 29 02:41:46 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:41:46 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:41:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:46.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:48 np0005539552 python3.9[223729]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:41:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:48.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:48.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:48 np0005539552 python3.9[223883]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:41:49 np0005539552 python3.9[224036]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:41:49 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:50 np0005539552 python3.9[224189]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:41:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:50.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:41:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:50.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:41:51 np0005539552 python3.9[224343]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:41:51 np0005539552 python3.9[224496]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:41:52 np0005539552 python3.9[224649]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:41:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:52.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:52.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:52 np0005539552 python3.9[224803]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:41:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:54.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:41:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:54.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:41:54 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:55 np0005539552 python3.9[224957]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:41:56 np0005539552 python3.9[225159]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:41:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:56.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:56 np0005539552 python3.9[225312]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:41:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:56.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:57 np0005539552 python3.9[225464]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:41:58 np0005539552 python3.9[225616]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:41:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:58.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:41:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:41:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:58.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:41:59 np0005539552 python3.9[225769]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:41:59 np0005539552 python3.9[225921]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:41:59 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:00 np0005539552 python3.9[226073]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:42:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:00.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:00.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:01 np0005539552 python3.9[226226]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:42:01 np0005539552 python3.9[226378]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:42:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:02.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:02.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:04 np0005539552 podman[226535]: 2025-11-29 07:42:04.008731091 +0000 UTC m=+0.100367177 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 29 02:42:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:42:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:04.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:42:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:04.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:05 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:42:05 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:42:05 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:42:05 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:06.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:06.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:07 np0005539552 podman[226556]: 2025-11-29 07:42:07.999056624 +0000 UTC m=+0.084942426 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 29 02:42:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:08.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:08.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:09 np0005539552 python3.9[226711]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Nov 29 02:42:10 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:10 np0005539552 python3.9[226864]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 02:42:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:42:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:10.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:42:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:10.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:12.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:12.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:13 np0005539552 python3.9[227024]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-2 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 02:42:13 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:42:13 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:42:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:14.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:14.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:15 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:16.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:16.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:16 np0005539552 podman[227159]: 2025-11-29 07:42:16.987046073 +0000 UTC m=+0.078885974 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 29 02:42:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:18.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:18.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:20 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:42:20.590 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:42:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:42:20.591 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:42:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:42:20.591 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:42:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:42:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:20.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:42:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:20.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:21 np0005539552 systemd-logind[788]: New session 52 of user zuul.
Nov 29 02:42:21 np0005539552 systemd[1]: Started Session 52 of User zuul.
Nov 29 02:42:21 np0005539552 systemd[1]: session-52.scope: Deactivated successfully.
Nov 29 02:42:21 np0005539552 systemd-logind[788]: Session 52 logged out. Waiting for processes to exit.
Nov 29 02:42:21 np0005539552 systemd-logind[788]: Removed session 52.
Nov 29 02:42:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:22.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:22 np0005539552 python3.9[227335]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:42:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:22.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:23 np0005539552 python3.9[227456]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764402142.259421-3441-249753038777203/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:42:24 np0005539552 python3.9[227606]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:42:24 np0005539552 python3.9[227682]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:42:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:24.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:42:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:24.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:42:25 np0005539552 python3.9[227833]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:42:25 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:25 np0005539552 python3.9[227954]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764402144.5745661-3441-25824569671573/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:42:26 np0005539552 python3.9[228104]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:42:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:26.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:26 np0005539552 python3.9[228226]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764402145.7883646-3441-213190059353993/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=d01cc1b48d783e4ed08d12bb4d0a107aba230a69 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:42:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:26.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:27 np0005539552 python3.9[228376]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:42:28 np0005539552 python3.9[228497]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764402146.9960759-3441-138503006532093/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:42:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:28.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:28 np0005539552 python3.9[228648]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:42:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:28.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:29 np0005539552 python3.9[228769]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764402148.2470121-3441-214273103570421/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:42:30 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:42:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:30.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:42:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:30.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:31 np0005539552 python3.9[228922]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:42:32 np0005539552 python3.9[229074]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:42:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:32.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:32 np0005539552 python3.9[229227]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:42:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:32.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:33 np0005539552 python3.9[229379]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:42:34 np0005539552 python3.9[229502]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764402153.0843787-3763-188713733882939/.source _original_basename=.c6zp3zbg follow=False checksum=682ab8de996278c0d4a277749133186af9a5837e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Nov 29 02:42:34 np0005539552 podman[229503]: 2025-11-29 07:42:34.157894342 +0000 UTC m=+0.076035888 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 02:42:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:34.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:34.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:35 np0005539552 python3.9[229674]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:42:35 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:36 np0005539552 python3.9[229876]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:42:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:42:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:36.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:42:36 np0005539552 python3.9[229998]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764402155.7395275-3840-182110677668479/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:42:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:36.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:37 np0005539552 python3.9[230148]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:42:38 np0005539552 python3.9[230269]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764402157.0441194-3885-40037774303293/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:42:38 np0005539552 podman[230270]: 2025-11-29 07:42:38.193838682 +0000 UTC m=+0.083357393 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller)
Nov 29 02:42:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:38.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:38.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:39 np0005539552 python3.9[230448]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Nov 29 02:42:39 np0005539552 python3.9[230600]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 02:42:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:40.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:40 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:40.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:40 np0005539552 python3[230753]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 02:42:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:42.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:42.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:44.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:44.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:45 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:46.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:42:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:46.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:42:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:42:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:48.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:42:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:48.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:49 np0005539552 podman[230827]: 2025-11-29 07:42:49.90448047 +0000 UTC m=+1.995126582 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 02:42:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:50.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:50 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:50.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:52 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:42:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:52.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:52.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:54.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:54.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:56 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:42:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:42:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:56.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:42:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:56.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:58.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:42:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:58.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:00 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:43:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:00.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:00.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:02.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:02.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:04 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:43:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:04.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:04.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:05 np0005539552 podman[230951]: 2025-11-29 07:43:05.429464786 +0000 UTC m=+0.524143695 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:43:05 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).paxos(paxos updating c 1005..1708) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 0.520837188s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Nov 29 02:43:05 np0005539552 ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-2[77117]: 2025-11-29T07:43:05.631+0000 7f88aff6a640 -1 mon.compute-2@1(peon).paxos(paxos updating c 1005..1708) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 0.520837188s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Nov 29 02:43:05 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:43:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:06.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:06.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:07 np0005539552 podman[230766]: 2025-11-29 07:43:07.234740455 +0000 UTC m=+26.195376102 image pull b65793e7266422f5b94c32d109b906c8ffd974cf2ddf0b6929e463e29e05864a quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 29 02:43:07 np0005539552 podman[230995]: 2025-11-29 07:43:07.37068984 +0000 UTC m=+0.025971415 image pull b65793e7266422f5b94c32d109b906c8ffd974cf2ddf0b6929e463e29e05864a quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 29 02:43:08 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:43:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:08.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:08.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:08 np0005539552 podman[231009]: 2025-11-29 07:43:08.996806359 +0000 UTC m=+0.089683389 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 02:43:09 np0005539552 ceph-mon[77121]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Nov 29 02:43:09 np0005539552 ceph-mon[77121]: paxos.1).electionLogic(65) init, last seen epoch 65, mid-election, bumping
Nov 29 02:43:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:10.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:10.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:12 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:43:12 np0005539552 podman[230995]: 2025-11-29 07:43:12.697924389 +0000 UTC m=+5.353205944 container create 3bf0ebc12ed7fe40e982148a01f95ee47e1630b089f8aeeaced87812617999be (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:43:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:43:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:43:12 np0005539552 python3[230753]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Nov 29 02:43:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:12.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:12.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:14 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:43:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:14.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:14.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:16 np0005539552 ceph-mon[77121]: MDS daemon mds.cephfs.compute-1.ldsugj is removed because it is dead or otherwise unavailable.
Nov 29 02:43:16 np0005539552 ceph-mon[77121]: MDS daemon mds.cephfs.compute-2.mmoati is removed because it is dead or otherwise unavailable.
Nov 29 02:43:16 np0005539552 ceph-mon[77121]: Health check failed: insufficient standby MDS daemons available (MDS_INSUFFICIENT_STANDBY)
Nov 29 02:43:16 np0005539552 ceph-mon[77121]: mon.compute-0 calling monitor election
Nov 29 02:43:16 np0005539552 ceph-mon[77121]: mon.compute-2 calling monitor election
Nov 29 02:43:16 np0005539552 ceph-mon[77121]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 29 02:43:16 np0005539552 ceph-mon[77121]: Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Nov 29 02:43:16 np0005539552 ceph-mon[77121]: Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2
Nov 29 02:43:16 np0005539552 ceph-mon[77121]: [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Nov 29 02:43:16 np0005539552 ceph-mon[77121]:    mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Nov 29 02:43:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:16.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:16.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:43:18 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:43:18 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:43:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:18.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:18.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:19 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:43:19 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:43:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:43:20.592 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:43:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:43:20.592 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:43:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:43:20.593 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:43:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:20.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:20.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:21 np0005539552 ceph-mon[77121]: Health check failed: 1 slow ops, oldest one blocked for 31 sec, mon.compute-1 has slow ops (SLOW_OPS)
Nov 29 02:43:22 np0005539552 ceph-mon[77121]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Nov 29 02:43:22 np0005539552 ceph-mon[77121]: paxos.1).electionLogic(69) init, last seen epoch 69, mid-election, bumping
Nov 29 02:43:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:43:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:43:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:22.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:22.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:43:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:24.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:24.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:26.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:26.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:43:28 np0005539552 python3.9[231401]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:43:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:28.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:28.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:29 np0005539552 python3.9[231556]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Nov 29 02:43:29 np0005539552 ceph-mon[77121]: mon.compute-2 calling monitor election
Nov 29 02:43:29 np0005539552 ceph-mon[77121]: mon.compute-0 calling monitor election
Nov 29 02:43:29 np0005539552 ceph-mon[77121]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:43:29 np0005539552 ceph-mon[77121]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Nov 29 02:43:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:30.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:30.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:32 np0005539552 python3.9[231709]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 02:43:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:32.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:32.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:43:33 np0005539552 python3[231862]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 02:43:34 np0005539552 podman[231898]: 2025-11-29 07:43:33.934104005 +0000 UTC m=+0.023841038 image pull b65793e7266422f5b94c32d109b906c8ffd974cf2ddf0b6929e463e29e05864a quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 29 02:43:34 np0005539552 podman[231898]: 2025-11-29 07:43:34.258419387 +0000 UTC m=+0.348156420 container create a5540344be90b83c758ee69b8c561f987d0e23700b655a919557384b5205c0e4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:43:34 np0005539552 python3[231862]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume 
/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Nov 29 02:43:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:34.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:34.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:35 np0005539552 podman[231964]: 2025-11-29 07:43:35.971363326 +0000 UTC m=+0.057284233 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 02:43:35 np0005539552 podman[231963]: 2025-11-29 07:43:35.973337519 +0000 UTC m=+0.059181594 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd)
Nov 29 02:43:36 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 02:43:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:36.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:36.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:43:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:43:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:38.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:43:38 np0005539552 ceph-mon[77121]: mon.compute-1 calling monitor election
Nov 29 02:43:38 np0005539552 ceph-mon[77121]: Health detail: HEALTH_WARN 1 slow ops, oldest one blocked for 31 sec, mon.compute-1 has slow ops
Nov 29 02:43:38 np0005539552 ceph-mon[77121]: [WRN] SLOW_OPS: 1 slow ops, oldest one blocked for 31 sec, mon.compute-1 has slow ops
Nov 29 02:43:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:38.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:39 np0005539552 python3.9[232130]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:43:40 np0005539552 podman[232157]: 2025-11-29 07:43:40.013455503 +0000 UTC m=+0.098244078 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 02:43:40 np0005539552 python3.9[232312]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:43:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:40.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:40.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:41 np0005539552 python3.9[232514]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764402220.508952-4160-58285253743225/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:43:41 np0005539552 python3.9[232590]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 02:43:41 np0005539552 systemd[1]: Reloading.
Nov 29 02:43:41 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:43:41 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:43:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:42.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:42.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:43 np0005539552 systemd[1]: Starting dnf makecache...
Nov 29 02:43:43 np0005539552 dnf[232626]: Metadata cache refreshed recently.
Nov 29 02:43:43 np0005539552 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 29 02:43:43 np0005539552 systemd[1]: Finished dnf makecache.
Nov 29 02:43:43 np0005539552 python3.9[232702]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:43:43 np0005539552 systemd[1]: Reloading.
Nov 29 02:43:43 np0005539552 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:43:43 np0005539552 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:43:44 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:43:44 np0005539552 systemd[1]: Starting nova_compute container...
Nov 29 02:43:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:44.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:43:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:44.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:43:45 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:43:45 np0005539552 systemd[1]: Started libcrun container.
Nov 29 02:43:45 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ced3424aad661c724e086ab13fd5a5dba34f32f4770f441f13c770702bbffd1c/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:45 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ced3424aad661c724e086ab13fd5a5dba34f32f4770f441f13c770702bbffd1c/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:45 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ced3424aad661c724e086ab13fd5a5dba34f32f4770f441f13c770702bbffd1c/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:45 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ced3424aad661c724e086ab13fd5a5dba34f32f4770f441f13c770702bbffd1c/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:45 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ced3424aad661c724e086ab13fd5a5dba34f32f4770f441f13c770702bbffd1c/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:45 np0005539552 podman[232742]: 2025-11-29 07:43:45.381352189 +0000 UTC m=+1.142886160 container init a5540344be90b83c758ee69b8c561f987d0e23700b655a919557384b5205c0e4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:43:45 np0005539552 podman[232742]: 2025-11-29 07:43:45.388304245 +0000 UTC m=+1.149838196 container start a5540344be90b83c758ee69b8c561f987d0e23700b655a919557384b5205c0e4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:43:45 np0005539552 nova_compute[232758]: + sudo -E kolla_set_configs
Nov 29 02:43:45 np0005539552 podman[232742]: nova_compute
Nov 29 02:43:45 np0005539552 systemd[1]: Started nova_compute container.
Nov 29 02:43:45 np0005539552 nova_compute[232758]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 02:43:45 np0005539552 nova_compute[232758]: INFO:__main__:Validating config file
Nov 29 02:43:45 np0005539552 nova_compute[232758]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 02:43:45 np0005539552 nova_compute[232758]: INFO:__main__:Copying service configuration files
Nov 29 02:43:45 np0005539552 nova_compute[232758]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 29 02:43:45 np0005539552 nova_compute[232758]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 29 02:43:45 np0005539552 nova_compute[232758]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 29 02:43:45 np0005539552 nova_compute[232758]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 29 02:43:45 np0005539552 nova_compute[232758]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 29 02:43:45 np0005539552 nova_compute[232758]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 02:43:45 np0005539552 nova_compute[232758]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 02:43:45 np0005539552 nova_compute[232758]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 02:43:45 np0005539552 nova_compute[232758]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 02:43:45 np0005539552 nova_compute[232758]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 02:43:45 np0005539552 nova_compute[232758]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 02:43:45 np0005539552 nova_compute[232758]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 02:43:45 np0005539552 nova_compute[232758]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 02:43:45 np0005539552 nova_compute[232758]: INFO:__main__:Deleting /etc/ceph
Nov 29 02:43:45 np0005539552 nova_compute[232758]: INFO:__main__:Creating directory /etc/ceph
Nov 29 02:43:45 np0005539552 nova_compute[232758]: INFO:__main__:Setting permission for /etc/ceph
Nov 29 02:43:45 np0005539552 nova_compute[232758]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 29 02:43:45 np0005539552 nova_compute[232758]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 29 02:43:45 np0005539552 nova_compute[232758]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 29 02:43:45 np0005539552 nova_compute[232758]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 29 02:43:45 np0005539552 nova_compute[232758]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 29 02:43:45 np0005539552 nova_compute[232758]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 02:43:45 np0005539552 nova_compute[232758]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 29 02:43:45 np0005539552 nova_compute[232758]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 02:43:45 np0005539552 nova_compute[232758]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 29 02:43:45 np0005539552 nova_compute[232758]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 29 02:43:45 np0005539552 nova_compute[232758]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 29 02:43:45 np0005539552 nova_compute[232758]: INFO:__main__:Writing out command to execute
Nov 29 02:43:45 np0005539552 nova_compute[232758]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 29 02:43:45 np0005539552 nova_compute[232758]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 29 02:43:45 np0005539552 nova_compute[232758]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 29 02:43:45 np0005539552 nova_compute[232758]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 02:43:45 np0005539552 nova_compute[232758]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 02:43:45 np0005539552 nova_compute[232758]: ++ cat /run_command
Nov 29 02:43:45 np0005539552 nova_compute[232758]: + CMD=nova-compute
Nov 29 02:43:45 np0005539552 nova_compute[232758]: + ARGS=
Nov 29 02:43:45 np0005539552 nova_compute[232758]: + sudo kolla_copy_cacerts
Nov 29 02:43:45 np0005539552 nova_compute[232758]: + [[ ! -n '' ]]
Nov 29 02:43:45 np0005539552 nova_compute[232758]: + . kolla_extend_start
Nov 29 02:43:45 np0005539552 nova_compute[232758]: Running command: 'nova-compute'
Nov 29 02:43:45 np0005539552 nova_compute[232758]: + echo 'Running command: '\''nova-compute'\'''
Nov 29 02:43:45 np0005539552 nova_compute[232758]: + umask 0022
Nov 29 02:43:45 np0005539552 nova_compute[232758]: + exec nova-compute
Nov 29 02:43:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:46.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:46.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:47 np0005539552 nova_compute[232758]: 2025-11-29 07:43:47.746 232762 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 29 02:43:47 np0005539552 nova_compute[232758]: 2025-11-29 07:43:47.746 232762 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 29 02:43:47 np0005539552 nova_compute[232758]: 2025-11-29 07:43:47.747 232762 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 29 02:43:47 np0005539552 nova_compute[232758]: 2025-11-29 07:43:47.747 232762 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Nov 29 02:43:47 np0005539552 nova_compute[232758]: 2025-11-29 07:43:47.890 232762 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:43:47 np0005539552 nova_compute[232758]: 2025-11-29 07:43:47.914 232762 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:43:47 np0005539552 nova_compute[232758]: 2025-11-29 07:43:47.914 232762 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Nov 29 02:43:47 np0005539552 ceph-mon[77121]: Health check cleared: SLOW_OPS (was: 1 slow ops, oldest one blocked for 31 sec, mon.compute-1 has slow ops)
Nov 29 02:43:47 np0005539552 ceph-mon[77121]: Cluster is now healthy
Nov 29 02:43:47 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:43:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:48.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:48 np0005539552 python3.9[232926]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:43:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:48.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.410 232762 INFO nova.virt.driver [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.600 232762 INFO nova.compute.provider_config [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.683 232762 DEBUG oslo_concurrency.lockutils [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.684 232762 DEBUG oslo_concurrency.lockutils [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.684 232762 DEBUG oslo_concurrency.lockutils [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.684 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.685 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.685 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.685 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.685 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.685 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.685 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.686 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.686 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.686 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.686 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.686 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.686 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.687 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.687 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.687 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.687 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.687 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.687 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.688 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] console_host                   = compute-2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.688 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.688 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.688 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.688 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.689 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.689 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.689 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.689 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.689 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.689 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.690 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.690 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.690 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.690 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.690 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.690 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.690 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.691 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.691 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] host                           = compute-2.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.691 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.691 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.691 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.692 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.692 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.692 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.692 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.692 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.692 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.693 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.693 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.693 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.693 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.693 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.694 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.694 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.694 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.694 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.694 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.694 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.695 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.695 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.695 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.695 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.695 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.695 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.695 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.696 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.696 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.696 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.696 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.696 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.696 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.696 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.696 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.697 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.697 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.697 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.697 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.697 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.697 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.697 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] my_block_storage_ip            = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.698 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] my_ip                          = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.698 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.698 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.698 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.698 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.698 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.698 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.699 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.699 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.699 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.699 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.699 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.699 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.699 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.700 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.700 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.700 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.700 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.700 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.700 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.701 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.701 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.701 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.701 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.701 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.701 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.701 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.702 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.702 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.702 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.702 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.702 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.702 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.703 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.703 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.703 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.703 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.703 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.703 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.703 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.704 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.704 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.704 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.704 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.704 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.704 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.704 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.705 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.705 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.705 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.705 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.705 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.705 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.705 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.706 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.706 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.706 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.706 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.706 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.706 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.706 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.706 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.707 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.707 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.707 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.707 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.707 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.707 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.708 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.708 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.708 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.708 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.708 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.708 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.708 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.709 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.709 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.709 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.709 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.709 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.709 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.709 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.710 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.710 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.710 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.710 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.710 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.710 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.710 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.711 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.711 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.711 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.711 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.711 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.711 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.711 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.712 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.712 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.712 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.712 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.712 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.712 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.712 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.713 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.713 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.713 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.713 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.713 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.713 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.713 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.714 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.714 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.714 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.714 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.714 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.714 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.714 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.715 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.715 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.715 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.715 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.715 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.715 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.715 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.716 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.716 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.716 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.716 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.716 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.716 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.716 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.717 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.717 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.717 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.717 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.717 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.717 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.717 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.718 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.718 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.718 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.718 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.718 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.718 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.719 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.719 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.719 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.719 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.719 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.719 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.719 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.720 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.720 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.720 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.720 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.720 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.720 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.721 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.721 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.721 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.721 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.721 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.721 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.721 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.722 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.722 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.722 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.722 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.722 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.722 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.722 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.722 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.723 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.723 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.723 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.723 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.723 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.723 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.723 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.724 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.724 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.724 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.724 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.724 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.724 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.724 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.725 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.725 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.725 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.725 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.725 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.725 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.725 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.726 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.726 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.726 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.726 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.726 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.726 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.726 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.727 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.727 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.727 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.727 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.727 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.727 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.727 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.728 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.728 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.728 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.728 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.728 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.728 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.728 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.728 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.729 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.729 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.729 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.729 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.729 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.729 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.729 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.730 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.730 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.730 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.730 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.730 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.730 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.730 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.731 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.731 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.731 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.731 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.731 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.731 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.731 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.731 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.732 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.732 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.732 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.732 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.732 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.732 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.732 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.733 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.733 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.733 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.733 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.733 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.733 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.733 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.734 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.734 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.734 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.734 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.734 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.734 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.734 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.735 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.735 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.735 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.735 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.735 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.735 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.735 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.736 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.736 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.736 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.736 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.736 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.736 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.736 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.737 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.737 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.737 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.737 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.737 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.738 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.738 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.738 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.738 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.738 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.738 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.739 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.739 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.739 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.739 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.739 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.739 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.739 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.740 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.740 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.740 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.740 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.740 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.740 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.740 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.741 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.741 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.741 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.741 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.742 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.742 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.742 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.742 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.742 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.742 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.743 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.743 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.743 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.743 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.744 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.744 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.744 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.744 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.744 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.745 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.745 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.745 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.745 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.745 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.746 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.746 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.746 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.746 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.747 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.747 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.747 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.747 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.747 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.748 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.748 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.748 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.748 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.748 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.748 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.749 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.749 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.749 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.749 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.749 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.750 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.750 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.750 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.750 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.750 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.751 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.751 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.751 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.751 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.751 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.752 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.752 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.752 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.752 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.752 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.753 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.753 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.753 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.753 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.753 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.754 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.754 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.754 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.754 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.754 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.754 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.755 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.755 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.755 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.755 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.756 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.756 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.756 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.756 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.756 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.757 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.757 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.757 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.757 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.757 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.757 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.758 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.758 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.758 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.758 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.758 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.759 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.759 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.759 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.759 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.759 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.760 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.760 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.760 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.760 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.760 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.761 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.761 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.761 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.761 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.761 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.762 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.762 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.762 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.762 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.762 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.763 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.763 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.763 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.763 232762 WARNING oslo_config.cfg [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 29 02:43:49 np0005539552 nova_compute[232758]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 29 02:43:49 np0005539552 nova_compute[232758]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 29 02:43:49 np0005539552 nova_compute[232758]: and ``live_migration_inbound_addr`` respectively.
Nov 29 02:43:49 np0005539552 nova_compute[232758]: ).  Its value may be silently ignored in the future.#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.764 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
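[editor's note] The warning above flags `live_migration_uri` as deprecated in favor of `live_migration_scheme` and `live_migration_inbound_addr`. A minimal sketch of the equivalent replacement in `nova.conf`, assuming the same `qemu+tls` transport seen in the logged value (the inbound address below is a hypothetical placeholder, not taken from this log):

```ini
[libvirt]
# Replaces: live_migration_uri = qemu+tls://%s/system
# "tls" yields a qemu+tls:// connection URI for native-TLS migration
live_migration_scheme = tls
# Hypothetical example; set to this compute host's migration network address
live_migration_inbound_addr = 192.0.2.10
```

With both options set, nova constructs the migration target URI itself, so the deprecated option can be dropped from the config.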
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.764 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.764 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.764 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.764 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.765 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.765 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.765 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.765 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.766 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.766 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.766 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.766 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.766 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.767 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.767 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.767 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.767 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.767 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.rbd_secret_uuid        = b66774a7-56d9-5535-bd8c-681234404870 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.768 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.768 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.768 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.768 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.768 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.769 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.769 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.769 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.769 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.769 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.770 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.770 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.770 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.770 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.770 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.771 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.771 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.771 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.771 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.772 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.772 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.772 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.772 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.772 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.772 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.773 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.773 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.773 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.773 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.773 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.774 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.774 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.774 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.774 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.774 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.775 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.775 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.775 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.775 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.775 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.776 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.776 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.776 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.776 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.776 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.776 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.776 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.777 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.777 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.777 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.777 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.777 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.777 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.777 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.777 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.778 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.778 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.778 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.778 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.778 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.778 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.778 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.779 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.779 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.779 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.779 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.779 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.779 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.780 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.780 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.780 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.780 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.780 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.781 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.781 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.781 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.781 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.782 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.782 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.782 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.782 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.782 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.783 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.783 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.783 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.783 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.783 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.784 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.784 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.784 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.784 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.784 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.785 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.785 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.785 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.785 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.785 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.786 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.786 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.786 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.786 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.786 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.787 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.787 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.787 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.787 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.787 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.788 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.788 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.788 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.788 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.788 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.789 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.789 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.789 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.789 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.789 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.790 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.790 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.790 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.790 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.790 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.791 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.791 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.791 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.791 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.791 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.792 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.792 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.792 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.792 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.792 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.793 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.793 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.793 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.793 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.793 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.794 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.794 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.794 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.794 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.794 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.795 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.795 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.795 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.795 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.795 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.796 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.796 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.796 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.796 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.796 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.797 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.797 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.797 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.797 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.797 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.798 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.798 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.798 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.798 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.798 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.799 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.799 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.799 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.799 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.800 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.800 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.800 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.800 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.800 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.800 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.801 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.801 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.801 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.801 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.801 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.802 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.802 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.802 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.802 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.803 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.803 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.803 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.803 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.803 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.804 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.804 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.804 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.804 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.804 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.805 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.805 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.805 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.805 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.805 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.806 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.806 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.806 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.806 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.806 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.806 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.807 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.807 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.807 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.807 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.808 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.808 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.808 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.808 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.808 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.808 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.809 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.809 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.809 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.809 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.809 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.810 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.810 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.810 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.810 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.810 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.811 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.811 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.811 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.811 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.811 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.812 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.812 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.812 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.812 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.813 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.813 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.813 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vnc.server_proxyclient_address = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.813 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.813 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.814 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.814 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.814 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.814 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.814 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.815 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.815 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.815 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.815 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.815 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.815 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.816 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.816 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.816 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.816 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.816 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.817 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.817 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.817 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.817 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.817 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.818 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.818 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.818 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.818 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.818 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.819 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.819 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.819 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.819 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.819 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.820 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.820 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.820 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.820 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.820 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.821 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.821 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.821 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.821 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.821 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.822 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.822 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.822 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.822 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.822 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.823 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.823 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.823 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.823 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.823 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.824 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.824 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.824 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.824 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.824 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.825 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.825 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.825 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.825 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.825 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.826 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.826 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.826 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.826 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.826 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.827 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.827 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.827 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.827 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.827 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.828 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.828 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.828 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.828 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.828 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.829 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.829 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.829 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.829 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.829 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.830 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.830 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.830 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.830 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.830 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.831 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.831 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.831 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.831 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.831 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.832 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.832 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.832 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.832 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.832 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.833 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.833 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.833 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.833 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.833 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.834 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.834 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.834 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.834 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.834 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.835 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.835 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.835 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.835 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.835 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.836 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.836 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.836 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.836 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.836 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.837 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.837 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.837 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.837 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.837 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.837 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.838 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.838 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.838 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.838 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.838 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.839 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.839 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.839 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.839 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.839 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.840 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.840 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.840 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.840 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.840 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.841 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.841 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.841 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.841 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.841 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.842 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.842 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.842 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.842 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.842 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.843 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.843 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.843 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.843 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.843 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.844 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.844 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.844 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.844 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.844 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.845 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.845 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.845 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.845 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.845 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.846 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.846 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.846 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.846 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.846 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.846 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.847 232762 DEBUG oslo_service.service [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 29 02:43:49 np0005539552 nova_compute[232758]: 2025-11-29 07:43:49.848 232762 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Nov 29 02:43:50 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:43:50 np0005539552 nova_compute[232758]: 2025-11-29 07:43:50.055 232762 DEBUG nova.virt.libvirt.host [None req-45d57452-898d-4537-909b-e06bec8d5b15 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Nov 29 02:43:50 np0005539552 nova_compute[232758]: 2025-11-29 07:43:50.056 232762 DEBUG nova.virt.libvirt.host [None req-45d57452-898d-4537-909b-e06bec8d5b15 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Nov 29 02:43:50 np0005539552 nova_compute[232758]: 2025-11-29 07:43:50.057 232762 DEBUG nova.virt.libvirt.host [None req-45d57452-898d-4537-909b-e06bec8d5b15 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Nov 29 02:43:50 np0005539552 nova_compute[232758]: 2025-11-29 07:43:50.057 232762 DEBUG nova.virt.libvirt.host [None req-45d57452-898d-4537-909b-e06bec8d5b15 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Nov 29 02:43:50 np0005539552 systemd[1]: Starting libvirt QEMU daemon...
Nov 29 02:43:50 np0005539552 systemd[1]: Started libvirt QEMU daemon.
Nov 29 02:43:50 np0005539552 nova_compute[232758]: 2025-11-29 07:43:50.136 232762 DEBUG nova.virt.libvirt.host [None req-45d57452-898d-4537-909b-e06bec8d5b15 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f93f705c5e0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Nov 29 02:43:50 np0005539552 nova_compute[232758]: 2025-11-29 07:43:50.138 232762 DEBUG nova.virt.libvirt.host [None req-45d57452-898d-4537-909b-e06bec8d5b15 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f93f705c5e0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Nov 29 02:43:50 np0005539552 nova_compute[232758]: 2025-11-29 07:43:50.140 232762 INFO nova.virt.libvirt.driver [None req-45d57452-898d-4537-909b-e06bec8d5b15 - - - - - -] Connection event '1' reason 'None'#033[00m
Nov 29 02:43:50 np0005539552 python3.9[233097]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:43:50 np0005539552 nova_compute[232758]: 2025-11-29 07:43:50.268 232762 WARNING nova.virt.libvirt.driver [None req-45d57452-898d-4537-909b-e06bec8d5b15 - - - - - -] Cannot update service status on host "compute-2.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-2.ctlplane.example.com could not be found.#033[00m
Nov 29 02:43:50 np0005539552 nova_compute[232758]: 2025-11-29 07:43:50.268 232762 DEBUG nova.virt.libvirt.volume.mount [None req-45d57452-898d-4537-909b-e06bec8d5b15 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Nov 29 02:43:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:50.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:50.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:50 np0005539552 nova_compute[232758]: 2025-11-29 07:43:50.994 232762 INFO nova.virt.libvirt.host [None req-45d57452-898d-4537-909b-e06bec8d5b15 - - - - - -] Libvirt host capabilities <capabilities>
Nov 29 02:43:50 np0005539552 nova_compute[232758]: 
Nov 29 02:43:50 np0005539552 nova_compute[232758]:  <host>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:    <uuid>6fbde64a-d978-4f1a-a29d-a77e1f5a1987</uuid>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:    <cpu>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <arch>x86_64</arch>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <model>EPYC-Rome-v4</model>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <vendor>AMD</vendor>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <microcode version='16777317'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <signature family='23' model='49' stepping='0'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <maxphysaddr mode='emulate' bits='40'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <feature name='x2apic'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <feature name='tsc-deadline'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <feature name='osxsave'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <feature name='hypervisor'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <feature name='tsc_adjust'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <feature name='spec-ctrl'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <feature name='stibp'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <feature name='arch-capabilities'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <feature name='ssbd'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <feature name='cmp_legacy'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <feature name='topoext'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <feature name='virt-ssbd'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <feature name='lbrv'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <feature name='tsc-scale'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <feature name='vmcb-clean'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <feature name='pause-filter'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <feature name='pfthreshold'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <feature name='svme-addr-chk'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <feature name='rdctl-no'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <feature name='skip-l1dfl-vmentry'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <feature name='mds-no'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <feature name='pschange-mc-no'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <pages unit='KiB' size='4'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <pages unit='KiB' size='2048'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <pages unit='KiB' size='1048576'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:    </cpu>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:    <power_management>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <suspend_mem/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:    </power_management>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:    <iommu support='no'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:    <migration_features>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <live/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <uri_transports>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:        <uri_transport>tcp</uri_transport>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:        <uri_transport>rdma</uri_transport>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      </uri_transports>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:    </migration_features>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:    <topology>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <cells num='1'>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:        <cell id='0'>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:          <memory unit='KiB'>7864316</memory>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:          <pages unit='KiB' size='4'>1966079</pages>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:          <pages unit='KiB' size='2048'>0</pages>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:          <pages unit='KiB' size='1048576'>0</pages>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:          <distances>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:            <sibling id='0' value='10'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:          </distances>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:          <cpus num='8'>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:          </cpus>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:        </cell>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      </cells>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:    </topology>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:    <cache>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:    </cache>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:    <secmodel>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <model>selinux</model>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <doi>0</doi>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:    </secmodel>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:    <secmodel>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <model>dac</model>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <doi>0</doi>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <baselabel type='kvm'>+107:+107</baselabel>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <baselabel type='qemu'>+107:+107</baselabel>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:    </secmodel>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:  </host>
Nov 29 02:43:50 np0005539552 nova_compute[232758]: 
Nov 29 02:43:50 np0005539552 nova_compute[232758]:  <guest>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:    <os_type>hvm</os_type>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:    <arch name='i686'>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <wordsize>32</wordsize>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 02:43:50 np0005539552 nova_compute[232758]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <domain type='qemu'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <domain type='kvm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </arch>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <features>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <pae/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <nonpae/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <acpi default='on' toggle='yes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <apic default='on' toggle='no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <cpuselection/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <deviceboot/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <disksnapshot default='on' toggle='no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <externalSnapshot/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </features>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  </guest>
Nov 29 02:43:51 np0005539552 nova_compute[232758]: 
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  <guest>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <os_type>hvm</os_type>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <arch name='x86_64'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <wordsize>64</wordsize>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <domain type='qemu'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <domain type='kvm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </arch>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <features>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <acpi default='on' toggle='yes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <apic default='on' toggle='no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <cpuselection/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <deviceboot/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <disksnapshot default='on' toggle='no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <externalSnapshot/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </features>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  </guest>
Nov 29 02:43:51 np0005539552 nova_compute[232758]: 
Nov 29 02:43:51 np0005539552 nova_compute[232758]: </capabilities>
Nov 29 02:43:51 np0005539552 nova_compute[232758]: 2025-11-29 07:43:50.999 232762 DEBUG nova.virt.libvirt.host [None req-45d57452-898d-4537-909b-e06bec8d5b15 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 29 02:43:51 np0005539552 nova_compute[232758]: 2025-11-29 07:43:51.014 232762 DEBUG nova.virt.libvirt.host [None req-45d57452-898d-4537-909b-e06bec8d5b15 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 29 02:43:51 np0005539552 nova_compute[232758]: <domainCapabilities>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  <domain>kvm</domain>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  <arch>i686</arch>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  <vcpu max='4096'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  <iothreads supported='yes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  <os supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <enum name='firmware'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <loader supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='type'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>rom</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>pflash</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='readonly'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>yes</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>no</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='secure'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>no</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </loader>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  </os>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  <cpu>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <mode name='host-passthrough' supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='hostPassthroughMigratable'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>on</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>off</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </mode>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <mode name='maximum' supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='maximumMigratable'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>on</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>off</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </mode>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <mode name='host-model' supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <vendor>AMD</vendor>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='x2apic'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='hypervisor'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='stibp'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='ssbd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='overflow-recov'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='succor'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='ibrs'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='lbrv'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='tsc-scale'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='flushbyasid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='pause-filter'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='pfthreshold'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='disable' name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </mode>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <mode name='custom' supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Broadwell'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Broadwell-IBRS'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Broadwell-noTSX'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Broadwell-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Broadwell-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Broadwell-v3'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Broadwell-v4'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Cascadelake-Server'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Cooperlake'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Cooperlake-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Cooperlake-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Denverton'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='mpx'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Denverton-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='mpx'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Denverton-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Denverton-v3'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Dhyana-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='EPYC-Genoa'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amd-psfd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='auto-ibrs'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='stibp-always-on'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amd-psfd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='auto-ibrs'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='stibp-always-on'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='EPYC-Milan'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='EPYC-Milan-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='EPYC-Milan-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amd-psfd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='stibp-always-on'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='EPYC-Rome'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='EPYC-Rome-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='EPYC-Rome-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='EPYC-Rome-v3'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='EPYC-v3'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='EPYC-v4'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='GraniteRapids'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-fp16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-int8'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-tile'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-fp16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fbsdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrc'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrs'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fzrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='mcdt-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pbrsb-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='prefetchiti'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='psdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='serialize'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xfd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='GraniteRapids-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-fp16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-int8'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-tile'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-fp16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fbsdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrc'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrs'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fzrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='mcdt-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pbrsb-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='prefetchiti'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='psdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='serialize'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xfd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='GraniteRapids-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-fp16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-int8'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-tile'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx10'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx10-128'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx10-256'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx10-512'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-fp16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='cldemote'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fbsdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrc'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrs'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fzrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='mcdt-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdir64b'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdiri'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pbrsb-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='prefetchiti'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='psdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='serialize'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ss'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xfd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Haswell'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Haswell-IBRS'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Haswell-noTSX'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Haswell-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Haswell-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Haswell-v3'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Haswell-v4'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Icelake-Server'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Icelake-Server-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Icelake-Server-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Icelake-Server-v3'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Icelake-Server-v4'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Icelake-Server-v5'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Icelake-Server-v6'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Icelake-Server-v7'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='IvyBridge'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='IvyBridge-IBRS'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='IvyBridge-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='IvyBridge-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='KnightsMill'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-4fmaps'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-4vnniw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512er'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512pf'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ss'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='KnightsMill-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-4fmaps'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-4vnniw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512er'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512pf'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ss'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Opteron_G4'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fma4'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xop'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Opteron_G4-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fma4'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xop'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Opteron_G5'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fma4'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='tbm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xop'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Opteron_G5-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fma4'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='tbm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xop'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='SapphireRapids'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-int8'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-tile'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-fp16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrc'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrs'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fzrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='serialize'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xfd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='SapphireRapids-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-int8'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-tile'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-fp16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrc'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrs'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fzrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='serialize'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xfd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='SapphireRapids-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-int8'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-tile'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-fp16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fbsdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrc'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrs'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fzrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='psdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='serialize'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xfd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='SapphireRapids-v3'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-int8'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-tile'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-fp16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='cldemote'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fbsdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrc'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrs'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fzrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdir64b'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdiri'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='psdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='serialize'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ss'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xfd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='SierraForest'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-ne-convert'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-vnni-int8'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='cmpccxadd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fbsdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrs'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='mcdt-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pbrsb-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='psdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='serialize'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='SierraForest-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-ne-convert'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-vnni-int8'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='cmpccxadd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fbsdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrs'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='mcdt-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pbrsb-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='psdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='serialize'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Client'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Client-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Client-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Client-v3'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Client-v4'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Server'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Server-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Server-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Server-v3'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Server-v4'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Server-v5'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Snowridge'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='cldemote'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='core-capability'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdir64b'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdiri'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='mpx'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='split-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Snowridge-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='cldemote'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='core-capability'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdir64b'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdiri'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='mpx'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='split-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Snowridge-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='cldemote'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='core-capability'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdir64b'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdiri'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='split-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Snowridge-v3'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='cldemote'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='core-capability'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdir64b'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdiri'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='split-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Snowridge-v4'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='cldemote'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdir64b'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdiri'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='athlon'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='3dnow'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='3dnowext'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='athlon-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='3dnow'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='3dnowext'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='core2duo'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ss'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='core2duo-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ss'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='coreduo'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ss'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='coreduo-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ss'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='n270'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ss'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='n270-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ss'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='phenom'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='3dnow'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='3dnowext'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='phenom-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='3dnow'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='3dnowext'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </mode>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  </cpu>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  <memoryBacking supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <enum name='sourceType'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <value>file</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <value>anonymous</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <value>memfd</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  </memoryBacking>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  <devices>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <disk supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='diskDevice'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>disk</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>cdrom</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>floppy</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>lun</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='bus'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>fdc</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>scsi</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>virtio</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>usb</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>sata</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='model'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>virtio</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>virtio-transitional</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>virtio-non-transitional</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </disk>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <graphics supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='type'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>vnc</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>egl-headless</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>dbus</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </graphics>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <video supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='modelType'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>vga</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>cirrus</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>virtio</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>none</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>bochs</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>ramfb</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </video>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <hostdev supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='mode'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>subsystem</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='startupPolicy'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>default</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>mandatory</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>requisite</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>optional</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='subsysType'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>usb</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>pci</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>scsi</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='capsType'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='pciBackend'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </hostdev>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <rng supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='model'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>virtio</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>virtio-transitional</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>virtio-non-transitional</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='backendModel'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>random</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>egd</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>builtin</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </rng>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <filesystem supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='driverType'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>path</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>handle</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>virtiofs</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </filesystem>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <tpm supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='model'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>tpm-tis</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>tpm-crb</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='backendModel'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>emulator</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>external</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='backendVersion'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>2.0</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </tpm>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <redirdev supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='bus'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>usb</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </redirdev>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <channel supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='type'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>pty</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>unix</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </channel>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <crypto supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='model'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='type'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>qemu</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='backendModel'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>builtin</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </crypto>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <interface supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='backendType'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>default</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>passt</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </interface>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <panic supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='model'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>isa</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>hyperv</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </panic>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <console supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='type'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>null</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>vc</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>pty</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>dev</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>file</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>pipe</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>stdio</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>udp</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>tcp</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>unix</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>qemu-vdagent</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>dbus</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </console>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  </devices>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  <features>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <gic supported='no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <vmcoreinfo supported='yes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <genid supported='yes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <backingStoreInput supported='yes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <backup supported='yes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <async-teardown supported='yes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <ps2 supported='yes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <sev supported='no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <sgx supported='no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <hyperv supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='features'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>relaxed</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>vapic</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>spinlocks</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>vpindex</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>runtime</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>synic</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>stimer</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>reset</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>vendor_id</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>frequencies</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>reenlightenment</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>tlbflush</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>ipi</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>avic</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>emsr_bitmap</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>xmm_input</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <defaults>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <spinlocks>4095</spinlocks>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <stimer_direct>on</stimer_direct>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </defaults>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </hyperv>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <launchSecurity supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='sectype'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>tdx</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </launchSecurity>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  </features>
Nov 29 02:43:51 np0005539552 nova_compute[232758]: </domainCapabilities>
Nov 29 02:43:51 np0005539552 nova_compute[232758]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 29 02:43:51 np0005539552 nova_compute[232758]: 2025-11-29 07:43:51.019 232762 DEBUG nova.virt.libvirt.host [None req-45d57452-898d-4537-909b-e06bec8d5b15 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 29 02:43:51 np0005539552 nova_compute[232758]: <domainCapabilities>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  <domain>kvm</domain>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  <arch>i686</arch>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  <vcpu max='240'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  <iothreads supported='yes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  <os supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <enum name='firmware'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <loader supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='type'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>rom</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>pflash</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='readonly'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>yes</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>no</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='secure'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>no</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </loader>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  </os>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  <cpu>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <mode name='host-passthrough' supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='hostPassthroughMigratable'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>on</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>off</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </mode>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <mode name='maximum' supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='maximumMigratable'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>on</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>off</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </mode>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <mode name='host-model' supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <vendor>AMD</vendor>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='x2apic'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='hypervisor'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='stibp'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='ssbd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='overflow-recov'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='succor'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='ibrs'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='lbrv'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='tsc-scale'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='flushbyasid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='pause-filter'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='pfthreshold'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='disable' name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </mode>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <mode name='custom' supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Broadwell'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Broadwell-IBRS'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Broadwell-noTSX'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Broadwell-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Broadwell-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Broadwell-v3'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Broadwell-v4'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Cascadelake-Server'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Cooperlake'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Cooperlake-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Cooperlake-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Denverton'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='mpx'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Denverton-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='mpx'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Denverton-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Denverton-v3'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Dhyana-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='EPYC-Genoa'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amd-psfd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='auto-ibrs'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='stibp-always-on'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amd-psfd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='auto-ibrs'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='stibp-always-on'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='EPYC-Milan'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='EPYC-Milan-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='EPYC-Milan-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amd-psfd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='stibp-always-on'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='EPYC-Rome'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='EPYC-Rome-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='EPYC-Rome-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='EPYC-Rome-v3'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='EPYC-v3'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='EPYC-v4'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='GraniteRapids'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-fp16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-int8'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-tile'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-fp16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fbsdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrc'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrs'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fzrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='mcdt-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pbrsb-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='prefetchiti'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='psdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='serialize'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xfd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='GraniteRapids-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-fp16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-int8'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-tile'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-fp16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fbsdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrc'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrs'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fzrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='mcdt-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pbrsb-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='prefetchiti'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='psdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='serialize'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xfd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='GraniteRapids-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-fp16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-int8'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-tile'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx10'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx10-128'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx10-256'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx10-512'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-fp16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='cldemote'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fbsdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrc'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrs'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fzrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='mcdt-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdir64b'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdiri'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pbrsb-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='prefetchiti'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='psdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='serialize'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ss'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xfd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Haswell'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Haswell-IBRS'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Haswell-noTSX'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Haswell-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Haswell-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Haswell-v3'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Haswell-v4'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Icelake-Server'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Icelake-Server-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Icelake-Server-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Icelake-Server-v3'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Icelake-Server-v4'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Icelake-Server-v5'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Icelake-Server-v6'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Icelake-Server-v7'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='IvyBridge'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='IvyBridge-IBRS'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='IvyBridge-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='IvyBridge-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='KnightsMill'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-4fmaps'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-4vnniw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512er'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512pf'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ss'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='KnightsMill-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-4fmaps'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-4vnniw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512er'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512pf'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ss'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Opteron_G4'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fma4'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xop'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Opteron_G4-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fma4'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xop'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Opteron_G5'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fma4'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='tbm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xop'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Opteron_G5-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fma4'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='tbm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xop'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='SapphireRapids'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-int8'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-tile'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-fp16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrc'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrs'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fzrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='serialize'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xfd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='SapphireRapids-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-int8'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-tile'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-fp16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrc'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrs'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fzrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='serialize'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xfd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='SapphireRapids-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-int8'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-tile'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-fp16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fbsdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrc'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrs'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fzrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='psdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='serialize'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xfd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='SapphireRapids-v3'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-int8'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-tile'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-fp16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='cldemote'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fbsdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrc'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrs'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fzrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdir64b'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdiri'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='psdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='serialize'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ss'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xfd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='SierraForest'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-ne-convert'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-vnni-int8'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='cmpccxadd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fbsdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrs'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='mcdt-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pbrsb-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='psdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='serialize'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='SierraForest-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-ne-convert'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-vnni-int8'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='cmpccxadd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fbsdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrs'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='mcdt-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pbrsb-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='psdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='serialize'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Client'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Client-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Client-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Client-v3'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Client-v4'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Server'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Server-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Server-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Server-v3'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Server-v4'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Server-v5'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Snowridge'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='cldemote'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='core-capability'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdir64b'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdiri'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='mpx'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='split-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Snowridge-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='cldemote'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='core-capability'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdir64b'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdiri'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='mpx'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='split-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Snowridge-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='cldemote'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='core-capability'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdir64b'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdiri'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='split-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Snowridge-v3'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='cldemote'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='core-capability'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdir64b'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdiri'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='split-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Snowridge-v4'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='cldemote'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdir64b'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdiri'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='athlon'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='3dnow'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='3dnowext'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='athlon-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='3dnow'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='3dnowext'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='core2duo'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ss'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='core2duo-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ss'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='coreduo'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ss'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='coreduo-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ss'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='n270'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ss'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='n270-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ss'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='phenom'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='3dnow'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='3dnowext'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='phenom-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='3dnow'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='3dnowext'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </mode>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  </cpu>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  <memoryBacking supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <enum name='sourceType'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <value>file</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <value>anonymous</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <value>memfd</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  </memoryBacking>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  <devices>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <disk supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='diskDevice'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>disk</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>cdrom</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>floppy</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>lun</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='bus'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>ide</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>fdc</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>scsi</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>virtio</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>usb</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>sata</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='model'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>virtio</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>virtio-transitional</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>virtio-non-transitional</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </disk>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <graphics supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='type'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>vnc</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>egl-headless</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>dbus</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </graphics>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <video supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='modelType'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>vga</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>cirrus</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>virtio</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>none</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>bochs</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>ramfb</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </video>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <hostdev supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='mode'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>subsystem</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='startupPolicy'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>default</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>mandatory</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>requisite</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>optional</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='subsysType'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>usb</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>pci</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>scsi</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='capsType'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='pciBackend'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </hostdev>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <rng supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='model'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>virtio</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>virtio-transitional</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>virtio-non-transitional</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='backendModel'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>random</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>egd</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>builtin</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </rng>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <filesystem supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='driverType'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>path</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>handle</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>virtiofs</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </filesystem>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <tpm supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='model'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>tpm-tis</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>tpm-crb</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='backendModel'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>emulator</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>external</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='backendVersion'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>2.0</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </tpm>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <redirdev supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='bus'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>usb</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </redirdev>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <channel supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='type'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>pty</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>unix</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </channel>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <crypto supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='model'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='type'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>qemu</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='backendModel'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>builtin</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </crypto>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <interface supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='backendType'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>default</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>passt</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </interface>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <panic supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='model'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>isa</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>hyperv</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </panic>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <console supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='type'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>null</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>vc</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>pty</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>dev</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>file</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>pipe</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>stdio</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>udp</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>tcp</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>unix</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>qemu-vdagent</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>dbus</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </console>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  </devices>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  <features>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <gic supported='no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <vmcoreinfo supported='yes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <genid supported='yes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <backingStoreInput supported='yes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <backup supported='yes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <async-teardown supported='yes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <ps2 supported='yes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <sev supported='no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <sgx supported='no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <hyperv supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='features'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>relaxed</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>vapic</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>spinlocks</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>vpindex</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>runtime</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>synic</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>stimer</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>reset</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>vendor_id</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>frequencies</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>reenlightenment</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>tlbflush</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>ipi</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>avic</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>emsr_bitmap</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>xmm_input</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <defaults>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <spinlocks>4095</spinlocks>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <stimer_direct>on</stimer_direct>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </defaults>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </hyperv>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <launchSecurity supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='sectype'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>tdx</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </launchSecurity>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  </features>
Nov 29 02:43:51 np0005539552 nova_compute[232758]: </domainCapabilities>
Nov 29 02:43:51 np0005539552 nova_compute[232758]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 02:43:51 np0005539552 nova_compute[232758]: 2025-11-29 07:43:51.046 232762 DEBUG nova.virt.libvirt.host [None req-45d57452-898d-4537-909b-e06bec8d5b15 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 29 02:43:51 np0005539552 nova_compute[232758]: 2025-11-29 07:43:51.050 232762 DEBUG nova.virt.libvirt.host [None req-45d57452-898d-4537-909b-e06bec8d5b15 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 29 02:43:51 np0005539552 nova_compute[232758]: <domainCapabilities>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  <domain>kvm</domain>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  <arch>x86_64</arch>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  <vcpu max='4096'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  <iothreads supported='yes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  <os supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <enum name='firmware'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <value>efi</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <loader supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='type'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>rom</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>pflash</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='readonly'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>yes</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>no</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='secure'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>yes</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>no</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </loader>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  </os>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  <cpu>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <mode name='host-passthrough' supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='hostPassthroughMigratable'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>on</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>off</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </mode>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <mode name='maximum' supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='maximumMigratable'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>on</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>off</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </mode>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <mode name='host-model' supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <vendor>AMD</vendor>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='x2apic'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='hypervisor'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='stibp'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='ssbd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='overflow-recov'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='succor'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='ibrs'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='lbrv'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='tsc-scale'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='flushbyasid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='pause-filter'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='pfthreshold'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='disable' name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </mode>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <mode name='custom' supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Broadwell'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Broadwell-IBRS'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Broadwell-noTSX'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Broadwell-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Broadwell-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Broadwell-v3'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Broadwell-v4'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Cascadelake-Server'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Cooperlake'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Cooperlake-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Cooperlake-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Denverton'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='mpx'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Denverton-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='mpx'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Denverton-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Denverton-v3'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Dhyana-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='EPYC-Genoa'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amd-psfd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='auto-ibrs'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='stibp-always-on'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amd-psfd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='auto-ibrs'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='stibp-always-on'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='EPYC-Milan'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='EPYC-Milan-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='EPYC-Milan-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amd-psfd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='stibp-always-on'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='EPYC-Rome'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='EPYC-Rome-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='EPYC-Rome-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='EPYC-Rome-v3'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='EPYC-v3'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='EPYC-v4'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='GraniteRapids'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-fp16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-int8'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-tile'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-fp16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fbsdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrc'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrs'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fzrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='mcdt-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pbrsb-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='prefetchiti'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='psdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='serialize'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xfd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='GraniteRapids-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-fp16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-int8'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-tile'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-fp16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fbsdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrc'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrs'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fzrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='mcdt-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pbrsb-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='prefetchiti'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='psdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='serialize'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xfd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='GraniteRapids-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-fp16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-int8'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-tile'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx10'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx10-128'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx10-256'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx10-512'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-fp16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='cldemote'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fbsdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrc'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrs'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fzrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='mcdt-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdir64b'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdiri'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pbrsb-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='prefetchiti'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='psdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='serialize'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ss'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xfd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Haswell'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Haswell-IBRS'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Haswell-noTSX'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Haswell-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Haswell-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Haswell-v3'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Haswell-v4'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Icelake-Server'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Icelake-Server-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Icelake-Server-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Icelake-Server-v3'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Icelake-Server-v4'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Icelake-Server-v5'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Icelake-Server-v6'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Icelake-Server-v7'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='IvyBridge'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='IvyBridge-IBRS'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='IvyBridge-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='IvyBridge-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='KnightsMill'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-4fmaps'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-4vnniw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512er'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512pf'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ss'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='KnightsMill-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-4fmaps'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-4vnniw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512er'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512pf'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ss'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Opteron_G4'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fma4'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xop'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Opteron_G4-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fma4'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xop'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Opteron_G5'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fma4'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='tbm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xop'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Opteron_G5-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fma4'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='tbm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xop'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='SapphireRapids'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-int8'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-tile'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-fp16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrc'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrs'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fzrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='serialize'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xfd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='SapphireRapids-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-int8'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-tile'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-fp16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrc'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrs'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fzrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='serialize'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xfd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='SapphireRapids-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-int8'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-tile'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-fp16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fbsdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrc'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrs'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fzrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='psdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='serialize'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xfd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='SapphireRapids-v3'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-int8'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-tile'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-fp16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='cldemote'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fbsdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrc'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrs'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fzrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdir64b'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdiri'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='psdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='serialize'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ss'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xfd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='SierraForest'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-ne-convert'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-vnni-int8'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='cmpccxadd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fbsdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrs'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='mcdt-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pbrsb-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='psdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='serialize'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='SierraForest-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-ne-convert'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-vnni-int8'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='cmpccxadd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fbsdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrs'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='mcdt-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pbrsb-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='psdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='serialize'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Client'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Client-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Client-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Client-v3'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Client-v4'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Server'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Server-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Server-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Server-v3'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Server-v4'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Server-v5'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Snowridge'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='cldemote'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='core-capability'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdir64b'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdiri'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='mpx'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='split-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Snowridge-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='cldemote'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='core-capability'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdir64b'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdiri'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='mpx'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='split-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Snowridge-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='cldemote'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='core-capability'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdir64b'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdiri'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='split-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Snowridge-v3'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='cldemote'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='core-capability'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdir64b'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdiri'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='split-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Snowridge-v4'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='cldemote'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdir64b'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdiri'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='athlon'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='3dnow'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='3dnowext'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='athlon-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='3dnow'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='3dnowext'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='core2duo'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ss'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='core2duo-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ss'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='coreduo'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ss'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='coreduo-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ss'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='n270'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ss'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='n270-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ss'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='phenom'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='3dnow'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='3dnowext'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='phenom-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='3dnow'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='3dnowext'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </mode>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  </cpu>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  <memoryBacking supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <enum name='sourceType'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <value>file</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <value>anonymous</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <value>memfd</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  </memoryBacking>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  <devices>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <disk supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='diskDevice'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>disk</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>cdrom</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>floppy</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>lun</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='bus'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>fdc</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>scsi</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>virtio</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>usb</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>sata</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='model'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>virtio</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>virtio-transitional</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>virtio-non-transitional</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </disk>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <graphics supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='type'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>vnc</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>egl-headless</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>dbus</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </graphics>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <video supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='modelType'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>vga</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>cirrus</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>virtio</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>none</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>bochs</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>ramfb</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </video>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <hostdev supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='mode'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>subsystem</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='startupPolicy'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>default</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>mandatory</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>requisite</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>optional</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='subsysType'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>usb</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>pci</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>scsi</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='capsType'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='pciBackend'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </hostdev>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <rng supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='model'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>virtio</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>virtio-transitional</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>virtio-non-transitional</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='backendModel'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>random</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>egd</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>builtin</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </rng>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <filesystem supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='driverType'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>path</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>handle</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>virtiofs</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </filesystem>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <tpm supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='model'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>tpm-tis</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>tpm-crb</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='backendModel'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>emulator</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>external</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='backendVersion'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>2.0</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </tpm>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <redirdev supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='bus'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>usb</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </redirdev>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <channel supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='type'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>pty</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>unix</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </channel>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <crypto supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='model'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='type'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>qemu</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='backendModel'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>builtin</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </crypto>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <interface supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='backendType'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>default</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>passt</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </interface>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <panic supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='model'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>isa</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>hyperv</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </panic>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <console supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='type'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>null</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>vc</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>pty</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>dev</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>file</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>pipe</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>stdio</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>udp</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>tcp</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>unix</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>qemu-vdagent</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>dbus</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </console>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  </devices>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  <features>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <gic supported='no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <vmcoreinfo supported='yes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <genid supported='yes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <backingStoreInput supported='yes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <backup supported='yes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <async-teardown supported='yes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <ps2 supported='yes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <sev supported='no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <sgx supported='no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <hyperv supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='features'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>relaxed</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>vapic</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>spinlocks</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>vpindex</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>runtime</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>synic</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>stimer</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>reset</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>vendor_id</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>frequencies</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>reenlightenment</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>tlbflush</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>ipi</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>avic</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>emsr_bitmap</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>xmm_input</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <defaults>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <spinlocks>4095</spinlocks>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <stimer_direct>on</stimer_direct>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </defaults>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </hyperv>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <launchSecurity supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='sectype'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>tdx</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </launchSecurity>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  </features>
Nov 29 02:43:51 np0005539552 nova_compute[232758]: </domainCapabilities>
Nov 29 02:43:51 np0005539552 nova_compute[232758]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 02:43:51 np0005539552 nova_compute[232758]: 2025-11-29 07:43:51.129 232762 DEBUG nova.virt.libvirt.host [None req-45d57452-898d-4537-909b-e06bec8d5b15 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 29 02:43:51 np0005539552 nova_compute[232758]: <domainCapabilities>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  <domain>kvm</domain>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  <arch>x86_64</arch>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  <vcpu max='240'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  <iothreads supported='yes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  <os supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <enum name='firmware'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <loader supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='type'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>rom</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>pflash</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='readonly'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>yes</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>no</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='secure'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>no</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </loader>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  </os>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  <cpu>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <mode name='host-passthrough' supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='hostPassthroughMigratable'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>on</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>off</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </mode>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <mode name='maximum' supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='maximumMigratable'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>on</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>off</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </mode>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <mode name='host-model' supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <vendor>AMD</vendor>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='x2apic'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='hypervisor'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='stibp'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='ssbd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='overflow-recov'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='succor'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='ibrs'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='lbrv'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='tsc-scale'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='flushbyasid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='pause-filter'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='pfthreshold'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <feature policy='disable' name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </mode>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <mode name='custom' supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Broadwell'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Broadwell-IBRS'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Broadwell-noTSX'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Broadwell-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Broadwell-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Broadwell-v3'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Broadwell-v4'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Cascadelake-Server'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Cooperlake'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Cooperlake-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Cooperlake-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Denverton'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='mpx'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Denverton-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='mpx'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Denverton-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Denverton-v3'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Dhyana-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='EPYC-Genoa'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amd-psfd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='auto-ibrs'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='stibp-always-on'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amd-psfd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='auto-ibrs'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='stibp-always-on'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='EPYC-Milan'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='EPYC-Milan-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='EPYC-Milan-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amd-psfd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='stibp-always-on'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='EPYC-Rome'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='EPYC-Rome-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='EPYC-Rome-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='EPYC-Rome-v3'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='EPYC-v3'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='EPYC-v4'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='GraniteRapids'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-fp16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-int8'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-tile'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-fp16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fbsdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrc'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrs'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fzrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='mcdt-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pbrsb-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='prefetchiti'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='psdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='serialize'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xfd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='GraniteRapids-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-fp16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-int8'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-tile'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-fp16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fbsdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrc'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrs'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fzrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='mcdt-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pbrsb-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='prefetchiti'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='psdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='serialize'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xfd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='GraniteRapids-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-fp16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-int8'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-tile'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx10'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx10-128'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx10-256'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx10-512'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-fp16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='cldemote'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fbsdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrc'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrs'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fzrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='mcdt-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdir64b'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdiri'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pbrsb-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='prefetchiti'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='psdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='serialize'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ss'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xfd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Haswell'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Haswell-IBRS'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Haswell-noTSX'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Haswell-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Haswell-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Haswell-v3'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Haswell-v4'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Icelake-Server'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Icelake-Server-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Icelake-Server-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Icelake-Server-v3'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Icelake-Server-v4'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Icelake-Server-v5'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Icelake-Server-v6'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Icelake-Server-v7'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='IvyBridge'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='IvyBridge-IBRS'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='IvyBridge-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='IvyBridge-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='KnightsMill'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-4fmaps'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-4vnniw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512er'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512pf'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ss'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='KnightsMill-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-4fmaps'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-4vnniw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512er'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512pf'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ss'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Opteron_G4'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fma4'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xop'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Opteron_G4-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fma4'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xop'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Opteron_G5'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fma4'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='tbm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xop'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Opteron_G5-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fma4'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='tbm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xop'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='SapphireRapids'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-int8'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-tile'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-fp16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrc'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrs'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fzrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='serialize'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xfd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='SapphireRapids-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-int8'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-tile'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-fp16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrc'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrs'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fzrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='serialize'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xfd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='SapphireRapids-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-int8'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-tile'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-fp16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fbsdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrc'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrs'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fzrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='psdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='serialize'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xfd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='SapphireRapids-v3'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-int8'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='amx-tile'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-bf16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-fp16'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bitalg'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='cldemote'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fbsdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrc'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrs'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fzrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='la57'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdir64b'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdiri'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='psdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='serialize'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ss'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='taa-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xfd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='SierraForest'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-ne-convert'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-vnni-int8'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='cmpccxadd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fbsdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrs'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='mcdt-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pbrsb-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='psdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='serialize'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='SierraForest-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-ifma'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-ne-convert'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-vnni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx-vnni-int8'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='cmpccxadd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fbsdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='fsrs'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ibrs-all'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='mcdt-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pbrsb-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='psdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='serialize'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vaes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Client'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Client-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Client-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Client-v3'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Client-v4'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Server'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Server-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Server-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='hle'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='rtm'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Server-v3'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Server-v4'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Skylake-Server-v5'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512bw'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512cd'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512dq'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512f'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='avx512vl'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='invpcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pcid'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='pku'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Snowridge'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='cldemote'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='core-capability'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdir64b'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdiri'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='mpx'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='split-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Snowridge-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='cldemote'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='core-capability'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdir64b'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdiri'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='mpx'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='split-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Snowridge-v2'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='cldemote'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='core-capability'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdir64b'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdiri'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='split-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Snowridge-v3'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='cldemote'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='core-capability'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdir64b'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdiri'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='split-lock-detect'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='Snowridge-v4'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='cldemote'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='erms'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='gfni'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdir64b'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='movdiri'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='xsaves'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='athlon'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='3dnow'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='3dnowext'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='athlon-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='3dnow'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='3dnowext'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='core2duo'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ss'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='core2duo-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ss'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='coreduo'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ss'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='coreduo-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ss'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='n270'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ss'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='n270-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='ss'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='phenom'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='3dnow'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='3dnowext'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <blockers model='phenom-v1'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='3dnow'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <feature name='3dnowext'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </blockers>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </mode>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  </cpu>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  <memoryBacking supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <enum name='sourceType'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <value>file</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <value>anonymous</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <value>memfd</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  </memoryBacking>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  <devices>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <disk supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='diskDevice'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>disk</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>cdrom</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>floppy</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>lun</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='bus'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>ide</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>fdc</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>scsi</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>virtio</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>usb</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>sata</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='model'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>virtio</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>virtio-transitional</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>virtio-non-transitional</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </disk>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <graphics supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='type'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>vnc</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>egl-headless</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>dbus</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </graphics>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <video supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='modelType'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>vga</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>cirrus</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>virtio</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>none</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>bochs</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>ramfb</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </video>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <hostdev supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='mode'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>subsystem</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='startupPolicy'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>default</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>mandatory</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>requisite</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>optional</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='subsysType'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>usb</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>pci</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>scsi</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='capsType'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='pciBackend'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </hostdev>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <rng supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='model'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>virtio</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>virtio-transitional</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>virtio-non-transitional</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='backendModel'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>random</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>egd</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>builtin</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </rng>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <filesystem supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='driverType'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>path</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>handle</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>virtiofs</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </filesystem>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <tpm supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='model'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>tpm-tis</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>tpm-crb</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='backendModel'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>emulator</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>external</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='backendVersion'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>2.0</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </tpm>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <redirdev supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='bus'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>usb</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </redirdev>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <channel supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='type'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>pty</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>unix</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </channel>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <crypto supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='model'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='type'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>qemu</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='backendModel'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>builtin</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </crypto>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <interface supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='backendType'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>default</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>passt</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </interface>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <panic supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='model'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>isa</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>hyperv</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </panic>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <console supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='type'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>null</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>vc</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>pty</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>dev</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>file</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>pipe</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>stdio</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>udp</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>tcp</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>unix</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>qemu-vdagent</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>dbus</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </console>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  </devices>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  <features>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <gic supported='no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <vmcoreinfo supported='yes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <genid supported='yes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <backingStoreInput supported='yes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <backup supported='yes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <async-teardown supported='yes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <ps2 supported='yes'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <sev supported='no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <sgx supported='no'/>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <hyperv supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='features'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>relaxed</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>vapic</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>spinlocks</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>vpindex</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>runtime</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>synic</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>stimer</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>reset</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>vendor_id</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>frequencies</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>reenlightenment</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>tlbflush</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>ipi</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>avic</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>emsr_bitmap</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>xmm_input</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <defaults>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <spinlocks>4095</spinlocks>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <stimer_direct>on</stimer_direct>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </defaults>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </hyperv>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    <launchSecurity supported='yes'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      <enum name='sectype'>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:        <value>tdx</value>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:      </enum>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:    </launchSecurity>
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  </features>
Nov 29 02:43:51 np0005539552 nova_compute[232758]: </domainCapabilities>
Nov 29 02:43:51 np0005539552 nova_compute[232758]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 02:43:51 np0005539552 nova_compute[232758]: 2025-11-29 07:43:51.215 232762 DEBUG nova.virt.libvirt.host [None req-45d57452-898d-4537-909b-e06bec8d5b15 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 29 02:43:51 np0005539552 nova_compute[232758]: 2025-11-29 07:43:51.216 232762 INFO nova.virt.libvirt.host [None req-45d57452-898d-4537-909b-e06bec8d5b15 - - - - - -] Secure Boot support detected
Nov 29 02:43:51 np0005539552 nova_compute[232758]: 2025-11-29 07:43:51.217 232762 INFO nova.virt.libvirt.driver [None req-45d57452-898d-4537-909b-e06bec8d5b15 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 29 02:43:51 np0005539552 nova_compute[232758]: 2025-11-29 07:43:51.218 232762 INFO nova.virt.libvirt.driver [None req-45d57452-898d-4537-909b-e06bec8d5b15 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 29 02:43:51 np0005539552 nova_compute[232758]: 2025-11-29 07:43:51.225 232762 DEBUG nova.virt.libvirt.driver [None req-45d57452-898d-4537-909b-e06bec8d5b15 - - - - - -] cpu compare xml: <cpu match="exact">
Nov 29 02:43:51 np0005539552 nova_compute[232758]:  <model>Nehalem</model>
Nov 29 02:43:51 np0005539552 nova_compute[232758]: </cpu>
Nov 29 02:43:51 np0005539552 nova_compute[232758]: _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019
Nov 29 02:43:51 np0005539552 nova_compute[232758]: 2025-11-29 07:43:51.229 232762 DEBUG nova.virt.libvirt.driver [None req-45d57452-898d-4537-909b-e06bec8d5b15 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Nov 29 02:43:51 np0005539552 python3.9[233291]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:43:51 np0005539552 nova_compute[232758]: 2025-11-29 07:43:51.583 232762 INFO nova.virt.node [None req-45d57452-898d-4537-909b-e06bec8d5b15 - - - - - -] Determined node identity 29c97280-aaf3-4c7f-a78a-1c9e8d025371 from /var/lib/nova/compute_id
Nov 29 02:43:51 np0005539552 nova_compute[232758]: 2025-11-29 07:43:51.605 232762 WARNING nova.compute.manager [None req-45d57452-898d-4537-909b-e06bec8d5b15 - - - - - -] Compute nodes ['29c97280-aaf3-4c7f-a78a-1c9e8d025371'] for host compute-2.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Nov 29 02:43:51 np0005539552 nova_compute[232758]: 2025-11-29 07:43:51.824 232762 INFO nova.compute.manager [None req-45d57452-898d-4537-909b-e06bec8d5b15 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Nov 29 02:43:52 np0005539552 nova_compute[232758]: 2025-11-29 07:43:52.018 232762 WARNING nova.compute.manager [None req-45d57452-898d-4537-909b-e06bec8d5b15 - - - - - -] No compute node record found for host compute-2.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-2.ctlplane.example.com could not be found.
Nov 29 02:43:52 np0005539552 nova_compute[232758]: 2025-11-29 07:43:52.018 232762 DEBUG oslo_concurrency.lockutils [None req-45d57452-898d-4537-909b-e06bec8d5b15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:43:52 np0005539552 nova_compute[232758]: 2025-11-29 07:43:52.018 232762 DEBUG oslo_concurrency.lockutils [None req-45d57452-898d-4537-909b-e06bec8d5b15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:43:52 np0005539552 nova_compute[232758]: 2025-11-29 07:43:52.019 232762 DEBUG oslo_concurrency.lockutils [None req-45d57452-898d-4537-909b-e06bec8d5b15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:43:52 np0005539552 nova_compute[232758]: 2025-11-29 07:43:52.019 232762 DEBUG nova.compute.resource_tracker [None req-45d57452-898d-4537-909b-e06bec8d5b15 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 02:43:52 np0005539552 nova_compute[232758]: 2025-11-29 07:43:52.019 232762 DEBUG oslo_concurrency.processutils [None req-45d57452-898d-4537-909b-e06bec8d5b15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:43:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:43:52 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4041351653' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:43:52 np0005539552 nova_compute[232758]: 2025-11-29 07:43:52.605 232762 DEBUG oslo_concurrency.processutils [None req-45d57452-898d-4537-909b-e06bec8d5b15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:43:52 np0005539552 systemd[1]: Starting libvirt nodedev daemon...
Nov 29 02:43:52 np0005539552 systemd[1]: Started libvirt nodedev daemon.
Nov 29 02:43:52 np0005539552 python3.9[233463]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None 
preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 29 02:43:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:52 np0005539552 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 02:43:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:52.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:52 np0005539552 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 02:43:52 np0005539552 nova_compute[232758]: 2025-11-29 07:43:52.905 232762 WARNING nova.virt.libvirt.driver [None req-45d57452-898d-4537-909b-e06bec8d5b15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:43:52 np0005539552 nova_compute[232758]: 2025-11-29 07:43:52.907 232762 DEBUG nova.compute.resource_tracker [None req-45d57452-898d-4537-909b-e06bec8d5b15 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5300MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:43:52 np0005539552 nova_compute[232758]: 2025-11-29 07:43:52.907 232762 DEBUG oslo_concurrency.lockutils [None req-45d57452-898d-4537-909b-e06bec8d5b15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:43:52 np0005539552 nova_compute[232758]: 2025-11-29 07:43:52.908 232762 DEBUG oslo_concurrency.lockutils [None req-45d57452-898d-4537-909b-e06bec8d5b15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:43:52 np0005539552 nova_compute[232758]: 2025-11-29 07:43:52.926 232762 WARNING nova.compute.resource_tracker [None req-45d57452-898d-4537-909b-e06bec8d5b15 - - - - - -] No compute node record for compute-2.ctlplane.example.com:29c97280-aaf3-4c7f-a78a-1c9e8d025371: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 29c97280-aaf3-4c7f-a78a-1c9e8d025371 could not be found.#033[00m
Nov 29 02:43:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:43:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:52.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:43:53 np0005539552 nova_compute[232758]: 2025-11-29 07:43:53.655 232762 INFO nova.compute.resource_tracker [None req-45d57452-898d-4537-909b-e06bec8d5b15 - - - - - -] Compute node record created for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com with uuid: 29c97280-aaf3-4c7f-a78a-1c9e8d025371#033[00m
Nov 29 02:43:53 np0005539552 ceph-mon[77121]: Health check failed: 1 slow ops, oldest one blocked for 41 sec, mon.compute-1 has slow ops (SLOW_OPS)
Nov 29 02:43:53 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:43:53 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:43:53 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:43:53 np0005539552 python3.9[233663]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:43:53 np0005539552 systemd[1]: Stopping nova_compute container...
Nov 29 02:43:53 np0005539552 nova_compute[232758]: 2025-11-29 07:43:53.949 232762 DEBUG oslo_concurrency.lockutils [None req-45d57452-898d-4537-909b-e06bec8d5b15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.042s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:43:53 np0005539552 nova_compute[232758]: 2025-11-29 07:43:53.950 232762 DEBUG oslo_concurrency.lockutils [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:43:53 np0005539552 nova_compute[232758]: 2025-11-29 07:43:53.950 232762 DEBUG oslo_concurrency.lockutils [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:43:53 np0005539552 nova_compute[232758]: 2025-11-29 07:43:53.950 232762 DEBUG oslo_concurrency.lockutils [None req-31ac3c66-cb5c-422d-9e55-be6e798b70ee - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:43:54 np0005539552 virtqemud[233098]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 29 02:43:54 np0005539552 virtqemud[233098]: hostname: compute-2
Nov 29 02:43:54 np0005539552 virtqemud[233098]: End of file while reading data: Input/output error
Nov 29 02:43:54 np0005539552 systemd[1]: libpod-a5540344be90b83c758ee69b8c561f987d0e23700b655a919557384b5205c0e4.scope: Deactivated successfully.
Nov 29 02:43:54 np0005539552 systemd[1]: libpod-a5540344be90b83c758ee69b8c561f987d0e23700b655a919557384b5205c0e4.scope: Consumed 3.825s CPU time.
Nov 29 02:43:54 np0005539552 podman[233667]: 2025-11-29 07:43:54.403027058 +0000 UTC m=+0.608152791 container died a5540344be90b83c758ee69b8c561f987d0e23700b655a919557384b5205c0e4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 02:43:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:54.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:54.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:55 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:43:55 np0005539552 systemd[1]: var-lib-containers-storage-overlay-ced3424aad661c724e086ab13fd5a5dba34f32f4770f441f13c770702bbffd1c-merged.mount: Deactivated successfully.
Nov 29 02:43:55 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a5540344be90b83c758ee69b8c561f987d0e23700b655a919557384b5205c0e4-userdata-shm.mount: Deactivated successfully.
Nov 29 02:43:55 np0005539552 podman[233667]: 2025-11-29 07:43:55.203025708 +0000 UTC m=+1.408151431 container cleanup a5540344be90b83c758ee69b8c561f987d0e23700b655a919557384b5205c0e4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=nova_compute, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 02:43:55 np0005539552 podman[233667]: nova_compute
Nov 29 02:43:55 np0005539552 podman[233696]: nova_compute
Nov 29 02:43:55 np0005539552 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Nov 29 02:43:55 np0005539552 systemd[1]: Stopped nova_compute container.
Nov 29 02:43:55 np0005539552 systemd[1]: Starting nova_compute container...
Nov 29 02:43:55 np0005539552 systemd[1]: Started libcrun container.
Nov 29 02:43:55 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ced3424aad661c724e086ab13fd5a5dba34f32f4770f441f13c770702bbffd1c/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:55 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ced3424aad661c724e086ab13fd5a5dba34f32f4770f441f13c770702bbffd1c/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:55 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ced3424aad661c724e086ab13fd5a5dba34f32f4770f441f13c770702bbffd1c/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:55 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ced3424aad661c724e086ab13fd5a5dba34f32f4770f441f13c770702bbffd1c/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:55 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ced3424aad661c724e086ab13fd5a5dba34f32f4770f441f13c770702bbffd1c/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:55 np0005539552 podman[233709]: 2025-11-29 07:43:55.603407134 +0000 UTC m=+0.312073945 container init a5540344be90b83c758ee69b8c561f987d0e23700b655a919557384b5205c0e4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:43:55 np0005539552 podman[233709]: 2025-11-29 07:43:55.614923242 +0000 UTC m=+0.323590023 container start a5540344be90b83c758ee69b8c561f987d0e23700b655a919557384b5205c0e4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=nova_compute, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:43:55 np0005539552 nova_compute[233724]: + sudo -E kolla_set_configs
Nov 29 02:43:55 np0005539552 nova_compute[233724]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 02:43:55 np0005539552 nova_compute[233724]: INFO:__main__:Validating config file
Nov 29 02:43:55 np0005539552 nova_compute[233724]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 02:43:55 np0005539552 nova_compute[233724]: INFO:__main__:Copying service configuration files
Nov 29 02:43:55 np0005539552 nova_compute[233724]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 29 02:43:55 np0005539552 nova_compute[233724]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 29 02:43:55 np0005539552 nova_compute[233724]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 29 02:43:55 np0005539552 nova_compute[233724]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Nov 29 02:43:55 np0005539552 nova_compute[233724]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 29 02:43:55 np0005539552 nova_compute[233724]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 29 02:43:55 np0005539552 nova_compute[233724]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 02:43:55 np0005539552 nova_compute[233724]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 02:43:55 np0005539552 nova_compute[233724]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 02:43:55 np0005539552 nova_compute[233724]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 02:43:55 np0005539552 nova_compute[233724]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 02:43:55 np0005539552 nova_compute[233724]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 02:43:55 np0005539552 nova_compute[233724]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 02:43:55 np0005539552 nova_compute[233724]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 02:43:55 np0005539552 nova_compute[233724]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 02:43:55 np0005539552 nova_compute[233724]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 02:43:55 np0005539552 nova_compute[233724]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 02:43:55 np0005539552 nova_compute[233724]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 02:43:55 np0005539552 nova_compute[233724]: INFO:__main__:Deleting /etc/ceph
Nov 29 02:43:55 np0005539552 nova_compute[233724]: INFO:__main__:Creating directory /etc/ceph
Nov 29 02:43:55 np0005539552 nova_compute[233724]: INFO:__main__:Setting permission for /etc/ceph
Nov 29 02:43:55 np0005539552 nova_compute[233724]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 29 02:43:55 np0005539552 nova_compute[233724]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 29 02:43:55 np0005539552 nova_compute[233724]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 29 02:43:55 np0005539552 nova_compute[233724]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 29 02:43:55 np0005539552 nova_compute[233724]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Nov 29 02:43:55 np0005539552 nova_compute[233724]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 29 02:43:55 np0005539552 nova_compute[233724]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 02:43:55 np0005539552 nova_compute[233724]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 29 02:43:55 np0005539552 nova_compute[233724]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 29 02:43:55 np0005539552 nova_compute[233724]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 02:43:55 np0005539552 nova_compute[233724]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 29 02:43:55 np0005539552 nova_compute[233724]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 29 02:43:55 np0005539552 nova_compute[233724]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 29 02:43:55 np0005539552 nova_compute[233724]: INFO:__main__:Writing out command to execute
Nov 29 02:43:55 np0005539552 nova_compute[233724]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 29 02:43:55 np0005539552 nova_compute[233724]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 29 02:43:55 np0005539552 nova_compute[233724]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 29 02:43:55 np0005539552 nova_compute[233724]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 02:43:55 np0005539552 nova_compute[233724]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 02:43:55 np0005539552 nova_compute[233724]: ++ cat /run_command
Nov 29 02:43:55 np0005539552 nova_compute[233724]: + CMD=nova-compute
Nov 29 02:43:55 np0005539552 nova_compute[233724]: + ARGS=
Nov 29 02:43:55 np0005539552 nova_compute[233724]: + sudo kolla_copy_cacerts
Nov 29 02:43:55 np0005539552 nova_compute[233724]: + [[ ! -n '' ]]
Nov 29 02:43:55 np0005539552 nova_compute[233724]: + . kolla_extend_start
Nov 29 02:43:55 np0005539552 nova_compute[233724]: Running command: 'nova-compute'
Nov 29 02:43:55 np0005539552 nova_compute[233724]: + echo 'Running command: '\''nova-compute'\'''
Nov 29 02:43:55 np0005539552 nova_compute[233724]: + umask 0022
Nov 29 02:43:55 np0005539552 nova_compute[233724]: + exec nova-compute
Nov 29 02:43:55 np0005539552 podman[233709]: nova_compute
Nov 29 02:43:55 np0005539552 systemd[1]: Started nova_compute container.
Nov 29 02:43:56 np0005539552 ceph-mon[77121]: Health check cleared: SLOW_OPS (was: 1 slow ops, oldest one blocked for 41 sec, mon.compute-1 has slow ops)
Nov 29 02:43:56 np0005539552 ceph-mon[77121]: Cluster is now healthy
Nov 29 02:43:56 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:43:56 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:43:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:43:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:56.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:43:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:56.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:57 np0005539552 nova_compute[233724]: 2025-11-29 07:43:57.736 233728 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 29 02:43:57 np0005539552 nova_compute[233724]: 2025-11-29 07:43:57.736 233728 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 29 02:43:57 np0005539552 nova_compute[233724]: 2025-11-29 07:43:57.737 233728 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 29 02:43:57 np0005539552 nova_compute[233724]: 2025-11-29 07:43:57.737 233728 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Nov 29 02:43:57 np0005539552 nova_compute[233724]: 2025-11-29 07:43:57.883 233728 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:43:57 np0005539552 nova_compute[233724]: 2025-11-29 07:43:57.913 233728 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:43:57 np0005539552 nova_compute[233724]: 2025-11-29 07:43:57.914 233728 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 29 02:43:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:58.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:58 np0005539552 nova_compute[233724]: 2025-11-29 07:43:58.885 233728 INFO nova.virt.driver [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Nov 29 02:43:58 np0005539552 python3.9[233894]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None 
preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 29 02:43:58 np0005539552 nova_compute[233724]: 2025-11-29 07:43:58.975 233728 INFO nova.compute.provider_config [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Nov 29 02:43:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:43:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:59.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.192 233728 DEBUG oslo_concurrency.lockutils [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.192 233728 DEBUG oslo_concurrency.lockutils [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.193 233728 DEBUG oslo_concurrency.lockutils [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.193 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.193 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.193 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.193 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.194 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.194 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.194 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.194 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.194 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.194 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.194 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.195 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.195 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.195 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.195 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.195 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.195 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.196 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.196 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.196 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] console_host                   = compute-2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.196 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.196 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.196 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.196 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.197 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.197 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.197 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.197 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.197 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.198 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.198 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.198 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.198 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.198 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.198 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.199 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.199 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.199 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.199 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] host                           = compute-2.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.199 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.199 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.200 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.200 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.200 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.200 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.200 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.200 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.200 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.201 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.201 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.201 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.201 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.201 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.202 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.202 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.202 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.203 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.203 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.203 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.203 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.203 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.203 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.203 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.204 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.204 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.204 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.204 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.204 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.204 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.204 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.205 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.205 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.205 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.205 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.205 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.205 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.206 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.206 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.206 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.206 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.206 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] my_block_storage_ip            = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.206 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] my_ip                          = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.207 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.207 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.207 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.207 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.207 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.208 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.208 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.208 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.208 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.208 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.208 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.208 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.209 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.209 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.209 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.209 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.209 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.209 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.209 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.210 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.210 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.210 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.210 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.210 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.210 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.210 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.211 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.211 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.211 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.211 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.211 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.211 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.211 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.212 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.212 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.212 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.212 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.212 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.212 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.212 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.213 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.213 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.213 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.213 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.213 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.213 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.214 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.214 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.214 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.214 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.214 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.214 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.214 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.214 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.215 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.215 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.215 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.215 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.215 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.215 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.215 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.216 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.216 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.216 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.216 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.216 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.216 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.216 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.217 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.217 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.217 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.217 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.217 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.217 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.217 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.218 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.218 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.218 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.218 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.218 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.218 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.218 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.219 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.219 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.219 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.219 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.219 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.219 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.220 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.220 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.220 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.220 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.220 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.220 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.220 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.221 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.221 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.221 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.221 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.221 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.221 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.221 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.222 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.222 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.222 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.222 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.222 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.222 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.223 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.223 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.223 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.223 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.223 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.223 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.224 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.224 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.224 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.224 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.224 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.224 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.225 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.225 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.225 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.225 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.225 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.226 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.226 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.226 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.226 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.226 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.226 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.227 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.227 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.227 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.227 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.227 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.228 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.228 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.228 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.228 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.228 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.229 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.229 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.229 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.229 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.229 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.230 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.230 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.230 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.230 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.230 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.230 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.230 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.231 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.231 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.231 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.231 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.231 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.231 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.232 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.232 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.232 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.232 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.232 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.232 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.232 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.233 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.233 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.233 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.233 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.233 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.233 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.233 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.233 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.234 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.234 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.234 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.234 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.234 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 systemd[1]: Started libpod-conmon-3bf0ebc12ed7fe40e982148a01f95ee47e1630b089f8aeeaced87812617999be.scope.
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.234 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.235 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.235 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.235 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.235 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.235 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.236 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.236 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.236 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.236 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.236 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.236 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.236 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.237 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.237 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.237 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.237 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.237 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.237 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.237 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.238 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.238 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.238 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.238 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.238 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.238 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.239 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.239 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.239 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.239 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.239 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.240 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.240 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.240 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.240 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.240 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.241 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.241 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.241 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.241 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.241 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.242 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.242 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.242 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.242 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.242 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.242 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.243 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.243 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.243 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.243 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.243 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.244 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.244 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.244 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.244 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.244 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.245 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.245 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.245 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.245 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.245 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.246 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.246 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.246 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.246 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.246 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.247 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.247 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.247 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.247 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.247 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.248 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.248 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.248 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.248 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.248 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.249 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.249 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.249 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.249 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.249 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.250 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.250 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.250 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.250 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.250 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.251 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.251 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.251 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.251 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.251 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.251 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.252 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.252 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.252 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.252 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.253 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.253 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.253 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.253 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.253 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.253 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.254 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.254 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.254 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.254 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.254 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.254 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.255 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.255 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.255 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.255 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.255 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.255 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.256 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.256 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.256 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.256 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.256 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.257 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.257 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.257 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.257 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.257 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.257 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.258 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.258 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.258 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.258 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.258 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.258 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.259 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.259 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.259 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.259 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.259 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.260 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.260 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.260 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.260 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.260 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.260 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.261 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.261 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.261 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.261 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.261 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.261 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.262 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.262 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.262 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.262 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.262 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.262 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.263 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.263 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.263 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.263 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.263 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.263 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.264 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.264 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.264 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.264 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.264 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.264 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.265 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.265 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.265 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.265 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.265 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.265 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.266 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.266 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.266 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.266 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.266 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.266 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.266 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.266 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.267 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.267 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.267 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.267 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.267 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.267 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.267 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.268 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.268 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.268 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.268 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.268 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.268 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.269 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.269 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.269 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.269 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.269 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.269 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.269 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.270 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.270 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.270 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.270 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.270 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.270 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.270 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.271 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.271 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.271 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.271 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.271 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.271 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.272 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.272 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.272 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.272 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.272 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.272 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.272 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.273 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.273 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.273 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.273 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.273 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.273 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.273 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.274 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.274 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.274 233728 WARNING oslo_config.cfg [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 29 02:43:59 np0005539552 nova_compute[233724]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 29 02:43:59 np0005539552 nova_compute[233724]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 29 02:43:59 np0005539552 nova_compute[233724]: and ``live_migration_inbound_addr`` respectively.
Nov 29 02:43:59 np0005539552 nova_compute[233724]: ).  Its value may be silently ignored in the future.#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.274 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.274 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.274 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.275 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.275 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.275 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.275 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.275 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 systemd[1]: Started libcrun container.
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.275 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.275 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.276 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.276 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.276 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.276 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.276 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.276 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.277 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.277 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.277 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.rbd_secret_uuid        = b66774a7-56d9-5535-bd8c-681234404870 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.277 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.277 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.277 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.278 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.278 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.278 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.278 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.278 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.278 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.278 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.279 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.279 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.279 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.279 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.279 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.280 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.280 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.280 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.280 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.280 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.280 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.281 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.281 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.281 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.281 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.281 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.281 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.281 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.282 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.282 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.282 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/969ff346d6d8892709f50433c6dad48dd3347d3855054c19ffd0a39bc07f67d8/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.282 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.282 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.282 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.283 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.283 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.283 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/969ff346d6d8892709f50433c6dad48dd3347d3855054c19ffd0a39bc07f67d8/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.283 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.283 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.283 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.284 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.284 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.284 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/969ff346d6d8892709f50433c6dad48dd3347d3855054c19ffd0a39bc07f67d8/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.284 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.284 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.284 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.285 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.285 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.285 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.285 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.285 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.285 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.285 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.286 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.286 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.286 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.286 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.286 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.286 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.287 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.287 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.287 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.287 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.287 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.287 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.287 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.288 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.288 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.288 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.288 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.288 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.288 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.289 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.289 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.289 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.289 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.289 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.289 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.289 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.290 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.290 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.290 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.290 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.290 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.290 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.291 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.291 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.291 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.291 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.291 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.291 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.292 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.292 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.292 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.292 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.292 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.292 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.292 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.293 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.293 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.293 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.293 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.293 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.293 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.293 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.294 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.294 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.294 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.294 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.294 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.294 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.294 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.295 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.295 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.295 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.295 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.295 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.295 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.295 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.296 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.296 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.296 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.296 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.296 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.297 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.297 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.297 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.297 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.297 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.297 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.297 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.298 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.298 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.298 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.298 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.298 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.298 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.298 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.299 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.299 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.299 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.299 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.299 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.299 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.299 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.300 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.300 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.300 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.300 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.300 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.300 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.300 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.301 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.301 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.301 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.301 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.301 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.301 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.301 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.302 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.302 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.302 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.302 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.302 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.302 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.303 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.303 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.303 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.303 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.303 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.303 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.303 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.304 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.304 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.304 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.304 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.304 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.304 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.305 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.305 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.305 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.305 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.305 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.305 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.305 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.306 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.306 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.306 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.306 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.306 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.306 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.307 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.307 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.307 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.307 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.307 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.307 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.308 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.308 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.308 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.308 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.308 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.308 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.308 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.309 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.309 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.309 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.309 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.309 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.309 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.309 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.310 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.310 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.310 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.310 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.310 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.310 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.310 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.311 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.311 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.311 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.311 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.311 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.311 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.311 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.312 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.312 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.312 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.312 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.312 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.312 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.313 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.313 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.313 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vnc.server_proxyclient_address = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.313 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.313 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.313 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.314 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.314 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.314 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.314 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.314 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.314 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.314 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.315 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.315 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.315 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.315 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.315 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.315 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.315 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.316 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.316 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.316 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.316 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.316 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.316 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.316 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.317 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.317 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.317 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.317 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.317 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.317 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.317 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.318 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.318 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.318 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.318 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.318 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.318 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.318 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.319 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.319 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.319 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.319 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.319 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.319 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.319 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.320 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.320 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.320 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.320 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.320 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.320 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.320 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.321 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.321 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.321 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.321 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.321 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.321 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.321 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.322 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.322 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.322 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.322 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.322 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.322 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.322 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.323 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.323 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.323 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.323 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.323 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.323 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.323 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.324 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.324 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.324 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.324 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.324 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.324 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.324 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.325 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.325 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.325 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.325 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.325 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.325 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.325 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.326 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.326 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.326 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.326 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.326 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.326 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.326 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.327 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.327 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.327 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.327 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.327 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.327 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.327 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.328 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.328 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.328 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.328 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.328 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.328 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.328 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.329 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.329 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.329 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.329 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.329 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.329 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.329 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.330 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.330 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.330 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.330 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.330 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.330 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.331 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.331 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.331 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.331 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.331 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.331 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.332 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.332 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.332 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.332 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.332 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.332 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.332 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.333 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.333 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.333 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.333 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.333 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.333 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.333 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.334 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.334 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.334 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.334 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.334 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.334 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.335 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.335 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.335 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.335 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.335 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.335 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.336 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.336 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.336 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 podman[233921]: 2025-11-29 07:43:59.335861382 +0000 UTC m=+0.297657040 container init 3bf0ebc12ed7fe40e982148a01f95ee47e1630b089f8aeeaced87812617999be (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.336 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.336 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.336 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.336 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.337 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.337 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.337 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.337 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.337 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.338 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.338 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.338 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.338 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.338 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.339 233728 DEBUG oslo_service.service [None req-c2c4d2d0-141e-4f14-84ac-6611881301a5 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.340 233728 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Nov 29 02:43:59 np0005539552 podman[233921]: 2025-11-29 07:43:59.343888076 +0000 UTC m=+0.305683734 container start 3bf0ebc12ed7fe40e982148a01f95ee47e1630b089f8aeeaced87812617999be (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=nova_compute_init, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 02:43:59 np0005539552 python3.9[233894]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.374 233728 INFO nova.virt.node [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Determined node identity 29c97280-aaf3-4c7f-a78a-1c9e8d025371 from /var/lib/nova/compute_id#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.375 233728 DEBUG nova.virt.libvirt.host [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.375 233728 DEBUG nova.virt.libvirt.host [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.376 233728 DEBUG nova.virt.libvirt.host [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.376 233728 DEBUG nova.virt.libvirt.host [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.387 233728 DEBUG nova.virt.libvirt.host [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fd8562ffc10> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Nov 29 02:43:59 np0005539552 nova_compute_init[233941]: INFO:nova_statedir:Applying nova statedir ownership
Nov 29 02:43:59 np0005539552 nova_compute_init[233941]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Nov 29 02:43:59 np0005539552 nova_compute_init[233941]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Nov 29 02:43:59 np0005539552 nova_compute_init[233941]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Nov 29 02:43:59 np0005539552 nova_compute_init[233941]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Nov 29 02:43:59 np0005539552 nova_compute_init[233941]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Nov 29 02:43:59 np0005539552 nova_compute_init[233941]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Nov 29 02:43:59 np0005539552 nova_compute_init[233941]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.390 233728 DEBUG nova.virt.libvirt.host [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fd8562ffc10> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Nov 29 02:43:59 np0005539552 nova_compute_init[233941]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Nov 29 02:43:59 np0005539552 nova_compute_init[233941]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Nov 29 02:43:59 np0005539552 nova_compute_init[233941]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Nov 29 02:43:59 np0005539552 nova_compute_init[233941]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Nov 29 02:43:59 np0005539552 nova_compute_init[233941]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Nov 29 02:43:59 np0005539552 nova_compute_init[233941]: INFO:nova_statedir:Nova statedir ownership complete
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.392 233728 INFO nova.virt.libvirt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Connection event '1' reason 'None'#033[00m
Nov 29 02:43:59 np0005539552 systemd[1]: libpod-3bf0ebc12ed7fe40e982148a01f95ee47e1630b089f8aeeaced87812617999be.scope: Deactivated successfully.
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.399 233728 INFO nova.virt.libvirt.host [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Libvirt host capabilities <capabilities>
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  <host>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <uuid>6fbde64a-d978-4f1a-a29d-a77e1f5a1987</uuid>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <cpu>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <arch>x86_64</arch>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model>EPYC-Rome-v4</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <vendor>AMD</vendor>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <microcode version='16777317'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <signature family='23' model='49' stepping='0'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <maxphysaddr mode='emulate' bits='40'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature name='x2apic'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature name='tsc-deadline'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature name='osxsave'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature name='hypervisor'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature name='tsc_adjust'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature name='spec-ctrl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature name='stibp'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature name='arch-capabilities'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature name='ssbd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature name='cmp_legacy'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature name='topoext'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature name='virt-ssbd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature name='lbrv'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature name='tsc-scale'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature name='vmcb-clean'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature name='pause-filter'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature name='pfthreshold'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature name='svme-addr-chk'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature name='rdctl-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature name='skip-l1dfl-vmentry'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature name='mds-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature name='pschange-mc-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <pages unit='KiB' size='4'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <pages unit='KiB' size='2048'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <pages unit='KiB' size='1048576'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </cpu>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <power_management>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <suspend_mem/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </power_management>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <iommu support='no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <migration_features>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <live/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <uri_transports>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <uri_transport>tcp</uri_transport>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <uri_transport>rdma</uri_transport>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </uri_transports>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </migration_features>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <topology>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <cells num='1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <cell id='0'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:          <memory unit='KiB'>7864316</memory>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:          <pages unit='KiB' size='4'>1966079</pages>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:          <pages unit='KiB' size='2048'>0</pages>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:          <pages unit='KiB' size='1048576'>0</pages>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:          <distances>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:            <sibling id='0' value='10'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:          </distances>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:          <cpus num='8'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:          </cpus>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        </cell>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </cells>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </topology>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <cache>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </cache>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <secmodel>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model>selinux</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <doi>0</doi>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </secmodel>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <secmodel>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model>dac</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <doi>0</doi>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <baselabel type='kvm'>+107:+107</baselabel>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <baselabel type='qemu'>+107:+107</baselabel>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </secmodel>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  </host>
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  <guest>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <os_type>hvm</os_type>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <arch name='i686'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <wordsize>32</wordsize>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <domain type='qemu'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <domain type='kvm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </arch>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <features>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <pae/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <nonpae/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <acpi default='on' toggle='yes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <apic default='on' toggle='no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <cpuselection/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <deviceboot/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <disksnapshot default='on' toggle='no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <externalSnapshot/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </features>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  </guest>
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  <guest>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <os_type>hvm</os_type>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <arch name='x86_64'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <wordsize>64</wordsize>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <domain type='qemu'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <domain type='kvm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </arch>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <features>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <acpi default='on' toggle='yes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <apic default='on' toggle='no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <cpuselection/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <deviceboot/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <disksnapshot default='on' toggle='no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <externalSnapshot/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </features>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  </guest>
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 
Nov 29 02:43:59 np0005539552 nova_compute[233724]: </capabilities>
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.405 233728 DEBUG nova.virt.libvirt.host [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 29 02:43:59 np0005539552 podman[233942]: 2025-11-29 07:43:59.407393774 +0000 UTC m=+0.025228395 container died 3bf0ebc12ed7fe40e982148a01f95ee47e1630b089f8aeeaced87812617999be (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=nova_compute_init)
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.409 233728 DEBUG nova.virt.libvirt.host [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 29 02:43:59 np0005539552 nova_compute[233724]: <domainCapabilities>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  <domain>kvm</domain>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  <arch>i686</arch>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  <vcpu max='4096'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  <iothreads supported='yes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  <os supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <enum name='firmware'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <loader supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='type'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>rom</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>pflash</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='readonly'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>yes</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>no</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='secure'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>no</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </loader>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  </os>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  <cpu>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <mode name='host-passthrough' supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='hostPassthroughMigratable'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>on</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>off</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </mode>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <mode name='maximum' supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='maximumMigratable'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>on</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>off</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </mode>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <mode name='host-model' supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <vendor>AMD</vendor>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='x2apic'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='hypervisor'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='stibp'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='ssbd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='overflow-recov'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='succor'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='ibrs'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='lbrv'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='tsc-scale'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='flushbyasid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='pause-filter'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='pfthreshold'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='disable' name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </mode>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <mode name='custom' supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Broadwell'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Broadwell-IBRS'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Broadwell-noTSX'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Broadwell-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Broadwell-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Broadwell-v3'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Broadwell-v4'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Cascadelake-Server'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Cooperlake'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Cooperlake-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Cooperlake-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Denverton'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='mpx'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Denverton-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='mpx'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Denverton-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Denverton-v3'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Dhyana-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='EPYC-Genoa'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amd-psfd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='auto-ibrs'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='stibp-always-on'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amd-psfd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='auto-ibrs'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='stibp-always-on'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='EPYC-Milan'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='EPYC-Milan-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='EPYC-Milan-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amd-psfd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='stibp-always-on'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='EPYC-Rome'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='EPYC-Rome-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='EPYC-Rome-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='EPYC-Rome-v3'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='EPYC-v3'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='EPYC-v4'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='GraniteRapids'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-fp16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='mcdt-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pbrsb-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='prefetchiti'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='GraniteRapids-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-fp16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='mcdt-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pbrsb-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='prefetchiti'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='GraniteRapids-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-fp16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx10'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx10-128'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx10-256'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx10-512'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='mcdt-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pbrsb-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='prefetchiti'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Haswell'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Haswell-IBRS'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Haswell-noTSX'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Haswell-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Haswell-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Haswell-v3'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Haswell-v4'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Icelake-Server'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Icelake-Server-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Icelake-Server-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Icelake-Server-v3'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Icelake-Server-v4'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Icelake-Server-v5'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Icelake-Server-v6'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Icelake-Server-v7'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='IvyBridge'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='IvyBridge-IBRS'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='IvyBridge-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='IvyBridge-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='KnightsMill'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-4fmaps'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-4vnniw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512er'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512pf'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='KnightsMill-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-4fmaps'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-4vnniw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512er'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512pf'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Opteron_G4'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fma4'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xop'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Opteron_G4-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fma4'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xop'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Opteron_G5'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fma4'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='tbm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xop'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Opteron_G5-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fma4'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='tbm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xop'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='SapphireRapids'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='SapphireRapids-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='SapphireRapids-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='SapphireRapids-v3'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='SierraForest'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-ne-convert'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-vnni-int8'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='cmpccxadd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='mcdt-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pbrsb-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='SierraForest-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-ne-convert'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-vnni-int8'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='cmpccxadd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='mcdt-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pbrsb-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Client'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Client-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Client-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Client-v3'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Client-v4'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Server'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Server-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Server-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Server-v3'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Server-v4'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Server-v5'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Snowridge'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='core-capability'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='mpx'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='split-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Snowridge-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='core-capability'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='mpx'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='split-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Snowridge-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='core-capability'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='split-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Snowridge-v3'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='core-capability'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='split-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Snowridge-v4'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='athlon'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='3dnow'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='3dnowext'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='athlon-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='3dnow'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='3dnowext'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='core2duo'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='core2duo-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='coreduo'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='coreduo-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='n270'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='n270-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='phenom'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='3dnow'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='3dnowext'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='phenom-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='3dnow'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='3dnowext'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </mode>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  <memoryBacking supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <enum name='sourceType'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <value>file</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <value>anonymous</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <value>memfd</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  </memoryBacking>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  <devices>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <disk supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='diskDevice'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>disk</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>cdrom</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>floppy</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>lun</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='bus'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>fdc</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>scsi</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>virtio</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>usb</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>sata</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='model'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>virtio</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>virtio-transitional</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>virtio-non-transitional</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </disk>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <graphics supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='type'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>vnc</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>egl-headless</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>dbus</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </graphics>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <video supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='modelType'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>vga</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>cirrus</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>virtio</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>none</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>bochs</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>ramfb</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </video>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <hostdev supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='mode'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>subsystem</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='startupPolicy'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>default</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>mandatory</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>requisite</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>optional</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='subsysType'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>usb</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>pci</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>scsi</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='capsType'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='pciBackend'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </hostdev>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <rng supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='model'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>virtio</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>virtio-transitional</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>virtio-non-transitional</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='backendModel'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>random</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>egd</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>builtin</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </rng>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <filesystem supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='driverType'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>path</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>handle</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>virtiofs</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </filesystem>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <tpm supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='model'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>tpm-tis</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>tpm-crb</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='backendModel'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>emulator</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>external</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='backendVersion'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>2.0</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </tpm>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <redirdev supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='bus'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>usb</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </redirdev>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <channel supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='type'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>pty</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>unix</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </channel>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <crypto supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='model'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='type'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>qemu</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='backendModel'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>builtin</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </crypto>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <interface supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='backendType'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>default</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>passt</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </interface>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <panic supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='model'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>isa</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>hyperv</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </panic>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <console supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='type'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>null</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>vc</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>pty</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>dev</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>file</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>pipe</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>stdio</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>udp</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>tcp</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>unix</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>qemu-vdagent</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>dbus</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </console>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  </devices>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  <features>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <gic supported='no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <vmcoreinfo supported='yes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <genid supported='yes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <backingStoreInput supported='yes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <backup supported='yes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <async-teardown supported='yes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <ps2 supported='yes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <sev supported='no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <sgx supported='no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <hyperv supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='features'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>relaxed</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>vapic</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>spinlocks</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>vpindex</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>runtime</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>synic</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>stimer</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>reset</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>vendor_id</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>frequencies</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>reenlightenment</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>tlbflush</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>ipi</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>avic</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>emsr_bitmap</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>xmm_input</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <defaults>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <spinlocks>4095</spinlocks>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <stimer_direct>on</stimer_direct>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </defaults>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </hyperv>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <launchSecurity supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='sectype'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>tdx</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </launchSecurity>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  </features>
Nov 29 02:43:59 np0005539552 nova_compute[233724]: </domainCapabilities>
Nov 29 02:43:59 np0005539552 nova_compute[233724]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.414 233728 DEBUG nova.virt.libvirt.host [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 29 02:43:59 np0005539552 nova_compute[233724]: <domainCapabilities>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  <domain>kvm</domain>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  <arch>i686</arch>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  <vcpu max='240'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  <iothreads supported='yes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  <os supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <enum name='firmware'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <loader supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='type'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>rom</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>pflash</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='readonly'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>yes</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>no</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='secure'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>no</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </loader>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  </os>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  <cpu>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <mode name='host-passthrough' supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='hostPassthroughMigratable'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>on</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>off</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </mode>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <mode name='maximum' supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='maximumMigratable'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>on</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>off</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </mode>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <mode name='host-model' supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <vendor>AMD</vendor>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='x2apic'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='hypervisor'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='stibp'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='ssbd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='overflow-recov'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='succor'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='ibrs'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='lbrv'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='tsc-scale'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='flushbyasid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='pause-filter'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='pfthreshold'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='disable' name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </mode>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <mode name='custom' supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Broadwell'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Broadwell-IBRS'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Broadwell-noTSX'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Broadwell-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Broadwell-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Broadwell-v3'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Broadwell-v4'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Cascadelake-Server'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Cooperlake'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Cooperlake-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Cooperlake-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Denverton'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='mpx'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Denverton-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='mpx'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Denverton-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Denverton-v3'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Dhyana-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='EPYC-Genoa'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amd-psfd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='auto-ibrs'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='stibp-always-on'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amd-psfd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='auto-ibrs'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='stibp-always-on'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='EPYC-Milan'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='EPYC-Milan-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='EPYC-Milan-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amd-psfd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='stibp-always-on'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='EPYC-Rome'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='EPYC-Rome-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='EPYC-Rome-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='EPYC-Rome-v3'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='EPYC-v3'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='EPYC-v4'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='GraniteRapids'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-fp16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='mcdt-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pbrsb-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='prefetchiti'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='GraniteRapids-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-fp16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='mcdt-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pbrsb-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='prefetchiti'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='GraniteRapids-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-fp16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx10'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx10-128'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx10-256'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx10-512'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='mcdt-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pbrsb-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='prefetchiti'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Haswell'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Haswell-IBRS'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Haswell-noTSX'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Haswell-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Haswell-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Haswell-v3'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Haswell-v4'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Icelake-Server'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Icelake-Server-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Icelake-Server-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Icelake-Server-v3'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Icelake-Server-v4'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Icelake-Server-v5'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Icelake-Server-v6'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Icelake-Server-v7'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='IvyBridge'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='IvyBridge-IBRS'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='IvyBridge-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='IvyBridge-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='KnightsMill'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-4fmaps'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-4vnniw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512er'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512pf'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='KnightsMill-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-4fmaps'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-4vnniw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512er'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512pf'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Opteron_G4'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fma4'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xop'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Opteron_G4-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fma4'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xop'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Opteron_G5'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fma4'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='tbm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xop'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Opteron_G5-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fma4'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='tbm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xop'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='SapphireRapids'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='SapphireRapids-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='SapphireRapids-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='SapphireRapids-v3'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='SierraForest'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-ne-convert'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-vnni-int8'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='cmpccxadd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='mcdt-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pbrsb-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='SierraForest-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-ne-convert'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-vnni-int8'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='cmpccxadd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='mcdt-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pbrsb-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Client'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Client-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Client-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Client-v3'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Client-v4'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Server'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Server-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Server-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Server-v3'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Server-v4'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Server-v5'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Snowridge'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='core-capability'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='mpx'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='split-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Snowridge-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='core-capability'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='mpx'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='split-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Snowridge-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='core-capability'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='split-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Snowridge-v3'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='core-capability'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='split-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Snowridge-v4'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='athlon'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='3dnow'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='3dnowext'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='athlon-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='3dnow'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='3dnowext'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='core2duo'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='core2duo-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='coreduo'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='coreduo-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='n270'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='n270-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='phenom'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='3dnow'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='3dnowext'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='phenom-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='3dnow'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='3dnowext'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </mode>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  <memoryBacking supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <enum name='sourceType'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <value>file</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <value>anonymous</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <value>memfd</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  </memoryBacking>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  <devices>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <disk supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='diskDevice'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>disk</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>cdrom</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>floppy</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>lun</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='bus'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>ide</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>fdc</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>scsi</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>virtio</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>usb</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>sata</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='model'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>virtio</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>virtio-transitional</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>virtio-non-transitional</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </disk>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <graphics supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='type'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>vnc</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>egl-headless</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>dbus</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </graphics>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <video supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='modelType'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>vga</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>cirrus</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>virtio</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>none</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>bochs</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>ramfb</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </video>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <hostdev supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='mode'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>subsystem</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='startupPolicy'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>default</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>mandatory</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>requisite</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>optional</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='subsysType'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>usb</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>pci</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>scsi</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='capsType'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='pciBackend'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </hostdev>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <rng supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='model'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>virtio</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>virtio-transitional</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>virtio-non-transitional</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='backendModel'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>random</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>egd</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>builtin</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </rng>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <filesystem supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='driverType'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>path</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>handle</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>virtiofs</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </filesystem>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <tpm supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='model'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>tpm-tis</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>tpm-crb</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='backendModel'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>emulator</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>external</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='backendVersion'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>2.0</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </tpm>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <redirdev supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='bus'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>usb</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </redirdev>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <channel supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='type'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>pty</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>unix</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </channel>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <crypto supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='model'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='type'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>qemu</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='backendModel'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>builtin</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </crypto>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <interface supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='backendType'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>default</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>passt</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </interface>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <panic supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='model'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>isa</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>hyperv</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </panic>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <console supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='type'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>null</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>vc</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>pty</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>dev</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>file</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>pipe</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>stdio</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>udp</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>tcp</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>unix</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>qemu-vdagent</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>dbus</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </console>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  </devices>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  <features>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <gic supported='no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <vmcoreinfo supported='yes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <genid supported='yes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <backingStoreInput supported='yes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <backup supported='yes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <async-teardown supported='yes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <ps2 supported='yes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <sev supported='no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <sgx supported='no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <hyperv supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='features'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>relaxed</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>vapic</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>spinlocks</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>vpindex</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>runtime</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>synic</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>stimer</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>reset</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>vendor_id</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>frequencies</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>reenlightenment</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>tlbflush</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>ipi</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>avic</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>emsr_bitmap</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>xmm_input</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <defaults>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <spinlocks>4095</spinlocks>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <stimer_direct>on</stimer_direct>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </defaults>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </hyperv>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <launchSecurity supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='sectype'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>tdx</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </launchSecurity>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  </features>
Nov 29 02:43:59 np0005539552 nova_compute[233724]: </domainCapabilities>
Nov 29 02:43:59 np0005539552 nova_compute[233724]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.443 233728 DEBUG nova.virt.libvirt.host [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.447 233728 DEBUG nova.virt.libvirt.host [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 29 02:43:59 np0005539552 nova_compute[233724]: <domainCapabilities>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  <domain>kvm</domain>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  <arch>x86_64</arch>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  <vcpu max='4096'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  <iothreads supported='yes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  <os supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <enum name='firmware'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <value>efi</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <loader supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='type'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>rom</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>pflash</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='readonly'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>yes</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>no</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='secure'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>yes</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>no</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </loader>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  </os>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  <cpu>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <mode name='host-passthrough' supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='hostPassthroughMigratable'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>on</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>off</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </mode>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <mode name='maximum' supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='maximumMigratable'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>on</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>off</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </mode>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <mode name='host-model' supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <vendor>AMD</vendor>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='x2apic'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='hypervisor'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='stibp'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='ssbd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='overflow-recov'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='succor'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='ibrs'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='lbrv'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='tsc-scale'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='flushbyasid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='pause-filter'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='pfthreshold'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='disable' name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </mode>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <mode name='custom' supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Broadwell'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Broadwell-IBRS'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Broadwell-noTSX'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Broadwell-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Broadwell-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Broadwell-v3'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Broadwell-v4'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Cascadelake-Server'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Cooperlake'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Cooperlake-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Cooperlake-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Denverton'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='mpx'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Denverton-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='mpx'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Denverton-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Denverton-v3'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Dhyana-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='EPYC-Genoa'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amd-psfd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='auto-ibrs'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='stibp-always-on'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amd-psfd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='auto-ibrs'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='stibp-always-on'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='EPYC-Milan'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='EPYC-Milan-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='EPYC-Milan-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amd-psfd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='stibp-always-on'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='EPYC-Rome'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='EPYC-Rome-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='EPYC-Rome-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='EPYC-Rome-v3'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='EPYC-v3'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='EPYC-v4'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='GraniteRapids'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-fp16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='mcdt-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pbrsb-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='prefetchiti'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='GraniteRapids-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-fp16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='mcdt-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pbrsb-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='prefetchiti'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='GraniteRapids-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-fp16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx10'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx10-128'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx10-256'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx10-512'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='mcdt-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pbrsb-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='prefetchiti'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Haswell'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Haswell-IBRS'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Haswell-noTSX'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Haswell-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Haswell-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Haswell-v3'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Haswell-v4'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Icelake-Server'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Icelake-Server-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Icelake-Server-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Icelake-Server-v3'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Icelake-Server-v4'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Icelake-Server-v5'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Icelake-Server-v6'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Icelake-Server-v7'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='IvyBridge'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='IvyBridge-IBRS'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='IvyBridge-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='IvyBridge-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='KnightsMill'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-4fmaps'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-4vnniw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512er'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512pf'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='KnightsMill-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-4fmaps'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-4vnniw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512er'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512pf'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Opteron_G4'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fma4'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xop'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Opteron_G4-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fma4'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xop'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Opteron_G5'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fma4'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='tbm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xop'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Opteron_G5-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fma4'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='tbm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xop'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='SapphireRapids'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='SapphireRapids-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='SapphireRapids-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='SapphireRapids-v3'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='SierraForest'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-ne-convert'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-vnni-int8'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='cmpccxadd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='mcdt-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pbrsb-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='SierraForest-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-ne-convert'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-vnni-int8'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='cmpccxadd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='mcdt-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pbrsb-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Client'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Client-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Client-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Client-v3'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Client-v4'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Server'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Server-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Server-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Server-v3'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Server-v4'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Server-v5'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Snowridge'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='core-capability'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='mpx'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='split-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Snowridge-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='core-capability'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='mpx'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='split-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Snowridge-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='core-capability'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='split-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Snowridge-v3'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='core-capability'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='split-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Snowridge-v4'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='athlon'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='3dnow'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='3dnowext'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='athlon-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='3dnow'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='3dnowext'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='core2duo'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='core2duo-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='coreduo'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='coreduo-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='n270'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='n270-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='phenom'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='3dnow'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='3dnowext'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='phenom-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='3dnow'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='3dnowext'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </mode>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  <memoryBacking supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <enum name='sourceType'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <value>file</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <value>anonymous</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <value>memfd</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  </memoryBacking>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  <devices>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <disk supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='diskDevice'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>disk</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>cdrom</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>floppy</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>lun</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='bus'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>fdc</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>scsi</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>virtio</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>usb</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>sata</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='model'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>virtio</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>virtio-transitional</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>virtio-non-transitional</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </disk>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <graphics supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='type'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>vnc</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>egl-headless</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>dbus</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </graphics>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <video supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='modelType'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>vga</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>cirrus</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>virtio</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>none</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>bochs</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>ramfb</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </video>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <hostdev supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='mode'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>subsystem</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='startupPolicy'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>default</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>mandatory</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>requisite</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>optional</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='subsysType'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>usb</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>pci</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>scsi</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='capsType'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='pciBackend'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </hostdev>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <rng supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='model'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>virtio</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>virtio-transitional</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>virtio-non-transitional</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='backendModel'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>random</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>egd</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>builtin</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </rng>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <filesystem supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='driverType'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>path</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>handle</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>virtiofs</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </filesystem>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <tpm supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='model'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>tpm-tis</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>tpm-crb</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='backendModel'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>emulator</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>external</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='backendVersion'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>2.0</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </tpm>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <redirdev supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='bus'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>usb</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </redirdev>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <channel supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='type'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>pty</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>unix</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </channel>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <crypto supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='model'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='type'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>qemu</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='backendModel'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>builtin</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </crypto>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <interface supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='backendType'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>default</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>passt</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </interface>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <panic supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='model'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>isa</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>hyperv</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </panic>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <console supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='type'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>null</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>vc</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>pty</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>dev</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>file</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>pipe</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>stdio</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>udp</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>tcp</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>unix</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>qemu-vdagent</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>dbus</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </console>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  </devices>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  <features>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <gic supported='no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <vmcoreinfo supported='yes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <genid supported='yes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <backingStoreInput supported='yes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <backup supported='yes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <async-teardown supported='yes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <ps2 supported='yes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <sev supported='no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <sgx supported='no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <hyperv supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='features'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>relaxed</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>vapic</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>spinlocks</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>vpindex</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>runtime</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>synic</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>stimer</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>reset</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>vendor_id</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>frequencies</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>reenlightenment</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>tlbflush</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>ipi</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>avic</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>emsr_bitmap</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>xmm_input</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <defaults>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <spinlocks>4095</spinlocks>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <stimer_direct>on</stimer_direct>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </defaults>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </hyperv>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <launchSecurity supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='sectype'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>tdx</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </launchSecurity>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  </features>
Nov 29 02:43:59 np0005539552 nova_compute[233724]: </domainCapabilities>
Nov 29 02:43:59 np0005539552 nova_compute[233724]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.511 233728 DEBUG nova.virt.libvirt.host [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 29 02:43:59 np0005539552 nova_compute[233724]: <domainCapabilities>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  <domain>kvm</domain>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  <arch>x86_64</arch>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  <vcpu max='240'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  <iothreads supported='yes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  <os supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <enum name='firmware'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <loader supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='type'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>rom</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>pflash</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='readonly'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>yes</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>no</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='secure'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>no</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </loader>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  </os>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  <cpu>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <mode name='host-passthrough' supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='hostPassthroughMigratable'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>on</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>off</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </mode>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <mode name='maximum' supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='maximumMigratable'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>on</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>off</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </mode>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <mode name='host-model' supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <vendor>AMD</vendor>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='x2apic'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='hypervisor'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='stibp'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='ssbd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='overflow-recov'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='succor'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='ibrs'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='lbrv'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='tsc-scale'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='flushbyasid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='pause-filter'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='pfthreshold'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <feature policy='disable' name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </mode>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <mode name='custom' supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Broadwell'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Broadwell-IBRS'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Broadwell-noTSX'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Broadwell-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Broadwell-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Broadwell-v3'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Broadwell-v4'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Cascadelake-Server'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Cooperlake'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Cooperlake-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Cooperlake-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Denverton'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='mpx'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Denverton-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='mpx'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Denverton-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Denverton-v3'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Dhyana-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='EPYC-Genoa'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amd-psfd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='auto-ibrs'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='stibp-always-on'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amd-psfd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='auto-ibrs'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='stibp-always-on'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='EPYC-Milan'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='EPYC-Milan-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='EPYC-Milan-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amd-psfd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='stibp-always-on'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='EPYC-Rome'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='EPYC-Rome-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='EPYC-Rome-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='EPYC-Rome-v3'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='EPYC-v3'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='EPYC-v4'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='GraniteRapids'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-fp16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='mcdt-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pbrsb-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='prefetchiti'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='GraniteRapids-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-fp16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='mcdt-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pbrsb-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='prefetchiti'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='GraniteRapids-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-fp16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx10'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx10-128'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx10-256'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx10-512'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='mcdt-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pbrsb-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='prefetchiti'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Haswell'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Haswell-IBRS'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Haswell-noTSX'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Haswell-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Haswell-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Haswell-v3'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Haswell-v4'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Icelake-Server'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Icelake-Server-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Icelake-Server-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Icelake-Server-v3'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Icelake-Server-v4'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Icelake-Server-v5'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Icelake-Server-v6'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Icelake-Server-v7'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='IvyBridge'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='IvyBridge-IBRS'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='IvyBridge-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='IvyBridge-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='KnightsMill'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-4fmaps'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-4vnniw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512er'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512pf'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='KnightsMill-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-4fmaps'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-4vnniw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512er'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512pf'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Opteron_G4'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fma4'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xop'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Opteron_G4-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fma4'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xop'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Opteron_G5'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fma4'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='tbm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xop'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Opteron_G5-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fma4'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='tbm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xop'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='SapphireRapids'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='SapphireRapids-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='SapphireRapids-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='SapphireRapids-v3'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='SierraForest'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-ne-convert'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-vnni-int8'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='cmpccxadd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='mcdt-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pbrsb-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='SierraForest-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-ifma'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-ne-convert'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx-vnni-int8'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='cmpccxadd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='mcdt-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pbrsb-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Client'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Client-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Client-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Client-v3'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Client-v4'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Server'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Server-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Server-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Server-v3'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Server-v4'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Skylake-Server-v5'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Snowridge'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='core-capability'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='mpx'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='split-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Snowridge-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='core-capability'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='mpx'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='split-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Snowridge-v2'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='core-capability'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='split-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Snowridge-v3'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='core-capability'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='split-lock-detect'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='Snowridge-v4'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='athlon'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='3dnow'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='3dnowext'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='athlon-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='3dnow'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='3dnowext'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='core2duo'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='core2duo-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='coreduo'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='coreduo-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='n270'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='n270-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='phenom'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='3dnow'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='3dnowext'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <blockers model='phenom-v1'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='3dnow'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <feature name='3dnowext'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </blockers>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </mode>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  <memoryBacking supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <enum name='sourceType'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <value>file</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <value>anonymous</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <value>memfd</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  </memoryBacking>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  <devices>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <disk supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='diskDevice'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>disk</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>cdrom</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>floppy</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>lun</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='bus'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>ide</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>fdc</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>scsi</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>virtio</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>usb</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>sata</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='model'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>virtio</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>virtio-transitional</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>virtio-non-transitional</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </disk>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <graphics supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='type'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>vnc</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>egl-headless</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>dbus</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </graphics>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <video supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='modelType'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>vga</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>cirrus</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>virtio</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>none</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>bochs</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>ramfb</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </video>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <hostdev supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='mode'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>subsystem</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='startupPolicy'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>default</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>mandatory</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>requisite</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>optional</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='subsysType'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>usb</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>pci</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>scsi</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='capsType'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='pciBackend'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </hostdev>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <rng supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='model'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>virtio</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>virtio-transitional</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>virtio-non-transitional</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='backendModel'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>random</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>egd</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>builtin</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </rng>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <filesystem supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='driverType'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>path</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>handle</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>virtiofs</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </filesystem>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <tpm supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='model'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>tpm-tis</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>tpm-crb</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='backendModel'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>emulator</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>external</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='backendVersion'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>2.0</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </tpm>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <redirdev supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='bus'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>usb</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </redirdev>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <channel supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='type'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>pty</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>unix</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </channel>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <crypto supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='model'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='type'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>qemu</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='backendModel'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>builtin</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </crypto>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <interface supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='backendType'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>default</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>passt</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </interface>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <panic supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='model'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>isa</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>hyperv</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </panic>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <console supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='type'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>null</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>vc</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>pty</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>dev</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>file</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>pipe</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>stdio</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>udp</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>tcp</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>unix</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>qemu-vdagent</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>dbus</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </console>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  </devices>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  <features>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <gic supported='no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <vmcoreinfo supported='yes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <genid supported='yes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <backingStoreInput supported='yes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <backup supported='yes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <async-teardown supported='yes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <ps2 supported='yes'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <sev supported='no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <sgx supported='no'/>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <hyperv supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='features'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>relaxed</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>vapic</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>spinlocks</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>vpindex</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>runtime</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>synic</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>stimer</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>reset</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>vendor_id</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>frequencies</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>reenlightenment</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>tlbflush</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>ipi</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>avic</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>emsr_bitmap</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>xmm_input</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <defaults>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <spinlocks>4095</spinlocks>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <stimer_direct>on</stimer_direct>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </defaults>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </hyperv>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    <launchSecurity supported='yes'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      <enum name='sectype'>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:        <value>tdx</value>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:      </enum>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:    </launchSecurity>
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  </features>
Nov 29 02:43:59 np0005539552 nova_compute[233724]: </domainCapabilities>
Nov 29 02:43:59 np0005539552 nova_compute[233724]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.581 233728 DEBUG nova.virt.libvirt.host [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.581 233728 INFO nova.virt.libvirt.host [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Secure Boot support detected#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.583 233728 INFO nova.virt.libvirt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.583 233728 INFO nova.virt.libvirt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.591 233728 DEBUG nova.virt.libvirt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] cpu compare xml: <cpu match="exact">
Nov 29 02:43:59 np0005539552 nova_compute[233724]:  <model>Nehalem</model>
Nov 29 02:43:59 np0005539552 nova_compute[233724]: </cpu>
Nov 29 02:43:59 np0005539552 nova_compute[233724]: _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.593 233728 DEBUG nova.virt.libvirt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.622 233728 DEBUG nova.virt.libvirt.volume.mount [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.777 233728 INFO nova.virt.node [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Determined node identity 29c97280-aaf3-4c7f-a78a-1c9e8d025371 from /var/lib/nova/compute_id#033[00m
Nov 29 02:43:59 np0005539552 nova_compute[233724]: 2025-11-29 07:43:59.859 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Verified node 29c97280-aaf3-4c7f-a78a-1c9e8d025371 matches my host compute-2.ctlplane.example.com _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568#033[00m
Nov 29 02:43:59 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3bf0ebc12ed7fe40e982148a01f95ee47e1630b089f8aeeaced87812617999be-userdata-shm.mount: Deactivated successfully.
Nov 29 02:43:59 np0005539552 systemd[1]: var-lib-containers-storage-overlay-969ff346d6d8892709f50433c6dad48dd3347d3855054c19ffd0a39bc07f67d8-merged.mount: Deactivated successfully.
Nov 29 02:43:59 np0005539552 podman[233972]: 2025-11-29 07:43:59.962481086 +0000 UTC m=+0.545951309 container cleanup 3bf0ebc12ed7fe40e982148a01f95ee47e1630b089f8aeeaced87812617999be (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 02:43:59 np0005539552 systemd[1]: libpod-conmon-3bf0ebc12ed7fe40e982148a01f95ee47e1630b089f8aeeaced87812617999be.scope: Deactivated successfully.
Nov 29 02:44:00 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:00 np0005539552 nova_compute[233724]: 2025-11-29 07:44:00.015 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Nov 29 02:44:00 np0005539552 nova_compute[233724]: 2025-11-29 07:44:00.067 233728 ERROR nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Could not retrieve compute node resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 and therefore unable to error out any instances stuck in BUILDING state. Error: Failed to retrieve allocations for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider '29c97280-aaf3-4c7f-a78a-1c9e8d025371' not found: No resource provider with uuid 29c97280-aaf3-4c7f-a78a-1c9e8d025371 found  ", "request_id": "req-60cda697-ec98-4de5-bc67-08a5c5cb21e8"}]}: nova.exception.ResourceProviderAllocationRetrievalFailed: Failed to retrieve allocations for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider '29c97280-aaf3-4c7f-a78a-1c9e8d025371' not found: No resource provider with uuid 29c97280-aaf3-4c7f-a78a-1c9e8d025371 found  ", "request_id": "req-60cda697-ec98-4de5-bc67-08a5c5cb21e8"}]}#033[00m
Nov 29 02:44:00 np0005539552 nova_compute[233724]: 2025-11-29 07:44:00.212 233728 DEBUG oslo_concurrency.lockutils [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:44:00 np0005539552 nova_compute[233724]: 2025-11-29 07:44:00.213 233728 DEBUG oslo_concurrency.lockutils [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:44:00 np0005539552 nova_compute[233724]: 2025-11-29 07:44:00.213 233728 DEBUG oslo_concurrency.lockutils [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:44:00 np0005539552 nova_compute[233724]: 2025-11-29 07:44:00.213 233728 DEBUG nova.compute.resource_tracker [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:44:00 np0005539552 nova_compute[233724]: 2025-11-29 07:44:00.214 233728 DEBUG oslo_concurrency.processutils [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:44:00 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:44:00 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1703730179' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:44:00 np0005539552 nova_compute[233724]: 2025-11-29 07:44:00.789 233728 DEBUG oslo_concurrency.processutils [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:44:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:00.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:00 np0005539552 systemd[1]: session-51.scope: Deactivated successfully.
Nov 29 02:44:00 np0005539552 systemd[1]: session-51.scope: Consumed 2min 19.634s CPU time.
Nov 29 02:44:00 np0005539552 systemd-logind[788]: Session 51 logged out. Waiting for processes to exit.
Nov 29 02:44:00 np0005539552 systemd-logind[788]: Removed session 51.
Nov 29 02:44:00 np0005539552 nova_compute[233724]: 2025-11-29 07:44:00.945 233728 WARNING nova.virt.libvirt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:44:00 np0005539552 nova_compute[233724]: 2025-11-29 07:44:00.946 233728 DEBUG nova.compute.resource_tracker [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5284MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:44:00 np0005539552 nova_compute[233724]: 2025-11-29 07:44:00.947 233728 DEBUG oslo_concurrency.lockutils [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:44:00 np0005539552 nova_compute[233724]: 2025-11-29 07:44:00.947 233728 DEBUG oslo_concurrency.lockutils [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:44:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:01.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:01 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:44:02 np0005539552 nova_compute[233724]: 2025-11-29 07:44:02.483 233728 ERROR nova.compute.resource_tracker [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Skipping removal of allocations for deleted instances: Failed to retrieve allocations for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider '29c97280-aaf3-4c7f-a78a-1c9e8d025371' not found: No resource provider with uuid 29c97280-aaf3-4c7f-a78a-1c9e8d025371 found  ", "request_id": "req-65746d9b-2051-4b07-a68e-3bb3713f0e4d"}]}: nova.exception.ResourceProviderAllocationRetrievalFailed: Failed to retrieve allocations for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider '29c97280-aaf3-4c7f-a78a-1c9e8d025371' not found: No resource provider with uuid 29c97280-aaf3-4c7f-a78a-1c9e8d025371 found  ", "request_id": "req-65746d9b-2051-4b07-a68e-3bb3713f0e4d"}]}#033[00m
Nov 29 02:44:02 np0005539552 nova_compute[233724]: 2025-11-29 07:44:02.484 233728 DEBUG nova.compute.resource_tracker [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:44:02 np0005539552 nova_compute[233724]: 2025-11-29 07:44:02.484 233728 DEBUG nova.compute.resource_tracker [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:44:02 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:44:02 np0005539552 nova_compute[233724]: 2025-11-29 07:44:02.639 233728 INFO nova.scheduler.client.report [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [req-9169b9bb-de9c-4952-8d9b-f3751bd107e4] Created resource provider record via placement API for resource provider with UUID 29c97280-aaf3-4c7f-a78a-1c9e8d025371 and name compute-2.ctlplane.example.com.#033[00m
Nov 29 02:44:02 np0005539552 nova_compute[233724]: 2025-11-29 07:44:02.659 233728 DEBUG oslo_concurrency.processutils [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:44:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:44:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:02.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:44:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:03.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:44:03 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4151174693' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:44:03 np0005539552 nova_compute[233724]: 2025-11-29 07:44:03.085 233728 DEBUG oslo_concurrency.processutils [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:44:03 np0005539552 nova_compute[233724]: 2025-11-29 07:44:03.091 233728 DEBUG nova.virt.libvirt.host [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Nov 29 02:44:03 np0005539552 nova_compute[233724]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Nov 29 02:44:03 np0005539552 nova_compute[233724]: 2025-11-29 07:44:03.091 233728 INFO nova.virt.libvirt.host [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] kernel doesn't support AMD SEV#033[00m
Nov 29 02:44:03 np0005539552 nova_compute[233724]: 2025-11-29 07:44:03.092 233728 DEBUG nova.compute.provider_tree [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Updating inventory in ProviderTree for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 02:44:03 np0005539552 nova_compute[233724]: 2025-11-29 07:44:03.093 233728 DEBUG nova.virt.libvirt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:44:03 np0005539552 nova_compute[233724]: 2025-11-29 07:44:03.095 233728 DEBUG nova.virt.libvirt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Libvirt baseline CPU <cpu>
Nov 29 02:44:03 np0005539552 nova_compute[233724]:  <arch>x86_64</arch>
Nov 29 02:44:03 np0005539552 nova_compute[233724]:  <model>Nehalem</model>
Nov 29 02:44:03 np0005539552 nova_compute[233724]:  <vendor>AMD</vendor>
Nov 29 02:44:03 np0005539552 nova_compute[233724]:  <topology sockets="8" cores="1" threads="1"/>
Nov 29 02:44:03 np0005539552 nova_compute[233724]: </cpu>
Nov 29 02:44:03 np0005539552 nova_compute[233724]: _get_guest_baseline_cpu_features /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12537#033[00m
Nov 29 02:44:03 np0005539552 nova_compute[233724]: 2025-11-29 07:44:03.286 233728 DEBUG nova.scheduler.client.report [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Updated inventory for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Nov 29 02:44:03 np0005539552 nova_compute[233724]: 2025-11-29 07:44:03.286 233728 DEBUG nova.compute.provider_tree [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Updating resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Nov 29 02:44:03 np0005539552 nova_compute[233724]: 2025-11-29 07:44:03.286 233728 DEBUG nova.compute.provider_tree [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Updating inventory in ProviderTree for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 02:44:03 np0005539552 nova_compute[233724]: 2025-11-29 07:44:03.395 233728 DEBUG nova.compute.provider_tree [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Updating resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Nov 29 02:44:03 np0005539552 nova_compute[233724]: 2025-11-29 07:44:03.507 233728 DEBUG nova.compute.resource_tracker [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:44:03 np0005539552 nova_compute[233724]: 2025-11-29 07:44:03.507 233728 DEBUG oslo_concurrency.lockutils [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:44:03 np0005539552 nova_compute[233724]: 2025-11-29 07:44:03.508 233728 DEBUG nova.service [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Nov 29 02:44:03 np0005539552 nova_compute[233724]: 2025-11-29 07:44:03.767 233728 DEBUG nova.service [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Nov 29 02:44:03 np0005539552 nova_compute[233724]: 2025-11-29 07:44:03.768 233728 DEBUG nova.servicegroup.drivers.db [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] DB_Driver: join new ServiceGroup member compute-2.ctlplane.example.com to the compute group, service = <Service: host=compute-2.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Nov 29 02:44:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:04.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:05.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:05 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:06.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:06 np0005539552 podman[234176]: 2025-11-29 07:44:06.968465461 +0000 UTC m=+0.050295405 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 02:44:06 np0005539552 podman[234175]: 2025-11-29 07:44:06.975834948 +0000 UTC m=+0.060646022 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 29 02:44:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:44:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:07.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:44:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:08.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:09.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:10 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:44:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:10.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:44:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:11.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:11 np0005539552 podman[234212]: 2025-11-29 07:44:11.033394998 +0000 UTC m=+0.113268369 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 02:44:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:12.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:13.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:44:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:14.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:44:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:15.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:15 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:44:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:16.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:44:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:17.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:18.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:19.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:19 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0.
Nov 29 02:44:19 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:44:19.124422) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:44:19 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34
Nov 29 02:44:19 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402259124492, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 3339, "num_deletes": 502, "total_data_size": 7986372, "memory_usage": 8093632, "flush_reason": "Manual Compaction"}
Nov 29 02:44:19 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started
Nov 29 02:44:19 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402259888598, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 5160983, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15783, "largest_seqno": 19117, "table_properties": {"data_size": 5148064, "index_size": 8133, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3717, "raw_key_size": 29764, "raw_average_key_size": 20, "raw_value_size": 5119991, "raw_average_value_size": 3466, "num_data_blocks": 361, "num_entries": 1477, "num_filter_entries": 1477, "num_deletions": 502, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401835, "oldest_key_time": 1764401835, "file_creation_time": 1764402259, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:44:19 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 764236 microseconds, and 12014 cpu microseconds.
Nov 29 02:44:19 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:44:19 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:44:19.888667) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 5160983 bytes OK
Nov 29 02:44:19 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:44:19.888683) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started
Nov 29 02:44:19 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:44:19.895914) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done
Nov 29 02:44:19 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:44:19.895948) EVENT_LOG_v1 {"time_micros": 1764402259895940, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:44:19 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:44:19.895967) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:44:19 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 7971245, prev total WAL file size 8038179, number of live WAL files 2.
Nov 29 02:44:19 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:44:19 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:44:19.897845) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Nov 29 02:44:19 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:44:19 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(5040KB)], [33(9674KB)]
Nov 29 02:44:19 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402259897877, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 15067961, "oldest_snapshot_seqno": -1}
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 5093 keys, 10632555 bytes, temperature: kUnknown
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402260038019, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 36, "file_size": 10632555, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10595897, "index_size": 22852, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12741, "raw_key_size": 127545, "raw_average_key_size": 25, "raw_value_size": 10501190, "raw_average_value_size": 2061, "num_data_blocks": 956, "num_entries": 5093, "num_filter_entries": 5093, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764402259, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:44:20.038770) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 10632555 bytes
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:44:20.040803) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 107.4 rd, 75.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.9, 9.4 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(5.0) write-amplify(2.1) OK, records in: 6122, records dropped: 1029 output_compression: NoCompression
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:44:20.040841) EVENT_LOG_v1 {"time_micros": 1764402260040824, "job": 18, "event": "compaction_finished", "compaction_time_micros": 140355, "compaction_time_cpu_micros": 23509, "output_level": 6, "num_output_files": 1, "total_output_size": 10632555, "num_input_records": 6122, "num_output_records": 5093, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402260041943, "job": 18, "event": "table_file_deletion", "file_number": 35}
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402260044573, "job": 18, "event": "table_file_deletion", "file_number": 33}
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:44:19.897756) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:44:20.044712) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:44:20.044719) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:44:20.044721) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:44:20.044723) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:44:20.044725) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #37. Immutable memtables: 0.
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:44:20.045127) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 37
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402260045188, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 269, "num_deletes": 256, "total_data_size": 69049, "memory_usage": 76184, "flush_reason": "Manual Compaction"}
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #38: started
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402260048724, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 38, "file_size": 45593, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19119, "largest_seqno": 19386, "table_properties": {"data_size": 43720, "index_size": 102, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4300, "raw_average_key_size": 16, "raw_value_size": 40131, "raw_average_value_size": 149, "num_data_blocks": 4, "num_entries": 268, "num_filter_entries": 268, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402259, "oldest_key_time": 1764402259, "file_creation_time": 1764402260, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 3642 microseconds, and 1105 cpu microseconds.
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:44:20.048774) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #38: 45593 bytes OK
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:44:20.048799) [db/memtable_list.cc:519] [default] Level-0 commit table #38 started
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:44:20.050089) [db/memtable_list.cc:722] [default] Level-0 commit table #38: memtable #1 done
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:44:20.050106) EVENT_LOG_v1 {"time_micros": 1764402260050100, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:44:20.050130) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 66934, prev total WAL file size 66934, number of live WAL files 2.
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000034.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:44:20.050596) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323533' seq:0, type:0; will stop at (end)
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [38(44KB)], [36(10MB)]
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402260050665, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [38], "files_L6": [36], "score": -1, "input_data_size": 10678148, "oldest_snapshot_seqno": -1}
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #39: 4842 keys, 10253341 bytes, temperature: kUnknown
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402260357423, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 39, "file_size": 10253341, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10218701, "index_size": 21410, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12165, "raw_key_size": 123649, "raw_average_key_size": 25, "raw_value_size": 10128667, "raw_average_value_size": 2091, "num_data_blocks": 878, "num_entries": 4842, "num_filter_entries": 4842, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764402260, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 39, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:44:20.390474) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 10253341 bytes
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:44:20.393539) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 34.8 rd, 33.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 10.1 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(459.1) write-amplify(224.9) OK, records in: 5361, records dropped: 519 output_compression: NoCompression
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:44:20.393587) EVENT_LOG_v1 {"time_micros": 1764402260393570, "job": 20, "event": "compaction_finished", "compaction_time_micros": 306814, "compaction_time_cpu_micros": 31316, "output_level": 6, "num_output_files": 1, "total_output_size": 10253341, "num_input_records": 5361, "num_output_records": 4842, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402260393821, "job": 20, "event": "table_file_deletion", "file_number": 38}
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000036.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402260396998, "job": 20, "event": "table_file_deletion", "file_number": 36}
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:44:20.050496) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:44:20.397049) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:44:20.397056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:44:20.397059) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:44:20.397062) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:44:20 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:44:20.397065) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:44:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:44:20.593 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:44:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:44:20.594 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:44:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:44:20.594 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:44:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:44:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:20.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:44:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:44:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:21.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:44:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:22.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:23.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:24.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:25.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:25 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:26.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:27.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:44:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:28.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:44:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:29.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:30 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:30.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:31.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:44:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:32.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:44:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:33.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:34.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:35.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:35 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:36.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:37.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:37 np0005539552 podman[234303]: 2025-11-29 07:44:37.958398578 +0000 UTC m=+0.052373121 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd)
Nov 29 02:44:37 np0005539552 podman[234304]: 2025-11-29 07:44:37.961281475 +0000 UTC m=+0.051327683 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible)
Nov 29 02:44:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:38.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:39.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:40 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:40.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:44:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:41.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:44:41 np0005539552 podman[234367]: 2025-11-29 07:44:41.184443886 +0000 UTC m=+0.087172171 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible)
Nov 29 02:44:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:42.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:43.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:44 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 02:44:44 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3321841197' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 02:44:44 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 02:44:44 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3321841197' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 02:44:44 np0005539552 nova_compute[233724]: 2025-11-29 07:44:44.770 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:44:44 np0005539552 nova_compute[233724]: 2025-11-29 07:44:44.833 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:44:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:44.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:45.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:45 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:45 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 02:44:45 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4244030242' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 02:44:45 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 02:44:45 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4244030242' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 02:44:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:44:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:46.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:44:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:47.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:48.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:44:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:49.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:44:50 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:50.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:51.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:44:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:52.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:44:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:53.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:54.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:55.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:55 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:44:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:56.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:44:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:44:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:57.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:44:57 np0005539552 nova_compute[233724]: 2025-11-29 07:44:57.926 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:44:57 np0005539552 nova_compute[233724]: 2025-11-29 07:44:57.926 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:44:57 np0005539552 nova_compute[233724]: 2025-11-29 07:44:57.927 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:44:57 np0005539552 nova_compute[233724]: 2025-11-29 07:44:57.927 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:44:58 np0005539552 nova_compute[233724]: 2025-11-29 07:44:58.663 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 02:44:58 np0005539552 nova_compute[233724]: 2025-11-29 07:44:58.664 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:44:58 np0005539552 nova_compute[233724]: 2025-11-29 07:44:58.664 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:44:58 np0005539552 nova_compute[233724]: 2025-11-29 07:44:58.665 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:44:58 np0005539552 nova_compute[233724]: 2025-11-29 07:44:58.665 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:44:58 np0005539552 nova_compute[233724]: 2025-11-29 07:44:58.665 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:44:58 np0005539552 nova_compute[233724]: 2025-11-29 07:44:58.665 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:44:58 np0005539552 nova_compute[233724]: 2025-11-29 07:44:58.666 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:44:58 np0005539552 nova_compute[233724]: 2025-11-29 07:44:58.666 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:44:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:44:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:58.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:44:58 np0005539552 nova_compute[233724]: 2025-11-29 07:44:58.979 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:44:58 np0005539552 nova_compute[233724]: 2025-11-29 07:44:58.980 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:44:58 np0005539552 nova_compute[233724]: 2025-11-29 07:44:58.980 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:44:58 np0005539552 nova_compute[233724]: 2025-11-29 07:44:58.981 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:44:58 np0005539552 nova_compute[233724]: 2025-11-29 07:44:58.981 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:44:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:44:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:44:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:59.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:44:59 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:44:59 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/705577900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:44:59 np0005539552 nova_compute[233724]: 2025-11-29 07:44:59.441 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:44:59 np0005539552 nova_compute[233724]: 2025-11-29 07:44:59.624 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:44:59 np0005539552 nova_compute[233724]: 2025-11-29 07:44:59.625 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5303MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:44:59 np0005539552 nova_compute[233724]: 2025-11-29 07:44:59.625 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:44:59 np0005539552 nova_compute[233724]: 2025-11-29 07:44:59.626 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:45:00 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:00.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:01.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:02.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:03 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:03.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:04 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:04 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:04 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:04.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:05 np0005539552 nova_compute[233724]: 2025-11-29 07:45:05.028 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:45:05 np0005539552 nova_compute[233724]: 2025-11-29 07:45:05.028 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:45:05 np0005539552 nova_compute[233724]: 2025-11-29 07:45:05.077 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:45:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:05.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:05 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:05 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:45:05 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2533741022' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:45:05 np0005539552 nova_compute[233724]: 2025-11-29 07:45:05.593 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:45:05 np0005539552 nova_compute[233724]: 2025-11-29 07:45:05.604 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:45:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:06.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:06 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:06 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:07.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:07 np0005539552 nova_compute[233724]: 2025-11-29 07:45:07.309 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:45:07 np0005539552 nova_compute[233724]: 2025-11-29 07:45:07.311 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:45:07 np0005539552 nova_compute[233724]: 2025-11-29 07:45:07.312 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 7.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:45:08 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:45:08 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 3507 writes, 19K keys, 3507 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 3507 writes, 3507 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1102 writes, 5301 keys, 1102 commit groups, 1.0 writes per commit group, ingest: 12.05 MB, 0.02 MB/s#012Interval WAL: 1102 writes, 1102 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     13.0      1.86              0.07        10    0.186       0      0       0.0       0.0#012  L6      1/0    9.78 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.8     62.0     53.6      1.72              0.25         9    0.191     44K   4545       0.0       0.0#012 Sum      1/0    9.78 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.8     29.8     32.6      3.58              0.32        19    0.188     44K   4545       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.1     28.4     28.4      1.29              0.10         6    0.215     16K   2002       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     62.0     53.6      1.72              0.25         9    0.191     44K   4545       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.4      1.81              0.07         9    0.202       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.044       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1800.1 total, 600.0 interval#012Flush(GB): cumulative 0.024, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.11 GB write, 0.06 MB/s write, 0.10 GB read, 0.06 MB/s read, 3.6 seconds#012Interval compaction: 0.04 GB write, 0.06 MB/s write, 0.04 GB read, 0.06 MB/s read, 1.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560bf13bd1f0#2 capacity: 304.00 MB usage: 7.47 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 7.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(415,7.08 MB,2.32922%) FilterBlock(19,128.05 KB,0.0411335%) IndexBlock(19,265.47 KB,0.0852786%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 02:45:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:08.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:08 np0005539552 podman[234896]: 2025-11-29 07:45:08.983753595 +0000 UTC m=+0.063546460 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 02:45:09 np0005539552 podman[234897]: 2025-11-29 07:45:09.011659211 +0000 UTC m=+0.091420645 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, 
org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:45:09 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:09 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:09.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:10.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:11 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:11.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:11 np0005539552 podman[234938]: 2025-11-29 07:45:11.993611983 +0000 UTC m=+0.086226096 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 02:45:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:12.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:13.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:13 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:14.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:15.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:15 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:15 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 02:45:16 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:16.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:17.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:18.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:19.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:45:20.594 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:45:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:45:20.595 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:45:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:45:20.595 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:45:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:20.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:45:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:21.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:45:21 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:22.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:45:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:23.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:45:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:24.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:25.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:26 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:26 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:26.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:27.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:28.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:29.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:29 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:29 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 02:45:29 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:45:30 np0005539552 nova_compute[233724]: 2025-11-29 07:45:30.316 233728 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 1.54 sec#033[00m
Nov 29 02:45:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:45:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:30.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:45:31 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:31 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:45:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:31.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:31 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:32.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:33.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:34.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:35.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:36 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:36.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:37 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:37 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:45:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:37.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:45:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:38.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:39.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:39 np0005539552 podman[235080]: 2025-11-29 07:45:39.974237852 +0000 UTC m=+0.060462317 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Nov 29 02:45:39 np0005539552 podman[235079]: 2025-11-29 07:45:39.975085095 +0000 UTC m=+0.064113355 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:45:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:40.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:41.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:41 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:42.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:43 np0005539552 podman[235168]: 2025-11-29 07:45:43.038049801 +0000 UTC m=+0.115152750 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 29 02:45:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:43.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:44 np0005539552 ceph-mgr[77480]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1950343944
Nov 29 02:45:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:44.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:45.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:46 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:46.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:47.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:48.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:49.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:50.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:51.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:51 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:45:51 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.3 total, 600.0 interval#012Cumulative writes: 5965 writes, 24K keys, 5965 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5965 writes, 1088 syncs, 5.48 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 379 writes, 574 keys, 379 commit groups, 1.0 writes per commit group, ingest: 0.18 MB, 0.00 MB/s#012Interval WAL: 379 writes, 174 syncs, 2.18 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 02:45:51 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:52.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:53.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:54.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:55.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:56 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:45:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:56.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:45:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:57.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:58.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:45:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:59.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:00.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:01.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:46:01 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:02.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:03.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:04.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:05.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:06 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:06.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:46:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:07.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:46:07 np0005539552 nova_compute[233724]: 2025-11-29 07:46:07.304 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:46:07 np0005539552 nova_compute[233724]: 2025-11-29 07:46:07.305 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:46:07 np0005539552 nova_compute[233724]: 2025-11-29 07:46:07.685 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:46:07 np0005539552 nova_compute[233724]: 2025-11-29 07:46:07.685 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:46:07 np0005539552 nova_compute[233724]: 2025-11-29 07:46:07.685 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:46:07 np0005539552 nova_compute[233724]: 2025-11-29 07:46:07.753 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 02:46:07 np0005539552 nova_compute[233724]: 2025-11-29 07:46:07.753 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:46:07 np0005539552 nova_compute[233724]: 2025-11-29 07:46:07.754 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:46:07 np0005539552 nova_compute[233724]: 2025-11-29 07:46:07.754 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:46:07 np0005539552 nova_compute[233724]: 2025-11-29 07:46:07.754 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:46:07 np0005539552 nova_compute[233724]: 2025-11-29 07:46:07.754 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:46:07 np0005539552 nova_compute[233724]: 2025-11-29 07:46:07.754 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:46:07 np0005539552 nova_compute[233724]: 2025-11-29 07:46:07.755 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:46:07 np0005539552 nova_compute[233724]: 2025-11-29 07:46:07.755 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:46:07 np0005539552 nova_compute[233724]: 2025-11-29 07:46:07.835 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:46:07 np0005539552 nova_compute[233724]: 2025-11-29 07:46:07.836 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:46:07 np0005539552 nova_compute[233724]: 2025-11-29 07:46:07.836 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:46:07 np0005539552 nova_compute[233724]: 2025-11-29 07:46:07.837 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:46:07 np0005539552 nova_compute[233724]: 2025-11-29 07:46:07.837 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:46:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:46:08 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3155065240' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:46:08 np0005539552 nova_compute[233724]: 2025-11-29 07:46:08.298 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:46:08 np0005539552 nova_compute[233724]: 2025-11-29 07:46:08.458 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:46:08 np0005539552 nova_compute[233724]: 2025-11-29 07:46:08.459 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5294MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:46:08 np0005539552 nova_compute[233724]: 2025-11-29 07:46:08.459 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:46:08 np0005539552 nova_compute[233724]: 2025-11-29 07:46:08.460 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:46:08 np0005539552 nova_compute[233724]: 2025-11-29 07:46:08.637 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:46:08 np0005539552 nova_compute[233724]: 2025-11-29 07:46:08.637 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:46:08 np0005539552 nova_compute[233724]: 2025-11-29 07:46:08.655 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:46:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:08.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:46:09 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:46:09 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3496743474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:46:09 np0005539552 nova_compute[233724]: 2025-11-29 07:46:09.082 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:46:09 np0005539552 nova_compute[233724]: 2025-11-29 07:46:09.090 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:46:09 np0005539552 nova_compute[233724]: 2025-11-29 07:46:09.250 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:46:09 np0005539552 nova_compute[233724]: 2025-11-29 07:46:09.252 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:46:09 np0005539552 nova_compute[233724]: 2025-11-29 07:46:09.252 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:46:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:09.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:10.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:10 np0005539552 podman[235302]: 2025-11-29 07:46:10.967934165 +0000 UTC m=+0.059917813 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3)
Nov 29 02:46:10 np0005539552 podman[235303]: 2025-11-29 07:46:10.987408306 +0000 UTC m=+0.068406790 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 02:46:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:11.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:11 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:12.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:13.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:46:13 np0005539552 podman[235341]: 2025-11-29 07:46:13.988759046 +0000 UTC m=+0.078891601 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 02:46:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:46:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:14.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:46:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:15.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:16 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:16.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:17.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:46:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:18.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:46:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:19.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:46:20.595 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:46:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:46:20.596 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:46:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:46:20.596 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:46:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:46:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:20.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:46:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:21.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:22.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:23.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:24.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:25.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:46:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:46:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:26.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:46:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:27.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:28.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:29.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:30.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:31.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:32 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:32.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:46:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:33.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:46:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:34.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:46:34 np0005539552 radosgw[83248]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Nov 29 02:46:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:35.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:36 np0005539552 radosgw[83248]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Nov 29 02:46:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:36.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:46:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:37.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:37 np0005539552 podman[235702]: 2025-11-29 07:46:37.682280357 +0000 UTC m=+0.037238657 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:46:37 np0005539552 radosgw[83248]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Nov 29 02:46:38 np0005539552 podman[235702]: 2025-11-29 07:46:38.013458032 +0000 UTC m=+0.368416332 container create e3df26d4ae56e25d8558709d03f6ce8354ce131ffa084853a206d4b88a25b027 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mcclintock, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:46:38 np0005539552 radosgw[83248]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Nov 29 02:46:38 np0005539552 systemd[1]: Started libpod-conmon-e3df26d4ae56e25d8558709d03f6ce8354ce131ffa084853a206d4b88a25b027.scope.
Nov 29 02:46:38 np0005539552 systemd[1]: Started libcrun container.
Nov 29 02:46:38 np0005539552 podman[235702]: 2025-11-29 07:46:38.541924921 +0000 UTC m=+0.896883181 container init e3df26d4ae56e25d8558709d03f6ce8354ce131ffa084853a206d4b88a25b027 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 02:46:38 np0005539552 podman[235702]: 2025-11-29 07:46:38.548446306 +0000 UTC m=+0.903404586 container start e3df26d4ae56e25d8558709d03f6ce8354ce131ffa084853a206d4b88a25b027 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mcclintock, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 02:46:38 np0005539552 awesome_mcclintock[235718]: 167 167
Nov 29 02:46:38 np0005539552 systemd[1]: libpod-e3df26d4ae56e25d8558709d03f6ce8354ce131ffa084853a206d4b88a25b027.scope: Deactivated successfully.
Nov 29 02:46:38 np0005539552 conmon[235718]: conmon e3df26d4ae56e25d8558 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e3df26d4ae56e25d8558709d03f6ce8354ce131ffa084853a206d4b88a25b027.scope/container/memory.events
Nov 29 02:46:38 np0005539552 podman[235702]: 2025-11-29 07:46:38.636179372 +0000 UTC m=+0.991137632 container attach e3df26d4ae56e25d8558709d03f6ce8354ce131ffa084853a206d4b88a25b027 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mcclintock, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:46:38 np0005539552 podman[235702]: 2025-11-29 07:46:38.636689485 +0000 UTC m=+0.991647745 container died e3df26d4ae56e25d8558709d03f6ce8354ce131ffa084853a206d4b88a25b027 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:46:38 np0005539552 systemd[1]: var-lib-containers-storage-overlay-aa37c21b23d51784f981d93c081d8d855df16c4c088e9974273fa4334cd131f9-merged.mount: Deactivated successfully.
Nov 29 02:46:38 np0005539552 podman[235702]: 2025-11-29 07:46:38.793305123 +0000 UTC m=+1.148263393 container remove e3df26d4ae56e25d8558709d03f6ce8354ce131ffa084853a206d4b88a25b027 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mcclintock, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:46:38 np0005539552 systemd[1]: libpod-conmon-e3df26d4ae56e25d8558709d03f6ce8354ce131ffa084853a206d4b88a25b027.scope: Deactivated successfully.
Nov 29 02:46:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:38.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:46:39 np0005539552 podman[235743]: 2025-11-29 07:46:38.958171721 +0000 UTC m=+0.023492709 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:46:39 np0005539552 podman[235743]: 2025-11-29 07:46:39.160481671 +0000 UTC m=+0.225802579 container create ab1f1931141601be381032726b7c0f17733f408faba7c03608b54c55d76b8623 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_boyd, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:46:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:39.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:39 np0005539552 radosgw[83248]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Nov 29 02:46:39 np0005539552 systemd[1]: Started libpod-conmon-ab1f1931141601be381032726b7c0f17733f408faba7c03608b54c55d76b8623.scope.
Nov 29 02:46:39 np0005539552 systemd[1]: Started libcrun container.
Nov 29 02:46:39 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e41dd90bf3efd218e11207701c1448ca8c017e92113a9c045f2b51e3d5d0ed97/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:46:39 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e41dd90bf3efd218e11207701c1448ca8c017e92113a9c045f2b51e3d5d0ed97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:46:39 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e41dd90bf3efd218e11207701c1448ca8c017e92113a9c045f2b51e3d5d0ed97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:46:39 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e41dd90bf3efd218e11207701c1448ca8c017e92113a9c045f2b51e3d5d0ed97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:46:39 np0005539552 podman[235743]: 2025-11-29 07:46:39.499246619 +0000 UTC m=+0.564567547 container init ab1f1931141601be381032726b7c0f17733f408faba7c03608b54c55d76b8623 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_boyd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:46:39 np0005539552 podman[235743]: 2025-11-29 07:46:39.50678377 +0000 UTC m=+0.572104688 container start ab1f1931141601be381032726b7c0f17733f408faba7c03608b54c55d76b8623 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_boyd, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:46:39 np0005539552 podman[235743]: 2025-11-29 07:46:39.516472349 +0000 UTC m=+0.581793257 container attach ab1f1931141601be381032726b7c0f17733f408faba7c03608b54c55d76b8623 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 02:46:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:40.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:41.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:41 np0005539552 wizardly_boyd[235760]: [
Nov 29 02:46:41 np0005539552 wizardly_boyd[235760]:    {
Nov 29 02:46:41 np0005539552 wizardly_boyd[235760]:        "available": false,
Nov 29 02:46:41 np0005539552 wizardly_boyd[235760]:        "ceph_device": false,
Nov 29 02:46:41 np0005539552 wizardly_boyd[235760]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 02:46:41 np0005539552 wizardly_boyd[235760]:        "lsm_data": {},
Nov 29 02:46:41 np0005539552 wizardly_boyd[235760]:        "lvs": [],
Nov 29 02:46:41 np0005539552 wizardly_boyd[235760]:        "path": "/dev/sr0",
Nov 29 02:46:41 np0005539552 wizardly_boyd[235760]:        "rejected_reasons": [
Nov 29 02:46:41 np0005539552 wizardly_boyd[235760]:            "Has a FileSystem",
Nov 29 02:46:41 np0005539552 wizardly_boyd[235760]:            "Insufficient space (<5GB)"
Nov 29 02:46:41 np0005539552 wizardly_boyd[235760]:        ],
Nov 29 02:46:41 np0005539552 wizardly_boyd[235760]:        "sys_api": {
Nov 29 02:46:41 np0005539552 wizardly_boyd[235760]:            "actuators": null,
Nov 29 02:46:41 np0005539552 wizardly_boyd[235760]:            "device_nodes": "sr0",
Nov 29 02:46:41 np0005539552 wizardly_boyd[235760]:            "devname": "sr0",
Nov 29 02:46:41 np0005539552 wizardly_boyd[235760]:            "human_readable_size": "482.00 KB",
Nov 29 02:46:41 np0005539552 wizardly_boyd[235760]:            "id_bus": "ata",
Nov 29 02:46:41 np0005539552 wizardly_boyd[235760]:            "model": "QEMU DVD-ROM",
Nov 29 02:46:41 np0005539552 wizardly_boyd[235760]:            "nr_requests": "2",
Nov 29 02:46:41 np0005539552 wizardly_boyd[235760]:            "parent": "/dev/sr0",
Nov 29 02:46:41 np0005539552 wizardly_boyd[235760]:            "partitions": {},
Nov 29 02:46:41 np0005539552 wizardly_boyd[235760]:            "path": "/dev/sr0",
Nov 29 02:46:41 np0005539552 wizardly_boyd[235760]:            "removable": "1",
Nov 29 02:46:41 np0005539552 wizardly_boyd[235760]:            "rev": "2.5+",
Nov 29 02:46:41 np0005539552 wizardly_boyd[235760]:            "ro": "0",
Nov 29 02:46:41 np0005539552 wizardly_boyd[235760]:            "rotational": "1",
Nov 29 02:46:41 np0005539552 wizardly_boyd[235760]:            "sas_address": "",
Nov 29 02:46:41 np0005539552 wizardly_boyd[235760]:            "sas_device_handle": "",
Nov 29 02:46:41 np0005539552 wizardly_boyd[235760]:            "scheduler_mode": "mq-deadline",
Nov 29 02:46:41 np0005539552 wizardly_boyd[235760]:            "sectors": 0,
Nov 29 02:46:41 np0005539552 wizardly_boyd[235760]:            "sectorsize": "2048",
Nov 29 02:46:41 np0005539552 wizardly_boyd[235760]:            "size": 493568.0,
Nov 29 02:46:41 np0005539552 wizardly_boyd[235760]:            "support_discard": "2048",
Nov 29 02:46:41 np0005539552 wizardly_boyd[235760]:            "type": "disk",
Nov 29 02:46:41 np0005539552 wizardly_boyd[235760]:            "vendor": "QEMU"
Nov 29 02:46:41 np0005539552 wizardly_boyd[235760]:        }
Nov 29 02:46:41 np0005539552 wizardly_boyd[235760]:    }
Nov 29 02:46:41 np0005539552 wizardly_boyd[235760]: ]
Nov 29 02:46:41 np0005539552 systemd[1]: libpod-ab1f1931141601be381032726b7c0f17733f408faba7c03608b54c55d76b8623.scope: Deactivated successfully.
Nov 29 02:46:41 np0005539552 systemd[1]: libpod-ab1f1931141601be381032726b7c0f17733f408faba7c03608b54c55d76b8623.scope: Consumed 1.423s CPU time.
Nov 29 02:46:41 np0005539552 podman[235743]: 2025-11-29 07:46:41.39358388 +0000 UTC m=+2.458904808 container died ab1f1931141601be381032726b7c0f17733f408faba7c03608b54c55d76b8623 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:46:41 np0005539552 systemd[1]: var-lib-containers-storage-overlay-e41dd90bf3efd218e11207701c1448ca8c017e92113a9c045f2b51e3d5d0ed97-merged.mount: Deactivated successfully.
Nov 29 02:46:41 np0005539552 podman[235743]: 2025-11-29 07:46:41.952230917 +0000 UTC m=+3.017551835 container remove ab1f1931141601be381032726b7c0f17733f408faba7c03608b54c55d76b8623 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_boyd, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:46:41 np0005539552 systemd[1]: libpod-conmon-ab1f1931141601be381032726b7c0f17733f408faba7c03608b54c55d76b8623.scope: Deactivated successfully.
Nov 29 02:46:42 np0005539552 podman[236870]: 2025-11-29 07:46:42.07875668 +0000 UTC m=+0.648034148 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 29 02:46:42 np0005539552 podman[236864]: 2025-11-29 07:46:42.135390244 +0000 UTC m=+0.704893968 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 02:46:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:42.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:46:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:43.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:44 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:46:44 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:46:44 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:46:44 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:46:44 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:46:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:44.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:45 np0005539552 podman[236965]: 2025-11-29 07:46:45.024040841 +0000 UTC m=+0.105486991 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 02:46:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:45.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:46.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:46:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:47.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:48.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:49.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:50.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:46:51 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:46:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:51.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:46:51.977 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:46:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:46:51.978 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:46:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:46:51.979 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:46:52 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:46:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:52.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:46:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:53.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:46:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:54.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:46:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:55.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:56.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:57.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:46:57 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:58.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:46:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:59.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:47:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:00.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:47:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:47:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:01.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:47:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:03.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:03.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 02:47:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:05.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 02:47:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:05.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:07.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:07 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:07.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:09.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:09 np0005539552 nova_compute[233724]: 2025-11-29 07:47:09.254 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:47:09 np0005539552 nova_compute[233724]: 2025-11-29 07:47:09.255 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:47:09 np0005539552 nova_compute[233724]: 2025-11-29 07:47:09.255 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:47:09 np0005539552 nova_compute[233724]: 2025-11-29 07:47:09.255 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:47:09 np0005539552 nova_compute[233724]: 2025-11-29 07:47:09.389 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 02:47:09 np0005539552 nova_compute[233724]: 2025-11-29 07:47:09.390 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:47:09 np0005539552 nova_compute[233724]: 2025-11-29 07:47:09.391 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:47:09 np0005539552 nova_compute[233724]: 2025-11-29 07:47:09.391 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:47:09 np0005539552 nova_compute[233724]: 2025-11-29 07:47:09.391 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:47:09 np0005539552 nova_compute[233724]: 2025-11-29 07:47:09.392 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:47:09 np0005539552 nova_compute[233724]: 2025-11-29 07:47:09.392 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:47:09 np0005539552 nova_compute[233724]: 2025-11-29 07:47:09.392 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:47:09 np0005539552 nova_compute[233724]: 2025-11-29 07:47:09.393 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:47:09 np0005539552 nova_compute[233724]: 2025-11-29 07:47:09.460 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:09 np0005539552 nova_compute[233724]: 2025-11-29 07:47:09.461 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:09 np0005539552 nova_compute[233724]: 2025-11-29 07:47:09.461 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:09 np0005539552 nova_compute[233724]: 2025-11-29 07:47:09.462 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:47:09 np0005539552 nova_compute[233724]: 2025-11-29 07:47:09.462 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:47:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:09.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:09 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:47:09 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2310727025' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:47:09 np0005539552 nova_compute[233724]: 2025-11-29 07:47:09.916 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:47:10 np0005539552 nova_compute[233724]: 2025-11-29 07:47:10.075 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:47:10 np0005539552 nova_compute[233724]: 2025-11-29 07:47:10.076 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5277MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:47:10 np0005539552 nova_compute[233724]: 2025-11-29 07:47:10.077 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:10 np0005539552 nova_compute[233724]: 2025-11-29 07:47:10.077 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:10 np0005539552 nova_compute[233724]: 2025-11-29 07:47:10.150 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:47:10 np0005539552 nova_compute[233724]: 2025-11-29 07:47:10.150 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:47:10 np0005539552 nova_compute[233724]: 2025-11-29 07:47:10.178 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:47:10 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:47:10 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2877798295' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:47:10 np0005539552 nova_compute[233724]: 2025-11-29 07:47:10.632 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:47:10 np0005539552 nova_compute[233724]: 2025-11-29 07:47:10.638 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:47:10 np0005539552 nova_compute[233724]: 2025-11-29 07:47:10.652 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:47:10 np0005539552 nova_compute[233724]: 2025-11-29 07:47:10.653 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:47:10 np0005539552 nova_compute[233724]: 2025-11-29 07:47:10.654 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:11.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:11.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:12 np0005539552 podman[237151]: 2025-11-29 07:47:12.989276739 +0000 UTC m=+0.067343721 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 02:47:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:13.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:13 np0005539552 podman[237150]: 2025-11-29 07:47:13.027723807 +0000 UTC m=+0.104965757 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 29 02:47:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:13.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:15.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:15.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:16 np0005539552 podman[237189]: 2025-11-29 07:47:16.02210277 +0000 UTC m=+0.100923939 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Nov 29 02:47:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:17.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:17.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:47:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:19.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:47:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:19.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:47:20.597 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:47:20.597 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:47:20.597 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:21.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:47:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:21.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:47:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:23.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:23.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:25.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:25.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:27.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:27.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:29.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:29.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:31.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:31.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:32 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:33.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:33.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:35.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:35.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:37.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:37.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:39.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 02:47:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2901022360' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 02:47:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 02:47:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2901022360' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 02:47:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:39.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:41.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:41.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:43.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:43.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:43 np0005539552 podman[237333]: 2025-11-29 07:47:43.967535377 +0000 UTC m=+0.050506910 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 02:47:43 np0005539552 podman[237332]: 2025-11-29 07:47:43.974535017 +0000 UTC m=+0.059939746 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 02:47:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 02:47:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:45.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 02:47:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:45.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:46 np0005539552 podman[237373]: 2025-11-29 07:47:46.994798628 +0000 UTC m=+0.086915956 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:47:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:47.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:47.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:49.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:47:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:49.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:47:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:51.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:51.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:52 np0005539552 podman[237577]: 2025-11-29 07:47:52.287653297 +0000 UTC m=+0.269264309 container exec 2e449e743ed8b622f7d948671af16b72c407124db8f8583030f534a9757aa05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-2, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 02:47:52 np0005539552 podman[237597]: 2025-11-29 07:47:52.510819776 +0000 UTC m=+0.078446327 container exec_died 2e449e743ed8b622f7d948671af16b72c407124db8f8583030f534a9757aa05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 02:47:52 np0005539552 podman[237577]: 2025-11-29 07:47:52.644574952 +0000 UTC m=+0.626185924 container exec_died 2e449e743ed8b622f7d948671af16b72c407124db8f8583030f534a9757aa05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:47:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:53.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:53 np0005539552 podman[237733]: 2025-11-29 07:47:53.685380691 +0000 UTC m=+0.283722821 container exec fbc97e64df6fe35a26c07fd8827c68b935db0d4237087be449d0fd015b40e005 (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-2-efzvmt)
Nov 29 02:47:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:53.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:53 np0005539552 podman[237753]: 2025-11-29 07:47:53.851829153 +0000 UTC m=+0.141957369 container exec_died fbc97e64df6fe35a26c07fd8827c68b935db0d4237087be449d0fd015b40e005 (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-2-efzvmt)
Nov 29 02:47:54 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #40. Immutable memtables: 0.
Nov 29 02:47:54 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:47:54.142336) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:47:54 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 40
Nov 29 02:47:54 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402474142443, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 2159, "num_deletes": 250, "total_data_size": 5773960, "memory_usage": 5848600, "flush_reason": "Manual Compaction"}
Nov 29 02:47:54 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #41: started
Nov 29 02:47:54 np0005539552 podman[237733]: 2025-11-29 07:47:54.281890809 +0000 UTC m=+0.880232899 container exec_died fbc97e64df6fe35a26c07fd8827c68b935db0d4237087be449d0fd015b40e005 (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-2-efzvmt)
Nov 29 02:47:54 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402474420535, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 41, "file_size": 2234693, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19391, "largest_seqno": 21545, "table_properties": {"data_size": 2228089, "index_size": 3483, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16324, "raw_average_key_size": 20, "raw_value_size": 2213785, "raw_average_value_size": 2788, "num_data_blocks": 157, "num_entries": 794, "num_filter_entries": 794, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402260, "oldest_key_time": 1764402260, "file_creation_time": 1764402474, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:47:54 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 278220 microseconds, and 9200 cpu microseconds.
Nov 29 02:47:54 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:47:54 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:47:54.420572) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #41: 2234693 bytes OK
Nov 29 02:47:54 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:47:54.420589) [db/memtable_list.cc:519] [default] Level-0 commit table #41 started
Nov 29 02:47:54 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:47:54.431493) [db/memtable_list.cc:722] [default] Level-0 commit table #41: memtable #1 done
Nov 29 02:47:54 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:47:54.431541) EVENT_LOG_v1 {"time_micros": 1764402474431531, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:47:54 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:47:54.431565) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:47:54 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 5764345, prev total WAL file size 5764345, number of live WAL files 2.
Nov 29 02:47:54 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000037.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:47:54 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:47:54.432765) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353031' seq:72057594037927935, type:22 .. '6D67727374617400373532' seq:0, type:0; will stop at (end)
Nov 29 02:47:54 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:47:54 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [41(2182KB)], [39(10013KB)]
Nov 29 02:47:54 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402474432812, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [41], "files_L6": [39], "score": -1, "input_data_size": 12488034, "oldest_snapshot_seqno": -1}
Nov 29 02:47:54 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #42: 5217 keys, 10129437 bytes, temperature: kUnknown
Nov 29 02:47:54 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402474899735, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 42, "file_size": 10129437, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10093985, "index_size": 21326, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13061, "raw_key_size": 131691, "raw_average_key_size": 25, "raw_value_size": 9999032, "raw_average_value_size": 1916, "num_data_blocks": 878, "num_entries": 5217, "num_filter_entries": 5217, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764402474, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 42, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:47:54 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:47:54 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:47:54.900027) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 10129437 bytes
Nov 29 02:47:54 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:47:54.955161) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 26.7 rd, 21.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 9.8 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(10.1) write-amplify(4.5) OK, records in: 5636, records dropped: 419 output_compression: NoCompression
Nov 29 02:47:54 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:47:54.955199) EVENT_LOG_v1 {"time_micros": 1764402474955182, "job": 22, "event": "compaction_finished", "compaction_time_micros": 467009, "compaction_time_cpu_micros": 23899, "output_level": 6, "num_output_files": 1, "total_output_size": 10129437, "num_input_records": 5636, "num_output_records": 5217, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:47:54 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:47:54 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402474956126, "job": 22, "event": "table_file_deletion", "file_number": 41}
Nov 29 02:47:54 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000039.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:47:54 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402474959852, "job": 22, "event": "table_file_deletion", "file_number": 39}
Nov 29 02:47:54 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:47:54.432713) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:47:54 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:47:54.959937) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:47:54 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:47:54.959943) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:47:54 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:47:54.959945) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:47:54 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:47:54.959946) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:47:54 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:47:54.959948) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:47:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:55.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:55 np0005539552 podman[237799]: 2025-11-29 07:47:55.319071861 +0000 UTC m=+0.746178406 container exec 6c20d6486ea31b9565e62504d583e2d8c9512b707e8e2015e3cef66c15f3b3e6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr, name=keepalived, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, build-date=2023-02-22T09:23:20, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=keepalived-container, distribution-scope=public, architecture=x86_64, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph.)
Nov 29 02:47:55 np0005539552 podman[237820]: 2025-11-29 07:47:55.458808819 +0000 UTC m=+0.108986345 container exec_died 6c20d6486ea31b9565e62504d583e2d8c9512b707e8e2015e3cef66c15f3b3e6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr, distribution-scope=public, vendor=Red Hat, Inc., description=keepalived for Ceph, io.buildah.version=1.28.2, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20)
Nov 29 02:47:55 np0005539552 podman[237799]: 2025-11-29 07:47:55.508010542 +0000 UTC m=+0.935117067 container exec_died 6c20d6486ea31b9565e62504d583e2d8c9512b707e8e2015e3cef66c15f3b3e6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, version=2.2.4, release=1793, io.openshift.tags=Ceph keepalived, vcs-type=git, io.buildah.version=1.28.2, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., distribution-scope=public, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64)
Nov 29 02:47:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:55.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:56 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:47:56 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:47:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:57.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:57 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 02:47:57 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:47:57 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:47:57 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:47:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:57.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:57 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:59.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:47:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:59.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:01.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:01.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:48:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:03.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:03.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:04 np0005539552 nova_compute[233724]: 2025-11-29 07:48:04.319 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:48:04 np0005539552 nova_compute[233724]: 2025-11-29 07:48:04.320 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:48:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:05.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:05.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:06 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:48:06 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:48:06 np0005539552 nova_compute[233724]: 2025-11-29 07:48:06.689 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:48:06 np0005539552 nova_compute[233724]: 2025-11-29 07:48:06.690 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 02:48:06 np0005539552 nova_compute[233724]: 2025-11-29 07:48:06.690 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 02:48:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:07.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:07 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:48:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:07.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:09.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:09 np0005539552 nova_compute[233724]: 2025-11-29 07:48:09.363 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 02:48:09 np0005539552 nova_compute[233724]: 2025-11-29 07:48:09.364 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:48:09 np0005539552 nova_compute[233724]: 2025-11-29 07:48:09.364 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:48:09 np0005539552 nova_compute[233724]: 2025-11-29 07:48:09.365 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:48:09 np0005539552 nova_compute[233724]: 2025-11-29 07:48:09.365 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:48:09 np0005539552 nova_compute[233724]: 2025-11-29 07:48:09.365 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:48:09 np0005539552 nova_compute[233724]: 2025-11-29 07:48:09.365 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:48:09 np0005539552 nova_compute[233724]: 2025-11-29 07:48:09.366 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 02:48:09 np0005539552 nova_compute[233724]: 2025-11-29 07:48:09.366 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:48:09 np0005539552 nova_compute[233724]: 2025-11-29 07:48:09.425 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:48:09 np0005539552 nova_compute[233724]: 2025-11-29 07:48:09.426 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:48:09 np0005539552 nova_compute[233724]: 2025-11-29 07:48:09.426 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:48:09 np0005539552 nova_compute[233724]: 2025-11-29 07:48:09.426 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:48:09 np0005539552 nova_compute[233724]: 2025-11-29 07:48:09.426 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:48:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:09.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:09 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:48:09 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2562154775' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:48:10 np0005539552 nova_compute[233724]: 2025-11-29 07:48:09.999 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:48:10 np0005539552 nova_compute[233724]: 2025-11-29 07:48:10.163 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:48:10 np0005539552 nova_compute[233724]: 2025-11-29 07:48:10.164 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5276MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:48:10 np0005539552 nova_compute[233724]: 2025-11-29 07:48:10.164 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:10 np0005539552 nova_compute[233724]: 2025-11-29 07:48:10.165 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:10 np0005539552 nova_compute[233724]: 2025-11-29 07:48:10.683 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:48:10 np0005539552 nova_compute[233724]: 2025-11-29 07:48:10.683 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:48:10 np0005539552 nova_compute[233724]: 2025-11-29 07:48:10.702 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:48:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:11.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:11 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:48:11 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3311985421' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:48:11 np0005539552 nova_compute[233724]: 2025-11-29 07:48:11.106 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:48:11 np0005539552 nova_compute[233724]: 2025-11-29 07:48:11.111 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:48:11 np0005539552 nova_compute[233724]: 2025-11-29 07:48:11.429 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:48:11 np0005539552 nova_compute[233724]: 2025-11-29 07:48:11.431 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:48:11 np0005539552 nova_compute[233724]: 2025-11-29 07:48:11.431 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.266s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:11.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:48:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:13.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:13.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:14 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #43. Immutable memtables: 0.
Nov 29 02:48:14 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:48:14.500160) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:48:14 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 43
Nov 29 02:48:14 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402494500273, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 487, "num_deletes": 251, "total_data_size": 669382, "memory_usage": 679752, "flush_reason": "Manual Compaction"}
Nov 29 02:48:14 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #44: started
Nov 29 02:48:14 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402494684936, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 44, "file_size": 442019, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21550, "largest_seqno": 22032, "table_properties": {"data_size": 439231, "index_size": 824, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6731, "raw_average_key_size": 19, "raw_value_size": 433656, "raw_average_value_size": 1249, "num_data_blocks": 35, "num_entries": 347, "num_filter_entries": 347, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402475, "oldest_key_time": 1764402475, "file_creation_time": 1764402494, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:48:14 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 184810 microseconds, and 2063 cpu microseconds.
Nov 29 02:48:14 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:48:14 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:48:14.684981) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #44: 442019 bytes OK
Nov 29 02:48:14 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:48:14.685005) [db/memtable_list.cc:519] [default] Level-0 commit table #44 started
Nov 29 02:48:14 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:48:14.875953) [db/memtable_list.cc:722] [default] Level-0 commit table #44: memtable #1 done
Nov 29 02:48:14 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:48:14.876021) EVENT_LOG_v1 {"time_micros": 1764402494875979, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:48:14 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:48:14.876040) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:48:14 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 666406, prev total WAL file size 667044, number of live WAL files 2.
Nov 29 02:48:14 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:48:14 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:48:14.876566) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Nov 29 02:48:14 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:48:14 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [44(431KB)], [42(9892KB)]
Nov 29 02:48:14 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402494876649, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [44], "files_L6": [42], "score": -1, "input_data_size": 10571456, "oldest_snapshot_seqno": -1}
Nov 29 02:48:14 np0005539552 podman[238118]: 2025-11-29 07:48:14.973447056 +0000 UTC m=+0.055239938 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 02:48:14 np0005539552 podman[238117]: 2025-11-29 07:48:14.98135335 +0000 UTC m=+0.067156001 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 02:48:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:15.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:15 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #45: 5047 keys, 8208149 bytes, temperature: kUnknown
Nov 29 02:48:15 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402495268687, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 45, "file_size": 8208149, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8175326, "index_size": 19081, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12677, "raw_key_size": 128852, "raw_average_key_size": 25, "raw_value_size": 8084710, "raw_average_value_size": 1601, "num_data_blocks": 777, "num_entries": 5047, "num_filter_entries": 5047, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764402494, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:48:15 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:48:15 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:48:15.268987) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 8208149 bytes
Nov 29 02:48:15 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:48:15.271295) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 27.0 rd, 20.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 9.7 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(42.5) write-amplify(18.6) OK, records in: 5564, records dropped: 517 output_compression: NoCompression
Nov 29 02:48:15 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:48:15.271323) EVENT_LOG_v1 {"time_micros": 1764402495271310, "job": 24, "event": "compaction_finished", "compaction_time_micros": 392125, "compaction_time_cpu_micros": 29947, "output_level": 6, "num_output_files": 1, "total_output_size": 8208149, "num_input_records": 5564, "num_output_records": 5047, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:48:15 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:48:15 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402495271586, "job": 24, "event": "table_file_deletion", "file_number": 44}
Nov 29 02:48:15 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000042.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:48:15 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402495274731, "job": 24, "event": "table_file_deletion", "file_number": 42}
Nov 29 02:48:15 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:48:14.876469) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:48:15 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:48:15.274818) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:48:15 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:48:15.274823) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:48:15 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:48:15.274825) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:48:15 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:48:15.274826) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:48:15 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:48:15.274828) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:48:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:15.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:17.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:17.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:18 np0005539552 podman[238154]: 2025-11-29 07:48:18.023052414 +0000 UTC m=+0.110375753 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:48:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=404 latency=0.001000027s ======
Nov 29 02:48:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:18.754 +0000] "GET /healthcheck HTTP/1.1" 404 240 - "python-urllib3/1.26.5" - latency=0.001000027s
Nov 29 02:48:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:48:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:19.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:19.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:48:20.598 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:48:20.598 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:48:20.598 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:21.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:21.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:23.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:23.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:25.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:25 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:48:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:25.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:27.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:48:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:27.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:48:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e152 e152: 3 total, 3 up, 3 in
Nov 29 02:48:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:29.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:29.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:30 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e153 e153: 3 total, 3 up, 3 in
Nov 29 02:48:30 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:48:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:31.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:31.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:33.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e154 e154: 3 total, 3 up, 3 in
Nov 29 02:48:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:48:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:33.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:48:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:35.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:35 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:48:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:35.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:37.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:48:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:37.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:48:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 02:48:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2700872414' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 02:48:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 02:48:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2700872414' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 02:48:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:39.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e155 e155: 3 total, 3 up, 3 in
Nov 29 02:48:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:39.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:40 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:48:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:41.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:41.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:43.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:43.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:45 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e156 e156: 3 total, 3 up, 3 in
Nov 29 02:48:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:45.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:45 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:48:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:45.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:45 np0005539552 podman[238295]: 2025-11-29 07:48:45.990238518 +0000 UTC m=+0.070615235 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 29 02:48:45 np0005539552 podman[238296]: 2025-11-29 07:48:45.997171956 +0000 UTC m=+0.081872980 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 02:48:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:47.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:47.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:48 np0005539552 podman[238337]: 2025-11-29 07:48:48.996634224 +0000 UTC m=+0.081319905 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:48:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:49.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:49.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:50 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:48:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:51.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:51.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:53.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:53.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:55.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:55 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:48:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:55.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:57.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:57.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:57 np0005539552 nova_compute[233724]: 2025-11-29 07:48:57.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:48:57 np0005539552 nova_compute[233724]: 2025-11-29 07:48:57.925 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 02:48:58 np0005539552 nova_compute[233724]: 2025-11-29 07:48:58.265 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 02:48:58 np0005539552 nova_compute[233724]: 2025-11-29 07:48:58.266 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:48:58 np0005539552 nova_compute[233724]: 2025-11-29 07:48:58.266 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 02:48:58 np0005539552 nova_compute[233724]: 2025-11-29 07:48:58.656 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:48:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:59.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:48:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:59.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:00 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:49:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:01.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:01 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:49:01.245 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:49:01 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:49:01.246 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:49:01 np0005539552 nova_compute[233724]: 2025-11-29 07:49:01.695 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:49:01 np0005539552 nova_compute[233724]: 2025-11-29 07:49:01.695 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:49:01 np0005539552 nova_compute[233724]: 2025-11-29 07:49:01.696 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:49:01 np0005539552 nova_compute[233724]: 2025-11-29 07:49:01.696 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:49:01 np0005539552 nova_compute[233724]: 2025-11-29 07:49:01.718 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 02:49:01 np0005539552 nova_compute[233724]: 2025-11-29 07:49:01.719 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:49:01 np0005539552 nova_compute[233724]: 2025-11-29 07:49:01.719 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:49:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:01.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:01 np0005539552 nova_compute[233724]: 2025-11-29 07:49:01.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:49:02 np0005539552 nova_compute[233724]: 2025-11-29 07:49:02.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:49:02 np0005539552 nova_compute[233724]: 2025-11-29 07:49:02.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:49:02 np0005539552 nova_compute[233724]: 2025-11-29 07:49:02.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:49:02 np0005539552 nova_compute[233724]: 2025-11-29 07:49:02.925 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:49:02 np0005539552 nova_compute[233724]: 2025-11-29 07:49:02.951 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:49:02 np0005539552 nova_compute[233724]: 2025-11-29 07:49:02.951 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:49:02 np0005539552 nova_compute[233724]: 2025-11-29 07:49:02.952 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:49:02 np0005539552 nova_compute[233724]: 2025-11-29 07:49:02.952 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:49:02 np0005539552 nova_compute[233724]: 2025-11-29 07:49:02.953 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:49:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:03.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:49:03 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2545465529' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:49:03 np0005539552 nova_compute[233724]: 2025-11-29 07:49:03.406 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:49:03 np0005539552 nova_compute[233724]: 2025-11-29 07:49:03.578 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:49:03 np0005539552 nova_compute[233724]: 2025-11-29 07:49:03.581 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5285MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:49:03 np0005539552 nova_compute[233724]: 2025-11-29 07:49:03.582 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:49:03 np0005539552 nova_compute[233724]: 2025-11-29 07:49:03.582 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:49:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:03.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:04 np0005539552 nova_compute[233724]: 2025-11-29 07:49:04.420 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:49:04 np0005539552 nova_compute[233724]: 2025-11-29 07:49:04.420 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:49:04 np0005539552 nova_compute[233724]: 2025-11-29 07:49:04.486 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Refreshing inventories for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 02:49:04 np0005539552 nova_compute[233724]: 2025-11-29 07:49:04.517 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Updating ProviderTree inventory for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 02:49:04 np0005539552 nova_compute[233724]: 2025-11-29 07:49:04.518 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Updating inventory in ProviderTree for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 02:49:04 np0005539552 nova_compute[233724]: 2025-11-29 07:49:04.535 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Refreshing aggregate associations for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 02:49:04 np0005539552 nova_compute[233724]: 2025-11-29 07:49:04.579 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Refreshing trait associations for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371, traits: HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_MMX,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_FDC,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 02:49:04 np0005539552 nova_compute[233724]: 2025-11-29 07:49:04.597 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:49:05 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:49:05 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/246004647' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:49:05 np0005539552 nova_compute[233724]: 2025-11-29 07:49:05.042 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:49:05 np0005539552 nova_compute[233724]: 2025-11-29 07:49:05.047 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:49:05 np0005539552 nova_compute[233724]: 2025-11-29 07:49:05.067 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:49:05 np0005539552 nova_compute[233724]: 2025-11-29 07:49:05.068 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:49:05 np0005539552 nova_compute[233724]: 2025-11-29 07:49:05.068 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.486s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:49:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:05.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:05 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:49:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:05.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:07 np0005539552 nova_compute[233724]: 2025-11-29 07:49:07.069 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:49:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:07.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:07 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:49:07.248 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:49:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:07.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:08 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:49:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:09.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:09.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:10 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:49:10 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:49:10 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:49:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:11.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:11.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e157 e157: 3 total, 3 up, 3 in
Nov 29 02:49:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:13.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:13.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:14 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e158 e158: 3 total, 3 up, 3 in
Nov 29 02:49:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:15.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:15 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:49:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:15.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:15 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e159 e159: 3 total, 3 up, 3 in
Nov 29 02:49:16 np0005539552 podman[238606]: 2025-11-29 07:49:16.972985078 +0000 UTC m=+0.057054527 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 02:49:16 np0005539552 podman[238605]: 2025-11-29 07:49:16.99664349 +0000 UTC m=+0.085115678 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 02:49:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:17.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:17.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:17 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:49:17 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:49:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:19.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:19.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:20 np0005539552 podman[238695]: 2025-11-29 07:49:20.029570154 +0000 UTC m=+0.113747465 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 02:49:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:49:20.598 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:49:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:49:20.599 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:49:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:49:20.599 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:49:20 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:49:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:21.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:21.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:23.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:23.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:24 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e160 e160: 3 total, 3 up, 3 in
Nov 29 02:49:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:49:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:25.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:49:25 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:49:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:25.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:27.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:27.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:29.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:29.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:30 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:49:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:31.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:31.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:32 np0005539552 nova_compute[233724]: 2025-11-29 07:49:32.922 233728 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Acquiring lock "2519f959-bfbf-49b6-b80d-eff80129064b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:49:32 np0005539552 nova_compute[233724]: 2025-11-29 07:49:32.923 233728 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "2519f959-bfbf-49b6-b80d-eff80129064b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:49:32 np0005539552 nova_compute[233724]: 2025-11-29 07:49:32.961 233728 DEBUG nova.compute.manager [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:49:33 np0005539552 nova_compute[233724]: 2025-11-29 07:49:33.062 233728 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:49:33 np0005539552 nova_compute[233724]: 2025-11-29 07:49:33.062 233728 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:49:33 np0005539552 nova_compute[233724]: 2025-11-29 07:49:33.068 233728 DEBUG nova.virt.hardware [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:49:33 np0005539552 nova_compute[233724]: 2025-11-29 07:49:33.069 233728 INFO nova.compute.claims [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 02:49:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:33.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:33 np0005539552 nova_compute[233724]: 2025-11-29 07:49:33.223 233728 DEBUG oslo_concurrency.processutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:49:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:49:33 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2360073625' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:49:33 np0005539552 nova_compute[233724]: 2025-11-29 07:49:33.682 233728 DEBUG oslo_concurrency.processutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:49:33 np0005539552 nova_compute[233724]: 2025-11-29 07:49:33.692 233728 DEBUG nova.compute.provider_tree [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:49:33 np0005539552 nova_compute[233724]: 2025-11-29 07:49:33.867 233728 DEBUG nova.scheduler.client.report [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:49:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:33.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:33 np0005539552 nova_compute[233724]: 2025-11-29 07:49:33.893 233728 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.830s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:49:33 np0005539552 nova_compute[233724]: 2025-11-29 07:49:33.894 233728 DEBUG nova.compute.manager [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:49:33 np0005539552 nova_compute[233724]: 2025-11-29 07:49:33.939 233728 DEBUG nova.compute.manager [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:49:33 np0005539552 nova_compute[233724]: 2025-11-29 07:49:33.939 233728 DEBUG nova.network.neutron [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:49:33 np0005539552 nova_compute[233724]: 2025-11-29 07:49:33.959 233728 INFO nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:49:33 np0005539552 nova_compute[233724]: 2025-11-29 07:49:33.983 233728 DEBUG nova.compute.manager [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:49:34 np0005539552 nova_compute[233724]: 2025-11-29 07:49:34.067 233728 DEBUG nova.compute.manager [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:49:34 np0005539552 nova_compute[233724]: 2025-11-29 07:49:34.068 233728 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:49:34 np0005539552 nova_compute[233724]: 2025-11-29 07:49:34.069 233728 INFO nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Creating image(s)#033[00m
Nov 29 02:49:34 np0005539552 nova_compute[233724]: 2025-11-29 07:49:34.100 233728 DEBUG nova.storage.rbd_utils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] rbd image 2519f959-bfbf-49b6-b80d-eff80129064b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:49:34 np0005539552 nova_compute[233724]: 2025-11-29 07:49:34.135 233728 DEBUG nova.storage.rbd_utils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] rbd image 2519f959-bfbf-49b6-b80d-eff80129064b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:49:34 np0005539552 nova_compute[233724]: 2025-11-29 07:49:34.163 233728 DEBUG nova.storage.rbd_utils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] rbd image 2519f959-bfbf-49b6-b80d-eff80129064b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:49:34 np0005539552 nova_compute[233724]: 2025-11-29 07:49:34.167 233728 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:49:34 np0005539552 nova_compute[233724]: 2025-11-29 07:49:34.168 233728 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:49:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:35.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:35 np0005539552 nova_compute[233724]: 2025-11-29 07:49:35.488 233728 DEBUG nova.virt.libvirt.imagebackend [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Image locations are: [{'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/4873db8c-b414-4e95-acd9-77caabebe722/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/4873db8c-b414-4e95-acd9-77caabebe722/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 29 02:49:35 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:49:35 np0005539552 nova_compute[233724]: 2025-11-29 07:49:35.666 233728 DEBUG nova.network.neutron [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Automatically allocating a network for project 3c7cd563ba394223a76bd2579800406c. _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2460#033[00m
Nov 29 02:49:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:35.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:37.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:37.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:39.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e161 e161: 3 total, 3 up, 3 in
Nov 29 02:49:39 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 29 02:49:39 np0005539552 nova_compute[233724]: 2025-11-29 07:49:39.865 233728 DEBUG oslo_concurrency.processutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:49:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:39.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:39 np0005539552 nova_compute[233724]: 2025-11-29 07:49:39.925 233728 DEBUG oslo_concurrency.processutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488.part --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:49:39 np0005539552 nova_compute[233724]: 2025-11-29 07:49:39.927 233728 DEBUG nova.virt.images [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] 4873db8c-b414-4e95-acd9-77caabebe722 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Nov 29 02:49:39 np0005539552 nova_compute[233724]: 2025-11-29 07:49:39.928 233728 DEBUG nova.privsep.utils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Nov 29 02:49:39 np0005539552 nova_compute[233724]: 2025-11-29 07:49:39.928 233728 DEBUG oslo_concurrency.processutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488.part /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:49:40 np0005539552 nova_compute[233724]: 2025-11-29 07:49:40.151 233728 DEBUG oslo_concurrency.processutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488.part /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488.converted" returned: 0 in 0.223s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:49:40 np0005539552 nova_compute[233724]: 2025-11-29 07:49:40.157 233728 DEBUG oslo_concurrency.processutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:49:40 np0005539552 nova_compute[233724]: 2025-11-29 07:49:40.207 233728 DEBUG oslo_concurrency.processutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488.converted --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:49:40 np0005539552 nova_compute[233724]: 2025-11-29 07:49:40.209 233728 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 6.041s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:49:40 np0005539552 nova_compute[233724]: 2025-11-29 07:49:40.240 233728 DEBUG nova.storage.rbd_utils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] rbd image 2519f959-bfbf-49b6-b80d-eff80129064b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:49:40 np0005539552 nova_compute[233724]: 2025-11-29 07:49:40.244 233728 DEBUG oslo_concurrency.processutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 2519f959-bfbf-49b6-b80d-eff80129064b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:49:40 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:49:40 np0005539552 nova_compute[233724]: 2025-11-29 07:49:40.830 233728 DEBUG oslo_concurrency.processutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 2519f959-bfbf-49b6-b80d-eff80129064b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:49:40 np0005539552 nova_compute[233724]: 2025-11-29 07:49:40.904 233728 DEBUG nova.storage.rbd_utils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] resizing rbd image 2519f959-bfbf-49b6-b80d-eff80129064b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:49:41 np0005539552 nova_compute[233724]: 2025-11-29 07:49:41.144 233728 DEBUG nova.objects.instance [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lazy-loading 'migration_context' on Instance uuid 2519f959-bfbf-49b6-b80d-eff80129064b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:49:41 np0005539552 nova_compute[233724]: 2025-11-29 07:49:41.166 233728 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:49:41 np0005539552 nova_compute[233724]: 2025-11-29 07:49:41.167 233728 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Ensure instance console log exists: /var/lib/nova/instances/2519f959-bfbf-49b6-b80d-eff80129064b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:49:41 np0005539552 nova_compute[233724]: 2025-11-29 07:49:41.167 233728 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:49:41 np0005539552 nova_compute[233724]: 2025-11-29 07:49:41.167 233728 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:49:41 np0005539552 nova_compute[233724]: 2025-11-29 07:49:41.168 233728 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:49:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:41.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:41.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:43.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:43.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:45.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:45 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:49:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:45.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:47.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:47.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:48 np0005539552 podman[239036]: 2025-11-29 07:49:48.026327841 +0000 UTC m=+0.108828731 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:49:48 np0005539552 podman[239035]: 2025-11-29 07:49:48.037508724 +0000 UTC m=+0.129614174 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 02:49:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:49.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:49 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e162 e162: 3 total, 3 up, 3 in
Nov 29 02:49:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:49.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:50 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e163 e163: 3 total, 3 up, 3 in
Nov 29 02:49:50 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:49:50 np0005539552 podman[239077]: 2025-11-29 07:49:50.988666043 +0000 UTC m=+0.079270320 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 02:49:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:51.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:51.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:53.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:53.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:55.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:55 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:49:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:55.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:57.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:57.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 e164: 3 total, 3 up, 3 in
Nov 29 02:49:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:59.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:49:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:59.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:00 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:00 np0005539552 nova_compute[233724]: 2025-11-29 07:50:00.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:50:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:01.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:01 np0005539552 ceph-mon[77121]: overall HEALTH_OK
Nov 29 02:50:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:01.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:01 np0005539552 nova_compute[233724]: 2025-11-29 07:50:01.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:50:02 np0005539552 nova_compute[233724]: 2025-11-29 07:50:02.919 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:50:02 np0005539552 nova_compute[233724]: 2025-11-29 07:50:02.922 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:50:02 np0005539552 nova_compute[233724]: 2025-11-29 07:50:02.922 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 02:50:02 np0005539552 nova_compute[233724]: 2025-11-29 07:50:02.923 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 02:50:02 np0005539552 nova_compute[233724]: 2025-11-29 07:50:02.938 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 29 02:50:02 np0005539552 nova_compute[233724]: 2025-11-29 07:50:02.939 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 02:50:02 np0005539552 nova_compute[233724]: 2025-11-29 07:50:02.939 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:50:03 np0005539552 nova_compute[233724]: 2025-11-29 07:50:03.140 233728 DEBUG nova.network.neutron [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Automatically allocated network: {'id': '01d0d21b-eaad-4f5d-82d1-0f4d31e80363', 'name': 'auto_allocated_network', 'tenant_id': '3c7cd563ba394223a76bd2579800406c', 'admin_state_up': True, 'mtu': 1442, 'status': 'ACTIVE', 'subnets': ['6992fe6c-c595-4393-8b65-e8b3c97ce60b', 'e8640056-71aa-4d1b-9c1e-9f992a064096'], 'shared': False, 'availability_zone_hints': [], 'availability_zones': [], 'ipv4_address_scope': None, 'ipv6_address_scope': None, 'router:external': False, 'description': '', 'qos_policy_id': None, 'port_security_enabled': True, 'dns_domain': '', 'l2_adjacency': True, 'tags': [], 'created_at': '2025-11-29T07:49:40Z', 'updated_at': '2025-11-29T07:49:53Z', 'revision_number': 4, 'project_id': '3c7cd563ba394223a76bd2579800406c'} _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2478
Nov 29 02:50:03 np0005539552 nova_compute[233724]: 2025-11-29 07:50:03.148 233728 WARNING oslo_policy.policy [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Nov 29 02:50:03 np0005539552 nova_compute[233724]: 2025-11-29 07:50:03.149 233728 WARNING oslo_policy.policy [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Nov 29 02:50:03 np0005539552 nova_compute[233724]: 2025-11-29 07:50:03.151 233728 DEBUG nova.policy [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '01739124bee74c899af6384f8ec2d427', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3c7cd563ba394223a76bd2579800406c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 02:50:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:03.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:03.373 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 02:50:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:03.374 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 02:50:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:03.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:04 np0005539552 nova_compute[233724]: 2025-11-29 07:50:04.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:50:05 np0005539552 nova_compute[233724]: 2025-11-29 07:50:05.011 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:50:05 np0005539552 nova_compute[233724]: 2025-11-29 07:50:05.011 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:50:05 np0005539552 nova_compute[233724]: 2025-11-29 07:50:05.012 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:50:05 np0005539552 nova_compute[233724]: 2025-11-29 07:50:05.012 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 02:50:05 np0005539552 nova_compute[233724]: 2025-11-29 07:50:05.012 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:50:05 np0005539552 nova_compute[233724]: 2025-11-29 07:50:05.050 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:50:05 np0005539552 nova_compute[233724]: 2025-11-29 07:50:05.051 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:50:05 np0005539552 nova_compute[233724]: 2025-11-29 07:50:05.051 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:50:05 np0005539552 nova_compute[233724]: 2025-11-29 07:50:05.051 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 02:50:05 np0005539552 nova_compute[233724]: 2025-11-29 07:50:05.052 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:50:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:05.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:05 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:50:05 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3671118217' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:50:05 np0005539552 nova_compute[233724]: 2025-11-29 07:50:05.487 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:50:05 np0005539552 nova_compute[233724]: 2025-11-29 07:50:05.642 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 02:50:05 np0005539552 nova_compute[233724]: 2025-11-29 07:50:05.643 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5234MB free_disk=20.859630584716797GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 02:50:05 np0005539552 nova_compute[233724]: 2025-11-29 07:50:05.643 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:50:05 np0005539552 nova_compute[233724]: 2025-11-29 07:50:05.644 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:50:05 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:05 np0005539552 nova_compute[233724]: 2025-11-29 07:50:05.864 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 2519f959-bfbf-49b6-b80d-eff80129064b actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 02:50:05 np0005539552 nova_compute[233724]: 2025-11-29 07:50:05.865 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 02:50:05 np0005539552 nova_compute[233724]: 2025-11-29 07:50:05.865 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 02:50:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:05.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:06 np0005539552 nova_compute[233724]: 2025-11-29 07:50:06.079 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:06 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:50:06 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3072500679' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:50:06 np0005539552 nova_compute[233724]: 2025-11-29 07:50:06.512 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:06 np0005539552 nova_compute[233724]: 2025-11-29 07:50:06.518 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Updating inventory in ProviderTree for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 02:50:06 np0005539552 nova_compute[233724]: 2025-11-29 07:50:06.664 233728 DEBUG nova.network.neutron [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Successfully created port: fc147d63-f6e7-4928-8991-4204b1b015bc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 02:50:06 np0005539552 nova_compute[233724]: 2025-11-29 07:50:06.846 233728 ERROR nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [req-6545571f-3744-4fb2-bc2c-ea72b4c035de] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 29c97280-aaf3-4c7f-a78a-1c9e8d025371.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-6545571f-3744-4fb2-bc2c-ea72b4c035de"}]}#033[00m
Nov 29 02:50:06 np0005539552 nova_compute[233724]: 2025-11-29 07:50:06.870 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Refreshing inventories for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 02:50:06 np0005539552 nova_compute[233724]: 2025-11-29 07:50:06.890 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Updating ProviderTree inventory for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 02:50:06 np0005539552 nova_compute[233724]: 2025-11-29 07:50:06.891 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Updating inventory in ProviderTree for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 02:50:06 np0005539552 nova_compute[233724]: 2025-11-29 07:50:06.909 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Refreshing aggregate associations for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 02:50:06 np0005539552 nova_compute[233724]: 2025-11-29 07:50:06.933 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Refreshing trait associations for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371, traits: HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_MMX,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_FDC,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 02:50:06 np0005539552 nova_compute[233724]: 2025-11-29 07:50:06.974 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:07.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:07 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:50:07 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3471797565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:50:07 np0005539552 nova_compute[233724]: 2025-11-29 07:50:07.405 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:07 np0005539552 nova_compute[233724]: 2025-11-29 07:50:07.410 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Updating inventory in ProviderTree for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 02:50:07 np0005539552 nova_compute[233724]: 2025-11-29 07:50:07.657 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Updated inventory for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Nov 29 02:50:07 np0005539552 nova_compute[233724]: 2025-11-29 07:50:07.657 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Updating resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Nov 29 02:50:07 np0005539552 nova_compute[233724]: 2025-11-29 07:50:07.657 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Updating inventory in ProviderTree for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 02:50:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:07.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:07 np0005539552 nova_compute[233724]: 2025-11-29 07:50:07.984 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:50:07 np0005539552 nova_compute[233724]: 2025-11-29 07:50:07.985 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.341s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:09.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:09.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:10 np0005539552 nova_compute[233724]: 2025-11-29 07:50:10.584 233728 DEBUG nova.network.neutron [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Successfully updated port: fc147d63-f6e7-4928-8991-4204b1b015bc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 02:50:10 np0005539552 nova_compute[233724]: 2025-11-29 07:50:10.597 233728 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Acquiring lock "refresh_cache-2519f959-bfbf-49b6-b80d-eff80129064b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:50:10 np0005539552 nova_compute[233724]: 2025-11-29 07:50:10.597 233728 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Acquired lock "refresh_cache-2519f959-bfbf-49b6-b80d-eff80129064b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:50:10 np0005539552 nova_compute[233724]: 2025-11-29 07:50:10.597 233728 DEBUG nova.network.neutron [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:50:10 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:10 np0005539552 nova_compute[233724]: 2025-11-29 07:50:10.698 233728 DEBUG nova.compute.manager [req-aef2bd83-5ec1-4aed-a0ad-1350adf1aab0 req-bbcc508c-d08e-4b65-8ff7-6e8707fd933b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Received event network-changed-fc147d63-f6e7-4928-8991-4204b1b015bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:50:10 np0005539552 nova_compute[233724]: 2025-11-29 07:50:10.698 233728 DEBUG nova.compute.manager [req-aef2bd83-5ec1-4aed-a0ad-1350adf1aab0 req-bbcc508c-d08e-4b65-8ff7-6e8707fd933b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Refreshing instance network info cache due to event network-changed-fc147d63-f6e7-4928-8991-4204b1b015bc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:50:10 np0005539552 nova_compute[233724]: 2025-11-29 07:50:10.698 233728 DEBUG oslo_concurrency.lockutils [req-aef2bd83-5ec1-4aed-a0ad-1350adf1aab0 req-bbcc508c-d08e-4b65-8ff7-6e8707fd933b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-2519f959-bfbf-49b6-b80d-eff80129064b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:50:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:11.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:11.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:12 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:12.376 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:50:12 np0005539552 nova_compute[233724]: 2025-11-29 07:50:12.940 233728 DEBUG nova.network.neutron [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:50:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:13.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:13.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:14 np0005539552 nova_compute[233724]: 2025-11-29 07:50:14.536 233728 DEBUG nova.network.neutron [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Updating instance_info_cache with network_info: [{"id": "fc147d63-f6e7-4928-8991-4204b1b015bc", "address": "fa:16:3e:ad:42:7a", "network": {"id": "01d0d21b-eaad-4f5d-82d1-0f4d31e80363", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::357", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.82", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c7cd563ba394223a76bd2579800406c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc147d63-f6", "ovs_interfaceid": "fc147d63-f6e7-4928-8991-4204b1b015bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:50:14 np0005539552 nova_compute[233724]: 2025-11-29 07:50:14.565 233728 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Releasing lock "refresh_cache-2519f959-bfbf-49b6-b80d-eff80129064b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:50:14 np0005539552 nova_compute[233724]: 2025-11-29 07:50:14.566 233728 DEBUG nova.compute.manager [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Instance network_info: |[{"id": "fc147d63-f6e7-4928-8991-4204b1b015bc", "address": "fa:16:3e:ad:42:7a", "network": {"id": "01d0d21b-eaad-4f5d-82d1-0f4d31e80363", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::357", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.82", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c7cd563ba394223a76bd2579800406c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc147d63-f6", "ovs_interfaceid": "fc147d63-f6e7-4928-8991-4204b1b015bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:50:14 np0005539552 nova_compute[233724]: 2025-11-29 07:50:14.566 233728 DEBUG oslo_concurrency.lockutils [req-aef2bd83-5ec1-4aed-a0ad-1350adf1aab0 req-bbcc508c-d08e-4b65-8ff7-6e8707fd933b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-2519f959-bfbf-49b6-b80d-eff80129064b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:50:14 np0005539552 nova_compute[233724]: 2025-11-29 07:50:14.567 233728 DEBUG nova.network.neutron [req-aef2bd83-5ec1-4aed-a0ad-1350adf1aab0 req-bbcc508c-d08e-4b65-8ff7-6e8707fd933b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Refreshing network info cache for port fc147d63-f6e7-4928-8991-4204b1b015bc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:50:14 np0005539552 nova_compute[233724]: 2025-11-29 07:50:14.572 233728 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Start _get_guest_xml network_info=[{"id": "fc147d63-f6e7-4928-8991-4204b1b015bc", "address": "fa:16:3e:ad:42:7a", "network": {"id": "01d0d21b-eaad-4f5d-82d1-0f4d31e80363", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::357", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.82", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c7cd563ba394223a76bd2579800406c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc147d63-f6", "ovs_interfaceid": "fc147d63-f6e7-4928-8991-4204b1b015bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:50:14 np0005539552 nova_compute[233724]: 2025-11-29 07:50:14.577 233728 WARNING nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:50:14 np0005539552 nova_compute[233724]: 2025-11-29 07:50:14.581 233728 DEBUG nova.virt.libvirt.host [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:50:14 np0005539552 nova_compute[233724]: 2025-11-29 07:50:14.582 233728 DEBUG nova.virt.libvirt.host [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:50:14 np0005539552 nova_compute[233724]: 2025-11-29 07:50:14.585 233728 DEBUG nova.virt.libvirt.host [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:50:14 np0005539552 nova_compute[233724]: 2025-11-29 07:50:14.586 233728 DEBUG nova.virt.libvirt.host [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:50:14 np0005539552 nova_compute[233724]: 2025-11-29 07:50:14.588 233728 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:50:14 np0005539552 nova_compute[233724]: 2025-11-29 07:50:14.588 233728 DEBUG nova.virt.hardware [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:50:14 np0005539552 nova_compute[233724]: 2025-11-29 07:50:14.589 233728 DEBUG nova.virt.hardware [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:50:14 np0005539552 nova_compute[233724]: 2025-11-29 07:50:14.589 233728 DEBUG nova.virt.hardware [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:50:14 np0005539552 nova_compute[233724]: 2025-11-29 07:50:14.589 233728 DEBUG nova.virt.hardware [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:50:14 np0005539552 nova_compute[233724]: 2025-11-29 07:50:14.590 233728 DEBUG nova.virt.hardware [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:50:14 np0005539552 nova_compute[233724]: 2025-11-29 07:50:14.590 233728 DEBUG nova.virt.hardware [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:50:14 np0005539552 nova_compute[233724]: 2025-11-29 07:50:14.590 233728 DEBUG nova.virt.hardware [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:50:14 np0005539552 nova_compute[233724]: 2025-11-29 07:50:14.590 233728 DEBUG nova.virt.hardware [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:50:14 np0005539552 nova_compute[233724]: 2025-11-29 07:50:14.591 233728 DEBUG nova.virt.hardware [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:50:14 np0005539552 nova_compute[233724]: 2025-11-29 07:50:14.591 233728 DEBUG nova.virt.hardware [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:50:14 np0005539552 nova_compute[233724]: 2025-11-29 07:50:14.591 233728 DEBUG nova.virt.hardware [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:50:14 np0005539552 nova_compute[233724]: 2025-11-29 07:50:14.596 233728 DEBUG nova.privsep.utils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Nov 29 02:50:14 np0005539552 nova_compute[233724]: 2025-11-29 07:50:14.596 233728 DEBUG oslo_concurrency.processutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:15 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:50:15 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/636729410' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:50:15 np0005539552 nova_compute[233724]: 2025-11-29 07:50:15.054 233728 DEBUG oslo_concurrency.processutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:15 np0005539552 nova_compute[233724]: 2025-11-29 07:50:15.093 233728 DEBUG nova.storage.rbd_utils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] rbd image 2519f959-bfbf-49b6-b80d-eff80129064b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:50:15 np0005539552 nova_compute[233724]: 2025-11-29 07:50:15.097 233728 DEBUG oslo_concurrency.processutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:15.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:15 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:50:15 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/140132055' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:50:15 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:15.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:15 np0005539552 nova_compute[233724]: 2025-11-29 07:50:15.979 233728 DEBUG oslo_concurrency.processutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.881s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:15 np0005539552 nova_compute[233724]: 2025-11-29 07:50:15.981 233728 DEBUG nova.virt.libvirt.vif [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:49:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-497379200-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-497379200-2',id=4,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3c7cd563ba394223a76bd2579800406c',ramdisk_id='',reservation_id='r-k1o602if',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-1372302389',owner_user_name='tempest-AutoAllocateNetworkTest-1372302389-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:49:34Z,user_data=None,user_id='01739124bee74c899af6384f8ec2d427',uuid=2519f959-bfbf-49b6-b80d-eff80129064b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fc147d63-f6e7-4928-8991-4204b1b015bc", "address": "fa:16:3e:ad:42:7a", "network": {"id": "01d0d21b-eaad-4f5d-82d1-0f4d31e80363", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::357", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.82", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c7cd563ba394223a76bd2579800406c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc147d63-f6", "ovs_interfaceid": "fc147d63-f6e7-4928-8991-4204b1b015bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:50:15 np0005539552 nova_compute[233724]: 2025-11-29 07:50:15.981 233728 DEBUG nova.network.os_vif_util [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Converting VIF {"id": "fc147d63-f6e7-4928-8991-4204b1b015bc", "address": "fa:16:3e:ad:42:7a", "network": {"id": "01d0d21b-eaad-4f5d-82d1-0f4d31e80363", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::357", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.82", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c7cd563ba394223a76bd2579800406c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc147d63-f6", "ovs_interfaceid": "fc147d63-f6e7-4928-8991-4204b1b015bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:50:15 np0005539552 nova_compute[233724]: 2025-11-29 07:50:15.983 233728 DEBUG nova.network.os_vif_util [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:42:7a,bridge_name='br-int',has_traffic_filtering=True,id=fc147d63-f6e7-4928-8991-4204b1b015bc,network=Network(01d0d21b-eaad-4f5d-82d1-0f4d31e80363),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc147d63-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:50:15 np0005539552 nova_compute[233724]: 2025-11-29 07:50:15.984 233728 DEBUG nova.objects.instance [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lazy-loading 'pci_devices' on Instance uuid 2519f959-bfbf-49b6-b80d-eff80129064b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:50:16 np0005539552 nova_compute[233724]: 2025-11-29 07:50:16.040 233728 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:50:16 np0005539552 nova_compute[233724]:  <uuid>2519f959-bfbf-49b6-b80d-eff80129064b</uuid>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:  <name>instance-00000004</name>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:50:16 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:      <nova:name>tempest-tempest.common.compute-instance-497379200-2</nova:name>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 07:50:14</nova:creationTime>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 02:50:16 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:        <nova:user uuid="01739124bee74c899af6384f8ec2d427">tempest-AutoAllocateNetworkTest-1372302389-project-member</nova:user>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:        <nova:project uuid="3c7cd563ba394223a76bd2579800406c">tempest-AutoAllocateNetworkTest-1372302389</nova:project>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:        <nova:port uuid="fc147d63-f6e7-4928-8991-4204b1b015bc">
Nov 29 02:50:16 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="fdfe:381f:8400:1::357" ipVersion="6"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.1.0.82" ipVersion="4"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    <system>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:      <entry name="serial">2519f959-bfbf-49b6-b80d-eff80129064b</entry>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:      <entry name="uuid">2519f959-bfbf-49b6-b80d-eff80129064b</entry>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    </system>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:  <os>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:  </os>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:  <features>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:  </features>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:  </clock>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:  <devices>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 02:50:16 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/2519f959-bfbf-49b6-b80d-eff80129064b_disk">
Nov 29 02:50:16 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:      </source>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 02:50:16 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:      </auth>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    </disk>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 02:50:16 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/2519f959-bfbf-49b6-b80d-eff80129064b_disk.config">
Nov 29 02:50:16 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:      </source>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 02:50:16 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:      </auth>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    </disk>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 02:50:16 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:ad:42:7a"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:      <target dev="tapfc147d63-f6"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    </interface>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 02:50:16 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/2519f959-bfbf-49b6-b80d-eff80129064b/console.log" append="off"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    </serial>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    <video>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    </video>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 02:50:16 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    </rng>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 02:50:16 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 02:50:16 np0005539552 nova_compute[233724]:  </devices>
Nov 29 02:50:16 np0005539552 nova_compute[233724]: </domain>
Nov 29 02:50:16 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:50:16 np0005539552 nova_compute[233724]: 2025-11-29 07:50:16.041 233728 DEBUG nova.compute.manager [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Preparing to wait for external event network-vif-plugged-fc147d63-f6e7-4928-8991-4204b1b015bc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 02:50:16 np0005539552 nova_compute[233724]: 2025-11-29 07:50:16.041 233728 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Acquiring lock "2519f959-bfbf-49b6-b80d-eff80129064b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:16 np0005539552 nova_compute[233724]: 2025-11-29 07:50:16.042 233728 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "2519f959-bfbf-49b6-b80d-eff80129064b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:16 np0005539552 nova_compute[233724]: 2025-11-29 07:50:16.042 233728 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "2519f959-bfbf-49b6-b80d-eff80129064b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:16 np0005539552 nova_compute[233724]: 2025-11-29 07:50:16.042 233728 DEBUG nova.virt.libvirt.vif [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:49:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-497379200-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-497379200-2',id=4,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3c7cd563ba394223a76bd2579800406c',ramdisk_id='',reservation_id='r-k1o602if',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-1372302389',owner_user_name='tempest-AutoAllocateNetworkTest-1372302389-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:49:34Z,user_data=None,user_id='01739124bee74c899af6384f8ec2d427',uuid=2519f959-bfbf-49b6-b80d-eff80129064b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fc147d63-f6e7-4928-8991-4204b1b015bc", "address": "fa:16:3e:ad:42:7a", "network": {"id": "01d0d21b-eaad-4f5d-82d1-0f4d31e80363", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::357", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.82", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c7cd563ba394223a76bd2579800406c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc147d63-f6", "ovs_interfaceid": "fc147d63-f6e7-4928-8991-4204b1b015bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:50:16 np0005539552 nova_compute[233724]: 2025-11-29 07:50:16.043 233728 DEBUG nova.network.os_vif_util [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Converting VIF {"id": "fc147d63-f6e7-4928-8991-4204b1b015bc", "address": "fa:16:3e:ad:42:7a", "network": {"id": "01d0d21b-eaad-4f5d-82d1-0f4d31e80363", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::357", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.82", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c7cd563ba394223a76bd2579800406c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc147d63-f6", "ovs_interfaceid": "fc147d63-f6e7-4928-8991-4204b1b015bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:50:16 np0005539552 nova_compute[233724]: 2025-11-29 07:50:16.043 233728 DEBUG nova.network.os_vif_util [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:42:7a,bridge_name='br-int',has_traffic_filtering=True,id=fc147d63-f6e7-4928-8991-4204b1b015bc,network=Network(01d0d21b-eaad-4f5d-82d1-0f4d31e80363),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc147d63-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:50:16 np0005539552 nova_compute[233724]: 2025-11-29 07:50:16.043 233728 DEBUG os_vif [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:42:7a,bridge_name='br-int',has_traffic_filtering=True,id=fc147d63-f6e7-4928-8991-4204b1b015bc,network=Network(01d0d21b-eaad-4f5d-82d1-0f4d31e80363),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc147d63-f6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:50:16 np0005539552 nova_compute[233724]: 2025-11-29 07:50:16.146 233728 DEBUG ovsdbapp.backend.ovs_idl [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 29 02:50:16 np0005539552 nova_compute[233724]: 2025-11-29 07:50:16.146 233728 DEBUG ovsdbapp.backend.ovs_idl [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 29 02:50:16 np0005539552 nova_compute[233724]: 2025-11-29 07:50:16.146 233728 DEBUG ovsdbapp.backend.ovs_idl [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 29 02:50:16 np0005539552 nova_compute[233724]: 2025-11-29 07:50:16.147 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 02:50:16 np0005539552 nova_compute[233724]: 2025-11-29 07:50:16.148 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [POLLOUT] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:16 np0005539552 nova_compute[233724]: 2025-11-29 07:50:16.148 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 02:50:16 np0005539552 nova_compute[233724]: 2025-11-29 07:50:16.149 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:16 np0005539552 nova_compute[233724]: 2025-11-29 07:50:16.150 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:16 np0005539552 nova_compute[233724]: 2025-11-29 07:50:16.152 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:16 np0005539552 nova_compute[233724]: 2025-11-29 07:50:16.161 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:16 np0005539552 nova_compute[233724]: 2025-11-29 07:50:16.161 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:50:16 np0005539552 nova_compute[233724]: 2025-11-29 07:50:16.161 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:50:16 np0005539552 nova_compute[233724]: 2025-11-29 07:50:16.162 233728 INFO oslo.privsep.daemon [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpnnln0_t5/privsep.sock']#033[00m
Nov 29 02:50:16 np0005539552 nova_compute[233724]: 2025-11-29 07:50:16.822 233728 INFO oslo.privsep.daemon [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Spawned new privsep daemon via rootwrap#033[00m
Nov 29 02:50:16 np0005539552 nova_compute[233724]: 2025-11-29 07:50:16.706 239298 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 29 02:50:16 np0005539552 nova_compute[233724]: 2025-11-29 07:50:16.711 239298 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 29 02:50:16 np0005539552 nova_compute[233724]: 2025-11-29 07:50:16.714 239298 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Nov 29 02:50:16 np0005539552 nova_compute[233724]: 2025-11-29 07:50:16.714 239298 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239298#033[00m
Nov 29 02:50:16 np0005539552 nova_compute[233724]: 2025-11-29 07:50:16.873 233728 DEBUG nova.network.neutron [req-aef2bd83-5ec1-4aed-a0ad-1350adf1aab0 req-bbcc508c-d08e-4b65-8ff7-6e8707fd933b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Updated VIF entry in instance network info cache for port fc147d63-f6e7-4928-8991-4204b1b015bc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:50:16 np0005539552 nova_compute[233724]: 2025-11-29 07:50:16.874 233728 DEBUG nova.network.neutron [req-aef2bd83-5ec1-4aed-a0ad-1350adf1aab0 req-bbcc508c-d08e-4b65-8ff7-6e8707fd933b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Updating instance_info_cache with network_info: [{"id": "fc147d63-f6e7-4928-8991-4204b1b015bc", "address": "fa:16:3e:ad:42:7a", "network": {"id": "01d0d21b-eaad-4f5d-82d1-0f4d31e80363", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::357", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.82", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c7cd563ba394223a76bd2579800406c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc147d63-f6", "ovs_interfaceid": "fc147d63-f6e7-4928-8991-4204b1b015bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:50:17 np0005539552 nova_compute[233724]: 2025-11-29 07:50:17.169 233728 DEBUG oslo_concurrency.lockutils [req-aef2bd83-5ec1-4aed-a0ad-1350adf1aab0 req-bbcc508c-d08e-4b65-8ff7-6e8707fd933b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-2519f959-bfbf-49b6-b80d-eff80129064b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:50:17 np0005539552 nova_compute[233724]: 2025-11-29 07:50:17.196 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:17 np0005539552 nova_compute[233724]: 2025-11-29 07:50:17.197 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc147d63-f6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:50:17 np0005539552 nova_compute[233724]: 2025-11-29 07:50:17.197 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfc147d63-f6, col_values=(('external_ids', {'iface-id': 'fc147d63-f6e7-4928-8991-4204b1b015bc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ad:42:7a', 'vm-uuid': '2519f959-bfbf-49b6-b80d-eff80129064b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:50:17 np0005539552 nova_compute[233724]: 2025-11-29 07:50:17.199 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:17 np0005539552 NetworkManager[48926]: <info>  [1764402617.2003] manager: (tapfc147d63-f6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Nov 29 02:50:17 np0005539552 nova_compute[233724]: 2025-11-29 07:50:17.201 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:50:17 np0005539552 nova_compute[233724]: 2025-11-29 07:50:17.205 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:17 np0005539552 nova_compute[233724]: 2025-11-29 07:50:17.206 233728 INFO os_vif [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:42:7a,bridge_name='br-int',has_traffic_filtering=True,id=fc147d63-f6e7-4928-8991-4204b1b015bc,network=Network(01d0d21b-eaad-4f5d-82d1-0f4d31e80363),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc147d63-f6')#033[00m
Nov 29 02:50:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:17.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:17 np0005539552 nova_compute[233724]: 2025-11-29 07:50:17.254 233728 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:50:17 np0005539552 nova_compute[233724]: 2025-11-29 07:50:17.255 233728 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:50:17 np0005539552 nova_compute[233724]: 2025-11-29 07:50:17.255 233728 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] No VIF found with MAC fa:16:3e:ad:42:7a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 02:50:17 np0005539552 nova_compute[233724]: 2025-11-29 07:50:17.256 233728 INFO nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Using config drive#033[00m
Nov 29 02:50:17 np0005539552 nova_compute[233724]: 2025-11-29 07:50:17.503 233728 DEBUG nova.storage.rbd_utils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] rbd image 2519f959-bfbf-49b6-b80d-eff80129064b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:50:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:17.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:18 np0005539552 nova_compute[233724]: 2025-11-29 07:50:18.136 233728 INFO nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Creating config drive at /var/lib/nova/instances/2519f959-bfbf-49b6-b80d-eff80129064b/disk.config#033[00m
Nov 29 02:50:18 np0005539552 nova_compute[233724]: 2025-11-29 07:50:18.142 233728 DEBUG oslo_concurrency.processutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2519f959-bfbf-49b6-b80d-eff80129064b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9u3uim5v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:18 np0005539552 nova_compute[233724]: 2025-11-29 07:50:18.292 233728 DEBUG oslo_concurrency.processutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2519f959-bfbf-49b6-b80d-eff80129064b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9u3uim5v" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:18 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:50:18 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:50:18 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:50:18 np0005539552 nova_compute[233724]: 2025-11-29 07:50:18.861 233728 DEBUG nova.storage.rbd_utils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] rbd image 2519f959-bfbf-49b6-b80d-eff80129064b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:50:18 np0005539552 nova_compute[233724]: 2025-11-29 07:50:18.865 233728 DEBUG oslo_concurrency.processutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2519f959-bfbf-49b6-b80d-eff80129064b/disk.config 2519f959-bfbf-49b6-b80d-eff80129064b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:18 np0005539552 podman[239477]: 2025-11-29 07:50:18.964333492 +0000 UTC m=+0.053431199 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 02:50:18 np0005539552 podman[239476]: 2025-11-29 07:50:18.966148692 +0000 UTC m=+0.056852982 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:50:19 np0005539552 nova_compute[233724]: 2025-11-29 07:50:19.062 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:19.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:19 np0005539552 nova_compute[233724]: 2025-11-29 07:50:19.772 233728 DEBUG oslo_concurrency.processutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2519f959-bfbf-49b6-b80d-eff80129064b/disk.config 2519f959-bfbf-49b6-b80d-eff80129064b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.907s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:19 np0005539552 nova_compute[233724]: 2025-11-29 07:50:19.772 233728 INFO nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Deleting local config drive /var/lib/nova/instances/2519f959-bfbf-49b6-b80d-eff80129064b/disk.config because it was imported into RBD.#033[00m
Nov 29 02:50:19 np0005539552 systemd[1]: Starting libvirt secret daemon...
Nov 29 02:50:19 np0005539552 systemd[1]: Started libvirt secret daemon.
Nov 29 02:50:19 np0005539552 kernel: tun: Universal TUN/TAP device driver, 1.6
Nov 29 02:50:19 np0005539552 kernel: tapfc147d63-f6: entered promiscuous mode
Nov 29 02:50:19 np0005539552 NetworkManager[48926]: <info>  [1764402619.8782] manager: (tapfc147d63-f6): new Tun device (/org/freedesktop/NetworkManager/Devices/24)
Nov 29 02:50:19 np0005539552 nova_compute[233724]: 2025-11-29 07:50:19.879 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:19 np0005539552 ovn_controller[133798]: 2025-11-29T07:50:19Z|00027|binding|INFO|Claiming lport fc147d63-f6e7-4928-8991-4204b1b015bc for this chassis.
Nov 29 02:50:19 np0005539552 ovn_controller[133798]: 2025-11-29T07:50:19Z|00028|binding|INFO|fc147d63-f6e7-4928-8991-4204b1b015bc: Claiming fa:16:3e:ad:42:7a 10.1.0.82 fdfe:381f:8400:1::357
Nov 29 02:50:19 np0005539552 nova_compute[233724]: 2025-11-29 07:50:19.886 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:19.899 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:42:7a 10.1.0.82 fdfe:381f:8400:1::357'], port_security=['fa:16:3e:ad:42:7a 10.1.0.82 fdfe:381f:8400:1::357'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.82/26 fdfe:381f:8400:1::357/64', 'neutron:device_id': '2519f959-bfbf-49b6-b80d-eff80129064b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01d0d21b-eaad-4f5d-82d1-0f4d31e80363', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3c7cd563ba394223a76bd2579800406c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd7540713-07cb-41c9-9bad-f36175f21356', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d42f5ec8-3ffe-4c03-bf87-380969e1ba25, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=fc147d63-f6e7-4928-8991-4204b1b015bc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:50:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:19.900 143400 INFO neutron.agent.ovn.metadata.agent [-] Port fc147d63-f6e7-4928-8991-4204b1b015bc in datapath 01d0d21b-eaad-4f5d-82d1-0f4d31e80363 bound to our chassis#033[00m
Nov 29 02:50:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:19.903 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 01d0d21b-eaad-4f5d-82d1-0f4d31e80363#033[00m
Nov 29 02:50:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:19.904 143400 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpyp9jnz1o/privsep.sock']#033[00m
Nov 29 02:50:19 np0005539552 systemd-udevd[239568]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:50:19 np0005539552 systemd-machined[196379]: New machine qemu-1-instance-00000004.
Nov 29 02:50:19 np0005539552 NetworkManager[48926]: <info>  [1764402619.9294] device (tapfc147d63-f6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:50:19 np0005539552 NetworkManager[48926]: <info>  [1764402619.9303] device (tapfc147d63-f6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:50:19 np0005539552 systemd[1]: Started Virtual Machine qemu-1-instance-00000004.
Nov 29 02:50:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:19.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:19 np0005539552 nova_compute[233724]: 2025-11-29 07:50:19.961 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:19 np0005539552 ovn_controller[133798]: 2025-11-29T07:50:19Z|00029|binding|INFO|Setting lport fc147d63-f6e7-4928-8991-4204b1b015bc ovn-installed in OVS
Nov 29 02:50:19 np0005539552 ovn_controller[133798]: 2025-11-29T07:50:19Z|00030|binding|INFO|Setting lport fc147d63-f6e7-4928-8991-4204b1b015bc up in Southbound
Nov 29 02:50:19 np0005539552 nova_compute[233724]: 2025-11-29 07:50:19.970 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:20.600 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:20.601 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:20.601 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:20.657 143400 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Nov 29 02:50:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:20.657 143400 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpyp9jnz1o/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Nov 29 02:50:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:20.527 239589 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 29 02:50:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:20.535 239589 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 29 02:50:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:20.541 239589 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Nov 29 02:50:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:20.541 239589 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239589#033[00m
Nov 29 02:50:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:20.660 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[8872296c-a11c-42dd-a551-24fbe1be7e40]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:20 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:21.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:21 np0005539552 podman[239613]: 2025-11-29 07:50:21.34507297 +0000 UTC m=+0.118235395 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 02:50:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:21.361 239589 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:21.362 239589 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:21.362 239589 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:21 np0005539552 nova_compute[233724]: 2025-11-29 07:50:21.551 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764402621.5508733, 2519f959-bfbf-49b6-b80d-eff80129064b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:50:21 np0005539552 nova_compute[233724]: 2025-11-29 07:50:21.552 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] VM Started (Lifecycle Event)#033[00m
Nov 29 02:50:21 np0005539552 nova_compute[233724]: 2025-11-29 07:50:21.573 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:50:21 np0005539552 nova_compute[233724]: 2025-11-29 07:50:21.577 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764402621.5515041, 2519f959-bfbf-49b6-b80d-eff80129064b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:50:21 np0005539552 nova_compute[233724]: 2025-11-29 07:50:21.578 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] VM Paused (Lifecycle Event)#033[00m
Nov 29 02:50:21 np0005539552 nova_compute[233724]: 2025-11-29 07:50:21.593 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:50:21 np0005539552 nova_compute[233724]: 2025-11-29 07:50:21.596 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:50:21 np0005539552 nova_compute[233724]: 2025-11-29 07:50:21.623 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:50:21 np0005539552 nova_compute[233724]: 2025-11-29 07:50:21.898 233728 DEBUG nova.compute.manager [req-5ea53096-0efb-4911-8744-a0a4f53caa61 req-da2881a5-0e99-453f-aba9-b82ef655d6b8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Received event network-vif-plugged-fc147d63-f6e7-4928-8991-4204b1b015bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:50:21 np0005539552 nova_compute[233724]: 2025-11-29 07:50:21.899 233728 DEBUG oslo_concurrency.lockutils [req-5ea53096-0efb-4911-8744-a0a4f53caa61 req-da2881a5-0e99-453f-aba9-b82ef655d6b8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "2519f959-bfbf-49b6-b80d-eff80129064b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:21 np0005539552 nova_compute[233724]: 2025-11-29 07:50:21.899 233728 DEBUG oslo_concurrency.lockutils [req-5ea53096-0efb-4911-8744-a0a4f53caa61 req-da2881a5-0e99-453f-aba9-b82ef655d6b8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2519f959-bfbf-49b6-b80d-eff80129064b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:21 np0005539552 nova_compute[233724]: 2025-11-29 07:50:21.899 233728 DEBUG oslo_concurrency.lockutils [req-5ea53096-0efb-4911-8744-a0a4f53caa61 req-da2881a5-0e99-453f-aba9-b82ef655d6b8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2519f959-bfbf-49b6-b80d-eff80129064b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:21 np0005539552 nova_compute[233724]: 2025-11-29 07:50:21.900 233728 DEBUG nova.compute.manager [req-5ea53096-0efb-4911-8744-a0a4f53caa61 req-da2881a5-0e99-453f-aba9-b82ef655d6b8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Processing event network-vif-plugged-fc147d63-f6e7-4928-8991-4204b1b015bc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 02:50:21 np0005539552 nova_compute[233724]: 2025-11-29 07:50:21.900 233728 DEBUG nova.compute.manager [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:50:21 np0005539552 nova_compute[233724]: 2025-11-29 07:50:21.905 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764402621.904985, 2519f959-bfbf-49b6-b80d-eff80129064b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:50:21 np0005539552 nova_compute[233724]: 2025-11-29 07:50:21.906 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:50:21 np0005539552 nova_compute[233724]: 2025-11-29 07:50:21.909 233728 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:50:21 np0005539552 nova_compute[233724]: 2025-11-29 07:50:21.916 233728 INFO nova.virt.libvirt.driver [-] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Instance spawned successfully.#033[00m
Nov 29 02:50:21 np0005539552 nova_compute[233724]: 2025-11-29 07:50:21.916 233728 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:50:21 np0005539552 nova_compute[233724]: 2025-11-29 07:50:21.933 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:50:21 np0005539552 nova_compute[233724]: 2025-11-29 07:50:21.943 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:50:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:21.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:21 np0005539552 nova_compute[233724]: 2025-11-29 07:50:21.950 233728 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:50:21 np0005539552 nova_compute[233724]: 2025-11-29 07:50:21.950 233728 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:50:21 np0005539552 nova_compute[233724]: 2025-11-29 07:50:21.951 233728 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:50:21 np0005539552 nova_compute[233724]: 2025-11-29 07:50:21.952 233728 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:50:21 np0005539552 nova_compute[233724]: 2025-11-29 07:50:21.953 233728 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:50:21 np0005539552 nova_compute[233724]: 2025-11-29 07:50:21.954 233728 DEBUG nova.virt.libvirt.driver [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:50:21 np0005539552 nova_compute[233724]: 2025-11-29 07:50:21.971 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:50:22 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:22.017 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3eb22492-7c7f-40ee-b5ff-a5155977f846]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:22 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:22.018 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap01d0d21b-e1 in ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 02:50:22 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:22.020 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap01d0d21b-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 02:50:22 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:22.020 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3f417035-2b91-4814-8990-ed1454bac023]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:22 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:22.023 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4aaee784-82f3-4ea1-aac9-c226a529e261]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:22 np0005539552 nova_compute[233724]: 2025-11-29 07:50:22.023 233728 INFO nova.compute.manager [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Took 47.96 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 02:50:22 np0005539552 nova_compute[233724]: 2025-11-29 07:50:22.024 233728 DEBUG nova.compute.manager [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:50:22 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:22.051 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[7b22d78d-a05e-4d89-9c48-7da3abccfec7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:22 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:22.072 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[bb56397e-8b48-4474-b00f-6c7dfd0e6d00]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:22 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:22.074 143400 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpbzz2mnwf/privsep.sock']#033[00m
Nov 29 02:50:22 np0005539552 nova_compute[233724]: 2025-11-29 07:50:22.200 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:22 np0005539552 nova_compute[233724]: 2025-11-29 07:50:22.286 233728 INFO nova.compute.manager [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Took 49.26 seconds to build instance.#033[00m
Nov 29 02:50:22 np0005539552 nova_compute[233724]: 2025-11-29 07:50:22.324 233728 DEBUG oslo_concurrency.lockutils [None req-1bb27776-d6bc-4444-8c6c-f6954d0b2cd4 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "2519f959-bfbf-49b6-b80d-eff80129064b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 49.402s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:22 np0005539552 nova_compute[233724]: 2025-11-29 07:50:22.436 233728 DEBUG oslo_concurrency.lockutils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Acquiring lock "b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:22 np0005539552 nova_compute[233724]: 2025-11-29 07:50:22.436 233728 DEBUG oslo_concurrency.lockutils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Lock "b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:22 np0005539552 nova_compute[233724]: 2025-11-29 07:50:22.473 233728 DEBUG nova.compute.manager [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:50:22 np0005539552 nova_compute[233724]: 2025-11-29 07:50:22.570 233728 DEBUG oslo_concurrency.lockutils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:22 np0005539552 nova_compute[233724]: 2025-11-29 07:50:22.570 233728 DEBUG oslo_concurrency.lockutils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:22 np0005539552 nova_compute[233724]: 2025-11-29 07:50:22.580 233728 DEBUG nova.virt.hardware [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:50:22 np0005539552 nova_compute[233724]: 2025-11-29 07:50:22.581 233728 INFO nova.compute.claims [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 02:50:22 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:22.704 143400 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Nov 29 02:50:22 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:22.705 143400 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpbzz2mnwf/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Nov 29 02:50:22 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:22.581 239674 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 29 02:50:22 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:22.590 239674 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 29 02:50:22 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:22.594 239674 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Nov 29 02:50:22 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:22.595 239674 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239674#033[00m
Nov 29 02:50:22 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:22.707 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[83360440-3356-451e-a5ac-4ab052b72332]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:22 np0005539552 nova_compute[233724]: 2025-11-29 07:50:22.779 233728 DEBUG oslo_concurrency.processutils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:23.178 239674 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:23.178 239674 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:23.178 239674 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:23.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:50:23 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2447435816' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:50:23 np0005539552 nova_compute[233724]: 2025-11-29 07:50:23.249 233728 DEBUG oslo_concurrency.processutils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:23 np0005539552 nova_compute[233724]: 2025-11-29 07:50:23.255 233728 DEBUG nova.compute.provider_tree [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:50:23 np0005539552 nova_compute[233724]: 2025-11-29 07:50:23.292 233728 DEBUG nova.scheduler.client.report [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:50:23 np0005539552 nova_compute[233724]: 2025-11-29 07:50:23.310 233728 DEBUG oslo_concurrency.lockutils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:23 np0005539552 nova_compute[233724]: 2025-11-29 07:50:23.311 233728 DEBUG nova.compute.manager [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:50:23 np0005539552 nova_compute[233724]: 2025-11-29 07:50:23.357 233728 DEBUG nova.compute.manager [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:50:23 np0005539552 nova_compute[233724]: 2025-11-29 07:50:23.358 233728 DEBUG nova.network.neutron [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:50:23 np0005539552 nova_compute[233724]: 2025-11-29 07:50:23.374 233728 INFO nova.virt.libvirt.driver [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:50:23 np0005539552 nova_compute[233724]: 2025-11-29 07:50:23.410 233728 DEBUG nova.compute.manager [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:50:23 np0005539552 nova_compute[233724]: 2025-11-29 07:50:23.523 233728 DEBUG nova.compute.manager [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:50:23 np0005539552 nova_compute[233724]: 2025-11-29 07:50:23.526 233728 DEBUG nova.virt.libvirt.driver [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:50:23 np0005539552 nova_compute[233724]: 2025-11-29 07:50:23.528 233728 INFO nova.virt.libvirt.driver [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Creating image(s)
Nov 29 02:50:23 np0005539552 nova_compute[233724]: 2025-11-29 07:50:23.556 233728 DEBUG nova.storage.rbd_utils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] rbd image b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:50:23 np0005539552 nova_compute[233724]: 2025-11-29 07:50:23.583 233728 DEBUG nova.storage.rbd_utils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] rbd image b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:50:23 np0005539552 nova_compute[233724]: 2025-11-29 07:50:23.621 233728 DEBUG nova.storage.rbd_utils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] rbd image b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:50:23 np0005539552 nova_compute[233724]: 2025-11-29 07:50:23.627 233728 DEBUG oslo_concurrency.processutils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:50:23 np0005539552 nova_compute[233724]: 2025-11-29 07:50:23.692 233728 DEBUG nova.policy [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd57d713485e84d19a429533b570c4189', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '06acd02df57149e795d2be57787bb9ed', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 02:50:23 np0005539552 nova_compute[233724]: 2025-11-29 07:50:23.713 233728 DEBUG oslo_concurrency.processutils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:50:23 np0005539552 nova_compute[233724]: 2025-11-29 07:50:23.714 233728 DEBUG oslo_concurrency.lockutils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:50:23 np0005539552 nova_compute[233724]: 2025-11-29 07:50:23.715 233728 DEBUG oslo_concurrency.lockutils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:50:23 np0005539552 nova_compute[233724]: 2025-11-29 07:50:23.716 233728 DEBUG oslo_concurrency.lockutils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:50:23 np0005539552 nova_compute[233724]: 2025-11-29 07:50:23.743 233728 DEBUG nova.storage.rbd_utils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] rbd image b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:50:23 np0005539552 nova_compute[233724]: 2025-11-29 07:50:23.748 233728 DEBUG oslo_concurrency.processutils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:50:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:23.780 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[0ddc5fd8-c3e0-4ea5-8a33-1c248c26848c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:50:23 np0005539552 NetworkManager[48926]: <info>  [1764402623.8029] manager: (tap01d0d21b-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/25)
Nov 29 02:50:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:23.803 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b314fbce-b13c-4e16-8185-47a07ed0f513]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:50:23 np0005539552 systemd-udevd[239799]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:50:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:23.837 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[e658496c-6681-4aeb-a2a6-a461449a6de1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:50:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:23.841 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[c8363d62-079e-47b3-9733-834546a90ee3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:50:23 np0005539552 NetworkManager[48926]: <info>  [1764402623.8664] device (tap01d0d21b-e0): carrier: link connected
Nov 29 02:50:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:23.873 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[85117877-ec20-42a7-9250-5ab9d1481e50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:50:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:23.889 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[64f3a7cf-35b3-4ff7-8071-058fac40f0f3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01d0d21b-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:68:94:97'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 566955, 'reachable_time': 27014, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 239817, 'error': None, 'target': 'ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:50:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:23.903 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[07364591-4dfc-47b6-b2eb-52fa9a7fe49e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe68:9497'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 566955, 'tstamp': 566955}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 239818, 'error': None, 'target': 'ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:50:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:23.918 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[852f62e4-abd0-49ac-af25-833e9b6fdf94]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01d0d21b-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:68:94:97'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 566955, 'reachable_time': 27014, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 239819, 'error': None, 'target': 'ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:50:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:23.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:23.952 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ff8406b2-4aa7-40b0-8413-838d611b3069]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:50:24 np0005539552 nova_compute[233724]: 2025-11-29 07:50:24.005 233728 DEBUG nova.compute.manager [req-6f992f45-053d-465a-8bad-cb4885046822 req-5bf805f3-cdfb-4dad-9054-f7a6b9c0999a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Received event network-vif-plugged-fc147d63-f6e7-4928-8991-4204b1b015bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:50:24 np0005539552 nova_compute[233724]: 2025-11-29 07:50:24.006 233728 DEBUG oslo_concurrency.lockutils [req-6f992f45-053d-465a-8bad-cb4885046822 req-5bf805f3-cdfb-4dad-9054-f7a6b9c0999a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "2519f959-bfbf-49b6-b80d-eff80129064b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:50:24 np0005539552 nova_compute[233724]: 2025-11-29 07:50:24.007 233728 DEBUG oslo_concurrency.lockutils [req-6f992f45-053d-465a-8bad-cb4885046822 req-5bf805f3-cdfb-4dad-9054-f7a6b9c0999a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2519f959-bfbf-49b6-b80d-eff80129064b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:50:24 np0005539552 nova_compute[233724]: 2025-11-29 07:50:24.007 233728 DEBUG oslo_concurrency.lockutils [req-6f992f45-053d-465a-8bad-cb4885046822 req-5bf805f3-cdfb-4dad-9054-f7a6b9c0999a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2519f959-bfbf-49b6-b80d-eff80129064b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:50:24 np0005539552 nova_compute[233724]: 2025-11-29 07:50:24.007 233728 DEBUG nova.compute.manager [req-6f992f45-053d-465a-8bad-cb4885046822 req-5bf805f3-cdfb-4dad-9054-f7a6b9c0999a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] No waiting events found dispatching network-vif-plugged-fc147d63-f6e7-4928-8991-4204b1b015bc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 02:50:24 np0005539552 nova_compute[233724]: 2025-11-29 07:50:24.007 233728 WARNING nova.compute.manager [req-6f992f45-053d-465a-8bad-cb4885046822 req-5bf805f3-cdfb-4dad-9054-f7a6b9c0999a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Received unexpected event network-vif-plugged-fc147d63-f6e7-4928-8991-4204b1b015bc for instance with vm_state active and task_state None.
Nov 29 02:50:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:24.022 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2e636eed-e139-48a6-acc0-dd79eb0f740f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:50:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:24.023 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01d0d21b-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 02:50:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:24.024 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 02:50:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:24.024 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap01d0d21b-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 02:50:24 np0005539552 kernel: tap01d0d21b-e0: entered promiscuous mode
Nov 29 02:50:24 np0005539552 NetworkManager[48926]: <info>  [1764402624.0302] manager: (tap01d0d21b-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Nov 29 02:50:24 np0005539552 nova_compute[233724]: 2025-11-29 07:50:24.032 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:50:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:24.033 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap01d0d21b-e0, col_values=(('external_ids', {'iface-id': 'c5a666a5-4b3e-4d4d-821a-ea0f64e84c84'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 02:50:24 np0005539552 nova_compute[233724]: 2025-11-29 07:50:24.039 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:50:24 np0005539552 ovn_controller[133798]: 2025-11-29T07:50:24Z|00031|binding|INFO|Releasing lport c5a666a5-4b3e-4d4d-821a-ea0f64e84c84 from this chassis (sb_readonly=0)
Nov 29 02:50:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:24.042 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/01d0d21b-eaad-4f5d-82d1-0f4d31e80363.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/01d0d21b-eaad-4f5d-82d1-0f4d31e80363.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 02:50:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:24.043 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4620afb6-e633-46d3-8261-47de864934c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:50:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:24.044 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 02:50:24 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 02:50:24 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 02:50:24 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-01d0d21b-eaad-4f5d-82d1-0f4d31e80363
Nov 29 02:50:24 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 02:50:24 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 02:50:24 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 02:50:24 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/01d0d21b-eaad-4f5d-82d1-0f4d31e80363.pid.haproxy
Nov 29 02:50:24 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 02:50:24 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 02:50:24 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 02:50:24 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 02:50:24 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 02:50:24 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 02:50:24 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 02:50:24 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 02:50:24 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 02:50:24 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 02:50:24 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 02:50:24 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 02:50:24 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 02:50:24 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 02:50:24 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 02:50:24 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 02:50:24 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 02:50:24 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 02:50:24 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 02:50:24 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 02:50:24 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 01d0d21b-eaad-4f5d-82d1-0f4d31e80363
Nov 29 02:50:24 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 02:50:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:24.046 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363', 'env', 'PROCESS_TAG=haproxy-01d0d21b-eaad-4f5d-82d1-0f4d31e80363', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/01d0d21b-eaad-4f5d-82d1-0f4d31e80363.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 02:50:24 np0005539552 nova_compute[233724]: 2025-11-29 07:50:24.055 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:50:24 np0005539552 nova_compute[233724]: 2025-11-29 07:50:24.064 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:50:24 np0005539552 nova_compute[233724]: 2025-11-29 07:50:24.440 233728 DEBUG nova.network.neutron [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Successfully created port: 9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 02:50:24 np0005539552 podman[239905]: 2025-11-29 07:50:24.482408945 +0000 UTC m=+0.061343453 container create 4ad72792ffd21475cf06f0b0b805e3c116879260c91edca97d4de66074330941 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Nov 29 02:50:24 np0005539552 systemd[1]: Started libpod-conmon-4ad72792ffd21475cf06f0b0b805e3c116879260c91edca97d4de66074330941.scope.
Nov 29 02:50:24 np0005539552 podman[239905]: 2025-11-29 07:50:24.451368724 +0000 UTC m=+0.030303272 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:50:24 np0005539552 systemd[1]: Started libcrun container.
Nov 29 02:50:24 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a0baa2e851f9e1fc94932f804089afefe03e50bef85198f96a9b794acab5b6f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:50:24 np0005539552 podman[239905]: 2025-11-29 07:50:24.569874666 +0000 UTC m=+0.148809184 container init 4ad72792ffd21475cf06f0b0b805e3c116879260c91edca97d4de66074330941 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true)
Nov 29 02:50:24 np0005539552 podman[239905]: 2025-11-29 07:50:24.575214831 +0000 UTC m=+0.154149329 container start 4ad72792ffd21475cf06f0b0b805e3c116879260c91edca97d4de66074330941 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 02:50:24 np0005539552 neutron-haproxy-ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363[239921]: [NOTICE]   (239926) : New worker (239928) forked
Nov 29 02:50:24 np0005539552 neutron-haproxy-ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363[239921]: [NOTICE]   (239926) : Loading success.
Nov 29 02:50:24 np0005539552 nova_compute[233724]: 2025-11-29 07:50:24.811 233728 DEBUG oslo_concurrency.processutils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:50:24 np0005539552 nova_compute[233724]: 2025-11-29 07:50:24.895 233728 DEBUG nova.storage.rbd_utils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] resizing rbd image b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 02:50:25 np0005539552 nova_compute[233724]: 2025-11-29 07:50:25.008 233728 DEBUG nova.objects.instance [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Lazy-loading 'migration_context' on Instance uuid b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:50:25 np0005539552 nova_compute[233724]: 2025-11-29 07:50:25.043 233728 DEBUG nova.storage.rbd_utils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] rbd image b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:50:25 np0005539552 nova_compute[233724]: 2025-11-29 07:50:25.066 233728 DEBUG nova.storage.rbd_utils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] rbd image b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:50:25 np0005539552 nova_compute[233724]: 2025-11-29 07:50:25.070 233728 DEBUG oslo_concurrency.lockutils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:50:25 np0005539552 nova_compute[233724]: 2025-11-29 07:50:25.071 233728 DEBUG oslo_concurrency.lockutils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:50:25 np0005539552 nova_compute[233724]: 2025-11-29 07:50:25.072 233728 DEBUG oslo_concurrency.processutils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:50:25 np0005539552 nova_compute[233724]: 2025-11-29 07:50:25.094 233728 DEBUG oslo_concurrency.processutils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:50:25 np0005539552 nova_compute[233724]: 2025-11-29 07:50:25.095 233728 DEBUG oslo_concurrency.processutils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:25 np0005539552 nova_compute[233724]: 2025-11-29 07:50:25.129 233728 DEBUG oslo_concurrency.processutils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:25 np0005539552 nova_compute[233724]: 2025-11-29 07:50:25.130 233728 DEBUG oslo_concurrency.lockutils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:25 np0005539552 nova_compute[233724]: 2025-11-29 07:50:25.151 233728 DEBUG nova.storage.rbd_utils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] rbd image b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:50:25 np0005539552 nova_compute[233724]: 2025-11-29 07:50:25.154 233728 DEBUG oslo_concurrency.processutils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:25.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:25 np0005539552 nova_compute[233724]: 2025-11-29 07:50:25.277 233728 DEBUG nova.network.neutron [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Successfully updated port: 9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 02:50:25 np0005539552 nova_compute[233724]: 2025-11-29 07:50:25.293 233728 DEBUG oslo_concurrency.lockutils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Acquiring lock "refresh_cache-b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:50:25 np0005539552 nova_compute[233724]: 2025-11-29 07:50:25.293 233728 DEBUG oslo_concurrency.lockutils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Acquired lock "refresh_cache-b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:50:25 np0005539552 nova_compute[233724]: 2025-11-29 07:50:25.293 233728 DEBUG nova.network.neutron [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:50:25 np0005539552 nova_compute[233724]: 2025-11-29 07:50:25.392 233728 DEBUG nova.compute.manager [req-1290e696-0011-477e-9997-3580689d579e req-9c47f15f-9551-4a38-bcf0-dec4fd2b7cfa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Received event network-changed-9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:50:25 np0005539552 nova_compute[233724]: 2025-11-29 07:50:25.393 233728 DEBUG nova.compute.manager [req-1290e696-0011-477e-9997-3580689d579e req-9c47f15f-9551-4a38-bcf0-dec4fd2b7cfa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Refreshing instance network info cache due to event network-changed-9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:50:25 np0005539552 nova_compute[233724]: 2025-11-29 07:50:25.393 233728 DEBUG oslo_concurrency.lockutils [req-1290e696-0011-477e-9997-3580689d579e req-9c47f15f-9551-4a38-bcf0-dec4fd2b7cfa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:50:25 np0005539552 nova_compute[233724]: 2025-11-29 07:50:25.511 233728 DEBUG nova.network.neutron [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:50:25 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:25.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:26 np0005539552 nova_compute[233724]: 2025-11-29 07:50:26.445 233728 DEBUG oslo_concurrency.processutils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.291s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:26 np0005539552 nova_compute[233724]: 2025-11-29 07:50:26.558 233728 DEBUG nova.virt.libvirt.driver [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:50:26 np0005539552 nova_compute[233724]: 2025-11-29 07:50:26.559 233728 DEBUG nova.virt.libvirt.driver [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Ensure instance console log exists: /var/lib/nova/instances/b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:50:26 np0005539552 nova_compute[233724]: 2025-11-29 07:50:26.559 233728 DEBUG oslo_concurrency.lockutils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:26 np0005539552 nova_compute[233724]: 2025-11-29 07:50:26.559 233728 DEBUG oslo_concurrency.lockutils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:26 np0005539552 nova_compute[233724]: 2025-11-29 07:50:26.560 233728 DEBUG oslo_concurrency.lockutils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:26 np0005539552 nova_compute[233724]: 2025-11-29 07:50:26.678 233728 DEBUG nova.network.neutron [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Updating instance_info_cache with network_info: [{"id": "9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83", "address": "fa:16:3e:b0:0d:c0", "network": {"id": "1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2058609322-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06acd02df57149e795d2be57787bb9ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9df81ac8-a3", "ovs_interfaceid": "9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:50:26 np0005539552 nova_compute[233724]: 2025-11-29 07:50:26.729 233728 DEBUG oslo_concurrency.lockutils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Releasing lock "refresh_cache-b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:50:26 np0005539552 nova_compute[233724]: 2025-11-29 07:50:26.729 233728 DEBUG nova.compute.manager [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Instance network_info: |[{"id": "9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83", "address": "fa:16:3e:b0:0d:c0", "network": {"id": "1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2058609322-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06acd02df57149e795d2be57787bb9ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9df81ac8-a3", "ovs_interfaceid": "9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:50:26 np0005539552 nova_compute[233724]: 2025-11-29 07:50:26.730 233728 DEBUG oslo_concurrency.lockutils [req-1290e696-0011-477e-9997-3580689d579e req-9c47f15f-9551-4a38-bcf0-dec4fd2b7cfa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:50:26 np0005539552 nova_compute[233724]: 2025-11-29 07:50:26.730 233728 DEBUG nova.network.neutron [req-1290e696-0011-477e-9997-3580689d579e req-9c47f15f-9551-4a38-bcf0-dec4fd2b7cfa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Refreshing network info cache for port 9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:50:26 np0005539552 nova_compute[233724]: 2025-11-29 07:50:26.734 233728 DEBUG nova.virt.libvirt.driver [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Start _get_guest_xml network_info=[{"id": "9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83", "address": "fa:16:3e:b0:0d:c0", "network": {"id": "1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2058609322-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06acd02df57149e795d2be57787bb9ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9df81ac8-a3", "ovs_interfaceid": "9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [{'device_name': '/dev/vdb', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 1, 'encryption_format': None, 'encrypted': False}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:50:26 np0005539552 nova_compute[233724]: 2025-11-29 07:50:26.738 233728 WARNING nova.virt.libvirt.driver [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:50:26 np0005539552 nova_compute[233724]: 2025-11-29 07:50:26.743 233728 DEBUG nova.virt.libvirt.host [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:50:26 np0005539552 nova_compute[233724]: 2025-11-29 07:50:26.744 233728 DEBUG nova.virt.libvirt.host [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:50:26 np0005539552 nova_compute[233724]: 2025-11-29 07:50:26.748 233728 DEBUG nova.virt.libvirt.host [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:50:26 np0005539552 nova_compute[233724]: 2025-11-29 07:50:26.749 233728 DEBUG nova.virt.libvirt.host [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:50:26 np0005539552 nova_compute[233724]: 2025-11-29 07:50:26.750 233728 DEBUG nova.virt.libvirt.driver [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:50:26 np0005539552 nova_compute[233724]: 2025-11-29 07:50:26.751 233728 DEBUG nova.virt.hardware [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:49:27Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={hw_rng:allowed='True'},flavorid='219206037',id=3,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_1-1713151525',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:50:26 np0005539552 nova_compute[233724]: 2025-11-29 07:50:26.751 233728 DEBUG nova.virt.hardware [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:50:26 np0005539552 nova_compute[233724]: 2025-11-29 07:50:26.752 233728 DEBUG nova.virt.hardware [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:50:26 np0005539552 nova_compute[233724]: 2025-11-29 07:50:26.752 233728 DEBUG nova.virt.hardware [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:50:26 np0005539552 nova_compute[233724]: 2025-11-29 07:50:26.752 233728 DEBUG nova.virt.hardware [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:50:26 np0005539552 nova_compute[233724]: 2025-11-29 07:50:26.753 233728 DEBUG nova.virt.hardware [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:50:26 np0005539552 nova_compute[233724]: 2025-11-29 07:50:26.753 233728 DEBUG nova.virt.hardware [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:50:26 np0005539552 nova_compute[233724]: 2025-11-29 07:50:26.753 233728 DEBUG nova.virt.hardware [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:50:26 np0005539552 nova_compute[233724]: 2025-11-29 07:50:26.754 233728 DEBUG nova.virt.hardware [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:50:26 np0005539552 nova_compute[233724]: 2025-11-29 07:50:26.754 233728 DEBUG nova.virt.hardware [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:50:26 np0005539552 nova_compute[233724]: 2025-11-29 07:50:26.755 233728 DEBUG nova.virt.hardware [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:50:26 np0005539552 nova_compute[233724]: 2025-11-29 07:50:26.758 233728 DEBUG oslo_concurrency.processutils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:50:27 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1314722380' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:50:27 np0005539552 nova_compute[233724]: 2025-11-29 07:50:27.206 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:27 np0005539552 nova_compute[233724]: 2025-11-29 07:50:27.215 233728 DEBUG oslo_concurrency.processutils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:27 np0005539552 nova_compute[233724]: 2025-11-29 07:50:27.217 233728 DEBUG oslo_concurrency.processutils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:27.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:50:27 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4097849384' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:50:27 np0005539552 nova_compute[233724]: 2025-11-29 07:50:27.699 233728 DEBUG oslo_concurrency.processutils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:27 np0005539552 nova_compute[233724]: 2025-11-29 07:50:27.726 233728 DEBUG nova.storage.rbd_utils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] rbd image b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:50:27 np0005539552 nova_compute[233724]: 2025-11-29 07:50:27.729 233728 DEBUG oslo_concurrency.processutils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:27.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:50:28 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2751900038' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:50:28 np0005539552 nova_compute[233724]: 2025-11-29 07:50:28.143 233728 DEBUG oslo_concurrency.processutils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:28 np0005539552 nova_compute[233724]: 2025-11-29 07:50:28.144 233728 DEBUG nova.virt.libvirt.vif [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:50:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1279273348',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1279273348',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1279273348',id=9,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNs4qJju/LAwBpfwNEkpAniOUgsZvC5AJbe2gxJ9IIbsfIqlPQKB9AiaXItMjFGTGLax2vK2q305Wa2bDc3JMsgTFTOdHEZIjpwTy2cnsiyE7ulw1lLo0Ds1kT20t3frlA==',key_name='tempest-keypair-1679719729',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='06acd02df57149e795d2be57787bb9ed',ramdisk_id='',reservation_id='r-o7bphj1z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-456097839',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-456097839-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:50:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d57d713485e84d19a429533b570c4189',uuid=b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83", "address": "fa:16:3e:b0:0d:c0", "network": {"id": "1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2058609322-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06acd02df57149e795d2be57787bb9ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9df81ac8-a3", "ovs_interfaceid": "9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:50:28 np0005539552 nova_compute[233724]: 2025-11-29 07:50:28.145 233728 DEBUG nova.network.os_vif_util [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Converting VIF {"id": "9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83", "address": "fa:16:3e:b0:0d:c0", "network": {"id": "1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2058609322-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06acd02df57149e795d2be57787bb9ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9df81ac8-a3", "ovs_interfaceid": "9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:50:28 np0005539552 nova_compute[233724]: 2025-11-29 07:50:28.146 233728 DEBUG nova.network.os_vif_util [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:0d:c0,bridge_name='br-int',has_traffic_filtering=True,id=9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83,network=Network(1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9df81ac8-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:50:28 np0005539552 nova_compute[233724]: 2025-11-29 07:50:28.147 233728 DEBUG nova.objects.instance [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Lazy-loading 'pci_devices' on Instance uuid b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:50:28 np0005539552 nova_compute[233724]: 2025-11-29 07:50:28.167 233728 DEBUG nova.virt.libvirt.driver [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:50:28 np0005539552 nova_compute[233724]:  <uuid>b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61</uuid>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:  <name>instance-00000009</name>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:50:28 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:      <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-1279273348</nova:name>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 07:50:26</nova:creationTime>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:      <nova:flavor name="tempest-flavor_with_ephemeral_1-1713151525">
Nov 29 02:50:28 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:        <nova:ephemeral>1</nova:ephemeral>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:        <nova:user uuid="d57d713485e84d19a429533b570c4189">tempest-ServersWithSpecificFlavorTestJSON-456097839-project-member</nova:user>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:        <nova:project uuid="06acd02df57149e795d2be57787bb9ed">tempest-ServersWithSpecificFlavorTestJSON-456097839</nova:project>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:        <nova:port uuid="9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83">
Nov 29 02:50:28 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <system>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:      <entry name="serial">b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61</entry>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:      <entry name="uuid">b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61</entry>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    </system>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:  <os>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:  </os>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:  <features>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:  </features>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:  </clock>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:  <devices>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 02:50:28 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61_disk">
Nov 29 02:50:28 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:      </source>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 02:50:28 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:      </auth>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    </disk>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 02:50:28 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61_disk.eph0">
Nov 29 02:50:28 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:      </source>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 02:50:28 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:      </auth>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:      <target dev="vdb" bus="virtio"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    </disk>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 02:50:28 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61_disk.config">
Nov 29 02:50:28 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:      </source>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 02:50:28 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:      </auth>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    </disk>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 02:50:28 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:b0:0d:c0"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:      <target dev="tap9df81ac8-a3"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    </interface>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 02:50:28 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61/console.log" append="off"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    </serial>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <video>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    </video>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 02:50:28 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    </rng>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 02:50:28 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 02:50:28 np0005539552 nova_compute[233724]:  </devices>
Nov 29 02:50:28 np0005539552 nova_compute[233724]: </domain>
Nov 29 02:50:28 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:50:28 np0005539552 nova_compute[233724]: 2025-11-29 07:50:28.173 233728 DEBUG nova.compute.manager [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Preparing to wait for external event network-vif-plugged-9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 02:50:28 np0005539552 nova_compute[233724]: 2025-11-29 07:50:28.173 233728 DEBUG oslo_concurrency.lockutils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Acquiring lock "b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:28 np0005539552 nova_compute[233724]: 2025-11-29 07:50:28.174 233728 DEBUG oslo_concurrency.lockutils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Lock "b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:28 np0005539552 nova_compute[233724]: 2025-11-29 07:50:28.174 233728 DEBUG oslo_concurrency.lockutils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Lock "b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:28 np0005539552 nova_compute[233724]: 2025-11-29 07:50:28.175 233728 DEBUG nova.virt.libvirt.vif [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:50:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1279273348',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1279273348',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1279273348',id=9,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNs4qJju/LAwBpfwNEkpAniOUgsZvC5AJbe2gxJ9IIbsfIqlPQKB9AiaXItMjFGTGLax2vK2q305Wa2bDc3JMsgTFTOdHEZIjpwTy2cnsiyE7ulw1lLo0Ds1kT20t3frlA==',key_name='tempest-keypair-1679719729',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='06acd02df57149e795d2be57787bb9ed',ramdisk_id='',reservation_id='r-o7bphj1z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-456097839',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-456097839-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:50:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d57d713485e84d19a429533b570c4189',uuid=b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83", "address": "fa:16:3e:b0:0d:c0", "network": {"id": "1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2058609322-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06acd02df57149e795d2be57787bb9ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9df81ac8-a3", "ovs_interfaceid": "9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:50:28 np0005539552 nova_compute[233724]: 2025-11-29 07:50:28.175 233728 DEBUG nova.network.os_vif_util [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Converting VIF {"id": "9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83", "address": "fa:16:3e:b0:0d:c0", "network": {"id": "1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2058609322-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06acd02df57149e795d2be57787bb9ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9df81ac8-a3", "ovs_interfaceid": "9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:50:28 np0005539552 nova_compute[233724]: 2025-11-29 07:50:28.176 233728 DEBUG nova.network.os_vif_util [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:0d:c0,bridge_name='br-int',has_traffic_filtering=True,id=9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83,network=Network(1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9df81ac8-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:50:28 np0005539552 nova_compute[233724]: 2025-11-29 07:50:28.177 233728 DEBUG os_vif [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:0d:c0,bridge_name='br-int',has_traffic_filtering=True,id=9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83,network=Network(1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9df81ac8-a3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:50:28 np0005539552 nova_compute[233724]: 2025-11-29 07:50:28.180 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:28 np0005539552 nova_compute[233724]: 2025-11-29 07:50:28.180 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:50:28 np0005539552 nova_compute[233724]: 2025-11-29 07:50:28.181 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:50:28 np0005539552 nova_compute[233724]: 2025-11-29 07:50:28.184 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:28 np0005539552 nova_compute[233724]: 2025-11-29 07:50:28.184 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9df81ac8-a3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:50:28 np0005539552 nova_compute[233724]: 2025-11-29 07:50:28.185 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9df81ac8-a3, col_values=(('external_ids', {'iface-id': '9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b0:0d:c0', 'vm-uuid': 'b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:50:28 np0005539552 nova_compute[233724]: 2025-11-29 07:50:28.186 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:28 np0005539552 NetworkManager[48926]: <info>  [1764402628.1871] manager: (tap9df81ac8-a3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Nov 29 02:50:28 np0005539552 nova_compute[233724]: 2025-11-29 07:50:28.189 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:50:28 np0005539552 nova_compute[233724]: 2025-11-29 07:50:28.192 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:28 np0005539552 nova_compute[233724]: 2025-11-29 07:50:28.193 233728 INFO os_vif [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:0d:c0,bridge_name='br-int',has_traffic_filtering=True,id=9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83,network=Network(1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9df81ac8-a3')
Nov 29 02:50:28 np0005539552 nova_compute[233724]: 2025-11-29 07:50:28.247 233728 DEBUG nova.virt.libvirt.driver [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 02:50:28 np0005539552 nova_compute[233724]: 2025-11-29 07:50:28.247 233728 DEBUG nova.virt.libvirt.driver [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 02:50:28 np0005539552 nova_compute[233724]: 2025-11-29 07:50:28.248 233728 DEBUG nova.virt.libvirt.driver [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 02:50:28 np0005539552 nova_compute[233724]: 2025-11-29 07:50:28.248 233728 DEBUG nova.virt.libvirt.driver [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] No VIF found with MAC fa:16:3e:b0:0d:c0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 02:50:28 np0005539552 nova_compute[233724]: 2025-11-29 07:50:28.248 233728 INFO nova.virt.libvirt.driver [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Using config drive
Nov 29 02:50:28 np0005539552 nova_compute[233724]: 2025-11-29 07:50:28.273 233728 DEBUG nova.storage.rbd_utils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] rbd image b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:50:28 np0005539552 nova_compute[233724]: 2025-11-29 07:50:28.583 233728 DEBUG nova.network.neutron [req-1290e696-0011-477e-9997-3580689d579e req-9c47f15f-9551-4a38-bcf0-dec4fd2b7cfa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Updated VIF entry in instance network info cache for port 9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 02:50:28 np0005539552 nova_compute[233724]: 2025-11-29 07:50:28.583 233728 DEBUG nova.network.neutron [req-1290e696-0011-477e-9997-3580689d579e req-9c47f15f-9551-4a38-bcf0-dec4fd2b7cfa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Updating instance_info_cache with network_info: [{"id": "9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83", "address": "fa:16:3e:b0:0d:c0", "network": {"id": "1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2058609322-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06acd02df57149e795d2be57787bb9ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9df81ac8-a3", "ovs_interfaceid": "9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:50:28 np0005539552 nova_compute[233724]: 2025-11-29 07:50:28.597 233728 DEBUG oslo_concurrency.lockutils [req-1290e696-0011-477e-9997-3580689d579e req-9c47f15f-9551-4a38-bcf0-dec4fd2b7cfa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 02:50:29 np0005539552 nova_compute[233724]: 2025-11-29 07:50:29.066 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:50:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:29.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:29 np0005539552 nova_compute[233724]: 2025-11-29 07:50:29.312 233728 INFO nova.virt.libvirt.driver [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Creating config drive at /var/lib/nova/instances/b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61/disk.config
Nov 29 02:50:29 np0005539552 nova_compute[233724]: 2025-11-29 07:50:29.318 233728 DEBUG oslo_concurrency.processutils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb44xcj1g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:50:29 np0005539552 nova_compute[233724]: 2025-11-29 07:50:29.444 233728 DEBUG oslo_concurrency.processutils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb44xcj1g" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:50:29 np0005539552 nova_compute[233724]: 2025-11-29 07:50:29.474 233728 DEBUG nova.storage.rbd_utils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] rbd image b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:50:29 np0005539552 nova_compute[233724]: 2025-11-29 07:50:29.477 233728 DEBUG oslo_concurrency.processutils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61/disk.config b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:50:29 np0005539552 nova_compute[233724]: 2025-11-29 07:50:29.636 233728 DEBUG oslo_concurrency.processutils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61/disk.config b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:50:29 np0005539552 nova_compute[233724]: 2025-11-29 07:50:29.637 233728 INFO nova.virt.libvirt.driver [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Deleting local config drive /var/lib/nova/instances/b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61/disk.config because it was imported into RBD.
Nov 29 02:50:29 np0005539552 kernel: tap9df81ac8-a3: entered promiscuous mode
Nov 29 02:50:29 np0005539552 nova_compute[233724]: 2025-11-29 07:50:29.690 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:50:29 np0005539552 NetworkManager[48926]: <info>  [1764402629.6915] manager: (tap9df81ac8-a3): new Tun device (/org/freedesktop/NetworkManager/Devices/28)
Nov 29 02:50:29 np0005539552 ovn_controller[133798]: 2025-11-29T07:50:29Z|00032|binding|INFO|Claiming lport 9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83 for this chassis.
Nov 29 02:50:29 np0005539552 ovn_controller[133798]: 2025-11-29T07:50:29Z|00033|binding|INFO|9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83: Claiming fa:16:3e:b0:0d:c0 10.100.0.9
Nov 29 02:50:29 np0005539552 nova_compute[233724]: 2025-11-29 07:50:29.697 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:50:29 np0005539552 nova_compute[233724]: 2025-11-29 07:50:29.699 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:50:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:29.707 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:0d:c0 10.100.0.9'], port_security=['fa:16:3e:b0:0d:c0 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '06acd02df57149e795d2be57787bb9ed', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ea30fb5a-0cff-4126-aaa9-22a68d1fc7db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=157c85d0-7492-4bd9-b8bf-525034a08746, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 02:50:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:29.708 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83 in datapath 1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4 bound to our chassis
Nov 29 02:50:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:29.710 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4
Nov 29 02:50:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:29.721 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[049b6d39-e93b-4520-93e0-f89bbd2667c1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:50:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:29.722 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1968b9ed-31 in ovnmeta-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 02:50:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:29.724 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1968b9ed-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 02:50:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:29.724 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[74fc2c62-c30f-4b82-a5cd-4035b24718e5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:50:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:29.725 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d81a139c-edd2-405c-841b-f56e8112c132]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:50:29 np0005539552 systemd-machined[196379]: New machine qemu-2-instance-00000009.
Nov 29 02:50:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:29.745 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[05cffcad-3f79-429b-9eb4-0f0c85ad3876]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:50:29 np0005539552 systemd[1]: Started Virtual Machine qemu-2-instance-00000009.
Nov 29 02:50:29 np0005539552 systemd-udevd[240304]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:50:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:29.762 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[9d2494ab-88a7-48e0-8102-f05f3632cbfa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:50:29 np0005539552 ovn_controller[133798]: 2025-11-29T07:50:29Z|00034|binding|INFO|Setting lport 9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83 ovn-installed in OVS
Nov 29 02:50:29 np0005539552 ovn_controller[133798]: 2025-11-29T07:50:29Z|00035|binding|INFO|Setting lport 9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83 up in Southbound
Nov 29 02:50:29 np0005539552 nova_compute[233724]: 2025-11-29 07:50:29.769 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:50:29 np0005539552 NetworkManager[48926]: <info>  [1764402629.7763] device (tap9df81ac8-a3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:50:29 np0005539552 NetworkManager[48926]: <info>  [1764402629.7777] device (tap9df81ac8-a3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:50:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:29.795 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[926acd91-0957-4afa-8a65-6e36ea763072]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:50:29 np0005539552 systemd-udevd[240310]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:50:29 np0005539552 NetworkManager[48926]: <info>  [1764402629.8032] manager: (tap1968b9ed-30): new Veth device (/org/freedesktop/NetworkManager/Devices/29)
Nov 29 02:50:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:29.802 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[5e3fd07b-a307-46bb-8bd9-d124f6cbffc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:50:29 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:50:29 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:50:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:29.840 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[c8cb8dca-51b3-4ca3-88b8-8add8e786f82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:50:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:29.843 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[f6ba9a3e-b5bd-48dc-9bdd-96953b3321f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:50:29 np0005539552 NetworkManager[48926]: <info>  [1764402629.8685] device (tap1968b9ed-30): carrier: link connected
Nov 29 02:50:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:29.874 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[38b870df-940d-4b9a-8768-0aeca6193dc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:50:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:29.894 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b6fa5731-47e0-4809-8595-36de77e1700a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1968b9ed-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ce:e5:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 567555, 'reachable_time': 44669, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240334, 'error': None, 'target': 'ovnmeta-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:50:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:29.912 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2db85657-af69-4f2a-b910-c695160358bb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fece:e5cd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 567555, 'tstamp': 567555}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240335, 'error': None, 'target': 'ovnmeta-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:50:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:29.926 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6822ef6e-0616-4e40-a1bd-96771eda6988]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1968b9ed-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ce:e5:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 567555, 'reachable_time': 44669, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 240336, 'error': None, 'target': 'ovnmeta-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:50:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:29.952 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b8b672f8-6431-46b7-9c33-28648d2a1491]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:50:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:29.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:30.006 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[8724b4a3-9c03-4e65-a964-64435933fdb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:50:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:30.007 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1968b9ed-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 02:50:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:30.008 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 02:50:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:30.008 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1968b9ed-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 02:50:30 np0005539552 kernel: tap1968b9ed-30: entered promiscuous mode
Nov 29 02:50:30 np0005539552 nova_compute[233724]: 2025-11-29 07:50:30.010 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:50:30 np0005539552 nova_compute[233724]: 2025-11-29 07:50:30.012 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:50:30 np0005539552 NetworkManager[48926]: <info>  [1764402630.0128] manager: (tap1968b9ed-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Nov 29 02:50:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:30.013 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1968b9ed-30, col_values=(('external_ids', {'iface-id': '0a5d766f-ec84-47c0-9590-72934ab05c0c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 02:50:30 np0005539552 nova_compute[233724]: 2025-11-29 07:50:30.014 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:50:30 np0005539552 ovn_controller[133798]: 2025-11-29T07:50:30Z|00036|binding|INFO|Releasing lport 0a5d766f-ec84-47c0-9590-72934ab05c0c from this chassis (sb_readonly=0)
Nov 29 02:50:30 np0005539552 nova_compute[233724]: 2025-11-29 07:50:30.031 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:50:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:30.032 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 02:50:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:30.033 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[916d4006-1b7f-4467-af2b-07c01e785798]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:50:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:30.033 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 02:50:30 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 02:50:30 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 02:50:30 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4
Nov 29 02:50:30 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 02:50:30 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 02:50:30 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 02:50:30 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4.pid.haproxy
Nov 29 02:50:30 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 02:50:30 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 02:50:30 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 02:50:30 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 02:50:30 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 02:50:30 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 02:50:30 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 02:50:30 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 02:50:30 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 02:50:30 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 02:50:30 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 02:50:30 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 02:50:30 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 02:50:30 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 02:50:30 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 02:50:30 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 02:50:30 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 02:50:30 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 02:50:30 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 02:50:30 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 02:50:30 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4
Nov 29 02:50:30 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 02:50:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:30.034 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4', 'env', 'PROCESS_TAG=haproxy-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 02:50:30 np0005539552 podman[240417]: 2025-11-29 07:50:30.411254232 +0000 UTC m=+0.046474631 container create f45162a592f09b21e4071b5f92612fd8366e7e70e497730ee33caf4ba4286a5f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 02:50:30 np0005539552 systemd[1]: Started libpod-conmon-f45162a592f09b21e4071b5f92612fd8366e7e70e497730ee33caf4ba4286a5f.scope.
Nov 29 02:50:30 np0005539552 systemd[1]: Started libcrun container.
Nov 29 02:50:30 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f345e87154b92b463cb415c57066060e8d772b787b306a2b0b6999d88f8f9488/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:50:30 np0005539552 podman[240417]: 2025-11-29 07:50:30.383962662 +0000 UTC m=+0.019183091 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:50:30 np0005539552 podman[240417]: 2025-11-29 07:50:30.488442304 +0000 UTC m=+0.123662733 container init f45162a592f09b21e4071b5f92612fd8366e7e70e497730ee33caf4ba4286a5f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Nov 29 02:50:30 np0005539552 podman[240417]: 2025-11-29 07:50:30.494453157 +0000 UTC m=+0.129673566 container start f45162a592f09b21e4071b5f92612fd8366e7e70e497730ee33caf4ba4286a5f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:50:30 np0005539552 neutron-haproxy-ovnmeta-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4[240432]: [NOTICE]   (240436) : New worker (240438) forked
Nov 29 02:50:30 np0005539552 neutron-haproxy-ovnmeta-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4[240432]: [NOTICE]   (240436) : Loading success.
Nov 29 02:50:30 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:31.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:31 np0005539552 nova_compute[233724]: 2025-11-29 07:50:31.918 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764402631.9179556, b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:50:31 np0005539552 nova_compute[233724]: 2025-11-29 07:50:31.918 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] VM Started (Lifecycle Event)#033[00m
Nov 29 02:50:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:31.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:32 np0005539552 nova_compute[233724]: 2025-11-29 07:50:32.277 233728 DEBUG nova.compute.manager [req-3b44dc76-3f8e-4c61-920a-b328c0bec46e req-3358cac5-e876-4972-8470-ece6b8b6d144 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Received event network-vif-plugged-9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:50:32 np0005539552 nova_compute[233724]: 2025-11-29 07:50:32.278 233728 DEBUG oslo_concurrency.lockutils [req-3b44dc76-3f8e-4c61-920a-b328c0bec46e req-3358cac5-e876-4972-8470-ece6b8b6d144 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:32 np0005539552 nova_compute[233724]: 2025-11-29 07:50:32.278 233728 DEBUG oslo_concurrency.lockutils [req-3b44dc76-3f8e-4c61-920a-b328c0bec46e req-3358cac5-e876-4972-8470-ece6b8b6d144 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:32 np0005539552 nova_compute[233724]: 2025-11-29 07:50:32.278 233728 DEBUG oslo_concurrency.lockutils [req-3b44dc76-3f8e-4c61-920a-b328c0bec46e req-3358cac5-e876-4972-8470-ece6b8b6d144 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:32 np0005539552 nova_compute[233724]: 2025-11-29 07:50:32.279 233728 DEBUG nova.compute.manager [req-3b44dc76-3f8e-4c61-920a-b328c0bec46e req-3358cac5-e876-4972-8470-ece6b8b6d144 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Processing event network-vif-plugged-9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 02:50:32 np0005539552 nova_compute[233724]: 2025-11-29 07:50:32.280 233728 DEBUG nova.compute.manager [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:50:32 np0005539552 nova_compute[233724]: 2025-11-29 07:50:32.290 233728 DEBUG nova.virt.libvirt.driver [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:50:32 np0005539552 nova_compute[233724]: 2025-11-29 07:50:32.292 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:50:32 np0005539552 nova_compute[233724]: 2025-11-29 07:50:32.294 233728 INFO nova.virt.libvirt.driver [-] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Instance spawned successfully.#033[00m
Nov 29 02:50:32 np0005539552 nova_compute[233724]: 2025-11-29 07:50:32.295 233728 DEBUG nova.virt.libvirt.driver [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:50:32 np0005539552 nova_compute[233724]: 2025-11-29 07:50:32.298 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:50:32 np0005539552 nova_compute[233724]: 2025-11-29 07:50:32.326 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:50:32 np0005539552 nova_compute[233724]: 2025-11-29 07:50:32.326 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764402631.920768, b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:50:32 np0005539552 nova_compute[233724]: 2025-11-29 07:50:32.327 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] VM Paused (Lifecycle Event)#033[00m
Nov 29 02:50:32 np0005539552 nova_compute[233724]: 2025-11-29 07:50:32.333 233728 DEBUG nova.virt.libvirt.driver [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:50:32 np0005539552 nova_compute[233724]: 2025-11-29 07:50:32.333 233728 DEBUG nova.virt.libvirt.driver [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:50:32 np0005539552 nova_compute[233724]: 2025-11-29 07:50:32.334 233728 DEBUG nova.virt.libvirt.driver [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:50:32 np0005539552 nova_compute[233724]: 2025-11-29 07:50:32.334 233728 DEBUG nova.virt.libvirt.driver [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:50:32 np0005539552 nova_compute[233724]: 2025-11-29 07:50:32.335 233728 DEBUG nova.virt.libvirt.driver [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:50:32 np0005539552 nova_compute[233724]: 2025-11-29 07:50:32.335 233728 DEBUG nova.virt.libvirt.driver [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:50:32 np0005539552 nova_compute[233724]: 2025-11-29 07:50:32.360 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:50:32 np0005539552 nova_compute[233724]: 2025-11-29 07:50:32.364 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764402632.2850587, b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:50:32 np0005539552 nova_compute[233724]: 2025-11-29 07:50:32.364 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:50:32 np0005539552 nova_compute[233724]: 2025-11-29 07:50:32.680 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:50:32 np0005539552 nova_compute[233724]: 2025-11-29 07:50:32.683 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:50:32 np0005539552 nova_compute[233724]: 2025-11-29 07:50:32.706 233728 INFO nova.compute.manager [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Took 9.18 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 02:50:32 np0005539552 nova_compute[233724]: 2025-11-29 07:50:32.707 233728 DEBUG nova.compute.manager [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:50:32 np0005539552 nova_compute[233724]: 2025-11-29 07:50:32.707 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:50:32 np0005539552 nova_compute[233724]: 2025-11-29 07:50:32.865 233728 INFO nova.compute.manager [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Took 10.33 seconds to build instance.#033[00m
Nov 29 02:50:33 np0005539552 nova_compute[233724]: 2025-11-29 07:50:33.017 233728 DEBUG oslo_concurrency.lockutils [None req-53b7cd51-8a95-42a2-9fe1-c6f3827b9d06 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Lock "b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:33 np0005539552 nova_compute[233724]: 2025-11-29 07:50:33.186 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:33.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:33.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:34 np0005539552 nova_compute[233724]: 2025-11-29 07:50:34.068 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:34 np0005539552 nova_compute[233724]: 2025-11-29 07:50:34.418 233728 DEBUG nova.compute.manager [req-5e93a52f-80ef-447f-ace3-99c40b0a7705 req-9e623c35-dd47-43b9-86a3-16148c57685e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Received event network-vif-plugged-9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:50:34 np0005539552 nova_compute[233724]: 2025-11-29 07:50:34.418 233728 DEBUG oslo_concurrency.lockutils [req-5e93a52f-80ef-447f-ace3-99c40b0a7705 req-9e623c35-dd47-43b9-86a3-16148c57685e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:34 np0005539552 nova_compute[233724]: 2025-11-29 07:50:34.419 233728 DEBUG oslo_concurrency.lockutils [req-5e93a52f-80ef-447f-ace3-99c40b0a7705 req-9e623c35-dd47-43b9-86a3-16148c57685e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:34 np0005539552 nova_compute[233724]: 2025-11-29 07:50:34.419 233728 DEBUG oslo_concurrency.lockutils [req-5e93a52f-80ef-447f-ace3-99c40b0a7705 req-9e623c35-dd47-43b9-86a3-16148c57685e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:34 np0005539552 nova_compute[233724]: 2025-11-29 07:50:34.419 233728 DEBUG nova.compute.manager [req-5e93a52f-80ef-447f-ace3-99c40b0a7705 req-9e623c35-dd47-43b9-86a3-16148c57685e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] No waiting events found dispatching network-vif-plugged-9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:50:34 np0005539552 nova_compute[233724]: 2025-11-29 07:50:34.420 233728 WARNING nova.compute.manager [req-5e93a52f-80ef-447f-ace3-99c40b0a7705 req-9e623c35-dd47-43b9-86a3-16148c57685e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Received unexpected event network-vif-plugged-9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83 for instance with vm_state active and task_state None.#033[00m
Nov 29 02:50:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:35.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:35 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:35 np0005539552 NetworkManager[48926]: <info>  [1764402635.8121] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/31)
Nov 29 02:50:35 np0005539552 NetworkManager[48926]: <info>  [1764402635.8126] device (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:50:35 np0005539552 NetworkManager[48926]: <info>  [1764402635.8135] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/32)
Nov 29 02:50:35 np0005539552 NetworkManager[48926]: <info>  [1764402635.8139] device (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:50:35 np0005539552 NetworkManager[48926]: <info>  [1764402635.8147] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Nov 29 02:50:35 np0005539552 NetworkManager[48926]: <info>  [1764402635.8152] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Nov 29 02:50:35 np0005539552 NetworkManager[48926]: <info>  [1764402635.8165] device (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 29 02:50:35 np0005539552 NetworkManager[48926]: <info>  [1764402635.8168] device (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 29 02:50:35 np0005539552 nova_compute[233724]: 2025-11-29 07:50:35.817 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:35.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:36 np0005539552 nova_compute[233724]: 2025-11-29 07:50:36.032 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:36 np0005539552 ovn_controller[133798]: 2025-11-29T07:50:36Z|00037|binding|INFO|Releasing lport c5a666a5-4b3e-4d4d-821a-ea0f64e84c84 from this chassis (sb_readonly=0)
Nov 29 02:50:36 np0005539552 ovn_controller[133798]: 2025-11-29T07:50:36Z|00038|binding|INFO|Releasing lport 0a5d766f-ec84-47c0-9590-72934ab05c0c from this chassis (sb_readonly=0)
Nov 29 02:50:36 np0005539552 nova_compute[233724]: 2025-11-29 07:50:36.063 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:36 np0005539552 nova_compute[233724]: 2025-11-29 07:50:36.620 233728 DEBUG nova.compute.manager [req-073d6d73-b58b-497b-bd94-6e7e1ececc3f req-81079820-cb2f-48e5-ac65-bd777a36f879 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Received event network-changed-9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:50:36 np0005539552 nova_compute[233724]: 2025-11-29 07:50:36.620 233728 DEBUG nova.compute.manager [req-073d6d73-b58b-497b-bd94-6e7e1ececc3f req-81079820-cb2f-48e5-ac65-bd777a36f879 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Refreshing instance network info cache due to event network-changed-9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:50:36 np0005539552 nova_compute[233724]: 2025-11-29 07:50:36.620 233728 DEBUG oslo_concurrency.lockutils [req-073d6d73-b58b-497b-bd94-6e7e1ececc3f req-81079820-cb2f-48e5-ac65-bd777a36f879 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:50:36 np0005539552 nova_compute[233724]: 2025-11-29 07:50:36.620 233728 DEBUG oslo_concurrency.lockutils [req-073d6d73-b58b-497b-bd94-6e7e1ececc3f req-81079820-cb2f-48e5-ac65-bd777a36f879 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:50:36 np0005539552 nova_compute[233724]: 2025-11-29 07:50:36.621 233728 DEBUG nova.network.neutron [req-073d6d73-b58b-497b-bd94-6e7e1ececc3f req-81079820-cb2f-48e5-ac65-bd777a36f879 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Refreshing network info cache for port 9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:50:36 np0005539552 ovn_controller[133798]: 2025-11-29T07:50:36Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ad:42:7a 10.1.0.82
Nov 29 02:50:36 np0005539552 ovn_controller[133798]: 2025-11-29T07:50:36Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ad:42:7a 10.1.0.82
Nov 29 02:50:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:37.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:37.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:38 np0005539552 nova_compute[233724]: 2025-11-29 07:50:38.187 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:38 np0005539552 nova_compute[233724]: 2025-11-29 07:50:38.653 233728 DEBUG nova.network.neutron [req-073d6d73-b58b-497b-bd94-6e7e1ececc3f req-81079820-cb2f-48e5-ac65-bd777a36f879 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Updated VIF entry in instance network info cache for port 9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:50:38 np0005539552 nova_compute[233724]: 2025-11-29 07:50:38.653 233728 DEBUG nova.network.neutron [req-073d6d73-b58b-497b-bd94-6e7e1ececc3f req-81079820-cb2f-48e5-ac65-bd777a36f879 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Updating instance_info_cache with network_info: [{"id": "9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83", "address": "fa:16:3e:b0:0d:c0", "network": {"id": "1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2058609322-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06acd02df57149e795d2be57787bb9ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9df81ac8-a3", "ovs_interfaceid": "9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:50:38 np0005539552 nova_compute[233724]: 2025-11-29 07:50:38.675 233728 DEBUG oslo_concurrency.lockutils [None req-38def009-aad2-4916-9e71-d331ba0a4331 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Acquiring lock "2519f959-bfbf-49b6-b80d-eff80129064b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:38 np0005539552 nova_compute[233724]: 2025-11-29 07:50:38.676 233728 DEBUG oslo_concurrency.lockutils [None req-38def009-aad2-4916-9e71-d331ba0a4331 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "2519f959-bfbf-49b6-b80d-eff80129064b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:38 np0005539552 nova_compute[233724]: 2025-11-29 07:50:38.676 233728 DEBUG oslo_concurrency.lockutils [None req-38def009-aad2-4916-9e71-d331ba0a4331 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Acquiring lock "2519f959-bfbf-49b6-b80d-eff80129064b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:38 np0005539552 nova_compute[233724]: 2025-11-29 07:50:38.676 233728 DEBUG oslo_concurrency.lockutils [None req-38def009-aad2-4916-9e71-d331ba0a4331 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "2519f959-bfbf-49b6-b80d-eff80129064b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:38 np0005539552 nova_compute[233724]: 2025-11-29 07:50:38.676 233728 DEBUG oslo_concurrency.lockutils [None req-38def009-aad2-4916-9e71-d331ba0a4331 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "2519f959-bfbf-49b6-b80d-eff80129064b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:38 np0005539552 nova_compute[233724]: 2025-11-29 07:50:38.677 233728 INFO nova.compute.manager [None req-38def009-aad2-4916-9e71-d331ba0a4331 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Terminating instance#033[00m
Nov 29 02:50:38 np0005539552 nova_compute[233724]: 2025-11-29 07:50:38.678 233728 DEBUG nova.compute.manager [None req-38def009-aad2-4916-9e71-d331ba0a4331 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:50:38 np0005539552 nova_compute[233724]: 2025-11-29 07:50:38.679 233728 DEBUG oslo_concurrency.lockutils [req-073d6d73-b58b-497b-bd94-6e7e1ececc3f req-81079820-cb2f-48e5-ac65-bd777a36f879 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:50:38 np0005539552 kernel: tapfc147d63-f6 (unregistering): left promiscuous mode
Nov 29 02:50:38 np0005539552 NetworkManager[48926]: <info>  [1764402638.8229] device (tapfc147d63-f6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:50:38 np0005539552 nova_compute[233724]: 2025-11-29 07:50:38.822 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:38 np0005539552 ovn_controller[133798]: 2025-11-29T07:50:38Z|00039|binding|INFO|Releasing lport fc147d63-f6e7-4928-8991-4204b1b015bc from this chassis (sb_readonly=0)
Nov 29 02:50:38 np0005539552 ovn_controller[133798]: 2025-11-29T07:50:38Z|00040|binding|INFO|Setting lport fc147d63-f6e7-4928-8991-4204b1b015bc down in Southbound
Nov 29 02:50:38 np0005539552 ovn_controller[133798]: 2025-11-29T07:50:38Z|00041|binding|INFO|Removing iface tapfc147d63-f6 ovn-installed in OVS
Nov 29 02:50:38 np0005539552 nova_compute[233724]: 2025-11-29 07:50:38.827 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:38.837 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:42:7a 10.1.0.82 fdfe:381f:8400:1::357'], port_security=['fa:16:3e:ad:42:7a 10.1.0.82 fdfe:381f:8400:1::357'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.82/26 fdfe:381f:8400:1::357/64', 'neutron:device_id': '2519f959-bfbf-49b6-b80d-eff80129064b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01d0d21b-eaad-4f5d-82d1-0f4d31e80363', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3c7cd563ba394223a76bd2579800406c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd7540713-07cb-41c9-9bad-f36175f21356', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d42f5ec8-3ffe-4c03-bf87-380969e1ba25, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=fc147d63-f6e7-4928-8991-4204b1b015bc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:50:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:38.838 143400 INFO neutron.agent.ovn.metadata.agent [-] Port fc147d63-f6e7-4928-8991-4204b1b015bc in datapath 01d0d21b-eaad-4f5d-82d1-0f4d31e80363 unbound from our chassis#033[00m
Nov 29 02:50:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:38.840 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 01d0d21b-eaad-4f5d-82d1-0f4d31e80363, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:50:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:38.840 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[442e23f8-2562-422e-9f7d-11efe2b58ef3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:38.842 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363 namespace which is not needed anymore#033[00m
Nov 29 02:50:38 np0005539552 nova_compute[233724]: 2025-11-29 07:50:38.855 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:38 np0005539552 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000004.scope: Deactivated successfully.
Nov 29 02:50:38 np0005539552 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000004.scope: Consumed 14.784s CPU time.
Nov 29 02:50:38 np0005539552 systemd-machined[196379]: Machine qemu-1-instance-00000004 terminated.
Nov 29 02:50:38 np0005539552 kernel: tapfc147d63-f6: entered promiscuous mode
Nov 29 02:50:38 np0005539552 ovn_controller[133798]: 2025-11-29T07:50:38Z|00042|binding|INFO|Claiming lport fc147d63-f6e7-4928-8991-4204b1b015bc for this chassis.
Nov 29 02:50:38 np0005539552 ovn_controller[133798]: 2025-11-29T07:50:38Z|00043|binding|INFO|fc147d63-f6e7-4928-8991-4204b1b015bc: Claiming fa:16:3e:ad:42:7a 10.1.0.82 fdfe:381f:8400:1::357
Nov 29 02:50:38 np0005539552 nova_compute[233724]: 2025-11-29 07:50:38.897 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:38 np0005539552 NetworkManager[48926]: <info>  [1764402638.8987] manager: (tapfc147d63-f6): new Tun device (/org/freedesktop/NetworkManager/Devices/35)
Nov 29 02:50:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:38.905 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:42:7a 10.1.0.82 fdfe:381f:8400:1::357'], port_security=['fa:16:3e:ad:42:7a 10.1.0.82 fdfe:381f:8400:1::357'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.82/26 fdfe:381f:8400:1::357/64', 'neutron:device_id': '2519f959-bfbf-49b6-b80d-eff80129064b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01d0d21b-eaad-4f5d-82d1-0f4d31e80363', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3c7cd563ba394223a76bd2579800406c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd7540713-07cb-41c9-9bad-f36175f21356', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d42f5ec8-3ffe-4c03-bf87-380969e1ba25, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=fc147d63-f6e7-4928-8991-4204b1b015bc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:50:38 np0005539552 kernel: tapfc147d63-f6 (unregistering): left promiscuous mode
Nov 29 02:50:38 np0005539552 ovn_controller[133798]: 2025-11-29T07:50:38Z|00044|binding|INFO|Setting lport fc147d63-f6e7-4928-8991-4204b1b015bc ovn-installed in OVS
Nov 29 02:50:38 np0005539552 nova_compute[233724]: 2025-11-29 07:50:38.919 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:38 np0005539552 ovn_controller[133798]: 2025-11-29T07:50:38Z|00045|binding|INFO|Setting lport fc147d63-f6e7-4928-8991-4204b1b015bc up in Southbound
Nov 29 02:50:38 np0005539552 ovn_controller[133798]: 2025-11-29T07:50:38Z|00046|binding|INFO|Releasing lport fc147d63-f6e7-4928-8991-4204b1b015bc from this chassis (sb_readonly=1)
Nov 29 02:50:38 np0005539552 ovn_controller[133798]: 2025-11-29T07:50:38Z|00047|if_status|INFO|Not setting lport fc147d63-f6e7-4928-8991-4204b1b015bc down as sb is readonly
Nov 29 02:50:38 np0005539552 nova_compute[233724]: 2025-11-29 07:50:38.920 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:38 np0005539552 ovn_controller[133798]: 2025-11-29T07:50:38Z|00048|binding|INFO|Removing iface tapfc147d63-f6 ovn-installed in OVS
Nov 29 02:50:38 np0005539552 ovn_controller[133798]: 2025-11-29T07:50:38Z|00049|binding|INFO|Releasing lport fc147d63-f6e7-4928-8991-4204b1b015bc from this chassis (sb_readonly=0)
Nov 29 02:50:38 np0005539552 nova_compute[233724]: 2025-11-29 07:50:38.928 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:38 np0005539552 ovn_controller[133798]: 2025-11-29T07:50:38Z|00050|binding|INFO|Setting lport fc147d63-f6e7-4928-8991-4204b1b015bc down in Southbound
Nov 29 02:50:38 np0005539552 nova_compute[233724]: 2025-11-29 07:50:38.931 233728 INFO nova.virt.libvirt.driver [-] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Instance destroyed successfully.#033[00m
Nov 29 02:50:38 np0005539552 nova_compute[233724]: 2025-11-29 07:50:38.931 233728 DEBUG nova.objects.instance [None req-38def009-aad2-4916-9e71-d331ba0a4331 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lazy-loading 'resources' on Instance uuid 2519f959-bfbf-49b6-b80d-eff80129064b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:50:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 02:50:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3684881614' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 02:50:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 02:50:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3684881614' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 02:50:38 np0005539552 nova_compute[233724]: 2025-11-29 07:50:38.942 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:38.965 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:42:7a 10.1.0.82 fdfe:381f:8400:1::357'], port_security=['fa:16:3e:ad:42:7a 10.1.0.82 fdfe:381f:8400:1::357'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.82/26 fdfe:381f:8400:1::357/64', 'neutron:device_id': '2519f959-bfbf-49b6-b80d-eff80129064b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01d0d21b-eaad-4f5d-82d1-0f4d31e80363', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3c7cd563ba394223a76bd2579800406c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd7540713-07cb-41c9-9bad-f36175f21356', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d42f5ec8-3ffe-4c03-bf87-380969e1ba25, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=fc147d63-f6e7-4928-8991-4204b1b015bc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:50:38 np0005539552 nova_compute[233724]: 2025-11-29 07:50:38.974 233728 DEBUG nova.virt.libvirt.vif [None req-38def009-aad2-4916-9e71-d331ba0a4331 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:49:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-497379200-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-497379200-2',id=4,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-11-29T07:50:22Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3c7cd563ba394223a76bd2579800406c',ramdisk_id='',reservation_id='r-k1o602if',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AutoAllocateNetworkTest-1372302389',owner_user_name='tempest-AutoAllocateNetworkTest-1372302389-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:50:22Z,user_data=None,user_id='01739124bee74c899af6384f8ec2d427',uuid=2519f959-bfbf-49b6-b80d-eff80129064b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fc147d63-f6e7-4928-8991-4204b1b015bc", "address": "fa:16:3e:ad:42:7a", "network": {"id": "01d0d21b-eaad-4f5d-82d1-0f4d31e80363", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::357", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.82", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c7cd563ba394223a76bd2579800406c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc147d63-f6", "ovs_interfaceid": "fc147d63-f6e7-4928-8991-4204b1b015bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:50:38 np0005539552 nova_compute[233724]: 2025-11-29 07:50:38.975 233728 DEBUG nova.network.os_vif_util [None req-38def009-aad2-4916-9e71-d331ba0a4331 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Converting VIF {"id": "fc147d63-f6e7-4928-8991-4204b1b015bc", "address": "fa:16:3e:ad:42:7a", "network": {"id": "01d0d21b-eaad-4f5d-82d1-0f4d31e80363", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::357", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.82", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3c7cd563ba394223a76bd2579800406c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc147d63-f6", "ovs_interfaceid": "fc147d63-f6e7-4928-8991-4204b1b015bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:50:38 np0005539552 nova_compute[233724]: 2025-11-29 07:50:38.975 233728 DEBUG nova.network.os_vif_util [None req-38def009-aad2-4916-9e71-d331ba0a4331 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:42:7a,bridge_name='br-int',has_traffic_filtering=True,id=fc147d63-f6e7-4928-8991-4204b1b015bc,network=Network(01d0d21b-eaad-4f5d-82d1-0f4d31e80363),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc147d63-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:50:38 np0005539552 nova_compute[233724]: 2025-11-29 07:50:38.976 233728 DEBUG os_vif [None req-38def009-aad2-4916-9e71-d331ba0a4331 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:42:7a,bridge_name='br-int',has_traffic_filtering=True,id=fc147d63-f6e7-4928-8991-4204b1b015bc,network=Network(01d0d21b-eaad-4f5d-82d1-0f4d31e80363),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc147d63-f6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:50:38 np0005539552 nova_compute[233724]: 2025-11-29 07:50:38.978 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:38 np0005539552 nova_compute[233724]: 2025-11-29 07:50:38.979 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc147d63-f6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:50:38 np0005539552 nova_compute[233724]: 2025-11-29 07:50:38.980 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:38 np0005539552 nova_compute[233724]: 2025-11-29 07:50:38.981 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:38 np0005539552 nova_compute[233724]: 2025-11-29 07:50:38.983 233728 INFO os_vif [None req-38def009-aad2-4916-9e71-d331ba0a4331 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:42:7a,bridge_name='br-int',has_traffic_filtering=True,id=fc147d63-f6e7-4928-8991-4204b1b015bc,network=Network(01d0d21b-eaad-4f5d-82d1-0f4d31e80363),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc147d63-f6')#033[00m
Nov 29 02:50:38 np0005539552 neutron-haproxy-ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363[239921]: [NOTICE]   (239926) : haproxy version is 2.8.14-c23fe91
Nov 29 02:50:38 np0005539552 neutron-haproxy-ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363[239921]: [NOTICE]   (239926) : path to executable is /usr/sbin/haproxy
Nov 29 02:50:38 np0005539552 neutron-haproxy-ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363[239921]: [WARNING]  (239926) : Exiting Master process...
Nov 29 02:50:38 np0005539552 neutron-haproxy-ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363[239921]: [ALERT]    (239926) : Current worker (239928) exited with code 143 (Terminated)
Nov 29 02:50:38 np0005539552 neutron-haproxy-ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363[239921]: [WARNING]  (239926) : All workers exited. Exiting... (0)
Nov 29 02:50:38 np0005539552 systemd[1]: libpod-4ad72792ffd21475cf06f0b0b805e3c116879260c91edca97d4de66074330941.scope: Deactivated successfully.
Nov 29 02:50:38 np0005539552 podman[240542]: 2025-11-29 07:50:38.998614575 +0000 UTC m=+0.054890389 container died 4ad72792ffd21475cf06f0b0b805e3c116879260c91edca97d4de66074330941 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 02:50:39 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4ad72792ffd21475cf06f0b0b805e3c116879260c91edca97d4de66074330941-userdata-shm.mount: Deactivated successfully.
Nov 29 02:50:39 np0005539552 systemd[1]: var-lib-containers-storage-overlay-7a0baa2e851f9e1fc94932f804089afefe03e50bef85198f96a9b794acab5b6f-merged.mount: Deactivated successfully.
Nov 29 02:50:39 np0005539552 nova_compute[233724]: 2025-11-29 07:50:39.053 233728 DEBUG nova.compute.manager [req-d90af26c-de6e-4072-a0f8-91c83a2b3caf req-e7ea6d4a-dbd9-4787-b5a9-d988597a77f4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Received event network-vif-unplugged-fc147d63-f6e7-4928-8991-4204b1b015bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:50:39 np0005539552 nova_compute[233724]: 2025-11-29 07:50:39.054 233728 DEBUG oslo_concurrency.lockutils [req-d90af26c-de6e-4072-a0f8-91c83a2b3caf req-e7ea6d4a-dbd9-4787-b5a9-d988597a77f4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "2519f959-bfbf-49b6-b80d-eff80129064b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:39 np0005539552 nova_compute[233724]: 2025-11-29 07:50:39.054 233728 DEBUG oslo_concurrency.lockutils [req-d90af26c-de6e-4072-a0f8-91c83a2b3caf req-e7ea6d4a-dbd9-4787-b5a9-d988597a77f4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2519f959-bfbf-49b6-b80d-eff80129064b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:39 np0005539552 nova_compute[233724]: 2025-11-29 07:50:39.054 233728 DEBUG oslo_concurrency.lockutils [req-d90af26c-de6e-4072-a0f8-91c83a2b3caf req-e7ea6d4a-dbd9-4787-b5a9-d988597a77f4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2519f959-bfbf-49b6-b80d-eff80129064b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:39 np0005539552 nova_compute[233724]: 2025-11-29 07:50:39.055 233728 DEBUG nova.compute.manager [req-d90af26c-de6e-4072-a0f8-91c83a2b3caf req-e7ea6d4a-dbd9-4787-b5a9-d988597a77f4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] No waiting events found dispatching network-vif-unplugged-fc147d63-f6e7-4928-8991-4204b1b015bc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:50:39 np0005539552 nova_compute[233724]: 2025-11-29 07:50:39.055 233728 DEBUG nova.compute.manager [req-d90af26c-de6e-4072-a0f8-91c83a2b3caf req-e7ea6d4a-dbd9-4787-b5a9-d988597a77f4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Received event network-vif-unplugged-fc147d63-f6e7-4928-8991-4204b1b015bc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:50:39 np0005539552 nova_compute[233724]: 2025-11-29 07:50:39.070 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:39 np0005539552 podman[240542]: 2025-11-29 07:50:39.078443778 +0000 UTC m=+0.134719582 container cleanup 4ad72792ffd21475cf06f0b0b805e3c116879260c91edca97d4de66074330941 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 02:50:39 np0005539552 systemd[1]: libpod-conmon-4ad72792ffd21475cf06f0b0b805e3c116879260c91edca97d4de66074330941.scope: Deactivated successfully.
Nov 29 02:50:39 np0005539552 podman[240590]: 2025-11-29 07:50:39.167109361 +0000 UTC m=+0.068451146 container remove 4ad72792ffd21475cf06f0b0b805e3c116879260c91edca97d4de66074330941 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 02:50:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:39.178 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[57c7807f-8f11-47fa-ad5c-7c004c55bc54]: (4, ('Sat Nov 29 07:50:38 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363 (4ad72792ffd21475cf06f0b0b805e3c116879260c91edca97d4de66074330941)\n4ad72792ffd21475cf06f0b0b805e3c116879260c91edca97d4de66074330941\nSat Nov 29 07:50:39 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363 (4ad72792ffd21475cf06f0b0b805e3c116879260c91edca97d4de66074330941)\n4ad72792ffd21475cf06f0b0b805e3c116879260c91edca97d4de66074330941\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:39.180 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[50da8dee-6f65-452b-960f-617c166f5105]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:39.181 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01d0d21b-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:50:39 np0005539552 nova_compute[233724]: 2025-11-29 07:50:39.182 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:39 np0005539552 kernel: tap01d0d21b-e0: left promiscuous mode
Nov 29 02:50:39 np0005539552 nova_compute[233724]: 2025-11-29 07:50:39.199 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:39.199 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[42a58b16-ede6-4ecd-a3b3-eebee4e5a982]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:39.218 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[48975af9-2afd-4aff-a2f2-8ce3295b91be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:39.219 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6e3b209a-66e8-4649-97cb-d601f11fd9ba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:39.236 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[bede45f4-8885-4e3d-9a8c-f24062b94748]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 566946, 'reachable_time': 31590, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240605, 'error': None, 'target': 'ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:39.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:39 np0005539552 systemd[1]: run-netns-ovnmeta\x2d01d0d21b\x2deaad\x2d4f5d\x2d82d1\x2d0f4d31e80363.mount: Deactivated successfully.
Nov 29 02:50:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:39.247 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-01d0d21b-eaad-4f5d-82d1-0f4d31e80363 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 02:50:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:39.247 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[b997b0ac-e979-47b9-acf4-b4045c08d7fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:39.248 143400 INFO neutron.agent.ovn.metadata.agent [-] Port fc147d63-f6e7-4928-8991-4204b1b015bc in datapath 01d0d21b-eaad-4f5d-82d1-0f4d31e80363 unbound from our chassis#033[00m
Nov 29 02:50:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:39.250 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 01d0d21b-eaad-4f5d-82d1-0f4d31e80363, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:50:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:39.251 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ce636c48-c40e-4bf5-a1c4-181fc90ffb67]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:39.252 143400 INFO neutron.agent.ovn.metadata.agent [-] Port fc147d63-f6e7-4928-8991-4204b1b015bc in datapath 01d0d21b-eaad-4f5d-82d1-0f4d31e80363 unbound from our chassis#033[00m
Nov 29 02:50:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:39.253 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 01d0d21b-eaad-4f5d-82d1-0f4d31e80363, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:50:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:50:39.253 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e5e863a7-018f-4e10-9454-17fd19088810]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:39 np0005539552 nova_compute[233724]: 2025-11-29 07:50:39.611 233728 INFO nova.virt.libvirt.driver [None req-38def009-aad2-4916-9e71-d331ba0a4331 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Deleting instance files /var/lib/nova/instances/2519f959-bfbf-49b6-b80d-eff80129064b_del#033[00m
Nov 29 02:50:39 np0005539552 nova_compute[233724]: 2025-11-29 07:50:39.613 233728 INFO nova.virt.libvirt.driver [None req-38def009-aad2-4916-9e71-d331ba0a4331 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Deletion of /var/lib/nova/instances/2519f959-bfbf-49b6-b80d-eff80129064b_del complete#033[00m
Nov 29 02:50:39 np0005539552 nova_compute[233724]: 2025-11-29 07:50:39.707 233728 DEBUG nova.virt.libvirt.host [None req-38def009-aad2-4916-9e71-d331ba0a4331 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Nov 29 02:50:39 np0005539552 nova_compute[233724]: 2025-11-29 07:50:39.708 233728 INFO nova.virt.libvirt.host [None req-38def009-aad2-4916-9e71-d331ba0a4331 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] UEFI support detected#033[00m
Nov 29 02:50:39 np0005539552 nova_compute[233724]: 2025-11-29 07:50:39.711 233728 INFO nova.compute.manager [None req-38def009-aad2-4916-9e71-d331ba0a4331 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Took 1.03 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:50:39 np0005539552 nova_compute[233724]: 2025-11-29 07:50:39.712 233728 DEBUG oslo.service.loopingcall [None req-38def009-aad2-4916-9e71-d331ba0a4331 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:50:39 np0005539552 nova_compute[233724]: 2025-11-29 07:50:39.713 233728 DEBUG nova.compute.manager [-] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:50:39 np0005539552 nova_compute[233724]: 2025-11-29 07:50:39.713 233728 DEBUG nova.network.neutron [-] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:50:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:39.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:40 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:40 np0005539552 nova_compute[233724]: 2025-11-29 07:50:40.820 233728 DEBUG nova.network.neutron [-] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:50:40 np0005539552 nova_compute[233724]: 2025-11-29 07:50:40.852 233728 INFO nova.compute.manager [-] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Took 1.14 seconds to deallocate network for instance.#033[00m
Nov 29 02:50:40 np0005539552 nova_compute[233724]: 2025-11-29 07:50:40.912 233728 DEBUG oslo_concurrency.lockutils [None req-38def009-aad2-4916-9e71-d331ba0a4331 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:40 np0005539552 nova_compute[233724]: 2025-11-29 07:50:40.912 233728 DEBUG oslo_concurrency.lockutils [None req-38def009-aad2-4916-9e71-d331ba0a4331 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:40 np0005539552 nova_compute[233724]: 2025-11-29 07:50:40.934 233728 DEBUG nova.compute.manager [req-b941c4da-ec56-4cb1-8c79-12a104a9a17e req-845b5192-ec68-4137-82d1-63b293465590 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Received event network-vif-deleted-fc147d63-f6e7-4928-8991-4204b1b015bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:50:40 np0005539552 nova_compute[233724]: 2025-11-29 07:50:40.966 233728 DEBUG oslo_concurrency.processutils [None req-38def009-aad2-4916-9e71-d331ba0a4331 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:41 np0005539552 nova_compute[233724]: 2025-11-29 07:50:41.126 233728 DEBUG nova.compute.manager [req-839c8b54-e5df-46b2-b778-323655b67408 req-5da5fd6b-8609-4909-a0f9-c7fd113cd8d0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Received event network-vif-plugged-fc147d63-f6e7-4928-8991-4204b1b015bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:50:41 np0005539552 nova_compute[233724]: 2025-11-29 07:50:41.126 233728 DEBUG oslo_concurrency.lockutils [req-839c8b54-e5df-46b2-b778-323655b67408 req-5da5fd6b-8609-4909-a0f9-c7fd113cd8d0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "2519f959-bfbf-49b6-b80d-eff80129064b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:41 np0005539552 nova_compute[233724]: 2025-11-29 07:50:41.127 233728 DEBUG oslo_concurrency.lockutils [req-839c8b54-e5df-46b2-b778-323655b67408 req-5da5fd6b-8609-4909-a0f9-c7fd113cd8d0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2519f959-bfbf-49b6-b80d-eff80129064b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:41 np0005539552 nova_compute[233724]: 2025-11-29 07:50:41.127 233728 DEBUG oslo_concurrency.lockutils [req-839c8b54-e5df-46b2-b778-323655b67408 req-5da5fd6b-8609-4909-a0f9-c7fd113cd8d0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2519f959-bfbf-49b6-b80d-eff80129064b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:41 np0005539552 nova_compute[233724]: 2025-11-29 07:50:41.127 233728 DEBUG nova.compute.manager [req-839c8b54-e5df-46b2-b778-323655b67408 req-5da5fd6b-8609-4909-a0f9-c7fd113cd8d0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] No waiting events found dispatching network-vif-plugged-fc147d63-f6e7-4928-8991-4204b1b015bc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:50:41 np0005539552 nova_compute[233724]: 2025-11-29 07:50:41.127 233728 WARNING nova.compute.manager [req-839c8b54-e5df-46b2-b778-323655b67408 req-5da5fd6b-8609-4909-a0f9-c7fd113cd8d0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Received unexpected event network-vif-plugged-fc147d63-f6e7-4928-8991-4204b1b015bc for instance with vm_state deleted and task_state None.#033[00m
Nov 29 02:50:41 np0005539552 nova_compute[233724]: 2025-11-29 07:50:41.128 233728 DEBUG nova.compute.manager [req-839c8b54-e5df-46b2-b778-323655b67408 req-5da5fd6b-8609-4909-a0f9-c7fd113cd8d0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Received event network-vif-plugged-fc147d63-f6e7-4928-8991-4204b1b015bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:50:41 np0005539552 nova_compute[233724]: 2025-11-29 07:50:41.128 233728 DEBUG oslo_concurrency.lockutils [req-839c8b54-e5df-46b2-b778-323655b67408 req-5da5fd6b-8609-4909-a0f9-c7fd113cd8d0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "2519f959-bfbf-49b6-b80d-eff80129064b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:41 np0005539552 nova_compute[233724]: 2025-11-29 07:50:41.128 233728 DEBUG oslo_concurrency.lockutils [req-839c8b54-e5df-46b2-b778-323655b67408 req-5da5fd6b-8609-4909-a0f9-c7fd113cd8d0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2519f959-bfbf-49b6-b80d-eff80129064b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:41 np0005539552 nova_compute[233724]: 2025-11-29 07:50:41.128 233728 DEBUG oslo_concurrency.lockutils [req-839c8b54-e5df-46b2-b778-323655b67408 req-5da5fd6b-8609-4909-a0f9-c7fd113cd8d0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2519f959-bfbf-49b6-b80d-eff80129064b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:41 np0005539552 nova_compute[233724]: 2025-11-29 07:50:41.129 233728 DEBUG nova.compute.manager [req-839c8b54-e5df-46b2-b778-323655b67408 req-5da5fd6b-8609-4909-a0f9-c7fd113cd8d0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] No waiting events found dispatching network-vif-plugged-fc147d63-f6e7-4928-8991-4204b1b015bc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:50:41 np0005539552 nova_compute[233724]: 2025-11-29 07:50:41.129 233728 WARNING nova.compute.manager [req-839c8b54-e5df-46b2-b778-323655b67408 req-5da5fd6b-8609-4909-a0f9-c7fd113cd8d0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Received unexpected event network-vif-plugged-fc147d63-f6e7-4928-8991-4204b1b015bc for instance with vm_state deleted and task_state None.#033[00m
Nov 29 02:50:41 np0005539552 nova_compute[233724]: 2025-11-29 07:50:41.129 233728 DEBUG nova.compute.manager [req-839c8b54-e5df-46b2-b778-323655b67408 req-5da5fd6b-8609-4909-a0f9-c7fd113cd8d0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Received event network-vif-plugged-fc147d63-f6e7-4928-8991-4204b1b015bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:50:41 np0005539552 nova_compute[233724]: 2025-11-29 07:50:41.129 233728 DEBUG oslo_concurrency.lockutils [req-839c8b54-e5df-46b2-b778-323655b67408 req-5da5fd6b-8609-4909-a0f9-c7fd113cd8d0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "2519f959-bfbf-49b6-b80d-eff80129064b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:41 np0005539552 nova_compute[233724]: 2025-11-29 07:50:41.130 233728 DEBUG oslo_concurrency.lockutils [req-839c8b54-e5df-46b2-b778-323655b67408 req-5da5fd6b-8609-4909-a0f9-c7fd113cd8d0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2519f959-bfbf-49b6-b80d-eff80129064b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:41 np0005539552 nova_compute[233724]: 2025-11-29 07:50:41.130 233728 DEBUG oslo_concurrency.lockutils [req-839c8b54-e5df-46b2-b778-323655b67408 req-5da5fd6b-8609-4909-a0f9-c7fd113cd8d0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2519f959-bfbf-49b6-b80d-eff80129064b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:41 np0005539552 nova_compute[233724]: 2025-11-29 07:50:41.130 233728 DEBUG nova.compute.manager [req-839c8b54-e5df-46b2-b778-323655b67408 req-5da5fd6b-8609-4909-a0f9-c7fd113cd8d0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] No waiting events found dispatching network-vif-plugged-fc147d63-f6e7-4928-8991-4204b1b015bc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:50:41 np0005539552 nova_compute[233724]: 2025-11-29 07:50:41.130 233728 WARNING nova.compute.manager [req-839c8b54-e5df-46b2-b778-323655b67408 req-5da5fd6b-8609-4909-a0f9-c7fd113cd8d0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Received unexpected event network-vif-plugged-fc147d63-f6e7-4928-8991-4204b1b015bc for instance with vm_state deleted and task_state None.#033[00m
Nov 29 02:50:41 np0005539552 nova_compute[233724]: 2025-11-29 07:50:41.130 233728 DEBUG nova.compute.manager [req-839c8b54-e5df-46b2-b778-323655b67408 req-5da5fd6b-8609-4909-a0f9-c7fd113cd8d0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Received event network-vif-unplugged-fc147d63-f6e7-4928-8991-4204b1b015bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:50:41 np0005539552 nova_compute[233724]: 2025-11-29 07:50:41.131 233728 DEBUG oslo_concurrency.lockutils [req-839c8b54-e5df-46b2-b778-323655b67408 req-5da5fd6b-8609-4909-a0f9-c7fd113cd8d0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "2519f959-bfbf-49b6-b80d-eff80129064b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:41 np0005539552 nova_compute[233724]: 2025-11-29 07:50:41.131 233728 DEBUG oslo_concurrency.lockutils [req-839c8b54-e5df-46b2-b778-323655b67408 req-5da5fd6b-8609-4909-a0f9-c7fd113cd8d0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2519f959-bfbf-49b6-b80d-eff80129064b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:41 np0005539552 nova_compute[233724]: 2025-11-29 07:50:41.131 233728 DEBUG oslo_concurrency.lockutils [req-839c8b54-e5df-46b2-b778-323655b67408 req-5da5fd6b-8609-4909-a0f9-c7fd113cd8d0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2519f959-bfbf-49b6-b80d-eff80129064b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:41 np0005539552 nova_compute[233724]: 2025-11-29 07:50:41.131 233728 DEBUG nova.compute.manager [req-839c8b54-e5df-46b2-b778-323655b67408 req-5da5fd6b-8609-4909-a0f9-c7fd113cd8d0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] No waiting events found dispatching network-vif-unplugged-fc147d63-f6e7-4928-8991-4204b1b015bc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:50:41 np0005539552 nova_compute[233724]: 2025-11-29 07:50:41.131 233728 WARNING nova.compute.manager [req-839c8b54-e5df-46b2-b778-323655b67408 req-5da5fd6b-8609-4909-a0f9-c7fd113cd8d0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Received unexpected event network-vif-unplugged-fc147d63-f6e7-4928-8991-4204b1b015bc for instance with vm_state deleted and task_state None.#033[00m
Nov 29 02:50:41 np0005539552 nova_compute[233724]: 2025-11-29 07:50:41.132 233728 DEBUG nova.compute.manager [req-839c8b54-e5df-46b2-b778-323655b67408 req-5da5fd6b-8609-4909-a0f9-c7fd113cd8d0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Received event network-vif-plugged-fc147d63-f6e7-4928-8991-4204b1b015bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:50:41 np0005539552 nova_compute[233724]: 2025-11-29 07:50:41.132 233728 DEBUG oslo_concurrency.lockutils [req-839c8b54-e5df-46b2-b778-323655b67408 req-5da5fd6b-8609-4909-a0f9-c7fd113cd8d0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "2519f959-bfbf-49b6-b80d-eff80129064b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:41 np0005539552 nova_compute[233724]: 2025-11-29 07:50:41.132 233728 DEBUG oslo_concurrency.lockutils [req-839c8b54-e5df-46b2-b778-323655b67408 req-5da5fd6b-8609-4909-a0f9-c7fd113cd8d0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2519f959-bfbf-49b6-b80d-eff80129064b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:41 np0005539552 nova_compute[233724]: 2025-11-29 07:50:41.132 233728 DEBUG oslo_concurrency.lockutils [req-839c8b54-e5df-46b2-b778-323655b67408 req-5da5fd6b-8609-4909-a0f9-c7fd113cd8d0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2519f959-bfbf-49b6-b80d-eff80129064b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:41 np0005539552 nova_compute[233724]: 2025-11-29 07:50:41.133 233728 DEBUG nova.compute.manager [req-839c8b54-e5df-46b2-b778-323655b67408 req-5da5fd6b-8609-4909-a0f9-c7fd113cd8d0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] No waiting events found dispatching network-vif-plugged-fc147d63-f6e7-4928-8991-4204b1b015bc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:50:41 np0005539552 nova_compute[233724]: 2025-11-29 07:50:41.133 233728 WARNING nova.compute.manager [req-839c8b54-e5df-46b2-b778-323655b67408 req-5da5fd6b-8609-4909-a0f9-c7fd113cd8d0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Received unexpected event network-vif-plugged-fc147d63-f6e7-4928-8991-4204b1b015bc for instance with vm_state deleted and task_state None.#033[00m
Nov 29 02:50:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:41.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:41 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:50:41 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2695139829' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:50:41 np0005539552 nova_compute[233724]: 2025-11-29 07:50:41.392 233728 DEBUG oslo_concurrency.processutils [None req-38def009-aad2-4916-9e71-d331ba0a4331 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:41 np0005539552 nova_compute[233724]: 2025-11-29 07:50:41.398 233728 DEBUG nova.compute.provider_tree [None req-38def009-aad2-4916-9e71-d331ba0a4331 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:50:41 np0005539552 nova_compute[233724]: 2025-11-29 07:50:41.410 233728 DEBUG nova.scheduler.client.report [None req-38def009-aad2-4916-9e71-d331ba0a4331 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:50:41 np0005539552 nova_compute[233724]: 2025-11-29 07:50:41.485 233728 DEBUG oslo_concurrency.lockutils [None req-38def009-aad2-4916-9e71-d331ba0a4331 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:41 np0005539552 nova_compute[233724]: 2025-11-29 07:50:41.510 233728 INFO nova.scheduler.client.report [None req-38def009-aad2-4916-9e71-d331ba0a4331 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Deleted allocations for instance 2519f959-bfbf-49b6-b80d-eff80129064b#033[00m
Nov 29 02:50:41 np0005539552 nova_compute[233724]: 2025-11-29 07:50:41.622 233728 DEBUG oslo_concurrency.lockutils [None req-38def009-aad2-4916-9e71-d331ba0a4331 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "2519f959-bfbf-49b6-b80d-eff80129064b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.947s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:41.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:43.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:43.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:43 np0005539552 nova_compute[233724]: 2025-11-29 07:50:43.981 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:44 np0005539552 nova_compute[233724]: 2025-11-29 07:50:44.072 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:45.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:45 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:45.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:47.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:47.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:48 np0005539552 ovn_controller[133798]: 2025-11-29T07:50:48Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b0:0d:c0 10.100.0.9
Nov 29 02:50:48 np0005539552 ovn_controller[133798]: 2025-11-29T07:50:48Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b0:0d:c0 10.100.0.9
Nov 29 02:50:48 np0005539552 nova_compute[233724]: 2025-11-29 07:50:48.984 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:49 np0005539552 nova_compute[233724]: 2025-11-29 07:50:49.075 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:49.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:49 np0005539552 podman[240688]: 2025-11-29 07:50:49.971603989 +0000 UTC m=+0.051019604 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 02:50:49 np0005539552 podman[240687]: 2025-11-29 07:50:49.979078781 +0000 UTC m=+0.060270724 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible)
Nov 29 02:50:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:49.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:50 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:51.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:51.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:51 np0005539552 podman[240728]: 2025-11-29 07:50:51.992427673 +0000 UTC m=+0.083246068 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 02:50:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:53.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:53 np0005539552 nova_compute[233724]: 2025-11-29 07:50:53.928 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402638.9269476, 2519f959-bfbf-49b6-b80d-eff80129064b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:50:53 np0005539552 nova_compute[233724]: 2025-11-29 07:50:53.928 233728 INFO nova.compute.manager [-] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:50:53 np0005539552 nova_compute[233724]: 2025-11-29 07:50:53.958 233728 DEBUG nova.compute.manager [None req-40821ca7-b89b-4eda-9cbc-bc7f11d7008a - - - - - -] [instance: 2519f959-bfbf-49b6-b80d-eff80129064b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:50:53 np0005539552 nova_compute[233724]: 2025-11-29 07:50:53.986 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:53.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:54 np0005539552 nova_compute[233724]: 2025-11-29 07:50:54.076 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:55.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:55 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:55.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:57.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:57.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:58 np0005539552 nova_compute[233724]: 2025-11-29 07:50:58.988 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:59 np0005539552 nova_compute[233724]: 2025-11-29 07:50:59.078 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:50:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:59.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:51:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:59.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:00 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:51:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:51:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:01.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:51:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:02.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:51:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:03.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:03 np0005539552 nova_compute[233724]: 2025-11-29 07:51:03.990 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:51:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:04.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:04 np0005539552 nova_compute[233724]: 2025-11-29 07:51:04.080 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:04 np0005539552 nova_compute[233724]: 2025-11-29 07:51:04.897 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:51:04 np0005539552 nova_compute[233724]: 2025-11-29 07:51:04.898 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:51:04 np0005539552 nova_compute[233724]: 2025-11-29 07:51:04.898 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:51:04 np0005539552 nova_compute[233724]: 2025-11-29 07:51:04.899 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:51:05 np0005539552 nova_compute[233724]: 2025-11-29 07:51:05.210 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:51:05.210 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:51:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:51:05.212 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:51:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:51:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:05.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:05 np0005539552 ovn_controller[133798]: 2025-11-29T07:51:05Z|00051|binding|INFO|Releasing lport 0a5d766f-ec84-47c0-9590-72934ab05c0c from this chassis (sb_readonly=0)
Nov 29 02:51:05 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:51:05 np0005539552 nova_compute[233724]: 2025-11-29 07:51:05.705 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:05 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:51:05 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2230048315' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:51:05 np0005539552 nova_compute[233724]: 2025-11-29 07:51:05.782 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "refresh_cache-b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:51:05 np0005539552 nova_compute[233724]: 2025-11-29 07:51:05.783 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquired lock "refresh_cache-b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:51:05 np0005539552 nova_compute[233724]: 2025-11-29 07:51:05.783 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 02:51:05 np0005539552 nova_compute[233724]: 2025-11-29 07:51:05.784 233728 DEBUG nova.objects.instance [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:51:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:51:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:06.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:51:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:07.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:51:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:08.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:09 np0005539552 nova_compute[233724]: 2025-11-29 07:51:09.046 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:09 np0005539552 nova_compute[233724]: 2025-11-29 07:51:09.083 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:51:09.213 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:51:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:51:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:09.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:51:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:10.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:10 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:51:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:51:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:11.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:51:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:12.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:51:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:13.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:51:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:14.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:14 np0005539552 nova_compute[233724]: 2025-11-29 07:51:14.048 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:14 np0005539552 nova_compute[233724]: 2025-11-29 07:51:14.085 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:51:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:15.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:15 np0005539552 nova_compute[233724]: 2025-11-29 07:51:15.810 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Updating instance_info_cache with network_info: [{"id": "9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83", "address": "fa:16:3e:b0:0d:c0", "network": {"id": "1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2058609322-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06acd02df57149e795d2be57787bb9ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9df81ac8-a3", "ovs_interfaceid": "9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:51:15 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:51:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:51:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:16.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:16 np0005539552 nova_compute[233724]: 2025-11-29 07:51:16.697 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Releasing lock "refresh_cache-b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:51:16 np0005539552 nova_compute[233724]: 2025-11-29 07:51:16.698 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 02:51:16 np0005539552 nova_compute[233724]: 2025-11-29 07:51:16.698 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:51:16 np0005539552 nova_compute[233724]: 2025-11-29 07:51:16.698 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:51:16 np0005539552 nova_compute[233724]: 2025-11-29 07:51:16.699 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:51:16 np0005539552 nova_compute[233724]: 2025-11-29 07:51:16.700 233728 DEBUG oslo_concurrency.lockutils [None req-ed9317e9-a8a8-4193-be1a-f0fdfd8a96a3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Acquiring lock "b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:51:16 np0005539552 nova_compute[233724]: 2025-11-29 07:51:16.700 233728 DEBUG oslo_concurrency.lockutils [None req-ed9317e9-a8a8-4193-be1a-f0fdfd8a96a3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Lock "b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:51:16 np0005539552 nova_compute[233724]: 2025-11-29 07:51:16.701 233728 DEBUG oslo_concurrency.lockutils [None req-ed9317e9-a8a8-4193-be1a-f0fdfd8a96a3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Acquiring lock "b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:51:16 np0005539552 nova_compute[233724]: 2025-11-29 07:51:16.701 233728 DEBUG oslo_concurrency.lockutils [None req-ed9317e9-a8a8-4193-be1a-f0fdfd8a96a3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Lock "b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:51:16 np0005539552 nova_compute[233724]: 2025-11-29 07:51:16.701 233728 DEBUG oslo_concurrency.lockutils [None req-ed9317e9-a8a8-4193-be1a-f0fdfd8a96a3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Lock "b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:51:16 np0005539552 nova_compute[233724]: 2025-11-29 07:51:16.702 233728 INFO nova.compute.manager [None req-ed9317e9-a8a8-4193-be1a-f0fdfd8a96a3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Terminating instance#033[00m
Nov 29 02:51:16 np0005539552 nova_compute[233724]: 2025-11-29 07:51:16.703 233728 DEBUG nova.compute.manager [None req-ed9317e9-a8a8-4193-be1a-f0fdfd8a96a3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:51:16 np0005539552 nova_compute[233724]: 2025-11-29 07:51:16.703 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:51:16 np0005539552 nova_compute[233724]: 2025-11-29 07:51:16.704 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:51:16 np0005539552 nova_compute[233724]: 2025-11-29 07:51:16.704 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:51:16 np0005539552 nova_compute[233724]: 2025-11-29 07:51:16.705 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:51:16 np0005539552 nova_compute[233724]: 2025-11-29 07:51:16.705 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:51:16 np0005539552 nova_compute[233724]: 2025-11-29 07:51:16.757 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:51:16 np0005539552 nova_compute[233724]: 2025-11-29 07:51:16.757 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:51:16 np0005539552 nova_compute[233724]: 2025-11-29 07:51:16.757 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:51:16 np0005539552 nova_compute[233724]: 2025-11-29 07:51:16.758 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:51:16 np0005539552 nova_compute[233724]: 2025-11-29 07:51:16.758 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:51:17 np0005539552 kernel: tap9df81ac8-a3 (unregistering): left promiscuous mode
Nov 29 02:51:17 np0005539552 NetworkManager[48926]: <info>  [1764402677.1168] device (tap9df81ac8-a3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:51:17 np0005539552 ovn_controller[133798]: 2025-11-29T07:51:17Z|00052|binding|INFO|Releasing lport 9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83 from this chassis (sb_readonly=0)
Nov 29 02:51:17 np0005539552 ovn_controller[133798]: 2025-11-29T07:51:17Z|00053|binding|INFO|Setting lport 9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83 down in Southbound
Nov 29 02:51:17 np0005539552 nova_compute[233724]: 2025-11-29 07:51:17.129 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:17 np0005539552 ovn_controller[133798]: 2025-11-29T07:51:17Z|00054|binding|INFO|Removing iface tap9df81ac8-a3 ovn-installed in OVS
Nov 29 02:51:17 np0005539552 nova_compute[233724]: 2025-11-29 07:51:17.136 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:51:17.160 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:0d:c0 10.100.0.9'], port_security=['fa:16:3e:b0:0d:c0 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '06acd02df57149e795d2be57787bb9ed', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ea30fb5a-0cff-4126-aaa9-22a68d1fc7db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.192'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=157c85d0-7492-4bd9-b8bf-525034a08746, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:51:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:51:17.161 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83 in datapath 1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4 unbound from our chassis#033[00m
Nov 29 02:51:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:51:17.163 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:51:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:51:17.164 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[888ed8fd-cbd0-4653-a1a5-8753b4a9087b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:51:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:51:17.166 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4 namespace which is not needed anymore#033[00m
Nov 29 02:51:17 np0005539552 nova_compute[233724]: 2025-11-29 07:51:17.176 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:17 np0005539552 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000009.scope: Deactivated successfully.
Nov 29 02:51:17 np0005539552 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000009.scope: Consumed 16.714s CPU time.
Nov 29 02:51:17 np0005539552 systemd-machined[196379]: Machine qemu-2-instance-00000009 terminated.
Nov 29 02:51:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:51:17 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3480463530' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:51:17 np0005539552 nova_compute[233724]: 2025-11-29 07:51:17.217 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:51:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:51:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:17.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:17 np0005539552 neutron-haproxy-ovnmeta-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4[240432]: [NOTICE]   (240436) : haproxy version is 2.8.14-c23fe91
Nov 29 02:51:17 np0005539552 neutron-haproxy-ovnmeta-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4[240432]: [NOTICE]   (240436) : path to executable is /usr/sbin/haproxy
Nov 29 02:51:17 np0005539552 neutron-haproxy-ovnmeta-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4[240432]: [WARNING]  (240436) : Exiting Master process...
Nov 29 02:51:17 np0005539552 neutron-haproxy-ovnmeta-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4[240432]: [ALERT]    (240436) : Current worker (240438) exited with code 143 (Terminated)
Nov 29 02:51:17 np0005539552 neutron-haproxy-ovnmeta-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4[240432]: [WARNING]  (240436) : All workers exited. Exiting... (0)
Nov 29 02:51:17 np0005539552 systemd[1]: libpod-f45162a592f09b21e4071b5f92612fd8366e7e70e497730ee33caf4ba4286a5f.scope: Deactivated successfully.
Nov 29 02:51:17 np0005539552 podman[240866]: 2025-11-29 07:51:17.326403432 +0000 UTC m=+0.058663690 container died f45162a592f09b21e4071b5f92612fd8366e7e70e497730ee33caf4ba4286a5f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:51:17 np0005539552 nova_compute[233724]: 2025-11-29 07:51:17.341 233728 INFO nova.virt.libvirt.driver [-] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Instance destroyed successfully.#033[00m
Nov 29 02:51:17 np0005539552 nova_compute[233724]: 2025-11-29 07:51:17.342 233728 DEBUG nova.objects.instance [None req-ed9317e9-a8a8-4193-be1a-f0fdfd8a96a3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Lazy-loading 'resources' on Instance uuid b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:51:17 np0005539552 nova_compute[233724]: 2025-11-29 07:51:17.371 233728 DEBUG nova.virt.libvirt.vif [None req-ed9317e9-a8a8-4193-be1a-f0fdfd8a96a3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:50:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1279273348',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1279273348',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1279273348',id=9,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNs4qJju/LAwBpfwNEkpAniOUgsZvC5AJbe2gxJ9IIbsfIqlPQKB9AiaXItMjFGTGLax2vK2q305Wa2bDc3JMsgTFTOdHEZIjpwTy2cnsiyE7ulw1lLo0Ds1kT20t3frlA==',key_name='tempest-keypair-1679719729',keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:50:32Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='06acd02df57149e795d2be57787bb9ed',ramdisk_id='',reservation_id='r-o7bphj1z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-456097839',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-456097839-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:50:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d57d713485e84d19a429533b570c4189',uuid=b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83", "address": "fa:16:3e:b0:0d:c0", "network": {"id": "1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2058609322-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06acd02df57149e795d2be57787bb9ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9df81ac8-a3", "ovs_interfaceid": "9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:51:17 np0005539552 nova_compute[233724]: 2025-11-29 07:51:17.371 233728 DEBUG nova.network.os_vif_util [None req-ed9317e9-a8a8-4193-be1a-f0fdfd8a96a3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Converting VIF {"id": "9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83", "address": "fa:16:3e:b0:0d:c0", "network": {"id": "1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2058609322-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06acd02df57149e795d2be57787bb9ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9df81ac8-a3", "ovs_interfaceid": "9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:51:17 np0005539552 nova_compute[233724]: 2025-11-29 07:51:17.372 233728 DEBUG nova.network.os_vif_util [None req-ed9317e9-a8a8-4193-be1a-f0fdfd8a96a3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b0:0d:c0,bridge_name='br-int',has_traffic_filtering=True,id=9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83,network=Network(1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9df81ac8-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:51:17 np0005539552 nova_compute[233724]: 2025-11-29 07:51:17.372 233728 DEBUG os_vif [None req-ed9317e9-a8a8-4193-be1a-f0fdfd8a96a3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b0:0d:c0,bridge_name='br-int',has_traffic_filtering=True,id=9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83,network=Network(1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9df81ac8-a3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:51:17 np0005539552 nova_compute[233724]: 2025-11-29 07:51:17.374 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:17 np0005539552 nova_compute[233724]: 2025-11-29 07:51:17.375 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9df81ac8-a3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:51:17 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f45162a592f09b21e4071b5f92612fd8366e7e70e497730ee33caf4ba4286a5f-userdata-shm.mount: Deactivated successfully.
Nov 29 02:51:17 np0005539552 nova_compute[233724]: 2025-11-29 07:51:17.376 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:17 np0005539552 nova_compute[233724]: 2025-11-29 07:51:17.378 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:17 np0005539552 systemd[1]: var-lib-containers-storage-overlay-f345e87154b92b463cb415c57066060e8d772b787b306a2b0b6999d88f8f9488-merged.mount: Deactivated successfully.
Nov 29 02:51:17 np0005539552 nova_compute[233724]: 2025-11-29 07:51:17.380 233728 INFO os_vif [None req-ed9317e9-a8a8-4193-be1a-f0fdfd8a96a3 d57d713485e84d19a429533b570c4189 06acd02df57149e795d2be57787bb9ed - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b0:0d:c0,bridge_name='br-int',has_traffic_filtering=True,id=9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83,network=Network(1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9df81ac8-a3')#033[00m
Nov 29 02:51:17 np0005539552 podman[240866]: 2025-11-29 07:51:17.389237525 +0000 UTC m=+0.121497763 container cleanup f45162a592f09b21e4071b5f92612fd8366e7e70e497730ee33caf4ba4286a5f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 02:51:17 np0005539552 systemd[1]: libpod-conmon-f45162a592f09b21e4071b5f92612fd8366e7e70e497730ee33caf4ba4286a5f.scope: Deactivated successfully.
Nov 29 02:51:17 np0005539552 nova_compute[233724]: 2025-11-29 07:51:17.442 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:51:17 np0005539552 nova_compute[233724]: 2025-11-29 07:51:17.442 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:51:17 np0005539552 nova_compute[233724]: 2025-11-29 07:51:17.443 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:51:17 np0005539552 podman[240921]: 2025-11-29 07:51:17.48941698 +0000 UTC m=+0.079971828 container remove f45162a592f09b21e4071b5f92612fd8366e7e70e497730ee33caf4ba4286a5f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 02:51:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:51:17.495 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a68b03cc-f332-4574-83a9-1506b503c373]: (4, ('Sat Nov 29 07:51:17 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4 (f45162a592f09b21e4071b5f92612fd8366e7e70e497730ee33caf4ba4286a5f)\nf45162a592f09b21e4071b5f92612fd8366e7e70e497730ee33caf4ba4286a5f\nSat Nov 29 07:51:17 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4 (f45162a592f09b21e4071b5f92612fd8366e7e70e497730ee33caf4ba4286a5f)\nf45162a592f09b21e4071b5f92612fd8366e7e70e497730ee33caf4ba4286a5f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:51:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:51:17.496 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[1d67e9b9-5e85-4651-9ef8-1cd2e6bef045]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:51:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:51:17.497 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1968b9ed-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:51:17 np0005539552 nova_compute[233724]: 2025-11-29 07:51:17.498 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:17 np0005539552 kernel: tap1968b9ed-30: left promiscuous mode
Nov 29 02:51:17 np0005539552 nova_compute[233724]: 2025-11-29 07:51:17.500 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:51:17.502 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e43f65c0-5501-45a5-8101-b8dbc8b81b55]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:51:17 np0005539552 nova_compute[233724]: 2025-11-29 07:51:17.513 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:51:17.516 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2e9831b5-6d73-42aa-8f25-4389f96bcb78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:51:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:51:17.517 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[96304d70-cf8d-4be0-adc1-ee285faaa943]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:51:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:51:17.531 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[889d1dea-3050-4b7c-aa1a-3c194914aade]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 567547, 'reachable_time': 28715, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240938, 'error': None, 'target': 'ovnmeta-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:51:17 np0005539552 systemd[1]: run-netns-ovnmeta\x2d1968b9ed\x2d3ce7\x2d4c25\x2d8c75\x2d7925f3a8c0b4.mount: Deactivated successfully.
Nov 29 02:51:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:51:17.535 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1968b9ed-3ce7-4c25-8c75-7925f3a8c0b4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 02:51:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:51:17.536 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[27c57da7-96b9-4f04-8881-7ea3f7e84a34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:51:17 np0005539552 nova_compute[233724]: 2025-11-29 07:51:17.607 233728 DEBUG nova.compute.manager [req-5bc1e112-5e45-4f2e-9e7a-4ad3179aa9fa req-dc20015d-7421-4b5d-af31-5e29b9c83ff3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Received event network-vif-unplugged-9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:51:17 np0005539552 nova_compute[233724]: 2025-11-29 07:51:17.607 233728 DEBUG oslo_concurrency.lockutils [req-5bc1e112-5e45-4f2e-9e7a-4ad3179aa9fa req-dc20015d-7421-4b5d-af31-5e29b9c83ff3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:51:17 np0005539552 nova_compute[233724]: 2025-11-29 07:51:17.607 233728 DEBUG oslo_concurrency.lockutils [req-5bc1e112-5e45-4f2e-9e7a-4ad3179aa9fa req-dc20015d-7421-4b5d-af31-5e29b9c83ff3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:51:17 np0005539552 nova_compute[233724]: 2025-11-29 07:51:17.608 233728 DEBUG oslo_concurrency.lockutils [req-5bc1e112-5e45-4f2e-9e7a-4ad3179aa9fa req-dc20015d-7421-4b5d-af31-5e29b9c83ff3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:51:17 np0005539552 nova_compute[233724]: 2025-11-29 07:51:17.608 233728 DEBUG nova.compute.manager [req-5bc1e112-5e45-4f2e-9e7a-4ad3179aa9fa req-dc20015d-7421-4b5d-af31-5e29b9c83ff3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] No waiting events found dispatching network-vif-unplugged-9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:51:17 np0005539552 nova_compute[233724]: 2025-11-29 07:51:17.608 233728 DEBUG nova.compute.manager [req-5bc1e112-5e45-4f2e-9e7a-4ad3179aa9fa req-dc20015d-7421-4b5d-af31-5e29b9c83ff3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61] Received event network-vif-unplugged-9df81ac8-a3e4-4c6e-b6df-182c0f3bdb83 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:51:17 np0005539552 nova_compute[233724]: 2025-11-29 07:51:17.619 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:51:17 np0005539552 nova_compute[233724]: 2025-11-29 07:51:17.620 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4917MB free_disk=20.85161590576172GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:51:17 np0005539552 nova_compute[233724]: 2025-11-29 07:51:17.621 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:51:17 np0005539552 nova_compute[233724]: 2025-11-29 07:51:17.621 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:51:17 np0005539552 nova_compute[233724]: 2025-11-29 07:51:17.755 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance b4ecadf7-e83c-439e-9c2a-bb6bfb5c0a61 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:51:17 np0005539552 nova_compute[233724]: 2025-11-29 07:51:17.756 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:51:17 np0005539552 nova_compute[233724]: 2025-11-29 07:51:17.756 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:51:17 np0005539552 nova_compute[233724]: 2025-11-29 07:51:17.834 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:51:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:51:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:18.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:51:18 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/824095805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:51:18 np0005539552 nova_compute[233724]: 2025-11-29 07:51:18.348 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:52:06 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:52:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:06.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:06 np0005539552 rsyslogd[1004]: imjournal: 385 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 29 02:52:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:07.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:07 np0005539552 nova_compute[233724]: 2025-11-29 07:52:07.404 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:08.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:09 np0005539552 nova_compute[233724]: 2025-11-29 07:52:09.108 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:09.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:10.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:11 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:52:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:11.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:12 np0005539552 nova_compute[233724]: 2025-11-29 07:52:12.406 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:52:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:12.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:52:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:13.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:14 np0005539552 nova_compute[233724]: 2025-11-29 07:52:14.110 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:14.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:15.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:16 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:52:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:52:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:16.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:52:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:17.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:17 np0005539552 nova_compute[233724]: 2025-11-29 07:52:17.409 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:18 np0005539552 nova_compute[233724]: 2025-11-29 07:52:18.427 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:52:18 np0005539552 nova_compute[233724]: 2025-11-29 07:52:18.428 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:52:18 np0005539552 nova_compute[233724]: 2025-11-29 07:52:18.451 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:52:18 np0005539552 nova_compute[233724]: 2025-11-29 07:52:18.451 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:52:18 np0005539552 nova_compute[233724]: 2025-11-29 07:52:18.451 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:52:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:18.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:18 np0005539552 nova_compute[233724]: 2025-11-29 07:52:18.474 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 02:52:18 np0005539552 nova_compute[233724]: 2025-11-29 07:52:18.474 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:52:18 np0005539552 nova_compute[233724]: 2025-11-29 07:52:18.475 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:52:18 np0005539552 nova_compute[233724]: 2025-11-29 07:52:18.475 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:52:18 np0005539552 nova_compute[233724]: 2025-11-29 07:52:18.476 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:52:18 np0005539552 nova_compute[233724]: 2025-11-29 07:52:18.476 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:52:18 np0005539552 nova_compute[233724]: 2025-11-29 07:52:18.476 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:52:18 np0005539552 nova_compute[233724]: 2025-11-29 07:52:18.477 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:52:18 np0005539552 nova_compute[233724]: 2025-11-29 07:52:18.477 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:52:18 np0005539552 nova_compute[233724]: 2025-11-29 07:52:18.508 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:52:18 np0005539552 nova_compute[233724]: 2025-11-29 07:52:18.508 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:52:18 np0005539552 nova_compute[233724]: 2025-11-29 07:52:18.509 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:52:18 np0005539552 nova_compute[233724]: 2025-11-29 07:52:18.509 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:52:18 np0005539552 nova_compute[233724]: 2025-11-29 07:52:18.510 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:52:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:52:18 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4091241449' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:52:18 np0005539552 nova_compute[233724]: 2025-11-29 07:52:18.955 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:52:19 np0005539552 nova_compute[233724]: 2025-11-29 07:52:19.112 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:19 np0005539552 nova_compute[233724]: 2025-11-29 07:52:19.126 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:52:19 np0005539552 nova_compute[233724]: 2025-11-29 07:52:19.127 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4962MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:52:19 np0005539552 nova_compute[233724]: 2025-11-29 07:52:19.127 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:52:19 np0005539552 nova_compute[233724]: 2025-11-29 07:52:19.128 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:52:19 np0005539552 nova_compute[233724]: 2025-11-29 07:52:19.201 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:52:19 np0005539552 nova_compute[233724]: 2025-11-29 07:52:19.201 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:52:19 np0005539552 nova_compute[233724]: 2025-11-29 07:52:19.216 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:52:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:19.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:19 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:52:19 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2873233482' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:52:19 np0005539552 nova_compute[233724]: 2025-11-29 07:52:19.622 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:52:19 np0005539552 nova_compute[233724]: 2025-11-29 07:52:19.628 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:52:19 np0005539552 nova_compute[233724]: 2025-11-29 07:52:19.646 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:52:19 np0005539552 nova_compute[233724]: 2025-11-29 07:52:19.667 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:52:19 np0005539552 nova_compute[233724]: 2025-11-29 07:52:19.667 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.540s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:52:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:52:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:20.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:52:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:52:20.601 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:52:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:52:20.601 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:52:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:52:20.601 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:52:21 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:52:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:21.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:22 np0005539552 nova_compute[233724]: 2025-11-29 07:52:22.413 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:22.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:22 np0005539552 podman[241530]: 2025-11-29 07:52:22.963915161 +0000 UTC m=+0.052902540 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, 
org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:52:22 np0005539552 podman[241529]: 2025-11-29 07:52:22.98517833 +0000 UTC m=+0.071114216 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 02:52:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:23.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:24 np0005539552 nova_compute[233724]: 2025-11-29 07:52:24.115 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:52:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:24.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:52:25 np0005539552 podman[241569]: 2025-11-29 07:52:25.019962016 +0000 UTC m=+0.107710172 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:52:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:25.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:26 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:52:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:52:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:26.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:52:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:27.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:27 np0005539552 nova_compute[233724]: 2025-11-29 07:52:27.416 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:52:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:28.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:52:29 np0005539552 nova_compute[233724]: 2025-11-29 07:52:29.117 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:29.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:29 np0005539552 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 29 02:52:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:30.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:31 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:52:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:52:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:31.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:52:32 np0005539552 nova_compute[233724]: 2025-11-29 07:52:32.418 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:52:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:32.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:52:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:33.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:34 np0005539552 nova_compute[233724]: 2025-11-29 07:52:34.119 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:34.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:35.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:36 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:52:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:36.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:37.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:37 np0005539552 nova_compute[233724]: 2025-11-29 07:52:37.420 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:38.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 02:52:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2197033004' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 02:52:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 02:52:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2197033004' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 02:52:39 np0005539552 nova_compute[233724]: 2025-11-29 07:52:39.121 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:39 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:52:39 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:52:39 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:52:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:39.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:40.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:41 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:52:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:41.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:42 np0005539552 nova_compute[233724]: 2025-11-29 07:52:42.424 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:42.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:43.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:44 np0005539552 nova_compute[233724]: 2025-11-29 07:52:44.165 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:44.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:45.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:46 np0005539552 ovn_controller[133798]: 2025-11-29T07:52:46Z|00055|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 29 02:52:46 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:52:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:46.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:46 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:52:46 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:52:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:47.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:47 np0005539552 nova_compute[233724]: 2025-11-29 07:52:47.472 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:48.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:49 np0005539552 nova_compute[233724]: 2025-11-29 07:52:49.166 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:49.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:50.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:51 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:52:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:51.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:52 np0005539552 nova_compute[233724]: 2025-11-29 07:52:52.475 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:52.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:53.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:53 np0005539552 podman[241890]: 2025-11-29 07:52:53.974519534 +0000 UTC m=+0.061785371 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:52:53 np0005539552 podman[241891]: 2025-11-29 07:52:53.975724877 +0000 UTC m=+0.058350728 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 02:52:54 np0005539552 nova_compute[233724]: 2025-11-29 07:52:54.167 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:52:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:54.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:52:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:55.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:55 np0005539552 podman[241930]: 2025-11-29 07:52:55.98286346 +0000 UTC m=+0.075669929 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 02:52:56 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:52:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:52:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:56.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:52:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:52:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:57.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:52:57 np0005539552 nova_compute[233724]: 2025-11-29 07:52:57.478 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:58.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:59 np0005539552 nova_compute[233724]: 2025-11-29 07:52:59.212 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:52:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:52:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:59.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:00.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:01 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:01.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:02 np0005539552 nova_compute[233724]: 2025-11-29 07:53:02.480 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:02.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:03.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:04 np0005539552 nova_compute[233724]: 2025-11-29 07:53:04.214 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:53:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:04.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:53:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:05.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:06 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:06.403 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:53:06 np0005539552 nova_compute[233724]: 2025-11-29 07:53:06.404 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:06.404 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:53:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:06.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:07.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:07 np0005539552 nova_compute[233724]: 2025-11-29 07:53:07.482 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:08.405 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:53:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:08.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:09 np0005539552 nova_compute[233724]: 2025-11-29 07:53:09.215 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:09.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:10.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:11 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:11.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:12 np0005539552 nova_compute[233724]: 2025-11-29 07:53:12.484 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:12.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:13.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:14 np0005539552 nova_compute[233724]: 2025-11-29 07:53:14.217 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:14.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:15.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:16 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:16.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:17.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:17 np0005539552 nova_compute[233724]: 2025-11-29 07:53:17.487 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:18.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:19 np0005539552 nova_compute[233724]: 2025-11-29 07:53:19.219 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:19.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:19 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:53:19 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3739096484' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:53:19 np0005539552 nova_compute[233724]: 2025-11-29 07:53:19.669 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:53:19 np0005539552 nova_compute[233724]: 2025-11-29 07:53:19.670 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:53:19 np0005539552 nova_compute[233724]: 2025-11-29 07:53:19.670 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:53:19 np0005539552 nova_compute[233724]: 2025-11-29 07:53:19.670 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:53:19 np0005539552 nova_compute[233724]: 2025-11-29 07:53:19.688 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 02:53:19 np0005539552 nova_compute[233724]: 2025-11-29 07:53:19.688 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:53:19 np0005539552 nova_compute[233724]: 2025-11-29 07:53:19.688 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:53:19 np0005539552 nova_compute[233724]: 2025-11-29 07:53:19.689 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:53:19 np0005539552 nova_compute[233724]: 2025-11-29 07:53:19.689 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:53:19 np0005539552 nova_compute[233724]: 2025-11-29 07:53:19.689 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:53:19 np0005539552 nova_compute[233724]: 2025-11-29 07:53:19.690 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:53:19 np0005539552 nova_compute[233724]: 2025-11-29 07:53:19.690 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:53:19 np0005539552 nova_compute[233724]: 2025-11-29 07:53:19.690 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:53:19 np0005539552 nova_compute[233724]: 2025-11-29 07:53:19.717 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:19 np0005539552 nova_compute[233724]: 2025-11-29 07:53:19.717 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:19 np0005539552 nova_compute[233724]: 2025-11-29 07:53:19.717 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:19 np0005539552 nova_compute[233724]: 2025-11-29 07:53:19.717 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:53:19 np0005539552 nova_compute[233724]: 2025-11-29 07:53:19.718 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:53:20 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:53:20 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4124595404' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:53:20 np0005539552 nova_compute[233724]: 2025-11-29 07:53:20.150 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:53:20 np0005539552 nova_compute[233724]: 2025-11-29 07:53:20.283 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:53:20 np0005539552 nova_compute[233724]: 2025-11-29 07:53:20.285 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4980MB free_disk=20.942726135253906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:53:20 np0005539552 nova_compute[233724]: 2025-11-29 07:53:20.285 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:20 np0005539552 nova_compute[233724]: 2025-11-29 07:53:20.285 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:20 np0005539552 nova_compute[233724]: 2025-11-29 07:53:20.354 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:53:20 np0005539552 nova_compute[233724]: 2025-11-29 07:53:20.354 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:53:20 np0005539552 nova_compute[233724]: 2025-11-29 07:53:20.385 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:53:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:20.601 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:20.602 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:20.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:20.602 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:20 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:53:20 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3609572308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:53:20 np0005539552 nova_compute[233724]: 2025-11-29 07:53:20.783 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.398s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:53:20 np0005539552 nova_compute[233724]: 2025-11-29 07:53:20.789 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:53:20 np0005539552 nova_compute[233724]: 2025-11-29 07:53:20.804 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:53:20 np0005539552 nova_compute[233724]: 2025-11-29 07:53:20.806 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:53:20 np0005539552 nova_compute[233724]: 2025-11-29 07:53:20.807 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.521s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:21 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:21.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:22 np0005539552 nova_compute[233724]: 2025-11-29 07:53:22.490 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:22.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:23.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:24 np0005539552 nova_compute[233724]: 2025-11-29 07:53:24.222 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:24.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:24 np0005539552 podman[242071]: 2025-11-29 07:53:24.971432328 +0000 UTC m=+0.059483679 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:53:24 np0005539552 podman[242072]: 2025-11-29 07:53:24.972883337 +0000 UTC m=+0.057366221 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Nov 29 02:53:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:25.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:26 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:26.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:26 np0005539552 podman[242162]: 2025-11-29 07:53:26.991633996 +0000 UTC m=+0.080398708 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:53:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:27.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:27 np0005539552 nova_compute[233724]: 2025-11-29 07:53:27.492 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:28.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:29 np0005539552 nova_compute[233724]: 2025-11-29 07:53:29.224 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:29.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:30.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:30 np0005539552 nova_compute[233724]: 2025-11-29 07:53:30.702 233728 DEBUG oslo_concurrency.lockutils [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Acquiring lock "9bef976c-2981-4d19-aa60-8a550b7093ca" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:30 np0005539552 nova_compute[233724]: 2025-11-29 07:53:30.702 233728 DEBUG oslo_concurrency.lockutils [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Lock "9bef976c-2981-4d19-aa60-8a550b7093ca" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:30 np0005539552 nova_compute[233724]: 2025-11-29 07:53:30.724 233728 DEBUG nova.compute.manager [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:53:30 np0005539552 nova_compute[233724]: 2025-11-29 07:53:30.793 233728 DEBUG oslo_concurrency.lockutils [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:30 np0005539552 nova_compute[233724]: 2025-11-29 07:53:30.793 233728 DEBUG oslo_concurrency.lockutils [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:30 np0005539552 nova_compute[233724]: 2025-11-29 07:53:30.800 233728 DEBUG nova.virt.hardware [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:53:30 np0005539552 nova_compute[233724]: 2025-11-29 07:53:30.800 233728 INFO nova.compute.claims [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 02:53:30 np0005539552 nova_compute[233724]: 2025-11-29 07:53:30.904 233728 DEBUG oslo_concurrency.processutils [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:53:31 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:31 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:53:31 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1930298900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:53:31 np0005539552 nova_compute[233724]: 2025-11-29 07:53:31.343 233728 DEBUG oslo_concurrency.processutils [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:53:31 np0005539552 nova_compute[233724]: 2025-11-29 07:53:31.350 233728 DEBUG nova.compute.provider_tree [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:53:31 np0005539552 nova_compute[233724]: 2025-11-29 07:53:31.382 233728 DEBUG nova.scheduler.client.report [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:53:31 np0005539552 nova_compute[233724]: 2025-11-29 07:53:31.403 233728 DEBUG oslo_concurrency.lockutils [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:31 np0005539552 nova_compute[233724]: 2025-11-29 07:53:31.404 233728 DEBUG nova.compute.manager [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:53:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:31.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:31 np0005539552 nova_compute[233724]: 2025-11-29 07:53:31.463 233728 DEBUG nova.compute.manager [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:53:31 np0005539552 nova_compute[233724]: 2025-11-29 07:53:31.464 233728 DEBUG nova.network.neutron [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:53:31 np0005539552 nova_compute[233724]: 2025-11-29 07:53:31.491 233728 INFO nova.virt.libvirt.driver [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:53:31 np0005539552 nova_compute[233724]: 2025-11-29 07:53:31.507 233728 DEBUG nova.compute.manager [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:53:31 np0005539552 nova_compute[233724]: 2025-11-29 07:53:31.604 233728 DEBUG nova.compute.manager [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:53:31 np0005539552 nova_compute[233724]: 2025-11-29 07:53:31.605 233728 DEBUG nova.virt.libvirt.driver [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:53:31 np0005539552 nova_compute[233724]: 2025-11-29 07:53:31.605 233728 INFO nova.virt.libvirt.driver [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Creating image(s)#033[00m
Nov 29 02:53:31 np0005539552 nova_compute[233724]: 2025-11-29 07:53:31.633 233728 DEBUG nova.storage.rbd_utils [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] rbd image 9bef976c-2981-4d19-aa60-8a550b7093ca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:53:31 np0005539552 nova_compute[233724]: 2025-11-29 07:53:31.660 233728 DEBUG nova.storage.rbd_utils [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] rbd image 9bef976c-2981-4d19-aa60-8a550b7093ca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:53:31 np0005539552 nova_compute[233724]: 2025-11-29 07:53:31.686 233728 DEBUG nova.storage.rbd_utils [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] rbd image 9bef976c-2981-4d19-aa60-8a550b7093ca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:53:31 np0005539552 nova_compute[233724]: 2025-11-29 07:53:31.691 233728 DEBUG oslo_concurrency.processutils [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:53:31 np0005539552 nova_compute[233724]: 2025-11-29 07:53:31.745 233728 DEBUG oslo_concurrency.processutils [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:53:31 np0005539552 nova_compute[233724]: 2025-11-29 07:53:31.747 233728 DEBUG oslo_concurrency.lockutils [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:31 np0005539552 nova_compute[233724]: 2025-11-29 07:53:31.748 233728 DEBUG oslo_concurrency.lockutils [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:31 np0005539552 nova_compute[233724]: 2025-11-29 07:53:31.749 233728 DEBUG oslo_concurrency.lockutils [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:31 np0005539552 nova_compute[233724]: 2025-11-29 07:53:31.773 233728 DEBUG nova.storage.rbd_utils [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] rbd image 9bef976c-2981-4d19-aa60-8a550b7093ca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:53:31 np0005539552 nova_compute[233724]: 2025-11-29 07:53:31.776 233728 DEBUG oslo_concurrency.processutils [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 9bef976c-2981-4d19-aa60-8a550b7093ca_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:53:31 np0005539552 nova_compute[233724]: 2025-11-29 07:53:31.959 233728 DEBUG nova.policy [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b8f5b14bc98a47f29238140d1d3f1220', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f91d373d1ef64146866ef08735a75efa', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 02:53:32 np0005539552 nova_compute[233724]: 2025-11-29 07:53:32.495 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:32.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:33.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:33 np0005539552 nova_compute[233724]: 2025-11-29 07:53:33.838 233728 DEBUG nova.network.neutron [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Successfully updated port: 384b014a-c4e8-4d83-a8d1-09e70342722f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 02:53:33 np0005539552 nova_compute[233724]: 2025-11-29 07:53:33.854 233728 DEBUG oslo_concurrency.lockutils [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Acquiring lock "refresh_cache-9bef976c-2981-4d19-aa60-8a550b7093ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:53:33 np0005539552 nova_compute[233724]: 2025-11-29 07:53:33.854 233728 DEBUG oslo_concurrency.lockutils [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Acquired lock "refresh_cache-9bef976c-2981-4d19-aa60-8a550b7093ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:53:33 np0005539552 nova_compute[233724]: 2025-11-29 07:53:33.854 233728 DEBUG nova.network.neutron [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:53:33 np0005539552 nova_compute[233724]: 2025-11-29 07:53:33.930 233728 DEBUG nova.compute.manager [req-bec7b5b9-857d-455b-aae1-4848da60e1c7 req-8726eb98-a273-4241-b698-abc4080abcbd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Received event network-changed-384b014a-c4e8-4d83-a8d1-09e70342722f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:53:33 np0005539552 nova_compute[233724]: 2025-11-29 07:53:33.930 233728 DEBUG nova.compute.manager [req-bec7b5b9-857d-455b-aae1-4848da60e1c7 req-8726eb98-a273-4241-b698-abc4080abcbd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Refreshing instance network info cache due to event network-changed-384b014a-c4e8-4d83-a8d1-09e70342722f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:53:33 np0005539552 nova_compute[233724]: 2025-11-29 07:53:33.930 233728 DEBUG oslo_concurrency.lockutils [req-bec7b5b9-857d-455b-aae1-4848da60e1c7 req-8726eb98-a273-4241-b698-abc4080abcbd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-9bef976c-2981-4d19-aa60-8a550b7093ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:53:34 np0005539552 nova_compute[233724]: 2025-11-29 07:53:34.007 233728 DEBUG oslo_concurrency.processutils [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 9bef976c-2981-4d19-aa60-8a550b7093ca_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.230s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:53:34 np0005539552 nova_compute[233724]: 2025-11-29 07:53:34.040 233728 DEBUG nova.network.neutron [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:53:34 np0005539552 nova_compute[233724]: 2025-11-29 07:53:34.082 233728 DEBUG nova.storage.rbd_utils [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] resizing rbd image 9bef976c-2981-4d19-aa60-8a550b7093ca_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:53:34 np0005539552 nova_compute[233724]: 2025-11-29 07:53:34.252 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:34 np0005539552 nova_compute[233724]: 2025-11-29 07:53:34.259 233728 DEBUG nova.objects.instance [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Lazy-loading 'migration_context' on Instance uuid 9bef976c-2981-4d19-aa60-8a550b7093ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:53:34 np0005539552 nova_compute[233724]: 2025-11-29 07:53:34.273 233728 DEBUG nova.virt.libvirt.driver [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:53:34 np0005539552 nova_compute[233724]: 2025-11-29 07:53:34.274 233728 DEBUG nova.virt.libvirt.driver [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Ensure instance console log exists: /var/lib/nova/instances/9bef976c-2981-4d19-aa60-8a550b7093ca/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:53:34 np0005539552 nova_compute[233724]: 2025-11-29 07:53:34.274 233728 DEBUG oslo_concurrency.lockutils [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:34 np0005539552 nova_compute[233724]: 2025-11-29 07:53:34.274 233728 DEBUG oslo_concurrency.lockutils [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:34 np0005539552 nova_compute[233724]: 2025-11-29 07:53:34.275 233728 DEBUG oslo_concurrency.lockutils [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:34.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:35 np0005539552 nova_compute[233724]: 2025-11-29 07:53:35.164 233728 DEBUG nova.network.neutron [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Updating instance_info_cache with network_info: [{"id": "384b014a-c4e8-4d83-a8d1-09e70342722f", "address": "fa:16:3e:a8:74:d4", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap384b014a-c4", "ovs_interfaceid": "384b014a-c4e8-4d83-a8d1-09e70342722f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:53:35 np0005539552 nova_compute[233724]: 2025-11-29 07:53:35.188 233728 DEBUG oslo_concurrency.lockutils [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Releasing lock "refresh_cache-9bef976c-2981-4d19-aa60-8a550b7093ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:53:35 np0005539552 nova_compute[233724]: 2025-11-29 07:53:35.189 233728 DEBUG nova.compute.manager [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Instance network_info: |[{"id": "384b014a-c4e8-4d83-a8d1-09e70342722f", "address": "fa:16:3e:a8:74:d4", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap384b014a-c4", "ovs_interfaceid": "384b014a-c4e8-4d83-a8d1-09e70342722f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:53:35 np0005539552 nova_compute[233724]: 2025-11-29 07:53:35.189 233728 DEBUG oslo_concurrency.lockutils [req-bec7b5b9-857d-455b-aae1-4848da60e1c7 req-8726eb98-a273-4241-b698-abc4080abcbd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-9bef976c-2981-4d19-aa60-8a550b7093ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:53:35 np0005539552 nova_compute[233724]: 2025-11-29 07:53:35.189 233728 DEBUG nova.network.neutron [req-bec7b5b9-857d-455b-aae1-4848da60e1c7 req-8726eb98-a273-4241-b698-abc4080abcbd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Refreshing network info cache for port 384b014a-c4e8-4d83-a8d1-09e70342722f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:53:35 np0005539552 nova_compute[233724]: 2025-11-29 07:53:35.192 233728 DEBUG nova.virt.libvirt.driver [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Start _get_guest_xml network_info=[{"id": "384b014a-c4e8-4d83-a8d1-09e70342722f", "address": "fa:16:3e:a8:74:d4", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap384b014a-c4", "ovs_interfaceid": "384b014a-c4e8-4d83-a8d1-09e70342722f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:53:35 np0005539552 nova_compute[233724]: 2025-11-29 07:53:35.196 233728 WARNING nova.virt.libvirt.driver [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:53:35 np0005539552 nova_compute[233724]: 2025-11-29 07:53:35.200 233728 DEBUG nova.virt.libvirt.host [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:53:35 np0005539552 nova_compute[233724]: 2025-11-29 07:53:35.201 233728 DEBUG nova.virt.libvirt.host [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:53:35 np0005539552 nova_compute[233724]: 2025-11-29 07:53:35.203 233728 DEBUG nova.virt.libvirt.host [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:53:35 np0005539552 nova_compute[233724]: 2025-11-29 07:53:35.204 233728 DEBUG nova.virt.libvirt.host [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:53:35 np0005539552 nova_compute[233724]: 2025-11-29 07:53:35.205 233728 DEBUG nova.virt.libvirt.driver [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:53:35 np0005539552 nova_compute[233724]: 2025-11-29 07:53:35.205 233728 DEBUG nova.virt.hardware [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:53:35 np0005539552 nova_compute[233724]: 2025-11-29 07:53:35.206 233728 DEBUG nova.virt.hardware [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:53:35 np0005539552 nova_compute[233724]: 2025-11-29 07:53:35.206 233728 DEBUG nova.virt.hardware [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:53:35 np0005539552 nova_compute[233724]: 2025-11-29 07:53:35.206 233728 DEBUG nova.virt.hardware [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:53:35 np0005539552 nova_compute[233724]: 2025-11-29 07:53:35.207 233728 DEBUG nova.virt.hardware [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:53:35 np0005539552 nova_compute[233724]: 2025-11-29 07:53:35.207 233728 DEBUG nova.virt.hardware [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:53:35 np0005539552 nova_compute[233724]: 2025-11-29 07:53:35.207 233728 DEBUG nova.virt.hardware [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:53:35 np0005539552 nova_compute[233724]: 2025-11-29 07:53:35.207 233728 DEBUG nova.virt.hardware [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:53:35 np0005539552 nova_compute[233724]: 2025-11-29 07:53:35.208 233728 DEBUG nova.virt.hardware [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:53:35 np0005539552 nova_compute[233724]: 2025-11-29 07:53:35.208 233728 DEBUG nova.virt.hardware [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:53:35 np0005539552 nova_compute[233724]: 2025-11-29 07:53:35.208 233728 DEBUG nova.virt.hardware [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:53:35 np0005539552 nova_compute[233724]: 2025-11-29 07:53:35.211 233728 DEBUG oslo_concurrency.processutils [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:53:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:35.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:35 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:53:35 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3732747710' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:53:35 np0005539552 nova_compute[233724]: 2025-11-29 07:53:35.748 233728 DEBUG oslo_concurrency.processutils [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:53:35 np0005539552 nova_compute[233724]: 2025-11-29 07:53:35.776 233728 DEBUG nova.storage.rbd_utils [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] rbd image 9bef976c-2981-4d19-aa60-8a550b7093ca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:53:35 np0005539552 nova_compute[233724]: 2025-11-29 07:53:35.780 233728 DEBUG oslo_concurrency.processutils [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:53:36 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:53:36 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/360441259' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:53:36 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:36 np0005539552 nova_compute[233724]: 2025-11-29 07:53:36.312 233728 DEBUG oslo_concurrency.processutils [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:53:36 np0005539552 nova_compute[233724]: 2025-11-29 07:53:36.314 233728 DEBUG nova.virt.libvirt.vif [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:53:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-1241596333',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-1241596333',id=16,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f91d373d1ef64146866ef08735a75efa',ramdisk_id='',reservation_id='r-ggznma7o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1482931553',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-148293
1553-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:53:31Z,user_data=None,user_id='b8f5b14bc98a47f29238140d1d3f1220',uuid=9bef976c-2981-4d19-aa60-8a550b7093ca,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "384b014a-c4e8-4d83-a8d1-09e70342722f", "address": "fa:16:3e:a8:74:d4", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap384b014a-c4", "ovs_interfaceid": "384b014a-c4e8-4d83-a8d1-09e70342722f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:53:36 np0005539552 nova_compute[233724]: 2025-11-29 07:53:36.314 233728 DEBUG nova.network.os_vif_util [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Converting VIF {"id": "384b014a-c4e8-4d83-a8d1-09e70342722f", "address": "fa:16:3e:a8:74:d4", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap384b014a-c4", "ovs_interfaceid": "384b014a-c4e8-4d83-a8d1-09e70342722f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:53:36 np0005539552 nova_compute[233724]: 2025-11-29 07:53:36.316 233728 DEBUG nova.network.os_vif_util [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:74:d4,bridge_name='br-int',has_traffic_filtering=True,id=384b014a-c4e8-4d83-a8d1-09e70342722f,network=Network(ad69a0f4-0000-474b-9649-72cf1bf9f5c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap384b014a-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:53:36 np0005539552 nova_compute[233724]: 2025-11-29 07:53:36.317 233728 DEBUG nova.objects.instance [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Lazy-loading 'pci_devices' on Instance uuid 9bef976c-2981-4d19-aa60-8a550b7093ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:53:36 np0005539552 nova_compute[233724]: 2025-11-29 07:53:36.331 233728 DEBUG nova.virt.libvirt.driver [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:53:36 np0005539552 nova_compute[233724]:  <uuid>9bef976c-2981-4d19-aa60-8a550b7093ca</uuid>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:  <name>instance-00000010</name>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:53:36 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:      <nova:name>tempest-LiveAutoBlockMigrationV225Test-server-1241596333</nova:name>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 07:53:35</nova:creationTime>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 02:53:36 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:        <nova:user uuid="b8f5b14bc98a47f29238140d1d3f1220">tempest-LiveAutoBlockMigrationV225Test-1482931553-project-member</nova:user>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:        <nova:project uuid="f91d373d1ef64146866ef08735a75efa">tempest-LiveAutoBlockMigrationV225Test-1482931553</nova:project>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:        <nova:port uuid="384b014a-c4e8-4d83-a8d1-09e70342722f">
Nov 29 02:53:36 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    <system>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:      <entry name="serial">9bef976c-2981-4d19-aa60-8a550b7093ca</entry>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:      <entry name="uuid">9bef976c-2981-4d19-aa60-8a550b7093ca</entry>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    </system>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:  <os>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:  </os>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:  <features>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:  </features>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:  </clock>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:  <devices>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 02:53:36 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/9bef976c-2981-4d19-aa60-8a550b7093ca_disk">
Nov 29 02:53:36 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:      </source>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 02:53:36 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:      </auth>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    </disk>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 02:53:36 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/9bef976c-2981-4d19-aa60-8a550b7093ca_disk.config">
Nov 29 02:53:36 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:      </source>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 02:53:36 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:      </auth>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    </disk>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 02:53:36 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:a8:74:d4"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:      <target dev="tap384b014a-c4"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    </interface>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 02:53:36 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/9bef976c-2981-4d19-aa60-8a550b7093ca/console.log" append="off"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    </serial>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    <video>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    </video>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 02:53:36 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    </rng>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 02:53:36 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 02:53:36 np0005539552 nova_compute[233724]:  </devices>
Nov 29 02:53:36 np0005539552 nova_compute[233724]: </domain>
Nov 29 02:53:36 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:53:36 np0005539552 nova_compute[233724]: 2025-11-29 07:53:36.333 233728 DEBUG nova.compute.manager [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Preparing to wait for external event network-vif-plugged-384b014a-c4e8-4d83-a8d1-09e70342722f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 02:53:36 np0005539552 nova_compute[233724]: 2025-11-29 07:53:36.333 233728 DEBUG oslo_concurrency.lockutils [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Acquiring lock "9bef976c-2981-4d19-aa60-8a550b7093ca-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:36 np0005539552 nova_compute[233724]: 2025-11-29 07:53:36.333 233728 DEBUG oslo_concurrency.lockutils [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Lock "9bef976c-2981-4d19-aa60-8a550b7093ca-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:36 np0005539552 nova_compute[233724]: 2025-11-29 07:53:36.333 233728 DEBUG oslo_concurrency.lockutils [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Lock "9bef976c-2981-4d19-aa60-8a550b7093ca-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:36 np0005539552 nova_compute[233724]: 2025-11-29 07:53:36.334 233728 DEBUG nova.virt.libvirt.vif [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:53:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-1241596333',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-1241596333',id=16,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f91d373d1ef64146866ef08735a75efa',ramdisk_id='',reservation_id='r-ggznma7o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1482931553',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1482931553-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:53:31Z,user_data=None,user_id='b8f5b14bc98a47f29238140d1d3f1220',uuid=9bef976c-2981-4d19-aa60-8a550b7093ca,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "384b014a-c4e8-4d83-a8d1-09e70342722f", "address": "fa:16:3e:a8:74:d4", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap384b014a-c4", "ovs_interfaceid": "384b014a-c4e8-4d83-a8d1-09e70342722f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:53:36 np0005539552 nova_compute[233724]: 2025-11-29 07:53:36.334 233728 DEBUG nova.network.os_vif_util [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Converting VIF {"id": "384b014a-c4e8-4d83-a8d1-09e70342722f", "address": "fa:16:3e:a8:74:d4", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap384b014a-c4", "ovs_interfaceid": "384b014a-c4e8-4d83-a8d1-09e70342722f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:53:36 np0005539552 nova_compute[233724]: 2025-11-29 07:53:36.335 233728 DEBUG nova.network.os_vif_util [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:74:d4,bridge_name='br-int',has_traffic_filtering=True,id=384b014a-c4e8-4d83-a8d1-09e70342722f,network=Network(ad69a0f4-0000-474b-9649-72cf1bf9f5c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap384b014a-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:53:36 np0005539552 nova_compute[233724]: 2025-11-29 07:53:36.335 233728 DEBUG os_vif [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:74:d4,bridge_name='br-int',has_traffic_filtering=True,id=384b014a-c4e8-4d83-a8d1-09e70342722f,network=Network(ad69a0f4-0000-474b-9649-72cf1bf9f5c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap384b014a-c4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:53:36 np0005539552 nova_compute[233724]: 2025-11-29 07:53:36.336 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:36 np0005539552 nova_compute[233724]: 2025-11-29 07:53:36.336 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:53:36 np0005539552 nova_compute[233724]: 2025-11-29 07:53:36.337 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:53:36 np0005539552 nova_compute[233724]: 2025-11-29 07:53:36.342 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:36 np0005539552 nova_compute[233724]: 2025-11-29 07:53:36.343 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap384b014a-c4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:53:36 np0005539552 nova_compute[233724]: 2025-11-29 07:53:36.343 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap384b014a-c4, col_values=(('external_ids', {'iface-id': '384b014a-c4e8-4d83-a8d1-09e70342722f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a8:74:d4', 'vm-uuid': '9bef976c-2981-4d19-aa60-8a550b7093ca'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:53:36 np0005539552 nova_compute[233724]: 2025-11-29 07:53:36.344 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:36 np0005539552 NetworkManager[48926]: <info>  [1764402816.3463] manager: (tap384b014a-c4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Nov 29 02:53:36 np0005539552 nova_compute[233724]: 2025-11-29 07:53:36.347 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:53:36 np0005539552 nova_compute[233724]: 2025-11-29 07:53:36.350 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:36 np0005539552 nova_compute[233724]: 2025-11-29 07:53:36.351 233728 INFO os_vif [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:74:d4,bridge_name='br-int',has_traffic_filtering=True,id=384b014a-c4e8-4d83-a8d1-09e70342722f,network=Network(ad69a0f4-0000-474b-9649-72cf1bf9f5c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap384b014a-c4')#033[00m
Nov 29 02:53:36 np0005539552 nova_compute[233724]: 2025-11-29 07:53:36.391 233728 DEBUG nova.virt.libvirt.driver [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:53:36 np0005539552 nova_compute[233724]: 2025-11-29 07:53:36.392 233728 DEBUG nova.virt.libvirt.driver [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:53:36 np0005539552 nova_compute[233724]: 2025-11-29 07:53:36.392 233728 DEBUG nova.virt.libvirt.driver [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] No VIF found with MAC fa:16:3e:a8:74:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 02:53:36 np0005539552 nova_compute[233724]: 2025-11-29 07:53:36.393 233728 INFO nova.virt.libvirt.driver [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Using config drive#033[00m
Nov 29 02:53:36 np0005539552 nova_compute[233724]: 2025-11-29 07:53:36.422 233728 DEBUG nova.storage.rbd_utils [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] rbd image 9bef976c-2981-4d19-aa60-8a550b7093ca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:53:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:36.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:37 np0005539552 nova_compute[233724]: 2025-11-29 07:53:37.194 233728 INFO nova.virt.libvirt.driver [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Creating config drive at /var/lib/nova/instances/9bef976c-2981-4d19-aa60-8a550b7093ca/disk.config#033[00m
Nov 29 02:53:37 np0005539552 nova_compute[233724]: 2025-11-29 07:53:37.200 233728 DEBUG oslo_concurrency.processutils [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9bef976c-2981-4d19-aa60-8a550b7093ca/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwicftrgu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:53:37 np0005539552 nova_compute[233724]: 2025-11-29 07:53:37.326 233728 DEBUG oslo_concurrency.processutils [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9bef976c-2981-4d19-aa60-8a550b7093ca/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwicftrgu" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:53:37 np0005539552 nova_compute[233724]: 2025-11-29 07:53:37.353 233728 DEBUG nova.storage.rbd_utils [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] rbd image 9bef976c-2981-4d19-aa60-8a550b7093ca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:53:37 np0005539552 nova_compute[233724]: 2025-11-29 07:53:37.357 233728 DEBUG oslo_concurrency.processutils [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9bef976c-2981-4d19-aa60-8a550b7093ca/disk.config 9bef976c-2981-4d19-aa60-8a550b7093ca_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:53:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:37.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:37 np0005539552 nova_compute[233724]: 2025-11-29 07:53:37.624 233728 DEBUG oslo_concurrency.processutils [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9bef976c-2981-4d19-aa60-8a550b7093ca/disk.config 9bef976c-2981-4d19-aa60-8a550b7093ca_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.268s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:53:37 np0005539552 nova_compute[233724]: 2025-11-29 07:53:37.625 233728 INFO nova.virt.libvirt.driver [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Deleting local config drive /var/lib/nova/instances/9bef976c-2981-4d19-aa60-8a550b7093ca/disk.config because it was imported into RBD.#033[00m
Nov 29 02:53:37 np0005539552 systemd[1]: Starting libvirt secret daemon...
Nov 29 02:53:37 np0005539552 systemd[1]: Started libvirt secret daemon.
Nov 29 02:53:37 np0005539552 kernel: tap384b014a-c4: entered promiscuous mode
Nov 29 02:53:37 np0005539552 NetworkManager[48926]: <info>  [1764402817.7067] manager: (tap384b014a-c4): new Tun device (/org/freedesktop/NetworkManager/Devices/37)
Nov 29 02:53:37 np0005539552 ovn_controller[133798]: 2025-11-29T07:53:37Z|00056|binding|INFO|Claiming lport 384b014a-c4e8-4d83-a8d1-09e70342722f for this chassis.
Nov 29 02:53:37 np0005539552 ovn_controller[133798]: 2025-11-29T07:53:37Z|00057|binding|INFO|384b014a-c4e8-4d83-a8d1-09e70342722f: Claiming fa:16:3e:a8:74:d4 10.100.0.6
Nov 29 02:53:37 np0005539552 ovn_controller[133798]: 2025-11-29T07:53:37Z|00058|binding|INFO|Claiming lport f3deccbd-dd81-439e-9ba4-ebc80268aa7a for this chassis.
Nov 29 02:53:37 np0005539552 ovn_controller[133798]: 2025-11-29T07:53:37Z|00059|binding|INFO|f3deccbd-dd81-439e-9ba4-ebc80268aa7a: Claiming fa:16:3e:d7:b4:cf 19.80.0.168
Nov 29 02:53:37 np0005539552 nova_compute[233724]: 2025-11-29 07:53:37.707 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:37 np0005539552 nova_compute[233724]: 2025-11-29 07:53:37.716 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:37.724 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:74:d4 10.100.0.6'], port_security=['fa:16:3e:a8:74:d4 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1819185199', 'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '9bef976c-2981-4d19-aa60-8a550b7093ca', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ad69a0f4-0000-474b-9649-72cf1bf9f5c1', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1819185199', 'neutron:project_id': 'f91d373d1ef64146866ef08735a75efa', 'neutron:revision_number': '2', 'neutron:security_group_ids': '394eda18-2fbd-4f97-9713-003068aad79a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=19139b07-e3dc-4118-93d3-d7c140077f4d, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=384b014a-c4e8-4d83-a8d1-09e70342722f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:53:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:37.726 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:b4:cf 19.80.0.168'], port_security=['fa:16:3e:d7:b4:cf 19.80.0.168'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['384b014a-c4e8-4d83-a8d1-09e70342722f'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-367077291', 'neutron:cidrs': '19.80.0.168/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-54dd896a-93d6-4056-93b9-fe4c87eb0b97', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-367077291', 'neutron:project_id': 'f91d373d1ef64146866ef08735a75efa', 'neutron:revision_number': '2', 'neutron:security_group_ids': '394eda18-2fbd-4f97-9713-003068aad79a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=86c926f4-a652-4447-9b71-a6da44a90627, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=f3deccbd-dd81-439e-9ba4-ebc80268aa7a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:53:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:37.727 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 384b014a-c4e8-4d83-a8d1-09e70342722f in datapath ad69a0f4-0000-474b-9649-72cf1bf9f5c1 bound to our chassis#033[00m
Nov 29 02:53:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:37.729 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ad69a0f4-0000-474b-9649-72cf1bf9f5c1#033[00m
Nov 29 02:53:37 np0005539552 systemd-machined[196379]: New machine qemu-3-instance-00000010.
Nov 29 02:53:37 np0005539552 systemd-udevd[242536]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:53:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:37.741 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[298115c0-84a8-4087-bbbd-53ebdcfe647e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:37.742 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapad69a0f4-01 in ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 02:53:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:37.744 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapad69a0f4-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 02:53:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:37.744 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[1a04fdc2-cbc8-4b0a-aec3-6150a70ca503]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:37.745 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[21b45c1b-4eb9-4b65-bf0d-c5642f009cf8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:37 np0005539552 systemd[1]: Started Virtual Machine qemu-3-instance-00000010.
Nov 29 02:53:37 np0005539552 NetworkManager[48926]: <info>  [1764402817.7558] device (tap384b014a-c4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:53:37 np0005539552 NetworkManager[48926]: <info>  [1764402817.7567] device (tap384b014a-c4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:53:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:37.758 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[a2f76163-5502-4912-b355-a1375206b553]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:37 np0005539552 nova_compute[233724]: 2025-11-29 07:53:37.759 233728 DEBUG nova.network.neutron [req-bec7b5b9-857d-455b-aae1-4848da60e1c7 req-8726eb98-a273-4241-b698-abc4080abcbd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Updated VIF entry in instance network info cache for port 384b014a-c4e8-4d83-a8d1-09e70342722f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:53:37 np0005539552 nova_compute[233724]: 2025-11-29 07:53:37.759 233728 DEBUG nova.network.neutron [req-bec7b5b9-857d-455b-aae1-4848da60e1c7 req-8726eb98-a273-4241-b698-abc4080abcbd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Updating instance_info_cache with network_info: [{"id": "384b014a-c4e8-4d83-a8d1-09e70342722f", "address": "fa:16:3e:a8:74:d4", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap384b014a-c4", "ovs_interfaceid": "384b014a-c4e8-4d83-a8d1-09e70342722f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:53:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:37.782 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[71516052-1a1d-4f36-97b1-2312b8c23753]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:37 np0005539552 nova_compute[233724]: 2025-11-29 07:53:37.784 233728 DEBUG oslo_concurrency.lockutils [req-bec7b5b9-857d-455b-aae1-4848da60e1c7 req-8726eb98-a273-4241-b698-abc4080abcbd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-9bef976c-2981-4d19-aa60-8a550b7093ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:53:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:37.811 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[571a42ac-722f-4d45-9e1c-b86eab55ddb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:37 np0005539552 systemd-udevd[242539]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:53:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:37.818 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6d5683e7-3889-4177-a8c2-af2737dfa4fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:37 np0005539552 NetworkManager[48926]: <info>  [1764402817.8258] manager: (tapad69a0f4-00): new Veth device (/org/freedesktop/NetworkManager/Devices/38)
Nov 29 02:53:37 np0005539552 ovn_controller[133798]: 2025-11-29T07:53:37Z|00060|binding|INFO|Setting lport 384b014a-c4e8-4d83-a8d1-09e70342722f ovn-installed in OVS
Nov 29 02:53:37 np0005539552 ovn_controller[133798]: 2025-11-29T07:53:37Z|00061|binding|INFO|Setting lport 384b014a-c4e8-4d83-a8d1-09e70342722f up in Southbound
Nov 29 02:53:37 np0005539552 ovn_controller[133798]: 2025-11-29T07:53:37Z|00062|binding|INFO|Setting lport f3deccbd-dd81-439e-9ba4-ebc80268aa7a up in Southbound
Nov 29 02:53:37 np0005539552 nova_compute[233724]: 2025-11-29 07:53:37.828 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:37.844 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[9fadc21c-5778-468a-a774-84b5fcab8649]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:37.848 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[56a8d40f-ecfd-46eb-b238-bf8a6e928451]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:37 np0005539552 NetworkManager[48926]: <info>  [1764402817.8683] device (tapad69a0f4-00): carrier: link connected
Nov 29 02:53:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:37.872 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[46eacaa0-64cd-47f5-8805-900f63d56890]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:37.886 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[1fd8c383-b4cd-4577-912a-192906208d90]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapad69a0f4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b5:a1:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586355, 'reachable_time': 24280, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 242568, 'error': None, 'target': 'ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:37.899 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[7d0f19dc-de00-417e-8db6-df9600c025b4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb5:a12d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586355, 'tstamp': 586355}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242569, 'error': None, 'target': 'ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:37.915 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[bda3d8ab-1237-40fe-be39-7d825eeecc29]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapad69a0f4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b5:a1:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586355, 'reachable_time': 24280, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 242570, 'error': None, 'target': 'ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:37.942 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b112f6fc-3251-4ae5-a4ca-667ffff6aab6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:37.995 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[7feb3648-83e6-4366-86c1-3475e1d1b468]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:37.996 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapad69a0f4-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:53:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:37.996 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:53:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:37.996 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapad69a0f4-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:53:37 np0005539552 nova_compute[233724]: 2025-11-29 07:53:37.998 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:37 np0005539552 NetworkManager[48926]: <info>  [1764402817.9987] manager: (tapad69a0f4-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Nov 29 02:53:37 np0005539552 kernel: tapad69a0f4-00: entered promiscuous mode
Nov 29 02:53:38 np0005539552 nova_compute[233724]: 2025-11-29 07:53:38.000 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:38.001 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapad69a0f4-00, col_values=(('external_ids', {'iface-id': '7ffec560-b868-40db-af88-b0deaaa81f65'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:53:38 np0005539552 nova_compute[233724]: 2025-11-29 07:53:38.002 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:38 np0005539552 ovn_controller[133798]: 2025-11-29T07:53:38Z|00063|binding|INFO|Releasing lport 7ffec560-b868-40db-af88-b0deaaa81f65 from this chassis (sb_readonly=0)
Nov 29 02:53:38 np0005539552 nova_compute[233724]: 2025-11-29 07:53:38.015 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:38 np0005539552 nova_compute[233724]: 2025-11-29 07:53:38.016 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:38.017 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ad69a0f4-0000-474b-9649-72cf1bf9f5c1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ad69a0f4-0000-474b-9649-72cf1bf9f5c1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:38.018 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0c39c8c2-eab8-4ade-9a8a-04a882d5b113]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:38.018 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-ad69a0f4-0000-474b-9649-72cf1bf9f5c1
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/ad69a0f4-0000-474b-9649-72cf1bf9f5c1.pid.haproxy
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID ad69a0f4-0000-474b-9649-72cf1bf9f5c1
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:38.019 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1', 'env', 'PROCESS_TAG=haproxy-ad69a0f4-0000-474b-9649-72cf1bf9f5c1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ad69a0f4-0000-474b-9649-72cf1bf9f5c1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 02:53:38 np0005539552 nova_compute[233724]: 2025-11-29 07:53:38.091 233728 DEBUG nova.compute.manager [req-331a25cc-12f9-44ab-97ec-0596dbc17783 req-1b4cbcd8-74c4-45a2-87f3-b73dfd722c9f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Received event network-vif-plugged-384b014a-c4e8-4d83-a8d1-09e70342722f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:53:38 np0005539552 nova_compute[233724]: 2025-11-29 07:53:38.092 233728 DEBUG oslo_concurrency.lockutils [req-331a25cc-12f9-44ab-97ec-0596dbc17783 req-1b4cbcd8-74c4-45a2-87f3-b73dfd722c9f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9bef976c-2981-4d19-aa60-8a550b7093ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:38 np0005539552 nova_compute[233724]: 2025-11-29 07:53:38.092 233728 DEBUG oslo_concurrency.lockutils [req-331a25cc-12f9-44ab-97ec-0596dbc17783 req-1b4cbcd8-74c4-45a2-87f3-b73dfd722c9f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9bef976c-2981-4d19-aa60-8a550b7093ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:38 np0005539552 nova_compute[233724]: 2025-11-29 07:53:38.092 233728 DEBUG oslo_concurrency.lockutils [req-331a25cc-12f9-44ab-97ec-0596dbc17783 req-1b4cbcd8-74c4-45a2-87f3-b73dfd722c9f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9bef976c-2981-4d19-aa60-8a550b7093ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:38 np0005539552 nova_compute[233724]: 2025-11-29 07:53:38.092 233728 DEBUG nova.compute.manager [req-331a25cc-12f9-44ab-97ec-0596dbc17783 req-1b4cbcd8-74c4-45a2-87f3-b73dfd722c9f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Processing event network-vif-plugged-384b014a-c4e8-4d83-a8d1-09e70342722f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 02:53:38 np0005539552 podman[242601]: 2025-11-29 07:53:38.371310723 +0000 UTC m=+0.049258871 container create 7663e6de3ec63654a56a3bbbd0a75a3e059bbccd5559681c5bfcdcca93d6ec5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 02:53:38 np0005539552 systemd[1]: Started libpod-conmon-7663e6de3ec63654a56a3bbbd0a75a3e059bbccd5559681c5bfcdcca93d6ec5e.scope.
Nov 29 02:53:38 np0005539552 podman[242601]: 2025-11-29 07:53:38.345281865 +0000 UTC m=+0.023230033 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:53:38 np0005539552 systemd[1]: Started libcrun container.
Nov 29 02:53:38 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7689690edfdd0776a1454d4571e25b15bcd6bb6e79e4a380147f341e56290c75/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:53:38 np0005539552 podman[242601]: 2025-11-29 07:53:38.464621512 +0000 UTC m=+0.142569690 container init 7663e6de3ec63654a56a3bbbd0a75a3e059bbccd5559681c5bfcdcca93d6ec5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 02:53:38 np0005539552 podman[242601]: 2025-11-29 07:53:38.470477191 +0000 UTC m=+0.148425339 container start 7663e6de3ec63654a56a3bbbd0a75a3e059bbccd5559681c5bfcdcca93d6ec5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 02:53:38 np0005539552 neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1[242616]: [NOTICE]   (242620) : New worker (242622) forked
Nov 29 02:53:38 np0005539552 neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1[242616]: [NOTICE]   (242620) : Loading success.
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:38.537 143400 INFO neutron.agent.ovn.metadata.agent [-] Port f3deccbd-dd81-439e-9ba4-ebc80268aa7a in datapath 54dd896a-93d6-4056-93b9-fe4c87eb0b97 unbound from our chassis#033[00m
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:38.540 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 54dd896a-93d6-4056-93b9-fe4c87eb0b97#033[00m
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:38.552 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[db0accde-5add-4f3f-a5a5-f505d646f1a6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:38.553 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap54dd896a-91 in ovnmeta-54dd896a-93d6-4056-93b9-fe4c87eb0b97 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:38.555 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap54dd896a-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:38.555 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[30183c82-e46a-4a08-8e9b-5645c3d4c0a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:38.556 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e27a0ead-8c24-4cc3-a4e1-771f0cf76166]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:38.571 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[6bbffe27-bed7-4df6-89e4-cd78a500607f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:38.585 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[56cbb932-5d59-41d8-96ec-4f2b8dc052b2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:38.609 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[e5cbb434-a414-4b65-a885-ebeb6947f11d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:38.615 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[aa4f584f-d540-4bb6-aa5a-8b64d29701e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:38 np0005539552 NetworkManager[48926]: <info>  [1764402818.6161] manager: (tap54dd896a-90): new Veth device (/org/freedesktop/NetworkManager/Devices/40)
Nov 29 02:53:38 np0005539552 systemd-udevd[242551]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:53:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:38.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:38.652 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[39403cb9-5b1f-4306-be4f-0036fa42b6e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:38.656 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[3f326aec-2c8b-4b56-8269-064633e8d6bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:38 np0005539552 NetworkManager[48926]: <info>  [1764402818.6771] device (tap54dd896a-90): carrier: link connected
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:38.681 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[04a32118-cadd-4545-9de9-493cf4028370]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:38.700 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f7a6b8a6-a32d-4ec8-b9e0-fb115e7bf3ab]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap54dd896a-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:70:bb:ff'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 20], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586436, 'reachable_time': 15912, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 242643, 'error': None, 'target': 'ovnmeta-54dd896a-93d6-4056-93b9-fe4c87eb0b97', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:38.715 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[42ff31ba-8232-4d00-84b3-97f4377be6ef]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe70:bbff'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586436, 'tstamp': 586436}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242644, 'error': None, 'target': 'ovnmeta-54dd896a-93d6-4056-93b9-fe4c87eb0b97', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:38.731 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[acf41782-bc1f-432a-9561-b80d8a87c370]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap54dd896a-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:70:bb:ff'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 20], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586436, 'reachable_time': 15912, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 242645, 'error': None, 'target': 'ovnmeta-54dd896a-93d6-4056-93b9-fe4c87eb0b97', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:38.759 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c67ac15a-3116-4a2e-97d3-6f7d95e495b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:38.811 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[1665da9c-9a08-4e65-8893-258562e0c108]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:38.813 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap54dd896a-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:38.813 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:38.813 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap54dd896a-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:53:38 np0005539552 NetworkManager[48926]: <info>  [1764402818.8158] manager: (tap54dd896a-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Nov 29 02:53:38 np0005539552 nova_compute[233724]: 2025-11-29 07:53:38.815 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:38 np0005539552 kernel: tap54dd896a-90: entered promiscuous mode
Nov 29 02:53:38 np0005539552 nova_compute[233724]: 2025-11-29 07:53:38.817 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:38.817 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap54dd896a-90, col_values=(('external_ids', {'iface-id': 'a9eac8df-57ef-4a9b-91fd-9eb356860a2d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:53:38 np0005539552 ovn_controller[133798]: 2025-11-29T07:53:38Z|00064|binding|INFO|Releasing lport a9eac8df-57ef-4a9b-91fd-9eb356860a2d from this chassis (sb_readonly=0)
Nov 29 02:53:38 np0005539552 nova_compute[233724]: 2025-11-29 07:53:38.818 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:38 np0005539552 nova_compute[233724]: 2025-11-29 07:53:38.833 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:38.834 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/54dd896a-93d6-4056-93b9-fe4c87eb0b97.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/54dd896a-93d6-4056-93b9-fe4c87eb0b97.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:38.834 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[1f44434c-b365-4cf5-8c97-97707cf50060]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:38.835 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-54dd896a-93d6-4056-93b9-fe4c87eb0b97
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/54dd896a-93d6-4056-93b9-fe4c87eb0b97.pid.haproxy
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 54dd896a-93d6-4056-93b9-fe4c87eb0b97
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 02:53:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:38.836 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-54dd896a-93d6-4056-93b9-fe4c87eb0b97', 'env', 'PROCESS_TAG=haproxy-54dd896a-93d6-4056-93b9-fe4c87eb0b97', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/54dd896a-93d6-4056-93b9-fe4c87eb0b97.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 02:53:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 02:53:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3196384587' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 02:53:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 02:53:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3196384587' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 02:53:39 np0005539552 podman[242713]: 2025-11-29 07:53:39.226666033 +0000 UTC m=+0.058189484 container create 3b8319cb199a7e39583e0eb9ea9a12b1cc54c8ffb32c5e91f84da236fc1d4721 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-54dd896a-93d6-4056-93b9-fe4c87eb0b97, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 02:53:39 np0005539552 nova_compute[233724]: 2025-11-29 07:53:39.228 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:39 np0005539552 systemd[1]: Started libpod-conmon-3b8319cb199a7e39583e0eb9ea9a12b1cc54c8ffb32c5e91f84da236fc1d4721.scope.
Nov 29 02:53:39 np0005539552 nova_compute[233724]: 2025-11-29 07:53:39.279 233728 DEBUG nova.compute.manager [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:53:39 np0005539552 nova_compute[233724]: 2025-11-29 07:53:39.280 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764402819.2785604, 9bef976c-2981-4d19-aa60-8a550b7093ca => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:53:39 np0005539552 nova_compute[233724]: 2025-11-29 07:53:39.280 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] VM Started (Lifecycle Event)#033[00m
Nov 29 02:53:39 np0005539552 systemd[1]: Started libcrun container.
Nov 29 02:53:39 np0005539552 nova_compute[233724]: 2025-11-29 07:53:39.285 233728 DEBUG nova.virt.libvirt.driver [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:53:39 np0005539552 nova_compute[233724]: 2025-11-29 07:53:39.288 233728 INFO nova.virt.libvirt.driver [-] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Instance spawned successfully.#033[00m
Nov 29 02:53:39 np0005539552 nova_compute[233724]: 2025-11-29 07:53:39.288 233728 DEBUG nova.virt.libvirt.driver [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:53:39 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a67cf37709a6fb6612dc99a4b8d89b82f3dd2d00f012fd10c32b0fc4a938c7f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:53:39 np0005539552 podman[242713]: 2025-11-29 07:53:39.2008084 +0000 UTC m=+0.032331851 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:53:39 np0005539552 podman[242713]: 2025-11-29 07:53:39.302877136 +0000 UTC m=+0.134400587 container init 3b8319cb199a7e39583e0eb9ea9a12b1cc54c8ffb32c5e91f84da236fc1d4721 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-54dd896a-93d6-4056-93b9-fe4c87eb0b97, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 02:53:39 np0005539552 podman[242713]: 2025-11-29 07:53:39.308878599 +0000 UTC m=+0.140402030 container start 3b8319cb199a7e39583e0eb9ea9a12b1cc54c8ffb32c5e91f84da236fc1d4721 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-54dd896a-93d6-4056-93b9-fe4c87eb0b97, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 02:53:39 np0005539552 nova_compute[233724]: 2025-11-29 07:53:39.309 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:53:39 np0005539552 nova_compute[233724]: 2025-11-29 07:53:39.312 233728 DEBUG nova.virt.libvirt.driver [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:53:39 np0005539552 nova_compute[233724]: 2025-11-29 07:53:39.312 233728 DEBUG nova.virt.libvirt.driver [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:53:39 np0005539552 nova_compute[233724]: 2025-11-29 07:53:39.313 233728 DEBUG nova.virt.libvirt.driver [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:53:39 np0005539552 nova_compute[233724]: 2025-11-29 07:53:39.313 233728 DEBUG nova.virt.libvirt.driver [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:53:39 np0005539552 nova_compute[233724]: 2025-11-29 07:53:39.313 233728 DEBUG nova.virt.libvirt.driver [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:53:39 np0005539552 nova_compute[233724]: 2025-11-29 07:53:39.314 233728 DEBUG nova.virt.libvirt.driver [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:53:39 np0005539552 nova_compute[233724]: 2025-11-29 07:53:39.318 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:53:39 np0005539552 neutron-haproxy-ovnmeta-54dd896a-93d6-4056-93b9-fe4c87eb0b97[242735]: [NOTICE]   (242739) : New worker (242741) forked
Nov 29 02:53:39 np0005539552 neutron-haproxy-ovnmeta-54dd896a-93d6-4056-93b9-fe4c87eb0b97[242735]: [NOTICE]   (242739) : Loading success.
Nov 29 02:53:39 np0005539552 nova_compute[233724]: 2025-11-29 07:53:39.344 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 02:53:39 np0005539552 nova_compute[233724]: 2025-11-29 07:53:39.344 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764402819.2787867, 9bef976c-2981-4d19-aa60-8a550b7093ca => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:53:39 np0005539552 nova_compute[233724]: 2025-11-29 07:53:39.344 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] VM Paused (Lifecycle Event)
Nov 29 02:53:39 np0005539552 nova_compute[233724]: 2025-11-29 07:53:39.375 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:53:39 np0005539552 nova_compute[233724]: 2025-11-29 07:53:39.377 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764402819.2841864, 9bef976c-2981-4d19-aa60-8a550b7093ca => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:53:39 np0005539552 nova_compute[233724]: 2025-11-29 07:53:39.377 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] VM Resumed (Lifecycle Event)
Nov 29 02:53:39 np0005539552 nova_compute[233724]: 2025-11-29 07:53:39.387 233728 INFO nova.compute.manager [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Took 7.78 seconds to spawn the instance on the hypervisor.
Nov 29 02:53:39 np0005539552 nova_compute[233724]: 2025-11-29 07:53:39.387 233728 DEBUG nova.compute.manager [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:53:39 np0005539552 nova_compute[233724]: 2025-11-29 07:53:39.410 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:53:39 np0005539552 nova_compute[233724]: 2025-11-29 07:53:39.417 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 02:53:39 np0005539552 nova_compute[233724]: 2025-11-29 07:53:39.442 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 02:53:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:39.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:39 np0005539552 nova_compute[233724]: 2025-11-29 07:53:39.452 233728 INFO nova.compute.manager [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Took 8.68 seconds to build instance.
Nov 29 02:53:39 np0005539552 nova_compute[233724]: 2025-11-29 07:53:39.470 233728 DEBUG oslo_concurrency.lockutils [None req-989f568e-923d-4dfc-a1b2-2cf721d34930 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Lock "9bef976c-2981-4d19-aa60-8a550b7093ca" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:53:40 np0005539552 nova_compute[233724]: 2025-11-29 07:53:40.174 233728 DEBUG nova.compute.manager [req-85848e40-ae5e-4aa3-9889-263cde118c84 req-818fbf4b-d9bf-474f-af05-108482f42e0c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Received event network-vif-plugged-384b014a-c4e8-4d83-a8d1-09e70342722f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:53:40 np0005539552 nova_compute[233724]: 2025-11-29 07:53:40.175 233728 DEBUG oslo_concurrency.lockutils [req-85848e40-ae5e-4aa3-9889-263cde118c84 req-818fbf4b-d9bf-474f-af05-108482f42e0c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9bef976c-2981-4d19-aa60-8a550b7093ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:53:40 np0005539552 nova_compute[233724]: 2025-11-29 07:53:40.175 233728 DEBUG oslo_concurrency.lockutils [req-85848e40-ae5e-4aa3-9889-263cde118c84 req-818fbf4b-d9bf-474f-af05-108482f42e0c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9bef976c-2981-4d19-aa60-8a550b7093ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:53:40 np0005539552 nova_compute[233724]: 2025-11-29 07:53:40.175 233728 DEBUG oslo_concurrency.lockutils [req-85848e40-ae5e-4aa3-9889-263cde118c84 req-818fbf4b-d9bf-474f-af05-108482f42e0c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9bef976c-2981-4d19-aa60-8a550b7093ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:53:40 np0005539552 nova_compute[233724]: 2025-11-29 07:53:40.175 233728 DEBUG nova.compute.manager [req-85848e40-ae5e-4aa3-9889-263cde118c84 req-818fbf4b-d9bf-474f-af05-108482f42e0c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] No waiting events found dispatching network-vif-plugged-384b014a-c4e8-4d83-a8d1-09e70342722f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 02:53:40 np0005539552 nova_compute[233724]: 2025-11-29 07:53:40.176 233728 WARNING nova.compute.manager [req-85848e40-ae5e-4aa3-9889-263cde118c84 req-818fbf4b-d9bf-474f-af05-108482f42e0c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Received unexpected event network-vif-plugged-384b014a-c4e8-4d83-a8d1-09e70342722f for instance with vm_state active and task_state None.
Nov 29 02:53:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:40.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:41 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:41 np0005539552 nova_compute[233724]: 2025-11-29 07:53:41.345 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:41.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 02:53:42 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1950057900' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 02:53:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 02:53:42 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1950057900' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 02:53:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:42.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:43.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:43 np0005539552 nova_compute[233724]: 2025-11-29 07:53:43.618 233728 DEBUG nova.virt.libvirt.driver [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Check if temp file /var/lib/nova/instances/tmp3464m1ig exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065
Nov 29 02:53:43 np0005539552 nova_compute[233724]: 2025-11-29 07:53:43.619 233728 DEBUG nova.compute.manager [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp3464m1ig',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='9bef976c-2981-4d19-aa60-8a550b7093ca',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587
Nov 29 02:53:44 np0005539552 nova_compute[233724]: 2025-11-29 07:53:44.233 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:44.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:45 np0005539552 nova_compute[233724]: 2025-11-29 07:53:45.301 233728 DEBUG oslo_concurrency.lockutils [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Acquiring lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:53:45 np0005539552 nova_compute[233724]: 2025-11-29 07:53:45.302 233728 DEBUG oslo_concurrency.lockutils [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Acquired lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:53:45 np0005539552 nova_compute[233724]: 2025-11-29 07:53:45.307 233728 INFO nova.compute.rpcapi [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Automatically selected compute RPC version 6.2 from minimum service version 66
Nov 29 02:53:45 np0005539552 nova_compute[233724]: 2025-11-29 07:53:45.308 233728 DEBUG oslo_concurrency.lockutils [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Releasing lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 02:53:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:45.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:46 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:46 np0005539552 nova_compute[233724]: 2025-11-29 07:53:46.348 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:46.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:47.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:48.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:49 np0005539552 nova_compute[233724]: 2025-11-29 07:53:49.235 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:49 np0005539552 nova_compute[233724]: 2025-11-29 07:53:49.377 233728 DEBUG nova.compute.manager [req-8f337167-35a3-49a4-b642-a54aa0c93597 req-4e0a1c9a-84cf-4eb5-981b-6d804ff6176e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Received event network-vif-unplugged-384b014a-c4e8-4d83-a8d1-09e70342722f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:53:49 np0005539552 nova_compute[233724]: 2025-11-29 07:53:49.378 233728 DEBUG oslo_concurrency.lockutils [req-8f337167-35a3-49a4-b642-a54aa0c93597 req-4e0a1c9a-84cf-4eb5-981b-6d804ff6176e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9bef976c-2981-4d19-aa60-8a550b7093ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:53:49 np0005539552 nova_compute[233724]: 2025-11-29 07:53:49.378 233728 DEBUG oslo_concurrency.lockutils [req-8f337167-35a3-49a4-b642-a54aa0c93597 req-4e0a1c9a-84cf-4eb5-981b-6d804ff6176e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9bef976c-2981-4d19-aa60-8a550b7093ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:53:49 np0005539552 nova_compute[233724]: 2025-11-29 07:53:49.378 233728 DEBUG oslo_concurrency.lockutils [req-8f337167-35a3-49a4-b642-a54aa0c93597 req-4e0a1c9a-84cf-4eb5-981b-6d804ff6176e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9bef976c-2981-4d19-aa60-8a550b7093ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:53:49 np0005539552 nova_compute[233724]: 2025-11-29 07:53:49.379 233728 DEBUG nova.compute.manager [req-8f337167-35a3-49a4-b642-a54aa0c93597 req-4e0a1c9a-84cf-4eb5-981b-6d804ff6176e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] No waiting events found dispatching network-vif-unplugged-384b014a-c4e8-4d83-a8d1-09e70342722f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 02:53:49 np0005539552 nova_compute[233724]: 2025-11-29 07:53:49.379 233728 DEBUG nova.compute.manager [req-8f337167-35a3-49a4-b642-a54aa0c93597 req-4e0a1c9a-84cf-4eb5-981b-6d804ff6176e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Received event network-vif-unplugged-384b014a-c4e8-4d83-a8d1-09e70342722f for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 02:53:49 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 02:53:49 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3068597136' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 02:53:49 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 02:53:49 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3068597136' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 02:53:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:49.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:49 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:53:49 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:53:49 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:53:49 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:53:49 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:53:49 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:53:49 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:53:49 np0005539552 nova_compute[233724]: 2025-11-29 07:53:49.951 233728 INFO nova.compute.manager [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Took 4.65 seconds for pre_live_migration on destination host compute-0.ctlplane.example.com.
Nov 29 02:53:49 np0005539552 nova_compute[233724]: 2025-11-29 07:53:49.951 233728 DEBUG nova.compute.manager [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 02:53:49 np0005539552 nova_compute[233724]: 2025-11-29 07:53:49.969 233728 DEBUG nova.compute.manager [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp3464m1ig',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='9bef976c-2981-4d19-aa60-8a550b7093ca',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(82482b02-695c-4ea4-8a6d-c7ed11d4be01),old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939
Nov 29 02:53:49 np0005539552 nova_compute[233724]: 2025-11-29 07:53:49.973 233728 DEBUG nova.objects.instance [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Lazy-loading 'migration_context' on Instance uuid 9bef976c-2981-4d19-aa60-8a550b7093ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:53:49 np0005539552 nova_compute[233724]: 2025-11-29 07:53:49.974 233728 DEBUG nova.virt.libvirt.driver [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639
Nov 29 02:53:49 np0005539552 nova_compute[233724]: 2025-11-29 07:53:49.976 233728 DEBUG nova.virt.libvirt.driver [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440
Nov 29 02:53:49 np0005539552 nova_compute[233724]: 2025-11-29 07:53:49.976 233728 DEBUG nova.virt.libvirt.driver [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449
Nov 29 02:53:49 np0005539552 nova_compute[233724]: 2025-11-29 07:53:49.996 233728 DEBUG nova.virt.libvirt.vif [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:53:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-1241596333',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-1241596333',id=16,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:53:39Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f91d373d1ef64146866ef08735a75efa',ramdisk_id='',reservation_id='r-ggznma7o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='
1',image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1482931553',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1482931553-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:53:39Z,user_data=None,user_id='b8f5b14bc98a47f29238140d1d3f1220',uuid=9bef976c-2981-4d19-aa60-8a550b7093ca,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "384b014a-c4e8-4d83-a8d1-09e70342722f", "address": "fa:16:3e:a8:74:d4", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap384b014a-c4", "ovs_interfaceid": "384b014a-c4e8-4d83-a8d1-09e70342722f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 02:53:49 np0005539552 nova_compute[233724]: 2025-11-29 07:53:49.996 233728 DEBUG nova.network.os_vif_util [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Converting VIF {"id": "384b014a-c4e8-4d83-a8d1-09e70342722f", "address": "fa:16:3e:a8:74:d4", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap384b014a-c4", "ovs_interfaceid": "384b014a-c4e8-4d83-a8d1-09e70342722f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 02:53:49 np0005539552 nova_compute[233724]: 2025-11-29 07:53:49.997 233728 DEBUG nova.network.os_vif_util [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:74:d4,bridge_name='br-int',has_traffic_filtering=True,id=384b014a-c4e8-4d83-a8d1-09e70342722f,network=Network(ad69a0f4-0000-474b-9649-72cf1bf9f5c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap384b014a-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 02:53:49 np0005539552 nova_compute[233724]: 2025-11-29 07:53:49.997 233728 DEBUG nova.virt.libvirt.migration [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Updating guest XML with vif config: <interface type="ethernet">
Nov 29 02:53:49 np0005539552 nova_compute[233724]:  <mac address="fa:16:3e:a8:74:d4"/>
Nov 29 02:53:49 np0005539552 nova_compute[233724]:  <model type="virtio"/>
Nov 29 02:53:49 np0005539552 nova_compute[233724]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:53:49 np0005539552 nova_compute[233724]:  <mtu size="1442"/>
Nov 29 02:53:49 np0005539552 nova_compute[233724]:  <target dev="tap384b014a-c4"/>
Nov 29 02:53:49 np0005539552 nova_compute[233724]: </interface>
Nov 29 02:53:49 np0005539552 nova_compute[233724]: _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388
Nov 29 02:53:49 np0005539552 nova_compute[233724]: 2025-11-29 07:53:49.999 233728 DEBUG nova.virt.libvirt.driver [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272
Nov 29 02:53:50 np0005539552 nova_compute[233724]: 2025-11-29 07:53:50.480 233728 DEBUG nova.virt.libvirt.migration [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Nov 29 02:53:50 np0005539552 nova_compute[233724]: 2025-11-29 07:53:50.480 233728 INFO nova.virt.libvirt.migration [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Increasing downtime to 50 ms after 0 sec elapsed time
Nov 29 02:53:50 np0005539552 nova_compute[233724]: 2025-11-29 07:53:50.549 233728 INFO nova.virt.libvirt.driver [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Nov 29 02:53:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:50.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:51 np0005539552 nova_compute[233724]: 2025-11-29 07:53:51.053 233728 DEBUG nova.virt.libvirt.migration [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Nov 29 02:53:51 np0005539552 nova_compute[233724]: 2025-11-29 07:53:51.055 233728 DEBUG nova.virt.libvirt.migration [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Nov 29 02:53:51 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:51 np0005539552 nova_compute[233724]: 2025-11-29 07:53:51.351 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:51 np0005539552 nova_compute[233724]: 2025-11-29 07:53:51.415 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764402831.4150689, 9bef976c-2981-4d19-aa60-8a550b7093ca => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:53:51 np0005539552 nova_compute[233724]: 2025-11-29 07:53:51.416 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] VM Paused (Lifecycle Event)#033[00m
Nov 29 02:53:51 np0005539552 nova_compute[233724]: 2025-11-29 07:53:51.449 233728 DEBUG nova.compute.manager [req-9fc7c1bb-56ab-42cb-811f-b1d5370ba42c req-3e2b792e-3f4f-44f5-9390-9d5b39f54bcc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Received event network-vif-plugged-384b014a-c4e8-4d83-a8d1-09e70342722f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:53:51 np0005539552 nova_compute[233724]: 2025-11-29 07:53:51.449 233728 DEBUG oslo_concurrency.lockutils [req-9fc7c1bb-56ab-42cb-811f-b1d5370ba42c req-3e2b792e-3f4f-44f5-9390-9d5b39f54bcc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9bef976c-2981-4d19-aa60-8a550b7093ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:51 np0005539552 nova_compute[233724]: 2025-11-29 07:53:51.450 233728 DEBUG oslo_concurrency.lockutils [req-9fc7c1bb-56ab-42cb-811f-b1d5370ba42c req-3e2b792e-3f4f-44f5-9390-9d5b39f54bcc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9bef976c-2981-4d19-aa60-8a550b7093ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:51 np0005539552 nova_compute[233724]: 2025-11-29 07:53:51.450 233728 DEBUG oslo_concurrency.lockutils [req-9fc7c1bb-56ab-42cb-811f-b1d5370ba42c req-3e2b792e-3f4f-44f5-9390-9d5b39f54bcc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9bef976c-2981-4d19-aa60-8a550b7093ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:51 np0005539552 nova_compute[233724]: 2025-11-29 07:53:51.450 233728 DEBUG nova.compute.manager [req-9fc7c1bb-56ab-42cb-811f-b1d5370ba42c req-3e2b792e-3f4f-44f5-9390-9d5b39f54bcc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] No waiting events found dispatching network-vif-plugged-384b014a-c4e8-4d83-a8d1-09e70342722f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:53:51 np0005539552 nova_compute[233724]: 2025-11-29 07:53:51.451 233728 WARNING nova.compute.manager [req-9fc7c1bb-56ab-42cb-811f-b1d5370ba42c req-3e2b792e-3f4f-44f5-9390-9d5b39f54bcc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Received unexpected event network-vif-plugged-384b014a-c4e8-4d83-a8d1-09e70342722f for instance with vm_state active and task_state migrating.#033[00m
Nov 29 02:53:51 np0005539552 nova_compute[233724]: 2025-11-29 07:53:51.451 233728 DEBUG nova.compute.manager [req-9fc7c1bb-56ab-42cb-811f-b1d5370ba42c req-3e2b792e-3f4f-44f5-9390-9d5b39f54bcc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Received event network-changed-384b014a-c4e8-4d83-a8d1-09e70342722f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:53:51 np0005539552 nova_compute[233724]: 2025-11-29 07:53:51.451 233728 DEBUG nova.compute.manager [req-9fc7c1bb-56ab-42cb-811f-b1d5370ba42c req-3e2b792e-3f4f-44f5-9390-9d5b39f54bcc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Refreshing instance network info cache due to event network-changed-384b014a-c4e8-4d83-a8d1-09e70342722f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:53:51 np0005539552 nova_compute[233724]: 2025-11-29 07:53:51.452 233728 DEBUG oslo_concurrency.lockutils [req-9fc7c1bb-56ab-42cb-811f-b1d5370ba42c req-3e2b792e-3f4f-44f5-9390-9d5b39f54bcc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-9bef976c-2981-4d19-aa60-8a550b7093ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:53:51 np0005539552 nova_compute[233724]: 2025-11-29 07:53:51.452 233728 DEBUG oslo_concurrency.lockutils [req-9fc7c1bb-56ab-42cb-811f-b1d5370ba42c req-3e2b792e-3f4f-44f5-9390-9d5b39f54bcc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-9bef976c-2981-4d19-aa60-8a550b7093ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:53:51 np0005539552 nova_compute[233724]: 2025-11-29 07:53:51.452 233728 DEBUG nova.network.neutron [req-9fc7c1bb-56ab-42cb-811f-b1d5370ba42c req-3e2b792e-3f4f-44f5-9390-9d5b39f54bcc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Refreshing network info cache for port 384b014a-c4e8-4d83-a8d1-09e70342722f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:53:51 np0005539552 nova_compute[233724]: 2025-11-29 07:53:51.455 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:53:51 np0005539552 nova_compute[233724]: 2025-11-29 07:53:51.460 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:53:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:53:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:51.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:53:51 np0005539552 nova_compute[233724]: 2025-11-29 07:53:51.480 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] During sync_power_state the instance has a pending task (migrating). Skip.#033[00m
Nov 29 02:53:51 np0005539552 nova_compute[233724]: 2025-11-29 07:53:51.557 233728 DEBUG nova.virt.libvirt.migration [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Nov 29 02:53:51 np0005539552 nova_compute[233724]: 2025-11-29 07:53:51.558 233728 DEBUG nova.virt.libvirt.migration [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Nov 29 02:53:51 np0005539552 kernel: tap384b014a-c4 (unregistering): left promiscuous mode
Nov 29 02:53:51 np0005539552 NetworkManager[48926]: <info>  [1764402831.6035] device (tap384b014a-c4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:53:51 np0005539552 nova_compute[233724]: 2025-11-29 07:53:51.616 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:51 np0005539552 ovn_controller[133798]: 2025-11-29T07:53:51Z|00065|binding|INFO|Releasing lport 384b014a-c4e8-4d83-a8d1-09e70342722f from this chassis (sb_readonly=0)
Nov 29 02:53:51 np0005539552 ovn_controller[133798]: 2025-11-29T07:53:51Z|00066|binding|INFO|Setting lport 384b014a-c4e8-4d83-a8d1-09e70342722f down in Southbound
Nov 29 02:53:51 np0005539552 ovn_controller[133798]: 2025-11-29T07:53:51Z|00067|binding|INFO|Releasing lport f3deccbd-dd81-439e-9ba4-ebc80268aa7a from this chassis (sb_readonly=0)
Nov 29 02:53:51 np0005539552 ovn_controller[133798]: 2025-11-29T07:53:51Z|00068|binding|INFO|Setting lport f3deccbd-dd81-439e-9ba4-ebc80268aa7a down in Southbound
Nov 29 02:53:51 np0005539552 ovn_controller[133798]: 2025-11-29T07:53:51Z|00069|binding|INFO|Removing iface tap384b014a-c4 ovn-installed in OVS
Nov 29 02:53:51 np0005539552 nova_compute[233724]: 2025-11-29 07:53:51.620 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:51 np0005539552 ovn_controller[133798]: 2025-11-29T07:53:51Z|00070|binding|INFO|Releasing lport 7ffec560-b868-40db-af88-b0deaaa81f65 from this chassis (sb_readonly=0)
Nov 29 02:53:51 np0005539552 ovn_controller[133798]: 2025-11-29T07:53:51Z|00071|binding|INFO|Releasing lport a9eac8df-57ef-4a9b-91fd-9eb356860a2d from this chassis (sb_readonly=0)
Nov 29 02:53:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:51.627 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:74:d4 10.100.0.6'], port_security=['fa:16:3e:a8:74:d4 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com,compute-0.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': 'a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1819185199', 'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '9bef976c-2981-4d19-aa60-8a550b7093ca', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ad69a0f4-0000-474b-9649-72cf1bf9f5c1', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1819185199', 'neutron:project_id': 'f91d373d1ef64146866ef08735a75efa', 'neutron:revision_number': '8', 'neutron:security_group_ids': '394eda18-2fbd-4f97-9713-003068aad79a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=19139b07-e3dc-4118-93d3-d7c140077f4d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=384b014a-c4e8-4d83-a8d1-09e70342722f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:53:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:51.631 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:b4:cf 19.80.0.168'], port_security=['fa:16:3e:d7:b4:cf 19.80.0.168'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['384b014a-c4e8-4d83-a8d1-09e70342722f'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-367077291', 'neutron:cidrs': '19.80.0.168/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-54dd896a-93d6-4056-93b9-fe4c87eb0b97', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-367077291', 'neutron:project_id': 'f91d373d1ef64146866ef08735a75efa', 'neutron:revision_number': '3', 'neutron:security_group_ids': '394eda18-2fbd-4f97-9713-003068aad79a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=86c926f4-a652-4447-9b71-a6da44a90627, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=f3deccbd-dd81-439e-9ba4-ebc80268aa7a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:53:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:51.633 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 384b014a-c4e8-4d83-a8d1-09e70342722f in datapath ad69a0f4-0000-474b-9649-72cf1bf9f5c1 unbound from our chassis#033[00m
Nov 29 02:53:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:51.636 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ad69a0f4-0000-474b-9649-72cf1bf9f5c1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:53:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:51.637 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b84a9f5d-07c6-44d6-a028-0ad61dca20fa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:51.638 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1 namespace which is not needed anymore#033[00m
Nov 29 02:53:51 np0005539552 nova_compute[233724]: 2025-11-29 07:53:51.650 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:51 np0005539552 nova_compute[233724]: 2025-11-29 07:53:51.719 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:51 np0005539552 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000010.scope: Deactivated successfully.
Nov 29 02:53:51 np0005539552 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000010.scope: Consumed 14.095s CPU time.
Nov 29 02:53:51 np0005539552 systemd-machined[196379]: Machine qemu-3-instance-00000010 terminated.
Nov 29 02:53:51 np0005539552 neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1[242616]: [NOTICE]   (242620) : haproxy version is 2.8.14-c23fe91
Nov 29 02:53:51 np0005539552 neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1[242616]: [NOTICE]   (242620) : path to executable is /usr/sbin/haproxy
Nov 29 02:53:51 np0005539552 neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1[242616]: [WARNING]  (242620) : Exiting Master process...
Nov 29 02:53:51 np0005539552 neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1[242616]: [ALERT]    (242620) : Current worker (242622) exited with code 143 (Terminated)
Nov 29 02:53:51 np0005539552 neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1[242616]: [WARNING]  (242620) : All workers exited. Exiting... (0)
Nov 29 02:53:51 np0005539552 virtqemud[233098]: Unable to get XATTR trusted.libvirt.security.ref_selinux on vms/9bef976c-2981-4d19-aa60-8a550b7093ca_disk: No such file or directory
Nov 29 02:53:51 np0005539552 virtqemud[233098]: Unable to get XATTR trusted.libvirt.security.ref_dac on vms/9bef976c-2981-4d19-aa60-8a550b7093ca_disk: No such file or directory
Nov 29 02:53:51 np0005539552 systemd[1]: libpod-7663e6de3ec63654a56a3bbbd0a75a3e059bbccd5559681c5bfcdcca93d6ec5e.scope: Deactivated successfully.
Nov 29 02:53:51 np0005539552 conmon[242616]: conmon 7663e6de3ec63654a56a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7663e6de3ec63654a56a3bbbd0a75a3e059bbccd5559681c5bfcdcca93d6ec5e.scope/container/memory.events
Nov 29 02:53:51 np0005539552 podman[242966]: 2025-11-29 07:53:51.78226079 +0000 UTC m=+0.051222105 container died 7663e6de3ec63654a56a3bbbd0a75a3e059bbccd5559681c5bfcdcca93d6ec5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 02:53:51 np0005539552 nova_compute[233724]: 2025-11-29 07:53:51.782 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:51 np0005539552 nova_compute[233724]: 2025-11-29 07:53:51.787 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:51 np0005539552 nova_compute[233724]: 2025-11-29 07:53:51.799 233728 DEBUG nova.virt.libvirt.driver [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279#033[00m
Nov 29 02:53:51 np0005539552 nova_compute[233724]: 2025-11-29 07:53:51.799 233728 DEBUG nova.virt.libvirt.driver [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327#033[00m
Nov 29 02:53:51 np0005539552 nova_compute[233724]: 2025-11-29 07:53:51.800 233728 DEBUG nova.virt.libvirt.driver [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630#033[00m
Nov 29 02:53:51 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7663e6de3ec63654a56a3bbbd0a75a3e059bbccd5559681c5bfcdcca93d6ec5e-userdata-shm.mount: Deactivated successfully.
Nov 29 02:53:51 np0005539552 systemd[1]: var-lib-containers-storage-overlay-7689690edfdd0776a1454d4571e25b15bcd6bb6e79e4a380147f341e56290c75-merged.mount: Deactivated successfully.
Nov 29 02:53:51 np0005539552 podman[242966]: 2025-11-29 07:53:51.834355507 +0000 UTC m=+0.103316852 container cleanup 7663e6de3ec63654a56a3bbbd0a75a3e059bbccd5559681c5bfcdcca93d6ec5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:53:51 np0005539552 systemd[1]: libpod-conmon-7663e6de3ec63654a56a3bbbd0a75a3e059bbccd5559681c5bfcdcca93d6ec5e.scope: Deactivated successfully.
Nov 29 02:53:51 np0005539552 podman[243007]: 2025-11-29 07:53:51.905897283 +0000 UTC m=+0.042917108 container remove 7663e6de3ec63654a56a3bbbd0a75a3e059bbccd5559681c5bfcdcca93d6ec5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 02:53:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:51.911 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[1c8ecf38-282d-440d-849c-e38346b5b543]: (4, ('Sat Nov 29 07:53:51 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1 (7663e6de3ec63654a56a3bbbd0a75a3e059bbccd5559681c5bfcdcca93d6ec5e)\n7663e6de3ec63654a56a3bbbd0a75a3e059bbccd5559681c5bfcdcca93d6ec5e\nSat Nov 29 07:53:51 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1 (7663e6de3ec63654a56a3bbbd0a75a3e059bbccd5559681c5bfcdcca93d6ec5e)\n7663e6de3ec63654a56a3bbbd0a75a3e059bbccd5559681c5bfcdcca93d6ec5e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:51.913 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0dbbbf38-5d7e-47ad-957e-41490729e78b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:51.915 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapad69a0f4-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:53:51 np0005539552 nova_compute[233724]: 2025-11-29 07:53:51.918 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:51 np0005539552 kernel: tapad69a0f4-00: left promiscuous mode
Nov 29 02:53:51 np0005539552 nova_compute[233724]: 2025-11-29 07:53:51.934 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:51.937 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[8f0c652f-13e7-472d-9cff-f06933aeb234]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:51.952 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[8883bdbe-c346-453b-abe3-da122d0d380e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:51.952 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d6116313-6ae6-45a8-affb-94135038ecba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:51.966 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[551c0fa5-b64a-495f-bcd0-9584a6b1a207]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586349, 'reachable_time': 37723, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243027, 'error': None, 'target': 'ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:51 np0005539552 systemd[1]: run-netns-ovnmeta\x2dad69a0f4\x2d0000\x2d474b\x2d9649\x2d72cf1bf9f5c1.mount: Deactivated successfully.
Nov 29 02:53:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:51.971 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 02:53:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:51.971 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[bd9dbc91-46be-4f51-876d-9c03c475f13f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:51.972 143400 INFO neutron.agent.ovn.metadata.agent [-] Port f3deccbd-dd81-439e-9ba4-ebc80268aa7a in datapath 54dd896a-93d6-4056-93b9-fe4c87eb0b97 unbound from our chassis#033[00m
Nov 29 02:53:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:51.974 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 54dd896a-93d6-4056-93b9-fe4c87eb0b97, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:53:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:51.975 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[79094341-7dbe-470a-86e4-9fd348ced3c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:51.975 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-54dd896a-93d6-4056-93b9-fe4c87eb0b97 namespace which is not needed anymore#033[00m
Nov 29 02:53:52 np0005539552 nova_compute[233724]: 2025-11-29 07:53:52.433 233728 DEBUG nova.virt.libvirt.guest [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Domain has shutdown/gone away: Domain not found: no domain with matching uuid '9bef976c-2981-4d19-aa60-8a550b7093ca' (instance-00000010) get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688#033[00m
Nov 29 02:53:52 np0005539552 nova_compute[233724]: 2025-11-29 07:53:52.434 233728 INFO nova.virt.libvirt.driver [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Migration operation has completed#033[00m
Nov 29 02:53:52 np0005539552 nova_compute[233724]: 2025-11-29 07:53:52.434 233728 INFO nova.compute.manager [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] _post_live_migration() is started..#033[00m
Nov 29 02:53:52 np0005539552 neutron-haproxy-ovnmeta-54dd896a-93d6-4056-93b9-fe4c87eb0b97[242735]: [NOTICE]   (242739) : haproxy version is 2.8.14-c23fe91
Nov 29 02:53:52 np0005539552 neutron-haproxy-ovnmeta-54dd896a-93d6-4056-93b9-fe4c87eb0b97[242735]: [NOTICE]   (242739) : path to executable is /usr/sbin/haproxy
Nov 29 02:53:52 np0005539552 neutron-haproxy-ovnmeta-54dd896a-93d6-4056-93b9-fe4c87eb0b97[242735]: [WARNING]  (242739) : Exiting Master process...
Nov 29 02:53:52 np0005539552 neutron-haproxy-ovnmeta-54dd896a-93d6-4056-93b9-fe4c87eb0b97[242735]: [ALERT]    (242739) : Current worker (242741) exited with code 143 (Terminated)
Nov 29 02:53:52 np0005539552 neutron-haproxy-ovnmeta-54dd896a-93d6-4056-93b9-fe4c87eb0b97[242735]: [WARNING]  (242739) : All workers exited. Exiting... (0)
Nov 29 02:53:52 np0005539552 systemd[1]: libpod-3b8319cb199a7e39583e0eb9ea9a12b1cc54c8ffb32c5e91f84da236fc1d4721.scope: Deactivated successfully.
Nov 29 02:53:52 np0005539552 podman[243045]: 2025-11-29 07:53:52.454122527 +0000 UTC m=+0.391078830 container died 3b8319cb199a7e39583e0eb9ea9a12b1cc54c8ffb32c5e91f84da236fc1d4721 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-54dd896a-93d6-4056-93b9-fe4c87eb0b97, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:53:52 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3b8319cb199a7e39583e0eb9ea9a12b1cc54c8ffb32c5e91f84da236fc1d4721-userdata-shm.mount: Deactivated successfully.
Nov 29 02:53:52 np0005539552 systemd[1]: var-lib-containers-storage-overlay-9a67cf37709a6fb6612dc99a4b8d89b82f3dd2d00f012fd10c32b0fc4a938c7f-merged.mount: Deactivated successfully.
Nov 29 02:53:52 np0005539552 podman[243045]: 2025-11-29 07:53:52.516311489 +0000 UTC m=+0.453267752 container cleanup 3b8319cb199a7e39583e0eb9ea9a12b1cc54c8ffb32c5e91f84da236fc1d4721 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-54dd896a-93d6-4056-93b9-fe4c87eb0b97, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 02:53:52 np0005539552 systemd[1]: libpod-conmon-3b8319cb199a7e39583e0eb9ea9a12b1cc54c8ffb32c5e91f84da236fc1d4721.scope: Deactivated successfully.
Nov 29 02:53:52 np0005539552 podman[243077]: 2025-11-29 07:53:52.579239061 +0000 UTC m=+0.041900881 container remove 3b8319cb199a7e39583e0eb9ea9a12b1cc54c8ffb32c5e91f84da236fc1d4721 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-54dd896a-93d6-4056-93b9-fe4c87eb0b97, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 02:53:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:52.584 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[46dc2ece-0e71-4e31-b363-3a3479197a45]: (4, ('Sat Nov 29 07:53:52 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-54dd896a-93d6-4056-93b9-fe4c87eb0b97 (3b8319cb199a7e39583e0eb9ea9a12b1cc54c8ffb32c5e91f84da236fc1d4721)\n3b8319cb199a7e39583e0eb9ea9a12b1cc54c8ffb32c5e91f84da236fc1d4721\nSat Nov 29 07:53:52 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-54dd896a-93d6-4056-93b9-fe4c87eb0b97 (3b8319cb199a7e39583e0eb9ea9a12b1cc54c8ffb32c5e91f84da236fc1d4721)\n3b8319cb199a7e39583e0eb9ea9a12b1cc54c8ffb32c5e91f84da236fc1d4721\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:52.585 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[9fb5649d-8935-47af-a3ba-be3f01d9dc70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:52.586 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap54dd896a-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:53:52 np0005539552 nova_compute[233724]: 2025-11-29 07:53:52.587 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:52 np0005539552 kernel: tap54dd896a-90: left promiscuous mode
Nov 29 02:53:52 np0005539552 nova_compute[233724]: 2025-11-29 07:53:52.605 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:52.608 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[cb299829-d6df-482f-b835-5514696c9bad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:52.626 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[84b8af97-576e-43c5-8372-71b4a8ec4c41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:52.627 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c25913c9-5ef0-4972-9810-42987381973d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:52.644 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e4592a67-153a-4751-814a-4709335e6238]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586429, 'reachable_time': 35429, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243097, 'error': None, 'target': 'ovnmeta-54dd896a-93d6-4056-93b9-fe4c87eb0b97', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:52.646 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-54dd896a-93d6-4056-93b9-fe4c87eb0b97 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 02:53:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:53:52.646 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[1d5eb66f-1d85-4f05-bf00-3ece2575461c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:52.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:52 np0005539552 systemd[1]: run-netns-ovnmeta\x2d54dd896a\x2d93d6\x2d4056\x2d93b9\x2dfe4c87eb0b97.mount: Deactivated successfully.
Nov 29 02:53:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:53.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:54 np0005539552 nova_compute[233724]: 2025-11-29 07:53:54.110 233728 DEBUG nova.network.neutron [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Activated binding for port 384b014a-c4e8-4d83-a8d1-09e70342722f and host compute-0.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181#033[00m
Nov 29 02:53:54 np0005539552 nova_compute[233724]: 2025-11-29 07:53:54.111 233728 DEBUG nova.compute.manager [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "384b014a-c4e8-4d83-a8d1-09e70342722f", "address": "fa:16:3e:a8:74:d4", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap384b014a-c4", "ovs_interfaceid": "384b014a-c4e8-4d83-a8d1-09e70342722f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326#033[00m
Nov 29 02:53:54 np0005539552 nova_compute[233724]: 2025-11-29 07:53:54.112 233728 DEBUG nova.virt.libvirt.vif [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:53:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-1241596333',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-1241596333',id=16,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:53:39Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f91d373d1ef64146866ef08735a75efa',ramdisk_id='',reservation_id='r-ggznma7o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='
1',image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1482931553',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1482931553-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:53:43Z,user_data=None,user_id='b8f5b14bc98a47f29238140d1d3f1220',uuid=9bef976c-2981-4d19-aa60-8a550b7093ca,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "384b014a-c4e8-4d83-a8d1-09e70342722f", "address": "fa:16:3e:a8:74:d4", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap384b014a-c4", "ovs_interfaceid": "384b014a-c4e8-4d83-a8d1-09e70342722f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:53:54 np0005539552 nova_compute[233724]: 2025-11-29 07:53:54.113 233728 DEBUG nova.network.os_vif_util [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Converting VIF {"id": "384b014a-c4e8-4d83-a8d1-09e70342722f", "address": "fa:16:3e:a8:74:d4", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap384b014a-c4", "ovs_interfaceid": "384b014a-c4e8-4d83-a8d1-09e70342722f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:53:54 np0005539552 nova_compute[233724]: 2025-11-29 07:53:54.115 233728 DEBUG nova.network.os_vif_util [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:74:d4,bridge_name='br-int',has_traffic_filtering=True,id=384b014a-c4e8-4d83-a8d1-09e70342722f,network=Network(ad69a0f4-0000-474b-9649-72cf1bf9f5c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap384b014a-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:53:54 np0005539552 nova_compute[233724]: 2025-11-29 07:53:54.116 233728 DEBUG os_vif [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:74:d4,bridge_name='br-int',has_traffic_filtering=True,id=384b014a-c4e8-4d83-a8d1-09e70342722f,network=Network(ad69a0f4-0000-474b-9649-72cf1bf9f5c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap384b014a-c4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:53:54 np0005539552 nova_compute[233724]: 2025-11-29 07:53:54.117 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:54 np0005539552 nova_compute[233724]: 2025-11-29 07:53:54.118 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap384b014a-c4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:53:54 np0005539552 nova_compute[233724]: 2025-11-29 07:53:54.119 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:54 np0005539552 nova_compute[233724]: 2025-11-29 07:53:54.121 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:54 np0005539552 nova_compute[233724]: 2025-11-29 07:53:54.123 233728 INFO os_vif [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:74:d4,bridge_name='br-int',has_traffic_filtering=True,id=384b014a-c4e8-4d83-a8d1-09e70342722f,network=Network(ad69a0f4-0000-474b-9649-72cf1bf9f5c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap384b014a-c4')#033[00m
Nov 29 02:53:54 np0005539552 nova_compute[233724]: 2025-11-29 07:53:54.124 233728 DEBUG oslo_concurrency.lockutils [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:54 np0005539552 nova_compute[233724]: 2025-11-29 07:53:54.124 233728 DEBUG oslo_concurrency.lockutils [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:54 np0005539552 nova_compute[233724]: 2025-11-29 07:53:54.124 233728 DEBUG oslo_concurrency.lockutils [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:54 np0005539552 nova_compute[233724]: 2025-11-29 07:53:54.125 233728 DEBUG nova.compute.manager [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349#033[00m
Nov 29 02:53:54 np0005539552 nova_compute[233724]: 2025-11-29 07:53:54.125 233728 INFO nova.virt.libvirt.driver [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Deleting instance files /var/lib/nova/instances/9bef976c-2981-4d19-aa60-8a550b7093ca_del#033[00m
Nov 29 02:53:54 np0005539552 nova_compute[233724]: 2025-11-29 07:53:54.126 233728 INFO nova.virt.libvirt.driver [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Deletion of /var/lib/nova/instances/9bef976c-2981-4d19-aa60-8a550b7093ca_del complete#033[00m
Nov 29 02:53:54 np0005539552 nova_compute[233724]: 2025-11-29 07:53:54.239 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:54 np0005539552 nova_compute[233724]: 2025-11-29 07:53:54.506 233728 DEBUG nova.network.neutron [req-9fc7c1bb-56ab-42cb-811f-b1d5370ba42c req-3e2b792e-3f4f-44f5-9390-9d5b39f54bcc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Updated VIF entry in instance network info cache for port 384b014a-c4e8-4d83-a8d1-09e70342722f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:53:54 np0005539552 nova_compute[233724]: 2025-11-29 07:53:54.507 233728 DEBUG nova.network.neutron [req-9fc7c1bb-56ab-42cb-811f-b1d5370ba42c req-3e2b792e-3f4f-44f5-9390-9d5b39f54bcc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Updating instance_info_cache with network_info: [{"id": "384b014a-c4e8-4d83-a8d1-09e70342722f", "address": "fa:16:3e:a8:74:d4", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap384b014a-c4", "ovs_interfaceid": "384b014a-c4e8-4d83-a8d1-09e70342722f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-0.ctlplane.example.com"}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:53:54 np0005539552 nova_compute[233724]: 2025-11-29 07:53:54.557 233728 DEBUG oslo_concurrency.lockutils [req-9fc7c1bb-56ab-42cb-811f-b1d5370ba42c req-3e2b792e-3f4f-44f5-9390-9d5b39f54bcc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-9bef976c-2981-4d19-aa60-8a550b7093ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:53:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:54.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:54 np0005539552 nova_compute[233724]: 2025-11-29 07:53:54.907 233728 DEBUG nova.compute.manager [req-96a0a1e2-6cc9-4502-bf1d-965381108e77 req-d23787c0-fa91-49fc-a34b-4e3578551af4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Received event network-vif-plugged-384b014a-c4e8-4d83-a8d1-09e70342722f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:53:54 np0005539552 nova_compute[233724]: 2025-11-29 07:53:54.908 233728 DEBUG oslo_concurrency.lockutils [req-96a0a1e2-6cc9-4502-bf1d-965381108e77 req-d23787c0-fa91-49fc-a34b-4e3578551af4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9bef976c-2981-4d19-aa60-8a550b7093ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:54 np0005539552 nova_compute[233724]: 2025-11-29 07:53:54.908 233728 DEBUG oslo_concurrency.lockutils [req-96a0a1e2-6cc9-4502-bf1d-965381108e77 req-d23787c0-fa91-49fc-a34b-4e3578551af4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9bef976c-2981-4d19-aa60-8a550b7093ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:54 np0005539552 nova_compute[233724]: 2025-11-29 07:53:54.908 233728 DEBUG oslo_concurrency.lockutils [req-96a0a1e2-6cc9-4502-bf1d-965381108e77 req-d23787c0-fa91-49fc-a34b-4e3578551af4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9bef976c-2981-4d19-aa60-8a550b7093ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:54 np0005539552 nova_compute[233724]: 2025-11-29 07:53:54.908 233728 DEBUG nova.compute.manager [req-96a0a1e2-6cc9-4502-bf1d-965381108e77 req-d23787c0-fa91-49fc-a34b-4e3578551af4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] No waiting events found dispatching network-vif-plugged-384b014a-c4e8-4d83-a8d1-09e70342722f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:53:54 np0005539552 nova_compute[233724]: 2025-11-29 07:53:54.909 233728 WARNING nova.compute.manager [req-96a0a1e2-6cc9-4502-bf1d-965381108e77 req-d23787c0-fa91-49fc-a34b-4e3578551af4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Received unexpected event network-vif-plugged-384b014a-c4e8-4d83-a8d1-09e70342722f for instance with vm_state active and task_state migrating.#033[00m
Nov 29 02:53:55 np0005539552 nova_compute[233724]: 2025-11-29 07:53:55.252 233728 DEBUG nova.compute.manager [req-f4177c69-21d7-411b-9f39-8a609a0c3cd7 req-39a2bf49-9dd8-4cb5-980c-be3964d2a25c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Received event network-vif-unplugged-384b014a-c4e8-4d83-a8d1-09e70342722f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:53:55 np0005539552 nova_compute[233724]: 2025-11-29 07:53:55.253 233728 DEBUG oslo_concurrency.lockutils [req-f4177c69-21d7-411b-9f39-8a609a0c3cd7 req-39a2bf49-9dd8-4cb5-980c-be3964d2a25c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9bef976c-2981-4d19-aa60-8a550b7093ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:55 np0005539552 nova_compute[233724]: 2025-11-29 07:53:55.254 233728 DEBUG oslo_concurrency.lockutils [req-f4177c69-21d7-411b-9f39-8a609a0c3cd7 req-39a2bf49-9dd8-4cb5-980c-be3964d2a25c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9bef976c-2981-4d19-aa60-8a550b7093ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:55 np0005539552 nova_compute[233724]: 2025-11-29 07:53:55.254 233728 DEBUG oslo_concurrency.lockutils [req-f4177c69-21d7-411b-9f39-8a609a0c3cd7 req-39a2bf49-9dd8-4cb5-980c-be3964d2a25c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9bef976c-2981-4d19-aa60-8a550b7093ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:55 np0005539552 nova_compute[233724]: 2025-11-29 07:53:55.255 233728 DEBUG nova.compute.manager [req-f4177c69-21d7-411b-9f39-8a609a0c3cd7 req-39a2bf49-9dd8-4cb5-980c-be3964d2a25c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] No waiting events found dispatching network-vif-unplugged-384b014a-c4e8-4d83-a8d1-09e70342722f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:53:55 np0005539552 nova_compute[233724]: 2025-11-29 07:53:55.255 233728 DEBUG nova.compute.manager [req-f4177c69-21d7-411b-9f39-8a609a0c3cd7 req-39a2bf49-9dd8-4cb5-980c-be3964d2a25c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Received event network-vif-unplugged-384b014a-c4e8-4d83-a8d1-09e70342722f for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:53:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:55.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:55 np0005539552 podman[243099]: 2025-11-29 07:53:55.980382277 +0000 UTC m=+0.067537538 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:53:56 np0005539552 podman[243100]: 2025-11-29 07:53:56.017547738 +0000 UTC m=+0.094410219 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 02:53:56 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:56 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:53:56 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:53:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:56.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:57 np0005539552 nova_compute[233724]: 2025-11-29 07:53:56.999 233728 DEBUG nova.compute.manager [req-db6d1fe0-25e2-4acf-9203-e376de0eeb8a req-0fca2245-7c41-4f82-878d-d217174076f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Received event network-vif-plugged-384b014a-c4e8-4d83-a8d1-09e70342722f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:53:57 np0005539552 nova_compute[233724]: 2025-11-29 07:53:57.000 233728 DEBUG oslo_concurrency.lockutils [req-db6d1fe0-25e2-4acf-9203-e376de0eeb8a req-0fca2245-7c41-4f82-878d-d217174076f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9bef976c-2981-4d19-aa60-8a550b7093ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:57 np0005539552 nova_compute[233724]: 2025-11-29 07:53:57.000 233728 DEBUG oslo_concurrency.lockutils [req-db6d1fe0-25e2-4acf-9203-e376de0eeb8a req-0fca2245-7c41-4f82-878d-d217174076f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9bef976c-2981-4d19-aa60-8a550b7093ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:57 np0005539552 nova_compute[233724]: 2025-11-29 07:53:57.001 233728 DEBUG oslo_concurrency.lockutils [req-db6d1fe0-25e2-4acf-9203-e376de0eeb8a req-0fca2245-7c41-4f82-878d-d217174076f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9bef976c-2981-4d19-aa60-8a550b7093ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:57 np0005539552 nova_compute[233724]: 2025-11-29 07:53:57.001 233728 DEBUG nova.compute.manager [req-db6d1fe0-25e2-4acf-9203-e376de0eeb8a req-0fca2245-7c41-4f82-878d-d217174076f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] No waiting events found dispatching network-vif-plugged-384b014a-c4e8-4d83-a8d1-09e70342722f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:53:57 np0005539552 nova_compute[233724]: 2025-11-29 07:53:57.001 233728 WARNING nova.compute.manager [req-db6d1fe0-25e2-4acf-9203-e376de0eeb8a req-0fca2245-7c41-4f82-878d-d217174076f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Received unexpected event network-vif-plugged-384b014a-c4e8-4d83-a8d1-09e70342722f for instance with vm_state active and task_state migrating.#033[00m
Nov 29 02:53:57 np0005539552 nova_compute[233724]: 2025-11-29 07:53:57.001 233728 DEBUG nova.compute.manager [req-db6d1fe0-25e2-4acf-9203-e376de0eeb8a req-0fca2245-7c41-4f82-878d-d217174076f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Received event network-vif-plugged-384b014a-c4e8-4d83-a8d1-09e70342722f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:53:57 np0005539552 nova_compute[233724]: 2025-11-29 07:53:57.002 233728 DEBUG oslo_concurrency.lockutils [req-db6d1fe0-25e2-4acf-9203-e376de0eeb8a req-0fca2245-7c41-4f82-878d-d217174076f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9bef976c-2981-4d19-aa60-8a550b7093ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:57 np0005539552 nova_compute[233724]: 2025-11-29 07:53:57.002 233728 DEBUG oslo_concurrency.lockutils [req-db6d1fe0-25e2-4acf-9203-e376de0eeb8a req-0fca2245-7c41-4f82-878d-d217174076f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9bef976c-2981-4d19-aa60-8a550b7093ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:57 np0005539552 nova_compute[233724]: 2025-11-29 07:53:57.002 233728 DEBUG oslo_concurrency.lockutils [req-db6d1fe0-25e2-4acf-9203-e376de0eeb8a req-0fca2245-7c41-4f82-878d-d217174076f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9bef976c-2981-4d19-aa60-8a550b7093ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:57 np0005539552 nova_compute[233724]: 2025-11-29 07:53:57.003 233728 DEBUG nova.compute.manager [req-db6d1fe0-25e2-4acf-9203-e376de0eeb8a req-0fca2245-7c41-4f82-878d-d217174076f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] No waiting events found dispatching network-vif-plugged-384b014a-c4e8-4d83-a8d1-09e70342722f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:53:57 np0005539552 nova_compute[233724]: 2025-11-29 07:53:57.003 233728 WARNING nova.compute.manager [req-db6d1fe0-25e2-4acf-9203-e376de0eeb8a req-0fca2245-7c41-4f82-878d-d217174076f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Received unexpected event network-vif-plugged-384b014a-c4e8-4d83-a8d1-09e70342722f for instance with vm_state active and task_state migrating.#033[00m
Nov 29 02:53:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:57.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:58 np0005539552 podman[243190]: 2025-11-29 07:53:58.037464351 +0000 UTC m=+0.127303765 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 02:53:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:58.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:58 np0005539552 nova_compute[233724]: 2025-11-29 07:53:58.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:53:58 np0005539552 nova_compute[233724]: 2025-11-29 07:53:58.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 02:53:59 np0005539552 nova_compute[233724]: 2025-11-29 07:53:59.120 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:59 np0005539552 nova_compute[233724]: 2025-11-29 07:53:59.241 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:53:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:59.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:00 np0005539552 nova_compute[233724]: 2025-11-29 07:54:00.138 233728 DEBUG oslo_concurrency.lockutils [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Acquiring lock "9bef976c-2981-4d19-aa60-8a550b7093ca-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:00 np0005539552 nova_compute[233724]: 2025-11-29 07:54:00.139 233728 DEBUG oslo_concurrency.lockutils [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Lock "9bef976c-2981-4d19-aa60-8a550b7093ca-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:00 np0005539552 nova_compute[233724]: 2025-11-29 07:54:00.139 233728 DEBUG oslo_concurrency.lockutils [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Lock "9bef976c-2981-4d19-aa60-8a550b7093ca-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:00 np0005539552 nova_compute[233724]: 2025-11-29 07:54:00.176 233728 DEBUG oslo_concurrency.lockutils [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:00 np0005539552 nova_compute[233724]: 2025-11-29 07:54:00.176 233728 DEBUG oslo_concurrency.lockutils [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:00 np0005539552 nova_compute[233724]: 2025-11-29 07:54:00.177 233728 DEBUG oslo_concurrency.lockutils [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:00 np0005539552 nova_compute[233724]: 2025-11-29 07:54:00.177 233728 DEBUG nova.compute.resource_tracker [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:54:00 np0005539552 nova_compute[233724]: 2025-11-29 07:54:00.177 233728 DEBUG oslo_concurrency.processutils [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:00 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:54:00 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1282798557' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:54:00 np0005539552 nova_compute[233724]: 2025-11-29 07:54:00.625 233728 DEBUG oslo_concurrency.processutils [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:00.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:00 np0005539552 nova_compute[233724]: 2025-11-29 07:54:00.822 233728 WARNING nova.virt.libvirt.driver [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:54:00 np0005539552 nova_compute[233724]: 2025-11-29 07:54:00.823 233728 DEBUG nova.compute.resource_tracker [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4936MB free_disk=20.89746856689453GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:54:00 np0005539552 nova_compute[233724]: 2025-11-29 07:54:00.823 233728 DEBUG oslo_concurrency.lockutils [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:00 np0005539552 nova_compute[233724]: 2025-11-29 07:54:00.823 233728 DEBUG oslo_concurrency.lockutils [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:01 np0005539552 nova_compute[233724]: 2025-11-29 07:54:01.077 233728 DEBUG nova.compute.resource_tracker [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Migration for instance 9bef976c-2981-4d19-aa60-8a550b7093ca refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Nov 29 02:54:01 np0005539552 nova_compute[233724]: 2025-11-29 07:54:01.122 233728 DEBUG nova.compute.resource_tracker [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491#033[00m
Nov 29 02:54:01 np0005539552 nova_compute[233724]: 2025-11-29 07:54:01.171 233728 DEBUG nova.compute.resource_tracker [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Migration 82482b02-695c-4ea4-8a6d-c7ed11d4be01 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Nov 29 02:54:01 np0005539552 nova_compute[233724]: 2025-11-29 07:54:01.171 233728 DEBUG nova.compute.resource_tracker [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:54:01 np0005539552 nova_compute[233724]: 2025-11-29 07:54:01.172 233728 DEBUG nova.compute.resource_tracker [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:54:01 np0005539552 nova_compute[233724]: 2025-11-29 07:54:01.218 233728 DEBUG oslo_concurrency.processutils [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:01 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:54:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:01.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:54:01 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:54:01 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3285628902' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:54:01 np0005539552 nova_compute[233724]: 2025-11-29 07:54:01.638 233728 DEBUG oslo_concurrency.processutils [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:01 np0005539552 nova_compute[233724]: 2025-11-29 07:54:01.647 233728 DEBUG nova.compute.provider_tree [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:54:01 np0005539552 nova_compute[233724]: 2025-11-29 07:54:01.665 233728 DEBUG nova.scheduler.client.report [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:54:01 np0005539552 nova_compute[233724]: 2025-11-29 07:54:01.695 233728 DEBUG nova.compute.resource_tracker [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:54:01 np0005539552 nova_compute[233724]: 2025-11-29 07:54:01.696 233728 DEBUG oslo_concurrency.lockutils [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.873s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:01 np0005539552 nova_compute[233724]: 2025-11-29 07:54:01.701 233728 INFO nova.compute.manager [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Migrating instance to compute-0.ctlplane.example.com finished successfully.#033[00m
Nov 29 02:54:01 np0005539552 nova_compute[233724]: 2025-11-29 07:54:01.809 233728 INFO nova.scheduler.client.report [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Deleted allocation for migration 82482b02-695c-4ea4-8a6d-c7ed11d4be01#033[00m
Nov 29 02:54:01 np0005539552 nova_compute[233724]: 2025-11-29 07:54:01.810 233728 DEBUG nova.virt.libvirt.driver [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662#033[00m
Nov 29 02:54:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:02.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:03.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:03 np0005539552 nova_compute[233724]: 2025-11-29 07:54:03.942 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:04 np0005539552 nova_compute[233724]: 2025-11-29 07:54:04.122 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:04 np0005539552 nova_compute[233724]: 2025-11-29 07:54:04.243 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:04.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:04 np0005539552 nova_compute[233724]: 2025-11-29 07:54:04.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:04 np0005539552 nova_compute[233724]: 2025-11-29 07:54:04.925 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:54:04 np0005539552 nova_compute[233724]: 2025-11-29 07:54:04.925 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:54:04 np0005539552 nova_compute[233724]: 2025-11-29 07:54:04.942 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 02:54:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:05.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:05 np0005539552 nova_compute[233724]: 2025-11-29 07:54:05.936 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:06 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:06.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:06 np0005539552 nova_compute[233724]: 2025-11-29 07:54:06.796 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402831.7945752, 9bef976c-2981-4d19-aa60-8a550b7093ca => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:54:06 np0005539552 nova_compute[233724]: 2025-11-29 07:54:06.796 233728 INFO nova.compute.manager [-] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:54:06 np0005539552 nova_compute[233724]: 2025-11-29 07:54:06.816 233728 DEBUG nova.compute.manager [None req-5527999f-ae13-4714-897b-206fcbd5e06a - - - - - -] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:54:06 np0005539552 nova_compute[233724]: 2025-11-29 07:54:06.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:06 np0005539552 nova_compute[233724]: 2025-11-29 07:54:06.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:07 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:07.104 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:54:07 np0005539552 nova_compute[233724]: 2025-11-29 07:54:07.104 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:07 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:07.105 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:54:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:07.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:07 np0005539552 nova_compute[233724]: 2025-11-29 07:54:07.922 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:07 np0005539552 nova_compute[233724]: 2025-11-29 07:54:07.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:08.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:08 np0005539552 nova_compute[233724]: 2025-11-29 07:54:08.919 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:08 np0005539552 nova_compute[233724]: 2025-11-29 07:54:08.937 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:08 np0005539552 nova_compute[233724]: 2025-11-29 07:54:08.938 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:54:08 np0005539552 nova_compute[233724]: 2025-11-29 07:54:08.938 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:09 np0005539552 nova_compute[233724]: 2025-11-29 07:54:09.123 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:09 np0005539552 nova_compute[233724]: 2025-11-29 07:54:09.284 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:09.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:09 np0005539552 nova_compute[233724]: 2025-11-29 07:54:09.934 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:09 np0005539552 nova_compute[233724]: 2025-11-29 07:54:09.954 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:09 np0005539552 nova_compute[233724]: 2025-11-29 07:54:09.954 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:09 np0005539552 nova_compute[233724]: 2025-11-29 07:54:09.954 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:09 np0005539552 nova_compute[233724]: 2025-11-29 07:54:09.954 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:54:09 np0005539552 nova_compute[233724]: 2025-11-29 07:54:09.954 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:10 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:54:10 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1968772114' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:54:10 np0005539552 nova_compute[233724]: 2025-11-29 07:54:10.370 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:10 np0005539552 nova_compute[233724]: 2025-11-29 07:54:10.541 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:54:10 np0005539552 nova_compute[233724]: 2025-11-29 07:54:10.542 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4951MB free_disk=20.897281646728516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:54:10 np0005539552 nova_compute[233724]: 2025-11-29 07:54:10.542 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:10 np0005539552 nova_compute[233724]: 2025-11-29 07:54:10.543 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:10.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:10 np0005539552 nova_compute[233724]: 2025-11-29 07:54:10.803 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:54:10 np0005539552 nova_compute[233724]: 2025-11-29 07:54:10.804 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:54:10 np0005539552 nova_compute[233724]: 2025-11-29 07:54:10.821 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:11 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:54:11 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1391016720' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:54:11 np0005539552 nova_compute[233724]: 2025-11-29 07:54:11.237 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:11 np0005539552 nova_compute[233724]: 2025-11-29 07:54:11.244 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:54:11 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:11 np0005539552 nova_compute[233724]: 2025-11-29 07:54:11.354 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:54:11 np0005539552 nova_compute[233724]: 2025-11-29 07:54:11.357 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:54:11 np0005539552 nova_compute[233724]: 2025-11-29 07:54:11.358 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:11 np0005539552 nova_compute[233724]: 2025-11-29 07:54:11.359 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:11 np0005539552 nova_compute[233724]: 2025-11-29 07:54:11.359 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 02:54:11 np0005539552 nova_compute[233724]: 2025-11-29 07:54:11.376 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 02:54:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.002000055s ======
Nov 29 02:54:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:11.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000055s
Nov 29 02:54:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:12.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:13.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:14 np0005539552 nova_compute[233724]: 2025-11-29 07:54:14.126 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:14 np0005539552 nova_compute[233724]: 2025-11-29 07:54:14.287 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:14.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:15.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:16 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:16.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:17.106 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:17.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:18.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:19 np0005539552 nova_compute[233724]: 2025-11-29 07:54:19.129 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:19 np0005539552 nova_compute[233724]: 2025-11-29 07:54:19.290 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:19.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:20 np0005539552 nova_compute[233724]: 2025-11-29 07:54:20.420 233728 DEBUG oslo_concurrency.lockutils [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Acquiring lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:54:20 np0005539552 nova_compute[233724]: 2025-11-29 07:54:20.421 233728 DEBUG oslo_concurrency.lockutils [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:54:20 np0005539552 nova_compute[233724]: 2025-11-29 07:54:20.430 233728 DEBUG oslo_concurrency.lockutils [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Acquiring lock "7067462a-37a6-458e-b96c-76adcea5fdfa" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:54:20 np0005539552 nova_compute[233724]: 2025-11-29 07:54:20.430 233728 DEBUG oslo_concurrency.lockutils [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:54:20 np0005539552 nova_compute[233724]: 2025-11-29 07:54:20.449 233728 DEBUG nova.compute.manager [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 02:54:20 np0005539552 nova_compute[233724]: 2025-11-29 07:54:20.452 233728 DEBUG nova.compute.manager [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 02:54:20 np0005539552 nova_compute[233724]: 2025-11-29 07:54:20.553 233728 DEBUG oslo_concurrency.lockutils [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:54:20 np0005539552 nova_compute[233724]: 2025-11-29 07:54:20.553 233728 DEBUG oslo_concurrency.lockutils [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:54:20 np0005539552 nova_compute[233724]: 2025-11-29 07:54:20.566 233728 DEBUG nova.virt.hardware [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 02:54:20 np0005539552 nova_compute[233724]: 2025-11-29 07:54:20.566 233728 INFO nova.compute.claims [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Claim successful on node compute-2.ctlplane.example.com
Nov 29 02:54:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:20.602 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:54:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:20.603 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:54:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:20.603 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:54:20 np0005539552 nova_compute[233724]: 2025-11-29 07:54:20.660 233728 DEBUG oslo_concurrency.lockutils [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:54:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:20.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:21 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:21 np0005539552 nova_compute[233724]: 2025-11-29 07:54:21.491 233728 DEBUG oslo_concurrency.processutils [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:54:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:21.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:21 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:54:21 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3474179625' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:54:21 np0005539552 nova_compute[233724]: 2025-11-29 07:54:21.942 233728 DEBUG oslo_concurrency.processutils [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:54:21 np0005539552 nova_compute[233724]: 2025-11-29 07:54:21.948 233728 DEBUG nova.compute.provider_tree [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:54:21 np0005539552 nova_compute[233724]: 2025-11-29 07:54:21.971 233728 DEBUG nova.scheduler.client.report [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 02:54:21 np0005539552 nova_compute[233724]: 2025-11-29 07:54:21.998 233728 DEBUG oslo_concurrency.lockutils [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.445s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:54:21 np0005539552 nova_compute[233724]: 2025-11-29 07:54:21.999 233728 DEBUG nova.compute.manager [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 02:54:22 np0005539552 nova_compute[233724]: 2025-11-29 07:54:22.001 233728 DEBUG oslo_concurrency.lockutils [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.341s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:54:22 np0005539552 nova_compute[233724]: 2025-11-29 07:54:22.008 233728 DEBUG nova.virt.hardware [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 02:54:22 np0005539552 nova_compute[233724]: 2025-11-29 07:54:22.008 233728 INFO nova.compute.claims [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Claim successful on node compute-2.ctlplane.example.com
Nov 29 02:54:22 np0005539552 nova_compute[233724]: 2025-11-29 07:54:22.068 233728 DEBUG nova.compute.manager [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 02:54:22 np0005539552 nova_compute[233724]: 2025-11-29 07:54:22.068 233728 DEBUG nova.network.neutron [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 02:54:22 np0005539552 nova_compute[233724]: 2025-11-29 07:54:22.089 233728 INFO nova.virt.libvirt.driver [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 02:54:22 np0005539552 nova_compute[233724]: 2025-11-29 07:54:22.108 233728 DEBUG nova.compute.manager [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 02:54:22 np0005539552 nova_compute[233724]: 2025-11-29 07:54:22.176 233728 INFO nova.virt.block_device [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Booting with volume e52d8ac1-8970-4cf0-9aa0-795f616090d0 at /dev/vda
Nov 29 02:54:22 np0005539552 nova_compute[233724]: 2025-11-29 07:54:22.183 233728 DEBUG oslo_concurrency.processutils [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:54:22 np0005539552 nova_compute[233724]: 2025-11-29 07:54:22.378 233728 DEBUG nova.policy [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b8f5b14bc98a47f29238140d1d3f1220', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f91d373d1ef64146866ef08735a75efa', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 02:54:22 np0005539552 nova_compute[233724]: 2025-11-29 07:54:22.390 233728 DEBUG os_brick.utils [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 02:54:22 np0005539552 nova_compute[233724]: 2025-11-29 07:54:22.392 233728 INFO oslo.privsep.daemon [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmpct7kzjhc/privsep.sock']
Nov 29 02:54:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:54:22 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2843209456' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:54:22 np0005539552 nova_compute[233724]: 2025-11-29 07:54:22.608 233728 DEBUG oslo_concurrency.processutils [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:54:22 np0005539552 nova_compute[233724]: 2025-11-29 07:54:22.614 233728 DEBUG nova.compute.provider_tree [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:54:22 np0005539552 nova_compute[233724]: 2025-11-29 07:54:22.636 233728 DEBUG nova.scheduler.client.report [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 02:54:22 np0005539552 nova_compute[233724]: 2025-11-29 07:54:22.669 233728 DEBUG oslo_concurrency.lockutils [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:54:22 np0005539552 nova_compute[233724]: 2025-11-29 07:54:22.670 233728 DEBUG nova.compute.manager [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 02:54:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:22.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:22 np0005539552 nova_compute[233724]: 2025-11-29 07:54:22.730 233728 DEBUG nova.compute.manager [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 02:54:22 np0005539552 nova_compute[233724]: 2025-11-29 07:54:22.731 233728 DEBUG nova.network.neutron [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 02:54:22 np0005539552 nova_compute[233724]: 2025-11-29 07:54:22.763 233728 INFO nova.virt.libvirt.driver [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 02:54:22 np0005539552 nova_compute[233724]: 2025-11-29 07:54:22.779 233728 DEBUG nova.compute.manager [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 02:54:22 np0005539552 nova_compute[233724]: 2025-11-29 07:54:22.824 233728 INFO nova.virt.block_device [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Booting with volume 3ef07b78-0409-49cf-a941-8a19b02dd939 at /dev/vda
Nov 29 02:54:23 np0005539552 nova_compute[233724]: 2025-11-29 07:54:23.003 233728 DEBUG nova.policy [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '85f5548e01234fe4ae9b88e998e943f8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1963a097b7694450aa0d7c30b27b38ac', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 02:54:23 np0005539552 nova_compute[233724]: 2025-11-29 07:54:23.019 233728 DEBUG os_brick.utils [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 02:54:23 np0005539552 nova_compute[233724]: 2025-11-29 07:54:23.120 233728 INFO oslo.privsep.daemon [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Spawned new privsep daemon via rootwrap
Nov 29 02:54:23 np0005539552 nova_compute[233724]: 2025-11-29 07:54:22.991 243418 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 29 02:54:23 np0005539552 nova_compute[233724]: 2025-11-29 07:54:22.994 243418 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 29 02:54:23 np0005539552 nova_compute[233724]: 2025-11-29 07:54:22.996 243418 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Nov 29 02:54:23 np0005539552 nova_compute[233724]: 2025-11-29 07:54:22.997 243418 INFO oslo.privsep.daemon [-] privsep daemon running as pid 243418
Nov 29 02:54:23 np0005539552 nova_compute[233724]: 2025-11-29 07:54:23.124 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[e394dcc8-c450-47a5-b5d0-1515c57ca55e]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:54:23 np0005539552 nova_compute[233724]: 2025-11-29 07:54:23.125 233728 WARNING oslo_privsep.priv_context [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] privsep daemon already running
Nov 29 02:54:23 np0005539552 nova_compute[233724]: 2025-11-29 07:54:23.239 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:54:23 np0005539552 nova_compute[233724]: 2025-11-29 07:54:23.239 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:54:23 np0005539552 nova_compute[233724]: 2025-11-29 07:54:23.252 243418 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:54:23 np0005539552 nova_compute[233724]: 2025-11-29 07:54:23.253 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[34a9abcb-923d-4864-9e00-0070b350b9a1]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:54:23 np0005539552 nova_compute[233724]: 2025-11-29 07:54:23.254 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:54:23 np0005539552 nova_compute[233724]: 2025-11-29 07:54:23.257 243418 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:54:23 np0005539552 nova_compute[233724]: 2025-11-29 07:54:23.258 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[9c4fa479-0bf3-42b8-942c-f94c2b2ded84]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:54:23 np0005539552 nova_compute[233724]: 2025-11-29 07:54:23.258 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:54:23 np0005539552 nova_compute[233724]: 2025-11-29 07:54:23.262 243418 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:54:23 np0005539552 nova_compute[233724]: 2025-11-29 07:54:23.262 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[c6997c0b-dca4-41ac-b8ef-79aec0c77f25]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0e73df9328', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:54:23 np0005539552 nova_compute[233724]: 2025-11-29 07:54:23.264 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:54:23 np0005539552 nova_compute[233724]: 2025-11-29 07:54:23.270 243418 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:54:23 np0005539552 nova_compute[233724]: 2025-11-29 07:54:23.270 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[017be733-bd11-4ad4-90f5-bd2fd1e8dfcc]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0e73df9328', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:54:23 np0005539552 nova_compute[233724]: 2025-11-29 07:54:23.271 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:54:23 np0005539552 nova_compute[233724]: 2025-11-29 07:54:23.277 243418 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:54:23 np0005539552 nova_compute[233724]: 2025-11-29 07:54:23.277 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[9417f899-0f24-4376-a179-dbdf81df859c]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:54:23 np0005539552 nova_compute[233724]: 2025-11-29 07:54:23.279 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[c5077a57-6dfa-4b9d-81b5-b5e037ca6dd7]: (4, '6fbde64a-d978-4f1a-a29d-a77e1f5a1987') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:54:23 np0005539552 nova_compute[233724]: 2025-11-29 07:54:23.280 233728 DEBUG oslo_concurrency.processutils [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:54:23 np0005539552 nova_compute[233724]: 2025-11-29 07:54:23.280 243418 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:54:23 np0005539552 nova_compute[233724]: 2025-11-29 07:54:23.280 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[50d6e4a0-cace-4daa-b7e5-ca7482bfe4ba]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:54:23 np0005539552 nova_compute[233724]: 2025-11-29 07:54:23.301 233728 DEBUG oslo_concurrency.processutils [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] CMD "nvme version" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:54:23 np0005539552 nova_compute[233724]: 2025-11-29 07:54:23.301 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[a78a79ac-7f97-4b53-b919-6a8a67bbe49b]: (4, '6fbde64a-d978-4f1a-a29d-a77e1f5a1987') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:54:23 np0005539552 nova_compute[233724]: 2025-11-29 07:54:23.305 233728 DEBUG oslo_concurrency.processutils [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:54:23 np0005539552 nova_compute[233724]: 2025-11-29 07:54:23.325 233728 DEBUG os_brick.initiator.connectors.lightos [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 02:54:23 np0005539552 nova_compute[233724]: 2025-11-29 07:54:23.326 233728 DEBUG os_brick.initiator.connectors.lightos [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 02:54:23 np0005539552 nova_compute[233724]: 2025-11-29 07:54:23.326 233728 DEBUG os_brick.initiator.connectors.lightos [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 02:54:23 np0005539552 nova_compute[233724]: 2025-11-29 07:54:23.327 233728 DEBUG os_brick.utils [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] <== get_connector_properties: return (935ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0e73df9328', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '6fbde64a-d978-4f1a-a29d-a77e1f5a1987', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 02:54:23 np0005539552 nova_compute[233724]: 2025-11-29 07:54:23.327 233728 DEBUG nova.virt.block_device [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Updating existing volume attachment record: 086dffa8-4128-4c55-89ad-f4a779ee7ea0 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 02:54:23 np0005539552 nova_compute[233724]: 2025-11-29 07:54:23.331 233728 DEBUG oslo_concurrency.processutils [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] CMD "nvme version" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:54:23 np0005539552 nova_compute[233724]: 2025-11-29 07:54:23.332 233728 DEBUG os_brick.initiator.connectors.lightos [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 02:54:23 np0005539552 nova_compute[233724]: 2025-11-29 07:54:23.332 233728 DEBUG os_brick.initiator.connectors.lightos [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 02:54:23 np0005539552 nova_compute[233724]: 2025-11-29 07:54:23.332 233728 DEBUG os_brick.initiator.connectors.lightos [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 02:54:23 np0005539552 nova_compute[233724]: 2025-11-29 07:54:23.333 233728 DEBUG os_brick.utils [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] <== get_connector_properties: return (313ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0e73df9328', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '6fbde64a-d978-4f1a-a29d-a77e1f5a1987', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 02:54:23 np0005539552 nova_compute[233724]: 2025-11-29 07:54:23.333 233728 DEBUG nova.virt.block_device [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Updating existing volume attachment record: 73064fe3-3d3a-4388-bf17-21bd966882ad _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 02:54:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:23.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:23 np0005539552 nova_compute[233724]: 2025-11-29 07:54:23.862 233728 DEBUG nova.network.neutron [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Successfully created port: 5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 02:54:23 np0005539552 nova_compute[233724]: 2025-11-29 07:54:23.903 233728 DEBUG nova.network.neutron [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Successfully created port: d1330295-51bc-4e64-a620-b63a6d8777fb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 02:54:24 np0005539552 nova_compute[233724]: 2025-11-29 07:54:24.131 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:24 np0005539552 nova_compute[233724]: 2025-11-29 07:54:24.328 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:24 np0005539552 nova_compute[233724]: 2025-11-29 07:54:24.483 233728 DEBUG nova.compute.manager [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:54:24 np0005539552 nova_compute[233724]: 2025-11-29 07:54:24.485 233728 DEBUG nova.virt.libvirt.driver [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:54:24 np0005539552 nova_compute[233724]: 2025-11-29 07:54:24.485 233728 INFO nova.virt.libvirt.driver [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Creating image(s)#033[00m
Nov 29 02:54:24 np0005539552 nova_compute[233724]: 2025-11-29 07:54:24.486 233728 DEBUG nova.virt.libvirt.driver [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 02:54:24 np0005539552 nova_compute[233724]: 2025-11-29 07:54:24.487 233728 DEBUG nova.virt.libvirt.driver [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Ensure instance console log exists: /var/lib/nova/instances/7067462a-37a6-458e-b96c-76adcea5fdfa/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:54:24 np0005539552 nova_compute[233724]: 2025-11-29 07:54:24.487 233728 DEBUG oslo_concurrency.lockutils [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:24 np0005539552 nova_compute[233724]: 2025-11-29 07:54:24.488 233728 DEBUG oslo_concurrency.lockutils [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:24 np0005539552 nova_compute[233724]: 2025-11-29 07:54:24.488 233728 DEBUG oslo_concurrency.lockutils [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:24 np0005539552 nova_compute[233724]: 2025-11-29 07:54:24.491 233728 DEBUG nova.compute.manager [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:54:24 np0005539552 nova_compute[233724]: 2025-11-29 07:54:24.493 233728 DEBUG nova.virt.libvirt.driver [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:54:24 np0005539552 nova_compute[233724]: 2025-11-29 07:54:24.494 233728 INFO nova.virt.libvirt.driver [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Creating image(s)#033[00m
Nov 29 02:54:24 np0005539552 nova_compute[233724]: 2025-11-29 07:54:24.494 233728 DEBUG nova.virt.libvirt.driver [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 02:54:24 np0005539552 nova_compute[233724]: 2025-11-29 07:54:24.495 233728 DEBUG nova.virt.libvirt.driver [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Ensure instance console log exists: /var/lib/nova/instances/56f3f72f-7db4-47c8-a4c3-20b2acc58aa9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:54:24 np0005539552 nova_compute[233724]: 2025-11-29 07:54:24.496 233728 DEBUG oslo_concurrency.lockutils [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:24 np0005539552 nova_compute[233724]: 2025-11-29 07:54:24.496 233728 DEBUG oslo_concurrency.lockutils [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:24 np0005539552 nova_compute[233724]: 2025-11-29 07:54:24.497 233728 DEBUG oslo_concurrency.lockutils [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:24.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:25 np0005539552 nova_compute[233724]: 2025-11-29 07:54:25.186 233728 DEBUG nova.network.neutron [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Successfully updated port: 5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 02:54:25 np0005539552 nova_compute[233724]: 2025-11-29 07:54:25.200 233728 DEBUG oslo_concurrency.lockutils [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Acquiring lock "refresh_cache-7067462a-37a6-458e-b96c-76adcea5fdfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:54:25 np0005539552 nova_compute[233724]: 2025-11-29 07:54:25.201 233728 DEBUG oslo_concurrency.lockutils [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Acquired lock "refresh_cache-7067462a-37a6-458e-b96c-76adcea5fdfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:54:25 np0005539552 nova_compute[233724]: 2025-11-29 07:54:25.201 233728 DEBUG nova.network.neutron [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:54:25 np0005539552 nova_compute[233724]: 2025-11-29 07:54:25.352 233728 DEBUG nova.compute.manager [req-740e9131-3dcd-4b1f-a9f2-33e8de8d7ea5 req-f8d53a5b-98cb-4b41-9cb8-8b57b8b31449 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Received event network-changed-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:54:25 np0005539552 nova_compute[233724]: 2025-11-29 07:54:25.353 233728 DEBUG nova.compute.manager [req-740e9131-3dcd-4b1f-a9f2-33e8de8d7ea5 req-f8d53a5b-98cb-4b41-9cb8-8b57b8b31449 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Refreshing instance network info cache due to event network-changed-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:54:25 np0005539552 nova_compute[233724]: 2025-11-29 07:54:25.353 233728 DEBUG oslo_concurrency.lockutils [req-740e9131-3dcd-4b1f-a9f2-33e8de8d7ea5 req-f8d53a5b-98cb-4b41-9cb8-8b57b8b31449 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-7067462a-37a6-458e-b96c-76adcea5fdfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:54:25 np0005539552 nova_compute[233724]: 2025-11-29 07:54:25.404 233728 DEBUG nova.network.neutron [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:54:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:25.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:25 np0005539552 nova_compute[233724]: 2025-11-29 07:54:25.770 233728 DEBUG nova.network.neutron [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Successfully updated port: d1330295-51bc-4e64-a620-b63a6d8777fb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 02:54:25 np0005539552 nova_compute[233724]: 2025-11-29 07:54:25.789 233728 DEBUG oslo_concurrency.lockutils [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Acquiring lock "refresh_cache-56f3f72f-7db4-47c8-a4c3-20b2acc58aa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:54:25 np0005539552 nova_compute[233724]: 2025-11-29 07:54:25.790 233728 DEBUG oslo_concurrency.lockutils [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Acquired lock "refresh_cache-56f3f72f-7db4-47c8-a4c3-20b2acc58aa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:54:25 np0005539552 nova_compute[233724]: 2025-11-29 07:54:25.790 233728 DEBUG nova.network.neutron [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:54:26 np0005539552 nova_compute[233724]: 2025-11-29 07:54:26.038 233728 DEBUG nova.network.neutron [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:54:26 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:26 np0005539552 nova_compute[233724]: 2025-11-29 07:54:26.442 233728 DEBUG nova.network.neutron [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Updating instance_info_cache with network_info: [{"id": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "address": "fa:16:3e:17:bf:87", "network": {"id": "7a06a21a-ba04-4a14-8d62-c931cbbf124d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-132947190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1963a097b7694450aa0d7c30b27b38ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d7fa9ca-8f", "ovs_interfaceid": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:54:26 np0005539552 nova_compute[233724]: 2025-11-29 07:54:26.477 233728 DEBUG oslo_concurrency.lockutils [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Releasing lock "refresh_cache-7067462a-37a6-458e-b96c-76adcea5fdfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:54:26 np0005539552 nova_compute[233724]: 2025-11-29 07:54:26.477 233728 DEBUG nova.compute.manager [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Instance network_info: |[{"id": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "address": "fa:16:3e:17:bf:87", "network": {"id": "7a06a21a-ba04-4a14-8d62-c931cbbf124d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-132947190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1963a097b7694450aa0d7c30b27b38ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d7fa9ca-8f", "ovs_interfaceid": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:54:26 np0005539552 nova_compute[233724]: 2025-11-29 07:54:26.478 233728 DEBUG oslo_concurrency.lockutils [req-740e9131-3dcd-4b1f-a9f2-33e8de8d7ea5 req-f8d53a5b-98cb-4b41-9cb8-8b57b8b31449 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-7067462a-37a6-458e-b96c-76adcea5fdfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:54:26 np0005539552 nova_compute[233724]: 2025-11-29 07:54:26.478 233728 DEBUG nova.network.neutron [req-740e9131-3dcd-4b1f-a9f2-33e8de8d7ea5 req-f8d53a5b-98cb-4b41-9cb8-8b57b8b31449 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Refreshing network info cache for port 5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:54:26 np0005539552 nova_compute[233724]: 2025-11-29 07:54:26.480 233728 DEBUG nova.virt.libvirt.driver [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Start _get_guest_xml network_info=[{"id": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "address": "fa:16:3e:17:bf:87", "network": {"id": "7a06a21a-ba04-4a14-8d62-c931cbbf124d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-132947190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1963a097b7694450aa0d7c30b27b38ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d7fa9ca-8f", "ovs_interfaceid": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-3ef07b78-0409-49cf-a941-8a19b02dd939', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '3ef07b78-0409-49cf-a941-8a19b02dd939', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '7067462a-37a6-458e-b96c-76adcea5fdfa', 'attached_at': '', 'detached_at': '', 'volume_id': '3ef07b78-0409-49cf-a941-8a19b02dd939', 'serial': '3ef07b78-0409-49cf-a941-8a19b02dd939'}, 'delete_on_termination': True, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'attachment_id': '73064fe3-3d3a-4388-bf17-21bd966882ad', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:54:26 np0005539552 nova_compute[233724]: 2025-11-29 07:54:26.484 233728 WARNING nova.virt.libvirt.driver [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:54:26 np0005539552 nova_compute[233724]: 2025-11-29 07:54:26.489 233728 DEBUG nova.virt.libvirt.host [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:54:26 np0005539552 nova_compute[233724]: 2025-11-29 07:54:26.490 233728 DEBUG nova.virt.libvirt.host [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:54:26 np0005539552 nova_compute[233724]: 2025-11-29 07:54:26.495 233728 DEBUG nova.virt.libvirt.host [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:54:26 np0005539552 nova_compute[233724]: 2025-11-29 07:54:26.495 233728 DEBUG nova.virt.libvirt.host [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:54:26 np0005539552 nova_compute[233724]: 2025-11-29 07:54:26.497 233728 DEBUG nova.virt.libvirt.driver [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:54:26 np0005539552 nova_compute[233724]: 2025-11-29 07:54:26.497 233728 DEBUG nova.virt.hardware [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:54:26 np0005539552 nova_compute[233724]: 2025-11-29 07:54:26.498 233728 DEBUG nova.virt.hardware [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:54:26 np0005539552 nova_compute[233724]: 2025-11-29 07:54:26.498 233728 DEBUG nova.virt.hardware [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:54:26 np0005539552 nova_compute[233724]: 2025-11-29 07:54:26.498 233728 DEBUG nova.virt.hardware [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:54:26 np0005539552 nova_compute[233724]: 2025-11-29 07:54:26.498 233728 DEBUG nova.virt.hardware [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:54:26 np0005539552 nova_compute[233724]: 2025-11-29 07:54:26.498 233728 DEBUG nova.virt.hardware [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:54:26 np0005539552 nova_compute[233724]: 2025-11-29 07:54:26.499 233728 DEBUG nova.virt.hardware [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:54:26 np0005539552 nova_compute[233724]: 2025-11-29 07:54:26.499 233728 DEBUG nova.virt.hardware [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:54:26 np0005539552 nova_compute[233724]: 2025-11-29 07:54:26.499 233728 DEBUG nova.virt.hardware [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:54:26 np0005539552 nova_compute[233724]: 2025-11-29 07:54:26.499 233728 DEBUG nova.virt.hardware [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:54:26 np0005539552 nova_compute[233724]: 2025-11-29 07:54:26.499 233728 DEBUG nova.virt.hardware [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:54:26 np0005539552 nova_compute[233724]: 2025-11-29 07:54:26.525 233728 DEBUG nova.storage.rbd_utils [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] rbd image 7067462a-37a6-458e-b96c-76adcea5fdfa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:54:26 np0005539552 nova_compute[233724]: 2025-11-29 07:54:26.529 233728 DEBUG oslo_concurrency.processutils [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:26 np0005539552 nova_compute[233724]: 2025-11-29 07:54:26.700 233728 DEBUG nova.compute.manager [req-2d59d400-91ca-4e5e-b2d5-5b4631b532dc req-76953bea-37d8-49f4-a8b4-537d477b10c3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Received event network-changed-d1330295-51bc-4e64-a620-b63a6d8777fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:54:26 np0005539552 nova_compute[233724]: 2025-11-29 07:54:26.701 233728 DEBUG nova.compute.manager [req-2d59d400-91ca-4e5e-b2d5-5b4631b532dc req-76953bea-37d8-49f4-a8b4-537d477b10c3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Refreshing instance network info cache due to event network-changed-d1330295-51bc-4e64-a620-b63a6d8777fb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:54:26 np0005539552 nova_compute[233724]: 2025-11-29 07:54:26.702 233728 DEBUG oslo_concurrency.lockutils [req-2d59d400-91ca-4e5e-b2d5-5b4631b532dc req-76953bea-37d8-49f4-a8b4-537d477b10c3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-56f3f72f-7db4-47c8-a4c3-20b2acc58aa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:54:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:26.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:26 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:54:26 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1305043553' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:54:26 np0005539552 podman[243526]: 2025-11-29 07:54:26.958674856 +0000 UTC m=+0.050120815 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 02:54:26 np0005539552 podman[243525]: 2025-11-29 07:54:26.961993126 +0000 UTC m=+0.053087955 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd)
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.140 233728 DEBUG nova.network.neutron [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Updating instance_info_cache with network_info: [{"id": "d1330295-51bc-4e64-a620-b63a6d8777fb", "address": "fa:16:3e:c2:bc:90", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1330295-51", "ovs_interfaceid": "d1330295-51bc-4e64-a620-b63a6d8777fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.377 233728 DEBUG oslo_concurrency.processutils [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.848s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.379 233728 DEBUG oslo_concurrency.lockutils [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.379 233728 DEBUG oslo_concurrency.lockutils [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.381 233728 DEBUG oslo_concurrency.lockutils [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.388 233728 DEBUG oslo_concurrency.lockutils [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Releasing lock "refresh_cache-56f3f72f-7db4-47c8-a4c3-20b2acc58aa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.389 233728 DEBUG nova.compute.manager [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Instance network_info: |[{"id": "d1330295-51bc-4e64-a620-b63a6d8777fb", "address": "fa:16:3e:c2:bc:90", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1330295-51", "ovs_interfaceid": "d1330295-51bc-4e64-a620-b63a6d8777fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.389 233728 DEBUG oslo_concurrency.lockutils [req-2d59d400-91ca-4e5e-b2d5-5b4631b532dc req-76953bea-37d8-49f4-a8b4-537d477b10c3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-56f3f72f-7db4-47c8-a4c3-20b2acc58aa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.390 233728 DEBUG nova.network.neutron [req-2d59d400-91ca-4e5e-b2d5-5b4631b532dc req-76953bea-37d8-49f4-a8b4-537d477b10c3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Refreshing network info cache for port d1330295-51bc-4e64-a620-b63a6d8777fb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.393 233728 DEBUG nova.virt.libvirt.driver [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Start _get_guest_xml network_info=[{"id": "d1330295-51bc-4e64-a620-b63a6d8777fb", "address": "fa:16:3e:c2:bc:90", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1330295-51", "ovs_interfaceid": "d1330295-51bc-4e64-a620-b63a6d8777fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-e52d8ac1-8970-4cf0-9aa0-795f616090d0', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'e52d8ac1-8970-4cf0-9aa0-795f616090d0', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '56f3f72f-7db4-47c8-a4c3-20b2acc58aa9', 'attached_at': '', 'detached_at': '', 'volume_id': 'e52d8ac1-8970-4cf0-9aa0-795f616090d0', 'serial': 'e52d8ac1-8970-4cf0-9aa0-795f616090d0'}, 'delete_on_termination': True, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'attachment_id': '086dffa8-4128-4c55-89ad-f4a779ee7ea0', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.406 233728 WARNING nova.virt.libvirt.driver [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.414 233728 DEBUG nova.virt.libvirt.host [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.416 233728 DEBUG nova.virt.libvirt.host [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.420 233728 DEBUG nova.virt.libvirt.host [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.421 233728 DEBUG nova.virt.libvirt.host [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.423 233728 DEBUG nova.virt.libvirt.driver [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.423 233728 DEBUG nova.virt.hardware [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.424 233728 DEBUG nova.virt.hardware [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.424 233728 DEBUG nova.virt.hardware [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.425 233728 DEBUG nova.virt.hardware [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.425 233728 DEBUG nova.virt.hardware [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.425 233728 DEBUG nova.virt.hardware [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.425 233728 DEBUG nova.virt.hardware [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.426 233728 DEBUG nova.virt.hardware [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.426 233728 DEBUG nova.virt.hardware [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.426 233728 DEBUG nova.virt.hardware [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.426 233728 DEBUG nova.virt.hardware [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.457 233728 DEBUG nova.storage.rbd_utils [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] rbd image 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.465 233728 DEBUG oslo_concurrency.processutils [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.491 233728 DEBUG nova.virt.libvirt.vif [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:54:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1301305546',display_name='tempest-LiveMigrationTest-server-1301305546',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1301305546',id=20,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1963a097b7694450aa0d7c30b27b38ac',ramdisk_id='',reservation_id='r-7mt4lwbv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-LiveMigrationTest-814240379',owner_user_name='tempest-LiveMigrationTest-814240379-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:54:22Z,user_data=None,user_id='85f5548e01234fe4ae9b88e998e943f8',uuid=7067462a-37a6-458e-b96c-76adcea5fdfa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "address": "fa:16:3e:17:bf:87", "network": {"id": "7a06a21a-ba04-4a14-8d62-c931cbbf124d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-132947190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1963a097b7694450aa0d7c30b27b38ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d7fa9ca-8f", "ovs_interfaceid": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.492 233728 DEBUG nova.network.os_vif_util [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Converting VIF {"id": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "address": "fa:16:3e:17:bf:87", "network": {"id": "7a06a21a-ba04-4a14-8d62-c931cbbf124d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-132947190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1963a097b7694450aa0d7c30b27b38ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d7fa9ca-8f", "ovs_interfaceid": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.493 233728 DEBUG nova.network.os_vif_util [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:bf:87,bridge_name='br-int',has_traffic_filtering=True,id=5d7fa9ca-8f51-4047-a121-6c4534fc5ae6,network=Network(7a06a21a-ba04-4a14-8d62-c931cbbf124d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d7fa9ca-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.495 233728 DEBUG nova.objects.instance [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Lazy-loading 'pci_devices' on Instance uuid 7067462a-37a6-458e-b96c-76adcea5fdfa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:54:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:27.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.516 233728 DEBUG nova.virt.libvirt.driver [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:54:27 np0005539552 nova_compute[233724]:  <uuid>7067462a-37a6-458e-b96c-76adcea5fdfa</uuid>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:  <name>instance-00000014</name>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <nova:name>tempest-LiveMigrationTest-server-1301305546</nova:name>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 07:54:26</nova:creationTime>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 02:54:27 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:        <nova:user uuid="85f5548e01234fe4ae9b88e998e943f8">tempest-LiveMigrationTest-814240379-project-member</nova:user>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:        <nova:project uuid="1963a097b7694450aa0d7c30b27b38ac">tempest-LiveMigrationTest-814240379</nova:project>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:        <nova:port uuid="5d7fa9ca-8f51-4047-a121-6c4534fc5ae6">
Nov 29 02:54:27 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <system>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <entry name="serial">7067462a-37a6-458e-b96c-76adcea5fdfa</entry>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <entry name="uuid">7067462a-37a6-458e-b96c-76adcea5fdfa</entry>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    </system>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:  <os>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:  </os>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:  <features>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:  </features>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:  </clock>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:  <devices>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/7067462a-37a6-458e-b96c-76adcea5fdfa_disk.config">
Nov 29 02:54:27 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      </source>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 02:54:27 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      </auth>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    </disk>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="volumes/volume-3ef07b78-0409-49cf-a941-8a19b02dd939">
Nov 29 02:54:27 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      </source>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 02:54:27 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      </auth>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <serial>3ef07b78-0409-49cf-a941-8a19b02dd939</serial>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    </disk>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:17:bf:87"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <target dev="tap5d7fa9ca-8f"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    </interface>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/7067462a-37a6-458e-b96c-76adcea5fdfa/console.log" append="off"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    </serial>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <video>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    </video>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    </rng>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:  </devices>
Nov 29 02:54:27 np0005539552 nova_compute[233724]: </domain>
Nov 29 02:54:27 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.518 233728 DEBUG nova.compute.manager [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Preparing to wait for external event network-vif-plugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.519 233728 DEBUG oslo_concurrency.lockutils [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Acquiring lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.519 233728 DEBUG oslo_concurrency.lockutils [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.519 233728 DEBUG oslo_concurrency.lockutils [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.520 233728 DEBUG nova.virt.libvirt.vif [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:54:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1301305546',display_name='tempest-LiveMigrationTest-server-1301305546',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1301305546',id=20,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1963a097b7694450aa0d7c30b27b38ac',ramdisk_id='',reservation_id='r-7mt4lwbv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-LiveMigrationTest-814240379',owner_user_name='tempest-LiveMigrationTest-814240379-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:54:22Z,user_data=None,user_id='85f5548e01234fe4ae9b88e998e943f8',uuid=7067462a-37a6-458e-b96c-76adcea5fdfa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "address": "fa:16:3e:17:bf:87", "network": {"id": "7a06a21a-ba04-4a14-8d62-c931cbbf124d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-132947190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1963a097b7694450aa0d7c30b27b38ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d7fa9ca-8f", "ovs_interfaceid": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.521 233728 DEBUG nova.network.os_vif_util [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Converting VIF {"id": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "address": "fa:16:3e:17:bf:87", "network": {"id": "7a06a21a-ba04-4a14-8d62-c931cbbf124d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-132947190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1963a097b7694450aa0d7c30b27b38ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d7fa9ca-8f", "ovs_interfaceid": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.521 233728 DEBUG nova.network.os_vif_util [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:bf:87,bridge_name='br-int',has_traffic_filtering=True,id=5d7fa9ca-8f51-4047-a121-6c4534fc5ae6,network=Network(7a06a21a-ba04-4a14-8d62-c931cbbf124d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d7fa9ca-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.522 233728 DEBUG os_vif [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:bf:87,bridge_name='br-int',has_traffic_filtering=True,id=5d7fa9ca-8f51-4047-a121-6c4534fc5ae6,network=Network(7a06a21a-ba04-4a14-8d62-c931cbbf124d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d7fa9ca-8f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.523 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.523 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.524 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.528 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.528 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5d7fa9ca-8f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.529 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5d7fa9ca-8f, col_values=(('external_ids', {'iface-id': '5d7fa9ca-8f51-4047-a121-6c4534fc5ae6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:17:bf:87', 'vm-uuid': '7067462a-37a6-458e-b96c-76adcea5fdfa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:27 np0005539552 NetworkManager[48926]: <info>  [1764402867.5319] manager: (tap5d7fa9ca-8f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.533 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.537 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.538 233728 INFO os_vif [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:bf:87,bridge_name='br-int',has_traffic_filtering=True,id=5d7fa9ca-8f51-4047-a121-6c4534fc5ae6,network=Network(7a06a21a-ba04-4a14-8d62-c931cbbf124d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d7fa9ca-8f')#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.583 233728 DEBUG nova.virt.libvirt.driver [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.583 233728 DEBUG nova.virt.libvirt.driver [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.584 233728 DEBUG nova.virt.libvirt.driver [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] No VIF found with MAC fa:16:3e:17:bf:87, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.584 233728 INFO nova.virt.libvirt.driver [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Using config drive#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.634 233728 DEBUG nova.storage.rbd_utils [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] rbd image 7067462a-37a6-458e-b96c-76adcea5fdfa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:54:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:54:27 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4116902197' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.910 233728 DEBUG oslo_concurrency.processutils [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.931 233728 DEBUG nova.virt.libvirt.vif [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:54:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-178880762',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-178880762',id=19,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f91d373d1ef64146866ef08735a75efa',ramdisk_id='',reservation_id='r-1xnv5qiw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1482931553',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1482931553-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:54:22Z,user_data=None,user_id='b8f5b14bc98a47f29238140d1d3f1220',uuid=56f3f72f-7db4-47c8-a4c3-20b2acc58aa9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d1330295-51bc-4e64-a620-b63a6d8777fb", "address": "fa:16:3e:c2:bc:90", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1330295-51", "ovs_interfaceid": "d1330295-51bc-4e64-a620-b63a6d8777fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.932 233728 DEBUG nova.network.os_vif_util [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Converting VIF {"id": "d1330295-51bc-4e64-a620-b63a6d8777fb", "address": "fa:16:3e:c2:bc:90", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1330295-51", "ovs_interfaceid": "d1330295-51bc-4e64-a620-b63a6d8777fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.932 233728 DEBUG nova.network.os_vif_util [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:bc:90,bridge_name='br-int',has_traffic_filtering=True,id=d1330295-51bc-4e64-a620-b63a6d8777fb,network=Network(ad69a0f4-0000-474b-9649-72cf1bf9f5c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1330295-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.933 233728 DEBUG nova.objects.instance [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Lazy-loading 'pci_devices' on Instance uuid 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.951 233728 DEBUG nova.virt.libvirt.driver [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:54:27 np0005539552 nova_compute[233724]:  <uuid>56f3f72f-7db4-47c8-a4c3-20b2acc58aa9</uuid>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:  <name>instance-00000013</name>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <nova:name>tempest-LiveAutoBlockMigrationV225Test-server-178880762</nova:name>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 07:54:27</nova:creationTime>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 02:54:27 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:        <nova:user uuid="b8f5b14bc98a47f29238140d1d3f1220">tempest-LiveAutoBlockMigrationV225Test-1482931553-project-member</nova:user>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:        <nova:project uuid="f91d373d1ef64146866ef08735a75efa">tempest-LiveAutoBlockMigrationV225Test-1482931553</nova:project>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:        <nova:port uuid="d1330295-51bc-4e64-a620-b63a6d8777fb">
Nov 29 02:54:27 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <system>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <entry name="serial">56f3f72f-7db4-47c8-a4c3-20b2acc58aa9</entry>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <entry name="uuid">56f3f72f-7db4-47c8-a4c3-20b2acc58aa9</entry>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    </system>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:  <os>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:  </os>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:  <features>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:  </features>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:  </clock>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:  <devices>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/56f3f72f-7db4-47c8-a4c3-20b2acc58aa9_disk.config">
Nov 29 02:54:27 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      </source>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 02:54:27 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      </auth>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    </disk>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="volumes/volume-e52d8ac1-8970-4cf0-9aa0-795f616090d0">
Nov 29 02:54:27 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      </source>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 02:54:27 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      </auth>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <serial>e52d8ac1-8970-4cf0-9aa0-795f616090d0</serial>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    </disk>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:c2:bc:90"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <target dev="tapd1330295-51"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    </interface>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/56f3f72f-7db4-47c8-a4c3-20b2acc58aa9/console.log" append="off"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    </serial>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <video>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    </video>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    </rng>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 02:54:27 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 02:54:27 np0005539552 nova_compute[233724]:  </devices>
Nov 29 02:54:27 np0005539552 nova_compute[233724]: </domain>
Nov 29 02:54:27 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.952 233728 DEBUG nova.compute.manager [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Preparing to wait for external event network-vif-plugged-d1330295-51bc-4e64-a620-b63a6d8777fb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.952 233728 DEBUG oslo_concurrency.lockutils [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Acquiring lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.952 233728 DEBUG oslo_concurrency.lockutils [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.953 233728 DEBUG oslo_concurrency.lockutils [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.953 233728 DEBUG nova.virt.libvirt.vif [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:54:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-178880762',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-178880762',id=19,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f91d373d1ef64146866ef08735a75efa',ramdisk_id='',reservation_id='r-1xnv5qiw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1482931553',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1482931553-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:54:22Z,user_data=None,user_id='b8f5b14bc98a47f29238140d1d3f1220',uuid=56f3f72f-7db4-47c8-a4c3-20b2acc58aa9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d1330295-51bc-4e64-a620-b63a6d8777fb", "address": "fa:16:3e:c2:bc:90", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1330295-51", "ovs_interfaceid": "d1330295-51bc-4e64-a620-b63a6d8777fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.953 233728 DEBUG nova.network.os_vif_util [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Converting VIF {"id": "d1330295-51bc-4e64-a620-b63a6d8777fb", "address": "fa:16:3e:c2:bc:90", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1330295-51", "ovs_interfaceid": "d1330295-51bc-4e64-a620-b63a6d8777fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.954 233728 DEBUG nova.network.os_vif_util [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:bc:90,bridge_name='br-int',has_traffic_filtering=True,id=d1330295-51bc-4e64-a620-b63a6d8777fb,network=Network(ad69a0f4-0000-474b-9649-72cf1bf9f5c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1330295-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.954 233728 DEBUG os_vif [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:bc:90,bridge_name='br-int',has_traffic_filtering=True,id=d1330295-51bc-4e64-a620-b63a6d8777fb,network=Network(ad69a0f4-0000-474b-9649-72cf1bf9f5c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1330295-51') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.955 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.955 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.955 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.958 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.958 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd1330295-51, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.958 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd1330295-51, col_values=(('external_ids', {'iface-id': 'd1330295-51bc-4e64-a620-b63a6d8777fb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c2:bc:90', 'vm-uuid': '56f3f72f-7db4-47c8-a4c3-20b2acc58aa9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:27 np0005539552 NetworkManager[48926]: <info>  [1764402867.9611] manager: (tapd1330295-51): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.961 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.966 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:27 np0005539552 nova_compute[233724]: 2025-11-29 07:54:27.967 233728 INFO os_vif [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:bc:90,bridge_name='br-int',has_traffic_filtering=True,id=d1330295-51bc-4e64-a620-b63a6d8777fb,network=Network(ad69a0f4-0000-474b-9649-72cf1bf9f5c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1330295-51')#033[00m
Nov 29 02:54:28 np0005539552 nova_compute[233724]: 2025-11-29 07:54:28.020 233728 DEBUG nova.virt.libvirt.driver [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:54:28 np0005539552 nova_compute[233724]: 2025-11-29 07:54:28.021 233728 DEBUG nova.virt.libvirt.driver [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:54:28 np0005539552 nova_compute[233724]: 2025-11-29 07:54:28.021 233728 DEBUG nova.virt.libvirt.driver [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] No VIF found with MAC fa:16:3e:c2:bc:90, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 02:54:28 np0005539552 nova_compute[233724]: 2025-11-29 07:54:28.021 233728 INFO nova.virt.libvirt.driver [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Using config drive#033[00m
Nov 29 02:54:28 np0005539552 nova_compute[233724]: 2025-11-29 07:54:28.048 233728 DEBUG nova.storage.rbd_utils [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] rbd image 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:54:28 np0005539552 nova_compute[233724]: 2025-11-29 07:54:28.194 233728 INFO nova.virt.libvirt.driver [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Creating config drive at /var/lib/nova/instances/7067462a-37a6-458e-b96c-76adcea5fdfa/disk.config#033[00m
Nov 29 02:54:28 np0005539552 nova_compute[233724]: 2025-11-29 07:54:28.202 233728 DEBUG oslo_concurrency.processutils [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7067462a-37a6-458e-b96c-76adcea5fdfa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc0xklfpd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:28 np0005539552 nova_compute[233724]: 2025-11-29 07:54:28.346 233728 DEBUG oslo_concurrency.processutils [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7067462a-37a6-458e-b96c-76adcea5fdfa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc0xklfpd" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:28 np0005539552 nova_compute[233724]: 2025-11-29 07:54:28.378 233728 DEBUG nova.storage.rbd_utils [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] rbd image 7067462a-37a6-458e-b96c-76adcea5fdfa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:54:28 np0005539552 nova_compute[233724]: 2025-11-29 07:54:28.383 233728 DEBUG oslo_concurrency.processutils [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7067462a-37a6-458e-b96c-76adcea5fdfa/disk.config 7067462a-37a6-458e-b96c-76adcea5fdfa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:28 np0005539552 nova_compute[233724]: 2025-11-29 07:54:28.686 233728 INFO nova.virt.libvirt.driver [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Creating config drive at /var/lib/nova/instances/56f3f72f-7db4-47c8-a4c3-20b2acc58aa9/disk.config#033[00m
Nov 29 02:54:28 np0005539552 nova_compute[233724]: 2025-11-29 07:54:28.691 233728 DEBUG oslo_concurrency.processutils [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/56f3f72f-7db4-47c8-a4c3-20b2acc58aa9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzv4umtho execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:28.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:28 np0005539552 nova_compute[233724]: 2025-11-29 07:54:28.746 233728 DEBUG nova.network.neutron [req-740e9131-3dcd-4b1f-a9f2-33e8de8d7ea5 req-f8d53a5b-98cb-4b41-9cb8-8b57b8b31449 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Updated VIF entry in instance network info cache for port 5d7fa9ca-8f51-4047-a121-6c4534fc5ae6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:54:28 np0005539552 nova_compute[233724]: 2025-11-29 07:54:28.747 233728 DEBUG nova.network.neutron [req-740e9131-3dcd-4b1f-a9f2-33e8de8d7ea5 req-f8d53a5b-98cb-4b41-9cb8-8b57b8b31449 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Updating instance_info_cache with network_info: [{"id": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "address": "fa:16:3e:17:bf:87", "network": {"id": "7a06a21a-ba04-4a14-8d62-c931cbbf124d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-132947190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1963a097b7694450aa0d7c30b27b38ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d7fa9ca-8f", "ovs_interfaceid": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:54:28 np0005539552 nova_compute[233724]: 2025-11-29 07:54:28.761 233728 DEBUG oslo_concurrency.lockutils [req-740e9131-3dcd-4b1f-a9f2-33e8de8d7ea5 req-f8d53a5b-98cb-4b41-9cb8-8b57b8b31449 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-7067462a-37a6-458e-b96c-76adcea5fdfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:54:28 np0005539552 nova_compute[233724]: 2025-11-29 07:54:28.814 233728 DEBUG oslo_concurrency.processutils [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/56f3f72f-7db4-47c8-a4c3-20b2acc58aa9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzv4umtho" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:28 np0005539552 nova_compute[233724]: 2025-11-29 07:54:28.845 233728 DEBUG nova.storage.rbd_utils [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] rbd image 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:54:28 np0005539552 nova_compute[233724]: 2025-11-29 07:54:28.849 233728 DEBUG oslo_concurrency.processutils [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/56f3f72f-7db4-47c8-a4c3-20b2acc58aa9/disk.config 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:29 np0005539552 podman[243704]: 2025-11-29 07:54:29.034696912 +0000 UTC m=+0.118806883 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 02:54:29 np0005539552 nova_compute[233724]: 2025-11-29 07:54:29.047 233728 DEBUG nova.network.neutron [req-2d59d400-91ca-4e5e-b2d5-5b4631b532dc req-76953bea-37d8-49f4-a8b4-537d477b10c3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Updated VIF entry in instance network info cache for port d1330295-51bc-4e64-a620-b63a6d8777fb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:54:29 np0005539552 nova_compute[233724]: 2025-11-29 07:54:29.047 233728 DEBUG nova.network.neutron [req-2d59d400-91ca-4e5e-b2d5-5b4631b532dc req-76953bea-37d8-49f4-a8b4-537d477b10c3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Updating instance_info_cache with network_info: [{"id": "d1330295-51bc-4e64-a620-b63a6d8777fb", "address": "fa:16:3e:c2:bc:90", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1330295-51", "ovs_interfaceid": "d1330295-51bc-4e64-a620-b63a6d8777fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:54:29 np0005539552 nova_compute[233724]: 2025-11-29 07:54:29.068 233728 DEBUG oslo_concurrency.lockutils [req-2d59d400-91ca-4e5e-b2d5-5b4631b532dc req-76953bea-37d8-49f4-a8b4-537d477b10c3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-56f3f72f-7db4-47c8-a4c3-20b2acc58aa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:54:29 np0005539552 nova_compute[233724]: 2025-11-29 07:54:29.330 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:29 np0005539552 nova_compute[233724]: 2025-11-29 07:54:29.453 233728 DEBUG oslo_concurrency.processutils [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7067462a-37a6-458e-b96c-76adcea5fdfa/disk.config 7067462a-37a6-458e-b96c-76adcea5fdfa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:29 np0005539552 nova_compute[233724]: 2025-11-29 07:54:29.454 233728 INFO nova.virt.libvirt.driver [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Deleting local config drive /var/lib/nova/instances/7067462a-37a6-458e-b96c-76adcea5fdfa/disk.config because it was imported into RBD.#033[00m
Nov 29 02:54:29 np0005539552 NetworkManager[48926]: <info>  [1764402869.5065] manager: (tap5d7fa9ca-8f): new Tun device (/org/freedesktop/NetworkManager/Devices/44)
Nov 29 02:54:29 np0005539552 kernel: tap5d7fa9ca-8f: entered promiscuous mode
Nov 29 02:54:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:29.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:29 np0005539552 nova_compute[233724]: 2025-11-29 07:54:29.511 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:29 np0005539552 ovn_controller[133798]: 2025-11-29T07:54:29Z|00072|binding|INFO|Claiming lport 5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 for this chassis.
Nov 29 02:54:29 np0005539552 ovn_controller[133798]: 2025-11-29T07:54:29Z|00073|binding|INFO|5d7fa9ca-8f51-4047-a121-6c4534fc5ae6: Claiming fa:16:3e:17:bf:87 10.100.0.5
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:29.528 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:bf:87 10.100.0.5'], port_security=['fa:16:3e:17:bf:87 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '7067462a-37a6-458e-b96c-76adcea5fdfa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7a06a21a-ba04-4a14-8d62-c931cbbf124d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1963a097b7694450aa0d7c30b27b38ac', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7cf396e5-2565-40f4-9bc8-f8d0b75eb4c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9eb8ff47-0cf8-4776-a959-1d6d6d7f49c2, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=5d7fa9ca-8f51-4047-a121-6c4534fc5ae6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:29.529 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 in datapath 7a06a21a-ba04-4a14-8d62-c931cbbf124d bound to our chassis#033[00m
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:29.532 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7a06a21a-ba04-4a14-8d62-c931cbbf124d#033[00m
Nov 29 02:54:29 np0005539552 systemd-udevd[243765]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:54:29 np0005539552 systemd-machined[196379]: New machine qemu-4-instance-00000014.
Nov 29 02:54:29 np0005539552 NetworkManager[48926]: <info>  [1764402869.5471] device (tap5d7fa9ca-8f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:54:29 np0005539552 NetworkManager[48926]: <info>  [1764402869.5478] device (tap5d7fa9ca-8f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:29.545 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[dc8c787b-0c4e-4555-8cdb-6c038bfc58e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:29.547 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7a06a21a-b1 in ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:29.550 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7a06a21a-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:29.550 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b79086a0-25b0-47ce-9fe2-93b0d33bd4bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:29.551 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[9e95922d-631d-4e81-96f4-f155af33ae36]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:29 np0005539552 systemd[1]: Started Virtual Machine qemu-4-instance-00000014.
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:29.564 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[046d6328-d9aa-4047-a8de-81886b36dc38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:29.591 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[12fe2d09-feeb-48e5-ae21-e18915ca2233]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:29 np0005539552 nova_compute[233724]: 2025-11-29 07:54:29.600 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:29 np0005539552 ovn_controller[133798]: 2025-11-29T07:54:29Z|00074|binding|INFO|Setting lport 5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 ovn-installed in OVS
Nov 29 02:54:29 np0005539552 ovn_controller[133798]: 2025-11-29T07:54:29Z|00075|binding|INFO|Setting lport 5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 up in Southbound
Nov 29 02:54:29 np0005539552 nova_compute[233724]: 2025-11-29 07:54:29.612 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:29.617 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[1b72379a-8f68-4465-86e9-397bd3144e8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:29 np0005539552 NetworkManager[48926]: <info>  [1764402869.6236] manager: (tap7a06a21a-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/45)
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:29.623 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e27c5b16-a494-48ad-912f-d01eb7a9bbbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:29.651 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[9694f803-d465-4e96-93de-04d338e08db5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:29.653 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[3722b365-9d89-4671-a4bb-d389bf02bae3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:29 np0005539552 NetworkManager[48926]: <info>  [1764402869.6749] device (tap7a06a21a-b0): carrier: link connected
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:29.680 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[c0804739-7d7b-4c3f-b4e8-b4c3e6335afd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:29.694 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b57c036d-cc1e-485c-b0ef-169ebfe544df]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7a06a21a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:44:a5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591536, 'reachable_time': 28737, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243803, 'error': None, 'target': 'ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:29.706 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4cf92891-900e-4563-be06-90a3191f5dcf]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe69:44a5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 591536, 'tstamp': 591536}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243804, 'error': None, 'target': 'ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:29.720 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[bf399bcc-008a-4b23-a09c-cb6e30845947]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7a06a21a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:44:a5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591536, 'reachable_time': 28737, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 243805, 'error': None, 'target': 'ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:29.747 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2b113d97-f2bf-4e0c-a3d0-bb5c875f682f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:29.806 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a59c439e-d16b-42c8-860d-9371f2621aa4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:29.807 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7a06a21a-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:29.807 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:29.807 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7a06a21a-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:29 np0005539552 nova_compute[233724]: 2025-11-29 07:54:29.809 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:29 np0005539552 NetworkManager[48926]: <info>  [1764402869.8102] manager: (tap7a06a21a-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Nov 29 02:54:29 np0005539552 kernel: tap7a06a21a-b0: entered promiscuous mode
Nov 29 02:54:29 np0005539552 nova_compute[233724]: 2025-11-29 07:54:29.813 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:29.814 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7a06a21a-b0, col_values=(('external_ids', {'iface-id': '2b822f56-587d-4c36-9c9a-d54b62b2616c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:29 np0005539552 nova_compute[233724]: 2025-11-29 07:54:29.815 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:29 np0005539552 ovn_controller[133798]: 2025-11-29T07:54:29Z|00076|binding|INFO|Releasing lport 2b822f56-587d-4c36-9c9a-d54b62b2616c from this chassis (sb_readonly=0)
Nov 29 02:54:29 np0005539552 nova_compute[233724]: 2025-11-29 07:54:29.831 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:29 np0005539552 nova_compute[233724]: 2025-11-29 07:54:29.833 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:29.834 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7a06a21a-ba04-4a14-8d62-c931cbbf124d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7a06a21a-ba04-4a14-8d62-c931cbbf124d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:29.835 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e9a3f369-93db-4468-9e60-7250035115f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:29.835 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-7a06a21a-ba04-4a14-8d62-c931cbbf124d
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/7a06a21a-ba04-4a14-8d62-c931cbbf124d.pid.haproxy
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 7a06a21a-ba04-4a14-8d62-c931cbbf124d
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 02:54:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:29.836 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d', 'env', 'PROCESS_TAG=haproxy-7a06a21a-ba04-4a14-8d62-c931cbbf124d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7a06a21a-ba04-4a14-8d62-c931cbbf124d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 02:54:30 np0005539552 podman[243840]: 2025-11-29 07:54:30.214997752 +0000 UTC m=+0.052147380 container create 8980b957322a364fd600a7a2d73f418d8e92e2591bb942732b29da58044df475 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Nov 29 02:54:30 np0005539552 systemd[1]: Started libpod-conmon-8980b957322a364fd600a7a2d73f418d8e92e2591bb942732b29da58044df475.scope.
Nov 29 02:54:30 np0005539552 systemd[1]: Started libcrun container.
Nov 29 02:54:30 np0005539552 podman[243840]: 2025-11-29 07:54:30.188512541 +0000 UTC m=+0.025662179 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:54:30 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77cdbcda98eb1668e766dab3d542860bca283269a96fb1edd4615e9a226cf78a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:54:30 np0005539552 podman[243840]: 2025-11-29 07:54:30.293408225 +0000 UTC m=+0.130557863 container init 8980b957322a364fd600a7a2d73f418d8e92e2591bb942732b29da58044df475 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 02:54:30 np0005539552 podman[243840]: 2025-11-29 07:54:30.299506391 +0000 UTC m=+0.136656009 container start 8980b957322a364fd600a7a2d73f418d8e92e2591bb942732b29da58044df475 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:54:30 np0005539552 neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d[243855]: [NOTICE]   (243859) : New worker (243861) forked
Nov 29 02:54:30 np0005539552 neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d[243855]: [NOTICE]   (243859) : Loading success.
Nov 29 02:54:30 np0005539552 nova_compute[233724]: 2025-11-29 07:54:30.384 233728 DEBUG nova.compute.manager [req-1c8bf684-2b46-48f8-ad00-74d42999719b req-69de117d-bf0a-4f76-8635-d229ebd155fe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Received event network-vif-plugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:54:30 np0005539552 nova_compute[233724]: 2025-11-29 07:54:30.385 233728 DEBUG oslo_concurrency.lockutils [req-1c8bf684-2b46-48f8-ad00-74d42999719b req-69de117d-bf0a-4f76-8635-d229ebd155fe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:30 np0005539552 nova_compute[233724]: 2025-11-29 07:54:30.385 233728 DEBUG oslo_concurrency.lockutils [req-1c8bf684-2b46-48f8-ad00-74d42999719b req-69de117d-bf0a-4f76-8635-d229ebd155fe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:30 np0005539552 nova_compute[233724]: 2025-11-29 07:54:30.386 233728 DEBUG oslo_concurrency.lockutils [req-1c8bf684-2b46-48f8-ad00-74d42999719b req-69de117d-bf0a-4f76-8635-d229ebd155fe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:30 np0005539552 nova_compute[233724]: 2025-11-29 07:54:30.387 233728 DEBUG nova.compute.manager [req-1c8bf684-2b46-48f8-ad00-74d42999719b req-69de117d-bf0a-4f76-8635-d229ebd155fe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Processing event network-vif-plugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 02:54:30 np0005539552 nova_compute[233724]: 2025-11-29 07:54:30.387 233728 DEBUG nova.compute.manager [req-1c8bf684-2b46-48f8-ad00-74d42999719b req-69de117d-bf0a-4f76-8635-d229ebd155fe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Received event network-vif-plugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:54:30 np0005539552 nova_compute[233724]: 2025-11-29 07:54:30.388 233728 DEBUG oslo_concurrency.lockutils [req-1c8bf684-2b46-48f8-ad00-74d42999719b req-69de117d-bf0a-4f76-8635-d229ebd155fe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:30 np0005539552 nova_compute[233724]: 2025-11-29 07:54:30.388 233728 DEBUG oslo_concurrency.lockutils [req-1c8bf684-2b46-48f8-ad00-74d42999719b req-69de117d-bf0a-4f76-8635-d229ebd155fe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:30 np0005539552 nova_compute[233724]: 2025-11-29 07:54:30.389 233728 DEBUG oslo_concurrency.lockutils [req-1c8bf684-2b46-48f8-ad00-74d42999719b req-69de117d-bf0a-4f76-8635-d229ebd155fe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:30 np0005539552 nova_compute[233724]: 2025-11-29 07:54:30.390 233728 DEBUG nova.compute.manager [req-1c8bf684-2b46-48f8-ad00-74d42999719b req-69de117d-bf0a-4f76-8635-d229ebd155fe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] No waiting events found dispatching network-vif-plugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:54:30 np0005539552 nova_compute[233724]: 2025-11-29 07:54:30.390 233728 WARNING nova.compute.manager [req-1c8bf684-2b46-48f8-ad00-74d42999719b req-69de117d-bf0a-4f76-8635-d229ebd155fe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Received unexpected event network-vif-plugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 for instance with vm_state building and task_state spawning.#033[00m
Nov 29 02:54:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:30.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:31 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:31 np0005539552 nova_compute[233724]: 2025-11-29 07:54:31.403 233728 DEBUG nova.compute.manager [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:54:31 np0005539552 nova_compute[233724]: 2025-11-29 07:54:31.403 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764402871.4020562, 7067462a-37a6-458e-b96c-76adcea5fdfa => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:54:31 np0005539552 nova_compute[233724]: 2025-11-29 07:54:31.404 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] VM Started (Lifecycle Event)#033[00m
Nov 29 02:54:31 np0005539552 nova_compute[233724]: 2025-11-29 07:54:31.411 233728 DEBUG nova.virt.libvirt.driver [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:54:31 np0005539552 nova_compute[233724]: 2025-11-29 07:54:31.415 233728 INFO nova.virt.libvirt.driver [-] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Instance spawned successfully.#033[00m
Nov 29 02:54:31 np0005539552 nova_compute[233724]: 2025-11-29 07:54:31.415 233728 DEBUG nova.virt.libvirt.driver [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:54:31 np0005539552 nova_compute[233724]: 2025-11-29 07:54:31.430 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:54:31 np0005539552 nova_compute[233724]: 2025-11-29 07:54:31.437 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:54:31 np0005539552 nova_compute[233724]: 2025-11-29 07:54:31.441 233728 DEBUG nova.virt.libvirt.driver [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:54:31 np0005539552 nova_compute[233724]: 2025-11-29 07:54:31.441 233728 DEBUG nova.virt.libvirt.driver [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:54:31 np0005539552 nova_compute[233724]: 2025-11-29 07:54:31.442 233728 DEBUG nova.virt.libvirt.driver [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:54:31 np0005539552 nova_compute[233724]: 2025-11-29 07:54:31.442 233728 DEBUG nova.virt.libvirt.driver [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:54:31 np0005539552 nova_compute[233724]: 2025-11-29 07:54:31.443 233728 DEBUG nova.virt.libvirt.driver [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:54:31 np0005539552 nova_compute[233724]: 2025-11-29 07:54:31.443 233728 DEBUG nova.virt.libvirt.driver [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:54:31 np0005539552 nova_compute[233724]: 2025-11-29 07:54:31.472 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:54:31 np0005539552 nova_compute[233724]: 2025-11-29 07:54:31.472 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764402871.4062133, 7067462a-37a6-458e-b96c-76adcea5fdfa => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:54:31 np0005539552 nova_compute[233724]: 2025-11-29 07:54:31.473 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] VM Paused (Lifecycle Event)#033[00m
Nov 29 02:54:31 np0005539552 nova_compute[233724]: 2025-11-29 07:54:31.509 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:54:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:31.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:31 np0005539552 nova_compute[233724]: 2025-11-29 07:54:31.515 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764402871.4096415, 7067462a-37a6-458e-b96c-76adcea5fdfa => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:54:31 np0005539552 nova_compute[233724]: 2025-11-29 07:54:31.515 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:54:31 np0005539552 nova_compute[233724]: 2025-11-29 07:54:31.519 233728 INFO nova.compute.manager [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Took 7.04 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 02:54:31 np0005539552 nova_compute[233724]: 2025-11-29 07:54:31.519 233728 DEBUG nova.compute.manager [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:54:31 np0005539552 nova_compute[233724]: 2025-11-29 07:54:31.541 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:54:31 np0005539552 nova_compute[233724]: 2025-11-29 07:54:31.544 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:54:31 np0005539552 nova_compute[233724]: 2025-11-29 07:54:31.569 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:54:31 np0005539552 nova_compute[233724]: 2025-11-29 07:54:31.577 233728 INFO nova.compute.manager [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Took 11.04 seconds to build instance.#033[00m
Nov 29 02:54:31 np0005539552 nova_compute[233724]: 2025-11-29 07:54:31.592 233728 DEBUG oslo_concurrency.lockutils [None req-935621a2-bfa1-426c-8af9-8f59f85f699e 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.162s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:32 np0005539552 nova_compute[233724]: 2025-11-29 07:54:32.221 233728 DEBUG oslo_concurrency.processutils [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/56f3f72f-7db4-47c8-a4c3-20b2acc58aa9/disk.config 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.372s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:32 np0005539552 nova_compute[233724]: 2025-11-29 07:54:32.222 233728 INFO nova.virt.libvirt.driver [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Deleting local config drive /var/lib/nova/instances/56f3f72f-7db4-47c8-a4c3-20b2acc58aa9/disk.config because it was imported into RBD.#033[00m
Nov 29 02:54:32 np0005539552 kernel: tapd1330295-51: entered promiscuous mode
Nov 29 02:54:32 np0005539552 NetworkManager[48926]: <info>  [1764402872.2724] manager: (tapd1330295-51): new Tun device (/org/freedesktop/NetworkManager/Devices/47)
Nov 29 02:54:32 np0005539552 systemd-udevd[243784]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:54:32 np0005539552 nova_compute[233724]: 2025-11-29 07:54:32.275 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:32 np0005539552 nova_compute[233724]: 2025-11-29 07:54:32.280 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:32 np0005539552 ovn_controller[133798]: 2025-11-29T07:54:32Z|00077|binding|INFO|Claiming lport d1330295-51bc-4e64-a620-b63a6d8777fb for this chassis.
Nov 29 02:54:32 np0005539552 ovn_controller[133798]: 2025-11-29T07:54:32Z|00078|binding|INFO|d1330295-51bc-4e64-a620-b63a6d8777fb: Claiming fa:16:3e:c2:bc:90 10.100.0.12
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:32.287 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:bc:90 10.100.0.12'], port_security=['fa:16:3e:c2:bc:90 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '56f3f72f-7db4-47c8-a4c3-20b2acc58aa9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ad69a0f4-0000-474b-9649-72cf1bf9f5c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f91d373d1ef64146866ef08735a75efa', 'neutron:revision_number': '2', 'neutron:security_group_ids': '394eda18-2fbd-4f97-9713-003068aad79a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=19139b07-e3dc-4118-93d3-d7c140077f4d, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=d1330295-51bc-4e64-a620-b63a6d8777fb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:54:32 np0005539552 NetworkManager[48926]: <info>  [1764402872.2902] device (tapd1330295-51): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:54:32 np0005539552 NetworkManager[48926]: <info>  [1764402872.2908] device (tapd1330295-51): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:32.293 143400 INFO neutron.agent.ovn.metadata.agent [-] Port d1330295-51bc-4e64-a620-b63a6d8777fb in datapath ad69a0f4-0000-474b-9649-72cf1bf9f5c1 bound to our chassis#033[00m
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:32.298 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ad69a0f4-0000-474b-9649-72cf1bf9f5c1#033[00m
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:32.315 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[16224af7-fbd9-4f95-beee-afc6aa2cb271]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:32 np0005539552 systemd-machined[196379]: New machine qemu-5-instance-00000013.
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:32.316 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapad69a0f4-01 in ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:32.323 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapad69a0f4-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:32.323 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[af37cb6f-0551-41f0-b956-55fcfeb80b94]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:32.324 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[38c5e1f6-0cef-4964-9cf9-18b9b9ba173b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:32.339 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[f2477c81-c173-4af4-b90d-b86100f2672a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:32 np0005539552 systemd[1]: Started Virtual Machine qemu-5-instance-00000013.
Nov 29 02:54:32 np0005539552 nova_compute[233724]: 2025-11-29 07:54:32.345 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:32 np0005539552 ovn_controller[133798]: 2025-11-29T07:54:32Z|00079|binding|INFO|Setting lport d1330295-51bc-4e64-a620-b63a6d8777fb ovn-installed in OVS
Nov 29 02:54:32 np0005539552 ovn_controller[133798]: 2025-11-29T07:54:32Z|00080|binding|INFO|Setting lport d1330295-51bc-4e64-a620-b63a6d8777fb up in Southbound
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:32.379 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d1623b9f-0f50-45d8-a355-299dd4c9f862]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:32.423 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[8e70882d-6643-43ee-aeaa-a40895d5c17e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:32 np0005539552 NetworkManager[48926]: <info>  [1764402872.4337] manager: (tapad69a0f4-00): new Veth device (/org/freedesktop/NetworkManager/Devices/48)
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:32.432 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[25f3fc75-3573-40e6-a511-12893b1a907c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:32.478 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[8891018d-565f-4291-a1b9-dc867f260050]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:32.482 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[69eb0d88-bacc-4b95-a57c-8b38808f4c7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:32 np0005539552 NetworkManager[48926]: <info>  [1764402872.5005] device (tapad69a0f4-00): carrier: link connected
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:32.504 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[e88545a4-edd0-4895-897a-227842f4bdd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:32.521 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f8fafea0-6569-456f-9e07-d018e21f3363]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapad69a0f4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b5:a1:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591818, 'reachable_time': 17073, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243946, 'error': None, 'target': 'ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:32.538 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[55304702-c2e8-4e62-9f76-b27c3d609bae]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb5:a12d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 591818, 'tstamp': 591818}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243948, 'error': None, 'target': 'ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:32.554 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[8be928f2-82fc-431e-af66-d34827a11877]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapad69a0f4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b5:a1:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591818, 'reachable_time': 17073, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 243949, 'error': None, 'target': 'ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:32.582 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[cc29e6c5-3d9e-49c8-afcd-45aae9e481fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:32.638 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e06f820d-8fb0-4960-b295-738cb06f2ad2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:32.639 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapad69a0f4-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:32.639 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:32.640 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapad69a0f4-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:32 np0005539552 nova_compute[233724]: 2025-11-29 07:54:32.641 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:32 np0005539552 NetworkManager[48926]: <info>  [1764402872.6421] manager: (tapad69a0f4-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Nov 29 02:54:32 np0005539552 kernel: tapad69a0f4-00: entered promiscuous mode
Nov 29 02:54:32 np0005539552 nova_compute[233724]: 2025-11-29 07:54:32.644 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:32.646 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapad69a0f4-00, col_values=(('external_ids', {'iface-id': '7ffec560-b868-40db-af88-b0deaaa81f65'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:32 np0005539552 nova_compute[233724]: 2025-11-29 07:54:32.647 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:32 np0005539552 ovn_controller[133798]: 2025-11-29T07:54:32Z|00081|binding|INFO|Releasing lport 7ffec560-b868-40db-af88-b0deaaa81f65 from this chassis (sb_readonly=0)
Nov 29 02:54:32 np0005539552 nova_compute[233724]: 2025-11-29 07:54:32.661 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:32.663 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ad69a0f4-0000-474b-9649-72cf1bf9f5c1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ad69a0f4-0000-474b-9649-72cf1bf9f5c1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:32.663 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[7f183dfb-7828-4d0e-959e-ab0611eb02a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:32.664 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-ad69a0f4-0000-474b-9649-72cf1bf9f5c1
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/ad69a0f4-0000-474b-9649-72cf1bf9f5c1.pid.haproxy
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID ad69a0f4-0000-474b-9649-72cf1bf9f5c1
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 02:54:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:32.665 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1', 'env', 'PROCESS_TAG=haproxy-ad69a0f4-0000-474b-9649-72cf1bf9f5c1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ad69a0f4-0000-474b-9649-72cf1bf9f5c1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 02:54:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:32.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:32 np0005539552 nova_compute[233724]: 2025-11-29 07:54:32.960 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:33 np0005539552 podman[243982]: 2025-11-29 07:54:33.023582238 +0000 UTC m=+0.055472970 container create 09d7aec628c1676479af22811e894e514edfb1d5dbceeb43da33b06a953b83e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:54:33 np0005539552 systemd[1]: Started libpod-conmon-09d7aec628c1676479af22811e894e514edfb1d5dbceeb43da33b06a953b83e8.scope.
Nov 29 02:54:33 np0005539552 systemd[1]: Started libcrun container.
Nov 29 02:54:33 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5092303197f7d1c1fc3d5955098f6210b0b7d73e1fb7fb3f45d289c2a962556/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:54:33 np0005539552 podman[243982]: 2025-11-29 07:54:32.992568154 +0000 UTC m=+0.024458906 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:54:33 np0005539552 podman[243982]: 2025-11-29 07:54:33.094555958 +0000 UTC m=+0.126446690 container init 09d7aec628c1676479af22811e894e514edfb1d5dbceeb43da33b06a953b83e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 02:54:33 np0005539552 podman[243982]: 2025-11-29 07:54:33.100701596 +0000 UTC m=+0.132592328 container start 09d7aec628c1676479af22811e894e514edfb1d5dbceeb43da33b06a953b83e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 02:54:33 np0005539552 neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1[244016]: [NOTICE]   (244020) : New worker (244022) forked
Nov 29 02:54:33 np0005539552 neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1[244016]: [NOTICE]   (244020) : Loading success.
Nov 29 02:54:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:33.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:33 np0005539552 nova_compute[233724]: 2025-11-29 07:54:33.537 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764402873.5369976, 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:54:33 np0005539552 nova_compute[233724]: 2025-11-29 07:54:33.538 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] VM Started (Lifecycle Event)#033[00m
Nov 29 02:54:33 np0005539552 nova_compute[233724]: 2025-11-29 07:54:33.556 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:54:33 np0005539552 nova_compute[233724]: 2025-11-29 07:54:33.560 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764402873.5371182, 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:54:33 np0005539552 nova_compute[233724]: 2025-11-29 07:54:33.560 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] VM Paused (Lifecycle Event)#033[00m
Nov 29 02:54:33 np0005539552 nova_compute[233724]: 2025-11-29 07:54:33.575 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:54:33 np0005539552 nova_compute[233724]: 2025-11-29 07:54:33.577 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:54:33 np0005539552 nova_compute[233724]: 2025-11-29 07:54:33.596 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:54:34 np0005539552 nova_compute[233724]: 2025-11-29 07:54:34.334 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:54:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:34.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:54:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:35.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:36 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:36.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:37.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:37 np0005539552 nova_compute[233724]: 2025-11-29 07:54:37.961 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:38 np0005539552 nova_compute[233724]: 2025-11-29 07:54:38.218 233728 DEBUG nova.compute.manager [req-eb36feae-a490-4f39-9500-55e13cfee3c1 req-8e706037-aba5-42b7-aee0-02a0917ea73f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Received event network-vif-plugged-d1330295-51bc-4e64-a620-b63a6d8777fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:54:38 np0005539552 nova_compute[233724]: 2025-11-29 07:54:38.218 233728 DEBUG oslo_concurrency.lockutils [req-eb36feae-a490-4f39-9500-55e13cfee3c1 req-8e706037-aba5-42b7-aee0-02a0917ea73f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:38 np0005539552 nova_compute[233724]: 2025-11-29 07:54:38.219 233728 DEBUG oslo_concurrency.lockutils [req-eb36feae-a490-4f39-9500-55e13cfee3c1 req-8e706037-aba5-42b7-aee0-02a0917ea73f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:38 np0005539552 nova_compute[233724]: 2025-11-29 07:54:38.219 233728 DEBUG oslo_concurrency.lockutils [req-eb36feae-a490-4f39-9500-55e13cfee3c1 req-8e706037-aba5-42b7-aee0-02a0917ea73f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:38 np0005539552 nova_compute[233724]: 2025-11-29 07:54:38.219 233728 DEBUG nova.compute.manager [req-eb36feae-a490-4f39-9500-55e13cfee3c1 req-8e706037-aba5-42b7-aee0-02a0917ea73f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Processing event network-vif-plugged-d1330295-51bc-4e64-a620-b63a6d8777fb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 02:54:38 np0005539552 nova_compute[233724]: 2025-11-29 07:54:38.219 233728 DEBUG nova.compute.manager [req-eb36feae-a490-4f39-9500-55e13cfee3c1 req-8e706037-aba5-42b7-aee0-02a0917ea73f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Received event network-vif-plugged-d1330295-51bc-4e64-a620-b63a6d8777fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:54:38 np0005539552 nova_compute[233724]: 2025-11-29 07:54:38.220 233728 DEBUG oslo_concurrency.lockutils [req-eb36feae-a490-4f39-9500-55e13cfee3c1 req-8e706037-aba5-42b7-aee0-02a0917ea73f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:38 np0005539552 nova_compute[233724]: 2025-11-29 07:54:38.220 233728 DEBUG oslo_concurrency.lockutils [req-eb36feae-a490-4f39-9500-55e13cfee3c1 req-8e706037-aba5-42b7-aee0-02a0917ea73f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:38 np0005539552 nova_compute[233724]: 2025-11-29 07:54:38.220 233728 DEBUG oslo_concurrency.lockutils [req-eb36feae-a490-4f39-9500-55e13cfee3c1 req-8e706037-aba5-42b7-aee0-02a0917ea73f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:38 np0005539552 nova_compute[233724]: 2025-11-29 07:54:38.220 233728 DEBUG nova.compute.manager [req-eb36feae-a490-4f39-9500-55e13cfee3c1 req-8e706037-aba5-42b7-aee0-02a0917ea73f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] No waiting events found dispatching network-vif-plugged-d1330295-51bc-4e64-a620-b63a6d8777fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:54:38 np0005539552 nova_compute[233724]: 2025-11-29 07:54:38.221 233728 WARNING nova.compute.manager [req-eb36feae-a490-4f39-9500-55e13cfee3c1 req-8e706037-aba5-42b7-aee0-02a0917ea73f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Received unexpected event network-vif-plugged-d1330295-51bc-4e64-a620-b63a6d8777fb for instance with vm_state building and task_state spawning.#033[00m
Nov 29 02:54:38 np0005539552 nova_compute[233724]: 2025-11-29 07:54:38.221 233728 DEBUG nova.compute.manager [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:54:38 np0005539552 nova_compute[233724]: 2025-11-29 07:54:38.225 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764402878.2252517, 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:54:38 np0005539552 nova_compute[233724]: 2025-11-29 07:54:38.225 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:54:38 np0005539552 nova_compute[233724]: 2025-11-29 07:54:38.228 233728 DEBUG nova.virt.libvirt.driver [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:54:38 np0005539552 nova_compute[233724]: 2025-11-29 07:54:38.232 233728 INFO nova.virt.libvirt.driver [-] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Instance spawned successfully.#033[00m
Nov 29 02:54:38 np0005539552 nova_compute[233724]: 2025-11-29 07:54:38.232 233728 DEBUG nova.virt.libvirt.driver [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:54:38 np0005539552 nova_compute[233724]: 2025-11-29 07:54:38.243 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:54:38 np0005539552 nova_compute[233724]: 2025-11-29 07:54:38.250 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:54:38 np0005539552 nova_compute[233724]: 2025-11-29 07:54:38.252 233728 DEBUG nova.virt.libvirt.driver [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:54:38 np0005539552 nova_compute[233724]: 2025-11-29 07:54:38.252 233728 DEBUG nova.virt.libvirt.driver [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:54:38 np0005539552 nova_compute[233724]: 2025-11-29 07:54:38.253 233728 DEBUG nova.virt.libvirt.driver [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:54:38 np0005539552 nova_compute[233724]: 2025-11-29 07:54:38.253 233728 DEBUG nova.virt.libvirt.driver [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:54:38 np0005539552 nova_compute[233724]: 2025-11-29 07:54:38.253 233728 DEBUG nova.virt.libvirt.driver [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:54:38 np0005539552 nova_compute[233724]: 2025-11-29 07:54:38.254 233728 DEBUG nova.virt.libvirt.driver [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:54:38 np0005539552 nova_compute[233724]: 2025-11-29 07:54:38.279 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 02:54:38 np0005539552 nova_compute[233724]: 2025-11-29 07:54:38.310 233728 INFO nova.compute.manager [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Took 13.82 seconds to spawn the instance on the hypervisor.
Nov 29 02:54:38 np0005539552 nova_compute[233724]: 2025-11-29 07:54:38.311 233728 DEBUG nova.compute.manager [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:54:38 np0005539552 nova_compute[233724]: 2025-11-29 07:54:38.369 233728 INFO nova.compute.manager [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Took 17.84 seconds to build instance.
Nov 29 02:54:38 np0005539552 nova_compute[233724]: 2025-11-29 07:54:38.385 233728 DEBUG oslo_concurrency.lockutils [None req-aca10db0-e13d-437b-9011-d1814a72abe1 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.964s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:54:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:38.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:39 np0005539552 nova_compute[233724]: 2025-11-29 07:54:39.340 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:54:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:39.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:39 np0005539552 nova_compute[233724]: 2025-11-29 07:54:39.995 233728 DEBUG nova.virt.libvirt.driver [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Check if temp file /var/lib/nova/instances/tmpny0czsgk exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065
Nov 29 02:54:39 np0005539552 nova_compute[233724]: 2025-11-29 07:54:39.996 233728 DEBUG nova.compute.manager [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpny0czsgk',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='7067462a-37a6-458e-b96c-76adcea5fdfa',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587
Nov 29 02:54:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:40.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:41 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:41.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:42.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:42 np0005539552 nova_compute[233724]: 2025-11-29 07:54:42.964 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:54:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:43.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:44 np0005539552 nova_compute[233724]: 2025-11-29 07:54:44.102 233728 DEBUG nova.virt.libvirt.driver [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Check if temp file /var/lib/nova/instances/tmpg_cjv671 exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065
Nov 29 02:54:44 np0005539552 nova_compute[233724]: 2025-11-29 07:54:44.103 233728 DEBUG nova.compute.manager [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpg_cjv671',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='56f3f72f-7db4-47c8-a4c3-20b2acc58aa9',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587
Nov 29 02:54:44 np0005539552 nova_compute[233724]: 2025-11-29 07:54:44.345 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:54:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:44.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:44 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:54:44 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3412927210' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:54:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:45.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:45 np0005539552 ovn_controller[133798]: 2025-11-29T07:54:45Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:17:bf:87 10.100.0.5
Nov 29 02:54:45 np0005539552 ovn_controller[133798]: 2025-11-29T07:54:45Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:17:bf:87 10.100.0.5
Nov 29 02:54:46 np0005539552 nova_compute[233724]: 2025-11-29 07:54:46.019 233728 DEBUG nova.compute.manager [req-f1fbd2db-318d-47f3-a2e1-a40238c25159 req-ad4b6eff-8e82-46b4-93c8-c7df91d4f005 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Received event network-vif-unplugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:54:46 np0005539552 nova_compute[233724]: 2025-11-29 07:54:46.020 233728 DEBUG oslo_concurrency.lockutils [req-f1fbd2db-318d-47f3-a2e1-a40238c25159 req-ad4b6eff-8e82-46b4-93c8-c7df91d4f005 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:54:46 np0005539552 nova_compute[233724]: 2025-11-29 07:54:46.020 233728 DEBUG oslo_concurrency.lockutils [req-f1fbd2db-318d-47f3-a2e1-a40238c25159 req-ad4b6eff-8e82-46b4-93c8-c7df91d4f005 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:54:46 np0005539552 nova_compute[233724]: 2025-11-29 07:54:46.020 233728 DEBUG oslo_concurrency.lockutils [req-f1fbd2db-318d-47f3-a2e1-a40238c25159 req-ad4b6eff-8e82-46b4-93c8-c7df91d4f005 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:54:46 np0005539552 nova_compute[233724]: 2025-11-29 07:54:46.020 233728 DEBUG nova.compute.manager [req-f1fbd2db-318d-47f3-a2e1-a40238c25159 req-ad4b6eff-8e82-46b4-93c8-c7df91d4f005 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] No waiting events found dispatching network-vif-unplugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 02:54:46 np0005539552 nova_compute[233724]: 2025-11-29 07:54:46.020 233728 DEBUG nova.compute.manager [req-f1fbd2db-318d-47f3-a2e1-a40238c25159 req-ad4b6eff-8e82-46b4-93c8-c7df91d4f005 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Received event network-vif-unplugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 02:54:46 np0005539552 nova_compute[233724]: 2025-11-29 07:54:46.215 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:54:46 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:46 np0005539552 nova_compute[233724]: 2025-11-29 07:54:46.389 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Triggering sync for uuid 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 29 02:54:46 np0005539552 nova_compute[233724]: 2025-11-29 07:54:46.390 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Triggering sync for uuid 7067462a-37a6-458e-b96c-76adcea5fdfa _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 29 02:54:46 np0005539552 nova_compute[233724]: 2025-11-29 07:54:46.390 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:54:46 np0005539552 nova_compute[233724]: 2025-11-29 07:54:46.390 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:54:46 np0005539552 nova_compute[233724]: 2025-11-29 07:54:46.390 233728 INFO nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] During sync_power_state the instance has a pending task (migrating). Skip.
Nov 29 02:54:46 np0005539552 nova_compute[233724]: 2025-11-29 07:54:46.391 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:54:46 np0005539552 nova_compute[233724]: 2025-11-29 07:54:46.391 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "7067462a-37a6-458e-b96c-76adcea5fdfa" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:54:46 np0005539552 nova_compute[233724]: 2025-11-29 07:54:46.391 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:54:46 np0005539552 nova_compute[233724]: 2025-11-29 07:54:46.391 233728 INFO nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] During sync_power_state the instance has a pending task (migrating). Skip.
Nov 29 02:54:46 np0005539552 nova_compute[233724]: 2025-11-29 07:54:46.391 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:54:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:46.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:47 np0005539552 nova_compute[233724]: 2025-11-29 07:54:47.456 233728 INFO nova.compute.manager [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Took 6.12 seconds for pre_live_migration on destination host compute-0.ctlplane.example.com.
Nov 29 02:54:47 np0005539552 nova_compute[233724]: 2025-11-29 07:54:47.457 233728 DEBUG nova.compute.manager [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 02:54:47 np0005539552 nova_compute[233724]: 2025-11-29 07:54:47.480 233728 DEBUG nova.compute.manager [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpny0czsgk',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='7067462a-37a6-458e-b96c-76adcea5fdfa',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=Migration(7828b189-8959-4bb4-9dc2-995f9b7d9cbd),old_vol_attachment_ids={3ef07b78-0409-49cf-a941-8a19b02dd939='73064fe3-3d3a-4388-bf17-21bd966882ad'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939
Nov 29 02:54:47 np0005539552 nova_compute[233724]: 2025-11-29 07:54:47.483 233728 DEBUG nova.objects.instance [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Lazy-loading 'migration_context' on Instance uuid 7067462a-37a6-458e-b96c-76adcea5fdfa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:54:47 np0005539552 nova_compute[233724]: 2025-11-29 07:54:47.484 233728 DEBUG nova.virt.libvirt.driver [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639
Nov 29 02:54:47 np0005539552 nova_compute[233724]: 2025-11-29 07:54:47.486 233728 DEBUG nova.virt.libvirt.driver [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440
Nov 29 02:54:47 np0005539552 nova_compute[233724]: 2025-11-29 07:54:47.488 233728 DEBUG nova.virt.libvirt.driver [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449
Nov 29 02:54:47 np0005539552 nova_compute[233724]: 2025-11-29 07:54:47.515 233728 DEBUG nova.virt.libvirt.migration [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Find same serial number: pos=1, serial=3ef07b78-0409-49cf-a941-8a19b02dd939 _update_volume_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:242
Nov 29 02:54:47 np0005539552 nova_compute[233724]: 2025-11-29 07:54:47.516 233728 DEBUG nova.virt.libvirt.vif [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:54:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1301305546',display_name='tempest-LiveMigrationTest-server-1301305546',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1301305546',id=20,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:54:31Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1963a097b7694450aa0d7c30b27b38ac',ramdisk_id='',reservation_id='r-7mt4lwbv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-814240379',owner_user_name='tempest-LiveMigrationTest-814240379-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:54:31Z,user_data=None,user_id='85f5548e01234fe4ae9b88e998e943f8',uuid=7067462a-37a6-458e-b96c-76adcea5fdfa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "address": "fa:16:3e:17:bf:87", "network": {"id": "7a06a21a-ba04-4a14-8d62-c931cbbf124d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-132947190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1963a097b7694450aa0d7c30b27b38ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap5d7fa9ca-8f", "ovs_interfaceid": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 02:54:47 np0005539552 nova_compute[233724]: 2025-11-29 07:54:47.516 233728 DEBUG nova.network.os_vif_util [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Converting VIF {"id": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "address": "fa:16:3e:17:bf:87", "network": {"id": "7a06a21a-ba04-4a14-8d62-c931cbbf124d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-132947190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1963a097b7694450aa0d7c30b27b38ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap5d7fa9ca-8f", "ovs_interfaceid": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 02:54:47 np0005539552 nova_compute[233724]: 2025-11-29 07:54:47.517 233728 DEBUG nova.network.os_vif_util [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:bf:87,bridge_name='br-int',has_traffic_filtering=True,id=5d7fa9ca-8f51-4047-a121-6c4534fc5ae6,network=Network(7a06a21a-ba04-4a14-8d62-c931cbbf124d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d7fa9ca-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 02:54:47 np0005539552 nova_compute[233724]: 2025-11-29 07:54:47.518 233728 DEBUG nova.virt.libvirt.migration [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Updating guest XML with vif config: <interface type="ethernet">
Nov 29 02:54:47 np0005539552 nova_compute[233724]:  <mac address="fa:16:3e:17:bf:87"/>
Nov 29 02:54:47 np0005539552 nova_compute[233724]:  <model type="virtio"/>
Nov 29 02:54:47 np0005539552 nova_compute[233724]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:54:47 np0005539552 nova_compute[233724]:  <mtu size="1442"/>
Nov 29 02:54:47 np0005539552 nova_compute[233724]:  <target dev="tap5d7fa9ca-8f"/>
Nov 29 02:54:47 np0005539552 nova_compute[233724]: </interface>
Nov 29 02:54:47 np0005539552 nova_compute[233724]: _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388
Nov 29 02:54:47 np0005539552 nova_compute[233724]: 2025-11-29 07:54:47.518 233728 DEBUG nova.virt.libvirt.driver [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272
Nov 29 02:54:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:54:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:47.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:54:47 np0005539552 nova_compute[233724]: 2025-11-29 07:54:47.966 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:54:47 np0005539552 nova_compute[233724]: 2025-11-29 07:54:47.992 233728 DEBUG nova.virt.libvirt.migration [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Nov 29 02:54:47 np0005539552 nova_compute[233724]: 2025-11-29 07:54:47.993 233728 INFO nova.virt.libvirt.migration [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Increasing downtime to 50 ms after 0 sec elapsed time
Nov 29 02:54:48 np0005539552 nova_compute[233724]: 2025-11-29 07:54:48.066 233728 INFO nova.virt.libvirt.driver [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Nov 29 02:54:48 np0005539552 nova_compute[233724]: 2025-11-29 07:54:48.167 233728 DEBUG nova.compute.manager [req-6982c198-8403-4ce7-a63a-859bd80c174b req-8b82560d-4987-49c4-8076-30fb8906b9ec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Received event network-vif-plugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:54:48 np0005539552 nova_compute[233724]: 2025-11-29 07:54:48.168 233728 DEBUG oslo_concurrency.lockutils [req-6982c198-8403-4ce7-a63a-859bd80c174b req-8b82560d-4987-49c4-8076-30fb8906b9ec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:54:48 np0005539552 nova_compute[233724]: 2025-11-29 07:54:48.168 233728 DEBUG oslo_concurrency.lockutils [req-6982c198-8403-4ce7-a63a-859bd80c174b req-8b82560d-4987-49c4-8076-30fb8906b9ec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:54:48 np0005539552 nova_compute[233724]: 2025-11-29 07:54:48.168 233728 DEBUG oslo_concurrency.lockutils [req-6982c198-8403-4ce7-a63a-859bd80c174b req-8b82560d-4987-49c4-8076-30fb8906b9ec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:54:48 np0005539552 nova_compute[233724]: 2025-11-29 07:54:48.168 233728 DEBUG nova.compute.manager [req-6982c198-8403-4ce7-a63a-859bd80c174b req-8b82560d-4987-49c4-8076-30fb8906b9ec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] No waiting events found dispatching network-vif-plugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 02:54:48 np0005539552 nova_compute[233724]: 2025-11-29 07:54:48.169 233728 WARNING nova.compute.manager [req-6982c198-8403-4ce7-a63a-859bd80c174b req-8b82560d-4987-49c4-8076-30fb8906b9ec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Received unexpected event network-vif-plugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 for instance with vm_state active and task_state migrating.
Nov 29 02:54:48 np0005539552 nova_compute[233724]: 2025-11-29 07:54:48.169 233728 DEBUG nova.compute.manager [req-6982c198-8403-4ce7-a63a-859bd80c174b req-8b82560d-4987-49c4-8076-30fb8906b9ec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Received event network-changed-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:54:48 np0005539552 nova_compute[233724]: 2025-11-29 07:54:48.169 233728 DEBUG nova.compute.manager [req-6982c198-8403-4ce7-a63a-859bd80c174b req-8b82560d-4987-49c4-8076-30fb8906b9ec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Refreshing instance network info cache due to event network-changed-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:54:48 np0005539552 nova_compute[233724]: 2025-11-29 07:54:48.169 233728 DEBUG oslo_concurrency.lockutils [req-6982c198-8403-4ce7-a63a-859bd80c174b req-8b82560d-4987-49c4-8076-30fb8906b9ec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-7067462a-37a6-458e-b96c-76adcea5fdfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:54:48 np0005539552 nova_compute[233724]: 2025-11-29 07:54:48.169 233728 DEBUG oslo_concurrency.lockutils [req-6982c198-8403-4ce7-a63a-859bd80c174b req-8b82560d-4987-49c4-8076-30fb8906b9ec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-7067462a-37a6-458e-b96c-76adcea5fdfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:54:48 np0005539552 nova_compute[233724]: 2025-11-29 07:54:48.170 233728 DEBUG nova.network.neutron [req-6982c198-8403-4ce7-a63a-859bd80c174b req-8b82560d-4987-49c4-8076-30fb8906b9ec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Refreshing network info cache for port 5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:54:48 np0005539552 nova_compute[233724]: 2025-11-29 07:54:48.569 233728 DEBUG nova.virt.libvirt.migration [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Nov 29 02:54:48 np0005539552 nova_compute[233724]: 2025-11-29 07:54:48.570 233728 DEBUG nova.virt.libvirt.migration [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Nov 29 02:54:48 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 29 02:54:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:48.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:49 np0005539552 nova_compute[233724]: 2025-11-29 07:54:49.073 233728 DEBUG nova.virt.libvirt.migration [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Nov 29 02:54:49 np0005539552 nova_compute[233724]: 2025-11-29 07:54:49.074 233728 DEBUG nova.virt.libvirt.migration [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Nov 29 02:54:49 np0005539552 nova_compute[233724]: 2025-11-29 07:54:49.346 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:49 np0005539552 nova_compute[233724]: 2025-11-29 07:54:49.475 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764402889.4751222, 7067462a-37a6-458e-b96c-76adcea5fdfa => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:54:49 np0005539552 nova_compute[233724]: 2025-11-29 07:54:49.476 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] VM Paused (Lifecycle Event)#033[00m
Nov 29 02:54:49 np0005539552 nova_compute[233724]: 2025-11-29 07:54:49.500 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:54:49 np0005539552 nova_compute[233724]: 2025-11-29 07:54:49.505 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:54:49 np0005539552 nova_compute[233724]: 2025-11-29 07:54:49.524 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] During sync_power_state the instance has a pending task (migrating). Skip.#033[00m
Nov 29 02:54:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:49.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:49 np0005539552 nova_compute[233724]: 2025-11-29 07:54:49.578 233728 DEBUG nova.virt.libvirt.migration [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Current 50 elapsed 2 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Nov 29 02:54:49 np0005539552 nova_compute[233724]: 2025-11-29 07:54:49.579 233728 DEBUG nova.virt.libvirt.migration [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Nov 29 02:54:50 np0005539552 nova_compute[233724]: 2025-11-29 07:54:50.007 233728 DEBUG nova.network.neutron [req-6982c198-8403-4ce7-a63a-859bd80c174b req-8b82560d-4987-49c4-8076-30fb8906b9ec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Updated VIF entry in instance network info cache for port 5d7fa9ca-8f51-4047-a121-6c4534fc5ae6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:54:50 np0005539552 nova_compute[233724]: 2025-11-29 07:54:50.008 233728 DEBUG nova.network.neutron [req-6982c198-8403-4ce7-a63a-859bd80c174b req-8b82560d-4987-49c4-8076-30fb8906b9ec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Updating instance_info_cache with network_info: [{"id": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "address": "fa:16:3e:17:bf:87", "network": {"id": "7a06a21a-ba04-4a14-8d62-c931cbbf124d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-132947190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1963a097b7694450aa0d7c30b27b38ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d7fa9ca-8f", "ovs_interfaceid": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-0.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:54:50 np0005539552 nova_compute[233724]: 2025-11-29 07:54:50.049 233728 DEBUG oslo_concurrency.lockutils [req-6982c198-8403-4ce7-a63a-859bd80c174b req-8b82560d-4987-49c4-8076-30fb8906b9ec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-7067462a-37a6-458e-b96c-76adcea5fdfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:54:50 np0005539552 kernel: tap5d7fa9ca-8f (unregistering): left promiscuous mode
Nov 29 02:54:50 np0005539552 NetworkManager[48926]: <info>  [1764402890.0674] device (tap5d7fa9ca-8f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:54:50 np0005539552 ovn_controller[133798]: 2025-11-29T07:54:50Z|00082|binding|INFO|Releasing lport 5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 from this chassis (sb_readonly=0)
Nov 29 02:54:50 np0005539552 ovn_controller[133798]: 2025-11-29T07:54:50Z|00083|binding|INFO|Setting lport 5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 down in Southbound
Nov 29 02:54:50 np0005539552 ovn_controller[133798]: 2025-11-29T07:54:50Z|00084|binding|INFO|Removing iface tap5d7fa9ca-8f ovn-installed in OVS
Nov 29 02:54:50 np0005539552 nova_compute[233724]: 2025-11-29 07:54:50.084 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:50 np0005539552 nova_compute[233724]: 2025-11-29 07:54:50.086 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:50 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:50.093 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:bf:87 10.100.0.5'], port_security=['fa:16:3e:17:bf:87 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com,compute-0.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': 'a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '7067462a-37a6-458e-b96c-76adcea5fdfa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7a06a21a-ba04-4a14-8d62-c931cbbf124d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1963a097b7694450aa0d7c30b27b38ac', 'neutron:revision_number': '8', 'neutron:security_group_ids': '7cf396e5-2565-40f4-9bc8-f8d0b75eb4c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9eb8ff47-0cf8-4776-a959-1d6d6d7f49c2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=5d7fa9ca-8f51-4047-a121-6c4534fc5ae6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:54:50 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:50.095 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 in datapath 7a06a21a-ba04-4a14-8d62-c931cbbf124d unbound from our chassis#033[00m
Nov 29 02:54:50 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:50.097 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7a06a21a-ba04-4a14-8d62-c931cbbf124d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:54:50 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:50.098 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f7374168-8bf5-49b0-b341-ca0af61e0e3c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:50 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:50.099 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d namespace which is not needed anymore#033[00m
Nov 29 02:54:50 np0005539552 nova_compute[233724]: 2025-11-29 07:54:50.108 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:50 np0005539552 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000014.scope: Deactivated successfully.
Nov 29 02:54:50 np0005539552 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000014.scope: Consumed 14.981s CPU time.
Nov 29 02:54:50 np0005539552 systemd-machined[196379]: Machine qemu-4-instance-00000014 terminated.
Nov 29 02:54:50 np0005539552 virtqemud[233098]: Unable to get XATTR trusted.libvirt.security.ref_selinux on volumes/volume-3ef07b78-0409-49cf-a941-8a19b02dd939: No such file or directory
Nov 29 02:54:50 np0005539552 virtqemud[233098]: Unable to get XATTR trusted.libvirt.security.ref_dac on volumes/volume-3ef07b78-0409-49cf-a941-8a19b02dd939: No such file or directory
Nov 29 02:54:50 np0005539552 nova_compute[233724]: 2025-11-29 07:54:50.234 233728 DEBUG nova.virt.libvirt.driver [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279#033[00m
Nov 29 02:54:50 np0005539552 nova_compute[233724]: 2025-11-29 07:54:50.235 233728 DEBUG nova.virt.libvirt.driver [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327#033[00m
Nov 29 02:54:50 np0005539552 nova_compute[233724]: 2025-11-29 07:54:50.235 233728 DEBUG nova.virt.libvirt.driver [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630#033[00m
Nov 29 02:54:50 np0005539552 nova_compute[233724]: 2025-11-29 07:54:50.240 233728 DEBUG nova.virt.libvirt.guest [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688#033[00m
Nov 29 02:54:50 np0005539552 nova_compute[233724]: 2025-11-29 07:54:50.240 233728 INFO nova.virt.libvirt.driver [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Migration operation has completed#033[00m
Nov 29 02:54:50 np0005539552 nova_compute[233724]: 2025-11-29 07:54:50.241 233728 INFO nova.compute.manager [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] _post_live_migration() is started..#033[00m
Nov 29 02:54:50 np0005539552 neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d[243855]: [NOTICE]   (243859) : haproxy version is 2.8.14-c23fe91
Nov 29 02:54:50 np0005539552 neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d[243855]: [NOTICE]   (243859) : path to executable is /usr/sbin/haproxy
Nov 29 02:54:50 np0005539552 neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d[243855]: [WARNING]  (243859) : Exiting Master process...
Nov 29 02:54:50 np0005539552 neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d[243855]: [ALERT]    (243859) : Current worker (243861) exited with code 143 (Terminated)
Nov 29 02:54:50 np0005539552 neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d[243855]: [WARNING]  (243859) : All workers exited. Exiting... (0)
Nov 29 02:54:50 np0005539552 systemd[1]: libpod-8980b957322a364fd600a7a2d73f418d8e92e2591bb942732b29da58044df475.scope: Deactivated successfully.
Nov 29 02:54:50 np0005539552 podman[244141]: 2025-11-29 07:54:50.258907404 +0000 UTC m=+0.064767923 container died 8980b957322a364fd600a7a2d73f418d8e92e2591bb942732b29da58044df475 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 02:54:50 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8980b957322a364fd600a7a2d73f418d8e92e2591bb942732b29da58044df475-userdata-shm.mount: Deactivated successfully.
Nov 29 02:54:50 np0005539552 systemd[1]: var-lib-containers-storage-overlay-77cdbcda98eb1668e766dab3d542860bca283269a96fb1edd4615e9a226cf78a-merged.mount: Deactivated successfully.
Nov 29 02:54:50 np0005539552 podman[244141]: 2025-11-29 07:54:50.300479425 +0000 UTC m=+0.106339904 container cleanup 8980b957322a364fd600a7a2d73f418d8e92e2591bb942732b29da58044df475 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:54:50 np0005539552 systemd[1]: libpod-conmon-8980b957322a364fd600a7a2d73f418d8e92e2591bb942732b29da58044df475.scope: Deactivated successfully.
Nov 29 02:54:50 np0005539552 podman[244181]: 2025-11-29 07:54:50.357770743 +0000 UTC m=+0.037439689 container remove 8980b957322a364fd600a7a2d73f418d8e92e2591bb942732b29da58044df475 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 02:54:50 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:50.366 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d7e6186c-7d4b-414c-9605-b1aa41023154]: (4, ('Sat Nov 29 07:54:50 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d (8980b957322a364fd600a7a2d73f418d8e92e2591bb942732b29da58044df475)\n8980b957322a364fd600a7a2d73f418d8e92e2591bb942732b29da58044df475\nSat Nov 29 07:54:50 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d (8980b957322a364fd600a7a2d73f418d8e92e2591bb942732b29da58044df475)\n8980b957322a364fd600a7a2d73f418d8e92e2591bb942732b29da58044df475\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:50 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:50.368 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[48353313-129f-4bc8-a09f-8e21f5da89ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:50 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:50.369 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7a06a21a-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:50 np0005539552 nova_compute[233724]: 2025-11-29 07:54:50.383 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:50 np0005539552 kernel: tap7a06a21a-b0: left promiscuous mode
Nov 29 02:54:50 np0005539552 nova_compute[233724]: 2025-11-29 07:54:50.391 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:50 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:50.393 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0dae2b69-a674-4b81-8642-fe6c9ffa846e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:50 np0005539552 nova_compute[233724]: 2025-11-29 07:54:50.399 233728 DEBUG nova.compute.manager [req-dcf9ac23-f598-49ac-83e7-5b4b1ab3944b req-16070ef5-bee7-4142-adcc-1539f80537ba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Received event network-vif-unplugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:54:50 np0005539552 nova_compute[233724]: 2025-11-29 07:54:50.402 233728 DEBUG oslo_concurrency.lockutils [req-dcf9ac23-f598-49ac-83e7-5b4b1ab3944b req-16070ef5-bee7-4142-adcc-1539f80537ba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:50 np0005539552 nova_compute[233724]: 2025-11-29 07:54:50.403 233728 DEBUG oslo_concurrency.lockutils [req-dcf9ac23-f598-49ac-83e7-5b4b1ab3944b req-16070ef5-bee7-4142-adcc-1539f80537ba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:50 np0005539552 nova_compute[233724]: 2025-11-29 07:54:50.403 233728 DEBUG oslo_concurrency.lockutils [req-dcf9ac23-f598-49ac-83e7-5b4b1ab3944b req-16070ef5-bee7-4142-adcc-1539f80537ba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:50 np0005539552 nova_compute[233724]: 2025-11-29 07:54:50.403 233728 DEBUG nova.compute.manager [req-dcf9ac23-f598-49ac-83e7-5b4b1ab3944b req-16070ef5-bee7-4142-adcc-1539f80537ba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] No waiting events found dispatching network-vif-unplugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:54:50 np0005539552 nova_compute[233724]: 2025-11-29 07:54:50.403 233728 DEBUG nova.compute.manager [req-dcf9ac23-f598-49ac-83e7-5b4b1ab3944b req-16070ef5-bee7-4142-adcc-1539f80537ba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Received event network-vif-unplugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:54:50 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:50.409 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[cc3178b2-f477-44a9-87a3-2456d4690e73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:50 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:50.410 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[5cd2ee0e-f56e-478a-a28b-376663861b1b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:50 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:50.428 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[289ceea3-e459-44bd-8ef3-a0c46111a052]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591530, 'reachable_time': 42898, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 244199, 'error': None, 'target': 'ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:50 np0005539552 systemd[1]: run-netns-ovnmeta\x2d7a06a21a\x2dba04\x2d4a14\x2d8d62\x2dc931cbbf124d.mount: Deactivated successfully.
Nov 29 02:54:50 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:50.433 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 02:54:50 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:50.433 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[a8a736a1-e342-4e2b-b331-a9a41e16ea6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:50 np0005539552 nova_compute[233724]: 2025-11-29 07:54:50.705 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:50 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:50.705 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:54:50 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:50.706 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:54:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:50.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:51 np0005539552 nova_compute[233724]: 2025-11-29 07:54:51.233 233728 DEBUG nova.compute.manager [req-3f8bb74b-4cd7-48f5-9603-3d070521734e req-5c125cf5-127a-4ed5-b231-0a54dff37295 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Received event network-vif-unplugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:54:51 np0005539552 nova_compute[233724]: 2025-11-29 07:54:51.233 233728 DEBUG oslo_concurrency.lockutils [req-3f8bb74b-4cd7-48f5-9603-3d070521734e req-5c125cf5-127a-4ed5-b231-0a54dff37295 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:51 np0005539552 nova_compute[233724]: 2025-11-29 07:54:51.233 233728 DEBUG oslo_concurrency.lockutils [req-3f8bb74b-4cd7-48f5-9603-3d070521734e req-5c125cf5-127a-4ed5-b231-0a54dff37295 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:51 np0005539552 nova_compute[233724]: 2025-11-29 07:54:51.234 233728 DEBUG oslo_concurrency.lockutils [req-3f8bb74b-4cd7-48f5-9603-3d070521734e req-5c125cf5-127a-4ed5-b231-0a54dff37295 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:51 np0005539552 nova_compute[233724]: 2025-11-29 07:54:51.234 233728 DEBUG nova.compute.manager [req-3f8bb74b-4cd7-48f5-9603-3d070521734e req-5c125cf5-127a-4ed5-b231-0a54dff37295 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] No waiting events found dispatching network-vif-unplugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:54:51 np0005539552 nova_compute[233724]: 2025-11-29 07:54:51.234 233728 DEBUG nova.compute.manager [req-3f8bb74b-4cd7-48f5-9603-3d070521734e req-5c125cf5-127a-4ed5-b231-0a54dff37295 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Received event network-vif-unplugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:54:51 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:51.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:51 np0005539552 nova_compute[233724]: 2025-11-29 07:54:51.541 233728 DEBUG nova.network.neutron [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Activated binding for port 5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 and host compute-0.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181#033[00m
Nov 29 02:54:51 np0005539552 nova_compute[233724]: 2025-11-29 07:54:51.542 233728 DEBUG nova.compute.manager [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "address": "fa:16:3e:17:bf:87", "network": {"id": "7a06a21a-ba04-4a14-8d62-c931cbbf124d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-132947190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1963a097b7694450aa0d7c30b27b38ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d7fa9ca-8f", "ovs_interfaceid": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326#033[00m
Nov 29 02:54:51 np0005539552 nova_compute[233724]: 2025-11-29 07:54:51.545 233728 DEBUG nova.virt.libvirt.vif [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:54:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1301305546',display_name='tempest-LiveMigrationTest-server-1301305546',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1301305546',id=20,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:54:31Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1963a097b7694450aa0d7c30b27b38ac',ramdisk_id='',reservation_id='r-7mt4lwbv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-814240379',ow
ner_user_name='tempest-LiveMigrationTest-814240379-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:54:39Z,user_data=None,user_id='85f5548e01234fe4ae9b88e998e943f8',uuid=7067462a-37a6-458e-b96c-76adcea5fdfa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "address": "fa:16:3e:17:bf:87", "network": {"id": "7a06a21a-ba04-4a14-8d62-c931cbbf124d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-132947190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1963a097b7694450aa0d7c30b27b38ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d7fa9ca-8f", "ovs_interfaceid": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:54:51 np0005539552 nova_compute[233724]: 2025-11-29 07:54:51.547 233728 DEBUG nova.network.os_vif_util [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Converting VIF {"id": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "address": "fa:16:3e:17:bf:87", "network": {"id": "7a06a21a-ba04-4a14-8d62-c931cbbf124d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-132947190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1963a097b7694450aa0d7c30b27b38ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d7fa9ca-8f", "ovs_interfaceid": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:54:51 np0005539552 nova_compute[233724]: 2025-11-29 07:54:51.549 233728 DEBUG nova.network.os_vif_util [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:bf:87,bridge_name='br-int',has_traffic_filtering=True,id=5d7fa9ca-8f51-4047-a121-6c4534fc5ae6,network=Network(7a06a21a-ba04-4a14-8d62-c931cbbf124d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d7fa9ca-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:54:51 np0005539552 nova_compute[233724]: 2025-11-29 07:54:51.551 233728 DEBUG os_vif [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:bf:87,bridge_name='br-int',has_traffic_filtering=True,id=5d7fa9ca-8f51-4047-a121-6c4534fc5ae6,network=Network(7a06a21a-ba04-4a14-8d62-c931cbbf124d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d7fa9ca-8f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:54:51 np0005539552 nova_compute[233724]: 2025-11-29 07:54:51.558 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:51 np0005539552 nova_compute[233724]: 2025-11-29 07:54:51.559 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d7fa9ca-8f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:51 np0005539552 nova_compute[233724]: 2025-11-29 07:54:51.562 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:51 np0005539552 nova_compute[233724]: 2025-11-29 07:54:51.566 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:54:51 np0005539552 nova_compute[233724]: 2025-11-29 07:54:51.568 233728 INFO os_vif [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:bf:87,bridge_name='br-int',has_traffic_filtering=True,id=5d7fa9ca-8f51-4047-a121-6c4534fc5ae6,network=Network(7a06a21a-ba04-4a14-8d62-c931cbbf124d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d7fa9ca-8f')#033[00m
Nov 29 02:54:51 np0005539552 nova_compute[233724]: 2025-11-29 07:54:51.568 233728 DEBUG oslo_concurrency.lockutils [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:51 np0005539552 nova_compute[233724]: 2025-11-29 07:54:51.569 233728 DEBUG oslo_concurrency.lockutils [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:51 np0005539552 nova_compute[233724]: 2025-11-29 07:54:51.569 233728 DEBUG oslo_concurrency.lockutils [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:51 np0005539552 nova_compute[233724]: 2025-11-29 07:54:51.570 233728 DEBUG nova.compute.manager [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349#033[00m
Nov 29 02:54:51 np0005539552 nova_compute[233724]: 2025-11-29 07:54:51.570 233728 INFO nova.virt.libvirt.driver [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Deleting instance files /var/lib/nova/instances/7067462a-37a6-458e-b96c-76adcea5fdfa_del#033[00m
Nov 29 02:54:51 np0005539552 nova_compute[233724]: 2025-11-29 07:54:51.570 233728 INFO nova.virt.libvirt.driver [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Deletion of /var/lib/nova/instances/7067462a-37a6-458e-b96c-76adcea5fdfa_del complete#033[00m
Nov 29 02:54:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:52.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:53 np0005539552 nova_compute[233724]: 2025-11-29 07:54:53.048 233728 DEBUG nova.compute.manager [req-6e9c13f3-0ede-4804-8d7c-7cfa169175cc req-0498a547-cf66-4ad7-86de-28a31e1cff9f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Received event network-vif-plugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:54:53 np0005539552 nova_compute[233724]: 2025-11-29 07:54:53.048 233728 DEBUG oslo_concurrency.lockutils [req-6e9c13f3-0ede-4804-8d7c-7cfa169175cc req-0498a547-cf66-4ad7-86de-28a31e1cff9f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:53 np0005539552 nova_compute[233724]: 2025-11-29 07:54:53.049 233728 DEBUG oslo_concurrency.lockutils [req-6e9c13f3-0ede-4804-8d7c-7cfa169175cc req-0498a547-cf66-4ad7-86de-28a31e1cff9f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:53 np0005539552 nova_compute[233724]: 2025-11-29 07:54:53.049 233728 DEBUG oslo_concurrency.lockutils [req-6e9c13f3-0ede-4804-8d7c-7cfa169175cc req-0498a547-cf66-4ad7-86de-28a31e1cff9f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:53 np0005539552 nova_compute[233724]: 2025-11-29 07:54:53.049 233728 DEBUG nova.compute.manager [req-6e9c13f3-0ede-4804-8d7c-7cfa169175cc req-0498a547-cf66-4ad7-86de-28a31e1cff9f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] No waiting events found dispatching network-vif-plugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:54:53 np0005539552 nova_compute[233724]: 2025-11-29 07:54:53.049 233728 WARNING nova.compute.manager [req-6e9c13f3-0ede-4804-8d7c-7cfa169175cc req-0498a547-cf66-4ad7-86de-28a31e1cff9f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Received unexpected event network-vif-plugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 for instance with vm_state active and task_state migrating.#033[00m
Nov 29 02:54:53 np0005539552 nova_compute[233724]: 2025-11-29 07:54:53.050 233728 DEBUG nova.compute.manager [req-6e9c13f3-0ede-4804-8d7c-7cfa169175cc req-0498a547-cf66-4ad7-86de-28a31e1cff9f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Received event network-vif-plugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:54:53 np0005539552 nova_compute[233724]: 2025-11-29 07:54:53.050 233728 DEBUG oslo_concurrency.lockutils [req-6e9c13f3-0ede-4804-8d7c-7cfa169175cc req-0498a547-cf66-4ad7-86de-28a31e1cff9f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:53 np0005539552 nova_compute[233724]: 2025-11-29 07:54:53.050 233728 DEBUG oslo_concurrency.lockutils [req-6e9c13f3-0ede-4804-8d7c-7cfa169175cc req-0498a547-cf66-4ad7-86de-28a31e1cff9f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:53 np0005539552 nova_compute[233724]: 2025-11-29 07:54:53.050 233728 DEBUG oslo_concurrency.lockutils [req-6e9c13f3-0ede-4804-8d7c-7cfa169175cc req-0498a547-cf66-4ad7-86de-28a31e1cff9f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:53 np0005539552 nova_compute[233724]: 2025-11-29 07:54:53.050 233728 DEBUG nova.compute.manager [req-6e9c13f3-0ede-4804-8d7c-7cfa169175cc req-0498a547-cf66-4ad7-86de-28a31e1cff9f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] No waiting events found dispatching network-vif-plugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:54:53 np0005539552 nova_compute[233724]: 2025-11-29 07:54:53.051 233728 WARNING nova.compute.manager [req-6e9c13f3-0ede-4804-8d7c-7cfa169175cc req-0498a547-cf66-4ad7-86de-28a31e1cff9f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Received unexpected event network-vif-plugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 for instance with vm_state active and task_state migrating.#033[00m
Nov 29 02:54:53 np0005539552 nova_compute[233724]: 2025-11-29 07:54:53.051 233728 DEBUG nova.compute.manager [req-6e9c13f3-0ede-4804-8d7c-7cfa169175cc req-0498a547-cf66-4ad7-86de-28a31e1cff9f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Received event network-vif-plugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:54:53 np0005539552 nova_compute[233724]: 2025-11-29 07:54:53.051 233728 DEBUG oslo_concurrency.lockutils [req-6e9c13f3-0ede-4804-8d7c-7cfa169175cc req-0498a547-cf66-4ad7-86de-28a31e1cff9f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:53 np0005539552 nova_compute[233724]: 2025-11-29 07:54:53.051 233728 DEBUG oslo_concurrency.lockutils [req-6e9c13f3-0ede-4804-8d7c-7cfa169175cc req-0498a547-cf66-4ad7-86de-28a31e1cff9f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:53 np0005539552 nova_compute[233724]: 2025-11-29 07:54:53.052 233728 DEBUG oslo_concurrency.lockutils [req-6e9c13f3-0ede-4804-8d7c-7cfa169175cc req-0498a547-cf66-4ad7-86de-28a31e1cff9f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:53 np0005539552 nova_compute[233724]: 2025-11-29 07:54:53.052 233728 DEBUG nova.compute.manager [req-6e9c13f3-0ede-4804-8d7c-7cfa169175cc req-0498a547-cf66-4ad7-86de-28a31e1cff9f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] No waiting events found dispatching network-vif-plugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:54:53 np0005539552 nova_compute[233724]: 2025-11-29 07:54:53.052 233728 WARNING nova.compute.manager [req-6e9c13f3-0ede-4804-8d7c-7cfa169175cc req-0498a547-cf66-4ad7-86de-28a31e1cff9f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Received unexpected event network-vif-plugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 for instance with vm_state active and task_state migrating.#033[00m
Nov 29 02:54:53 np0005539552 ovn_controller[133798]: 2025-11-29T07:54:53Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c2:bc:90 10.100.0.12
Nov 29 02:54:53 np0005539552 ovn_controller[133798]: 2025-11-29T07:54:53Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c2:bc:90 10.100.0.12
Nov 29 02:54:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:53.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:54 np0005539552 nova_compute[233724]: 2025-11-29 07:54:54.348 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:54.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:55.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:56 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:56 np0005539552 nova_compute[233724]: 2025-11-29 07:54:56.405 233728 DEBUG nova.compute.manager [req-aafe9edd-f44b-4ebd-acfd-f37bda247487 req-4c8d7dda-9a24-4535-92f1-c581c8cab370 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Received event network-vif-plugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:54:56 np0005539552 nova_compute[233724]: 2025-11-29 07:54:56.405 233728 DEBUG oslo_concurrency.lockutils [req-aafe9edd-f44b-4ebd-acfd-f37bda247487 req-4c8d7dda-9a24-4535-92f1-c581c8cab370 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:56 np0005539552 nova_compute[233724]: 2025-11-29 07:54:56.405 233728 DEBUG oslo_concurrency.lockutils [req-aafe9edd-f44b-4ebd-acfd-f37bda247487 req-4c8d7dda-9a24-4535-92f1-c581c8cab370 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:56 np0005539552 nova_compute[233724]: 2025-11-29 07:54:56.406 233728 DEBUG oslo_concurrency.lockutils [req-aafe9edd-f44b-4ebd-acfd-f37bda247487 req-4c8d7dda-9a24-4535-92f1-c581c8cab370 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:56 np0005539552 nova_compute[233724]: 2025-11-29 07:54:56.406 233728 DEBUG nova.compute.manager [req-aafe9edd-f44b-4ebd-acfd-f37bda247487 req-4c8d7dda-9a24-4535-92f1-c581c8cab370 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] No waiting events found dispatching network-vif-plugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:54:56 np0005539552 nova_compute[233724]: 2025-11-29 07:54:56.406 233728 WARNING nova.compute.manager [req-aafe9edd-f44b-4ebd-acfd-f37bda247487 req-4c8d7dda-9a24-4535-92f1-c581c8cab370 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Received unexpected event network-vif-plugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 for instance with vm_state active and task_state migrating.#033[00m
Nov 29 02:54:56 np0005539552 nova_compute[233724]: 2025-11-29 07:54:56.563 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:54:56.707 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:56.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:57 np0005539552 nova_compute[233724]: 2025-11-29 07:54:57.365 233728 DEBUG oslo_concurrency.lockutils [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Acquiring lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:57 np0005539552 nova_compute[233724]: 2025-11-29 07:54:57.366 233728 DEBUG oslo_concurrency.lockutils [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:57 np0005539552 nova_compute[233724]: 2025-11-29 07:54:57.366 233728 DEBUG oslo_concurrency.lockutils [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:57 np0005539552 nova_compute[233724]: 2025-11-29 07:54:57.387 233728 DEBUG oslo_concurrency.lockutils [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:57 np0005539552 nova_compute[233724]: 2025-11-29 07:54:57.388 233728 DEBUG oslo_concurrency.lockutils [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:57 np0005539552 nova_compute[233724]: 2025-11-29 07:54:57.388 233728 DEBUG oslo_concurrency.lockutils [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:57 np0005539552 nova_compute[233724]: 2025-11-29 07:54:57.388 233728 DEBUG nova.compute.resource_tracker [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:54:57 np0005539552 nova_compute[233724]: 2025-11-29 07:54:57.389 233728 DEBUG oslo_concurrency.processutils [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:57.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:57 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:54:57 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3384893100' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:54:57 np0005539552 nova_compute[233724]: 2025-11-29 07:54:57.828 233728 DEBUG oslo_concurrency.processutils [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:57 np0005539552 nova_compute[233724]: 2025-11-29 07:54:57.908 233728 DEBUG nova.virt.libvirt.driver [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:54:57 np0005539552 nova_compute[233724]: 2025-11-29 07:54:57.909 233728 DEBUG nova.virt.libvirt.driver [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:54:57 np0005539552 podman[244361]: 2025-11-29 07:54:57.936626701 +0000 UTC m=+0.060803805 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 02:54:57 np0005539552 podman[244362]: 2025-11-29 07:54:57.942997635 +0000 UTC m=+0.062335257 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 29 02:54:58 np0005539552 nova_compute[233724]: 2025-11-29 07:54:58.043 233728 WARNING nova.virt.libvirt.driver [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:54:58 np0005539552 nova_compute[233724]: 2025-11-29 07:54:58.044 233728 DEBUG nova.compute.resource_tracker [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4635MB free_disk=20.764312744140625GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": 
"0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:54:58 np0005539552 nova_compute[233724]: 2025-11-29 07:54:58.044 233728 DEBUG oslo_concurrency.lockutils [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:58 np0005539552 nova_compute[233724]: 2025-11-29 07:54:58.044 233728 DEBUG oslo_concurrency.lockutils [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:58 np0005539552 nova_compute[233724]: 2025-11-29 07:54:58.519 233728 DEBUG nova.compute.resource_tracker [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Migration for instance 7067462a-37a6-458e-b96c-76adcea5fdfa refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Nov 29 02:54:58 np0005539552 nova_compute[233724]: 2025-11-29 07:54:58.552 233728 DEBUG nova.compute.resource_tracker [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491#033[00m
Nov 29 02:54:58 np0005539552 nova_compute[233724]: 2025-11-29 07:54:58.553 233728 INFO nova.compute.resource_tracker [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Updating resource usage from migration afbc71a1-4ace-4109-aba9-8332d00626a1#033[00m
Nov 29 02:54:58 np0005539552 nova_compute[233724]: 2025-11-29 07:54:58.579 233728 DEBUG nova.compute.resource_tracker [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Migration 7828b189-8959-4bb4-9dc2-995f9b7d9cbd is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Nov 29 02:54:58 np0005539552 nova_compute[233724]: 2025-11-29 07:54:58.580 233728 DEBUG nova.compute.resource_tracker [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Migration afbc71a1-4ace-4109-aba9-8332d00626a1 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Nov 29 02:54:58 np0005539552 nova_compute[233724]: 2025-11-29 07:54:58.580 233728 DEBUG nova.compute.resource_tracker [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:54:58 np0005539552 nova_compute[233724]: 2025-11-29 07:54:58.581 233728 DEBUG nova.compute.resource_tracker [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:54:58 np0005539552 nova_compute[233724]: 2025-11-29 07:54:58.633 233728 DEBUG oslo_concurrency.processutils [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:58.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:58 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:54:58 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:54:58 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:54:59 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:54:59 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1351355672' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:54:59 np0005539552 nova_compute[233724]: 2025-11-29 07:54:59.061 233728 DEBUG oslo_concurrency.processutils [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:59 np0005539552 nova_compute[233724]: 2025-11-29 07:54:59.067 233728 DEBUG nova.compute.provider_tree [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:54:59 np0005539552 nova_compute[233724]: 2025-11-29 07:54:59.084 233728 DEBUG nova.scheduler.client.report [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:54:59 np0005539552 nova_compute[233724]: 2025-11-29 07:54:59.106 233728 DEBUG nova.compute.resource_tracker [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:54:59 np0005539552 nova_compute[233724]: 2025-11-29 07:54:59.106 233728 DEBUG oslo_concurrency.lockutils [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:59 np0005539552 nova_compute[233724]: 2025-11-29 07:54:59.118 233728 INFO nova.compute.manager [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Migrating instance to compute-0.ctlplane.example.com finished successfully.#033[00m
Nov 29 02:54:59 np0005539552 nova_compute[233724]: 2025-11-29 07:54:59.265 233728 INFO nova.scheduler.client.report [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Deleted allocation for migration 7828b189-8959-4bb4-9dc2-995f9b7d9cbd#033[00m
Nov 29 02:54:59 np0005539552 nova_compute[233724]: 2025-11-29 07:54:59.266 233728 DEBUG nova.virt.libvirt.driver [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662#033[00m
Nov 29 02:54:59 np0005539552 nova_compute[233724]: 2025-11-29 07:54:59.350 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:54:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:54:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:59.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:54:59 np0005539552 nova_compute[233724]: 2025-11-29 07:54:59.569 233728 DEBUG nova.virt.libvirt.driver [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Creating tmpfile /var/lib/nova/instances/tmpto9guicl to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m
Nov 29 02:54:59 np0005539552 nova_compute[233724]: 2025-11-29 07:54:59.689 233728 DEBUG nova.compute.manager [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpto9guicl',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m
Nov 29 02:54:59 np0005539552 podman[244423]: 2025-11-29 07:54:59.715778962 +0000 UTC m=+0.138941221 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 02:55:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:00.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:01 np0005539552 nova_compute[233724]: 2025-11-29 07:55:01.277 233728 DEBUG nova.compute.manager [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpto9guicl',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='7067462a-37a6-458e-b96c-76adcea5fdfa',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m
Nov 29 02:55:01 np0005539552 nova_compute[233724]: 2025-11-29 07:55:01.310 233728 DEBUG oslo_concurrency.lockutils [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Acquiring lock "refresh_cache-7067462a-37a6-458e-b96c-76adcea5fdfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:55:01 np0005539552 nova_compute[233724]: 2025-11-29 07:55:01.311 233728 DEBUG oslo_concurrency.lockutils [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Acquired lock "refresh_cache-7067462a-37a6-458e-b96c-76adcea5fdfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:55:01 np0005539552 nova_compute[233724]: 2025-11-29 07:55:01.311 233728 DEBUG nova.network.neutron [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:55:01 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:01.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:01 np0005539552 nova_compute[233724]: 2025-11-29 07:55:01.566 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:55:02 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1584557205' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:55:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:02.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:02 np0005539552 nova_compute[233724]: 2025-11-29 07:55:02.989 233728 DEBUG nova.network.neutron [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Updating instance_info_cache with network_info: [{"id": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "address": "fa:16:3e:17:bf:87", "network": {"id": "7a06a21a-ba04-4a14-8d62-c931cbbf124d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-132947190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1963a097b7694450aa0d7c30b27b38ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d7fa9ca-8f", "ovs_interfaceid": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:55:03 np0005539552 nova_compute[233724]: 2025-11-29 07:55:03.005 233728 DEBUG oslo_concurrency.lockutils [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Releasing lock "refresh_cache-7067462a-37a6-458e-b96c-76adcea5fdfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:55:03 np0005539552 nova_compute[233724]: 2025-11-29 07:55:03.006 233728 DEBUG os_brick.utils [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 02:55:03 np0005539552 nova_compute[233724]: 2025-11-29 07:55:03.007 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:03 np0005539552 nova_compute[233724]: 2025-11-29 07:55:03.020 243418 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:03 np0005539552 nova_compute[233724]: 2025-11-29 07:55:03.020 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[d1e39ddd-5af8-43b4-970c-500b64d7e482]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:03 np0005539552 nova_compute[233724]: 2025-11-29 07:55:03.022 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:03 np0005539552 nova_compute[233724]: 2025-11-29 07:55:03.031 243418 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:03 np0005539552 nova_compute[233724]: 2025-11-29 07:55:03.031 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[483139f7-049f-46eb-b453-ca7c853e3f94]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0e73df9328', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:03 np0005539552 nova_compute[233724]: 2025-11-29 07:55:03.033 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:03 np0005539552 nova_compute[233724]: 2025-11-29 07:55:03.043 243418 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:03 np0005539552 nova_compute[233724]: 2025-11-29 07:55:03.043 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[cb3b4e7c-ffaf-441c-b461-b801c78f160a]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:03 np0005539552 nova_compute[233724]: 2025-11-29 07:55:03.044 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[e22d2c83-2957-489e-a285-a9854a393369]: (4, '6fbde64a-d978-4f1a-a29d-a77e1f5a1987') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:03 np0005539552 nova_compute[233724]: 2025-11-29 07:55:03.045 233728 DEBUG oslo_concurrency.processutils [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:03 np0005539552 nova_compute[233724]: 2025-11-29 07:55:03.064 233728 DEBUG oslo_concurrency.processutils [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] CMD "nvme version" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:03 np0005539552 nova_compute[233724]: 2025-11-29 07:55:03.066 233728 DEBUG os_brick.initiator.connectors.lightos [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 02:55:03 np0005539552 nova_compute[233724]: 2025-11-29 07:55:03.067 233728 DEBUG os_brick.initiator.connectors.lightos [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 02:55:03 np0005539552 nova_compute[233724]: 2025-11-29 07:55:03.067 233728 DEBUG os_brick.initiator.connectors.lightos [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 02:55:03 np0005539552 nova_compute[233724]: 2025-11-29 07:55:03.067 233728 DEBUG os_brick.utils [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] <== get_connector_properties: return (60ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0e73df9328', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '6fbde64a-d978-4f1a-a29d-a77e1f5a1987', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 02:55:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:03.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:55:03 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3016388766' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:55:03 np0005539552 nova_compute[233724]: 2025-11-29 07:55:03.906 233728 DEBUG nova.compute.manager [req-6254cb37-979d-4eac-83c3-90177fb66190 req-13556702-630a-414b-ae00-3a0c4c242c75 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Received event network-vif-unplugged-d1330295-51bc-4e64-a620-b63a6d8777fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:55:03 np0005539552 nova_compute[233724]: 2025-11-29 07:55:03.907 233728 DEBUG oslo_concurrency.lockutils [req-6254cb37-979d-4eac-83c3-90177fb66190 req-13556702-630a-414b-ae00-3a0c4c242c75 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:03 np0005539552 nova_compute[233724]: 2025-11-29 07:55:03.908 233728 DEBUG oslo_concurrency.lockutils [req-6254cb37-979d-4eac-83c3-90177fb66190 req-13556702-630a-414b-ae00-3a0c4c242c75 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:03 np0005539552 nova_compute[233724]: 2025-11-29 07:55:03.908 233728 DEBUG oslo_concurrency.lockutils [req-6254cb37-979d-4eac-83c3-90177fb66190 req-13556702-630a-414b-ae00-3a0c4c242c75 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:03 np0005539552 nova_compute[233724]: 2025-11-29 07:55:03.909 233728 DEBUG nova.compute.manager [req-6254cb37-979d-4eac-83c3-90177fb66190 req-13556702-630a-414b-ae00-3a0c4c242c75 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] No waiting events found dispatching network-vif-unplugged-d1330295-51bc-4e64-a620-b63a6d8777fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:55:03 np0005539552 nova_compute[233724]: 2025-11-29 07:55:03.909 233728 DEBUG nova.compute.manager [req-6254cb37-979d-4eac-83c3-90177fb66190 req-13556702-630a-414b-ae00-3a0c4c242c75 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Received event network-vif-unplugged-d1330295-51bc-4e64-a620-b63a6d8777fb for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:55:04 np0005539552 nova_compute[233724]: 2025-11-29 07:55:04.214 233728 DEBUG nova.virt.libvirt.driver [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpto9guicl',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='7067462a-37a6-458e-b96c-76adcea5fdfa',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={3ef07b78-0409-49cf-a941-8a19b02dd939='0fdf73ae-cbdf-4670-b55b-bd9dfb3efb32'},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m
Nov 29 02:55:04 np0005539552 nova_compute[233724]: 2025-11-29 07:55:04.214 233728 DEBUG nova.virt.libvirt.driver [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Creating instance directory: /var/lib/nova/instances/7067462a-37a6-458e-b96c-76adcea5fdfa pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m
Nov 29 02:55:04 np0005539552 nova_compute[233724]: 2025-11-29 07:55:04.215 233728 DEBUG nova.virt.libvirt.driver [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Ensure instance console log exists: /var/lib/nova/instances/7067462a-37a6-458e-b96c-76adcea5fdfa/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:55:04 np0005539552 nova_compute[233724]: 2025-11-29 07:55:04.215 233728 DEBUG nova.virt.libvirt.driver [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Connecting volumes before live migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10901#033[00m
Nov 29 02:55:04 np0005539552 nova_compute[233724]: 2025-11-29 07:55:04.218 233728 DEBUG nova.virt.libvirt.driver [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m
Nov 29 02:55:04 np0005539552 nova_compute[233724]: 2025-11-29 07:55:04.219 233728 DEBUG nova.virt.libvirt.vif [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T07:54:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1301305546',display_name='tempest-LiveMigrationTest-server-1301305546',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1301305546',id=20,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:54:31Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1963a097b7694450aa0d7c30b27b38ac',ramdisk_id='',reservation_id='r-7mt4lwbv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-814240379',owner_user_name='tempest-LiveMigrationTest-814240379-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:54:56Z,user_data=None,user_id='85f5548e01234fe4ae9b88e998e943f8',uuid=7067462a-37a6-458e-b96c-76adcea5fdfa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "address": "fa:16:3e:17:bf:87", "network": {"id": "7a06a21a-ba04-4a14-8d62-c931cbbf124d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-132947190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1963a097b7694450aa0d7c30b27b38ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap5d7fa9ca-8f", "ovs_interfaceid": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:55:04 np0005539552 nova_compute[233724]: 2025-11-29 07:55:04.219 233728 DEBUG nova.network.os_vif_util [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Converting VIF {"id": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "address": "fa:16:3e:17:bf:87", "network": {"id": "7a06a21a-ba04-4a14-8d62-c931cbbf124d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-132947190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1963a097b7694450aa0d7c30b27b38ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap5d7fa9ca-8f", "ovs_interfaceid": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:55:04 np0005539552 nova_compute[233724]: 2025-11-29 07:55:04.220 233728 DEBUG nova.network.os_vif_util [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:17:bf:87,bridge_name='br-int',has_traffic_filtering=True,id=5d7fa9ca-8f51-4047-a121-6c4534fc5ae6,network=Network(7a06a21a-ba04-4a14-8d62-c931cbbf124d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d7fa9ca-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:55:04 np0005539552 nova_compute[233724]: 2025-11-29 07:55:04.220 233728 DEBUG os_vif [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:17:bf:87,bridge_name='br-int',has_traffic_filtering=True,id=5d7fa9ca-8f51-4047-a121-6c4534fc5ae6,network=Network(7a06a21a-ba04-4a14-8d62-c931cbbf124d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d7fa9ca-8f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:55:04 np0005539552 nova_compute[233724]: 2025-11-29 07:55:04.221 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:04 np0005539552 nova_compute[233724]: 2025-11-29 07:55:04.221 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:04 np0005539552 nova_compute[233724]: 2025-11-29 07:55:04.222 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:55:04 np0005539552 nova_compute[233724]: 2025-11-29 07:55:04.225 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:04 np0005539552 nova_compute[233724]: 2025-11-29 07:55:04.225 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5d7fa9ca-8f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:04 np0005539552 nova_compute[233724]: 2025-11-29 07:55:04.226 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5d7fa9ca-8f, col_values=(('external_ids', {'iface-id': '5d7fa9ca-8f51-4047-a121-6c4534fc5ae6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:17:bf:87', 'vm-uuid': '7067462a-37a6-458e-b96c-76adcea5fdfa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:04 np0005539552 nova_compute[233724]: 2025-11-29 07:55:04.264 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:04 np0005539552 NetworkManager[48926]: <info>  [1764402904.2665] manager: (tap5d7fa9ca-8f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Nov 29 02:55:04 np0005539552 nova_compute[233724]: 2025-11-29 07:55:04.267 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:55:04 np0005539552 nova_compute[233724]: 2025-11-29 07:55:04.271 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:04 np0005539552 nova_compute[233724]: 2025-11-29 07:55:04.272 233728 INFO os_vif [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:17:bf:87,bridge_name='br-int',has_traffic_filtering=True,id=5d7fa9ca-8f51-4047-a121-6c4534fc5ae6,network=Network(7a06a21a-ba04-4a14-8d62-c931cbbf124d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d7fa9ca-8f')#033[00m
Nov 29 02:55:04 np0005539552 nova_compute[233724]: 2025-11-29 07:55:04.276 233728 DEBUG nova.virt.libvirt.driver [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m
Nov 29 02:55:04 np0005539552 nova_compute[233724]: 2025-11-29 07:55:04.276 233728 DEBUG nova.compute.manager [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpto9guicl',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='7067462a-37a6-458e-b96c-76adcea5fdfa',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={3ef07b78-0409-49cf-a941-8a19b02dd939='0fdf73ae-cbdf-4670-b55b-bd9dfb3efb32'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m
Nov 29 02:55:04 np0005539552 nova_compute[233724]: 2025-11-29 07:55:04.352 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:04.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:05 np0005539552 nova_compute[233724]: 2025-11-29 07:55:05.099 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:55:05 np0005539552 nova_compute[233724]: 2025-11-29 07:55:05.232 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402890.2319345, 7067462a-37a6-458e-b96c-76adcea5fdfa => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:55:05 np0005539552 nova_compute[233724]: 2025-11-29 07:55:05.233 233728 INFO nova.compute.manager [-] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:55:05 np0005539552 nova_compute[233724]: 2025-11-29 07:55:05.250 233728 DEBUG nova.compute.manager [None req-3d5c6e31-e21b-4b73-b7dc-90758aa45672 - - - - - -] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:55:05 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:55:05 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:55:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:05.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:05 np0005539552 nova_compute[233724]: 2025-11-29 07:55:05.919 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:55:05 np0005539552 nova_compute[233724]: 2025-11-29 07:55:05.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:55:05 np0005539552 nova_compute[233724]: 2025-11-29 07:55:05.923 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:55:05 np0005539552 nova_compute[233724]: 2025-11-29 07:55:05.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:55:05 np0005539552 nova_compute[233724]: 2025-11-29 07:55:05.951 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "refresh_cache-56f3f72f-7db4-47c8-a4c3-20b2acc58aa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:55:05 np0005539552 nova_compute[233724]: 2025-11-29 07:55:05.951 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquired lock "refresh_cache-56f3f72f-7db4-47c8-a4c3-20b2acc58aa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:55:05 np0005539552 nova_compute[233724]: 2025-11-29 07:55:05.952 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 02:55:05 np0005539552 nova_compute[233724]: 2025-11-29 07:55:05.952 233728 DEBUG nova.objects.instance [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:55:06 np0005539552 nova_compute[233724]: 2025-11-29 07:55:06.013 233728 DEBUG nova.compute.manager [req-44dca21d-7c0c-459f-869e-918d254b1f34 req-849dcdcc-5f1f-4af7-a579-75a4e74fc5ee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Received event network-vif-plugged-d1330295-51bc-4e64-a620-b63a6d8777fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:55:06 np0005539552 nova_compute[233724]: 2025-11-29 07:55:06.014 233728 DEBUG oslo_concurrency.lockutils [req-44dca21d-7c0c-459f-869e-918d254b1f34 req-849dcdcc-5f1f-4af7-a579-75a4e74fc5ee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:06 np0005539552 nova_compute[233724]: 2025-11-29 07:55:06.014 233728 DEBUG oslo_concurrency.lockutils [req-44dca21d-7c0c-459f-869e-918d254b1f34 req-849dcdcc-5f1f-4af7-a579-75a4e74fc5ee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:06 np0005539552 nova_compute[233724]: 2025-11-29 07:55:06.015 233728 DEBUG oslo_concurrency.lockutils [req-44dca21d-7c0c-459f-869e-918d254b1f34 req-849dcdcc-5f1f-4af7-a579-75a4e74fc5ee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:06 np0005539552 nova_compute[233724]: 2025-11-29 07:55:06.015 233728 DEBUG nova.compute.manager [req-44dca21d-7c0c-459f-869e-918d254b1f34 req-849dcdcc-5f1f-4af7-a579-75a4e74fc5ee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] No waiting events found dispatching network-vif-plugged-d1330295-51bc-4e64-a620-b63a6d8777fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:55:06 np0005539552 nova_compute[233724]: 2025-11-29 07:55:06.016 233728 WARNING nova.compute.manager [req-44dca21d-7c0c-459f-869e-918d254b1f34 req-849dcdcc-5f1f-4af7-a579-75a4e74fc5ee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Received unexpected event network-vif-plugged-d1330295-51bc-4e64-a620-b63a6d8777fb for instance with vm_state active and task_state migrating.#033[00m
Nov 29 02:55:06 np0005539552 nova_compute[233724]: 2025-11-29 07:55:06.016 233728 DEBUG nova.compute.manager [req-44dca21d-7c0c-459f-869e-918d254b1f34 req-849dcdcc-5f1f-4af7-a579-75a4e74fc5ee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Received event network-changed-d1330295-51bc-4e64-a620-b63a6d8777fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:55:06 np0005539552 nova_compute[233724]: 2025-11-29 07:55:06.017 233728 DEBUG nova.compute.manager [req-44dca21d-7c0c-459f-869e-918d254b1f34 req-849dcdcc-5f1f-4af7-a579-75a4e74fc5ee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Refreshing instance network info cache due to event network-changed-d1330295-51bc-4e64-a620-b63a6d8777fb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:55:06 np0005539552 nova_compute[233724]: 2025-11-29 07:55:06.017 233728 DEBUG oslo_concurrency.lockutils [req-44dca21d-7c0c-459f-869e-918d254b1f34 req-849dcdcc-5f1f-4af7-a579-75a4e74fc5ee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-56f3f72f-7db4-47c8-a4c3-20b2acc58aa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:55:06 np0005539552 nova_compute[233724]: 2025-11-29 07:55:06.089 233728 INFO nova.compute.manager [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Took 6.77 seconds for pre_live_migration on destination host compute-0.ctlplane.example.com.#033[00m
Nov 29 02:55:06 np0005539552 nova_compute[233724]: 2025-11-29 07:55:06.090 233728 DEBUG nova.compute.manager [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:55:06 np0005539552 nova_compute[233724]: 2025-11-29 07:55:06.114 233728 DEBUG nova.compute.manager [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpg_cjv671',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='56f3f72f-7db4-47c8-a4c3-20b2acc58aa9',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=Migration(afbc71a1-4ace-4109-aba9-8332d00626a1),old_vol_attachment_ids={e52d8ac1-8970-4cf0-9aa0-795f616090d0='086dffa8-4128-4c55-89ad-f4a779ee7ea0'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939#033[00m
Nov 29 02:55:06 np0005539552 nova_compute[233724]: 2025-11-29 07:55:06.118 233728 DEBUG nova.objects.instance [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Lazy-loading 'migration_context' on Instance uuid 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:55:06 np0005539552 nova_compute[233724]: 2025-11-29 07:55:06.120 233728 DEBUG nova.virt.libvirt.driver [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639#033[00m
Nov 29 02:55:06 np0005539552 nova_compute[233724]: 2025-11-29 07:55:06.123 233728 DEBUG nova.virt.libvirt.driver [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440#033[00m
Nov 29 02:55:06 np0005539552 nova_compute[233724]: 2025-11-29 07:55:06.124 233728 DEBUG nova.virt.libvirt.driver [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449#033[00m
Nov 29 02:55:06 np0005539552 nova_compute[233724]: 2025-11-29 07:55:06.165 233728 DEBUG nova.virt.libvirt.migration [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Find same serial number: pos=1, serial=e52d8ac1-8970-4cf0-9aa0-795f616090d0 _update_volume_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:242#033[00m
Nov 29 02:55:06 np0005539552 nova_compute[233724]: 2025-11-29 07:55:06.167 233728 DEBUG nova.virt.libvirt.vif [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:54:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-178880762',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-178880762',id=19,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:54:38Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f91d373d1ef64146866ef08735a75efa',ramdisk_id='',reservation_id='r-1xnv5qiw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1482931553',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1482931553-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:54:38Z,user_data=None,user_id='b8f5b14bc98a47f29238140d1d3f1220',uuid=56f3f72f-7db4-47c8-a4c3-20b2acc58aa9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d1330295-51bc-4e64-a620-b63a6d8777fb", "address": "fa:16:3e:c2:bc:90", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapd1330295-51", "ovs_interfaceid": "d1330295-51bc-4e64-a620-b63a6d8777fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:55:06 np0005539552 nova_compute[233724]: 2025-11-29 07:55:06.168 233728 DEBUG nova.network.os_vif_util [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Converting VIF {"id": "d1330295-51bc-4e64-a620-b63a6d8777fb", "address": "fa:16:3e:c2:bc:90", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapd1330295-51", "ovs_interfaceid": "d1330295-51bc-4e64-a620-b63a6d8777fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:55:06 np0005539552 nova_compute[233724]: 2025-11-29 07:55:06.169 233728 DEBUG nova.network.os_vif_util [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:bc:90,bridge_name='br-int',has_traffic_filtering=True,id=d1330295-51bc-4e64-a620-b63a6d8777fb,network=Network(ad69a0f4-0000-474b-9649-72cf1bf9f5c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1330295-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:55:06 np0005539552 nova_compute[233724]: 2025-11-29 07:55:06.170 233728 DEBUG nova.virt.libvirt.migration [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Updating guest XML with vif config: <interface type="ethernet">
Nov 29 02:55:06 np0005539552 nova_compute[233724]:  <mac address="fa:16:3e:c2:bc:90"/>
Nov 29 02:55:06 np0005539552 nova_compute[233724]:  <model type="virtio"/>
Nov 29 02:55:06 np0005539552 nova_compute[233724]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:55:06 np0005539552 nova_compute[233724]:  <mtu size="1442"/>
Nov 29 02:55:06 np0005539552 nova_compute[233724]:  <target dev="tapd1330295-51"/>
Nov 29 02:55:06 np0005539552 nova_compute[233724]: </interface>
Nov 29 02:55:06 np0005539552 nova_compute[233724]: _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388#033[00m
Nov 29 02:55:06 np0005539552 nova_compute[233724]: 2025-11-29 07:55:06.170 233728 DEBUG nova.virt.libvirt.driver [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272#033[00m
Nov 29 02:55:06 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:06 np0005539552 nova_compute[233724]: 2025-11-29 07:55:06.626 233728 DEBUG nova.virt.libvirt.migration [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Nov 29 02:55:06 np0005539552 nova_compute[233724]: 2025-11-29 07:55:06.627 233728 INFO nova.virt.libvirt.migration [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Increasing downtime to 50 ms after 0 sec elapsed time#033[00m
Nov 29 02:55:06 np0005539552 nova_compute[233724]: 2025-11-29 07:55:06.707 233728 DEBUG nova.network.neutron [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Port 5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 updated with migration profile {'os_vif_delegation': True, 'migrating_to': 'compute-2.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m
Nov 29 02:55:06 np0005539552 nova_compute[233724]: 2025-11-29 07:55:06.719 233728 INFO nova.virt.libvirt.driver [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).#033[00m
Nov 29 02:55:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:06.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:06 np0005539552 nova_compute[233724]: 2025-11-29 07:55:06.976 233728 DEBUG nova.compute.manager [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpto9guicl',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='7067462a-37a6-458e-b96c-76adcea5fdfa',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={3ef07b78-0409-49cf-a941-8a19b02dd939='0fdf73ae-cbdf-4670-b55b-bd9dfb3efb32'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m
Nov 29 02:55:07 np0005539552 systemd[1]: Starting libvirt proxy daemon...
Nov 29 02:55:07 np0005539552 systemd[1]: Started libvirt proxy daemon.
Nov 29 02:55:07 np0005539552 nova_compute[233724]: 2025-11-29 07:55:07.222 233728 DEBUG nova.virt.libvirt.migration [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Nov 29 02:55:07 np0005539552 nova_compute[233724]: 2025-11-29 07:55:07.222 233728 DEBUG nova.virt.libvirt.migration [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Nov 29 02:55:07 np0005539552 kernel: tap5d7fa9ca-8f: entered promiscuous mode
Nov 29 02:55:07 np0005539552 NetworkManager[48926]: <info>  [1764402907.2350] manager: (tap5d7fa9ca-8f): new Tun device (/org/freedesktop/NetworkManager/Devices/51)
Nov 29 02:55:07 np0005539552 ovn_controller[133798]: 2025-11-29T07:55:07Z|00085|binding|INFO|Claiming lport 5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 for this additional chassis.
Nov 29 02:55:07 np0005539552 ovn_controller[133798]: 2025-11-29T07:55:07Z|00086|binding|INFO|5d7fa9ca-8f51-4047-a121-6c4534fc5ae6: Claiming fa:16:3e:17:bf:87 10.100.0.5
Nov 29 02:55:07 np0005539552 nova_compute[233724]: 2025-11-29 07:55:07.236 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:07 np0005539552 ovn_controller[133798]: 2025-11-29T07:55:07Z|00087|binding|INFO|Setting lport 5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 ovn-installed in OVS
Nov 29 02:55:07 np0005539552 nova_compute[233724]: 2025-11-29 07:55:07.261 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:07 np0005539552 nova_compute[233724]: 2025-11-29 07:55:07.265 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:07 np0005539552 systemd-machined[196379]: New machine qemu-6-instance-00000014.
Nov 29 02:55:07 np0005539552 systemd-udevd[244595]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:55:07 np0005539552 NetworkManager[48926]: <info>  [1764402907.2803] device (tap5d7fa9ca-8f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:55:07 np0005539552 NetworkManager[48926]: <info>  [1764402907.2810] device (tap5d7fa9ca-8f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:55:07 np0005539552 systemd[1]: Started Virtual Machine qemu-6-instance-00000014.
Nov 29 02:55:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:07.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:07 np0005539552 nova_compute[233724]: 2025-11-29 07:55:07.726 233728 DEBUG nova.virt.libvirt.migration [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Nov 29 02:55:07 np0005539552 nova_compute[233724]: 2025-11-29 07:55:07.727 233728 DEBUG nova.virt.libvirt.migration [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Nov 29 02:55:08 np0005539552 nova_compute[233724]: 2025-11-29 07:55:08.004 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Updating instance_info_cache with network_info: [{"id": "d1330295-51bc-4e64-a620-b63a6d8777fb", "address": "fa:16:3e:c2:bc:90", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1330295-51", "ovs_interfaceid": "d1330295-51bc-4e64-a620-b63a6d8777fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-0.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:55:08 np0005539552 nova_compute[233724]: 2025-11-29 07:55:08.006 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764402908.0036838, 7067462a-37a6-458e-b96c-76adcea5fdfa => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:55:08 np0005539552 nova_compute[233724]: 2025-11-29 07:55:08.006 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] VM Started (Lifecycle Event)#033[00m
Nov 29 02:55:08 np0005539552 nova_compute[233724]: 2025-11-29 07:55:08.027 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:55:08 np0005539552 nova_compute[233724]: 2025-11-29 07:55:08.028 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Releasing lock "refresh_cache-56f3f72f-7db4-47c8-a4c3-20b2acc58aa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:55:08 np0005539552 nova_compute[233724]: 2025-11-29 07:55:08.028 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 02:55:08 np0005539552 nova_compute[233724]: 2025-11-29 07:55:08.029 233728 DEBUG oslo_concurrency.lockutils [req-44dca21d-7c0c-459f-869e-918d254b1f34 req-849dcdcc-5f1f-4af7-a579-75a4e74fc5ee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-56f3f72f-7db4-47c8-a4c3-20b2acc58aa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:55:08 np0005539552 nova_compute[233724]: 2025-11-29 07:55:08.029 233728 DEBUG nova.network.neutron [req-44dca21d-7c0c-459f-869e-918d254b1f34 req-849dcdcc-5f1f-4af7-a579-75a4e74fc5ee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Refreshing network info cache for port d1330295-51bc-4e64-a620-b63a6d8777fb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:55:08 np0005539552 nova_compute[233724]: 2025-11-29 07:55:08.031 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:55:08 np0005539552 nova_compute[233724]: 2025-11-29 07:55:08.032 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:55:08 np0005539552 nova_compute[233724]: 2025-11-29 07:55:08.032 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:55:08 np0005539552 nova_compute[233724]: 2025-11-29 07:55:08.218 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764402908.2177937, 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:55:08 np0005539552 nova_compute[233724]: 2025-11-29 07:55:08.219 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] VM Paused (Lifecycle Event)#033[00m
Nov 29 02:55:08 np0005539552 nova_compute[233724]: 2025-11-29 07:55:08.235 233728 DEBUG nova.virt.libvirt.migration [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Current 50 elapsed 2 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Nov 29 02:55:08 np0005539552 nova_compute[233724]: 2025-11-29 07:55:08.236 233728 DEBUG nova.virt.libvirt.migration [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Nov 29 02:55:08 np0005539552 nova_compute[233724]: 2025-11-29 07:55:08.241 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:55:08 np0005539552 nova_compute[233724]: 2025-11-29 07:55:08.246 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:55:08 np0005539552 nova_compute[233724]: 2025-11-29 07:55:08.262 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] During sync_power_state the instance has a pending task (migrating). Skip.#033[00m
Nov 29 02:55:08 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:55:08 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 2400.1 total, 600.0 interval
Cumulative writes: 4951 writes, 26K keys, 4951 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
Cumulative WAL: 4951 writes, 4951 syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1444 writes, 6815 keys, 1444 commit groups, 1.0 writes per commit group, ingest: 15.18 MB, 0.03 MB/s
Interval WAL: 1444 writes, 1444 syncs, 1.00 writes per sync, written: 0.01 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.9      2.35              0.09        14    0.168       0      0       0.0       0.0
  L6      1/0    9.32 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3     55.1     47.5      2.75              0.35        13    0.211     67K   6518       0.0       0.0
 Sum      1/0    9.32 MB   0.0      0.1     0.0      0.1       0.2      0.0       0.0   5.3     29.7     31.5      5.10              0.44        27    0.189     67K   6518       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   7.1     29.4     29.1      1.52              0.12         8    0.190     23K   1973       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     55.1     47.5      2.75              0.35        13    0.211     67K   6518       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.2      2.31              0.09        13    0.178       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.044       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 2400.1 total, 600.0 interval
Flush(GB): cumulative 0.030, interval 0.006
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.16 GB write, 0.07 MB/s write, 0.15 GB read, 0.06 MB/s read, 5.1 seconds
Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 1.5 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x560bf13bd1f0#2 capacity: 304.00 MB usage: 13.22 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000133 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(734,12.67 MB,4.1677%) FilterBlock(27,189.98 KB,0.0610301%) IndexBlock(27,371.53 KB,0.11935%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Nov 29 02:55:08 np0005539552 kernel: tapd1330295-51 (unregistering): left promiscuous mode
Nov 29 02:55:08 np0005539552 NetworkManager[48926]: <info>  [1764402908.4330] device (tapd1330295-51): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:55:08 np0005539552 ovn_controller[133798]: 2025-11-29T07:55:08Z|00088|binding|INFO|Releasing lport d1330295-51bc-4e64-a620-b63a6d8777fb from this chassis (sb_readonly=0)
Nov 29 02:55:08 np0005539552 ovn_controller[133798]: 2025-11-29T07:55:08Z|00089|binding|INFO|Setting lport d1330295-51bc-4e64-a620-b63a6d8777fb down in Southbound
Nov 29 02:55:08 np0005539552 ovn_controller[133798]: 2025-11-29T07:55:08Z|00090|binding|INFO|Removing iface tapd1330295-51 ovn-installed in OVS
Nov 29 02:55:08 np0005539552 nova_compute[233724]: 2025-11-29 07:55:08.451 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:08 np0005539552 nova_compute[233724]: 2025-11-29 07:55:08.452 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:08.462 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:bc:90 10.100.0.12'], port_security=['fa:16:3e:c2:bc:90 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com,compute-0.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': 'a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '56f3f72f-7db4-47c8-a4c3-20b2acc58aa9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ad69a0f4-0000-474b-9649-72cf1bf9f5c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f91d373d1ef64146866ef08735a75efa', 'neutron:revision_number': '8', 'neutron:security_group_ids': '394eda18-2fbd-4f97-9713-003068aad79a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=19139b07-e3dc-4118-93d3-d7c140077f4d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=d1330295-51bc-4e64-a620-b63a6d8777fb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:55:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:08.463 143400 INFO neutron.agent.ovn.metadata.agent [-] Port d1330295-51bc-4e64-a620-b63a6d8777fb in datapath ad69a0f4-0000-474b-9649-72cf1bf9f5c1 unbound from our chassis#033[00m
Nov 29 02:55:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:08.465 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ad69a0f4-0000-474b-9649-72cf1bf9f5c1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:55:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:08.466 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[5bbd792a-6ff2-4336-9027-b4b34d61b5a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:08.468 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1 namespace which is not needed anymore#033[00m
Nov 29 02:55:08 np0005539552 nova_compute[233724]: 2025-11-29 07:55:08.480 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:08 np0005539552 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000013.scope: Deactivated successfully.
Nov 29 02:55:08 np0005539552 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000013.scope: Consumed 16.256s CPU time.
Nov 29 02:55:08 np0005539552 systemd-machined[196379]: Machine qemu-5-instance-00000013 terminated.
Nov 29 02:55:08 np0005539552 nova_compute[233724]: 2025-11-29 07:55:08.551 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764402908.5507586, 7067462a-37a6-458e-b96c-76adcea5fdfa => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:55:08 np0005539552 nova_compute[233724]: 2025-11-29 07:55:08.552 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:55:08 np0005539552 nova_compute[233724]: 2025-11-29 07:55:08.568 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:55:08 np0005539552 nova_compute[233724]: 2025-11-29 07:55:08.571 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:55:08 np0005539552 virtqemud[233098]: Unable to get XATTR trusted.libvirt.security.ref_selinux on volumes/volume-e52d8ac1-8970-4cf0-9aa0-795f616090d0: No such file or directory
Nov 29 02:55:08 np0005539552 virtqemud[233098]: Unable to get XATTR trusted.libvirt.security.ref_dac on volumes/volume-e52d8ac1-8970-4cf0-9aa0-795f616090d0: No such file or directory
Nov 29 02:55:08 np0005539552 nova_compute[233724]: 2025-11-29 07:55:08.590 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-2.ctlplane.example.com#033[00m
Nov 29 02:55:08 np0005539552 NetworkManager[48926]: <info>  [1764402908.5966] manager: (tapd1330295-51): new Tun device (/org/freedesktop/NetworkManager/Devices/52)
Nov 29 02:55:08 np0005539552 nova_compute[233724]: 2025-11-29 07:55:08.598 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:08 np0005539552 neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1[244016]: [NOTICE]   (244020) : haproxy version is 2.8.14-c23fe91
Nov 29 02:55:08 np0005539552 neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1[244016]: [NOTICE]   (244020) : path to executable is /usr/sbin/haproxy
Nov 29 02:55:08 np0005539552 neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1[244016]: [WARNING]  (244020) : Exiting Master process...
Nov 29 02:55:08 np0005539552 neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1[244016]: [ALERT]    (244020) : Current worker (244022) exited with code 143 (Terminated)
Nov 29 02:55:08 np0005539552 neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1[244016]: [WARNING]  (244020) : All workers exited. Exiting... (0)
Nov 29 02:55:08 np0005539552 nova_compute[233724]: 2025-11-29 07:55:08.604 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:08 np0005539552 systemd[1]: libpod-09d7aec628c1676479af22811e894e514edfb1d5dbceeb43da33b06a953b83e8.scope: Deactivated successfully.
Nov 29 02:55:08 np0005539552 nova_compute[233724]: 2025-11-29 07:55:08.607 233728 DEBUG nova.virt.libvirt.driver [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279#033[00m
Nov 29 02:55:08 np0005539552 nova_compute[233724]: 2025-11-29 07:55:08.607 233728 DEBUG nova.virt.libvirt.driver [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327#033[00m
Nov 29 02:55:08 np0005539552 nova_compute[233724]: 2025-11-29 07:55:08.608 233728 DEBUG nova.virt.libvirt.driver [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630#033[00m
Nov 29 02:55:08 np0005539552 podman[244669]: 2025-11-29 07:55:08.613702337 +0000 UTC m=+0.044526802 container died 09d7aec628c1676479af22811e894e514edfb1d5dbceeb43da33b06a953b83e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 02:55:08 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-09d7aec628c1676479af22811e894e514edfb1d5dbceeb43da33b06a953b83e8-userdata-shm.mount: Deactivated successfully.
Nov 29 02:55:08 np0005539552 systemd[1]: var-lib-containers-storage-overlay-c5092303197f7d1c1fc3d5955098f6210b0b7d73e1fb7fb3f45d289c2a962556-merged.mount: Deactivated successfully.
Nov 29 02:55:08 np0005539552 podman[244669]: 2025-11-29 07:55:08.644485905 +0000 UTC m=+0.075310380 container cleanup 09d7aec628c1676479af22811e894e514edfb1d5dbceeb43da33b06a953b83e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 02:55:08 np0005539552 systemd[1]: libpod-conmon-09d7aec628c1676479af22811e894e514edfb1d5dbceeb43da33b06a953b83e8.scope: Deactivated successfully.
Nov 29 02:55:08 np0005539552 podman[244708]: 2025-11-29 07:55:08.703974183 +0000 UTC m=+0.038248742 container remove 09d7aec628c1676479af22811e894e514edfb1d5dbceeb43da33b06a953b83e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 02:55:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:08.708 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[05f2ca30-e145-4432-b0c1-23102198ef3a]: (4, ('Sat Nov 29 07:55:08 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1 (09d7aec628c1676479af22811e894e514edfb1d5dbceeb43da33b06a953b83e8)\n09d7aec628c1676479af22811e894e514edfb1d5dbceeb43da33b06a953b83e8\nSat Nov 29 07:55:08 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1 (09d7aec628c1676479af22811e894e514edfb1d5dbceeb43da33b06a953b83e8)\n09d7aec628c1676479af22811e894e514edfb1d5dbceeb43da33b06a953b83e8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:08.711 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a5662ebf-32f2-4f36-b0a7-e6dbcfa8dad6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:08.711 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapad69a0f4-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:08 np0005539552 nova_compute[233724]: 2025-11-29 07:55:08.713 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:08 np0005539552 kernel: tapad69a0f4-00: left promiscuous mode
Nov 29 02:55:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:08.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:08 np0005539552 nova_compute[233724]: 2025-11-29 07:55:08.787 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:08.788 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3a6f810c-6053-4b66-a868-31b5fdfd8cef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:08 np0005539552 nova_compute[233724]: 2025-11-29 07:55:08.788 233728 DEBUG nova.virt.libvirt.guest [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Domain has shutdown/gone away: Domain not found: no domain with matching uuid '56f3f72f-7db4-47c8-a4c3-20b2acc58aa9' (instance-00000013) get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688#033[00m
Nov 29 02:55:08 np0005539552 nova_compute[233724]: 2025-11-29 07:55:08.788 233728 INFO nova.virt.libvirt.driver [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Migration operation has completed#033[00m
Nov 29 02:55:08 np0005539552 nova_compute[233724]: 2025-11-29 07:55:08.788 233728 INFO nova.compute.manager [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] _post_live_migration() is started..#033[00m
Nov 29 02:55:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:08.810 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[82e32ae7-a83e-437b-ba1b-cd7e7f6f53e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:08.811 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[dd659bef-9376-45a4-ae46-ff7ffe2a4992]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:08.827 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b425b17e-f1eb-4dcf-9908-d5e7faadc314]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591810, 'reachable_time': 29150, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 244727, 'error': None, 'target': 'ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:08.828 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 02:55:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:08.828 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[0855692b-aedd-414f-a0fd-80ce1e61a065]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:08 np0005539552 systemd[1]: run-netns-ovnmeta\x2dad69a0f4\x2d0000\x2d474b\x2d9649\x2d72cf1bf9f5c1.mount: Deactivated successfully.
Nov 29 02:55:08 np0005539552 nova_compute[233724]: 2025-11-29 07:55:08.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:55:09 np0005539552 nova_compute[233724]: 2025-11-29 07:55:09.265 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:09 np0005539552 nova_compute[233724]: 2025-11-29 07:55:09.354 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:09.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:09 np0005539552 nova_compute[233724]: 2025-11-29 07:55:09.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:55:09 np0005539552 nova_compute[233724]: 2025-11-29 07:55:09.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:55:09 np0005539552 nova_compute[233724]: 2025-11-29 07:55:09.982 233728 DEBUG nova.network.neutron [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Activated binding for port d1330295-51bc-4e64-a620-b63a6d8777fb and host compute-0.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181#033[00m
Nov 29 02:55:09 np0005539552 nova_compute[233724]: 2025-11-29 07:55:09.983 233728 DEBUG nova.compute.manager [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "d1330295-51bc-4e64-a620-b63a6d8777fb", "address": "fa:16:3e:c2:bc:90", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1330295-51", "ovs_interfaceid": "d1330295-51bc-4e64-a620-b63a6d8777fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326#033[00m
Nov 29 02:55:09 np0005539552 nova_compute[233724]: 2025-11-29 07:55:09.985 233728 DEBUG nova.virt.libvirt.vif [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:54:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-178880762',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-178880762',id=19,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:54:38Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f91d373d1ef64146866ef08735a75efa',ramdisk_id='',reservation_id='r-1xnv5qiw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1482931553',owner
_user_name='tempest-LiveAutoBlockMigrationV225Test-1482931553-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:54:43Z,user_data=None,user_id='b8f5b14bc98a47f29238140d1d3f1220',uuid=56f3f72f-7db4-47c8-a4c3-20b2acc58aa9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d1330295-51bc-4e64-a620-b63a6d8777fb", "address": "fa:16:3e:c2:bc:90", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1330295-51", "ovs_interfaceid": "d1330295-51bc-4e64-a620-b63a6d8777fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:55:09 np0005539552 nova_compute[233724]: 2025-11-29 07:55:09.985 233728 DEBUG nova.network.os_vif_util [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Converting VIF {"id": "d1330295-51bc-4e64-a620-b63a6d8777fb", "address": "fa:16:3e:c2:bc:90", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1330295-51", "ovs_interfaceid": "d1330295-51bc-4e64-a620-b63a6d8777fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:55:09 np0005539552 nova_compute[233724]: 2025-11-29 07:55:09.987 233728 DEBUG nova.network.os_vif_util [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:bc:90,bridge_name='br-int',has_traffic_filtering=True,id=d1330295-51bc-4e64-a620-b63a6d8777fb,network=Network(ad69a0f4-0000-474b-9649-72cf1bf9f5c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1330295-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:55:09 np0005539552 nova_compute[233724]: 2025-11-29 07:55:09.987 233728 DEBUG os_vif [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:bc:90,bridge_name='br-int',has_traffic_filtering=True,id=d1330295-51bc-4e64-a620-b63a6d8777fb,network=Network(ad69a0f4-0000-474b-9649-72cf1bf9f5c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1330295-51') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:55:09 np0005539552 nova_compute[233724]: 2025-11-29 07:55:09.991 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:09 np0005539552 nova_compute[233724]: 2025-11-29 07:55:09.992 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd1330295-51, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:10 np0005539552 nova_compute[233724]: 2025-11-29 07:55:10.029 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:10 np0005539552 nova_compute[233724]: 2025-11-29 07:55:10.032 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:55:10 np0005539552 nova_compute[233724]: 2025-11-29 07:55:10.035 233728 INFO os_vif [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:bc:90,bridge_name='br-int',has_traffic_filtering=True,id=d1330295-51bc-4e64-a620-b63a6d8777fb,network=Network(ad69a0f4-0000-474b-9649-72cf1bf9f5c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1330295-51')#033[00m
Nov 29 02:55:10 np0005539552 nova_compute[233724]: 2025-11-29 07:55:10.035 233728 DEBUG oslo_concurrency.lockutils [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:10 np0005539552 nova_compute[233724]: 2025-11-29 07:55:10.035 233728 DEBUG oslo_concurrency.lockutils [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:10 np0005539552 nova_compute[233724]: 2025-11-29 07:55:10.036 233728 DEBUG oslo_concurrency.lockutils [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:10 np0005539552 nova_compute[233724]: 2025-11-29 07:55:10.036 233728 DEBUG nova.compute.manager [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349#033[00m
Nov 29 02:55:10 np0005539552 nova_compute[233724]: 2025-11-29 07:55:10.036 233728 INFO nova.virt.libvirt.driver [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Deleting instance files /var/lib/nova/instances/56f3f72f-7db4-47c8-a4c3-20b2acc58aa9_del#033[00m
Nov 29 02:55:10 np0005539552 nova_compute[233724]: 2025-11-29 07:55:10.037 233728 INFO nova.virt.libvirt.driver [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Deletion of /var/lib/nova/instances/56f3f72f-7db4-47c8-a4c3-20b2acc58aa9_del complete#033[00m
Nov 29 02:55:10 np0005539552 nova_compute[233724]: 2025-11-29 07:55:10.052 233728 DEBUG nova.network.neutron [req-44dca21d-7c0c-459f-869e-918d254b1f34 req-849dcdcc-5f1f-4af7-a579-75a4e74fc5ee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Updated VIF entry in instance network info cache for port d1330295-51bc-4e64-a620-b63a6d8777fb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:55:10 np0005539552 nova_compute[233724]: 2025-11-29 07:55:10.053 233728 DEBUG nova.network.neutron [req-44dca21d-7c0c-459f-869e-918d254b1f34 req-849dcdcc-5f1f-4af7-a579-75a4e74fc5ee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Updating instance_info_cache with network_info: [{"id": "d1330295-51bc-4e64-a620-b63a6d8777fb", "address": "fa:16:3e:c2:bc:90", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1330295-51", "ovs_interfaceid": "d1330295-51bc-4e64-a620-b63a6d8777fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-0.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:55:10 np0005539552 ovn_controller[133798]: 2025-11-29T07:55:10Z|00091|binding|INFO|Claiming lport 5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 for this chassis.
Nov 29 02:55:10 np0005539552 ovn_controller[133798]: 2025-11-29T07:55:10Z|00092|binding|INFO|5d7fa9ca-8f51-4047-a121-6c4534fc5ae6: Claiming fa:16:3e:17:bf:87 10.100.0.5
Nov 29 02:55:10 np0005539552 ovn_controller[133798]: 2025-11-29T07:55:10Z|00093|binding|INFO|Setting lport 5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 up in Southbound
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:10.069 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:bf:87 10.100.0.5'], port_security=['fa:16:3e:17:bf:87 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '7067462a-37a6-458e-b96c-76adcea5fdfa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7a06a21a-ba04-4a14-8d62-c931cbbf124d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1963a097b7694450aa0d7c30b27b38ac', 'neutron:revision_number': '21', 'neutron:security_group_ids': '7cf396e5-2565-40f4-9bc8-f8d0b75eb4c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9eb8ff47-0cf8-4776-a959-1d6d6d7f49c2, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=5d7fa9ca-8f51-4047-a121-6c4534fc5ae6) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:55:10 np0005539552 nova_compute[233724]: 2025-11-29 07:55:10.069 233728 DEBUG oslo_concurrency.lockutils [req-44dca21d-7c0c-459f-869e-918d254b1f34 req-849dcdcc-5f1f-4af7-a579-75a4e74fc5ee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-56f3f72f-7db4-47c8-a4c3-20b2acc58aa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:10.070 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 in datapath 7a06a21a-ba04-4a14-8d62-c931cbbf124d bound to our chassis#033[00m
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:10.073 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7a06a21a-ba04-4a14-8d62-c931cbbf124d#033[00m
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:10.082 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[1fa2725f-567e-4c3e-a287-50f8bb1f90a8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:10.083 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7a06a21a-b1 in ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:10.085 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7a06a21a-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:10.085 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[21a03610-c05f-4cc6-8f8e-5c8181770fce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:10.086 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[cce86a50-6dd3-4a52-a270-c9c657def949]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:10.098 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[96f1267a-b764-438d-8ace-ab01719ecfe9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:10.122 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[477b97e6-a99a-47e4-a1c5-3856d920cdbf]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:10.153 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[c85aa7d6-df80-4735-a847-6eeef96bb7e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:10.157 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[46428c64-553e-4783-b9da-e70c76124dc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:10 np0005539552 NetworkManager[48926]: <info>  [1764402910.1588] manager: (tap7a06a21a-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/53)
Nov 29 02:55:10 np0005539552 systemd-udevd[244597]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:10.186 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[df9bf268-3c78-40c9-9060-542af53f8122]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:10.189 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[05a5bd2a-83bd-4440-9547-97bc143fa826]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:10 np0005539552 NetworkManager[48926]: <info>  [1764402910.2182] device (tap7a06a21a-b0): carrier: link connected
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:10.225 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[4195ecf8-1b5c-4669-866a-802290c4f445]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:10.248 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b88bf4e7-1fdb-45cf-a0d6-d135a6bece4c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7a06a21a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:44:a5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 595590, 'reachable_time': 20535, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 244752, 'error': None, 'target': 'ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:10 np0005539552 nova_compute[233724]: 2025-11-29 07:55:10.267 233728 DEBUG nova.compute.manager [req-dcad4924-3c5a-4785-ad18-4bbe7aa19cda req-69a480e6-bbf0-4fad-b4dd-7cc50f6797c4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Received event network-vif-unplugged-d1330295-51bc-4e64-a620-b63a6d8777fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:55:10 np0005539552 nova_compute[233724]: 2025-11-29 07:55:10.268 233728 DEBUG oslo_concurrency.lockutils [req-dcad4924-3c5a-4785-ad18-4bbe7aa19cda req-69a480e6-bbf0-4fad-b4dd-7cc50f6797c4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:10 np0005539552 nova_compute[233724]: 2025-11-29 07:55:10.268 233728 DEBUG oslo_concurrency.lockutils [req-dcad4924-3c5a-4785-ad18-4bbe7aa19cda req-69a480e6-bbf0-4fad-b4dd-7cc50f6797c4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:10 np0005539552 nova_compute[233724]: 2025-11-29 07:55:10.268 233728 DEBUG oslo_concurrency.lockutils [req-dcad4924-3c5a-4785-ad18-4bbe7aa19cda req-69a480e6-bbf0-4fad-b4dd-7cc50f6797c4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:10 np0005539552 nova_compute[233724]: 2025-11-29 07:55:10.268 233728 DEBUG nova.compute.manager [req-dcad4924-3c5a-4785-ad18-4bbe7aa19cda req-69a480e6-bbf0-4fad-b4dd-7cc50f6797c4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] No waiting events found dispatching network-vif-unplugged-d1330295-51bc-4e64-a620-b63a6d8777fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:55:10 np0005539552 nova_compute[233724]: 2025-11-29 07:55:10.268 233728 DEBUG nova.compute.manager [req-dcad4924-3c5a-4785-ad18-4bbe7aa19cda req-69a480e6-bbf0-4fad-b4dd-7cc50f6797c4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Received event network-vif-unplugged-d1330295-51bc-4e64-a620-b63a6d8777fb for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:10.267 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c916ef96-6f41-4a0a-9e6d-b1bbf03c401c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe69:44a5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 595590, 'tstamp': 595590}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244753, 'error': None, 'target': 'ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:10.289 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[72db97cd-2cbb-4700-ade7-83c0877b5926]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7a06a21a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:44:a5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 595590, 'reachable_time': 20535, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 244754, 'error': None, 'target': 'ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:10.322 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ab3e939a-434f-4de0-8378-e0aebb06e326]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:10.385 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[8e75cd46-6ccc-4c92-8ff4-911891749818]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:10.387 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7a06a21a-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:10.387 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:10.387 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7a06a21a-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:10 np0005539552 nova_compute[233724]: 2025-11-29 07:55:10.389 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:10 np0005539552 NetworkManager[48926]: <info>  [1764402910.3899] manager: (tap7a06a21a-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Nov 29 02:55:10 np0005539552 kernel: tap7a06a21a-b0: entered promiscuous mode
Nov 29 02:55:10 np0005539552 nova_compute[233724]: 2025-11-29 07:55:10.391 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:10.392 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7a06a21a-b0, col_values=(('external_ids', {'iface-id': '2b822f56-587d-4c36-9c9a-d54b62b2616c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:10 np0005539552 nova_compute[233724]: 2025-11-29 07:55:10.393 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:10 np0005539552 ovn_controller[133798]: 2025-11-29T07:55:10Z|00094|binding|INFO|Releasing lport 2b822f56-587d-4c36-9c9a-d54b62b2616c from this chassis (sb_readonly=0)
Nov 29 02:55:10 np0005539552 nova_compute[233724]: 2025-11-29 07:55:10.409 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:10.411 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7a06a21a-ba04-4a14-8d62-c931cbbf124d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7a06a21a-ba04-4a14-8d62-c931cbbf124d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:10.412 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[cf3ff530-a2e0-4a38-ae52-4dbc6c0ccd3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:10.413 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-7a06a21a-ba04-4a14-8d62-c931cbbf124d
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/7a06a21a-ba04-4a14-8d62-c931cbbf124d.pid.haproxy
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 7a06a21a-ba04-4a14-8d62-c931cbbf124d
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 02:55:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:10.414 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d', 'env', 'PROCESS_TAG=haproxy-7a06a21a-ba04-4a14-8d62-c931cbbf124d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7a06a21a-ba04-4a14-8d62-c931cbbf124d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 02:55:10 np0005539552 podman[244788]: 2025-11-29 07:55:10.75980685 +0000 UTC m=+0.043388511 container create aca79a20ddb8a40ca6fc8a6c12a8f11c376eea30a49f8894ec8f9f8a8adde6ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 29 02:55:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:10.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:10 np0005539552 systemd[1]: Started libpod-conmon-aca79a20ddb8a40ca6fc8a6c12a8f11c376eea30a49f8894ec8f9f8a8adde6ab.scope.
Nov 29 02:55:10 np0005539552 systemd[1]: Started libcrun container.
Nov 29 02:55:10 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b6378c4582b7453d21600f9d2846538164f65d7d91cd6916a23281f2e16eadb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:55:10 np0005539552 podman[244788]: 2025-11-29 07:55:10.737204095 +0000 UTC m=+0.020785776 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:55:10 np0005539552 podman[244788]: 2025-11-29 07:55:10.84321846 +0000 UTC m=+0.126800141 container init aca79a20ddb8a40ca6fc8a6c12a8f11c376eea30a49f8894ec8f9f8a8adde6ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Nov 29 02:55:10 np0005539552 podman[244788]: 2025-11-29 07:55:10.848880294 +0000 UTC m=+0.132461955 container start aca79a20ddb8a40ca6fc8a6c12a8f11c376eea30a49f8894ec8f9f8a8adde6ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 02:55:10 np0005539552 neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d[244803]: [NOTICE]   (244807) : New worker (244809) forked
Nov 29 02:55:10 np0005539552 neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d[244803]: [NOTICE]   (244807) : Loading success.
Nov 29 02:55:10 np0005539552 nova_compute[233724]: 2025-11-29 07:55:10.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:55:10 np0005539552 nova_compute[233724]: 2025-11-29 07:55:10.955 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:55:10 np0005539552 nova_compute[233724]: 2025-11-29 07:55:10.955 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:55:10 np0005539552 nova_compute[233724]: 2025-11-29 07:55:10.955 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:55:10 np0005539552 nova_compute[233724]: 2025-11-29 07:55:10.956 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 02:55:10 np0005539552 nova_compute[233724]: 2025-11-29 07:55:10.956 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:55:11 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:11 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:55:11 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1672243692' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:55:11 np0005539552 nova_compute[233724]: 2025-11-29 07:55:11.401 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:55:11 np0005539552 nova_compute[233724]: 2025-11-29 07:55:11.471 233728 INFO nova.compute.manager [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Post operation of migration started
Nov 29 02:55:11 np0005539552 nova_compute[233724]: 2025-11-29 07:55:11.478 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 02:55:11 np0005539552 nova_compute[233724]: 2025-11-29 07:55:11.478 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 02:55:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:11.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:11 np0005539552 nova_compute[233724]: 2025-11-29 07:55:11.615 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 02:55:11 np0005539552 nova_compute[233724]: 2025-11-29 07:55:11.616 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4635MB free_disk=20.69770050048828GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 02:55:11 np0005539552 nova_compute[233724]: 2025-11-29 07:55:11.616 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:55:11 np0005539552 nova_compute[233724]: 2025-11-29 07:55:11.617 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:55:11 np0005539552 nova_compute[233724]: 2025-11-29 07:55:11.663 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Migration for instance 7067462a-37a6-458e-b96c-76adcea5fdfa refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Nov 29 02:55:11 np0005539552 nova_compute[233724]: 2025-11-29 07:55:11.684 233728 INFO nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Updating resource usage from migration afbc71a1-4ace-4109-aba9-8332d00626a1
Nov 29 02:55:11 np0005539552 nova_compute[233724]: 2025-11-29 07:55:11.684 233728 INFO nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Updating resource usage from migration 2ab88ec4-0df8-4e1f-a957-0537069aa961
Nov 29 02:55:11 np0005539552 nova_compute[233724]: 2025-11-29 07:55:11.684 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Starting to track incoming migration 2ab88ec4-0df8-4e1f-a957-0537069aa961 with flavor b4d0f3a6-e3dc-4216-aee8-148280e428cc _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Nov 29 02:55:11 np0005539552 nova_compute[233724]: 2025-11-29 07:55:11.733 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Migration afbc71a1-4ace-4109-aba9-8332d00626a1 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Nov 29 02:55:11 np0005539552 nova_compute[233724]: 2025-11-29 07:55:11.763 233728 WARNING nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 7067462a-37a6-458e-b96c-76adcea5fdfa has been moved to another host compute-0.ctlplane.example.com(compute-0.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}.
Nov 29 02:55:11 np0005539552 nova_compute[233724]: 2025-11-29 07:55:11.763 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 02:55:11 np0005539552 nova_compute[233724]: 2025-11-29 07:55:11.764 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 02:55:11 np0005539552 nova_compute[233724]: 2025-11-29 07:55:11.784 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Refreshing inventories for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 02:55:11 np0005539552 nova_compute[233724]: 2025-11-29 07:55:11.905 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Updating ProviderTree inventory for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 02:55:11 np0005539552 nova_compute[233724]: 2025-11-29 07:55:11.905 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Updating inventory in ProviderTree for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 02:55:11 np0005539552 nova_compute[233724]: 2025-11-29 07:55:11.928 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Refreshing aggregate associations for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 02:55:11 np0005539552 nova_compute[233724]: 2025-11-29 07:55:11.952 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Refreshing trait associations for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371, traits: HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_MMX,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_FDC,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 02:55:12 np0005539552 nova_compute[233724]: 2025-11-29 07:55:12.015 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:55:12 np0005539552 nova_compute[233724]: 2025-11-29 07:55:12.034 233728 DEBUG oslo_concurrency.lockutils [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Acquiring lock "refresh_cache-7067462a-37a6-458e-b96c-76adcea5fdfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:55:12 np0005539552 nova_compute[233724]: 2025-11-29 07:55:12.034 233728 DEBUG oslo_concurrency.lockutils [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Acquired lock "refresh_cache-7067462a-37a6-458e-b96c-76adcea5fdfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:55:12 np0005539552 nova_compute[233724]: 2025-11-29 07:55:12.034 233728 DEBUG nova.network.neutron [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 02:55:12 np0005539552 nova_compute[233724]: 2025-11-29 07:55:12.342 233728 DEBUG nova.compute.manager [req-a0fb9cb8-87e0-47f4-822e-1ed142fb8c73 req-ede9d895-c715-4b18-a5e4-84ed4cb5d2b7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Received event network-vif-plugged-d1330295-51bc-4e64-a620-b63a6d8777fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:55:12 np0005539552 nova_compute[233724]: 2025-11-29 07:55:12.342 233728 DEBUG oslo_concurrency.lockutils [req-a0fb9cb8-87e0-47f4-822e-1ed142fb8c73 req-ede9d895-c715-4b18-a5e4-84ed4cb5d2b7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:55:12 np0005539552 nova_compute[233724]: 2025-11-29 07:55:12.345 233728 DEBUG oslo_concurrency.lockutils [req-a0fb9cb8-87e0-47f4-822e-1ed142fb8c73 req-ede9d895-c715-4b18-a5e4-84ed4cb5d2b7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:55:12 np0005539552 nova_compute[233724]: 2025-11-29 07:55:12.346 233728 DEBUG oslo_concurrency.lockutils [req-a0fb9cb8-87e0-47f4-822e-1ed142fb8c73 req-ede9d895-c715-4b18-a5e4-84ed4cb5d2b7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:55:12 np0005539552 nova_compute[233724]: 2025-11-29 07:55:12.346 233728 DEBUG nova.compute.manager [req-a0fb9cb8-87e0-47f4-822e-1ed142fb8c73 req-ede9d895-c715-4b18-a5e4-84ed4cb5d2b7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] No waiting events found dispatching network-vif-plugged-d1330295-51bc-4e64-a620-b63a6d8777fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 02:55:12 np0005539552 nova_compute[233724]: 2025-11-29 07:55:12.346 233728 WARNING nova.compute.manager [req-a0fb9cb8-87e0-47f4-822e-1ed142fb8c73 req-ede9d895-c715-4b18-a5e4-84ed4cb5d2b7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Received unexpected event network-vif-plugged-d1330295-51bc-4e64-a620-b63a6d8777fb for instance with vm_state active and task_state migrating.
Nov 29 02:55:12 np0005539552 nova_compute[233724]: 2025-11-29 07:55:12.347 233728 DEBUG nova.compute.manager [req-a0fb9cb8-87e0-47f4-822e-1ed142fb8c73 req-ede9d895-c715-4b18-a5e4-84ed4cb5d2b7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Received event network-vif-plugged-d1330295-51bc-4e64-a620-b63a6d8777fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:55:12 np0005539552 nova_compute[233724]: 2025-11-29 07:55:12.347 233728 DEBUG oslo_concurrency.lockutils [req-a0fb9cb8-87e0-47f4-822e-1ed142fb8c73 req-ede9d895-c715-4b18-a5e4-84ed4cb5d2b7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:55:12 np0005539552 nova_compute[233724]: 2025-11-29 07:55:12.347 233728 DEBUG oslo_concurrency.lockutils [req-a0fb9cb8-87e0-47f4-822e-1ed142fb8c73 req-ede9d895-c715-4b18-a5e4-84ed4cb5d2b7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:55:12 np0005539552 nova_compute[233724]: 2025-11-29 07:55:12.348 233728 DEBUG oslo_concurrency.lockutils [req-a0fb9cb8-87e0-47f4-822e-1ed142fb8c73 req-ede9d895-c715-4b18-a5e4-84ed4cb5d2b7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:55:12 np0005539552 nova_compute[233724]: 2025-11-29 07:55:12.348 233728 DEBUG nova.compute.manager [req-a0fb9cb8-87e0-47f4-822e-1ed142fb8c73 req-ede9d895-c715-4b18-a5e4-84ed4cb5d2b7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] No waiting events found dispatching network-vif-plugged-d1330295-51bc-4e64-a620-b63a6d8777fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 02:55:12 np0005539552 nova_compute[233724]: 2025-11-29 07:55:12.348 233728 WARNING nova.compute.manager [req-a0fb9cb8-87e0-47f4-822e-1ed142fb8c73 req-ede9d895-c715-4b18-a5e4-84ed4cb5d2b7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Received unexpected event network-vif-plugged-d1330295-51bc-4e64-a620-b63a6d8777fb for instance with vm_state active and task_state migrating.
Nov 29 02:55:12 np0005539552 nova_compute[233724]: 2025-11-29 07:55:12.349 233728 DEBUG nova.compute.manager [req-a0fb9cb8-87e0-47f4-822e-1ed142fb8c73 req-ede9d895-c715-4b18-a5e4-84ed4cb5d2b7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Received event network-vif-plugged-d1330295-51bc-4e64-a620-b63a6d8777fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:55:12 np0005539552 nova_compute[233724]: 2025-11-29 07:55:12.349 233728 DEBUG oslo_concurrency.lockutils [req-a0fb9cb8-87e0-47f4-822e-1ed142fb8c73 req-ede9d895-c715-4b18-a5e4-84ed4cb5d2b7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:55:12 np0005539552 nova_compute[233724]: 2025-11-29 07:55:12.349 233728 DEBUG oslo_concurrency.lockutils [req-a0fb9cb8-87e0-47f4-822e-1ed142fb8c73 req-ede9d895-c715-4b18-a5e4-84ed4cb5d2b7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:55:12 np0005539552 nova_compute[233724]: 2025-11-29 07:55:12.349 233728 DEBUG oslo_concurrency.lockutils [req-a0fb9cb8-87e0-47f4-822e-1ed142fb8c73 req-ede9d895-c715-4b18-a5e4-84ed4cb5d2b7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:55:12 np0005539552 nova_compute[233724]: 2025-11-29 07:55:12.350 233728 DEBUG nova.compute.manager [req-a0fb9cb8-87e0-47f4-822e-1ed142fb8c73 req-ede9d895-c715-4b18-a5e4-84ed4cb5d2b7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] No waiting events found dispatching network-vif-plugged-d1330295-51bc-4e64-a620-b63a6d8777fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 02:55:12 np0005539552 nova_compute[233724]: 2025-11-29 07:55:12.350 233728 WARNING nova.compute.manager [req-a0fb9cb8-87e0-47f4-822e-1ed142fb8c73 req-ede9d895-c715-4b18-a5e4-84ed4cb5d2b7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Received unexpected event network-vif-plugged-d1330295-51bc-4e64-a620-b63a6d8777fb for instance with vm_state active and task_state migrating.
Nov 29 02:55:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:55:12 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3757480438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:55:12 np0005539552 nova_compute[233724]: 2025-11-29 07:55:12.443 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:12 np0005539552 nova_compute[233724]: 2025-11-29 07:55:12.448 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:55:12 np0005539552 nova_compute[233724]: 2025-11-29 07:55:12.464 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:55:12 np0005539552 nova_compute[233724]: 2025-11-29 07:55:12.485 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:55:12 np0005539552 nova_compute[233724]: 2025-11-29 07:55:12.485 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.869s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:12.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:13.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:14 np0005539552 nova_compute[233724]: 2025-11-29 07:55:14.357 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:14.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:15 np0005539552 nova_compute[233724]: 2025-11-29 07:55:15.066 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:15.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:15 np0005539552 nova_compute[233724]: 2025-11-29 07:55:15.783 233728 DEBUG nova.network.neutron [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Updating instance_info_cache with network_info: [{"id": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "address": "fa:16:3e:17:bf:87", "network": {"id": "7a06a21a-ba04-4a14-8d62-c931cbbf124d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-132947190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1963a097b7694450aa0d7c30b27b38ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d7fa9ca-8f", "ovs_interfaceid": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:55:15 np0005539552 nova_compute[233724]: 2025-11-29 07:55:15.806 233728 DEBUG oslo_concurrency.lockutils [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Releasing lock "refresh_cache-7067462a-37a6-458e-b96c-76adcea5fdfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:55:15 np0005539552 nova_compute[233724]: 2025-11-29 07:55:15.827 233728 DEBUG oslo_concurrency.lockutils [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:15 np0005539552 nova_compute[233724]: 2025-11-29 07:55:15.828 233728 DEBUG oslo_concurrency.lockutils [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:15 np0005539552 nova_compute[233724]: 2025-11-29 07:55:15.828 233728 DEBUG oslo_concurrency.lockutils [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:15 np0005539552 nova_compute[233724]: 2025-11-29 07:55:15.836 233728 INFO nova.virt.libvirt.driver [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m
Nov 29 02:55:15 np0005539552 virtqemud[233098]: Domain id=6 name='instance-00000014' uuid=7067462a-37a6-458e-b96c-76adcea5fdfa is tainted: custom-monitor
Nov 29 02:55:16 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:16 np0005539552 nova_compute[233724]: 2025-11-29 07:55:16.778 233728 DEBUG oslo_concurrency.lockutils [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Acquiring lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:16 np0005539552 nova_compute[233724]: 2025-11-29 07:55:16.779 233728 DEBUG oslo_concurrency.lockutils [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:16 np0005539552 nova_compute[233724]: 2025-11-29 07:55:16.780 233728 DEBUG oslo_concurrency.lockutils [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:16.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:16 np0005539552 nova_compute[233724]: 2025-11-29 07:55:16.800 233728 DEBUG oslo_concurrency.lockutils [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:16 np0005539552 nova_compute[233724]: 2025-11-29 07:55:16.800 233728 DEBUG oslo_concurrency.lockutils [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:16 np0005539552 nova_compute[233724]: 2025-11-29 07:55:16.801 233728 DEBUG oslo_concurrency.lockutils [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:16 np0005539552 nova_compute[233724]: 2025-11-29 07:55:16.801 233728 DEBUG nova.compute.resource_tracker [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:55:16 np0005539552 nova_compute[233724]: 2025-11-29 07:55:16.801 233728 DEBUG oslo_concurrency.processutils [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:16 np0005539552 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 02:55:16 np0005539552 nova_compute[233724]: 2025-11-29 07:55:16.847 233728 INFO nova.virt.libvirt.driver [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Sending announce-self command to QEMU monitor. Attempt 2 of 3#033[00m
Nov 29 02:55:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:55:17 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2499876664' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:55:17 np0005539552 nova_compute[233724]: 2025-11-29 07:55:17.246 233728 DEBUG oslo_concurrency.processutils [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:17 np0005539552 nova_compute[233724]: 2025-11-29 07:55:17.307 233728 DEBUG nova.virt.libvirt.driver [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:55:17 np0005539552 nova_compute[233724]: 2025-11-29 07:55:17.307 233728 DEBUG nova.virt.libvirt.driver [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:55:17 np0005539552 nova_compute[233724]: 2025-11-29 07:55:17.458 233728 WARNING nova.virt.libvirt.driver [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:55:17 np0005539552 nova_compute[233724]: 2025-11-29 07:55:17.459 233728 DEBUG nova.compute.resource_tracker [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4638MB free_disk=20.694210052490234GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:55:17 np0005539552 nova_compute[233724]: 2025-11-29 07:55:17.459 233728 DEBUG oslo_concurrency.lockutils [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:17 np0005539552 nova_compute[233724]: 2025-11-29 07:55:17.459 233728 DEBUG oslo_concurrency.lockutils [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:17 np0005539552 nova_compute[233724]: 2025-11-29 07:55:17.497 233728 DEBUG nova.compute.resource_tracker [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Migration for instance 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Nov 29 02:55:17 np0005539552 nova_compute[233724]: 2025-11-29 07:55:17.497 233728 DEBUG nova.compute.resource_tracker [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Migration for instance 7067462a-37a6-458e-b96c-76adcea5fdfa refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Nov 29 02:55:17 np0005539552 nova_compute[233724]: 2025-11-29 07:55:17.514 233728 DEBUG nova.compute.resource_tracker [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491#033[00m
Nov 29 02:55:17 np0005539552 nova_compute[233724]: 2025-11-29 07:55:17.538 233728 INFO nova.compute.resource_tracker [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Updating resource usage from migration 2ab88ec4-0df8-4e1f-a957-0537069aa961#033[00m
Nov 29 02:55:17 np0005539552 nova_compute[233724]: 2025-11-29 07:55:17.538 233728 DEBUG nova.compute.resource_tracker [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Starting to track incoming migration 2ab88ec4-0df8-4e1f-a957-0537069aa961 with flavor b4d0f3a6-e3dc-4216-aee8-148280e428cc _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Nov 29 02:55:17 np0005539552 nova_compute[233724]: 2025-11-29 07:55:17.562 233728 DEBUG nova.compute.resource_tracker [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Migration afbc71a1-4ace-4109-aba9-8332d00626a1 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Nov 29 02:55:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:17.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:17 np0005539552 nova_compute[233724]: 2025-11-29 07:55:17.581 233728 WARNING nova.compute.resource_tracker [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Instance 7067462a-37a6-458e-b96c-76adcea5fdfa has been moved to another host compute-0.ctlplane.example.com(compute-0.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}.#033[00m
Nov 29 02:55:17 np0005539552 nova_compute[233724]: 2025-11-29 07:55:17.581 233728 DEBUG nova.compute.resource_tracker [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:55:17 np0005539552 nova_compute[233724]: 2025-11-29 07:55:17.582 233728 DEBUG nova.compute.resource_tracker [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:55:17 np0005539552 nova_compute[233724]: 2025-11-29 07:55:17.691 233728 DEBUG oslo_concurrency.processutils [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:17 np0005539552 nova_compute[233724]: 2025-11-29 07:55:17.852 233728 INFO nova.virt.libvirt.driver [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Sending announce-self command to QEMU monitor. Attempt 3 of 3#033[00m
Nov 29 02:55:17 np0005539552 nova_compute[233724]: 2025-11-29 07:55:17.858 233728 DEBUG nova.compute.manager [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:55:17 np0005539552 nova_compute[233724]: 2025-11-29 07:55:17.887 233728 DEBUG nova.objects.instance [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 29 02:55:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:55:18 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1718832973' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:55:18 np0005539552 nova_compute[233724]: 2025-11-29 07:55:18.097 233728 DEBUG oslo_concurrency.processutils [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:18 np0005539552 nova_compute[233724]: 2025-11-29 07:55:18.102 233728 DEBUG nova.compute.provider_tree [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:55:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:18.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:19 np0005539552 nova_compute[233724]: 2025-11-29 07:55:19.359 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:19 np0005539552 nova_compute[233724]: 2025-11-29 07:55:19.496 233728 DEBUG nova.scheduler.client.report [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:55:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:55:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:19.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:55:19 np0005539552 nova_compute[233724]: 2025-11-29 07:55:19.819 233728 DEBUG nova.compute.resource_tracker [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:55:19 np0005539552 nova_compute[233724]: 2025-11-29 07:55:19.819 233728 DEBUG oslo_concurrency.lockutils [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.360s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:19 np0005539552 nova_compute[233724]: 2025-11-29 07:55:19.824 233728 INFO nova.compute.manager [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Migrating instance to compute-0.ctlplane.example.com finished successfully.#033[00m
Nov 29 02:55:20 np0005539552 nova_compute[233724]: 2025-11-29 07:55:20.032 233728 INFO nova.scheduler.client.report [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Deleted allocation for migration afbc71a1-4ace-4109-aba9-8332d00626a1#033[00m
Nov 29 02:55:20 np0005539552 nova_compute[233724]: 2025-11-29 07:55:20.032 233728 DEBUG nova.virt.libvirt.driver [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662#033[00m
Nov 29 02:55:20 np0005539552 nova_compute[233724]: 2025-11-29 07:55:20.120 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:20.603 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:20.604 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:20.605 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:20.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:21 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:21 np0005539552 nova_compute[233724]: 2025-11-29 07:55:21.423 233728 DEBUG nova.virt.libvirt.driver [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Creating tmpfile /var/lib/nova/instances/tmp5ro22k8w to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m
Nov 29 02:55:21 np0005539552 nova_compute[233724]: 2025-11-29 07:55:21.426 233728 DEBUG nova.compute.manager [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp5ro22k8w',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m
Nov 29 02:55:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:21.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:22 np0005539552 nova_compute[233724]: 2025-11-29 07:55:22.580 233728 DEBUG oslo_concurrency.lockutils [None req-e707d336-1940-49e2-8aec-25eea0c38904 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Acquiring lock "7067462a-37a6-458e-b96c-76adcea5fdfa" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:22 np0005539552 nova_compute[233724]: 2025-11-29 07:55:22.581 233728 DEBUG oslo_concurrency.lockutils [None req-e707d336-1940-49e2-8aec-25eea0c38904 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:22 np0005539552 nova_compute[233724]: 2025-11-29 07:55:22.581 233728 DEBUG oslo_concurrency.lockutils [None req-e707d336-1940-49e2-8aec-25eea0c38904 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Acquiring lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:22 np0005539552 nova_compute[233724]: 2025-11-29 07:55:22.582 233728 DEBUG oslo_concurrency.lockutils [None req-e707d336-1940-49e2-8aec-25eea0c38904 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:22 np0005539552 nova_compute[233724]: 2025-11-29 07:55:22.582 233728 DEBUG oslo_concurrency.lockutils [None req-e707d336-1940-49e2-8aec-25eea0c38904 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:22 np0005539552 nova_compute[233724]: 2025-11-29 07:55:22.583 233728 INFO nova.compute.manager [None req-e707d336-1940-49e2-8aec-25eea0c38904 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Terminating instance#033[00m
Nov 29 02:55:22 np0005539552 nova_compute[233724]: 2025-11-29 07:55:22.585 233728 DEBUG nova.compute.manager [None req-e707d336-1940-49e2-8aec-25eea0c38904 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:55:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:22.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:23 np0005539552 kernel: tap5d7fa9ca-8f (unregistering): left promiscuous mode
Nov 29 02:55:23 np0005539552 NetworkManager[48926]: <info>  [1764402923.2019] device (tap5d7fa9ca-8f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:55:23 np0005539552 nova_compute[233724]: 2025-11-29 07:55:23.206 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:23 np0005539552 ovn_controller[133798]: 2025-11-29T07:55:23Z|00095|binding|INFO|Releasing lport 5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 from this chassis (sb_readonly=0)
Nov 29 02:55:23 np0005539552 ovn_controller[133798]: 2025-11-29T07:55:23Z|00096|binding|INFO|Setting lport 5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 down in Southbound
Nov 29 02:55:23 np0005539552 ovn_controller[133798]: 2025-11-29T07:55:23Z|00097|binding|INFO|Removing iface tap5d7fa9ca-8f ovn-installed in OVS
Nov 29 02:55:23 np0005539552 nova_compute[233724]: 2025-11-29 07:55:23.210 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:23 np0005539552 nova_compute[233724]: 2025-11-29 07:55:23.254 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:23 np0005539552 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000014.scope: Deactivated successfully.
Nov 29 02:55:23 np0005539552 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000014.scope: Consumed 1.666s CPU time.
Nov 29 02:55:23 np0005539552 systemd-machined[196379]: Machine qemu-6-instance-00000014 terminated.
Nov 29 02:55:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:23.277 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:bf:87 10.100.0.5'], port_security=['fa:16:3e:17:bf:87 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '7067462a-37a6-458e-b96c-76adcea5fdfa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7a06a21a-ba04-4a14-8d62-c931cbbf124d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1963a097b7694450aa0d7c30b27b38ac', 'neutron:revision_number': '23', 'neutron:security_group_ids': '7cf396e5-2565-40f4-9bc8-f8d0b75eb4c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9eb8ff47-0cf8-4776-a959-1d6d6d7f49c2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=5d7fa9ca-8f51-4047-a121-6c4534fc5ae6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:55:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:23.279 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 in datapath 7a06a21a-ba04-4a14-8d62-c931cbbf124d unbound from our chassis#033[00m
Nov 29 02:55:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:23.280 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7a06a21a-ba04-4a14-8d62-c931cbbf124d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:55:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:23.281 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[37ff01b4-0228-4a27-a53c-b2caa62f21d7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:23.282 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d namespace which is not needed anymore#033[00m
Nov 29 02:55:23 np0005539552 nova_compute[233724]: 2025-11-29 07:55:23.378 233728 DEBUG nova.compute.manager [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp5ro22k8w',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='56f3f72f-7db4-47c8-a4c3-20b2acc58aa9',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m
Nov 29 02:55:23 np0005539552 neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d[244803]: [NOTICE]   (244807) : haproxy version is 2.8.14-c23fe91
Nov 29 02:55:23 np0005539552 neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d[244803]: [NOTICE]   (244807) : path to executable is /usr/sbin/haproxy
Nov 29 02:55:23 np0005539552 neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d[244803]: [WARNING]  (244807) : Exiting Master process...
Nov 29 02:55:23 np0005539552 neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d[244803]: [WARNING]  (244807) : Exiting Master process...
Nov 29 02:55:23 np0005539552 nova_compute[233724]: 2025-11-29 07:55:23.412 233728 DEBUG oslo_concurrency.lockutils [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Acquiring lock "refresh_cache-56f3f72f-7db4-47c8-a4c3-20b2acc58aa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:55:23 np0005539552 nova_compute[233724]: 2025-11-29 07:55:23.413 233728 DEBUG oslo_concurrency.lockutils [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Acquired lock "refresh_cache-56f3f72f-7db4-47c8-a4c3-20b2acc58aa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:55:23 np0005539552 nova_compute[233724]: 2025-11-29 07:55:23.413 233728 DEBUG nova.network.neutron [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:55:23 np0005539552 neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d[244803]: [ALERT]    (244807) : Current worker (244809) exited with code 143 (Terminated)
Nov 29 02:55:23 np0005539552 neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d[244803]: [WARNING]  (244807) : All workers exited. Exiting... (0)
Nov 29 02:55:23 np0005539552 systemd[1]: libpod-aca79a20ddb8a40ca6fc8a6c12a8f11c376eea30a49f8894ec8f9f8a8adde6ab.scope: Deactivated successfully.
Nov 29 02:55:23 np0005539552 nova_compute[233724]: 2025-11-29 07:55:23.420 233728 INFO nova.virt.libvirt.driver [-] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Instance destroyed successfully.#033[00m
Nov 29 02:55:23 np0005539552 nova_compute[233724]: 2025-11-29 07:55:23.420 233728 DEBUG nova.objects.instance [None req-e707d336-1940-49e2-8aec-25eea0c38904 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Lazy-loading 'resources' on Instance uuid 7067462a-37a6-458e-b96c-76adcea5fdfa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:55:23 np0005539552 podman[244941]: 2025-11-29 07:55:23.426207363 +0000 UTC m=+0.051127952 container died aca79a20ddb8a40ca6fc8a6c12a8f11c376eea30a49f8894ec8f9f8a8adde6ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:55:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:23.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:23 np0005539552 nova_compute[233724]: 2025-11-29 07:55:23.607 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402908.6062577, 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:55:23 np0005539552 nova_compute[233724]: 2025-11-29 07:55:23.608 233728 INFO nova.compute.manager [-] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:55:23 np0005539552 nova_compute[233724]: 2025-11-29 07:55:23.629 233728 DEBUG nova.virt.libvirt.vif [None req-e707d336-1940-49e2-8aec-25eea0c38904 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T07:54:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1301305546',display_name='tempest-LiveMigrationTest-server-1301305546',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1301305546',id=20,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:54:31Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1963a097b7694450aa0d7c30b27b38ac',ramdisk_id='',reservation_id='r-7mt4lwbv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='2',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-814240379',owner_user_name='tempest-LiveMigrationTest-814240379-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:55:17Z,user_data=None,user_id='85f5548e01234fe4ae9b88e998e943f8',uuid=7067462a-37a6-458e-b96c-76adcea5fdfa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "address": "fa:16:3e:17:bf:87", "network": {"id": "7a06a21a-ba04-4a14-8d62-c931cbbf124d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-132947190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1963a097b7694450aa0d7c30b27b38ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d7fa9ca-8f", "ovs_interfaceid": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:55:23 np0005539552 nova_compute[233724]: 2025-11-29 07:55:23.630 233728 DEBUG nova.network.os_vif_util [None req-e707d336-1940-49e2-8aec-25eea0c38904 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Converting VIF {"id": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "address": "fa:16:3e:17:bf:87", "network": {"id": "7a06a21a-ba04-4a14-8d62-c931cbbf124d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-132947190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1963a097b7694450aa0d7c30b27b38ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d7fa9ca-8f", "ovs_interfaceid": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:55:23 np0005539552 nova_compute[233724]: 2025-11-29 07:55:23.631 233728 DEBUG nova.network.os_vif_util [None req-e707d336-1940-49e2-8aec-25eea0c38904 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:17:bf:87,bridge_name='br-int',has_traffic_filtering=True,id=5d7fa9ca-8f51-4047-a121-6c4534fc5ae6,network=Network(7a06a21a-ba04-4a14-8d62-c931cbbf124d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d7fa9ca-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:55:23 np0005539552 nova_compute[233724]: 2025-11-29 07:55:23.632 233728 DEBUG os_vif [None req-e707d336-1940-49e2-8aec-25eea0c38904 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:17:bf:87,bridge_name='br-int',has_traffic_filtering=True,id=5d7fa9ca-8f51-4047-a121-6c4534fc5ae6,network=Network(7a06a21a-ba04-4a14-8d62-c931cbbf124d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d7fa9ca-8f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:55:23 np0005539552 nova_compute[233724]: 2025-11-29 07:55:23.655 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:23 np0005539552 nova_compute[233724]: 2025-11-29 07:55:23.656 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d7fa9ca-8f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:23 np0005539552 nova_compute[233724]: 2025-11-29 07:55:23.665 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:23 np0005539552 nova_compute[233724]: 2025-11-29 07:55:23.669 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:55:23 np0005539552 nova_compute[233724]: 2025-11-29 07:55:23.672 233728 INFO os_vif [None req-e707d336-1940-49e2-8aec-25eea0c38904 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:17:bf:87,bridge_name='br-int',has_traffic_filtering=True,id=5d7fa9ca-8f51-4047-a121-6c4534fc5ae6,network=Network(7a06a21a-ba04-4a14-8d62-c931cbbf124d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d7fa9ca-8f')#033[00m
Nov 29 02:55:23 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-aca79a20ddb8a40ca6fc8a6c12a8f11c376eea30a49f8894ec8f9f8a8adde6ab-userdata-shm.mount: Deactivated successfully.
Nov 29 02:55:23 np0005539552 systemd[1]: var-lib-containers-storage-overlay-4b6378c4582b7453d21600f9d2846538164f65d7d91cd6916a23281f2e16eadb-merged.mount: Deactivated successfully.
Nov 29 02:55:24 np0005539552 podman[244941]: 2025-11-29 07:55:24.056331716 +0000 UTC m=+0.681252315 container cleanup aca79a20ddb8a40ca6fc8a6c12a8f11c376eea30a49f8894ec8f9f8a8adde6ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 02:55:24 np0005539552 systemd[1]: libpod-conmon-aca79a20ddb8a40ca6fc8a6c12a8f11c376eea30a49f8894ec8f9f8a8adde6ab.scope: Deactivated successfully.
Nov 29 02:55:24 np0005539552 nova_compute[233724]: 2025-11-29 07:55:24.087 233728 DEBUG nova.compute.manager [None req-8a851729-b5ce-4770-a7b2-ebff20a9d563 - - - - - -] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:55:24 np0005539552 podman[245000]: 2025-11-29 07:55:24.120802879 +0000 UTC m=+0.040423880 container remove aca79a20ddb8a40ca6fc8a6c12a8f11c376eea30a49f8894ec8f9f8a8adde6ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 02:55:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:24.126 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[1b110e6d-d3ad-424d-855a-572df8627ad0]: (4, ('Sat Nov 29 07:55:23 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d (aca79a20ddb8a40ca6fc8a6c12a8f11c376eea30a49f8894ec8f9f8a8adde6ab)\naca79a20ddb8a40ca6fc8a6c12a8f11c376eea30a49f8894ec8f9f8a8adde6ab\nSat Nov 29 07:55:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d (aca79a20ddb8a40ca6fc8a6c12a8f11c376eea30a49f8894ec8f9f8a8adde6ab)\naca79a20ddb8a40ca6fc8a6c12a8f11c376eea30a49f8894ec8f9f8a8adde6ab\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:24.128 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[79c6e048-0273-4b23-9a5c-35df8b7e69f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:24.129 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7a06a21a-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:24 np0005539552 nova_compute[233724]: 2025-11-29 07:55:24.130 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:24 np0005539552 nova_compute[233724]: 2025-11-29 07:55:24.132 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:24 np0005539552 kernel: tap7a06a21a-b0: left promiscuous mode
Nov 29 02:55:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:24.135 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[43744c5e-21ec-42f8-a29c-683c406f1339]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:24 np0005539552 nova_compute[233724]: 2025-11-29 07:55:24.145 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:24.150 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[188289f5-86f7-441e-89bd-c9f690ee7bc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:24.150 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[49984270-5029-4f83-a70e-839b1ec094b7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:24.163 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ed09088f-2ee4-485a-a1ae-02ed76c7b45d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 595583, 'reachable_time': 18592, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245015, 'error': None, 'target': 'ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:24.165 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 02:55:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:24.165 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[41622d59-16be-43b7-a25b-5b889ad28258]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:24 np0005539552 systemd[1]: run-netns-ovnmeta\x2d7a06a21a\x2dba04\x2d4a14\x2d8d62\x2dc931cbbf124d.mount: Deactivated successfully.
Nov 29 02:55:24 np0005539552 nova_compute[233724]: 2025-11-29 07:55:24.361 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:24.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:25.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:25 np0005539552 nova_compute[233724]: 2025-11-29 07:55:25.688 233728 INFO nova.virt.libvirt.driver [None req-e707d336-1940-49e2-8aec-25eea0c38904 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Deleting instance files /var/lib/nova/instances/7067462a-37a6-458e-b96c-76adcea5fdfa_del#033[00m
Nov 29 02:55:25 np0005539552 nova_compute[233724]: 2025-11-29 07:55:25.689 233728 INFO nova.virt.libvirt.driver [None req-e707d336-1940-49e2-8aec-25eea0c38904 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Deletion of /var/lib/nova/instances/7067462a-37a6-458e-b96c-76adcea5fdfa_del complete#033[00m
Nov 29 02:55:26 np0005539552 nova_compute[233724]: 2025-11-29 07:55:26.055 233728 INFO nova.compute.manager [None req-e707d336-1940-49e2-8aec-25eea0c38904 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Took 3.47 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:55:26 np0005539552 nova_compute[233724]: 2025-11-29 07:55:26.056 233728 DEBUG oslo.service.loopingcall [None req-e707d336-1940-49e2-8aec-25eea0c38904 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:55:26 np0005539552 nova_compute[233724]: 2025-11-29 07:55:26.056 233728 DEBUG nova.compute.manager [-] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:55:26 np0005539552 nova_compute[233724]: 2025-11-29 07:55:26.057 233728 DEBUG nova.network.neutron [-] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:55:26 np0005539552 nova_compute[233724]: 2025-11-29 07:55:26.289 233728 DEBUG nova.compute.manager [req-8dd214fd-ebe2-4440-ba74-23c53b2389fb req-bbec4fd6-b2cc-4a62-8c6a-a5e98478c620 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Received event network-vif-unplugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:55:26 np0005539552 nova_compute[233724]: 2025-11-29 07:55:26.289 233728 DEBUG oslo_concurrency.lockutils [req-8dd214fd-ebe2-4440-ba74-23c53b2389fb req-bbec4fd6-b2cc-4a62-8c6a-a5e98478c620 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:26 np0005539552 nova_compute[233724]: 2025-11-29 07:55:26.289 233728 DEBUG oslo_concurrency.lockutils [req-8dd214fd-ebe2-4440-ba74-23c53b2389fb req-bbec4fd6-b2cc-4a62-8c6a-a5e98478c620 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:26 np0005539552 nova_compute[233724]: 2025-11-29 07:55:26.290 233728 DEBUG oslo_concurrency.lockutils [req-8dd214fd-ebe2-4440-ba74-23c53b2389fb req-bbec4fd6-b2cc-4a62-8c6a-a5e98478c620 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:26 np0005539552 nova_compute[233724]: 2025-11-29 07:55:26.290 233728 DEBUG nova.compute.manager [req-8dd214fd-ebe2-4440-ba74-23c53b2389fb req-bbec4fd6-b2cc-4a62-8c6a-a5e98478c620 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] No waiting events found dispatching network-vif-unplugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:55:26 np0005539552 nova_compute[233724]: 2025-11-29 07:55:26.290 233728 DEBUG nova.compute.manager [req-8dd214fd-ebe2-4440-ba74-23c53b2389fb req-bbec4fd6-b2cc-4a62-8c6a-a5e98478c620 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Received event network-vif-unplugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:55:26 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:26.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:27 np0005539552 nova_compute[233724]: 2025-11-29 07:55:27.015 233728 DEBUG nova.network.neutron [-] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:55:27 np0005539552 nova_compute[233724]: 2025-11-29 07:55:27.030 233728 INFO nova.compute.manager [-] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Took 0.97 seconds to deallocate network for instance.#033[00m
Nov 29 02:55:27 np0005539552 nova_compute[233724]: 2025-11-29 07:55:27.050 233728 DEBUG nova.network.neutron [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Updating instance_info_cache with network_info: [{"id": "d1330295-51bc-4e64-a620-b63a6d8777fb", "address": "fa:16:3e:c2:bc:90", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1330295-51", "ovs_interfaceid": "d1330295-51bc-4e64-a620-b63a6d8777fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:55:27 np0005539552 nova_compute[233724]: 2025-11-29 07:55:27.085 233728 DEBUG oslo_concurrency.lockutils [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Releasing lock "refresh_cache-56f3f72f-7db4-47c8-a4c3-20b2acc58aa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:55:27 np0005539552 nova_compute[233724]: 2025-11-29 07:55:27.088 233728 DEBUG os_brick.utils [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 02:55:27 np0005539552 nova_compute[233724]: 2025-11-29 07:55:27.089 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:27 np0005539552 nova_compute[233724]: 2025-11-29 07:55:27.100 243418 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:27 np0005539552 nova_compute[233724]: 2025-11-29 07:55:27.101 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[afc29275-a609-477c-8af1-625c1895b37b]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:27 np0005539552 nova_compute[233724]: 2025-11-29 07:55:27.103 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:27 np0005539552 nova_compute[233724]: 2025-11-29 07:55:27.112 243418 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:27 np0005539552 nova_compute[233724]: 2025-11-29 07:55:27.112 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[722f3aa5-817e-499c-8ec9-faa0839c9757]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0e73df9328', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:27 np0005539552 nova_compute[233724]: 2025-11-29 07:55:27.114 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:27 np0005539552 nova_compute[233724]: 2025-11-29 07:55:27.125 243418 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:27 np0005539552 nova_compute[233724]: 2025-11-29 07:55:27.126 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[6bb7f5fb-61f1-4d67-96a4-e24468f179b1]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:27 np0005539552 nova_compute[233724]: 2025-11-29 07:55:27.127 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[e69afb82-2f3c-4b34-a3b2-e9588944a14b]: (4, '6fbde64a-d978-4f1a-a29d-a77e1f5a1987') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:27 np0005539552 nova_compute[233724]: 2025-11-29 07:55:27.128 233728 DEBUG oslo_concurrency.processutils [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:27 np0005539552 nova_compute[233724]: 2025-11-29 07:55:27.146 233728 DEBUG oslo_concurrency.processutils [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] CMD "nvme version" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:27 np0005539552 nova_compute[233724]: 2025-11-29 07:55:27.151 233728 DEBUG nova.compute.manager [req-78b3b344-8db9-4977-9c3b-db28456386bd req-7b2d2ecb-912d-4aff-a999-2816a7f292a2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Received event network-vif-deleted-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:55:27 np0005539552 nova_compute[233724]: 2025-11-29 07:55:27.152 233728 DEBUG os_brick.initiator.connectors.lightos [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 02:55:27 np0005539552 nova_compute[233724]: 2025-11-29 07:55:27.152 233728 DEBUG os_brick.initiator.connectors.lightos [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 02:55:27 np0005539552 nova_compute[233724]: 2025-11-29 07:55:27.152 233728 DEBUG os_brick.initiator.connectors.lightos [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 02:55:27 np0005539552 nova_compute[233724]: 2025-11-29 07:55:27.152 233728 DEBUG os_brick.utils [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] <== get_connector_properties: return (64ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0e73df9328', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '6fbde64a-d978-4f1a-a29d-a77e1f5a1987', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 02:55:27 np0005539552 nova_compute[233724]: 2025-11-29 07:55:27.260 233728 INFO nova.compute.manager [None req-e707d336-1940-49e2-8aec-25eea0c38904 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Took 0.23 seconds to detach 1 volumes for instance.#033[00m
Nov 29 02:55:27 np0005539552 nova_compute[233724]: 2025-11-29 07:55:27.262 233728 DEBUG nova.compute.manager [None req-e707d336-1940-49e2-8aec-25eea0c38904 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Deleting volume: 3ef07b78-0409-49cf-a941-8a19b02dd939 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Nov 29 02:55:27 np0005539552 nova_compute[233724]: 2025-11-29 07:55:27.476 233728 DEBUG oslo_concurrency.lockutils [None req-e707d336-1940-49e2-8aec-25eea0c38904 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:27 np0005539552 nova_compute[233724]: 2025-11-29 07:55:27.477 233728 DEBUG oslo_concurrency.lockutils [None req-e707d336-1940-49e2-8aec-25eea0c38904 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:27 np0005539552 nova_compute[233724]: 2025-11-29 07:55:27.484 233728 DEBUG oslo_concurrency.lockutils [None req-e707d336-1940-49e2-8aec-25eea0c38904 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:27 np0005539552 nova_compute[233724]: 2025-11-29 07:55:27.509 233728 INFO nova.scheduler.client.report [None req-e707d336-1940-49e2-8aec-25eea0c38904 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Deleted allocations for instance 7067462a-37a6-458e-b96c-76adcea5fdfa#033[00m
Nov 29 02:55:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:27.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:27 np0005539552 nova_compute[233724]: 2025-11-29 07:55:27.582 233728 DEBUG oslo_concurrency.lockutils [None req-e707d336-1940-49e2-8aec-25eea0c38904 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:28 np0005539552 nova_compute[233724]: 2025-11-29 07:55:28.399 233728 DEBUG nova.compute.manager [req-929a966e-a8b9-4063-a183-67c27483130e req-7b4978eb-6954-44a2-8f11-4dbf5992098b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Received event network-vif-plugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:55:28 np0005539552 nova_compute[233724]: 2025-11-29 07:55:28.399 233728 DEBUG oslo_concurrency.lockutils [req-929a966e-a8b9-4063-a183-67c27483130e req-7b4978eb-6954-44a2-8f11-4dbf5992098b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:28 np0005539552 nova_compute[233724]: 2025-11-29 07:55:28.400 233728 DEBUG oslo_concurrency.lockutils [req-929a966e-a8b9-4063-a183-67c27483130e req-7b4978eb-6954-44a2-8f11-4dbf5992098b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:28 np0005539552 nova_compute[233724]: 2025-11-29 07:55:28.400 233728 DEBUG oslo_concurrency.lockutils [req-929a966e-a8b9-4063-a183-67c27483130e req-7b4978eb-6954-44a2-8f11-4dbf5992098b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:28 np0005539552 nova_compute[233724]: 2025-11-29 07:55:28.400 233728 DEBUG nova.compute.manager [req-929a966e-a8b9-4063-a183-67c27483130e req-7b4978eb-6954-44a2-8f11-4dbf5992098b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] No waiting events found dispatching network-vif-plugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:55:28 np0005539552 nova_compute[233724]: 2025-11-29 07:55:28.401 233728 WARNING nova.compute.manager [req-929a966e-a8b9-4063-a183-67c27483130e req-7b4978eb-6954-44a2-8f11-4dbf5992098b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Received unexpected event network-vif-plugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 02:55:28 np0005539552 nova_compute[233724]: 2025-11-29 07:55:28.506 233728 DEBUG nova.virt.libvirt.driver [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp5ro22k8w',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='56f3f72f-7db4-47c8-a4c3-20b2acc58aa9',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={e52d8ac1-8970-4cf0-9aa0-795f616090d0='b80f6614-6e08-4c17-b66e-c0f2a630e4d8'},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m
Nov 29 02:55:28 np0005539552 nova_compute[233724]: 2025-11-29 07:55:28.506 233728 DEBUG nova.virt.libvirt.driver [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Creating instance directory: /var/lib/nova/instances/56f3f72f-7db4-47c8-a4c3-20b2acc58aa9 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m
Nov 29 02:55:28 np0005539552 nova_compute[233724]: 2025-11-29 07:55:28.507 233728 DEBUG nova.virt.libvirt.driver [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Ensure instance console log exists: /var/lib/nova/instances/56f3f72f-7db4-47c8-a4c3-20b2acc58aa9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:55:28 np0005539552 nova_compute[233724]: 2025-11-29 07:55:28.507 233728 DEBUG nova.virt.libvirt.driver [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Connecting volumes before live migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10901#033[00m
Nov 29 02:55:28 np0005539552 nova_compute[233724]: 2025-11-29 07:55:28.509 233728 DEBUG nova.virt.libvirt.driver [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m
Nov 29 02:55:28 np0005539552 nova_compute[233724]: 2025-11-29 07:55:28.510 233728 DEBUG nova.virt.libvirt.vif [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T07:54:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-178880762',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-178880762',id=19,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:54:38Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f91d373d1ef64146866ef08735a75efa',ramdisk_id='',reservation_id='r-1xnv5qiw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-
1482931553',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1482931553-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:55:16Z,user_data=None,user_id='b8f5b14bc98a47f29238140d1d3f1220',uuid=56f3f72f-7db4-47c8-a4c3-20b2acc58aa9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d1330295-51bc-4e64-a620-b63a6d8777fb", "address": "fa:16:3e:c2:bc:90", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapd1330295-51", "ovs_interfaceid": "d1330295-51bc-4e64-a620-b63a6d8777fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:55:28 np0005539552 nova_compute[233724]: 2025-11-29 07:55:28.510 233728 DEBUG nova.network.os_vif_util [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Converting VIF {"id": "d1330295-51bc-4e64-a620-b63a6d8777fb", "address": "fa:16:3e:c2:bc:90", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapd1330295-51", "ovs_interfaceid": "d1330295-51bc-4e64-a620-b63a6d8777fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:55:28 np0005539552 nova_compute[233724]: 2025-11-29 07:55:28.511 233728 DEBUG nova.network.os_vif_util [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c2:bc:90,bridge_name='br-int',has_traffic_filtering=True,id=d1330295-51bc-4e64-a620-b63a6d8777fb,network=Network(ad69a0f4-0000-474b-9649-72cf1bf9f5c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1330295-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:55:28 np0005539552 nova_compute[233724]: 2025-11-29 07:55:28.511 233728 DEBUG os_vif [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c2:bc:90,bridge_name='br-int',has_traffic_filtering=True,id=d1330295-51bc-4e64-a620-b63a6d8777fb,network=Network(ad69a0f4-0000-474b-9649-72cf1bf9f5c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1330295-51') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:55:28 np0005539552 nova_compute[233724]: 2025-11-29 07:55:28.512 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:28 np0005539552 nova_compute[233724]: 2025-11-29 07:55:28.512 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:28 np0005539552 nova_compute[233724]: 2025-11-29 07:55:28.512 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:55:28 np0005539552 nova_compute[233724]: 2025-11-29 07:55:28.515 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:28 np0005539552 nova_compute[233724]: 2025-11-29 07:55:28.515 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd1330295-51, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:28 np0005539552 nova_compute[233724]: 2025-11-29 07:55:28.515 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd1330295-51, col_values=(('external_ids', {'iface-id': 'd1330295-51bc-4e64-a620-b63a6d8777fb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c2:bc:90', 'vm-uuid': '56f3f72f-7db4-47c8-a4c3-20b2acc58aa9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:28 np0005539552 nova_compute[233724]: 2025-11-29 07:55:28.517 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:28 np0005539552 NetworkManager[48926]: <info>  [1764402928.5184] manager: (tapd1330295-51): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Nov 29 02:55:28 np0005539552 nova_compute[233724]: 2025-11-29 07:55:28.518 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:55:28 np0005539552 nova_compute[233724]: 2025-11-29 07:55:28.522 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:28 np0005539552 nova_compute[233724]: 2025-11-29 07:55:28.523 233728 INFO os_vif [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c2:bc:90,bridge_name='br-int',has_traffic_filtering=True,id=d1330295-51bc-4e64-a620-b63a6d8777fb,network=Network(ad69a0f4-0000-474b-9649-72cf1bf9f5c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1330295-51')#033[00m
Nov 29 02:55:28 np0005539552 nova_compute[233724]: 2025-11-29 07:55:28.525 233728 DEBUG nova.virt.libvirt.driver [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m
Nov 29 02:55:28 np0005539552 nova_compute[233724]: 2025-11-29 07:55:28.526 233728 DEBUG nova.compute.manager [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp5ro22k8w',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='56f3f72f-7db4-47c8-a4c3-20b2acc58aa9',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={e52d8ac1-8970-4cf0-9aa0-795f616090d0='b80f6614-6e08-4c17-b66e-c0f2a630e4d8'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m
Nov 29 02:55:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:28.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:28 np0005539552 podman[245082]: 2025-11-29 07:55:28.973439902 +0000 UTC m=+0.056671172 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 02:55:28 np0005539552 podman[245081]: 2025-11-29 07:55:28.983028143 +0000 UTC m=+0.058535053 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:55:29 np0005539552 nova_compute[233724]: 2025-11-29 07:55:29.363 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:29.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:29.653 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:55:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:29.654 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:55:29 np0005539552 nova_compute[233724]: 2025-11-29 07:55:29.705 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:30 np0005539552 podman[245121]: 2025-11-29 07:55:30.007760221 +0000 UTC m=+0.103163828 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 02:55:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:30.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:31 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:31.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:32.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:33 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #52. Immutable memtables: 0.
Nov 29 02:55:33 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:55:33.491425) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:55:33 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 52
Nov 29 02:55:33 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402933491478, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2388, "num_deletes": 251, "total_data_size": 5684195, "memory_usage": 5757616, "flush_reason": "Manual Compaction"}
Nov 29 02:55:33 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #53: started
Nov 29 02:55:33 np0005539552 nova_compute[233724]: 2025-11-29 07:55:33.520 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:33 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402933531590, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 53, "file_size": 3717364, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24763, "largest_seqno": 27146, "table_properties": {"data_size": 3707924, "index_size": 5870, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20321, "raw_average_key_size": 20, "raw_value_size": 3688722, "raw_average_value_size": 3710, "num_data_blocks": 260, "num_entries": 994, "num_filter_entries": 994, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402719, "oldest_key_time": 1764402719, "file_creation_time": 1764402933, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:55:33 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 40234 microseconds, and 8874 cpu microseconds.
Nov 29 02:55:33 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:55:33 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:55:33.531657) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #53: 3717364 bytes OK
Nov 29 02:55:33 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:55:33.531674) [db/memtable_list.cc:519] [default] Level-0 commit table #53 started
Nov 29 02:55:33 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:55:33.534032) [db/memtable_list.cc:722] [default] Level-0 commit table #53: memtable #1 done
Nov 29 02:55:33 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:55:33.534075) EVENT_LOG_v1 {"time_micros": 1764402933534065, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:55:33 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:55:33.534096) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:55:33 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 5673636, prev total WAL file size 5689174, number of live WAL files 2.
Nov 29 02:55:33 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000049.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:55:33 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:55:33.535563) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Nov 29 02:55:33 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:55:33 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [53(3630KB)], [51(9546KB)]
Nov 29 02:55:33 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402933535644, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [53], "files_L6": [51], "score": -1, "input_data_size": 13493066, "oldest_snapshot_seqno": -1}
Nov 29 02:55:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:33.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:33 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #54: 5776 keys, 11293686 bytes, temperature: kUnknown
Nov 29 02:55:33 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402933631892, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 54, "file_size": 11293686, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11253398, "index_size": 24754, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14469, "raw_key_size": 146985, "raw_average_key_size": 25, "raw_value_size": 11147459, "raw_average_value_size": 1929, "num_data_blocks": 1010, "num_entries": 5776, "num_filter_entries": 5776, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764402933, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 54, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:55:33 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:55:33 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:55:33.632175) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 11293686 bytes
Nov 29 02:55:33 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:55:33.633679) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 140.0 rd, 117.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 9.3 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(6.7) write-amplify(3.0) OK, records in: 6293, records dropped: 517 output_compression: NoCompression
Nov 29 02:55:33 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:55:33.633697) EVENT_LOG_v1 {"time_micros": 1764402933633689, "job": 30, "event": "compaction_finished", "compaction_time_micros": 96346, "compaction_time_cpu_micros": 25591, "output_level": 6, "num_output_files": 1, "total_output_size": 11293686, "num_input_records": 6293, "num_output_records": 5776, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:55:33 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:55:33 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402933634360, "job": 30, "event": "table_file_deletion", "file_number": 53}
Nov 29 02:55:33 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000051.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:55:33 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402933636045, "job": 30, "event": "table_file_deletion", "file_number": 51}
Nov 29 02:55:33 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:55:33.535432) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:55:33 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:55:33.636195) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:55:33 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:55:33.636202) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:55:33 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:55:33.636204) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:55:33 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:55:33.636205) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:55:33 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:55:33.636206) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:55:34 np0005539552 nova_compute[233724]: 2025-11-29 07:55:34.435 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:34 np0005539552 nova_compute[233724]: 2025-11-29 07:55:34.572 233728 DEBUG nova.network.neutron [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Port d1330295-51bc-4e64-a620-b63a6d8777fb updated with migration profile {'os_vif_delegation': True, 'migrating_to': 'compute-2.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m
Nov 29 02:55:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:34.655 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:34.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:35 np0005539552 nova_compute[233724]: 2025-11-29 07:55:35.297 233728 DEBUG nova.compute.manager [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp5ro22k8w',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='56f3f72f-7db4-47c8-a4c3-20b2acc58aa9',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={e52d8ac1-8970-4cf0-9aa0-795f616090d0='b80f6614-6e08-4c17-b66e-c0f2a630e4d8'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m
Nov 29 02:55:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:35.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:35 np0005539552 kernel: tapd1330295-51: entered promiscuous mode
Nov 29 02:55:35 np0005539552 ovn_controller[133798]: 2025-11-29T07:55:35Z|00098|binding|INFO|Claiming lport d1330295-51bc-4e64-a620-b63a6d8777fb for this additional chassis.
Nov 29 02:55:35 np0005539552 ovn_controller[133798]: 2025-11-29T07:55:35Z|00099|binding|INFO|d1330295-51bc-4e64-a620-b63a6d8777fb: Claiming fa:16:3e:c2:bc:90 10.100.0.12
Nov 29 02:55:35 np0005539552 nova_compute[233724]: 2025-11-29 07:55:35.625 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:35 np0005539552 NetworkManager[48926]: <info>  [1764402935.6270] manager: (tapd1330295-51): new Tun device (/org/freedesktop/NetworkManager/Devices/56)
Nov 29 02:55:35 np0005539552 ovn_controller[133798]: 2025-11-29T07:55:35Z|00100|binding|INFO|Setting lport d1330295-51bc-4e64-a620-b63a6d8777fb ovn-installed in OVS
Nov 29 02:55:35 np0005539552 nova_compute[233724]: 2025-11-29 07:55:35.642 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:35 np0005539552 nova_compute[233724]: 2025-11-29 07:55:35.645 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:35 np0005539552 systemd-machined[196379]: New machine qemu-7-instance-00000013.
Nov 29 02:55:35 np0005539552 systemd-udevd[245164]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:55:35 np0005539552 systemd[1]: Started Virtual Machine qemu-7-instance-00000013.
Nov 29 02:55:35 np0005539552 NetworkManager[48926]: <info>  [1764402935.6755] device (tapd1330295-51): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:55:35 np0005539552 NetworkManager[48926]: <info>  [1764402935.6765] device (tapd1330295-51): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:55:36 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:36.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:37 np0005539552 nova_compute[233724]: 2025-11-29 07:55:37.017 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764402937.0168872, 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:55:37 np0005539552 nova_compute[233724]: 2025-11-29 07:55:37.018 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] VM Started (Lifecycle Event)#033[00m
Nov 29 02:55:37 np0005539552 nova_compute[233724]: 2025-11-29 07:55:37.044 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:55:37 np0005539552 nova_compute[233724]: 2025-11-29 07:55:37.494 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764402937.4940114, 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:55:37 np0005539552 nova_compute[233724]: 2025-11-29 07:55:37.495 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:55:37 np0005539552 nova_compute[233724]: 2025-11-29 07:55:37.525 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:55:37 np0005539552 nova_compute[233724]: 2025-11-29 07:55:37.529 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:55:37 np0005539552 nova_compute[233724]: 2025-11-29 07:55:37.553 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-2.ctlplane.example.com#033[00m
Nov 29 02:55:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:37.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:38 np0005539552 nova_compute[233724]: 2025-11-29 07:55:38.418 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402923.4172988, 7067462a-37a6-458e-b96c-76adcea5fdfa => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:55:38 np0005539552 nova_compute[233724]: 2025-11-29 07:55:38.419 233728 INFO nova.compute.manager [-] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:55:38 np0005539552 nova_compute[233724]: 2025-11-29 07:55:38.454 233728 DEBUG nova.compute.manager [None req-53b76898-cd41-4333-b9da-de278b1bc634 - - - - - -] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:55:38 np0005539552 nova_compute[233724]: 2025-11-29 07:55:38.524 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:38.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:39 np0005539552 nova_compute[233724]: 2025-11-29 07:55:39.437 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:39.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 02:55:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1531854258' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 02:55:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 02:55:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1531854258' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 02:55:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:40.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:41 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:41.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:42.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:43 np0005539552 nova_compute[233724]: 2025-11-29 07:55:43.527 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:43.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:44 np0005539552 ovn_controller[133798]: 2025-11-29T07:55:44Z|00101|binding|INFO|Claiming lport d1330295-51bc-4e64-a620-b63a6d8777fb for this chassis.
Nov 29 02:55:44 np0005539552 ovn_controller[133798]: 2025-11-29T07:55:44Z|00102|binding|INFO|d1330295-51bc-4e64-a620-b63a6d8777fb: Claiming fa:16:3e:c2:bc:90 10.100.0.12
Nov 29 02:55:44 np0005539552 ovn_controller[133798]: 2025-11-29T07:55:44Z|00103|binding|INFO|Setting lport d1330295-51bc-4e64-a620-b63a6d8777fb up in Southbound
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:44.253 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:bc:90 10.100.0.12'], port_security=['fa:16:3e:c2:bc:90 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '56f3f72f-7db4-47c8-a4c3-20b2acc58aa9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ad69a0f4-0000-474b-9649-72cf1bf9f5c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f91d373d1ef64146866ef08735a75efa', 'neutron:revision_number': '20', 'neutron:security_group_ids': '394eda18-2fbd-4f97-9713-003068aad79a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=19139b07-e3dc-4118-93d3-d7c140077f4d, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=d1330295-51bc-4e64-a620-b63a6d8777fb) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:44.254 143400 INFO neutron.agent.ovn.metadata.agent [-] Port d1330295-51bc-4e64-a620-b63a6d8777fb in datapath ad69a0f4-0000-474b-9649-72cf1bf9f5c1 bound to our chassis#033[00m
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:44.256 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ad69a0f4-0000-474b-9649-72cf1bf9f5c1#033[00m
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:44.269 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b4f7cd9b-3a7a-4002-9efb-6f1cbfd7288d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:44.271 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapad69a0f4-01 in ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:44.277 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapad69a0f4-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:44.277 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[16c068f2-4f7e-4dca-a4de-6c59bc1ce40c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:44.278 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b445cec8-995b-4929-b71b-3da13e45f826]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:44.294 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[9b5df951-2f70-41d3-8283-788f203d133c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:44.312 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[044af92a-118b-4bfe-a7d0-a17713618d38]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:44.344 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[658b2beb-5f95-4dba-b73a-66f286cdb95f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:44.350 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[57573f9d-356b-4b8d-a690-e2498befdae1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:44 np0005539552 NetworkManager[48926]: <info>  [1764402944.3517] manager: (tapad69a0f4-00): new Veth device (/org/freedesktop/NetworkManager/Devices/57)
Nov 29 02:55:44 np0005539552 systemd-udevd[245226]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:44.381 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[ab13cbfd-03eb-46e5-8185-d826ca87f3a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:44.385 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[1bd274be-e133-4999-a2b7-304abfba4888]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:44 np0005539552 NetworkManager[48926]: <info>  [1764402944.4060] device (tapad69a0f4-00): carrier: link connected
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:44.413 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[affc7ae5-88ee-4c36-a41a-d10729a838ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:44.431 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[bbdf4d08-a376-4a94-8b77-f053054db7b2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapad69a0f4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b5:a1:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599009, 'reachable_time': 40448, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245245, 'error': None, 'target': 'ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:44 np0005539552 nova_compute[233724]: 2025-11-29 07:55:44.439 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:44.448 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[012ab36f-b0e2-411b-97ab-aef0cc415b77]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb5:a12d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 599009, 'tstamp': 599009}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245246, 'error': None, 'target': 'ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:44.464 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[5972d1c0-aaa6-404d-afd3-b823e01058c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapad69a0f4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b5:a1:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599009, 'reachable_time': 40448, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 245247, 'error': None, 'target': 'ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:44.496 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ebf14cee-2dcf-4aea-b0d2-542685e504b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:44.542 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[303418bb-4661-493a-afbe-8902f6be694f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:44.544 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapad69a0f4-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:44.544 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:44.544 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapad69a0f4-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:44 np0005539552 nova_compute[233724]: 2025-11-29 07:55:44.546 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:44 np0005539552 kernel: tapad69a0f4-00: entered promiscuous mode
Nov 29 02:55:44 np0005539552 NetworkManager[48926]: <info>  [1764402944.5469] manager: (tapad69a0f4-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Nov 29 02:55:44 np0005539552 nova_compute[233724]: 2025-11-29 07:55:44.548 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:44.548 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapad69a0f4-00, col_values=(('external_ids', {'iface-id': '7ffec560-b868-40db-af88-b0deaaa81f65'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:44 np0005539552 nova_compute[233724]: 2025-11-29 07:55:44.549 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:44 np0005539552 ovn_controller[133798]: 2025-11-29T07:55:44Z|00104|binding|INFO|Releasing lport 7ffec560-b868-40db-af88-b0deaaa81f65 from this chassis (sb_readonly=0)
Nov 29 02:55:44 np0005539552 nova_compute[233724]: 2025-11-29 07:55:44.565 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:44.567 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ad69a0f4-0000-474b-9649-72cf1bf9f5c1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ad69a0f4-0000-474b-9649-72cf1bf9f5c1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:44.568 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[709e70b5-9154-48b1-9f2d-95d38f90c91e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:44.569 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-ad69a0f4-0000-474b-9649-72cf1bf9f5c1
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/ad69a0f4-0000-474b-9649-72cf1bf9f5c1.pid.haproxy
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID ad69a0f4-0000-474b-9649-72cf1bf9f5c1
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 02:55:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:55:44.569 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1', 'env', 'PROCESS_TAG=haproxy-ad69a0f4-0000-474b-9649-72cf1bf9f5c1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ad69a0f4-0000-474b-9649-72cf1bf9f5c1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 02:55:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:44.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:44 np0005539552 podman[245283]: 2025-11-29 07:55:44.897044835 +0000 UTC m=+0.026902503 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:55:45 np0005539552 podman[245283]: 2025-11-29 07:55:45.140362854 +0000 UTC m=+0.270220492 container create c23600e9b1330999bf3e59df02862575016bfc850e4deb89c5354e59a759bf72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 02:55:45 np0005539552 systemd[1]: Started libpod-conmon-c23600e9b1330999bf3e59df02862575016bfc850e4deb89c5354e59a759bf72.scope.
Nov 29 02:55:45 np0005539552 systemd[1]: Started libcrun container.
Nov 29 02:55:45 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c520583890a14ac674c9c673a8c3865ecfed27c8a77de589ba486e76c0ef1009/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:55:45 np0005539552 podman[245283]: 2025-11-29 07:55:45.21703412 +0000 UTC m=+0.346891778 container init c23600e9b1330999bf3e59df02862575016bfc850e4deb89c5354e59a759bf72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:55:45 np0005539552 podman[245283]: 2025-11-29 07:55:45.222151769 +0000 UTC m=+0.352009407 container start c23600e9b1330999bf3e59df02862575016bfc850e4deb89c5354e59a759bf72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:55:45 np0005539552 nova_compute[233724]: 2025-11-29 07:55:45.232 233728 INFO nova.compute.manager [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Post operation of migration started#033[00m
Nov 29 02:55:45 np0005539552 neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1[245298]: [NOTICE]   (245302) : New worker (245304) forked
Nov 29 02:55:45 np0005539552 neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1[245298]: [NOTICE]   (245302) : Loading success.
Nov 29 02:55:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:45.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:45 np0005539552 nova_compute[233724]: 2025-11-29 07:55:45.961 233728 DEBUG oslo_concurrency.lockutils [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Acquiring lock "refresh_cache-56f3f72f-7db4-47c8-a4c3-20b2acc58aa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:55:45 np0005539552 nova_compute[233724]: 2025-11-29 07:55:45.962 233728 DEBUG oslo_concurrency.lockutils [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Acquired lock "refresh_cache-56f3f72f-7db4-47c8-a4c3-20b2acc58aa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:55:45 np0005539552 nova_compute[233724]: 2025-11-29 07:55:45.962 233728 DEBUG nova.network.neutron [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:55:46 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:46.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:47.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:48 np0005539552 nova_compute[233724]: 2025-11-29 07:55:48.530 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:48.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:49 np0005539552 nova_compute[233724]: 2025-11-29 07:55:49.244 233728 DEBUG nova.network.neutron [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Updating instance_info_cache with network_info: [{"id": "d1330295-51bc-4e64-a620-b63a6d8777fb", "address": "fa:16:3e:c2:bc:90", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1330295-51", "ovs_interfaceid": "d1330295-51bc-4e64-a620-b63a6d8777fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:55:49 np0005539552 nova_compute[233724]: 2025-11-29 07:55:49.442 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:49.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:50 np0005539552 nova_compute[233724]: 2025-11-29 07:55:50.282 233728 DEBUG oslo_concurrency.lockutils [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Releasing lock "refresh_cache-56f3f72f-7db4-47c8-a4c3-20b2acc58aa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:55:50 np0005539552 nova_compute[233724]: 2025-11-29 07:55:50.611 233728 DEBUG oslo_concurrency.lockutils [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:50 np0005539552 nova_compute[233724]: 2025-11-29 07:55:50.612 233728 DEBUG oslo_concurrency.lockutils [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:50 np0005539552 nova_compute[233724]: 2025-11-29 07:55:50.612 233728 DEBUG oslo_concurrency.lockutils [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:50 np0005539552 nova_compute[233724]: 2025-11-29 07:55:50.616 233728 INFO nova.virt.libvirt.driver [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m
Nov 29 02:55:50 np0005539552 virtqemud[233098]: Domain id=7 name='instance-00000013' uuid=56f3f72f-7db4-47c8-a4c3-20b2acc58aa9 is tainted: custom-monitor
Nov 29 02:55:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:50.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:51 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:51.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:51 np0005539552 nova_compute[233724]: 2025-11-29 07:55:51.626 233728 INFO nova.virt.libvirt.driver [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Sending announce-self command to QEMU monitor. Attempt 2 of 3#033[00m
Nov 29 02:55:51 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:55:51 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.3 total, 600.0 interval#012Cumulative writes: 11K writes, 48K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s#012Cumulative WAL: 11K writes, 3263 syncs, 3.65 writes per sync, written: 0.05 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5950 writes, 24K keys, 5950 commit groups, 1.0 writes per commit group, ingest: 29.25 MB, 0.05 MB/s#012Interval WAL: 5950 writes, 2175 syncs, 2.74 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 02:55:52 np0005539552 nova_compute[233724]: 2025-11-29 07:55:52.632 233728 INFO nova.virt.libvirt.driver [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Sending announce-self command to QEMU monitor. Attempt 3 of 3#033[00m
Nov 29 02:55:52 np0005539552 nova_compute[233724]: 2025-11-29 07:55:52.637 233728 DEBUG nova.compute.manager [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:55:52 np0005539552 nova_compute[233724]: 2025-11-29 07:55:52.698 233728 DEBUG nova.objects.instance [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 29 02:55:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:52.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:53 np0005539552 nova_compute[233724]: 2025-11-29 07:55:53.542 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:53.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:54 np0005539552 nova_compute[233724]: 2025-11-29 07:55:54.444 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:54.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:55:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:55.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:55:56 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:56.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:57.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:58 np0005539552 nova_compute[233724]: 2025-11-29 07:55:58.546 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:58.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:59 np0005539552 nova_compute[233724]: 2025-11-29 07:55:59.446 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:55:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:59.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:59 np0005539552 nova_compute[233724]: 2025-11-29 07:55:59.955 233728 DEBUG oslo_concurrency.lockutils [None req-c8a19de3-3b56-4c32-a82f-6917286fcfcc b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Acquiring lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:59 np0005539552 nova_compute[233724]: 2025-11-29 07:55:59.956 233728 DEBUG oslo_concurrency.lockutils [None req-c8a19de3-3b56-4c32-a82f-6917286fcfcc b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:59 np0005539552 nova_compute[233724]: 2025-11-29 07:55:59.956 233728 DEBUG oslo_concurrency.lockutils [None req-c8a19de3-3b56-4c32-a82f-6917286fcfcc b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Acquiring lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:59 np0005539552 nova_compute[233724]: 2025-11-29 07:55:59.956 233728 DEBUG oslo_concurrency.lockutils [None req-c8a19de3-3b56-4c32-a82f-6917286fcfcc b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:59 np0005539552 nova_compute[233724]: 2025-11-29 07:55:59.957 233728 DEBUG oslo_concurrency.lockutils [None req-c8a19de3-3b56-4c32-a82f-6917286fcfcc b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:59 np0005539552 nova_compute[233724]: 2025-11-29 07:55:59.958 233728 INFO nova.compute.manager [None req-c8a19de3-3b56-4c32-a82f-6917286fcfcc b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Terminating instance#033[00m
Nov 29 02:55:59 np0005539552 nova_compute[233724]: 2025-11-29 07:55:59.959 233728 DEBUG nova.compute.manager [None req-c8a19de3-3b56-4c32-a82f-6917286fcfcc b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:55:59 np0005539552 podman[245377]: 2025-11-29 07:55:59.972506921 +0000 UTC m=+0.061254434 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:55:59 np0005539552 podman[245378]: 2025-11-29 07:55:59.992559366 +0000 UTC m=+0.081277258 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 02:56:00 np0005539552 kernel: tapd1330295-51 (unregistering): left promiscuous mode
Nov 29 02:56:00 np0005539552 NetworkManager[48926]: <info>  [1764402960.0306] device (tapd1330295-51): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:56:00 np0005539552 ovn_controller[133798]: 2025-11-29T07:56:00Z|00105|binding|INFO|Releasing lport d1330295-51bc-4e64-a620-b63a6d8777fb from this chassis (sb_readonly=0)
Nov 29 02:56:00 np0005539552 ovn_controller[133798]: 2025-11-29T07:56:00Z|00106|binding|INFO|Setting lport d1330295-51bc-4e64-a620-b63a6d8777fb down in Southbound
Nov 29 02:56:00 np0005539552 ovn_controller[133798]: 2025-11-29T07:56:00Z|00107|binding|INFO|Removing iface tapd1330295-51 ovn-installed in OVS
Nov 29 02:56:00 np0005539552 nova_compute[233724]: 2025-11-29 07:56:00.031 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:00 np0005539552 nova_compute[233724]: 2025-11-29 07:56:00.033 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:00 np0005539552 nova_compute[233724]: 2025-11-29 07:56:00.051 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:00 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:56:00.087 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:bc:90 10.100.0.12'], port_security=['fa:16:3e:c2:bc:90 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '56f3f72f-7db4-47c8-a4c3-20b2acc58aa9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ad69a0f4-0000-474b-9649-72cf1bf9f5c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f91d373d1ef64146866ef08735a75efa', 'neutron:revision_number': '22', 'neutron:security_group_ids': '394eda18-2fbd-4f97-9713-003068aad79a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=19139b07-e3dc-4118-93d3-d7c140077f4d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=d1330295-51bc-4e64-a620-b63a6d8777fb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:56:00 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:56:00.088 143400 INFO neutron.agent.ovn.metadata.agent [-] Port d1330295-51bc-4e64-a620-b63a6d8777fb in datapath ad69a0f4-0000-474b-9649-72cf1bf9f5c1 unbound from our chassis#033[00m
Nov 29 02:56:00 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:56:00.090 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ad69a0f4-0000-474b-9649-72cf1bf9f5c1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:56:00 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:56:00.091 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[cf893611-b59f-4498-a5b3-a3ed75f85e22]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:00 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:56:00.091 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1 namespace which is not needed anymore#033[00m
Nov 29 02:56:00 np0005539552 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000013.scope: Deactivated successfully.
Nov 29 02:56:00 np0005539552 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000013.scope: Consumed 2.181s CPU time.
Nov 29 02:56:00 np0005539552 systemd-machined[196379]: Machine qemu-7-instance-00000013 terminated.
Nov 29 02:56:00 np0005539552 podman[245415]: 2025-11-29 07:56:00.134441566 +0000 UTC m=+0.076705333 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 29 02:56:00 np0005539552 nova_compute[233724]: 2025-11-29 07:56:00.181 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:00 np0005539552 nova_compute[233724]: 2025-11-29 07:56:00.187 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:00 np0005539552 nova_compute[233724]: 2025-11-29 07:56:00.201 233728 INFO nova.virt.libvirt.driver [-] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Instance destroyed successfully.#033[00m
Nov 29 02:56:00 np0005539552 nova_compute[233724]: 2025-11-29 07:56:00.201 233728 DEBUG nova.objects.instance [None req-c8a19de3-3b56-4c32-a82f-6917286fcfcc b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Lazy-loading 'resources' on Instance uuid 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:56:00 np0005539552 neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1[245298]: [NOTICE]   (245302) : haproxy version is 2.8.14-c23fe91
Nov 29 02:56:00 np0005539552 neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1[245298]: [NOTICE]   (245302) : path to executable is /usr/sbin/haproxy
Nov 29 02:56:00 np0005539552 neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1[245298]: [WARNING]  (245302) : Exiting Master process...
Nov 29 02:56:00 np0005539552 neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1[245298]: [WARNING]  (245302) : Exiting Master process...
Nov 29 02:56:00 np0005539552 neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1[245298]: [ALERT]    (245302) : Current worker (245304) exited with code 143 (Terminated)
Nov 29 02:56:00 np0005539552 neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1[245298]: [WARNING]  (245302) : All workers exited. Exiting... (0)
Nov 29 02:56:00 np0005539552 systemd[1]: libpod-c23600e9b1330999bf3e59df02862575016bfc850e4deb89c5354e59a759bf72.scope: Deactivated successfully.
Nov 29 02:56:00 np0005539552 conmon[245298]: conmon c23600e9b1330999bf3e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c23600e9b1330999bf3e59df02862575016bfc850e4deb89c5354e59a759bf72.scope/container/memory.events
Nov 29 02:56:00 np0005539552 podman[245465]: 2025-11-29 07:56:00.224947004 +0000 UTC m=+0.052141587 container died c23600e9b1330999bf3e59df02862575016bfc850e4deb89c5354e59a759bf72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:56:00 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c23600e9b1330999bf3e59df02862575016bfc850e4deb89c5354e59a759bf72-userdata-shm.mount: Deactivated successfully.
Nov 29 02:56:00 np0005539552 nova_compute[233724]: 2025-11-29 07:56:00.251 233728 DEBUG nova.virt.libvirt.vif [None req-c8a19de3-3b56-4c32-a82f-6917286fcfcc b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T07:54:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-178880762',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-178880762',id=19,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:54:38Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f91d373d1ef64146866ef08735a75efa',ramdisk_id='',reservation_id='r-1xnv5qiw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='2',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1482931553',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1482931553-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:55:52Z,user_data=None,user_id='b8f5b14bc98a47f29238140d1d3f1220',uuid=56f3f72f-7db4-47c8-a4c3-20b2acc58aa9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d1330295-51bc-4e64-a620-b63a6d8777fb", "address": "fa:16:3e:c2:bc:90", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1330295-51", "ovs_interfaceid": "d1330295-51bc-4e64-a620-b63a6d8777fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:56:00 np0005539552 nova_compute[233724]: 2025-11-29 07:56:00.253 233728 DEBUG nova.network.os_vif_util [None req-c8a19de3-3b56-4c32-a82f-6917286fcfcc b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Converting VIF {"id": "d1330295-51bc-4e64-a620-b63a6d8777fb", "address": "fa:16:3e:c2:bc:90", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1330295-51", "ovs_interfaceid": "d1330295-51bc-4e64-a620-b63a6d8777fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:56:00 np0005539552 nova_compute[233724]: 2025-11-29 07:56:00.253 233728 DEBUG nova.network.os_vif_util [None req-c8a19de3-3b56-4c32-a82f-6917286fcfcc b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c2:bc:90,bridge_name='br-int',has_traffic_filtering=True,id=d1330295-51bc-4e64-a620-b63a6d8777fb,network=Network(ad69a0f4-0000-474b-9649-72cf1bf9f5c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1330295-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:56:00 np0005539552 systemd[1]: var-lib-containers-storage-overlay-c520583890a14ac674c9c673a8c3865ecfed27c8a77de589ba486e76c0ef1009-merged.mount: Deactivated successfully.
Nov 29 02:56:00 np0005539552 nova_compute[233724]: 2025-11-29 07:56:00.254 233728 DEBUG os_vif [None req-c8a19de3-3b56-4c32-a82f-6917286fcfcc b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c2:bc:90,bridge_name='br-int',has_traffic_filtering=True,id=d1330295-51bc-4e64-a620-b63a6d8777fb,network=Network(ad69a0f4-0000-474b-9649-72cf1bf9f5c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1330295-51') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:56:00 np0005539552 nova_compute[233724]: 2025-11-29 07:56:00.256 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:00 np0005539552 nova_compute[233724]: 2025-11-29 07:56:00.257 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd1330295-51, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:00 np0005539552 nova_compute[233724]: 2025-11-29 07:56:00.259 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:00 np0005539552 nova_compute[233724]: 2025-11-29 07:56:00.262 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:56:00 np0005539552 nova_compute[233724]: 2025-11-29 07:56:00.264 233728 INFO os_vif [None req-c8a19de3-3b56-4c32-a82f-6917286fcfcc b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c2:bc:90,bridge_name='br-int',has_traffic_filtering=True,id=d1330295-51bc-4e64-a620-b63a6d8777fb,network=Network(ad69a0f4-0000-474b-9649-72cf1bf9f5c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1330295-51')#033[00m
Nov 29 02:56:00 np0005539552 podman[245465]: 2025-11-29 07:56:00.264525188 +0000 UTC m=+0.091719781 container cleanup c23600e9b1330999bf3e59df02862575016bfc850e4deb89c5354e59a759bf72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:56:00 np0005539552 systemd[1]: libpod-conmon-c23600e9b1330999bf3e59df02862575016bfc850e4deb89c5354e59a759bf72.scope: Deactivated successfully.
Nov 29 02:56:00 np0005539552 podman[245506]: 2025-11-29 07:56:00.343824351 +0000 UTC m=+0.054299595 container remove c23600e9b1330999bf3e59df02862575016bfc850e4deb89c5354e59a759bf72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:56:00 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:56:00.350 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[717c772a-c839-43ff-802b-8920dc3c7427]: (4, ('Sat Nov 29 07:56:00 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1 (c23600e9b1330999bf3e59df02862575016bfc850e4deb89c5354e59a759bf72)\nc23600e9b1330999bf3e59df02862575016bfc850e4deb89c5354e59a759bf72\nSat Nov 29 07:56:00 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1 (c23600e9b1330999bf3e59df02862575016bfc850e4deb89c5354e59a759bf72)\nc23600e9b1330999bf3e59df02862575016bfc850e4deb89c5354e59a759bf72\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:00 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:56:00.351 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[26bf2a12-b0ca-4a66-a59a-d0aa8f4252a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:00 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:56:00.352 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapad69a0f4-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:00 np0005539552 nova_compute[233724]: 2025-11-29 07:56:00.390 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:00 np0005539552 kernel: tapad69a0f4-00: left promiscuous mode
Nov 29 02:56:00 np0005539552 nova_compute[233724]: 2025-11-29 07:56:00.404 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:00 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:56:00.406 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3ab44340-d650-496b-bb42-e2d1aa4fab19]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:00 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:56:00.424 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[170b0fca-29be-421a-982c-83ee2ea1a6d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:00 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:56:00.425 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4bc721b6-156a-42f4-8e08-86f3372cafa2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:00 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:56:00.439 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[630b02f9-16cb-487b-b416-c21bfad79bfa]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599002, 'reachable_time': 40523, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245536, 'error': None, 'target': 'ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:00 np0005539552 systemd[1]: run-netns-ovnmeta\x2dad69a0f4\x2d0000\x2d474b\x2d9649\x2d72cf1bf9f5c1.mount: Deactivated successfully.
Nov 29 02:56:00 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:56:00.443 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 02:56:00 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:56:00.443 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[3dfd2a38-67da-4191-aa9f-08f8ff916373]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:00 np0005539552 nova_compute[233724]: 2025-11-29 07:56:00.604 233728 DEBUG nova.compute.manager [req-dbfcab14-b234-4f49-8cc6-fb052c4bdc29 req-bb44403f-38c9-448e-9bbb-28093f9b003e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Received event network-vif-unplugged-d1330295-51bc-4e64-a620-b63a6d8777fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:56:00 np0005539552 nova_compute[233724]: 2025-11-29 07:56:00.604 233728 DEBUG oslo_concurrency.lockutils [req-dbfcab14-b234-4f49-8cc6-fb052c4bdc29 req-bb44403f-38c9-448e-9bbb-28093f9b003e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:00 np0005539552 nova_compute[233724]: 2025-11-29 07:56:00.604 233728 DEBUG oslo_concurrency.lockutils [req-dbfcab14-b234-4f49-8cc6-fb052c4bdc29 req-bb44403f-38c9-448e-9bbb-28093f9b003e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:00 np0005539552 nova_compute[233724]: 2025-11-29 07:56:00.604 233728 DEBUG oslo_concurrency.lockutils [req-dbfcab14-b234-4f49-8cc6-fb052c4bdc29 req-bb44403f-38c9-448e-9bbb-28093f9b003e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:00 np0005539552 nova_compute[233724]: 2025-11-29 07:56:00.605 233728 DEBUG nova.compute.manager [req-dbfcab14-b234-4f49-8cc6-fb052c4bdc29 req-bb44403f-38c9-448e-9bbb-28093f9b003e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] No waiting events found dispatching network-vif-unplugged-d1330295-51bc-4e64-a620-b63a6d8777fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:56:00 np0005539552 nova_compute[233724]: 2025-11-29 07:56:00.605 233728 DEBUG nova.compute.manager [req-dbfcab14-b234-4f49-8cc6-fb052c4bdc29 req-bb44403f-38c9-448e-9bbb-28093f9b003e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Received event network-vif-unplugged-d1330295-51bc-4e64-a620-b63a6d8777fb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:56:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:00.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:01 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:01.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:02.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:02 np0005539552 nova_compute[233724]: 2025-11-29 07:56:02.881 233728 DEBUG nova.compute.manager [req-43591b15-a2f4-4080-9313-8887005adf31 req-5b7b2783-01c5-43ef-a0a8-dacc3c9ec315 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Received event network-vif-plugged-d1330295-51bc-4e64-a620-b63a6d8777fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:56:02 np0005539552 nova_compute[233724]: 2025-11-29 07:56:02.882 233728 DEBUG oslo_concurrency.lockutils [req-43591b15-a2f4-4080-9313-8887005adf31 req-5b7b2783-01c5-43ef-a0a8-dacc3c9ec315 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:02 np0005539552 nova_compute[233724]: 2025-11-29 07:56:02.882 233728 DEBUG oslo_concurrency.lockutils [req-43591b15-a2f4-4080-9313-8887005adf31 req-5b7b2783-01c5-43ef-a0a8-dacc3c9ec315 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:02 np0005539552 nova_compute[233724]: 2025-11-29 07:56:02.882 233728 DEBUG oslo_concurrency.lockutils [req-43591b15-a2f4-4080-9313-8887005adf31 req-5b7b2783-01c5-43ef-a0a8-dacc3c9ec315 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:02 np0005539552 nova_compute[233724]: 2025-11-29 07:56:02.883 233728 DEBUG nova.compute.manager [req-43591b15-a2f4-4080-9313-8887005adf31 req-5b7b2783-01c5-43ef-a0a8-dacc3c9ec315 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] No waiting events found dispatching network-vif-plugged-d1330295-51bc-4e64-a620-b63a6d8777fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:56:02 np0005539552 nova_compute[233724]: 2025-11-29 07:56:02.883 233728 WARNING nova.compute.manager [req-43591b15-a2f4-4080-9313-8887005adf31 req-5b7b2783-01c5-43ef-a0a8-dacc3c9ec315 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Received unexpected event network-vif-plugged-d1330295-51bc-4e64-a620-b63a6d8777fb for instance with vm_state active and task_state deleting.#033[00m
Nov 29 02:56:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:03.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:04 np0005539552 nova_compute[233724]: 2025-11-29 07:56:04.449 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:04.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:05 np0005539552 nova_compute[233724]: 2025-11-29 07:56:05.071 233728 INFO nova.virt.libvirt.driver [None req-c8a19de3-3b56-4c32-a82f-6917286fcfcc b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Deleting instance files /var/lib/nova/instances/56f3f72f-7db4-47c8-a4c3-20b2acc58aa9_del#033[00m
Nov 29 02:56:05 np0005539552 nova_compute[233724]: 2025-11-29 07:56:05.074 233728 INFO nova.virt.libvirt.driver [None req-c8a19de3-3b56-4c32-a82f-6917286fcfcc b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Deletion of /var/lib/nova/instances/56f3f72f-7db4-47c8-a4c3-20b2acc58aa9_del complete#033[00m
Nov 29 02:56:05 np0005539552 nova_compute[233724]: 2025-11-29 07:56:05.310 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:05 np0005539552 nova_compute[233724]: 2025-11-29 07:56:05.449 233728 INFO nova.compute.manager [None req-c8a19de3-3b56-4c32-a82f-6917286fcfcc b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Took 5.49 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:56:05 np0005539552 nova_compute[233724]: 2025-11-29 07:56:05.450 233728 DEBUG oslo.service.loopingcall [None req-c8a19de3-3b56-4c32-a82f-6917286fcfcc b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:56:05 np0005539552 nova_compute[233724]: 2025-11-29 07:56:05.450 233728 DEBUG nova.compute.manager [-] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:56:05 np0005539552 nova_compute[233724]: 2025-11-29 07:56:05.450 233728 DEBUG nova.network.neutron [-] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:56:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:05.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:06 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:06.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:07 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:56:07 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:56:07 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:56:07 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:56:07 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:56:07 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:56:07 np0005539552 nova_compute[233724]: 2025-11-29 07:56:07.486 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:56:07 np0005539552 nova_compute[233724]: 2025-11-29 07:56:07.486 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:56:07 np0005539552 nova_compute[233724]: 2025-11-29 07:56:07.486 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:56:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:07.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:08 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 02:56:08 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 02:56:08 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:56:08 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:56:08 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:56:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:08.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:09 np0005539552 nova_compute[233724]: 2025-11-29 07:56:09.053 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Nov 29 02:56:09 np0005539552 nova_compute[233724]: 2025-11-29 07:56:09.053 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 02:56:09 np0005539552 nova_compute[233724]: 2025-11-29 07:56:09.054 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:56:09 np0005539552 nova_compute[233724]: 2025-11-29 07:56:09.054 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:56:09 np0005539552 nova_compute[233724]: 2025-11-29 07:56:09.054 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:56:09 np0005539552 nova_compute[233724]: 2025-11-29 07:56:09.054 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:56:09 np0005539552 nova_compute[233724]: 2025-11-29 07:56:09.451 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:09 np0005539552 nova_compute[233724]: 2025-11-29 07:56:09.486 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:56:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:09.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:09 np0005539552 nova_compute[233724]: 2025-11-29 07:56:09.634 233728 DEBUG nova.network.neutron [-] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:56:09 np0005539552 nova_compute[233724]: 2025-11-29 07:56:09.653 233728 INFO nova.compute.manager [-] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Took 4.20 seconds to deallocate network for instance.#033[00m
Nov 29 02:56:09 np0005539552 nova_compute[233724]: 2025-11-29 07:56:09.858 233728 DEBUG nova.compute.manager [req-3a6e396b-5c0e-4490-ade0-e67bf66d85e3 req-5b12148f-537c-44cf-ad29-ab00c09c834c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Received event network-vif-deleted-d1330295-51bc-4e64-a620-b63a6d8777fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:56:09 np0005539552 nova_compute[233724]: 2025-11-29 07:56:09.918 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:56:09 np0005539552 nova_compute[233724]: 2025-11-29 07:56:09.939 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:56:09 np0005539552 nova_compute[233724]: 2025-11-29 07:56:09.939 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:56:09 np0005539552 nova_compute[233724]: 2025-11-29 07:56:09.940 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:56:10 np0005539552 nova_compute[233724]: 2025-11-29 07:56:10.178 233728 INFO nova.compute.manager [None req-c8a19de3-3b56-4c32-a82f-6917286fcfcc b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Took 0.53 seconds to detach 1 volumes for instance.#033[00m
Nov 29 02:56:10 np0005539552 nova_compute[233724]: 2025-11-29 07:56:10.180 233728 DEBUG nova.compute.manager [None req-c8a19de3-3b56-4c32-a82f-6917286fcfcc b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Deleting volume: e52d8ac1-8970-4cf0-9aa0-795f616090d0 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Nov 29 02:56:10 np0005539552 nova_compute[233724]: 2025-11-29 07:56:10.312 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:10 np0005539552 nova_compute[233724]: 2025-11-29 07:56:10.564 233728 DEBUG oslo_concurrency.lockutils [None req-c8a19de3-3b56-4c32-a82f-6917286fcfcc b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:10 np0005539552 nova_compute[233724]: 2025-11-29 07:56:10.564 233728 DEBUG oslo_concurrency.lockutils [None req-c8a19de3-3b56-4c32-a82f-6917286fcfcc b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:10 np0005539552 nova_compute[233724]: 2025-11-29 07:56:10.581 233728 DEBUG oslo_concurrency.lockutils [None req-c8a19de3-3b56-4c32-a82f-6917286fcfcc b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.017s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:10 np0005539552 nova_compute[233724]: 2025-11-29 07:56:10.640 233728 INFO nova.scheduler.client.report [None req-c8a19de3-3b56-4c32-a82f-6917286fcfcc b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Deleted allocations for instance 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9#033[00m
Nov 29 02:56:10 np0005539552 nova_compute[233724]: 2025-11-29 07:56:10.753 233728 DEBUG oslo_concurrency.lockutils [None req-c8a19de3-3b56-4c32-a82f-6917286fcfcc b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.797s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:10.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:11 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 02:56:11 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3499489055' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 02:56:11 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 02:56:11 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3499489055' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 02:56:11 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:11.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:11 np0005539552 nova_compute[233724]: 2025-11-29 07:56:11.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:56:11 np0005539552 nova_compute[233724]: 2025-11-29 07:56:11.965 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:11 np0005539552 nova_compute[233724]: 2025-11-29 07:56:11.966 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:11 np0005539552 nova_compute[233724]: 2025-11-29 07:56:11.966 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:11 np0005539552 nova_compute[233724]: 2025-11-29 07:56:11.966 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:56:11 np0005539552 nova_compute[233724]: 2025-11-29 07:56:11.967 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:56:12 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3241876536' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:56:12 np0005539552 nova_compute[233724]: 2025-11-29 07:56:12.389 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:56:12 np0005539552 nova_compute[233724]: 2025-11-29 07:56:12.551 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:56:12 np0005539552 nova_compute[233724]: 2025-11-29 07:56:12.553 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4803MB free_disk=20.76153564453125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:56:12 np0005539552 nova_compute[233724]: 2025-11-29 07:56:12.553 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:12 np0005539552 nova_compute[233724]: 2025-11-29 07:56:12.553 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:12 np0005539552 nova_compute[233724]: 2025-11-29 07:56:12.671 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:56:12 np0005539552 nova_compute[233724]: 2025-11-29 07:56:12.671 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:56:12 np0005539552 nova_compute[233724]: 2025-11-29 07:56:12.720 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:12.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:56:13 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/401104543' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:56:13 np0005539552 nova_compute[233724]: 2025-11-29 07:56:13.160 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:56:13 np0005539552 nova_compute[233724]: 2025-11-29 07:56:13.166 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:56:13 np0005539552 nova_compute[233724]: 2025-11-29 07:56:13.222 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:56:13 np0005539552 nova_compute[233724]: 2025-11-29 07:56:13.250 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:56:13 np0005539552 nova_compute[233724]: 2025-11-29 07:56:13.251 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:13.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:14 np0005539552 nova_compute[233724]: 2025-11-29 07:56:14.452 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:56:14.825 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:56:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:56:14.826 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:56:14 np0005539552 nova_compute[233724]: 2025-11-29 07:56:14.827 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:14.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:15 np0005539552 nova_compute[233724]: 2025-11-29 07:56:15.199 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402960.1979966, 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:56:15 np0005539552 nova_compute[233724]: 2025-11-29 07:56:15.200 233728 INFO nova.compute.manager [-] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:56:15 np0005539552 nova_compute[233724]: 2025-11-29 07:56:15.237 233728 DEBUG nova.compute.manager [None req-493c9180-0112-4d23-bfd2-cbc4c3b69b2b - - - - - -] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:56:15 np0005539552 nova_compute[233724]: 2025-11-29 07:56:15.314 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:15.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:16 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:16 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:56:16 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:56:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:16.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:16 np0005539552 nova_compute[233724]: 2025-11-29 07:56:16.941 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:17.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:18.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:19 np0005539552 nova_compute[233724]: 2025-11-29 07:56:19.453 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:19.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:20 np0005539552 nova_compute[233724]: 2025-11-29 07:56:20.316 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:56:20.604 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:56:20.605 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:56:20.605 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:20.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:21 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:21.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:22 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:56:22.829 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:22.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:23.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:24 np0005539552 nova_compute[233724]: 2025-11-29 07:56:24.456 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:24.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:25 np0005539552 nova_compute[233724]: 2025-11-29 07:56:25.318 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:25.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:26 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:26.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:27.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:28.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:29 np0005539552 nova_compute[233724]: 2025-11-29 07:56:29.457 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:29.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:30 np0005539552 nova_compute[233724]: 2025-11-29 07:56:30.362 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:30.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:30 np0005539552 podman[246003]: 2025-11-29 07:56:30.980141712 +0000 UTC m=+0.066501967 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:56:30 np0005539552 podman[246002]: 2025-11-29 07:56:30.992937339 +0000 UTC m=+0.079260823 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 02:56:31 np0005539552 podman[246004]: 2025-11-29 07:56:31.008979825 +0000 UTC m=+0.092190884 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 02:56:31 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:31.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:32.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:33.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:34 np0005539552 nova_compute[233724]: 2025-11-29 07:56:34.458 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:34.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:35 np0005539552 radosgw[83248]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Nov 29 02:56:35 np0005539552 radosgw[83248]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Nov 29 02:56:35 np0005539552 nova_compute[233724]: 2025-11-29 07:56:35.417 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:35.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:35 np0005539552 radosgw[83248]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Nov 29 02:56:35 np0005539552 radosgw[83248]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Nov 29 02:56:36 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:36.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:37.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:38.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 02:56:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/533368667' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 02:56:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 02:56:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/533368667' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 02:56:39 np0005539552 nova_compute[233724]: 2025-11-29 07:56:39.460 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:39.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:40 np0005539552 nova_compute[233724]: 2025-11-29 07:56:40.449 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:40.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:41 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:41.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:42.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:43.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:44 np0005539552 nova_compute[233724]: 2025-11-29 07:56:44.461 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:44.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:45 np0005539552 nova_compute[233724]: 2025-11-29 07:56:45.450 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:45.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:46 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:46.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:47.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:48.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:49 np0005539552 nova_compute[233724]: 2025-11-29 07:56:49.462 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:49.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:50 np0005539552 nova_compute[233724]: 2025-11-29 07:56:50.453 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:50.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:51 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:51.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:52.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:53 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:56:53.463 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:56:53 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:56:53.464 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:56:53 np0005539552 nova_compute[233724]: 2025-11-29 07:56:53.464 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:53.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:54 np0005539552 nova_compute[233724]: 2025-11-29 07:56:54.463 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:54.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:55 np0005539552 nova_compute[233724]: 2025-11-29 07:56:55.454 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:55.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:56 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:56:56.466 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:56.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:57.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:59 np0005539552 nova_compute[233724]: 2025-11-29 07:56:59.465 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:59.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:56:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:59.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:00 np0005539552 nova_compute[233724]: 2025-11-29 07:57:00.455 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:01 np0005539552 nova_compute[233724]: 2025-11-29 07:57:01.049 233728 DEBUG nova.compute.manager [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Nov 29 02:57:01 np0005539552 nova_compute[233724]: 2025-11-29 07:57:01.163 233728 DEBUG oslo_concurrency.lockutils [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:57:01 np0005539552 nova_compute[233724]: 2025-11-29 07:57:01.164 233728 DEBUG oslo_concurrency.lockutils [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:57:01 np0005539552 nova_compute[233724]: 2025-11-29 07:57:01.185 233728 DEBUG nova.objects.instance [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Lazy-loading 'pci_requests' on Instance uuid 9eb89c30-3f33-4a7c-ae19-8312a2522b82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:57:01 np0005539552 nova_compute[233724]: 2025-11-29 07:57:01.203 233728 DEBUG nova.virt.hardware [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:57:01 np0005539552 nova_compute[233724]: 2025-11-29 07:57:01.203 233728 INFO nova.compute.claims [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 02:57:01 np0005539552 nova_compute[233724]: 2025-11-29 07:57:01.204 233728 DEBUG nova.objects.instance [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Lazy-loading 'resources' on Instance uuid 9eb89c30-3f33-4a7c-ae19-8312a2522b82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:57:01 np0005539552 nova_compute[233724]: 2025-11-29 07:57:01.215 233728 DEBUG nova.objects.instance [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Lazy-loading 'numa_topology' on Instance uuid 9eb89c30-3f33-4a7c-ae19-8312a2522b82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:57:01 np0005539552 nova_compute[233724]: 2025-11-29 07:57:01.229 233728 DEBUG nova.objects.instance [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9eb89c30-3f33-4a7c-ae19-8312a2522b82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:57:01 np0005539552 nova_compute[233724]: 2025-11-29 07:57:01.287 233728 INFO nova.compute.resource_tracker [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Updating resource usage from migration c2903281-4709-4a60-aee1-a45ec2b3f2d4#033[00m
Nov 29 02:57:01 np0005539552 nova_compute[233724]: 2025-11-29 07:57:01.287 233728 DEBUG nova.compute.resource_tracker [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Starting to track incoming migration c2903281-4709-4a60-aee1-a45ec2b3f2d4 with flavor b4d0f3a6-e3dc-4216-aee8-148280e428cc _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Nov 29 02:57:01 np0005539552 nova_compute[233724]: 2025-11-29 07:57:01.331 233728 DEBUG oslo_concurrency.processutils [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:57:01 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:01.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:01 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:57:01 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3786343266' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:57:01 np0005539552 nova_compute[233724]: 2025-11-29 07:57:01.733 233728 DEBUG oslo_concurrency.processutils [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.402s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:01 np0005539552 nova_compute[233724]: 2025-11-29 07:57:01.738 233728 DEBUG nova.compute.provider_tree [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:57:01 np0005539552 nova_compute[233724]: 2025-11-29 07:57:01.754 233728 DEBUG nova.scheduler.client.report [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:57:01 np0005539552 nova_compute[233724]: 2025-11-29 07:57:01.781 233728 DEBUG oslo_concurrency.lockutils [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:57:01 np0005539552 nova_compute[233724]: 2025-11-29 07:57:01.781 233728 INFO nova.compute.manager [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Migrating#033[00m
Nov 29 02:57:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:01.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:01 np0005539552 podman[246160]: 2025-11-29 07:57:01.967795607 +0000 UTC m=+0.050203614 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 02:57:01 np0005539552 podman[246159]: 2025-11-29 07:57:01.977427348 +0000 UTC m=+0.061569722 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 02:57:01 np0005539552 podman[246161]: 2025-11-29 07:57:01.997104193 +0000 UTC m=+0.077220208 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 02:57:02 np0005539552 systemd-logind[788]: New session 53 of user nova.
Nov 29 02:57:02 np0005539552 systemd[1]: Created slice User Slice of UID 42436.
Nov 29 02:57:02 np0005539552 systemd[1]: Starting User Runtime Directory /run/user/42436...
Nov 29 02:57:03 np0005539552 systemd[1]: Finished User Runtime Directory /run/user/42436.
Nov 29 02:57:03 np0005539552 systemd[1]: Starting User Manager for UID 42436...
Nov 29 02:57:03 np0005539552 systemd[246227]: Queued start job for default target Main User Target.
Nov 29 02:57:03 np0005539552 systemd[246227]: Created slice User Application Slice.
Nov 29 02:57:03 np0005539552 systemd[246227]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 02:57:03 np0005539552 systemd[246227]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 02:57:03 np0005539552 systemd[246227]: Reached target Paths.
Nov 29 02:57:03 np0005539552 systemd[246227]: Reached target Timers.
Nov 29 02:57:03 np0005539552 systemd[246227]: Starting D-Bus User Message Bus Socket...
Nov 29 02:57:03 np0005539552 systemd[246227]: Starting Create User's Volatile Files and Directories...
Nov 29 02:57:03 np0005539552 systemd[246227]: Finished Create User's Volatile Files and Directories.
Nov 29 02:57:03 np0005539552 systemd[246227]: Listening on D-Bus User Message Bus Socket.
Nov 29 02:57:03 np0005539552 systemd[246227]: Reached target Sockets.
Nov 29 02:57:03 np0005539552 systemd[246227]: Reached target Basic System.
Nov 29 02:57:03 np0005539552 systemd[246227]: Reached target Main User Target.
Nov 29 02:57:03 np0005539552 systemd[246227]: Startup finished in 133ms.
Nov 29 02:57:03 np0005539552 systemd[1]: Started User Manager for UID 42436.
Nov 29 02:57:03 np0005539552 systemd[1]: Started Session 53 of User nova.
Nov 29 02:57:03 np0005539552 systemd[1]: session-53.scope: Deactivated successfully.
Nov 29 02:57:03 np0005539552 systemd-logind[788]: Session 53 logged out. Waiting for processes to exit.
Nov 29 02:57:03 np0005539552 systemd-logind[788]: Removed session 53.
Nov 29 02:57:03 np0005539552 systemd-logind[788]: New session 55 of user nova.
Nov 29 02:57:03 np0005539552 systemd[1]: Started Session 55 of User nova.
Nov 29 02:57:03 np0005539552 systemd[1]: session-55.scope: Deactivated successfully.
Nov 29 02:57:03 np0005539552 systemd-logind[788]: Session 55 logged out. Waiting for processes to exit.
Nov 29 02:57:03 np0005539552 systemd-logind[788]: Removed session 55.
Nov 29 02:57:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:03.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:03.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:04 np0005539552 nova_compute[233724]: 2025-11-29 07:57:04.466 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:05 np0005539552 nova_compute[233724]: 2025-11-29 07:57:05.458 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:05.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:05.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:06 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 02:57:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:07.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 02:57:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:07.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:08 np0005539552 nova_compute[233724]: 2025-11-29 07:57:08.252 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:57:08 np0005539552 nova_compute[233724]: 2025-11-29 07:57:08.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:57:08 np0005539552 nova_compute[233724]: 2025-11-29 07:57:08.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:57:08 np0005539552 nova_compute[233724]: 2025-11-29 07:57:08.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:57:08 np0005539552 nova_compute[233724]: 2025-11-29 07:57:08.937 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 02:57:09 np0005539552 nova_compute[233724]: 2025-11-29 07:57:09.469 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:09.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:09 np0005539552 nova_compute[233724]: 2025-11-29 07:57:09.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:57:09 np0005539552 nova_compute[233724]: 2025-11-29 07:57:09.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:57:09 np0005539552 nova_compute[233724]: 2025-11-29 07:57:09.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:57:09 np0005539552 nova_compute[233724]: 2025-11-29 07:57:09.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:57:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:09.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:10 np0005539552 nova_compute[233724]: 2025-11-29 07:57:10.460 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:10 np0005539552 nova_compute[233724]: 2025-11-29 07:57:10.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:57:10 np0005539552 nova_compute[233724]: 2025-11-29 07:57:10.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:57:10 np0005539552 nova_compute[233724]: 2025-11-29 07:57:10.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:57:11 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:11.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:11 np0005539552 nova_compute[233724]: 2025-11-29 07:57:11.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:57:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:11.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:11 np0005539552 nova_compute[233724]: 2025-11-29 07:57:11.951 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:57:11 np0005539552 nova_compute[233724]: 2025-11-29 07:57:11.951 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:57:11 np0005539552 nova_compute[233724]: 2025-11-29 07:57:11.951 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:57:11 np0005539552 nova_compute[233724]: 2025-11-29 07:57:11.951 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:57:11 np0005539552 nova_compute[233724]: 2025-11-29 07:57:11.952 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:57:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:57:12 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/507993185' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:57:12 np0005539552 nova_compute[233724]: 2025-11-29 07:57:12.366 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:12 np0005539552 nova_compute[233724]: 2025-11-29 07:57:12.542 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:57:12 np0005539552 nova_compute[233724]: 2025-11-29 07:57:12.543 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4853MB free_disk=20.917617797851562GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:57:12 np0005539552 nova_compute[233724]: 2025-11-29 07:57:12.543 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:57:12 np0005539552 nova_compute[233724]: 2025-11-29 07:57:12.543 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:57:12 np0005539552 nova_compute[233724]: 2025-11-29 07:57:12.590 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Migration for instance 9eb89c30-3f33-4a7c-ae19-8312a2522b82 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Nov 29 02:57:12 np0005539552 nova_compute[233724]: 2025-11-29 07:57:12.614 233728 INFO nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Updating resource usage from migration c2903281-4709-4a60-aee1-a45ec2b3f2d4
Nov 29 02:57:12 np0005539552 nova_compute[233724]: 2025-11-29 07:57:12.614 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Starting to track incoming migration c2903281-4709-4a60-aee1-a45ec2b3f2d4 with flavor b4d0f3a6-e3dc-4216-aee8-148280e428cc _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Nov 29 02:57:12 np0005539552 nova_compute[233724]: 2025-11-29 07:57:12.664 233728 WARNING nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 9eb89c30-3f33-4a7c-ae19-8312a2522b82 has been moved to another host compute-1.ctlplane.example.com(compute-1.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}.
Nov 29 02:57:12 np0005539552 nova_compute[233724]: 2025-11-29 07:57:12.664 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 02:57:12 np0005539552 nova_compute[233724]: 2025-11-29 07:57:12.665 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 02:57:12 np0005539552 nova_compute[233724]: 2025-11-29 07:57:12.715 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:57:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:57:13 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3634152292' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:57:13 np0005539552 nova_compute[233724]: 2025-11-29 07:57:13.138 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:57:13 np0005539552 nova_compute[233724]: 2025-11-29 07:57:13.147 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:57:13 np0005539552 nova_compute[233724]: 2025-11-29 07:57:13.173 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 02:57:13 np0005539552 nova_compute[233724]: 2025-11-29 07:57:13.174 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 02:57:13 np0005539552 nova_compute[233724]: 2025-11-29 07:57:13.174 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:57:13 np0005539552 systemd[1]: Stopping User Manager for UID 42436...
Nov 29 02:57:13 np0005539552 systemd[246227]: Activating special unit Exit the Session...
Nov 29 02:57:13 np0005539552 systemd[246227]: Stopped target Main User Target.
Nov 29 02:57:13 np0005539552 systemd[246227]: Stopped target Basic System.
Nov 29 02:57:13 np0005539552 systemd[246227]: Stopped target Paths.
Nov 29 02:57:13 np0005539552 systemd[246227]: Stopped target Sockets.
Nov 29 02:57:13 np0005539552 systemd[246227]: Stopped target Timers.
Nov 29 02:57:13 np0005539552 systemd[246227]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 29 02:57:13 np0005539552 systemd[246227]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 29 02:57:13 np0005539552 systemd[246227]: Closed D-Bus User Message Bus Socket.
Nov 29 02:57:13 np0005539552 systemd[246227]: Stopped Create User's Volatile Files and Directories.
Nov 29 02:57:13 np0005539552 systemd[246227]: Removed slice User Application Slice.
Nov 29 02:57:13 np0005539552 systemd[246227]: Reached target Shutdown.
Nov 29 02:57:13 np0005539552 systemd[246227]: Finished Exit the Session.
Nov 29 02:57:13 np0005539552 systemd[246227]: Reached target Exit the Session.
Nov 29 02:57:13 np0005539552 systemd[1]: user@42436.service: Deactivated successfully.
Nov 29 02:57:13 np0005539552 systemd[1]: Stopped User Manager for UID 42436.
Nov 29 02:57:13 np0005539552 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Nov 29 02:57:13 np0005539552 systemd[1]: run-user-42436.mount: Deactivated successfully.
Nov 29 02:57:13 np0005539552 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Nov 29 02:57:13 np0005539552 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Nov 29 02:57:13 np0005539552 systemd[1]: Removed slice User Slice of UID 42436.
Nov 29 02:57:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:13.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:57:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:13.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:57:14 np0005539552 nova_compute[233724]: 2025-11-29 07:57:14.471 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:57:15 np0005539552 nova_compute[233724]: 2025-11-29 07:57:15.501 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:57:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:15.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:15.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:16 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:16 np0005539552 ovn_controller[133798]: 2025-11-29T07:57:16Z|00108|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Nov 29 02:57:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:17.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:17 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:57:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:17.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:18 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:57:18 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:57:19 np0005539552 nova_compute[233724]: 2025-11-29 07:57:19.473 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:57:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:19.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:19.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:19 np0005539552 nova_compute[233724]: 2025-11-29 07:57:19.951 233728 DEBUG oslo_concurrency.lockutils [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Acquiring lock "refresh_cache-9eb89c30-3f33-4a7c-ae19-8312a2522b82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:57:19 np0005539552 nova_compute[233724]: 2025-11-29 07:57:19.951 233728 DEBUG oslo_concurrency.lockutils [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Acquired lock "refresh_cache-9eb89c30-3f33-4a7c-ae19-8312a2522b82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:57:19 np0005539552 nova_compute[233724]: 2025-11-29 07:57:19.952 233728 DEBUG nova.network.neutron [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 02:57:20 np0005539552 nova_compute[233724]: 2025-11-29 07:57:20.088 233728 DEBUG nova.network.neutron [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 02:57:20 np0005539552 nova_compute[233724]: 2025-11-29 07:57:20.439 233728 DEBUG nova.network.neutron [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:57:20 np0005539552 nova_compute[233724]: 2025-11-29 07:57:20.459 233728 DEBUG oslo_concurrency.lockutils [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Releasing lock "refresh_cache-9eb89c30-3f33-4a7c-ae19-8312a2522b82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 02:57:20 np0005539552 nova_compute[233724]: 2025-11-29 07:57:20.507 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:57:20 np0005539552 nova_compute[233724]: 2025-11-29 07:57:20.557 233728 DEBUG nova.virt.libvirt.driver [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Nov 29 02:57:20 np0005539552 nova_compute[233724]: 2025-11-29 07:57:20.559 233728 DEBUG nova.virt.libvirt.driver [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Nov 29 02:57:20 np0005539552 nova_compute[233724]: 2025-11-29 07:57:20.559 233728 INFO nova.virt.libvirt.driver [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Creating image(s)
Nov 29 02:57:20 np0005539552 nova_compute[233724]: 2025-11-29 07:57:20.595 233728 DEBUG nova.storage.rbd_utils [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] creating snapshot(nova-resize) on rbd image(9eb89c30-3f33-4a7c-ae19-8312a2522b82_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 02:57:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:57:20.605 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:57:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:57:20.605 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:57:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:57:20.605 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:57:21 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:21.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:21.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e165 e165: 3 total, 3 up, 3 in
Nov 29 02:57:23 np0005539552 nova_compute[233724]: 2025-11-29 07:57:23.001 233728 DEBUG nova.objects.instance [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 9eb89c30-3f33-4a7c-ae19-8312a2522b82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:57:23 np0005539552 nova_compute[233724]: 2025-11-29 07:57:23.097 233728 DEBUG nova.virt.libvirt.driver [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 29 02:57:23 np0005539552 nova_compute[233724]: 2025-11-29 07:57:23.098 233728 DEBUG nova.virt.libvirt.driver [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Ensure instance console log exists: /var/lib/nova/instances/9eb89c30-3f33-4a7c-ae19-8312a2522b82/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 02:57:23 np0005539552 nova_compute[233724]: 2025-11-29 07:57:23.098 233728 DEBUG oslo_concurrency.lockutils [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:57:23 np0005539552 nova_compute[233724]: 2025-11-29 07:57:23.099 233728 DEBUG oslo_concurrency.lockutils [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:57:23 np0005539552 nova_compute[233724]: 2025-11-29 07:57:23.099 233728 DEBUG oslo_concurrency.lockutils [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:57:23 np0005539552 nova_compute[233724]: 2025-11-29 07:57:23.100 233728 DEBUG nova.virt.libvirt.driver [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 02:57:23 np0005539552 nova_compute[233724]: 2025-11-29 07:57:23.104 233728 WARNING nova.virt.libvirt.driver [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 02:57:23 np0005539552 nova_compute[233724]: 2025-11-29 07:57:23.110 233728 DEBUG nova.virt.libvirt.host [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 02:57:23 np0005539552 nova_compute[233724]: 2025-11-29 07:57:23.111 233728 DEBUG nova.virt.libvirt.host [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 02:57:23 np0005539552 nova_compute[233724]: 2025-11-29 07:57:23.117 233728 DEBUG nova.virt.libvirt.host [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 02:57:23 np0005539552 nova_compute[233724]: 2025-11-29 07:57:23.118 233728 DEBUG nova.virt.libvirt.host [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 02:57:23 np0005539552 nova_compute[233724]: 2025-11-29 07:57:23.119 233728 DEBUG nova.virt.libvirt.driver [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 02:57:23 np0005539552 nova_compute[233724]: 2025-11-29 07:57:23.119 233728 DEBUG nova.virt.hardware [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 02:57:23 np0005539552 nova_compute[233724]: 2025-11-29 07:57:23.119 233728 DEBUG nova.virt.hardware [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 02:57:23 np0005539552 nova_compute[233724]: 2025-11-29 07:57:23.120 233728 DEBUG nova.virt.hardware [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 02:57:23 np0005539552 nova_compute[233724]: 2025-11-29 07:57:23.120 233728 DEBUG nova.virt.hardware [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 02:57:23 np0005539552 nova_compute[233724]: 2025-11-29 07:57:23.120 233728 DEBUG nova.virt.hardware [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 02:57:23 np0005539552 nova_compute[233724]: 2025-11-29 07:57:23.120 233728 DEBUG nova.virt.hardware [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 02:57:23 np0005539552 nova_compute[233724]: 2025-11-29 07:57:23.120 233728 DEBUG nova.virt.hardware [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 02:57:23 np0005539552 nova_compute[233724]: 2025-11-29 07:57:23.121 233728 DEBUG nova.virt.hardware [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 02:57:23 np0005539552 nova_compute[233724]: 2025-11-29 07:57:23.121 233728 DEBUG nova.virt.hardware [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 02:57:23 np0005539552 nova_compute[233724]: 2025-11-29 07:57:23.121 233728 DEBUG nova.virt.hardware [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 02:57:23 np0005539552 nova_compute[233724]: 2025-11-29 07:57:23.121 233728 DEBUG nova.virt.hardware [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 02:57:23 np0005539552 nova_compute[233724]: 2025-11-29 07:57:23.121 233728 DEBUG nova.objects.instance [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 9eb89c30-3f33-4a7c-ae19-8312a2522b82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:57:23 np0005539552 nova_compute[233724]: 2025-11-29 07:57:23.139 233728 DEBUG oslo_concurrency.processutils [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:57:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:23.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:57:23 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3772845287' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:57:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:23.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:24 np0005539552 nova_compute[233724]: 2025-11-29 07:57:24.019 233728 DEBUG oslo_concurrency.processutils [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.879s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:57:24 np0005539552 nova_compute[233724]: 2025-11-29 07:57:24.053 233728 DEBUG oslo_concurrency.processutils [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:57:24 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:57:24 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2795908412' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:57:24 np0005539552 nova_compute[233724]: 2025-11-29 07:57:24.475 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:24 np0005539552 nova_compute[233724]: 2025-11-29 07:57:24.697 233728 DEBUG oslo_concurrency.processutils [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.644s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:24 np0005539552 nova_compute[233724]: 2025-11-29 07:57:24.701 233728 DEBUG nova.virt.libvirt.driver [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:57:24 np0005539552 nova_compute[233724]:  <uuid>9eb89c30-3f33-4a7c-ae19-8312a2522b82</uuid>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:  <name>instance-0000001a</name>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:57:24 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:      <nova:name>tempest-MigrationsAdminTest-server-71474262</nova:name>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 07:57:23</nova:creationTime>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 02:57:24 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:        <nova:user uuid="51ae07f600c545c0b4c7fae00657ea40">tempest-MigrationsAdminTest-1930136363-project-member</nova:user>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:        <nova:project uuid="6717732f9fa242b181f58881b03d246f">tempest-MigrationsAdminTest-1930136363</nova:project>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:      <nova:ports/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    <system>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:      <entry name="serial">9eb89c30-3f33-4a7c-ae19-8312a2522b82</entry>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:      <entry name="uuid">9eb89c30-3f33-4a7c-ae19-8312a2522b82</entry>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    </system>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:  <os>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:  </os>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:  <features>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:  </features>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:  </clock>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:  <devices>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 02:57:24 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/9eb89c30-3f33-4a7c-ae19-8312a2522b82_disk">
Nov 29 02:57:24 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:      </source>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 02:57:24 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:      </auth>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    </disk>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 02:57:24 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/9eb89c30-3f33-4a7c-ae19-8312a2522b82_disk.config">
Nov 29 02:57:24 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:      </source>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 02:57:24 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:      </auth>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    </disk>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 02:57:24 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/9eb89c30-3f33-4a7c-ae19-8312a2522b82/console.log" append="off"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    </serial>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    <video>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    </video>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 02:57:24 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    </rng>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 02:57:24 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 02:57:24 np0005539552 nova_compute[233724]:  </devices>
Nov 29 02:57:24 np0005539552 nova_compute[233724]: </domain>
Nov 29 02:57:24 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:57:24 np0005539552 nova_compute[233724]: 2025-11-29 07:57:24.759 233728 DEBUG nova.virt.libvirt.driver [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:57:24 np0005539552 nova_compute[233724]: 2025-11-29 07:57:24.760 233728 DEBUG nova.virt.libvirt.driver [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:57:24 np0005539552 nova_compute[233724]: 2025-11-29 07:57:24.760 233728 INFO nova.virt.libvirt.driver [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Using config drive#033[00m
Nov 29 02:57:24 np0005539552 systemd-machined[196379]: New machine qemu-8-instance-0000001a.
Nov 29 02:57:24 np0005539552 systemd[1]: Started Virtual Machine qemu-8-instance-0000001a.
Nov 29 02:57:25 np0005539552 nova_compute[233724]: 2025-11-29 07:57:25.509 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:25 np0005539552 nova_compute[233724]: 2025-11-29 07:57:25.618 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403045.617456, 9eb89c30-3f33-4a7c-ae19-8312a2522b82 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:57:25 np0005539552 nova_compute[233724]: 2025-11-29 07:57:25.618 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:57:25 np0005539552 nova_compute[233724]: 2025-11-29 07:57:25.620 233728 DEBUG nova.compute.manager [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:57:25 np0005539552 nova_compute[233724]: 2025-11-29 07:57:25.625 233728 INFO nova.virt.libvirt.driver [-] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Instance running successfully.#033[00m
Nov 29 02:57:25 np0005539552 virtqemud[233098]: argument unsupported: QEMU guest agent is not configured
Nov 29 02:57:25 np0005539552 nova_compute[233724]: 2025-11-29 07:57:25.627 233728 DEBUG nova.virt.libvirt.guest [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Nov 29 02:57:25 np0005539552 nova_compute[233724]: 2025-11-29 07:57:25.628 233728 DEBUG nova.virt.libvirt.driver [None req-12ea4bfa-9acd-4abe-ad3a-7a842aa3f1d2 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Nov 29 02:57:25 np0005539552 nova_compute[233724]: 2025-11-29 07:57:25.647 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:57:25 np0005539552 nova_compute[233724]: 2025-11-29 07:57:25.650 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:57:25 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:57:25 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:57:25 np0005539552 nova_compute[233724]: 2025-11-29 07:57:25.692 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Nov 29 02:57:25 np0005539552 nova_compute[233724]: 2025-11-29 07:57:25.692 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403045.6214385, 9eb89c30-3f33-4a7c-ae19-8312a2522b82 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:57:25 np0005539552 nova_compute[233724]: 2025-11-29 07:57:25.692 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] VM Started (Lifecycle Event)#033[00m
Nov 29 02:57:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:25.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:25 np0005539552 nova_compute[233724]: 2025-11-29 07:57:25.719 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:57:25 np0005539552 nova_compute[233724]: 2025-11-29 07:57:25.723 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:57:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:25.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:26 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:27.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:27.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e166 e166: 3 total, 3 up, 3 in
Nov 29 02:57:29 np0005539552 nova_compute[233724]: 2025-11-29 07:57:29.477 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:29.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:29.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:30 np0005539552 nova_compute[233724]: 2025-11-29 07:57:30.511 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:31 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:31 np0005539552 nova_compute[233724]: 2025-11-29 07:57:31.702 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:57:31.701 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:57:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:57:31.702 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:57:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:31.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:31.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:33 np0005539552 podman[246803]: 2025-11-29 07:57:33.009057903 +0000 UTC m=+0.085806021 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, 
tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 02:57:33 np0005539552 podman[246802]: 2025-11-29 07:57:33.021711676 +0000 UTC m=+0.098675010 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:57:33 np0005539552 podman[246804]: 2025-11-29 07:57:33.02186358 +0000 UTC m=+0.097026185 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 02:57:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:33.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:33.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:34 np0005539552 nova_compute[233724]: 2025-11-29 07:57:34.480 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:35 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e167 e167: 3 total, 3 up, 3 in
Nov 29 02:57:35 np0005539552 nova_compute[233724]: 2025-11-29 07:57:35.549 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:35 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:57:35.704 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:57:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:35.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:35.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:36 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:37.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:37.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:37 np0005539552 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 29 02:57:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 02:57:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3685150126' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 02:57:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 02:57:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3685150126' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 02:57:39 np0005539552 nova_compute[233724]: 2025-11-29 07:57:39.482 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:39.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:39.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:40 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e168 e168: 3 total, 3 up, 3 in
Nov 29 02:57:40 np0005539552 nova_compute[233724]: 2025-11-29 07:57:40.551 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:41 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:41.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:41 np0005539552 nova_compute[233724]: 2025-11-29 07:57:41.790 233728 DEBUG oslo_concurrency.lockutils [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Acquiring lock "8b9831df-d3ee-436b-af4e-ad429c1c0227" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:57:41 np0005539552 nova_compute[233724]: 2025-11-29 07:57:41.790 233728 DEBUG oslo_concurrency.lockutils [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Lock "8b9831df-d3ee-436b-af4e-ad429c1c0227" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:57:41 np0005539552 nova_compute[233724]: 2025-11-29 07:57:41.823 233728 DEBUG nova.compute.manager [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:57:41 np0005539552 nova_compute[233724]: 2025-11-29 07:57:41.960 233728 DEBUG oslo_concurrency.lockutils [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:57:41 np0005539552 nova_compute[233724]: 2025-11-29 07:57:41.961 233728 DEBUG oslo_concurrency.lockutils [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:57:41 np0005539552 nova_compute[233724]: 2025-11-29 07:57:41.969 233728 DEBUG nova.virt.hardware [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:57:41 np0005539552 nova_compute[233724]: 2025-11-29 07:57:41.969 233728 INFO nova.compute.claims [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 02:57:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:41.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:42 np0005539552 nova_compute[233724]: 2025-11-29 07:57:42.095 233728 DEBUG oslo_concurrency.processutils [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:57:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:57:42 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2424453309' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:57:42 np0005539552 nova_compute[233724]: 2025-11-29 07:57:42.551 233728 DEBUG oslo_concurrency.processutils [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:42 np0005539552 nova_compute[233724]: 2025-11-29 07:57:42.557 233728 DEBUG nova.compute.provider_tree [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:57:42 np0005539552 nova_compute[233724]: 2025-11-29 07:57:42.607 233728 DEBUG nova.scheduler.client.report [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:57:42 np0005539552 nova_compute[233724]: 2025-11-29 07:57:42.684 233728 DEBUG oslo_concurrency.lockutils [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:57:42 np0005539552 nova_compute[233724]: 2025-11-29 07:57:42.685 233728 DEBUG nova.compute.manager [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:57:42 np0005539552 nova_compute[233724]: 2025-11-29 07:57:42.794 233728 DEBUG nova.compute.manager [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:57:42 np0005539552 nova_compute[233724]: 2025-11-29 07:57:42.795 233728 DEBUG nova.network.neutron [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:57:42 np0005539552 nova_compute[233724]: 2025-11-29 07:57:42.800 233728 DEBUG nova.compute.manager [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Nov 29 02:57:42 np0005539552 nova_compute[233724]: 2025-11-29 07:57:42.822 233728 INFO nova.virt.libvirt.driver [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Ignoring supplied device name: /dev/sda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:57:42 np0005539552 nova_compute[233724]: 2025-11-29 07:57:42.871 233728 DEBUG nova.compute.manager [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:57:42 np0005539552 nova_compute[233724]: 2025-11-29 07:57:42.893 233728 DEBUG oslo_concurrency.lockutils [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:57:42 np0005539552 nova_compute[233724]: 2025-11-29 07:57:42.893 233728 DEBUG oslo_concurrency.lockutils [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:57:42 np0005539552 nova_compute[233724]: 2025-11-29 07:57:42.922 233728 DEBUG nova.objects.instance [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lazy-loading 'pci_requests' on Instance uuid fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:57:42 np0005539552 nova_compute[233724]: 2025-11-29 07:57:42.944 233728 DEBUG nova.virt.hardware [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:57:42 np0005539552 nova_compute[233724]: 2025-11-29 07:57:42.945 233728 INFO nova.compute.claims [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 02:57:42 np0005539552 nova_compute[233724]: 2025-11-29 07:57:42.945 233728 DEBUG nova.objects.instance [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lazy-loading 'resources' on Instance uuid fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:57:42 np0005539552 nova_compute[233724]: 2025-11-29 07:57:42.965 233728 DEBUG nova.compute.manager [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:57:42 np0005539552 nova_compute[233724]: 2025-11-29 07:57:42.966 233728 DEBUG nova.virt.libvirt.driver [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:57:42 np0005539552 nova_compute[233724]: 2025-11-29 07:57:42.967 233728 INFO nova.virt.libvirt.driver [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Creating image(s)#033[00m
Nov 29 02:57:42 np0005539552 nova_compute[233724]: 2025-11-29 07:57:42.993 233728 DEBUG nova.storage.rbd_utils [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] rbd image 8b9831df-d3ee-436b-af4e-ad429c1c0227_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:57:43 np0005539552 nova_compute[233724]: 2025-11-29 07:57:43.020 233728 DEBUG nova.storage.rbd_utils [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] rbd image 8b9831df-d3ee-436b-af4e-ad429c1c0227_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:57:43 np0005539552 nova_compute[233724]: 2025-11-29 07:57:43.046 233728 DEBUG nova.storage.rbd_utils [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] rbd image 8b9831df-d3ee-436b-af4e-ad429c1c0227_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:57:43 np0005539552 nova_compute[233724]: 2025-11-29 07:57:43.049 233728 DEBUG oslo_concurrency.lockutils [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Acquiring lock "d8f87e6814a39f74799532642e7be3e998da5505" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:57:43 np0005539552 nova_compute[233724]: 2025-11-29 07:57:43.050 233728 DEBUG oslo_concurrency.lockutils [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Lock "d8f87e6814a39f74799532642e7be3e998da5505" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:57:43 np0005539552 nova_compute[233724]: 2025-11-29 07:57:43.056 233728 DEBUG nova.objects.instance [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lazy-loading 'pci_devices' on Instance uuid fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:57:43 np0005539552 nova_compute[233724]: 2025-11-29 07:57:43.095 233728 DEBUG nova.policy [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '07c7f32f7d494f29b99afe2b074d0f68', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4004f4fd97dd4d0e8e83dd715ffb8e9c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 02:57:43 np0005539552 nova_compute[233724]: 2025-11-29 07:57:43.121 233728 INFO nova.compute.resource_tracker [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Updating resource usage from migration 9ecb55cd-42bc-44df-9e05-e6d283102cb3#033[00m
Nov 29 02:57:43 np0005539552 nova_compute[233724]: 2025-11-29 07:57:43.122 233728 DEBUG nova.compute.resource_tracker [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Starting to track incoming migration 9ecb55cd-42bc-44df-9e05-e6d283102cb3 with flavor 709b029f-0458-4e40-a6ee-e1e02b48c06c _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Nov 29 02:57:43 np0005539552 nova_compute[233724]: 2025-11-29 07:57:43.249 233728 DEBUG oslo_concurrency.processutils [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:57:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:57:43 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3795090893' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:57:43 np0005539552 nova_compute[233724]: 2025-11-29 07:57:43.674 233728 DEBUG oslo_concurrency.processutils [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:43 np0005539552 nova_compute[233724]: 2025-11-29 07:57:43.680 233728 DEBUG nova.compute.provider_tree [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:57:43 np0005539552 nova_compute[233724]: 2025-11-29 07:57:43.704 233728 DEBUG nova.scheduler.client.report [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:57:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:43.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:43 np0005539552 nova_compute[233724]: 2025-11-29 07:57:43.723 233728 DEBUG oslo_concurrency.lockutils [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.830s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:57:43 np0005539552 nova_compute[233724]: 2025-11-29 07:57:43.723 233728 INFO nova.compute.manager [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Migrating#033[00m
Nov 29 02:57:43 np0005539552 nova_compute[233724]: 2025-11-29 07:57:43.798 233728 DEBUG nova.virt.libvirt.imagebackend [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Image locations are: [{'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/ca49c32b-cb06-40a8-be71-35eeb05e9ca2/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/ca49c32b-cb06-40a8-be71-35eeb05e9ca2/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 29 02:57:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:57:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:43.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:57:44 np0005539552 nova_compute[233724]: 2025-11-29 07:57:44.369 233728 DEBUG nova.network.neutron [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Successfully created port: 2e934383-f346-4336-b1b8-e866fb05bef3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 02:57:44 np0005539552 nova_compute[233724]: 2025-11-29 07:57:44.484 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:44 np0005539552 systemd[1]: Created slice User Slice of UID 42436.
Nov 29 02:57:44 np0005539552 systemd[1]: Starting User Runtime Directory /run/user/42436...
Nov 29 02:57:44 np0005539552 systemd-logind[788]: New session 56 of user nova.
Nov 29 02:57:44 np0005539552 systemd[1]: Finished User Runtime Directory /run/user/42436.
Nov 29 02:57:44 np0005539552 systemd[1]: Starting User Manager for UID 42436...
Nov 29 02:57:44 np0005539552 systemd[246973]: Queued start job for default target Main User Target.
Nov 29 02:57:44 np0005539552 systemd[246973]: Created slice User Application Slice.
Nov 29 02:57:44 np0005539552 systemd[246973]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 02:57:44 np0005539552 systemd[246973]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 02:57:44 np0005539552 systemd[246973]: Reached target Paths.
Nov 29 02:57:44 np0005539552 systemd[246973]: Reached target Timers.
Nov 29 02:57:44 np0005539552 systemd[246973]: Starting D-Bus User Message Bus Socket...
Nov 29 02:57:44 np0005539552 systemd[246973]: Starting Create User's Volatile Files and Directories...
Nov 29 02:57:44 np0005539552 systemd[246973]: Listening on D-Bus User Message Bus Socket.
Nov 29 02:57:44 np0005539552 systemd[246973]: Reached target Sockets.
Nov 29 02:57:44 np0005539552 systemd[246973]: Finished Create User's Volatile Files and Directories.
Nov 29 02:57:44 np0005539552 systemd[246973]: Reached target Basic System.
Nov 29 02:57:44 np0005539552 systemd[246973]: Reached target Main User Target.
Nov 29 02:57:44 np0005539552 systemd[246973]: Startup finished in 145ms.
Nov 29 02:57:44 np0005539552 systemd[1]: Started User Manager for UID 42436.
Nov 29 02:57:44 np0005539552 systemd[1]: Started Session 56 of User nova.
Nov 29 02:57:44 np0005539552 systemd[1]: session-56.scope: Deactivated successfully.
Nov 29 02:57:44 np0005539552 systemd-logind[788]: Session 56 logged out. Waiting for processes to exit.
Nov 29 02:57:45 np0005539552 systemd-logind[788]: Removed session 56.
Nov 29 02:57:45 np0005539552 systemd-logind[788]: New session 58 of user nova.
Nov 29 02:57:45 np0005539552 systemd[1]: Started Session 58 of User nova.
Nov 29 02:57:45 np0005539552 systemd[1]: session-58.scope: Deactivated successfully.
Nov 29 02:57:45 np0005539552 systemd-logind[788]: Session 58 logged out. Waiting for processes to exit.
Nov 29 02:57:45 np0005539552 systemd-logind[788]: Removed session 58.
Nov 29 02:57:45 np0005539552 nova_compute[233724]: 2025-11-29 07:57:45.373 233728 DEBUG nova.network.neutron [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Successfully updated port: 2e934383-f346-4336-b1b8-e866fb05bef3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 02:57:45 np0005539552 nova_compute[233724]: 2025-11-29 07:57:45.402 233728 DEBUG oslo_concurrency.lockutils [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Acquiring lock "refresh_cache-8b9831df-d3ee-436b-af4e-ad429c1c0227" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:57:45 np0005539552 nova_compute[233724]: 2025-11-29 07:57:45.402 233728 DEBUG oslo_concurrency.lockutils [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Acquired lock "refresh_cache-8b9831df-d3ee-436b-af4e-ad429c1c0227" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:57:45 np0005539552 nova_compute[233724]: 2025-11-29 07:57:45.402 233728 DEBUG nova.network.neutron [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:57:45 np0005539552 nova_compute[233724]: 2025-11-29 07:57:45.544 233728 DEBUG oslo_concurrency.processutils [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d8f87e6814a39f74799532642e7be3e998da5505.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:57:45 np0005539552 nova_compute[233724]: 2025-11-29 07:57:45.571 233728 DEBUG nova.compute.manager [req-35d92b06-4dfc-4402-b6dc-8b6ae745c10d req-690f042e-ca59-48a3-9a91-84c78e3f5a99 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Received event network-changed-2e934383-f346-4336-b1b8-e866fb05bef3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:57:45 np0005539552 nova_compute[233724]: 2025-11-29 07:57:45.571 233728 DEBUG nova.compute.manager [req-35d92b06-4dfc-4402-b6dc-8b6ae745c10d req-690f042e-ca59-48a3-9a91-84c78e3f5a99 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Refreshing instance network info cache due to event network-changed-2e934383-f346-4336-b1b8-e866fb05bef3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:57:45 np0005539552 nova_compute[233724]: 2025-11-29 07:57:45.572 233728 DEBUG oslo_concurrency.lockutils [req-35d92b06-4dfc-4402-b6dc-8b6ae745c10d req-690f042e-ca59-48a3-9a91-84c78e3f5a99 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-8b9831df-d3ee-436b-af4e-ad429c1c0227" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:57:45 np0005539552 nova_compute[233724]: 2025-11-29 07:57:45.598 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:45 np0005539552 nova_compute[233724]: 2025-11-29 07:57:45.604 233728 DEBUG oslo_concurrency.processutils [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d8f87e6814a39f74799532642e7be3e998da5505.part --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:45 np0005539552 nova_compute[233724]: 2025-11-29 07:57:45.605 233728 DEBUG nova.virt.images [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] ca49c32b-cb06-40a8-be71-35eeb05e9ca2 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Nov 29 02:57:45 np0005539552 nova_compute[233724]: 2025-11-29 07:57:45.605 233728 DEBUG nova.privsep.utils [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Nov 29 02:57:45 np0005539552 nova_compute[233724]: 2025-11-29 07:57:45.606 233728 DEBUG oslo_concurrency.processutils [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/d8f87e6814a39f74799532642e7be3e998da5505.part /var/lib/nova/instances/_base/d8f87e6814a39f74799532642e7be3e998da5505.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:57:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:57:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:45.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:57:45 np0005539552 nova_compute[233724]: 2025-11-29 07:57:45.770 233728 DEBUG oslo_concurrency.processutils [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/d8f87e6814a39f74799532642e7be3e998da5505.part /var/lib/nova/instances/_base/d8f87e6814a39f74799532642e7be3e998da5505.converted" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:45 np0005539552 nova_compute[233724]: 2025-11-29 07:57:45.776 233728 DEBUG oslo_concurrency.processutils [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d8f87e6814a39f74799532642e7be3e998da5505.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:57:45 np0005539552 nova_compute[233724]: 2025-11-29 07:57:45.839 233728 DEBUG oslo_concurrency.processutils [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d8f87e6814a39f74799532642e7be3e998da5505.converted --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:45 np0005539552 nova_compute[233724]: 2025-11-29 07:57:45.841 233728 DEBUG oslo_concurrency.lockutils [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Lock "d8f87e6814a39f74799532642e7be3e998da5505" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:57:45 np0005539552 nova_compute[233724]: 2025-11-29 07:57:45.875 233728 DEBUG nova.storage.rbd_utils [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] rbd image 8b9831df-d3ee-436b-af4e-ad429c1c0227_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:57:45 np0005539552 nova_compute[233724]: 2025-11-29 07:57:45.880 233728 DEBUG oslo_concurrency.processutils [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d8f87e6814a39f74799532642e7be3e998da5505 8b9831df-d3ee-436b-af4e-ad429c1c0227_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:57:45 np0005539552 nova_compute[233724]: 2025-11-29 07:57:45.915 233728 DEBUG nova.network.neutron [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:57:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:45.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:46 np0005539552 nova_compute[233724]: 2025-11-29 07:57:46.204 233728 DEBUG oslo_concurrency.processutils [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d8f87e6814a39f74799532642e7be3e998da5505 8b9831df-d3ee-436b-af4e-ad429c1c0227_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.325s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:46 np0005539552 nova_compute[233724]: 2025-11-29 07:57:46.288 233728 DEBUG nova.storage.rbd_utils [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] resizing rbd image 8b9831df-d3ee-436b-af4e-ad429c1c0227_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:57:46 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:46 np0005539552 nova_compute[233724]: 2025-11-29 07:57:46.614 233728 DEBUG nova.objects.instance [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Lazy-loading 'migration_context' on Instance uuid 8b9831df-d3ee-436b-af4e-ad429c1c0227 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:57:46 np0005539552 nova_compute[233724]: 2025-11-29 07:57:46.639 233728 DEBUG nova.virt.libvirt.driver [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:57:46 np0005539552 nova_compute[233724]: 2025-11-29 07:57:46.640 233728 DEBUG nova.virt.libvirt.driver [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Ensure instance console log exists: /var/lib/nova/instances/8b9831df-d3ee-436b-af4e-ad429c1c0227/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:57:46 np0005539552 nova_compute[233724]: 2025-11-29 07:57:46.641 233728 DEBUG oslo_concurrency.lockutils [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:57:46 np0005539552 nova_compute[233724]: 2025-11-29 07:57:46.641 233728 DEBUG oslo_concurrency.lockutils [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:57:46 np0005539552 nova_compute[233724]: 2025-11-29 07:57:46.641 233728 DEBUG oslo_concurrency.lockutils [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:57:47 np0005539552 nova_compute[233724]: 2025-11-29 07:57:47.356 233728 DEBUG nova.network.neutron [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Updating instance_info_cache with network_info: [{"id": "2e934383-f346-4336-b1b8-e866fb05bef3", "address": "fa:16:3e:e8:6d:dc", "network": {"id": "75c19d22-1ac2-46dc-b079-76d980a382ed", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-65381912-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4004f4fd97dd4d0e8e83dd715ffb8e9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e934383-f3", "ovs_interfaceid": "2e934383-f346-4336-b1b8-e866fb05bef3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:57:47 np0005539552 nova_compute[233724]: 2025-11-29 07:57:47.378 233728 DEBUG oslo_concurrency.lockutils [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Releasing lock "refresh_cache-8b9831df-d3ee-436b-af4e-ad429c1c0227" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:57:47 np0005539552 nova_compute[233724]: 2025-11-29 07:57:47.378 233728 DEBUG nova.compute.manager [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Instance network_info: |[{"id": "2e934383-f346-4336-b1b8-e866fb05bef3", "address": "fa:16:3e:e8:6d:dc", "network": {"id": "75c19d22-1ac2-46dc-b079-76d980a382ed", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-65381912-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4004f4fd97dd4d0e8e83dd715ffb8e9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e934383-f3", "ovs_interfaceid": "2e934383-f346-4336-b1b8-e866fb05bef3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:57:47 np0005539552 nova_compute[233724]: 2025-11-29 07:57:47.379 233728 DEBUG oslo_concurrency.lockutils [req-35d92b06-4dfc-4402-b6dc-8b6ae745c10d req-690f042e-ca59-48a3-9a91-84c78e3f5a99 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-8b9831df-d3ee-436b-af4e-ad429c1c0227" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:57:47 np0005539552 nova_compute[233724]: 2025-11-29 07:57:47.379 233728 DEBUG nova.network.neutron [req-35d92b06-4dfc-4402-b6dc-8b6ae745c10d req-690f042e-ca59-48a3-9a91-84c78e3f5a99 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Refreshing network info cache for port 2e934383-f346-4336-b1b8-e866fb05bef3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:57:47 np0005539552 nova_compute[233724]: 2025-11-29 07:57:47.382 233728 DEBUG nova.virt.libvirt.driver [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Start _get_guest_xml network_info=[{"id": "2e934383-f346-4336-b1b8-e866fb05bef3", "address": "fa:16:3e:e8:6d:dc", "network": {"id": "75c19d22-1ac2-46dc-b079-76d980a382ed", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-65381912-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4004f4fd97dd4d0e8e83dd715ffb8e9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e934383-f3", "ovs_interfaceid": "2e934383-f346-4336-b1b8-e866fb05bef3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'scsi', 'cdrom_bus': 'scsi', 'mapping': {'root': {'bus': 'scsi', 'dev': 'sda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'scsi', 'dev': 'sda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'scsi', 'dev': 'sdb', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:57:33Z,direct_url=<?>,disk_format='qcow2',id=ca49c32b-cb06-40a8-be71-35eeb05e9ca2,min_disk=0,min_ram=0,name='',owner='fefd450a22f6433fbea03a8da1e1492d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:57:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/sda', 'image': [{'device_name': '/dev/sda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'scsi', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': 'ca49c32b-cb06-40a8-be71-35eeb05e9ca2'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:57:47 np0005539552 nova_compute[233724]: 2025-11-29 07:57:47.387 233728 WARNING nova.virt.libvirt.driver [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:57:47 np0005539552 nova_compute[233724]: 2025-11-29 07:57:47.391 233728 DEBUG nova.virt.libvirt.host [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:57:47 np0005539552 nova_compute[233724]: 2025-11-29 07:57:47.392 233728 DEBUG nova.virt.libvirt.host [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:57:47 np0005539552 nova_compute[233724]: 2025-11-29 07:57:47.399 233728 DEBUG nova.virt.libvirt.host [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:57:47 np0005539552 nova_compute[233724]: 2025-11-29 07:57:47.400 233728 DEBUG nova.virt.libvirt.host [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:57:47 np0005539552 nova_compute[233724]: 2025-11-29 07:57:47.401 233728 DEBUG nova.virt.libvirt.driver [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:57:47 np0005539552 nova_compute[233724]: 2025-11-29 07:57:47.401 233728 DEBUG nova.virt.hardware [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:57:33Z,direct_url=<?>,disk_format='qcow2',id=ca49c32b-cb06-40a8-be71-35eeb05e9ca2,min_disk=0,min_ram=0,name='',owner='fefd450a22f6433fbea03a8da1e1492d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:57:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:57:47 np0005539552 nova_compute[233724]: 2025-11-29 07:57:47.401 233728 DEBUG nova.virt.hardware [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:57:47 np0005539552 nova_compute[233724]: 2025-11-29 07:57:47.401 233728 DEBUG nova.virt.hardware [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:57:47 np0005539552 nova_compute[233724]: 2025-11-29 07:57:47.402 233728 DEBUG nova.virt.hardware [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:57:47 np0005539552 nova_compute[233724]: 2025-11-29 07:57:47.402 233728 DEBUG nova.virt.hardware [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:57:47 np0005539552 nova_compute[233724]: 2025-11-29 07:57:47.402 233728 DEBUG nova.virt.hardware [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:57:47 np0005539552 nova_compute[233724]: 2025-11-29 07:57:47.402 233728 DEBUG nova.virt.hardware [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:57:47 np0005539552 nova_compute[233724]: 2025-11-29 07:57:47.402 233728 DEBUG nova.virt.hardware [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:57:47 np0005539552 nova_compute[233724]: 2025-11-29 07:57:47.403 233728 DEBUG nova.virt.hardware [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:57:47 np0005539552 nova_compute[233724]: 2025-11-29 07:57:47.403 233728 DEBUG nova.virt.hardware [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:57:47 np0005539552 nova_compute[233724]: 2025-11-29 07:57:47.403 233728 DEBUG nova.virt.hardware [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:57:47 np0005539552 nova_compute[233724]: 2025-11-29 07:57:47.406 233728 DEBUG oslo_concurrency.processutils [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:57:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:47.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:57:47 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2216399392' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:57:47 np0005539552 nova_compute[233724]: 2025-11-29 07:57:47.835 233728 DEBUG oslo_concurrency.processutils [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:47 np0005539552 nova_compute[233724]: 2025-11-29 07:57:47.864 233728 DEBUG nova.storage.rbd_utils [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] rbd image 8b9831df-d3ee-436b-af4e-ad429c1c0227_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:57:47 np0005539552 nova_compute[233724]: 2025-11-29 07:57:47.868 233728 DEBUG oslo_concurrency.processutils [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:57:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:47.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:57:48 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2862590396' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:57:48 np0005539552 nova_compute[233724]: 2025-11-29 07:57:48.301 233728 DEBUG oslo_concurrency.processutils [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:48 np0005539552 nova_compute[233724]: 2025-11-29 07:57:48.303 233728 DEBUG nova.virt.libvirt.vif [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:57:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachSCSIVolumeTestJSON-server-1964225932',display_name='tempest-AttachSCSIVolumeTestJSON-server-1964225932',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachscsivolumetestjson-server-1964225932',id=28,image_ref='ca49c32b-cb06-40a8-be71-35eeb05e9ca2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMzELH7XLAV22uc90G7zhFInp2AEhOf5gIszA4btoNbMLg4GwRtbFwOgsjAdQ48WfVVFvrsN82F0fsoTLA3D6IXsd/RB/ZhS2LC3RCEPZmLU5GMeM2SAv5CXk19zhMmOUw==',key_name='tempest-keypair-80928421',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4004f4fd97dd4d0e8e83dd715ffb8e9c',ramdisk_id='',reservation_id='r-3rqc177r',resources=None,root_device_name='/dev/sda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ca49c32b-cb06-40a8-be71-35eeb05e9ca2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='scsi',image_hw_disk_bus='scsi',image_hw_machine_type='q35',image_hw_scsi_model='virtio-scsi',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachSCSIVolumeTestJSON-139358515',owner_user_name='tempest-AttachSCSIVolumeTestJSON-139358515-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:57:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='07c7f32f7d494f29b99afe2b074d0f68',uuid=8b9831df-d3ee-436b-af4e-ad429c1c0227,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2e934383-f346-4336-b1b8-e866fb05bef3", "address": "fa:16:3e:e8:6d:dc", "network": {"id": "75c19d22-1ac2-46dc-b079-76d980a382ed", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-65381912-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4004f4fd97dd4d0e8e83dd715ffb8e9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e934383-f3", "ovs_interfaceid": "2e934383-f346-4336-b1b8-e866fb05bef3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:57:48 np0005539552 nova_compute[233724]: 2025-11-29 07:57:48.303 233728 DEBUG nova.network.os_vif_util [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Converting VIF {"id": "2e934383-f346-4336-b1b8-e866fb05bef3", "address": "fa:16:3e:e8:6d:dc", "network": {"id": "75c19d22-1ac2-46dc-b079-76d980a382ed", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-65381912-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4004f4fd97dd4d0e8e83dd715ffb8e9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e934383-f3", "ovs_interfaceid": "2e934383-f346-4336-b1b8-e866fb05bef3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:57:48 np0005539552 nova_compute[233724]: 2025-11-29 07:57:48.304 233728 DEBUG nova.network.os_vif_util [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:6d:dc,bridge_name='br-int',has_traffic_filtering=True,id=2e934383-f346-4336-b1b8-e866fb05bef3,network=Network(75c19d22-1ac2-46dc-b079-76d980a382ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e934383-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:57:48 np0005539552 nova_compute[233724]: 2025-11-29 07:57:48.305 233728 DEBUG nova.objects.instance [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Lazy-loading 'pci_devices' on Instance uuid 8b9831df-d3ee-436b-af4e-ad429c1c0227 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:57:48 np0005539552 nova_compute[233724]: 2025-11-29 07:57:48.323 233728 DEBUG nova.virt.libvirt.driver [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:57:48 np0005539552 nova_compute[233724]:  <uuid>8b9831df-d3ee-436b-af4e-ad429c1c0227</uuid>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:  <name>instance-0000001c</name>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:57:48 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:      <nova:name>tempest-AttachSCSIVolumeTestJSON-server-1964225932</nova:name>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 07:57:47</nova:creationTime>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 02:57:48 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:        <nova:user uuid="07c7f32f7d494f29b99afe2b074d0f68">tempest-AttachSCSIVolumeTestJSON-139358515-project-member</nova:user>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:        <nova:project uuid="4004f4fd97dd4d0e8e83dd715ffb8e9c">tempest-AttachSCSIVolumeTestJSON-139358515</nova:project>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="ca49c32b-cb06-40a8-be71-35eeb05e9ca2"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:        <nova:port uuid="2e934383-f346-4336-b1b8-e866fb05bef3">
Nov 29 02:57:48 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <system>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:      <entry name="serial">8b9831df-d3ee-436b-af4e-ad429c1c0227</entry>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:      <entry name="uuid">8b9831df-d3ee-436b-af4e-ad429c1c0227</entry>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    </system>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:  <os>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:  </os>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:  <features>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:  </features>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:  </clock>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:  <devices>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 02:57:48 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/8b9831df-d3ee-436b-af4e-ad429c1c0227_disk">
Nov 29 02:57:48 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:      </source>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 02:57:48 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:      </auth>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:      <target dev="sda" bus="scsi"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:      <address type="drive" controller="0" unit="0"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    </disk>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 02:57:48 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/8b9831df-d3ee-436b-af4e-ad429c1c0227_disk.config">
Nov 29 02:57:48 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:      </source>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 02:57:48 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:      </auth>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:      <target dev="sdb" bus="scsi"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:      <address type="drive" controller="0" unit="1"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    </disk>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <controller type="scsi" index="0" model="virtio-scsi"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 02:57:48 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:e8:6d:dc"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:      <target dev="tap2e934383-f3"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    </interface>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 02:57:48 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/8b9831df-d3ee-436b-af4e-ad429c1c0227/console.log" append="off"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    </serial>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <video>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    </video>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 02:57:48 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    </rng>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 02:57:48 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 02:57:48 np0005539552 nova_compute[233724]:  </devices>
Nov 29 02:57:48 np0005539552 nova_compute[233724]: </domain>
Nov 29 02:57:48 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:57:48 np0005539552 nova_compute[233724]: 2025-11-29 07:57:48.323 233728 DEBUG nova.compute.manager [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Preparing to wait for external event network-vif-plugged-2e934383-f346-4336-b1b8-e866fb05bef3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 02:57:48 np0005539552 nova_compute[233724]: 2025-11-29 07:57:48.324 233728 DEBUG oslo_concurrency.lockutils [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Acquiring lock "8b9831df-d3ee-436b-af4e-ad429c1c0227-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:57:48 np0005539552 nova_compute[233724]: 2025-11-29 07:57:48.324 233728 DEBUG oslo_concurrency.lockutils [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Lock "8b9831df-d3ee-436b-af4e-ad429c1c0227-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:57:48 np0005539552 nova_compute[233724]: 2025-11-29 07:57:48.324 233728 DEBUG oslo_concurrency.lockutils [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Lock "8b9831df-d3ee-436b-af4e-ad429c1c0227-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:57:48 np0005539552 nova_compute[233724]: 2025-11-29 07:57:48.325 233728 DEBUG nova.virt.libvirt.vif [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:57:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachSCSIVolumeTestJSON-server-1964225932',display_name='tempest-AttachSCSIVolumeTestJSON-server-1964225932',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachscsivolumetestjson-server-1964225932',id=28,image_ref='ca49c32b-cb06-40a8-be71-35eeb05e9ca2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMzELH7XLAV22uc90G7zhFInp2AEhOf5gIszA4btoNbMLg4GwRtbFwOgsjAdQ48WfVVFvrsN82F0fsoTLA3D6IXsd/RB/ZhS2LC3RCEPZmLU5GMeM2SAv5CXk19zhMmOUw==',key_name='tempest-keypair-80928421',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4004f4fd97dd4d0e8e83dd715ffb8e9c',ramdisk_id='',reservation_id='r-3rqc177r',resources=None,root_device_name='/dev/sda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ca49c32b-cb06-40a8-be71-35eeb05e9ca2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='scsi',image_hw_disk_bus='scsi',image_hw_machine_type='q35',image_hw_scsi_model='virtio-scsi',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachSCSIVolumeTestJSON-139358515',owner_user_name='tempest-AttachSCSIVolumeTestJSON-139358515-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:57:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='07c7f32f7d494f29b99afe2b074d0f68',uuid=8b9831df-d3ee-436b-af4e-ad429c1c0227,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2e934383-f346-4336-b1b8-e866fb05bef3", "address": "fa:16:3e:e8:6d:dc", "network": {"id": "75c19d22-1ac2-46dc-b079-76d980a382ed", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-65381912-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", 
"version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4004f4fd97dd4d0e8e83dd715ffb8e9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e934383-f3", "ovs_interfaceid": "2e934383-f346-4336-b1b8-e866fb05bef3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:57:48 np0005539552 nova_compute[233724]: 2025-11-29 07:57:48.325 233728 DEBUG nova.network.os_vif_util [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Converting VIF {"id": "2e934383-f346-4336-b1b8-e866fb05bef3", "address": "fa:16:3e:e8:6d:dc", "network": {"id": "75c19d22-1ac2-46dc-b079-76d980a382ed", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-65381912-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4004f4fd97dd4d0e8e83dd715ffb8e9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e934383-f3", "ovs_interfaceid": "2e934383-f346-4336-b1b8-e866fb05bef3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:57:48 np0005539552 nova_compute[233724]: 2025-11-29 07:57:48.325 233728 DEBUG nova.network.os_vif_util [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:6d:dc,bridge_name='br-int',has_traffic_filtering=True,id=2e934383-f346-4336-b1b8-e866fb05bef3,network=Network(75c19d22-1ac2-46dc-b079-76d980a382ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e934383-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:57:48 np0005539552 nova_compute[233724]: 2025-11-29 07:57:48.326 233728 DEBUG os_vif [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:6d:dc,bridge_name='br-int',has_traffic_filtering=True,id=2e934383-f346-4336-b1b8-e866fb05bef3,network=Network(75c19d22-1ac2-46dc-b079-76d980a382ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e934383-f3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:57:48 np0005539552 nova_compute[233724]: 2025-11-29 07:57:48.326 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:48 np0005539552 nova_compute[233724]: 2025-11-29 07:57:48.327 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:57:48 np0005539552 nova_compute[233724]: 2025-11-29 07:57:48.327 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:57:48 np0005539552 nova_compute[233724]: 2025-11-29 07:57:48.331 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:48 np0005539552 nova_compute[233724]: 2025-11-29 07:57:48.331 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2e934383-f3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:57:48 np0005539552 nova_compute[233724]: 2025-11-29 07:57:48.332 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2e934383-f3, col_values=(('external_ids', {'iface-id': '2e934383-f346-4336-b1b8-e866fb05bef3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e8:6d:dc', 'vm-uuid': '8b9831df-d3ee-436b-af4e-ad429c1c0227'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:57:48 np0005539552 nova_compute[233724]: 2025-11-29 07:57:48.333 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:48 np0005539552 NetworkManager[48926]: <info>  [1764403068.3344] manager: (tap2e934383-f3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Nov 29 02:57:48 np0005539552 nova_compute[233724]: 2025-11-29 07:57:48.336 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:57:48 np0005539552 nova_compute[233724]: 2025-11-29 07:57:48.341 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:48 np0005539552 nova_compute[233724]: 2025-11-29 07:57:48.341 233728 INFO os_vif [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:6d:dc,bridge_name='br-int',has_traffic_filtering=True,id=2e934383-f346-4336-b1b8-e866fb05bef3,network=Network(75c19d22-1ac2-46dc-b079-76d980a382ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e934383-f3')#033[00m
Nov 29 02:57:48 np0005539552 nova_compute[233724]: 2025-11-29 07:57:48.406 233728 DEBUG nova.virt.libvirt.driver [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:57:48 np0005539552 nova_compute[233724]: 2025-11-29 07:57:48.406 233728 DEBUG nova.virt.libvirt.driver [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] No BDM found with device name sdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:57:48 np0005539552 nova_compute[233724]: 2025-11-29 07:57:48.406 233728 DEBUG nova.virt.libvirt.driver [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] No VIF found with MAC fa:16:3e:e8:6d:dc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 02:57:48 np0005539552 nova_compute[233724]: 2025-11-29 07:57:48.407 233728 INFO nova.virt.libvirt.driver [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Using config drive#033[00m
Nov 29 02:57:48 np0005539552 nova_compute[233724]: 2025-11-29 07:57:48.433 233728 DEBUG nova.storage.rbd_utils [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] rbd image 8b9831df-d3ee-436b-af4e-ad429c1c0227_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:57:48 np0005539552 nova_compute[233724]: 2025-11-29 07:57:48.946 233728 INFO nova.virt.libvirt.driver [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Creating config drive at /var/lib/nova/instances/8b9831df-d3ee-436b-af4e-ad429c1c0227/disk.config#033[00m
Nov 29 02:57:48 np0005539552 nova_compute[233724]: 2025-11-29 07:57:48.951 233728 DEBUG oslo_concurrency.processutils [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8b9831df-d3ee-436b-af4e-ad429c1c0227/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqs_wx54s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:57:49 np0005539552 nova_compute[233724]: 2025-11-29 07:57:49.082 233728 DEBUG oslo_concurrency.processutils [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8b9831df-d3ee-436b-af4e-ad429c1c0227/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqs_wx54s" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:49 np0005539552 nova_compute[233724]: 2025-11-29 07:57:49.117 233728 DEBUG nova.storage.rbd_utils [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] rbd image 8b9831df-d3ee-436b-af4e-ad429c1c0227_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:57:49 np0005539552 nova_compute[233724]: 2025-11-29 07:57:49.122 233728 DEBUG oslo_concurrency.processutils [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8b9831df-d3ee-436b-af4e-ad429c1c0227/disk.config 8b9831df-d3ee-436b-af4e-ad429c1c0227_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:57:49 np0005539552 nova_compute[233724]: 2025-11-29 07:57:49.278 233728 DEBUG nova.network.neutron [req-35d92b06-4dfc-4402-b6dc-8b6ae745c10d req-690f042e-ca59-48a3-9a91-84c78e3f5a99 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Updated VIF entry in instance network info cache for port 2e934383-f346-4336-b1b8-e866fb05bef3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:57:49 np0005539552 nova_compute[233724]: 2025-11-29 07:57:49.280 233728 DEBUG nova.network.neutron [req-35d92b06-4dfc-4402-b6dc-8b6ae745c10d req-690f042e-ca59-48a3-9a91-84c78e3f5a99 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Updating instance_info_cache with network_info: [{"id": "2e934383-f346-4336-b1b8-e866fb05bef3", "address": "fa:16:3e:e8:6d:dc", "network": {"id": "75c19d22-1ac2-46dc-b079-76d980a382ed", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-65381912-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4004f4fd97dd4d0e8e83dd715ffb8e9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e934383-f3", "ovs_interfaceid": "2e934383-f346-4336-b1b8-e866fb05bef3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:57:49 np0005539552 nova_compute[233724]: 2025-11-29 07:57:49.292 233728 DEBUG oslo_concurrency.processutils [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8b9831df-d3ee-436b-af4e-ad429c1c0227/disk.config 8b9831df-d3ee-436b-af4e-ad429c1c0227_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:49 np0005539552 nova_compute[233724]: 2025-11-29 07:57:49.292 233728 INFO nova.virt.libvirt.driver [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Deleting local config drive /var/lib/nova/instances/8b9831df-d3ee-436b-af4e-ad429c1c0227/disk.config because it was imported into RBD.#033[00m
Nov 29 02:57:49 np0005539552 nova_compute[233724]: 2025-11-29 07:57:49.315 233728 DEBUG oslo_concurrency.lockutils [req-35d92b06-4dfc-4402-b6dc-8b6ae745c10d req-690f042e-ca59-48a3-9a91-84c78e3f5a99 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-8b9831df-d3ee-436b-af4e-ad429c1c0227" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:57:49 np0005539552 kernel: tap2e934383-f3: entered promiscuous mode
Nov 29 02:57:49 np0005539552 NetworkManager[48926]: <info>  [1764403069.3387] manager: (tap2e934383-f3): new Tun device (/org/freedesktop/NetworkManager/Devices/60)
Nov 29 02:57:49 np0005539552 ovn_controller[133798]: 2025-11-29T07:57:49Z|00109|binding|INFO|Claiming lport 2e934383-f346-4336-b1b8-e866fb05bef3 for this chassis.
Nov 29 02:57:49 np0005539552 ovn_controller[133798]: 2025-11-29T07:57:49Z|00110|binding|INFO|2e934383-f346-4336-b1b8-e866fb05bef3: Claiming fa:16:3e:e8:6d:dc 10.100.0.5
Nov 29 02:57:49 np0005539552 nova_compute[233724]: 2025-11-29 07:57:49.339 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:49 np0005539552 nova_compute[233724]: 2025-11-29 07:57:49.344 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:49 np0005539552 nova_compute[233724]: 2025-11-29 07:57:49.346 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:57:49.352 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:6d:dc 10.100.0.5'], port_security=['fa:16:3e:e8:6d:dc 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '8b9831df-d3ee-436b-af4e-ad429c1c0227', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-75c19d22-1ac2-46dc-b079-76d980a382ed', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4004f4fd97dd4d0e8e83dd715ffb8e9c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cd8d6076-3342-42f8-a8a3-018d4d562660', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=39b1f3a8-26a7-4a17-90f2-a717db9d272a, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=2e934383-f346-4336-b1b8-e866fb05bef3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:57:49.353 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 2e934383-f346-4336-b1b8-e866fb05bef3 in datapath 75c19d22-1ac2-46dc-b079-76d980a382ed bound to our chassis#033[00m
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:57:49.355 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 75c19d22-1ac2-46dc-b079-76d980a382ed#033[00m
Nov 29 02:57:49 np0005539552 systemd-udevd[247306]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:57:49.366 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[12ae69cc-c65d-4d1f-b9f6-8635ae04b645]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:57:49.368 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap75c19d22-11 in ovnmeta-75c19d22-1ac2-46dc-b079-76d980a382ed namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:57:49.370 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap75c19d22-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:57:49.370 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b3169f71-84c4-4285-91cd-ac0a06a1af59]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:57:49.370 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0022464a-9afc-4515-895a-e61f6c2a3bb3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:49 np0005539552 systemd-machined[196379]: New machine qemu-9-instance-0000001c.
Nov 29 02:57:49 np0005539552 NetworkManager[48926]: <info>  [1764403069.3839] device (tap2e934383-f3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:57:49.382 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[5a980dae-5b5f-4adc-84ad-ad7e6da1a3f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:49 np0005539552 NetworkManager[48926]: <info>  [1764403069.3847] device (tap2e934383-f3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:57:49 np0005539552 systemd[1]: Started Virtual Machine qemu-9-instance-0000001c.
Nov 29 02:57:49 np0005539552 nova_compute[233724]: 2025-11-29 07:57:49.407 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:49 np0005539552 ovn_controller[133798]: 2025-11-29T07:57:49Z|00111|binding|INFO|Setting lport 2e934383-f346-4336-b1b8-e866fb05bef3 ovn-installed in OVS
Nov 29 02:57:49 np0005539552 ovn_controller[133798]: 2025-11-29T07:57:49Z|00112|binding|INFO|Setting lport 2e934383-f346-4336-b1b8-e866fb05bef3 up in Southbound
Nov 29 02:57:49 np0005539552 nova_compute[233724]: 2025-11-29 07:57:49.411 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:57:49.412 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[388d1b69-e855-4fa8-9632-3b54240bed03]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:57:49.435 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[c95ec8ab-e4a5-4cf1-a818-41c0a75b1fce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:57:49.440 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3dfee682-195b-446d-9a68-63a6792572d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:49 np0005539552 NetworkManager[48926]: <info>  [1764403069.4417] manager: (tap75c19d22-10): new Veth device (/org/freedesktop/NetworkManager/Devices/61)
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:57:49.464 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[0b6c51cb-325d-4c57-bc64-0898282894d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:57:49.467 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[9f033ee2-1e1a-448a-af04-bb4eab921bb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:49 np0005539552 nova_compute[233724]: 2025-11-29 07:57:49.485 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:49 np0005539552 NetworkManager[48926]: <info>  [1764403069.4889] device (tap75c19d22-10): carrier: link connected
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:57:49.493 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[0d6b9350-72a8-463b-8f4e-8f23061d2c45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:57:49.510 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c510304f-77d5-4db4-9c6b-ed70b7f40bb3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap75c19d22-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:d6:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611517, 'reachable_time': 21637, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 247340, 'error': None, 'target': 'ovnmeta-75c19d22-1ac2-46dc-b079-76d980a382ed', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:57:49.523 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[fd5fbf0f-5843-45c8-ae45-fe1591284fdc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9e:d619'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611517, 'tstamp': 611517}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 247341, 'error': None, 'target': 'ovnmeta-75c19d22-1ac2-46dc-b079-76d980a382ed', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:57:49.537 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0010b349-6a6a-4c0e-9c54-246feeaebd37]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap75c19d22-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:d6:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611517, 'reachable_time': 21637, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 247342, 'error': None, 'target': 'ovnmeta-75c19d22-1ac2-46dc-b079-76d980a382ed', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:57:49.561 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[28769b9c-72d0-4dc9-9402-461d27ad5d04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:57:49.619 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3da6acae-48c7-4f8f-80a5-db833e3431be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:57:49.620 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap75c19d22-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:57:49.620 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:57:49.621 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap75c19d22-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:57:49 np0005539552 kernel: tap75c19d22-10: entered promiscuous mode
Nov 29 02:57:49 np0005539552 NetworkManager[48926]: <info>  [1764403069.6235] manager: (tap75c19d22-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Nov 29 02:57:49 np0005539552 nova_compute[233724]: 2025-11-29 07:57:49.623 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:57:49.625 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap75c19d22-10, col_values=(('external_ids', {'iface-id': '4bb1119c-a842-4803-a0c1-b7bb6d404a37'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:57:49 np0005539552 ovn_controller[133798]: 2025-11-29T07:57:49Z|00113|binding|INFO|Releasing lport 4bb1119c-a842-4803-a0c1-b7bb6d404a37 from this chassis (sb_readonly=0)
Nov 29 02:57:49 np0005539552 nova_compute[233724]: 2025-11-29 07:57:49.628 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:57:49.629 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/75c19d22-1ac2-46dc-b079-76d980a382ed.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/75c19d22-1ac2-46dc-b079-76d980a382ed.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:57:49.631 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d9b8ab8a-02b5-450a-bd2e-44c64b2d6be2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:57:49.632 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-75c19d22-1ac2-46dc-b079-76d980a382ed
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/75c19d22-1ac2-46dc-b079-76d980a382ed.pid.haproxy
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 75c19d22-1ac2-46dc-b079-76d980a382ed
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 02:57:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:57:49.632 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-75c19d22-1ac2-46dc-b079-76d980a382ed', 'env', 'PROCESS_TAG=haproxy-75c19d22-1ac2-46dc-b079-76d980a382ed', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/75c19d22-1ac2-46dc-b079-76d980a382ed.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 02:57:49 np0005539552 nova_compute[233724]: 2025-11-29 07:57:49.643 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:49 np0005539552 nova_compute[233724]: 2025-11-29 07:57:49.683 233728 DEBUG nova.compute.manager [req-dfb5300d-780d-416e-b045-5cb756874320 req-65223b31-61a1-4975-ab81-178a77b7b845 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Received event network-vif-plugged-2e934383-f346-4336-b1b8-e866fb05bef3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:57:49 np0005539552 nova_compute[233724]: 2025-11-29 07:57:49.684 233728 DEBUG oslo_concurrency.lockutils [req-dfb5300d-780d-416e-b045-5cb756874320 req-65223b31-61a1-4975-ab81-178a77b7b845 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8b9831df-d3ee-436b-af4e-ad429c1c0227-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:57:49 np0005539552 nova_compute[233724]: 2025-11-29 07:57:49.685 233728 DEBUG oslo_concurrency.lockutils [req-dfb5300d-780d-416e-b045-5cb756874320 req-65223b31-61a1-4975-ab81-178a77b7b845 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8b9831df-d3ee-436b-af4e-ad429c1c0227-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:57:49 np0005539552 nova_compute[233724]: 2025-11-29 07:57:49.685 233728 DEBUG oslo_concurrency.lockutils [req-dfb5300d-780d-416e-b045-5cb756874320 req-65223b31-61a1-4975-ab81-178a77b7b845 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8b9831df-d3ee-436b-af4e-ad429c1c0227-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:57:49 np0005539552 nova_compute[233724]: 2025-11-29 07:57:49.685 233728 DEBUG nova.compute.manager [req-dfb5300d-780d-416e-b045-5cb756874320 req-65223b31-61a1-4975-ab81-178a77b7b845 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Processing event network-vif-plugged-2e934383-f346-4336-b1b8-e866fb05bef3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 02:57:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:49.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:49.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:50 np0005539552 podman[247410]: 2025-11-29 07:57:50.046423273 +0000 UTC m=+0.076416815 container create 815c17f6228eb255ea960d102666eb6014bf0ec1c837b68c4018b79efc9da300 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-75c19d22-1ac2-46dc-b079-76d980a382ed, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:57:50 np0005539552 systemd[1]: Started libpod-conmon-815c17f6228eb255ea960d102666eb6014bf0ec1c837b68c4018b79efc9da300.scope.
Nov 29 02:57:50 np0005539552 podman[247410]: 2025-11-29 07:57:50.006184801 +0000 UTC m=+0.036178363 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:57:50 np0005539552 nova_compute[233724]: 2025-11-29 07:57:50.095 233728 DEBUG nova.compute.manager [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:57:50 np0005539552 nova_compute[233724]: 2025-11-29 07:57:50.097 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403070.0946023, 8b9831df-d3ee-436b-af4e-ad429c1c0227 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:57:50 np0005539552 nova_compute[233724]: 2025-11-29 07:57:50.097 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] VM Started (Lifecycle Event)#033[00m
Nov 29 02:57:50 np0005539552 systemd[1]: Started libcrun container.
Nov 29 02:57:50 np0005539552 nova_compute[233724]: 2025-11-29 07:57:50.101 233728 DEBUG nova.virt.libvirt.driver [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:57:50 np0005539552 nova_compute[233724]: 2025-11-29 07:57:50.105 233728 INFO nova.virt.libvirt.driver [-] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Instance spawned successfully.#033[00m
Nov 29 02:57:50 np0005539552 nova_compute[233724]: 2025-11-29 07:57:50.105 233728 DEBUG nova.virt.libvirt.driver [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Attempting to register defaults for the following image properties: ['hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:57:50 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d93a3760e5703585b84a08902407dc8fb38aa6dbcab5d2ac598c00175fa61bac/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:57:50 np0005539552 nova_compute[233724]: 2025-11-29 07:57:50.108 233728 DEBUG nova.virt.libvirt.driver [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:57:50 np0005539552 nova_compute[233724]: 2025-11-29 07:57:50.109 233728 DEBUG nova.virt.libvirt.driver [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:57:50 np0005539552 nova_compute[233724]: 2025-11-29 07:57:50.110 233728 DEBUG nova.virt.libvirt.driver [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:57:50 np0005539552 nova_compute[233724]: 2025-11-29 07:57:50.110 233728 DEBUG nova.virt.libvirt.driver [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:57:50 np0005539552 podman[247410]: 2025-11-29 07:57:50.119402845 +0000 UTC m=+0.149396427 container init 815c17f6228eb255ea960d102666eb6014bf0ec1c837b68c4018b79efc9da300 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-75c19d22-1ac2-46dc-b079-76d980a382ed, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 02:57:50 np0005539552 podman[247410]: 2025-11-29 07:57:50.124865593 +0000 UTC m=+0.154859145 container start 815c17f6228eb255ea960d102666eb6014bf0ec1c837b68c4018b79efc9da300 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-75c19d22-1ac2-46dc-b079-76d980a382ed, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Nov 29 02:57:50 np0005539552 neutron-haproxy-ovnmeta-75c19d22-1ac2-46dc-b079-76d980a382ed[247432]: [NOTICE]   (247436) : New worker (247438) forked
Nov 29 02:57:50 np0005539552 neutron-haproxy-ovnmeta-75c19d22-1ac2-46dc-b079-76d980a382ed[247432]: [NOTICE]   (247436) : Loading success.
Nov 29 02:57:50 np0005539552 nova_compute[233724]: 2025-11-29 07:57:50.161 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:57:50 np0005539552 nova_compute[233724]: 2025-11-29 07:57:50.163 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:57:50 np0005539552 nova_compute[233724]: 2025-11-29 07:57:50.210 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:57:50 np0005539552 nova_compute[233724]: 2025-11-29 07:57:50.211 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403070.0955222, 8b9831df-d3ee-436b-af4e-ad429c1c0227 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:57:50 np0005539552 nova_compute[233724]: 2025-11-29 07:57:50.211 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] VM Paused (Lifecycle Event)#033[00m
Nov 29 02:57:50 np0005539552 nova_compute[233724]: 2025-11-29 07:57:50.224 233728 INFO nova.compute.manager [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Took 7.26 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 02:57:50 np0005539552 nova_compute[233724]: 2025-11-29 07:57:50.224 233728 DEBUG nova.compute.manager [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:57:50 np0005539552 nova_compute[233724]: 2025-11-29 07:57:50.251 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:57:50 np0005539552 nova_compute[233724]: 2025-11-29 07:57:50.257 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403070.1012828, 8b9831df-d3ee-436b-af4e-ad429c1c0227 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:57:50 np0005539552 nova_compute[233724]: 2025-11-29 07:57:50.258 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:57:50 np0005539552 nova_compute[233724]: 2025-11-29 07:57:50.289 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:57:50 np0005539552 nova_compute[233724]: 2025-11-29 07:57:50.292 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:57:50 np0005539552 nova_compute[233724]: 2025-11-29 07:57:50.302 233728 INFO nova.compute.manager [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Took 8.36 seconds to build instance.#033[00m
Nov 29 02:57:50 np0005539552 nova_compute[233724]: 2025-11-29 07:57:50.481 233728 DEBUG oslo_concurrency.lockutils [None req-ec0c12a5-fd5d-4823-8592-428eb0eb13d5 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Lock "8b9831df-d3ee-436b-af4e-ad429c1c0227" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:57:51 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:51.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:51 np0005539552 nova_compute[233724]: 2025-11-29 07:57:51.810 233728 DEBUG nova.compute.manager [req-a4271174-c75a-4f89-9675-121b839c7e15 req-41e2b947-dc0a-4f73-977f-190009b4292c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Received event network-vif-plugged-2e934383-f346-4336-b1b8-e866fb05bef3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:57:51 np0005539552 nova_compute[233724]: 2025-11-29 07:57:51.810 233728 DEBUG oslo_concurrency.lockutils [req-a4271174-c75a-4f89-9675-121b839c7e15 req-41e2b947-dc0a-4f73-977f-190009b4292c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8b9831df-d3ee-436b-af4e-ad429c1c0227-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:57:51 np0005539552 nova_compute[233724]: 2025-11-29 07:57:51.811 233728 DEBUG oslo_concurrency.lockutils [req-a4271174-c75a-4f89-9675-121b839c7e15 req-41e2b947-dc0a-4f73-977f-190009b4292c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8b9831df-d3ee-436b-af4e-ad429c1c0227-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:57:51 np0005539552 nova_compute[233724]: 2025-11-29 07:57:51.811 233728 DEBUG oslo_concurrency.lockutils [req-a4271174-c75a-4f89-9675-121b839c7e15 req-41e2b947-dc0a-4f73-977f-190009b4292c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8b9831df-d3ee-436b-af4e-ad429c1c0227-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:57:51 np0005539552 nova_compute[233724]: 2025-11-29 07:57:51.811 233728 DEBUG nova.compute.manager [req-a4271174-c75a-4f89-9675-121b839c7e15 req-41e2b947-dc0a-4f73-977f-190009b4292c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] No waiting events found dispatching network-vif-plugged-2e934383-f346-4336-b1b8-e866fb05bef3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:57:51 np0005539552 nova_compute[233724]: 2025-11-29 07:57:51.812 233728 WARNING nova.compute.manager [req-a4271174-c75a-4f89-9675-121b839c7e15 req-41e2b947-dc0a-4f73-977f-190009b4292c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Received unexpected event network-vif-plugged-2e934383-f346-4336-b1b8-e866fb05bef3 for instance with vm_state active and task_state None.#033[00m
Nov 29 02:57:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:51.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:53 np0005539552 nova_compute[233724]: 2025-11-29 07:57:53.335 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:53 np0005539552 nova_compute[233724]: 2025-11-29 07:57:53.660 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:53 np0005539552 NetworkManager[48926]: <info>  [1764403073.6610] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Nov 29 02:57:53 np0005539552 NetworkManager[48926]: <info>  [1764403073.6618] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Nov 29 02:57:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:53.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:53 np0005539552 nova_compute[233724]: 2025-11-29 07:57:53.752 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:53 np0005539552 ovn_controller[133798]: 2025-11-29T07:57:53Z|00114|binding|INFO|Releasing lport 4bb1119c-a842-4803-a0c1-b7bb6d404a37 from this chassis (sb_readonly=0)
Nov 29 02:57:53 np0005539552 nova_compute[233724]: 2025-11-29 07:57:53.770 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:53 np0005539552 nova_compute[233724]: 2025-11-29 07:57:53.932 233728 DEBUG nova.compute.manager [req-53106bdb-8022-4eca-b892-422c39aac74f req-11f8978b-199d-4cfd-a580-fae2eb9262f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Received event network-changed-2e934383-f346-4336-b1b8-e866fb05bef3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:57:53 np0005539552 nova_compute[233724]: 2025-11-29 07:57:53.932 233728 DEBUG nova.compute.manager [req-53106bdb-8022-4eca-b892-422c39aac74f req-11f8978b-199d-4cfd-a580-fae2eb9262f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Refreshing instance network info cache due to event network-changed-2e934383-f346-4336-b1b8-e866fb05bef3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:57:53 np0005539552 nova_compute[233724]: 2025-11-29 07:57:53.932 233728 DEBUG oslo_concurrency.lockutils [req-53106bdb-8022-4eca-b892-422c39aac74f req-11f8978b-199d-4cfd-a580-fae2eb9262f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-8b9831df-d3ee-436b-af4e-ad429c1c0227" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:57:53 np0005539552 nova_compute[233724]: 2025-11-29 07:57:53.932 233728 DEBUG oslo_concurrency.lockutils [req-53106bdb-8022-4eca-b892-422c39aac74f req-11f8978b-199d-4cfd-a580-fae2eb9262f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-8b9831df-d3ee-436b-af4e-ad429c1c0227" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:57:53 np0005539552 nova_compute[233724]: 2025-11-29 07:57:53.933 233728 DEBUG nova.network.neutron [req-53106bdb-8022-4eca-b892-422c39aac74f req-11f8978b-199d-4cfd-a580-fae2eb9262f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Refreshing network info cache for port 2e934383-f346-4336-b1b8-e866fb05bef3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:57:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:53.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:54 np0005539552 nova_compute[233724]: 2025-11-29 07:57:54.487 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:55 np0005539552 systemd[1]: Stopping User Manager for UID 42436...
Nov 29 02:57:55 np0005539552 systemd[246973]: Activating special unit Exit the Session...
Nov 29 02:57:55 np0005539552 systemd[246973]: Stopped target Main User Target.
Nov 29 02:57:55 np0005539552 systemd[246973]: Stopped target Basic System.
Nov 29 02:57:55 np0005539552 systemd[246973]: Stopped target Paths.
Nov 29 02:57:55 np0005539552 systemd[246973]: Stopped target Sockets.
Nov 29 02:57:55 np0005539552 systemd[246973]: Stopped target Timers.
Nov 29 02:57:55 np0005539552 systemd[246973]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 29 02:57:55 np0005539552 systemd[246973]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 29 02:57:55 np0005539552 systemd[246973]: Closed D-Bus User Message Bus Socket.
Nov 29 02:57:55 np0005539552 systemd[246973]: Stopped Create User's Volatile Files and Directories.
Nov 29 02:57:55 np0005539552 systemd[246973]: Removed slice User Application Slice.
Nov 29 02:57:55 np0005539552 systemd[246973]: Reached target Shutdown.
Nov 29 02:57:55 np0005539552 systemd[246973]: Finished Exit the Session.
Nov 29 02:57:55 np0005539552 systemd[246973]: Reached target Exit the Session.
Nov 29 02:57:55 np0005539552 systemd[1]: user@42436.service: Deactivated successfully.
Nov 29 02:57:55 np0005539552 systemd[1]: Stopped User Manager for UID 42436.
Nov 29 02:57:55 np0005539552 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Nov 29 02:57:55 np0005539552 systemd[1]: run-user-42436.mount: Deactivated successfully.
Nov 29 02:57:55 np0005539552 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Nov 29 02:57:55 np0005539552 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Nov 29 02:57:55 np0005539552 systemd[1]: Removed slice User Slice of UID 42436.
Nov 29 02:57:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:55.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:55.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:56 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:56 np0005539552 nova_compute[233724]: 2025-11-29 07:57:56.904 233728 DEBUG nova.network.neutron [req-53106bdb-8022-4eca-b892-422c39aac74f req-11f8978b-199d-4cfd-a580-fae2eb9262f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Updated VIF entry in instance network info cache for port 2e934383-f346-4336-b1b8-e866fb05bef3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:57:56 np0005539552 nova_compute[233724]: 2025-11-29 07:57:56.906 233728 DEBUG nova.network.neutron [req-53106bdb-8022-4eca-b892-422c39aac74f req-11f8978b-199d-4cfd-a580-fae2eb9262f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Updating instance_info_cache with network_info: [{"id": "2e934383-f346-4336-b1b8-e866fb05bef3", "address": "fa:16:3e:e8:6d:dc", "network": {"id": "75c19d22-1ac2-46dc-b079-76d980a382ed", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-65381912-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4004f4fd97dd4d0e8e83dd715ffb8e9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e934383-f3", "ovs_interfaceid": "2e934383-f346-4336-b1b8-e866fb05bef3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:57:56 np0005539552 nova_compute[233724]: 2025-11-29 07:57:56.930 233728 DEBUG oslo_concurrency.lockutils [req-53106bdb-8022-4eca-b892-422c39aac74f req-11f8978b-199d-4cfd-a580-fae2eb9262f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-8b9831df-d3ee-436b-af4e-ad429c1c0227" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:57:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:57.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:57.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:58 np0005539552 nova_compute[233724]: 2025-11-29 07:57:58.336 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:59 np0005539552 nova_compute[233724]: 2025-11-29 07:57:59.493 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:57:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:59.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:00.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:00 np0005539552 nova_compute[233724]: 2025-11-29 07:58:00.703 233728 DEBUG oslo_concurrency.lockutils [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "refresh_cache-fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:58:00 np0005539552 nova_compute[233724]: 2025-11-29 07:58:00.704 233728 DEBUG oslo_concurrency.lockutils [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquired lock "refresh_cache-fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:58:00 np0005539552 nova_compute[233724]: 2025-11-29 07:58:00.704 233728 DEBUG nova.network.neutron [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:58:00 np0005539552 nova_compute[233724]: 2025-11-29 07:58:00.841 233728 DEBUG nova.network.neutron [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:58:01 np0005539552 nova_compute[233724]: 2025-11-29 07:58:01.203 233728 DEBUG nova.network.neutron [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:58:01 np0005539552 nova_compute[233724]: 2025-11-29 07:58:01.219 233728 DEBUG oslo_concurrency.lockutils [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Releasing lock "refresh_cache-fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:58:01 np0005539552 nova_compute[233724]: 2025-11-29 07:58:01.326 233728 DEBUG nova.virt.libvirt.driver [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Nov 29 02:58:01 np0005539552 nova_compute[233724]: 2025-11-29 07:58:01.328 233728 DEBUG nova.virt.libvirt.driver [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 29 02:58:01 np0005539552 nova_compute[233724]: 2025-11-29 07:58:01.328 233728 INFO nova.virt.libvirt.driver [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Creating image(s)#033[00m
Nov 29 02:58:01 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:01 np0005539552 nova_compute[233724]: 2025-11-29 07:58:01.449 233728 DEBUG nova.storage.rbd_utils [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] creating snapshot(nova-resize) on rbd image(fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 02:58:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:01.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:01 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e169 e169: 3 total, 3 up, 3 in
Nov 29 02:58:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:02.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:02 np0005539552 nova_compute[233724]: 2025-11-29 07:58:02.015 233728 DEBUG nova.objects.instance [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lazy-loading 'trusted_certs' on Instance uuid fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:58:02 np0005539552 nova_compute[233724]: 2025-11-29 07:58:02.243 233728 DEBUG nova.virt.libvirt.driver [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 02:58:02 np0005539552 nova_compute[233724]: 2025-11-29 07:58:02.244 233728 DEBUG nova.virt.libvirt.driver [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Ensure instance console log exists: /var/lib/nova/instances/fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:58:02 np0005539552 nova_compute[233724]: 2025-11-29 07:58:02.245 233728 DEBUG oslo_concurrency.lockutils [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:02 np0005539552 nova_compute[233724]: 2025-11-29 07:58:02.246 233728 DEBUG oslo_concurrency.lockutils [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:02 np0005539552 nova_compute[233724]: 2025-11-29 07:58:02.246 233728 DEBUG oslo_concurrency.lockutils [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:02 np0005539552 nova_compute[233724]: 2025-11-29 07:58:02.250 233728 DEBUG nova.virt.libvirt.driver [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:58:02 np0005539552 nova_compute[233724]: 2025-11-29 07:58:02.255 233728 WARNING nova.virt.libvirt.driver [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:58:02 np0005539552 nova_compute[233724]: 2025-11-29 07:58:02.261 233728 DEBUG nova.virt.libvirt.host [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:58:02 np0005539552 nova_compute[233724]: 2025-11-29 07:58:02.262 233728 DEBUG nova.virt.libvirt.host [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:58:02 np0005539552 nova_compute[233724]: 2025-11-29 07:58:02.265 233728 DEBUG nova.virt.libvirt.host [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:58:02 np0005539552 nova_compute[233724]: 2025-11-29 07:58:02.266 233728 DEBUG nova.virt.libvirt.host [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:58:02 np0005539552 nova_compute[233724]: 2025-11-29 07:58:02.267 233728 DEBUG nova.virt.libvirt.driver [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:58:02 np0005539552 nova_compute[233724]: 2025-11-29 07:58:02.267 233728 DEBUG nova.virt.hardware [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='709b029f-0458-4e40-a6ee-e1e02b48c06c',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:58:02 np0005539552 nova_compute[233724]: 2025-11-29 07:58:02.268 233728 DEBUG nova.virt.hardware [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:58:02 np0005539552 nova_compute[233724]: 2025-11-29 07:58:02.268 233728 DEBUG nova.virt.hardware [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:58:02 np0005539552 nova_compute[233724]: 2025-11-29 07:58:02.268 233728 DEBUG nova.virt.hardware [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:58:02 np0005539552 nova_compute[233724]: 2025-11-29 07:58:02.268 233728 DEBUG nova.virt.hardware [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:58:02 np0005539552 nova_compute[233724]: 2025-11-29 07:58:02.268 233728 DEBUG nova.virt.hardware [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:58:02 np0005539552 nova_compute[233724]: 2025-11-29 07:58:02.269 233728 DEBUG nova.virt.hardware [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:58:02 np0005539552 nova_compute[233724]: 2025-11-29 07:58:02.269 233728 DEBUG nova.virt.hardware [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:58:02 np0005539552 nova_compute[233724]: 2025-11-29 07:58:02.269 233728 DEBUG nova.virt.hardware [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:58:02 np0005539552 nova_compute[233724]: 2025-11-29 07:58:02.269 233728 DEBUG nova.virt.hardware [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:58:02 np0005539552 nova_compute[233724]: 2025-11-29 07:58:02.270 233728 DEBUG nova.virt.hardware [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:58:02 np0005539552 nova_compute[233724]: 2025-11-29 07:58:02.270 233728 DEBUG nova.objects.instance [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lazy-loading 'vcpu_model' on Instance uuid fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:58:02 np0005539552 nova_compute[233724]: 2025-11-29 07:58:02.290 233728 DEBUG oslo_concurrency.processutils [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:58:02 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/627189197' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:58:02 np0005539552 nova_compute[233724]: 2025-11-29 07:58:02.701 233728 DEBUG oslo_concurrency.processutils [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:58:02 np0005539552 nova_compute[233724]: 2025-11-29 07:58:02.740 233728 DEBUG oslo_concurrency.processutils [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:03 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Nov 29 02:58:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:58:03 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2253107191' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:58:03 np0005539552 nova_compute[233724]: 2025-11-29 07:58:03.251 233728 DEBUG oslo_concurrency.processutils [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:58:03 np0005539552 nova_compute[233724]: 2025-11-29 07:58:03.255 233728 DEBUG nova.virt.libvirt.driver [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:58:03 np0005539552 nova_compute[233724]:  <uuid>fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842</uuid>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:  <name>instance-0000001b</name>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:  <memory>196608</memory>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:58:03 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:      <nova:name>tempest-MigrationsAdminTest-server-1174659691</nova:name>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 07:58:02</nova:creationTime>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.micro">
Nov 29 02:58:03 np0005539552 nova_compute[233724]:        <nova:memory>192</nova:memory>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:        <nova:user uuid="51ae07f600c545c0b4c7fae00657ea40">tempest-MigrationsAdminTest-1930136363-project-member</nova:user>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:        <nova:project uuid="6717732f9fa242b181f58881b03d246f">tempest-MigrationsAdminTest-1930136363</nova:project>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:      <nova:ports/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    <system>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:      <entry name="serial">fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842</entry>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:      <entry name="uuid">fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842</entry>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    </system>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:  <os>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:  </os>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:  <features>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:  </features>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:  </clock>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:  <devices>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 02:58:03 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842_disk">
Nov 29 02:58:03 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:      </source>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 02:58:03 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:      </auth>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    </disk>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 02:58:03 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842_disk.config">
Nov 29 02:58:03 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:      </source>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 02:58:03 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:      </auth>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    </disk>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 02:58:03 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842/console.log" append="off"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    </serial>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    <video>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    </video>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 02:58:03 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    </rng>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 02:58:03 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 02:58:03 np0005539552 nova_compute[233724]:  </devices>
Nov 29 02:58:03 np0005539552 nova_compute[233724]: </domain>
Nov 29 02:58:03 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:58:03 np0005539552 nova_compute[233724]: 2025-11-29 07:58:03.317 233728 DEBUG nova.virt.libvirt.driver [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:58:03 np0005539552 nova_compute[233724]: 2025-11-29 07:58:03.317 233728 DEBUG nova.virt.libvirt.driver [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:58:03 np0005539552 nova_compute[233724]: 2025-11-29 07:58:03.318 233728 INFO nova.virt.libvirt.driver [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Using config drive#033[00m
Nov 29 02:58:03 np0005539552 nova_compute[233724]: 2025-11-29 07:58:03.357 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:03 np0005539552 podman[247596]: 2025-11-29 07:58:03.365706088 +0000 UTC m=+0.066735173 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:58:03 np0005539552 podman[247595]: 2025-11-29 07:58:03.379869782 +0000 UTC m=+0.083182649 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 02:58:03 np0005539552 podman[247597]: 2025-11-29 07:58:03.400652246 +0000 UTC m=+0.099979985 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 02:58:03 np0005539552 systemd-machined[196379]: New machine qemu-10-instance-0000001b.
Nov 29 02:58:03 np0005539552 systemd[1]: Started Virtual Machine qemu-10-instance-0000001b.
Nov 29 02:58:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:03.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:03 np0005539552 nova_compute[233724]: 2025-11-29 07:58:03.849 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403083.8493803, fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:58:03 np0005539552 nova_compute[233724]: 2025-11-29 07:58:03.850 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:58:03 np0005539552 nova_compute[233724]: 2025-11-29 07:58:03.852 233728 DEBUG nova.compute.manager [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:58:03 np0005539552 nova_compute[233724]: 2025-11-29 07:58:03.855 233728 INFO nova.virt.libvirt.driver [-] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Instance running successfully.#033[00m
Nov 29 02:58:03 np0005539552 virtqemud[233098]: argument unsupported: QEMU guest agent is not configured
Nov 29 02:58:03 np0005539552 nova_compute[233724]: 2025-11-29 07:58:03.857 233728 DEBUG nova.virt.libvirt.guest [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Nov 29 02:58:03 np0005539552 nova_compute[233724]: 2025-11-29 07:58:03.858 233728 DEBUG nova.virt.libvirt.driver [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Nov 29 02:58:03 np0005539552 nova_compute[233724]: 2025-11-29 07:58:03.871 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:58:03 np0005539552 nova_compute[233724]: 2025-11-29 07:58:03.881 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:58:03 np0005539552 nova_compute[233724]: 2025-11-29 07:58:03.930 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Nov 29 02:58:03 np0005539552 nova_compute[233724]: 2025-11-29 07:58:03.931 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403083.850447, fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:58:03 np0005539552 nova_compute[233724]: 2025-11-29 07:58:03.931 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] VM Started (Lifecycle Event)#033[00m
Nov 29 02:58:03 np0005539552 nova_compute[233724]: 2025-11-29 07:58:03.969 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:58:03 np0005539552 nova_compute[233724]: 2025-11-29 07:58:03.974 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:58:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:04.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:04 np0005539552 nova_compute[233724]: 2025-11-29 07:58:04.497 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:05 np0005539552 ovn_controller[133798]: 2025-11-29T07:58:05Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e8:6d:dc 10.100.0.5
Nov 29 02:58:05 np0005539552 ovn_controller[133798]: 2025-11-29T07:58:05Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e8:6d:dc 10.100.0.5
Nov 29 02:58:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:05.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:06.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:06 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:07.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:08.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e170 e170: 3 total, 3 up, 3 in
Nov 29 02:58:08 np0005539552 nova_compute[233724]: 2025-11-29 07:58:08.360 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:09 np0005539552 nova_compute[233724]: 2025-11-29 07:58:09.175 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:58:09 np0005539552 nova_compute[233724]: 2025-11-29 07:58:09.500 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:09.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:09 np0005539552 nova_compute[233724]: 2025-11-29 07:58:09.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:58:09 np0005539552 nova_compute[233724]: 2025-11-29 07:58:09.925 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:58:09 np0005539552 nova_compute[233724]: 2025-11-29 07:58:09.925 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:58:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:10.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:10 np0005539552 nova_compute[233724]: 2025-11-29 07:58:10.107 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "refresh_cache-9eb89c30-3f33-4a7c-ae19-8312a2522b82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:58:10 np0005539552 nova_compute[233724]: 2025-11-29 07:58:10.108 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquired lock "refresh_cache-9eb89c30-3f33-4a7c-ae19-8312a2522b82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:58:10 np0005539552 nova_compute[233724]: 2025-11-29 07:58:10.108 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 02:58:10 np0005539552 nova_compute[233724]: 2025-11-29 07:58:10.109 233728 DEBUG nova.objects.instance [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9eb89c30-3f33-4a7c-ae19-8312a2522b82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:58:10 np0005539552 nova_compute[233724]: 2025-11-29 07:58:10.372 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:58:10 np0005539552 nova_compute[233724]: 2025-11-29 07:58:10.915 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:58:10 np0005539552 nova_compute[233724]: 2025-11-29 07:58:10.933 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Releasing lock "refresh_cache-9eb89c30-3f33-4a7c-ae19-8312a2522b82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:58:10 np0005539552 nova_compute[233724]: 2025-11-29 07:58:10.933 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 02:58:10 np0005539552 nova_compute[233724]: 2025-11-29 07:58:10.934 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:58:10 np0005539552 nova_compute[233724]: 2025-11-29 07:58:10.934 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:58:10 np0005539552 nova_compute[233724]: 2025-11-29 07:58:10.934 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:58:11 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:11.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:11 np0005539552 nova_compute[233724]: 2025-11-29 07:58:11.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:58:11 np0005539552 nova_compute[233724]: 2025-11-29 07:58:11.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:58:11 np0005539552 nova_compute[233724]: 2025-11-29 07:58:11.944 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:58:11 np0005539552 nova_compute[233724]: 2025-11-29 07:58:11.944 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:58:11 np0005539552 nova_compute[233724]: 2025-11-29 07:58:11.945 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:58:11 np0005539552 nova_compute[233724]: 2025-11-29 07:58:11.945 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:58:11 np0005539552 nova_compute[233724]: 2025-11-29 07:58:11.965 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:11 np0005539552 nova_compute[233724]: 2025-11-29 07:58:11.965 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:11 np0005539552 nova_compute[233724]: 2025-11-29 07:58:11.965 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:11 np0005539552 nova_compute[233724]: 2025-11-29 07:58:11.966 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:58:11 np0005539552 nova_compute[233724]: 2025-11-29 07:58:11.966 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:12.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:58:12 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3383832494' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:58:12 np0005539552 nova_compute[233724]: 2025-11-29 07:58:12.423 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:58:12 np0005539552 nova_compute[233724]: 2025-11-29 07:58:12.638 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:58:12 np0005539552 nova_compute[233724]: 2025-11-29 07:58:12.638 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:58:12 np0005539552 nova_compute[233724]: 2025-11-29 07:58:12.642 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-0000001c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:58:12 np0005539552 nova_compute[233724]: 2025-11-29 07:58:12.643 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-0000001c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:58:12 np0005539552 nova_compute[233724]: 2025-11-29 07:58:12.648 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:58:12 np0005539552 nova_compute[233724]: 2025-11-29 07:58:12.649 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:58:12 np0005539552 nova_compute[233724]: 2025-11-29 07:58:12.867 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:58:12 np0005539552 nova_compute[233724]: 2025-11-29 07:58:12.868 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4269MB free_disk=20.810192108154297GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:58:12 np0005539552 nova_compute[233724]: 2025-11-29 07:58:12.868 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:12 np0005539552 nova_compute[233724]: 2025-11-29 07:58:12.868 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:12 np0005539552 nova_compute[233724]: 2025-11-29 07:58:12.945 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 9eb89c30-3f33-4a7c-ae19-8312a2522b82 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:58:12 np0005539552 nova_compute[233724]: 2025-11-29 07:58:12.945 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 8b9831df-d3ee-436b-af4e-ad429c1c0227 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:58:12 np0005539552 nova_compute[233724]: 2025-11-29 07:58:12.946 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:58:12 np0005539552 nova_compute[233724]: 2025-11-29 07:58:12.946 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:58:12 np0005539552 nova_compute[233724]: 2025-11-29 07:58:12.946 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=960MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:58:13 np0005539552 nova_compute[233724]: 2025-11-29 07:58:13.039 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:13 np0005539552 nova_compute[233724]: 2025-11-29 07:58:13.363 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:13 np0005539552 nova_compute[233724]: 2025-11-29 07:58:13.464 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:58:13 np0005539552 nova_compute[233724]: 2025-11-29 07:58:13.469 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:58:13 np0005539552 nova_compute[233724]: 2025-11-29 07:58:13.488 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:58:13 np0005539552 nova_compute[233724]: 2025-11-29 07:58:13.513 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:58:13 np0005539552 nova_compute[233724]: 2025-11-29 07:58:13.514 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:13.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:14.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:14 np0005539552 nova_compute[233724]: 2025-11-29 07:58:14.503 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:15 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e171 e171: 3 total, 3 up, 3 in
Nov 29 02:58:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:15.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:16.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:16 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:17.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:58:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:18.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:58:18 np0005539552 nova_compute[233724]: 2025-11-29 07:58:18.367 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:19 np0005539552 nova_compute[233724]: 2025-11-29 07:58:19.504 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:19.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:20.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:58:20.607 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:58:20.607 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:58:20.608 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:21 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:21.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:22.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:23 np0005539552 nova_compute[233724]: 2025-11-29 07:58:23.370 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:23.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:24.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:24 np0005539552 nova_compute[233724]: 2025-11-29 07:58:24.506 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:25.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:26.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:26 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:26 np0005539552 podman[248013]: 2025-11-29 07:58:26.710370172 +0000 UTC m=+0.060850113 container exec 2e449e743ed8b622f7d948671af16b72c407124db8f8583030f534a9757aa05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 02:58:26 np0005539552 podman[248013]: 2025-11-29 07:58:26.813774648 +0000 UTC m=+0.164254579 container exec_died 2e449e743ed8b622f7d948671af16b72c407124db8f8583030f534a9757aa05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:58:27 np0005539552 podman[248166]: 2025-11-29 07:58:27.392427058 +0000 UTC m=+0.049514695 container exec fbc97e64df6fe35a26c07fd8827c68b935db0d4237087be449d0fd015b40e005 (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-2-efzvmt)
Nov 29 02:58:27 np0005539552 podman[248166]: 2025-11-29 07:58:27.430953644 +0000 UTC m=+0.088041291 container exec_died fbc97e64df6fe35a26c07fd8827c68b935db0d4237087be449d0fd015b40e005 (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-2-efzvmt)
Nov 29 02:58:27 np0005539552 podman[248232]: 2025-11-29 07:58:27.609986444 +0000 UTC m=+0.048668132 container exec 6c20d6486ea31b9565e62504d583e2d8c9512b707e8e2015e3cef66c15f3b3e6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.component=keepalived-container, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, release=1793, architecture=x86_64, io.openshift.expose-services=, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git)
Nov 29 02:58:27 np0005539552 podman[248232]: 2025-11-29 07:58:27.623958724 +0000 UTC m=+0.062640382 container exec_died 6c20d6486ea31b9565e62504d583e2d8c9512b707e8e2015e3cef66c15f3b3e6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, architecture=x86_64, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, com.redhat.component=keepalived-container, version=2.2.4, release=1793)
Nov 29 02:58:27 np0005539552 nova_compute[233724]: 2025-11-29 07:58:27.735 233728 DEBUG oslo_concurrency.lockutils [None req-08cc25fa-2cfd-4f9f-a94c-4bc2bc8a3b36 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Acquiring lock "8b9831df-d3ee-436b-af4e-ad429c1c0227" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:27 np0005539552 nova_compute[233724]: 2025-11-29 07:58:27.737 233728 DEBUG oslo_concurrency.lockutils [None req-08cc25fa-2cfd-4f9f-a94c-4bc2bc8a3b36 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Lock "8b9831df-d3ee-436b-af4e-ad429c1c0227" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:27.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:27 np0005539552 nova_compute[233724]: 2025-11-29 07:58:27.774 233728 DEBUG nova.objects.instance [None req-08cc25fa-2cfd-4f9f-a94c-4bc2bc8a3b36 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Lazy-loading 'flavor' on Instance uuid 8b9831df-d3ee-436b-af4e-ad429c1c0227 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:58:27 np0005539552 nova_compute[233724]: 2025-11-29 07:58:27.814 233728 DEBUG oslo_concurrency.lockutils [None req-08cc25fa-2cfd-4f9f-a94c-4bc2bc8a3b36 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Lock "8b9831df-d3ee-436b-af4e-ad429c1c0227" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:28.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:28 np0005539552 nova_compute[233724]: 2025-11-29 07:58:28.162 233728 DEBUG oslo_concurrency.lockutils [None req-08cc25fa-2cfd-4f9f-a94c-4bc2bc8a3b36 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Acquiring lock "8b9831df-d3ee-436b-af4e-ad429c1c0227" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:28 np0005539552 nova_compute[233724]: 2025-11-29 07:58:28.163 233728 DEBUG oslo_concurrency.lockutils [None req-08cc25fa-2cfd-4f9f-a94c-4bc2bc8a3b36 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Lock "8b9831df-d3ee-436b-af4e-ad429c1c0227" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:28 np0005539552 nova_compute[233724]: 2025-11-29 07:58:28.163 233728 INFO nova.compute.manager [None req-08cc25fa-2cfd-4f9f-a94c-4bc2bc8a3b36 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Attaching volume ad8203ea-9ab4-445c-8faf-cf62075ec75c to /dev/sdc#033[00m
Nov 29 02:58:28 np0005539552 nova_compute[233724]: 2025-11-29 07:58:28.373 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:28 np0005539552 nova_compute[233724]: 2025-11-29 07:58:28.414 233728 DEBUG os_brick.utils [None req-08cc25fa-2cfd-4f9f-a94c-4bc2bc8a3b36 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 02:58:28 np0005539552 nova_compute[233724]: 2025-11-29 07:58:28.416 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:28 np0005539552 nova_compute[233724]: 2025-11-29 07:58:28.429 243418 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:58:28 np0005539552 nova_compute[233724]: 2025-11-29 07:58:28.429 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[03299938-1ba2-4cf6-8983-bf0b0d6cb5e6]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:28 np0005539552 nova_compute[233724]: 2025-11-29 07:58:28.430 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:28 np0005539552 nova_compute[233724]: 2025-11-29 07:58:28.443 243418 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:58:28 np0005539552 nova_compute[233724]: 2025-11-29 07:58:28.444 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[b0c2d7ac-af77-4aa6-8211-d767beb87269]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0e73df9328', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:28 np0005539552 nova_compute[233724]: 2025-11-29 07:58:28.445 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:28 np0005539552 nova_compute[233724]: 2025-11-29 07:58:28.459 243418 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:58:28 np0005539552 nova_compute[233724]: 2025-11-29 07:58:28.459 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[f2319541-cb12-48cb-91b4-ec963e532ced]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:28 np0005539552 nova_compute[233724]: 2025-11-29 07:58:28.461 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[e61d1d1b-9c08-4da5-8660-9fe6d5cec75b]: (4, '6fbde64a-d978-4f1a-a29d-a77e1f5a1987') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:28 np0005539552 nova_compute[233724]: 2025-11-29 07:58:28.462 233728 DEBUG oslo_concurrency.processutils [None req-08cc25fa-2cfd-4f9f-a94c-4bc2bc8a3b36 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:28 np0005539552 nova_compute[233724]: 2025-11-29 07:58:28.486 233728 DEBUG oslo_concurrency.processutils [None req-08cc25fa-2cfd-4f9f-a94c-4bc2bc8a3b36 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] CMD "nvme version" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:58:28 np0005539552 nova_compute[233724]: 2025-11-29 07:58:28.488 233728 DEBUG os_brick.initiator.connectors.lightos [None req-08cc25fa-2cfd-4f9f-a94c-4bc2bc8a3b36 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 02:58:28 np0005539552 nova_compute[233724]: 2025-11-29 07:58:28.488 233728 DEBUG os_brick.initiator.connectors.lightos [None req-08cc25fa-2cfd-4f9f-a94c-4bc2bc8a3b36 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 02:58:28 np0005539552 nova_compute[233724]: 2025-11-29 07:58:28.488 233728 DEBUG os_brick.initiator.connectors.lightos [None req-08cc25fa-2cfd-4f9f-a94c-4bc2bc8a3b36 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 02:58:28 np0005539552 nova_compute[233724]: 2025-11-29 07:58:28.489 233728 DEBUG os_brick.utils [None req-08cc25fa-2cfd-4f9f-a94c-4bc2bc8a3b36 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] <== get_connector_properties: return (73ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0e73df9328', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '6fbde64a-d978-4f1a-a29d-a77e1f5a1987', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 02:58:28 np0005539552 nova_compute[233724]: 2025-11-29 07:58:28.489 233728 DEBUG nova.virt.block_device [None req-08cc25fa-2cfd-4f9f-a94c-4bc2bc8a3b36 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Updating existing volume attachment record: 2d220b09-0115-442f-b476-72c5d1b50475 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 02:58:29 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:58:29 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3423223151' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:58:29 np0005539552 nova_compute[233724]: 2025-11-29 07:58:29.508 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:29 np0005539552 nova_compute[233724]: 2025-11-29 07:58:29.647 233728 DEBUG nova.objects.instance [None req-08cc25fa-2cfd-4f9f-a94c-4bc2bc8a3b36 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Lazy-loading 'flavor' on Instance uuid 8b9831df-d3ee-436b-af4e-ad429c1c0227 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:58:29 np0005539552 nova_compute[233724]: 2025-11-29 07:58:29.678 233728 DEBUG nova.virt.libvirt.guest [None req-08cc25fa-2cfd-4f9f-a94c-4bc2bc8a3b36 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 02:58:29 np0005539552 nova_compute[233724]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 02:58:29 np0005539552 nova_compute[233724]:  <source protocol="rbd" name="volumes/volume-ad8203ea-9ab4-445c-8faf-cf62075ec75c">
Nov 29 02:58:29 np0005539552 nova_compute[233724]:    <host name="192.168.122.100" port="6789"/>
Nov 29 02:58:29 np0005539552 nova_compute[233724]:    <host name="192.168.122.102" port="6789"/>
Nov 29 02:58:29 np0005539552 nova_compute[233724]:    <host name="192.168.122.101" port="6789"/>
Nov 29 02:58:29 np0005539552 nova_compute[233724]:  </source>
Nov 29 02:58:29 np0005539552 nova_compute[233724]:  <auth username="openstack">
Nov 29 02:58:29 np0005539552 nova_compute[233724]:    <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:58:29 np0005539552 nova_compute[233724]:  </auth>
Nov 29 02:58:29 np0005539552 nova_compute[233724]:  <target dev="sdc" bus="scsi"/>
Nov 29 02:58:29 np0005539552 nova_compute[233724]:  <serial>ad8203ea-9ab4-445c-8faf-cf62075ec75c</serial>
Nov 29 02:58:29 np0005539552 nova_compute[233724]:  <address type="drive" controller="0" unit="2"/>
Nov 29 02:58:29 np0005539552 nova_compute[233724]: </disk>
Nov 29 02:58:29 np0005539552 nova_compute[233724]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 02:58:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:29.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:30.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:31 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:31.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:32.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:33 np0005539552 nova_compute[233724]: 2025-11-29 07:58:33.162 233728 DEBUG nova.virt.libvirt.driver [None req-08cc25fa-2cfd-4f9f-a94c-4bc2bc8a3b36 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:58:33 np0005539552 nova_compute[233724]: 2025-11-29 07:58:33.162 233728 DEBUG nova.virt.libvirt.driver [None req-08cc25fa-2cfd-4f9f-a94c-4bc2bc8a3b36 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] No BDM found with device name sdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:58:33 np0005539552 nova_compute[233724]: 2025-11-29 07:58:33.163 233728 DEBUG nova.virt.libvirt.driver [None req-08cc25fa-2cfd-4f9f-a94c-4bc2bc8a3b36 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] No BDM found with device name sdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:58:33 np0005539552 nova_compute[233724]: 2025-11-29 07:58:33.164 233728 DEBUG nova.virt.libvirt.driver [None req-08cc25fa-2cfd-4f9f-a94c-4bc2bc8a3b36 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] No VIF found with MAC fa:16:3e:e8:6d:dc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 02:58:33 np0005539552 nova_compute[233724]: 2025-11-29 07:58:33.376 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:33 np0005539552 nova_compute[233724]: 2025-11-29 07:58:33.499 233728 DEBUG oslo_concurrency.lockutils [None req-08cc25fa-2cfd-4f9f-a94c-4bc2bc8a3b36 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Lock "8b9831df-d3ee-436b-af4e-ad429c1c0227" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 5.336s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:58:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:33.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:58:33 np0005539552 podman[248346]: 2025-11-29 07:58:33.993602858 +0000 UTC m=+0.076460396 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 02:58:34 np0005539552 podman[248347]: 2025-11-29 07:58:34.044388927 +0000 UTC m=+0.125272261 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Nov 29 02:58:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:34.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:34 np0005539552 podman[248348]: 2025-11-29 07:58:34.065640434 +0000 UTC m=+0.147315640 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 02:58:34 np0005539552 nova_compute[233724]: 2025-11-29 07:58:34.509 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:34 np0005539552 nova_compute[233724]: 2025-11-29 07:58:34.967 233728 DEBUG oslo_concurrency.lockutils [None req-9869f21a-dedb-4d4b-9b4b-c7b17ad1b677 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Acquiring lock "8b9831df-d3ee-436b-af4e-ad429c1c0227" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:34 np0005539552 nova_compute[233724]: 2025-11-29 07:58:34.968 233728 DEBUG oslo_concurrency.lockutils [None req-9869f21a-dedb-4d4b-9b4b-c7b17ad1b677 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Lock "8b9831df-d3ee-436b-af4e-ad429c1c0227" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:34 np0005539552 nova_compute[233724]: 2025-11-29 07:58:34.988 233728 INFO nova.compute.manager [None req-9869f21a-dedb-4d4b-9b4b-c7b17ad1b677 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Detaching volume ad8203ea-9ab4-445c-8faf-cf62075ec75c#033[00m
Nov 29 02:58:35 np0005539552 nova_compute[233724]: 2025-11-29 07:58:35.258 233728 INFO nova.virt.block_device [None req-9869f21a-dedb-4d4b-9b4b-c7b17ad1b677 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Attempting to driver detach volume ad8203ea-9ab4-445c-8faf-cf62075ec75c from mountpoint /dev/sdc#033[00m
Nov 29 02:58:35 np0005539552 nova_compute[233724]: 2025-11-29 07:58:35.269 233728 DEBUG nova.virt.libvirt.driver [None req-9869f21a-dedb-4d4b-9b4b-c7b17ad1b677 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Attempting to detach device sdc from instance 8b9831df-d3ee-436b-af4e-ad429c1c0227 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 02:58:35 np0005539552 nova_compute[233724]: 2025-11-29 07:58:35.270 233728 DEBUG nova.virt.libvirt.guest [None req-9869f21a-dedb-4d4b-9b4b-c7b17ad1b677 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 02:58:35 np0005539552 nova_compute[233724]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 02:58:35 np0005539552 nova_compute[233724]:  <source protocol="rbd" name="volumes/volume-ad8203ea-9ab4-445c-8faf-cf62075ec75c">
Nov 29 02:58:35 np0005539552 nova_compute[233724]:    <host name="192.168.122.100" port="6789"/>
Nov 29 02:58:35 np0005539552 nova_compute[233724]:    <host name="192.168.122.102" port="6789"/>
Nov 29 02:58:35 np0005539552 nova_compute[233724]:    <host name="192.168.122.101" port="6789"/>
Nov 29 02:58:35 np0005539552 nova_compute[233724]:  </source>
Nov 29 02:58:35 np0005539552 nova_compute[233724]:  <target dev="sdc" bus="scsi"/>
Nov 29 02:58:35 np0005539552 nova_compute[233724]:  <serial>ad8203ea-9ab4-445c-8faf-cf62075ec75c</serial>
Nov 29 02:58:35 np0005539552 nova_compute[233724]:  <address type="drive" controller="0" bus="0" target="0" unit="2"/>
Nov 29 02:58:35 np0005539552 nova_compute[233724]: </disk>
Nov 29 02:58:35 np0005539552 nova_compute[233724]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 02:58:35 np0005539552 nova_compute[233724]: 2025-11-29 07:58:35.282 233728 INFO nova.virt.libvirt.driver [None req-9869f21a-dedb-4d4b-9b4b-c7b17ad1b677 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Successfully detached device sdc from instance 8b9831df-d3ee-436b-af4e-ad429c1c0227 from the persistent domain config.#033[00m
Nov 29 02:58:35 np0005539552 nova_compute[233724]: 2025-11-29 07:58:35.283 233728 DEBUG nova.virt.libvirt.driver [None req-9869f21a-dedb-4d4b-9b4b-c7b17ad1b677 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] (1/8): Attempting to detach device sdc with device alias scsi0-0-0-2 from instance 8b9831df-d3ee-436b-af4e-ad429c1c0227 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 29 02:58:35 np0005539552 nova_compute[233724]: 2025-11-29 07:58:35.284 233728 DEBUG nova.virt.libvirt.guest [None req-9869f21a-dedb-4d4b-9b4b-c7b17ad1b677 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 02:58:35 np0005539552 nova_compute[233724]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 02:58:35 np0005539552 nova_compute[233724]:  <source protocol="rbd" name="volumes/volume-ad8203ea-9ab4-445c-8faf-cf62075ec75c">
Nov 29 02:58:35 np0005539552 nova_compute[233724]:    <host name="192.168.122.100" port="6789"/>
Nov 29 02:58:35 np0005539552 nova_compute[233724]:    <host name="192.168.122.102" port="6789"/>
Nov 29 02:58:35 np0005539552 nova_compute[233724]:    <host name="192.168.122.101" port="6789"/>
Nov 29 02:58:35 np0005539552 nova_compute[233724]:  </source>
Nov 29 02:58:35 np0005539552 nova_compute[233724]:  <target dev="sdc" bus="scsi"/>
Nov 29 02:58:35 np0005539552 nova_compute[233724]:  <serial>ad8203ea-9ab4-445c-8faf-cf62075ec75c</serial>
Nov 29 02:58:35 np0005539552 nova_compute[233724]:  <address type="drive" controller="0" bus="0" target="0" unit="2"/>
Nov 29 02:58:35 np0005539552 nova_compute[233724]: </disk>
Nov 29 02:58:35 np0005539552 nova_compute[233724]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 02:58:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:35.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:36.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:36 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:36 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:58:37 np0005539552 nova_compute[233724]: 2025-11-29 07:58:37.222 233728 DEBUG nova.virt.libvirt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Received event <DeviceRemovedEvent: 1764403117.22222, 8b9831df-d3ee-436b-af4e-ad429c1c0227 => scsi0-0-0-2> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 29 02:58:37 np0005539552 nova_compute[233724]: 2025-11-29 07:58:37.223 233728 DEBUG nova.virt.libvirt.driver [None req-9869f21a-dedb-4d4b-9b4b-c7b17ad1b677 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Start waiting for the detach event from libvirt for device sdc with device alias scsi0-0-0-2 for instance 8b9831df-d3ee-436b-af4e-ad429c1c0227 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 29 02:58:37 np0005539552 nova_compute[233724]: 2025-11-29 07:58:37.225 233728 INFO nova.virt.libvirt.driver [None req-9869f21a-dedb-4d4b-9b4b-c7b17ad1b677 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Successfully detached device sdc from instance 8b9831df-d3ee-436b-af4e-ad429c1c0227 from the live domain config.#033[00m
Nov 29 02:58:37 np0005539552 nova_compute[233724]: 2025-11-29 07:58:37.505 233728 DEBUG nova.objects.instance [None req-9869f21a-dedb-4d4b-9b4b-c7b17ad1b677 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Lazy-loading 'flavor' on Instance uuid 8b9831df-d3ee-436b-af4e-ad429c1c0227 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:58:37 np0005539552 nova_compute[233724]: 2025-11-29 07:58:37.546 233728 DEBUG oslo_concurrency.lockutils [None req-9869f21a-dedb-4d4b-9b4b-c7b17ad1b677 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Lock "8b9831df-d3ee-436b-af4e-ad429c1c0227" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 2.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:37.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:38.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:38 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:58:38 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 02:58:38 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:58:38 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:58:38 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:58:38 np0005539552 nova_compute[233724]: 2025-11-29 07:58:38.379 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:39 np0005539552 nova_compute[233724]: 2025-11-29 07:58:39.512 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:39 np0005539552 nova_compute[233724]: 2025-11-29 07:58:39.609 233728 DEBUG oslo_concurrency.lockutils [None req-32eb6dd9-1504-4f0e-a835-e30129d446fb 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Acquiring lock "8b9831df-d3ee-436b-af4e-ad429c1c0227" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:39 np0005539552 nova_compute[233724]: 2025-11-29 07:58:39.610 233728 DEBUG oslo_concurrency.lockutils [None req-32eb6dd9-1504-4f0e-a835-e30129d446fb 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Lock "8b9831df-d3ee-436b-af4e-ad429c1c0227" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:39 np0005539552 nova_compute[233724]: 2025-11-29 07:58:39.610 233728 DEBUG oslo_concurrency.lockutils [None req-32eb6dd9-1504-4f0e-a835-e30129d446fb 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Acquiring lock "8b9831df-d3ee-436b-af4e-ad429c1c0227-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:39 np0005539552 nova_compute[233724]: 2025-11-29 07:58:39.610 233728 DEBUG oslo_concurrency.lockutils [None req-32eb6dd9-1504-4f0e-a835-e30129d446fb 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Lock "8b9831df-d3ee-436b-af4e-ad429c1c0227-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:39 np0005539552 nova_compute[233724]: 2025-11-29 07:58:39.610 233728 DEBUG oslo_concurrency.lockutils [None req-32eb6dd9-1504-4f0e-a835-e30129d446fb 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Lock "8b9831df-d3ee-436b-af4e-ad429c1c0227-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:39 np0005539552 nova_compute[233724]: 2025-11-29 07:58:39.611 233728 INFO nova.compute.manager [None req-32eb6dd9-1504-4f0e-a835-e30129d446fb 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Terminating instance#033[00m
Nov 29 02:58:39 np0005539552 nova_compute[233724]: 2025-11-29 07:58:39.612 233728 DEBUG nova.compute.manager [None req-32eb6dd9-1504-4f0e-a835-e30129d446fb 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:58:39 np0005539552 kernel: tap2e934383-f3 (unregistering): left promiscuous mode
Nov 29 02:58:39 np0005539552 NetworkManager[48926]: <info>  [1764403119.6753] device (tap2e934383-f3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:58:39 np0005539552 ovn_controller[133798]: 2025-11-29T07:58:39Z|00115|binding|INFO|Releasing lport 2e934383-f346-4336-b1b8-e866fb05bef3 from this chassis (sb_readonly=0)
Nov 29 02:58:39 np0005539552 ovn_controller[133798]: 2025-11-29T07:58:39Z|00116|binding|INFO|Setting lport 2e934383-f346-4336-b1b8-e866fb05bef3 down in Southbound
Nov 29 02:58:39 np0005539552 ovn_controller[133798]: 2025-11-29T07:58:39Z|00117|binding|INFO|Removing iface tap2e934383-f3 ovn-installed in OVS
Nov 29 02:58:39 np0005539552 nova_compute[233724]: 2025-11-29 07:58:39.684 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:39 np0005539552 nova_compute[233724]: 2025-11-29 07:58:39.686 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:58:39.692 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:6d:dc 10.100.0.5'], port_security=['fa:16:3e:e8:6d:dc 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '8b9831df-d3ee-436b-af4e-ad429c1c0227', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-75c19d22-1ac2-46dc-b079-76d980a382ed', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4004f4fd97dd4d0e8e83dd715ffb8e9c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cd8d6076-3342-42f8-a8a3-018d4d562660', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.184'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=39b1f3a8-26a7-4a17-90f2-a717db9d272a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=2e934383-f346-4336-b1b8-e866fb05bef3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:58:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:58:39.694 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 2e934383-f346-4336-b1b8-e866fb05bef3 in datapath 75c19d22-1ac2-46dc-b079-76d980a382ed unbound from our chassis#033[00m
Nov 29 02:58:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:58:39.695 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 75c19d22-1ac2-46dc-b079-76d980a382ed, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:58:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:58:39.697 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[70254c4e-d9eb-4ec0-aded-f22bbe79b5e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:58:39.698 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-75c19d22-1ac2-46dc-b079-76d980a382ed namespace which is not needed anymore#033[00m
Nov 29 02:58:39 np0005539552 nova_compute[233724]: 2025-11-29 07:58:39.704 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:39 np0005539552 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d0000001c.scope: Deactivated successfully.
Nov 29 02:58:39 np0005539552 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d0000001c.scope: Consumed 15.271s CPU time.
Nov 29 02:58:39 np0005539552 systemd-machined[196379]: Machine qemu-9-instance-0000001c terminated.
Nov 29 02:58:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:39.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:39 np0005539552 neutron-haproxy-ovnmeta-75c19d22-1ac2-46dc-b079-76d980a382ed[247432]: [NOTICE]   (247436) : haproxy version is 2.8.14-c23fe91
Nov 29 02:58:39 np0005539552 neutron-haproxy-ovnmeta-75c19d22-1ac2-46dc-b079-76d980a382ed[247432]: [NOTICE]   (247436) : path to executable is /usr/sbin/haproxy
Nov 29 02:58:39 np0005539552 neutron-haproxy-ovnmeta-75c19d22-1ac2-46dc-b079-76d980a382ed[247432]: [WARNING]  (247436) : Exiting Master process...
Nov 29 02:58:39 np0005539552 neutron-haproxy-ovnmeta-75c19d22-1ac2-46dc-b079-76d980a382ed[247432]: [ALERT]    (247436) : Current worker (247438) exited with code 143 (Terminated)
Nov 29 02:58:39 np0005539552 neutron-haproxy-ovnmeta-75c19d22-1ac2-46dc-b079-76d980a382ed[247432]: [WARNING]  (247436) : All workers exited. Exiting... (0)
Nov 29 02:58:39 np0005539552 systemd[1]: libpod-815c17f6228eb255ea960d102666eb6014bf0ec1c837b68c4018b79efc9da300.scope: Deactivated successfully.
Nov 29 02:58:39 np0005539552 podman[248571]: 2025-11-29 07:58:39.838404756 +0000 UTC m=+0.047307495 container died 815c17f6228eb255ea960d102666eb6014bf0ec1c837b68c4018b79efc9da300 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-75c19d22-1ac2-46dc-b079-76d980a382ed, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:58:39 np0005539552 nova_compute[233724]: 2025-11-29 07:58:39.844 233728 INFO nova.virt.libvirt.driver [-] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Instance destroyed successfully.#033[00m
Nov 29 02:58:39 np0005539552 nova_compute[233724]: 2025-11-29 07:58:39.845 233728 DEBUG nova.objects.instance [None req-32eb6dd9-1504-4f0e-a835-e30129d446fb 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Lazy-loading 'resources' on Instance uuid 8b9831df-d3ee-436b-af4e-ad429c1c0227 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:58:39 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-815c17f6228eb255ea960d102666eb6014bf0ec1c837b68c4018b79efc9da300-userdata-shm.mount: Deactivated successfully.
Nov 29 02:58:39 np0005539552 systemd[1]: var-lib-containers-storage-overlay-d93a3760e5703585b84a08902407dc8fb38aa6dbcab5d2ac598c00175fa61bac-merged.mount: Deactivated successfully.
Nov 29 02:58:39 np0005539552 nova_compute[233724]: 2025-11-29 07:58:39.874 233728 DEBUG nova.virt.libvirt.vif [None req-32eb6dd9-1504-4f0e-a835-e30129d446fb 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:57:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachSCSIVolumeTestJSON-server-1964225932',display_name='tempest-AttachSCSIVolumeTestJSON-server-1964225932',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachscsivolumetestjson-server-1964225932',id=28,image_ref='ca49c32b-cb06-40a8-be71-35eeb05e9ca2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMzELH7XLAV22uc90G7zhFInp2AEhOf5gIszA4btoNbMLg4GwRtbFwOgsjAdQ48WfVVFvrsN82F0fsoTLA3D6IXsd/RB/ZhS2LC3RCEPZmLU5GMeM2SAv5CXk19zhMmOUw==',key_name='tempest-keypair-80928421',keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:57:50Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4004f4fd97dd4d0e8e83dd715ffb8e9c',ramdisk_id='',reservation_id='r-3rqc177r',resources=None,root_device_name='/dev/sda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ca49c32b-cb06-40a8-be71-35eeb05e9ca2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='scsi',image_hw_disk_bus='scsi',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_scsi_model='virtio-scsi',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachSCSIVolumeTestJSON-139358515',owner_user_name='tempest-AttachSCSIVolumeTestJSON-139358515-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:57:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='07c7f32f7d494f29b99afe2b074d0f68',uuid=8b9831df-d3ee-436b-af4e-ad429c1c0227,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2e934383-f346-4336-b1b8-e866fb05bef3", "address": "fa:16:3e:e8:6d:dc", "network": {"id": "75c19d22-1ac2-46dc-b079-76d980a382ed", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-65381912-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4004f4fd97dd4d0e8e83dd715ffb8e9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e934383-f3", "ovs_interfaceid": "2e934383-f346-4336-b1b8-e866fb05bef3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:58:39 np0005539552 nova_compute[233724]: 2025-11-29 07:58:39.876 233728 DEBUG nova.network.os_vif_util [None req-32eb6dd9-1504-4f0e-a835-e30129d446fb 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Converting VIF {"id": "2e934383-f346-4336-b1b8-e866fb05bef3", "address": "fa:16:3e:e8:6d:dc", "network": {"id": "75c19d22-1ac2-46dc-b079-76d980a382ed", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-65381912-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4004f4fd97dd4d0e8e83dd715ffb8e9c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e934383-f3", "ovs_interfaceid": "2e934383-f346-4336-b1b8-e866fb05bef3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:58:39 np0005539552 nova_compute[233724]: 2025-11-29 07:58:39.876 233728 DEBUG nova.network.os_vif_util [None req-32eb6dd9-1504-4f0e-a835-e30129d446fb 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e8:6d:dc,bridge_name='br-int',has_traffic_filtering=True,id=2e934383-f346-4336-b1b8-e866fb05bef3,network=Network(75c19d22-1ac2-46dc-b079-76d980a382ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e934383-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:58:39 np0005539552 nova_compute[233724]: 2025-11-29 07:58:39.877 233728 DEBUG os_vif [None req-32eb6dd9-1504-4f0e-a835-e30129d446fb 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e8:6d:dc,bridge_name='br-int',has_traffic_filtering=True,id=2e934383-f346-4336-b1b8-e866fb05bef3,network=Network(75c19d22-1ac2-46dc-b079-76d980a382ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e934383-f3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:58:39 np0005539552 podman[248571]: 2025-11-29 07:58:39.878750652 +0000 UTC m=+0.087653391 container cleanup 815c17f6228eb255ea960d102666eb6014bf0ec1c837b68c4018b79efc9da300 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-75c19d22-1ac2-46dc-b079-76d980a382ed, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:58:39 np0005539552 nova_compute[233724]: 2025-11-29 07:58:39.879 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:39 np0005539552 nova_compute[233724]: 2025-11-29 07:58:39.879 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2e934383-f3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:58:39 np0005539552 systemd[1]: libpod-conmon-815c17f6228eb255ea960d102666eb6014bf0ec1c837b68c4018b79efc9da300.scope: Deactivated successfully.
Nov 29 02:58:39 np0005539552 nova_compute[233724]: 2025-11-29 07:58:39.941 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:39 np0005539552 nova_compute[233724]: 2025-11-29 07:58:39.943 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:39 np0005539552 nova_compute[233724]: 2025-11-29 07:58:39.945 233728 INFO os_vif [None req-32eb6dd9-1504-4f0e-a835-e30129d446fb 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e8:6d:dc,bridge_name='br-int',has_traffic_filtering=True,id=2e934383-f346-4336-b1b8-e866fb05bef3,network=Network(75c19d22-1ac2-46dc-b079-76d980a382ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e934383-f3')#033[00m
Nov 29 02:58:39 np0005539552 podman[248611]: 2025-11-29 07:58:39.946932473 +0000 UTC m=+0.047706667 container remove 815c17f6228eb255ea960d102666eb6014bf0ec1c837b68c4018b79efc9da300 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-75c19d22-1ac2-46dc-b079-76d980a382ed, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 02:58:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:58:39.953 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[52817ae5-6eb7-4739-8474-cc560c4a1b1f]: (4, ('Sat Nov 29 07:58:39 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-75c19d22-1ac2-46dc-b079-76d980a382ed (815c17f6228eb255ea960d102666eb6014bf0ec1c837b68c4018b79efc9da300)\n815c17f6228eb255ea960d102666eb6014bf0ec1c837b68c4018b79efc9da300\nSat Nov 29 07:58:39 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-75c19d22-1ac2-46dc-b079-76d980a382ed (815c17f6228eb255ea960d102666eb6014bf0ec1c837b68c4018b79efc9da300)\n815c17f6228eb255ea960d102666eb6014bf0ec1c837b68c4018b79efc9da300\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:58:39.955 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3b5ae4e8-ee73-45b0-8574-449e799fdc6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:58:39.956 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap75c19d22-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:58:39 np0005539552 kernel: tap75c19d22-10: left promiscuous mode
Nov 29 02:58:39 np0005539552 nova_compute[233724]: 2025-11-29 07:58:39.965 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:39 np0005539552 nova_compute[233724]: 2025-11-29 07:58:39.970 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:58:39.973 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[fc7f011b-0373-4e38-8f83-1f8fb9df479e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:58:39.992 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[cb75281a-d9bf-418c-9aa7-2e6528fd7379]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:58:39.993 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[24173686-995b-427f-a608-6c64fe5c82da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:58:40.009 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[77a1db55-6576-4e47-8742-4fdde1b491b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611512, 'reachable_time': 25191, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248644, 'error': None, 'target': 'ovnmeta-75c19d22-1ac2-46dc-b079-76d980a382ed', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:58:40.011 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-75c19d22-1ac2-46dc-b079-76d980a382ed deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 02:58:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:58:40.011 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[b085a02e-be51-4744-81c8-19b9d87878d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:40 np0005539552 systemd[1]: run-netns-ovnmeta\x2d75c19d22\x2d1ac2\x2d46dc\x2db079\x2d76d980a382ed.mount: Deactivated successfully.
Nov 29 02:58:40 np0005539552 nova_compute[233724]: 2025-11-29 07:58:40.016 233728 DEBUG nova.compute.manager [req-e5f2ad9e-5b33-4324-ba8d-1b58002772f6 req-879e607a-d6e0-4b32-aa1d-f02d76a44929 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Received event network-vif-unplugged-2e934383-f346-4336-b1b8-e866fb05bef3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:58:40 np0005539552 nova_compute[233724]: 2025-11-29 07:58:40.016 233728 DEBUG oslo_concurrency.lockutils [req-e5f2ad9e-5b33-4324-ba8d-1b58002772f6 req-879e607a-d6e0-4b32-aa1d-f02d76a44929 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8b9831df-d3ee-436b-af4e-ad429c1c0227-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:40 np0005539552 nova_compute[233724]: 2025-11-29 07:58:40.017 233728 DEBUG oslo_concurrency.lockutils [req-e5f2ad9e-5b33-4324-ba8d-1b58002772f6 req-879e607a-d6e0-4b32-aa1d-f02d76a44929 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8b9831df-d3ee-436b-af4e-ad429c1c0227-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:40 np0005539552 nova_compute[233724]: 2025-11-29 07:58:40.017 233728 DEBUG oslo_concurrency.lockutils [req-e5f2ad9e-5b33-4324-ba8d-1b58002772f6 req-879e607a-d6e0-4b32-aa1d-f02d76a44929 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8b9831df-d3ee-436b-af4e-ad429c1c0227-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:40 np0005539552 nova_compute[233724]: 2025-11-29 07:58:40.017 233728 DEBUG nova.compute.manager [req-e5f2ad9e-5b33-4324-ba8d-1b58002772f6 req-879e607a-d6e0-4b32-aa1d-f02d76a44929 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] No waiting events found dispatching network-vif-unplugged-2e934383-f346-4336-b1b8-e866fb05bef3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:58:40 np0005539552 nova_compute[233724]: 2025-11-29 07:58:40.018 233728 DEBUG nova.compute.manager [req-e5f2ad9e-5b33-4324-ba8d-1b58002772f6 req-879e607a-d6e0-4b32-aa1d-f02d76a44929 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Received event network-vif-unplugged-2e934383-f346-4336-b1b8-e866fb05bef3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:58:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:40.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:40 np0005539552 nova_compute[233724]: 2025-11-29 07:58:40.356 233728 INFO nova.virt.libvirt.driver [None req-32eb6dd9-1504-4f0e-a835-e30129d446fb 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Deleting instance files /var/lib/nova/instances/8b9831df-d3ee-436b-af4e-ad429c1c0227_del#033[00m
Nov 29 02:58:40 np0005539552 nova_compute[233724]: 2025-11-29 07:58:40.357 233728 INFO nova.virt.libvirt.driver [None req-32eb6dd9-1504-4f0e-a835-e30129d446fb 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Deletion of /var/lib/nova/instances/8b9831df-d3ee-436b-af4e-ad429c1c0227_del complete#033[00m
Nov 29 02:58:40 np0005539552 nova_compute[233724]: 2025-11-29 07:58:40.446 233728 INFO nova.compute.manager [None req-32eb6dd9-1504-4f0e-a835-e30129d446fb 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Took 0.83 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:58:40 np0005539552 nova_compute[233724]: 2025-11-29 07:58:40.447 233728 DEBUG oslo.service.loopingcall [None req-32eb6dd9-1504-4f0e-a835-e30129d446fb 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:58:40 np0005539552 nova_compute[233724]: 2025-11-29 07:58:40.447 233728 DEBUG nova.compute.manager [-] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:58:40 np0005539552 nova_compute[233724]: 2025-11-29 07:58:40.448 233728 DEBUG nova.network.neutron [-] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:58:41 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:41.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:42.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:43 np0005539552 nova_compute[233724]: 2025-11-29 07:58:43.509 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:58:43.509 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:58:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:58:43.510 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:58:43 np0005539552 nova_compute[233724]: 2025-11-29 07:58:43.617 233728 DEBUG nova.compute.manager [req-938e54e1-f20c-4c3b-9aa7-bb61f24e59b4 req-38e1b4df-1fe3-44ae-bbe6-bd71e9294586 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Received event network-vif-plugged-2e934383-f346-4336-b1b8-e866fb05bef3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:58:43 np0005539552 nova_compute[233724]: 2025-11-29 07:58:43.618 233728 DEBUG oslo_concurrency.lockutils [req-938e54e1-f20c-4c3b-9aa7-bb61f24e59b4 req-38e1b4df-1fe3-44ae-bbe6-bd71e9294586 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8b9831df-d3ee-436b-af4e-ad429c1c0227-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:43 np0005539552 nova_compute[233724]: 2025-11-29 07:58:43.618 233728 DEBUG oslo_concurrency.lockutils [req-938e54e1-f20c-4c3b-9aa7-bb61f24e59b4 req-38e1b4df-1fe3-44ae-bbe6-bd71e9294586 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8b9831df-d3ee-436b-af4e-ad429c1c0227-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:43 np0005539552 nova_compute[233724]: 2025-11-29 07:58:43.618 233728 DEBUG oslo_concurrency.lockutils [req-938e54e1-f20c-4c3b-9aa7-bb61f24e59b4 req-38e1b4df-1fe3-44ae-bbe6-bd71e9294586 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8b9831df-d3ee-436b-af4e-ad429c1c0227-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:43 np0005539552 nova_compute[233724]: 2025-11-29 07:58:43.618 233728 DEBUG nova.compute.manager [req-938e54e1-f20c-4c3b-9aa7-bb61f24e59b4 req-38e1b4df-1fe3-44ae-bbe6-bd71e9294586 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] No waiting events found dispatching network-vif-plugged-2e934383-f346-4336-b1b8-e866fb05bef3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:58:43 np0005539552 nova_compute[233724]: 2025-11-29 07:58:43.619 233728 WARNING nova.compute.manager [req-938e54e1-f20c-4c3b-9aa7-bb61f24e59b4 req-38e1b4df-1fe3-44ae-bbe6-bd71e9294586 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Received unexpected event network-vif-plugged-2e934383-f346-4336-b1b8-e866fb05bef3 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 02:58:43 np0005539552 nova_compute[233724]: 2025-11-29 07:58:43.683 233728 DEBUG nova.network.neutron [-] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:58:43 np0005539552 nova_compute[233724]: 2025-11-29 07:58:43.718 233728 INFO nova.compute.manager [-] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Took 3.27 seconds to deallocate network for instance.#033[00m
Nov 29 02:58:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:43.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:43 np0005539552 nova_compute[233724]: 2025-11-29 07:58:43.791 233728 DEBUG oslo_concurrency.lockutils [None req-32eb6dd9-1504-4f0e-a835-e30129d446fb 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:43 np0005539552 nova_compute[233724]: 2025-11-29 07:58:43.791 233728 DEBUG oslo_concurrency.lockutils [None req-32eb6dd9-1504-4f0e-a835-e30129d446fb 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:43 np0005539552 nova_compute[233724]: 2025-11-29 07:58:43.841 233728 DEBUG nova.compute.manager [req-56329939-a0e5-4cd2-854a-c04fee8a0b7d req-6e281c55-c954-4522-879b-25065acfb066 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Received event network-vif-deleted-2e934383-f346-4336-b1b8-e866fb05bef3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:58:43 np0005539552 nova_compute[233724]: 2025-11-29 07:58:43.939 233728 DEBUG oslo_concurrency.processutils [None req-32eb6dd9-1504-4f0e-a835-e30129d446fb 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:44.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:44 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:58:44 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3721624685' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:58:44 np0005539552 nova_compute[233724]: 2025-11-29 07:58:44.375 233728 DEBUG oslo_concurrency.processutils [None req-32eb6dd9-1504-4f0e-a835-e30129d446fb 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:58:44 np0005539552 nova_compute[233724]: 2025-11-29 07:58:44.381 233728 DEBUG nova.compute.provider_tree [None req-32eb6dd9-1504-4f0e-a835-e30129d446fb 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:58:44 np0005539552 nova_compute[233724]: 2025-11-29 07:58:44.434 233728 DEBUG nova.scheduler.client.report [None req-32eb6dd9-1504-4f0e-a835-e30129d446fb 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:58:44 np0005539552 nova_compute[233724]: 2025-11-29 07:58:44.514 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:44 np0005539552 nova_compute[233724]: 2025-11-29 07:58:44.542 233728 DEBUG oslo_concurrency.lockutils [None req-32eb6dd9-1504-4f0e-a835-e30129d446fb 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:44 np0005539552 nova_compute[233724]: 2025-11-29 07:58:44.589 233728 INFO nova.scheduler.client.report [None req-32eb6dd9-1504-4f0e-a835-e30129d446fb 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Deleted allocations for instance 8b9831df-d3ee-436b-af4e-ad429c1c0227#033[00m
Nov 29 02:58:44 np0005539552 nova_compute[233724]: 2025-11-29 07:58:44.772 233728 DEBUG oslo_concurrency.lockutils [None req-32eb6dd9-1504-4f0e-a835-e30129d446fb 07c7f32f7d494f29b99afe2b074d0f68 4004f4fd97dd4d0e8e83dd715ffb8e9c - - default default] Lock "8b9831df-d3ee-436b-af4e-ad429c1c0227" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.162s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:44 np0005539552 nova_compute[233724]: 2025-11-29 07:58:44.942 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:45.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:46.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:46 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:58:46 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:58:46 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:58:47.512 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:58:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:47.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:48.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:49 np0005539552 nova_compute[233724]: 2025-11-29 07:58:49.515 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:49.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:49 np0005539552 nova_compute[233724]: 2025-11-29 07:58:49.944 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:50.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:51 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965576f0 =====
Nov 29 02:58:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:52.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965576f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:52 np0005539552 radosgw[83248]: beast: 0x7fec965576f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:52.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e172 e172: 3 total, 3 up, 3 in
Nov 29 02:58:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:54.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965576f0 =====
Nov 29 02:58:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965576f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:54 np0005539552 radosgw[83248]: beast: 0x7fec965576f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:54.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:54 np0005539552 nova_compute[233724]: 2025-11-29 07:58:54.518 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:54 np0005539552 nova_compute[233724]: 2025-11-29 07:58:54.841 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403119.8392782, 8b9831df-d3ee-436b-af4e-ad429c1c0227 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:58:54 np0005539552 nova_compute[233724]: 2025-11-29 07:58:54.841 233728 INFO nova.compute.manager [-] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:58:54 np0005539552 nova_compute[233724]: 2025-11-29 07:58:54.892 233728 DEBUG nova.compute.manager [None req-a5797d1e-e6b1-45f2-b526-09eb135c1b9a - - - - - -] [instance: 8b9831df-d3ee-436b-af4e-ad429c1c0227] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:58:54 np0005539552 nova_compute[233724]: 2025-11-29 07:58:54.946 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:55 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e173 e173: 3 total, 3 up, 3 in
Nov 29 02:58:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:56.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:56.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:56 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:58.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:58:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:58.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:58 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #55. Immutable memtables: 0.
Nov 29 02:58:58 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:58:58.862805) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:58:58 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 55
Nov 29 02:58:58 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403138862924, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2610, "num_deletes": 510, "total_data_size": 5682776, "memory_usage": 5757664, "flush_reason": "Manual Compaction"}
Nov 29 02:58:58 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #56: started
Nov 29 02:58:58 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403138885498, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 56, "file_size": 3350767, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27151, "largest_seqno": 29756, "table_properties": {"data_size": 3340965, "index_size": 5592, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3141, "raw_key_size": 24738, "raw_average_key_size": 20, "raw_value_size": 3318885, "raw_average_value_size": 2693, "num_data_blocks": 243, "num_entries": 1232, "num_filter_entries": 1232, "num_deletions": 510, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402933, "oldest_key_time": 1764402933, "file_creation_time": 1764403138, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:58:58 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 22752 microseconds, and 9773 cpu microseconds.
Nov 29 02:58:58 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:58:58 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:58:58.885569) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #56: 3350767 bytes OK
Nov 29 02:58:58 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:58:58.885596) [db/memtable_list.cc:519] [default] Level-0 commit table #56 started
Nov 29 02:58:58 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:58:58.887616) [db/memtable_list.cc:722] [default] Level-0 commit table #56: memtable #1 done
Nov 29 02:58:58 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:58:58.887666) EVENT_LOG_v1 {"time_micros": 1764403138887659, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:58:58 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:58:58.887686) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:58:58 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 5670188, prev total WAL file size 5686751, number of live WAL files 2.
Nov 29 02:58:58 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000052.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:58:58 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:58:58.889483) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353033' seq:72057594037927935, type:22 .. '6C6F676D00373536' seq:0, type:0; will stop at (end)
Nov 29 02:58:58 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:58:58 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [56(3272KB)], [54(10MB)]
Nov 29 02:58:58 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403138889522, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [56], "files_L6": [54], "score": -1, "input_data_size": 14644453, "oldest_snapshot_seqno": -1}
Nov 29 02:58:58 np0005539552 nova_compute[233724]: 2025-11-29 07:58:58.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:58:58 np0005539552 nova_compute[233724]: 2025-11-29 07:58:58.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 02:58:58 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #57: 5990 keys, 10984874 bytes, temperature: kUnknown
Nov 29 02:58:58 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403138967252, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 57, "file_size": 10984874, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10944320, "index_size": 24508, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14981, "raw_key_size": 153424, "raw_average_key_size": 25, "raw_value_size": 10835844, "raw_average_value_size": 1808, "num_data_blocks": 992, "num_entries": 5990, "num_filter_entries": 5990, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764403138, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 57, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:58:58 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:58:58 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:58:58.967516) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 10984874 bytes
Nov 29 02:58:58 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:58:58.969473) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 188.2 rd, 141.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 10.8 +0.0 blob) out(10.5 +0.0 blob), read-write-amplify(7.6) write-amplify(3.3) OK, records in: 7008, records dropped: 1018 output_compression: NoCompression
Nov 29 02:58:58 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:58:58.969495) EVENT_LOG_v1 {"time_micros": 1764403138969483, "job": 32, "event": "compaction_finished", "compaction_time_micros": 77812, "compaction_time_cpu_micros": 30240, "output_level": 6, "num_output_files": 1, "total_output_size": 10984874, "num_input_records": 7008, "num_output_records": 5990, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:58:58 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:58:58 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403138970092, "job": 32, "event": "table_file_deletion", "file_number": 56}
Nov 29 02:58:58 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000054.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:58:58 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403138972372, "job": 32, "event": "table_file_deletion", "file_number": 54}
Nov 29 02:58:58 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:58:58.889362) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:58:58 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:58:58.972460) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:58:58 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:58:58.972467) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:58:58 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:58:58.972469) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:58:58 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:58:58.972470) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:58:58 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:58:58.972472) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:58:59 np0005539552 nova_compute[233724]: 2025-11-29 07:58:59.519 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:59 np0005539552 nova_compute[233724]: 2025-11-29 07:58:59.981 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:00.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:00.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:01 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:02.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:02.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:04.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:04.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:04 np0005539552 nova_compute[233724]: 2025-11-29 07:59:04.521 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:04 np0005539552 podman[248786]: 2025-11-29 07:59:04.971804546 +0000 UTC m=+0.054030208 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 02:59:04 np0005539552 podman[248785]: 2025-11-29 07:59:04.971836737 +0000 UTC m=+0.058446148 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Nov 29 02:59:04 np0005539552 nova_compute[233724]: 2025-11-29 07:59:04.982 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:04 np0005539552 podman[248787]: 2025-11-29 07:59:04.995502939 +0000 UTC m=+0.078474731 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller)
Nov 29 02:59:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:06.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:06.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:06 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:08.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:08.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:08 np0005539552 nova_compute[233724]: 2025-11-29 07:59:08.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:59:09 np0005539552 nova_compute[233724]: 2025-11-29 07:59:09.523 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:09 np0005539552 nova_compute[233724]: 2025-11-29 07:59:09.941 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:59:09 np0005539552 nova_compute[233724]: 2025-11-29 07:59:09.984 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:10.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:10.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:10 np0005539552 nova_compute[233724]: 2025-11-29 07:59:10.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:59:10 np0005539552 nova_compute[233724]: 2025-11-29 07:59:10.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:59:11 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:11 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e174 e174: 3 total, 3 up, 3 in
Nov 29 02:59:11 np0005539552 nova_compute[233724]: 2025-11-29 07:59:11.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:59:11 np0005539552 nova_compute[233724]: 2025-11-29 07:59:11.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:59:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:12.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:12.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:12 np0005539552 nova_compute[233724]: 2025-11-29 07:59:12.820 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "refresh_cache-fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:59:12 np0005539552 nova_compute[233724]: 2025-11-29 07:59:12.820 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquired lock "refresh_cache-fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:59:12 np0005539552 nova_compute[233724]: 2025-11-29 07:59:12.821 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 02:59:12 np0005539552 nova_compute[233724]: 2025-11-29 07:59:12.973 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:59:14 np0005539552 nova_compute[233724]: 2025-11-29 07:59:14.078 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:59:14 np0005539552 nova_compute[233724]: 2025-11-29 07:59:14.117 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Releasing lock "refresh_cache-fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:59:14 np0005539552 nova_compute[233724]: 2025-11-29 07:59:14.117 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 02:59:14 np0005539552 nova_compute[233724]: 2025-11-29 07:59:14.117 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:59:14 np0005539552 nova_compute[233724]: 2025-11-29 07:59:14.118 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:59:14 np0005539552 nova_compute[233724]: 2025-11-29 07:59:14.118 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:59:14 np0005539552 nova_compute[233724]: 2025-11-29 07:59:14.118 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:59:14 np0005539552 nova_compute[233724]: 2025-11-29 07:59:14.118 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:59:14 np0005539552 nova_compute[233724]: 2025-11-29 07:59:14.144 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:59:14 np0005539552 nova_compute[233724]: 2025-11-29 07:59:14.145 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:59:14 np0005539552 nova_compute[233724]: 2025-11-29 07:59:14.145 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:59:14 np0005539552 nova_compute[233724]: 2025-11-29 07:59:14.145 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:59:14 np0005539552 nova_compute[233724]: 2025-11-29 07:59:14.145 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:59:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:14.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:14.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:14 np0005539552 nova_compute[233724]: 2025-11-29 07:59:14.525 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:14 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:59:14 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3415781807' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:59:14 np0005539552 nova_compute[233724]: 2025-11-29 07:59:14.615 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:59:14 np0005539552 nova_compute[233724]: 2025-11-29 07:59:14.693 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:59:14 np0005539552 nova_compute[233724]: 2025-11-29 07:59:14.694 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:59:14 np0005539552 nova_compute[233724]: 2025-11-29 07:59:14.698 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:59:14 np0005539552 nova_compute[233724]: 2025-11-29 07:59:14.699 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:59:14 np0005539552 nova_compute[233724]: 2025-11-29 07:59:14.850 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:59:14 np0005539552 nova_compute[233724]: 2025-11-29 07:59:14.851 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4433MB free_disk=20.80707550048828GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:59:14 np0005539552 nova_compute[233724]: 2025-11-29 07:59:14.852 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:59:14 np0005539552 nova_compute[233724]: 2025-11-29 07:59:14.852 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:59:14 np0005539552 nova_compute[233724]: 2025-11-29 07:59:14.986 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:15 np0005539552 nova_compute[233724]: 2025-11-29 07:59:15.036 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 9eb89c30-3f33-4a7c-ae19-8312a2522b82 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:59:15 np0005539552 nova_compute[233724]: 2025-11-29 07:59:15.037 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:59:15 np0005539552 nova_compute[233724]: 2025-11-29 07:59:15.037 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:59:15 np0005539552 nova_compute[233724]: 2025-11-29 07:59:15.037 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=832MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:59:15 np0005539552 nova_compute[233724]: 2025-11-29 07:59:15.180 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:59:15 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:59:15 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/751628256' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:59:15 np0005539552 nova_compute[233724]: 2025-11-29 07:59:15.632 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:59:15 np0005539552 nova_compute[233724]: 2025-11-29 07:59:15.639 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:59:15 np0005539552 nova_compute[233724]: 2025-11-29 07:59:15.653 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:59:15 np0005539552 nova_compute[233724]: 2025-11-29 07:59:15.670 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:59:15 np0005539552 nova_compute[233724]: 2025-11-29 07:59:15.670 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.818s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:59:16 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #58. Immutable memtables: 0.
Nov 29 02:59:16 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:59:16.178285) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:59:16 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 58
Nov 29 02:59:16 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403156178344, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 380, "num_deletes": 251, "total_data_size": 400892, "memory_usage": 409336, "flush_reason": "Manual Compaction"}
Nov 29 02:59:16 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #59: started
Nov 29 02:59:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:16.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:16.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:16 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403156183297, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 59, "file_size": 264658, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29761, "largest_seqno": 30136, "table_properties": {"data_size": 262323, "index_size": 435, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5973, "raw_average_key_size": 19, "raw_value_size": 257609, "raw_average_value_size": 825, "num_data_blocks": 19, "num_entries": 312, "num_filter_entries": 312, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403138, "oldest_key_time": 1764403138, "file_creation_time": 1764403156, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:59:16 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 5073 microseconds, and 1811 cpu microseconds.
Nov 29 02:59:16 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:59:16 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:59:16.183354) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #59: 264658 bytes OK
Nov 29 02:59:16 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:59:16.183379) [db/memtable_list.cc:519] [default] Level-0 commit table #59 started
Nov 29 02:59:16 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:59:16.185362) [db/memtable_list.cc:722] [default] Level-0 commit table #59: memtable #1 done
Nov 29 02:59:16 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:59:16.185375) EVENT_LOG_v1 {"time_micros": 1764403156185370, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:59:16 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:59:16.185393) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:59:16 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 398329, prev total WAL file size 398329, number of live WAL files 2.
Nov 29 02:59:16 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000055.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:59:16 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:59:16.185790) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Nov 29 02:59:16 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:59:16 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [59(258KB)], [57(10MB)]
Nov 29 02:59:16 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403156185831, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [59], "files_L6": [57], "score": -1, "input_data_size": 11249532, "oldest_snapshot_seqno": -1}
Nov 29 02:59:16 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #60: 5787 keys, 9280665 bytes, temperature: kUnknown
Nov 29 02:59:16 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403156246320, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 60, "file_size": 9280665, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9242868, "index_size": 22199, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14533, "raw_key_size": 149988, "raw_average_key_size": 25, "raw_value_size": 9139325, "raw_average_value_size": 1579, "num_data_blocks": 888, "num_entries": 5787, "num_filter_entries": 5787, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764403156, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 60, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:59:16 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:59:16 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:59:16.246574) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 9280665 bytes
Nov 29 02:59:16 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:59:16.267720) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 185.7 rd, 153.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 10.5 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(77.6) write-amplify(35.1) OK, records in: 6302, records dropped: 515 output_compression: NoCompression
Nov 29 02:59:16 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:59:16.267785) EVENT_LOG_v1 {"time_micros": 1764403156267772, "job": 34, "event": "compaction_finished", "compaction_time_micros": 60574, "compaction_time_cpu_micros": 20515, "output_level": 6, "num_output_files": 1, "total_output_size": 9280665, "num_input_records": 6302, "num_output_records": 5787, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:59:16 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:59:16 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403156268268, "job": 34, "event": "table_file_deletion", "file_number": 59}
Nov 29 02:59:16 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000057.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:59:16 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403156270301, "job": 34, "event": "table_file_deletion", "file_number": 57}
Nov 29 02:59:16 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:59:16.185709) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:59:16 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:59:16.270426) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:59:16 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:59:16.270431) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:59:16 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:59:16.270432) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:59:16 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:59:16.270434) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:59:16 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-07:59:16.270435) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:59:16 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:17 np0005539552 nova_compute[233724]: 2025-11-29 07:59:17.666 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:59:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:18.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:18.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:18 np0005539552 nova_compute[233724]: 2025-11-29 07:59:18.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:59:18 np0005539552 nova_compute[233724]: 2025-11-29 07:59:18.925 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 02:59:18 np0005539552 nova_compute[233724]: 2025-11-29 07:59:18.940 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 02:59:19 np0005539552 nova_compute[233724]: 2025-11-29 07:59:19.526 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:19 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 02:59:19 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3040678217' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 02:59:19 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 02:59:19 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3040678217' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 02:59:19 np0005539552 nova_compute[233724]: 2025-11-29 07:59:19.987 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:20.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:20.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:59:20.608 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:59:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:59:20.608 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:59:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:59:20.608 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:59:21 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e175 e175: 3 total, 3 up, 3 in
Nov 29 02:59:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:22.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:22.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:24 np0005539552 nova_compute[233724]: 2025-11-29 07:59:24.021 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:24.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:24.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:24 np0005539552 nova_compute[233724]: 2025-11-29 07:59:24.230 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:24 np0005539552 nova_compute[233724]: 2025-11-29 07:59:24.528 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:25 np0005539552 nova_compute[233724]: 2025-11-29 07:59:25.036 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:26.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:26.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:26 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:28.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:28.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:29 np0005539552 nova_compute[233724]: 2025-11-29 07:59:29.530 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:30 np0005539552 nova_compute[233724]: 2025-11-29 07:59:30.038 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:30.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:30.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:31 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:31 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e176 e176: 3 total, 3 up, 3 in
Nov 29 02:59:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:32.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:32.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:34.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:34.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:34 np0005539552 nova_compute[233724]: 2025-11-29 07:59:34.532 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:35 np0005539552 nova_compute[233724]: 2025-11-29 07:59:35.040 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:35 np0005539552 podman[249011]: 2025-11-29 07:59:35.974708965 +0000 UTC m=+0.062248821 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 02:59:35 np0005539552 podman[249012]: 2025-11-29 07:59:35.996058695 +0000 UTC m=+0.083869348 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125)
Nov 29 02:59:36 np0005539552 podman[249013]: 2025-11-29 07:59:36.000403593 +0000 UTC m=+0.085458652 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Nov 29 02:59:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:36.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:36.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:36 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:38.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:38.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:39 np0005539552 nova_compute[233724]: 2025-11-29 07:59:39.533 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:40 np0005539552 nova_compute[233724]: 2025-11-29 07:59:40.041 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:40.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:40.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:41 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:42.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:42.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:44.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:44.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:44 np0005539552 nova_compute[233724]: 2025-11-29 07:59:44.534 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:45 np0005539552 nova_compute[233724]: 2025-11-29 07:59:45.042 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:46.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:46.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:46 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:48.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:48.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:48 np0005539552 nova_compute[233724]: 2025-11-29 07:59:48.613 233728 DEBUG oslo_concurrency.lockutils [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Acquiring lock "683be290-1274-4e4b-89d9-ff49c5831ea1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:59:48 np0005539552 nova_compute[233724]: 2025-11-29 07:59:48.613 233728 DEBUG oslo_concurrency.lockutils [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Lock "683be290-1274-4e4b-89d9-ff49c5831ea1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:59:48 np0005539552 nova_compute[233724]: 2025-11-29 07:59:48.641 233728 DEBUG nova.compute.manager [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:59:48 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:59:48 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:59:48 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:59:48 np0005539552 nova_compute[233724]: 2025-11-29 07:59:48.839 233728 DEBUG oslo_concurrency.lockutils [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:59:48 np0005539552 nova_compute[233724]: 2025-11-29 07:59:48.839 233728 DEBUG oslo_concurrency.lockutils [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:59:48 np0005539552 nova_compute[233724]: 2025-11-29 07:59:48.846 233728 DEBUG nova.virt.hardware [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:59:48 np0005539552 nova_compute[233724]: 2025-11-29 07:59:48.846 233728 INFO nova.compute.claims [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 02:59:49 np0005539552 nova_compute[233724]: 2025-11-29 07:59:49.496 233728 DEBUG oslo_concurrency.processutils [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:59:49 np0005539552 nova_compute[233724]: 2025-11-29 07:59:49.537 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:49 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:59:49 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/787703394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:59:49 np0005539552 nova_compute[233724]: 2025-11-29 07:59:49.949 233728 DEBUG oslo_concurrency.processutils [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:59:49 np0005539552 nova_compute[233724]: 2025-11-29 07:59:49.956 233728 DEBUG nova.compute.provider_tree [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:59:49 np0005539552 nova_compute[233724]: 2025-11-29 07:59:49.974 233728 DEBUG nova.scheduler.client.report [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:59:50 np0005539552 nova_compute[233724]: 2025-11-29 07:59:50.005 233728 DEBUG oslo_concurrency.lockutils [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.166s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:59:50 np0005539552 nova_compute[233724]: 2025-11-29 07:59:50.006 233728 DEBUG nova.compute.manager [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:59:50 np0005539552 nova_compute[233724]: 2025-11-29 07:59:50.044 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:50 np0005539552 nova_compute[233724]: 2025-11-29 07:59:50.063 233728 DEBUG nova.compute.manager [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:59:50 np0005539552 nova_compute[233724]: 2025-11-29 07:59:50.064 233728 DEBUG nova.network.neutron [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:59:50 np0005539552 nova_compute[233724]: 2025-11-29 07:59:50.088 233728 INFO nova.virt.libvirt.driver [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:59:50 np0005539552 nova_compute[233724]: 2025-11-29 07:59:50.115 233728 DEBUG nova.compute.manager [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:59:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:50.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:50.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:50 np0005539552 nova_compute[233724]: 2025-11-29 07:59:50.303 233728 DEBUG nova.compute.manager [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:59:50 np0005539552 nova_compute[233724]: 2025-11-29 07:59:50.304 233728 DEBUG nova.virt.libvirt.driver [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:59:50 np0005539552 nova_compute[233724]: 2025-11-29 07:59:50.304 233728 INFO nova.virt.libvirt.driver [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Creating image(s)#033[00m
Nov 29 02:59:50 np0005539552 nova_compute[233724]: 2025-11-29 07:59:50.329 233728 DEBUG nova.storage.rbd_utils [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] rbd image 683be290-1274-4e4b-89d9-ff49c5831ea1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:59:50 np0005539552 nova_compute[233724]: 2025-11-29 07:59:50.360 233728 DEBUG nova.storage.rbd_utils [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] rbd image 683be290-1274-4e4b-89d9-ff49c5831ea1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:59:50 np0005539552 nova_compute[233724]: 2025-11-29 07:59:50.387 233728 DEBUG nova.storage.rbd_utils [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] rbd image 683be290-1274-4e4b-89d9-ff49c5831ea1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:59:50 np0005539552 nova_compute[233724]: 2025-11-29 07:59:50.390 233728 DEBUG oslo_concurrency.processutils [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:59:50 np0005539552 nova_compute[233724]: 2025-11-29 07:59:50.447 233728 DEBUG oslo_concurrency.processutils [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:59:50 np0005539552 nova_compute[233724]: 2025-11-29 07:59:50.448 233728 DEBUG oslo_concurrency.lockutils [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:59:50 np0005539552 nova_compute[233724]: 2025-11-29 07:59:50.449 233728 DEBUG oslo_concurrency.lockutils [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:59:50 np0005539552 nova_compute[233724]: 2025-11-29 07:59:50.449 233728 DEBUG oslo_concurrency.lockutils [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:59:50 np0005539552 nova_compute[233724]: 2025-11-29 07:59:50.477 233728 DEBUG nova.storage.rbd_utils [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] rbd image 683be290-1274-4e4b-89d9-ff49c5831ea1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:59:50 np0005539552 nova_compute[233724]: 2025-11-29 07:59:50.481 233728 DEBUG oslo_concurrency.processutils [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 683be290-1274-4e4b-89d9-ff49c5831ea1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:59:50 np0005539552 nova_compute[233724]: 2025-11-29 07:59:50.503 233728 DEBUG nova.policy [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd26800adb7534fb6b85bcefeb114a77d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8c3ebd49ecd442d8a0b6da0eeb15abcc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 02:59:50 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e177 e177: 3 total, 3 up, 3 in
Nov 29 02:59:51 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:51 np0005539552 nova_compute[233724]: 2025-11-29 07:59:51.792 233728 DEBUG oslo_concurrency.processutils [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 683be290-1274-4e4b-89d9-ff49c5831ea1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.311s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:59:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:52.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:52.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:52 np0005539552 nova_compute[233724]: 2025-11-29 07:59:52.577 233728 DEBUG nova.storage.rbd_utils [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] resizing rbd image 683be290-1274-4e4b-89d9-ff49c5831ea1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:59:52 np0005539552 nova_compute[233724]: 2025-11-29 07:59:52.618 233728 DEBUG nova.network.neutron [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Successfully created port: e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 02:59:52 np0005539552 nova_compute[233724]: 2025-11-29 07:59:52.700 233728 DEBUG nova.objects.instance [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Lazy-loading 'migration_context' on Instance uuid 683be290-1274-4e4b-89d9-ff49c5831ea1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:59:52 np0005539552 nova_compute[233724]: 2025-11-29 07:59:52.737 233728 DEBUG nova.virt.libvirt.driver [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:59:52 np0005539552 nova_compute[233724]: 2025-11-29 07:59:52.737 233728 DEBUG nova.virt.libvirt.driver [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Ensure instance console log exists: /var/lib/nova/instances/683be290-1274-4e4b-89d9-ff49c5831ea1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:59:52 np0005539552 nova_compute[233724]: 2025-11-29 07:59:52.738 233728 DEBUG oslo_concurrency.lockutils [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:59:52 np0005539552 nova_compute[233724]: 2025-11-29 07:59:52.738 233728 DEBUG oslo_concurrency.lockutils [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:59:52 np0005539552 nova_compute[233724]: 2025-11-29 07:59:52.738 233728 DEBUG oslo_concurrency.lockutils [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:59:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:54.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:54.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:54 np0005539552 nova_compute[233724]: 2025-11-29 07:59:54.571 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:55 np0005539552 nova_compute[233724]: 2025-11-29 07:59:55.045 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:55 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:59:55 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:59:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:59:56.025 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:59:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 07:59:56.026 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:59:56 np0005539552 nova_compute[233724]: 2025-11-29 07:59:56.027 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:56.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:56.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:56 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:56 np0005539552 nova_compute[233724]: 2025-11-29 07:59:56.800 233728 DEBUG nova.network.neutron [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Successfully updated port: e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 02:59:56 np0005539552 nova_compute[233724]: 2025-11-29 07:59:56.871 233728 DEBUG oslo_concurrency.lockutils [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Acquiring lock "refresh_cache-683be290-1274-4e4b-89d9-ff49c5831ea1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:59:56 np0005539552 nova_compute[233724]: 2025-11-29 07:59:56.871 233728 DEBUG oslo_concurrency.lockutils [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Acquired lock "refresh_cache-683be290-1274-4e4b-89d9-ff49c5831ea1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:59:56 np0005539552 nova_compute[233724]: 2025-11-29 07:59:56.872 233728 DEBUG nova.network.neutron [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:59:57 np0005539552 nova_compute[233724]: 2025-11-29 07:59:57.023 233728 DEBUG nova.compute.manager [req-9a7321c3-852d-40de-bea3-5cb99c224e78 req-d04333e6-6fc5-4495-918b-d0b69b2ed528 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Received event network-changed-e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:59:57 np0005539552 nova_compute[233724]: 2025-11-29 07:59:57.023 233728 DEBUG nova.compute.manager [req-9a7321c3-852d-40de-bea3-5cb99c224e78 req-d04333e6-6fc5-4495-918b-d0b69b2ed528 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Refreshing instance network info cache due to event network-changed-e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:59:57 np0005539552 nova_compute[233724]: 2025-11-29 07:59:57.024 233728 DEBUG oslo_concurrency.lockutils [req-9a7321c3-852d-40de-bea3-5cb99c224e78 req-d04333e6-6fc5-4495-918b-d0b69b2ed528 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-683be290-1274-4e4b-89d9-ff49c5831ea1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:59:57 np0005539552 nova_compute[233724]: 2025-11-29 07:59:57.164 233728 DEBUG nova.network.neutron [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:59:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:59:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:58.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:59:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 02:59:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:58.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e178 e178: 3 total, 3 up, 3 in
Nov 29 02:59:59 np0005539552 nova_compute[233724]: 2025-11-29 07:59:59.063 233728 DEBUG nova.network.neutron [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Updating instance_info_cache with network_info: [{"id": "e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e", "address": "fa:16:3e:b9:0f:a6", "network": {"id": "867da783-3ae2-4104-bc96-90ace82a1e72", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1796599610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c3ebd49ecd442d8a0b6da0eeb15abcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape71f3fe5-be", "ovs_interfaceid": "e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:59:59 np0005539552 nova_compute[233724]: 2025-11-29 07:59:59.200 233728 DEBUG oslo_concurrency.lockutils [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Releasing lock "refresh_cache-683be290-1274-4e4b-89d9-ff49c5831ea1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:59:59 np0005539552 nova_compute[233724]: 2025-11-29 07:59:59.201 233728 DEBUG nova.compute.manager [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Instance network_info: |[{"id": "e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e", "address": "fa:16:3e:b9:0f:a6", "network": {"id": "867da783-3ae2-4104-bc96-90ace82a1e72", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1796599610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c3ebd49ecd442d8a0b6da0eeb15abcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape71f3fe5-be", "ovs_interfaceid": "e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:59:59 np0005539552 nova_compute[233724]: 2025-11-29 07:59:59.202 233728 DEBUG oslo_concurrency.lockutils [req-9a7321c3-852d-40de-bea3-5cb99c224e78 req-d04333e6-6fc5-4495-918b-d0b69b2ed528 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-683be290-1274-4e4b-89d9-ff49c5831ea1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:59:59 np0005539552 nova_compute[233724]: 2025-11-29 07:59:59.202 233728 DEBUG nova.network.neutron [req-9a7321c3-852d-40de-bea3-5cb99c224e78 req-d04333e6-6fc5-4495-918b-d0b69b2ed528 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Refreshing network info cache for port e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:59:59 np0005539552 nova_compute[233724]: 2025-11-29 07:59:59.205 233728 DEBUG nova.virt.libvirt.driver [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Start _get_guest_xml network_info=[{"id": "e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e", "address": "fa:16:3e:b9:0f:a6", "network": {"id": "867da783-3ae2-4104-bc96-90ace82a1e72", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1796599610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c3ebd49ecd442d8a0b6da0eeb15abcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape71f3fe5-be", "ovs_interfaceid": "e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:59:59 np0005539552 nova_compute[233724]: 2025-11-29 07:59:59.210 233728 WARNING nova.virt.libvirt.driver [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:59:59 np0005539552 nova_compute[233724]: 2025-11-29 07:59:59.216 233728 DEBUG nova.virt.libvirt.host [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:59:59 np0005539552 nova_compute[233724]: 2025-11-29 07:59:59.217 233728 DEBUG nova.virt.libvirt.host [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:59:59 np0005539552 nova_compute[233724]: 2025-11-29 07:59:59.221 233728 DEBUG nova.virt.libvirt.host [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:59:59 np0005539552 nova_compute[233724]: 2025-11-29 07:59:59.221 233728 DEBUG nova.virt.libvirt.host [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:59:59 np0005539552 nova_compute[233724]: 2025-11-29 07:59:59.222 233728 DEBUG nova.virt.libvirt.driver [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:59:59 np0005539552 nova_compute[233724]: 2025-11-29 07:59:59.223 233728 DEBUG nova.virt.hardware [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:59:59 np0005539552 nova_compute[233724]: 2025-11-29 07:59:59.223 233728 DEBUG nova.virt.hardware [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:59:59 np0005539552 nova_compute[233724]: 2025-11-29 07:59:59.223 233728 DEBUG nova.virt.hardware [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:59:59 np0005539552 nova_compute[233724]: 2025-11-29 07:59:59.223 233728 DEBUG nova.virt.hardware [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:59:59 np0005539552 nova_compute[233724]: 2025-11-29 07:59:59.223 233728 DEBUG nova.virt.hardware [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:59:59 np0005539552 nova_compute[233724]: 2025-11-29 07:59:59.224 233728 DEBUG nova.virt.hardware [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:59:59 np0005539552 nova_compute[233724]: 2025-11-29 07:59:59.224 233728 DEBUG nova.virt.hardware [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:59:59 np0005539552 nova_compute[233724]: 2025-11-29 07:59:59.224 233728 DEBUG nova.virt.hardware [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:59:59 np0005539552 nova_compute[233724]: 2025-11-29 07:59:59.224 233728 DEBUG nova.virt.hardware [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:59:59 np0005539552 nova_compute[233724]: 2025-11-29 07:59:59.224 233728 DEBUG nova.virt.hardware [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:59:59 np0005539552 nova_compute[233724]: 2025-11-29 07:59:59.225 233728 DEBUG nova.virt.hardware [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:59:59 np0005539552 nova_compute[233724]: 2025-11-29 07:59:59.227 233728 DEBUG oslo_concurrency.processutils [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:59:59 np0005539552 nova_compute[233724]: 2025-11-29 07:59:59.572 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:59 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:59:59 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2942587195' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:59:59 np0005539552 nova_compute[233724]: 2025-11-29 07:59:59.761 233728 DEBUG oslo_concurrency.processutils [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:59:59 np0005539552 nova_compute[233724]: 2025-11-29 07:59:59.791 233728 DEBUG nova.storage.rbd_utils [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] rbd image 683be290-1274-4e4b-89d9-ff49c5831ea1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:59:59 np0005539552 nova_compute[233724]: 2025-11-29 07:59:59.796 233728 DEBUG oslo_concurrency.processutils [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:00 np0005539552 nova_compute[233724]: 2025-11-29 08:00:00.046 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:00.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:00.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:00 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:00:00 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1776849645' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:00:00 np0005539552 nova_compute[233724]: 2025-11-29 08:00:00.382 233728 DEBUG oslo_concurrency.processutils [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:00 np0005539552 nova_compute[233724]: 2025-11-29 08:00:00.383 233728 DEBUG nova.virt.libvirt.vif [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:59:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1931673929',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1931673929',id=33,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8c3ebd49ecd442d8a0b6da0eeb15abcc',ramdisk_id='',reservation_id='r-otjbgvvh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-570772750',owner_user_name='tempest-AttachInterfacesV270Test-570772750-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:59:50Z,user_data=None,user_id='d26800adb7534fb6b85bcefeb114a77d',uuid=683be290-1274-4e4b-89d9-ff49c5831ea1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e", "address": "fa:16:3e:b9:0f:a6", "network": {"id": "867da783-3ae2-4104-bc96-90ace82a1e72", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1796599610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c3ebd49ecd442d8a0b6da0eeb15abcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape71f3fe5-be", "ovs_interfaceid": "e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:00:00 np0005539552 nova_compute[233724]: 2025-11-29 08:00:00.384 233728 DEBUG nova.network.os_vif_util [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Converting VIF {"id": "e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e", "address": "fa:16:3e:b9:0f:a6", "network": {"id": "867da783-3ae2-4104-bc96-90ace82a1e72", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1796599610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c3ebd49ecd442d8a0b6da0eeb15abcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape71f3fe5-be", "ovs_interfaceid": "e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:00:00 np0005539552 nova_compute[233724]: 2025-11-29 08:00:00.384 233728 DEBUG nova.network.os_vif_util [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:0f:a6,bridge_name='br-int',has_traffic_filtering=True,id=e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e,network=Network(867da783-3ae2-4104-bc96-90ace82a1e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape71f3fe5-be') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:00:00 np0005539552 nova_compute[233724]: 2025-11-29 08:00:00.385 233728 DEBUG nova.objects.instance [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Lazy-loading 'pci_devices' on Instance uuid 683be290-1274-4e4b-89d9-ff49c5831ea1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:00:00 np0005539552 nova_compute[233724]: 2025-11-29 08:00:00.920 233728 DEBUG nova.virt.libvirt.driver [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:00:00 np0005539552 nova_compute[233724]:  <uuid>683be290-1274-4e4b-89d9-ff49c5831ea1</uuid>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:  <name>instance-00000021</name>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:00:00 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:      <nova:name>tempest-AttachInterfacesV270Test-server-1931673929</nova:name>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 07:59:59</nova:creationTime>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:00:00 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:        <nova:user uuid="d26800adb7534fb6b85bcefeb114a77d">tempest-AttachInterfacesV270Test-570772750-project-member</nova:user>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:        <nova:project uuid="8c3ebd49ecd442d8a0b6da0eeb15abcc">tempest-AttachInterfacesV270Test-570772750</nova:project>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:        <nova:port uuid="e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e">
Nov 29 03:00:00 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:      <entry name="serial">683be290-1274-4e4b-89d9-ff49c5831ea1</entry>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:      <entry name="uuid">683be290-1274-4e4b-89d9-ff49c5831ea1</entry>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:00:00 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/683be290-1274-4e4b-89d9-ff49c5831ea1_disk">
Nov 29 03:00:00 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:00:00 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:00:00 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/683be290-1274-4e4b-89d9-ff49c5831ea1_disk.config">
Nov 29 03:00:00 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:00:00 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:00:00 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:b9:0f:a6"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:      <target dev="tape71f3fe5-be"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:00:00 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/683be290-1274-4e4b-89d9-ff49c5831ea1/console.log" append="off"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:00:00 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:00:00 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:00:00 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:00:00 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:00:00 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:00:00 np0005539552 nova_compute[233724]: 2025-11-29 08:00:00.921 233728 DEBUG nova.compute.manager [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Preparing to wait for external event network-vif-plugged-e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:00:00 np0005539552 nova_compute[233724]: 2025-11-29 08:00:00.921 233728 DEBUG oslo_concurrency.lockutils [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Acquiring lock "683be290-1274-4e4b-89d9-ff49c5831ea1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:00 np0005539552 nova_compute[233724]: 2025-11-29 08:00:00.921 233728 DEBUG oslo_concurrency.lockutils [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Lock "683be290-1274-4e4b-89d9-ff49c5831ea1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:00 np0005539552 nova_compute[233724]: 2025-11-29 08:00:00.922 233728 DEBUG oslo_concurrency.lockutils [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Lock "683be290-1274-4e4b-89d9-ff49c5831ea1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:00 np0005539552 nova_compute[233724]: 2025-11-29 08:00:00.922 233728 DEBUG nova.virt.libvirt.vif [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:59:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1931673929',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1931673929',id=33,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8c3ebd49ecd442d8a0b6da0eeb15abcc',ramdisk_id='',reservation_id='r-otjbgvvh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-570772750',owner_user_name='tempest-AttachInterfacesV270Test-570772750-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:59:50Z,user_data=None,user_id='d26800adb7534fb6b85bcefeb114a77d',uuid=683be290-1274-4e4b-89d9-ff49c5831ea1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e", "address": "fa:16:3e:b9:0f:a6", "network": {"id": "867da783-3ae2-4104-bc96-90ace82a1e72", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1796599610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c3ebd49ecd442d8a0b6da0eeb15abcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape71f3fe5-be", "ovs_interfaceid": "e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:00:00 np0005539552 nova_compute[233724]: 2025-11-29 08:00:00.922 233728 DEBUG nova.network.os_vif_util [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Converting VIF {"id": "e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e", "address": "fa:16:3e:b9:0f:a6", "network": {"id": "867da783-3ae2-4104-bc96-90ace82a1e72", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1796599610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c3ebd49ecd442d8a0b6da0eeb15abcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape71f3fe5-be", "ovs_interfaceid": "e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:00:00 np0005539552 nova_compute[233724]: 2025-11-29 08:00:00.923 233728 DEBUG nova.network.os_vif_util [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:0f:a6,bridge_name='br-int',has_traffic_filtering=True,id=e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e,network=Network(867da783-3ae2-4104-bc96-90ace82a1e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape71f3fe5-be') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:00:00 np0005539552 nova_compute[233724]: 2025-11-29 08:00:00.923 233728 DEBUG os_vif [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:0f:a6,bridge_name='br-int',has_traffic_filtering=True,id=e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e,network=Network(867da783-3ae2-4104-bc96-90ace82a1e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape71f3fe5-be') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:00:00 np0005539552 ceph-mon[77121]: overall HEALTH_OK
Nov 29 03:00:00 np0005539552 nova_compute[233724]: 2025-11-29 08:00:00.924 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:00 np0005539552 nova_compute[233724]: 2025-11-29 08:00:00.924 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:00 np0005539552 nova_compute[233724]: 2025-11-29 08:00:00.925 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:00:00 np0005539552 nova_compute[233724]: 2025-11-29 08:00:00.928 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:00 np0005539552 nova_compute[233724]: 2025-11-29 08:00:00.929 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape71f3fe5-be, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:00 np0005539552 nova_compute[233724]: 2025-11-29 08:00:00.929 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape71f3fe5-be, col_values=(('external_ids', {'iface-id': 'e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b9:0f:a6', 'vm-uuid': '683be290-1274-4e4b-89d9-ff49c5831ea1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:00 np0005539552 nova_compute[233724]: 2025-11-29 08:00:00.931 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:00 np0005539552 NetworkManager[48926]: <info>  [1764403200.9319] manager: (tape71f3fe5-be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Nov 29 03:00:00 np0005539552 nova_compute[233724]: 2025-11-29 08:00:00.933 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:00:00 np0005539552 nova_compute[233724]: 2025-11-29 08:00:00.940 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:00 np0005539552 nova_compute[233724]: 2025-11-29 08:00:00.942 233728 INFO os_vif [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:0f:a6,bridge_name='br-int',has_traffic_filtering=True,id=e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e,network=Network(867da783-3ae2-4104-bc96-90ace82a1e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape71f3fe5-be')#033[00m
Nov 29 03:00:01 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:01.028 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:01 np0005539552 nova_compute[233724]: 2025-11-29 08:00:01.043 233728 DEBUG nova.virt.libvirt.driver [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:00:01 np0005539552 nova_compute[233724]: 2025-11-29 08:00:01.043 233728 DEBUG nova.virt.libvirt.driver [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:00:01 np0005539552 nova_compute[233724]: 2025-11-29 08:00:01.043 233728 DEBUG nova.virt.libvirt.driver [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] No VIF found with MAC fa:16:3e:b9:0f:a6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:00:01 np0005539552 nova_compute[233724]: 2025-11-29 08:00:01.044 233728 INFO nova.virt.libvirt.driver [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Using config drive#033[00m
Nov 29 03:00:01 np0005539552 nova_compute[233724]: 2025-11-29 08:00:01.073 233728 DEBUG nova.storage.rbd_utils [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] rbd image 683be290-1274-4e4b-89d9-ff49c5831ea1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:00:01 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:01 np0005539552 nova_compute[233724]: 2025-11-29 08:00:01.727 233728 INFO nova.virt.libvirt.driver [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Creating config drive at /var/lib/nova/instances/683be290-1274-4e4b-89d9-ff49c5831ea1/disk.config#033[00m
Nov 29 03:00:01 np0005539552 nova_compute[233724]: 2025-11-29 08:00:01.732 233728 DEBUG oslo_concurrency.processutils [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/683be290-1274-4e4b-89d9-ff49c5831ea1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd_698wgz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:01 np0005539552 nova_compute[233724]: 2025-11-29 08:00:01.859 233728 DEBUG oslo_concurrency.processutils [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/683be290-1274-4e4b-89d9-ff49c5831ea1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd_698wgz" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:01 np0005539552 nova_compute[233724]: 2025-11-29 08:00:01.886 233728 DEBUG nova.storage.rbd_utils [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] rbd image 683be290-1274-4e4b-89d9-ff49c5831ea1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:00:01 np0005539552 nova_compute[233724]: 2025-11-29 08:00:01.891 233728 DEBUG oslo_concurrency.processutils [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/683be290-1274-4e4b-89d9-ff49c5831ea1/disk.config 683be290-1274-4e4b-89d9-ff49c5831ea1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:02 np0005539552 nova_compute[233724]: 2025-11-29 08:00:02.073 233728 DEBUG oslo_concurrency.processutils [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/683be290-1274-4e4b-89d9-ff49c5831ea1/disk.config 683be290-1274-4e4b-89d9-ff49c5831ea1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:02 np0005539552 nova_compute[233724]: 2025-11-29 08:00:02.074 233728 INFO nova.virt.libvirt.driver [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Deleting local config drive /var/lib/nova/instances/683be290-1274-4e4b-89d9-ff49c5831ea1/disk.config because it was imported into RBD.#033[00m
Nov 29 03:00:02 np0005539552 kernel: tape71f3fe5-be: entered promiscuous mode
Nov 29 03:00:02 np0005539552 NetworkManager[48926]: <info>  [1764403202.1232] manager: (tape71f3fe5-be): new Tun device (/org/freedesktop/NetworkManager/Devices/66)
Nov 29 03:00:02 np0005539552 ovn_controller[133798]: 2025-11-29T08:00:02Z|00118|binding|INFO|Claiming lport e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e for this chassis.
Nov 29 03:00:02 np0005539552 ovn_controller[133798]: 2025-11-29T08:00:02Z|00119|binding|INFO|e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e: Claiming fa:16:3e:b9:0f:a6 10.100.0.12
Nov 29 03:00:02 np0005539552 nova_compute[233724]: 2025-11-29 08:00:02.124 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:02 np0005539552 nova_compute[233724]: 2025-11-29 08:00:02.129 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:02 np0005539552 nova_compute[233724]: 2025-11-29 08:00:02.132 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:02 np0005539552 systemd-machined[196379]: New machine qemu-11-instance-00000021.
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:02.175 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:0f:a6 10.100.0.12'], port_security=['fa:16:3e:b9:0f:a6 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '683be290-1274-4e4b-89d9-ff49c5831ea1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-867da783-3ae2-4104-bc96-90ace82a1e72', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8c3ebd49ecd442d8a0b6da0eeb15abcc', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'aad831b3-0248-46ac-8671-f316fe2a724f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7be336d3-8779-4bad-93e6-16f7b986a9cb, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:02.176 143400 INFO neutron.agent.ovn.metadata.agent [-] Port e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e in datapath 867da783-3ae2-4104-bc96-90ace82a1e72 bound to our chassis#033[00m
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:02.176 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 867da783-3ae2-4104-bc96-90ace82a1e72#033[00m
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:02.188 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[dee75036-1e9b-4ca9-81bf-9dc2ce4de8fa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:02.189 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap867da783-31 in ovnmeta-867da783-3ae2-4104-bc96-90ace82a1e72 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:02.191 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap867da783-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:02.191 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[69b3e727-b120-4fa6-85c0-db5a7dad0adc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:02.193 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e3390537-2226-45fa-9e83-ac878fdeaa39]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:02 np0005539552 nova_compute[233724]: 2025-11-29 08:00:02.193 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:02 np0005539552 systemd[1]: Started Virtual Machine qemu-11-instance-00000021.
Nov 29 03:00:02 np0005539552 ovn_controller[133798]: 2025-11-29T08:00:02Z|00120|binding|INFO|Setting lport e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e ovn-installed in OVS
Nov 29 03:00:02 np0005539552 ovn_controller[133798]: 2025-11-29T08:00:02Z|00121|binding|INFO|Setting lport e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e up in Southbound
Nov 29 03:00:02 np0005539552 nova_compute[233724]: 2025-11-29 08:00:02.198 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:02 np0005539552 systemd-udevd[249645]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:02.204 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[3cfb0ffc-63b2-4da3-b615-55719cb59ab9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:02 np0005539552 NetworkManager[48926]: <info>  [1764403202.2156] device (tape71f3fe5-be): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:00:02 np0005539552 NetworkManager[48926]: <info>  [1764403202.2172] device (tape71f3fe5-be): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:00:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:02.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:02.228 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[28032541-0c3c-4acb-8808-fdbd24330878]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:02.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:02.257 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[d4f2ee4a-4ea3-4d15-829f-9e53bac5f53a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:02 np0005539552 systemd-udevd[249648]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:02.261 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6b58e1a6-e036-4cd6-b26c-9611a1e2ccbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:02 np0005539552 NetworkManager[48926]: <info>  [1764403202.2629] manager: (tap867da783-30): new Veth device (/org/freedesktop/NetworkManager/Devices/67)
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:02.289 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[244b7039-7f2e-4b64-9d06-db5ca3941da2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:02.292 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[8c4e8a7c-d789-4bf4-8ea1-f168efa382e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:02 np0005539552 NetworkManager[48926]: <info>  [1764403202.3167] device (tap867da783-30): carrier: link connected
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:02.321 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[733da19c-c6af-4509-9117-f069a8089cf8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:02.337 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[192ac589-9907-46d7-a9c1-a53e70009427]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap867da783-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6c:20:e8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 624800, 'reachable_time': 22804, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249676, 'error': None, 'target': 'ovnmeta-867da783-3ae2-4104-bc96-90ace82a1e72', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:02.354 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[067d593e-3a09-4956-9000-0aa55c90a77b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6c:20e8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 624800, 'tstamp': 624800}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249677, 'error': None, 'target': 'ovnmeta-867da783-3ae2-4104-bc96-90ace82a1e72', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:02.370 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[9ea48a99-619a-4186-821e-04cdac492e39]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap867da783-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6c:20:e8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 624800, 'reachable_time': 22804, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 249678, 'error': None, 'target': 'ovnmeta-867da783-3ae2-4104-bc96-90ace82a1e72', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:02.401 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[74605e29-5307-46ab-a26c-0d80ac04b237]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:02.453 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6bd3b63e-f506-45f2-b449-91a669709ce4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:02.454 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap867da783-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:02.455 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:02.455 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap867da783-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:02 np0005539552 nova_compute[233724]: 2025-11-29 08:00:02.516 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:02 np0005539552 kernel: tap867da783-30: entered promiscuous mode
Nov 29 03:00:02 np0005539552 NetworkManager[48926]: <info>  [1764403202.5165] manager: (tap867da783-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:02.518 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap867da783-30, col_values=(('external_ids', {'iface-id': '29757fac-9ce4-4d99-8370-b2a077f81bee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:02 np0005539552 ovn_controller[133798]: 2025-11-29T08:00:02Z|00122|binding|INFO|Releasing lport 29757fac-9ce4-4d99-8370-b2a077f81bee from this chassis (sb_readonly=0)
Nov 29 03:00:02 np0005539552 nova_compute[233724]: 2025-11-29 08:00:02.519 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:02 np0005539552 nova_compute[233724]: 2025-11-29 08:00:02.532 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:02.533 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/867da783-3ae2-4104-bc96-90ace82a1e72.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/867da783-3ae2-4104-bc96-90ace82a1e72.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:02.534 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[164fc5ae-25a8-4915-a0d6-3a3eae9e149c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:02.535 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-867da783-3ae2-4104-bc96-90ace82a1e72
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/867da783-3ae2-4104-bc96-90ace82a1e72.pid.haproxy
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 867da783-3ae2-4104-bc96-90ace82a1e72
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:00:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:02.535 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-867da783-3ae2-4104-bc96-90ace82a1e72', 'env', 'PROCESS_TAG=haproxy-867da783-3ae2-4104-bc96-90ace82a1e72', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/867da783-3ae2-4104-bc96-90ace82a1e72.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:00:02 np0005539552 nova_compute[233724]: 2025-11-29 08:00:02.708 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403202.7076879, 683be290-1274-4e4b-89d9-ff49c5831ea1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:00:02 np0005539552 nova_compute[233724]: 2025-11-29 08:00:02.708 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] VM Started (Lifecycle Event)#033[00m
Nov 29 03:00:02 np0005539552 nova_compute[233724]: 2025-11-29 08:00:02.924 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:00:02 np0005539552 nova_compute[233724]: 2025-11-29 08:00:02.929 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403202.7078137, 683be290-1274-4e4b-89d9-ff49c5831ea1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:00:02 np0005539552 nova_compute[233724]: 2025-11-29 08:00:02.929 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:00:02 np0005539552 podman[249753]: 2025-11-29 08:00:02.877402874 +0000 UTC m=+0.024456275 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:00:03 np0005539552 nova_compute[233724]: 2025-11-29 08:00:03.260 233728 DEBUG nova.network.neutron [req-9a7321c3-852d-40de-bea3-5cb99c224e78 req-d04333e6-6fc5-4495-918b-d0b69b2ed528 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Updated VIF entry in instance network info cache for port e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:00:03 np0005539552 nova_compute[233724]: 2025-11-29 08:00:03.261 233728 DEBUG nova.network.neutron [req-9a7321c3-852d-40de-bea3-5cb99c224e78 req-d04333e6-6fc5-4495-918b-d0b69b2ed528 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Updating instance_info_cache with network_info: [{"id": "e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e", "address": "fa:16:3e:b9:0f:a6", "network": {"id": "867da783-3ae2-4104-bc96-90ace82a1e72", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1796599610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c3ebd49ecd442d8a0b6da0eeb15abcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape71f3fe5-be", "ovs_interfaceid": "e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:00:03 np0005539552 nova_compute[233724]: 2025-11-29 08:00:03.410 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:00:03 np0005539552 nova_compute[233724]: 2025-11-29 08:00:03.413 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:00:03 np0005539552 nova_compute[233724]: 2025-11-29 08:00:03.535 233728 DEBUG nova.compute.manager [req-0a4420b6-6d57-4ec7-a294-82012708820d req-407570f2-ad16-475a-9bc6-a70ed5c33905 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Received event network-vif-plugged-e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:00:03 np0005539552 nova_compute[233724]: 2025-11-29 08:00:03.536 233728 DEBUG oslo_concurrency.lockutils [req-0a4420b6-6d57-4ec7-a294-82012708820d req-407570f2-ad16-475a-9bc6-a70ed5c33905 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "683be290-1274-4e4b-89d9-ff49c5831ea1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:03 np0005539552 nova_compute[233724]: 2025-11-29 08:00:03.536 233728 DEBUG oslo_concurrency.lockutils [req-0a4420b6-6d57-4ec7-a294-82012708820d req-407570f2-ad16-475a-9bc6-a70ed5c33905 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "683be290-1274-4e4b-89d9-ff49c5831ea1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:03 np0005539552 nova_compute[233724]: 2025-11-29 08:00:03.536 233728 DEBUG oslo_concurrency.lockutils [req-0a4420b6-6d57-4ec7-a294-82012708820d req-407570f2-ad16-475a-9bc6-a70ed5c33905 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "683be290-1274-4e4b-89d9-ff49c5831ea1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:03 np0005539552 nova_compute[233724]: 2025-11-29 08:00:03.536 233728 DEBUG nova.compute.manager [req-0a4420b6-6d57-4ec7-a294-82012708820d req-407570f2-ad16-475a-9bc6-a70ed5c33905 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Processing event network-vif-plugged-e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:00:03 np0005539552 nova_compute[233724]: 2025-11-29 08:00:03.537 233728 DEBUG nova.compute.manager [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:00:03 np0005539552 nova_compute[233724]: 2025-11-29 08:00:03.541 233728 DEBUG nova.virt.libvirt.driver [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:00:03 np0005539552 nova_compute[233724]: 2025-11-29 08:00:03.544 233728 INFO nova.virt.libvirt.driver [-] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Instance spawned successfully.#033[00m
Nov 29 03:00:03 np0005539552 nova_compute[233724]: 2025-11-29 08:00:03.544 233728 DEBUG nova.virt.libvirt.driver [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:00:03 np0005539552 nova_compute[233724]: 2025-11-29 08:00:03.616 233728 DEBUG oslo_concurrency.lockutils [req-9a7321c3-852d-40de-bea3-5cb99c224e78 req-d04333e6-6fc5-4495-918b-d0b69b2ed528 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-683be290-1274-4e4b-89d9-ff49c5831ea1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:00:03 np0005539552 podman[249753]: 2025-11-29 08:00:03.64823059 +0000 UTC m=+0.795283961 container create 118d42116227bf45d2fb9209256dfd4615e0dc2ddbaef79b8a7fb2e470b070ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-867da783-3ae2-4104-bc96-90ace82a1e72, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:00:03 np0005539552 nova_compute[233724]: 2025-11-29 08:00:03.671 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:00:03 np0005539552 nova_compute[233724]: 2025-11-29 08:00:03.671 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403203.5398877, 683be290-1274-4e4b-89d9-ff49c5831ea1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:00:03 np0005539552 nova_compute[233724]: 2025-11-29 08:00:03.672 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:00:03 np0005539552 nova_compute[233724]: 2025-11-29 08:00:03.683 233728 DEBUG nova.virt.libvirt.driver [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:00:03 np0005539552 nova_compute[233724]: 2025-11-29 08:00:03.683 233728 DEBUG nova.virt.libvirt.driver [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:00:03 np0005539552 nova_compute[233724]: 2025-11-29 08:00:03.684 233728 DEBUG nova.virt.libvirt.driver [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:00:03 np0005539552 nova_compute[233724]: 2025-11-29 08:00:03.684 233728 DEBUG nova.virt.libvirt.driver [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:00:03 np0005539552 nova_compute[233724]: 2025-11-29 08:00:03.684 233728 DEBUG nova.virt.libvirt.driver [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:00:03 np0005539552 nova_compute[233724]: 2025-11-29 08:00:03.685 233728 DEBUG nova.virt.libvirt.driver [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:00:03 np0005539552 systemd[1]: Started libpod-conmon-118d42116227bf45d2fb9209256dfd4615e0dc2ddbaef79b8a7fb2e470b070ad.scope.
Nov 29 03:00:03 np0005539552 nova_compute[233724]: 2025-11-29 08:00:03.730 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:00:03 np0005539552 nova_compute[233724]: 2025-11-29 08:00:03.733 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:00:03 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:00:03 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9c535a8c1329c4930f096b134ef34d72280041e706f177424501b31c4555852/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:00:03 np0005539552 podman[249753]: 2025-11-29 08:00:03.820236489 +0000 UTC m=+0.967289880 container init 118d42116227bf45d2fb9209256dfd4615e0dc2ddbaef79b8a7fb2e470b070ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-867da783-3ae2-4104-bc96-90ace82a1e72, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 03:00:03 np0005539552 podman[249753]: 2025-11-29 08:00:03.826098238 +0000 UTC m=+0.973151609 container start 118d42116227bf45d2fb9209256dfd4615e0dc2ddbaef79b8a7fb2e470b070ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-867da783-3ae2-4104-bc96-90ace82a1e72, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:00:03 np0005539552 nova_compute[233724]: 2025-11-29 08:00:03.839 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:00:03 np0005539552 neutron-haproxy-ovnmeta-867da783-3ae2-4104-bc96-90ace82a1e72[249768]: [NOTICE]   (249772) : New worker (249774) forked
Nov 29 03:00:03 np0005539552 neutron-haproxy-ovnmeta-867da783-3ae2-4104-bc96-90ace82a1e72[249768]: [NOTICE]   (249772) : Loading success.
Nov 29 03:00:04 np0005539552 nova_compute[233724]: 2025-11-29 08:00:04.041 233728 INFO nova.compute.manager [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Took 13.74 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:00:04 np0005539552 nova_compute[233724]: 2025-11-29 08:00:04.041 233728 DEBUG nova.compute.manager [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:00:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:04.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:04 np0005539552 nova_compute[233724]: 2025-11-29 08:00:04.256 233728 INFO nova.compute.manager [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Took 15.45 seconds to build instance.#033[00m
Nov 29 03:00:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:04.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:04 np0005539552 nova_compute[233724]: 2025-11-29 08:00:04.293 233728 DEBUG oslo_concurrency.lockutils [None req-34fdc660-a82d-4377-8881-5cb73bcb2473 d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Lock "683be290-1274-4e4b-89d9-ff49c5831ea1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:04 np0005539552 nova_compute[233724]: 2025-11-29 08:00:04.573 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:05 np0005539552 nova_compute[233724]: 2025-11-29 08:00:05.650 233728 DEBUG nova.compute.manager [req-944d95dd-22a4-4177-b2bd-971a22a0e648 req-3f49f1a1-3b57-47a6-9569-e4b9ff2cfc7e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Received event network-vif-plugged-e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:00:05 np0005539552 nova_compute[233724]: 2025-11-29 08:00:05.650 233728 DEBUG oslo_concurrency.lockutils [req-944d95dd-22a4-4177-b2bd-971a22a0e648 req-3f49f1a1-3b57-47a6-9569-e4b9ff2cfc7e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "683be290-1274-4e4b-89d9-ff49c5831ea1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:05 np0005539552 nova_compute[233724]: 2025-11-29 08:00:05.651 233728 DEBUG oslo_concurrency.lockutils [req-944d95dd-22a4-4177-b2bd-971a22a0e648 req-3f49f1a1-3b57-47a6-9569-e4b9ff2cfc7e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "683be290-1274-4e4b-89d9-ff49c5831ea1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:05 np0005539552 nova_compute[233724]: 2025-11-29 08:00:05.651 233728 DEBUG oslo_concurrency.lockutils [req-944d95dd-22a4-4177-b2bd-971a22a0e648 req-3f49f1a1-3b57-47a6-9569-e4b9ff2cfc7e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "683be290-1274-4e4b-89d9-ff49c5831ea1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:05 np0005539552 nova_compute[233724]: 2025-11-29 08:00:05.652 233728 DEBUG nova.compute.manager [req-944d95dd-22a4-4177-b2bd-971a22a0e648 req-3f49f1a1-3b57-47a6-9569-e4b9ff2cfc7e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] No waiting events found dispatching network-vif-plugged-e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:00:05 np0005539552 nova_compute[233724]: 2025-11-29 08:00:05.652 233728 WARNING nova.compute.manager [req-944d95dd-22a4-4177-b2bd-971a22a0e648 req-3f49f1a1-3b57-47a6-9569-e4b9ff2cfc7e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Received unexpected event network-vif-plugged-e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e for instance with vm_state active and task_state None.#033[00m
Nov 29 03:00:05 np0005539552 nova_compute[233724]: 2025-11-29 08:00:05.932 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:06.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:06.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:06 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:06 np0005539552 nova_compute[233724]: 2025-11-29 08:00:06.918 233728 DEBUG oslo_concurrency.lockutils [None req-695b8643-bd27-411b-939e-d6fdfa44097e d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Acquiring lock "interface-683be290-1274-4e4b-89d9-ff49c5831ea1-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:06 np0005539552 nova_compute[233724]: 2025-11-29 08:00:06.919 233728 DEBUG oslo_concurrency.lockutils [None req-695b8643-bd27-411b-939e-d6fdfa44097e d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Lock "interface-683be290-1274-4e4b-89d9-ff49c5831ea1-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:06 np0005539552 nova_compute[233724]: 2025-11-29 08:00:06.919 233728 DEBUG nova.objects.instance [None req-695b8643-bd27-411b-939e-d6fdfa44097e d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Lazy-loading 'flavor' on Instance uuid 683be290-1274-4e4b-89d9-ff49c5831ea1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:00:06 np0005539552 podman[249786]: 2025-11-29 08:00:06.978571193 +0000 UTC m=+0.064343348 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 29 03:00:06 np0005539552 podman[249785]: 2025-11-29 08:00:06.995154913 +0000 UTC m=+0.082908072 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 03:00:07 np0005539552 podman[249787]: 2025-11-29 08:00:07.010609151 +0000 UTC m=+0.087739281 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:00:07 np0005539552 nova_compute[233724]: 2025-11-29 08:00:07.271 233728 DEBUG nova.objects.instance [None req-695b8643-bd27-411b-939e-d6fdfa44097e d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Lazy-loading 'pci_requests' on Instance uuid 683be290-1274-4e4b-89d9-ff49c5831ea1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:00:07 np0005539552 nova_compute[233724]: 2025-11-29 08:00:07.317 233728 DEBUG nova.network.neutron [None req-695b8643-bd27-411b-939e-d6fdfa44097e d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:00:08 np0005539552 nova_compute[233724]: 2025-11-29 08:00:08.007 233728 DEBUG nova.policy [None req-695b8643-bd27-411b-939e-d6fdfa44097e d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd26800adb7534fb6b85bcefeb114a77d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8c3ebd49ecd442d8a0b6da0eeb15abcc', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:00:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:08.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:08.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:09 np0005539552 nova_compute[233724]: 2025-11-29 08:00:09.576 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:10 np0005539552 nova_compute[233724]: 2025-11-29 08:00:10.179 233728 DEBUG nova.network.neutron [None req-695b8643-bd27-411b-939e-d6fdfa44097e d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Successfully created port: a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:00:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:10.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:10.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:10 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e179 e179: 3 total, 3 up, 3 in
Nov 29 03:00:10 np0005539552 nova_compute[233724]: 2025-11-29 08:00:10.936 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:11 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:11 np0005539552 nova_compute[233724]: 2025-11-29 08:00:11.940 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:00:11 np0005539552 nova_compute[233724]: 2025-11-29 08:00:11.940 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:00:11 np0005539552 nova_compute[233724]: 2025-11-29 08:00:11.941 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:00:11 np0005539552 nova_compute[233724]: 2025-11-29 08:00:11.941 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:00:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:12.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:12.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:12 np0005539552 nova_compute[233724]: 2025-11-29 08:00:12.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:00:12 np0005539552 nova_compute[233724]: 2025-11-29 08:00:12.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:00:14 np0005539552 nova_compute[233724]: 2025-11-29 08:00:14.161 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:00:14 np0005539552 nova_compute[233724]: 2025-11-29 08:00:14.161 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:00:14 np0005539552 nova_compute[233724]: 2025-11-29 08:00:14.162 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:00:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:14.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:14.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:14 np0005539552 nova_compute[233724]: 2025-11-29 08:00:14.578 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:14 np0005539552 nova_compute[233724]: 2025-11-29 08:00:14.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:00:15 np0005539552 nova_compute[233724]: 2025-11-29 08:00:15.113 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:15 np0005539552 nova_compute[233724]: 2025-11-29 08:00:15.113 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:15 np0005539552 nova_compute[233724]: 2025-11-29 08:00:15.113 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:15 np0005539552 nova_compute[233724]: 2025-11-29 08:00:15.113 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:00:15 np0005539552 nova_compute[233724]: 2025-11-29 08:00:15.114 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:15 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:00:15 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3894114675' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:00:15 np0005539552 nova_compute[233724]: 2025-11-29 08:00:15.547 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:15 np0005539552 nova_compute[233724]: 2025-11-29 08:00:15.939 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:16.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:16.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:16 np0005539552 nova_compute[233724]: 2025-11-29 08:00:16.336 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:00:16 np0005539552 nova_compute[233724]: 2025-11-29 08:00:16.336 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:00:16 np0005539552 nova_compute[233724]: 2025-11-29 08:00:16.340 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:00:16 np0005539552 nova_compute[233724]: 2025-11-29 08:00:16.340 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:00:16 np0005539552 nova_compute[233724]: 2025-11-29 08:00:16.344 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000021 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:00:16 np0005539552 nova_compute[233724]: 2025-11-29 08:00:16.344 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000021 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:00:16 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:16 np0005539552 nova_compute[233724]: 2025-11-29 08:00:16.537 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:00:16 np0005539552 nova_compute[233724]: 2025-11-29 08:00:16.538 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4246MB free_disk=20.785232543945312GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:00:16 np0005539552 nova_compute[233724]: 2025-11-29 08:00:16.539 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:16 np0005539552 nova_compute[233724]: 2025-11-29 08:00:16.539 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:16 np0005539552 nova_compute[233724]: 2025-11-29 08:00:16.622 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 9eb89c30-3f33-4a7c-ae19-8312a2522b82 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:00:16 np0005539552 nova_compute[233724]: 2025-11-29 08:00:16.623 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:00:16 np0005539552 nova_compute[233724]: 2025-11-29 08:00:16.623 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 683be290-1274-4e4b-89d9-ff49c5831ea1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:00:16 np0005539552 nova_compute[233724]: 2025-11-29 08:00:16.623 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:00:16 np0005539552 nova_compute[233724]: 2025-11-29 08:00:16.623 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=960MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:00:16 np0005539552 nova_compute[233724]: 2025-11-29 08:00:16.670 233728 DEBUG nova.network.neutron [None req-695b8643-bd27-411b-939e-d6fdfa44097e d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Successfully updated port: a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:00:16 np0005539552 nova_compute[233724]: 2025-11-29 08:00:16.724 233728 DEBUG oslo_concurrency.lockutils [None req-695b8643-bd27-411b-939e-d6fdfa44097e d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Acquiring lock "refresh_cache-683be290-1274-4e4b-89d9-ff49c5831ea1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:00:16 np0005539552 nova_compute[233724]: 2025-11-29 08:00:16.725 233728 DEBUG oslo_concurrency.lockutils [None req-695b8643-bd27-411b-939e-d6fdfa44097e d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Acquired lock "refresh_cache-683be290-1274-4e4b-89d9-ff49c5831ea1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:00:16 np0005539552 nova_compute[233724]: 2025-11-29 08:00:16.725 233728 DEBUG nova.network.neutron [None req-695b8643-bd27-411b-939e-d6fdfa44097e d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:00:16 np0005539552 nova_compute[233724]: 2025-11-29 08:00:16.880 233728 DEBUG nova.compute.manager [req-a864bc75-79fb-4a76-b9df-042885d61a8d req-45037303-36da-4e7f-be67-3cdf8b7767af 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Received event network-changed-a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:00:16 np0005539552 nova_compute[233724]: 2025-11-29 08:00:16.880 233728 DEBUG nova.compute.manager [req-a864bc75-79fb-4a76-b9df-042885d61a8d req-45037303-36da-4e7f-be67-3cdf8b7767af 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Refreshing instance network info cache due to event network-changed-a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:00:16 np0005539552 nova_compute[233724]: 2025-11-29 08:00:16.881 233728 DEBUG oslo_concurrency.lockutils [req-a864bc75-79fb-4a76-b9df-042885d61a8d req-45037303-36da-4e7f-be67-3cdf8b7767af 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-683be290-1274-4e4b-89d9-ff49c5831ea1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:00:16 np0005539552 nova_compute[233724]: 2025-11-29 08:00:16.978 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Refreshing inventories for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:00:16 np0005539552 nova_compute[233724]: 2025-11-29 08:00:16.996 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Updating ProviderTree inventory for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:00:16 np0005539552 nova_compute[233724]: 2025-11-29 08:00:16.997 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Updating inventory in ProviderTree for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 03:00:17 np0005539552 nova_compute[233724]: 2025-11-29 08:00:16.999 233728 WARNING nova.network.neutron [None req-695b8643-bd27-411b-939e-d6fdfa44097e d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] 867da783-3ae2-4104-bc96-90ace82a1e72 already exists in list: networks containing: ['867da783-3ae2-4104-bc96-90ace82a1e72']. ignoring it#033[00m
Nov 29 03:00:17 np0005539552 nova_compute[233724]: 2025-11-29 08:00:17.015 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Refreshing aggregate associations for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:00:17 np0005539552 nova_compute[233724]: 2025-11-29 08:00:17.043 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Refreshing trait associations for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371, traits: HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_MMX,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_FDC,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:00:17 np0005539552 nova_compute[233724]: 2025-11-29 08:00:17.138 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:00:17 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2538219709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:00:17 np0005539552 nova_compute[233724]: 2025-11-29 08:00:17.584 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:17 np0005539552 nova_compute[233724]: 2025-11-29 08:00:17.590 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:00:17 np0005539552 nova_compute[233724]: 2025-11-29 08:00:17.748 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:00:17 np0005539552 ovn_controller[133798]: 2025-11-29T08:00:17Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b9:0f:a6 10.100.0.12
Nov 29 03:00:17 np0005539552 ovn_controller[133798]: 2025-11-29T08:00:17Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b9:0f:a6 10.100.0.12
Nov 29 03:00:17 np0005539552 nova_compute[233724]: 2025-11-29 08:00:17.856 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:00:17 np0005539552 nova_compute[233724]: 2025-11-29 08:00:17.857 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.318s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:18.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:18.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e180 e180: 3 total, 3 up, 3 in
Nov 29 03:00:18 np0005539552 nova_compute[233724]: 2025-11-29 08:00:18.852 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:00:18 np0005539552 nova_compute[233724]: 2025-11-29 08:00:18.852 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:00:19 np0005539552 nova_compute[233724]: 2025-11-29 08:00:19.071 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:00:19 np0005539552 nova_compute[233724]: 2025-11-29 08:00:19.639 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:20.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:20.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:20 np0005539552 nova_compute[233724]: 2025-11-29 08:00:20.514 233728 DEBUG nova.network.neutron [None req-695b8643-bd27-411b-939e-d6fdfa44097e d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Updating instance_info_cache with network_info: [{"id": "e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e", "address": "fa:16:3e:b9:0f:a6", "network": {"id": "867da783-3ae2-4104-bc96-90ace82a1e72", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1796599610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c3ebd49ecd442d8a0b6da0eeb15abcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape71f3fe5-be", "ovs_interfaceid": "e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115", "address": "fa:16:3e:57:0b:07", "network": {"id": "867da783-3ae2-4104-bc96-90ace82a1e72", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1796599610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c3ebd49ecd442d8a0b6da0eeb15abcc", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7d5ebb0-f7", "ovs_interfaceid": "a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:00:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:20.609 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:20.609 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:20.610 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:20 np0005539552 nova_compute[233724]: 2025-11-29 08:00:20.705 233728 DEBUG oslo_concurrency.lockutils [None req-695b8643-bd27-411b-939e-d6fdfa44097e d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Releasing lock "refresh_cache-683be290-1274-4e4b-89d9-ff49c5831ea1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:00:20 np0005539552 nova_compute[233724]: 2025-11-29 08:00:20.706 233728 DEBUG oslo_concurrency.lockutils [req-a864bc75-79fb-4a76-b9df-042885d61a8d req-45037303-36da-4e7f-be67-3cdf8b7767af 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-683be290-1274-4e4b-89d9-ff49c5831ea1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:00:20 np0005539552 nova_compute[233724]: 2025-11-29 08:00:20.707 233728 DEBUG nova.network.neutron [req-a864bc75-79fb-4a76-b9df-042885d61a8d req-45037303-36da-4e7f-be67-3cdf8b7767af 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Refreshing network info cache for port a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:00:20 np0005539552 nova_compute[233724]: 2025-11-29 08:00:20.710 233728 DEBUG nova.virt.libvirt.vif [None req-695b8643-bd27-411b-939e-d6fdfa44097e d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:59:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1931673929',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1931673929',id=33,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:00:04Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8c3ebd49ecd442d8a0b6da0eeb15abcc',ramdisk_id='',reservation_id='r-otjbgvvh',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ra
m='0',owner_project_name='tempest-AttachInterfacesV270Test-570772750',owner_user_name='tempest-AttachInterfacesV270Test-570772750-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:00:04Z,user_data=None,user_id='d26800adb7534fb6b85bcefeb114a77d',uuid=683be290-1274-4e4b-89d9-ff49c5831ea1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115", "address": "fa:16:3e:57:0b:07", "network": {"id": "867da783-3ae2-4104-bc96-90ace82a1e72", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1796599610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c3ebd49ecd442d8a0b6da0eeb15abcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7d5ebb0-f7", "ovs_interfaceid": "a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:00:20 np0005539552 nova_compute[233724]: 2025-11-29 08:00:20.710 233728 DEBUG nova.network.os_vif_util [None req-695b8643-bd27-411b-939e-d6fdfa44097e d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Converting VIF {"id": "a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115", "address": "fa:16:3e:57:0b:07", "network": {"id": "867da783-3ae2-4104-bc96-90ace82a1e72", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1796599610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c3ebd49ecd442d8a0b6da0eeb15abcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7d5ebb0-f7", "ovs_interfaceid": "a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:00:20 np0005539552 nova_compute[233724]: 2025-11-29 08:00:20.711 233728 DEBUG nova.network.os_vif_util [None req-695b8643-bd27-411b-939e-d6fdfa44097e d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:0b:07,bridge_name='br-int',has_traffic_filtering=True,id=a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115,network=Network(867da783-3ae2-4104-bc96-90ace82a1e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7d5ebb0-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:00:20 np0005539552 nova_compute[233724]: 2025-11-29 08:00:20.711 233728 DEBUG os_vif [None req-695b8643-bd27-411b-939e-d6fdfa44097e d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:0b:07,bridge_name='br-int',has_traffic_filtering=True,id=a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115,network=Network(867da783-3ae2-4104-bc96-90ace82a1e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7d5ebb0-f7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:00:20 np0005539552 nova_compute[233724]: 2025-11-29 08:00:20.712 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:20 np0005539552 nova_compute[233724]: 2025-11-29 08:00:20.712 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:20 np0005539552 nova_compute[233724]: 2025-11-29 08:00:20.713 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:00:20 np0005539552 nova_compute[233724]: 2025-11-29 08:00:20.717 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:20 np0005539552 nova_compute[233724]: 2025-11-29 08:00:20.717 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa7d5ebb0-f7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:20 np0005539552 nova_compute[233724]: 2025-11-29 08:00:20.717 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa7d5ebb0-f7, col_values=(('external_ids', {'iface-id': 'a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:57:0b:07', 'vm-uuid': '683be290-1274-4e4b-89d9-ff49c5831ea1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:20 np0005539552 NetworkManager[48926]: <info>  [1764403220.7667] manager: (tapa7d5ebb0-f7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Nov 29 03:00:20 np0005539552 nova_compute[233724]: 2025-11-29 08:00:20.770 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:20 np0005539552 nova_compute[233724]: 2025-11-29 08:00:20.773 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:20 np0005539552 nova_compute[233724]: 2025-11-29 08:00:20.775 233728 INFO os_vif [None req-695b8643-bd27-411b-939e-d6fdfa44097e d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:0b:07,bridge_name='br-int',has_traffic_filtering=True,id=a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115,network=Network(867da783-3ae2-4104-bc96-90ace82a1e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7d5ebb0-f7')#033[00m
Nov 29 03:00:20 np0005539552 nova_compute[233724]: 2025-11-29 08:00:20.776 233728 DEBUG nova.virt.libvirt.vif [None req-695b8643-bd27-411b-939e-d6fdfa44097e d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:59:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1931673929',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1931673929',id=33,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:00:04Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8c3ebd49ecd442d8a0b6da0eeb15abcc',ramdisk_id='',reservation_id='r-otjbgvvh',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ra
m='0',owner_project_name='tempest-AttachInterfacesV270Test-570772750',owner_user_name='tempest-AttachInterfacesV270Test-570772750-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:00:04Z,user_data=None,user_id='d26800adb7534fb6b85bcefeb114a77d',uuid=683be290-1274-4e4b-89d9-ff49c5831ea1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115", "address": "fa:16:3e:57:0b:07", "network": {"id": "867da783-3ae2-4104-bc96-90ace82a1e72", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1796599610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c3ebd49ecd442d8a0b6da0eeb15abcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7d5ebb0-f7", "ovs_interfaceid": "a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:00:20 np0005539552 nova_compute[233724]: 2025-11-29 08:00:20.776 233728 DEBUG nova.network.os_vif_util [None req-695b8643-bd27-411b-939e-d6fdfa44097e d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Converting VIF {"id": "a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115", "address": "fa:16:3e:57:0b:07", "network": {"id": "867da783-3ae2-4104-bc96-90ace82a1e72", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1796599610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c3ebd49ecd442d8a0b6da0eeb15abcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7d5ebb0-f7", "ovs_interfaceid": "a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:00:20 np0005539552 nova_compute[233724]: 2025-11-29 08:00:20.777 233728 DEBUG nova.network.os_vif_util [None req-695b8643-bd27-411b-939e-d6fdfa44097e d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:0b:07,bridge_name='br-int',has_traffic_filtering=True,id=a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115,network=Network(867da783-3ae2-4104-bc96-90ace82a1e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7d5ebb0-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:00:20 np0005539552 nova_compute[233724]: 2025-11-29 08:00:20.780 233728 DEBUG nova.virt.libvirt.guest [None req-695b8643-bd27-411b-939e-d6fdfa44097e d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] attach device xml: <interface type="ethernet">
Nov 29 03:00:20 np0005539552 nova_compute[233724]:  <mac address="fa:16:3e:57:0b:07"/>
Nov 29 03:00:20 np0005539552 nova_compute[233724]:  <model type="virtio"/>
Nov 29 03:00:20 np0005539552 nova_compute[233724]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:00:20 np0005539552 nova_compute[233724]:  <mtu size="1442"/>
Nov 29 03:00:20 np0005539552 nova_compute[233724]:  <target dev="tapa7d5ebb0-f7"/>
Nov 29 03:00:20 np0005539552 nova_compute[233724]: </interface>
Nov 29 03:00:20 np0005539552 nova_compute[233724]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 03:00:20 np0005539552 kernel: tapa7d5ebb0-f7: entered promiscuous mode
Nov 29 03:00:20 np0005539552 NetworkManager[48926]: <info>  [1764403220.7909] manager: (tapa7d5ebb0-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/70)
Nov 29 03:00:20 np0005539552 nova_compute[233724]: 2025-11-29 08:00:20.792 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:20 np0005539552 ovn_controller[133798]: 2025-11-29T08:00:20Z|00123|binding|INFO|Claiming lport a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115 for this chassis.
Nov 29 03:00:20 np0005539552 ovn_controller[133798]: 2025-11-29T08:00:20Z|00124|binding|INFO|a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115: Claiming fa:16:3e:57:0b:07 10.100.0.11
Nov 29 03:00:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:20.804 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:0b:07 10.100.0.11'], port_security=['fa:16:3e:57:0b:07 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '683be290-1274-4e4b-89d9-ff49c5831ea1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-867da783-3ae2-4104-bc96-90ace82a1e72', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8c3ebd49ecd442d8a0b6da0eeb15abcc', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'aad831b3-0248-46ac-8671-f316fe2a724f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7be336d3-8779-4bad-93e6-16f7b986a9cb, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:00:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:20.805 143400 INFO neutron.agent.ovn.metadata.agent [-] Port a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115 in datapath 867da783-3ae2-4104-bc96-90ace82a1e72 bound to our chassis#033[00m
Nov 29 03:00:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:20.807 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 867da783-3ae2-4104-bc96-90ace82a1e72#033[00m
Nov 29 03:00:20 np0005539552 ovn_controller[133798]: 2025-11-29T08:00:20Z|00125|binding|INFO|Setting lport a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115 ovn-installed in OVS
Nov 29 03:00:20 np0005539552 ovn_controller[133798]: 2025-11-29T08:00:20Z|00126|binding|INFO|Setting lport a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115 up in Southbound
Nov 29 03:00:20 np0005539552 nova_compute[233724]: 2025-11-29 08:00:20.814 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:20 np0005539552 nova_compute[233724]: 2025-11-29 08:00:20.816 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:20 np0005539552 systemd-udevd[249958]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:00:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:20.829 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[8b639614-d5db-4752-a84f-ac4a5709f4af]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:20 np0005539552 NetworkManager[48926]: <info>  [1764403220.8432] device (tapa7d5ebb0-f7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:00:20 np0005539552 NetworkManager[48926]: <info>  [1764403220.8441] device (tapa7d5ebb0-f7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:00:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:20.861 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[fd75f0e7-039d-4510-955a-1c49451d2948]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:20.864 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[2d33b0f2-eb4c-4f50-b0d6-6e85e9eebc57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:20.892 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[9d29131e-68ae-4d4c-ac96-00b03b322e43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:20.912 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[5e12e510-0310-477e-a49d-29ab1bf96105]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap867da783-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6c:20:e8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 624800, 'reachable_time': 22804, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249965, 'error': None, 'target': 'ovnmeta-867da783-3ae2-4104-bc96-90ace82a1e72', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:20.932 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[19bcbec6-4b26-45dd-b79d-0e982c0f9397]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap867da783-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 624811, 'tstamp': 624811}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249966, 'error': None, 'target': 'ovnmeta-867da783-3ae2-4104-bc96-90ace82a1e72', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap867da783-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 624813, 'tstamp': 624813}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249966, 'error': None, 'target': 'ovnmeta-867da783-3ae2-4104-bc96-90ace82a1e72', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:20.934 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap867da783-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:20 np0005539552 nova_compute[233724]: 2025-11-29 08:00:20.935 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:20 np0005539552 nova_compute[233724]: 2025-11-29 08:00:20.936 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:20.936 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap867da783-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:20.937 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:00:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:20.937 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap867da783-30, col_values=(('external_ids', {'iface-id': '29757fac-9ce4-4d99-8370-b2a077f81bee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:20.938 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:00:21 np0005539552 nova_compute[233724]: 2025-11-29 08:00:21.033 233728 DEBUG nova.virt.libvirt.driver [None req-695b8643-bd27-411b-939e-d6fdfa44097e d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:00:21 np0005539552 nova_compute[233724]: 2025-11-29 08:00:21.034 233728 DEBUG nova.virt.libvirt.driver [None req-695b8643-bd27-411b-939e-d6fdfa44097e d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:00:21 np0005539552 nova_compute[233724]: 2025-11-29 08:00:21.034 233728 DEBUG nova.virt.libvirt.driver [None req-695b8643-bd27-411b-939e-d6fdfa44097e d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] No VIF found with MAC fa:16:3e:b9:0f:a6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:00:21 np0005539552 nova_compute[233724]: 2025-11-29 08:00:21.034 233728 DEBUG nova.virt.libvirt.driver [None req-695b8643-bd27-411b-939e-d6fdfa44097e d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] No VIF found with MAC fa:16:3e:57:0b:07, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:00:21 np0005539552 nova_compute[233724]: 2025-11-29 08:00:21.098 233728 DEBUG nova.virt.libvirt.guest [None req-695b8643-bd27-411b-939e-d6fdfa44097e d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:00:21 np0005539552 nova_compute[233724]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:00:21 np0005539552 nova_compute[233724]:  <nova:name>tempest-AttachInterfacesV270Test-server-1931673929</nova:name>
Nov 29 03:00:21 np0005539552 nova_compute[233724]:  <nova:creationTime>2025-11-29 08:00:21</nova:creationTime>
Nov 29 03:00:21 np0005539552 nova_compute[233724]:  <nova:flavor name="m1.nano">
Nov 29 03:00:21 np0005539552 nova_compute[233724]:    <nova:memory>128</nova:memory>
Nov 29 03:00:21 np0005539552 nova_compute[233724]:    <nova:disk>1</nova:disk>
Nov 29 03:00:21 np0005539552 nova_compute[233724]:    <nova:swap>0</nova:swap>
Nov 29 03:00:21 np0005539552 nova_compute[233724]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:00:21 np0005539552 nova_compute[233724]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:00:21 np0005539552 nova_compute[233724]:  </nova:flavor>
Nov 29 03:00:21 np0005539552 nova_compute[233724]:  <nova:owner>
Nov 29 03:00:21 np0005539552 nova_compute[233724]:    <nova:user uuid="d26800adb7534fb6b85bcefeb114a77d">tempest-AttachInterfacesV270Test-570772750-project-member</nova:user>
Nov 29 03:00:21 np0005539552 nova_compute[233724]:    <nova:project uuid="8c3ebd49ecd442d8a0b6da0eeb15abcc">tempest-AttachInterfacesV270Test-570772750</nova:project>
Nov 29 03:00:21 np0005539552 nova_compute[233724]:  </nova:owner>
Nov 29 03:00:21 np0005539552 nova_compute[233724]:  <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:00:21 np0005539552 nova_compute[233724]:  <nova:ports>
Nov 29 03:00:21 np0005539552 nova_compute[233724]:    <nova:port uuid="e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e">
Nov 29 03:00:21 np0005539552 nova_compute[233724]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 03:00:21 np0005539552 nova_compute[233724]:    </nova:port>
Nov 29 03:00:21 np0005539552 nova_compute[233724]:    <nova:port uuid="a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115">
Nov 29 03:00:21 np0005539552 nova_compute[233724]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 03:00:21 np0005539552 nova_compute[233724]:    </nova:port>
Nov 29 03:00:21 np0005539552 nova_compute[233724]:  </nova:ports>
Nov 29 03:00:21 np0005539552 nova_compute[233724]: </nova:instance>
Nov 29 03:00:21 np0005539552 nova_compute[233724]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 29 03:00:21 np0005539552 nova_compute[233724]: 2025-11-29 08:00:21.159 233728 DEBUG oslo_concurrency.lockutils [None req-695b8643-bd27-411b-939e-d6fdfa44097e d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Lock "interface-683be290-1274-4e4b-89d9-ff49c5831ea1-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 14.240s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:21 np0005539552 nova_compute[233724]: 2025-11-29 08:00:21.424 233728 DEBUG nova.compute.manager [req-0f8ae6de-4b81-41b5-85da-c93427a73986 req-1bf5820a-0155-4c69-8e3e-f5e4e7b64376 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Received event network-vif-plugged-a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:00:21 np0005539552 nova_compute[233724]: 2025-11-29 08:00:21.425 233728 DEBUG oslo_concurrency.lockutils [req-0f8ae6de-4b81-41b5-85da-c93427a73986 req-1bf5820a-0155-4c69-8e3e-f5e4e7b64376 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "683be290-1274-4e4b-89d9-ff49c5831ea1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:21 np0005539552 nova_compute[233724]: 2025-11-29 08:00:21.425 233728 DEBUG oslo_concurrency.lockutils [req-0f8ae6de-4b81-41b5-85da-c93427a73986 req-1bf5820a-0155-4c69-8e3e-f5e4e7b64376 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "683be290-1274-4e4b-89d9-ff49c5831ea1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:21 np0005539552 nova_compute[233724]: 2025-11-29 08:00:21.425 233728 DEBUG oslo_concurrency.lockutils [req-0f8ae6de-4b81-41b5-85da-c93427a73986 req-1bf5820a-0155-4c69-8e3e-f5e4e7b64376 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "683be290-1274-4e4b-89d9-ff49c5831ea1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:21 np0005539552 nova_compute[233724]: 2025-11-29 08:00:21.426 233728 DEBUG nova.compute.manager [req-0f8ae6de-4b81-41b5-85da-c93427a73986 req-1bf5820a-0155-4c69-8e3e-f5e4e7b64376 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] No waiting events found dispatching network-vif-plugged-a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:00:21 np0005539552 nova_compute[233724]: 2025-11-29 08:00:21.426 233728 WARNING nova.compute.manager [req-0f8ae6de-4b81-41b5-85da-c93427a73986 req-1bf5820a-0155-4c69-8e3e-f5e4e7b64376 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Received unexpected event network-vif-plugged-a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:00:21 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:22.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:22.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:22 np0005539552 nova_compute[233724]: 2025-11-29 08:00:22.502 233728 DEBUG nova.network.neutron [req-a864bc75-79fb-4a76-b9df-042885d61a8d req-45037303-36da-4e7f-be67-3cdf8b7767af 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Updated VIF entry in instance network info cache for port a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:00:22 np0005539552 nova_compute[233724]: 2025-11-29 08:00:22.502 233728 DEBUG nova.network.neutron [req-a864bc75-79fb-4a76-b9df-042885d61a8d req-45037303-36da-4e7f-be67-3cdf8b7767af 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Updating instance_info_cache with network_info: [{"id": "e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e", "address": "fa:16:3e:b9:0f:a6", "network": {"id": "867da783-3ae2-4104-bc96-90ace82a1e72", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1796599610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c3ebd49ecd442d8a0b6da0eeb15abcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape71f3fe5-be", "ovs_interfaceid": "e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115", "address": "fa:16:3e:57:0b:07", "network": {"id": "867da783-3ae2-4104-bc96-90ace82a1e72", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1796599610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"8c3ebd49ecd442d8a0b6da0eeb15abcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7d5ebb0-f7", "ovs_interfaceid": "a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:00:22 np0005539552 ovn_controller[133798]: 2025-11-29T08:00:22Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:57:0b:07 10.100.0.11
Nov 29 03:00:22 np0005539552 ovn_controller[133798]: 2025-11-29T08:00:22Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:57:0b:07 10.100.0.11
Nov 29 03:00:22 np0005539552 nova_compute[233724]: 2025-11-29 08:00:22.889 233728 DEBUG oslo_concurrency.lockutils [req-a864bc75-79fb-4a76-b9df-042885d61a8d req-45037303-36da-4e7f-be67-3cdf8b7767af 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-683be290-1274-4e4b-89d9-ff49c5831ea1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:00:23 np0005539552 nova_compute[233724]: 2025-11-29 08:00:23.582 233728 DEBUG nova.compute.manager [req-16f16caf-e8c8-41f1-98c4-0024dba55b7e req-6b140a16-99cc-4adc-a8b5-9713adafc8d8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Received event network-vif-plugged-a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:00:23 np0005539552 nova_compute[233724]: 2025-11-29 08:00:23.583 233728 DEBUG oslo_concurrency.lockutils [req-16f16caf-e8c8-41f1-98c4-0024dba55b7e req-6b140a16-99cc-4adc-a8b5-9713adafc8d8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "683be290-1274-4e4b-89d9-ff49c5831ea1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:23 np0005539552 nova_compute[233724]: 2025-11-29 08:00:23.583 233728 DEBUG oslo_concurrency.lockutils [req-16f16caf-e8c8-41f1-98c4-0024dba55b7e req-6b140a16-99cc-4adc-a8b5-9713adafc8d8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "683be290-1274-4e4b-89d9-ff49c5831ea1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:23 np0005539552 nova_compute[233724]: 2025-11-29 08:00:23.583 233728 DEBUG oslo_concurrency.lockutils [req-16f16caf-e8c8-41f1-98c4-0024dba55b7e req-6b140a16-99cc-4adc-a8b5-9713adafc8d8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "683be290-1274-4e4b-89d9-ff49c5831ea1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:23 np0005539552 nova_compute[233724]: 2025-11-29 08:00:23.583 233728 DEBUG nova.compute.manager [req-16f16caf-e8c8-41f1-98c4-0024dba55b7e req-6b140a16-99cc-4adc-a8b5-9713adafc8d8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] No waiting events found dispatching network-vif-plugged-a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:00:23 np0005539552 nova_compute[233724]: 2025-11-29 08:00:23.584 233728 WARNING nova.compute.manager [req-16f16caf-e8c8-41f1-98c4-0024dba55b7e req-6b140a16-99cc-4adc-a8b5-9713adafc8d8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Received unexpected event network-vif-plugged-a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:00:23 np0005539552 nova_compute[233724]: 2025-11-29 08:00:23.955 233728 DEBUG oslo_concurrency.lockutils [None req-04f25d35-11da-44f9-85e4-c534e04f1a3b d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Acquiring lock "683be290-1274-4e4b-89d9-ff49c5831ea1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:23 np0005539552 nova_compute[233724]: 2025-11-29 08:00:23.956 233728 DEBUG oslo_concurrency.lockutils [None req-04f25d35-11da-44f9-85e4-c534e04f1a3b d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Lock "683be290-1274-4e4b-89d9-ff49c5831ea1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:23 np0005539552 nova_compute[233724]: 2025-11-29 08:00:23.956 233728 DEBUG oslo_concurrency.lockutils [None req-04f25d35-11da-44f9-85e4-c534e04f1a3b d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Acquiring lock "683be290-1274-4e4b-89d9-ff49c5831ea1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:23 np0005539552 nova_compute[233724]: 2025-11-29 08:00:23.956 233728 DEBUG oslo_concurrency.lockutils [None req-04f25d35-11da-44f9-85e4-c534e04f1a3b d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Lock "683be290-1274-4e4b-89d9-ff49c5831ea1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:23 np0005539552 nova_compute[233724]: 2025-11-29 08:00:23.957 233728 DEBUG oslo_concurrency.lockutils [None req-04f25d35-11da-44f9-85e4-c534e04f1a3b d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Lock "683be290-1274-4e4b-89d9-ff49c5831ea1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:23 np0005539552 nova_compute[233724]: 2025-11-29 08:00:23.958 233728 INFO nova.compute.manager [None req-04f25d35-11da-44f9-85e4-c534e04f1a3b d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Terminating instance#033[00m
Nov 29 03:00:23 np0005539552 nova_compute[233724]: 2025-11-29 08:00:23.959 233728 DEBUG nova.compute.manager [None req-04f25d35-11da-44f9-85e4-c534e04f1a3b d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:00:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:24.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:24.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:24 np0005539552 kernel: tape71f3fe5-be (unregistering): left promiscuous mode
Nov 29 03:00:24 np0005539552 NetworkManager[48926]: <info>  [1764403224.6350] device (tape71f3fe5-be): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:00:24 np0005539552 nova_compute[233724]: 2025-11-29 08:00:24.637 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:24 np0005539552 ovn_controller[133798]: 2025-11-29T08:00:24Z|00127|binding|INFO|Releasing lport e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e from this chassis (sb_readonly=0)
Nov 29 03:00:24 np0005539552 ovn_controller[133798]: 2025-11-29T08:00:24Z|00128|binding|INFO|Setting lport e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e down in Southbound
Nov 29 03:00:24 np0005539552 ovn_controller[133798]: 2025-11-29T08:00:24Z|00129|binding|INFO|Removing iface tape71f3fe5-be ovn-installed in OVS
Nov 29 03:00:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:24.650 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:0f:a6 10.100.0.12'], port_security=['fa:16:3e:b9:0f:a6 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '683be290-1274-4e4b-89d9-ff49c5831ea1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-867da783-3ae2-4104-bc96-90ace82a1e72', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8c3ebd49ecd442d8a0b6da0eeb15abcc', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'aad831b3-0248-46ac-8671-f316fe2a724f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7be336d3-8779-4bad-93e6-16f7b986a9cb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:00:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:24.651 143400 INFO neutron.agent.ovn.metadata.agent [-] Port e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e in datapath 867da783-3ae2-4104-bc96-90ace82a1e72 unbound from our chassis#033[00m
Nov 29 03:00:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:24.652 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 867da783-3ae2-4104-bc96-90ace82a1e72#033[00m
Nov 29 03:00:24 np0005539552 nova_compute[233724]: 2025-11-29 08:00:24.662 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:24.666 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4f371ba9-09fb-4274-ab85-5c8022879b64]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:24 np0005539552 kernel: tapa7d5ebb0-f7 (unregistering): left promiscuous mode
Nov 29 03:00:24 np0005539552 NetworkManager[48926]: <info>  [1764403224.6718] device (tapa7d5ebb0-f7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:00:24 np0005539552 ovn_controller[133798]: 2025-11-29T08:00:24Z|00130|binding|INFO|Releasing lport a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115 from this chassis (sb_readonly=0)
Nov 29 03:00:24 np0005539552 ovn_controller[133798]: 2025-11-29T08:00:24Z|00131|binding|INFO|Setting lport a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115 down in Southbound
Nov 29 03:00:24 np0005539552 ovn_controller[133798]: 2025-11-29T08:00:24Z|00132|binding|INFO|Removing iface tapa7d5ebb0-f7 ovn-installed in OVS
Nov 29 03:00:24 np0005539552 nova_compute[233724]: 2025-11-29 08:00:24.684 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:24 np0005539552 nova_compute[233724]: 2025-11-29 08:00:24.687 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:24.695 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[ef7d8a00-3424-42cd-bdbd-cc57d017a92b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:24.698 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[df398387-187a-4137-bc89-47f737d6112c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:24 np0005539552 nova_compute[233724]: 2025-11-29 08:00:24.712 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:24.729 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[aed80d31-4022-4c37-9010-9f5c82a11a11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:24.745 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[8f4d1f22-db38-4d55-9468-a07146013c12]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap867da783-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6c:20:e8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 7, 'rx_bytes': 574, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 7, 'rx_bytes': 574, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 624800, 'reachable_time': 22804, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249989, 'error': None, 'target': 'ovnmeta-867da783-3ae2-4104-bc96-90ace82a1e72', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:24 np0005539552 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000021.scope: Deactivated successfully.
Nov 29 03:00:24 np0005539552 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000021.scope: Consumed 14.305s CPU time.
Nov 29 03:00:24 np0005539552 systemd-machined[196379]: Machine qemu-11-instance-00000021 terminated.
Nov 29 03:00:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:24.765 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3fce6f8a-9788-4360-9cef-65df7786d825]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap867da783-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 624811, 'tstamp': 624811}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249990, 'error': None, 'target': 'ovnmeta-867da783-3ae2-4104-bc96-90ace82a1e72', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap867da783-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 624813, 'tstamp': 624813}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249990, 'error': None, 'target': 'ovnmeta-867da783-3ae2-4104-bc96-90ace82a1e72', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:24.767 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap867da783-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:24 np0005539552 nova_compute[233724]: 2025-11-29 08:00:24.768 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:24 np0005539552 NetworkManager[48926]: <info>  [1764403224.7762] manager: (tape71f3fe5-be): new Tun device (/org/freedesktop/NetworkManager/Devices/71)
Nov 29 03:00:24 np0005539552 nova_compute[233724]: 2025-11-29 08:00:24.777 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:24.777 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap867da783-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:24.777 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:00:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:24.778 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap867da783-30, col_values=(('external_ids', {'iface-id': '29757fac-9ce4-4d99-8370-b2a077f81bee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:24.778 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:00:24 np0005539552 NetworkManager[48926]: <info>  [1764403224.7872] manager: (tapa7d5ebb0-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/72)
Nov 29 03:00:24 np0005539552 nova_compute[233724]: 2025-11-29 08:00:24.801 233728 INFO nova.virt.libvirt.driver [-] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Instance destroyed successfully.#033[00m
Nov 29 03:00:24 np0005539552 nova_compute[233724]: 2025-11-29 08:00:24.802 233728 DEBUG nova.objects.instance [None req-04f25d35-11da-44f9-85e4-c534e04f1a3b d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Lazy-loading 'resources' on Instance uuid 683be290-1274-4e4b-89d9-ff49c5831ea1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:00:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:24.836 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:0b:07 10.100.0.11'], port_security=['fa:16:3e:57:0b:07 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '683be290-1274-4e4b-89d9-ff49c5831ea1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-867da783-3ae2-4104-bc96-90ace82a1e72', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8c3ebd49ecd442d8a0b6da0eeb15abcc', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'aad831b3-0248-46ac-8671-f316fe2a724f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7be336d3-8779-4bad-93e6-16f7b986a9cb, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:00:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:24.837 143400 INFO neutron.agent.ovn.metadata.agent [-] Port a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115 in datapath 867da783-3ae2-4104-bc96-90ace82a1e72 unbound from our chassis#033[00m
Nov 29 03:00:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:24.838 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 867da783-3ae2-4104-bc96-90ace82a1e72, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:00:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:24.840 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0abebbdc-3a3b-4cc7-8989-af5e204c78ce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:24.841 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-867da783-3ae2-4104-bc96-90ace82a1e72 namespace which is not needed anymore#033[00m
Nov 29 03:00:24 np0005539552 nova_compute[233724]: 2025-11-29 08:00:24.884 233728 DEBUG nova.virt.libvirt.vif [None req-04f25d35-11da-44f9-85e4-c534e04f1a3b d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:59:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1931673929',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1931673929',id=33,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:00:04Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8c3ebd49ecd442d8a0b6da0eeb15abcc',ramdisk_id='',reservation_id='r-otjbgvvh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesV270Test-570772750',owner_user_name='tempest-AttachInterfacesV270Test-570772750-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:00:04Z,user_data=None,user_id='d26800adb7534fb6b85bcefeb114a77d',uuid=683be290-1274-4e4b-89d9-ff49c5831ea1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e", "address": "fa:16:3e:b9:0f:a6", "network": {"id": "867da783-3ae2-4104-bc96-90ace82a1e72", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1796599610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c3ebd49ecd442d8a0b6da0eeb15abcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape71f3fe5-be", "ovs_interfaceid": "e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:00:24 np0005539552 nova_compute[233724]: 2025-11-29 08:00:24.885 233728 DEBUG nova.network.os_vif_util [None req-04f25d35-11da-44f9-85e4-c534e04f1a3b d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Converting VIF {"id": "e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e", "address": "fa:16:3e:b9:0f:a6", "network": {"id": "867da783-3ae2-4104-bc96-90ace82a1e72", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1796599610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c3ebd49ecd442d8a0b6da0eeb15abcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape71f3fe5-be", "ovs_interfaceid": "e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:00:24 np0005539552 nova_compute[233724]: 2025-11-29 08:00:24.885 233728 DEBUG nova.network.os_vif_util [None req-04f25d35-11da-44f9-85e4-c534e04f1a3b d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b9:0f:a6,bridge_name='br-int',has_traffic_filtering=True,id=e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e,network=Network(867da783-3ae2-4104-bc96-90ace82a1e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape71f3fe5-be') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:00:24 np0005539552 nova_compute[233724]: 2025-11-29 08:00:24.886 233728 DEBUG os_vif [None req-04f25d35-11da-44f9-85e4-c534e04f1a3b d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b9:0f:a6,bridge_name='br-int',has_traffic_filtering=True,id=e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e,network=Network(867da783-3ae2-4104-bc96-90ace82a1e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape71f3fe5-be') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:00:24 np0005539552 nova_compute[233724]: 2025-11-29 08:00:24.888 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:24 np0005539552 nova_compute[233724]: 2025-11-29 08:00:24.888 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape71f3fe5-be, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:24 np0005539552 nova_compute[233724]: 2025-11-29 08:00:24.890 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:24 np0005539552 nova_compute[233724]: 2025-11-29 08:00:24.892 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:00:24 np0005539552 nova_compute[233724]: 2025-11-29 08:00:24.894 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:24 np0005539552 nova_compute[233724]: 2025-11-29 08:00:24.896 233728 INFO os_vif [None req-04f25d35-11da-44f9-85e4-c534e04f1a3b d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b9:0f:a6,bridge_name='br-int',has_traffic_filtering=True,id=e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e,network=Network(867da783-3ae2-4104-bc96-90ace82a1e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape71f3fe5-be')#033[00m
Nov 29 03:00:24 np0005539552 nova_compute[233724]: 2025-11-29 08:00:24.897 233728 DEBUG nova.virt.libvirt.vif [None req-04f25d35-11da-44f9-85e4-c534e04f1a3b d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:59:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1931673929',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1931673929',id=33,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:00:04Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8c3ebd49ecd442d8a0b6da0eeb15abcc',ramdisk_id='',reservation_id='r-otjbgvvh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesV270Test-570772750',owner_user_name='tempest-AttachInterfacesV270Test-570772750-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:00:04Z,user_data=None,user_id='d26800adb7534fb6b85bcefeb114a77d',uuid=683be290-1274-4e4b-89d9-ff49c5831ea1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115", "address": "fa:16:3e:57:0b:07", "network": {"id": "867da783-3ae2-4104-bc96-90ace82a1e72", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1796599610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c3ebd49ecd442d8a0b6da0eeb15abcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7d5ebb0-f7", "ovs_interfaceid": "a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:00:24 np0005539552 nova_compute[233724]: 2025-11-29 08:00:24.897 233728 DEBUG nova.network.os_vif_util [None req-04f25d35-11da-44f9-85e4-c534e04f1a3b d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Converting VIF {"id": "a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115", "address": "fa:16:3e:57:0b:07", "network": {"id": "867da783-3ae2-4104-bc96-90ace82a1e72", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1796599610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c3ebd49ecd442d8a0b6da0eeb15abcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7d5ebb0-f7", "ovs_interfaceid": "a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:00:24 np0005539552 nova_compute[233724]: 2025-11-29 08:00:24.898 233728 DEBUG nova.network.os_vif_util [None req-04f25d35-11da-44f9-85e4-c534e04f1a3b d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:0b:07,bridge_name='br-int',has_traffic_filtering=True,id=a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115,network=Network(867da783-3ae2-4104-bc96-90ace82a1e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7d5ebb0-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:00:24 np0005539552 nova_compute[233724]: 2025-11-29 08:00:24.898 233728 DEBUG os_vif [None req-04f25d35-11da-44f9-85e4-c534e04f1a3b d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:0b:07,bridge_name='br-int',has_traffic_filtering=True,id=a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115,network=Network(867da783-3ae2-4104-bc96-90ace82a1e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7d5ebb0-f7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:00:24 np0005539552 nova_compute[233724]: 2025-11-29 08:00:24.900 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:24 np0005539552 nova_compute[233724]: 2025-11-29 08:00:24.900 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa7d5ebb0-f7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:24 np0005539552 nova_compute[233724]: 2025-11-29 08:00:24.901 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:24 np0005539552 nova_compute[233724]: 2025-11-29 08:00:24.902 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:24 np0005539552 nova_compute[233724]: 2025-11-29 08:00:24.905 233728 INFO os_vif [None req-04f25d35-11da-44f9-85e4-c534e04f1a3b d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:0b:07,bridge_name='br-int',has_traffic_filtering=True,id=a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115,network=Network(867da783-3ae2-4104-bc96-90ace82a1e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7d5ebb0-f7')#033[00m
Nov 29 03:00:25 np0005539552 neutron-haproxy-ovnmeta-867da783-3ae2-4104-bc96-90ace82a1e72[249768]: [NOTICE]   (249772) : haproxy version is 2.8.14-c23fe91
Nov 29 03:00:25 np0005539552 neutron-haproxy-ovnmeta-867da783-3ae2-4104-bc96-90ace82a1e72[249768]: [NOTICE]   (249772) : path to executable is /usr/sbin/haproxy
Nov 29 03:00:25 np0005539552 neutron-haproxy-ovnmeta-867da783-3ae2-4104-bc96-90ace82a1e72[249768]: [WARNING]  (249772) : Exiting Master process...
Nov 29 03:00:25 np0005539552 neutron-haproxy-ovnmeta-867da783-3ae2-4104-bc96-90ace82a1e72[249768]: [ALERT]    (249772) : Current worker (249774) exited with code 143 (Terminated)
Nov 29 03:00:25 np0005539552 neutron-haproxy-ovnmeta-867da783-3ae2-4104-bc96-90ace82a1e72[249768]: [WARNING]  (249772) : All workers exited. Exiting... (0)
Nov 29 03:00:25 np0005539552 systemd[1]: libpod-118d42116227bf45d2fb9209256dfd4615e0dc2ddbaef79b8a7fb2e470b070ad.scope: Deactivated successfully.
Nov 29 03:00:25 np0005539552 podman[250048]: 2025-11-29 08:00:25.195085097 +0000 UTC m=+0.254717446 container died 118d42116227bf45d2fb9209256dfd4615e0dc2ddbaef79b8a7fb2e470b070ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-867da783-3ae2-4104-bc96-90ace82a1e72, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:00:25 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-118d42116227bf45d2fb9209256dfd4615e0dc2ddbaef79b8a7fb2e470b070ad-userdata-shm.mount: Deactivated successfully.
Nov 29 03:00:25 np0005539552 systemd[1]: var-lib-containers-storage-overlay-f9c535a8c1329c4930f096b134ef34d72280041e706f177424501b31c4555852-merged.mount: Deactivated successfully.
Nov 29 03:00:25 np0005539552 podman[250048]: 2025-11-29 08:00:25.851819886 +0000 UTC m=+0.911452235 container cleanup 118d42116227bf45d2fb9209256dfd4615e0dc2ddbaef79b8a7fb2e470b070ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-867da783-3ae2-4104-bc96-90ace82a1e72, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 03:00:25 np0005539552 systemd[1]: libpod-conmon-118d42116227bf45d2fb9209256dfd4615e0dc2ddbaef79b8a7fb2e470b070ad.scope: Deactivated successfully.
Nov 29 03:00:25 np0005539552 podman[250081]: 2025-11-29 08:00:25.949670542 +0000 UTC m=+0.076907738 container remove 118d42116227bf45d2fb9209256dfd4615e0dc2ddbaef79b8a7fb2e470b070ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-867da783-3ae2-4104-bc96-90ace82a1e72, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:00:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:25.955 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[215110ef-a51b-458f-927e-a882146a852f]: (4, ('Sat Nov 29 08:00:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-867da783-3ae2-4104-bc96-90ace82a1e72 (118d42116227bf45d2fb9209256dfd4615e0dc2ddbaef79b8a7fb2e470b070ad)\n118d42116227bf45d2fb9209256dfd4615e0dc2ddbaef79b8a7fb2e470b070ad\nSat Nov 29 08:00:25 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-867da783-3ae2-4104-bc96-90ace82a1e72 (118d42116227bf45d2fb9209256dfd4615e0dc2ddbaef79b8a7fb2e470b070ad)\n118d42116227bf45d2fb9209256dfd4615e0dc2ddbaef79b8a7fb2e470b070ad\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:25.957 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[06fe674c-2a15-4d66-a666-18ea407c8e37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:25.957 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap867da783-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:25 np0005539552 nova_compute[233724]: 2025-11-29 08:00:25.959 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:25 np0005539552 kernel: tap867da783-30: left promiscuous mode
Nov 29 03:00:25 np0005539552 nova_compute[233724]: 2025-11-29 08:00:25.975 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:25.979 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0c5ce9ac-d7b2-4ed2-99dd-55d5bbde0a25]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:26.003 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[addffe2e-a641-422e-bf0d-9e70df5b689a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:26.004 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d66d1cbe-52c6-4161-a273-03e4e4c4df0f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:26.021 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[698002aa-9dcf-49f2-a44a-f92de603b8c2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 624794, 'reachable_time': 39066, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250097, 'error': None, 'target': 'ovnmeta-867da783-3ae2-4104-bc96-90ace82a1e72', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:26.024 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-867da783-3ae2-4104-bc96-90ace82a1e72 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:00:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:26.024 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[62243c40-3974-4715-9c01-7df931d5ef4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:26 np0005539552 systemd[1]: run-netns-ovnmeta\x2d867da783\x2d3ae2\x2d4104\x2dbc96\x2d90ace82a1e72.mount: Deactivated successfully.
Nov 29 03:00:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:26.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:26 np0005539552 nova_compute[233724]: 2025-11-29 08:00:26.282 233728 DEBUG nova.compute.manager [req-477c9cdb-9b91-4056-98d3-b9f90fd37268 req-d52030f0-ae82-40aa-8ecd-0b9145cce232 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Received event network-vif-unplugged-a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:00:26 np0005539552 nova_compute[233724]: 2025-11-29 08:00:26.283 233728 DEBUG oslo_concurrency.lockutils [req-477c9cdb-9b91-4056-98d3-b9f90fd37268 req-d52030f0-ae82-40aa-8ecd-0b9145cce232 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "683be290-1274-4e4b-89d9-ff49c5831ea1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:26 np0005539552 nova_compute[233724]: 2025-11-29 08:00:26.283 233728 DEBUG oslo_concurrency.lockutils [req-477c9cdb-9b91-4056-98d3-b9f90fd37268 req-d52030f0-ae82-40aa-8ecd-0b9145cce232 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "683be290-1274-4e4b-89d9-ff49c5831ea1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:26 np0005539552 nova_compute[233724]: 2025-11-29 08:00:26.283 233728 DEBUG oslo_concurrency.lockutils [req-477c9cdb-9b91-4056-98d3-b9f90fd37268 req-d52030f0-ae82-40aa-8ecd-0b9145cce232 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "683be290-1274-4e4b-89d9-ff49c5831ea1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:26 np0005539552 nova_compute[233724]: 2025-11-29 08:00:26.284 233728 DEBUG nova.compute.manager [req-477c9cdb-9b91-4056-98d3-b9f90fd37268 req-d52030f0-ae82-40aa-8ecd-0b9145cce232 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] No waiting events found dispatching network-vif-unplugged-a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:00:26 np0005539552 nova_compute[233724]: 2025-11-29 08:00:26.284 233728 DEBUG nova.compute.manager [req-477c9cdb-9b91-4056-98d3-b9f90fd37268 req-d52030f0-ae82-40aa-8ecd-0b9145cce232 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Received event network-vif-unplugged-a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:00:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:26.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:26 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:27 np0005539552 nova_compute[233724]: 2025-11-29 08:00:27.046 233728 INFO nova.virt.libvirt.driver [None req-04f25d35-11da-44f9-85e4-c534e04f1a3b d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Deleting instance files /var/lib/nova/instances/683be290-1274-4e4b-89d9-ff49c5831ea1_del#033[00m
Nov 29 03:00:27 np0005539552 nova_compute[233724]: 2025-11-29 08:00:27.047 233728 INFO nova.virt.libvirt.driver [None req-04f25d35-11da-44f9-85e4-c534e04f1a3b d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Deletion of /var/lib/nova/instances/683be290-1274-4e4b-89d9-ff49c5831ea1_del complete#033[00m
Nov 29 03:00:27 np0005539552 nova_compute[233724]: 2025-11-29 08:00:27.130 233728 INFO nova.compute.manager [None req-04f25d35-11da-44f9-85e4-c534e04f1a3b d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Took 3.17 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:00:27 np0005539552 nova_compute[233724]: 2025-11-29 08:00:27.131 233728 DEBUG oslo.service.loopingcall [None req-04f25d35-11da-44f9-85e4-c534e04f1a3b d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:00:27 np0005539552 nova_compute[233724]: 2025-11-29 08:00:27.131 233728 DEBUG nova.compute.manager [-] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:00:27 np0005539552 nova_compute[233724]: 2025-11-29 08:00:27.131 233728 DEBUG nova.network.neutron [-] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:00:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e181 e181: 3 total, 3 up, 3 in
Nov 29 03:00:28 np0005539552 nova_compute[233724]: 2025-11-29 08:00:28.022 233728 DEBUG nova.compute.manager [req-b019f5e0-0448-4a24-b83b-2e5b526cca07 req-53f2d5dd-6c1c-478c-af49-94fd09bf5262 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Received event network-vif-deleted-a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:00:28 np0005539552 nova_compute[233724]: 2025-11-29 08:00:28.023 233728 INFO nova.compute.manager [req-b019f5e0-0448-4a24-b83b-2e5b526cca07 req-53f2d5dd-6c1c-478c-af49-94fd09bf5262 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Neutron deleted interface a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 03:00:28 np0005539552 nova_compute[233724]: 2025-11-29 08:00:28.023 233728 DEBUG nova.network.neutron [req-b019f5e0-0448-4a24-b83b-2e5b526cca07 req-53f2d5dd-6c1c-478c-af49-94fd09bf5262 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Updating instance_info_cache with network_info: [{"id": "e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e", "address": "fa:16:3e:b9:0f:a6", "network": {"id": "867da783-3ae2-4104-bc96-90ace82a1e72", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1796599610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c3ebd49ecd442d8a0b6da0eeb15abcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape71f3fe5-be", "ovs_interfaceid": "e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:00:28 np0005539552 nova_compute[233724]: 2025-11-29 08:00:28.046 233728 DEBUG nova.compute.manager [req-b019f5e0-0448-4a24-b83b-2e5b526cca07 req-53f2d5dd-6c1c-478c-af49-94fd09bf5262 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Detach interface failed, port_id=a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115, reason: Instance 683be290-1274-4e4b-89d9-ff49c5831ea1 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 29 03:00:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:28.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:28.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:28 np0005539552 nova_compute[233724]: 2025-11-29 08:00:28.298 233728 DEBUG nova.network.neutron [-] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:00:28 np0005539552 nova_compute[233724]: 2025-11-29 08:00:28.316 233728 INFO nova.compute.manager [-] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Took 1.18 seconds to deallocate network for instance.#033[00m
Nov 29 03:00:28 np0005539552 nova_compute[233724]: 2025-11-29 08:00:28.397 233728 DEBUG oslo_concurrency.lockutils [None req-04f25d35-11da-44f9-85e4-c534e04f1a3b d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:28 np0005539552 nova_compute[233724]: 2025-11-29 08:00:28.397 233728 DEBUG oslo_concurrency.lockutils [None req-04f25d35-11da-44f9-85e4-c534e04f1a3b d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:28 np0005539552 nova_compute[233724]: 2025-11-29 08:00:28.421 233728 DEBUG nova.compute.manager [req-f0917bea-4a88-4a97-9ed6-8337398b2c21 req-a4c6fa94-6e54-403f-8cd3-3b215d4488a5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Received event network-vif-plugged-e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:00:28 np0005539552 nova_compute[233724]: 2025-11-29 08:00:28.422 233728 DEBUG oslo_concurrency.lockutils [req-f0917bea-4a88-4a97-9ed6-8337398b2c21 req-a4c6fa94-6e54-403f-8cd3-3b215d4488a5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "683be290-1274-4e4b-89d9-ff49c5831ea1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:28 np0005539552 nova_compute[233724]: 2025-11-29 08:00:28.422 233728 DEBUG oslo_concurrency.lockutils [req-f0917bea-4a88-4a97-9ed6-8337398b2c21 req-a4c6fa94-6e54-403f-8cd3-3b215d4488a5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "683be290-1274-4e4b-89d9-ff49c5831ea1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:28 np0005539552 nova_compute[233724]: 2025-11-29 08:00:28.422 233728 DEBUG oslo_concurrency.lockutils [req-f0917bea-4a88-4a97-9ed6-8337398b2c21 req-a4c6fa94-6e54-403f-8cd3-3b215d4488a5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "683be290-1274-4e4b-89d9-ff49c5831ea1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:28 np0005539552 nova_compute[233724]: 2025-11-29 08:00:28.422 233728 DEBUG nova.compute.manager [req-f0917bea-4a88-4a97-9ed6-8337398b2c21 req-a4c6fa94-6e54-403f-8cd3-3b215d4488a5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] No waiting events found dispatching network-vif-plugged-e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:00:28 np0005539552 nova_compute[233724]: 2025-11-29 08:00:28.422 233728 WARNING nova.compute.manager [req-f0917bea-4a88-4a97-9ed6-8337398b2c21 req-a4c6fa94-6e54-403f-8cd3-3b215d4488a5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Received unexpected event network-vif-plugged-e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:00:28 np0005539552 nova_compute[233724]: 2025-11-29 08:00:28.456 233728 DEBUG nova.compute.manager [req-12bb6c69-9df8-4269-8577-5981b500b244 req-32b7fb33-c5bd-419d-bcbf-a80e767dfeff 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Received event network-vif-plugged-a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:00:28 np0005539552 nova_compute[233724]: 2025-11-29 08:00:28.456 233728 DEBUG oslo_concurrency.lockutils [req-12bb6c69-9df8-4269-8577-5981b500b244 req-32b7fb33-c5bd-419d-bcbf-a80e767dfeff 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "683be290-1274-4e4b-89d9-ff49c5831ea1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:28 np0005539552 nova_compute[233724]: 2025-11-29 08:00:28.457 233728 DEBUG oslo_concurrency.lockutils [req-12bb6c69-9df8-4269-8577-5981b500b244 req-32b7fb33-c5bd-419d-bcbf-a80e767dfeff 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "683be290-1274-4e4b-89d9-ff49c5831ea1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:28 np0005539552 nova_compute[233724]: 2025-11-29 08:00:28.457 233728 DEBUG oslo_concurrency.lockutils [req-12bb6c69-9df8-4269-8577-5981b500b244 req-32b7fb33-c5bd-419d-bcbf-a80e767dfeff 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "683be290-1274-4e4b-89d9-ff49c5831ea1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:28 np0005539552 nova_compute[233724]: 2025-11-29 08:00:28.457 233728 DEBUG nova.compute.manager [req-12bb6c69-9df8-4269-8577-5981b500b244 req-32b7fb33-c5bd-419d-bcbf-a80e767dfeff 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] No waiting events found dispatching network-vif-plugged-a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:00:28 np0005539552 nova_compute[233724]: 2025-11-29 08:00:28.457 233728 WARNING nova.compute.manager [req-12bb6c69-9df8-4269-8577-5981b500b244 req-32b7fb33-c5bd-419d-bcbf-a80e767dfeff 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Received unexpected event network-vif-plugged-a7d5ebb0-f7c4-4f13-bb8a-eb511b99a115 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:00:28 np0005539552 nova_compute[233724]: 2025-11-29 08:00:28.508 233728 DEBUG oslo_concurrency.processutils [None req-04f25d35-11da-44f9-85e4-c534e04f1a3b d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:00:28 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/423216297' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:00:28 np0005539552 nova_compute[233724]: 2025-11-29 08:00:28.955 233728 DEBUG oslo_concurrency.processutils [None req-04f25d35-11da-44f9-85e4-c534e04f1a3b d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:28 np0005539552 nova_compute[233724]: 2025-11-29 08:00:28.962 233728 DEBUG nova.compute.provider_tree [None req-04f25d35-11da-44f9-85e4-c534e04f1a3b d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:00:28 np0005539552 nova_compute[233724]: 2025-11-29 08:00:28.977 233728 DEBUG nova.scheduler.client.report [None req-04f25d35-11da-44f9-85e4-c534e04f1a3b d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:00:29 np0005539552 nova_compute[233724]: 2025-11-29 08:00:29.001 233728 DEBUG oslo_concurrency.lockutils [None req-04f25d35-11da-44f9-85e4-c534e04f1a3b d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:29 np0005539552 nova_compute[233724]: 2025-11-29 08:00:29.034 233728 INFO nova.scheduler.client.report [None req-04f25d35-11da-44f9-85e4-c534e04f1a3b d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Deleted allocations for instance 683be290-1274-4e4b-89d9-ff49c5831ea1#033[00m
Nov 29 03:00:29 np0005539552 nova_compute[233724]: 2025-11-29 08:00:29.090 233728 DEBUG oslo_concurrency.lockutils [None req-04f25d35-11da-44f9-85e4-c534e04f1a3b d26800adb7534fb6b85bcefeb114a77d 8c3ebd49ecd442d8a0b6da0eeb15abcc - - default default] Lock "683be290-1274-4e4b-89d9-ff49c5831ea1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:29 np0005539552 nova_compute[233724]: 2025-11-29 08:00:29.673 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:29 np0005539552 nova_compute[233724]: 2025-11-29 08:00:29.902 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:30.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:30.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:30 np0005539552 nova_compute[233724]: 2025-11-29 08:00:30.478 233728 DEBUG nova.compute.manager [req-9538fc28-65f1-4f2f-86df-94e9ce180a33 req-567d0f05-0d66-44c2-b59f-3edff6198e64 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Received event network-vif-deleted-e71f3fe5-beca-48a1-8ae9-6b9fa9ab1c7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:00:31 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:32.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:32.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:34.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:34.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:34 np0005539552 nova_compute[233724]: 2025-11-29 08:00:34.674 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:34 np0005539552 nova_compute[233724]: 2025-11-29 08:00:34.904 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:36.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:36.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:36 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:36 np0005539552 nova_compute[233724]: 2025-11-29 08:00:36.631 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:37 np0005539552 podman[250178]: 2025-11-29 08:00:37.963488464 +0000 UTC m=+0.055556869 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:00:37 np0005539552 podman[250179]: 2025-11-29 08:00:37.996557282 +0000 UTC m=+0.085272916 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible)
Nov 29 03:00:37 np0005539552 podman[250177]: 2025-11-29 08:00:37.996630454 +0000 UTC m=+0.088531665 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:00:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:38.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:38.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:00:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2531380140' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:00:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:00:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2531380140' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:00:39 np0005539552 nova_compute[233724]: 2025-11-29 08:00:39.676 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:39 np0005539552 nova_compute[233724]: 2025-11-29 08:00:39.800 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403224.7990863, 683be290-1274-4e4b-89d9-ff49c5831ea1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:00:39 np0005539552 nova_compute[233724]: 2025-11-29 08:00:39.801 233728 INFO nova.compute.manager [-] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:00:39 np0005539552 nova_compute[233724]: 2025-11-29 08:00:39.822 233728 DEBUG nova.compute.manager [None req-593642df-02f3-41a1-8bf2-c72bcd3217e4 - - - - - -] [instance: 683be290-1274-4e4b-89d9-ff49c5831ea1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:00:39 np0005539552 nova_compute[233724]: 2025-11-29 08:00:39.905 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:40.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:40.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:41 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e182 e182: 3 total, 3 up, 3 in
Nov 29 03:00:41 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:41 np0005539552 ceph-osd[79800]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Nov 29 03:00:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:42.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:42.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:43 np0005539552 nova_compute[233724]: 2025-11-29 08:00:43.737 233728 DEBUG oslo_concurrency.lockutils [None req-cee34f30-b488-4aae-ad0a-8a23d96dc160 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:43 np0005539552 nova_compute[233724]: 2025-11-29 08:00:43.738 233728 DEBUG oslo_concurrency.lockutils [None req-cee34f30-b488-4aae-ad0a-8a23d96dc160 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:43 np0005539552 nova_compute[233724]: 2025-11-29 08:00:43.738 233728 DEBUG oslo_concurrency.lockutils [None req-cee34f30-b488-4aae-ad0a-8a23d96dc160 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:43 np0005539552 nova_compute[233724]: 2025-11-29 08:00:43.738 233728 DEBUG oslo_concurrency.lockutils [None req-cee34f30-b488-4aae-ad0a-8a23d96dc160 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:43 np0005539552 nova_compute[233724]: 2025-11-29 08:00:43.739 233728 DEBUG oslo_concurrency.lockutils [None req-cee34f30-b488-4aae-ad0a-8a23d96dc160 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:43 np0005539552 nova_compute[233724]: 2025-11-29 08:00:43.740 233728 INFO nova.compute.manager [None req-cee34f30-b488-4aae-ad0a-8a23d96dc160 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Terminating instance#033[00m
Nov 29 03:00:43 np0005539552 nova_compute[233724]: 2025-11-29 08:00:43.741 233728 DEBUG oslo_concurrency.lockutils [None req-cee34f30-b488-4aae-ad0a-8a23d96dc160 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "refresh_cache-fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:00:43 np0005539552 nova_compute[233724]: 2025-11-29 08:00:43.741 233728 DEBUG oslo_concurrency.lockutils [None req-cee34f30-b488-4aae-ad0a-8a23d96dc160 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquired lock "refresh_cache-fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:00:43 np0005539552 nova_compute[233724]: 2025-11-29 08:00:43.741 233728 DEBUG nova.network.neutron [None req-cee34f30-b488-4aae-ad0a-8a23d96dc160 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:00:43 np0005539552 nova_compute[233724]: 2025-11-29 08:00:43.926 233728 DEBUG nova.network.neutron [None req-cee34f30-b488-4aae-ad0a-8a23d96dc160 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:00:44 np0005539552 ceph-mgr[77480]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1950343944
Nov 29 03:00:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:44.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:44.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:44 np0005539552 nova_compute[233724]: 2025-11-29 08:00:44.361 233728 DEBUG nova.network.neutron [None req-cee34f30-b488-4aae-ad0a-8a23d96dc160 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:00:44 np0005539552 nova_compute[233724]: 2025-11-29 08:00:44.378 233728 DEBUG oslo_concurrency.lockutils [None req-cee34f30-b488-4aae-ad0a-8a23d96dc160 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Releasing lock "refresh_cache-fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:00:44 np0005539552 nova_compute[233724]: 2025-11-29 08:00:44.380 233728 DEBUG nova.compute.manager [None req-cee34f30-b488-4aae-ad0a-8a23d96dc160 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:00:44 np0005539552 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Nov 29 03:00:44 np0005539552 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000001b.scope: Consumed 19.165s CPU time.
Nov 29 03:00:44 np0005539552 systemd-machined[196379]: Machine qemu-10-instance-0000001b terminated.
Nov 29 03:00:44 np0005539552 nova_compute[233724]: 2025-11-29 08:00:44.604 233728 INFO nova.virt.libvirt.driver [-] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Instance destroyed successfully.#033[00m
Nov 29 03:00:44 np0005539552 nova_compute[233724]: 2025-11-29 08:00:44.606 233728 DEBUG nova.objects.instance [None req-cee34f30-b488-4aae-ad0a-8a23d96dc160 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lazy-loading 'resources' on Instance uuid fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:00:44 np0005539552 nova_compute[233724]: 2025-11-29 08:00:44.715 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:44 np0005539552 nova_compute[233724]: 2025-11-29 08:00:44.907 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:46.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:46.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:46 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:48.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:48.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:49 np0005539552 nova_compute[233724]: 2025-11-29 08:00:49.717 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:49 np0005539552 nova_compute[233724]: 2025-11-29 08:00:49.908 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:50.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:50.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:51 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:52.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:52.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:54.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 03:00:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:54.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 03:00:54 np0005539552 nova_compute[233724]: 2025-11-29 08:00:54.718 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:54 np0005539552 nova_compute[233724]: 2025-11-29 08:00:54.909 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:55 np0005539552 nova_compute[233724]: 2025-11-29 08:00:55.078 233728 INFO nova.virt.libvirt.driver [None req-cee34f30-b488-4aae-ad0a-8a23d96dc160 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Deleting instance files /var/lib/nova/instances/fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842_del#033[00m
Nov 29 03:00:55 np0005539552 nova_compute[233724]: 2025-11-29 08:00:55.079 233728 INFO nova.virt.libvirt.driver [None req-cee34f30-b488-4aae-ad0a-8a23d96dc160 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Deletion of /var/lib/nova/instances/fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842_del complete#033[00m
Nov 29 03:00:55 np0005539552 nova_compute[233724]: 2025-11-29 08:00:55.185 233728 INFO nova.compute.manager [None req-cee34f30-b488-4aae-ad0a-8a23d96dc160 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Took 10.81 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:00:55 np0005539552 nova_compute[233724]: 2025-11-29 08:00:55.186 233728 DEBUG oslo.service.loopingcall [None req-cee34f30-b488-4aae-ad0a-8a23d96dc160 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:00:55 np0005539552 nova_compute[233724]: 2025-11-29 08:00:55.186 233728 DEBUG nova.compute.manager [-] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:00:55 np0005539552 nova_compute[233724]: 2025-11-29 08:00:55.187 233728 DEBUG nova.network.neutron [-] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:00:55 np0005539552 nova_compute[233724]: 2025-11-29 08:00:55.332 233728 DEBUG nova.network.neutron [-] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:00:55 np0005539552 nova_compute[233724]: 2025-11-29 08:00:55.356 233728 DEBUG nova.network.neutron [-] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:00:55 np0005539552 nova_compute[233724]: 2025-11-29 08:00:55.401 233728 INFO nova.compute.manager [-] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Took 0.21 seconds to deallocate network for instance.#033[00m
Nov 29 03:00:55 np0005539552 nova_compute[233724]: 2025-11-29 08:00:55.451 233728 DEBUG oslo_concurrency.lockutils [None req-cee34f30-b488-4aae-ad0a-8a23d96dc160 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:55 np0005539552 nova_compute[233724]: 2025-11-29 08:00:55.452 233728 DEBUG oslo_concurrency.lockutils [None req-cee34f30-b488-4aae-ad0a-8a23d96dc160 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:55 np0005539552 nova_compute[233724]: 2025-11-29 08:00:55.536 233728 DEBUG oslo_concurrency.processutils [None req-cee34f30-b488-4aae-ad0a-8a23d96dc160 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:55 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:00:55 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1283191719' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:00:55 np0005539552 nova_compute[233724]: 2025-11-29 08:00:55.964 233728 DEBUG oslo_concurrency.processutils [None req-cee34f30-b488-4aae-ad0a-8a23d96dc160 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:00:55 np0005539552 nova_compute[233724]: 2025-11-29 08:00:55.971 233728 DEBUG nova.compute.provider_tree [None req-cee34f30-b488-4aae-ad0a-8a23d96dc160 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:00:56 np0005539552 nova_compute[233724]: 2025-11-29 08:00:56.080 233728 DEBUG nova.scheduler.client.report [None req-cee34f30-b488-4aae-ad0a-8a23d96dc160 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:00:56 np0005539552 nova_compute[233724]: 2025-11-29 08:00:56.109 233728 DEBUG oslo_concurrency.lockutils [None req-cee34f30-b488-4aae-ad0a-8a23d96dc160 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:00:56 np0005539552 nova_compute[233724]: 2025-11-29 08:00:56.137 233728 INFO nova.scheduler.client.report [None req-cee34f30-b488-4aae-ad0a-8a23d96dc160 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Deleted allocations for instance fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842
Nov 29 03:00:56 np0005539552 nova_compute[233724]: 2025-11-29 08:00:56.200 233728 DEBUG oslo_concurrency.lockutils [None req-cee34f30-b488-4aae-ad0a-8a23d96dc160 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 12.463s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:00:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:56.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:56.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:56.487 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:00:56 np0005539552 nova_compute[233724]: 2025-11-29 08:00:56.488 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:00:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:00:56.489 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:00:56 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:57 np0005539552 nova_compute[233724]: 2025-11-29 08:00:57.479 233728 DEBUG oslo_concurrency.lockutils [None req-140dda59-f939-469c-82dd-5bebc3d31175 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "9eb89c30-3f33-4a7c-ae19-8312a2522b82" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:00:57 np0005539552 nova_compute[233724]: 2025-11-29 08:00:57.480 233728 DEBUG oslo_concurrency.lockutils [None req-140dda59-f939-469c-82dd-5bebc3d31175 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "9eb89c30-3f33-4a7c-ae19-8312a2522b82" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:00:57 np0005539552 nova_compute[233724]: 2025-11-29 08:00:57.480 233728 DEBUG oslo_concurrency.lockutils [None req-140dda59-f939-469c-82dd-5bebc3d31175 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "9eb89c30-3f33-4a7c-ae19-8312a2522b82-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:00:57 np0005539552 nova_compute[233724]: 2025-11-29 08:00:57.480 233728 DEBUG oslo_concurrency.lockutils [None req-140dda59-f939-469c-82dd-5bebc3d31175 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "9eb89c30-3f33-4a7c-ae19-8312a2522b82-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:00:57 np0005539552 nova_compute[233724]: 2025-11-29 08:00:57.480 233728 DEBUG oslo_concurrency.lockutils [None req-140dda59-f939-469c-82dd-5bebc3d31175 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "9eb89c30-3f33-4a7c-ae19-8312a2522b82-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:00:57 np0005539552 nova_compute[233724]: 2025-11-29 08:00:57.481 233728 INFO nova.compute.manager [None req-140dda59-f939-469c-82dd-5bebc3d31175 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Terminating instance
Nov 29 03:00:57 np0005539552 nova_compute[233724]: 2025-11-29 08:00:57.482 233728 DEBUG oslo_concurrency.lockutils [None req-140dda59-f939-469c-82dd-5bebc3d31175 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "refresh_cache-9eb89c30-3f33-4a7c-ae19-8312a2522b82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:00:57 np0005539552 nova_compute[233724]: 2025-11-29 08:00:57.482 233728 DEBUG oslo_concurrency.lockutils [None req-140dda59-f939-469c-82dd-5bebc3d31175 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquired lock "refresh_cache-9eb89c30-3f33-4a7c-ae19-8312a2522b82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:00:57 np0005539552 nova_compute[233724]: 2025-11-29 08:00:57.482 233728 DEBUG nova.network.neutron [None req-140dda59-f939-469c-82dd-5bebc3d31175 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:00:58 np0005539552 nova_compute[233724]: 2025-11-29 08:00:58.046 233728 DEBUG nova.network.neutron [None req-140dda59-f939-469c-82dd-5bebc3d31175 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:00:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:58.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:58 np0005539552 nova_compute[233724]: 2025-11-29 08:00:58.295 233728 DEBUG nova.network.neutron [None req-140dda59-f939-469c-82dd-5bebc3d31175 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:00:58 np0005539552 nova_compute[233724]: 2025-11-29 08:00:58.309 233728 DEBUG oslo_concurrency.lockutils [None req-140dda59-f939-469c-82dd-5bebc3d31175 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Releasing lock "refresh_cache-9eb89c30-3f33-4a7c-ae19-8312a2522b82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:00:58 np0005539552 nova_compute[233724]: 2025-11-29 08:00:58.311 233728 DEBUG nova.compute.manager [None req-140dda59-f939-469c-82dd-5bebc3d31175 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 03:00:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:00:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:58.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:58 np0005539552 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Nov 29 03:00:58 np0005539552 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d0000001a.scope: Consumed 21.221s CPU time.
Nov 29 03:00:58 np0005539552 systemd-machined[196379]: Machine qemu-8-instance-0000001a terminated.
Nov 29 03:00:58 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:00:58 np0005539552 nova_compute[233724]: 2025-11-29 08:00:58.734 233728 INFO nova.virt.libvirt.driver [-] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Instance destroyed successfully.
Nov 29 03:00:58 np0005539552 nova_compute[233724]: 2025-11-29 08:00:58.736 233728 DEBUG nova.objects.instance [None req-140dda59-f939-469c-82dd-5bebc3d31175 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lazy-loading 'resources' on Instance uuid 9eb89c30-3f33-4a7c-ae19-8312a2522b82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:00:59 np0005539552 nova_compute[233724]: 2025-11-29 08:00:59.602 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403244.601485, fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:00:59 np0005539552 nova_compute[233724]: 2025-11-29 08:00:59.603 233728 INFO nova.compute.manager [-] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] VM Stopped (Lifecycle Event)
Nov 29 03:00:59 np0005539552 nova_compute[233724]: 2025-11-29 08:00:59.621 233728 DEBUG nova.compute.manager [None req-0082df73-3732-481f-a745-edcb824e88d8 - - - - - -] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:00:59 np0005539552 nova_compute[233724]: 2025-11-29 08:00:59.720 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:00:59 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:00:59 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:00:59 np0005539552 nova_compute[233724]: 2025-11-29 08:00:59.910 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:00.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:00.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:01 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:02.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:02.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:04 np0005539552 nova_compute[233724]: 2025-11-29 08:01:04.011 233728 INFO nova.virt.libvirt.driver [None req-140dda59-f939-469c-82dd-5bebc3d31175 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Deleting instance files /var/lib/nova/instances/9eb89c30-3f33-4a7c-ae19-8312a2522b82_del
Nov 29 03:01:04 np0005539552 nova_compute[233724]: 2025-11-29 08:01:04.012 233728 INFO nova.virt.libvirt.driver [None req-140dda59-f939-469c-82dd-5bebc3d31175 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Deletion of /var/lib/nova/instances/9eb89c30-3f33-4a7c-ae19-8312a2522b82_del complete
Nov 29 03:01:04 np0005539552 nova_compute[233724]: 2025-11-29 08:01:04.099 233728 INFO nova.compute.manager [None req-140dda59-f939-469c-82dd-5bebc3d31175 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Took 5.79 seconds to destroy the instance on the hypervisor.
Nov 29 03:01:04 np0005539552 nova_compute[233724]: 2025-11-29 08:01:04.099 233728 DEBUG oslo.service.loopingcall [None req-140dda59-f939-469c-82dd-5bebc3d31175 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 03:01:04 np0005539552 nova_compute[233724]: 2025-11-29 08:01:04.100 233728 DEBUG nova.compute.manager [-] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 03:01:04 np0005539552 nova_compute[233724]: 2025-11-29 08:01:04.100 233728 DEBUG nova.network.neutron [-] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 03:01:04 np0005539552 nova_compute[233724]: 2025-11-29 08:01:04.240 233728 DEBUG nova.network.neutron [-] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:01:04 np0005539552 nova_compute[233724]: 2025-11-29 08:01:04.253 233728 DEBUG nova.network.neutron [-] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:01:04 np0005539552 nova_compute[233724]: 2025-11-29 08:01:04.269 233728 INFO nova.compute.manager [-] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Took 0.17 seconds to deallocate network for instance.
Nov 29 03:01:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:04.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:04 np0005539552 nova_compute[233724]: 2025-11-29 08:01:04.327 233728 DEBUG oslo_concurrency.lockutils [None req-140dda59-f939-469c-82dd-5bebc3d31175 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:01:04 np0005539552 nova_compute[233724]: 2025-11-29 08:01:04.328 233728 DEBUG oslo_concurrency.lockutils [None req-140dda59-f939-469c-82dd-5bebc3d31175 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:01:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:04.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:04 np0005539552 nova_compute[233724]: 2025-11-29 08:01:04.394 233728 DEBUG oslo_concurrency.processutils [None req-140dda59-f939-469c-82dd-5bebc3d31175 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:01:04 np0005539552 nova_compute[233724]: 2025-11-29 08:01:04.724 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:04 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:01:04 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1312967081' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:01:04 np0005539552 nova_compute[233724]: 2025-11-29 08:01:04.885 233728 DEBUG oslo_concurrency.processutils [None req-140dda59-f939-469c-82dd-5bebc3d31175 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:01:04 np0005539552 nova_compute[233724]: 2025-11-29 08:01:04.891 233728 DEBUG nova.compute.provider_tree [None req-140dda59-f939-469c-82dd-5bebc3d31175 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:01:04 np0005539552 nova_compute[233724]: 2025-11-29 08:01:04.912 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:04 np0005539552 nova_compute[233724]: 2025-11-29 08:01:04.925 233728 DEBUG nova.scheduler.client.report [None req-140dda59-f939-469c-82dd-5bebc3d31175 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:01:04 np0005539552 nova_compute[233724]: 2025-11-29 08:01:04.945 233728 DEBUG oslo_concurrency.lockutils [None req-140dda59-f939-469c-82dd-5bebc3d31175 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:01:04 np0005539552 nova_compute[233724]: 2025-11-29 08:01:04.979 233728 INFO nova.scheduler.client.report [None req-140dda59-f939-469c-82dd-5bebc3d31175 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Deleted allocations for instance 9eb89c30-3f33-4a7c-ae19-8312a2522b82
Nov 29 03:01:05 np0005539552 nova_compute[233724]: 2025-11-29 08:01:05.138 233728 DEBUG oslo_concurrency.lockutils [None req-140dda59-f939-469c-82dd-5bebc3d31175 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "9eb89c30-3f33-4a7c-ae19-8312a2522b82" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:01:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:05.491 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:01:06 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:01:06 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:01:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:06.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:06.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:06 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:08.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:08.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:08 np0005539552 podman[250590]: 2025-11-29 08:01:08.777785983 +0000 UTC m=+0.053837682 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 03:01:08 np0005539552 podman[250589]: 2025-11-29 08:01:08.787520788 +0000 UTC m=+0.069240191 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251125)
Nov 29 03:01:08 np0005539552 podman[250591]: 2025-11-29 08:01:08.816435073 +0000 UTC m=+0.095570146 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:01:09 np0005539552 nova_compute[233724]: 2025-11-29 08:01:09.727 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:09 np0005539552 nova_compute[233724]: 2025-11-29 08:01:09.914 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:10.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:10.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:11 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:12.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:12.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:12 np0005539552 nova_compute[233724]: 2025-11-29 08:01:12.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:01:12 np0005539552 nova_compute[233724]: 2025-11-29 08:01:12.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:01:13 np0005539552 nova_compute[233724]: 2025-11-29 08:01:13.730 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403258.7281036, 9eb89c30-3f33-4a7c-ae19-8312a2522b82 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:01:13 np0005539552 nova_compute[233724]: 2025-11-29 08:01:13.731 233728 INFO nova.compute.manager [-] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:01:13 np0005539552 nova_compute[233724]: 2025-11-29 08:01:13.922 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:01:13 np0005539552 nova_compute[233724]: 2025-11-29 08:01:13.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:01:13 np0005539552 nova_compute[233724]: 2025-11-29 08:01:13.923 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:01:13 np0005539552 nova_compute[233724]: 2025-11-29 08:01:13.986 233728 DEBUG nova.compute.manager [None req-2d0048e0-441a-44f9-b6e2-b37a1e63d9a2 - - - - - -] [instance: 9eb89c30-3f33-4a7c-ae19-8312a2522b82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:01:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:14.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:14.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:14 np0005539552 nova_compute[233724]: 2025-11-29 08:01:14.729 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:14 np0005539552 ovn_controller[133798]: 2025-11-29T08:01:14Z|00133|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Nov 29 03:01:14 np0005539552 nova_compute[233724]: 2025-11-29 08:01:14.917 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:14 np0005539552 nova_compute[233724]: 2025-11-29 08:01:14.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:01:14 np0005539552 nova_compute[233724]: 2025-11-29 08:01:14.923 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:01:14 np0005539552 nova_compute[233724]: 2025-11-29 08:01:14.923 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:01:14 np0005539552 nova_compute[233724]: 2025-11-29 08:01:14.940 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:01:14 np0005539552 nova_compute[233724]: 2025-11-29 08:01:14.941 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:01:14 np0005539552 nova_compute[233724]: 2025-11-29 08:01:14.941 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:01:14 np0005539552 nova_compute[233724]: 2025-11-29 08:01:14.961 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:14 np0005539552 nova_compute[233724]: 2025-11-29 08:01:14.962 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:14 np0005539552 nova_compute[233724]: 2025-11-29 08:01:14.962 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:14 np0005539552 nova_compute[233724]: 2025-11-29 08:01:14.962 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:01:14 np0005539552 nova_compute[233724]: 2025-11-29 08:01:14.962 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:15 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:01:15 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/325680423' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:01:15 np0005539552 nova_compute[233724]: 2025-11-29 08:01:15.371 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:15 np0005539552 nova_compute[233724]: 2025-11-29 08:01:15.529 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:01:15 np0005539552 nova_compute[233724]: 2025-11-29 08:01:15.530 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4819MB free_disk=20.900978088378906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:01:15 np0005539552 nova_compute[233724]: 2025-11-29 08:01:15.530 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:15 np0005539552 nova_compute[233724]: 2025-11-29 08:01:15.531 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:16 np0005539552 nova_compute[233724]: 2025-11-29 08:01:16.223 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:01:16 np0005539552 nova_compute[233724]: 2025-11-29 08:01:16.224 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:01:16 np0005539552 nova_compute[233724]: 2025-11-29 08:01:16.258 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:16.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:16.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:16 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:16 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:01:16 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2381677876' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:01:16 np0005539552 nova_compute[233724]: 2025-11-29 08:01:16.727 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:16 np0005539552 nova_compute[233724]: 2025-11-29 08:01:16.733 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:01:16 np0005539552 nova_compute[233724]: 2025-11-29 08:01:16.865 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:01:16 np0005539552 nova_compute[233724]: 2025-11-29 08:01:16.907 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:01:16 np0005539552 nova_compute[233724]: 2025-11-29 08:01:16.908 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.377s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:16 np0005539552 nova_compute[233724]: 2025-11-29 08:01:16.992 233728 DEBUG oslo_concurrency.lockutils [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Acquiring lock "beda6e02-cb4c-4daf-aefc-0081df73a5c0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:16 np0005539552 nova_compute[233724]: 2025-11-29 08:01:16.992 233728 DEBUG oslo_concurrency.lockutils [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Lock "beda6e02-cb4c-4daf-aefc-0081df73a5c0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:17 np0005539552 nova_compute[233724]: 2025-11-29 08:01:17.008 233728 DEBUG nova.compute.manager [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:01:17 np0005539552 nova_compute[233724]: 2025-11-29 08:01:17.077 233728 DEBUG oslo_concurrency.lockutils [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:17 np0005539552 nova_compute[233724]: 2025-11-29 08:01:17.077 233728 DEBUG oslo_concurrency.lockutils [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:17 np0005539552 nova_compute[233724]: 2025-11-29 08:01:17.088 233728 DEBUG nova.virt.hardware [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:01:17 np0005539552 nova_compute[233724]: 2025-11-29 08:01:17.088 233728 INFO nova.compute.claims [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:01:17 np0005539552 nova_compute[233724]: 2025-11-29 08:01:17.191 233728 DEBUG oslo_concurrency.processutils [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:01:17 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2027236804' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:01:17 np0005539552 nova_compute[233724]: 2025-11-29 08:01:17.613 233728 DEBUG oslo_concurrency.processutils [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:17 np0005539552 nova_compute[233724]: 2025-11-29 08:01:17.619 233728 DEBUG nova.compute.provider_tree [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:01:17 np0005539552 nova_compute[233724]: 2025-11-29 08:01:17.640 233728 DEBUG nova.scheduler.client.report [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:01:17 np0005539552 nova_compute[233724]: 2025-11-29 08:01:17.679 233728 DEBUG oslo_concurrency.lockutils [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:17 np0005539552 nova_compute[233724]: 2025-11-29 08:01:17.680 233728 DEBUG nova.compute.manager [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:01:17 np0005539552 nova_compute[233724]: 2025-11-29 08:01:17.744 233728 DEBUG nova.compute.manager [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:01:17 np0005539552 nova_compute[233724]: 2025-11-29 08:01:17.745 233728 DEBUG nova.network.neutron [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:01:17 np0005539552 nova_compute[233724]: 2025-11-29 08:01:17.768 233728 INFO nova.virt.libvirt.driver [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:01:17 np0005539552 nova_compute[233724]: 2025-11-29 08:01:17.799 233728 DEBUG nova.compute.manager [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:01:17 np0005539552 nova_compute[233724]: 2025-11-29 08:01:17.857 233728 INFO nova.virt.block_device [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Booting with volume cda8bfe0-3a4d-4d07-beb1-adb73b629321 at /dev/vda#033[00m
Nov 29 03:01:17 np0005539552 nova_compute[233724]: 2025-11-29 08:01:17.890 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:01:17 np0005539552 nova_compute[233724]: 2025-11-29 08:01:17.918 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:01:18 np0005539552 nova_compute[233724]: 2025-11-29 08:01:18.154 233728 DEBUG os_brick.utils [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:01:18 np0005539552 nova_compute[233724]: 2025-11-29 08:01:18.156 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:18 np0005539552 nova_compute[233724]: 2025-11-29 08:01:18.167 243418 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:18 np0005539552 nova_compute[233724]: 2025-11-29 08:01:18.168 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[23b2c00f-b9f7-4139-83d8-3fb5e79fe125]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:18 np0005539552 nova_compute[233724]: 2025-11-29 08:01:18.169 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:18 np0005539552 nova_compute[233724]: 2025-11-29 08:01:18.177 243418 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:18 np0005539552 nova_compute[233724]: 2025-11-29 08:01:18.177 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[422b1596-5b65-4288-8ed0-b864fd17d4aa]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0e73df9328', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:18 np0005539552 nova_compute[233724]: 2025-11-29 08:01:18.179 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:18 np0005539552 nova_compute[233724]: 2025-11-29 08:01:18.187 243418 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:18 np0005539552 nova_compute[233724]: 2025-11-29 08:01:18.187 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[eaa7943d-6a9b-4aab-b275-3730596e4661]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:18 np0005539552 nova_compute[233724]: 2025-11-29 08:01:18.189 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[0e77ddbf-a62e-4a86-8b39-c56a340fa198]: (4, '6fbde64a-d978-4f1a-a29d-a77e1f5a1987') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:18 np0005539552 nova_compute[233724]: 2025-11-29 08:01:18.190 233728 DEBUG oslo_concurrency.processutils [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:18 np0005539552 nova_compute[233724]: 2025-11-29 08:01:18.208 233728 DEBUG oslo_concurrency.processutils [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] CMD "nvme version" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:18 np0005539552 nova_compute[233724]: 2025-11-29 08:01:18.210 233728 DEBUG os_brick.initiator.connectors.lightos [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:01:18 np0005539552 nova_compute[233724]: 2025-11-29 08:01:18.210 233728 DEBUG os_brick.initiator.connectors.lightos [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:01:18 np0005539552 nova_compute[233724]: 2025-11-29 08:01:18.210 233728 DEBUG os_brick.initiator.connectors.lightos [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:01:18 np0005539552 nova_compute[233724]: 2025-11-29 08:01:18.211 233728 DEBUG os_brick.utils [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] <== get_connector_properties: return (55ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0e73df9328', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '6fbde64a-d978-4f1a-a29d-a77e1f5a1987', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:01:18 np0005539552 nova_compute[233724]: 2025-11-29 08:01:18.211 233728 DEBUG nova.virt.block_device [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Updating existing volume attachment record: 0f9a3feb-8acf-4b8a-b367-d540f943835b _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:01:18 np0005539552 nova_compute[233724]: 2025-11-29 08:01:18.216 233728 DEBUG nova.policy [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b37d614815064829b8372abbdbe8b3c4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a3c8b78b8a34400682bf8bbef740a22c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:01:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:18.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:18.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:19 np0005539552 nova_compute[233724]: 2025-11-29 08:01:19.494 233728 DEBUG nova.network.neutron [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Successfully created port: 4b3e5634-45cb-4474-88fb-530eb7736d9c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:01:19 np0005539552 nova_compute[233724]: 2025-11-29 08:01:19.731 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:19 np0005539552 nova_compute[233724]: 2025-11-29 08:01:19.758 233728 DEBUG nova.compute.manager [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:01:19 np0005539552 nova_compute[233724]: 2025-11-29 08:01:19.761 233728 DEBUG nova.virt.libvirt.driver [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:01:19 np0005539552 nova_compute[233724]: 2025-11-29 08:01:19.761 233728 INFO nova.virt.libvirt.driver [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Creating image(s)#033[00m
Nov 29 03:01:19 np0005539552 nova_compute[233724]: 2025-11-29 08:01:19.762 233728 DEBUG nova.virt.libvirt.driver [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:01:19 np0005539552 nova_compute[233724]: 2025-11-29 08:01:19.762 233728 DEBUG nova.virt.libvirt.driver [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Ensure instance console log exists: /var/lib/nova/instances/beda6e02-cb4c-4daf-aefc-0081df73a5c0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:01:19 np0005539552 nova_compute[233724]: 2025-11-29 08:01:19.762 233728 DEBUG oslo_concurrency.lockutils [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:19 np0005539552 nova_compute[233724]: 2025-11-29 08:01:19.763 233728 DEBUG oslo_concurrency.lockutils [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:19 np0005539552 nova_compute[233724]: 2025-11-29 08:01:19.763 233728 DEBUG oslo_concurrency.lockutils [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:19 np0005539552 nova_compute[233724]: 2025-11-29 08:01:19.919 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:20.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:20.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:20.609 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:20.610 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:20.610 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:20 np0005539552 nova_compute[233724]: 2025-11-29 08:01:20.968 233728 DEBUG nova.network.neutron [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Successfully updated port: 4b3e5634-45cb-4474-88fb-530eb7736d9c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:01:20 np0005539552 nova_compute[233724]: 2025-11-29 08:01:20.989 233728 DEBUG oslo_concurrency.lockutils [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Acquiring lock "refresh_cache-beda6e02-cb4c-4daf-aefc-0081df73a5c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:01:20 np0005539552 nova_compute[233724]: 2025-11-29 08:01:20.989 233728 DEBUG oslo_concurrency.lockutils [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Acquired lock "refresh_cache-beda6e02-cb4c-4daf-aefc-0081df73a5c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:01:20 np0005539552 nova_compute[233724]: 2025-11-29 08:01:20.990 233728 DEBUG nova.network.neutron [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:01:21 np0005539552 nova_compute[233724]: 2025-11-29 08:01:21.169 233728 DEBUG nova.compute.manager [req-32250907-7c68-47c7-afa6-b2e45923aab0 req-422c525e-d257-4574-9e22-1b130edd4995 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Received event network-changed-4b3e5634-45cb-4474-88fb-530eb7736d9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:21 np0005539552 nova_compute[233724]: 2025-11-29 08:01:21.170 233728 DEBUG nova.compute.manager [req-32250907-7c68-47c7-afa6-b2e45923aab0 req-422c525e-d257-4574-9e22-1b130edd4995 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Refreshing instance network info cache due to event network-changed-4b3e5634-45cb-4474-88fb-530eb7736d9c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:01:21 np0005539552 nova_compute[233724]: 2025-11-29 08:01:21.170 233728 DEBUG oslo_concurrency.lockutils [req-32250907-7c68-47c7-afa6-b2e45923aab0 req-422c525e-d257-4574-9e22-1b130edd4995 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-beda6e02-cb4c-4daf-aefc-0081df73a5c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:01:21 np0005539552 nova_compute[233724]: 2025-11-29 08:01:21.257 233728 DEBUG nova.network.neutron [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:01:21 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 03:01:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:22.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 03:01:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:22.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:22 np0005539552 nova_compute[233724]: 2025-11-29 08:01:22.677 233728 DEBUG nova.network.neutron [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Updating instance_info_cache with network_info: [{"id": "4b3e5634-45cb-4474-88fb-530eb7736d9c", "address": "fa:16:3e:1b:44:12", "network": {"id": "c31fb1aa-8c23-41d0-909c-30f1f3fa717a", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-249944984-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3c8b78b8a34400682bf8bbef740a22c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b3e5634-45", "ovs_interfaceid": "4b3e5634-45cb-4474-88fb-530eb7736d9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:01:22 np0005539552 nova_compute[233724]: 2025-11-29 08:01:22.718 233728 DEBUG oslo_concurrency.lockutils [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Releasing lock "refresh_cache-beda6e02-cb4c-4daf-aefc-0081df73a5c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:01:22 np0005539552 nova_compute[233724]: 2025-11-29 08:01:22.718 233728 DEBUG nova.compute.manager [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Instance network_info: |[{"id": "4b3e5634-45cb-4474-88fb-530eb7736d9c", "address": "fa:16:3e:1b:44:12", "network": {"id": "c31fb1aa-8c23-41d0-909c-30f1f3fa717a", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-249944984-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3c8b78b8a34400682bf8bbef740a22c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b3e5634-45", "ovs_interfaceid": "4b3e5634-45cb-4474-88fb-530eb7736d9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:01:22 np0005539552 nova_compute[233724]: 2025-11-29 08:01:22.718 233728 DEBUG oslo_concurrency.lockutils [req-32250907-7c68-47c7-afa6-b2e45923aab0 req-422c525e-d257-4574-9e22-1b130edd4995 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-beda6e02-cb4c-4daf-aefc-0081df73a5c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:01:22 np0005539552 nova_compute[233724]: 2025-11-29 08:01:22.719 233728 DEBUG nova.network.neutron [req-32250907-7c68-47c7-afa6-b2e45923aab0 req-422c525e-d257-4574-9e22-1b130edd4995 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Refreshing network info cache for port 4b3e5634-45cb-4474-88fb-530eb7736d9c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:01:22 np0005539552 nova_compute[233724]: 2025-11-29 08:01:22.721 233728 DEBUG nova.virt.libvirt.driver [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Start _get_guest_xml network_info=[{"id": "4b3e5634-45cb-4474-88fb-530eb7736d9c", "address": "fa:16:3e:1b:44:12", "network": {"id": "c31fb1aa-8c23-41d0-909c-30f1f3fa717a", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-249944984-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3c8b78b8a34400682bf8bbef740a22c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b3e5634-45", "ovs_interfaceid": "4b3e5634-45cb-4474-88fb-530eb7736d9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-cda8bfe0-3a4d-4d07-beb1-adb73b629321', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'cda8bfe0-3a4d-4d07-beb1-adb73b629321', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'beda6e02-cb4c-4daf-aefc-0081df73a5c0', 'attached_at': '', 'detached_at': '', 'volume_id': 'cda8bfe0-3a4d-4d07-beb1-adb73b629321', 'serial': 'cda8bfe0-3a4d-4d07-beb1-adb73b629321'}, 'delete_on_termination': True, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'attachment_id': '0f9a3feb-8acf-4b8a-b367-d540f943835b', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:01:22 np0005539552 nova_compute[233724]: 2025-11-29 08:01:22.726 233728 WARNING nova.virt.libvirt.driver [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:01:22 np0005539552 nova_compute[233724]: 2025-11-29 08:01:22.731 233728 DEBUG nova.virt.libvirt.host [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:01:22 np0005539552 nova_compute[233724]: 2025-11-29 08:01:22.732 233728 DEBUG nova.virt.libvirt.host [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:01:22 np0005539552 nova_compute[233724]: 2025-11-29 08:01:22.736 233728 DEBUG nova.virt.libvirt.host [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:01:22 np0005539552 nova_compute[233724]: 2025-11-29 08:01:22.736 233728 DEBUG nova.virt.libvirt.host [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:01:22 np0005539552 nova_compute[233724]: 2025-11-29 08:01:22.738 233728 DEBUG nova.virt.libvirt.driver [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:01:22 np0005539552 nova_compute[233724]: 2025-11-29 08:01:22.738 233728 DEBUG nova.virt.hardware [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:01:22 np0005539552 nova_compute[233724]: 2025-11-29 08:01:22.738 233728 DEBUG nova.virt.hardware [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:01:22 np0005539552 nova_compute[233724]: 2025-11-29 08:01:22.739 233728 DEBUG nova.virt.hardware [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:01:22 np0005539552 nova_compute[233724]: 2025-11-29 08:01:22.739 233728 DEBUG nova.virt.hardware [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:01:22 np0005539552 nova_compute[233724]: 2025-11-29 08:01:22.739 233728 DEBUG nova.virt.hardware [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:01:22 np0005539552 nova_compute[233724]: 2025-11-29 08:01:22.739 233728 DEBUG nova.virt.hardware [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:01:22 np0005539552 nova_compute[233724]: 2025-11-29 08:01:22.740 233728 DEBUG nova.virt.hardware [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:01:22 np0005539552 nova_compute[233724]: 2025-11-29 08:01:22.740 233728 DEBUG nova.virt.hardware [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:01:22 np0005539552 nova_compute[233724]: 2025-11-29 08:01:22.740 233728 DEBUG nova.virt.hardware [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:01:22 np0005539552 nova_compute[233724]: 2025-11-29 08:01:22.740 233728 DEBUG nova.virt.hardware [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:01:22 np0005539552 nova_compute[233724]: 2025-11-29 08:01:22.741 233728 DEBUG nova.virt.hardware [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:01:22 np0005539552 nova_compute[233724]: 2025-11-29 08:01:22.771 233728 DEBUG nova.storage.rbd_utils [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] rbd image beda6e02-cb4c-4daf-aefc-0081df73a5c0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:01:22 np0005539552 nova_compute[233724]: 2025-11-29 08:01:22.775 233728 DEBUG oslo_concurrency.processutils [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:01:23 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3828756534' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:01:23 np0005539552 nova_compute[233724]: 2025-11-29 08:01:23.239 233728 DEBUG oslo_concurrency.processutils [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:23 np0005539552 nova_compute[233724]: 2025-11-29 08:01:23.275 233728 DEBUG nova.virt.libvirt.vif [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:01:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestBootFromVolume-server-919854861',display_name='tempest-ServersTestBootFromVolume-server-919854861',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestbootfromvolume-server-919854861',id=37,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF9bzIDFkeet8Wp0S6Z3wMz5g/H/XOt4di7aMk5ffpxyk+y5jabu+Q8xzySVBiDgbCPjcQKKLjq1DPH/Mvv6OhFcIrBLVb31GEr2pJxnOoL6h1YqIw8zS2xzqdRvwqfsrg==',key_name='tempest-keypair-1334460577',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3c8b78b8a34400682bf8bbef740a22c',ramdisk_id='',reservation_id='r-a9ntnmyb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-ServersTestBootFromVolume-1599408907',owner_user_name='tempest-ServersTestBootFromVolume-1599408907-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:01:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b37d614815064829b8372abbdbe8b3c4',uuid=beda6e02-cb4c-4daf-aefc-0081df73a5c0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4b3e5634-45cb-4474-88fb-530eb7736d9c", "address": "fa:16:3e:1b:44:12", "network": {"id": "c31fb1aa-8c23-41d0-909c-30f1f3fa717a", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-249944984-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3c8b78b8a34400682bf8bbef740a22c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b3e5634-45", "ovs_interfaceid": "4b3e5634-45cb-4474-88fb-530eb7736d9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:01:23 np0005539552 nova_compute[233724]: 2025-11-29 08:01:23.276 233728 DEBUG nova.network.os_vif_util [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Converting VIF {"id": "4b3e5634-45cb-4474-88fb-530eb7736d9c", "address": "fa:16:3e:1b:44:12", "network": {"id": "c31fb1aa-8c23-41d0-909c-30f1f3fa717a", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-249944984-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3c8b78b8a34400682bf8bbef740a22c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b3e5634-45", "ovs_interfaceid": "4b3e5634-45cb-4474-88fb-530eb7736d9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:01:23 np0005539552 nova_compute[233724]: 2025-11-29 08:01:23.277 233728 DEBUG nova.network.os_vif_util [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:44:12,bridge_name='br-int',has_traffic_filtering=True,id=4b3e5634-45cb-4474-88fb-530eb7736d9c,network=Network(c31fb1aa-8c23-41d0-909c-30f1f3fa717a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b3e5634-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:01:23 np0005539552 nova_compute[233724]: 2025-11-29 08:01:23.278 233728 DEBUG nova.objects.instance [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Lazy-loading 'pci_devices' on Instance uuid beda6e02-cb4c-4daf-aefc-0081df73a5c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:01:23 np0005539552 nova_compute[233724]: 2025-11-29 08:01:23.306 233728 DEBUG nova.virt.libvirt.driver [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:01:23 np0005539552 nova_compute[233724]:  <uuid>beda6e02-cb4c-4daf-aefc-0081df73a5c0</uuid>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:  <name>instance-00000025</name>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:01:23 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:      <nova:name>tempest-ServersTestBootFromVolume-server-919854861</nova:name>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:01:22</nova:creationTime>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:01:23 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:        <nova:user uuid="b37d614815064829b8372abbdbe8b3c4">tempest-ServersTestBootFromVolume-1599408907-project-member</nova:user>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:        <nova:project uuid="a3c8b78b8a34400682bf8bbef740a22c">tempest-ServersTestBootFromVolume-1599408907</nova:project>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:        <nova:port uuid="4b3e5634-45cb-4474-88fb-530eb7736d9c">
Nov 29 03:01:23 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:      <entry name="serial">beda6e02-cb4c-4daf-aefc-0081df73a5c0</entry>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:      <entry name="uuid">beda6e02-cb4c-4daf-aefc-0081df73a5c0</entry>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:01:23 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/beda6e02-cb4c-4daf-aefc-0081df73a5c0_disk.config">
Nov 29 03:01:23 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:01:23 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:01:23 np0005539552 nova_compute[233724]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="volumes/volume-cda8bfe0-3a4d-4d07-beb1-adb73b629321">
Nov 29 03:01:23 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:01:23 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:      <serial>cda8bfe0-3a4d-4d07-beb1-adb73b629321</serial>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:01:23 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:1b:44:12"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:      <target dev="tap4b3e5634-45"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:01:23 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/beda6e02-cb4c-4daf-aefc-0081df73a5c0/console.log" append="off"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:01:23 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:01:23 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:01:23 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:01:23 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:01:23 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:01:23 np0005539552 nova_compute[233724]: 2025-11-29 08:01:23.308 233728 DEBUG nova.compute.manager [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Preparing to wait for external event network-vif-plugged-4b3e5634-45cb-4474-88fb-530eb7736d9c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:01:23 np0005539552 nova_compute[233724]: 2025-11-29 08:01:23.308 233728 DEBUG oslo_concurrency.lockutils [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Acquiring lock "beda6e02-cb4c-4daf-aefc-0081df73a5c0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:23 np0005539552 nova_compute[233724]: 2025-11-29 08:01:23.309 233728 DEBUG oslo_concurrency.lockutils [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Lock "beda6e02-cb4c-4daf-aefc-0081df73a5c0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:23 np0005539552 nova_compute[233724]: 2025-11-29 08:01:23.309 233728 DEBUG oslo_concurrency.lockutils [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Lock "beda6e02-cb4c-4daf-aefc-0081df73a5c0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:23 np0005539552 nova_compute[233724]: 2025-11-29 08:01:23.310 233728 DEBUG nova.virt.libvirt.vif [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:01:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestBootFromVolume-server-919854861',display_name='tempest-ServersTestBootFromVolume-server-919854861',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestbootfromvolume-server-919854861',id=37,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF9bzIDFkeet8Wp0S6Z3wMz5g/H/XOt4di7aMk5ffpxyk+y5jabu+Q8xzySVBiDgbCPjcQKKLjq1DPH/Mvv6OhFcIrBLVb31GEr2pJxnOoL6h1YqIw8zS2xzqdRvwqfsrg==',key_name='tempest-keypair-1334460577',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3c8b78b8a34400682bf8bbef740a22c',ramdisk_id='',reservation_id='r-a9ntnmyb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-ServersTestBootFromVolume-1599408907',owner_user_name='tempest-ServersTestBootFromVolume-1599408907-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:01:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b37d614815064829b8372abbdbe8b3c4',uuid=beda6e02-cb4c-4daf-aefc-0081df73a5c0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4b3e5634-45cb-4474-88fb-530eb7736d9c", "address": "fa:16:3e:1b:44:12", "network": {"id": "c31fb1aa-8c23-41d0-909c-30f1f3fa717a", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-249944984-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3c8b78b8a34400682bf8bbef740a22c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b3e5634-45", "ovs_interfaceid": "4b3e5634-45cb-4474-88fb-530eb7736d9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:01:23 np0005539552 nova_compute[233724]: 2025-11-29 08:01:23.310 233728 DEBUG nova.network.os_vif_util [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Converting VIF {"id": "4b3e5634-45cb-4474-88fb-530eb7736d9c", "address": "fa:16:3e:1b:44:12", "network": {"id": "c31fb1aa-8c23-41d0-909c-30f1f3fa717a", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-249944984-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3c8b78b8a34400682bf8bbef740a22c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b3e5634-45", "ovs_interfaceid": "4b3e5634-45cb-4474-88fb-530eb7736d9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:01:23 np0005539552 nova_compute[233724]: 2025-11-29 08:01:23.311 233728 DEBUG nova.network.os_vif_util [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:44:12,bridge_name='br-int',has_traffic_filtering=True,id=4b3e5634-45cb-4474-88fb-530eb7736d9c,network=Network(c31fb1aa-8c23-41d0-909c-30f1f3fa717a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b3e5634-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:01:23 np0005539552 nova_compute[233724]: 2025-11-29 08:01:23.312 233728 DEBUG os_vif [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:44:12,bridge_name='br-int',has_traffic_filtering=True,id=4b3e5634-45cb-4474-88fb-530eb7736d9c,network=Network(c31fb1aa-8c23-41d0-909c-30f1f3fa717a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b3e5634-45') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:01:23 np0005539552 nova_compute[233724]: 2025-11-29 08:01:23.313 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:23 np0005539552 nova_compute[233724]: 2025-11-29 08:01:23.314 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:23 np0005539552 nova_compute[233724]: 2025-11-29 08:01:23.314 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:01:23 np0005539552 nova_compute[233724]: 2025-11-29 08:01:23.318 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:23 np0005539552 nova_compute[233724]: 2025-11-29 08:01:23.318 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4b3e5634-45, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:23 np0005539552 nova_compute[233724]: 2025-11-29 08:01:23.319 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4b3e5634-45, col_values=(('external_ids', {'iface-id': '4b3e5634-45cb-4474-88fb-530eb7736d9c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1b:44:12', 'vm-uuid': 'beda6e02-cb4c-4daf-aefc-0081df73a5c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:23 np0005539552 nova_compute[233724]: 2025-11-29 08:01:23.320 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:23 np0005539552 NetworkManager[48926]: <info>  [1764403283.3219] manager: (tap4b3e5634-45): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Nov 29 03:01:23 np0005539552 nova_compute[233724]: 2025-11-29 08:01:23.323 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:01:23 np0005539552 nova_compute[233724]: 2025-11-29 08:01:23.327 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:23 np0005539552 nova_compute[233724]: 2025-11-29 08:01:23.328 233728 INFO os_vif [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:44:12,bridge_name='br-int',has_traffic_filtering=True,id=4b3e5634-45cb-4474-88fb-530eb7736d9c,network=Network(c31fb1aa-8c23-41d0-909c-30f1f3fa717a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b3e5634-45')#033[00m
Nov 29 03:01:23 np0005539552 nova_compute[233724]: 2025-11-29 08:01:23.448 233728 DEBUG nova.virt.libvirt.driver [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:01:23 np0005539552 nova_compute[233724]: 2025-11-29 08:01:23.449 233728 DEBUG nova.virt.libvirt.driver [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:01:23 np0005539552 nova_compute[233724]: 2025-11-29 08:01:23.449 233728 DEBUG nova.virt.libvirt.driver [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] No VIF found with MAC fa:16:3e:1b:44:12, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:01:23 np0005539552 nova_compute[233724]: 2025-11-29 08:01:23.450 233728 INFO nova.virt.libvirt.driver [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Using config drive#033[00m
Nov 29 03:01:23 np0005539552 nova_compute[233724]: 2025-11-29 08:01:23.483 233728 DEBUG nova.storage.rbd_utils [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] rbd image beda6e02-cb4c-4daf-aefc-0081df73a5c0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:01:24 np0005539552 nova_compute[233724]: 2025-11-29 08:01:24.215 233728 INFO nova.virt.libvirt.driver [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Creating config drive at /var/lib/nova/instances/beda6e02-cb4c-4daf-aefc-0081df73a5c0/disk.config#033[00m
Nov 29 03:01:24 np0005539552 nova_compute[233724]: 2025-11-29 08:01:24.221 233728 DEBUG oslo_concurrency.processutils [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/beda6e02-cb4c-4daf-aefc-0081df73a5c0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk5gxo55_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:24.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:24 np0005539552 nova_compute[233724]: 2025-11-29 08:01:24.348 233728 DEBUG oslo_concurrency.processutils [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/beda6e02-cb4c-4daf-aefc-0081df73a5c0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk5gxo55_" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:24 np0005539552 nova_compute[233724]: 2025-11-29 08:01:24.380 233728 DEBUG nova.storage.rbd_utils [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] rbd image beda6e02-cb4c-4daf-aefc-0081df73a5c0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:01:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:24.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:24 np0005539552 nova_compute[233724]: 2025-11-29 08:01:24.386 233728 DEBUG oslo_concurrency.processutils [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/beda6e02-cb4c-4daf-aefc-0081df73a5c0/disk.config beda6e02-cb4c-4daf-aefc-0081df73a5c0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:24 np0005539552 nova_compute[233724]: 2025-11-29 08:01:24.733 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:25 np0005539552 nova_compute[233724]: 2025-11-29 08:01:25.096 233728 DEBUG nova.network.neutron [req-32250907-7c68-47c7-afa6-b2e45923aab0 req-422c525e-d257-4574-9e22-1b130edd4995 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Updated VIF entry in instance network info cache for port 4b3e5634-45cb-4474-88fb-530eb7736d9c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:01:25 np0005539552 nova_compute[233724]: 2025-11-29 08:01:25.097 233728 DEBUG nova.network.neutron [req-32250907-7c68-47c7-afa6-b2e45923aab0 req-422c525e-d257-4574-9e22-1b130edd4995 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Updating instance_info_cache with network_info: [{"id": "4b3e5634-45cb-4474-88fb-530eb7736d9c", "address": "fa:16:3e:1b:44:12", "network": {"id": "c31fb1aa-8c23-41d0-909c-30f1f3fa717a", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-249944984-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3c8b78b8a34400682bf8bbef740a22c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b3e5634-45", "ovs_interfaceid": "4b3e5634-45cb-4474-88fb-530eb7736d9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:01:25 np0005539552 nova_compute[233724]: 2025-11-29 08:01:25.115 233728 DEBUG oslo_concurrency.lockutils [req-32250907-7c68-47c7-afa6-b2e45923aab0 req-422c525e-d257-4574-9e22-1b130edd4995 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-beda6e02-cb4c-4daf-aefc-0081df73a5c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:01:25 np0005539552 nova_compute[233724]: 2025-11-29 08:01:25.420 233728 DEBUG oslo_concurrency.processutils [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/beda6e02-cb4c-4daf-aefc-0081df73a5c0/disk.config beda6e02-cb4c-4daf-aefc-0081df73a5c0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:25 np0005539552 nova_compute[233724]: 2025-11-29 08:01:25.421 233728 INFO nova.virt.libvirt.driver [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Deleting local config drive /var/lib/nova/instances/beda6e02-cb4c-4daf-aefc-0081df73a5c0/disk.config because it was imported into RBD.#033[00m
Nov 29 03:01:25 np0005539552 kernel: tap4b3e5634-45: entered promiscuous mode
Nov 29 03:01:25 np0005539552 NetworkManager[48926]: <info>  [1764403285.4727] manager: (tap4b3e5634-45): new Tun device (/org/freedesktop/NetworkManager/Devices/74)
Nov 29 03:01:25 np0005539552 ovn_controller[133798]: 2025-11-29T08:01:25Z|00134|binding|INFO|Claiming lport 4b3e5634-45cb-4474-88fb-530eb7736d9c for this chassis.
Nov 29 03:01:25 np0005539552 ovn_controller[133798]: 2025-11-29T08:01:25Z|00135|binding|INFO|4b3e5634-45cb-4474-88fb-530eb7736d9c: Claiming fa:16:3e:1b:44:12 10.100.0.13
Nov 29 03:01:25 np0005539552 nova_compute[233724]: 2025-11-29 08:01:25.473 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:25 np0005539552 nova_compute[233724]: 2025-11-29 08:01:25.478 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:25 np0005539552 nova_compute[233724]: 2025-11-29 08:01:25.480 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:25.500 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:44:12 10.100.0.13'], port_security=['fa:16:3e:1b:44:12 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'beda6e02-cb4c-4daf-aefc-0081df73a5c0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c31fb1aa-8c23-41d0-909c-30f1f3fa717a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3c8b78b8a34400682bf8bbef740a22c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fa7c5649-d277-4350-999e-f4565487b8fd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=31e144bf-77d0-4f1f-92c8-c590509beb66, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=4b3e5634-45cb-4474-88fb-530eb7736d9c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:25.501 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 4b3e5634-45cb-4474-88fb-530eb7736d9c in datapath c31fb1aa-8c23-41d0-909c-30f1f3fa717a bound to our chassis#033[00m
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:25.502 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c31fb1aa-8c23-41d0-909c-30f1f3fa717a#033[00m
Nov 29 03:01:25 np0005539552 systemd-udevd[250895]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:01:25 np0005539552 systemd-machined[196379]: New machine qemu-12-instance-00000025.
Nov 29 03:01:25 np0005539552 NetworkManager[48926]: <info>  [1764403285.5160] device (tap4b3e5634-45): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:01:25 np0005539552 NetworkManager[48926]: <info>  [1764403285.5169] device (tap4b3e5634-45): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:01:25 np0005539552 systemd[1]: Started Virtual Machine qemu-12-instance-00000025.
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:25.516 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[87758b70-1d2a-4455-80c4-c8bf65a7640d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:25.519 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc31fb1aa-81 in ovnmeta-c31fb1aa-8c23-41d0-909c-30f1f3fa717a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:25.521 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc31fb1aa-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:25.521 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3c1224cf-7d2b-4c61-9a2c-0f068ed67a60]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:25.522 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[589ab4e1-bd1b-48bf-a748-dd6fcdac0ee5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:25.535 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[48d07150-c208-47e1-9667-778eb4cfaa07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:25 np0005539552 nova_compute[233724]: 2025-11-29 08:01:25.541 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:25 np0005539552 ovn_controller[133798]: 2025-11-29T08:01:25Z|00136|binding|INFO|Setting lport 4b3e5634-45cb-4474-88fb-530eb7736d9c ovn-installed in OVS
Nov 29 03:01:25 np0005539552 ovn_controller[133798]: 2025-11-29T08:01:25Z|00137|binding|INFO|Setting lport 4b3e5634-45cb-4474-88fb-530eb7736d9c up in Southbound
Nov 29 03:01:25 np0005539552 nova_compute[233724]: 2025-11-29 08:01:25.546 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:25.557 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[733f7209-4baa-4ec8-871b-1858918f88e5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:25.589 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[5f43a14f-ea7d-4b6e-aae6-2df8216312be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:25.594 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[bbd1e74e-4339-45f5-982c-b2281e9216af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:25 np0005539552 NetworkManager[48926]: <info>  [1764403285.5950] manager: (tapc31fb1aa-80): new Veth device (/org/freedesktop/NetworkManager/Devices/75)
Nov 29 03:01:25 np0005539552 systemd-udevd[250898]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:25.622 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[1ca413e2-c3a9-46b5-8da2-8a62c2fb40d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:25.625 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[d1103db3-52e0-4f3d-b83b-32eea16f60de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:25 np0005539552 NetworkManager[48926]: <info>  [1764403285.6490] device (tapc31fb1aa-80): carrier: link connected
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:25.655 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[95fd71d0-06c2-4765-859b-0a3e196aa44a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:25.672 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c056e42f-5c70-4f17-9a6e-463c249da03e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc31fb1aa-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e6:5d:39'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633133, 'reachable_time': 31445, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250928, 'error': None, 'target': 'ovnmeta-c31fb1aa-8c23-41d0-909c-30f1f3fa717a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:25.684 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c839f8a4-0533-46c6-94ff-f9aaa5cb7e3e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee6:5d39'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 633133, 'tstamp': 633133}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250929, 'error': None, 'target': 'ovnmeta-c31fb1aa-8c23-41d0-909c-30f1f3fa717a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:25.698 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[1afe4919-0976-4c07-a244-b647035a5f4d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc31fb1aa-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e6:5d:39'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633133, 'reachable_time': 31445, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 250930, 'error': None, 'target': 'ovnmeta-c31fb1aa-8c23-41d0-909c-30f1f3fa717a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:25.725 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[7f5b9ac0-4b82-4996-a695-61c3521332ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:25.773 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b966e4a2-3133-4bdd-99cd-5662e20d2844]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:25.774 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc31fb1aa-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:25.775 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:25.775 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc31fb1aa-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:25 np0005539552 NetworkManager[48926]: <info>  [1764403285.7774] manager: (tapc31fb1aa-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/76)
Nov 29 03:01:25 np0005539552 kernel: tapc31fb1aa-80: entered promiscuous mode
Nov 29 03:01:25 np0005539552 nova_compute[233724]: 2025-11-29 08:01:25.776 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:25.779 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc31fb1aa-80, col_values=(('external_ids', {'iface-id': '60c03b7f-6a2a-4193-a139-0a2ea4a82a1f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:25 np0005539552 nova_compute[233724]: 2025-11-29 08:01:25.780 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:25 np0005539552 ovn_controller[133798]: 2025-11-29T08:01:25Z|00138|binding|INFO|Releasing lport 60c03b7f-6a2a-4193-a139-0a2ea4a82a1f from this chassis (sb_readonly=0)
Nov 29 03:01:25 np0005539552 nova_compute[233724]: 2025-11-29 08:01:25.794 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:25 np0005539552 nova_compute[233724]: 2025-11-29 08:01:25.795 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:25.795 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c31fb1aa-8c23-41d0-909c-30f1f3fa717a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c31fb1aa-8c23-41d0-909c-30f1f3fa717a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:25.796 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[92a313b6-b6c7-4512-8d7b-d51dca06d24b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:25.798 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-c31fb1aa-8c23-41d0-909c-30f1f3fa717a
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/c31fb1aa-8c23-41d0-909c-30f1f3fa717a.pid.haproxy
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID c31fb1aa-8c23-41d0-909c-30f1f3fa717a
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:01:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:25.798 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c31fb1aa-8c23-41d0-909c-30f1f3fa717a', 'env', 'PROCESS_TAG=haproxy-c31fb1aa-8c23-41d0-909c-30f1f3fa717a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c31fb1aa-8c23-41d0-909c-30f1f3fa717a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:01:25 np0005539552 nova_compute[233724]: 2025-11-29 08:01:25.813 233728 DEBUG nova.compute.manager [req-b15e9ef3-5347-4c78-b492-1838cf616231 req-1335965e-e443-433b-86c2-e06d6018201f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Received event network-vif-plugged-4b3e5634-45cb-4474-88fb-530eb7736d9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:01:25 np0005539552 nova_compute[233724]: 2025-11-29 08:01:25.813 233728 DEBUG oslo_concurrency.lockutils [req-b15e9ef3-5347-4c78-b492-1838cf616231 req-1335965e-e443-433b-86c2-e06d6018201f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "beda6e02-cb4c-4daf-aefc-0081df73a5c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:01:25 np0005539552 nova_compute[233724]: 2025-11-29 08:01:25.814 233728 DEBUG oslo_concurrency.lockutils [req-b15e9ef3-5347-4c78-b492-1838cf616231 req-1335965e-e443-433b-86c2-e06d6018201f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "beda6e02-cb4c-4daf-aefc-0081df73a5c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:01:25 np0005539552 nova_compute[233724]: 2025-11-29 08:01:25.814 233728 DEBUG oslo_concurrency.lockutils [req-b15e9ef3-5347-4c78-b492-1838cf616231 req-1335965e-e443-433b-86c2-e06d6018201f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "beda6e02-cb4c-4daf-aefc-0081df73a5c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:01:25 np0005539552 nova_compute[233724]: 2025-11-29 08:01:25.814 233728 DEBUG nova.compute.manager [req-b15e9ef3-5347-4c78-b492-1838cf616231 req-1335965e-e443-433b-86c2-e06d6018201f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Processing event network-vif-plugged-4b3e5634-45cb-4474-88fb-530eb7736d9c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 03:01:26 np0005539552 podman[250962]: 2025-11-29 08:01:26.104684024 +0000 UTC m=+0.019100330 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:01:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:26.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:26.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:26 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:26 np0005539552 podman[250962]: 2025-11-29 08:01:26.804955945 +0000 UTC m=+0.719372251 container create b796ab28bbabe35ba57963229e6a26b42c81c2e4f791ce4960ccd0f024d0a380 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c31fb1aa-8c23-41d0-909c-30f1f3fa717a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 29 03:01:26 np0005539552 systemd[1]: Started libpod-conmon-b796ab28bbabe35ba57963229e6a26b42c81c2e4f791ce4960ccd0f024d0a380.scope.
Nov 29 03:01:26 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:01:26 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/998f29b6b2e9d4576b3cf960dc96703a693550343d1fa159080ead4cc28f9f09/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:01:26 np0005539552 podman[250962]: 2025-11-29 08:01:26.96315574 +0000 UTC m=+0.877572056 container init b796ab28bbabe35ba57963229e6a26b42c81c2e4f791ce4960ccd0f024d0a380 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c31fb1aa-8c23-41d0-909c-30f1f3fa717a, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:01:26 np0005539552 podman[250962]: 2025-11-29 08:01:26.967860618 +0000 UTC m=+0.882276904 container start b796ab28bbabe35ba57963229e6a26b42c81c2e4f791ce4960ccd0f024d0a380 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c31fb1aa-8c23-41d0-909c-30f1f3fa717a, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 03:01:26 np0005539552 neutron-haproxy-ovnmeta-c31fb1aa-8c23-41d0-909c-30f1f3fa717a[251014]: [NOTICE]   (251023) : New worker (251025) forked
Nov 29 03:01:26 np0005539552 neutron-haproxy-ovnmeta-c31fb1aa-8c23-41d0-909c-30f1f3fa717a[251014]: [NOTICE]   (251023) : Loading success.
Nov 29 03:01:27 np0005539552 nova_compute[233724]: 2025-11-29 08:01:27.023 233728 DEBUG nova.compute.manager [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:01:27 np0005539552 nova_compute[233724]: 2025-11-29 08:01:27.024 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403287.022919, beda6e02-cb4c-4daf-aefc-0081df73a5c0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:01:27 np0005539552 nova_compute[233724]: 2025-11-29 08:01:27.025 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] VM Started (Lifecycle Event)
Nov 29 03:01:27 np0005539552 nova_compute[233724]: 2025-11-29 08:01:27.028 233728 DEBUG nova.virt.libvirt.driver [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:01:27 np0005539552 nova_compute[233724]: 2025-11-29 08:01:27.033 233728 INFO nova.virt.libvirt.driver [-] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Instance spawned successfully.
Nov 29 03:01:27 np0005539552 nova_compute[233724]: 2025-11-29 08:01:27.033 233728 DEBUG nova.virt.libvirt.driver [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:01:27 np0005539552 nova_compute[233724]: 2025-11-29 08:01:27.060 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:01:27 np0005539552 nova_compute[233724]: 2025-11-29 08:01:27.066 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:01:27 np0005539552 nova_compute[233724]: 2025-11-29 08:01:27.069 233728 DEBUG nova.virt.libvirt.driver [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:01:27 np0005539552 nova_compute[233724]: 2025-11-29 08:01:27.070 233728 DEBUG nova.virt.libvirt.driver [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:01:27 np0005539552 nova_compute[233724]: 2025-11-29 08:01:27.070 233728 DEBUG nova.virt.libvirt.driver [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:01:27 np0005539552 nova_compute[233724]: 2025-11-29 08:01:27.070 233728 DEBUG nova.virt.libvirt.driver [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:01:27 np0005539552 nova_compute[233724]: 2025-11-29 08:01:27.071 233728 DEBUG nova.virt.libvirt.driver [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:01:27 np0005539552 nova_compute[233724]: 2025-11-29 08:01:27.071 233728 DEBUG nova.virt.libvirt.driver [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:01:27 np0005539552 nova_compute[233724]: 2025-11-29 08:01:27.095 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:01:27 np0005539552 nova_compute[233724]: 2025-11-29 08:01:27.095 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403287.0239851, beda6e02-cb4c-4daf-aefc-0081df73a5c0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:01:27 np0005539552 nova_compute[233724]: 2025-11-29 08:01:27.095 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] VM Paused (Lifecycle Event)
Nov 29 03:01:27 np0005539552 nova_compute[233724]: 2025-11-29 08:01:27.125 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:01:27 np0005539552 nova_compute[233724]: 2025-11-29 08:01:27.131 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403287.0274081, beda6e02-cb4c-4daf-aefc-0081df73a5c0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:01:27 np0005539552 nova_compute[233724]: 2025-11-29 08:01:27.131 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] VM Resumed (Lifecycle Event)
Nov 29 03:01:27 np0005539552 nova_compute[233724]: 2025-11-29 08:01:27.152 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:01:27 np0005539552 nova_compute[233724]: 2025-11-29 08:01:27.155 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:01:27 np0005539552 nova_compute[233724]: 2025-11-29 08:01:27.161 233728 INFO nova.compute.manager [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Took 7.40 seconds to spawn the instance on the hypervisor.
Nov 29 03:01:27 np0005539552 nova_compute[233724]: 2025-11-29 08:01:27.162 233728 DEBUG nova.compute.manager [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:01:27 np0005539552 nova_compute[233724]: 2025-11-29 08:01:27.191 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:01:27 np0005539552 nova_compute[233724]: 2025-11-29 08:01:27.229 233728 INFO nova.compute.manager [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Took 10.17 seconds to build instance.
Nov 29 03:01:27 np0005539552 nova_compute[233724]: 2025-11-29 08:01:27.285 233728 DEBUG oslo_concurrency.lockutils [None req-7aa1f911-b107-4ad1-b927-d9b0c4749f9a b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Lock "beda6e02-cb4c-4daf-aefc-0081df73a5c0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.292s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:01:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 03:01:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:28.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 03:01:28 np0005539552 nova_compute[233724]: 2025-11-29 08:01:28.321 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:28.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:29 np0005539552 nova_compute[233724]: 2025-11-29 08:01:29.720 233728 DEBUG nova.compute.manager [req-f39bae12-5010-481a-808f-87f0b765b7eb req-6c4a3abb-f6e7-487b-971b-0fd7c80bc7d2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Received event network-vif-plugged-4b3e5634-45cb-4474-88fb-530eb7736d9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:01:29 np0005539552 nova_compute[233724]: 2025-11-29 08:01:29.720 233728 DEBUG oslo_concurrency.lockutils [req-f39bae12-5010-481a-808f-87f0b765b7eb req-6c4a3abb-f6e7-487b-971b-0fd7c80bc7d2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "beda6e02-cb4c-4daf-aefc-0081df73a5c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:01:29 np0005539552 nova_compute[233724]: 2025-11-29 08:01:29.721 233728 DEBUG oslo_concurrency.lockutils [req-f39bae12-5010-481a-808f-87f0b765b7eb req-6c4a3abb-f6e7-487b-971b-0fd7c80bc7d2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "beda6e02-cb4c-4daf-aefc-0081df73a5c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:01:29 np0005539552 nova_compute[233724]: 2025-11-29 08:01:29.721 233728 DEBUG oslo_concurrency.lockutils [req-f39bae12-5010-481a-808f-87f0b765b7eb req-6c4a3abb-f6e7-487b-971b-0fd7c80bc7d2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "beda6e02-cb4c-4daf-aefc-0081df73a5c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:01:29 np0005539552 nova_compute[233724]: 2025-11-29 08:01:29.721 233728 DEBUG nova.compute.manager [req-f39bae12-5010-481a-808f-87f0b765b7eb req-6c4a3abb-f6e7-487b-971b-0fd7c80bc7d2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] No waiting events found dispatching network-vif-plugged-4b3e5634-45cb-4474-88fb-530eb7736d9c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:01:29 np0005539552 nova_compute[233724]: 2025-11-29 08:01:29.722 233728 WARNING nova.compute.manager [req-f39bae12-5010-481a-808f-87f0b765b7eb req-6c4a3abb-f6e7-487b-971b-0fd7c80bc7d2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Received unexpected event network-vif-plugged-4b3e5634-45cb-4474-88fb-530eb7736d9c for instance with vm_state active and task_state None.
Nov 29 03:01:29 np0005539552 nova_compute[233724]: 2025-11-29 08:01:29.735 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:30.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:30.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:31 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:32 np0005539552 NetworkManager[48926]: <info>  [1764403292.1390] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/77)
Nov 29 03:01:32 np0005539552 NetworkManager[48926]: <info>  [1764403292.1397] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/78)
Nov 29 03:01:32 np0005539552 nova_compute[233724]: 2025-11-29 08:01:32.138 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:32.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:32 np0005539552 nova_compute[233724]: 2025-11-29 08:01:32.339 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:32 np0005539552 ovn_controller[133798]: 2025-11-29T08:01:32Z|00139|binding|INFO|Releasing lport 60c03b7f-6a2a-4193-a139-0a2ea4a82a1f from this chassis (sb_readonly=0)
Nov 29 03:01:32 np0005539552 nova_compute[233724]: 2025-11-29 08:01:32.361 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:32.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:33 np0005539552 nova_compute[233724]: 2025-11-29 08:01:33.000 233728 DEBUG nova.compute.manager [req-3a20565e-b48b-4f57-b540-70b5dfbfeb2b req-44529ab0-c0b5-4189-ab05-dbdaaae40587 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Received event network-changed-4b3e5634-45cb-4474-88fb-530eb7736d9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:01:33 np0005539552 nova_compute[233724]: 2025-11-29 08:01:33.001 233728 DEBUG nova.compute.manager [req-3a20565e-b48b-4f57-b540-70b5dfbfeb2b req-44529ab0-c0b5-4189-ab05-dbdaaae40587 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Refreshing instance network info cache due to event network-changed-4b3e5634-45cb-4474-88fb-530eb7736d9c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:01:33 np0005539552 nova_compute[233724]: 2025-11-29 08:01:33.002 233728 DEBUG oslo_concurrency.lockutils [req-3a20565e-b48b-4f57-b540-70b5dfbfeb2b req-44529ab0-c0b5-4189-ab05-dbdaaae40587 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-beda6e02-cb4c-4daf-aefc-0081df73a5c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:01:33 np0005539552 nova_compute[233724]: 2025-11-29 08:01:33.003 233728 DEBUG oslo_concurrency.lockutils [req-3a20565e-b48b-4f57-b540-70b5dfbfeb2b req-44529ab0-c0b5-4189-ab05-dbdaaae40587 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-beda6e02-cb4c-4daf-aefc-0081df73a5c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:01:33 np0005539552 nova_compute[233724]: 2025-11-29 08:01:33.004 233728 DEBUG nova.network.neutron [req-3a20565e-b48b-4f57-b540-70b5dfbfeb2b req-44529ab0-c0b5-4189-ab05-dbdaaae40587 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Refreshing network info cache for port 4b3e5634-45cb-4474-88fb-530eb7736d9c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:01:33 np0005539552 nova_compute[233724]: 2025-11-29 08:01:33.325 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:34.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:34.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:34 np0005539552 nova_compute[233724]: 2025-11-29 08:01:34.737 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:35 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:35.302 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:01:35 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:35.303 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:01:35 np0005539552 nova_compute[233724]: 2025-11-29 08:01:35.334 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:35 np0005539552 nova_compute[233724]: 2025-11-29 08:01:35.808 233728 DEBUG nova.network.neutron [req-3a20565e-b48b-4f57-b540-70b5dfbfeb2b req-44529ab0-c0b5-4189-ab05-dbdaaae40587 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Updated VIF entry in instance network info cache for port 4b3e5634-45cb-4474-88fb-530eb7736d9c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:01:35 np0005539552 nova_compute[233724]: 2025-11-29 08:01:35.809 233728 DEBUG nova.network.neutron [req-3a20565e-b48b-4f57-b540-70b5dfbfeb2b req-44529ab0-c0b5-4189-ab05-dbdaaae40587 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Updating instance_info_cache with network_info: [{"id": "4b3e5634-45cb-4474-88fb-530eb7736d9c", "address": "fa:16:3e:1b:44:12", "network": {"id": "c31fb1aa-8c23-41d0-909c-30f1f3fa717a", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-249944984-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3c8b78b8a34400682bf8bbef740a22c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b3e5634-45", "ovs_interfaceid": "4b3e5634-45cb-4474-88fb-530eb7736d9c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:01:35 np0005539552 nova_compute[233724]: 2025-11-29 08:01:35.868 233728 DEBUG oslo_concurrency.lockutils [req-3a20565e-b48b-4f57-b540-70b5dfbfeb2b req-44529ab0-c0b5-4189-ab05-dbdaaae40587 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-beda6e02-cb4c-4daf-aefc-0081df73a5c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:01:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:36.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:36.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:36 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:38.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:38 np0005539552 nova_compute[233724]: 2025-11-29 08:01:38.328 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:38.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:38 np0005539552 podman[251096]: 2025-11-29 08:01:38.978935407 +0000 UTC m=+0.063791113 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible)
Nov 29 03:01:39 np0005539552 podman[251097]: 2025-11-29 08:01:39.017555276 +0000 UTC m=+0.102052232 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 03:01:39 np0005539552 podman[251098]: 2025-11-29 08:01:39.055926107 +0000 UTC m=+0.139909489 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:01:39 np0005539552 nova_compute[233724]: 2025-11-29 08:01:39.739 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:40.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:40.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:41 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:42.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:42.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:43.305 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:43 np0005539552 nova_compute[233724]: 2025-11-29 08:01:43.332 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:43 np0005539552 ovn_controller[133798]: 2025-11-29T08:01:43Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1b:44:12 10.100.0.13
Nov 29 03:01:43 np0005539552 ovn_controller[133798]: 2025-11-29T08:01:43Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1b:44:12 10.100.0.13
Nov 29 03:01:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:44.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:44.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:44 np0005539552 nova_compute[233724]: 2025-11-29 08:01:44.741 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:46.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:46.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:46 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:48.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:48 np0005539552 nova_compute[233724]: 2025-11-29 08:01:48.334 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:48.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:49 np0005539552 nova_compute[233724]: 2025-11-29 08:01:49.743 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:50 np0005539552 ovn_controller[133798]: 2025-11-29T08:01:50Z|00140|binding|INFO|Releasing lport 60c03b7f-6a2a-4193-a139-0a2ea4a82a1f from this chassis (sb_readonly=0)
Nov 29 03:01:50 np0005539552 nova_compute[233724]: 2025-11-29 08:01:50.266 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:50.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:50.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:50 np0005539552 nova_compute[233724]: 2025-11-29 08:01:50.509 233728 DEBUG oslo_concurrency.lockutils [None req-2f75d2a4-3df6-4b4b-9874-57dcac1fd974 b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Acquiring lock "beda6e02-cb4c-4daf-aefc-0081df73a5c0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:50 np0005539552 nova_compute[233724]: 2025-11-29 08:01:50.510 233728 DEBUG oslo_concurrency.lockutils [None req-2f75d2a4-3df6-4b4b-9874-57dcac1fd974 b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Lock "beda6e02-cb4c-4daf-aefc-0081df73a5c0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:50 np0005539552 nova_compute[233724]: 2025-11-29 08:01:50.511 233728 DEBUG oslo_concurrency.lockutils [None req-2f75d2a4-3df6-4b4b-9874-57dcac1fd974 b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Acquiring lock "beda6e02-cb4c-4daf-aefc-0081df73a5c0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:50 np0005539552 nova_compute[233724]: 2025-11-29 08:01:50.511 233728 DEBUG oslo_concurrency.lockutils [None req-2f75d2a4-3df6-4b4b-9874-57dcac1fd974 b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Lock "beda6e02-cb4c-4daf-aefc-0081df73a5c0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:50 np0005539552 nova_compute[233724]: 2025-11-29 08:01:50.511 233728 DEBUG oslo_concurrency.lockutils [None req-2f75d2a4-3df6-4b4b-9874-57dcac1fd974 b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Lock "beda6e02-cb4c-4daf-aefc-0081df73a5c0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:50 np0005539552 nova_compute[233724]: 2025-11-29 08:01:50.512 233728 INFO nova.compute.manager [None req-2f75d2a4-3df6-4b4b-9874-57dcac1fd974 b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Terminating instance#033[00m
Nov 29 03:01:50 np0005539552 nova_compute[233724]: 2025-11-29 08:01:50.515 233728 DEBUG nova.compute.manager [None req-2f75d2a4-3df6-4b4b-9874-57dcac1fd974 b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:01:50 np0005539552 kernel: tap4b3e5634-45 (unregistering): left promiscuous mode
Nov 29 03:01:50 np0005539552 NetworkManager[48926]: <info>  [1764403310.5807] device (tap4b3e5634-45): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:01:50 np0005539552 nova_compute[233724]: 2025-11-29 08:01:50.588 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:50 np0005539552 ovn_controller[133798]: 2025-11-29T08:01:50Z|00141|binding|INFO|Releasing lport 4b3e5634-45cb-4474-88fb-530eb7736d9c from this chassis (sb_readonly=0)
Nov 29 03:01:50 np0005539552 ovn_controller[133798]: 2025-11-29T08:01:50Z|00142|binding|INFO|Setting lport 4b3e5634-45cb-4474-88fb-530eb7736d9c down in Southbound
Nov 29 03:01:50 np0005539552 ovn_controller[133798]: 2025-11-29T08:01:50Z|00143|binding|INFO|Removing iface tap4b3e5634-45 ovn-installed in OVS
Nov 29 03:01:50 np0005539552 nova_compute[233724]: 2025-11-29 08:01:50.591 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:50 np0005539552 nova_compute[233724]: 2025-11-29 08:01:50.607 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:50 np0005539552 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d00000025.scope: Deactivated successfully.
Nov 29 03:01:50 np0005539552 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d00000025.scope: Consumed 14.534s CPU time.
Nov 29 03:01:50 np0005539552 systemd-machined[196379]: Machine qemu-12-instance-00000025 terminated.
Nov 29 03:01:50 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:50.671 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:44:12 10.100.0.13'], port_security=['fa:16:3e:1b:44:12 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'beda6e02-cb4c-4daf-aefc-0081df73a5c0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c31fb1aa-8c23-41d0-909c-30f1f3fa717a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3c8b78b8a34400682bf8bbef740a22c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fa7c5649-d277-4350-999e-f4565487b8fd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.215'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=31e144bf-77d0-4f1f-92c8-c590509beb66, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=4b3e5634-45cb-4474-88fb-530eb7736d9c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:01:50 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:50.672 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 4b3e5634-45cb-4474-88fb-530eb7736d9c in datapath c31fb1aa-8c23-41d0-909c-30f1f3fa717a unbound from our chassis#033[00m
Nov 29 03:01:50 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:50.673 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c31fb1aa-8c23-41d0-909c-30f1f3fa717a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:01:50 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:50.674 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c6610a3a-50fb-43f7-ad14-f6a2e0072ba6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:50 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:50.675 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c31fb1aa-8c23-41d0-909c-30f1f3fa717a namespace which is not needed anymore#033[00m
Nov 29 03:01:50 np0005539552 nova_compute[233724]: 2025-11-29 08:01:50.733 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:50 np0005539552 nova_compute[233724]: 2025-11-29 08:01:50.737 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:50 np0005539552 nova_compute[233724]: 2025-11-29 08:01:50.747 233728 INFO nova.virt.libvirt.driver [-] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Instance destroyed successfully.#033[00m
Nov 29 03:01:50 np0005539552 nova_compute[233724]: 2025-11-29 08:01:50.749 233728 DEBUG nova.objects.instance [None req-2f75d2a4-3df6-4b4b-9874-57dcac1fd974 b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Lazy-loading 'resources' on Instance uuid beda6e02-cb4c-4daf-aefc-0081df73a5c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:01:50 np0005539552 nova_compute[233724]: 2025-11-29 08:01:50.761 233728 DEBUG nova.virt.libvirt.vif [None req-2f75d2a4-3df6-4b4b-9874-57dcac1fd974 b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:01:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestBootFromVolume-server-919854861',display_name='tempest-ServersTestBootFromVolume-server-919854861',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestbootfromvolume-server-919854861',id=37,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF9bzIDFkeet8Wp0S6Z3wMz5g/H/XOt4di7aMk5ffpxyk+y5jabu+Q8xzySVBiDgbCPjcQKKLjq1DPH/Mvv6OhFcIrBLVb31GEr2pJxnOoL6h1YqIw8zS2xzqdRvwqfsrg==',key_name='tempest-keypair-1334460577',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:01:27Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3c8b78b8a34400682bf8bbef740a22c',ramdisk_id='',reservation_id='r-a9ntnmyb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-ServersTestBootFromVolume-1599408907',owner_user_name='tempest-ServersTestBootFromVolume-1599408907-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:01:27Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b37d614815064829b8372abbdbe8b3c4',uuid=beda6e02-cb4c-4daf-aefc-0081df73a5c0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4b3e5634-45cb-4474-88fb-530eb7736d9c", "address": "fa:16:3e:1b:44:12", "network": {"id": "c31fb1aa-8c23-41d0-909c-30f1f3fa717a", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-249944984-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", 
"type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3c8b78b8a34400682bf8bbef740a22c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b3e5634-45", "ovs_interfaceid": "4b3e5634-45cb-4474-88fb-530eb7736d9c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:01:50 np0005539552 nova_compute[233724]: 2025-11-29 08:01:50.761 233728 DEBUG nova.network.os_vif_util [None req-2f75d2a4-3df6-4b4b-9874-57dcac1fd974 b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Converting VIF {"id": "4b3e5634-45cb-4474-88fb-530eb7736d9c", "address": "fa:16:3e:1b:44:12", "network": {"id": "c31fb1aa-8c23-41d0-909c-30f1f3fa717a", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-249944984-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3c8b78b8a34400682bf8bbef740a22c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b3e5634-45", "ovs_interfaceid": "4b3e5634-45cb-4474-88fb-530eb7736d9c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:01:50 np0005539552 nova_compute[233724]: 2025-11-29 08:01:50.762 233728 DEBUG nova.network.os_vif_util [None req-2f75d2a4-3df6-4b4b-9874-57dcac1fd974 b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1b:44:12,bridge_name='br-int',has_traffic_filtering=True,id=4b3e5634-45cb-4474-88fb-530eb7736d9c,network=Network(c31fb1aa-8c23-41d0-909c-30f1f3fa717a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b3e5634-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:01:50 np0005539552 nova_compute[233724]: 2025-11-29 08:01:50.762 233728 DEBUG os_vif [None req-2f75d2a4-3df6-4b4b-9874-57dcac1fd974 b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1b:44:12,bridge_name='br-int',has_traffic_filtering=True,id=4b3e5634-45cb-4474-88fb-530eb7736d9c,network=Network(c31fb1aa-8c23-41d0-909c-30f1f3fa717a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b3e5634-45') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:01:50 np0005539552 nova_compute[233724]: 2025-11-29 08:01:50.763 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:50 np0005539552 nova_compute[233724]: 2025-11-29 08:01:50.764 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4b3e5634-45, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:50 np0005539552 nova_compute[233724]: 2025-11-29 08:01:50.765 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:50 np0005539552 nova_compute[233724]: 2025-11-29 08:01:50.766 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:50 np0005539552 nova_compute[233724]: 2025-11-29 08:01:50.768 233728 INFO os_vif [None req-2f75d2a4-3df6-4b4b-9874-57dcac1fd974 b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1b:44:12,bridge_name='br-int',has_traffic_filtering=True,id=4b3e5634-45cb-4474-88fb-530eb7736d9c,network=Network(c31fb1aa-8c23-41d0-909c-30f1f3fa717a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b3e5634-45')#033[00m
Nov 29 03:01:50 np0005539552 neutron-haproxy-ovnmeta-c31fb1aa-8c23-41d0-909c-30f1f3fa717a[251014]: [NOTICE]   (251023) : haproxy version is 2.8.14-c23fe91
Nov 29 03:01:50 np0005539552 neutron-haproxy-ovnmeta-c31fb1aa-8c23-41d0-909c-30f1f3fa717a[251014]: [NOTICE]   (251023) : path to executable is /usr/sbin/haproxy
Nov 29 03:01:50 np0005539552 neutron-haproxy-ovnmeta-c31fb1aa-8c23-41d0-909c-30f1f3fa717a[251014]: [WARNING]  (251023) : Exiting Master process...
Nov 29 03:01:50 np0005539552 neutron-haproxy-ovnmeta-c31fb1aa-8c23-41d0-909c-30f1f3fa717a[251014]: [WARNING]  (251023) : Exiting Master process...
Nov 29 03:01:50 np0005539552 neutron-haproxy-ovnmeta-c31fb1aa-8c23-41d0-909c-30f1f3fa717a[251014]: [ALERT]    (251023) : Current worker (251025) exited with code 143 (Terminated)
Nov 29 03:01:50 np0005539552 neutron-haproxy-ovnmeta-c31fb1aa-8c23-41d0-909c-30f1f3fa717a[251014]: [WARNING]  (251023) : All workers exited. Exiting... (0)
Nov 29 03:01:50 np0005539552 systemd[1]: libpod-b796ab28bbabe35ba57963229e6a26b42c81c2e4f791ce4960ccd0f024d0a380.scope: Deactivated successfully.
Nov 29 03:01:50 np0005539552 podman[251247]: 2025-11-29 08:01:50.808658831 +0000 UTC m=+0.045083384 container died b796ab28bbabe35ba57963229e6a26b42c81c2e4f791ce4960ccd0f024d0a380 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c31fb1aa-8c23-41d0-909c-30f1f3fa717a, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 03:01:51 np0005539552 nova_compute[233724]: 2025-11-29 08:01:51.042 233728 DEBUG nova.compute.manager [req-96dd1df2-2f44-48c5-a98a-5e2728eff20a req-29c1cc33-b676-47ac-87e6-349861ed8d67 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Received event network-vif-unplugged-4b3e5634-45cb-4474-88fb-530eb7736d9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:51 np0005539552 nova_compute[233724]: 2025-11-29 08:01:51.043 233728 DEBUG oslo_concurrency.lockutils [req-96dd1df2-2f44-48c5-a98a-5e2728eff20a req-29c1cc33-b676-47ac-87e6-349861ed8d67 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "beda6e02-cb4c-4daf-aefc-0081df73a5c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:51 np0005539552 nova_compute[233724]: 2025-11-29 08:01:51.044 233728 DEBUG oslo_concurrency.lockutils [req-96dd1df2-2f44-48c5-a98a-5e2728eff20a req-29c1cc33-b676-47ac-87e6-349861ed8d67 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "beda6e02-cb4c-4daf-aefc-0081df73a5c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:51 np0005539552 nova_compute[233724]: 2025-11-29 08:01:51.045 233728 DEBUG oslo_concurrency.lockutils [req-96dd1df2-2f44-48c5-a98a-5e2728eff20a req-29c1cc33-b676-47ac-87e6-349861ed8d67 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "beda6e02-cb4c-4daf-aefc-0081df73a5c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:51 np0005539552 nova_compute[233724]: 2025-11-29 08:01:51.045 233728 DEBUG nova.compute.manager [req-96dd1df2-2f44-48c5-a98a-5e2728eff20a req-29c1cc33-b676-47ac-87e6-349861ed8d67 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] No waiting events found dispatching network-vif-unplugged-4b3e5634-45cb-4474-88fb-530eb7736d9c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:01:51 np0005539552 nova_compute[233724]: 2025-11-29 08:01:51.045 233728 DEBUG nova.compute.manager [req-96dd1df2-2f44-48c5-a98a-5e2728eff20a req-29c1cc33-b676-47ac-87e6-349861ed8d67 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Received event network-vif-unplugged-4b3e5634-45cb-4474-88fb-530eb7736d9c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:01:51 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b796ab28bbabe35ba57963229e6a26b42c81c2e4f791ce4960ccd0f024d0a380-userdata-shm.mount: Deactivated successfully.
Nov 29 03:01:51 np0005539552 systemd[1]: var-lib-containers-storage-overlay-998f29b6b2e9d4576b3cf960dc96703a693550343d1fa159080ead4cc28f9f09-merged.mount: Deactivated successfully.
Nov 29 03:01:51 np0005539552 podman[251247]: 2025-11-29 08:01:51.22009757 +0000 UTC m=+0.456522133 container cleanup b796ab28bbabe35ba57963229e6a26b42c81c2e4f791ce4960ccd0f024d0a380 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c31fb1aa-8c23-41d0-909c-30f1f3fa717a, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 03:01:51 np0005539552 systemd[1]: libpod-conmon-b796ab28bbabe35ba57963229e6a26b42c81c2e4f791ce4960ccd0f024d0a380.scope: Deactivated successfully.
Nov 29 03:01:51 np0005539552 podman[251298]: 2025-11-29 08:01:51.43962791 +0000 UTC m=+0.198641224 container remove b796ab28bbabe35ba57963229e6a26b42c81c2e4f791ce4960ccd0f024d0a380 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c31fb1aa-8c23-41d0-909c-30f1f3fa717a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:01:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:51.451 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[9e536df0-9c75-4356-b0c3-79a01257fb09]: (4, ('Sat Nov 29 08:01:50 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c31fb1aa-8c23-41d0-909c-30f1f3fa717a (b796ab28bbabe35ba57963229e6a26b42c81c2e4f791ce4960ccd0f024d0a380)\nb796ab28bbabe35ba57963229e6a26b42c81c2e4f791ce4960ccd0f024d0a380\nSat Nov 29 08:01:51 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c31fb1aa-8c23-41d0-909c-30f1f3fa717a (b796ab28bbabe35ba57963229e6a26b42c81c2e4f791ce4960ccd0f024d0a380)\nb796ab28bbabe35ba57963229e6a26b42c81c2e4f791ce4960ccd0f024d0a380\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:51.454 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6a74a7e4-92b4-4d3e-834d-5601bcd1974d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:51.456 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc31fb1aa-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:51 np0005539552 nova_compute[233724]: 2025-11-29 08:01:51.457 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:51 np0005539552 kernel: tapc31fb1aa-80: left promiscuous mode
Nov 29 03:01:51 np0005539552 nova_compute[233724]: 2025-11-29 08:01:51.472 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:51 np0005539552 nova_compute[233724]: 2025-11-29 08:01:51.473 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:51.475 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[bd55baa4-3077-4caa-8319-177631577ecc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:51.498 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[893ec993-69da-4135-9a1a-f422326706a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:51.500 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f1742c47-1eed-4655-9d25-502a00f3e270]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:51.516 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[837c3c0a-9562-465b-9e18-4f4a94b97ad2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633127, 'reachable_time': 21717, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251313, 'error': None, 'target': 'ovnmeta-c31fb1aa-8c23-41d0-909c-30f1f3fa717a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:51 np0005539552 systemd[1]: run-netns-ovnmeta\x2dc31fb1aa\x2d8c23\x2d41d0\x2d909c\x2d30f1f3fa717a.mount: Deactivated successfully.
Nov 29 03:01:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:51.520 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c31fb1aa-8c23-41d0-909c-30f1f3fa717a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:01:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:01:51.520 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[9c851cdf-dac2-4703-9580-e267fc4902e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:51 np0005539552 nova_compute[233724]: 2025-11-29 08:01:51.570 233728 INFO nova.virt.libvirt.driver [None req-2f75d2a4-3df6-4b4b-9874-57dcac1fd974 b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Deleting instance files /var/lib/nova/instances/beda6e02-cb4c-4daf-aefc-0081df73a5c0_del#033[00m
Nov 29 03:01:51 np0005539552 nova_compute[233724]: 2025-11-29 08:01:51.571 233728 INFO nova.virt.libvirt.driver [None req-2f75d2a4-3df6-4b4b-9874-57dcac1fd974 b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Deletion of /var/lib/nova/instances/beda6e02-cb4c-4daf-aefc-0081df73a5c0_del complete#033[00m
Nov 29 03:01:51 np0005539552 nova_compute[233724]: 2025-11-29 08:01:51.621 233728 INFO nova.compute.manager [None req-2f75d2a4-3df6-4b4b-9874-57dcac1fd974 b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Took 1.11 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:01:51 np0005539552 nova_compute[233724]: 2025-11-29 08:01:51.621 233728 DEBUG oslo.service.loopingcall [None req-2f75d2a4-3df6-4b4b-9874-57dcac1fd974 b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:01:51 np0005539552 nova_compute[233724]: 2025-11-29 08:01:51.622 233728 DEBUG nova.compute.manager [-] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:01:51 np0005539552 nova_compute[233724]: 2025-11-29 08:01:51.622 233728 DEBUG nova.network.neutron [-] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:01:51 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:52.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:52.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:53 np0005539552 nova_compute[233724]: 2025-11-29 08:01:53.615 233728 DEBUG nova.compute.manager [req-890fac2b-ad23-49f0-9326-ac4dd5994c8d req-93dfb93d-ebd0-444f-a448-0f51d73b957d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Received event network-vif-plugged-4b3e5634-45cb-4474-88fb-530eb7736d9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:53 np0005539552 nova_compute[233724]: 2025-11-29 08:01:53.616 233728 DEBUG oslo_concurrency.lockutils [req-890fac2b-ad23-49f0-9326-ac4dd5994c8d req-93dfb93d-ebd0-444f-a448-0f51d73b957d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "beda6e02-cb4c-4daf-aefc-0081df73a5c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:53 np0005539552 nova_compute[233724]: 2025-11-29 08:01:53.616 233728 DEBUG oslo_concurrency.lockutils [req-890fac2b-ad23-49f0-9326-ac4dd5994c8d req-93dfb93d-ebd0-444f-a448-0f51d73b957d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "beda6e02-cb4c-4daf-aefc-0081df73a5c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:53 np0005539552 nova_compute[233724]: 2025-11-29 08:01:53.616 233728 DEBUG oslo_concurrency.lockutils [req-890fac2b-ad23-49f0-9326-ac4dd5994c8d req-93dfb93d-ebd0-444f-a448-0f51d73b957d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "beda6e02-cb4c-4daf-aefc-0081df73a5c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:53 np0005539552 nova_compute[233724]: 2025-11-29 08:01:53.617 233728 DEBUG nova.compute.manager [req-890fac2b-ad23-49f0-9326-ac4dd5994c8d req-93dfb93d-ebd0-444f-a448-0f51d73b957d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] No waiting events found dispatching network-vif-plugged-4b3e5634-45cb-4474-88fb-530eb7736d9c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:01:53 np0005539552 nova_compute[233724]: 2025-11-29 08:01:53.617 233728 WARNING nova.compute.manager [req-890fac2b-ad23-49f0-9326-ac4dd5994c8d req-93dfb93d-ebd0-444f-a448-0f51d73b957d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Received unexpected event network-vif-plugged-4b3e5634-45cb-4474-88fb-530eb7736d9c for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:01:54 np0005539552 nova_compute[233724]: 2025-11-29 08:01:54.032 233728 DEBUG nova.network.neutron [-] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:01:54 np0005539552 nova_compute[233724]: 2025-11-29 08:01:54.052 233728 INFO nova.compute.manager [-] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Took 2.43 seconds to deallocate network for instance.#033[00m
Nov 29 03:01:54 np0005539552 nova_compute[233724]: 2025-11-29 08:01:54.119 233728 DEBUG nova.compute.manager [req-b73a2581-a932-4ab9-8c77-5d52339212b4 req-8b96c7b1-993f-4a19-abe6-c1a001afe20d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Received event network-vif-deleted-4b3e5634-45cb-4474-88fb-530eb7736d9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:54 np0005539552 nova_compute[233724]: 2025-11-29 08:01:54.304 233728 INFO nova.compute.manager [None req-2f75d2a4-3df6-4b4b-9874-57dcac1fd974 b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Took 0.25 seconds to detach 1 volumes for instance.#033[00m
Nov 29 03:01:54 np0005539552 nova_compute[233724]: 2025-11-29 08:01:54.305 233728 DEBUG nova.compute.manager [None req-2f75d2a4-3df6-4b4b-9874-57dcac1fd974 b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Deleting volume: cda8bfe0-3a4d-4d07-beb1-adb73b629321 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Nov 29 03:01:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:54.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:54.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:54 np0005539552 nova_compute[233724]: 2025-11-29 08:01:54.562 233728 DEBUG oslo_concurrency.lockutils [None req-2f75d2a4-3df6-4b4b-9874-57dcac1fd974 b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:54 np0005539552 nova_compute[233724]: 2025-11-29 08:01:54.563 233728 DEBUG oslo_concurrency.lockutils [None req-2f75d2a4-3df6-4b4b-9874-57dcac1fd974 b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:54 np0005539552 nova_compute[233724]: 2025-11-29 08:01:54.603 233728 DEBUG oslo_concurrency.processutils [None req-2f75d2a4-3df6-4b4b-9874-57dcac1fd974 b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:54 np0005539552 nova_compute[233724]: 2025-11-29 08:01:54.746 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:55 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:01:55 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1699852700' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:01:55 np0005539552 nova_compute[233724]: 2025-11-29 08:01:55.033 233728 DEBUG oslo_concurrency.processutils [None req-2f75d2a4-3df6-4b4b-9874-57dcac1fd974 b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:55 np0005539552 nova_compute[233724]: 2025-11-29 08:01:55.039 233728 DEBUG nova.compute.provider_tree [None req-2f75d2a4-3df6-4b4b-9874-57dcac1fd974 b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:01:55 np0005539552 nova_compute[233724]: 2025-11-29 08:01:55.056 233728 DEBUG nova.scheduler.client.report [None req-2f75d2a4-3df6-4b4b-9874-57dcac1fd974 b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:01:55 np0005539552 nova_compute[233724]: 2025-11-29 08:01:55.076 233728 DEBUG oslo_concurrency.lockutils [None req-2f75d2a4-3df6-4b4b-9874-57dcac1fd974 b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.513s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:55 np0005539552 nova_compute[233724]: 2025-11-29 08:01:55.107 233728 INFO nova.scheduler.client.report [None req-2f75d2a4-3df6-4b4b-9874-57dcac1fd974 b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Deleted allocations for instance beda6e02-cb4c-4daf-aefc-0081df73a5c0#033[00m
Nov 29 03:01:55 np0005539552 nova_compute[233724]: 2025-11-29 08:01:55.178 233728 DEBUG oslo_concurrency.lockutils [None req-2f75d2a4-3df6-4b4b-9874-57dcac1fd974 b37d614815064829b8372abbdbe8b3c4 a3c8b78b8a34400682bf8bbef740a22c - - default default] Lock "beda6e02-cb4c-4daf-aefc-0081df73a5c0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:55 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:01:55 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3679060274' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:01:55 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:01:55 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3679060274' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:01:55 np0005539552 nova_compute[233724]: 2025-11-29 08:01:55.816 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:56.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:56.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:56 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:58 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Nov 29 03:01:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:58.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:01:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:58.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:59 np0005539552 nova_compute[233724]: 2025-11-29 08:01:59.779 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:00.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:00.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:00 np0005539552 nova_compute[233724]: 2025-11-29 08:02:00.559 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:00 np0005539552 nova_compute[233724]: 2025-11-29 08:02:00.726 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:00 np0005539552 nova_compute[233724]: 2025-11-29 08:02:00.817 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:01 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:02.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:02.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:04.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:04.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:04 np0005539552 nova_compute[233724]: 2025-11-29 08:02:04.780 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:05 np0005539552 nova_compute[233724]: 2025-11-29 08:02:05.745 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403310.7435367, beda6e02-cb4c-4daf-aefc-0081df73a5c0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:02:05 np0005539552 nova_compute[233724]: 2025-11-29 08:02:05.746 233728 INFO nova.compute.manager [-] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:02:05 np0005539552 nova_compute[233724]: 2025-11-29 08:02:05.819 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:05 np0005539552 nova_compute[233724]: 2025-11-29 08:02:05.943 233728 DEBUG nova.compute.manager [None req-4e9b0d5a-2e60-4493-a4bc-8670bfebf5b7 - - - - - -] [instance: beda6e02-cb4c-4daf-aefc-0081df73a5c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:02:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.003000081s ======
Nov 29 03:02:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:06.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000081s
Nov 29 03:02:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:06.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:06 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:07 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:02:07 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:02:07 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:02:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:08.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:08.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:09 np0005539552 podman[251505]: 2025-11-29 08:02:09.36358843 +0000 UTC m=+0.059763533 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:02:09 np0005539552 podman[251504]: 2025-11-29 08:02:09.366475088 +0000 UTC m=+0.064097461 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:02:09 np0005539552 podman[251506]: 2025-11-29 08:02:09.429482409 +0000 UTC m=+0.123652898 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:02:09 np0005539552 nova_compute[233724]: 2025-11-29 08:02:09.783 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:10.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:10.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:10 np0005539552 nova_compute[233724]: 2025-11-29 08:02:10.822 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:11 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:12.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:12.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:13 np0005539552 nova_compute[233724]: 2025-11-29 08:02:13.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:02:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:14.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:14.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:14 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:02:14 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:02:14 np0005539552 nova_compute[233724]: 2025-11-29 08:02:14.783 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:14 np0005539552 nova_compute[233724]: 2025-11-29 08:02:14.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:02:14 np0005539552 nova_compute[233724]: 2025-11-29 08:02:14.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:02:14 np0005539552 nova_compute[233724]: 2025-11-29 08:02:14.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:02:15 np0005539552 nova_compute[233724]: 2025-11-29 08:02:15.882 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:16.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:16.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:16 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:17 np0005539552 nova_compute[233724]: 2025-11-29 08:02:17.338 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:02:17 np0005539552 nova_compute[233724]: 2025-11-29 08:02:17.338 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:02:17 np0005539552 nova_compute[233724]: 2025-11-29 08:02:17.338 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:02:17 np0005539552 nova_compute[233724]: 2025-11-29 08:02:17.338 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:02:17 np0005539552 nova_compute[233724]: 2025-11-29 08:02:17.339 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:02:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:02:17 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1304086001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:02:17 np0005539552 nova_compute[233724]: 2025-11-29 08:02:17.809 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:02:17 np0005539552 nova_compute[233724]: 2025-11-29 08:02:17.982 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:02:17 np0005539552 nova_compute[233724]: 2025-11-29 08:02:17.983 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4761MB free_disk=20.897136688232422GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:02:17 np0005539552 nova_compute[233724]: 2025-11-29 08:02:17.983 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:02:17 np0005539552 nova_compute[233724]: 2025-11-29 08:02:17.984 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:02:18 np0005539552 nova_compute[233724]: 2025-11-29 08:02:18.063 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:02:18 np0005539552 nova_compute[233724]: 2025-11-29 08:02:18.063 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:02:18 np0005539552 nova_compute[233724]: 2025-11-29 08:02:18.138 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:02:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:18.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:18.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:02:18 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2557990190' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:02:18 np0005539552 nova_compute[233724]: 2025-11-29 08:02:18.603 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:02:18 np0005539552 nova_compute[233724]: 2025-11-29 08:02:18.610 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:02:18 np0005539552 nova_compute[233724]: 2025-11-29 08:02:18.639 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:02:18 np0005539552 nova_compute[233724]: 2025-11-29 08:02:18.680 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:02:18 np0005539552 nova_compute[233724]: 2025-11-29 08:02:18.680 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:02:19 np0005539552 nova_compute[233724]: 2025-11-29 08:02:19.677 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:02:19 np0005539552 nova_compute[233724]: 2025-11-29 08:02:19.678 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:02:19 np0005539552 nova_compute[233724]: 2025-11-29 08:02:19.784 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:20.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:20.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:02:20.610 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:02:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:02:20.611 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:02:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:02:20.611 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:02:20 np0005539552 nova_compute[233724]: 2025-11-29 08:02:20.885 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:21 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:22.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:22.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:22 np0005539552 nova_compute[233724]: 2025-11-29 08:02:22.822 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:02:22 np0005539552 nova_compute[233724]: 2025-11-29 08:02:22.822 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:02:22 np0005539552 nova_compute[233724]: 2025-11-29 08:02:22.822 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:02:22 np0005539552 nova_compute[233724]: 2025-11-29 08:02:22.847 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:02:22 np0005539552 nova_compute[233724]: 2025-11-29 08:02:22.848 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:02:22 np0005539552 nova_compute[233724]: 2025-11-29 08:02:22.848 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:02:22 np0005539552 nova_compute[233724]: 2025-11-29 08:02:22.848 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:02:22 np0005539552 nova_compute[233724]: 2025-11-29 08:02:22.848 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:02:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:24.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:24.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:24 np0005539552 nova_compute[233724]: 2025-11-29 08:02:24.815 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:02:25 np0005539552 nova_compute[233724]: 2025-11-29 08:02:25.888 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:02:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:26.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:26.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:26 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:26 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:02:26 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2315422452' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:02:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:28.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:28.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:29 np0005539552 nova_compute[233724]: 2025-11-29 08:02:29.817 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:02:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:30.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:30.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:30 np0005539552 nova_compute[233724]: 2025-11-29 08:02:30.891 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:02:31 np0005539552 nova_compute[233724]: 2025-11-29 08:02:31.057 233728 DEBUG oslo_concurrency.lockutils [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Acquiring lock "fbdbdaa7-d282-4f25-90e1-495d08a6f4fc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:02:31 np0005539552 nova_compute[233724]: 2025-11-29 08:02:31.058 233728 DEBUG oslo_concurrency.lockutils [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Lock "fbdbdaa7-d282-4f25-90e1-495d08a6f4fc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:02:31 np0005539552 nova_compute[233724]: 2025-11-29 08:02:31.093 233728 DEBUG nova.compute.manager [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:02:31 np0005539552 nova_compute[233724]: 2025-11-29 08:02:31.199 233728 DEBUG oslo_concurrency.lockutils [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:02:31 np0005539552 nova_compute[233724]: 2025-11-29 08:02:31.199 233728 DEBUG oslo_concurrency.lockutils [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:02:31 np0005539552 nova_compute[233724]: 2025-11-29 08:02:31.211 233728 DEBUG nova.virt.hardware [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:02:31 np0005539552 nova_compute[233724]: 2025-11-29 08:02:31.212 233728 INFO nova.compute.claims [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Claim successful on node compute-2.ctlplane.example.com
Nov 29 03:02:31 np0005539552 nova_compute[233724]: 2025-11-29 08:02:31.387 233728 DEBUG oslo_concurrency.processutils [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:02:31 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:31 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:02:31 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3921532180' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:02:31 np0005539552 nova_compute[233724]: 2025-11-29 08:02:31.819 233728 DEBUG oslo_concurrency.processutils [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:02:31 np0005539552 nova_compute[233724]: 2025-11-29 08:02:31.828 233728 DEBUG nova.compute.provider_tree [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:02:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:32.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:32.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:33 np0005539552 nova_compute[233724]: 2025-11-29 08:02:33.192 233728 DEBUG nova.scheduler.client.report [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:02:33 np0005539552 nova_compute[233724]: 2025-11-29 08:02:33.245 233728 DEBUG oslo_concurrency.lockutils [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.046s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:02:33 np0005539552 nova_compute[233724]: 2025-11-29 08:02:33.246 233728 DEBUG nova.compute.manager [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:02:33 np0005539552 nova_compute[233724]: 2025-11-29 08:02:33.350 233728 DEBUG nova.compute.manager [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:02:33 np0005539552 nova_compute[233724]: 2025-11-29 08:02:33.351 233728 DEBUG nova.network.neutron [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:02:33 np0005539552 nova_compute[233724]: 2025-11-29 08:02:33.380 233728 INFO nova.virt.libvirt.driver [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:02:33 np0005539552 nova_compute[233724]: 2025-11-29 08:02:33.483 233728 DEBUG nova.compute.manager [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:02:33 np0005539552 nova_compute[233724]: 2025-11-29 08:02:33.669 233728 DEBUG nova.compute.manager [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:02:33 np0005539552 nova_compute[233724]: 2025-11-29 08:02:33.670 233728 DEBUG nova.virt.libvirt.driver [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:02:33 np0005539552 nova_compute[233724]: 2025-11-29 08:02:33.671 233728 INFO nova.virt.libvirt.driver [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Creating image(s)
Nov 29 03:02:33 np0005539552 nova_compute[233724]: 2025-11-29 08:02:33.695 233728 DEBUG nova.storage.rbd_utils [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] rbd image fbdbdaa7-d282-4f25-90e1-495d08a6f4fc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:02:33 np0005539552 nova_compute[233724]: 2025-11-29 08:02:33.719 233728 DEBUG nova.storage.rbd_utils [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] rbd image fbdbdaa7-d282-4f25-90e1-495d08a6f4fc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:02:33 np0005539552 nova_compute[233724]: 2025-11-29 08:02:33.743 233728 DEBUG nova.storage.rbd_utils [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] rbd image fbdbdaa7-d282-4f25-90e1-495d08a6f4fc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:02:33 np0005539552 nova_compute[233724]: 2025-11-29 08:02:33.747 233728 DEBUG oslo_concurrency.processutils [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:02:33 np0005539552 nova_compute[233724]: 2025-11-29 08:02:33.808 233728 DEBUG oslo_concurrency.processutils [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:02:33 np0005539552 nova_compute[233724]: 2025-11-29 08:02:33.809 233728 DEBUG oslo_concurrency.lockutils [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:02:33 np0005539552 nova_compute[233724]: 2025-11-29 08:02:33.809 233728 DEBUG oslo_concurrency.lockutils [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:02:33 np0005539552 nova_compute[233724]: 2025-11-29 08:02:33.810 233728 DEBUG oslo_concurrency.lockutils [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:02:33 np0005539552 nova_compute[233724]: 2025-11-29 08:02:33.834 233728 DEBUG nova.storage.rbd_utils [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] rbd image fbdbdaa7-d282-4f25-90e1-495d08a6f4fc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:02:33 np0005539552 nova_compute[233724]: 2025-11-29 08:02:33.841 233728 DEBUG oslo_concurrency.processutils [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 fbdbdaa7-d282-4f25-90e1-495d08a6f4fc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:02:34 np0005539552 nova_compute[233724]: 2025-11-29 08:02:34.136 233728 DEBUG oslo_concurrency.processutils [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 fbdbdaa7-d282-4f25-90e1-495d08a6f4fc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.295s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:02:34 np0005539552 nova_compute[233724]: 2025-11-29 08:02:34.197 233728 DEBUG nova.network.neutron [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 29 03:02:34 np0005539552 nova_compute[233724]: 2025-11-29 08:02:34.198 233728 DEBUG nova.compute.manager [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 03:02:34 np0005539552 nova_compute[233724]: 2025-11-29 08:02:34.202 233728 DEBUG nova.storage.rbd_utils [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] resizing rbd image fbdbdaa7-d282-4f25-90e1-495d08a6f4fc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:02:34 np0005539552 nova_compute[233724]: 2025-11-29 08:02:34.309 233728 DEBUG nova.objects.instance [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Lazy-loading 'migration_context' on Instance uuid fbdbdaa7-d282-4f25-90e1-495d08a6f4fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:02:34 np0005539552 nova_compute[233724]: 2025-11-29 08:02:34.328 233728 DEBUG nova.virt.libvirt.driver [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:02:34 np0005539552 nova_compute[233724]: 2025-11-29 08:02:34.329 233728 DEBUG nova.virt.libvirt.driver [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Ensure instance console log exists: /var/lib/nova/instances/fbdbdaa7-d282-4f25-90e1-495d08a6f4fc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:02:34 np0005539552 nova_compute[233724]: 2025-11-29 08:02:34.330 233728 DEBUG oslo_concurrency.lockutils [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:02:34 np0005539552 nova_compute[233724]: 2025-11-29 08:02:34.330 233728 DEBUG oslo_concurrency.lockutils [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:02:34 np0005539552 nova_compute[233724]: 2025-11-29 08:02:34.331 233728 DEBUG oslo_concurrency.lockutils [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:02:34 np0005539552 nova_compute[233724]: 2025-11-29 08:02:34.332 233728 DEBUG nova.virt.libvirt.driver [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 03:02:34 np0005539552 nova_compute[233724]: 2025-11-29 08:02:34.338 233728 WARNING nova.virt.libvirt.driver [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:02:34 np0005539552 nova_compute[233724]: 2025-11-29 08:02:34.355 233728 DEBUG nova.virt.libvirt.host [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 03:02:34 np0005539552 nova_compute[233724]: 2025-11-29 08:02:34.356 233728 DEBUG nova.virt.libvirt.host [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 03:02:34 np0005539552 nova_compute[233724]: 2025-11-29 08:02:34.361 233728 DEBUG nova.virt.libvirt.host [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 03:02:34 np0005539552 nova_compute[233724]: 2025-11-29 08:02:34.362 233728 DEBUG nova.virt.libvirt.host [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 03:02:34 np0005539552 nova_compute[233724]: 2025-11-29 08:02:34.363 233728 DEBUG nova.virt.libvirt.driver [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 03:02:34 np0005539552 nova_compute[233724]: 2025-11-29 08:02:34.363 233728 DEBUG nova.virt.hardware [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 03:02:34 np0005539552 nova_compute[233724]: 2025-11-29 08:02:34.364 233728 DEBUG nova.virt.hardware [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 03:02:34 np0005539552 nova_compute[233724]: 2025-11-29 08:02:34.364 233728 DEBUG nova.virt.hardware [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 03:02:34 np0005539552 nova_compute[233724]: 2025-11-29 08:02:34.364 233728 DEBUG nova.virt.hardware [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 03:02:34 np0005539552 nova_compute[233724]: 2025-11-29 08:02:34.364 233728 DEBUG nova.virt.hardware [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 03:02:34 np0005539552 nova_compute[233724]: 2025-11-29 08:02:34.364 233728 DEBUG nova.virt.hardware [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 03:02:34 np0005539552 nova_compute[233724]: 2025-11-29 08:02:34.365 233728 DEBUG nova.virt.hardware [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 03:02:34 np0005539552 nova_compute[233724]: 2025-11-29 08:02:34.365 233728 DEBUG nova.virt.hardware [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 03:02:34 np0005539552 nova_compute[233724]: 2025-11-29 08:02:34.365 233728 DEBUG nova.virt.hardware [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 03:02:34 np0005539552 nova_compute[233724]: 2025-11-29 08:02:34.365 233728 DEBUG nova.virt.hardware [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 03:02:34 np0005539552 nova_compute[233724]: 2025-11-29 08:02:34.365 233728 DEBUG nova.virt.hardware [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 03:02:34 np0005539552 nova_compute[233724]: 2025-11-29 08:02:34.368 233728 DEBUG oslo_concurrency.processutils [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:02:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:34.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:34.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:34 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:02:34 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2827716105' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:02:34 np0005539552 nova_compute[233724]: 2025-11-29 08:02:34.804 233728 DEBUG oslo_concurrency.processutils [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:02:34 np0005539552 nova_compute[233724]: 2025-11-29 08:02:34.828 233728 DEBUG nova.storage.rbd_utils [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] rbd image fbdbdaa7-d282-4f25-90e1-495d08a6f4fc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:02:34 np0005539552 nova_compute[233724]: 2025-11-29 08:02:34.832 233728 DEBUG oslo_concurrency.processutils [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:02:34 np0005539552 nova_compute[233724]: 2025-11-29 08:02:34.848 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:35 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:02:35 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/638786983' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:02:35 np0005539552 nova_compute[233724]: 2025-11-29 08:02:35.253 233728 DEBUG oslo_concurrency.processutils [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:02:35 np0005539552 nova_compute[233724]: 2025-11-29 08:02:35.255 233728 DEBUG nova.objects.instance [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Lazy-loading 'pci_devices' on Instance uuid fbdbdaa7-d282-4f25-90e1-495d08a6f4fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:02:35 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:02:35 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2601610483' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:02:35 np0005539552 nova_compute[233724]: 2025-11-29 08:02:35.894 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:36.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:36.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:37 np0005539552 nova_compute[233724]: 2025-11-29 08:02:37.062 233728 DEBUG nova.virt.libvirt.driver [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:02:37 np0005539552 nova_compute[233724]:  <uuid>fbdbdaa7-d282-4f25-90e1-495d08a6f4fc</uuid>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:  <name>instance-00000028</name>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:02:37 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:      <nova:name>tempest-ServerExternalEventsTest-server-2137807765</nova:name>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:02:34</nova:creationTime>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:02:37 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:        <nova:user uuid="3b275dc858bd4328808c27dbaf172f8c">tempest-ServerExternalEventsTest-1881856230-project-member</nova:user>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:        <nova:project uuid="6e169525d3b345b588b4fcf946bacf81">tempest-ServerExternalEventsTest-1881856230</nova:project>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:      <nova:ports/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:      <entry name="serial">fbdbdaa7-d282-4f25-90e1-495d08a6f4fc</entry>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:      <entry name="uuid">fbdbdaa7-d282-4f25-90e1-495d08a6f4fc</entry>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:02:37 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/fbdbdaa7-d282-4f25-90e1-495d08a6f4fc_disk">
Nov 29 03:02:37 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:02:37 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:02:37 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/fbdbdaa7-d282-4f25-90e1-495d08a6f4fc_disk.config">
Nov 29 03:02:37 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:02:37 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:02:37 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/fbdbdaa7-d282-4f25-90e1-495d08a6f4fc/console.log" append="off"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:02:37 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:02:37 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:02:37 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:02:37 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:02:37 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:02:37 np0005539552 nova_compute[233724]: 2025-11-29 08:02:37.137 233728 DEBUG nova.virt.libvirt.driver [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:02:37 np0005539552 nova_compute[233724]: 2025-11-29 08:02:37.137 233728 DEBUG nova.virt.libvirt.driver [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:02:37 np0005539552 nova_compute[233724]: 2025-11-29 08:02:37.138 233728 INFO nova.virt.libvirt.driver [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Using config drive#033[00m
Nov 29 03:02:37 np0005539552 nova_compute[233724]: 2025-11-29 08:02:37.164 233728 DEBUG nova.storage.rbd_utils [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] rbd image fbdbdaa7-d282-4f25-90e1-495d08a6f4fc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:02:37 np0005539552 ovn_controller[133798]: 2025-11-29T08:02:37Z|00144|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Nov 29 03:02:37 np0005539552 nova_compute[233724]: 2025-11-29 08:02:37.592 233728 INFO nova.virt.libvirt.driver [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Creating config drive at /var/lib/nova/instances/fbdbdaa7-d282-4f25-90e1-495d08a6f4fc/disk.config#033[00m
Nov 29 03:02:37 np0005539552 nova_compute[233724]: 2025-11-29 08:02:37.597 233728 DEBUG oslo_concurrency.processutils [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fbdbdaa7-d282-4f25-90e1-495d08a6f4fc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7aoyhn2x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:02:37 np0005539552 nova_compute[233724]: 2025-11-29 08:02:37.724 233728 DEBUG oslo_concurrency.processutils [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fbdbdaa7-d282-4f25-90e1-495d08a6f4fc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7aoyhn2x" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:02:37 np0005539552 nova_compute[233724]: 2025-11-29 08:02:37.750 233728 DEBUG nova.storage.rbd_utils [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] rbd image fbdbdaa7-d282-4f25-90e1-495d08a6f4fc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:02:37 np0005539552 nova_compute[233724]: 2025-11-29 08:02:37.754 233728 DEBUG oslo_concurrency.processutils [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fbdbdaa7-d282-4f25-90e1-495d08a6f4fc/disk.config fbdbdaa7-d282-4f25-90e1-495d08a6f4fc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:02:37 np0005539552 nova_compute[233724]: 2025-11-29 08:02:37.899 233728 DEBUG oslo_concurrency.processutils [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fbdbdaa7-d282-4f25-90e1-495d08a6f4fc/disk.config fbdbdaa7-d282-4f25-90e1-495d08a6f4fc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:02:37 np0005539552 nova_compute[233724]: 2025-11-29 08:02:37.900 233728 INFO nova.virt.libvirt.driver [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Deleting local config drive /var/lib/nova/instances/fbdbdaa7-d282-4f25-90e1-495d08a6f4fc/disk.config because it was imported into RBD.#033[00m
Nov 29 03:02:37 np0005539552 systemd-machined[196379]: New machine qemu-13-instance-00000028.
Nov 29 03:02:37 np0005539552 systemd[1]: Started Virtual Machine qemu-13-instance-00000028.
Nov 29 03:02:38 np0005539552 nova_compute[233724]: 2025-11-29 08:02:38.286 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403358.28611, fbdbdaa7-d282-4f25-90e1-495d08a6f4fc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:02:38 np0005539552 nova_compute[233724]: 2025-11-29 08:02:38.287 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:02:38 np0005539552 nova_compute[233724]: 2025-11-29 08:02:38.290 233728 DEBUG nova.compute.manager [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:02:38 np0005539552 nova_compute[233724]: 2025-11-29 08:02:38.290 233728 DEBUG nova.virt.libvirt.driver [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:02:38 np0005539552 nova_compute[233724]: 2025-11-29 08:02:38.294 233728 INFO nova.virt.libvirt.driver [-] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Instance spawned successfully.#033[00m
Nov 29 03:02:38 np0005539552 nova_compute[233724]: 2025-11-29 08:02:38.294 233728 DEBUG nova.virt.libvirt.driver [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:02:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:38.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:38.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:02:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/218145961' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:02:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:02:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/218145961' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:02:39 np0005539552 nova_compute[233724]: 2025-11-29 08:02:39.821 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:39 np0005539552 nova_compute[233724]: 2025-11-29 08:02:39.913 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:02:39 np0005539552 nova_compute[233724]: 2025-11-29 08:02:39.918 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:02:39 np0005539552 nova_compute[233724]: 2025-11-29 08:02:39.921 233728 DEBUG nova.virt.libvirt.driver [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:02:39 np0005539552 nova_compute[233724]: 2025-11-29 08:02:39.922 233728 DEBUG nova.virt.libvirt.driver [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:02:39 np0005539552 nova_compute[233724]: 2025-11-29 08:02:39.922 233728 DEBUG nova.virt.libvirt.driver [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:02:39 np0005539552 nova_compute[233724]: 2025-11-29 08:02:39.923 233728 DEBUG nova.virt.libvirt.driver [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:02:39 np0005539552 nova_compute[233724]: 2025-11-29 08:02:39.923 233728 DEBUG nova.virt.libvirt.driver [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:02:39 np0005539552 nova_compute[233724]: 2025-11-29 08:02:39.923 233728 DEBUG nova.virt.libvirt.driver [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:02:39 np0005539552 nova_compute[233724]: 2025-11-29 08:02:39.975 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:02:39 np0005539552 nova_compute[233724]: 2025-11-29 08:02:39.976 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403358.2892787, fbdbdaa7-d282-4f25-90e1-495d08a6f4fc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:02:39 np0005539552 nova_compute[233724]: 2025-11-29 08:02:39.976 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] VM Started (Lifecycle Event)#033[00m
Nov 29 03:02:39 np0005539552 podman[252119]: 2025-11-29 08:02:39.979282685 +0000 UTC m=+0.065253542 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:02:39 np0005539552 podman[252120]: 2025-11-29 08:02:39.993712507 +0000 UTC m=+0.079731646 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:02:40 np0005539552 podman[252121]: 2025-11-29 08:02:40.002235158 +0000 UTC m=+0.086288393 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:02:40 np0005539552 nova_compute[233724]: 2025-11-29 08:02:40.043 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:02:40 np0005539552 nova_compute[233724]: 2025-11-29 08:02:40.047 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:02:40 np0005539552 nova_compute[233724]: 2025-11-29 08:02:40.086 233728 INFO nova.compute.manager [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Took 6.42 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:02:40 np0005539552 nova_compute[233724]: 2025-11-29 08:02:40.087 233728 DEBUG nova.compute.manager [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:02:40 np0005539552 nova_compute[233724]: 2025-11-29 08:02:40.088 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:02:40 np0005539552 nova_compute[233724]: 2025-11-29 08:02:40.155 233728 INFO nova.compute.manager [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Took 8.99 seconds to build instance.#033[00m
Nov 29 03:02:40 np0005539552 nova_compute[233724]: 2025-11-29 08:02:40.188 233728 DEBUG oslo_concurrency.lockutils [None req-210a896b-213e-4798-8c75-1d565842a7f9 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Lock "fbdbdaa7-d282-4f25-90e1-495d08a6f4fc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.130s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:02:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:40.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:40.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:40 np0005539552 nova_compute[233724]: 2025-11-29 08:02:40.898 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:41 np0005539552 nova_compute[233724]: 2025-11-29 08:02:41.130 233728 DEBUG nova.compute.manager [None req-6981fe14-ccfb-44f2-b63f-f35cd1ab3aea 4cccdc247f2342f9bbd38c1238437de1 acf5114526e34421a66a8ee35227cd24 - - default default] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Received event network-changed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:02:41 np0005539552 nova_compute[233724]: 2025-11-29 08:02:41.130 233728 DEBUG nova.compute.manager [None req-6981fe14-ccfb-44f2-b63f-f35cd1ab3aea 4cccdc247f2342f9bbd38c1238437de1 acf5114526e34421a66a8ee35227cd24 - - default default] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Refreshing instance network info cache due to event network-changed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:02:41 np0005539552 nova_compute[233724]: 2025-11-29 08:02:41.131 233728 DEBUG oslo_concurrency.lockutils [None req-6981fe14-ccfb-44f2-b63f-f35cd1ab3aea 4cccdc247f2342f9bbd38c1238437de1 acf5114526e34421a66a8ee35227cd24 - - default default] Acquiring lock "refresh_cache-fbdbdaa7-d282-4f25-90e1-495d08a6f4fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:02:41 np0005539552 nova_compute[233724]: 2025-11-29 08:02:41.131 233728 DEBUG oslo_concurrency.lockutils [None req-6981fe14-ccfb-44f2-b63f-f35cd1ab3aea 4cccdc247f2342f9bbd38c1238437de1 acf5114526e34421a66a8ee35227cd24 - - default default] Acquired lock "refresh_cache-fbdbdaa7-d282-4f25-90e1-495d08a6f4fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:02:41 np0005539552 nova_compute[233724]: 2025-11-29 08:02:41.131 233728 DEBUG nova.network.neutron [None req-6981fe14-ccfb-44f2-b63f-f35cd1ab3aea 4cccdc247f2342f9bbd38c1238437de1 acf5114526e34421a66a8ee35227cd24 - - default default] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:02:41 np0005539552 nova_compute[233724]: 2025-11-29 08:02:41.390 233728 DEBUG oslo_concurrency.lockutils [None req-2c1147c7-64c0-46b2-9c58-69a32012f5bb 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Acquiring lock "fbdbdaa7-d282-4f25-90e1-495d08a6f4fc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:02:41 np0005539552 nova_compute[233724]: 2025-11-29 08:02:41.390 233728 DEBUG oslo_concurrency.lockutils [None req-2c1147c7-64c0-46b2-9c58-69a32012f5bb 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Lock "fbdbdaa7-d282-4f25-90e1-495d08a6f4fc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:02:41 np0005539552 nova_compute[233724]: 2025-11-29 08:02:41.391 233728 DEBUG oslo_concurrency.lockutils [None req-2c1147c7-64c0-46b2-9c58-69a32012f5bb 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Acquiring lock "fbdbdaa7-d282-4f25-90e1-495d08a6f4fc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:02:41 np0005539552 nova_compute[233724]: 2025-11-29 08:02:41.391 233728 DEBUG oslo_concurrency.lockutils [None req-2c1147c7-64c0-46b2-9c58-69a32012f5bb 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Lock "fbdbdaa7-d282-4f25-90e1-495d08a6f4fc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:02:41 np0005539552 nova_compute[233724]: 2025-11-29 08:02:41.392 233728 DEBUG oslo_concurrency.lockutils [None req-2c1147c7-64c0-46b2-9c58-69a32012f5bb 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Lock "fbdbdaa7-d282-4f25-90e1-495d08a6f4fc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:02:41 np0005539552 nova_compute[233724]: 2025-11-29 08:02:41.393 233728 INFO nova.compute.manager [None req-2c1147c7-64c0-46b2-9c58-69a32012f5bb 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Terminating instance#033[00m
Nov 29 03:02:41 np0005539552 nova_compute[233724]: 2025-11-29 08:02:41.394 233728 DEBUG oslo_concurrency.lockutils [None req-2c1147c7-64c0-46b2-9c58-69a32012f5bb 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Acquiring lock "refresh_cache-fbdbdaa7-d282-4f25-90e1-495d08a6f4fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:02:41 np0005539552 nova_compute[233724]: 2025-11-29 08:02:41.395 233728 DEBUG nova.network.neutron [None req-6981fe14-ccfb-44f2-b63f-f35cd1ab3aea 4cccdc247f2342f9bbd38c1238437de1 acf5114526e34421a66a8ee35227cd24 - - default default] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:02:41 np0005539552 nova_compute[233724]: 2025-11-29 08:02:41.756 233728 DEBUG nova.network.neutron [None req-6981fe14-ccfb-44f2-b63f-f35cd1ab3aea 4cccdc247f2342f9bbd38c1238437de1 acf5114526e34421a66a8ee35227cd24 - - default default] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:02:41 np0005539552 nova_compute[233724]: 2025-11-29 08:02:41.773 233728 DEBUG oslo_concurrency.lockutils [None req-6981fe14-ccfb-44f2-b63f-f35cd1ab3aea 4cccdc247f2342f9bbd38c1238437de1 acf5114526e34421a66a8ee35227cd24 - - default default] Releasing lock "refresh_cache-fbdbdaa7-d282-4f25-90e1-495d08a6f4fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:02:41 np0005539552 nova_compute[233724]: 2025-11-29 08:02:41.774 233728 DEBUG oslo_concurrency.lockutils [None req-2c1147c7-64c0-46b2-9c58-69a32012f5bb 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Acquired lock "refresh_cache-fbdbdaa7-d282-4f25-90e1-495d08a6f4fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:02:41 np0005539552 nova_compute[233724]: 2025-11-29 08:02:41.774 233728 DEBUG nova.network.neutron [None req-2c1147c7-64c0-46b2-9c58-69a32012f5bb 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:02:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:42 np0005539552 nova_compute[233724]: 2025-11-29 08:02:42.152 233728 DEBUG nova.network.neutron [None req-2c1147c7-64c0-46b2-9c58-69a32012f5bb 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:02:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:42.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:42.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:42 np0005539552 nova_compute[233724]: 2025-11-29 08:02:42.615 233728 DEBUG nova.network.neutron [None req-2c1147c7-64c0-46b2-9c58-69a32012f5bb 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:02:42 np0005539552 nova_compute[233724]: 2025-11-29 08:02:42.636 233728 DEBUG oslo_concurrency.lockutils [None req-2c1147c7-64c0-46b2-9c58-69a32012f5bb 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Releasing lock "refresh_cache-fbdbdaa7-d282-4f25-90e1-495d08a6f4fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:02:42 np0005539552 nova_compute[233724]: 2025-11-29 08:02:42.636 233728 DEBUG nova.compute.manager [None req-2c1147c7-64c0-46b2-9c58-69a32012f5bb 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:02:42 np0005539552 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d00000028.scope: Deactivated successfully.
Nov 29 03:02:42 np0005539552 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d00000028.scope: Consumed 4.827s CPU time.
Nov 29 03:02:42 np0005539552 systemd-machined[196379]: Machine qemu-13-instance-00000028 terminated.
Nov 29 03:02:42 np0005539552 nova_compute[233724]: 2025-11-29 08:02:42.855 233728 INFO nova.virt.libvirt.driver [-] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Instance destroyed successfully.#033[00m
Nov 29 03:02:42 np0005539552 nova_compute[233724]: 2025-11-29 08:02:42.856 233728 DEBUG nova.objects.instance [None req-2c1147c7-64c0-46b2-9c58-69a32012f5bb 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Lazy-loading 'resources' on Instance uuid fbdbdaa7-d282-4f25-90e1-495d08a6f4fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:02:44 np0005539552 nova_compute[233724]: 2025-11-29 08:02:44.033 233728 INFO nova.virt.libvirt.driver [None req-2c1147c7-64c0-46b2-9c58-69a32012f5bb 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Deleting instance files /var/lib/nova/instances/fbdbdaa7-d282-4f25-90e1-495d08a6f4fc_del#033[00m
Nov 29 03:02:44 np0005539552 nova_compute[233724]: 2025-11-29 08:02:44.034 233728 INFO nova.virt.libvirt.driver [None req-2c1147c7-64c0-46b2-9c58-69a32012f5bb 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Deletion of /var/lib/nova/instances/fbdbdaa7-d282-4f25-90e1-495d08a6f4fc_del complete#033[00m
Nov 29 03:02:44 np0005539552 nova_compute[233724]: 2025-11-29 08:02:44.105 233728 INFO nova.compute.manager [None req-2c1147c7-64c0-46b2-9c58-69a32012f5bb 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Took 1.47 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:02:44 np0005539552 nova_compute[233724]: 2025-11-29 08:02:44.106 233728 DEBUG oslo.service.loopingcall [None req-2c1147c7-64c0-46b2-9c58-69a32012f5bb 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:02:44 np0005539552 nova_compute[233724]: 2025-11-29 08:02:44.106 233728 DEBUG nova.compute.manager [-] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:02:44 np0005539552 nova_compute[233724]: 2025-11-29 08:02:44.106 233728 DEBUG nova.network.neutron [-] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:02:44 np0005539552 nova_compute[233724]: 2025-11-29 08:02:44.259 233728 DEBUG nova.network.neutron [-] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:02:44 np0005539552 nova_compute[233724]: 2025-11-29 08:02:44.284 233728 DEBUG nova.network.neutron [-] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:02:44 np0005539552 nova_compute[233724]: 2025-11-29 08:02:44.315 233728 INFO nova.compute.manager [-] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Took 0.21 seconds to deallocate network for instance.#033[00m
Nov 29 03:02:44 np0005539552 nova_compute[233724]: 2025-11-29 08:02:44.376 233728 DEBUG oslo_concurrency.lockutils [None req-2c1147c7-64c0-46b2-9c58-69a32012f5bb 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:02:44 np0005539552 nova_compute[233724]: 2025-11-29 08:02:44.376 233728 DEBUG oslo_concurrency.lockutils [None req-2c1147c7-64c0-46b2-9c58-69a32012f5bb 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:02:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:44.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:44 np0005539552 nova_compute[233724]: 2025-11-29 08:02:44.435 233728 DEBUG oslo_concurrency.processutils [None req-2c1147c7-64c0-46b2-9c58-69a32012f5bb 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:02:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:44.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:44 np0005539552 nova_compute[233724]: 2025-11-29 08:02:44.823 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:44 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:02:44 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1786680011' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:02:44 np0005539552 nova_compute[233724]: 2025-11-29 08:02:44.890 233728 DEBUG oslo_concurrency.processutils [None req-2c1147c7-64c0-46b2-9c58-69a32012f5bb 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:02:44 np0005539552 nova_compute[233724]: 2025-11-29 08:02:44.897 233728 DEBUG nova.compute.provider_tree [None req-2c1147c7-64c0-46b2-9c58-69a32012f5bb 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:02:45 np0005539552 nova_compute[233724]: 2025-11-29 08:02:45.007 233728 DEBUG nova.scheduler.client.report [None req-2c1147c7-64c0-46b2-9c58-69a32012f5bb 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:02:45 np0005539552 nova_compute[233724]: 2025-11-29 08:02:45.041 233728 DEBUG oslo_concurrency.lockutils [None req-2c1147c7-64c0-46b2-9c58-69a32012f5bb 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:02:45 np0005539552 nova_compute[233724]: 2025-11-29 08:02:45.110 233728 INFO nova.scheduler.client.report [None req-2c1147c7-64c0-46b2-9c58-69a32012f5bb 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Deleted allocations for instance fbdbdaa7-d282-4f25-90e1-495d08a6f4fc#033[00m
Nov 29 03:02:45 np0005539552 nova_compute[233724]: 2025-11-29 08:02:45.224 233728 DEBUG oslo_concurrency.lockutils [None req-2c1147c7-64c0-46b2-9c58-69a32012f5bb 3b275dc858bd4328808c27dbaf172f8c 6e169525d3b345b588b4fcf946bacf81 - - default default] Lock "fbdbdaa7-d282-4f25-90e1-495d08a6f4fc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:02:45 np0005539552 nova_compute[233724]: 2025-11-29 08:02:45.901 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:46.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:46.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 03:02:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:48.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 03:02:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:48.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:48 np0005539552 nova_compute[233724]: 2025-11-29 08:02:48.874 233728 DEBUG oslo_concurrency.lockutils [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Acquiring lock "81e12fcb-7c4c-4214-b8c5-1a18612dcfe1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:02:48 np0005539552 nova_compute[233724]: 2025-11-29 08:02:48.875 233728 DEBUG oslo_concurrency.lockutils [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Lock "81e12fcb-7c4c-4214-b8c5-1a18612dcfe1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:02:48 np0005539552 nova_compute[233724]: 2025-11-29 08:02:48.900 233728 DEBUG nova.compute.manager [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:02:48 np0005539552 nova_compute[233724]: 2025-11-29 08:02:48.995 233728 DEBUG oslo_concurrency.lockutils [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:02:48 np0005539552 nova_compute[233724]: 2025-11-29 08:02:48.996 233728 DEBUG oslo_concurrency.lockutils [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:02:49 np0005539552 nova_compute[233724]: 2025-11-29 08:02:49.004 233728 DEBUG nova.virt.hardware [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:02:49 np0005539552 nova_compute[233724]: 2025-11-29 08:02:49.004 233728 INFO nova.compute.claims [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:02:49 np0005539552 nova_compute[233724]: 2025-11-29 08:02:49.149 233728 DEBUG oslo_concurrency.processutils [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:02:49 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:02:49 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1251435941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:02:49 np0005539552 nova_compute[233724]: 2025-11-29 08:02:49.597 233728 DEBUG oslo_concurrency.processutils [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:02:49 np0005539552 nova_compute[233724]: 2025-11-29 08:02:49.602 233728 DEBUG nova.compute.provider_tree [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:02:49 np0005539552 nova_compute[233724]: 2025-11-29 08:02:49.634 233728 DEBUG nova.scheduler.client.report [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:02:49 np0005539552 nova_compute[233724]: 2025-11-29 08:02:49.658 233728 DEBUG oslo_concurrency.lockutils [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:02:49 np0005539552 nova_compute[233724]: 2025-11-29 08:02:49.659 233728 DEBUG nova.compute.manager [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:02:49 np0005539552 nova_compute[233724]: 2025-11-29 08:02:49.718 233728 DEBUG nova.compute.manager [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:02:49 np0005539552 nova_compute[233724]: 2025-11-29 08:02:49.719 233728 DEBUG nova.network.neutron [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:02:49 np0005539552 nova_compute[233724]: 2025-11-29 08:02:49.746 233728 INFO nova.virt.libvirt.driver [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:02:49 np0005539552 nova_compute[233724]: 2025-11-29 08:02:49.766 233728 DEBUG nova.compute.manager [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:02:49 np0005539552 nova_compute[233724]: 2025-11-29 08:02:49.824 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:49 np0005539552 nova_compute[233724]: 2025-11-29 08:02:49.898 233728 DEBUG nova.compute.manager [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:02:49 np0005539552 nova_compute[233724]: 2025-11-29 08:02:49.899 233728 DEBUG nova.virt.libvirt.driver [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:02:49 np0005539552 nova_compute[233724]: 2025-11-29 08:02:49.900 233728 INFO nova.virt.libvirt.driver [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Creating image(s)#033[00m
Nov 29 03:02:49 np0005539552 nova_compute[233724]: 2025-11-29 08:02:49.920 233728 DEBUG nova.storage.rbd_utils [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] rbd image 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:02:49 np0005539552 nova_compute[233724]: 2025-11-29 08:02:49.941 233728 DEBUG nova.storage.rbd_utils [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] rbd image 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:02:49 np0005539552 nova_compute[233724]: 2025-11-29 08:02:49.960 233728 DEBUG nova.storage.rbd_utils [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] rbd image 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:02:49 np0005539552 nova_compute[233724]: 2025-11-29 08:02:49.963 233728 DEBUG oslo_concurrency.processutils [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:02:50 np0005539552 nova_compute[233724]: 2025-11-29 08:02:50.016 233728 DEBUG oslo_concurrency.processutils [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:02:50 np0005539552 nova_compute[233724]: 2025-11-29 08:02:50.017 233728 DEBUG oslo_concurrency.lockutils [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:02:50 np0005539552 nova_compute[233724]: 2025-11-29 08:02:50.018 233728 DEBUG oslo_concurrency.lockutils [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:02:50 np0005539552 nova_compute[233724]: 2025-11-29 08:02:50.018 233728 DEBUG oslo_concurrency.lockutils [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:02:50 np0005539552 nova_compute[233724]: 2025-11-29 08:02:50.038 233728 DEBUG nova.storage.rbd_utils [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] rbd image 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:02:50 np0005539552 nova_compute[233724]: 2025-11-29 08:02:50.042 233728 DEBUG oslo_concurrency.processutils [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:02:50 np0005539552 nova_compute[233724]: 2025-11-29 08:02:50.310 233728 DEBUG nova.policy [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1725cd9cd6474f59b213ec05ccd5c878', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e901587fd87545d2b2c4a7872915b1fb', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:02:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:50.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:50.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:50 np0005539552 nova_compute[233724]: 2025-11-29 08:02:50.904 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:02:51.486 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:02:51 np0005539552 nova_compute[233724]: 2025-11-29 08:02:51.487 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:02:51.488 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:02:51 np0005539552 nova_compute[233724]: 2025-11-29 08:02:51.655 233728 DEBUG nova.network.neutron [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Successfully created port: 551e9a57-22c5-4205-89e8-c32eddb64e70 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:02:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.004000109s ======
Nov 29 03:02:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:52.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000109s
Nov 29 03:02:52 np0005539552 nova_compute[233724]: 2025-11-29 08:02:52.504 233728 DEBUG oslo_concurrency.processutils [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:02:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:52.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:52 np0005539552 nova_compute[233724]: 2025-11-29 08:02:52.572 233728 DEBUG nova.storage.rbd_utils [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] resizing rbd image 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:02:52 np0005539552 nova_compute[233724]: 2025-11-29 08:02:52.679 233728 DEBUG nova.objects.instance [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Lazy-loading 'migration_context' on Instance uuid 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:02:52 np0005539552 nova_compute[233724]: 2025-11-29 08:02:52.723 233728 DEBUG nova.virt.libvirt.driver [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:02:52 np0005539552 nova_compute[233724]: 2025-11-29 08:02:52.723 233728 DEBUG nova.virt.libvirt.driver [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Ensure instance console log exists: /var/lib/nova/instances/81e12fcb-7c4c-4214-b8c5-1a18612dcfe1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:02:52 np0005539552 nova_compute[233724]: 2025-11-29 08:02:52.724 233728 DEBUG oslo_concurrency.lockutils [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:02:52 np0005539552 nova_compute[233724]: 2025-11-29 08:02:52.725 233728 DEBUG oslo_concurrency.lockutils [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:02:52 np0005539552 nova_compute[233724]: 2025-11-29 08:02:52.725 233728 DEBUG oslo_concurrency.lockutils [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:02:53 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:02:53.489 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:02:53 np0005539552 nova_compute[233724]: 2025-11-29 08:02:53.560 233728 DEBUG nova.network.neutron [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Successfully updated port: 551e9a57-22c5-4205-89e8-c32eddb64e70 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:02:53 np0005539552 nova_compute[233724]: 2025-11-29 08:02:53.598 233728 DEBUG oslo_concurrency.lockutils [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Acquiring lock "refresh_cache-81e12fcb-7c4c-4214-b8c5-1a18612dcfe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:02:53 np0005539552 nova_compute[233724]: 2025-11-29 08:02:53.599 233728 DEBUG oslo_concurrency.lockutils [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Acquired lock "refresh_cache-81e12fcb-7c4c-4214-b8c5-1a18612dcfe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:02:53 np0005539552 nova_compute[233724]: 2025-11-29 08:02:53.599 233728 DEBUG nova.network.neutron [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:02:53 np0005539552 nova_compute[233724]: 2025-11-29 08:02:53.740 233728 DEBUG nova.compute.manager [req-f81d938d-962c-40ae-94a2-fef358267813 req-34ddda6b-1ee5-4b05-b247-809fcdf9d480 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Received event network-changed-551e9a57-22c5-4205-89e8-c32eddb64e70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:02:53 np0005539552 nova_compute[233724]: 2025-11-29 08:02:53.740 233728 DEBUG nova.compute.manager [req-f81d938d-962c-40ae-94a2-fef358267813 req-34ddda6b-1ee5-4b05-b247-809fcdf9d480 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Refreshing instance network info cache due to event network-changed-551e9a57-22c5-4205-89e8-c32eddb64e70. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:02:53 np0005539552 nova_compute[233724]: 2025-11-29 08:02:53.740 233728 DEBUG oslo_concurrency.lockutils [req-f81d938d-962c-40ae-94a2-fef358267813 req-34ddda6b-1ee5-4b05-b247-809fcdf9d480 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-81e12fcb-7c4c-4214-b8c5-1a18612dcfe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:02:53 np0005539552 nova_compute[233724]: 2025-11-29 08:02:53.864 233728 DEBUG nova.network.neutron [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:02:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:54.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:54.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:54 np0005539552 nova_compute[233724]: 2025-11-29 08:02:54.825 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:54 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #61. Immutable memtables: 0.
Nov 29 03:02:54 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:02:54.884299) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:02:54 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 61
Nov 29 03:02:54 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403374884355, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2448, "num_deletes": 254, "total_data_size": 5785169, "memory_usage": 5856896, "flush_reason": "Manual Compaction"}
Nov 29 03:02:54 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #62: started
Nov 29 03:02:54 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403374912594, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 62, "file_size": 3793794, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30141, "largest_seqno": 32584, "table_properties": {"data_size": 3783890, "index_size": 6270, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 21221, "raw_average_key_size": 20, "raw_value_size": 3763811, "raw_average_value_size": 3679, "num_data_blocks": 273, "num_entries": 1023, "num_filter_entries": 1023, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403156, "oldest_key_time": 1764403156, "file_creation_time": 1764403374, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:02:54 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 28370 microseconds, and 7034 cpu microseconds.
Nov 29 03:02:54 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:02:54 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:02:54.912670) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #62: 3793794 bytes OK
Nov 29 03:02:54 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:02:54.912691) [db/memtable_list.cc:519] [default] Level-0 commit table #62 started
Nov 29 03:02:54 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:02:54.914703) [db/memtable_list.cc:722] [default] Level-0 commit table #62: memtable #1 done
Nov 29 03:02:54 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:02:54.914715) EVENT_LOG_v1 {"time_micros": 1764403374914710, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:02:54 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:02:54.914729) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:02:54 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 5774295, prev total WAL file size 5774295, number of live WAL files 2.
Nov 29 03:02:54 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000058.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:02:54 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:02:54.915815) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Nov 29 03:02:54 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:02:54 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [62(3704KB)], [60(9063KB)]
Nov 29 03:02:54 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403374915844, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [62], "files_L6": [60], "score": -1, "input_data_size": 13074459, "oldest_snapshot_seqno": -1}
Nov 29 03:02:54 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #63: 6286 keys, 11162391 bytes, temperature: kUnknown
Nov 29 03:02:54 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403374994581, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 63, "file_size": 11162391, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11119631, "index_size": 25946, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15749, "raw_key_size": 161404, "raw_average_key_size": 25, "raw_value_size": 11005760, "raw_average_value_size": 1750, "num_data_blocks": 1043, "num_entries": 6286, "num_filter_entries": 6286, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764403374, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 63, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:02:54 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:02:54 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:02:54.994845) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 11162391 bytes
Nov 29 03:02:54 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:02:54.996264) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 165.8 rd, 141.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 8.9 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(6.4) write-amplify(2.9) OK, records in: 6810, records dropped: 524 output_compression: NoCompression
Nov 29 03:02:54 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:02:54.996279) EVENT_LOG_v1 {"time_micros": 1764403374996272, "job": 36, "event": "compaction_finished", "compaction_time_micros": 78863, "compaction_time_cpu_micros": 23107, "output_level": 6, "num_output_files": 1, "total_output_size": 11162391, "num_input_records": 6810, "num_output_records": 6286, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:02:54 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:02:54 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403374997113, "job": 36, "event": "table_file_deletion", "file_number": 62}
Nov 29 03:02:54 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000060.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:02:54 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403374998743, "job": 36, "event": "table_file_deletion", "file_number": 60}
Nov 29 03:02:54 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:02:54.915769) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:02:54 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:02:54.998839) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:02:54 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:02:54.998843) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:02:54 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:02:54.998845) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:02:54 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:02:54.998847) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:02:54 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:02:54.998849) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:02:55 np0005539552 nova_compute[233724]: 2025-11-29 08:02:55.906 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:56.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:56.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:57 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:57 np0005539552 nova_compute[233724]: 2025-11-29 08:02:57.242 233728 DEBUG nova.network.neutron [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Updating instance_info_cache with network_info: [{"id": "551e9a57-22c5-4205-89e8-c32eddb64e70", "address": "fa:16:3e:58:6f:67", "network": {"id": "acc66d85-ef8e-407d-9c47-b9f74fa426ba", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-538653727-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e901587fd87545d2b2c4a7872915b1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap551e9a57-22", "ovs_interfaceid": "551e9a57-22c5-4205-89e8-c32eddb64e70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:02:57 np0005539552 nova_compute[233724]: 2025-11-29 08:02:57.264 233728 DEBUG oslo_concurrency.lockutils [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Releasing lock "refresh_cache-81e12fcb-7c4c-4214-b8c5-1a18612dcfe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:02:57 np0005539552 nova_compute[233724]: 2025-11-29 08:02:57.264 233728 DEBUG nova.compute.manager [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Instance network_info: |[{"id": "551e9a57-22c5-4205-89e8-c32eddb64e70", "address": "fa:16:3e:58:6f:67", "network": {"id": "acc66d85-ef8e-407d-9c47-b9f74fa426ba", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-538653727-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e901587fd87545d2b2c4a7872915b1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap551e9a57-22", "ovs_interfaceid": "551e9a57-22c5-4205-89e8-c32eddb64e70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:02:57 np0005539552 nova_compute[233724]: 2025-11-29 08:02:57.265 233728 DEBUG oslo_concurrency.lockutils [req-f81d938d-962c-40ae-94a2-fef358267813 req-34ddda6b-1ee5-4b05-b247-809fcdf9d480 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-81e12fcb-7c4c-4214-b8c5-1a18612dcfe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:02:57 np0005539552 nova_compute[233724]: 2025-11-29 08:02:57.265 233728 DEBUG nova.network.neutron [req-f81d938d-962c-40ae-94a2-fef358267813 req-34ddda6b-1ee5-4b05-b247-809fcdf9d480 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Refreshing network info cache for port 551e9a57-22c5-4205-89e8-c32eddb64e70 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:02:57 np0005539552 nova_compute[233724]: 2025-11-29 08:02:57.267 233728 DEBUG nova.virt.libvirt.driver [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Start _get_guest_xml network_info=[{"id": "551e9a57-22c5-4205-89e8-c32eddb64e70", "address": "fa:16:3e:58:6f:67", "network": {"id": "acc66d85-ef8e-407d-9c47-b9f74fa426ba", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-538653727-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e901587fd87545d2b2c4a7872915b1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap551e9a57-22", "ovs_interfaceid": "551e9a57-22c5-4205-89e8-c32eddb64e70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:02:57 np0005539552 nova_compute[233724]: 2025-11-29 08:02:57.271 233728 WARNING nova.virt.libvirt.driver [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:02:57 np0005539552 nova_compute[233724]: 2025-11-29 08:02:57.279 233728 DEBUG nova.virt.libvirt.host [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:02:57 np0005539552 nova_compute[233724]: 2025-11-29 08:02:57.280 233728 DEBUG nova.virt.libvirt.host [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:02:57 np0005539552 nova_compute[233724]: 2025-11-29 08:02:57.283 233728 DEBUG nova.virt.libvirt.host [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:02:57 np0005539552 nova_compute[233724]: 2025-11-29 08:02:57.284 233728 DEBUG nova.virt.libvirt.host [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:02:57 np0005539552 nova_compute[233724]: 2025-11-29 08:02:57.285 233728 DEBUG nova.virt.libvirt.driver [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:02:57 np0005539552 nova_compute[233724]: 2025-11-29 08:02:57.285 233728 DEBUG nova.virt.hardware [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:02:57 np0005539552 nova_compute[233724]: 2025-11-29 08:02:57.285 233728 DEBUG nova.virt.hardware [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:02:57 np0005539552 nova_compute[233724]: 2025-11-29 08:02:57.285 233728 DEBUG nova.virt.hardware [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:02:57 np0005539552 nova_compute[233724]: 2025-11-29 08:02:57.285 233728 DEBUG nova.virt.hardware [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:02:57 np0005539552 nova_compute[233724]: 2025-11-29 08:02:57.286 233728 DEBUG nova.virt.hardware [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:02:57 np0005539552 nova_compute[233724]: 2025-11-29 08:02:57.286 233728 DEBUG nova.virt.hardware [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:02:57 np0005539552 nova_compute[233724]: 2025-11-29 08:02:57.286 233728 DEBUG nova.virt.hardware [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:02:57 np0005539552 nova_compute[233724]: 2025-11-29 08:02:57.286 233728 DEBUG nova.virt.hardware [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:02:57 np0005539552 nova_compute[233724]: 2025-11-29 08:02:57.286 233728 DEBUG nova.virt.hardware [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:02:57 np0005539552 nova_compute[233724]: 2025-11-29 08:02:57.287 233728 DEBUG nova.virt.hardware [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:02:57 np0005539552 nova_compute[233724]: 2025-11-29 08:02:57.287 233728 DEBUG nova.virt.hardware [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:02:57 np0005539552 nova_compute[233724]: 2025-11-29 08:02:57.289 233728 DEBUG oslo_concurrency.processutils [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:02:57 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:02:57 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1297471131' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:02:57 np0005539552 nova_compute[233724]: 2025-11-29 08:02:57.708 233728 DEBUG oslo_concurrency.processutils [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:02:57 np0005539552 nova_compute[233724]: 2025-11-29 08:02:57.742 233728 DEBUG nova.storage.rbd_utils [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] rbd image 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:02:57 np0005539552 nova_compute[233724]: 2025-11-29 08:02:57.746 233728 DEBUG oslo_concurrency.processutils [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:02:57 np0005539552 nova_compute[233724]: 2025-11-29 08:02:57.854 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403362.852855, fbdbdaa7-d282-4f25-90e1-495d08a6f4fc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:02:57 np0005539552 nova_compute[233724]: 2025-11-29 08:02:57.855 233728 INFO nova.compute.manager [-] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:02:57 np0005539552 nova_compute[233724]: 2025-11-29 08:02:57.896 233728 DEBUG nova.compute.manager [None req-e17b37b8-b040-40b7-a68a-ffab0bae3ee6 - - - - - -] [instance: fbdbdaa7-d282-4f25-90e1-495d08a6f4fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:02:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:02:58 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/965467551' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:02:58 np0005539552 nova_compute[233724]: 2025-11-29 08:02:58.165 233728 DEBUG oslo_concurrency.processutils [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:02:58 np0005539552 nova_compute[233724]: 2025-11-29 08:02:58.166 233728 DEBUG nova.virt.libvirt.vif [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:02:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=41,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBXfnXUQg/8SVHHAlptINDtoKsSKar65f7U2lB7/tTeS26DoL2HfSP0qwoOh8GzjPvaglCUIpTVWlvLX5KACLBdq1hIM3lqVjFvjnBEMKVYrZ6XLLSl5yo+4Avh/c3dNTQ==',key_name='tempest-keypair-298047404',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e901587fd87545d2b2c4a7872915b1fb',ramdisk_id='',reservation_id='r-inj3yv65',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersV294TestFqdnHostnames-36746948',owner_user_name='tempest-ServersV294TestFqdnHostnames-36746948-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:02:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1725cd9cd6474f59b213ec05ccd5c878',uuid=81e12fcb-7c4c-4214-b8c5-1a18612dcfe1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "551e9a57-22c5-4205-89e8-c32eddb64e70", "address": "fa:16:3e:58:6f:67", "network": {"id": "acc66d85-ef8e-407d-9c47-b9f74fa426ba", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-538653727-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e901587fd87545d2b2c4a7872915b1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap551e9a57-22", "ovs_interfaceid": "551e9a57-22c5-4205-89e8-c32eddb64e70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:02:58 np0005539552 nova_compute[233724]: 2025-11-29 08:02:58.166 233728 DEBUG nova.network.os_vif_util [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Converting VIF {"id": "551e9a57-22c5-4205-89e8-c32eddb64e70", "address": "fa:16:3e:58:6f:67", "network": {"id": "acc66d85-ef8e-407d-9c47-b9f74fa426ba", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-538653727-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e901587fd87545d2b2c4a7872915b1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap551e9a57-22", "ovs_interfaceid": "551e9a57-22c5-4205-89e8-c32eddb64e70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:02:58 np0005539552 nova_compute[233724]: 2025-11-29 08:02:58.167 233728 DEBUG nova.network.os_vif_util [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:6f:67,bridge_name='br-int',has_traffic_filtering=True,id=551e9a57-22c5-4205-89e8-c32eddb64e70,network=Network(acc66d85-ef8e-407d-9c47-b9f74fa426ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap551e9a57-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:02:58 np0005539552 nova_compute[233724]: 2025-11-29 08:02:58.170 233728 DEBUG nova.objects.instance [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Lazy-loading 'pci_devices' on Instance uuid 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:02:58 np0005539552 nova_compute[233724]: 2025-11-29 08:02:58.189 233728 DEBUG nova.virt.libvirt.driver [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:02:58 np0005539552 nova_compute[233724]:  <uuid>81e12fcb-7c4c-4214-b8c5-1a18612dcfe1</uuid>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:  <name>instance-00000029</name>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:02:58 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:      <nova:name>guest-instance-1</nova:name>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:02:57</nova:creationTime>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:02:58 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:        <nova:user uuid="1725cd9cd6474f59b213ec05ccd5c878">tempest-ServersV294TestFqdnHostnames-36746948-project-member</nova:user>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:        <nova:project uuid="e901587fd87545d2b2c4a7872915b1fb">tempest-ServersV294TestFqdnHostnames-36746948</nova:project>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:        <nova:port uuid="551e9a57-22c5-4205-89e8-c32eddb64e70">
Nov 29 03:02:58 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:      <entry name="serial">81e12fcb-7c4c-4214-b8c5-1a18612dcfe1</entry>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:      <entry name="uuid">81e12fcb-7c4c-4214-b8c5-1a18612dcfe1</entry>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:02:58 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/81e12fcb-7c4c-4214-b8c5-1a18612dcfe1_disk">
Nov 29 03:02:58 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:02:58 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:02:58 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/81e12fcb-7c4c-4214-b8c5-1a18612dcfe1_disk.config">
Nov 29 03:02:58 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:02:58 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:02:58 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:58:6f:67"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:      <target dev="tap551e9a57-22"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:02:58 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/81e12fcb-7c4c-4214-b8c5-1a18612dcfe1/console.log" append="off"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:02:58 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:02:58 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:02:58 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:02:58 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:02:58 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:02:58 np0005539552 nova_compute[233724]: 2025-11-29 08:02:58.190 233728 DEBUG nova.compute.manager [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Preparing to wait for external event network-vif-plugged-551e9a57-22c5-4205-89e8-c32eddb64e70 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:02:58 np0005539552 nova_compute[233724]: 2025-11-29 08:02:58.190 233728 DEBUG oslo_concurrency.lockutils [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Acquiring lock "81e12fcb-7c4c-4214-b8c5-1a18612dcfe1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:02:58 np0005539552 nova_compute[233724]: 2025-11-29 08:02:58.191 233728 DEBUG oslo_concurrency.lockutils [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Lock "81e12fcb-7c4c-4214-b8c5-1a18612dcfe1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:02:58 np0005539552 nova_compute[233724]: 2025-11-29 08:02:58.191 233728 DEBUG oslo_concurrency.lockutils [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Lock "81e12fcb-7c4c-4214-b8c5-1a18612dcfe1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:02:58 np0005539552 nova_compute[233724]: 2025-11-29 08:02:58.191 233728 DEBUG nova.virt.libvirt.vif [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:02:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=41,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBXfnXUQg/8SVHHAlptINDtoKsSKar65f7U2lB7/tTeS26DoL2HfSP0qwoOh8GzjPvaglCUIpTVWlvLX5KACLBdq1hIM3lqVjFvjnBEMKVYrZ6XLLSl5yo+4Avh/c3dNTQ==',key_name='tempest-keypair-298047404',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e901587fd87545d2b2c4a7872915b1fb',ramdisk_id='',reservation_id='r-inj3yv65',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersV294TestFqdnHostnames-36746948',owner_user_name='tempest-ServersV294TestFqdnHostnames-36746948-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:02:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1725cd9cd6474f59b213ec05ccd5c878',uuid=81e12fcb-7c4c-4214-b8c5-1a18612dcfe1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "551e9a57-22c5-4205-89e8-c32eddb64e70", "address": "fa:16:3e:58:6f:67", "network": {"id": "acc66d85-ef8e-407d-9c47-b9f74fa426ba", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-538653727-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e901587fd87545d2b2c4a7872915b1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap551e9a57-22", "ovs_interfaceid": "551e9a57-22c5-4205-89e8-c32eddb64e70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:02:58 np0005539552 nova_compute[233724]: 2025-11-29 08:02:58.192 233728 DEBUG nova.network.os_vif_util [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Converting VIF {"id": "551e9a57-22c5-4205-89e8-c32eddb64e70", "address": "fa:16:3e:58:6f:67", "network": {"id": "acc66d85-ef8e-407d-9c47-b9f74fa426ba", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-538653727-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e901587fd87545d2b2c4a7872915b1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap551e9a57-22", "ovs_interfaceid": "551e9a57-22c5-4205-89e8-c32eddb64e70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:02:58 np0005539552 nova_compute[233724]: 2025-11-29 08:02:58.192 233728 DEBUG nova.network.os_vif_util [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:6f:67,bridge_name='br-int',has_traffic_filtering=True,id=551e9a57-22c5-4205-89e8-c32eddb64e70,network=Network(acc66d85-ef8e-407d-9c47-b9f74fa426ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap551e9a57-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:02:58 np0005539552 nova_compute[233724]: 2025-11-29 08:02:58.193 233728 DEBUG os_vif [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:6f:67,bridge_name='br-int',has_traffic_filtering=True,id=551e9a57-22c5-4205-89e8-c32eddb64e70,network=Network(acc66d85-ef8e-407d-9c47-b9f74fa426ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap551e9a57-22') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:02:58 np0005539552 nova_compute[233724]: 2025-11-29 08:02:58.193 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:58 np0005539552 nova_compute[233724]: 2025-11-29 08:02:58.194 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:02:58 np0005539552 nova_compute[233724]: 2025-11-29 08:02:58.194 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:02:58 np0005539552 nova_compute[233724]: 2025-11-29 08:02:58.197 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:58 np0005539552 nova_compute[233724]: 2025-11-29 08:02:58.197 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap551e9a57-22, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:02:58 np0005539552 nova_compute[233724]: 2025-11-29 08:02:58.198 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap551e9a57-22, col_values=(('external_ids', {'iface-id': '551e9a57-22c5-4205-89e8-c32eddb64e70', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:58:6f:67', 'vm-uuid': '81e12fcb-7c4c-4214-b8c5-1a18612dcfe1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:02:58 np0005539552 nova_compute[233724]: 2025-11-29 08:02:58.199 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:58 np0005539552 NetworkManager[48926]: <info>  [1764403378.2005] manager: (tap551e9a57-22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/79)
Nov 29 03:02:58 np0005539552 nova_compute[233724]: 2025-11-29 08:02:58.201 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:02:58 np0005539552 nova_compute[233724]: 2025-11-29 08:02:58.205 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:58 np0005539552 nova_compute[233724]: 2025-11-29 08:02:58.206 233728 INFO os_vif [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:6f:67,bridge_name='br-int',has_traffic_filtering=True,id=551e9a57-22c5-4205-89e8-c32eddb64e70,network=Network(acc66d85-ef8e-407d-9c47-b9f74fa426ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap551e9a57-22')#033[00m
Nov 29 03:02:58 np0005539552 nova_compute[233724]: 2025-11-29 08:02:58.269 233728 DEBUG nova.virt.libvirt.driver [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:02:58 np0005539552 nova_compute[233724]: 2025-11-29 08:02:58.270 233728 DEBUG nova.virt.libvirt.driver [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:02:58 np0005539552 nova_compute[233724]: 2025-11-29 08:02:58.270 233728 DEBUG nova.virt.libvirt.driver [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] No VIF found with MAC fa:16:3e:58:6f:67, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:02:58 np0005539552 nova_compute[233724]: 2025-11-29 08:02:58.270 233728 INFO nova.virt.libvirt.driver [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Using config drive#033[00m
Nov 29 03:02:58 np0005539552 nova_compute[233724]: 2025-11-29 08:02:58.295 233728 DEBUG nova.storage.rbd_utils [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] rbd image 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:02:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:58.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:02:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:58.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:59 np0005539552 nova_compute[233724]: 2025-11-29 08:02:59.306 233728 INFO nova.virt.libvirt.driver [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Creating config drive at /var/lib/nova/instances/81e12fcb-7c4c-4214-b8c5-1a18612dcfe1/disk.config#033[00m
Nov 29 03:02:59 np0005539552 nova_compute[233724]: 2025-11-29 08:02:59.313 233728 DEBUG oslo_concurrency.processutils [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/81e12fcb-7c4c-4214-b8c5-1a18612dcfe1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4iiwrkrh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:02:59 np0005539552 nova_compute[233724]: 2025-11-29 08:02:59.444 233728 DEBUG oslo_concurrency.processutils [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/81e12fcb-7c4c-4214-b8c5-1a18612dcfe1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4iiwrkrh" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:02:59 np0005539552 nova_compute[233724]: 2025-11-29 08:02:59.472 233728 DEBUG nova.storage.rbd_utils [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] rbd image 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:02:59 np0005539552 nova_compute[233724]: 2025-11-29 08:02:59.476 233728 DEBUG oslo_concurrency.processutils [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/81e12fcb-7c4c-4214-b8c5-1a18612dcfe1/disk.config 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:02:59 np0005539552 nova_compute[233724]: 2025-11-29 08:02:59.618 233728 DEBUG oslo_concurrency.processutils [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/81e12fcb-7c4c-4214-b8c5-1a18612dcfe1/disk.config 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:02:59 np0005539552 nova_compute[233724]: 2025-11-29 08:02:59.619 233728 INFO nova.virt.libvirt.driver [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Deleting local config drive /var/lib/nova/instances/81e12fcb-7c4c-4214-b8c5-1a18612dcfe1/disk.config because it was imported into RBD.#033[00m
Nov 29 03:02:59 np0005539552 kernel: tap551e9a57-22: entered promiscuous mode
Nov 29 03:02:59 np0005539552 NetworkManager[48926]: <info>  [1764403379.6664] manager: (tap551e9a57-22): new Tun device (/org/freedesktop/NetworkManager/Devices/80)
Nov 29 03:02:59 np0005539552 nova_compute[233724]: 2025-11-29 08:02:59.667 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:59 np0005539552 ovn_controller[133798]: 2025-11-29T08:02:59Z|00145|binding|INFO|Claiming lport 551e9a57-22c5-4205-89e8-c32eddb64e70 for this chassis.
Nov 29 03:02:59 np0005539552 ovn_controller[133798]: 2025-11-29T08:02:59Z|00146|binding|INFO|551e9a57-22c5-4205-89e8-c32eddb64e70: Claiming fa:16:3e:58:6f:67 10.100.0.3
Nov 29 03:02:59 np0005539552 nova_compute[233724]: 2025-11-29 08:02:59.672 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:02:59.681 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:6f:67 10.100.0.3'], port_security=['fa:16:3e:58:6f:67 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '81e12fcb-7c4c-4214-b8c5-1a18612dcfe1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-acc66d85-ef8e-407d-9c47-b9f74fa426ba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e901587fd87545d2b2c4a7872915b1fb', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'eedb5055-9021-42eb-9b17-92a00f71b776', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d31406d9-2e8a-4eee-976d-920afbb6ae70, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=551e9a57-22c5-4205-89e8-c32eddb64e70) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:02:59.682 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 551e9a57-22c5-4205-89e8-c32eddb64e70 in datapath acc66d85-ef8e-407d-9c47-b9f74fa426ba bound to our chassis#033[00m
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:02:59.683 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network acc66d85-ef8e-407d-9c47-b9f74fa426ba#033[00m
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:02:59.696 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b2d544be-d553-4ac6-9124-232df987bae6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:02:59.697 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapacc66d85-e1 in ovnmeta-acc66d85-ef8e-407d-9c47-b9f74fa426ba namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:02:59 np0005539552 systemd-udevd[252614]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:02:59.699 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapacc66d85-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:02:59.699 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[04a44538-16f3-4aec-b708-31674ec8ce73]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:02:59.700 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c9c65dc5-12a9-4170-929a-2ac635e62266]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:59 np0005539552 systemd-machined[196379]: New machine qemu-14-instance-00000029.
Nov 29 03:02:59 np0005539552 NetworkManager[48926]: <info>  [1764403379.7185] device (tap551e9a57-22): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:02:59 np0005539552 NetworkManager[48926]: <info>  [1764403379.7195] device (tap551e9a57-22): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:02:59.717 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[9b91a312-dae7-4d9d-b025-adc5866bcef1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:59 np0005539552 systemd[1]: Started Virtual Machine qemu-14-instance-00000029.
Nov 29 03:02:59 np0005539552 nova_compute[233724]: 2025-11-29 08:02:59.740 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:02:59.744 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[9f00ae46-30e9-4ea7-8dc8-32c5672a1535]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:59 np0005539552 ovn_controller[133798]: 2025-11-29T08:02:59Z|00147|binding|INFO|Setting lport 551e9a57-22c5-4205-89e8-c32eddb64e70 ovn-installed in OVS
Nov 29 03:02:59 np0005539552 ovn_controller[133798]: 2025-11-29T08:02:59Z|00148|binding|INFO|Setting lport 551e9a57-22c5-4205-89e8-c32eddb64e70 up in Southbound
Nov 29 03:02:59 np0005539552 nova_compute[233724]: 2025-11-29 08:02:59.756 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:02:59.772 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[6c56eb2a-64dc-43f8-bdbd-1718d26d4120]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:02:59.778 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a7f30b62-39ad-47d0-95c7-6ffc1c989e2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:59 np0005539552 NetworkManager[48926]: <info>  [1764403379.7790] manager: (tapacc66d85-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/81)
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:02:59.809 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[122bae0a-9705-4833-9c07-8aca008f170d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:02:59.812 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[2c9517e8-4837-46ae-9442-a4f77bde8107]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:59 np0005539552 nova_compute[233724]: 2025-11-29 08:02:59.826 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:59 np0005539552 NetworkManager[48926]: <info>  [1764403379.8334] device (tapacc66d85-e0): carrier: link connected
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:02:59.839 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[d719137f-7024-409c-a6be-e79cfda82666]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:02:59.855 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[41181ee2-1a12-401f-b03e-b33ae524199c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapacc66d85-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:e5:a5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 46], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642552, 'reachable_time': 24773, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252647, 'error': None, 'target': 'ovnmeta-acc66d85-ef8e-407d-9c47-b9f74fa426ba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:02:59.870 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4d3b05aa-1b6b-48c0-95fe-3dcb9c9daa84]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe41:e5a5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642552, 'tstamp': 642552}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252648, 'error': None, 'target': 'ovnmeta-acc66d85-ef8e-407d-9c47-b9f74fa426ba', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:02:59.885 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[83803a35-9a9c-4642-a502-e81482e2b6a7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapacc66d85-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:e5:a5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 46], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642552, 'reachable_time': 24773, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252649, 'error': None, 'target': 'ovnmeta-acc66d85-ef8e-407d-9c47-b9f74fa426ba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:02:59.912 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[70f0fc5d-87ad-48f9-a34b-07810dc6df08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:02:59.959 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e8054df4-0cc6-46cb-a8de-43353754e379]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:02:59.960 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapacc66d85-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:02:59.960 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:02:59.961 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapacc66d85-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:02:59 np0005539552 nova_compute[233724]: 2025-11-29 08:02:59.962 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:59 np0005539552 NetworkManager[48926]: <info>  [1764403379.9645] manager: (tapacc66d85-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/82)
Nov 29 03:02:59 np0005539552 kernel: tapacc66d85-e0: entered promiscuous mode
Nov 29 03:02:59 np0005539552 nova_compute[233724]: 2025-11-29 08:02:59.966 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:02:59.967 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapacc66d85-e0, col_values=(('external_ids', {'iface-id': 'b5a0cd16-22f2-4e64-a74e-3263bc521416'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:02:59 np0005539552 nova_compute[233724]: 2025-11-29 08:02:59.968 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:59 np0005539552 ovn_controller[133798]: 2025-11-29T08:02:59Z|00149|binding|INFO|Releasing lport b5a0cd16-22f2-4e64-a74e-3263bc521416 from this chassis (sb_readonly=0)
Nov 29 03:02:59 np0005539552 nova_compute[233724]: 2025-11-29 08:02:59.981 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:02:59.983 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/acc66d85-ef8e-407d-9c47-b9f74fa426ba.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/acc66d85-ef8e-407d-9c47-b9f74fa426ba.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:02:59.984 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e285bb03-d0b4-4170-b567-9429ae036600]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:02:59.984 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-acc66d85-ef8e-407d-9c47-b9f74fa426ba
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/acc66d85-ef8e-407d-9c47-b9f74fa426ba.pid.haproxy
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID acc66d85-ef8e-407d-9c47-b9f74fa426ba
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:02:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:02:59.985 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-acc66d85-ef8e-407d-9c47-b9f74fa426ba', 'env', 'PROCESS_TAG=haproxy-acc66d85-ef8e-407d-9c47-b9f74fa426ba', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/acc66d85-ef8e-407d-9c47-b9f74fa426ba.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:03:00 np0005539552 podman[252681]: 2025-11-29 08:03:00.379824512 +0000 UTC m=+0.099753319 container create dd3d1cd7d87c88fd3268c8e5602ebf6829aba7bdea1419aac4a65f5542460a15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acc66d85-ef8e-407d-9c47-b9f74fa426ba, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 03:03:00 np0005539552 podman[252681]: 2025-11-29 08:03:00.300485248 +0000 UTC m=+0.020414085 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:03:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:00.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:00 np0005539552 systemd[1]: Started libpod-conmon-dd3d1cd7d87c88fd3268c8e5602ebf6829aba7bdea1419aac4a65f5542460a15.scope.
Nov 29 03:03:00 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:03:00 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3484d8e45027f3b6c392061be821270d8d64e91a2c4fd493bca355de2d4d22d0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:03:00 np0005539552 podman[252681]: 2025-11-29 08:03:00.466986269 +0000 UTC m=+0.186915106 container init dd3d1cd7d87c88fd3268c8e5602ebf6829aba7bdea1419aac4a65f5542460a15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acc66d85-ef8e-407d-9c47-b9f74fa426ba, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 03:03:00 np0005539552 podman[252681]: 2025-11-29 08:03:00.472264942 +0000 UTC m=+0.192193759 container start dd3d1cd7d87c88fd3268c8e5602ebf6829aba7bdea1419aac4a65f5542460a15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acc66d85-ef8e-407d-9c47-b9f74fa426ba, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:03:00 np0005539552 neutron-haproxy-ovnmeta-acc66d85-ef8e-407d-9c47-b9f74fa426ba[252696]: [NOTICE]   (252700) : New worker (252702) forked
Nov 29 03:03:00 np0005539552 neutron-haproxy-ovnmeta-acc66d85-ef8e-407d-9c47-b9f74fa426ba[252696]: [NOTICE]   (252700) : Loading success.
Nov 29 03:03:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:00.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:00 np0005539552 nova_compute[233724]: 2025-11-29 08:03:00.919 233728 DEBUG nova.compute.manager [req-41499674-dd7d-42ee-8062-74d200ddd80e req-e44e0c83-eb07-4b83-a89c-76455dd9b766 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Received event network-vif-plugged-551e9a57-22c5-4205-89e8-c32eddb64e70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:03:00 np0005539552 nova_compute[233724]: 2025-11-29 08:03:00.920 233728 DEBUG oslo_concurrency.lockutils [req-41499674-dd7d-42ee-8062-74d200ddd80e req-e44e0c83-eb07-4b83-a89c-76455dd9b766 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "81e12fcb-7c4c-4214-b8c5-1a18612dcfe1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:00 np0005539552 nova_compute[233724]: 2025-11-29 08:03:00.921 233728 DEBUG oslo_concurrency.lockutils [req-41499674-dd7d-42ee-8062-74d200ddd80e req-e44e0c83-eb07-4b83-a89c-76455dd9b766 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "81e12fcb-7c4c-4214-b8c5-1a18612dcfe1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:00 np0005539552 nova_compute[233724]: 2025-11-29 08:03:00.921 233728 DEBUG oslo_concurrency.lockutils [req-41499674-dd7d-42ee-8062-74d200ddd80e req-e44e0c83-eb07-4b83-a89c-76455dd9b766 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "81e12fcb-7c4c-4214-b8c5-1a18612dcfe1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:00 np0005539552 nova_compute[233724]: 2025-11-29 08:03:00.921 233728 DEBUG nova.compute.manager [req-41499674-dd7d-42ee-8062-74d200ddd80e req-e44e0c83-eb07-4b83-a89c-76455dd9b766 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Processing event network-vif-plugged-551e9a57-22c5-4205-89e8-c32eddb64e70 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:03:01 np0005539552 nova_compute[233724]: 2025-11-29 08:03:01.218 233728 DEBUG nova.network.neutron [req-f81d938d-962c-40ae-94a2-fef358267813 req-34ddda6b-1ee5-4b05-b247-809fcdf9d480 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Updated VIF entry in instance network info cache for port 551e9a57-22c5-4205-89e8-c32eddb64e70. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:03:01 np0005539552 nova_compute[233724]: 2025-11-29 08:03:01.219 233728 DEBUG nova.network.neutron [req-f81d938d-962c-40ae-94a2-fef358267813 req-34ddda6b-1ee5-4b05-b247-809fcdf9d480 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Updating instance_info_cache with network_info: [{"id": "551e9a57-22c5-4205-89e8-c32eddb64e70", "address": "fa:16:3e:58:6f:67", "network": {"id": "acc66d85-ef8e-407d-9c47-b9f74fa426ba", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-538653727-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e901587fd87545d2b2c4a7872915b1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap551e9a57-22", "ovs_interfaceid": "551e9a57-22c5-4205-89e8-c32eddb64e70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:03:01 np0005539552 nova_compute[233724]: 2025-11-29 08:03:01.247 233728 DEBUG oslo_concurrency.lockutils [req-f81d938d-962c-40ae-94a2-fef358267813 req-34ddda6b-1ee5-4b05-b247-809fcdf9d480 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-81e12fcb-7c4c-4214-b8c5-1a18612dcfe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:03:01 np0005539552 nova_compute[233724]: 2025-11-29 08:03:01.516 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403381.5160744, 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:03:01 np0005539552 nova_compute[233724]: 2025-11-29 08:03:01.517 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] VM Started (Lifecycle Event)#033[00m
Nov 29 03:03:01 np0005539552 nova_compute[233724]: 2025-11-29 08:03:01.521 233728 DEBUG nova.compute.manager [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:03:01 np0005539552 nova_compute[233724]: 2025-11-29 08:03:01.526 233728 DEBUG nova.virt.libvirt.driver [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:03:01 np0005539552 nova_compute[233724]: 2025-11-29 08:03:01.530 233728 INFO nova.virt.libvirt.driver [-] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Instance spawned successfully.
Nov 29 03:03:01 np0005539552 nova_compute[233724]: 2025-11-29 08:03:01.531 233728 DEBUG nova.virt.libvirt.driver [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:03:01 np0005539552 nova_compute[233724]: 2025-11-29 08:03:01.547 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:03:01 np0005539552 nova_compute[233724]: 2025-11-29 08:03:01.554 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:03:01 np0005539552 nova_compute[233724]: 2025-11-29 08:03:01.557 233728 DEBUG nova.virt.libvirt.driver [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:03:01 np0005539552 nova_compute[233724]: 2025-11-29 08:03:01.558 233728 DEBUG nova.virt.libvirt.driver [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:03:01 np0005539552 nova_compute[233724]: 2025-11-29 08:03:01.558 233728 DEBUG nova.virt.libvirt.driver [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:03:01 np0005539552 nova_compute[233724]: 2025-11-29 08:03:01.558 233728 DEBUG nova.virt.libvirt.driver [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:03:01 np0005539552 nova_compute[233724]: 2025-11-29 08:03:01.559 233728 DEBUG nova.virt.libvirt.driver [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:03:01 np0005539552 nova_compute[233724]: 2025-11-29 08:03:01.559 233728 DEBUG nova.virt.libvirt.driver [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:03:01 np0005539552 nova_compute[233724]: 2025-11-29 08:03:01.586 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:03:01 np0005539552 nova_compute[233724]: 2025-11-29 08:03:01.587 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403381.5166025, 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:03:01 np0005539552 nova_compute[233724]: 2025-11-29 08:03:01.587 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] VM Paused (Lifecycle Event)
Nov 29 03:03:01 np0005539552 nova_compute[233724]: 2025-11-29 08:03:01.619 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:03:01 np0005539552 nova_compute[233724]: 2025-11-29 08:03:01.623 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403381.5250862, 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:03:01 np0005539552 nova_compute[233724]: 2025-11-29 08:03:01.623 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] VM Resumed (Lifecycle Event)
Nov 29 03:03:01 np0005539552 nova_compute[233724]: 2025-11-29 08:03:01.640 233728 INFO nova.compute.manager [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Took 11.74 seconds to spawn the instance on the hypervisor.
Nov 29 03:03:01 np0005539552 nova_compute[233724]: 2025-11-29 08:03:01.640 233728 DEBUG nova.compute.manager [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:03:01 np0005539552 nova_compute[233724]: 2025-11-29 08:03:01.647 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:03:01 np0005539552 nova_compute[233724]: 2025-11-29 08:03:01.649 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:03:01 np0005539552 nova_compute[233724]: 2025-11-29 08:03:01.716 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:03:01 np0005539552 nova_compute[233724]: 2025-11-29 08:03:01.824 233728 INFO nova.compute.manager [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Took 12.87 seconds to build instance.
Nov 29 03:03:01 np0005539552 nova_compute[233724]: 2025-11-29 08:03:01.909 233728 DEBUG oslo_concurrency.lockutils [None req-d8b8d8cf-f56c-4097-b9a1-6fc624643b3e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Lock "81e12fcb-7c4c-4214-b8c5-1a18612dcfe1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:03:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:02.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:02.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:03 np0005539552 nova_compute[233724]: 2025-11-29 08:03:03.084 233728 DEBUG nova.compute.manager [req-099dcd18-c65f-4073-980a-4784cb7815aa req-fe1a4107-8dca-4eac-ac10-95627834320d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Received event network-vif-plugged-551e9a57-22c5-4205-89e8-c32eddb64e70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:03:03 np0005539552 nova_compute[233724]: 2025-11-29 08:03:03.085 233728 DEBUG oslo_concurrency.lockutils [req-099dcd18-c65f-4073-980a-4784cb7815aa req-fe1a4107-8dca-4eac-ac10-95627834320d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "81e12fcb-7c4c-4214-b8c5-1a18612dcfe1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:03:03 np0005539552 nova_compute[233724]: 2025-11-29 08:03:03.085 233728 DEBUG oslo_concurrency.lockutils [req-099dcd18-c65f-4073-980a-4784cb7815aa req-fe1a4107-8dca-4eac-ac10-95627834320d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "81e12fcb-7c4c-4214-b8c5-1a18612dcfe1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:03:03 np0005539552 nova_compute[233724]: 2025-11-29 08:03:03.085 233728 DEBUG oslo_concurrency.lockutils [req-099dcd18-c65f-4073-980a-4784cb7815aa req-fe1a4107-8dca-4eac-ac10-95627834320d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "81e12fcb-7c4c-4214-b8c5-1a18612dcfe1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:03:03 np0005539552 nova_compute[233724]: 2025-11-29 08:03:03.085 233728 DEBUG nova.compute.manager [req-099dcd18-c65f-4073-980a-4784cb7815aa req-fe1a4107-8dca-4eac-ac10-95627834320d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] No waiting events found dispatching network-vif-plugged-551e9a57-22c5-4205-89e8-c32eddb64e70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:03:03 np0005539552 nova_compute[233724]: 2025-11-29 08:03:03.086 233728 WARNING nova.compute.manager [req-099dcd18-c65f-4073-980a-4784cb7815aa req-fe1a4107-8dca-4eac-ac10-95627834320d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Received unexpected event network-vif-plugged-551e9a57-22c5-4205-89e8-c32eddb64e70 for instance with vm_state active and task_state None.
Nov 29 03:03:03 np0005539552 nova_compute[233724]: 2025-11-29 08:03:03.235 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:04.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:04.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:04 np0005539552 NetworkManager[48926]: <info>  [1764403384.7965] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/83)
Nov 29 03:03:04 np0005539552 NetworkManager[48926]: <info>  [1764403384.7974] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/84)
Nov 29 03:03:04 np0005539552 nova_compute[233724]: 2025-11-29 08:03:04.798 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:04 np0005539552 nova_compute[233724]: 2025-11-29 08:03:04.943 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:04 np0005539552 nova_compute[233724]: 2025-11-29 08:03:04.945 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:04 np0005539552 ovn_controller[133798]: 2025-11-29T08:03:04Z|00150|binding|INFO|Releasing lport b5a0cd16-22f2-4e64-a74e-3263bc521416 from this chassis (sb_readonly=0)
Nov 29 03:03:04 np0005539552 nova_compute[233724]: 2025-11-29 08:03:04.961 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:05 np0005539552 nova_compute[233724]: 2025-11-29 08:03:05.557 233728 DEBUG nova.compute.manager [req-6a6d5192-f511-4d87-a378-8a740ff2e6d8 req-2eb9a223-fa4f-4b2d-8193-50531a490132 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Received event network-changed-551e9a57-22c5-4205-89e8-c32eddb64e70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:03:05 np0005539552 nova_compute[233724]: 2025-11-29 08:03:05.559 233728 DEBUG nova.compute.manager [req-6a6d5192-f511-4d87-a378-8a740ff2e6d8 req-2eb9a223-fa4f-4b2d-8193-50531a490132 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Refreshing instance network info cache due to event network-changed-551e9a57-22c5-4205-89e8-c32eddb64e70. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:03:05 np0005539552 nova_compute[233724]: 2025-11-29 08:03:05.559 233728 DEBUG oslo_concurrency.lockutils [req-6a6d5192-f511-4d87-a378-8a740ff2e6d8 req-2eb9a223-fa4f-4b2d-8193-50531a490132 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-81e12fcb-7c4c-4214-b8c5-1a18612dcfe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:03:05 np0005539552 nova_compute[233724]: 2025-11-29 08:03:05.559 233728 DEBUG oslo_concurrency.lockutils [req-6a6d5192-f511-4d87-a378-8a740ff2e6d8 req-2eb9a223-fa4f-4b2d-8193-50531a490132 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-81e12fcb-7c4c-4214-b8c5-1a18612dcfe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:03:05 np0005539552 nova_compute[233724]: 2025-11-29 08:03:05.560 233728 DEBUG nova.network.neutron [req-6a6d5192-f511-4d87-a378-8a740ff2e6d8 req-2eb9a223-fa4f-4b2d-8193-50531a490132 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Refreshing network info cache for port 551e9a57-22c5-4205-89e8-c32eddb64e70 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:03:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:06.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:06.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:07 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:07 np0005539552 nova_compute[233724]: 2025-11-29 08:03:07.789 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:07 np0005539552 nova_compute[233724]: 2025-11-29 08:03:07.861 233728 DEBUG nova.network.neutron [req-6a6d5192-f511-4d87-a378-8a740ff2e6d8 req-2eb9a223-fa4f-4b2d-8193-50531a490132 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Updated VIF entry in instance network info cache for port 551e9a57-22c5-4205-89e8-c32eddb64e70. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:03:07 np0005539552 nova_compute[233724]: 2025-11-29 08:03:07.862 233728 DEBUG nova.network.neutron [req-6a6d5192-f511-4d87-a378-8a740ff2e6d8 req-2eb9a223-fa4f-4b2d-8193-50531a490132 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Updating instance_info_cache with network_info: [{"id": "551e9a57-22c5-4205-89e8-c32eddb64e70", "address": "fa:16:3e:58:6f:67", "network": {"id": "acc66d85-ef8e-407d-9c47-b9f74fa426ba", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-538653727-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e901587fd87545d2b2c4a7872915b1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap551e9a57-22", "ovs_interfaceid": "551e9a57-22c5-4205-89e8-c32eddb64e70", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:03:07 np0005539552 nova_compute[233724]: 2025-11-29 08:03:07.901 233728 DEBUG oslo_concurrency.lockutils [req-6a6d5192-f511-4d87-a378-8a740ff2e6d8 req-2eb9a223-fa4f-4b2d-8193-50531a490132 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-81e12fcb-7c4c-4214-b8c5-1a18612dcfe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:03:08 np0005539552 nova_compute[233724]: 2025-11-29 08:03:08.237 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:08.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:08.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:09 np0005539552 nova_compute[233724]: 2025-11-29 08:03:09.948 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:10.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:10.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:10 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:03:10 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/31704962' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:03:10 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:03:10 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/31704962' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:03:10 np0005539552 podman[252811]: 2025-11-29 08:03:10.965603496 +0000 UTC m=+0.053088192 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:03:10 np0005539552 podman[252810]: 2025-11-29 08:03:10.998375576 +0000 UTC m=+0.086470939 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd)
Nov 29 03:03:11 np0005539552 podman[252812]: 2025-11-29 08:03:11.044982701 +0000 UTC m=+0.125104727 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:03:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:12.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:12.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:13 np0005539552 nova_compute[233724]: 2025-11-29 08:03:13.296 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:13 np0005539552 ovn_controller[133798]: 2025-11-29T08:03:13Z|00151|binding|INFO|Releasing lport b5a0cd16-22f2-4e64-a74e-3263bc521416 from this chassis (sb_readonly=0)
Nov 29 03:03:13 np0005539552 nova_compute[233724]: 2025-11-29 08:03:13.375 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:14.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:14 np0005539552 ovn_controller[133798]: 2025-11-29T08:03:14Z|00152|binding|INFO|Releasing lport b5a0cd16-22f2-4e64-a74e-3263bc521416 from this chassis (sb_readonly=0)
Nov 29 03:03:14 np0005539552 nova_compute[233724]: 2025-11-29 08:03:14.485 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:14.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:14 np0005539552 nova_compute[233724]: 2025-11-29 08:03:14.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:03:14 np0005539552 nova_compute[233724]: 2025-11-29 08:03:14.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:03:14 np0005539552 nova_compute[233724]: 2025-11-29 08:03:14.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:03:14 np0005539552 nova_compute[233724]: 2025-11-29 08:03:14.948 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:14 np0005539552 nova_compute[233724]: 2025-11-29 08:03:14.953 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:03:14 np0005539552 nova_compute[233724]: 2025-11-29 08:03:14.954 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:03:14 np0005539552 nova_compute[233724]: 2025-11-29 08:03:14.954 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:03:14 np0005539552 nova_compute[233724]: 2025-11-29 08:03:14.954 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 03:03:14 np0005539552 nova_compute[233724]: 2025-11-29 08:03:14.954 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:15 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:03:15 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1612566880' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:03:15 np0005539552 nova_compute[233724]: 2025-11-29 08:03:15.402 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:03:15 np0005539552 nova_compute[233724]: 2025-11-29 08:03:15.470 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000029 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:03:15 np0005539552 nova_compute[233724]: 2025-11-29 08:03:15.471 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000029 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:03:15 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:03:15 np0005539552 nova_compute[233724]: 2025-11-29 08:03:15.645 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:03:15 np0005539552 nova_compute[233724]: 2025-11-29 08:03:15.647 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4607MB free_disk=20.946460723876953GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:03:15 np0005539552 nova_compute[233724]: 2025-11-29 08:03:15.647 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:15 np0005539552 nova_compute[233724]: 2025-11-29 08:03:15.647 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:15 np0005539552 nova_compute[233724]: 2025-11-29 08:03:15.745 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:03:15 np0005539552 nova_compute[233724]: 2025-11-29 08:03:15.746 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:03:15 np0005539552 nova_compute[233724]: 2025-11-29 08:03:15.746 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:03:15 np0005539552 nova_compute[233724]: 2025-11-29 08:03:15.801 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:16 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:03:16 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1959316483' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:03:16 np0005539552 nova_compute[233724]: 2025-11-29 08:03:16.269 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:03:16 np0005539552 nova_compute[233724]: 2025-11-29 08:03:16.276 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:03:16 np0005539552 nova_compute[233724]: 2025-11-29 08:03:16.295 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:03:16 np0005539552 nova_compute[233724]: 2025-11-29 08:03:16.326 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:03:16 np0005539552 nova_compute[233724]: 2025-11-29 08:03:16.327 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:16.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:16.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:16 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:03:16 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:03:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:17 np0005539552 nova_compute[233724]: 2025-11-29 08:03:17.327 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:03:17 np0005539552 nova_compute[233724]: 2025-11-29 08:03:17.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:03:17 np0005539552 nova_compute[233724]: 2025-11-29 08:03:17.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:03:17 np0005539552 nova_compute[233724]: 2025-11-29 08:03:17.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:03:17 np0005539552 nova_compute[233724]: 2025-11-29 08:03:17.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:03:18 np0005539552 ovn_controller[133798]: 2025-11-29T08:03:18Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:58:6f:67 10.100.0.3
Nov 29 03:03:18 np0005539552 ovn_controller[133798]: 2025-11-29T08:03:18Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:58:6f:67 10.100.0.3
Nov 29 03:03:18 np0005539552 ovn_controller[133798]: 2025-11-29T08:03:18Z|00153|binding|INFO|Releasing lport b5a0cd16-22f2-4e64-a74e-3263bc521416 from this chassis (sb_readonly=0)
Nov 29 03:03:18 np0005539552 nova_compute[233724]: 2025-11-29 08:03:18.299 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:18 np0005539552 nova_compute[233724]: 2025-11-29 08:03:18.328 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:18.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:18.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:18 np0005539552 nova_compute[233724]: 2025-11-29 08:03:18.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:03:18 np0005539552 nova_compute[233724]: 2025-11-29 08:03:18.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:03:18 np0005539552 nova_compute[233724]: 2025-11-29 08:03:18.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:03:19 np0005539552 nova_compute[233724]: 2025-11-29 08:03:19.146 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "refresh_cache-81e12fcb-7c4c-4214-b8c5-1a18612dcfe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:03:19 np0005539552 nova_compute[233724]: 2025-11-29 08:03:19.146 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquired lock "refresh_cache-81e12fcb-7c4c-4214-b8c5-1a18612dcfe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:03:19 np0005539552 nova_compute[233724]: 2025-11-29 08:03:19.146 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:03:19 np0005539552 nova_compute[233724]: 2025-11-29 08:03:19.147 233728 DEBUG nova.objects.instance [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:03:19 np0005539552 nova_compute[233724]: 2025-11-29 08:03:19.950 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:20.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:20 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e183 e183: 3 total, 3 up, 3 in
Nov 29 03:03:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:20.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:03:20.611 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:03:20.612 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:03:20.612 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:20 np0005539552 nova_compute[233724]: 2025-11-29 08:03:20.829 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Updating instance_info_cache with network_info: [{"id": "551e9a57-22c5-4205-89e8-c32eddb64e70", "address": "fa:16:3e:58:6f:67", "network": {"id": "acc66d85-ef8e-407d-9c47-b9f74fa426ba", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-538653727-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e901587fd87545d2b2c4a7872915b1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap551e9a57-22", "ovs_interfaceid": "551e9a57-22c5-4205-89e8-c32eddb64e70", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:03:20 np0005539552 nova_compute[233724]: 2025-11-29 08:03:20.848 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Releasing lock "refresh_cache-81e12fcb-7c4c-4214-b8c5-1a18612dcfe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:03:20 np0005539552 nova_compute[233724]: 2025-11-29 08:03:20.849 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:03:21 np0005539552 nova_compute[233724]: 2025-11-29 08:03:21.843 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:03:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 03:03:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:22.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 03:03:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:22.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:03:23.219 143505 DEBUG eventlet.wsgi.server [-] (143505) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Nov 29 03:03:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:03:23.221 143505 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /openstack/latest/meta_data.json HTTP/1.0#015
Nov 29 03:03:23 np0005539552 ovn_metadata_agent[143394]: Accept: */*#015
Nov 29 03:03:23 np0005539552 ovn_metadata_agent[143394]: Connection: close#015
Nov 29 03:03:23 np0005539552 ovn_metadata_agent[143394]: Content-Type: text/plain#015
Nov 29 03:03:23 np0005539552 ovn_metadata_agent[143394]: Host: 169.254.169.254#015
Nov 29 03:03:23 np0005539552 ovn_metadata_agent[143394]: User-Agent: curl/7.84.0#015
Nov 29 03:03:23 np0005539552 ovn_metadata_agent[143394]: X-Forwarded-For: 10.100.0.3#015
Nov 29 03:03:23 np0005539552 ovn_metadata_agent[143394]: X-Ovn-Network-Id: acc66d85-ef8e-407d-9c47-b9f74fa426ba __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Nov 29 03:03:23 np0005539552 nova_compute[233724]: 2025-11-29 08:03:23.302 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:24.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:03:24.504 143505 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Nov 29 03:03:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:03:24.505 143505 INFO eventlet.wsgi.server [-] 10.100.0.3,<local> "GET /openstack/latest/meta_data.json HTTP/1.1" status: 200  len: 1671 time: 1.2845311#033[00m
Nov 29 03:03:24 np0005539552 haproxy-metadata-proxy-acc66d85-ef8e-407d-9c47-b9f74fa426ba[252702]: 10.100.0.3:40272 [29/Nov/2025:08:03:23.218] listener listener/metadata 0/0/0/1287/1287 200 1655 - - ---- 1/1/0/0/0 0/0 "GET /openstack/latest/meta_data.json HTTP/1.1"
Nov 29 03:03:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:24.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:24 np0005539552 nova_compute[233724]: 2025-11-29 08:03:24.721 233728 DEBUG oslo_concurrency.lockutils [None req-006729dd-70af-4550-bafd-64a396f57d7e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Acquiring lock "81e12fcb-7c4c-4214-b8c5-1a18612dcfe1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:24 np0005539552 nova_compute[233724]: 2025-11-29 08:03:24.721 233728 DEBUG oslo_concurrency.lockutils [None req-006729dd-70af-4550-bafd-64a396f57d7e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Lock "81e12fcb-7c4c-4214-b8c5-1a18612dcfe1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:24 np0005539552 nova_compute[233724]: 2025-11-29 08:03:24.722 233728 DEBUG oslo_concurrency.lockutils [None req-006729dd-70af-4550-bafd-64a396f57d7e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Acquiring lock "81e12fcb-7c4c-4214-b8c5-1a18612dcfe1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:24 np0005539552 nova_compute[233724]: 2025-11-29 08:03:24.722 233728 DEBUG oslo_concurrency.lockutils [None req-006729dd-70af-4550-bafd-64a396f57d7e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Lock "81e12fcb-7c4c-4214-b8c5-1a18612dcfe1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:24 np0005539552 nova_compute[233724]: 2025-11-29 08:03:24.722 233728 DEBUG oslo_concurrency.lockutils [None req-006729dd-70af-4550-bafd-64a396f57d7e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Lock "81e12fcb-7c4c-4214-b8c5-1a18612dcfe1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:24 np0005539552 nova_compute[233724]: 2025-11-29 08:03:24.723 233728 INFO nova.compute.manager [None req-006729dd-70af-4550-bafd-64a396f57d7e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Terminating instance#033[00m
Nov 29 03:03:24 np0005539552 nova_compute[233724]: 2025-11-29 08:03:24.724 233728 DEBUG nova.compute.manager [None req-006729dd-70af-4550-bafd-64a396f57d7e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:03:24 np0005539552 kernel: tap551e9a57-22 (unregistering): left promiscuous mode
Nov 29 03:03:24 np0005539552 NetworkManager[48926]: <info>  [1764403404.8799] device (tap551e9a57-22): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:03:24 np0005539552 ovn_controller[133798]: 2025-11-29T08:03:24Z|00154|binding|INFO|Releasing lport 551e9a57-22c5-4205-89e8-c32eddb64e70 from this chassis (sb_readonly=0)
Nov 29 03:03:24 np0005539552 ovn_controller[133798]: 2025-11-29T08:03:24Z|00155|binding|INFO|Setting lport 551e9a57-22c5-4205-89e8-c32eddb64e70 down in Southbound
Nov 29 03:03:24 np0005539552 ovn_controller[133798]: 2025-11-29T08:03:24Z|00156|binding|INFO|Removing iface tap551e9a57-22 ovn-installed in OVS
Nov 29 03:03:24 np0005539552 nova_compute[233724]: 2025-11-29 08:03:24.888 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:24 np0005539552 nova_compute[233724]: 2025-11-29 08:03:24.891 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:03:24.895 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:6f:67 10.100.0.3'], port_security=['fa:16:3e:58:6f:67 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '81e12fcb-7c4c-4214-b8c5-1a18612dcfe1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-acc66d85-ef8e-407d-9c47-b9f74fa426ba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e901587fd87545d2b2c4a7872915b1fb', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'eedb5055-9021-42eb-9b17-92a00f71b776', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.193'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d31406d9-2e8a-4eee-976d-920afbb6ae70, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=551e9a57-22c5-4205-89e8-c32eddb64e70) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:03:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:03:24.896 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 551e9a57-22c5-4205-89e8-c32eddb64e70 in datapath acc66d85-ef8e-407d-9c47-b9f74fa426ba unbound from our chassis#033[00m
Nov 29 03:03:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:03:24.897 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network acc66d85-ef8e-407d-9c47-b9f74fa426ba, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:03:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:03:24.898 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[105f6779-9102-42ba-bf29-be1898a30ec2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:03:24.899 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-acc66d85-ef8e-407d-9c47-b9f74fa426ba namespace which is not needed anymore#033[00m
Nov 29 03:03:24 np0005539552 nova_compute[233724]: 2025-11-29 08:03:24.908 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:24 np0005539552 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d00000029.scope: Deactivated successfully.
Nov 29 03:03:24 np0005539552 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d00000029.scope: Consumed 14.561s CPU time.
Nov 29 03:03:24 np0005539552 systemd-machined[196379]: Machine qemu-14-instance-00000029 terminated.
Nov 29 03:03:24 np0005539552 nova_compute[233724]: 2025-11-29 08:03:24.952 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:25 np0005539552 neutron-haproxy-ovnmeta-acc66d85-ef8e-407d-9c47-b9f74fa426ba[252696]: [NOTICE]   (252700) : haproxy version is 2.8.14-c23fe91
Nov 29 03:03:25 np0005539552 neutron-haproxy-ovnmeta-acc66d85-ef8e-407d-9c47-b9f74fa426ba[252696]: [NOTICE]   (252700) : path to executable is /usr/sbin/haproxy
Nov 29 03:03:25 np0005539552 neutron-haproxy-ovnmeta-acc66d85-ef8e-407d-9c47-b9f74fa426ba[252696]: [WARNING]  (252700) : Exiting Master process...
Nov 29 03:03:25 np0005539552 neutron-haproxy-ovnmeta-acc66d85-ef8e-407d-9c47-b9f74fa426ba[252696]: [ALERT]    (252700) : Current worker (252702) exited with code 143 (Terminated)
Nov 29 03:03:25 np0005539552 neutron-haproxy-ovnmeta-acc66d85-ef8e-407d-9c47-b9f74fa426ba[252696]: [WARNING]  (252700) : All workers exited. Exiting... (0)
Nov 29 03:03:25 np0005539552 systemd[1]: libpod-dd3d1cd7d87c88fd3268c8e5602ebf6829aba7bdea1419aac4a65f5542460a15.scope: Deactivated successfully.
Nov 29 03:03:25 np0005539552 podman[253086]: 2025-11-29 08:03:25.034253322 +0000 UTC m=+0.047134530 container died dd3d1cd7d87c88fd3268c8e5602ebf6829aba7bdea1419aac4a65f5542460a15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acc66d85-ef8e-407d-9c47-b9f74fa426ba, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:03:25 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dd3d1cd7d87c88fd3268c8e5602ebf6829aba7bdea1419aac4a65f5542460a15-userdata-shm.mount: Deactivated successfully.
Nov 29 03:03:25 np0005539552 systemd[1]: var-lib-containers-storage-overlay-3484d8e45027f3b6c392061be821270d8d64e91a2c4fd493bca355de2d4d22d0-merged.mount: Deactivated successfully.
Nov 29 03:03:25 np0005539552 podman[253086]: 2025-11-29 08:03:25.070388053 +0000 UTC m=+0.083269261 container cleanup dd3d1cd7d87c88fd3268c8e5602ebf6829aba7bdea1419aac4a65f5542460a15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acc66d85-ef8e-407d-9c47-b9f74fa426ba, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:03:25 np0005539552 systemd[1]: libpod-conmon-dd3d1cd7d87c88fd3268c8e5602ebf6829aba7bdea1419aac4a65f5542460a15.scope: Deactivated successfully.
Nov 29 03:03:25 np0005539552 podman[253118]: 2025-11-29 08:03:25.127518014 +0000 UTC m=+0.038344502 container remove dd3d1cd7d87c88fd3268c8e5602ebf6829aba7bdea1419aac4a65f5542460a15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-acc66d85-ef8e-407d-9c47-b9f74fa426ba, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Nov 29 03:03:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:03:25.136 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[8d19f7bf-317f-4a9f-9cb6-426b8925d7b7]: (4, ('Sat Nov 29 08:03:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-acc66d85-ef8e-407d-9c47-b9f74fa426ba (dd3d1cd7d87c88fd3268c8e5602ebf6829aba7bdea1419aac4a65f5542460a15)\ndd3d1cd7d87c88fd3268c8e5602ebf6829aba7bdea1419aac4a65f5542460a15\nSat Nov 29 08:03:25 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-acc66d85-ef8e-407d-9c47-b9f74fa426ba (dd3d1cd7d87c88fd3268c8e5602ebf6829aba7bdea1419aac4a65f5542460a15)\ndd3d1cd7d87c88fd3268c8e5602ebf6829aba7bdea1419aac4a65f5542460a15\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:03:25.139 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[80f0cc20-d69f-4782-85ad-a3d430c56b1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:03:25.140 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapacc66d85-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:03:25 np0005539552 nova_compute[233724]: 2025-11-29 08:03:25.141 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:25 np0005539552 kernel: tapacc66d85-e0: left promiscuous mode
Nov 29 03:03:25 np0005539552 nova_compute[233724]: 2025-11-29 08:03:25.161 233728 INFO nova.virt.libvirt.driver [-] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Instance destroyed successfully.#033[00m
Nov 29 03:03:25 np0005539552 nova_compute[233724]: 2025-11-29 08:03:25.161 233728 DEBUG nova.objects.instance [None req-006729dd-70af-4550-bafd-64a396f57d7e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Lazy-loading 'resources' on Instance uuid 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:03:25 np0005539552 nova_compute[233724]: 2025-11-29 08:03:25.162 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:03:25.164 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f9223a07-c75a-4d48-91c8-bac21c02082c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:03:25.176 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f02c2ac5-cf2b-4523-8410-a12d3a13398a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:03:25.177 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6515685b-5757-4397-a26f-c70edf77439c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:25 np0005539552 nova_compute[233724]: 2025-11-29 08:03:25.189 233728 DEBUG nova.virt.libvirt.vif [None req-006729dd-70af-4550-bafd-64a396f57d7e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:02:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=41,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBXfnXUQg/8SVHHAlptINDtoKsSKar65f7U2lB7/tTeS26DoL2HfSP0qwoOh8GzjPvaglCUIpTVWlvLX5KACLBdq1hIM3lqVjFvjnBEMKVYrZ6XLLSl5yo+4Avh/c3dNTQ==',key_name='tempest-keypair-298047404',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:03:01Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e901587fd87545d2b2c4a7872915b1fb',ramdisk_id='',reservation_id='r-inj3yv65',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersV294TestFqdnHostnames-36746948',owner_user_name='tempest-ServersV294TestFqdnHostnames-36746948-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:03:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1725cd9cd6474f59b213ec05ccd5c878',uuid=81e12fcb-7c4c-4214-b8c5-1a18612dcfe1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "551e9a57-22c5-4205-89e8-c32eddb64e70", "address": "fa:16:3e:58:6f:67", "network": {"id": "acc66d85-ef8e-407d-9c47-b9f74fa426ba", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-538653727-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e901587fd87545d2b2c4a7872915b1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap551e9a57-22", "ovs_interfaceid": "551e9a57-22c5-4205-89e8-c32eddb64e70", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:03:25 np0005539552 nova_compute[233724]: 2025-11-29 08:03:25.190 233728 DEBUG nova.network.os_vif_util [None req-006729dd-70af-4550-bafd-64a396f57d7e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Converting VIF {"id": "551e9a57-22c5-4205-89e8-c32eddb64e70", "address": "fa:16:3e:58:6f:67", "network": {"id": "acc66d85-ef8e-407d-9c47-b9f74fa426ba", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-538653727-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e901587fd87545d2b2c4a7872915b1fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap551e9a57-22", "ovs_interfaceid": "551e9a57-22c5-4205-89e8-c32eddb64e70", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:03:25 np0005539552 nova_compute[233724]: 2025-11-29 08:03:25.191 233728 DEBUG nova.network.os_vif_util [None req-006729dd-70af-4550-bafd-64a396f57d7e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:58:6f:67,bridge_name='br-int',has_traffic_filtering=True,id=551e9a57-22c5-4205-89e8-c32eddb64e70,network=Network(acc66d85-ef8e-407d-9c47-b9f74fa426ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap551e9a57-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:03:25 np0005539552 nova_compute[233724]: 2025-11-29 08:03:25.191 233728 DEBUG os_vif [None req-006729dd-70af-4550-bafd-64a396f57d7e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:58:6f:67,bridge_name='br-int',has_traffic_filtering=True,id=551e9a57-22c5-4205-89e8-c32eddb64e70,network=Network(acc66d85-ef8e-407d-9c47-b9f74fa426ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap551e9a57-22') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:03:25 np0005539552 nova_compute[233724]: 2025-11-29 08:03:25.193 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:03:25.192 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d430e2e5-737a-4157-80fc-1da7445bb546]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642545, 'reachable_time': 40581, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253145, 'error': None, 'target': 'ovnmeta-acc66d85-ef8e-407d-9c47-b9f74fa426ba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:25 np0005539552 nova_compute[233724]: 2025-11-29 08:03:25.193 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap551e9a57-22, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:03:25 np0005539552 nova_compute[233724]: 2025-11-29 08:03:25.195 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:25 np0005539552 systemd[1]: run-netns-ovnmeta\x2dacc66d85\x2def8e\x2d407d\x2d9c47\x2db9f74fa426ba.mount: Deactivated successfully.
Nov 29 03:03:25 np0005539552 nova_compute[233724]: 2025-11-29 08:03:25.196 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:03:25.196 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-acc66d85-ef8e-407d-9c47-b9f74fa426ba deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:03:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:03:25.196 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[358146ee-7264-4123-aeda-47b6b1733ad8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:25 np0005539552 nova_compute[233724]: 2025-11-29 08:03:25.200 233728 INFO os_vif [None req-006729dd-70af-4550-bafd-64a396f57d7e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:58:6f:67,bridge_name='br-int',has_traffic_filtering=True,id=551e9a57-22c5-4205-89e8-c32eddb64e70,network=Network(acc66d85-ef8e-407d-9c47-b9f74fa426ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap551e9a57-22')#033[00m
Nov 29 03:03:25 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e184 e184: 3 total, 3 up, 3 in
Nov 29 03:03:25 np0005539552 nova_compute[233724]: 2025-11-29 08:03:25.383 233728 DEBUG nova.compute.manager [req-af779b24-e5ad-4571-ab6c-62ecac1e2cbe req-f40e0ce8-1c2d-4f9f-a9a5-c0a9c51f7fa5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Received event network-vif-unplugged-551e9a57-22c5-4205-89e8-c32eddb64e70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:03:25 np0005539552 nova_compute[233724]: 2025-11-29 08:03:25.384 233728 DEBUG oslo_concurrency.lockutils [req-af779b24-e5ad-4571-ab6c-62ecac1e2cbe req-f40e0ce8-1c2d-4f9f-a9a5-c0a9c51f7fa5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "81e12fcb-7c4c-4214-b8c5-1a18612dcfe1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:25 np0005539552 nova_compute[233724]: 2025-11-29 08:03:25.384 233728 DEBUG oslo_concurrency.lockutils [req-af779b24-e5ad-4571-ab6c-62ecac1e2cbe req-f40e0ce8-1c2d-4f9f-a9a5-c0a9c51f7fa5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "81e12fcb-7c4c-4214-b8c5-1a18612dcfe1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:25 np0005539552 nova_compute[233724]: 2025-11-29 08:03:25.384 233728 DEBUG oslo_concurrency.lockutils [req-af779b24-e5ad-4571-ab6c-62ecac1e2cbe req-f40e0ce8-1c2d-4f9f-a9a5-c0a9c51f7fa5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "81e12fcb-7c4c-4214-b8c5-1a18612dcfe1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:25 np0005539552 nova_compute[233724]: 2025-11-29 08:03:25.385 233728 DEBUG nova.compute.manager [req-af779b24-e5ad-4571-ab6c-62ecac1e2cbe req-f40e0ce8-1c2d-4f9f-a9a5-c0a9c51f7fa5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] No waiting events found dispatching network-vif-unplugged-551e9a57-22c5-4205-89e8-c32eddb64e70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:03:25 np0005539552 nova_compute[233724]: 2025-11-29 08:03:25.385 233728 DEBUG nova.compute.manager [req-af779b24-e5ad-4571-ab6c-62ecac1e2cbe req-f40e0ce8-1c2d-4f9f-a9a5-c0a9c51f7fa5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Received event network-vif-unplugged-551e9a57-22c5-4205-89e8-c32eddb64e70 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:03:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:26.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:26.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:27 np0005539552 nova_compute[233724]: 2025-11-29 08:03:27.262 233728 INFO nova.virt.libvirt.driver [None req-006729dd-70af-4550-bafd-64a396f57d7e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Deleting instance files /var/lib/nova/instances/81e12fcb-7c4c-4214-b8c5-1a18612dcfe1_del#033[00m
Nov 29 03:03:27 np0005539552 nova_compute[233724]: 2025-11-29 08:03:27.263 233728 INFO nova.virt.libvirt.driver [None req-006729dd-70af-4550-bafd-64a396f57d7e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Deletion of /var/lib/nova/instances/81e12fcb-7c4c-4214-b8c5-1a18612dcfe1_del complete#033[00m
Nov 29 03:03:27 np0005539552 nova_compute[233724]: 2025-11-29 08:03:27.319 233728 INFO nova.compute.manager [None req-006729dd-70af-4550-bafd-64a396f57d7e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Took 2.59 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:03:27 np0005539552 nova_compute[233724]: 2025-11-29 08:03:27.319 233728 DEBUG oslo.service.loopingcall [None req-006729dd-70af-4550-bafd-64a396f57d7e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:03:27 np0005539552 nova_compute[233724]: 2025-11-29 08:03:27.320 233728 DEBUG nova.compute.manager [-] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:03:27 np0005539552 nova_compute[233724]: 2025-11-29 08:03:27.320 233728 DEBUG nova.network.neutron [-] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:03:27 np0005539552 nova_compute[233724]: 2025-11-29 08:03:27.491 233728 DEBUG nova.compute.manager [req-7a56ac5e-ba84-4bd0-8791-8dacf5449831 req-739dad84-d918-49ed-8d09-9aaace50cb8a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Received event network-vif-plugged-551e9a57-22c5-4205-89e8-c32eddb64e70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:03:27 np0005539552 nova_compute[233724]: 2025-11-29 08:03:27.491 233728 DEBUG oslo_concurrency.lockutils [req-7a56ac5e-ba84-4bd0-8791-8dacf5449831 req-739dad84-d918-49ed-8d09-9aaace50cb8a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "81e12fcb-7c4c-4214-b8c5-1a18612dcfe1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:27 np0005539552 nova_compute[233724]: 2025-11-29 08:03:27.491 233728 DEBUG oslo_concurrency.lockutils [req-7a56ac5e-ba84-4bd0-8791-8dacf5449831 req-739dad84-d918-49ed-8d09-9aaace50cb8a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "81e12fcb-7c4c-4214-b8c5-1a18612dcfe1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:03:27 np0005539552 nova_compute[233724]: 2025-11-29 08:03:27.492 233728 DEBUG oslo_concurrency.lockutils [req-7a56ac5e-ba84-4bd0-8791-8dacf5449831 req-739dad84-d918-49ed-8d09-9aaace50cb8a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "81e12fcb-7c4c-4214-b8c5-1a18612dcfe1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:03:27 np0005539552 nova_compute[233724]: 2025-11-29 08:03:27.492 233728 DEBUG nova.compute.manager [req-7a56ac5e-ba84-4bd0-8791-8dacf5449831 req-739dad84-d918-49ed-8d09-9aaace50cb8a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] No waiting events found dispatching network-vif-plugged-551e9a57-22c5-4205-89e8-c32eddb64e70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:03:27 np0005539552 nova_compute[233724]: 2025-11-29 08:03:27.492 233728 WARNING nova.compute.manager [req-7a56ac5e-ba84-4bd0-8791-8dacf5449831 req-739dad84-d918-49ed-8d09-9aaace50cb8a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Received unexpected event network-vif-plugged-551e9a57-22c5-4205-89e8-c32eddb64e70 for instance with vm_state active and task_state deleting.
Nov 29 03:03:28 np0005539552 nova_compute[233724]: 2025-11-29 08:03:28.193 233728 DEBUG nova.network.neutron [-] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:03:28 np0005539552 nova_compute[233724]: 2025-11-29 08:03:28.215 233728 INFO nova.compute.manager [-] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Took 0.90 seconds to deallocate network for instance.
Nov 29 03:03:28 np0005539552 nova_compute[233724]: 2025-11-29 08:03:28.269 233728 DEBUG oslo_concurrency.lockutils [None req-006729dd-70af-4550-bafd-64a396f57d7e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:03:28 np0005539552 nova_compute[233724]: 2025-11-29 08:03:28.270 233728 DEBUG oslo_concurrency.lockutils [None req-006729dd-70af-4550-bafd-64a396f57d7e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:03:28 np0005539552 nova_compute[233724]: 2025-11-29 08:03:28.284 233728 DEBUG nova.compute.manager [req-f8f5f29d-a37f-4a82-bc3e-51c66d0e9fc7 req-b4ef4f4e-0898-40f8-acbe-cd80e627fb76 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Received event network-vif-deleted-551e9a57-22c5-4205-89e8-c32eddb64e70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:03:28 np0005539552 nova_compute[233724]: 2025-11-29 08:03:28.336 233728 DEBUG oslo_concurrency.processutils [None req-006729dd-70af-4550-bafd-64a396f57d7e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:03:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:28.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:28.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:03:28 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/781436935' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:03:28 np0005539552 nova_compute[233724]: 2025-11-29 08:03:28.760 233728 DEBUG oslo_concurrency.processutils [None req-006729dd-70af-4550-bafd-64a396f57d7e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:03:28 np0005539552 nova_compute[233724]: 2025-11-29 08:03:28.766 233728 DEBUG nova.compute.provider_tree [None req-006729dd-70af-4550-bafd-64a396f57d7e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:03:28 np0005539552 nova_compute[233724]: 2025-11-29 08:03:28.781 233728 DEBUG nova.scheduler.client.report [None req-006729dd-70af-4550-bafd-64a396f57d7e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:03:28 np0005539552 nova_compute[233724]: 2025-11-29 08:03:28.805 233728 DEBUG oslo_concurrency.lockutils [None req-006729dd-70af-4550-bafd-64a396f57d7e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.535s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:03:28 np0005539552 nova_compute[233724]: 2025-11-29 08:03:28.828 233728 INFO nova.scheduler.client.report [None req-006729dd-70af-4550-bafd-64a396f57d7e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Deleted allocations for instance 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1
Nov 29 03:03:28 np0005539552 nova_compute[233724]: 2025-11-29 08:03:28.923 233728 DEBUG oslo_concurrency.lockutils [None req-006729dd-70af-4550-bafd-64a396f57d7e 1725cd9cd6474f59b213ec05ccd5c878 e901587fd87545d2b2c4a7872915b1fb - - default default] Lock "81e12fcb-7c4c-4214-b8c5-1a18612dcfe1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.201s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:03:29 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e185 e185: 3 total, 3 up, 3 in
Nov 29 03:03:29 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:03:29 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:03:29 np0005539552 nova_compute[233724]: 2025-11-29 08:03:29.954 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:30 np0005539552 nova_compute[233724]: 2025-11-29 08:03:30.195 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:30.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:30.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:32 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:32.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 03:03:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:32.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 03:03:33 np0005539552 nova_compute[233724]: 2025-11-29 08:03:33.230 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e186 e186: 3 total, 3 up, 3 in
Nov 29 03:03:33 np0005539552 nova_compute[233724]: 2025-11-29 08:03:33.308 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:34.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:34.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:34 np0005539552 nova_compute[233724]: 2025-11-29 08:03:34.956 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:35 np0005539552 nova_compute[233724]: 2025-11-29 08:03:35.196 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:36.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:36.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:38.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:38.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:03:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/596296980' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:03:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:03:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/596296980' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:03:39 np0005539552 nova_compute[233724]: 2025-11-29 08:03:39.959 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:40 np0005539552 nova_compute[233724]: 2025-11-29 08:03:40.157 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403405.1564627, 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:03:40 np0005539552 nova_compute[233724]: 2025-11-29 08:03:40.157 233728 INFO nova.compute.manager [-] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] VM Stopped (Lifecycle Event)
Nov 29 03:03:40 np0005539552 nova_compute[233724]: 2025-11-29 08:03:40.198 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:40.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:40 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e187 e187: 3 total, 3 up, 3 in
Nov 29 03:03:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:40.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:40 np0005539552 nova_compute[233724]: 2025-11-29 08:03:40.952 233728 DEBUG nova.compute.manager [None req-15411c12-0ef8-477b-9185-a6ffb820dccc - - - - - -] [instance: 81e12fcb-7c4c-4214-b8c5-1a18612dcfe1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:03:41 np0005539552 podman[253297]: 2025-11-29 08:03:41.977026187 +0000 UTC m=+0.055991891 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 03:03:41 np0005539552 podman[253298]: 2025-11-29 08:03:41.997387849 +0000 UTC m=+0.075507760 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:03:41 np0005539552 podman[253299]: 2025-11-29 08:03:41.999328232 +0000 UTC m=+0.078002809 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Nov 29 03:03:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:42.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:42.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:44.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:44.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:44 np0005539552 nova_compute[233724]: 2025-11-29 08:03:44.960 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:45 np0005539552 nova_compute[233724]: 2025-11-29 08:03:45.199 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:46.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:46 np0005539552 nova_compute[233724]: 2025-11-29 08:03:46.568 233728 DEBUG oslo_concurrency.lockutils [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquiring lock "d330098d-4454-4a83-8b6f-bc9828837e48" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:03:46 np0005539552 nova_compute[233724]: 2025-11-29 08:03:46.568 233728 DEBUG oslo_concurrency.lockutils [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "d330098d-4454-4a83-8b6f-bc9828837e48" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:03:46 np0005539552 nova_compute[233724]: 2025-11-29 08:03:46.584 233728 DEBUG nova.compute.manager [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:03:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:46.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:46 np0005539552 nova_compute[233724]: 2025-11-29 08:03:46.682 233728 DEBUG oslo_concurrency.lockutils [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:03:46 np0005539552 nova_compute[233724]: 2025-11-29 08:03:46.682 233728 DEBUG oslo_concurrency.lockutils [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:03:46 np0005539552 nova_compute[233724]: 2025-11-29 08:03:46.689 233728 DEBUG nova.virt.hardware [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:03:46 np0005539552 nova_compute[233724]: 2025-11-29 08:03:46.689 233728 INFO nova.compute.claims [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Claim successful on node compute-2.ctlplane.example.com
Nov 29 03:03:46 np0005539552 nova_compute[233724]: 2025-11-29 08:03:46.890 233728 DEBUG oslo_concurrency.processutils [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:03:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:03:47 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3786934929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:03:47 np0005539552 nova_compute[233724]: 2025-11-29 08:03:47.328 233728 DEBUG oslo_concurrency.processutils [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:03:47 np0005539552 nova_compute[233724]: 2025-11-29 08:03:47.333 233728 DEBUG nova.compute.provider_tree [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:03:47 np0005539552 nova_compute[233724]: 2025-11-29 08:03:47.354 233728 DEBUG nova.scheduler.client.report [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:03:47 np0005539552 nova_compute[233724]: 2025-11-29 08:03:47.376 233728 DEBUG oslo_concurrency.lockutils [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:03:47 np0005539552 nova_compute[233724]: 2025-11-29 08:03:47.377 233728 DEBUG nova.compute.manager [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:03:47 np0005539552 nova_compute[233724]: 2025-11-29 08:03:47.420 233728 DEBUG nova.compute.manager [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:03:47 np0005539552 nova_compute[233724]: 2025-11-29 08:03:47.420 233728 DEBUG nova.network.neutron [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:03:47 np0005539552 nova_compute[233724]: 2025-11-29 08:03:47.439 233728 INFO nova.virt.libvirt.driver [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:03:47 np0005539552 nova_compute[233724]: 2025-11-29 08:03:47.464 233728 DEBUG nova.compute.manager [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:03:47 np0005539552 nova_compute[233724]: 2025-11-29 08:03:47.566 233728 DEBUG nova.compute.manager [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:03:47 np0005539552 nova_compute[233724]: 2025-11-29 08:03:47.568 233728 DEBUG nova.virt.libvirt.driver [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:03:47 np0005539552 nova_compute[233724]: 2025-11-29 08:03:47.569 233728 INFO nova.virt.libvirt.driver [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Creating image(s)
Nov 29 03:03:47 np0005539552 nova_compute[233724]: 2025-11-29 08:03:47.593 233728 DEBUG nova.storage.rbd_utils [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] rbd image d330098d-4454-4a83-8b6f-bc9828837e48_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:03:47 np0005539552 nova_compute[233724]: 2025-11-29 08:03:47.621 233728 DEBUG nova.storage.rbd_utils [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] rbd image d330098d-4454-4a83-8b6f-bc9828837e48_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:03:47 np0005539552 nova_compute[233724]: 2025-11-29 08:03:47.646 233728 DEBUG nova.storage.rbd_utils [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] rbd image d330098d-4454-4a83-8b6f-bc9828837e48_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:03:47 np0005539552 nova_compute[233724]: 2025-11-29 08:03:47.650 233728 DEBUG oslo_concurrency.processutils [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:47 np0005539552 nova_compute[233724]: 2025-11-29 08:03:47.710 233728 DEBUG oslo_concurrency.processutils [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:03:47 np0005539552 nova_compute[233724]: 2025-11-29 08:03:47.712 233728 DEBUG oslo_concurrency.lockutils [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:47 np0005539552 nova_compute[233724]: 2025-11-29 08:03:47.712 233728 DEBUG oslo_concurrency.lockutils [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:47 np0005539552 nova_compute[233724]: 2025-11-29 08:03:47.713 233728 DEBUG oslo_concurrency.lockutils [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:47 np0005539552 nova_compute[233724]: 2025-11-29 08:03:47.739 233728 DEBUG nova.storage.rbd_utils [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] rbd image d330098d-4454-4a83-8b6f-bc9828837e48_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:03:47 np0005539552 nova_compute[233724]: 2025-11-29 08:03:47.743 233728 DEBUG oslo_concurrency.processutils [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 d330098d-4454-4a83-8b6f-bc9828837e48_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:47 np0005539552 nova_compute[233724]: 2025-11-29 08:03:47.836 233728 DEBUG nova.network.neutron [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 29 03:03:47 np0005539552 nova_compute[233724]: 2025-11-29 08:03:47.837 233728 DEBUG nova.compute.manager [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:03:48 np0005539552 nova_compute[233724]: 2025-11-29 08:03:48.189 233728 DEBUG oslo_concurrency.processutils [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 d330098d-4454-4a83-8b6f-bc9828837e48_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:03:48 np0005539552 nova_compute[233724]: 2025-11-29 08:03:48.267 233728 DEBUG nova.storage.rbd_utils [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] resizing rbd image d330098d-4454-4a83-8b6f-bc9828837e48_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:03:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:48.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:48 np0005539552 nova_compute[233724]: 2025-11-29 08:03:48.491 233728 DEBUG nova.objects.instance [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lazy-loading 'migration_context' on Instance uuid d330098d-4454-4a83-8b6f-bc9828837e48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:03:48 np0005539552 nova_compute[233724]: 2025-11-29 08:03:48.520 233728 DEBUG nova.virt.libvirt.driver [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:03:48 np0005539552 nova_compute[233724]: 2025-11-29 08:03:48.520 233728 DEBUG nova.virt.libvirt.driver [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Ensure instance console log exists: /var/lib/nova/instances/d330098d-4454-4a83-8b6f-bc9828837e48/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:03:48 np0005539552 nova_compute[233724]: 2025-11-29 08:03:48.521 233728 DEBUG oslo_concurrency.lockutils [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:48 np0005539552 nova_compute[233724]: 2025-11-29 08:03:48.521 233728 DEBUG oslo_concurrency.lockutils [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:48 np0005539552 nova_compute[233724]: 2025-11-29 08:03:48.521 233728 DEBUG oslo_concurrency.lockutils [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:48 np0005539552 nova_compute[233724]: 2025-11-29 08:03:48.523 233728 DEBUG nova.virt.libvirt.driver [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:03:48 np0005539552 nova_compute[233724]: 2025-11-29 08:03:48.527 233728 WARNING nova.virt.libvirt.driver [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:03:48 np0005539552 nova_compute[233724]: 2025-11-29 08:03:48.535 233728 DEBUG nova.virt.libvirt.host [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:03:48 np0005539552 nova_compute[233724]: 2025-11-29 08:03:48.535 233728 DEBUG nova.virt.libvirt.host [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:03:48 np0005539552 nova_compute[233724]: 2025-11-29 08:03:48.538 233728 DEBUG nova.virt.libvirt.host [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:03:48 np0005539552 nova_compute[233724]: 2025-11-29 08:03:48.538 233728 DEBUG nova.virt.libvirt.host [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:03:48 np0005539552 nova_compute[233724]: 2025-11-29 08:03:48.539 233728 DEBUG nova.virt.libvirt.driver [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:03:48 np0005539552 nova_compute[233724]: 2025-11-29 08:03:48.539 233728 DEBUG nova.virt.hardware [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:03:48 np0005539552 nova_compute[233724]: 2025-11-29 08:03:48.540 233728 DEBUG nova.virt.hardware [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:03:48 np0005539552 nova_compute[233724]: 2025-11-29 08:03:48.540 233728 DEBUG nova.virt.hardware [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:03:48 np0005539552 nova_compute[233724]: 2025-11-29 08:03:48.540 233728 DEBUG nova.virt.hardware [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:03:48 np0005539552 nova_compute[233724]: 2025-11-29 08:03:48.540 233728 DEBUG nova.virt.hardware [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:03:48 np0005539552 nova_compute[233724]: 2025-11-29 08:03:48.540 233728 DEBUG nova.virt.hardware [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:03:48 np0005539552 nova_compute[233724]: 2025-11-29 08:03:48.541 233728 DEBUG nova.virt.hardware [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:03:48 np0005539552 nova_compute[233724]: 2025-11-29 08:03:48.541 233728 DEBUG nova.virt.hardware [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:03:48 np0005539552 nova_compute[233724]: 2025-11-29 08:03:48.541 233728 DEBUG nova.virt.hardware [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:03:48 np0005539552 nova_compute[233724]: 2025-11-29 08:03:48.541 233728 DEBUG nova.virt.hardware [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:03:48 np0005539552 nova_compute[233724]: 2025-11-29 08:03:48.541 233728 DEBUG nova.virt.hardware [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:03:48 np0005539552 nova_compute[233724]: 2025-11-29 08:03:48.544 233728 DEBUG oslo_concurrency.processutils [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:48.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:03:48 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/448252374' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:03:48 np0005539552 nova_compute[233724]: 2025-11-29 08:03:48.985 233728 DEBUG oslo_concurrency.processutils [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:03:49 np0005539552 nova_compute[233724]: 2025-11-29 08:03:49.011 233728 DEBUG nova.storage.rbd_utils [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] rbd image d330098d-4454-4a83-8b6f-bc9828837e48_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:03:49 np0005539552 nova_compute[233724]: 2025-11-29 08:03:49.014 233728 DEBUG oslo_concurrency.processutils [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:49 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:03:49 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3247694529' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:03:49 np0005539552 nova_compute[233724]: 2025-11-29 08:03:49.587 233728 DEBUG oslo_concurrency.processutils [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:03:49 np0005539552 nova_compute[233724]: 2025-11-29 08:03:49.589 233728 DEBUG nova.objects.instance [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lazy-loading 'pci_devices' on Instance uuid d330098d-4454-4a83-8b6f-bc9828837e48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:03:49 np0005539552 nova_compute[233724]: 2025-11-29 08:03:49.603 233728 DEBUG nova.virt.libvirt.driver [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:03:49 np0005539552 nova_compute[233724]:  <uuid>d330098d-4454-4a83-8b6f-bc9828837e48</uuid>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:  <name>instance-0000002e</name>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:03:49 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:      <nova:name>tempest-ServersOnMultiNodesTest-server-1302179622</nova:name>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:03:48</nova:creationTime>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:03:49 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:        <nova:user uuid="1b85c3911b7c4e558779a15904c3ce58">tempest-ServersOnMultiNodesTest-648608509-project-member</nova:user>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:        <nova:project uuid="4fe9ef6d6ed6441e87cf5bdb5d40af4b">tempest-ServersOnMultiNodesTest-648608509</nova:project>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:      <nova:ports/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:      <entry name="serial">d330098d-4454-4a83-8b6f-bc9828837e48</entry>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:      <entry name="uuid">d330098d-4454-4a83-8b6f-bc9828837e48</entry>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:03:49 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/d330098d-4454-4a83-8b6f-bc9828837e48_disk">
Nov 29 03:03:49 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:03:49 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:03:49 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/d330098d-4454-4a83-8b6f-bc9828837e48_disk.config">
Nov 29 03:03:49 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:03:49 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:03:49 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/d330098d-4454-4a83-8b6f-bc9828837e48/console.log" append="off"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:03:49 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:03:49 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:03:49 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:03:49 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:03:49 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:03:49 np0005539552 nova_compute[233724]: 2025-11-29 08:03:49.660 233728 DEBUG nova.virt.libvirt.driver [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:03:49 np0005539552 nova_compute[233724]: 2025-11-29 08:03:49.660 233728 DEBUG nova.virt.libvirt.driver [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:03:49 np0005539552 nova_compute[233724]: 2025-11-29 08:03:49.661 233728 INFO nova.virt.libvirt.driver [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Using config drive#033[00m
Nov 29 03:03:49 np0005539552 nova_compute[233724]: 2025-11-29 08:03:49.682 233728 DEBUG nova.storage.rbd_utils [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] rbd image d330098d-4454-4a83-8b6f-bc9828837e48_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:03:49 np0005539552 nova_compute[233724]: 2025-11-29 08:03:49.940 233728 INFO nova.virt.libvirt.driver [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Creating config drive at /var/lib/nova/instances/d330098d-4454-4a83-8b6f-bc9828837e48/disk.config#033[00m
Nov 29 03:03:49 np0005539552 nova_compute[233724]: 2025-11-29 08:03:49.946 233728 DEBUG oslo_concurrency.processutils [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d330098d-4454-4a83-8b6f-bc9828837e48/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0l1w02pr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:49 np0005539552 nova_compute[233724]: 2025-11-29 08:03:49.964 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:50 np0005539552 nova_compute[233724]: 2025-11-29 08:03:50.072 233728 DEBUG oslo_concurrency.processutils [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d330098d-4454-4a83-8b6f-bc9828837e48/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0l1w02pr" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:03:50 np0005539552 nova_compute[233724]: 2025-11-29 08:03:50.103 233728 DEBUG nova.storage.rbd_utils [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] rbd image d330098d-4454-4a83-8b6f-bc9828837e48_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:03:50 np0005539552 nova_compute[233724]: 2025-11-29 08:03:50.107 233728 DEBUG oslo_concurrency.processutils [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d330098d-4454-4a83-8b6f-bc9828837e48/disk.config d330098d-4454-4a83-8b6f-bc9828837e48_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:50 np0005539552 nova_compute[233724]: 2025-11-29 08:03:50.201 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:50.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:50.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:51 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e188 e188: 3 total, 3 up, 3 in
Nov 29 03:03:51 np0005539552 nova_compute[233724]: 2025-11-29 08:03:51.947 233728 DEBUG oslo_concurrency.processutils [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d330098d-4454-4a83-8b6f-bc9828837e48/disk.config d330098d-4454-4a83-8b6f-bc9828837e48_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.840s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:03:51 np0005539552 nova_compute[233724]: 2025-11-29 08:03:51.948 233728 INFO nova.virt.libvirt.driver [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Deleting local config drive /var/lib/nova/instances/d330098d-4454-4a83-8b6f-bc9828837e48/disk.config because it was imported into RBD.#033[00m
Nov 29 03:03:52 np0005539552 systemd-machined[196379]: New machine qemu-15-instance-0000002e.
Nov 29 03:03:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:52 np0005539552 systemd[1]: Started Virtual Machine qemu-15-instance-0000002e.
Nov 29 03:03:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:52.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:52.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:53 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:03:53.189 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:03:53 np0005539552 nova_compute[233724]: 2025-11-29 08:03:53.190 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:53 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:03:53.191 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:03:53 np0005539552 nova_compute[233724]: 2025-11-29 08:03:53.677 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403433.6755874, d330098d-4454-4a83-8b6f-bc9828837e48 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:03:53 np0005539552 nova_compute[233724]: 2025-11-29 08:03:53.678 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:03:53 np0005539552 nova_compute[233724]: 2025-11-29 08:03:53.680 233728 DEBUG nova.compute.manager [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:03:53 np0005539552 nova_compute[233724]: 2025-11-29 08:03:53.681 233728 DEBUG nova.virt.libvirt.driver [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:03:53 np0005539552 nova_compute[233724]: 2025-11-29 08:03:53.685 233728 INFO nova.virt.libvirt.driver [-] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Instance spawned successfully.#033[00m
Nov 29 03:03:53 np0005539552 nova_compute[233724]: 2025-11-29 08:03:53.685 233728 DEBUG nova.virt.libvirt.driver [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:03:53 np0005539552 nova_compute[233724]: 2025-11-29 08:03:53.721 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:03:53 np0005539552 nova_compute[233724]: 2025-11-29 08:03:53.724 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:03:53 np0005539552 nova_compute[233724]: 2025-11-29 08:03:53.769 233728 DEBUG nova.virt.libvirt.driver [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:03:53 np0005539552 nova_compute[233724]: 2025-11-29 08:03:53.769 233728 DEBUG nova.virt.libvirt.driver [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:03:53 np0005539552 nova_compute[233724]: 2025-11-29 08:03:53.770 233728 DEBUG nova.virt.libvirt.driver [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:03:53 np0005539552 nova_compute[233724]: 2025-11-29 08:03:53.770 233728 DEBUG nova.virt.libvirt.driver [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:03:53 np0005539552 nova_compute[233724]: 2025-11-29 08:03:53.770 233728 DEBUG nova.virt.libvirt.driver [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:03:53 np0005539552 nova_compute[233724]: 2025-11-29 08:03:53.771 233728 DEBUG nova.virt.libvirt.driver [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:03:54 np0005539552 nova_compute[233724]: 2025-11-29 08:03:54.098 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:03:54 np0005539552 nova_compute[233724]: 2025-11-29 08:03:54.099 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403433.6759427, d330098d-4454-4a83-8b6f-bc9828837e48 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:03:54 np0005539552 nova_compute[233724]: 2025-11-29 08:03:54.099 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] VM Started (Lifecycle Event)#033[00m
Nov 29 03:03:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:54.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:54 np0005539552 nova_compute[233724]: 2025-11-29 08:03:54.513 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:03:54 np0005539552 nova_compute[233724]: 2025-11-29 08:03:54.517 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:03:54 np0005539552 nova_compute[233724]: 2025-11-29 08:03:54.549 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:03:54 np0005539552 nova_compute[233724]: 2025-11-29 08:03:54.553 233728 INFO nova.compute.manager [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Took 6.99 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:03:54 np0005539552 nova_compute[233724]: 2025-11-29 08:03:54.553 233728 DEBUG nova.compute.manager [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:03:54 np0005539552 nova_compute[233724]: 2025-11-29 08:03:54.611 233728 INFO nova.compute.manager [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Took 7.97 seconds to build instance.#033[00m
Nov 29 03:03:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:54.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:54 np0005539552 nova_compute[233724]: 2025-11-29 08:03:54.630 233728 DEBUG oslo_concurrency.lockutils [None req-984c606b-c680-4570-bee4-dbd9848d4328 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "d330098d-4454-4a83-8b6f-bc9828837e48" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:54 np0005539552 nova_compute[233724]: 2025-11-29 08:03:54.964 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:55 np0005539552 nova_compute[233724]: 2025-11-29 08:03:55.210 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 03:03:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:56.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 03:03:56 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e189 e189: 3 total, 3 up, 3 in
Nov 29 03:03:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:56.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:57 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:57 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:03:57.193 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:03:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e190 e190: 3 total, 3 up, 3 in
Nov 29 03:03:58 np0005539552 nova_compute[233724]: 2025-11-29 08:03:58.467 233728 DEBUG oslo_concurrency.lockutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquiring lock "3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:58 np0005539552 nova_compute[233724]: 2025-11-29 08:03:58.468 233728 DEBUG oslo_concurrency.lockutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:58.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:58 np0005539552 nova_compute[233724]: 2025-11-29 08:03:58.485 233728 DEBUG nova.compute.manager [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:03:58 np0005539552 nova_compute[233724]: 2025-11-29 08:03:58.558 233728 DEBUG oslo_concurrency.lockutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquiring lock "c20f05a7-0dec-449a-92d5-a73494ab9b6c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:58 np0005539552 nova_compute[233724]: 2025-11-29 08:03:58.559 233728 DEBUG oslo_concurrency.lockutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "c20f05a7-0dec-449a-92d5-a73494ab9b6c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:58 np0005539552 nova_compute[233724]: 2025-11-29 08:03:58.565 233728 DEBUG oslo_concurrency.lockutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:58 np0005539552 nova_compute[233724]: 2025-11-29 08:03:58.565 233728 DEBUG oslo_concurrency.lockutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:58 np0005539552 nova_compute[233724]: 2025-11-29 08:03:58.571 233728 DEBUG nova.virt.hardware [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:03:58 np0005539552 nova_compute[233724]: 2025-11-29 08:03:58.571 233728 INFO nova.compute.claims [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:03:58 np0005539552 nova_compute[233724]: 2025-11-29 08:03:58.576 233728 DEBUG nova.compute.manager [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:03:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:03:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:58.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:58 np0005539552 nova_compute[233724]: 2025-11-29 08:03:58.655 233728 DEBUG oslo_concurrency.lockutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:58 np0005539552 nova_compute[233724]: 2025-11-29 08:03:58.765 233728 DEBUG oslo_concurrency.processutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:59 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:03:59 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/537045017' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:03:59 np0005539552 nova_compute[233724]: 2025-11-29 08:03:59.238 233728 DEBUG oslo_concurrency.processutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:03:59 np0005539552 nova_compute[233724]: 2025-11-29 08:03:59.245 233728 DEBUG nova.compute.provider_tree [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:03:59 np0005539552 nova_compute[233724]: 2025-11-29 08:03:59.261 233728 DEBUG nova.scheduler.client.report [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:03:59 np0005539552 nova_compute[233724]: 2025-11-29 08:03:59.288 233728 DEBUG oslo_concurrency.lockutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:03:59 np0005539552 nova_compute[233724]: 2025-11-29 08:03:59.289 233728 DEBUG oslo_concurrency.lockutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:03:59 np0005539552 nova_compute[233724]: 2025-11-29 08:03:59.296 233728 DEBUG nova.virt.hardware [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:03:59 np0005539552 nova_compute[233724]: 2025-11-29 08:03:59.297 233728 INFO nova.compute.claims [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] Claim successful on node compute-2.ctlplane.example.com
Nov 29 03:03:59 np0005539552 nova_compute[233724]: 2025-11-29 08:03:59.378 233728 DEBUG oslo_concurrency.lockutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquiring lock "ef00ad3c-e95a-4769-a9cd-8693a87415c8" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:03:59 np0005539552 nova_compute[233724]: 2025-11-29 08:03:59.379 233728 DEBUG oslo_concurrency.lockutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "ef00ad3c-e95a-4769-a9cd-8693a87415c8" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:03:59 np0005539552 nova_compute[233724]: 2025-11-29 08:03:59.425 233728 DEBUG oslo_concurrency.lockutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "ef00ad3c-e95a-4769-a9cd-8693a87415c8" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.046s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:03:59 np0005539552 nova_compute[233724]: 2025-11-29 08:03:59.426 233728 DEBUG nova.compute.manager [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:03:59 np0005539552 nova_compute[233724]: 2025-11-29 08:03:59.486 233728 DEBUG nova.compute.manager [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:03:59 np0005539552 nova_compute[233724]: 2025-11-29 08:03:59.487 233728 DEBUG nova.network.neutron [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:03:59 np0005539552 nova_compute[233724]: 2025-11-29 08:03:59.508 233728 INFO nova.virt.libvirt.driver [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:03:59 np0005539552 nova_compute[233724]: 2025-11-29 08:03:59.530 233728 DEBUG oslo_concurrency.processutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:03:59 np0005539552 nova_compute[233724]: 2025-11-29 08:03:59.554 233728 DEBUG nova.compute.manager [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:03:59 np0005539552 nova_compute[233724]: 2025-11-29 08:03:59.639 233728 DEBUG nova.compute.manager [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:03:59 np0005539552 nova_compute[233724]: 2025-11-29 08:03:59.640 233728 DEBUG nova.virt.libvirt.driver [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:03:59 np0005539552 nova_compute[233724]: 2025-11-29 08:03:59.640 233728 INFO nova.virt.libvirt.driver [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] Creating image(s)
Nov 29 03:03:59 np0005539552 nova_compute[233724]: 2025-11-29 08:03:59.663 233728 DEBUG nova.storage.rbd_utils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] rbd image 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:03:59 np0005539552 nova_compute[233724]: 2025-11-29 08:03:59.697 233728 DEBUG nova.storage.rbd_utils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] rbd image 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:03:59 np0005539552 nova_compute[233724]: 2025-11-29 08:03:59.721 233728 DEBUG nova.storage.rbd_utils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] rbd image 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:03:59 np0005539552 nova_compute[233724]: 2025-11-29 08:03:59.725 233728 DEBUG oslo_concurrency.processutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:03:59 np0005539552 nova_compute[233724]: 2025-11-29 08:03:59.795 233728 DEBUG oslo_concurrency.processutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:03:59 np0005539552 nova_compute[233724]: 2025-11-29 08:03:59.796 233728 DEBUG oslo_concurrency.lockutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:03:59 np0005539552 nova_compute[233724]: 2025-11-29 08:03:59.797 233728 DEBUG oslo_concurrency.lockutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:03:59 np0005539552 nova_compute[233724]: 2025-11-29 08:03:59.797 233728 DEBUG oslo_concurrency.lockutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:03:59 np0005539552 nova_compute[233724]: 2025-11-29 08:03:59.818 233728 DEBUG nova.storage.rbd_utils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] rbd image 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:03:59 np0005539552 nova_compute[233724]: 2025-11-29 08:03:59.820 233728 DEBUG oslo_concurrency.processutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:03:59 np0005539552 nova_compute[233724]: 2025-11-29 08:03:59.965 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:59 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:03:59 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2950994176' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.004 233728 DEBUG oslo_concurrency.processutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.011 233728 DEBUG nova.compute.provider_tree [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.068 233728 DEBUG nova.scheduler.client.report [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.095 233728 DEBUG oslo_concurrency.lockutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.109 233728 DEBUG oslo_concurrency.lockutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquiring lock "ef00ad3c-e95a-4769-a9cd-8693a87415c8" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.109 233728 DEBUG oslo_concurrency.lockutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "ef00ad3c-e95a-4769-a9cd-8693a87415c8" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.128 233728 DEBUG oslo_concurrency.lockutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "ef00ad3c-e95a-4769-a9cd-8693a87415c8" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.019s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.129 233728 DEBUG nova.compute.manager [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.133 233728 DEBUG oslo_concurrency.processutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.313s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.204 233728 DEBUG nova.storage.rbd_utils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] resizing rbd image 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.238 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.248 233728 DEBUG nova.compute.manager [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.249 233728 DEBUG nova.network.neutron [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.313 233728 INFO nova.virt.libvirt.driver [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.321 233728 DEBUG nova.objects.instance [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lazy-loading 'migration_context' on Instance uuid 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.337 233728 DEBUG nova.compute.manager [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.367 233728 DEBUG nova.virt.libvirt.driver [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.367 233728 DEBUG nova.virt.libvirt.driver [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] Ensure instance console log exists: /var/lib/nova/instances/3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.368 233728 DEBUG oslo_concurrency.lockutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.368 233728 DEBUG oslo_concurrency.lockutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.368 233728 DEBUG oslo_concurrency.lockutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.394 233728 DEBUG nova.network.neutron [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.395 233728 DEBUG nova.compute.manager [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.396 233728 DEBUG nova.virt.libvirt.driver [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.400 233728 WARNING nova.virt.libvirt.driver [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.403 233728 DEBUG nova.virt.libvirt.host [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.404 233728 DEBUG nova.virt.libvirt.host [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.413 233728 DEBUG nova.virt.libvirt.host [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.413 233728 DEBUG nova.virt.libvirt.host [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.414 233728 DEBUG nova.virt.libvirt.driver [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.414 233728 DEBUG nova.virt.hardware [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.414 233728 DEBUG nova.virt.hardware [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.414 233728 DEBUG nova.virt.hardware [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.415 233728 DEBUG nova.virt.hardware [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.415 233728 DEBUG nova.virt.hardware [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.415 233728 DEBUG nova.virt.hardware [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.415 233728 DEBUG nova.virt.hardware [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.415 233728 DEBUG nova.virt.hardware [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.415 233728 DEBUG nova.virt.hardware [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.416 233728 DEBUG nova.virt.hardware [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.416 233728 DEBUG nova.virt.hardware [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.419 233728 DEBUG oslo_concurrency.processutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.453 233728 DEBUG nova.compute.manager [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.455 233728 DEBUG nova.virt.libvirt.driver [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.456 233728 INFO nova.virt.libvirt.driver [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] Creating image(s)
Nov 29 03:04:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:04:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:00.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.488 233728 DEBUG nova.storage.rbd_utils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] rbd image c20f05a7-0dec-449a-92d5-a73494ab9b6c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.518 233728 DEBUG nova.storage.rbd_utils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] rbd image c20f05a7-0dec-449a-92d5-a73494ab9b6c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.546 233728 DEBUG nova.storage.rbd_utils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] rbd image c20f05a7-0dec-449a-92d5-a73494ab9b6c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.551 233728 DEBUG oslo_concurrency.processutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.614 233728 DEBUG oslo_concurrency.processutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.616 233728 DEBUG oslo_concurrency.lockutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.617 233728 DEBUG oslo_concurrency.lockutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.617 233728 DEBUG oslo_concurrency.lockutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:00.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.644 233728 DEBUG nova.storage.rbd_utils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] rbd image c20f05a7-0dec-449a-92d5-a73494ab9b6c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.647 233728 DEBUG oslo_concurrency.processutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 c20f05a7-0dec-449a-92d5-a73494ab9b6c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.677 233728 DEBUG nova.network.neutron [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.678 233728 DEBUG nova.compute.manager [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:04:00 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:04:00 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3783252782' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:04:00 np0005539552 nova_compute[233724]: 2025-11-29 08:04:00.856 233728 DEBUG oslo_concurrency.processutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:01 np0005539552 nova_compute[233724]: 2025-11-29 08:04:01.170 233728 DEBUG nova.storage.rbd_utils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] rbd image 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:01 np0005539552 nova_compute[233724]: 2025-11-29 08:04:01.175 233728 DEBUG oslo_concurrency.processutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:01 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:04:01 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2178921750' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:04:01 np0005539552 nova_compute[233724]: 2025-11-29 08:04:01.649 233728 DEBUG oslo_concurrency.processutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:01 np0005539552 nova_compute[233724]: 2025-11-29 08:04:01.652 233728 DEBUG nova.objects.instance [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lazy-loading 'pci_devices' on Instance uuid 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:04:01 np0005539552 nova_compute[233724]: 2025-11-29 08:04:01.770 233728 DEBUG nova.virt.libvirt.driver [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:04:01 np0005539552 nova_compute[233724]:  <uuid>3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2</uuid>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:  <name>instance-00000030</name>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:04:01 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:      <nova:name>tempest-ServersOnMultiNodesTest-server-250132109-1</nova:name>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:04:00</nova:creationTime>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:04:01 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:        <nova:user uuid="1b85c3911b7c4e558779a15904c3ce58">tempest-ServersOnMultiNodesTest-648608509-project-member</nova:user>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:        <nova:project uuid="4fe9ef6d6ed6441e87cf5bdb5d40af4b">tempest-ServersOnMultiNodesTest-648608509</nova:project>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:      <nova:ports/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:      <entry name="serial">3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2</entry>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:      <entry name="uuid">3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2</entry>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:04:01 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2_disk">
Nov 29 03:04:01 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:04:01 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:04:01 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2_disk.config">
Nov 29 03:04:01 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:04:01 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:04:01 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2/console.log" append="off"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:04:01 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:04:01 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:04:01 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:04:01 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:04:01 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:04:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:02 np0005539552 nova_compute[233724]: 2025-11-29 08:04:02.267 233728 DEBUG nova.virt.libvirt.driver [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:04:02 np0005539552 nova_compute[233724]: 2025-11-29 08:04:02.268 233728 DEBUG nova.virt.libvirt.driver [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:04:02 np0005539552 nova_compute[233724]: 2025-11-29 08:04:02.269 233728 INFO nova.virt.libvirt.driver [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] Using config drive#033[00m
Nov 29 03:04:02 np0005539552 nova_compute[233724]: 2025-11-29 08:04:02.301 233728 DEBUG nova.storage.rbd_utils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] rbd image 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:02 np0005539552 nova_compute[233724]: 2025-11-29 08:04:02.384 233728 DEBUG oslo_concurrency.processutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 c20f05a7-0dec-449a-92d5-a73494ab9b6c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.737s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:02 np0005539552 nova_compute[233724]: 2025-11-29 08:04:02.455 233728 DEBUG nova.storage.rbd_utils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] resizing rbd image c20f05a7-0dec-449a-92d5-a73494ab9b6c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:04:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:02.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:02.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:02 np0005539552 nova_compute[233724]: 2025-11-29 08:04:02.834 233728 INFO nova.virt.libvirt.driver [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] Creating config drive at /var/lib/nova/instances/3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2/disk.config#033[00m
Nov 29 03:04:02 np0005539552 nova_compute[233724]: 2025-11-29 08:04:02.839 233728 DEBUG oslo_concurrency.processutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp81fmnafw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:02 np0005539552 nova_compute[233724]: 2025-11-29 08:04:02.966 233728 DEBUG oslo_concurrency.processutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp81fmnafw" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:03 np0005539552 nova_compute[233724]: 2025-11-29 08:04:03.000 233728 DEBUG nova.storage.rbd_utils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] rbd image 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:03 np0005539552 nova_compute[233724]: 2025-11-29 08:04:03.003 233728 DEBUG oslo_concurrency.processutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2/disk.config 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:03 np0005539552 nova_compute[233724]: 2025-11-29 08:04:03.202 233728 DEBUG nova.objects.instance [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lazy-loading 'migration_context' on Instance uuid c20f05a7-0dec-449a-92d5-a73494ab9b6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:04:03 np0005539552 nova_compute[233724]: 2025-11-29 08:04:03.217 233728 DEBUG nova.virt.libvirt.driver [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:04:03 np0005539552 nova_compute[233724]: 2025-11-29 08:04:03.217 233728 DEBUG nova.virt.libvirt.driver [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] Ensure instance console log exists: /var/lib/nova/instances/c20f05a7-0dec-449a-92d5-a73494ab9b6c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:04:03 np0005539552 nova_compute[233724]: 2025-11-29 08:04:03.218 233728 DEBUG oslo_concurrency.lockutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:03 np0005539552 nova_compute[233724]: 2025-11-29 08:04:03.218 233728 DEBUG oslo_concurrency.lockutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:03 np0005539552 nova_compute[233724]: 2025-11-29 08:04:03.218 233728 DEBUG oslo_concurrency.lockutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:03 np0005539552 nova_compute[233724]: 2025-11-29 08:04:03.221 233728 DEBUG nova.virt.libvirt.driver [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:04:03 np0005539552 nova_compute[233724]: 2025-11-29 08:04:03.226 233728 WARNING nova.virt.libvirt.driver [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:04:03 np0005539552 nova_compute[233724]: 2025-11-29 08:04:03.230 233728 DEBUG nova.virt.libvirt.host [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:04:03 np0005539552 nova_compute[233724]: 2025-11-29 08:04:03.231 233728 DEBUG nova.virt.libvirt.host [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:04:03 np0005539552 nova_compute[233724]: 2025-11-29 08:04:03.234 233728 DEBUG nova.virt.libvirt.host [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:04:03 np0005539552 nova_compute[233724]: 2025-11-29 08:04:03.235 233728 DEBUG nova.virt.libvirt.host [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:04:03 np0005539552 nova_compute[233724]: 2025-11-29 08:04:03.236 233728 DEBUG nova.virt.libvirt.driver [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:04:03 np0005539552 nova_compute[233724]: 2025-11-29 08:04:03.236 233728 DEBUG nova.virt.hardware [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:04:03 np0005539552 nova_compute[233724]: 2025-11-29 08:04:03.236 233728 DEBUG nova.virt.hardware [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:04:03 np0005539552 nova_compute[233724]: 2025-11-29 08:04:03.236 233728 DEBUG nova.virt.hardware [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:04:03 np0005539552 nova_compute[233724]: 2025-11-29 08:04:03.237 233728 DEBUG nova.virt.hardware [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:04:03 np0005539552 nova_compute[233724]: 2025-11-29 08:04:03.237 233728 DEBUG nova.virt.hardware [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:04:03 np0005539552 nova_compute[233724]: 2025-11-29 08:04:03.237 233728 DEBUG nova.virt.hardware [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:04:03 np0005539552 nova_compute[233724]: 2025-11-29 08:04:03.237 233728 DEBUG nova.virt.hardware [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:04:03 np0005539552 nova_compute[233724]: 2025-11-29 08:04:03.237 233728 DEBUG nova.virt.hardware [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:04:03 np0005539552 nova_compute[233724]: 2025-11-29 08:04:03.237 233728 DEBUG nova.virt.hardware [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:04:03 np0005539552 nova_compute[233724]: 2025-11-29 08:04:03.238 233728 DEBUG nova.virt.hardware [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:04:03 np0005539552 nova_compute[233724]: 2025-11-29 08:04:03.238 233728 DEBUG nova.virt.hardware [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:04:03 np0005539552 nova_compute[233724]: 2025-11-29 08:04:03.240 233728 DEBUG oslo_concurrency.processutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:03 np0005539552 nova_compute[233724]: 2025-11-29 08:04:03.500 233728 DEBUG oslo_concurrency.processutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2/disk.config 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:03 np0005539552 nova_compute[233724]: 2025-11-29 08:04:03.502 233728 INFO nova.virt.libvirt.driver [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] Deleting local config drive /var/lib/nova/instances/3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2/disk.config because it was imported into RBD.#033[00m
Nov 29 03:04:03 np0005539552 systemd-machined[196379]: New machine qemu-16-instance-00000030.
Nov 29 03:04:03 np0005539552 systemd[1]: Started Virtual Machine qemu-16-instance-00000030.
Nov 29 03:04:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:04:03 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1989409307' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:04:03 np0005539552 nova_compute[233724]: 2025-11-29 08:04:03.692 233728 DEBUG oslo_concurrency.processutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:03 np0005539552 nova_compute[233724]: 2025-11-29 08:04:03.721 233728 DEBUG nova.storage.rbd_utils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] rbd image c20f05a7-0dec-449a-92d5-a73494ab9b6c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:03 np0005539552 nova_compute[233724]: 2025-11-29 08:04:03.726 233728 DEBUG oslo_concurrency.processutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:03 np0005539552 nova_compute[233724]: 2025-11-29 08:04:03.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:04:03 np0005539552 nova_compute[233724]: 2025-11-29 08:04:03.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 03:04:04 np0005539552 nova_compute[233724]: 2025-11-29 08:04:04.002 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403444.0024297, 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:04:04 np0005539552 nova_compute[233724]: 2025-11-29 08:04:04.004 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:04:04 np0005539552 nova_compute[233724]: 2025-11-29 08:04:04.008 233728 DEBUG nova.compute.manager [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:04:04 np0005539552 nova_compute[233724]: 2025-11-29 08:04:04.008 233728 DEBUG nova.virt.libvirt.driver [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:04:04 np0005539552 nova_compute[233724]: 2025-11-29 08:04:04.013 233728 INFO nova.virt.libvirt.driver [-] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] Instance spawned successfully.#033[00m
Nov 29 03:04:04 np0005539552 nova_compute[233724]: 2025-11-29 08:04:04.014 233728 DEBUG nova.virt.libvirt.driver [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:04:04 np0005539552 nova_compute[233724]: 2025-11-29 08:04:04.072 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:04:04 np0005539552 nova_compute[233724]: 2025-11-29 08:04:04.079 233728 DEBUG nova.virt.libvirt.driver [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:04 np0005539552 nova_compute[233724]: 2025-11-29 08:04:04.080 233728 DEBUG nova.virt.libvirt.driver [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:04 np0005539552 nova_compute[233724]: 2025-11-29 08:04:04.080 233728 DEBUG nova.virt.libvirt.driver [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:04 np0005539552 nova_compute[233724]: 2025-11-29 08:04:04.081 233728 DEBUG nova.virt.libvirt.driver [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:04 np0005539552 nova_compute[233724]: 2025-11-29 08:04:04.081 233728 DEBUG nova.virt.libvirt.driver [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:04 np0005539552 nova_compute[233724]: 2025-11-29 08:04:04.082 233728 DEBUG nova.virt.libvirt.driver [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:04 np0005539552 nova_compute[233724]: 2025-11-29 08:04:04.086 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:04:04 np0005539552 nova_compute[233724]: 2025-11-29 08:04:04.130 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:04:04 np0005539552 nova_compute[233724]: 2025-11-29 08:04:04.131 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403444.003462, 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:04:04 np0005539552 nova_compute[233724]: 2025-11-29 08:04:04.131 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] VM Started (Lifecycle Event)#033[00m
Nov 29 03:04:04 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:04:04 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2685925987' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:04:04 np0005539552 nova_compute[233724]: 2025-11-29 08:04:04.175 233728 DEBUG oslo_concurrency.processutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:04 np0005539552 nova_compute[233724]: 2025-11-29 08:04:04.176 233728 DEBUG nova.objects.instance [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lazy-loading 'pci_devices' on Instance uuid c20f05a7-0dec-449a-92d5-a73494ab9b6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:04:04 np0005539552 nova_compute[233724]: 2025-11-29 08:04:04.255 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:04:04 np0005539552 nova_compute[233724]: 2025-11-29 08:04:04.259 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:04:04 np0005539552 nova_compute[233724]: 2025-11-29 08:04:04.272 233728 DEBUG nova.virt.libvirt.driver [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:04:04 np0005539552 nova_compute[233724]:  <uuid>c20f05a7-0dec-449a-92d5-a73494ab9b6c</uuid>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:  <name>instance-00000031</name>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:04:04 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:      <nova:name>tempest-ServersOnMultiNodesTest-server-250132109-2</nova:name>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:04:03</nova:creationTime>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:04:04 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:        <nova:user uuid="1b85c3911b7c4e558779a15904c3ce58">tempest-ServersOnMultiNodesTest-648608509-project-member</nova:user>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:        <nova:project uuid="4fe9ef6d6ed6441e87cf5bdb5d40af4b">tempest-ServersOnMultiNodesTest-648608509</nova:project>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:      <nova:ports/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:      <entry name="serial">c20f05a7-0dec-449a-92d5-a73494ab9b6c</entry>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:      <entry name="uuid">c20f05a7-0dec-449a-92d5-a73494ab9b6c</entry>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:04:04 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/c20f05a7-0dec-449a-92d5-a73494ab9b6c_disk">
Nov 29 03:04:04 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:04:04 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:04:04 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/c20f05a7-0dec-449a-92d5-a73494ab9b6c_disk.config">
Nov 29 03:04:04 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:04:04 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:04:04 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/c20f05a7-0dec-449a-92d5-a73494ab9b6c/console.log" append="off"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:04:04 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:04:04 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:04:04 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:04:04 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:04:04 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:04:04 np0005539552 nova_compute[233724]: 2025-11-29 08:04:04.388 233728 INFO nova.compute.manager [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] Took 4.75 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:04:04 np0005539552 nova_compute[233724]: 2025-11-29 08:04:04.388 233728 DEBUG nova.compute.manager [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:04:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 03:04:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:04.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 03:04:04 np0005539552 nova_compute[233724]: 2025-11-29 08:04:04.618 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:04:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:04.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:04 np0005539552 nova_compute[233724]: 2025-11-29 08:04:04.720 233728 DEBUG nova.virt.libvirt.driver [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:04:04 np0005539552 nova_compute[233724]: 2025-11-29 08:04:04.721 233728 DEBUG nova.virt.libvirt.driver [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:04:04 np0005539552 nova_compute[233724]: 2025-11-29 08:04:04.721 233728 INFO nova.virt.libvirt.driver [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] Using config drive
Nov 29 03:04:04 np0005539552 nova_compute[233724]: 2025-11-29 08:04:04.747 233728 DEBUG nova.storage.rbd_utils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] rbd image c20f05a7-0dec-449a-92d5-a73494ab9b6c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:04:04 np0005539552 nova_compute[233724]: 2025-11-29 08:04:04.759 233728 INFO nova.compute.manager [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] Took 6.22 seconds to build instance.
Nov 29 03:04:04 np0005539552 nova_compute[233724]: 2025-11-29 08:04:04.778 233728 DEBUG oslo_concurrency.lockutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.310s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:04:04 np0005539552 nova_compute[233724]: 2025-11-29 08:04:04.928 233728 INFO nova.virt.libvirt.driver [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] Creating config drive at /var/lib/nova/instances/c20f05a7-0dec-449a-92d5-a73494ab9b6c/disk.config
Nov 29 03:04:04 np0005539552 nova_compute[233724]: 2025-11-29 08:04:04.935 233728 DEBUG oslo_concurrency.processutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c20f05a7-0dec-449a-92d5-a73494ab9b6c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzpv45vhq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:04:04 np0005539552 nova_compute[233724]: 2025-11-29 08:04:04.967 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:05 np0005539552 nova_compute[233724]: 2025-11-29 08:04:05.062 233728 DEBUG oslo_concurrency.processutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c20f05a7-0dec-449a-92d5-a73494ab9b6c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzpv45vhq" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:04:05 np0005539552 nova_compute[233724]: 2025-11-29 08:04:05.092 233728 DEBUG nova.storage.rbd_utils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] rbd image c20f05a7-0dec-449a-92d5-a73494ab9b6c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:04:05 np0005539552 nova_compute[233724]: 2025-11-29 08:04:05.098 233728 DEBUG oslo_concurrency.processutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c20f05a7-0dec-449a-92d5-a73494ab9b6c/disk.config c20f05a7-0dec-449a-92d5-a73494ab9b6c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:04:05 np0005539552 nova_compute[233724]: 2025-11-29 08:04:05.239 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:05 np0005539552 nova_compute[233724]: 2025-11-29 08:04:05.252 233728 DEBUG oslo_concurrency.processutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c20f05a7-0dec-449a-92d5-a73494ab9b6c/disk.config c20f05a7-0dec-449a-92d5-a73494ab9b6c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:04:05 np0005539552 nova_compute[233724]: 2025-11-29 08:04:05.253 233728 INFO nova.virt.libvirt.driver [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] Deleting local config drive /var/lib/nova/instances/c20f05a7-0dec-449a-92d5-a73494ab9b6c/disk.config because it was imported into RBD.
Nov 29 03:04:05 np0005539552 systemd-machined[196379]: New machine qemu-17-instance-00000031.
Nov 29 03:04:05 np0005539552 systemd[1]: Started Virtual Machine qemu-17-instance-00000031.
Nov 29 03:04:05 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e191 e191: 3 total, 3 up, 3 in
Nov 29 03:04:06 np0005539552 nova_compute[233724]: 2025-11-29 08:04:06.410 233728 DEBUG nova.compute.manager [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:04:06 np0005539552 nova_compute[233724]: 2025-11-29 08:04:06.410 233728 DEBUG nova.virt.libvirt.driver [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:04:06 np0005539552 nova_compute[233724]: 2025-11-29 08:04:06.411 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403446.409816, c20f05a7-0dec-449a-92d5-a73494ab9b6c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:04:06 np0005539552 nova_compute[233724]: 2025-11-29 08:04:06.411 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] VM Resumed (Lifecycle Event)
Nov 29 03:04:06 np0005539552 nova_compute[233724]: 2025-11-29 08:04:06.416 233728 INFO nova.virt.libvirt.driver [-] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] Instance spawned successfully.
Nov 29 03:04:06 np0005539552 nova_compute[233724]: 2025-11-29 08:04:06.416 233728 DEBUG nova.virt.libvirt.driver [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:04:06 np0005539552 nova_compute[233724]: 2025-11-29 08:04:06.434 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:04:06 np0005539552 nova_compute[233724]: 2025-11-29 08:04:06.440 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:04:06 np0005539552 nova_compute[233724]: 2025-11-29 08:04:06.444 233728 DEBUG nova.virt.libvirt.driver [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:04:06 np0005539552 nova_compute[233724]: 2025-11-29 08:04:06.444 233728 DEBUG nova.virt.libvirt.driver [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:04:06 np0005539552 nova_compute[233724]: 2025-11-29 08:04:06.445 233728 DEBUG nova.virt.libvirt.driver [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:04:06 np0005539552 nova_compute[233724]: 2025-11-29 08:04:06.445 233728 DEBUG nova.virt.libvirt.driver [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:04:06 np0005539552 nova_compute[233724]: 2025-11-29 08:04:06.446 233728 DEBUG nova.virt.libvirt.driver [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:04:06 np0005539552 nova_compute[233724]: 2025-11-29 08:04:06.446 233728 DEBUG nova.virt.libvirt.driver [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:04:06 np0005539552 nova_compute[233724]: 2025-11-29 08:04:06.471 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:04:06 np0005539552 nova_compute[233724]: 2025-11-29 08:04:06.472 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403446.4100168, c20f05a7-0dec-449a-92d5-a73494ab9b6c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:04:06 np0005539552 nova_compute[233724]: 2025-11-29 08:04:06.473 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] VM Started (Lifecycle Event)
Nov 29 03:04:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:06.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:06 np0005539552 nova_compute[233724]: 2025-11-29 08:04:06.494 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:04:06 np0005539552 nova_compute[233724]: 2025-11-29 08:04:06.498 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:04:06 np0005539552 nova_compute[233724]: 2025-11-29 08:04:06.508 233728 INFO nova.compute.manager [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] Took 6.05 seconds to spawn the instance on the hypervisor.
Nov 29 03:04:06 np0005539552 nova_compute[233724]: 2025-11-29 08:04:06.508 233728 DEBUG nova.compute.manager [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:04:06 np0005539552 nova_compute[233724]: 2025-11-29 08:04:06.519 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:04:06 np0005539552 nova_compute[233724]: 2025-11-29 08:04:06.570 233728 INFO nova.compute.manager [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] Took 7.94 seconds to build instance.
Nov 29 03:04:06 np0005539552 nova_compute[233724]: 2025-11-29 08:04:06.588 233728 DEBUG oslo_concurrency.lockutils [None req-2386c265-6489-443f-ba35-c31e081efb5a 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "c20f05a7-0dec-449a-92d5-a73494ab9b6c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.029s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:04:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 03:04:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:06.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 03:04:06 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e192 e192: 3 total, 3 up, 3 in
Nov 29 03:04:07 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:08.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:08.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:09 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e193 e193: 3 total, 3 up, 3 in
Nov 29 03:04:09 np0005539552 nova_compute[233724]: 2025-11-29 08:04:09.968 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:09 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e194 e194: 3 total, 3 up, 3 in
Nov 29 03:04:10 np0005539552 nova_compute[233724]: 2025-11-29 08:04:10.265 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:04:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:10.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:04:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:10.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 03:04:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:12.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 03:04:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:12.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:12 np0005539552 podman[254580]: 2025-11-29 08:04:12.984481761 +0000 UTC m=+0.068328875 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:04:12 np0005539552 podman[254579]: 2025-11-29 08:04:12.987980096 +0000 UTC m=+0.071852701 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 03:04:13 np0005539552 podman[254581]: 2025-11-29 08:04:13.006981622 +0000 UTC m=+0.088309118 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:04:13 np0005539552 nova_compute[233724]: 2025-11-29 08:04:13.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:04:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:14.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:14.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:14 np0005539552 nova_compute[233724]: 2025-11-29 08:04:14.971 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:04:14 np0005539552 nova_compute[233724]: 2025-11-29 08:04:14.972 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:15 np0005539552 nova_compute[233724]: 2025-11-29 08:04:15.266 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:15 np0005539552 nova_compute[233724]: 2025-11-29 08:04:15.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:04:15 np0005539552 nova_compute[233724]: 2025-11-29 08:04:15.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:04:16 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e195 e195: 3 total, 3 up, 3 in
Nov 29 03:04:16 np0005539552 nova_compute[233724]: 2025-11-29 08:04:16.437 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:04:16 np0005539552 nova_compute[233724]: 2025-11-29 08:04:16.438 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:04:16 np0005539552 nova_compute[233724]: 2025-11-29 08:04:16.438 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:04:16 np0005539552 nova_compute[233724]: 2025-11-29 08:04:16.439 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 03:04:16 np0005539552 nova_compute[233724]: 2025-11-29 08:04:16.439 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:04:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:16.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:16.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:16 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:04:16 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/333920296' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:04:16 np0005539552 nova_compute[233724]: 2025-11-29 08:04:16.956 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e195 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:17 np0005539552 ovn_controller[133798]: 2025-11-29T08:04:17Z|00157|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 29 03:04:17 np0005539552 nova_compute[233724]: 2025-11-29 08:04:17.398 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000031 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:04:17 np0005539552 nova_compute[233724]: 2025-11-29 08:04:17.398 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000031 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:04:17 np0005539552 nova_compute[233724]: 2025-11-29 08:04:17.402 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000030 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:04:17 np0005539552 nova_compute[233724]: 2025-11-29 08:04:17.403 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000030 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:04:17 np0005539552 nova_compute[233724]: 2025-11-29 08:04:17.405 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-0000002e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:04:17 np0005539552 nova_compute[233724]: 2025-11-29 08:04:17.406 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-0000002e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:04:17 np0005539552 nova_compute[233724]: 2025-11-29 08:04:17.555 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:04:17 np0005539552 nova_compute[233724]: 2025-11-29 08:04:17.557 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4206MB free_disk=20.78929901123047GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:04:17 np0005539552 nova_compute[233724]: 2025-11-29 08:04:17.557 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:17 np0005539552 nova_compute[233724]: 2025-11-29 08:04:17.558 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:17 np0005539552 nova_compute[233724]: 2025-11-29 08:04:17.710 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance d330098d-4454-4a83-8b6f-bc9828837e48 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:04:17 np0005539552 nova_compute[233724]: 2025-11-29 08:04:17.711 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:04:17 np0005539552 nova_compute[233724]: 2025-11-29 08:04:17.711 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance c20f05a7-0dec-449a-92d5-a73494ab9b6c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:04:17 np0005539552 nova_compute[233724]: 2025-11-29 08:04:17.712 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:04:17 np0005539552 nova_compute[233724]: 2025-11-29 08:04:17.712 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:04:17 np0005539552 nova_compute[233724]: 2025-11-29 08:04:17.832 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e196 e196: 3 total, 3 up, 3 in
Nov 29 03:04:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:04:18 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3593772647' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:04:18 np0005539552 nova_compute[233724]: 2025-11-29 08:04:18.370 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:18 np0005539552 nova_compute[233724]: 2025-11-29 08:04:18.376 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:04:18 np0005539552 nova_compute[233724]: 2025-11-29 08:04:18.399 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:04:18 np0005539552 nova_compute[233724]: 2025-11-29 08:04:18.461 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:04:18 np0005539552 nova_compute[233724]: 2025-11-29 08:04:18.462 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.904s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:18.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:04:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:18.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:04:19 np0005539552 nova_compute[233724]: 2025-11-29 08:04:19.464 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:04:19 np0005539552 nova_compute[233724]: 2025-11-29 08:04:19.464 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:04:19 np0005539552 nova_compute[233724]: 2025-11-29 08:04:19.464 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:04:19 np0005539552 nova_compute[233724]: 2025-11-29 08:04:19.614 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "refresh_cache-d330098d-4454-4a83-8b6f-bc9828837e48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:04:19 np0005539552 nova_compute[233724]: 2025-11-29 08:04:19.614 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquired lock "refresh_cache-d330098d-4454-4a83-8b6f-bc9828837e48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:04:19 np0005539552 nova_compute[233724]: 2025-11-29 08:04:19.614 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:04:19 np0005539552 nova_compute[233724]: 2025-11-29 08:04:19.615 233728 DEBUG nova.objects.instance [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d330098d-4454-4a83-8b6f-bc9828837e48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:04:19 np0005539552 nova_compute[233724]: 2025-11-29 08:04:19.973 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:20 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e197 e197: 3 total, 3 up, 3 in
Nov 29 03:04:20 np0005539552 nova_compute[233724]: 2025-11-29 08:04:20.214 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:04:20 np0005539552 nova_compute[233724]: 2025-11-29 08:04:20.323 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:20.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:04:20.613 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:04:20.613 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:04:20.613 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:20 np0005539552 nova_compute[233724]: 2025-11-29 08:04:20.646 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:04:20 np0005539552 nova_compute[233724]: 2025-11-29 08:04:20.661 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Releasing lock "refresh_cache-d330098d-4454-4a83-8b6f-bc9828837e48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:04:20 np0005539552 nova_compute[233724]: 2025-11-29 08:04:20.661 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:04:20 np0005539552 nova_compute[233724]: 2025-11-29 08:04:20.662 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:04:20 np0005539552 nova_compute[233724]: 2025-11-29 08:04:20.662 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:04:20 np0005539552 nova_compute[233724]: 2025-11-29 08:04:20.662 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:04:20 np0005539552 nova_compute[233724]: 2025-11-29 08:04:20.662 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:04:20 np0005539552 nova_compute[233724]: 2025-11-29 08:04:20.662 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:04:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:20.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:21 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e198 e198: 3 total, 3 up, 3 in
Nov 29 03:04:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e199 e199: 3 total, 3 up, 3 in
Nov 29 03:04:22 np0005539552 nova_compute[233724]: 2025-11-29 08:04:22.118 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:04:22 np0005539552 nova_compute[233724]: 2025-11-29 08:04:22.118 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:04:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:22.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:22.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:24.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:04:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:24.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:04:24 np0005539552 nova_compute[233724]: 2025-11-29 08:04:24.975 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:25 np0005539552 nova_compute[233724]: 2025-11-29 08:04:25.324 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:26 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e200 e200: 3 total, 3 up, 3 in
Nov 29 03:04:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:26.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:26.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e200 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:28.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:28.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:29 np0005539552 nova_compute[233724]: 2025-11-29 08:04:29.978 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:30 np0005539552 nova_compute[233724]: 2025-11-29 08:04:30.326 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:30.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:30.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:30 np0005539552 nova_compute[233724]: 2025-11-29 08:04:30.796 233728 DEBUG oslo_concurrency.lockutils [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:30 np0005539552 nova_compute[233724]: 2025-11-29 08:04:30.796 233728 DEBUG oslo_concurrency.lockutils [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:30 np0005539552 nova_compute[233724]: 2025-11-29 08:04:30.819 233728 DEBUG nova.compute.manager [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:04:30 np0005539552 nova_compute[233724]: 2025-11-29 08:04:30.913 233728 DEBUG oslo_concurrency.lockutils [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:30 np0005539552 nova_compute[233724]: 2025-11-29 08:04:30.913 233728 DEBUG oslo_concurrency.lockutils [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:30 np0005539552 nova_compute[233724]: 2025-11-29 08:04:30.923 233728 DEBUG nova.virt.hardware [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:04:30 np0005539552 nova_compute[233724]: 2025-11-29 08:04:30.923 233728 INFO nova.compute.claims [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:04:31 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e201 e201: 3 total, 3 up, 3 in
Nov 29 03:04:31 np0005539552 nova_compute[233724]: 2025-11-29 08:04:31.255 233728 DEBUG oslo_concurrency.processutils [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:31 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:04:31 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4118793655' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:04:31 np0005539552 nova_compute[233724]: 2025-11-29 08:04:31.687 233728 DEBUG oslo_concurrency.processutils [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:04:31 np0005539552 nova_compute[233724]: 2025-11-29 08:04:31.694 233728 DEBUG nova.compute.provider_tree [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:04:31 np0005539552 nova_compute[233724]: 2025-11-29 08:04:31.715 233728 DEBUG nova.scheduler.client.report [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:04:31 np0005539552 nova_compute[233724]: 2025-11-29 08:04:31.738 233728 DEBUG oslo_concurrency.lockutils [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.825s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:04:31 np0005539552 nova_compute[233724]: 2025-11-29 08:04:31.739 233728 DEBUG nova.compute.manager [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:04:31 np0005539552 nova_compute[233724]: 2025-11-29 08:04:31.801 233728 DEBUG nova.compute.manager [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:04:31 np0005539552 nova_compute[233724]: 2025-11-29 08:04:31.802 233728 DEBUG nova.network.neutron [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:04:31 np0005539552 nova_compute[233724]: 2025-11-29 08:04:31.827 233728 INFO nova.virt.libvirt.driver [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:04:31 np0005539552 nova_compute[233724]: 2025-11-29 08:04:31.852 233728 DEBUG nova.compute.manager [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:04:31 np0005539552 nova_compute[233724]: 2025-11-29 08:04:31.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:04:31 np0005539552 nova_compute[233724]: 2025-11-29 08:04:31.923 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 03:04:31 np0005539552 nova_compute[233724]: 2025-11-29 08:04:31.964 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 03:04:31 np0005539552 nova_compute[233724]: 2025-11-29 08:04:31.974 233728 DEBUG nova.compute.manager [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:04:31 np0005539552 nova_compute[233724]: 2025-11-29 08:04:31.976 233728 DEBUG nova.virt.libvirt.driver [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:04:31 np0005539552 nova_compute[233724]: 2025-11-29 08:04:31.977 233728 INFO nova.virt.libvirt.driver [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Creating image(s)
Nov 29 03:04:32 np0005539552 nova_compute[233724]: 2025-11-29 08:04:32.007 233728 DEBUG nova.storage.rbd_utils [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] rbd image d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:04:32 np0005539552 nova_compute[233724]: 2025-11-29 08:04:32.031 233728 DEBUG nova.storage.rbd_utils [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] rbd image d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:04:32 np0005539552 nova_compute[233724]: 2025-11-29 08:04:32.055 233728 DEBUG nova.storage.rbd_utils [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] rbd image d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:04:32 np0005539552 nova_compute[233724]: 2025-11-29 08:04:32.059 233728 DEBUG oslo_concurrency.processutils [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:04:32 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:04:32 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:04:32 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:04:32 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:04:32 np0005539552 nova_compute[233724]: 2025-11-29 08:04:32.115 233728 DEBUG oslo_concurrency.processutils [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:04:32 np0005539552 nova_compute[233724]: 2025-11-29 08:04:32.116 233728 DEBUG oslo_concurrency.lockutils [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:04:32 np0005539552 nova_compute[233724]: 2025-11-29 08:04:32.117 233728 DEBUG oslo_concurrency.lockutils [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:04:32 np0005539552 nova_compute[233724]: 2025-11-29 08:04:32.117 233728 DEBUG oslo_concurrency.lockutils [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:04:32 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:32 np0005539552 nova_compute[233724]: 2025-11-29 08:04:32.148 233728 DEBUG nova.storage.rbd_utils [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] rbd image d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:04:32 np0005539552 nova_compute[233724]: 2025-11-29 08:04:32.151 233728 DEBUG oslo_concurrency.processutils [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:04:32 np0005539552 nova_compute[233724]: 2025-11-29 08:04:32.225 233728 DEBUG nova.policy [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fddc5f5801764ee19d5253e2cab34df3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '638fd52fccf14f16b56d0860553063f3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:04:32 np0005539552 nova_compute[233724]: 2025-11-29 08:04:32.338 233728 DEBUG oslo_concurrency.lockutils [None req-2907e39d-8e2a-472d-ba58-f986c56ec746 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquiring lock "3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:04:32 np0005539552 nova_compute[233724]: 2025-11-29 08:04:32.339 233728 DEBUG oslo_concurrency.lockutils [None req-2907e39d-8e2a-472d-ba58-f986c56ec746 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:04:32 np0005539552 nova_compute[233724]: 2025-11-29 08:04:32.339 233728 DEBUG oslo_concurrency.lockutils [None req-2907e39d-8e2a-472d-ba58-f986c56ec746 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquiring lock "3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:04:32 np0005539552 nova_compute[233724]: 2025-11-29 08:04:32.340 233728 DEBUG oslo_concurrency.lockutils [None req-2907e39d-8e2a-472d-ba58-f986c56ec746 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:04:32 np0005539552 nova_compute[233724]: 2025-11-29 08:04:32.340 233728 DEBUG oslo_concurrency.lockutils [None req-2907e39d-8e2a-472d-ba58-f986c56ec746 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:04:32 np0005539552 nova_compute[233724]: 2025-11-29 08:04:32.341 233728 INFO nova.compute.manager [None req-2907e39d-8e2a-472d-ba58-f986c56ec746 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] Terminating instance
Nov 29 03:04:32 np0005539552 nova_compute[233724]: 2025-11-29 08:04:32.342 233728 DEBUG oslo_concurrency.lockutils [None req-2907e39d-8e2a-472d-ba58-f986c56ec746 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquiring lock "refresh_cache-3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:04:32 np0005539552 nova_compute[233724]: 2025-11-29 08:04:32.342 233728 DEBUG oslo_concurrency.lockutils [None req-2907e39d-8e2a-472d-ba58-f986c56ec746 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquired lock "refresh_cache-3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:04:32 np0005539552 nova_compute[233724]: 2025-11-29 08:04:32.343 233728 DEBUG nova.network.neutron [None req-2907e39d-8e2a-472d-ba58-f986c56ec746 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:04:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:32.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:32 np0005539552 nova_compute[233724]: 2025-11-29 08:04:32.521 233728 DEBUG oslo_concurrency.lockutils [None req-a9cb5a08-5239-4a71-a110-a2ce90ab2be2 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquiring lock "c20f05a7-0dec-449a-92d5-a73494ab9b6c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:04:32 np0005539552 nova_compute[233724]: 2025-11-29 08:04:32.522 233728 DEBUG oslo_concurrency.lockutils [None req-a9cb5a08-5239-4a71-a110-a2ce90ab2be2 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "c20f05a7-0dec-449a-92d5-a73494ab9b6c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:04:32 np0005539552 nova_compute[233724]: 2025-11-29 08:04:32.522 233728 DEBUG oslo_concurrency.lockutils [None req-a9cb5a08-5239-4a71-a110-a2ce90ab2be2 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquiring lock "c20f05a7-0dec-449a-92d5-a73494ab9b6c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:04:32 np0005539552 nova_compute[233724]: 2025-11-29 08:04:32.523 233728 DEBUG oslo_concurrency.lockutils [None req-a9cb5a08-5239-4a71-a110-a2ce90ab2be2 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "c20f05a7-0dec-449a-92d5-a73494ab9b6c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:04:32 np0005539552 nova_compute[233724]: 2025-11-29 08:04:32.523 233728 DEBUG oslo_concurrency.lockutils [None req-a9cb5a08-5239-4a71-a110-a2ce90ab2be2 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "c20f05a7-0dec-449a-92d5-a73494ab9b6c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:04:32 np0005539552 nova_compute[233724]: 2025-11-29 08:04:32.525 233728 INFO nova.compute.manager [None req-a9cb5a08-5239-4a71-a110-a2ce90ab2be2 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] Terminating instance
Nov 29 03:04:32 np0005539552 nova_compute[233724]: 2025-11-29 08:04:32.527 233728 DEBUG oslo_concurrency.lockutils [None req-a9cb5a08-5239-4a71-a110-a2ce90ab2be2 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquiring lock "refresh_cache-c20f05a7-0dec-449a-92d5-a73494ab9b6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:04:32 np0005539552 nova_compute[233724]: 2025-11-29 08:04:32.528 233728 DEBUG oslo_concurrency.lockutils [None req-a9cb5a08-5239-4a71-a110-a2ce90ab2be2 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquired lock "refresh_cache-c20f05a7-0dec-449a-92d5-a73494ab9b6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:04:32 np0005539552 nova_compute[233724]: 2025-11-29 08:04:32.528 233728 DEBUG nova.network.neutron [None req-a9cb5a08-5239-4a71-a110-a2ce90ab2be2 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:04:32 np0005539552 nova_compute[233724]: 2025-11-29 08:04:32.531 233728 DEBUG nova.network.neutron [None req-2907e39d-8e2a-472d-ba58-f986c56ec746 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:04:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:32.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:32 np0005539552 nova_compute[233724]: 2025-11-29 08:04:32.785 233728 DEBUG nova.network.neutron [None req-a9cb5a08-5239-4a71-a110-a2ce90ab2be2 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:04:32 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e202 e202: 3 total, 3 up, 3 in
Nov 29 03:04:32 np0005539552 nova_compute[233724]: 2025-11-29 08:04:32.888 233728 DEBUG nova.network.neutron [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Successfully created port: 0ed27b66-0e55-4592-aa78-847e3b01509f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:04:32 np0005539552 nova_compute[233724]: 2025-11-29 08:04:32.929 233728 DEBUG nova.network.neutron [None req-2907e39d-8e2a-472d-ba58-f986c56ec746 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:04:32 np0005539552 nova_compute[233724]: 2025-11-29 08:04:32.951 233728 DEBUG oslo_concurrency.lockutils [None req-2907e39d-8e2a-472d-ba58-f986c56ec746 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Releasing lock "refresh_cache-3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:04:32 np0005539552 nova_compute[233724]: 2025-11-29 08:04:32.952 233728 DEBUG nova.compute.manager [None req-2907e39d-8e2a-472d-ba58-f986c56ec746 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 03:04:33 np0005539552 nova_compute[233724]: 2025-11-29 08:04:33.014 233728 DEBUG oslo_concurrency.processutils [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.862s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:04:33 np0005539552 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000030.scope: Deactivated successfully.
Nov 29 03:04:33 np0005539552 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000030.scope: Consumed 14.183s CPU time.
Nov 29 03:04:33 np0005539552 systemd-machined[196379]: Machine qemu-16-instance-00000030 terminated.
Nov 29 03:04:33 np0005539552 nova_compute[233724]: 2025-11-29 08:04:33.122 233728 DEBUG nova.storage.rbd_utils [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] resizing rbd image d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:04:33 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:04:33 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:04:33 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:04:33 np0005539552 nova_compute[233724]: 2025-11-29 08:04:33.901 233728 DEBUG nova.network.neutron [None req-a9cb5a08-5239-4a71-a110-a2ce90ab2be2 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:04:33 np0005539552 nova_compute[233724]: 2025-11-29 08:04:33.910 233728 INFO nova.virt.libvirt.driver [-] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] Instance destroyed successfully.
Nov 29 03:04:33 np0005539552 nova_compute[233724]: 2025-11-29 08:04:33.911 233728 DEBUG nova.objects.instance [None req-2907e39d-8e2a-472d-ba58-f986c56ec746 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lazy-loading 'resources' on Instance uuid 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:04:34 np0005539552 nova_compute[233724]: 2025-11-29 08:04:34.174 233728 DEBUG oslo_concurrency.lockutils [None req-a9cb5a08-5239-4a71-a110-a2ce90ab2be2 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Releasing lock "refresh_cache-c20f05a7-0dec-449a-92d5-a73494ab9b6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:04:34 np0005539552 nova_compute[233724]: 2025-11-29 08:04:34.175 233728 DEBUG nova.compute.manager [None req-a9cb5a08-5239-4a71-a110-a2ce90ab2be2 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 03:04:34 np0005539552 nova_compute[233724]: 2025-11-29 08:04:34.254 233728 DEBUG nova.objects.instance [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lazy-loading 'migration_context' on Instance uuid d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:04:34 np0005539552 nova_compute[233724]: 2025-11-29 08:04:34.304 233728 DEBUG nova.virt.libvirt.driver [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:04:34 np0005539552 nova_compute[233724]: 2025-11-29 08:04:34.304 233728 DEBUG nova.virt.libvirt.driver [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Ensure instance console log exists: /var/lib/nova/instances/d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:04:34 np0005539552 nova_compute[233724]: 2025-11-29 08:04:34.305 233728 DEBUG oslo_concurrency.lockutils [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:04:34 np0005539552 nova_compute[233724]: 2025-11-29 08:04:34.305 233728 DEBUG oslo_concurrency.lockutils [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:04:34 np0005539552 nova_compute[233724]: 2025-11-29 08:04:34.305 233728 DEBUG oslo_concurrency.lockutils [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:04:34 np0005539552 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000031.scope: Deactivated successfully.
Nov 29 03:04:34 np0005539552 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000031.scope: Consumed 15.228s CPU time.
Nov 29 03:04:34 np0005539552 systemd-machined[196379]: Machine qemu-17-instance-00000031 terminated.
Nov 29 03:04:34 np0005539552 nova_compute[233724]: 2025-11-29 08:04:34.465 233728 INFO nova.virt.libvirt.driver [-] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] Instance destroyed successfully.
Nov 29 03:04:34 np0005539552 nova_compute[233724]: 2025-11-29 08:04:34.466 233728 DEBUG nova.objects.instance [None req-a9cb5a08-5239-4a71-a110-a2ce90ab2be2 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lazy-loading 'resources' on Instance uuid c20f05a7-0dec-449a-92d5-a73494ab9b6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:04:34 np0005539552 nova_compute[233724]: 2025-11-29 08:04:34.494 233728 DEBUG nova.network.neutron [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Successfully updated port: 0ed27b66-0e55-4592-aa78-847e3b01509f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:04:34 np0005539552 nova_compute[233724]: 2025-11-29 08:04:34.507 233728 DEBUG oslo_concurrency.lockutils [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "refresh_cache-d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:04:34 np0005539552 nova_compute[233724]: 2025-11-29 08:04:34.508 233728 DEBUG oslo_concurrency.lockutils [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquired lock "refresh_cache-d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:04:34 np0005539552 nova_compute[233724]: 2025-11-29 08:04:34.508 233728 DEBUG nova.network.neutron [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:04:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:34.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:34 np0005539552 nova_compute[233724]: 2025-11-29 08:04:34.573 233728 DEBUG nova.compute.manager [req-dd25df10-e2f6-4e7b-9ce4-b3f383f3c317 req-3a9ce6da-4de5-4234-9c49-5b06e7d99923 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Received event network-changed-0ed27b66-0e55-4592-aa78-847e3b01509f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:04:34 np0005539552 nova_compute[233724]: 2025-11-29 08:04:34.573 233728 DEBUG nova.compute.manager [req-dd25df10-e2f6-4e7b-9ce4-b3f383f3c317 req-3a9ce6da-4de5-4234-9c49-5b06e7d99923 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Refreshing instance network info cache due to event network-changed-0ed27b66-0e55-4592-aa78-847e3b01509f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:04:34 np0005539552 nova_compute[233724]: 2025-11-29 08:04:34.574 233728 DEBUG oslo_concurrency.lockutils [req-dd25df10-e2f6-4e7b-9ce4-b3f383f3c317 req-3a9ce6da-4de5-4234-9c49-5b06e7d99923 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:04:34 np0005539552 nova_compute[233724]: 2025-11-29 08:04:34.672 233728 INFO nova.virt.libvirt.driver [None req-2907e39d-8e2a-472d-ba58-f986c56ec746 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] Deleting instance files /var/lib/nova/instances/3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2_del
Nov 29 03:04:34 np0005539552 nova_compute[233724]: 2025-11-29 08:04:34.673 233728 INFO nova.virt.libvirt.driver [None req-2907e39d-8e2a-472d-ba58-f986c56ec746 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] Deletion of /var/lib/nova/instances/3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2_del complete
Nov 29 03:04:34 np0005539552 nova_compute[233724]: 2025-11-29 08:04:34.676 233728 DEBUG nova.network.neutron [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:04:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:34.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:34 np0005539552 nova_compute[233724]: 2025-11-29 08:04:34.733 233728 INFO nova.compute.manager [None req-2907e39d-8e2a-472d-ba58-f986c56ec746 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] Took 1.78 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:04:34 np0005539552 nova_compute[233724]: 2025-11-29 08:04:34.734 233728 DEBUG oslo.service.loopingcall [None req-2907e39d-8e2a-472d-ba58-f986c56ec746 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:04:34 np0005539552 nova_compute[233724]: 2025-11-29 08:04:34.734 233728 DEBUG nova.compute.manager [-] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:04:34 np0005539552 nova_compute[233724]: 2025-11-29 08:04:34.734 233728 DEBUG nova.network.neutron [-] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:04:34 np0005539552 nova_compute[233724]: 2025-11-29 08:04:34.845 233728 DEBUG nova.network.neutron [-] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:04:34 np0005539552 nova_compute[233724]: 2025-11-29 08:04:34.867 233728 DEBUG nova.network.neutron [-] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:04:34 np0005539552 nova_compute[233724]: 2025-11-29 08:04:34.881 233728 INFO nova.compute.manager [-] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] Took 0.15 seconds to deallocate network for instance.#033[00m
Nov 29 03:04:34 np0005539552 nova_compute[233724]: 2025-11-29 08:04:34.906 233728 INFO nova.virt.libvirt.driver [None req-a9cb5a08-5239-4a71-a110-a2ce90ab2be2 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] Deleting instance files /var/lib/nova/instances/c20f05a7-0dec-449a-92d5-a73494ab9b6c_del#033[00m
Nov 29 03:04:34 np0005539552 nova_compute[233724]: 2025-11-29 08:04:34.907 233728 INFO nova.virt.libvirt.driver [None req-a9cb5a08-5239-4a71-a110-a2ce90ab2be2 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] Deletion of /var/lib/nova/instances/c20f05a7-0dec-449a-92d5-a73494ab9b6c_del complete#033[00m
Nov 29 03:04:34 np0005539552 nova_compute[233724]: 2025-11-29 08:04:34.949 233728 DEBUG oslo_concurrency.lockutils [None req-2907e39d-8e2a-472d-ba58-f986c56ec746 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:34 np0005539552 nova_compute[233724]: 2025-11-29 08:04:34.949 233728 DEBUG oslo_concurrency.lockutils [None req-2907e39d-8e2a-472d-ba58-f986c56ec746 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:34 np0005539552 nova_compute[233724]: 2025-11-29 08:04:34.980 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:34 np0005539552 nova_compute[233724]: 2025-11-29 08:04:34.989 233728 INFO nova.compute.manager [None req-a9cb5a08-5239-4a71-a110-a2ce90ab2be2 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] Took 0.81 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:04:34 np0005539552 nova_compute[233724]: 2025-11-29 08:04:34.990 233728 DEBUG oslo.service.loopingcall [None req-a9cb5a08-5239-4a71-a110-a2ce90ab2be2 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:04:34 np0005539552 nova_compute[233724]: 2025-11-29 08:04:34.990 233728 DEBUG nova.compute.manager [-] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:04:34 np0005539552 nova_compute[233724]: 2025-11-29 08:04:34.990 233728 DEBUG nova.network.neutron [-] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:04:35 np0005539552 nova_compute[233724]: 2025-11-29 08:04:35.052 233728 DEBUG oslo_concurrency.processutils [None req-2907e39d-8e2a-472d-ba58-f986c56ec746 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:35 np0005539552 nova_compute[233724]: 2025-11-29 08:04:35.124 233728 DEBUG nova.network.neutron [-] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:04:35 np0005539552 nova_compute[233724]: 2025-11-29 08:04:35.140 233728 DEBUG nova.network.neutron [-] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:04:35 np0005539552 nova_compute[233724]: 2025-11-29 08:04:35.156 233728 INFO nova.compute.manager [-] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] Took 0.17 seconds to deallocate network for instance.#033[00m
Nov 29 03:04:35 np0005539552 nova_compute[233724]: 2025-11-29 08:04:35.209 233728 DEBUG oslo_concurrency.lockutils [None req-a9cb5a08-5239-4a71-a110-a2ce90ab2be2 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:35 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:04:35 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1687457659' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:04:35 np0005539552 nova_compute[233724]: 2025-11-29 08:04:35.328 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:35 np0005539552 nova_compute[233724]: 2025-11-29 08:04:35.390 233728 DEBUG nova.network.neutron [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Updating instance_info_cache with network_info: [{"id": "0ed27b66-0e55-4592-aa78-847e3b01509f", "address": "fa:16:3e:53:78:dd", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ed27b66-0e", "ovs_interfaceid": "0ed27b66-0e55-4592-aa78-847e3b01509f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:04:35 np0005539552 nova_compute[233724]: 2025-11-29 08:04:35.410 233728 DEBUG oslo_concurrency.lockutils [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Releasing lock "refresh_cache-d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:04:35 np0005539552 nova_compute[233724]: 2025-11-29 08:04:35.411 233728 DEBUG nova.compute.manager [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Instance network_info: |[{"id": "0ed27b66-0e55-4592-aa78-847e3b01509f", "address": "fa:16:3e:53:78:dd", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ed27b66-0e", "ovs_interfaceid": "0ed27b66-0e55-4592-aa78-847e3b01509f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:04:35 np0005539552 nova_compute[233724]: 2025-11-29 08:04:35.411 233728 DEBUG oslo_concurrency.lockutils [req-dd25df10-e2f6-4e7b-9ce4-b3f383f3c317 req-3a9ce6da-4de5-4234-9c49-5b06e7d99923 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:04:35 np0005539552 nova_compute[233724]: 2025-11-29 08:04:35.411 233728 DEBUG nova.network.neutron [req-dd25df10-e2f6-4e7b-9ce4-b3f383f3c317 req-3a9ce6da-4de5-4234-9c49-5b06e7d99923 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Refreshing network info cache for port 0ed27b66-0e55-4592-aa78-847e3b01509f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:04:35 np0005539552 nova_compute[233724]: 2025-11-29 08:04:35.414 233728 DEBUG nova.virt.libvirt.driver [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Start _get_guest_xml network_info=[{"id": "0ed27b66-0e55-4592-aa78-847e3b01509f", "address": "fa:16:3e:53:78:dd", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ed27b66-0e", "ovs_interfaceid": "0ed27b66-0e55-4592-aa78-847e3b01509f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:04:35 np0005539552 nova_compute[233724]: 2025-11-29 08:04:35.420 233728 WARNING nova.virt.libvirt.driver [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:04:35 np0005539552 nova_compute[233724]: 2025-11-29 08:04:35.427 233728 DEBUG nova.virt.libvirt.host [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:04:35 np0005539552 nova_compute[233724]: 2025-11-29 08:04:35.428 233728 DEBUG nova.virt.libvirt.host [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:04:35 np0005539552 nova_compute[233724]: 2025-11-29 08:04:35.431 233728 DEBUG nova.virt.libvirt.host [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:04:35 np0005539552 nova_compute[233724]: 2025-11-29 08:04:35.431 233728 DEBUG nova.virt.libvirt.host [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:04:35 np0005539552 nova_compute[233724]: 2025-11-29 08:04:35.433 233728 DEBUG nova.virt.libvirt.driver [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:04:35 np0005539552 nova_compute[233724]: 2025-11-29 08:04:35.433 233728 DEBUG nova.virt.hardware [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:04:35 np0005539552 nova_compute[233724]: 2025-11-29 08:04:35.433 233728 DEBUG nova.virt.hardware [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:04:35 np0005539552 nova_compute[233724]: 2025-11-29 08:04:35.433 233728 DEBUG nova.virt.hardware [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:04:35 np0005539552 nova_compute[233724]: 2025-11-29 08:04:35.434 233728 DEBUG nova.virt.hardware [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:04:35 np0005539552 nova_compute[233724]: 2025-11-29 08:04:35.434 233728 DEBUG nova.virt.hardware [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:04:35 np0005539552 nova_compute[233724]: 2025-11-29 08:04:35.434 233728 DEBUG nova.virt.hardware [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:04:35 np0005539552 nova_compute[233724]: 2025-11-29 08:04:35.434 233728 DEBUG nova.virt.hardware [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:04:35 np0005539552 nova_compute[233724]: 2025-11-29 08:04:35.434 233728 DEBUG nova.virt.hardware [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:04:35 np0005539552 nova_compute[233724]: 2025-11-29 08:04:35.435 233728 DEBUG nova.virt.hardware [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:04:35 np0005539552 nova_compute[233724]: 2025-11-29 08:04:35.435 233728 DEBUG nova.virt.hardware [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:04:35 np0005539552 nova_compute[233724]: 2025-11-29 08:04:35.435 233728 DEBUG nova.virt.hardware [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:04:35 np0005539552 nova_compute[233724]: 2025-11-29 08:04:35.439 233728 DEBUG oslo_concurrency.processutils [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:35 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:04:35 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/421999966' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:04:35 np0005539552 nova_compute[233724]: 2025-11-29 08:04:35.487 233728 DEBUG oslo_concurrency.processutils [None req-2907e39d-8e2a-472d-ba58-f986c56ec746 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:35 np0005539552 nova_compute[233724]: 2025-11-29 08:04:35.494 233728 DEBUG nova.compute.provider_tree [None req-2907e39d-8e2a-472d-ba58-f986c56ec746 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:04:35 np0005539552 nova_compute[233724]: 2025-11-29 08:04:35.515 233728 DEBUG nova.scheduler.client.report [None req-2907e39d-8e2a-472d-ba58-f986c56ec746 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:04:35 np0005539552 nova_compute[233724]: 2025-11-29 08:04:35.554 233728 DEBUG oslo_concurrency.lockutils [None req-2907e39d-8e2a-472d-ba58-f986c56ec746 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:35 np0005539552 nova_compute[233724]: 2025-11-29 08:04:35.557 233728 DEBUG oslo_concurrency.lockutils [None req-a9cb5a08-5239-4a71-a110-a2ce90ab2be2 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.349s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:35 np0005539552 nova_compute[233724]: 2025-11-29 08:04:35.585 233728 INFO nova.scheduler.client.report [None req-2907e39d-8e2a-472d-ba58-f986c56ec746 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Deleted allocations for instance 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2#033[00m
Nov 29 03:04:35 np0005539552 nova_compute[233724]: 2025-11-29 08:04:35.647 233728 DEBUG oslo_concurrency.processutils [None req-a9cb5a08-5239-4a71-a110-a2ce90ab2be2 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:35 np0005539552 nova_compute[233724]: 2025-11-29 08:04:35.677 233728 DEBUG oslo_concurrency.lockutils [None req-2907e39d-8e2a-472d-ba58-f986c56ec746 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.338s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:35 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e203 e203: 3 total, 3 up, 3 in
Nov 29 03:04:35 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:04:35 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2395439962' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:04:35 np0005539552 nova_compute[233724]: 2025-11-29 08:04:35.983 233728 DEBUG oslo_concurrency.processutils [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:36 np0005539552 nova_compute[233724]: 2025-11-29 08:04:36.004 233728 DEBUG nova.storage.rbd_utils [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] rbd image d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:36 np0005539552 nova_compute[233724]: 2025-11-29 08:04:36.008 233728 DEBUG oslo_concurrency.processutils [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:36 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:04:36 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1878998816' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:04:36 np0005539552 nova_compute[233724]: 2025-11-29 08:04:36.089 233728 DEBUG oslo_concurrency.processutils [None req-a9cb5a08-5239-4a71-a110-a2ce90ab2be2 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:36 np0005539552 nova_compute[233724]: 2025-11-29 08:04:36.095 233728 DEBUG nova.compute.provider_tree [None req-a9cb5a08-5239-4a71-a110-a2ce90ab2be2 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:04:36 np0005539552 nova_compute[233724]: 2025-11-29 08:04:36.124 233728 DEBUG nova.scheduler.client.report [None req-a9cb5a08-5239-4a71-a110-a2ce90ab2be2 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:04:36 np0005539552 nova_compute[233724]: 2025-11-29 08:04:36.145 233728 DEBUG oslo_concurrency.lockutils [None req-a9cb5a08-5239-4a71-a110-a2ce90ab2be2 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:36 np0005539552 nova_compute[233724]: 2025-11-29 08:04:36.170 233728 INFO nova.scheduler.client.report [None req-a9cb5a08-5239-4a71-a110-a2ce90ab2be2 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Deleted allocations for instance c20f05a7-0dec-449a-92d5-a73494ab9b6c#033[00m
Nov 29 03:04:36 np0005539552 nova_compute[233724]: 2025-11-29 08:04:36.229 233728 DEBUG oslo_concurrency.lockutils [None req-a9cb5a08-5239-4a71-a110-a2ce90ab2be2 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "c20f05a7-0dec-449a-92d5-a73494ab9b6c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:36 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e204 e204: 3 total, 3 up, 3 in
Nov 29 03:04:36 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:04:36 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/448267313' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:04:36 np0005539552 nova_compute[233724]: 2025-11-29 08:04:36.432 233728 DEBUG oslo_concurrency.processutils [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:36 np0005539552 nova_compute[233724]: 2025-11-29 08:04:36.433 233728 DEBUG nova.virt.libvirt.vif [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:04:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-803337407',display_name='tempest-ImagesTestJSON-server-803337407',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-imagestestjson-server-803337407',id=54,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='638fd52fccf14f16b56d0860553063f3',ramdisk_id='',reservation_id='r-ut2jfefv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1682881466',owner_user_name='tempest-ImagesTestJSON-1682881466-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:04:31Z,user_data=None,user_id='fddc5f5801764ee19d5253e2cab34df3',uuid=d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0ed27b66-0e55-4592-aa78-847e3b01509f", "address": "fa:16:3e:53:78:dd", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ed27b66-0e", "ovs_interfaceid": "0ed27b66-0e55-4592-aa78-847e3b01509f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:04:36 np0005539552 nova_compute[233724]: 2025-11-29 08:04:36.433 233728 DEBUG nova.network.os_vif_util [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Converting VIF {"id": "0ed27b66-0e55-4592-aa78-847e3b01509f", "address": "fa:16:3e:53:78:dd", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ed27b66-0e", "ovs_interfaceid": "0ed27b66-0e55-4592-aa78-847e3b01509f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:04:36 np0005539552 nova_compute[233724]: 2025-11-29 08:04:36.434 233728 DEBUG nova.network.os_vif_util [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:78:dd,bridge_name='br-int',has_traffic_filtering=True,id=0ed27b66-0e55-4592-aa78-847e3b01509f,network=Network(f01d29c1-afcb-4909-9abf-f7d31e4549d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ed27b66-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:04:36 np0005539552 nova_compute[233724]: 2025-11-29 08:04:36.435 233728 DEBUG nova.objects.instance [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lazy-loading 'pci_devices' on Instance uuid d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:04:36 np0005539552 nova_compute[233724]: 2025-11-29 08:04:36.451 233728 DEBUG nova.virt.libvirt.driver [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:04:36 np0005539552 nova_compute[233724]:  <uuid>d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa</uuid>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:  <name>instance-00000036</name>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:04:36 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:      <nova:name>tempest-ImagesTestJSON-server-803337407</nova:name>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:04:35</nova:creationTime>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:04:36 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:        <nova:user uuid="fddc5f5801764ee19d5253e2cab34df3">tempest-ImagesTestJSON-1682881466-project-member</nova:user>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:        <nova:project uuid="638fd52fccf14f16b56d0860553063f3">tempest-ImagesTestJSON-1682881466</nova:project>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:        <nova:port uuid="0ed27b66-0e55-4592-aa78-847e3b01509f">
Nov 29 03:04:36 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:      <entry name="serial">d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa</entry>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:      <entry name="uuid">d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa</entry>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:04:36 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa_disk">
Nov 29 03:04:36 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:04:36 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:04:36 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa_disk.config">
Nov 29 03:04:36 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:04:36 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:04:36 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:53:78:dd"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:      <target dev="tap0ed27b66-0e"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:04:36 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa/console.log" append="off"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:04:36 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:04:36 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:04:36 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:04:36 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:04:36 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:04:36 np0005539552 nova_compute[233724]: 2025-11-29 08:04:36.453 233728 DEBUG nova.compute.manager [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Preparing to wait for external event network-vif-plugged-0ed27b66-0e55-4592-aa78-847e3b01509f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:04:36 np0005539552 nova_compute[233724]: 2025-11-29 08:04:36.453 233728 DEBUG oslo_concurrency.lockutils [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:36 np0005539552 nova_compute[233724]: 2025-11-29 08:04:36.453 233728 DEBUG oslo_concurrency.lockutils [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:36 np0005539552 nova_compute[233724]: 2025-11-29 08:04:36.453 233728 DEBUG oslo_concurrency.lockutils [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:36 np0005539552 nova_compute[233724]: 2025-11-29 08:04:36.454 233728 DEBUG nova.virt.libvirt.vif [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:04:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-803337407',display_name='tempest-ImagesTestJSON-server-803337407',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-imagestestjson-server-803337407',id=54,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='638fd52fccf14f16b56d0860553063f3',ramdisk_id='',reservation_id='r-ut2jfefv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1682881466',owner_user_name='tempest-ImagesTestJSON-1682881466-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:04:31Z,user_data=None,user_id='fddc5f5801764ee19d5253e2cab34df3',uuid=d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0ed27b66-0e55-4592-aa78-847e3b01509f", "address": "fa:16:3e:53:78:dd", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ed27b66-0e", "ovs_interfaceid": "0ed27b66-0e55-4592-aa78-847e3b01509f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:04:36 np0005539552 nova_compute[233724]: 2025-11-29 08:04:36.454 233728 DEBUG nova.network.os_vif_util [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Converting VIF {"id": "0ed27b66-0e55-4592-aa78-847e3b01509f", "address": "fa:16:3e:53:78:dd", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ed27b66-0e", "ovs_interfaceid": "0ed27b66-0e55-4592-aa78-847e3b01509f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:04:36 np0005539552 nova_compute[233724]: 2025-11-29 08:04:36.455 233728 DEBUG nova.network.os_vif_util [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:78:dd,bridge_name='br-int',has_traffic_filtering=True,id=0ed27b66-0e55-4592-aa78-847e3b01509f,network=Network(f01d29c1-afcb-4909-9abf-f7d31e4549d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ed27b66-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:04:36 np0005539552 nova_compute[233724]: 2025-11-29 08:04:36.455 233728 DEBUG os_vif [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:78:dd,bridge_name='br-int',has_traffic_filtering=True,id=0ed27b66-0e55-4592-aa78-847e3b01509f,network=Network(f01d29c1-afcb-4909-9abf-f7d31e4549d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ed27b66-0e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:04:36 np0005539552 nova_compute[233724]: 2025-11-29 08:04:36.456 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:36 np0005539552 nova_compute[233724]: 2025-11-29 08:04:36.456 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:36 np0005539552 nova_compute[233724]: 2025-11-29 08:04:36.456 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:04:36 np0005539552 nova_compute[233724]: 2025-11-29 08:04:36.460 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:36 np0005539552 nova_compute[233724]: 2025-11-29 08:04:36.460 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0ed27b66-0e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:36 np0005539552 nova_compute[233724]: 2025-11-29 08:04:36.461 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0ed27b66-0e, col_values=(('external_ids', {'iface-id': '0ed27b66-0e55-4592-aa78-847e3b01509f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:53:78:dd', 'vm-uuid': 'd9ae79a9-263e-4f0c-8538-a4f6b99a3cfa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:36 np0005539552 NetworkManager[48926]: <info>  [1764403476.4631] manager: (tap0ed27b66-0e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/85)
Nov 29 03:04:36 np0005539552 nova_compute[233724]: 2025-11-29 08:04:36.465 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:04:36 np0005539552 nova_compute[233724]: 2025-11-29 08:04:36.469 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:36 np0005539552 nova_compute[233724]: 2025-11-29 08:04:36.470 233728 INFO os_vif [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:78:dd,bridge_name='br-int',has_traffic_filtering=True,id=0ed27b66-0e55-4592-aa78-847e3b01509f,network=Network(f01d29c1-afcb-4909-9abf-f7d31e4549d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ed27b66-0e')#033[00m
Nov 29 03:04:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:36.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:36 np0005539552 nova_compute[233724]: 2025-11-29 08:04:36.518 233728 DEBUG nova.virt.libvirt.driver [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:04:36 np0005539552 nova_compute[233724]: 2025-11-29 08:04:36.519 233728 DEBUG nova.virt.libvirt.driver [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:04:36 np0005539552 nova_compute[233724]: 2025-11-29 08:04:36.519 233728 DEBUG nova.virt.libvirt.driver [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] No VIF found with MAC fa:16:3e:53:78:dd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:04:36 np0005539552 nova_compute[233724]: 2025-11-29 08:04:36.519 233728 INFO nova.virt.libvirt.driver [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Using config drive#033[00m
Nov 29 03:04:36 np0005539552 nova_compute[233724]: 2025-11-29 08:04:36.545 233728 DEBUG nova.storage.rbd_utils [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] rbd image d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 03:04:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:36.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 03:04:36 np0005539552 nova_compute[233724]: 2025-11-29 08:04:36.911 233728 INFO nova.virt.libvirt.driver [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Creating config drive at /var/lib/nova/instances/d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa/disk.config#033[00m
Nov 29 03:04:36 np0005539552 nova_compute[233724]: 2025-11-29 08:04:36.915 233728 DEBUG oslo_concurrency.processutils [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmjm5dhvb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:36 np0005539552 nova_compute[233724]: 2025-11-29 08:04:36.997 233728 DEBUG nova.network.neutron [req-dd25df10-e2f6-4e7b-9ce4-b3f383f3c317 req-3a9ce6da-4de5-4234-9c49-5b06e7d99923 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Updated VIF entry in instance network info cache for port 0ed27b66-0e55-4592-aa78-847e3b01509f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:04:36 np0005539552 nova_compute[233724]: 2025-11-29 08:04:36.998 233728 DEBUG nova.network.neutron [req-dd25df10-e2f6-4e7b-9ce4-b3f383f3c317 req-3a9ce6da-4de5-4234-9c49-5b06e7d99923 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Updating instance_info_cache with network_info: [{"id": "0ed27b66-0e55-4592-aa78-847e3b01509f", "address": "fa:16:3e:53:78:dd", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ed27b66-0e", "ovs_interfaceid": "0ed27b66-0e55-4592-aa78-847e3b01509f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:04:37 np0005539552 nova_compute[233724]: 2025-11-29 08:04:37.020 233728 DEBUG oslo_concurrency.lockutils [req-dd25df10-e2f6-4e7b-9ce4-b3f383f3c317 req-3a9ce6da-4de5-4234-9c49-5b06e7d99923 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:04:37 np0005539552 nova_compute[233724]: 2025-11-29 08:04:37.040 233728 DEBUG oslo_concurrency.processutils [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmjm5dhvb" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:37 np0005539552 nova_compute[233724]: 2025-11-29 08:04:37.065 233728 DEBUG nova.storage.rbd_utils [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] rbd image d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:37 np0005539552 nova_compute[233724]: 2025-11-29 08:04:37.068 233728 DEBUG oslo_concurrency.processutils [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa/disk.config d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e204 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:37 np0005539552 nova_compute[233724]: 2025-11-29 08:04:37.242 233728 DEBUG oslo_concurrency.processutils [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa/disk.config d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:37 np0005539552 nova_compute[233724]: 2025-11-29 08:04:37.243 233728 INFO nova.virt.libvirt.driver [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Deleting local config drive /var/lib/nova/instances/d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa/disk.config because it was imported into RBD.#033[00m
Nov 29 03:04:37 np0005539552 nova_compute[233724]: 2025-11-29 08:04:37.273 233728 DEBUG oslo_concurrency.lockutils [None req-ac455539-7665-4b9c-9746-726cb03edc6c 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquiring lock "d330098d-4454-4a83-8b6f-bc9828837e48" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:37 np0005539552 nova_compute[233724]: 2025-11-29 08:04:37.273 233728 DEBUG oslo_concurrency.lockutils [None req-ac455539-7665-4b9c-9746-726cb03edc6c 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "d330098d-4454-4a83-8b6f-bc9828837e48" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:37 np0005539552 nova_compute[233724]: 2025-11-29 08:04:37.273 233728 DEBUG oslo_concurrency.lockutils [None req-ac455539-7665-4b9c-9746-726cb03edc6c 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquiring lock "d330098d-4454-4a83-8b6f-bc9828837e48-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:37 np0005539552 nova_compute[233724]: 2025-11-29 08:04:37.274 233728 DEBUG oslo_concurrency.lockutils [None req-ac455539-7665-4b9c-9746-726cb03edc6c 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "d330098d-4454-4a83-8b6f-bc9828837e48-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:37 np0005539552 nova_compute[233724]: 2025-11-29 08:04:37.274 233728 DEBUG oslo_concurrency.lockutils [None req-ac455539-7665-4b9c-9746-726cb03edc6c 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "d330098d-4454-4a83-8b6f-bc9828837e48-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:37 np0005539552 nova_compute[233724]: 2025-11-29 08:04:37.275 233728 INFO nova.compute.manager [None req-ac455539-7665-4b9c-9746-726cb03edc6c 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Terminating instance#033[00m
Nov 29 03:04:37 np0005539552 nova_compute[233724]: 2025-11-29 08:04:37.276 233728 DEBUG oslo_concurrency.lockutils [None req-ac455539-7665-4b9c-9746-726cb03edc6c 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquiring lock "refresh_cache-d330098d-4454-4a83-8b6f-bc9828837e48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:04:37 np0005539552 nova_compute[233724]: 2025-11-29 08:04:37.276 233728 DEBUG oslo_concurrency.lockutils [None req-ac455539-7665-4b9c-9746-726cb03edc6c 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquired lock "refresh_cache-d330098d-4454-4a83-8b6f-bc9828837e48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:04:37 np0005539552 nova_compute[233724]: 2025-11-29 08:04:37.276 233728 DEBUG nova.network.neutron [None req-ac455539-7665-4b9c-9746-726cb03edc6c 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:04:37 np0005539552 kernel: tap0ed27b66-0e: entered promiscuous mode
Nov 29 03:04:37 np0005539552 NetworkManager[48926]: <info>  [1764403477.2886] manager: (tap0ed27b66-0e): new Tun device (/org/freedesktop/NetworkManager/Devices/86)
Nov 29 03:04:37 np0005539552 nova_compute[233724]: 2025-11-29 08:04:37.289 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:37 np0005539552 ovn_controller[133798]: 2025-11-29T08:04:37Z|00158|binding|INFO|Claiming lport 0ed27b66-0e55-4592-aa78-847e3b01509f for this chassis.
Nov 29 03:04:37 np0005539552 ovn_controller[133798]: 2025-11-29T08:04:37Z|00159|binding|INFO|0ed27b66-0e55-4592-aa78-847e3b01509f: Claiming fa:16:3e:53:78:dd 10.100.0.5
Nov 29 03:04:37 np0005539552 nova_compute[233724]: 2025-11-29 08:04:37.295 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:37 np0005539552 nova_compute[233724]: 2025-11-29 08:04:37.300 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:04:37.302 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:78:dd 10.100.0.5'], port_security=['fa:16:3e:53:78:dd 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'd9ae79a9-263e-4f0c-8538-a4f6b99a3cfa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f01d29c1-afcb-4909-9abf-f7d31e4549d8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '638fd52fccf14f16b56d0860553063f3', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a57b53e5-9055-46ae-8ab4-d4a8a62173cb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d89b288b-bbc6-47fa-ad12-8aab94ffc78f, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=0ed27b66-0e55-4592-aa78-847e3b01509f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:04:37.303 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 0ed27b66-0e55-4592-aa78-847e3b01509f in datapath f01d29c1-afcb-4909-9abf-f7d31e4549d8 bound to our chassis#033[00m
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:04:37.304 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f01d29c1-afcb-4909-9abf-f7d31e4549d8#033[00m
Nov 29 03:04:37 np0005539552 systemd-udevd[255290]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:04:37 np0005539552 systemd-machined[196379]: New machine qemu-18-instance-00000036.
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:04:37.316 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d472dd73-d950-4d9a-90b9-2c0a5729d29b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:04:37.317 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf01d29c1-a1 in ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:04:37.321 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf01d29c1-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:04:37.321 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[afb18aae-77ba-49a9-8ee9-78200c5aa717]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:04:37.323 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[39eccaf7-9567-439d-9ddb-d77b358a03a2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:37 np0005539552 NetworkManager[48926]: <info>  [1764403477.3274] device (tap0ed27b66-0e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:04:37 np0005539552 NetworkManager[48926]: <info>  [1764403477.3282] device (tap0ed27b66-0e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:04:37 np0005539552 systemd[1]: Started Virtual Machine qemu-18-instance-00000036.
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:04:37.336 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[1547a9ec-6709-4bc8-9f27-c13549145101]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:04:37.363 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b3cb147c-34e9-43d2-b956-9cebe84ca071]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:37 np0005539552 nova_compute[233724]: 2025-11-29 08:04:37.372 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:37 np0005539552 ovn_controller[133798]: 2025-11-29T08:04:37Z|00160|binding|INFO|Setting lport 0ed27b66-0e55-4592-aa78-847e3b01509f ovn-installed in OVS
Nov 29 03:04:37 np0005539552 ovn_controller[133798]: 2025-11-29T08:04:37Z|00161|binding|INFO|Setting lport 0ed27b66-0e55-4592-aa78-847e3b01509f up in Southbound
Nov 29 03:04:37 np0005539552 nova_compute[233724]: 2025-11-29 08:04:37.376 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:04:37.393 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[a2e84ed5-2375-4c9e-b081-3cc562ef7829]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:04:37.398 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[9ec18160-912c-4f5f-b38f-afe4331b945d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:37 np0005539552 systemd-udevd[255293]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:04:37 np0005539552 NetworkManager[48926]: <info>  [1764403477.3992] manager: (tapf01d29c1-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/87)
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:04:37.426 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[2c277038-81ff-4dfe-8b8f-0562ee31c8bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:04:37.429 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[bdacad75-19f6-4dd6-816d-a7002831c4c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:37 np0005539552 NetworkManager[48926]: <info>  [1764403477.4515] device (tapf01d29c1-a0): carrier: link connected
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:04:37.457 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[af09d9bd-ce4d-4edb-96c8-87db24fd2c38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:04:37.475 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[641aa6cd-0225-4101-8ea2-640204a8bdbc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf01d29c1-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:77:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 49], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652313, 'reachable_time': 25192, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255322, 'error': None, 'target': 'ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:04:37.488 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[5eb9c094-caf4-484a-84bd-3dcf15b0b943]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe96:77b2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 652313, 'tstamp': 652313}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255323, 'error': None, 'target': 'ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:04:37.503 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a7e72813-52b4-49b0-a63e-3b1538d41002]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf01d29c1-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:77:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 49], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652313, 'reachable_time': 25192, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 255324, 'error': None, 'target': 'ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:04:37.528 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b0037979-83d9-4505-b1a2-39b2bd731fcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e205 e205: 3 total, 3 up, 3 in
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:04:37.580 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[01ad9516-9ed2-4113-81bf-f50cb963f339]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:04:37.581 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf01d29c1-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:04:37.581 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:04:37.582 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf01d29c1-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:37 np0005539552 nova_compute[233724]: 2025-11-29 08:04:37.583 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:37 np0005539552 NetworkManager[48926]: <info>  [1764403477.5840] manager: (tapf01d29c1-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/88)
Nov 29 03:04:37 np0005539552 kernel: tapf01d29c1-a0: entered promiscuous mode
Nov 29 03:04:37 np0005539552 nova_compute[233724]: 2025-11-29 08:04:37.586 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:04:37.587 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf01d29c1-a0, col_values=(('external_ids', {'iface-id': '2247adf2-4048-41de-ba3c-ac69d728838f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:37 np0005539552 nova_compute[233724]: 2025-11-29 08:04:37.588 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:37 np0005539552 ovn_controller[133798]: 2025-11-29T08:04:37Z|00162|binding|INFO|Releasing lport 2247adf2-4048-41de-ba3c-ac69d728838f from this chassis (sb_readonly=0)
Nov 29 03:04:37 np0005539552 nova_compute[233724]: 2025-11-29 08:04:37.608 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:04:37.609 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f01d29c1-afcb-4909-9abf-f7d31e4549d8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f01d29c1-afcb-4909-9abf-f7d31e4549d8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:04:37.610 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[60b2806c-29a3-44b4-afec-c30aa70ec674]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:04:37.611 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-f01d29c1-afcb-4909-9abf-f7d31e4549d8
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/f01d29c1-afcb-4909-9abf-f7d31e4549d8.pid.haproxy
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID f01d29c1-afcb-4909-9abf-f7d31e4549d8
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:04:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:04:37.613 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8', 'env', 'PROCESS_TAG=haproxy-f01d29c1-afcb-4909-9abf-f7d31e4549d8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f01d29c1-afcb-4909-9abf-f7d31e4549d8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:04:37 np0005539552 nova_compute[233724]: 2025-11-29 08:04:37.869 233728 DEBUG nova.network.neutron [None req-ac455539-7665-4b9c-9746-726cb03edc6c 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:04:37 np0005539552 podman[255356]: 2025-11-29 08:04:37.952710026 +0000 UTC m=+0.049794426 container create b6af50b904010fe97ab0cbe841910ca71741caefb79231fb0843cdc8c24cc08b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:04:37 np0005539552 systemd[1]: Started libpod-conmon-b6af50b904010fe97ab0cbe841910ca71741caefb79231fb0843cdc8c24cc08b.scope.
Nov 29 03:04:38 np0005539552 podman[255356]: 2025-11-29 08:04:37.92509733 +0000 UTC m=+0.022181760 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:04:38 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:04:38 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bddc65343fa29c4399beb0a27fa36178a12a0ba056b636168ce3f5c663375b2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:04:38 np0005539552 podman[255356]: 2025-11-29 08:04:38.042473739 +0000 UTC m=+0.139558159 container init b6af50b904010fe97ab0cbe841910ca71741caefb79231fb0843cdc8c24cc08b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 03:04:38 np0005539552 podman[255356]: 2025-11-29 08:04:38.048753048 +0000 UTC m=+0.145837448 container start b6af50b904010fe97ab0cbe841910ca71741caefb79231fb0843cdc8c24cc08b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:04:38 np0005539552 neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8[255387]: [NOTICE]   (255411) : New worker (255414) forked
Nov 29 03:04:38 np0005539552 neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8[255387]: [NOTICE]   (255411) : Loading success.
Nov 29 03:04:38 np0005539552 nova_compute[233724]: 2025-11-29 08:04:38.153 233728 DEBUG nova.network.neutron [None req-ac455539-7665-4b9c-9746-726cb03edc6c 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:04:38 np0005539552 nova_compute[233724]: 2025-11-29 08:04:38.166 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403478.1662219, d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:04:38 np0005539552 nova_compute[233724]: 2025-11-29 08:04:38.167 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] VM Started (Lifecycle Event)#033[00m
Nov 29 03:04:38 np0005539552 nova_compute[233724]: 2025-11-29 08:04:38.188 233728 DEBUG oslo_concurrency.lockutils [None req-ac455539-7665-4b9c-9746-726cb03edc6c 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Releasing lock "refresh_cache-d330098d-4454-4a83-8b6f-bc9828837e48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:04:38 np0005539552 nova_compute[233724]: 2025-11-29 08:04:38.189 233728 DEBUG nova.compute.manager [None req-ac455539-7665-4b9c-9746-726cb03edc6c 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:04:38 np0005539552 nova_compute[233724]: 2025-11-29 08:04:38.196 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:04:38 np0005539552 nova_compute[233724]: 2025-11-29 08:04:38.199 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403478.1673691, d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:04:38 np0005539552 nova_compute[233724]: 2025-11-29 08:04:38.199 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:04:38 np0005539552 nova_compute[233724]: 2025-11-29 08:04:38.221 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:04:38 np0005539552 nova_compute[233724]: 2025-11-29 08:04:38.225 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:04:38 np0005539552 nova_compute[233724]: 2025-11-29 08:04:38.253 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:04:38 np0005539552 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000002e.scope: Deactivated successfully.
Nov 29 03:04:38 np0005539552 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000002e.scope: Consumed 14.662s CPU time.
Nov 29 03:04:38 np0005539552 systemd-machined[196379]: Machine qemu-15-instance-0000002e terminated.
Nov 29 03:04:38 np0005539552 nova_compute[233724]: 2025-11-29 08:04:38.311 233728 DEBUG nova.compute.manager [req-1864a663-20d3-42f8-9697-af77334d0841 req-e54efe4a-b83b-4833-82ad-507ffb3f614e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Received event network-vif-plugged-0ed27b66-0e55-4592-aa78-847e3b01509f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:04:38 np0005539552 nova_compute[233724]: 2025-11-29 08:04:38.311 233728 DEBUG oslo_concurrency.lockutils [req-1864a663-20d3-42f8-9697-af77334d0841 req-e54efe4a-b83b-4833-82ad-507ffb3f614e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:38 np0005539552 nova_compute[233724]: 2025-11-29 08:04:38.312 233728 DEBUG oslo_concurrency.lockutils [req-1864a663-20d3-42f8-9697-af77334d0841 req-e54efe4a-b83b-4833-82ad-507ffb3f614e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:38 np0005539552 nova_compute[233724]: 2025-11-29 08:04:38.312 233728 DEBUG oslo_concurrency.lockutils [req-1864a663-20d3-42f8-9697-af77334d0841 req-e54efe4a-b83b-4833-82ad-507ffb3f614e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:38 np0005539552 nova_compute[233724]: 2025-11-29 08:04:38.312 233728 DEBUG nova.compute.manager [req-1864a663-20d3-42f8-9697-af77334d0841 req-e54efe4a-b83b-4833-82ad-507ffb3f614e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Processing event network-vif-plugged-0ed27b66-0e55-4592-aa78-847e3b01509f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:04:38 np0005539552 nova_compute[233724]: 2025-11-29 08:04:38.312 233728 DEBUG nova.compute.manager [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:04:38 np0005539552 nova_compute[233724]: 2025-11-29 08:04:38.315 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403478.3150969, d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:04:38 np0005539552 nova_compute[233724]: 2025-11-29 08:04:38.316 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:04:38 np0005539552 nova_compute[233724]: 2025-11-29 08:04:38.317 233728 DEBUG nova.virt.libvirt.driver [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:04:38 np0005539552 nova_compute[233724]: 2025-11-29 08:04:38.320 233728 INFO nova.virt.libvirt.driver [-] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Instance spawned successfully.#033[00m
Nov 29 03:04:38 np0005539552 nova_compute[233724]: 2025-11-29 08:04:38.321 233728 DEBUG nova.virt.libvirt.driver [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:04:38 np0005539552 nova_compute[233724]: 2025-11-29 08:04:38.336 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:04:38 np0005539552 nova_compute[233724]: 2025-11-29 08:04:38.341 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:04:38 np0005539552 nova_compute[233724]: 2025-11-29 08:04:38.344 233728 DEBUG nova.virt.libvirt.driver [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:38 np0005539552 nova_compute[233724]: 2025-11-29 08:04:38.344 233728 DEBUG nova.virt.libvirt.driver [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:38 np0005539552 nova_compute[233724]: 2025-11-29 08:04:38.344 233728 DEBUG nova.virt.libvirt.driver [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:38 np0005539552 nova_compute[233724]: 2025-11-29 08:04:38.344 233728 DEBUG nova.virt.libvirt.driver [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:38 np0005539552 nova_compute[233724]: 2025-11-29 08:04:38.345 233728 DEBUG nova.virt.libvirt.driver [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:38 np0005539552 nova_compute[233724]: 2025-11-29 08:04:38.345 233728 DEBUG nova.virt.libvirt.driver [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:38 np0005539552 nova_compute[233724]: 2025-11-29 08:04:38.368 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:04:38 np0005539552 nova_compute[233724]: 2025-11-29 08:04:38.408 233728 INFO nova.virt.libvirt.driver [-] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Instance destroyed successfully.#033[00m
Nov 29 03:04:38 np0005539552 nova_compute[233724]: 2025-11-29 08:04:38.408 233728 DEBUG nova.objects.instance [None req-ac455539-7665-4b9c-9746-726cb03edc6c 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lazy-loading 'resources' on Instance uuid d330098d-4454-4a83-8b6f-bc9828837e48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:04:38 np0005539552 nova_compute[233724]: 2025-11-29 08:04:38.411 233728 INFO nova.compute.manager [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Took 6.44 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:04:38 np0005539552 nova_compute[233724]: 2025-11-29 08:04:38.411 233728 DEBUG nova.compute.manager [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:04:38 np0005539552 nova_compute[233724]: 2025-11-29 08:04:38.495 233728 INFO nova.compute.manager [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Took 7.61 seconds to build instance.#033[00m
Nov 29 03:04:38 np0005539552 nova_compute[233724]: 2025-11-29 08:04:38.515 233728 DEBUG oslo_concurrency.lockutils [None req-edacafd3-55bc-40d2-a8b1-0c32792e0cb4 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:04:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:38.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:04:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:38.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:38 np0005539552 nova_compute[233724]: 2025-11-29 08:04:38.824 233728 INFO nova.virt.libvirt.driver [None req-ac455539-7665-4b9c-9746-726cb03edc6c 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Deleting instance files /var/lib/nova/instances/d330098d-4454-4a83-8b6f-bc9828837e48_del#033[00m
Nov 29 03:04:38 np0005539552 nova_compute[233724]: 2025-11-29 08:04:38.825 233728 INFO nova.virt.libvirt.driver [None req-ac455539-7665-4b9c-9746-726cb03edc6c 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Deletion of /var/lib/nova/instances/d330098d-4454-4a83-8b6f-bc9828837e48_del complete#033[00m
Nov 29 03:04:38 np0005539552 nova_compute[233724]: 2025-11-29 08:04:38.869 233728 INFO nova.compute.manager [None req-ac455539-7665-4b9c-9746-726cb03edc6c 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Took 0.68 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:04:38 np0005539552 nova_compute[233724]: 2025-11-29 08:04:38.870 233728 DEBUG oslo.service.loopingcall [None req-ac455539-7665-4b9c-9746-726cb03edc6c 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:04:38 np0005539552 nova_compute[233724]: 2025-11-29 08:04:38.870 233728 DEBUG nova.compute.manager [-] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:04:38 np0005539552 nova_compute[233724]: 2025-11-29 08:04:38.870 233728 DEBUG nova.network.neutron [-] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:04:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:04:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/54506946' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:04:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:04:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/54506946' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:04:39 np0005539552 nova_compute[233724]: 2025-11-29 08:04:39.215 233728 DEBUG nova.network.neutron [-] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:04:39 np0005539552 nova_compute[233724]: 2025-11-29 08:04:39.229 233728 DEBUG nova.network.neutron [-] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:04:39 np0005539552 nova_compute[233724]: 2025-11-29 08:04:39.245 233728 INFO nova.compute.manager [-] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Took 0.37 seconds to deallocate network for instance.#033[00m
Nov 29 03:04:39 np0005539552 nova_compute[233724]: 2025-11-29 08:04:39.302 233728 DEBUG oslo_concurrency.lockutils [None req-ac455539-7665-4b9c-9746-726cb03edc6c 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:39 np0005539552 nova_compute[233724]: 2025-11-29 08:04:39.303 233728 DEBUG oslo_concurrency.lockutils [None req-ac455539-7665-4b9c-9746-726cb03edc6c 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:39 np0005539552 nova_compute[233724]: 2025-11-29 08:04:39.385 233728 DEBUG oslo_concurrency.processutils [None req-ac455539-7665-4b9c-9746-726cb03edc6c 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:04:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2537097644' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:04:39 np0005539552 nova_compute[233724]: 2025-11-29 08:04:39.826 233728 DEBUG oslo_concurrency.processutils [None req-ac455539-7665-4b9c-9746-726cb03edc6c 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:04:39 np0005539552 nova_compute[233724]: 2025-11-29 08:04:39.832 233728 DEBUG nova.compute.provider_tree [None req-ac455539-7665-4b9c-9746-726cb03edc6c 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:04:39 np0005539552 nova_compute[233724]: 2025-11-29 08:04:39.895 233728 DEBUG nova.scheduler.client.report [None req-ac455539-7665-4b9c-9746-726cb03edc6c 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:04:39 np0005539552 nova_compute[233724]: 2025-11-29 08:04:39.921 233728 DEBUG oslo_concurrency.lockutils [None req-ac455539-7665-4b9c-9746-726cb03edc6c 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:04:39 np0005539552 nova_compute[233724]: 2025-11-29 08:04:39.946 233728 INFO nova.scheduler.client.report [None req-ac455539-7665-4b9c-9746-726cb03edc6c 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Deleted allocations for instance d330098d-4454-4a83-8b6f-bc9828837e48
Nov 29 03:04:39 np0005539552 nova_compute[233724]: 2025-11-29 08:04:39.982 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:40 np0005539552 nova_compute[233724]: 2025-11-29 08:04:40.059 233728 DEBUG oslo_concurrency.lockutils [None req-ac455539-7665-4b9c-9746-726cb03edc6c 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "d330098d-4454-4a83-8b6f-bc9828837e48" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.785s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:04:40 np0005539552 nova_compute[233724]: 2025-11-29 08:04:40.453 233728 DEBUG nova.compute.manager [req-00b410e9-c69b-4a72-81a7-0832a8df5a8b req-0fef024d-d311-4f70-ba36-e975ad554dca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Received event network-vif-plugged-0ed27b66-0e55-4592-aa78-847e3b01509f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:04:40 np0005539552 nova_compute[233724]: 2025-11-29 08:04:40.455 233728 DEBUG oslo_concurrency.lockutils [req-00b410e9-c69b-4a72-81a7-0832a8df5a8b req-0fef024d-d311-4f70-ba36-e975ad554dca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:04:40 np0005539552 nova_compute[233724]: 2025-11-29 08:04:40.455 233728 DEBUG oslo_concurrency.lockutils [req-00b410e9-c69b-4a72-81a7-0832a8df5a8b req-0fef024d-d311-4f70-ba36-e975ad554dca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:04:40 np0005539552 nova_compute[233724]: 2025-11-29 08:04:40.455 233728 DEBUG oslo_concurrency.lockutils [req-00b410e9-c69b-4a72-81a7-0832a8df5a8b req-0fef024d-d311-4f70-ba36-e975ad554dca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:04:40 np0005539552 nova_compute[233724]: 2025-11-29 08:04:40.456 233728 DEBUG nova.compute.manager [req-00b410e9-c69b-4a72-81a7-0832a8df5a8b req-0fef024d-d311-4f70-ba36-e975ad554dca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] No waiting events found dispatching network-vif-plugged-0ed27b66-0e55-4592-aa78-847e3b01509f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:04:40 np0005539552 nova_compute[233724]: 2025-11-29 08:04:40.456 233728 WARNING nova.compute.manager [req-00b410e9-c69b-4a72-81a7-0832a8df5a8b req-0fef024d-d311-4f70-ba36-e975ad554dca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Received unexpected event network-vif-plugged-0ed27b66-0e55-4592-aa78-847e3b01509f for instance with vm_state active and task_state None.
Nov 29 03:04:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:40.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:40 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:04:40 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:04:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:04:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:40.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:04:40 np0005539552 nova_compute[233724]: 2025-11-29 08:04:40.906 233728 DEBUG oslo_concurrency.lockutils [None req-9f998d25-342f-4d35-a5c9-55fc091babbc fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:04:40 np0005539552 nova_compute[233724]: 2025-11-29 08:04:40.907 233728 DEBUG oslo_concurrency.lockutils [None req-9f998d25-342f-4d35-a5c9-55fc091babbc fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:04:40 np0005539552 nova_compute[233724]: 2025-11-29 08:04:40.907 233728 DEBUG nova.compute.manager [None req-9f998d25-342f-4d35-a5c9-55fc091babbc fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:04:40 np0005539552 nova_compute[233724]: 2025-11-29 08:04:40.912 233728 DEBUG nova.compute.manager [None req-9f998d25-342f-4d35-a5c9-55fc091babbc fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Nov 29 03:04:40 np0005539552 nova_compute[233724]: 2025-11-29 08:04:40.913 233728 DEBUG nova.objects.instance [None req-9f998d25-342f-4d35-a5c9-55fc091babbc fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lazy-loading 'flavor' on Instance uuid d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:04:40 np0005539552 nova_compute[233724]: 2025-11-29 08:04:40.939 233728 DEBUG nova.virt.libvirt.driver [None req-9f998d25-342f-4d35-a5c9-55fc091babbc fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 29 03:04:41 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e206 e206: 3 total, 3 up, 3 in
Nov 29 03:04:41 np0005539552 nova_compute[233724]: 2025-11-29 08:04:41.464 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:42.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:04:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:42.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:04:43 np0005539552 podman[255525]: 2025-11-29 08:04:43.968502276 +0000 UTC m=+0.055073318 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 03:04:43 np0005539552 podman[255524]: 2025-11-29 08:04:43.976032389 +0000 UTC m=+0.063471394 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:04:44 np0005539552 podman[255526]: 2025-11-29 08:04:44.01125137 +0000 UTC m=+0.092812077 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 03:04:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:04:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:44.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:04:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:44.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:44 np0005539552 nova_compute[233724]: 2025-11-29 08:04:44.984 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:46 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e207 e207: 3 total, 3 up, 3 in
Nov 29 03:04:46 np0005539552 nova_compute[233724]: 2025-11-29 08:04:46.466 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:46.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:46.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e208 e208: 3 total, 3 up, 3 in
Nov 29 03:04:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:48.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:48.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:48 np0005539552 nova_compute[233724]: 2025-11-29 08:04:48.905 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403473.170011, 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:04:48 np0005539552 nova_compute[233724]: 2025-11-29 08:04:48.906 233728 INFO nova.compute.manager [-] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] VM Stopped (Lifecycle Event)
Nov 29 03:04:48 np0005539552 nova_compute[233724]: 2025-11-29 08:04:48.927 233728 DEBUG nova.compute.manager [None req-62e9c60c-5302-461d-af55-dff0aac84a77 - - - - - -] [instance: 3d5147f1-bdb7-478e-a0f7-c211a3fb4bc2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:04:49 np0005539552 nova_compute[233724]: 2025-11-29 08:04:49.463 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403474.4618587, c20f05a7-0dec-449a-92d5-a73494ab9b6c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:04:49 np0005539552 nova_compute[233724]: 2025-11-29 08:04:49.464 233728 INFO nova.compute.manager [-] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] VM Stopped (Lifecycle Event)
Nov 29 03:04:49 np0005539552 nova_compute[233724]: 2025-11-29 08:04:49.481 233728 DEBUG nova.compute.manager [None req-2c122884-034b-4568-9ef4-0decf56b08cd - - - - - -] [instance: c20f05a7-0dec-449a-92d5-a73494ab9b6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:04:49 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e209 e209: 3 total, 3 up, 3 in
Nov 29 03:04:49 np0005539552 nova_compute[233724]: 2025-11-29 08:04:49.985 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:50.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:50.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:50 np0005539552 nova_compute[233724]: 2025-11-29 08:04:50.980 233728 DEBUG nova.virt.libvirt.driver [None req-9f998d25-342f-4d35-a5c9-55fc091babbc fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 29 03:04:51 np0005539552 nova_compute[233724]: 2025-11-29 08:04:51.468 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:51 np0005539552 ovn_controller[133798]: 2025-11-29T08:04:51Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:53:78:dd 10.100.0.5
Nov 29 03:04:51 np0005539552 ovn_controller[133798]: 2025-11-29T08:04:51Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:53:78:dd 10.100.0.5
Nov 29 03:04:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:52.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:52.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:53 np0005539552 nova_compute[233724]: 2025-11-29 08:04:53.407 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403478.4062078, d330098d-4454-4a83-8b6f-bc9828837e48 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:04:53 np0005539552 nova_compute[233724]: 2025-11-29 08:04:53.408 233728 INFO nova.compute.manager [-] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] VM Stopped (Lifecycle Event)
Nov 29 03:04:53 np0005539552 nova_compute[233724]: 2025-11-29 08:04:53.437 233728 DEBUG nova.compute.manager [None req-9a3c6697-79b1-4298-9d1d-cf363b87c303 - - - - - -] [instance: d330098d-4454-4a83-8b6f-bc9828837e48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:04:53 np0005539552 nova_compute[233724]: 2025-11-29 08:04:53.813 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:04:53 np0005539552 nova_compute[233724]: 2025-11-29 08:04:53.833 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Triggering sync for uuid d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 29 03:04:53 np0005539552 nova_compute[233724]: 2025-11-29 08:04:53.833 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:04:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:04:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:54.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:04:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:54.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:54 np0005539552 nova_compute[233724]: 2025-11-29 08:04:54.987 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:55 np0005539552 kernel: tap0ed27b66-0e (unregistering): left promiscuous mode
Nov 29 03:04:55 np0005539552 NetworkManager[48926]: <info>  [1764403495.1870] device (tap0ed27b66-0e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:04:55 np0005539552 nova_compute[233724]: 2025-11-29 08:04:55.197 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:55 np0005539552 ovn_controller[133798]: 2025-11-29T08:04:55Z|00163|binding|INFO|Releasing lport 0ed27b66-0e55-4592-aa78-847e3b01509f from this chassis (sb_readonly=0)
Nov 29 03:04:55 np0005539552 ovn_controller[133798]: 2025-11-29T08:04:55Z|00164|binding|INFO|Setting lport 0ed27b66-0e55-4592-aa78-847e3b01509f down in Southbound
Nov 29 03:04:55 np0005539552 ovn_controller[133798]: 2025-11-29T08:04:55Z|00165|binding|INFO|Removing iface tap0ed27b66-0e ovn-installed in OVS
Nov 29 03:04:55 np0005539552 nova_compute[233724]: 2025-11-29 08:04:55.201 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:55 np0005539552 nova_compute[233724]: 2025-11-29 08:04:55.220 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:55 np0005539552 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000036.scope: Deactivated successfully.
Nov 29 03:04:55 np0005539552 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000036.scope: Consumed 14.463s CPU time.
Nov 29 03:04:55 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:04:55.250 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:78:dd 10.100.0.5'], port_security=['fa:16:3e:53:78:dd 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'd9ae79a9-263e-4f0c-8538-a4f6b99a3cfa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f01d29c1-afcb-4909-9abf-f7d31e4549d8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '638fd52fccf14f16b56d0860553063f3', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a57b53e5-9055-46ae-8ab4-d4a8a62173cb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d89b288b-bbc6-47fa-ad12-8aab94ffc78f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=0ed27b66-0e55-4592-aa78-847e3b01509f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:04:55 np0005539552 systemd-machined[196379]: Machine qemu-18-instance-00000036 terminated.
Nov 29 03:04:55 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:04:55.251 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 0ed27b66-0e55-4592-aa78-847e3b01509f in datapath f01d29c1-afcb-4909-9abf-f7d31e4549d8 unbound from our chassis
Nov 29 03:04:55 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:04:55.252 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f01d29c1-afcb-4909-9abf-f7d31e4549d8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 03:04:55 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:04:55.253 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[88d00798-6ebf-45a9-8506-2a8a236fd90f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:04:55 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:04:55.254 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8 namespace which is not needed anymore
Nov 29 03:04:55 np0005539552 neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8[255387]: [NOTICE]   (255411) : haproxy version is 2.8.14-c23fe91
Nov 29 03:04:55 np0005539552 neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8[255387]: [NOTICE]   (255411) : path to executable is /usr/sbin/haproxy
Nov 29 03:04:55 np0005539552 neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8[255387]: [WARNING]  (255411) : Exiting Master process...
Nov 29 03:04:55 np0005539552 neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8[255387]: [ALERT]    (255411) : Current worker (255414) exited with code 143 (Terminated)
Nov 29 03:04:55 np0005539552 neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8[255387]: [WARNING]  (255411) : All workers exited. Exiting... (0)
Nov 29 03:04:55 np0005539552 systemd[1]: libpod-b6af50b904010fe97ab0cbe841910ca71741caefb79231fb0843cdc8c24cc08b.scope: Deactivated successfully.
Nov 29 03:04:55 np0005539552 podman[255667]: 2025-11-29 08:04:55.37308016 +0000 UTC m=+0.044391020 container died b6af50b904010fe97ab0cbe841910ca71741caefb79231fb0843cdc8c24cc08b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:04:55 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b6af50b904010fe97ab0cbe841910ca71741caefb79231fb0843cdc8c24cc08b-userdata-shm.mount: Deactivated successfully.
Nov 29 03:04:55 np0005539552 systemd[1]: var-lib-containers-storage-overlay-3bddc65343fa29c4399beb0a27fa36178a12a0ba056b636168ce3f5c663375b2-merged.mount: Deactivated successfully.
Nov 29 03:04:55 np0005539552 podman[255667]: 2025-11-29 08:04:55.420811928 +0000 UTC m=+0.092122788 container cleanup b6af50b904010fe97ab0cbe841910ca71741caefb79231fb0843cdc8c24cc08b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2)
Nov 29 03:04:55 np0005539552 systemd[1]: libpod-conmon-b6af50b904010fe97ab0cbe841910ca71741caefb79231fb0843cdc8c24cc08b.scope: Deactivated successfully.
Nov 29 03:04:55 np0005539552 podman[255707]: 2025-11-29 08:04:55.488155446 +0000 UTC m=+0.046433565 container remove b6af50b904010fe97ab0cbe841910ca71741caefb79231fb0843cdc8c24cc08b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 03:04:55 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:04:55.494 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[79ec922a-8241-4389-83f2-bfb6443fe36a]: (4, ('Sat Nov 29 08:04:55 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8 (b6af50b904010fe97ab0cbe841910ca71741caefb79231fb0843cdc8c24cc08b)\nb6af50b904010fe97ab0cbe841910ca71741caefb79231fb0843cdc8c24cc08b\nSat Nov 29 08:04:55 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8 (b6af50b904010fe97ab0cbe841910ca71741caefb79231fb0843cdc8c24cc08b)\nb6af50b904010fe97ab0cbe841910ca71741caefb79231fb0843cdc8c24cc08b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:55 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:04:55.498 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[afb358f2-f8f2-4f85-9b8e-ac183dbb17f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:55 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:04:55.499 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf01d29c1-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:55 np0005539552 nova_compute[233724]: 2025-11-29 08:04:55.501 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:55 np0005539552 kernel: tapf01d29c1-a0: left promiscuous mode
Nov 29 03:04:55 np0005539552 nova_compute[233724]: 2025-11-29 08:04:55.521 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:55 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:04:55.523 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[edbff28f-9fed-4f73-91c2-b6a78db83707]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:55 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:04:55.542 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[8e6d315a-1e86-4cf1-8505-68213435271d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:55 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:04:55.544 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0b339d24-f3f5-4143-87df-a083b1d70d6e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:55 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:04:55.559 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b7baf051-8599-4394-8bfb-d9f01805c1dc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652307, 'reachable_time': 37394, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255728, 'error': None, 'target': 'ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:55 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:04:55.563 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:04:55 np0005539552 systemd[1]: run-netns-ovnmeta\x2df01d29c1\x2dafcb\x2d4909\x2d9abf\x2df7d31e4549d8.mount: Deactivated successfully.
Nov 29 03:04:55 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:04:55.563 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[475166c1-bc29-4462-ad4e-37858d2dcb67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:56 np0005539552 nova_compute[233724]: 2025-11-29 08:04:56.000 233728 INFO nova.virt.libvirt.driver [None req-9f998d25-342f-4d35-a5c9-55fc091babbc fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Instance shutdown successfully after 15 seconds.#033[00m
Nov 29 03:04:56 np0005539552 nova_compute[233724]: 2025-11-29 08:04:56.006 233728 INFO nova.virt.libvirt.driver [-] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Instance destroyed successfully.#033[00m
Nov 29 03:04:56 np0005539552 nova_compute[233724]: 2025-11-29 08:04:56.006 233728 DEBUG nova.objects.instance [None req-9f998d25-342f-4d35-a5c9-55fc091babbc fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lazy-loading 'numa_topology' on Instance uuid d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:04:56 np0005539552 nova_compute[233724]: 2025-11-29 08:04:56.022 233728 DEBUG nova.compute.manager [None req-9f998d25-342f-4d35-a5c9-55fc091babbc fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:04:56 np0005539552 nova_compute[233724]: 2025-11-29 08:04:56.085 233728 DEBUG oslo_concurrency.lockutils [None req-9f998d25-342f-4d35-a5c9-55fc091babbc fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 15.178s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:56 np0005539552 nova_compute[233724]: 2025-11-29 08:04:56.087 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 2.254s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:56 np0005539552 nova_compute[233724]: 2025-11-29 08:04:56.087 233728 INFO nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] During sync_power_state the instance has a pending task (powering-off). Skip.#033[00m
Nov 29 03:04:56 np0005539552 nova_compute[233724]: 2025-11-29 08:04:56.088 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:56 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e210 e210: 3 total, 3 up, 3 in
Nov 29 03:04:56 np0005539552 nova_compute[233724]: 2025-11-29 08:04:56.342 233728 DEBUG nova.compute.manager [req-051c7f9b-d491-42f5-89fe-a74298af351c req-2c11dbe8-3caf-4dc3-a3a1-f98234828948 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Received event network-vif-unplugged-0ed27b66-0e55-4592-aa78-847e3b01509f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:04:56 np0005539552 nova_compute[233724]: 2025-11-29 08:04:56.343 233728 DEBUG oslo_concurrency.lockutils [req-051c7f9b-d491-42f5-89fe-a74298af351c req-2c11dbe8-3caf-4dc3-a3a1-f98234828948 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:56 np0005539552 nova_compute[233724]: 2025-11-29 08:04:56.343 233728 DEBUG oslo_concurrency.lockutils [req-051c7f9b-d491-42f5-89fe-a74298af351c req-2c11dbe8-3caf-4dc3-a3a1-f98234828948 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:56 np0005539552 nova_compute[233724]: 2025-11-29 08:04:56.343 233728 DEBUG oslo_concurrency.lockutils [req-051c7f9b-d491-42f5-89fe-a74298af351c req-2c11dbe8-3caf-4dc3-a3a1-f98234828948 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:56 np0005539552 nova_compute[233724]: 2025-11-29 08:04:56.343 233728 DEBUG nova.compute.manager [req-051c7f9b-d491-42f5-89fe-a74298af351c req-2c11dbe8-3caf-4dc3-a3a1-f98234828948 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] No waiting events found dispatching network-vif-unplugged-0ed27b66-0e55-4592-aa78-847e3b01509f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:04:56 np0005539552 nova_compute[233724]: 2025-11-29 08:04:56.344 233728 WARNING nova.compute.manager [req-051c7f9b-d491-42f5-89fe-a74298af351c req-2c11dbe8-3caf-4dc3-a3a1-f98234828948 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Received unexpected event network-vif-unplugged-0ed27b66-0e55-4592-aa78-847e3b01509f for instance with vm_state stopped and task_state None.#033[00m
Nov 29 03:04:56 np0005539552 nova_compute[233724]: 2025-11-29 08:04:56.470 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:56.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:56.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:57 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:58 np0005539552 nova_compute[233724]: 2025-11-29 08:04:58.529 233728 DEBUG nova.compute.manager [req-b124b584-66d7-4633-96d4-ec2d77dd2795 req-10471f4a-ff02-470a-896c-665e108da18c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Received event network-vif-plugged-0ed27b66-0e55-4592-aa78-847e3b01509f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:04:58 np0005539552 nova_compute[233724]: 2025-11-29 08:04:58.529 233728 DEBUG oslo_concurrency.lockutils [req-b124b584-66d7-4633-96d4-ec2d77dd2795 req-10471f4a-ff02-470a-896c-665e108da18c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:58 np0005539552 nova_compute[233724]: 2025-11-29 08:04:58.530 233728 DEBUG oslo_concurrency.lockutils [req-b124b584-66d7-4633-96d4-ec2d77dd2795 req-10471f4a-ff02-470a-896c-665e108da18c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:58 np0005539552 nova_compute[233724]: 2025-11-29 08:04:58.530 233728 DEBUG oslo_concurrency.lockutils [req-b124b584-66d7-4633-96d4-ec2d77dd2795 req-10471f4a-ff02-470a-896c-665e108da18c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:58 np0005539552 nova_compute[233724]: 2025-11-29 08:04:58.530 233728 DEBUG nova.compute.manager [req-b124b584-66d7-4633-96d4-ec2d77dd2795 req-10471f4a-ff02-470a-896c-665e108da18c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] No waiting events found dispatching network-vif-plugged-0ed27b66-0e55-4592-aa78-847e3b01509f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:04:58 np0005539552 nova_compute[233724]: 2025-11-29 08:04:58.530 233728 WARNING nova.compute.manager [req-b124b584-66d7-4633-96d4-ec2d77dd2795 req-10471f4a-ff02-470a-896c-665e108da18c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Received unexpected event network-vif-plugged-0ed27b66-0e55-4592-aa78-847e3b01509f for instance with vm_state stopped and task_state image_snapshot_pending.#033[00m
Nov 29 03:04:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:04:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:58.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:04:58 np0005539552 nova_compute[233724]: 2025-11-29 08:04:58.565 233728 DEBUG nova.compute.manager [None req-f9554a4b-f671-45c0-8eeb-8fee7025d630 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:04:58 np0005539552 nova_compute[233724]: 2025-11-29 08:04:58.620 233728 INFO nova.compute.manager [None req-f9554a4b-f671-45c0-8eeb-8fee7025d630 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] instance snapshotting#033[00m
Nov 29 03:04:58 np0005539552 nova_compute[233724]: 2025-11-29 08:04:58.620 233728 WARNING nova.compute.manager [None req-f9554a4b-f671-45c0-8eeb-8fee7025d630 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] trying to snapshot a non-running instance: (state: 4 expected: 1)#033[00m
Nov 29 03:04:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:04:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:58.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:58 np0005539552 nova_compute[233724]: 2025-11-29 08:04:58.936 233728 INFO nova.virt.libvirt.driver [None req-f9554a4b-f671-45c0-8eeb-8fee7025d630 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Beginning cold snapshot process#033[00m
Nov 29 03:04:59 np0005539552 nova_compute[233724]: 2025-11-29 08:04:59.098 233728 DEBUG nova.virt.libvirt.imagebackend [None req-f9554a4b-f671-45c0-8eeb-8fee7025d630 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] No parent info for 4873db8c-b414-4e95-acd9-77caabebe722; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 29 03:04:59 np0005539552 nova_compute[233724]: 2025-11-29 08:04:59.382 233728 DEBUG nova.storage.rbd_utils [None req-f9554a4b-f671-45c0-8eeb-8fee7025d630 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] creating snapshot(41c3acd3fc4e4ed4896ad0af3de751b1) on rbd image(d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:04:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:04:59.762 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:04:59 np0005539552 nova_compute[233724]: 2025-11-29 08:04:59.762 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:04:59.763 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:04:59 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e211 e211: 3 total, 3 up, 3 in
Nov 29 03:04:59 np0005539552 nova_compute[233724]: 2025-11-29 08:04:59.942 233728 DEBUG nova.storage.rbd_utils [None req-f9554a4b-f671-45c0-8eeb-8fee7025d630 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] cloning vms/d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa_disk@41c3acd3fc4e4ed4896ad0af3de751b1 to images/fea86ab8-1607-4850-b354-0a134f3e6647 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:04:59 np0005539552 nova_compute[233724]: 2025-11-29 08:04:59.988 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:00 np0005539552 nova_compute[233724]: 2025-11-29 08:05:00.051 233728 DEBUG nova.storage.rbd_utils [None req-f9554a4b-f671-45c0-8eeb-8fee7025d630 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] flattening images/fea86ab8-1607-4850-b354-0a134f3e6647 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 29 03:05:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:00.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:00.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:00 np0005539552 nova_compute[233724]: 2025-11-29 08:05:00.803 233728 DEBUG nova.storage.rbd_utils [None req-f9554a4b-f671-45c0-8eeb-8fee7025d630 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] removing snapshot(41c3acd3fc4e4ed4896ad0af3de751b1) on rbd image(d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:05:00 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e212 e212: 3 total, 3 up, 3 in
Nov 29 03:05:00 np0005539552 nova_compute[233724]: 2025-11-29 08:05:00.946 233728 DEBUG nova.storage.rbd_utils [None req-f9554a4b-f671-45c0-8eeb-8fee7025d630 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] creating snapshot(snap) on rbd image(fea86ab8-1607-4850-b354-0a134f3e6647) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:05:01 np0005539552 nova_compute[233724]: 2025-11-29 08:05:01.472 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:01 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e213 e213: 3 total, 3 up, 3 in
Nov 29 03:05:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:02.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:02.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:03 np0005539552 nova_compute[233724]: 2025-11-29 08:05:03.934 233728 INFO nova.virt.libvirt.driver [None req-f9554a4b-f671-45c0-8eeb-8fee7025d630 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Snapshot image upload complete#033[00m
Nov 29 03:05:03 np0005539552 nova_compute[233724]: 2025-11-29 08:05:03.934 233728 INFO nova.compute.manager [None req-f9554a4b-f671-45c0-8eeb-8fee7025d630 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Took 5.31 seconds to snapshot the instance on the hypervisor.#033[00m
Nov 29 03:05:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:04.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:04.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:04 np0005539552 nova_compute[233724]: 2025-11-29 08:05:04.990 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:06 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e214 e214: 3 total, 3 up, 3 in
Nov 29 03:05:06 np0005539552 nova_compute[233724]: 2025-11-29 08:05:06.475 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:05:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:06.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:05:06 np0005539552 nova_compute[233724]: 2025-11-29 08:05:06.657 233728 DEBUG oslo_concurrency.lockutils [None req-38b065a5-4292-4c12-9f60-599d4110440b fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:06 np0005539552 nova_compute[233724]: 2025-11-29 08:05:06.657 233728 DEBUG oslo_concurrency.lockutils [None req-38b065a5-4292-4c12-9f60-599d4110440b fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:06 np0005539552 nova_compute[233724]: 2025-11-29 08:05:06.658 233728 DEBUG oslo_concurrency.lockutils [None req-38b065a5-4292-4c12-9f60-599d4110440b fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:06 np0005539552 nova_compute[233724]: 2025-11-29 08:05:06.658 233728 DEBUG oslo_concurrency.lockutils [None req-38b065a5-4292-4c12-9f60-599d4110440b fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:06 np0005539552 nova_compute[233724]: 2025-11-29 08:05:06.658 233728 DEBUG oslo_concurrency.lockutils [None req-38b065a5-4292-4c12-9f60-599d4110440b fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:06 np0005539552 nova_compute[233724]: 2025-11-29 08:05:06.659 233728 INFO nova.compute.manager [None req-38b065a5-4292-4c12-9f60-599d4110440b fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Terminating instance#033[00m
Nov 29 03:05:06 np0005539552 nova_compute[233724]: 2025-11-29 08:05:06.660 233728 DEBUG nova.compute.manager [None req-38b065a5-4292-4c12-9f60-599d4110440b fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:05:06 np0005539552 nova_compute[233724]: 2025-11-29 08:05:06.668 233728 INFO nova.virt.libvirt.driver [-] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Instance destroyed successfully.#033[00m
Nov 29 03:05:06 np0005539552 nova_compute[233724]: 2025-11-29 08:05:06.668 233728 DEBUG nova.objects.instance [None req-38b065a5-4292-4c12-9f60-599d4110440b fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lazy-loading 'resources' on Instance uuid d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:05:06 np0005539552 nova_compute[233724]: 2025-11-29 08:05:06.691 233728 DEBUG nova.virt.libvirt.vif [None req-38b065a5-4292-4c12-9f60-599d4110440b fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:04:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-803337407',display_name='tempest-ImagesTestJSON-server-803337407',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-imagestestjson-server-803337407',id=54,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:04:38Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='638fd52fccf14f16b56d0860553063f3',ramdisk_id='',reservation_id='r-ut2jfefv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_r
am='0',owner_project_name='tempest-ImagesTestJSON-1682881466',owner_user_name='tempest-ImagesTestJSON-1682881466-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:05:03Z,user_data=None,user_id='fddc5f5801764ee19d5253e2cab34df3',uuid=d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "0ed27b66-0e55-4592-aa78-847e3b01509f", "address": "fa:16:3e:53:78:dd", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ed27b66-0e", "ovs_interfaceid": "0ed27b66-0e55-4592-aa78-847e3b01509f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:05:06 np0005539552 nova_compute[233724]: 2025-11-29 08:05:06.692 233728 DEBUG nova.network.os_vif_util [None req-38b065a5-4292-4c12-9f60-599d4110440b fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Converting VIF {"id": "0ed27b66-0e55-4592-aa78-847e3b01509f", "address": "fa:16:3e:53:78:dd", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ed27b66-0e", "ovs_interfaceid": "0ed27b66-0e55-4592-aa78-847e3b01509f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:05:06 np0005539552 nova_compute[233724]: 2025-11-29 08:05:06.693 233728 DEBUG nova.network.os_vif_util [None req-38b065a5-4292-4c12-9f60-599d4110440b fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:78:dd,bridge_name='br-int',has_traffic_filtering=True,id=0ed27b66-0e55-4592-aa78-847e3b01509f,network=Network(f01d29c1-afcb-4909-9abf-f7d31e4549d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ed27b66-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:05:06 np0005539552 nova_compute[233724]: 2025-11-29 08:05:06.693 233728 DEBUG os_vif [None req-38b065a5-4292-4c12-9f60-599d4110440b fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:78:dd,bridge_name='br-int',has_traffic_filtering=True,id=0ed27b66-0e55-4592-aa78-847e3b01509f,network=Network(f01d29c1-afcb-4909-9abf-f7d31e4549d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ed27b66-0e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:05:06 np0005539552 nova_compute[233724]: 2025-11-29 08:05:06.694 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:06 np0005539552 nova_compute[233724]: 2025-11-29 08:05:06.695 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ed27b66-0e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:06 np0005539552 nova_compute[233724]: 2025-11-29 08:05:06.696 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:06 np0005539552 nova_compute[233724]: 2025-11-29 08:05:06.697 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:05:06 np0005539552 nova_compute[233724]: 2025-11-29 08:05:06.697 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:06 np0005539552 nova_compute[233724]: 2025-11-29 08:05:06.699 233728 INFO os_vif [None req-38b065a5-4292-4c12-9f60-599d4110440b fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:78:dd,bridge_name='br-int',has_traffic_filtering=True,id=0ed27b66-0e55-4592-aa78-847e3b01509f,network=Network(f01d29c1-afcb-4909-9abf-f7d31e4549d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ed27b66-0e')#033[00m
Nov 29 03:05:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:05:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:06.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:05:07 np0005539552 nova_compute[233724]: 2025-11-29 08:05:07.115 233728 INFO nova.virt.libvirt.driver [None req-38b065a5-4292-4c12-9f60-599d4110440b fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Deleting instance files /var/lib/nova/instances/d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa_del#033[00m
Nov 29 03:05:07 np0005539552 nova_compute[233724]: 2025-11-29 08:05:07.116 233728 INFO nova.virt.libvirt.driver [None req-38b065a5-4292-4c12-9f60-599d4110440b fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Deletion of /var/lib/nova/instances/d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa_del complete#033[00m
Nov 29 03:05:07 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:07 np0005539552 nova_compute[233724]: 2025-11-29 08:05:07.177 233728 INFO nova.compute.manager [None req-38b065a5-4292-4c12-9f60-599d4110440b fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Took 0.52 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:05:07 np0005539552 nova_compute[233724]: 2025-11-29 08:05:07.177 233728 DEBUG oslo.service.loopingcall [None req-38b065a5-4292-4c12-9f60-599d4110440b fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:05:07 np0005539552 nova_compute[233724]: 2025-11-29 08:05:07.178 233728 DEBUG nova.compute.manager [-] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:05:07 np0005539552 nova_compute[233724]: 2025-11-29 08:05:07.178 233728 DEBUG nova.network.neutron [-] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:05:07 np0005539552 nova_compute[233724]: 2025-11-29 08:05:07.876 233728 DEBUG nova.network.neutron [-] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:05:07 np0005539552 nova_compute[233724]: 2025-11-29 08:05:07.894 233728 INFO nova.compute.manager [-] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Took 0.72 seconds to deallocate network for instance.#033[00m
Nov 29 03:05:07 np0005539552 nova_compute[233724]: 2025-11-29 08:05:07.933 233728 DEBUG oslo_concurrency.lockutils [None req-38b065a5-4292-4c12-9f60-599d4110440b fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:07 np0005539552 nova_compute[233724]: 2025-11-29 08:05:07.933 233728 DEBUG oslo_concurrency.lockutils [None req-38b065a5-4292-4c12-9f60-599d4110440b fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:07 np0005539552 nova_compute[233724]: 2025-11-29 08:05:07.963 233728 DEBUG nova.compute.manager [req-c4e68f1a-695f-4549-810f-dc5fb92a0530 req-98bf7b98-8647-4408-8da2-36fd21fc8301 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Received event network-vif-deleted-0ed27b66-0e55-4592-aa78-847e3b01509f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:05:08 np0005539552 nova_compute[233724]: 2025-11-29 08:05:08.097 233728 DEBUG oslo_concurrency.processutils [None req-38b065a5-4292-4c12-9f60-599d4110440b fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:08 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:05:08 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 6532 writes, 34K keys, 6532 commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s#012Cumulative WAL: 6532 writes, 6532 syncs, 1.00 writes per sync, written: 0.07 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1581 writes, 7760 keys, 1581 commit groups, 1.0 writes per commit group, ingest: 16.16 MB, 0.03 MB/s#012Interval WAL: 1581 writes, 1581 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     16.8      2.45              0.12        18    0.136       0      0       0.0       0.0#012  L6      1/0   10.65 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   4.2     65.8     55.9      3.06              0.45        17    0.180     93K   9092       0.0       0.0#012 Sum      1/0   10.65 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   5.2     36.6     38.5      5.51              0.57        35    0.158     93K   9092       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   4.8    122.0    125.2      0.41              0.13         8    0.051     26K   2574       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0     65.8     55.9      3.06              0.45        17    0.180     93K   9092       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     17.1      2.41              0.12        17    0.142       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.044       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3000.1 total, 600.0 interval#012Flush(GB): cumulative 0.040, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.21 GB write, 0.07 MB/s write, 0.20 GB read, 0.07 MB/s read, 5.5 seconds#012Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560bf13bd1f0#2 capacity: 304.00 MB usage: 20.89 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000428 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(1169,20.15 MB,6.62926%) FilterBlock(35,263.61 KB,0.0846813%) IndexBlock(35,491.09 KB,0.157758%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 03:05:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:05:08 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2342960523' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:05:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:08.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:08 np0005539552 nova_compute[233724]: 2025-11-29 08:05:08.563 233728 DEBUG oslo_concurrency.processutils [None req-38b065a5-4292-4c12-9f60-599d4110440b fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:08 np0005539552 nova_compute[233724]: 2025-11-29 08:05:08.569 233728 DEBUG nova.compute.provider_tree [None req-38b065a5-4292-4c12-9f60-599d4110440b fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:05:08 np0005539552 nova_compute[233724]: 2025-11-29 08:05:08.586 233728 DEBUG nova.scheduler.client.report [None req-38b065a5-4292-4c12-9f60-599d4110440b fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:05:08 np0005539552 nova_compute[233724]: 2025-11-29 08:05:08.604 233728 DEBUG oslo_concurrency.lockutils [None req-38b065a5-4292-4c12-9f60-599d4110440b fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:08 np0005539552 nova_compute[233724]: 2025-11-29 08:05:08.626 233728 INFO nova.scheduler.client.report [None req-38b065a5-4292-4c12-9f60-599d4110440b fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Deleted allocations for instance d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa#033[00m
Nov 29 03:05:08 np0005539552 nova_compute[233724]: 2025-11-29 08:05:08.689 233728 DEBUG oslo_concurrency.lockutils [None req-38b065a5-4292-4c12-9f60-599d4110440b fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.032s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:05:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:08.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:05:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:08.766 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:09 np0005539552 nova_compute[233724]: 2025-11-29 08:05:09.053 233728 DEBUG oslo_concurrency.lockutils [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "afe3bf93-f54a-4a11-8f71-87b27ee7290a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:09 np0005539552 nova_compute[233724]: 2025-11-29 08:05:09.053 233728 DEBUG oslo_concurrency.lockutils [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "afe3bf93-f54a-4a11-8f71-87b27ee7290a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:09 np0005539552 nova_compute[233724]: 2025-11-29 08:05:09.076 233728 DEBUG nova.compute.manager [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:05:09 np0005539552 nova_compute[233724]: 2025-11-29 08:05:09.156 233728 DEBUG oslo_concurrency.lockutils [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:09 np0005539552 nova_compute[233724]: 2025-11-29 08:05:09.157 233728 DEBUG oslo_concurrency.lockutils [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:09 np0005539552 nova_compute[233724]: 2025-11-29 08:05:09.164 233728 DEBUG nova.virt.hardware [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:05:09 np0005539552 nova_compute[233724]: 2025-11-29 08:05:09.164 233728 INFO nova.compute.claims [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:05:09 np0005539552 nova_compute[233724]: 2025-11-29 08:05:09.265 233728 DEBUG oslo_concurrency.processutils [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:09 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:05:09 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4143420708' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:05:09 np0005539552 nova_compute[233724]: 2025-11-29 08:05:09.715 233728 DEBUG oslo_concurrency.processutils [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:05:09 np0005539552 nova_compute[233724]: 2025-11-29 08:05:09.722 233728 DEBUG nova.compute.provider_tree [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:05:09 np0005539552 nova_compute[233724]: 2025-11-29 08:05:09.745 233728 DEBUG nova.scheduler.client.report [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:05:09 np0005539552 nova_compute[233724]: 2025-11-29 08:05:09.800 233728 DEBUG oslo_concurrency.lockutils [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:05:09 np0005539552 nova_compute[233724]: 2025-11-29 08:05:09.801 233728 DEBUG nova.compute.manager [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:05:09 np0005539552 nova_compute[233724]: 2025-11-29 08:05:09.860 233728 DEBUG nova.compute.manager [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:05:09 np0005539552 nova_compute[233724]: 2025-11-29 08:05:09.861 233728 DEBUG nova.network.neutron [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:05:09 np0005539552 nova_compute[233724]: 2025-11-29 08:05:09.883 233728 INFO nova.virt.libvirt.driver [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:05:09 np0005539552 nova_compute[233724]: 2025-11-29 08:05:09.901 233728 DEBUG nova.compute.manager [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:05:09 np0005539552 nova_compute[233724]: 2025-11-29 08:05:09.943 233728 DEBUG oslo_concurrency.lockutils [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "c99fb9ee-05eb-4a44-8d78-3c719ed060b1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:05:09 np0005539552 nova_compute[233724]: 2025-11-29 08:05:09.944 233728 DEBUG oslo_concurrency.lockutils [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "c99fb9ee-05eb-4a44-8d78-3c719ed060b1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:05:09 np0005539552 nova_compute[233724]: 2025-11-29 08:05:09.989 233728 DEBUG nova.compute.manager [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:05:09 np0005539552 nova_compute[233724]: 2025-11-29 08:05:09.992 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:05:10 np0005539552 nova_compute[233724]: 2025-11-29 08:05:10.053 233728 DEBUG nova.compute.manager [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:05:10 np0005539552 nova_compute[233724]: 2025-11-29 08:05:10.054 233728 DEBUG nova.virt.libvirt.driver [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:05:10 np0005539552 nova_compute[233724]: 2025-11-29 08:05:10.055 233728 INFO nova.virt.libvirt.driver [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Creating image(s)
Nov 29 03:05:10 np0005539552 nova_compute[233724]: 2025-11-29 08:05:10.075 233728 DEBUG nova.storage.rbd_utils [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image afe3bf93-f54a-4a11-8f71-87b27ee7290a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:05:10 np0005539552 nova_compute[233724]: 2025-11-29 08:05:10.101 233728 DEBUG nova.storage.rbd_utils [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image afe3bf93-f54a-4a11-8f71-87b27ee7290a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:05:10 np0005539552 nova_compute[233724]: 2025-11-29 08:05:10.127 233728 DEBUG nova.storage.rbd_utils [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image afe3bf93-f54a-4a11-8f71-87b27ee7290a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:05:10 np0005539552 nova_compute[233724]: 2025-11-29 08:05:10.130 233728 DEBUG oslo_concurrency.processutils [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:05:10 np0005539552 nova_compute[233724]: 2025-11-29 08:05:10.157 233728 DEBUG nova.policy [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '104aea18c5154615b602f032bdb49681', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '90c23935e0214785a9dc5061b91cf29c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:05:10 np0005539552 nova_compute[233724]: 2025-11-29 08:05:10.193 233728 DEBUG oslo_concurrency.lockutils [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:05:10 np0005539552 nova_compute[233724]: 2025-11-29 08:05:10.194 233728 DEBUG oslo_concurrency.lockutils [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:05:10 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e215 e215: 3 total, 3 up, 3 in
Nov 29 03:05:10 np0005539552 nova_compute[233724]: 2025-11-29 08:05:10.203 233728 DEBUG oslo_concurrency.processutils [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:05:10 np0005539552 nova_compute[233724]: 2025-11-29 08:05:10.204 233728 DEBUG oslo_concurrency.lockutils [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:05:10 np0005539552 nova_compute[233724]: 2025-11-29 08:05:10.204 233728 DEBUG oslo_concurrency.lockutils [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:05:10 np0005539552 nova_compute[233724]: 2025-11-29 08:05:10.205 233728 DEBUG oslo_concurrency.lockutils [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:05:10 np0005539552 nova_compute[233724]: 2025-11-29 08:05:10.234 233728 DEBUG nova.storage.rbd_utils [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image afe3bf93-f54a-4a11-8f71-87b27ee7290a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:05:10 np0005539552 nova_compute[233724]: 2025-11-29 08:05:10.238 233728 DEBUG oslo_concurrency.processutils [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 afe3bf93-f54a-4a11-8f71-87b27ee7290a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:05:10 np0005539552 nova_compute[233724]: 2025-11-29 08:05:10.262 233728 DEBUG nova.virt.hardware [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:05:10 np0005539552 nova_compute[233724]: 2025-11-29 08:05:10.262 233728 INFO nova.compute.claims [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Claim successful on node compute-2.ctlplane.example.com
Nov 29 03:05:10 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Nov 29 03:05:10 np0005539552 nova_compute[233724]: 2025-11-29 08:05:10.431 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403495.4301448, d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:05:10 np0005539552 nova_compute[233724]: 2025-11-29 08:05:10.432 233728 INFO nova.compute.manager [-] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] VM Stopped (Lifecycle Event)
Nov 29 03:05:10 np0005539552 nova_compute[233724]: 2025-11-29 08:05:10.441 233728 DEBUG oslo_concurrency.processutils [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:05:10 np0005539552 nova_compute[233724]: 2025-11-29 08:05:10.483 233728 DEBUG nova.compute.manager [None req-cb95c4d1-d3f7-4c67-828c-8d8f03a82d61 - - - - - -] [instance: d9ae79a9-263e-4f0c-8538-a4f6b99a3cfa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:05:10 np0005539552 nova_compute[233724]: 2025-11-29 08:05:10.526 233728 DEBUG oslo_concurrency.processutils [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 afe3bf93-f54a-4a11-8f71-87b27ee7290a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.288s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:05:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:10.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:10 np0005539552 nova_compute[233724]: 2025-11-29 08:05:10.601 233728 DEBUG nova.storage.rbd_utils [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] resizing rbd image afe3bf93-f54a-4a11-8f71-87b27ee7290a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:05:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:10.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:10 np0005539552 nova_compute[233724]: 2025-11-29 08:05:10.773 233728 DEBUG nova.objects.instance [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lazy-loading 'migration_context' on Instance uuid afe3bf93-f54a-4a11-8f71-87b27ee7290a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:05:10 np0005539552 nova_compute[233724]: 2025-11-29 08:05:10.788 233728 DEBUG nova.virt.libvirt.driver [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:05:10 np0005539552 nova_compute[233724]: 2025-11-29 08:05:10.788 233728 DEBUG nova.virt.libvirt.driver [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Ensure instance console log exists: /var/lib/nova/instances/afe3bf93-f54a-4a11-8f71-87b27ee7290a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:05:10 np0005539552 nova_compute[233724]: 2025-11-29 08:05:10.789 233728 DEBUG oslo_concurrency.lockutils [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:05:10 np0005539552 nova_compute[233724]: 2025-11-29 08:05:10.789 233728 DEBUG oslo_concurrency.lockutils [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:05:10 np0005539552 nova_compute[233724]: 2025-11-29 08:05:10.790 233728 DEBUG oslo_concurrency.lockutils [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:05:10 np0005539552 nova_compute[233724]: 2025-11-29 08:05:10.869 233728 DEBUG nova.network.neutron [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Successfully created port: dc792eb5-e18c-4ca9-9940-31d2322e3c3c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:05:10 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:05:10 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/749199951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:05:10 np0005539552 nova_compute[233724]: 2025-11-29 08:05:10.894 233728 DEBUG oslo_concurrency.processutils [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:05:10 np0005539552 nova_compute[233724]: 2025-11-29 08:05:10.899 233728 DEBUG nova.compute.provider_tree [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:05:10 np0005539552 nova_compute[233724]: 2025-11-29 08:05:10.916 233728 DEBUG nova.scheduler.client.report [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:05:10 np0005539552 nova_compute[233724]: 2025-11-29 08:05:10.941 233728 DEBUG oslo_concurrency.lockutils [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:05:10 np0005539552 nova_compute[233724]: 2025-11-29 08:05:10.942 233728 DEBUG nova.compute.manager [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:05:11 np0005539552 nova_compute[233724]: 2025-11-29 08:05:11.004 233728 DEBUG nova.compute.manager [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:05:11 np0005539552 nova_compute[233724]: 2025-11-29 08:05:11.005 233728 DEBUG nova.network.neutron [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:05:11 np0005539552 nova_compute[233724]: 2025-11-29 08:05:11.024 233728 INFO nova.virt.libvirt.driver [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:05:11 np0005539552 nova_compute[233724]: 2025-11-29 08:05:11.044 233728 DEBUG nova.compute.manager [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:05:11 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e216 e216: 3 total, 3 up, 3 in
Nov 29 03:05:11 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #64. Immutable memtables: 0.
Nov 29 03:05:11 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:05:11.111309) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:05:11 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 64
Nov 29 03:05:11 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403511111375, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2149, "num_deletes": 263, "total_data_size": 4612742, "memory_usage": 4670896, "flush_reason": "Manual Compaction"}
Nov 29 03:05:11 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #65: started
Nov 29 03:05:11 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403511126914, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 65, "file_size": 2065724, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32589, "largest_seqno": 34733, "table_properties": {"data_size": 2058157, "index_size": 4257, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 19552, "raw_average_key_size": 22, "raw_value_size": 2041730, "raw_average_value_size": 2309, "num_data_blocks": 185, "num_entries": 884, "num_filter_entries": 884, "num_deletions": 263, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403375, "oldest_key_time": 1764403375, "file_creation_time": 1764403511, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:05:11 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 15673 microseconds, and 5729 cpu microseconds.
Nov 29 03:05:11 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:05:11 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:05:11.126984) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #65: 2065724 bytes OK
Nov 29 03:05:11 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:05:11.127009) [db/memtable_list.cc:519] [default] Level-0 commit table #65 started
Nov 29 03:05:11 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:05:11.129875) [db/memtable_list.cc:722] [default] Level-0 commit table #65: memtable #1 done
Nov 29 03:05:11 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:05:11.129915) EVENT_LOG_v1 {"time_micros": 1764403511129885, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:05:11 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:05:11.129936) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:05:11 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 4602836, prev total WAL file size 4602836, number of live WAL files 2.
Nov 29 03:05:11 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000061.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:05:11 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:05:11.131029) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303032' seq:72057594037927935, type:22 .. '6D6772737461740031323533' seq:0, type:0; will stop at (end)
Nov 29 03:05:11 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:05:11 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [65(2017KB)], [63(10MB)]
Nov 29 03:05:11 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403511131092, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [65], "files_L6": [63], "score": -1, "input_data_size": 13228115, "oldest_snapshot_seqno": -1}
Nov 29 03:05:11 np0005539552 nova_compute[233724]: 2025-11-29 08:05:11.139 233728 DEBUG nova.compute.manager [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:05:11 np0005539552 nova_compute[233724]: 2025-11-29 08:05:11.140 233728 DEBUG nova.virt.libvirt.driver [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:05:11 np0005539552 nova_compute[233724]: 2025-11-29 08:05:11.140 233728 INFO nova.virt.libvirt.driver [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Creating image(s)
Nov 29 03:05:11 np0005539552 nova_compute[233724]: 2025-11-29 08:05:11.163 233728 DEBUG nova.storage.rbd_utils [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] rbd image c99fb9ee-05eb-4a44-8d78-3c719ed060b1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:05:11 np0005539552 nova_compute[233724]: 2025-11-29 08:05:11.188 233728 DEBUG nova.storage.rbd_utils [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] rbd image c99fb9ee-05eb-4a44-8d78-3c719ed060b1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:05:11 np0005539552 nova_compute[233724]: 2025-11-29 08:05:11.210 233728 DEBUG nova.storage.rbd_utils [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] rbd image c99fb9ee-05eb-4a44-8d78-3c719ed060b1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:05:11 np0005539552 nova_compute[233724]: 2025-11-29 08:05:11.213 233728 DEBUG oslo_concurrency.processutils [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:05:11 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #66: 6698 keys, 10317410 bytes, temperature: kUnknown
Nov 29 03:05:11 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403511216060, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 66, "file_size": 10317410, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10273785, "index_size": 25785, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16773, "raw_key_size": 170847, "raw_average_key_size": 25, "raw_value_size": 10154564, "raw_average_value_size": 1516, "num_data_blocks": 1038, "num_entries": 6698, "num_filter_entries": 6698, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764403511, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 66, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:05:11 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:05:11 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:05:11.216282) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 10317410 bytes
Nov 29 03:05:11 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:05:11.217605) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.5 rd, 121.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 10.6 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(11.4) write-amplify(5.0) OK, records in: 7170, records dropped: 472 output_compression: NoCompression
Nov 29 03:05:11 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:05:11.217639) EVENT_LOG_v1 {"time_micros": 1764403511217614, "job": 38, "event": "compaction_finished", "compaction_time_micros": 85054, "compaction_time_cpu_micros": 23782, "output_level": 6, "num_output_files": 1, "total_output_size": 10317410, "num_input_records": 7170, "num_output_records": 6698, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:05:11 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:05:11 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403511218153, "job": 38, "event": "table_file_deletion", "file_number": 65}
Nov 29 03:05:11 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000063.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:05:11 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403511219955, "job": 38, "event": "table_file_deletion", "file_number": 63}
Nov 29 03:05:11 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:05:11.130958) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:05:11 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:05:11.220054) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:05:11 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:05:11.220059) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:05:11 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:05:11.220060) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:05:11 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:05:11.220062) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:05:11 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:05:11.220063) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:05:11 np0005539552 nova_compute[233724]: 2025-11-29 08:05:11.276 233728 DEBUG oslo_concurrency.processutils [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:11 np0005539552 nova_compute[233724]: 2025-11-29 08:05:11.277 233728 DEBUG oslo_concurrency.lockutils [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:11 np0005539552 nova_compute[233724]: 2025-11-29 08:05:11.278 233728 DEBUG oslo_concurrency.lockutils [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:11 np0005539552 nova_compute[233724]: 2025-11-29 08:05:11.278 233728 DEBUG oslo_concurrency.lockutils [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:11 np0005539552 nova_compute[233724]: 2025-11-29 08:05:11.304 233728 DEBUG nova.storage.rbd_utils [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] rbd image c99fb9ee-05eb-4a44-8d78-3c719ed060b1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:05:11 np0005539552 nova_compute[233724]: 2025-11-29 08:05:11.308 233728 DEBUG oslo_concurrency.processutils [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 c99fb9ee-05eb-4a44-8d78-3c719ed060b1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:11 np0005539552 nova_compute[233724]: 2025-11-29 08:05:11.334 233728 DEBUG nova.policy [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fddc5f5801764ee19d5253e2cab34df3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '638fd52fccf14f16b56d0860553063f3', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:05:11 np0005539552 nova_compute[233724]: 2025-11-29 08:05:11.590 233728 DEBUG oslo_concurrency.processutils [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 c99fb9ee-05eb-4a44-8d78-3c719ed060b1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.282s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:11 np0005539552 nova_compute[233724]: 2025-11-29 08:05:11.659 233728 DEBUG nova.storage.rbd_utils [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] resizing rbd image c99fb9ee-05eb-4a44-8d78-3c719ed060b1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:05:11 np0005539552 nova_compute[233724]: 2025-11-29 08:05:11.754 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:12 np0005539552 nova_compute[233724]: 2025-11-29 08:05:12.145 233728 DEBUG nova.network.neutron [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Successfully created port: 116674a9-081c-4369-8936-549e5a952c2f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:05:12 np0005539552 nova_compute[233724]: 2025-11-29 08:05:12.211 233728 DEBUG nova.network.neutron [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Successfully updated port: dc792eb5-e18c-4ca9-9940-31d2322e3c3c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:05:12 np0005539552 nova_compute[233724]: 2025-11-29 08:05:12.232 233728 DEBUG oslo_concurrency.lockutils [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "refresh_cache-afe3bf93-f54a-4a11-8f71-87b27ee7290a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:05:12 np0005539552 nova_compute[233724]: 2025-11-29 08:05:12.233 233728 DEBUG oslo_concurrency.lockutils [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquired lock "refresh_cache-afe3bf93-f54a-4a11-8f71-87b27ee7290a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:05:12 np0005539552 nova_compute[233724]: 2025-11-29 08:05:12.233 233728 DEBUG nova.network.neutron [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:05:12 np0005539552 nova_compute[233724]: 2025-11-29 08:05:12.300 233728 DEBUG nova.compute.manager [req-9e82b41e-ee16-44ef-ab99-ca30d7424ea9 req-237c3df1-6797-49d9-87e2-34af9730d68f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Received event network-changed-dc792eb5-e18c-4ca9-9940-31d2322e3c3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:05:12 np0005539552 nova_compute[233724]: 2025-11-29 08:05:12.301 233728 DEBUG nova.compute.manager [req-9e82b41e-ee16-44ef-ab99-ca30d7424ea9 req-237c3df1-6797-49d9-87e2-34af9730d68f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Refreshing instance network info cache due to event network-changed-dc792eb5-e18c-4ca9-9940-31d2322e3c3c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:05:12 np0005539552 nova_compute[233724]: 2025-11-29 08:05:12.301 233728 DEBUG oslo_concurrency.lockutils [req-9e82b41e-ee16-44ef-ab99-ca30d7424ea9 req-237c3df1-6797-49d9-87e2-34af9730d68f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-afe3bf93-f54a-4a11-8f71-87b27ee7290a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:05:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e217 e217: 3 total, 3 up, 3 in
Nov 29 03:05:12 np0005539552 nova_compute[233724]: 2025-11-29 08:05:12.419 233728 DEBUG nova.network.neutron [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:05:12 np0005539552 nova_compute[233724]: 2025-11-29 08:05:12.427 233728 DEBUG nova.objects.instance [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lazy-loading 'migration_context' on Instance uuid c99fb9ee-05eb-4a44-8d78-3c719ed060b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:05:12 np0005539552 nova_compute[233724]: 2025-11-29 08:05:12.440 233728 DEBUG nova.virt.libvirt.driver [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:05:12 np0005539552 nova_compute[233724]: 2025-11-29 08:05:12.440 233728 DEBUG nova.virt.libvirt.driver [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Ensure instance console log exists: /var/lib/nova/instances/c99fb9ee-05eb-4a44-8d78-3c719ed060b1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:05:12 np0005539552 nova_compute[233724]: 2025-11-29 08:05:12.441 233728 DEBUG oslo_concurrency.lockutils [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:12 np0005539552 nova_compute[233724]: 2025-11-29 08:05:12.441 233728 DEBUG oslo_concurrency.lockutils [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:12 np0005539552 nova_compute[233724]: 2025-11-29 08:05:12.441 233728 DEBUG oslo_concurrency.lockutils [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:12.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:05:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:12.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:05:13 np0005539552 nova_compute[233724]: 2025-11-29 08:05:13.198 233728 DEBUG nova.network.neutron [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Successfully updated port: 116674a9-081c-4369-8936-549e5a952c2f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:05:13 np0005539552 nova_compute[233724]: 2025-11-29 08:05:13.217 233728 DEBUG oslo_concurrency.lockutils [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "refresh_cache-c99fb9ee-05eb-4a44-8d78-3c719ed060b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:05:13 np0005539552 nova_compute[233724]: 2025-11-29 08:05:13.217 233728 DEBUG oslo_concurrency.lockutils [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquired lock "refresh_cache-c99fb9ee-05eb-4a44-8d78-3c719ed060b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:05:13 np0005539552 nova_compute[233724]: 2025-11-29 08:05:13.218 233728 DEBUG nova.network.neutron [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:05:13 np0005539552 nova_compute[233724]: 2025-11-29 08:05:13.414 233728 DEBUG nova.network.neutron [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:05:13 np0005539552 nova_compute[233724]: 2025-11-29 08:05:13.537 233728 DEBUG nova.network.neutron [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Updating instance_info_cache with network_info: [{"id": "dc792eb5-e18c-4ca9-9940-31d2322e3c3c", "address": "fa:16:3e:39:eb:35", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc792eb5-e1", "ovs_interfaceid": "dc792eb5-e18c-4ca9-9940-31d2322e3c3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:05:13 np0005539552 nova_compute[233724]: 2025-11-29 08:05:13.558 233728 DEBUG oslo_concurrency.lockutils [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Releasing lock "refresh_cache-afe3bf93-f54a-4a11-8f71-87b27ee7290a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:05:13 np0005539552 nova_compute[233724]: 2025-11-29 08:05:13.559 233728 DEBUG nova.compute.manager [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Instance network_info: |[{"id": "dc792eb5-e18c-4ca9-9940-31d2322e3c3c", "address": "fa:16:3e:39:eb:35", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc792eb5-e1", "ovs_interfaceid": "dc792eb5-e18c-4ca9-9940-31d2322e3c3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:05:13 np0005539552 nova_compute[233724]: 2025-11-29 08:05:13.559 233728 DEBUG oslo_concurrency.lockutils [req-9e82b41e-ee16-44ef-ab99-ca30d7424ea9 req-237c3df1-6797-49d9-87e2-34af9730d68f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-afe3bf93-f54a-4a11-8f71-87b27ee7290a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:05:13 np0005539552 nova_compute[233724]: 2025-11-29 08:05:13.560 233728 DEBUG nova.network.neutron [req-9e82b41e-ee16-44ef-ab99-ca30d7424ea9 req-237c3df1-6797-49d9-87e2-34af9730d68f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Refreshing network info cache for port dc792eb5-e18c-4ca9-9940-31d2322e3c3c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:05:13 np0005539552 nova_compute[233724]: 2025-11-29 08:05:13.563 233728 DEBUG nova.virt.libvirt.driver [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Start _get_guest_xml network_info=[{"id": "dc792eb5-e18c-4ca9-9940-31d2322e3c3c", "address": "fa:16:3e:39:eb:35", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc792eb5-e1", "ovs_interfaceid": "dc792eb5-e18c-4ca9-9940-31d2322e3c3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:05:13 np0005539552 nova_compute[233724]: 2025-11-29 08:05:13.568 233728 WARNING nova.virt.libvirt.driver [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:05:13 np0005539552 nova_compute[233724]: 2025-11-29 08:05:13.573 233728 DEBUG nova.virt.libvirt.host [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:05:13 np0005539552 nova_compute[233724]: 2025-11-29 08:05:13.573 233728 DEBUG nova.virt.libvirt.host [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:05:13 np0005539552 nova_compute[233724]: 2025-11-29 08:05:13.576 233728 DEBUG nova.virt.libvirt.host [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:05:13 np0005539552 nova_compute[233724]: 2025-11-29 08:05:13.576 233728 DEBUG nova.virt.libvirt.host [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:05:13 np0005539552 nova_compute[233724]: 2025-11-29 08:05:13.577 233728 DEBUG nova.virt.libvirt.driver [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:05:13 np0005539552 nova_compute[233724]: 2025-11-29 08:05:13.578 233728 DEBUG nova.virt.hardware [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:05:13 np0005539552 nova_compute[233724]: 2025-11-29 08:05:13.578 233728 DEBUG nova.virt.hardware [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:05:13 np0005539552 nova_compute[233724]: 2025-11-29 08:05:13.579 233728 DEBUG nova.virt.hardware [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:05:13 np0005539552 nova_compute[233724]: 2025-11-29 08:05:13.579 233728 DEBUG nova.virt.hardware [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:05:13 np0005539552 nova_compute[233724]: 2025-11-29 08:05:13.579 233728 DEBUG nova.virt.hardware [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:05:13 np0005539552 nova_compute[233724]: 2025-11-29 08:05:13.580 233728 DEBUG nova.virt.hardware [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:05:13 np0005539552 nova_compute[233724]: 2025-11-29 08:05:13.580 233728 DEBUG nova.virt.hardware [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:05:13 np0005539552 nova_compute[233724]: 2025-11-29 08:05:13.580 233728 DEBUG nova.virt.hardware [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:05:13 np0005539552 nova_compute[233724]: 2025-11-29 08:05:13.580 233728 DEBUG nova.virt.hardware [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:05:13 np0005539552 nova_compute[233724]: 2025-11-29 08:05:13.580 233728 DEBUG nova.virt.hardware [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:05:13 np0005539552 nova_compute[233724]: 2025-11-29 08:05:13.581 233728 DEBUG nova.virt.hardware [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:05:13 np0005539552 nova_compute[233724]: 2025-11-29 08:05:13.583 233728 DEBUG oslo_concurrency.processutils [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:05:13 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/257717519' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.007 233728 DEBUG oslo_concurrency.processutils [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.033 233728 DEBUG nova.storage.rbd_utils [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image afe3bf93-f54a-4a11-8f71-87b27ee7290a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.037 233728 DEBUG oslo_concurrency.processutils [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.396 233728 DEBUG nova.compute.manager [req-bb9534f6-770d-44bc-bf64-4604216a4b56 req-9bb83fa2-72dc-48aa-ace3-1e3d6397060a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Received event network-changed-116674a9-081c-4369-8936-549e5a952c2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.397 233728 DEBUG nova.compute.manager [req-bb9534f6-770d-44bc-bf64-4604216a4b56 req-9bb83fa2-72dc-48aa-ace3-1e3d6397060a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Refreshing instance network info cache due to event network-changed-116674a9-081c-4369-8936-549e5a952c2f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.397 233728 DEBUG oslo_concurrency.lockutils [req-bb9534f6-770d-44bc-bf64-4604216a4b56 req-9bb83fa2-72dc-48aa-ace3-1e3d6397060a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-c99fb9ee-05eb-4a44-8d78-3c719ed060b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:05:14 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:05:14 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1743414772' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.472 233728 DEBUG oslo_concurrency.processutils [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.473 233728 DEBUG nova.virt.libvirt.vif [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:05:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1666023155',display_name='tempest-DeleteServersTestJSON-server-1666023155',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1666023155',id=57,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90c23935e0214785a9dc5061b91cf29c',ramdisk_id='',reservation_id='r-ttplysom',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-294503786',owner_user_name='tempest-DeleteServersTestJSO
N-294503786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:05:09Z,user_data=None,user_id='104aea18c5154615b602f032bdb49681',uuid=afe3bf93-f54a-4a11-8f71-87b27ee7290a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dc792eb5-e18c-4ca9-9940-31d2322e3c3c", "address": "fa:16:3e:39:eb:35", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc792eb5-e1", "ovs_interfaceid": "dc792eb5-e18c-4ca9-9940-31d2322e3c3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.473 233728 DEBUG nova.network.os_vif_util [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Converting VIF {"id": "dc792eb5-e18c-4ca9-9940-31d2322e3c3c", "address": "fa:16:3e:39:eb:35", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc792eb5-e1", "ovs_interfaceid": "dc792eb5-e18c-4ca9-9940-31d2322e3c3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.474 233728 DEBUG nova.network.os_vif_util [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:eb:35,bridge_name='br-int',has_traffic_filtering=True,id=dc792eb5-e18c-4ca9-9940-31d2322e3c3c,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc792eb5-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.475 233728 DEBUG nova.objects.instance [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lazy-loading 'pci_devices' on Instance uuid afe3bf93-f54a-4a11-8f71-87b27ee7290a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.490 233728 DEBUG nova.virt.libvirt.driver [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:05:14 np0005539552 nova_compute[233724]:  <uuid>afe3bf93-f54a-4a11-8f71-87b27ee7290a</uuid>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:  <name>instance-00000039</name>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:05:14 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:      <nova:name>tempest-DeleteServersTestJSON-server-1666023155</nova:name>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:05:13</nova:creationTime>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:05:14 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:        <nova:user uuid="104aea18c5154615b602f032bdb49681">tempest-DeleteServersTestJSON-294503786-project-member</nova:user>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:        <nova:project uuid="90c23935e0214785a9dc5061b91cf29c">tempest-DeleteServersTestJSON-294503786</nova:project>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:        <nova:port uuid="dc792eb5-e18c-4ca9-9940-31d2322e3c3c">
Nov 29 03:05:14 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:      <entry name="serial">afe3bf93-f54a-4a11-8f71-87b27ee7290a</entry>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:      <entry name="uuid">afe3bf93-f54a-4a11-8f71-87b27ee7290a</entry>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:05:14 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/afe3bf93-f54a-4a11-8f71-87b27ee7290a_disk">
Nov 29 03:05:14 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:05:14 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:05:14 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/afe3bf93-f54a-4a11-8f71-87b27ee7290a_disk.config">
Nov 29 03:05:14 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:05:14 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:05:14 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:39:eb:35"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:      <target dev="tapdc792eb5-e1"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:05:14 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/afe3bf93-f54a-4a11-8f71-87b27ee7290a/console.log" append="off"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:05:14 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:05:14 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:05:14 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:05:14 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:05:14 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.491 233728 DEBUG nova.compute.manager [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Preparing to wait for external event network-vif-plugged-dc792eb5-e18c-4ca9-9940-31d2322e3c3c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.491 233728 DEBUG oslo_concurrency.lockutils [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "afe3bf93-f54a-4a11-8f71-87b27ee7290a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.492 233728 DEBUG oslo_concurrency.lockutils [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "afe3bf93-f54a-4a11-8f71-87b27ee7290a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.492 233728 DEBUG oslo_concurrency.lockutils [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "afe3bf93-f54a-4a11-8f71-87b27ee7290a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.493 233728 DEBUG nova.virt.libvirt.vif [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:05:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1666023155',display_name='tempest-DeleteServersTestJSON-server-1666023155',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1666023155',id=57,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90c23935e0214785a9dc5061b91cf29c',ramdisk_id='',reservation_id='r-ttplysom',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-294503786',owner_user_name='tempest-DeleteServ
ersTestJSON-294503786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:05:09Z,user_data=None,user_id='104aea18c5154615b602f032bdb49681',uuid=afe3bf93-f54a-4a11-8f71-87b27ee7290a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dc792eb5-e18c-4ca9-9940-31d2322e3c3c", "address": "fa:16:3e:39:eb:35", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc792eb5-e1", "ovs_interfaceid": "dc792eb5-e18c-4ca9-9940-31d2322e3c3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.493 233728 DEBUG nova.network.os_vif_util [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Converting VIF {"id": "dc792eb5-e18c-4ca9-9940-31d2322e3c3c", "address": "fa:16:3e:39:eb:35", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc792eb5-e1", "ovs_interfaceid": "dc792eb5-e18c-4ca9-9940-31d2322e3c3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.494 233728 DEBUG nova.network.os_vif_util [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:eb:35,bridge_name='br-int',has_traffic_filtering=True,id=dc792eb5-e18c-4ca9-9940-31d2322e3c3c,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc792eb5-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.494 233728 DEBUG os_vif [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:eb:35,bridge_name='br-int',has_traffic_filtering=True,id=dc792eb5-e18c-4ca9-9940-31d2322e3c3c,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc792eb5-e1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.494 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.495 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.495 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.498 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.498 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdc792eb5-e1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.499 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdc792eb5-e1, col_values=(('external_ids', {'iface-id': 'dc792eb5-e18c-4ca9-9940-31d2322e3c3c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:39:eb:35', 'vm-uuid': 'afe3bf93-f54a-4a11-8f71-87b27ee7290a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.500 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:14 np0005539552 NetworkManager[48926]: <info>  [1764403514.5010] manager: (tapdc792eb5-e1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/89)
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.502 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.507 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.507 233728 INFO os_vif [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:eb:35,bridge_name='br-int',has_traffic_filtering=True,id=dc792eb5-e18c-4ca9-9940-31d2322e3c3c,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc792eb5-e1')#033[00m
Nov 29 03:05:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:14.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.565 233728 DEBUG nova.virt.libvirt.driver [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.565 233728 DEBUG nova.virt.libvirt.driver [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.565 233728 DEBUG nova.virt.libvirt.driver [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] No VIF found with MAC fa:16:3e:39:eb:35, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.566 233728 INFO nova.virt.libvirt.driver [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Using config drive#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.597 233728 DEBUG nova.storage.rbd_utils [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image afe3bf93-f54a-4a11-8f71-87b27ee7290a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:05:14 np0005539552 podman[256415]: 2025-11-29 08:05:14.603568129 +0000 UTC m=+0.058353997 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:05:14 np0005539552 podman[256414]: 2025-11-29 08:05:14.606546629 +0000 UTC m=+0.061260575 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:05:14 np0005539552 podman[256416]: 2025-11-29 08:05:14.682388386 +0000 UTC m=+0.137223815 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.688 233728 DEBUG nova.network.neutron [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Updating instance_info_cache with network_info: [{"id": "116674a9-081c-4369-8936-549e5a952c2f", "address": "fa:16:3e:6b:50:03", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap116674a9-08", "ovs_interfaceid": "116674a9-081c-4369-8936-549e5a952c2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.704 233728 DEBUG oslo_concurrency.lockutils [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Releasing lock "refresh_cache-c99fb9ee-05eb-4a44-8d78-3c719ed060b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.704 233728 DEBUG nova.compute.manager [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Instance network_info: |[{"id": "116674a9-081c-4369-8936-549e5a952c2f", "address": "fa:16:3e:6b:50:03", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap116674a9-08", "ovs_interfaceid": "116674a9-081c-4369-8936-549e5a952c2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.705 233728 DEBUG oslo_concurrency.lockutils [req-bb9534f6-770d-44bc-bf64-4604216a4b56 req-9bb83fa2-72dc-48aa-ace3-1e3d6397060a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-c99fb9ee-05eb-4a44-8d78-3c719ed060b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.705 233728 DEBUG nova.network.neutron [req-bb9534f6-770d-44bc-bf64-4604216a4b56 req-9bb83fa2-72dc-48aa-ace3-1e3d6397060a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Refreshing network info cache for port 116674a9-081c-4369-8936-549e5a952c2f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.708 233728 DEBUG nova.virt.libvirt.driver [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Start _get_guest_xml network_info=[{"id": "116674a9-081c-4369-8936-549e5a952c2f", "address": "fa:16:3e:6b:50:03", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap116674a9-08", "ovs_interfaceid": "116674a9-081c-4369-8936-549e5a952c2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.711 233728 WARNING nova.virt.libvirt.driver [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.715 233728 DEBUG nova.virt.libvirt.host [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.715 233728 DEBUG nova.virt.libvirt.host [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.718 233728 DEBUG nova.virt.libvirt.host [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.718 233728 DEBUG nova.virt.libvirt.host [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.719 233728 DEBUG nova.virt.libvirt.driver [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.719 233728 DEBUG nova.virt.hardware [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.720 233728 DEBUG nova.virt.hardware [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.720 233728 DEBUG nova.virt.hardware [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.720 233728 DEBUG nova.virt.hardware [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.720 233728 DEBUG nova.virt.hardware [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.721 233728 DEBUG nova.virt.hardware [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.721 233728 DEBUG nova.virt.hardware [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.721 233728 DEBUG nova.virt.hardware [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.721 233728 DEBUG nova.virt.hardware [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.722 233728 DEBUG nova.virt.hardware [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.722 233728 DEBUG nova.virt.hardware [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.724 233728 DEBUG oslo_concurrency.processutils [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:14.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.942 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:05:14 np0005539552 nova_compute[233724]: 2025-11-29 08:05:14.993 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:15 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:05:15 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/618170519' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.135 233728 DEBUG oslo_concurrency.processutils [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.159 233728 DEBUG nova.storage.rbd_utils [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] rbd image c99fb9ee-05eb-4a44-8d78-3c719ed060b1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.163 233728 DEBUG oslo_concurrency.processutils [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.188 233728 INFO nova.virt.libvirt.driver [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Creating config drive at /var/lib/nova/instances/afe3bf93-f54a-4a11-8f71-87b27ee7290a/disk.config#033[00m
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.192 233728 DEBUG oslo_concurrency.processutils [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/afe3bf93-f54a-4a11-8f71-87b27ee7290a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_d287vco execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.320 233728 DEBUG oslo_concurrency.processutils [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/afe3bf93-f54a-4a11-8f71-87b27ee7290a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_d287vco" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.354 233728 DEBUG nova.storage.rbd_utils [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image afe3bf93-f54a-4a11-8f71-87b27ee7290a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.359 233728 DEBUG oslo_concurrency.processutils [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/afe3bf93-f54a-4a11-8f71-87b27ee7290a/disk.config afe3bf93-f54a-4a11-8f71-87b27ee7290a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.507 233728 DEBUG oslo_concurrency.processutils [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/afe3bf93-f54a-4a11-8f71-87b27ee7290a/disk.config afe3bf93-f54a-4a11-8f71-87b27ee7290a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.508 233728 INFO nova.virt.libvirt.driver [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Deleting local config drive /var/lib/nova/instances/afe3bf93-f54a-4a11-8f71-87b27ee7290a/disk.config because it was imported into RBD.#033[00m
Nov 29 03:05:15 np0005539552 kernel: tapdc792eb5-e1: entered promiscuous mode
Nov 29 03:05:15 np0005539552 NetworkManager[48926]: <info>  [1764403515.5624] manager: (tapdc792eb5-e1): new Tun device (/org/freedesktop/NetworkManager/Devices/90)
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.563 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:15 np0005539552 ovn_controller[133798]: 2025-11-29T08:05:15Z|00166|binding|INFO|Claiming lport dc792eb5-e18c-4ca9-9940-31d2322e3c3c for this chassis.
Nov 29 03:05:15 np0005539552 ovn_controller[133798]: 2025-11-29T08:05:15Z|00167|binding|INFO|dc792eb5-e18c-4ca9-9940-31d2322e3c3c: Claiming fa:16:3e:39:eb:35 10.100.0.7
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.568 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:15.576 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:eb:35 10.100.0.7'], port_security=['fa:16:3e:39:eb:35 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'afe3bf93-f54a-4a11-8f71-87b27ee7290a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90c23935e0214785a9dc5061b91cf29c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f717601c-d15f-4a2d-a56a-85c60baf3a44', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc7b8639-cf64-4f98-aa54-bbd2c9e5fa46, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=dc792eb5-e18c-4ca9-9940-31d2322e3c3c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:15.577 143400 INFO neutron.agent.ovn.metadata.agent [-] Port dc792eb5-e18c-4ca9-9940-31d2322e3c3c in datapath a8be8715-2b74-42ca-9713-7fc1f4a33bc9 bound to our chassis#033[00m
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:15.578 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a8be8715-2b74-42ca-9713-7fc1f4a33bc9#033[00m
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:15.588 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0fb26a75-1d70-4d16-92b3-33cdf8707dc7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:15.589 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa8be8715-21 in ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:15.591 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa8be8715-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:15.591 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4e5b8079-076e-472b-8874-85e7ab0708a9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:15.592 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[abbfa343-8e88-40b4-86ea-c98175541990]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:15 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:05:15 np0005539552 systemd-machined[196379]: New machine qemu-19-instance-00000039.
Nov 29 03:05:15 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1557691039' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:05:15 np0005539552 systemd-udevd[256604]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:15.613 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[deaa4c23-6eb9-447d-b0e3-324819043dbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:15 np0005539552 NetworkManager[48926]: <info>  [1764403515.6183] device (tapdc792eb5-e1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:05:15 np0005539552 NetworkManager[48926]: <info>  [1764403515.6197] device (tapdc792eb5-e1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.630 233728 DEBUG oslo_concurrency.processutils [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:15 np0005539552 systemd[1]: Started Virtual Machine qemu-19-instance-00000039.
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.632 233728 DEBUG nova.virt.libvirt.vif [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:05:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-348635835',display_name='tempest-ImagesTestJSON-server-348635835',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-imagestestjson-server-348635835',id=58,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='638fd52fccf14f16b56d0860553063f3',ramdisk_id='',reservation_id='r-c6tl35j1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1682881466',owner_user_name='tempest-ImagesTestJSON-1682881466-project-member'},tags=Ta
gList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:05:11Z,user_data=None,user_id='fddc5f5801764ee19d5253e2cab34df3',uuid=c99fb9ee-05eb-4a44-8d78-3c719ed060b1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "116674a9-081c-4369-8936-549e5a952c2f", "address": "fa:16:3e:6b:50:03", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap116674a9-08", "ovs_interfaceid": "116674a9-081c-4369-8936-549e5a952c2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.632 233728 DEBUG nova.network.os_vif_util [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Converting VIF {"id": "116674a9-081c-4369-8936-549e5a952c2f", "address": "fa:16:3e:6b:50:03", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap116674a9-08", "ovs_interfaceid": "116674a9-081c-4369-8936-549e5a952c2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.633 233728 DEBUG nova.network.os_vif_util [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:50:03,bridge_name='br-int',has_traffic_filtering=True,id=116674a9-081c-4369-8936-549e5a952c2f,network=Network(f01d29c1-afcb-4909-9abf-f7d31e4549d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap116674a9-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.636 233728 DEBUG nova.objects.instance [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lazy-loading 'pci_devices' on Instance uuid c99fb9ee-05eb-4a44-8d78-3c719ed060b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:15.641 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d2236cdc-e017-486b-8bd3-eaa69357f121]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.648 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:15 np0005539552 ovn_controller[133798]: 2025-11-29T08:05:15Z|00168|binding|INFO|Setting lport dc792eb5-e18c-4ca9-9940-31d2322e3c3c ovn-installed in OVS
Nov 29 03:05:15 np0005539552 ovn_controller[133798]: 2025-11-29T08:05:15Z|00169|binding|INFO|Setting lport dc792eb5-e18c-4ca9-9940-31d2322e3c3c up in Southbound
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.652 233728 DEBUG nova.virt.libvirt.driver [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:05:15 np0005539552 nova_compute[233724]:  <uuid>c99fb9ee-05eb-4a44-8d78-3c719ed060b1</uuid>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:  <name>instance-0000003a</name>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:05:15 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:      <nova:name>tempest-ImagesTestJSON-server-348635835</nova:name>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:05:14</nova:creationTime>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:05:15 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:        <nova:user uuid="fddc5f5801764ee19d5253e2cab34df3">tempest-ImagesTestJSON-1682881466-project-member</nova:user>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:        <nova:project uuid="638fd52fccf14f16b56d0860553063f3">tempest-ImagesTestJSON-1682881466</nova:project>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:        <nova:port uuid="116674a9-081c-4369-8936-549e5a952c2f">
Nov 29 03:05:15 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:      <entry name="serial">c99fb9ee-05eb-4a44-8d78-3c719ed060b1</entry>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:      <entry name="uuid">c99fb9ee-05eb-4a44-8d78-3c719ed060b1</entry>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:05:15 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/c99fb9ee-05eb-4a44-8d78-3c719ed060b1_disk">
Nov 29 03:05:15 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:05:15 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:05:15 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/c99fb9ee-05eb-4a44-8d78-3c719ed060b1_disk.config">
Nov 29 03:05:15 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:05:15 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:05:15 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:6b:50:03"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:      <target dev="tap116674a9-08"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:05:15 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/c99fb9ee-05eb-4a44-8d78-3c719ed060b1/console.log" append="off"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:05:15 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:05:15 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:05:15 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:05:15 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:05:15 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.652 233728 DEBUG nova.compute.manager [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Preparing to wait for external event network-vif-plugged-116674a9-081c-4369-8936-549e5a952c2f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.652 233728 DEBUG oslo_concurrency.lockutils [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "c99fb9ee-05eb-4a44-8d78-3c719ed060b1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.653 233728 DEBUG oslo_concurrency.lockutils [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "c99fb9ee-05eb-4a44-8d78-3c719ed060b1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.653 233728 DEBUG oslo_concurrency.lockutils [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "c99fb9ee-05eb-4a44-8d78-3c719ed060b1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.654 233728 DEBUG nova.virt.libvirt.vif [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:05:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-348635835',display_name='tempest-ImagesTestJSON-server-348635835',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-imagestestjson-server-348635835',id=58,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='638fd52fccf14f16b56d0860553063f3',ramdisk_id='',reservation_id='r-c6tl35j1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1682881466',owner_user_name='tempest-ImagesTestJSON-1682881466-project-member
'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:05:11Z,user_data=None,user_id='fddc5f5801764ee19d5253e2cab34df3',uuid=c99fb9ee-05eb-4a44-8d78-3c719ed060b1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "116674a9-081c-4369-8936-549e5a952c2f", "address": "fa:16:3e:6b:50:03", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap116674a9-08", "ovs_interfaceid": "116674a9-081c-4369-8936-549e5a952c2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.654 233728 DEBUG nova.network.os_vif_util [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Converting VIF {"id": "116674a9-081c-4369-8936-549e5a952c2f", "address": "fa:16:3e:6b:50:03", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap116674a9-08", "ovs_interfaceid": "116674a9-081c-4369-8936-549e5a952c2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.654 233728 DEBUG nova.network.os_vif_util [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:50:03,bridge_name='br-int',has_traffic_filtering=True,id=116674a9-081c-4369-8936-549e5a952c2f,network=Network(f01d29c1-afcb-4909-9abf-f7d31e4549d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap116674a9-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.655 233728 DEBUG os_vif [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:50:03,bridge_name='br-int',has_traffic_filtering=True,id=116674a9-081c-4369-8936-549e5a952c2f,network=Network(f01d29c1-afcb-4909-9abf-f7d31e4549d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap116674a9-08') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.655 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.656 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.656 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.656 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.658 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.658 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap116674a9-08, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.658 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap116674a9-08, col_values=(('external_ids', {'iface-id': '116674a9-081c-4369-8936-549e5a952c2f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6b:50:03', 'vm-uuid': 'c99fb9ee-05eb-4a44-8d78-3c719ed060b1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.660 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:05:15 np0005539552 NetworkManager[48926]: <info>  [1764403515.6606] manager: (tap116674a9-08): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/91)
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.662 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.667 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.668 233728 INFO os_vif [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:50:03,bridge_name='br-int',has_traffic_filtering=True,id=116674a9-081c-4369-8936-549e5a952c2f,network=Network(f01d29c1-afcb-4909-9abf-f7d31e4549d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap116674a9-08')
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:15.674 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[fd86ba30-1657-46d0-a080-42da8acb08ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:15.680 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f7a42793-acd8-46a2-8f54-829a285ee6c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:05:15 np0005539552 systemd-udevd[256609]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:05:15 np0005539552 NetworkManager[48926]: <info>  [1764403515.6817] manager: (tapa8be8715-20): new Veth device (/org/freedesktop/NetworkManager/Devices/92)
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.701 233728 DEBUG nova.network.neutron [req-9e82b41e-ee16-44ef-ab99-ca30d7424ea9 req-237c3df1-6797-49d9-87e2-34af9730d68f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Updated VIF entry in instance network info cache for port dc792eb5-e18c-4ca9-9940-31d2322e3c3c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.702 233728 DEBUG nova.network.neutron [req-9e82b41e-ee16-44ef-ab99-ca30d7424ea9 req-237c3df1-6797-49d9-87e2-34af9730d68f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Updating instance_info_cache with network_info: [{"id": "dc792eb5-e18c-4ca9-9940-31d2322e3c3c", "address": "fa:16:3e:39:eb:35", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc792eb5-e1", "ovs_interfaceid": "dc792eb5-e18c-4ca9-9940-31d2322e3c3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:15.710 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[9a67c192-f67f-4f44-a8a0-54d6f282fe4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:15.713 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[78db0e93-0674-4a9a-b1b6-a54ab6011765]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.734 233728 DEBUG oslo_concurrency.lockutils [req-9e82b41e-ee16-44ef-ab99-ca30d7424ea9 req-237c3df1-6797-49d9-87e2-34af9730d68f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-afe3bf93-f54a-4a11-8f71-87b27ee7290a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:05:15 np0005539552 NetworkManager[48926]: <info>  [1764403515.7359] device (tapa8be8715-20): carrier: link connected
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:15.740 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[9ad51cdc-6a88-45bf-b765-d53a407ff4f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:15.755 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[8dcb976b-65f0-4604-8a6f-02a0fceded40]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa8be8715-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:f3:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 52], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 656142, 'reachable_time': 43368, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256642, 'error': None, 'target': 'ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.762 233728 DEBUG nova.virt.libvirt.driver [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.762 233728 DEBUG nova.virt.libvirt.driver [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.762 233728 DEBUG nova.virt.libvirt.driver [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] No VIF found with MAC fa:16:3e:6b:50:03, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.763 233728 INFO nova.virt.libvirt.driver [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Using config drive
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:15.769 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ec185e64-9827-47a3-9c9f-5a6533415d4d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe95:f3b4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 656142, 'tstamp': 656142}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256643, 'error': None, 'target': 'ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:15.792 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[165a68cb-095a-4778-8aff-f5e2ad902382]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa8be8715-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:f3:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 52], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 656142, 'reachable_time': 43368, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 256651, 'error': None, 'target': 'ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.795 233728 DEBUG nova.storage.rbd_utils [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] rbd image c99fb9ee-05eb-4a44-8d78-3c719ed060b1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:15.825 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6079e497-af78-4e6d-a675-2cb7fbc30070]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.877 233728 DEBUG nova.compute.manager [req-b1b4924f-3708-4fd9-9ca3-4c0907c33dd4 req-85272487-7041-480a-94f8-5ac6bd649072 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Received event network-vif-plugged-dc792eb5-e18c-4ca9-9940-31d2322e3c3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.877 233728 DEBUG oslo_concurrency.lockutils [req-b1b4924f-3708-4fd9-9ca3-4c0907c33dd4 req-85272487-7041-480a-94f8-5ac6bd649072 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "afe3bf93-f54a-4a11-8f71-87b27ee7290a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.877 233728 DEBUG oslo_concurrency.lockutils [req-b1b4924f-3708-4fd9-9ca3-4c0907c33dd4 req-85272487-7041-480a-94f8-5ac6bd649072 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "afe3bf93-f54a-4a11-8f71-87b27ee7290a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.878 233728 DEBUG oslo_concurrency.lockutils [req-b1b4924f-3708-4fd9-9ca3-4c0907c33dd4 req-85272487-7041-480a-94f8-5ac6bd649072 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "afe3bf93-f54a-4a11-8f71-87b27ee7290a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.878 233728 DEBUG nova.compute.manager [req-b1b4924f-3708-4fd9-9ca3-4c0907c33dd4 req-85272487-7041-480a-94f8-5ac6bd649072 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Processing event network-vif-plugged-dc792eb5-e18c-4ca9-9940-31d2322e3c3c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:15.878 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[fa2073da-ec7e-4778-bb48-78c210498271]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:15.879 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa8be8715-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:15.880 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:15.880 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa8be8715-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.882 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:05:15 np0005539552 NetworkManager[48926]: <info>  [1764403515.8842] manager: (tapa8be8715-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/93)
Nov 29 03:05:15 np0005539552 kernel: tapa8be8715-20: entered promiscuous mode
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.886 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:15.888 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa8be8715-20, col_values=(('external_ids', {'iface-id': '307ce936-d5dc-4357-90d6-2b0b2d3d1113'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.889 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:05:15 np0005539552 ovn_controller[133798]: 2025-11-29T08:05:15Z|00170|binding|INFO|Releasing lport 307ce936-d5dc-4357-90d6-2b0b2d3d1113 from this chassis (sb_readonly=0)
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.908 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.911 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:15.912 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a8be8715-2b74-42ca-9713-7fc1f4a33bc9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a8be8715-2b74-42ca-9713-7fc1f4a33bc9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:15.913 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e953e0c2-6fe0-4884-a759-b5d11d0a2309]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:15.914 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-a8be8715-2b74-42ca-9713-7fc1f4a33bc9
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/a8be8715-2b74-42ca-9713-7fc1f4a33bc9.pid.haproxy
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID a8be8715-2b74-42ca-9713-7fc1f4a33bc9
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 03:05:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:15.916 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'env', 'PROCESS_TAG=haproxy-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a8be8715-2b74-42ca-9713-7fc1f4a33bc9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.957 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.957 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.957 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.958 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 03:05:15 np0005539552 nova_compute[233724]: 2025-11-29 08:05:15.958 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:05:16 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e218 e218: 3 total, 3 up, 3 in
Nov 29 03:05:16 np0005539552 podman[256760]: 2025-11-29 08:05:16.280654119 +0000 UTC m=+0.050374191 container create f03f67d1a43cf98edb99f235d084d408c2cb2782a61d8fbb0d0dba71561e3d19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.301 233728 DEBUG nova.compute.manager [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.302 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403516.3006625, afe3bf93-f54a-4a11-8f71-87b27ee7290a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.302 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] VM Started (Lifecycle Event)
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.305 233728 DEBUG nova.virt.libvirt.driver [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.310 233728 INFO nova.virt.libvirt.driver [-] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Instance spawned successfully.
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.311 233728 DEBUG nova.virt.libvirt.driver [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.329 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.335 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:05:16 np0005539552 systemd[1]: Started libpod-conmon-f03f67d1a43cf98edb99f235d084d408c2cb2782a61d8fbb0d0dba71561e3d19.scope.
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.340 233728 DEBUG nova.virt.libvirt.driver [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.340 233728 DEBUG nova.virt.libvirt.driver [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.340 233728 DEBUG nova.virt.libvirt.driver [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.341 233728 DEBUG nova.virt.libvirt.driver [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.341 233728 DEBUG nova.virt.libvirt.driver [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.341 233728 DEBUG nova.virt.libvirt.driver [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:05:16 np0005539552 podman[256760]: 2025-11-29 08:05:16.252057307 +0000 UTC m=+0.021777399 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:05:16 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:05:16 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ac219394d3b9f6072becad2ae4d726d9477bbcc109faee1faa95bf66beb88bf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:05:16 np0005539552 podman[256760]: 2025-11-29 08:05:16.386302651 +0000 UTC m=+0.156022753 container init f03f67d1a43cf98edb99f235d084d408c2cb2782a61d8fbb0d0dba71561e3d19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0)
Nov 29 03:05:16 np0005539552 podman[256760]: 2025-11-29 08:05:16.393417723 +0000 UTC m=+0.163137795 container start f03f67d1a43cf98edb99f235d084d408c2cb2782a61d8fbb0d0dba71561e3d19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.396 233728 INFO nova.virt.libvirt.driver [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Creating config drive at /var/lib/nova/instances/c99fb9ee-05eb-4a44-8d78-3c719ed060b1/disk.config#033[00m
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.402 233728 DEBUG oslo_concurrency.processutils [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c99fb9ee-05eb-4a44-8d78-3c719ed060b1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxbfr2nfk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:16 np0005539552 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[256776]: [NOTICE]   (256780) : New worker (256783) forked
Nov 29 03:05:16 np0005539552 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[256776]: [NOTICE]   (256780) : Loading success.
Nov 29 03:05:16 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:05:16 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3445115140' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.429 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.429 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403516.3019598, afe3bf93-f54a-4a11-8f71-87b27ee7290a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.430 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.436 233728 INFO nova.compute.manager [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Took 6.38 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.437 233728 DEBUG nova.compute.manager [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.446 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.452 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.455 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403516.304659, afe3bf93-f54a-4a11-8f71-87b27ee7290a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.456 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.478 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.481 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.514 233728 INFO nova.compute.manager [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Took 7.38 seconds to build instance.#033[00m
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.533 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-0000003a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.534 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-0000003a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.534 233728 DEBUG oslo_concurrency.processutils [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c99fb9ee-05eb-4a44-8d78-3c719ed060b1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxbfr2nfk" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:05:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:16.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.564 233728 DEBUG nova.storage.rbd_utils [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] rbd image c99fb9ee-05eb-4a44-8d78-3c719ed060b1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.568 233728 DEBUG oslo_concurrency.processutils [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c99fb9ee-05eb-4a44-8d78-3c719ed060b1/disk.config c99fb9ee-05eb-4a44-8d78-3c719ed060b1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.593 233728 DEBUG oslo_concurrency.lockutils [None req-baf441b0-b869-44a0-9118-140e4939da4e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "afe3bf93-f54a-4a11-8f71-87b27ee7290a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.540s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.598 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.598 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.731 233728 DEBUG oslo_concurrency.processutils [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c99fb9ee-05eb-4a44-8d78-3c719ed060b1/disk.config c99fb9ee-05eb-4a44-8d78-3c719ed060b1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.731 233728 INFO nova.virt.libvirt.driver [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Deleting local config drive /var/lib/nova/instances/c99fb9ee-05eb-4a44-8d78-3c719ed060b1/disk.config because it was imported into RBD.#033[00m
Nov 29 03:05:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:16.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:16 np0005539552 kernel: tap116674a9-08: entered promiscuous mode
Nov 29 03:05:16 np0005539552 NetworkManager[48926]: <info>  [1764403516.7729] manager: (tap116674a9-08): new Tun device (/org/freedesktop/NetworkManager/Devices/94)
Nov 29 03:05:16 np0005539552 systemd-udevd[256630]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.775 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:16 np0005539552 ovn_controller[133798]: 2025-11-29T08:05:16Z|00171|binding|INFO|Claiming lport 116674a9-081c-4369-8936-549e5a952c2f for this chassis.
Nov 29 03:05:16 np0005539552 ovn_controller[133798]: 2025-11-29T08:05:16Z|00172|binding|INFO|116674a9-081c-4369-8936-549e5a952c2f: Claiming fa:16:3e:6b:50:03 10.100.0.14
Nov 29 03:05:16 np0005539552 NetworkManager[48926]: <info>  [1764403516.7885] device (tap116674a9-08): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:05:16 np0005539552 NetworkManager[48926]: <info>  [1764403516.7894] device (tap116674a9-08): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:05:16 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:16.787 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:50:03 10.100.0.14'], port_security=['fa:16:3e:6b:50:03 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'c99fb9ee-05eb-4a44-8d78-3c719ed060b1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f01d29c1-afcb-4909-9abf-f7d31e4549d8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '638fd52fccf14f16b56d0860553063f3', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a57b53e5-9055-46ae-8ab4-d4a8a62173cb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d89b288b-bbc6-47fa-ad12-8aab94ffc78f, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=116674a9-081c-4369-8936-549e5a952c2f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:05:16 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:16.792 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 116674a9-081c-4369-8936-549e5a952c2f in datapath f01d29c1-afcb-4909-9abf-f7d31e4549d8 bound to our chassis#033[00m
Nov 29 03:05:16 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:16.794 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f01d29c1-afcb-4909-9abf-f7d31e4549d8#033[00m
Nov 29 03:05:16 np0005539552 ovn_controller[133798]: 2025-11-29T08:05:16Z|00173|binding|INFO|Setting lport 116674a9-081c-4369-8936-549e5a952c2f ovn-installed in OVS
Nov 29 03:05:16 np0005539552 ovn_controller[133798]: 2025-11-29T08:05:16Z|00174|binding|INFO|Setting lport 116674a9-081c-4369-8936-549e5a952c2f up in Southbound
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.803 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:16 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:16.805 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[759f4884-be7f-4f7a-b3bd-aacbf3d01a5c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:16 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:16.806 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf01d29c1-a1 in ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:05:16 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:16.808 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf01d29c1-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:05:16 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:16.808 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[70a330e0-78d5-4ab7-841e-8dba91f654a2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:16 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:16.809 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ae0437fb-85f6-4e36-949d-4492e1eac9da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.814 233728 DEBUG nova.network.neutron [req-bb9534f6-770d-44bc-bf64-4604216a4b56 req-9bb83fa2-72dc-48aa-ace3-1e3d6397060a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Updated VIF entry in instance network info cache for port 116674a9-081c-4369-8936-549e5a952c2f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.814 233728 DEBUG nova.network.neutron [req-bb9534f6-770d-44bc-bf64-4604216a4b56 req-9bb83fa2-72dc-48aa-ace3-1e3d6397060a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Updating instance_info_cache with network_info: [{"id": "116674a9-081c-4369-8936-549e5a952c2f", "address": "fa:16:3e:6b:50:03", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap116674a9-08", "ovs_interfaceid": "116674a9-081c-4369-8936-549e5a952c2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:05:16 np0005539552 systemd-machined[196379]: New machine qemu-20-instance-0000003a.
Nov 29 03:05:16 np0005539552 systemd[1]: Started Virtual Machine qemu-20-instance-0000003a.
Nov 29 03:05:16 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:16.822 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[b109edf2-71a4-4830-b341-bb964ada754b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.827 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.828 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4560MB free_disk=20.917888641357422GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.829 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.830 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.832 233728 DEBUG oslo_concurrency.lockutils [req-bb9534f6-770d-44bc-bf64-4604216a4b56 req-9bb83fa2-72dc-48aa-ace3-1e3d6397060a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-c99fb9ee-05eb-4a44-8d78-3c719ed060b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:05:16 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:16.846 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[5be73705-556e-4769-9948-3e60e5cf3c68]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:16 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:16.874 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[f844f365-5942-4415-a1e4-ae336ff4c997]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:16 np0005539552 NetworkManager[48926]: <info>  [1764403516.8825] manager: (tapf01d29c1-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/95)
Nov 29 03:05:16 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:16.881 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[1ec47fa9-fa64-45f9-8847-89ea4ebb0ba2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:16 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:16.915 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[81e7ce95-2f7c-46f7-baa4-bdd9f90de1db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:16 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:16.917 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[9da3da19-a578-42b5-8e03-934b185137bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:16 np0005539552 NetworkManager[48926]: <info>  [1764403516.9380] device (tapf01d29c1-a0): carrier: link connected
Nov 29 03:05:16 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:16.944 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[1659b0b7-93b3-46ab-a6e9-6526926df8c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:16 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:16.962 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6c8931e0-d35c-4768-b8cf-4c123e65332a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf01d29c1-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:77:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 54], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 656262, 'reachable_time': 34189, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256869, 'error': None, 'target': 'ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.969 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance afe3bf93-f54a-4a11-8f71-87b27ee7290a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.969 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance c99fb9ee-05eb-4a44-8d78-3c719ed060b1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.970 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:05:16 np0005539552 nova_compute[233724]: 2025-11-29 08:05:16.970 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:05:16 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:16.976 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[8280860f-3fb6-4266-9304-d358f3e60c6d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe96:77b2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 656262, 'tstamp': 656262}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256870, 'error': None, 'target': 'ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:16 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:16.990 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[62e5ad12-ab32-47a9-b3d5-1a0defb6e461]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf01d29c1-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:77:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 54], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 656262, 'reachable_time': 34189, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 256871, 'error': None, 'target': 'ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:17.015 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[7c17565f-9d6f-4cab-87f6-891f1d6aeed4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:17 np0005539552 nova_compute[233724]: 2025-11-29 08:05:17.027 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:17.086 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c3786759-7516-4623-bc4b-256dc2850841]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:17.088 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf01d29c1-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:17.088 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:05:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:17.088 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf01d29c1-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:17 np0005539552 kernel: tapf01d29c1-a0: entered promiscuous mode
Nov 29 03:05:17 np0005539552 NetworkManager[48926]: <info>  [1764403517.0925] manager: (tapf01d29c1-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/96)
Nov 29 03:05:17 np0005539552 nova_compute[233724]: 2025-11-29 08:05:17.091 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:17.093 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf01d29c1-a0, col_values=(('external_ids', {'iface-id': '2247adf2-4048-41de-ba3c-ac69d728838f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:17 np0005539552 ovn_controller[133798]: 2025-11-29T08:05:17Z|00175|binding|INFO|Releasing lport 2247adf2-4048-41de-ba3c-ac69d728838f from this chassis (sb_readonly=0)
Nov 29 03:05:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:17.112 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f01d29c1-afcb-4909-9abf-f7d31e4549d8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f01d29c1-afcb-4909-9abf-f7d31e4549d8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:05:17 np0005539552 nova_compute[233724]: 2025-11-29 08:05:17.113 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:17.113 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a9c27129-2376-460a-9faa-2cc4aa8f1f4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:17.114 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:05:17 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:05:17 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:05:17 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-f01d29c1-afcb-4909-9abf-f7d31e4549d8
Nov 29 03:05:17 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:05:17 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:05:17 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:05:17 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/f01d29c1-afcb-4909-9abf-f7d31e4549d8.pid.haproxy
Nov 29 03:05:17 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:05:17 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:05:17 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:05:17 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:05:17 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:05:17 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:05:17 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:05:17 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:05:17 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:05:17 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:05:17 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:05:17 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:05:17 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:05:17 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:05:17 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:05:17 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:05:17 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:05:17 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:05:17 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:05:17 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:05:17 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID f01d29c1-afcb-4909-9abf-f7d31e4549d8
Nov 29 03:05:17 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:05:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:17.116 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8', 'env', 'PROCESS_TAG=haproxy-f01d29c1-afcb-4909-9abf-f7d31e4549d8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f01d29c1-afcb-4909-9abf-f7d31e4549d8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:05:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:17 np0005539552 nova_compute[233724]: 2025-11-29 08:05:17.157 233728 DEBUG nova.compute.manager [req-b58777a7-6ed6-4414-ba79-76600b775ae8 req-64cbc40e-cca5-4c60-9489-25c7fa842a46 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Received event network-vif-plugged-116674a9-081c-4369-8936-549e5a952c2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:05:17 np0005539552 nova_compute[233724]: 2025-11-29 08:05:17.158 233728 DEBUG oslo_concurrency.lockutils [req-b58777a7-6ed6-4414-ba79-76600b775ae8 req-64cbc40e-cca5-4c60-9489-25c7fa842a46 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "c99fb9ee-05eb-4a44-8d78-3c719ed060b1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:17 np0005539552 nova_compute[233724]: 2025-11-29 08:05:17.159 233728 DEBUG oslo_concurrency.lockutils [req-b58777a7-6ed6-4414-ba79-76600b775ae8 req-64cbc40e-cca5-4c60-9489-25c7fa842a46 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c99fb9ee-05eb-4a44-8d78-3c719ed060b1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:17 np0005539552 nova_compute[233724]: 2025-11-29 08:05:17.159 233728 DEBUG oslo_concurrency.lockutils [req-b58777a7-6ed6-4414-ba79-76600b775ae8 req-64cbc40e-cca5-4c60-9489-25c7fa842a46 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c99fb9ee-05eb-4a44-8d78-3c719ed060b1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:17 np0005539552 nova_compute[233724]: 2025-11-29 08:05:17.159 233728 DEBUG nova.compute.manager [req-b58777a7-6ed6-4414-ba79-76600b775ae8 req-64cbc40e-cca5-4c60-9489-25c7fa842a46 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Processing event network-vif-plugged-116674a9-081c-4369-8936-549e5a952c2f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:05:17 np0005539552 nova_compute[233724]: 2025-11-29 08:05:17.468 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403517.467506, c99fb9ee-05eb-4a44-8d78-3c719ed060b1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:05:17 np0005539552 nova_compute[233724]: 2025-11-29 08:05:17.468 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] VM Started (Lifecycle Event)#033[00m
Nov 29 03:05:17 np0005539552 nova_compute[233724]: 2025-11-29 08:05:17.470 233728 DEBUG nova.compute.manager [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:05:17 np0005539552 nova_compute[233724]: 2025-11-29 08:05:17.474 233728 DEBUG nova.virt.libvirt.driver [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:05:17 np0005539552 nova_compute[233724]: 2025-11-29 08:05:17.477 233728 INFO nova.virt.libvirt.driver [-] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Instance spawned successfully.#033[00m
Nov 29 03:05:17 np0005539552 nova_compute[233724]: 2025-11-29 08:05:17.478 233728 DEBUG nova.virt.libvirt.driver [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:05:17 np0005539552 nova_compute[233724]: 2025-11-29 08:05:17.485 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:05:17 np0005539552 nova_compute[233724]: 2025-11-29 08:05:17.490 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:05:17 np0005539552 nova_compute[233724]: 2025-11-29 08:05:17.499 233728 DEBUG nova.virt.libvirt.driver [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:05:17 np0005539552 nova_compute[233724]: 2025-11-29 08:05:17.499 233728 DEBUG nova.virt.libvirt.driver [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:05:17 np0005539552 nova_compute[233724]: 2025-11-29 08:05:17.500 233728 DEBUG nova.virt.libvirt.driver [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:05:17 np0005539552 nova_compute[233724]: 2025-11-29 08:05:17.500 233728 DEBUG nova.virt.libvirt.driver [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:05:17 np0005539552 nova_compute[233724]: 2025-11-29 08:05:17.501 233728 DEBUG nova.virt.libvirt.driver [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:05:17 np0005539552 nova_compute[233724]: 2025-11-29 08:05:17.501 233728 DEBUG nova.virt.libvirt.driver [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:05:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:05:17 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/222346444' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:05:17 np0005539552 nova_compute[233724]: 2025-11-29 08:05:17.524 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:05:17 np0005539552 nova_compute[233724]: 2025-11-29 08:05:17.525 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403517.4679322, c99fb9ee-05eb-4a44-8d78-3c719ed060b1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:05:17 np0005539552 nova_compute[233724]: 2025-11-29 08:05:17.525 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:05:17 np0005539552 podman[256962]: 2025-11-29 08:05:17.530891298 +0000 UTC m=+0.079754713 container create 4cb45eac64ffd970a314a5944abe5dbf2c83b3eb2cca3e5d71359e773b39ffbb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 29 03:05:17 np0005539552 nova_compute[233724]: 2025-11-29 08:05:17.537 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:17 np0005539552 nova_compute[233724]: 2025-11-29 08:05:17.545 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:05:17 np0005539552 nova_compute[233724]: 2025-11-29 08:05:17.548 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:05:17 np0005539552 nova_compute[233724]: 2025-11-29 08:05:17.551 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403517.4735363, c99fb9ee-05eb-4a44-8d78-3c719ed060b1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:05:17 np0005539552 nova_compute[233724]: 2025-11-29 08:05:17.551 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:05:17 np0005539552 nova_compute[233724]: 2025-11-29 08:05:17.562 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:05:17 np0005539552 nova_compute[233724]: 2025-11-29 08:05:17.568 233728 INFO nova.compute.manager [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Took 6.43 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:05:17 np0005539552 nova_compute[233724]: 2025-11-29 08:05:17.569 233728 DEBUG nova.compute.manager [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:05:17 np0005539552 systemd[1]: Started libpod-conmon-4cb45eac64ffd970a314a5944abe5dbf2c83b3eb2cca3e5d71359e773b39ffbb.scope.
Nov 29 03:05:17 np0005539552 podman[256962]: 2025-11-29 08:05:17.483752546 +0000 UTC m=+0.032615991 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:05:17 np0005539552 nova_compute[233724]: 2025-11-29 08:05:17.577 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:05:17 np0005539552 nova_compute[233724]: 2025-11-29 08:05:17.580 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:05:17 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:05:17 np0005539552 nova_compute[233724]: 2025-11-29 08:05:17.604 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:05:17 np0005539552 nova_compute[233724]: 2025-11-29 08:05:17.605 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:17 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/708eda57ab79e0156281b28ec17c183d8cfcb9943fbe39baa2cddab381a52641/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:05:17 np0005539552 nova_compute[233724]: 2025-11-29 08:05:17.620 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:05:17 np0005539552 podman[256962]: 2025-11-29 08:05:17.623844408 +0000 UTC m=+0.172707843 container init 4cb45eac64ffd970a314a5944abe5dbf2c83b3eb2cca3e5d71359e773b39ffbb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 29 03:05:17 np0005539552 podman[256962]: 2025-11-29 08:05:17.629194762 +0000 UTC m=+0.178058177 container start 4cb45eac64ffd970a314a5944abe5dbf2c83b3eb2cca3e5d71359e773b39ffbb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 03:05:17 np0005539552 neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8[256978]: [NOTICE]   (256982) : New worker (256984) forked
Nov 29 03:05:17 np0005539552 neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8[256978]: [NOTICE]   (256982) : Loading success.
Nov 29 03:05:17 np0005539552 nova_compute[233724]: 2025-11-29 08:05:17.661 233728 INFO nova.compute.manager [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Took 7.50 seconds to build instance.#033[00m
Nov 29 03:05:17 np0005539552 nova_compute[233724]: 2025-11-29 08:05:17.690 233728 DEBUG oslo_concurrency.lockutils [None req-8b367104-381e-4cd6-8074-1077e01eb26f fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "c99fb9ee-05eb-4a44-8d78-3c719ed060b1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:18 np0005539552 nova_compute[233724]: 2025-11-29 08:05:18.298 233728 DEBUG oslo_concurrency.lockutils [None req-d977eb7e-9324-4947-a431-80345ac87485 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "afe3bf93-f54a-4a11-8f71-87b27ee7290a" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:18 np0005539552 nova_compute[233724]: 2025-11-29 08:05:18.299 233728 DEBUG oslo_concurrency.lockutils [None req-d977eb7e-9324-4947-a431-80345ac87485 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "afe3bf93-f54a-4a11-8f71-87b27ee7290a" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:18 np0005539552 nova_compute[233724]: 2025-11-29 08:05:18.299 233728 INFO nova.compute.manager [None req-d977eb7e-9324-4947-a431-80345ac87485 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Shelving#033[00m
Nov 29 03:05:18 np0005539552 nova_compute[233724]: 2025-11-29 08:05:18.324 233728 DEBUG nova.virt.libvirt.driver [None req-d977eb7e-9324-4947-a431-80345ac87485 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:05:18 np0005539552 nova_compute[233724]: 2025-11-29 08:05:18.367 233728 DEBUG nova.compute.manager [req-077b72a2-2c23-49fe-966e-f422b9180c67 req-5054ab13-bf65-4508-b081-9498a3d20f01 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Received event network-vif-plugged-dc792eb5-e18c-4ca9-9940-31d2322e3c3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:05:18 np0005539552 nova_compute[233724]: 2025-11-29 08:05:18.367 233728 DEBUG oslo_concurrency.lockutils [req-077b72a2-2c23-49fe-966e-f422b9180c67 req-5054ab13-bf65-4508-b081-9498a3d20f01 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "afe3bf93-f54a-4a11-8f71-87b27ee7290a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:18 np0005539552 nova_compute[233724]: 2025-11-29 08:05:18.367 233728 DEBUG oslo_concurrency.lockutils [req-077b72a2-2c23-49fe-966e-f422b9180c67 req-5054ab13-bf65-4508-b081-9498a3d20f01 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "afe3bf93-f54a-4a11-8f71-87b27ee7290a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:18 np0005539552 nova_compute[233724]: 2025-11-29 08:05:18.368 233728 DEBUG oslo_concurrency.lockutils [req-077b72a2-2c23-49fe-966e-f422b9180c67 req-5054ab13-bf65-4508-b081-9498a3d20f01 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "afe3bf93-f54a-4a11-8f71-87b27ee7290a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:18 np0005539552 nova_compute[233724]: 2025-11-29 08:05:18.368 233728 DEBUG nova.compute.manager [req-077b72a2-2c23-49fe-966e-f422b9180c67 req-5054ab13-bf65-4508-b081-9498a3d20f01 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] No waiting events found dispatching network-vif-plugged-dc792eb5-e18c-4ca9-9940-31d2322e3c3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:05:18 np0005539552 nova_compute[233724]: 2025-11-29 08:05:18.368 233728 WARNING nova.compute.manager [req-077b72a2-2c23-49fe-966e-f422b9180c67 req-5054ab13-bf65-4508-b081-9498a3d20f01 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Received unexpected event network-vif-plugged-dc792eb5-e18c-4ca9-9940-31d2322e3c3c for instance with vm_state active and task_state shelving.#033[00m
Nov 29 03:05:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:05:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:18.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:05:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:18.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:19 np0005539552 nova_compute[233724]: 2025-11-29 08:05:19.047 233728 DEBUG nova.objects.instance [None req-8ddc440e-1a45-4743-8c9c-148449b29351 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lazy-loading 'pci_devices' on Instance uuid c99fb9ee-05eb-4a44-8d78-3c719ed060b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:05:19 np0005539552 nova_compute[233724]: 2025-11-29 08:05:19.065 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403519.0648963, c99fb9ee-05eb-4a44-8d78-3c719ed060b1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:05:19 np0005539552 nova_compute[233724]: 2025-11-29 08:05:19.065 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:05:19 np0005539552 nova_compute[233724]: 2025-11-29 08:05:19.085 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:05:19 np0005539552 nova_compute[233724]: 2025-11-29 08:05:19.090 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:05:19 np0005539552 nova_compute[233724]: 2025-11-29 08:05:19.108 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Nov 29 03:05:19 np0005539552 nova_compute[233724]: 2025-11-29 08:05:19.246 233728 DEBUG nova.compute.manager [req-690f25e5-54b7-42dd-8d24-23c8f95761e7 req-bd62da10-66c5-42d2-b4f0-f237cda0175d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Received event network-vif-plugged-116674a9-081c-4369-8936-549e5a952c2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:05:19 np0005539552 nova_compute[233724]: 2025-11-29 08:05:19.247 233728 DEBUG oslo_concurrency.lockutils [req-690f25e5-54b7-42dd-8d24-23c8f95761e7 req-bd62da10-66c5-42d2-b4f0-f237cda0175d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "c99fb9ee-05eb-4a44-8d78-3c719ed060b1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:19 np0005539552 nova_compute[233724]: 2025-11-29 08:05:19.247 233728 DEBUG oslo_concurrency.lockutils [req-690f25e5-54b7-42dd-8d24-23c8f95761e7 req-bd62da10-66c5-42d2-b4f0-f237cda0175d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c99fb9ee-05eb-4a44-8d78-3c719ed060b1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:19 np0005539552 nova_compute[233724]: 2025-11-29 08:05:19.247 233728 DEBUG oslo_concurrency.lockutils [req-690f25e5-54b7-42dd-8d24-23c8f95761e7 req-bd62da10-66c5-42d2-b4f0-f237cda0175d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c99fb9ee-05eb-4a44-8d78-3c719ed060b1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:19 np0005539552 nova_compute[233724]: 2025-11-29 08:05:19.247 233728 DEBUG nova.compute.manager [req-690f25e5-54b7-42dd-8d24-23c8f95761e7 req-bd62da10-66c5-42d2-b4f0-f237cda0175d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] No waiting events found dispatching network-vif-plugged-116674a9-081c-4369-8936-549e5a952c2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:05:19 np0005539552 nova_compute[233724]: 2025-11-29 08:05:19.248 233728 WARNING nova.compute.manager [req-690f25e5-54b7-42dd-8d24-23c8f95761e7 req-bd62da10-66c5-42d2-b4f0-f237cda0175d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Received unexpected event network-vif-plugged-116674a9-081c-4369-8936-549e5a952c2f for instance with vm_state active and task_state suspending.#033[00m
Nov 29 03:05:19 np0005539552 nova_compute[233724]: 2025-11-29 08:05:19.607 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:05:19 np0005539552 nova_compute[233724]: 2025-11-29 08:05:19.607 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:05:19 np0005539552 nova_compute[233724]: 2025-11-29 08:05:19.608 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:05:19 np0005539552 nova_compute[233724]: 2025-11-29 08:05:19.608 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:05:19 np0005539552 kernel: tap116674a9-08 (unregistering): left promiscuous mode
Nov 29 03:05:19 np0005539552 NetworkManager[48926]: <info>  [1764403519.6355] device (tap116674a9-08): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:05:19 np0005539552 nova_compute[233724]: 2025-11-29 08:05:19.653 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:19 np0005539552 ovn_controller[133798]: 2025-11-29T08:05:19Z|00176|binding|INFO|Releasing lport 116674a9-081c-4369-8936-549e5a952c2f from this chassis (sb_readonly=0)
Nov 29 03:05:19 np0005539552 ovn_controller[133798]: 2025-11-29T08:05:19Z|00177|binding|INFO|Setting lport 116674a9-081c-4369-8936-549e5a952c2f down in Southbound
Nov 29 03:05:19 np0005539552 ovn_controller[133798]: 2025-11-29T08:05:19Z|00178|binding|INFO|Removing iface tap116674a9-08 ovn-installed in OVS
Nov 29 03:05:19 np0005539552 nova_compute[233724]: 2025-11-29 08:05:19.655 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:19.660 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:50:03 10.100.0.14'], port_security=['fa:16:3e:6b:50:03 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'c99fb9ee-05eb-4a44-8d78-3c719ed060b1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f01d29c1-afcb-4909-9abf-f7d31e4549d8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '638fd52fccf14f16b56d0860553063f3', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a57b53e5-9055-46ae-8ab4-d4a8a62173cb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d89b288b-bbc6-47fa-ad12-8aab94ffc78f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=116674a9-081c-4369-8936-549e5a952c2f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:05:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:19.662 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 116674a9-081c-4369-8936-549e5a952c2f in datapath f01d29c1-afcb-4909-9abf-f7d31e4549d8 unbound from our chassis#033[00m
Nov 29 03:05:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:19.663 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f01d29c1-afcb-4909-9abf-f7d31e4549d8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:05:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:19.663 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[30ced0e5-897c-4466-83df-55f62bb22b5d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:19.664 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8 namespace which is not needed anymore#033[00m
Nov 29 03:05:19 np0005539552 nova_compute[233724]: 2025-11-29 08:05:19.684 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:19 np0005539552 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d0000003a.scope: Deactivated successfully.
Nov 29 03:05:19 np0005539552 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d0000003a.scope: Consumed 2.380s CPU time.
Nov 29 03:05:19 np0005539552 systemd-machined[196379]: Machine qemu-20-instance-0000003a terminated.
Nov 29 03:05:19 np0005539552 neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8[256978]: [NOTICE]   (256982) : haproxy version is 2.8.14-c23fe91
Nov 29 03:05:19 np0005539552 neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8[256978]: [NOTICE]   (256982) : path to executable is /usr/sbin/haproxy
Nov 29 03:05:19 np0005539552 neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8[256978]: [WARNING]  (256982) : Exiting Master process...
Nov 29 03:05:19 np0005539552 neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8[256978]: [WARNING]  (256982) : Exiting Master process...
Nov 29 03:05:19 np0005539552 neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8[256978]: [ALERT]    (256982) : Current worker (256984) exited with code 143 (Terminated)
Nov 29 03:05:19 np0005539552 neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8[256978]: [WARNING]  (256982) : All workers exited. Exiting... (0)
Nov 29 03:05:19 np0005539552 systemd[1]: libpod-4cb45eac64ffd970a314a5944abe5dbf2c83b3eb2cca3e5d71359e773b39ffbb.scope: Deactivated successfully.
Nov 29 03:05:19 np0005539552 podman[257021]: 2025-11-29 08:05:19.787236596 +0000 UTC m=+0.041844511 container died 4cb45eac64ffd970a314a5944abe5dbf2c83b3eb2cca3e5d71359e773b39ffbb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:05:19 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4cb45eac64ffd970a314a5944abe5dbf2c83b3eb2cca3e5d71359e773b39ffbb-userdata-shm.mount: Deactivated successfully.
Nov 29 03:05:19 np0005539552 systemd[1]: var-lib-containers-storage-overlay-708eda57ab79e0156281b28ec17c183d8cfcb9943fbe39baa2cddab381a52641-merged.mount: Deactivated successfully.
Nov 29 03:05:19 np0005539552 podman[257021]: 2025-11-29 08:05:19.824238405 +0000 UTC m=+0.078846310 container cleanup 4cb45eac64ffd970a314a5944abe5dbf2c83b3eb2cca3e5d71359e773b39ffbb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 03:05:19 np0005539552 systemd[1]: libpod-conmon-4cb45eac64ffd970a314a5944abe5dbf2c83b3eb2cca3e5d71359e773b39ffbb.scope: Deactivated successfully.
Nov 29 03:05:19 np0005539552 podman[257050]: 2025-11-29 08:05:19.882882928 +0000 UTC m=+0.038710786 container remove 4cb45eac64ffd970a314a5944abe5dbf2c83b3eb2cca3e5d71359e773b39ffbb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:05:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:19.892 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4005f904-c754-4089-8276-138c2c3b0e5a]: (4, ('Sat Nov 29 08:05:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8 (4cb45eac64ffd970a314a5944abe5dbf2c83b3eb2cca3e5d71359e773b39ffbb)\n4cb45eac64ffd970a314a5944abe5dbf2c83b3eb2cca3e5d71359e773b39ffbb\nSat Nov 29 08:05:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8 (4cb45eac64ffd970a314a5944abe5dbf2c83b3eb2cca3e5d71359e773b39ffbb)\n4cb45eac64ffd970a314a5944abe5dbf2c83b3eb2cca3e5d71359e773b39ffbb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:19.894 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0f259c81-47b1-45c2-b47d-452df34f2df7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:19.895 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf01d29c1-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:19 np0005539552 nova_compute[233724]: 2025-11-29 08:05:19.896 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:19 np0005539552 kernel: tapf01d29c1-a0: left promiscuous mode
Nov 29 03:05:19 np0005539552 nova_compute[233724]: 2025-11-29 08:05:19.915 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:19 np0005539552 nova_compute[233724]: 2025-11-29 08:05:19.920 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:19 np0005539552 nova_compute[233724]: 2025-11-29 08:05:19.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:05:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:19.922 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[15acff98-6ed3-47a3-8325-d674b99c8d7c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:19 np0005539552 nova_compute[233724]: 2025-11-29 08:05:19.931 233728 DEBUG nova.compute.manager [None req-8ddc440e-1a45-4743-8c9c-148449b29351 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:05:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:19.940 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[8fb41e6c-3bd4-46a8-ab5e-d8c3c6036ce2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:19.941 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0768392c-7c7b-41a9-9d44-2071b42a6d46]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:19 np0005539552 systemd[1]: run-netns-ovnmeta\x2df01d29c1\x2dafcb\x2d4909\x2d9abf\x2df7d31e4549d8.mount: Deactivated successfully.
Nov 29 03:05:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:19.957 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[9f1b9520-0196-42ab-a7a5-2a7fdcf84892]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 656255, 'reachable_time': 20293, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257077, 'error': None, 'target': 'ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:19.962 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:05:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:19.963 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[7e06a94e-a485-4ef8-ac36-b7d6b68cd119]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:19 np0005539552 nova_compute[233724]: 2025-11-29 08:05:19.994 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:20.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:20.614 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:20.614 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:20.615 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:20 np0005539552 nova_compute[233724]: 2025-11-29 08:05:20.661 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:20.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:20 np0005539552 nova_compute[233724]: 2025-11-29 08:05:20.918 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:05:20 np0005539552 nova_compute[233724]: 2025-11-29 08:05:20.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:05:20 np0005539552 nova_compute[233724]: 2025-11-29 08:05:20.923 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:05:20 np0005539552 nova_compute[233724]: 2025-11-29 08:05:20.975 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:05:20 np0005539552 nova_compute[233724]: 2025-11-29 08:05:20.977 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:05:21 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e219 e219: 3 total, 3 up, 3 in
Nov 29 03:05:21 np0005539552 nova_compute[233724]: 2025-11-29 08:05:21.374 233728 DEBUG nova.compute.manager [req-cf6a22f0-8ce8-4761-b8bf-a3cf8dcd0d2e req-fc159777-e44e-4989-a0ce-86e0db240923 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Received event network-vif-unplugged-116674a9-081c-4369-8936-549e5a952c2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:05:21 np0005539552 nova_compute[233724]: 2025-11-29 08:05:21.375 233728 DEBUG oslo_concurrency.lockutils [req-cf6a22f0-8ce8-4761-b8bf-a3cf8dcd0d2e req-fc159777-e44e-4989-a0ce-86e0db240923 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "c99fb9ee-05eb-4a44-8d78-3c719ed060b1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:21 np0005539552 nova_compute[233724]: 2025-11-29 08:05:21.375 233728 DEBUG oslo_concurrency.lockutils [req-cf6a22f0-8ce8-4761-b8bf-a3cf8dcd0d2e req-fc159777-e44e-4989-a0ce-86e0db240923 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c99fb9ee-05eb-4a44-8d78-3c719ed060b1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:21 np0005539552 nova_compute[233724]: 2025-11-29 08:05:21.376 233728 DEBUG oslo_concurrency.lockutils [req-cf6a22f0-8ce8-4761-b8bf-a3cf8dcd0d2e req-fc159777-e44e-4989-a0ce-86e0db240923 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c99fb9ee-05eb-4a44-8d78-3c719ed060b1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:21 np0005539552 nova_compute[233724]: 2025-11-29 08:05:21.376 233728 DEBUG nova.compute.manager [req-cf6a22f0-8ce8-4761-b8bf-a3cf8dcd0d2e req-fc159777-e44e-4989-a0ce-86e0db240923 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] No waiting events found dispatching network-vif-unplugged-116674a9-081c-4369-8936-549e5a952c2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:05:21 np0005539552 nova_compute[233724]: 2025-11-29 08:05:21.376 233728 WARNING nova.compute.manager [req-cf6a22f0-8ce8-4761-b8bf-a3cf8dcd0d2e req-fc159777-e44e-4989-a0ce-86e0db240923 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Received unexpected event network-vif-unplugged-116674a9-081c-4369-8936-549e5a952c2f for instance with vm_state suspended and task_state None.#033[00m
Nov 29 03:05:21 np0005539552 nova_compute[233724]: 2025-11-29 08:05:21.377 233728 DEBUG nova.compute.manager [req-cf6a22f0-8ce8-4761-b8bf-a3cf8dcd0d2e req-fc159777-e44e-4989-a0ce-86e0db240923 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Received event network-vif-plugged-116674a9-081c-4369-8936-549e5a952c2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:05:21 np0005539552 nova_compute[233724]: 2025-11-29 08:05:21.377 233728 DEBUG oslo_concurrency.lockutils [req-cf6a22f0-8ce8-4761-b8bf-a3cf8dcd0d2e req-fc159777-e44e-4989-a0ce-86e0db240923 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "c99fb9ee-05eb-4a44-8d78-3c719ed060b1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:21 np0005539552 nova_compute[233724]: 2025-11-29 08:05:21.378 233728 DEBUG oslo_concurrency.lockutils [req-cf6a22f0-8ce8-4761-b8bf-a3cf8dcd0d2e req-fc159777-e44e-4989-a0ce-86e0db240923 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c99fb9ee-05eb-4a44-8d78-3c719ed060b1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:21 np0005539552 nova_compute[233724]: 2025-11-29 08:05:21.378 233728 DEBUG oslo_concurrency.lockutils [req-cf6a22f0-8ce8-4761-b8bf-a3cf8dcd0d2e req-fc159777-e44e-4989-a0ce-86e0db240923 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c99fb9ee-05eb-4a44-8d78-3c719ed060b1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:21 np0005539552 nova_compute[233724]: 2025-11-29 08:05:21.378 233728 DEBUG nova.compute.manager [req-cf6a22f0-8ce8-4761-b8bf-a3cf8dcd0d2e req-fc159777-e44e-4989-a0ce-86e0db240923 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] No waiting events found dispatching network-vif-plugged-116674a9-081c-4369-8936-549e5a952c2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:05:21 np0005539552 nova_compute[233724]: 2025-11-29 08:05:21.378 233728 WARNING nova.compute.manager [req-cf6a22f0-8ce8-4761-b8bf-a3cf8dcd0d2e req-fc159777-e44e-4989-a0ce-86e0db240923 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Received unexpected event network-vif-plugged-116674a9-081c-4369-8936-549e5a952c2f for instance with vm_state suspended and task_state None.#033[00m
Nov 29 03:05:21 np0005539552 nova_compute[233724]: 2025-11-29 08:05:21.607 233728 DEBUG nova.compute.manager [None req-7bc4ae29-770f-44dd-aad8-60df132e0626 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:05:21 np0005539552 nova_compute[233724]: 2025-11-29 08:05:21.659 233728 INFO nova.compute.manager [None req-7bc4ae29-770f-44dd-aad8-60df132e0626 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] instance snapshotting#033[00m
Nov 29 03:05:21 np0005539552 nova_compute[233724]: 2025-11-29 08:05:21.660 233728 WARNING nova.compute.manager [None req-7bc4ae29-770f-44dd-aad8-60df132e0626 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] trying to snapshot a non-running instance: (state: 4 expected: 1)#033[00m
Nov 29 03:05:22 np0005539552 nova_compute[233724]: 2025-11-29 08:05:22.030 233728 INFO nova.virt.libvirt.driver [None req-7bc4ae29-770f-44dd-aad8-60df132e0626 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Beginning cold snapshot process#033[00m
Nov 29 03:05:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:22 np0005539552 nova_compute[233724]: 2025-11-29 08:05:22.182 233728 DEBUG nova.virt.libvirt.imagebackend [None req-7bc4ae29-770f-44dd-aad8-60df132e0626 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] No parent info for 4873db8c-b414-4e95-acd9-77caabebe722; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 29 03:05:22 np0005539552 nova_compute[233724]: 2025-11-29 08:05:22.446 233728 DEBUG nova.storage.rbd_utils [None req-7bc4ae29-770f-44dd-aad8-60df132e0626 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] creating snapshot(60c9bf9969484471b7ed0b6d4455bfb4) on rbd image(c99fb9ee-05eb-4a44-8d78-3c719ed060b1_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:05:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:22.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:22.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e220 e220: 3 total, 3 up, 3 in
Nov 29 03:05:23 np0005539552 nova_compute[233724]: 2025-11-29 08:05:23.389 233728 DEBUG nova.storage.rbd_utils [None req-7bc4ae29-770f-44dd-aad8-60df132e0626 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] cloning vms/c99fb9ee-05eb-4a44-8d78-3c719ed060b1_disk@60c9bf9969484471b7ed0b6d4455bfb4 to images/0bdf7515-c07e-4918-9c17-60eba424f483 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:05:23 np0005539552 nova_compute[233724]: 2025-11-29 08:05:23.753 233728 DEBUG nova.storage.rbd_utils [None req-7bc4ae29-770f-44dd-aad8-60df132e0626 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] flattening images/0bdf7515-c07e-4918-9c17-60eba424f483 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 29 03:05:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:24.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:24.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:24 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e221 e221: 3 total, 3 up, 3 in
Nov 29 03:05:24 np0005539552 nova_compute[233724]: 2025-11-29 08:05:24.996 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:25 np0005539552 nova_compute[233724]: 2025-11-29 08:05:25.027 233728 DEBUG nova.storage.rbd_utils [None req-7bc4ae29-770f-44dd-aad8-60df132e0626 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] removing snapshot(60c9bf9969484471b7ed0b6d4455bfb4) on rbd image(c99fb9ee-05eb-4a44-8d78-3c719ed060b1_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:05:25 np0005539552 nova_compute[233724]: 2025-11-29 08:05:25.694 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:26 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e222 e222: 3 total, 3 up, 3 in
Nov 29 03:05:26 np0005539552 nova_compute[233724]: 2025-11-29 08:05:26.057 233728 DEBUG nova.storage.rbd_utils [None req-7bc4ae29-770f-44dd-aad8-60df132e0626 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] creating snapshot(snap) on rbd image(0bdf7515-c07e-4918-9c17-60eba424f483) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:05:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:05:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:26.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:05:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:26.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e223 e223: 3 total, 3 up, 3 in
Nov 29 03:05:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:28 np0005539552 nova_compute[233724]: 2025-11-29 08:05:28.374 233728 DEBUG nova.virt.libvirt.driver [None req-d977eb7e-9324-4947-a431-80345ac87485 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 29 03:05:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:28.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:28.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:28 np0005539552 nova_compute[233724]: 2025-11-29 08:05:28.876 233728 INFO nova.virt.libvirt.driver [None req-7bc4ae29-770f-44dd-aad8-60df132e0626 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Snapshot image upload complete#033[00m
Nov 29 03:05:28 np0005539552 nova_compute[233724]: 2025-11-29 08:05:28.877 233728 INFO nova.compute.manager [None req-7bc4ae29-770f-44dd-aad8-60df132e0626 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Took 7.22 seconds to snapshot the instance on the hypervisor.#033[00m
Nov 29 03:05:29 np0005539552 nova_compute[233724]: 2025-11-29 08:05:29.998 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:30.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:30 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e224 e224: 3 total, 3 up, 3 in
Nov 29 03:05:30 np0005539552 nova_compute[233724]: 2025-11-29 08:05:30.697 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:30.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:31 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e225 e225: 3 total, 3 up, 3 in
Nov 29 03:05:31 np0005539552 ovn_controller[133798]: 2025-11-29T08:05:31Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:39:eb:35 10.100.0.7
Nov 29 03:05:31 np0005539552 ovn_controller[133798]: 2025-11-29T08:05:31Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:39:eb:35 10.100.0.7
Nov 29 03:05:31 np0005539552 nova_compute[233724]: 2025-11-29 08:05:31.876 233728 DEBUG oslo_concurrency.lockutils [None req-dea4403e-7922-4480-a5f5-c0a15c807720 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "c99fb9ee-05eb-4a44-8d78-3c719ed060b1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:31 np0005539552 nova_compute[233724]: 2025-11-29 08:05:31.877 233728 DEBUG oslo_concurrency.lockutils [None req-dea4403e-7922-4480-a5f5-c0a15c807720 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "c99fb9ee-05eb-4a44-8d78-3c719ed060b1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:31 np0005539552 nova_compute[233724]: 2025-11-29 08:05:31.878 233728 DEBUG oslo_concurrency.lockutils [None req-dea4403e-7922-4480-a5f5-c0a15c807720 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "c99fb9ee-05eb-4a44-8d78-3c719ed060b1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:31 np0005539552 nova_compute[233724]: 2025-11-29 08:05:31.878 233728 DEBUG oslo_concurrency.lockutils [None req-dea4403e-7922-4480-a5f5-c0a15c807720 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "c99fb9ee-05eb-4a44-8d78-3c719ed060b1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:31 np0005539552 nova_compute[233724]: 2025-11-29 08:05:31.878 233728 DEBUG oslo_concurrency.lockutils [None req-dea4403e-7922-4480-a5f5-c0a15c807720 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "c99fb9ee-05eb-4a44-8d78-3c719ed060b1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:31 np0005539552 nova_compute[233724]: 2025-11-29 08:05:31.879 233728 INFO nova.compute.manager [None req-dea4403e-7922-4480-a5f5-c0a15c807720 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Terminating instance#033[00m
Nov 29 03:05:31 np0005539552 nova_compute[233724]: 2025-11-29 08:05:31.880 233728 DEBUG nova.compute.manager [None req-dea4403e-7922-4480-a5f5-c0a15c807720 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:05:31 np0005539552 nova_compute[233724]: 2025-11-29 08:05:31.886 233728 INFO nova.virt.libvirt.driver [-] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Instance destroyed successfully.#033[00m
Nov 29 03:05:31 np0005539552 nova_compute[233724]: 2025-11-29 08:05:31.886 233728 DEBUG nova.objects.instance [None req-dea4403e-7922-4480-a5f5-c0a15c807720 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lazy-loading 'resources' on Instance uuid c99fb9ee-05eb-4a44-8d78-3c719ed060b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:05:31 np0005539552 nova_compute[233724]: 2025-11-29 08:05:31.899 233728 DEBUG nova.virt.libvirt.vif [None req-dea4403e-7922-4480-a5f5-c0a15c807720 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:05:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-348635835',display_name='tempest-ImagesTestJSON-server-348635835',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-imagestestjson-server-348635835',id=58,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:05:17Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='638fd52fccf14f16b56d0860553063f3',ramdisk_id='',reservation_id='r-c6tl35j1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_r
am='0',old_vm_state='active',owner_project_name='tempest-ImagesTestJSON-1682881466',owner_user_name='tempest-ImagesTestJSON-1682881466-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:05:28Z,user_data=None,user_id='fddc5f5801764ee19d5253e2cab34df3',uuid=c99fb9ee-05eb-4a44-8d78-3c719ed060b1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "116674a9-081c-4369-8936-549e5a952c2f", "address": "fa:16:3e:6b:50:03", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap116674a9-08", "ovs_interfaceid": "116674a9-081c-4369-8936-549e5a952c2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:05:31 np0005539552 nova_compute[233724]: 2025-11-29 08:05:31.899 233728 DEBUG nova.network.os_vif_util [None req-dea4403e-7922-4480-a5f5-c0a15c807720 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Converting VIF {"id": "116674a9-081c-4369-8936-549e5a952c2f", "address": "fa:16:3e:6b:50:03", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap116674a9-08", "ovs_interfaceid": "116674a9-081c-4369-8936-549e5a952c2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:05:31 np0005539552 nova_compute[233724]: 2025-11-29 08:05:31.900 233728 DEBUG nova.network.os_vif_util [None req-dea4403e-7922-4480-a5f5-c0a15c807720 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:50:03,bridge_name='br-int',has_traffic_filtering=True,id=116674a9-081c-4369-8936-549e5a952c2f,network=Network(f01d29c1-afcb-4909-9abf-f7d31e4549d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap116674a9-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:05:31 np0005539552 nova_compute[233724]: 2025-11-29 08:05:31.900 233728 DEBUG os_vif [None req-dea4403e-7922-4480-a5f5-c0a15c807720 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:50:03,bridge_name='br-int',has_traffic_filtering=True,id=116674a9-081c-4369-8936-549e5a952c2f,network=Network(f01d29c1-afcb-4909-9abf-f7d31e4549d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap116674a9-08') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 03:05:31 np0005539552 nova_compute[233724]: 2025-11-29 08:05:31.902 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:05:31 np0005539552 nova_compute[233724]: 2025-11-29 08:05:31.902 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap116674a9-08, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:05:31 np0005539552 nova_compute[233724]: 2025-11-29 08:05:31.950 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:05:31 np0005539552 nova_compute[233724]: 2025-11-29 08:05:31.952 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 03:05:31 np0005539552 nova_compute[233724]: 2025-11-29 08:05:31.954 233728 INFO os_vif [None req-dea4403e-7922-4480-a5f5-c0a15c807720 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:50:03,bridge_name='br-int',has_traffic_filtering=True,id=116674a9-081c-4369-8936-549e5a952c2f,network=Network(f01d29c1-afcb-4909-9abf-f7d31e4549d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap116674a9-08')
Nov 29 03:05:32 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e225 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:05:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:32.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:05:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:32.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:33 np0005539552 nova_compute[233724]: 2025-11-29 08:05:33.337 233728 INFO nova.virt.libvirt.driver [None req-dea4403e-7922-4480-a5f5-c0a15c807720 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Deleting instance files /var/lib/nova/instances/c99fb9ee-05eb-4a44-8d78-3c719ed060b1_del
Nov 29 03:05:33 np0005539552 nova_compute[233724]: 2025-11-29 08:05:33.338 233728 INFO nova.virt.libvirt.driver [None req-dea4403e-7922-4480-a5f5-c0a15c807720 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Deletion of /var/lib/nova/instances/c99fb9ee-05eb-4a44-8d78-3c719ed060b1_del complete
Nov 29 03:05:33 np0005539552 nova_compute[233724]: 2025-11-29 08:05:33.407 233728 INFO nova.compute.manager [None req-dea4403e-7922-4480-a5f5-c0a15c807720 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Took 1.53 seconds to destroy the instance on the hypervisor.
Nov 29 03:05:33 np0005539552 nova_compute[233724]: 2025-11-29 08:05:33.407 233728 DEBUG oslo.service.loopingcall [None req-dea4403e-7922-4480-a5f5-c0a15c807720 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 03:05:33 np0005539552 nova_compute[233724]: 2025-11-29 08:05:33.407 233728 DEBUG nova.compute.manager [-] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 03:05:33 np0005539552 nova_compute[233724]: 2025-11-29 08:05:33.407 233728 DEBUG nova.network.neutron [-] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 03:05:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:34.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:34 np0005539552 nova_compute[233724]: 2025-11-29 08:05:34.585 233728 DEBUG nova.network.neutron [-] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:05:34 np0005539552 nova_compute[233724]: 2025-11-29 08:05:34.606 233728 INFO nova.compute.manager [-] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Took 1.20 seconds to deallocate network for instance.
Nov 29 03:05:34 np0005539552 nova_compute[233724]: 2025-11-29 08:05:34.656 233728 DEBUG oslo_concurrency.lockutils [None req-dea4403e-7922-4480-a5f5-c0a15c807720 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:05:34 np0005539552 nova_compute[233724]: 2025-11-29 08:05:34.657 233728 DEBUG oslo_concurrency.lockutils [None req-dea4403e-7922-4480-a5f5-c0a15c807720 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:05:34 np0005539552 nova_compute[233724]: 2025-11-29 08:05:34.684 233728 DEBUG nova.compute.manager [req-63777ac7-3199-4f85-aa6a-2edc0c9de1ed req-04225243-3b06-434b-a7a0-360b05cf2118 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Received event network-vif-deleted-116674a9-081c-4369-8936-549e5a952c2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:05:34 np0005539552 nova_compute[233724]: 2025-11-29 08:05:34.687 233728 DEBUG nova.scheduler.client.report [None req-dea4403e-7922-4480-a5f5-c0a15c807720 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Refreshing inventories for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 03:05:34 np0005539552 nova_compute[233724]: 2025-11-29 08:05:34.700 233728 DEBUG nova.scheduler.client.report [None req-dea4403e-7922-4480-a5f5-c0a15c807720 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Updating ProviderTree inventory for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 03:05:34 np0005539552 nova_compute[233724]: 2025-11-29 08:05:34.701 233728 DEBUG nova.compute.provider_tree [None req-dea4403e-7922-4480-a5f5-c0a15c807720 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Updating inventory in ProviderTree for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 03:05:34 np0005539552 nova_compute[233724]: 2025-11-29 08:05:34.718 233728 DEBUG nova.scheduler.client.report [None req-dea4403e-7922-4480-a5f5-c0a15c807720 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Refreshing aggregate associations for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 03:05:34 np0005539552 nova_compute[233724]: 2025-11-29 08:05:34.754 233728 DEBUG nova.scheduler.client.report [None req-dea4403e-7922-4480-a5f5-c0a15c807720 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Refreshing trait associations for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371, traits: HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_MMX,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_FDC,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 03:05:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:34.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:34 np0005539552 nova_compute[233724]: 2025-11-29 08:05:34.815 233728 DEBUG oslo_concurrency.processutils [None req-dea4403e-7922-4480-a5f5-c0a15c807720 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:05:34 np0005539552 nova_compute[233724]: 2025-11-29 08:05:34.932 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403519.9314582, c99fb9ee-05eb-4a44-8d78-3c719ed060b1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:05:34 np0005539552 nova_compute[233724]: 2025-11-29 08:05:34.934 233728 INFO nova.compute.manager [-] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] VM Stopped (Lifecycle Event)
Nov 29 03:05:34 np0005539552 nova_compute[233724]: 2025-11-29 08:05:34.961 233728 DEBUG nova.compute.manager [None req-2746ebe6-1b76-4f6c-ad63-f4615e6c9e5c - - - - - -] [instance: c99fb9ee-05eb-4a44-8d78-3c719ed060b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:05:35 np0005539552 nova_compute[233724]: 2025-11-29 08:05:35.000 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:05:35 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:05:35 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/762136284' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:05:35 np0005539552 nova_compute[233724]: 2025-11-29 08:05:35.264 233728 DEBUG oslo_concurrency.processutils [None req-dea4403e-7922-4480-a5f5-c0a15c807720 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:05:35 np0005539552 nova_compute[233724]: 2025-11-29 08:05:35.269 233728 DEBUG nova.compute.provider_tree [None req-dea4403e-7922-4480-a5f5-c0a15c807720 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:05:35 np0005539552 nova_compute[233724]: 2025-11-29 08:05:35.282 233728 DEBUG nova.scheduler.client.report [None req-dea4403e-7922-4480-a5f5-c0a15c807720 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:05:35 np0005539552 nova_compute[233724]: 2025-11-29 08:05:35.306 233728 DEBUG oslo_concurrency.lockutils [None req-dea4403e-7922-4480-a5f5-c0a15c807720 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:05:35 np0005539552 nova_compute[233724]: 2025-11-29 08:05:35.331 233728 INFO nova.scheduler.client.report [None req-dea4403e-7922-4480-a5f5-c0a15c807720 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Deleted allocations for instance c99fb9ee-05eb-4a44-8d78-3c719ed060b1
Nov 29 03:05:35 np0005539552 nova_compute[233724]: 2025-11-29 08:05:35.430 233728 DEBUG oslo_concurrency.lockutils [None req-dea4403e-7922-4480-a5f5-c0a15c807720 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "c99fb9ee-05eb-4a44-8d78-3c719ed060b1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.553s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:05:36 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e226 e226: 3 total, 3 up, 3 in
Nov 29 03:05:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:05:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:36.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:05:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:36.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:36 np0005539552 nova_compute[233724]: 2025-11-29 08:05:36.950 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:05:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:05:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:38.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:05:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:38.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:05:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1826386267' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:05:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:05:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1826386267' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:05:39 np0005539552 nova_compute[233724]: 2025-11-29 08:05:39.425 233728 DEBUG nova.virt.libvirt.driver [None req-d977eb7e-9324-4947-a431-80345ac87485 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 29 03:05:40 np0005539552 nova_compute[233724]: 2025-11-29 08:05:40.002 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:05:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:05:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:40.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:05:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:40.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:41 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:05:41 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:05:41 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:05:41 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e227 e227: 3 total, 3 up, 3 in
Nov 29 03:05:41 np0005539552 nova_compute[233724]: 2025-11-29 08:05:41.953 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:05:41 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #67. Immutable memtables: 0.
Nov 29 03:05:41 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:05:41.980513) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:05:41 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 67
Nov 29 03:05:41 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403541980578, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 1014, "num_deletes": 512, "total_data_size": 1136220, "memory_usage": 1159152, "flush_reason": "Manual Compaction"}
Nov 29 03:05:41 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #68: started
Nov 29 03:05:41 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403541988869, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 68, "file_size": 745489, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34738, "largest_seqno": 35747, "table_properties": {"data_size": 741098, "index_size": 1531, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13227, "raw_average_key_size": 18, "raw_value_size": 730084, "raw_average_value_size": 1038, "num_data_blocks": 67, "num_entries": 703, "num_filter_entries": 703, "num_deletions": 512, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403511, "oldest_key_time": 1764403511, "file_creation_time": 1764403541, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:05:41 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 8417 microseconds, and 3511 cpu microseconds.
Nov 29 03:05:41 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:05:41 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:05:41.988940) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #68: 745489 bytes OK
Nov 29 03:05:41 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:05:41.988962) [db/memtable_list.cc:519] [default] Level-0 commit table #68 started
Nov 29 03:05:41 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:05:41.991342) [db/memtable_list.cc:722] [default] Level-0 commit table #68: memtable #1 done
Nov 29 03:05:41 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:05:41.991362) EVENT_LOG_v1 {"time_micros": 1764403541991356, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:05:41 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:05:41.991381) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:05:41 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 1130062, prev total WAL file size 1130062, number of live WAL files 2.
Nov 29 03:05:41 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000064.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:05:41 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:05:41.991999) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Nov 29 03:05:41 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:05:41 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [68(728KB)], [66(10075KB)]
Nov 29 03:05:41 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403541992048, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [68], "files_L6": [66], "score": -1, "input_data_size": 11062899, "oldest_snapshot_seqno": -1}
Nov 29 03:05:42 np0005539552 kernel: tapdc792eb5-e1 (unregistering): left promiscuous mode
Nov 29 03:05:42 np0005539552 NetworkManager[48926]: <info>  [1764403542.0133] device (tapdc792eb5-e1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:05:42 np0005539552 nova_compute[233724]: 2025-11-29 08:05:42.022 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:05:42 np0005539552 ovn_controller[133798]: 2025-11-29T08:05:42Z|00179|binding|INFO|Releasing lport dc792eb5-e18c-4ca9-9940-31d2322e3c3c from this chassis (sb_readonly=0)
Nov 29 03:05:42 np0005539552 ovn_controller[133798]: 2025-11-29T08:05:42Z|00180|binding|INFO|Setting lport dc792eb5-e18c-4ca9-9940-31d2322e3c3c down in Southbound
Nov 29 03:05:42 np0005539552 ovn_controller[133798]: 2025-11-29T08:05:42Z|00181|binding|INFO|Removing iface tapdc792eb5-e1 ovn-installed in OVS
Nov 29 03:05:42 np0005539552 nova_compute[233724]: 2025-11-29 08:05:42.024 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:05:42 np0005539552 nova_compute[233724]: 2025-11-29 08:05:42.044 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:05:42 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #69: 6363 keys, 8909003 bytes, temperature: kUnknown
Nov 29 03:05:42 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403542053567, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 69, "file_size": 8909003, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8868030, "index_size": 23967, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15941, "raw_key_size": 165929, "raw_average_key_size": 26, "raw_value_size": 8754943, "raw_average_value_size": 1375, "num_data_blocks": 950, "num_entries": 6363, "num_filter_entries": 6363, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764403541, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 69, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:05:42 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:05:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:42.054 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:eb:35 10.100.0.7'], port_security=['fa:16:3e:39:eb:35 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'afe3bf93-f54a-4a11-8f71-87b27ee7290a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90c23935e0214785a9dc5061b91cf29c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f717601c-d15f-4a2d-a56a-85c60baf3a44', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc7b8639-cf64-4f98-aa54-bbd2c9e5fa46, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=dc792eb5-e18c-4ca9-9940-31d2322e3c3c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:05:42 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:05:42.053796) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 8909003 bytes
Nov 29 03:05:42 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:05:42.055118) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 179.6 rd, 144.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 9.8 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(26.8) write-amplify(12.0) OK, records in: 7401, records dropped: 1038 output_compression: NoCompression
Nov 29 03:05:42 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:05:42.055140) EVENT_LOG_v1 {"time_micros": 1764403542055130, "job": 40, "event": "compaction_finished", "compaction_time_micros": 61595, "compaction_time_cpu_micros": 20047, "output_level": 6, "num_output_files": 1, "total_output_size": 8909003, "num_input_records": 7401, "num_output_records": 6363, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:05:42 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:05:42 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403542055383, "job": 40, "event": "table_file_deletion", "file_number": 68}
Nov 29 03:05:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:42.055 143400 INFO neutron.agent.ovn.metadata.agent [-] Port dc792eb5-e18c-4ca9-9940-31d2322e3c3c in datapath a8be8715-2b74-42ca-9713-7fc1f4a33bc9 unbound from our chassis#033[00m
Nov 29 03:05:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:42.056 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a8be8715-2b74-42ca-9713-7fc1f4a33bc9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:05:42 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000066.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:05:42 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403542057509, "job": 40, "event": "table_file_deletion", "file_number": 66}
Nov 29 03:05:42 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:05:41.991907) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:05:42 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:05:42.057663) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:05:42 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:05:42.057670) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:05:42 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:05:42.057673) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:05:42 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:05:42.057675) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:05:42 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:05:42.057677) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:05:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:42.057 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[85696563-7d0d-441c-bb9f-dbbad60fbff9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:42.057 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9 namespace which is not needed anymore#033[00m
Nov 29 03:05:42 np0005539552 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000039.scope: Deactivated successfully.
Nov 29 03:05:42 np0005539552 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000039.scope: Consumed 14.517s CPU time.
Nov 29 03:05:42 np0005539552 systemd-machined[196379]: Machine qemu-19-instance-00000039 terminated.
Nov 29 03:05:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:42 np0005539552 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[256776]: [NOTICE]   (256780) : haproxy version is 2.8.14-c23fe91
Nov 29 03:05:42 np0005539552 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[256776]: [NOTICE]   (256780) : path to executable is /usr/sbin/haproxy
Nov 29 03:05:42 np0005539552 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[256776]: [WARNING]  (256780) : Exiting Master process...
Nov 29 03:05:42 np0005539552 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[256776]: [ALERT]    (256780) : Current worker (256783) exited with code 143 (Terminated)
Nov 29 03:05:42 np0005539552 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[256776]: [WARNING]  (256780) : All workers exited. Exiting... (0)
Nov 29 03:05:42 np0005539552 systemd[1]: libpod-f03f67d1a43cf98edb99f235d084d408c2cb2782a61d8fbb0d0dba71561e3d19.scope: Deactivated successfully.
Nov 29 03:05:42 np0005539552 podman[257481]: 2025-11-29 08:05:42.188600917 +0000 UTC m=+0.045402327 container died f03f67d1a43cf98edb99f235d084d408c2cb2782a61d8fbb0d0dba71561e3d19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:05:42 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f03f67d1a43cf98edb99f235d084d408c2cb2782a61d8fbb0d0dba71561e3d19-userdata-shm.mount: Deactivated successfully.
Nov 29 03:05:42 np0005539552 systemd[1]: var-lib-containers-storage-overlay-9ac219394d3b9f6072becad2ae4d726d9477bbcc109faee1faa95bf66beb88bf-merged.mount: Deactivated successfully.
Nov 29 03:05:42 np0005539552 podman[257481]: 2025-11-29 08:05:42.233945171 +0000 UTC m=+0.090746571 container cleanup f03f67d1a43cf98edb99f235d084d408c2cb2782a61d8fbb0d0dba71561e3d19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:05:42 np0005539552 nova_compute[233724]: 2025-11-29 08:05:42.243 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:42 np0005539552 systemd[1]: libpod-conmon-f03f67d1a43cf98edb99f235d084d408c2cb2782a61d8fbb0d0dba71561e3d19.scope: Deactivated successfully.
Nov 29 03:05:42 np0005539552 nova_compute[233724]: 2025-11-29 08:05:42.250 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:42 np0005539552 podman[257518]: 2025-11-29 08:05:42.314635469 +0000 UTC m=+0.055801328 container remove f03f67d1a43cf98edb99f235d084d408c2cb2782a61d8fbb0d0dba71561e3d19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:05:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:42.322 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c332a61a-143b-43c8-91bb-31ef945c68ef]: (4, ('Sat Nov 29 08:05:42 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9 (f03f67d1a43cf98edb99f235d084d408c2cb2782a61d8fbb0d0dba71561e3d19)\nf03f67d1a43cf98edb99f235d084d408c2cb2782a61d8fbb0d0dba71561e3d19\nSat Nov 29 08:05:42 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9 (f03f67d1a43cf98edb99f235d084d408c2cb2782a61d8fbb0d0dba71561e3d19)\nf03f67d1a43cf98edb99f235d084d408c2cb2782a61d8fbb0d0dba71561e3d19\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:42.325 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f38e56f3-9dd2-4dc8-ab5b-aa4f83db20eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:42.325 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa8be8715-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:42 np0005539552 nova_compute[233724]: 2025-11-29 08:05:42.368 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:42 np0005539552 kernel: tapa8be8715-20: left promiscuous mode
Nov 29 03:05:42 np0005539552 nova_compute[233724]: 2025-11-29 08:05:42.388 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:42 np0005539552 nova_compute[233724]: 2025-11-29 08:05:42.391 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:42.394 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0ed9b68a-e239-49b4-9e94-3506e02594a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:42.411 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6e41f850-e0ef-4368-91a3-9c9ff8345376]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:42.413 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ac0f0244-7d2c-449f-823e-fd5ff9d09c01]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:42.431 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b344db85-da3a-4f08-b1f3-bedaa1d54f33]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 656135, 'reachable_time': 37386, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257540, 'error': None, 'target': 'ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:42.434 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:05:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:05:42.435 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[47eed610-6621-4210-9017-6924db9615ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:42 np0005539552 systemd[1]: run-netns-ovnmeta\x2da8be8715\x2d2b74\x2d42ca\x2d9713\x2d7fc1f4a33bc9.mount: Deactivated successfully.
Nov 29 03:05:42 np0005539552 nova_compute[233724]: 2025-11-29 08:05:42.437 233728 INFO nova.virt.libvirt.driver [None req-d977eb7e-9324-4947-a431-80345ac87485 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Instance shutdown successfully after 24 seconds.#033[00m
Nov 29 03:05:42 np0005539552 nova_compute[233724]: 2025-11-29 08:05:42.442 233728 INFO nova.virt.libvirt.driver [-] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Instance destroyed successfully.#033[00m
Nov 29 03:05:42 np0005539552 nova_compute[233724]: 2025-11-29 08:05:42.442 233728 DEBUG nova.objects.instance [None req-d977eb7e-9324-4947-a431-80345ac87485 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lazy-loading 'numa_topology' on Instance uuid afe3bf93-f54a-4a11-8f71-87b27ee7290a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:05:42 np0005539552 nova_compute[233724]: 2025-11-29 08:05:42.468 233728 DEBUG nova.compute.manager [req-209abd98-bec2-404d-836c-a44334d5632d req-e4c147f8-51d1-413a-a85e-06835ba20009 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Received event network-vif-unplugged-dc792eb5-e18c-4ca9-9940-31d2322e3c3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:05:42 np0005539552 nova_compute[233724]: 2025-11-29 08:05:42.468 233728 DEBUG oslo_concurrency.lockutils [req-209abd98-bec2-404d-836c-a44334d5632d req-e4c147f8-51d1-413a-a85e-06835ba20009 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "afe3bf93-f54a-4a11-8f71-87b27ee7290a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:42 np0005539552 nova_compute[233724]: 2025-11-29 08:05:42.468 233728 DEBUG oslo_concurrency.lockutils [req-209abd98-bec2-404d-836c-a44334d5632d req-e4c147f8-51d1-413a-a85e-06835ba20009 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "afe3bf93-f54a-4a11-8f71-87b27ee7290a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:42 np0005539552 nova_compute[233724]: 2025-11-29 08:05:42.468 233728 DEBUG oslo_concurrency.lockutils [req-209abd98-bec2-404d-836c-a44334d5632d req-e4c147f8-51d1-413a-a85e-06835ba20009 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "afe3bf93-f54a-4a11-8f71-87b27ee7290a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:42 np0005539552 nova_compute[233724]: 2025-11-29 08:05:42.469 233728 DEBUG nova.compute.manager [req-209abd98-bec2-404d-836c-a44334d5632d req-e4c147f8-51d1-413a-a85e-06835ba20009 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] No waiting events found dispatching network-vif-unplugged-dc792eb5-e18c-4ca9-9940-31d2322e3c3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:05:42 np0005539552 nova_compute[233724]: 2025-11-29 08:05:42.469 233728 WARNING nova.compute.manager [req-209abd98-bec2-404d-836c-a44334d5632d req-e4c147f8-51d1-413a-a85e-06835ba20009 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Received unexpected event network-vif-unplugged-dc792eb5-e18c-4ca9-9940-31d2322e3c3c for instance with vm_state active and task_state shelving.#033[00m
Nov 29 03:05:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:42.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:42 np0005539552 nova_compute[233724]: 2025-11-29 08:05:42.768 233728 INFO nova.virt.libvirt.driver [None req-d977eb7e-9324-4947-a431-80345ac87485 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Beginning cold snapshot process#033[00m
Nov 29 03:05:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:42.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:42 np0005539552 nova_compute[233724]: 2025-11-29 08:05:42.961 233728 DEBUG nova.virt.libvirt.imagebackend [None req-d977eb7e-9324-4947-a431-80345ac87485 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] No parent info for 4873db8c-b414-4e95-acd9-77caabebe722; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 29 03:05:43 np0005539552 nova_compute[233724]: 2025-11-29 08:05:43.235 233728 DEBUG nova.storage.rbd_utils [None req-d977eb7e-9324-4947-a431-80345ac87485 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] creating snapshot(6106bd308b6f4de7b1d8ff1256ac70ef) on rbd image(afe3bf93-f54a-4a11-8f71-87b27ee7290a_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:05:44 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e228 e228: 3 total, 3 up, 3 in
Nov 29 03:05:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:44.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:44.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:44 np0005539552 podman[257594]: 2025-11-29 08:05:44.978385094 +0000 UTC m=+0.064585184 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:05:44 np0005539552 podman[257596]: 2025-11-29 08:05:44.995168357 +0000 UTC m=+0.072475077 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 03:05:45 np0005539552 podman[257595]: 2025-11-29 08:05:45.001245611 +0000 UTC m=+0.074352698 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 29 03:05:45 np0005539552 nova_compute[233724]: 2025-11-29 08:05:45.003 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:45 np0005539552 nova_compute[233724]: 2025-11-29 08:05:45.011 233728 DEBUG nova.compute.manager [req-3ed85718-e194-4b39-b2d2-3372089f3cc9 req-ded39af6-2f12-41b4-8a3c-fd83574c3685 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Received event network-vif-plugged-dc792eb5-e18c-4ca9-9940-31d2322e3c3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:05:45 np0005539552 nova_compute[233724]: 2025-11-29 08:05:45.011 233728 DEBUG oslo_concurrency.lockutils [req-3ed85718-e194-4b39-b2d2-3372089f3cc9 req-ded39af6-2f12-41b4-8a3c-fd83574c3685 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "afe3bf93-f54a-4a11-8f71-87b27ee7290a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:45 np0005539552 nova_compute[233724]: 2025-11-29 08:05:45.011 233728 DEBUG oslo_concurrency.lockutils [req-3ed85718-e194-4b39-b2d2-3372089f3cc9 req-ded39af6-2f12-41b4-8a3c-fd83574c3685 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "afe3bf93-f54a-4a11-8f71-87b27ee7290a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:45 np0005539552 nova_compute[233724]: 2025-11-29 08:05:45.011 233728 DEBUG oslo_concurrency.lockutils [req-3ed85718-e194-4b39-b2d2-3372089f3cc9 req-ded39af6-2f12-41b4-8a3c-fd83574c3685 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "afe3bf93-f54a-4a11-8f71-87b27ee7290a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:45 np0005539552 nova_compute[233724]: 2025-11-29 08:05:45.011 233728 DEBUG nova.compute.manager [req-3ed85718-e194-4b39-b2d2-3372089f3cc9 req-ded39af6-2f12-41b4-8a3c-fd83574c3685 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] No waiting events found dispatching network-vif-plugged-dc792eb5-e18c-4ca9-9940-31d2322e3c3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:05:45 np0005539552 nova_compute[233724]: 2025-11-29 08:05:45.012 233728 WARNING nova.compute.manager [req-3ed85718-e194-4b39-b2d2-3372089f3cc9 req-ded39af6-2f12-41b4-8a3c-fd83574c3685 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Received unexpected event network-vif-plugged-dc792eb5-e18c-4ca9-9940-31d2322e3c3c for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Nov 29 03:05:45 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e229 e229: 3 total, 3 up, 3 in
Nov 29 03:05:46 np0005539552 nova_compute[233724]: 2025-11-29 08:05:46.414 233728 DEBUG nova.storage.rbd_utils [None req-d977eb7e-9324-4947-a431-80345ac87485 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] cloning vms/afe3bf93-f54a-4a11-8f71-87b27ee7290a_disk@6106bd308b6f4de7b1d8ff1256ac70ef to images/36c33ac0-d290-4825-8d63-52b362736e81 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:05:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:46.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:46.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:46 np0005539552 nova_compute[233724]: 2025-11-29 08:05:46.953 233728 DEBUG nova.storage.rbd_utils [None req-d977eb7e-9324-4947-a431-80345ac87485 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] flattening images/36c33ac0-d290-4825-8d63-52b362736e81 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 29 03:05:47 np0005539552 nova_compute[233724]: 2025-11-29 08:05:47.031 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e229 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e230 e230: 3 total, 3 up, 3 in
Nov 29 03:05:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:48.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:48.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e231 e231: 3 total, 3 up, 3 in
Nov 29 03:05:49 np0005539552 nova_compute[233724]: 2025-11-29 08:05:49.307 233728 DEBUG nova.storage.rbd_utils [None req-d977eb7e-9324-4947-a431-80345ac87485 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] removing snapshot(6106bd308b6f4de7b1d8ff1256ac70ef) on rbd image(afe3bf93-f54a-4a11-8f71-87b27ee7290a_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:05:50 np0005539552 nova_compute[233724]: 2025-11-29 08:05:50.007 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:50 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e232 e232: 3 total, 3 up, 3 in
Nov 29 03:05:50 np0005539552 nova_compute[233724]: 2025-11-29 08:05:50.538 233728 DEBUG nova.storage.rbd_utils [None req-d977eb7e-9324-4947-a431-80345ac87485 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] creating snapshot(snap) on rbd image(36c33ac0-d290-4825-8d63-52b362736e81) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:05:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:50.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:50.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:51 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e233 e233: 3 total, 3 up, 3 in
Nov 29 03:05:51 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:05:51 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.3 total, 600.0 interval#012Cumulative writes: 25K writes, 103K keys, 25K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.03 MB/s#012Cumulative WAL: 25K writes, 8275 syncs, 3.03 writes per sync, written: 0.10 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 13K writes, 54K keys, 13K commit groups, 1.0 writes per commit group, ingest: 55.47 MB, 0.09 MB/s#012Interval WAL: 13K writes, 5012 syncs, 2.63 writes per sync, written: 0.05 GB, 0.09 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 03:05:52 np0005539552 nova_compute[233724]: 2025-11-29 08:05:52.034 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:05:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:52.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:05:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 03:05:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:52.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 03:05:53 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:05:53 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:05:54 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e234 e234: 3 total, 3 up, 3 in
Nov 29 03:05:54 np0005539552 nova_compute[233724]: 2025-11-29 08:05:54.132 233728 INFO nova.virt.libvirt.driver [None req-d977eb7e-9324-4947-a431-80345ac87485 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Snapshot image upload complete#033[00m
Nov 29 03:05:54 np0005539552 nova_compute[233724]: 2025-11-29 08:05:54.133 233728 DEBUG nova.compute.manager [None req-d977eb7e-9324-4947-a431-80345ac87485 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:05:54 np0005539552 nova_compute[233724]: 2025-11-29 08:05:54.181 233728 INFO nova.compute.manager [None req-d977eb7e-9324-4947-a431-80345ac87485 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Shelve offloading#033[00m
Nov 29 03:05:54 np0005539552 nova_compute[233724]: 2025-11-29 08:05:54.188 233728 INFO nova.virt.libvirt.driver [-] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Instance destroyed successfully.#033[00m
Nov 29 03:05:54 np0005539552 nova_compute[233724]: 2025-11-29 08:05:54.188 233728 DEBUG nova.compute.manager [None req-d977eb7e-9324-4947-a431-80345ac87485 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:05:54 np0005539552 nova_compute[233724]: 2025-11-29 08:05:54.191 233728 DEBUG oslo_concurrency.lockutils [None req-d977eb7e-9324-4947-a431-80345ac87485 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "refresh_cache-afe3bf93-f54a-4a11-8f71-87b27ee7290a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:05:54 np0005539552 nova_compute[233724]: 2025-11-29 08:05:54.191 233728 DEBUG oslo_concurrency.lockutils [None req-d977eb7e-9324-4947-a431-80345ac87485 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquired lock "refresh_cache-afe3bf93-f54a-4a11-8f71-87b27ee7290a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:05:54 np0005539552 nova_compute[233724]: 2025-11-29 08:05:54.191 233728 DEBUG nova.network.neutron [None req-d977eb7e-9324-4947-a431-80345ac87485 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:05:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:54.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:54.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:55 np0005539552 nova_compute[233724]: 2025-11-29 08:05:55.009 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:55 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e235 e235: 3 total, 3 up, 3 in
Nov 29 03:05:56 np0005539552 nova_compute[233724]: 2025-11-29 08:05:56.111 233728 DEBUG nova.network.neutron [None req-d977eb7e-9324-4947-a431-80345ac87485 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Updating instance_info_cache with network_info: [{"id": "dc792eb5-e18c-4ca9-9940-31d2322e3c3c", "address": "fa:16:3e:39:eb:35", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc792eb5-e1", "ovs_interfaceid": "dc792eb5-e18c-4ca9-9940-31d2322e3c3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:05:56 np0005539552 nova_compute[233724]: 2025-11-29 08:05:56.137 233728 DEBUG oslo_concurrency.lockutils [None req-d977eb7e-9324-4947-a431-80345ac87485 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Releasing lock "refresh_cache-afe3bf93-f54a-4a11-8f71-87b27ee7290a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:05:56 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e236 e236: 3 total, 3 up, 3 in
Nov 29 03:05:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:56.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:56.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:57 np0005539552 nova_compute[233724]: 2025-11-29 08:05:57.039 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:57 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:57 np0005539552 nova_compute[233724]: 2025-11-29 08:05:57.253 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403542.2529917, afe3bf93-f54a-4a11-8f71-87b27ee7290a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:05:57 np0005539552 nova_compute[233724]: 2025-11-29 08:05:57.254 233728 INFO nova.compute.manager [-] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:05:57 np0005539552 nova_compute[233724]: 2025-11-29 08:05:57.642 233728 DEBUG nova.compute.manager [None req-0faba4e4-67f8-466b-b229-b95075b0c52f - - - - - -] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:05:57 np0005539552 nova_compute[233724]: 2025-11-29 08:05:57.645 233728 DEBUG nova.compute.manager [None req-0faba4e4-67f8-466b-b229-b95075b0c52f - - - - - -] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: shelved, current task_state: shelving_offloading, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:05:57 np0005539552 nova_compute[233724]: 2025-11-29 08:05:57.699 233728 INFO nova.compute.manager [None req-0faba4e4-67f8-466b-b229-b95075b0c52f - - - - - -] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] During sync_power_state the instance has a pending task (shelving_offloading). Skip.#033[00m
Nov 29 03:05:57 np0005539552 nova_compute[233724]: 2025-11-29 08:05:57.769 233728 DEBUG oslo_concurrency.lockutils [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Acquiring lock "59628432-68dc-48d9-8986-8511c376a62d" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:57 np0005539552 nova_compute[233724]: 2025-11-29 08:05:57.770 233728 DEBUG oslo_concurrency.lockutils [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Lock "59628432-68dc-48d9-8986-8511c376a62d" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:57 np0005539552 nova_compute[233724]: 2025-11-29 08:05:57.771 233728 INFO nova.compute.manager [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Unshelving#033[00m
Nov 29 03:05:57 np0005539552 nova_compute[233724]: 2025-11-29 08:05:57.878 233728 DEBUG oslo_concurrency.lockutils [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:57 np0005539552 nova_compute[233724]: 2025-11-29 08:05:57.878 233728 DEBUG oslo_concurrency.lockutils [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:57 np0005539552 nova_compute[233724]: 2025-11-29 08:05:57.883 233728 DEBUG nova.objects.instance [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Lazy-loading 'pci_requests' on Instance uuid 59628432-68dc-48d9-8986-8511c376a62d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:05:57 np0005539552 nova_compute[233724]: 2025-11-29 08:05:57.899 233728 DEBUG nova.objects.instance [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Lazy-loading 'numa_topology' on Instance uuid 59628432-68dc-48d9-8986-8511c376a62d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:05:57 np0005539552 nova_compute[233724]: 2025-11-29 08:05:57.923 233728 DEBUG nova.virt.hardware [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:05:57 np0005539552 nova_compute[233724]: 2025-11-29 08:05:57.924 233728 INFO nova.compute.claims [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:05:58 np0005539552 nova_compute[233724]: 2025-11-29 08:05:58.050 233728 DEBUG oslo_concurrency.processutils [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:05:58 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/305217323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:05:58 np0005539552 nova_compute[233724]: 2025-11-29 08:05:58.501 233728 DEBUG oslo_concurrency.processutils [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:58 np0005539552 nova_compute[233724]: 2025-11-29 08:05:58.507 233728 DEBUG nova.compute.provider_tree [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:05:58 np0005539552 nova_compute[233724]: 2025-11-29 08:05:58.522 233728 DEBUG nova.scheduler.client.report [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:05:58 np0005539552 nova_compute[233724]: 2025-11-29 08:05:58.551 233728 DEBUG oslo_concurrency.lockutils [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:58.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:58 np0005539552 nova_compute[233724]: 2025-11-29 08:05:58.728 233728 DEBUG oslo_concurrency.lockutils [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Acquiring lock "refresh_cache-59628432-68dc-48d9-8986-8511c376a62d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:05:58 np0005539552 nova_compute[233724]: 2025-11-29 08:05:58.729 233728 DEBUG oslo_concurrency.lockutils [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Acquired lock "refresh_cache-59628432-68dc-48d9-8986-8511c376a62d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:05:58 np0005539552 nova_compute[233724]: 2025-11-29 08:05:58.729 233728 DEBUG nova.network.neutron [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:05:58 np0005539552 nova_compute[233724]: 2025-11-29 08:05:58.779 233728 INFO nova.virt.libvirt.driver [-] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Instance destroyed successfully.#033[00m
Nov 29 03:05:58 np0005539552 nova_compute[233724]: 2025-11-29 08:05:58.779 233728 DEBUG nova.objects.instance [None req-d977eb7e-9324-4947-a431-80345ac87485 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lazy-loading 'resources' on Instance uuid afe3bf93-f54a-4a11-8f71-87b27ee7290a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:05:58 np0005539552 nova_compute[233724]: 2025-11-29 08:05:58.794 233728 DEBUG nova.virt.libvirt.vif [None req-d977eb7e-9324-4947-a431-80345ac87485 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:05:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1666023155',display_name='tempest-DeleteServersTestJSON-server-1666023155',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1666023155',id=57,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:05:16Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='90c23935e0214785a9dc5061b91cf29c',ramdisk_id='',reservation_id='r-ttplysom',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-294503786',owner_user_name='tempest-DeleteServersTestJSON-294503786-project-member',shelved_at='2025-11-29T08:05:54.133193',shelved_host='compute-2.ctlplane.example.com',shelved_image_id='36c33ac0-d290-4825-8d63-52b362736e81'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:05:42Z,user_data=None,user_id='104aea18c5154615b602f032bdb49681',uuid=afe3bf93-f54a-4a11-8f71-87b27ee7290a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "dc792eb5-e18c-4ca9-9940-31d2322e3c3c", "address": "fa:16:3e:39:eb:35", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc792eb5-e1", "ovs_interfaceid": "dc792eb5-e18c-4ca9-9940-31d2322e3c3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:05:58 np0005539552 nova_compute[233724]: 2025-11-29 08:05:58.795 233728 DEBUG nova.network.os_vif_util [None req-d977eb7e-9324-4947-a431-80345ac87485 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Converting VIF {"id": "dc792eb5-e18c-4ca9-9940-31d2322e3c3c", "address": "fa:16:3e:39:eb:35", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc792eb5-e1", "ovs_interfaceid": "dc792eb5-e18c-4ca9-9940-31d2322e3c3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:05:58 np0005539552 nova_compute[233724]: 2025-11-29 08:05:58.796 233728 DEBUG nova.network.os_vif_util [None req-d977eb7e-9324-4947-a431-80345ac87485 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:eb:35,bridge_name='br-int',has_traffic_filtering=True,id=dc792eb5-e18c-4ca9-9940-31d2322e3c3c,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc792eb5-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:05:58 np0005539552 nova_compute[233724]: 2025-11-29 08:05:58.796 233728 DEBUG os_vif [None req-d977eb7e-9324-4947-a431-80345ac87485 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:eb:35,bridge_name='br-int',has_traffic_filtering=True,id=dc792eb5-e18c-4ca9-9940-31d2322e3c3c,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc792eb5-e1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:05:58 np0005539552 nova_compute[233724]: 2025-11-29 08:05:58.798 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:58 np0005539552 nova_compute[233724]: 2025-11-29 08:05:58.799 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdc792eb5-e1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:58 np0005539552 nova_compute[233724]: 2025-11-29 08:05:58.801 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:58 np0005539552 nova_compute[233724]: 2025-11-29 08:05:58.802 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:58 np0005539552 nova_compute[233724]: 2025-11-29 08:05:58.805 233728 INFO os_vif [None req-d977eb7e-9324-4947-a431-80345ac87485 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:eb:35,bridge_name='br-int',has_traffic_filtering=True,id=dc792eb5-e18c-4ca9-9940-31d2322e3c3c,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc792eb5-e1')#033[00m
Nov 29 03:05:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:05:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:58.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:58 np0005539552 nova_compute[233724]: 2025-11-29 08:05:58.901 233728 DEBUG nova.network.neutron [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:05:59 np0005539552 nova_compute[233724]: 2025-11-29 08:05:59.476 233728 DEBUG nova.network.neutron [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:05:59 np0005539552 nova_compute[233724]: 2025-11-29 08:05:59.496 233728 DEBUG oslo_concurrency.lockutils [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Releasing lock "refresh_cache-59628432-68dc-48d9-8986-8511c376a62d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:05:59 np0005539552 nova_compute[233724]: 2025-11-29 08:05:59.497 233728 DEBUG nova.virt.libvirt.driver [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:05:59 np0005539552 nova_compute[233724]: 2025-11-29 08:05:59.498 233728 INFO nova.virt.libvirt.driver [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Creating image(s)#033[00m
Nov 29 03:05:59 np0005539552 nova_compute[233724]: 2025-11-29 08:05:59.524 233728 DEBUG nova.storage.rbd_utils [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] rbd image 59628432-68dc-48d9-8986-8511c376a62d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:05:59 np0005539552 nova_compute[233724]: 2025-11-29 08:05:59.527 233728 DEBUG nova.objects.instance [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 59628432-68dc-48d9-8986-8511c376a62d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:05:59 np0005539552 nova_compute[233724]: 2025-11-29 08:05:59.572 233728 DEBUG nova.storage.rbd_utils [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] rbd image 59628432-68dc-48d9-8986-8511c376a62d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:05:59 np0005539552 nova_compute[233724]: 2025-11-29 08:05:59.599 233728 DEBUG nova.storage.rbd_utils [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] rbd image 59628432-68dc-48d9-8986-8511c376a62d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:05:59 np0005539552 nova_compute[233724]: 2025-11-29 08:05:59.602 233728 DEBUG oslo_concurrency.lockutils [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Acquiring lock "bd7cf3598f9f5d0f5820a770e40ef059b1b29c69" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:59 np0005539552 nova_compute[233724]: 2025-11-29 08:05:59.603 233728 DEBUG oslo_concurrency.lockutils [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Lock "bd7cf3598f9f5d0f5820a770e40ef059b1b29c69" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:59 np0005539552 nova_compute[233724]: 2025-11-29 08:05:59.734 233728 DEBUG nova.compute.manager [req-12ae303e-b8f4-4190-9e89-e416c2e179eb req-0d191da5-d80c-4859-8356-9b200ea714d1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Received event network-changed-dc792eb5-e18c-4ca9-9940-31d2322e3c3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:05:59 np0005539552 nova_compute[233724]: 2025-11-29 08:05:59.735 233728 DEBUG nova.compute.manager [req-12ae303e-b8f4-4190-9e89-e416c2e179eb req-0d191da5-d80c-4859-8356-9b200ea714d1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Refreshing instance network info cache due to event network-changed-dc792eb5-e18c-4ca9-9940-31d2322e3c3c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:05:59 np0005539552 nova_compute[233724]: 2025-11-29 08:05:59.735 233728 DEBUG oslo_concurrency.lockutils [req-12ae303e-b8f4-4190-9e89-e416c2e179eb req-0d191da5-d80c-4859-8356-9b200ea714d1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-afe3bf93-f54a-4a11-8f71-87b27ee7290a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:05:59 np0005539552 nova_compute[233724]: 2025-11-29 08:05:59.735 233728 DEBUG oslo_concurrency.lockutils [req-12ae303e-b8f4-4190-9e89-e416c2e179eb req-0d191da5-d80c-4859-8356-9b200ea714d1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-afe3bf93-f54a-4a11-8f71-87b27ee7290a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:05:59 np0005539552 nova_compute[233724]: 2025-11-29 08:05:59.735 233728 DEBUG nova.network.neutron [req-12ae303e-b8f4-4190-9e89-e416c2e179eb req-0d191da5-d80c-4859-8356-9b200ea714d1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Refreshing network info cache for port dc792eb5-e18c-4ca9-9940-31d2322e3c3c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:05:59 np0005539552 nova_compute[233724]: 2025-11-29 08:05:59.872 233728 DEBUG nova.virt.libvirt.imagebackend [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Image locations are: [{'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/f953a29b-1cff-4723-9bbc-02d6d0bd3151/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/f953a29b-1cff-4723-9bbc-02d6d0bd3151/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 29 03:05:59 np0005539552 nova_compute[233724]: 2025-11-29 08:05:59.918 233728 INFO nova.virt.libvirt.driver [None req-d977eb7e-9324-4947-a431-80345ac87485 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Deleting instance files /var/lib/nova/instances/afe3bf93-f54a-4a11-8f71-87b27ee7290a_del#033[00m
Nov 29 03:05:59 np0005539552 nova_compute[233724]: 2025-11-29 08:05:59.919 233728 INFO nova.virt.libvirt.driver [None req-d977eb7e-9324-4947-a431-80345ac87485 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Deletion of /var/lib/nova/instances/afe3bf93-f54a-4a11-8f71-87b27ee7290a_del complete#033[00m
Nov 29 03:05:59 np0005539552 nova_compute[233724]: 2025-11-29 08:05:59.924 233728 DEBUG nova.virt.libvirt.imagebackend [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Selected location: {'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/f953a29b-1cff-4723-9bbc-02d6d0bd3151/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Nov 29 03:05:59 np0005539552 nova_compute[233724]: 2025-11-29 08:05:59.924 233728 DEBUG nova.storage.rbd_utils [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] cloning images/f953a29b-1cff-4723-9bbc-02d6d0bd3151@snap to None/59628432-68dc-48d9-8986-8511c376a62d_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:06:00 np0005539552 nova_compute[233724]: 2025-11-29 08:06:00.011 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:00 np0005539552 nova_compute[233724]: 2025-11-29 08:06:00.042 233728 INFO nova.scheduler.client.report [None req-d977eb7e-9324-4947-a431-80345ac87485 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Deleted allocations for instance afe3bf93-f54a-4a11-8f71-87b27ee7290a#033[00m
Nov 29 03:06:00 np0005539552 nova_compute[233724]: 2025-11-29 08:06:00.051 233728 DEBUG oslo_concurrency.lockutils [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Lock "bd7cf3598f9f5d0f5820a770e40ef059b1b29c69" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.448s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:00 np0005539552 nova_compute[233724]: 2025-11-29 08:06:00.126 233728 DEBUG oslo_concurrency.lockutils [None req-d977eb7e-9324-4947-a431-80345ac87485 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:00 np0005539552 nova_compute[233724]: 2025-11-29 08:06:00.127 233728 DEBUG oslo_concurrency.lockutils [None req-d977eb7e-9324-4947-a431-80345ac87485 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:00 np0005539552 nova_compute[233724]: 2025-11-29 08:06:00.171 233728 DEBUG nova.objects.instance [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Lazy-loading 'migration_context' on Instance uuid 59628432-68dc-48d9-8986-8511c376a62d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:06:00 np0005539552 nova_compute[233724]: 2025-11-29 08:06:00.232 233728 DEBUG nova.storage.rbd_utils [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] flattening vms/59628432-68dc-48d9-8986-8511c376a62d_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 29 03:06:00 np0005539552 nova_compute[233724]: 2025-11-29 08:06:00.272 233728 DEBUG oslo_concurrency.processutils [None req-d977eb7e-9324-4947-a431-80345ac87485 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:00.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:00 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:06:00 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1432551729' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:06:00 np0005539552 nova_compute[233724]: 2025-11-29 08:06:00.759 233728 DEBUG oslo_concurrency.processutils [None req-d977eb7e-9324-4947-a431-80345ac87485 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:00 np0005539552 nova_compute[233724]: 2025-11-29 08:06:00.766 233728 DEBUG nova.compute.provider_tree [None req-d977eb7e-9324-4947-a431-80345ac87485 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:06:00 np0005539552 nova_compute[233724]: 2025-11-29 08:06:00.786 233728 DEBUG nova.scheduler.client.report [None req-d977eb7e-9324-4947-a431-80345ac87485 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:06:00 np0005539552 nova_compute[233724]: 2025-11-29 08:06:00.807 233728 DEBUG oslo_concurrency.lockutils [None req-d977eb7e-9324-4947-a431-80345ac87485 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:00.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:00 np0005539552 nova_compute[233724]: 2025-11-29 08:06:00.861 233728 DEBUG oslo_concurrency.lockutils [None req-d977eb7e-9324-4947-a431-80345ac87485 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "afe3bf93-f54a-4a11-8f71-87b27ee7290a" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 42.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:01 np0005539552 nova_compute[233724]: 2025-11-29 08:06:01.034 233728 DEBUG nova.virt.libvirt.driver [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Image rbd:vms/59628432-68dc-48d9-8986-8511c376a62d_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Nov 29 03:06:01 np0005539552 nova_compute[233724]: 2025-11-29 08:06:01.035 233728 DEBUG nova.virt.libvirt.driver [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:06:01 np0005539552 nova_compute[233724]: 2025-11-29 08:06:01.036 233728 DEBUG nova.virt.libvirt.driver [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Ensure instance console log exists: /var/lib/nova/instances/59628432-68dc-48d9-8986-8511c376a62d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:06:01 np0005539552 nova_compute[233724]: 2025-11-29 08:06:01.036 233728 DEBUG oslo_concurrency.lockutils [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:01 np0005539552 nova_compute[233724]: 2025-11-29 08:06:01.037 233728 DEBUG oslo_concurrency.lockutils [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:01 np0005539552 nova_compute[233724]: 2025-11-29 08:06:01.037 233728 DEBUG oslo_concurrency.lockutils [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:01 np0005539552 nova_compute[233724]: 2025-11-29 08:06:01.038 233728 DEBUG nova.virt.libvirt.driver [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-29T08:05:28Z,direct_url=<?>,disk_format='raw',id=f953a29b-1cff-4723-9bbc-02d6d0bd3151,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-106050071-shelved',owner='49ee945ea42e47ad9f070078a4d5179b',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-29T08:05:50Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:06:01 np0005539552 nova_compute[233724]: 2025-11-29 08:06:01.043 233728 WARNING nova.virt.libvirt.driver [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:06:01 np0005539552 nova_compute[233724]: 2025-11-29 08:06:01.056 233728 DEBUG nova.virt.libvirt.host [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:06:01 np0005539552 nova_compute[233724]: 2025-11-29 08:06:01.056 233728 DEBUG nova.virt.libvirt.host [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:06:01 np0005539552 nova_compute[233724]: 2025-11-29 08:06:01.059 233728 DEBUG nova.virt.libvirt.host [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:06:01 np0005539552 nova_compute[233724]: 2025-11-29 08:06:01.059 233728 DEBUG nova.virt.libvirt.host [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:06:01 np0005539552 nova_compute[233724]: 2025-11-29 08:06:01.061 233728 DEBUG nova.virt.libvirt.driver [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:06:01 np0005539552 nova_compute[233724]: 2025-11-29 08:06:01.061 233728 DEBUG nova.virt.hardware [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-29T08:05:28Z,direct_url=<?>,disk_format='raw',id=f953a29b-1cff-4723-9bbc-02d6d0bd3151,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-106050071-shelved',owner='49ee945ea42e47ad9f070078a4d5179b',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-29T08:05:50Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:06:01 np0005539552 nova_compute[233724]: 2025-11-29 08:06:01.062 233728 DEBUG nova.virt.hardware [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:06:01 np0005539552 nova_compute[233724]: 2025-11-29 08:06:01.062 233728 DEBUG nova.virt.hardware [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:06:01 np0005539552 nova_compute[233724]: 2025-11-29 08:06:01.063 233728 DEBUG nova.virt.hardware [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:06:01 np0005539552 nova_compute[233724]: 2025-11-29 08:06:01.064 233728 DEBUG nova.virt.hardware [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:06:01 np0005539552 nova_compute[233724]: 2025-11-29 08:06:01.064 233728 DEBUG nova.virt.hardware [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:06:01 np0005539552 nova_compute[233724]: 2025-11-29 08:06:01.065 233728 DEBUG nova.virt.hardware [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:06:01 np0005539552 nova_compute[233724]: 2025-11-29 08:06:01.065 233728 DEBUG nova.virt.hardware [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:06:01 np0005539552 nova_compute[233724]: 2025-11-29 08:06:01.067 233728 DEBUG nova.virt.hardware [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:06:01 np0005539552 nova_compute[233724]: 2025-11-29 08:06:01.067 233728 DEBUG nova.virt.hardware [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:06:01 np0005539552 nova_compute[233724]: 2025-11-29 08:06:01.068 233728 DEBUG nova.virt.hardware [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:06:01 np0005539552 nova_compute[233724]: 2025-11-29 08:06:01.068 233728 DEBUG nova.objects.instance [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 59628432-68dc-48d9-8986-8511c376a62d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:06:01 np0005539552 nova_compute[233724]: 2025-11-29 08:06:01.087 233728 DEBUG oslo_concurrency.processutils [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:01 np0005539552 nova_compute[233724]: 2025-11-29 08:06:01.167 233728 DEBUG oslo_concurrency.lockutils [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "68ba4333-b460-437d-97a0-6c7feff2c4bc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:01 np0005539552 nova_compute[233724]: 2025-11-29 08:06:01.168 233728 DEBUG oslo_concurrency.lockutils [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "68ba4333-b460-437d-97a0-6c7feff2c4bc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:01 np0005539552 nova_compute[233724]: 2025-11-29 08:06:01.184 233728 DEBUG nova.compute.manager [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:06:01 np0005539552 nova_compute[233724]: 2025-11-29 08:06:01.227 233728 DEBUG nova.network.neutron [req-12ae303e-b8f4-4190-9e89-e416c2e179eb req-0d191da5-d80c-4859-8356-9b200ea714d1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Updated VIF entry in instance network info cache for port dc792eb5-e18c-4ca9-9940-31d2322e3c3c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:06:01 np0005539552 nova_compute[233724]: 2025-11-29 08:06:01.228 233728 DEBUG nova.network.neutron [req-12ae303e-b8f4-4190-9e89-e416c2e179eb req-0d191da5-d80c-4859-8356-9b200ea714d1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: afe3bf93-f54a-4a11-8f71-87b27ee7290a] Updating instance_info_cache with network_info: [{"id": "dc792eb5-e18c-4ca9-9940-31d2322e3c3c", "address": "fa:16:3e:39:eb:35", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": null, "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tapdc792eb5-e1", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:06:01 np0005539552 nova_compute[233724]: 2025-11-29 08:06:01.254 233728 DEBUG oslo_concurrency.lockutils [req-12ae303e-b8f4-4190-9e89-e416c2e179eb req-0d191da5-d80c-4859-8356-9b200ea714d1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-afe3bf93-f54a-4a11-8f71-87b27ee7290a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:06:01 np0005539552 nova_compute[233724]: 2025-11-29 08:06:01.265 233728 DEBUG oslo_concurrency.lockutils [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:01 np0005539552 nova_compute[233724]: 2025-11-29 08:06:01.266 233728 DEBUG oslo_concurrency.lockutils [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:01 np0005539552 nova_compute[233724]: 2025-11-29 08:06:01.272 233728 DEBUG nova.virt.hardware [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:06:01 np0005539552 nova_compute[233724]: 2025-11-29 08:06:01.272 233728 INFO nova.compute.claims [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:06:01 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e237 e237: 3 total, 3 up, 3 in
Nov 29 03:06:01 np0005539552 nova_compute[233724]: 2025-11-29 08:06:01.412 233728 DEBUG oslo_concurrency.processutils [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:01 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:06:01 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/555370581' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:06:01 np0005539552 nova_compute[233724]: 2025-11-29 08:06:01.539 233728 DEBUG oslo_concurrency.processutils [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:01 np0005539552 nova_compute[233724]: 2025-11-29 08:06:01.570 233728 DEBUG nova.storage.rbd_utils [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] rbd image 59628432-68dc-48d9-8986-8511c376a62d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:01 np0005539552 nova_compute[233724]: 2025-11-29 08:06:01.575 233728 DEBUG oslo_concurrency.processutils [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:01 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:06:01 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4279544881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:06:01 np0005539552 nova_compute[233724]: 2025-11-29 08:06:01.864 233728 DEBUG oslo_concurrency.processutils [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:01 np0005539552 nova_compute[233724]: 2025-11-29 08:06:01.870 233728 DEBUG nova.compute.provider_tree [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:06:01 np0005539552 nova_compute[233724]: 2025-11-29 08:06:01.886 233728 DEBUG nova.scheduler.client.report [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:06:01 np0005539552 nova_compute[233724]: 2025-11-29 08:06:01.910 233728 DEBUG oslo_concurrency.lockutils [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:01 np0005539552 nova_compute[233724]: 2025-11-29 08:06:01.911 233728 DEBUG nova.compute.manager [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:06:01 np0005539552 nova_compute[233724]: 2025-11-29 08:06:01.982 233728 DEBUG nova.compute.manager [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:06:01 np0005539552 nova_compute[233724]: 2025-11-29 08:06:01.983 233728 DEBUG nova.network.neutron [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:06:02 np0005539552 nova_compute[233724]: 2025-11-29 08:06:02.014 233728 INFO nova.virt.libvirt.driver [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:06:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:06:02 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2927296645' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:06:02 np0005539552 nova_compute[233724]: 2025-11-29 08:06:02.051 233728 DEBUG oslo_concurrency.processutils [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:02 np0005539552 nova_compute[233724]: 2025-11-29 08:06:02.053 233728 DEBUG nova.objects.instance [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Lazy-loading 'pci_devices' on Instance uuid 59628432-68dc-48d9-8986-8511c376a62d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:06:02 np0005539552 nova_compute[233724]: 2025-11-29 08:06:02.068 233728 DEBUG nova.compute.manager [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:06:02 np0005539552 nova_compute[233724]: 2025-11-29 08:06:02.073 233728 DEBUG nova.virt.libvirt.driver [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:06:02 np0005539552 nova_compute[233724]:  <uuid>59628432-68dc-48d9-8986-8511c376a62d</uuid>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:  <name>instance-00000037</name>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:06:02 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:      <nova:name>tempest-UnshelveToHostMultiNodesTest-server-106050071</nova:name>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:06:01</nova:creationTime>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:06:02 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:        <nova:user uuid="dc44b9aeabb442f582688b672dd724f3">tempest-UnshelveToHostMultiNodesTest-1345262936-project-member</nova:user>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:        <nova:project uuid="49ee945ea42e47ad9f070078a4d5179b">tempest-UnshelveToHostMultiNodesTest-1345262936</nova:project>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="f953a29b-1cff-4723-9bbc-02d6d0bd3151"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:      <nova:ports/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:      <entry name="serial">59628432-68dc-48d9-8986-8511c376a62d</entry>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:      <entry name="uuid">59628432-68dc-48d9-8986-8511c376a62d</entry>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:06:02 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/59628432-68dc-48d9-8986-8511c376a62d_disk">
Nov 29 03:06:02 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:06:02 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:06:02 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/59628432-68dc-48d9-8986-8511c376a62d_disk.config">
Nov 29 03:06:02 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:06:02 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:06:02 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/59628432-68dc-48d9-8986-8511c376a62d/console.log" append="off"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    <input type="keyboard" bus="usb"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:06:02 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:06:02 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:06:02 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:06:02 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:06:02 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:06:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:02 np0005539552 nova_compute[233724]: 2025-11-29 08:06:02.156 233728 DEBUG nova.virt.libvirt.driver [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:06:02 np0005539552 nova_compute[233724]: 2025-11-29 08:06:02.157 233728 DEBUG nova.virt.libvirt.driver [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:06:02 np0005539552 nova_compute[233724]: 2025-11-29 08:06:02.158 233728 INFO nova.virt.libvirt.driver [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Using config drive#033[00m
Nov 29 03:06:02 np0005539552 nova_compute[233724]: 2025-11-29 08:06:02.181 233728 DEBUG nova.storage.rbd_utils [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] rbd image 59628432-68dc-48d9-8986-8511c376a62d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:02 np0005539552 nova_compute[233724]: 2025-11-29 08:06:02.188 233728 DEBUG nova.compute.manager [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:06:02 np0005539552 nova_compute[233724]: 2025-11-29 08:06:02.189 233728 DEBUG nova.virt.libvirt.driver [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:06:02 np0005539552 nova_compute[233724]: 2025-11-29 08:06:02.189 233728 INFO nova.virt.libvirt.driver [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Creating image(s)#033[00m
Nov 29 03:06:02 np0005539552 nova_compute[233724]: 2025-11-29 08:06:02.211 233728 DEBUG nova.storage.rbd_utils [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] rbd image 68ba4333-b460-437d-97a0-6c7feff2c4bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:02 np0005539552 nova_compute[233724]: 2025-11-29 08:06:02.233 233728 DEBUG nova.storage.rbd_utils [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] rbd image 68ba4333-b460-437d-97a0-6c7feff2c4bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:02 np0005539552 nova_compute[233724]: 2025-11-29 08:06:02.259 233728 DEBUG nova.storage.rbd_utils [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] rbd image 68ba4333-b460-437d-97a0-6c7feff2c4bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:02 np0005539552 nova_compute[233724]: 2025-11-29 08:06:02.262 233728 DEBUG oslo_concurrency.lockutils [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "342cd1df246e2b6b6275a767890ec6551ae9dfb2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:02 np0005539552 nova_compute[233724]: 2025-11-29 08:06:02.263 233728 DEBUG oslo_concurrency.lockutils [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "342cd1df246e2b6b6275a767890ec6551ae9dfb2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:02 np0005539552 nova_compute[233724]: 2025-11-29 08:06:02.268 233728 DEBUG nova.objects.instance [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 59628432-68dc-48d9-8986-8511c376a62d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:06:02 np0005539552 nova_compute[233724]: 2025-11-29 08:06:02.312 233728 DEBUG nova.objects.instance [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Lazy-loading 'keypairs' on Instance uuid 59628432-68dc-48d9-8986-8511c376a62d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:06:02 np0005539552 nova_compute[233724]: 2025-11-29 08:06:02.380 233728 DEBUG nova.policy [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fddc5f5801764ee19d5253e2cab34df3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '638fd52fccf14f16b56d0860553063f3', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:06:02 np0005539552 nova_compute[233724]: 2025-11-29 08:06:02.445 233728 INFO nova.virt.libvirt.driver [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Creating config drive at /var/lib/nova/instances/59628432-68dc-48d9-8986-8511c376a62d/disk.config#033[00m
Nov 29 03:06:02 np0005539552 nova_compute[233724]: 2025-11-29 08:06:02.450 233728 DEBUG oslo_concurrency.processutils [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/59628432-68dc-48d9-8986-8511c376a62d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjnpm_mb6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e238 e238: 3 total, 3 up, 3 in
Nov 29 03:06:02 np0005539552 nova_compute[233724]: 2025-11-29 08:06:02.494 233728 DEBUG nova.virt.libvirt.imagebackend [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Image locations are: [{'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/37d12ee4-89a4-499b-beea-6b6d4e46474b/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/37d12ee4-89a4-499b-beea-6b6d4e46474b/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 29 03:06:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:02.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:02.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:02 np0005539552 nova_compute[233724]: 2025-11-29 08:06:02.880 233728 DEBUG oslo_concurrency.processutils [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/59628432-68dc-48d9-8986-8511c376a62d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjnpm_mb6" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:02 np0005539552 nova_compute[233724]: 2025-11-29 08:06:02.909 233728 DEBUG nova.storage.rbd_utils [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] rbd image 59628432-68dc-48d9-8986-8511c376a62d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:02 np0005539552 nova_compute[233724]: 2025-11-29 08:06:02.912 233728 DEBUG oslo_concurrency.processutils [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/59628432-68dc-48d9-8986-8511c376a62d/disk.config 59628432-68dc-48d9-8986-8511c376a62d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:02 np0005539552 nova_compute[233724]: 2025-11-29 08:06:02.944 233728 DEBUG nova.virt.libvirt.imagebackend [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Selected location: {'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/37d12ee4-89a4-499b-beea-6b6d4e46474b/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Nov 29 03:06:02 np0005539552 nova_compute[233724]: 2025-11-29 08:06:02.946 233728 DEBUG nova.storage.rbd_utils [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] cloning images/37d12ee4-89a4-499b-beea-6b6d4e46474b@snap to None/68ba4333-b460-437d-97a0-6c7feff2c4bc_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:06:02 np0005539552 nova_compute[233724]: 2025-11-29 08:06:02.979 233728 DEBUG nova.network.neutron [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Successfully created port: 82fc5bc2-cc1e-4b00-8bf7-2c63268650cf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:06:03 np0005539552 nova_compute[233724]: 2025-11-29 08:06:03.060 233728 DEBUG oslo_concurrency.lockutils [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "342cd1df246e2b6b6275a767890ec6551ae9dfb2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.797s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:03 np0005539552 nova_compute[233724]: 2025-11-29 08:06:03.179 233728 DEBUG nova.objects.instance [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lazy-loading 'migration_context' on Instance uuid 68ba4333-b460-437d-97a0-6c7feff2c4bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:06:03 np0005539552 nova_compute[233724]: 2025-11-29 08:06:03.191 233728 DEBUG nova.virt.libvirt.driver [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:06:03 np0005539552 nova_compute[233724]: 2025-11-29 08:06:03.192 233728 DEBUG nova.virt.libvirt.driver [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Ensure instance console log exists: /var/lib/nova/instances/68ba4333-b460-437d-97a0-6c7feff2c4bc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:06:03 np0005539552 nova_compute[233724]: 2025-11-29 08:06:03.192 233728 DEBUG oslo_concurrency.lockutils [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:03 np0005539552 nova_compute[233724]: 2025-11-29 08:06:03.193 233728 DEBUG oslo_concurrency.lockutils [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:03 np0005539552 nova_compute[233724]: 2025-11-29 08:06:03.193 233728 DEBUG oslo_concurrency.lockutils [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:03 np0005539552 nova_compute[233724]: 2025-11-29 08:06:03.739 233728 DEBUG oslo_concurrency.processutils [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/59628432-68dc-48d9-8986-8511c376a62d/disk.config 59628432-68dc-48d9-8986-8511c376a62d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.827s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:03 np0005539552 nova_compute[233724]: 2025-11-29 08:06:03.740 233728 INFO nova.virt.libvirt.driver [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Deleting local config drive /var/lib/nova/instances/59628432-68dc-48d9-8986-8511c376a62d/disk.config because it was imported into RBD.#033[00m
Nov 29 03:06:03 np0005539552 nova_compute[233724]: 2025-11-29 08:06:03.801 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:03 np0005539552 systemd-machined[196379]: New machine qemu-21-instance-00000037.
Nov 29 03:06:03 np0005539552 systemd[1]: Started Virtual Machine qemu-21-instance-00000037.
Nov 29 03:06:04 np0005539552 nova_compute[233724]: 2025-11-29 08:06:04.100 233728 DEBUG nova.network.neutron [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Successfully updated port: 82fc5bc2-cc1e-4b00-8bf7-2c63268650cf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:06:04 np0005539552 nova_compute[233724]: 2025-11-29 08:06:04.114 233728 DEBUG oslo_concurrency.lockutils [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "refresh_cache-68ba4333-b460-437d-97a0-6c7feff2c4bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:06:04 np0005539552 nova_compute[233724]: 2025-11-29 08:06:04.114 233728 DEBUG oslo_concurrency.lockutils [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquired lock "refresh_cache-68ba4333-b460-437d-97a0-6c7feff2c4bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:06:04 np0005539552 nova_compute[233724]: 2025-11-29 08:06:04.114 233728 DEBUG nova.network.neutron [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:06:04 np0005539552 nova_compute[233724]: 2025-11-29 08:06:04.181 233728 DEBUG nova.compute.manager [req-d9b14b3e-0836-44e1-9a5c-9459bf0ff8fd req-b9ca51cb-38b6-44bb-9128-0352e794b0cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Received event network-changed-82fc5bc2-cc1e-4b00-8bf7-2c63268650cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:04 np0005539552 nova_compute[233724]: 2025-11-29 08:06:04.182 233728 DEBUG nova.compute.manager [req-d9b14b3e-0836-44e1-9a5c-9459bf0ff8fd req-b9ca51cb-38b6-44bb-9128-0352e794b0cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Refreshing instance network info cache due to event network-changed-82fc5bc2-cc1e-4b00-8bf7-2c63268650cf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:06:04 np0005539552 nova_compute[233724]: 2025-11-29 08:06:04.182 233728 DEBUG oslo_concurrency.lockutils [req-d9b14b3e-0836-44e1-9a5c-9459bf0ff8fd req-b9ca51cb-38b6-44bb-9128-0352e794b0cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-68ba4333-b460-437d-97a0-6c7feff2c4bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:06:04 np0005539552 nova_compute[233724]: 2025-11-29 08:06:04.225 233728 DEBUG nova.compute.manager [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:06:04 np0005539552 nova_compute[233724]: 2025-11-29 08:06:04.225 233728 DEBUG nova.virt.libvirt.driver [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:06:04 np0005539552 nova_compute[233724]: 2025-11-29 08:06:04.226 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403564.2248547, 59628432-68dc-48d9-8986-8511c376a62d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:06:04 np0005539552 nova_compute[233724]: 2025-11-29 08:06:04.226 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 59628432-68dc-48d9-8986-8511c376a62d] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:06:04 np0005539552 nova_compute[233724]: 2025-11-29 08:06:04.232 233728 INFO nova.virt.libvirt.driver [-] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Instance spawned successfully.#033[00m
Nov 29 03:06:04 np0005539552 nova_compute[233724]: 2025-11-29 08:06:04.249 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:06:04 np0005539552 nova_compute[233724]: 2025-11-29 08:06:04.253 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:06:04 np0005539552 nova_compute[233724]: 2025-11-29 08:06:04.273 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 59628432-68dc-48d9-8986-8511c376a62d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:06:04 np0005539552 nova_compute[233724]: 2025-11-29 08:06:04.274 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403564.2258513, 59628432-68dc-48d9-8986-8511c376a62d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:06:04 np0005539552 nova_compute[233724]: 2025-11-29 08:06:04.274 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 59628432-68dc-48d9-8986-8511c376a62d] VM Started (Lifecycle Event)#033[00m
Nov 29 03:06:04 np0005539552 nova_compute[233724]: 2025-11-29 08:06:04.293 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:06:04 np0005539552 nova_compute[233724]: 2025-11-29 08:06:04.296 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:06:04 np0005539552 nova_compute[233724]: 2025-11-29 08:06:04.308 233728 DEBUG nova.network.neutron [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:06:04 np0005539552 nova_compute[233724]: 2025-11-29 08:06:04.323 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 59628432-68dc-48d9-8986-8511c376a62d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:06:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:04.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:04 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e239 e239: 3 total, 3 up, 3 in
Nov 29 03:06:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:04.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:04 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:04.996 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:06:04 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:04.998 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:06:04 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:04.998 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:05 np0005539552 nova_compute[233724]: 2025-11-29 08:06:04.995 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:05 np0005539552 nova_compute[233724]: 2025-11-29 08:06:05.013 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:05 np0005539552 nova_compute[233724]: 2025-11-29 08:06:05.095 233728 DEBUG nova.network.neutron [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Updating instance_info_cache with network_info: [{"id": "82fc5bc2-cc1e-4b00-8bf7-2c63268650cf", "address": "fa:16:3e:12:32:f4", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82fc5bc2-cc", "ovs_interfaceid": "82fc5bc2-cc1e-4b00-8bf7-2c63268650cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:06:05 np0005539552 nova_compute[233724]: 2025-11-29 08:06:05.122 233728 DEBUG oslo_concurrency.lockutils [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Releasing lock "refresh_cache-68ba4333-b460-437d-97a0-6c7feff2c4bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:06:05 np0005539552 nova_compute[233724]: 2025-11-29 08:06:05.123 233728 DEBUG nova.compute.manager [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Instance network_info: |[{"id": "82fc5bc2-cc1e-4b00-8bf7-2c63268650cf", "address": "fa:16:3e:12:32:f4", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82fc5bc2-cc", "ovs_interfaceid": "82fc5bc2-cc1e-4b00-8bf7-2c63268650cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:06:05 np0005539552 nova_compute[233724]: 2025-11-29 08:06:05.124 233728 DEBUG oslo_concurrency.lockutils [req-d9b14b3e-0836-44e1-9a5c-9459bf0ff8fd req-b9ca51cb-38b6-44bb-9128-0352e794b0cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-68ba4333-b460-437d-97a0-6c7feff2c4bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:06:05 np0005539552 nova_compute[233724]: 2025-11-29 08:06:05.125 233728 DEBUG nova.network.neutron [req-d9b14b3e-0836-44e1-9a5c-9459bf0ff8fd req-b9ca51cb-38b6-44bb-9128-0352e794b0cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Refreshing network info cache for port 82fc5bc2-cc1e-4b00-8bf7-2c63268650cf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:06:05 np0005539552 nova_compute[233724]: 2025-11-29 08:06:05.128 233728 DEBUG nova.virt.libvirt.driver [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Start _get_guest_xml network_info=[{"id": "82fc5bc2-cc1e-4b00-8bf7-2c63268650cf", "address": "fa:16:3e:12:32:f4", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82fc5bc2-cc", "ovs_interfaceid": "82fc5bc2-cc1e-4b00-8bf7-2c63268650cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-29T08:05:47Z,direct_url=<?>,disk_format='raw',id=37d12ee4-89a4-499b-beea-6b6d4e46474b,min_disk=1,min_ram=0,name='tempest-test-snap-1148249820',owner='638fd52fccf14f16b56d0860553063f3',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-29T08:05:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '37d12ee4-89a4-499b-beea-6b6d4e46474b'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:06:05 np0005539552 nova_compute[233724]: 2025-11-29 08:06:05.140 233728 WARNING nova.virt.libvirt.driver [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:06:05 np0005539552 nova_compute[233724]: 2025-11-29 08:06:05.149 233728 DEBUG nova.virt.libvirt.host [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:06:05 np0005539552 nova_compute[233724]: 2025-11-29 08:06:05.151 233728 DEBUG nova.virt.libvirt.host [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:06:05 np0005539552 nova_compute[233724]: 2025-11-29 08:06:05.154 233728 DEBUG nova.virt.libvirt.host [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:06:05 np0005539552 nova_compute[233724]: 2025-11-29 08:06:05.155 233728 DEBUG nova.virt.libvirt.host [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:06:05 np0005539552 nova_compute[233724]: 2025-11-29 08:06:05.157 233728 DEBUG nova.virt.libvirt.driver [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:06:05 np0005539552 nova_compute[233724]: 2025-11-29 08:06:05.157 233728 DEBUG nova.virt.hardware [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-29T08:05:47Z,direct_url=<?>,disk_format='raw',id=37d12ee4-89a4-499b-beea-6b6d4e46474b,min_disk=1,min_ram=0,name='tempest-test-snap-1148249820',owner='638fd52fccf14f16b56d0860553063f3',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-29T08:05:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:06:05 np0005539552 nova_compute[233724]: 2025-11-29 08:06:05.158 233728 DEBUG nova.virt.hardware [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:06:05 np0005539552 nova_compute[233724]: 2025-11-29 08:06:05.158 233728 DEBUG nova.virt.hardware [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:06:05 np0005539552 nova_compute[233724]: 2025-11-29 08:06:05.158 233728 DEBUG nova.virt.hardware [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:06:05 np0005539552 nova_compute[233724]: 2025-11-29 08:06:05.159 233728 DEBUG nova.virt.hardware [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:06:05 np0005539552 nova_compute[233724]: 2025-11-29 08:06:05.159 233728 DEBUG nova.virt.hardware [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:06:05 np0005539552 nova_compute[233724]: 2025-11-29 08:06:05.161 233728 DEBUG nova.virt.hardware [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:06:05 np0005539552 nova_compute[233724]: 2025-11-29 08:06:05.161 233728 DEBUG nova.virt.hardware [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:06:05 np0005539552 nova_compute[233724]: 2025-11-29 08:06:05.162 233728 DEBUG nova.virt.hardware [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:06:05 np0005539552 nova_compute[233724]: 2025-11-29 08:06:05.162 233728 DEBUG nova.virt.hardware [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:06:05 np0005539552 nova_compute[233724]: 2025-11-29 08:06:05.163 233728 DEBUG nova.virt.hardware [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:06:05 np0005539552 nova_compute[233724]: 2025-11-29 08:06:05.170 233728 DEBUG oslo_concurrency.processutils [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:05 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:06:05 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2451287061' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:06:05 np0005539552 nova_compute[233724]: 2025-11-29 08:06:05.629 233728 DEBUG oslo_concurrency.processutils [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:05 np0005539552 nova_compute[233724]: 2025-11-29 08:06:05.654 233728 DEBUG nova.storage.rbd_utils [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] rbd image 68ba4333-b460-437d-97a0-6c7feff2c4bc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:05 np0005539552 nova_compute[233724]: 2025-11-29 08:06:05.659 233728 DEBUG oslo_concurrency.processutils [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:06 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:06:06 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3012604986' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:06:06 np0005539552 nova_compute[233724]: 2025-11-29 08:06:06.101 233728 DEBUG oslo_concurrency.processutils [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:06 np0005539552 nova_compute[233724]: 2025-11-29 08:06:06.103 233728 DEBUG nova.virt.libvirt.vif [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:06:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1739497414',display_name='tempest-ImagesTestJSON-server-1739497414',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-imagestestjson-server-1739497414',id=61,image_ref='37d12ee4-89a4-499b-beea-6b6d4e46474b',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='638fd52fccf14f16b56d0860553063f3',ramdisk_id='',reservation_id='r-n28dk4or',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='7c7286ee-7432-4e68-a574-bbe535d1f203',image_min_disk='1',image_min_ram='0',image_owner_id='638fd52fccf14f16b56d0860553063f3',image_owner_project_name='tempest-ImagesTestJSON-1682881466',image_owner_user_name='tempest-ImagesTestJSON-1682881466-project-member',image_user_id='fddc5f5801764ee19d5253e2cab34df3',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1682881466',owner_user_name='tempest-ImagesTestJSON-1682881466-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:02Z,user_data=None,user_id='fddc5f5801764ee19d5253e2cab34df3',uuid=68ba4333-b460-437d-97a0-6c7feff2c4bc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "82fc5bc2-cc1e-4b00-8bf7-2c63268650cf", "address": "fa:16:3e:12:32:f4", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82fc5bc2-cc", "ovs_interfaceid": "82fc5bc2-cc1e-4b00-8bf7-2c63268650cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:06:06 np0005539552 nova_compute[233724]: 2025-11-29 08:06:06.104 233728 DEBUG nova.network.os_vif_util [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Converting VIF {"id": "82fc5bc2-cc1e-4b00-8bf7-2c63268650cf", "address": "fa:16:3e:12:32:f4", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82fc5bc2-cc", "ovs_interfaceid": "82fc5bc2-cc1e-4b00-8bf7-2c63268650cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:06:06 np0005539552 nova_compute[233724]: 2025-11-29 08:06:06.104 233728 DEBUG nova.network.os_vif_util [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:32:f4,bridge_name='br-int',has_traffic_filtering=True,id=82fc5bc2-cc1e-4b00-8bf7-2c63268650cf,network=Network(f01d29c1-afcb-4909-9abf-f7d31e4549d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82fc5bc2-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:06:06 np0005539552 nova_compute[233724]: 2025-11-29 08:06:06.107 233728 DEBUG nova.objects.instance [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 68ba4333-b460-437d-97a0-6c7feff2c4bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:06:06 np0005539552 nova_compute[233724]: 2025-11-29 08:06:06.123 233728 DEBUG nova.virt.libvirt.driver [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:06:06 np0005539552 nova_compute[233724]:  <uuid>68ba4333-b460-437d-97a0-6c7feff2c4bc</uuid>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:  <name>instance-0000003d</name>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:06:06 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:      <nova:name>tempest-ImagesTestJSON-server-1739497414</nova:name>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:06:05</nova:creationTime>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:06:06 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:        <nova:user uuid="fddc5f5801764ee19d5253e2cab34df3">tempest-ImagesTestJSON-1682881466-project-member</nova:user>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:        <nova:project uuid="638fd52fccf14f16b56d0860553063f3">tempest-ImagesTestJSON-1682881466</nova:project>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="37d12ee4-89a4-499b-beea-6b6d4e46474b"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:        <nova:port uuid="82fc5bc2-cc1e-4b00-8bf7-2c63268650cf">
Nov 29 03:06:06 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:      <entry name="serial">68ba4333-b460-437d-97a0-6c7feff2c4bc</entry>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:      <entry name="uuid">68ba4333-b460-437d-97a0-6c7feff2c4bc</entry>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:06:06 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/68ba4333-b460-437d-97a0-6c7feff2c4bc_disk">
Nov 29 03:06:06 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:06:06 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:06:06 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/68ba4333-b460-437d-97a0-6c7feff2c4bc_disk.config">
Nov 29 03:06:06 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:06:06 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:06:06 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:12:32:f4"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:      <target dev="tap82fc5bc2-cc"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:06:06 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/68ba4333-b460-437d-97a0-6c7feff2c4bc/console.log" append="off"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <input type="keyboard" bus="usb"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:06:06 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:06:06 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:06:06 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:06:06 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:06:06 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:06:06 np0005539552 nova_compute[233724]: 2025-11-29 08:06:06.124 233728 DEBUG nova.compute.manager [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Preparing to wait for external event network-vif-plugged-82fc5bc2-cc1e-4b00-8bf7-2c63268650cf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:06:06 np0005539552 nova_compute[233724]: 2025-11-29 08:06:06.125 233728 DEBUG oslo_concurrency.lockutils [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "68ba4333-b460-437d-97a0-6c7feff2c4bc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:06 np0005539552 nova_compute[233724]: 2025-11-29 08:06:06.125 233728 DEBUG oslo_concurrency.lockutils [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "68ba4333-b460-437d-97a0-6c7feff2c4bc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:06 np0005539552 nova_compute[233724]: 2025-11-29 08:06:06.125 233728 DEBUG oslo_concurrency.lockutils [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "68ba4333-b460-437d-97a0-6c7feff2c4bc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:06 np0005539552 nova_compute[233724]: 2025-11-29 08:06:06.126 233728 DEBUG nova.virt.libvirt.vif [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:06:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1739497414',display_name='tempest-ImagesTestJSON-server-1739497414',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-imagestestjson-server-1739497414',id=61,image_ref='37d12ee4-89a4-499b-beea-6b6d4e46474b',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='638fd52fccf14f16b56d0860553063f3',ramdisk_id='',reservation_id='r-n28dk4or',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='7c7286ee-7432-4e68-a574-bbe535d1f203',image_min_disk='1',image_min_ram='0',image_owner_id='638fd52fccf14f16b56d0860553063f3',image_owner_project_name='tempest-ImagesTestJSON-1682881466',image_owner_user_name='tempest-ImagesTestJSON-1682881466-project-member',image_user_id='fddc5f5801764ee19d5253e2cab34df3',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1682881466',owner_user_name='tempest-ImagesTestJSON-1682881466-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:02Z,user_data=None,user_id='fddc5f5801764ee19d5253e2cab34df3',uuid=68ba4333-b460-437d-97a0-6c7feff2c4bc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "82fc5bc2-cc1e-4b00-8bf7-2c63268650cf", "address": "fa:16:3e:12:32:f4", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82fc5bc2-cc", "ovs_interfaceid": "82fc5bc2-cc1e-4b00-8bf7-2c63268650cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:06:06 np0005539552 nova_compute[233724]: 2025-11-29 08:06:06.126 233728 DEBUG nova.network.os_vif_util [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Converting VIF {"id": "82fc5bc2-cc1e-4b00-8bf7-2c63268650cf", "address": "fa:16:3e:12:32:f4", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82fc5bc2-cc", "ovs_interfaceid": "82fc5bc2-cc1e-4b00-8bf7-2c63268650cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:06:06 np0005539552 nova_compute[233724]: 2025-11-29 08:06:06.127 233728 DEBUG nova.network.os_vif_util [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:32:f4,bridge_name='br-int',has_traffic_filtering=True,id=82fc5bc2-cc1e-4b00-8bf7-2c63268650cf,network=Network(f01d29c1-afcb-4909-9abf-f7d31e4549d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82fc5bc2-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:06:06 np0005539552 nova_compute[233724]: 2025-11-29 08:06:06.128 233728 DEBUG os_vif [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:32:f4,bridge_name='br-int',has_traffic_filtering=True,id=82fc5bc2-cc1e-4b00-8bf7-2c63268650cf,network=Network(f01d29c1-afcb-4909-9abf-f7d31e4549d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82fc5bc2-cc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:06:06 np0005539552 nova_compute[233724]: 2025-11-29 08:06:06.129 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:06 np0005539552 nova_compute[233724]: 2025-11-29 08:06:06.129 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:06 np0005539552 nova_compute[233724]: 2025-11-29 08:06:06.129 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:06:06 np0005539552 nova_compute[233724]: 2025-11-29 08:06:06.132 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:06 np0005539552 nova_compute[233724]: 2025-11-29 08:06:06.132 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap82fc5bc2-cc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:06 np0005539552 nova_compute[233724]: 2025-11-29 08:06:06.132 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap82fc5bc2-cc, col_values=(('external_ids', {'iface-id': '82fc5bc2-cc1e-4b00-8bf7-2c63268650cf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:12:32:f4', 'vm-uuid': '68ba4333-b460-437d-97a0-6c7feff2c4bc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:06 np0005539552 nova_compute[233724]: 2025-11-29 08:06:06.134 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:06 np0005539552 NetworkManager[48926]: <info>  [1764403566.1350] manager: (tap82fc5bc2-cc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/97)
Nov 29 03:06:06 np0005539552 nova_compute[233724]: 2025-11-29 08:06:06.138 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:06:06 np0005539552 nova_compute[233724]: 2025-11-29 08:06:06.140 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:06 np0005539552 nova_compute[233724]: 2025-11-29 08:06:06.140 233728 INFO os_vif [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:32:f4,bridge_name='br-int',has_traffic_filtering=True,id=82fc5bc2-cc1e-4b00-8bf7-2c63268650cf,network=Network(f01d29c1-afcb-4909-9abf-f7d31e4549d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82fc5bc2-cc')#033[00m
Nov 29 03:06:06 np0005539552 nova_compute[233724]: 2025-11-29 08:06:06.202 233728 DEBUG nova.virt.libvirt.driver [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:06:06 np0005539552 nova_compute[233724]: 2025-11-29 08:06:06.203 233728 DEBUG nova.virt.libvirt.driver [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:06:06 np0005539552 nova_compute[233724]: 2025-11-29 08:06:06.203 233728 DEBUG nova.virt.libvirt.driver [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] No VIF found with MAC fa:16:3e:12:32:f4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:06:06 np0005539552 nova_compute[233724]: 2025-11-29 08:06:06.204 233728 INFO nova.virt.libvirt.driver [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Using config drive#033[00m
Nov 29 03:06:06 np0005539552 nova_compute[233724]: 2025-11-29 08:06:06.228 233728 DEBUG nova.storage.rbd_utils [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] rbd image 68ba4333-b460-437d-97a0-6c7feff2c4bc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:06 np0005539552 nova_compute[233724]: 2025-11-29 08:06:06.529 233728 DEBUG nova.compute.manager [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:06:06 np0005539552 nova_compute[233724]: 2025-11-29 08:06:06.602 233728 DEBUG oslo_concurrency.lockutils [None req-597698b0-af5a-47e5-8dbb-1485bb5a07d3 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Lock "59628432-68dc-48d9-8986-8511c376a62d" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 8.831s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:06.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:06 np0005539552 nova_compute[233724]: 2025-11-29 08:06:06.609 233728 INFO nova.virt.libvirt.driver [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Creating config drive at /var/lib/nova/instances/68ba4333-b460-437d-97a0-6c7feff2c4bc/disk.config#033[00m
Nov 29 03:06:06 np0005539552 nova_compute[233724]: 2025-11-29 08:06:06.614 233728 DEBUG oslo_concurrency.processutils [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/68ba4333-b460-437d-97a0-6c7feff2c4bc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdtpzmd0i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:06 np0005539552 nova_compute[233724]: 2025-11-29 08:06:06.672 233728 DEBUG nova.network.neutron [req-d9b14b3e-0836-44e1-9a5c-9459bf0ff8fd req-b9ca51cb-38b6-44bb-9128-0352e794b0cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Updated VIF entry in instance network info cache for port 82fc5bc2-cc1e-4b00-8bf7-2c63268650cf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:06:06 np0005539552 nova_compute[233724]: 2025-11-29 08:06:06.673 233728 DEBUG nova.network.neutron [req-d9b14b3e-0836-44e1-9a5c-9459bf0ff8fd req-b9ca51cb-38b6-44bb-9128-0352e794b0cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Updating instance_info_cache with network_info: [{"id": "82fc5bc2-cc1e-4b00-8bf7-2c63268650cf", "address": "fa:16:3e:12:32:f4", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82fc5bc2-cc", "ovs_interfaceid": "82fc5bc2-cc1e-4b00-8bf7-2c63268650cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:06:06 np0005539552 nova_compute[233724]: 2025-11-29 08:06:06.696 233728 DEBUG oslo_concurrency.lockutils [req-d9b14b3e-0836-44e1-9a5c-9459bf0ff8fd req-b9ca51cb-38b6-44bb-9128-0352e794b0cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-68ba4333-b460-437d-97a0-6c7feff2c4bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:06:06 np0005539552 nova_compute[233724]: 2025-11-29 08:06:06.745 233728 DEBUG oslo_concurrency.processutils [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/68ba4333-b460-437d-97a0-6c7feff2c4bc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdtpzmd0i" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:06 np0005539552 nova_compute[233724]: 2025-11-29 08:06:06.781 233728 DEBUG nova.storage.rbd_utils [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] rbd image 68ba4333-b460-437d-97a0-6c7feff2c4bc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:06 np0005539552 nova_compute[233724]: 2025-11-29 08:06:06.787 233728 DEBUG oslo_concurrency.processutils [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/68ba4333-b460-437d-97a0-6c7feff2c4bc/disk.config 68ba4333-b460-437d-97a0-6c7feff2c4bc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:06:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:06.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:06:07 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:07 np0005539552 nova_compute[233724]: 2025-11-29 08:06:07.785 233728 DEBUG oslo_concurrency.lockutils [None req-cc731d55-2fae-43e1-af90-b9ac1b793d08 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Acquiring lock "59628432-68dc-48d9-8986-8511c376a62d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:07 np0005539552 nova_compute[233724]: 2025-11-29 08:06:07.787 233728 DEBUG oslo_concurrency.lockutils [None req-cc731d55-2fae-43e1-af90-b9ac1b793d08 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Lock "59628432-68dc-48d9-8986-8511c376a62d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:07 np0005539552 nova_compute[233724]: 2025-11-29 08:06:07.787 233728 DEBUG oslo_concurrency.lockutils [None req-cc731d55-2fae-43e1-af90-b9ac1b793d08 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Acquiring lock "59628432-68dc-48d9-8986-8511c376a62d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:07 np0005539552 nova_compute[233724]: 2025-11-29 08:06:07.788 233728 DEBUG oslo_concurrency.lockutils [None req-cc731d55-2fae-43e1-af90-b9ac1b793d08 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Lock "59628432-68dc-48d9-8986-8511c376a62d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:07 np0005539552 nova_compute[233724]: 2025-11-29 08:06:07.788 233728 DEBUG oslo_concurrency.lockutils [None req-cc731d55-2fae-43e1-af90-b9ac1b793d08 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Lock "59628432-68dc-48d9-8986-8511c376a62d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:07 np0005539552 nova_compute[233724]: 2025-11-29 08:06:07.790 233728 INFO nova.compute.manager [None req-cc731d55-2fae-43e1-af90-b9ac1b793d08 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Terminating instance#033[00m
Nov 29 03:06:07 np0005539552 nova_compute[233724]: 2025-11-29 08:06:07.791 233728 DEBUG oslo_concurrency.lockutils [None req-cc731d55-2fae-43e1-af90-b9ac1b793d08 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Acquiring lock "refresh_cache-59628432-68dc-48d9-8986-8511c376a62d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:06:07 np0005539552 nova_compute[233724]: 2025-11-29 08:06:07.792 233728 DEBUG oslo_concurrency.lockutils [None req-cc731d55-2fae-43e1-af90-b9ac1b793d08 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Acquired lock "refresh_cache-59628432-68dc-48d9-8986-8511c376a62d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:06:07 np0005539552 nova_compute[233724]: 2025-11-29 08:06:07.792 233728 DEBUG nova.network.neutron [None req-cc731d55-2fae-43e1-af90-b9ac1b793d08 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:06:07 np0005539552 nova_compute[233724]: 2025-11-29 08:06:07.998 233728 DEBUG nova.network.neutron [None req-cc731d55-2fae-43e1-af90-b9ac1b793d08 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:06:08 np0005539552 nova_compute[233724]: 2025-11-29 08:06:08.284 233728 DEBUG oslo_concurrency.processutils [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/68ba4333-b460-437d-97a0-6c7feff2c4bc/disk.config 68ba4333-b460-437d-97a0-6c7feff2c4bc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:08 np0005539552 nova_compute[233724]: 2025-11-29 08:06:08.285 233728 INFO nova.virt.libvirt.driver [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Deleting local config drive /var/lib/nova/instances/68ba4333-b460-437d-97a0-6c7feff2c4bc/disk.config because it was imported into RBD.#033[00m
Nov 29 03:06:08 np0005539552 nova_compute[233724]: 2025-11-29 08:06:08.289 233728 DEBUG nova.network.neutron [None req-cc731d55-2fae-43e1-af90-b9ac1b793d08 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:06:08 np0005539552 nova_compute[233724]: 2025-11-29 08:06:08.312 233728 DEBUG oslo_concurrency.lockutils [None req-cc731d55-2fae-43e1-af90-b9ac1b793d08 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Releasing lock "refresh_cache-59628432-68dc-48d9-8986-8511c376a62d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:06:08 np0005539552 nova_compute[233724]: 2025-11-29 08:06:08.313 233728 DEBUG nova.compute.manager [None req-cc731d55-2fae-43e1-af90-b9ac1b793d08 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:06:08 np0005539552 kernel: tap82fc5bc2-cc: entered promiscuous mode
Nov 29 03:06:08 np0005539552 NetworkManager[48926]: <info>  [1764403568.3279] manager: (tap82fc5bc2-cc): new Tun device (/org/freedesktop/NetworkManager/Devices/98)
Nov 29 03:06:08 np0005539552 nova_compute[233724]: 2025-11-29 08:06:08.327 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:08 np0005539552 ovn_controller[133798]: 2025-11-29T08:06:08Z|00182|binding|INFO|Claiming lport 82fc5bc2-cc1e-4b00-8bf7-2c63268650cf for this chassis.
Nov 29 03:06:08 np0005539552 ovn_controller[133798]: 2025-11-29T08:06:08Z|00183|binding|INFO|82fc5bc2-cc1e-4b00-8bf7-2c63268650cf: Claiming fa:16:3e:12:32:f4 10.100.0.4
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:08.335 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:12:32:f4 10.100.0.4'], port_security=['fa:16:3e:12:32:f4 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '68ba4333-b460-437d-97a0-6c7feff2c4bc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f01d29c1-afcb-4909-9abf-f7d31e4549d8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '638fd52fccf14f16b56d0860553063f3', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a57b53e5-9055-46ae-8ab4-d4a8a62173cb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d89b288b-bbc6-47fa-ad12-8aab94ffc78f, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=82fc5bc2-cc1e-4b00-8bf7-2c63268650cf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:08.336 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 82fc5bc2-cc1e-4b00-8bf7-2c63268650cf in datapath f01d29c1-afcb-4909-9abf-f7d31e4549d8 bound to our chassis#033[00m
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:08.338 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f01d29c1-afcb-4909-9abf-f7d31e4549d8#033[00m
Nov 29 03:06:08 np0005539552 ovn_controller[133798]: 2025-11-29T08:06:08Z|00184|binding|INFO|Setting lport 82fc5bc2-cc1e-4b00-8bf7-2c63268650cf ovn-installed in OVS
Nov 29 03:06:08 np0005539552 ovn_controller[133798]: 2025-11-29T08:06:08Z|00185|binding|INFO|Setting lport 82fc5bc2-cc1e-4b00-8bf7-2c63268650cf up in Southbound
Nov 29 03:06:08 np0005539552 nova_compute[233724]: 2025-11-29 08:06:08.348 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:08.349 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b37a2d84-a11a-4bd6-af0b-40c6e3ba12b7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:08.350 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf01d29c1-a1 in ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:06:08 np0005539552 nova_compute[233724]: 2025-11-29 08:06:08.351 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:08.352 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf01d29c1-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:08.352 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[75392beb-6f5a-4974-bc8a-855f35fdf210]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:08.353 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[15d3064b-08f1-4c4b-a65e-47980a80c84e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:08 np0005539552 systemd-udevd[258651]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:06:08 np0005539552 systemd-machined[196379]: New machine qemu-22-instance-0000003d.
Nov 29 03:06:08 np0005539552 systemd[1]: Started Virtual Machine qemu-22-instance-0000003d.
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:08.367 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[28204bc2-7eea-4d01-9da5-d5fc10e43b00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:08 np0005539552 NetworkManager[48926]: <info>  [1764403568.3703] device (tap82fc5bc2-cc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:06:08 np0005539552 NetworkManager[48926]: <info>  [1764403568.3711] device (tap82fc5bc2-cc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:08.390 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[8700f127-2ea6-4eed-92f8-28a2dfa73206]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:08 np0005539552 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000037.scope: Deactivated successfully.
Nov 29 03:06:08 np0005539552 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000037.scope: Consumed 4.606s CPU time.
Nov 29 03:06:08 np0005539552 systemd-machined[196379]: Machine qemu-21-instance-00000037 terminated.
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:08.416 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[da951304-3ba0-414b-ba51-c0e11bdcb117]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:08 np0005539552 NetworkManager[48926]: <info>  [1764403568.4226] manager: (tapf01d29c1-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/99)
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:08.422 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[db894de8-38e9-4671-b4f5-0dbf3a09f0d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:08.451 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[dbddd64e-584c-400f-a961-405e9c762c76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:08.454 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[6b85daab-25d7-4446-b3d3-5e8e235e0b6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:08 np0005539552 NetworkManager[48926]: <info>  [1764403568.4745] device (tapf01d29c1-a0): carrier: link connected
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:08.479 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[05a386fa-badb-4dce-9a57-61fa4b3ff688]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:08.495 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[88e4ebbd-16a4-4fb1-987d-c5e4019d157d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf01d29c1-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:77:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661416, 'reachable_time': 26794, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258683, 'error': None, 'target': 'ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:08.510 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[39f7727c-82bb-479f-8b58-bab9b223bdea]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe96:77b2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 661416, 'tstamp': 661416}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258684, 'error': None, 'target': 'ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:08.529 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[01661a3b-278d-4361-b1e5-c2af0c294726]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf01d29c1-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:77:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661416, 'reachable_time': 26794, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 258685, 'error': None, 'target': 'ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:08 np0005539552 nova_compute[233724]: 2025-11-29 08:06:08.534 233728 INFO nova.virt.libvirt.driver [-] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Instance destroyed successfully.#033[00m
Nov 29 03:06:08 np0005539552 nova_compute[233724]: 2025-11-29 08:06:08.534 233728 DEBUG nova.objects.instance [None req-cc731d55-2fae-43e1-af90-b9ac1b793d08 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Lazy-loading 'resources' on Instance uuid 59628432-68dc-48d9-8986-8511c376a62d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:08.561 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[08fbccd6-3edb-448c-8252-e6430292ed26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:08.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:08 np0005539552 nova_compute[233724]: 2025-11-29 08:06:08.617 233728 DEBUG nova.compute.manager [req-d8680d5a-824d-4cae-91ab-a1f6adf145d7 req-899ff651-b2fc-483a-8f05-8daaeb6e793f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Received event network-vif-plugged-82fc5bc2-cc1e-4b00-8bf7-2c63268650cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:08 np0005539552 nova_compute[233724]: 2025-11-29 08:06:08.617 233728 DEBUG oslo_concurrency.lockutils [req-d8680d5a-824d-4cae-91ab-a1f6adf145d7 req-899ff651-b2fc-483a-8f05-8daaeb6e793f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "68ba4333-b460-437d-97a0-6c7feff2c4bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:08 np0005539552 nova_compute[233724]: 2025-11-29 08:06:08.618 233728 DEBUG oslo_concurrency.lockutils [req-d8680d5a-824d-4cae-91ab-a1f6adf145d7 req-899ff651-b2fc-483a-8f05-8daaeb6e793f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "68ba4333-b460-437d-97a0-6c7feff2c4bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:08 np0005539552 nova_compute[233724]: 2025-11-29 08:06:08.618 233728 DEBUG oslo_concurrency.lockutils [req-d8680d5a-824d-4cae-91ab-a1f6adf145d7 req-899ff651-b2fc-483a-8f05-8daaeb6e793f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "68ba4333-b460-437d-97a0-6c7feff2c4bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:08 np0005539552 nova_compute[233724]: 2025-11-29 08:06:08.618 233728 DEBUG nova.compute.manager [req-d8680d5a-824d-4cae-91ab-a1f6adf145d7 req-899ff651-b2fc-483a-8f05-8daaeb6e793f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Processing event network-vif-plugged-82fc5bc2-cc1e-4b00-8bf7-2c63268650cf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:08.626 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b6e32e5a-956f-42f2-a032-476ed1aaad00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:08.628 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf01d29c1-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:08.629 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:08.629 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf01d29c1-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:08 np0005539552 nova_compute[233724]: 2025-11-29 08:06:08.631 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:08 np0005539552 NetworkManager[48926]: <info>  [1764403568.6321] manager: (tapf01d29c1-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/100)
Nov 29 03:06:08 np0005539552 kernel: tapf01d29c1-a0: entered promiscuous mode
Nov 29 03:06:08 np0005539552 nova_compute[233724]: 2025-11-29 08:06:08.633 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:08.636 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf01d29c1-a0, col_values=(('external_ids', {'iface-id': '2247adf2-4048-41de-ba3c-ac69d728838f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:08 np0005539552 nova_compute[233724]: 2025-11-29 08:06:08.637 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:08 np0005539552 ovn_controller[133798]: 2025-11-29T08:06:08Z|00186|binding|INFO|Releasing lport 2247adf2-4048-41de-ba3c-ac69d728838f from this chassis (sb_readonly=0)
Nov 29 03:06:08 np0005539552 nova_compute[233724]: 2025-11-29 08:06:08.655 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:08.657 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f01d29c1-afcb-4909-9abf-f7d31e4549d8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f01d29c1-afcb-4909-9abf-f7d31e4549d8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:08.659 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f457ec6e-311e-4c01-8c2d-89922ce5a9b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:08.659 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-f01d29c1-afcb-4909-9abf-f7d31e4549d8
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/f01d29c1-afcb-4909-9abf-f7d31e4549d8.pid.haproxy
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID f01d29c1-afcb-4909-9abf-f7d31e4549d8
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:06:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:08.662 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8', 'env', 'PROCESS_TAG=haproxy-f01d29c1-afcb-4909-9abf-f7d31e4549d8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f01d29c1-afcb-4909-9abf-f7d31e4549d8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:06:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:08.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:09 np0005539552 podman[258739]: 2025-11-29 08:06:09.041977117 +0000 UTC m=+0.054860672 container create da088acf3764e7d358487bc8423e687178d2e9e82169c9d82aef27d006010320 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 03:06:09 np0005539552 systemd[1]: Started libpod-conmon-da088acf3764e7d358487bc8423e687178d2e9e82169c9d82aef27d006010320.scope.
Nov 29 03:06:09 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:06:09 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c4086295237298958b0844f2bbc4085fbde93067453ba28da482576f70d7455/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:06:09 np0005539552 podman[258739]: 2025-11-29 08:06:09.109229712 +0000 UTC m=+0.122113277 container init da088acf3764e7d358487bc8423e687178d2e9e82169c9d82aef27d006010320 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:06:09 np0005539552 podman[258739]: 2025-11-29 08:06:09.015333958 +0000 UTC m=+0.028217533 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:06:09 np0005539552 podman[258739]: 2025-11-29 08:06:09.114888745 +0000 UTC m=+0.127772290 container start da088acf3764e7d358487bc8423e687178d2e9e82169c9d82aef27d006010320 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 03:06:09 np0005539552 neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8[258755]: [NOTICE]   (258759) : New worker (258761) forked
Nov 29 03:06:09 np0005539552 neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8[258755]: [NOTICE]   (258759) : Loading success.
Nov 29 03:06:09 np0005539552 nova_compute[233724]: 2025-11-29 08:06:09.616 233728 DEBUG nova.compute.manager [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:06:09 np0005539552 nova_compute[233724]: 2025-11-29 08:06:09.617 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403569.6157992, 68ba4333-b460-437d-97a0-6c7feff2c4bc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:06:09 np0005539552 nova_compute[233724]: 2025-11-29 08:06:09.617 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] VM Started (Lifecycle Event)#033[00m
Nov 29 03:06:09 np0005539552 nova_compute[233724]: 2025-11-29 08:06:09.620 233728 DEBUG nova.virt.libvirt.driver [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:06:09 np0005539552 nova_compute[233724]: 2025-11-29 08:06:09.624 233728 INFO nova.virt.libvirt.driver [-] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Instance spawned successfully.#033[00m
Nov 29 03:06:09 np0005539552 nova_compute[233724]: 2025-11-29 08:06:09.624 233728 INFO nova.compute.manager [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Took 7.44 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:06:09 np0005539552 nova_compute[233724]: 2025-11-29 08:06:09.625 233728 DEBUG nova.compute.manager [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:06:09 np0005539552 nova_compute[233724]: 2025-11-29 08:06:09.635 233728 INFO nova.virt.libvirt.driver [None req-cc731d55-2fae-43e1-af90-b9ac1b793d08 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Deleting instance files /var/lib/nova/instances/59628432-68dc-48d9-8986-8511c376a62d_del#033[00m
Nov 29 03:06:09 np0005539552 nova_compute[233724]: 2025-11-29 08:06:09.636 233728 INFO nova.virt.libvirt.driver [None req-cc731d55-2fae-43e1-af90-b9ac1b793d08 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Deletion of /var/lib/nova/instances/59628432-68dc-48d9-8986-8511c376a62d_del complete#033[00m
Nov 29 03:06:09 np0005539552 nova_compute[233724]: 2025-11-29 08:06:09.664 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:06:09 np0005539552 nova_compute[233724]: 2025-11-29 08:06:09.668 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:06:09 np0005539552 nova_compute[233724]: 2025-11-29 08:06:09.742 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:06:09 np0005539552 nova_compute[233724]: 2025-11-29 08:06:09.742 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403569.6159196, 68ba4333-b460-437d-97a0-6c7feff2c4bc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:06:09 np0005539552 nova_compute[233724]: 2025-11-29 08:06:09.743 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:06:09 np0005539552 nova_compute[233724]: 2025-11-29 08:06:09.756 233728 INFO nova.compute.manager [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Took 8.51 seconds to build instance.#033[00m
Nov 29 03:06:09 np0005539552 nova_compute[233724]: 2025-11-29 08:06:09.761 233728 INFO nova.compute.manager [None req-cc731d55-2fae-43e1-af90-b9ac1b793d08 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Took 1.45 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:06:09 np0005539552 nova_compute[233724]: 2025-11-29 08:06:09.761 233728 DEBUG oslo.service.loopingcall [None req-cc731d55-2fae-43e1-af90-b9ac1b793d08 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:06:09 np0005539552 nova_compute[233724]: 2025-11-29 08:06:09.762 233728 DEBUG nova.compute.manager [-] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:06:09 np0005539552 nova_compute[233724]: 2025-11-29 08:06:09.762 233728 DEBUG nova.network.neutron [-] [instance: 59628432-68dc-48d9-8986-8511c376a62d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:06:09 np0005539552 nova_compute[233724]: 2025-11-29 08:06:09.776 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:06:09 np0005539552 nova_compute[233724]: 2025-11-29 08:06:09.781 233728 DEBUG oslo_concurrency.lockutils [None req-9ba26bcd-97c8-4e4d-bf8c-09fb4f9b8df0 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "68ba4333-b460-437d-97a0-6c7feff2c4bc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:09 np0005539552 nova_compute[233724]: 2025-11-29 08:06:09.782 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403569.6194935, 68ba4333-b460-437d-97a0-6c7feff2c4bc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:06:09 np0005539552 nova_compute[233724]: 2025-11-29 08:06:09.783 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:06:09 np0005539552 nova_compute[233724]: 2025-11-29 08:06:09.810 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:06:09 np0005539552 nova_compute[233724]: 2025-11-29 08:06:09.813 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:06:09 np0005539552 nova_compute[233724]: 2025-11-29 08:06:09.918 233728 DEBUG nova.network.neutron [-] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:06:09 np0005539552 nova_compute[233724]: 2025-11-29 08:06:09.931 233728 DEBUG nova.network.neutron [-] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:06:09 np0005539552 nova_compute[233724]: 2025-11-29 08:06:09.948 233728 INFO nova.compute.manager [-] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Took 0.19 seconds to deallocate network for instance.#033[00m
Nov 29 03:06:10 np0005539552 nova_compute[233724]: 2025-11-29 08:06:10.005 233728 DEBUG oslo_concurrency.lockutils [None req-cc731d55-2fae-43e1-af90-b9ac1b793d08 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:10 np0005539552 nova_compute[233724]: 2025-11-29 08:06:10.006 233728 DEBUG oslo_concurrency.lockutils [None req-cc731d55-2fae-43e1-af90-b9ac1b793d08 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:10 np0005539552 nova_compute[233724]: 2025-11-29 08:06:10.015 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:10 np0005539552 nova_compute[233724]: 2025-11-29 08:06:10.074 233728 DEBUG oslo_concurrency.processutils [None req-cc731d55-2fae-43e1-af90-b9ac1b793d08 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:10 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:06:10 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1714211995' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:06:10 np0005539552 nova_compute[233724]: 2025-11-29 08:06:10.535 233728 DEBUG oslo_concurrency.processutils [None req-cc731d55-2fae-43e1-af90-b9ac1b793d08 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:10 np0005539552 nova_compute[233724]: 2025-11-29 08:06:10.540 233728 DEBUG nova.compute.provider_tree [None req-cc731d55-2fae-43e1-af90-b9ac1b793d08 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:06:10 np0005539552 nova_compute[233724]: 2025-11-29 08:06:10.570 233728 DEBUG nova.scheduler.client.report [None req-cc731d55-2fae-43e1-af90-b9ac1b793d08 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:06:10 np0005539552 nova_compute[233724]: 2025-11-29 08:06:10.604 233728 DEBUG oslo_concurrency.lockutils [None req-cc731d55-2fae-43e1-af90-b9ac1b793d08 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:10.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:10 np0005539552 nova_compute[233724]: 2025-11-29 08:06:10.727 233728 INFO nova.scheduler.client.report [None req-cc731d55-2fae-43e1-af90-b9ac1b793d08 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Deleted allocations for instance 59628432-68dc-48d9-8986-8511c376a62d#033[00m
Nov 29 03:06:10 np0005539552 nova_compute[233724]: 2025-11-29 08:06:10.794 233728 DEBUG nova.compute.manager [req-8e657c0c-0a03-4757-907a-07dedd91cfe8 req-2193c571-dfcb-430c-98c9-d80cb88441e3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Received event network-vif-plugged-82fc5bc2-cc1e-4b00-8bf7-2c63268650cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:10 np0005539552 nova_compute[233724]: 2025-11-29 08:06:10.795 233728 DEBUG oslo_concurrency.lockutils [req-8e657c0c-0a03-4757-907a-07dedd91cfe8 req-2193c571-dfcb-430c-98c9-d80cb88441e3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "68ba4333-b460-437d-97a0-6c7feff2c4bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:10 np0005539552 nova_compute[233724]: 2025-11-29 08:06:10.796 233728 DEBUG oslo_concurrency.lockutils [req-8e657c0c-0a03-4757-907a-07dedd91cfe8 req-2193c571-dfcb-430c-98c9-d80cb88441e3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "68ba4333-b460-437d-97a0-6c7feff2c4bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:10 np0005539552 nova_compute[233724]: 2025-11-29 08:06:10.796 233728 DEBUG oslo_concurrency.lockutils [req-8e657c0c-0a03-4757-907a-07dedd91cfe8 req-2193c571-dfcb-430c-98c9-d80cb88441e3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "68ba4333-b460-437d-97a0-6c7feff2c4bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:10 np0005539552 nova_compute[233724]: 2025-11-29 08:06:10.796 233728 DEBUG nova.compute.manager [req-8e657c0c-0a03-4757-907a-07dedd91cfe8 req-2193c571-dfcb-430c-98c9-d80cb88441e3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] No waiting events found dispatching network-vif-plugged-82fc5bc2-cc1e-4b00-8bf7-2c63268650cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:06:10 np0005539552 nova_compute[233724]: 2025-11-29 08:06:10.797 233728 WARNING nova.compute.manager [req-8e657c0c-0a03-4757-907a-07dedd91cfe8 req-2193c571-dfcb-430c-98c9-d80cb88441e3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Received unexpected event network-vif-plugged-82fc5bc2-cc1e-4b00-8bf7-2c63268650cf for instance with vm_state active and task_state None.#033[00m
Nov 29 03:06:10 np0005539552 nova_compute[233724]: 2025-11-29 08:06:10.806 233728 DEBUG oslo_concurrency.lockutils [None req-cc731d55-2fae-43e1-af90-b9ac1b793d08 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Lock "59628432-68dc-48d9-8986-8511c376a62d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.019s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:10.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:11 np0005539552 nova_compute[233724]: 2025-11-29 08:06:11.135 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:11 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e240 e240: 3 total, 3 up, 3 in
Nov 29 03:06:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:12 np0005539552 nova_compute[233724]: 2025-11-29 08:06:12.550 233728 DEBUG oslo_concurrency.lockutils [None req-6c01c0c3-9573-4827-b02a-1b821a0d6dc6 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "68ba4333-b460-437d-97a0-6c7feff2c4bc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:12 np0005539552 nova_compute[233724]: 2025-11-29 08:06:12.551 233728 DEBUG oslo_concurrency.lockutils [None req-6c01c0c3-9573-4827-b02a-1b821a0d6dc6 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "68ba4333-b460-437d-97a0-6c7feff2c4bc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:12 np0005539552 nova_compute[233724]: 2025-11-29 08:06:12.552 233728 DEBUG oslo_concurrency.lockutils [None req-6c01c0c3-9573-4827-b02a-1b821a0d6dc6 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "68ba4333-b460-437d-97a0-6c7feff2c4bc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:12 np0005539552 nova_compute[233724]: 2025-11-29 08:06:12.552 233728 DEBUG oslo_concurrency.lockutils [None req-6c01c0c3-9573-4827-b02a-1b821a0d6dc6 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "68ba4333-b460-437d-97a0-6c7feff2c4bc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:12 np0005539552 nova_compute[233724]: 2025-11-29 08:06:12.553 233728 DEBUG oslo_concurrency.lockutils [None req-6c01c0c3-9573-4827-b02a-1b821a0d6dc6 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "68ba4333-b460-437d-97a0-6c7feff2c4bc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:12 np0005539552 nova_compute[233724]: 2025-11-29 08:06:12.554 233728 INFO nova.compute.manager [None req-6c01c0c3-9573-4827-b02a-1b821a0d6dc6 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Terminating instance#033[00m
Nov 29 03:06:12 np0005539552 nova_compute[233724]: 2025-11-29 08:06:12.555 233728 DEBUG nova.compute.manager [None req-6c01c0c3-9573-4827-b02a-1b821a0d6dc6 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:06:12 np0005539552 kernel: tap82fc5bc2-cc (unregistering): left promiscuous mode
Nov 29 03:06:12 np0005539552 NetworkManager[48926]: <info>  [1764403572.6014] device (tap82fc5bc2-cc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:06:12 np0005539552 ovn_controller[133798]: 2025-11-29T08:06:12Z|00187|binding|INFO|Releasing lport 82fc5bc2-cc1e-4b00-8bf7-2c63268650cf from this chassis (sb_readonly=0)
Nov 29 03:06:12 np0005539552 ovn_controller[133798]: 2025-11-29T08:06:12Z|00188|binding|INFO|Setting lport 82fc5bc2-cc1e-4b00-8bf7-2c63268650cf down in Southbound
Nov 29 03:06:12 np0005539552 ovn_controller[133798]: 2025-11-29T08:06:12Z|00189|binding|INFO|Removing iface tap82fc5bc2-cc ovn-installed in OVS
Nov 29 03:06:12 np0005539552 nova_compute[233724]: 2025-11-29 08:06:12.605 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:06:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:12.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:06:12 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:12.613 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:12:32:f4 10.100.0.4'], port_security=['fa:16:3e:12:32:f4 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '68ba4333-b460-437d-97a0-6c7feff2c4bc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f01d29c1-afcb-4909-9abf-f7d31e4549d8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '638fd52fccf14f16b56d0860553063f3', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a57b53e5-9055-46ae-8ab4-d4a8a62173cb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d89b288b-bbc6-47fa-ad12-8aab94ffc78f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=82fc5bc2-cc1e-4b00-8bf7-2c63268650cf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:06:12 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:12.615 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 82fc5bc2-cc1e-4b00-8bf7-2c63268650cf in datapath f01d29c1-afcb-4909-9abf-f7d31e4549d8 unbound from our chassis#033[00m
Nov 29 03:06:12 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:12.616 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f01d29c1-afcb-4909-9abf-f7d31e4549d8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:06:12 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:12.617 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[5a6adac0-5b48-4085-a67f-c9c23a7aeb3e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:12 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:12.617 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8 namespace which is not needed anymore#033[00m
Nov 29 03:06:12 np0005539552 nova_compute[233724]: 2025-11-29 08:06:12.628 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:12 np0005539552 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d0000003d.scope: Deactivated successfully.
Nov 29 03:06:12 np0005539552 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d0000003d.scope: Consumed 4.451s CPU time.
Nov 29 03:06:12 np0005539552 systemd-machined[196379]: Machine qemu-22-instance-0000003d terminated.
Nov 29 03:06:12 np0005539552 neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8[258755]: [NOTICE]   (258759) : haproxy version is 2.8.14-c23fe91
Nov 29 03:06:12 np0005539552 neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8[258755]: [NOTICE]   (258759) : path to executable is /usr/sbin/haproxy
Nov 29 03:06:12 np0005539552 neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8[258755]: [WARNING]  (258759) : Exiting Master process...
Nov 29 03:06:12 np0005539552 neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8[258755]: [ALERT]    (258759) : Current worker (258761) exited with code 143 (Terminated)
Nov 29 03:06:12 np0005539552 neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8[258755]: [WARNING]  (258759) : All workers exited. Exiting... (0)
Nov 29 03:06:12 np0005539552 systemd[1]: libpod-da088acf3764e7d358487bc8423e687178d2e9e82169c9d82aef27d006010320.scope: Deactivated successfully.
Nov 29 03:06:12 np0005539552 podman[258911]: 2025-11-29 08:06:12.741131762 +0000 UTC m=+0.042233171 container died da088acf3764e7d358487bc8423e687178d2e9e82169c9d82aef27d006010320 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:06:12 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-da088acf3764e7d358487bc8423e687178d2e9e82169c9d82aef27d006010320-userdata-shm.mount: Deactivated successfully.
Nov 29 03:06:12 np0005539552 systemd[1]: var-lib-containers-storage-overlay-6c4086295237298958b0844f2bbc4085fbde93067453ba28da482576f70d7455-merged.mount: Deactivated successfully.
Nov 29 03:06:12 np0005539552 podman[258911]: 2025-11-29 08:06:12.782790767 +0000 UTC m=+0.083892176 container cleanup da088acf3764e7d358487bc8423e687178d2e9e82169c9d82aef27d006010320 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:06:12 np0005539552 systemd[1]: libpod-conmon-da088acf3764e7d358487bc8423e687178d2e9e82169c9d82aef27d006010320.scope: Deactivated successfully.
Nov 29 03:06:12 np0005539552 nova_compute[233724]: 2025-11-29 08:06:12.792 233728 INFO nova.virt.libvirt.driver [-] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Instance destroyed successfully.#033[00m
Nov 29 03:06:12 np0005539552 nova_compute[233724]: 2025-11-29 08:06:12.793 233728 DEBUG nova.objects.instance [None req-6c01c0c3-9573-4827-b02a-1b821a0d6dc6 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lazy-loading 'resources' on Instance uuid 68ba4333-b460-437d-97a0-6c7feff2c4bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:06:12 np0005539552 nova_compute[233724]: 2025-11-29 08:06:12.808 233728 DEBUG nova.virt.libvirt.vif [None req-6c01c0c3-9573-4827-b02a-1b821a0d6dc6 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:06:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1739497414',display_name='tempest-ImagesTestJSON-server-1739497414',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-imagestestjson-server-1739497414',id=61,image_ref='37d12ee4-89a4-499b-beea-6b6d4e46474b',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:06:09Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='638fd52fccf14f16b56d0860553063f3',ramdisk_id='',reservation_id='r-n28dk4or',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='7c7286ee-7432-4e68-a574-bbe535d1f203',image_min_disk='1',image_min_ram='0',image_owner_id='638fd52fccf14f16b56d0860553063f3',image_owner_project_name='tempest-ImagesTestJSON-1682881466',image_owner_user_name='tempest-ImagesTestJSON-1682881466-project-member',image_user_id='fddc5f5801764ee19d5253e2cab34df3',owner_project_name='tempest-ImagesTestJSON-1682881466',owner_user_name='tempest-ImagesTestJSON-1682881466-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:06:09Z,user_data=None,user_id='fddc5f5801764ee19d5253e2cab34df3',uuid=68ba4333-b460-437d-97a0-6c7feff2c4bc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "82fc5bc2-cc1e-4b00-8bf7-2c63268650cf", "address": "fa:16:3e:12:32:f4", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82fc5bc2-cc", "ovs_interfaceid": "82fc5bc2-cc1e-4b00-8bf7-2c63268650cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:06:12 np0005539552 nova_compute[233724]: 2025-11-29 08:06:12.809 233728 DEBUG nova.network.os_vif_util [None req-6c01c0c3-9573-4827-b02a-1b821a0d6dc6 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Converting VIF {"id": "82fc5bc2-cc1e-4b00-8bf7-2c63268650cf", "address": "fa:16:3e:12:32:f4", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82fc5bc2-cc", "ovs_interfaceid": "82fc5bc2-cc1e-4b00-8bf7-2c63268650cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:06:12 np0005539552 nova_compute[233724]: 2025-11-29 08:06:12.809 233728 DEBUG nova.network.os_vif_util [None req-6c01c0c3-9573-4827-b02a-1b821a0d6dc6 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:32:f4,bridge_name='br-int',has_traffic_filtering=True,id=82fc5bc2-cc1e-4b00-8bf7-2c63268650cf,network=Network(f01d29c1-afcb-4909-9abf-f7d31e4549d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82fc5bc2-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:06:12 np0005539552 nova_compute[233724]: 2025-11-29 08:06:12.810 233728 DEBUG os_vif [None req-6c01c0c3-9573-4827-b02a-1b821a0d6dc6 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:32:f4,bridge_name='br-int',has_traffic_filtering=True,id=82fc5bc2-cc1e-4b00-8bf7-2c63268650cf,network=Network(f01d29c1-afcb-4909-9abf-f7d31e4549d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82fc5bc2-cc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:06:12 np0005539552 nova_compute[233724]: 2025-11-29 08:06:12.811 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:12 np0005539552 nova_compute[233724]: 2025-11-29 08:06:12.811 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap82fc5bc2-cc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:12 np0005539552 nova_compute[233724]: 2025-11-29 08:06:12.813 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:12 np0005539552 nova_compute[233724]: 2025-11-29 08:06:12.814 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:12 np0005539552 nova_compute[233724]: 2025-11-29 08:06:12.816 233728 INFO os_vif [None req-6c01c0c3-9573-4827-b02a-1b821a0d6dc6 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:32:f4,bridge_name='br-int',has_traffic_filtering=True,id=82fc5bc2-cc1e-4b00-8bf7-2c63268650cf,network=Network(f01d29c1-afcb-4909-9abf-f7d31e4549d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82fc5bc2-cc')#033[00m
Nov 29 03:06:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:12.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:12 np0005539552 podman[258949]: 2025-11-29 08:06:12.848495361 +0000 UTC m=+0.043841315 container remove da088acf3764e7d358487bc8423e687178d2e9e82169c9d82aef27d006010320 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:06:12 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:12.854 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3b69b0a5-32d6-45f9-be26-71abc3b96622]: (4, ('Sat Nov 29 08:06:12 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8 (da088acf3764e7d358487bc8423e687178d2e9e82169c9d82aef27d006010320)\nda088acf3764e7d358487bc8423e687178d2e9e82169c9d82aef27d006010320\nSat Nov 29 08:06:12 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8 (da088acf3764e7d358487bc8423e687178d2e9e82169c9d82aef27d006010320)\nda088acf3764e7d358487bc8423e687178d2e9e82169c9d82aef27d006010320\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:12 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:12.855 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0979bdc8-54c0-469d-b7b0-7331e2bc11d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:12 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:12.857 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf01d29c1-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:12 np0005539552 kernel: tapf01d29c1-a0: left promiscuous mode
Nov 29 03:06:12 np0005539552 nova_compute[233724]: 2025-11-29 08:06:12.860 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:12 np0005539552 nova_compute[233724]: 2025-11-29 08:06:12.874 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:12 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:12.876 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6f1f3080-ac9a-46cd-890b-e3eb9160fcf4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:12 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:12.893 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3863571d-91dc-4361-831a-31eb1d4a2674]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:12 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:12.894 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6b277ccb-8d18-4cee-aedb-263c1e09dffc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:12 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:12.910 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[9e85b026-8343-4c1c-a67f-0af7ebc70d6a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661409, 'reachable_time': 16812, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258983, 'error': None, 'target': 'ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:12 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:12.914 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:06:12 np0005539552 systemd[1]: run-netns-ovnmeta\x2df01d29c1\x2dafcb\x2d4909\x2d9abf\x2df7d31e4549d8.mount: Deactivated successfully.
Nov 29 03:06:12 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:12.914 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[3a6d1ac8-dae2-4797-bcbc-5cda4569f8ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:12 np0005539552 nova_compute[233724]: 2025-11-29 08:06:12.966 233728 DEBUG nova.compute.manager [req-0b04c3eb-af53-42ee-a708-e0a8fac516d1 req-4c680812-2b87-4802-97cb-6603159546d9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Received event network-vif-unplugged-82fc5bc2-cc1e-4b00-8bf7-2c63268650cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:12 np0005539552 nova_compute[233724]: 2025-11-29 08:06:12.967 233728 DEBUG oslo_concurrency.lockutils [req-0b04c3eb-af53-42ee-a708-e0a8fac516d1 req-4c680812-2b87-4802-97cb-6603159546d9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "68ba4333-b460-437d-97a0-6c7feff2c4bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:12 np0005539552 nova_compute[233724]: 2025-11-29 08:06:12.967 233728 DEBUG oslo_concurrency.lockutils [req-0b04c3eb-af53-42ee-a708-e0a8fac516d1 req-4c680812-2b87-4802-97cb-6603159546d9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "68ba4333-b460-437d-97a0-6c7feff2c4bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:12 np0005539552 nova_compute[233724]: 2025-11-29 08:06:12.967 233728 DEBUG oslo_concurrency.lockutils [req-0b04c3eb-af53-42ee-a708-e0a8fac516d1 req-4c680812-2b87-4802-97cb-6603159546d9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "68ba4333-b460-437d-97a0-6c7feff2c4bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:12 np0005539552 nova_compute[233724]: 2025-11-29 08:06:12.967 233728 DEBUG nova.compute.manager [req-0b04c3eb-af53-42ee-a708-e0a8fac516d1 req-4c680812-2b87-4802-97cb-6603159546d9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] No waiting events found dispatching network-vif-unplugged-82fc5bc2-cc1e-4b00-8bf7-2c63268650cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:06:12 np0005539552 nova_compute[233724]: 2025-11-29 08:06:12.968 233728 DEBUG nova.compute.manager [req-0b04c3eb-af53-42ee-a708-e0a8fac516d1 req-4c680812-2b87-4802-97cb-6603159546d9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Received event network-vif-unplugged-82fc5bc2-cc1e-4b00-8bf7-2c63268650cf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:06:13 np0005539552 nova_compute[233724]: 2025-11-29 08:06:13.281 233728 INFO nova.virt.libvirt.driver [None req-6c01c0c3-9573-4827-b02a-1b821a0d6dc6 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Deleting instance files /var/lib/nova/instances/68ba4333-b460-437d-97a0-6c7feff2c4bc_del#033[00m
Nov 29 03:06:13 np0005539552 nova_compute[233724]: 2025-11-29 08:06:13.282 233728 INFO nova.virt.libvirt.driver [None req-6c01c0c3-9573-4827-b02a-1b821a0d6dc6 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Deletion of /var/lib/nova/instances/68ba4333-b460-437d-97a0-6c7feff2c4bc_del complete#033[00m
Nov 29 03:06:13 np0005539552 nova_compute[233724]: 2025-11-29 08:06:13.348 233728 INFO nova.compute.manager [None req-6c01c0c3-9573-4827-b02a-1b821a0d6dc6 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:06:13 np0005539552 nova_compute[233724]: 2025-11-29 08:06:13.349 233728 DEBUG oslo.service.loopingcall [None req-6c01c0c3-9573-4827-b02a-1b821a0d6dc6 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:06:13 np0005539552 nova_compute[233724]: 2025-11-29 08:06:13.349 233728 DEBUG nova.compute.manager [-] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:06:13 np0005539552 nova_compute[233724]: 2025-11-29 08:06:13.349 233728 DEBUG nova.network.neutron [-] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:06:14 np0005539552 nova_compute[233724]: 2025-11-29 08:06:14.578 233728 DEBUG nova.network.neutron [-] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:06:14 np0005539552 nova_compute[233724]: 2025-11-29 08:06:14.597 233728 INFO nova.compute.manager [-] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Took 1.25 seconds to deallocate network for instance.#033[00m
Nov 29 03:06:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:06:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:14.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:06:14 np0005539552 nova_compute[233724]: 2025-11-29 08:06:14.649 233728 DEBUG oslo_concurrency.lockutils [None req-6c01c0c3-9573-4827-b02a-1b821a0d6dc6 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:14 np0005539552 nova_compute[233724]: 2025-11-29 08:06:14.650 233728 DEBUG oslo_concurrency.lockutils [None req-6c01c0c3-9573-4827-b02a-1b821a0d6dc6 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:14 np0005539552 nova_compute[233724]: 2025-11-29 08:06:14.704 233728 DEBUG oslo_concurrency.processutils [None req-6c01c0c3-9573-4827-b02a-1b821a0d6dc6 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:14.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:15 np0005539552 nova_compute[233724]: 2025-11-29 08:06:15.016 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:15 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:06:15 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/617367910' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:06:15 np0005539552 nova_compute[233724]: 2025-11-29 08:06:15.148 233728 DEBUG oslo_concurrency.processutils [None req-6c01c0c3-9573-4827-b02a-1b821a0d6dc6 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:15 np0005539552 nova_compute[233724]: 2025-11-29 08:06:15.154 233728 DEBUG nova.compute.provider_tree [None req-6c01c0c3-9573-4827-b02a-1b821a0d6dc6 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:06:15 np0005539552 nova_compute[233724]: 2025-11-29 08:06:15.172 233728 DEBUG nova.scheduler.client.report [None req-6c01c0c3-9573-4827-b02a-1b821a0d6dc6 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:06:15 np0005539552 nova_compute[233724]: 2025-11-29 08:06:15.194 233728 DEBUG oslo_concurrency.lockutils [None req-6c01c0c3-9573-4827-b02a-1b821a0d6dc6 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.544s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:15 np0005539552 nova_compute[233724]: 2025-11-29 08:06:15.217 233728 INFO nova.scheduler.client.report [None req-6c01c0c3-9573-4827-b02a-1b821a0d6dc6 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Deleted allocations for instance 68ba4333-b460-437d-97a0-6c7feff2c4bc#033[00m
Nov 29 03:06:15 np0005539552 nova_compute[233724]: 2025-11-29 08:06:15.292 233728 DEBUG oslo_concurrency.lockutils [None req-6c01c0c3-9573-4827-b02a-1b821a0d6dc6 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "68ba4333-b460-437d-97a0-6c7feff2c4bc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:15 np0005539552 nova_compute[233724]: 2025-11-29 08:06:15.301 233728 DEBUG nova.compute.manager [req-02694cba-057f-40ee-8986-0901c66483d2 req-f19a2bbd-ad98-4b3a-b1fd-e6fccda37b80 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Received event network-vif-plugged-82fc5bc2-cc1e-4b00-8bf7-2c63268650cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:15 np0005539552 nova_compute[233724]: 2025-11-29 08:06:15.302 233728 DEBUG oslo_concurrency.lockutils [req-02694cba-057f-40ee-8986-0901c66483d2 req-f19a2bbd-ad98-4b3a-b1fd-e6fccda37b80 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "68ba4333-b460-437d-97a0-6c7feff2c4bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:15 np0005539552 nova_compute[233724]: 2025-11-29 08:06:15.302 233728 DEBUG oslo_concurrency.lockutils [req-02694cba-057f-40ee-8986-0901c66483d2 req-f19a2bbd-ad98-4b3a-b1fd-e6fccda37b80 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "68ba4333-b460-437d-97a0-6c7feff2c4bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:15 np0005539552 nova_compute[233724]: 2025-11-29 08:06:15.302 233728 DEBUG oslo_concurrency.lockutils [req-02694cba-057f-40ee-8986-0901c66483d2 req-f19a2bbd-ad98-4b3a-b1fd-e6fccda37b80 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "68ba4333-b460-437d-97a0-6c7feff2c4bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:15 np0005539552 nova_compute[233724]: 2025-11-29 08:06:15.302 233728 DEBUG nova.compute.manager [req-02694cba-057f-40ee-8986-0901c66483d2 req-f19a2bbd-ad98-4b3a-b1fd-e6fccda37b80 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] No waiting events found dispatching network-vif-plugged-82fc5bc2-cc1e-4b00-8bf7-2c63268650cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:06:15 np0005539552 nova_compute[233724]: 2025-11-29 08:06:15.303 233728 WARNING nova.compute.manager [req-02694cba-057f-40ee-8986-0901c66483d2 req-f19a2bbd-ad98-4b3a-b1fd-e6fccda37b80 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Received unexpected event network-vif-plugged-82fc5bc2-cc1e-4b00-8bf7-2c63268650cf for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:06:15 np0005539552 podman[259009]: 2025-11-29 08:06:15.958454351 +0000 UTC m=+0.052160859 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:06:15 np0005539552 podman[259008]: 2025-11-29 08:06:15.969482678 +0000 UTC m=+0.063653649 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd)
Nov 29 03:06:15 np0005539552 podman[259010]: 2025-11-29 08:06:15.983490287 +0000 UTC m=+0.073753172 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 03:06:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:16.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:16 np0005539552 nova_compute[233724]: 2025-11-29 08:06:16.841 233728 DEBUG nova.compute.manager [req-491d13b3-acd7-469b-aa4b-7ca6ed39d238 req-62d151e3-9f26-4eb6-9264-c97d8b526089 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Received event network-vif-deleted-82fc5bc2-cc1e-4b00-8bf7-2c63268650cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:16.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:16 np0005539552 nova_compute[233724]: 2025-11-29 08:06:16.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:16 np0005539552 nova_compute[233724]: 2025-11-29 08:06:16.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:16 np0005539552 nova_compute[233724]: 2025-11-29 08:06:16.944 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:16 np0005539552 nova_compute[233724]: 2025-11-29 08:06:16.944 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:16 np0005539552 nova_compute[233724]: 2025-11-29 08:06:16.944 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:16 np0005539552 nova_compute[233724]: 2025-11-29 08:06:16.945 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:06:16 np0005539552 nova_compute[233724]: 2025-11-29 08:06:16.945 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:06:17 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3001438444' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:06:17 np0005539552 nova_compute[233724]: 2025-11-29 08:06:17.400 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:17 np0005539552 nova_compute[233724]: 2025-11-29 08:06:17.559 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:06:17 np0005539552 nova_compute[233724]: 2025-11-29 08:06:17.560 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4556MB free_disk=20.863048553466797GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:06:17 np0005539552 nova_compute[233724]: 2025-11-29 08:06:17.560 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:17 np0005539552 nova_compute[233724]: 2025-11-29 08:06:17.560 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:17 np0005539552 nova_compute[233724]: 2025-11-29 08:06:17.619 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:06:17 np0005539552 nova_compute[233724]: 2025-11-29 08:06:17.619 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:06:17 np0005539552 nova_compute[233724]: 2025-11-29 08:06:17.639 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:17 np0005539552 nova_compute[233724]: 2025-11-29 08:06:17.814 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e241 e241: 3 total, 3 up, 3 in
Nov 29 03:06:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:06:18 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/151423393' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:06:18 np0005539552 nova_compute[233724]: 2025-11-29 08:06:18.101 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:18 np0005539552 nova_compute[233724]: 2025-11-29 08:06:18.110 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:06:18 np0005539552 nova_compute[233724]: 2025-11-29 08:06:18.136 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:06:18 np0005539552 nova_compute[233724]: 2025-11-29 08:06:18.160 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:06:18 np0005539552 nova_compute[233724]: 2025-11-29 08:06:18.161 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:18.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:18.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:20 np0005539552 nova_compute[233724]: 2025-11-29 08:06:20.018 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:20 np0005539552 nova_compute[233724]: 2025-11-29 08:06:20.162 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:20 np0005539552 nova_compute[233724]: 2025-11-29 08:06:20.162 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:20 np0005539552 nova_compute[233724]: 2025-11-29 08:06:20.162 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:20 np0005539552 nova_compute[233724]: 2025-11-29 08:06:20.163 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:20 np0005539552 nova_compute[233724]: 2025-11-29 08:06:20.163 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:06:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:20.615 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:20.615 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:20.615 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:20.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:20.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:21 np0005539552 nova_compute[233724]: 2025-11-29 08:06:21.919 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:22.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:22 np0005539552 nova_compute[233724]: 2025-11-29 08:06:22.819 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:22.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:22 np0005539552 nova_compute[233724]: 2025-11-29 08:06:22.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:22 np0005539552 nova_compute[233724]: 2025-11-29 08:06:22.923 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:06:22 np0005539552 nova_compute[233724]: 2025-11-29 08:06:22.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:06:22 np0005539552 nova_compute[233724]: 2025-11-29 08:06:22.938 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:06:22 np0005539552 nova_compute[233724]: 2025-11-29 08:06:22.939 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:23 np0005539552 nova_compute[233724]: 2025-11-29 08:06:23.532 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403568.530805, 59628432-68dc-48d9-8986-8511c376a62d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:06:23 np0005539552 nova_compute[233724]: 2025-11-29 08:06:23.533 233728 INFO nova.compute.manager [-] [instance: 59628432-68dc-48d9-8986-8511c376a62d] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:06:23 np0005539552 nova_compute[233724]: 2025-11-29 08:06:23.556 233728 DEBUG nova.compute.manager [None req-6a1e385c-b1ea-471c-b013-4dc0c908bc27 - - - - - -] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:06:23 np0005539552 nova_compute[233724]: 2025-11-29 08:06:23.758 233728 DEBUG oslo_concurrency.lockutils [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "d14e9535-21d2-42b8-9c24-d37708970336" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:23 np0005539552 nova_compute[233724]: 2025-11-29 08:06:23.759 233728 DEBUG oslo_concurrency.lockutils [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "d14e9535-21d2-42b8-9c24-d37708970336" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:23 np0005539552 nova_compute[233724]: 2025-11-29 08:06:23.774 233728 DEBUG nova.compute.manager [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:06:23 np0005539552 nova_compute[233724]: 2025-11-29 08:06:23.839 233728 DEBUG oslo_concurrency.lockutils [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:23 np0005539552 nova_compute[233724]: 2025-11-29 08:06:23.839 233728 DEBUG oslo_concurrency.lockutils [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:23 np0005539552 nova_compute[233724]: 2025-11-29 08:06:23.846 233728 DEBUG nova.virt.hardware [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:06:23 np0005539552 nova_compute[233724]: 2025-11-29 08:06:23.847 233728 INFO nova.compute.claims [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:06:23 np0005539552 nova_compute[233724]: 2025-11-29 08:06:23.964 233728 DEBUG oslo_concurrency.processutils [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:24 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:06:24 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3494741056' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:06:24 np0005539552 nova_compute[233724]: 2025-11-29 08:06:24.399 233728 DEBUG oslo_concurrency.processutils [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:24 np0005539552 nova_compute[233724]: 2025-11-29 08:06:24.405 233728 DEBUG nova.compute.provider_tree [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:06:24 np0005539552 nova_compute[233724]: 2025-11-29 08:06:24.422 233728 DEBUG nova.scheduler.client.report [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:06:24 np0005539552 nova_compute[233724]: 2025-11-29 08:06:24.446 233728 DEBUG oslo_concurrency.lockutils [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:24 np0005539552 nova_compute[233724]: 2025-11-29 08:06:24.447 233728 DEBUG nova.compute.manager [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:06:24 np0005539552 nova_compute[233724]: 2025-11-29 08:06:24.492 233728 DEBUG nova.compute.manager [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:06:24 np0005539552 nova_compute[233724]: 2025-11-29 08:06:24.492 233728 DEBUG nova.network.neutron [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:06:24 np0005539552 nova_compute[233724]: 2025-11-29 08:06:24.509 233728 INFO nova.virt.libvirt.driver [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:06:24 np0005539552 nova_compute[233724]: 2025-11-29 08:06:24.562 233728 DEBUG nova.compute.manager [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:06:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:06:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:24.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:06:24 np0005539552 nova_compute[233724]: 2025-11-29 08:06:24.784 233728 DEBUG nova.compute.manager [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:06:24 np0005539552 nova_compute[233724]: 2025-11-29 08:06:24.785 233728 DEBUG nova.virt.libvirt.driver [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:06:24 np0005539552 nova_compute[233724]: 2025-11-29 08:06:24.785 233728 INFO nova.virt.libvirt.driver [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Creating image(s)#033[00m
Nov 29 03:06:24 np0005539552 nova_compute[233724]: 2025-11-29 08:06:24.811 233728 DEBUG nova.storage.rbd_utils [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] rbd image d14e9535-21d2-42b8-9c24-d37708970336_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:24 np0005539552 nova_compute[233724]: 2025-11-29 08:06:24.842 233728 DEBUG nova.storage.rbd_utils [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] rbd image d14e9535-21d2-42b8-9c24-d37708970336_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:06:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:24.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:06:24 np0005539552 nova_compute[233724]: 2025-11-29 08:06:24.872 233728 DEBUG nova.storage.rbd_utils [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] rbd image d14e9535-21d2-42b8-9c24-d37708970336_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:24 np0005539552 nova_compute[233724]: 2025-11-29 08:06:24.876 233728 DEBUG oslo_concurrency.processutils [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:24 np0005539552 nova_compute[233724]: 2025-11-29 08:06:24.939 233728 DEBUG oslo_concurrency.processutils [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:24 np0005539552 nova_compute[233724]: 2025-11-29 08:06:24.940 233728 DEBUG oslo_concurrency.lockutils [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:24 np0005539552 nova_compute[233724]: 2025-11-29 08:06:24.941 233728 DEBUG oslo_concurrency.lockutils [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:24 np0005539552 nova_compute[233724]: 2025-11-29 08:06:24.941 233728 DEBUG oslo_concurrency.lockutils [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:24 np0005539552 nova_compute[233724]: 2025-11-29 08:06:24.965 233728 DEBUG nova.storage.rbd_utils [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] rbd image d14e9535-21d2-42b8-9c24-d37708970336_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:24 np0005539552 nova_compute[233724]: 2025-11-29 08:06:24.969 233728 DEBUG oslo_concurrency.processutils [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 d14e9535-21d2-42b8-9c24-d37708970336_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:24 np0005539552 nova_compute[233724]: 2025-11-29 08:06:24.993 233728 DEBUG nova.policy [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fddc5f5801764ee19d5253e2cab34df3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '638fd52fccf14f16b56d0860553063f3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:06:25 np0005539552 nova_compute[233724]: 2025-11-29 08:06:25.021 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:25 np0005539552 nova_compute[233724]: 2025-11-29 08:06:25.245 233728 DEBUG oslo_concurrency.processutils [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 d14e9535-21d2-42b8-9c24-d37708970336_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.276s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:25 np0005539552 nova_compute[233724]: 2025-11-29 08:06:25.316 233728 DEBUG nova.storage.rbd_utils [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] resizing rbd image d14e9535-21d2-42b8-9c24-d37708970336_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:06:25 np0005539552 nova_compute[233724]: 2025-11-29 08:06:25.422 233728 DEBUG nova.objects.instance [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lazy-loading 'migration_context' on Instance uuid d14e9535-21d2-42b8-9c24-d37708970336 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:06:25 np0005539552 nova_compute[233724]: 2025-11-29 08:06:25.437 233728 DEBUG nova.virt.libvirt.driver [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:06:25 np0005539552 nova_compute[233724]: 2025-11-29 08:06:25.438 233728 DEBUG nova.virt.libvirt.driver [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Ensure instance console log exists: /var/lib/nova/instances/d14e9535-21d2-42b8-9c24-d37708970336/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:06:25 np0005539552 nova_compute[233724]: 2025-11-29 08:06:25.439 233728 DEBUG oslo_concurrency.lockutils [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:25 np0005539552 nova_compute[233724]: 2025-11-29 08:06:25.440 233728 DEBUG oslo_concurrency.lockutils [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:25 np0005539552 nova_compute[233724]: 2025-11-29 08:06:25.440 233728 DEBUG oslo_concurrency.lockutils [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:25 np0005539552 nova_compute[233724]: 2025-11-29 08:06:25.700 233728 DEBUG nova.network.neutron [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Successfully created port: 3a011f2d-43f9-4359-a063-93f7f4f9d745 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:06:25 np0005539552 nova_compute[233724]: 2025-11-29 08:06:25.934 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:26 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e242 e242: 3 total, 3 up, 3 in
Nov 29 03:06:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:26.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:26.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:27 np0005539552 nova_compute[233724]: 2025-11-29 08:06:27.029 233728 DEBUG nova.network.neutron [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Successfully updated port: 3a011f2d-43f9-4359-a063-93f7f4f9d745 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:06:27 np0005539552 nova_compute[233724]: 2025-11-29 08:06:27.048 233728 DEBUG oslo_concurrency.lockutils [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "refresh_cache-d14e9535-21d2-42b8-9c24-d37708970336" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:06:27 np0005539552 nova_compute[233724]: 2025-11-29 08:06:27.049 233728 DEBUG oslo_concurrency.lockutils [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquired lock "refresh_cache-d14e9535-21d2-42b8-9c24-d37708970336" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:06:27 np0005539552 nova_compute[233724]: 2025-11-29 08:06:27.049 233728 DEBUG nova.network.neutron [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:06:27 np0005539552 nova_compute[233724]: 2025-11-29 08:06:27.145 233728 DEBUG nova.compute.manager [req-721bc250-20a6-4dd9-b0d0-98aa1ebf2009 req-f18c0cd3-8b8a-4cd3-b2dd-7ea1cbbc5562 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Received event network-changed-3a011f2d-43f9-4359-a063-93f7f4f9d745 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:27 np0005539552 nova_compute[233724]: 2025-11-29 08:06:27.146 233728 DEBUG nova.compute.manager [req-721bc250-20a6-4dd9-b0d0-98aa1ebf2009 req-f18c0cd3-8b8a-4cd3-b2dd-7ea1cbbc5562 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Refreshing instance network info cache due to event network-changed-3a011f2d-43f9-4359-a063-93f7f4f9d745. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:06:27 np0005539552 nova_compute[233724]: 2025-11-29 08:06:27.146 233728 DEBUG oslo_concurrency.lockutils [req-721bc250-20a6-4dd9-b0d0-98aa1ebf2009 req-f18c0cd3-8b8a-4cd3-b2dd-7ea1cbbc5562 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-d14e9535-21d2-42b8-9c24-d37708970336" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:06:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:27 np0005539552 nova_compute[233724]: 2025-11-29 08:06:27.226 233728 DEBUG nova.network.neutron [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:06:27 np0005539552 nova_compute[233724]: 2025-11-29 08:06:27.791 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403572.7887177, 68ba4333-b460-437d-97a0-6c7feff2c4bc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:06:27 np0005539552 nova_compute[233724]: 2025-11-29 08:06:27.792 233728 INFO nova.compute.manager [-] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:06:27 np0005539552 nova_compute[233724]: 2025-11-29 08:06:27.817 233728 DEBUG nova.compute.manager [None req-afda443e-4a32-4530-90cc-7188a7e74f15 - - - - - -] [instance: 68ba4333-b460-437d-97a0-6c7feff2c4bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:06:27 np0005539552 nova_compute[233724]: 2025-11-29 08:06:27.821 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:28 np0005539552 nova_compute[233724]: 2025-11-29 08:06:28.583 233728 DEBUG nova.network.neutron [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Updating instance_info_cache with network_info: [{"id": "3a011f2d-43f9-4359-a063-93f7f4f9d745", "address": "fa:16:3e:ba:42:53", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a011f2d-43", "ovs_interfaceid": "3a011f2d-43f9-4359-a063-93f7f4f9d745", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:06:28 np0005539552 nova_compute[233724]: 2025-11-29 08:06:28.603 233728 DEBUG oslo_concurrency.lockutils [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Releasing lock "refresh_cache-d14e9535-21d2-42b8-9c24-d37708970336" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:06:28 np0005539552 nova_compute[233724]: 2025-11-29 08:06:28.603 233728 DEBUG nova.compute.manager [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Instance network_info: |[{"id": "3a011f2d-43f9-4359-a063-93f7f4f9d745", "address": "fa:16:3e:ba:42:53", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a011f2d-43", "ovs_interfaceid": "3a011f2d-43f9-4359-a063-93f7f4f9d745", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:06:28 np0005539552 nova_compute[233724]: 2025-11-29 08:06:28.603 233728 DEBUG oslo_concurrency.lockutils [req-721bc250-20a6-4dd9-b0d0-98aa1ebf2009 req-f18c0cd3-8b8a-4cd3-b2dd-7ea1cbbc5562 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-d14e9535-21d2-42b8-9c24-d37708970336" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:06:28 np0005539552 nova_compute[233724]: 2025-11-29 08:06:28.604 233728 DEBUG nova.network.neutron [req-721bc250-20a6-4dd9-b0d0-98aa1ebf2009 req-f18c0cd3-8b8a-4cd3-b2dd-7ea1cbbc5562 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Refreshing network info cache for port 3a011f2d-43f9-4359-a063-93f7f4f9d745 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:06:28 np0005539552 nova_compute[233724]: 2025-11-29 08:06:28.606 233728 DEBUG nova.virt.libvirt.driver [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Start _get_guest_xml network_info=[{"id": "3a011f2d-43f9-4359-a063-93f7f4f9d745", "address": "fa:16:3e:ba:42:53", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a011f2d-43", "ovs_interfaceid": "3a011f2d-43f9-4359-a063-93f7f4f9d745", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:06:28 np0005539552 nova_compute[233724]: 2025-11-29 08:06:28.611 233728 WARNING nova.virt.libvirt.driver [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:06:28 np0005539552 nova_compute[233724]: 2025-11-29 08:06:28.614 233728 DEBUG nova.virt.libvirt.host [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:06:28 np0005539552 nova_compute[233724]: 2025-11-29 08:06:28.615 233728 DEBUG nova.virt.libvirt.host [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:06:28 np0005539552 nova_compute[233724]: 2025-11-29 08:06:28.617 233728 DEBUG nova.virt.libvirt.host [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:06:28 np0005539552 nova_compute[233724]: 2025-11-29 08:06:28.617 233728 DEBUG nova.virt.libvirt.host [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:06:28 np0005539552 nova_compute[233724]: 2025-11-29 08:06:28.618 233728 DEBUG nova.virt.libvirt.driver [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:06:28 np0005539552 nova_compute[233724]: 2025-11-29 08:06:28.618 233728 DEBUG nova.virt.hardware [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:06:28 np0005539552 nova_compute[233724]: 2025-11-29 08:06:28.619 233728 DEBUG nova.virt.hardware [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:06:28 np0005539552 nova_compute[233724]: 2025-11-29 08:06:28.619 233728 DEBUG nova.virt.hardware [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:06:28 np0005539552 nova_compute[233724]: 2025-11-29 08:06:28.619 233728 DEBUG nova.virt.hardware [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:06:28 np0005539552 nova_compute[233724]: 2025-11-29 08:06:28.620 233728 DEBUG nova.virt.hardware [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:06:28 np0005539552 nova_compute[233724]: 2025-11-29 08:06:28.620 233728 DEBUG nova.virt.hardware [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:06:28 np0005539552 nova_compute[233724]: 2025-11-29 08:06:28.620 233728 DEBUG nova.virt.hardware [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:06:28 np0005539552 nova_compute[233724]: 2025-11-29 08:06:28.620 233728 DEBUG nova.virt.hardware [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:06:28 np0005539552 nova_compute[233724]: 2025-11-29 08:06:28.620 233728 DEBUG nova.virt.hardware [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:06:28 np0005539552 nova_compute[233724]: 2025-11-29 08:06:28.621 233728 DEBUG nova.virt.hardware [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:06:28 np0005539552 nova_compute[233724]: 2025-11-29 08:06:28.621 233728 DEBUG nova.virt.hardware [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:06:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:28.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:28 np0005539552 nova_compute[233724]: 2025-11-29 08:06:28.624 233728 DEBUG oslo_concurrency.processutils [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:28.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:29 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:06:29 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/620731834' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:06:29 np0005539552 nova_compute[233724]: 2025-11-29 08:06:29.056 233728 DEBUG oslo_concurrency.processutils [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:29 np0005539552 nova_compute[233724]: 2025-11-29 08:06:29.082 233728 DEBUG nova.storage.rbd_utils [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] rbd image d14e9535-21d2-42b8-9c24-d37708970336_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:29 np0005539552 nova_compute[233724]: 2025-11-29 08:06:29.086 233728 DEBUG oslo_concurrency.processutils [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:29 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:06:29 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3968110607' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:06:29 np0005539552 nova_compute[233724]: 2025-11-29 08:06:29.544 233728 DEBUG oslo_concurrency.processutils [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:29 np0005539552 nova_compute[233724]: 2025-11-29 08:06:29.547 233728 DEBUG nova.virt.libvirt.vif [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:06:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1280826612',display_name='tempest-ImagesTestJSON-server-1280826612',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-imagestestjson-server-1280826612',id=64,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='638fd52fccf14f16b56d0860553063f3',ramdisk_id='',reservation_id='r-r6kqgf6d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1682881466',owner_user_name='tempest-ImagesTestJSON-1682881466-project-member'},tags
=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:24Z,user_data=None,user_id='fddc5f5801764ee19d5253e2cab34df3',uuid=d14e9535-21d2-42b8-9c24-d37708970336,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3a011f2d-43f9-4359-a063-93f7f4f9d745", "address": "fa:16:3e:ba:42:53", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a011f2d-43", "ovs_interfaceid": "3a011f2d-43f9-4359-a063-93f7f4f9d745", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:06:29 np0005539552 nova_compute[233724]: 2025-11-29 08:06:29.547 233728 DEBUG nova.network.os_vif_util [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Converting VIF {"id": "3a011f2d-43f9-4359-a063-93f7f4f9d745", "address": "fa:16:3e:ba:42:53", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a011f2d-43", "ovs_interfaceid": "3a011f2d-43f9-4359-a063-93f7f4f9d745", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:06:29 np0005539552 nova_compute[233724]: 2025-11-29 08:06:29.549 233728 DEBUG nova.network.os_vif_util [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:42:53,bridge_name='br-int',has_traffic_filtering=True,id=3a011f2d-43f9-4359-a063-93f7f4f9d745,network=Network(f01d29c1-afcb-4909-9abf-f7d31e4549d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a011f2d-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:06:29 np0005539552 nova_compute[233724]: 2025-11-29 08:06:29.550 233728 DEBUG nova.objects.instance [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lazy-loading 'pci_devices' on Instance uuid d14e9535-21d2-42b8-9c24-d37708970336 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:06:29 np0005539552 nova_compute[233724]: 2025-11-29 08:06:29.570 233728 DEBUG nova.virt.libvirt.driver [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:06:29 np0005539552 nova_compute[233724]:  <uuid>d14e9535-21d2-42b8-9c24-d37708970336</uuid>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:  <name>instance-00000040</name>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:06:29 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:      <nova:name>tempest-ImagesTestJSON-server-1280826612</nova:name>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:06:28</nova:creationTime>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:06:29 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:        <nova:user uuid="fddc5f5801764ee19d5253e2cab34df3">tempest-ImagesTestJSON-1682881466-project-member</nova:user>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:        <nova:project uuid="638fd52fccf14f16b56d0860553063f3">tempest-ImagesTestJSON-1682881466</nova:project>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:        <nova:port uuid="3a011f2d-43f9-4359-a063-93f7f4f9d745">
Nov 29 03:06:29 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:      <entry name="serial">d14e9535-21d2-42b8-9c24-d37708970336</entry>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:      <entry name="uuid">d14e9535-21d2-42b8-9c24-d37708970336</entry>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:06:29 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/d14e9535-21d2-42b8-9c24-d37708970336_disk">
Nov 29 03:06:29 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:06:29 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:06:29 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/d14e9535-21d2-42b8-9c24-d37708970336_disk.config">
Nov 29 03:06:29 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:06:29 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:06:29 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:ba:42:53"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:      <target dev="tap3a011f2d-43"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:06:29 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/d14e9535-21d2-42b8-9c24-d37708970336/console.log" append="off"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:06:29 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:06:29 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:06:29 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:06:29 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:06:29 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:06:29 np0005539552 nova_compute[233724]: 2025-11-29 08:06:29.571 233728 DEBUG nova.compute.manager [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Preparing to wait for external event network-vif-plugged-3a011f2d-43f9-4359-a063-93f7f4f9d745 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:06:29 np0005539552 nova_compute[233724]: 2025-11-29 08:06:29.571 233728 DEBUG oslo_concurrency.lockutils [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "d14e9535-21d2-42b8-9c24-d37708970336-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:29 np0005539552 nova_compute[233724]: 2025-11-29 08:06:29.572 233728 DEBUG oslo_concurrency.lockutils [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "d14e9535-21d2-42b8-9c24-d37708970336-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:29 np0005539552 nova_compute[233724]: 2025-11-29 08:06:29.572 233728 DEBUG oslo_concurrency.lockutils [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "d14e9535-21d2-42b8-9c24-d37708970336-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:29 np0005539552 nova_compute[233724]: 2025-11-29 08:06:29.573 233728 DEBUG nova.virt.libvirt.vif [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:06:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1280826612',display_name='tempest-ImagesTestJSON-server-1280826612',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-imagestestjson-server-1280826612',id=64,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='638fd52fccf14f16b56d0860553063f3',ramdisk_id='',reservation_id='r-r6kqgf6d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1682881466',owner_user_name='tempest-ImagesTestJSON-1682881466-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:24Z,user_data=None,user_id='fddc5f5801764ee19d5253e2cab34df3',uuid=d14e9535-21d2-42b8-9c24-d37708970336,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3a011f2d-43f9-4359-a063-93f7f4f9d745", "address": "fa:16:3e:ba:42:53", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a011f2d-43", "ovs_interfaceid": "3a011f2d-43f9-4359-a063-93f7f4f9d745", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:06:29 np0005539552 nova_compute[233724]: 2025-11-29 08:06:29.573 233728 DEBUG nova.network.os_vif_util [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Converting VIF {"id": "3a011f2d-43f9-4359-a063-93f7f4f9d745", "address": "fa:16:3e:ba:42:53", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a011f2d-43", "ovs_interfaceid": "3a011f2d-43f9-4359-a063-93f7f4f9d745", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:06:29 np0005539552 nova_compute[233724]: 2025-11-29 08:06:29.573 233728 DEBUG nova.network.os_vif_util [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:42:53,bridge_name='br-int',has_traffic_filtering=True,id=3a011f2d-43f9-4359-a063-93f7f4f9d745,network=Network(f01d29c1-afcb-4909-9abf-f7d31e4549d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a011f2d-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:06:29 np0005539552 nova_compute[233724]: 2025-11-29 08:06:29.574 233728 DEBUG os_vif [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:42:53,bridge_name='br-int',has_traffic_filtering=True,id=3a011f2d-43f9-4359-a063-93f7f4f9d745,network=Network(f01d29c1-afcb-4909-9abf-f7d31e4549d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a011f2d-43') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:06:29 np0005539552 nova_compute[233724]: 2025-11-29 08:06:29.574 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:29 np0005539552 nova_compute[233724]: 2025-11-29 08:06:29.575 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:29 np0005539552 nova_compute[233724]: 2025-11-29 08:06:29.575 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:06:29 np0005539552 nova_compute[233724]: 2025-11-29 08:06:29.579 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:29 np0005539552 nova_compute[233724]: 2025-11-29 08:06:29.579 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3a011f2d-43, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:29 np0005539552 nova_compute[233724]: 2025-11-29 08:06:29.579 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3a011f2d-43, col_values=(('external_ids', {'iface-id': '3a011f2d-43f9-4359-a063-93f7f4f9d745', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ba:42:53', 'vm-uuid': 'd14e9535-21d2-42b8-9c24-d37708970336'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:29 np0005539552 nova_compute[233724]: 2025-11-29 08:06:29.581 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:29 np0005539552 NetworkManager[48926]: <info>  [1764403589.5820] manager: (tap3a011f2d-43): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/101)
Nov 29 03:06:29 np0005539552 nova_compute[233724]: 2025-11-29 08:06:29.584 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:06:29 np0005539552 nova_compute[233724]: 2025-11-29 08:06:29.586 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:29 np0005539552 nova_compute[233724]: 2025-11-29 08:06:29.587 233728 INFO os_vif [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:42:53,bridge_name='br-int',has_traffic_filtering=True,id=3a011f2d-43f9-4359-a063-93f7f4f9d745,network=Network(f01d29c1-afcb-4909-9abf-f7d31e4549d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a011f2d-43')#033[00m
Nov 29 03:06:29 np0005539552 nova_compute[233724]: 2025-11-29 08:06:29.643 233728 DEBUG nova.virt.libvirt.driver [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:06:29 np0005539552 nova_compute[233724]: 2025-11-29 08:06:29.643 233728 DEBUG nova.virt.libvirt.driver [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:06:29 np0005539552 nova_compute[233724]: 2025-11-29 08:06:29.643 233728 DEBUG nova.virt.libvirt.driver [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] No VIF found with MAC fa:16:3e:ba:42:53, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:06:29 np0005539552 nova_compute[233724]: 2025-11-29 08:06:29.644 233728 INFO nova.virt.libvirt.driver [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Using config drive#033[00m
Nov 29 03:06:29 np0005539552 nova_compute[233724]: 2025-11-29 08:06:29.666 233728 DEBUG nova.storage.rbd_utils [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] rbd image d14e9535-21d2-42b8-9c24-d37708970336_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:30 np0005539552 nova_compute[233724]: 2025-11-29 08:06:30.069 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:30 np0005539552 nova_compute[233724]: 2025-11-29 08:06:30.574 233728 INFO nova.virt.libvirt.driver [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Creating config drive at /var/lib/nova/instances/d14e9535-21d2-42b8-9c24-d37708970336/disk.config#033[00m
Nov 29 03:06:30 np0005539552 nova_compute[233724]: 2025-11-29 08:06:30.579 233728 DEBUG oslo_concurrency.processutils [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d14e9535-21d2-42b8-9c24-d37708970336/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxnnnc9_y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:30.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:30 np0005539552 nova_compute[233724]: 2025-11-29 08:06:30.712 233728 DEBUG oslo_concurrency.processutils [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d14e9535-21d2-42b8-9c24-d37708970336/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxnnnc9_y" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:30 np0005539552 nova_compute[233724]: 2025-11-29 08:06:30.737 233728 DEBUG nova.storage.rbd_utils [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] rbd image d14e9535-21d2-42b8-9c24-d37708970336_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:30 np0005539552 nova_compute[233724]: 2025-11-29 08:06:30.742 233728 DEBUG oslo_concurrency.processutils [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d14e9535-21d2-42b8-9c24-d37708970336/disk.config d14e9535-21d2-42b8-9c24-d37708970336_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:30.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:30 np0005539552 nova_compute[233724]: 2025-11-29 08:06:30.893 233728 DEBUG oslo_concurrency.processutils [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d14e9535-21d2-42b8-9c24-d37708970336/disk.config d14e9535-21d2-42b8-9c24-d37708970336_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:30 np0005539552 nova_compute[233724]: 2025-11-29 08:06:30.894 233728 INFO nova.virt.libvirt.driver [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Deleting local config drive /var/lib/nova/instances/d14e9535-21d2-42b8-9c24-d37708970336/disk.config because it was imported into RBD.#033[00m
Nov 29 03:06:30 np0005539552 kernel: tap3a011f2d-43: entered promiscuous mode
Nov 29 03:06:30 np0005539552 NetworkManager[48926]: <info>  [1764403590.9380] manager: (tap3a011f2d-43): new Tun device (/org/freedesktop/NetworkManager/Devices/102)
Nov 29 03:06:30 np0005539552 ovn_controller[133798]: 2025-11-29T08:06:30Z|00190|binding|INFO|Claiming lport 3a011f2d-43f9-4359-a063-93f7f4f9d745 for this chassis.
Nov 29 03:06:30 np0005539552 nova_compute[233724]: 2025-11-29 08:06:30.940 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:30 np0005539552 ovn_controller[133798]: 2025-11-29T08:06:30Z|00191|binding|INFO|3a011f2d-43f9-4359-a063-93f7f4f9d745: Claiming fa:16:3e:ba:42:53 10.100.0.7
Nov 29 03:06:30 np0005539552 nova_compute[233724]: 2025-11-29 08:06:30.960 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:30 np0005539552 ovn_controller[133798]: 2025-11-29T08:06:30Z|00192|binding|INFO|Setting lport 3a011f2d-43f9-4359-a063-93f7f4f9d745 ovn-installed in OVS
Nov 29 03:06:30 np0005539552 nova_compute[233724]: 2025-11-29 08:06:30.963 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:30 np0005539552 nova_compute[233724]: 2025-11-29 08:06:30.966 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:30 np0005539552 systemd-udevd[259448]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:06:30 np0005539552 systemd-machined[196379]: New machine qemu-23-instance-00000040.
Nov 29 03:06:30 np0005539552 ovn_controller[133798]: 2025-11-29T08:06:30Z|00193|binding|INFO|Setting lport 3a011f2d-43f9-4359-a063-93f7f4f9d745 up in Southbound
Nov 29 03:06:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:30.972 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:42:53 10.100.0.7'], port_security=['fa:16:3e:ba:42:53 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd14e9535-21d2-42b8-9c24-d37708970336', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f01d29c1-afcb-4909-9abf-f7d31e4549d8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '638fd52fccf14f16b56d0860553063f3', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a57b53e5-9055-46ae-8ab4-d4a8a62173cb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d89b288b-bbc6-47fa-ad12-8aab94ffc78f, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=3a011f2d-43f9-4359-a063-93f7f4f9d745) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:06:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:30.974 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 3a011f2d-43f9-4359-a063-93f7f4f9d745 in datapath f01d29c1-afcb-4909-9abf-f7d31e4549d8 bound to our chassis#033[00m
Nov 29 03:06:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:30.975 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f01d29c1-afcb-4909-9abf-f7d31e4549d8#033[00m
Nov 29 03:06:30 np0005539552 systemd[1]: Started Virtual Machine qemu-23-instance-00000040.
Nov 29 03:06:30 np0005539552 NetworkManager[48926]: <info>  [1764403590.9801] device (tap3a011f2d-43): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:06:30 np0005539552 NetworkManager[48926]: <info>  [1764403590.9809] device (tap3a011f2d-43): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:06:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:30.988 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c70f7e55-df37-4aab-b3e0-aae11088f1f0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:30.989 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf01d29c1-a1 in ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:06:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:30.990 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf01d29c1-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:06:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:30.990 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2ff68449-8f3c-4114-bbf7-61f089798ef7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:30.991 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c5eb5117-2ca5-4832-aba4-617cb0dc6798]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:31.002 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[92026dac-2ff2-4304-9f03-9f80af67f580]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:31.025 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f37c3fcd-ccfe-46be-a33c-4c1319b61a74]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:31.054 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[edb9150d-a36b-4c14-9934-d8adeac644c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:31 np0005539552 NetworkManager[48926]: <info>  [1764403591.0619] manager: (tapf01d29c1-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/103)
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:31.061 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3bba171f-b244-4111-8614-d3d1d96148af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:31.088 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[b141fbe3-57f2-4a87-8b59-e43ef3b9eacf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:31.091 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[1e724d5a-4c22-4245-9a37-68d4b8ae587c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:31 np0005539552 NetworkManager[48926]: <info>  [1764403591.1106] device (tapf01d29c1-a0): carrier: link connected
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:31.115 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[54454099-76cf-45bf-a8a5-32dff4e0e8b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:31.132 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[7c035aed-1e0c-47fe-b4a7-e06c2686b246]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf01d29c1-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:77:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 61], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663679, 'reachable_time': 26791, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259481, 'error': None, 'target': 'ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:31.146 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0d52894e-ae27-4abf-9494-2014eb55a24b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe96:77b2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663679, 'tstamp': 663679}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259482, 'error': None, 'target': 'ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:31.160 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f4de6f18-c184-4e81-a661-244ceb144702]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf01d29c1-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:77:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 61], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663679, 'reachable_time': 26791, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 259483, 'error': None, 'target': 'ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:31.188 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2a34c516-f612-4092-9114-8a9f364d7544]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:31.255 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a6891163-0bda-44f6-bbd6-a5efc862286d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:31.257 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf01d29c1-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:31.258 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:31.258 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf01d29c1-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:31 np0005539552 kernel: tapf01d29c1-a0: entered promiscuous mode
Nov 29 03:06:31 np0005539552 NetworkManager[48926]: <info>  [1764403591.3088] manager: (tapf01d29c1-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/104)
Nov 29 03:06:31 np0005539552 nova_compute[233724]: 2025-11-29 08:06:31.308 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:31.313 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf01d29c1-a0, col_values=(('external_ids', {'iface-id': '2247adf2-4048-41de-ba3c-ac69d728838f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:31 np0005539552 nova_compute[233724]: 2025-11-29 08:06:31.314 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:31 np0005539552 ovn_controller[133798]: 2025-11-29T08:06:31Z|00194|binding|INFO|Releasing lport 2247adf2-4048-41de-ba3c-ac69d728838f from this chassis (sb_readonly=0)
Nov 29 03:06:31 np0005539552 nova_compute[233724]: 2025-11-29 08:06:31.315 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:31.317 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f01d29c1-afcb-4909-9abf-f7d31e4549d8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f01d29c1-afcb-4909-9abf-f7d31e4549d8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:31.318 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[014a0fc3-570f-4e3a-b7ec-617c7fa733d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:31.318 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-f01d29c1-afcb-4909-9abf-f7d31e4549d8
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/f01d29c1-afcb-4909-9abf-f7d31e4549d8.pid.haproxy
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID f01d29c1-afcb-4909-9abf-f7d31e4549d8
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:06:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:31.319 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8', 'env', 'PROCESS_TAG=haproxy-f01d29c1-afcb-4909-9abf-f7d31e4549d8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f01d29c1-afcb-4909-9abf-f7d31e4549d8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:06:31 np0005539552 nova_compute[233724]: 2025-11-29 08:06:31.330 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:31 np0005539552 nova_compute[233724]: 2025-11-29 08:06:31.400 233728 DEBUG nova.network.neutron [req-721bc250-20a6-4dd9-b0d0-98aa1ebf2009 req-f18c0cd3-8b8a-4cd3-b2dd-7ea1cbbc5562 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Updated VIF entry in instance network info cache for port 3a011f2d-43f9-4359-a063-93f7f4f9d745. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:06:31 np0005539552 nova_compute[233724]: 2025-11-29 08:06:31.400 233728 DEBUG nova.network.neutron [req-721bc250-20a6-4dd9-b0d0-98aa1ebf2009 req-f18c0cd3-8b8a-4cd3-b2dd-7ea1cbbc5562 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Updating instance_info_cache with network_info: [{"id": "3a011f2d-43f9-4359-a063-93f7f4f9d745", "address": "fa:16:3e:ba:42:53", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a011f2d-43", "ovs_interfaceid": "3a011f2d-43f9-4359-a063-93f7f4f9d745", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:06:31 np0005539552 nova_compute[233724]: 2025-11-29 08:06:31.443 233728 DEBUG oslo_concurrency.lockutils [req-721bc250-20a6-4dd9-b0d0-98aa1ebf2009 req-f18c0cd3-8b8a-4cd3-b2dd-7ea1cbbc5562 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-d14e9535-21d2-42b8-9c24-d37708970336" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:06:31 np0005539552 nova_compute[233724]: 2025-11-29 08:06:31.592 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403591.5919876, d14e9535-21d2-42b8-9c24-d37708970336 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:06:31 np0005539552 nova_compute[233724]: 2025-11-29 08:06:31.592 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d14e9535-21d2-42b8-9c24-d37708970336] VM Started (Lifecycle Event)#033[00m
Nov 29 03:06:31 np0005539552 nova_compute[233724]: 2025-11-29 08:06:31.612 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:06:31 np0005539552 nova_compute[233724]: 2025-11-29 08:06:31.616 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403591.5921533, d14e9535-21d2-42b8-9c24-d37708970336 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:06:31 np0005539552 nova_compute[233724]: 2025-11-29 08:06:31.617 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d14e9535-21d2-42b8-9c24-d37708970336] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:06:31 np0005539552 nova_compute[233724]: 2025-11-29 08:06:31.641 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:06:31 np0005539552 nova_compute[233724]: 2025-11-29 08:06:31.644 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:06:31 np0005539552 nova_compute[233724]: 2025-11-29 08:06:31.671 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d14e9535-21d2-42b8-9c24-d37708970336] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:06:31 np0005539552 podman[259607]: 2025-11-29 08:06:31.679838573 +0000 UTC m=+0.049571419 container create 3a739deed9cfc8fddc0ee79f4cd63f0ad4efdf3d0afa0be7ff9af3b22fb1a06c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS)
Nov 29 03:06:31 np0005539552 systemd[1]: Started libpod-conmon-3a739deed9cfc8fddc0ee79f4cd63f0ad4efdf3d0afa0be7ff9af3b22fb1a06c.scope.
Nov 29 03:06:31 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:06:31 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a191a0e607718b2a74c09016b21eb6a350f5f24f77b00c2256d0970ede46d36/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:06:31 np0005539552 podman[259607]: 2025-11-29 08:06:31.652320131 +0000 UTC m=+0.022052997 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:06:31 np0005539552 podman[259607]: 2025-11-29 08:06:31.755306721 +0000 UTC m=+0.125039587 container init 3a739deed9cfc8fddc0ee79f4cd63f0ad4efdf3d0afa0be7ff9af3b22fb1a06c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 29 03:06:31 np0005539552 podman[259607]: 2025-11-29 08:06:31.76046532 +0000 UTC m=+0.130198176 container start 3a739deed9cfc8fddc0ee79f4cd63f0ad4efdf3d0afa0be7ff9af3b22fb1a06c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 03:06:31 np0005539552 neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8[259622]: [NOTICE]   (259626) : New worker (259628) forked
Nov 29 03:06:31 np0005539552 neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8[259622]: [NOTICE]   (259626) : Loading success.
Nov 29 03:06:32 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:06:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:32.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:06:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:32.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:33 np0005539552 nova_compute[233724]: 2025-11-29 08:06:33.715 233728 DEBUG nova.compute.manager [req-13e38026-bc3b-4bf8-998d-c1dfa91724c3 req-0b2941c8-7f21-47c4-afba-69d0db741a29 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Received event network-vif-plugged-3a011f2d-43f9-4359-a063-93f7f4f9d745 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:33 np0005539552 nova_compute[233724]: 2025-11-29 08:06:33.715 233728 DEBUG oslo_concurrency.lockutils [req-13e38026-bc3b-4bf8-998d-c1dfa91724c3 req-0b2941c8-7f21-47c4-afba-69d0db741a29 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d14e9535-21d2-42b8-9c24-d37708970336-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:33 np0005539552 nova_compute[233724]: 2025-11-29 08:06:33.715 233728 DEBUG oslo_concurrency.lockutils [req-13e38026-bc3b-4bf8-998d-c1dfa91724c3 req-0b2941c8-7f21-47c4-afba-69d0db741a29 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d14e9535-21d2-42b8-9c24-d37708970336-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:33 np0005539552 nova_compute[233724]: 2025-11-29 08:06:33.716 233728 DEBUG oslo_concurrency.lockutils [req-13e38026-bc3b-4bf8-998d-c1dfa91724c3 req-0b2941c8-7f21-47c4-afba-69d0db741a29 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d14e9535-21d2-42b8-9c24-d37708970336-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:33 np0005539552 nova_compute[233724]: 2025-11-29 08:06:33.716 233728 DEBUG nova.compute.manager [req-13e38026-bc3b-4bf8-998d-c1dfa91724c3 req-0b2941c8-7f21-47c4-afba-69d0db741a29 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Processing event network-vif-plugged-3a011f2d-43f9-4359-a063-93f7f4f9d745 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:06:33 np0005539552 nova_compute[233724]: 2025-11-29 08:06:33.716 233728 DEBUG nova.compute.manager [req-13e38026-bc3b-4bf8-998d-c1dfa91724c3 req-0b2941c8-7f21-47c4-afba-69d0db741a29 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Received event network-vif-plugged-3a011f2d-43f9-4359-a063-93f7f4f9d745 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:06:33 np0005539552 nova_compute[233724]: 2025-11-29 08:06:33.716 233728 DEBUG oslo_concurrency.lockutils [req-13e38026-bc3b-4bf8-998d-c1dfa91724c3 req-0b2941c8-7f21-47c4-afba-69d0db741a29 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d14e9535-21d2-42b8-9c24-d37708970336-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:06:33 np0005539552 nova_compute[233724]: 2025-11-29 08:06:33.716 233728 DEBUG oslo_concurrency.lockutils [req-13e38026-bc3b-4bf8-998d-c1dfa91724c3 req-0b2941c8-7f21-47c4-afba-69d0db741a29 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d14e9535-21d2-42b8-9c24-d37708970336-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:06:33 np0005539552 nova_compute[233724]: 2025-11-29 08:06:33.717 233728 DEBUG oslo_concurrency.lockutils [req-13e38026-bc3b-4bf8-998d-c1dfa91724c3 req-0b2941c8-7f21-47c4-afba-69d0db741a29 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d14e9535-21d2-42b8-9c24-d37708970336-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:06:33 np0005539552 nova_compute[233724]: 2025-11-29 08:06:33.717 233728 DEBUG nova.compute.manager [req-13e38026-bc3b-4bf8-998d-c1dfa91724c3 req-0b2941c8-7f21-47c4-afba-69d0db741a29 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] No waiting events found dispatching network-vif-plugged-3a011f2d-43f9-4359-a063-93f7f4f9d745 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:06:33 np0005539552 nova_compute[233724]: 2025-11-29 08:06:33.717 233728 WARNING nova.compute.manager [req-13e38026-bc3b-4bf8-998d-c1dfa91724c3 req-0b2941c8-7f21-47c4-afba-69d0db741a29 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Received unexpected event network-vif-plugged-3a011f2d-43f9-4359-a063-93f7f4f9d745 for instance with vm_state building and task_state spawning.
Nov 29 03:06:33 np0005539552 nova_compute[233724]: 2025-11-29 08:06:33.717 233728 DEBUG nova.compute.manager [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:06:33 np0005539552 nova_compute[233724]: 2025-11-29 08:06:33.722 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403593.7227747, d14e9535-21d2-42b8-9c24-d37708970336 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:06:33 np0005539552 nova_compute[233724]: 2025-11-29 08:06:33.723 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d14e9535-21d2-42b8-9c24-d37708970336] VM Resumed (Lifecycle Event)
Nov 29 03:06:33 np0005539552 nova_compute[233724]: 2025-11-29 08:06:33.725 233728 DEBUG nova.virt.libvirt.driver [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:06:33 np0005539552 nova_compute[233724]: 2025-11-29 08:06:33.727 233728 INFO nova.virt.libvirt.driver [-] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Instance spawned successfully.
Nov 29 03:06:33 np0005539552 nova_compute[233724]: 2025-11-29 08:06:33.729 233728 DEBUG nova.virt.libvirt.driver [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:06:33 np0005539552 nova_compute[233724]: 2025-11-29 08:06:33.752 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:06:33 np0005539552 nova_compute[233724]: 2025-11-29 08:06:33.758 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:06:33 np0005539552 nova_compute[233724]: 2025-11-29 08:06:33.761 233728 DEBUG nova.virt.libvirt.driver [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:06:33 np0005539552 nova_compute[233724]: 2025-11-29 08:06:33.761 233728 DEBUG nova.virt.libvirt.driver [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:06:33 np0005539552 nova_compute[233724]: 2025-11-29 08:06:33.762 233728 DEBUG nova.virt.libvirt.driver [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:06:33 np0005539552 nova_compute[233724]: 2025-11-29 08:06:33.762 233728 DEBUG nova.virt.libvirt.driver [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:06:33 np0005539552 nova_compute[233724]: 2025-11-29 08:06:33.762 233728 DEBUG nova.virt.libvirt.driver [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:06:33 np0005539552 nova_compute[233724]: 2025-11-29 08:06:33.763 233728 DEBUG nova.virt.libvirt.driver [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:06:33 np0005539552 nova_compute[233724]: 2025-11-29 08:06:33.793 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d14e9535-21d2-42b8-9c24-d37708970336] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:06:33 np0005539552 nova_compute[233724]: 2025-11-29 08:06:33.889 233728 INFO nova.compute.manager [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Took 9.11 seconds to spawn the instance on the hypervisor.
Nov 29 03:06:33 np0005539552 nova_compute[233724]: 2025-11-29 08:06:33.890 233728 DEBUG nova.compute.manager [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:06:33 np0005539552 nova_compute[233724]: 2025-11-29 08:06:33.996 233728 INFO nova.compute.manager [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Took 10.18 seconds to build instance.
Nov 29 03:06:34 np0005539552 nova_compute[233724]: 2025-11-29 08:06:34.023 233728 DEBUG oslo_concurrency.lockutils [None req-4ce2932a-ff66-47ca-aaa0-efc757ddf8da fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "d14e9535-21d2-42b8-9c24-d37708970336" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.264s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:06:34 np0005539552 nova_compute[233724]: 2025-11-29 08:06:34.621 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:06:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:34.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:34.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:35 np0005539552 nova_compute[233724]: 2025-11-29 08:06:35.071 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:06:36 np0005539552 radosgw[83248]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Nov 29 03:06:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:06:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:36.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:06:36 np0005539552 radosgw[83248]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Nov 29 03:06:36 np0005539552 nova_compute[233724]: 2025-11-29 08:06:36.818 233728 DEBUG nova.compute.manager [None req-0d6e2ad8-392e-40c5-a842-57bd483f3199 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:06:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:36.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:36 np0005539552 nova_compute[233724]: 2025-11-29 08:06:36.882 233728 INFO nova.compute.manager [None req-0d6e2ad8-392e-40c5-a842-57bd483f3199 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] instance snapshotting
Nov 29 03:06:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:37 np0005539552 nova_compute[233724]: 2025-11-29 08:06:37.306 233728 INFO nova.virt.libvirt.driver [None req-0d6e2ad8-392e-40c5-a842-57bd483f3199 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Beginning live snapshot process
Nov 29 03:06:37 np0005539552 nova_compute[233724]: 2025-11-29 08:06:37.510 233728 DEBUG nova.virt.libvirt.imagebackend [None req-0d6e2ad8-392e-40c5-a842-57bd483f3199 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] No parent info for 4873db8c-b414-4e95-acd9-77caabebe722; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 29 03:06:37 np0005539552 nova_compute[233724]: 2025-11-29 08:06:37.855 233728 DEBUG nova.storage.rbd_utils [None req-0d6e2ad8-392e-40c5-a842-57bd483f3199 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] creating snapshot(6309381bdadb4fbc870e9bd2fc7060c5) on rbd image(d14e9535-21d2-42b8-9c24-d37708970336_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 03:06:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e243 e243: 3 total, 3 up, 3 in
Nov 29 03:06:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:38.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:38 np0005539552 nova_compute[233724]: 2025-11-29 08:06:38.650 233728 DEBUG nova.storage.rbd_utils [None req-0d6e2ad8-392e-40c5-a842-57bd483f3199 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] cloning vms/d14e9535-21d2-42b8-9c24-d37708970336_disk@6309381bdadb4fbc870e9bd2fc7060c5 to images/e2197e65-1840-49d9-98ce-4e37b9f34409 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 29 03:06:38 np0005539552 nova_compute[233724]: 2025-11-29 08:06:38.777 233728 DEBUG nova.storage.rbd_utils [None req-0d6e2ad8-392e-40c5-a842-57bd483f3199 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] flattening images/e2197e65-1840-49d9-98ce-4e37b9f34409 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 29 03:06:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:38.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:39 np0005539552 nova_compute[233724]: 2025-11-29 08:06:39.053 233728 DEBUG nova.storage.rbd_utils [None req-0d6e2ad8-392e-40c5-a842-57bd483f3199 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] removing snapshot(6309381bdadb4fbc870e9bd2fc7060c5) on rbd image(d14e9535-21d2-42b8-9c24-d37708970336_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 29 03:06:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:06:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3615023253' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:06:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:06:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3615023253' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:06:39 np0005539552 nova_compute[233724]: 2025-11-29 08:06:39.627 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:06:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e244 e244: 3 total, 3 up, 3 in
Nov 29 03:06:39 np0005539552 nova_compute[233724]: 2025-11-29 08:06:39.677 233728 DEBUG nova.storage.rbd_utils [None req-0d6e2ad8-392e-40c5-a842-57bd483f3199 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] creating snapshot(snap) on rbd image(e2197e65-1840-49d9-98ce-4e37b9f34409) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 03:06:40 np0005539552 nova_compute[233724]: 2025-11-29 08:06:40.073 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:06:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:40.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:40 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e245 e245: 3 total, 3 up, 3 in
Nov 29 03:06:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:40.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver [None req-0d6e2ad8-392e-40c5-a842-57bd483f3199 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image e2197e65-1840-49d9-98ce-4e37b9f34409 could not be found.
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver     image = self._client.call(
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID e2197e65-1840-49d9-98ce-4e37b9f34409
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver 
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver 
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver     image = self._client.call(
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image e2197e65-1840-49d9-98ce-4e37b9f34409 could not be found.
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.112 233728 ERROR nova.virt.libvirt.driver 
Nov 29 03:06:41 np0005539552 nova_compute[233724]: 2025-11-29 08:06:41.174 233728 DEBUG nova.storage.rbd_utils [None req-0d6e2ad8-392e-40c5-a842-57bd483f3199 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] removing snapshot(snap) on rbd image(e2197e65-1840-49d9-98ce-4e37b9f34409) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 29 03:06:41 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e246 e246: 3 total, 3 up, 3 in
Nov 29 03:06:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:42.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:42 np0005539552 nova_compute[233724]: 2025-11-29 08:06:42.789 233728 WARNING nova.compute.manager [None req-0d6e2ad8-392e-40c5-a842-57bd483f3199 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Image not found during snapshot: nova.exception.ImageNotFound: Image e2197e65-1840-49d9-98ce-4e37b9f34409 could not be found.
Nov 29 03:06:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:42.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:44 np0005539552 nova_compute[233724]: 2025-11-29 08:06:44.631 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:06:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:44.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:44 np0005539552 nova_compute[233724]: 2025-11-29 08:06:44.846 233728 DEBUG oslo_concurrency.lockutils [None req-54ef5e91-bcae-49e2-8440-bb1a09769ef3 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "d14e9535-21d2-42b8-9c24-d37708970336" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:06:44 np0005539552 nova_compute[233724]: 2025-11-29 08:06:44.847 233728 DEBUG oslo_concurrency.lockutils [None req-54ef5e91-bcae-49e2-8440-bb1a09769ef3 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "d14e9535-21d2-42b8-9c24-d37708970336" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:06:44 np0005539552 nova_compute[233724]: 2025-11-29 08:06:44.848 233728 DEBUG oslo_concurrency.lockutils [None req-54ef5e91-bcae-49e2-8440-bb1a09769ef3 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "d14e9535-21d2-42b8-9c24-d37708970336-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:06:44 np0005539552 nova_compute[233724]: 2025-11-29 08:06:44.848 233728 DEBUG oslo_concurrency.lockutils [None req-54ef5e91-bcae-49e2-8440-bb1a09769ef3 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "d14e9535-21d2-42b8-9c24-d37708970336-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:06:44 np0005539552 nova_compute[233724]: 2025-11-29 08:06:44.849 233728 DEBUG oslo_concurrency.lockutils [None req-54ef5e91-bcae-49e2-8440-bb1a09769ef3 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "d14e9535-21d2-42b8-9c24-d37708970336-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:06:44 np0005539552 nova_compute[233724]: 2025-11-29 08:06:44.850 233728 INFO nova.compute.manager [None req-54ef5e91-bcae-49e2-8440-bb1a09769ef3 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Terminating instance
Nov 29 03:06:44 np0005539552 nova_compute[233724]: 2025-11-29 08:06:44.851 233728 DEBUG nova.compute.manager [None req-54ef5e91-bcae-49e2-8440-bb1a09769ef3 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 03:06:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:44.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:44 np0005539552 kernel: tap3a011f2d-43 (unregistering): left promiscuous mode
Nov 29 03:06:44 np0005539552 NetworkManager[48926]: <info>  [1764403604.8951] device (tap3a011f2d-43): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:06:44 np0005539552 ovn_controller[133798]: 2025-11-29T08:06:44Z|00195|binding|INFO|Releasing lport 3a011f2d-43f9-4359-a063-93f7f4f9d745 from this chassis (sb_readonly=0)
Nov 29 03:06:44 np0005539552 ovn_controller[133798]: 2025-11-29T08:06:44Z|00196|binding|INFO|Setting lport 3a011f2d-43f9-4359-a063-93f7f4f9d745 down in Southbound
Nov 29 03:06:44 np0005539552 ovn_controller[133798]: 2025-11-29T08:06:44Z|00197|binding|INFO|Removing iface tap3a011f2d-43 ovn-installed in OVS
Nov 29 03:06:44 np0005539552 nova_compute[233724]: 2025-11-29 08:06:44.939 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:44 np0005539552 nova_compute[233724]: 2025-11-29 08:06:44.941 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:44.949 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:42:53 10.100.0.7'], port_security=['fa:16:3e:ba:42:53 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd14e9535-21d2-42b8-9c24-d37708970336', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f01d29c1-afcb-4909-9abf-f7d31e4549d8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '638fd52fccf14f16b56d0860553063f3', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a57b53e5-9055-46ae-8ab4-d4a8a62173cb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d89b288b-bbc6-47fa-ad12-8aab94ffc78f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=3a011f2d-43f9-4359-a063-93f7f4f9d745) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:06:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:44.951 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 3a011f2d-43f9-4359-a063-93f7f4f9d745 in datapath f01d29c1-afcb-4909-9abf-f7d31e4549d8 unbound from our chassis#033[00m
Nov 29 03:06:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:44.953 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f01d29c1-afcb-4909-9abf-f7d31e4549d8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:06:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:44.954 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[8ad9e02e-ca97-42cf-bb9c-7210e745a5d6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:44.955 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8 namespace which is not needed anymore#033[00m
Nov 29 03:06:44 np0005539552 nova_compute[233724]: 2025-11-29 08:06:44.963 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:44 np0005539552 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000040.scope: Deactivated successfully.
Nov 29 03:06:44 np0005539552 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000040.scope: Consumed 11.781s CPU time.
Nov 29 03:06:44 np0005539552 systemd-machined[196379]: Machine qemu-23-instance-00000040 terminated.
Nov 29 03:06:45 np0005539552 neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8[259622]: [NOTICE]   (259626) : haproxy version is 2.8.14-c23fe91
Nov 29 03:06:45 np0005539552 neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8[259622]: [NOTICE]   (259626) : path to executable is /usr/sbin/haproxy
Nov 29 03:06:45 np0005539552 neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8[259622]: [WARNING]  (259626) : Exiting Master process...
Nov 29 03:06:45 np0005539552 neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8[259622]: [ALERT]    (259626) : Current worker (259628) exited with code 143 (Terminated)
Nov 29 03:06:45 np0005539552 neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8[259622]: [WARNING]  (259626) : All workers exited. Exiting... (0)
Nov 29 03:06:45 np0005539552 systemd[1]: libpod-3a739deed9cfc8fddc0ee79f4cd63f0ad4efdf3d0afa0be7ff9af3b22fb1a06c.scope: Deactivated successfully.
Nov 29 03:06:45 np0005539552 nova_compute[233724]: 2025-11-29 08:06:45.075 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:45 np0005539552 podman[259850]: 2025-11-29 08:06:45.081665001 +0000 UTC m=+0.043846214 container died 3a739deed9cfc8fddc0ee79f4cd63f0ad4efdf3d0afa0be7ff9af3b22fb1a06c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:06:45 np0005539552 nova_compute[233724]: 2025-11-29 08:06:45.084 233728 INFO nova.virt.libvirt.driver [-] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Instance destroyed successfully.#033[00m
Nov 29 03:06:45 np0005539552 nova_compute[233724]: 2025-11-29 08:06:45.085 233728 DEBUG nova.objects.instance [None req-54ef5e91-bcae-49e2-8440-bb1a09769ef3 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lazy-loading 'resources' on Instance uuid d14e9535-21d2-42b8-9c24-d37708970336 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:06:45 np0005539552 nova_compute[233724]: 2025-11-29 08:06:45.104 233728 DEBUG nova.virt.libvirt.vif [None req-54ef5e91-bcae-49e2-8440-bb1a09769ef3 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:06:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1280826612',display_name='tempest-ImagesTestJSON-server-1280826612',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-imagestestjson-server-1280826612',id=64,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:06:33Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='638fd52fccf14f16b56d0860553063f3',ramdisk_id='',reservation_id='r-r6kqgf6d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_mi
n_ram='0',owner_project_name='tempest-ImagesTestJSON-1682881466',owner_user_name='tempest-ImagesTestJSON-1682881466-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:06:42Z,user_data=None,user_id='fddc5f5801764ee19d5253e2cab34df3',uuid=d14e9535-21d2-42b8-9c24-d37708970336,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3a011f2d-43f9-4359-a063-93f7f4f9d745", "address": "fa:16:3e:ba:42:53", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a011f2d-43", "ovs_interfaceid": "3a011f2d-43f9-4359-a063-93f7f4f9d745", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:06:45 np0005539552 nova_compute[233724]: 2025-11-29 08:06:45.106 233728 DEBUG nova.network.os_vif_util [None req-54ef5e91-bcae-49e2-8440-bb1a09769ef3 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Converting VIF {"id": "3a011f2d-43f9-4359-a063-93f7f4f9d745", "address": "fa:16:3e:ba:42:53", "network": {"id": "f01d29c1-afcb-4909-9abf-f7d31e4549d8", "bridge": "br-int", "label": "tempest-ImagesTestJSON-2001288652-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "638fd52fccf14f16b56d0860553063f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a011f2d-43", "ovs_interfaceid": "3a011f2d-43f9-4359-a063-93f7f4f9d745", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:06:45 np0005539552 nova_compute[233724]: 2025-11-29 08:06:45.107 233728 DEBUG nova.network.os_vif_util [None req-54ef5e91-bcae-49e2-8440-bb1a09769ef3 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:42:53,bridge_name='br-int',has_traffic_filtering=True,id=3a011f2d-43f9-4359-a063-93f7f4f9d745,network=Network(f01d29c1-afcb-4909-9abf-f7d31e4549d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a011f2d-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:06:45 np0005539552 nova_compute[233724]: 2025-11-29 08:06:45.107 233728 DEBUG os_vif [None req-54ef5e91-bcae-49e2-8440-bb1a09769ef3 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:42:53,bridge_name='br-int',has_traffic_filtering=True,id=3a011f2d-43f9-4359-a063-93f7f4f9d745,network=Network(f01d29c1-afcb-4909-9abf-f7d31e4549d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a011f2d-43') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:06:45 np0005539552 nova_compute[233724]: 2025-11-29 08:06:45.110 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:45 np0005539552 nova_compute[233724]: 2025-11-29 08:06:45.110 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3a011f2d-43, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:45 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3a739deed9cfc8fddc0ee79f4cd63f0ad4efdf3d0afa0be7ff9af3b22fb1a06c-userdata-shm.mount: Deactivated successfully.
Nov 29 03:06:45 np0005539552 nova_compute[233724]: 2025-11-29 08:06:45.111 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:45 np0005539552 nova_compute[233724]: 2025-11-29 08:06:45.114 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:06:45 np0005539552 systemd[1]: var-lib-containers-storage-overlay-1a191a0e607718b2a74c09016b21eb6a350f5f24f77b00c2256d0970ede46d36-merged.mount: Deactivated successfully.
Nov 29 03:06:45 np0005539552 nova_compute[233724]: 2025-11-29 08:06:45.116 233728 INFO os_vif [None req-54ef5e91-bcae-49e2-8440-bb1a09769ef3 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:42:53,bridge_name='br-int',has_traffic_filtering=True,id=3a011f2d-43f9-4359-a063-93f7f4f9d745,network=Network(f01d29c1-afcb-4909-9abf-f7d31e4549d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a011f2d-43')#033[00m
Nov 29 03:06:45 np0005539552 podman[259850]: 2025-11-29 08:06:45.124339943 +0000 UTC m=+0.086521156 container cleanup 3a739deed9cfc8fddc0ee79f4cd63f0ad4efdf3d0afa0be7ff9af3b22fb1a06c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:06:45 np0005539552 systemd[1]: libpod-conmon-3a739deed9cfc8fddc0ee79f4cd63f0ad4efdf3d0afa0be7ff9af3b22fb1a06c.scope: Deactivated successfully.
Nov 29 03:06:45 np0005539552 podman[259907]: 2025-11-29 08:06:45.184774334 +0000 UTC m=+0.040180005 container remove 3a739deed9cfc8fddc0ee79f4cd63f0ad4efdf3d0afa0be7ff9af3b22fb1a06c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:06:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:45.191 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[35491bd0-bd2a-4601-8c2d-f668ca546087]: (4, ('Sat Nov 29 08:06:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8 (3a739deed9cfc8fddc0ee79f4cd63f0ad4efdf3d0afa0be7ff9af3b22fb1a06c)\n3a739deed9cfc8fddc0ee79f4cd63f0ad4efdf3d0afa0be7ff9af3b22fb1a06c\nSat Nov 29 08:06:45 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8 (3a739deed9cfc8fddc0ee79f4cd63f0ad4efdf3d0afa0be7ff9af3b22fb1a06c)\n3a739deed9cfc8fddc0ee79f4cd63f0ad4efdf3d0afa0be7ff9af3b22fb1a06c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:45.192 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[360e0bc9-9a89-461f-8352-86edc4b32b0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:45.193 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf01d29c1-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:45 np0005539552 kernel: tapf01d29c1-a0: left promiscuous mode
Nov 29 03:06:45 np0005539552 nova_compute[233724]: 2025-11-29 08:06:45.195 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:45 np0005539552 nova_compute[233724]: 2025-11-29 08:06:45.211 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:45.213 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[7c08f766-0822-4f33-87e1-7f57c67bdb9d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:45.227 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d64cc4f4-d030-4c64-96a3-3ff1531e9edb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:45.227 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3ae43e4a-bb9d-4c70-870c-5d3e6b62f070]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:45.244 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[29037fd4-f3b9-4959-a215-0d5ab1de3298]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663673, 'reachable_time': 44467, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259925, 'error': None, 'target': 'ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:45.247 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f01d29c1-afcb-4909-9abf-f7d31e4549d8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:06:45 np0005539552 systemd[1]: run-netns-ovnmeta\x2df01d29c1\x2dafcb\x2d4909\x2d9abf\x2df7d31e4549d8.mount: Deactivated successfully.
Nov 29 03:06:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:06:45.248 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[b99a070e-fd26-4344-a9c8-f35bab8d08ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:45 np0005539552 nova_compute[233724]: 2025-11-29 08:06:45.255 233728 DEBUG nova.compute.manager [req-0282f150-e7f3-4eab-adc5-2a00c18c5d8d req-ba60da0e-339f-499d-9a89-254e8cf6a446 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Received event network-vif-unplugged-3a011f2d-43f9-4359-a063-93f7f4f9d745 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:45 np0005539552 nova_compute[233724]: 2025-11-29 08:06:45.255 233728 DEBUG oslo_concurrency.lockutils [req-0282f150-e7f3-4eab-adc5-2a00c18c5d8d req-ba60da0e-339f-499d-9a89-254e8cf6a446 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d14e9535-21d2-42b8-9c24-d37708970336-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:45 np0005539552 nova_compute[233724]: 2025-11-29 08:06:45.255 233728 DEBUG oslo_concurrency.lockutils [req-0282f150-e7f3-4eab-adc5-2a00c18c5d8d req-ba60da0e-339f-499d-9a89-254e8cf6a446 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d14e9535-21d2-42b8-9c24-d37708970336-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:45 np0005539552 nova_compute[233724]: 2025-11-29 08:06:45.256 233728 DEBUG oslo_concurrency.lockutils [req-0282f150-e7f3-4eab-adc5-2a00c18c5d8d req-ba60da0e-339f-499d-9a89-254e8cf6a446 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d14e9535-21d2-42b8-9c24-d37708970336-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:45 np0005539552 nova_compute[233724]: 2025-11-29 08:06:45.256 233728 DEBUG nova.compute.manager [req-0282f150-e7f3-4eab-adc5-2a00c18c5d8d req-ba60da0e-339f-499d-9a89-254e8cf6a446 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] No waiting events found dispatching network-vif-unplugged-3a011f2d-43f9-4359-a063-93f7f4f9d745 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:06:45 np0005539552 nova_compute[233724]: 2025-11-29 08:06:45.256 233728 DEBUG nova.compute.manager [req-0282f150-e7f3-4eab-adc5-2a00c18c5d8d req-ba60da0e-339f-499d-9a89-254e8cf6a446 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Received event network-vif-unplugged-3a011f2d-43f9-4359-a063-93f7f4f9d745 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:06:46 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e247 e247: 3 total, 3 up, 3 in
Nov 29 03:06:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:46.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:06:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:46.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:06:46 np0005539552 podman[259928]: 2025-11-29 08:06:46.979860713 +0000 UTC m=+0.061476871 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:06:47 np0005539552 podman[259929]: 2025-11-29 08:06:47.003853051 +0000 UTC m=+0.084753359 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:06:47 np0005539552 podman[259930]: 2025-11-29 08:06:47.032588906 +0000 UTC m=+0.110021061 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:06:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:47 np0005539552 nova_compute[233724]: 2025-11-29 08:06:47.404 233728 DEBUG nova.compute.manager [req-a690666a-c556-4efe-bb0d-0e919f098820 req-b51be221-e120-4bd0-b51a-0810ef14f249 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Received event network-vif-plugged-3a011f2d-43f9-4359-a063-93f7f4f9d745 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:47 np0005539552 nova_compute[233724]: 2025-11-29 08:06:47.404 233728 DEBUG oslo_concurrency.lockutils [req-a690666a-c556-4efe-bb0d-0e919f098820 req-b51be221-e120-4bd0-b51a-0810ef14f249 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d14e9535-21d2-42b8-9c24-d37708970336-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:47 np0005539552 nova_compute[233724]: 2025-11-29 08:06:47.405 233728 DEBUG oslo_concurrency.lockutils [req-a690666a-c556-4efe-bb0d-0e919f098820 req-b51be221-e120-4bd0-b51a-0810ef14f249 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d14e9535-21d2-42b8-9c24-d37708970336-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:47 np0005539552 nova_compute[233724]: 2025-11-29 08:06:47.405 233728 DEBUG oslo_concurrency.lockutils [req-a690666a-c556-4efe-bb0d-0e919f098820 req-b51be221-e120-4bd0-b51a-0810ef14f249 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d14e9535-21d2-42b8-9c24-d37708970336-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:47 np0005539552 nova_compute[233724]: 2025-11-29 08:06:47.405 233728 DEBUG nova.compute.manager [req-a690666a-c556-4efe-bb0d-0e919f098820 req-b51be221-e120-4bd0-b51a-0810ef14f249 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] No waiting events found dispatching network-vif-plugged-3a011f2d-43f9-4359-a063-93f7f4f9d745 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:06:47 np0005539552 nova_compute[233724]: 2025-11-29 08:06:47.405 233728 WARNING nova.compute.manager [req-a690666a-c556-4efe-bb0d-0e919f098820 req-b51be221-e120-4bd0-b51a-0810ef14f249 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Received unexpected event network-vif-plugged-3a011f2d-43f9-4359-a063-93f7f4f9d745 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:06:47 np0005539552 nova_compute[233724]: 2025-11-29 08:06:47.654 233728 INFO nova.virt.libvirt.driver [None req-54ef5e91-bcae-49e2-8440-bb1a09769ef3 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Deleting instance files /var/lib/nova/instances/d14e9535-21d2-42b8-9c24-d37708970336_del#033[00m
Nov 29 03:06:47 np0005539552 nova_compute[233724]: 2025-11-29 08:06:47.654 233728 INFO nova.virt.libvirt.driver [None req-54ef5e91-bcae-49e2-8440-bb1a09769ef3 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Deletion of /var/lib/nova/instances/d14e9535-21d2-42b8-9c24-d37708970336_del complete#033[00m
Nov 29 03:06:47 np0005539552 nova_compute[233724]: 2025-11-29 08:06:47.745 233728 INFO nova.compute.manager [None req-54ef5e91-bcae-49e2-8440-bb1a09769ef3 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Took 2.89 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:06:47 np0005539552 nova_compute[233724]: 2025-11-29 08:06:47.746 233728 DEBUG oslo.service.loopingcall [None req-54ef5e91-bcae-49e2-8440-bb1a09769ef3 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:06:47 np0005539552 nova_compute[233724]: 2025-11-29 08:06:47.746 233728 DEBUG nova.compute.manager [-] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:06:47 np0005539552 nova_compute[233724]: 2025-11-29 08:06:47.746 233728 DEBUG nova.network.neutron [-] [instance: d14e9535-21d2-42b8-9c24-d37708970336] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:06:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:48.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:48.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:50 np0005539552 nova_compute[233724]: 2025-11-29 08:06:50.138 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:50.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:50.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:51 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e248 e248: 3 total, 3 up, 3 in
Nov 29 03:06:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:52 np0005539552 nova_compute[233724]: 2025-11-29 08:06:52.387 233728 DEBUG nova.network.neutron [-] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:06:52 np0005539552 nova_compute[233724]: 2025-11-29 08:06:52.609 233728 INFO nova.compute.manager [-] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Took 4.86 seconds to deallocate network for instance.#033[00m
Nov 29 03:06:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:52.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:52 np0005539552 nova_compute[233724]: 2025-11-29 08:06:52.693 233728 DEBUG oslo_concurrency.lockutils [None req-54ef5e91-bcae-49e2-8440-bb1a09769ef3 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:52 np0005539552 nova_compute[233724]: 2025-11-29 08:06:52.694 233728 DEBUG oslo_concurrency.lockutils [None req-54ef5e91-bcae-49e2-8440-bb1a09769ef3 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:52 np0005539552 nova_compute[233724]: 2025-11-29 08:06:52.797 233728 DEBUG oslo_concurrency.processutils [None req-54ef5e91-bcae-49e2-8440-bb1a09769ef3 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:52.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:53 np0005539552 nova_compute[233724]: 2025-11-29 08:06:53.042 233728 DEBUG nova.compute.manager [req-7ac4563c-965a-4f93-95b3-3d75851a8b71 req-1ca7029b-7af6-47ba-8356-d029c08b9794 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Received event network-vif-deleted-3a011f2d-43f9-4359-a063-93f7f4f9d745 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:06:53 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2790593566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:06:53 np0005539552 nova_compute[233724]: 2025-11-29 08:06:53.232 233728 DEBUG oslo_concurrency.processutils [None req-54ef5e91-bcae-49e2-8440-bb1a09769ef3 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:53 np0005539552 nova_compute[233724]: 2025-11-29 08:06:53.237 233728 DEBUG nova.compute.provider_tree [None req-54ef5e91-bcae-49e2-8440-bb1a09769ef3 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:06:53 np0005539552 nova_compute[233724]: 2025-11-29 08:06:53.259 233728 DEBUG nova.scheduler.client.report [None req-54ef5e91-bcae-49e2-8440-bb1a09769ef3 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:06:53 np0005539552 nova_compute[233724]: 2025-11-29 08:06:53.290 233728 DEBUG oslo_concurrency.lockutils [None req-54ef5e91-bcae-49e2-8440-bb1a09769ef3 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:53 np0005539552 nova_compute[233724]: 2025-11-29 08:06:53.317 233728 INFO nova.scheduler.client.report [None req-54ef5e91-bcae-49e2-8440-bb1a09769ef3 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Deleted allocations for instance d14e9535-21d2-42b8-9c24-d37708970336#033[00m
Nov 29 03:06:53 np0005539552 nova_compute[233724]: 2025-11-29 08:06:53.399 233728 DEBUG oslo_concurrency.lockutils [None req-54ef5e91-bcae-49e2-8440-bb1a09769ef3 fddc5f5801764ee19d5253e2cab34df3 638fd52fccf14f16b56d0860553063f3 - - default default] Lock "d14e9535-21d2-42b8-9c24-d37708970336" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.551s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:54.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:54.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:54 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:06:54 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:06:54 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:06:54 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:06:54 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:06:54 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:06:54 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 03:06:54 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 03:06:54 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:06:54 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:06:54 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:06:55 np0005539552 nova_compute[233724]: 2025-11-29 08:06:55.139 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:06:55 np0005539552 nova_compute[233724]: 2025-11-29 08:06:55.141 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:55 np0005539552 nova_compute[233724]: 2025-11-29 08:06:55.141 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 29 03:06:55 np0005539552 nova_compute[233724]: 2025-11-29 08:06:55.141 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 03:06:55 np0005539552 nova_compute[233724]: 2025-11-29 08:06:55.141 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 03:06:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:56.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:56.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:57 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:58.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:06:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:06:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:58.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:07:00 np0005539552 nova_compute[233724]: 2025-11-29 08:07:00.082 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403605.0811381, d14e9535-21d2-42b8-9c24-d37708970336 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:07:00 np0005539552 nova_compute[233724]: 2025-11-29 08:07:00.082 233728 INFO nova.compute.manager [-] [instance: d14e9535-21d2-42b8-9c24-d37708970336] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:07:00 np0005539552 nova_compute[233724]: 2025-11-29 08:07:00.143 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:07:00 np0005539552 nova_compute[233724]: 2025-11-29 08:07:00.145 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:07:00 np0005539552 nova_compute[233724]: 2025-11-29 08:07:00.146 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 29 03:07:00 np0005539552 nova_compute[233724]: 2025-11-29 08:07:00.146 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 03:07:00 np0005539552 nova_compute[233724]: 2025-11-29 08:07:00.191 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:00 np0005539552 nova_compute[233724]: 2025-11-29 08:07:00.192 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 03:07:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:00.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:00 np0005539552 nova_compute[233724]: 2025-11-29 08:07:00.895 233728 DEBUG nova.compute.manager [None req-585a8daa-3052-4022-8028-e9667ec3e5a7 - - - - - -] [instance: d14e9535-21d2-42b8-9c24-d37708970336] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:07:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:07:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:00.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:07:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:02.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:07:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:02.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:07:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:04.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:04 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:07:04 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:07:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:04.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:05 np0005539552 nova_compute[233724]: 2025-11-29 08:07:05.192 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:06.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:07:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:06.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:07:07 np0005539552 nova_compute[233724]: 2025-11-29 08:07:07.233 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:07 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:07 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:07.569 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:07:07 np0005539552 nova_compute[233724]: 2025-11-29 08:07:07.569 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:07 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:07.570 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:07:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:08.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:07:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:08.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:07:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:09.572 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:10 np0005539552 nova_compute[233724]: 2025-11-29 08:07:10.195 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:10.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:10.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:12.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:12.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:14.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:14.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:15 np0005539552 nova_compute[233724]: 2025-11-29 08:07:15.196 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:07:15 np0005539552 nova_compute[233724]: 2025-11-29 08:07:15.198 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:15 np0005539552 nova_compute[233724]: 2025-11-29 08:07:15.198 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 29 03:07:15 np0005539552 nova_compute[233724]: 2025-11-29 08:07:15.198 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 03:07:15 np0005539552 nova_compute[233724]: 2025-11-29 08:07:15.199 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 03:07:15 np0005539552 nova_compute[233724]: 2025-11-29 08:07:15.200 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:16.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:16.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:17 np0005539552 podman[260430]: 2025-11-29 08:07:17.963224198 +0000 UTC m=+0.056314762 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd)
Nov 29 03:07:17 np0005539552 podman[260431]: 2025-11-29 08:07:17.963735271 +0000 UTC m=+0.054902813 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 29 03:07:17 np0005539552 podman[260432]: 2025-11-29 08:07:17.988698055 +0000 UTC m=+0.076535277 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 03:07:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:18.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:18 np0005539552 nova_compute[233724]: 2025-11-29 08:07:18.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:18 np0005539552 nova_compute[233724]: 2025-11-29 08:07:18.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:18.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:19 np0005539552 nova_compute[233724]: 2025-11-29 08:07:19.000 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:19 np0005539552 nova_compute[233724]: 2025-11-29 08:07:19.000 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:19 np0005539552 nova_compute[233724]: 2025-11-29 08:07:19.001 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:19 np0005539552 nova_compute[233724]: 2025-11-29 08:07:19.001 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:07:19 np0005539552 nova_compute[233724]: 2025-11-29 08:07:19.001 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:19 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:07:19 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3774735306' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:07:19 np0005539552 nova_compute[233724]: 2025-11-29 08:07:19.455 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:19 np0005539552 nova_compute[233724]: 2025-11-29 08:07:19.619 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:07:19 np0005539552 nova_compute[233724]: 2025-11-29 08:07:19.621 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4620MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:07:19 np0005539552 nova_compute[233724]: 2025-11-29 08:07:19.621 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:19 np0005539552 nova_compute[233724]: 2025-11-29 08:07:19.621 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:19 np0005539552 nova_compute[233724]: 2025-11-29 08:07:19.719 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:07:19 np0005539552 nova_compute[233724]: 2025-11-29 08:07:19.720 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:07:19 np0005539552 nova_compute[233724]: 2025-11-29 08:07:19.750 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:20 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:07:20 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/915451856' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:07:20 np0005539552 nova_compute[233724]: 2025-11-29 08:07:20.177 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:20 np0005539552 nova_compute[233724]: 2025-11-29 08:07:20.184 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:07:20 np0005539552 nova_compute[233724]: 2025-11-29 08:07:20.199 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:20 np0005539552 nova_compute[233724]: 2025-11-29 08:07:20.207 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:07:20 np0005539552 nova_compute[233724]: 2025-11-29 08:07:20.239 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:07:20 np0005539552 nova_compute[233724]: 2025-11-29 08:07:20.239 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:20.615 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:20.616 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:20.617 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:20.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:07:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:20.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:07:21 np0005539552 nova_compute[233724]: 2025-11-29 08:07:21.239 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:21 np0005539552 nova_compute[233724]: 2025-11-29 08:07:21.240 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:07:21 np0005539552 nova_compute[233724]: 2025-11-29 08:07:21.919 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:21 np0005539552 nova_compute[233724]: 2025-11-29 08:07:21.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:21 np0005539552 nova_compute[233724]: 2025-11-29 08:07:21.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:21 np0005539552 nova_compute[233724]: 2025-11-29 08:07:21.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:07:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:22.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:07:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:22.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:24 np0005539552 nova_compute[233724]: 2025-11-29 08:07:24.217 233728 DEBUG oslo_concurrency.lockutils [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Acquiring lock "ed7a6200-c4d2-4554-bdb4-57e02fa79386" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:24 np0005539552 nova_compute[233724]: 2025-11-29 08:07:24.218 233728 DEBUG oslo_concurrency.lockutils [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lock "ed7a6200-c4d2-4554-bdb4-57e02fa79386" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:24 np0005539552 nova_compute[233724]: 2025-11-29 08:07:24.237 233728 DEBUG nova.compute.manager [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:07:24 np0005539552 nova_compute[233724]: 2025-11-29 08:07:24.294 233728 DEBUG oslo_concurrency.lockutils [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:24 np0005539552 nova_compute[233724]: 2025-11-29 08:07:24.294 233728 DEBUG oslo_concurrency.lockutils [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:24 np0005539552 nova_compute[233724]: 2025-11-29 08:07:24.300 233728 DEBUG nova.virt.hardware [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:07:24 np0005539552 nova_compute[233724]: 2025-11-29 08:07:24.300 233728 INFO nova.compute.claims [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:07:24 np0005539552 nova_compute[233724]: 2025-11-29 08:07:24.394 233728 DEBUG oslo_concurrency.processutils [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 03:07:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:24.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 03:07:24 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:07:24 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1578810441' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:07:24 np0005539552 nova_compute[233724]: 2025-11-29 08:07:24.828 233728 DEBUG oslo_concurrency.processutils [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:24 np0005539552 nova_compute[233724]: 2025-11-29 08:07:24.834 233728 DEBUG nova.compute.provider_tree [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:07:24 np0005539552 nova_compute[233724]: 2025-11-29 08:07:24.849 233728 DEBUG nova.scheduler.client.report [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:07:24 np0005539552 nova_compute[233724]: 2025-11-29 08:07:24.872 233728 DEBUG oslo_concurrency.lockutils [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:24 np0005539552 nova_compute[233724]: 2025-11-29 08:07:24.873 233728 DEBUG nova.compute.manager [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:07:24 np0005539552 nova_compute[233724]: 2025-11-29 08:07:24.925 233728 DEBUG nova.compute.manager [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:07:24 np0005539552 nova_compute[233724]: 2025-11-29 08:07:24.925 233728 DEBUG nova.network.neutron [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:07:24 np0005539552 nova_compute[233724]: 2025-11-29 08:07:24.928 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:24 np0005539552 nova_compute[233724]: 2025-11-29 08:07:24.929 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:07:24 np0005539552 nova_compute[233724]: 2025-11-29 08:07:24.929 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:07:24 np0005539552 nova_compute[233724]: 2025-11-29 08:07:24.951 233728 INFO nova.virt.libvirt.driver [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:07:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:24.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:24 np0005539552 nova_compute[233724]: 2025-11-29 08:07:24.971 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 03:07:24 np0005539552 nova_compute[233724]: 2025-11-29 08:07:24.971 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:07:24 np0005539552 nova_compute[233724]: 2025-11-29 08:07:24.974 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:24 np0005539552 nova_compute[233724]: 2025-11-29 08:07:24.976 233728 DEBUG nova.compute.manager [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:07:25 np0005539552 nova_compute[233724]: 2025-11-29 08:07:25.201 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:25 np0005539552 nova_compute[233724]: 2025-11-29 08:07:25.293 233728 DEBUG nova.compute.manager [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:07:25 np0005539552 nova_compute[233724]: 2025-11-29 08:07:25.294 233728 DEBUG nova.virt.libvirt.driver [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:07:25 np0005539552 nova_compute[233724]: 2025-11-29 08:07:25.294 233728 INFO nova.virt.libvirt.driver [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Creating image(s)#033[00m
Nov 29 03:07:25 np0005539552 nova_compute[233724]: 2025-11-29 08:07:25.319 233728 DEBUG nova.storage.rbd_utils [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] rbd image ed7a6200-c4d2-4554-bdb4-57e02fa79386_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:25 np0005539552 nova_compute[233724]: 2025-11-29 08:07:25.344 233728 DEBUG nova.storage.rbd_utils [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] rbd image ed7a6200-c4d2-4554-bdb4-57e02fa79386_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:25 np0005539552 nova_compute[233724]: 2025-11-29 08:07:25.371 233728 DEBUG nova.storage.rbd_utils [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] rbd image ed7a6200-c4d2-4554-bdb4-57e02fa79386_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:25 np0005539552 nova_compute[233724]: 2025-11-29 08:07:25.375 233728 DEBUG oslo_concurrency.processutils [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:25 np0005539552 nova_compute[233724]: 2025-11-29 08:07:25.439 233728 DEBUG oslo_concurrency.processutils [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:25 np0005539552 nova_compute[233724]: 2025-11-29 08:07:25.440 233728 DEBUG oslo_concurrency.lockutils [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:25 np0005539552 nova_compute[233724]: 2025-11-29 08:07:25.440 233728 DEBUG oslo_concurrency.lockutils [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:25 np0005539552 nova_compute[233724]: 2025-11-29 08:07:25.441 233728 DEBUG oslo_concurrency.lockutils [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:25 np0005539552 nova_compute[233724]: 2025-11-29 08:07:25.466 233728 DEBUG nova.storage.rbd_utils [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] rbd image ed7a6200-c4d2-4554-bdb4-57e02fa79386_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:25 np0005539552 nova_compute[233724]: 2025-11-29 08:07:25.471 233728 DEBUG oslo_concurrency.processutils [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 ed7a6200-c4d2-4554-bdb4-57e02fa79386_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:25 np0005539552 nova_compute[233724]: 2025-11-29 08:07:25.495 233728 DEBUG nova.policy [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1a010a95085342c5ae9a02f15b334fad', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5c57433fd3834430904b1908f24f3f2f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:07:26 np0005539552 nova_compute[233724]: 2025-11-29 08:07:26.237 233728 DEBUG nova.network.neutron [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Successfully created port: 0dfa3388-927a-4252-a76a-6599caf67253 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:07:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:26.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:26 np0005539552 nova_compute[233724]: 2025-11-29 08:07:26.883 233728 DEBUG oslo_concurrency.processutils [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 ed7a6200-c4d2-4554-bdb4-57e02fa79386_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:26 np0005539552 nova_compute[233724]: 2025-11-29 08:07:26.946 233728 DEBUG nova.storage.rbd_utils [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] resizing rbd image ed7a6200-c4d2-4554-bdb4-57e02fa79386_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:07:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:26.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:27 np0005539552 nova_compute[233724]: 2025-11-29 08:07:27.052 233728 DEBUG nova.objects.instance [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lazy-loading 'migration_context' on Instance uuid ed7a6200-c4d2-4554-bdb4-57e02fa79386 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:07:27 np0005539552 nova_compute[233724]: 2025-11-29 08:07:27.133 233728 DEBUG nova.virt.libvirt.driver [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:07:27 np0005539552 nova_compute[233724]: 2025-11-29 08:07:27.134 233728 DEBUG nova.virt.libvirt.driver [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Ensure instance console log exists: /var/lib/nova/instances/ed7a6200-c4d2-4554-bdb4-57e02fa79386/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:07:27 np0005539552 nova_compute[233724]: 2025-11-29 08:07:27.134 233728 DEBUG oslo_concurrency.lockutils [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:27 np0005539552 nova_compute[233724]: 2025-11-29 08:07:27.134 233728 DEBUG oslo_concurrency.lockutils [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:27 np0005539552 nova_compute[233724]: 2025-11-29 08:07:27.135 233728 DEBUG oslo_concurrency.lockutils [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:27 np0005539552 nova_compute[233724]: 2025-11-29 08:07:27.915 233728 DEBUG nova.network.neutron [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Successfully updated port: 0dfa3388-927a-4252-a76a-6599caf67253 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:07:27 np0005539552 nova_compute[233724]: 2025-11-29 08:07:27.929 233728 DEBUG oslo_concurrency.lockutils [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Acquiring lock "refresh_cache-ed7a6200-c4d2-4554-bdb4-57e02fa79386" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:07:27 np0005539552 nova_compute[233724]: 2025-11-29 08:07:27.930 233728 DEBUG oslo_concurrency.lockutils [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Acquired lock "refresh_cache-ed7a6200-c4d2-4554-bdb4-57e02fa79386" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:07:27 np0005539552 nova_compute[233724]: 2025-11-29 08:07:27.930 233728 DEBUG nova.network.neutron [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:07:28 np0005539552 nova_compute[233724]: 2025-11-29 08:07:28.044 233728 DEBUG nova.compute.manager [req-503d2830-6d8c-49e6-ad12-46ab156e37b0 req-9529ec7d-8538-4d96-bc87-31005b8d2bd6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Received event network-changed-0dfa3388-927a-4252-a76a-6599caf67253 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:28 np0005539552 nova_compute[233724]: 2025-11-29 08:07:28.045 233728 DEBUG nova.compute.manager [req-503d2830-6d8c-49e6-ad12-46ab156e37b0 req-9529ec7d-8538-4d96-bc87-31005b8d2bd6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Refreshing instance network info cache due to event network-changed-0dfa3388-927a-4252-a76a-6599caf67253. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:07:28 np0005539552 nova_compute[233724]: 2025-11-29 08:07:28.045 233728 DEBUG oslo_concurrency.lockutils [req-503d2830-6d8c-49e6-ad12-46ab156e37b0 req-9529ec7d-8538-4d96-bc87-31005b8d2bd6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-ed7a6200-c4d2-4554-bdb4-57e02fa79386" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:07:28 np0005539552 nova_compute[233724]: 2025-11-29 08:07:28.137 233728 DEBUG nova.network.neutron [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:07:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:28.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:28.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:29 np0005539552 nova_compute[233724]: 2025-11-29 08:07:29.295 233728 DEBUG nova.network.neutron [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Updating instance_info_cache with network_info: [{"id": "0dfa3388-927a-4252-a76a-6599caf67253", "address": "fa:16:3e:e6:0d:01", "network": {"id": "0de30c6a-82ca-4f9f-a37d-5949a70a385d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-870403960-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c57433fd3834430904b1908f24f3f2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0dfa3388-92", "ovs_interfaceid": "0dfa3388-927a-4252-a76a-6599caf67253", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:07:29 np0005539552 nova_compute[233724]: 2025-11-29 08:07:29.337 233728 DEBUG oslo_concurrency.lockutils [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Releasing lock "refresh_cache-ed7a6200-c4d2-4554-bdb4-57e02fa79386" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:07:29 np0005539552 nova_compute[233724]: 2025-11-29 08:07:29.337 233728 DEBUG nova.compute.manager [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Instance network_info: |[{"id": "0dfa3388-927a-4252-a76a-6599caf67253", "address": "fa:16:3e:e6:0d:01", "network": {"id": "0de30c6a-82ca-4f9f-a37d-5949a70a385d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-870403960-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c57433fd3834430904b1908f24f3f2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0dfa3388-92", "ovs_interfaceid": "0dfa3388-927a-4252-a76a-6599caf67253", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:07:29 np0005539552 nova_compute[233724]: 2025-11-29 08:07:29.338 233728 DEBUG oslo_concurrency.lockutils [req-503d2830-6d8c-49e6-ad12-46ab156e37b0 req-9529ec7d-8538-4d96-bc87-31005b8d2bd6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-ed7a6200-c4d2-4554-bdb4-57e02fa79386" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:07:29 np0005539552 nova_compute[233724]: 2025-11-29 08:07:29.338 233728 DEBUG nova.network.neutron [req-503d2830-6d8c-49e6-ad12-46ab156e37b0 req-9529ec7d-8538-4d96-bc87-31005b8d2bd6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Refreshing network info cache for port 0dfa3388-927a-4252-a76a-6599caf67253 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:07:29 np0005539552 nova_compute[233724]: 2025-11-29 08:07:29.341 233728 DEBUG nova.virt.libvirt.driver [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Start _get_guest_xml network_info=[{"id": "0dfa3388-927a-4252-a76a-6599caf67253", "address": "fa:16:3e:e6:0d:01", "network": {"id": "0de30c6a-82ca-4f9f-a37d-5949a70a385d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-870403960-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c57433fd3834430904b1908f24f3f2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0dfa3388-92", "ovs_interfaceid": "0dfa3388-927a-4252-a76a-6599caf67253", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:07:29 np0005539552 nova_compute[233724]: 2025-11-29 08:07:29.345 233728 WARNING nova.virt.libvirt.driver [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:07:29 np0005539552 nova_compute[233724]: 2025-11-29 08:07:29.348 233728 DEBUG nova.virt.libvirt.host [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:07:29 np0005539552 nova_compute[233724]: 2025-11-29 08:07:29.349 233728 DEBUG nova.virt.libvirt.host [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:07:29 np0005539552 nova_compute[233724]: 2025-11-29 08:07:29.352 233728 DEBUG nova.virt.libvirt.host [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:07:29 np0005539552 nova_compute[233724]: 2025-11-29 08:07:29.352 233728 DEBUG nova.virt.libvirt.host [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:07:29 np0005539552 nova_compute[233724]: 2025-11-29 08:07:29.354 233728 DEBUG nova.virt.libvirt.driver [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:07:29 np0005539552 nova_compute[233724]: 2025-11-29 08:07:29.354 233728 DEBUG nova.virt.hardware [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:07:29 np0005539552 nova_compute[233724]: 2025-11-29 08:07:29.354 233728 DEBUG nova.virt.hardware [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:07:29 np0005539552 nova_compute[233724]: 2025-11-29 08:07:29.355 233728 DEBUG nova.virt.hardware [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:07:29 np0005539552 nova_compute[233724]: 2025-11-29 08:07:29.355 233728 DEBUG nova.virt.hardware [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:07:29 np0005539552 nova_compute[233724]: 2025-11-29 08:07:29.355 233728 DEBUG nova.virt.hardware [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:07:29 np0005539552 nova_compute[233724]: 2025-11-29 08:07:29.356 233728 DEBUG nova.virt.hardware [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:07:29 np0005539552 nova_compute[233724]: 2025-11-29 08:07:29.356 233728 DEBUG nova.virt.hardware [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:07:29 np0005539552 nova_compute[233724]: 2025-11-29 08:07:29.356 233728 DEBUG nova.virt.hardware [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:07:29 np0005539552 nova_compute[233724]: 2025-11-29 08:07:29.356 233728 DEBUG nova.virt.hardware [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:07:29 np0005539552 nova_compute[233724]: 2025-11-29 08:07:29.357 233728 DEBUG nova.virt.hardware [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:07:29 np0005539552 nova_compute[233724]: 2025-11-29 08:07:29.357 233728 DEBUG nova.virt.hardware [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:07:29 np0005539552 nova_compute[233724]: 2025-11-29 08:07:29.360 233728 DEBUG oslo_concurrency.processutils [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:29 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:07:29 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2884695933' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:07:29 np0005539552 nova_compute[233724]: 2025-11-29 08:07:29.828 233728 DEBUG oslo_concurrency.processutils [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:29 np0005539552 nova_compute[233724]: 2025-11-29 08:07:29.853 233728 DEBUG nova.storage.rbd_utils [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] rbd image ed7a6200-c4d2-4554-bdb4-57e02fa79386_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:29 np0005539552 nova_compute[233724]: 2025-11-29 08:07:29.857 233728 DEBUG oslo_concurrency.processutils [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:30 np0005539552 nova_compute[233724]: 2025-11-29 08:07:30.203 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:30 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:07:30 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3521900567' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:07:30 np0005539552 nova_compute[233724]: 2025-11-29 08:07:30.283 233728 DEBUG oslo_concurrency.processutils [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:30 np0005539552 nova_compute[233724]: 2025-11-29 08:07:30.285 233728 DEBUG nova.virt.libvirt.vif [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:07:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1393911926',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1393911926',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1393911926',id=68,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5c57433fd3834430904b1908f24f3f2f',ramdisk_id='',reservation_id='r-mo4931n3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-167104479',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-167104479-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:07:25Z,user_data=None,user_id='1a010a95085342c5ae9a02f15b334fad',uuid=ed7a6200-c4d2-4554-bdb4-57e02fa79386,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0dfa3388-927a-4252-a76a-6599caf67253", "address": "fa:16:3e:e6:0d:01", "network": {"id": "0de30c6a-82ca-4f9f-a37d-5949a70a385d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-870403960-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c57433fd3834430904b1908f24f3f2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0dfa3388-92", "ovs_interfaceid": "0dfa3388-927a-4252-a76a-6599caf67253", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:07:30 np0005539552 nova_compute[233724]: 2025-11-29 08:07:30.285 233728 DEBUG nova.network.os_vif_util [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Converting VIF {"id": "0dfa3388-927a-4252-a76a-6599caf67253", "address": "fa:16:3e:e6:0d:01", "network": {"id": "0de30c6a-82ca-4f9f-a37d-5949a70a385d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-870403960-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c57433fd3834430904b1908f24f3f2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0dfa3388-92", "ovs_interfaceid": "0dfa3388-927a-4252-a76a-6599caf67253", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:07:30 np0005539552 nova_compute[233724]: 2025-11-29 08:07:30.286 233728 DEBUG nova.network.os_vif_util [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:0d:01,bridge_name='br-int',has_traffic_filtering=True,id=0dfa3388-927a-4252-a76a-6599caf67253,network=Network(0de30c6a-82ca-4f9f-a37d-5949a70a385d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0dfa3388-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:07:30 np0005539552 nova_compute[233724]: 2025-11-29 08:07:30.288 233728 DEBUG nova.objects.instance [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lazy-loading 'pci_devices' on Instance uuid ed7a6200-c4d2-4554-bdb4-57e02fa79386 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:07:30 np0005539552 nova_compute[233724]: 2025-11-29 08:07:30.308 233728 DEBUG nova.virt.libvirt.driver [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:07:30 np0005539552 nova_compute[233724]:  <uuid>ed7a6200-c4d2-4554-bdb4-57e02fa79386</uuid>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:  <name>instance-00000044</name>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:07:30 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:      <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-1393911926</nova:name>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:07:29</nova:creationTime>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:07:30 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:        <nova:user uuid="1a010a95085342c5ae9a02f15b334fad">tempest-ImagesOneServerNegativeTestJSON-167104479-project-member</nova:user>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:        <nova:project uuid="5c57433fd3834430904b1908f24f3f2f">tempest-ImagesOneServerNegativeTestJSON-167104479</nova:project>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:        <nova:port uuid="0dfa3388-927a-4252-a76a-6599caf67253">
Nov 29 03:07:30 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:      <entry name="serial">ed7a6200-c4d2-4554-bdb4-57e02fa79386</entry>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:      <entry name="uuid">ed7a6200-c4d2-4554-bdb4-57e02fa79386</entry>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:07:30 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/ed7a6200-c4d2-4554-bdb4-57e02fa79386_disk">
Nov 29 03:07:30 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:07:30 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:07:30 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/ed7a6200-c4d2-4554-bdb4-57e02fa79386_disk.config">
Nov 29 03:07:30 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:07:30 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:07:30 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:e6:0d:01"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:      <target dev="tap0dfa3388-92"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:07:30 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/ed7a6200-c4d2-4554-bdb4-57e02fa79386/console.log" append="off"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:07:30 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:07:30 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:07:30 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:07:30 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:07:30 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:07:30 np0005539552 nova_compute[233724]: 2025-11-29 08:07:30.310 233728 DEBUG nova.compute.manager [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Preparing to wait for external event network-vif-plugged-0dfa3388-927a-4252-a76a-6599caf67253 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:07:30 np0005539552 nova_compute[233724]: 2025-11-29 08:07:30.310 233728 DEBUG oslo_concurrency.lockutils [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Acquiring lock "ed7a6200-c4d2-4554-bdb4-57e02fa79386-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:30 np0005539552 nova_compute[233724]: 2025-11-29 08:07:30.311 233728 DEBUG oslo_concurrency.lockutils [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lock "ed7a6200-c4d2-4554-bdb4-57e02fa79386-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:30 np0005539552 nova_compute[233724]: 2025-11-29 08:07:30.311 233728 DEBUG oslo_concurrency.lockutils [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lock "ed7a6200-c4d2-4554-bdb4-57e02fa79386-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:30 np0005539552 nova_compute[233724]: 2025-11-29 08:07:30.311 233728 DEBUG nova.virt.libvirt.vif [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:07:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1393911926',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1393911926',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1393911926',id=68,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5c57433fd3834430904b1908f24f3f2f',ramdisk_id='',reservation_id='r-mo4931n3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-167104479',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-167104479-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:07:25Z,user_data=None,user_id='1a010a95085342c5ae9a02f15b334fad',uuid=ed7a6200-c4d2-4554-bdb4-57e02fa79386,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0dfa3388-927a-4252-a76a-6599caf67253", "address": "fa:16:3e:e6:0d:01", "network": {"id": "0de30c6a-82ca-4f9f-a37d-5949a70a385d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-870403960-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c57433fd3834430904b1908f24f3f2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0dfa3388-92", "ovs_interfaceid": "0dfa3388-927a-4252-a76a-6599caf67253", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:07:30 np0005539552 nova_compute[233724]: 2025-11-29 08:07:30.312 233728 DEBUG nova.network.os_vif_util [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Converting VIF {"id": "0dfa3388-927a-4252-a76a-6599caf67253", "address": "fa:16:3e:e6:0d:01", "network": {"id": "0de30c6a-82ca-4f9f-a37d-5949a70a385d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-870403960-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c57433fd3834430904b1908f24f3f2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0dfa3388-92", "ovs_interfaceid": "0dfa3388-927a-4252-a76a-6599caf67253", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:07:30 np0005539552 nova_compute[233724]: 2025-11-29 08:07:30.312 233728 DEBUG nova.network.os_vif_util [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:0d:01,bridge_name='br-int',has_traffic_filtering=True,id=0dfa3388-927a-4252-a76a-6599caf67253,network=Network(0de30c6a-82ca-4f9f-a37d-5949a70a385d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0dfa3388-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:07:30 np0005539552 nova_compute[233724]: 2025-11-29 08:07:30.313 233728 DEBUG os_vif [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:0d:01,bridge_name='br-int',has_traffic_filtering=True,id=0dfa3388-927a-4252-a76a-6599caf67253,network=Network(0de30c6a-82ca-4f9f-a37d-5949a70a385d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0dfa3388-92') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:07:30 np0005539552 nova_compute[233724]: 2025-11-29 08:07:30.313 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:30 np0005539552 nova_compute[233724]: 2025-11-29 08:07:30.314 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:30 np0005539552 nova_compute[233724]: 2025-11-29 08:07:30.314 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:07:30 np0005539552 nova_compute[233724]: 2025-11-29 08:07:30.317 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:30 np0005539552 nova_compute[233724]: 2025-11-29 08:07:30.317 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0dfa3388-92, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:30 np0005539552 nova_compute[233724]: 2025-11-29 08:07:30.317 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0dfa3388-92, col_values=(('external_ids', {'iface-id': '0dfa3388-927a-4252-a76a-6599caf67253', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e6:0d:01', 'vm-uuid': 'ed7a6200-c4d2-4554-bdb4-57e02fa79386'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:30 np0005539552 nova_compute[233724]: 2025-11-29 08:07:30.318 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:30 np0005539552 NetworkManager[48926]: <info>  [1764403650.3197] manager: (tap0dfa3388-92): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/105)
Nov 29 03:07:30 np0005539552 nova_compute[233724]: 2025-11-29 08:07:30.322 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:07:30 np0005539552 nova_compute[233724]: 2025-11-29 08:07:30.324 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:30 np0005539552 nova_compute[233724]: 2025-11-29 08:07:30.325 233728 INFO os_vif [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:0d:01,bridge_name='br-int',has_traffic_filtering=True,id=0dfa3388-927a-4252-a76a-6599caf67253,network=Network(0de30c6a-82ca-4f9f-a37d-5949a70a385d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0dfa3388-92')#033[00m
Nov 29 03:07:30 np0005539552 nova_compute[233724]: 2025-11-29 08:07:30.398 233728 DEBUG nova.virt.libvirt.driver [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:07:30 np0005539552 nova_compute[233724]: 2025-11-29 08:07:30.399 233728 DEBUG nova.virt.libvirt.driver [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:07:30 np0005539552 nova_compute[233724]: 2025-11-29 08:07:30.399 233728 DEBUG nova.virt.libvirt.driver [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] No VIF found with MAC fa:16:3e:e6:0d:01, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:07:30 np0005539552 nova_compute[233724]: 2025-11-29 08:07:30.399 233728 INFO nova.virt.libvirt.driver [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Using config drive#033[00m
Nov 29 03:07:30 np0005539552 nova_compute[233724]: 2025-11-29 08:07:30.420 233728 DEBUG nova.storage.rbd_utils [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] rbd image ed7a6200-c4d2-4554-bdb4-57e02fa79386_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:30.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:30.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:32 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:32 np0005539552 nova_compute[233724]: 2025-11-29 08:07:32.441 233728 INFO nova.virt.libvirt.driver [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Creating config drive at /var/lib/nova/instances/ed7a6200-c4d2-4554-bdb4-57e02fa79386/disk.config#033[00m
Nov 29 03:07:32 np0005539552 nova_compute[233724]: 2025-11-29 08:07:32.447 233728 DEBUG oslo_concurrency.processutils [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ed7a6200-c4d2-4554-bdb4-57e02fa79386/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpiiwe8xlr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:32 np0005539552 nova_compute[233724]: 2025-11-29 08:07:32.581 233728 DEBUG oslo_concurrency.processutils [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ed7a6200-c4d2-4554-bdb4-57e02fa79386/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpiiwe8xlr" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:32 np0005539552 nova_compute[233724]: 2025-11-29 08:07:32.606 233728 DEBUG nova.storage.rbd_utils [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] rbd image ed7a6200-c4d2-4554-bdb4-57e02fa79386_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:32 np0005539552 nova_compute[233724]: 2025-11-29 08:07:32.609 233728 DEBUG oslo_concurrency.processutils [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ed7a6200-c4d2-4554-bdb4-57e02fa79386/disk.config ed7a6200-c4d2-4554-bdb4-57e02fa79386_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:32.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:32 np0005539552 nova_compute[233724]: 2025-11-29 08:07:32.772 233728 DEBUG oslo_concurrency.processutils [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ed7a6200-c4d2-4554-bdb4-57e02fa79386/disk.config ed7a6200-c4d2-4554-bdb4-57e02fa79386_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:32 np0005539552 nova_compute[233724]: 2025-11-29 08:07:32.774 233728 INFO nova.virt.libvirt.driver [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Deleting local config drive /var/lib/nova/instances/ed7a6200-c4d2-4554-bdb4-57e02fa79386/disk.config because it was imported into RBD.#033[00m
Nov 29 03:07:32 np0005539552 kernel: tap0dfa3388-92: entered promiscuous mode
Nov 29 03:07:32 np0005539552 NetworkManager[48926]: <info>  [1764403652.8373] manager: (tap0dfa3388-92): new Tun device (/org/freedesktop/NetworkManager/Devices/106)
Nov 29 03:07:32 np0005539552 nova_compute[233724]: 2025-11-29 08:07:32.838 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:32 np0005539552 ovn_controller[133798]: 2025-11-29T08:07:32Z|00198|binding|INFO|Claiming lport 0dfa3388-927a-4252-a76a-6599caf67253 for this chassis.
Nov 29 03:07:32 np0005539552 ovn_controller[133798]: 2025-11-29T08:07:32Z|00199|binding|INFO|0dfa3388-927a-4252-a76a-6599caf67253: Claiming fa:16:3e:e6:0d:01 10.100.0.9
Nov 29 03:07:32 np0005539552 nova_compute[233724]: 2025-11-29 08:07:32.843 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:32.853 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:0d:01 10.100.0.9'], port_security=['fa:16:3e:e6:0d:01 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ed7a6200-c4d2-4554-bdb4-57e02fa79386', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0de30c6a-82ca-4f9f-a37d-5949a70a385d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5c57433fd3834430904b1908f24f3f2f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ebe351b1-d353-46d5-990d-7ccc905f95cd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07d7455e-493a-4184-8d60-e2fd6ef2393b, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=0dfa3388-927a-4252-a76a-6599caf67253) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:07:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:32.854 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 0dfa3388-927a-4252-a76a-6599caf67253 in datapath 0de30c6a-82ca-4f9f-a37d-5949a70a385d bound to our chassis#033[00m
Nov 29 03:07:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:32.855 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0de30c6a-82ca-4f9f-a37d-5949a70a385d#033[00m
Nov 29 03:07:32 np0005539552 systemd-udevd[260920]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:07:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:32.868 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[fc0ff73b-18d8-4058-b3d4-25f10320203c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:32.870 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0de30c6a-81 in ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:07:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:32.872 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0de30c6a-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:07:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:32.872 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c7227541-8e17-43df-aac2-8d9cdbc96f78]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:32 np0005539552 systemd-machined[196379]: New machine qemu-24-instance-00000044.
Nov 29 03:07:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:32.873 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d404d7f3-69a9-46ab-8bad-b626d299136e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:32 np0005539552 systemd[1]: Started Virtual Machine qemu-24-instance-00000044.
Nov 29 03:07:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:32.885 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[1898bf3c-19c1-4cac-888b-09d7dd650ec0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:32 np0005539552 NetworkManager[48926]: <info>  [1764403652.8868] device (tap0dfa3388-92): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:07:32 np0005539552 NetworkManager[48926]: <info>  [1764403652.8875] device (tap0dfa3388-92): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:07:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:32.908 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[53b570ad-6031-4601-8b28-38da448f28ef]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:32 np0005539552 nova_compute[233724]: 2025-11-29 08:07:32.919 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:32 np0005539552 ovn_controller[133798]: 2025-11-29T08:07:32Z|00200|binding|INFO|Setting lport 0dfa3388-927a-4252-a76a-6599caf67253 ovn-installed in OVS
Nov 29 03:07:32 np0005539552 ovn_controller[133798]: 2025-11-29T08:07:32Z|00201|binding|INFO|Setting lport 0dfa3388-927a-4252-a76a-6599caf67253 up in Southbound
Nov 29 03:07:32 np0005539552 nova_compute[233724]: 2025-11-29 08:07:32.926 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:32.935 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[dd1bdbab-ca6b-462b-a19c-96ac82e61438]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:32 np0005539552 systemd-udevd[260924]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:07:32 np0005539552 NetworkManager[48926]: <info>  [1764403652.9425] manager: (tap0de30c6a-80): new Veth device (/org/freedesktop/NetworkManager/Devices/107)
Nov 29 03:07:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:32.941 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[07d442c5-f251-4472-a879-f75368f6132b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:32.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:32.971 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[dcf773a2-e20e-47e9-bd4a-833b89bdf5ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:32.974 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[67f0c5e2-b93c-4376-b3f0-e58970039d29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:32 np0005539552 NetworkManager[48926]: <info>  [1764403652.9919] device (tap0de30c6a-80): carrier: link connected
Nov 29 03:07:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:32.995 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[38c3c6ee-b9c8-4e28-9db4-f73544b52a06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:33 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:33.011 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2cc7bdfe-b9ea-454e-9754-125cabb4c1cc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0de30c6a-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:21:5a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669867, 'reachable_time': 31100, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260953, 'error': None, 'target': 'ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:33 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:33.021 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[67ebcca5-55a9-4760-b634-cbc8f3187180]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe92:215a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669867, 'tstamp': 669867}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260954, 'error': None, 'target': 'ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:33 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:33.036 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3dd1eaf4-11f9-4b1a-b622-c598940f52d4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0de30c6a-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:21:5a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669867, 'reachable_time': 31100, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 260955, 'error': None, 'target': 'ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:33 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:33.066 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d9f870e9-8d9e-4cb3-abcd-d7cf2a595137]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:33 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:33.121 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[41cd7008-5648-4453-b41c-ef64f0223d06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:33 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:33.122 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0de30c6a-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:33 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:33.122 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:07:33 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:33.122 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0de30c6a-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:33 np0005539552 nova_compute[233724]: 2025-11-29 08:07:33.124 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:33 np0005539552 NetworkManager[48926]: <info>  [1764403653.1247] manager: (tap0de30c6a-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/108)
Nov 29 03:07:33 np0005539552 kernel: tap0de30c6a-80: entered promiscuous mode
Nov 29 03:07:33 np0005539552 nova_compute[233724]: 2025-11-29 08:07:33.126 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:33 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:33.127 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0de30c6a-80, col_values=(('external_ids', {'iface-id': 'db5f456f-a9cd-44e0-9bf4-deda3979e911'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:33 np0005539552 ovn_controller[133798]: 2025-11-29T08:07:33Z|00202|binding|INFO|Releasing lport db5f456f-a9cd-44e0-9bf4-deda3979e911 from this chassis (sb_readonly=0)
Nov 29 03:07:33 np0005539552 nova_compute[233724]: 2025-11-29 08:07:33.145 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:33 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:33.146 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0de30c6a-82ca-4f9f-a37d-5949a70a385d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0de30c6a-82ca-4f9f-a37d-5949a70a385d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:07:33 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:33.147 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0f003503-8220-4332-b954-539605d22e75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:33 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:33.148 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:07:33 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:07:33 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:07:33 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-0de30c6a-82ca-4f9f-a37d-5949a70a385d
Nov 29 03:07:33 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:07:33 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:07:33 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:07:33 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/0de30c6a-82ca-4f9f-a37d-5949a70a385d.pid.haproxy
Nov 29 03:07:33 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:07:33 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:07:33 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:07:33 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:07:33 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:07:33 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:07:33 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:07:33 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:07:33 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:07:33 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:07:33 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:07:33 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:07:33 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:07:33 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:07:33 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:07:33 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:07:33 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:07:33 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:07:33 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:07:33 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:07:33 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 0de30c6a-82ca-4f9f-a37d-5949a70a385d
Nov 29 03:07:33 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:07:33 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:33.150 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d', 'env', 'PROCESS_TAG=haproxy-0de30c6a-82ca-4f9f-a37d-5949a70a385d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0de30c6a-82ca-4f9f-a37d-5949a70a385d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:07:33 np0005539552 nova_compute[233724]: 2025-11-29 08:07:33.345 233728 DEBUG nova.network.neutron [req-503d2830-6d8c-49e6-ad12-46ab156e37b0 req-9529ec7d-8538-4d96-bc87-31005b8d2bd6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Updated VIF entry in instance network info cache for port 0dfa3388-927a-4252-a76a-6599caf67253. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:07:33 np0005539552 nova_compute[233724]: 2025-11-29 08:07:33.345 233728 DEBUG nova.network.neutron [req-503d2830-6d8c-49e6-ad12-46ab156e37b0 req-9529ec7d-8538-4d96-bc87-31005b8d2bd6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Updating instance_info_cache with network_info: [{"id": "0dfa3388-927a-4252-a76a-6599caf67253", "address": "fa:16:3e:e6:0d:01", "network": {"id": "0de30c6a-82ca-4f9f-a37d-5949a70a385d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-870403960-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c57433fd3834430904b1908f24f3f2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0dfa3388-92", "ovs_interfaceid": "0dfa3388-927a-4252-a76a-6599caf67253", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:07:33 np0005539552 nova_compute[233724]: 2025-11-29 08:07:33.363 233728 DEBUG oslo_concurrency.lockutils [req-503d2830-6d8c-49e6-ad12-46ab156e37b0 req-9529ec7d-8538-4d96-bc87-31005b8d2bd6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-ed7a6200-c4d2-4554-bdb4-57e02fa79386" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:07:33 np0005539552 nova_compute[233724]: 2025-11-29 08:07:33.429 233728 DEBUG nova.compute.manager [req-6a9d6962-eb4a-44e2-8a43-1beecbe4a39d req-2ab1c9e4-6da2-4d41-9549-c9a63179786e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Received event network-vif-plugged-0dfa3388-927a-4252-a76a-6599caf67253 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:33 np0005539552 nova_compute[233724]: 2025-11-29 08:07:33.430 233728 DEBUG oslo_concurrency.lockutils [req-6a9d6962-eb4a-44e2-8a43-1beecbe4a39d req-2ab1c9e4-6da2-4d41-9549-c9a63179786e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ed7a6200-c4d2-4554-bdb4-57e02fa79386-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:33 np0005539552 nova_compute[233724]: 2025-11-29 08:07:33.430 233728 DEBUG oslo_concurrency.lockutils [req-6a9d6962-eb4a-44e2-8a43-1beecbe4a39d req-2ab1c9e4-6da2-4d41-9549-c9a63179786e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ed7a6200-c4d2-4554-bdb4-57e02fa79386-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:33 np0005539552 nova_compute[233724]: 2025-11-29 08:07:33.430 233728 DEBUG oslo_concurrency.lockutils [req-6a9d6962-eb4a-44e2-8a43-1beecbe4a39d req-2ab1c9e4-6da2-4d41-9549-c9a63179786e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ed7a6200-c4d2-4554-bdb4-57e02fa79386-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:33 np0005539552 nova_compute[233724]: 2025-11-29 08:07:33.430 233728 DEBUG nova.compute.manager [req-6a9d6962-eb4a-44e2-8a43-1beecbe4a39d req-2ab1c9e4-6da2-4d41-9549-c9a63179786e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Processing event network-vif-plugged-0dfa3388-927a-4252-a76a-6599caf67253 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:07:33 np0005539552 podman[261005]: 2025-11-29 08:07:33.490720573 +0000 UTC m=+0.048986683 container create 155a726ec33aed495fe6ebd07cee00f92b7ba94b78a8519f53c45422ac96caf9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2)
Nov 29 03:07:33 np0005539552 systemd[1]: Started libpod-conmon-155a726ec33aed495fe6ebd07cee00f92b7ba94b78a8519f53c45422ac96caf9.scope.
Nov 29 03:07:33 np0005539552 podman[261005]: 2025-11-29 08:07:33.462980255 +0000 UTC m=+0.021246385 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:07:33 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:07:33 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5773f64ff2d51cef463116049a0271c548f9553be6037029145aca6d939ccc96/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:07:33 np0005539552 podman[261005]: 2025-11-29 08:07:33.58099692 +0000 UTC m=+0.139263040 container init 155a726ec33aed495fe6ebd07cee00f92b7ba94b78a8519f53c45422ac96caf9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:07:33 np0005539552 podman[261005]: 2025-11-29 08:07:33.586270443 +0000 UTC m=+0.144536553 container start 155a726ec33aed495fe6ebd07cee00f92b7ba94b78a8519f53c45422ac96caf9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:07:33 np0005539552 neutron-haproxy-ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d[261030]: [NOTICE]   (261042) : New worker (261045) forked
Nov 29 03:07:33 np0005539552 neutron-haproxy-ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d[261030]: [NOTICE]   (261042) : Loading success.
Nov 29 03:07:33 np0005539552 nova_compute[233724]: 2025-11-29 08:07:33.705 233728 DEBUG nova.compute.manager [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:07:33 np0005539552 nova_compute[233724]: 2025-11-29 08:07:33.706 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403653.705049, ed7a6200-c4d2-4554-bdb4-57e02fa79386 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:07:33 np0005539552 nova_compute[233724]: 2025-11-29 08:07:33.706 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] VM Started (Lifecycle Event)#033[00m
Nov 29 03:07:33 np0005539552 nova_compute[233724]: 2025-11-29 08:07:33.710 233728 DEBUG nova.virt.libvirt.driver [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:07:33 np0005539552 nova_compute[233724]: 2025-11-29 08:07:33.714 233728 INFO nova.virt.libvirt.driver [-] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Instance spawned successfully.#033[00m
Nov 29 03:07:33 np0005539552 nova_compute[233724]: 2025-11-29 08:07:33.715 233728 DEBUG nova.virt.libvirt.driver [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:07:33 np0005539552 nova_compute[233724]: 2025-11-29 08:07:33.731 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:07:33 np0005539552 nova_compute[233724]: 2025-11-29 08:07:33.736 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:07:33 np0005539552 nova_compute[233724]: 2025-11-29 08:07:33.739 233728 DEBUG nova.virt.libvirt.driver [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:07:33 np0005539552 nova_compute[233724]: 2025-11-29 08:07:33.739 233728 DEBUG nova.virt.libvirt.driver [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:07:33 np0005539552 nova_compute[233724]: 2025-11-29 08:07:33.740 233728 DEBUG nova.virt.libvirt.driver [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:07:33 np0005539552 nova_compute[233724]: 2025-11-29 08:07:33.740 233728 DEBUG nova.virt.libvirt.driver [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:07:33 np0005539552 nova_compute[233724]: 2025-11-29 08:07:33.741 233728 DEBUG nova.virt.libvirt.driver [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:07:33 np0005539552 nova_compute[233724]: 2025-11-29 08:07:33.741 233728 DEBUG nova.virt.libvirt.driver [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:07:33 np0005539552 nova_compute[233724]: 2025-11-29 08:07:33.771 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:07:33 np0005539552 nova_compute[233724]: 2025-11-29 08:07:33.771 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403653.705113, ed7a6200-c4d2-4554-bdb4-57e02fa79386 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:07:33 np0005539552 nova_compute[233724]: 2025-11-29 08:07:33.772 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:07:33 np0005539552 nova_compute[233724]: 2025-11-29 08:07:33.808 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:07:33 np0005539552 nova_compute[233724]: 2025-11-29 08:07:33.811 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403653.7095413, ed7a6200-c4d2-4554-bdb4-57e02fa79386 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:07:33 np0005539552 nova_compute[233724]: 2025-11-29 08:07:33.811 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:07:33 np0005539552 nova_compute[233724]: 2025-11-29 08:07:33.827 233728 INFO nova.compute.manager [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Took 8.53 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:07:33 np0005539552 nova_compute[233724]: 2025-11-29 08:07:33.828 233728 DEBUG nova.compute.manager [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:07:33 np0005539552 nova_compute[233724]: 2025-11-29 08:07:33.839 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:07:33 np0005539552 nova_compute[233724]: 2025-11-29 08:07:33.842 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:07:33 np0005539552 nova_compute[233724]: 2025-11-29 08:07:33.867 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:07:33 np0005539552 nova_compute[233724]: 2025-11-29 08:07:33.891 233728 INFO nova.compute.manager [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Took 9.61 seconds to build instance.#033[00m
Nov 29 03:07:33 np0005539552 nova_compute[233724]: 2025-11-29 08:07:33.908 233728 DEBUG oslo_concurrency.lockutils [None req-5e1d09a5-b4de-4cd6-becd-bb3e862681b8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lock "ed7a6200-c4d2-4554-bdb4-57e02fa79386" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:07:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:34.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:07:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:34.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:35 np0005539552 nova_compute[233724]: 2025-11-29 08:07:35.248 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:35 np0005539552 nova_compute[233724]: 2025-11-29 08:07:35.319 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:35 np0005539552 nova_compute[233724]: 2025-11-29 08:07:35.526 233728 DEBUG nova.compute.manager [req-35148a97-5805-420c-8db8-b9753e98e002 req-ea764b78-8a9a-4d1d-9992-e73694011f54 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Received event network-vif-plugged-0dfa3388-927a-4252-a76a-6599caf67253 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:35 np0005539552 nova_compute[233724]: 2025-11-29 08:07:35.526 233728 DEBUG oslo_concurrency.lockutils [req-35148a97-5805-420c-8db8-b9753e98e002 req-ea764b78-8a9a-4d1d-9992-e73694011f54 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ed7a6200-c4d2-4554-bdb4-57e02fa79386-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:35 np0005539552 nova_compute[233724]: 2025-11-29 08:07:35.526 233728 DEBUG oslo_concurrency.lockutils [req-35148a97-5805-420c-8db8-b9753e98e002 req-ea764b78-8a9a-4d1d-9992-e73694011f54 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ed7a6200-c4d2-4554-bdb4-57e02fa79386-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:35 np0005539552 nova_compute[233724]: 2025-11-29 08:07:35.526 233728 DEBUG oslo_concurrency.lockutils [req-35148a97-5805-420c-8db8-b9753e98e002 req-ea764b78-8a9a-4d1d-9992-e73694011f54 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ed7a6200-c4d2-4554-bdb4-57e02fa79386-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:35 np0005539552 nova_compute[233724]: 2025-11-29 08:07:35.527 233728 DEBUG nova.compute.manager [req-35148a97-5805-420c-8db8-b9753e98e002 req-ea764b78-8a9a-4d1d-9992-e73694011f54 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] No waiting events found dispatching network-vif-plugged-0dfa3388-927a-4252-a76a-6599caf67253 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:07:35 np0005539552 nova_compute[233724]: 2025-11-29 08:07:35.527 233728 WARNING nova.compute.manager [req-35148a97-5805-420c-8db8-b9753e98e002 req-ea764b78-8a9a-4d1d-9992-e73694011f54 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Received unexpected event network-vif-plugged-0dfa3388-927a-4252-a76a-6599caf67253 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:07:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:36.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:36.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:38.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:07:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2224902501' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:07:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:07:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2224902501' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:07:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:07:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:38.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:07:40 np0005539552 nova_compute[233724]: 2025-11-29 08:07:40.251 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:40 np0005539552 nova_compute[233724]: 2025-11-29 08:07:40.320 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:40.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:07:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:40.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:07:41 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e249 e249: 3 total, 3 up, 3 in
Nov 29 03:07:41 np0005539552 nova_compute[233724]: 2025-11-29 08:07:41.861 233728 DEBUG nova.compute.manager [None req-a45d14bb-e390-4b4f-8f82-83e6aa7dcea4 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:07:41 np0005539552 nova_compute[233724]: 2025-11-29 08:07:41.912 233728 INFO nova.compute.manager [None req-a45d14bb-e390-4b4f-8f82-83e6aa7dcea4 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] instance snapshotting#033[00m
Nov 29 03:07:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e250 e250: 3 total, 3 up, 3 in
Nov 29 03:07:42 np0005539552 nova_compute[233724]: 2025-11-29 08:07:42.226 233728 INFO nova.virt.libvirt.driver [None req-a45d14bb-e390-4b4f-8f82-83e6aa7dcea4 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Beginning live snapshot process#033[00m
Nov 29 03:07:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:42 np0005539552 nova_compute[233724]: 2025-11-29 08:07:42.390 233728 DEBUG nova.virt.libvirt.imagebackend [None req-a45d14bb-e390-4b4f-8f82-83e6aa7dcea4 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] No parent info for 4873db8c-b414-4e95-acd9-77caabebe722; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 29 03:07:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:42.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:42 np0005539552 nova_compute[233724]: 2025-11-29 08:07:42.768 233728 DEBUG nova.storage.rbd_utils [None req-a45d14bb-e390-4b4f-8f82-83e6aa7dcea4 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] creating snapshot(b95e18e11d054dc69e2b80242300e76e) on rbd image(ed7a6200-c4d2-4554-bdb4-57e02fa79386_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:07:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:42.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e251 e251: 3 total, 3 up, 3 in
Nov 29 03:07:43 np0005539552 nova_compute[233724]: 2025-11-29 08:07:43.476 233728 DEBUG nova.storage.rbd_utils [None req-a45d14bb-e390-4b4f-8f82-83e6aa7dcea4 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] cloning vms/ed7a6200-c4d2-4554-bdb4-57e02fa79386_disk@b95e18e11d054dc69e2b80242300e76e to images/123394a7-8fc1-4d5c-8e9e-60cf6e1a8762 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:07:43 np0005539552 nova_compute[233724]: 2025-11-29 08:07:43.594 233728 DEBUG nova.storage.rbd_utils [None req-a45d14bb-e390-4b4f-8f82-83e6aa7dcea4 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] flattening images/123394a7-8fc1-4d5c-8e9e-60cf6e1a8762 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 29 03:07:43 np0005539552 nova_compute[233724]: 2025-11-29 08:07:43.910 233728 DEBUG nova.storage.rbd_utils [None req-a45d14bb-e390-4b4f-8f82-83e6aa7dcea4 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] removing snapshot(b95e18e11d054dc69e2b80242300e76e) on rbd image(ed7a6200-c4d2-4554-bdb4-57e02fa79386_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:07:44 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e252 e252: 3 total, 3 up, 3 in
Nov 29 03:07:44 np0005539552 nova_compute[233724]: 2025-11-29 08:07:44.473 233728 DEBUG nova.storage.rbd_utils [None req-a45d14bb-e390-4b4f-8f82-83e6aa7dcea4 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] creating snapshot(snap) on rbd image(123394a7-8fc1-4d5c-8e9e-60cf6e1a8762) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:07:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:44.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:07:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:44.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.254 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.322 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:45 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e253 e253: 3 total, 3 up, 3 in
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver [None req-a45d14bb-e390-4b4f-8f82-83e6aa7dcea4 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image 123394a7-8fc1-4d5c-8e9e-60cf6e1a8762 could not be found.
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver     image = self._client.call(
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID 123394a7-8fc1-4d5c-8e9e-60cf6e1a8762
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver 
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver 
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver     image = self._client.call(
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Nov 29 03:07:45 np0005539552 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image 123394a7-8fc1-4d5c-8e9e-60cf6e1a8762 could not be found.
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.623 233728 ERROR nova.virt.libvirt.driver #033[00m
Nov 29 03:07:45 np0005539552 nova_compute[233724]: 2025-11-29 08:07:45.682 233728 DEBUG nova.storage.rbd_utils [None req-a45d14bb-e390-4b4f-8f82-83e6aa7dcea4 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] removing snapshot(snap) on rbd image(123394a7-8fc1-4d5c-8e9e-60cf6e1a8762) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:07:46 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e254 e254: 3 total, 3 up, 3 in
Nov 29 03:07:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:46.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:46 np0005539552 ovn_controller[133798]: 2025-11-29T08:07:46Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e6:0d:01 10.100.0.9
Nov 29 03:07:46 np0005539552 ovn_controller[133798]: 2025-11-29T08:07:46Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e6:0d:01 10.100.0.9
Nov 29 03:07:46 np0005539552 nova_compute[233724]: 2025-11-29 08:07:46.936 233728 WARNING nova.compute.manager [None req-a45d14bb-e390-4b4f-8f82-83e6aa7dcea4 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Image not found during snapshot: nova.exception.ImageNotFound: Image 123394a7-8fc1-4d5c-8e9e-60cf6e1a8762 could not be found.#033[00m
Nov 29 03:07:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:46.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e255 e255: 3 total, 3 up, 3 in
Nov 29 03:07:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:48.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:48 np0005539552 podman[261247]: 2025-11-29 08:07:48.975532589 +0000 UTC m=+0.062785725 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 29 03:07:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:48.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:49 np0005539552 podman[261248]: 2025-11-29 08:07:48.999836696 +0000 UTC m=+0.082541730 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:07:49 np0005539552 podman[261249]: 2025-11-29 08:07:49.000930295 +0000 UTC m=+0.084354688 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2)
Nov 29 03:07:49 np0005539552 nova_compute[233724]: 2025-11-29 08:07:49.200 233728 DEBUG oslo_concurrency.lockutils [None req-3d92dd60-5c5c-4c6c-aeef-d400bee9f3d8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Acquiring lock "ed7a6200-c4d2-4554-bdb4-57e02fa79386" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:49 np0005539552 nova_compute[233724]: 2025-11-29 08:07:49.200 233728 DEBUG oslo_concurrency.lockutils [None req-3d92dd60-5c5c-4c6c-aeef-d400bee9f3d8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lock "ed7a6200-c4d2-4554-bdb4-57e02fa79386" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:49 np0005539552 nova_compute[233724]: 2025-11-29 08:07:49.201 233728 DEBUG oslo_concurrency.lockutils [None req-3d92dd60-5c5c-4c6c-aeef-d400bee9f3d8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Acquiring lock "ed7a6200-c4d2-4554-bdb4-57e02fa79386-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:49 np0005539552 nova_compute[233724]: 2025-11-29 08:07:49.201 233728 DEBUG oslo_concurrency.lockutils [None req-3d92dd60-5c5c-4c6c-aeef-d400bee9f3d8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lock "ed7a6200-c4d2-4554-bdb4-57e02fa79386-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:49 np0005539552 nova_compute[233724]: 2025-11-29 08:07:49.201 233728 DEBUG oslo_concurrency.lockutils [None req-3d92dd60-5c5c-4c6c-aeef-d400bee9f3d8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lock "ed7a6200-c4d2-4554-bdb4-57e02fa79386-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:49 np0005539552 nova_compute[233724]: 2025-11-29 08:07:49.202 233728 INFO nova.compute.manager [None req-3d92dd60-5c5c-4c6c-aeef-d400bee9f3d8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Terminating instance#033[00m
Nov 29 03:07:49 np0005539552 nova_compute[233724]: 2025-11-29 08:07:49.203 233728 DEBUG nova.compute.manager [None req-3d92dd60-5c5c-4c6c-aeef-d400bee9f3d8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:07:49 np0005539552 kernel: tap0dfa3388-92 (unregistering): left promiscuous mode
Nov 29 03:07:49 np0005539552 NetworkManager[48926]: <info>  [1764403669.2700] device (tap0dfa3388-92): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:07:49 np0005539552 ovn_controller[133798]: 2025-11-29T08:07:49Z|00203|binding|INFO|Releasing lport 0dfa3388-927a-4252-a76a-6599caf67253 from this chassis (sb_readonly=0)
Nov 29 03:07:49 np0005539552 ovn_controller[133798]: 2025-11-29T08:07:49Z|00204|binding|INFO|Setting lport 0dfa3388-927a-4252-a76a-6599caf67253 down in Southbound
Nov 29 03:07:49 np0005539552 ovn_controller[133798]: 2025-11-29T08:07:49Z|00205|binding|INFO|Removing iface tap0dfa3388-92 ovn-installed in OVS
Nov 29 03:07:49 np0005539552 nova_compute[233724]: 2025-11-29 08:07:49.282 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:49.291 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:0d:01 10.100.0.9'], port_security=['fa:16:3e:e6:0d:01 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ed7a6200-c4d2-4554-bdb4-57e02fa79386', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0de30c6a-82ca-4f9f-a37d-5949a70a385d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5c57433fd3834430904b1908f24f3f2f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ebe351b1-d353-46d5-990d-7ccc905f95cd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07d7455e-493a-4184-8d60-e2fd6ef2393b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=0dfa3388-927a-4252-a76a-6599caf67253) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:07:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:49.293 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 0dfa3388-927a-4252-a76a-6599caf67253 in datapath 0de30c6a-82ca-4f9f-a37d-5949a70a385d unbound from our chassis#033[00m
Nov 29 03:07:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:49.294 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0de30c6a-82ca-4f9f-a37d-5949a70a385d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:07:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:49.296 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[415960c5-c280-479a-9689-997bb43cd325]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:49.296 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d namespace which is not needed anymore#033[00m
Nov 29 03:07:49 np0005539552 nova_compute[233724]: 2025-11-29 08:07:49.306 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:49 np0005539552 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000044.scope: Deactivated successfully.
Nov 29 03:07:49 np0005539552 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000044.scope: Consumed 14.207s CPU time.
Nov 29 03:07:49 np0005539552 systemd-machined[196379]: Machine qemu-24-instance-00000044 terminated.
Nov 29 03:07:49 np0005539552 neutron-haproxy-ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d[261030]: [NOTICE]   (261042) : haproxy version is 2.8.14-c23fe91
Nov 29 03:07:49 np0005539552 neutron-haproxy-ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d[261030]: [NOTICE]   (261042) : path to executable is /usr/sbin/haproxy
Nov 29 03:07:49 np0005539552 neutron-haproxy-ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d[261030]: [WARNING]  (261042) : Exiting Master process...
Nov 29 03:07:49 np0005539552 neutron-haproxy-ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d[261030]: [WARNING]  (261042) : Exiting Master process...
Nov 29 03:07:49 np0005539552 neutron-haproxy-ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d[261030]: [ALERT]    (261042) : Current worker (261045) exited with code 143 (Terminated)
Nov 29 03:07:49 np0005539552 neutron-haproxy-ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d[261030]: [WARNING]  (261042) : All workers exited. Exiting... (0)
Nov 29 03:07:49 np0005539552 systemd[1]: libpod-155a726ec33aed495fe6ebd07cee00f92b7ba94b78a8519f53c45422ac96caf9.scope: Deactivated successfully.
Nov 29 03:07:49 np0005539552 nova_compute[233724]: 2025-11-29 08:07:49.426 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:49 np0005539552 nova_compute[233724]: 2025-11-29 08:07:49.431 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:49 np0005539552 podman[261336]: 2025-11-29 08:07:49.431767645 +0000 UTC m=+0.047282447 container died 155a726ec33aed495fe6ebd07cee00f92b7ba94b78a8519f53c45422ac96caf9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:07:49 np0005539552 nova_compute[233724]: 2025-11-29 08:07:49.439 233728 INFO nova.virt.libvirt.driver [-] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Instance destroyed successfully.#033[00m
Nov 29 03:07:49 np0005539552 nova_compute[233724]: 2025-11-29 08:07:49.440 233728 DEBUG nova.objects.instance [None req-3d92dd60-5c5c-4c6c-aeef-d400bee9f3d8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lazy-loading 'resources' on Instance uuid ed7a6200-c4d2-4554-bdb4-57e02fa79386 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:07:49 np0005539552 nova_compute[233724]: 2025-11-29 08:07:49.455 233728 DEBUG nova.virt.libvirt.vif [None req-3d92dd60-5c5c-4c6c-aeef-d400bee9f3d8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:07:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1393911926',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1393911926',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1393911926',id=68,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:07:33Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5c57433fd3834430904b1908f24f3f2f',ramdisk_id='',reservation_id='r-mo4931n3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',imag
e_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-167104479',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-167104479-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:07:46Z,user_data=None,user_id='1a010a95085342c5ae9a02f15b334fad',uuid=ed7a6200-c4d2-4554-bdb4-57e02fa79386,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0dfa3388-927a-4252-a76a-6599caf67253", "address": "fa:16:3e:e6:0d:01", "network": {"id": "0de30c6a-82ca-4f9f-a37d-5949a70a385d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-870403960-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c57433fd3834430904b1908f24f3f2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0dfa3388-92", "ovs_interfaceid": "0dfa3388-927a-4252-a76a-6599caf67253", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:07:49 np0005539552 nova_compute[233724]: 2025-11-29 08:07:49.456 233728 DEBUG nova.network.os_vif_util [None req-3d92dd60-5c5c-4c6c-aeef-d400bee9f3d8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Converting VIF {"id": "0dfa3388-927a-4252-a76a-6599caf67253", "address": "fa:16:3e:e6:0d:01", "network": {"id": "0de30c6a-82ca-4f9f-a37d-5949a70a385d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-870403960-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c57433fd3834430904b1908f24f3f2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0dfa3388-92", "ovs_interfaceid": "0dfa3388-927a-4252-a76a-6599caf67253", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:07:49 np0005539552 nova_compute[233724]: 2025-11-29 08:07:49.457 233728 DEBUG nova.network.os_vif_util [None req-3d92dd60-5c5c-4c6c-aeef-d400bee9f3d8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:0d:01,bridge_name='br-int',has_traffic_filtering=True,id=0dfa3388-927a-4252-a76a-6599caf67253,network=Network(0de30c6a-82ca-4f9f-a37d-5949a70a385d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0dfa3388-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:07:49 np0005539552 nova_compute[233724]: 2025-11-29 08:07:49.457 233728 DEBUG os_vif [None req-3d92dd60-5c5c-4c6c-aeef-d400bee9f3d8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:0d:01,bridge_name='br-int',has_traffic_filtering=True,id=0dfa3388-927a-4252-a76a-6599caf67253,network=Network(0de30c6a-82ca-4f9f-a37d-5949a70a385d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0dfa3388-92') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:07:49 np0005539552 nova_compute[233724]: 2025-11-29 08:07:49.461 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:49 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-155a726ec33aed495fe6ebd07cee00f92b7ba94b78a8519f53c45422ac96caf9-userdata-shm.mount: Deactivated successfully.
Nov 29 03:07:49 np0005539552 nova_compute[233724]: 2025-11-29 08:07:49.461 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0dfa3388-92, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:49 np0005539552 nova_compute[233724]: 2025-11-29 08:07:49.464 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:49 np0005539552 nova_compute[233724]: 2025-11-29 08:07:49.465 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:07:49 np0005539552 systemd[1]: var-lib-containers-storage-overlay-5773f64ff2d51cef463116049a0271c548f9553be6037029145aca6d939ccc96-merged.mount: Deactivated successfully.
Nov 29 03:07:49 np0005539552 nova_compute[233724]: 2025-11-29 08:07:49.468 233728 INFO os_vif [None req-3d92dd60-5c5c-4c6c-aeef-d400bee9f3d8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:0d:01,bridge_name='br-int',has_traffic_filtering=True,id=0dfa3388-927a-4252-a76a-6599caf67253,network=Network(0de30c6a-82ca-4f9f-a37d-5949a70a385d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0dfa3388-92')#033[00m
Nov 29 03:07:49 np0005539552 podman[261336]: 2025-11-29 08:07:49.481424715 +0000 UTC m=+0.096939497 container cleanup 155a726ec33aed495fe6ebd07cee00f92b7ba94b78a8519f53c45422ac96caf9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:07:49 np0005539552 systemd[1]: libpod-conmon-155a726ec33aed495fe6ebd07cee00f92b7ba94b78a8519f53c45422ac96caf9.scope: Deactivated successfully.
Nov 29 03:07:49 np0005539552 nova_compute[233724]: 2025-11-29 08:07:49.495 233728 DEBUG oslo_concurrency.lockutils [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Acquiring lock "7ee7bf74-a9de-45ba-bfca-1be4702ce6e3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:49 np0005539552 nova_compute[233724]: 2025-11-29 08:07:49.496 233728 DEBUG oslo_concurrency.lockutils [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Lock "7ee7bf74-a9de-45ba-bfca-1be4702ce6e3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:49 np0005539552 nova_compute[233724]: 2025-11-29 08:07:49.511 233728 DEBUG nova.compute.manager [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:07:49 np0005539552 podman[261391]: 2025-11-29 08:07:49.54419174 +0000 UTC m=+0.041706307 container remove 155a726ec33aed495fe6ebd07cee00f92b7ba94b78a8519f53c45422ac96caf9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 03:07:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:49.550 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e483da8c-b044-4a21-9b9d-92b5c6a79bea]: (4, ('Sat Nov 29 08:07:49 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d (155a726ec33aed495fe6ebd07cee00f92b7ba94b78a8519f53c45422ac96caf9)\n155a726ec33aed495fe6ebd07cee00f92b7ba94b78a8519f53c45422ac96caf9\nSat Nov 29 08:07:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d (155a726ec33aed495fe6ebd07cee00f92b7ba94b78a8519f53c45422ac96caf9)\n155a726ec33aed495fe6ebd07cee00f92b7ba94b78a8519f53c45422ac96caf9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:49.552 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[64a2d9d9-67e3-43ba-bb26-6f90760f47d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:49.553 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0de30c6a-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:49 np0005539552 nova_compute[233724]: 2025-11-29 08:07:49.555 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:49 np0005539552 kernel: tap0de30c6a-80: left promiscuous mode
Nov 29 03:07:49 np0005539552 nova_compute[233724]: 2025-11-29 08:07:49.570 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:49.573 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c7716473-e0da-4671-8318-a92d66bc2f28]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:49 np0005539552 nova_compute[233724]: 2025-11-29 08:07:49.586 233728 DEBUG oslo_concurrency.lockutils [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:49 np0005539552 nova_compute[233724]: 2025-11-29 08:07:49.586 233728 DEBUG oslo_concurrency.lockutils [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:49.590 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[03dd985f-fdc2-47b8-98e4-9808503f1bd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:49.591 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[232eb4b0-f096-4130-8c57-1aecf0529434]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:49 np0005539552 nova_compute[233724]: 2025-11-29 08:07:49.594 233728 DEBUG nova.virt.hardware [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:07:49 np0005539552 nova_compute[233724]: 2025-11-29 08:07:49.594 233728 INFO nova.compute.claims [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:07:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:49.605 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[46c06009-cb7b-45dc-8fbb-47b04eb88923]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669861, 'reachable_time': 29489, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261409, 'error': None, 'target': 'ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:49.609 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:07:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:49.609 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[5f306a39-828a-42f0-981d-1a6de3b2caf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:49 np0005539552 systemd[1]: run-netns-ovnmeta\x2d0de30c6a\x2d82ca\x2d4f9f\x2da37d\x2d5949a70a385d.mount: Deactivated successfully.
Nov 29 03:07:49 np0005539552 nova_compute[233724]: 2025-11-29 08:07:49.648 233728 DEBUG nova.compute.manager [req-305d6d6a-8d6f-4f83-a29a-c2ed7554a6e5 req-fbbf5699-cb9e-4cc6-9ecd-9d788dbd174d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Received event network-vif-unplugged-0dfa3388-927a-4252-a76a-6599caf67253 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:49 np0005539552 nova_compute[233724]: 2025-11-29 08:07:49.648 233728 DEBUG oslo_concurrency.lockutils [req-305d6d6a-8d6f-4f83-a29a-c2ed7554a6e5 req-fbbf5699-cb9e-4cc6-9ecd-9d788dbd174d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ed7a6200-c4d2-4554-bdb4-57e02fa79386-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:49 np0005539552 nova_compute[233724]: 2025-11-29 08:07:49.648 233728 DEBUG oslo_concurrency.lockutils [req-305d6d6a-8d6f-4f83-a29a-c2ed7554a6e5 req-fbbf5699-cb9e-4cc6-9ecd-9d788dbd174d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ed7a6200-c4d2-4554-bdb4-57e02fa79386-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:49 np0005539552 nova_compute[233724]: 2025-11-29 08:07:49.649 233728 DEBUG oslo_concurrency.lockutils [req-305d6d6a-8d6f-4f83-a29a-c2ed7554a6e5 req-fbbf5699-cb9e-4cc6-9ecd-9d788dbd174d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ed7a6200-c4d2-4554-bdb4-57e02fa79386-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:49 np0005539552 nova_compute[233724]: 2025-11-29 08:07:49.649 233728 DEBUG nova.compute.manager [req-305d6d6a-8d6f-4f83-a29a-c2ed7554a6e5 req-fbbf5699-cb9e-4cc6-9ecd-9d788dbd174d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] No waiting events found dispatching network-vif-unplugged-0dfa3388-927a-4252-a76a-6599caf67253 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:07:49 np0005539552 nova_compute[233724]: 2025-11-29 08:07:49.650 233728 DEBUG nova.compute.manager [req-305d6d6a-8d6f-4f83-a29a-c2ed7554a6e5 req-fbbf5699-cb9e-4cc6-9ecd-9d788dbd174d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Received event network-vif-unplugged-0dfa3388-927a-4252-a76a-6599caf67253 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:07:49 np0005539552 nova_compute[233724]: 2025-11-29 08:07:49.727 233728 DEBUG oslo_concurrency.processutils [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:49 np0005539552 nova_compute[233724]: 2025-11-29 08:07:49.902 233728 INFO nova.virt.libvirt.driver [None req-3d92dd60-5c5c-4c6c-aeef-d400bee9f3d8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Deleting instance files /var/lib/nova/instances/ed7a6200-c4d2-4554-bdb4-57e02fa79386_del#033[00m
Nov 29 03:07:49 np0005539552 nova_compute[233724]: 2025-11-29 08:07:49.904 233728 INFO nova.virt.libvirt.driver [None req-3d92dd60-5c5c-4c6c-aeef-d400bee9f3d8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Deletion of /var/lib/nova/instances/ed7a6200-c4d2-4554-bdb4-57e02fa79386_del complete#033[00m
Nov 29 03:07:49 np0005539552 nova_compute[233724]: 2025-11-29 08:07:49.957 233728 INFO nova.compute.manager [None req-3d92dd60-5c5c-4c6c-aeef-d400bee9f3d8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Took 0.75 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:07:49 np0005539552 nova_compute[233724]: 2025-11-29 08:07:49.958 233728 DEBUG oslo.service.loopingcall [None req-3d92dd60-5c5c-4c6c-aeef-d400bee9f3d8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:07:49 np0005539552 nova_compute[233724]: 2025-11-29 08:07:49.958 233728 DEBUG nova.compute.manager [-] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:07:49 np0005539552 nova_compute[233724]: 2025-11-29 08:07:49.959 233728 DEBUG nova.network.neutron [-] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:07:50 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:07:50 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1161644685' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:07:50 np0005539552 nova_compute[233724]: 2025-11-29 08:07:50.166 233728 DEBUG oslo_concurrency.processutils [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:50 np0005539552 nova_compute[233724]: 2025-11-29 08:07:50.173 233728 DEBUG nova.compute.provider_tree [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:07:50 np0005539552 nova_compute[233724]: 2025-11-29 08:07:50.198 233728 DEBUG nova.scheduler.client.report [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:07:50 np0005539552 nova_compute[233724]: 2025-11-29 08:07:50.238 233728 DEBUG oslo_concurrency.lockutils [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:50 np0005539552 nova_compute[233724]: 2025-11-29 08:07:50.239 233728 DEBUG nova.compute.manager [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:07:50 np0005539552 nova_compute[233724]: 2025-11-29 08:07:50.255 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:50 np0005539552 nova_compute[233724]: 2025-11-29 08:07:50.299 233728 DEBUG nova.compute.manager [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:07:50 np0005539552 nova_compute[233724]: 2025-11-29 08:07:50.300 233728 DEBUG nova.network.neutron [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:07:50 np0005539552 nova_compute[233724]: 2025-11-29 08:07:50.324 233728 INFO nova.virt.libvirt.driver [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:07:50 np0005539552 nova_compute[233724]: 2025-11-29 08:07:50.347 233728 DEBUG nova.compute.manager [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:07:50 np0005539552 nova_compute[233724]: 2025-11-29 08:07:50.465 233728 DEBUG nova.compute.manager [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:07:50 np0005539552 nova_compute[233724]: 2025-11-29 08:07:50.466 233728 DEBUG nova.virt.libvirt.driver [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:07:50 np0005539552 nova_compute[233724]: 2025-11-29 08:07:50.466 233728 INFO nova.virt.libvirt.driver [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Creating image(s)#033[00m
Nov 29 03:07:50 np0005539552 nova_compute[233724]: 2025-11-29 08:07:50.497 233728 DEBUG nova.storage.rbd_utils [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] rbd image 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:50 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e256 e256: 3 total, 3 up, 3 in
Nov 29 03:07:50 np0005539552 nova_compute[233724]: 2025-11-29 08:07:50.538 233728 DEBUG nova.storage.rbd_utils [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] rbd image 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:50 np0005539552 nova_compute[233724]: 2025-11-29 08:07:50.565 233728 DEBUG nova.storage.rbd_utils [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] rbd image 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:50 np0005539552 nova_compute[233724]: 2025-11-29 08:07:50.568 233728 DEBUG oslo_concurrency.processutils [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:50 np0005539552 nova_compute[233724]: 2025-11-29 08:07:50.632 233728 DEBUG oslo_concurrency.processutils [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:50 np0005539552 nova_compute[233724]: 2025-11-29 08:07:50.633 233728 DEBUG oslo_concurrency.lockutils [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:50 np0005539552 nova_compute[233724]: 2025-11-29 08:07:50.634 233728 DEBUG oslo_concurrency.lockutils [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:50 np0005539552 nova_compute[233724]: 2025-11-29 08:07:50.634 233728 DEBUG oslo_concurrency.lockutils [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:50 np0005539552 nova_compute[233724]: 2025-11-29 08:07:50.663 233728 DEBUG nova.storage.rbd_utils [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] rbd image 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:50 np0005539552 nova_compute[233724]: 2025-11-29 08:07:50.666 233728 DEBUG oslo_concurrency.processutils [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:07:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:50.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:07:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:07:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:50.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:07:51 np0005539552 nova_compute[233724]: 2025-11-29 08:07:51.214 233728 DEBUG oslo_concurrency.processutils [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:51 np0005539552 nova_compute[233724]: 2025-11-29 08:07:51.283 233728 DEBUG nova.storage.rbd_utils [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] resizing rbd image 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:07:51 np0005539552 nova_compute[233724]: 2025-11-29 08:07:51.388 233728 DEBUG nova.objects.instance [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Lazy-loading 'migration_context' on Instance uuid 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:07:51 np0005539552 nova_compute[233724]: 2025-11-29 08:07:51.404 233728 DEBUG nova.virt.libvirt.driver [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:07:51 np0005539552 nova_compute[233724]: 2025-11-29 08:07:51.405 233728 DEBUG nova.virt.libvirt.driver [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Ensure instance console log exists: /var/lib/nova/instances/7ee7bf74-a9de-45ba-bfca-1be4702ce6e3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:07:51 np0005539552 nova_compute[233724]: 2025-11-29 08:07:51.405 233728 DEBUG oslo_concurrency.lockutils [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:51 np0005539552 nova_compute[233724]: 2025-11-29 08:07:51.405 233728 DEBUG oslo_concurrency.lockutils [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:51 np0005539552 nova_compute[233724]: 2025-11-29 08:07:51.406 233728 DEBUG oslo_concurrency.lockutils [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:51 np0005539552 nova_compute[233724]: 2025-11-29 08:07:51.449 233728 DEBUG nova.policy [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4d0fdc99d9764df9a0868625cd14f49a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b6b2c950ad924f9c9d1a44696045a508', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:07:51 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e257 e257: 3 total, 3 up, 3 in
Nov 29 03:07:51 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e258 e258: 3 total, 3 up, 3 in
Nov 29 03:07:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e259 e259: 3 total, 3 up, 3 in
Nov 29 03:07:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:07:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:52.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:07:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:52.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:53 np0005539552 nova_compute[233724]: 2025-11-29 08:07:53.042 233728 DEBUG nova.compute.manager [req-02f3aed9-887f-4bce-87e1-85136a6c0aa9 req-58c0a69f-5311-4af8-acfe-968699f20c4d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Received event network-vif-plugged-0dfa3388-927a-4252-a76a-6599caf67253 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:53 np0005539552 nova_compute[233724]: 2025-11-29 08:07:53.042 233728 DEBUG oslo_concurrency.lockutils [req-02f3aed9-887f-4bce-87e1-85136a6c0aa9 req-58c0a69f-5311-4af8-acfe-968699f20c4d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ed7a6200-c4d2-4554-bdb4-57e02fa79386-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:53 np0005539552 nova_compute[233724]: 2025-11-29 08:07:53.043 233728 DEBUG oslo_concurrency.lockutils [req-02f3aed9-887f-4bce-87e1-85136a6c0aa9 req-58c0a69f-5311-4af8-acfe-968699f20c4d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ed7a6200-c4d2-4554-bdb4-57e02fa79386-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:53 np0005539552 nova_compute[233724]: 2025-11-29 08:07:53.043 233728 DEBUG oslo_concurrency.lockutils [req-02f3aed9-887f-4bce-87e1-85136a6c0aa9 req-58c0a69f-5311-4af8-acfe-968699f20c4d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ed7a6200-c4d2-4554-bdb4-57e02fa79386-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:53 np0005539552 nova_compute[233724]: 2025-11-29 08:07:53.043 233728 DEBUG nova.compute.manager [req-02f3aed9-887f-4bce-87e1-85136a6c0aa9 req-58c0a69f-5311-4af8-acfe-968699f20c4d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] No waiting events found dispatching network-vif-plugged-0dfa3388-927a-4252-a76a-6599caf67253 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:07:53 np0005539552 nova_compute[233724]: 2025-11-29 08:07:53.043 233728 WARNING nova.compute.manager [req-02f3aed9-887f-4bce-87e1-85136a6c0aa9 req-58c0a69f-5311-4af8-acfe-968699f20c4d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Received unexpected event network-vif-plugged-0dfa3388-927a-4252-a76a-6599caf67253 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:07:53 np0005539552 nova_compute[233724]: 2025-11-29 08:07:53.191 233728 DEBUG nova.network.neutron [-] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:07:53 np0005539552 nova_compute[233724]: 2025-11-29 08:07:53.207 233728 INFO nova.compute.manager [-] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Took 3.25 seconds to deallocate network for instance.#033[00m
Nov 29 03:07:53 np0005539552 nova_compute[233724]: 2025-11-29 08:07:53.262 233728 DEBUG oslo_concurrency.lockutils [None req-3d92dd60-5c5c-4c6c-aeef-d400bee9f3d8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:53 np0005539552 nova_compute[233724]: 2025-11-29 08:07:53.262 233728 DEBUG oslo_concurrency.lockutils [None req-3d92dd60-5c5c-4c6c-aeef-d400bee9f3d8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:53 np0005539552 nova_compute[233724]: 2025-11-29 08:07:53.275 233728 DEBUG nova.compute.manager [req-ecc13836-c238-4da7-8e07-a27aedbf987e req-25bea9f7-6370-47ae-8553-3586ab1945e0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Received event network-vif-deleted-0dfa3388-927a-4252-a76a-6599caf67253 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:53 np0005539552 nova_compute[233724]: 2025-11-29 08:07:53.315 233728 DEBUG oslo_concurrency.processutils [None req-3d92dd60-5c5c-4c6c-aeef-d400bee9f3d8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:07:53 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1491982643' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:07:53 np0005539552 nova_compute[233724]: 2025-11-29 08:07:53.745 233728 DEBUG oslo_concurrency.processutils [None req-3d92dd60-5c5c-4c6c-aeef-d400bee9f3d8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:53 np0005539552 nova_compute[233724]: 2025-11-29 08:07:53.751 233728 DEBUG nova.compute.provider_tree [None req-3d92dd60-5c5c-4c6c-aeef-d400bee9f3d8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:07:53 np0005539552 nova_compute[233724]: 2025-11-29 08:07:53.768 233728 DEBUG nova.scheduler.client.report [None req-3d92dd60-5c5c-4c6c-aeef-d400bee9f3d8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:07:53 np0005539552 nova_compute[233724]: 2025-11-29 08:07:53.813 233728 DEBUG oslo_concurrency.lockutils [None req-3d92dd60-5c5c-4c6c-aeef-d400bee9f3d8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.551s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:53 np0005539552 nova_compute[233724]: 2025-11-29 08:07:53.941 233728 INFO nova.scheduler.client.report [None req-3d92dd60-5c5c-4c6c-aeef-d400bee9f3d8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Deleted allocations for instance ed7a6200-c4d2-4554-bdb4-57e02fa79386#033[00m
Nov 29 03:07:54 np0005539552 nova_compute[233724]: 2025-11-29 08:07:54.052 233728 DEBUG oslo_concurrency.lockutils [None req-3d92dd60-5c5c-4c6c-aeef-d400bee9f3d8 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lock "ed7a6200-c4d2-4554-bdb4-57e02fa79386" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.851s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:54 np0005539552 nova_compute[233724]: 2025-11-29 08:07:54.096 233728 DEBUG nova.network.neutron [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Successfully created port: da77bd23-8c00-4b8b-b4a6-7520c77f0352 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:07:54 np0005539552 nova_compute[233724]: 2025-11-29 08:07:54.464 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:54.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:07:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:55.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:07:55 np0005539552 nova_compute[233724]: 2025-11-29 08:07:55.055 233728 DEBUG nova.network.neutron [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Successfully updated port: da77bd23-8c00-4b8b-b4a6-7520c77f0352 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:07:55 np0005539552 nova_compute[233724]: 2025-11-29 08:07:55.078 233728 DEBUG oslo_concurrency.lockutils [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Acquiring lock "refresh_cache-7ee7bf74-a9de-45ba-bfca-1be4702ce6e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:07:55 np0005539552 nova_compute[233724]: 2025-11-29 08:07:55.078 233728 DEBUG oslo_concurrency.lockutils [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Acquired lock "refresh_cache-7ee7bf74-a9de-45ba-bfca-1be4702ce6e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:07:55 np0005539552 nova_compute[233724]: 2025-11-29 08:07:55.079 233728 DEBUG nova.network.neutron [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:07:55 np0005539552 nova_compute[233724]: 2025-11-29 08:07:55.214 233728 DEBUG nova.network.neutron [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:07:55 np0005539552 nova_compute[233724]: 2025-11-29 08:07:55.258 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:55 np0005539552 nova_compute[233724]: 2025-11-29 08:07:55.451 233728 DEBUG nova.compute.manager [req-2e1ceba8-7910-45af-8ae1-cad763f8ad34 req-1023d836-0d40-4e72-a358-154490fde5d5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Received event network-changed-da77bd23-8c00-4b8b-b4a6-7520c77f0352 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:55 np0005539552 nova_compute[233724]: 2025-11-29 08:07:55.451 233728 DEBUG nova.compute.manager [req-2e1ceba8-7910-45af-8ae1-cad763f8ad34 req-1023d836-0d40-4e72-a358-154490fde5d5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Refreshing instance network info cache due to event network-changed-da77bd23-8c00-4b8b-b4a6-7520c77f0352. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:07:55 np0005539552 nova_compute[233724]: 2025-11-29 08:07:55.452 233728 DEBUG oslo_concurrency.lockutils [req-2e1ceba8-7910-45af-8ae1-cad763f8ad34 req-1023d836-0d40-4e72-a358-154490fde5d5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-7ee7bf74-a9de-45ba-bfca-1be4702ce6e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:07:55 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e260 e260: 3 total, 3 up, 3 in
Nov 29 03:07:56 np0005539552 nova_compute[233724]: 2025-11-29 08:07:56.601 233728 DEBUG nova.network.neutron [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Updating instance_info_cache with network_info: [{"id": "da77bd23-8c00-4b8b-b4a6-7520c77f0352", "address": "fa:16:3e:54:16:c7", "network": {"id": "b9466489-b6be-467b-8322-871db6f3aea4", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1529796577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6b2c950ad924f9c9d1a44696045a508", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda77bd23-8c", "ovs_interfaceid": "da77bd23-8c00-4b8b-b4a6-7520c77f0352", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:07:56 np0005539552 nova_compute[233724]: 2025-11-29 08:07:56.618 233728 DEBUG oslo_concurrency.lockutils [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Releasing lock "refresh_cache-7ee7bf74-a9de-45ba-bfca-1be4702ce6e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:07:56 np0005539552 nova_compute[233724]: 2025-11-29 08:07:56.618 233728 DEBUG nova.compute.manager [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Instance network_info: |[{"id": "da77bd23-8c00-4b8b-b4a6-7520c77f0352", "address": "fa:16:3e:54:16:c7", "network": {"id": "b9466489-b6be-467b-8322-871db6f3aea4", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1529796577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6b2c950ad924f9c9d1a44696045a508", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda77bd23-8c", "ovs_interfaceid": "da77bd23-8c00-4b8b-b4a6-7520c77f0352", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:07:56 np0005539552 nova_compute[233724]: 2025-11-29 08:07:56.619 233728 DEBUG oslo_concurrency.lockutils [req-2e1ceba8-7910-45af-8ae1-cad763f8ad34 req-1023d836-0d40-4e72-a358-154490fde5d5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-7ee7bf74-a9de-45ba-bfca-1be4702ce6e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:07:56 np0005539552 nova_compute[233724]: 2025-11-29 08:07:56.619 233728 DEBUG nova.network.neutron [req-2e1ceba8-7910-45af-8ae1-cad763f8ad34 req-1023d836-0d40-4e72-a358-154490fde5d5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Refreshing network info cache for port da77bd23-8c00-4b8b-b4a6-7520c77f0352 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:07:56 np0005539552 nova_compute[233724]: 2025-11-29 08:07:56.622 233728 DEBUG nova.virt.libvirt.driver [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Start _get_guest_xml network_info=[{"id": "da77bd23-8c00-4b8b-b4a6-7520c77f0352", "address": "fa:16:3e:54:16:c7", "network": {"id": "b9466489-b6be-467b-8322-871db6f3aea4", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1529796577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6b2c950ad924f9c9d1a44696045a508", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda77bd23-8c", "ovs_interfaceid": "da77bd23-8c00-4b8b-b4a6-7520c77f0352", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:07:56 np0005539552 nova_compute[233724]: 2025-11-29 08:07:56.627 233728 WARNING nova.virt.libvirt.driver [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:07:56 np0005539552 nova_compute[233724]: 2025-11-29 08:07:56.631 233728 DEBUG nova.virt.libvirt.host [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:07:56 np0005539552 nova_compute[233724]: 2025-11-29 08:07:56.632 233728 DEBUG nova.virt.libvirt.host [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:07:56 np0005539552 nova_compute[233724]: 2025-11-29 08:07:56.636 233728 DEBUG nova.virt.libvirt.host [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:07:56 np0005539552 nova_compute[233724]: 2025-11-29 08:07:56.637 233728 DEBUG nova.virt.libvirt.host [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:07:56 np0005539552 nova_compute[233724]: 2025-11-29 08:07:56.638 233728 DEBUG nova.virt.libvirt.driver [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:07:56 np0005539552 nova_compute[233724]: 2025-11-29 08:07:56.638 233728 DEBUG nova.virt.hardware [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:07:56 np0005539552 nova_compute[233724]: 2025-11-29 08:07:56.638 233728 DEBUG nova.virt.hardware [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:07:56 np0005539552 nova_compute[233724]: 2025-11-29 08:07:56.639 233728 DEBUG nova.virt.hardware [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:07:56 np0005539552 nova_compute[233724]: 2025-11-29 08:07:56.639 233728 DEBUG nova.virt.hardware [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:07:56 np0005539552 nova_compute[233724]: 2025-11-29 08:07:56.639 233728 DEBUG nova.virt.hardware [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:07:56 np0005539552 nova_compute[233724]: 2025-11-29 08:07:56.639 233728 DEBUG nova.virt.hardware [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:07:56 np0005539552 nova_compute[233724]: 2025-11-29 08:07:56.639 233728 DEBUG nova.virt.hardware [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:07:56 np0005539552 nova_compute[233724]: 2025-11-29 08:07:56.640 233728 DEBUG nova.virt.hardware [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:07:56 np0005539552 nova_compute[233724]: 2025-11-29 08:07:56.640 233728 DEBUG nova.virt.hardware [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:07:56 np0005539552 nova_compute[233724]: 2025-11-29 08:07:56.640 233728 DEBUG nova.virt.hardware [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:07:56 np0005539552 nova_compute[233724]: 2025-11-29 08:07:56.640 233728 DEBUG nova.virt.hardware [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:07:56 np0005539552 nova_compute[233724]: 2025-11-29 08:07:56.643 233728 DEBUG oslo_concurrency.processutils [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:56.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:56 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e261 e261: 3 total, 3 up, 3 in
Nov 29 03:07:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:57.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:57 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:07:57 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2706580805' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:07:57 np0005539552 nova_compute[233724]: 2025-11-29 08:07:57.110 233728 DEBUG oslo_concurrency.processutils [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:57 np0005539552 nova_compute[233724]: 2025-11-29 08:07:57.137 233728 DEBUG nova.storage.rbd_utils [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] rbd image 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:57 np0005539552 nova_compute[233724]: 2025-11-29 08:07:57.142 233728 DEBUG oslo_concurrency.processutils [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:57 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:57 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:07:57 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1548560425' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:07:57 np0005539552 nova_compute[233724]: 2025-11-29 08:07:57.584 233728 DEBUG oslo_concurrency.processutils [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:57 np0005539552 nova_compute[233724]: 2025-11-29 08:07:57.586 233728 DEBUG nova.virt.libvirt.vif [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:07:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-2104603624',display_name='tempest-ServersTestManualDisk-server-2104603624',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-2104603624',id=70,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBITg07Ksl6rX6XSnPz2ZFOLM+UbS1RduVqsaHCesvsjGltRR7ggJCBhKntRiq+XCEWG3WWkGkr8Y/aGe++Fv7ErxmlUZGsivlhvwvavslAd7PL19D7cim3dDuKnBF59png==',key_name='tempest-keypair-115054336',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b6b2c950ad924f9c9d1a44696045a508',ramdisk_id='',reservation_id='r-aflct8m2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-722013135',owner_user_name='tempest-ServersTestManualDisk-722013135-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:07:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4d0fdc99d9764df9a0868625cd14f49a',uuid=7ee7bf74-a9de-45ba-bfca-1be4702ce6e3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "da77bd23-8c00-4b8b-b4a6-7520c77f0352", "address": "fa:16:3e:54:16:c7", "network": {"id": "b9466489-b6be-467b-8322-871db6f3aea4", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1529796577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6b2c950ad924f9c9d1a44696045a508", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda77bd23-8c", "ovs_interfaceid": "da77bd23-8c00-4b8b-b4a6-7520c77f0352", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:07:57 np0005539552 nova_compute[233724]: 2025-11-29 08:07:57.587 233728 DEBUG nova.network.os_vif_util [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Converting VIF {"id": "da77bd23-8c00-4b8b-b4a6-7520c77f0352", "address": "fa:16:3e:54:16:c7", "network": {"id": "b9466489-b6be-467b-8322-871db6f3aea4", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1529796577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6b2c950ad924f9c9d1a44696045a508", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda77bd23-8c", "ovs_interfaceid": "da77bd23-8c00-4b8b-b4a6-7520c77f0352", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:07:57 np0005539552 nova_compute[233724]: 2025-11-29 08:07:57.588 233728 DEBUG nova.network.os_vif_util [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:16:c7,bridge_name='br-int',has_traffic_filtering=True,id=da77bd23-8c00-4b8b-b4a6-7520c77f0352,network=Network(b9466489-b6be-467b-8322-871db6f3aea4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda77bd23-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:07:57 np0005539552 nova_compute[233724]: 2025-11-29 08:07:57.589 233728 DEBUG nova.objects.instance [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:07:57 np0005539552 nova_compute[233724]: 2025-11-29 08:07:57.628 233728 DEBUG nova.virt.libvirt.driver [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:07:57 np0005539552 nova_compute[233724]:  <uuid>7ee7bf74-a9de-45ba-bfca-1be4702ce6e3</uuid>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:  <name>instance-00000046</name>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:07:57 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:      <nova:name>tempest-ServersTestManualDisk-server-2104603624</nova:name>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:07:56</nova:creationTime>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:07:57 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:        <nova:user uuid="4d0fdc99d9764df9a0868625cd14f49a">tempest-ServersTestManualDisk-722013135-project-member</nova:user>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:        <nova:project uuid="b6b2c950ad924f9c9d1a44696045a508">tempest-ServersTestManualDisk-722013135</nova:project>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:        <nova:port uuid="da77bd23-8c00-4b8b-b4a6-7520c77f0352">
Nov 29 03:07:57 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:      <entry name="serial">7ee7bf74-a9de-45ba-bfca-1be4702ce6e3</entry>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:      <entry name="uuid">7ee7bf74-a9de-45ba-bfca-1be4702ce6e3</entry>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:07:57 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/7ee7bf74-a9de-45ba-bfca-1be4702ce6e3_disk">
Nov 29 03:07:57 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:07:57 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:07:57 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/7ee7bf74-a9de-45ba-bfca-1be4702ce6e3_disk.config">
Nov 29 03:07:57 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:07:57 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:07:57 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:54:16:c7"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:      <target dev="tapda77bd23-8c"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:07:57 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/7ee7bf74-a9de-45ba-bfca-1be4702ce6e3/console.log" append="off"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:07:57 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:07:57 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:07:57 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:07:57 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:07:57 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:07:57 np0005539552 nova_compute[233724]: 2025-11-29 08:07:57.630 233728 DEBUG nova.compute.manager [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Preparing to wait for external event network-vif-plugged-da77bd23-8c00-4b8b-b4a6-7520c77f0352 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:07:57 np0005539552 nova_compute[233724]: 2025-11-29 08:07:57.630 233728 DEBUG oslo_concurrency.lockutils [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Acquiring lock "7ee7bf74-a9de-45ba-bfca-1be4702ce6e3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:57 np0005539552 nova_compute[233724]: 2025-11-29 08:07:57.631 233728 DEBUG oslo_concurrency.lockutils [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Lock "7ee7bf74-a9de-45ba-bfca-1be4702ce6e3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:57 np0005539552 nova_compute[233724]: 2025-11-29 08:07:57.631 233728 DEBUG oslo_concurrency.lockutils [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Lock "7ee7bf74-a9de-45ba-bfca-1be4702ce6e3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:57 np0005539552 nova_compute[233724]: 2025-11-29 08:07:57.632 233728 DEBUG nova.virt.libvirt.vif [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:07:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-2104603624',display_name='tempest-ServersTestManualDisk-server-2104603624',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-2104603624',id=70,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBITg07Ksl6rX6XSnPz2ZFOLM+UbS1RduVqsaHCesvsjGltRR7ggJCBhKntRiq+XCEWG3WWkGkr8Y/aGe++Fv7ErxmlUZGsivlhvwvavslAd7PL19D7cim3dDuKnBF59png==',key_name='tempest-keypair-115054336',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b6b2c950ad924f9c9d1a44696045a508',ramdisk_id='',reservation_id='r-aflct8m2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-722013135',owner_user_name='tempest-ServersTestManualDisk-722013135-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:07:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4d0fdc99d9764df9a0868625cd14f49a',uuid=7ee7bf74-a9de-45ba-bfca-1be4702ce6e3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "da77bd23-8c00-4b8b-b4a6-7520c77f0352", "address": "fa:16:3e:54:16:c7", "network": {"id": "b9466489-b6be-467b-8322-871db6f3aea4", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1529796577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6b2c950ad924f9c9d1a44696045a508", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda77bd23-8c", "ovs_interfaceid": "da77bd23-8c00-4b8b-b4a6-7520c77f0352", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:07:57 np0005539552 nova_compute[233724]: 2025-11-29 08:07:57.632 233728 DEBUG nova.network.os_vif_util [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Converting VIF {"id": "da77bd23-8c00-4b8b-b4a6-7520c77f0352", "address": "fa:16:3e:54:16:c7", "network": {"id": "b9466489-b6be-467b-8322-871db6f3aea4", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1529796577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6b2c950ad924f9c9d1a44696045a508", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda77bd23-8c", "ovs_interfaceid": "da77bd23-8c00-4b8b-b4a6-7520c77f0352", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:07:57 np0005539552 nova_compute[233724]: 2025-11-29 08:07:57.633 233728 DEBUG nova.network.os_vif_util [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:16:c7,bridge_name='br-int',has_traffic_filtering=True,id=da77bd23-8c00-4b8b-b4a6-7520c77f0352,network=Network(b9466489-b6be-467b-8322-871db6f3aea4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda77bd23-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:07:57 np0005539552 nova_compute[233724]: 2025-11-29 08:07:57.634 233728 DEBUG os_vif [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:16:c7,bridge_name='br-int',has_traffic_filtering=True,id=da77bd23-8c00-4b8b-b4a6-7520c77f0352,network=Network(b9466489-b6be-467b-8322-871db6f3aea4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda77bd23-8c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:07:57 np0005539552 nova_compute[233724]: 2025-11-29 08:07:57.637 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:57 np0005539552 nova_compute[233724]: 2025-11-29 08:07:57.638 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:57 np0005539552 nova_compute[233724]: 2025-11-29 08:07:57.638 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:07:57 np0005539552 nova_compute[233724]: 2025-11-29 08:07:57.644 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:57 np0005539552 nova_compute[233724]: 2025-11-29 08:07:57.645 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapda77bd23-8c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:57 np0005539552 nova_compute[233724]: 2025-11-29 08:07:57.647 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapda77bd23-8c, col_values=(('external_ids', {'iface-id': 'da77bd23-8c00-4b8b-b4a6-7520c77f0352', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:54:16:c7', 'vm-uuid': '7ee7bf74-a9de-45ba-bfca-1be4702ce6e3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:57 np0005539552 NetworkManager[48926]: <info>  [1764403677.6506] manager: (tapda77bd23-8c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/109)
Nov 29 03:07:57 np0005539552 nova_compute[233724]: 2025-11-29 08:07:57.651 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:57 np0005539552 nova_compute[233724]: 2025-11-29 08:07:57.654 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:07:57 np0005539552 nova_compute[233724]: 2025-11-29 08:07:57.660 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:57 np0005539552 nova_compute[233724]: 2025-11-29 08:07:57.661 233728 INFO os_vif [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:16:c7,bridge_name='br-int',has_traffic_filtering=True,id=da77bd23-8c00-4b8b-b4a6-7520c77f0352,network=Network(b9466489-b6be-467b-8322-871db6f3aea4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda77bd23-8c')#033[00m
Nov 29 03:07:57 np0005539552 nova_compute[233724]: 2025-11-29 08:07:57.724 233728 DEBUG nova.virt.libvirt.driver [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:07:57 np0005539552 nova_compute[233724]: 2025-11-29 08:07:57.725 233728 DEBUG nova.virt.libvirt.driver [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:07:57 np0005539552 nova_compute[233724]: 2025-11-29 08:07:57.725 233728 DEBUG nova.virt.libvirt.driver [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] No VIF found with MAC fa:16:3e:54:16:c7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:07:57 np0005539552 nova_compute[233724]: 2025-11-29 08:07:57.726 233728 INFO nova.virt.libvirt.driver [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Using config drive#033[00m
Nov 29 03:07:57 np0005539552 nova_compute[233724]: 2025-11-29 08:07:57.759 233728 DEBUG nova.storage.rbd_utils [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] rbd image 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:57 np0005539552 nova_compute[233724]: 2025-11-29 08:07:57.984 233728 DEBUG nova.network.neutron [req-2e1ceba8-7910-45af-8ae1-cad763f8ad34 req-1023d836-0d40-4e72-a358-154490fde5d5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Updated VIF entry in instance network info cache for port da77bd23-8c00-4b8b-b4a6-7520c77f0352. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:07:57 np0005539552 nova_compute[233724]: 2025-11-29 08:07:57.985 233728 DEBUG nova.network.neutron [req-2e1ceba8-7910-45af-8ae1-cad763f8ad34 req-1023d836-0d40-4e72-a358-154490fde5d5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Updating instance_info_cache with network_info: [{"id": "da77bd23-8c00-4b8b-b4a6-7520c77f0352", "address": "fa:16:3e:54:16:c7", "network": {"id": "b9466489-b6be-467b-8322-871db6f3aea4", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1529796577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6b2c950ad924f9c9d1a44696045a508", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda77bd23-8c", "ovs_interfaceid": "da77bd23-8c00-4b8b-b4a6-7520c77f0352", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:07:58 np0005539552 nova_compute[233724]: 2025-11-29 08:07:58.040 233728 DEBUG oslo_concurrency.lockutils [req-2e1ceba8-7910-45af-8ae1-cad763f8ad34 req-1023d836-0d40-4e72-a358-154490fde5d5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-7ee7bf74-a9de-45ba-bfca-1be4702ce6e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:07:58 np0005539552 nova_compute[233724]: 2025-11-29 08:07:58.159 233728 INFO nova.virt.libvirt.driver [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Creating config drive at /var/lib/nova/instances/7ee7bf74-a9de-45ba-bfca-1be4702ce6e3/disk.config#033[00m
Nov 29 03:07:58 np0005539552 nova_compute[233724]: 2025-11-29 08:07:58.166 233728 DEBUG oslo_concurrency.processutils [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7ee7bf74-a9de-45ba-bfca-1be4702ce6e3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo4_tfz_o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:58 np0005539552 nova_compute[233724]: 2025-11-29 08:07:58.296 233728 DEBUG oslo_concurrency.processutils [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7ee7bf74-a9de-45ba-bfca-1be4702ce6e3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo4_tfz_o" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:58 np0005539552 nova_compute[233724]: 2025-11-29 08:07:58.328 233728 DEBUG nova.storage.rbd_utils [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] rbd image 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:58 np0005539552 nova_compute[233724]: 2025-11-29 08:07:58.333 233728 DEBUG oslo_concurrency.processutils [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7ee7bf74-a9de-45ba-bfca-1be4702ce6e3/disk.config 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:58 np0005539552 nova_compute[233724]: 2025-11-29 08:07:58.507 233728 DEBUG oslo_concurrency.processutils [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7ee7bf74-a9de-45ba-bfca-1be4702ce6e3/disk.config 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:58 np0005539552 nova_compute[233724]: 2025-11-29 08:07:58.508 233728 INFO nova.virt.libvirt.driver [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Deleting local config drive /var/lib/nova/instances/7ee7bf74-a9de-45ba-bfca-1be4702ce6e3/disk.config because it was imported into RBD.#033[00m
Nov 29 03:07:58 np0005539552 kernel: tapda77bd23-8c: entered promiscuous mode
Nov 29 03:07:58 np0005539552 NetworkManager[48926]: <info>  [1764403678.5627] manager: (tapda77bd23-8c): new Tun device (/org/freedesktop/NetworkManager/Devices/110)
Nov 29 03:07:58 np0005539552 ovn_controller[133798]: 2025-11-29T08:07:58Z|00206|binding|INFO|Claiming lport da77bd23-8c00-4b8b-b4a6-7520c77f0352 for this chassis.
Nov 29 03:07:58 np0005539552 nova_compute[233724]: 2025-11-29 08:07:58.563 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:58 np0005539552 ovn_controller[133798]: 2025-11-29T08:07:58Z|00207|binding|INFO|da77bd23-8c00-4b8b-b4a6-7520c77f0352: Claiming fa:16:3e:54:16:c7 10.100.0.13
Nov 29 03:07:58 np0005539552 nova_compute[233724]: 2025-11-29 08:07:58.569 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:58.574 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:16:c7 10.100.0.13'], port_security=['fa:16:3e:54:16:c7 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '7ee7bf74-a9de-45ba-bfca-1be4702ce6e3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b9466489-b6be-467b-8322-871db6f3aea4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b6b2c950ad924f9c9d1a44696045a508', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8e7fb14b-2449-4fa5-aee5-f05d9772f4a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37f28177-df1c-499c-a627-e417ef64755b, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=da77bd23-8c00-4b8b-b4a6-7520c77f0352) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:58.575 143400 INFO neutron.agent.ovn.metadata.agent [-] Port da77bd23-8c00-4b8b-b4a6-7520c77f0352 in datapath b9466489-b6be-467b-8322-871db6f3aea4 bound to our chassis#033[00m
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:58.577 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b9466489-b6be-467b-8322-871db6f3aea4#033[00m
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:58.586 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[24bbb592-b609-421f-a0ac-3f575c4532a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:58.587 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb9466489-b1 in ovnmeta-b9466489-b6be-467b-8322-871db6f3aea4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:58.589 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb9466489-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:58.589 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3dea74c4-ee3b-4313-9055-45c25b54004a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:58.589 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b63cb28d-188e-41ac-8595-d1ac8a2bc6e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:58.602 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[3e18a193-54fb-4c79-9f85-acf912f08c24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:58 np0005539552 systemd-machined[196379]: New machine qemu-25-instance-00000046.
Nov 29 03:07:58 np0005539552 systemd[1]: Started Virtual Machine qemu-25-instance-00000046.
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:58.625 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[307751f0-fdaa-4ec3-a5e0-c4a61f5531b2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:58 np0005539552 systemd-udevd[261817]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:07:58 np0005539552 NetworkManager[48926]: <info>  [1764403678.6387] device (tapda77bd23-8c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:07:58 np0005539552 NetworkManager[48926]: <info>  [1764403678.6395] device (tapda77bd23-8c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:07:58 np0005539552 nova_compute[233724]: 2025-11-29 08:07:58.648 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:58 np0005539552 ovn_controller[133798]: 2025-11-29T08:07:58Z|00208|binding|INFO|Setting lport da77bd23-8c00-4b8b-b4a6-7520c77f0352 ovn-installed in OVS
Nov 29 03:07:58 np0005539552 ovn_controller[133798]: 2025-11-29T08:07:58Z|00209|binding|INFO|Setting lport da77bd23-8c00-4b8b-b4a6-7520c77f0352 up in Southbound
Nov 29 03:07:58 np0005539552 nova_compute[233724]: 2025-11-29 08:07:58.657 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:58.659 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[44c3878a-a1e7-4559-96da-2fd7af74eac3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:58.665 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[9df640e6-5c45-4cc7-9ef9-a181b6c1ce5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:58 np0005539552 NetworkManager[48926]: <info>  [1764403678.6659] manager: (tapb9466489-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/111)
Nov 29 03:07:58 np0005539552 systemd-udevd[261823]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:58.692 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[21581b42-cdf7-41d0-94a1-42c29b55f8ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:58.695 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[c50cc8ea-3f78-4f71-ad0c-4851df13f4c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:07:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:58.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:07:58 np0005539552 NetworkManager[48926]: <info>  [1764403678.7176] device (tapb9466489-b0): carrier: link connected
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:58.722 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[b864ae15-2ca4-4385-a109-72c4241371db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:58.739 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b1ecca82-bd2e-41b3-b1c1-c8d432062adf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb9466489-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:4e:d8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672440, 'reachable_time': 27032, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261848, 'error': None, 'target': 'ovnmeta-b9466489-b6be-467b-8322-871db6f3aea4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:58.754 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[fa748f51-543e-46af-a0b4-21d98c519fb7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe98:4ed8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672440, 'tstamp': 672440}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261849, 'error': None, 'target': 'ovnmeta-b9466489-b6be-467b-8322-871db6f3aea4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:58.773 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[da4d3869-b9e2-4354-9cd4-4f143d1a33f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb9466489-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:4e:d8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672440, 'reachable_time': 27032, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 261850, 'error': None, 'target': 'ovnmeta-b9466489-b6be-467b-8322-871db6f3aea4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:58.803 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[471441bc-3062-485d-87be-6804b4af75e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:58.862 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6962b1b3-1ea8-48ba-bd74-852e50a6c601]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:58.865 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb9466489-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:58.865 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:58.866 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb9466489-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:58 np0005539552 kernel: tapb9466489-b0: entered promiscuous mode
Nov 29 03:07:58 np0005539552 NetworkManager[48926]: <info>  [1764403678.8680] manager: (tapb9466489-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/112)
Nov 29 03:07:58 np0005539552 nova_compute[233724]: 2025-11-29 08:07:58.867 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:58.870 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb9466489-b0, col_values=(('external_ids', {'iface-id': '168133ed-9028-4fad-a6c8-73944e9495bc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:58 np0005539552 ovn_controller[133798]: 2025-11-29T08:07:58Z|00210|binding|INFO|Releasing lport 168133ed-9028-4fad-a6c8-73944e9495bc from this chassis (sb_readonly=0)
Nov 29 03:07:58 np0005539552 nova_compute[233724]: 2025-11-29 08:07:58.871 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:58 np0005539552 nova_compute[233724]: 2025-11-29 08:07:58.887 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:58.888 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b9466489-b6be-467b-8322-871db6f3aea4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b9466489-b6be-467b-8322-871db6f3aea4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:58.889 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[85e1492f-bead-4162-a223-df31ee399a2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:58.890 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-b9466489-b6be-467b-8322-871db6f3aea4
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/b9466489-b6be-467b-8322-871db6f3aea4.pid.haproxy
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID b9466489-b6be-467b-8322-871db6f3aea4
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:07:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:07:58.890 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b9466489-b6be-467b-8322-871db6f3aea4', 'env', 'PROCESS_TAG=haproxy-b9466489-b6be-467b-8322-871db6f3aea4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b9466489-b6be-467b-8322-871db6f3aea4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:07:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:07:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:59.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:59 np0005539552 nova_compute[233724]: 2025-11-29 08:07:59.072 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403679.0723026, 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:07:59 np0005539552 nova_compute[233724]: 2025-11-29 08:07:59.073 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] VM Started (Lifecycle Event)#033[00m
Nov 29 03:07:59 np0005539552 nova_compute[233724]: 2025-11-29 08:07:59.094 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:07:59 np0005539552 nova_compute[233724]: 2025-11-29 08:07:59.098 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403679.0724819, 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:07:59 np0005539552 nova_compute[233724]: 2025-11-29 08:07:59.099 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:07:59 np0005539552 nova_compute[233724]: 2025-11-29 08:07:59.119 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:07:59 np0005539552 nova_compute[233724]: 2025-11-29 08:07:59.124 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:07:59 np0005539552 nova_compute[233724]: 2025-11-29 08:07:59.144 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:07:59 np0005539552 podman[261924]: 2025-11-29 08:07:59.27451867 +0000 UTC m=+0.058566532 container create 19571a0833dc67f694d45a15ab67c91f9607fcc0f554eb14da8b59c4b9efdb0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b9466489-b6be-467b-8322-871db6f3aea4, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:07:59 np0005539552 systemd[1]: Started libpod-conmon-19571a0833dc67f694d45a15ab67c91f9607fcc0f554eb14da8b59c4b9efdb0b.scope.
Nov 29 03:07:59 np0005539552 podman[261924]: 2025-11-29 08:07:59.239237408 +0000 UTC m=+0.023285260 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:07:59 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:07:59 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0766dc8bc3618d760ca145f06830a6c9e576b3eba25b3e91e72aa02f79ddba4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:07:59 np0005539552 podman[261924]: 2025-11-29 08:07:59.366493663 +0000 UTC m=+0.150541505 container init 19571a0833dc67f694d45a15ab67c91f9607fcc0f554eb14da8b59c4b9efdb0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b9466489-b6be-467b-8322-871db6f3aea4, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:07:59 np0005539552 podman[261924]: 2025-11-29 08:07:59.371462287 +0000 UTC m=+0.155510109 container start 19571a0833dc67f694d45a15ab67c91f9607fcc0f554eb14da8b59c4b9efdb0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b9466489-b6be-467b-8322-871db6f3aea4, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:07:59 np0005539552 neutron-haproxy-ovnmeta-b9466489-b6be-467b-8322-871db6f3aea4[261939]: [NOTICE]   (261943) : New worker (261945) forked
Nov 29 03:07:59 np0005539552 neutron-haproxy-ovnmeta-b9466489-b6be-467b-8322-871db6f3aea4[261939]: [NOTICE]   (261943) : Loading success.
Nov 29 03:08:00 np0005539552 nova_compute[233724]: 2025-11-29 08:08:00.260 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:00 np0005539552 nova_compute[233724]: 2025-11-29 08:08:00.545 233728 DEBUG nova.compute.manager [req-8ea4b75a-05e7-4f5e-8c06-0276e76cf551 req-8c49096b-8297-4729-9010-a70db25e853e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Received event network-vif-plugged-da77bd23-8c00-4b8b-b4a6-7520c77f0352 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:08:00 np0005539552 nova_compute[233724]: 2025-11-29 08:08:00.546 233728 DEBUG oslo_concurrency.lockutils [req-8ea4b75a-05e7-4f5e-8c06-0276e76cf551 req-8c49096b-8297-4729-9010-a70db25e853e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7ee7bf74-a9de-45ba-bfca-1be4702ce6e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:00 np0005539552 nova_compute[233724]: 2025-11-29 08:08:00.546 233728 DEBUG oslo_concurrency.lockutils [req-8ea4b75a-05e7-4f5e-8c06-0276e76cf551 req-8c49096b-8297-4729-9010-a70db25e853e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7ee7bf74-a9de-45ba-bfca-1be4702ce6e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:00 np0005539552 nova_compute[233724]: 2025-11-29 08:08:00.546 233728 DEBUG oslo_concurrency.lockutils [req-8ea4b75a-05e7-4f5e-8c06-0276e76cf551 req-8c49096b-8297-4729-9010-a70db25e853e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7ee7bf74-a9de-45ba-bfca-1be4702ce6e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:00 np0005539552 nova_compute[233724]: 2025-11-29 08:08:00.546 233728 DEBUG nova.compute.manager [req-8ea4b75a-05e7-4f5e-8c06-0276e76cf551 req-8c49096b-8297-4729-9010-a70db25e853e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Processing event network-vif-plugged-da77bd23-8c00-4b8b-b4a6-7520c77f0352 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:08:00 np0005539552 nova_compute[233724]: 2025-11-29 08:08:00.547 233728 DEBUG nova.compute.manager [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:08:00 np0005539552 nova_compute[233724]: 2025-11-29 08:08:00.550 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403680.5507157, 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:08:00 np0005539552 nova_compute[233724]: 2025-11-29 08:08:00.551 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:08:00 np0005539552 nova_compute[233724]: 2025-11-29 08:08:00.553 233728 DEBUG nova.virt.libvirt.driver [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:08:00 np0005539552 nova_compute[233724]: 2025-11-29 08:08:00.556 233728 INFO nova.virt.libvirt.driver [-] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Instance spawned successfully.#033[00m
Nov 29 03:08:00 np0005539552 nova_compute[233724]: 2025-11-29 08:08:00.556 233728 DEBUG nova.virt.libvirt.driver [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:08:00 np0005539552 nova_compute[233724]: 2025-11-29 08:08:00.573 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:08:00 np0005539552 nova_compute[233724]: 2025-11-29 08:08:00.579 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:08:00 np0005539552 nova_compute[233724]: 2025-11-29 08:08:00.583 233728 DEBUG nova.virt.libvirt.driver [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:08:00 np0005539552 nova_compute[233724]: 2025-11-29 08:08:00.583 233728 DEBUG nova.virt.libvirt.driver [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:08:00 np0005539552 nova_compute[233724]: 2025-11-29 08:08:00.584 233728 DEBUG nova.virt.libvirt.driver [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:08:00 np0005539552 nova_compute[233724]: 2025-11-29 08:08:00.584 233728 DEBUG nova.virt.libvirt.driver [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:08:00 np0005539552 nova_compute[233724]: 2025-11-29 08:08:00.585 233728 DEBUG nova.virt.libvirt.driver [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:08:00 np0005539552 nova_compute[233724]: 2025-11-29 08:08:00.585 233728 DEBUG nova.virt.libvirt.driver [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:08:00 np0005539552 nova_compute[233724]: 2025-11-29 08:08:00.618 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:08:00 np0005539552 nova_compute[233724]: 2025-11-29 08:08:00.662 233728 INFO nova.compute.manager [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Took 10.20 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:08:00 np0005539552 nova_compute[233724]: 2025-11-29 08:08:00.662 233728 DEBUG nova.compute.manager [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:08:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:00.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:00 np0005539552 nova_compute[233724]: 2025-11-29 08:08:00.741 233728 INFO nova.compute.manager [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Took 11.18 seconds to build instance.#033[00m
Nov 29 03:08:00 np0005539552 nova_compute[233724]: 2025-11-29 08:08:00.767 233728 DEBUG oslo_concurrency.lockutils [None req-3ec287aa-fe5d-4bad-86b3-e54f52185f13 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Lock "7ee7bf74-a9de-45ba-bfca-1be4702ce6e3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.272s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:01.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:01 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e262 e262: 3 total, 3 up, 3 in
Nov 29 03:08:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:02 np0005539552 nova_compute[233724]: 2025-11-29 08:08:02.651 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:02.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:03 np0005539552 nova_compute[233724]: 2025-11-29 08:08:03.010 233728 DEBUG nova.compute.manager [req-a7fd7771-572c-4572-9d92-7bf3f967b831 req-3b7a5bd7-5c3e-4a73-9ead-7a77b857ba69 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Received event network-vif-plugged-da77bd23-8c00-4b8b-b4a6-7520c77f0352 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:08:03 np0005539552 nova_compute[233724]: 2025-11-29 08:08:03.011 233728 DEBUG oslo_concurrency.lockutils [req-a7fd7771-572c-4572-9d92-7bf3f967b831 req-3b7a5bd7-5c3e-4a73-9ead-7a77b857ba69 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7ee7bf74-a9de-45ba-bfca-1be4702ce6e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:03 np0005539552 nova_compute[233724]: 2025-11-29 08:08:03.011 233728 DEBUG oslo_concurrency.lockutils [req-a7fd7771-572c-4572-9d92-7bf3f967b831 req-3b7a5bd7-5c3e-4a73-9ead-7a77b857ba69 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7ee7bf74-a9de-45ba-bfca-1be4702ce6e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:03 np0005539552 nova_compute[233724]: 2025-11-29 08:08:03.011 233728 DEBUG oslo_concurrency.lockutils [req-a7fd7771-572c-4572-9d92-7bf3f967b831 req-3b7a5bd7-5c3e-4a73-9ead-7a77b857ba69 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7ee7bf74-a9de-45ba-bfca-1be4702ce6e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:03 np0005539552 nova_compute[233724]: 2025-11-29 08:08:03.012 233728 DEBUG nova.compute.manager [req-a7fd7771-572c-4572-9d92-7bf3f967b831 req-3b7a5bd7-5c3e-4a73-9ead-7a77b857ba69 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] No waiting events found dispatching network-vif-plugged-da77bd23-8c00-4b8b-b4a6-7520c77f0352 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:08:03 np0005539552 nova_compute[233724]: 2025-11-29 08:08:03.012 233728 WARNING nova.compute.manager [req-a7fd7771-572c-4572-9d92-7bf3f967b831 req-3b7a5bd7-5c3e-4a73-9ead-7a77b857ba69 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Received unexpected event network-vif-plugged-da77bd23-8c00-4b8b-b4a6-7520c77f0352 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:08:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:08:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:03.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:08:04 np0005539552 NetworkManager[48926]: <info>  [1764403684.1695] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/113)
Nov 29 03:08:04 np0005539552 nova_compute[233724]: 2025-11-29 08:08:04.170 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:04 np0005539552 NetworkManager[48926]: <info>  [1764403684.1714] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/114)
Nov 29 03:08:04 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e263 e263: 3 total, 3 up, 3 in
Nov 29 03:08:04 np0005539552 nova_compute[233724]: 2025-11-29 08:08:04.284 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:04 np0005539552 ovn_controller[133798]: 2025-11-29T08:08:04Z|00211|binding|INFO|Releasing lport 168133ed-9028-4fad-a6c8-73944e9495bc from this chassis (sb_readonly=0)
Nov 29 03:08:04 np0005539552 nova_compute[233724]: 2025-11-29 08:08:04.297 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:04 np0005539552 nova_compute[233724]: 2025-11-29 08:08:04.439 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403669.4379206, ed7a6200-c4d2-4554-bdb4-57e02fa79386 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:08:04 np0005539552 nova_compute[233724]: 2025-11-29 08:08:04.439 233728 INFO nova.compute.manager [-] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:08:04 np0005539552 nova_compute[233724]: 2025-11-29 08:08:04.468 233728 DEBUG nova.compute.manager [None req-b57aa26a-d259-4dc5-8d44-2ca4b0431918 - - - - - -] [instance: ed7a6200-c4d2-4554-bdb4-57e02fa79386] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:08:04 np0005539552 nova_compute[233724]: 2025-11-29 08:08:04.599 233728 DEBUG nova.compute.manager [req-d975ca1f-8db6-4d69-b948-6a9bf05886bc req-c90d84b1-60fd-4d02-af9c-cce634d6c9a7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Received event network-changed-da77bd23-8c00-4b8b-b4a6-7520c77f0352 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:08:04 np0005539552 nova_compute[233724]: 2025-11-29 08:08:04.599 233728 DEBUG nova.compute.manager [req-d975ca1f-8db6-4d69-b948-6a9bf05886bc req-c90d84b1-60fd-4d02-af9c-cce634d6c9a7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Refreshing instance network info cache due to event network-changed-da77bd23-8c00-4b8b-b4a6-7520c77f0352. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:08:04 np0005539552 nova_compute[233724]: 2025-11-29 08:08:04.600 233728 DEBUG oslo_concurrency.lockutils [req-d975ca1f-8db6-4d69-b948-6a9bf05886bc req-c90d84b1-60fd-4d02-af9c-cce634d6c9a7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-7ee7bf74-a9de-45ba-bfca-1be4702ce6e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:08:04 np0005539552 nova_compute[233724]: 2025-11-29 08:08:04.600 233728 DEBUG oslo_concurrency.lockutils [req-d975ca1f-8db6-4d69-b948-6a9bf05886bc req-c90d84b1-60fd-4d02-af9c-cce634d6c9a7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-7ee7bf74-a9de-45ba-bfca-1be4702ce6e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:08:04 np0005539552 nova_compute[233724]: 2025-11-29 08:08:04.600 233728 DEBUG nova.network.neutron [req-d975ca1f-8db6-4d69-b948-6a9bf05886bc req-c90d84b1-60fd-4d02-af9c-cce634d6c9a7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Refreshing network info cache for port da77bd23-8c00-4b8b-b4a6-7520c77f0352 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:08:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:04.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:08:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:05.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:08:05 np0005539552 nova_compute[233724]: 2025-11-29 08:08:05.292 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:06 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:08:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:06.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:08:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:07.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:08:07 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:07 np0005539552 nova_compute[233724]: 2025-11-29 08:08:07.435 233728 DEBUG nova.network.neutron [req-d975ca1f-8db6-4d69-b948-6a9bf05886bc req-c90d84b1-60fd-4d02-af9c-cce634d6c9a7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Updated VIF entry in instance network info cache for port da77bd23-8c00-4b8b-b4a6-7520c77f0352. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:08:07 np0005539552 nova_compute[233724]: 2025-11-29 08:08:07.437 233728 DEBUG nova.network.neutron [req-d975ca1f-8db6-4d69-b948-6a9bf05886bc req-c90d84b1-60fd-4d02-af9c-cce634d6c9a7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Updating instance_info_cache with network_info: [{"id": "da77bd23-8c00-4b8b-b4a6-7520c77f0352", "address": "fa:16:3e:54:16:c7", "network": {"id": "b9466489-b6be-467b-8322-871db6f3aea4", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1529796577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6b2c950ad924f9c9d1a44696045a508", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda77bd23-8c", "ovs_interfaceid": "da77bd23-8c00-4b8b-b4a6-7520c77f0352", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:08:07 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:08:07 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:08:07 np0005539552 nova_compute[233724]: 2025-11-29 08:08:07.706 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:07 np0005539552 nova_compute[233724]: 2025-11-29 08:08:07.880 233728 DEBUG oslo_concurrency.lockutils [req-d975ca1f-8db6-4d69-b948-6a9bf05886bc req-c90d84b1-60fd-4d02-af9c-cce634d6c9a7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-7ee7bf74-a9de-45ba-bfca-1be4702ce6e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:08:07 np0005539552 ovn_controller[133798]: 2025-11-29T08:08:07Z|00212|binding|INFO|Releasing lport 168133ed-9028-4fad-a6c8-73944e9495bc from this chassis (sb_readonly=0)
Nov 29 03:08:07 np0005539552 nova_compute[233724]: 2025-11-29 08:08:07.929 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:08.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:08:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:09.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:08:10 np0005539552 nova_compute[233724]: 2025-11-29 08:08:10.326 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:10.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:10 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:08:10 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1588324505' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:08:10 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:08:10 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1588324505' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:08:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:11.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:08:11.427 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:08:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:08:11.428 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:08:11 np0005539552 nova_compute[233724]: 2025-11-29 08:08:11.482 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:11 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e264 e264: 3 total, 3 up, 3 in
Nov 29 03:08:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:12.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:12 np0005539552 nova_compute[233724]: 2025-11-29 08:08:12.734 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:12 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:08:12 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:08:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:13.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:13 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:08:13.430 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:13 np0005539552 ovn_controller[133798]: 2025-11-29T08:08:13Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:54:16:c7 10.100.0.13
Nov 29 03:08:13 np0005539552 ovn_controller[133798]: 2025-11-29T08:08:13Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:54:16:c7 10.100.0.13
Nov 29 03:08:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:14.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:15 np0005539552 ovn_controller[133798]: 2025-11-29T08:08:15Z|00213|binding|INFO|Releasing lport 168133ed-9028-4fad-a6c8-73944e9495bc from this chassis (sb_readonly=0)
Nov 29 03:08:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:08:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:15.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:08:15 np0005539552 nova_compute[233724]: 2025-11-29 08:08:15.089 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:15 np0005539552 nova_compute[233724]: 2025-11-29 08:08:15.327 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:16.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:16 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e265 e265: 3 total, 3 up, 3 in
Nov 29 03:08:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:08:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:17.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:08:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:17 np0005539552 nova_compute[233724]: 2025-11-29 08:08:17.754 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e266 e266: 3 total, 3 up, 3 in
Nov 29 03:08:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:18.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e267 e267: 3 total, 3 up, 3 in
Nov 29 03:08:18 np0005539552 nova_compute[233724]: 2025-11-29 08:08:18.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:08:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:19.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:19 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e268 e268: 3 total, 3 up, 3 in
Nov 29 03:08:19 np0005539552 nova_compute[233724]: 2025-11-29 08:08:19.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:08:19 np0005539552 nova_compute[233724]: 2025-11-29 08:08:19.954 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:19 np0005539552 nova_compute[233724]: 2025-11-29 08:08:19.955 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:19 np0005539552 nova_compute[233724]: 2025-11-29 08:08:19.955 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:19 np0005539552 nova_compute[233724]: 2025-11-29 08:08:19.955 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:08:19 np0005539552 nova_compute[233724]: 2025-11-29 08:08:19.955 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:08:19 np0005539552 podman[262198]: 2025-11-29 08:08:19.979519682 +0000 UTC m=+0.063699561 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible)
Nov 29 03:08:19 np0005539552 podman[262199]: 2025-11-29 08:08:19.999562053 +0000 UTC m=+0.083670410 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:08:20 np0005539552 podman[262200]: 2025-11-29 08:08:20.024522467 +0000 UTC m=+0.108808928 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible)
Nov 29 03:08:20 np0005539552 nova_compute[233724]: 2025-11-29 08:08:20.329 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:20 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:08:20 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2528068580' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:08:20 np0005539552 nova_compute[233724]: 2025-11-29 08:08:20.381 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:08:20 np0005539552 nova_compute[233724]: 2025-11-29 08:08:20.471 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000046 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:08:20 np0005539552 nova_compute[233724]: 2025-11-29 08:08:20.472 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000046 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:08:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:08:20.616 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:08:20.617 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:08:20.617 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:20 np0005539552 nova_compute[233724]: 2025-11-29 08:08:20.632 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:08:20 np0005539552 nova_compute[233724]: 2025-11-29 08:08:20.634 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4323MB free_disk=20.92196273803711GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:08:20 np0005539552 nova_compute[233724]: 2025-11-29 08:08:20.634 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:20 np0005539552 nova_compute[233724]: 2025-11-29 08:08:20.634 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:20 np0005539552 nova_compute[233724]: 2025-11-29 08:08:20.703 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:08:20 np0005539552 nova_compute[233724]: 2025-11-29 08:08:20.704 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:08:20 np0005539552 nova_compute[233724]: 2025-11-29 08:08:20.704 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:08:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:20.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:20 np0005539552 nova_compute[233724]: 2025-11-29 08:08:20.742 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:08:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:08:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:21.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:08:21 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:08:21 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1871946506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:08:21 np0005539552 nova_compute[233724]: 2025-11-29 08:08:21.157 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:08:21 np0005539552 nova_compute[233724]: 2025-11-29 08:08:21.162 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:08:21 np0005539552 nova_compute[233724]: 2025-11-29 08:08:21.178 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:08:21 np0005539552 nova_compute[233724]: 2025-11-29 08:08:21.199 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:08:21 np0005539552 nova_compute[233724]: 2025-11-29 08:08:21.199 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:21 np0005539552 nova_compute[233724]: 2025-11-29 08:08:21.916 233728 DEBUG oslo_concurrency.lockutils [None req-ff9ffba5-5f74-4271-ae46-d3d98dc518ef 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Acquiring lock "7ee7bf74-a9de-45ba-bfca-1be4702ce6e3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:21 np0005539552 nova_compute[233724]: 2025-11-29 08:08:21.917 233728 DEBUG oslo_concurrency.lockutils [None req-ff9ffba5-5f74-4271-ae46-d3d98dc518ef 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Lock "7ee7bf74-a9de-45ba-bfca-1be4702ce6e3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:21 np0005539552 nova_compute[233724]: 2025-11-29 08:08:21.917 233728 DEBUG oslo_concurrency.lockutils [None req-ff9ffba5-5f74-4271-ae46-d3d98dc518ef 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Acquiring lock "7ee7bf74-a9de-45ba-bfca-1be4702ce6e3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:21 np0005539552 nova_compute[233724]: 2025-11-29 08:08:21.917 233728 DEBUG oslo_concurrency.lockutils [None req-ff9ffba5-5f74-4271-ae46-d3d98dc518ef 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Lock "7ee7bf74-a9de-45ba-bfca-1be4702ce6e3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:21 np0005539552 nova_compute[233724]: 2025-11-29 08:08:21.918 233728 DEBUG oslo_concurrency.lockutils [None req-ff9ffba5-5f74-4271-ae46-d3d98dc518ef 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Lock "7ee7bf74-a9de-45ba-bfca-1be4702ce6e3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:21 np0005539552 nova_compute[233724]: 2025-11-29 08:08:21.919 233728 INFO nova.compute.manager [None req-ff9ffba5-5f74-4271-ae46-d3d98dc518ef 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Terminating instance#033[00m
Nov 29 03:08:21 np0005539552 nova_compute[233724]: 2025-11-29 08:08:21.920 233728 DEBUG nova.compute.manager [None req-ff9ffba5-5f74-4271-ae46-d3d98dc518ef 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:08:21 np0005539552 kernel: tapda77bd23-8c (unregistering): left promiscuous mode
Nov 29 03:08:21 np0005539552 NetworkManager[48926]: <info>  [1764403701.9868] device (tapda77bd23-8c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:08:21 np0005539552 ovn_controller[133798]: 2025-11-29T08:08:21Z|00214|binding|INFO|Releasing lport da77bd23-8c00-4b8b-b4a6-7520c77f0352 from this chassis (sb_readonly=0)
Nov 29 03:08:21 np0005539552 ovn_controller[133798]: 2025-11-29T08:08:21Z|00215|binding|INFO|Setting lport da77bd23-8c00-4b8b-b4a6-7520c77f0352 down in Southbound
Nov 29 03:08:21 np0005539552 ovn_controller[133798]: 2025-11-29T08:08:21Z|00216|binding|INFO|Removing iface tapda77bd23-8c ovn-installed in OVS
Nov 29 03:08:21 np0005539552 nova_compute[233724]: 2025-11-29 08:08:21.993 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:22 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:08:22.000 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:16:c7 10.100.0.13'], port_security=['fa:16:3e:54:16:c7 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '7ee7bf74-a9de-45ba-bfca-1be4702ce6e3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b9466489-b6be-467b-8322-871db6f3aea4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b6b2c950ad924f9c9d1a44696045a508', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8e7fb14b-2449-4fa5-aee5-f05d9772f4a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37f28177-df1c-499c-a627-e417ef64755b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=da77bd23-8c00-4b8b-b4a6-7520c77f0352) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:08:22 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:08:22.001 143400 INFO neutron.agent.ovn.metadata.agent [-] Port da77bd23-8c00-4b8b-b4a6-7520c77f0352 in datapath b9466489-b6be-467b-8322-871db6f3aea4 unbound from our chassis#033[00m
Nov 29 03:08:22 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:08:22.002 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b9466489-b6be-467b-8322-871db6f3aea4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:08:22 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:08:22.003 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[5d92168a-f2aa-4f57-a61a-ccb0f42ec4a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:22 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:08:22.003 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b9466489-b6be-467b-8322-871db6f3aea4 namespace which is not needed anymore#033[00m
Nov 29 03:08:22 np0005539552 nova_compute[233724]: 2025-11-29 08:08:22.013 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:22 np0005539552 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000046.scope: Deactivated successfully.
Nov 29 03:08:22 np0005539552 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000046.scope: Consumed 14.462s CPU time.
Nov 29 03:08:22 np0005539552 systemd-machined[196379]: Machine qemu-25-instance-00000046 terminated.
Nov 29 03:08:22 np0005539552 neutron-haproxy-ovnmeta-b9466489-b6be-467b-8322-871db6f3aea4[261939]: [NOTICE]   (261943) : haproxy version is 2.8.14-c23fe91
Nov 29 03:08:22 np0005539552 neutron-haproxy-ovnmeta-b9466489-b6be-467b-8322-871db6f3aea4[261939]: [NOTICE]   (261943) : path to executable is /usr/sbin/haproxy
Nov 29 03:08:22 np0005539552 neutron-haproxy-ovnmeta-b9466489-b6be-467b-8322-871db6f3aea4[261939]: [WARNING]  (261943) : Exiting Master process...
Nov 29 03:08:22 np0005539552 neutron-haproxy-ovnmeta-b9466489-b6be-467b-8322-871db6f3aea4[261939]: [WARNING]  (261943) : Exiting Master process...
Nov 29 03:08:22 np0005539552 neutron-haproxy-ovnmeta-b9466489-b6be-467b-8322-871db6f3aea4[261939]: [ALERT]    (261943) : Current worker (261945) exited with code 143 (Terminated)
Nov 29 03:08:22 np0005539552 neutron-haproxy-ovnmeta-b9466489-b6be-467b-8322-871db6f3aea4[261939]: [WARNING]  (261943) : All workers exited. Exiting... (0)
Nov 29 03:08:22 np0005539552 systemd[1]: libpod-19571a0833dc67f694d45a15ab67c91f9607fcc0f554eb14da8b59c4b9efdb0b.scope: Deactivated successfully.
Nov 29 03:08:22 np0005539552 podman[262329]: 2025-11-29 08:08:22.138152422 +0000 UTC m=+0.042285073 container died 19571a0833dc67f694d45a15ab67c91f9607fcc0f554eb14da8b59c4b9efdb0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b9466489-b6be-467b-8322-871db6f3aea4, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:08:22 np0005539552 nova_compute[233724]: 2025-11-29 08:08:22.157 233728 INFO nova.virt.libvirt.driver [-] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Instance destroyed successfully.#033[00m
Nov 29 03:08:22 np0005539552 nova_compute[233724]: 2025-11-29 08:08:22.159 233728 DEBUG nova.objects.instance [None req-ff9ffba5-5f74-4271-ae46-d3d98dc518ef 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Lazy-loading 'resources' on Instance uuid 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:08:22 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-19571a0833dc67f694d45a15ab67c91f9607fcc0f554eb14da8b59c4b9efdb0b-userdata-shm.mount: Deactivated successfully.
Nov 29 03:08:22 np0005539552 systemd[1]: var-lib-containers-storage-overlay-a0766dc8bc3618d760ca145f06830a6c9e576b3eba25b3e91e72aa02f79ddba4-merged.mount: Deactivated successfully.
Nov 29 03:08:22 np0005539552 nova_compute[233724]: 2025-11-29 08:08:22.180 233728 DEBUG nova.virt.libvirt.vif [None req-ff9ffba5-5f74-4271-ae46-d3d98dc518ef 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:07:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-2104603624',display_name='tempest-ServersTestManualDisk-server-2104603624',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-2104603624',id=70,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBITg07Ksl6rX6XSnPz2ZFOLM+UbS1RduVqsaHCesvsjGltRR7ggJCBhKntRiq+XCEWG3WWkGkr8Y/aGe++Fv7ErxmlUZGsivlhvwvavslAd7PL19D7cim3dDuKnBF59png==',key_name='tempest-keypair-115054336',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:08:00Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b6b2c950ad924f9c9d1a44696045a508',ramdisk_id='',reservation_id='r-aflct8m2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-722013135',owner_user_name='tempest-ServersTestManualDisk-722013135-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:08:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4d0fdc99d9764df9a0868625cd14f49a',uuid=7ee7bf74-a9de-45ba-bfca-1be4702ce6e3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "da77bd23-8c00-4b8b-b4a6-7520c77f0352", "address": "fa:16:3e:54:16:c7", "network": {"id": "b9466489-b6be-467b-8322-871db6f3aea4", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1529796577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6b2c950ad924f9c9d1a44696045a508", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda77bd23-8c", "ovs_interfaceid": "da77bd23-8c00-4b8b-b4a6-7520c77f0352", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:08:22 np0005539552 nova_compute[233724]: 2025-11-29 08:08:22.181 233728 DEBUG nova.network.os_vif_util [None req-ff9ffba5-5f74-4271-ae46-d3d98dc518ef 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Converting VIF {"id": "da77bd23-8c00-4b8b-b4a6-7520c77f0352", "address": "fa:16:3e:54:16:c7", "network": {"id": "b9466489-b6be-467b-8322-871db6f3aea4", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1529796577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6b2c950ad924f9c9d1a44696045a508", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda77bd23-8c", "ovs_interfaceid": "da77bd23-8c00-4b8b-b4a6-7520c77f0352", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:08:22 np0005539552 nova_compute[233724]: 2025-11-29 08:08:22.182 233728 DEBUG nova.network.os_vif_util [None req-ff9ffba5-5f74-4271-ae46-d3d98dc518ef 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:54:16:c7,bridge_name='br-int',has_traffic_filtering=True,id=da77bd23-8c00-4b8b-b4a6-7520c77f0352,network=Network(b9466489-b6be-467b-8322-871db6f3aea4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda77bd23-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:08:22 np0005539552 nova_compute[233724]: 2025-11-29 08:08:22.183 233728 DEBUG os_vif [None req-ff9ffba5-5f74-4271-ae46-d3d98dc518ef 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:16:c7,bridge_name='br-int',has_traffic_filtering=True,id=da77bd23-8c00-4b8b-b4a6-7520c77f0352,network=Network(b9466489-b6be-467b-8322-871db6f3aea4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda77bd23-8c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:08:22 np0005539552 podman[262329]: 2025-11-29 08:08:22.184034721 +0000 UTC m=+0.088167352 container cleanup 19571a0833dc67f694d45a15ab67c91f9607fcc0f554eb14da8b59c4b9efdb0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b9466489-b6be-467b-8322-871db6f3aea4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:08:22 np0005539552 nova_compute[233724]: 2025-11-29 08:08:22.184 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:22 np0005539552 nova_compute[233724]: 2025-11-29 08:08:22.186 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapda77bd23-8c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:22 np0005539552 systemd[1]: libpod-conmon-19571a0833dc67f694d45a15ab67c91f9607fcc0f554eb14da8b59c4b9efdb0b.scope: Deactivated successfully.
Nov 29 03:08:22 np0005539552 nova_compute[233724]: 2025-11-29 08:08:22.201 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:08:22 np0005539552 nova_compute[233724]: 2025-11-29 08:08:22.201 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:08:22 np0005539552 nova_compute[233724]: 2025-11-29 08:08:22.202 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:08:22 np0005539552 nova_compute[233724]: 2025-11-29 08:08:22.227 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:22 np0005539552 nova_compute[233724]: 2025-11-29 08:08:22.229 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:08:22 np0005539552 nova_compute[233724]: 2025-11-29 08:08:22.232 233728 INFO os_vif [None req-ff9ffba5-5f74-4271-ae46-d3d98dc518ef 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:16:c7,bridge_name='br-int',has_traffic_filtering=True,id=da77bd23-8c00-4b8b-b4a6-7520c77f0352,network=Network(b9466489-b6be-467b-8322-871db6f3aea4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda77bd23-8c')#033[00m
Nov 29 03:08:22 np0005539552 podman[262370]: 2025-11-29 08:08:22.248254564 +0000 UTC m=+0.040090843 container remove 19571a0833dc67f694d45a15ab67c91f9607fcc0f554eb14da8b59c4b9efdb0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b9466489-b6be-467b-8322-871db6f3aea4, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 03:08:22 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:08:22.255 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[8a7393c8-f179-4ff2-b1a9-1afa6b36bb98]: (4, ('Sat Nov 29 08:08:22 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b9466489-b6be-467b-8322-871db6f3aea4 (19571a0833dc67f694d45a15ab67c91f9607fcc0f554eb14da8b59c4b9efdb0b)\n19571a0833dc67f694d45a15ab67c91f9607fcc0f554eb14da8b59c4b9efdb0b\nSat Nov 29 08:08:22 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b9466489-b6be-467b-8322-871db6f3aea4 (19571a0833dc67f694d45a15ab67c91f9607fcc0f554eb14da8b59c4b9efdb0b)\n19571a0833dc67f694d45a15ab67c91f9607fcc0f554eb14da8b59c4b9efdb0b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:22 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:08:22.257 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[008bc742-82f0-4655-870a-ab8c978e78fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:22 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:08:22.258 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb9466489-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:22 np0005539552 nova_compute[233724]: 2025-11-29 08:08:22.259 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:22 np0005539552 kernel: tapb9466489-b0: left promiscuous mode
Nov 29 03:08:22 np0005539552 nova_compute[233724]: 2025-11-29 08:08:22.276 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:22 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:08:22.279 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[847340c0-999a-4151-9349-160b1bf19e13]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:22 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:08:22.300 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6711aceb-c602-4aa5-8311-607460e106a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:22 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:08:22.301 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[79c3b433-780d-4f2d-8940-fdb1739d11bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:22 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:08:22.315 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[fb467b39-c38f-4af8-b7a7-175c68cb9473]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672434, 'reachable_time': 34683, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262401, 'error': None, 'target': 'ovnmeta-b9466489-b6be-467b-8322-871db6f3aea4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:22 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:08:22.317 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b9466489-b6be-467b-8322-871db6f3aea4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:08:22 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:08:22.318 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[6e77d100-ae61-4632-81e1-82a8c7f0de70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:22 np0005539552 systemd[1]: run-netns-ovnmeta\x2db9466489\x2db6be\x2d467b\x2d8322\x2d871db6f3aea4.mount: Deactivated successfully.
Nov 29 03:08:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:22 np0005539552 nova_compute[233724]: 2025-11-29 08:08:22.632 233728 INFO nova.virt.libvirt.driver [None req-ff9ffba5-5f74-4271-ae46-d3d98dc518ef 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Deleting instance files /var/lib/nova/instances/7ee7bf74-a9de-45ba-bfca-1be4702ce6e3_del#033[00m
Nov 29 03:08:22 np0005539552 nova_compute[233724]: 2025-11-29 08:08:22.633 233728 INFO nova.virt.libvirt.driver [None req-ff9ffba5-5f74-4271-ae46-d3d98dc518ef 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Deletion of /var/lib/nova/instances/7ee7bf74-a9de-45ba-bfca-1be4702ce6e3_del complete#033[00m
Nov 29 03:08:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:22.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:22 np0005539552 nova_compute[233724]: 2025-11-29 08:08:22.736 233728 INFO nova.compute.manager [None req-ff9ffba5-5f74-4271-ae46-d3d98dc518ef 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Took 0.82 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:08:22 np0005539552 nova_compute[233724]: 2025-11-29 08:08:22.737 233728 DEBUG oslo.service.loopingcall [None req-ff9ffba5-5f74-4271-ae46-d3d98dc518ef 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:08:22 np0005539552 nova_compute[233724]: 2025-11-29 08:08:22.737 233728 DEBUG nova.compute.manager [-] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:08:22 np0005539552 nova_compute[233724]: 2025-11-29 08:08:22.737 233728 DEBUG nova.network.neutron [-] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:08:22 np0005539552 nova_compute[233724]: 2025-11-29 08:08:22.811 233728 DEBUG nova.compute.manager [req-bd2f3dd5-b1e7-4ed1-b38e-030ae1d392e1 req-fede8ac9-bfef-4532-8639-6a4edf1ba297 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Received event network-vif-unplugged-da77bd23-8c00-4b8b-b4a6-7520c77f0352 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:08:22 np0005539552 nova_compute[233724]: 2025-11-29 08:08:22.812 233728 DEBUG oslo_concurrency.lockutils [req-bd2f3dd5-b1e7-4ed1-b38e-030ae1d392e1 req-fede8ac9-bfef-4532-8639-6a4edf1ba297 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7ee7bf74-a9de-45ba-bfca-1be4702ce6e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:22 np0005539552 nova_compute[233724]: 2025-11-29 08:08:22.812 233728 DEBUG oslo_concurrency.lockutils [req-bd2f3dd5-b1e7-4ed1-b38e-030ae1d392e1 req-fede8ac9-bfef-4532-8639-6a4edf1ba297 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7ee7bf74-a9de-45ba-bfca-1be4702ce6e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:22 np0005539552 nova_compute[233724]: 2025-11-29 08:08:22.812 233728 DEBUG oslo_concurrency.lockutils [req-bd2f3dd5-b1e7-4ed1-b38e-030ae1d392e1 req-fede8ac9-bfef-4532-8639-6a4edf1ba297 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7ee7bf74-a9de-45ba-bfca-1be4702ce6e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:22 np0005539552 nova_compute[233724]: 2025-11-29 08:08:22.813 233728 DEBUG nova.compute.manager [req-bd2f3dd5-b1e7-4ed1-b38e-030ae1d392e1 req-fede8ac9-bfef-4532-8639-6a4edf1ba297 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] No waiting events found dispatching network-vif-unplugged-da77bd23-8c00-4b8b-b4a6-7520c77f0352 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:08:22 np0005539552 nova_compute[233724]: 2025-11-29 08:08:22.813 233728 DEBUG nova.compute.manager [req-bd2f3dd5-b1e7-4ed1-b38e-030ae1d392e1 req-fede8ac9-bfef-4532-8639-6a4edf1ba297 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Received event network-vif-unplugged-da77bd23-8c00-4b8b-b4a6-7520c77f0352 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:08:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:23.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:23 np0005539552 nova_compute[233724]: 2025-11-29 08:08:23.919 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:08:23 np0005539552 nova_compute[233724]: 2025-11-29 08:08:23.922 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:08:23 np0005539552 nova_compute[233724]: 2025-11-29 08:08:23.922 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:08:23 np0005539552 nova_compute[233724]: 2025-11-29 08:08:23.972 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:24 np0005539552 nova_compute[233724]: 2025-11-29 08:08:24.536 233728 DEBUG nova.network.neutron [-] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:08:24 np0005539552 nova_compute[233724]: 2025-11-29 08:08:24.551 233728 INFO nova.compute.manager [-] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Took 1.81 seconds to deallocate network for instance.#033[00m
Nov 29 03:08:24 np0005539552 nova_compute[233724]: 2025-11-29 08:08:24.594 233728 DEBUG oslo_concurrency.lockutils [None req-ff9ffba5-5f74-4271-ae46-d3d98dc518ef 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:24 np0005539552 nova_compute[233724]: 2025-11-29 08:08:24.594 233728 DEBUG oslo_concurrency.lockutils [None req-ff9ffba5-5f74-4271-ae46-d3d98dc518ef 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:24 np0005539552 nova_compute[233724]: 2025-11-29 08:08:24.646 233728 DEBUG oslo_concurrency.processutils [None req-ff9ffba5-5f74-4271-ae46-d3d98dc518ef 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:08:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:24.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:24 np0005539552 nova_compute[233724]: 2025-11-29 08:08:24.937 233728 DEBUG nova.compute.manager [req-4dae6a14-f71b-4827-ae93-8a707a9a3437 req-9a0d42a2-14bc-4ccb-82c2-15e0c73d321b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Received event network-vif-plugged-da77bd23-8c00-4b8b-b4a6-7520c77f0352 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:08:24 np0005539552 nova_compute[233724]: 2025-11-29 08:08:24.938 233728 DEBUG oslo_concurrency.lockutils [req-4dae6a14-f71b-4827-ae93-8a707a9a3437 req-9a0d42a2-14bc-4ccb-82c2-15e0c73d321b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7ee7bf74-a9de-45ba-bfca-1be4702ce6e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:24 np0005539552 nova_compute[233724]: 2025-11-29 08:08:24.938 233728 DEBUG oslo_concurrency.lockutils [req-4dae6a14-f71b-4827-ae93-8a707a9a3437 req-9a0d42a2-14bc-4ccb-82c2-15e0c73d321b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7ee7bf74-a9de-45ba-bfca-1be4702ce6e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:24 np0005539552 nova_compute[233724]: 2025-11-29 08:08:24.938 233728 DEBUG oslo_concurrency.lockutils [req-4dae6a14-f71b-4827-ae93-8a707a9a3437 req-9a0d42a2-14bc-4ccb-82c2-15e0c73d321b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7ee7bf74-a9de-45ba-bfca-1be4702ce6e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:24 np0005539552 nova_compute[233724]: 2025-11-29 08:08:24.939 233728 DEBUG nova.compute.manager [req-4dae6a14-f71b-4827-ae93-8a707a9a3437 req-9a0d42a2-14bc-4ccb-82c2-15e0c73d321b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] No waiting events found dispatching network-vif-plugged-da77bd23-8c00-4b8b-b4a6-7520c77f0352 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:08:24 np0005539552 nova_compute[233724]: 2025-11-29 08:08:24.939 233728 WARNING nova.compute.manager [req-4dae6a14-f71b-4827-ae93-8a707a9a3437 req-9a0d42a2-14bc-4ccb-82c2-15e0c73d321b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Received unexpected event network-vif-plugged-da77bd23-8c00-4b8b-b4a6-7520c77f0352 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:08:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:25.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:25 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:08:25 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/199928673' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:08:25 np0005539552 nova_compute[233724]: 2025-11-29 08:08:25.170 233728 DEBUG oslo_concurrency.processutils [None req-ff9ffba5-5f74-4271-ae46-d3d98dc518ef 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:08:25 np0005539552 nova_compute[233724]: 2025-11-29 08:08:25.176 233728 DEBUG nova.compute.provider_tree [None req-ff9ffba5-5f74-4271-ae46-d3d98dc518ef 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:08:25 np0005539552 nova_compute[233724]: 2025-11-29 08:08:25.191 233728 DEBUG nova.scheduler.client.report [None req-ff9ffba5-5f74-4271-ae46-d3d98dc518ef 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:08:25 np0005539552 nova_compute[233724]: 2025-11-29 08:08:25.211 233728 DEBUG oslo_concurrency.lockutils [None req-ff9ffba5-5f74-4271-ae46-d3d98dc518ef 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:25 np0005539552 nova_compute[233724]: 2025-11-29 08:08:25.236 233728 INFO nova.scheduler.client.report [None req-ff9ffba5-5f74-4271-ae46-d3d98dc518ef 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Deleted allocations for instance 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3#033[00m
Nov 29 03:08:25 np0005539552 nova_compute[233724]: 2025-11-29 08:08:25.303 233728 DEBUG oslo_concurrency.lockutils [None req-ff9ffba5-5f74-4271-ae46-d3d98dc518ef 4d0fdc99d9764df9a0868625cd14f49a b6b2c950ad924f9c9d1a44696045a508 - - default default] Lock "7ee7bf74-a9de-45ba-bfca-1be4702ce6e3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.386s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:25 np0005539552 nova_compute[233724]: 2025-11-29 08:08:25.330 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:25 np0005539552 nova_compute[233724]: 2025-11-29 08:08:25.771 233728 DEBUG nova.compute.manager [req-f39b00a0-a838-49fe-b98d-c6cd0f0dece5 req-141ada30-55c3-485a-a676-4a5183e71951 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Received event network-vif-deleted-da77bd23-8c00-4b8b-b4a6-7520c77f0352 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:08:25 np0005539552 nova_compute[233724]: 2025-11-29 08:08:25.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:08:25 np0005539552 nova_compute[233724]: 2025-11-29 08:08:25.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:08:25 np0005539552 nova_compute[233724]: 2025-11-29 08:08:25.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:08:25 np0005539552 nova_compute[233724]: 2025-11-29 08:08:25.952 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:08:26 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e269 e269: 3 total, 3 up, 3 in
Nov 29 03:08:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:08:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:26.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:08:26 np0005539552 nova_compute[233724]: 2025-11-29 08:08:26.922 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:08:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:27.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:27 np0005539552 nova_compute[233724]: 2025-11-29 08:08:27.229 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:27 np0005539552 nova_compute[233724]: 2025-11-29 08:08:27.628 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:27 np0005539552 nova_compute[233724]: 2025-11-29 08:08:27.631 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:28.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:08:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:29.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:08:29 np0005539552 nova_compute[233724]: 2025-11-29 08:08:29.919 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:08:30 np0005539552 nova_compute[233724]: 2025-11-29 08:08:30.416 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:08:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:30.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:08:30 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #70. Immutable memtables: 0.
Nov 29 03:08:30 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:08:30.866103) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:08:30 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 70
Nov 29 03:08:30 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403710866447, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 2631, "num_deletes": 270, "total_data_size": 5737673, "memory_usage": 5822560, "flush_reason": "Manual Compaction"}
Nov 29 03:08:30 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #71: started
Nov 29 03:08:30 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403710926696, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 71, "file_size": 3744075, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35752, "largest_seqno": 38378, "table_properties": {"data_size": 3733166, "index_size": 7019, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2821, "raw_key_size": 23617, "raw_average_key_size": 21, "raw_value_size": 3711127, "raw_average_value_size": 3370, "num_data_blocks": 300, "num_entries": 1101, "num_filter_entries": 1101, "num_deletions": 270, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403542, "oldest_key_time": 1764403542, "file_creation_time": 1764403710, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:08:30 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 60380 microseconds, and 7771 cpu microseconds.
Nov 29 03:08:30 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:08:30 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:08:30.926737) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #71: 3744075 bytes OK
Nov 29 03:08:30 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:08:30.926755) [db/memtable_list.cc:519] [default] Level-0 commit table #71 started
Nov 29 03:08:30 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:08:30.963083) [db/memtable_list.cc:722] [default] Level-0 commit table #71: memtable #1 done
Nov 29 03:08:30 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:08:30.963123) EVENT_LOG_v1 {"time_micros": 1764403710963115, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:08:30 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:08:30.963144) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:08:30 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 5725826, prev total WAL file size 5725826, number of live WAL files 2.
Nov 29 03:08:30 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000067.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:08:30 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:08:30.964663) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Nov 29 03:08:30 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:08:30 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [71(3656KB)], [69(8700KB)]
Nov 29 03:08:30 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403710964703, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [71], "files_L6": [69], "score": -1, "input_data_size": 12653078, "oldest_snapshot_seqno": -1}
Nov 29 03:08:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:08:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:31.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:08:31 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #72: 6920 keys, 10686526 bytes, temperature: kUnknown
Nov 29 03:08:31 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403711121987, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 72, "file_size": 10686526, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10639803, "index_size": 28307, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17349, "raw_key_size": 178773, "raw_average_key_size": 25, "raw_value_size": 10515029, "raw_average_value_size": 1519, "num_data_blocks": 1124, "num_entries": 6920, "num_filter_entries": 6920, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764403710, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 72, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:08:31 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:08:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:08:31.122262) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 10686526 bytes
Nov 29 03:08:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:08:31.125403) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 80.4 rd, 67.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 8.5 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(6.2) write-amplify(2.9) OK, records in: 7464, records dropped: 544 output_compression: NoCompression
Nov 29 03:08:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:08:31.125427) EVENT_LOG_v1 {"time_micros": 1764403711125417, "job": 42, "event": "compaction_finished", "compaction_time_micros": 157384, "compaction_time_cpu_micros": 27410, "output_level": 6, "num_output_files": 1, "total_output_size": 10686526, "num_input_records": 7464, "num_output_records": 6920, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:08:31 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:08:31 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403711126246, "job": 42, "event": "table_file_deletion", "file_number": 71}
Nov 29 03:08:31 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000069.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:08:31 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403711128245, "job": 42, "event": "table_file_deletion", "file_number": 69}
Nov 29 03:08:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:08:30.964550) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:08:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:08:31.128345) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:08:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:08:31.128350) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:08:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:08:31.128352) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:08:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:08:31.128354) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:08:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:08:31.128356) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:08:32 np0005539552 nova_compute[233724]: 2025-11-29 08:08:32.077 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:32 np0005539552 nova_compute[233724]: 2025-11-29 08:08:32.231 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:32 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:32 np0005539552 nova_compute[233724]: 2025-11-29 08:08:32.641 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:32.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:32 np0005539552 nova_compute[233724]: 2025-11-29 08:08:32.880 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:08:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:33.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:08:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:08:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:34.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:08:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:35.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:35 np0005539552 nova_compute[233724]: 2025-11-29 08:08:35.418 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:36.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:37.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:37 np0005539552 nova_compute[233724]: 2025-11-29 08:08:37.154 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403702.1527991, 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:08:37 np0005539552 nova_compute[233724]: 2025-11-29 08:08:37.154 233728 INFO nova.compute.manager [-] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:08:37 np0005539552 nova_compute[233724]: 2025-11-29 08:08:37.186 233728 DEBUG nova.compute.manager [None req-546698dd-8e0f-4bfb-9092-2a4108d06d49 - - - - - -] [instance: 7ee7bf74-a9de-45ba-bfca-1be4702ce6e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:08:37 np0005539552 nova_compute[233724]: 2025-11-29 08:08:37.234 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:38.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:08:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3044734865' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:08:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:08:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3044734865' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:08:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:39.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:40 np0005539552 nova_compute[233724]: 2025-11-29 08:08:40.471 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:40.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:41.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:42 np0005539552 nova_compute[233724]: 2025-11-29 08:08:42.238 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:42.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:43.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:44.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:45.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:45 np0005539552 nova_compute[233724]: 2025-11-29 08:08:45.472 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:46.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:47.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:47 np0005539552 nova_compute[233724]: 2025-11-29 08:08:47.291 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:48.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:49.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:50 np0005539552 nova_compute[233724]: 2025-11-29 08:08:50.473 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:50.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:50 np0005539552 podman[262494]: 2025-11-29 08:08:50.968556141 +0000 UTC m=+0.055865539 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:08:50 np0005539552 podman[262493]: 2025-11-29 08:08:50.969010313 +0000 UTC m=+0.056781644 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:08:51 np0005539552 podman[262495]: 2025-11-29 08:08:51.000650267 +0000 UTC m=+0.084883602 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:08:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:51.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:52 np0005539552 nova_compute[233724]: 2025-11-29 08:08:52.293 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:52.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:08:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:53.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:08:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:54.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:55.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:55 np0005539552 nova_compute[233724]: 2025-11-29 08:08:55.475 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:56.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:08:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:57.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:08:57 np0005539552 nova_compute[233724]: 2025-11-29 08:08:57.297 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:57 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Nov 29 03:08:57 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:58.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:08:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:08:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:59.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:09:00 np0005539552 nova_compute[233724]: 2025-11-29 08:09:00.477 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:00.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:01.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:02 np0005539552 nova_compute[233724]: 2025-11-29 08:09:02.300 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:02.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:03.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:04.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:09:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:05.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:09:05 np0005539552 nova_compute[233724]: 2025-11-29 08:09:05.552 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:06.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:07.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:07 np0005539552 nova_compute[233724]: 2025-11-29 08:09:07.304 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:07 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:09:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:08.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:09:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:09.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:10 np0005539552 nova_compute[233724]: 2025-11-29 08:09:10.553 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:09:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:10.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:09:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:09:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:11.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:09:12 np0005539552 nova_compute[233724]: 2025-11-29 08:09:12.307 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:12.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:13.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:13 np0005539552 podman[262848]: 2025-11-29 08:09:13.257877649 +0000 UTC m=+0.056096456 container exec 2e449e743ed8b622f7d948671af16b72c407124db8f8583030f534a9757aa05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 03:09:13 np0005539552 podman[262848]: 2025-11-29 08:09:13.360037996 +0000 UTC m=+0.158256783 container exec_died 2e449e743ed8b622f7d948671af16b72c407124db8f8583030f534a9757aa05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:09:13 np0005539552 podman[263004]: 2025-11-29 08:09:13.974164314 +0000 UTC m=+0.079459336 container exec fbc97e64df6fe35a26c07fd8827c68b935db0d4237087be449d0fd015b40e005 (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-2-efzvmt)
Nov 29 03:09:13 np0005539552 podman[263004]: 2025-11-29 08:09:13.987036162 +0000 UTC m=+0.092331164 container exec_died fbc97e64df6fe35a26c07fd8827c68b935db0d4237087be449d0fd015b40e005 (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-2-efzvmt)
Nov 29 03:09:14 np0005539552 podman[263072]: 2025-11-29 08:09:14.197794701 +0000 UTC m=+0.048230733 container exec 6c20d6486ea31b9565e62504d583e2d8c9512b707e8e2015e3cef66c15f3b3e6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, release=1793, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, name=keepalived, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=)
Nov 29 03:09:14 np0005539552 podman[263072]: 2025-11-29 08:09:14.210021101 +0000 UTC m=+0.060457133 container exec_died 6c20d6486ea31b9565e62504d583e2d8c9512b707e8e2015e3cef66c15f3b3e6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, distribution-scope=public, architecture=x86_64, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, io.openshift.expose-services=, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, build-date=2023-02-22T09:23:20)
Nov 29 03:09:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:14.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:14 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:09:14 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:09:14 np0005539552 nova_compute[233724]: 2025-11-29 08:09:14.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:09:14 np0005539552 nova_compute[233724]: 2025-11-29 08:09:14.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 03:09:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:15.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:15.336 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:09:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:15.337 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:09:15 np0005539552 nova_compute[233724]: 2025-11-29 08:09:15.337 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:15 np0005539552 nova_compute[233724]: 2025-11-29 08:09:15.554 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:15 np0005539552 nova_compute[233724]: 2025-11-29 08:09:15.836 233728 DEBUG oslo_concurrency.lockutils [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "9a195147-a644-4352-b5a5-4e81a4ee7d4c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:15 np0005539552 nova_compute[233724]: 2025-11-29 08:09:15.836 233728 DEBUG oslo_concurrency.lockutils [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "9a195147-a644-4352-b5a5-4e81a4ee7d4c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:15 np0005539552 nova_compute[233724]: 2025-11-29 08:09:15.860 233728 DEBUG nova.compute.manager [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:09:15 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 03:09:15 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:09:15 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:09:15 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:09:15 np0005539552 nova_compute[233724]: 2025-11-29 08:09:15.950 233728 DEBUG oslo_concurrency.lockutils [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:15 np0005539552 nova_compute[233724]: 2025-11-29 08:09:15.950 233728 DEBUG oslo_concurrency.lockutils [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:15 np0005539552 nova_compute[233724]: 2025-11-29 08:09:15.959 233728 DEBUG nova.virt.hardware [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:09:15 np0005539552 nova_compute[233724]: 2025-11-29 08:09:15.960 233728 INFO nova.compute.claims [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:09:16 np0005539552 nova_compute[233724]: 2025-11-29 08:09:16.071 233728 DEBUG oslo_concurrency.processutils [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:16 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:09:16 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3581950375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:09:16 np0005539552 nova_compute[233724]: 2025-11-29 08:09:16.502 233728 DEBUG oslo_concurrency.processutils [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:16 np0005539552 nova_compute[233724]: 2025-11-29 08:09:16.509 233728 DEBUG nova.compute.provider_tree [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:09:16 np0005539552 nova_compute[233724]: 2025-11-29 08:09:16.524 233728 DEBUG nova.scheduler.client.report [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:09:16 np0005539552 nova_compute[233724]: 2025-11-29 08:09:16.549 233728 DEBUG oslo_concurrency.lockutils [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:16 np0005539552 nova_compute[233724]: 2025-11-29 08:09:16.550 233728 DEBUG nova.compute.manager [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:09:16 np0005539552 nova_compute[233724]: 2025-11-29 08:09:16.599 233728 DEBUG nova.compute.manager [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:09:16 np0005539552 nova_compute[233724]: 2025-11-29 08:09:16.599 233728 DEBUG nova.network.neutron [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:09:16 np0005539552 nova_compute[233724]: 2025-11-29 08:09:16.624 233728 INFO nova.virt.libvirt.driver [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:09:16 np0005539552 nova_compute[233724]: 2025-11-29 08:09:16.661 233728 DEBUG nova.compute.manager [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:09:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:16.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:16 np0005539552 nova_compute[233724]: 2025-11-29 08:09:16.826 233728 DEBUG nova.compute.manager [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:09:16 np0005539552 nova_compute[233724]: 2025-11-29 08:09:16.828 233728 DEBUG nova.virt.libvirt.driver [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:09:16 np0005539552 nova_compute[233724]: 2025-11-29 08:09:16.828 233728 INFO nova.virt.libvirt.driver [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Creating image(s)#033[00m
Nov 29 03:09:16 np0005539552 nova_compute[233724]: 2025-11-29 08:09:16.857 233728 DEBUG nova.storage.rbd_utils [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image 9a195147-a644-4352-b5a5-4e81a4ee7d4c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:16 np0005539552 nova_compute[233724]: 2025-11-29 08:09:16.891 233728 DEBUG nova.storage.rbd_utils [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image 9a195147-a644-4352-b5a5-4e81a4ee7d4c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:16 np0005539552 nova_compute[233724]: 2025-11-29 08:09:16.922 233728 DEBUG nova.storage.rbd_utils [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image 9a195147-a644-4352-b5a5-4e81a4ee7d4c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:16 np0005539552 nova_compute[233724]: 2025-11-29 08:09:16.926 233728 DEBUG oslo_concurrency.processutils [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:16 np0005539552 nova_compute[233724]: 2025-11-29 08:09:16.989 233728 DEBUG nova.policy [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9ab0114aca6149af994da2b9052c1368', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8384e5887c0948f5876c019d50057152', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:09:16 np0005539552 nova_compute[233724]: 2025-11-29 08:09:16.991 233728 DEBUG oslo_concurrency.processutils [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:16 np0005539552 nova_compute[233724]: 2025-11-29 08:09:16.992 233728 DEBUG oslo_concurrency.lockutils [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:16 np0005539552 nova_compute[233724]: 2025-11-29 08:09:16.992 233728 DEBUG oslo_concurrency.lockutils [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:16 np0005539552 nova_compute[233724]: 2025-11-29 08:09:16.993 233728 DEBUG oslo_concurrency.lockutils [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:17 np0005539552 nova_compute[233724]: 2025-11-29 08:09:17.016 233728 DEBUG nova.storage.rbd_utils [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image 9a195147-a644-4352-b5a5-4e81a4ee7d4c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:17 np0005539552 nova_compute[233724]: 2025-11-29 08:09:17.020 233728 DEBUG oslo_concurrency.processutils [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 9a195147-a644-4352-b5a5-4e81a4ee7d4c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:17.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:17 np0005539552 nova_compute[233724]: 2025-11-29 08:09:17.297 233728 DEBUG oslo_concurrency.processutils [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 9a195147-a644-4352-b5a5-4e81a4ee7d4c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.277s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:17 np0005539552 nova_compute[233724]: 2025-11-29 08:09:17.328 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:17 np0005539552 nova_compute[233724]: 2025-11-29 08:09:17.364 233728 DEBUG nova.storage.rbd_utils [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] resizing rbd image 9a195147-a644-4352-b5a5-4e81a4ee7d4c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:09:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:17 np0005539552 nova_compute[233724]: 2025-11-29 08:09:17.460 233728 DEBUG nova.objects.instance [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lazy-loading 'migration_context' on Instance uuid 9a195147-a644-4352-b5a5-4e81a4ee7d4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:09:17 np0005539552 nova_compute[233724]: 2025-11-29 08:09:17.476 233728 DEBUG nova.virt.libvirt.driver [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:09:17 np0005539552 nova_compute[233724]: 2025-11-29 08:09:17.476 233728 DEBUG nova.virt.libvirt.driver [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Ensure instance console log exists: /var/lib/nova/instances/9a195147-a644-4352-b5a5-4e81a4ee7d4c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:09:17 np0005539552 nova_compute[233724]: 2025-11-29 08:09:17.477 233728 DEBUG oslo_concurrency.lockutils [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:17 np0005539552 nova_compute[233724]: 2025-11-29 08:09:17.477 233728 DEBUG oslo_concurrency.lockutils [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:17 np0005539552 nova_compute[233724]: 2025-11-29 08:09:17.477 233728 DEBUG oslo_concurrency.lockutils [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:18 np0005539552 nova_compute[233724]: 2025-11-29 08:09:18.249 233728 DEBUG nova.network.neutron [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Successfully created port: 2c5188d4-f3c0-4374-9952-360d1ab07a47 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:09:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:18.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:19.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:19 np0005539552 nova_compute[233724]: 2025-11-29 08:09:19.567 233728 DEBUG nova.network.neutron [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Successfully updated port: 2c5188d4-f3c0-4374-9952-360d1ab07a47 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:09:19 np0005539552 nova_compute[233724]: 2025-11-29 08:09:19.588 233728 DEBUG oslo_concurrency.lockutils [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "refresh_cache-9a195147-a644-4352-b5a5-4e81a4ee7d4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:09:19 np0005539552 nova_compute[233724]: 2025-11-29 08:09:19.588 233728 DEBUG oslo_concurrency.lockutils [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquired lock "refresh_cache-9a195147-a644-4352-b5a5-4e81a4ee7d4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:09:19 np0005539552 nova_compute[233724]: 2025-11-29 08:09:19.589 233728 DEBUG nova.network.neutron [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:09:19 np0005539552 nova_compute[233724]: 2025-11-29 08:09:19.657 233728 DEBUG nova.compute.manager [req-9bfc5aa4-98b8-43c8-84cc-1e257f9aed63 req-d6b7d020-2f02-45fd-bb3f-d277bbb96665 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Received event network-changed-2c5188d4-f3c0-4374-9952-360d1ab07a47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:19 np0005539552 nova_compute[233724]: 2025-11-29 08:09:19.657 233728 DEBUG nova.compute.manager [req-9bfc5aa4-98b8-43c8-84cc-1e257f9aed63 req-d6b7d020-2f02-45fd-bb3f-d277bbb96665 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Refreshing instance network info cache due to event network-changed-2c5188d4-f3c0-4374-9952-360d1ab07a47. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:09:19 np0005539552 nova_compute[233724]: 2025-11-29 08:09:19.658 233728 DEBUG oslo_concurrency.lockutils [req-9bfc5aa4-98b8-43c8-84cc-1e257f9aed63 req-d6b7d020-2f02-45fd-bb3f-d277bbb96665 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-9a195147-a644-4352-b5a5-4e81a4ee7d4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:09:19 np0005539552 nova_compute[233724]: 2025-11-29 08:09:19.791 233728 DEBUG nova.network.neutron [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:09:20 np0005539552 nova_compute[233724]: 2025-11-29 08:09:20.557 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:20.617 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:20.617 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:20.618 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:09:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:20.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:09:20 np0005539552 nova_compute[233724]: 2025-11-29 08:09:20.942 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:09:20 np0005539552 nova_compute[233724]: 2025-11-29 08:09:20.943 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:09:20 np0005539552 nova_compute[233724]: 2025-11-29 08:09:20.974 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:20 np0005539552 nova_compute[233724]: 2025-11-29 08:09:20.975 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:20 np0005539552 nova_compute[233724]: 2025-11-29 08:09:20.975 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:20 np0005539552 nova_compute[233724]: 2025-11-29 08:09:20.975 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:09:20 np0005539552 nova_compute[233724]: 2025-11-29 08:09:20.976 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:21 np0005539552 nova_compute[233724]: 2025-11-29 08:09:21.048 233728 DEBUG nova.network.neutron [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Updating instance_info_cache with network_info: [{"id": "2c5188d4-f3c0-4374-9952-360d1ab07a47", "address": "fa:16:3e:b6:6d:ff", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c5188d4-f3", "ovs_interfaceid": "2c5188d4-f3c0-4374-9952-360d1ab07a47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:09:21 np0005539552 nova_compute[233724]: 2025-11-29 08:09:21.068 233728 DEBUG oslo_concurrency.lockutils [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Releasing lock "refresh_cache-9a195147-a644-4352-b5a5-4e81a4ee7d4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:09:21 np0005539552 nova_compute[233724]: 2025-11-29 08:09:21.068 233728 DEBUG nova.compute.manager [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Instance network_info: |[{"id": "2c5188d4-f3c0-4374-9952-360d1ab07a47", "address": "fa:16:3e:b6:6d:ff", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c5188d4-f3", "ovs_interfaceid": "2c5188d4-f3c0-4374-9952-360d1ab07a47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:09:21 np0005539552 nova_compute[233724]: 2025-11-29 08:09:21.068 233728 DEBUG oslo_concurrency.lockutils [req-9bfc5aa4-98b8-43c8-84cc-1e257f9aed63 req-d6b7d020-2f02-45fd-bb3f-d277bbb96665 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-9a195147-a644-4352-b5a5-4e81a4ee7d4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:09:21 np0005539552 nova_compute[233724]: 2025-11-29 08:09:21.069 233728 DEBUG nova.network.neutron [req-9bfc5aa4-98b8-43c8-84cc-1e257f9aed63 req-d6b7d020-2f02-45fd-bb3f-d277bbb96665 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Refreshing network info cache for port 2c5188d4-f3c0-4374-9952-360d1ab07a47 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:09:21 np0005539552 nova_compute[233724]: 2025-11-29 08:09:21.071 233728 DEBUG nova.virt.libvirt.driver [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Start _get_guest_xml network_info=[{"id": "2c5188d4-f3c0-4374-9952-360d1ab07a47", "address": "fa:16:3e:b6:6d:ff", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c5188d4-f3", "ovs_interfaceid": "2c5188d4-f3c0-4374-9952-360d1ab07a47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:09:21 np0005539552 nova_compute[233724]: 2025-11-29 08:09:21.075 233728 WARNING nova.virt.libvirt.driver [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:09:21 np0005539552 nova_compute[233724]: 2025-11-29 08:09:21.079 233728 DEBUG nova.virt.libvirt.host [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:09:21 np0005539552 nova_compute[233724]: 2025-11-29 08:09:21.080 233728 DEBUG nova.virt.libvirt.host [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:09:21 np0005539552 nova_compute[233724]: 2025-11-29 08:09:21.082 233728 DEBUG nova.virt.libvirt.host [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:09:21 np0005539552 nova_compute[233724]: 2025-11-29 08:09:21.082 233728 DEBUG nova.virt.libvirt.host [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:09:21 np0005539552 nova_compute[233724]: 2025-11-29 08:09:21.083 233728 DEBUG nova.virt.libvirt.driver [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:09:21 np0005539552 nova_compute[233724]: 2025-11-29 08:09:21.084 233728 DEBUG nova.virt.hardware [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:09:21 np0005539552 nova_compute[233724]: 2025-11-29 08:09:21.084 233728 DEBUG nova.virt.hardware [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:09:21 np0005539552 nova_compute[233724]: 2025-11-29 08:09:21.084 233728 DEBUG nova.virt.hardware [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:09:21 np0005539552 nova_compute[233724]: 2025-11-29 08:09:21.084 233728 DEBUG nova.virt.hardware [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:09:21 np0005539552 nova_compute[233724]: 2025-11-29 08:09:21.085 233728 DEBUG nova.virt.hardware [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:09:21 np0005539552 nova_compute[233724]: 2025-11-29 08:09:21.085 233728 DEBUG nova.virt.hardware [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:09:21 np0005539552 nova_compute[233724]: 2025-11-29 08:09:21.085 233728 DEBUG nova.virt.hardware [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:09:21 np0005539552 nova_compute[233724]: 2025-11-29 08:09:21.085 233728 DEBUG nova.virt.hardware [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:09:21 np0005539552 nova_compute[233724]: 2025-11-29 08:09:21.085 233728 DEBUG nova.virt.hardware [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:09:21 np0005539552 nova_compute[233724]: 2025-11-29 08:09:21.086 233728 DEBUG nova.virt.hardware [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:09:21 np0005539552 nova_compute[233724]: 2025-11-29 08:09:21.086 233728 DEBUG nova.virt.hardware [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:09:21 np0005539552 nova_compute[233724]: 2025-11-29 08:09:21.089 233728 DEBUG oslo_concurrency.processutils [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:21.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:21 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:09:21 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2216565402' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:09:21 np0005539552 nova_compute[233724]: 2025-11-29 08:09:21.452 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:21 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:09:21 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4143946727' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:09:21 np0005539552 podman[263497]: 2025-11-29 08:09:21.550470479 +0000 UTC m=+0.062319583 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 29 03:09:21 np0005539552 podman[263496]: 2025-11-29 08:09:21.554333943 +0000 UTC m=+0.065804317 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd)
Nov 29 03:09:21 np0005539552 nova_compute[233724]: 2025-11-29 08:09:21.563 233728 DEBUG oslo_concurrency.processutils [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:21 np0005539552 podman[263498]: 2025-11-29 08:09:21.577840078 +0000 UTC m=+0.087816062 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:09:21 np0005539552 nova_compute[233724]: 2025-11-29 08:09:21.591 233728 DEBUG nova.storage.rbd_utils [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image 9a195147-a644-4352-b5a5-4e81a4ee7d4c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:21 np0005539552 nova_compute[233724]: 2025-11-29 08:09:21.595 233728 DEBUG oslo_concurrency.processutils [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:21 np0005539552 nova_compute[233724]: 2025-11-29 08:09:21.720 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:09:21 np0005539552 nova_compute[233724]: 2025-11-29 08:09:21.722 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4552MB free_disk=20.952682495117188GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:09:21 np0005539552 nova_compute[233724]: 2025-11-29 08:09:21.722 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:21 np0005539552 nova_compute[233724]: 2025-11-29 08:09:21.723 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:21 np0005539552 nova_compute[233724]: 2025-11-29 08:09:21.813 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 9a195147-a644-4352-b5a5-4e81a4ee7d4c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:09:21 np0005539552 nova_compute[233724]: 2025-11-29 08:09:21.814 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:09:21 np0005539552 nova_compute[233724]: 2025-11-29 08:09:21.815 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:09:21 np0005539552 nova_compute[233724]: 2025-11-29 08:09:21.909 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:22 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:09:22 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:09:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:09:22 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2505308844' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:09:22 np0005539552 nova_compute[233724]: 2025-11-29 08:09:22.103 233728 DEBUG oslo_concurrency.processutils [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:22 np0005539552 nova_compute[233724]: 2025-11-29 08:09:22.105 233728 DEBUG nova.virt.libvirt.vif [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:09:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-688009273',display_name='tempest-ServerDiskConfigTestJSON-server-688009273',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-688009273',id=75,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8384e5887c0948f5876c019d50057152',ramdisk_id='',reservation_id='r-xcthga44',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-767135984',owner_user_name='tempest-ServerDiskC
onfigTestJSON-767135984-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:16Z,user_data=None,user_id='9ab0114aca6149af994da2b9052c1368',uuid=9a195147-a644-4352-b5a5-4e81a4ee7d4c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2c5188d4-f3c0-4374-9952-360d1ab07a47", "address": "fa:16:3e:b6:6d:ff", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c5188d4-f3", "ovs_interfaceid": "2c5188d4-f3c0-4374-9952-360d1ab07a47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:09:22 np0005539552 nova_compute[233724]: 2025-11-29 08:09:22.105 233728 DEBUG nova.network.os_vif_util [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converting VIF {"id": "2c5188d4-f3c0-4374-9952-360d1ab07a47", "address": "fa:16:3e:b6:6d:ff", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c5188d4-f3", "ovs_interfaceid": "2c5188d4-f3c0-4374-9952-360d1ab07a47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:09:22 np0005539552 nova_compute[233724]: 2025-11-29 08:09:22.106 233728 DEBUG nova.network.os_vif_util [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b6:6d:ff,bridge_name='br-int',has_traffic_filtering=True,id=2c5188d4-f3c0-4374-9952-360d1ab07a47,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c5188d4-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:09:22 np0005539552 nova_compute[233724]: 2025-11-29 08:09:22.107 233728 DEBUG nova.objects.instance [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9a195147-a644-4352-b5a5-4e81a4ee7d4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:09:22 np0005539552 nova_compute[233724]: 2025-11-29 08:09:22.137 233728 DEBUG nova.virt.libvirt.driver [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:09:22 np0005539552 nova_compute[233724]:  <uuid>9a195147-a644-4352-b5a5-4e81a4ee7d4c</uuid>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:  <name>instance-0000004b</name>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:09:22 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-688009273</nova:name>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:09:21</nova:creationTime>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:09:22 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:        <nova:user uuid="9ab0114aca6149af994da2b9052c1368">tempest-ServerDiskConfigTestJSON-767135984-project-member</nova:user>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:        <nova:project uuid="8384e5887c0948f5876c019d50057152">tempest-ServerDiskConfigTestJSON-767135984</nova:project>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:        <nova:port uuid="2c5188d4-f3c0-4374-9952-360d1ab07a47">
Nov 29 03:09:22 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:      <entry name="serial">9a195147-a644-4352-b5a5-4e81a4ee7d4c</entry>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:      <entry name="uuid">9a195147-a644-4352-b5a5-4e81a4ee7d4c</entry>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:09:22 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/9a195147-a644-4352-b5a5-4e81a4ee7d4c_disk">
Nov 29 03:09:22 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:09:22 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:09:22 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/9a195147-a644-4352-b5a5-4e81a4ee7d4c_disk.config">
Nov 29 03:09:22 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:09:22 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:09:22 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:b6:6d:ff"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:      <target dev="tap2c5188d4-f3"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:09:22 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/9a195147-a644-4352-b5a5-4e81a4ee7d4c/console.log" append="off"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:09:22 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:09:22 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:09:22 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:09:22 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:09:22 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:09:22 np0005539552 nova_compute[233724]: 2025-11-29 08:09:22.143 233728 DEBUG nova.compute.manager [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Preparing to wait for external event network-vif-plugged-2c5188d4-f3c0-4374-9952-360d1ab07a47 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:09:22 np0005539552 nova_compute[233724]: 2025-11-29 08:09:22.143 233728 DEBUG oslo_concurrency.lockutils [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "9a195147-a644-4352-b5a5-4e81a4ee7d4c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:22 np0005539552 nova_compute[233724]: 2025-11-29 08:09:22.144 233728 DEBUG oslo_concurrency.lockutils [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "9a195147-a644-4352-b5a5-4e81a4ee7d4c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:22 np0005539552 nova_compute[233724]: 2025-11-29 08:09:22.144 233728 DEBUG oslo_concurrency.lockutils [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "9a195147-a644-4352-b5a5-4e81a4ee7d4c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:22 np0005539552 nova_compute[233724]: 2025-11-29 08:09:22.145 233728 DEBUG nova.virt.libvirt.vif [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:09:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-688009273',display_name='tempest-ServerDiskConfigTestJSON-server-688009273',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-688009273',id=75,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8384e5887c0948f5876c019d50057152',ramdisk_id='',reservation_id='r-xcthga44',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-767135984',owner_user_name='tempest-S
erverDiskConfigTestJSON-767135984-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:16Z,user_data=None,user_id='9ab0114aca6149af994da2b9052c1368',uuid=9a195147-a644-4352-b5a5-4e81a4ee7d4c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2c5188d4-f3c0-4374-9952-360d1ab07a47", "address": "fa:16:3e:b6:6d:ff", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c5188d4-f3", "ovs_interfaceid": "2c5188d4-f3c0-4374-9952-360d1ab07a47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:09:22 np0005539552 nova_compute[233724]: 2025-11-29 08:09:22.145 233728 DEBUG nova.network.os_vif_util [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converting VIF {"id": "2c5188d4-f3c0-4374-9952-360d1ab07a47", "address": "fa:16:3e:b6:6d:ff", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c5188d4-f3", "ovs_interfaceid": "2c5188d4-f3c0-4374-9952-360d1ab07a47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:09:22 np0005539552 nova_compute[233724]: 2025-11-29 08:09:22.146 233728 DEBUG nova.network.os_vif_util [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b6:6d:ff,bridge_name='br-int',has_traffic_filtering=True,id=2c5188d4-f3c0-4374-9952-360d1ab07a47,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c5188d4-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:09:22 np0005539552 nova_compute[233724]: 2025-11-29 08:09:22.147 233728 DEBUG os_vif [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b6:6d:ff,bridge_name='br-int',has_traffic_filtering=True,id=2c5188d4-f3c0-4374-9952-360d1ab07a47,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c5188d4-f3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:09:22 np0005539552 nova_compute[233724]: 2025-11-29 08:09:22.147 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:22 np0005539552 nova_compute[233724]: 2025-11-29 08:09:22.148 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:22 np0005539552 nova_compute[233724]: 2025-11-29 08:09:22.148 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:09:22 np0005539552 nova_compute[233724]: 2025-11-29 08:09:22.151 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:22 np0005539552 nova_compute[233724]: 2025-11-29 08:09:22.152 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2c5188d4-f3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:22 np0005539552 nova_compute[233724]: 2025-11-29 08:09:22.152 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2c5188d4-f3, col_values=(('external_ids', {'iface-id': '2c5188d4-f3c0-4374-9952-360d1ab07a47', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b6:6d:ff', 'vm-uuid': '9a195147-a644-4352-b5a5-4e81a4ee7d4c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:22 np0005539552 nova_compute[233724]: 2025-11-29 08:09:22.154 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:22 np0005539552 NetworkManager[48926]: <info>  [1764403762.1553] manager: (tap2c5188d4-f3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/115)
Nov 29 03:09:22 np0005539552 nova_compute[233724]: 2025-11-29 08:09:22.156 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:09:22 np0005539552 nova_compute[233724]: 2025-11-29 08:09:22.162 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:22 np0005539552 nova_compute[233724]: 2025-11-29 08:09:22.163 233728 INFO os_vif [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b6:6d:ff,bridge_name='br-int',has_traffic_filtering=True,id=2c5188d4-f3c0-4374-9952-360d1ab07a47,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c5188d4-f3')#033[00m
Nov 29 03:09:22 np0005539552 nova_compute[233724]: 2025-11-29 08:09:22.225 233728 DEBUG nova.virt.libvirt.driver [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:09:22 np0005539552 nova_compute[233724]: 2025-11-29 08:09:22.226 233728 DEBUG nova.virt.libvirt.driver [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:09:22 np0005539552 nova_compute[233724]: 2025-11-29 08:09:22.227 233728 DEBUG nova.virt.libvirt.driver [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] No VIF found with MAC fa:16:3e:b6:6d:ff, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:09:22 np0005539552 nova_compute[233724]: 2025-11-29 08:09:22.227 233728 INFO nova.virt.libvirt.driver [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Using config drive#033[00m
Nov 29 03:09:22 np0005539552 nova_compute[233724]: 2025-11-29 08:09:22.252 233728 DEBUG nova.storage.rbd_utils [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image 9a195147-a644-4352-b5a5-4e81a4ee7d4c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:09:22 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2877588959' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:09:22 np0005539552 nova_compute[233724]: 2025-11-29 08:09:22.337 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:22 np0005539552 nova_compute[233724]: 2025-11-29 08:09:22.342 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:09:22 np0005539552 nova_compute[233724]: 2025-11-29 08:09:22.365 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:09:22 np0005539552 nova_compute[233724]: 2025-11-29 08:09:22.385 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:09:22 np0005539552 nova_compute[233724]: 2025-11-29 08:09:22.385 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:22 np0005539552 nova_compute[233724]: 2025-11-29 08:09:22.588 233728 INFO nova.virt.libvirt.driver [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Creating config drive at /var/lib/nova/instances/9a195147-a644-4352-b5a5-4e81a4ee7d4c/disk.config#033[00m
Nov 29 03:09:22 np0005539552 nova_compute[233724]: 2025-11-29 08:09:22.596 233728 DEBUG oslo_concurrency.processutils [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9a195147-a644-4352-b5a5-4e81a4ee7d4c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptd9ae44y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:22 np0005539552 nova_compute[233724]: 2025-11-29 08:09:22.644 233728 DEBUG nova.network.neutron [req-9bfc5aa4-98b8-43c8-84cc-1e257f9aed63 req-d6b7d020-2f02-45fd-bb3f-d277bbb96665 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Updated VIF entry in instance network info cache for port 2c5188d4-f3c0-4374-9952-360d1ab07a47. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:09:22 np0005539552 nova_compute[233724]: 2025-11-29 08:09:22.645 233728 DEBUG nova.network.neutron [req-9bfc5aa4-98b8-43c8-84cc-1e257f9aed63 req-d6b7d020-2f02-45fd-bb3f-d277bbb96665 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Updating instance_info_cache with network_info: [{"id": "2c5188d4-f3c0-4374-9952-360d1ab07a47", "address": "fa:16:3e:b6:6d:ff", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c5188d4-f3", "ovs_interfaceid": "2c5188d4-f3c0-4374-9952-360d1ab07a47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:09:22 np0005539552 nova_compute[233724]: 2025-11-29 08:09:22.674 233728 DEBUG oslo_concurrency.lockutils [req-9bfc5aa4-98b8-43c8-84cc-1e257f9aed63 req-d6b7d020-2f02-45fd-bb3f-d277bbb96665 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-9a195147-a644-4352-b5a5-4e81a4ee7d4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:09:22 np0005539552 nova_compute[233724]: 2025-11-29 08:09:22.728 233728 DEBUG oslo_concurrency.processutils [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9a195147-a644-4352-b5a5-4e81a4ee7d4c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptd9ae44y" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:22 np0005539552 nova_compute[233724]: 2025-11-29 08:09:22.753 233728 DEBUG nova.storage.rbd_utils [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image 9a195147-a644-4352-b5a5-4e81a4ee7d4c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:22 np0005539552 nova_compute[233724]: 2025-11-29 08:09:22.757 233728 DEBUG oslo_concurrency.processutils [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9a195147-a644-4352-b5a5-4e81a4ee7d4c/disk.config 9a195147-a644-4352-b5a5-4e81a4ee7d4c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:22.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:22 np0005539552 nova_compute[233724]: 2025-11-29 08:09:22.904 233728 DEBUG oslo_concurrency.processutils [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9a195147-a644-4352-b5a5-4e81a4ee7d4c/disk.config 9a195147-a644-4352-b5a5-4e81a4ee7d4c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:22 np0005539552 nova_compute[233724]: 2025-11-29 08:09:22.905 233728 INFO nova.virt.libvirt.driver [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Deleting local config drive /var/lib/nova/instances/9a195147-a644-4352-b5a5-4e81a4ee7d4c/disk.config because it was imported into RBD.#033[00m
Nov 29 03:09:22 np0005539552 kernel: tap2c5188d4-f3: entered promiscuous mode
Nov 29 03:09:22 np0005539552 NetworkManager[48926]: <info>  [1764403762.9498] manager: (tap2c5188d4-f3): new Tun device (/org/freedesktop/NetworkManager/Devices/116)
Nov 29 03:09:22 np0005539552 ovn_controller[133798]: 2025-11-29T08:09:22Z|00217|binding|INFO|Claiming lport 2c5188d4-f3c0-4374-9952-360d1ab07a47 for this chassis.
Nov 29 03:09:22 np0005539552 ovn_controller[133798]: 2025-11-29T08:09:22Z|00218|binding|INFO|2c5188d4-f3c0-4374-9952-360d1ab07a47: Claiming fa:16:3e:b6:6d:ff 10.100.0.10
Nov 29 03:09:22 np0005539552 nova_compute[233724]: 2025-11-29 08:09:22.950 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:22 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:22.964 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b6:6d:ff 10.100.0.10'], port_security=['fa:16:3e:b6:6d:ff 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '9a195147-a644-4352-b5a5-4e81a4ee7d4c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8384e5887c0948f5876c019d50057152', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a1e9ed13-b0e1-45c0-9be6-be0f145466a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c0727149-3377-4d23-9d8d-0006462cd03e, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=2c5188d4-f3c0-4374-9952-360d1ab07a47) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:09:22 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:22.964 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 2c5188d4-f3c0-4374-9952-360d1ab07a47 in datapath 65f88c5a-8801-4bc1-9eed-15e2bab4717d bound to our chassis#033[00m
Nov 29 03:09:22 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:22.966 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 65f88c5a-8801-4bc1-9eed-15e2bab4717d#033[00m
Nov 29 03:09:22 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:22.978 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b0d22cef-f755-4d82-81bd-772fad38d439]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:22 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:22.979 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap65f88c5a-81 in ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:09:22 np0005539552 systemd-udevd[263719]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:09:22 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:22.983 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap65f88c5a-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:09:22 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:22.983 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d8faa366-f985-4bd3-92db-3e5c65c248d1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:22 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:22.985 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[8b73e981-c492-4d7f-9513-f6362ca83e3b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:22 np0005539552 systemd-machined[196379]: New machine qemu-26-instance-0000004b.
Nov 29 03:09:22 np0005539552 NetworkManager[48926]: <info>  [1764403762.9941] device (tap2c5188d4-f3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:09:22 np0005539552 NetworkManager[48926]: <info>  [1764403762.9952] device (tap2c5188d4-f3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:09:22 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:22.997 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[8eb54f03-f7d4-4ead-8c7b-f21cb0bb9956]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:23.021 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[5292cabe-0d1a-49bd-9e6b-971059be3cc0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:23 np0005539552 systemd[1]: Started Virtual Machine qemu-26-instance-0000004b.
Nov 29 03:09:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:23.050 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[a0c9002d-78ce-40ac-afd0-873f948d50c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:23 np0005539552 nova_compute[233724]: 2025-11-29 08:09:23.052 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:23 np0005539552 ovn_controller[133798]: 2025-11-29T08:09:23Z|00219|binding|INFO|Setting lport 2c5188d4-f3c0-4374-9952-360d1ab07a47 ovn-installed in OVS
Nov 29 03:09:23 np0005539552 nova_compute[233724]: 2025-11-29 08:09:23.056 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:23 np0005539552 ovn_controller[133798]: 2025-11-29T08:09:23Z|00220|binding|INFO|Setting lport 2c5188d4-f3c0-4374-9952-360d1ab07a47 up in Southbound
Nov 29 03:09:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:23.054 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[5cc59904-b015-4921-842c-37034dbe290d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:23 np0005539552 NetworkManager[48926]: <info>  [1764403763.0574] manager: (tap65f88c5a-80): new Veth device (/org/freedesktop/NetworkManager/Devices/117)
Nov 29 03:09:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:23.086 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[8ce3da35-33f8-443e-8611-efb571ca3bf0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:23.089 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[2fa3565c-7e93-4816-9323-8052ce039fcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:23 np0005539552 NetworkManager[48926]: <info>  [1764403763.1118] device (tap65f88c5a-80): carrier: link connected
Nov 29 03:09:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:23.115 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[45efe736-4b88-4327-ab6a-c79e6ed1377a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:23.131 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3f432566-7a14-4549-8e95-d25448096300]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap65f88c5a-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:22:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 70], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 680879, 'reachable_time': 19929, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263752, 'error': None, 'target': 'ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:23.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:23.148 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c18d86eb-08a8-4cd9-b98a-f361f62aa339]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feda:227e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 680879, 'tstamp': 680879}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263753, 'error': None, 'target': 'ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:23.164 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[17f4c68b-aac1-47e0-8f8e-ddae6d867bae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap65f88c5a-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:22:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 70], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 680879, 'reachable_time': 19929, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 263754, 'error': None, 'target': 'ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:23.199 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[8cb2d273-4c8c-48cc-847a-07de3f3f986f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:23.267 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0a6786eb-e2d2-4228-b3db-00baf6f872fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:23.269 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap65f88c5a-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:23.269 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:09:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:23.270 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap65f88c5a-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:23 np0005539552 nova_compute[233724]: 2025-11-29 08:09:23.271 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:23 np0005539552 kernel: tap65f88c5a-80: entered promiscuous mode
Nov 29 03:09:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:23.274 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap65f88c5a-80, col_values=(('external_ids', {'iface-id': 'dd9b6149-e4f7-45dd-a89e-de246cf739ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:23 np0005539552 ovn_controller[133798]: 2025-11-29T08:09:23Z|00221|binding|INFO|Releasing lport dd9b6149-e4f7-45dd-a89e-de246cf739ae from this chassis (sb_readonly=0)
Nov 29 03:09:23 np0005539552 NetworkManager[48926]: <info>  [1764403763.2763] manager: (tap65f88c5a-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/118)
Nov 29 03:09:23 np0005539552 nova_compute[233724]: 2025-11-29 08:09:23.277 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:23 np0005539552 nova_compute[233724]: 2025-11-29 08:09:23.291 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:23.292 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/65f88c5a-8801-4bc1-9eed-15e2bab4717d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/65f88c5a-8801-4bc1-9eed-15e2bab4717d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:09:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:23.293 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d75238d4-2581-432f-922a-7f326dab0011]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:23.293 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:09:23 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:09:23 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:09:23 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-65f88c5a-8801-4bc1-9eed-15e2bab4717d
Nov 29 03:09:23 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:09:23 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:09:23 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:09:23 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/65f88c5a-8801-4bc1-9eed-15e2bab4717d.pid.haproxy
Nov 29 03:09:23 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:09:23 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:09:23 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:09:23 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:09:23 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:09:23 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:09:23 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:09:23 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:09:23 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:09:23 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:09:23 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:09:23 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:09:23 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:09:23 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:09:23 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:09:23 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:09:23 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:09:23 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:09:23 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:09:23 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:09:23 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 65f88c5a-8801-4bc1-9eed-15e2bab4717d
Nov 29 03:09:23 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:09:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:23.294 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'env', 'PROCESS_TAG=haproxy-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/65f88c5a-8801-4bc1-9eed-15e2bab4717d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:09:23 np0005539552 nova_compute[233724]: 2025-11-29 08:09:23.365 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:09:23 np0005539552 nova_compute[233724]: 2025-11-29 08:09:23.366 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:09:23 np0005539552 nova_compute[233724]: 2025-11-29 08:09:23.366 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:09:23 np0005539552 podman[263786]: 2025-11-29 08:09:23.644486094 +0000 UTC m=+0.048947332 container create fe464768ced1ceab00f390581753c673735ff746284a07b8ce50763a71f7342a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:09:23 np0005539552 systemd[1]: Started libpod-conmon-fe464768ced1ceab00f390581753c673735ff746284a07b8ce50763a71f7342a.scope.
Nov 29 03:09:23 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:09:23 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ed630cc296eb4791684140951c177e151eff88f517d8584c6587dfcdc6dd68e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:09:23 np0005539552 podman[263786]: 2025-11-29 08:09:23.614772682 +0000 UTC m=+0.019233940 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:09:23 np0005539552 podman[263786]: 2025-11-29 08:09:23.71768061 +0000 UTC m=+0.122141868 container init fe464768ced1ceab00f390581753c673735ff746284a07b8ce50763a71f7342a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:09:23 np0005539552 nova_compute[233724]: 2025-11-29 08:09:23.717 233728 DEBUG nova.compute.manager [req-07e08e81-c138-4a4a-9ca7-dc1bcb55deef req-5a13fd66-0b9a-472a-9166-7d482d83b188 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Received event network-vif-plugged-2c5188d4-f3c0-4374-9952-360d1ab07a47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:23 np0005539552 nova_compute[233724]: 2025-11-29 08:09:23.719 233728 DEBUG oslo_concurrency.lockutils [req-07e08e81-c138-4a4a-9ca7-dc1bcb55deef req-5a13fd66-0b9a-472a-9166-7d482d83b188 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9a195147-a644-4352-b5a5-4e81a4ee7d4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:23 np0005539552 nova_compute[233724]: 2025-11-29 08:09:23.719 233728 DEBUG oslo_concurrency.lockutils [req-07e08e81-c138-4a4a-9ca7-dc1bcb55deef req-5a13fd66-0b9a-472a-9166-7d482d83b188 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9a195147-a644-4352-b5a5-4e81a4ee7d4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:23 np0005539552 nova_compute[233724]: 2025-11-29 08:09:23.719 233728 DEBUG oslo_concurrency.lockutils [req-07e08e81-c138-4a4a-9ca7-dc1bcb55deef req-5a13fd66-0b9a-472a-9166-7d482d83b188 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9a195147-a644-4352-b5a5-4e81a4ee7d4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:23 np0005539552 nova_compute[233724]: 2025-11-29 08:09:23.720 233728 DEBUG nova.compute.manager [req-07e08e81-c138-4a4a-9ca7-dc1bcb55deef req-5a13fd66-0b9a-472a-9166-7d482d83b188 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Processing event network-vif-plugged-2c5188d4-f3c0-4374-9952-360d1ab07a47 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:09:23 np0005539552 podman[263786]: 2025-11-29 08:09:23.723790805 +0000 UTC m=+0.128252043 container start fe464768ced1ceab00f390581753c673735ff746284a07b8ce50763a71f7342a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 03:09:23 np0005539552 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[263801]: [NOTICE]   (263820) : New worker (263825) forked
Nov 29 03:09:23 np0005539552 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[263801]: [NOTICE]   (263820) : Loading success.
Nov 29 03:09:23 np0005539552 nova_compute[233724]: 2025-11-29 08:09:23.884 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403763.8843336, 9a195147-a644-4352-b5a5-4e81a4ee7d4c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:09:23 np0005539552 nova_compute[233724]: 2025-11-29 08:09:23.885 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] VM Started (Lifecycle Event)#033[00m
Nov 29 03:09:23 np0005539552 nova_compute[233724]: 2025-11-29 08:09:23.887 233728 DEBUG nova.compute.manager [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:09:23 np0005539552 nova_compute[233724]: 2025-11-29 08:09:23.890 233728 DEBUG nova.virt.libvirt.driver [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:09:23 np0005539552 nova_compute[233724]: 2025-11-29 08:09:23.893 233728 INFO nova.virt.libvirt.driver [-] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Instance spawned successfully.#033[00m
Nov 29 03:09:23 np0005539552 nova_compute[233724]: 2025-11-29 08:09:23.893 233728 DEBUG nova.virt.libvirt.driver [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:09:23 np0005539552 nova_compute[233724]: 2025-11-29 08:09:23.914 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:23 np0005539552 nova_compute[233724]: 2025-11-29 08:09:23.919 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:09:23 np0005539552 nova_compute[233724]: 2025-11-29 08:09:23.922 233728 DEBUG nova.virt.libvirt.driver [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:23 np0005539552 nova_compute[233724]: 2025-11-29 08:09:23.922 233728 DEBUG nova.virt.libvirt.driver [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:23 np0005539552 nova_compute[233724]: 2025-11-29 08:09:23.923 233728 DEBUG nova.virt.libvirt.driver [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:23 np0005539552 nova_compute[233724]: 2025-11-29 08:09:23.923 233728 DEBUG nova.virt.libvirt.driver [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:23 np0005539552 nova_compute[233724]: 2025-11-29 08:09:23.923 233728 DEBUG nova.virt.libvirt.driver [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:23 np0005539552 nova_compute[233724]: 2025-11-29 08:09:23.924 233728 DEBUG nova.virt.libvirt.driver [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:23 np0005539552 nova_compute[233724]: 2025-11-29 08:09:23.926 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:09:23 np0005539552 nova_compute[233724]: 2025-11-29 08:09:23.949 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:09:23 np0005539552 nova_compute[233724]: 2025-11-29 08:09:23.950 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403763.884494, 9a195147-a644-4352-b5a5-4e81a4ee7d4c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:09:23 np0005539552 nova_compute[233724]: 2025-11-29 08:09:23.950 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:09:23 np0005539552 nova_compute[233724]: 2025-11-29 08:09:23.974 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:23 np0005539552 nova_compute[233724]: 2025-11-29 08:09:23.976 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403763.8895984, 9a195147-a644-4352-b5a5-4e81a4ee7d4c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:09:23 np0005539552 nova_compute[233724]: 2025-11-29 08:09:23.976 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:09:24 np0005539552 nova_compute[233724]: 2025-11-29 08:09:24.004 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:24 np0005539552 nova_compute[233724]: 2025-11-29 08:09:24.007 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:09:24 np0005539552 nova_compute[233724]: 2025-11-29 08:09:24.022 233728 INFO nova.compute.manager [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Took 7.20 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:09:24 np0005539552 nova_compute[233724]: 2025-11-29 08:09:24.022 233728 DEBUG nova.compute.manager [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:24 np0005539552 nova_compute[233724]: 2025-11-29 08:09:24.033 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:09:24 np0005539552 nova_compute[233724]: 2025-11-29 08:09:24.081 233728 INFO nova.compute.manager [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Took 8.16 seconds to build instance.#033[00m
Nov 29 03:09:24 np0005539552 nova_compute[233724]: 2025-11-29 08:09:24.097 233728 DEBUG oslo_concurrency.lockutils [None req-a1cf83f9-c5c8-4bde-b195-5bfa198f0a32 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "9a195147-a644-4352-b5a5-4e81a4ee7d4c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.260s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:24.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:24 np0005539552 nova_compute[233724]: 2025-11-29 08:09:24.918 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:09:24 np0005539552 nova_compute[233724]: 2025-11-29 08:09:24.922 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:09:24 np0005539552 nova_compute[233724]: 2025-11-29 08:09:24.922 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:09:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:25.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:25.339 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:25 np0005539552 nova_compute[233724]: 2025-11-29 08:09:25.561 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:25 np0005539552 nova_compute[233724]: 2025-11-29 08:09:25.841 233728 DEBUG nova.compute.manager [req-1a06349c-4759-4cc7-8ed0-ac79f23b179d req-ea08e5ad-a4f7-4900-b451-1b832368e09f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Received event network-vif-plugged-2c5188d4-f3c0-4374-9952-360d1ab07a47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:25 np0005539552 nova_compute[233724]: 2025-11-29 08:09:25.841 233728 DEBUG oslo_concurrency.lockutils [req-1a06349c-4759-4cc7-8ed0-ac79f23b179d req-ea08e5ad-a4f7-4900-b451-1b832368e09f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9a195147-a644-4352-b5a5-4e81a4ee7d4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:09:25 np0005539552 nova_compute[233724]: 2025-11-29 08:09:25.841 233728 DEBUG oslo_concurrency.lockutils [req-1a06349c-4759-4cc7-8ed0-ac79f23b179d req-ea08e5ad-a4f7-4900-b451-1b832368e09f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9a195147-a644-4352-b5a5-4e81a4ee7d4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:09:25 np0005539552 nova_compute[233724]: 2025-11-29 08:09:25.842 233728 DEBUG oslo_concurrency.lockutils [req-1a06349c-4759-4cc7-8ed0-ac79f23b179d req-ea08e5ad-a4f7-4900-b451-1b832368e09f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9a195147-a644-4352-b5a5-4e81a4ee7d4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:09:25 np0005539552 nova_compute[233724]: 2025-11-29 08:09:25.842 233728 DEBUG nova.compute.manager [req-1a06349c-4759-4cc7-8ed0-ac79f23b179d req-ea08e5ad-a4f7-4900-b451-1b832368e09f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] No waiting events found dispatching network-vif-plugged-2c5188d4-f3c0-4374-9952-360d1ab07a47 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:09:25 np0005539552 nova_compute[233724]: 2025-11-29 08:09:25.842 233728 WARNING nova.compute.manager [req-1a06349c-4759-4cc7-8ed0-ac79f23b179d req-ea08e5ad-a4f7-4900-b451-1b832368e09f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Received unexpected event network-vif-plugged-2c5188d4-f3c0-4374-9952-360d1ab07a47 for instance with vm_state active and task_state None.
Nov 29 03:09:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:26.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:26 np0005539552 nova_compute[233724]: 2025-11-29 08:09:26.945 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:09:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:27.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:27 np0005539552 nova_compute[233724]: 2025-11-29 08:09:27.156 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:27 np0005539552 nova_compute[233724]: 2025-11-29 08:09:27.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:09:27 np0005539552 nova_compute[233724]: 2025-11-29 08:09:27.923 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:09:27 np0005539552 nova_compute[233724]: 2025-11-29 08:09:27.923 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 03:09:27 np0005539552 nova_compute[233724]: 2025-11-29 08:09:27.943 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "refresh_cache-9a195147-a644-4352-b5a5-4e81a4ee7d4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:09:27 np0005539552 nova_compute[233724]: 2025-11-29 08:09:27.943 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquired lock "refresh_cache-9a195147-a644-4352-b5a5-4e81a4ee7d4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:09:27 np0005539552 nova_compute[233724]: 2025-11-29 08:09:27.943 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 03:09:27 np0005539552 nova_compute[233724]: 2025-11-29 08:09:27.944 233728 DEBUG nova.objects.instance [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9a195147-a644-4352-b5a5-4e81a4ee7d4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:09:28 np0005539552 nova_compute[233724]: 2025-11-29 08:09:28.371 233728 INFO nova.compute.manager [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Rebuilding instance
Nov 29 03:09:28 np0005539552 nova_compute[233724]: 2025-11-29 08:09:28.611 233728 DEBUG nova.objects.instance [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 9a195147-a644-4352-b5a5-4e81a4ee7d4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:09:28 np0005539552 nova_compute[233724]: 2025-11-29 08:09:28.628 233728 DEBUG nova.compute.manager [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:09:28 np0005539552 nova_compute[233724]: 2025-11-29 08:09:28.674 233728 DEBUG nova.objects.instance [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lazy-loading 'pci_requests' on Instance uuid 9a195147-a644-4352-b5a5-4e81a4ee7d4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:09:28 np0005539552 nova_compute[233724]: 2025-11-29 08:09:28.692 233728 DEBUG nova.objects.instance [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9a195147-a644-4352-b5a5-4e81a4ee7d4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:09:28 np0005539552 nova_compute[233724]: 2025-11-29 08:09:28.708 233728 DEBUG nova.objects.instance [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lazy-loading 'resources' on Instance uuid 9a195147-a644-4352-b5a5-4e81a4ee7d4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:09:28 np0005539552 nova_compute[233724]: 2025-11-29 08:09:28.722 233728 DEBUG nova.objects.instance [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lazy-loading 'migration_context' on Instance uuid 9a195147-a644-4352-b5a5-4e81a4ee7d4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:09:28 np0005539552 nova_compute[233724]: 2025-11-29 08:09:28.744 233728 DEBUG nova.objects.instance [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 29 03:09:28 np0005539552 nova_compute[233724]: 2025-11-29 08:09:28.752 233728 DEBUG nova.virt.libvirt.driver [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 29 03:09:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:28.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:09:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:29.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:09:29 np0005539552 nova_compute[233724]: 2025-11-29 08:09:29.252 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Updating instance_info_cache with network_info: [{"id": "2c5188d4-f3c0-4374-9952-360d1ab07a47", "address": "fa:16:3e:b6:6d:ff", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c5188d4-f3", "ovs_interfaceid": "2c5188d4-f3c0-4374-9952-360d1ab07a47", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:09:29 np0005539552 nova_compute[233724]: 2025-11-29 08:09:29.274 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Releasing lock "refresh_cache-9a195147-a644-4352-b5a5-4e81a4ee7d4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:09:29 np0005539552 nova_compute[233724]: 2025-11-29 08:09:29.275 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 03:09:30 np0005539552 nova_compute[233724]: 2025-11-29 08:09:30.562 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:30.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:31.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:32 np0005539552 nova_compute[233724]: 2025-11-29 08:09:32.159 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:32 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:32.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:33.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:34.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:35.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:35 np0005539552 nova_compute[233724]: 2025-11-29 08:09:35.563 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:35 np0005539552 nova_compute[233724]: 2025-11-29 08:09:35.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:09:35 np0005539552 nova_compute[233724]: 2025-11-29 08:09:35.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 03:09:35 np0005539552 nova_compute[233724]: 2025-11-29 08:09:35.942 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 03:09:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:36.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:36 np0005539552 ovn_controller[133798]: 2025-11-29T08:09:36Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b6:6d:ff 10.100.0.10
Nov 29 03:09:36 np0005539552 ovn_controller[133798]: 2025-11-29T08:09:36Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b6:6d:ff 10.100.0.10
Nov 29 03:09:36 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:09:36 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/246004814' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:09:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:37.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:37 np0005539552 nova_compute[233724]: 2025-11-29 08:09:37.191 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:09:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2981558990' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:09:38 np0005539552 nova_compute[233724]: 2025-11-29 08:09:38.797 233728 DEBUG nova.virt.libvirt.driver [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 29 03:09:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:38.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:09:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1717802514' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:09:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:09:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1717802514' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:09:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:39.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:09:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/866512526' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:09:40 np0005539552 nova_compute[233724]: 2025-11-29 08:09:40.565 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:40.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:41 np0005539552 kernel: tap2c5188d4-f3 (unregistering): left promiscuous mode
Nov 29 03:09:41 np0005539552 NetworkManager[48926]: <info>  [1764403781.1190] device (tap2c5188d4-f3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:09:41 np0005539552 nova_compute[233724]: 2025-11-29 08:09:41.123 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:41 np0005539552 ovn_controller[133798]: 2025-11-29T08:09:41Z|00222|binding|INFO|Releasing lport 2c5188d4-f3c0-4374-9952-360d1ab07a47 from this chassis (sb_readonly=0)
Nov 29 03:09:41 np0005539552 ovn_controller[133798]: 2025-11-29T08:09:41Z|00223|binding|INFO|Setting lport 2c5188d4-f3c0-4374-9952-360d1ab07a47 down in Southbound
Nov 29 03:09:41 np0005539552 ovn_controller[133798]: 2025-11-29T08:09:41Z|00224|binding|INFO|Removing iface tap2c5188d4-f3 ovn-installed in OVS
Nov 29 03:09:41 np0005539552 nova_compute[233724]: 2025-11-29 08:09:41.126 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:41 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:41.130 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b6:6d:ff 10.100.0.10'], port_security=['fa:16:3e:b6:6d:ff 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '9a195147-a644-4352-b5a5-4e81a4ee7d4c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8384e5887c0948f5876c019d50057152', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a1e9ed13-b0e1-45c0-9be6-be0f145466a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c0727149-3377-4d23-9d8d-0006462cd03e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=2c5188d4-f3c0-4374-9952-360d1ab07a47) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:09:41 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:41.132 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 2c5188d4-f3c0-4374-9952-360d1ab07a47 in datapath 65f88c5a-8801-4bc1-9eed-15e2bab4717d unbound from our chassis
Nov 29 03:09:41 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:41.135 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 65f88c5a-8801-4bc1-9eed-15e2bab4717d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 03:09:41 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:41.137 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[7a7d2787-42ec-46a6-9646-40298f22d05d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:09:41 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:41.139 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d namespace which is not needed anymore
Nov 29 03:09:41 np0005539552 nova_compute[233724]: 2025-11-29 08:09:41.160 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:41 np0005539552 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000004b.scope: Deactivated successfully.
Nov 29 03:09:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:41.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:41 np0005539552 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000004b.scope: Consumed 14.163s CPU time.
Nov 29 03:09:41 np0005539552 systemd-machined[196379]: Machine qemu-26-instance-0000004b terminated.
Nov 29 03:09:41 np0005539552 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[263801]: [NOTICE]   (263820) : haproxy version is 2.8.14-c23fe91
Nov 29 03:09:41 np0005539552 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[263801]: [NOTICE]   (263820) : path to executable is /usr/sbin/haproxy
Nov 29 03:09:41 np0005539552 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[263801]: [WARNING]  (263820) : Exiting Master process...
Nov 29 03:09:41 np0005539552 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[263801]: [ALERT]    (263820) : Current worker (263825) exited with code 143 (Terminated)
Nov 29 03:09:41 np0005539552 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[263801]: [WARNING]  (263820) : All workers exited. Exiting... (0)
Nov 29 03:09:41 np0005539552 systemd[1]: libpod-fe464768ced1ceab00f390581753c673735ff746284a07b8ce50763a71f7342a.scope: Deactivated successfully.
Nov 29 03:09:41 np0005539552 podman[263944]: 2025-11-29 08:09:41.277409358 +0000 UTC m=+0.041859691 container died fe464768ced1ceab00f390581753c673735ff746284a07b8ce50763a71f7342a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:09:41 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fe464768ced1ceab00f390581753c673735ff746284a07b8ce50763a71f7342a-userdata-shm.mount: Deactivated successfully.
Nov 29 03:09:41 np0005539552 systemd[1]: var-lib-containers-storage-overlay-4ed630cc296eb4791684140951c177e151eff88f517d8584c6587dfcdc6dd68e-merged.mount: Deactivated successfully.
Nov 29 03:09:41 np0005539552 podman[263944]: 2025-11-29 08:09:41.316167574 +0000 UTC m=+0.080617937 container cleanup fe464768ced1ceab00f390581753c673735ff746284a07b8ce50763a71f7342a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 03:09:41 np0005539552 systemd[1]: libpod-conmon-fe464768ced1ceab00f390581753c673735ff746284a07b8ce50763a71f7342a.scope: Deactivated successfully.
Nov 29 03:09:41 np0005539552 podman[263972]: 2025-11-29 08:09:41.382074623 +0000 UTC m=+0.047719799 container remove fe464768ced1ceab00f390581753c673735ff746284a07b8ce50763a71f7342a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:09:41 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:41.389 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[418c028f-9e58-46fb-8ed0-24e799d9e35f]: (4, ('Sat Nov 29 08:09:41 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d (fe464768ced1ceab00f390581753c673735ff746284a07b8ce50763a71f7342a)\nfe464768ced1ceab00f390581753c673735ff746284a07b8ce50763a71f7342a\nSat Nov 29 08:09:41 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d (fe464768ced1ceab00f390581753c673735ff746284a07b8ce50763a71f7342a)\nfe464768ced1ceab00f390581753c673735ff746284a07b8ce50763a71f7342a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:09:41 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:41.392 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[deb81602-ad75-45ab-95b2-16d191bf560b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:09:41 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:41.393 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap65f88c5a-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:09:41 np0005539552 nova_compute[233724]: 2025-11-29 08:09:41.395 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:41 np0005539552 kernel: tap65f88c5a-80: left promiscuous mode
Nov 29 03:09:41 np0005539552 nova_compute[233724]: 2025-11-29 08:09:41.414 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:41 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:41.417 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ac205bcc-06e6-4eed-b065-3daf0fa1550b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:09:41 np0005539552 nova_compute[233724]: 2025-11-29 08:09:41.431 233728 DEBUG nova.compute.manager [req-2a8e39af-abc2-4e23-9a51-da5b29cebf43 req-497c293f-89f2-4e39-88e5-ffd3f405f68d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Received event network-vif-unplugged-2c5188d4-f3c0-4374-9952-360d1ab07a47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:09:41 np0005539552 nova_compute[233724]: 2025-11-29 08:09:41.432 233728 DEBUG oslo_concurrency.lockutils [req-2a8e39af-abc2-4e23-9a51-da5b29cebf43 req-497c293f-89f2-4e39-88e5-ffd3f405f68d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9a195147-a644-4352-b5a5-4e81a4ee7d4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:09:41 np0005539552 nova_compute[233724]: 2025-11-29 08:09:41.432 233728 DEBUG oslo_concurrency.lockutils [req-2a8e39af-abc2-4e23-9a51-da5b29cebf43 req-497c293f-89f2-4e39-88e5-ffd3f405f68d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9a195147-a644-4352-b5a5-4e81a4ee7d4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:09:41 np0005539552 nova_compute[233724]: 2025-11-29 08:09:41.432 233728 DEBUG oslo_concurrency.lockutils [req-2a8e39af-abc2-4e23-9a51-da5b29cebf43 req-497c293f-89f2-4e39-88e5-ffd3f405f68d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9a195147-a644-4352-b5a5-4e81a4ee7d4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:09:41 np0005539552 nova_compute[233724]: 2025-11-29 08:09:41.432 233728 DEBUG nova.compute.manager [req-2a8e39af-abc2-4e23-9a51-da5b29cebf43 req-497c293f-89f2-4e39-88e5-ffd3f405f68d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] No waiting events found dispatching network-vif-unplugged-2c5188d4-f3c0-4374-9952-360d1ab07a47 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:09:41 np0005539552 nova_compute[233724]: 2025-11-29 08:09:41.433 233728 WARNING nova.compute.manager [req-2a8e39af-abc2-4e23-9a51-da5b29cebf43 req-497c293f-89f2-4e39-88e5-ffd3f405f68d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Received unexpected event network-vif-unplugged-2c5188d4-f3c0-4374-9952-360d1ab07a47 for instance with vm_state active and task_state rebuilding.
Nov 29 03:09:41 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:41.438 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6daff7c5-3b5b-4d40-a58a-2d0b81fe3ca7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:09:41 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:41.439 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a9bdfd78-1936-4634-b6ca-b8fa3231b7cf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:09:41 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:41.458 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[40b60a23-1117-4c95-8924-1523cc503f96]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 680873, 'reachable_time': 36102, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264001, 'error': None, 'target': 'ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:41 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:41.462 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:09:41 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:41.462 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[31bd5f79-dbc7-4bc7-ac28-ce7df06561d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:41 np0005539552 systemd[1]: run-netns-ovnmeta\x2d65f88c5a\x2d8801\x2d4bc1\x2d9eed\x2d15e2bab4717d.mount: Deactivated successfully.
Nov 29 03:09:41 np0005539552 nova_compute[233724]: 2025-11-29 08:09:41.813 233728 INFO nova.virt.libvirt.driver [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Instance shutdown successfully after 13 seconds.#033[00m
Nov 29 03:09:41 np0005539552 nova_compute[233724]: 2025-11-29 08:09:41.823 233728 INFO nova.virt.libvirt.driver [-] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Instance destroyed successfully.#033[00m
Nov 29 03:09:41 np0005539552 nova_compute[233724]: 2025-11-29 08:09:41.830 233728 INFO nova.virt.libvirt.driver [-] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Instance destroyed successfully.#033[00m
Nov 29 03:09:41 np0005539552 nova_compute[233724]: 2025-11-29 08:09:41.832 233728 DEBUG nova.virt.libvirt.vif [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-688009273',display_name='tempest-ServerDiskConfigTestJSON-server-688009273',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-688009273',id=75,image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:09:24Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8384e5887c0948f5876c019d50057152',ramdisk_id='',reservation_id='r-xcthga44',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-767135984',owner_user_name='tempest-ServerDiskConfigTestJSON-767135984-project-mem
ber'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:27Z,user_data=None,user_id='9ab0114aca6149af994da2b9052c1368',uuid=9a195147-a644-4352-b5a5-4e81a4ee7d4c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2c5188d4-f3c0-4374-9952-360d1ab07a47", "address": "fa:16:3e:b6:6d:ff", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c5188d4-f3", "ovs_interfaceid": "2c5188d4-f3c0-4374-9952-360d1ab07a47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:09:41 np0005539552 nova_compute[233724]: 2025-11-29 08:09:41.833 233728 DEBUG nova.network.os_vif_util [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converting VIF {"id": "2c5188d4-f3c0-4374-9952-360d1ab07a47", "address": "fa:16:3e:b6:6d:ff", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c5188d4-f3", "ovs_interfaceid": "2c5188d4-f3c0-4374-9952-360d1ab07a47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:09:41 np0005539552 nova_compute[233724]: 2025-11-29 08:09:41.834 233728 DEBUG nova.network.os_vif_util [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b6:6d:ff,bridge_name='br-int',has_traffic_filtering=True,id=2c5188d4-f3c0-4374-9952-360d1ab07a47,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c5188d4-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:09:41 np0005539552 nova_compute[233724]: 2025-11-29 08:09:41.835 233728 DEBUG os_vif [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b6:6d:ff,bridge_name='br-int',has_traffic_filtering=True,id=2c5188d4-f3c0-4374-9952-360d1ab07a47,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c5188d4-f3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:09:41 np0005539552 nova_compute[233724]: 2025-11-29 08:09:41.838 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:41 np0005539552 nova_compute[233724]: 2025-11-29 08:09:41.839 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2c5188d4-f3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:41 np0005539552 nova_compute[233724]: 2025-11-29 08:09:41.841 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:41 np0005539552 nova_compute[233724]: 2025-11-29 08:09:41.843 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:41 np0005539552 nova_compute[233724]: 2025-11-29 08:09:41.847 233728 INFO os_vif [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b6:6d:ff,bridge_name='br-int',has_traffic_filtering=True,id=2c5188d4-f3c0-4374-9952-360d1ab07a47,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c5188d4-f3')#033[00m
Nov 29 03:09:42 np0005539552 nova_compute[233724]: 2025-11-29 08:09:42.321 233728 INFO nova.virt.libvirt.driver [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Deleting instance files /var/lib/nova/instances/9a195147-a644-4352-b5a5-4e81a4ee7d4c_del#033[00m
Nov 29 03:09:42 np0005539552 nova_compute[233724]: 2025-11-29 08:09:42.322 233728 INFO nova.virt.libvirt.driver [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Deletion of /var/lib/nova/instances/9a195147-a644-4352-b5a5-4e81a4ee7d4c_del complete#033[00m
Nov 29 03:09:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:42 np0005539552 nova_compute[233724]: 2025-11-29 08:09:42.471 233728 DEBUG nova.virt.libvirt.driver [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:09:42 np0005539552 nova_compute[233724]: 2025-11-29 08:09:42.472 233728 INFO nova.virt.libvirt.driver [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Creating image(s)#033[00m
Nov 29 03:09:42 np0005539552 nova_compute[233724]: 2025-11-29 08:09:42.494 233728 DEBUG nova.storage.rbd_utils [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image 9a195147-a644-4352-b5a5-4e81a4ee7d4c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:42 np0005539552 nova_compute[233724]: 2025-11-29 08:09:42.520 233728 DEBUG nova.storage.rbd_utils [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image 9a195147-a644-4352-b5a5-4e81a4ee7d4c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:42 np0005539552 nova_compute[233724]: 2025-11-29 08:09:42.544 233728 DEBUG nova.storage.rbd_utils [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image 9a195147-a644-4352-b5a5-4e81a4ee7d4c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:42 np0005539552 nova_compute[233724]: 2025-11-29 08:09:42.548 233728 DEBUG oslo_concurrency.lockutils [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "6e1589dfec5abd76868fdc022175780e085b08de" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:42 np0005539552 nova_compute[233724]: 2025-11-29 08:09:42.549 233728 DEBUG oslo_concurrency.lockutils [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "6e1589dfec5abd76868fdc022175780e085b08de" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:42 np0005539552 nova_compute[233724]: 2025-11-29 08:09:42.799 233728 DEBUG nova.virt.libvirt.imagebackend [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Image locations are: [{'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/93eccffb-bacd-407f-af6f-64451dee7b21/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/93eccffb-bacd-407f-af6f-64451dee7b21/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 29 03:09:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:42.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:43.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:43 np0005539552 nova_compute[233724]: 2025-11-29 08:09:43.678 233728 DEBUG nova.compute.manager [req-5f46f088-1c43-458d-9c93-9b0a844c92bf req-c6d6662b-9e3d-4164-9196-fbe063b60b62 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Received event network-vif-plugged-2c5188d4-f3c0-4374-9952-360d1ab07a47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:43 np0005539552 nova_compute[233724]: 2025-11-29 08:09:43.679 233728 DEBUG oslo_concurrency.lockutils [req-5f46f088-1c43-458d-9c93-9b0a844c92bf req-c6d6662b-9e3d-4164-9196-fbe063b60b62 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9a195147-a644-4352-b5a5-4e81a4ee7d4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:43 np0005539552 nova_compute[233724]: 2025-11-29 08:09:43.679 233728 DEBUG oslo_concurrency.lockutils [req-5f46f088-1c43-458d-9c93-9b0a844c92bf req-c6d6662b-9e3d-4164-9196-fbe063b60b62 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9a195147-a644-4352-b5a5-4e81a4ee7d4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:43 np0005539552 nova_compute[233724]: 2025-11-29 08:09:43.679 233728 DEBUG oslo_concurrency.lockutils [req-5f46f088-1c43-458d-9c93-9b0a844c92bf req-c6d6662b-9e3d-4164-9196-fbe063b60b62 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9a195147-a644-4352-b5a5-4e81a4ee7d4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:43 np0005539552 nova_compute[233724]: 2025-11-29 08:09:43.679 233728 DEBUG nova.compute.manager [req-5f46f088-1c43-458d-9c93-9b0a844c92bf req-c6d6662b-9e3d-4164-9196-fbe063b60b62 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] No waiting events found dispatching network-vif-plugged-2c5188d4-f3c0-4374-9952-360d1ab07a47 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:09:43 np0005539552 nova_compute[233724]: 2025-11-29 08:09:43.679 233728 WARNING nova.compute.manager [req-5f46f088-1c43-458d-9c93-9b0a844c92bf req-c6d6662b-9e3d-4164-9196-fbe063b60b62 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Received unexpected event network-vif-plugged-2c5188d4-f3c0-4374-9952-360d1ab07a47 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Nov 29 03:09:43 np0005539552 nova_compute[233724]: 2025-11-29 08:09:43.813 233728 DEBUG oslo_concurrency.processutils [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:43 np0005539552 nova_compute[233724]: 2025-11-29 08:09:43.881 233728 DEBUG oslo_concurrency.processutils [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de.part --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:43 np0005539552 nova_compute[233724]: 2025-11-29 08:09:43.882 233728 DEBUG nova.virt.images [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] 93eccffb-bacd-407f-af6f-64451dee7b21 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Nov 29 03:09:43 np0005539552 nova_compute[233724]: 2025-11-29 08:09:43.883 233728 DEBUG nova.privsep.utils [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Nov 29 03:09:43 np0005539552 nova_compute[233724]: 2025-11-29 08:09:43.883 233728 DEBUG oslo_concurrency.processutils [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de.part /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:44 np0005539552 nova_compute[233724]: 2025-11-29 08:09:44.073 233728 DEBUG oslo_concurrency.processutils [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de.part /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de.converted" returned: 0 in 0.190s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:44 np0005539552 nova_compute[233724]: 2025-11-29 08:09:44.078 233728 DEBUG oslo_concurrency.processutils [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:44 np0005539552 nova_compute[233724]: 2025-11-29 08:09:44.142 233728 DEBUG oslo_concurrency.processutils [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de.converted --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:44 np0005539552 nova_compute[233724]: 2025-11-29 08:09:44.144 233728 DEBUG oslo_concurrency.lockutils [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "6e1589dfec5abd76868fdc022175780e085b08de" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:44 np0005539552 nova_compute[233724]: 2025-11-29 08:09:44.168 233728 DEBUG nova.storage.rbd_utils [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image 9a195147-a644-4352-b5a5-4e81a4ee7d4c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:44 np0005539552 nova_compute[233724]: 2025-11-29 08:09:44.171 233728 DEBUG oslo_concurrency.processutils [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de 9a195147-a644-4352-b5a5-4e81a4ee7d4c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:44 np0005539552 nova_compute[233724]: 2025-11-29 08:09:44.452 233728 DEBUG oslo_concurrency.processutils [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de 9a195147-a644-4352-b5a5-4e81a4ee7d4c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.281s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:44 np0005539552 nova_compute[233724]: 2025-11-29 08:09:44.521 233728 DEBUG nova.storage.rbd_utils [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] resizing rbd image 9a195147-a644-4352-b5a5-4e81a4ee7d4c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:09:44 np0005539552 nova_compute[233724]: 2025-11-29 08:09:44.626 233728 DEBUG nova.virt.libvirt.driver [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:09:44 np0005539552 nova_compute[233724]: 2025-11-29 08:09:44.626 233728 DEBUG nova.virt.libvirt.driver [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Ensure instance console log exists: /var/lib/nova/instances/9a195147-a644-4352-b5a5-4e81a4ee7d4c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:09:44 np0005539552 nova_compute[233724]: 2025-11-29 08:09:44.627 233728 DEBUG oslo_concurrency.lockutils [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:44 np0005539552 nova_compute[233724]: 2025-11-29 08:09:44.627 233728 DEBUG oslo_concurrency.lockutils [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:44 np0005539552 nova_compute[233724]: 2025-11-29 08:09:44.627 233728 DEBUG oslo_concurrency.lockutils [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:44 np0005539552 nova_compute[233724]: 2025-11-29 08:09:44.630 233728 DEBUG nova.virt.libvirt.driver [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Start _get_guest_xml network_info=[{"id": "2c5188d4-f3c0-4374-9952-360d1ab07a47", "address": "fa:16:3e:b6:6d:ff", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c5188d4-f3", "ovs_interfaceid": "2c5188d4-f3c0-4374-9952-360d1ab07a47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:36Z,direct_url=<?>,disk_format='qcow2',id=93eccffb-bacd-407f-af6f-64451dee7b21,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:09:44 np0005539552 nova_compute[233724]: 2025-11-29 08:09:44.636 233728 WARNING nova.virt.libvirt.driver [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Nov 29 03:09:44 np0005539552 nova_compute[233724]: 2025-11-29 08:09:44.642 233728 DEBUG nova.virt.libvirt.host [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:09:44 np0005539552 nova_compute[233724]: 2025-11-29 08:09:44.643 233728 DEBUG nova.virt.libvirt.host [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:09:44 np0005539552 nova_compute[233724]: 2025-11-29 08:09:44.646 233728 DEBUG nova.virt.libvirt.host [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:09:44 np0005539552 nova_compute[233724]: 2025-11-29 08:09:44.646 233728 DEBUG nova.virt.libvirt.host [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:09:44 np0005539552 nova_compute[233724]: 2025-11-29 08:09:44.647 233728 DEBUG nova.virt.libvirt.driver [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:09:44 np0005539552 nova_compute[233724]: 2025-11-29 08:09:44.648 233728 DEBUG nova.virt.hardware [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:36Z,direct_url=<?>,disk_format='qcow2',id=93eccffb-bacd-407f-af6f-64451dee7b21,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:09:44 np0005539552 nova_compute[233724]: 2025-11-29 08:09:44.648 233728 DEBUG nova.virt.hardware [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:09:44 np0005539552 nova_compute[233724]: 2025-11-29 08:09:44.648 233728 DEBUG nova.virt.hardware [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:09:44 np0005539552 nova_compute[233724]: 2025-11-29 08:09:44.649 233728 DEBUG nova.virt.hardware [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:09:44 np0005539552 nova_compute[233724]: 2025-11-29 08:09:44.649 233728 DEBUG nova.virt.hardware [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:09:44 np0005539552 nova_compute[233724]: 2025-11-29 08:09:44.649 233728 DEBUG nova.virt.hardware [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:09:44 np0005539552 nova_compute[233724]: 2025-11-29 08:09:44.650 233728 DEBUG nova.virt.hardware [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:09:44 np0005539552 nova_compute[233724]: 2025-11-29 08:09:44.650 233728 DEBUG nova.virt.hardware [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:09:44 np0005539552 nova_compute[233724]: 2025-11-29 08:09:44.650 233728 DEBUG nova.virt.hardware [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:09:44 np0005539552 nova_compute[233724]: 2025-11-29 08:09:44.650 233728 DEBUG nova.virt.hardware [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:09:44 np0005539552 nova_compute[233724]: 2025-11-29 08:09:44.651 233728 DEBUG nova.virt.hardware [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:09:44 np0005539552 nova_compute[233724]: 2025-11-29 08:09:44.651 233728 DEBUG nova.objects.instance [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 9a195147-a644-4352-b5a5-4e81a4ee7d4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:09:44 np0005539552 nova_compute[233724]: 2025-11-29 08:09:44.674 233728 DEBUG oslo_concurrency.processutils [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:44.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:45 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:09:45 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3716554799' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:09:45 np0005539552 nova_compute[233724]: 2025-11-29 08:09:45.125 233728 DEBUG oslo_concurrency.processutils [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:45 np0005539552 nova_compute[233724]: 2025-11-29 08:09:45.153 233728 DEBUG nova.storage.rbd_utils [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image 9a195147-a644-4352-b5a5-4e81a4ee7d4c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:45 np0005539552 nova_compute[233724]: 2025-11-29 08:09:45.156 233728 DEBUG oslo_concurrency.processutils [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:45.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:45 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:09:45 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/518459681' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:09:45 np0005539552 nova_compute[233724]: 2025-11-29 08:09:45.612 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:45 np0005539552 nova_compute[233724]: 2025-11-29 08:09:45.629 233728 DEBUG oslo_concurrency.processutils [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:45 np0005539552 nova_compute[233724]: 2025-11-29 08:09:45.630 233728 DEBUG nova.virt.libvirt.vif [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:09:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-688009273',display_name='tempest-ServerDiskConfigTestJSON-server-688009273',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-688009273',id=75,image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:09:24Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8384e5887c0948f5876c019d50057152',ramdisk_id='',reservation_id='r-xcthga44',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-767135984',owner_user_name='tempest-
ServerDiskConfigTestJSON-767135984-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:42Z,user_data=None,user_id='9ab0114aca6149af994da2b9052c1368',uuid=9a195147-a644-4352-b5a5-4e81a4ee7d4c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2c5188d4-f3c0-4374-9952-360d1ab07a47", "address": "fa:16:3e:b6:6d:ff", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c5188d4-f3", "ovs_interfaceid": "2c5188d4-f3c0-4374-9952-360d1ab07a47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:09:45 np0005539552 nova_compute[233724]: 2025-11-29 08:09:45.631 233728 DEBUG nova.network.os_vif_util [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converting VIF {"id": "2c5188d4-f3c0-4374-9952-360d1ab07a47", "address": "fa:16:3e:b6:6d:ff", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c5188d4-f3", "ovs_interfaceid": "2c5188d4-f3c0-4374-9952-360d1ab07a47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:09:45 np0005539552 nova_compute[233724]: 2025-11-29 08:09:45.632 233728 DEBUG nova.network.os_vif_util [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b6:6d:ff,bridge_name='br-int',has_traffic_filtering=True,id=2c5188d4-f3c0-4374-9952-360d1ab07a47,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c5188d4-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:09:45 np0005539552 nova_compute[233724]: 2025-11-29 08:09:45.634 233728 DEBUG nova.virt.libvirt.driver [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:09:45 np0005539552 nova_compute[233724]:  <uuid>9a195147-a644-4352-b5a5-4e81a4ee7d4c</uuid>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:  <name>instance-0000004b</name>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:09:45 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-688009273</nova:name>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:09:44</nova:creationTime>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:09:45 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:        <nova:user uuid="9ab0114aca6149af994da2b9052c1368">tempest-ServerDiskConfigTestJSON-767135984-project-member</nova:user>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:        <nova:project uuid="8384e5887c0948f5876c019d50057152">tempest-ServerDiskConfigTestJSON-767135984</nova:project>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="93eccffb-bacd-407f-af6f-64451dee7b21"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:        <nova:port uuid="2c5188d4-f3c0-4374-9952-360d1ab07a47">
Nov 29 03:09:45 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:      <entry name="serial">9a195147-a644-4352-b5a5-4e81a4ee7d4c</entry>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:      <entry name="uuid">9a195147-a644-4352-b5a5-4e81a4ee7d4c</entry>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:09:45 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/9a195147-a644-4352-b5a5-4e81a4ee7d4c_disk">
Nov 29 03:09:45 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:09:45 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:09:45 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/9a195147-a644-4352-b5a5-4e81a4ee7d4c_disk.config">
Nov 29 03:09:45 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:09:45 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:09:45 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:b6:6d:ff"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:      <target dev="tap2c5188d4-f3"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:09:45 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/9a195147-a644-4352-b5a5-4e81a4ee7d4c/console.log" append="off"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:09:45 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:09:45 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:09:45 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:09:45 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:09:45 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:09:45 np0005539552 nova_compute[233724]: 2025-11-29 08:09:45.635 233728 DEBUG nova.compute.manager [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Preparing to wait for external event network-vif-plugged-2c5188d4-f3c0-4374-9952-360d1ab07a47 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:09:45 np0005539552 nova_compute[233724]: 2025-11-29 08:09:45.635 233728 DEBUG oslo_concurrency.lockutils [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "9a195147-a644-4352-b5a5-4e81a4ee7d4c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:45 np0005539552 nova_compute[233724]: 2025-11-29 08:09:45.636 233728 DEBUG oslo_concurrency.lockutils [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "9a195147-a644-4352-b5a5-4e81a4ee7d4c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:45 np0005539552 nova_compute[233724]: 2025-11-29 08:09:45.636 233728 DEBUG oslo_concurrency.lockutils [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "9a195147-a644-4352-b5a5-4e81a4ee7d4c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:45 np0005539552 nova_compute[233724]: 2025-11-29 08:09:45.636 233728 DEBUG nova.virt.libvirt.vif [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:09:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-688009273',display_name='tempest-ServerDiskConfigTestJSON-server-688009273',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-688009273',id=75,image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:09:24Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8384e5887c0948f5876c019d50057152',ramdisk_id='',reservation_id='r-xcthga44',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-767135984',owner_user_name='tempest-
ServerDiskConfigTestJSON-767135984-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:42Z,user_data=None,user_id='9ab0114aca6149af994da2b9052c1368',uuid=9a195147-a644-4352-b5a5-4e81a4ee7d4c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2c5188d4-f3c0-4374-9952-360d1ab07a47", "address": "fa:16:3e:b6:6d:ff", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c5188d4-f3", "ovs_interfaceid": "2c5188d4-f3c0-4374-9952-360d1ab07a47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:09:45 np0005539552 nova_compute[233724]: 2025-11-29 08:09:45.637 233728 DEBUG nova.network.os_vif_util [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converting VIF {"id": "2c5188d4-f3c0-4374-9952-360d1ab07a47", "address": "fa:16:3e:b6:6d:ff", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c5188d4-f3", "ovs_interfaceid": "2c5188d4-f3c0-4374-9952-360d1ab07a47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:09:45 np0005539552 nova_compute[233724]: 2025-11-29 08:09:45.637 233728 DEBUG nova.network.os_vif_util [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b6:6d:ff,bridge_name='br-int',has_traffic_filtering=True,id=2c5188d4-f3c0-4374-9952-360d1ab07a47,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c5188d4-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:09:45 np0005539552 nova_compute[233724]: 2025-11-29 08:09:45.637 233728 DEBUG os_vif [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b6:6d:ff,bridge_name='br-int',has_traffic_filtering=True,id=2c5188d4-f3c0-4374-9952-360d1ab07a47,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c5188d4-f3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:09:45 np0005539552 nova_compute[233724]: 2025-11-29 08:09:45.638 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:45 np0005539552 nova_compute[233724]: 2025-11-29 08:09:45.638 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:45 np0005539552 nova_compute[233724]: 2025-11-29 08:09:45.639 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:09:45 np0005539552 nova_compute[233724]: 2025-11-29 08:09:45.640 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:45 np0005539552 nova_compute[233724]: 2025-11-29 08:09:45.641 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2c5188d4-f3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:45 np0005539552 nova_compute[233724]: 2025-11-29 08:09:45.641 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2c5188d4-f3, col_values=(('external_ids', {'iface-id': '2c5188d4-f3c0-4374-9952-360d1ab07a47', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b6:6d:ff', 'vm-uuid': '9a195147-a644-4352-b5a5-4e81a4ee7d4c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:45 np0005539552 nova_compute[233724]: 2025-11-29 08:09:45.642 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:45 np0005539552 NetworkManager[48926]: <info>  [1764403785.6432] manager: (tap2c5188d4-f3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/119)
Nov 29 03:09:45 np0005539552 nova_compute[233724]: 2025-11-29 08:09:45.644 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:09:45 np0005539552 nova_compute[233724]: 2025-11-29 08:09:45.648 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:45 np0005539552 nova_compute[233724]: 2025-11-29 08:09:45.649 233728 INFO os_vif [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b6:6d:ff,bridge_name='br-int',has_traffic_filtering=True,id=2c5188d4-f3c0-4374-9952-360d1ab07a47,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c5188d4-f3')#033[00m
Nov 29 03:09:45 np0005539552 nova_compute[233724]: 2025-11-29 08:09:45.696 233728 DEBUG nova.virt.libvirt.driver [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:09:45 np0005539552 nova_compute[233724]: 2025-11-29 08:09:45.697 233728 DEBUG nova.virt.libvirt.driver [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:09:45 np0005539552 nova_compute[233724]: 2025-11-29 08:09:45.697 233728 DEBUG nova.virt.libvirt.driver [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] No VIF found with MAC fa:16:3e:b6:6d:ff, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:09:45 np0005539552 nova_compute[233724]: 2025-11-29 08:09:45.697 233728 INFO nova.virt.libvirt.driver [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Using config drive#033[00m
Nov 29 03:09:45 np0005539552 nova_compute[233724]: 2025-11-29 08:09:45.720 233728 DEBUG nova.storage.rbd_utils [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image 9a195147-a644-4352-b5a5-4e81a4ee7d4c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:45 np0005539552 nova_compute[233724]: 2025-11-29 08:09:45.744 233728 DEBUG nova.objects.instance [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 9a195147-a644-4352-b5a5-4e81a4ee7d4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:09:45 np0005539552 nova_compute[233724]: 2025-11-29 08:09:45.784 233728 DEBUG nova.objects.instance [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lazy-loading 'keypairs' on Instance uuid 9a195147-a644-4352-b5a5-4e81a4ee7d4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:09:46 np0005539552 nova_compute[233724]: 2025-11-29 08:09:46.245 233728 INFO nova.virt.libvirt.driver [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Creating config drive at /var/lib/nova/instances/9a195147-a644-4352-b5a5-4e81a4ee7d4c/disk.config#033[00m
Nov 29 03:09:46 np0005539552 nova_compute[233724]: 2025-11-29 08:09:46.252 233728 DEBUG oslo_concurrency.processutils [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9a195147-a644-4352-b5a5-4e81a4ee7d4c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9bxbes2n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:46 np0005539552 nova_compute[233724]: 2025-11-29 08:09:46.390 233728 DEBUG oslo_concurrency.processutils [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9a195147-a644-4352-b5a5-4e81a4ee7d4c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9bxbes2n" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:46 np0005539552 nova_compute[233724]: 2025-11-29 08:09:46.432 233728 DEBUG nova.storage.rbd_utils [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image 9a195147-a644-4352-b5a5-4e81a4ee7d4c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:46 np0005539552 nova_compute[233724]: 2025-11-29 08:09:46.437 233728 DEBUG oslo_concurrency.processutils [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9a195147-a644-4352-b5a5-4e81a4ee7d4c/disk.config 9a195147-a644-4352-b5a5-4e81a4ee7d4c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:46 np0005539552 nova_compute[233724]: 2025-11-29 08:09:46.703 233728 DEBUG oslo_concurrency.processutils [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9a195147-a644-4352-b5a5-4e81a4ee7d4c/disk.config 9a195147-a644-4352-b5a5-4e81a4ee7d4c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.266s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:46 np0005539552 nova_compute[233724]: 2025-11-29 08:09:46.704 233728 INFO nova.virt.libvirt.driver [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Deleting local config drive /var/lib/nova/instances/9a195147-a644-4352-b5a5-4e81a4ee7d4c/disk.config because it was imported into RBD.#033[00m
Nov 29 03:09:46 np0005539552 kernel: tap2c5188d4-f3: entered promiscuous mode
Nov 29 03:09:46 np0005539552 NetworkManager[48926]: <info>  [1764403786.7554] manager: (tap2c5188d4-f3): new Tun device (/org/freedesktop/NetworkManager/Devices/120)
Nov 29 03:09:46 np0005539552 ovn_controller[133798]: 2025-11-29T08:09:46Z|00225|binding|INFO|Claiming lport 2c5188d4-f3c0-4374-9952-360d1ab07a47 for this chassis.
Nov 29 03:09:46 np0005539552 ovn_controller[133798]: 2025-11-29T08:09:46Z|00226|binding|INFO|2c5188d4-f3c0-4374-9952-360d1ab07a47: Claiming fa:16:3e:b6:6d:ff 10.100.0.10
Nov 29 03:09:46 np0005539552 nova_compute[233724]: 2025-11-29 08:09:46.758 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:46.766 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b6:6d:ff 10.100.0.10'], port_security=['fa:16:3e:b6:6d:ff 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '9a195147-a644-4352-b5a5-4e81a4ee7d4c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8384e5887c0948f5876c019d50057152', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'a1e9ed13-b0e1-45c0-9be6-be0f145466a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c0727149-3377-4d23-9d8d-0006462cd03e, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=2c5188d4-f3c0-4374-9952-360d1ab07a47) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:09:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:46.767 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 2c5188d4-f3c0-4374-9952-360d1ab07a47 in datapath 65f88c5a-8801-4bc1-9eed-15e2bab4717d bound to our chassis#033[00m
Nov 29 03:09:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:46.769 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 65f88c5a-8801-4bc1-9eed-15e2bab4717d#033[00m
Nov 29 03:09:46 np0005539552 ovn_controller[133798]: 2025-11-29T08:09:46Z|00227|binding|INFO|Setting lport 2c5188d4-f3c0-4374-9952-360d1ab07a47 ovn-installed in OVS
Nov 29 03:09:46 np0005539552 ovn_controller[133798]: 2025-11-29T08:09:46Z|00228|binding|INFO|Setting lport 2c5188d4-f3c0-4374-9952-360d1ab07a47 up in Southbound
Nov 29 03:09:46 np0005539552 nova_compute[233724]: 2025-11-29 08:09:46.776 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:46 np0005539552 nova_compute[233724]: 2025-11-29 08:09:46.779 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:46.780 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e90424be-d04c-4d41-9b5d-4b29804cf96c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:46.781 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap65f88c5a-81 in ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:09:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:46.783 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap65f88c5a-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:09:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:46.783 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[9b8bedf2-4928-4b86-a478-1bfe71228a50]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:46.784 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[497e843b-095e-483c-b6fa-3b219f72ecc0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:46 np0005539552 systemd-udevd[264341]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:09:46 np0005539552 systemd-machined[196379]: New machine qemu-27-instance-0000004b.
Nov 29 03:09:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:46.796 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[13475118-489c-4d15-8be6-9e741a2992bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:46 np0005539552 NetworkManager[48926]: <info>  [1764403786.8022] device (tap2c5188d4-f3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:09:46 np0005539552 NetworkManager[48926]: <info>  [1764403786.8033] device (tap2c5188d4-f3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:09:46 np0005539552 systemd[1]: Started Virtual Machine qemu-27-instance-0000004b.
Nov 29 03:09:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:46.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:46.822 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0952ed4e-20b3-44b4-a06a-f2ebd64ce91f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:46.850 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[1169e3bf-04ff-48fe-9738-e54a986398b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:46 np0005539552 NetworkManager[48926]: <info>  [1764403786.8557] manager: (tap65f88c5a-80): new Veth device (/org/freedesktop/NetworkManager/Devices/121)
Nov 29 03:09:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:46.855 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[7a51d3f5-5f13-4749-8a6b-60f24849013e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:46.883 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[8879e234-85b2-40f0-94be-58bb91a11c8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:46.885 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[bfb07420-7ff3-4b6d-8eaf-62640eea31fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:46 np0005539552 NetworkManager[48926]: <info>  [1764403786.9043] device (tap65f88c5a-80): carrier: link connected
Nov 29 03:09:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:46.908 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[4ec622dc-3220-4fe8-a9b8-d08eccf14e50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:46.922 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[52c5daca-490a-4064-9299-b08f9f60dff0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap65f88c5a-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:22:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 683259, 'reachable_time': 31510, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264373, 'error': None, 'target': 'ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:46.936 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e36d7595-d806-47fb-81a9-66aebd566501]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feda:227e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 683259, 'tstamp': 683259}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264374, 'error': None, 'target': 'ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:46.950 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d90ddca3-5cbd-412b-b884-5d1118a64f8a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap65f88c5a-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:22:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 683259, 'reachable_time': 31510, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 264375, 'error': None, 'target': 'ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:46.978 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d32c778f-83b8-433e-aa97-7ca15b786404]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:47.028 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4e59e511-98bd-4984-ad93-dff36b44a91e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:47.029 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap65f88c5a-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:47.029 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:09:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:47.030 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap65f88c5a-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:47 np0005539552 nova_compute[233724]: 2025-11-29 08:09:47.031 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:47 np0005539552 NetworkManager[48926]: <info>  [1764403787.0322] manager: (tap65f88c5a-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/122)
Nov 29 03:09:47 np0005539552 kernel: tap65f88c5a-80: entered promiscuous mode
Nov 29 03:09:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:47.037 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap65f88c5a-80, col_values=(('external_ids', {'iface-id': 'dd9b6149-e4f7-45dd-a89e-de246cf739ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:47 np0005539552 ovn_controller[133798]: 2025-11-29T08:09:47Z|00229|binding|INFO|Releasing lport dd9b6149-e4f7-45dd-a89e-de246cf739ae from this chassis (sb_readonly=0)
Nov 29 03:09:47 np0005539552 nova_compute[233724]: 2025-11-29 08:09:47.038 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:47.039 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/65f88c5a-8801-4bc1-9eed-15e2bab4717d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/65f88c5a-8801-4bc1-9eed-15e2bab4717d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:09:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:47.039 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[8d88fa67-2cf9-4430-94ef-1a0712d9ab92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:47.040 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:09:47 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:09:47 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:09:47 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-65f88c5a-8801-4bc1-9eed-15e2bab4717d
Nov 29 03:09:47 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:09:47 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:09:47 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:09:47 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/65f88c5a-8801-4bc1-9eed-15e2bab4717d.pid.haproxy
Nov 29 03:09:47 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:09:47 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:09:47 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:09:47 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:09:47 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:09:47 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:09:47 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:09:47 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:09:47 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:09:47 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:09:47 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:09:47 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:09:47 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:09:47 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:09:47 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:09:47 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:09:47 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:09:47 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:09:47 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:09:47 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:09:47 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 65f88c5a-8801-4bc1-9eed-15e2bab4717d
Nov 29 03:09:47 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:09:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:47.040 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'env', 'PROCESS_TAG=haproxy-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/65f88c5a-8801-4bc1-9eed-15e2bab4717d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:09:47 np0005539552 nova_compute[233724]: 2025-11-29 08:09:47.052 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:47 np0005539552 nova_compute[233724]: 2025-11-29 08:09:47.160 233728 DEBUG nova.virt.libvirt.host [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Removed pending event for 9a195147-a644-4352-b5a5-4e81a4ee7d4c due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:09:47 np0005539552 nova_compute[233724]: 2025-11-29 08:09:47.161 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403787.160086, 9a195147-a644-4352-b5a5-4e81a4ee7d4c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:09:47 np0005539552 nova_compute[233724]: 2025-11-29 08:09:47.161 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] VM Started (Lifecycle Event)#033[00m
Nov 29 03:09:47 np0005539552 nova_compute[233724]: 2025-11-29 08:09:47.180 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:47 np0005539552 nova_compute[233724]: 2025-11-29 08:09:47.184 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403787.1611633, 9a195147-a644-4352-b5a5-4e81a4ee7d4c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:09:47 np0005539552 nova_compute[233724]: 2025-11-29 08:09:47.185 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:09:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:47.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:47 np0005539552 nova_compute[233724]: 2025-11-29 08:09:47.209 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:47 np0005539552 nova_compute[233724]: 2025-11-29 08:09:47.213 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:09:47 np0005539552 nova_compute[233724]: 2025-11-29 08:09:47.231 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 29 03:09:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:47 np0005539552 podman[264450]: 2025-11-29 08:09:47.423287249 +0000 UTC m=+0.086748663 container create fbbce2e717bd7f78134e15b21a24a2fa5fc8d98285c50c6855a635bec7ba11fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:09:47 np0005539552 podman[264450]: 2025-11-29 08:09:47.382437876 +0000 UTC m=+0.045899360 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:09:47 np0005539552 systemd[1]: Started libpod-conmon-fbbce2e717bd7f78134e15b21a24a2fa5fc8d98285c50c6855a635bec7ba11fc.scope.
Nov 29 03:09:47 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:09:47 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c6fbdac96e178a378fce6d631885baa2c722f655ee9249edd61dd8e68355631/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:09:47 np0005539552 podman[264450]: 2025-11-29 08:09:47.534559773 +0000 UTC m=+0.198021197 container init fbbce2e717bd7f78134e15b21a24a2fa5fc8d98285c50c6855a635bec7ba11fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:09:47 np0005539552 podman[264450]: 2025-11-29 08:09:47.539536467 +0000 UTC m=+0.202997861 container start fbbce2e717bd7f78134e15b21a24a2fa5fc8d98285c50c6855a635bec7ba11fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:09:47 np0005539552 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[264465]: [NOTICE]   (264469) : New worker (264471) forked
Nov 29 03:09:47 np0005539552 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[264465]: [NOTICE]   (264469) : Loading success.
Nov 29 03:09:47 np0005539552 nova_compute[233724]: 2025-11-29 08:09:47.918 233728 DEBUG nova.compute.manager [req-00dc17f6-b4d4-41ea-a15d-8ae624f8eab3 req-9adb411f-d1a3-4281-ae9a-e8895fb124b4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Received event network-vif-plugged-2c5188d4-f3c0-4374-9952-360d1ab07a47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:47 np0005539552 nova_compute[233724]: 2025-11-29 08:09:47.919 233728 DEBUG oslo_concurrency.lockutils [req-00dc17f6-b4d4-41ea-a15d-8ae624f8eab3 req-9adb411f-d1a3-4281-ae9a-e8895fb124b4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9a195147-a644-4352-b5a5-4e81a4ee7d4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:47 np0005539552 nova_compute[233724]: 2025-11-29 08:09:47.919 233728 DEBUG oslo_concurrency.lockutils [req-00dc17f6-b4d4-41ea-a15d-8ae624f8eab3 req-9adb411f-d1a3-4281-ae9a-e8895fb124b4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9a195147-a644-4352-b5a5-4e81a4ee7d4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:47 np0005539552 nova_compute[233724]: 2025-11-29 08:09:47.920 233728 DEBUG oslo_concurrency.lockutils [req-00dc17f6-b4d4-41ea-a15d-8ae624f8eab3 req-9adb411f-d1a3-4281-ae9a-e8895fb124b4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9a195147-a644-4352-b5a5-4e81a4ee7d4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:47 np0005539552 nova_compute[233724]: 2025-11-29 08:09:47.920 233728 DEBUG nova.compute.manager [req-00dc17f6-b4d4-41ea-a15d-8ae624f8eab3 req-9adb411f-d1a3-4281-ae9a-e8895fb124b4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Processing event network-vif-plugged-2c5188d4-f3c0-4374-9952-360d1ab07a47 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:09:47 np0005539552 nova_compute[233724]: 2025-11-29 08:09:47.920 233728 DEBUG nova.compute.manager [req-00dc17f6-b4d4-41ea-a15d-8ae624f8eab3 req-9adb411f-d1a3-4281-ae9a-e8895fb124b4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Received event network-vif-plugged-2c5188d4-f3c0-4374-9952-360d1ab07a47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:47 np0005539552 nova_compute[233724]: 2025-11-29 08:09:47.920 233728 DEBUG oslo_concurrency.lockutils [req-00dc17f6-b4d4-41ea-a15d-8ae624f8eab3 req-9adb411f-d1a3-4281-ae9a-e8895fb124b4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9a195147-a644-4352-b5a5-4e81a4ee7d4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:47 np0005539552 nova_compute[233724]: 2025-11-29 08:09:47.920 233728 DEBUG oslo_concurrency.lockutils [req-00dc17f6-b4d4-41ea-a15d-8ae624f8eab3 req-9adb411f-d1a3-4281-ae9a-e8895fb124b4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9a195147-a644-4352-b5a5-4e81a4ee7d4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:47 np0005539552 nova_compute[233724]: 2025-11-29 08:09:47.921 233728 DEBUG oslo_concurrency.lockutils [req-00dc17f6-b4d4-41ea-a15d-8ae624f8eab3 req-9adb411f-d1a3-4281-ae9a-e8895fb124b4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9a195147-a644-4352-b5a5-4e81a4ee7d4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:47 np0005539552 nova_compute[233724]: 2025-11-29 08:09:47.921 233728 DEBUG nova.compute.manager [req-00dc17f6-b4d4-41ea-a15d-8ae624f8eab3 req-9adb411f-d1a3-4281-ae9a-e8895fb124b4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] No waiting events found dispatching network-vif-plugged-2c5188d4-f3c0-4374-9952-360d1ab07a47 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:09:47 np0005539552 nova_compute[233724]: 2025-11-29 08:09:47.921 233728 WARNING nova.compute.manager [req-00dc17f6-b4d4-41ea-a15d-8ae624f8eab3 req-9adb411f-d1a3-4281-ae9a-e8895fb124b4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Received unexpected event network-vif-plugged-2c5188d4-f3c0-4374-9952-360d1ab07a47 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Nov 29 03:09:47 np0005539552 nova_compute[233724]: 2025-11-29 08:09:47.922 233728 DEBUG nova.compute.manager [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:09:47 np0005539552 nova_compute[233724]: 2025-11-29 08:09:47.925 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403787.924761, 9a195147-a644-4352-b5a5-4e81a4ee7d4c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:09:47 np0005539552 nova_compute[233724]: 2025-11-29 08:09:47.925 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:09:47 np0005539552 nova_compute[233724]: 2025-11-29 08:09:47.927 233728 DEBUG nova.virt.libvirt.driver [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:09:47 np0005539552 nova_compute[233724]: 2025-11-29 08:09:47.930 233728 INFO nova.virt.libvirt.driver [-] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Instance spawned successfully.#033[00m
Nov 29 03:09:47 np0005539552 nova_compute[233724]: 2025-11-29 08:09:47.931 233728 DEBUG nova.virt.libvirt.driver [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:09:47 np0005539552 nova_compute[233724]: 2025-11-29 08:09:47.978 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:47 np0005539552 nova_compute[233724]: 2025-11-29 08:09:47.980 233728 DEBUG nova.virt.libvirt.driver [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:47 np0005539552 nova_compute[233724]: 2025-11-29 08:09:47.980 233728 DEBUG nova.virt.libvirt.driver [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:47 np0005539552 nova_compute[233724]: 2025-11-29 08:09:47.981 233728 DEBUG nova.virt.libvirt.driver [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:47 np0005539552 nova_compute[233724]: 2025-11-29 08:09:47.981 233728 DEBUG nova.virt.libvirt.driver [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:47 np0005539552 nova_compute[233724]: 2025-11-29 08:09:47.981 233728 DEBUG nova.virt.libvirt.driver [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:47 np0005539552 nova_compute[233724]: 2025-11-29 08:09:47.982 233728 DEBUG nova.virt.libvirt.driver [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:47 np0005539552 nova_compute[233724]: 2025-11-29 08:09:47.986 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:09:48 np0005539552 nova_compute[233724]: 2025-11-29 08:09:48.033 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 29 03:09:48 np0005539552 nova_compute[233724]: 2025-11-29 08:09:48.056 233728 DEBUG nova.compute.manager [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:48 np0005539552 nova_compute[233724]: 2025-11-29 08:09:48.143 233728 DEBUG oslo_concurrency.lockutils [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:48 np0005539552 nova_compute[233724]: 2025-11-29 08:09:48.144 233728 DEBUG oslo_concurrency.lockutils [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:48 np0005539552 nova_compute[233724]: 2025-11-29 08:09:48.144 233728 DEBUG nova.objects.instance [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 29 03:09:48 np0005539552 nova_compute[233724]: 2025-11-29 08:09:48.203 233728 DEBUG oslo_concurrency.lockutils [None req-23daf3d0-1aac-4e3c-b940-de7f321bc907 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.060s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:48.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:49.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:50 np0005539552 nova_compute[233724]: 2025-11-29 08:09:50.637 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:50 np0005539552 nova_compute[233724]: 2025-11-29 08:09:50.642 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:50.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 03:09:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:51.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 03:09:51 np0005539552 podman[264483]: 2025-11-29 08:09:51.724845916 +0000 UTC m=+0.052532289 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 03:09:51 np0005539552 podman[264482]: 2025-11-29 08:09:51.726675375 +0000 UTC m=+0.056485926 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3)
Nov 29 03:09:51 np0005539552 podman[264484]: 2025-11-29 08:09:51.800745855 +0000 UTC m=+0.124296487 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 29 03:09:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:52 np0005539552 nova_compute[233724]: 2025-11-29 08:09:52.752 233728 DEBUG oslo_concurrency.lockutils [None req-0d8bd6e5-10ba-4ba8-8c2f-41d06f0ad001 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "9a195147-a644-4352-b5a5-4e81a4ee7d4c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:52 np0005539552 nova_compute[233724]: 2025-11-29 08:09:52.752 233728 DEBUG oslo_concurrency.lockutils [None req-0d8bd6e5-10ba-4ba8-8c2f-41d06f0ad001 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "9a195147-a644-4352-b5a5-4e81a4ee7d4c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:52 np0005539552 nova_compute[233724]: 2025-11-29 08:09:52.753 233728 DEBUG oslo_concurrency.lockutils [None req-0d8bd6e5-10ba-4ba8-8c2f-41d06f0ad001 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "9a195147-a644-4352-b5a5-4e81a4ee7d4c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:52 np0005539552 nova_compute[233724]: 2025-11-29 08:09:52.753 233728 DEBUG oslo_concurrency.lockutils [None req-0d8bd6e5-10ba-4ba8-8c2f-41d06f0ad001 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "9a195147-a644-4352-b5a5-4e81a4ee7d4c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:52 np0005539552 nova_compute[233724]: 2025-11-29 08:09:52.754 233728 DEBUG oslo_concurrency.lockutils [None req-0d8bd6e5-10ba-4ba8-8c2f-41d06f0ad001 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "9a195147-a644-4352-b5a5-4e81a4ee7d4c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:52 np0005539552 nova_compute[233724]: 2025-11-29 08:09:52.755 233728 INFO nova.compute.manager [None req-0d8bd6e5-10ba-4ba8-8c2f-41d06f0ad001 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Terminating instance#033[00m
Nov 29 03:09:52 np0005539552 nova_compute[233724]: 2025-11-29 08:09:52.760 233728 DEBUG nova.compute.manager [None req-0d8bd6e5-10ba-4ba8-8c2f-41d06f0ad001 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:09:52 np0005539552 kernel: tap2c5188d4-f3 (unregistering): left promiscuous mode
Nov 29 03:09:52 np0005539552 NetworkManager[48926]: <info>  [1764403792.8001] device (tap2c5188d4-f3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:09:52 np0005539552 ovn_controller[133798]: 2025-11-29T08:09:52Z|00230|binding|INFO|Releasing lport 2c5188d4-f3c0-4374-9952-360d1ab07a47 from this chassis (sb_readonly=0)
Nov 29 03:09:52 np0005539552 nova_compute[233724]: 2025-11-29 08:09:52.808 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:52 np0005539552 ovn_controller[133798]: 2025-11-29T08:09:52Z|00231|binding|INFO|Setting lport 2c5188d4-f3c0-4374-9952-360d1ab07a47 down in Southbound
Nov 29 03:09:52 np0005539552 ovn_controller[133798]: 2025-11-29T08:09:52Z|00232|binding|INFO|Removing iface tap2c5188d4-f3 ovn-installed in OVS
Nov 29 03:09:52 np0005539552 nova_compute[233724]: 2025-11-29 08:09:52.811 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:52.815 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b6:6d:ff 10.100.0.10'], port_security=['fa:16:3e:b6:6d:ff 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '9a195147-a644-4352-b5a5-4e81a4ee7d4c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8384e5887c0948f5876c019d50057152', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'a1e9ed13-b0e1-45c0-9be6-be0f145466a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c0727149-3377-4d23-9d8d-0006462cd03e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=2c5188d4-f3c0-4374-9952-360d1ab07a47) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:09:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:52.816 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 2c5188d4-f3c0-4374-9952-360d1ab07a47 in datapath 65f88c5a-8801-4bc1-9eed-15e2bab4717d unbound from our chassis#033[00m
Nov 29 03:09:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:52.817 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 65f88c5a-8801-4bc1-9eed-15e2bab4717d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:09:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:52.818 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c407abd8-9166-415f-93c1-54829c97a193]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:52.819 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d namespace which is not needed anymore#033[00m
Nov 29 03:09:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:52.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:52 np0005539552 nova_compute[233724]: 2025-11-29 08:09:52.831 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:52 np0005539552 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d0000004b.scope: Deactivated successfully.
Nov 29 03:09:52 np0005539552 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d0000004b.scope: Consumed 5.302s CPU time.
Nov 29 03:09:52 np0005539552 systemd-machined[196379]: Machine qemu-27-instance-0000004b terminated.
Nov 29 03:09:52 np0005539552 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[264465]: [NOTICE]   (264469) : haproxy version is 2.8.14-c23fe91
Nov 29 03:09:52 np0005539552 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[264465]: [NOTICE]   (264469) : path to executable is /usr/sbin/haproxy
Nov 29 03:09:52 np0005539552 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[264465]: [WARNING]  (264469) : Exiting Master process...
Nov 29 03:09:52 np0005539552 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[264465]: [ALERT]    (264469) : Current worker (264471) exited with code 143 (Terminated)
Nov 29 03:09:52 np0005539552 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[264465]: [WARNING]  (264469) : All workers exited. Exiting... (0)
Nov 29 03:09:52 np0005539552 systemd[1]: libpod-fbbce2e717bd7f78134e15b21a24a2fa5fc8d98285c50c6855a635bec7ba11fc.scope: Deactivated successfully.
Nov 29 03:09:52 np0005539552 podman[264570]: 2025-11-29 08:09:52.948790725 +0000 UTC m=+0.045394027 container died fbbce2e717bd7f78134e15b21a24a2fa5fc8d98285c50c6855a635bec7ba11fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:09:52 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fbbce2e717bd7f78134e15b21a24a2fa5fc8d98285c50c6855a635bec7ba11fc-userdata-shm.mount: Deactivated successfully.
Nov 29 03:09:52 np0005539552 nova_compute[233724]: 2025-11-29 08:09:52.977 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:52 np0005539552 systemd[1]: var-lib-containers-storage-overlay-1c6fbdac96e178a378fce6d631885baa2c722f655ee9249edd61dd8e68355631-merged.mount: Deactivated successfully.
Nov 29 03:09:52 np0005539552 nova_compute[233724]: 2025-11-29 08:09:52.984 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:52 np0005539552 podman[264570]: 2025-11-29 08:09:52.990190333 +0000 UTC m=+0.086793615 container cleanup fbbce2e717bd7f78134e15b21a24a2fa5fc8d98285c50c6855a635bec7ba11fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:09:52 np0005539552 nova_compute[233724]: 2025-11-29 08:09:52.992 233728 INFO nova.virt.libvirt.driver [-] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Instance destroyed successfully.#033[00m
Nov 29 03:09:52 np0005539552 nova_compute[233724]: 2025-11-29 08:09:52.993 233728 DEBUG nova.objects.instance [None req-0d8bd6e5-10ba-4ba8-8c2f-41d06f0ad001 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lazy-loading 'resources' on Instance uuid 9a195147-a644-4352-b5a5-4e81a4ee7d4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:09:52 np0005539552 systemd[1]: libpod-conmon-fbbce2e717bd7f78134e15b21a24a2fa5fc8d98285c50c6855a635bec7ba11fc.scope: Deactivated successfully.
Nov 29 03:09:53 np0005539552 nova_compute[233724]: 2025-11-29 08:09:53.020 233728 DEBUG nova.virt.libvirt.vif [None req-0d8bd6e5-10ba-4ba8-8c2f-41d06f0ad001 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:09:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-688009273',display_name='tempest-ServerDiskConfigTestJSON-server-688009273',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-688009273',id=75,image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:09:48Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8384e5887c0948f5876c019d50057152',ramdisk_id='',reservation_id='r-xcthga44',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_v
if_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-767135984',owner_user_name='tempest-ServerDiskConfigTestJSON-767135984-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:09:48Z,user_data=None,user_id='9ab0114aca6149af994da2b9052c1368',uuid=9a195147-a644-4352-b5a5-4e81a4ee7d4c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2c5188d4-f3c0-4374-9952-360d1ab07a47", "address": "fa:16:3e:b6:6d:ff", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c5188d4-f3", "ovs_interfaceid": "2c5188d4-f3c0-4374-9952-360d1ab07a47", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:09:53 np0005539552 nova_compute[233724]: 2025-11-29 08:09:53.021 233728 DEBUG nova.network.os_vif_util [None req-0d8bd6e5-10ba-4ba8-8c2f-41d06f0ad001 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converting VIF {"id": "2c5188d4-f3c0-4374-9952-360d1ab07a47", "address": "fa:16:3e:b6:6d:ff", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c5188d4-f3", "ovs_interfaceid": "2c5188d4-f3c0-4374-9952-360d1ab07a47", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:09:53 np0005539552 nova_compute[233724]: 2025-11-29 08:09:53.022 233728 DEBUG nova.network.os_vif_util [None req-0d8bd6e5-10ba-4ba8-8c2f-41d06f0ad001 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b6:6d:ff,bridge_name='br-int',has_traffic_filtering=True,id=2c5188d4-f3c0-4374-9952-360d1ab07a47,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c5188d4-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:09:53 np0005539552 nova_compute[233724]: 2025-11-29 08:09:53.023 233728 DEBUG os_vif [None req-0d8bd6e5-10ba-4ba8-8c2f-41d06f0ad001 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b6:6d:ff,bridge_name='br-int',has_traffic_filtering=True,id=2c5188d4-f3c0-4374-9952-360d1ab07a47,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c5188d4-f3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:09:53 np0005539552 nova_compute[233724]: 2025-11-29 08:09:53.024 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:53 np0005539552 nova_compute[233724]: 2025-11-29 08:09:53.025 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2c5188d4-f3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:53 np0005539552 nova_compute[233724]: 2025-11-29 08:09:53.028 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:53 np0005539552 nova_compute[233724]: 2025-11-29 08:09:53.030 233728 INFO os_vif [None req-0d8bd6e5-10ba-4ba8-8c2f-41d06f0ad001 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b6:6d:ff,bridge_name='br-int',has_traffic_filtering=True,id=2c5188d4-f3c0-4374-9952-360d1ab07a47,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c5188d4-f3')#033[00m
Nov 29 03:09:53 np0005539552 podman[264631]: 2025-11-29 08:09:53.062776572 +0000 UTC m=+0.051128921 container remove fbbce2e717bd7f78134e15b21a24a2fa5fc8d98285c50c6855a635bec7ba11fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:09:53 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:53.068 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2fab7318-f47a-4055-9879-677b5de735ad]: (4, ('Sat Nov 29 08:09:52 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d (fbbce2e717bd7f78134e15b21a24a2fa5fc8d98285c50c6855a635bec7ba11fc)\nfbbce2e717bd7f78134e15b21a24a2fa5fc8d98285c50c6855a635bec7ba11fc\nSat Nov 29 08:09:52 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d (fbbce2e717bd7f78134e15b21a24a2fa5fc8d98285c50c6855a635bec7ba11fc)\nfbbce2e717bd7f78134e15b21a24a2fa5fc8d98285c50c6855a635bec7ba11fc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:53 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:53.070 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[bc7c2e46-5bca-4638-ba1b-119d9bb86ba1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:53 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:53.071 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap65f88c5a-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:53 np0005539552 nova_compute[233724]: 2025-11-29 08:09:53.072 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:53 np0005539552 kernel: tap65f88c5a-80: left promiscuous mode
Nov 29 03:09:53 np0005539552 nova_compute[233724]: 2025-11-29 08:09:53.087 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:53 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:53.089 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e4440f50-0576-43c2-95cd-3a52e98fd53b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:53 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:53.101 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b825640a-9e18-4706-acfe-3d7c4148319f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:53 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:53.102 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[519c1ea8-9a95-4d66-bbd2-07eeddd42e42]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:53 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:53.118 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[70b633a1-043c-407a-be85-282846e40735]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 683253, 'reachable_time': 15967, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264688, 'error': None, 'target': 'ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:53 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:53.120 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:09:53 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:09:53.120 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[8613bb50-18e2-4180-8fc8-ca9df8e71cb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:53 np0005539552 systemd[1]: run-netns-ovnmeta\x2d65f88c5a\x2d8801\x2d4bc1\x2d9eed\x2d15e2bab4717d.mount: Deactivated successfully.
Nov 29 03:09:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:53.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:53 np0005539552 nova_compute[233724]: 2025-11-29 08:09:53.420 233728 INFO nova.virt.libvirt.driver [None req-0d8bd6e5-10ba-4ba8-8c2f-41d06f0ad001 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Deleting instance files /var/lib/nova/instances/9a195147-a644-4352-b5a5-4e81a4ee7d4c_del#033[00m
Nov 29 03:09:53 np0005539552 nova_compute[233724]: 2025-11-29 08:09:53.422 233728 INFO nova.virt.libvirt.driver [None req-0d8bd6e5-10ba-4ba8-8c2f-41d06f0ad001 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Deletion of /var/lib/nova/instances/9a195147-a644-4352-b5a5-4e81a4ee7d4c_del complete#033[00m
Nov 29 03:09:53 np0005539552 nova_compute[233724]: 2025-11-29 08:09:53.520 233728 INFO nova.compute.manager [None req-0d8bd6e5-10ba-4ba8-8c2f-41d06f0ad001 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Took 0.76 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:09:53 np0005539552 nova_compute[233724]: 2025-11-29 08:09:53.521 233728 DEBUG oslo.service.loopingcall [None req-0d8bd6e5-10ba-4ba8-8c2f-41d06f0ad001 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:09:53 np0005539552 nova_compute[233724]: 2025-11-29 08:09:53.521 233728 DEBUG nova.compute.manager [-] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:09:53 np0005539552 nova_compute[233724]: 2025-11-29 08:09:53.521 233728 DEBUG nova.network.neutron [-] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:09:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:09:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:54.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:09:55 np0005539552 nova_compute[233724]: 2025-11-29 08:09:55.015 233728 DEBUG nova.compute.manager [req-b3e93b0c-bb33-4045-b9d5-ba2375f7640d req-4344a174-e8ee-4aa2-9b94-026d621c6405 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Received event network-vif-unplugged-2c5188d4-f3c0-4374-9952-360d1ab07a47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:55 np0005539552 nova_compute[233724]: 2025-11-29 08:09:55.015 233728 DEBUG oslo_concurrency.lockutils [req-b3e93b0c-bb33-4045-b9d5-ba2375f7640d req-4344a174-e8ee-4aa2-9b94-026d621c6405 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9a195147-a644-4352-b5a5-4e81a4ee7d4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:55 np0005539552 nova_compute[233724]: 2025-11-29 08:09:55.016 233728 DEBUG oslo_concurrency.lockutils [req-b3e93b0c-bb33-4045-b9d5-ba2375f7640d req-4344a174-e8ee-4aa2-9b94-026d621c6405 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9a195147-a644-4352-b5a5-4e81a4ee7d4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:55 np0005539552 nova_compute[233724]: 2025-11-29 08:09:55.016 233728 DEBUG oslo_concurrency.lockutils [req-b3e93b0c-bb33-4045-b9d5-ba2375f7640d req-4344a174-e8ee-4aa2-9b94-026d621c6405 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9a195147-a644-4352-b5a5-4e81a4ee7d4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:55 np0005539552 nova_compute[233724]: 2025-11-29 08:09:55.017 233728 DEBUG nova.compute.manager [req-b3e93b0c-bb33-4045-b9d5-ba2375f7640d req-4344a174-e8ee-4aa2-9b94-026d621c6405 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] No waiting events found dispatching network-vif-unplugged-2c5188d4-f3c0-4374-9952-360d1ab07a47 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:09:55 np0005539552 nova_compute[233724]: 2025-11-29 08:09:55.017 233728 DEBUG nova.compute.manager [req-b3e93b0c-bb33-4045-b9d5-ba2375f7640d req-4344a174-e8ee-4aa2-9b94-026d621c6405 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Received event network-vif-unplugged-2c5188d4-f3c0-4374-9952-360d1ab07a47 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:09:55 np0005539552 nova_compute[233724]: 2025-11-29 08:09:55.018 233728 DEBUG nova.compute.manager [req-b3e93b0c-bb33-4045-b9d5-ba2375f7640d req-4344a174-e8ee-4aa2-9b94-026d621c6405 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Received event network-vif-plugged-2c5188d4-f3c0-4374-9952-360d1ab07a47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:55 np0005539552 nova_compute[233724]: 2025-11-29 08:09:55.018 233728 DEBUG oslo_concurrency.lockutils [req-b3e93b0c-bb33-4045-b9d5-ba2375f7640d req-4344a174-e8ee-4aa2-9b94-026d621c6405 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9a195147-a644-4352-b5a5-4e81a4ee7d4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:55 np0005539552 nova_compute[233724]: 2025-11-29 08:09:55.019 233728 DEBUG oslo_concurrency.lockutils [req-b3e93b0c-bb33-4045-b9d5-ba2375f7640d req-4344a174-e8ee-4aa2-9b94-026d621c6405 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9a195147-a644-4352-b5a5-4e81a4ee7d4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:55 np0005539552 nova_compute[233724]: 2025-11-29 08:09:55.019 233728 DEBUG oslo_concurrency.lockutils [req-b3e93b0c-bb33-4045-b9d5-ba2375f7640d req-4344a174-e8ee-4aa2-9b94-026d621c6405 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9a195147-a644-4352-b5a5-4e81a4ee7d4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:55 np0005539552 nova_compute[233724]: 2025-11-29 08:09:55.020 233728 DEBUG nova.compute.manager [req-b3e93b0c-bb33-4045-b9d5-ba2375f7640d req-4344a174-e8ee-4aa2-9b94-026d621c6405 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] No waiting events found dispatching network-vif-plugged-2c5188d4-f3c0-4374-9952-360d1ab07a47 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:09:55 np0005539552 nova_compute[233724]: 2025-11-29 08:09:55.021 233728 WARNING nova.compute.manager [req-b3e93b0c-bb33-4045-b9d5-ba2375f7640d req-4344a174-e8ee-4aa2-9b94-026d621c6405 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Received unexpected event network-vif-plugged-2c5188d4-f3c0-4374-9952-360d1ab07a47 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:09:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:55.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:55 np0005539552 nova_compute[233724]: 2025-11-29 08:09:55.293 233728 DEBUG nova.network.neutron [-] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:09:55 np0005539552 nova_compute[233724]: 2025-11-29 08:09:55.593 233728 INFO nova.compute.manager [-] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Took 2.07 seconds to deallocate network for instance.#033[00m
Nov 29 03:09:55 np0005539552 nova_compute[233724]: 2025-11-29 08:09:55.604 233728 DEBUG nova.compute.manager [req-a7b02b78-18da-44ec-9502-5569d6c877ce req-b25dcba4-bbbe-4cd7-b937-da622f671ed7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Received event network-vif-deleted-2c5188d4-f3c0-4374-9952-360d1ab07a47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:55 np0005539552 nova_compute[233724]: 2025-11-29 08:09:55.638 233728 DEBUG oslo_concurrency.lockutils [None req-0d8bd6e5-10ba-4ba8-8c2f-41d06f0ad001 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:55 np0005539552 nova_compute[233724]: 2025-11-29 08:09:55.639 233728 DEBUG oslo_concurrency.lockutils [None req-0d8bd6e5-10ba-4ba8-8c2f-41d06f0ad001 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:55 np0005539552 nova_compute[233724]: 2025-11-29 08:09:55.639 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:55 np0005539552 nova_compute[233724]: 2025-11-29 08:09:55.690 233728 DEBUG oslo_concurrency.processutils [None req-0d8bd6e5-10ba-4ba8-8c2f-41d06f0ad001 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:56 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:09:56 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2603517471' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:09:56 np0005539552 nova_compute[233724]: 2025-11-29 08:09:56.123 233728 DEBUG oslo_concurrency.processutils [None req-0d8bd6e5-10ba-4ba8-8c2f-41d06f0ad001 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:56 np0005539552 nova_compute[233724]: 2025-11-29 08:09:56.129 233728 DEBUG nova.compute.provider_tree [None req-0d8bd6e5-10ba-4ba8-8c2f-41d06f0ad001 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:09:56 np0005539552 nova_compute[233724]: 2025-11-29 08:09:56.144 233728 DEBUG nova.scheduler.client.report [None req-0d8bd6e5-10ba-4ba8-8c2f-41d06f0ad001 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:09:56 np0005539552 nova_compute[233724]: 2025-11-29 08:09:56.165 233728 DEBUG oslo_concurrency.lockutils [None req-0d8bd6e5-10ba-4ba8-8c2f-41d06f0ad001 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.526s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:56 np0005539552 nova_compute[233724]: 2025-11-29 08:09:56.193 233728 INFO nova.scheduler.client.report [None req-0d8bd6e5-10ba-4ba8-8c2f-41d06f0ad001 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Deleted allocations for instance 9a195147-a644-4352-b5a5-4e81a4ee7d4c#033[00m
Nov 29 03:09:56 np0005539552 nova_compute[233724]: 2025-11-29 08:09:56.269 233728 DEBUG oslo_concurrency.lockutils [None req-0d8bd6e5-10ba-4ba8-8c2f-41d06f0ad001 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "9a195147-a644-4352-b5a5-4e81a4ee7d4c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.517s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:56.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:09:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:57.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:09:57 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:58 np0005539552 nova_compute[233724]: 2025-11-29 08:09:58.029 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:58.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:09:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:59.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:00 np0005539552 ceph-mon[77121]: overall HEALTH_OK
Nov 29 03:10:00 np0005539552 nova_compute[233724]: 2025-11-29 08:10:00.642 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:00.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:01.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:02.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:03 np0005539552 nova_compute[233724]: 2025-11-29 08:10:03.065 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:10:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:03.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:10:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:04.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:05.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:05 np0005539552 nova_compute[233724]: 2025-11-29 08:10:05.646 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:06.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:06.942 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:10:06 np0005539552 nova_compute[233724]: 2025-11-29 08:10:06.943 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:06.943 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:10:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:07.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:07 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:07 np0005539552 nova_compute[233724]: 2025-11-29 08:10:07.990 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403792.9875581, 9a195147-a644-4352-b5a5-4e81a4ee7d4c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:10:07 np0005539552 nova_compute[233724]: 2025-11-29 08:10:07.990 233728 INFO nova.compute.manager [-] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:10:08 np0005539552 nova_compute[233724]: 2025-11-29 08:10:08.017 233728 DEBUG nova.compute.manager [None req-a072b512-6d62-4790-b61a-7e07025e194e - - - - - -] [instance: 9a195147-a644-4352-b5a5-4e81a4ee7d4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:10:08 np0005539552 nova_compute[233724]: 2025-11-29 08:10:08.070 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:08.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:08.946 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:10:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:09.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:10:10 np0005539552 nova_compute[233724]: 2025-11-29 08:10:10.648 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:10.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:11.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:12.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:13 np0005539552 nova_compute[233724]: 2025-11-29 08:10:13.071 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:13.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:14 np0005539552 nova_compute[233724]: 2025-11-29 08:10:14.164 233728 DEBUG nova.compute.manager [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Nov 29 03:10:14 np0005539552 nova_compute[233724]: 2025-11-29 08:10:14.412 233728 DEBUG oslo_concurrency.lockutils [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:14 np0005539552 nova_compute[233724]: 2025-11-29 08:10:14.413 233728 DEBUG oslo_concurrency.lockutils [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:14 np0005539552 nova_compute[233724]: 2025-11-29 08:10:14.431 233728 DEBUG nova.objects.instance [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lazy-loading 'pci_requests' on Instance uuid b373b176-ee91-41a8-a80a-96c957639455 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:10:14 np0005539552 nova_compute[233724]: 2025-11-29 08:10:14.445 233728 DEBUG nova.virt.hardware [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:10:14 np0005539552 nova_compute[233724]: 2025-11-29 08:10:14.445 233728 INFO nova.compute.claims [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:10:14 np0005539552 nova_compute[233724]: 2025-11-29 08:10:14.446 233728 DEBUG nova.objects.instance [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lazy-loading 'resources' on Instance uuid b373b176-ee91-41a8-a80a-96c957639455 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:10:14 np0005539552 nova_compute[233724]: 2025-11-29 08:10:14.456 233728 DEBUG nova.objects.instance [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lazy-loading 'pci_devices' on Instance uuid b373b176-ee91-41a8-a80a-96c957639455 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:10:14 np0005539552 nova_compute[233724]: 2025-11-29 08:10:14.494 233728 INFO nova.compute.resource_tracker [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Updating resource usage from migration ab5c62e7-ebd8-47af-8bc8-b3e6db416399#033[00m
Nov 29 03:10:14 np0005539552 nova_compute[233724]: 2025-11-29 08:10:14.494 233728 DEBUG nova.compute.resource_tracker [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Starting to track incoming migration ab5c62e7-ebd8-47af-8bc8-b3e6db416399 with flavor 709b029f-0458-4e40-a6ee-e1e02b48c06c _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Nov 29 03:10:14 np0005539552 nova_compute[233724]: 2025-11-29 08:10:14.614 233728 DEBUG oslo_concurrency.processutils [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:10:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 03:10:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:14.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 03:10:15 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:10:15 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1174972165' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:10:15 np0005539552 nova_compute[233724]: 2025-11-29 08:10:15.036 233728 DEBUG oslo_concurrency.processutils [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:10:15 np0005539552 nova_compute[233724]: 2025-11-29 08:10:15.044 233728 DEBUG nova.compute.provider_tree [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:10:15 np0005539552 nova_compute[233724]: 2025-11-29 08:10:15.066 233728 DEBUG nova.scheduler.client.report [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:10:15 np0005539552 nova_compute[233724]: 2025-11-29 08:10:15.112 233728 DEBUG oslo_concurrency.lockutils [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:15 np0005539552 nova_compute[233724]: 2025-11-29 08:10:15.113 233728 INFO nova.compute.manager [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Migrating#033[00m
Nov 29 03:10:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:15.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:15 np0005539552 nova_compute[233724]: 2025-11-29 08:10:15.651 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:16.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:10:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:17.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:10:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:17 np0005539552 systemd-logind[788]: New session 59 of user nova.
Nov 29 03:10:17 np0005539552 systemd[1]: Created slice User Slice of UID 42436.
Nov 29 03:10:17 np0005539552 systemd[1]: Starting User Runtime Directory /run/user/42436...
Nov 29 03:10:17 np0005539552 systemd[1]: Finished User Runtime Directory /run/user/42436.
Nov 29 03:10:17 np0005539552 systemd[1]: Starting User Manager for UID 42436...
Nov 29 03:10:17 np0005539552 systemd[264805]: Queued start job for default target Main User Target.
Nov 29 03:10:17 np0005539552 systemd[264805]: Created slice User Application Slice.
Nov 29 03:10:17 np0005539552 systemd[264805]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 03:10:17 np0005539552 systemd[264805]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 03:10:17 np0005539552 systemd[264805]: Reached target Paths.
Nov 29 03:10:17 np0005539552 systemd[264805]: Reached target Timers.
Nov 29 03:10:17 np0005539552 systemd[264805]: Starting D-Bus User Message Bus Socket...
Nov 29 03:10:17 np0005539552 systemd[264805]: Starting Create User's Volatile Files and Directories...
Nov 29 03:10:17 np0005539552 systemd[264805]: Listening on D-Bus User Message Bus Socket.
Nov 29 03:10:17 np0005539552 systemd[264805]: Reached target Sockets.
Nov 29 03:10:17 np0005539552 systemd[264805]: Finished Create User's Volatile Files and Directories.
Nov 29 03:10:17 np0005539552 systemd[264805]: Reached target Basic System.
Nov 29 03:10:17 np0005539552 systemd[1]: Started User Manager for UID 42436.
Nov 29 03:10:17 np0005539552 systemd[264805]: Reached target Main User Target.
Nov 29 03:10:17 np0005539552 systemd[264805]: Startup finished in 119ms.
Nov 29 03:10:17 np0005539552 systemd[1]: Started Session 59 of User nova.
Nov 29 03:10:17 np0005539552 systemd[1]: session-59.scope: Deactivated successfully.
Nov 29 03:10:17 np0005539552 systemd-logind[788]: Session 59 logged out. Waiting for processes to exit.
Nov 29 03:10:17 np0005539552 systemd-logind[788]: Removed session 59.
Nov 29 03:10:17 np0005539552 systemd-logind[788]: New session 61 of user nova.
Nov 29 03:10:17 np0005539552 systemd[1]: Started Session 61 of User nova.
Nov 29 03:10:17 np0005539552 systemd[1]: session-61.scope: Deactivated successfully.
Nov 29 03:10:17 np0005539552 systemd-logind[788]: Session 61 logged out. Waiting for processes to exit.
Nov 29 03:10:17 np0005539552 systemd-logind[788]: Removed session 61.
Nov 29 03:10:18 np0005539552 nova_compute[233724]: 2025-11-29 08:10:18.075 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:18.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:19 np0005539552 nova_compute[233724]: 2025-11-29 08:10:19.034 233728 DEBUG oslo_concurrency.lockutils [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Acquiring lock "6cfc4165-90ce-407e-8236-34f2147deb51" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:19 np0005539552 nova_compute[233724]: 2025-11-29 08:10:19.034 233728 DEBUG oslo_concurrency.lockutils [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Lock "6cfc4165-90ce-407e-8236-34f2147deb51" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:19 np0005539552 nova_compute[233724]: 2025-11-29 08:10:19.048 233728 DEBUG nova.compute.manager [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:10:19 np0005539552 nova_compute[233724]: 2025-11-29 08:10:19.152 233728 DEBUG oslo_concurrency.lockutils [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:19 np0005539552 nova_compute[233724]: 2025-11-29 08:10:19.152 233728 DEBUG oslo_concurrency.lockutils [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:19 np0005539552 nova_compute[233724]: 2025-11-29 08:10:19.158 233728 DEBUG nova.virt.hardware [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:10:19 np0005539552 nova_compute[233724]: 2025-11-29 08:10:19.159 233728 INFO nova.compute.claims [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:10:19 np0005539552 nova_compute[233724]: 2025-11-29 08:10:19.276 233728 DEBUG oslo_concurrency.processutils [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:10:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:19.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:19 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:10:19 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1428153136' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:10:19 np0005539552 nova_compute[233724]: 2025-11-29 08:10:19.702 233728 DEBUG oslo_concurrency.processutils [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:10:19 np0005539552 nova_compute[233724]: 2025-11-29 08:10:19.708 233728 DEBUG nova.compute.provider_tree [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:10:19 np0005539552 nova_compute[233724]: 2025-11-29 08:10:19.724 233728 DEBUG nova.scheduler.client.report [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:10:19 np0005539552 nova_compute[233724]: 2025-11-29 08:10:19.745 233728 DEBUG oslo_concurrency.lockutils [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:19 np0005539552 nova_compute[233724]: 2025-11-29 08:10:19.746 233728 DEBUG nova.compute.manager [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:10:19 np0005539552 nova_compute[233724]: 2025-11-29 08:10:19.784 233728 DEBUG nova.compute.manager [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:10:19 np0005539552 nova_compute[233724]: 2025-11-29 08:10:19.785 233728 DEBUG nova.network.neutron [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:10:19 np0005539552 nova_compute[233724]: 2025-11-29 08:10:19.802 233728 INFO nova.virt.libvirt.driver [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:10:19 np0005539552 nova_compute[233724]: 2025-11-29 08:10:19.819 233728 DEBUG nova.compute.manager [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:10:19 np0005539552 nova_compute[233724]: 2025-11-29 08:10:19.917 233728 DEBUG nova.compute.manager [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:10:19 np0005539552 nova_compute[233724]: 2025-11-29 08:10:19.919 233728 DEBUG nova.virt.libvirt.driver [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:10:19 np0005539552 nova_compute[233724]: 2025-11-29 08:10:19.919 233728 INFO nova.virt.libvirt.driver [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Creating image(s)#033[00m
Nov 29 03:10:19 np0005539552 nova_compute[233724]: 2025-11-29 08:10:19.947 233728 DEBUG nova.storage.rbd_utils [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] rbd image 6cfc4165-90ce-407e-8236-34f2147deb51_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:10:19 np0005539552 nova_compute[233724]: 2025-11-29 08:10:19.973 233728 DEBUG nova.storage.rbd_utils [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] rbd image 6cfc4165-90ce-407e-8236-34f2147deb51_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:10:19 np0005539552 nova_compute[233724]: 2025-11-29 08:10:19.997 233728 DEBUG nova.storage.rbd_utils [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] rbd image 6cfc4165-90ce-407e-8236-34f2147deb51_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:10:20 np0005539552 nova_compute[233724]: 2025-11-29 08:10:20.001 233728 DEBUG oslo_concurrency.processutils [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:10:20 np0005539552 nova_compute[233724]: 2025-11-29 08:10:20.061 233728 DEBUG oslo_concurrency.processutils [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:10:20 np0005539552 nova_compute[233724]: 2025-11-29 08:10:20.063 233728 DEBUG oslo_concurrency.lockutils [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:20 np0005539552 nova_compute[233724]: 2025-11-29 08:10:20.064 233728 DEBUG oslo_concurrency.lockutils [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:20 np0005539552 nova_compute[233724]: 2025-11-29 08:10:20.064 233728 DEBUG oslo_concurrency.lockutils [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:20 np0005539552 nova_compute[233724]: 2025-11-29 08:10:20.096 233728 DEBUG nova.storage.rbd_utils [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] rbd image 6cfc4165-90ce-407e-8236-34f2147deb51_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:10:20 np0005539552 nova_compute[233724]: 2025-11-29 08:10:20.100 233728 DEBUG oslo_concurrency.processutils [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 6cfc4165-90ce-407e-8236-34f2147deb51_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:10:20 np0005539552 nova_compute[233724]: 2025-11-29 08:10:20.392 233728 DEBUG oslo_concurrency.processutils [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 6cfc4165-90ce-407e-8236-34f2147deb51_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.292s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:10:20 np0005539552 nova_compute[233724]: 2025-11-29 08:10:20.449 233728 DEBUG nova.storage.rbd_utils [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] resizing rbd image 6cfc4165-90ce-407e-8236-34f2147deb51_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:10:20 np0005539552 nova_compute[233724]: 2025-11-29 08:10:20.540 233728 DEBUG nova.objects.instance [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Lazy-loading 'migration_context' on Instance uuid 6cfc4165-90ce-407e-8236-34f2147deb51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:10:20 np0005539552 nova_compute[233724]: 2025-11-29 08:10:20.553 233728 DEBUG nova.virt.libvirt.driver [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:10:20 np0005539552 nova_compute[233724]: 2025-11-29 08:10:20.553 233728 DEBUG nova.virt.libvirt.driver [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Ensure instance console log exists: /var/lib/nova/instances/6cfc4165-90ce-407e-8236-34f2147deb51/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:10:20 np0005539552 nova_compute[233724]: 2025-11-29 08:10:20.554 233728 DEBUG oslo_concurrency.lockutils [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:20 np0005539552 nova_compute[233724]: 2025-11-29 08:10:20.554 233728 DEBUG oslo_concurrency.lockutils [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:20 np0005539552 nova_compute[233724]: 2025-11-29 08:10:20.554 233728 DEBUG oslo_concurrency.lockutils [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:20.618 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:20.619 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:20.619 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:20 np0005539552 nova_compute[233724]: 2025-11-29 08:10:20.653 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:20 np0005539552 nova_compute[233724]: 2025-11-29 08:10:20.660 233728 DEBUG nova.policy [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'aa730ff43688414fbafb9fb85e566a1a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '01e87a17aae64e93bdb507d58a515a3f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:10:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:20.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:20 np0005539552 nova_compute[233724]: 2025-11-29 08:10:20.944 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:10:20 np0005539552 nova_compute[233724]: 2025-11-29 08:10:20.972 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:20 np0005539552 nova_compute[233724]: 2025-11-29 08:10:20.973 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:20 np0005539552 nova_compute[233724]: 2025-11-29 08:10:20.974 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:20 np0005539552 nova_compute[233724]: 2025-11-29 08:10:20.974 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:10:20 np0005539552 nova_compute[233724]: 2025-11-29 08:10:20.975 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:10:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:21.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:21 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:10:21 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/590581273' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:10:21 np0005539552 nova_compute[233724]: 2025-11-29 08:10:21.462 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:10:21 np0005539552 nova_compute[233724]: 2025-11-29 08:10:21.638 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:10:21 np0005539552 nova_compute[233724]: 2025-11-29 08:10:21.640 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4557MB free_disk=20.967201232910156GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:10:21 np0005539552 nova_compute[233724]: 2025-11-29 08:10:21.641 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:21 np0005539552 nova_compute[233724]: 2025-11-29 08:10:21.641 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:21 np0005539552 nova_compute[233724]: 2025-11-29 08:10:21.697 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Migration for instance b373b176-ee91-41a8-a80a-96c957639455 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Nov 29 03:10:21 np0005539552 nova_compute[233724]: 2025-11-29 08:10:21.730 233728 INFO nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: b373b176-ee91-41a8-a80a-96c957639455] Updating resource usage from migration ab5c62e7-ebd8-47af-8bc8-b3e6db416399#033[00m
Nov 29 03:10:21 np0005539552 nova_compute[233724]: 2025-11-29 08:10:21.730 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: b373b176-ee91-41a8-a80a-96c957639455] Starting to track incoming migration ab5c62e7-ebd8-47af-8bc8-b3e6db416399 with flavor 709b029f-0458-4e40-a6ee-e1e02b48c06c _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Nov 29 03:10:21 np0005539552 nova_compute[233724]: 2025-11-29 08:10:21.784 233728 WARNING nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance b373b176-ee91-41a8-a80a-96c957639455 has been moved to another host compute-0.ctlplane.example.com(compute-0.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}.#033[00m
Nov 29 03:10:21 np0005539552 nova_compute[233724]: 2025-11-29 08:10:21.784 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 6cfc4165-90ce-407e-8236-34f2147deb51 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:10:21 np0005539552 nova_compute[233724]: 2025-11-29 08:10:21.784 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:10:21 np0005539552 nova_compute[233724]: 2025-11-29 08:10:21.785 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=832MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:10:21 np0005539552 nova_compute[233724]: 2025-11-29 08:10:21.841 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:10:21 np0005539552 podman[265115]: 2025-11-29 08:10:21.85836041 +0000 UTC m=+0.060072613 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 03:10:21 np0005539552 podman[265114]: 2025-11-29 08:10:21.885728669 +0000 UTC m=+0.087437861 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd)
Nov 29 03:10:21 np0005539552 podman[265179]: 2025-11-29 08:10:21.961562866 +0000 UTC m=+0.071690026 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 29 03:10:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:10:22 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3352103795' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:10:22 np0005539552 nova_compute[233724]: 2025-11-29 08:10:22.288 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:10:22 np0005539552 nova_compute[233724]: 2025-11-29 08:10:22.294 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:10:22 np0005539552 nova_compute[233724]: 2025-11-29 08:10:22.309 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:10:22 np0005539552 nova_compute[233724]: 2025-11-29 08:10:22.346 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:10:22 np0005539552 nova_compute[233724]: 2025-11-29 08:10:22.346 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:22 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:10:22 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:10:22 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:10:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:10:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:22.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:10:23 np0005539552 nova_compute[233724]: 2025-11-29 08:10:23.078 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:23 np0005539552 nova_compute[233724]: 2025-11-29 08:10:23.300 233728 DEBUG nova.network.neutron [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Successfully created port: 316eb9f1-38ba-4e49-8475-61d32e3ecc4f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:10:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:23.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:23 np0005539552 nova_compute[233724]: 2025-11-29 08:10:23.325 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:10:23 np0005539552 nova_compute[233724]: 2025-11-29 08:10:23.326 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:10:23 np0005539552 nova_compute[233724]: 2025-11-29 08:10:23.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:10:23 np0005539552 nova_compute[233724]: 2025-11-29 08:10:23.923 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:10:24 np0005539552 nova_compute[233724]: 2025-11-29 08:10:24.218 233728 DEBUG nova.network.neutron [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Successfully updated port: 316eb9f1-38ba-4e49-8475-61d32e3ecc4f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:10:24 np0005539552 nova_compute[233724]: 2025-11-29 08:10:24.231 233728 DEBUG oslo_concurrency.lockutils [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Acquiring lock "refresh_cache-6cfc4165-90ce-407e-8236-34f2147deb51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:10:24 np0005539552 nova_compute[233724]: 2025-11-29 08:10:24.232 233728 DEBUG oslo_concurrency.lockutils [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Acquired lock "refresh_cache-6cfc4165-90ce-407e-8236-34f2147deb51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:10:24 np0005539552 nova_compute[233724]: 2025-11-29 08:10:24.232 233728 DEBUG nova.network.neutron [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:10:24 np0005539552 nova_compute[233724]: 2025-11-29 08:10:24.308 233728 DEBUG nova.compute.manager [req-78bdb5af-da1f-46e7-affc-8b06dcd612de req-c522f4df-e408-445d-a2ef-2289afa22f9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Received event network-changed-316eb9f1-38ba-4e49-8475-61d32e3ecc4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:24 np0005539552 nova_compute[233724]: 2025-11-29 08:10:24.309 233728 DEBUG nova.compute.manager [req-78bdb5af-da1f-46e7-affc-8b06dcd612de req-c522f4df-e408-445d-a2ef-2289afa22f9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Refreshing instance network info cache due to event network-changed-316eb9f1-38ba-4e49-8475-61d32e3ecc4f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:10:24 np0005539552 nova_compute[233724]: 2025-11-29 08:10:24.309 233728 DEBUG oslo_concurrency.lockutils [req-78bdb5af-da1f-46e7-affc-8b06dcd612de req-c522f4df-e408-445d-a2ef-2289afa22f9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-6cfc4165-90ce-407e-8236-34f2147deb51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:10:24 np0005539552 nova_compute[233724]: 2025-11-29 08:10:24.402 233728 DEBUG nova.network.neutron [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:10:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:24.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:24 np0005539552 nova_compute[233724]: 2025-11-29 08:10:24.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:10:25 np0005539552 nova_compute[233724]: 2025-11-29 08:10:25.214 233728 DEBUG nova.network.neutron [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Updating instance_info_cache with network_info: [{"id": "316eb9f1-38ba-4e49-8475-61d32e3ecc4f", "address": "fa:16:3e:8a:9b:b1", "network": {"id": "63a8b230-0938-49d5-bd27-d250421cd1a2", "bridge": "br-int", "label": "tempest-ServersTestJSON-149901082-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "01e87a17aae64e93bdb507d58a515a3f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap316eb9f1-38", "ovs_interfaceid": "316eb9f1-38ba-4e49-8475-61d32e3ecc4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:10:25 np0005539552 nova_compute[233724]: 2025-11-29 08:10:25.234 233728 DEBUG oslo_concurrency.lockutils [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Releasing lock "refresh_cache-6cfc4165-90ce-407e-8236-34f2147deb51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:10:25 np0005539552 nova_compute[233724]: 2025-11-29 08:10:25.235 233728 DEBUG nova.compute.manager [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Instance network_info: |[{"id": "316eb9f1-38ba-4e49-8475-61d32e3ecc4f", "address": "fa:16:3e:8a:9b:b1", "network": {"id": "63a8b230-0938-49d5-bd27-d250421cd1a2", "bridge": "br-int", "label": "tempest-ServersTestJSON-149901082-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "01e87a17aae64e93bdb507d58a515a3f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap316eb9f1-38", "ovs_interfaceid": "316eb9f1-38ba-4e49-8475-61d32e3ecc4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:10:25 np0005539552 nova_compute[233724]: 2025-11-29 08:10:25.235 233728 DEBUG oslo_concurrency.lockutils [req-78bdb5af-da1f-46e7-affc-8b06dcd612de req-c522f4df-e408-445d-a2ef-2289afa22f9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-6cfc4165-90ce-407e-8236-34f2147deb51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:10:25 np0005539552 nova_compute[233724]: 2025-11-29 08:10:25.236 233728 DEBUG nova.network.neutron [req-78bdb5af-da1f-46e7-affc-8b06dcd612de req-c522f4df-e408-445d-a2ef-2289afa22f9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Refreshing network info cache for port 316eb9f1-38ba-4e49-8475-61d32e3ecc4f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:10:25 np0005539552 nova_compute[233724]: 2025-11-29 08:10:25.240 233728 DEBUG nova.virt.libvirt.driver [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Start _get_guest_xml network_info=[{"id": "316eb9f1-38ba-4e49-8475-61d32e3ecc4f", "address": "fa:16:3e:8a:9b:b1", "network": {"id": "63a8b230-0938-49d5-bd27-d250421cd1a2", "bridge": "br-int", "label": "tempest-ServersTestJSON-149901082-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "01e87a17aae64e93bdb507d58a515a3f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap316eb9f1-38", "ovs_interfaceid": "316eb9f1-38ba-4e49-8475-61d32e3ecc4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:10:25 np0005539552 nova_compute[233724]: 2025-11-29 08:10:25.246 233728 WARNING nova.virt.libvirt.driver [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:10:25 np0005539552 nova_compute[233724]: 2025-11-29 08:10:25.250 233728 DEBUG nova.virt.libvirt.host [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:10:25 np0005539552 nova_compute[233724]: 2025-11-29 08:10:25.251 233728 DEBUG nova.virt.libvirt.host [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:10:25 np0005539552 nova_compute[233724]: 2025-11-29 08:10:25.253 233728 DEBUG nova.virt.libvirt.host [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:10:25 np0005539552 nova_compute[233724]: 2025-11-29 08:10:25.254 233728 DEBUG nova.virt.libvirt.host [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:10:25 np0005539552 nova_compute[233724]: 2025-11-29 08:10:25.255 233728 DEBUG nova.virt.libvirt.driver [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:10:25 np0005539552 nova_compute[233724]: 2025-11-29 08:10:25.256 233728 DEBUG nova.virt.hardware [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:10:25 np0005539552 nova_compute[233724]: 2025-11-29 08:10:25.257 233728 DEBUG nova.virt.hardware [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:10:25 np0005539552 nova_compute[233724]: 2025-11-29 08:10:25.257 233728 DEBUG nova.virt.hardware [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:10:25 np0005539552 nova_compute[233724]: 2025-11-29 08:10:25.257 233728 DEBUG nova.virt.hardware [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:10:25 np0005539552 nova_compute[233724]: 2025-11-29 08:10:25.258 233728 DEBUG nova.virt.hardware [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:10:25 np0005539552 nova_compute[233724]: 2025-11-29 08:10:25.258 233728 DEBUG nova.virt.hardware [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:10:25 np0005539552 nova_compute[233724]: 2025-11-29 08:10:25.259 233728 DEBUG nova.virt.hardware [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:10:25 np0005539552 nova_compute[233724]: 2025-11-29 08:10:25.259 233728 DEBUG nova.virt.hardware [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:10:25 np0005539552 nova_compute[233724]: 2025-11-29 08:10:25.259 233728 DEBUG nova.virt.hardware [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:10:25 np0005539552 nova_compute[233724]: 2025-11-29 08:10:25.260 233728 DEBUG nova.virt.hardware [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:10:25 np0005539552 nova_compute[233724]: 2025-11-29 08:10:25.260 233728 DEBUG nova.virt.hardware [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:10:25 np0005539552 nova_compute[233724]: 2025-11-29 08:10:25.264 233728 DEBUG oslo_concurrency.processutils [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:10:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:25.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:25 np0005539552 nova_compute[233724]: 2025-11-29 08:10:25.686 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:25 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:10:25 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3809213942' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:10:25 np0005539552 nova_compute[233724]: 2025-11-29 08:10:25.742 233728 DEBUG oslo_concurrency.processutils [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:10:25 np0005539552 nova_compute[233724]: 2025-11-29 08:10:25.780 233728 DEBUG nova.storage.rbd_utils [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] rbd image 6cfc4165-90ce-407e-8236-34f2147deb51_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:10:25 np0005539552 nova_compute[233724]: 2025-11-29 08:10:25.786 233728 DEBUG oslo_concurrency.processutils [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:10:25 np0005539552 nova_compute[233724]: 2025-11-29 08:10:25.919 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:10:26 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:10:26 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2383194979' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:10:26 np0005539552 nova_compute[233724]: 2025-11-29 08:10:26.185 233728 DEBUG oslo_concurrency.processutils [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.399s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:10:26 np0005539552 nova_compute[233724]: 2025-11-29 08:10:26.186 233728 DEBUG nova.virt.libvirt.vif [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:10:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-937614641',display_name='tempest-ServersTestJSON-server-937614641',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestjson-server-937614641',id=79,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEC6coBnEzsPsu9uqhkRNWbiAFZrvi4jrlwmuO5/v51K6GFLCyNMqw3JNIcILFAQrHu57Gc0oW4eaFRdy+GSMK/n2f2ovUczJzyoIu/uvpkXVKRLoe5GTeAIsJfd85D8/w==',key_name='tempest-keypair-2052461960',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='01e87a17aae64e93bdb507d58a515a3f',ramdisk_id='',reservation_id='r-gav6qgb8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-784160324',owner_user_name='tempest-ServersTestJSON-784160324-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:10:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aa730ff43688414fbafb9fb85e566a1a',uuid=6cfc4165-90ce-407e-8236-34f2147deb51,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "316eb9f1-38ba-4e49-8475-61d32e3ecc4f", "address": "fa:16:3e:8a:9b:b1", "network": {"id": "63a8b230-0938-49d5-bd27-d250421cd1a2", "bridge": "br-int", "label": "tempest-ServersTestJSON-149901082-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "01e87a17aae64e93bdb507d58a515a3f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap316eb9f1-38", "ovs_interfaceid": "316eb9f1-38ba-4e49-8475-61d32e3ecc4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:10:26 np0005539552 nova_compute[233724]: 2025-11-29 08:10:26.187 233728 DEBUG nova.network.os_vif_util [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Converting VIF {"id": "316eb9f1-38ba-4e49-8475-61d32e3ecc4f", "address": "fa:16:3e:8a:9b:b1", "network": {"id": "63a8b230-0938-49d5-bd27-d250421cd1a2", "bridge": "br-int", "label": "tempest-ServersTestJSON-149901082-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "01e87a17aae64e93bdb507d58a515a3f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap316eb9f1-38", "ovs_interfaceid": "316eb9f1-38ba-4e49-8475-61d32e3ecc4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:10:26 np0005539552 nova_compute[233724]: 2025-11-29 08:10:26.188 233728 DEBUG nova.network.os_vif_util [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8a:9b:b1,bridge_name='br-int',has_traffic_filtering=True,id=316eb9f1-38ba-4e49-8475-61d32e3ecc4f,network=Network(63a8b230-0938-49d5-bd27-d250421cd1a2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap316eb9f1-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:10:26 np0005539552 nova_compute[233724]: 2025-11-29 08:10:26.189 233728 DEBUG nova.objects.instance [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Lazy-loading 'pci_devices' on Instance uuid 6cfc4165-90ce-407e-8236-34f2147deb51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:10:26 np0005539552 nova_compute[233724]: 2025-11-29 08:10:26.214 233728 DEBUG nova.virt.libvirt.driver [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:10:26 np0005539552 nova_compute[233724]:  <uuid>6cfc4165-90ce-407e-8236-34f2147deb51</uuid>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:  <name>instance-0000004f</name>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:10:26 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:      <nova:name>tempest-ServersTestJSON-server-937614641</nova:name>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:10:25</nova:creationTime>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:10:26 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:        <nova:user uuid="aa730ff43688414fbafb9fb85e566a1a">tempest-ServersTestJSON-784160324-project-member</nova:user>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:        <nova:project uuid="01e87a17aae64e93bdb507d58a515a3f">tempest-ServersTestJSON-784160324</nova:project>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:        <nova:port uuid="316eb9f1-38ba-4e49-8475-61d32e3ecc4f">
Nov 29 03:10:26 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:      <entry name="serial">6cfc4165-90ce-407e-8236-34f2147deb51</entry>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:      <entry name="uuid">6cfc4165-90ce-407e-8236-34f2147deb51</entry>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:10:26 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/6cfc4165-90ce-407e-8236-34f2147deb51_disk">
Nov 29 03:10:26 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:10:26 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:10:26 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/6cfc4165-90ce-407e-8236-34f2147deb51_disk.config">
Nov 29 03:10:26 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:10:26 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:10:26 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:8a:9b:b1"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:      <target dev="tap316eb9f1-38"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:10:26 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/6cfc4165-90ce-407e-8236-34f2147deb51/console.log" append="off"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:10:26 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:10:26 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:10:26 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:10:26 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:10:26 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:10:26 np0005539552 nova_compute[233724]: 2025-11-29 08:10:26.215 233728 DEBUG nova.compute.manager [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Preparing to wait for external event network-vif-plugged-316eb9f1-38ba-4e49-8475-61d32e3ecc4f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:10:26 np0005539552 nova_compute[233724]: 2025-11-29 08:10:26.216 233728 DEBUG oslo_concurrency.lockutils [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Acquiring lock "6cfc4165-90ce-407e-8236-34f2147deb51-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:26 np0005539552 nova_compute[233724]: 2025-11-29 08:10:26.216 233728 DEBUG oslo_concurrency.lockutils [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Lock "6cfc4165-90ce-407e-8236-34f2147deb51-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:26 np0005539552 nova_compute[233724]: 2025-11-29 08:10:26.216 233728 DEBUG oslo_concurrency.lockutils [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Lock "6cfc4165-90ce-407e-8236-34f2147deb51-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:26 np0005539552 nova_compute[233724]: 2025-11-29 08:10:26.217 233728 DEBUG nova.virt.libvirt.vif [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:10:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-937614641',display_name='tempest-ServersTestJSON-server-937614641',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestjson-server-937614641',id=79,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEC6coBnEzsPsu9uqhkRNWbiAFZrvi4jrlwmuO5/v51K6GFLCyNMqw3JNIcILFAQrHu57Gc0oW4eaFRdy+GSMK/n2f2ovUczJzyoIu/uvpkXVKRLoe5GTeAIsJfd85D8/w==',key_name='tempest-keypair-2052461960',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='01e87a17aae64e93bdb507d58a515a3f',ramdisk_id='',reservation_id='r-gav6qgb8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-784160324',owner_user_name='tempest-ServersTestJSON-784160324-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:10:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aa730ff43688414fbafb9fb85e566a1a',uuid=6cfc4165-90ce-407e-8236-34f2147deb51,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "316eb9f1-38ba-4e49-8475-61d32e3ecc4f", "address": "fa:16:3e:8a:9b:b1", "network": {"id": "63a8b230-0938-49d5-bd27-d250421cd1a2", "bridge": "br-int", "label": "tempest-ServersTestJSON-149901082-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "01e87a17aae64e93bdb507d58a515a3f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap316eb9f1-38", "ovs_interfaceid": "316eb9f1-38ba-4e49-8475-61d32e3ecc4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:10:26 np0005539552 nova_compute[233724]: 2025-11-29 08:10:26.217 233728 DEBUG nova.network.os_vif_util [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Converting VIF {"id": "316eb9f1-38ba-4e49-8475-61d32e3ecc4f", "address": "fa:16:3e:8a:9b:b1", "network": {"id": "63a8b230-0938-49d5-bd27-d250421cd1a2", "bridge": "br-int", "label": "tempest-ServersTestJSON-149901082-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "01e87a17aae64e93bdb507d58a515a3f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap316eb9f1-38", "ovs_interfaceid": "316eb9f1-38ba-4e49-8475-61d32e3ecc4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:10:26 np0005539552 nova_compute[233724]: 2025-11-29 08:10:26.218 233728 DEBUG nova.network.os_vif_util [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8a:9b:b1,bridge_name='br-int',has_traffic_filtering=True,id=316eb9f1-38ba-4e49-8475-61d32e3ecc4f,network=Network(63a8b230-0938-49d5-bd27-d250421cd1a2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap316eb9f1-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:10:26 np0005539552 nova_compute[233724]: 2025-11-29 08:10:26.219 233728 DEBUG os_vif [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8a:9b:b1,bridge_name='br-int',has_traffic_filtering=True,id=316eb9f1-38ba-4e49-8475-61d32e3ecc4f,network=Network(63a8b230-0938-49d5-bd27-d250421cd1a2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap316eb9f1-38') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:10:26 np0005539552 nova_compute[233724]: 2025-11-29 08:10:26.219 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:26 np0005539552 nova_compute[233724]: 2025-11-29 08:10:26.220 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:26 np0005539552 nova_compute[233724]: 2025-11-29 08:10:26.220 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:10:26 np0005539552 nova_compute[233724]: 2025-11-29 08:10:26.226 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:26 np0005539552 nova_compute[233724]: 2025-11-29 08:10:26.226 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap316eb9f1-38, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:26 np0005539552 nova_compute[233724]: 2025-11-29 08:10:26.227 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap316eb9f1-38, col_values=(('external_ids', {'iface-id': '316eb9f1-38ba-4e49-8475-61d32e3ecc4f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8a:9b:b1', 'vm-uuid': '6cfc4165-90ce-407e-8236-34f2147deb51'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:26 np0005539552 nova_compute[233724]: 2025-11-29 08:10:26.228 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:26 np0005539552 NetworkManager[48926]: <info>  [1764403826.2298] manager: (tap316eb9f1-38): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/123)
Nov 29 03:10:26 np0005539552 nova_compute[233724]: 2025-11-29 08:10:26.230 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:10:26 np0005539552 nova_compute[233724]: 2025-11-29 08:10:26.235 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:26 np0005539552 nova_compute[233724]: 2025-11-29 08:10:26.235 233728 INFO os_vif [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8a:9b:b1,bridge_name='br-int',has_traffic_filtering=True,id=316eb9f1-38ba-4e49-8475-61d32e3ecc4f,network=Network(63a8b230-0938-49d5-bd27-d250421cd1a2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap316eb9f1-38')#033[00m
Nov 29 03:10:26 np0005539552 nova_compute[233724]: 2025-11-29 08:10:26.295 233728 DEBUG nova.virt.libvirt.driver [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:10:26 np0005539552 nova_compute[233724]: 2025-11-29 08:10:26.295 233728 DEBUG nova.virt.libvirt.driver [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:10:26 np0005539552 nova_compute[233724]: 2025-11-29 08:10:26.296 233728 DEBUG nova.virt.libvirt.driver [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] No VIF found with MAC fa:16:3e:8a:9b:b1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:10:26 np0005539552 nova_compute[233724]: 2025-11-29 08:10:26.296 233728 INFO nova.virt.libvirt.driver [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Using config drive#033[00m
Nov 29 03:10:26 np0005539552 nova_compute[233724]: 2025-11-29 08:10:26.322 233728 DEBUG nova.storage.rbd_utils [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] rbd image 6cfc4165-90ce-407e-8236-34f2147deb51_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:10:26 np0005539552 nova_compute[233724]: 2025-11-29 08:10:26.636 233728 INFO nova.virt.libvirt.driver [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Creating config drive at /var/lib/nova/instances/6cfc4165-90ce-407e-8236-34f2147deb51/disk.config#033[00m
Nov 29 03:10:26 np0005539552 nova_compute[233724]: 2025-11-29 08:10:26.649 233728 DEBUG oslo_concurrency.processutils [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6cfc4165-90ce-407e-8236-34f2147deb51/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeqs5n__e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:10:26 np0005539552 nova_compute[233724]: 2025-11-29 08:10:26.787 233728 DEBUG oslo_concurrency.processutils [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6cfc4165-90ce-407e-8236-34f2147deb51/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeqs5n__e" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:10:26 np0005539552 nova_compute[233724]: 2025-11-29 08:10:26.820 233728 DEBUG nova.storage.rbd_utils [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] rbd image 6cfc4165-90ce-407e-8236-34f2147deb51_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:10:26 np0005539552 nova_compute[233724]: 2025-11-29 08:10:26.825 233728 DEBUG oslo_concurrency.processutils [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6cfc4165-90ce-407e-8236-34f2147deb51/disk.config 6cfc4165-90ce-407e-8236-34f2147deb51_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:10:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:26.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:26 np0005539552 nova_compute[233724]: 2025-11-29 08:10:26.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:10:27 np0005539552 nova_compute[233724]: 2025-11-29 08:10:27.000 233728 DEBUG oslo_concurrency.processutils [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6cfc4165-90ce-407e-8236-34f2147deb51/disk.config 6cfc4165-90ce-407e-8236-34f2147deb51_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:10:27 np0005539552 nova_compute[233724]: 2025-11-29 08:10:27.002 233728 INFO nova.virt.libvirt.driver [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Deleting local config drive /var/lib/nova/instances/6cfc4165-90ce-407e-8236-34f2147deb51/disk.config because it was imported into RBD.#033[00m
Nov 29 03:10:27 np0005539552 kernel: tap316eb9f1-38: entered promiscuous mode
Nov 29 03:10:27 np0005539552 NetworkManager[48926]: <info>  [1764403827.0678] manager: (tap316eb9f1-38): new Tun device (/org/freedesktop/NetworkManager/Devices/124)
Nov 29 03:10:27 np0005539552 ovn_controller[133798]: 2025-11-29T08:10:27Z|00233|binding|INFO|Claiming lport 316eb9f1-38ba-4e49-8475-61d32e3ecc4f for this chassis.
Nov 29 03:10:27 np0005539552 ovn_controller[133798]: 2025-11-29T08:10:27Z|00234|binding|INFO|316eb9f1-38ba-4e49-8475-61d32e3ecc4f: Claiming fa:16:3e:8a:9b:b1 10.100.0.11
Nov 29 03:10:27 np0005539552 nova_compute[233724]: 2025-11-29 08:10:27.069 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:27 np0005539552 nova_compute[233724]: 2025-11-29 08:10:27.074 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:27.080 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8a:9b:b1 10.100.0.11'], port_security=['fa:16:3e:8a:9b:b1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '6cfc4165-90ce-407e-8236-34f2147deb51', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-63a8b230-0938-49d5-bd27-d250421cd1a2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '01e87a17aae64e93bdb507d58a515a3f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4d51e162-49a6-428f-b147-b19102c65732', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1c940b2e-658f-4996-b3f8-38625da2db31, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=316eb9f1-38ba-4e49-8475-61d32e3ecc4f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:27.080 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 316eb9f1-38ba-4e49-8475-61d32e3ecc4f in datapath 63a8b230-0938-49d5-bd27-d250421cd1a2 bound to our chassis#033[00m
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:27.082 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 63a8b230-0938-49d5-bd27-d250421cd1a2#033[00m
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:27.092 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[10affa26-82ef-4570-9fc8-5c302a6581c0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:27.093 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap63a8b230-01 in ovnmeta-63a8b230-0938-49d5-bd27-d250421cd1a2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:27.096 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap63a8b230-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:27.096 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4bc75bc0-56eb-4354-aebd-75b42ddc088b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:27.097 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[95c3b510-47a3-4743-a147-8cf21fa7c2e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:27 np0005539552 systemd-udevd[265400]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:10:27 np0005539552 systemd-machined[196379]: New machine qemu-28-instance-0000004f.
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:27.109 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[fbf56df3-31ba-4caa-a111-430f4a67feb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:27 np0005539552 NetworkManager[48926]: <info>  [1764403827.1197] device (tap316eb9f1-38): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:10:27 np0005539552 NetworkManager[48926]: <info>  [1764403827.1207] device (tap316eb9f1-38): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:10:27 np0005539552 systemd[1]: Started Virtual Machine qemu-28-instance-0000004f.
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:27.137 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[24378c97-5561-4d65-9446-4ed8ad53c516]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:27 np0005539552 nova_compute[233724]: 2025-11-29 08:10:27.141 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:27 np0005539552 ovn_controller[133798]: 2025-11-29T08:10:27Z|00235|binding|INFO|Setting lport 316eb9f1-38ba-4e49-8475-61d32e3ecc4f ovn-installed in OVS
Nov 29 03:10:27 np0005539552 ovn_controller[133798]: 2025-11-29T08:10:27Z|00236|binding|INFO|Setting lport 316eb9f1-38ba-4e49-8475-61d32e3ecc4f up in Southbound
Nov 29 03:10:27 np0005539552 nova_compute[233724]: 2025-11-29 08:10:27.146 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:27.169 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[a9311535-cb6d-43df-a038-406ca1a5c85c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:27 np0005539552 NetworkManager[48926]: <info>  [1764403827.1756] manager: (tap63a8b230-00): new Veth device (/org/freedesktop/NetworkManager/Devices/125)
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:27.174 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[36a950b7-ed15-423d-967f-663e8316ba68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:27.203 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[cffd10c4-31dd-49ee-9c57-5dac7408f9a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:27.206 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[4d1cfbab-8baa-4955-93f2-1a01823a52f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:27 np0005539552 NetworkManager[48926]: <info>  [1764403827.2319] device (tap63a8b230-00): carrier: link connected
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:27.241 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[6d0ef66d-a8fd-4c26-af59-853b01c9df83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:27.261 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3ff699fb-9bda-4f03-bfa5-6d856a967c8d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap63a8b230-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:c6:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 76], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 687291, 'reachable_time': 21151, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265431, 'error': None, 'target': 'ovnmeta-63a8b230-0938-49d5-bd27-d250421cd1a2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:27.283 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c3282809-73df-4d67-adcc-fbd2881970de]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe01:c65b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 687291, 'tstamp': 687291}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265432, 'error': None, 'target': 'ovnmeta-63a8b230-0938-49d5-bd27-d250421cd1a2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:27.313 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d83949f1-7053-4a4e-9353-a36bbdbd9688]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap63a8b230-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:c6:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 76], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 687291, 'reachable_time': 21151, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 265433, 'error': None, 'target': 'ovnmeta-63a8b230-0938-49d5-bd27-d250421cd1a2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:27.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:27.353 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[812aa4a1-c898-422e-a71a-fd43663029ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:27.414 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[5bacb241-bd30-4c0c-953e-2b65c6e030e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:27.415 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap63a8b230-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:27.416 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:27.416 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap63a8b230-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:27 np0005539552 NetworkManager[48926]: <info>  [1764403827.4187] manager: (tap63a8b230-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/126)
Nov 29 03:10:27 np0005539552 kernel: tap63a8b230-00: entered promiscuous mode
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:27.421 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap63a8b230-00, col_values=(('external_ids', {'iface-id': 'fedfd143-4ffa-46b5-88de-55a0dfb699cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:27 np0005539552 ovn_controller[133798]: 2025-11-29T08:10:27Z|00237|binding|INFO|Releasing lport fedfd143-4ffa-46b5-88de-55a0dfb699cf from this chassis (sb_readonly=0)
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:27.425 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/63a8b230-0938-49d5-bd27-d250421cd1a2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/63a8b230-0938-49d5-bd27-d250421cd1a2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:27.427 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[9b5a183e-c5d5-4e00-a268-e61869703fe0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:27.428 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-63a8b230-0938-49d5-bd27-d250421cd1a2
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/63a8b230-0938-49d5-bd27-d250421cd1a2.pid.haproxy
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 63a8b230-0938-49d5-bd27-d250421cd1a2
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:10:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:27.429 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-63a8b230-0938-49d5-bd27-d250421cd1a2', 'env', 'PROCESS_TAG=haproxy-63a8b230-0938-49d5-bd27-d250421cd1a2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/63a8b230-0938-49d5-bd27-d250421cd1a2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:10:27 np0005539552 nova_compute[233724]: 2025-11-29 08:10:27.436 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:27 np0005539552 nova_compute[233724]: 2025-11-29 08:10:27.647 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403827.6469162, 6cfc4165-90ce-407e-8236-34f2147deb51 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:10:27 np0005539552 nova_compute[233724]: 2025-11-29 08:10:27.648 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] VM Started (Lifecycle Event)#033[00m
Nov 29 03:10:27 np0005539552 nova_compute[233724]: 2025-11-29 08:10:27.671 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:10:27 np0005539552 nova_compute[233724]: 2025-11-29 08:10:27.677 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403827.6470916, 6cfc4165-90ce-407e-8236-34f2147deb51 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:10:27 np0005539552 nova_compute[233724]: 2025-11-29 08:10:27.677 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:10:27 np0005539552 nova_compute[233724]: 2025-11-29 08:10:27.702 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:10:27 np0005539552 nova_compute[233724]: 2025-11-29 08:10:27.707 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:10:27 np0005539552 nova_compute[233724]: 2025-11-29 08:10:27.729 233728 DEBUG nova.network.neutron [req-78bdb5af-da1f-46e7-affc-8b06dcd612de req-c522f4df-e408-445d-a2ef-2289afa22f9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Updated VIF entry in instance network info cache for port 316eb9f1-38ba-4e49-8475-61d32e3ecc4f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:10:27 np0005539552 nova_compute[233724]: 2025-11-29 08:10:27.730 233728 DEBUG nova.network.neutron [req-78bdb5af-da1f-46e7-affc-8b06dcd612de req-c522f4df-e408-445d-a2ef-2289afa22f9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Updating instance_info_cache with network_info: [{"id": "316eb9f1-38ba-4e49-8475-61d32e3ecc4f", "address": "fa:16:3e:8a:9b:b1", "network": {"id": "63a8b230-0938-49d5-bd27-d250421cd1a2", "bridge": "br-int", "label": "tempest-ServersTestJSON-149901082-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "01e87a17aae64e93bdb507d58a515a3f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap316eb9f1-38", "ovs_interfaceid": "316eb9f1-38ba-4e49-8475-61d32e3ecc4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:10:27 np0005539552 nova_compute[233724]: 2025-11-29 08:10:27.743 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:10:27 np0005539552 nova_compute[233724]: 2025-11-29 08:10:27.755 233728 DEBUG oslo_concurrency.lockutils [req-78bdb5af-da1f-46e7-affc-8b06dcd612de req-c522f4df-e408-445d-a2ef-2289afa22f9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-6cfc4165-90ce-407e-8236-34f2147deb51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:10:27 np0005539552 podman[265507]: 2025-11-29 08:10:27.778803075 +0000 UTC m=+0.025682794 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:10:27 np0005539552 podman[265507]: 2025-11-29 08:10:27.873607184 +0000 UTC m=+0.120486873 container create 2bfcb4dbd388bec460259985a8cbeeee9ce6d339c422b424ee5f5bbf61e99b7e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-63a8b230-0938-49d5-bd27-d250421cd1a2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:10:27 np0005539552 nova_compute[233724]: 2025-11-29 08:10:27.897 233728 DEBUG nova.compute.manager [req-9accbe09-7678-44b5-a42e-96e55de2b420 req-5892eb60-574e-43eb-8a1f-75917f4a04e0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Received event network-vif-plugged-316eb9f1-38ba-4e49-8475-61d32e3ecc4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:27 np0005539552 nova_compute[233724]: 2025-11-29 08:10:27.898 233728 DEBUG oslo_concurrency.lockutils [req-9accbe09-7678-44b5-a42e-96e55de2b420 req-5892eb60-574e-43eb-8a1f-75917f4a04e0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "6cfc4165-90ce-407e-8236-34f2147deb51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:27 np0005539552 nova_compute[233724]: 2025-11-29 08:10:27.898 233728 DEBUG oslo_concurrency.lockutils [req-9accbe09-7678-44b5-a42e-96e55de2b420 req-5892eb60-574e-43eb-8a1f-75917f4a04e0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6cfc4165-90ce-407e-8236-34f2147deb51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:27 np0005539552 nova_compute[233724]: 2025-11-29 08:10:27.898 233728 DEBUG oslo_concurrency.lockutils [req-9accbe09-7678-44b5-a42e-96e55de2b420 req-5892eb60-574e-43eb-8a1f-75917f4a04e0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6cfc4165-90ce-407e-8236-34f2147deb51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:27 np0005539552 nova_compute[233724]: 2025-11-29 08:10:27.899 233728 DEBUG nova.compute.manager [req-9accbe09-7678-44b5-a42e-96e55de2b420 req-5892eb60-574e-43eb-8a1f-75917f4a04e0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Processing event network-vif-plugged-316eb9f1-38ba-4e49-8475-61d32e3ecc4f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:10:27 np0005539552 nova_compute[233724]: 2025-11-29 08:10:27.900 233728 DEBUG nova.compute.manager [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:10:27 np0005539552 nova_compute[233724]: 2025-11-29 08:10:27.903 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403827.9029264, 6cfc4165-90ce-407e-8236-34f2147deb51 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:10:27 np0005539552 nova_compute[233724]: 2025-11-29 08:10:27.903 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:10:27 np0005539552 nova_compute[233724]: 2025-11-29 08:10:27.905 233728 DEBUG nova.virt.libvirt.driver [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:10:27 np0005539552 nova_compute[233724]: 2025-11-29 08:10:27.908 233728 INFO nova.virt.libvirt.driver [-] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Instance spawned successfully.#033[00m
Nov 29 03:10:27 np0005539552 systemd[1]: Started libpod-conmon-2bfcb4dbd388bec460259985a8cbeeee9ce6d339c422b424ee5f5bbf61e99b7e.scope.
Nov 29 03:10:27 np0005539552 nova_compute[233724]: 2025-11-29 08:10:27.909 233728 DEBUG nova.virt.libvirt.driver [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:10:27 np0005539552 nova_compute[233724]: 2025-11-29 08:10:27.922 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:10:27 np0005539552 nova_compute[233724]: 2025-11-29 08:10:27.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:10:27 np0005539552 nova_compute[233724]: 2025-11-29 08:10:27.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:10:27 np0005539552 nova_compute[233724]: 2025-11-29 08:10:27.925 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:10:27 np0005539552 nova_compute[233724]: 2025-11-29 08:10:27.928 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:10:27 np0005539552 nova_compute[233724]: 2025-11-29 08:10:27.933 233728 DEBUG nova.virt.libvirt.driver [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:10:27 np0005539552 nova_compute[233724]: 2025-11-29 08:10:27.934 233728 DEBUG nova.virt.libvirt.driver [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:10:27 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:10:27 np0005539552 nova_compute[233724]: 2025-11-29 08:10:27.934 233728 DEBUG nova.virt.libvirt.driver [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:10:27 np0005539552 nova_compute[233724]: 2025-11-29 08:10:27.935 233728 DEBUG nova.virt.libvirt.driver [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:10:27 np0005539552 nova_compute[233724]: 2025-11-29 08:10:27.935 233728 DEBUG nova.virt.libvirt.driver [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:10:27 np0005539552 nova_compute[233724]: 2025-11-29 08:10:27.936 233728 DEBUG nova.virt.libvirt.driver [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:10:27 np0005539552 systemd[1]: Stopping User Manager for UID 42436...
Nov 29 03:10:27 np0005539552 systemd[264805]: Activating special unit Exit the Session...
Nov 29 03:10:27 np0005539552 systemd[264805]: Stopped target Main User Target.
Nov 29 03:10:27 np0005539552 systemd[264805]: Stopped target Basic System.
Nov 29 03:10:27 np0005539552 systemd[264805]: Stopped target Paths.
Nov 29 03:10:27 np0005539552 systemd[264805]: Stopped target Sockets.
Nov 29 03:10:27 np0005539552 nova_compute[233724]: 2025-11-29 08:10:27.940 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 03:10:27 np0005539552 systemd[264805]: Stopped target Timers.
Nov 29 03:10:27 np0005539552 systemd[264805]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 29 03:10:27 np0005539552 systemd[264805]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 29 03:10:27 np0005539552 nova_compute[233724]: 2025-11-29 08:10:27.941 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:10:27 np0005539552 systemd[264805]: Closed D-Bus User Message Bus Socket.
Nov 29 03:10:27 np0005539552 systemd[264805]: Stopped Create User's Volatile Files and Directories.
Nov 29 03:10:27 np0005539552 nova_compute[233724]: 2025-11-29 08:10:27.942 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:10:27 np0005539552 systemd[264805]: Removed slice User Application Slice.
Nov 29 03:10:27 np0005539552 systemd[264805]: Reached target Shutdown.
Nov 29 03:10:27 np0005539552 systemd[264805]: Finished Exit the Session.
Nov 29 03:10:27 np0005539552 systemd[264805]: Reached target Exit the Session.
Nov 29 03:10:27 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1cdb1331fc6521a1fafa3ad8b5bcbf5a953a85e69c0d1433966d174abe4bcda/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:10:27 np0005539552 systemd[1]: user@42436.service: Deactivated successfully.
Nov 29 03:10:27 np0005539552 systemd[1]: Stopped User Manager for UID 42436.
Nov 29 03:10:27 np0005539552 podman[265507]: 2025-11-29 08:10:27.957572551 +0000 UTC m=+0.204452240 container init 2bfcb4dbd388bec460259985a8cbeeee9ce6d339c422b424ee5f5bbf61e99b7e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-63a8b230-0938-49d5-bd27-d250421cd1a2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 03:10:27 np0005539552 nova_compute[233724]: 2025-11-29 08:10:27.962 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:10:27 np0005539552 podman[265507]: 2025-11-29 08:10:27.963016898 +0000 UTC m=+0.209896587 container start 2bfcb4dbd388bec460259985a8cbeeee9ce6d339c422b424ee5f5bbf61e99b7e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-63a8b230-0938-49d5-bd27-d250421cd1a2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 03:10:27 np0005539552 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Nov 29 03:10:27 np0005539552 systemd[1]: run-user-42436.mount: Deactivated successfully.
Nov 29 03:10:27 np0005539552 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Nov 29 03:10:27 np0005539552 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Nov 29 03:10:27 np0005539552 systemd[1]: Removed slice User Slice of UID 42436.
Nov 29 03:10:27 np0005539552 neutron-haproxy-ovnmeta-63a8b230-0938-49d5-bd27-d250421cd1a2[265523]: [NOTICE]   (265528) : New worker (265530) forked
Nov 29 03:10:27 np0005539552 neutron-haproxy-ovnmeta-63a8b230-0938-49d5-bd27-d250421cd1a2[265523]: [NOTICE]   (265528) : Loading success.
Nov 29 03:10:28 np0005539552 nova_compute[233724]: 2025-11-29 08:10:28.017 233728 INFO nova.compute.manager [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Took 8.10 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:10:28 np0005539552 nova_compute[233724]: 2025-11-29 08:10:28.017 233728 DEBUG nova.compute.manager [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:10:28 np0005539552 nova_compute[233724]: 2025-11-29 08:10:28.078 233728 INFO nova.compute.manager [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Took 8.98 seconds to build instance.#033[00m
Nov 29 03:10:28 np0005539552 nova_compute[233724]: 2025-11-29 08:10:28.095 233728 DEBUG oslo_concurrency.lockutils [None req-43107335-ab52-4660-a655-6da1e6ea874b aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Lock "6cfc4165-90ce-407e-8236-34f2147deb51" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.061s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:28 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:10:28 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:10:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:28.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:29.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:29 np0005539552 nova_compute[233724]: 2025-11-29 08:10:29.984 233728 DEBUG nova.compute.manager [req-5a82bf86-3588-4a8f-a5b2-fb2a3a418878 req-428843ae-6c4c-47f0-acce-657b32799317 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Received event network-vif-plugged-316eb9f1-38ba-4e49-8475-61d32e3ecc4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:29 np0005539552 nova_compute[233724]: 2025-11-29 08:10:29.985 233728 DEBUG oslo_concurrency.lockutils [req-5a82bf86-3588-4a8f-a5b2-fb2a3a418878 req-428843ae-6c4c-47f0-acce-657b32799317 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "6cfc4165-90ce-407e-8236-34f2147deb51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:29 np0005539552 nova_compute[233724]: 2025-11-29 08:10:29.985 233728 DEBUG oslo_concurrency.lockutils [req-5a82bf86-3588-4a8f-a5b2-fb2a3a418878 req-428843ae-6c4c-47f0-acce-657b32799317 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6cfc4165-90ce-407e-8236-34f2147deb51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:29 np0005539552 nova_compute[233724]: 2025-11-29 08:10:29.986 233728 DEBUG oslo_concurrency.lockutils [req-5a82bf86-3588-4a8f-a5b2-fb2a3a418878 req-428843ae-6c4c-47f0-acce-657b32799317 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6cfc4165-90ce-407e-8236-34f2147deb51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:29 np0005539552 nova_compute[233724]: 2025-11-29 08:10:29.986 233728 DEBUG nova.compute.manager [req-5a82bf86-3588-4a8f-a5b2-fb2a3a418878 req-428843ae-6c4c-47f0-acce-657b32799317 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] No waiting events found dispatching network-vif-plugged-316eb9f1-38ba-4e49-8475-61d32e3ecc4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:10:29 np0005539552 nova_compute[233724]: 2025-11-29 08:10:29.986 233728 WARNING nova.compute.manager [req-5a82bf86-3588-4a8f-a5b2-fb2a3a418878 req-428843ae-6c4c-47f0-acce-657b32799317 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Received unexpected event network-vif-plugged-316eb9f1-38ba-4e49-8475-61d32e3ecc4f for instance with vm_state active and task_state None.#033[00m
Nov 29 03:10:30 np0005539552 nova_compute[233724]: 2025-11-29 08:10:30.687 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:30 np0005539552 nova_compute[233724]: 2025-11-29 08:10:30.848 233728 DEBUG nova.compute.manager [req-a5f2c73d-7332-4e0a-9015-62273c5526bd req-608a98e9-5983-4104-9592-3632c33a660e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Received event network-vif-unplugged-6bdd57b3-15f4-46ef-bab3-67925c3606c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:30 np0005539552 nova_compute[233724]: 2025-11-29 08:10:30.849 233728 DEBUG oslo_concurrency.lockutils [req-a5f2c73d-7332-4e0a-9015-62273c5526bd req-608a98e9-5983-4104-9592-3632c33a660e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "b373b176-ee91-41a8-a80a-96c957639455-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:30 np0005539552 nova_compute[233724]: 2025-11-29 08:10:30.849 233728 DEBUG oslo_concurrency.lockutils [req-a5f2c73d-7332-4e0a-9015-62273c5526bd req-608a98e9-5983-4104-9592-3632c33a660e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b373b176-ee91-41a8-a80a-96c957639455-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:30 np0005539552 nova_compute[233724]: 2025-11-29 08:10:30.849 233728 DEBUG oslo_concurrency.lockutils [req-a5f2c73d-7332-4e0a-9015-62273c5526bd req-608a98e9-5983-4104-9592-3632c33a660e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b373b176-ee91-41a8-a80a-96c957639455-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:30 np0005539552 nova_compute[233724]: 2025-11-29 08:10:30.850 233728 DEBUG nova.compute.manager [req-a5f2c73d-7332-4e0a-9015-62273c5526bd req-608a98e9-5983-4104-9592-3632c33a660e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] No waiting events found dispatching network-vif-unplugged-6bdd57b3-15f4-46ef-bab3-67925c3606c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:10:30 np0005539552 nova_compute[233724]: 2025-11-29 08:10:30.850 233728 WARNING nova.compute.manager [req-a5f2c73d-7332-4e0a-9015-62273c5526bd req-608a98e9-5983-4104-9592-3632c33a660e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Received unexpected event network-vif-unplugged-6bdd57b3-15f4-46ef-bab3-67925c3606c5 for instance with vm_state active and task_state resize_migrating.#033[00m
Nov 29 03:10:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:30.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:31 np0005539552 nova_compute[233724]: 2025-11-29 08:10:31.230 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:10:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:31.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:10:31 np0005539552 NetworkManager[48926]: <info>  [1764403831.7403] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/127)
Nov 29 03:10:31 np0005539552 NetworkManager[48926]: <info>  [1764403831.7412] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/128)
Nov 29 03:10:31 np0005539552 nova_compute[233724]: 2025-11-29 08:10:31.742 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:31 np0005539552 nova_compute[233724]: 2025-11-29 08:10:31.855 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:31 np0005539552 ovn_controller[133798]: 2025-11-29T08:10:31Z|00238|binding|INFO|Releasing lport fedfd143-4ffa-46b5-88de-55a0dfb699cf from this chassis (sb_readonly=0)
Nov 29 03:10:31 np0005539552 nova_compute[233724]: 2025-11-29 08:10:31.885 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:31 np0005539552 nova_compute[233724]: 2025-11-29 08:10:31.971 233728 INFO nova.network.neutron [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Updating port 6bdd57b3-15f4-46ef-bab3-67925c3606c5 with attributes {'binding:host_id': 'compute-2.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Nov 29 03:10:32 np0005539552 nova_compute[233724]: 2025-11-29 08:10:32.050 233728 DEBUG nova.compute.manager [req-449c8233-7ba0-46fb-83c2-656cd12f1feb req-46f343f9-fbb7-4906-b5d4-f388407f890e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Received event network-changed-316eb9f1-38ba-4e49-8475-61d32e3ecc4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:32 np0005539552 nova_compute[233724]: 2025-11-29 08:10:32.051 233728 DEBUG nova.compute.manager [req-449c8233-7ba0-46fb-83c2-656cd12f1feb req-46f343f9-fbb7-4906-b5d4-f388407f890e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Refreshing instance network info cache due to event network-changed-316eb9f1-38ba-4e49-8475-61d32e3ecc4f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:10:32 np0005539552 nova_compute[233724]: 2025-11-29 08:10:32.051 233728 DEBUG oslo_concurrency.lockutils [req-449c8233-7ba0-46fb-83c2-656cd12f1feb req-46f343f9-fbb7-4906-b5d4-f388407f890e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-6cfc4165-90ce-407e-8236-34f2147deb51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:10:32 np0005539552 nova_compute[233724]: 2025-11-29 08:10:32.051 233728 DEBUG oslo_concurrency.lockutils [req-449c8233-7ba0-46fb-83c2-656cd12f1feb req-46f343f9-fbb7-4906-b5d4-f388407f890e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-6cfc4165-90ce-407e-8236-34f2147deb51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:10:32 np0005539552 nova_compute[233724]: 2025-11-29 08:10:32.052 233728 DEBUG nova.network.neutron [req-449c8233-7ba0-46fb-83c2-656cd12f1feb req-46f343f9-fbb7-4906-b5d4-f388407f890e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Refreshing network info cache for port 316eb9f1-38ba-4e49-8475-61d32e3ecc4f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:10:32 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:32.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:32 np0005539552 nova_compute[233724]: 2025-11-29 08:10:32.935 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:10:33 np0005539552 nova_compute[233724]: 2025-11-29 08:10:33.022 233728 DEBUG nova.compute.manager [req-12f70136-64e7-4206-89c2-c767c90d38e4 req-f89faeea-133e-435b-b770-ccf5ed1bfa5c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Received event network-vif-plugged-6bdd57b3-15f4-46ef-bab3-67925c3606c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:33 np0005539552 nova_compute[233724]: 2025-11-29 08:10:33.023 233728 DEBUG oslo_concurrency.lockutils [req-12f70136-64e7-4206-89c2-c767c90d38e4 req-f89faeea-133e-435b-b770-ccf5ed1bfa5c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "b373b176-ee91-41a8-a80a-96c957639455-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:33 np0005539552 nova_compute[233724]: 2025-11-29 08:10:33.024 233728 DEBUG oslo_concurrency.lockutils [req-12f70136-64e7-4206-89c2-c767c90d38e4 req-f89faeea-133e-435b-b770-ccf5ed1bfa5c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b373b176-ee91-41a8-a80a-96c957639455-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:33 np0005539552 nova_compute[233724]: 2025-11-29 08:10:33.024 233728 DEBUG oslo_concurrency.lockutils [req-12f70136-64e7-4206-89c2-c767c90d38e4 req-f89faeea-133e-435b-b770-ccf5ed1bfa5c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b373b176-ee91-41a8-a80a-96c957639455-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:33 np0005539552 nova_compute[233724]: 2025-11-29 08:10:33.024 233728 DEBUG nova.compute.manager [req-12f70136-64e7-4206-89c2-c767c90d38e4 req-f89faeea-133e-435b-b770-ccf5ed1bfa5c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] No waiting events found dispatching network-vif-plugged-6bdd57b3-15f4-46ef-bab3-67925c3606c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:10:33 np0005539552 nova_compute[233724]: 2025-11-29 08:10:33.024 233728 WARNING nova.compute.manager [req-12f70136-64e7-4206-89c2-c767c90d38e4 req-f89faeea-133e-435b-b770-ccf5ed1bfa5c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Received unexpected event network-vif-plugged-6bdd57b3-15f4-46ef-bab3-67925c3606c5 for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:10:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:33.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:33 np0005539552 nova_compute[233724]: 2025-11-29 08:10:33.794 233728 DEBUG oslo_concurrency.lockutils [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "refresh_cache-b373b176-ee91-41a8-a80a-96c957639455" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:10:33 np0005539552 nova_compute[233724]: 2025-11-29 08:10:33.795 233728 DEBUG oslo_concurrency.lockutils [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquired lock "refresh_cache-b373b176-ee91-41a8-a80a-96c957639455" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:10:33 np0005539552 nova_compute[233724]: 2025-11-29 08:10:33.796 233728 DEBUG nova.network.neutron [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:10:33 np0005539552 nova_compute[233724]: 2025-11-29 08:10:33.866 233728 DEBUG nova.compute.manager [req-be7811f5-fbb1-43ee-9950-8fe386d082af req-f307c895-4013-4f06-b46e-e99b1968d49a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Received event network-changed-6bdd57b3-15f4-46ef-bab3-67925c3606c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:33 np0005539552 nova_compute[233724]: 2025-11-29 08:10:33.867 233728 DEBUG nova.compute.manager [req-be7811f5-fbb1-43ee-9950-8fe386d082af req-f307c895-4013-4f06-b46e-e99b1968d49a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Refreshing instance network info cache due to event network-changed-6bdd57b3-15f4-46ef-bab3-67925c3606c5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:10:33 np0005539552 nova_compute[233724]: 2025-11-29 08:10:33.867 233728 DEBUG oslo_concurrency.lockutils [req-be7811f5-fbb1-43ee-9950-8fe386d082af req-f307c895-4013-4f06-b46e-e99b1968d49a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-b373b176-ee91-41a8-a80a-96c957639455" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:10:34 np0005539552 nova_compute[233724]: 2025-11-29 08:10:34.022 233728 DEBUG nova.network.neutron [req-449c8233-7ba0-46fb-83c2-656cd12f1feb req-46f343f9-fbb7-4906-b5d4-f388407f890e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Updated VIF entry in instance network info cache for port 316eb9f1-38ba-4e49-8475-61d32e3ecc4f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:10:34 np0005539552 nova_compute[233724]: 2025-11-29 08:10:34.023 233728 DEBUG nova.network.neutron [req-449c8233-7ba0-46fb-83c2-656cd12f1feb req-46f343f9-fbb7-4906-b5d4-f388407f890e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Updating instance_info_cache with network_info: [{"id": "316eb9f1-38ba-4e49-8475-61d32e3ecc4f", "address": "fa:16:3e:8a:9b:b1", "network": {"id": "63a8b230-0938-49d5-bd27-d250421cd1a2", "bridge": "br-int", "label": "tempest-ServersTestJSON-149901082-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "01e87a17aae64e93bdb507d58a515a3f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap316eb9f1-38", "ovs_interfaceid": "316eb9f1-38ba-4e49-8475-61d32e3ecc4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:10:34 np0005539552 nova_compute[233724]: 2025-11-29 08:10:34.061 233728 DEBUG oslo_concurrency.lockutils [req-449c8233-7ba0-46fb-83c2-656cd12f1feb req-46f343f9-fbb7-4906-b5d4-f388407f890e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-6cfc4165-90ce-407e-8236-34f2147deb51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:10:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:34.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:35 np0005539552 nova_compute[233724]: 2025-11-29 08:10:35.246 233728 DEBUG nova.network.neutron [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Updating instance_info_cache with network_info: [{"id": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "address": "fa:16:3e:47:01:d2", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bdd57b3-15", "ovs_interfaceid": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:10:35 np0005539552 nova_compute[233724]: 2025-11-29 08:10:35.265 233728 DEBUG oslo_concurrency.lockutils [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Releasing lock "refresh_cache-b373b176-ee91-41a8-a80a-96c957639455" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:10:35 np0005539552 nova_compute[233724]: 2025-11-29 08:10:35.271 233728 DEBUG oslo_concurrency.lockutils [req-be7811f5-fbb1-43ee-9950-8fe386d082af req-f307c895-4013-4f06-b46e-e99b1968d49a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-b373b176-ee91-41a8-a80a-96c957639455" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:10:35 np0005539552 nova_compute[233724]: 2025-11-29 08:10:35.272 233728 DEBUG nova.network.neutron [req-be7811f5-fbb1-43ee-9950-8fe386d082af req-f307c895-4013-4f06-b46e-e99b1968d49a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Refreshing network info cache for port 6bdd57b3-15f4-46ef-bab3-67925c3606c5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:10:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:35.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:35 np0005539552 nova_compute[233724]: 2025-11-29 08:10:35.362 233728 DEBUG nova.virt.libvirt.driver [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Nov 29 03:10:35 np0005539552 nova_compute[233724]: 2025-11-29 08:10:35.365 233728 DEBUG nova.virt.libvirt.driver [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 29 03:10:35 np0005539552 nova_compute[233724]: 2025-11-29 08:10:35.365 233728 INFO nova.virt.libvirt.driver [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Creating image(s)#033[00m
Nov 29 03:10:35 np0005539552 nova_compute[233724]: 2025-11-29 08:10:35.406 233728 DEBUG nova.storage.rbd_utils [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] creating snapshot(nova-resize) on rbd image(b373b176-ee91-41a8-a80a-96c957639455_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:10:35 np0005539552 nova_compute[233724]: 2025-11-29 08:10:35.719 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:35 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e270 e270: 3 total, 3 up, 3 in
Nov 29 03:10:35 np0005539552 nova_compute[233724]: 2025-11-29 08:10:35.775 233728 DEBUG nova.objects.instance [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lazy-loading 'trusted_certs' on Instance uuid b373b176-ee91-41a8-a80a-96c957639455 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:10:35 np0005539552 nova_compute[233724]: 2025-11-29 08:10:35.875 233728 DEBUG nova.virt.libvirt.driver [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:10:35 np0005539552 nova_compute[233724]: 2025-11-29 08:10:35.876 233728 DEBUG nova.virt.libvirt.driver [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Ensure instance console log exists: /var/lib/nova/instances/b373b176-ee91-41a8-a80a-96c957639455/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:10:35 np0005539552 nova_compute[233724]: 2025-11-29 08:10:35.876 233728 DEBUG oslo_concurrency.lockutils [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:35 np0005539552 nova_compute[233724]: 2025-11-29 08:10:35.876 233728 DEBUG oslo_concurrency.lockutils [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:35 np0005539552 nova_compute[233724]: 2025-11-29 08:10:35.877 233728 DEBUG oslo_concurrency.lockutils [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:35 np0005539552 nova_compute[233724]: 2025-11-29 08:10:35.879 233728 DEBUG nova.virt.libvirt.driver [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Start _get_guest_xml network_info=[{"id": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "address": "fa:16:3e:47:01:d2", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "vif_mac": "fa:16:3e:47:01:d2"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bdd57b3-15", "ovs_interfaceid": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:10:35 np0005539552 nova_compute[233724]: 2025-11-29 08:10:35.884 233728 WARNING nova.virt.libvirt.driver [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:10:35 np0005539552 nova_compute[233724]: 2025-11-29 08:10:35.888 233728 DEBUG nova.virt.libvirt.host [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 03:10:35 np0005539552 nova_compute[233724]: 2025-11-29 08:10:35.889 233728 DEBUG nova.virt.libvirt.host [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 03:10:35 np0005539552 nova_compute[233724]: 2025-11-29 08:10:35.893 233728 DEBUG nova.virt.libvirt.host [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 03:10:35 np0005539552 nova_compute[233724]: 2025-11-29 08:10:35.893 233728 DEBUG nova.virt.libvirt.host [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 03:10:35 np0005539552 nova_compute[233724]: 2025-11-29 08:10:35.894 233728 DEBUG nova.virt.libvirt.driver [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 03:10:35 np0005539552 nova_compute[233724]: 2025-11-29 08:10:35.894 233728 DEBUG nova.virt.hardware [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='709b029f-0458-4e40-a6ee-e1e02b48c06c',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 03:10:35 np0005539552 nova_compute[233724]: 2025-11-29 08:10:35.895 233728 DEBUG nova.virt.hardware [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 03:10:35 np0005539552 nova_compute[233724]: 2025-11-29 08:10:35.895 233728 DEBUG nova.virt.hardware [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 03:10:35 np0005539552 nova_compute[233724]: 2025-11-29 08:10:35.895 233728 DEBUG nova.virt.hardware [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 03:10:35 np0005539552 nova_compute[233724]: 2025-11-29 08:10:35.895 233728 DEBUG nova.virt.hardware [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 03:10:35 np0005539552 nova_compute[233724]: 2025-11-29 08:10:35.895 233728 DEBUG nova.virt.hardware [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 03:10:35 np0005539552 nova_compute[233724]: 2025-11-29 08:10:35.896 233728 DEBUG nova.virt.hardware [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 03:10:35 np0005539552 nova_compute[233724]: 2025-11-29 08:10:35.896 233728 DEBUG nova.virt.hardware [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 03:10:35 np0005539552 nova_compute[233724]: 2025-11-29 08:10:35.896 233728 DEBUG nova.virt.hardware [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 03:10:35 np0005539552 nova_compute[233724]: 2025-11-29 08:10:35.896 233728 DEBUG nova.virt.hardware [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 03:10:35 np0005539552 nova_compute[233724]: 2025-11-29 08:10:35.896 233728 DEBUG nova.virt.hardware [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 03:10:35 np0005539552 nova_compute[233724]: 2025-11-29 08:10:35.897 233728 DEBUG nova.objects.instance [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lazy-loading 'vcpu_model' on Instance uuid b373b176-ee91-41a8-a80a-96c957639455 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:10:35 np0005539552 nova_compute[233724]: 2025-11-29 08:10:35.911 233728 DEBUG oslo_concurrency.processutils [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:10:36 np0005539552 nova_compute[233724]: 2025-11-29 08:10:36.232 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:10:36 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:10:36 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/642500779' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:10:36 np0005539552 nova_compute[233724]: 2025-11-29 08:10:36.353 233728 DEBUG oslo_concurrency.processutils [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:10:36 np0005539552 nova_compute[233724]: 2025-11-29 08:10:36.389 233728 DEBUG oslo_concurrency.processutils [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:10:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:36.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:36 np0005539552 nova_compute[233724]: 2025-11-29 08:10:36.884 233728 DEBUG oslo_concurrency.processutils [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:10:36 np0005539552 nova_compute[233724]: 2025-11-29 08:10:36.888 233728 DEBUG nova.virt.libvirt.vif [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1182352036',display_name='tempest-ServerDiskConfigTestJSON-server-1182352036',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1182352036',id=78,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:10:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8384e5887c0948f5876c019d50057152',ramdisk_id='',reservation_id='r-i00kkfjg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio
',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-767135984',owner_user_name='tempest-ServerDiskConfigTestJSON-767135984-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:10:31Z,user_data=None,user_id='9ab0114aca6149af994da2b9052c1368',uuid=b373b176-ee91-41a8-a80a-96c957639455,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "address": "fa:16:3e:47:01:d2", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "vif_mac": "fa:16:3e:47:01:d2"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bdd57b3-15", "ovs_interfaceid": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 03:10:36 np0005539552 nova_compute[233724]: 2025-11-29 08:10:36.889 233728 DEBUG nova.network.os_vif_util [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converting VIF {"id": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "address": "fa:16:3e:47:01:d2", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "vif_mac": "fa:16:3e:47:01:d2"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bdd57b3-15", "ovs_interfaceid": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:10:36 np0005539552 nova_compute[233724]: 2025-11-29 08:10:36.891 233728 DEBUG nova.network.os_vif_util [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:01:d2,bridge_name='br-int',has_traffic_filtering=True,id=6bdd57b3-15f4-46ef-bab3-67925c3606c5,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bdd57b3-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:10:36 np0005539552 nova_compute[233724]: 2025-11-29 08:10:36.896 233728 DEBUG nova.virt.libvirt.driver [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:10:36 np0005539552 nova_compute[233724]:  <uuid>b373b176-ee91-41a8-a80a-96c957639455</uuid>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:  <name>instance-0000004e</name>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:  <memory>196608</memory>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:10:36 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-1182352036</nova:name>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:10:35</nova:creationTime>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.micro">
Nov 29 03:10:36 np0005539552 nova_compute[233724]:        <nova:memory>192</nova:memory>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:        <nova:user uuid="9ab0114aca6149af994da2b9052c1368">tempest-ServerDiskConfigTestJSON-767135984-project-member</nova:user>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:        <nova:project uuid="8384e5887c0948f5876c019d50057152">tempest-ServerDiskConfigTestJSON-767135984</nova:project>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:        <nova:port uuid="6bdd57b3-15f4-46ef-bab3-67925c3606c5">
Nov 29 03:10:36 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:      <entry name="serial">b373b176-ee91-41a8-a80a-96c957639455</entry>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:      <entry name="uuid">b373b176-ee91-41a8-a80a-96c957639455</entry>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:10:36 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/b373b176-ee91-41a8-a80a-96c957639455_disk">
Nov 29 03:10:36 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:10:36 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:10:36 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/b373b176-ee91-41a8-a80a-96c957639455_disk.config">
Nov 29 03:10:36 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:10:36 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:10:36 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:47:01:d2"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:      <target dev="tap6bdd57b3-15"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:10:36 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/b373b176-ee91-41a8-a80a-96c957639455/console.log" append="off"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:10:36 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:10:36 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:10:36 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:10:36 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:10:36 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 03:10:36 np0005539552 nova_compute[233724]: 2025-11-29 08:10:36.899 233728 DEBUG nova.virt.libvirt.vif [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1182352036',display_name='tempest-ServerDiskConfigTestJSON-server-1182352036',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1182352036',id=78,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:10:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8384e5887c0948f5876c019d50057152',ramdisk_id='',reservation_id='r-i00kkfjg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio
',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-767135984',owner_user_name='tempest-ServerDiskConfigTestJSON-767135984-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:10:31Z,user_data=None,user_id='9ab0114aca6149af994da2b9052c1368',uuid=b373b176-ee91-41a8-a80a-96c957639455,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "address": "fa:16:3e:47:01:d2", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "vif_mac": "fa:16:3e:47:01:d2"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bdd57b3-15", "ovs_interfaceid": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 03:10:36 np0005539552 nova_compute[233724]: 2025-11-29 08:10:36.900 233728 DEBUG nova.network.os_vif_util [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converting VIF {"id": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "address": "fa:16:3e:47:01:d2", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "vif_mac": "fa:16:3e:47:01:d2"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bdd57b3-15", "ovs_interfaceid": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:10:36 np0005539552 nova_compute[233724]: 2025-11-29 08:10:36.901 233728 DEBUG nova.network.os_vif_util [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:01:d2,bridge_name='br-int',has_traffic_filtering=True,id=6bdd57b3-15f4-46ef-bab3-67925c3606c5,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bdd57b3-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:10:36 np0005539552 nova_compute[233724]: 2025-11-29 08:10:36.902 233728 DEBUG os_vif [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:01:d2,bridge_name='br-int',has_traffic_filtering=True,id=6bdd57b3-15f4-46ef-bab3-67925c3606c5,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bdd57b3-15') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:10:36 np0005539552 nova_compute[233724]: 2025-11-29 08:10:36.903 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:36 np0005539552 nova_compute[233724]: 2025-11-29 08:10:36.904 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:36 np0005539552 nova_compute[233724]: 2025-11-29 08:10:36.905 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:10:36 np0005539552 nova_compute[233724]: 2025-11-29 08:10:36.908 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:36 np0005539552 nova_compute[233724]: 2025-11-29 08:10:36.908 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6bdd57b3-15, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:36 np0005539552 nova_compute[233724]: 2025-11-29 08:10:36.909 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6bdd57b3-15, col_values=(('external_ids', {'iface-id': '6bdd57b3-15f4-46ef-bab3-67925c3606c5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:47:01:d2', 'vm-uuid': 'b373b176-ee91-41a8-a80a-96c957639455'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:36 np0005539552 nova_compute[233724]: 2025-11-29 08:10:36.935 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:36 np0005539552 NetworkManager[48926]: <info>  [1764403836.9384] manager: (tap6bdd57b3-15): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/129)
Nov 29 03:10:36 np0005539552 nova_compute[233724]: 2025-11-29 08:10:36.938 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:10:36 np0005539552 nova_compute[233724]: 2025-11-29 08:10:36.949 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:36 np0005539552 nova_compute[233724]: 2025-11-29 08:10:36.951 233728 INFO os_vif [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:01:d2,bridge_name='br-int',has_traffic_filtering=True,id=6bdd57b3-15f4-46ef-bab3-67925c3606c5,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bdd57b3-15')#033[00m
Nov 29 03:10:37 np0005539552 nova_compute[233724]: 2025-11-29 08:10:37.003 233728 DEBUG nova.network.neutron [req-be7811f5-fbb1-43ee-9950-8fe386d082af req-f307c895-4013-4f06-b46e-e99b1968d49a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Updated VIF entry in instance network info cache for port 6bdd57b3-15f4-46ef-bab3-67925c3606c5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:10:37 np0005539552 nova_compute[233724]: 2025-11-29 08:10:37.004 233728 DEBUG nova.network.neutron [req-be7811f5-fbb1-43ee-9950-8fe386d082af req-f307c895-4013-4f06-b46e-e99b1968d49a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Updating instance_info_cache with network_info: [{"id": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "address": "fa:16:3e:47:01:d2", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bdd57b3-15", "ovs_interfaceid": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:10:37 np0005539552 nova_compute[233724]: 2025-11-29 08:10:37.017 233728 DEBUG nova.virt.libvirt.driver [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:10:37 np0005539552 nova_compute[233724]: 2025-11-29 08:10:37.017 233728 DEBUG nova.virt.libvirt.driver [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:10:37 np0005539552 nova_compute[233724]: 2025-11-29 08:10:37.017 233728 DEBUG nova.virt.libvirt.driver [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] No VIF found with MAC fa:16:3e:47:01:d2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:10:37 np0005539552 nova_compute[233724]: 2025-11-29 08:10:37.018 233728 INFO nova.virt.libvirt.driver [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Using config drive#033[00m
Nov 29 03:10:37 np0005539552 nova_compute[233724]: 2025-11-29 08:10:37.053 233728 DEBUG oslo_concurrency.lockutils [req-be7811f5-fbb1-43ee-9950-8fe386d082af req-f307c895-4013-4f06-b46e-e99b1968d49a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-b373b176-ee91-41a8-a80a-96c957639455" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:10:37 np0005539552 NetworkManager[48926]: <info>  [1764403837.0943] manager: (tap6bdd57b3-15): new Tun device (/org/freedesktop/NetworkManager/Devices/130)
Nov 29 03:10:37 np0005539552 kernel: tap6bdd57b3-15: entered promiscuous mode
Nov 29 03:10:37 np0005539552 nova_compute[233724]: 2025-11-29 08:10:37.099 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:37 np0005539552 ovn_controller[133798]: 2025-11-29T08:10:37Z|00239|binding|INFO|Claiming lport 6bdd57b3-15f4-46ef-bab3-67925c3606c5 for this chassis.
Nov 29 03:10:37 np0005539552 ovn_controller[133798]: 2025-11-29T08:10:37Z|00240|binding|INFO|6bdd57b3-15f4-46ef-bab3-67925c3606c5: Claiming fa:16:3e:47:01:d2 10.100.0.4
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:37.107 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:01:d2 10.100.0.4'], port_security=['fa:16:3e:47:01:d2 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'b373b176-ee91-41a8-a80a-96c957639455', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8384e5887c0948f5876c019d50057152', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'a1e9ed13-b0e1-45c0-9be6-be0f145466a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c0727149-3377-4d23-9d8d-0006462cd03e, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=6bdd57b3-15f4-46ef-bab3-67925c3606c5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:37.109 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 6bdd57b3-15f4-46ef-bab3-67925c3606c5 in datapath 65f88c5a-8801-4bc1-9eed-15e2bab4717d bound to our chassis#033[00m
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:37.110 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 65f88c5a-8801-4bc1-9eed-15e2bab4717d#033[00m
Nov 29 03:10:37 np0005539552 ovn_controller[133798]: 2025-11-29T08:10:37Z|00241|binding|INFO|Setting lport 6bdd57b3-15f4-46ef-bab3-67925c3606c5 ovn-installed in OVS
Nov 29 03:10:37 np0005539552 ovn_controller[133798]: 2025-11-29T08:10:37Z|00242|binding|INFO|Setting lport 6bdd57b3-15f4-46ef-bab3-67925c3606c5 up in Southbound
Nov 29 03:10:37 np0005539552 nova_compute[233724]: 2025-11-29 08:10:37.123 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:37.122 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[12ac7d4e-7dda-4674-a327-815e45722b5d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:37.125 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap65f88c5a-81 in ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:37.127 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap65f88c5a-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:37.127 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[79088cfa-b514-46fd-856b-872f64b33cf3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:37 np0005539552 nova_compute[233724]: 2025-11-29 08:10:37.128 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:37.129 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[87546f6b-f21b-4e6d-86d1-c277c061198e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:37 np0005539552 systemd-machined[196379]: New machine qemu-29-instance-0000004e.
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:37.141 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[460c487e-74be-426b-8c5a-497808ab9136]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:37 np0005539552 systemd[1]: Started Virtual Machine qemu-29-instance-0000004e.
Nov 29 03:10:37 np0005539552 systemd-udevd[265817]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:37.167 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a82c6144-3ab7-4a0a-bdcc-3447176ce844]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:37 np0005539552 NetworkManager[48926]: <info>  [1764403837.1787] device (tap6bdd57b3-15): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:10:37 np0005539552 NetworkManager[48926]: <info>  [1764403837.1795] device (tap6bdd57b3-15): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:37.198 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[9ad1d2a6-52e3-497c-a6fb-3f5b41f32b7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:37 np0005539552 NetworkManager[48926]: <info>  [1764403837.2137] manager: (tap65f88c5a-80): new Veth device (/org/freedesktop/NetworkManager/Devices/131)
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:37.212 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2a32b9e4-af85-4d38-9246-4c3d7c2e5bc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:37.243 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[2ed22fde-372e-4e48-8ce3-ac5507153d5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:37.246 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[13c0e377-be21-4099-9777-8a4965489df6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:37 np0005539552 NetworkManager[48926]: <info>  [1764403837.2656] device (tap65f88c5a-80): carrier: link connected
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:37.271 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[d6d6b7e7-5b70-4871-9f0a-a363f0f07989]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:37.286 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c38347ed-1329-4ddc-a7c4-25820954e632]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap65f88c5a-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:22:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688295, 'reachable_time': 38886, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265847, 'error': None, 'target': 'ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:37.299 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[771c6f77-82bd-4d30-b26b-8d26562803f6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feda:227e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 688295, 'tstamp': 688295}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265848, 'error': None, 'target': 'ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:37.314 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6553c721-4ba3-4202-a3cf-2ee5388365a3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap65f88c5a-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:22:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688295, 'reachable_time': 38886, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 265849, 'error': None, 'target': 'ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:37 np0005539552 nova_compute[233724]: 2025-11-29 08:10:37.320 233728 DEBUG nova.compute.manager [req-6a7af161-c913-4056-bb35-9c49070edbe9 req-660de545-3468-42ce-affa-23e034b4fa05 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Received event network-vif-plugged-6bdd57b3-15f4-46ef-bab3-67925c3606c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:37 np0005539552 nova_compute[233724]: 2025-11-29 08:10:37.320 233728 DEBUG oslo_concurrency.lockutils [req-6a7af161-c913-4056-bb35-9c49070edbe9 req-660de545-3468-42ce-affa-23e034b4fa05 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "b373b176-ee91-41a8-a80a-96c957639455-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:37 np0005539552 nova_compute[233724]: 2025-11-29 08:10:37.320 233728 DEBUG oslo_concurrency.lockutils [req-6a7af161-c913-4056-bb35-9c49070edbe9 req-660de545-3468-42ce-affa-23e034b4fa05 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b373b176-ee91-41a8-a80a-96c957639455-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:37 np0005539552 nova_compute[233724]: 2025-11-29 08:10:37.320 233728 DEBUG oslo_concurrency.lockutils [req-6a7af161-c913-4056-bb35-9c49070edbe9 req-660de545-3468-42ce-affa-23e034b4fa05 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b373b176-ee91-41a8-a80a-96c957639455-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:37 np0005539552 nova_compute[233724]: 2025-11-29 08:10:37.320 233728 DEBUG nova.compute.manager [req-6a7af161-c913-4056-bb35-9c49070edbe9 req-660de545-3468-42ce-affa-23e034b4fa05 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] No waiting events found dispatching network-vif-plugged-6bdd57b3-15f4-46ef-bab3-67925c3606c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:10:37 np0005539552 nova_compute[233724]: 2025-11-29 08:10:37.320 233728 WARNING nova.compute.manager [req-6a7af161-c913-4056-bb35-9c49070edbe9 req-660de545-3468-42ce-affa-23e034b4fa05 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Received unexpected event network-vif-plugged-6bdd57b3-15f4-46ef-bab3-67925c3606c5 for instance with vm_state active and task_state resize_finish.#033[00m
Nov 29 03:10:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:37.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:37.343 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4fc877c6-6ab8-4493-8b3b-28636b428a33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:37.403 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[fca66e16-31cd-4b41-a56b-093b0e6cdedb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:37.404 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap65f88c5a-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:37.405 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:37.405 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap65f88c5a-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:10:37 np0005539552 nova_compute[233724]: 2025-11-29 08:10:37.406 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:10:37 np0005539552 kernel: tap65f88c5a-80: entered promiscuous mode
Nov 29 03:10:37 np0005539552 NetworkManager[48926]: <info>  [1764403837.4078] manager: (tap65f88c5a-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/132)
Nov 29 03:10:37 np0005539552 nova_compute[233724]: 2025-11-29 08:10:37.408 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:37.409 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap65f88c5a-80, col_values=(('external_ids', {'iface-id': 'dd9b6149-e4f7-45dd-a89e-de246cf739ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:10:37 np0005539552 nova_compute[233724]: 2025-11-29 08:10:37.410 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:10:37 np0005539552 nova_compute[233724]: 2025-11-29 08:10:37.411 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:37.412 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/65f88c5a-8801-4bc1-9eed-15e2bab4717d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/65f88c5a-8801-4bc1-9eed-15e2bab4717d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:37.413 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6368080b-6753-43f7-b977-da989def90e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:37.414 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:10:37 np0005539552 ovn_controller[133798]: 2025-11-29T08:10:37Z|00243|binding|INFO|Releasing lport dd9b6149-e4f7-45dd-a89e-de246cf739ae from this chassis (sb_readonly=0)
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-65f88c5a-8801-4bc1-9eed-15e2bab4717d
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/65f88c5a-8801-4bc1-9eed-15e2bab4717d.pid.haproxy
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 65f88c5a-8801-4bc1-9eed-15e2bab4717d
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 03:10:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:37.415 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'env', 'PROCESS_TAG=haproxy-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/65f88c5a-8801-4bc1-9eed-15e2bab4717d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 03:10:37 np0005539552 nova_compute[233724]: 2025-11-29 08:10:37.429 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:10:37 np0005539552 podman[265881]: 2025-11-29 08:10:37.79714199 +0000 UTC m=+0.053295020 container create 38815bd3ea520563ff9b8f70dd9a9d3cd1c99b6e6e3b4c0408dc6a857e54dc91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:10:37 np0005539552 systemd[1]: Started libpod-conmon-38815bd3ea520563ff9b8f70dd9a9d3cd1c99b6e6e3b4c0408dc6a857e54dc91.scope.
Nov 29 03:10:37 np0005539552 podman[265881]: 2025-11-29 08:10:37.768693502 +0000 UTC m=+0.024846552 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:10:37 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:10:37 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57e7dd029655d97cb32915f54a57e02d21a29e39494d1451c0b5105ac6013617/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:10:37 np0005539552 podman[265881]: 2025-11-29 08:10:37.882204976 +0000 UTC m=+0.138358046 container init 38815bd3ea520563ff9b8f70dd9a9d3cd1c99b6e6e3b4c0408dc6a857e54dc91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 03:10:37 np0005539552 podman[265881]: 2025-11-29 08:10:37.890371807 +0000 UTC m=+0.146524837 container start 38815bd3ea520563ff9b8f70dd9a9d3cd1c99b6e6e3b4c0408dc6a857e54dc91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:10:37 np0005539552 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[265896]: [NOTICE]   (265900) : New worker (265902) forked
Nov 29 03:10:37 np0005539552 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[265896]: [NOTICE]   (265900) : Loading success.
Nov 29 03:10:38 np0005539552 nova_compute[233724]: 2025-11-29 08:10:38.162 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403838.161946, b373b176-ee91-41a8-a80a-96c957639455 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:10:38 np0005539552 nova_compute[233724]: 2025-11-29 08:10:38.164 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: b373b176-ee91-41a8-a80a-96c957639455] VM Resumed (Lifecycle Event)
Nov 29 03:10:38 np0005539552 nova_compute[233724]: 2025-11-29 08:10:38.166 233728 DEBUG nova.compute.manager [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:10:38 np0005539552 nova_compute[233724]: 2025-11-29 08:10:38.170 233728 INFO nova.virt.libvirt.driver [-] [instance: b373b176-ee91-41a8-a80a-96c957639455] Instance running successfully.
Nov 29 03:10:38 np0005539552 virtqemud[233098]: argument unsupported: QEMU guest agent is not configured
Nov 29 03:10:38 np0005539552 nova_compute[233724]: 2025-11-29 08:10:38.174 233728 DEBUG nova.virt.libvirt.guest [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 29 03:10:38 np0005539552 nova_compute[233724]: 2025-11-29 08:10:38.174 233728 DEBUG nova.virt.libvirt.driver [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Nov 29 03:10:38 np0005539552 nova_compute[233724]: 2025-11-29 08:10:38.189 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: b373b176-ee91-41a8-a80a-96c957639455] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:10:38 np0005539552 nova_compute[233724]: 2025-11-29 08:10:38.193 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: b373b176-ee91-41a8-a80a-96c957639455] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:10:38 np0005539552 nova_compute[233724]: 2025-11-29 08:10:38.234 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: b373b176-ee91-41a8-a80a-96c957639455] During sync_power_state the instance has a pending task (resize_finish). Skip.
Nov 29 03:10:38 np0005539552 nova_compute[233724]: 2025-11-29 08:10:38.235 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403838.1634011, b373b176-ee91-41a8-a80a-96c957639455 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:10:38 np0005539552 nova_compute[233724]: 2025-11-29 08:10:38.235 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: b373b176-ee91-41a8-a80a-96c957639455] VM Started (Lifecycle Event)
Nov 29 03:10:38 np0005539552 nova_compute[233724]: 2025-11-29 08:10:38.270 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: b373b176-ee91-41a8-a80a-96c957639455] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:10:38 np0005539552 nova_compute[233724]: 2025-11-29 08:10:38.273 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: b373b176-ee91-41a8-a80a-96c957639455] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:10:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:10:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:38.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:10:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:10:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2561610012' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:10:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:10:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2561610012' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:10:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:39.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:39 np0005539552 nova_compute[233724]: 2025-11-29 08:10:39.442 233728 DEBUG nova.compute.manager [req-f7a24ce8-364d-443b-b757-f317298fd4c4 req-eaa1aff8-b4bf-45f8-9262-ec69926b58f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Received event network-vif-plugged-6bdd57b3-15f4-46ef-bab3-67925c3606c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:10:39 np0005539552 nova_compute[233724]: 2025-11-29 08:10:39.443 233728 DEBUG oslo_concurrency.lockutils [req-f7a24ce8-364d-443b-b757-f317298fd4c4 req-eaa1aff8-b4bf-45f8-9262-ec69926b58f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "b373b176-ee91-41a8-a80a-96c957639455-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:10:39 np0005539552 nova_compute[233724]: 2025-11-29 08:10:39.445 233728 DEBUG oslo_concurrency.lockutils [req-f7a24ce8-364d-443b-b757-f317298fd4c4 req-eaa1aff8-b4bf-45f8-9262-ec69926b58f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b373b176-ee91-41a8-a80a-96c957639455-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:10:39 np0005539552 nova_compute[233724]: 2025-11-29 08:10:39.445 233728 DEBUG oslo_concurrency.lockutils [req-f7a24ce8-364d-443b-b757-f317298fd4c4 req-eaa1aff8-b4bf-45f8-9262-ec69926b58f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b373b176-ee91-41a8-a80a-96c957639455-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:10:39 np0005539552 nova_compute[233724]: 2025-11-29 08:10:39.445 233728 DEBUG nova.compute.manager [req-f7a24ce8-364d-443b-b757-f317298fd4c4 req-eaa1aff8-b4bf-45f8-9262-ec69926b58f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] No waiting events found dispatching network-vif-plugged-6bdd57b3-15f4-46ef-bab3-67925c3606c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:10:39 np0005539552 nova_compute[233724]: 2025-11-29 08:10:39.445 233728 WARNING nova.compute.manager [req-f7a24ce8-364d-443b-b757-f317298fd4c4 req-eaa1aff8-b4bf-45f8-9262-ec69926b58f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Received unexpected event network-vif-plugged-6bdd57b3-15f4-46ef-bab3-67925c3606c5 for instance with vm_state resized and task_state None.
Nov 29 03:10:40 np0005539552 nova_compute[233724]: 2025-11-29 08:10:40.721 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:10:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:40.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:10:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:41.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:10:41 np0005539552 ovn_controller[133798]: 2025-11-29T08:10:41Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8a:9b:b1 10.100.0.11
Nov 29 03:10:41 np0005539552 ovn_controller[133798]: 2025-11-29T08:10:41Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8a:9b:b1 10.100.0.11
Nov 29 03:10:41 np0005539552 nova_compute[233724]: 2025-11-29 08:10:41.938 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:10:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:42.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:43.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e271 e271: 3 total, 3 up, 3 in
Nov 29 03:10:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:44.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:10:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:45.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:10:45 np0005539552 nova_compute[233724]: 2025-11-29 08:10:45.723 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:10:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:46.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:46 np0005539552 nova_compute[233724]: 2025-11-29 08:10:46.941 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:10:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:47.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:48.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:49.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:50 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:50.585 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:10:50 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:50.586 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:10:50 np0005539552 nova_compute[233724]: 2025-11-29 08:10:50.628 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:10:50 np0005539552 nova_compute[233724]: 2025-11-29 08:10:50.725 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:10:50 np0005539552 nova_compute[233724]: 2025-11-29 08:10:50.758 233728 DEBUG oslo_concurrency.lockutils [None req-c9f6e06b-3b20-4f25-9a71-50c4cdb6e4fa aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Acquiring lock "6cfc4165-90ce-407e-8236-34f2147deb51" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:10:50 np0005539552 nova_compute[233724]: 2025-11-29 08:10:50.758 233728 DEBUG oslo_concurrency.lockutils [None req-c9f6e06b-3b20-4f25-9a71-50c4cdb6e4fa aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Lock "6cfc4165-90ce-407e-8236-34f2147deb51" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:10:50 np0005539552 nova_compute[233724]: 2025-11-29 08:10:50.758 233728 DEBUG oslo_concurrency.lockutils [None req-c9f6e06b-3b20-4f25-9a71-50c4cdb6e4fa aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Acquiring lock "6cfc4165-90ce-407e-8236-34f2147deb51-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:10:50 np0005539552 nova_compute[233724]: 2025-11-29 08:10:50.759 233728 DEBUG oslo_concurrency.lockutils [None req-c9f6e06b-3b20-4f25-9a71-50c4cdb6e4fa aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Lock "6cfc4165-90ce-407e-8236-34f2147deb51-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:10:50 np0005539552 nova_compute[233724]: 2025-11-29 08:10:50.759 233728 DEBUG oslo_concurrency.lockutils [None req-c9f6e06b-3b20-4f25-9a71-50c4cdb6e4fa aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Lock "6cfc4165-90ce-407e-8236-34f2147deb51-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:10:50 np0005539552 nova_compute[233724]: 2025-11-29 08:10:50.760 233728 INFO nova.compute.manager [None req-c9f6e06b-3b20-4f25-9a71-50c4cdb6e4fa aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Terminating instance
Nov 29 03:10:50 np0005539552 nova_compute[233724]: 2025-11-29 08:10:50.760 233728 DEBUG nova.compute.manager [None req-c9f6e06b-3b20-4f25-9a71-50c4cdb6e4fa aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 03:10:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:50.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:51 np0005539552 kernel: tap316eb9f1-38 (unregistering): left promiscuous mode
Nov 29 03:10:51 np0005539552 NetworkManager[48926]: <info>  [1764403851.1957] device (tap316eb9f1-38): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:10:51 np0005539552 nova_compute[233724]: 2025-11-29 08:10:51.203 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:10:51 np0005539552 ovn_controller[133798]: 2025-11-29T08:10:51Z|00244|binding|INFO|Releasing lport 316eb9f1-38ba-4e49-8475-61d32e3ecc4f from this chassis (sb_readonly=0)
Nov 29 03:10:51 np0005539552 ovn_controller[133798]: 2025-11-29T08:10:51Z|00245|binding|INFO|Setting lport 316eb9f1-38ba-4e49-8475-61d32e3ecc4f down in Southbound
Nov 29 03:10:51 np0005539552 ovn_controller[133798]: 2025-11-29T08:10:51Z|00246|binding|INFO|Removing iface tap316eb9f1-38 ovn-installed in OVS
Nov 29 03:10:51 np0005539552 nova_compute[233724]: 2025-11-29 08:10:51.205 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:51 np0005539552 nova_compute[233724]: 2025-11-29 08:10:51.221 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:51.227 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8a:9b:b1 10.100.0.11'], port_security=['fa:16:3e:8a:9b:b1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '6cfc4165-90ce-407e-8236-34f2147deb51', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-63a8b230-0938-49d5-bd27-d250421cd1a2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '01e87a17aae64e93bdb507d58a515a3f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4d51e162-49a6-428f-b147-b19102c65732', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.248'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1c940b2e-658f-4996-b3f8-38625da2db31, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=316eb9f1-38ba-4e49-8475-61d32e3ecc4f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:10:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:51.228 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 316eb9f1-38ba-4e49-8475-61d32e3ecc4f in datapath 63a8b230-0938-49d5-bd27-d250421cd1a2 unbound from our chassis
Nov 29 03:10:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:51.230 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 63a8b230-0938-49d5-bd27-d250421cd1a2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 03:10:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:51.231 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[958989a2-cc45-4762-a3c0-0a1e194c4aba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:10:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:51.232 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-63a8b230-0938-49d5-bd27-d250421cd1a2 namespace which is not needed anymore
Nov 29 03:10:51 np0005539552 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000004f.scope: Deactivated successfully.
Nov 29 03:10:51 np0005539552 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000004f.scope: Consumed 14.162s CPU time.
Nov 29 03:10:51 np0005539552 systemd-machined[196379]: Machine qemu-28-instance-0000004f terminated.
Nov 29 03:10:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:51.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:51 np0005539552 neutron-haproxy-ovnmeta-63a8b230-0938-49d5-bd27-d250421cd1a2[265523]: [NOTICE]   (265528) : haproxy version is 2.8.14-c23fe91
Nov 29 03:10:51 np0005539552 neutron-haproxy-ovnmeta-63a8b230-0938-49d5-bd27-d250421cd1a2[265523]: [NOTICE]   (265528) : path to executable is /usr/sbin/haproxy
Nov 29 03:10:51 np0005539552 neutron-haproxy-ovnmeta-63a8b230-0938-49d5-bd27-d250421cd1a2[265523]: [WARNING]  (265528) : Exiting Master process...
Nov 29 03:10:51 np0005539552 neutron-haproxy-ovnmeta-63a8b230-0938-49d5-bd27-d250421cd1a2[265523]: [ALERT]    (265528) : Current worker (265530) exited with code 143 (Terminated)
Nov 29 03:10:51 np0005539552 neutron-haproxy-ovnmeta-63a8b230-0938-49d5-bd27-d250421cd1a2[265523]: [WARNING]  (265528) : All workers exited. Exiting... (0)
Nov 29 03:10:51 np0005539552 systemd[1]: libpod-2bfcb4dbd388bec460259985a8cbeeee9ce6d339c422b424ee5f5bbf61e99b7e.scope: Deactivated successfully.
Nov 29 03:10:51 np0005539552 nova_compute[233724]: 2025-11-29 08:10:51.395 233728 INFO nova.virt.libvirt.driver [-] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Instance destroyed successfully.
Nov 29 03:10:51 np0005539552 nova_compute[233724]: 2025-11-29 08:10:51.395 233728 DEBUG nova.objects.instance [None req-c9f6e06b-3b20-4f25-9a71-50c4cdb6e4fa aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Lazy-loading 'resources' on Instance uuid 6cfc4165-90ce-407e-8236-34f2147deb51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:10:51 np0005539552 podman[265983]: 2025-11-29 08:10:51.397165338 +0000 UTC m=+0.059754414 container died 2bfcb4dbd388bec460259985a8cbeeee9ce6d339c422b424ee5f5bbf61e99b7e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-63a8b230-0938-49d5-bd27-d250421cd1a2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 03:10:51 np0005539552 nova_compute[233724]: 2025-11-29 08:10:51.412 233728 DEBUG nova.virt.libvirt.vif [None req-c9f6e06b-3b20-4f25-9a71-50c4cdb6e4fa aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:10:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-937614641',display_name='tempest-ServersTestJSON-server-937614641',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestjson-server-937614641',id=79,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEC6coBnEzsPsu9uqhkRNWbiAFZrvi4jrlwmuO5/v51K6GFLCyNMqw3JNIcILFAQrHu57Gc0oW4eaFRdy+GSMK/n2f2ovUczJzyoIu/uvpkXVKRLoe5GTeAIsJfd85D8/w==',key_name='tempest-keypair-2052461960',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:10:28Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='01e87a17aae64e93bdb507d58a515a3f',ramdisk_id='',reservation_id='r-gav6qgb8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-784160324',owner_user_name='tempest-ServersTestJSON-784160324-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:10:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aa730ff43688414fbafb9fb85e566a1a',uuid=6cfc4165-90ce-407e-8236-34f2147deb51,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "316eb9f1-38ba-4e49-8475-61d32e3ecc4f", "address": "fa:16:3e:8a:9b:b1", "network": {"id": "63a8b230-0938-49d5-bd27-d250421cd1a2", "bridge": "br-int", "label": "tempest-ServersTestJSON-149901082-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "01e87a17aae64e93bdb507d58a515a3f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap316eb9f1-38", "ovs_interfaceid": "316eb9f1-38ba-4e49-8475-61d32e3ecc4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:10:51 np0005539552 nova_compute[233724]: 2025-11-29 08:10:51.413 233728 DEBUG nova.network.os_vif_util [None req-c9f6e06b-3b20-4f25-9a71-50c4cdb6e4fa aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Converting VIF {"id": "316eb9f1-38ba-4e49-8475-61d32e3ecc4f", "address": "fa:16:3e:8a:9b:b1", "network": {"id": "63a8b230-0938-49d5-bd27-d250421cd1a2", "bridge": "br-int", "label": "tempest-ServersTestJSON-149901082-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "01e87a17aae64e93bdb507d58a515a3f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap316eb9f1-38", "ovs_interfaceid": "316eb9f1-38ba-4e49-8475-61d32e3ecc4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:10:51 np0005539552 nova_compute[233724]: 2025-11-29 08:10:51.414 233728 DEBUG nova.network.os_vif_util [None req-c9f6e06b-3b20-4f25-9a71-50c4cdb6e4fa aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8a:9b:b1,bridge_name='br-int',has_traffic_filtering=True,id=316eb9f1-38ba-4e49-8475-61d32e3ecc4f,network=Network(63a8b230-0938-49d5-bd27-d250421cd1a2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap316eb9f1-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:10:51 np0005539552 nova_compute[233724]: 2025-11-29 08:10:51.415 233728 DEBUG os_vif [None req-c9f6e06b-3b20-4f25-9a71-50c4cdb6e4fa aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8a:9b:b1,bridge_name='br-int',has_traffic_filtering=True,id=316eb9f1-38ba-4e49-8475-61d32e3ecc4f,network=Network(63a8b230-0938-49d5-bd27-d250421cd1a2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap316eb9f1-38') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 03:10:51 np0005539552 nova_compute[233724]: 2025-11-29 08:10:51.417 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:10:51 np0005539552 nova_compute[233724]: 2025-11-29 08:10:51.417 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap316eb9f1-38, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:10:51 np0005539552 nova_compute[233724]: 2025-11-29 08:10:51.421 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:10:51 np0005539552 nova_compute[233724]: 2025-11-29 08:10:51.422 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 03:10:51 np0005539552 nova_compute[233724]: 2025-11-29 08:10:51.423 233728 INFO os_vif [None req-c9f6e06b-3b20-4f25-9a71-50c4cdb6e4fa aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8a:9b:b1,bridge_name='br-int',has_traffic_filtering=True,id=316eb9f1-38ba-4e49-8475-61d32e3ecc4f,network=Network(63a8b230-0938-49d5-bd27-d250421cd1a2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap316eb9f1-38')
Nov 29 03:10:51 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2bfcb4dbd388bec460259985a8cbeeee9ce6d339c422b424ee5f5bbf61e99b7e-userdata-shm.mount: Deactivated successfully.
Nov 29 03:10:51 np0005539552 systemd[1]: var-lib-containers-storage-overlay-d1cdb1331fc6521a1fafa3ad8b5bcbf5a953a85e69c0d1433966d174abe4bcda-merged.mount: Deactivated successfully.
Nov 29 03:10:51 np0005539552 podman[265983]: 2025-11-29 08:10:51.482445621 +0000 UTC m=+0.145034697 container cleanup 2bfcb4dbd388bec460259985a8cbeeee9ce6d339c422b424ee5f5bbf61e99b7e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-63a8b230-0938-49d5-bd27-d250421cd1a2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:10:51 np0005539552 systemd[1]: libpod-conmon-2bfcb4dbd388bec460259985a8cbeeee9ce6d339c422b424ee5f5bbf61e99b7e.scope: Deactivated successfully.
Nov 29 03:10:51 np0005539552 podman[266042]: 2025-11-29 08:10:51.548555765 +0000 UTC m=+0.045150350 container remove 2bfcb4dbd388bec460259985a8cbeeee9ce6d339c422b424ee5f5bbf61e99b7e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-63a8b230-0938-49d5-bd27-d250421cd1a2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:10:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:51.555 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e4aa5202-d8ab-4e4b-b59b-999c5136b8c1]: (4, ('Sat Nov 29 08:10:51 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-63a8b230-0938-49d5-bd27-d250421cd1a2 (2bfcb4dbd388bec460259985a8cbeeee9ce6d339c422b424ee5f5bbf61e99b7e)\n2bfcb4dbd388bec460259985a8cbeeee9ce6d339c422b424ee5f5bbf61e99b7e\nSat Nov 29 08:10:51 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-63a8b230-0938-49d5-bd27-d250421cd1a2 (2bfcb4dbd388bec460259985a8cbeeee9ce6d339c422b424ee5f5bbf61e99b7e)\n2bfcb4dbd388bec460259985a8cbeeee9ce6d339c422b424ee5f5bbf61e99b7e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:10:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:51.557 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2dbdad50-71a2-4167-8f95-18553bf9ec63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:10:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:51.558 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap63a8b230-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:10:51 np0005539552 nova_compute[233724]: 2025-11-29 08:10:51.560 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:10:51 np0005539552 kernel: tap63a8b230-00: left promiscuous mode
Nov 29 03:10:51 np0005539552 nova_compute[233724]: 2025-11-29 08:10:51.577 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:10:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:51.580 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4f7fad1c-f271-4882-8c8b-f300767ede7a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:10:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:51.587 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:10:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:51.599 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4f035d39-090e-4f52-bdc0-255b0aa9907a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:10:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:51.601 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ac783437-f14a-44c9-9e96-186e8626dd8d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:10:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:51.615 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[56ebdba2-9dd3-4961-8f50-eef7dd022573]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 687285, 'reachable_time': 38078, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266057, 'error': None, 'target': 'ovnmeta-63a8b230-0938-49d5-bd27-d250421cd1a2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:51 np0005539552 systemd[1]: run-netns-ovnmeta\x2d63a8b230\x2d0938\x2d49d5\x2dbd27\x2dd250421cd1a2.mount: Deactivated successfully.
Nov 29 03:10:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:51.620 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-63a8b230-0938-49d5-bd27-d250421cd1a2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 03:10:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:51.621 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[f3c88ded-2c9a-44e3-9576-8fd62f6ec6f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:10:52 np0005539552 ovn_controller[133798]: 2025-11-29T08:10:52Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:47:01:d2 10.100.0.4
Nov 29 03:10:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e272 e272: 3 total, 3 up, 3 in
Nov 29 03:10:52 np0005539552 podman[266060]: 2025-11-29 08:10:52.234968085 +0000 UTC m=+0.061530842 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:10:52 np0005539552 podman[266058]: 2025-11-29 08:10:52.25071805 +0000 UTC m=+0.081238414 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:10:52 np0005539552 podman[266061]: 2025-11-29 08:10:52.26850409 +0000 UTC m=+0.091264164 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, config_id=ovn_controller)
Nov 29 03:10:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:52 np0005539552 nova_compute[233724]: 2025-11-29 08:10:52.737 233728 INFO nova.virt.libvirt.driver [None req-c9f6e06b-3b20-4f25-9a71-50c4cdb6e4fa aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Deleting instance files /var/lib/nova/instances/6cfc4165-90ce-407e-8236-34f2147deb51_del
Nov 29 03:10:52 np0005539552 nova_compute[233724]: 2025-11-29 08:10:52.739 233728 INFO nova.virt.libvirt.driver [None req-c9f6e06b-3b20-4f25-9a71-50c4cdb6e4fa aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Deletion of /var/lib/nova/instances/6cfc4165-90ce-407e-8236-34f2147deb51_del complete
Nov 29 03:10:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:52.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:52 np0005539552 nova_compute[233724]: 2025-11-29 08:10:52.908 233728 INFO nova.compute.manager [None req-c9f6e06b-3b20-4f25-9a71-50c4cdb6e4fa aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Took 2.15 seconds to destroy the instance on the hypervisor.
Nov 29 03:10:52 np0005539552 nova_compute[233724]: 2025-11-29 08:10:52.909 233728 DEBUG oslo.service.loopingcall [None req-c9f6e06b-3b20-4f25-9a71-50c4cdb6e4fa aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 03:10:52 np0005539552 nova_compute[233724]: 2025-11-29 08:10:52.909 233728 DEBUG nova.compute.manager [-] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 03:10:52 np0005539552 nova_compute[233724]: 2025-11-29 08:10:52.910 233728 DEBUG nova.network.neutron [-] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 03:10:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:10:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:53.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:10:54 np0005539552 nova_compute[233724]: 2025-11-29 08:10:54.622 233728 DEBUG nova.compute.manager [req-4e765d62-381a-4147-88d6-0b4d06df1ce5 req-eea2fece-a05f-4a09-8661-0ab88898c3aa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Received event network-vif-unplugged-316eb9f1-38ba-4e49-8475-61d32e3ecc4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:54 np0005539552 nova_compute[233724]: 2025-11-29 08:10:54.623 233728 DEBUG oslo_concurrency.lockutils [req-4e765d62-381a-4147-88d6-0b4d06df1ce5 req-eea2fece-a05f-4a09-8661-0ab88898c3aa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "6cfc4165-90ce-407e-8236-34f2147deb51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:54 np0005539552 nova_compute[233724]: 2025-11-29 08:10:54.623 233728 DEBUG oslo_concurrency.lockutils [req-4e765d62-381a-4147-88d6-0b4d06df1ce5 req-eea2fece-a05f-4a09-8661-0ab88898c3aa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6cfc4165-90ce-407e-8236-34f2147deb51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:54 np0005539552 nova_compute[233724]: 2025-11-29 08:10:54.623 233728 DEBUG oslo_concurrency.lockutils [req-4e765d62-381a-4147-88d6-0b4d06df1ce5 req-eea2fece-a05f-4a09-8661-0ab88898c3aa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6cfc4165-90ce-407e-8236-34f2147deb51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:54 np0005539552 nova_compute[233724]: 2025-11-29 08:10:54.623 233728 DEBUG nova.compute.manager [req-4e765d62-381a-4147-88d6-0b4d06df1ce5 req-eea2fece-a05f-4a09-8661-0ab88898c3aa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] No waiting events found dispatching network-vif-unplugged-316eb9f1-38ba-4e49-8475-61d32e3ecc4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:10:54 np0005539552 nova_compute[233724]: 2025-11-29 08:10:54.624 233728 DEBUG nova.compute.manager [req-4e765d62-381a-4147-88d6-0b4d06df1ce5 req-eea2fece-a05f-4a09-8661-0ab88898c3aa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Received event network-vif-unplugged-316eb9f1-38ba-4e49-8475-61d32e3ecc4f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:10:54 np0005539552 nova_compute[233724]: 2025-11-29 08:10:54.624 233728 DEBUG nova.compute.manager [req-4e765d62-381a-4147-88d6-0b4d06df1ce5 req-eea2fece-a05f-4a09-8661-0ab88898c3aa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Received event network-vif-plugged-316eb9f1-38ba-4e49-8475-61d32e3ecc4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:54 np0005539552 nova_compute[233724]: 2025-11-29 08:10:54.624 233728 DEBUG oslo_concurrency.lockutils [req-4e765d62-381a-4147-88d6-0b4d06df1ce5 req-eea2fece-a05f-4a09-8661-0ab88898c3aa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "6cfc4165-90ce-407e-8236-34f2147deb51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:54 np0005539552 nova_compute[233724]: 2025-11-29 08:10:54.624 233728 DEBUG oslo_concurrency.lockutils [req-4e765d62-381a-4147-88d6-0b4d06df1ce5 req-eea2fece-a05f-4a09-8661-0ab88898c3aa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6cfc4165-90ce-407e-8236-34f2147deb51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:54 np0005539552 nova_compute[233724]: 2025-11-29 08:10:54.625 233728 DEBUG oslo_concurrency.lockutils [req-4e765d62-381a-4147-88d6-0b4d06df1ce5 req-eea2fece-a05f-4a09-8661-0ab88898c3aa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6cfc4165-90ce-407e-8236-34f2147deb51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:54 np0005539552 nova_compute[233724]: 2025-11-29 08:10:54.625 233728 DEBUG nova.compute.manager [req-4e765d62-381a-4147-88d6-0b4d06df1ce5 req-eea2fece-a05f-4a09-8661-0ab88898c3aa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] No waiting events found dispatching network-vif-plugged-316eb9f1-38ba-4e49-8475-61d32e3ecc4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:10:54 np0005539552 nova_compute[233724]: 2025-11-29 08:10:54.625 233728 WARNING nova.compute.manager [req-4e765d62-381a-4147-88d6-0b4d06df1ce5 req-eea2fece-a05f-4a09-8661-0ab88898c3aa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Received unexpected event network-vif-plugged-316eb9f1-38ba-4e49-8475-61d32e3ecc4f for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:10:54 np0005539552 nova_compute[233724]: 2025-11-29 08:10:54.825 233728 DEBUG nova.network.neutron [-] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:10:54 np0005539552 nova_compute[233724]: 2025-11-29 08:10:54.859 233728 INFO nova.compute.manager [-] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Took 1.95 seconds to deallocate network for instance.#033[00m
Nov 29 03:10:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:10:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:54.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:10:54 np0005539552 nova_compute[233724]: 2025-11-29 08:10:54.883 233728 DEBUG nova.compute.manager [req-95835d1c-db95-493a-a136-5c77d2c56b70 req-21a8b2fe-7301-48d5-bc51-88980ac34c3c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Received event network-vif-deleted-316eb9f1-38ba-4e49-8475-61d32e3ecc4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:54 np0005539552 nova_compute[233724]: 2025-11-29 08:10:54.923 233728 DEBUG oslo_concurrency.lockutils [None req-c9f6e06b-3b20-4f25-9a71-50c4cdb6e4fa aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:54 np0005539552 nova_compute[233724]: 2025-11-29 08:10:54.923 233728 DEBUG oslo_concurrency.lockutils [None req-c9f6e06b-3b20-4f25-9a71-50c4cdb6e4fa aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:54 np0005539552 nova_compute[233724]: 2025-11-29 08:10:54.952 233728 DEBUG nova.scheduler.client.report [None req-c9f6e06b-3b20-4f25-9a71-50c4cdb6e4fa aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Refreshing inventories for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:10:54 np0005539552 nova_compute[233724]: 2025-11-29 08:10:54.976 233728 DEBUG nova.scheduler.client.report [None req-c9f6e06b-3b20-4f25-9a71-50c4cdb6e4fa aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Updating ProviderTree inventory for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:10:54 np0005539552 nova_compute[233724]: 2025-11-29 08:10:54.977 233728 DEBUG nova.compute.provider_tree [None req-c9f6e06b-3b20-4f25-9a71-50c4cdb6e4fa aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Updating inventory in ProviderTree for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 03:10:54 np0005539552 nova_compute[233724]: 2025-11-29 08:10:54.991 233728 DEBUG nova.scheduler.client.report [None req-c9f6e06b-3b20-4f25-9a71-50c4cdb6e4fa aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Refreshing aggregate associations for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:10:55 np0005539552 nova_compute[233724]: 2025-11-29 08:10:55.012 233728 DEBUG nova.scheduler.client.report [None req-c9f6e06b-3b20-4f25-9a71-50c4cdb6e4fa aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Refreshing trait associations for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371, traits: HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_MMX,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_FDC,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:10:55 np0005539552 nova_compute[233724]: 2025-11-29 08:10:55.069 233728 DEBUG oslo_concurrency.processutils [None req-c9f6e06b-3b20-4f25-9a71-50c4cdb6e4fa aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:10:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:55.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:55 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:10:55 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4004912679' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:10:55 np0005539552 nova_compute[233724]: 2025-11-29 08:10:55.539 233728 DEBUG oslo_concurrency.processutils [None req-c9f6e06b-3b20-4f25-9a71-50c4cdb6e4fa aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:10:55 np0005539552 nova_compute[233724]: 2025-11-29 08:10:55.547 233728 DEBUG nova.compute.provider_tree [None req-c9f6e06b-3b20-4f25-9a71-50c4cdb6e4fa aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:10:55 np0005539552 nova_compute[233724]: 2025-11-29 08:10:55.645 233728 DEBUG nova.scheduler.client.report [None req-c9f6e06b-3b20-4f25-9a71-50c4cdb6e4fa aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:10:55 np0005539552 nova_compute[233724]: 2025-11-29 08:10:55.727 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:55 np0005539552 nova_compute[233724]: 2025-11-29 08:10:55.758 233728 DEBUG oslo_concurrency.lockutils [None req-c9f6e06b-3b20-4f25-9a71-50c4cdb6e4fa aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:55 np0005539552 nova_compute[233724]: 2025-11-29 08:10:55.819 233728 INFO nova.scheduler.client.report [None req-c9f6e06b-3b20-4f25-9a71-50c4cdb6e4fa aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Deleted allocations for instance 6cfc4165-90ce-407e-8236-34f2147deb51#033[00m
Nov 29 03:10:56 np0005539552 nova_compute[233724]: 2025-11-29 08:10:56.003 233728 DEBUG oslo_concurrency.lockutils [None req-c9f6e06b-3b20-4f25-9a71-50c4cdb6e4fa aa730ff43688414fbafb9fb85e566a1a 01e87a17aae64e93bdb507d58a515a3f - - default default] Lock "6cfc4165-90ce-407e-8236-34f2147deb51" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.245s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:56 np0005539552 nova_compute[233724]: 2025-11-29 08:10:56.237 233728 DEBUG oslo_concurrency.lockutils [None req-7cf53564-c3ca-407e-82b7-a4307debc16b 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "b373b176-ee91-41a8-a80a-96c957639455" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:56 np0005539552 nova_compute[233724]: 2025-11-29 08:10:56.238 233728 DEBUG oslo_concurrency.lockutils [None req-7cf53564-c3ca-407e-82b7-a4307debc16b 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "b373b176-ee91-41a8-a80a-96c957639455" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:56 np0005539552 nova_compute[233724]: 2025-11-29 08:10:56.239 233728 DEBUG oslo_concurrency.lockutils [None req-7cf53564-c3ca-407e-82b7-a4307debc16b 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "b373b176-ee91-41a8-a80a-96c957639455-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:56 np0005539552 nova_compute[233724]: 2025-11-29 08:10:56.239 233728 DEBUG oslo_concurrency.lockutils [None req-7cf53564-c3ca-407e-82b7-a4307debc16b 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "b373b176-ee91-41a8-a80a-96c957639455-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:56 np0005539552 nova_compute[233724]: 2025-11-29 08:10:56.240 233728 DEBUG oslo_concurrency.lockutils [None req-7cf53564-c3ca-407e-82b7-a4307debc16b 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "b373b176-ee91-41a8-a80a-96c957639455-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:56 np0005539552 nova_compute[233724]: 2025-11-29 08:10:56.241 233728 INFO nova.compute.manager [None req-7cf53564-c3ca-407e-82b7-a4307debc16b 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Terminating instance#033[00m
Nov 29 03:10:56 np0005539552 nova_compute[233724]: 2025-11-29 08:10:56.243 233728 DEBUG nova.compute.manager [None req-7cf53564-c3ca-407e-82b7-a4307debc16b 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:10:56 np0005539552 kernel: tap6bdd57b3-15 (unregistering): left promiscuous mode
Nov 29 03:10:56 np0005539552 NetworkManager[48926]: <info>  [1764403856.2992] device (tap6bdd57b3-15): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:10:56 np0005539552 ovn_controller[133798]: 2025-11-29T08:10:56Z|00247|binding|INFO|Releasing lport 6bdd57b3-15f4-46ef-bab3-67925c3606c5 from this chassis (sb_readonly=0)
Nov 29 03:10:56 np0005539552 ovn_controller[133798]: 2025-11-29T08:10:56Z|00248|binding|INFO|Setting lport 6bdd57b3-15f4-46ef-bab3-67925c3606c5 down in Southbound
Nov 29 03:10:56 np0005539552 ovn_controller[133798]: 2025-11-29T08:10:56Z|00249|binding|INFO|Removing iface tap6bdd57b3-15 ovn-installed in OVS
Nov 29 03:10:56 np0005539552 nova_compute[233724]: 2025-11-29 08:10:56.353 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:56 np0005539552 nova_compute[233724]: 2025-11-29 08:10:56.354 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:56 np0005539552 nova_compute[233724]: 2025-11-29 08:10:56.369 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:56.403 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:01:d2 10.100.0.4'], port_security=['fa:16:3e:47:01:d2 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'b373b176-ee91-41a8-a80a-96c957639455', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8384e5887c0948f5876c019d50057152', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'a1e9ed13-b0e1-45c0-9be6-be0f145466a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c0727149-3377-4d23-9d8d-0006462cd03e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=6bdd57b3-15f4-46ef-bab3-67925c3606c5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:10:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:56.404 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 6bdd57b3-15f4-46ef-bab3-67925c3606c5 in datapath 65f88c5a-8801-4bc1-9eed-15e2bab4717d unbound from our chassis#033[00m
Nov 29 03:10:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:56.406 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 65f88c5a-8801-4bc1-9eed-15e2bab4717d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:10:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:56.407 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[fd05e1c0-fbda-4ead-9cc7-9721b9036f18]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:56.408 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d namespace which is not needed anymore#033[00m
Nov 29 03:10:56 np0005539552 nova_compute[233724]: 2025-11-29 08:10:56.420 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:56 np0005539552 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000004e.scope: Deactivated successfully.
Nov 29 03:10:56 np0005539552 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000004e.scope: Consumed 14.852s CPU time.
Nov 29 03:10:56 np0005539552 systemd-machined[196379]: Machine qemu-29-instance-0000004e terminated.
Nov 29 03:10:56 np0005539552 nova_compute[233724]: 2025-11-29 08:10:56.467 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:56 np0005539552 nova_compute[233724]: 2025-11-29 08:10:56.474 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:56 np0005539552 nova_compute[233724]: 2025-11-29 08:10:56.483 233728 INFO nova.virt.libvirt.driver [-] [instance: b373b176-ee91-41a8-a80a-96c957639455] Instance destroyed successfully.#033[00m
Nov 29 03:10:56 np0005539552 nova_compute[233724]: 2025-11-29 08:10:56.484 233728 DEBUG nova.objects.instance [None req-7cf53564-c3ca-407e-82b7-a4307debc16b 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lazy-loading 'resources' on Instance uuid b373b176-ee91-41a8-a80a-96c957639455 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:10:56 np0005539552 nova_compute[233724]: 2025-11-29 08:10:56.556 233728 DEBUG nova.virt.libvirt.vif [None req-7cf53564-c3ca-407e-82b7-a4307debc16b 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1182352036',display_name='tempest-ServerDiskConfigTestJSON-server-1182352036',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1182352036',id=78,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:10:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8384e5887c0948f5876c019d50057152',ramdisk_id='',reservation_id='r-i00kkfjg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-767135984',owner_user_name='tempest-ServerDiskConfigTestJSON-767135984-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:10:45Z,user_data=None,user_id='9ab0114aca6149af994da2b9052c1368',uuid=b373b176-ee91-41a8-a80a-96c957639455,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "address": "fa:16:3e:47:01:d2", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bdd57b3-15", "ovs_interfaceid": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:10:56 np0005539552 nova_compute[233724]: 2025-11-29 08:10:56.557 233728 DEBUG nova.network.os_vif_util [None req-7cf53564-c3ca-407e-82b7-a4307debc16b 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converting VIF {"id": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "address": "fa:16:3e:47:01:d2", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bdd57b3-15", "ovs_interfaceid": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:10:56 np0005539552 nova_compute[233724]: 2025-11-29 08:10:56.557 233728 DEBUG nova.network.os_vif_util [None req-7cf53564-c3ca-407e-82b7-a4307debc16b 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:47:01:d2,bridge_name='br-int',has_traffic_filtering=True,id=6bdd57b3-15f4-46ef-bab3-67925c3606c5,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bdd57b3-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:10:56 np0005539552 nova_compute[233724]: 2025-11-29 08:10:56.558 233728 DEBUG os_vif [None req-7cf53564-c3ca-407e-82b7-a4307debc16b 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:47:01:d2,bridge_name='br-int',has_traffic_filtering=True,id=6bdd57b3-15f4-46ef-bab3-67925c3606c5,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bdd57b3-15') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:10:56 np0005539552 nova_compute[233724]: 2025-11-29 08:10:56.559 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:56 np0005539552 nova_compute[233724]: 2025-11-29 08:10:56.559 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6bdd57b3-15, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:56 np0005539552 nova_compute[233724]: 2025-11-29 08:10:56.560 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:56 np0005539552 nova_compute[233724]: 2025-11-29 08:10:56.561 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:56 np0005539552 nova_compute[233724]: 2025-11-29 08:10:56.563 233728 INFO os_vif [None req-7cf53564-c3ca-407e-82b7-a4307debc16b 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:47:01:d2,bridge_name='br-int',has_traffic_filtering=True,id=6bdd57b3-15f4-46ef-bab3-67925c3606c5,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bdd57b3-15')#033[00m
Nov 29 03:10:56 np0005539552 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[265896]: [NOTICE]   (265900) : haproxy version is 2.8.14-c23fe91
Nov 29 03:10:56 np0005539552 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[265896]: [NOTICE]   (265900) : path to executable is /usr/sbin/haproxy
Nov 29 03:10:56 np0005539552 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[265896]: [WARNING]  (265900) : Exiting Master process...
Nov 29 03:10:56 np0005539552 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[265896]: [WARNING]  (265900) : Exiting Master process...
Nov 29 03:10:56 np0005539552 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[265896]: [ALERT]    (265900) : Current worker (265902) exited with code 143 (Terminated)
Nov 29 03:10:56 np0005539552 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[265896]: [WARNING]  (265900) : All workers exited. Exiting... (0)
Nov 29 03:10:56 np0005539552 systemd[1]: libpod-38815bd3ea520563ff9b8f70dd9a9d3cd1c99b6e6e3b4c0408dc6a857e54dc91.scope: Deactivated successfully.
Nov 29 03:10:56 np0005539552 podman[266231]: 2025-11-29 08:10:56.592670528 +0000 UTC m=+0.080946356 container died 38815bd3ea520563ff9b8f70dd9a9d3cd1c99b6e6e3b4c0408dc6a857e54dc91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 03:10:56 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-38815bd3ea520563ff9b8f70dd9a9d3cd1c99b6e6e3b4c0408dc6a857e54dc91-userdata-shm.mount: Deactivated successfully.
Nov 29 03:10:56 np0005539552 systemd[1]: var-lib-containers-storage-overlay-57e7dd029655d97cb32915f54a57e02d21a29e39494d1451c0b5105ac6013617-merged.mount: Deactivated successfully.
Nov 29 03:10:56 np0005539552 podman[266231]: 2025-11-29 08:10:56.673722346 +0000 UTC m=+0.161998164 container cleanup 38815bd3ea520563ff9b8f70dd9a9d3cd1c99b6e6e3b4c0408dc6a857e54dc91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:10:56 np0005539552 systemd[1]: libpod-conmon-38815bd3ea520563ff9b8f70dd9a9d3cd1c99b6e6e3b4c0408dc6a857e54dc91.scope: Deactivated successfully.
Nov 29 03:10:56 np0005539552 podman[266280]: 2025-11-29 08:10:56.837029804 +0000 UTC m=+0.141384308 container remove 38815bd3ea520563ff9b8f70dd9a9d3cd1c99b6e6e3b4c0408dc6a857e54dc91 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 03:10:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:56.843 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[63bec85a-6c26-42c0-ad45-dfcb34634478]: (4, ('Sat Nov 29 08:10:56 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d (38815bd3ea520563ff9b8f70dd9a9d3cd1c99b6e6e3b4c0408dc6a857e54dc91)\n38815bd3ea520563ff9b8f70dd9a9d3cd1c99b6e6e3b4c0408dc6a857e54dc91\nSat Nov 29 08:10:56 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d (38815bd3ea520563ff9b8f70dd9a9d3cd1c99b6e6e3b4c0408dc6a857e54dc91)\n38815bd3ea520563ff9b8f70dd9a9d3cd1c99b6e6e3b4c0408dc6a857e54dc91\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:56.845 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d0e6c576-6f5c-4583-b37d-ba1418ff7b8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:56.846 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap65f88c5a-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:56 np0005539552 nova_compute[233724]: 2025-11-29 08:10:56.847 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:56 np0005539552 kernel: tap65f88c5a-80: left promiscuous mode
Nov 29 03:10:56 np0005539552 nova_compute[233724]: 2025-11-29 08:10:56.862 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:56 np0005539552 nova_compute[233724]: 2025-11-29 08:10:56.863 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:56.864 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[242c5658-7787-4401-9c10-ba0df388a1a8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:56.876 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2f718ada-9beb-442a-bd4d-64e673e6f24b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:56.877 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[529a9835-e3d4-4b11-b3b7-fec221f7c11f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:56.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:56.892 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3b84b20c-5de2-4ced-bdbb-59be97c585dd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688288, 'reachable_time': 31531, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266295, 'error': None, 'target': 'ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:56 np0005539552 systemd[1]: run-netns-ovnmeta\x2d65f88c5a\x2d8801\x2d4bc1\x2d9eed\x2d15e2bab4717d.mount: Deactivated successfully.
Nov 29 03:10:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:56.895 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:10:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:10:56.895 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[f754cc81-0bc1-4f87-be0f-3eec3363dfbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:57.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:57 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:57 np0005539552 nova_compute[233724]: 2025-11-29 08:10:57.814 233728 INFO nova.virt.libvirt.driver [None req-7cf53564-c3ca-407e-82b7-a4307debc16b 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Deleting instance files /var/lib/nova/instances/b373b176-ee91-41a8-a80a-96c957639455_del#033[00m
Nov 29 03:10:57 np0005539552 nova_compute[233724]: 2025-11-29 08:10:57.814 233728 INFO nova.virt.libvirt.driver [None req-7cf53564-c3ca-407e-82b7-a4307debc16b 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Deletion of /var/lib/nova/instances/b373b176-ee91-41a8-a80a-96c957639455_del complete#033[00m
Nov 29 03:10:57 np0005539552 nova_compute[233724]: 2025-11-29 08:10:57.973 233728 INFO nova.compute.manager [None req-7cf53564-c3ca-407e-82b7-a4307debc16b 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Took 1.73 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:10:57 np0005539552 nova_compute[233724]: 2025-11-29 08:10:57.973 233728 DEBUG oslo.service.loopingcall [None req-7cf53564-c3ca-407e-82b7-a4307debc16b 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:10:57 np0005539552 nova_compute[233724]: 2025-11-29 08:10:57.974 233728 DEBUG nova.compute.manager [-] [instance: b373b176-ee91-41a8-a80a-96c957639455] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:10:57 np0005539552 nova_compute[233724]: 2025-11-29 08:10:57.974 233728 DEBUG nova.network.neutron [-] [instance: b373b176-ee91-41a8-a80a-96c957639455] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:10:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:10:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:58.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:10:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:10:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:10:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:59.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:10:59 np0005539552 nova_compute[233724]: 2025-11-29 08:10:59.371 233728 DEBUG nova.compute.manager [req-6dbdcbea-9d83-4188-832b-6d74f1e3942d req-d4912815-fb9f-4f67-9355-2710bbd58c3b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Received event network-vif-unplugged-6bdd57b3-15f4-46ef-bab3-67925c3606c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:59 np0005539552 nova_compute[233724]: 2025-11-29 08:10:59.371 233728 DEBUG oslo_concurrency.lockutils [req-6dbdcbea-9d83-4188-832b-6d74f1e3942d req-d4912815-fb9f-4f67-9355-2710bbd58c3b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "b373b176-ee91-41a8-a80a-96c957639455-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:59 np0005539552 nova_compute[233724]: 2025-11-29 08:10:59.371 233728 DEBUG oslo_concurrency.lockutils [req-6dbdcbea-9d83-4188-832b-6d74f1e3942d req-d4912815-fb9f-4f67-9355-2710bbd58c3b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b373b176-ee91-41a8-a80a-96c957639455-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:59 np0005539552 nova_compute[233724]: 2025-11-29 08:10:59.372 233728 DEBUG oslo_concurrency.lockutils [req-6dbdcbea-9d83-4188-832b-6d74f1e3942d req-d4912815-fb9f-4f67-9355-2710bbd58c3b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b373b176-ee91-41a8-a80a-96c957639455-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:59 np0005539552 nova_compute[233724]: 2025-11-29 08:10:59.372 233728 DEBUG nova.compute.manager [req-6dbdcbea-9d83-4188-832b-6d74f1e3942d req-d4912815-fb9f-4f67-9355-2710bbd58c3b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] No waiting events found dispatching network-vif-unplugged-6bdd57b3-15f4-46ef-bab3-67925c3606c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:10:59 np0005539552 nova_compute[233724]: 2025-11-29 08:10:59.372 233728 DEBUG nova.compute.manager [req-6dbdcbea-9d83-4188-832b-6d74f1e3942d req-d4912815-fb9f-4f67-9355-2710bbd58c3b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Received event network-vif-unplugged-6bdd57b3-15f4-46ef-bab3-67925c3606c5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:10:59 np0005539552 nova_compute[233724]: 2025-11-29 08:10:59.372 233728 DEBUG nova.compute.manager [req-6dbdcbea-9d83-4188-832b-6d74f1e3942d req-d4912815-fb9f-4f67-9355-2710bbd58c3b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Received event network-vif-plugged-6bdd57b3-15f4-46ef-bab3-67925c3606c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:59 np0005539552 nova_compute[233724]: 2025-11-29 08:10:59.372 233728 DEBUG oslo_concurrency.lockutils [req-6dbdcbea-9d83-4188-832b-6d74f1e3942d req-d4912815-fb9f-4f67-9355-2710bbd58c3b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "b373b176-ee91-41a8-a80a-96c957639455-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:59 np0005539552 nova_compute[233724]: 2025-11-29 08:10:59.373 233728 DEBUG oslo_concurrency.lockutils [req-6dbdcbea-9d83-4188-832b-6d74f1e3942d req-d4912815-fb9f-4f67-9355-2710bbd58c3b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b373b176-ee91-41a8-a80a-96c957639455-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:59 np0005539552 nova_compute[233724]: 2025-11-29 08:10:59.373 233728 DEBUG oslo_concurrency.lockutils [req-6dbdcbea-9d83-4188-832b-6d74f1e3942d req-d4912815-fb9f-4f67-9355-2710bbd58c3b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b373b176-ee91-41a8-a80a-96c957639455-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:59 np0005539552 nova_compute[233724]: 2025-11-29 08:10:59.373 233728 DEBUG nova.compute.manager [req-6dbdcbea-9d83-4188-832b-6d74f1e3942d req-d4912815-fb9f-4f67-9355-2710bbd58c3b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] No waiting events found dispatching network-vif-plugged-6bdd57b3-15f4-46ef-bab3-67925c3606c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:10:59 np0005539552 nova_compute[233724]: 2025-11-29 08:10:59.373 233728 WARNING nova.compute.manager [req-6dbdcbea-9d83-4188-832b-6d74f1e3942d req-d4912815-fb9f-4f67-9355-2710bbd58c3b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Received unexpected event network-vif-plugged-6bdd57b3-15f4-46ef-bab3-67925c3606c5 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:11:00 np0005539552 nova_compute[233724]: 2025-11-29 08:11:00.243 233728 DEBUG nova.network.neutron [-] [instance: b373b176-ee91-41a8-a80a-96c957639455] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:11:00 np0005539552 nova_compute[233724]: 2025-11-29 08:11:00.728 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:00 np0005539552 nova_compute[233724]: 2025-11-29 08:11:00.731 233728 INFO nova.compute.manager [-] [instance: b373b176-ee91-41a8-a80a-96c957639455] Took 2.76 seconds to deallocate network for instance.#033[00m
Nov 29 03:11:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:11:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:00.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:11:00 np0005539552 nova_compute[233724]: 2025-11-29 08:11:00.907 233728 DEBUG oslo_concurrency.lockutils [None req-7cf53564-c3ca-407e-82b7-a4307debc16b 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:00 np0005539552 nova_compute[233724]: 2025-11-29 08:11:00.908 233728 DEBUG oslo_concurrency.lockutils [None req-7cf53564-c3ca-407e-82b7-a4307debc16b 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:00 np0005539552 nova_compute[233724]: 2025-11-29 08:11:00.912 233728 DEBUG oslo_concurrency.lockutils [None req-7cf53564-c3ca-407e-82b7-a4307debc16b 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:01 np0005539552 nova_compute[233724]: 2025-11-29 08:11:01.119 233728 INFO nova.scheduler.client.report [None req-7cf53564-c3ca-407e-82b7-a4307debc16b 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Deleted allocations for instance b373b176-ee91-41a8-a80a-96c957639455#033[00m
Nov 29 03:11:01 np0005539552 nova_compute[233724]: 2025-11-29 08:11:01.198 233728 DEBUG oslo_concurrency.lockutils [None req-7cf53564-c3ca-407e-82b7-a4307debc16b 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "b373b176-ee91-41a8-a80a-96c957639455" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.960s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:11:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:01.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:11:01 np0005539552 nova_compute[233724]: 2025-11-29 08:11:01.463 233728 DEBUG nova.compute.manager [req-69ab4729-e937-4705-b293-0c7b6554e9e2 req-a11ec972-040c-4ba4-a06b-ecf166e2b773 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Received event network-vif-deleted-6bdd57b3-15f4-46ef-bab3-67925c3606c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:11:01 np0005539552 nova_compute[233724]: 2025-11-29 08:11:01.561 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:02.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:03.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:11:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:04.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:11:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:05.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:05 np0005539552 nova_compute[233724]: 2025-11-29 08:11:05.759 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:06 np0005539552 nova_compute[233724]: 2025-11-29 08:11:06.393 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403851.3913577, 6cfc4165-90ce-407e-8236-34f2147deb51 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:11:06 np0005539552 nova_compute[233724]: 2025-11-29 08:11:06.393 233728 INFO nova.compute.manager [-] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:11:06 np0005539552 nova_compute[233724]: 2025-11-29 08:11:06.411 233728 DEBUG nova.compute.manager [None req-b46231a7-2e1e-4c90-adc8-fb544f828623 - - - - - -] [instance: 6cfc4165-90ce-407e-8236-34f2147deb51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:11:06 np0005539552 nova_compute[233724]: 2025-11-29 08:11:06.561 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:06 np0005539552 nova_compute[233724]: 2025-11-29 08:11:06.562 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:06 np0005539552 nova_compute[233724]: 2025-11-29 08:11:06.869 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:11:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:06.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:11:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:07.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:07 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:08.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:09.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:10 np0005539552 nova_compute[233724]: 2025-11-29 08:11:10.761 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:10.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:11 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:11:11 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4222862629' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:11:11 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:11:11 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4222862629' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:11:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:11.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:11 np0005539552 nova_compute[233724]: 2025-11-29 08:11:11.483 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403856.4813807, b373b176-ee91-41a8-a80a-96c957639455 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:11:11 np0005539552 nova_compute[233724]: 2025-11-29 08:11:11.483 233728 INFO nova.compute.manager [-] [instance: b373b176-ee91-41a8-a80a-96c957639455] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:11:11 np0005539552 nova_compute[233724]: 2025-11-29 08:11:11.525 233728 DEBUG nova.compute.manager [None req-c9d7db84-09e0-4071-a363-5e46e99d09a7 - - - - - -] [instance: b373b176-ee91-41a8-a80a-96c957639455] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:11:11 np0005539552 nova_compute[233724]: 2025-11-29 08:11:11.564 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:12.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:13.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:14 np0005539552 nova_compute[233724]: 2025-11-29 08:11:14.607 233728 DEBUG nova.compute.manager [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Nov 29 03:11:14 np0005539552 nova_compute[233724]: 2025-11-29 08:11:14.746 233728 DEBUG oslo_concurrency.lockutils [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:14 np0005539552 nova_compute[233724]: 2025-11-29 08:11:14.746 233728 DEBUG oslo_concurrency.lockutils [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:14 np0005539552 nova_compute[233724]: 2025-11-29 08:11:14.821 233728 DEBUG nova.objects.instance [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lazy-loading 'pci_requests' on Instance uuid 96e45d6d-27c7-4a08-9126-4856b2920133 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:11:14 np0005539552 nova_compute[233724]: 2025-11-29 08:11:14.857 233728 DEBUG nova.virt.hardware [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:11:14 np0005539552 nova_compute[233724]: 2025-11-29 08:11:14.857 233728 INFO nova.compute.claims [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:11:14 np0005539552 nova_compute[233724]: 2025-11-29 08:11:14.857 233728 DEBUG nova.objects.instance [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lazy-loading 'resources' on Instance uuid 96e45d6d-27c7-4a08-9126-4856b2920133 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:11:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:14.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:14 np0005539552 nova_compute[233724]: 2025-11-29 08:11:14.917 233728 DEBUG nova.objects.instance [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lazy-loading 'pci_devices' on Instance uuid 96e45d6d-27c7-4a08-9126-4856b2920133 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:11:15 np0005539552 nova_compute[233724]: 2025-11-29 08:11:15.161 233728 INFO nova.compute.resource_tracker [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Updating resource usage from migration e2865359-f83a-40e7-a7b3-e0d673d76ec8#033[00m
Nov 29 03:11:15 np0005539552 nova_compute[233724]: 2025-11-29 08:11:15.161 233728 DEBUG nova.compute.resource_tracker [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Starting to track incoming migration e2865359-f83a-40e7-a7b3-e0d673d76ec8 with flavor 709b029f-0458-4e40-a6ee-e1e02b48c06c _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Nov 29 03:11:15 np0005539552 nova_compute[233724]: 2025-11-29 08:11:15.215 233728 DEBUG oslo_concurrency.processutils [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:11:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:15.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:15 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:11:15 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3659476537' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:11:15 np0005539552 nova_compute[233724]: 2025-11-29 08:11:15.661 233728 DEBUG oslo_concurrency.processutils [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:11:15 np0005539552 nova_compute[233724]: 2025-11-29 08:11:15.668 233728 DEBUG nova.compute.provider_tree [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:11:15 np0005539552 nova_compute[233724]: 2025-11-29 08:11:15.700 233728 DEBUG nova.scheduler.client.report [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:11:15 np0005539552 nova_compute[233724]: 2025-11-29 08:11:15.764 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:15 np0005539552 nova_compute[233724]: 2025-11-29 08:11:15.873 233728 DEBUG oslo_concurrency.lockutils [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 1.127s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:15 np0005539552 nova_compute[233724]: 2025-11-29 08:11:15.873 233728 INFO nova.compute.manager [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Migrating#033[00m
Nov 29 03:11:16 np0005539552 nova_compute[233724]: 2025-11-29 08:11:16.566 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:16.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:17.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:18 np0005539552 systemd-logind[788]: New session 62 of user nova.
Nov 29 03:11:18 np0005539552 systemd[1]: Created slice User Slice of UID 42436.
Nov 29 03:11:18 np0005539552 systemd[1]: Starting User Runtime Directory /run/user/42436...
Nov 29 03:11:18 np0005539552 systemd[1]: Finished User Runtime Directory /run/user/42436.
Nov 29 03:11:18 np0005539552 systemd[1]: Starting User Manager for UID 42436...
Nov 29 03:11:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:18.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:19 np0005539552 systemd[266386]: Queued start job for default target Main User Target.
Nov 29 03:11:19 np0005539552 systemd[266386]: Created slice User Application Slice.
Nov 29 03:11:19 np0005539552 systemd[266386]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 03:11:19 np0005539552 systemd[266386]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 03:11:19 np0005539552 systemd[266386]: Reached target Paths.
Nov 29 03:11:19 np0005539552 systemd[266386]: Reached target Timers.
Nov 29 03:11:19 np0005539552 systemd[266386]: Starting D-Bus User Message Bus Socket...
Nov 29 03:11:19 np0005539552 systemd[266386]: Starting Create User's Volatile Files and Directories...
Nov 29 03:11:19 np0005539552 systemd[266386]: Listening on D-Bus User Message Bus Socket.
Nov 29 03:11:19 np0005539552 systemd[266386]: Reached target Sockets.
Nov 29 03:11:19 np0005539552 systemd[266386]: Finished Create User's Volatile Files and Directories.
Nov 29 03:11:19 np0005539552 systemd[266386]: Reached target Basic System.
Nov 29 03:11:19 np0005539552 systemd[266386]: Reached target Main User Target.
Nov 29 03:11:19 np0005539552 systemd[266386]: Startup finished in 124ms.
Nov 29 03:11:19 np0005539552 systemd[1]: Started User Manager for UID 42436.
Nov 29 03:11:19 np0005539552 systemd[1]: Started Session 62 of User nova.
Nov 29 03:11:19 np0005539552 systemd[1]: session-62.scope: Deactivated successfully.
Nov 29 03:11:19 np0005539552 systemd-logind[788]: Session 62 logged out. Waiting for processes to exit.
Nov 29 03:11:19 np0005539552 systemd-logind[788]: Removed session 62.
Nov 29 03:11:19 np0005539552 systemd-logind[788]: New session 64 of user nova.
Nov 29 03:11:19 np0005539552 systemd[1]: Started Session 64 of User nova.
Nov 29 03:11:19 np0005539552 systemd[1]: session-64.scope: Deactivated successfully.
Nov 29 03:11:19 np0005539552 systemd-logind[788]: Session 64 logged out. Waiting for processes to exit.
Nov 29 03:11:19 np0005539552 systemd-logind[788]: Removed session 64.
Nov 29 03:11:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:19.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:11:20.619 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:11:20.620 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:11:20.620 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:20 np0005539552 nova_compute[233724]: 2025-11-29 08:11:20.766 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:20.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:21.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:21 np0005539552 nova_compute[233724]: 2025-11-29 08:11:21.568 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:21 np0005539552 nova_compute[233724]: 2025-11-29 08:11:21.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:11:21 np0005539552 nova_compute[233724]: 2025-11-29 08:11:21.954 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:21 np0005539552 nova_compute[233724]: 2025-11-29 08:11:21.954 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:21 np0005539552 nova_compute[233724]: 2025-11-29 08:11:21.954 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:21 np0005539552 nova_compute[233724]: 2025-11-29 08:11:21.954 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:11:21 np0005539552 nova_compute[233724]: 2025-11-29 08:11:21.955 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:11:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:11:22 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2412653847' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:11:22 np0005539552 nova_compute[233724]: 2025-11-29 08:11:22.366 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:11:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:22 np0005539552 podman[266433]: 2025-11-29 08:11:22.472054027 +0000 UTC m=+0.061001648 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:11:22 np0005539552 podman[266432]: 2025-11-29 08:11:22.476463176 +0000 UTC m=+0.066039554 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 29 03:11:22 np0005539552 podman[266434]: 2025-11-29 08:11:22.500993598 +0000 UTC m=+0.087883023 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 03:11:22 np0005539552 nova_compute[233724]: 2025-11-29 08:11:22.556 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:11:22 np0005539552 nova_compute[233724]: 2025-11-29 08:11:22.557 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4561MB free_disk=20.92176055908203GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:11:22 np0005539552 nova_compute[233724]: 2025-11-29 08:11:22.558 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:22 np0005539552 nova_compute[233724]: 2025-11-29 08:11:22.558 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:22 np0005539552 nova_compute[233724]: 2025-11-29 08:11:22.616 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Migration for instance 96e45d6d-27c7-4a08-9126-4856b2920133 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Nov 29 03:11:22 np0005539552 nova_compute[233724]: 2025-11-29 08:11:22.638 233728 INFO nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Updating resource usage from migration e2865359-f83a-40e7-a7b3-e0d673d76ec8#033[00m
Nov 29 03:11:22 np0005539552 nova_compute[233724]: 2025-11-29 08:11:22.638 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Starting to track incoming migration e2865359-f83a-40e7-a7b3-e0d673d76ec8 with flavor 709b029f-0458-4e40-a6ee-e1e02b48c06c _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Nov 29 03:11:22 np0005539552 nova_compute[233724]: 2025-11-29 08:11:22.716 233728 WARNING nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 96e45d6d-27c7-4a08-9126-4856b2920133 has been moved to another host compute-1.ctlplane.example.com(compute-1.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}.#033[00m
Nov 29 03:11:22 np0005539552 nova_compute[233724]: 2025-11-29 08:11:22.717 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:11:22 np0005539552 nova_compute[233724]: 2025-11-29 08:11:22.717 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=704MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:11:22 np0005539552 nova_compute[233724]: 2025-11-29 08:11:22.751 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:11:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:22.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:11:23 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1028256611' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:11:23 np0005539552 nova_compute[233724]: 2025-11-29 08:11:23.148 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.397s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:11:23 np0005539552 nova_compute[233724]: 2025-11-29 08:11:23.153 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:11:23 np0005539552 nova_compute[233724]: 2025-11-29 08:11:23.171 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:11:23 np0005539552 nova_compute[233724]: 2025-11-29 08:11:23.190 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:11:23 np0005539552 nova_compute[233724]: 2025-11-29 08:11:23.191 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:23.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:24.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:25 np0005539552 nova_compute[233724]: 2025-11-29 08:11:25.191 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:11:25 np0005539552 nova_compute[233724]: 2025-11-29 08:11:25.192 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:11:25 np0005539552 nova_compute[233724]: 2025-11-29 08:11:25.192 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:11:25 np0005539552 nova_compute[233724]: 2025-11-29 08:11:25.193 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:11:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:25.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:25 np0005539552 nova_compute[233724]: 2025-11-29 08:11:25.768 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:25 np0005539552 nova_compute[233724]: 2025-11-29 08:11:25.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:11:26 np0005539552 nova_compute[233724]: 2025-11-29 08:11:26.570 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:26.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:11:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:27.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:11:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:27 np0005539552 nova_compute[233724]: 2025-11-29 08:11:27.919 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:11:27 np0005539552 nova_compute[233724]: 2025-11-29 08:11:27.922 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:11:27 np0005539552 nova_compute[233724]: 2025-11-29 08:11:27.923 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:11:27 np0005539552 nova_compute[233724]: 2025-11-29 08:11:27.923 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:11:27 np0005539552 nova_compute[233724]: 2025-11-29 08:11:27.942 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:11:28 np0005539552 nova_compute[233724]: 2025-11-29 08:11:28.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:11:28 np0005539552 nova_compute[233724]: 2025-11-29 08:11:28.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:11:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:11:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:28.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:11:29 np0005539552 systemd[1]: Stopping User Manager for UID 42436...
Nov 29 03:11:29 np0005539552 systemd[266386]: Activating special unit Exit the Session...
Nov 29 03:11:29 np0005539552 systemd[266386]: Stopped target Main User Target.
Nov 29 03:11:29 np0005539552 systemd[266386]: Stopped target Basic System.
Nov 29 03:11:29 np0005539552 systemd[266386]: Stopped target Paths.
Nov 29 03:11:29 np0005539552 systemd[266386]: Stopped target Sockets.
Nov 29 03:11:29 np0005539552 systemd[266386]: Stopped target Timers.
Nov 29 03:11:29 np0005539552 systemd[266386]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 29 03:11:29 np0005539552 systemd[266386]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 29 03:11:29 np0005539552 systemd[266386]: Closed D-Bus User Message Bus Socket.
Nov 29 03:11:29 np0005539552 systemd[266386]: Stopped Create User's Volatile Files and Directories.
Nov 29 03:11:29 np0005539552 systemd[266386]: Removed slice User Application Slice.
Nov 29 03:11:29 np0005539552 systemd[266386]: Reached target Shutdown.
Nov 29 03:11:29 np0005539552 systemd[266386]: Finished Exit the Session.
Nov 29 03:11:29 np0005539552 systemd[266386]: Reached target Exit the Session.
Nov 29 03:11:29 np0005539552 systemd[1]: user@42436.service: Deactivated successfully.
Nov 29 03:11:29 np0005539552 systemd[1]: Stopped User Manager for UID 42436.
Nov 29 03:11:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:29.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:29 np0005539552 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Nov 29 03:11:29 np0005539552 systemd[1]: run-user-42436.mount: Deactivated successfully.
Nov 29 03:11:29 np0005539552 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Nov 29 03:11:29 np0005539552 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Nov 29 03:11:29 np0005539552 systemd[1]: Removed slice User Slice of UID 42436.
Nov 29 03:11:29 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:11:29 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:11:29 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:11:30 np0005539552 nova_compute[233724]: 2025-11-29 08:11:30.769 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:11:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:30.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:11:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:31.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:31 np0005539552 nova_compute[233724]: 2025-11-29 08:11:31.571 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:31 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #73. Immutable memtables: 0.
Nov 29 03:11:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:11:31.737912) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:11:31 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 73
Nov 29 03:11:31 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403891737972, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 2118, "num_deletes": 256, "total_data_size": 4838965, "memory_usage": 4893080, "flush_reason": "Manual Compaction"}
Nov 29 03:11:31 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #74: started
Nov 29 03:11:31 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403891757186, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 74, "file_size": 3181158, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38383, "largest_seqno": 40496, "table_properties": {"data_size": 3172556, "index_size": 5224, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 18594, "raw_average_key_size": 20, "raw_value_size": 3154991, "raw_average_value_size": 3455, "num_data_blocks": 228, "num_entries": 913, "num_filter_entries": 913, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403711, "oldest_key_time": 1764403711, "file_creation_time": 1764403891, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:11:31 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 19306 microseconds, and 7195 cpu microseconds.
Nov 29 03:11:31 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:11:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:11:31.757223) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #74: 3181158 bytes OK
Nov 29 03:11:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:11:31.757240) [db/memtable_list.cc:519] [default] Level-0 commit table #74 started
Nov 29 03:11:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:11:31.758953) [db/memtable_list.cc:722] [default] Level-0 commit table #74: memtable #1 done
Nov 29 03:11:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:11:31.758973) EVENT_LOG_v1 {"time_micros": 1764403891758966, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:11:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:11:31.758989) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:11:31 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 4829451, prev total WAL file size 4875597, number of live WAL files 2.
Nov 29 03:11:31 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000070.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:11:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:11:31.760450) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303036' seq:72057594037927935, type:22 .. '6C6F676D0031323537' seq:0, type:0; will stop at (end)
Nov 29 03:11:31 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:11:31 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [74(3106KB)], [72(10MB)]
Nov 29 03:11:31 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403891760538, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [74], "files_L6": [72], "score": -1, "input_data_size": 13867684, "oldest_snapshot_seqno": -1}
Nov 29 03:11:31 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #75: 7302 keys, 13714808 bytes, temperature: kUnknown
Nov 29 03:11:31 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403891838273, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 75, "file_size": 13714808, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13662524, "index_size": 32908, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18309, "raw_key_size": 187912, "raw_average_key_size": 25, "raw_value_size": 13528110, "raw_average_value_size": 1852, "num_data_blocks": 1317, "num_entries": 7302, "num_filter_entries": 7302, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764403891, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 75, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:11:31 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:11:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:11:31.838546) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 13714808 bytes
Nov 29 03:11:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:11:31.923369) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 178.2 rd, 176.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 10.2 +0.0 blob) out(13.1 +0.0 blob), read-write-amplify(8.7) write-amplify(4.3) OK, records in: 7833, records dropped: 531 output_compression: NoCompression
Nov 29 03:11:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:11:31.923403) EVENT_LOG_v1 {"time_micros": 1764403891923391, "job": 44, "event": "compaction_finished", "compaction_time_micros": 77832, "compaction_time_cpu_micros": 30697, "output_level": 6, "num_output_files": 1, "total_output_size": 13714808, "num_input_records": 7833, "num_output_records": 7302, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:11:31 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:11:31 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403891924271, "job": 44, "event": "table_file_deletion", "file_number": 74}
Nov 29 03:11:31 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000072.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:11:31 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403891926147, "job": 44, "event": "table_file_deletion", "file_number": 72}
Nov 29 03:11:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:11:31.760351) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:11:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:11:31.926213) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:11:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:11:31.926217) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:11:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:11:31.926219) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:11:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:11:31.926220) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:11:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:11:31.926221) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:11:32 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:32.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:33.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:34 np0005539552 nova_compute[233724]: 2025-11-29 08:11:34.673 233728 DEBUG nova.compute.manager [req-98054204-0a74-456c-9a55-d81a65e4fbaa req-21c32cb2-c7fa-46eb-8514-009bcfe095ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Received event network-vif-unplugged-3690f505-411e-49df-9869-848f5e8e1d1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:11:34 np0005539552 nova_compute[233724]: 2025-11-29 08:11:34.673 233728 DEBUG oslo_concurrency.lockutils [req-98054204-0a74-456c-9a55-d81a65e4fbaa req-21c32cb2-c7fa-46eb-8514-009bcfe095ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "96e45d6d-27c7-4a08-9126-4856b2920133-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:34 np0005539552 nova_compute[233724]: 2025-11-29 08:11:34.674 233728 DEBUG oslo_concurrency.lockutils [req-98054204-0a74-456c-9a55-d81a65e4fbaa req-21c32cb2-c7fa-46eb-8514-009bcfe095ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "96e45d6d-27c7-4a08-9126-4856b2920133-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:34 np0005539552 nova_compute[233724]: 2025-11-29 08:11:34.674 233728 DEBUG oslo_concurrency.lockutils [req-98054204-0a74-456c-9a55-d81a65e4fbaa req-21c32cb2-c7fa-46eb-8514-009bcfe095ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "96e45d6d-27c7-4a08-9126-4856b2920133-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:34 np0005539552 nova_compute[233724]: 2025-11-29 08:11:34.674 233728 DEBUG nova.compute.manager [req-98054204-0a74-456c-9a55-d81a65e4fbaa req-21c32cb2-c7fa-46eb-8514-009bcfe095ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] No waiting events found dispatching network-vif-unplugged-3690f505-411e-49df-9869-848f5e8e1d1a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:11:34 np0005539552 nova_compute[233724]: 2025-11-29 08:11:34.674 233728 WARNING nova.compute.manager [req-98054204-0a74-456c-9a55-d81a65e4fbaa req-21c32cb2-c7fa-46eb-8514-009bcfe095ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Received unexpected event network-vif-unplugged-3690f505-411e-49df-9869-848f5e8e1d1a for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:11:34 np0005539552 nova_compute[233724]: 2025-11-29 08:11:34.674 233728 DEBUG nova.compute.manager [req-98054204-0a74-456c-9a55-d81a65e4fbaa req-21c32cb2-c7fa-46eb-8514-009bcfe095ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Received event network-vif-plugged-3690f505-411e-49df-9869-848f5e8e1d1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:11:34 np0005539552 nova_compute[233724]: 2025-11-29 08:11:34.675 233728 DEBUG oslo_concurrency.lockutils [req-98054204-0a74-456c-9a55-d81a65e4fbaa req-21c32cb2-c7fa-46eb-8514-009bcfe095ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "96e45d6d-27c7-4a08-9126-4856b2920133-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:34 np0005539552 nova_compute[233724]: 2025-11-29 08:11:34.675 233728 DEBUG oslo_concurrency.lockutils [req-98054204-0a74-456c-9a55-d81a65e4fbaa req-21c32cb2-c7fa-46eb-8514-009bcfe095ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "96e45d6d-27c7-4a08-9126-4856b2920133-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:34 np0005539552 nova_compute[233724]: 2025-11-29 08:11:34.675 233728 DEBUG oslo_concurrency.lockutils [req-98054204-0a74-456c-9a55-d81a65e4fbaa req-21c32cb2-c7fa-46eb-8514-009bcfe095ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "96e45d6d-27c7-4a08-9126-4856b2920133-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:34 np0005539552 nova_compute[233724]: 2025-11-29 08:11:34.675 233728 DEBUG nova.compute.manager [req-98054204-0a74-456c-9a55-d81a65e4fbaa req-21c32cb2-c7fa-46eb-8514-009bcfe095ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] No waiting events found dispatching network-vif-plugged-3690f505-411e-49df-9869-848f5e8e1d1a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:11:34 np0005539552 nova_compute[233724]: 2025-11-29 08:11:34.675 233728 WARNING nova.compute.manager [req-98054204-0a74-456c-9a55-d81a65e4fbaa req-21c32cb2-c7fa-46eb-8514-009bcfe095ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Received unexpected event network-vif-plugged-3690f505-411e-49df-9869-848f5e8e1d1a for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:11:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:11:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:34.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:11:35 np0005539552 nova_compute[233724]: 2025-11-29 08:11:35.213 233728 INFO nova.network.neutron [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Updating port 3690f505-411e-49df-9869-848f5e8e1d1a with attributes {'binding:host_id': 'compute-2.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Nov 29 03:11:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:11:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:35.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:11:35 np0005539552 nova_compute[233724]: 2025-11-29 08:11:35.771 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:36 np0005539552 nova_compute[233724]: 2025-11-29 08:11:36.574 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:36 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:11:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:36.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:37.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:37 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:11:38 np0005539552 nova_compute[233724]: 2025-11-29 08:11:38.696 233728 DEBUG oslo_concurrency.lockutils [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "refresh_cache-96e45d6d-27c7-4a08-9126-4856b2920133" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:11:38 np0005539552 nova_compute[233724]: 2025-11-29 08:11:38.696 233728 DEBUG oslo_concurrency.lockutils [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquired lock "refresh_cache-96e45d6d-27c7-4a08-9126-4856b2920133" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:11:38 np0005539552 nova_compute[233724]: 2025-11-29 08:11:38.696 233728 DEBUG nova.network.neutron [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:11:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:11:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:38.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:11:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:11:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2121524561' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:11:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:11:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2121524561' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:11:39 np0005539552 nova_compute[233724]: 2025-11-29 08:11:39.221 233728 DEBUG nova.compute.manager [req-d1cd87fb-ae41-40c3-a828-a219517b3ebc req-ec790a63-2c2d-4ea3-98ac-582a503f5ec9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Received event network-changed-3690f505-411e-49df-9869-848f5e8e1d1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:11:39 np0005539552 nova_compute[233724]: 2025-11-29 08:11:39.222 233728 DEBUG nova.compute.manager [req-d1cd87fb-ae41-40c3-a828-a219517b3ebc req-ec790a63-2c2d-4ea3-98ac-582a503f5ec9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Refreshing instance network info cache due to event network-changed-3690f505-411e-49df-9869-848f5e8e1d1a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:11:39 np0005539552 nova_compute[233724]: 2025-11-29 08:11:39.222 233728 DEBUG oslo_concurrency.lockutils [req-d1cd87fb-ae41-40c3-a828-a219517b3ebc req-ec790a63-2c2d-4ea3-98ac-582a503f5ec9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-96e45d6d-27c7-4a08-9126-4856b2920133" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:11:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:39.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:40 np0005539552 nova_compute[233724]: 2025-11-29 08:11:40.772 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:11:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:40.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:11:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:41.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:41 np0005539552 nova_compute[233724]: 2025-11-29 08:11:41.576 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:42.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:43.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:44 np0005539552 nova_compute[233724]: 2025-11-29 08:11:44.434 233728 DEBUG nova.network.neutron [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Updating instance_info_cache with network_info: [{"id": "3690f505-411e-49df-9869-848f5e8e1d1a", "address": "fa:16:3e:70:93:84", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3690f505-41", "ovs_interfaceid": "3690f505-411e-49df-9869-848f5e8e1d1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:11:44 np0005539552 nova_compute[233724]: 2025-11-29 08:11:44.485 233728 DEBUG oslo_concurrency.lockutils [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Releasing lock "refresh_cache-96e45d6d-27c7-4a08-9126-4856b2920133" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:11:44 np0005539552 nova_compute[233724]: 2025-11-29 08:11:44.488 233728 DEBUG oslo_concurrency.lockutils [req-d1cd87fb-ae41-40c3-a828-a219517b3ebc req-ec790a63-2c2d-4ea3-98ac-582a503f5ec9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-96e45d6d-27c7-4a08-9126-4856b2920133" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:11:44 np0005539552 nova_compute[233724]: 2025-11-29 08:11:44.489 233728 DEBUG nova.network.neutron [req-d1cd87fb-ae41-40c3-a828-a219517b3ebc req-ec790a63-2c2d-4ea3-98ac-582a503f5ec9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Refreshing network info cache for port 3690f505-411e-49df-9869-848f5e8e1d1a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:11:44 np0005539552 nova_compute[233724]: 2025-11-29 08:11:44.606 233728 DEBUG nova.virt.libvirt.driver [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Nov 29 03:11:44 np0005539552 nova_compute[233724]: 2025-11-29 08:11:44.607 233728 DEBUG nova.virt.libvirt.driver [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 29 03:11:44 np0005539552 nova_compute[233724]: 2025-11-29 08:11:44.608 233728 INFO nova.virt.libvirt.driver [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Creating image(s)#033[00m
Nov 29 03:11:44 np0005539552 nova_compute[233724]: 2025-11-29 08:11:44.647 233728 DEBUG nova.storage.rbd_utils [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] creating snapshot(nova-resize) on rbd image(96e45d6d-27c7-4a08-9126-4856b2920133_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:11:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:44.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:44 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e273 e273: 3 total, 3 up, 3 in
Nov 29 03:11:45 np0005539552 nova_compute[233724]: 2025-11-29 08:11:45.051 233728 DEBUG nova.objects.instance [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 96e45d6d-27c7-4a08-9126-4856b2920133 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:11:45 np0005539552 nova_compute[233724]: 2025-11-29 08:11:45.156 233728 DEBUG nova.virt.libvirt.driver [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:11:45 np0005539552 nova_compute[233724]: 2025-11-29 08:11:45.157 233728 DEBUG nova.virt.libvirt.driver [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Ensure instance console log exists: /var/lib/nova/instances/96e45d6d-27c7-4a08-9126-4856b2920133/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:11:45 np0005539552 nova_compute[233724]: 2025-11-29 08:11:45.157 233728 DEBUG oslo_concurrency.lockutils [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:45 np0005539552 nova_compute[233724]: 2025-11-29 08:11:45.158 233728 DEBUG oslo_concurrency.lockutils [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:45 np0005539552 nova_compute[233724]: 2025-11-29 08:11:45.158 233728 DEBUG oslo_concurrency.lockutils [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:45 np0005539552 nova_compute[233724]: 2025-11-29 08:11:45.160 233728 DEBUG nova.virt.libvirt.driver [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Start _get_guest_xml network_info=[{"id": "3690f505-411e-49df-9869-848f5e8e1d1a", "address": "fa:16:3e:70:93:84", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "vif_mac": "fa:16:3e:70:93:84"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3690f505-41", "ovs_interfaceid": "3690f505-411e-49df-9869-848f5e8e1d1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:11:45 np0005539552 nova_compute[233724]: 2025-11-29 08:11:45.165 233728 WARNING nova.virt.libvirt.driver [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:11:45 np0005539552 nova_compute[233724]: 2025-11-29 08:11:45.169 233728 DEBUG nova.virt.libvirt.host [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:11:45 np0005539552 nova_compute[233724]: 2025-11-29 08:11:45.170 233728 DEBUG nova.virt.libvirt.host [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:11:45 np0005539552 nova_compute[233724]: 2025-11-29 08:11:45.173 233728 DEBUG nova.virt.libvirt.host [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:11:45 np0005539552 nova_compute[233724]: 2025-11-29 08:11:45.174 233728 DEBUG nova.virt.libvirt.host [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:11:45 np0005539552 nova_compute[233724]: 2025-11-29 08:11:45.175 233728 DEBUG nova.virt.libvirt.driver [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:11:45 np0005539552 nova_compute[233724]: 2025-11-29 08:11:45.175 233728 DEBUG nova.virt.hardware [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='709b029f-0458-4e40-a6ee-e1e02b48c06c',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:11:45 np0005539552 nova_compute[233724]: 2025-11-29 08:11:45.176 233728 DEBUG nova.virt.hardware [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:11:45 np0005539552 nova_compute[233724]: 2025-11-29 08:11:45.176 233728 DEBUG nova.virt.hardware [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:11:45 np0005539552 nova_compute[233724]: 2025-11-29 08:11:45.176 233728 DEBUG nova.virt.hardware [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:11:45 np0005539552 nova_compute[233724]: 2025-11-29 08:11:45.176 233728 DEBUG nova.virt.hardware [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:11:45 np0005539552 nova_compute[233724]: 2025-11-29 08:11:45.177 233728 DEBUG nova.virt.hardware [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:11:45 np0005539552 nova_compute[233724]: 2025-11-29 08:11:45.177 233728 DEBUG nova.virt.hardware [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:11:45 np0005539552 nova_compute[233724]: 2025-11-29 08:11:45.177 233728 DEBUG nova.virt.hardware [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:11:45 np0005539552 nova_compute[233724]: 2025-11-29 08:11:45.177 233728 DEBUG nova.virt.hardware [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:11:45 np0005539552 nova_compute[233724]: 2025-11-29 08:11:45.178 233728 DEBUG nova.virt.hardware [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:11:45 np0005539552 nova_compute[233724]: 2025-11-29 08:11:45.178 233728 DEBUG nova.virt.hardware [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:11:45 np0005539552 nova_compute[233724]: 2025-11-29 08:11:45.178 233728 DEBUG nova.objects.instance [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 96e45d6d-27c7-4a08-9126-4856b2920133 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:11:45 np0005539552 nova_compute[233724]: 2025-11-29 08:11:45.192 233728 DEBUG oslo_concurrency.processutils [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:11:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:45.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:45 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:11:45 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1093977405' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:11:45 np0005539552 nova_compute[233724]: 2025-11-29 08:11:45.637 233728 DEBUG oslo_concurrency.processutils [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:11:45 np0005539552 nova_compute[233724]: 2025-11-29 08:11:45.671 233728 DEBUG oslo_concurrency.processutils [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:11:45 np0005539552 nova_compute[233724]: 2025-11-29 08:11:45.773 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:46 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:11:46 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4049277804' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:11:46 np0005539552 nova_compute[233724]: 2025-11-29 08:11:46.124 233728 DEBUG oslo_concurrency.processutils [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:11:46 np0005539552 nova_compute[233724]: 2025-11-29 08:11:46.126 233728 DEBUG nova.virt.libvirt.vif [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:10:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1237580642',display_name='tempest-ServerDiskConfigTestJSON-server-1237580642',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1237580642',id=81,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:11:10Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8384e5887c0948f5876c019d50057152',ramdisk_id='',reservation_id='r-nfk3svhu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio'
,image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-767135984',owner_user_name='tempest-ServerDiskConfigTestJSON-767135984-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:11:34Z,user_data=None,user_id='9ab0114aca6149af994da2b9052c1368',uuid=96e45d6d-27c7-4a08-9126-4856b2920133,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3690f505-411e-49df-9869-848f5e8e1d1a", "address": "fa:16:3e:70:93:84", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "vif_mac": "fa:16:3e:70:93:84"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3690f505-41", "ovs_interfaceid": "3690f505-411e-49df-9869-848f5e8e1d1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:11:46 np0005539552 nova_compute[233724]: 2025-11-29 08:11:46.127 233728 DEBUG nova.network.os_vif_util [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converting VIF {"id": "3690f505-411e-49df-9869-848f5e8e1d1a", "address": "fa:16:3e:70:93:84", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "vif_mac": "fa:16:3e:70:93:84"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3690f505-41", "ovs_interfaceid": "3690f505-411e-49df-9869-848f5e8e1d1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:11:46 np0005539552 nova_compute[233724]: 2025-11-29 08:11:46.129 233728 DEBUG nova.network.os_vif_util [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:93:84,bridge_name='br-int',has_traffic_filtering=True,id=3690f505-411e-49df-9869-848f5e8e1d1a,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3690f505-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:11:46 np0005539552 nova_compute[233724]: 2025-11-29 08:11:46.134 233728 DEBUG nova.virt.libvirt.driver [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:11:46 np0005539552 nova_compute[233724]:  <uuid>96e45d6d-27c7-4a08-9126-4856b2920133</uuid>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:  <name>instance-00000051</name>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:  <memory>196608</memory>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:11:46 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-1237580642</nova:name>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:11:45</nova:creationTime>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.micro">
Nov 29 03:11:46 np0005539552 nova_compute[233724]:        <nova:memory>192</nova:memory>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:        <nova:user uuid="9ab0114aca6149af994da2b9052c1368">tempest-ServerDiskConfigTestJSON-767135984-project-member</nova:user>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:        <nova:project uuid="8384e5887c0948f5876c019d50057152">tempest-ServerDiskConfigTestJSON-767135984</nova:project>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:        <nova:port uuid="3690f505-411e-49df-9869-848f5e8e1d1a">
Nov 29 03:11:46 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:      <entry name="serial">96e45d6d-27c7-4a08-9126-4856b2920133</entry>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:      <entry name="uuid">96e45d6d-27c7-4a08-9126-4856b2920133</entry>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:11:46 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/96e45d6d-27c7-4a08-9126-4856b2920133_disk">
Nov 29 03:11:46 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:11:46 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:11:46 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/96e45d6d-27c7-4a08-9126-4856b2920133_disk.config">
Nov 29 03:11:46 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:11:46 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:11:46 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:70:93:84"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:      <target dev="tap3690f505-41"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:11:46 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/96e45d6d-27c7-4a08-9126-4856b2920133/console.log" append="off"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:11:46 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:11:46 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:11:46 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:11:46 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:11:46 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:11:46 np0005539552 nova_compute[233724]: 2025-11-29 08:11:46.136 233728 DEBUG nova.virt.libvirt.vif [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:10:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1237580642',display_name='tempest-ServerDiskConfigTestJSON-server-1237580642',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1237580642',id=81,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:11:10Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8384e5887c0948f5876c019d50057152',ramdisk_id='',reservation_id='r-nfk3svhu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio'
,image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-767135984',owner_user_name='tempest-ServerDiskConfigTestJSON-767135984-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:11:34Z,user_data=None,user_id='9ab0114aca6149af994da2b9052c1368',uuid=96e45d6d-27c7-4a08-9126-4856b2920133,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3690f505-411e-49df-9869-848f5e8e1d1a", "address": "fa:16:3e:70:93:84", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "vif_mac": "fa:16:3e:70:93:84"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3690f505-41", "ovs_interfaceid": "3690f505-411e-49df-9869-848f5e8e1d1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:11:46 np0005539552 nova_compute[233724]: 2025-11-29 08:11:46.137 233728 DEBUG nova.network.os_vif_util [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converting VIF {"id": "3690f505-411e-49df-9869-848f5e8e1d1a", "address": "fa:16:3e:70:93:84", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "vif_mac": "fa:16:3e:70:93:84"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3690f505-41", "ovs_interfaceid": "3690f505-411e-49df-9869-848f5e8e1d1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:11:46 np0005539552 nova_compute[233724]: 2025-11-29 08:11:46.137 233728 DEBUG nova.network.os_vif_util [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:93:84,bridge_name='br-int',has_traffic_filtering=True,id=3690f505-411e-49df-9869-848f5e8e1d1a,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3690f505-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:11:46 np0005539552 nova_compute[233724]: 2025-11-29 08:11:46.138 233728 DEBUG os_vif [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:93:84,bridge_name='br-int',has_traffic_filtering=True,id=3690f505-411e-49df-9869-848f5e8e1d1a,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3690f505-41') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:11:46 np0005539552 nova_compute[233724]: 2025-11-29 08:11:46.138 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:46 np0005539552 nova_compute[233724]: 2025-11-29 08:11:46.139 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:46 np0005539552 nova_compute[233724]: 2025-11-29 08:11:46.139 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:11:46 np0005539552 nova_compute[233724]: 2025-11-29 08:11:46.142 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:46 np0005539552 nova_compute[233724]: 2025-11-29 08:11:46.142 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3690f505-41, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:46 np0005539552 nova_compute[233724]: 2025-11-29 08:11:46.142 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3690f505-41, col_values=(('external_ids', {'iface-id': '3690f505-411e-49df-9869-848f5e8e1d1a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:70:93:84', 'vm-uuid': '96e45d6d-27c7-4a08-9126-4856b2920133'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:46 np0005539552 nova_compute[233724]: 2025-11-29 08:11:46.144 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:46 np0005539552 NetworkManager[48926]: <info>  [1764403906.1446] manager: (tap3690f505-41): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/133)
Nov 29 03:11:46 np0005539552 nova_compute[233724]: 2025-11-29 08:11:46.146 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:11:46 np0005539552 nova_compute[233724]: 2025-11-29 08:11:46.151 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:46 np0005539552 nova_compute[233724]: 2025-11-29 08:11:46.152 233728 INFO os_vif [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:93:84,bridge_name='br-int',has_traffic_filtering=True,id=3690f505-411e-49df-9869-848f5e8e1d1a,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3690f505-41')#033[00m
Nov 29 03:11:46 np0005539552 nova_compute[233724]: 2025-11-29 08:11:46.241 233728 DEBUG nova.virt.libvirt.driver [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:11:46 np0005539552 nova_compute[233724]: 2025-11-29 08:11:46.242 233728 DEBUG nova.virt.libvirt.driver [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:11:46 np0005539552 nova_compute[233724]: 2025-11-29 08:11:46.242 233728 DEBUG nova.virt.libvirt.driver [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] No VIF found with MAC fa:16:3e:70:93:84, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:11:46 np0005539552 nova_compute[233724]: 2025-11-29 08:11:46.243 233728 INFO nova.virt.libvirt.driver [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Using config drive#033[00m
Nov 29 03:11:46 np0005539552 kernel: tap3690f505-41: entered promiscuous mode
Nov 29 03:11:46 np0005539552 NetworkManager[48926]: <info>  [1764403906.3285] manager: (tap3690f505-41): new Tun device (/org/freedesktop/NetworkManager/Devices/134)
Nov 29 03:11:46 np0005539552 ovn_controller[133798]: 2025-11-29T08:11:46Z|00250|binding|INFO|Claiming lport 3690f505-411e-49df-9869-848f5e8e1d1a for this chassis.
Nov 29 03:11:46 np0005539552 ovn_controller[133798]: 2025-11-29T08:11:46Z|00251|binding|INFO|3690f505-411e-49df-9869-848f5e8e1d1a: Claiming fa:16:3e:70:93:84 10.100.0.12
Nov 29 03:11:46 np0005539552 nova_compute[233724]: 2025-11-29 08:11:46.329 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:46 np0005539552 nova_compute[233724]: 2025-11-29 08:11:46.334 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:46 np0005539552 systemd-udevd[266927]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:11:46 np0005539552 systemd-machined[196379]: New machine qemu-30-instance-00000051.
Nov 29 03:11:46 np0005539552 NetworkManager[48926]: <info>  [1764403906.3694] device (tap3690f505-41): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:11:46.368 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:93:84 10.100.0.12'], port_security=['fa:16:3e:70:93:84 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '96e45d6d-27c7-4a08-9126-4856b2920133', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8384e5887c0948f5876c019d50057152', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'a1e9ed13-b0e1-45c0-9be6-be0f145466a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c0727149-3377-4d23-9d8d-0006462cd03e, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=3690f505-411e-49df-9869-848f5e8e1d1a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:11:46.369 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 3690f505-411e-49df-9869-848f5e8e1d1a in datapath 65f88c5a-8801-4bc1-9eed-15e2bab4717d bound to our chassis#033[00m
Nov 29 03:11:46 np0005539552 NetworkManager[48926]: <info>  [1764403906.3702] device (tap3690f505-41): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:11:46.370 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 65f88c5a-8801-4bc1-9eed-15e2bab4717d#033[00m
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:11:46.382 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f06940f0-646e-4783-aa38-e485e780c1bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:11:46.383 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap65f88c5a-81 in ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:11:46 np0005539552 systemd[1]: Started Virtual Machine qemu-30-instance-00000051.
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:11:46.384 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap65f88c5a-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:11:46.384 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[8fecbbf0-a512-4d31-bdbf-00aef2e969d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:11:46.385 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0271bfe7-38a5-41ed-8278-c19c0d452614]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:11:46.399 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[6d5e79af-ab46-4971-8c5f-db0836bba618]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:46 np0005539552 nova_compute[233724]: 2025-11-29 08:11:46.399 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:46 np0005539552 ovn_controller[133798]: 2025-11-29T08:11:46Z|00252|binding|INFO|Setting lport 3690f505-411e-49df-9869-848f5e8e1d1a ovn-installed in OVS
Nov 29 03:11:46 np0005539552 ovn_controller[133798]: 2025-11-29T08:11:46Z|00253|binding|INFO|Setting lport 3690f505-411e-49df-9869-848f5e8e1d1a up in Southbound
Nov 29 03:11:46 np0005539552 nova_compute[233724]: 2025-11-29 08:11:46.408 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:11:46.423 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[22d05af9-c7b1-466e-83b8-71c6b72eaf65]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:11:46.452 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[65394e57-8abd-46d9-86ea-f45234671ac4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:46 np0005539552 systemd-udevd[266930]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:11:46 np0005539552 NetworkManager[48926]: <info>  [1764403906.4589] manager: (tap65f88c5a-80): new Veth device (/org/freedesktop/NetworkManager/Devices/135)
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:11:46.458 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2b1e64f0-b1c9-4da1-969e-478a379624ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:11:46.490 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[31b845bc-65b1-4d4c-8f6e-3fe92c286a29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:11:46.493 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[0c2823bd-01d4-40e1-abf3-0e545c92f562]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:46 np0005539552 NetworkManager[48926]: <info>  [1764403906.5170] device (tap65f88c5a-80): carrier: link connected
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:11:46.525 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[17998fba-489b-4835-816e-86e5ae781985]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:11:46.538 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a645f6ff-8e95-4c3c-b277-dbc23dd97db3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap65f88c5a-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:22:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 695220, 'reachable_time': 33321, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266961, 'error': None, 'target': 'ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:11:46.555 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[aec5bab8-90ca-446d-b271-0a3a3a111fc4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feda:227e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 695220, 'tstamp': 695220}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266962, 'error': None, 'target': 'ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:11:46.568 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[fc6ac4a5-a48f-45e9-abc7-1f26330838cb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap65f88c5a-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:22:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 695220, 'reachable_time': 33321, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 266963, 'error': None, 'target': 'ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:11:46.594 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[68233296-3f9b-43a2-8110-28d8044df283]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:11:46.656 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[70885e34-ad9e-4324-bca0-e8ca7dd96624]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:11:46.658 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap65f88c5a-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:11:46.658 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:11:46.659 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap65f88c5a-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:46 np0005539552 kernel: tap65f88c5a-80: entered promiscuous mode
Nov 29 03:11:46 np0005539552 nova_compute[233724]: 2025-11-29 08:11:46.661 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:46 np0005539552 NetworkManager[48926]: <info>  [1764403906.6639] manager: (tap65f88c5a-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/136)
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:11:46.664 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap65f88c5a-80, col_values=(('external_ids', {'iface-id': 'dd9b6149-e4f7-45dd-a89e-de246cf739ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:46 np0005539552 nova_compute[233724]: 2025-11-29 08:11:46.665 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:46 np0005539552 ovn_controller[133798]: 2025-11-29T08:11:46Z|00254|binding|INFO|Releasing lport dd9b6149-e4f7-45dd-a89e-de246cf739ae from this chassis (sb_readonly=0)
Nov 29 03:11:46 np0005539552 nova_compute[233724]: 2025-11-29 08:11:46.679 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:11:46.680 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/65f88c5a-8801-4bc1-9eed-15e2bab4717d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/65f88c5a-8801-4bc1-9eed-15e2bab4717d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:11:46.681 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[5b1193bb-8ac5-4036-ac3c-36114dda5f45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:11:46.682 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-65f88c5a-8801-4bc1-9eed-15e2bab4717d
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/65f88c5a-8801-4bc1-9eed-15e2bab4717d.pid.haproxy
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 65f88c5a-8801-4bc1-9eed-15e2bab4717d
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:11:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:11:46.682 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'env', 'PROCESS_TAG=haproxy-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/65f88c5a-8801-4bc1-9eed-15e2bab4717d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:11:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:46.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:46 np0005539552 nova_compute[233724]: 2025-11-29 08:11:46.998 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403906.9976711, 96e45d6d-27c7-4a08-9126-4856b2920133 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:11:47 np0005539552 nova_compute[233724]: 2025-11-29 08:11:46.999 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:11:47 np0005539552 nova_compute[233724]: 2025-11-29 08:11:47.001 233728 DEBUG nova.compute.manager [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:11:47 np0005539552 nova_compute[233724]: 2025-11-29 08:11:47.004 233728 INFO nova.virt.libvirt.driver [-] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Instance running successfully.#033[00m
Nov 29 03:11:47 np0005539552 virtqemud[233098]: argument unsupported: QEMU guest agent is not configured
Nov 29 03:11:47 np0005539552 nova_compute[233724]: 2025-11-29 08:11:47.007 233728 DEBUG nova.virt.libvirt.guest [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Nov 29 03:11:47 np0005539552 nova_compute[233724]: 2025-11-29 08:11:47.008 233728 DEBUG nova.virt.libvirt.driver [None req-e8c4a953-fc33-4d54-80d4-b396ba8cc856 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Nov 29 03:11:47 np0005539552 nova_compute[233724]: 2025-11-29 08:11:47.049 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:11:47 np0005539552 nova_compute[233724]: 2025-11-29 08:11:47.053 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:11:47 np0005539552 podman[267038]: 2025-11-29 08:11:47.013610082 +0000 UTC m=+0.022359325 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:11:47 np0005539552 podman[267038]: 2025-11-29 08:11:47.133323854 +0000 UTC m=+0.142073077 container create baca437675ff27f46ffa8fa4f756ad4948ee0233d0ff456cf3cce4d2e2fa0e32 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:11:47 np0005539552 nova_compute[233724]: 2025-11-29 08:11:47.149 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Nov 29 03:11:47 np0005539552 nova_compute[233724]: 2025-11-29 08:11:47.150 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403906.998851, 96e45d6d-27c7-4a08-9126-4856b2920133 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:11:47 np0005539552 nova_compute[233724]: 2025-11-29 08:11:47.151 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] VM Started (Lifecycle Event)#033[00m
Nov 29 03:11:47 np0005539552 systemd[1]: Started libpod-conmon-baca437675ff27f46ffa8fa4f756ad4948ee0233d0ff456cf3cce4d2e2fa0e32.scope.
Nov 29 03:11:47 np0005539552 nova_compute[233724]: 2025-11-29 08:11:47.199 233728 DEBUG nova.network.neutron [req-d1cd87fb-ae41-40c3-a828-a219517b3ebc req-ec790a63-2c2d-4ea3-98ac-582a503f5ec9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Updated VIF entry in instance network info cache for port 3690f505-411e-49df-9869-848f5e8e1d1a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:11:47 np0005539552 nova_compute[233724]: 2025-11-29 08:11:47.199 233728 DEBUG nova.network.neutron [req-d1cd87fb-ae41-40c3-a828-a219517b3ebc req-ec790a63-2c2d-4ea3-98ac-582a503f5ec9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Updating instance_info_cache with network_info: [{"id": "3690f505-411e-49df-9869-848f5e8e1d1a", "address": "fa:16:3e:70:93:84", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3690f505-41", "ovs_interfaceid": "3690f505-411e-49df-9869-848f5e8e1d1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:11:47 np0005539552 nova_compute[233724]: 2025-11-29 08:11:47.209 233728 DEBUG nova.compute.manager [req-ff8a452c-b8c4-4b12-8efe-3e906b7e9a3e req-593e9f53-3087-4fab-861a-d1e5a5148660 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Received event network-vif-plugged-3690f505-411e-49df-9869-848f5e8e1d1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:11:47 np0005539552 nova_compute[233724]: 2025-11-29 08:11:47.210 233728 DEBUG oslo_concurrency.lockutils [req-ff8a452c-b8c4-4b12-8efe-3e906b7e9a3e req-593e9f53-3087-4fab-861a-d1e5a5148660 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "96e45d6d-27c7-4a08-9126-4856b2920133-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:47 np0005539552 nova_compute[233724]: 2025-11-29 08:11:47.210 233728 DEBUG oslo_concurrency.lockutils [req-ff8a452c-b8c4-4b12-8efe-3e906b7e9a3e req-593e9f53-3087-4fab-861a-d1e5a5148660 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "96e45d6d-27c7-4a08-9126-4856b2920133-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:47 np0005539552 nova_compute[233724]: 2025-11-29 08:11:47.211 233728 DEBUG oslo_concurrency.lockutils [req-ff8a452c-b8c4-4b12-8efe-3e906b7e9a3e req-593e9f53-3087-4fab-861a-d1e5a5148660 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "96e45d6d-27c7-4a08-9126-4856b2920133-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:47 np0005539552 nova_compute[233724]: 2025-11-29 08:11:47.211 233728 DEBUG nova.compute.manager [req-ff8a452c-b8c4-4b12-8efe-3e906b7e9a3e req-593e9f53-3087-4fab-861a-d1e5a5148660 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] No waiting events found dispatching network-vif-plugged-3690f505-411e-49df-9869-848f5e8e1d1a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:11:47 np0005539552 nova_compute[233724]: 2025-11-29 08:11:47.211 233728 WARNING nova.compute.manager [req-ff8a452c-b8c4-4b12-8efe-3e906b7e9a3e req-593e9f53-3087-4fab-861a-d1e5a5148660 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Received unexpected event network-vif-plugged-3690f505-411e-49df-9869-848f5e8e1d1a for instance with vm_state active and task_state resize_finish.#033[00m
Nov 29 03:11:47 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:11:47 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d851633617543d7660603e393f511f4fabb5df0cc8d4d2c4990bde5d1334ebc2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:11:47 np0005539552 podman[267038]: 2025-11-29 08:11:47.244022812 +0000 UTC m=+0.252772065 container init baca437675ff27f46ffa8fa4f756ad4948ee0233d0ff456cf3cce4d2e2fa0e32 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 03:11:47 np0005539552 podman[267038]: 2025-11-29 08:11:47.250802745 +0000 UTC m=+0.259551968 container start baca437675ff27f46ffa8fa4f756ad4948ee0233d0ff456cf3cce4d2e2fa0e32 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:11:47 np0005539552 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[267053]: [NOTICE]   (267057) : New worker (267059) forked
Nov 29 03:11:47 np0005539552 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[267053]: [NOTICE]   (267057) : Loading success.
Nov 29 03:11:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:47.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:47 np0005539552 nova_compute[233724]: 2025-11-29 08:11:47.470 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:11:47 np0005539552 nova_compute[233724]: 2025-11-29 08:11:47.473 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:11:47 np0005539552 nova_compute[233724]: 2025-11-29 08:11:47.500 233728 DEBUG oslo_concurrency.lockutils [req-d1cd87fb-ae41-40c3-a828-a219517b3ebc req-ec790a63-2c2d-4ea3-98ac-582a503f5ec9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-96e45d6d-27c7-4a08-9126-4856b2920133" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:11:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:48.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:49.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:49 np0005539552 nova_compute[233724]: 2025-11-29 08:11:49.501 233728 DEBUG nova.compute.manager [req-cdf30bb2-a406-4d84-9a5e-50c9f7c5b191 req-44a38a47-f2c0-462e-be5f-479907f82408 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Received event network-vif-plugged-3690f505-411e-49df-9869-848f5e8e1d1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:11:49 np0005539552 nova_compute[233724]: 2025-11-29 08:11:49.502 233728 DEBUG oslo_concurrency.lockutils [req-cdf30bb2-a406-4d84-9a5e-50c9f7c5b191 req-44a38a47-f2c0-462e-be5f-479907f82408 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "96e45d6d-27c7-4a08-9126-4856b2920133-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:49 np0005539552 nova_compute[233724]: 2025-11-29 08:11:49.502 233728 DEBUG oslo_concurrency.lockutils [req-cdf30bb2-a406-4d84-9a5e-50c9f7c5b191 req-44a38a47-f2c0-462e-be5f-479907f82408 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "96e45d6d-27c7-4a08-9126-4856b2920133-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:49 np0005539552 nova_compute[233724]: 2025-11-29 08:11:49.502 233728 DEBUG oslo_concurrency.lockutils [req-cdf30bb2-a406-4d84-9a5e-50c9f7c5b191 req-44a38a47-f2c0-462e-be5f-479907f82408 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "96e45d6d-27c7-4a08-9126-4856b2920133-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:49 np0005539552 nova_compute[233724]: 2025-11-29 08:11:49.502 233728 DEBUG nova.compute.manager [req-cdf30bb2-a406-4d84-9a5e-50c9f7c5b191 req-44a38a47-f2c0-462e-be5f-479907f82408 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] No waiting events found dispatching network-vif-plugged-3690f505-411e-49df-9869-848f5e8e1d1a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:11:49 np0005539552 nova_compute[233724]: 2025-11-29 08:11:49.503 233728 WARNING nova.compute.manager [req-cdf30bb2-a406-4d84-9a5e-50c9f7c5b191 req-44a38a47-f2c0-462e-be5f-479907f82408 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Received unexpected event network-vif-plugged-3690f505-411e-49df-9869-848f5e8e1d1a for instance with vm_state resized and task_state None.#033[00m
Nov 29 03:11:50 np0005539552 nova_compute[233724]: 2025-11-29 08:11:50.775 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:50.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:51 np0005539552 nova_compute[233724]: 2025-11-29 08:11:51.143 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:51.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:52.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:53 np0005539552 podman[267073]: 2025-11-29 08:11:53.002033023 +0000 UTC m=+0.079854647 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 03:11:53 np0005539552 podman[267074]: 2025-11-29 08:11:53.002008732 +0000 UTC m=+0.079863617 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:11:53 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:11:53.047 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:11:53 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:11:53.048 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:11:53 np0005539552 nova_compute[233724]: 2025-11-29 08:11:53.060 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:53 np0005539552 podman[267075]: 2025-11-29 08:11:53.07527037 +0000 UTC m=+0.150593076 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2)
Nov 29 03:11:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:53.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:54 np0005539552 nova_compute[233724]: 2025-11-29 08:11:54.087 233728 DEBUG oslo_concurrency.lockutils [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "f2279c6c-774a-44e1-854b-d9aae353330e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:54 np0005539552 nova_compute[233724]: 2025-11-29 08:11:54.088 233728 DEBUG oslo_concurrency.lockutils [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "f2279c6c-774a-44e1-854b-d9aae353330e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:54 np0005539552 nova_compute[233724]: 2025-11-29 08:11:54.116 233728 DEBUG nova.compute.manager [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:11:54 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e274 e274: 3 total, 3 up, 3 in
Nov 29 03:11:54 np0005539552 nova_compute[233724]: 2025-11-29 08:11:54.255 233728 DEBUG oslo_concurrency.lockutils [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:54 np0005539552 nova_compute[233724]: 2025-11-29 08:11:54.256 233728 DEBUG oslo_concurrency.lockutils [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:54 np0005539552 nova_compute[233724]: 2025-11-29 08:11:54.262 233728 DEBUG nova.virt.hardware [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:11:54 np0005539552 nova_compute[233724]: 2025-11-29 08:11:54.262 233728 INFO nova.compute.claims [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:11:54 np0005539552 nova_compute[233724]: 2025-11-29 08:11:54.471 233728 DEBUG oslo_concurrency.processutils [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:11:54 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:11:54 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3055270775' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:11:54 np0005539552 nova_compute[233724]: 2025-11-29 08:11:54.908 233728 DEBUG oslo_concurrency.processutils [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:11:54 np0005539552 nova_compute[233724]: 2025-11-29 08:11:54.913 233728 DEBUG nova.compute.provider_tree [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:11:54 np0005539552 nova_compute[233724]: 2025-11-29 08:11:54.939 233728 DEBUG nova.scheduler.client.report [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:11:54 np0005539552 nova_compute[233724]: 2025-11-29 08:11:54.963 233728 DEBUG oslo_concurrency.lockutils [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:54 np0005539552 nova_compute[233724]: 2025-11-29 08:11:54.964 233728 DEBUG nova.compute.manager [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:11:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:54.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:55 np0005539552 nova_compute[233724]: 2025-11-29 08:11:55.022 233728 DEBUG nova.compute.manager [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:11:55 np0005539552 nova_compute[233724]: 2025-11-29 08:11:55.023 233728 DEBUG nova.network.neutron [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:11:55 np0005539552 nova_compute[233724]: 2025-11-29 08:11:55.047 233728 INFO nova.virt.libvirt.driver [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:11:55 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:11:55.050 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:55 np0005539552 nova_compute[233724]: 2025-11-29 08:11:55.069 233728 DEBUG nova.compute.manager [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:11:55 np0005539552 nova_compute[233724]: 2025-11-29 08:11:55.209 233728 DEBUG nova.compute.manager [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:11:55 np0005539552 nova_compute[233724]: 2025-11-29 08:11:55.211 233728 DEBUG nova.virt.libvirt.driver [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:11:55 np0005539552 nova_compute[233724]: 2025-11-29 08:11:55.211 233728 INFO nova.virt.libvirt.driver [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Creating image(s)#033[00m
Nov 29 03:11:55 np0005539552 nova_compute[233724]: 2025-11-29 08:11:55.233 233728 DEBUG nova.storage.rbd_utils [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] rbd image f2279c6c-774a-44e1-854b-d9aae353330e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:11:55 np0005539552 nova_compute[233724]: 2025-11-29 08:11:55.265 233728 DEBUG nova.storage.rbd_utils [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] rbd image f2279c6c-774a-44e1-854b-d9aae353330e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:11:55 np0005539552 nova_compute[233724]: 2025-11-29 08:11:55.285 233728 DEBUG nova.storage.rbd_utils [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] rbd image f2279c6c-774a-44e1-854b-d9aae353330e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:11:55 np0005539552 nova_compute[233724]: 2025-11-29 08:11:55.289 233728 DEBUG oslo_concurrency.processutils [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:11:55 np0005539552 nova_compute[233724]: 2025-11-29 08:11:55.318 233728 DEBUG nova.policy [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b95b3e841be1420c99ee0a04dd0840f1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ff7c805d4242453aa2148a247956391d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:11:55 np0005539552 nova_compute[233724]: 2025-11-29 08:11:55.356 233728 DEBUG oslo_concurrency.processutils [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:11:55 np0005539552 nova_compute[233724]: 2025-11-29 08:11:55.357 233728 DEBUG oslo_concurrency.lockutils [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:55 np0005539552 nova_compute[233724]: 2025-11-29 08:11:55.358 233728 DEBUG oslo_concurrency.lockutils [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:55 np0005539552 nova_compute[233724]: 2025-11-29 08:11:55.358 233728 DEBUG oslo_concurrency.lockutils [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:55 np0005539552 nova_compute[233724]: 2025-11-29 08:11:55.382 233728 DEBUG nova.storage.rbd_utils [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] rbd image f2279c6c-774a-44e1-854b-d9aae353330e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:11:55 np0005539552 nova_compute[233724]: 2025-11-29 08:11:55.385 233728 DEBUG oslo_concurrency.processutils [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 f2279c6c-774a-44e1-854b-d9aae353330e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:11:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:55.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:55 np0005539552 nova_compute[233724]: 2025-11-29 08:11:55.751 233728 DEBUG oslo_concurrency.processutils [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 f2279c6c-774a-44e1-854b-d9aae353330e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.366s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:11:55 np0005539552 nova_compute[233724]: 2025-11-29 08:11:55.829 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:55 np0005539552 nova_compute[233724]: 2025-11-29 08:11:55.876 233728 DEBUG nova.storage.rbd_utils [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] resizing rbd image f2279c6c-774a-44e1-854b-d9aae353330e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:11:55 np0005539552 nova_compute[233724]: 2025-11-29 08:11:55.995 233728 DEBUG nova.objects.instance [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lazy-loading 'migration_context' on Instance uuid f2279c6c-774a-44e1-854b-d9aae353330e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:11:56 np0005539552 nova_compute[233724]: 2025-11-29 08:11:56.013 233728 DEBUG nova.virt.libvirt.driver [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:11:56 np0005539552 nova_compute[233724]: 2025-11-29 08:11:56.014 233728 DEBUG nova.virt.libvirt.driver [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Ensure instance console log exists: /var/lib/nova/instances/f2279c6c-774a-44e1-854b-d9aae353330e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:11:56 np0005539552 nova_compute[233724]: 2025-11-29 08:11:56.014 233728 DEBUG oslo_concurrency.lockutils [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:56 np0005539552 nova_compute[233724]: 2025-11-29 08:11:56.015 233728 DEBUG oslo_concurrency.lockutils [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:56 np0005539552 nova_compute[233724]: 2025-11-29 08:11:56.015 233728 DEBUG oslo_concurrency.lockutils [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:56 np0005539552 nova_compute[233724]: 2025-11-29 08:11:56.144 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:56 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #76. Immutable memtables: 0.
Nov 29 03:11:56 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:11:56.284251) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:11:56 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 76
Nov 29 03:11:56 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403916284368, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 542, "num_deletes": 251, "total_data_size": 783461, "memory_usage": 794776, "flush_reason": "Manual Compaction"}
Nov 29 03:11:56 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #77: started
Nov 29 03:11:56 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403916291235, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 77, "file_size": 516957, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40501, "largest_seqno": 41038, "table_properties": {"data_size": 514062, "index_size": 867, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7000, "raw_average_key_size": 19, "raw_value_size": 508197, "raw_average_value_size": 1403, "num_data_blocks": 37, "num_entries": 362, "num_filter_entries": 362, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403891, "oldest_key_time": 1764403891, "file_creation_time": 1764403916, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:11:56 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 7024 microseconds, and 3452 cpu microseconds.
Nov 29 03:11:56 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:11:56 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:11:56.291288) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #77: 516957 bytes OK
Nov 29 03:11:56 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:11:56.291310) [db/memtable_list.cc:519] [default] Level-0 commit table #77 started
Nov 29 03:11:56 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:11:56.292985) [db/memtable_list.cc:722] [default] Level-0 commit table #77: memtable #1 done
Nov 29 03:11:56 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:11:56.293001) EVENT_LOG_v1 {"time_micros": 1764403916292996, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:11:56 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:11:56.293018) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:11:56 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 780292, prev total WAL file size 780292, number of live WAL files 2.
Nov 29 03:11:56 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000073.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:11:56 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:11:56.293838) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Nov 29 03:11:56 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:11:56 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [77(504KB)], [75(13MB)]
Nov 29 03:11:56 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403916293904, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [77], "files_L6": [75], "score": -1, "input_data_size": 14231765, "oldest_snapshot_seqno": -1}
Nov 29 03:11:56 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #78: 7147 keys, 12294088 bytes, temperature: kUnknown
Nov 29 03:11:56 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403916369248, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 78, "file_size": 12294088, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12244369, "index_size": 30727, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17925, "raw_key_size": 185417, "raw_average_key_size": 25, "raw_value_size": 12114222, "raw_average_value_size": 1695, "num_data_blocks": 1216, "num_entries": 7147, "num_filter_entries": 7147, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764403916, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 78, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:11:56 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:11:56 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:11:56.369748) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 12294088 bytes
Nov 29 03:11:56 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:11:56.371396) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 188.3 rd, 162.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 13.1 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(51.3) write-amplify(23.8) OK, records in: 7664, records dropped: 517 output_compression: NoCompression
Nov 29 03:11:56 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:11:56.371415) EVENT_LOG_v1 {"time_micros": 1764403916371406, "job": 46, "event": "compaction_finished", "compaction_time_micros": 75596, "compaction_time_cpu_micros": 27498, "output_level": 6, "num_output_files": 1, "total_output_size": 12294088, "num_input_records": 7664, "num_output_records": 7147, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:11:56 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:11:56 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403916372338, "job": 46, "event": "table_file_deletion", "file_number": 77}
Nov 29 03:11:56 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000075.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:11:56 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403916375939, "job": 46, "event": "table_file_deletion", "file_number": 75}
Nov 29 03:11:56 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:11:56.293716) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:11:56 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:11:56.376032) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:11:56 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:11:56.376038) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:11:56 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:11:56.376040) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:11:56 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:11:56.376041) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:11:56 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:11:56.376043) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:11:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:56.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:57 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:57.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:58 np0005539552 nova_compute[233724]: 2025-11-29 08:11:58.803 233728 DEBUG nova.network.neutron [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Successfully created port: e7660cdd-1f88-4458-9388-8fb207f0a754 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:11:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:11:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:58.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:11:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:11:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:59.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:00 np0005539552 ovn_controller[133798]: 2025-11-29T08:12:00Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:70:93:84 10.100.0.12
Nov 29 03:12:00 np0005539552 nova_compute[233724]: 2025-11-29 08:12:00.569 233728 DEBUG nova.network.neutron [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Successfully updated port: e7660cdd-1f88-4458-9388-8fb207f0a754 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:12:00 np0005539552 nova_compute[233724]: 2025-11-29 08:12:00.588 233728 DEBUG oslo_concurrency.lockutils [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "refresh_cache-f2279c6c-774a-44e1-854b-d9aae353330e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:12:00 np0005539552 nova_compute[233724]: 2025-11-29 08:12:00.588 233728 DEBUG oslo_concurrency.lockutils [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquired lock "refresh_cache-f2279c6c-774a-44e1-854b-d9aae353330e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:12:00 np0005539552 nova_compute[233724]: 2025-11-29 08:12:00.588 233728 DEBUG nova.network.neutron [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:12:00 np0005539552 nova_compute[233724]: 2025-11-29 08:12:00.742 233728 DEBUG nova.compute.manager [req-33c72405-2fd2-4855-a742-114fa9dbd835 req-a9ee8501-b34d-47e3-942f-1a1f78b54fb8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Received event network-changed-e7660cdd-1f88-4458-9388-8fb207f0a754 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:00 np0005539552 nova_compute[233724]: 2025-11-29 08:12:00.743 233728 DEBUG nova.compute.manager [req-33c72405-2fd2-4855-a742-114fa9dbd835 req-a9ee8501-b34d-47e3-942f-1a1f78b54fb8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Refreshing instance network info cache due to event network-changed-e7660cdd-1f88-4458-9388-8fb207f0a754. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:12:00 np0005539552 nova_compute[233724]: 2025-11-29 08:12:00.743 233728 DEBUG oslo_concurrency.lockutils [req-33c72405-2fd2-4855-a742-114fa9dbd835 req-a9ee8501-b34d-47e3-942f-1a1f78b54fb8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-f2279c6c-774a-44e1-854b-d9aae353330e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:12:00 np0005539552 nova_compute[233724]: 2025-11-29 08:12:00.788 233728 DEBUG oslo_concurrency.lockutils [None req-ec35dec5-1a4c-48c2-af7d-a83ca1f3e63b 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "96e45d6d-27c7-4a08-9126-4856b2920133" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:00 np0005539552 nova_compute[233724]: 2025-11-29 08:12:00.788 233728 DEBUG oslo_concurrency.lockutils [None req-ec35dec5-1a4c-48c2-af7d-a83ca1f3e63b 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "96e45d6d-27c7-4a08-9126-4856b2920133" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:00 np0005539552 nova_compute[233724]: 2025-11-29 08:12:00.789 233728 DEBUG oslo_concurrency.lockutils [None req-ec35dec5-1a4c-48c2-af7d-a83ca1f3e63b 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "96e45d6d-27c7-4a08-9126-4856b2920133-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:00 np0005539552 nova_compute[233724]: 2025-11-29 08:12:00.789 233728 DEBUG oslo_concurrency.lockutils [None req-ec35dec5-1a4c-48c2-af7d-a83ca1f3e63b 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "96e45d6d-27c7-4a08-9126-4856b2920133-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:00 np0005539552 nova_compute[233724]: 2025-11-29 08:12:00.789 233728 DEBUG oslo_concurrency.lockutils [None req-ec35dec5-1a4c-48c2-af7d-a83ca1f3e63b 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "96e45d6d-27c7-4a08-9126-4856b2920133-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:00 np0005539552 nova_compute[233724]: 2025-11-29 08:12:00.790 233728 INFO nova.compute.manager [None req-ec35dec5-1a4c-48c2-af7d-a83ca1f3e63b 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Terminating instance#033[00m
Nov 29 03:12:00 np0005539552 nova_compute[233724]: 2025-11-29 08:12:00.791 233728 DEBUG nova.compute.manager [None req-ec35dec5-1a4c-48c2-af7d-a83ca1f3e63b 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:12:00 np0005539552 nova_compute[233724]: 2025-11-29 08:12:00.821 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:00 np0005539552 nova_compute[233724]: 2025-11-29 08:12:00.903 233728 DEBUG nova.network.neutron [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:12:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:00.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:01 np0005539552 kernel: tap3690f505-41 (unregistering): left promiscuous mode
Nov 29 03:12:01 np0005539552 NetworkManager[48926]: <info>  [1764403921.0210] device (tap3690f505-41): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:12:01 np0005539552 nova_compute[233724]: 2025-11-29 08:12:01.031 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:01 np0005539552 ovn_controller[133798]: 2025-11-29T08:12:01Z|00255|binding|INFO|Releasing lport 3690f505-411e-49df-9869-848f5e8e1d1a from this chassis (sb_readonly=0)
Nov 29 03:12:01 np0005539552 ovn_controller[133798]: 2025-11-29T08:12:01Z|00256|binding|INFO|Setting lport 3690f505-411e-49df-9869-848f5e8e1d1a down in Southbound
Nov 29 03:12:01 np0005539552 ovn_controller[133798]: 2025-11-29T08:12:01Z|00257|binding|INFO|Removing iface tap3690f505-41 ovn-installed in OVS
Nov 29 03:12:01 np0005539552 nova_compute[233724]: 2025-11-29 08:12:01.033 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:01 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:01.038 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:93:84 10.100.0.12'], port_security=['fa:16:3e:70:93:84 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '96e45d6d-27c7-4a08-9126-4856b2920133', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8384e5887c0948f5876c019d50057152', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'a1e9ed13-b0e1-45c0-9be6-be0f145466a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c0727149-3377-4d23-9d8d-0006462cd03e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=3690f505-411e-49df-9869-848f5e8e1d1a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:12:01 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:01.039 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 3690f505-411e-49df-9869-848f5e8e1d1a in datapath 65f88c5a-8801-4bc1-9eed-15e2bab4717d unbound from our chassis#033[00m
Nov 29 03:12:01 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:01.041 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 65f88c5a-8801-4bc1-9eed-15e2bab4717d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:12:01 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:01.043 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[fab0b77e-bc73-4564-a676-68edab483ce5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:01 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:01.043 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d namespace which is not needed anymore#033[00m
Nov 29 03:12:01 np0005539552 nova_compute[233724]: 2025-11-29 08:12:01.052 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:01 np0005539552 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d00000051.scope: Deactivated successfully.
Nov 29 03:12:01 np0005539552 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d00000051.scope: Consumed 13.802s CPU time.
Nov 29 03:12:01 np0005539552 systemd-machined[196379]: Machine qemu-30-instance-00000051 terminated.
Nov 29 03:12:01 np0005539552 nova_compute[233724]: 2025-11-29 08:12:01.146 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:01 np0005539552 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[267053]: [NOTICE]   (267057) : haproxy version is 2.8.14-c23fe91
Nov 29 03:12:01 np0005539552 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[267053]: [NOTICE]   (267057) : path to executable is /usr/sbin/haproxy
Nov 29 03:12:01 np0005539552 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[267053]: [WARNING]  (267057) : Exiting Master process...
Nov 29 03:12:01 np0005539552 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[267053]: [ALERT]    (267057) : Current worker (267059) exited with code 143 (Terminated)
Nov 29 03:12:01 np0005539552 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[267053]: [WARNING]  (267057) : All workers exited. Exiting... (0)
Nov 29 03:12:01 np0005539552 systemd[1]: libpod-baca437675ff27f46ffa8fa4f756ad4948ee0233d0ff456cf3cce4d2e2fa0e32.scope: Deactivated successfully.
Nov 29 03:12:01 np0005539552 conmon[267053]: conmon baca437675ff27f46ffa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-baca437675ff27f46ffa8fa4f756ad4948ee0233d0ff456cf3cce4d2e2fa0e32.scope/container/memory.events
Nov 29 03:12:01 np0005539552 podman[267402]: 2025-11-29 08:12:01.180308417 +0000 UTC m=+0.044349568 container died baca437675ff27f46ffa8fa4f756ad4948ee0233d0ff456cf3cce4d2e2fa0e32 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:12:01 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-baca437675ff27f46ffa8fa4f756ad4948ee0233d0ff456cf3cce4d2e2fa0e32-userdata-shm.mount: Deactivated successfully.
Nov 29 03:12:01 np0005539552 systemd[1]: var-lib-containers-storage-overlay-d851633617543d7660603e393f511f4fabb5df0cc8d4d2c4990bde5d1334ebc2-merged.mount: Deactivated successfully.
Nov 29 03:12:01 np0005539552 nova_compute[233724]: 2025-11-29 08:12:01.223 233728 INFO nova.virt.libvirt.driver [-] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Instance destroyed successfully.#033[00m
Nov 29 03:12:01 np0005539552 nova_compute[233724]: 2025-11-29 08:12:01.224 233728 DEBUG nova.objects.instance [None req-ec35dec5-1a4c-48c2-af7d-a83ca1f3e63b 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lazy-loading 'resources' on Instance uuid 96e45d6d-27c7-4a08-9126-4856b2920133 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:12:01 np0005539552 podman[267402]: 2025-11-29 08:12:01.229478054 +0000 UTC m=+0.093519205 container cleanup baca437675ff27f46ffa8fa4f756ad4948ee0233d0ff456cf3cce4d2e2fa0e32 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:12:01 np0005539552 systemd[1]: libpod-conmon-baca437675ff27f46ffa8fa4f756ad4948ee0233d0ff456cf3cce4d2e2fa0e32.scope: Deactivated successfully.
Nov 29 03:12:01 np0005539552 nova_compute[233724]: 2025-11-29 08:12:01.251 233728 DEBUG nova.virt.libvirt.vif [None req-ec35dec5-1a4c-48c2-af7d-a83ca1f3e63b 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:10:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1237580642',display_name='tempest-ServerDiskConfigTestJSON-server-1237580642',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1237580642',id=81,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:11:47Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8384e5887c0948f5876c019d50057152',ramdisk_id='',reservation_id='r-nfk3svhu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio
',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-767135984',owner_user_name='tempest-ServerDiskConfigTestJSON-767135984-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:11:55Z,user_data=None,user_id='9ab0114aca6149af994da2b9052c1368',uuid=96e45d6d-27c7-4a08-9126-4856b2920133,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3690f505-411e-49df-9869-848f5e8e1d1a", "address": "fa:16:3e:70:93:84", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3690f505-41", "ovs_interfaceid": "3690f505-411e-49df-9869-848f5e8e1d1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:12:01 np0005539552 nova_compute[233724]: 2025-11-29 08:12:01.251 233728 DEBUG nova.network.os_vif_util [None req-ec35dec5-1a4c-48c2-af7d-a83ca1f3e63b 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converting VIF {"id": "3690f505-411e-49df-9869-848f5e8e1d1a", "address": "fa:16:3e:70:93:84", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3690f505-41", "ovs_interfaceid": "3690f505-411e-49df-9869-848f5e8e1d1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:12:01 np0005539552 nova_compute[233724]: 2025-11-29 08:12:01.252 233728 DEBUG nova.network.os_vif_util [None req-ec35dec5-1a4c-48c2-af7d-a83ca1f3e63b 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:70:93:84,bridge_name='br-int',has_traffic_filtering=True,id=3690f505-411e-49df-9869-848f5e8e1d1a,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3690f505-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:12:01 np0005539552 nova_compute[233724]: 2025-11-29 08:12:01.252 233728 DEBUG os_vif [None req-ec35dec5-1a4c-48c2-af7d-a83ca1f3e63b 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:70:93:84,bridge_name='br-int',has_traffic_filtering=True,id=3690f505-411e-49df-9869-848f5e8e1d1a,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3690f505-41') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:12:01 np0005539552 nova_compute[233724]: 2025-11-29 08:12:01.254 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:01 np0005539552 nova_compute[233724]: 2025-11-29 08:12:01.254 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3690f505-41, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:01 np0005539552 nova_compute[233724]: 2025-11-29 08:12:01.255 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:01 np0005539552 nova_compute[233724]: 2025-11-29 08:12:01.257 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:01 np0005539552 nova_compute[233724]: 2025-11-29 08:12:01.259 233728 INFO os_vif [None req-ec35dec5-1a4c-48c2-af7d-a83ca1f3e63b 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:70:93:84,bridge_name='br-int',has_traffic_filtering=True,id=3690f505-411e-49df-9869-848f5e8e1d1a,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3690f505-41')#033[00m
Nov 29 03:12:01 np0005539552 podman[267442]: 2025-11-29 08:12:01.29378818 +0000 UTC m=+0.042997981 container remove baca437675ff27f46ffa8fa4f756ad4948ee0233d0ff456cf3cce4d2e2fa0e32 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:12:01 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:01.298 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[bd6c482f-013e-4931-9abf-83f122b8dc39]: (4, ('Sat Nov 29 08:12:01 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d (baca437675ff27f46ffa8fa4f756ad4948ee0233d0ff456cf3cce4d2e2fa0e32)\nbaca437675ff27f46ffa8fa4f756ad4948ee0233d0ff456cf3cce4d2e2fa0e32\nSat Nov 29 08:12:01 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d (baca437675ff27f46ffa8fa4f756ad4948ee0233d0ff456cf3cce4d2e2fa0e32)\nbaca437675ff27f46ffa8fa4f756ad4948ee0233d0ff456cf3cce4d2e2fa0e32\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:01 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:01.300 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[bc1d8c48-969c-481e-90cd-f875c26db349]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:01 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:01.301 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap65f88c5a-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:01 np0005539552 nova_compute[233724]: 2025-11-29 08:12:01.302 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:01 np0005539552 kernel: tap65f88c5a-80: left promiscuous mode
Nov 29 03:12:01 np0005539552 nova_compute[233724]: 2025-11-29 08:12:01.304 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:01 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:01.306 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6c9e6c37-df1c-449e-b463-7ef64a33ea14]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:01 np0005539552 nova_compute[233724]: 2025-11-29 08:12:01.316 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:01 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:01.327 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[8c32eb59-7649-43d1-a7d7-78a9926b8415]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:01 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:01.328 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[83852e4a-f766-4853-ab74-be3e97f607f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:01 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:01.344 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f63fd1b3-9bbe-42a1-a226-b58f19dc2766]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 695213, 'reachable_time': 21624, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267474, 'error': None, 'target': 'ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:01 np0005539552 systemd[1]: run-netns-ovnmeta\x2d65f88c5a\x2d8801\x2d4bc1\x2d9eed\x2d15e2bab4717d.mount: Deactivated successfully.
Nov 29 03:12:01 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:01.349 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:12:01 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:01.350 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[a97aaa3b-681d-4cf7-b52f-76ea1ad76c95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:01.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:01 np0005539552 nova_compute[233724]: 2025-11-29 08:12:01.661 233728 INFO nova.virt.libvirt.driver [None req-ec35dec5-1a4c-48c2-af7d-a83ca1f3e63b 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Deleting instance files /var/lib/nova/instances/96e45d6d-27c7-4a08-9126-4856b2920133_del#033[00m
Nov 29 03:12:01 np0005539552 nova_compute[233724]: 2025-11-29 08:12:01.662 233728 INFO nova.virt.libvirt.driver [None req-ec35dec5-1a4c-48c2-af7d-a83ca1f3e63b 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Deletion of /var/lib/nova/instances/96e45d6d-27c7-4a08-9126-4856b2920133_del complete#033[00m
Nov 29 03:12:01 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e275 e275: 3 total, 3 up, 3 in
Nov 29 03:12:01 np0005539552 nova_compute[233724]: 2025-11-29 08:12:01.939 233728 INFO nova.compute.manager [None req-ec35dec5-1a4c-48c2-af7d-a83ca1f3e63b 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Took 1.15 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:12:01 np0005539552 nova_compute[233724]: 2025-11-29 08:12:01.940 233728 DEBUG oslo.service.loopingcall [None req-ec35dec5-1a4c-48c2-af7d-a83ca1f3e63b 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:12:01 np0005539552 nova_compute[233724]: 2025-11-29 08:12:01.940 233728 DEBUG nova.compute.manager [-] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:12:01 np0005539552 nova_compute[233724]: 2025-11-29 08:12:01.940 233728 DEBUG nova.network.neutron [-] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:12:02 np0005539552 nova_compute[233724]: 2025-11-29 08:12:02.318 233728 DEBUG nova.compute.manager [req-79da065f-4634-4eb9-8ad3-58f32e4cb284 req-adaa199e-bcb6-42b7-ba97-8d35fe78988c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Received event network-vif-unplugged-3690f505-411e-49df-9869-848f5e8e1d1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:02 np0005539552 nova_compute[233724]: 2025-11-29 08:12:02.319 233728 DEBUG oslo_concurrency.lockutils [req-79da065f-4634-4eb9-8ad3-58f32e4cb284 req-adaa199e-bcb6-42b7-ba97-8d35fe78988c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "96e45d6d-27c7-4a08-9126-4856b2920133-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:02 np0005539552 nova_compute[233724]: 2025-11-29 08:12:02.319 233728 DEBUG oslo_concurrency.lockutils [req-79da065f-4634-4eb9-8ad3-58f32e4cb284 req-adaa199e-bcb6-42b7-ba97-8d35fe78988c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "96e45d6d-27c7-4a08-9126-4856b2920133-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:02 np0005539552 nova_compute[233724]: 2025-11-29 08:12:02.320 233728 DEBUG oslo_concurrency.lockutils [req-79da065f-4634-4eb9-8ad3-58f32e4cb284 req-adaa199e-bcb6-42b7-ba97-8d35fe78988c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "96e45d6d-27c7-4a08-9126-4856b2920133-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:02 np0005539552 nova_compute[233724]: 2025-11-29 08:12:02.320 233728 DEBUG nova.compute.manager [req-79da065f-4634-4eb9-8ad3-58f32e4cb284 req-adaa199e-bcb6-42b7-ba97-8d35fe78988c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] No waiting events found dispatching network-vif-unplugged-3690f505-411e-49df-9869-848f5e8e1d1a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:12:02 np0005539552 nova_compute[233724]: 2025-11-29 08:12:02.320 233728 DEBUG nova.compute.manager [req-79da065f-4634-4eb9-8ad3-58f32e4cb284 req-adaa199e-bcb6-42b7-ba97-8d35fe78988c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Received event network-vif-unplugged-3690f505-411e-49df-9869-848f5e8e1d1a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:12:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:02 np0005539552 nova_compute[233724]: 2025-11-29 08:12:02.884 233728 DEBUG nova.network.neutron [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Updating instance_info_cache with network_info: [{"id": "e7660cdd-1f88-4458-9388-8fb207f0a754", "address": "fa:16:3e:4b:d3:cf", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7660cdd-1f", "ovs_interfaceid": "e7660cdd-1f88-4458-9388-8fb207f0a754", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:12:02 np0005539552 nova_compute[233724]: 2025-11-29 08:12:02.908 233728 DEBUG oslo_concurrency.lockutils [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Releasing lock "refresh_cache-f2279c6c-774a-44e1-854b-d9aae353330e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:12:02 np0005539552 nova_compute[233724]: 2025-11-29 08:12:02.908 233728 DEBUG nova.compute.manager [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Instance network_info: |[{"id": "e7660cdd-1f88-4458-9388-8fb207f0a754", "address": "fa:16:3e:4b:d3:cf", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7660cdd-1f", "ovs_interfaceid": "e7660cdd-1f88-4458-9388-8fb207f0a754", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:12:02 np0005539552 nova_compute[233724]: 2025-11-29 08:12:02.909 233728 DEBUG oslo_concurrency.lockutils [req-33c72405-2fd2-4855-a742-114fa9dbd835 req-a9ee8501-b34d-47e3-942f-1a1f78b54fb8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-f2279c6c-774a-44e1-854b-d9aae353330e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:12:02 np0005539552 nova_compute[233724]: 2025-11-29 08:12:02.909 233728 DEBUG nova.network.neutron [req-33c72405-2fd2-4855-a742-114fa9dbd835 req-a9ee8501-b34d-47e3-942f-1a1f78b54fb8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Refreshing network info cache for port e7660cdd-1f88-4458-9388-8fb207f0a754 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:12:02 np0005539552 nova_compute[233724]: 2025-11-29 08:12:02.911 233728 DEBUG nova.virt.libvirt.driver [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Start _get_guest_xml network_info=[{"id": "e7660cdd-1f88-4458-9388-8fb207f0a754", "address": "fa:16:3e:4b:d3:cf", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7660cdd-1f", "ovs_interfaceid": "e7660cdd-1f88-4458-9388-8fb207f0a754", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:12:02 np0005539552 nova_compute[233724]: 2025-11-29 08:12:02.915 233728 WARNING nova.virt.libvirt.driver [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:12:02 np0005539552 nova_compute[233724]: 2025-11-29 08:12:02.921 233728 DEBUG nova.virt.libvirt.host [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:12:02 np0005539552 nova_compute[233724]: 2025-11-29 08:12:02.921 233728 DEBUG nova.virt.libvirt.host [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:12:02 np0005539552 nova_compute[233724]: 2025-11-29 08:12:02.927 233728 DEBUG nova.virt.libvirt.host [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:12:02 np0005539552 nova_compute[233724]: 2025-11-29 08:12:02.928 233728 DEBUG nova.virt.libvirt.host [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:12:02 np0005539552 nova_compute[233724]: 2025-11-29 08:12:02.929 233728 DEBUG nova.virt.libvirt.driver [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:12:02 np0005539552 nova_compute[233724]: 2025-11-29 08:12:02.929 233728 DEBUG nova.virt.hardware [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:12:02 np0005539552 nova_compute[233724]: 2025-11-29 08:12:02.929 233728 DEBUG nova.virt.hardware [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:12:02 np0005539552 nova_compute[233724]: 2025-11-29 08:12:02.930 233728 DEBUG nova.virt.hardware [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:12:02 np0005539552 nova_compute[233724]: 2025-11-29 08:12:02.930 233728 DEBUG nova.virt.hardware [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:12:02 np0005539552 nova_compute[233724]: 2025-11-29 08:12:02.930 233728 DEBUG nova.virt.hardware [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:12:02 np0005539552 nova_compute[233724]: 2025-11-29 08:12:02.930 233728 DEBUG nova.virt.hardware [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:12:02 np0005539552 nova_compute[233724]: 2025-11-29 08:12:02.931 233728 DEBUG nova.virt.hardware [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:12:02 np0005539552 nova_compute[233724]: 2025-11-29 08:12:02.931 233728 DEBUG nova.virt.hardware [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:12:02 np0005539552 nova_compute[233724]: 2025-11-29 08:12:02.931 233728 DEBUG nova.virt.hardware [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:12:02 np0005539552 nova_compute[233724]: 2025-11-29 08:12:02.931 233728 DEBUG nova.virt.hardware [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:12:02 np0005539552 nova_compute[233724]: 2025-11-29 08:12:02.932 233728 DEBUG nova.virt.hardware [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:12:02 np0005539552 nova_compute[233724]: 2025-11-29 08:12:02.934 233728 DEBUG oslo_concurrency.processutils [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:02.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:03 np0005539552 nova_compute[233724]: 2025-11-29 08:12:03.166 233728 DEBUG nova.network.neutron [-] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:12:03 np0005539552 nova_compute[233724]: 2025-11-29 08:12:03.196 233728 INFO nova.compute.manager [-] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Took 1.26 seconds to deallocate network for instance.#033[00m
Nov 29 03:12:03 np0005539552 nova_compute[233724]: 2025-11-29 08:12:03.238 233728 DEBUG oslo_concurrency.lockutils [None req-ec35dec5-1a4c-48c2-af7d-a83ca1f3e63b 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:03 np0005539552 nova_compute[233724]: 2025-11-29 08:12:03.239 233728 DEBUG oslo_concurrency.lockutils [None req-ec35dec5-1a4c-48c2-af7d-a83ca1f3e63b 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:03 np0005539552 nova_compute[233724]: 2025-11-29 08:12:03.242 233728 DEBUG oslo_concurrency.lockutils [None req-ec35dec5-1a4c-48c2-af7d-a83ca1f3e63b 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:03 np0005539552 nova_compute[233724]: 2025-11-29 08:12:03.271 233728 INFO nova.scheduler.client.report [None req-ec35dec5-1a4c-48c2-af7d-a83ca1f3e63b 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Deleted allocations for instance 96e45d6d-27c7-4a08-9126-4856b2920133#033[00m
Nov 29 03:12:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:12:03 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3330820675' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:12:03 np0005539552 nova_compute[233724]: 2025-11-29 08:12:03.332 233728 DEBUG oslo_concurrency.lockutils [None req-ec35dec5-1a4c-48c2-af7d-a83ca1f3e63b 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "96e45d6d-27c7-4a08-9126-4856b2920133" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.543s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:03 np0005539552 nova_compute[233724]: 2025-11-29 08:12:03.340 233728 DEBUG oslo_concurrency.processutils [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:03 np0005539552 nova_compute[233724]: 2025-11-29 08:12:03.367 233728 DEBUG nova.storage.rbd_utils [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] rbd image f2279c6c-774a-44e1-854b-d9aae353330e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:12:03 np0005539552 nova_compute[233724]: 2025-11-29 08:12:03.372 233728 DEBUG oslo_concurrency.processutils [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:03.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:12:03 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2634126034' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:12:03 np0005539552 nova_compute[233724]: 2025-11-29 08:12:03.823 233728 DEBUG oslo_concurrency.processutils [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:03 np0005539552 nova_compute[233724]: 2025-11-29 08:12:03.826 233728 DEBUG nova.virt.libvirt.vif [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:11:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1979756702',display_name='tempest-tempest.common.compute-instance-1979756702',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1979756702',id=83,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDEWXop04+uZLtvBsFcRaUa+Xwr7qm7TY0ta2r6fa1ieuV+DkD/NEV3POpVnY30H29RqMvmxXH2BxKbSYQ2SIJEBajPEkr5PX8Sjsh+mKC5/3A6DjsKy96CC8Vn8W5GT2A==',key_name='tempest-keypair-2040108904',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff7c805d4242453aa2148a247956391d',ramdisk_id='',reservation_id='r-c8i3k0lp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-372493183',owner_user_name='tempest-AttachInterfacesTestJSON-372493183-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:11:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b95b3e841be1420c99ee0a04dd0840f1',uuid=f2279c6c-774a-44e1-854b-d9aae353330e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e7660cdd-1f88-4458-9388-8fb207f0a754", "address": "fa:16:3e:4b:d3:cf", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7660cdd-1f", "ovs_interfaceid": "e7660cdd-1f88-4458-9388-8fb207f0a754", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:12:03 np0005539552 nova_compute[233724]: 2025-11-29 08:12:03.827 233728 DEBUG nova.network.os_vif_util [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converting VIF {"id": "e7660cdd-1f88-4458-9388-8fb207f0a754", "address": "fa:16:3e:4b:d3:cf", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7660cdd-1f", "ovs_interfaceid": "e7660cdd-1f88-4458-9388-8fb207f0a754", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:12:03 np0005539552 nova_compute[233724]: 2025-11-29 08:12:03.828 233728 DEBUG nova.network.os_vif_util [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:d3:cf,bridge_name='br-int',has_traffic_filtering=True,id=e7660cdd-1f88-4458-9388-8fb207f0a754,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7660cdd-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:12:03 np0005539552 nova_compute[233724]: 2025-11-29 08:12:03.830 233728 DEBUG nova.objects.instance [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lazy-loading 'pci_devices' on Instance uuid f2279c6c-774a-44e1-854b-d9aae353330e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:12:03 np0005539552 nova_compute[233724]: 2025-11-29 08:12:03.862 233728 DEBUG nova.virt.libvirt.driver [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:12:03 np0005539552 nova_compute[233724]:  <uuid>f2279c6c-774a-44e1-854b-d9aae353330e</uuid>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:  <name>instance-00000053</name>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:12:03 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:      <nova:name>tempest-tempest.common.compute-instance-1979756702</nova:name>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:12:02</nova:creationTime>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:12:03 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:        <nova:user uuid="b95b3e841be1420c99ee0a04dd0840f1">tempest-AttachInterfacesTestJSON-372493183-project-member</nova:user>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:        <nova:project uuid="ff7c805d4242453aa2148a247956391d">tempest-AttachInterfacesTestJSON-372493183</nova:project>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:        <nova:port uuid="e7660cdd-1f88-4458-9388-8fb207f0a754">
Nov 29 03:12:03 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:      <entry name="serial">f2279c6c-774a-44e1-854b-d9aae353330e</entry>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:      <entry name="uuid">f2279c6c-774a-44e1-854b-d9aae353330e</entry>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:12:03 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/f2279c6c-774a-44e1-854b-d9aae353330e_disk">
Nov 29 03:12:03 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:12:03 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:12:03 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/f2279c6c-774a-44e1-854b-d9aae353330e_disk.config">
Nov 29 03:12:03 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:12:03 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:12:03 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:4b:d3:cf"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:      <target dev="tape7660cdd-1f"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:12:03 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/f2279c6c-774a-44e1-854b-d9aae353330e/console.log" append="off"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:12:03 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:12:03 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:12:03 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:12:03 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:12:03 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:12:03 np0005539552 nova_compute[233724]: 2025-11-29 08:12:03.865 233728 DEBUG nova.compute.manager [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Preparing to wait for external event network-vif-plugged-e7660cdd-1f88-4458-9388-8fb207f0a754 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:12:03 np0005539552 nova_compute[233724]: 2025-11-29 08:12:03.866 233728 DEBUG oslo_concurrency.lockutils [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "f2279c6c-774a-44e1-854b-d9aae353330e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:03 np0005539552 nova_compute[233724]: 2025-11-29 08:12:03.866 233728 DEBUG oslo_concurrency.lockutils [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "f2279c6c-774a-44e1-854b-d9aae353330e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:03 np0005539552 nova_compute[233724]: 2025-11-29 08:12:03.866 233728 DEBUG oslo_concurrency.lockutils [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "f2279c6c-774a-44e1-854b-d9aae353330e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:03 np0005539552 nova_compute[233724]: 2025-11-29 08:12:03.867 233728 DEBUG nova.virt.libvirt.vif [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:11:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1979756702',display_name='tempest-tempest.common.compute-instance-1979756702',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1979756702',id=83,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDEWXop04+uZLtvBsFcRaUa+Xwr7qm7TY0ta2r6fa1ieuV+DkD/NEV3POpVnY30H29RqMvmxXH2BxKbSYQ2SIJEBajPEkr5PX8Sjsh+mKC5/3A6DjsKy96CC8Vn8W5GT2A==',key_name='tempest-keypair-2040108904',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff7c805d4242453aa2148a247956391d',ramdisk_id='',reservation_id='r-c8i3k0lp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-372493183',owner_user_name='tempest-AttachInterfacesTestJSON-372493183-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:11:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b95b3e841be1420c99ee0a04dd0840f1',uuid=f2279c6c-774a-44e1-854b-d9aae353330e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e7660cdd-1f88-4458-9388-8fb207f0a754", "address": "fa:16:3e:4b:d3:cf", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7660cdd-1f", "ovs_interfaceid": "e7660cdd-1f88-4458-9388-8fb207f0a754", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:12:03 np0005539552 nova_compute[233724]: 2025-11-29 08:12:03.867 233728 DEBUG nova.network.os_vif_util [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converting VIF {"id": "e7660cdd-1f88-4458-9388-8fb207f0a754", "address": "fa:16:3e:4b:d3:cf", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7660cdd-1f", "ovs_interfaceid": "e7660cdd-1f88-4458-9388-8fb207f0a754", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:12:03 np0005539552 nova_compute[233724]: 2025-11-29 08:12:03.868 233728 DEBUG nova.network.os_vif_util [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:d3:cf,bridge_name='br-int',has_traffic_filtering=True,id=e7660cdd-1f88-4458-9388-8fb207f0a754,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7660cdd-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:12:03 np0005539552 nova_compute[233724]: 2025-11-29 08:12:03.868 233728 DEBUG os_vif [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:d3:cf,bridge_name='br-int',has_traffic_filtering=True,id=e7660cdd-1f88-4458-9388-8fb207f0a754,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7660cdd-1f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:12:03 np0005539552 nova_compute[233724]: 2025-11-29 08:12:03.869 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:03 np0005539552 nova_compute[233724]: 2025-11-29 08:12:03.869 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:03 np0005539552 nova_compute[233724]: 2025-11-29 08:12:03.869 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:12:03 np0005539552 nova_compute[233724]: 2025-11-29 08:12:03.872 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:03 np0005539552 nova_compute[233724]: 2025-11-29 08:12:03.872 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape7660cdd-1f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:03 np0005539552 nova_compute[233724]: 2025-11-29 08:12:03.872 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape7660cdd-1f, col_values=(('external_ids', {'iface-id': 'e7660cdd-1f88-4458-9388-8fb207f0a754', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4b:d3:cf', 'vm-uuid': 'f2279c6c-774a-44e1-854b-d9aae353330e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:03 np0005539552 nova_compute[233724]: 2025-11-29 08:12:03.874 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:03 np0005539552 NetworkManager[48926]: <info>  [1764403923.8749] manager: (tape7660cdd-1f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/137)
Nov 29 03:12:03 np0005539552 nova_compute[233724]: 2025-11-29 08:12:03.877 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:12:03 np0005539552 nova_compute[233724]: 2025-11-29 08:12:03.879 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:03 np0005539552 nova_compute[233724]: 2025-11-29 08:12:03.880 233728 INFO os_vif [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:d3:cf,bridge_name='br-int',has_traffic_filtering=True,id=e7660cdd-1f88-4458-9388-8fb207f0a754,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7660cdd-1f')#033[00m
Nov 29 03:12:03 np0005539552 nova_compute[233724]: 2025-11-29 08:12:03.979 233728 DEBUG nova.virt.libvirt.driver [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:12:03 np0005539552 nova_compute[233724]: 2025-11-29 08:12:03.979 233728 DEBUG nova.virt.libvirt.driver [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:12:03 np0005539552 nova_compute[233724]: 2025-11-29 08:12:03.980 233728 DEBUG nova.virt.libvirt.driver [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] No VIF found with MAC fa:16:3e:4b:d3:cf, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:12:03 np0005539552 nova_compute[233724]: 2025-11-29 08:12:03.980 233728 INFO nova.virt.libvirt.driver [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Using config drive#033[00m
Nov 29 03:12:04 np0005539552 nova_compute[233724]: 2025-11-29 08:12:04.005 233728 DEBUG nova.storage.rbd_utils [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] rbd image f2279c6c-774a-44e1-854b-d9aae353330e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:12:04 np0005539552 nova_compute[233724]: 2025-11-29 08:12:04.921 233728 DEBUG nova.compute.manager [req-1d158615-84e3-4967-b8a0-0ba2e0c74078 req-5ed82d5f-4c3a-4723-a222-be5a335103d5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Received event network-vif-plugged-3690f505-411e-49df-9869-848f5e8e1d1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:04 np0005539552 nova_compute[233724]: 2025-11-29 08:12:04.921 233728 DEBUG oslo_concurrency.lockutils [req-1d158615-84e3-4967-b8a0-0ba2e0c74078 req-5ed82d5f-4c3a-4723-a222-be5a335103d5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "96e45d6d-27c7-4a08-9126-4856b2920133-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:04 np0005539552 nova_compute[233724]: 2025-11-29 08:12:04.922 233728 DEBUG oslo_concurrency.lockutils [req-1d158615-84e3-4967-b8a0-0ba2e0c74078 req-5ed82d5f-4c3a-4723-a222-be5a335103d5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "96e45d6d-27c7-4a08-9126-4856b2920133-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:04 np0005539552 nova_compute[233724]: 2025-11-29 08:12:04.922 233728 DEBUG oslo_concurrency.lockutils [req-1d158615-84e3-4967-b8a0-0ba2e0c74078 req-5ed82d5f-4c3a-4723-a222-be5a335103d5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "96e45d6d-27c7-4a08-9126-4856b2920133-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:04 np0005539552 nova_compute[233724]: 2025-11-29 08:12:04.922 233728 DEBUG nova.compute.manager [req-1d158615-84e3-4967-b8a0-0ba2e0c74078 req-5ed82d5f-4c3a-4723-a222-be5a335103d5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] No waiting events found dispatching network-vif-plugged-3690f505-411e-49df-9869-848f5e8e1d1a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:12:04 np0005539552 nova_compute[233724]: 2025-11-29 08:12:04.922 233728 WARNING nova.compute.manager [req-1d158615-84e3-4967-b8a0-0ba2e0c74078 req-5ed82d5f-4c3a-4723-a222-be5a335103d5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Received unexpected event network-vif-plugged-3690f505-411e-49df-9869-848f5e8e1d1a for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:12:04 np0005539552 nova_compute[233724]: 2025-11-29 08:12:04.923 233728 DEBUG nova.compute.manager [req-1d158615-84e3-4967-b8a0-0ba2e0c74078 req-5ed82d5f-4c3a-4723-a222-be5a335103d5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Received event network-vif-deleted-3690f505-411e-49df-9869-848f5e8e1d1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:04.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:05 np0005539552 nova_compute[233724]: 2025-11-29 08:12:05.090 233728 INFO nova.virt.libvirt.driver [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Creating config drive at /var/lib/nova/instances/f2279c6c-774a-44e1-854b-d9aae353330e/disk.config#033[00m
Nov 29 03:12:05 np0005539552 nova_compute[233724]: 2025-11-29 08:12:05.097 233728 DEBUG oslo_concurrency.processutils [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f2279c6c-774a-44e1-854b-d9aae353330e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoxqk11e6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:05 np0005539552 nova_compute[233724]: 2025-11-29 08:12:05.228 233728 DEBUG oslo_concurrency.processutils [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f2279c6c-774a-44e1-854b-d9aae353330e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoxqk11e6" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:05 np0005539552 nova_compute[233724]: 2025-11-29 08:12:05.255 233728 DEBUG nova.storage.rbd_utils [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] rbd image f2279c6c-774a-44e1-854b-d9aae353330e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:12:05 np0005539552 nova_compute[233724]: 2025-11-29 08:12:05.259 233728 DEBUG oslo_concurrency.processutils [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f2279c6c-774a-44e1-854b-d9aae353330e/disk.config f2279c6c-774a-44e1-854b-d9aae353330e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:05.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:05 np0005539552 nova_compute[233724]: 2025-11-29 08:12:05.447 233728 DEBUG oslo_concurrency.processutils [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f2279c6c-774a-44e1-854b-d9aae353330e/disk.config f2279c6c-774a-44e1-854b-d9aae353330e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:05 np0005539552 nova_compute[233724]: 2025-11-29 08:12:05.448 233728 INFO nova.virt.libvirt.driver [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Deleting local config drive /var/lib/nova/instances/f2279c6c-774a-44e1-854b-d9aae353330e/disk.config because it was imported into RBD.#033[00m
Nov 29 03:12:05 np0005539552 kernel: tape7660cdd-1f: entered promiscuous mode
Nov 29 03:12:05 np0005539552 NetworkManager[48926]: <info>  [1764403925.5035] manager: (tape7660cdd-1f): new Tun device (/org/freedesktop/NetworkManager/Devices/138)
Nov 29 03:12:05 np0005539552 nova_compute[233724]: 2025-11-29 08:12:05.504 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:05 np0005539552 ovn_controller[133798]: 2025-11-29T08:12:05Z|00258|binding|INFO|Claiming lport e7660cdd-1f88-4458-9388-8fb207f0a754 for this chassis.
Nov 29 03:12:05 np0005539552 ovn_controller[133798]: 2025-11-29T08:12:05Z|00259|binding|INFO|e7660cdd-1f88-4458-9388-8fb207f0a754: Claiming fa:16:3e:4b:d3:cf 10.100.0.7
Nov 29 03:12:05 np0005539552 systemd-udevd[267611]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:12:05 np0005539552 NetworkManager[48926]: <info>  [1764403925.5411] device (tape7660cdd-1f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:12:05 np0005539552 NetworkManager[48926]: <info>  [1764403925.5423] device (tape7660cdd-1f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:05.556 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:d3:cf 10.100.0.7'], port_security=['fa:16:3e:4b:d3:cf 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'f2279c6c-774a-44e1-854b-d9aae353330e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff7c805d4242453aa2148a247956391d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1ece3be3-d42e-475f-bdcb-f996b12e4880', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5330ba90-719c-42ae-a31a-dd5fd1d240e2, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=e7660cdd-1f88-4458-9388-8fb207f0a754) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:05.557 143400 INFO neutron.agent.ovn.metadata.agent [-] Port e7660cdd-1f88-4458-9388-8fb207f0a754 in datapath ddd8b166-79ec-408d-b52c-581ad9dd6cb8 bound to our chassis#033[00m
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:05.559 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ddd8b166-79ec-408d-b52c-581ad9dd6cb8#033[00m
Nov 29 03:12:05 np0005539552 systemd-machined[196379]: New machine qemu-31-instance-00000053.
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:05.574 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[7dd3fd06-b3aa-4d22-bc48-f4d1e4dd8f54]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:05.575 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapddd8b166-71 in ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:05.577 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapddd8b166-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:05.577 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f77a42f5-2d99-41b0-be73-4ec3182615ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:05.578 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[1ba5608e-611b-4186-ae1f-efae0e2076f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:05.592 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[090b3299-43ca-401a-a3e7-4e70d4e686d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:05 np0005539552 systemd[1]: Started Virtual Machine qemu-31-instance-00000053.
Nov 29 03:12:05 np0005539552 nova_compute[233724]: 2025-11-29 08:12:05.599 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:05 np0005539552 ovn_controller[133798]: 2025-11-29T08:12:05Z|00260|binding|INFO|Setting lport e7660cdd-1f88-4458-9388-8fb207f0a754 ovn-installed in OVS
Nov 29 03:12:05 np0005539552 ovn_controller[133798]: 2025-11-29T08:12:05Z|00261|binding|INFO|Setting lport e7660cdd-1f88-4458-9388-8fb207f0a754 up in Southbound
Nov 29 03:12:05 np0005539552 nova_compute[233724]: 2025-11-29 08:12:05.608 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:05.610 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[9c9fffd1-9258-4b62-9af1-a8272fbbef18]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:05.639 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[8af683d7-50c8-4260-bf6c-107efc2cfbf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:05 np0005539552 systemd-udevd[267613]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:12:05 np0005539552 NetworkManager[48926]: <info>  [1764403925.6452] manager: (tapddd8b166-70): new Veth device (/org/freedesktop/NetworkManager/Devices/139)
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:05.646 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[9d4dd572-591e-4f6a-b737-97c6379c8237]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:05.677 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[83c5efbf-bfa7-4a0a-9472-5a1b79a19899]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:05.680 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[69722a40-0c94-47bd-b4b0-92a0dcb1f43b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:05 np0005539552 NetworkManager[48926]: <info>  [1764403925.7000] device (tapddd8b166-70): carrier: link connected
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:05.706 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[273a7a8b-d521-47d4-b95d-c0e53b6a0ff3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:05.721 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[92d2a14e-9332-451a-964e-4764bd70ed3b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapddd8b166-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9d:35:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697138, 'reachable_time': 22900, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267647, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:05.736 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2f44f70f-b417-4fbd-a754-8d487808347d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9d:3576'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 697138, 'tstamp': 697138}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267648, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:05.753 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[738d4762-b6c5-4b60-b1ce-a01bcf924bb9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapddd8b166-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9d:35:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697138, 'reachable_time': 22900, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 267649, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:05.781 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[88763185-a889-4b2c-8e38-73b931aea487]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:05 np0005539552 nova_compute[233724]: 2025-11-29 08:12:05.822 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:05.836 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6d79b51a-a6ec-4f5e-bb11-6235d911250e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:05.837 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapddd8b166-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:05.837 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:05.838 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapddd8b166-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:05 np0005539552 nova_compute[233724]: 2025-11-29 08:12:05.839 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:05 np0005539552 kernel: tapddd8b166-70: entered promiscuous mode
Nov 29 03:12:05 np0005539552 NetworkManager[48926]: <info>  [1764403925.8400] manager: (tapddd8b166-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/140)
Nov 29 03:12:05 np0005539552 nova_compute[233724]: 2025-11-29 08:12:05.841 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:05.845 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapddd8b166-70, col_values=(('external_ids', {'iface-id': 'a9e57abf-e3e4-455b-b4c5-0cda127bd5c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:05 np0005539552 ovn_controller[133798]: 2025-11-29T08:12:05Z|00262|binding|INFO|Releasing lport a9e57abf-e3e4-455b-b4c5-0cda127bd5c1 from this chassis (sb_readonly=0)
Nov 29 03:12:05 np0005539552 nova_compute[233724]: 2025-11-29 08:12:05.846 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:05.849 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ddd8b166-79ec-408d-b52c-581ad9dd6cb8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ddd8b166-79ec-408d-b52c-581ad9dd6cb8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:05.850 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b011acdf-b4b4-4d8d-906b-325064db4815]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:05.851 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-ddd8b166-79ec-408d-b52c-581ad9dd6cb8
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/ddd8b166-79ec-408d-b52c-581ad9dd6cb8.pid.haproxy
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID ddd8b166-79ec-408d-b52c-581ad9dd6cb8
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:12:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:05.851 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'env', 'PROCESS_TAG=haproxy-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ddd8b166-79ec-408d-b52c-581ad9dd6cb8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:12:05 np0005539552 nova_compute[233724]: 2025-11-29 08:12:05.860 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:06 np0005539552 nova_compute[233724]: 2025-11-29 08:12:06.016 233728 DEBUG nova.network.neutron [req-33c72405-2fd2-4855-a742-114fa9dbd835 req-a9ee8501-b34d-47e3-942f-1a1f78b54fb8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Updated VIF entry in instance network info cache for port e7660cdd-1f88-4458-9388-8fb207f0a754. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:12:06 np0005539552 nova_compute[233724]: 2025-11-29 08:12:06.020 233728 DEBUG nova.network.neutron [req-33c72405-2fd2-4855-a742-114fa9dbd835 req-a9ee8501-b34d-47e3-942f-1a1f78b54fb8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Updating instance_info_cache with network_info: [{"id": "e7660cdd-1f88-4458-9388-8fb207f0a754", "address": "fa:16:3e:4b:d3:cf", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7660cdd-1f", "ovs_interfaceid": "e7660cdd-1f88-4458-9388-8fb207f0a754", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:12:06 np0005539552 nova_compute[233724]: 2025-11-29 08:12:06.054 233728 DEBUG oslo_concurrency.lockutils [req-33c72405-2fd2-4855-a742-114fa9dbd835 req-a9ee8501-b34d-47e3-942f-1a1f78b54fb8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-f2279c6c-774a-44e1-854b-d9aae353330e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:12:06 np0005539552 podman[267681]: 2025-11-29 08:12:06.247168692 +0000 UTC m=+0.054571804 container create 3b0b881566bb03191151c01023dab891f4182c1e854d2e913ed95dc55ddc9730 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:12:06 np0005539552 systemd[1]: Started libpod-conmon-3b0b881566bb03191151c01023dab891f4182c1e854d2e913ed95dc55ddc9730.scope.
Nov 29 03:12:06 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:12:06 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb5d9b83ba443b69231d2adc990edd3452145e9a404b55e56312800f565dd408/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:12:06 np0005539552 podman[267681]: 2025-11-29 08:12:06.217991455 +0000 UTC m=+0.025394567 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:12:06 np0005539552 podman[267681]: 2025-11-29 08:12:06.318687383 +0000 UTC m=+0.126090485 container init 3b0b881566bb03191151c01023dab891f4182c1e854d2e913ed95dc55ddc9730 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:12:06 np0005539552 podman[267681]: 2025-11-29 08:12:06.323932735 +0000 UTC m=+0.131335807 container start 3b0b881566bb03191151c01023dab891f4182c1e854d2e913ed95dc55ddc9730 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 03:12:06 np0005539552 neutron-haproxy-ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8[267694]: [NOTICE]   (267699) : New worker (267702) forked
Nov 29 03:12:06 np0005539552 neutron-haproxy-ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8[267694]: [NOTICE]   (267699) : Loading success.
Nov 29 03:12:06 np0005539552 nova_compute[233724]: 2025-11-29 08:12:06.848 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403926.848715, f2279c6c-774a-44e1-854b-d9aae353330e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:12:06 np0005539552 nova_compute[233724]: 2025-11-29 08:12:06.849 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] VM Started (Lifecycle Event)#033[00m
Nov 29 03:12:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:06.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:06 np0005539552 nova_compute[233724]: 2025-11-29 08:12:06.989 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:12:06 np0005539552 nova_compute[233724]: 2025-11-29 08:12:06.993 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403926.849241, f2279c6c-774a-44e1-854b-d9aae353330e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:12:06 np0005539552 nova_compute[233724]: 2025-11-29 08:12:06.993 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:12:07 np0005539552 nova_compute[233724]: 2025-11-29 08:12:07.025 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:12:07 np0005539552 nova_compute[233724]: 2025-11-29 08:12:07.028 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:12:07 np0005539552 nova_compute[233724]: 2025-11-29 08:12:07.053 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:12:07 np0005539552 nova_compute[233724]: 2025-11-29 08:12:07.164 233728 DEBUG nova.compute.manager [req-4ff0031c-b1e2-40e2-87a1-d836231e7f62 req-c32380a8-9cf5-48ca-ae03-1d897faf4209 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Received event network-vif-plugged-e7660cdd-1f88-4458-9388-8fb207f0a754 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:07 np0005539552 nova_compute[233724]: 2025-11-29 08:12:07.164 233728 DEBUG oslo_concurrency.lockutils [req-4ff0031c-b1e2-40e2-87a1-d836231e7f62 req-c32380a8-9cf5-48ca-ae03-1d897faf4209 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "f2279c6c-774a-44e1-854b-d9aae353330e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:07 np0005539552 nova_compute[233724]: 2025-11-29 08:12:07.165 233728 DEBUG oslo_concurrency.lockutils [req-4ff0031c-b1e2-40e2-87a1-d836231e7f62 req-c32380a8-9cf5-48ca-ae03-1d897faf4209 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f2279c6c-774a-44e1-854b-d9aae353330e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:07 np0005539552 nova_compute[233724]: 2025-11-29 08:12:07.165 233728 DEBUG oslo_concurrency.lockutils [req-4ff0031c-b1e2-40e2-87a1-d836231e7f62 req-c32380a8-9cf5-48ca-ae03-1d897faf4209 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f2279c6c-774a-44e1-854b-d9aae353330e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:07 np0005539552 nova_compute[233724]: 2025-11-29 08:12:07.165 233728 DEBUG nova.compute.manager [req-4ff0031c-b1e2-40e2-87a1-d836231e7f62 req-c32380a8-9cf5-48ca-ae03-1d897faf4209 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Processing event network-vif-plugged-e7660cdd-1f88-4458-9388-8fb207f0a754 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:12:07 np0005539552 nova_compute[233724]: 2025-11-29 08:12:07.165 233728 DEBUG nova.compute.manager [req-4ff0031c-b1e2-40e2-87a1-d836231e7f62 req-c32380a8-9cf5-48ca-ae03-1d897faf4209 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Received event network-vif-plugged-e7660cdd-1f88-4458-9388-8fb207f0a754 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:07 np0005539552 nova_compute[233724]: 2025-11-29 08:12:07.166 233728 DEBUG oslo_concurrency.lockutils [req-4ff0031c-b1e2-40e2-87a1-d836231e7f62 req-c32380a8-9cf5-48ca-ae03-1d897faf4209 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "f2279c6c-774a-44e1-854b-d9aae353330e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:07 np0005539552 nova_compute[233724]: 2025-11-29 08:12:07.166 233728 DEBUG oslo_concurrency.lockutils [req-4ff0031c-b1e2-40e2-87a1-d836231e7f62 req-c32380a8-9cf5-48ca-ae03-1d897faf4209 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f2279c6c-774a-44e1-854b-d9aae353330e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:07 np0005539552 nova_compute[233724]: 2025-11-29 08:12:07.166 233728 DEBUG oslo_concurrency.lockutils [req-4ff0031c-b1e2-40e2-87a1-d836231e7f62 req-c32380a8-9cf5-48ca-ae03-1d897faf4209 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f2279c6c-774a-44e1-854b-d9aae353330e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:07 np0005539552 nova_compute[233724]: 2025-11-29 08:12:07.166 233728 DEBUG nova.compute.manager [req-4ff0031c-b1e2-40e2-87a1-d836231e7f62 req-c32380a8-9cf5-48ca-ae03-1d897faf4209 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] No waiting events found dispatching network-vif-plugged-e7660cdd-1f88-4458-9388-8fb207f0a754 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:12:07 np0005539552 nova_compute[233724]: 2025-11-29 08:12:07.167 233728 WARNING nova.compute.manager [req-4ff0031c-b1e2-40e2-87a1-d836231e7f62 req-c32380a8-9cf5-48ca-ae03-1d897faf4209 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Received unexpected event network-vif-plugged-e7660cdd-1f88-4458-9388-8fb207f0a754 for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:12:07 np0005539552 nova_compute[233724]: 2025-11-29 08:12:07.167 233728 DEBUG nova.compute.manager [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:12:07 np0005539552 nova_compute[233724]: 2025-11-29 08:12:07.171 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403927.1712852, f2279c6c-774a-44e1-854b-d9aae353330e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:12:07 np0005539552 nova_compute[233724]: 2025-11-29 08:12:07.171 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:12:07 np0005539552 nova_compute[233724]: 2025-11-29 08:12:07.173 233728 DEBUG nova.virt.libvirt.driver [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:12:07 np0005539552 nova_compute[233724]: 2025-11-29 08:12:07.177 233728 INFO nova.virt.libvirt.driver [-] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Instance spawned successfully.#033[00m
Nov 29 03:12:07 np0005539552 nova_compute[233724]: 2025-11-29 08:12:07.178 233728 DEBUG nova.virt.libvirt.driver [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:12:07 np0005539552 nova_compute[233724]: 2025-11-29 08:12:07.192 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:12:07 np0005539552 nova_compute[233724]: 2025-11-29 08:12:07.199 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:12:07 np0005539552 nova_compute[233724]: 2025-11-29 08:12:07.203 233728 DEBUG nova.virt.libvirt.driver [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:12:07 np0005539552 nova_compute[233724]: 2025-11-29 08:12:07.204 233728 DEBUG nova.virt.libvirt.driver [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:12:07 np0005539552 nova_compute[233724]: 2025-11-29 08:12:07.205 233728 DEBUG nova.virt.libvirt.driver [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:12:07 np0005539552 nova_compute[233724]: 2025-11-29 08:12:07.205 233728 DEBUG nova.virt.libvirt.driver [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:12:07 np0005539552 nova_compute[233724]: 2025-11-29 08:12:07.206 233728 DEBUG nova.virt.libvirt.driver [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:12:07 np0005539552 nova_compute[233724]: 2025-11-29 08:12:07.206 233728 DEBUG nova.virt.libvirt.driver [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:12:07 np0005539552 nova_compute[233724]: 2025-11-29 08:12:07.236 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:12:07 np0005539552 nova_compute[233724]: 2025-11-29 08:12:07.333 233728 INFO nova.compute.manager [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Took 12.12 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:12:07 np0005539552 nova_compute[233724]: 2025-11-29 08:12:07.334 233728 DEBUG nova.compute.manager [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:12:07 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:07 np0005539552 nova_compute[233724]: 2025-11-29 08:12:07.436 233728 INFO nova.compute.manager [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Took 13.23 seconds to build instance.#033[00m
Nov 29 03:12:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:07.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:07 np0005539552 nova_compute[233724]: 2025-11-29 08:12:07.459 233728 DEBUG oslo_concurrency.lockutils [None req-2b11ac63-5cda-476c-932b-73a571593f10 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "f2279c6c-774a-44e1-854b-d9aae353330e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.371s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:08 np0005539552 nova_compute[233724]: 2025-11-29 08:12:08.874 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:08.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:09.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:10 np0005539552 nova_compute[233724]: 2025-11-29 08:12:10.826 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:10.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:11.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:12 np0005539552 NetworkManager[48926]: <info>  [1764403932.7657] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/141)
Nov 29 03:12:12 np0005539552 NetworkManager[48926]: <info>  [1764403932.7667] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/142)
Nov 29 03:12:12 np0005539552 nova_compute[233724]: 2025-11-29 08:12:12.779 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:12 np0005539552 nova_compute[233724]: 2025-11-29 08:12:12.892 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:12 np0005539552 ovn_controller[133798]: 2025-11-29T08:12:12Z|00263|binding|INFO|Releasing lport a9e57abf-e3e4-455b-b4c5-0cda127bd5c1 from this chassis (sb_readonly=0)
Nov 29 03:12:12 np0005539552 nova_compute[233724]: 2025-11-29 08:12:12.909 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:12.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:13.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:13 np0005539552 nova_compute[233724]: 2025-11-29 08:12:13.739 233728 DEBUG nova.compute.manager [req-9593a362-16f1-4983-a5f5-ba47c912f1f2 req-f44c77f0-c70e-4cb8-bb7a-39974e96612e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Received event network-changed-e7660cdd-1f88-4458-9388-8fb207f0a754 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:13 np0005539552 nova_compute[233724]: 2025-11-29 08:12:13.740 233728 DEBUG nova.compute.manager [req-9593a362-16f1-4983-a5f5-ba47c912f1f2 req-f44c77f0-c70e-4cb8-bb7a-39974e96612e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Refreshing instance network info cache due to event network-changed-e7660cdd-1f88-4458-9388-8fb207f0a754. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:12:13 np0005539552 nova_compute[233724]: 2025-11-29 08:12:13.740 233728 DEBUG oslo_concurrency.lockutils [req-9593a362-16f1-4983-a5f5-ba47c912f1f2 req-f44c77f0-c70e-4cb8-bb7a-39974e96612e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-f2279c6c-774a-44e1-854b-d9aae353330e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:12:13 np0005539552 nova_compute[233724]: 2025-11-29 08:12:13.740 233728 DEBUG oslo_concurrency.lockutils [req-9593a362-16f1-4983-a5f5-ba47c912f1f2 req-f44c77f0-c70e-4cb8-bb7a-39974e96612e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-f2279c6c-774a-44e1-854b-d9aae353330e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:12:13 np0005539552 nova_compute[233724]: 2025-11-29 08:12:13.740 233728 DEBUG nova.network.neutron [req-9593a362-16f1-4983-a5f5-ba47c912f1f2 req-f44c77f0-c70e-4cb8-bb7a-39974e96612e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Refreshing network info cache for port e7660cdd-1f88-4458-9388-8fb207f0a754 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:12:13 np0005539552 nova_compute[233724]: 2025-11-29 08:12:13.875 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:14.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:15.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:15 np0005539552 nova_compute[233724]: 2025-11-29 08:12:15.851 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:16 np0005539552 nova_compute[233724]: 2025-11-29 08:12:16.222 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403921.220889, 96e45d6d-27c7-4a08-9126-4856b2920133 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:12:16 np0005539552 nova_compute[233724]: 2025-11-29 08:12:16.223 233728 INFO nova.compute.manager [-] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:12:16 np0005539552 nova_compute[233724]: 2025-11-29 08:12:16.275 233728 DEBUG nova.compute.manager [None req-832abf9e-0c9a-4117-8437-2113bcf9f97b - - - - - -] [instance: 96e45d6d-27c7-4a08-9126-4856b2920133] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:12:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:16.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:17.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:17 np0005539552 nova_compute[233724]: 2025-11-29 08:12:17.845 233728 DEBUG nova.network.neutron [req-9593a362-16f1-4983-a5f5-ba47c912f1f2 req-f44c77f0-c70e-4cb8-bb7a-39974e96612e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Updated VIF entry in instance network info cache for port e7660cdd-1f88-4458-9388-8fb207f0a754. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:12:17 np0005539552 nova_compute[233724]: 2025-11-29 08:12:17.846 233728 DEBUG nova.network.neutron [req-9593a362-16f1-4983-a5f5-ba47c912f1f2 req-f44c77f0-c70e-4cb8-bb7a-39974e96612e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Updating instance_info_cache with network_info: [{"id": "e7660cdd-1f88-4458-9388-8fb207f0a754", "address": "fa:16:3e:4b:d3:cf", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7660cdd-1f", "ovs_interfaceid": "e7660cdd-1f88-4458-9388-8fb207f0a754", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:12:17 np0005539552 nova_compute[233724]: 2025-11-29 08:12:17.879 233728 DEBUG oslo_concurrency.lockutils [req-9593a362-16f1-4983-a5f5-ba47c912f1f2 req-f44c77f0-c70e-4cb8-bb7a-39974e96612e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-f2279c6c-774a-44e1-854b-d9aae353330e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:12:18 np0005539552 nova_compute[233724]: 2025-11-29 08:12:18.444 233728 DEBUG oslo_concurrency.lockutils [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:18 np0005539552 nova_compute[233724]: 2025-11-29 08:12:18.444 233728 DEBUG oslo_concurrency.lockutils [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:18 np0005539552 nova_compute[233724]: 2025-11-29 08:12:18.470 233728 DEBUG nova.compute.manager [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:12:18 np0005539552 nova_compute[233724]: 2025-11-29 08:12:18.651 233728 DEBUG oslo_concurrency.lockutils [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:18 np0005539552 nova_compute[233724]: 2025-11-29 08:12:18.651 233728 DEBUG oslo_concurrency.lockutils [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:18 np0005539552 nova_compute[233724]: 2025-11-29 08:12:18.660 233728 DEBUG nova.virt.hardware [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:12:18 np0005539552 nova_compute[233724]: 2025-11-29 08:12:18.660 233728 INFO nova.compute.claims [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:12:18 np0005539552 nova_compute[233724]: 2025-11-29 08:12:18.877 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:18 np0005539552 nova_compute[233724]: 2025-11-29 08:12:18.929 233728 DEBUG oslo_concurrency.processutils [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:18.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:19 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:12:19 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/608971255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:12:19 np0005539552 nova_compute[233724]: 2025-11-29 08:12:19.390 233728 DEBUG oslo_concurrency.processutils [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:19 np0005539552 nova_compute[233724]: 2025-11-29 08:12:19.397 233728 DEBUG nova.compute.provider_tree [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:12:19 np0005539552 nova_compute[233724]: 2025-11-29 08:12:19.415 233728 DEBUG nova.scheduler.client.report [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:12:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:19.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:19 np0005539552 nova_compute[233724]: 2025-11-29 08:12:19.467 233728 DEBUG oslo_concurrency.lockutils [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:19 np0005539552 nova_compute[233724]: 2025-11-29 08:12:19.468 233728 DEBUG nova.compute.manager [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:12:19 np0005539552 nova_compute[233724]: 2025-11-29 08:12:19.543 233728 DEBUG nova.compute.manager [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:12:19 np0005539552 nova_compute[233724]: 2025-11-29 08:12:19.544 233728 DEBUG nova.network.neutron [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:12:19 np0005539552 nova_compute[233724]: 2025-11-29 08:12:19.561 233728 INFO nova.virt.libvirt.driver [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:12:19 np0005539552 nova_compute[233724]: 2025-11-29 08:12:19.582 233728 DEBUG nova.compute.manager [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:12:19 np0005539552 nova_compute[233724]: 2025-11-29 08:12:19.747 233728 DEBUG nova.compute.manager [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:12:19 np0005539552 nova_compute[233724]: 2025-11-29 08:12:19.749 233728 DEBUG nova.virt.libvirt.driver [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:12:19 np0005539552 nova_compute[233724]: 2025-11-29 08:12:19.749 233728 INFO nova.virt.libvirt.driver [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Creating image(s)#033[00m
Nov 29 03:12:19 np0005539552 nova_compute[233724]: 2025-11-29 08:12:19.773 233728 DEBUG nova.storage.rbd_utils [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] rbd image 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:12:19 np0005539552 nova_compute[233724]: 2025-11-29 08:12:19.801 233728 DEBUG nova.storage.rbd_utils [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] rbd image 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:12:19 np0005539552 nova_compute[233724]: 2025-11-29 08:12:19.830 233728 DEBUG nova.storage.rbd_utils [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] rbd image 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:12:19 np0005539552 nova_compute[233724]: 2025-11-29 08:12:19.836 233728 DEBUG oslo_concurrency.processutils [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:19 np0005539552 nova_compute[233724]: 2025-11-29 08:12:19.903 233728 DEBUG oslo_concurrency.processutils [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:19 np0005539552 nova_compute[233724]: 2025-11-29 08:12:19.904 233728 DEBUG oslo_concurrency.lockutils [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:19 np0005539552 nova_compute[233724]: 2025-11-29 08:12:19.905 233728 DEBUG oslo_concurrency.lockutils [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:19 np0005539552 nova_compute[233724]: 2025-11-29 08:12:19.906 233728 DEBUG oslo_concurrency.lockutils [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:19 np0005539552 nova_compute[233724]: 2025-11-29 08:12:19.934 233728 DEBUG nova.storage.rbd_utils [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] rbd image 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:12:19 np0005539552 nova_compute[233724]: 2025-11-29 08:12:19.939 233728 DEBUG oslo_concurrency.processutils [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:19 np0005539552 nova_compute[233724]: 2025-11-29 08:12:19.970 233728 DEBUG nova.policy [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b95b3e841be1420c99ee0a04dd0840f1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ff7c805d4242453aa2148a247956391d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:12:20 np0005539552 nova_compute[233724]: 2025-11-29 08:12:20.234 233728 DEBUG oslo_concurrency.processutils [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.295s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:20 np0005539552 nova_compute[233724]: 2025-11-29 08:12:20.308 233728 DEBUG nova.storage.rbd_utils [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] resizing rbd image 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:12:20 np0005539552 nova_compute[233724]: 2025-11-29 08:12:20.409 233728 DEBUG nova.objects.instance [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lazy-loading 'migration_context' on Instance uuid 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:12:20 np0005539552 nova_compute[233724]: 2025-11-29 08:12:20.445 233728 DEBUG nova.virt.libvirt.driver [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:12:20 np0005539552 nova_compute[233724]: 2025-11-29 08:12:20.446 233728 DEBUG nova.virt.libvirt.driver [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Ensure instance console log exists: /var/lib/nova/instances/7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:12:20 np0005539552 nova_compute[233724]: 2025-11-29 08:12:20.446 233728 DEBUG oslo_concurrency.lockutils [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:20 np0005539552 nova_compute[233724]: 2025-11-29 08:12:20.447 233728 DEBUG oslo_concurrency.lockutils [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:20 np0005539552 nova_compute[233724]: 2025-11-29 08:12:20.447 233728 DEBUG oslo_concurrency.lockutils [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:20.620 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:20.621 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:20.621 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:20 np0005539552 nova_compute[233724]: 2025-11-29 08:12:20.852 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:20.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:21.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:21 np0005539552 ovn_controller[133798]: 2025-11-29T08:12:21Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4b:d3:cf 10.100.0.7
Nov 29 03:12:21 np0005539552 ovn_controller[133798]: 2025-11-29T08:12:21Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4b:d3:cf 10.100.0.7
Nov 29 03:12:22 np0005539552 nova_compute[233724]: 2025-11-29 08:12:22.283 233728 DEBUG nova.network.neutron [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Successfully created port: 834e7d4e-9813-48de-88d0-2da712aaa996 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:12:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:22 np0005539552 nova_compute[233724]: 2025-11-29 08:12:22.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:12:22 np0005539552 nova_compute[233724]: 2025-11-29 08:12:22.960 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:22 np0005539552 nova_compute[233724]: 2025-11-29 08:12:22.960 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:22 np0005539552 nova_compute[233724]: 2025-11-29 08:12:22.960 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:22 np0005539552 nova_compute[233724]: 2025-11-29 08:12:22.961 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:12:22 np0005539552 nova_compute[233724]: 2025-11-29 08:12:22.961 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:23.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:12:23 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/363727392' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:12:23 np0005539552 nova_compute[233724]: 2025-11-29 08:12:23.398 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:23.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:23 np0005539552 podman[268029]: 2025-11-29 08:12:23.507443285 +0000 UTC m=+0.064228044 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 03:12:23 np0005539552 podman[268030]: 2025-11-29 08:12:23.534319191 +0000 UTC m=+0.089996280 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:12:23 np0005539552 podman[268028]: 2025-11-29 08:12:23.534604558 +0000 UTC m=+0.092064115 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd)
Nov 29 03:12:23 np0005539552 nova_compute[233724]: 2025-11-29 08:12:23.544 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000053 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:12:23 np0005539552 nova_compute[233724]: 2025-11-29 08:12:23.544 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000053 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:12:23 np0005539552 nova_compute[233724]: 2025-11-29 08:12:23.696 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:12:23 np0005539552 nova_compute[233724]: 2025-11-29 08:12:23.697 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4296MB free_disk=20.900375366210938GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:12:23 np0005539552 nova_compute[233724]: 2025-11-29 08:12:23.697 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:23 np0005539552 nova_compute[233724]: 2025-11-29 08:12:23.697 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:23 np0005539552 nova_compute[233724]: 2025-11-29 08:12:23.841 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance f2279c6c-774a-44e1-854b-d9aae353330e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:12:23 np0005539552 nova_compute[233724]: 2025-11-29 08:12:23.842 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:12:23 np0005539552 nova_compute[233724]: 2025-11-29 08:12:23.842 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:12:23 np0005539552 nova_compute[233724]: 2025-11-29 08:12:23.842 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:12:23 np0005539552 nova_compute[233724]: 2025-11-29 08:12:23.880 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:23 np0005539552 nova_compute[233724]: 2025-11-29 08:12:23.968 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:24 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:12:24 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1720038028' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:12:24 np0005539552 nova_compute[233724]: 2025-11-29 08:12:24.409 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:24 np0005539552 nova_compute[233724]: 2025-11-29 08:12:24.415 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:12:24 np0005539552 nova_compute[233724]: 2025-11-29 08:12:24.446 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:12:24 np0005539552 nova_compute[233724]: 2025-11-29 08:12:24.472 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:12:24 np0005539552 nova_compute[233724]: 2025-11-29 08:12:24.472 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:12:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:25.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:12:25 np0005539552 nova_compute[233724]: 2025-11-29 08:12:25.046 233728 DEBUG nova.network.neutron [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Successfully updated port: 834e7d4e-9813-48de-88d0-2da712aaa996 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:12:25 np0005539552 nova_compute[233724]: 2025-11-29 08:12:25.068 233728 DEBUG oslo_concurrency.lockutils [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "refresh_cache-7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:12:25 np0005539552 nova_compute[233724]: 2025-11-29 08:12:25.068 233728 DEBUG oslo_concurrency.lockutils [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquired lock "refresh_cache-7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:12:25 np0005539552 nova_compute[233724]: 2025-11-29 08:12:25.069 233728 DEBUG nova.network.neutron [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:12:25 np0005539552 nova_compute[233724]: 2025-11-29 08:12:25.431 233728 DEBUG nova.network.neutron [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:12:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:12:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:25.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:12:25 np0005539552 nova_compute[233724]: 2025-11-29 08:12:25.853 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:26 np0005539552 nova_compute[233724]: 2025-11-29 08:12:26.473 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:12:26 np0005539552 nova_compute[233724]: 2025-11-29 08:12:26.474 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:12:26 np0005539552 nova_compute[233724]: 2025-11-29 08:12:26.475 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:12:26 np0005539552 nova_compute[233724]: 2025-11-29 08:12:26.475 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:12:26 np0005539552 nova_compute[233724]: 2025-11-29 08:12:26.652 233728 DEBUG nova.compute.manager [req-7e93388a-4f05-47b9-b698-46284e4ab7f8 req-05a3f225-3bc7-4bbd-b35e-e50cfe1ccc24 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Received event network-changed-834e7d4e-9813-48de-88d0-2da712aaa996 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:26 np0005539552 nova_compute[233724]: 2025-11-29 08:12:26.652 233728 DEBUG nova.compute.manager [req-7e93388a-4f05-47b9-b698-46284e4ab7f8 req-05a3f225-3bc7-4bbd-b35e-e50cfe1ccc24 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Refreshing instance network info cache due to event network-changed-834e7d4e-9813-48de-88d0-2da712aaa996. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:12:26 np0005539552 nova_compute[233724]: 2025-11-29 08:12:26.652 233728 DEBUG oslo_concurrency.lockutils [req-7e93388a-4f05-47b9-b698-46284e4ab7f8 req-05a3f225-3bc7-4bbd-b35e-e50cfe1ccc24 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:12:26 np0005539552 nova_compute[233724]: 2025-11-29 08:12:26.925 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:12:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:27.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:27.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:27 np0005539552 nova_compute[233724]: 2025-11-29 08:12:27.760 233728 DEBUG nova.network.neutron [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Updating instance_info_cache with network_info: [{"id": "834e7d4e-9813-48de-88d0-2da712aaa996", "address": "fa:16:3e:a8:eb:8c", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap834e7d4e-98", "ovs_interfaceid": "834e7d4e-9813-48de-88d0-2da712aaa996", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:12:27 np0005539552 nova_compute[233724]: 2025-11-29 08:12:27.803 233728 DEBUG oslo_concurrency.lockutils [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Releasing lock "refresh_cache-7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:12:27 np0005539552 nova_compute[233724]: 2025-11-29 08:12:27.804 233728 DEBUG nova.compute.manager [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Instance network_info: |[{"id": "834e7d4e-9813-48de-88d0-2da712aaa996", "address": "fa:16:3e:a8:eb:8c", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap834e7d4e-98", "ovs_interfaceid": "834e7d4e-9813-48de-88d0-2da712aaa996", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:12:27 np0005539552 nova_compute[233724]: 2025-11-29 08:12:27.804 233728 DEBUG oslo_concurrency.lockutils [req-7e93388a-4f05-47b9-b698-46284e4ab7f8 req-05a3f225-3bc7-4bbd-b35e-e50cfe1ccc24 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:12:27 np0005539552 nova_compute[233724]: 2025-11-29 08:12:27.805 233728 DEBUG nova.network.neutron [req-7e93388a-4f05-47b9-b698-46284e4ab7f8 req-05a3f225-3bc7-4bbd-b35e-e50cfe1ccc24 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Refreshing network info cache for port 834e7d4e-9813-48de-88d0-2da712aaa996 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:12:27 np0005539552 nova_compute[233724]: 2025-11-29 08:12:27.808 233728 DEBUG nova.virt.libvirt.driver [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Start _get_guest_xml network_info=[{"id": "834e7d4e-9813-48de-88d0-2da712aaa996", "address": "fa:16:3e:a8:eb:8c", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap834e7d4e-98", "ovs_interfaceid": "834e7d4e-9813-48de-88d0-2da712aaa996", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:12:27 np0005539552 nova_compute[233724]: 2025-11-29 08:12:27.812 233728 WARNING nova.virt.libvirt.driver [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:12:27 np0005539552 nova_compute[233724]: 2025-11-29 08:12:27.824 233728 DEBUG nova.virt.libvirt.host [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:12:27 np0005539552 nova_compute[233724]: 2025-11-29 08:12:27.824 233728 DEBUG nova.virt.libvirt.host [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:12:27 np0005539552 nova_compute[233724]: 2025-11-29 08:12:27.837 233728 DEBUG nova.virt.libvirt.host [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:12:27 np0005539552 nova_compute[233724]: 2025-11-29 08:12:27.838 233728 DEBUG nova.virt.libvirt.host [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:12:27 np0005539552 nova_compute[233724]: 2025-11-29 08:12:27.839 233728 DEBUG nova.virt.libvirt.driver [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:12:27 np0005539552 nova_compute[233724]: 2025-11-29 08:12:27.839 233728 DEBUG nova.virt.hardware [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:12:27 np0005539552 nova_compute[233724]: 2025-11-29 08:12:27.840 233728 DEBUG nova.virt.hardware [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:12:27 np0005539552 nova_compute[233724]: 2025-11-29 08:12:27.840 233728 DEBUG nova.virt.hardware [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:12:27 np0005539552 nova_compute[233724]: 2025-11-29 08:12:27.840 233728 DEBUG nova.virt.hardware [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:12:27 np0005539552 nova_compute[233724]: 2025-11-29 08:12:27.840 233728 DEBUG nova.virt.hardware [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:12:27 np0005539552 nova_compute[233724]: 2025-11-29 08:12:27.840 233728 DEBUG nova.virt.hardware [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:12:27 np0005539552 nova_compute[233724]: 2025-11-29 08:12:27.841 233728 DEBUG nova.virt.hardware [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:12:27 np0005539552 nova_compute[233724]: 2025-11-29 08:12:27.841 233728 DEBUG nova.virt.hardware [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:12:27 np0005539552 nova_compute[233724]: 2025-11-29 08:12:27.841 233728 DEBUG nova.virt.hardware [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:12:27 np0005539552 nova_compute[233724]: 2025-11-29 08:12:27.841 233728 DEBUG nova.virt.hardware [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:12:27 np0005539552 nova_compute[233724]: 2025-11-29 08:12:27.842 233728 DEBUG nova.virt.hardware [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:12:27 np0005539552 nova_compute[233724]: 2025-11-29 08:12:27.844 233728 DEBUG oslo_concurrency.processutils [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:12:28 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3898060549' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:12:28 np0005539552 nova_compute[233724]: 2025-11-29 08:12:28.276 233728 DEBUG oslo_concurrency.processutils [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:28 np0005539552 nova_compute[233724]: 2025-11-29 08:12:28.300 233728 DEBUG nova.storage.rbd_utils [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] rbd image 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:12:28 np0005539552 nova_compute[233724]: 2025-11-29 08:12:28.304 233728 DEBUG oslo_concurrency.processutils [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:12:28 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3106984920' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:12:28 np0005539552 nova_compute[233724]: 2025-11-29 08:12:28.773 233728 DEBUG oslo_concurrency.processutils [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:28 np0005539552 nova_compute[233724]: 2025-11-29 08:12:28.775 233728 DEBUG nova.virt.libvirt.vif [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:12:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1879739874',display_name='tempest-tempest.common.compute-instance-1879739874',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1879739874',id=86,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDEWXop04+uZLtvBsFcRaUa+Xwr7qm7TY0ta2r6fa1ieuV+DkD/NEV3POpVnY30H29RqMvmxXH2BxKbSYQ2SIJEBajPEkr5PX8Sjsh+mKC5/3A6DjsKy96CC8Vn8W5GT2A==',key_name='tempest-keypair-2040108904',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff7c805d4242453aa2148a247956391d',ramdisk_id='',reservation_id='r-j2u2rbx1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-372493183',owner_user_name='tempest-AttachInterfacesTestJSON-372493183-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:12:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b95b3e841be1420c99ee0a04dd0840f1',uuid=7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "834e7d4e-9813-48de-88d0-2da712aaa996", "address": "fa:16:3e:a8:eb:8c", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap834e7d4e-98", "ovs_interfaceid": "834e7d4e-9813-48de-88d0-2da712aaa996", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:12:28 np0005539552 nova_compute[233724]: 2025-11-29 08:12:28.775 233728 DEBUG nova.network.os_vif_util [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converting VIF {"id": "834e7d4e-9813-48de-88d0-2da712aaa996", "address": "fa:16:3e:a8:eb:8c", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap834e7d4e-98", "ovs_interfaceid": "834e7d4e-9813-48de-88d0-2da712aaa996", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:12:28 np0005539552 nova_compute[233724]: 2025-11-29 08:12:28.776 233728 DEBUG nova.network.os_vif_util [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:eb:8c,bridge_name='br-int',has_traffic_filtering=True,id=834e7d4e-9813-48de-88d0-2da712aaa996,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap834e7d4e-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:12:28 np0005539552 nova_compute[233724]: 2025-11-29 08:12:28.777 233728 DEBUG nova.objects.instance [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lazy-loading 'pci_devices' on Instance uuid 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:12:28 np0005539552 nova_compute[233724]: 2025-11-29 08:12:28.883 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:28 np0005539552 nova_compute[233724]: 2025-11-29 08:12:28.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:12:28 np0005539552 nova_compute[233724]: 2025-11-29 08:12:28.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:12:28 np0005539552 nova_compute[233724]: 2025-11-29 08:12:28.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:12:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:29.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:29 np0005539552 nova_compute[233724]: 2025-11-29 08:12:29.120 233728 DEBUG nova.virt.libvirt.driver [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:12:29 np0005539552 nova_compute[233724]:  <uuid>7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b</uuid>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:  <name>instance-00000056</name>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:12:29 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:      <nova:name>tempest-tempest.common.compute-instance-1879739874</nova:name>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:12:27</nova:creationTime>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:12:29 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:        <nova:user uuid="b95b3e841be1420c99ee0a04dd0840f1">tempest-AttachInterfacesTestJSON-372493183-project-member</nova:user>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:        <nova:project uuid="ff7c805d4242453aa2148a247956391d">tempest-AttachInterfacesTestJSON-372493183</nova:project>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:        <nova:port uuid="834e7d4e-9813-48de-88d0-2da712aaa996">
Nov 29 03:12:29 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:      <entry name="serial">7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b</entry>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:      <entry name="uuid">7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b</entry>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:12:29 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b_disk">
Nov 29 03:12:29 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:12:29 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:12:29 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b_disk.config">
Nov 29 03:12:29 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:12:29 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:12:29 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:a8:eb:8c"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:      <target dev="tap834e7d4e-98"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:12:29 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b/console.log" append="off"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:12:29 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:12:29 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:12:29 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:12:29 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:12:29 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:12:29 np0005539552 nova_compute[233724]: 2025-11-29 08:12:29.121 233728 DEBUG nova.compute.manager [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Preparing to wait for external event network-vif-plugged-834e7d4e-9813-48de-88d0-2da712aaa996 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:12:29 np0005539552 nova_compute[233724]: 2025-11-29 08:12:29.122 233728 DEBUG oslo_concurrency.lockutils [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:29 np0005539552 nova_compute[233724]: 2025-11-29 08:12:29.122 233728 DEBUG oslo_concurrency.lockutils [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:29 np0005539552 nova_compute[233724]: 2025-11-29 08:12:29.122 233728 DEBUG oslo_concurrency.lockutils [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:29 np0005539552 nova_compute[233724]: 2025-11-29 08:12:29.123 233728 DEBUG nova.virt.libvirt.vif [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:12:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1879739874',display_name='tempest-tempest.common.compute-instance-1879739874',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1879739874',id=86,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDEWXop04+uZLtvBsFcRaUa+Xwr7qm7TY0ta2r6fa1ieuV+DkD/NEV3POpVnY30H29RqMvmxXH2BxKbSYQ2SIJEBajPEkr5PX8Sjsh+mKC5/3A6DjsKy96CC8Vn8W5GT2A==',key_name='tempest-keypair-2040108904',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff7c805d4242453aa2148a247956391d',ramdisk_id='',reservation_id='r-j2u2rbx1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-372493183',owner_user_name='tempest-AttachInterfacesTestJSON-372493183-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:12:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b95b3e841be1420c99ee0a04dd0840f1',uuid=7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "834e7d4e-9813-48de-88d0-2da712aaa996", "address": "fa:16:3e:a8:eb:8c", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap834e7d4e-98", "ovs_interfaceid": "834e7d4e-9813-48de-88d0-2da712aaa996", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:12:29 np0005539552 nova_compute[233724]: 2025-11-29 08:12:29.124 233728 DEBUG nova.network.os_vif_util [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converting VIF {"id": "834e7d4e-9813-48de-88d0-2da712aaa996", "address": "fa:16:3e:a8:eb:8c", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap834e7d4e-98", "ovs_interfaceid": "834e7d4e-9813-48de-88d0-2da712aaa996", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:12:29 np0005539552 nova_compute[233724]: 2025-11-29 08:12:29.124 233728 DEBUG nova.network.os_vif_util [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:eb:8c,bridge_name='br-int',has_traffic_filtering=True,id=834e7d4e-9813-48de-88d0-2da712aaa996,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap834e7d4e-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:12:29 np0005539552 nova_compute[233724]: 2025-11-29 08:12:29.125 233728 DEBUG os_vif [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:eb:8c,bridge_name='br-int',has_traffic_filtering=True,id=834e7d4e-9813-48de-88d0-2da712aaa996,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap834e7d4e-98') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:12:29 np0005539552 nova_compute[233724]: 2025-11-29 08:12:29.126 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:29 np0005539552 nova_compute[233724]: 2025-11-29 08:12:29.126 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:29 np0005539552 nova_compute[233724]: 2025-11-29 08:12:29.127 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:12:29 np0005539552 nova_compute[233724]: 2025-11-29 08:12:29.129 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:29 np0005539552 nova_compute[233724]: 2025-11-29 08:12:29.130 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap834e7d4e-98, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:29 np0005539552 nova_compute[233724]: 2025-11-29 08:12:29.130 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap834e7d4e-98, col_values=(('external_ids', {'iface-id': '834e7d4e-9813-48de-88d0-2da712aaa996', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a8:eb:8c', 'vm-uuid': '7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:29 np0005539552 nova_compute[233724]: 2025-11-29 08:12:29.132 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:29 np0005539552 NetworkManager[48926]: <info>  [1764403949.1329] manager: (tap834e7d4e-98): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/143)
Nov 29 03:12:29 np0005539552 nova_compute[233724]: 2025-11-29 08:12:29.135 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:12:29 np0005539552 nova_compute[233724]: 2025-11-29 08:12:29.139 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:29 np0005539552 nova_compute[233724]: 2025-11-29 08:12:29.141 233728 INFO os_vif [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:eb:8c,bridge_name='br-int',has_traffic_filtering=True,id=834e7d4e-9813-48de-88d0-2da712aaa996,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap834e7d4e-98')#033[00m
Nov 29 03:12:29 np0005539552 nova_compute[233724]: 2025-11-29 08:12:29.304 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 03:12:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:12:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:29.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:12:29 np0005539552 nova_compute[233724]: 2025-11-29 08:12:29.626 233728 DEBUG nova.virt.libvirt.driver [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:12:29 np0005539552 nova_compute[233724]: 2025-11-29 08:12:29.626 233728 DEBUG nova.virt.libvirt.driver [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:12:29 np0005539552 nova_compute[233724]: 2025-11-29 08:12:29.627 233728 DEBUG nova.virt.libvirt.driver [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] No VIF found with MAC fa:16:3e:a8:eb:8c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:12:29 np0005539552 nova_compute[233724]: 2025-11-29 08:12:29.627 233728 INFO nova.virt.libvirt.driver [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Using config drive#033[00m
Nov 29 03:12:29 np0005539552 nova_compute[233724]: 2025-11-29 08:12:29.660 233728 DEBUG nova.storage.rbd_utils [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] rbd image 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:12:29 np0005539552 nova_compute[233724]: 2025-11-29 08:12:29.673 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "refresh_cache-f2279c6c-774a-44e1-854b-d9aae353330e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:12:29 np0005539552 nova_compute[233724]: 2025-11-29 08:12:29.673 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquired lock "refresh_cache-f2279c6c-774a-44e1-854b-d9aae353330e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:12:29 np0005539552 nova_compute[233724]: 2025-11-29 08:12:29.674 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:12:29 np0005539552 nova_compute[233724]: 2025-11-29 08:12:29.674 233728 DEBUG nova.objects.instance [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f2279c6c-774a-44e1-854b-d9aae353330e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:12:30 np0005539552 nova_compute[233724]: 2025-11-29 08:12:30.856 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:12:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:31.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:12:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:12:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:31.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:12:31 np0005539552 nova_compute[233724]: 2025-11-29 08:12:31.607 233728 INFO nova.virt.libvirt.driver [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Creating config drive at /var/lib/nova/instances/7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b/disk.config#033[00m
Nov 29 03:12:31 np0005539552 nova_compute[233724]: 2025-11-29 08:12:31.616 233728 DEBUG oslo_concurrency.processutils [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpghbq_pe7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:31 np0005539552 nova_compute[233724]: 2025-11-29 08:12:31.752 233728 DEBUG oslo_concurrency.processutils [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpghbq_pe7" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:31 np0005539552 nova_compute[233724]: 2025-11-29 08:12:31.779 233728 DEBUG nova.storage.rbd_utils [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] rbd image 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:12:31 np0005539552 nova_compute[233724]: 2025-11-29 08:12:31.782 233728 DEBUG oslo_concurrency.processutils [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b/disk.config 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:31 np0005539552 nova_compute[233724]: 2025-11-29 08:12:31.945 233728 DEBUG oslo_concurrency.processutils [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b/disk.config 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:31 np0005539552 nova_compute[233724]: 2025-11-29 08:12:31.946 233728 INFO nova.virt.libvirt.driver [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Deleting local config drive /var/lib/nova/instances/7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b/disk.config because it was imported into RBD.#033[00m
Nov 29 03:12:31 np0005539552 kernel: tap834e7d4e-98: entered promiscuous mode
Nov 29 03:12:32 np0005539552 NetworkManager[48926]: <info>  [1764403952.0003] manager: (tap834e7d4e-98): new Tun device (/org/freedesktop/NetworkManager/Devices/144)
Nov 29 03:12:32 np0005539552 ovn_controller[133798]: 2025-11-29T08:12:32Z|00264|binding|INFO|Claiming lport 834e7d4e-9813-48de-88d0-2da712aaa996 for this chassis.
Nov 29 03:12:32 np0005539552 ovn_controller[133798]: 2025-11-29T08:12:32Z|00265|binding|INFO|834e7d4e-9813-48de-88d0-2da712aaa996: Claiming fa:16:3e:a8:eb:8c 10.100.0.11
Nov 29 03:12:32 np0005539552 nova_compute[233724]: 2025-11-29 08:12:32.001 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:32.007 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:eb:8c 10.100.0.11'], port_security=['fa:16:3e:a8:eb:8c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff7c805d4242453aa2148a247956391d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1ece3be3-d42e-475f-bdcb-f996b12e4880', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5330ba90-719c-42ae-a31a-dd5fd1d240e2, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=834e7d4e-9813-48de-88d0-2da712aaa996) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:12:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:32.008 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 834e7d4e-9813-48de-88d0-2da712aaa996 in datapath ddd8b166-79ec-408d-b52c-581ad9dd6cb8 bound to our chassis#033[00m
Nov 29 03:12:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:32.010 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ddd8b166-79ec-408d-b52c-581ad9dd6cb8#033[00m
Nov 29 03:12:32 np0005539552 ovn_controller[133798]: 2025-11-29T08:12:32Z|00266|binding|INFO|Setting lport 834e7d4e-9813-48de-88d0-2da712aaa996 ovn-installed in OVS
Nov 29 03:12:32 np0005539552 ovn_controller[133798]: 2025-11-29T08:12:32Z|00267|binding|INFO|Setting lport 834e7d4e-9813-48de-88d0-2da712aaa996 up in Southbound
Nov 29 03:12:32 np0005539552 nova_compute[233724]: 2025-11-29 08:12:32.017 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:32 np0005539552 nova_compute[233724]: 2025-11-29 08:12:32.021 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:32.027 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2939c603-09e2-4221-8456-231a14d515ba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:32 np0005539552 systemd-machined[196379]: New machine qemu-32-instance-00000056.
Nov 29 03:12:32 np0005539552 systemd-udevd[268249]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:12:32 np0005539552 systemd[1]: Started Virtual Machine qemu-32-instance-00000056.
Nov 29 03:12:32 np0005539552 NetworkManager[48926]: <info>  [1764403952.0546] device (tap834e7d4e-98): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:12:32 np0005539552 NetworkManager[48926]: <info>  [1764403952.0554] device (tap834e7d4e-98): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:12:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:32.058 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[698289c8-ba21-41d1-967c-d551012f7475]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:32.061 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[8224c26a-f169-4e6e-96af-ee2057c5b21e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:32.088 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[aa19ab4c-9ffe-420e-b71f-f9a2f62c710d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:32.103 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a787c788-f6b4-43ad-8dcf-236b64bc3217]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapddd8b166-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9d:35:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 530, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 530, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697138, 'reachable_time': 22900, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 5, 'outoctets': 376, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 5, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 376, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 5, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268260, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:32.118 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[52d95529-de94-46e8-a6bf-929b42b320f3]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapddd8b166-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 697149, 'tstamp': 697149}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268262, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapddd8b166-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 697151, 'tstamp': 697151}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268262, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:32.119 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapddd8b166-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:32 np0005539552 nova_compute[233724]: 2025-11-29 08:12:32.121 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:32.122 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapddd8b166-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:32.122 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:12:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:32.123 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapddd8b166-70, col_values=(('external_ids', {'iface-id': 'a9e57abf-e3e4-455b-b4c5-0cda127bd5c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:32.123 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:12:32 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:32 np0005539552 nova_compute[233724]: 2025-11-29 08:12:32.444 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403952.443985, 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:12:32 np0005539552 nova_compute[233724]: 2025-11-29 08:12:32.444 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] VM Started (Lifecycle Event)#033[00m
Nov 29 03:12:32 np0005539552 nova_compute[233724]: 2025-11-29 08:12:32.639 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:12:32 np0005539552 nova_compute[233724]: 2025-11-29 08:12:32.643 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403952.4474263, 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:12:32 np0005539552 nova_compute[233724]: 2025-11-29 08:12:32.643 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:12:32 np0005539552 nova_compute[233724]: 2025-11-29 08:12:32.672 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:12:32 np0005539552 nova_compute[233724]: 2025-11-29 08:12:32.674 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:12:32 np0005539552 nova_compute[233724]: 2025-11-29 08:12:32.747 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:12:32 np0005539552 nova_compute[233724]: 2025-11-29 08:12:32.883 233728 DEBUG nova.network.neutron [req-7e93388a-4f05-47b9-b698-46284e4ab7f8 req-05a3f225-3bc7-4bbd-b35e-e50cfe1ccc24 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Updated VIF entry in instance network info cache for port 834e7d4e-9813-48de-88d0-2da712aaa996. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:12:32 np0005539552 nova_compute[233724]: 2025-11-29 08:12:32.883 233728 DEBUG nova.network.neutron [req-7e93388a-4f05-47b9-b698-46284e4ab7f8 req-05a3f225-3bc7-4bbd-b35e-e50cfe1ccc24 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Updating instance_info_cache with network_info: [{"id": "834e7d4e-9813-48de-88d0-2da712aaa996", "address": "fa:16:3e:a8:eb:8c", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap834e7d4e-98", "ovs_interfaceid": "834e7d4e-9813-48de-88d0-2da712aaa996", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:12:32 np0005539552 nova_compute[233724]: 2025-11-29 08:12:32.910 233728 DEBUG oslo_concurrency.lockutils [req-7e93388a-4f05-47b9-b698-46284e4ab7f8 req-05a3f225-3bc7-4bbd-b35e-e50cfe1ccc24 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:12:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:12:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:33.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:12:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:12:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:33.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:12:33 np0005539552 nova_compute[233724]: 2025-11-29 08:12:33.537 233728 DEBUG nova.compute.manager [req-5e0c0e20-9e8e-4c3f-a6b2-a523aafe0424 req-0ebeea98-f495-4dc3-a7ca-aee425d756d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Received event network-vif-plugged-834e7d4e-9813-48de-88d0-2da712aaa996 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:33 np0005539552 nova_compute[233724]: 2025-11-29 08:12:33.538 233728 DEBUG oslo_concurrency.lockutils [req-5e0c0e20-9e8e-4c3f-a6b2-a523aafe0424 req-0ebeea98-f495-4dc3-a7ca-aee425d756d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:33 np0005539552 nova_compute[233724]: 2025-11-29 08:12:33.538 233728 DEBUG oslo_concurrency.lockutils [req-5e0c0e20-9e8e-4c3f-a6b2-a523aafe0424 req-0ebeea98-f495-4dc3-a7ca-aee425d756d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:33 np0005539552 nova_compute[233724]: 2025-11-29 08:12:33.539 233728 DEBUG oslo_concurrency.lockutils [req-5e0c0e20-9e8e-4c3f-a6b2-a523aafe0424 req-0ebeea98-f495-4dc3-a7ca-aee425d756d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:33 np0005539552 nova_compute[233724]: 2025-11-29 08:12:33.539 233728 DEBUG nova.compute.manager [req-5e0c0e20-9e8e-4c3f-a6b2-a523aafe0424 req-0ebeea98-f495-4dc3-a7ca-aee425d756d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Processing event network-vif-plugged-834e7d4e-9813-48de-88d0-2da712aaa996 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:12:33 np0005539552 nova_compute[233724]: 2025-11-29 08:12:33.539 233728 DEBUG nova.compute.manager [req-5e0c0e20-9e8e-4c3f-a6b2-a523aafe0424 req-0ebeea98-f495-4dc3-a7ca-aee425d756d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Received event network-vif-plugged-834e7d4e-9813-48de-88d0-2da712aaa996 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:33 np0005539552 nova_compute[233724]: 2025-11-29 08:12:33.539 233728 DEBUG oslo_concurrency.lockutils [req-5e0c0e20-9e8e-4c3f-a6b2-a523aafe0424 req-0ebeea98-f495-4dc3-a7ca-aee425d756d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:33 np0005539552 nova_compute[233724]: 2025-11-29 08:12:33.540 233728 DEBUG oslo_concurrency.lockutils [req-5e0c0e20-9e8e-4c3f-a6b2-a523aafe0424 req-0ebeea98-f495-4dc3-a7ca-aee425d756d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:33 np0005539552 nova_compute[233724]: 2025-11-29 08:12:33.540 233728 DEBUG oslo_concurrency.lockutils [req-5e0c0e20-9e8e-4c3f-a6b2-a523aafe0424 req-0ebeea98-f495-4dc3-a7ca-aee425d756d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:33 np0005539552 nova_compute[233724]: 2025-11-29 08:12:33.540 233728 DEBUG nova.compute.manager [req-5e0c0e20-9e8e-4c3f-a6b2-a523aafe0424 req-0ebeea98-f495-4dc3-a7ca-aee425d756d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] No waiting events found dispatching network-vif-plugged-834e7d4e-9813-48de-88d0-2da712aaa996 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:12:33 np0005539552 nova_compute[233724]: 2025-11-29 08:12:33.540 233728 WARNING nova.compute.manager [req-5e0c0e20-9e8e-4c3f-a6b2-a523aafe0424 req-0ebeea98-f495-4dc3-a7ca-aee425d756d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Received unexpected event network-vif-plugged-834e7d4e-9813-48de-88d0-2da712aaa996 for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:12:33 np0005539552 nova_compute[233724]: 2025-11-29 08:12:33.541 233728 DEBUG nova.compute.manager [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:12:33 np0005539552 nova_compute[233724]: 2025-11-29 08:12:33.545 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764403953.5443246, 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:12:33 np0005539552 nova_compute[233724]: 2025-11-29 08:12:33.546 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:12:33 np0005539552 nova_compute[233724]: 2025-11-29 08:12:33.551 233728 DEBUG nova.virt.libvirt.driver [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:12:33 np0005539552 nova_compute[233724]: 2025-11-29 08:12:33.555 233728 INFO nova.virt.libvirt.driver [-] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Instance spawned successfully.#033[00m
Nov 29 03:12:33 np0005539552 nova_compute[233724]: 2025-11-29 08:12:33.555 233728 DEBUG nova.virt.libvirt.driver [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:12:33 np0005539552 nova_compute[233724]: 2025-11-29 08:12:33.564 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:12:33 np0005539552 nova_compute[233724]: 2025-11-29 08:12:33.569 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:12:33 np0005539552 nova_compute[233724]: 2025-11-29 08:12:33.572 233728 DEBUG nova.virt.libvirt.driver [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:12:33 np0005539552 nova_compute[233724]: 2025-11-29 08:12:33.572 233728 DEBUG nova.virt.libvirt.driver [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:12:33 np0005539552 nova_compute[233724]: 2025-11-29 08:12:33.573 233728 DEBUG nova.virt.libvirt.driver [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:12:33 np0005539552 nova_compute[233724]: 2025-11-29 08:12:33.573 233728 DEBUG nova.virt.libvirt.driver [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:12:33 np0005539552 nova_compute[233724]: 2025-11-29 08:12:33.573 233728 DEBUG nova.virt.libvirt.driver [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:12:33 np0005539552 nova_compute[233724]: 2025-11-29 08:12:33.573 233728 DEBUG nova.virt.libvirt.driver [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:12:33 np0005539552 nova_compute[233724]: 2025-11-29 08:12:33.598 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:12:33 np0005539552 nova_compute[233724]: 2025-11-29 08:12:33.645 233728 INFO nova.compute.manager [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Took 13.90 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:12:33 np0005539552 nova_compute[233724]: 2025-11-29 08:12:33.646 233728 DEBUG nova.compute.manager [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:12:33 np0005539552 nova_compute[233724]: 2025-11-29 08:12:33.702 233728 INFO nova.compute.manager [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Took 15.10 seconds to build instance.#033[00m
Nov 29 03:12:33 np0005539552 nova_compute[233724]: 2025-11-29 08:12:33.721 233728 DEBUG oslo_concurrency.lockutils [None req-e9ffda60-7852-4716-8671-982dfe0867e9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.277s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:34 np0005539552 nova_compute[233724]: 2025-11-29 08:12:34.132 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:34 np0005539552 nova_compute[233724]: 2025-11-29 08:12:34.879 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Updating instance_info_cache with network_info: [{"id": "e7660cdd-1f88-4458-9388-8fb207f0a754", "address": "fa:16:3e:4b:d3:cf", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7660cdd-1f", "ovs_interfaceid": "e7660cdd-1f88-4458-9388-8fb207f0a754", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:12:34 np0005539552 nova_compute[233724]: 2025-11-29 08:12:34.902 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Releasing lock "refresh_cache-f2279c6c-774a-44e1-854b-d9aae353330e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:12:34 np0005539552 nova_compute[233724]: 2025-11-29 08:12:34.902 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:12:34 np0005539552 nova_compute[233724]: 2025-11-29 08:12:34.902 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:12:34 np0005539552 nova_compute[233724]: 2025-11-29 08:12:34.903 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:12:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:35.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:12:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:35.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:12:35 np0005539552 nova_compute[233724]: 2025-11-29 08:12:35.859 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:35 np0005539552 nova_compute[233724]: 2025-11-29 08:12:35.898 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:12:35 np0005539552 nova_compute[233724]: 2025-11-29 08:12:35.899 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:12:36 np0005539552 nova_compute[233724]: 2025-11-29 08:12:36.210 233728 DEBUG nova.compute.manager [req-83ecc622-8b7a-433b-92c2-e2352887cc62 req-60ab9149-ebac-4c6b-b5ff-93ac97167cf8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Received event network-changed-e7660cdd-1f88-4458-9388-8fb207f0a754 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:36 np0005539552 nova_compute[233724]: 2025-11-29 08:12:36.210 233728 DEBUG nova.compute.manager [req-83ecc622-8b7a-433b-92c2-e2352887cc62 req-60ab9149-ebac-4c6b-b5ff-93ac97167cf8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Refreshing instance network info cache due to event network-changed-e7660cdd-1f88-4458-9388-8fb207f0a754. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:12:36 np0005539552 nova_compute[233724]: 2025-11-29 08:12:36.211 233728 DEBUG oslo_concurrency.lockutils [req-83ecc622-8b7a-433b-92c2-e2352887cc62 req-60ab9149-ebac-4c6b-b5ff-93ac97167cf8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-f2279c6c-774a-44e1-854b-d9aae353330e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:12:36 np0005539552 nova_compute[233724]: 2025-11-29 08:12:36.211 233728 DEBUG oslo_concurrency.lockutils [req-83ecc622-8b7a-433b-92c2-e2352887cc62 req-60ab9149-ebac-4c6b-b5ff-93ac97167cf8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-f2279c6c-774a-44e1-854b-d9aae353330e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:12:36 np0005539552 nova_compute[233724]: 2025-11-29 08:12:36.211 233728 DEBUG nova.network.neutron [req-83ecc622-8b7a-433b-92c2-e2352887cc62 req-60ab9149-ebac-4c6b-b5ff-93ac97167cf8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Refreshing network info cache for port e7660cdd-1f88-4458-9388-8fb207f0a754 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:12:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:37.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:37.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:38 np0005539552 nova_compute[233724]: 2025-11-29 08:12:38.306 233728 DEBUG nova.compute.manager [req-926e6608-52c5-4aca-ba98-77e3f0f51785 req-1e1216f0-f202-4da6-96c5-e77f7ebf58f7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Received event network-changed-834e7d4e-9813-48de-88d0-2da712aaa996 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:38 np0005539552 nova_compute[233724]: 2025-11-29 08:12:38.307 233728 DEBUG nova.compute.manager [req-926e6608-52c5-4aca-ba98-77e3f0f51785 req-1e1216f0-f202-4da6-96c5-e77f7ebf58f7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Refreshing instance network info cache due to event network-changed-834e7d4e-9813-48de-88d0-2da712aaa996. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:12:38 np0005539552 nova_compute[233724]: 2025-11-29 08:12:38.308 233728 DEBUG oslo_concurrency.lockutils [req-926e6608-52c5-4aca-ba98-77e3f0f51785 req-1e1216f0-f202-4da6-96c5-e77f7ebf58f7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:12:38 np0005539552 nova_compute[233724]: 2025-11-29 08:12:38.308 233728 DEBUG oslo_concurrency.lockutils [req-926e6608-52c5-4aca-ba98-77e3f0f51785 req-1e1216f0-f202-4da6-96c5-e77f7ebf58f7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:12:38 np0005539552 nova_compute[233724]: 2025-11-29 08:12:38.308 233728 DEBUG nova.network.neutron [req-926e6608-52c5-4aca-ba98-77e3f0f51785 req-1e1216f0-f202-4da6-96c5-e77f7ebf58f7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Refreshing network info cache for port 834e7d4e-9813-48de-88d0-2da712aaa996 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:12:38 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:12:38 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:12:38 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:12:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:12:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1479288165' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:12:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:12:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1479288165' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:12:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:12:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:39.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:12:39 np0005539552 nova_compute[233724]: 2025-11-29 08:12:39.089 233728 DEBUG nova.network.neutron [req-83ecc622-8b7a-433b-92c2-e2352887cc62 req-60ab9149-ebac-4c6b-b5ff-93ac97167cf8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Updated VIF entry in instance network info cache for port e7660cdd-1f88-4458-9388-8fb207f0a754. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:12:39 np0005539552 nova_compute[233724]: 2025-11-29 08:12:39.090 233728 DEBUG nova.network.neutron [req-83ecc622-8b7a-433b-92c2-e2352887cc62 req-60ab9149-ebac-4c6b-b5ff-93ac97167cf8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Updating instance_info_cache with network_info: [{"id": "e7660cdd-1f88-4458-9388-8fb207f0a754", "address": "fa:16:3e:4b:d3:cf", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7660cdd-1f", "ovs_interfaceid": "e7660cdd-1f88-4458-9388-8fb207f0a754", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:12:39 np0005539552 nova_compute[233724]: 2025-11-29 08:12:39.113 233728 DEBUG oslo_concurrency.lockutils [req-83ecc622-8b7a-433b-92c2-e2352887cc62 req-60ab9149-ebac-4c6b-b5ff-93ac97167cf8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-f2279c6c-774a-44e1-854b-d9aae353330e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:12:39 np0005539552 nova_compute[233724]: 2025-11-29 08:12:39.134 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:12:39Z|00268|binding|INFO|Releasing lport a9e57abf-e3e4-455b-b4c5-0cda127bd5c1 from this chassis (sb_readonly=0)
Nov 29 03:12:39 np0005539552 nova_compute[233724]: 2025-11-29 08:12:39.357 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:39.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:40 np0005539552 nova_compute[233724]: 2025-11-29 08:12:40.202 233728 DEBUG nova.network.neutron [req-926e6608-52c5-4aca-ba98-77e3f0f51785 req-1e1216f0-f202-4da6-96c5-e77f7ebf58f7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Updated VIF entry in instance network info cache for port 834e7d4e-9813-48de-88d0-2da712aaa996. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:12:40 np0005539552 nova_compute[233724]: 2025-11-29 08:12:40.202 233728 DEBUG nova.network.neutron [req-926e6608-52c5-4aca-ba98-77e3f0f51785 req-1e1216f0-f202-4da6-96c5-e77f7ebf58f7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Updating instance_info_cache with network_info: [{"id": "834e7d4e-9813-48de-88d0-2da712aaa996", "address": "fa:16:3e:a8:eb:8c", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap834e7d4e-98", "ovs_interfaceid": "834e7d4e-9813-48de-88d0-2da712aaa996", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:12:40 np0005539552 nova_compute[233724]: 2025-11-29 08:12:40.241 233728 DEBUG oslo_concurrency.lockutils [req-926e6608-52c5-4aca-ba98-77e3f0f51785 req-1e1216f0-f202-4da6-96c5-e77f7ebf58f7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:12:40 np0005539552 nova_compute[233724]: 2025-11-29 08:12:40.330 233728 DEBUG nova.compute.manager [req-96e2c539-928d-44cb-b5f5-eabfcb44bb6f req-fc078507-dd4d-4ac9-b5d8-8124ace47848 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Received event network-changed-834e7d4e-9813-48de-88d0-2da712aaa996 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:40 np0005539552 nova_compute[233724]: 2025-11-29 08:12:40.330 233728 DEBUG nova.compute.manager [req-96e2c539-928d-44cb-b5f5-eabfcb44bb6f req-fc078507-dd4d-4ac9-b5d8-8124ace47848 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Refreshing instance network info cache due to event network-changed-834e7d4e-9813-48de-88d0-2da712aaa996. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:12:40 np0005539552 nova_compute[233724]: 2025-11-29 08:12:40.331 233728 DEBUG oslo_concurrency.lockutils [req-96e2c539-928d-44cb-b5f5-eabfcb44bb6f req-fc078507-dd4d-4ac9-b5d8-8124ace47848 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:12:40 np0005539552 nova_compute[233724]: 2025-11-29 08:12:40.331 233728 DEBUG oslo_concurrency.lockutils [req-96e2c539-928d-44cb-b5f5-eabfcb44bb6f req-fc078507-dd4d-4ac9-b5d8-8124ace47848 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:12:40 np0005539552 nova_compute[233724]: 2025-11-29 08:12:40.331 233728 DEBUG nova.network.neutron [req-96e2c539-928d-44cb-b5f5-eabfcb44bb6f req-fc078507-dd4d-4ac9-b5d8-8124ace47848 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Refreshing network info cache for port 834e7d4e-9813-48de-88d0-2da712aaa996 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:12:40 np0005539552 nova_compute[233724]: 2025-11-29 08:12:40.861 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:41 np0005539552 nova_compute[233724]: 2025-11-29 08:12:41.001 233728 DEBUG oslo_concurrency.lockutils [None req-6bd00564-f329-41cf-bba4-2f819b78fbd6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "interface-f2279c6c-774a-44e1-854b-d9aae353330e-adedd23a-8a64-4167-a4ad-7d586b717068" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:41 np0005539552 nova_compute[233724]: 2025-11-29 08:12:41.001 233728 DEBUG oslo_concurrency.lockutils [None req-6bd00564-f329-41cf-bba4-2f819b78fbd6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "interface-f2279c6c-774a-44e1-854b-d9aae353330e-adedd23a-8a64-4167-a4ad-7d586b717068" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:41 np0005539552 nova_compute[233724]: 2025-11-29 08:12:41.002 233728 DEBUG nova.objects.instance [None req-6bd00564-f329-41cf-bba4-2f819b78fbd6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lazy-loading 'flavor' on Instance uuid f2279c6c-774a-44e1-854b-d9aae353330e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:12:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:41.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:41.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:42 np0005539552 nova_compute[233724]: 2025-11-29 08:12:42.934 233728 DEBUG nova.compute.manager [req-2d1bd7ca-a7d4-4ede-94f5-0c9eacbf83b9 req-90ff04ec-e320-49bd-b7ca-5fac7888cfb4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Received event network-changed-e7660cdd-1f88-4458-9388-8fb207f0a754 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:42 np0005539552 nova_compute[233724]: 2025-11-29 08:12:42.935 233728 DEBUG nova.compute.manager [req-2d1bd7ca-a7d4-4ede-94f5-0c9eacbf83b9 req-90ff04ec-e320-49bd-b7ca-5fac7888cfb4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Refreshing instance network info cache due to event network-changed-e7660cdd-1f88-4458-9388-8fb207f0a754. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:12:42 np0005539552 nova_compute[233724]: 2025-11-29 08:12:42.935 233728 DEBUG oslo_concurrency.lockutils [req-2d1bd7ca-a7d4-4ede-94f5-0c9eacbf83b9 req-90ff04ec-e320-49bd-b7ca-5fac7888cfb4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-f2279c6c-774a-44e1-854b-d9aae353330e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:12:42 np0005539552 nova_compute[233724]: 2025-11-29 08:12:42.935 233728 DEBUG oslo_concurrency.lockutils [req-2d1bd7ca-a7d4-4ede-94f5-0c9eacbf83b9 req-90ff04ec-e320-49bd-b7ca-5fac7888cfb4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-f2279c6c-774a-44e1-854b-d9aae353330e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:12:42 np0005539552 nova_compute[233724]: 2025-11-29 08:12:42.935 233728 DEBUG nova.network.neutron [req-2d1bd7ca-a7d4-4ede-94f5-0c9eacbf83b9 req-90ff04ec-e320-49bd-b7ca-5fac7888cfb4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Refreshing network info cache for port e7660cdd-1f88-4458-9388-8fb207f0a754 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:12:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:43.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:43 np0005539552 nova_compute[233724]: 2025-11-29 08:12:43.257 233728 DEBUG nova.network.neutron [req-96e2c539-928d-44cb-b5f5-eabfcb44bb6f req-fc078507-dd4d-4ac9-b5d8-8124ace47848 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Updated VIF entry in instance network info cache for port 834e7d4e-9813-48de-88d0-2da712aaa996. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:12:43 np0005539552 nova_compute[233724]: 2025-11-29 08:12:43.257 233728 DEBUG nova.network.neutron [req-96e2c539-928d-44cb-b5f5-eabfcb44bb6f req-fc078507-dd4d-4ac9-b5d8-8124ace47848 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Updating instance_info_cache with network_info: [{"id": "834e7d4e-9813-48de-88d0-2da712aaa996", "address": "fa:16:3e:a8:eb:8c", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap834e7d4e-98", "ovs_interfaceid": "834e7d4e-9813-48de-88d0-2da712aaa996", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:12:43 np0005539552 nova_compute[233724]: 2025-11-29 08:12:43.279 233728 DEBUG oslo_concurrency.lockutils [req-96e2c539-928d-44cb-b5f5-eabfcb44bb6f req-fc078507-dd4d-4ac9-b5d8-8124ace47848 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:12:43 np0005539552 nova_compute[233724]: 2025-11-29 08:12:43.286 233728 DEBUG nova.objects.instance [None req-6bd00564-f329-41cf-bba4-2f819b78fbd6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lazy-loading 'pci_requests' on Instance uuid f2279c6c-774a-44e1-854b-d9aae353330e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:12:43 np0005539552 nova_compute[233724]: 2025-11-29 08:12:43.300 233728 DEBUG nova.network.neutron [None req-6bd00564-f329-41cf-bba4-2f819b78fbd6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:12:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:43.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:43 np0005539552 nova_compute[233724]: 2025-11-29 08:12:43.714 233728 DEBUG nova.policy [None req-6bd00564-f329-41cf-bba4-2f819b78fbd6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b95b3e841be1420c99ee0a04dd0840f1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ff7c805d4242453aa2148a247956391d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:12:44 np0005539552 nova_compute[233724]: 2025-11-29 08:12:44.135 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:44 np0005539552 nova_compute[233724]: 2025-11-29 08:12:44.527 233728 DEBUG nova.network.neutron [req-2d1bd7ca-a7d4-4ede-94f5-0c9eacbf83b9 req-90ff04ec-e320-49bd-b7ca-5fac7888cfb4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Updated VIF entry in instance network info cache for port e7660cdd-1f88-4458-9388-8fb207f0a754. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:12:44 np0005539552 nova_compute[233724]: 2025-11-29 08:12:44.528 233728 DEBUG nova.network.neutron [req-2d1bd7ca-a7d4-4ede-94f5-0c9eacbf83b9 req-90ff04ec-e320-49bd-b7ca-5fac7888cfb4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Updating instance_info_cache with network_info: [{"id": "e7660cdd-1f88-4458-9388-8fb207f0a754", "address": "fa:16:3e:4b:d3:cf", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7660cdd-1f", "ovs_interfaceid": "e7660cdd-1f88-4458-9388-8fb207f0a754", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:12:44 np0005539552 nova_compute[233724]: 2025-11-29 08:12:44.548 233728 DEBUG oslo_concurrency.lockutils [req-2d1bd7ca-a7d4-4ede-94f5-0c9eacbf83b9 req-90ff04ec-e320-49bd-b7ca-5fac7888cfb4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-f2279c6c-774a-44e1-854b-d9aae353330e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:12:44 np0005539552 nova_compute[233724]: 2025-11-29 08:12:44.696 233728 DEBUG nova.network.neutron [None req-6bd00564-f329-41cf-bba4-2f819b78fbd6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Successfully updated port: adedd23a-8a64-4167-a4ad-7d586b717068 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:12:44 np0005539552 nova_compute[233724]: 2025-11-29 08:12:44.713 233728 DEBUG oslo_concurrency.lockutils [None req-6bd00564-f329-41cf-bba4-2f819b78fbd6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "refresh_cache-f2279c6c-774a-44e1-854b-d9aae353330e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:12:44 np0005539552 nova_compute[233724]: 2025-11-29 08:12:44.714 233728 DEBUG oslo_concurrency.lockutils [None req-6bd00564-f329-41cf-bba4-2f819b78fbd6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquired lock "refresh_cache-f2279c6c-774a-44e1-854b-d9aae353330e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:12:44 np0005539552 nova_compute[233724]: 2025-11-29 08:12:44.714 233728 DEBUG nova.network.neutron [None req-6bd00564-f329-41cf-bba4-2f819b78fbd6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:12:44 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:12:44 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:12:44 np0005539552 nova_compute[233724]: 2025-11-29 08:12:44.862 233728 WARNING nova.network.neutron [None req-6bd00564-f329-41cf-bba4-2f819b78fbd6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] ddd8b166-79ec-408d-b52c-581ad9dd6cb8 already exists in list: networks containing: ['ddd8b166-79ec-408d-b52c-581ad9dd6cb8']. ignoring it#033[00m
Nov 29 03:12:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:12:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:45.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:12:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:45.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:45 np0005539552 nova_compute[233724]: 2025-11-29 08:12:45.863 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:46 np0005539552 nova_compute[233724]: 2025-11-29 08:12:46.366 233728 DEBUG nova.compute.manager [req-83f1958f-dfe6-4bda-b53f-36f51c847a93 req-e0e19b57-0b3e-4b03-b8f6-00435374abf1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Received event network-changed-adedd23a-8a64-4167-a4ad-7d586b717068 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:46 np0005539552 nova_compute[233724]: 2025-11-29 08:12:46.367 233728 DEBUG nova.compute.manager [req-83f1958f-dfe6-4bda-b53f-36f51c847a93 req-e0e19b57-0b3e-4b03-b8f6-00435374abf1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Refreshing instance network info cache due to event network-changed-adedd23a-8a64-4167-a4ad-7d586b717068. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:12:46 np0005539552 nova_compute[233724]: 2025-11-29 08:12:46.367 233728 DEBUG oslo_concurrency.lockutils [req-83f1958f-dfe6-4bda-b53f-36f51c847a93 req-e0e19b57-0b3e-4b03-b8f6-00435374abf1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-f2279c6c-774a-44e1-854b-d9aae353330e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:12:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:47.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:47 np0005539552 nova_compute[233724]: 2025-11-29 08:12:47.180 233728 DEBUG nova.network.neutron [None req-6bd00564-f329-41cf-bba4-2f819b78fbd6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Updating instance_info_cache with network_info: [{"id": "e7660cdd-1f88-4458-9388-8fb207f0a754", "address": "fa:16:3e:4b:d3:cf", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7660cdd-1f", "ovs_interfaceid": "e7660cdd-1f88-4458-9388-8fb207f0a754", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "adedd23a-8a64-4167-a4ad-7d586b717068", "address": "fa:16:3e:48:12:77", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapadedd23a-8a", "ovs_interfaceid": "adedd23a-8a64-4167-a4ad-7d586b717068", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:12:47 np0005539552 ovn_controller[133798]: 2025-11-29T08:12:47Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a8:eb:8c 10.100.0.11
Nov 29 03:12:47 np0005539552 ovn_controller[133798]: 2025-11-29T08:12:47Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a8:eb:8c 10.100.0.11
Nov 29 03:12:47 np0005539552 nova_compute[233724]: 2025-11-29 08:12:47.227 233728 DEBUG oslo_concurrency.lockutils [None req-6bd00564-f329-41cf-bba4-2f819b78fbd6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Releasing lock "refresh_cache-f2279c6c-774a-44e1-854b-d9aae353330e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:12:47 np0005539552 nova_compute[233724]: 2025-11-29 08:12:47.228 233728 DEBUG oslo_concurrency.lockutils [req-83f1958f-dfe6-4bda-b53f-36f51c847a93 req-e0e19b57-0b3e-4b03-b8f6-00435374abf1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-f2279c6c-774a-44e1-854b-d9aae353330e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:12:47 np0005539552 nova_compute[233724]: 2025-11-29 08:12:47.228 233728 DEBUG nova.network.neutron [req-83f1958f-dfe6-4bda-b53f-36f51c847a93 req-e0e19b57-0b3e-4b03-b8f6-00435374abf1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Refreshing network info cache for port adedd23a-8a64-4167-a4ad-7d586b717068 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:12:47 np0005539552 nova_compute[233724]: 2025-11-29 08:12:47.232 233728 DEBUG nova.virt.libvirt.vif [None req-6bd00564-f329-41cf-bba4-2f819b78fbd6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:11:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1979756702',display_name='tempest-tempest.common.compute-instance-1979756702',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1979756702',id=83,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDEWXop04+uZLtvBsFcRaUa+Xwr7qm7TY0ta2r6fa1ieuV+DkD/NEV3POpVnY30H29RqMvmxXH2BxKbSYQ2SIJEBajPEkr5PX8Sjsh+mKC5/3A6DjsKy96CC8Vn8W5GT2A==',key_name='tempest-keypair-2040108904',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:12:07Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ff7c805d4242453aa2148a247956391d',ramdisk_id='',reservation_id='r-c8i3k0lp',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-372493183',owner_user_name='tempest-AttachInterfacesTestJSON-372493183-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:12:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b95b3e841be1420c99ee0a04dd0840f1',uuid=f2279c6c-774a-44e1-854b-d9aae353330e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "adedd23a-8a64-4167-a4ad-7d586b717068", "address": "fa:16:3e:48:12:77", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapadedd23a-8a", "ovs_interfaceid": "adedd23a-8a64-4167-a4ad-7d586b717068", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:12:47 np0005539552 nova_compute[233724]: 2025-11-29 08:12:47.232 233728 DEBUG nova.network.os_vif_util [None req-6bd00564-f329-41cf-bba4-2f819b78fbd6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converting VIF {"id": "adedd23a-8a64-4167-a4ad-7d586b717068", "address": "fa:16:3e:48:12:77", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapadedd23a-8a", "ovs_interfaceid": "adedd23a-8a64-4167-a4ad-7d586b717068", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:12:47 np0005539552 nova_compute[233724]: 2025-11-29 08:12:47.233 233728 DEBUG nova.network.os_vif_util [None req-6bd00564-f329-41cf-bba4-2f819b78fbd6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:12:77,bridge_name='br-int',has_traffic_filtering=True,id=adedd23a-8a64-4167-a4ad-7d586b717068,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapadedd23a-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:12:47 np0005539552 nova_compute[233724]: 2025-11-29 08:12:47.233 233728 DEBUG os_vif [None req-6bd00564-f329-41cf-bba4-2f819b78fbd6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:12:77,bridge_name='br-int',has_traffic_filtering=True,id=adedd23a-8a64-4167-a4ad-7d586b717068,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapadedd23a-8a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:12:47 np0005539552 nova_compute[233724]: 2025-11-29 08:12:47.234 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:47 np0005539552 nova_compute[233724]: 2025-11-29 08:12:47.234 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:47 np0005539552 nova_compute[233724]: 2025-11-29 08:12:47.234 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:12:47 np0005539552 nova_compute[233724]: 2025-11-29 08:12:47.236 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:47 np0005539552 nova_compute[233724]: 2025-11-29 08:12:47.236 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapadedd23a-8a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:47 np0005539552 nova_compute[233724]: 2025-11-29 08:12:47.237 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapadedd23a-8a, col_values=(('external_ids', {'iface-id': 'adedd23a-8a64-4167-a4ad-7d586b717068', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:48:12:77', 'vm-uuid': 'f2279c6c-774a-44e1-854b-d9aae353330e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:47 np0005539552 nova_compute[233724]: 2025-11-29 08:12:47.238 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:47 np0005539552 NetworkManager[48926]: <info>  [1764403967.2389] manager: (tapadedd23a-8a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/145)
Nov 29 03:12:47 np0005539552 nova_compute[233724]: 2025-11-29 08:12:47.239 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:12:47 np0005539552 nova_compute[233724]: 2025-11-29 08:12:47.245 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:47 np0005539552 nova_compute[233724]: 2025-11-29 08:12:47.246 233728 INFO os_vif [None req-6bd00564-f329-41cf-bba4-2f819b78fbd6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:12:77,bridge_name='br-int',has_traffic_filtering=True,id=adedd23a-8a64-4167-a4ad-7d586b717068,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapadedd23a-8a')#033[00m
Nov 29 03:12:47 np0005539552 nova_compute[233724]: 2025-11-29 08:12:47.247 233728 DEBUG nova.virt.libvirt.vif [None req-6bd00564-f329-41cf-bba4-2f819b78fbd6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:11:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1979756702',display_name='tempest-tempest.common.compute-instance-1979756702',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1979756702',id=83,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDEWXop04+uZLtvBsFcRaUa+Xwr7qm7TY0ta2r6fa1ieuV+DkD/NEV3POpVnY30H29RqMvmxXH2BxKbSYQ2SIJEBajPEkr5PX8Sjsh+mKC5/3A6DjsKy96CC8Vn8W5GT2A==',key_name='tempest-keypair-2040108904',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:12:07Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ff7c805d4242453aa2148a247956391d',ramdisk_id='',reservation_id='r-c8i3k0lp',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-372493183',owner_user_name='tempest-AttachInterfacesTestJSON-372493183-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:12:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b95b3e841be1420c99ee0a04dd0840f1',uuid=f2279c6c-774a-44e1-854b-d9aae353330e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "adedd23a-8a64-4167-a4ad-7d586b717068", "address": "fa:16:3e:48:12:77", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapadedd23a-8a", "ovs_interfaceid": "adedd23a-8a64-4167-a4ad-7d586b717068", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:12:47 np0005539552 nova_compute[233724]: 2025-11-29 08:12:47.247 233728 DEBUG nova.network.os_vif_util [None req-6bd00564-f329-41cf-bba4-2f819b78fbd6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converting VIF {"id": "adedd23a-8a64-4167-a4ad-7d586b717068", "address": "fa:16:3e:48:12:77", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapadedd23a-8a", "ovs_interfaceid": "adedd23a-8a64-4167-a4ad-7d586b717068", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:12:47 np0005539552 nova_compute[233724]: 2025-11-29 08:12:47.248 233728 DEBUG nova.network.os_vif_util [None req-6bd00564-f329-41cf-bba4-2f819b78fbd6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:12:77,bridge_name='br-int',has_traffic_filtering=True,id=adedd23a-8a64-4167-a4ad-7d586b717068,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapadedd23a-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:12:47 np0005539552 nova_compute[233724]: 2025-11-29 08:12:47.250 233728 DEBUG nova.virt.libvirt.guest [None req-6bd00564-f329-41cf-bba4-2f819b78fbd6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] attach device xml: <interface type="ethernet">
Nov 29 03:12:47 np0005539552 nova_compute[233724]:  <mac address="fa:16:3e:48:12:77"/>
Nov 29 03:12:47 np0005539552 nova_compute[233724]:  <model type="virtio"/>
Nov 29 03:12:47 np0005539552 nova_compute[233724]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:12:47 np0005539552 nova_compute[233724]:  <mtu size="1442"/>
Nov 29 03:12:47 np0005539552 nova_compute[233724]:  <target dev="tapadedd23a-8a"/>
Nov 29 03:12:47 np0005539552 nova_compute[233724]: </interface>
Nov 29 03:12:47 np0005539552 nova_compute[233724]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 03:12:47 np0005539552 kernel: tapadedd23a-8a: entered promiscuous mode
Nov 29 03:12:47 np0005539552 NetworkManager[48926]: <info>  [1764403967.2631] manager: (tapadedd23a-8a): new Tun device (/org/freedesktop/NetworkManager/Devices/146)
Nov 29 03:12:47 np0005539552 ovn_controller[133798]: 2025-11-29T08:12:47Z|00269|binding|INFO|Claiming lport adedd23a-8a64-4167-a4ad-7d586b717068 for this chassis.
Nov 29 03:12:47 np0005539552 ovn_controller[133798]: 2025-11-29T08:12:47Z|00270|binding|INFO|adedd23a-8a64-4167-a4ad-7d586b717068: Claiming fa:16:3e:48:12:77 10.100.0.10
Nov 29 03:12:47 np0005539552 nova_compute[233724]: 2025-11-29 08:12:47.265 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:47.275 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:12:77 10.100.0.10'], port_security=['fa:16:3e:48:12:77 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1164795283', 'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f2279c6c-774a-44e1-854b-d9aae353330e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1164795283', 'neutron:project_id': 'ff7c805d4242453aa2148a247956391d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '837c5830-d55f-47dc-af7f-7cef5a2ab737', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5330ba90-719c-42ae-a31a-dd5fd1d240e2, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=adedd23a-8a64-4167-a4ad-7d586b717068) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:12:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:47.276 143400 INFO neutron.agent.ovn.metadata.agent [-] Port adedd23a-8a64-4167-a4ad-7d586b717068 in datapath ddd8b166-79ec-408d-b52c-581ad9dd6cb8 bound to our chassis#033[00m
Nov 29 03:12:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:47.277 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ddd8b166-79ec-408d-b52c-581ad9dd6cb8#033[00m
Nov 29 03:12:47 np0005539552 ovn_controller[133798]: 2025-11-29T08:12:47Z|00271|binding|INFO|Setting lport adedd23a-8a64-4167-a4ad-7d586b717068 ovn-installed in OVS
Nov 29 03:12:47 np0005539552 ovn_controller[133798]: 2025-11-29T08:12:47Z|00272|binding|INFO|Setting lport adedd23a-8a64-4167-a4ad-7d586b717068 up in Southbound
Nov 29 03:12:47 np0005539552 nova_compute[233724]: 2025-11-29 08:12:47.282 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:47 np0005539552 nova_compute[233724]: 2025-11-29 08:12:47.285 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:47.300 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[1c4dc61c-6ff5-4fa9-9fd6-729d59b4a1f9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:47 np0005539552 systemd-udevd[268555]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:12:47 np0005539552 NetworkManager[48926]: <info>  [1764403967.3280] device (tapadedd23a-8a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:12:47 np0005539552 NetworkManager[48926]: <info>  [1764403967.3290] device (tapadedd23a-8a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:12:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:47.341 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[73fe8597-0926-4839-8455-5b9074e6e7ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:47.344 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[8b78907e-f35e-4f0e-b876-0ade17fe5387]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:47 np0005539552 nova_compute[233724]: 2025-11-29 08:12:47.352 233728 DEBUG nova.virt.libvirt.driver [None req-6bd00564-f329-41cf-bba4-2f819b78fbd6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:12:47 np0005539552 nova_compute[233724]: 2025-11-29 08:12:47.353 233728 DEBUG nova.virt.libvirt.driver [None req-6bd00564-f329-41cf-bba4-2f819b78fbd6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:12:47 np0005539552 nova_compute[233724]: 2025-11-29 08:12:47.353 233728 DEBUG nova.virt.libvirt.driver [None req-6bd00564-f329-41cf-bba4-2f819b78fbd6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] No VIF found with MAC fa:16:3e:4b:d3:cf, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:12:47 np0005539552 nova_compute[233724]: 2025-11-29 08:12:47.354 233728 DEBUG nova.virt.libvirt.driver [None req-6bd00564-f329-41cf-bba4-2f819b78fbd6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] No VIF found with MAC fa:16:3e:48:12:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:12:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:47.376 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[4d0f2a1f-e7ba-40b3-8943-bdc695ab751a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:47 np0005539552 nova_compute[233724]: 2025-11-29 08:12:47.383 233728 DEBUG nova.virt.libvirt.guest [None req-6bd00564-f329-41cf-bba4-2f819b78fbd6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:12:47 np0005539552 nova_compute[233724]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:12:47 np0005539552 nova_compute[233724]:  <nova:name>tempest-tempest.common.compute-instance-1979756702</nova:name>
Nov 29 03:12:47 np0005539552 nova_compute[233724]:  <nova:creationTime>2025-11-29 08:12:47</nova:creationTime>
Nov 29 03:12:47 np0005539552 nova_compute[233724]:  <nova:flavor name="m1.nano">
Nov 29 03:12:47 np0005539552 nova_compute[233724]:    <nova:memory>128</nova:memory>
Nov 29 03:12:47 np0005539552 nova_compute[233724]:    <nova:disk>1</nova:disk>
Nov 29 03:12:47 np0005539552 nova_compute[233724]:    <nova:swap>0</nova:swap>
Nov 29 03:12:47 np0005539552 nova_compute[233724]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:12:47 np0005539552 nova_compute[233724]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:12:47 np0005539552 nova_compute[233724]:  </nova:flavor>
Nov 29 03:12:47 np0005539552 nova_compute[233724]:  <nova:owner>
Nov 29 03:12:47 np0005539552 nova_compute[233724]:    <nova:user uuid="b95b3e841be1420c99ee0a04dd0840f1">tempest-AttachInterfacesTestJSON-372493183-project-member</nova:user>
Nov 29 03:12:47 np0005539552 nova_compute[233724]:    <nova:project uuid="ff7c805d4242453aa2148a247956391d">tempest-AttachInterfacesTestJSON-372493183</nova:project>
Nov 29 03:12:47 np0005539552 nova_compute[233724]:  </nova:owner>
Nov 29 03:12:47 np0005539552 nova_compute[233724]:  <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:12:47 np0005539552 nova_compute[233724]:  <nova:ports>
Nov 29 03:12:47 np0005539552 nova_compute[233724]:    <nova:port uuid="e7660cdd-1f88-4458-9388-8fb207f0a754">
Nov 29 03:12:47 np0005539552 nova_compute[233724]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 29 03:12:47 np0005539552 nova_compute[233724]:    </nova:port>
Nov 29 03:12:47 np0005539552 nova_compute[233724]:    <nova:port uuid="adedd23a-8a64-4167-a4ad-7d586b717068">
Nov 29 03:12:47 np0005539552 nova_compute[233724]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 03:12:47 np0005539552 nova_compute[233724]:    </nova:port>
Nov 29 03:12:47 np0005539552 nova_compute[233724]:  </nova:ports>
Nov 29 03:12:47 np0005539552 nova_compute[233724]: </nova:instance>
Nov 29 03:12:47 np0005539552 nova_compute[233724]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 29 03:12:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:47.394 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ba15210a-4642-4c02-b218-6f54ff130f85]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapddd8b166-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9d:35:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 614, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 614, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697138, 'reachable_time': 22900, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 5, 'outoctets': 376, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 5, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 376, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 5, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268561, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:47.410 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b591a547-a7a1-477b-9a97-043f6b5d390f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapddd8b166-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 697149, 'tstamp': 697149}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268562, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapddd8b166-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 697151, 'tstamp': 697151}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268562, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:47.412 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapddd8b166-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:47.416 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapddd8b166-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:47.417 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:12:47 np0005539552 nova_compute[233724]: 2025-11-29 08:12:47.417 233728 DEBUG oslo_concurrency.lockutils [None req-6bd00564-f329-41cf-bba4-2f819b78fbd6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "interface-f2279c6c-774a-44e1-854b-d9aae353330e-adedd23a-8a64-4167-a4ad-7d586b717068" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.415s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:47.417 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapddd8b166-70, col_values=(('external_ids', {'iface-id': 'a9e57abf-e3e4-455b-b4c5-0cda127bd5c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:47.418 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:12:47 np0005539552 nova_compute[233724]: 2025-11-29 08:12:47.419 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:47.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:48 np0005539552 nova_compute[233724]: 2025-11-29 08:12:48.095 233728 DEBUG nova.compute.manager [req-127184ba-8154-44bf-a02c-055d4b30c1df req-878d819a-f098-4d6c-98d3-e181d1de274e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Received event network-vif-plugged-adedd23a-8a64-4167-a4ad-7d586b717068 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:48 np0005539552 nova_compute[233724]: 2025-11-29 08:12:48.096 233728 DEBUG oslo_concurrency.lockutils [req-127184ba-8154-44bf-a02c-055d4b30c1df req-878d819a-f098-4d6c-98d3-e181d1de274e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "f2279c6c-774a-44e1-854b-d9aae353330e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:48 np0005539552 nova_compute[233724]: 2025-11-29 08:12:48.096 233728 DEBUG oslo_concurrency.lockutils [req-127184ba-8154-44bf-a02c-055d4b30c1df req-878d819a-f098-4d6c-98d3-e181d1de274e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f2279c6c-774a-44e1-854b-d9aae353330e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:48 np0005539552 nova_compute[233724]: 2025-11-29 08:12:48.096 233728 DEBUG oslo_concurrency.lockutils [req-127184ba-8154-44bf-a02c-055d4b30c1df req-878d819a-f098-4d6c-98d3-e181d1de274e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f2279c6c-774a-44e1-854b-d9aae353330e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:48 np0005539552 nova_compute[233724]: 2025-11-29 08:12:48.096 233728 DEBUG nova.compute.manager [req-127184ba-8154-44bf-a02c-055d4b30c1df req-878d819a-f098-4d6c-98d3-e181d1de274e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] No waiting events found dispatching network-vif-plugged-adedd23a-8a64-4167-a4ad-7d586b717068 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:12:48 np0005539552 nova_compute[233724]: 2025-11-29 08:12:48.096 233728 WARNING nova.compute.manager [req-127184ba-8154-44bf-a02c-055d4b30c1df req-878d819a-f098-4d6c-98d3-e181d1de274e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Received unexpected event network-vif-plugged-adedd23a-8a64-4167-a4ad-7d586b717068 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:12:48 np0005539552 ovn_controller[133798]: 2025-11-29T08:12:48Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:48:12:77 10.100.0.10
Nov 29 03:12:48 np0005539552 ovn_controller[133798]: 2025-11-29T08:12:48Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:48:12:77 10.100.0.10
Nov 29 03:12:48 np0005539552 nova_compute[233724]: 2025-11-29 08:12:48.954 233728 DEBUG oslo_concurrency.lockutils [None req-82100ab0-3302-4191-b6ca-30e0e022c1c2 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "interface-f2279c6c-774a-44e1-854b-d9aae353330e-adedd23a-8a64-4167-a4ad-7d586b717068" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:48 np0005539552 nova_compute[233724]: 2025-11-29 08:12:48.954 233728 DEBUG oslo_concurrency.lockutils [None req-82100ab0-3302-4191-b6ca-30e0e022c1c2 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "interface-f2279c6c-774a-44e1-854b-d9aae353330e-adedd23a-8a64-4167-a4ad-7d586b717068" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:48 np0005539552 nova_compute[233724]: 2025-11-29 08:12:48.975 233728 DEBUG nova.objects.instance [None req-82100ab0-3302-4191-b6ca-30e0e022c1c2 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lazy-loading 'flavor' on Instance uuid f2279c6c-774a-44e1-854b-d9aae353330e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:12:49 np0005539552 nova_compute[233724]: 2025-11-29 08:12:48.999 233728 DEBUG nova.virt.libvirt.vif [None req-82100ab0-3302-4191-b6ca-30e0e022c1c2 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:11:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1979756702',display_name='tempest-tempest.common.compute-instance-1979756702',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1979756702',id=83,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDEWXop04+uZLtvBsFcRaUa+Xwr7qm7TY0ta2r6fa1ieuV+DkD/NEV3POpVnY30H29RqMvmxXH2BxKbSYQ2SIJEBajPEkr5PX8Sjsh+mKC5/3A6DjsKy96CC8Vn8W5GT2A==',key_name='tempest-keypair-2040108904',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:12:07Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ff7c805d4242453aa2148a247956391d',ramdisk_id='',reservation_id='r-c8i3k0lp',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-372493183',owner_user_name='tempest-AttachInterfacesTestJSON-372493183-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:12:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b95b3e841be1420c99ee0a04dd0840f1',uuid=f2279c6c-774a-44e1-854b-d9aae353330e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "adedd23a-8a64-4167-a4ad-7d586b717068", "address": "fa:16:3e:48:12:77", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapadedd23a-8a", "ovs_interfaceid": "adedd23a-8a64-4167-a4ad-7d586b717068", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:12:49 np0005539552 nova_compute[233724]: 2025-11-29 08:12:48.999 233728 DEBUG nova.network.os_vif_util [None req-82100ab0-3302-4191-b6ca-30e0e022c1c2 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converting VIF {"id": "adedd23a-8a64-4167-a4ad-7d586b717068", "address": "fa:16:3e:48:12:77", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapadedd23a-8a", "ovs_interfaceid": "adedd23a-8a64-4167-a4ad-7d586b717068", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:12:49 np0005539552 nova_compute[233724]: 2025-11-29 08:12:49.000 233728 DEBUG nova.network.os_vif_util [None req-82100ab0-3302-4191-b6ca-30e0e022c1c2 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:12:77,bridge_name='br-int',has_traffic_filtering=True,id=adedd23a-8a64-4167-a4ad-7d586b717068,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapadedd23a-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:12:49 np0005539552 nova_compute[233724]: 2025-11-29 08:12:49.003 233728 DEBUG nova.virt.libvirt.guest [None req-82100ab0-3302-4191-b6ca-30e0e022c1c2 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:48:12:77"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapadedd23a-8a"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 29 03:12:49 np0005539552 nova_compute[233724]: 2025-11-29 08:12:49.005 233728 DEBUG nova.virt.libvirt.guest [None req-82100ab0-3302-4191-b6ca-30e0e022c1c2 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:48:12:77"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapadedd23a-8a"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 29 03:12:49 np0005539552 nova_compute[233724]: 2025-11-29 08:12:49.007 233728 DEBUG nova.virt.libvirt.driver [None req-82100ab0-3302-4191-b6ca-30e0e022c1c2 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Attempting to detach device tapadedd23a-8a from instance f2279c6c-774a-44e1-854b-d9aae353330e from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:12:49 np0005539552 nova_compute[233724]: 2025-11-29 08:12:49.007 233728 DEBUG nova.virt.libvirt.guest [None req-82100ab0-3302-4191-b6ca-30e0e022c1c2 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] detach device xml: <interface type="ethernet">
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <mac address="fa:16:3e:48:12:77"/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <model type="virtio"/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <mtu size="1442"/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <target dev="tapadedd23a-8a"/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]: </interface>
Nov 29 03:12:49 np0005539552 nova_compute[233724]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:12:49 np0005539552 nova_compute[233724]: 2025-11-29 08:12:49.013 233728 DEBUG nova.virt.libvirt.guest [None req-82100ab0-3302-4191-b6ca-30e0e022c1c2 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:48:12:77"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapadedd23a-8a"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 29 03:12:49 np0005539552 nova_compute[233724]: 2025-11-29 08:12:49.016 233728 DEBUG nova.virt.libvirt.guest [None req-82100ab0-3302-4191-b6ca-30e0e022c1c2 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:48:12:77"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapadedd23a-8a"/></interface>not found in domain: <domain type='kvm' id='31'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <name>instance-00000053</name>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <uuid>f2279c6c-774a-44e1-854b-d9aae353330e</uuid>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <nova:name>tempest-tempest.common.compute-instance-1979756702</nova:name>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <nova:creationTime>2025-11-29 08:12:47</nova:creationTime>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <nova:flavor name="m1.nano">
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <nova:memory>128</nova:memory>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <nova:disk>1</nova:disk>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <nova:swap>0</nova:swap>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  </nova:flavor>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <nova:owner>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <nova:user uuid="b95b3e841be1420c99ee0a04dd0840f1">tempest-AttachInterfacesTestJSON-372493183-project-member</nova:user>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <nova:project uuid="ff7c805d4242453aa2148a247956391d">tempest-AttachInterfacesTestJSON-372493183</nova:project>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  </nova:owner>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <nova:ports>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <nova:port uuid="e7660cdd-1f88-4458-9388-8fb207f0a754">
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </nova:port>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <nova:port uuid="adedd23a-8a64-4167-a4ad-7d586b717068">
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </nova:port>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  </nova:ports>
Nov 29 03:12:49 np0005539552 nova_compute[233724]: </nova:instance>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <memory unit='KiB'>131072</memory>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <vcpu placement='static'>1</vcpu>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <resource>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <partition>/machine</partition>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  </resource>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <sysinfo type='smbios'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <entry name='manufacturer'>RDO</entry>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <entry name='product'>OpenStack Compute</entry>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <entry name='serial'>f2279c6c-774a-44e1-854b-d9aae353330e</entry>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <entry name='uuid'>f2279c6c-774a-44e1-854b-d9aae353330e</entry>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <entry name='family'>Virtual Machine</entry>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <boot dev='hd'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <smbios mode='sysinfo'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <vmcoreinfo state='on'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <cpu mode='custom' match='exact' check='full'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <model fallback='forbid'>Nehalem</model>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <feature policy='require' name='x2apic'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <feature policy='require' name='hypervisor'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <feature policy='require' name='vme'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <clock offset='utc'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <timer name='pit' tickpolicy='delay'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <timer name='hpet' present='no'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <on_poweroff>destroy</on_poweroff>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <on_reboot>restart</on_reboot>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <on_crash>destroy</on_crash>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <disk type='network' device='disk'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <auth username='openstack'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:        <secret type='ceph' uuid='b66774a7-56d9-5535-bd8c-681234404870'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <source protocol='rbd' name='vms/f2279c6c-774a-44e1-854b-d9aae353330e_disk' index='2'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target dev='vda' bus='virtio'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='virtio-disk0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <disk type='network' device='cdrom'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <auth username='openstack'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:        <secret type='ceph' uuid='b66774a7-56d9-5535-bd8c-681234404870'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <source protocol='rbd' name='vms/f2279c6c-774a-44e1-854b-d9aae353330e_disk.config' index='1'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target dev='sda' bus='sata'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <readonly/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='sata0-0-0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='0' model='pcie-root'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pcie.0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='1' port='0x10'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.1'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='2' port='0x11'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.2'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='3' port='0x12'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.3'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='4' port='0x13'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.4'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='5' port='0x14'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.5'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='6' port='0x15'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.6'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='7' port='0x16'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.7'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='8' port='0x17'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.8'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='9' port='0x18'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.9'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='10' port='0x19'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.10'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='11' port='0x1a'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.11'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='12' port='0x1b'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.12'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='13' port='0x1c'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.13'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='14' port='0x1d'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.14'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='15' port='0x1e'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.15'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='16' port='0x1f'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.16'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='17' port='0x20'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.17'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='18' port='0x21'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.18'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='19' port='0x22'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.19'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='20' port='0x23'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.20'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='21' port='0x24'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.21'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='22' port='0x25'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.22'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='23' port='0x26'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.23'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='24' port='0x27'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.24'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='25' port='0x28'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.25'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-pci-bridge'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.26'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='usb'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='sata' index='0'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='ide'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <interface type='ethernet'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <mac address='fa:16:3e:4b:d3:cf'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target dev='tape7660cdd-1f'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model type='virtio'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <mtu size='1442'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='net0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <interface type='ethernet'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <mac address='fa:16:3e:48:12:77'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target dev='tapadedd23a-8a'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model type='virtio'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <mtu size='1442'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='net1'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <serial type='pty'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <source path='/dev/pts/0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <log file='/var/lib/nova/instances/f2279c6c-774a-44e1-854b-d9aae353330e/console.log' append='off'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target type='isa-serial' port='0'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:        <model name='isa-serial'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      </target>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='serial0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <console type='pty' tty='/dev/pts/0'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <source path='/dev/pts/0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <log file='/var/lib/nova/instances/f2279c6c-774a-44e1-854b-d9aae353330e/console.log' append='off'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target type='serial' port='0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='serial0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </console>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <input type='tablet' bus='usb'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='input0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='usb' bus='0' port='1'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </input>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <input type='mouse' bus='ps2'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='input1'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </input>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <input type='keyboard' bus='ps2'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='input2'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </input>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <listen type='address' address='::0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </graphics>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <audio id='1' type='none'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model type='virtio' heads='1' primary='yes'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='video0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <watchdog model='itco' action='reset'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='watchdog0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </watchdog>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <memballoon model='virtio'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <stats period='10'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='balloon0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <rng model='virtio'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <backend model='random'>/dev/urandom</backend>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='rng0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <label>system_u:system_r:svirt_t:s0:c379,c607</label>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c379,c607</imagelabel>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  </seclabel>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <label>+107:+107</label>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <imagelabel>+107:+107</imagelabel>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  </seclabel>
Nov 29 03:12:49 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:12:49 np0005539552 nova_compute[233724]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 29 03:12:49 np0005539552 nova_compute[233724]: 2025-11-29 08:12:49.017 233728 INFO nova.virt.libvirt.driver [None req-82100ab0-3302-4191-b6ca-30e0e022c1c2 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Successfully detached device tapadedd23a-8a from instance f2279c6c-774a-44e1-854b-d9aae353330e from the persistent domain config.
Nov 29 03:12:49 np0005539552 nova_compute[233724]: 2025-11-29 08:12:49.018 233728 DEBUG nova.virt.libvirt.driver [None req-82100ab0-3302-4191-b6ca-30e0e022c1c2 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] (1/8): Attempting to detach device tapadedd23a-8a with device alias net1 from instance f2279c6c-774a-44e1-854b-d9aae353330e from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 03:12:49 np0005539552 nova_compute[233724]: 2025-11-29 08:12:49.018 233728 DEBUG nova.virt.libvirt.guest [None req-82100ab0-3302-4191-b6ca-30e0e022c1c2 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] detach device xml: <interface type="ethernet">
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <mac address="fa:16:3e:48:12:77"/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <model type="virtio"/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <mtu size="1442"/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <target dev="tapadedd23a-8a"/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]: </interface>
Nov 29 03:12:49 np0005539552 nova_compute[233724]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 03:12:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:49.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:49 np0005539552 kernel: tapadedd23a-8a (unregistering): left promiscuous mode
Nov 29 03:12:49 np0005539552 NetworkManager[48926]: <info>  [1764403969.1180] device (tapadedd23a-8a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:12:49 np0005539552 ovn_controller[133798]: 2025-11-29T08:12:49Z|00273|binding|INFO|Releasing lport adedd23a-8a64-4167-a4ad-7d586b717068 from this chassis (sb_readonly=0)
Nov 29 03:12:49 np0005539552 ovn_controller[133798]: 2025-11-29T08:12:49Z|00274|binding|INFO|Setting lport adedd23a-8a64-4167-a4ad-7d586b717068 down in Southbound
Nov 29 03:12:49 np0005539552 nova_compute[233724]: 2025-11-29 08:12:49.126 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:12:49 np0005539552 ovn_controller[133798]: 2025-11-29T08:12:49Z|00275|binding|INFO|Removing iface tapadedd23a-8a ovn-installed in OVS
Nov 29 03:12:49 np0005539552 nova_compute[233724]: 2025-11-29 08:12:49.128 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:12:49 np0005539552 nova_compute[233724]: 2025-11-29 08:12:49.131 233728 DEBUG nova.virt.libvirt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Received event <DeviceRemovedEvent: 1764403969.1308846, f2279c6c-774a-44e1-854b-d9aae353330e => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 03:12:49 np0005539552 nova_compute[233724]: 2025-11-29 08:12:49.132 233728 DEBUG nova.virt.libvirt.driver [None req-82100ab0-3302-4191-b6ca-30e0e022c1c2 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Start waiting for the detach event from libvirt for device tapadedd23a-8a with device alias net1 for instance f2279c6c-774a-44e1-854b-d9aae353330e _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 03:12:49 np0005539552 nova_compute[233724]: 2025-11-29 08:12:49.133 233728 DEBUG nova.virt.libvirt.guest [None req-82100ab0-3302-4191-b6ca-30e0e022c1c2 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:48:12:77"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapadedd23a-8a"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 29 03:12:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:49.133 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:12:77 10.100.0.10'], port_security=['fa:16:3e:48:12:77 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1164795283', 'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f2279c6c-774a-44e1-854b-d9aae353330e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1164795283', 'neutron:project_id': 'ff7c805d4242453aa2148a247956391d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '837c5830-d55f-47dc-af7f-7cef5a2ab737', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5330ba90-719c-42ae-a31a-dd5fd1d240e2, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=adedd23a-8a64-4167-a4ad-7d586b717068) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:12:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:49.135 143400 INFO neutron.agent.ovn.metadata.agent [-] Port adedd23a-8a64-4167-a4ad-7d586b717068 in datapath ddd8b166-79ec-408d-b52c-581ad9dd6cb8 unbound from our chassis
Nov 29 03:12:49 np0005539552 nova_compute[233724]: 2025-11-29 08:12:49.136 233728 DEBUG nova.virt.libvirt.guest [None req-82100ab0-3302-4191-b6ca-30e0e022c1c2 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:48:12:77"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapadedd23a-8a"/></interface> not found in domain: <domain type='kvm' id='31'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <name>instance-00000053</name>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <uuid>f2279c6c-774a-44e1-854b-d9aae353330e</uuid>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <nova:name>tempest-tempest.common.compute-instance-1979756702</nova:name>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <nova:creationTime>2025-11-29 08:12:47</nova:creationTime>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <nova:flavor name="m1.nano">
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <nova:memory>128</nova:memory>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <nova:disk>1</nova:disk>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <nova:swap>0</nova:swap>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  </nova:flavor>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <nova:owner>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <nova:user uuid="b95b3e841be1420c99ee0a04dd0840f1">tempest-AttachInterfacesTestJSON-372493183-project-member</nova:user>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <nova:project uuid="ff7c805d4242453aa2148a247956391d">tempest-AttachInterfacesTestJSON-372493183</nova:project>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  </nova:owner>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <nova:ports>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <nova:port uuid="e7660cdd-1f88-4458-9388-8fb207f0a754">
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </nova:port>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <nova:port uuid="adedd23a-8a64-4167-a4ad-7d586b717068">
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </nova:port>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  </nova:ports>
Nov 29 03:12:49 np0005539552 nova_compute[233724]: </nova:instance>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <memory unit='KiB'>131072</memory>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <vcpu placement='static'>1</vcpu>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <resource>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <partition>/machine</partition>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  </resource>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <sysinfo type='smbios'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <entry name='manufacturer'>RDO</entry>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <entry name='product'>OpenStack Compute</entry>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <entry name='serial'>f2279c6c-774a-44e1-854b-d9aae353330e</entry>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <entry name='uuid'>f2279c6c-774a-44e1-854b-d9aae353330e</entry>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <entry name='family'>Virtual Machine</entry>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <boot dev='hd'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <smbios mode='sysinfo'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <vmcoreinfo state='on'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <cpu mode='custom' match='exact' check='full'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <model fallback='forbid'>Nehalem</model>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <feature policy='require' name='x2apic'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <feature policy='require' name='hypervisor'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <feature policy='require' name='vme'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <clock offset='utc'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <timer name='pit' tickpolicy='delay'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <timer name='hpet' present='no'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <on_poweroff>destroy</on_poweroff>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <on_reboot>restart</on_reboot>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <on_crash>destroy</on_crash>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <disk type='network' device='disk'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <auth username='openstack'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:        <secret type='ceph' uuid='b66774a7-56d9-5535-bd8c-681234404870'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <source protocol='rbd' name='vms/f2279c6c-774a-44e1-854b-d9aae353330e_disk' index='2'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target dev='vda' bus='virtio'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='virtio-disk0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <disk type='network' device='cdrom'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <auth username='openstack'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:        <secret type='ceph' uuid='b66774a7-56d9-5535-bd8c-681234404870'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <source protocol='rbd' name='vms/f2279c6c-774a-44e1-854b-d9aae353330e_disk.config' index='1'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target dev='sda' bus='sata'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <readonly/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='sata0-0-0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='0' model='pcie-root'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pcie.0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='1' port='0x10'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.1'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='2' port='0x11'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.2'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:49.137 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ddd8b166-79ec-408d-b52c-581ad9dd6cb8
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='3' port='0x12'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.3'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='4' port='0x13'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.4'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='5' port='0x14'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.5'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='6' port='0x15'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.6'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='7' port='0x16'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.7'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='8' port='0x17'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.8'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='9' port='0x18'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.9'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='10' port='0x19'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.10'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='11' port='0x1a'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.11'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='12' port='0x1b'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.12'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='13' port='0x1c'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.13'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='14' port='0x1d'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.14'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='15' port='0x1e'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.15'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='16' port='0x1f'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.16'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='17' port='0x20'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.17'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='18' port='0x21'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.18'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='19' port='0x22'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.19'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='20' port='0x23'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.20'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='21' port='0x24'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.21'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='22' port='0x25'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.22'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='23' port='0x26'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.23'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='24' port='0x27'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.24'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target chassis='25' port='0x28'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.25'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model name='pcie-pci-bridge'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='pci.26'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='usb'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <controller type='sata' index='0'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='ide'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <interface type='ethernet'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <mac address='fa:16:3e:4b:d3:cf'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target dev='tape7660cdd-1f'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model type='virtio'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <mtu size='1442'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='net0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <serial type='pty'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <source path='/dev/pts/0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <log file='/var/lib/nova/instances/f2279c6c-774a-44e1-854b-d9aae353330e/console.log' append='off'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target type='isa-serial' port='0'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:        <model name='isa-serial'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      </target>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='serial0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <console type='pty' tty='/dev/pts/0'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <source path='/dev/pts/0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <log file='/var/lib/nova/instances/f2279c6c-774a-44e1-854b-d9aae353330e/console.log' append='off'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <target type='serial' port='0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='serial0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </console>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <input type='tablet' bus='usb'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='input0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='usb' bus='0' port='1'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </input>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <input type='mouse' bus='ps2'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='input1'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </input>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <input type='keyboard' bus='ps2'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='input2'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </input>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <listen type='address' address='::0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </graphics>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <audio id='1' type='none'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <model type='virtio' heads='1' primary='yes'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='video0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <watchdog model='itco' action='reset'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='watchdog0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </watchdog>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <memballoon model='virtio'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <stats period='10'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='balloon0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <rng model='virtio'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <backend model='random'>/dev/urandom</backend>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <alias name='rng0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <label>system_u:system_r:svirt_t:s0:c379,c607</label>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c379,c607</imagelabel>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  </seclabel>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <label>+107:+107</label>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <imagelabel>+107:+107</imagelabel>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  </seclabel>
Nov 29 03:12:49 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:12:49 np0005539552 nova_compute[233724]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 29 03:12:49 np0005539552 nova_compute[233724]: 2025-11-29 08:12:49.136 233728 INFO nova.virt.libvirt.driver [None req-82100ab0-3302-4191-b6ca-30e0e022c1c2 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Successfully detached device tapadedd23a-8a from instance f2279c6c-774a-44e1-854b-d9aae353330e from the live domain config.
Nov 29 03:12:49 np0005539552 nova_compute[233724]: 2025-11-29 08:12:49.137 233728 DEBUG nova.virt.libvirt.vif [None req-82100ab0-3302-4191-b6ca-30e0e022c1c2 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:11:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1979756702',display_name='tempest-tempest.common.compute-instance-1979756702',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1979756702',id=83,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDEWXop04+uZLtvBsFcRaUa+Xwr7qm7TY0ta2r6fa1ieuV+DkD/NEV3POpVnY30H29RqMvmxXH2BxKbSYQ2SIJEBajPEkr5PX8Sjsh+mKC5/3A6DjsKy96CC8Vn8W5GT2A==',key_name='tempest-keypair-2040108904',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:12:07Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ff7c805d4242453aa2148a247956391d',ramdisk_id='',reservation_id='r-c8i3k0lp',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-372493183',owner_user_name='tempest-AttachInterfacesTestJSON-372493183-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:12:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b95b3e841be1420c99ee0a04dd0840f1',uuid=f2279c6c-774a-44e1-854b-d9aae353330e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "adedd23a-8a64-4167-a4ad-7d586b717068", "address": "fa:16:3e:48:12:77", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapadedd23a-8a", "ovs_interfaceid": "adedd23a-8a64-4167-a4ad-7d586b717068", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:12:49 np0005539552 nova_compute[233724]: 2025-11-29 08:12:49.137 233728 DEBUG nova.network.os_vif_util [None req-82100ab0-3302-4191-b6ca-30e0e022c1c2 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converting VIF {"id": "adedd23a-8a64-4167-a4ad-7d586b717068", "address": "fa:16:3e:48:12:77", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapadedd23a-8a", "ovs_interfaceid": "adedd23a-8a64-4167-a4ad-7d586b717068", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:12:49 np0005539552 nova_compute[233724]: 2025-11-29 08:12:49.138 233728 DEBUG nova.network.os_vif_util [None req-82100ab0-3302-4191-b6ca-30e0e022c1c2 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:12:77,bridge_name='br-int',has_traffic_filtering=True,id=adedd23a-8a64-4167-a4ad-7d586b717068,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapadedd23a-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:12:49 np0005539552 nova_compute[233724]: 2025-11-29 08:12:49.138 233728 DEBUG os_vif [None req-82100ab0-3302-4191-b6ca-30e0e022c1c2 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:12:77,bridge_name='br-int',has_traffic_filtering=True,id=adedd23a-8a64-4167-a4ad-7d586b717068,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapadedd23a-8a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:12:49 np0005539552 nova_compute[233724]: 2025-11-29 08:12:49.140 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:49 np0005539552 nova_compute[233724]: 2025-11-29 08:12:49.140 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapadedd23a-8a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:49 np0005539552 nova_compute[233724]: 2025-11-29 08:12:49.141 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:49 np0005539552 nova_compute[233724]: 2025-11-29 08:12:49.143 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:49 np0005539552 nova_compute[233724]: 2025-11-29 08:12:49.145 233728 INFO os_vif [None req-82100ab0-3302-4191-b6ca-30e0e022c1c2 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:12:77,bridge_name='br-int',has_traffic_filtering=True,id=adedd23a-8a64-4167-a4ad-7d586b717068,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapadedd23a-8a')#033[00m
Nov 29 03:12:49 np0005539552 nova_compute[233724]: 2025-11-29 08:12:49.146 233728 DEBUG nova.virt.libvirt.guest [None req-82100ab0-3302-4191-b6ca-30e0e022c1c2 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <nova:name>tempest-tempest.common.compute-instance-1979756702</nova:name>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <nova:creationTime>2025-11-29 08:12:49</nova:creationTime>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <nova:flavor name="m1.nano">
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <nova:memory>128</nova:memory>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <nova:disk>1</nova:disk>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <nova:swap>0</nova:swap>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  </nova:flavor>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <nova:owner>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <nova:user uuid="b95b3e841be1420c99ee0a04dd0840f1">tempest-AttachInterfacesTestJSON-372493183-project-member</nova:user>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <nova:project uuid="ff7c805d4242453aa2148a247956391d">tempest-AttachInterfacesTestJSON-372493183</nova:project>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  </nova:owner>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  <nova:ports>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    <nova:port uuid="e7660cdd-1f88-4458-9388-8fb207f0a754">
Nov 29 03:12:49 np0005539552 nova_compute[233724]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:    </nova:port>
Nov 29 03:12:49 np0005539552 nova_compute[233724]:  </nova:ports>
Nov 29 03:12:49 np0005539552 nova_compute[233724]: </nova:instance>
Nov 29 03:12:49 np0005539552 nova_compute[233724]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 29 03:12:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:49.154 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[87dde9fa-260f-4e9c-a33a-73250a203659]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:49.183 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[f178c4d5-c3aa-41c1-9786-e878e2422ee3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:49.186 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[893bda65-650e-4d66-a87d-be034ca07818]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:49.215 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[7e01e1bf-2afe-4ece-9dbf-d993cec276a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:49.234 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[598ed4b2-9f0a-46af-bcf2-e00a4c58a966]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapddd8b166-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9d:35:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 698, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 698, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697138, 'reachable_time': 22900, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 5, 'outoctets': 376, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 5, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 376, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 5, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268574, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:49.256 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[94a586a9-c4af-49d0-99c3-476c7c0b04ab]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapddd8b166-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 697149, 'tstamp': 697149}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268575, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapddd8b166-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 697151, 'tstamp': 697151}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268575, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:49.259 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapddd8b166-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:49 np0005539552 nova_compute[233724]: 2025-11-29 08:12:49.261 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:49.262 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapddd8b166-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:49.262 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:12:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:49.263 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapddd8b166-70, col_values=(('external_ids', {'iface-id': 'a9e57abf-e3e4-455b-b4c5-0cda127bd5c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:49.263 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:12:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:49.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:50 np0005539552 nova_compute[233724]: 2025-11-29 08:12:50.111 233728 DEBUG nova.network.neutron [req-83f1958f-dfe6-4bda-b53f-36f51c847a93 req-e0e19b57-0b3e-4b03-b8f6-00435374abf1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Updated VIF entry in instance network info cache for port adedd23a-8a64-4167-a4ad-7d586b717068. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:12:50 np0005539552 nova_compute[233724]: 2025-11-29 08:12:50.112 233728 DEBUG nova.network.neutron [req-83f1958f-dfe6-4bda-b53f-36f51c847a93 req-e0e19b57-0b3e-4b03-b8f6-00435374abf1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Updating instance_info_cache with network_info: [{"id": "e7660cdd-1f88-4458-9388-8fb207f0a754", "address": "fa:16:3e:4b:d3:cf", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7660cdd-1f", "ovs_interfaceid": "e7660cdd-1f88-4458-9388-8fb207f0a754", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "adedd23a-8a64-4167-a4ad-7d586b717068", "address": "fa:16:3e:48:12:77", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapadedd23a-8a", "ovs_interfaceid": "adedd23a-8a64-4167-a4ad-7d586b717068", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:12:50 np0005539552 nova_compute[233724]: 2025-11-29 08:12:50.127 233728 DEBUG oslo_concurrency.lockutils [req-83f1958f-dfe6-4bda-b53f-36f51c847a93 req-e0e19b57-0b3e-4b03-b8f6-00435374abf1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-f2279c6c-774a-44e1-854b-d9aae353330e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:12:50 np0005539552 nova_compute[233724]: 2025-11-29 08:12:50.211 233728 DEBUG nova.compute.manager [req-10d61fc7-4dcd-4eda-8dcd-d9973e2ed53d req-5a755cf3-6518-423a-bb65-ec08ac1bfa36 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Received event network-vif-plugged-adedd23a-8a64-4167-a4ad-7d586b717068 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:50 np0005539552 nova_compute[233724]: 2025-11-29 08:12:50.212 233728 DEBUG oslo_concurrency.lockutils [req-10d61fc7-4dcd-4eda-8dcd-d9973e2ed53d req-5a755cf3-6518-423a-bb65-ec08ac1bfa36 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "f2279c6c-774a-44e1-854b-d9aae353330e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:50 np0005539552 nova_compute[233724]: 2025-11-29 08:12:50.212 233728 DEBUG oslo_concurrency.lockutils [req-10d61fc7-4dcd-4eda-8dcd-d9973e2ed53d req-5a755cf3-6518-423a-bb65-ec08ac1bfa36 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f2279c6c-774a-44e1-854b-d9aae353330e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:50 np0005539552 nova_compute[233724]: 2025-11-29 08:12:50.212 233728 DEBUG oslo_concurrency.lockutils [req-10d61fc7-4dcd-4eda-8dcd-d9973e2ed53d req-5a755cf3-6518-423a-bb65-ec08ac1bfa36 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f2279c6c-774a-44e1-854b-d9aae353330e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:50 np0005539552 nova_compute[233724]: 2025-11-29 08:12:50.212 233728 DEBUG nova.compute.manager [req-10d61fc7-4dcd-4eda-8dcd-d9973e2ed53d req-5a755cf3-6518-423a-bb65-ec08ac1bfa36 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] No waiting events found dispatching network-vif-plugged-adedd23a-8a64-4167-a4ad-7d586b717068 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:12:50 np0005539552 nova_compute[233724]: 2025-11-29 08:12:50.212 233728 WARNING nova.compute.manager [req-10d61fc7-4dcd-4eda-8dcd-d9973e2ed53d req-5a755cf3-6518-423a-bb65-ec08ac1bfa36 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Received unexpected event network-vif-plugged-adedd23a-8a64-4167-a4ad-7d586b717068 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:12:50 np0005539552 nova_compute[233724]: 2025-11-29 08:12:50.213 233728 DEBUG nova.compute.manager [req-10d61fc7-4dcd-4eda-8dcd-d9973e2ed53d req-5a755cf3-6518-423a-bb65-ec08ac1bfa36 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Received event network-vif-unplugged-adedd23a-8a64-4167-a4ad-7d586b717068 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:50 np0005539552 nova_compute[233724]: 2025-11-29 08:12:50.213 233728 DEBUG oslo_concurrency.lockutils [req-10d61fc7-4dcd-4eda-8dcd-d9973e2ed53d req-5a755cf3-6518-423a-bb65-ec08ac1bfa36 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "f2279c6c-774a-44e1-854b-d9aae353330e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:50 np0005539552 nova_compute[233724]: 2025-11-29 08:12:50.213 233728 DEBUG oslo_concurrency.lockutils [req-10d61fc7-4dcd-4eda-8dcd-d9973e2ed53d req-5a755cf3-6518-423a-bb65-ec08ac1bfa36 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f2279c6c-774a-44e1-854b-d9aae353330e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:50 np0005539552 nova_compute[233724]: 2025-11-29 08:12:50.213 233728 DEBUG oslo_concurrency.lockutils [req-10d61fc7-4dcd-4eda-8dcd-d9973e2ed53d req-5a755cf3-6518-423a-bb65-ec08ac1bfa36 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f2279c6c-774a-44e1-854b-d9aae353330e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:50 np0005539552 nova_compute[233724]: 2025-11-29 08:12:50.213 233728 DEBUG nova.compute.manager [req-10d61fc7-4dcd-4eda-8dcd-d9973e2ed53d req-5a755cf3-6518-423a-bb65-ec08ac1bfa36 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] No waiting events found dispatching network-vif-unplugged-adedd23a-8a64-4167-a4ad-7d586b717068 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:12:50 np0005539552 nova_compute[233724]: 2025-11-29 08:12:50.214 233728 WARNING nova.compute.manager [req-10d61fc7-4dcd-4eda-8dcd-d9973e2ed53d req-5a755cf3-6518-423a-bb65-ec08ac1bfa36 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Received unexpected event network-vif-unplugged-adedd23a-8a64-4167-a4ad-7d586b717068 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:12:50 np0005539552 nova_compute[233724]: 2025-11-29 08:12:50.214 233728 DEBUG nova.compute.manager [req-10d61fc7-4dcd-4eda-8dcd-d9973e2ed53d req-5a755cf3-6518-423a-bb65-ec08ac1bfa36 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Received event network-vif-plugged-adedd23a-8a64-4167-a4ad-7d586b717068 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:50 np0005539552 nova_compute[233724]: 2025-11-29 08:12:50.214 233728 DEBUG oslo_concurrency.lockutils [req-10d61fc7-4dcd-4eda-8dcd-d9973e2ed53d req-5a755cf3-6518-423a-bb65-ec08ac1bfa36 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "f2279c6c-774a-44e1-854b-d9aae353330e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:50 np0005539552 nova_compute[233724]: 2025-11-29 08:12:50.214 233728 DEBUG oslo_concurrency.lockutils [req-10d61fc7-4dcd-4eda-8dcd-d9973e2ed53d req-5a755cf3-6518-423a-bb65-ec08ac1bfa36 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f2279c6c-774a-44e1-854b-d9aae353330e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:50 np0005539552 nova_compute[233724]: 2025-11-29 08:12:50.214 233728 DEBUG oslo_concurrency.lockutils [req-10d61fc7-4dcd-4eda-8dcd-d9973e2ed53d req-5a755cf3-6518-423a-bb65-ec08ac1bfa36 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f2279c6c-774a-44e1-854b-d9aae353330e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:50 np0005539552 nova_compute[233724]: 2025-11-29 08:12:50.215 233728 DEBUG nova.compute.manager [req-10d61fc7-4dcd-4eda-8dcd-d9973e2ed53d req-5a755cf3-6518-423a-bb65-ec08ac1bfa36 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] No waiting events found dispatching network-vif-plugged-adedd23a-8a64-4167-a4ad-7d586b717068 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:12:50 np0005539552 nova_compute[233724]: 2025-11-29 08:12:50.215 233728 WARNING nova.compute.manager [req-10d61fc7-4dcd-4eda-8dcd-d9973e2ed53d req-5a755cf3-6518-423a-bb65-ec08ac1bfa36 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Received unexpected event network-vif-plugged-adedd23a-8a64-4167-a4ad-7d586b717068 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:12:50 np0005539552 nova_compute[233724]: 2025-11-29 08:12:50.865 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:50 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e276 e276: 3 total, 3 up, 3 in
Nov 29 03:12:51 np0005539552 nova_compute[233724]: 2025-11-29 08:12:51.029 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:51.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:51.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:51 np0005539552 nova_compute[233724]: 2025-11-29 08:12:51.598 233728 DEBUG oslo_concurrency.lockutils [None req-82100ab0-3302-4191-b6ca-30e0e022c1c2 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "refresh_cache-f2279c6c-774a-44e1-854b-d9aae353330e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:12:51 np0005539552 nova_compute[233724]: 2025-11-29 08:12:51.599 233728 DEBUG oslo_concurrency.lockutils [None req-82100ab0-3302-4191-b6ca-30e0e022c1c2 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquired lock "refresh_cache-f2279c6c-774a-44e1-854b-d9aae353330e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:12:51 np0005539552 nova_compute[233724]: 2025-11-29 08:12:51.599 233728 DEBUG nova.network.neutron [None req-82100ab0-3302-4191-b6ca-30e0e022c1c2 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:12:51 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e277 e277: 3 total, 3 up, 3 in
Nov 29 03:12:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e278 e278: 3 total, 3 up, 3 in
Nov 29 03:12:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:53.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:53 np0005539552 nova_compute[233724]: 2025-11-29 08:12:53.283 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:53 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:53.283 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:12:53 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:12:53.285 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:12:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:53.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:53 np0005539552 podman[268578]: 2025-11-29 08:12:53.976125374 +0000 UTC m=+0.066504856 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:12:53 np0005539552 podman[268579]: 2025-11-29 08:12:53.985834616 +0000 UTC m=+0.074927753 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:12:53 np0005539552 podman[268580]: 2025-11-29 08:12:53.996924926 +0000 UTC m=+0.081466470 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:12:54 np0005539552 nova_compute[233724]: 2025-11-29 08:12:54.141 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:54 np0005539552 nova_compute[233724]: 2025-11-29 08:12:54.230 233728 INFO nova.network.neutron [None req-82100ab0-3302-4191-b6ca-30e0e022c1c2 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Port adedd23a-8a64-4167-a4ad-7d586b717068 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Nov 29 03:12:54 np0005539552 nova_compute[233724]: 2025-11-29 08:12:54.231 233728 DEBUG nova.network.neutron [None req-82100ab0-3302-4191-b6ca-30e0e022c1c2 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Updating instance_info_cache with network_info: [{"id": "e7660cdd-1f88-4458-9388-8fb207f0a754", "address": "fa:16:3e:4b:d3:cf", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7660cdd-1f", "ovs_interfaceid": "e7660cdd-1f88-4458-9388-8fb207f0a754", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:12:54 np0005539552 nova_compute[233724]: 2025-11-29 08:12:54.253 233728 DEBUG oslo_concurrency.lockutils [None req-82100ab0-3302-4191-b6ca-30e0e022c1c2 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Releasing lock "refresh_cache-f2279c6c-774a-44e1-854b-d9aae353330e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:12:54 np0005539552 nova_compute[233724]: 2025-11-29 08:12:54.286 233728 DEBUG oslo_concurrency.lockutils [None req-82100ab0-3302-4191-b6ca-30e0e022c1c2 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "interface-f2279c6c-774a-44e1-854b-d9aae353330e-adedd23a-8a64-4167-a4ad-7d586b717068" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 5.332s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:54 np0005539552 nova_compute[233724]: 2025-11-29 08:12:54.637 233728 DEBUG nova.compute.manager [req-109945a3-0553-4721-92bb-3686b0e731c5 req-2a9a4925-a6b4-44a3-b7f1-787ead46babe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Received event network-changed-e7660cdd-1f88-4458-9388-8fb207f0a754 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:54 np0005539552 nova_compute[233724]: 2025-11-29 08:12:54.638 233728 DEBUG nova.compute.manager [req-109945a3-0553-4721-92bb-3686b0e731c5 req-2a9a4925-a6b4-44a3-b7f1-787ead46babe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Refreshing instance network info cache due to event network-changed-e7660cdd-1f88-4458-9388-8fb207f0a754. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:12:54 np0005539552 nova_compute[233724]: 2025-11-29 08:12:54.638 233728 DEBUG oslo_concurrency.lockutils [req-109945a3-0553-4721-92bb-3686b0e731c5 req-2a9a4925-a6b4-44a3-b7f1-787ead46babe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-f2279c6c-774a-44e1-854b-d9aae353330e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:12:54 np0005539552 nova_compute[233724]: 2025-11-29 08:12:54.638 233728 DEBUG oslo_concurrency.lockutils [req-109945a3-0553-4721-92bb-3686b0e731c5 req-2a9a4925-a6b4-44a3-b7f1-787ead46babe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-f2279c6c-774a-44e1-854b-d9aae353330e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:12:54 np0005539552 nova_compute[233724]: 2025-11-29 08:12:54.638 233728 DEBUG nova.network.neutron [req-109945a3-0553-4721-92bb-3686b0e731c5 req-2a9a4925-a6b4-44a3-b7f1-787ead46babe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Refreshing network info cache for port e7660cdd-1f88-4458-9388-8fb207f0a754 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:12:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:55.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:55 np0005539552 nova_compute[233724]: 2025-11-29 08:12:55.364 233728 DEBUG oslo_concurrency.lockutils [None req-12a3b353-a176-4c89-b667-da3642203ac0 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "interface-7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b-adedd23a-8a64-4167-a4ad-7d586b717068" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:55 np0005539552 nova_compute[233724]: 2025-11-29 08:12:55.365 233728 DEBUG oslo_concurrency.lockutils [None req-12a3b353-a176-4c89-b667-da3642203ac0 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "interface-7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b-adedd23a-8a64-4167-a4ad-7d586b717068" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:55 np0005539552 nova_compute[233724]: 2025-11-29 08:12:55.365 233728 DEBUG nova.objects.instance [None req-12a3b353-a176-4c89-b667-da3642203ac0 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lazy-loading 'flavor' on Instance uuid 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:12:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:55.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:55 np0005539552 nova_compute[233724]: 2025-11-29 08:12:55.867 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:56 np0005539552 nova_compute[233724]: 2025-11-29 08:12:56.662 233728 DEBUG nova.objects.instance [None req-12a3b353-a176-4c89-b667-da3642203ac0 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lazy-loading 'pci_requests' on Instance uuid 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:12:56 np0005539552 nova_compute[233724]: 2025-11-29 08:12:56.681 233728 DEBUG nova.network.neutron [None req-12a3b353-a176-4c89-b667-da3642203ac0 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:12:56 np0005539552 nova_compute[233724]: 2025-11-29 08:12:56.891 233728 DEBUG nova.compute.manager [req-4ff073e0-df5b-4848-b01b-27294039051d req-99b474f5-f9eb-4302-a80a-5f7972e18a7c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Received event network-changed-834e7d4e-9813-48de-88d0-2da712aaa996 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:56 np0005539552 nova_compute[233724]: 2025-11-29 08:12:56.891 233728 DEBUG nova.compute.manager [req-4ff073e0-df5b-4848-b01b-27294039051d req-99b474f5-f9eb-4302-a80a-5f7972e18a7c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Refreshing instance network info cache due to event network-changed-834e7d4e-9813-48de-88d0-2da712aaa996. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:12:56 np0005539552 nova_compute[233724]: 2025-11-29 08:12:56.892 233728 DEBUG oslo_concurrency.lockutils [req-4ff073e0-df5b-4848-b01b-27294039051d req-99b474f5-f9eb-4302-a80a-5f7972e18a7c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:12:56 np0005539552 nova_compute[233724]: 2025-11-29 08:12:56.892 233728 DEBUG oslo_concurrency.lockutils [req-4ff073e0-df5b-4848-b01b-27294039051d req-99b474f5-f9eb-4302-a80a-5f7972e18a7c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:12:56 np0005539552 nova_compute[233724]: 2025-11-29 08:12:56.893 233728 DEBUG nova.network.neutron [req-4ff073e0-df5b-4848-b01b-27294039051d req-99b474f5-f9eb-4302-a80a-5f7972e18a7c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Refreshing network info cache for port 834e7d4e-9813-48de-88d0-2da712aaa996 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:12:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:12:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:57.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:12:57 np0005539552 nova_compute[233724]: 2025-11-29 08:12:57.401 233728 DEBUG nova.policy [None req-12a3b353-a176-4c89-b667-da3642203ac0 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b95b3e841be1420c99ee0a04dd0840f1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ff7c805d4242453aa2148a247956391d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:12:57 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:57.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:57 np0005539552 nova_compute[233724]: 2025-11-29 08:12:57.630 233728 DEBUG nova.network.neutron [req-109945a3-0553-4721-92bb-3686b0e731c5 req-2a9a4925-a6b4-44a3-b7f1-787ead46babe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Updated VIF entry in instance network info cache for port e7660cdd-1f88-4458-9388-8fb207f0a754. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:12:57 np0005539552 nova_compute[233724]: 2025-11-29 08:12:57.631 233728 DEBUG nova.network.neutron [req-109945a3-0553-4721-92bb-3686b0e731c5 req-2a9a4925-a6b4-44a3-b7f1-787ead46babe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Updating instance_info_cache with network_info: [{"id": "e7660cdd-1f88-4458-9388-8fb207f0a754", "address": "fa:16:3e:4b:d3:cf", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7660cdd-1f", "ovs_interfaceid": "e7660cdd-1f88-4458-9388-8fb207f0a754", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:12:57 np0005539552 nova_compute[233724]: 2025-11-29 08:12:57.656 233728 DEBUG oslo_concurrency.lockutils [req-109945a3-0553-4721-92bb-3686b0e731c5 req-2a9a4925-a6b4-44a3-b7f1-787ead46babe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-f2279c6c-774a-44e1-854b-d9aae353330e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:12:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e279 e279: 3 total, 3 up, 3 in
Nov 29 03:12:58 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Nov 29 03:12:58 np0005539552 nova_compute[233724]: 2025-11-29 08:12:58.400 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:58 np0005539552 nova_compute[233724]: 2025-11-29 08:12:58.470 233728 DEBUG nova.network.neutron [req-4ff073e0-df5b-4848-b01b-27294039051d req-99b474f5-f9eb-4302-a80a-5f7972e18a7c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Updated VIF entry in instance network info cache for port 834e7d4e-9813-48de-88d0-2da712aaa996. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:12:58 np0005539552 nova_compute[233724]: 2025-11-29 08:12:58.470 233728 DEBUG nova.network.neutron [req-4ff073e0-df5b-4848-b01b-27294039051d req-99b474f5-f9eb-4302-a80a-5f7972e18a7c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Updating instance_info_cache with network_info: [{"id": "834e7d4e-9813-48de-88d0-2da712aaa996", "address": "fa:16:3e:a8:eb:8c", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap834e7d4e-98", "ovs_interfaceid": "834e7d4e-9813-48de-88d0-2da712aaa996", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:12:58 np0005539552 nova_compute[233724]: 2025-11-29 08:12:58.498 233728 DEBUG oslo_concurrency.lockutils [req-4ff073e0-df5b-4848-b01b-27294039051d req-99b474f5-f9eb-4302-a80a-5f7972e18a7c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:12:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:59.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:59 np0005539552 nova_compute[233724]: 2025-11-29 08:12:59.142 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:59 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e280 e280: 3 total, 3 up, 3 in
Nov 29 03:12:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:12:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:12:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:59.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:12:59 np0005539552 nova_compute[233724]: 2025-11-29 08:12:59.933 233728 DEBUG nova.network.neutron [None req-12a3b353-a176-4c89-b667-da3642203ac0 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Successfully updated port: adedd23a-8a64-4167-a4ad-7d586b717068 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:12:59 np0005539552 nova_compute[233724]: 2025-11-29 08:12:59.960 233728 DEBUG oslo_concurrency.lockutils [None req-12a3b353-a176-4c89-b667-da3642203ac0 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "refresh_cache-7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:12:59 np0005539552 nova_compute[233724]: 2025-11-29 08:12:59.961 233728 DEBUG oslo_concurrency.lockutils [None req-12a3b353-a176-4c89-b667-da3642203ac0 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquired lock "refresh_cache-7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:12:59 np0005539552 nova_compute[233724]: 2025-11-29 08:12:59.961 233728 DEBUG nova.network.neutron [None req-12a3b353-a176-4c89-b667-da3642203ac0 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:13:00 np0005539552 nova_compute[233724]: 2025-11-29 08:13:00.062 233728 DEBUG nova.compute.manager [req-a7d05c68-10af-46bd-857c-97aa0d41507e req-79ad071b-304f-499b-ad87-5123acb85999 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Received event network-changed-adedd23a-8a64-4167-a4ad-7d586b717068 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:13:00 np0005539552 nova_compute[233724]: 2025-11-29 08:13:00.062 233728 DEBUG nova.compute.manager [req-a7d05c68-10af-46bd-857c-97aa0d41507e req-79ad071b-304f-499b-ad87-5123acb85999 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Refreshing instance network info cache due to event network-changed-adedd23a-8a64-4167-a4ad-7d586b717068. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:13:00 np0005539552 nova_compute[233724]: 2025-11-29 08:13:00.062 233728 DEBUG oslo_concurrency.lockutils [req-a7d05c68-10af-46bd-857c-97aa0d41507e req-79ad071b-304f-499b-ad87-5123acb85999 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:13:00 np0005539552 nova_compute[233724]: 2025-11-29 08:13:00.262 233728 WARNING nova.network.neutron [None req-12a3b353-a176-4c89-b667-da3642203ac0 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] ddd8b166-79ec-408d-b52c-581ad9dd6cb8 already exists in list: networks containing: ['ddd8b166-79ec-408d-b52c-581ad9dd6cb8']. ignoring it#033[00m
Nov 29 03:13:00 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e281 e281: 3 total, 3 up, 3 in
Nov 29 03:13:00 np0005539552 nova_compute[233724]: 2025-11-29 08:13:00.870 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:01.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:01 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:01.287 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:13:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:01.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:13:01 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e282 e282: 3 total, 3 up, 3 in
Nov 29 03:13:02 np0005539552 ovn_controller[133798]: 2025-11-29T08:13:02Z|00276|binding|INFO|Releasing lport a9e57abf-e3e4-455b-b4c5-0cda127bd5c1 from this chassis (sb_readonly=0)
Nov 29 03:13:02 np0005539552 nova_compute[233724]: 2025-11-29 08:13:02.130 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:02 np0005539552 nova_compute[233724]: 2025-11-29 08:13:02.668 233728 DEBUG nova.network.neutron [None req-12a3b353-a176-4c89-b667-da3642203ac0 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Updating instance_info_cache with network_info: [{"id": "834e7d4e-9813-48de-88d0-2da712aaa996", "address": "fa:16:3e:a8:eb:8c", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap834e7d4e-98", "ovs_interfaceid": "834e7d4e-9813-48de-88d0-2da712aaa996", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "adedd23a-8a64-4167-a4ad-7d586b717068", "address": "fa:16:3e:48:12:77", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapadedd23a-8a", "ovs_interfaceid": "adedd23a-8a64-4167-a4ad-7d586b717068", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:13:02 np0005539552 nova_compute[233724]: 2025-11-29 08:13:02.727 233728 DEBUG oslo_concurrency.lockutils [None req-12a3b353-a176-4c89-b667-da3642203ac0 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Releasing lock "refresh_cache-7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:13:02 np0005539552 nova_compute[233724]: 2025-11-29 08:13:02.728 233728 DEBUG oslo_concurrency.lockutils [req-a7d05c68-10af-46bd-857c-97aa0d41507e req-79ad071b-304f-499b-ad87-5123acb85999 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:13:02 np0005539552 nova_compute[233724]: 2025-11-29 08:13:02.728 233728 DEBUG nova.network.neutron [req-a7d05c68-10af-46bd-857c-97aa0d41507e req-79ad071b-304f-499b-ad87-5123acb85999 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Refreshing network info cache for port adedd23a-8a64-4167-a4ad-7d586b717068 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:13:02 np0005539552 nova_compute[233724]: 2025-11-29 08:13:02.731 233728 DEBUG nova.virt.libvirt.vif [None req-12a3b353-a176-4c89-b667-da3642203ac0 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:12:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1879739874',display_name='tempest-tempest.common.compute-instance-1879739874',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1879739874',id=86,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDEWXop04+uZLtvBsFcRaUa+Xwr7qm7TY0ta2r6fa1ieuV+DkD/NEV3POpVnY30H29RqMvmxXH2BxKbSYQ2SIJEBajPEkr5PX8Sjsh+mKC5/3A6DjsKy96CC8Vn8W5GT2A==',key_name='tempest-keypair-2040108904',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:12:33Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ff7c805d4242453aa2148a247956391d',ramdisk_id='',reservation_id='r-j2u2rbx1',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-372493183',owner_user_name='tempest-AttachInterfacesTestJSON-372493183-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:12:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b95b3e841be1420c99ee0a04dd0840f1',uuid=7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "adedd23a-8a64-4167-a4ad-7d586b717068", "address": "fa:16:3e:48:12:77", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapadedd23a-8a", "ovs_interfaceid": "adedd23a-8a64-4167-a4ad-7d586b717068", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:13:02 np0005539552 nova_compute[233724]: 2025-11-29 08:13:02.731 233728 DEBUG nova.network.os_vif_util [None req-12a3b353-a176-4c89-b667-da3642203ac0 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converting VIF {"id": "adedd23a-8a64-4167-a4ad-7d586b717068", "address": "fa:16:3e:48:12:77", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapadedd23a-8a", "ovs_interfaceid": "adedd23a-8a64-4167-a4ad-7d586b717068", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:13:02 np0005539552 nova_compute[233724]: 2025-11-29 08:13:02.732 233728 DEBUG nova.network.os_vif_util [None req-12a3b353-a176-4c89-b667-da3642203ac0 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:12:77,bridge_name='br-int',has_traffic_filtering=True,id=adedd23a-8a64-4167-a4ad-7d586b717068,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapadedd23a-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:13:02 np0005539552 nova_compute[233724]: 2025-11-29 08:13:02.732 233728 DEBUG os_vif [None req-12a3b353-a176-4c89-b667-da3642203ac0 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:12:77,bridge_name='br-int',has_traffic_filtering=True,id=adedd23a-8a64-4167-a4ad-7d586b717068,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapadedd23a-8a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:13:02 np0005539552 nova_compute[233724]: 2025-11-29 08:13:02.733 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:02 np0005539552 nova_compute[233724]: 2025-11-29 08:13:02.733 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:02 np0005539552 nova_compute[233724]: 2025-11-29 08:13:02.733 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:13:02 np0005539552 nova_compute[233724]: 2025-11-29 08:13:02.736 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:02 np0005539552 nova_compute[233724]: 2025-11-29 08:13:02.736 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapadedd23a-8a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:02 np0005539552 nova_compute[233724]: 2025-11-29 08:13:02.737 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapadedd23a-8a, col_values=(('external_ids', {'iface-id': 'adedd23a-8a64-4167-a4ad-7d586b717068', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:48:12:77', 'vm-uuid': '7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:02 np0005539552 nova_compute[233724]: 2025-11-29 08:13:02.738 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:02 np0005539552 NetworkManager[48926]: <info>  [1764403982.7394] manager: (tapadedd23a-8a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/147)
Nov 29 03:13:02 np0005539552 nova_compute[233724]: 2025-11-29 08:13:02.739 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:13:02 np0005539552 nova_compute[233724]: 2025-11-29 08:13:02.745 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:02 np0005539552 nova_compute[233724]: 2025-11-29 08:13:02.746 233728 INFO os_vif [None req-12a3b353-a176-4c89-b667-da3642203ac0 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:12:77,bridge_name='br-int',has_traffic_filtering=True,id=adedd23a-8a64-4167-a4ad-7d586b717068,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapadedd23a-8a')#033[00m
Nov 29 03:13:02 np0005539552 nova_compute[233724]: 2025-11-29 08:13:02.746 233728 DEBUG nova.virt.libvirt.vif [None req-12a3b353-a176-4c89-b667-da3642203ac0 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:12:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1879739874',display_name='tempest-tempest.common.compute-instance-1879739874',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1879739874',id=86,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDEWXop04+uZLtvBsFcRaUa+Xwr7qm7TY0ta2r6fa1ieuV+DkD/NEV3POpVnY30H29RqMvmxXH2BxKbSYQ2SIJEBajPEkr5PX8Sjsh+mKC5/3A6DjsKy96CC8Vn8W5GT2A==',key_name='tempest-keypair-2040108904',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:12:33Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ff7c805d4242453aa2148a247956391d',ramdisk_id='',reservation_id='r-j2u2rbx1',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-372493183',owner_user_name='tempest-AttachInterfacesTestJSON-372493183-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:12:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b95b3e841be1420c99ee0a04dd0840f1',uuid=7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "adedd23a-8a64-4167-a4ad-7d586b717068", "address": "fa:16:3e:48:12:77", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapadedd23a-8a", "ovs_interfaceid": "adedd23a-8a64-4167-a4ad-7d586b717068", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:13:02 np0005539552 nova_compute[233724]: 2025-11-29 08:13:02.747 233728 DEBUG nova.network.os_vif_util [None req-12a3b353-a176-4c89-b667-da3642203ac0 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converting VIF {"id": "adedd23a-8a64-4167-a4ad-7d586b717068", "address": "fa:16:3e:48:12:77", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapadedd23a-8a", "ovs_interfaceid": "adedd23a-8a64-4167-a4ad-7d586b717068", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:13:02 np0005539552 nova_compute[233724]: 2025-11-29 08:13:02.747 233728 DEBUG nova.network.os_vif_util [None req-12a3b353-a176-4c89-b667-da3642203ac0 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:12:77,bridge_name='br-int',has_traffic_filtering=True,id=adedd23a-8a64-4167-a4ad-7d586b717068,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapadedd23a-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:13:02 np0005539552 nova_compute[233724]: 2025-11-29 08:13:02.751 233728 DEBUG nova.virt.libvirt.guest [None req-12a3b353-a176-4c89-b667-da3642203ac0 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] attach device xml: <interface type="ethernet">
Nov 29 03:13:02 np0005539552 nova_compute[233724]:  <mac address="fa:16:3e:48:12:77"/>
Nov 29 03:13:02 np0005539552 nova_compute[233724]:  <model type="virtio"/>
Nov 29 03:13:02 np0005539552 nova_compute[233724]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:13:02 np0005539552 nova_compute[233724]:  <mtu size="1442"/>
Nov 29 03:13:02 np0005539552 nova_compute[233724]:  <target dev="tapadedd23a-8a"/>
Nov 29 03:13:02 np0005539552 nova_compute[233724]: </interface>
Nov 29 03:13:02 np0005539552 nova_compute[233724]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 03:13:02 np0005539552 kernel: tapadedd23a-8a: entered promiscuous mode
Nov 29 03:13:02 np0005539552 NetworkManager[48926]: <info>  [1764403982.7693] manager: (tapadedd23a-8a): new Tun device (/org/freedesktop/NetworkManager/Devices/148)
Nov 29 03:13:02 np0005539552 ovn_controller[133798]: 2025-11-29T08:13:02Z|00277|binding|INFO|Claiming lport adedd23a-8a64-4167-a4ad-7d586b717068 for this chassis.
Nov 29 03:13:02 np0005539552 ovn_controller[133798]: 2025-11-29T08:13:02Z|00278|binding|INFO|adedd23a-8a64-4167-a4ad-7d586b717068: Claiming fa:16:3e:48:12:77 10.100.0.10
Nov 29 03:13:02 np0005539552 nova_compute[233724]: 2025-11-29 08:13:02.771 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:02.778 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:12:77 10.100.0.10'], port_security=['fa:16:3e:48:12:77 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1164795283', 'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1164795283', 'neutron:project_id': 'ff7c805d4242453aa2148a247956391d', 'neutron:revision_number': '7', 'neutron:security_group_ids': '837c5830-d55f-47dc-af7f-7cef5a2ab737', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5330ba90-719c-42ae-a31a-dd5fd1d240e2, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=adedd23a-8a64-4167-a4ad-7d586b717068) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:13:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:02.780 143400 INFO neutron.agent.ovn.metadata.agent [-] Port adedd23a-8a64-4167-a4ad-7d586b717068 in datapath ddd8b166-79ec-408d-b52c-581ad9dd6cb8 bound to our chassis#033[00m
Nov 29 03:13:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:02.782 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ddd8b166-79ec-408d-b52c-581ad9dd6cb8#033[00m
Nov 29 03:13:02 np0005539552 ovn_controller[133798]: 2025-11-29T08:13:02Z|00279|binding|INFO|Setting lport adedd23a-8a64-4167-a4ad-7d586b717068 ovn-installed in OVS
Nov 29 03:13:02 np0005539552 ovn_controller[133798]: 2025-11-29T08:13:02Z|00280|binding|INFO|Setting lport adedd23a-8a64-4167-a4ad-7d586b717068 up in Southbound
Nov 29 03:13:02 np0005539552 nova_compute[233724]: 2025-11-29 08:13:02.788 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:02 np0005539552 nova_compute[233724]: 2025-11-29 08:13:02.793 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:02.802 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4334953c-60ce-4eb1-bf69-d654c6c176f1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:02 np0005539552 systemd-udevd[268705]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:13:02 np0005539552 NetworkManager[48926]: <info>  [1764403982.8216] device (tapadedd23a-8a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:13:02 np0005539552 NetworkManager[48926]: <info>  [1764403982.8228] device (tapadedd23a-8a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:13:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:02.832 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[79f50d06-05de-4c3f-b226-c033c99fe3a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:02.836 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[b3e792aa-16d7-499f-b80a-c005787a0574]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:02 np0005539552 nova_compute[233724]: 2025-11-29 08:13:02.864 233728 DEBUG nova.virt.libvirt.driver [None req-12a3b353-a176-4c89-b667-da3642203ac0 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:13:02 np0005539552 nova_compute[233724]: 2025-11-29 08:13:02.864 233728 DEBUG nova.virt.libvirt.driver [None req-12a3b353-a176-4c89-b667-da3642203ac0 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:13:02 np0005539552 nova_compute[233724]: 2025-11-29 08:13:02.865 233728 DEBUG nova.virt.libvirt.driver [None req-12a3b353-a176-4c89-b667-da3642203ac0 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] No VIF found with MAC fa:16:3e:a8:eb:8c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:13:02 np0005539552 nova_compute[233724]: 2025-11-29 08:13:02.865 233728 DEBUG nova.virt.libvirt.driver [None req-12a3b353-a176-4c89-b667-da3642203ac0 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] No VIF found with MAC fa:16:3e:48:12:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:13:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:02.865 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[15632b34-70c7-4b79-8fa7-bfa4ba02068d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:02.881 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ef30afb6-c822-4e41-8736-534163e669f9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapddd8b166-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9d:35:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 782, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 782, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697138, 'reachable_time': 22900, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 5, 'outoctets': 376, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 5, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 376, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 5, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268712, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:02 np0005539552 nova_compute[233724]: 2025-11-29 08:13:02.895 233728 DEBUG nova.virt.libvirt.guest [None req-12a3b353-a176-4c89-b667-da3642203ac0 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:13:02 np0005539552 nova_compute[233724]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:13:02 np0005539552 nova_compute[233724]:  <nova:name>tempest-tempest.common.compute-instance-1879739874</nova:name>
Nov 29 03:13:02 np0005539552 nova_compute[233724]:  <nova:creationTime>2025-11-29 08:13:02</nova:creationTime>
Nov 29 03:13:02 np0005539552 nova_compute[233724]:  <nova:flavor name="m1.nano">
Nov 29 03:13:02 np0005539552 nova_compute[233724]:    <nova:memory>128</nova:memory>
Nov 29 03:13:02 np0005539552 nova_compute[233724]:    <nova:disk>1</nova:disk>
Nov 29 03:13:02 np0005539552 nova_compute[233724]:    <nova:swap>0</nova:swap>
Nov 29 03:13:02 np0005539552 nova_compute[233724]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:13:02 np0005539552 nova_compute[233724]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:13:02 np0005539552 nova_compute[233724]:  </nova:flavor>
Nov 29 03:13:02 np0005539552 nova_compute[233724]:  <nova:owner>
Nov 29 03:13:02 np0005539552 nova_compute[233724]:    <nova:user uuid="b95b3e841be1420c99ee0a04dd0840f1">tempest-AttachInterfacesTestJSON-372493183-project-member</nova:user>
Nov 29 03:13:02 np0005539552 nova_compute[233724]:    <nova:project uuid="ff7c805d4242453aa2148a247956391d">tempest-AttachInterfacesTestJSON-372493183</nova:project>
Nov 29 03:13:02 np0005539552 nova_compute[233724]:  </nova:owner>
Nov 29 03:13:02 np0005539552 nova_compute[233724]:  <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:13:02 np0005539552 nova_compute[233724]:  <nova:ports>
Nov 29 03:13:02 np0005539552 nova_compute[233724]:    <nova:port uuid="834e7d4e-9813-48de-88d0-2da712aaa996">
Nov 29 03:13:02 np0005539552 nova_compute[233724]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 03:13:02 np0005539552 nova_compute[233724]:    </nova:port>
Nov 29 03:13:02 np0005539552 nova_compute[233724]:    <nova:port uuid="adedd23a-8a64-4167-a4ad-7d586b717068">
Nov 29 03:13:02 np0005539552 nova_compute[233724]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 03:13:02 np0005539552 nova_compute[233724]:    </nova:port>
Nov 29 03:13:02 np0005539552 nova_compute[233724]:  </nova:ports>
Nov 29 03:13:02 np0005539552 nova_compute[233724]: </nova:instance>
Nov 29 03:13:02 np0005539552 nova_compute[233724]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 29 03:13:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:02.896 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[eca743bc-336d-42d0-b223-d7926abe38e6]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapddd8b166-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 697149, 'tstamp': 697149}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268713, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapddd8b166-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 697151, 'tstamp': 697151}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268713, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:02.899 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapddd8b166-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:02 np0005539552 nova_compute[233724]: 2025-11-29 08:13:02.901 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:02.902 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapddd8b166-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:02.902 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:13:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:02.902 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapddd8b166-70, col_values=(('external_ids', {'iface-id': 'a9e57abf-e3e4-455b-b4c5-0cda127bd5c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:02.903 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:13:02 np0005539552 nova_compute[233724]: 2025-11-29 08:13:02.921 233728 DEBUG oslo_concurrency.lockutils [None req-12a3b353-a176-4c89-b667-da3642203ac0 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "interface-7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b-adedd23a-8a64-4167-a4ad-7d586b717068" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 7.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 03:13:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:03.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 03:13:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:03.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:03 np0005539552 nova_compute[233724]: 2025-11-29 08:13:03.603 233728 DEBUG nova.compute.manager [req-03ee3d96-a3bd-4cdd-8a66-59c72017d1f4 req-e5c741f3-eecf-4a16-93d9-14ca74913bf2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Received event network-vif-plugged-adedd23a-8a64-4167-a4ad-7d586b717068 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:13:03 np0005539552 nova_compute[233724]: 2025-11-29 08:13:03.603 233728 DEBUG oslo_concurrency.lockutils [req-03ee3d96-a3bd-4cdd-8a66-59c72017d1f4 req-e5c741f3-eecf-4a16-93d9-14ca74913bf2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:03 np0005539552 nova_compute[233724]: 2025-11-29 08:13:03.604 233728 DEBUG oslo_concurrency.lockutils [req-03ee3d96-a3bd-4cdd-8a66-59c72017d1f4 req-e5c741f3-eecf-4a16-93d9-14ca74913bf2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:03 np0005539552 nova_compute[233724]: 2025-11-29 08:13:03.604 233728 DEBUG oslo_concurrency.lockutils [req-03ee3d96-a3bd-4cdd-8a66-59c72017d1f4 req-e5c741f3-eecf-4a16-93d9-14ca74913bf2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:03 np0005539552 nova_compute[233724]: 2025-11-29 08:13:03.604 233728 DEBUG nova.compute.manager [req-03ee3d96-a3bd-4cdd-8a66-59c72017d1f4 req-e5c741f3-eecf-4a16-93d9-14ca74913bf2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] No waiting events found dispatching network-vif-plugged-adedd23a-8a64-4167-a4ad-7d586b717068 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:13:03 np0005539552 nova_compute[233724]: 2025-11-29 08:13:03.604 233728 WARNING nova.compute.manager [req-03ee3d96-a3bd-4cdd-8a66-59c72017d1f4 req-e5c741f3-eecf-4a16-93d9-14ca74913bf2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Received unexpected event network-vif-plugged-adedd23a-8a64-4167-a4ad-7d586b717068 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:13:04 np0005539552 ovn_controller[133798]: 2025-11-29T08:13:04Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:48:12:77 10.100.0.10
Nov 29 03:13:04 np0005539552 ovn_controller[133798]: 2025-11-29T08:13:04Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:48:12:77 10.100.0.10
Nov 29 03:13:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:05.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:05 np0005539552 nova_compute[233724]: 2025-11-29 08:13:05.111 233728 DEBUG oslo_concurrency.lockutils [None req-e03d39ae-c60a-41ab-a0de-637c663edbee b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "interface-7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b-adedd23a-8a64-4167-a4ad-7d586b717068" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:05 np0005539552 nova_compute[233724]: 2025-11-29 08:13:05.112 233728 DEBUG oslo_concurrency.lockutils [None req-e03d39ae-c60a-41ab-a0de-637c663edbee b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "interface-7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b-adedd23a-8a64-4167-a4ad-7d586b717068" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:05 np0005539552 nova_compute[233724]: 2025-11-29 08:13:05.130 233728 DEBUG nova.objects.instance [None req-e03d39ae-c60a-41ab-a0de-637c663edbee b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lazy-loading 'flavor' on Instance uuid 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:13:05 np0005539552 nova_compute[233724]: 2025-11-29 08:13:05.155 233728 DEBUG nova.virt.libvirt.vif [None req-e03d39ae-c60a-41ab-a0de-637c663edbee b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:12:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1879739874',display_name='tempest-tempest.common.compute-instance-1879739874',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1879739874',id=86,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDEWXop04+uZLtvBsFcRaUa+Xwr7qm7TY0ta2r6fa1ieuV+DkD/NEV3POpVnY30H29RqMvmxXH2BxKbSYQ2SIJEBajPEkr5PX8Sjsh+mKC5/3A6DjsKy96CC8Vn8W5GT2A==',key_name='tempest-keypair-2040108904',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:12:33Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ff7c805d4242453aa2148a247956391d',ramdisk_id='',reservation_id='r-j2u2rbx1',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-372493183',owner_user_name='tempest-AttachInterfacesTestJSON-372493183-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:12:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b95b3e841be1420c99ee0a04dd0840f1',uuid=7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "adedd23a-8a64-4167-a4ad-7d586b717068", "address": "fa:16:3e:48:12:77", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapadedd23a-8a", "ovs_interfaceid": "adedd23a-8a64-4167-a4ad-7d586b717068", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:13:05 np0005539552 nova_compute[233724]: 2025-11-29 08:13:05.156 233728 DEBUG nova.network.os_vif_util [None req-e03d39ae-c60a-41ab-a0de-637c663edbee b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converting VIF {"id": "adedd23a-8a64-4167-a4ad-7d586b717068", "address": "fa:16:3e:48:12:77", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapadedd23a-8a", "ovs_interfaceid": "adedd23a-8a64-4167-a4ad-7d586b717068", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:13:05 np0005539552 nova_compute[233724]: 2025-11-29 08:13:05.156 233728 DEBUG nova.network.os_vif_util [None req-e03d39ae-c60a-41ab-a0de-637c663edbee b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:12:77,bridge_name='br-int',has_traffic_filtering=True,id=adedd23a-8a64-4167-a4ad-7d586b717068,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapadedd23a-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:13:05 np0005539552 nova_compute[233724]: 2025-11-29 08:13:05.160 233728 DEBUG nova.virt.libvirt.guest [None req-e03d39ae-c60a-41ab-a0de-637c663edbee b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:48:12:77"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapadedd23a-8a"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 29 03:13:05 np0005539552 nova_compute[233724]: 2025-11-29 08:13:05.163 233728 DEBUG nova.virt.libvirt.guest [None req-e03d39ae-c60a-41ab-a0de-637c663edbee b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:48:12:77"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapadedd23a-8a"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 29 03:13:05 np0005539552 nova_compute[233724]: 2025-11-29 08:13:05.165 233728 DEBUG nova.virt.libvirt.driver [None req-e03d39ae-c60a-41ab-a0de-637c663edbee b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Attempting to detach device tapadedd23a-8a from instance 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:13:05 np0005539552 nova_compute[233724]: 2025-11-29 08:13:05.166 233728 DEBUG nova.virt.libvirt.guest [None req-e03d39ae-c60a-41ab-a0de-637c663edbee b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] detach device xml: <interface type="ethernet">
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <mac address="fa:16:3e:48:12:77"/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <model type="virtio"/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <mtu size="1442"/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <target dev="tapadedd23a-8a"/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]: </interface>
Nov 29 03:13:05 np0005539552 nova_compute[233724]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:13:05 np0005539552 nova_compute[233724]: 2025-11-29 08:13:05.175 233728 DEBUG nova.virt.libvirt.guest [None req-e03d39ae-c60a-41ab-a0de-637c663edbee b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:48:12:77"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapadedd23a-8a"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 29 03:13:05 np0005539552 nova_compute[233724]: 2025-11-29 08:13:05.179 233728 DEBUG nova.virt.libvirt.guest [None req-e03d39ae-c60a-41ab-a0de-637c663edbee b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:48:12:77"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapadedd23a-8a"/></interface>not found in domain: <domain type='kvm' id='32'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <name>instance-00000056</name>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <uuid>7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b</uuid>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <nova:name>tempest-tempest.common.compute-instance-1879739874</nova:name>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <nova:creationTime>2025-11-29 08:13:02</nova:creationTime>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <nova:flavor name="m1.nano">
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <nova:memory>128</nova:memory>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <nova:disk>1</nova:disk>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <nova:swap>0</nova:swap>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  </nova:flavor>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <nova:owner>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <nova:user uuid="b95b3e841be1420c99ee0a04dd0840f1">tempest-AttachInterfacesTestJSON-372493183-project-member</nova:user>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <nova:project uuid="ff7c805d4242453aa2148a247956391d">tempest-AttachInterfacesTestJSON-372493183</nova:project>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  </nova:owner>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <nova:ports>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <nova:port uuid="834e7d4e-9813-48de-88d0-2da712aaa996">
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </nova:port>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <nova:port uuid="adedd23a-8a64-4167-a4ad-7d586b717068">
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </nova:port>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  </nova:ports>
Nov 29 03:13:05 np0005539552 nova_compute[233724]: </nova:instance>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <memory unit='KiB'>131072</memory>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <vcpu placement='static'>1</vcpu>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <resource>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <partition>/machine</partition>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  </resource>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <sysinfo type='smbios'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <entry name='manufacturer'>RDO</entry>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <entry name='product'>OpenStack Compute</entry>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <entry name='serial'>7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b</entry>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <entry name='uuid'>7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b</entry>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <entry name='family'>Virtual Machine</entry>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <boot dev='hd'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <smbios mode='sysinfo'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <vmcoreinfo state='on'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <cpu mode='custom' match='exact' check='full'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <model fallback='forbid'>Nehalem</model>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <feature policy='require' name='x2apic'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <feature policy='require' name='hypervisor'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <feature policy='require' name='vme'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <clock offset='utc'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <timer name='pit' tickpolicy='delay'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <timer name='hpet' present='no'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <on_poweroff>destroy</on_poweroff>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <on_reboot>restart</on_reboot>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <on_crash>destroy</on_crash>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <disk type='network' device='disk'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <auth username='openstack'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:        <secret type='ceph' uuid='b66774a7-56d9-5535-bd8c-681234404870'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <source protocol='rbd' name='vms/7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b_disk' index='2'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target dev='vda' bus='virtio'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='virtio-disk0'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <disk type='network' device='cdrom'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <auth username='openstack'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:        <secret type='ceph' uuid='b66774a7-56d9-5535-bd8c-681234404870'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <source protocol='rbd' name='vms/7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b_disk.config' index='1'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target dev='sda' bus='sata'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <readonly/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='sata0-0-0'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='0' model='pcie-root'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pcie.0'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='1' port='0x10'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.1'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='2' port='0x11'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.2'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='3' port='0x12'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.3'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='4' port='0x13'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.4'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='5' port='0x14'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.5'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='6' port='0x15'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.6'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='7' port='0x16'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.7'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='8' port='0x17'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.8'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='9' port='0x18'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.9'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='10' port='0x19'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.10'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='11' port='0x1a'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.11'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='12' port='0x1b'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.12'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='13' port='0x1c'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.13'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='14' port='0x1d'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.14'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='15' port='0x1e'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.15'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='16' port='0x1f'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.16'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='17' port='0x20'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.17'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='18' port='0x21'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.18'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='19' port='0x22'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.19'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='20' port='0x23'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.20'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='21' port='0x24'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.21'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='22' port='0x25'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.22'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='23' port='0x26'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.23'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='24' port='0x27'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.24'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='25' port='0x28'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.25'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-pci-bridge'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.26'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='usb'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='sata' index='0'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='ide'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <interface type='ethernet'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <mac address='fa:16:3e:a8:eb:8c'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target dev='tap834e7d4e-98'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model type='virtio'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <mtu size='1442'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='net0'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <interface type='ethernet'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <mac address='fa:16:3e:48:12:77'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target dev='tapadedd23a-8a'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model type='virtio'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <mtu size='1442'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='net1'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <serial type='pty'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <source path='/dev/pts/1'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <log file='/var/lib/nova/instances/7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b/console.log' append='off'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target type='isa-serial' port='0'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:        <model name='isa-serial'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      </target>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='serial0'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <console type='pty' tty='/dev/pts/1'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <source path='/dev/pts/1'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <log file='/var/lib/nova/instances/7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b/console.log' append='off'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target type='serial' port='0'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='serial0'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </console>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <input type='tablet' bus='usb'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='input0'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='usb' bus='0' port='1'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </input>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <input type='mouse' bus='ps2'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='input1'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </input>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <input type='keyboard' bus='ps2'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='input2'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </input>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <listen type='address' address='::0'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </graphics>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <audio id='1' type='none'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model type='virtio' heads='1' primary='yes'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='video0'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <watchdog model='itco' action='reset'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='watchdog0'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </watchdog>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <memballoon model='virtio'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <stats period='10'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='balloon0'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <rng model='virtio'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <backend model='random'>/dev/urandom</backend>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='rng0'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <label>system_u:system_r:svirt_t:s0:c13,c74</label>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c13,c74</imagelabel>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  </seclabel>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <label>+107:+107</label>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <imagelabel>+107:+107</imagelabel>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  </seclabel>
Nov 29 03:13:05 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:13:05 np0005539552 nova_compute[233724]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 29 03:13:05 np0005539552 nova_compute[233724]: 2025-11-29 08:13:05.179 233728 INFO nova.virt.libvirt.driver [None req-e03d39ae-c60a-41ab-a0de-637c663edbee b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Successfully detached device tapadedd23a-8a from instance 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b from the persistent domain config.#033[00m
Nov 29 03:13:05 np0005539552 nova_compute[233724]: 2025-11-29 08:13:05.180 233728 DEBUG nova.virt.libvirt.driver [None req-e03d39ae-c60a-41ab-a0de-637c663edbee b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] (1/8): Attempting to detach device tapadedd23a-8a with device alias net1 from instance 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 29 03:13:05 np0005539552 nova_compute[233724]: 2025-11-29 08:13:05.180 233728 DEBUG nova.virt.libvirt.guest [None req-e03d39ae-c60a-41ab-a0de-637c663edbee b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] detach device xml: <interface type="ethernet">
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <mac address="fa:16:3e:48:12:77"/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <model type="virtio"/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <mtu size="1442"/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <target dev="tapadedd23a-8a"/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]: </interface>
Nov 29 03:13:05 np0005539552 nova_compute[233724]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:13:05 np0005539552 kernel: tapadedd23a-8a (unregistering): left promiscuous mode
Nov 29 03:13:05 np0005539552 NetworkManager[48926]: <info>  [1764403985.2909] device (tapadedd23a-8a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:13:05 np0005539552 ovn_controller[133798]: 2025-11-29T08:13:05Z|00281|binding|INFO|Releasing lport adedd23a-8a64-4167-a4ad-7d586b717068 from this chassis (sb_readonly=0)
Nov 29 03:13:05 np0005539552 ovn_controller[133798]: 2025-11-29T08:13:05Z|00282|binding|INFO|Setting lport adedd23a-8a64-4167-a4ad-7d586b717068 down in Southbound
Nov 29 03:13:05 np0005539552 nova_compute[233724]: 2025-11-29 08:13:05.293 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:05 np0005539552 ovn_controller[133798]: 2025-11-29T08:13:05Z|00283|binding|INFO|Removing iface tapadedd23a-8a ovn-installed in OVS
Nov 29 03:13:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:05.297 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:12:77 10.100.0.10'], port_security=['fa:16:3e:48:12:77 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1164795283', 'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1164795283', 'neutron:project_id': 'ff7c805d4242453aa2148a247956391d', 'neutron:revision_number': '9', 'neutron:security_group_ids': '837c5830-d55f-47dc-af7f-7cef5a2ab737', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5330ba90-719c-42ae-a31a-dd5fd1d240e2, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=adedd23a-8a64-4167-a4ad-7d586b717068) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:13:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:05.298 143400 INFO neutron.agent.ovn.metadata.agent [-] Port adedd23a-8a64-4167-a4ad-7d586b717068 in datapath ddd8b166-79ec-408d-b52c-581ad9dd6cb8 unbound from our chassis#033[00m
Nov 29 03:13:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:05.300 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ddd8b166-79ec-408d-b52c-581ad9dd6cb8#033[00m
Nov 29 03:13:05 np0005539552 nova_compute[233724]: 2025-11-29 08:13:05.302 233728 DEBUG nova.virt.libvirt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Received event <DeviceRemovedEvent: 1764403985.3022008, 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 29 03:13:05 np0005539552 nova_compute[233724]: 2025-11-29 08:13:05.306 233728 DEBUG nova.virt.libvirt.driver [None req-e03d39ae-c60a-41ab-a0de-637c663edbee b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Start waiting for the detach event from libvirt for device tapadedd23a-8a with device alias net1 for instance 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 29 03:13:05 np0005539552 nova_compute[233724]: 2025-11-29 08:13:05.307 233728 DEBUG nova.virt.libvirt.guest [None req-e03d39ae-c60a-41ab-a0de-637c663edbee b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:48:12:77"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapadedd23a-8a"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 29 03:13:05 np0005539552 nova_compute[233724]: 2025-11-29 08:13:05.311 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:05 np0005539552 nova_compute[233724]: 2025-11-29 08:13:05.312 233728 DEBUG nova.virt.libvirt.guest [None req-e03d39ae-c60a-41ab-a0de-637c663edbee b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:48:12:77"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapadedd23a-8a"/></interface>not found in domain: <domain type='kvm' id='32'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <name>instance-00000056</name>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <uuid>7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b</uuid>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <nova:name>tempest-tempest.common.compute-instance-1879739874</nova:name>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <nova:creationTime>2025-11-29 08:13:02</nova:creationTime>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <nova:flavor name="m1.nano">
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <nova:memory>128</nova:memory>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <nova:disk>1</nova:disk>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <nova:swap>0</nova:swap>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  </nova:flavor>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <nova:owner>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <nova:user uuid="b95b3e841be1420c99ee0a04dd0840f1">tempest-AttachInterfacesTestJSON-372493183-project-member</nova:user>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <nova:project uuid="ff7c805d4242453aa2148a247956391d">tempest-AttachInterfacesTestJSON-372493183</nova:project>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  </nova:owner>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <nova:ports>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <nova:port uuid="834e7d4e-9813-48de-88d0-2da712aaa996">
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </nova:port>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <nova:port uuid="adedd23a-8a64-4167-a4ad-7d586b717068">
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </nova:port>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  </nova:ports>
Nov 29 03:13:05 np0005539552 nova_compute[233724]: </nova:instance>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <memory unit='KiB'>131072</memory>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <vcpu placement='static'>1</vcpu>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <resource>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <partition>/machine</partition>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  </resource>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <sysinfo type='smbios'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <entry name='manufacturer'>RDO</entry>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <entry name='product'>OpenStack Compute</entry>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <entry name='serial'>7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b</entry>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <entry name='uuid'>7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b</entry>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <entry name='family'>Virtual Machine</entry>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <boot dev='hd'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <smbios mode='sysinfo'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <vmcoreinfo state='on'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <cpu mode='custom' match='exact' check='full'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <model fallback='forbid'>Nehalem</model>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <feature policy='require' name='x2apic'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <feature policy='require' name='hypervisor'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <feature policy='require' name='vme'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <clock offset='utc'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <timer name='pit' tickpolicy='delay'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <timer name='hpet' present='no'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <on_poweroff>destroy</on_poweroff>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <on_reboot>restart</on_reboot>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <on_crash>destroy</on_crash>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <disk type='network' device='disk'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <auth username='openstack'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:        <secret type='ceph' uuid='b66774a7-56d9-5535-bd8c-681234404870'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <source protocol='rbd' name='vms/7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b_disk' index='2'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target dev='vda' bus='virtio'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='virtio-disk0'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <disk type='network' device='cdrom'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <auth username='openstack'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:        <secret type='ceph' uuid='b66774a7-56d9-5535-bd8c-681234404870'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <source protocol='rbd' name='vms/7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b_disk.config' index='1'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target dev='sda' bus='sata'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <readonly/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='sata0-0-0'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='0' model='pcie-root'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pcie.0'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='1' port='0x10'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.1'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='2' port='0x11'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.2'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='3' port='0x12'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.3'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='4' port='0x13'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.4'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='5' port='0x14'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.5'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='6' port='0x15'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.6'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='7' port='0x16'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.7'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='8' port='0x17'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.8'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='9' port='0x18'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.9'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='10' port='0x19'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.10'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='11' port='0x1a'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.11'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='12' port='0x1b'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.12'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='13' port='0x1c'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.13'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='14' port='0x1d'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.14'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='15' port='0x1e'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.15'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='16' port='0x1f'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.16'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='17' port='0x20'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.17'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='18' port='0x21'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.18'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='19' port='0x22'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.19'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='20' port='0x23'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.20'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='21' port='0x24'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.21'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='22' port='0x25'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.22'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='23' port='0x26'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.23'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='24' port='0x27'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.24'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target chassis='25' port='0x28'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.25'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model name='pcie-pci-bridge'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='pci.26'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='usb'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <controller type='sata' index='0'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='ide'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <interface type='ethernet'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <mac address='fa:16:3e:a8:eb:8c'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target dev='tap834e7d4e-98'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model type='virtio'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <mtu size='1442'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='net0'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <serial type='pty'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <source path='/dev/pts/1'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <log file='/var/lib/nova/instances/7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b/console.log' append='off'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target type='isa-serial' port='0'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:        <model name='isa-serial'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      </target>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='serial0'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <console type='pty' tty='/dev/pts/1'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <source path='/dev/pts/1'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <log file='/var/lib/nova/instances/7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b/console.log' append='off'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <target type='serial' port='0'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='serial0'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </console>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <input type='tablet' bus='usb'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='input0'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='usb' bus='0' port='1'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </input>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <input type='mouse' bus='ps2'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='input1'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </input>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <input type='keyboard' bus='ps2'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='input2'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </input>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <listen type='address' address='::0'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </graphics>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <audio id='1' type='none'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <model type='virtio' heads='1' primary='yes'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='video0'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <watchdog model='itco' action='reset'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='watchdog0'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </watchdog>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <memballoon model='virtio'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <stats period='10'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='balloon0'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <rng model='virtio'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <backend model='random'>/dev/urandom</backend>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <alias name='rng0'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <label>system_u:system_r:svirt_t:s0:c13,c74</label>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c13,c74</imagelabel>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  </seclabel>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <label>+107:+107</label>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <imagelabel>+107:+107</imagelabel>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  </seclabel>
Nov 29 03:13:05 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:13:05 np0005539552 nova_compute[233724]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 29 03:13:05 np0005539552 nova_compute[233724]: 2025-11-29 08:13:05.312 233728 INFO nova.virt.libvirt.driver [None req-e03d39ae-c60a-41ab-a0de-637c663edbee b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Successfully detached device tapadedd23a-8a from instance 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b from the live domain config.
Nov 29 03:13:05 np0005539552 nova_compute[233724]: 2025-11-29 08:13:05.313 233728 DEBUG nova.virt.libvirt.vif [None req-e03d39ae-c60a-41ab-a0de-637c663edbee b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:12:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1879739874',display_name='tempest-tempest.common.compute-instance-1879739874',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1879739874',id=86,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDEWXop04+uZLtvBsFcRaUa+Xwr7qm7TY0ta2r6fa1ieuV+DkD/NEV3POpVnY30H29RqMvmxXH2BxKbSYQ2SIJEBajPEkr5PX8Sjsh+mKC5/3A6DjsKy96CC8Vn8W5GT2A==',key_name='tempest-keypair-2040108904',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:12:33Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ff7c805d4242453aa2148a247956391d',ramdisk_id='',reservation_id='r-j2u2rbx1',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-372493183',owner_user_name='tempest-AttachInterfacesTestJSON-372493183-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:12:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b95b3e841be1420c99ee0a04dd0840f1',uuid=7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "adedd23a-8a64-4167-a4ad-7d586b717068", "address": "fa:16:3e:48:12:77", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapadedd23a-8a", "ovs_interfaceid": "adedd23a-8a64-4167-a4ad-7d586b717068", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:13:05 np0005539552 nova_compute[233724]: 2025-11-29 08:13:05.314 233728 DEBUG nova.network.os_vif_util [None req-e03d39ae-c60a-41ab-a0de-637c663edbee b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converting VIF {"id": "adedd23a-8a64-4167-a4ad-7d586b717068", "address": "fa:16:3e:48:12:77", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapadedd23a-8a", "ovs_interfaceid": "adedd23a-8a64-4167-a4ad-7d586b717068", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:13:05 np0005539552 nova_compute[233724]: 2025-11-29 08:13:05.314 233728 DEBUG nova.network.os_vif_util [None req-e03d39ae-c60a-41ab-a0de-637c663edbee b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:12:77,bridge_name='br-int',has_traffic_filtering=True,id=adedd23a-8a64-4167-a4ad-7d586b717068,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapadedd23a-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:13:05 np0005539552 nova_compute[233724]: 2025-11-29 08:13:05.315 233728 DEBUG os_vif [None req-e03d39ae-c60a-41ab-a0de-637c663edbee b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:12:77,bridge_name='br-int',has_traffic_filtering=True,id=adedd23a-8a64-4167-a4ad-7d586b717068,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapadedd23a-8a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:13:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:05.316 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e373819c-deca-4007-9d14-af9e5abb23d3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:05 np0005539552 nova_compute[233724]: 2025-11-29 08:13:05.316 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:05 np0005539552 nova_compute[233724]: 2025-11-29 08:13:05.317 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapadedd23a-8a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:05 np0005539552 nova_compute[233724]: 2025-11-29 08:13:05.320 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:05 np0005539552 nova_compute[233724]: 2025-11-29 08:13:05.322 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:13:05 np0005539552 nova_compute[233724]: 2025-11-29 08:13:05.325 233728 INFO os_vif [None req-e03d39ae-c60a-41ab-a0de-637c663edbee b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:12:77,bridge_name='br-int',has_traffic_filtering=True,id=adedd23a-8a64-4167-a4ad-7d586b717068,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapadedd23a-8a')#033[00m
Nov 29 03:13:05 np0005539552 nova_compute[233724]: 2025-11-29 08:13:05.326 233728 DEBUG nova.virt.libvirt.guest [None req-e03d39ae-c60a-41ab-a0de-637c663edbee b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <nova:name>tempest-tempest.common.compute-instance-1879739874</nova:name>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <nova:creationTime>2025-11-29 08:13:05</nova:creationTime>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <nova:flavor name="m1.nano">
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <nova:memory>128</nova:memory>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <nova:disk>1</nova:disk>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <nova:swap>0</nova:swap>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  </nova:flavor>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <nova:owner>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <nova:user uuid="b95b3e841be1420c99ee0a04dd0840f1">tempest-AttachInterfacesTestJSON-372493183-project-member</nova:user>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <nova:project uuid="ff7c805d4242453aa2148a247956391d">tempest-AttachInterfacesTestJSON-372493183</nova:project>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  </nova:owner>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  <nova:ports>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    <nova:port uuid="834e7d4e-9813-48de-88d0-2da712aaa996">
Nov 29 03:13:05 np0005539552 nova_compute[233724]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:    </nova:port>
Nov 29 03:13:05 np0005539552 nova_compute[233724]:  </nova:ports>
Nov 29 03:13:05 np0005539552 nova_compute[233724]: </nova:instance>
Nov 29 03:13:05 np0005539552 nova_compute[233724]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 29 03:13:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:05.347 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[d7e1b390-284c-4ca5-9377-fd25d8e2ec0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:05.350 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[50b31e9c-bd6f-4ca8-a620-3c9856761f1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:05.381 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[57cae095-d76d-4cf5-bb61-54e0f6f1648a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:05.399 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[238cea20-bbbc-4c54-a3da-925f900f28dd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapddd8b166-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9d:35:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 866, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 866, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697138, 'reachable_time': 22900, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 5, 'outoctets': 376, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 5, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 376, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 5, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268725, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:05.414 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c0bb6afe-bde5-4109-b6a9-a6defeef50d1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapddd8b166-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 697149, 'tstamp': 697149}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268726, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapddd8b166-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 697151, 'tstamp': 697151}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268726, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:05.416 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapddd8b166-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:05 np0005539552 nova_compute[233724]: 2025-11-29 08:13:05.418 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:05.419 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapddd8b166-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:05 np0005539552 nova_compute[233724]: 2025-11-29 08:13:05.419 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:05.419 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:13:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:05.419 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapddd8b166-70, col_values=(('external_ids', {'iface-id': 'a9e57abf-e3e4-455b-b4c5-0cda127bd5c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:05.420 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:13:05 np0005539552 nova_compute[233724]: 2025-11-29 08:13:05.445 233728 DEBUG nova.network.neutron [req-a7d05c68-10af-46bd-857c-97aa0d41507e req-79ad071b-304f-499b-ad87-5123acb85999 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Updated VIF entry in instance network info cache for port adedd23a-8a64-4167-a4ad-7d586b717068. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:13:05 np0005539552 nova_compute[233724]: 2025-11-29 08:13:05.445 233728 DEBUG nova.network.neutron [req-a7d05c68-10af-46bd-857c-97aa0d41507e req-79ad071b-304f-499b-ad87-5123acb85999 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Updating instance_info_cache with network_info: [{"id": "834e7d4e-9813-48de-88d0-2da712aaa996", "address": "fa:16:3e:a8:eb:8c", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap834e7d4e-98", "ovs_interfaceid": "834e7d4e-9813-48de-88d0-2da712aaa996", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "adedd23a-8a64-4167-a4ad-7d586b717068", "address": "fa:16:3e:48:12:77", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapadedd23a-8a", "ovs_interfaceid": "adedd23a-8a64-4167-a4ad-7d586b717068", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:13:05 np0005539552 nova_compute[233724]: 2025-11-29 08:13:05.466 233728 DEBUG oslo_concurrency.lockutils [req-a7d05c68-10af-46bd-857c-97aa0d41507e req-79ad071b-304f-499b-ad87-5123acb85999 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:13:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:05.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:05 np0005539552 nova_compute[233724]: 2025-11-29 08:13:05.705 233728 DEBUG nova.compute.manager [req-e7297c41-2b83-4717-a2d9-b263b4bd2413 req-5b10b12b-3471-447f-842d-549d73b32739 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Received event network-vif-plugged-adedd23a-8a64-4167-a4ad-7d586b717068 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:13:05 np0005539552 nova_compute[233724]: 2025-11-29 08:13:05.706 233728 DEBUG oslo_concurrency.lockutils [req-e7297c41-2b83-4717-a2d9-b263b4bd2413 req-5b10b12b-3471-447f-842d-549d73b32739 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:05 np0005539552 nova_compute[233724]: 2025-11-29 08:13:05.706 233728 DEBUG oslo_concurrency.lockutils [req-e7297c41-2b83-4717-a2d9-b263b4bd2413 req-5b10b12b-3471-447f-842d-549d73b32739 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:05 np0005539552 nova_compute[233724]: 2025-11-29 08:13:05.706 233728 DEBUG oslo_concurrency.lockutils [req-e7297c41-2b83-4717-a2d9-b263b4bd2413 req-5b10b12b-3471-447f-842d-549d73b32739 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:05 np0005539552 nova_compute[233724]: 2025-11-29 08:13:05.706 233728 DEBUG nova.compute.manager [req-e7297c41-2b83-4717-a2d9-b263b4bd2413 req-5b10b12b-3471-447f-842d-549d73b32739 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] No waiting events found dispatching network-vif-plugged-adedd23a-8a64-4167-a4ad-7d586b717068 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:13:05 np0005539552 nova_compute[233724]: 2025-11-29 08:13:05.707 233728 WARNING nova.compute.manager [req-e7297c41-2b83-4717-a2d9-b263b4bd2413 req-5b10b12b-3471-447f-842d-549d73b32739 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Received unexpected event network-vif-plugged-adedd23a-8a64-4167-a4ad-7d586b717068 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:13:05 np0005539552 nova_compute[233724]: 2025-11-29 08:13:05.871 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:06 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e283 e283: 3 total, 3 up, 3 in
Nov 29 03:13:07 np0005539552 nova_compute[233724]: 2025-11-29 08:13:07.048 233728 DEBUG oslo_concurrency.lockutils [None req-e03d39ae-c60a-41ab-a0de-637c663edbee b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "refresh_cache-7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:13:07 np0005539552 nova_compute[233724]: 2025-11-29 08:13:07.049 233728 DEBUG oslo_concurrency.lockutils [None req-e03d39ae-c60a-41ab-a0de-637c663edbee b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquired lock "refresh_cache-7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:13:07 np0005539552 nova_compute[233724]: 2025-11-29 08:13:07.049 233728 DEBUG nova.network.neutron [None req-e03d39ae-c60a-41ab-a0de-637c663edbee b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:13:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:07.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:07 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:07.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:07 np0005539552 nova_compute[233724]: 2025-11-29 08:13:07.808 233728 DEBUG nova.compute.manager [req-bc4f6967-1f14-4888-a225-26d3c5f9bb0a req-f8eb64b0-5687-48dc-bd45-c7cec5389ed6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Received event network-vif-unplugged-adedd23a-8a64-4167-a4ad-7d586b717068 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:13:07 np0005539552 nova_compute[233724]: 2025-11-29 08:13:07.808 233728 DEBUG oslo_concurrency.lockutils [req-bc4f6967-1f14-4888-a225-26d3c5f9bb0a req-f8eb64b0-5687-48dc-bd45-c7cec5389ed6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:07 np0005539552 nova_compute[233724]: 2025-11-29 08:13:07.808 233728 DEBUG oslo_concurrency.lockutils [req-bc4f6967-1f14-4888-a225-26d3c5f9bb0a req-f8eb64b0-5687-48dc-bd45-c7cec5389ed6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:07 np0005539552 nova_compute[233724]: 2025-11-29 08:13:07.809 233728 DEBUG oslo_concurrency.lockutils [req-bc4f6967-1f14-4888-a225-26d3c5f9bb0a req-f8eb64b0-5687-48dc-bd45-c7cec5389ed6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:07 np0005539552 nova_compute[233724]: 2025-11-29 08:13:07.809 233728 DEBUG nova.compute.manager [req-bc4f6967-1f14-4888-a225-26d3c5f9bb0a req-f8eb64b0-5687-48dc-bd45-c7cec5389ed6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] No waiting events found dispatching network-vif-unplugged-adedd23a-8a64-4167-a4ad-7d586b717068 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:13:07 np0005539552 nova_compute[233724]: 2025-11-29 08:13:07.809 233728 WARNING nova.compute.manager [req-bc4f6967-1f14-4888-a225-26d3c5f9bb0a req-f8eb64b0-5687-48dc-bd45-c7cec5389ed6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Received unexpected event network-vif-unplugged-adedd23a-8a64-4167-a4ad-7d586b717068 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:13:07 np0005539552 nova_compute[233724]: 2025-11-29 08:13:07.809 233728 DEBUG nova.compute.manager [req-bc4f6967-1f14-4888-a225-26d3c5f9bb0a req-f8eb64b0-5687-48dc-bd45-c7cec5389ed6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Received event network-vif-plugged-adedd23a-8a64-4167-a4ad-7d586b717068 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:13:07 np0005539552 nova_compute[233724]: 2025-11-29 08:13:07.810 233728 DEBUG oslo_concurrency.lockutils [req-bc4f6967-1f14-4888-a225-26d3c5f9bb0a req-f8eb64b0-5687-48dc-bd45-c7cec5389ed6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:07 np0005539552 nova_compute[233724]: 2025-11-29 08:13:07.810 233728 DEBUG oslo_concurrency.lockutils [req-bc4f6967-1f14-4888-a225-26d3c5f9bb0a req-f8eb64b0-5687-48dc-bd45-c7cec5389ed6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:07 np0005539552 nova_compute[233724]: 2025-11-29 08:13:07.810 233728 DEBUG oslo_concurrency.lockutils [req-bc4f6967-1f14-4888-a225-26d3c5f9bb0a req-f8eb64b0-5687-48dc-bd45-c7cec5389ed6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:07 np0005539552 nova_compute[233724]: 2025-11-29 08:13:07.810 233728 DEBUG nova.compute.manager [req-bc4f6967-1f14-4888-a225-26d3c5f9bb0a req-f8eb64b0-5687-48dc-bd45-c7cec5389ed6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] No waiting events found dispatching network-vif-plugged-adedd23a-8a64-4167-a4ad-7d586b717068 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:13:07 np0005539552 nova_compute[233724]: 2025-11-29 08:13:07.810 233728 WARNING nova.compute.manager [req-bc4f6967-1f14-4888-a225-26d3c5f9bb0a req-f8eb64b0-5687-48dc-bd45-c7cec5389ed6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Received unexpected event network-vif-plugged-adedd23a-8a64-4167-a4ad-7d586b717068 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:13:07 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e284 e284: 3 total, 3 up, 3 in
Nov 29 03:13:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e285 e285: 3 total, 3 up, 3 in
Nov 29 03:13:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:09.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:09.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:09 np0005539552 nova_compute[233724]: 2025-11-29 08:13:09.744 233728 INFO nova.network.neutron [None req-e03d39ae-c60a-41ab-a0de-637c663edbee b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Port adedd23a-8a64-4167-a4ad-7d586b717068 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Nov 29 03:13:09 np0005539552 nova_compute[233724]: 2025-11-29 08:13:09.745 233728 DEBUG nova.network.neutron [None req-e03d39ae-c60a-41ab-a0de-637c663edbee b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Updating instance_info_cache with network_info: [{"id": "834e7d4e-9813-48de-88d0-2da712aaa996", "address": "fa:16:3e:a8:eb:8c", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap834e7d4e-98", "ovs_interfaceid": "834e7d4e-9813-48de-88d0-2da712aaa996", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:13:09 np0005539552 nova_compute[233724]: 2025-11-29 08:13:09.766 233728 DEBUG oslo_concurrency.lockutils [None req-e03d39ae-c60a-41ab-a0de-637c663edbee b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Releasing lock "refresh_cache-7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:13:09 np0005539552 nova_compute[233724]: 2025-11-29 08:13:09.785 233728 DEBUG oslo_concurrency.lockutils [None req-e03d39ae-c60a-41ab-a0de-637c663edbee b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "interface-7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b-adedd23a-8a64-4167-a4ad-7d586b717068" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 4.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:09 np0005539552 nova_compute[233724]: 2025-11-29 08:13:09.785 233728 DEBUG oslo_concurrency.lockutils [None req-65b7571e-9584-4539-ba52-b5944511e2a9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:09 np0005539552 nova_compute[233724]: 2025-11-29 08:13:09.786 233728 DEBUG oslo_concurrency.lockutils [None req-65b7571e-9584-4539-ba52-b5944511e2a9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:09 np0005539552 nova_compute[233724]: 2025-11-29 08:13:09.786 233728 DEBUG oslo_concurrency.lockutils [None req-65b7571e-9584-4539-ba52-b5944511e2a9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:09 np0005539552 nova_compute[233724]: 2025-11-29 08:13:09.786 233728 DEBUG oslo_concurrency.lockutils [None req-65b7571e-9584-4539-ba52-b5944511e2a9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:09 np0005539552 nova_compute[233724]: 2025-11-29 08:13:09.786 233728 DEBUG oslo_concurrency.lockutils [None req-65b7571e-9584-4539-ba52-b5944511e2a9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:09 np0005539552 nova_compute[233724]: 2025-11-29 08:13:09.787 233728 INFO nova.compute.manager [None req-65b7571e-9584-4539-ba52-b5944511e2a9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Terminating instance#033[00m
Nov 29 03:13:09 np0005539552 nova_compute[233724]: 2025-11-29 08:13:09.788 233728 DEBUG nova.compute.manager [None req-65b7571e-9584-4539-ba52-b5944511e2a9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:13:09 np0005539552 kernel: tap834e7d4e-98 (unregistering): left promiscuous mode
Nov 29 03:13:09 np0005539552 NetworkManager[48926]: <info>  [1764403989.8542] device (tap834e7d4e-98): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:13:09 np0005539552 nova_compute[233724]: 2025-11-29 08:13:09.861 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:09 np0005539552 ovn_controller[133798]: 2025-11-29T08:13:09Z|00284|binding|INFO|Releasing lport 834e7d4e-9813-48de-88d0-2da712aaa996 from this chassis (sb_readonly=0)
Nov 29 03:13:09 np0005539552 ovn_controller[133798]: 2025-11-29T08:13:09Z|00285|binding|INFO|Setting lport 834e7d4e-9813-48de-88d0-2da712aaa996 down in Southbound
Nov 29 03:13:09 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e286 e286: 3 total, 3 up, 3 in
Nov 29 03:13:09 np0005539552 ovn_controller[133798]: 2025-11-29T08:13:09Z|00286|binding|INFO|Removing iface tap834e7d4e-98 ovn-installed in OVS
Nov 29 03:13:09 np0005539552 nova_compute[233724]: 2025-11-29 08:13:09.863 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:09.869 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:eb:8c 10.100.0.11'], port_security=['fa:16:3e:a8:eb:8c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff7c805d4242453aa2148a247956391d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1ece3be3-d42e-475f-bdcb-f996b12e4880', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.231'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5330ba90-719c-42ae-a31a-dd5fd1d240e2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=834e7d4e-9813-48de-88d0-2da712aaa996) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:13:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:09.870 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 834e7d4e-9813-48de-88d0-2da712aaa996 in datapath ddd8b166-79ec-408d-b52c-581ad9dd6cb8 unbound from our chassis#033[00m
Nov 29 03:13:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:09.872 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ddd8b166-79ec-408d-b52c-581ad9dd6cb8#033[00m
Nov 29 03:13:09 np0005539552 nova_compute[233724]: 2025-11-29 08:13:09.878 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:09.887 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d12e3403-bef2-40cd-a7e9-b42c357de0b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:09 np0005539552 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d00000056.scope: Deactivated successfully.
Nov 29 03:13:09 np0005539552 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d00000056.scope: Consumed 14.712s CPU time.
Nov 29 03:13:09 np0005539552 systemd-machined[196379]: Machine qemu-32-instance-00000056 terminated.
Nov 29 03:13:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:09.919 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[381436f9-201e-429e-8b94-3f0b03f8a97a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:09.922 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[c0bbdafd-e224-4b73-be2c-504a6a72bdd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:09.949 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[a3eb90dc-ebb8-4d8e-a59b-af4afe6badce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:09.964 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[43746d9e-c101-4115-9d91-e2fa2a2694f6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapddd8b166-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9d:35:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 17, 'rx_bytes': 700, 'tx_bytes': 950, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 17, 'rx_bytes': 700, 'tx_bytes': 950, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697138, 'reachable_time': 22900, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 5, 'outoctets': 376, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 5, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 376, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 5, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268742, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:09.979 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[29d809f2-10ef-4577-a2bc-e9473baccc37]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapddd8b166-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 697149, 'tstamp': 697149}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268743, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapddd8b166-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 697151, 'tstamp': 697151}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268743, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:09.980 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapddd8b166-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:09 np0005539552 nova_compute[233724]: 2025-11-29 08:13:09.981 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:09 np0005539552 nova_compute[233724]: 2025-11-29 08:13:09.985 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:09.985 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapddd8b166-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:09.986 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:13:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:09.986 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapddd8b166-70, col_values=(('external_ids', {'iface-id': 'a9e57abf-e3e4-455b-b4c5-0cda127bd5c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:09.986 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:13:10 np0005539552 nova_compute[233724]: 2025-11-29 08:13:10.020 233728 INFO nova.virt.libvirt.driver [-] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Instance destroyed successfully.#033[00m
Nov 29 03:13:10 np0005539552 nova_compute[233724]: 2025-11-29 08:13:10.021 233728 DEBUG nova.objects.instance [None req-65b7571e-9584-4539-ba52-b5944511e2a9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lazy-loading 'resources' on Instance uuid 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:13:10 np0005539552 nova_compute[233724]: 2025-11-29 08:13:10.034 233728 DEBUG nova.virt.libvirt.vif [None req-65b7571e-9584-4539-ba52-b5944511e2a9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:12:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1879739874',display_name='tempest-tempest.common.compute-instance-1879739874',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1879739874',id=86,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDEWXop04+uZLtvBsFcRaUa+Xwr7qm7TY0ta2r6fa1ieuV+DkD/NEV3POpVnY30H29RqMvmxXH2BxKbSYQ2SIJEBajPEkr5PX8Sjsh+mKC5/3A6DjsKy96CC8Vn8W5GT2A==',key_name='tempest-keypair-2040108904',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:12:33Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ff7c805d4242453aa2148a247956391d',ramdisk_id='',reservation_id='r-j2u2rbx1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-372493183',owner_user_name='tempest-AttachInterfacesTestJSON-372493183-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:12:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b95b3e841be1420c99ee0a04dd0840f1',uuid=7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "834e7d4e-9813-48de-88d0-2da712aaa996", "address": "fa:16:3e:a8:eb:8c", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap834e7d4e-98", "ovs_interfaceid": "834e7d4e-9813-48de-88d0-2da712aaa996", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:13:10 np0005539552 nova_compute[233724]: 2025-11-29 08:13:10.034 233728 DEBUG nova.network.os_vif_util [None req-65b7571e-9584-4539-ba52-b5944511e2a9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converting VIF {"id": "834e7d4e-9813-48de-88d0-2da712aaa996", "address": "fa:16:3e:a8:eb:8c", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap834e7d4e-98", "ovs_interfaceid": "834e7d4e-9813-48de-88d0-2da712aaa996", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:13:10 np0005539552 nova_compute[233724]: 2025-11-29 08:13:10.035 233728 DEBUG nova.network.os_vif_util [None req-65b7571e-9584-4539-ba52-b5944511e2a9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a8:eb:8c,bridge_name='br-int',has_traffic_filtering=True,id=834e7d4e-9813-48de-88d0-2da712aaa996,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap834e7d4e-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:13:10 np0005539552 nova_compute[233724]: 2025-11-29 08:13:10.035 233728 DEBUG os_vif [None req-65b7571e-9584-4539-ba52-b5944511e2a9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:eb:8c,bridge_name='br-int',has_traffic_filtering=True,id=834e7d4e-9813-48de-88d0-2da712aaa996,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap834e7d4e-98') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:13:10 np0005539552 nova_compute[233724]: 2025-11-29 08:13:10.036 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:10 np0005539552 nova_compute[233724]: 2025-11-29 08:13:10.036 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap834e7d4e-98, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:10 np0005539552 nova_compute[233724]: 2025-11-29 08:13:10.038 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:10 np0005539552 nova_compute[233724]: 2025-11-29 08:13:10.039 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:10 np0005539552 nova_compute[233724]: 2025-11-29 08:13:10.041 233728 INFO os_vif [None req-65b7571e-9584-4539-ba52-b5944511e2a9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:eb:8c,bridge_name='br-int',has_traffic_filtering=True,id=834e7d4e-9813-48de-88d0-2da712aaa996,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap834e7d4e-98')#033[00m
Nov 29 03:13:10 np0005539552 nova_compute[233724]: 2025-11-29 08:13:10.042 233728 DEBUG nova.virt.libvirt.vif [None req-65b7571e-9584-4539-ba52-b5944511e2a9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:12:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1879739874',display_name='tempest-tempest.common.compute-instance-1879739874',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1879739874',id=86,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDEWXop04+uZLtvBsFcRaUa+Xwr7qm7TY0ta2r6fa1ieuV+DkD/NEV3POpVnY30H29RqMvmxXH2BxKbSYQ2SIJEBajPEkr5PX8Sjsh+mKC5/3A6DjsKy96CC8Vn8W5GT2A==',key_name='tempest-keypair-2040108904',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:12:33Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ff7c805d4242453aa2148a247956391d',ramdisk_id='',reservation_id='r-j2u2rbx1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-372493183',owner_user_name='tempest-AttachInterfacesTestJSON-372493183-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:12:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b95b3e841be1420c99ee0a04dd0840f1',uuid=7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "adedd23a-8a64-4167-a4ad-7d586b717068", "address": "fa:16:3e:48:12:77", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapadedd23a-8a", "ovs_interfaceid": "adedd23a-8a64-4167-a4ad-7d586b717068", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:13:10 np0005539552 nova_compute[233724]: 2025-11-29 08:13:10.042 233728 DEBUG nova.network.os_vif_util [None req-65b7571e-9584-4539-ba52-b5944511e2a9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converting VIF {"id": "adedd23a-8a64-4167-a4ad-7d586b717068", "address": "fa:16:3e:48:12:77", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapadedd23a-8a", "ovs_interfaceid": "adedd23a-8a64-4167-a4ad-7d586b717068", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:13:10 np0005539552 nova_compute[233724]: 2025-11-29 08:13:10.042 233728 DEBUG nova.network.os_vif_util [None req-65b7571e-9584-4539-ba52-b5944511e2a9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:12:77,bridge_name='br-int',has_traffic_filtering=True,id=adedd23a-8a64-4167-a4ad-7d586b717068,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapadedd23a-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:13:10 np0005539552 nova_compute[233724]: 2025-11-29 08:13:10.043 233728 DEBUG os_vif [None req-65b7571e-9584-4539-ba52-b5944511e2a9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:12:77,bridge_name='br-int',has_traffic_filtering=True,id=adedd23a-8a64-4167-a4ad-7d586b717068,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapadedd23a-8a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:13:10 np0005539552 nova_compute[233724]: 2025-11-29 08:13:10.044 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:10 np0005539552 nova_compute[233724]: 2025-11-29 08:13:10.044 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapadedd23a-8a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:10 np0005539552 nova_compute[233724]: 2025-11-29 08:13:10.044 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:13:10 np0005539552 nova_compute[233724]: 2025-11-29 08:13:10.045 233728 INFO os_vif [None req-65b7571e-9584-4539-ba52-b5944511e2a9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:12:77,bridge_name='br-int',has_traffic_filtering=True,id=adedd23a-8a64-4167-a4ad-7d586b717068,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapadedd23a-8a')#033[00m
Nov 29 03:13:10 np0005539552 nova_compute[233724]: 2025-11-29 08:13:10.499 233728 INFO nova.virt.libvirt.driver [None req-65b7571e-9584-4539-ba52-b5944511e2a9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Deleting instance files /var/lib/nova/instances/7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b_del#033[00m
Nov 29 03:13:10 np0005539552 nova_compute[233724]: 2025-11-29 08:13:10.500 233728 INFO nova.virt.libvirt.driver [None req-65b7571e-9584-4539-ba52-b5944511e2a9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Deletion of /var/lib/nova/instances/7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b_del complete#033[00m
Nov 29 03:13:10 np0005539552 nova_compute[233724]: 2025-11-29 08:13:10.618 233728 INFO nova.compute.manager [None req-65b7571e-9584-4539-ba52-b5944511e2a9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Took 0.83 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:13:10 np0005539552 nova_compute[233724]: 2025-11-29 08:13:10.619 233728 DEBUG oslo.service.loopingcall [None req-65b7571e-9584-4539-ba52-b5944511e2a9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:13:10 np0005539552 nova_compute[233724]: 2025-11-29 08:13:10.619 233728 DEBUG nova.compute.manager [-] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:13:10 np0005539552 nova_compute[233724]: 2025-11-29 08:13:10.619 233728 DEBUG nova.network.neutron [-] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:13:10 np0005539552 nova_compute[233724]: 2025-11-29 08:13:10.874 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:11.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:11 np0005539552 nova_compute[233724]: 2025-11-29 08:13:11.300 233728 DEBUG nova.compute.manager [req-cf963dd1-440c-432e-aba1-1531966fed40 req-c1484a46-deac-4e7c-b987-2577dcb58ac0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Received event network-vif-unplugged-834e7d4e-9813-48de-88d0-2da712aaa996 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:13:11 np0005539552 nova_compute[233724]: 2025-11-29 08:13:11.300 233728 DEBUG oslo_concurrency.lockutils [req-cf963dd1-440c-432e-aba1-1531966fed40 req-c1484a46-deac-4e7c-b987-2577dcb58ac0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:11 np0005539552 nova_compute[233724]: 2025-11-29 08:13:11.300 233728 DEBUG oslo_concurrency.lockutils [req-cf963dd1-440c-432e-aba1-1531966fed40 req-c1484a46-deac-4e7c-b987-2577dcb58ac0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:11 np0005539552 nova_compute[233724]: 2025-11-29 08:13:11.301 233728 DEBUG oslo_concurrency.lockutils [req-cf963dd1-440c-432e-aba1-1531966fed40 req-c1484a46-deac-4e7c-b987-2577dcb58ac0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:11 np0005539552 nova_compute[233724]: 2025-11-29 08:13:11.301 233728 DEBUG nova.compute.manager [req-cf963dd1-440c-432e-aba1-1531966fed40 req-c1484a46-deac-4e7c-b987-2577dcb58ac0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] No waiting events found dispatching network-vif-unplugged-834e7d4e-9813-48de-88d0-2da712aaa996 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:13:11 np0005539552 nova_compute[233724]: 2025-11-29 08:13:11.301 233728 DEBUG nova.compute.manager [req-cf963dd1-440c-432e-aba1-1531966fed40 req-c1484a46-deac-4e7c-b987-2577dcb58ac0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Received event network-vif-unplugged-834e7d4e-9813-48de-88d0-2da712aaa996 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:13:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:13:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:11.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:13:12 np0005539552 nova_compute[233724]: 2025-11-29 08:13:12.100 233728 DEBUG nova.network.neutron [-] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:13:12 np0005539552 nova_compute[233724]: 2025-11-29 08:13:12.418 233728 INFO nova.compute.manager [-] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Took 1.80 seconds to deallocate network for instance.#033[00m
Nov 29 03:13:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:12 np0005539552 nova_compute[233724]: 2025-11-29 08:13:12.513 233728 DEBUG nova.compute.manager [req-18b62569-8349-4d93-bd36-214a042aadf4 req-0d736a88-c941-4f39-9cb2-9af13fe584fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Received event network-vif-deleted-834e7d4e-9813-48de-88d0-2da712aaa996 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:13:12 np0005539552 nova_compute[233724]: 2025-11-29 08:13:12.514 233728 INFO nova.compute.manager [req-18b62569-8349-4d93-bd36-214a042aadf4 req-0d736a88-c941-4f39-9cb2-9af13fe584fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Neutron deleted interface 834e7d4e-9813-48de-88d0-2da712aaa996; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 03:13:12 np0005539552 nova_compute[233724]: 2025-11-29 08:13:12.514 233728 DEBUG nova.network.neutron [req-18b62569-8349-4d93-bd36-214a042aadf4 req-0d736a88-c941-4f39-9cb2-9af13fe584fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:13:12 np0005539552 nova_compute[233724]: 2025-11-29 08:13:12.931 233728 DEBUG nova.compute.manager [req-18b62569-8349-4d93-bd36-214a042aadf4 req-0d736a88-c941-4f39-9cb2-9af13fe584fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Detach interface failed, port_id=834e7d4e-9813-48de-88d0-2da712aaa996, reason: Instance 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 29 03:13:12 np0005539552 nova_compute[233724]: 2025-11-29 08:13:12.955 233728 DEBUG oslo_concurrency.lockutils [None req-65b7571e-9584-4539-ba52-b5944511e2a9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:12 np0005539552 nova_compute[233724]: 2025-11-29 08:13:12.955 233728 DEBUG oslo_concurrency.lockutils [None req-65b7571e-9584-4539-ba52-b5944511e2a9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:13 np0005539552 nova_compute[233724]: 2025-11-29 08:13:13.041 233728 DEBUG oslo_concurrency.processutils [None req-65b7571e-9584-4539-ba52-b5944511e2a9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:13.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:13 np0005539552 nova_compute[233724]: 2025-11-29 08:13:13.441 233728 DEBUG nova.compute.manager [req-d1e03774-e00c-4ac4-b1f9-74c72dce81bb req-ac921d33-cf1f-4f78-9a3b-fae958f11dca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Received event network-vif-plugged-834e7d4e-9813-48de-88d0-2da712aaa996 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:13:13 np0005539552 nova_compute[233724]: 2025-11-29 08:13:13.442 233728 DEBUG oslo_concurrency.lockutils [req-d1e03774-e00c-4ac4-b1f9-74c72dce81bb req-ac921d33-cf1f-4f78-9a3b-fae958f11dca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:13 np0005539552 nova_compute[233724]: 2025-11-29 08:13:13.442 233728 DEBUG oslo_concurrency.lockutils [req-d1e03774-e00c-4ac4-b1f9-74c72dce81bb req-ac921d33-cf1f-4f78-9a3b-fae958f11dca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:13 np0005539552 nova_compute[233724]: 2025-11-29 08:13:13.443 233728 DEBUG oslo_concurrency.lockutils [req-d1e03774-e00c-4ac4-b1f9-74c72dce81bb req-ac921d33-cf1f-4f78-9a3b-fae958f11dca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:13 np0005539552 nova_compute[233724]: 2025-11-29 08:13:13.443 233728 DEBUG nova.compute.manager [req-d1e03774-e00c-4ac4-b1f9-74c72dce81bb req-ac921d33-cf1f-4f78-9a3b-fae958f11dca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] No waiting events found dispatching network-vif-plugged-834e7d4e-9813-48de-88d0-2da712aaa996 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:13:13 np0005539552 nova_compute[233724]: 2025-11-29 08:13:13.443 233728 WARNING nova.compute.manager [req-d1e03774-e00c-4ac4-b1f9-74c72dce81bb req-ac921d33-cf1f-4f78-9a3b-fae958f11dca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Received unexpected event network-vif-plugged-834e7d4e-9813-48de-88d0-2da712aaa996 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:13:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:13:13 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/873314839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:13:13 np0005539552 nova_compute[233724]: 2025-11-29 08:13:13.472 233728 DEBUG oslo_concurrency.processutils [None req-65b7571e-9584-4539-ba52-b5944511e2a9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:13 np0005539552 nova_compute[233724]: 2025-11-29 08:13:13.478 233728 DEBUG nova.compute.provider_tree [None req-65b7571e-9584-4539-ba52-b5944511e2a9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:13:13 np0005539552 nova_compute[233724]: 2025-11-29 08:13:13.496 233728 DEBUG nova.scheduler.client.report [None req-65b7571e-9584-4539-ba52-b5944511e2a9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:13:13 np0005539552 nova_compute[233724]: 2025-11-29 08:13:13.532 233728 DEBUG oslo_concurrency.lockutils [None req-65b7571e-9584-4539-ba52-b5944511e2a9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:13.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:13 np0005539552 nova_compute[233724]: 2025-11-29 08:13:13.558 233728 INFO nova.scheduler.client.report [None req-65b7571e-9584-4539-ba52-b5944511e2a9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Deleted allocations for instance 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b#033[00m
Nov 29 03:13:13 np0005539552 nova_compute[233724]: 2025-11-29 08:13:13.670 233728 DEBUG oslo_concurrency.lockutils [None req-65b7571e-9584-4539-ba52-b5944511e2a9 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.885s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e287 e287: 3 total, 3 up, 3 in
Nov 29 03:13:14 np0005539552 nova_compute[233724]: 2025-11-29 08:13:14.400 233728 DEBUG oslo_concurrency.lockutils [None req-bd1d910b-3281-4bd5-a28b-fe07292c3826 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "f2279c6c-774a-44e1-854b-d9aae353330e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:14 np0005539552 nova_compute[233724]: 2025-11-29 08:13:14.401 233728 DEBUG oslo_concurrency.lockutils [None req-bd1d910b-3281-4bd5-a28b-fe07292c3826 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "f2279c6c-774a-44e1-854b-d9aae353330e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:14 np0005539552 nova_compute[233724]: 2025-11-29 08:13:14.401 233728 DEBUG oslo_concurrency.lockutils [None req-bd1d910b-3281-4bd5-a28b-fe07292c3826 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "f2279c6c-774a-44e1-854b-d9aae353330e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:14 np0005539552 nova_compute[233724]: 2025-11-29 08:13:14.401 233728 DEBUG oslo_concurrency.lockutils [None req-bd1d910b-3281-4bd5-a28b-fe07292c3826 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "f2279c6c-774a-44e1-854b-d9aae353330e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:14 np0005539552 nova_compute[233724]: 2025-11-29 08:13:14.402 233728 DEBUG oslo_concurrency.lockutils [None req-bd1d910b-3281-4bd5-a28b-fe07292c3826 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "f2279c6c-774a-44e1-854b-d9aae353330e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:14 np0005539552 nova_compute[233724]: 2025-11-29 08:13:14.403 233728 INFO nova.compute.manager [None req-bd1d910b-3281-4bd5-a28b-fe07292c3826 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Terminating instance#033[00m
Nov 29 03:13:14 np0005539552 nova_compute[233724]: 2025-11-29 08:13:14.404 233728 DEBUG nova.compute.manager [None req-bd1d910b-3281-4bd5-a28b-fe07292c3826 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:13:14 np0005539552 kernel: tape7660cdd-1f (unregistering): left promiscuous mode
Nov 29 03:13:14 np0005539552 NetworkManager[48926]: <info>  [1764403994.4490] device (tape7660cdd-1f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:13:14 np0005539552 nova_compute[233724]: 2025-11-29 08:13:14.456 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:14 np0005539552 ovn_controller[133798]: 2025-11-29T08:13:14Z|00287|binding|INFO|Releasing lport e7660cdd-1f88-4458-9388-8fb207f0a754 from this chassis (sb_readonly=0)
Nov 29 03:13:14 np0005539552 ovn_controller[133798]: 2025-11-29T08:13:14Z|00288|binding|INFO|Setting lport e7660cdd-1f88-4458-9388-8fb207f0a754 down in Southbound
Nov 29 03:13:14 np0005539552 ovn_controller[133798]: 2025-11-29T08:13:14Z|00289|binding|INFO|Removing iface tape7660cdd-1f ovn-installed in OVS
Nov 29 03:13:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:14.464 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:d3:cf 10.100.0.7'], port_security=['fa:16:3e:4b:d3:cf 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'f2279c6c-774a-44e1-854b-d9aae353330e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff7c805d4242453aa2148a247956391d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1ece3be3-d42e-475f-bdcb-f996b12e4880', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5330ba90-719c-42ae-a31a-dd5fd1d240e2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=e7660cdd-1f88-4458-9388-8fb207f0a754) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:13:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:14.466 143400 INFO neutron.agent.ovn.metadata.agent [-] Port e7660cdd-1f88-4458-9388-8fb207f0a754 in datapath ddd8b166-79ec-408d-b52c-581ad9dd6cb8 unbound from our chassis#033[00m
Nov 29 03:13:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:14.467 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ddd8b166-79ec-408d-b52c-581ad9dd6cb8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:13:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:14.467 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[825d2905-a18d-4de5-bc49-2ed224ead4a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:14.468 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8 namespace which is not needed anymore#033[00m
Nov 29 03:13:14 np0005539552 nova_compute[233724]: 2025-11-29 08:13:14.474 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:14 np0005539552 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d00000053.scope: Deactivated successfully.
Nov 29 03:13:14 np0005539552 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d00000053.scope: Consumed 17.117s CPU time.
Nov 29 03:13:14 np0005539552 systemd-machined[196379]: Machine qemu-31-instance-00000053 terminated.
Nov 29 03:13:14 np0005539552 nova_compute[233724]: 2025-11-29 08:13:14.550 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:14 np0005539552 neutron-haproxy-ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8[267694]: [NOTICE]   (267699) : haproxy version is 2.8.14-c23fe91
Nov 29 03:13:14 np0005539552 neutron-haproxy-ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8[267694]: [NOTICE]   (267699) : path to executable is /usr/sbin/haproxy
Nov 29 03:13:14 np0005539552 neutron-haproxy-ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8[267694]: [WARNING]  (267699) : Exiting Master process...
Nov 29 03:13:14 np0005539552 neutron-haproxy-ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8[267694]: [ALERT]    (267699) : Current worker (267702) exited with code 143 (Terminated)
Nov 29 03:13:14 np0005539552 neutron-haproxy-ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8[267694]: [WARNING]  (267699) : All workers exited. Exiting... (0)
Nov 29 03:13:14 np0005539552 systemd[1]: libpod-3b0b881566bb03191151c01023dab891f4182c1e854d2e913ed95dc55ddc9730.scope: Deactivated successfully.
Nov 29 03:13:14 np0005539552 podman[268870]: 2025-11-29 08:13:14.61816071 +0000 UTC m=+0.053013625 container died 3b0b881566bb03191151c01023dab891f4182c1e854d2e913ed95dc55ddc9730 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:13:14 np0005539552 nova_compute[233724]: 2025-11-29 08:13:14.635 233728 INFO nova.virt.libvirt.driver [-] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Instance destroyed successfully.#033[00m
Nov 29 03:13:14 np0005539552 nova_compute[233724]: 2025-11-29 08:13:14.636 233728 DEBUG nova.objects.instance [None req-bd1d910b-3281-4bd5-a28b-fe07292c3826 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lazy-loading 'resources' on Instance uuid f2279c6c-774a-44e1-854b-d9aae353330e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:13:14 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3b0b881566bb03191151c01023dab891f4182c1e854d2e913ed95dc55ddc9730-userdata-shm.mount: Deactivated successfully.
Nov 29 03:13:14 np0005539552 systemd[1]: var-lib-containers-storage-overlay-eb5d9b83ba443b69231d2adc990edd3452145e9a404b55e56312800f565dd408-merged.mount: Deactivated successfully.
Nov 29 03:13:14 np0005539552 nova_compute[233724]: 2025-11-29 08:13:14.659 233728 DEBUG nova.virt.libvirt.vif [None req-bd1d910b-3281-4bd5-a28b-fe07292c3826 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:11:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1979756702',display_name='tempest-tempest.common.compute-instance-1979756702',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1979756702',id=83,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDEWXop04+uZLtvBsFcRaUa+Xwr7qm7TY0ta2r6fa1ieuV+DkD/NEV3POpVnY30H29RqMvmxXH2BxKbSYQ2SIJEBajPEkr5PX8Sjsh+mKC5/3A6DjsKy96CC8Vn8W5GT2A==',key_name='tempest-keypair-2040108904',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:12:07Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ff7c805d4242453aa2148a247956391d',ramdisk_id='',reservation_id='r-c8i3k0lp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-372493183',owner_user_name='tempest-AttachInterfacesTestJSON-372493183-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:12:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b95b3e841be1420c99ee0a04dd0840f1',uuid=f2279c6c-774a-44e1-854b-d9aae353330e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e7660cdd-1f88-4458-9388-8fb207f0a754", "address": "fa:16:3e:4b:d3:cf", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7660cdd-1f", "ovs_interfaceid": "e7660cdd-1f88-4458-9388-8fb207f0a754", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:13:14 np0005539552 nova_compute[233724]: 2025-11-29 08:13:14.661 233728 DEBUG nova.network.os_vif_util [None req-bd1d910b-3281-4bd5-a28b-fe07292c3826 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converting VIF {"id": "e7660cdd-1f88-4458-9388-8fb207f0a754", "address": "fa:16:3e:4b:d3:cf", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7660cdd-1f", "ovs_interfaceid": "e7660cdd-1f88-4458-9388-8fb207f0a754", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:13:14 np0005539552 nova_compute[233724]: 2025-11-29 08:13:14.662 233728 DEBUG nova.network.os_vif_util [None req-bd1d910b-3281-4bd5-a28b-fe07292c3826 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4b:d3:cf,bridge_name='br-int',has_traffic_filtering=True,id=e7660cdd-1f88-4458-9388-8fb207f0a754,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7660cdd-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:13:14 np0005539552 nova_compute[233724]: 2025-11-29 08:13:14.662 233728 DEBUG os_vif [None req-bd1d910b-3281-4bd5-a28b-fe07292c3826 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4b:d3:cf,bridge_name='br-int',has_traffic_filtering=True,id=e7660cdd-1f88-4458-9388-8fb207f0a754,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7660cdd-1f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:13:14 np0005539552 nova_compute[233724]: 2025-11-29 08:13:14.665 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:14 np0005539552 nova_compute[233724]: 2025-11-29 08:13:14.665 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape7660cdd-1f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:14 np0005539552 nova_compute[233724]: 2025-11-29 08:13:14.670 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:14 np0005539552 podman[268870]: 2025-11-29 08:13:14.671693048 +0000 UTC m=+0.106545943 container cleanup 3b0b881566bb03191151c01023dab891f4182c1e854d2e913ed95dc55ddc9730 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:13:14 np0005539552 nova_compute[233724]: 2025-11-29 08:13:14.673 233728 INFO os_vif [None req-bd1d910b-3281-4bd5-a28b-fe07292c3826 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4b:d3:cf,bridge_name='br-int',has_traffic_filtering=True,id=e7660cdd-1f88-4458-9388-8fb207f0a754,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7660cdd-1f')#033[00m
Nov 29 03:13:14 np0005539552 systemd[1]: libpod-conmon-3b0b881566bb03191151c01023dab891f4182c1e854d2e913ed95dc55ddc9730.scope: Deactivated successfully.
Nov 29 03:13:14 np0005539552 nova_compute[233724]: 2025-11-29 08:13:14.700 233728 DEBUG nova.compute.manager [req-b71f9754-7089-4aeb-9023-2304fde5989e req-3a2ccd12-dc16-4daf-96ad-56edf7ee4f64 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Received event network-vif-unplugged-e7660cdd-1f88-4458-9388-8fb207f0a754 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:13:14 np0005539552 nova_compute[233724]: 2025-11-29 08:13:14.701 233728 DEBUG oslo_concurrency.lockutils [req-b71f9754-7089-4aeb-9023-2304fde5989e req-3a2ccd12-dc16-4daf-96ad-56edf7ee4f64 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "f2279c6c-774a-44e1-854b-d9aae353330e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:14 np0005539552 nova_compute[233724]: 2025-11-29 08:13:14.701 233728 DEBUG oslo_concurrency.lockutils [req-b71f9754-7089-4aeb-9023-2304fde5989e req-3a2ccd12-dc16-4daf-96ad-56edf7ee4f64 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f2279c6c-774a-44e1-854b-d9aae353330e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:14 np0005539552 nova_compute[233724]: 2025-11-29 08:13:14.702 233728 DEBUG oslo_concurrency.lockutils [req-b71f9754-7089-4aeb-9023-2304fde5989e req-3a2ccd12-dc16-4daf-96ad-56edf7ee4f64 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f2279c6c-774a-44e1-854b-d9aae353330e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:14 np0005539552 nova_compute[233724]: 2025-11-29 08:13:14.702 233728 DEBUG nova.compute.manager [req-b71f9754-7089-4aeb-9023-2304fde5989e req-3a2ccd12-dc16-4daf-96ad-56edf7ee4f64 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] No waiting events found dispatching network-vif-unplugged-e7660cdd-1f88-4458-9388-8fb207f0a754 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:13:14 np0005539552 nova_compute[233724]: 2025-11-29 08:13:14.702 233728 DEBUG nova.compute.manager [req-b71f9754-7089-4aeb-9023-2304fde5989e req-3a2ccd12-dc16-4daf-96ad-56edf7ee4f64 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Received event network-vif-unplugged-e7660cdd-1f88-4458-9388-8fb207f0a754 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:13:14 np0005539552 podman[268914]: 2025-11-29 08:13:14.74619488 +0000 UTC m=+0.048860274 container remove 3b0b881566bb03191151c01023dab891f4182c1e854d2e913ed95dc55ddc9730 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:13:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:14.754 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ec23f69b-6eb3-4fef-a265-aabb13a51912]: (4, ('Sat Nov 29 08:13:14 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8 (3b0b881566bb03191151c01023dab891f4182c1e854d2e913ed95dc55ddc9730)\n3b0b881566bb03191151c01023dab891f4182c1e854d2e913ed95dc55ddc9730\nSat Nov 29 08:13:14 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8 (3b0b881566bb03191151c01023dab891f4182c1e854d2e913ed95dc55ddc9730)\n3b0b881566bb03191151c01023dab891f4182c1e854d2e913ed95dc55ddc9730\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:14.756 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[02393405-b90c-417f-b8b1-0f1051140a64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:14.758 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapddd8b166-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:14 np0005539552 kernel: tapddd8b166-70: left promiscuous mode
Nov 29 03:13:14 np0005539552 nova_compute[233724]: 2025-11-29 08:13:14.761 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:14 np0005539552 nova_compute[233724]: 2025-11-29 08:13:14.774 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:14.779 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[fe73a25b-d40c-45e2-ba00-4b6c91b200ba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:14.799 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0b019c89-df45-40de-9475-ad8f81a931e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:14.801 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[34e0ad09-14c7-4d8e-b3cd-0e0a54c00af4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:14.816 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[46179af1-c78a-4917-8d5a-362083456ab0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697132, 'reachable_time': 29948, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268946, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:14.819 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:13:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:14.819 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[cacc7b98-c1e2-46a1-8af5-a0d69ed9890b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:14 np0005539552 systemd[1]: run-netns-ovnmeta\x2dddd8b166\x2d79ec\x2d408d\x2db52c\x2d581ad9dd6cb8.mount: Deactivated successfully.
Nov 29 03:13:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:13:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:15.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:13:15 np0005539552 nova_compute[233724]: 2025-11-29 08:13:15.070 233728 INFO nova.virt.libvirt.driver [None req-bd1d910b-3281-4bd5-a28b-fe07292c3826 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Deleting instance files /var/lib/nova/instances/f2279c6c-774a-44e1-854b-d9aae353330e_del#033[00m
Nov 29 03:13:15 np0005539552 nova_compute[233724]: 2025-11-29 08:13:15.071 233728 INFO nova.virt.libvirt.driver [None req-bd1d910b-3281-4bd5-a28b-fe07292c3826 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Deletion of /var/lib/nova/instances/f2279c6c-774a-44e1-854b-d9aae353330e_del complete#033[00m
Nov 29 03:13:15 np0005539552 nova_compute[233724]: 2025-11-29 08:13:15.133 233728 INFO nova.compute.manager [None req-bd1d910b-3281-4bd5-a28b-fe07292c3826 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:13:15 np0005539552 nova_compute[233724]: 2025-11-29 08:13:15.134 233728 DEBUG oslo.service.loopingcall [None req-bd1d910b-3281-4bd5-a28b-fe07292c3826 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:13:15 np0005539552 nova_compute[233724]: 2025-11-29 08:13:15.135 233728 DEBUG nova.compute.manager [-] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:13:15 np0005539552 nova_compute[233724]: 2025-11-29 08:13:15.135 233728 DEBUG nova.network.neutron [-] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:13:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:15.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:15 np0005539552 nova_compute[233724]: 2025-11-29 08:13:15.876 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:16 np0005539552 nova_compute[233724]: 2025-11-29 08:13:16.599 233728 DEBUG nova.network.neutron [-] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:13:16 np0005539552 nova_compute[233724]: 2025-11-29 08:13:16.621 233728 INFO nova.compute.manager [-] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Took 1.49 seconds to deallocate network for instance.#033[00m
Nov 29 03:13:16 np0005539552 nova_compute[233724]: 2025-11-29 08:13:16.630 233728 DEBUG nova.compute.manager [req-06643f3d-cc78-44da-ae82-0b037182a7a3 req-d7432001-2399-41d1-8ca0-827b46633c5c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Received event network-vif-deleted-e7660cdd-1f88-4458-9388-8fb207f0a754 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:13:16 np0005539552 nova_compute[233724]: 2025-11-29 08:13:16.631 233728 INFO nova.compute.manager [req-06643f3d-cc78-44da-ae82-0b037182a7a3 req-d7432001-2399-41d1-8ca0-827b46633c5c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Neutron deleted interface e7660cdd-1f88-4458-9388-8fb207f0a754; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 03:13:16 np0005539552 nova_compute[233724]: 2025-11-29 08:13:16.631 233728 DEBUG nova.network.neutron [req-06643f3d-cc78-44da-ae82-0b037182a7a3 req-d7432001-2399-41d1-8ca0-827b46633c5c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:13:16 np0005539552 nova_compute[233724]: 2025-11-29 08:13:16.649 233728 DEBUG nova.compute.manager [req-06643f3d-cc78-44da-ae82-0b037182a7a3 req-d7432001-2399-41d1-8ca0-827b46633c5c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Detach interface failed, port_id=e7660cdd-1f88-4458-9388-8fb207f0a754, reason: Instance f2279c6c-774a-44e1-854b-d9aae353330e could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 29 03:13:16 np0005539552 nova_compute[233724]: 2025-11-29 08:13:16.677 233728 DEBUG oslo_concurrency.lockutils [None req-bd1d910b-3281-4bd5-a28b-fe07292c3826 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:16 np0005539552 nova_compute[233724]: 2025-11-29 08:13:16.678 233728 DEBUG oslo_concurrency.lockutils [None req-bd1d910b-3281-4bd5-a28b-fe07292c3826 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:16 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e288 e288: 3 total, 3 up, 3 in
Nov 29 03:13:16 np0005539552 nova_compute[233724]: 2025-11-29 08:13:16.793 233728 DEBUG oslo_concurrency.processutils [None req-bd1d910b-3281-4bd5-a28b-fe07292c3826 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:16 np0005539552 nova_compute[233724]: 2025-11-29 08:13:16.860 233728 DEBUG nova.compute.manager [req-29921d1b-32ab-4e2a-b658-62453f34a65a req-6d339401-566b-471d-b997-b67cea54b449 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Received event network-vif-plugged-e7660cdd-1f88-4458-9388-8fb207f0a754 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:13:16 np0005539552 nova_compute[233724]: 2025-11-29 08:13:16.861 233728 DEBUG oslo_concurrency.lockutils [req-29921d1b-32ab-4e2a-b658-62453f34a65a req-6d339401-566b-471d-b997-b67cea54b449 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "f2279c6c-774a-44e1-854b-d9aae353330e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:16 np0005539552 nova_compute[233724]: 2025-11-29 08:13:16.861 233728 DEBUG oslo_concurrency.lockutils [req-29921d1b-32ab-4e2a-b658-62453f34a65a req-6d339401-566b-471d-b997-b67cea54b449 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f2279c6c-774a-44e1-854b-d9aae353330e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:16 np0005539552 nova_compute[233724]: 2025-11-29 08:13:16.861 233728 DEBUG oslo_concurrency.lockutils [req-29921d1b-32ab-4e2a-b658-62453f34a65a req-6d339401-566b-471d-b997-b67cea54b449 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f2279c6c-774a-44e1-854b-d9aae353330e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:16 np0005539552 nova_compute[233724]: 2025-11-29 08:13:16.861 233728 DEBUG nova.compute.manager [req-29921d1b-32ab-4e2a-b658-62453f34a65a req-6d339401-566b-471d-b997-b67cea54b449 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] No waiting events found dispatching network-vif-plugged-e7660cdd-1f88-4458-9388-8fb207f0a754 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:13:16 np0005539552 nova_compute[233724]: 2025-11-29 08:13:16.862 233728 WARNING nova.compute.manager [req-29921d1b-32ab-4e2a-b658-62453f34a65a req-6d339401-566b-471d-b997-b67cea54b449 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Received unexpected event network-vif-plugged-e7660cdd-1f88-4458-9388-8fb207f0a754 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:13:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:17.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:13:17 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2227849991' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:13:17 np0005539552 nova_compute[233724]: 2025-11-29 08:13:17.225 233728 DEBUG oslo_concurrency.processutils [None req-bd1d910b-3281-4bd5-a28b-fe07292c3826 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:17 np0005539552 nova_compute[233724]: 2025-11-29 08:13:17.232 233728 DEBUG nova.compute.provider_tree [None req-bd1d910b-3281-4bd5-a28b-fe07292c3826 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:13:17 np0005539552 nova_compute[233724]: 2025-11-29 08:13:17.247 233728 DEBUG nova.scheduler.client.report [None req-bd1d910b-3281-4bd5-a28b-fe07292c3826 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:13:17 np0005539552 nova_compute[233724]: 2025-11-29 08:13:17.269 233728 DEBUG oslo_concurrency.lockutils [None req-bd1d910b-3281-4bd5-a28b-fe07292c3826 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:17 np0005539552 nova_compute[233724]: 2025-11-29 08:13:17.303 233728 INFO nova.scheduler.client.report [None req-bd1d910b-3281-4bd5-a28b-fe07292c3826 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Deleted allocations for instance f2279c6c-774a-44e1-854b-d9aae353330e#033[00m
Nov 29 03:13:17 np0005539552 nova_compute[233724]: 2025-11-29 08:13:17.405 233728 DEBUG oslo_concurrency.lockutils [None req-bd1d910b-3281-4bd5-a28b-fe07292c3826 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "f2279c6c-774a-44e1-854b-d9aae353330e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:13:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:17.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:13:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e289 e289: 3 total, 3 up, 3 in
Nov 29 03:13:18 np0005539552 nova_compute[233724]: 2025-11-29 08:13:18.104 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:19.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:19.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:19 np0005539552 nova_compute[233724]: 2025-11-29 08:13:19.669 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:20.621 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:20.621 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:20.621 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:20 np0005539552 nova_compute[233724]: 2025-11-29 08:13:20.878 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:13:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:21.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:13:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:21.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:21 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e290 e290: 3 total, 3 up, 3 in
Nov 29 03:13:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:22 np0005539552 nova_compute[233724]: 2025-11-29 08:13:22.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:13:22 np0005539552 nova_compute[233724]: 2025-11-29 08:13:22.949 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:22 np0005539552 nova_compute[233724]: 2025-11-29 08:13:22.949 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:22 np0005539552 nova_compute[233724]: 2025-11-29 08:13:22.949 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:22 np0005539552 nova_compute[233724]: 2025-11-29 08:13:22.950 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:13:22 np0005539552 nova_compute[233724]: 2025-11-29 08:13:22.950 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:22 np0005539552 nova_compute[233724]: 2025-11-29 08:13:22.984 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:23.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:23 np0005539552 nova_compute[233724]: 2025-11-29 08:13:23.181 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:13:23 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1165542326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:13:23 np0005539552 nova_compute[233724]: 2025-11-29 08:13:23.417 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:23.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:23 np0005539552 nova_compute[233724]: 2025-11-29 08:13:23.565 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:13:23 np0005539552 nova_compute[233724]: 2025-11-29 08:13:23.567 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4501MB free_disk=20.94269561767578GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:13:23 np0005539552 nova_compute[233724]: 2025-11-29 08:13:23.567 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:23 np0005539552 nova_compute[233724]: 2025-11-29 08:13:23.568 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:23 np0005539552 nova_compute[233724]: 2025-11-29 08:13:23.700 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:13:23 np0005539552 nova_compute[233724]: 2025-11-29 08:13:23.700 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:13:23 np0005539552 nova_compute[233724]: 2025-11-29 08:13:23.726 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:24 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:13:24 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1194693440' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:13:24 np0005539552 nova_compute[233724]: 2025-11-29 08:13:24.155 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:24 np0005539552 nova_compute[233724]: 2025-11-29 08:13:24.161 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:13:24 np0005539552 nova_compute[233724]: 2025-11-29 08:13:24.173 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:13:24 np0005539552 nova_compute[233724]: 2025-11-29 08:13:24.204 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:13:24 np0005539552 nova_compute[233724]: 2025-11-29 08:13:24.205 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:24 np0005539552 nova_compute[233724]: 2025-11-29 08:13:24.672 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:24 np0005539552 podman[269022]: 2025-11-29 08:13:24.961304456 +0000 UTC m=+0.053257822 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 03:13:24 np0005539552 podman[269021]: 2025-11-29 08:13:24.97149839 +0000 UTC m=+0.066265702 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:13:24 np0005539552 podman[269023]: 2025-11-29 08:13:24.997587011 +0000 UTC m=+0.086625709 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 29 03:13:25 np0005539552 nova_compute[233724]: 2025-11-29 08:13:25.020 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403990.0184062, 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:13:25 np0005539552 nova_compute[233724]: 2025-11-29 08:13:25.020 233728 INFO nova.compute.manager [-] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:13:25 np0005539552 nova_compute[233724]: 2025-11-29 08:13:25.062 233728 DEBUG nova.compute.manager [None req-00e12d4f-1536-4bcd-b251-d9094a31d342 - - - - - -] [instance: 7ecfbab6-f5fd-4c7e-bee4-2b60c260ad4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:13:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:25.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:25.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:25 np0005539552 nova_compute[233724]: 2025-11-29 08:13:25.879 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:26 np0005539552 nova_compute[233724]: 2025-11-29 08:13:26.206 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:13:26 np0005539552 nova_compute[233724]: 2025-11-29 08:13:26.206 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:13:26 np0005539552 nova_compute[233724]: 2025-11-29 08:13:26.206 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:13:26 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e291 e291: 3 total, 3 up, 3 in
Nov 29 03:13:26 np0005539552 nova_compute[233724]: 2025-11-29 08:13:26.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:13:26 np0005539552 nova_compute[233724]: 2025-11-29 08:13:26.925 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:13:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:27.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:27.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:28 np0005539552 nova_compute[233724]: 2025-11-29 08:13:28.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:13:28 np0005539552 nova_compute[233724]: 2025-11-29 08:13:28.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:13:28 np0005539552 nova_compute[233724]: 2025-11-29 08:13:28.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:13:28 np0005539552 nova_compute[233724]: 2025-11-29 08:13:28.946 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:13:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:29.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:13:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:29.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:13:29 np0005539552 nova_compute[233724]: 2025-11-29 08:13:29.634 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403994.6331174, f2279c6c-774a-44e1-854b-d9aae353330e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:13:29 np0005539552 nova_compute[233724]: 2025-11-29 08:13:29.635 233728 INFO nova.compute.manager [-] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:13:29 np0005539552 nova_compute[233724]: 2025-11-29 08:13:29.654 233728 DEBUG nova.compute.manager [None req-8b87dc4e-ace9-42a9-b1ce-71073802d264 - - - - - -] [instance: f2279c6c-774a-44e1-854b-d9aae353330e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:13:29 np0005539552 nova_compute[233724]: 2025-11-29 08:13:29.719 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:29 np0005539552 nova_compute[233724]: 2025-11-29 08:13:29.922 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:13:30 np0005539552 nova_compute[233724]: 2025-11-29 08:13:30.880 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:30 np0005539552 nova_compute[233724]: 2025-11-29 08:13:30.918 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:13:30 np0005539552 nova_compute[233724]: 2025-11-29 08:13:30.922 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:13:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:31.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:31.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:31 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #79. Immutable memtables: 0.
Nov 29 03:13:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:13:31.768315) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:13:31 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 79
Nov 29 03:13:31 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404011768388, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 1437, "num_deletes": 258, "total_data_size": 2881376, "memory_usage": 2922200, "flush_reason": "Manual Compaction"}
Nov 29 03:13:31 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #80: started
Nov 29 03:13:31 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404011776753, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 80, "file_size": 1270000, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41044, "largest_seqno": 42475, "table_properties": {"data_size": 1264835, "index_size": 2497, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 13733, "raw_average_key_size": 21, "raw_value_size": 1253550, "raw_average_value_size": 1974, "num_data_blocks": 109, "num_entries": 635, "num_filter_entries": 635, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403917, "oldest_key_time": 1764403917, "file_creation_time": 1764404011, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:13:31 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 8469 microseconds, and 4047 cpu microseconds.
Nov 29 03:13:31 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:13:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:13:31.776794) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #80: 1270000 bytes OK
Nov 29 03:13:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:13:31.776811) [db/memtable_list.cc:519] [default] Level-0 commit table #80 started
Nov 29 03:13:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:13:31.777830) [db/memtable_list.cc:722] [default] Level-0 commit table #80: memtable #1 done
Nov 29 03:13:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:13:31.777841) EVENT_LOG_v1 {"time_micros": 1764404011777838, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:13:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:13:31.777857) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:13:31 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 2874539, prev total WAL file size 2874539, number of live WAL files 2.
Nov 29 03:13:31 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000076.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:13:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:13:31.778644) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323532' seq:72057594037927935, type:22 .. '6D6772737461740031353033' seq:0, type:0; will stop at (end)
Nov 29 03:13:31 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:13:31 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [80(1240KB)], [78(11MB)]
Nov 29 03:13:31 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404011778695, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [80], "files_L6": [78], "score": -1, "input_data_size": 13564088, "oldest_snapshot_seqno": -1}
Nov 29 03:13:31 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #81: 7295 keys, 10352443 bytes, temperature: kUnknown
Nov 29 03:13:31 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404011848388, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 81, "file_size": 10352443, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10304873, "index_size": 28258, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18245, "raw_key_size": 188917, "raw_average_key_size": 25, "raw_value_size": 10175214, "raw_average_value_size": 1394, "num_data_blocks": 1114, "num_entries": 7295, "num_filter_entries": 7295, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764404011, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 81, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:13:31 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:13:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:13:31.848804) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 10352443 bytes
Nov 29 03:13:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:13:31.850225) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 194.3 rd, 148.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 11.7 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(18.8) write-amplify(8.2) OK, records in: 7782, records dropped: 487 output_compression: NoCompression
Nov 29 03:13:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:13:31.850254) EVENT_LOG_v1 {"time_micros": 1764404011850241, "job": 48, "event": "compaction_finished", "compaction_time_micros": 69798, "compaction_time_cpu_micros": 26862, "output_level": 6, "num_output_files": 1, "total_output_size": 10352443, "num_input_records": 7782, "num_output_records": 7295, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:13:31 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:13:31 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404011850850, "job": 48, "event": "table_file_deletion", "file_number": 80}
Nov 29 03:13:31 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000078.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:13:31 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404011854806, "job": 48, "event": "table_file_deletion", "file_number": 78}
Nov 29 03:13:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:13:31.778574) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:13:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:13:31.854868) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:13:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:13:31.854873) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:13:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:13:31.854875) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:13:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:13:31.854909) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:13:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:13:31.854911) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:13:32 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:33.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:33.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:34 np0005539552 nova_compute[233724]: 2025-11-29 08:13:34.721 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:13:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:35.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:13:35 np0005539552 nova_compute[233724]: 2025-11-29 08:13:35.467 233728 DEBUG oslo_concurrency.lockutils [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Acquiring lock "36048c92-5df2-425d-b12f-1ce0326cc6a1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:35 np0005539552 nova_compute[233724]: 2025-11-29 08:13:35.467 233728 DEBUG oslo_concurrency.lockutils [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "36048c92-5df2-425d-b12f-1ce0326cc6a1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:35 np0005539552 nova_compute[233724]: 2025-11-29 08:13:35.490 233728 DEBUG nova.compute.manager [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:13:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 03:13:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:35.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 03:13:35 np0005539552 nova_compute[233724]: 2025-11-29 08:13:35.744 233728 DEBUG oslo_concurrency.lockutils [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:35 np0005539552 nova_compute[233724]: 2025-11-29 08:13:35.744 233728 DEBUG oslo_concurrency.lockutils [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:35 np0005539552 nova_compute[233724]: 2025-11-29 08:13:35.749 233728 DEBUG nova.virt.hardware [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:13:35 np0005539552 nova_compute[233724]: 2025-11-29 08:13:35.749 233728 INFO nova.compute.claims [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:13:35 np0005539552 nova_compute[233724]: 2025-11-29 08:13:35.890 233728 DEBUG oslo_concurrency.processutils [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:35 np0005539552 nova_compute[233724]: 2025-11-29 08:13:35.922 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:36 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:13:36 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3541463808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:13:36 np0005539552 nova_compute[233724]: 2025-11-29 08:13:36.304 233728 DEBUG oslo_concurrency.processutils [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:36 np0005539552 nova_compute[233724]: 2025-11-29 08:13:36.309 233728 DEBUG nova.compute.provider_tree [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:13:36 np0005539552 nova_compute[233724]: 2025-11-29 08:13:36.325 233728 DEBUG nova.scheduler.client.report [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:13:36 np0005539552 nova_compute[233724]: 2025-11-29 08:13:36.346 233728 DEBUG oslo_concurrency.lockutils [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:36 np0005539552 nova_compute[233724]: 2025-11-29 08:13:36.347 233728 DEBUG nova.compute.manager [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:13:36 np0005539552 nova_compute[233724]: 2025-11-29 08:13:36.408 233728 DEBUG nova.compute.manager [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:13:36 np0005539552 nova_compute[233724]: 2025-11-29 08:13:36.409 233728 DEBUG nova.network.neutron [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:13:36 np0005539552 nova_compute[233724]: 2025-11-29 08:13:36.454 233728 INFO nova.virt.libvirt.driver [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:13:36 np0005539552 nova_compute[233724]: 2025-11-29 08:13:36.494 233728 DEBUG nova.compute.manager [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:13:36 np0005539552 nova_compute[233724]: 2025-11-29 08:13:36.619 233728 DEBUG nova.compute.manager [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:13:36 np0005539552 nova_compute[233724]: 2025-11-29 08:13:36.621 233728 DEBUG nova.virt.libvirt.driver [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:13:36 np0005539552 nova_compute[233724]: 2025-11-29 08:13:36.621 233728 INFO nova.virt.libvirt.driver [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Creating image(s)#033[00m
Nov 29 03:13:36 np0005539552 nova_compute[233724]: 2025-11-29 08:13:36.648 233728 DEBUG nova.storage.rbd_utils [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] rbd image 36048c92-5df2-425d-b12f-1ce0326cc6a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:36 np0005539552 nova_compute[233724]: 2025-11-29 08:13:36.671 233728 DEBUG nova.storage.rbd_utils [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] rbd image 36048c92-5df2-425d-b12f-1ce0326cc6a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:36 np0005539552 nova_compute[233724]: 2025-11-29 08:13:36.695 233728 DEBUG nova.storage.rbd_utils [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] rbd image 36048c92-5df2-425d-b12f-1ce0326cc6a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:36 np0005539552 nova_compute[233724]: 2025-11-29 08:13:36.698 233728 DEBUG oslo_concurrency.processutils [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:36 np0005539552 nova_compute[233724]: 2025-11-29 08:13:36.766 233728 DEBUG oslo_concurrency.processutils [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:36 np0005539552 nova_compute[233724]: 2025-11-29 08:13:36.767 233728 DEBUG oslo_concurrency.lockutils [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:36 np0005539552 nova_compute[233724]: 2025-11-29 08:13:36.768 233728 DEBUG oslo_concurrency.lockutils [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:36 np0005539552 nova_compute[233724]: 2025-11-29 08:13:36.768 233728 DEBUG oslo_concurrency.lockutils [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:36 np0005539552 nova_compute[233724]: 2025-11-29 08:13:36.794 233728 DEBUG nova.storage.rbd_utils [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] rbd image 36048c92-5df2-425d-b12f-1ce0326cc6a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:36 np0005539552 nova_compute[233724]: 2025-11-29 08:13:36.798 233728 DEBUG oslo_concurrency.processutils [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 36048c92-5df2-425d-b12f-1ce0326cc6a1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:37 np0005539552 nova_compute[233724]: 2025-11-29 08:13:37.060 233728 DEBUG oslo_concurrency.processutils [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 36048c92-5df2-425d-b12f-1ce0326cc6a1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.262s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:37.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:37 np0005539552 nova_compute[233724]: 2025-11-29 08:13:37.092 233728 DEBUG nova.policy [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '05e59f4debd946ad9b7a4bac0e968bc6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '17c0ff0fdeac43fc8fa0d7bedad67c34', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:13:37 np0005539552 nova_compute[233724]: 2025-11-29 08:13:37.127 233728 DEBUG nova.storage.rbd_utils [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] resizing rbd image 36048c92-5df2-425d-b12f-1ce0326cc6a1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:13:37 np0005539552 nova_compute[233724]: 2025-11-29 08:13:37.220 233728 DEBUG nova.objects.instance [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lazy-loading 'migration_context' on Instance uuid 36048c92-5df2-425d-b12f-1ce0326cc6a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:13:37 np0005539552 nova_compute[233724]: 2025-11-29 08:13:37.242 233728 DEBUG nova.virt.libvirt.driver [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:13:37 np0005539552 nova_compute[233724]: 2025-11-29 08:13:37.242 233728 DEBUG nova.virt.libvirt.driver [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Ensure instance console log exists: /var/lib/nova/instances/36048c92-5df2-425d-b12f-1ce0326cc6a1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:13:37 np0005539552 nova_compute[233724]: 2025-11-29 08:13:37.243 233728 DEBUG oslo_concurrency.lockutils [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:37 np0005539552 nova_compute[233724]: 2025-11-29 08:13:37.243 233728 DEBUG oslo_concurrency.lockutils [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:37 np0005539552 nova_compute[233724]: 2025-11-29 08:13:37.244 233728 DEBUG oslo_concurrency.lockutils [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:37.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:38 np0005539552 nova_compute[233724]: 2025-11-29 08:13:38.111 233728 DEBUG nova.network.neutron [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Successfully created port: 1d12c166-4cae-49ec-ab9b-149d65ceb0b6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:13:38 np0005539552 nova_compute[233724]: 2025-11-29 08:13:38.879 233728 DEBUG oslo_concurrency.lockutils [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Acquiring lock "c32e74e2-e74f-4877-8130-ad35d31bb992" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:38 np0005539552 nova_compute[233724]: 2025-11-29 08:13:38.880 233728 DEBUG oslo_concurrency.lockutils [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Lock "c32e74e2-e74f-4877-8130-ad35d31bb992" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:38 np0005539552 nova_compute[233724]: 2025-11-29 08:13:38.900 233728 DEBUG nova.compute.manager [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:13:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:39.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:39 np0005539552 nova_compute[233724]: 2025-11-29 08:13:39.103 233728 DEBUG oslo_concurrency.lockutils [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:39 np0005539552 nova_compute[233724]: 2025-11-29 08:13:39.104 233728 DEBUG oslo_concurrency.lockutils [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:39 np0005539552 nova_compute[233724]: 2025-11-29 08:13:39.111 233728 DEBUG nova.virt.hardware [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:13:39 np0005539552 nova_compute[233724]: 2025-11-29 08:13:39.111 233728 INFO nova.compute.claims [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:13:39 np0005539552 nova_compute[233724]: 2025-11-29 08:13:39.310 233728 DEBUG oslo_concurrency.processutils [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:39.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:39 np0005539552 nova_compute[233724]: 2025-11-29 08:13:39.676 233728 DEBUG nova.network.neutron [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Successfully updated port: 1d12c166-4cae-49ec-ab9b-149d65ceb0b6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:13:39 np0005539552 nova_compute[233724]: 2025-11-29 08:13:39.724 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:39 np0005539552 nova_compute[233724]: 2025-11-29 08:13:39.729 233728 DEBUG oslo_concurrency.lockutils [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Acquiring lock "refresh_cache-36048c92-5df2-425d-b12f-1ce0326cc6a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:13:39 np0005539552 nova_compute[233724]: 2025-11-29 08:13:39.730 233728 DEBUG oslo_concurrency.lockutils [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Acquired lock "refresh_cache-36048c92-5df2-425d-b12f-1ce0326cc6a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:13:39 np0005539552 nova_compute[233724]: 2025-11-29 08:13:39.730 233728 DEBUG nova.network.neutron [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:13:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:13:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/653891024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:13:39 np0005539552 nova_compute[233724]: 2025-11-29 08:13:39.780 233728 DEBUG oslo_concurrency.processutils [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:39 np0005539552 nova_compute[233724]: 2025-11-29 08:13:39.786 233728 DEBUG nova.compute.provider_tree [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:13:39 np0005539552 nova_compute[233724]: 2025-11-29 08:13:39.818 233728 DEBUG nova.scheduler.client.report [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:13:39 np0005539552 nova_compute[233724]: 2025-11-29 08:13:39.876 233728 DEBUG nova.compute.manager [req-25b496b7-0267-4e7c-8742-1596f703008d req-5376da05-d95d-4ed0-8439-30e8ce02a830 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Received event network-changed-1d12c166-4cae-49ec-ab9b-149d65ceb0b6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:13:39 np0005539552 nova_compute[233724]: 2025-11-29 08:13:39.877 233728 DEBUG nova.compute.manager [req-25b496b7-0267-4e7c-8742-1596f703008d req-5376da05-d95d-4ed0-8439-30e8ce02a830 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Refreshing instance network info cache due to event network-changed-1d12c166-4cae-49ec-ab9b-149d65ceb0b6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:13:39 np0005539552 nova_compute[233724]: 2025-11-29 08:13:39.878 233728 DEBUG oslo_concurrency.lockutils [req-25b496b7-0267-4e7c-8742-1596f703008d req-5376da05-d95d-4ed0-8439-30e8ce02a830 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-36048c92-5df2-425d-b12f-1ce0326cc6a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:13:39 np0005539552 nova_compute[233724]: 2025-11-29 08:13:39.887 233728 DEBUG oslo_concurrency.lockutils [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:39 np0005539552 nova_compute[233724]: 2025-11-29 08:13:39.888 233728 DEBUG nova.compute.manager [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:13:39 np0005539552 nova_compute[233724]: 2025-11-29 08:13:39.976 233728 DEBUG nova.compute.manager [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:13:39 np0005539552 nova_compute[233724]: 2025-11-29 08:13:39.976 233728 DEBUG nova.network.neutron [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:13:40 np0005539552 nova_compute[233724]: 2025-11-29 08:13:40.025 233728 INFO nova.virt.libvirt.driver [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:13:40 np0005539552 nova_compute[233724]: 2025-11-29 08:13:40.057 233728 DEBUG nova.compute.manager [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:13:40 np0005539552 nova_compute[233724]: 2025-11-29 08:13:40.205 233728 DEBUG nova.compute.manager [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:13:40 np0005539552 nova_compute[233724]: 2025-11-29 08:13:40.207 233728 DEBUG nova.virt.libvirt.driver [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:13:40 np0005539552 nova_compute[233724]: 2025-11-29 08:13:40.207 233728 INFO nova.virt.libvirt.driver [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Creating image(s)#033[00m
Nov 29 03:13:40 np0005539552 nova_compute[233724]: 2025-11-29 08:13:40.241 233728 DEBUG nova.storage.rbd_utils [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] rbd image c32e74e2-e74f-4877-8130-ad35d31bb992_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:40 np0005539552 nova_compute[233724]: 2025-11-29 08:13:40.271 233728 DEBUG nova.storage.rbd_utils [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] rbd image c32e74e2-e74f-4877-8130-ad35d31bb992_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:40 np0005539552 nova_compute[233724]: 2025-11-29 08:13:40.300 233728 DEBUG nova.storage.rbd_utils [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] rbd image c32e74e2-e74f-4877-8130-ad35d31bb992_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:40 np0005539552 nova_compute[233724]: 2025-11-29 08:13:40.304 233728 DEBUG oslo_concurrency.processutils [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:40 np0005539552 nova_compute[233724]: 2025-11-29 08:13:40.340 233728 DEBUG nova.network.neutron [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:13:40 np0005539552 nova_compute[233724]: 2025-11-29 08:13:40.387 233728 DEBUG oslo_concurrency.processutils [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:40 np0005539552 nova_compute[233724]: 2025-11-29 08:13:40.388 233728 DEBUG oslo_concurrency.lockutils [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:40 np0005539552 nova_compute[233724]: 2025-11-29 08:13:40.389 233728 DEBUG oslo_concurrency.lockutils [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:40 np0005539552 nova_compute[233724]: 2025-11-29 08:13:40.389 233728 DEBUG oslo_concurrency.lockutils [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:40 np0005539552 nova_compute[233724]: 2025-11-29 08:13:40.419 233728 DEBUG nova.storage.rbd_utils [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] rbd image c32e74e2-e74f-4877-8130-ad35d31bb992_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:40 np0005539552 nova_compute[233724]: 2025-11-29 08:13:40.423 233728 DEBUG oslo_concurrency.processutils [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 c32e74e2-e74f-4877-8130-ad35d31bb992_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:40 np0005539552 nova_compute[233724]: 2025-11-29 08:13:40.450 233728 DEBUG nova.policy [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '90573489491c4659ba4a8ccbd6b896a7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b5f1f0d72cd0427a8cda48db244caf6c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:13:40 np0005539552 nova_compute[233724]: 2025-11-29 08:13:40.769 233728 DEBUG oslo_concurrency.processutils [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 c32e74e2-e74f-4877-8130-ad35d31bb992_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.346s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:40 np0005539552 nova_compute[233724]: 2025-11-29 08:13:40.838 233728 DEBUG nova.storage.rbd_utils [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] resizing rbd image c32e74e2-e74f-4877-8130-ad35d31bb992_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:13:40 np0005539552 nova_compute[233724]: 2025-11-29 08:13:40.948 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:40 np0005539552 nova_compute[233724]: 2025-11-29 08:13:40.957 233728 DEBUG nova.objects.instance [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Lazy-loading 'migration_context' on Instance uuid c32e74e2-e74f-4877-8130-ad35d31bb992 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:13:40 np0005539552 nova_compute[233724]: 2025-11-29 08:13:40.973 233728 DEBUG nova.virt.libvirt.driver [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:13:40 np0005539552 nova_compute[233724]: 2025-11-29 08:13:40.974 233728 DEBUG nova.virt.libvirt.driver [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Ensure instance console log exists: /var/lib/nova/instances/c32e74e2-e74f-4877-8130-ad35d31bb992/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:13:40 np0005539552 nova_compute[233724]: 2025-11-29 08:13:40.974 233728 DEBUG oslo_concurrency.lockutils [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:40 np0005539552 nova_compute[233724]: 2025-11-29 08:13:40.974 233728 DEBUG oslo_concurrency.lockutils [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:40 np0005539552 nova_compute[233724]: 2025-11-29 08:13:40.974 233728 DEBUG oslo_concurrency.lockutils [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:41.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:41.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:42 np0005539552 nova_compute[233724]: 2025-11-29 08:13:42.507 233728 DEBUG nova.network.neutron [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Successfully created port: 2dc39626-7aae-4e0c-a70b-e08d83c9788b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:13:42 np0005539552 nova_compute[233724]: 2025-11-29 08:13:42.519 233728 DEBUG nova.network.neutron [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Updating instance_info_cache with network_info: [{"id": "1d12c166-4cae-49ec-ab9b-149d65ceb0b6", "address": "fa:16:3e:b2:af:cf", "network": {"id": "5ce08321-9ca9-47d5-b99b-65a439440787", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1544923692-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17c0ff0fdeac43fc8fa0d7bedad67c34", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d12c166-4c", "ovs_interfaceid": "1d12c166-4cae-49ec-ab9b-149d65ceb0b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:13:42 np0005539552 nova_compute[233724]: 2025-11-29 08:13:42.591 233728 DEBUG oslo_concurrency.lockutils [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Releasing lock "refresh_cache-36048c92-5df2-425d-b12f-1ce0326cc6a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:13:42 np0005539552 nova_compute[233724]: 2025-11-29 08:13:42.591 233728 DEBUG nova.compute.manager [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Instance network_info: |[{"id": "1d12c166-4cae-49ec-ab9b-149d65ceb0b6", "address": "fa:16:3e:b2:af:cf", "network": {"id": "5ce08321-9ca9-47d5-b99b-65a439440787", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1544923692-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17c0ff0fdeac43fc8fa0d7bedad67c34", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d12c166-4c", "ovs_interfaceid": "1d12c166-4cae-49ec-ab9b-149d65ceb0b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:13:42 np0005539552 nova_compute[233724]: 2025-11-29 08:13:42.591 233728 DEBUG oslo_concurrency.lockutils [req-25b496b7-0267-4e7c-8742-1596f703008d req-5376da05-d95d-4ed0-8439-30e8ce02a830 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-36048c92-5df2-425d-b12f-1ce0326cc6a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:13:42 np0005539552 nova_compute[233724]: 2025-11-29 08:13:42.592 233728 DEBUG nova.network.neutron [req-25b496b7-0267-4e7c-8742-1596f703008d req-5376da05-d95d-4ed0-8439-30e8ce02a830 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Refreshing network info cache for port 1d12c166-4cae-49ec-ab9b-149d65ceb0b6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:13:42 np0005539552 nova_compute[233724]: 2025-11-29 08:13:42.595 233728 DEBUG nova.virt.libvirt.driver [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Start _get_guest_xml network_info=[{"id": "1d12c166-4cae-49ec-ab9b-149d65ceb0b6", "address": "fa:16:3e:b2:af:cf", "network": {"id": "5ce08321-9ca9-47d5-b99b-65a439440787", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1544923692-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17c0ff0fdeac43fc8fa0d7bedad67c34", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d12c166-4c", "ovs_interfaceid": "1d12c166-4cae-49ec-ab9b-149d65ceb0b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:13:42 np0005539552 nova_compute[233724]: 2025-11-29 08:13:42.599 233728 WARNING nova.virt.libvirt.driver [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:13:42 np0005539552 nova_compute[233724]: 2025-11-29 08:13:42.605 233728 DEBUG nova.virt.libvirt.host [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 03:13:42 np0005539552 nova_compute[233724]: 2025-11-29 08:13:42.606 233728 DEBUG nova.virt.libvirt.host [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 03:13:42 np0005539552 nova_compute[233724]: 2025-11-29 08:13:42.613 233728 DEBUG nova.virt.libvirt.host [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 03:13:42 np0005539552 nova_compute[233724]: 2025-11-29 08:13:42.614 233728 DEBUG nova.virt.libvirt.host [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 03:13:42 np0005539552 nova_compute[233724]: 2025-11-29 08:13:42.615 233728 DEBUG nova.virt.libvirt.driver [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 03:13:42 np0005539552 nova_compute[233724]: 2025-11-29 08:13:42.615 233728 DEBUG nova.virt.hardware [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 03:13:42 np0005539552 nova_compute[233724]: 2025-11-29 08:13:42.616 233728 DEBUG nova.virt.hardware [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 03:13:42 np0005539552 nova_compute[233724]: 2025-11-29 08:13:42.616 233728 DEBUG nova.virt.hardware [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 03:13:42 np0005539552 nova_compute[233724]: 2025-11-29 08:13:42.616 233728 DEBUG nova.virt.hardware [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 03:13:42 np0005539552 nova_compute[233724]: 2025-11-29 08:13:42.616 233728 DEBUG nova.virt.hardware [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 03:13:42 np0005539552 nova_compute[233724]: 2025-11-29 08:13:42.617 233728 DEBUG nova.virt.hardware [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 03:13:42 np0005539552 nova_compute[233724]: 2025-11-29 08:13:42.617 233728 DEBUG nova.virt.hardware [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 03:13:42 np0005539552 nova_compute[233724]: 2025-11-29 08:13:42.617 233728 DEBUG nova.virt.hardware [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 03:13:42 np0005539552 nova_compute[233724]: 2025-11-29 08:13:42.617 233728 DEBUG nova.virt.hardware [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 03:13:42 np0005539552 nova_compute[233724]: 2025-11-29 08:13:42.617 233728 DEBUG nova.virt.hardware [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 03:13:42 np0005539552 nova_compute[233724]: 2025-11-29 08:13:42.618 233728 DEBUG nova.virt.hardware [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 03:13:42 np0005539552 nova_compute[233724]: 2025-11-29 08:13:42.621 233728 DEBUG oslo_concurrency.processutils [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:13:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:13:43 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1008313351' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:13:43 np0005539552 nova_compute[233724]: 2025-11-29 08:13:43.065 233728 DEBUG oslo_concurrency.processutils [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:13:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:43.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:43 np0005539552 nova_compute[233724]: 2025-11-29 08:13:43.092 233728 DEBUG nova.storage.rbd_utils [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] rbd image 36048c92-5df2-425d-b12f-1ce0326cc6a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:13:43 np0005539552 nova_compute[233724]: 2025-11-29 08:13:43.096 233728 DEBUG oslo_concurrency.processutils [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:13:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:13:43 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/796236548' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:13:43 np0005539552 nova_compute[233724]: 2025-11-29 08:13:43.542 233728 DEBUG oslo_concurrency.processutils [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:13:43 np0005539552 nova_compute[233724]: 2025-11-29 08:13:43.544 233728 DEBUG nova.virt.libvirt.vif [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:13:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-607468552',display_name='tempest-ListServerFiltersTestJSON-instance-607468552',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-607468552',id=90,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='17c0ff0fdeac43fc8fa0d7bedad67c34',ramdisk_id='',reservation_id='r-angclfpm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-825347861',owner_user_name='tempest-ListServerFiltersTestJSON-825347861-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:13:36Z,user_data=None,user_id='05e59f4debd946ad9b7a4bac0e968bc6',uuid=36048c92-5df2-425d-b12f-1ce0326cc6a1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1d12c166-4cae-49ec-ab9b-149d65ceb0b6", "address": "fa:16:3e:b2:af:cf", "network": {"id": "5ce08321-9ca9-47d5-b99b-65a439440787", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1544923692-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17c0ff0fdeac43fc8fa0d7bedad67c34", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d12c166-4c", "ovs_interfaceid": "1d12c166-4cae-49ec-ab9b-149d65ceb0b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 03:13:43 np0005539552 nova_compute[233724]: 2025-11-29 08:13:43.544 233728 DEBUG nova.network.os_vif_util [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Converting VIF {"id": "1d12c166-4cae-49ec-ab9b-149d65ceb0b6", "address": "fa:16:3e:b2:af:cf", "network": {"id": "5ce08321-9ca9-47d5-b99b-65a439440787", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1544923692-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17c0ff0fdeac43fc8fa0d7bedad67c34", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d12c166-4c", "ovs_interfaceid": "1d12c166-4cae-49ec-ab9b-149d65ceb0b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:13:43 np0005539552 nova_compute[233724]: 2025-11-29 08:13:43.545 233728 DEBUG nova.network.os_vif_util [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b2:af:cf,bridge_name='br-int',has_traffic_filtering=True,id=1d12c166-4cae-49ec-ab9b-149d65ceb0b6,network=Network(5ce08321-9ca9-47d5-b99b-65a439440787),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d12c166-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:13:43 np0005539552 nova_compute[233724]: 2025-11-29 08:13:43.546 233728 DEBUG nova.objects.instance [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lazy-loading 'pci_devices' on Instance uuid 36048c92-5df2-425d-b12f-1ce0326cc6a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:13:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:43.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:43 np0005539552 nova_compute[233724]: 2025-11-29 08:13:43.931 233728 DEBUG nova.virt.libvirt.driver [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:13:43 np0005539552 nova_compute[233724]:  <uuid>36048c92-5df2-425d-b12f-1ce0326cc6a1</uuid>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:  <name>instance-0000005a</name>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:13:43 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:      <nova:name>tempest-ListServerFiltersTestJSON-instance-607468552</nova:name>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:13:42</nova:creationTime>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:13:43 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:        <nova:user uuid="05e59f4debd946ad9b7a4bac0e968bc6">tempest-ListServerFiltersTestJSON-825347861-project-member</nova:user>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:        <nova:project uuid="17c0ff0fdeac43fc8fa0d7bedad67c34">tempest-ListServerFiltersTestJSON-825347861</nova:project>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:        <nova:port uuid="1d12c166-4cae-49ec-ab9b-149d65ceb0b6">
Nov 29 03:13:43 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:      <entry name="serial">36048c92-5df2-425d-b12f-1ce0326cc6a1</entry>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:      <entry name="uuid">36048c92-5df2-425d-b12f-1ce0326cc6a1</entry>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:13:43 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/36048c92-5df2-425d-b12f-1ce0326cc6a1_disk">
Nov 29 03:13:43 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:13:43 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:13:43 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/36048c92-5df2-425d-b12f-1ce0326cc6a1_disk.config">
Nov 29 03:13:43 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:13:43 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:13:43 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:b2:af:cf"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:      <target dev="tap1d12c166-4c"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:13:43 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/36048c92-5df2-425d-b12f-1ce0326cc6a1/console.log" append="off"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:13:43 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:13:43 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:13:43 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:13:43 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:13:43 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 03:13:43 np0005539552 nova_compute[233724]: 2025-11-29 08:13:43.932 233728 DEBUG nova.compute.manager [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Preparing to wait for external event network-vif-plugged-1d12c166-4cae-49ec-ab9b-149d65ceb0b6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 03:13:43 np0005539552 nova_compute[233724]: 2025-11-29 08:13:43.933 233728 DEBUG oslo_concurrency.lockutils [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Acquiring lock "36048c92-5df2-425d-b12f-1ce0326cc6a1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:13:43 np0005539552 nova_compute[233724]: 2025-11-29 08:13:43.933 233728 DEBUG oslo_concurrency.lockutils [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "36048c92-5df2-425d-b12f-1ce0326cc6a1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:13:43 np0005539552 nova_compute[233724]: 2025-11-29 08:13:43.933 233728 DEBUG oslo_concurrency.lockutils [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "36048c92-5df2-425d-b12f-1ce0326cc6a1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:13:43 np0005539552 nova_compute[233724]: 2025-11-29 08:13:43.934 233728 DEBUG nova.virt.libvirt.vif [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:13:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-607468552',display_name='tempest-ListServerFiltersTestJSON-instance-607468552',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-607468552',id=90,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='17c0ff0fdeac43fc8fa0d7bedad67c34',ramdisk_id='',reservation_id='r-angclfpm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-825347861',owner_user_name=
'tempest-ListServerFiltersTestJSON-825347861-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:13:36Z,user_data=None,user_id='05e59f4debd946ad9b7a4bac0e968bc6',uuid=36048c92-5df2-425d-b12f-1ce0326cc6a1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1d12c166-4cae-49ec-ab9b-149d65ceb0b6", "address": "fa:16:3e:b2:af:cf", "network": {"id": "5ce08321-9ca9-47d5-b99b-65a439440787", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1544923692-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17c0ff0fdeac43fc8fa0d7bedad67c34", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d12c166-4c", "ovs_interfaceid": "1d12c166-4cae-49ec-ab9b-149d65ceb0b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:13:43 np0005539552 nova_compute[233724]: 2025-11-29 08:13:43.934 233728 DEBUG nova.network.os_vif_util [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Converting VIF {"id": "1d12c166-4cae-49ec-ab9b-149d65ceb0b6", "address": "fa:16:3e:b2:af:cf", "network": {"id": "5ce08321-9ca9-47d5-b99b-65a439440787", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1544923692-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17c0ff0fdeac43fc8fa0d7bedad67c34", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d12c166-4c", "ovs_interfaceid": "1d12c166-4cae-49ec-ab9b-149d65ceb0b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:13:43 np0005539552 nova_compute[233724]: 2025-11-29 08:13:43.935 233728 DEBUG nova.network.os_vif_util [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b2:af:cf,bridge_name='br-int',has_traffic_filtering=True,id=1d12c166-4cae-49ec-ab9b-149d65ceb0b6,network=Network(5ce08321-9ca9-47d5-b99b-65a439440787),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d12c166-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:13:43 np0005539552 nova_compute[233724]: 2025-11-29 08:13:43.935 233728 DEBUG os_vif [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:af:cf,bridge_name='br-int',has_traffic_filtering=True,id=1d12c166-4cae-49ec-ab9b-149d65ceb0b6,network=Network(5ce08321-9ca9-47d5-b99b-65a439440787),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d12c166-4c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:13:43 np0005539552 nova_compute[233724]: 2025-11-29 08:13:43.936 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:43 np0005539552 nova_compute[233724]: 2025-11-29 08:13:43.937 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:43 np0005539552 nova_compute[233724]: 2025-11-29 08:13:43.937 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:13:43 np0005539552 nova_compute[233724]: 2025-11-29 08:13:43.940 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:43 np0005539552 nova_compute[233724]: 2025-11-29 08:13:43.940 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1d12c166-4c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:43 np0005539552 nova_compute[233724]: 2025-11-29 08:13:43.941 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1d12c166-4c, col_values=(('external_ids', {'iface-id': '1d12c166-4cae-49ec-ab9b-149d65ceb0b6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b2:af:cf', 'vm-uuid': '36048c92-5df2-425d-b12f-1ce0326cc6a1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:43 np0005539552 nova_compute[233724]: 2025-11-29 08:13:43.942 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:43 np0005539552 NetworkManager[48926]: <info>  [1764404023.9437] manager: (tap1d12c166-4c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/149)
Nov 29 03:13:43 np0005539552 nova_compute[233724]: 2025-11-29 08:13:43.945 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:13:43 np0005539552 nova_compute[233724]: 2025-11-29 08:13:43.949 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:43 np0005539552 nova_compute[233724]: 2025-11-29 08:13:43.950 233728 INFO os_vif [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:af:cf,bridge_name='br-int',has_traffic_filtering=True,id=1d12c166-4cae-49ec-ab9b-149d65ceb0b6,network=Network(5ce08321-9ca9-47d5-b99b-65a439440787),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d12c166-4c')#033[00m
Nov 29 03:13:44 np0005539552 nova_compute[233724]: 2025-11-29 08:13:44.016 233728 DEBUG nova.virt.libvirt.driver [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:13:44 np0005539552 nova_compute[233724]: 2025-11-29 08:13:44.017 233728 DEBUG nova.virt.libvirt.driver [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:13:44 np0005539552 nova_compute[233724]: 2025-11-29 08:13:44.017 233728 DEBUG nova.virt.libvirt.driver [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] No VIF found with MAC fa:16:3e:b2:af:cf, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:13:44 np0005539552 nova_compute[233724]: 2025-11-29 08:13:44.017 233728 INFO nova.virt.libvirt.driver [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Using config drive#033[00m
Nov 29 03:13:44 np0005539552 nova_compute[233724]: 2025-11-29 08:13:44.046 233728 DEBUG nova.storage.rbd_utils [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] rbd image 36048c92-5df2-425d-b12f-1ce0326cc6a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:44 np0005539552 nova_compute[233724]: 2025-11-29 08:13:44.630 233728 INFO nova.virt.libvirt.driver [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Creating config drive at /var/lib/nova/instances/36048c92-5df2-425d-b12f-1ce0326cc6a1/disk.config#033[00m
Nov 29 03:13:44 np0005539552 nova_compute[233724]: 2025-11-29 08:13:44.636 233728 DEBUG oslo_concurrency.processutils [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/36048c92-5df2-425d-b12f-1ce0326cc6a1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqcn5d9ue execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:44 np0005539552 nova_compute[233724]: 2025-11-29 08:13:44.748 233728 DEBUG nova.network.neutron [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Successfully updated port: 2dc39626-7aae-4e0c-a70b-e08d83c9788b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:13:44 np0005539552 nova_compute[233724]: 2025-11-29 08:13:44.767 233728 DEBUG oslo_concurrency.lockutils [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Acquiring lock "refresh_cache-c32e74e2-e74f-4877-8130-ad35d31bb992" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:13:44 np0005539552 nova_compute[233724]: 2025-11-29 08:13:44.768 233728 DEBUG oslo_concurrency.lockutils [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Acquired lock "refresh_cache-c32e74e2-e74f-4877-8130-ad35d31bb992" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:13:44 np0005539552 nova_compute[233724]: 2025-11-29 08:13:44.768 233728 DEBUG nova.network.neutron [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:13:44 np0005539552 nova_compute[233724]: 2025-11-29 08:13:44.769 233728 DEBUG oslo_concurrency.processutils [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/36048c92-5df2-425d-b12f-1ce0326cc6a1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqcn5d9ue" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:44 np0005539552 nova_compute[233724]: 2025-11-29 08:13:44.806 233728 DEBUG nova.storage.rbd_utils [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] rbd image 36048c92-5df2-425d-b12f-1ce0326cc6a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:44 np0005539552 nova_compute[233724]: 2025-11-29 08:13:44.810 233728 DEBUG oslo_concurrency.processutils [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/36048c92-5df2-425d-b12f-1ce0326cc6a1/disk.config 36048c92-5df2-425d-b12f-1ce0326cc6a1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:44 np0005539552 nova_compute[233724]: 2025-11-29 08:13:44.893 233728 DEBUG nova.compute.manager [req-e8241704-b2f4-4234-84a6-cf84f9586fb0 req-401b4652-1d9e-4449-ac6f-9365074b8d04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Received event network-changed-2dc39626-7aae-4e0c-a70b-e08d83c9788b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:13:44 np0005539552 nova_compute[233724]: 2025-11-29 08:13:44.894 233728 DEBUG nova.compute.manager [req-e8241704-b2f4-4234-84a6-cf84f9586fb0 req-401b4652-1d9e-4449-ac6f-9365074b8d04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Refreshing instance network info cache due to event network-changed-2dc39626-7aae-4e0c-a70b-e08d83c9788b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:13:44 np0005539552 nova_compute[233724]: 2025-11-29 08:13:44.894 233728 DEBUG oslo_concurrency.lockutils [req-e8241704-b2f4-4234-84a6-cf84f9586fb0 req-401b4652-1d9e-4449-ac6f-9365074b8d04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-c32e74e2-e74f-4877-8130-ad35d31bb992" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:13:44 np0005539552 nova_compute[233724]: 2025-11-29 08:13:44.995 233728 DEBUG oslo_concurrency.processutils [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/36048c92-5df2-425d-b12f-1ce0326cc6a1/disk.config 36048c92-5df2-425d-b12f-1ce0326cc6a1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.184s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:44 np0005539552 nova_compute[233724]: 2025-11-29 08:13:44.996 233728 INFO nova.virt.libvirt.driver [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Deleting local config drive /var/lib/nova/instances/36048c92-5df2-425d-b12f-1ce0326cc6a1/disk.config because it was imported into RBD.#033[00m
Nov 29 03:13:45 np0005539552 kernel: tap1d12c166-4c: entered promiscuous mode
Nov 29 03:13:45 np0005539552 NetworkManager[48926]: <info>  [1764404025.0492] manager: (tap1d12c166-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/150)
Nov 29 03:13:45 np0005539552 ovn_controller[133798]: 2025-11-29T08:13:45Z|00290|binding|INFO|Claiming lport 1d12c166-4cae-49ec-ab9b-149d65ceb0b6 for this chassis.
Nov 29 03:13:45 np0005539552 ovn_controller[133798]: 2025-11-29T08:13:45Z|00291|binding|INFO|1d12c166-4cae-49ec-ab9b-149d65ceb0b6: Claiming fa:16:3e:b2:af:cf 10.100.0.10
Nov 29 03:13:45 np0005539552 nova_compute[233724]: 2025-11-29 08:13:45.080 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:45 np0005539552 systemd-udevd[269784]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:13:45 np0005539552 nova_compute[233724]: 2025-11-29 08:13:45.085 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:13:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:45.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:13:45 np0005539552 NetworkManager[48926]: <info>  [1764404025.0957] device (tap1d12c166-4c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:13:45 np0005539552 NetworkManager[48926]: <info>  [1764404025.0969] device (tap1d12c166-4c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:45.099 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:af:cf 10.100.0.10'], port_security=['fa:16:3e:b2:af:cf 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '36048c92-5df2-425d-b12f-1ce0326cc6a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5ce08321-9ca9-47d5-b99b-65a439440787', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '17c0ff0fdeac43fc8fa0d7bedad67c34', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9e0588e8-cc01-4cf1-ba71-74f90ca3214d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=65c90a62-2d0d-4ced-b7e5-a1b1d91ba84b, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=1d12c166-4cae-49ec-ab9b-149d65ceb0b6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:45.100 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 1d12c166-4cae-49ec-ab9b-149d65ceb0b6 in datapath 5ce08321-9ca9-47d5-b99b-65a439440787 bound to our chassis#033[00m
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:45.102 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5ce08321-9ca9-47d5-b99b-65a439440787#033[00m
Nov 29 03:13:45 np0005539552 systemd-machined[196379]: New machine qemu-33-instance-0000005a.
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:45.114 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[8a947b39-4768-4f2e-b9f1-427d37d971f1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:45.115 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5ce08321-91 in ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:45.118 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5ce08321-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:45.118 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[5e75b9e3-db09-4ad6-9697-ab9f20037b0e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:45.119 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[aaa56d2f-174e-455a-aaa7-f4454c73c8df]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:45 np0005539552 ovn_controller[133798]: 2025-11-29T08:13:45Z|00292|binding|INFO|Setting lport 1d12c166-4cae-49ec-ab9b-149d65ceb0b6 ovn-installed in OVS
Nov 29 03:13:45 np0005539552 ovn_controller[133798]: 2025-11-29T08:13:45Z|00293|binding|INFO|Setting lport 1d12c166-4cae-49ec-ab9b-149d65ceb0b6 up in Southbound
Nov 29 03:13:45 np0005539552 nova_compute[233724]: 2025-11-29 08:13:45.131 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:45 np0005539552 systemd[1]: Started Virtual Machine qemu-33-instance-0000005a.
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:45.134 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[fbbb09c0-9127-446a-9853-587df775559e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:45.159 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4a941dc4-62de-4b47-bebd-6547c05fd87b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:45 np0005539552 nova_compute[233724]: 2025-11-29 08:13:45.165 233728 DEBUG nova.network.neutron [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:45.184 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[5c1dec89-0247-4ee7-81b2-1e3adcd901df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:45.189 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c30861a8-fa6d-45c3-89f8-fb2ae66c99c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:45 np0005539552 NetworkManager[48926]: <info>  [1764404025.1900] manager: (tap5ce08321-90): new Veth device (/org/freedesktop/NetworkManager/Devices/151)
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:45.217 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[fee27def-95e1-42de-a699-725afa2179aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:45.222 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[96ba6d97-8c56-48c5-b837-4337b16d1947]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:45 np0005539552 NetworkManager[48926]: <info>  [1764404025.2416] device (tap5ce08321-90): carrier: link connected
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:45.250 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[dad49717-1ceb-4a87-8433-88b11964285a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:45.267 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e936e1c1-99d5-42a3-936d-c05a6ced0ec7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5ce08321-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:bc:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 92], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707092, 'reachable_time': 15820, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269820, 'error': None, 'target': 'ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:45.280 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e0b7366c-b7ee-4e84-84e8-064d3cb2cdca]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7b:bc0c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 707092, 'tstamp': 707092}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269821, 'error': None, 'target': 'ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:45.301 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[35c5dba1-0747-475b-946b-44dae829b039]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5ce08321-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:bc:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 92], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707092, 'reachable_time': 15820, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 269822, 'error': None, 'target': 'ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:45.331 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d4f4ac0e-4d85-4a61-93a2-24d3c3c7b5c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:45.392 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[355f3568-0a5e-4e2d-a429-307a39e68f65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:45.394 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5ce08321-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:45.394 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:45.394 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5ce08321-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:45 np0005539552 NetworkManager[48926]: <info>  [1764404025.3972] manager: (tap5ce08321-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/152)
Nov 29 03:13:45 np0005539552 kernel: tap5ce08321-90: entered promiscuous mode
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:45.400 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5ce08321-90, col_values=(('external_ids', {'iface-id': 'fb53c57a-d19f-4391-add7-afa34095fb59'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:45 np0005539552 nova_compute[233724]: 2025-11-29 08:13:45.400 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:45 np0005539552 ovn_controller[133798]: 2025-11-29T08:13:45Z|00294|binding|INFO|Releasing lport fb53c57a-d19f-4391-add7-afa34095fb59 from this chassis (sb_readonly=0)
Nov 29 03:13:45 np0005539552 nova_compute[233724]: 2025-11-29 08:13:45.428 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:45.429 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5ce08321-9ca9-47d5-b99b-65a439440787.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5ce08321-9ca9-47d5-b99b-65a439440787.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:45.432 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d140a48e-2605-4299-b457-38d9b74671b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:45.433 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-5ce08321-9ca9-47d5-b99b-65a439440787
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/5ce08321-9ca9-47d5-b99b-65a439440787.pid.haproxy
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 5ce08321-9ca9-47d5-b99b-65a439440787
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:13:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:45.433 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787', 'env', 'PROCESS_TAG=haproxy-5ce08321-9ca9-47d5-b99b-65a439440787', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5ce08321-9ca9-47d5-b99b-65a439440787.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:13:45 np0005539552 nova_compute[233724]: 2025-11-29 08:13:45.569 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404025.5686746, 36048c92-5df2-425d-b12f-1ce0326cc6a1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:13:45 np0005539552 nova_compute[233724]: 2025-11-29 08:13:45.570 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] VM Started (Lifecycle Event)#033[00m
Nov 29 03:13:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:45.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:45 np0005539552 podman[269896]: 2025-11-29 08:13:45.78434894 +0000 UTC m=+0.026444992 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:13:45 np0005539552 nova_compute[233724]: 2025-11-29 08:13:45.925 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:46 np0005539552 podman[269896]: 2025-11-29 08:13:46.248924801 +0000 UTC m=+0.491020823 container create 216d0cc04433d1ecff5ba44b69c88726d43b1f52f4d9bc1d0e3667e384c480da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:13:46 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:13:46 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:13:46 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:13:46 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:13:46 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:13:46 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:13:46 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:13:46 np0005539552 systemd[1]: Started libpod-conmon-216d0cc04433d1ecff5ba44b69c88726d43b1f52f4d9bc1d0e3667e384c480da.scope.
Nov 29 03:13:46 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:13:46 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf2786c71855609f7f27dde0de29d5d396ff0edf5df289028d563f955a7030f4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:13:46 np0005539552 podman[269896]: 2025-11-29 08:13:46.367298392 +0000 UTC m=+0.609394444 container init 216d0cc04433d1ecff5ba44b69c88726d43b1f52f4d9bc1d0e3667e384c480da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:13:46 np0005539552 podman[269896]: 2025-11-29 08:13:46.373343154 +0000 UTC m=+0.615439176 container start 216d0cc04433d1ecff5ba44b69c88726d43b1f52f4d9bc1d0e3667e384c480da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:13:46 np0005539552 neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787[269912]: [NOTICE]   (269916) : New worker (269918) forked
Nov 29 03:13:46 np0005539552 neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787[269912]: [NOTICE]   (269916) : Loading success.
Nov 29 03:13:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:13:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:47.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:13:47 np0005539552 nova_compute[233724]: 2025-11-29 08:13:47.243 233728 DEBUG nova.network.neutron [req-25b496b7-0267-4e7c-8742-1596f703008d req-5376da05-d95d-4ed0-8439-30e8ce02a830 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Updated VIF entry in instance network info cache for port 1d12c166-4cae-49ec-ab9b-149d65ceb0b6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:13:47 np0005539552 nova_compute[233724]: 2025-11-29 08:13:47.245 233728 DEBUG nova.network.neutron [req-25b496b7-0267-4e7c-8742-1596f703008d req-5376da05-d95d-4ed0-8439-30e8ce02a830 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Updating instance_info_cache with network_info: [{"id": "1d12c166-4cae-49ec-ab9b-149d65ceb0b6", "address": "fa:16:3e:b2:af:cf", "network": {"id": "5ce08321-9ca9-47d5-b99b-65a439440787", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1544923692-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17c0ff0fdeac43fc8fa0d7bedad67c34", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d12c166-4c", "ovs_interfaceid": "1d12c166-4cae-49ec-ab9b-149d65ceb0b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:13:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:47.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:48 np0005539552 nova_compute[233724]: 2025-11-29 08:13:48.616 233728 DEBUG nova.compute.manager [req-15b60eab-12df-4f0c-beb0-5ed1755fd9af req-665c9ea2-0b65-41b8-b371-67393c68abbe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Received event network-vif-plugged-1d12c166-4cae-49ec-ab9b-149d65ceb0b6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:13:48 np0005539552 nova_compute[233724]: 2025-11-29 08:13:48.616 233728 DEBUG oslo_concurrency.lockutils [req-15b60eab-12df-4f0c-beb0-5ed1755fd9af req-665c9ea2-0b65-41b8-b371-67393c68abbe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "36048c92-5df2-425d-b12f-1ce0326cc6a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:48 np0005539552 nova_compute[233724]: 2025-11-29 08:13:48.617 233728 DEBUG oslo_concurrency.lockutils [req-15b60eab-12df-4f0c-beb0-5ed1755fd9af req-665c9ea2-0b65-41b8-b371-67393c68abbe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "36048c92-5df2-425d-b12f-1ce0326cc6a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:48 np0005539552 nova_compute[233724]: 2025-11-29 08:13:48.617 233728 DEBUG oslo_concurrency.lockutils [req-15b60eab-12df-4f0c-beb0-5ed1755fd9af req-665c9ea2-0b65-41b8-b371-67393c68abbe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "36048c92-5df2-425d-b12f-1ce0326cc6a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:48 np0005539552 nova_compute[233724]: 2025-11-29 08:13:48.617 233728 DEBUG nova.compute.manager [req-15b60eab-12df-4f0c-beb0-5ed1755fd9af req-665c9ea2-0b65-41b8-b371-67393c68abbe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Processing event network-vif-plugged-1d12c166-4cae-49ec-ab9b-149d65ceb0b6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:13:48 np0005539552 nova_compute[233724]: 2025-11-29 08:13:48.618 233728 DEBUG nova.compute.manager [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:13:48 np0005539552 nova_compute[233724]: 2025-11-29 08:13:48.623 233728 DEBUG nova.virt.libvirt.driver [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:13:48 np0005539552 nova_compute[233724]: 2025-11-29 08:13:48.627 233728 INFO nova.virt.libvirt.driver [-] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Instance spawned successfully.#033[00m
Nov 29 03:13:48 np0005539552 nova_compute[233724]: 2025-11-29 08:13:48.628 233728 DEBUG nova.virt.libvirt.driver [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:13:48 np0005539552 nova_compute[233724]: 2025-11-29 08:13:48.631 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:13:48 np0005539552 nova_compute[233724]: 2025-11-29 08:13:48.637 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:13:48 np0005539552 nova_compute[233724]: 2025-11-29 08:13:48.655 233728 DEBUG oslo_concurrency.lockutils [req-25b496b7-0267-4e7c-8742-1596f703008d req-5376da05-d95d-4ed0-8439-30e8ce02a830 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-36048c92-5df2-425d-b12f-1ce0326cc6a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:13:48 np0005539552 nova_compute[233724]: 2025-11-29 08:13:48.665 233728 DEBUG nova.virt.libvirt.driver [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:13:48 np0005539552 nova_compute[233724]: 2025-11-29 08:13:48.665 233728 DEBUG nova.virt.libvirt.driver [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:13:48 np0005539552 nova_compute[233724]: 2025-11-29 08:13:48.666 233728 DEBUG nova.virt.libvirt.driver [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:13:48 np0005539552 nova_compute[233724]: 2025-11-29 08:13:48.666 233728 DEBUG nova.virt.libvirt.driver [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:13:48 np0005539552 nova_compute[233724]: 2025-11-29 08:13:48.667 233728 DEBUG nova.virt.libvirt.driver [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:13:48 np0005539552 nova_compute[233724]: 2025-11-29 08:13:48.667 233728 DEBUG nova.virt.libvirt.driver [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:13:48 np0005539552 nova_compute[233724]: 2025-11-29 08:13:48.672 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:13:48 np0005539552 nova_compute[233724]: 2025-11-29 08:13:48.673 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404025.56896, 36048c92-5df2-425d-b12f-1ce0326cc6a1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:13:48 np0005539552 nova_compute[233724]: 2025-11-29 08:13:48.673 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:13:48 np0005539552 nova_compute[233724]: 2025-11-29 08:13:48.706 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:13:48 np0005539552 nova_compute[233724]: 2025-11-29 08:13:48.712 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404028.6226377, 36048c92-5df2-425d-b12f-1ce0326cc6a1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:13:48 np0005539552 nova_compute[233724]: 2025-11-29 08:13:48.712 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:13:48 np0005539552 nova_compute[233724]: 2025-11-29 08:13:48.751 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:13:48 np0005539552 nova_compute[233724]: 2025-11-29 08:13:48.755 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:13:48 np0005539552 nova_compute[233724]: 2025-11-29 08:13:48.768 233728 INFO nova.compute.manager [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Took 12.15 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:13:48 np0005539552 nova_compute[233724]: 2025-11-29 08:13:48.768 233728 DEBUG nova.compute.manager [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:13:48 np0005539552 nova_compute[233724]: 2025-11-29 08:13:48.816 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:13:48 np0005539552 nova_compute[233724]: 2025-11-29 08:13:48.854 233728 INFO nova.compute.manager [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Took 13.14 seconds to build instance.#033[00m
Nov 29 03:13:48 np0005539552 nova_compute[233724]: 2025-11-29 08:13:48.896 233728 DEBUG oslo_concurrency.lockutils [None req-3335ae28-603e-45ae-b095-b86f50f5338c 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "36048c92-5df2-425d-b12f-1ce0326cc6a1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.429s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:48 np0005539552 nova_compute[233724]: 2025-11-29 08:13:48.943 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:49.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:49 np0005539552 nova_compute[233724]: 2025-11-29 08:13:49.341 233728 DEBUG nova.network.neutron [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Updating instance_info_cache with network_info: [{"id": "2dc39626-7aae-4e0c-a70b-e08d83c9788b", "address": "fa:16:3e:91:96:6a", "network": {"id": "88cc8f67-0d68-413a-b508-63fae18f1c0c", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-374579760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5f1f0d72cd0427a8cda48db244caf6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2dc39626-7a", "ovs_interfaceid": "2dc39626-7aae-4e0c-a70b-e08d83c9788b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:13:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:49.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:50 np0005539552 nova_compute[233724]: 2025-11-29 08:13:50.926 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:51.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:51.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:51 np0005539552 nova_compute[233724]: 2025-11-29 08:13:51.625 233728 DEBUG oslo_concurrency.lockutils [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Releasing lock "refresh_cache-c32e74e2-e74f-4877-8130-ad35d31bb992" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:13:51 np0005539552 nova_compute[233724]: 2025-11-29 08:13:51.626 233728 DEBUG nova.compute.manager [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Instance network_info: |[{"id": "2dc39626-7aae-4e0c-a70b-e08d83c9788b", "address": "fa:16:3e:91:96:6a", "network": {"id": "88cc8f67-0d68-413a-b508-63fae18f1c0c", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-374579760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5f1f0d72cd0427a8cda48db244caf6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2dc39626-7a", "ovs_interfaceid": "2dc39626-7aae-4e0c-a70b-e08d83c9788b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:13:51 np0005539552 nova_compute[233724]: 2025-11-29 08:13:51.626 233728 DEBUG oslo_concurrency.lockutils [req-e8241704-b2f4-4234-84a6-cf84f9586fb0 req-401b4652-1d9e-4449-ac6f-9365074b8d04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-c32e74e2-e74f-4877-8130-ad35d31bb992" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:13:51 np0005539552 nova_compute[233724]: 2025-11-29 08:13:51.626 233728 DEBUG nova.network.neutron [req-e8241704-b2f4-4234-84a6-cf84f9586fb0 req-401b4652-1d9e-4449-ac6f-9365074b8d04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Refreshing network info cache for port 2dc39626-7aae-4e0c-a70b-e08d83c9788b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:13:51 np0005539552 nova_compute[233724]: 2025-11-29 08:13:51.629 233728 DEBUG nova.virt.libvirt.driver [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Start _get_guest_xml network_info=[{"id": "2dc39626-7aae-4e0c-a70b-e08d83c9788b", "address": "fa:16:3e:91:96:6a", "network": {"id": "88cc8f67-0d68-413a-b508-63fae18f1c0c", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-374579760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5f1f0d72cd0427a8cda48db244caf6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2dc39626-7a", "ovs_interfaceid": "2dc39626-7aae-4e0c-a70b-e08d83c9788b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:13:51 np0005539552 nova_compute[233724]: 2025-11-29 08:13:51.632 233728 WARNING nova.virt.libvirt.driver [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:13:51 np0005539552 nova_compute[233724]: 2025-11-29 08:13:51.637 233728 DEBUG nova.virt.libvirt.host [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:13:51 np0005539552 nova_compute[233724]: 2025-11-29 08:13:51.638 233728 DEBUG nova.virt.libvirt.host [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:13:51 np0005539552 nova_compute[233724]: 2025-11-29 08:13:51.640 233728 DEBUG nova.virt.libvirt.host [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:13:51 np0005539552 nova_compute[233724]: 2025-11-29 08:13:51.641 233728 DEBUG nova.virt.libvirt.host [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:13:51 np0005539552 nova_compute[233724]: 2025-11-29 08:13:51.642 233728 DEBUG nova.virt.libvirt.driver [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:13:51 np0005539552 nova_compute[233724]: 2025-11-29 08:13:51.642 233728 DEBUG nova.virt.hardware [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:13:51 np0005539552 nova_compute[233724]: 2025-11-29 08:13:51.642 233728 DEBUG nova.virt.hardware [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:13:51 np0005539552 nova_compute[233724]: 2025-11-29 08:13:51.643 233728 DEBUG nova.virt.hardware [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:13:51 np0005539552 nova_compute[233724]: 2025-11-29 08:13:51.643 233728 DEBUG nova.virt.hardware [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:13:51 np0005539552 nova_compute[233724]: 2025-11-29 08:13:51.643 233728 DEBUG nova.virt.hardware [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:13:51 np0005539552 nova_compute[233724]: 2025-11-29 08:13:51.643 233728 DEBUG nova.virt.hardware [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:13:51 np0005539552 nova_compute[233724]: 2025-11-29 08:13:51.643 233728 DEBUG nova.virt.hardware [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:13:51 np0005539552 nova_compute[233724]: 2025-11-29 08:13:51.644 233728 DEBUG nova.virt.hardware [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:13:51 np0005539552 nova_compute[233724]: 2025-11-29 08:13:51.644 233728 DEBUG nova.virt.hardware [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:13:51 np0005539552 nova_compute[233724]: 2025-11-29 08:13:51.644 233728 DEBUG nova.virt.hardware [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:13:51 np0005539552 nova_compute[233724]: 2025-11-29 08:13:51.644 233728 DEBUG nova.virt.hardware [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:13:51 np0005539552 nova_compute[233724]: 2025-11-29 08:13:51.647 233728 DEBUG oslo_concurrency.processutils [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:13:52 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/128428089' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:13:52 np0005539552 nova_compute[233724]: 2025-11-29 08:13:52.091 233728 DEBUG oslo_concurrency.processutils [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:52 np0005539552 nova_compute[233724]: 2025-11-29 08:13:52.119 233728 DEBUG nova.storage.rbd_utils [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] rbd image c32e74e2-e74f-4877-8130-ad35d31bb992_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:52 np0005539552 nova_compute[233724]: 2025-11-29 08:13:52.124 233728 DEBUG oslo_concurrency.processutils [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:13:52 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3973484035' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:13:52 np0005539552 nova_compute[233724]: 2025-11-29 08:13:52.566 233728 DEBUG oslo_concurrency.processutils [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:52 np0005539552 nova_compute[233724]: 2025-11-29 08:13:52.567 233728 DEBUG nova.virt.libvirt.vif [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:13:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1574280677',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1574280677',id=92,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE1erYbCXqwQQYhX/uR9pDNm/1t/pGAfklA44NJhGMIL8E6zP+f6ImuAmBaXR5JpOEpRJertjSkrs1uRBMGT5Sn9Wu4jIvd5fp8AA7ZdhfA39Eye8MjgGqmthA4Ol0v56A==',key_name='tempest-keypair-306133375',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b5f1f0d72cd0427a8cda48db244caf6c',ramdisk_id='',reservation_id='r-1l34dvi1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio
',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedAttachmentsTest-1178715901',owner_user_name='tempest-TaggedAttachmentsTest-1178715901-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:13:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='90573489491c4659ba4a8ccbd6b896a7',uuid=c32e74e2-e74f-4877-8130-ad35d31bb992,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2dc39626-7aae-4e0c-a70b-e08d83c9788b", "address": "fa:16:3e:91:96:6a", "network": {"id": "88cc8f67-0d68-413a-b508-63fae18f1c0c", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-374579760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5f1f0d72cd0427a8cda48db244caf6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2dc39626-7a", "ovs_interfaceid": "2dc39626-7aae-4e0c-a70b-e08d83c9788b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:13:52 np0005539552 nova_compute[233724]: 2025-11-29 08:13:52.568 233728 DEBUG nova.network.os_vif_util [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Converting VIF {"id": "2dc39626-7aae-4e0c-a70b-e08d83c9788b", "address": "fa:16:3e:91:96:6a", "network": {"id": "88cc8f67-0d68-413a-b508-63fae18f1c0c", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-374579760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5f1f0d72cd0427a8cda48db244caf6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2dc39626-7a", "ovs_interfaceid": "2dc39626-7aae-4e0c-a70b-e08d83c9788b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:13:52 np0005539552 nova_compute[233724]: 2025-11-29 08:13:52.568 233728 DEBUG nova.network.os_vif_util [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:91:96:6a,bridge_name='br-int',has_traffic_filtering=True,id=2dc39626-7aae-4e0c-a70b-e08d83c9788b,network=Network(88cc8f67-0d68-413a-b508-63fae18f1c0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2dc39626-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:13:52 np0005539552 nova_compute[233724]: 2025-11-29 08:13:52.569 233728 DEBUG nova.objects.instance [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Lazy-loading 'pci_devices' on Instance uuid c32e74e2-e74f-4877-8130-ad35d31bb992 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:13:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:53.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:53 np0005539552 nova_compute[233724]: 2025-11-29 08:13:53.491 233728 DEBUG nova.compute.manager [req-dcdb5e2e-894f-47b8-8a21-a1914d7fbe0b req-224976c7-71f6-47ea-92b5-f9ec38ebde52 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Received event network-vif-plugged-1d12c166-4cae-49ec-ab9b-149d65ceb0b6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:13:53 np0005539552 nova_compute[233724]: 2025-11-29 08:13:53.492 233728 DEBUG oslo_concurrency.lockutils [req-dcdb5e2e-894f-47b8-8a21-a1914d7fbe0b req-224976c7-71f6-47ea-92b5-f9ec38ebde52 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "36048c92-5df2-425d-b12f-1ce0326cc6a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:53 np0005539552 nova_compute[233724]: 2025-11-29 08:13:53.493 233728 DEBUG oslo_concurrency.lockutils [req-dcdb5e2e-894f-47b8-8a21-a1914d7fbe0b req-224976c7-71f6-47ea-92b5-f9ec38ebde52 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "36048c92-5df2-425d-b12f-1ce0326cc6a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:53 np0005539552 nova_compute[233724]: 2025-11-29 08:13:53.494 233728 DEBUG oslo_concurrency.lockutils [req-dcdb5e2e-894f-47b8-8a21-a1914d7fbe0b req-224976c7-71f6-47ea-92b5-f9ec38ebde52 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "36048c92-5df2-425d-b12f-1ce0326cc6a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:53 np0005539552 nova_compute[233724]: 2025-11-29 08:13:53.494 233728 DEBUG nova.compute.manager [req-dcdb5e2e-894f-47b8-8a21-a1914d7fbe0b req-224976c7-71f6-47ea-92b5-f9ec38ebde52 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] No waiting events found dispatching network-vif-plugged-1d12c166-4cae-49ec-ab9b-149d65ceb0b6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:13:53 np0005539552 nova_compute[233724]: 2025-11-29 08:13:53.494 233728 WARNING nova.compute.manager [req-dcdb5e2e-894f-47b8-8a21-a1914d7fbe0b req-224976c7-71f6-47ea-92b5-f9ec38ebde52 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Received unexpected event network-vif-plugged-1d12c166-4cae-49ec-ab9b-149d65ceb0b6 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:13:53 np0005539552 nova_compute[233724]: 2025-11-29 08:13:53.500 233728 DEBUG nova.virt.libvirt.driver [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:13:53 np0005539552 nova_compute[233724]:  <uuid>c32e74e2-e74f-4877-8130-ad35d31bb992</uuid>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:  <name>instance-0000005c</name>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:13:53 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:      <nova:name>tempest-device-tagging-server-1574280677</nova:name>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:13:51</nova:creationTime>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:13:53 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:        <nova:user uuid="90573489491c4659ba4a8ccbd6b896a7">tempest-TaggedAttachmentsTest-1178715901-project-member</nova:user>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:        <nova:project uuid="b5f1f0d72cd0427a8cda48db244caf6c">tempest-TaggedAttachmentsTest-1178715901</nova:project>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:        <nova:port uuid="2dc39626-7aae-4e0c-a70b-e08d83c9788b">
Nov 29 03:13:53 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:      <entry name="serial">c32e74e2-e74f-4877-8130-ad35d31bb992</entry>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:      <entry name="uuid">c32e74e2-e74f-4877-8130-ad35d31bb992</entry>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:13:53 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/c32e74e2-e74f-4877-8130-ad35d31bb992_disk">
Nov 29 03:13:53 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:13:53 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:13:53 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/c32e74e2-e74f-4877-8130-ad35d31bb992_disk.config">
Nov 29 03:13:53 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:13:53 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:13:53 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:91:96:6a"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:      <target dev="tap2dc39626-7a"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:13:53 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/c32e74e2-e74f-4877-8130-ad35d31bb992/console.log" append="off"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:13:53 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:13:53 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:13:53 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:13:53 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:13:53 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:13:53 np0005539552 nova_compute[233724]: 2025-11-29 08:13:53.502 233728 DEBUG nova.compute.manager [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Preparing to wait for external event network-vif-plugged-2dc39626-7aae-4e0c-a70b-e08d83c9788b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:13:53 np0005539552 nova_compute[233724]: 2025-11-29 08:13:53.503 233728 DEBUG oslo_concurrency.lockutils [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Acquiring lock "c32e74e2-e74f-4877-8130-ad35d31bb992-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:53 np0005539552 nova_compute[233724]: 2025-11-29 08:13:53.503 233728 DEBUG oslo_concurrency.lockutils [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Lock "c32e74e2-e74f-4877-8130-ad35d31bb992-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:53 np0005539552 nova_compute[233724]: 2025-11-29 08:13:53.504 233728 DEBUG oslo_concurrency.lockutils [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Lock "c32e74e2-e74f-4877-8130-ad35d31bb992-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:53 np0005539552 nova_compute[233724]: 2025-11-29 08:13:53.505 233728 DEBUG nova.virt.libvirt.vif [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:13:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1574280677',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1574280677',id=92,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE1erYbCXqwQQYhX/uR9pDNm/1t/pGAfklA44NJhGMIL8E6zP+f6ImuAmBaXR5JpOEpRJertjSkrs1uRBMGT5Sn9Wu4jIvd5fp8AA7ZdhfA39Eye8MjgGqmthA4Ol0v56A==',key_name='tempest-keypair-306133375',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b5f1f0d72cd0427a8cda48db244caf6c',ramdisk_id='',reservation_id='r-1l34dvi1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedAttachmentsTest-1178715901',owner_user_name='tempest-TaggedAttachmentsTest-1178715901-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:13:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='90573489491c4659ba4a8ccbd6b896a7',uuid=c32e74e2-e74f-4877-8130-ad35d31bb992,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2dc39626-7aae-4e0c-a70b-e08d83c9788b", "address": "fa:16:3e:91:96:6a", "network": {"id": "88cc8f67-0d68-413a-b508-63fae18f1c0c", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-374579760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5f1f0d72cd0427a8cda48db244caf6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2dc39626-7a", "ovs_interfaceid": "2dc39626-7aae-4e0c-a70b-e08d83c9788b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:13:53 np0005539552 nova_compute[233724]: 2025-11-29 08:13:53.506 233728 DEBUG nova.network.os_vif_util [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Converting VIF {"id": "2dc39626-7aae-4e0c-a70b-e08d83c9788b", "address": "fa:16:3e:91:96:6a", "network": {"id": "88cc8f67-0d68-413a-b508-63fae18f1c0c", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-374579760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5f1f0d72cd0427a8cda48db244caf6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2dc39626-7a", "ovs_interfaceid": "2dc39626-7aae-4e0c-a70b-e08d83c9788b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:13:53 np0005539552 nova_compute[233724]: 2025-11-29 08:13:53.507 233728 DEBUG nova.network.os_vif_util [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:91:96:6a,bridge_name='br-int',has_traffic_filtering=True,id=2dc39626-7aae-4e0c-a70b-e08d83c9788b,network=Network(88cc8f67-0d68-413a-b508-63fae18f1c0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2dc39626-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:13:53 np0005539552 nova_compute[233724]: 2025-11-29 08:13:53.508 233728 DEBUG os_vif [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:96:6a,bridge_name='br-int',has_traffic_filtering=True,id=2dc39626-7aae-4e0c-a70b-e08d83c9788b,network=Network(88cc8f67-0d68-413a-b508-63fae18f1c0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2dc39626-7a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:13:53 np0005539552 nova_compute[233724]: 2025-11-29 08:13:53.509 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:53 np0005539552 nova_compute[233724]: 2025-11-29 08:13:53.510 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:53 np0005539552 nova_compute[233724]: 2025-11-29 08:13:53.511 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:13:53 np0005539552 nova_compute[233724]: 2025-11-29 08:13:53.521 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:53 np0005539552 nova_compute[233724]: 2025-11-29 08:13:53.521 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2dc39626-7a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:53 np0005539552 nova_compute[233724]: 2025-11-29 08:13:53.522 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2dc39626-7a, col_values=(('external_ids', {'iface-id': '2dc39626-7aae-4e0c-a70b-e08d83c9788b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:91:96:6a', 'vm-uuid': 'c32e74e2-e74f-4877-8130-ad35d31bb992'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:53 np0005539552 nova_compute[233724]: 2025-11-29 08:13:53.524 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:53 np0005539552 NetworkManager[48926]: <info>  [1764404033.5255] manager: (tap2dc39626-7a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/153)
Nov 29 03:13:53 np0005539552 nova_compute[233724]: 2025-11-29 08:13:53.528 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:13:53 np0005539552 nova_compute[233724]: 2025-11-29 08:13:53.531 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:53 np0005539552 nova_compute[233724]: 2025-11-29 08:13:53.533 233728 INFO os_vif [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:96:6a,bridge_name='br-int',has_traffic_filtering=True,id=2dc39626-7aae-4e0c-a70b-e08d83c9788b,network=Network(88cc8f67-0d68-413a-b508-63fae18f1c0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2dc39626-7a')#033[00m
Nov 29 03:13:53 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:13:53 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:13:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:53.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:53 np0005539552 nova_compute[233724]: 2025-11-29 08:13:53.616 233728 DEBUG nova.virt.libvirt.driver [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:13:53 np0005539552 nova_compute[233724]: 2025-11-29 08:13:53.618 233728 DEBUG nova.virt.libvirt.driver [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:13:53 np0005539552 nova_compute[233724]: 2025-11-29 08:13:53.618 233728 DEBUG nova.virt.libvirt.driver [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] No VIF found with MAC fa:16:3e:91:96:6a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:13:53 np0005539552 nova_compute[233724]: 2025-11-29 08:13:53.619 233728 INFO nova.virt.libvirt.driver [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Using config drive#033[00m
Nov 29 03:13:53 np0005539552 nova_compute[233724]: 2025-11-29 08:13:53.660 233728 DEBUG nova.storage.rbd_utils [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] rbd image c32e74e2-e74f-4877-8130-ad35d31bb992_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:54 np0005539552 nova_compute[233724]: 2025-11-29 08:13:54.110 233728 INFO nova.virt.libvirt.driver [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Creating config drive at /var/lib/nova/instances/c32e74e2-e74f-4877-8130-ad35d31bb992/disk.config#033[00m
Nov 29 03:13:54 np0005539552 nova_compute[233724]: 2025-11-29 08:13:54.115 233728 DEBUG oslo_concurrency.processutils [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c32e74e2-e74f-4877-8130-ad35d31bb992/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8t4wgwwf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:54 np0005539552 nova_compute[233724]: 2025-11-29 08:13:54.247 233728 DEBUG oslo_concurrency.processutils [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c32e74e2-e74f-4877-8130-ad35d31bb992/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8t4wgwwf" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:54 np0005539552 nova_compute[233724]: 2025-11-29 08:13:54.284 233728 DEBUG nova.storage.rbd_utils [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] rbd image c32e74e2-e74f-4877-8130-ad35d31bb992_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:54 np0005539552 nova_compute[233724]: 2025-11-29 08:13:54.289 233728 DEBUG oslo_concurrency.processutils [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c32e74e2-e74f-4877-8130-ad35d31bb992/disk.config c32e74e2-e74f-4877-8130-ad35d31bb992_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:54 np0005539552 nova_compute[233724]: 2025-11-29 08:13:54.368 233728 DEBUG nova.network.neutron [req-e8241704-b2f4-4234-84a6-cf84f9586fb0 req-401b4652-1d9e-4449-ac6f-9365074b8d04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Updated VIF entry in instance network info cache for port 2dc39626-7aae-4e0c-a70b-e08d83c9788b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:13:54 np0005539552 nova_compute[233724]: 2025-11-29 08:13:54.368 233728 DEBUG nova.network.neutron [req-e8241704-b2f4-4234-84a6-cf84f9586fb0 req-401b4652-1d9e-4449-ac6f-9365074b8d04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Updating instance_info_cache with network_info: [{"id": "2dc39626-7aae-4e0c-a70b-e08d83c9788b", "address": "fa:16:3e:91:96:6a", "network": {"id": "88cc8f67-0d68-413a-b508-63fae18f1c0c", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-374579760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5f1f0d72cd0427a8cda48db244caf6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2dc39626-7a", "ovs_interfaceid": "2dc39626-7aae-4e0c-a70b-e08d83c9788b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:13:54 np0005539552 nova_compute[233724]: 2025-11-29 08:13:54.647 233728 DEBUG oslo_concurrency.lockutils [req-e8241704-b2f4-4234-84a6-cf84f9586fb0 req-401b4652-1d9e-4449-ac6f-9365074b8d04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-c32e74e2-e74f-4877-8130-ad35d31bb992" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:13:54 np0005539552 nova_compute[233724]: 2025-11-29 08:13:54.687 233728 DEBUG oslo_concurrency.processutils [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c32e74e2-e74f-4877-8130-ad35d31bb992/disk.config c32e74e2-e74f-4877-8130-ad35d31bb992_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.398s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:54 np0005539552 nova_compute[233724]: 2025-11-29 08:13:54.687 233728 INFO nova.virt.libvirt.driver [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Deleting local config drive /var/lib/nova/instances/c32e74e2-e74f-4877-8130-ad35d31bb992/disk.config because it was imported into RBD.#033[00m
Nov 29 03:13:54 np0005539552 kernel: tap2dc39626-7a: entered promiscuous mode
Nov 29 03:13:54 np0005539552 NetworkManager[48926]: <info>  [1764404034.7343] manager: (tap2dc39626-7a): new Tun device (/org/freedesktop/NetworkManager/Devices/154)
Nov 29 03:13:54 np0005539552 ovn_controller[133798]: 2025-11-29T08:13:54Z|00295|binding|INFO|Claiming lport 2dc39626-7aae-4e0c-a70b-e08d83c9788b for this chassis.
Nov 29 03:13:54 np0005539552 ovn_controller[133798]: 2025-11-29T08:13:54Z|00296|binding|INFO|2dc39626-7aae-4e0c-a70b-e08d83c9788b: Claiming fa:16:3e:91:96:6a 10.100.0.12
Nov 29 03:13:54 np0005539552 nova_compute[233724]: 2025-11-29 08:13:54.748 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:54 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:54.756 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:91:96:6a 10.100.0.12'], port_security=['fa:16:3e:91:96:6a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'c32e74e2-e74f-4877-8130-ad35d31bb992', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-88cc8f67-0d68-413a-b508-63fae18f1c0c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b5f1f0d72cd0427a8cda48db244caf6c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '748545ab-9e1c-469f-b9ff-83c86c5e92e9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bb550e5a-4fea-4245-9311-17a32f690e26, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=2dc39626-7aae-4e0c-a70b-e08d83c9788b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:13:54 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:54.758 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 2dc39626-7aae-4e0c-a70b-e08d83c9788b in datapath 88cc8f67-0d68-413a-b508-63fae18f1c0c bound to our chassis#033[00m
Nov 29 03:13:54 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:54.760 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 88cc8f67-0d68-413a-b508-63fae18f1c0c#033[00m
Nov 29 03:13:54 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:54.775 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e34e33e7-b446-41cd-856a-7cb72e14475b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:54 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:54.777 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap88cc8f67-01 in ovnmeta-88cc8f67-0d68-413a-b508-63fae18f1c0c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:13:54 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:54.781 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap88cc8f67-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:13:54 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:54.781 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[1db3d13d-69fc-4049-94eb-90e78765b6be]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:54 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:54.783 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a3fd8429-afee-4219-b7b6-94cb05859d8d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:54 np0005539552 systemd-machined[196379]: New machine qemu-34-instance-0000005c.
Nov 29 03:13:54 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:54.797 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[edb534e4-4e9b-494e-adb6-f2f45153ea5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:54 np0005539552 systemd[1]: Started Virtual Machine qemu-34-instance-0000005c.
Nov 29 03:13:54 np0005539552 nova_compute[233724]: 2025-11-29 08:13:54.813 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:54 np0005539552 ovn_controller[133798]: 2025-11-29T08:13:54Z|00297|binding|INFO|Setting lport 2dc39626-7aae-4e0c-a70b-e08d83c9788b ovn-installed in OVS
Nov 29 03:13:54 np0005539552 ovn_controller[133798]: 2025-11-29T08:13:54Z|00298|binding|INFO|Setting lport 2dc39626-7aae-4e0c-a70b-e08d83c9788b up in Southbound
Nov 29 03:13:54 np0005539552 nova_compute[233724]: 2025-11-29 08:13:54.820 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:54 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:54.822 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[8886da98-0a37-41ae-831f-6d32af02b9d4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:54 np0005539552 systemd-udevd[270176]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:13:54 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:54.861 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[4e58a8b7-3fda-4a5d-84ec-557bf41ff104]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:54 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:54.866 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[874082dc-6c40-4b73-9ee4-b104c8f42034]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:54 np0005539552 NetworkManager[48926]: <info>  [1764404034.8679] manager: (tap88cc8f67-00): new Veth device (/org/freedesktop/NetworkManager/Devices/155)
Nov 29 03:13:54 np0005539552 systemd-udevd[270182]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:13:54 np0005539552 NetworkManager[48926]: <info>  [1764404034.8730] device (tap2dc39626-7a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:13:54 np0005539552 NetworkManager[48926]: <info>  [1764404034.8741] device (tap2dc39626-7a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:13:54 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:54.912 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[c3b8b8c1-09f0-441e-9d5d-25d918874136]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:54 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:54.916 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[8f203190-39e7-4deb-8be8-5f13856371c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:54 np0005539552 NetworkManager[48926]: <info>  [1764404034.9393] device (tap88cc8f67-00): carrier: link connected
Nov 29 03:13:54 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:54.945 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[e99e85ee-1d5e-4327-8284-ef1b5bc406f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:54 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:54.964 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3df823b4-babe-41f4-b987-c1dc305df33c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap88cc8f67-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:04:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 94], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 708062, 'reachable_time': 37310, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270203, 'error': None, 'target': 'ovnmeta-88cc8f67-0d68-413a-b508-63fae18f1c0c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:54 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:54.981 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[511ebb4f-65ed-4d75-bfcc-64194046ca59]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefc:400'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 708062, 'tstamp': 708062}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270204, 'error': None, 'target': 'ovnmeta-88cc8f67-0d68-413a-b508-63fae18f1c0c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:54 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:54.997 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[58b0a25a-38df-4a2b-941a-f3a3fb5b0269]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap88cc8f67-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:04:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 94], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 708062, 'reachable_time': 37310, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 270205, 'error': None, 'target': 'ovnmeta-88cc8f67-0d68-413a-b508-63fae18f1c0c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:55 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:55.024 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[bb14b343-0648-46ba-aaac-bdaa258c0366]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:55 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:55.099 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[984e683e-670b-44ac-bac1-3e2551fa2b6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:55 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:55.101 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88cc8f67-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:55 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:55.101 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:13:55 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:55.102 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap88cc8f67-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:55 np0005539552 NetworkManager[48926]: <info>  [1764404035.1043] manager: (tap88cc8f67-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/156)
Nov 29 03:13:55 np0005539552 nova_compute[233724]: 2025-11-29 08:13:55.103 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:55 np0005539552 kernel: tap88cc8f67-00: entered promiscuous mode
Nov 29 03:13:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:13:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:55.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:13:55 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:55.107 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap88cc8f67-00, col_values=(('external_ids', {'iface-id': '3c46c58e-3ab3-48f8-ac9f-00200acea5a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:55 np0005539552 ovn_controller[133798]: 2025-11-29T08:13:55Z|00299|binding|INFO|Releasing lport 3c46c58e-3ab3-48f8-ac9f-00200acea5a6 from this chassis (sb_readonly=1)
Nov 29 03:13:55 np0005539552 nova_compute[233724]: 2025-11-29 08:13:55.108 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:55 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:55.111 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/88cc8f67-0d68-413a-b508-63fae18f1c0c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/88cc8f67-0d68-413a-b508-63fae18f1c0c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:13:55 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:55.113 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[cb3f8ff5-eecd-425b-9b72-0e36f9ae1a27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:55 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:55.114 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:13:55 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:13:55 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:13:55 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-88cc8f67-0d68-413a-b508-63fae18f1c0c
Nov 29 03:13:55 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:13:55 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:13:55 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:13:55 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/88cc8f67-0d68-413a-b508-63fae18f1c0c.pid.haproxy
Nov 29 03:13:55 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:13:55 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:13:55 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:13:55 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:13:55 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:13:55 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:13:55 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:13:55 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:13:55 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:13:55 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:13:55 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:13:55 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:13:55 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:13:55 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:13:55 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:13:55 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:13:55 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:13:55 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:13:55 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:13:55 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:13:55 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 88cc8f67-0d68-413a-b508-63fae18f1c0c
Nov 29 03:13:55 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:13:55 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:55.114 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-88cc8f67-0d68-413a-b508-63fae18f1c0c', 'env', 'PROCESS_TAG=haproxy-88cc8f67-0d68-413a-b508-63fae18f1c0c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/88cc8f67-0d68-413a-b508-63fae18f1c0c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:13:55 np0005539552 nova_compute[233724]: 2025-11-29 08:13:55.123 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:55 np0005539552 podman[270270]: 2025-11-29 08:13:55.562204007 +0000 UTC m=+0.078023167 container create b7d57b4094194c2d19dc0cf787e6676050471a5a8f8a1be05aedf73cff689e5b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88cc8f67-0d68-413a-b508-63fae18f1c0c, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 03:13:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:55.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:55 np0005539552 podman[270270]: 2025-11-29 08:13:55.511849644 +0000 UTC m=+0.027668834 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:13:55 np0005539552 systemd[1]: Started libpod-conmon-b7d57b4094194c2d19dc0cf787e6676050471a5a8f8a1be05aedf73cff689e5b.scope.
Nov 29 03:13:55 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:13:55 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f40401ac65f88f73496c54958b7c9ab5eb1bed665149534e9f63c76fea2cdf48/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:13:55 np0005539552 nova_compute[233724]: 2025-11-29 08:13:55.674 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404035.674243, c32e74e2-e74f-4877-8130-ad35d31bb992 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:13:55 np0005539552 nova_compute[233724]: 2025-11-29 08:13:55.675 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] VM Started (Lifecycle Event)#033[00m
Nov 29 03:13:55 np0005539552 nova_compute[233724]: 2025-11-29 08:13:55.928 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:55 np0005539552 podman[270270]: 2025-11-29 08:13:55.976126348 +0000 UTC m=+0.491945528 container init b7d57b4094194c2d19dc0cf787e6676050471a5a8f8a1be05aedf73cff689e5b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88cc8f67-0d68-413a-b508-63fae18f1c0c, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Nov 29 03:13:55 np0005539552 podman[270270]: 2025-11-29 08:13:55.981701367 +0000 UTC m=+0.497520527 container start b7d57b4094194c2d19dc0cf787e6676050471a5a8f8a1be05aedf73cff689e5b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88cc8f67-0d68-413a-b508-63fae18f1c0c, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:13:56 np0005539552 neutron-haproxy-ovnmeta-88cc8f67-0d68-413a-b508-63fae18f1c0c[270319]: [NOTICE]   (270352) : New worker (270354) forked
Nov 29 03:13:56 np0005539552 neutron-haproxy-ovnmeta-88cc8f67-0d68-413a-b508-63fae18f1c0c[270319]: [NOTICE]   (270352) : Loading success.
Nov 29 03:13:56 np0005539552 podman[270288]: 2025-11-29 08:13:56.005888607 +0000 UTC m=+0.401774675 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:13:56 np0005539552 podman[270289]: 2025-11-29 08:13:56.028839604 +0000 UTC m=+0.421369982 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:13:56 np0005539552 podman[270290]: 2025-11-29 08:13:56.033103938 +0000 UTC m=+0.422247105 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 03:13:56 np0005539552 nova_compute[233724]: 2025-11-29 08:13:56.833 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:13:56 np0005539552 nova_compute[233724]: 2025-11-29 08:13:56.836 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404035.6766658, c32e74e2-e74f-4877-8130-ad35d31bb992 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:13:56 np0005539552 nova_compute[233724]: 2025-11-29 08:13:56.836 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:13:56 np0005539552 nova_compute[233724]: 2025-11-29 08:13:56.872 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:13:56 np0005539552 nova_compute[233724]: 2025-11-29 08:13:56.875 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:13:56 np0005539552 nova_compute[233724]: 2025-11-29 08:13:56.921 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:13:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:13:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:57.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:13:57 np0005539552 nova_compute[233724]: 2025-11-29 08:13:57.276 233728 DEBUG nova.compute.manager [req-50cec952-a78b-461e-87c7-a364525fb97f req-1c307b7d-4590-443e-ad50-c682381b5bb8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Received event network-vif-plugged-2dc39626-7aae-4e0c-a70b-e08d83c9788b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:13:57 np0005539552 nova_compute[233724]: 2025-11-29 08:13:57.277 233728 DEBUG oslo_concurrency.lockutils [req-50cec952-a78b-461e-87c7-a364525fb97f req-1c307b7d-4590-443e-ad50-c682381b5bb8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "c32e74e2-e74f-4877-8130-ad35d31bb992-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:57 np0005539552 nova_compute[233724]: 2025-11-29 08:13:57.278 233728 DEBUG oslo_concurrency.lockutils [req-50cec952-a78b-461e-87c7-a364525fb97f req-1c307b7d-4590-443e-ad50-c682381b5bb8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c32e74e2-e74f-4877-8130-ad35d31bb992-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:57 np0005539552 nova_compute[233724]: 2025-11-29 08:13:57.278 233728 DEBUG oslo_concurrency.lockutils [req-50cec952-a78b-461e-87c7-a364525fb97f req-1c307b7d-4590-443e-ad50-c682381b5bb8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c32e74e2-e74f-4877-8130-ad35d31bb992-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:57 np0005539552 nova_compute[233724]: 2025-11-29 08:13:57.278 233728 DEBUG nova.compute.manager [req-50cec952-a78b-461e-87c7-a364525fb97f req-1c307b7d-4590-443e-ad50-c682381b5bb8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Processing event network-vif-plugged-2dc39626-7aae-4e0c-a70b-e08d83c9788b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:13:57 np0005539552 nova_compute[233724]: 2025-11-29 08:13:57.279 233728 DEBUG nova.compute.manager [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:13:57 np0005539552 nova_compute[233724]: 2025-11-29 08:13:57.283 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404037.282074, c32e74e2-e74f-4877-8130-ad35d31bb992 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:13:57 np0005539552 nova_compute[233724]: 2025-11-29 08:13:57.283 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:13:57 np0005539552 nova_compute[233724]: 2025-11-29 08:13:57.285 233728 DEBUG nova.virt.libvirt.driver [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:13:57 np0005539552 nova_compute[233724]: 2025-11-29 08:13:57.288 233728 INFO nova.virt.libvirt.driver [-] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Instance spawned successfully.#033[00m
Nov 29 03:13:57 np0005539552 nova_compute[233724]: 2025-11-29 08:13:57.289 233728 DEBUG nova.virt.libvirt.driver [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:13:57 np0005539552 nova_compute[233724]: 2025-11-29 08:13:57.328 233728 DEBUG nova.virt.libvirt.driver [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:13:57 np0005539552 nova_compute[233724]: 2025-11-29 08:13:57.329 233728 DEBUG nova.virt.libvirt.driver [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:13:57 np0005539552 nova_compute[233724]: 2025-11-29 08:13:57.329 233728 DEBUG nova.virt.libvirt.driver [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:13:57 np0005539552 nova_compute[233724]: 2025-11-29 08:13:57.330 233728 DEBUG nova.virt.libvirt.driver [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:13:57 np0005539552 nova_compute[233724]: 2025-11-29 08:13:57.330 233728 DEBUG nova.virt.libvirt.driver [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:13:57 np0005539552 nova_compute[233724]: 2025-11-29 08:13:57.331 233728 DEBUG nova.virt.libvirt.driver [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:13:57 np0005539552 nova_compute[233724]: 2025-11-29 08:13:57.365 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:13:57 np0005539552 nova_compute[233724]: 2025-11-29 08:13:57.367 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:13:57 np0005539552 nova_compute[233724]: 2025-11-29 08:13:57.416 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:13:57 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:57.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:58 np0005539552 nova_compute[233724]: 2025-11-29 08:13:58.045 233728 INFO nova.compute.manager [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Took 17.84 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:13:58 np0005539552 nova_compute[233724]: 2025-11-29 08:13:58.045 233728 DEBUG nova.compute.manager [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:13:58 np0005539552 nova_compute[233724]: 2025-11-29 08:13:58.526 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:59.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:59 np0005539552 nova_compute[233724]: 2025-11-29 08:13:59.451 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:59.451 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:13:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:13:59.453 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:13:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:13:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:59.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:59 np0005539552 nova_compute[233724]: 2025-11-29 08:13:59.714 233728 INFO nova.compute.manager [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Took 20.66 seconds to build instance.#033[00m
Nov 29 03:14:00 np0005539552 nova_compute[233724]: 2025-11-29 08:14:00.542 233728 DEBUG nova.compute.manager [req-5f3a75a1-b465-4146-b419-9638d89fcbc1 req-17e36eb2-efcc-4927-a893-e5603270f0d4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Received event network-vif-plugged-2dc39626-7aae-4e0c-a70b-e08d83c9788b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:00 np0005539552 nova_compute[233724]: 2025-11-29 08:14:00.543 233728 DEBUG oslo_concurrency.lockutils [req-5f3a75a1-b465-4146-b419-9638d89fcbc1 req-17e36eb2-efcc-4927-a893-e5603270f0d4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "c32e74e2-e74f-4877-8130-ad35d31bb992-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:00 np0005539552 nova_compute[233724]: 2025-11-29 08:14:00.543 233728 DEBUG oslo_concurrency.lockutils [req-5f3a75a1-b465-4146-b419-9638d89fcbc1 req-17e36eb2-efcc-4927-a893-e5603270f0d4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c32e74e2-e74f-4877-8130-ad35d31bb992-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:00 np0005539552 nova_compute[233724]: 2025-11-29 08:14:00.543 233728 DEBUG oslo_concurrency.lockutils [req-5f3a75a1-b465-4146-b419-9638d89fcbc1 req-17e36eb2-efcc-4927-a893-e5603270f0d4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c32e74e2-e74f-4877-8130-ad35d31bb992-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:00 np0005539552 nova_compute[233724]: 2025-11-29 08:14:00.543 233728 DEBUG nova.compute.manager [req-5f3a75a1-b465-4146-b419-9638d89fcbc1 req-17e36eb2-efcc-4927-a893-e5603270f0d4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] No waiting events found dispatching network-vif-plugged-2dc39626-7aae-4e0c-a70b-e08d83c9788b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:14:00 np0005539552 nova_compute[233724]: 2025-11-29 08:14:00.544 233728 WARNING nova.compute.manager [req-5f3a75a1-b465-4146-b419-9638d89fcbc1 req-17e36eb2-efcc-4927-a893-e5603270f0d4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Received unexpected event network-vif-plugged-2dc39626-7aae-4e0c-a70b-e08d83c9788b for instance with vm_state active and task_state None.#033[00m
Nov 29 03:14:00 np0005539552 nova_compute[233724]: 2025-11-29 08:14:00.618 233728 DEBUG oslo_concurrency.lockutils [None req-503dfe88-a2b9-4550-ba55-afa9985157b3 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Lock "c32e74e2-e74f-4877-8130-ad35d31bb992" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:00 np0005539552 nova_compute[233724]: 2025-11-29 08:14:00.930 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:01.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:01.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:02 np0005539552 nova_compute[233724]: 2025-11-29 08:14:02.676 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:02 np0005539552 NetworkManager[48926]: <info>  [1764404042.6774] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/157)
Nov 29 03:14:02 np0005539552 NetworkManager[48926]: <info>  [1764404042.6788] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/158)
Nov 29 03:14:02 np0005539552 ovn_controller[133798]: 2025-11-29T08:14:02Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b2:af:cf 10.100.0.10
Nov 29 03:14:02 np0005539552 ovn_controller[133798]: 2025-11-29T08:14:02Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b2:af:cf 10.100.0.10
Nov 29 03:14:02 np0005539552 nova_compute[233724]: 2025-11-29 08:14:02.851 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:02 np0005539552 ovn_controller[133798]: 2025-11-29T08:14:02Z|00300|binding|INFO|Releasing lport fb53c57a-d19f-4391-add7-afa34095fb59 from this chassis (sb_readonly=0)
Nov 29 03:14:02 np0005539552 ovn_controller[133798]: 2025-11-29T08:14:02Z|00301|binding|INFO|Releasing lport 3c46c58e-3ab3-48f8-ac9f-00200acea5a6 from this chassis (sb_readonly=0)
Nov 29 03:14:02 np0005539552 nova_compute[233724]: 2025-11-29 08:14:02.929 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:03.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:03 np0005539552 nova_compute[233724]: 2025-11-29 08:14:03.529 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:03.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:03 np0005539552 nova_compute[233724]: 2025-11-29 08:14:03.626 233728 DEBUG nova.compute.manager [req-beb3f9f0-6b9a-4ff2-a026-72a23b1bc9bb req-b07e8cb8-5f92-4c37-9da0-0e64932c49b2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Received event network-changed-2dc39626-7aae-4e0c-a70b-e08d83c9788b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:03 np0005539552 nova_compute[233724]: 2025-11-29 08:14:03.627 233728 DEBUG nova.compute.manager [req-beb3f9f0-6b9a-4ff2-a026-72a23b1bc9bb req-b07e8cb8-5f92-4c37-9da0-0e64932c49b2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Refreshing instance network info cache due to event network-changed-2dc39626-7aae-4e0c-a70b-e08d83c9788b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:14:03 np0005539552 nova_compute[233724]: 2025-11-29 08:14:03.627 233728 DEBUG oslo_concurrency.lockutils [req-beb3f9f0-6b9a-4ff2-a026-72a23b1bc9bb req-b07e8cb8-5f92-4c37-9da0-0e64932c49b2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-c32e74e2-e74f-4877-8130-ad35d31bb992" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:14:03 np0005539552 nova_compute[233724]: 2025-11-29 08:14:03.627 233728 DEBUG oslo_concurrency.lockutils [req-beb3f9f0-6b9a-4ff2-a026-72a23b1bc9bb req-b07e8cb8-5f92-4c37-9da0-0e64932c49b2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-c32e74e2-e74f-4877-8130-ad35d31bb992" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:14:03 np0005539552 nova_compute[233724]: 2025-11-29 08:14:03.627 233728 DEBUG nova.network.neutron [req-beb3f9f0-6b9a-4ff2-a026-72a23b1bc9bb req-b07e8cb8-5f92-4c37-9da0-0e64932c49b2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Refreshing network info cache for port 2dc39626-7aae-4e0c-a70b-e08d83c9788b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:14:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:14:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:05.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:14:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:05.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:05 np0005539552 nova_compute[233724]: 2025-11-29 08:14:05.932 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:06 np0005539552 nova_compute[233724]: 2025-11-29 08:14:06.988 233728 DEBUG nova.network.neutron [req-beb3f9f0-6b9a-4ff2-a026-72a23b1bc9bb req-b07e8cb8-5f92-4c37-9da0-0e64932c49b2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Updated VIF entry in instance network info cache for port 2dc39626-7aae-4e0c-a70b-e08d83c9788b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:14:06 np0005539552 nova_compute[233724]: 2025-11-29 08:14:06.988 233728 DEBUG nova.network.neutron [req-beb3f9f0-6b9a-4ff2-a026-72a23b1bc9bb req-b07e8cb8-5f92-4c37-9da0-0e64932c49b2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Updating instance_info_cache with network_info: [{"id": "2dc39626-7aae-4e0c-a70b-e08d83c9788b", "address": "fa:16:3e:91:96:6a", "network": {"id": "88cc8f67-0d68-413a-b508-63fae18f1c0c", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-374579760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5f1f0d72cd0427a8cda48db244caf6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2dc39626-7a", "ovs_interfaceid": "2dc39626-7aae-4e0c-a70b-e08d83c9788b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:14:07 np0005539552 nova_compute[233724]: 2025-11-29 08:14:07.011 233728 DEBUG oslo_concurrency.lockutils [req-beb3f9f0-6b9a-4ff2-a026-72a23b1bc9bb req-b07e8cb8-5f92-4c37-9da0-0e64932c49b2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-c32e74e2-e74f-4877-8130-ad35d31bb992" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:14:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:07.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:07 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:07.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:08 np0005539552 nova_compute[233724]: 2025-11-29 08:14:08.532 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:09.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:09.455 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:14:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:09.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:14:10 np0005539552 nova_compute[233724]: 2025-11-29 08:14:10.190 233728 DEBUG nova.compute.manager [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Stashing vm_state: stopped _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Nov 29 03:14:10 np0005539552 nova_compute[233724]: 2025-11-29 08:14:10.637 233728 DEBUG oslo_concurrency.lockutils [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:10 np0005539552 nova_compute[233724]: 2025-11-29 08:14:10.637 233728 DEBUG oslo_concurrency.lockutils [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:10 np0005539552 nova_compute[233724]: 2025-11-29 08:14:10.713 233728 DEBUG nova.objects.instance [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'pci_requests' on Instance uuid 19e85fae-c57e-409b-95f7-b53ddb4c928e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:10 np0005539552 nova_compute[233724]: 2025-11-29 08:14:10.896 233728 DEBUG nova.virt.hardware [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:14:10 np0005539552 nova_compute[233724]: 2025-11-29 08:14:10.897 233728 INFO nova.compute.claims [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:14:10 np0005539552 nova_compute[233724]: 2025-11-29 08:14:10.897 233728 DEBUG nova.objects.instance [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'resources' on Instance uuid 19e85fae-c57e-409b-95f7-b53ddb4c928e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:10 np0005539552 nova_compute[233724]: 2025-11-29 08:14:10.934 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:10 np0005539552 nova_compute[233724]: 2025-11-29 08:14:10.975 233728 DEBUG nova.objects.instance [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'pci_devices' on Instance uuid 19e85fae-c57e-409b-95f7-b53ddb4c928e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:11 np0005539552 nova_compute[233724]: 2025-11-29 08:14:11.064 233728 INFO nova.compute.resource_tracker [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Updating resource usage from migration 0444e4d7-f620-4d8e-910b-fc76bd03562e#033[00m
Nov 29 03:14:11 np0005539552 nova_compute[233724]: 2025-11-29 08:14:11.065 233728 DEBUG nova.compute.resource_tracker [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Starting to track incoming migration 0444e4d7-f620-4d8e-910b-fc76bd03562e with flavor 709b029f-0458-4e40-a6ee-e1e02b48c06c _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Nov 29 03:14:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:11.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:11 np0005539552 nova_compute[233724]: 2025-11-29 08:14:11.342 233728 DEBUG oslo_concurrency.processutils [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:11.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:11 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:14:11 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/548858972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:14:11 np0005539552 nova_compute[233724]: 2025-11-29 08:14:11.777 233728 DEBUG oslo_concurrency.processutils [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:11 np0005539552 nova_compute[233724]: 2025-11-29 08:14:11.782 233728 DEBUG nova.compute.provider_tree [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:14:11 np0005539552 nova_compute[233724]: 2025-11-29 08:14:11.854 233728 DEBUG nova.scheduler.client.report [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:14:11 np0005539552 nova_compute[233724]: 2025-11-29 08:14:11.915 233728 DEBUG oslo_concurrency.lockutils [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 1.277s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:11 np0005539552 nova_compute[233724]: 2025-11-29 08:14:11.916 233728 INFO nova.compute.manager [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Migrating#033[00m
Nov 29 03:14:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:13.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:13 np0005539552 nova_compute[233724]: 2025-11-29 08:14:13.537 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:13.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:13 np0005539552 ovn_controller[133798]: 2025-11-29T08:14:13Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:91:96:6a 10.100.0.12
Nov 29 03:14:13 np0005539552 ovn_controller[133798]: 2025-11-29T08:14:13Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:91:96:6a 10.100.0.12
Nov 29 03:14:14 np0005539552 systemd-logind[788]: New session 65 of user nova.
Nov 29 03:14:14 np0005539552 systemd[1]: Created slice User Slice of UID 42436.
Nov 29 03:14:14 np0005539552 systemd[1]: Starting User Runtime Directory /run/user/42436...
Nov 29 03:14:14 np0005539552 systemd[1]: Finished User Runtime Directory /run/user/42436.
Nov 29 03:14:14 np0005539552 systemd[1]: Starting User Manager for UID 42436...
Nov 29 03:14:14 np0005539552 systemd[270403]: Queued start job for default target Main User Target.
Nov 29 03:14:15 np0005539552 systemd[270403]: Created slice User Application Slice.
Nov 29 03:14:15 np0005539552 systemd[270403]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 03:14:15 np0005539552 systemd[270403]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 03:14:15 np0005539552 systemd[270403]: Reached target Paths.
Nov 29 03:14:15 np0005539552 systemd[270403]: Reached target Timers.
Nov 29 03:14:15 np0005539552 systemd[270403]: Starting D-Bus User Message Bus Socket...
Nov 29 03:14:15 np0005539552 systemd[270403]: Starting Create User's Volatile Files and Directories...
Nov 29 03:14:15 np0005539552 systemd[270403]: Listening on D-Bus User Message Bus Socket.
Nov 29 03:14:15 np0005539552 systemd[270403]: Reached target Sockets.
Nov 29 03:14:15 np0005539552 systemd[270403]: Finished Create User's Volatile Files and Directories.
Nov 29 03:14:15 np0005539552 systemd[270403]: Reached target Basic System.
Nov 29 03:14:15 np0005539552 systemd[270403]: Reached target Main User Target.
Nov 29 03:14:15 np0005539552 systemd[270403]: Startup finished in 170ms.
Nov 29 03:14:15 np0005539552 systemd[1]: Started User Manager for UID 42436.
Nov 29 03:14:15 np0005539552 systemd[1]: Started Session 65 of User nova.
Nov 29 03:14:15 np0005539552 systemd[1]: session-65.scope: Deactivated successfully.
Nov 29 03:14:15 np0005539552 systemd-logind[788]: Session 65 logged out. Waiting for processes to exit.
Nov 29 03:14:15 np0005539552 systemd-logind[788]: Removed session 65.
Nov 29 03:14:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:14:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:15.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:14:15 np0005539552 systemd-logind[788]: New session 67 of user nova.
Nov 29 03:14:15 np0005539552 systemd[1]: Started Session 67 of User nova.
Nov 29 03:14:15 np0005539552 systemd[1]: session-67.scope: Deactivated successfully.
Nov 29 03:14:15 np0005539552 systemd-logind[788]: Session 67 logged out. Waiting for processes to exit.
Nov 29 03:14:15 np0005539552 systemd-logind[788]: Removed session 67.
Nov 29 03:14:15 np0005539552 nova_compute[233724]: 2025-11-29 08:14:15.482 233728 DEBUG oslo_concurrency.lockutils [None req-50e55ea5-1602-4919-b540-42947f1f1649 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Acquiring lock "36048c92-5df2-425d-b12f-1ce0326cc6a1" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:15 np0005539552 nova_compute[233724]: 2025-11-29 08:14:15.483 233728 DEBUG oslo_concurrency.lockutils [None req-50e55ea5-1602-4919-b540-42947f1f1649 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "36048c92-5df2-425d-b12f-1ce0326cc6a1" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:15 np0005539552 nova_compute[233724]: 2025-11-29 08:14:15.483 233728 DEBUG nova.compute.manager [None req-50e55ea5-1602-4919-b540-42947f1f1649 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:14:15 np0005539552 nova_compute[233724]: 2025-11-29 08:14:15.488 233728 DEBUG nova.compute.manager [None req-50e55ea5-1602-4919-b540-42947f1f1649 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Nov 29 03:14:15 np0005539552 nova_compute[233724]: 2025-11-29 08:14:15.489 233728 DEBUG nova.objects.instance [None req-50e55ea5-1602-4919-b540-42947f1f1649 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lazy-loading 'flavor' on Instance uuid 36048c92-5df2-425d-b12f-1ce0326cc6a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:15 np0005539552 nova_compute[233724]: 2025-11-29 08:14:15.518 233728 DEBUG nova.virt.libvirt.driver [None req-50e55ea5-1602-4919-b540-42947f1f1649 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:14:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:15.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:15 np0005539552 nova_compute[233724]: 2025-11-29 08:14:15.937 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:16 np0005539552 nova_compute[233724]: 2025-11-29 08:14:16.422 233728 INFO nova.network.neutron [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Updating port d8b38a34-8274-43e4-8ebd-3924de5c5ba7 with attributes {'binding:host_id': 'compute-2.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Nov 29 03:14:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:14:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:17.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:14:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:14:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:17.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:14:18 np0005539552 nova_compute[233724]: 2025-11-29 08:14:18.255 233728 DEBUG oslo_concurrency.lockutils [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "refresh_cache-19e85fae-c57e-409b-95f7-b53ddb4c928e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:14:18 np0005539552 nova_compute[233724]: 2025-11-29 08:14:18.255 233728 DEBUG oslo_concurrency.lockutils [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquired lock "refresh_cache-19e85fae-c57e-409b-95f7-b53ddb4c928e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:14:18 np0005539552 nova_compute[233724]: 2025-11-29 08:14:18.255 233728 DEBUG nova.network.neutron [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:14:18 np0005539552 nova_compute[233724]: 2025-11-29 08:14:18.380 233728 DEBUG nova.compute.manager [req-2884a826-bf6a-485f-8754-ffc41a7923aa req-bf7e74b4-2272-42ff-8c66-3d29101dfda2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Received event network-changed-d8b38a34-8274-43e4-8ebd-3924de5c5ba7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:18 np0005539552 nova_compute[233724]: 2025-11-29 08:14:18.381 233728 DEBUG nova.compute.manager [req-2884a826-bf6a-485f-8754-ffc41a7923aa req-bf7e74b4-2272-42ff-8c66-3d29101dfda2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Refreshing instance network info cache due to event network-changed-d8b38a34-8274-43e4-8ebd-3924de5c5ba7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:14:18 np0005539552 nova_compute[233724]: 2025-11-29 08:14:18.381 233728 DEBUG oslo_concurrency.lockutils [req-2884a826-bf6a-485f-8754-ffc41a7923aa req-bf7e74b4-2272-42ff-8c66-3d29101dfda2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-19e85fae-c57e-409b-95f7-b53ddb4c928e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:14:18 np0005539552 kernel: tap1d12c166-4c (unregistering): left promiscuous mode
Nov 29 03:14:18 np0005539552 NetworkManager[48926]: <info>  [1764404058.5026] device (tap1d12c166-4c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:14:18 np0005539552 nova_compute[233724]: 2025-11-29 08:14:18.533 233728 INFO nova.virt.libvirt.driver [None req-50e55ea5-1602-4919-b540-42947f1f1649 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Instance shutdown successfully after 3 seconds.#033[00m
Nov 29 03:14:18 np0005539552 nova_compute[233724]: 2025-11-29 08:14:18.569 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:18 np0005539552 ovn_controller[133798]: 2025-11-29T08:14:18Z|00302|binding|INFO|Releasing lport 1d12c166-4cae-49ec-ab9b-149d65ceb0b6 from this chassis (sb_readonly=0)
Nov 29 03:14:18 np0005539552 ovn_controller[133798]: 2025-11-29T08:14:18Z|00303|binding|INFO|Setting lport 1d12c166-4cae-49ec-ab9b-149d65ceb0b6 down in Southbound
Nov 29 03:14:18 np0005539552 ovn_controller[133798]: 2025-11-29T08:14:18Z|00304|binding|INFO|Removing iface tap1d12c166-4c ovn-installed in OVS
Nov 29 03:14:18 np0005539552 nova_compute[233724]: 2025-11-29 08:14:18.574 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:18.582 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:af:cf 10.100.0.10'], port_security=['fa:16:3e:b2:af:cf 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '36048c92-5df2-425d-b12f-1ce0326cc6a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5ce08321-9ca9-47d5-b99b-65a439440787', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '17c0ff0fdeac43fc8fa0d7bedad67c34', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9e0588e8-cc01-4cf1-ba71-74f90ca3214d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=65c90a62-2d0d-4ced-b7e5-a1b1d91ba84b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=1d12c166-4cae-49ec-ab9b-149d65ceb0b6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:14:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:18.583 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 1d12c166-4cae-49ec-ab9b-149d65ceb0b6 in datapath 5ce08321-9ca9-47d5-b99b-65a439440787 unbound from our chassis#033[00m
Nov 29 03:14:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:18.584 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5ce08321-9ca9-47d5-b99b-65a439440787, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:14:18 np0005539552 nova_compute[233724]: 2025-11-29 08:14:18.586 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:18.586 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4bb6c23f-a8c9-4f90-b45f-0037f95a825d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:18.587 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787 namespace which is not needed anymore#033[00m
Nov 29 03:14:18 np0005539552 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000005a.scope: Deactivated successfully.
Nov 29 03:14:18 np0005539552 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000005a.scope: Consumed 14.827s CPU time.
Nov 29 03:14:18 np0005539552 systemd-machined[196379]: Machine qemu-33-instance-0000005a terminated.
Nov 29 03:14:18 np0005539552 neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787[269912]: [NOTICE]   (269916) : haproxy version is 2.8.14-c23fe91
Nov 29 03:14:18 np0005539552 neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787[269912]: [NOTICE]   (269916) : path to executable is /usr/sbin/haproxy
Nov 29 03:14:18 np0005539552 neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787[269912]: [WARNING]  (269916) : Exiting Master process...
Nov 29 03:14:18 np0005539552 neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787[269912]: [ALERT]    (269916) : Current worker (269918) exited with code 143 (Terminated)
Nov 29 03:14:18 np0005539552 neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787[269912]: [WARNING]  (269916) : All workers exited. Exiting... (0)
Nov 29 03:14:18 np0005539552 systemd[1]: libpod-216d0cc04433d1ecff5ba44b69c88726d43b1f52f4d9bc1d0e3667e384c480da.scope: Deactivated successfully.
Nov 29 03:14:18 np0005539552 conmon[269912]: conmon 216d0cc04433d1ecff5b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-216d0cc04433d1ecff5ba44b69c88726d43b1f52f4d9bc1d0e3667e384c480da.scope/container/memory.events
Nov 29 03:14:18 np0005539552 podman[270500]: 2025-11-29 08:14:18.714714858 +0000 UTC m=+0.042596495 container died 216d0cc04433d1ecff5ba44b69c88726d43b1f52f4d9bc1d0e3667e384c480da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 03:14:18 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-216d0cc04433d1ecff5ba44b69c88726d43b1f52f4d9bc1d0e3667e384c480da-userdata-shm.mount: Deactivated successfully.
Nov 29 03:14:18 np0005539552 systemd[1]: var-lib-containers-storage-overlay-cf2786c71855609f7f27dde0de29d5d396ff0edf5df289028d563f955a7030f4-merged.mount: Deactivated successfully.
Nov 29 03:14:18 np0005539552 podman[270500]: 2025-11-29 08:14:18.746969185 +0000 UTC m=+0.074850812 container cleanup 216d0cc04433d1ecff5ba44b69c88726d43b1f52f4d9bc1d0e3667e384c480da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 29 03:14:18 np0005539552 systemd[1]: libpod-conmon-216d0cc04433d1ecff5ba44b69c88726d43b1f52f4d9bc1d0e3667e384c480da.scope: Deactivated successfully.
Nov 29 03:14:18 np0005539552 nova_compute[233724]: 2025-11-29 08:14:18.803 233728 INFO nova.virt.libvirt.driver [-] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Instance destroyed successfully.#033[00m
Nov 29 03:14:18 np0005539552 nova_compute[233724]: 2025-11-29 08:14:18.804 233728 DEBUG nova.objects.instance [None req-50e55ea5-1602-4919-b540-42947f1f1649 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lazy-loading 'numa_topology' on Instance uuid 36048c92-5df2-425d-b12f-1ce0326cc6a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:18 np0005539552 podman[270531]: 2025-11-29 08:14:18.810655016 +0000 UTC m=+0.045861984 container remove 216d0cc04433d1ecff5ba44b69c88726d43b1f52f4d9bc1d0e3667e384c480da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:14:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:18.817 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ac4ae145-c0bd-49a4-8256-bc82b83b5e7e]: (4, ('Sat Nov 29 08:14:18 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787 (216d0cc04433d1ecff5ba44b69c88726d43b1f52f4d9bc1d0e3667e384c480da)\n216d0cc04433d1ecff5ba44b69c88726d43b1f52f4d9bc1d0e3667e384c480da\nSat Nov 29 08:14:18 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787 (216d0cc04433d1ecff5ba44b69c88726d43b1f52f4d9bc1d0e3667e384c480da)\n216d0cc04433d1ecff5ba44b69c88726d43b1f52f4d9bc1d0e3667e384c480da\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:18.819 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4d8ba362-387e-44df-95d5-25b2fd967ccd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:18.820 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5ce08321-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:18 np0005539552 nova_compute[233724]: 2025-11-29 08:14:18.821 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:18 np0005539552 kernel: tap5ce08321-90: left promiscuous mode
Nov 29 03:14:18 np0005539552 nova_compute[233724]: 2025-11-29 08:14:18.824 233728 DEBUG nova.compute.manager [None req-50e55ea5-1602-4919-b540-42947f1f1649 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:14:18 np0005539552 nova_compute[233724]: 2025-11-29 08:14:18.839 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:18.841 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[de2b0f7b-862e-4497-bc86-ef9c92257213]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:18.852 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0e9669fb-f803-4b5d-b9fe-cea777b4125c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:18.853 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[dfc544ab-4de5-46b4-8b47-7e32791e91e2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:18.866 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e2316e5f-8b48-4c4d-88c1-69e8ee031107]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707086, 'reachable_time': 19496, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270562, 'error': None, 'target': 'ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:18 np0005539552 systemd[1]: run-netns-ovnmeta\x2d5ce08321\x2d9ca9\x2d47d5\x2db99b\x2d65a439440787.mount: Deactivated successfully.
Nov 29 03:14:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:18.869 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:14:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:18.869 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[f2972dbb-5e39-4906-871c-6a4ba29fdff0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:18 np0005539552 nova_compute[233724]: 2025-11-29 08:14:18.872 233728 DEBUG oslo_concurrency.lockutils [None req-50e55ea5-1602-4919-b540-42947f1f1649 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "36048c92-5df2-425d-b12f-1ce0326cc6a1" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.389s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:19 np0005539552 nova_compute[233724]: 2025-11-29 08:14:19.153 233728 DEBUG nova.compute.manager [req-742e06c4-cfda-4642-b82c-841624e70247 req-d3683351-5843-46eb-b8cc-62647d376816 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Received event network-vif-unplugged-1d12c166-4cae-49ec-ab9b-149d65ceb0b6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:19 np0005539552 nova_compute[233724]: 2025-11-29 08:14:19.154 233728 DEBUG oslo_concurrency.lockutils [req-742e06c4-cfda-4642-b82c-841624e70247 req-d3683351-5843-46eb-b8cc-62647d376816 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "36048c92-5df2-425d-b12f-1ce0326cc6a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:19 np0005539552 nova_compute[233724]: 2025-11-29 08:14:19.154 233728 DEBUG oslo_concurrency.lockutils [req-742e06c4-cfda-4642-b82c-841624e70247 req-d3683351-5843-46eb-b8cc-62647d376816 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "36048c92-5df2-425d-b12f-1ce0326cc6a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:19 np0005539552 nova_compute[233724]: 2025-11-29 08:14:19.154 233728 DEBUG oslo_concurrency.lockutils [req-742e06c4-cfda-4642-b82c-841624e70247 req-d3683351-5843-46eb-b8cc-62647d376816 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "36048c92-5df2-425d-b12f-1ce0326cc6a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:19 np0005539552 nova_compute[233724]: 2025-11-29 08:14:19.154 233728 DEBUG nova.compute.manager [req-742e06c4-cfda-4642-b82c-841624e70247 req-d3683351-5843-46eb-b8cc-62647d376816 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] No waiting events found dispatching network-vif-unplugged-1d12c166-4cae-49ec-ab9b-149d65ceb0b6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:14:19 np0005539552 nova_compute[233724]: 2025-11-29 08:14:19.155 233728 WARNING nova.compute.manager [req-742e06c4-cfda-4642-b82c-841624e70247 req-d3683351-5843-46eb-b8cc-62647d376816 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Received unexpected event network-vif-unplugged-1d12c166-4cae-49ec-ab9b-149d65ceb0b6 for instance with vm_state stopped and task_state None.#033[00m
Nov 29 03:14:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:19.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:14:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:19.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:14:20 np0005539552 nova_compute[233724]: 2025-11-29 08:14:20.375 233728 DEBUG oslo_concurrency.lockutils [None req-f148131f-918c-4a8f-91e5-6f7339e94781 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Acquiring lock "interface-c32e74e2-e74f-4877-8130-ad35d31bb992-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:20 np0005539552 nova_compute[233724]: 2025-11-29 08:14:20.375 233728 DEBUG oslo_concurrency.lockutils [None req-f148131f-918c-4a8f-91e5-6f7339e94781 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Lock "interface-c32e74e2-e74f-4877-8130-ad35d31bb992-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:20 np0005539552 nova_compute[233724]: 2025-11-29 08:14:20.375 233728 DEBUG nova.objects.instance [None req-f148131f-918c-4a8f-91e5-6f7339e94781 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Lazy-loading 'flavor' on Instance uuid c32e74e2-e74f-4877-8130-ad35d31bb992 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:20 np0005539552 nova_compute[233724]: 2025-11-29 08:14:20.453 233728 DEBUG nova.network.neutron [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Updating instance_info_cache with network_info: [{"id": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "address": "fa:16:3e:de:2f:2f", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8b38a34-82", "ovs_interfaceid": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:14:20 np0005539552 nova_compute[233724]: 2025-11-29 08:14:20.486 233728 DEBUG oslo_concurrency.lockutils [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Releasing lock "refresh_cache-19e85fae-c57e-409b-95f7-b53ddb4c928e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:14:20 np0005539552 nova_compute[233724]: 2025-11-29 08:14:20.491 233728 DEBUG oslo_concurrency.lockutils [req-2884a826-bf6a-485f-8754-ffc41a7923aa req-bf7e74b4-2272-42ff-8c66-3d29101dfda2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-19e85fae-c57e-409b-95f7-b53ddb4c928e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:14:20 np0005539552 nova_compute[233724]: 2025-11-29 08:14:20.492 233728 DEBUG nova.network.neutron [req-2884a826-bf6a-485f-8754-ffc41a7923aa req-bf7e74b4-2272-42ff-8c66-3d29101dfda2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Refreshing network info cache for port d8b38a34-8274-43e4-8ebd-3924de5c5ba7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:14:20 np0005539552 nova_compute[233724]: 2025-11-29 08:14:20.599 233728 DEBUG nova.virt.libvirt.driver [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Nov 29 03:14:20 np0005539552 nova_compute[233724]: 2025-11-29 08:14:20.601 233728 DEBUG nova.virt.libvirt.driver [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 29 03:14:20 np0005539552 nova_compute[233724]: 2025-11-29 08:14:20.601 233728 INFO nova.virt.libvirt.driver [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Creating image(s)#033[00m
Nov 29 03:14:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:20.621 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:20.622 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:20.623 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:20 np0005539552 nova_compute[233724]: 2025-11-29 08:14:20.640 233728 DEBUG nova.storage.rbd_utils [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] creating snapshot(nova-resize) on rbd image(19e85fae-c57e-409b-95f7-b53ddb4c928e_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:14:20 np0005539552 nova_compute[233724]: 2025-11-29 08:14:20.721 233728 DEBUG nova.objects.instance [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lazy-loading 'flavor' on Instance uuid 36048c92-5df2-425d-b12f-1ce0326cc6a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:20 np0005539552 nova_compute[233724]: 2025-11-29 08:14:20.745 233728 DEBUG oslo_concurrency.lockutils [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Acquiring lock "refresh_cache-36048c92-5df2-425d-b12f-1ce0326cc6a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:14:20 np0005539552 nova_compute[233724]: 2025-11-29 08:14:20.746 233728 DEBUG oslo_concurrency.lockutils [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Acquired lock "refresh_cache-36048c92-5df2-425d-b12f-1ce0326cc6a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:14:20 np0005539552 nova_compute[233724]: 2025-11-29 08:14:20.746 233728 DEBUG nova.network.neutron [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:14:20 np0005539552 nova_compute[233724]: 2025-11-29 08:14:20.746 233728 DEBUG nova.objects.instance [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lazy-loading 'info_cache' on Instance uuid 36048c92-5df2-425d-b12f-1ce0326cc6a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:20 np0005539552 nova_compute[233724]: 2025-11-29 08:14:20.981 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:21.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:21 np0005539552 ovn_controller[133798]: 2025-11-29T08:14:21Z|00305|binding|INFO|Releasing lport 3c46c58e-3ab3-48f8-ac9f-00200acea5a6 from this chassis (sb_readonly=0)
Nov 29 03:14:21 np0005539552 nova_compute[233724]: 2025-11-29 08:14:21.267 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:21 np0005539552 nova_compute[233724]: 2025-11-29 08:14:21.317 233728 DEBUG nova.compute.manager [req-8014c84d-9836-42c0-93f7-43e76b4cc4bd req-7f19d4ea-9595-41f8-8b95-37c81251dcaf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Received event network-vif-plugged-1d12c166-4cae-49ec-ab9b-149d65ceb0b6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:21 np0005539552 nova_compute[233724]: 2025-11-29 08:14:21.318 233728 DEBUG oslo_concurrency.lockutils [req-8014c84d-9836-42c0-93f7-43e76b4cc4bd req-7f19d4ea-9595-41f8-8b95-37c81251dcaf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "36048c92-5df2-425d-b12f-1ce0326cc6a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:21 np0005539552 nova_compute[233724]: 2025-11-29 08:14:21.318 233728 DEBUG oslo_concurrency.lockutils [req-8014c84d-9836-42c0-93f7-43e76b4cc4bd req-7f19d4ea-9595-41f8-8b95-37c81251dcaf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "36048c92-5df2-425d-b12f-1ce0326cc6a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:21 np0005539552 nova_compute[233724]: 2025-11-29 08:14:21.318 233728 DEBUG oslo_concurrency.lockutils [req-8014c84d-9836-42c0-93f7-43e76b4cc4bd req-7f19d4ea-9595-41f8-8b95-37c81251dcaf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "36048c92-5df2-425d-b12f-1ce0326cc6a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:21 np0005539552 nova_compute[233724]: 2025-11-29 08:14:21.319 233728 DEBUG nova.compute.manager [req-8014c84d-9836-42c0-93f7-43e76b4cc4bd req-7f19d4ea-9595-41f8-8b95-37c81251dcaf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] No waiting events found dispatching network-vif-plugged-1d12c166-4cae-49ec-ab9b-149d65ceb0b6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:14:21 np0005539552 nova_compute[233724]: 2025-11-29 08:14:21.319 233728 WARNING nova.compute.manager [req-8014c84d-9836-42c0-93f7-43e76b4cc4bd req-7f19d4ea-9595-41f8-8b95-37c81251dcaf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Received unexpected event network-vif-plugged-1d12c166-4cae-49ec-ab9b-149d65ceb0b6 for instance with vm_state stopped and task_state powering-on.#033[00m
Nov 29 03:14:21 np0005539552 nova_compute[233724]: 2025-11-29 08:14:21.493 233728 DEBUG nova.objects.instance [None req-f148131f-918c-4a8f-91e5-6f7339e94781 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Lazy-loading 'pci_requests' on Instance uuid c32e74e2-e74f-4877-8130-ad35d31bb992 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:21 np0005539552 nova_compute[233724]: 2025-11-29 08:14:21.519 233728 DEBUG nova.network.neutron [None req-f148131f-918c-4a8f-91e5-6f7339e94781 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:14:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:21.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:21 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e292 e292: 3 total, 3 up, 3 in
Nov 29 03:14:22 np0005539552 nova_compute[233724]: 2025-11-29 08:14:22.073 233728 DEBUG nova.policy [None req-f148131f-918c-4a8f-91e5-6f7339e94781 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '90573489491c4659ba4a8ccbd6b896a7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b5f1f0d72cd0427a8cda48db244caf6c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:14:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:22 np0005539552 nova_compute[233724]: 2025-11-29 08:14:22.695 233728 DEBUG nova.objects.instance [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 19e85fae-c57e-409b-95f7-b53ddb4c928e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:22 np0005539552 nova_compute[233724]: 2025-11-29 08:14:22.835 233728 DEBUG nova.virt.libvirt.driver [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:14:22 np0005539552 nova_compute[233724]: 2025-11-29 08:14:22.835 233728 DEBUG nova.virt.libvirt.driver [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Ensure instance console log exists: /var/lib/nova/instances/19e85fae-c57e-409b-95f7-b53ddb4c928e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:14:22 np0005539552 nova_compute[233724]: 2025-11-29 08:14:22.836 233728 DEBUG oslo_concurrency.lockutils [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:22 np0005539552 nova_compute[233724]: 2025-11-29 08:14:22.836 233728 DEBUG oslo_concurrency.lockutils [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:22 np0005539552 nova_compute[233724]: 2025-11-29 08:14:22.836 233728 DEBUG oslo_concurrency.lockutils [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:22 np0005539552 nova_compute[233724]: 2025-11-29 08:14:22.839 233728 DEBUG nova.virt.libvirt.driver [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Start _get_guest_xml network_info=[{"id": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "address": "fa:16:3e:de:2f:2f", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-322060255-network", "vif_mac": "fa:16:3e:de:2f:2f"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8b38a34-82", "ovs_interfaceid": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:14:22 np0005539552 nova_compute[233724]: 2025-11-29 08:14:22.844 233728 WARNING nova.virt.libvirt.driver [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:14:22 np0005539552 nova_compute[233724]: 2025-11-29 08:14:22.854 233728 DEBUG nova.virt.libvirt.host [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:14:22 np0005539552 nova_compute[233724]: 2025-11-29 08:14:22.855 233728 DEBUG nova.virt.libvirt.host [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:14:22 np0005539552 nova_compute[233724]: 2025-11-29 08:14:22.859 233728 DEBUG nova.virt.libvirt.host [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:14:22 np0005539552 nova_compute[233724]: 2025-11-29 08:14:22.859 233728 DEBUG nova.virt.libvirt.host [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:14:22 np0005539552 nova_compute[233724]: 2025-11-29 08:14:22.861 233728 DEBUG nova.virt.libvirt.driver [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:14:22 np0005539552 nova_compute[233724]: 2025-11-29 08:14:22.861 233728 DEBUG nova.virt.hardware [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='709b029f-0458-4e40-a6ee-e1e02b48c06c',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:14:22 np0005539552 nova_compute[233724]: 2025-11-29 08:14:22.862 233728 DEBUG nova.virt.hardware [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:14:22 np0005539552 nova_compute[233724]: 2025-11-29 08:14:22.862 233728 DEBUG nova.virt.hardware [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:14:22 np0005539552 nova_compute[233724]: 2025-11-29 08:14:22.862 233728 DEBUG nova.virt.hardware [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:14:22 np0005539552 nova_compute[233724]: 2025-11-29 08:14:22.862 233728 DEBUG nova.virt.hardware [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:14:22 np0005539552 nova_compute[233724]: 2025-11-29 08:14:22.863 233728 DEBUG nova.virt.hardware [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:14:22 np0005539552 nova_compute[233724]: 2025-11-29 08:14:22.863 233728 DEBUG nova.virt.hardware [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:14:22 np0005539552 nova_compute[233724]: 2025-11-29 08:14:22.863 233728 DEBUG nova.virt.hardware [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:14:22 np0005539552 nova_compute[233724]: 2025-11-29 08:14:22.863 233728 DEBUG nova.virt.hardware [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:14:22 np0005539552 nova_compute[233724]: 2025-11-29 08:14:22.864 233728 DEBUG nova.virt.hardware [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:14:22 np0005539552 nova_compute[233724]: 2025-11-29 08:14:22.864 233728 DEBUG nova.virt.hardware [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:14:22 np0005539552 nova_compute[233724]: 2025-11-29 08:14:22.864 233728 DEBUG nova.objects.instance [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 19e85fae-c57e-409b-95f7-b53ddb4c928e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:22 np0005539552 nova_compute[233724]: 2025-11-29 08:14:22.905 233728 DEBUG oslo_concurrency.processutils [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:22 np0005539552 nova_compute[233724]: 2025-11-29 08:14:22.929 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:22 np0005539552 nova_compute[233724]: 2025-11-29 08:14:22.982 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:22 np0005539552 nova_compute[233724]: 2025-11-29 08:14:22.983 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:22 np0005539552 nova_compute[233724]: 2025-11-29 08:14:22.983 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:22 np0005539552 nova_compute[233724]: 2025-11-29 08:14:22.983 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:14:22 np0005539552 nova_compute[233724]: 2025-11-29 08:14:22.984 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:23.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:14:23 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2256562950' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.359 233728 DEBUG oslo_concurrency.processutils [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.402 233728 DEBUG oslo_concurrency.processutils [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:14:23 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3643018250' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.437 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.554 233728 DEBUG nova.network.neutron [req-2884a826-bf6a-485f-8754-ffc41a7923aa req-bf7e74b4-2272-42ff-8c66-3d29101dfda2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Updated VIF entry in instance network info cache for port d8b38a34-8274-43e4-8ebd-3924de5c5ba7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.556 233728 DEBUG nova.network.neutron [req-2884a826-bf6a-485f-8754-ffc41a7923aa req-bf7e74b4-2272-42ff-8c66-3d29101dfda2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Updating instance_info_cache with network_info: [{"id": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "address": "fa:16:3e:de:2f:2f", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8b38a34-82", "ovs_interfaceid": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.571 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-0000005a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.572 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-0000005a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.575 233728 DEBUG oslo_concurrency.lockutils [req-2884a826-bf6a-485f-8754-ffc41a7923aa req-bf7e74b4-2272-42ff-8c66-3d29101dfda2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-19e85fae-c57e-409b-95f7-b53ddb4c928e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.576 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.582 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-0000005c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.582 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-0000005c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.613 233728 DEBUG nova.network.neutron [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Updating instance_info_cache with network_info: [{"id": "1d12c166-4cae-49ec-ab9b-149d65ceb0b6", "address": "fa:16:3e:b2:af:cf", "network": {"id": "5ce08321-9ca9-47d5-b99b-65a439440787", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1544923692-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17c0ff0fdeac43fc8fa0d7bedad67c34", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d12c166-4c", "ovs_interfaceid": "1d12c166-4cae-49ec-ab9b-149d65ceb0b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:14:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:23.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.653 233728 DEBUG oslo_concurrency.lockutils [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Releasing lock "refresh_cache-36048c92-5df2-425d-b12f-1ce0326cc6a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.656 233728 DEBUG nova.network.neutron [None req-f148131f-918c-4a8f-91e5-6f7339e94781 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Successfully created port: 4d97f024-e964-485a-9511-f23de3e843bd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.683 233728 INFO nova.virt.libvirt.driver [-] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Instance destroyed successfully.#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.683 233728 DEBUG nova.objects.instance [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lazy-loading 'numa_topology' on Instance uuid 36048c92-5df2-425d-b12f-1ce0326cc6a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.698 233728 DEBUG nova.objects.instance [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lazy-loading 'resources' on Instance uuid 36048c92-5df2-425d-b12f-1ce0326cc6a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.713 233728 DEBUG nova.virt.libvirt.vif [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:13:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-607468552',display_name='tempest-ListServerFiltersTestJSON-instance-607468552',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-607468552',id=90,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:13:48Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='17c0ff0fdeac43fc8fa0d7bedad67c34',ramdisk_id='',reservation_id='r-angclfpm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-825347861',owner_user_name='tempest-ListServerFiltersTestJSON-825347861-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:14:18Z,user_data=None,user_id='05e59f4debd946ad9b7a4bac0e968bc6',uuid=36048c92-5df2-425d-b12f-1ce0326cc6a1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "1d12c166-4cae-49ec-ab9b-149d65ceb0b6", "address": "fa:16:3e:b2:af:cf", "network": {"id": "5ce08321-9ca9-47d5-b99b-65a439440787", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1544923692-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17c0ff0fdeac43fc8fa0d7bedad67c34", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d12c166-4c", "ovs_interfaceid": "1d12c166-4cae-49ec-ab9b-149d65ceb0b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.713 233728 DEBUG nova.network.os_vif_util [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Converting VIF {"id": "1d12c166-4cae-49ec-ab9b-149d65ceb0b6", "address": "fa:16:3e:b2:af:cf", "network": {"id": "5ce08321-9ca9-47d5-b99b-65a439440787", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1544923692-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17c0ff0fdeac43fc8fa0d7bedad67c34", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d12c166-4c", "ovs_interfaceid": "1d12c166-4cae-49ec-ab9b-149d65ceb0b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.714 233728 DEBUG nova.network.os_vif_util [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b2:af:cf,bridge_name='br-int',has_traffic_filtering=True,id=1d12c166-4cae-49ec-ab9b-149d65ceb0b6,network=Network(5ce08321-9ca9-47d5-b99b-65a439440787),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d12c166-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.715 233728 DEBUG os_vif [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:af:cf,bridge_name='br-int',has_traffic_filtering=True,id=1d12c166-4cae-49ec-ab9b-149d65ceb0b6,network=Network(5ce08321-9ca9-47d5-b99b-65a439440787),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d12c166-4c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.717 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.717 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d12c166-4c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.718 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.720 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.723 233728 INFO os_vif [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:af:cf,bridge_name='br-int',has_traffic_filtering=True,id=1d12c166-4cae-49ec-ab9b-149d65ceb0b6,network=Network(5ce08321-9ca9-47d5-b99b-65a439440787),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d12c166-4c')#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.729 233728 DEBUG nova.virt.libvirt.driver [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Start _get_guest_xml network_info=[{"id": "1d12c166-4cae-49ec-ab9b-149d65ceb0b6", "address": "fa:16:3e:b2:af:cf", "network": {"id": "5ce08321-9ca9-47d5-b99b-65a439440787", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1544923692-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17c0ff0fdeac43fc8fa0d7bedad67c34", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d12c166-4c", "ovs_interfaceid": "1d12c166-4cae-49ec-ab9b-149d65ceb0b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.732 233728 WARNING nova.virt.libvirt.driver [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.735 233728 DEBUG nova.virt.libvirt.host [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.736 233728 DEBUG nova.virt.libvirt.host [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.758 233728 DEBUG nova.virt.libvirt.host [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.759 233728 DEBUG nova.virt.libvirt.host [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.760 233728 DEBUG nova.virt.libvirt.driver [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.760 233728 DEBUG nova.virt.hardware [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.761 233728 DEBUG nova.virt.hardware [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.761 233728 DEBUG nova.virt.hardware [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.761 233728 DEBUG nova.virt.hardware [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.762 233728 DEBUG nova.virt.hardware [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.762 233728 DEBUG nova.virt.hardware [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.762 233728 DEBUG nova.virt.hardware [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.762 233728 DEBUG nova.virt.hardware [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.763 233728 DEBUG nova.virt.hardware [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.763 233728 DEBUG nova.virt.hardware [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.763 233728 DEBUG nova.virt.hardware [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.763 233728 DEBUG nova.objects.instance [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 36048c92-5df2-425d-b12f-1ce0326cc6a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.795 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.796 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4262MB free_disk=20.715377807617188GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.796 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.796 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:14:23 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3644680923' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.888 233728 DEBUG oslo_concurrency.processutils [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.889 233728 DEBUG nova.virt.libvirt.vif [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:12:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1456117084',display_name='tempest-ServerActionsTestOtherB-server-1456117084',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1456117084',id=85,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEIWQ7Agoaix0SKEJrKHu4bB1Waq8EgVKfKJ/0RzVkl2dpwZ96ym4a4YEld/N4o6ej04XW7IMisQ29oCITVHbKZxjsHowaHjgF+3UGfTUq2pqZm9EZTJqhsQL0kJWzkKow==',key_name='tempest-keypair-319762409',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:12:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='1b8899f76f554afc96bb2441424e5a77',ramdisk_id='',reservation_id='r-h2yqalhl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='stopped',owner_project_name='tempest-ServerActionsTestOtherB-477220446',owner_user_name='tempest-ServerActionsTestOtherB-477220446-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:14:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5e3ade3963d47be97b545b2e3779b6b',uuid=19e85fae-c57e-409b-95f7-b53ddb4c928e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "address": "fa:16:3e:de:2f:2f", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": 
"tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-322060255-network", "vif_mac": "fa:16:3e:de:2f:2f"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8b38a34-82", "ovs_interfaceid": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.889 233728 DEBUG nova.network.os_vif_util [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converting VIF {"id": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "address": "fa:16:3e:de:2f:2f", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-322060255-network", "vif_mac": "fa:16:3e:de:2f:2f"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8b38a34-82", "ovs_interfaceid": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.890 233728 DEBUG nova.network.os_vif_util [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:2f:2f,bridge_name='br-int',has_traffic_filtering=True,id=d8b38a34-8274-43e4-8ebd-3924de5c5ba7,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8b38a34-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.892 233728 DEBUG nova.virt.libvirt.driver [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:14:23 np0005539552 nova_compute[233724]:  <uuid>19e85fae-c57e-409b-95f7-b53ddb4c928e</uuid>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:  <name>instance-00000055</name>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:  <memory>196608</memory>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:14:23 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:      <nova:name>tempest-ServerActionsTestOtherB-server-1456117084</nova:name>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:14:22</nova:creationTime>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.micro">
Nov 29 03:14:23 np0005539552 nova_compute[233724]:        <nova:memory>192</nova:memory>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:        <nova:user uuid="c5e3ade3963d47be97b545b2e3779b6b">tempest-ServerActionsTestOtherB-477220446-project-member</nova:user>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:        <nova:project uuid="1b8899f76f554afc96bb2441424e5a77">tempest-ServerActionsTestOtherB-477220446</nova:project>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:        <nova:port uuid="d8b38a34-8274-43e4-8ebd-3924de5c5ba7">
Nov 29 03:14:23 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:      <entry name="serial">19e85fae-c57e-409b-95f7-b53ddb4c928e</entry>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:      <entry name="uuid">19e85fae-c57e-409b-95f7-b53ddb4c928e</entry>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:14:23 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/19e85fae-c57e-409b-95f7-b53ddb4c928e_disk">
Nov 29 03:14:23 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:14:23 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:14:23 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/19e85fae-c57e-409b-95f7-b53ddb4c928e_disk.config">
Nov 29 03:14:23 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:14:23 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:14:23 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:de:2f:2f"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:      <target dev="tapd8b38a34-82"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:14:23 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/19e85fae-c57e-409b-95f7-b53ddb4c928e/console.log" append="off"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:14:23 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:14:23 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:14:23 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:14:23 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:14:23 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.894 233728 DEBUG nova.virt.libvirt.vif [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:12:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1456117084',display_name='tempest-ServerActionsTestOtherB-server-1456117084',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1456117084',id=85,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEIWQ7Agoaix0SKEJrKHu4bB1Waq8EgVKfKJ/0RzVkl2dpwZ96ym4a4YEld/N4o6ej04XW7IMisQ29oCITVHbKZxjsHowaHjgF+3UGfTUq2pqZm9EZTJqhsQL0kJWzkKow==',key_name='tempest-keypair-319762409',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:12:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='1b8899f76f554afc96bb2441424e5a77',ramdisk_id='',reservation_id='r-h2yqalhl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='stopped',owner_project_name='tempest-ServerActionsTestOtherB-477220446',owner_user_name='tempest-ServerActionsTestOtherB-477220446-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:14:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5e3ade3963d47be97b545b2e3779b6b',uuid=19e85fae-c57e-409b-95f7-b53ddb4c928e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "address": "fa:16:3e:de:2f:2f", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": 
"tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-322060255-network", "vif_mac": "fa:16:3e:de:2f:2f"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8b38a34-82", "ovs_interfaceid": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.895 233728 DEBUG nova.network.os_vif_util [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converting VIF {"id": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "address": "fa:16:3e:de:2f:2f", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-322060255-network", "vif_mac": "fa:16:3e:de:2f:2f"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8b38a34-82", "ovs_interfaceid": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.896 233728 DEBUG nova.network.os_vif_util [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:2f:2f,bridge_name='br-int',has_traffic_filtering=True,id=d8b38a34-8274-43e4-8ebd-3924de5c5ba7,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8b38a34-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.896 233728 DEBUG os_vif [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:2f:2f,bridge_name='br-int',has_traffic_filtering=True,id=d8b38a34-8274-43e4-8ebd-3924de5c5ba7,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8b38a34-82') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.897 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.897 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.898 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.901 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.901 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd8b38a34-82, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.902 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd8b38a34-82, col_values=(('external_ids', {'iface-id': 'd8b38a34-8274-43e4-8ebd-3924de5c5ba7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:de:2f:2f', 'vm-uuid': '19e85fae-c57e-409b-95f7-b53ddb4c928e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.903 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:23 np0005539552 NetworkManager[48926]: <info>  [1764404063.9047] manager: (tapd8b38a34-82): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/159)
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.906 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.909 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.910 233728 INFO os_vif [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:2f:2f,bridge_name='br-int',has_traffic_filtering=True,id=d8b38a34-8274-43e4-8ebd-3924de5c5ba7,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8b38a34-82')#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.958 233728 DEBUG oslo_concurrency.processutils [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.987 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Applying migration context for instance 19e85fae-c57e-409b-95f7-b53ddb4c928e as it has an incoming, in-progress migration 0444e4d7-f620-4d8e-910b-fc76bd03562e. Migration status is post-migrating _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:950#033[00m
Nov 29 03:14:23 np0005539552 nova_compute[233724]: 2025-11-29 08:14:23.988 233728 INFO nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Updating resource usage from migration 0444e4d7-f620-4d8e-910b-fc76bd03562e#033[00m
Nov 29 03:14:24 np0005539552 nova_compute[233724]: 2025-11-29 08:14:24.007 233728 DEBUG nova.virt.libvirt.driver [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:14:24 np0005539552 nova_compute[233724]: 2025-11-29 08:14:24.007 233728 DEBUG nova.virt.libvirt.driver [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:14:24 np0005539552 nova_compute[233724]: 2025-11-29 08:14:24.007 233728 DEBUG nova.virt.libvirt.driver [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] No VIF found with MAC fa:16:3e:de:2f:2f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:14:24 np0005539552 nova_compute[233724]: 2025-11-29 08:14:24.008 233728 INFO nova.virt.libvirt.driver [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Using config drive#033[00m
Nov 29 03:14:24 np0005539552 nova_compute[233724]: 2025-11-29 08:14:24.042 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 36048c92-5df2-425d-b12f-1ce0326cc6a1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:14:24 np0005539552 nova_compute[233724]: 2025-11-29 08:14:24.042 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance c32e74e2-e74f-4877-8130-ad35d31bb992 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:14:24 np0005539552 nova_compute[233724]: 2025-11-29 08:14:24.043 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 19e85fae-c57e-409b-95f7-b53ddb4c928e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:14:24 np0005539552 nova_compute[233724]: 2025-11-29 08:14:24.043 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:14:24 np0005539552 nova_compute[233724]: 2025-11-29 08:14:24.043 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=960MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:14:24 np0005539552 nova_compute[233724]: 2025-11-29 08:14:24.049 233728 DEBUG nova.compute.manager [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:14:24 np0005539552 nova_compute[233724]: 2025-11-29 08:14:24.049 233728 DEBUG nova.virt.libvirt.driver [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Nov 29 03:14:24 np0005539552 nova_compute[233724]: 2025-11-29 08:14:24.373 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:24 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:14:24 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/41822705' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:14:24 np0005539552 nova_compute[233724]: 2025-11-29 08:14:24.436 233728 DEBUG oslo_concurrency.processutils [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:24 np0005539552 nova_compute[233724]: 2025-11-29 08:14:24.483 233728 DEBUG oslo_concurrency.processutils [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:24 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:14:24 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2524879554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:14:24 np0005539552 nova_compute[233724]: 2025-11-29 08:14:24.893 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:24 np0005539552 nova_compute[233724]: 2025-11-29 08:14:24.900 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:14:24 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:14:24 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3778867204' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:14:24 np0005539552 nova_compute[233724]: 2025-11-29 08:14:24.919 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:14:24 np0005539552 nova_compute[233724]: 2025-11-29 08:14:24.942 233728 DEBUG oslo_concurrency.processutils [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:24 np0005539552 nova_compute[233724]: 2025-11-29 08:14:24.943 233728 DEBUG nova.virt.libvirt.vif [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:13:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-607468552',display_name='tempest-ListServerFiltersTestJSON-instance-607468552',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-607468552',id=90,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:13:48Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='17c0ff0fdeac43fc8fa0d7bedad67c34',ramdisk_id='',reservation_id='r-angclfpm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-825347861',owner_user_name='tempest-ListServerFiltersTestJSON-825347861-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:14:18Z,user_data=None,user_id='05e59f4debd946ad9b7a4bac0e968bc6',uuid=36048c92-5df2-425d-b12f-1ce0326cc6a1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "1d12c166-4cae-49ec-ab9b-149d65ceb0b6", "address": "fa:16:3e:b2:af:cf", "network": {"id": "5ce08321-9ca9-47d5-b99b-65a439440787", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1544923692-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17c0ff0fdeac43fc8fa0d7bedad67c34", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d12c166-4c", "ovs_interfaceid": "1d12c166-4cae-49ec-ab9b-149d65ceb0b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:14:24 np0005539552 nova_compute[233724]: 2025-11-29 08:14:24.943 233728 DEBUG nova.network.os_vif_util [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Converting VIF {"id": "1d12c166-4cae-49ec-ab9b-149d65ceb0b6", "address": "fa:16:3e:b2:af:cf", "network": {"id": "5ce08321-9ca9-47d5-b99b-65a439440787", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1544923692-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17c0ff0fdeac43fc8fa0d7bedad67c34", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d12c166-4c", "ovs_interfaceid": "1d12c166-4cae-49ec-ab9b-149d65ceb0b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:14:24 np0005539552 nova_compute[233724]: 2025-11-29 08:14:24.944 233728 DEBUG nova.network.os_vif_util [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b2:af:cf,bridge_name='br-int',has_traffic_filtering=True,id=1d12c166-4cae-49ec-ab9b-149d65ceb0b6,network=Network(5ce08321-9ca9-47d5-b99b-65a439440787),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d12c166-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:14:24 np0005539552 nova_compute[233724]: 2025-11-29 08:14:24.945 233728 DEBUG nova.objects.instance [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lazy-loading 'pci_devices' on Instance uuid 36048c92-5df2-425d-b12f-1ce0326cc6a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:24 np0005539552 nova_compute[233724]: 2025-11-29 08:14:24.966 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:14:24 np0005539552 nova_compute[233724]: 2025-11-29 08:14:24.966 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.170s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:24 np0005539552 nova_compute[233724]: 2025-11-29 08:14:24.966 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:24 np0005539552 nova_compute[233724]: 2025-11-29 08:14:24.967 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 03:14:24 np0005539552 nova_compute[233724]: 2025-11-29 08:14:24.970 233728 DEBUG nova.virt.libvirt.driver [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:14:24 np0005539552 nova_compute[233724]:  <uuid>36048c92-5df2-425d-b12f-1ce0326cc6a1</uuid>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:  <name>instance-0000005a</name>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:14:24 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:      <nova:name>tempest-ListServerFiltersTestJSON-instance-607468552</nova:name>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:14:23</nova:creationTime>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:14:24 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:        <nova:user uuid="05e59f4debd946ad9b7a4bac0e968bc6">tempest-ListServerFiltersTestJSON-825347861-project-member</nova:user>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:        <nova:project uuid="17c0ff0fdeac43fc8fa0d7bedad67c34">tempest-ListServerFiltersTestJSON-825347861</nova:project>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:        <nova:port uuid="1d12c166-4cae-49ec-ab9b-149d65ceb0b6">
Nov 29 03:14:24 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:      <entry name="serial">36048c92-5df2-425d-b12f-1ce0326cc6a1</entry>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:      <entry name="uuid">36048c92-5df2-425d-b12f-1ce0326cc6a1</entry>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:14:24 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/36048c92-5df2-425d-b12f-1ce0326cc6a1_disk">
Nov 29 03:14:24 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:14:24 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:14:24 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/36048c92-5df2-425d-b12f-1ce0326cc6a1_disk.config">
Nov 29 03:14:24 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:14:24 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:14:24 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:b2:af:cf"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:      <target dev="tap1d12c166-4c"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:14:24 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/36048c92-5df2-425d-b12f-1ce0326cc6a1/console.log" append="off"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <input type="keyboard" bus="usb"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:14:24 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:14:24 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:14:24 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:14:24 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:14:24 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:14:24 np0005539552 nova_compute[233724]: 2025-11-29 08:14:24.971 233728 DEBUG nova.virt.libvirt.driver [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] skipping disk for instance-0000005a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:14:24 np0005539552 nova_compute[233724]: 2025-11-29 08:14:24.971 233728 DEBUG nova.virt.libvirt.driver [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] skipping disk for instance-0000005a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:14:24 np0005539552 nova_compute[233724]: 2025-11-29 08:14:24.972 233728 DEBUG nova.virt.libvirt.vif [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:13:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-607468552',display_name='tempest-ListServerFiltersTestJSON-instance-607468552',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-607468552',id=90,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:13:48Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='17c0ff0fdeac43fc8fa0d7bedad67c34',ramdisk_id='',reservation_id='r-angclfpm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='vir
tio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-825347861',owner_user_name='tempest-ListServerFiltersTestJSON-825347861-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:14:18Z,user_data=None,user_id='05e59f4debd946ad9b7a4bac0e968bc6',uuid=36048c92-5df2-425d-b12f-1ce0326cc6a1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "1d12c166-4cae-49ec-ab9b-149d65ceb0b6", "address": "fa:16:3e:b2:af:cf", "network": {"id": "5ce08321-9ca9-47d5-b99b-65a439440787", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1544923692-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17c0ff0fdeac43fc8fa0d7bedad67c34", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d12c166-4c", "ovs_interfaceid": "1d12c166-4cae-49ec-ab9b-149d65ceb0b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:14:24 np0005539552 nova_compute[233724]: 2025-11-29 08:14:24.972 233728 DEBUG nova.network.os_vif_util [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Converting VIF {"id": "1d12c166-4cae-49ec-ab9b-149d65ceb0b6", "address": "fa:16:3e:b2:af:cf", "network": {"id": "5ce08321-9ca9-47d5-b99b-65a439440787", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1544923692-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17c0ff0fdeac43fc8fa0d7bedad67c34", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d12c166-4c", "ovs_interfaceid": "1d12c166-4cae-49ec-ab9b-149d65ceb0b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:14:24 np0005539552 nova_compute[233724]: 2025-11-29 08:14:24.973 233728 DEBUG nova.network.os_vif_util [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b2:af:cf,bridge_name='br-int',has_traffic_filtering=True,id=1d12c166-4cae-49ec-ab9b-149d65ceb0b6,network=Network(5ce08321-9ca9-47d5-b99b-65a439440787),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d12c166-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:14:24 np0005539552 nova_compute[233724]: 2025-11-29 08:14:24.973 233728 DEBUG os_vif [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:af:cf,bridge_name='br-int',has_traffic_filtering=True,id=1d12c166-4cae-49ec-ab9b-149d65ceb0b6,network=Network(5ce08321-9ca9-47d5-b99b-65a439440787),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d12c166-4c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:14:24 np0005539552 nova_compute[233724]: 2025-11-29 08:14:24.974 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:24 np0005539552 nova_compute[233724]: 2025-11-29 08:14:24.974 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:24 np0005539552 nova_compute[233724]: 2025-11-29 08:14:24.975 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:14:24 np0005539552 nova_compute[233724]: 2025-11-29 08:14:24.977 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:24 np0005539552 nova_compute[233724]: 2025-11-29 08:14:24.977 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1d12c166-4c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:24 np0005539552 nova_compute[233724]: 2025-11-29 08:14:24.978 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1d12c166-4c, col_values=(('external_ids', {'iface-id': '1d12c166-4cae-49ec-ab9b-149d65ceb0b6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b2:af:cf', 'vm-uuid': '36048c92-5df2-425d-b12f-1ce0326cc6a1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:24 np0005539552 nova_compute[233724]: 2025-11-29 08:14:24.979 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:24 np0005539552 NetworkManager[48926]: <info>  [1764404064.9807] manager: (tap1d12c166-4c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/160)
Nov 29 03:14:24 np0005539552 nova_compute[233724]: 2025-11-29 08:14:24.982 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:14:24 np0005539552 nova_compute[233724]: 2025-11-29 08:14:24.986 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:24 np0005539552 nova_compute[233724]: 2025-11-29 08:14:24.987 233728 INFO os_vif [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:af:cf,bridge_name='br-int',has_traffic_filtering=True,id=1d12c166-4cae-49ec-ab9b-149d65ceb0b6,network=Network(5ce08321-9ca9-47d5-b99b-65a439440787),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d12c166-4c')#033[00m
Nov 29 03:14:25 np0005539552 kernel: tap1d12c166-4c: entered promiscuous mode
Nov 29 03:14:25 np0005539552 NetworkManager[48926]: <info>  [1764404065.0479] manager: (tap1d12c166-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/161)
Nov 29 03:14:25 np0005539552 ovn_controller[133798]: 2025-11-29T08:14:25Z|00306|binding|INFO|Claiming lport 1d12c166-4cae-49ec-ab9b-149d65ceb0b6 for this chassis.
Nov 29 03:14:25 np0005539552 ovn_controller[133798]: 2025-11-29T08:14:25Z|00307|binding|INFO|1d12c166-4cae-49ec-ab9b-149d65ceb0b6: Claiming fa:16:3e:b2:af:cf 10.100.0.10
Nov 29 03:14:25 np0005539552 nova_compute[233724]: 2025-11-29 08:14:25.051 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:25.062 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:af:cf 10.100.0.10'], port_security=['fa:16:3e:b2:af:cf 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '36048c92-5df2-425d-b12f-1ce0326cc6a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5ce08321-9ca9-47d5-b99b-65a439440787', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '17c0ff0fdeac43fc8fa0d7bedad67c34', 'neutron:revision_number': '5', 'neutron:security_group_ids': '9e0588e8-cc01-4cf1-ba71-74f90ca3214d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=65c90a62-2d0d-4ced-b7e5-a1b1d91ba84b, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=1d12c166-4cae-49ec-ab9b-149d65ceb0b6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:25.063 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 1d12c166-4cae-49ec-ab9b-149d65ceb0b6 in datapath 5ce08321-9ca9-47d5-b99b-65a439440787 bound to our chassis#033[00m
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:25.064 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5ce08321-9ca9-47d5-b99b-65a439440787#033[00m
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:25.075 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[865a498e-a953-4085-a270-5df695bab2eb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:25.076 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5ce08321-91 in ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:14:25 np0005539552 systemd-machined[196379]: New machine qemu-35-instance-0000005a.
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:25.077 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5ce08321-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:25.077 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[bcba6563-2577-4b9e-a581-9be03fbca1fe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:25.078 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[694f7759-ffb6-455b-8a0b-55d07ff934c8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:25 np0005539552 systemd-udevd[270845]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:14:25 np0005539552 NetworkManager[48926]: <info>  [1764404065.0914] device (tap1d12c166-4c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:25.091 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[94c64fc2-6868-4536-8d68-a0aee032b74e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:25 np0005539552 NetworkManager[48926]: <info>  [1764404065.0932] device (tap1d12c166-4c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:14:25 np0005539552 systemd[1]: Started Virtual Machine qemu-35-instance-0000005a.
Nov 29 03:14:25 np0005539552 nova_compute[233724]: 2025-11-29 08:14:25.101 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:25 np0005539552 ovn_controller[133798]: 2025-11-29T08:14:25Z|00308|binding|INFO|Setting lport 1d12c166-4cae-49ec-ab9b-149d65ceb0b6 ovn-installed in OVS
Nov 29 03:14:25 np0005539552 ovn_controller[133798]: 2025-11-29T08:14:25Z|00309|binding|INFO|Setting lport 1d12c166-4cae-49ec-ab9b-149d65ceb0b6 up in Southbound
Nov 29 03:14:25 np0005539552 nova_compute[233724]: 2025-11-29 08:14:25.104 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:25.117 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3aca4dc3-f946-4b38-b759-757ca72c88f5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:25.145 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[1729aab5-290b-4ba5-b3d4-fa010a2c3725]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:25.150 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[19a3c7d3-42d5-4f79-91c3-45544ddb3545]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:25 np0005539552 systemd-udevd[270848]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:14:25 np0005539552 NetworkManager[48926]: <info>  [1764404065.1525] manager: (tap5ce08321-90): new Veth device (/org/freedesktop/NetworkManager/Devices/162)
Nov 29 03:14:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:25.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:25.184 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[8f45fa82-d4e5-4b16-b0b5-f92e26f58018]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:25.188 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[4ffc74cc-a4b9-4579-980c-bac14820157b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:25 np0005539552 NetworkManager[48926]: <info>  [1764404065.2093] device (tap5ce08321-90): carrier: link connected
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:25.214 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[b26848c1-6978-456b-8044-e7941a5fa0f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:25.230 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[77094859-73b4-4c1b-ba8d-2e30d70596ee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5ce08321-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:bc:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 97], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 711089, 'reachable_time': 25483, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270881, 'error': None, 'target': 'ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:25.245 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[057923eb-c31c-405c-b5ee-e35b3e0a42cd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7b:bc0c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 711089, 'tstamp': 711089}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270882, 'error': None, 'target': 'ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:25.259 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b03a75a8-5453-4b55-9fe3-37028ba73330]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5ce08321-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:bc:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 97], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 711089, 'reachable_time': 25483, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 270883, 'error': None, 'target': 'ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:25.293 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ac5d3564-1f39-47d9-970c-1e62abb31198]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:25.346 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a750022c-8d6f-4298-8b88-c33a87cec558]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:25.347 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5ce08321-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:25.347 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:25.348 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5ce08321-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:25 np0005539552 nova_compute[233724]: 2025-11-29 08:14:25.349 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:25 np0005539552 NetworkManager[48926]: <info>  [1764404065.3502] manager: (tap5ce08321-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/163)
Nov 29 03:14:25 np0005539552 kernel: tap5ce08321-90: entered promiscuous mode
Nov 29 03:14:25 np0005539552 nova_compute[233724]: 2025-11-29 08:14:25.355 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:25.357 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5ce08321-90, col_values=(('external_ids', {'iface-id': 'fb53c57a-d19f-4391-add7-afa34095fb59'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:25 np0005539552 nova_compute[233724]: 2025-11-29 08:14:25.358 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:25 np0005539552 ovn_controller[133798]: 2025-11-29T08:14:25Z|00310|binding|INFO|Releasing lport fb53c57a-d19f-4391-add7-afa34095fb59 from this chassis (sb_readonly=0)
Nov 29 03:14:25 np0005539552 nova_compute[233724]: 2025-11-29 08:14:25.374 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:25 np0005539552 nova_compute[233724]: 2025-11-29 08:14:25.382 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:25.383 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5ce08321-9ca9-47d5-b99b-65a439440787.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5ce08321-9ca9-47d5-b99b-65a439440787.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:14:25 np0005539552 systemd[1]: Stopping User Manager for UID 42436...
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:25.385 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[fc602b2e-d76d-4005-bc65-1a207c97bc31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:25.386 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-5ce08321-9ca9-47d5-b99b-65a439440787
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/5ce08321-9ca9-47d5-b99b-65a439440787.pid.haproxy
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 5ce08321-9ca9-47d5-b99b-65a439440787
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:14:25 np0005539552 systemd[270403]: Activating special unit Exit the Session...
Nov 29 03:14:25 np0005539552 systemd[270403]: Stopped target Main User Target.
Nov 29 03:14:25 np0005539552 systemd[270403]: Stopped target Basic System.
Nov 29 03:14:25 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:25.387 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787', 'env', 'PROCESS_TAG=haproxy-5ce08321-9ca9-47d5-b99b-65a439440787', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5ce08321-9ca9-47d5-b99b-65a439440787.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:14:25 np0005539552 systemd[270403]: Stopped target Paths.
Nov 29 03:14:25 np0005539552 systemd[270403]: Stopped target Sockets.
Nov 29 03:14:25 np0005539552 systemd[270403]: Stopped target Timers.
Nov 29 03:14:25 np0005539552 systemd[270403]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 29 03:14:25 np0005539552 systemd[270403]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 29 03:14:25 np0005539552 systemd[270403]: Closed D-Bus User Message Bus Socket.
Nov 29 03:14:25 np0005539552 systemd[270403]: Stopped Create User's Volatile Files and Directories.
Nov 29 03:14:25 np0005539552 systemd[270403]: Removed slice User Application Slice.
Nov 29 03:14:25 np0005539552 systemd[270403]: Reached target Shutdown.
Nov 29 03:14:25 np0005539552 systemd[270403]: Finished Exit the Session.
Nov 29 03:14:25 np0005539552 systemd[270403]: Reached target Exit the Session.
Nov 29 03:14:25 np0005539552 systemd[1]: user@42436.service: Deactivated successfully.
Nov 29 03:14:25 np0005539552 systemd[1]: Stopped User Manager for UID 42436.
Nov 29 03:14:25 np0005539552 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Nov 29 03:14:25 np0005539552 systemd[1]: run-user-42436.mount: Deactivated successfully.
Nov 29 03:14:25 np0005539552 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Nov 29 03:14:25 np0005539552 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Nov 29 03:14:25 np0005539552 systemd[1]: Removed slice User Slice of UID 42436.
Nov 29 03:14:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:25.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:25 np0005539552 podman[270949]: 2025-11-29 08:14:25.747379421 +0000 UTC m=+0.053134638 container create 95e13795805fa847feb1e9226fdfecabd90f9c8d7b736835e369c0ece9bdf9b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 03:14:25 np0005539552 systemd[1]: Started libpod-conmon-95e13795805fa847feb1e9226fdfecabd90f9c8d7b736835e369c0ece9bdf9b8.scope.
Nov 29 03:14:25 np0005539552 podman[270949]: 2025-11-29 08:14:25.720476109 +0000 UTC m=+0.026231346 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:14:25 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:14:25 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/596e38491023d84493ca2f51c4ed9be876969181fdbe903392d31bdbeee31bb2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:14:25 np0005539552 nova_compute[233724]: 2025-11-29 08:14:25.841 233728 DEBUG nova.virt.libvirt.host [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Removed pending event for 36048c92-5df2-425d-b12f-1ce0326cc6a1 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:14:25 np0005539552 nova_compute[233724]: 2025-11-29 08:14:25.841 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404065.8406503, 36048c92-5df2-425d-b12f-1ce0326cc6a1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:14:25 np0005539552 nova_compute[233724]: 2025-11-29 08:14:25.842 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:14:25 np0005539552 podman[270949]: 2025-11-29 08:14:25.843610267 +0000 UTC m=+0.149365494 container init 95e13795805fa847feb1e9226fdfecabd90f9c8d7b736835e369c0ece9bdf9b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:14:25 np0005539552 nova_compute[233724]: 2025-11-29 08:14:25.843 233728 DEBUG nova.compute.manager [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:14:25 np0005539552 nova_compute[233724]: 2025-11-29 08:14:25.846 233728 INFO nova.virt.libvirt.driver [-] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Instance rebooted successfully.#033[00m
Nov 29 03:14:25 np0005539552 nova_compute[233724]: 2025-11-29 08:14:25.846 233728 DEBUG nova.compute.manager [None req-3402e873-e678-49df-95ee-316d29d0ada7 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:14:25 np0005539552 podman[270949]: 2025-11-29 08:14:25.848810277 +0000 UTC m=+0.154565484 container start 95e13795805fa847feb1e9226fdfecabd90f9c8d7b736835e369c0ece9bdf9b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 29 03:14:25 np0005539552 neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787[270978]: [NOTICE]   (270983) : New worker (270985) forked
Nov 29 03:14:25 np0005539552 neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787[270978]: [NOTICE]   (270983) : Loading success.
Nov 29 03:14:25 np0005539552 nova_compute[233724]: 2025-11-29 08:14:25.881 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:14:25 np0005539552 nova_compute[233724]: 2025-11-29 08:14:25.884 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:14:25 np0005539552 nova_compute[233724]: 2025-11-29 08:14:25.890 233728 DEBUG nova.compute.manager [req-d1e771eb-7565-4c0c-95b4-c015755191f2 req-411a0cea-cce9-4d77-99db-43dd69203b77 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Received event network-vif-plugged-1d12c166-4cae-49ec-ab9b-149d65ceb0b6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:25 np0005539552 nova_compute[233724]: 2025-11-29 08:14:25.890 233728 DEBUG oslo_concurrency.lockutils [req-d1e771eb-7565-4c0c-95b4-c015755191f2 req-411a0cea-cce9-4d77-99db-43dd69203b77 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "36048c92-5df2-425d-b12f-1ce0326cc6a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:25 np0005539552 nova_compute[233724]: 2025-11-29 08:14:25.891 233728 DEBUG oslo_concurrency.lockutils [req-d1e771eb-7565-4c0c-95b4-c015755191f2 req-411a0cea-cce9-4d77-99db-43dd69203b77 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "36048c92-5df2-425d-b12f-1ce0326cc6a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:25 np0005539552 nova_compute[233724]: 2025-11-29 08:14:25.891 233728 DEBUG oslo_concurrency.lockutils [req-d1e771eb-7565-4c0c-95b4-c015755191f2 req-411a0cea-cce9-4d77-99db-43dd69203b77 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "36048c92-5df2-425d-b12f-1ce0326cc6a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:25 np0005539552 nova_compute[233724]: 2025-11-29 08:14:25.891 233728 DEBUG nova.compute.manager [req-d1e771eb-7565-4c0c-95b4-c015755191f2 req-411a0cea-cce9-4d77-99db-43dd69203b77 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] No waiting events found dispatching network-vif-plugged-1d12c166-4cae-49ec-ab9b-149d65ceb0b6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:14:25 np0005539552 nova_compute[233724]: 2025-11-29 08:14:25.891 233728 WARNING nova.compute.manager [req-d1e771eb-7565-4c0c-95b4-c015755191f2 req-411a0cea-cce9-4d77-99db-43dd69203b77 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Received unexpected event network-vif-plugged-1d12c166-4cae-49ec-ab9b-149d65ceb0b6 for instance with vm_state stopped and task_state powering-on.#033[00m
Nov 29 03:14:25 np0005539552 nova_compute[233724]: 2025-11-29 08:14:25.905 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] During sync_power_state the instance has a pending task (powering-on). Skip.#033[00m
Nov 29 03:14:25 np0005539552 nova_compute[233724]: 2025-11-29 08:14:25.905 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404065.8434415, 36048c92-5df2-425d-b12f-1ce0326cc6a1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:14:25 np0005539552 nova_compute[233724]: 2025-11-29 08:14:25.906 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] VM Started (Lifecycle Event)#033[00m
Nov 29 03:14:25 np0005539552 nova_compute[233724]: 2025-11-29 08:14:25.944 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:14:25 np0005539552 nova_compute[233724]: 2025-11-29 08:14:25.947 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:14:25 np0005539552 nova_compute[233724]: 2025-11-29 08:14:25.984 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:26 np0005539552 nova_compute[233724]: 2025-11-29 08:14:26.289 233728 DEBUG nova.network.neutron [None req-f148131f-918c-4a8f-91e5-6f7339e94781 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Successfully updated port: 4d97f024-e964-485a-9511-f23de3e843bd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:14:26 np0005539552 nova_compute[233724]: 2025-11-29 08:14:26.320 233728 DEBUG oslo_concurrency.lockutils [None req-f148131f-918c-4a8f-91e5-6f7339e94781 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Acquiring lock "refresh_cache-c32e74e2-e74f-4877-8130-ad35d31bb992" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:14:26 np0005539552 nova_compute[233724]: 2025-11-29 08:14:26.321 233728 DEBUG oslo_concurrency.lockutils [None req-f148131f-918c-4a8f-91e5-6f7339e94781 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Acquired lock "refresh_cache-c32e74e2-e74f-4877-8130-ad35d31bb992" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:14:26 np0005539552 nova_compute[233724]: 2025-11-29 08:14:26.321 233728 DEBUG nova.network.neutron [None req-f148131f-918c-4a8f-91e5-6f7339e94781 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:14:26 np0005539552 podman[270996]: 2025-11-29 08:14:26.972737622 +0000 UTC m=+0.061680378 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 29 03:14:26 np0005539552 podman[270995]: 2025-11-29 08:14:26.996535611 +0000 UTC m=+0.087707947 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:14:27 np0005539552 podman[270997]: 2025-11-29 08:14:27.006771136 +0000 UTC m=+0.087682837 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 03:14:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:27.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:27.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:28 np0005539552 nova_compute[233724]: 2025-11-29 08:14:28.094 233728 DEBUG nova.compute.manager [req-4aab0201-67d9-4215-bde2-cacb194ce0c6 req-e4e3a894-09b6-4f76-9c55-8b2ded235ff3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Received event network-vif-plugged-1d12c166-4cae-49ec-ab9b-149d65ceb0b6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:28 np0005539552 nova_compute[233724]: 2025-11-29 08:14:28.095 233728 DEBUG oslo_concurrency.lockutils [req-4aab0201-67d9-4215-bde2-cacb194ce0c6 req-e4e3a894-09b6-4f76-9c55-8b2ded235ff3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "36048c92-5df2-425d-b12f-1ce0326cc6a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:28 np0005539552 nova_compute[233724]: 2025-11-29 08:14:28.095 233728 DEBUG oslo_concurrency.lockutils [req-4aab0201-67d9-4215-bde2-cacb194ce0c6 req-e4e3a894-09b6-4f76-9c55-8b2ded235ff3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "36048c92-5df2-425d-b12f-1ce0326cc6a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:28 np0005539552 nova_compute[233724]: 2025-11-29 08:14:28.095 233728 DEBUG oslo_concurrency.lockutils [req-4aab0201-67d9-4215-bde2-cacb194ce0c6 req-e4e3a894-09b6-4f76-9c55-8b2ded235ff3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "36048c92-5df2-425d-b12f-1ce0326cc6a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:28 np0005539552 nova_compute[233724]: 2025-11-29 08:14:28.095 233728 DEBUG nova.compute.manager [req-4aab0201-67d9-4215-bde2-cacb194ce0c6 req-e4e3a894-09b6-4f76-9c55-8b2ded235ff3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] No waiting events found dispatching network-vif-plugged-1d12c166-4cae-49ec-ab9b-149d65ceb0b6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:14:28 np0005539552 nova_compute[233724]: 2025-11-29 08:14:28.096 233728 WARNING nova.compute.manager [req-4aab0201-67d9-4215-bde2-cacb194ce0c6 req-e4e3a894-09b6-4f76-9c55-8b2ded235ff3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Received unexpected event network-vif-plugged-1d12c166-4cae-49ec-ab9b-149d65ceb0b6 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:14:28 np0005539552 nova_compute[233724]: 2025-11-29 08:14:28.096 233728 DEBUG nova.compute.manager [req-4aab0201-67d9-4215-bde2-cacb194ce0c6 req-e4e3a894-09b6-4f76-9c55-8b2ded235ff3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Received event network-changed-4d97f024-e964-485a-9511-f23de3e843bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:28 np0005539552 nova_compute[233724]: 2025-11-29 08:14:28.096 233728 DEBUG nova.compute.manager [req-4aab0201-67d9-4215-bde2-cacb194ce0c6 req-e4e3a894-09b6-4f76-9c55-8b2ded235ff3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Refreshing instance network info cache due to event network-changed-4d97f024-e964-485a-9511-f23de3e843bd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:14:28 np0005539552 nova_compute[233724]: 2025-11-29 08:14:28.097 233728 DEBUG oslo_concurrency.lockutils [req-4aab0201-67d9-4215-bde2-cacb194ce0c6 req-e4e3a894-09b6-4f76-9c55-8b2ded235ff3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-c32e74e2-e74f-4877-8130-ad35d31bb992" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:14:28 np0005539552 nova_compute[233724]: 2025-11-29 08:14:28.976 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:28 np0005539552 nova_compute[233724]: 2025-11-29 08:14:28.976 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:28 np0005539552 nova_compute[233724]: 2025-11-29 08:14:28.976 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:28 np0005539552 nova_compute[233724]: 2025-11-29 08:14:28.977 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:28 np0005539552 nova_compute[233724]: 2025-11-29 08:14:28.977 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:14:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:29.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:29.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:29 np0005539552 nova_compute[233724]: 2025-11-29 08:14:29.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:29 np0005539552 nova_compute[233724]: 2025-11-29 08:14:29.925 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:14:29 np0005539552 nova_compute[233724]: 2025-11-29 08:14:29.926 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:14:29 np0005539552 nova_compute[233724]: 2025-11-29 08:14:29.980 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:30 np0005539552 nova_compute[233724]: 2025-11-29 08:14:30.261 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "refresh_cache-19e85fae-c57e-409b-95f7-b53ddb4c928e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:14:30 np0005539552 nova_compute[233724]: 2025-11-29 08:14:30.261 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquired lock "refresh_cache-19e85fae-c57e-409b-95f7-b53ddb4c928e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:14:30 np0005539552 nova_compute[233724]: 2025-11-29 08:14:30.262 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:14:30 np0005539552 nova_compute[233724]: 2025-11-29 08:14:30.262 233728 DEBUG nova.objects.instance [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 19e85fae-c57e-409b-95f7-b53ddb4c928e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:30 np0005539552 nova_compute[233724]: 2025-11-29 08:14:30.433 233728 DEBUG nova.network.neutron [None req-f148131f-918c-4a8f-91e5-6f7339e94781 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Updating instance_info_cache with network_info: [{"id": "2dc39626-7aae-4e0c-a70b-e08d83c9788b", "address": "fa:16:3e:91:96:6a", "network": {"id": "88cc8f67-0d68-413a-b508-63fae18f1c0c", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-374579760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5f1f0d72cd0427a8cda48db244caf6c", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2dc39626-7a", "ovs_interfaceid": "2dc39626-7aae-4e0c-a70b-e08d83c9788b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4d97f024-e964-485a-9511-f23de3e843bd", "address": "fa:16:3e:6b:85:80", "network": {"id": "630c713d-8e9c-44d9-9de3-fab9b04bc799", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1368736182", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.234", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5f1f0d72cd0427a8cda48db244caf6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d97f024-e9", "ovs_interfaceid": "4d97f024-e964-485a-9511-f23de3e843bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:14:30 np0005539552 nova_compute[233724]: 2025-11-29 08:14:30.461 233728 DEBUG oslo_concurrency.lockutils [None req-f148131f-918c-4a8f-91e5-6f7339e94781 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Releasing lock "refresh_cache-c32e74e2-e74f-4877-8130-ad35d31bb992" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:14:30 np0005539552 nova_compute[233724]: 2025-11-29 08:14:30.462 233728 DEBUG oslo_concurrency.lockutils [req-4aab0201-67d9-4215-bde2-cacb194ce0c6 req-e4e3a894-09b6-4f76-9c55-8b2ded235ff3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-c32e74e2-e74f-4877-8130-ad35d31bb992" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:14:30 np0005539552 nova_compute[233724]: 2025-11-29 08:14:30.462 233728 DEBUG nova.network.neutron [req-4aab0201-67d9-4215-bde2-cacb194ce0c6 req-e4e3a894-09b6-4f76-9c55-8b2ded235ff3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Refreshing network info cache for port 4d97f024-e964-485a-9511-f23de3e843bd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:14:30 np0005539552 nova_compute[233724]: 2025-11-29 08:14:30.468 233728 DEBUG nova.virt.libvirt.vif [None req-f148131f-918c-4a8f-91e5-6f7339e94781 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:13:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1574280677',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1574280677',id=92,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE1erYbCXqwQQYhX/uR9pDNm/1t/pGAfklA44NJhGMIL8E6zP+f6ImuAmBaXR5JpOEpRJertjSkrs1uRBMGT5Sn9Wu4jIvd5fp8AA7ZdhfA39Eye8MjgGqmthA4Ol0v56A==',key_name='tempest-keypair-306133375',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:13:58Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b5f1f0d72cd0427a8cda48db244caf6c',ramdisk_id='',reservation_id='r-1l34dvi1',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-1178715901',owner_user_name='tempest-TaggedAttachmentsTest-1178715901-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:13:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='90573489491c4659ba4a8ccbd6b896a7',uuid=c32e74e2-e74f-4877-8130-ad35d31bb992,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4d97f024-e964-485a-9511-f23de3e843bd", "address": "fa:16:3e:6b:85:80", "network": {"id": "630c713d-8e9c-44d9-9de3-fab9b04bc799", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1368736182", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.234", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5f1f0d72cd0427a8cda48db244caf6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d97f024-e9", "ovs_interfaceid": "4d97f024-e964-485a-9511-f23de3e843bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:14:30 np0005539552 nova_compute[233724]: 2025-11-29 08:14:30.469 233728 DEBUG nova.network.os_vif_util [None req-f148131f-918c-4a8f-91e5-6f7339e94781 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Converting VIF {"id": "4d97f024-e964-485a-9511-f23de3e843bd", "address": "fa:16:3e:6b:85:80", "network": {"id": "630c713d-8e9c-44d9-9de3-fab9b04bc799", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1368736182", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.234", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5f1f0d72cd0427a8cda48db244caf6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d97f024-e9", "ovs_interfaceid": "4d97f024-e964-485a-9511-f23de3e843bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:14:30 np0005539552 nova_compute[233724]: 2025-11-29 08:14:30.470 233728 DEBUG nova.network.os_vif_util [None req-f148131f-918c-4a8f-91e5-6f7339e94781 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:85:80,bridge_name='br-int',has_traffic_filtering=True,id=4d97f024-e964-485a-9511-f23de3e843bd,network=Network(630c713d-8e9c-44d9-9de3-fab9b04bc799),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d97f024-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:14:30 np0005539552 nova_compute[233724]: 2025-11-29 08:14:30.471 233728 DEBUG os_vif [None req-f148131f-918c-4a8f-91e5-6f7339e94781 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:85:80,bridge_name='br-int',has_traffic_filtering=True,id=4d97f024-e964-485a-9511-f23de3e843bd,network=Network(630c713d-8e9c-44d9-9de3-fab9b04bc799),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d97f024-e9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:14:30 np0005539552 nova_compute[233724]: 2025-11-29 08:14:30.471 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:30 np0005539552 nova_compute[233724]: 2025-11-29 08:14:30.472 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:30 np0005539552 nova_compute[233724]: 2025-11-29 08:14:30.472 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:14:30 np0005539552 nova_compute[233724]: 2025-11-29 08:14:30.475 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:30 np0005539552 nova_compute[233724]: 2025-11-29 08:14:30.476 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d97f024-e9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:30 np0005539552 nova_compute[233724]: 2025-11-29 08:14:30.477 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4d97f024-e9, col_values=(('external_ids', {'iface-id': '4d97f024-e964-485a-9511-f23de3e843bd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6b:85:80', 'vm-uuid': 'c32e74e2-e74f-4877-8130-ad35d31bb992'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:30 np0005539552 nova_compute[233724]: 2025-11-29 08:14:30.478 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:30 np0005539552 NetworkManager[48926]: <info>  [1764404070.4798] manager: (tap4d97f024-e9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/164)
Nov 29 03:14:30 np0005539552 nova_compute[233724]: 2025-11-29 08:14:30.487 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:14:30 np0005539552 nova_compute[233724]: 2025-11-29 08:14:30.490 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:30 np0005539552 nova_compute[233724]: 2025-11-29 08:14:30.491 233728 INFO os_vif [None req-f148131f-918c-4a8f-91e5-6f7339e94781 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:85:80,bridge_name='br-int',has_traffic_filtering=True,id=4d97f024-e964-485a-9511-f23de3e843bd,network=Network(630c713d-8e9c-44d9-9de3-fab9b04bc799),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d97f024-e9')#033[00m
Nov 29 03:14:30 np0005539552 nova_compute[233724]: 2025-11-29 08:14:30.493 233728 DEBUG nova.virt.libvirt.vif [None req-f148131f-918c-4a8f-91e5-6f7339e94781 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:13:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1574280677',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1574280677',id=92,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE1erYbCXqwQQYhX/uR9pDNm/1t/pGAfklA44NJhGMIL8E6zP+f6ImuAmBaXR5JpOEpRJertjSkrs1uRBMGT5Sn9Wu4jIvd5fp8AA7ZdhfA39Eye8MjgGqmthA4Ol0v56A==',key_name='tempest-keypair-306133375',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:13:58Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b5f1f0d72cd0427a8cda48db244caf6c',ramdisk_id='',reservation_id='r-1l34dvi1',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-1178715901',owner_user_name='tempest-TaggedAttachmentsTest-1178715901-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:13:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='90573489491c4659ba4a8ccbd6b896a7',uuid=c32e74e2-e74f-4877-8130-ad35d31bb992,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4d97f024-e964-485a-9511-f23de3e843bd", "address": "fa:16:3e:6b:85:80", "network": {"id": "630c713d-8e9c-44d9-9de3-fab9b04bc799", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1368736182", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.234", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5f1f0d72cd0427a8cda48db244caf6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d97f024-e9", "ovs_interfaceid": "4d97f024-e964-485a-9511-f23de3e843bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:14:30 np0005539552 nova_compute[233724]: 2025-11-29 08:14:30.493 233728 DEBUG nova.network.os_vif_util [None req-f148131f-918c-4a8f-91e5-6f7339e94781 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Converting VIF {"id": "4d97f024-e964-485a-9511-f23de3e843bd", "address": "fa:16:3e:6b:85:80", "network": {"id": "630c713d-8e9c-44d9-9de3-fab9b04bc799", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1368736182", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.234", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5f1f0d72cd0427a8cda48db244caf6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d97f024-e9", "ovs_interfaceid": "4d97f024-e964-485a-9511-f23de3e843bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:14:30 np0005539552 nova_compute[233724]: 2025-11-29 08:14:30.495 233728 DEBUG nova.network.os_vif_util [None req-f148131f-918c-4a8f-91e5-6f7339e94781 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:85:80,bridge_name='br-int',has_traffic_filtering=True,id=4d97f024-e964-485a-9511-f23de3e843bd,network=Network(630c713d-8e9c-44d9-9de3-fab9b04bc799),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d97f024-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:14:30 np0005539552 nova_compute[233724]: 2025-11-29 08:14:30.498 233728 DEBUG nova.virt.libvirt.guest [None req-f148131f-918c-4a8f-91e5-6f7339e94781 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] attach device xml: <interface type="ethernet">
Nov 29 03:14:30 np0005539552 nova_compute[233724]:  <mac address="fa:16:3e:6b:85:80"/>
Nov 29 03:14:30 np0005539552 nova_compute[233724]:  <model type="virtio"/>
Nov 29 03:14:30 np0005539552 nova_compute[233724]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:14:30 np0005539552 nova_compute[233724]:  <mtu size="1442"/>
Nov 29 03:14:30 np0005539552 nova_compute[233724]:  <target dev="tap4d97f024-e9"/>
Nov 29 03:14:30 np0005539552 nova_compute[233724]: </interface>
Nov 29 03:14:30 np0005539552 nova_compute[233724]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 03:14:30 np0005539552 NetworkManager[48926]: <info>  [1764404070.5109] manager: (tap4d97f024-e9): new Tun device (/org/freedesktop/NetworkManager/Devices/165)
Nov 29 03:14:30 np0005539552 kernel: tap4d97f024-e9: entered promiscuous mode
Nov 29 03:14:30 np0005539552 ovn_controller[133798]: 2025-11-29T08:14:30Z|00311|binding|INFO|Claiming lport 4d97f024-e964-485a-9511-f23de3e843bd for this chassis.
Nov 29 03:14:30 np0005539552 ovn_controller[133798]: 2025-11-29T08:14:30Z|00312|binding|INFO|4d97f024-e964-485a-9511-f23de3e843bd: Claiming fa:16:3e:6b:85:80 10.10.10.234
Nov 29 03:14:30 np0005539552 nova_compute[233724]: 2025-11-29 08:14:30.524 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:30.531 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:85:80 10.10.10.234'], port_security=['fa:16:3e:6b:85:80 10.10.10.234'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.10.10.234/24', 'neutron:device_id': 'c32e74e2-e74f-4877-8130-ad35d31bb992', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-630c713d-8e9c-44d9-9de3-fab9b04bc799', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b5f1f0d72cd0427a8cda48db244caf6c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4d40c246-c885-4bc9-95ef-0d78f2d567f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d7b80ba9-0bb6-48e6-9b10-8894ef6a7f23, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=4d97f024-e964-485a-9511-f23de3e843bd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:30.533 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 4d97f024-e964-485a-9511-f23de3e843bd in datapath 630c713d-8e9c-44d9-9de3-fab9b04bc799 bound to our chassis#033[00m
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:30.540 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 630c713d-8e9c-44d9-9de3-fab9b04bc799#033[00m
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:30.555 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c2abb77d-e0e6-4ea8-9a5e-26b995f03fa6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:30.556 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap630c713d-81 in ovnmeta-630c713d-8e9c-44d9-9de3-fab9b04bc799 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:30.558 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap630c713d-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:30.558 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3589c5c7-91a2-44da-a978-1061720a9ebc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:30.559 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b8da24fa-a499-451e-a3fe-889ca4744b65]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:30 np0005539552 ovn_controller[133798]: 2025-11-29T08:14:30Z|00313|binding|INFO|Setting lport 4d97f024-e964-485a-9511-f23de3e843bd ovn-installed in OVS
Nov 29 03:14:30 np0005539552 ovn_controller[133798]: 2025-11-29T08:14:30Z|00314|binding|INFO|Setting lport 4d97f024-e964-485a-9511-f23de3e843bd up in Southbound
Nov 29 03:14:30 np0005539552 nova_compute[233724]: 2025-11-29 08:14:30.572 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:30.575 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[d1cb8b3d-cb88-497b-adfa-7f0da8ceb4c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:30 np0005539552 systemd-udevd[271069]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:14:30 np0005539552 NetworkManager[48926]: <info>  [1764404070.6011] device (tap4d97f024-e9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:14:30 np0005539552 NetworkManager[48926]: <info>  [1764404070.6020] device (tap4d97f024-e9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:30.603 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ca7809c7-6953-4990-80ab-6d6aecbb5ac7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:30 np0005539552 nova_compute[233724]: 2025-11-29 08:14:30.607 233728 DEBUG nova.virt.libvirt.driver [None req-f148131f-918c-4a8f-91e5-6f7339e94781 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:14:30 np0005539552 nova_compute[233724]: 2025-11-29 08:14:30.607 233728 DEBUG nova.virt.libvirt.driver [None req-f148131f-918c-4a8f-91e5-6f7339e94781 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:14:30 np0005539552 nova_compute[233724]: 2025-11-29 08:14:30.607 233728 DEBUG nova.virt.libvirt.driver [None req-f148131f-918c-4a8f-91e5-6f7339e94781 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] No VIF found with MAC fa:16:3e:91:96:6a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:14:30 np0005539552 nova_compute[233724]: 2025-11-29 08:14:30.635 233728 DEBUG nova.virt.libvirt.guest [None req-f148131f-918c-4a8f-91e5-6f7339e94781 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:14:30 np0005539552 nova_compute[233724]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:14:30 np0005539552 nova_compute[233724]:  <nova:name>tempest-device-tagging-server-1574280677</nova:name>
Nov 29 03:14:30 np0005539552 nova_compute[233724]:  <nova:creationTime>2025-11-29 08:14:30</nova:creationTime>
Nov 29 03:14:30 np0005539552 nova_compute[233724]:  <nova:flavor name="m1.nano">
Nov 29 03:14:30 np0005539552 nova_compute[233724]:    <nova:memory>128</nova:memory>
Nov 29 03:14:30 np0005539552 nova_compute[233724]:    <nova:disk>1</nova:disk>
Nov 29 03:14:30 np0005539552 nova_compute[233724]:    <nova:swap>0</nova:swap>
Nov 29 03:14:30 np0005539552 nova_compute[233724]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:14:30 np0005539552 nova_compute[233724]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:14:30 np0005539552 nova_compute[233724]:  </nova:flavor>
Nov 29 03:14:30 np0005539552 nova_compute[233724]:  <nova:owner>
Nov 29 03:14:30 np0005539552 nova_compute[233724]:    <nova:user uuid="90573489491c4659ba4a8ccbd6b896a7">tempest-TaggedAttachmentsTest-1178715901-project-member</nova:user>
Nov 29 03:14:30 np0005539552 nova_compute[233724]:    <nova:project uuid="b5f1f0d72cd0427a8cda48db244caf6c">tempest-TaggedAttachmentsTest-1178715901</nova:project>
Nov 29 03:14:30 np0005539552 nova_compute[233724]:  </nova:owner>
Nov 29 03:14:30 np0005539552 nova_compute[233724]:  <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:14:30 np0005539552 nova_compute[233724]:  <nova:ports>
Nov 29 03:14:30 np0005539552 nova_compute[233724]:    <nova:port uuid="2dc39626-7aae-4e0c-a70b-e08d83c9788b">
Nov 29 03:14:30 np0005539552 nova_compute[233724]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 03:14:30 np0005539552 nova_compute[233724]:    </nova:port>
Nov 29 03:14:30 np0005539552 nova_compute[233724]:    <nova:port uuid="4d97f024-e964-485a-9511-f23de3e843bd">
Nov 29 03:14:30 np0005539552 nova_compute[233724]:      <nova:ip type="fixed" address="10.10.10.234" ipVersion="4"/>
Nov 29 03:14:30 np0005539552 nova_compute[233724]:    </nova:port>
Nov 29 03:14:30 np0005539552 nova_compute[233724]:  </nova:ports>
Nov 29 03:14:30 np0005539552 nova_compute[233724]: </nova:instance>
Nov 29 03:14:30 np0005539552 nova_compute[233724]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:30.636 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[adbb271b-d9b1-4d59-a52c-956ce057c7a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:30 np0005539552 NetworkManager[48926]: <info>  [1764404070.6423] manager: (tap630c713d-80): new Veth device (/org/freedesktop/NetworkManager/Devices/166)
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:30.641 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[28e657a0-3977-4dca-8673-1e078cc13e6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:30 np0005539552 nova_compute[233724]: 2025-11-29 08:14:30.668 233728 DEBUG oslo_concurrency.lockutils [None req-f148131f-918c-4a8f-91e5-6f7339e94781 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Lock "interface-c32e74e2-e74f-4877-8130-ad35d31bb992-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 10.293s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:30.673 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[b6d49270-add4-4bf0-a4c2-df4dce65e99e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:30.676 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[db3742a0-585f-4945-a7da-4adddef9b980]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:30 np0005539552 NetworkManager[48926]: <info>  [1764404070.7057] device (tap630c713d-80): carrier: link connected
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:30.712 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[4d80e60f-00c7-49f5-b8e6-01032d8889ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:30.729 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[dd2d1fe4-bfec-446c-83f8-05754afbe183]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap630c713d-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:34:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 99], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 711639, 'reachable_time': 38313, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271100, 'error': None, 'target': 'ovnmeta-630c713d-8e9c-44d9-9de3-fab9b04bc799', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:30.744 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3581b261-6c81-44c6-8de9-5bd0eea534c8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe21:344f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 711639, 'tstamp': 711639}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271101, 'error': None, 'target': 'ovnmeta-630c713d-8e9c-44d9-9de3-fab9b04bc799', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:30.759 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[fa7d3f9b-d39f-481e-981c-4ab2c482ff27]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap630c713d-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:34:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 99], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 711639, 'reachable_time': 38313, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271102, 'error': None, 'target': 'ovnmeta-630c713d-8e9c-44d9-9de3-fab9b04bc799', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:30.789 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a7652bec-4cdf-471e-b953-577d1b3750b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:30.841 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2e1885b5-06ca-4cae-a747-ac9ba8331783]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:30.843 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap630c713d-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:30.843 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:30.843 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap630c713d-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:30 np0005539552 NetworkManager[48926]: <info>  [1764404070.8460] manager: (tap630c713d-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/167)
Nov 29 03:14:30 np0005539552 nova_compute[233724]: 2025-11-29 08:14:30.845 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:30 np0005539552 kernel: tap630c713d-80: entered promiscuous mode
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:30.852 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap630c713d-80, col_values=(('external_ids', {'iface-id': '81fdd8b0-2632-436b-afa3-4e8c10b5905f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:30 np0005539552 nova_compute[233724]: 2025-11-29 08:14:30.850 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:30 np0005539552 nova_compute[233724]: 2025-11-29 08:14:30.853 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:30 np0005539552 ovn_controller[133798]: 2025-11-29T08:14:30Z|00315|binding|INFO|Releasing lport 81fdd8b0-2632-436b-afa3-4e8c10b5905f from this chassis (sb_readonly=0)
Nov 29 03:14:30 np0005539552 nova_compute[233724]: 2025-11-29 08:14:30.868 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:30 np0005539552 nova_compute[233724]: 2025-11-29 08:14:30.874 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:30.875 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/630c713d-8e9c-44d9-9de3-fab9b04bc799.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/630c713d-8e9c-44d9-9de3-fab9b04bc799.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:30.876 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d5b01554-e727-4934-8010-5606e6bb7cc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:30.876 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-630c713d-8e9c-44d9-9de3-fab9b04bc799
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/630c713d-8e9c-44d9-9de3-fab9b04bc799.pid.haproxy
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 630c713d-8e9c-44d9-9de3-fab9b04bc799
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:14:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:30.877 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-630c713d-8e9c-44d9-9de3-fab9b04bc799', 'env', 'PROCESS_TAG=haproxy-630c713d-8e9c-44d9-9de3-fab9b04bc799', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/630c713d-8e9c-44d9-9de3-fab9b04bc799.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:14:30 np0005539552 nova_compute[233724]: 2025-11-29 08:14:30.984 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:31.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:31 np0005539552 podman[271137]: 2025-11-29 08:14:31.247788588 +0000 UTC m=+0.057224809 container create 08c400b521030112386a476ace7dd7b5770f1b36a35885f9332b583d7339553f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-630c713d-8e9c-44d9-9de3-fab9b04bc799, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Nov 29 03:14:31 np0005539552 systemd[1]: Started libpod-conmon-08c400b521030112386a476ace7dd7b5770f1b36a35885f9332b583d7339553f.scope.
Nov 29 03:14:31 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:14:31 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7064b57ff2beef19a21a05b072b7db42be44bb2108761323c0f787cdec90522e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:14:31 np0005539552 podman[271137]: 2025-11-29 08:14:31.221334197 +0000 UTC m=+0.030770418 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:14:31 np0005539552 podman[271137]: 2025-11-29 08:14:31.328381383 +0000 UTC m=+0.137817594 container init 08c400b521030112386a476ace7dd7b5770f1b36a35885f9332b583d7339553f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-630c713d-8e9c-44d9-9de3-fab9b04bc799, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:14:31 np0005539552 podman[271137]: 2025-11-29 08:14:31.334334263 +0000 UTC m=+0.143770464 container start 08c400b521030112386a476ace7dd7b5770f1b36a35885f9332b583d7339553f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-630c713d-8e9c-44d9-9de3-fab9b04bc799, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 03:14:31 np0005539552 neutron-haproxy-ovnmeta-630c713d-8e9c-44d9-9de3-fab9b04bc799[271152]: [NOTICE]   (271156) : New worker (271158) forked
Nov 29 03:14:31 np0005539552 neutron-haproxy-ovnmeta-630c713d-8e9c-44d9-9de3-fab9b04bc799[271152]: [NOTICE]   (271156) : Loading success.
Nov 29 03:14:31 np0005539552 nova_compute[233724]: 2025-11-29 08:14:31.470 233728 DEBUG nova.compute.manager [req-f4824113-b3cf-4d82-bdb0-bc75843d2963 req-09458613-6e13-48f1-9850-a97067a22bd5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Received event network-vif-plugged-4d97f024-e964-485a-9511-f23de3e843bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:31 np0005539552 nova_compute[233724]: 2025-11-29 08:14:31.470 233728 DEBUG oslo_concurrency.lockutils [req-f4824113-b3cf-4d82-bdb0-bc75843d2963 req-09458613-6e13-48f1-9850-a97067a22bd5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "c32e74e2-e74f-4877-8130-ad35d31bb992-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:31 np0005539552 nova_compute[233724]: 2025-11-29 08:14:31.472 233728 DEBUG oslo_concurrency.lockutils [req-f4824113-b3cf-4d82-bdb0-bc75843d2963 req-09458613-6e13-48f1-9850-a97067a22bd5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c32e74e2-e74f-4877-8130-ad35d31bb992-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:31 np0005539552 nova_compute[233724]: 2025-11-29 08:14:31.472 233728 DEBUG oslo_concurrency.lockutils [req-f4824113-b3cf-4d82-bdb0-bc75843d2963 req-09458613-6e13-48f1-9850-a97067a22bd5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c32e74e2-e74f-4877-8130-ad35d31bb992-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:31 np0005539552 nova_compute[233724]: 2025-11-29 08:14:31.472 233728 DEBUG nova.compute.manager [req-f4824113-b3cf-4d82-bdb0-bc75843d2963 req-09458613-6e13-48f1-9850-a97067a22bd5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] No waiting events found dispatching network-vif-plugged-4d97f024-e964-485a-9511-f23de3e843bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:14:31 np0005539552 nova_compute[233724]: 2025-11-29 08:14:31.472 233728 WARNING nova.compute.manager [req-f4824113-b3cf-4d82-bdb0-bc75843d2963 req-09458613-6e13-48f1-9850-a97067a22bd5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Received unexpected event network-vif-plugged-4d97f024-e964-485a-9511-f23de3e843bd for instance with vm_state active and task_state None.#033[00m
Nov 29 03:14:31 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e293 e293: 3 total, 3 up, 3 in
Nov 29 03:14:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:31.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:31 np0005539552 nova_compute[233724]: 2025-11-29 08:14:31.716 233728 DEBUG oslo_concurrency.lockutils [None req-9909a82f-c63a-4dcf-a85e-65f61bd76419 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Acquiring lock "c32e74e2-e74f-4877-8130-ad35d31bb992" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:31 np0005539552 nova_compute[233724]: 2025-11-29 08:14:31.717 233728 DEBUG oslo_concurrency.lockutils [None req-9909a82f-c63a-4dcf-a85e-65f61bd76419 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Lock "c32e74e2-e74f-4877-8130-ad35d31bb992" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:31 np0005539552 nova_compute[233724]: 2025-11-29 08:14:31.735 233728 DEBUG nova.objects.instance [None req-9909a82f-c63a-4dcf-a85e-65f61bd76419 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Lazy-loading 'flavor' on Instance uuid c32e74e2-e74f-4877-8130-ad35d31bb992 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:31 np0005539552 nova_compute[233724]: 2025-11-29 08:14:31.808 233728 DEBUG oslo_concurrency.lockutils [None req-9909a82f-c63a-4dcf-a85e-65f61bd76419 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Lock "c32e74e2-e74f-4877-8130-ad35d31bb992" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.091s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:32 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:32 np0005539552 nova_compute[233724]: 2025-11-29 08:14:32.661 233728 DEBUG oslo_concurrency.lockutils [None req-9909a82f-c63a-4dcf-a85e-65f61bd76419 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Acquiring lock "c32e74e2-e74f-4877-8130-ad35d31bb992" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:32 np0005539552 nova_compute[233724]: 2025-11-29 08:14:32.661 233728 DEBUG oslo_concurrency.lockutils [None req-9909a82f-c63a-4dcf-a85e-65f61bd76419 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Lock "c32e74e2-e74f-4877-8130-ad35d31bb992" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:32 np0005539552 nova_compute[233724]: 2025-11-29 08:14:32.662 233728 INFO nova.compute.manager [None req-9909a82f-c63a-4dcf-a85e-65f61bd76419 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Attaching volume e6cdb61f-1c62-49d7-8d64-b8125afce54e to /dev/vdb#033[00m
Nov 29 03:14:32 np0005539552 ovn_controller[133798]: 2025-11-29T08:14:32Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6b:85:80 10.10.10.234
Nov 29 03:14:32 np0005539552 ovn_controller[133798]: 2025-11-29T08:14:32Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6b:85:80 10.10.10.234
Nov 29 03:14:32 np0005539552 nova_compute[233724]: 2025-11-29 08:14:32.911 233728 DEBUG nova.network.neutron [req-4aab0201-67d9-4215-bde2-cacb194ce0c6 req-e4e3a894-09b6-4f76-9c55-8b2ded235ff3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Updated VIF entry in instance network info cache for port 4d97f024-e964-485a-9511-f23de3e843bd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:14:32 np0005539552 nova_compute[233724]: 2025-11-29 08:14:32.911 233728 DEBUG nova.network.neutron [req-4aab0201-67d9-4215-bde2-cacb194ce0c6 req-e4e3a894-09b6-4f76-9c55-8b2ded235ff3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Updating instance_info_cache with network_info: [{"id": "2dc39626-7aae-4e0c-a70b-e08d83c9788b", "address": "fa:16:3e:91:96:6a", "network": {"id": "88cc8f67-0d68-413a-b508-63fae18f1c0c", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-374579760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5f1f0d72cd0427a8cda48db244caf6c", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2dc39626-7a", "ovs_interfaceid": "2dc39626-7aae-4e0c-a70b-e08d83c9788b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4d97f024-e964-485a-9511-f23de3e843bd", "address": "fa:16:3e:6b:85:80", "network": {"id": "630c713d-8e9c-44d9-9de3-fab9b04bc799", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1368736182", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.234", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5f1f0d72cd0427a8cda48db244caf6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d97f024-e9", "ovs_interfaceid": "4d97f024-e964-485a-9511-f23de3e843bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:14:32 np0005539552 nova_compute[233724]: 2025-11-29 08:14:32.931 233728 DEBUG oslo_concurrency.lockutils [req-4aab0201-67d9-4215-bde2-cacb194ce0c6 req-e4e3a894-09b6-4f76-9c55-8b2ded235ff3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-c32e74e2-e74f-4877-8130-ad35d31bb992" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:14:32 np0005539552 nova_compute[233724]: 2025-11-29 08:14:32.990 233728 DEBUG os_brick.utils [None req-9909a82f-c63a-4dcf-a85e-65f61bd76419 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:14:32 np0005539552 nova_compute[233724]: 2025-11-29 08:14:32.992 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:33 np0005539552 nova_compute[233724]: 2025-11-29 08:14:33.006 243418 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:33 np0005539552 nova_compute[233724]: 2025-11-29 08:14:33.007 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[e448a5c3-3192-4fac-8c43-c3bd16ba1ed3]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:33 np0005539552 nova_compute[233724]: 2025-11-29 08:14:33.009 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:33 np0005539552 nova_compute[233724]: 2025-11-29 08:14:33.017 243418 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:33 np0005539552 nova_compute[233724]: 2025-11-29 08:14:33.018 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[317e03d8-2c0b-49cb-bfcc-0279f0250864]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0e73df9328', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:33 np0005539552 nova_compute[233724]: 2025-11-29 08:14:33.019 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:33 np0005539552 nova_compute[233724]: 2025-11-29 08:14:33.027 243418 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:33 np0005539552 nova_compute[233724]: 2025-11-29 08:14:33.027 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[12cb753c-b60e-473b-8f5d-52d9eaaba957]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:33 np0005539552 nova_compute[233724]: 2025-11-29 08:14:33.029 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[57efb92c-44ad-4342-a179-a1f58199178c]: (4, '6fbde64a-d978-4f1a-a29d-a77e1f5a1987') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:33 np0005539552 nova_compute[233724]: 2025-11-29 08:14:33.030 233728 DEBUG oslo_concurrency.processutils [None req-9909a82f-c63a-4dcf-a85e-65f61bd76419 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:33 np0005539552 nova_compute[233724]: 2025-11-29 08:14:33.055 233728 DEBUG oslo_concurrency.processutils [None req-9909a82f-c63a-4dcf-a85e-65f61bd76419 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] CMD "nvme version" returned: 0 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:33 np0005539552 nova_compute[233724]: 2025-11-29 08:14:33.057 233728 DEBUG os_brick.initiator.connectors.lightos [None req-9909a82f-c63a-4dcf-a85e-65f61bd76419 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:14:33 np0005539552 nova_compute[233724]: 2025-11-29 08:14:33.057 233728 DEBUG os_brick.initiator.connectors.lightos [None req-9909a82f-c63a-4dcf-a85e-65f61bd76419 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:14:33 np0005539552 nova_compute[233724]: 2025-11-29 08:14:33.057 233728 DEBUG os_brick.initiator.connectors.lightos [None req-9909a82f-c63a-4dcf-a85e-65f61bd76419 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:14:33 np0005539552 nova_compute[233724]: 2025-11-29 08:14:33.058 233728 DEBUG os_brick.utils [None req-9909a82f-c63a-4dcf-a85e-65f61bd76419 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] <== get_connector_properties: return (66ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0e73df9328', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '6fbde64a-d978-4f1a-a29d-a77e1f5a1987', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:14:33 np0005539552 nova_compute[233724]: 2025-11-29 08:14:33.058 233728 DEBUG nova.virt.block_device [None req-9909a82f-c63a-4dcf-a85e-65f61bd76419 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Updating existing volume attachment record: c947711f-ccd0-49be-844b-6a633b111615 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:14:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:14:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:33.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:14:33 np0005539552 nova_compute[233724]: 2025-11-29 08:14:33.419 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Updating instance_info_cache with network_info: [{"id": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "address": "fa:16:3e:de:2f:2f", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8b38a34-82", "ovs_interfaceid": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:14:33 np0005539552 nova_compute[233724]: 2025-11-29 08:14:33.454 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Releasing lock "refresh_cache-19e85fae-c57e-409b-95f7-b53ddb4c928e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:14:33 np0005539552 nova_compute[233724]: 2025-11-29 08:14:33.454 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:14:33 np0005539552 nova_compute[233724]: 2025-11-29 08:14:33.454 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:33 np0005539552 nova_compute[233724]: 2025-11-29 08:14:33.455 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:33.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:33 np0005539552 nova_compute[233724]: 2025-11-29 08:14:33.685 233728 DEBUG nova.compute.manager [req-a55e03ce-d9d3-4589-91f0-1fdb15ce0998 req-39957a28-098a-4de7-9f37-fc9f00aeb7b2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Received event network-vif-plugged-4d97f024-e964-485a-9511-f23de3e843bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:33 np0005539552 nova_compute[233724]: 2025-11-29 08:14:33.686 233728 DEBUG oslo_concurrency.lockutils [req-a55e03ce-d9d3-4589-91f0-1fdb15ce0998 req-39957a28-098a-4de7-9f37-fc9f00aeb7b2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "c32e74e2-e74f-4877-8130-ad35d31bb992-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:33 np0005539552 nova_compute[233724]: 2025-11-29 08:14:33.686 233728 DEBUG oslo_concurrency.lockutils [req-a55e03ce-d9d3-4589-91f0-1fdb15ce0998 req-39957a28-098a-4de7-9f37-fc9f00aeb7b2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c32e74e2-e74f-4877-8130-ad35d31bb992-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:33 np0005539552 nova_compute[233724]: 2025-11-29 08:14:33.686 233728 DEBUG oslo_concurrency.lockutils [req-a55e03ce-d9d3-4589-91f0-1fdb15ce0998 req-39957a28-098a-4de7-9f37-fc9f00aeb7b2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c32e74e2-e74f-4877-8130-ad35d31bb992-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:33 np0005539552 nova_compute[233724]: 2025-11-29 08:14:33.687 233728 DEBUG nova.compute.manager [req-a55e03ce-d9d3-4589-91f0-1fdb15ce0998 req-39957a28-098a-4de7-9f37-fc9f00aeb7b2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] No waiting events found dispatching network-vif-plugged-4d97f024-e964-485a-9511-f23de3e843bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:14:33 np0005539552 nova_compute[233724]: 2025-11-29 08:14:33.687 233728 WARNING nova.compute.manager [req-a55e03ce-d9d3-4589-91f0-1fdb15ce0998 req-39957a28-098a-4de7-9f37-fc9f00aeb7b2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Received unexpected event network-vif-plugged-4d97f024-e964-485a-9511-f23de3e843bd for instance with vm_state active and task_state None.#033[00m
Nov 29 03:14:33 np0005539552 nova_compute[233724]: 2025-11-29 08:14:33.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:33 np0005539552 nova_compute[233724]: 2025-11-29 08:14:33.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:34 np0005539552 nova_compute[233724]: 2025-11-29 08:14:34.074 233728 DEBUG nova.objects.instance [None req-9909a82f-c63a-4dcf-a85e-65f61bd76419 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Lazy-loading 'flavor' on Instance uuid c32e74e2-e74f-4877-8130-ad35d31bb992 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:34 np0005539552 nova_compute[233724]: 2025-11-29 08:14:34.102 233728 DEBUG nova.virt.libvirt.driver [None req-9909a82f-c63a-4dcf-a85e-65f61bd76419 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Attempting to attach volume e6cdb61f-1c62-49d7-8d64-b8125afce54e with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Nov 29 03:14:34 np0005539552 nova_compute[233724]: 2025-11-29 08:14:34.107 233728 DEBUG nova.virt.libvirt.guest [None req-9909a82f-c63a-4dcf-a85e-65f61bd76419 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 03:14:34 np0005539552 nova_compute[233724]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:14:34 np0005539552 nova_compute[233724]:  <source protocol="rbd" name="volumes/volume-e6cdb61f-1c62-49d7-8d64-b8125afce54e">
Nov 29 03:14:34 np0005539552 nova_compute[233724]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:14:34 np0005539552 nova_compute[233724]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:14:34 np0005539552 nova_compute[233724]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:14:34 np0005539552 nova_compute[233724]:  </source>
Nov 29 03:14:34 np0005539552 nova_compute[233724]:  <auth username="openstack">
Nov 29 03:14:34 np0005539552 nova_compute[233724]:    <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:14:34 np0005539552 nova_compute[233724]:  </auth>
Nov 29 03:14:34 np0005539552 nova_compute[233724]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:14:34 np0005539552 nova_compute[233724]:  <serial>e6cdb61f-1c62-49d7-8d64-b8125afce54e</serial>
Nov 29 03:14:34 np0005539552 nova_compute[233724]: </disk>
Nov 29 03:14:34 np0005539552 nova_compute[233724]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 03:14:34 np0005539552 nova_compute[233724]: 2025-11-29 08:14:34.403 233728 DEBUG nova.virt.libvirt.driver [None req-9909a82f-c63a-4dcf-a85e-65f61bd76419 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:14:34 np0005539552 nova_compute[233724]: 2025-11-29 08:14:34.404 233728 DEBUG nova.virt.libvirt.driver [None req-9909a82f-c63a-4dcf-a85e-65f61bd76419 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:14:34 np0005539552 nova_compute[233724]: 2025-11-29 08:14:34.404 233728 DEBUG nova.virt.libvirt.driver [None req-9909a82f-c63a-4dcf-a85e-65f61bd76419 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] No VIF found with MAC fa:16:3e:91:96:6a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:14:34 np0005539552 nova_compute[233724]: 2025-11-29 08:14:34.808 233728 DEBUG oslo_concurrency.lockutils [None req-9909a82f-c63a-4dcf-a85e-65f61bd76419 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Lock "c32e74e2-e74f-4877-8130-ad35d31bb992" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.146s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:35.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:35 np0005539552 nova_compute[233724]: 2025-11-29 08:14:35.526 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:35 np0005539552 nova_compute[233724]: 2025-11-29 08:14:35.561 233728 DEBUG nova.objects.instance [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'flavor' on Instance uuid 19e85fae-c57e-409b-95f7-b53ddb4c928e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:35 np0005539552 nova_compute[233724]: 2025-11-29 08:14:35.598 233728 DEBUG oslo_concurrency.lockutils [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "refresh_cache-19e85fae-c57e-409b-95f7-b53ddb4c928e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:14:35 np0005539552 nova_compute[233724]: 2025-11-29 08:14:35.598 233728 DEBUG oslo_concurrency.lockutils [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquired lock "refresh_cache-19e85fae-c57e-409b-95f7-b53ddb4c928e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:14:35 np0005539552 nova_compute[233724]: 2025-11-29 08:14:35.599 233728 DEBUG nova.network.neutron [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:14:35 np0005539552 nova_compute[233724]: 2025-11-29 08:14:35.599 233728 DEBUG nova.objects.instance [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'info_cache' on Instance uuid 19e85fae-c57e-409b-95f7-b53ddb4c928e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:35.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:35 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:35.661 143505 DEBUG eventlet.wsgi.server [-] (143505) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Nov 29 03:14:35 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:35.662 143505 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /openstack/latest/meta_data.json HTTP/1.0#015
Nov 29 03:14:35 np0005539552 ovn_metadata_agent[143394]: Accept: */*#015
Nov 29 03:14:35 np0005539552 ovn_metadata_agent[143394]: Connection: close#015
Nov 29 03:14:35 np0005539552 ovn_metadata_agent[143394]: Content-Type: text/plain#015
Nov 29 03:14:35 np0005539552 ovn_metadata_agent[143394]: Host: 169.254.169.254#015
Nov 29 03:14:35 np0005539552 ovn_metadata_agent[143394]: User-Agent: curl/7.84.0#015
Nov 29 03:14:35 np0005539552 ovn_metadata_agent[143394]: X-Forwarded-For: 10.100.0.12#015
Nov 29 03:14:35 np0005539552 ovn_metadata_agent[143394]: X-Ovn-Network-Id: 88cc8f67-0d68-413a-b508-63fae18f1c0c __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Nov 29 03:14:35 np0005539552 nova_compute[233724]: 2025-11-29 08:14:35.986 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:36 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:36.938 143505 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Nov 29 03:14:36 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:36.939 143505 INFO eventlet.wsgi.server [-] 10.100.0.12,<local> "GET /openstack/latest/meta_data.json HTTP/1.1" status: 200  len: 1916 time: 1.2769904#033[00m
Nov 29 03:14:36 np0005539552 haproxy-metadata-proxy-88cc8f67-0d68-413a-b508-63fae18f1c0c[270354]: 10.100.0.12:47110 [29/Nov/2025:08:14:35.659] listener listener/metadata 0/0/0/1279/1279 200 1900 - - ---- 1/1/0/0/0 0/0 "GET /openstack/latest/meta_data.json HTTP/1.1"
Nov 29 03:14:36 np0005539552 nova_compute[233724]: 2025-11-29 08:14:36.943 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:37.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:37.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:37 np0005539552 nova_compute[233724]: 2025-11-29 08:14:37.697 233728 DEBUG oslo_concurrency.lockutils [None req-b0a3b8f3-323e-47d5-b61f-0855861939f4 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Acquiring lock "c32e74e2-e74f-4877-8130-ad35d31bb992" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:14:37 np0005539552 nova_compute[233724]: 2025-11-29 08:14:37.698 233728 DEBUG oslo_concurrency.lockutils [None req-b0a3b8f3-323e-47d5-b61f-0855861939f4 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Lock "c32e74e2-e74f-4877-8130-ad35d31bb992" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:14:37 np0005539552 nova_compute[233724]: 2025-11-29 08:14:37.715 233728 INFO nova.compute.manager [None req-b0a3b8f3-323e-47d5-b61f-0855861939f4 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Detaching volume e6cdb61f-1c62-49d7-8d64-b8125afce54e
Nov 29 03:14:37 np0005539552 nova_compute[233724]: 2025-11-29 08:14:37.882 233728 INFO nova.virt.block_device [None req-b0a3b8f3-323e-47d5-b61f-0855861939f4 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Attempting to driver detach volume e6cdb61f-1c62-49d7-8d64-b8125afce54e from mountpoint /dev/vdb
Nov 29 03:14:37 np0005539552 nova_compute[233724]: 2025-11-29 08:14:37.889 233728 DEBUG nova.virt.libvirt.driver [None req-b0a3b8f3-323e-47d5-b61f-0855861939f4 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Attempting to detach device vdb from instance c32e74e2-e74f-4877-8130-ad35d31bb992 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 03:14:37 np0005539552 nova_compute[233724]: 2025-11-29 08:14:37.890 233728 DEBUG nova.virt.libvirt.guest [None req-b0a3b8f3-323e-47d5-b61f-0855861939f4 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:14:37 np0005539552 nova_compute[233724]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:14:37 np0005539552 nova_compute[233724]:  <source protocol="rbd" name="volumes/volume-e6cdb61f-1c62-49d7-8d64-b8125afce54e">
Nov 29 03:14:37 np0005539552 nova_compute[233724]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:14:37 np0005539552 nova_compute[233724]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:14:37 np0005539552 nova_compute[233724]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:14:37 np0005539552 nova_compute[233724]:  </source>
Nov 29 03:14:37 np0005539552 nova_compute[233724]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:14:37 np0005539552 nova_compute[233724]:  <serial>e6cdb61f-1c62-49d7-8d64-b8125afce54e</serial>
Nov 29 03:14:37 np0005539552 nova_compute[233724]:  <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Nov 29 03:14:37 np0005539552 nova_compute[233724]: </disk>
Nov 29 03:14:37 np0005539552 nova_compute[233724]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 03:14:37 np0005539552 nova_compute[233724]: 2025-11-29 08:14:37.896 233728 INFO nova.virt.libvirt.driver [None req-b0a3b8f3-323e-47d5-b61f-0855861939f4 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Successfully detached device vdb from instance c32e74e2-e74f-4877-8130-ad35d31bb992 from the persistent domain config.
Nov 29 03:14:37 np0005539552 nova_compute[233724]: 2025-11-29 08:14:37.896 233728 DEBUG nova.virt.libvirt.driver [None req-b0a3b8f3-323e-47d5-b61f-0855861939f4 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance c32e74e2-e74f-4877-8130-ad35d31bb992 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 03:14:37 np0005539552 nova_compute[233724]: 2025-11-29 08:14:37.896 233728 DEBUG nova.virt.libvirt.guest [None req-b0a3b8f3-323e-47d5-b61f-0855861939f4 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:14:37 np0005539552 nova_compute[233724]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:14:37 np0005539552 nova_compute[233724]:  <source protocol="rbd" name="volumes/volume-e6cdb61f-1c62-49d7-8d64-b8125afce54e">
Nov 29 03:14:37 np0005539552 nova_compute[233724]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:14:37 np0005539552 nova_compute[233724]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:14:37 np0005539552 nova_compute[233724]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:14:37 np0005539552 nova_compute[233724]:  </source>
Nov 29 03:14:37 np0005539552 nova_compute[233724]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:14:37 np0005539552 nova_compute[233724]:  <serial>e6cdb61f-1c62-49d7-8d64-b8125afce54e</serial>
Nov 29 03:14:37 np0005539552 nova_compute[233724]:  <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Nov 29 03:14:37 np0005539552 nova_compute[233724]: </disk>
Nov 29 03:14:37 np0005539552 nova_compute[233724]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 03:14:37 np0005539552 nova_compute[233724]: 2025-11-29 08:14:37.992 233728 DEBUG nova.virt.libvirt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Received event <DeviceRemovedEvent: 1764404077.992462, c32e74e2-e74f-4877-8130-ad35d31bb992 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 03:14:37 np0005539552 nova_compute[233724]: 2025-11-29 08:14:37.995 233728 DEBUG nova.virt.libvirt.driver [None req-b0a3b8f3-323e-47d5-b61f-0855861939f4 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance c32e74e2-e74f-4877-8130-ad35d31bb992 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 03:14:37 np0005539552 nova_compute[233724]: 2025-11-29 08:14:37.997 233728 INFO nova.virt.libvirt.driver [None req-b0a3b8f3-323e-47d5-b61f-0855861939f4 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Successfully detached device vdb from instance c32e74e2-e74f-4877-8130-ad35d31bb992 from the live domain config.
Nov 29 03:14:38 np0005539552 nova_compute[233724]: 2025-11-29 08:14:38.138 233728 DEBUG nova.objects.instance [None req-b0a3b8f3-323e-47d5-b61f-0855861939f4 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Lazy-loading 'flavor' on Instance uuid c32e74e2-e74f-4877-8130-ad35d31bb992 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:14:38 np0005539552 nova_compute[233724]: 2025-11-29 08:14:38.203 233728 DEBUG oslo_concurrency.lockutils [None req-b0a3b8f3-323e-47d5-b61f-0855861939f4 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Lock "c32e74e2-e74f-4877-8130-ad35d31bb992" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.505s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:14:38 np0005539552 nova_compute[233724]: 2025-11-29 08:14:38.266 233728 DEBUG nova.network.neutron [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Updating instance_info_cache with network_info: [{"id": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "address": "fa:16:3e:de:2f:2f", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8b38a34-82", "ovs_interfaceid": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:14:38 np0005539552 nova_compute[233724]: 2025-11-29 08:14:38.285 233728 DEBUG oslo_concurrency.lockutils [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Releasing lock "refresh_cache-19e85fae-c57e-409b-95f7-b53ddb4c928e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:14:38 np0005539552 nova_compute[233724]: 2025-11-29 08:14:38.315 233728 INFO nova.virt.libvirt.driver [-] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Instance destroyed successfully.
Nov 29 03:14:38 np0005539552 nova_compute[233724]: 2025-11-29 08:14:38.315 233728 DEBUG nova.objects.instance [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'numa_topology' on Instance uuid 19e85fae-c57e-409b-95f7-b53ddb4c928e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:14:38 np0005539552 nova_compute[233724]: 2025-11-29 08:14:38.327 233728 DEBUG nova.objects.instance [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'resources' on Instance uuid 19e85fae-c57e-409b-95f7-b53ddb4c928e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:14:38 np0005539552 nova_compute[233724]: 2025-11-29 08:14:38.339 233728 DEBUG nova.virt.libvirt.vif [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:12:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1456117084',display_name='tempest-ServerActionsTestOtherB-server-1456117084',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1456117084',id=85,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEIWQ7Agoaix0SKEJrKHu4bB1Waq8EgVKfKJ/0RzVkl2dpwZ96ym4a4YEld/N4o6ej04XW7IMisQ29oCITVHbKZxjsHowaHjgF+3UGfTUq2pqZm9EZTJqhsQL0kJWzkKow==',key_name='tempest-keypair-319762409',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:14:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='1b8899f76f554afc96bb2441424e5a77',ramdisk_id='',reservation_id='r-h2yqalhl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-477220446',owner_user_name='tempest-ServerActionsTestOtherB-477220446-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:14:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5e3ade3963d47be97b545b2e3779b6b',uuid=19e85fae-c57e-409b-95f7-b53ddb4c928e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "address": "fa:16:3e:de:2f:2f", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8b38a34-82", "ovs_interfaceid": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 03:14:38 np0005539552 nova_compute[233724]: 2025-11-29 08:14:38.339 233728 DEBUG nova.network.os_vif_util [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converting VIF {"id": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "address": "fa:16:3e:de:2f:2f", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8b38a34-82", "ovs_interfaceid": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:14:38 np0005539552 nova_compute[233724]: 2025-11-29 08:14:38.340 233728 DEBUG nova.network.os_vif_util [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:2f:2f,bridge_name='br-int',has_traffic_filtering=True,id=d8b38a34-8274-43e4-8ebd-3924de5c5ba7,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8b38a34-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:14:38 np0005539552 nova_compute[233724]: 2025-11-29 08:14:38.341 233728 DEBUG os_vif [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:2f:2f,bridge_name='br-int',has_traffic_filtering=True,id=d8b38a34-8274-43e4-8ebd-3924de5c5ba7,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8b38a34-82') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 03:14:38 np0005539552 nova_compute[233724]: 2025-11-29 08:14:38.343 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:14:38 np0005539552 nova_compute[233724]: 2025-11-29 08:14:38.343 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd8b38a34-82, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:14:38 np0005539552 nova_compute[233724]: 2025-11-29 08:14:38.345 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:14:38 np0005539552 nova_compute[233724]: 2025-11-29 08:14:38.348 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 03:14:38 np0005539552 nova_compute[233724]: 2025-11-29 08:14:38.349 233728 INFO os_vif [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:2f:2f,bridge_name='br-int',has_traffic_filtering=True,id=d8b38a34-8274-43e4-8ebd-3924de5c5ba7,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8b38a34-82')
Nov 29 03:14:38 np0005539552 nova_compute[233724]: 2025-11-29 08:14:38.355 233728 DEBUG nova.virt.libvirt.driver [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Start _get_guest_xml network_info=[{"id": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "address": "fa:16:3e:de:2f:2f", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8b38a34-82", "ovs_interfaceid": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 03:14:38 np0005539552 nova_compute[233724]: 2025-11-29 08:14:38.358 233728 WARNING nova.virt.libvirt.driver [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:14:38 np0005539552 nova_compute[233724]: 2025-11-29 08:14:38.362 233728 DEBUG nova.virt.libvirt.host [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 03:14:38 np0005539552 nova_compute[233724]: 2025-11-29 08:14:38.362 233728 DEBUG nova.virt.libvirt.host [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 03:14:38 np0005539552 nova_compute[233724]: 2025-11-29 08:14:38.365 233728 DEBUG nova.virt.libvirt.host [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 03:14:38 np0005539552 nova_compute[233724]: 2025-11-29 08:14:38.365 233728 DEBUG nova.virt.libvirt.host [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 03:14:38 np0005539552 nova_compute[233724]: 2025-11-29 08:14:38.366 233728 DEBUG nova.virt.libvirt.driver [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 03:14:38 np0005539552 nova_compute[233724]: 2025-11-29 08:14:38.367 233728 DEBUG nova.virt.hardware [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='709b029f-0458-4e40-a6ee-e1e02b48c06c',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 03:14:38 np0005539552 nova_compute[233724]: 2025-11-29 08:14:38.367 233728 DEBUG nova.virt.hardware [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 03:14:38 np0005539552 nova_compute[233724]: 2025-11-29 08:14:38.367 233728 DEBUG nova.virt.hardware [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 03:14:38 np0005539552 nova_compute[233724]: 2025-11-29 08:14:38.368 233728 DEBUG nova.virt.hardware [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 03:14:38 np0005539552 nova_compute[233724]: 2025-11-29 08:14:38.368 233728 DEBUG nova.virt.hardware [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 03:14:38 np0005539552 nova_compute[233724]: 2025-11-29 08:14:38.368 233728 DEBUG nova.virt.hardware [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 03:14:38 np0005539552 nova_compute[233724]: 2025-11-29 08:14:38.368 233728 DEBUG nova.virt.hardware [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 03:14:38 np0005539552 nova_compute[233724]: 2025-11-29 08:14:38.369 233728 DEBUG nova.virt.hardware [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 03:14:38 np0005539552 nova_compute[233724]: 2025-11-29 08:14:38.369 233728 DEBUG nova.virt.hardware [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 03:14:38 np0005539552 nova_compute[233724]: 2025-11-29 08:14:38.369 233728 DEBUG nova.virt.hardware [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 03:14:38 np0005539552 nova_compute[233724]: 2025-11-29 08:14:38.369 233728 DEBUG nova.virt.hardware [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 03:14:38 np0005539552 nova_compute[233724]: 2025-11-29 08:14:38.370 233728 DEBUG nova.objects.instance [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 19e85fae-c57e-409b-95f7-b53ddb4c928e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:14:38 np0005539552 nova_compute[233724]: 2025-11-29 08:14:38.383 233728 DEBUG oslo_concurrency.processutils [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:14:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:14:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2827016566' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:14:38 np0005539552 nova_compute[233724]: 2025-11-29 08:14:38.811 233728 DEBUG oslo_concurrency.processutils [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:14:38 np0005539552 nova_compute[233724]: 2025-11-29 08:14:38.885 233728 DEBUG oslo_concurrency.processutils [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:14:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:14:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1408196040' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:14:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:14:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1408196040' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.063 233728 DEBUG oslo_concurrency.lockutils [None req-125d696b-2924-4c52-af6f-d4d960a92746 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Acquiring lock "interface-c32e74e2-e74f-4877-8130-ad35d31bb992-4d97f024-e964-485a-9511-f23de3e843bd" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.064 233728 DEBUG oslo_concurrency.lockutils [None req-125d696b-2924-4c52-af6f-d4d960a92746 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Lock "interface-c32e74e2-e74f-4877-8130-ad35d31bb992-4d97f024-e964-485a-9511-f23de3e843bd" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:14:39Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b2:af:cf 10.100.0.10
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.080 233728 DEBUG nova.objects.instance [None req-125d696b-2924-4c52-af6f-d4d960a92746 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Lazy-loading 'flavor' on Instance uuid c32e74e2-e74f-4877-8130-ad35d31bb992 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.121 233728 DEBUG nova.virt.libvirt.vif [None req-125d696b-2924-4c52-af6f-d4d960a92746 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:13:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=InstanceDeviceMetadata,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1574280677',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1574280677',id=92,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE1erYbCXqwQQYhX/uR9pDNm/1t/pGAfklA44NJhGMIL8E6zP+f6ImuAmBaXR5JpOEpRJertjSkrs1uRBMGT5Sn9Wu4jIvd5fp8AA7ZdhfA39Eye8MjgGqmthA4Ol0v56A==',key_name='tempest-keypair-306133375',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:13:58Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b5f1f0d72cd0427a8cda48db244caf6c',ramdisk_id='',reservation_id='r-1l34dvi1',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-1178715901',owner_user_name='tempest-TaggedAttachmentsTest-1178715901-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:13:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='90573489491c4659ba4a8ccbd6b896a7',uuid=c32e74e2-e74f-4877-8130-ad35d31bb992,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4d97f024-e964-485a-9511-f23de3e843bd", "address": "fa:16:3e:6b:85:80", "network": {"id": "630c713d-8e9c-44d9-9de3-fab9b04bc799", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1368736182", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": 
{"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.234", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5f1f0d72cd0427a8cda48db244caf6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d97f024-e9", "ovs_interfaceid": "4d97f024-e964-485a-9511-f23de3e843bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.122 233728 DEBUG nova.network.os_vif_util [None req-125d696b-2924-4c52-af6f-d4d960a92746 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Converting VIF {"id": "4d97f024-e964-485a-9511-f23de3e843bd", "address": "fa:16:3e:6b:85:80", "network": {"id": "630c713d-8e9c-44d9-9de3-fab9b04bc799", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1368736182", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.234", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5f1f0d72cd0427a8cda48db244caf6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d97f024-e9", "ovs_interfaceid": "4d97f024-e964-485a-9511-f23de3e843bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.122 233728 DEBUG nova.network.os_vif_util [None req-125d696b-2924-4c52-af6f-d4d960a92746 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:85:80,bridge_name='br-int',has_traffic_filtering=True,id=4d97f024-e964-485a-9511-f23de3e843bd,network=Network(630c713d-8e9c-44d9-9de3-fab9b04bc799),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d97f024-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.125 233728 DEBUG nova.virt.libvirt.guest [None req-125d696b-2924-4c52-af6f-d4d960a92746 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:6b:85:80"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4d97f024-e9"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.127 233728 DEBUG nova.virt.libvirt.guest [None req-125d696b-2924-4c52-af6f-d4d960a92746 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:6b:85:80"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4d97f024-e9"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.129 233728 DEBUG nova.virt.libvirt.driver [None req-125d696b-2924-4c52-af6f-d4d960a92746 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Attempting to detach device tap4d97f024-e9 from instance c32e74e2-e74f-4877-8130-ad35d31bb992 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.130 233728 DEBUG nova.virt.libvirt.guest [None req-125d696b-2924-4c52-af6f-d4d960a92746 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] detach device xml: <interface type="ethernet">
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <mac address="fa:16:3e:6b:85:80"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <model type="virtio"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <mtu size="1442"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <target dev="tap4d97f024-e9"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]: </interface>
Nov 29 03:14:39 np0005539552 nova_compute[233724]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.135 233728 DEBUG nova.virt.libvirt.guest [None req-125d696b-2924-4c52-af6f-d4d960a92746 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:6b:85:80"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4d97f024-e9"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.138 233728 DEBUG nova.virt.libvirt.guest [None req-125d696b-2924-4c52-af6f-d4d960a92746 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:6b:85:80"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4d97f024-e9"/></interface>not found in domain: <domain type='kvm' id='34'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <name>instance-0000005c</name>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <uuid>c32e74e2-e74f-4877-8130-ad35d31bb992</uuid>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <nova:name>tempest-device-tagging-server-1574280677</nova:name>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <nova:creationTime>2025-11-29 08:14:30</nova:creationTime>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <nova:flavor name="m1.nano">
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <nova:memory>128</nova:memory>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <nova:disk>1</nova:disk>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <nova:swap>0</nova:swap>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  </nova:flavor>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <nova:owner>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <nova:user uuid="90573489491c4659ba4a8ccbd6b896a7">tempest-TaggedAttachmentsTest-1178715901-project-member</nova:user>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <nova:project uuid="b5f1f0d72cd0427a8cda48db244caf6c">tempest-TaggedAttachmentsTest-1178715901</nova:project>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  </nova:owner>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <nova:ports>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <nova:port uuid="2dc39626-7aae-4e0c-a70b-e08d83c9788b">
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </nova:port>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <nova:port uuid="4d97f024-e964-485a-9511-f23de3e843bd">
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <nova:ip type="fixed" address="10.10.10.234" ipVersion="4"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </nova:port>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  </nova:ports>
Nov 29 03:14:39 np0005539552 nova_compute[233724]: </nova:instance>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <memory unit='KiB'>131072</memory>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <vcpu placement='static'>1</vcpu>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <resource>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <partition>/machine</partition>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  </resource>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <sysinfo type='smbios'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <entry name='manufacturer'>RDO</entry>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <entry name='product'>OpenStack Compute</entry>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <entry name='serial'>c32e74e2-e74f-4877-8130-ad35d31bb992</entry>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <entry name='uuid'>c32e74e2-e74f-4877-8130-ad35d31bb992</entry>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <entry name='family'>Virtual Machine</entry>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <boot dev='hd'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <smbios mode='sysinfo'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <vmcoreinfo state='on'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <cpu mode='custom' match='exact' check='full'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <model fallback='forbid'>Nehalem</model>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <feature policy='require' name='x2apic'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <feature policy='require' name='hypervisor'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <feature policy='require' name='vme'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <clock offset='utc'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <timer name='pit' tickpolicy='delay'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <timer name='hpet' present='no'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <on_poweroff>destroy</on_poweroff>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <on_reboot>restart</on_reboot>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <on_crash>destroy</on_crash>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <disk type='network' device='disk'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <auth username='openstack'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:        <secret type='ceph' uuid='b66774a7-56d9-5535-bd8c-681234404870'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <source protocol='rbd' name='vms/c32e74e2-e74f-4877-8130-ad35d31bb992_disk' index='2'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target dev='vda' bus='virtio'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='virtio-disk0'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <disk type='network' device='cdrom'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <auth username='openstack'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:        <secret type='ceph' uuid='b66774a7-56d9-5535-bd8c-681234404870'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <source protocol='rbd' name='vms/c32e74e2-e74f-4877-8130-ad35d31bb992_disk.config' index='1'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target dev='sda' bus='sata'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <readonly/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='sata0-0-0'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='0' model='pcie-root'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pcie.0'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='1' port='0x10'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.1'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='2' port='0x11'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.2'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='3' port='0x12'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.3'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='4' port='0x13'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.4'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='5' port='0x14'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.5'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='6' port='0x15'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.6'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='7' port='0x16'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.7'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='8' port='0x17'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.8'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='9' port='0x18'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.9'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='10' port='0x19'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.10'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='11' port='0x1a'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.11'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='12' port='0x1b'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.12'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='13' port='0x1c'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.13'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='14' port='0x1d'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.14'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='15' port='0x1e'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.15'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='16' port='0x1f'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.16'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='17' port='0x20'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.17'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='18' port='0x21'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.18'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='19' port='0x22'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.19'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='20' port='0x23'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.20'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='21' port='0x24'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.21'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='22' port='0x25'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.22'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='23' port='0x26'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.23'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='24' port='0x27'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.24'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='25' port='0x28'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.25'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-pci-bridge'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.26'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='usb'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='sata' index='0'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='ide'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <interface type='ethernet'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <mac address='fa:16:3e:91:96:6a'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target dev='tap2dc39626-7a'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model type='virtio'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <mtu size='1442'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='net0'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <interface type='ethernet'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <mac address='fa:16:3e:6b:85:80'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target dev='tap4d97f024-e9'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model type='virtio'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <mtu size='1442'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='net1'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <serial type='pty'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <source path='/dev/pts/1'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <log file='/var/lib/nova/instances/c32e74e2-e74f-4877-8130-ad35d31bb992/console.log' append='off'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target type='isa-serial' port='0'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:        <model name='isa-serial'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      </target>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='serial0'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <console type='pty' tty='/dev/pts/1'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <source path='/dev/pts/1'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <log file='/var/lib/nova/instances/c32e74e2-e74f-4877-8130-ad35d31bb992/console.log' append='off'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target type='serial' port='0'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='serial0'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </console>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <input type='tablet' bus='usb'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='input0'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='usb' bus='0' port='1'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </input>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <input type='mouse' bus='ps2'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='input1'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </input>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <input type='keyboard' bus='ps2'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='input2'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </input>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <listen type='address' address='::0'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </graphics>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <audio id='1' type='none'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model type='virtio' heads='1' primary='yes'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='video0'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <watchdog model='itco' action='reset'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='watchdog0'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </watchdog>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <memballoon model='virtio'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <stats period='10'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='balloon0'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <rng model='virtio'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <backend model='random'>/dev/urandom</backend>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='rng0'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <label>system_u:system_r:svirt_t:s0:c381,c897</label>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c381,c897</imagelabel>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  </seclabel>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <label>+107:+107</label>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <imagelabel>+107:+107</imagelabel>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  </seclabel>
Nov 29 03:14:39 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:14:39 np0005539552 nova_compute[233724]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.140 233728 INFO nova.virt.libvirt.driver [None req-125d696b-2924-4c52-af6f-d4d960a92746 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Successfully detached device tap4d97f024-e9 from instance c32e74e2-e74f-4877-8130-ad35d31bb992 from the persistent domain config.#033[00m
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.140 233728 DEBUG nova.virt.libvirt.driver [None req-125d696b-2924-4c52-af6f-d4d960a92746 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] (1/8): Attempting to detach device tap4d97f024-e9 with device alias net1 from instance c32e74e2-e74f-4877-8130-ad35d31bb992 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.140 233728 DEBUG nova.virt.libvirt.guest [None req-125d696b-2924-4c52-af6f-d4d960a92746 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] detach device xml: <interface type="ethernet">
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <mac address="fa:16:3e:6b:85:80"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <model type="virtio"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <mtu size="1442"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <target dev="tap4d97f024-e9"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]: </interface>
Nov 29 03:14:39 np0005539552 nova_compute[233724]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:14:39 np0005539552 kernel: tap4d97f024-e9 (unregistering): left promiscuous mode
Nov 29 03:14:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:39.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:39 np0005539552 NetworkManager[48926]: <info>  [1764404079.1890] device (tap4d97f024-e9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.209 233728 DEBUG nova.virt.libvirt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Received event <DeviceRemovedEvent: 1764404079.2088196, c32e74e2-e74f-4877-8130-ad35d31bb992 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.210 233728 DEBUG nova.virt.libvirt.driver [None req-125d696b-2924-4c52-af6f-d4d960a92746 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Start waiting for the detach event from libvirt for device tap4d97f024-e9 with device alias net1 for instance c32e74e2-e74f-4877-8130-ad35d31bb992 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.210 233728 DEBUG nova.virt.libvirt.guest [None req-125d696b-2924-4c52-af6f-d4d960a92746 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:6b:85:80"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4d97f024-e9"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.214 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:14:39Z|00316|binding|INFO|Releasing lport 4d97f024-e964-485a-9511-f23de3e843bd from this chassis (sb_readonly=0)
Nov 29 03:14:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:14:39Z|00317|binding|INFO|Setting lport 4d97f024-e964-485a-9511-f23de3e843bd down in Southbound
Nov 29 03:14:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:14:39Z|00318|binding|INFO|Removing iface tap4d97f024-e9 ovn-installed in OVS
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.216 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.218 233728 DEBUG nova.virt.libvirt.guest [None req-125d696b-2924-4c52-af6f-d4d960a92746 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:6b:85:80"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4d97f024-e9"/></interface>not found in domain: <domain type='kvm' id='34'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <name>instance-0000005c</name>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <uuid>c32e74e2-e74f-4877-8130-ad35d31bb992</uuid>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <nova:name>tempest-device-tagging-server-1574280677</nova:name>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <nova:creationTime>2025-11-29 08:14:30</nova:creationTime>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <nova:flavor name="m1.nano">
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <nova:memory>128</nova:memory>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <nova:disk>1</nova:disk>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <nova:swap>0</nova:swap>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  </nova:flavor>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <nova:owner>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <nova:user uuid="90573489491c4659ba4a8ccbd6b896a7">tempest-TaggedAttachmentsTest-1178715901-project-member</nova:user>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <nova:project uuid="b5f1f0d72cd0427a8cda48db244caf6c">tempest-TaggedAttachmentsTest-1178715901</nova:project>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  </nova:owner>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <nova:ports>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <nova:port uuid="2dc39626-7aae-4e0c-a70b-e08d83c9788b">
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </nova:port>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <nova:port uuid="4d97f024-e964-485a-9511-f23de3e843bd">
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <nova:ip type="fixed" address="10.10.10.234" ipVersion="4"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </nova:port>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  </nova:ports>
Nov 29 03:14:39 np0005539552 nova_compute[233724]: </nova:instance>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <memory unit='KiB'>131072</memory>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <vcpu placement='static'>1</vcpu>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <resource>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <partition>/machine</partition>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  </resource>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <sysinfo type='smbios'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <entry name='manufacturer'>RDO</entry>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <entry name='product'>OpenStack Compute</entry>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <entry name='serial'>c32e74e2-e74f-4877-8130-ad35d31bb992</entry>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <entry name='uuid'>c32e74e2-e74f-4877-8130-ad35d31bb992</entry>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <entry name='family'>Virtual Machine</entry>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <boot dev='hd'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <smbios mode='sysinfo'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <vmcoreinfo state='on'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <cpu mode='custom' match='exact' check='full'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <model fallback='forbid'>Nehalem</model>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <feature policy='require' name='x2apic'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <feature policy='require' name='hypervisor'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <feature policy='require' name='vme'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <clock offset='utc'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <timer name='pit' tickpolicy='delay'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <timer name='hpet' present='no'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <on_poweroff>destroy</on_poweroff>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <on_reboot>restart</on_reboot>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <on_crash>destroy</on_crash>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <disk type='network' device='disk'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <auth username='openstack'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:        <secret type='ceph' uuid='b66774a7-56d9-5535-bd8c-681234404870'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <source protocol='rbd' name='vms/c32e74e2-e74f-4877-8130-ad35d31bb992_disk' index='2'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target dev='vda' bus='virtio'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='virtio-disk0'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <disk type='network' device='cdrom'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <auth username='openstack'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:        <secret type='ceph' uuid='b66774a7-56d9-5535-bd8c-681234404870'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <source protocol='rbd' name='vms/c32e74e2-e74f-4877-8130-ad35d31bb992_disk.config' index='1'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target dev='sda' bus='sata'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <readonly/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='sata0-0-0'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='0' model='pcie-root'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pcie.0'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='1' port='0x10'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.1'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='2' port='0x11'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.2'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='3' port='0x12'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.3'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='4' port='0x13'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.4'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='5' port='0x14'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.5'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='6' port='0x15'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.6'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='7' port='0x16'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.7'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='8' port='0x17'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.8'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='9' port='0x18'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.9'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='10' port='0x19'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.10'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='11' port='0x1a'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.11'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='12' port='0x1b'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.12'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='13' port='0x1c'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.13'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='14' port='0x1d'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.14'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='15' port='0x1e'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.15'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='16' port='0x1f'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.16'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='17' port='0x20'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.17'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='18' port='0x21'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.18'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='19' port='0x22'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.19'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='20' port='0x23'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.20'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='21' port='0x24'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.21'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='22' port='0x25'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.22'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='23' port='0x26'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.23'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='24' port='0x27'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.24'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target chassis='25' port='0x28'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.25'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model name='pcie-pci-bridge'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='pci.26'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='usb'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type='sata' index='0'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='ide'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <interface type='ethernet'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <mac address='fa:16:3e:91:96:6a'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target dev='tap2dc39626-7a'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model type='virtio'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <mtu size='1442'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='net0'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <serial type='pty'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <source path='/dev/pts/1'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <log file='/var/lib/nova/instances/c32e74e2-e74f-4877-8130-ad35d31bb992/console.log' append='off'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target type='isa-serial' port='0'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:        <model name='isa-serial'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      </target>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='serial0'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <console type='pty' tty='/dev/pts/1'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <source path='/dev/pts/1'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <log file='/var/lib/nova/instances/c32e74e2-e74f-4877-8130-ad35d31bb992/console.log' append='off'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target type='serial' port='0'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='serial0'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </console>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <input type='tablet' bus='usb'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='input0'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='usb' bus='0' port='1'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </input>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <input type='mouse' bus='ps2'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='input1'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </input>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <input type='keyboard' bus='ps2'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='input2'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </input>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <listen type='address' address='::0'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </graphics>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <audio id='1' type='none'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model type='virtio' heads='1' primary='yes'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='video0'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <watchdog model='itco' action='reset'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='watchdog0'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </watchdog>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <memballoon model='virtio'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <stats period='10'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='balloon0'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <rng model='virtio'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <backend model='random'>/dev/urandom</backend>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <alias name='rng0'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <label>system_u:system_r:svirt_t:s0:c381,c897</label>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c381,c897</imagelabel>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  </seclabel>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <label>+107:+107</label>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <imagelabel>+107:+107</imagelabel>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  </seclabel>
Nov 29 03:14:39 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:14:39 np0005539552 nova_compute[233724]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:39.222 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:85:80 10.10.10.234'], port_security=['fa:16:3e:6b:85:80 10.10.10.234'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.10.10.234/24', 'neutron:device_id': 'c32e74e2-e74f-4877-8130-ad35d31bb992', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-630c713d-8e9c-44d9-9de3-fab9b04bc799', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b5f1f0d72cd0427a8cda48db244caf6c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4d40c246-c885-4bc9-95ef-0d78f2d567f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d7b80ba9-0bb6-48e6-9b10-8894ef6a7f23, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=4d97f024-e964-485a-9511-f23de3e843bd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:39.224 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 4d97f024-e964-485a-9511-f23de3e843bd in datapath 630c713d-8e9c-44d9-9de3-fab9b04bc799 unbound from our chassis
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:39.225 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 630c713d-8e9c-44d9-9de3-fab9b04bc799, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:39.226 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[86b6c726-248a-41ca-a197-300818aa1040]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:39.233 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-630c713d-8e9c-44d9-9de3-fab9b04bc799 namespace which is not needed anymore
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.232 233728 INFO nova.virt.libvirt.driver [None req-125d696b-2924-4c52-af6f-d4d960a92746 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Successfully detached device tap4d97f024-e9 from instance c32e74e2-e74f-4877-8130-ad35d31bb992 from the live domain config.
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.232 233728 DEBUG nova.virt.libvirt.vif [None req-125d696b-2924-4c52-af6f-d4d960a92746 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:13:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=InstanceDeviceMetadata,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1574280677',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1574280677',id=92,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE1erYbCXqwQQYhX/uR9pDNm/1t/pGAfklA44NJhGMIL8E6zP+f6ImuAmBaXR5JpOEpRJertjSkrs1uRBMGT5Sn9Wu4jIvd5fp8AA7ZdhfA39Eye8MjgGqmthA4Ol0v56A==',key_name='tempest-keypair-306133375',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:13:58Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b5f1f0d72cd0427a8cda48db244caf6c',ramdisk_id='',reservation_id='r-1l34dvi1',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-1178715901',owner_user_name='tempest-TaggedAttachmentsTest-1178715901-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:13:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='90573489491c4659ba4a8ccbd6b896a7',uuid=c32e74e2-e74f-4877-8130-ad35d31bb992,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4d97f024-e964-485a-9511-f23de3e843bd", "address": "fa:16:3e:6b:85:80", "network": {"id": "630c713d-8e9c-44d9-9de3-fab9b04bc799", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1368736182", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": 
{"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.234", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5f1f0d72cd0427a8cda48db244caf6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d97f024-e9", "ovs_interfaceid": "4d97f024-e964-485a-9511-f23de3e843bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.233 233728 DEBUG nova.network.os_vif_util [None req-125d696b-2924-4c52-af6f-d4d960a92746 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Converting VIF {"id": "4d97f024-e964-485a-9511-f23de3e843bd", "address": "fa:16:3e:6b:85:80", "network": {"id": "630c713d-8e9c-44d9-9de3-fab9b04bc799", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1368736182", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.234", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5f1f0d72cd0427a8cda48db244caf6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d97f024-e9", "ovs_interfaceid": "4d97f024-e964-485a-9511-f23de3e843bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.233 233728 DEBUG nova.network.os_vif_util [None req-125d696b-2924-4c52-af6f-d4d960a92746 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:85:80,bridge_name='br-int',has_traffic_filtering=True,id=4d97f024-e964-485a-9511-f23de3e843bd,network=Network(630c713d-8e9c-44d9-9de3-fab9b04bc799),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d97f024-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.234 233728 DEBUG os_vif [None req-125d696b-2924-4c52-af6f-d4d960a92746 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:85:80,bridge_name='br-int',has_traffic_filtering=True,id=4d97f024-e964-485a-9511-f23de3e843bd,network=Network(630c713d-8e9c-44d9-9de3-fab9b04bc799),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d97f024-e9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.235 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.235 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d97f024-e9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.238 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.241 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.243 233728 INFO os_vif [None req-125d696b-2924-4c52-af6f-d4d960a92746 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:85:80,bridge_name='br-int',has_traffic_filtering=True,id=4d97f024-e964-485a-9511-f23de3e843bd,network=Network(630c713d-8e9c-44d9-9de3-fab9b04bc799),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d97f024-e9')
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.244 233728 DEBUG nova.virt.libvirt.guest [None req-125d696b-2924-4c52-af6f-d4d960a92746 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <nova:name>tempest-device-tagging-server-1574280677</nova:name>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <nova:creationTime>2025-11-29 08:14:39</nova:creationTime>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <nova:flavor name="m1.nano">
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <nova:memory>128</nova:memory>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <nova:disk>1</nova:disk>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <nova:swap>0</nova:swap>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  </nova:flavor>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <nova:owner>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <nova:user uuid="90573489491c4659ba4a8ccbd6b896a7">tempest-TaggedAttachmentsTest-1178715901-project-member</nova:user>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <nova:project uuid="b5f1f0d72cd0427a8cda48db244caf6c">tempest-TaggedAttachmentsTest-1178715901</nova:project>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  </nova:owner>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <nova:ports>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <nova:port uuid="2dc39626-7aae-4e0c-a70b-e08d83c9788b">
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </nova:port>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  </nova:ports>
Nov 29 03:14:39 np0005539552 nova_compute[233724]: </nova:instance>
Nov 29 03:14:39 np0005539552 nova_compute[233724]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 29 03:14:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:14:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1075511368' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:14:39 np0005539552 neutron-haproxy-ovnmeta-630c713d-8e9c-44d9-9de3-fab9b04bc799[271152]: [NOTICE]   (271156) : haproxy version is 2.8.14-c23fe91
Nov 29 03:14:39 np0005539552 neutron-haproxy-ovnmeta-630c713d-8e9c-44d9-9de3-fab9b04bc799[271152]: [NOTICE]   (271156) : path to executable is /usr/sbin/haproxy
Nov 29 03:14:39 np0005539552 neutron-haproxy-ovnmeta-630c713d-8e9c-44d9-9de3-fab9b04bc799[271152]: [WARNING]  (271156) : Exiting Master process...
Nov 29 03:14:39 np0005539552 neutron-haproxy-ovnmeta-630c713d-8e9c-44d9-9de3-fab9b04bc799[271152]: [WARNING]  (271156) : Exiting Master process...
Nov 29 03:14:39 np0005539552 neutron-haproxy-ovnmeta-630c713d-8e9c-44d9-9de3-fab9b04bc799[271152]: [ALERT]    (271156) : Current worker (271158) exited with code 143 (Terminated)
Nov 29 03:14:39 np0005539552 neutron-haproxy-ovnmeta-630c713d-8e9c-44d9-9de3-fab9b04bc799[271152]: [WARNING]  (271156) : All workers exited. Exiting... (0)
Nov 29 03:14:39 np0005539552 systemd[1]: libpod-08c400b521030112386a476ace7dd7b5770f1b36a35885f9332b583d7339553f.scope: Deactivated successfully.
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.353 233728 DEBUG oslo_concurrency.processutils [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.355 233728 DEBUG nova.virt.libvirt.vif [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:12:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1456117084',display_name='tempest-ServerActionsTestOtherB-server-1456117084',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1456117084',id=85,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEIWQ7Agoaix0SKEJrKHu4bB1Waq8EgVKfKJ/0RzVkl2dpwZ96ym4a4YEld/N4o6ej04XW7IMisQ29oCITVHbKZxjsHowaHjgF+3UGfTUq2pqZm9EZTJqhsQL0kJWzkKow==',key_name='tempest-keypair-319762409',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:14:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='1b8899f76f554afc96bb2441424e5a77',ramdisk_id='',reservation_id='r-h2yqalhl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-477220446',owner_user_name='tempest-ServerActionsTestOtherB-477220446-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:14:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5e3ade3963d47be97b545b2e3779b6b',uuid=19e85fae-c57e-409b-95f7-b53ddb4c928e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "address": "fa:16:3e:de:2f:2f", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8b38a34-82", "ovs_interfaceid": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.355 233728 DEBUG nova.network.os_vif_util [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converting VIF {"id": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "address": "fa:16:3e:de:2f:2f", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8b38a34-82", "ovs_interfaceid": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.356 233728 DEBUG nova.network.os_vif_util [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:2f:2f,bridge_name='br-int',has_traffic_filtering=True,id=d8b38a34-8274-43e4-8ebd-3924de5c5ba7,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8b38a34-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.357 233728 DEBUG nova.objects.instance [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'pci_devices' on Instance uuid 19e85fae-c57e-409b-95f7-b53ddb4c928e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:14:39 np0005539552 podman[271336]: 2025-11-29 08:14:39.360887329 +0000 UTC m=+0.046390777 container died 08c400b521030112386a476ace7dd7b5770f1b36a35885f9332b583d7339553f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-630c713d-8e9c-44d9-9de3-fab9b04bc799, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.372 233728 DEBUG nova.virt.libvirt.driver [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <uuid>19e85fae-c57e-409b-95f7-b53ddb4c928e</uuid>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <name>instance-00000055</name>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <memory>196608</memory>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <nova:name>tempest-ServerActionsTestOtherB-server-1456117084</nova:name>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:14:38</nova:creationTime>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.micro">
Nov 29 03:14:39 np0005539552 nova_compute[233724]:        <nova:memory>192</nova:memory>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:        <nova:user uuid="c5e3ade3963d47be97b545b2e3779b6b">tempest-ServerActionsTestOtherB-477220446-project-member</nova:user>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:        <nova:project uuid="1b8899f76f554afc96bb2441424e5a77">tempest-ServerActionsTestOtherB-477220446</nova:project>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:        <nova:port uuid="d8b38a34-8274-43e4-8ebd-3924de5c5ba7">
Nov 29 03:14:39 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <entry name="serial">19e85fae-c57e-409b-95f7-b53ddb4c928e</entry>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <entry name="uuid">19e85fae-c57e-409b-95f7-b53ddb4c928e</entry>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/19e85fae-c57e-409b-95f7-b53ddb4c928e_disk">
Nov 29 03:14:39 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:14:39 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/19e85fae-c57e-409b-95f7-b53ddb4c928e_disk.config">
Nov 29 03:14:39 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:14:39 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:de:2f:2f"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <target dev="tapd8b38a34-82"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/19e85fae-c57e-409b-95f7-b53ddb4c928e/console.log" append="off"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <input type="keyboard" bus="usb"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:14:39 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:14:39 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:14:39 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:14:39 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.373 233728 DEBUG nova.virt.libvirt.driver [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.374 233728 DEBUG nova.virt.libvirt.driver [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.375 233728 DEBUG nova.virt.libvirt.vif [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:12:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1456117084',display_name='tempest-ServerActionsTestOtherB-server-1456117084',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1456117084',id=85,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEIWQ7Agoaix0SKEJrKHu4bB1Waq8EgVKfKJ/0RzVkl2dpwZ96ym4a4YEld/N4o6ej04XW7IMisQ29oCITVHbKZxjsHowaHjgF+3UGfTUq2pqZm9EZTJqhsQL0kJWzkKow==',key_name='tempest-keypair-319762409',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:14:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='1b8899f76f554afc96bb2441424e5a77',ramdisk_id='',reservation_id='r-h2yqalhl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-477220446',owner_user_name='tempest-ServerActionsTestOtherB-477220446-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:14:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5e3ade3963d47be97b545b2e3779b6b',uuid=19e85fae-c57e-409b-95f7-b53ddb4c928e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "address": "fa:16:3e:de:2f:2f", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8b38a34-82", "ovs_interfaceid": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.375 233728 DEBUG nova.network.os_vif_util [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converting VIF {"id": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "address": "fa:16:3e:de:2f:2f", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8b38a34-82", "ovs_interfaceid": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.376 233728 DEBUG nova.network.os_vif_util [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:2f:2f,bridge_name='br-int',has_traffic_filtering=True,id=d8b38a34-8274-43e4-8ebd-3924de5c5ba7,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8b38a34-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.376 233728 DEBUG os_vif [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:2f:2f,bridge_name='br-int',has_traffic_filtering=True,id=d8b38a34-8274-43e4-8ebd-3924de5c5ba7,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8b38a34-82') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.377 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.377 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.377 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.380 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.380 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd8b38a34-82, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.381 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd8b38a34-82, col_values=(('external_ids', {'iface-id': 'd8b38a34-8274-43e4-8ebd-3924de5c5ba7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:de:2f:2f', 'vm-uuid': '19e85fae-c57e-409b-95f7-b53ddb4c928e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:14:39 np0005539552 NetworkManager[48926]: <info>  [1764404079.3830] manager: (tapd8b38a34-82): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/168)
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.382 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.386 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.388 233728 INFO os_vif [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:2f:2f,bridge_name='br-int',has_traffic_filtering=True,id=d8b38a34-8274-43e4-8ebd-3924de5c5ba7,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8b38a34-82')
Nov 29 03:14:39 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-08c400b521030112386a476ace7dd7b5770f1b36a35885f9332b583d7339553f-userdata-shm.mount: Deactivated successfully.
Nov 29 03:14:39 np0005539552 systemd[1]: var-lib-containers-storage-overlay-7064b57ff2beef19a21a05b072b7db42be44bb2108761323c0f787cdec90522e-merged.mount: Deactivated successfully.
Nov 29 03:14:39 np0005539552 podman[271336]: 2025-11-29 08:14:39.407997745 +0000 UTC m=+0.093501193 container cleanup 08c400b521030112386a476ace7dd7b5770f1b36a35885f9332b583d7339553f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-630c713d-8e9c-44d9-9de3-fab9b04bc799, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 03:14:39 np0005539552 systemd[1]: libpod-conmon-08c400b521030112386a476ace7dd7b5770f1b36a35885f9332b583d7339553f.scope: Deactivated successfully.
Nov 29 03:14:39 np0005539552 kernel: tapd8b38a34-82: entered promiscuous mode
Nov 29 03:14:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:14:39Z|00319|binding|INFO|Claiming lport d8b38a34-8274-43e4-8ebd-3924de5c5ba7 for this chassis.
Nov 29 03:14:39 np0005539552 NetworkManager[48926]: <info>  [1764404079.4489] manager: (tapd8b38a34-82): new Tun device (/org/freedesktop/NetworkManager/Devices/169)
Nov 29 03:14:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:14:39Z|00320|binding|INFO|d8b38a34-8274-43e4-8ebd-3924de5c5ba7: Claiming fa:16:3e:de:2f:2f 10.100.0.6
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.448 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:14:39 np0005539552 systemd-udevd[271317]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:39.457 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:2f:2f 10.100.0.6'], port_security=['fa:16:3e:de:2f:2f 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '19e85fae-c57e-409b-95f7-b53ddb4c928e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1b8899f76f554afc96bb2441424e5a77', 'neutron:revision_number': '6', 'neutron:security_group_ids': '8e7cfeb6-8d91-4d68-8970-f480a7e0a619', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.246'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0af49baf-9694-4485-99a0-1529dc778e83, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=d8b38a34-8274-43e4-8ebd-3924de5c5ba7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:14:39 np0005539552 NetworkManager[48926]: <info>  [1764404079.4621] device (tapd8b38a34-82): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:14:39 np0005539552 NetworkManager[48926]: <info>  [1764404079.4637] device (tapd8b38a34-82): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:14:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:14:39Z|00321|binding|INFO|Setting lport d8b38a34-8274-43e4-8ebd-3924de5c5ba7 ovn-installed in OVS
Nov 29 03:14:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:14:39Z|00322|binding|INFO|Setting lport d8b38a34-8274-43e4-8ebd-3924de5c5ba7 up in Southbound
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.469 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.473 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:14:39 np0005539552 systemd-machined[196379]: New machine qemu-36-instance-00000055.
Nov 29 03:14:39 np0005539552 systemd[1]: Started Virtual Machine qemu-36-instance-00000055.
Nov 29 03:14:39 np0005539552 podman[271372]: 2025-11-29 08:14:39.488487197 +0000 UTC m=+0.057952408 container remove 08c400b521030112386a476ace7dd7b5770f1b36a35885f9332b583d7339553f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-630c713d-8e9c-44d9-9de3-fab9b04bc799, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:39.493 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[db86536e-0086-45aa-8926-69bcc971aab8]: (4, ('Sat Nov 29 08:14:39 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-630c713d-8e9c-44d9-9de3-fab9b04bc799 (08c400b521030112386a476ace7dd7b5770f1b36a35885f9332b583d7339553f)\n08c400b521030112386a476ace7dd7b5770f1b36a35885f9332b583d7339553f\nSat Nov 29 08:14:39 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-630c713d-8e9c-44d9-9de3-fab9b04bc799 (08c400b521030112386a476ace7dd7b5770f1b36a35885f9332b583d7339553f)\n08c400b521030112386a476ace7dd7b5770f1b36a35885f9332b583d7339553f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:39.495 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0001f4cd-0821-49e0-9938-afdc67f7bf5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:39.496 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap630c713d-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:14:39 np0005539552 kernel: tap630c713d-80: left promiscuous mode
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.497 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.510 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:39.513 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4f09af1f-ed41-43c6-b46c-e06bdd9de890]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:39.527 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d59e339c-6ee1-44bc-b05a-10347587b9a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:39.529 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e44db724-571f-4692-8ed8-4fbe27a9079d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:39.542 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[32f81689-17d1-4a5a-b352-d76d96b148dd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 711631, 'reachable_time': 33738, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271401, 'error': None, 'target': 'ovnmeta-630c713d-8e9c-44d9-9de3-fab9b04bc799', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:39 np0005539552 systemd[1]: run-netns-ovnmeta\x2d630c713d\x2d8e9c\x2d44d9\x2d9de3\x2dfab9b04bc799.mount: Deactivated successfully.
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:39.547 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-630c713d-8e9c-44d9-9de3-fab9b04bc799 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:39.547 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[d31c9238-2c1d-449e-9914-a1c729c33d01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:39.548 143400 INFO neutron.agent.ovn.metadata.agent [-] Port d8b38a34-8274-43e4-8ebd-3924de5c5ba7 in datapath 2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 unbound from our chassis
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:39.549 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:39.560 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3aee5b92-3bd4-497a-99da-afc4f92620b3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:39.561 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2b704d3a-d1 in ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:39.563 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2b704d3a-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:39.563 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[7c39909f-f1cf-47da-90f0-66131594435d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:39.564 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d750745f-8374-479a-8412-a62cef6de4eb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:39.575 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[604b074f-cf6c-42dc-9fcc-2968e126b1a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:39.597 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[82c81f9c-6a77-45d2-8e16-97566210876b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:39.637 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[cb9ff8db-cfeb-44bc-8a25-5c6ff3b49784]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:14:39 np0005539552 NetworkManager[48926]: <info>  [1764404079.6445] manager: (tap2b704d3a-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/170)
Nov 29 03:14:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:14:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:39.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:39.644 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[75806a10-aa8e-4ad9-9f67-81c480ecd217]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.671 233728 DEBUG nova.compute.manager [req-47acdb35-cfd2-4df7-a4ae-fe7862e3db31 req-7aea5a68-b5cb-47eb-a481-5217d234ddb1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Received event network-vif-plugged-d8b38a34-8274-43e4-8ebd-3924de5c5ba7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.671 233728 DEBUG oslo_concurrency.lockutils [req-47acdb35-cfd2-4df7-a4ae-fe7862e3db31 req-7aea5a68-b5cb-47eb-a481-5217d234ddb1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "19e85fae-c57e-409b-95f7-b53ddb4c928e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.671 233728 DEBUG oslo_concurrency.lockutils [req-47acdb35-cfd2-4df7-a4ae-fe7862e3db31 req-7aea5a68-b5cb-47eb-a481-5217d234ddb1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "19e85fae-c57e-409b-95f7-b53ddb4c928e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.672 233728 DEBUG oslo_concurrency.lockutils [req-47acdb35-cfd2-4df7-a4ae-fe7862e3db31 req-7aea5a68-b5cb-47eb-a481-5217d234ddb1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "19e85fae-c57e-409b-95f7-b53ddb4c928e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.672 233728 DEBUG nova.compute.manager [req-47acdb35-cfd2-4df7-a4ae-fe7862e3db31 req-7aea5a68-b5cb-47eb-a481-5217d234ddb1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] No waiting events found dispatching network-vif-plugged-d8b38a34-8274-43e4-8ebd-3924de5c5ba7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.672 233728 WARNING nova.compute.manager [req-47acdb35-cfd2-4df7-a4ae-fe7862e3db31 req-7aea5a68-b5cb-47eb-a481-5217d234ddb1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Received unexpected event network-vif-plugged-d8b38a34-8274-43e4-8ebd-3924de5c5ba7 for instance with vm_state stopped and task_state powering-on.
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:39.678 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[41c68844-5e69-44a3-b2e8-1577493349c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.679 233728 DEBUG nova.compute.manager [req-1e21f61e-90bb-4532-b568-cc546babcff9 req-a1d7ef34-b5df-43ae-945d-0af49c763b00 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Received event network-vif-unplugged-4d97f024-e964-485a-9511-f23de3e843bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.679 233728 DEBUG oslo_concurrency.lockutils [req-1e21f61e-90bb-4532-b568-cc546babcff9 req-a1d7ef34-b5df-43ae-945d-0af49c763b00 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "c32e74e2-e74f-4877-8130-ad35d31bb992-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.679 233728 DEBUG oslo_concurrency.lockutils [req-1e21f61e-90bb-4532-b568-cc546babcff9 req-a1d7ef34-b5df-43ae-945d-0af49c763b00 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c32e74e2-e74f-4877-8130-ad35d31bb992-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.680 233728 DEBUG oslo_concurrency.lockutils [req-1e21f61e-90bb-4532-b568-cc546babcff9 req-a1d7ef34-b5df-43ae-945d-0af49c763b00 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c32e74e2-e74f-4877-8130-ad35d31bb992-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.680 233728 DEBUG nova.compute.manager [req-1e21f61e-90bb-4532-b568-cc546babcff9 req-a1d7ef34-b5df-43ae-945d-0af49c763b00 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] No waiting events found dispatching network-vif-unplugged-4d97f024-e964-485a-9511-f23de3e843bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.680 233728 WARNING nova.compute.manager [req-1e21f61e-90bb-4532-b568-cc546babcff9 req-a1d7ef34-b5df-43ae-945d-0af49c763b00 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Received unexpected event network-vif-unplugged-4d97f024-e964-485a-9511-f23de3e843bd for instance with vm_state active and task_state None.
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:39.685 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[10fa19a1-0cbc-4f51-9af4-26ed4605bd86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:14:39 np0005539552 NetworkManager[48926]: <info>  [1764404079.7052] device (tap2b704d3a-d0): carrier: link connected
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:39.710 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[471395f0-ba53-4baf-91ba-f736f080f46c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:39.726 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[bc40c43b-b903-4ed9-9d92-d82848dc1981]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b704d3a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:d7:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 101], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712539, 'reachable_time': 18101, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271428, 'error': None, 'target': 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:39.742 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[609a2d7f-75ee-47da-a3fd-8fd8c48562c0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed2:d799'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 712539, 'tstamp': 712539}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271429, 'error': None, 'target': 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:39.758 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c438dd38-70e1-4fc7-aa28-a8aad9b77afa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b704d3a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:d7:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 101], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712539, 'reachable_time': 18101, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271430, 'error': None, 'target': 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:39.786 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6337d248-e9b5-4f9c-941e-ef75afa7db9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:39.846 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2a8d2812-1e43-4e67-bb78-a90801f1b52c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:39.847 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b704d3a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:39.847 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:39.847 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b704d3a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.849 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:14:39 np0005539552 kernel: tap2b704d3a-d0: entered promiscuous mode
Nov 29 03:14:39 np0005539552 NetworkManager[48926]: <info>  [1764404079.8505] manager: (tap2b704d3a-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/171)
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:39.856 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2b704d3a-d0, col_values=(('external_ids', {'iface-id': '299ca1be-be1b-47d9-8865-4316d34012e3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.858 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:14:39Z|00323|binding|INFO|Releasing lport 299ca1be-be1b-47d9-8865-4316d34012e3 from this chassis (sb_readonly=0)
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.859 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:39.859 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:39.860 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d0818498-f5e9-4662-898a-f6ae6ce5b7fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:39.861 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06.pid.haproxy
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:14:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:39.861 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'env', 'PROCESS_TAG=haproxy-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:14:39 np0005539552 nova_compute[233724]: 2025-11-29 08:14:39.872 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:40 np0005539552 nova_compute[233724]: 2025-11-29 08:14:40.133 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404080.133375, 19e85fae-c57e-409b-95f7-b53ddb4c928e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:14:40 np0005539552 nova_compute[233724]: 2025-11-29 08:14:40.134 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:14:40 np0005539552 nova_compute[233724]: 2025-11-29 08:14:40.136 233728 DEBUG nova.compute.manager [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:14:40 np0005539552 nova_compute[233724]: 2025-11-29 08:14:40.140 233728 INFO nova.virt.libvirt.driver [-] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Instance rebooted successfully.#033[00m
Nov 29 03:14:40 np0005539552 nova_compute[233724]: 2025-11-29 08:14:40.140 233728 DEBUG nova.compute.manager [None req-dd9fe3cc-022c-4a62-a7c1-ba1e45dd5e06 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:14:40 np0005539552 nova_compute[233724]: 2025-11-29 08:14:40.171 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:14:40 np0005539552 nova_compute[233724]: 2025-11-29 08:14:40.174 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:14:40 np0005539552 nova_compute[233724]: 2025-11-29 08:14:40.228 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] During sync_power_state the instance has a pending task (powering-on). Skip.#033[00m
Nov 29 03:14:40 np0005539552 nova_compute[233724]: 2025-11-29 08:14:40.228 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404080.135563, 19e85fae-c57e-409b-95f7-b53ddb4c928e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:14:40 np0005539552 nova_compute[233724]: 2025-11-29 08:14:40.228 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] VM Started (Lifecycle Event)#033[00m
Nov 29 03:14:40 np0005539552 podman[271503]: 2025-11-29 08:14:40.242270579 +0000 UTC m=+0.045863123 container create 3353e2738588e699080ef268859d19c914b1a99fdc076fd3d188afcbe17d4f44 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:14:40 np0005539552 nova_compute[233724]: 2025-11-29 08:14:40.252 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:14:40 np0005539552 nova_compute[233724]: 2025-11-29 08:14:40.254 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:14:40 np0005539552 systemd[1]: Started libpod-conmon-3353e2738588e699080ef268859d19c914b1a99fdc076fd3d188afcbe17d4f44.scope.
Nov 29 03:14:40 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:14:40 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6267c8dcc00f4655e248ad112e53f442c0def2dccec3ec757607d6f5e136925c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:14:40 np0005539552 podman[271503]: 2025-11-29 08:14:40.218199452 +0000 UTC m=+0.021792016 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:14:40 np0005539552 podman[271503]: 2025-11-29 08:14:40.321374914 +0000 UTC m=+0.124967478 container init 3353e2738588e699080ef268859d19c914b1a99fdc076fd3d188afcbe17d4f44 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:14:40 np0005539552 podman[271503]: 2025-11-29 08:14:40.328092825 +0000 UTC m=+0.131685369 container start 3353e2738588e699080ef268859d19c914b1a99fdc076fd3d188afcbe17d4f44 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 03:14:40 np0005539552 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[271518]: [NOTICE]   (271522) : New worker (271524) forked
Nov 29 03:14:40 np0005539552 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[271518]: [NOTICE]   (271522) : Loading success.
Nov 29 03:14:40 np0005539552 nova_compute[233724]: 2025-11-29 08:14:40.385 233728 DEBUG oslo_concurrency.lockutils [None req-125d696b-2924-4c52-af6f-d4d960a92746 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Acquiring lock "refresh_cache-c32e74e2-e74f-4877-8130-ad35d31bb992" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:14:40 np0005539552 nova_compute[233724]: 2025-11-29 08:14:40.385 233728 DEBUG oslo_concurrency.lockutils [None req-125d696b-2924-4c52-af6f-d4d960a92746 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Acquired lock "refresh_cache-c32e74e2-e74f-4877-8130-ad35d31bb992" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:14:40 np0005539552 nova_compute[233724]: 2025-11-29 08:14:40.385 233728 DEBUG nova.network.neutron [None req-125d696b-2924-4c52-af6f-d4d960a92746 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:14:40 np0005539552 nova_compute[233724]: 2025-11-29 08:14:40.988 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:41.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:41.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:41 np0005539552 nova_compute[233724]: 2025-11-29 08:14:41.708 233728 DEBUG nova.compute.manager [req-2aab3676-f4f9-463d-ae79-37921e29753f req-1d9f74d7-0d35-4635-98d6-83b6646660f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Received event network-vif-deleted-4d97f024-e964-485a-9511-f23de3e843bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:41 np0005539552 nova_compute[233724]: 2025-11-29 08:14:41.709 233728 INFO nova.compute.manager [req-2aab3676-f4f9-463d-ae79-37921e29753f req-1d9f74d7-0d35-4635-98d6-83b6646660f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Neutron deleted interface 4d97f024-e964-485a-9511-f23de3e843bd; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 03:14:41 np0005539552 nova_compute[233724]: 2025-11-29 08:14:41.709 233728 DEBUG nova.network.neutron [req-2aab3676-f4f9-463d-ae79-37921e29753f req-1d9f74d7-0d35-4635-98d6-83b6646660f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Updating instance_info_cache with network_info: [{"id": "2dc39626-7aae-4e0c-a70b-e08d83c9788b", "address": "fa:16:3e:91:96:6a", "network": {"id": "88cc8f67-0d68-413a-b508-63fae18f1c0c", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-374579760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5f1f0d72cd0427a8cda48db244caf6c", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2dc39626-7a", "ovs_interfaceid": "2dc39626-7aae-4e0c-a70b-e08d83c9788b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:14:41 np0005539552 nova_compute[233724]: 2025-11-29 08:14:41.735 233728 DEBUG nova.objects.instance [req-2aab3676-f4f9-463d-ae79-37921e29753f req-1d9f74d7-0d35-4635-98d6-83b6646660f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lazy-loading 'system_metadata' on Instance uuid c32e74e2-e74f-4877-8130-ad35d31bb992 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:41 np0005539552 nova_compute[233724]: 2025-11-29 08:14:41.760 233728 DEBUG nova.objects.instance [req-2aab3676-f4f9-463d-ae79-37921e29753f req-1d9f74d7-0d35-4635-98d6-83b6646660f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lazy-loading 'flavor' on Instance uuid c32e74e2-e74f-4877-8130-ad35d31bb992 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:41 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e294 e294: 3 total, 3 up, 3 in
Nov 29 03:14:41 np0005539552 nova_compute[233724]: 2025-11-29 08:14:41.792 233728 DEBUG nova.virt.libvirt.vif [req-2aab3676-f4f9-463d-ae79-37921e29753f req-1d9f74d7-0d35-4635-98d6-83b6646660f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:13:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1574280677',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1574280677',id=92,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE1erYbCXqwQQYhX/uR9pDNm/1t/pGAfklA44NJhGMIL8E6zP+f6ImuAmBaXR5JpOEpRJertjSkrs1uRBMGT5Sn9Wu4jIvd5fp8AA7ZdhfA39Eye8MjgGqmthA4Ol0v56A==',key_name='tempest-keypair-306133375',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:13:58Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b5f1f0d72cd0427a8cda48db244caf6c',ramdisk_id='',reservation_id='r-1l34dvi1',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-1178715901',owner_user_name='tempest-TaggedAttachmentsTest-1178715901-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:13:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='90573489491c4659ba4a8ccbd6b896a7',uuid=c32e74e2-e74f-4877-8130-ad35d31bb992,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4d97f024-e964-485a-9511-f23de3e843bd", "address": "fa:16:3e:6b:85:80", "network": {"id": "630c713d-8e9c-44d9-9de3-fab9b04bc799", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1368736182", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": 
"10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.234", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5f1f0d72cd0427a8cda48db244caf6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d97f024-e9", "ovs_interfaceid": "4d97f024-e964-485a-9511-f23de3e843bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:14:41 np0005539552 nova_compute[233724]: 2025-11-29 08:14:41.792 233728 DEBUG nova.network.os_vif_util [req-2aab3676-f4f9-463d-ae79-37921e29753f req-1d9f74d7-0d35-4635-98d6-83b6646660f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Converting VIF {"id": "4d97f024-e964-485a-9511-f23de3e843bd", "address": "fa:16:3e:6b:85:80", "network": {"id": "630c713d-8e9c-44d9-9de3-fab9b04bc799", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1368736182", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.234", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5f1f0d72cd0427a8cda48db244caf6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d97f024-e9", "ovs_interfaceid": "4d97f024-e964-485a-9511-f23de3e843bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:14:41 np0005539552 nova_compute[233724]: 2025-11-29 08:14:41.793 233728 DEBUG nova.network.os_vif_util [req-2aab3676-f4f9-463d-ae79-37921e29753f req-1d9f74d7-0d35-4635-98d6-83b6646660f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:85:80,bridge_name='br-int',has_traffic_filtering=True,id=4d97f024-e964-485a-9511-f23de3e843bd,network=Network(630c713d-8e9c-44d9-9de3-fab9b04bc799),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d97f024-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:14:41 np0005539552 nova_compute[233724]: 2025-11-29 08:14:41.796 233728 DEBUG nova.virt.libvirt.guest [req-2aab3676-f4f9-463d-ae79-37921e29753f req-1d9f74d7-0d35-4635-98d6-83b6646660f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:6b:85:80"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4d97f024-e9"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 29 03:14:41 np0005539552 nova_compute[233724]: 2025-11-29 08:14:41.800 233728 DEBUG nova.virt.libvirt.guest [req-2aab3676-f4f9-463d-ae79-37921e29753f req-1d9f74d7-0d35-4635-98d6-83b6646660f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:6b:85:80"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4d97f024-e9"/></interface>not found in domain: <domain type='kvm' id='34'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <name>instance-0000005c</name>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <uuid>c32e74e2-e74f-4877-8130-ad35d31bb992</uuid>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <nova:name>tempest-device-tagging-server-1574280677</nova:name>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <nova:creationTime>2025-11-29 08:14:39</nova:creationTime>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <nova:flavor name="m1.nano">
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <nova:memory>128</nova:memory>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <nova:disk>1</nova:disk>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <nova:swap>0</nova:swap>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  </nova:flavor>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <nova:owner>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <nova:user uuid="90573489491c4659ba4a8ccbd6b896a7">tempest-TaggedAttachmentsTest-1178715901-project-member</nova:user>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <nova:project uuid="b5f1f0d72cd0427a8cda48db244caf6c">tempest-TaggedAttachmentsTest-1178715901</nova:project>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  </nova:owner>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <nova:ports>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <nova:port uuid="2dc39626-7aae-4e0c-a70b-e08d83c9788b">
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </nova:port>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  </nova:ports>
Nov 29 03:14:41 np0005539552 nova_compute[233724]: </nova:instance>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <memory unit='KiB'>131072</memory>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <vcpu placement='static'>1</vcpu>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <resource>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <partition>/machine</partition>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  </resource>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <sysinfo type='smbios'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <entry name='manufacturer'>RDO</entry>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <entry name='product'>OpenStack Compute</entry>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <entry name='serial'>c32e74e2-e74f-4877-8130-ad35d31bb992</entry>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <entry name='uuid'>c32e74e2-e74f-4877-8130-ad35d31bb992</entry>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <entry name='family'>Virtual Machine</entry>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <boot dev='hd'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <smbios mode='sysinfo'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <vmcoreinfo state='on'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <cpu mode='custom' match='exact' check='full'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <model fallback='forbid'>Nehalem</model>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <feature policy='require' name='x2apic'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <feature policy='require' name='hypervisor'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <feature policy='require' name='vme'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <clock offset='utc'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <timer name='pit' tickpolicy='delay'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <timer name='hpet' present='no'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <on_poweroff>destroy</on_poweroff>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <on_reboot>restart</on_reboot>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <on_crash>destroy</on_crash>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <disk type='network' device='disk'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <auth username='openstack'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:        <secret type='ceph' uuid='b66774a7-56d9-5535-bd8c-681234404870'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <source protocol='rbd' name='vms/c32e74e2-e74f-4877-8130-ad35d31bb992_disk' index='2'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target dev='vda' bus='virtio'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='virtio-disk0'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <disk type='network' device='cdrom'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <auth username='openstack'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:        <secret type='ceph' uuid='b66774a7-56d9-5535-bd8c-681234404870'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <source protocol='rbd' name='vms/c32e74e2-e74f-4877-8130-ad35d31bb992_disk.config' index='1'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target dev='sda' bus='sata'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <readonly/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='sata0-0-0'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='0' model='pcie-root'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pcie.0'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='1' port='0x10'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.1'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='2' port='0x11'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.2'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='3' port='0x12'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.3'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='4' port='0x13'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.4'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='5' port='0x14'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.5'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='6' port='0x15'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.6'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='7' port='0x16'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.7'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='8' port='0x17'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.8'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='9' port='0x18'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.9'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='10' port='0x19'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.10'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='11' port='0x1a'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.11'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='12' port='0x1b'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.12'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='13' port='0x1c'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.13'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='14' port='0x1d'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.14'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='15' port='0x1e'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.15'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='16' port='0x1f'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.16'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='17' port='0x20'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.17'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='18' port='0x21'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.18'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='19' port='0x22'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.19'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='20' port='0x23'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.20'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='21' port='0x24'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.21'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='22' port='0x25'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.22'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='23' port='0x26'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.23'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='24' port='0x27'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.24'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='25' port='0x28'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.25'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-pci-bridge'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.26'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='usb'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='sata' index='0'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='ide'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <interface type='ethernet'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <mac address='fa:16:3e:91:96:6a'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target dev='tap2dc39626-7a'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model type='virtio'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <mtu size='1442'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='net0'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <serial type='pty'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <source path='/dev/pts/1'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <log file='/var/lib/nova/instances/c32e74e2-e74f-4877-8130-ad35d31bb992/console.log' append='off'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target type='isa-serial' port='0'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:        <model name='isa-serial'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      </target>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='serial0'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <console type='pty' tty='/dev/pts/1'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <source path='/dev/pts/1'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <log file='/var/lib/nova/instances/c32e74e2-e74f-4877-8130-ad35d31bb992/console.log' append='off'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target type='serial' port='0'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='serial0'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </console>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <input type='tablet' bus='usb'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='input0'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='usb' bus='0' port='1'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </input>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <input type='mouse' bus='ps2'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='input1'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </input>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <input type='keyboard' bus='ps2'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='input2'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </input>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <listen type='address' address='::0'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </graphics>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <audio id='1' type='none'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model type='virtio' heads='1' primary='yes'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='video0'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <watchdog model='itco' action='reset'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='watchdog0'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </watchdog>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <memballoon model='virtio'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <stats period='10'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='balloon0'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <rng model='virtio'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <backend model='random'>/dev/urandom</backend>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='rng0'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <label>system_u:system_r:svirt_t:s0:c381,c897</label>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c381,c897</imagelabel>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  </seclabel>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <label>+107:+107</label>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <imagelabel>+107:+107</imagelabel>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  </seclabel>
Nov 29 03:14:41 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:14:41 np0005539552 nova_compute[233724]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 29 03:14:41 np0005539552 nova_compute[233724]: 2025-11-29 08:14:41.802 233728 DEBUG nova.virt.libvirt.guest [req-2aab3676-f4f9-463d-ae79-37921e29753f req-1d9f74d7-0d35-4635-98d6-83b6646660f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:6b:85:80"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4d97f024-e9"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 29 03:14:41 np0005539552 nova_compute[233724]: 2025-11-29 08:14:41.810 233728 DEBUG nova.virt.libvirt.guest [req-2aab3676-f4f9-463d-ae79-37921e29753f req-1d9f74d7-0d35-4635-98d6-83b6646660f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:6b:85:80"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4d97f024-e9"/></interface>not found in domain: <domain type='kvm' id='34'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <name>instance-0000005c</name>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <uuid>c32e74e2-e74f-4877-8130-ad35d31bb992</uuid>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <nova:name>tempest-device-tagging-server-1574280677</nova:name>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <nova:creationTime>2025-11-29 08:14:39</nova:creationTime>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <nova:flavor name="m1.nano">
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <nova:memory>128</nova:memory>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <nova:disk>1</nova:disk>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <nova:swap>0</nova:swap>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  </nova:flavor>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <nova:owner>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <nova:user uuid="90573489491c4659ba4a8ccbd6b896a7">tempest-TaggedAttachmentsTest-1178715901-project-member</nova:user>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <nova:project uuid="b5f1f0d72cd0427a8cda48db244caf6c">tempest-TaggedAttachmentsTest-1178715901</nova:project>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  </nova:owner>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <nova:ports>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <nova:port uuid="2dc39626-7aae-4e0c-a70b-e08d83c9788b">
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </nova:port>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  </nova:ports>
Nov 29 03:14:41 np0005539552 nova_compute[233724]: </nova:instance>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <memory unit='KiB'>131072</memory>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <vcpu placement='static'>1</vcpu>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <resource>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <partition>/machine</partition>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  </resource>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <sysinfo type='smbios'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <entry name='manufacturer'>RDO</entry>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <entry name='product'>OpenStack Compute</entry>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <entry name='serial'>c32e74e2-e74f-4877-8130-ad35d31bb992</entry>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <entry name='uuid'>c32e74e2-e74f-4877-8130-ad35d31bb992</entry>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <entry name='family'>Virtual Machine</entry>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <boot dev='hd'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <smbios mode='sysinfo'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <vmcoreinfo state='on'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <cpu mode='custom' match='exact' check='full'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <model fallback='forbid'>Nehalem</model>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <feature policy='require' name='x2apic'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <feature policy='require' name='hypervisor'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <feature policy='require' name='vme'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <clock offset='utc'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <timer name='pit' tickpolicy='delay'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <timer name='hpet' present='no'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <on_poweroff>destroy</on_poweroff>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <on_reboot>restart</on_reboot>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <on_crash>destroy</on_crash>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <disk type='network' device='disk'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <auth username='openstack'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:        <secret type='ceph' uuid='b66774a7-56d9-5535-bd8c-681234404870'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <source protocol='rbd' name='vms/c32e74e2-e74f-4877-8130-ad35d31bb992_disk' index='2'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target dev='vda' bus='virtio'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='virtio-disk0'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <disk type='network' device='cdrom'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <auth username='openstack'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:        <secret type='ceph' uuid='b66774a7-56d9-5535-bd8c-681234404870'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <source protocol='rbd' name='vms/c32e74e2-e74f-4877-8130-ad35d31bb992_disk.config' index='1'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target dev='sda' bus='sata'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <readonly/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='sata0-0-0'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='0' model='pcie-root'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pcie.0'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='1' port='0x10'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.1'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='2' port='0x11'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.2'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='3' port='0x12'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.3'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='4' port='0x13'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.4'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='5' port='0x14'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.5'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='6' port='0x15'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.6'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='7' port='0x16'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.7'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='8' port='0x17'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.8'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='9' port='0x18'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.9'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='10' port='0x19'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.10'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='11' port='0x1a'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.11'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='12' port='0x1b'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.12'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='13' port='0x1c'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.13'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='14' port='0x1d'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.14'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='15' port='0x1e'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.15'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='16' port='0x1f'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.16'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='17' port='0x20'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.17'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='18' port='0x21'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.18'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='19' port='0x22'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.19'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='20' port='0x23'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.20'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='21' port='0x24'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.21'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='22' port='0x25'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.22'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='23' port='0x26'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.23'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='24' port='0x27'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.24'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-root-port'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target chassis='25' port='0x28'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.25'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model name='pcie-pci-bridge'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='pci.26'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='usb'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <controller type='sata' index='0'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='ide'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </controller>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <interface type='ethernet'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <mac address='fa:16:3e:91:96:6a'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target dev='tap2dc39626-7a'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model type='virtio'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <mtu size='1442'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='net0'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <serial type='pty'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <source path='/dev/pts/1'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <log file='/var/lib/nova/instances/c32e74e2-e74f-4877-8130-ad35d31bb992/console.log' append='off'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target type='isa-serial' port='0'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:        <model name='isa-serial'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      </target>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='serial0'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <console type='pty' tty='/dev/pts/1'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <source path='/dev/pts/1'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <log file='/var/lib/nova/instances/c32e74e2-e74f-4877-8130-ad35d31bb992/console.log' append='off'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <target type='serial' port='0'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='serial0'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </console>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <input type='tablet' bus='usb'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='input0'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='usb' bus='0' port='1'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </input>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <input type='mouse' bus='ps2'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='input1'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </input>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <input type='keyboard' bus='ps2'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='input2'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </input>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <listen type='address' address='::0'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </graphics>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <audio id='1' type='none'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <model type='virtio' heads='1' primary='yes'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='video0'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <watchdog model='itco' action='reset'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='watchdog0'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </watchdog>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <memballoon model='virtio'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <stats period='10'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='balloon0'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <rng model='virtio'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <backend model='random'>/dev/urandom</backend>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <alias name='rng0'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <label>system_u:system_r:svirt_t:s0:c381,c897</label>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c381,c897</imagelabel>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  </seclabel>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <label>+107:+107</label>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <imagelabel>+107:+107</imagelabel>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  </seclabel>
Nov 29 03:14:41 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:14:41 np0005539552 nova_compute[233724]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 29 03:14:41 np0005539552 nova_compute[233724]: 2025-11-29 08:14:41.812 233728 WARNING nova.virt.libvirt.driver [req-2aab3676-f4f9-463d-ae79-37921e29753f req-1d9f74d7-0d35-4635-98d6-83b6646660f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Detaching interface fa:16:3e:6b:85:80 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap4d97f024-e9' not found.#033[00m
Nov 29 03:14:41 np0005539552 nova_compute[233724]: 2025-11-29 08:14:41.813 233728 DEBUG nova.virt.libvirt.vif [req-2aab3676-f4f9-463d-ae79-37921e29753f req-1d9f74d7-0d35-4635-98d6-83b6646660f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:13:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1574280677',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1574280677',id=92,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE1erYbCXqwQQYhX/uR9pDNm/1t/pGAfklA44NJhGMIL8E6zP+f6ImuAmBaXR5JpOEpRJertjSkrs1uRBMGT5Sn9Wu4jIvd5fp8AA7ZdhfA39Eye8MjgGqmthA4Ol0v56A==',key_name='tempest-keypair-306133375',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:13:58Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b5f1f0d72cd0427a8cda48db244caf6c',ramdisk_id='',reservation_id='r-1l34dvi1',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-1178715901',owner_user_name='tempest-TaggedAttachmentsTest-1178715901-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:13:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='90573489491c4659ba4a8ccbd6b896a7',uuid=c32e74e2-e74f-4877-8130-ad35d31bb992,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4d97f024-e964-485a-9511-f23de3e843bd", "address": "fa:16:3e:6b:85:80", "network": {"id": "630c713d-8e9c-44d9-9de3-fab9b04bc799", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1368736182", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": 
"10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.234", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5f1f0d72cd0427a8cda48db244caf6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d97f024-e9", "ovs_interfaceid": "4d97f024-e964-485a-9511-f23de3e843bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:14:41 np0005539552 nova_compute[233724]: 2025-11-29 08:14:41.813 233728 DEBUG nova.network.os_vif_util [req-2aab3676-f4f9-463d-ae79-37921e29753f req-1d9f74d7-0d35-4635-98d6-83b6646660f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Converting VIF {"id": "4d97f024-e964-485a-9511-f23de3e843bd", "address": "fa:16:3e:6b:85:80", "network": {"id": "630c713d-8e9c-44d9-9de3-fab9b04bc799", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1368736182", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.234", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5f1f0d72cd0427a8cda48db244caf6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d97f024-e9", "ovs_interfaceid": "4d97f024-e964-485a-9511-f23de3e843bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:14:41 np0005539552 nova_compute[233724]: 2025-11-29 08:14:41.814 233728 DEBUG nova.network.os_vif_util [req-2aab3676-f4f9-463d-ae79-37921e29753f req-1d9f74d7-0d35-4635-98d6-83b6646660f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:85:80,bridge_name='br-int',has_traffic_filtering=True,id=4d97f024-e964-485a-9511-f23de3e843bd,network=Network(630c713d-8e9c-44d9-9de3-fab9b04bc799),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d97f024-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:14:41 np0005539552 nova_compute[233724]: 2025-11-29 08:14:41.814 233728 DEBUG os_vif [req-2aab3676-f4f9-463d-ae79-37921e29753f req-1d9f74d7-0d35-4635-98d6-83b6646660f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:85:80,bridge_name='br-int',has_traffic_filtering=True,id=4d97f024-e964-485a-9511-f23de3e843bd,network=Network(630c713d-8e9c-44d9-9de3-fab9b04bc799),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d97f024-e9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:14:41 np0005539552 nova_compute[233724]: 2025-11-29 08:14:41.816 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:41 np0005539552 nova_compute[233724]: 2025-11-29 08:14:41.816 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d97f024-e9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:41 np0005539552 nova_compute[233724]: 2025-11-29 08:14:41.816 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:14:41 np0005539552 nova_compute[233724]: 2025-11-29 08:14:41.818 233728 INFO os_vif [req-2aab3676-f4f9-463d-ae79-37921e29753f req-1d9f74d7-0d35-4635-98d6-83b6646660f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:85:80,bridge_name='br-int',has_traffic_filtering=True,id=4d97f024-e964-485a-9511-f23de3e843bd,network=Network(630c713d-8e9c-44d9-9de3-fab9b04bc799),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d97f024-e9')#033[00m
Nov 29 03:14:41 np0005539552 nova_compute[233724]: 2025-11-29 08:14:41.819 233728 DEBUG nova.virt.libvirt.guest [req-2aab3676-f4f9-463d-ae79-37921e29753f req-1d9f74d7-0d35-4635-98d6-83b6646660f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <nova:name>tempest-device-tagging-server-1574280677</nova:name>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <nova:creationTime>2025-11-29 08:14:41</nova:creationTime>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <nova:flavor name="m1.nano">
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <nova:memory>128</nova:memory>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <nova:disk>1</nova:disk>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <nova:swap>0</nova:swap>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  </nova:flavor>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <nova:owner>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <nova:user uuid="90573489491c4659ba4a8ccbd6b896a7">tempest-TaggedAttachmentsTest-1178715901-project-member</nova:user>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <nova:project uuid="b5f1f0d72cd0427a8cda48db244caf6c">tempest-TaggedAttachmentsTest-1178715901</nova:project>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  </nova:owner>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  <nova:ports>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    <nova:port uuid="2dc39626-7aae-4e0c-a70b-e08d83c9788b">
Nov 29 03:14:41 np0005539552 nova_compute[233724]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:    </nova:port>
Nov 29 03:14:41 np0005539552 nova_compute[233724]:  </nova:ports>
Nov 29 03:14:41 np0005539552 nova_compute[233724]: </nova:instance>
Nov 29 03:14:41 np0005539552 nova_compute[233724]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 29 03:14:41 np0005539552 nova_compute[233724]: 2025-11-29 08:14:41.827 233728 DEBUG nova.compute.manager [req-9cb72dac-deaf-4765-9718-d3e67e00ddff req-884d95ad-0129-468b-98ac-b480bbd75829 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Received event network-vif-plugged-4d97f024-e964-485a-9511-f23de3e843bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:41 np0005539552 nova_compute[233724]: 2025-11-29 08:14:41.828 233728 DEBUG oslo_concurrency.lockutils [req-9cb72dac-deaf-4765-9718-d3e67e00ddff req-884d95ad-0129-468b-98ac-b480bbd75829 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "c32e74e2-e74f-4877-8130-ad35d31bb992-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:41 np0005539552 nova_compute[233724]: 2025-11-29 08:14:41.828 233728 DEBUG oslo_concurrency.lockutils [req-9cb72dac-deaf-4765-9718-d3e67e00ddff req-884d95ad-0129-468b-98ac-b480bbd75829 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c32e74e2-e74f-4877-8130-ad35d31bb992-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:41 np0005539552 nova_compute[233724]: 2025-11-29 08:14:41.828 233728 DEBUG oslo_concurrency.lockutils [req-9cb72dac-deaf-4765-9718-d3e67e00ddff req-884d95ad-0129-468b-98ac-b480bbd75829 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c32e74e2-e74f-4877-8130-ad35d31bb992-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:41 np0005539552 nova_compute[233724]: 2025-11-29 08:14:41.828 233728 DEBUG nova.compute.manager [req-9cb72dac-deaf-4765-9718-d3e67e00ddff req-884d95ad-0129-468b-98ac-b480bbd75829 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] No waiting events found dispatching network-vif-plugged-4d97f024-e964-485a-9511-f23de3e843bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:14:41 np0005539552 nova_compute[233724]: 2025-11-29 08:14:41.829 233728 WARNING nova.compute.manager [req-9cb72dac-deaf-4765-9718-d3e67e00ddff req-884d95ad-0129-468b-98ac-b480bbd75829 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Received unexpected event network-vif-plugged-4d97f024-e964-485a-9511-f23de3e843bd for instance with vm_state active and task_state None.#033[00m
Nov 29 03:14:41 np0005539552 nova_compute[233724]: 2025-11-29 08:14:41.840 233728 DEBUG nova.compute.manager [req-a87284cc-98fc-4c4d-a690-5014635326ba req-aa4253bb-d4ae-4a83-a831-30debe1e7ba3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Received event network-vif-plugged-d8b38a34-8274-43e4-8ebd-3924de5c5ba7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:41 np0005539552 nova_compute[233724]: 2025-11-29 08:14:41.840 233728 DEBUG oslo_concurrency.lockutils [req-a87284cc-98fc-4c4d-a690-5014635326ba req-aa4253bb-d4ae-4a83-a831-30debe1e7ba3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "19e85fae-c57e-409b-95f7-b53ddb4c928e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:41 np0005539552 nova_compute[233724]: 2025-11-29 08:14:41.840 233728 DEBUG oslo_concurrency.lockutils [req-a87284cc-98fc-4c4d-a690-5014635326ba req-aa4253bb-d4ae-4a83-a831-30debe1e7ba3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "19e85fae-c57e-409b-95f7-b53ddb4c928e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:41 np0005539552 nova_compute[233724]: 2025-11-29 08:14:41.841 233728 DEBUG oslo_concurrency.lockutils [req-a87284cc-98fc-4c4d-a690-5014635326ba req-aa4253bb-d4ae-4a83-a831-30debe1e7ba3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "19e85fae-c57e-409b-95f7-b53ddb4c928e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:41 np0005539552 nova_compute[233724]: 2025-11-29 08:14:41.841 233728 DEBUG nova.compute.manager [req-a87284cc-98fc-4c4d-a690-5014635326ba req-aa4253bb-d4ae-4a83-a831-30debe1e7ba3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] No waiting events found dispatching network-vif-plugged-d8b38a34-8274-43e4-8ebd-3924de5c5ba7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:14:41 np0005539552 nova_compute[233724]: 2025-11-29 08:14:41.841 233728 WARNING nova.compute.manager [req-a87284cc-98fc-4c4d-a690-5014635326ba req-aa4253bb-d4ae-4a83-a831-30debe1e7ba3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Received unexpected event network-vif-plugged-d8b38a34-8274-43e4-8ebd-3924de5c5ba7 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:14:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:42 np0005539552 nova_compute[233724]: 2025-11-29 08:14:42.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:42 np0005539552 nova_compute[233724]: 2025-11-29 08:14:42.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 03:14:42 np0005539552 nova_compute[233724]: 2025-11-29 08:14:42.949 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 03:14:43 np0005539552 nova_compute[233724]: 2025-11-29 08:14:43.111 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:43.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:43 np0005539552 nova_compute[233724]: 2025-11-29 08:14:43.274 233728 DEBUG oslo_concurrency.lockutils [None req-3a87bb33-4b31-498a-84c6-8a2e8785f5c7 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "19e85fae-c57e-409b-95f7-b53ddb4c928e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:43 np0005539552 nova_compute[233724]: 2025-11-29 08:14:43.274 233728 DEBUG oslo_concurrency.lockutils [None req-3a87bb33-4b31-498a-84c6-8a2e8785f5c7 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "19e85fae-c57e-409b-95f7-b53ddb4c928e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:43 np0005539552 nova_compute[233724]: 2025-11-29 08:14:43.275 233728 DEBUG oslo_concurrency.lockutils [None req-3a87bb33-4b31-498a-84c6-8a2e8785f5c7 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "19e85fae-c57e-409b-95f7-b53ddb4c928e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:43 np0005539552 nova_compute[233724]: 2025-11-29 08:14:43.275 233728 DEBUG oslo_concurrency.lockutils [None req-3a87bb33-4b31-498a-84c6-8a2e8785f5c7 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "19e85fae-c57e-409b-95f7-b53ddb4c928e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:43 np0005539552 nova_compute[233724]: 2025-11-29 08:14:43.275 233728 DEBUG oslo_concurrency.lockutils [None req-3a87bb33-4b31-498a-84c6-8a2e8785f5c7 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "19e85fae-c57e-409b-95f7-b53ddb4c928e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:43 np0005539552 nova_compute[233724]: 2025-11-29 08:14:43.276 233728 INFO nova.compute.manager [None req-3a87bb33-4b31-498a-84c6-8a2e8785f5c7 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Terminating instance#033[00m
Nov 29 03:14:43 np0005539552 nova_compute[233724]: 2025-11-29 08:14:43.277 233728 DEBUG nova.compute.manager [None req-3a87bb33-4b31-498a-84c6-8a2e8785f5c7 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:14:43 np0005539552 kernel: tapd8b38a34-82 (unregistering): left promiscuous mode
Nov 29 03:14:43 np0005539552 NetworkManager[48926]: <info>  [1764404083.3138] device (tapd8b38a34-82): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:14:43 np0005539552 ovn_controller[133798]: 2025-11-29T08:14:43Z|00324|binding|INFO|Releasing lport d8b38a34-8274-43e4-8ebd-3924de5c5ba7 from this chassis (sb_readonly=0)
Nov 29 03:14:43 np0005539552 ovn_controller[133798]: 2025-11-29T08:14:43Z|00325|binding|INFO|Setting lport d8b38a34-8274-43e4-8ebd-3924de5c5ba7 down in Southbound
Nov 29 03:14:43 np0005539552 ovn_controller[133798]: 2025-11-29T08:14:43Z|00326|binding|INFO|Removing iface tapd8b38a34-82 ovn-installed in OVS
Nov 29 03:14:43 np0005539552 nova_compute[233724]: 2025-11-29 08:14:43.317 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:43 np0005539552 nova_compute[233724]: 2025-11-29 08:14:43.319 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:43.324 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:2f:2f 10.100.0.6'], port_security=['fa:16:3e:de:2f:2f 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '19e85fae-c57e-409b-95f7-b53ddb4c928e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1b8899f76f554afc96bb2441424e5a77', 'neutron:revision_number': '8', 'neutron:security_group_ids': '8e7cfeb6-8d91-4d68-8970-f480a7e0a619', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.246', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0af49baf-9694-4485-99a0-1529dc778e83, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=d8b38a34-8274-43e4-8ebd-3924de5c5ba7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:14:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:43.326 143400 INFO neutron.agent.ovn.metadata.agent [-] Port d8b38a34-8274-43e4-8ebd-3924de5c5ba7 in datapath 2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 unbound from our chassis#033[00m
Nov 29 03:14:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:43.327 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:14:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:43.328 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[39822225-22eb-4980-8d77-e34aefe14ada]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:43.328 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 namespace which is not needed anymore#033[00m
Nov 29 03:14:43 np0005539552 nova_compute[233724]: 2025-11-29 08:14:43.372 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:43 np0005539552 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000055.scope: Deactivated successfully.
Nov 29 03:14:43 np0005539552 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000055.scope: Consumed 3.953s CPU time.
Nov 29 03:14:43 np0005539552 systemd-machined[196379]: Machine qemu-36-instance-00000055 terminated.
Nov 29 03:14:43 np0005539552 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[271518]: [NOTICE]   (271522) : haproxy version is 2.8.14-c23fe91
Nov 29 03:14:43 np0005539552 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[271518]: [NOTICE]   (271522) : path to executable is /usr/sbin/haproxy
Nov 29 03:14:43 np0005539552 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[271518]: [WARNING]  (271522) : Exiting Master process...
Nov 29 03:14:43 np0005539552 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[271518]: [ALERT]    (271522) : Current worker (271524) exited with code 143 (Terminated)
Nov 29 03:14:43 np0005539552 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[271518]: [WARNING]  (271522) : All workers exited. Exiting... (0)
Nov 29 03:14:43 np0005539552 systemd[1]: libpod-3353e2738588e699080ef268859d19c914b1a99fdc076fd3d188afcbe17d4f44.scope: Deactivated successfully.
Nov 29 03:14:43 np0005539552 conmon[271518]: conmon 3353e2738588e699080e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3353e2738588e699080ef268859d19c914b1a99fdc076fd3d188afcbe17d4f44.scope/container/memory.events
Nov 29 03:14:43 np0005539552 podman[271559]: 2025-11-29 08:14:43.448932631 +0000 UTC m=+0.039582885 container died 3353e2738588e699080ef268859d19c914b1a99fdc076fd3d188afcbe17d4f44 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:14:43 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3353e2738588e699080ef268859d19c914b1a99fdc076fd3d188afcbe17d4f44-userdata-shm.mount: Deactivated successfully.
Nov 29 03:14:43 np0005539552 systemd[1]: var-lib-containers-storage-overlay-6267c8dcc00f4655e248ad112e53f442c0def2dccec3ec757607d6f5e136925c-merged.mount: Deactivated successfully.
Nov 29 03:14:43 np0005539552 podman[271559]: 2025-11-29 08:14:43.489392328 +0000 UTC m=+0.080042582 container cleanup 3353e2738588e699080ef268859d19c914b1a99fdc076fd3d188afcbe17d4f44 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 03:14:43 np0005539552 nova_compute[233724]: 2025-11-29 08:14:43.500 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:43 np0005539552 systemd[1]: libpod-conmon-3353e2738588e699080ef268859d19c914b1a99fdc076fd3d188afcbe17d4f44.scope: Deactivated successfully.
Nov 29 03:14:43 np0005539552 nova_compute[233724]: 2025-11-29 08:14:43.508 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:43 np0005539552 nova_compute[233724]: 2025-11-29 08:14:43.513 233728 INFO nova.virt.libvirt.driver [-] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Instance destroyed successfully.#033[00m
Nov 29 03:14:43 np0005539552 nova_compute[233724]: 2025-11-29 08:14:43.513 233728 DEBUG nova.objects.instance [None req-3a87bb33-4b31-498a-84c6-8a2e8785f5c7 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'resources' on Instance uuid 19e85fae-c57e-409b-95f7-b53ddb4c928e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:43 np0005539552 nova_compute[233724]: 2025-11-29 08:14:43.531 233728 DEBUG nova.virt.libvirt.vif [None req-3a87bb33-4b31-498a-84c6-8a2e8785f5c7 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:12:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1456117084',display_name='tempest-ServerActionsTestOtherB-server-1456117084',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1456117084',id=85,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEIWQ7Agoaix0SKEJrKHu4bB1Waq8EgVKfKJ/0RzVkl2dpwZ96ym4a4YEld/N4o6ej04XW7IMisQ29oCITVHbKZxjsHowaHjgF+3UGfTUq2pqZm9EZTJqhsQL0kJWzkKow==',key_name='tempest-keypair-319762409',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:14:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1b8899f76f554afc96bb2441424e5a77',ramdisk_id='',reservation_id='r-h2yqalhl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-477220446',owner_user_name='tempest-ServerActionsTestOtherB-477220446-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:14:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5e3ade3963d47be97b545b2e3779b6b',uuid=19e85fae-c57e-409b-95f7-b53ddb4c928e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "address": "fa:16:3e:de:2f:2f", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8b38a34-82", "ovs_interfaceid": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:14:43 np0005539552 nova_compute[233724]: 2025-11-29 08:14:43.531 233728 DEBUG nova.network.os_vif_util [None req-3a87bb33-4b31-498a-84c6-8a2e8785f5c7 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converting VIF {"id": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "address": "fa:16:3e:de:2f:2f", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8b38a34-82", "ovs_interfaceid": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:14:43 np0005539552 nova_compute[233724]: 2025-11-29 08:14:43.532 233728 DEBUG nova.network.os_vif_util [None req-3a87bb33-4b31-498a-84c6-8a2e8785f5c7 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:2f:2f,bridge_name='br-int',has_traffic_filtering=True,id=d8b38a34-8274-43e4-8ebd-3924de5c5ba7,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8b38a34-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:14:43 np0005539552 nova_compute[233724]: 2025-11-29 08:14:43.532 233728 DEBUG os_vif [None req-3a87bb33-4b31-498a-84c6-8a2e8785f5c7 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:2f:2f,bridge_name='br-int',has_traffic_filtering=True,id=d8b38a34-8274-43e4-8ebd-3924de5c5ba7,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8b38a34-82') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:14:43 np0005539552 nova_compute[233724]: 2025-11-29 08:14:43.533 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:43 np0005539552 nova_compute[233724]: 2025-11-29 08:14:43.533 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd8b38a34-82, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:43 np0005539552 nova_compute[233724]: 2025-11-29 08:14:43.535 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:43 np0005539552 nova_compute[233724]: 2025-11-29 08:14:43.536 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:43 np0005539552 nova_compute[233724]: 2025-11-29 08:14:43.539 233728 INFO os_vif [None req-3a87bb33-4b31-498a-84c6-8a2e8785f5c7 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:2f:2f,bridge_name='br-int',has_traffic_filtering=True,id=d8b38a34-8274-43e4-8ebd-3924de5c5ba7,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8b38a34-82')#033[00m
Nov 29 03:14:43 np0005539552 podman[271593]: 2025-11-29 08:14:43.566435248 +0000 UTC m=+0.046107190 container remove 3353e2738588e699080ef268859d19c914b1a99fdc076fd3d188afcbe17d4f44 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:14:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:43.572 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[00c95e4f-95f5-4f79-b77e-85611bdc1322]: (4, ('Sat Nov 29 08:14:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 (3353e2738588e699080ef268859d19c914b1a99fdc076fd3d188afcbe17d4f44)\n3353e2738588e699080ef268859d19c914b1a99fdc076fd3d188afcbe17d4f44\nSat Nov 29 08:14:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 (3353e2738588e699080ef268859d19c914b1a99fdc076fd3d188afcbe17d4f44)\n3353e2738588e699080ef268859d19c914b1a99fdc076fd3d188afcbe17d4f44\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:43.575 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[fa9a581c-c6ce-4b9a-a23c-f693ee98bd2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:43.577 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b704d3a-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:43 np0005539552 kernel: tap2b704d3a-d0: left promiscuous mode
Nov 29 03:14:43 np0005539552 nova_compute[233724]: 2025-11-29 08:14:43.579 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:43 np0005539552 nova_compute[233724]: 2025-11-29 08:14:43.595 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:43.598 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ec568fa4-307a-4398-8d68-d2b5b4fc14ab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:43.613 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[397f2ec4-870f-440d-8210-c8089b9bcf5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:43.614 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[8ffb4516-2609-4205-9dd0-79c408e2eb41]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:43.629 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[41ecbcc4-8f5a-4728-9af9-aa6762087157]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712532, 'reachable_time': 30622, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271633, 'error': None, 'target': 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:43.632 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:14:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:43.632 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[9faef618-1626-4e35-8b0c-4ad7f224dcc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:43 np0005539552 systemd[1]: run-netns-ovnmeta\x2d2b704d3a\x2dd3e4\x2d47ce\x2d8a28\x2d10a6f4e6fd06.mount: Deactivated successfully.
Nov 29 03:14:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:43.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:43 np0005539552 nova_compute[233724]: 2025-11-29 08:14:43.730 233728 INFO nova.network.neutron [None req-125d696b-2924-4c52-af6f-d4d960a92746 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Port 4d97f024-e964-485a-9511-f23de3e843bd from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Nov 29 03:14:43 np0005539552 nova_compute[233724]: 2025-11-29 08:14:43.731 233728 DEBUG nova.network.neutron [None req-125d696b-2924-4c52-af6f-d4d960a92746 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Updating instance_info_cache with network_info: [{"id": "2dc39626-7aae-4e0c-a70b-e08d83c9788b", "address": "fa:16:3e:91:96:6a", "network": {"id": "88cc8f67-0d68-413a-b508-63fae18f1c0c", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-374579760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5f1f0d72cd0427a8cda48db244caf6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2dc39626-7a", "ovs_interfaceid": "2dc39626-7aae-4e0c-a70b-e08d83c9788b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:14:43 np0005539552 nova_compute[233724]: 2025-11-29 08:14:43.759 233728 DEBUG oslo_concurrency.lockutils [None req-125d696b-2924-4c52-af6f-d4d960a92746 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Releasing lock "refresh_cache-c32e74e2-e74f-4877-8130-ad35d31bb992" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:14:43 np0005539552 nova_compute[233724]: 2025-11-29 08:14:43.798 233728 DEBUG oslo_concurrency.lockutils [None req-125d696b-2924-4c52-af6f-d4d960a92746 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Lock "interface-c32e74e2-e74f-4877-8130-ad35d31bb992-4d97f024-e964-485a-9511-f23de3e843bd" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 4.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:43 np0005539552 nova_compute[233724]: 2025-11-29 08:14:43.930 233728 INFO nova.virt.libvirt.driver [None req-3a87bb33-4b31-498a-84c6-8a2e8785f5c7 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Deleting instance files /var/lib/nova/instances/19e85fae-c57e-409b-95f7-b53ddb4c928e_del#033[00m
Nov 29 03:14:43 np0005539552 nova_compute[233724]: 2025-11-29 08:14:43.931 233728 INFO nova.virt.libvirt.driver [None req-3a87bb33-4b31-498a-84c6-8a2e8785f5c7 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Deletion of /var/lib/nova/instances/19e85fae-c57e-409b-95f7-b53ddb4c928e_del complete#033[00m
Nov 29 03:14:43 np0005539552 nova_compute[233724]: 2025-11-29 08:14:43.990 233728 INFO nova.compute.manager [None req-3a87bb33-4b31-498a-84c6-8a2e8785f5c7 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Took 0.71 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:14:43 np0005539552 nova_compute[233724]: 2025-11-29 08:14:43.991 233728 DEBUG oslo.service.loopingcall [None req-3a87bb33-4b31-498a-84c6-8a2e8785f5c7 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:14:43 np0005539552 nova_compute[233724]: 2025-11-29 08:14:43.991 233728 DEBUG nova.compute.manager [-] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:14:43 np0005539552 nova_compute[233724]: 2025-11-29 08:14:43.991 233728 DEBUG nova.network.neutron [-] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:14:44 np0005539552 nova_compute[233724]: 2025-11-29 08:14:44.812 233728 DEBUG oslo_concurrency.lockutils [None req-d5714181-393f-4f99-b5c1-6b417a915903 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Acquiring lock "c32e74e2-e74f-4877-8130-ad35d31bb992" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:44 np0005539552 nova_compute[233724]: 2025-11-29 08:14:44.813 233728 DEBUG oslo_concurrency.lockutils [None req-d5714181-393f-4f99-b5c1-6b417a915903 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Lock "c32e74e2-e74f-4877-8130-ad35d31bb992" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:44 np0005539552 nova_compute[233724]: 2025-11-29 08:14:44.813 233728 DEBUG oslo_concurrency.lockutils [None req-d5714181-393f-4f99-b5c1-6b417a915903 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Acquiring lock "c32e74e2-e74f-4877-8130-ad35d31bb992-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:44 np0005539552 nova_compute[233724]: 2025-11-29 08:14:44.813 233728 DEBUG oslo_concurrency.lockutils [None req-d5714181-393f-4f99-b5c1-6b417a915903 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Lock "c32e74e2-e74f-4877-8130-ad35d31bb992-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:44 np0005539552 nova_compute[233724]: 2025-11-29 08:14:44.813 233728 DEBUG oslo_concurrency.lockutils [None req-d5714181-393f-4f99-b5c1-6b417a915903 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Lock "c32e74e2-e74f-4877-8130-ad35d31bb992-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:44 np0005539552 nova_compute[233724]: 2025-11-29 08:14:44.815 233728 INFO nova.compute.manager [None req-d5714181-393f-4f99-b5c1-6b417a915903 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Terminating instance#033[00m
Nov 29 03:14:44 np0005539552 nova_compute[233724]: 2025-11-29 08:14:44.816 233728 DEBUG nova.compute.manager [None req-d5714181-393f-4f99-b5c1-6b417a915903 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:14:44 np0005539552 nova_compute[233724]: 2025-11-29 08:14:44.861 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:44 np0005539552 kernel: tap2dc39626-7a (unregistering): left promiscuous mode
Nov 29 03:14:44 np0005539552 NetworkManager[48926]: <info>  [1764404084.8785] device (tap2dc39626-7a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:14:44 np0005539552 ovn_controller[133798]: 2025-11-29T08:14:44Z|00327|binding|INFO|Releasing lport 2dc39626-7aae-4e0c-a70b-e08d83c9788b from this chassis (sb_readonly=0)
Nov 29 03:14:44 np0005539552 ovn_controller[133798]: 2025-11-29T08:14:44Z|00328|binding|INFO|Setting lport 2dc39626-7aae-4e0c-a70b-e08d83c9788b down in Southbound
Nov 29 03:14:44 np0005539552 nova_compute[233724]: 2025-11-29 08:14:44.883 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:44 np0005539552 ovn_controller[133798]: 2025-11-29T08:14:44Z|00329|binding|INFO|Removing iface tap2dc39626-7a ovn-installed in OVS
Nov 29 03:14:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:44.889 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:91:96:6a 10.100.0.12'], port_security=['fa:16:3e:91:96:6a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'c32e74e2-e74f-4877-8130-ad35d31bb992', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-88cc8f67-0d68-413a-b508-63fae18f1c0c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b5f1f0d72cd0427a8cda48db244caf6c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '748545ab-9e1c-469f-b9ff-83c86c5e92e9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.240'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bb550e5a-4fea-4245-9311-17a32f690e26, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=2dc39626-7aae-4e0c-a70b-e08d83c9788b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:14:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:44.890 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 2dc39626-7aae-4e0c-a70b-e08d83c9788b in datapath 88cc8f67-0d68-413a-b508-63fae18f1c0c unbound from our chassis#033[00m
Nov 29 03:14:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:44.891 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 88cc8f67-0d68-413a-b508-63fae18f1c0c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:14:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:44.892 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[9d3c56c7-a026-4c3d-81dc-94181cdfaa58]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:44.892 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-88cc8f67-0d68-413a-b508-63fae18f1c0c namespace which is not needed anymore#033[00m
Nov 29 03:14:44 np0005539552 nova_compute[233724]: 2025-11-29 08:14:44.910 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:44 np0005539552 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000005c.scope: Deactivated successfully.
Nov 29 03:14:44 np0005539552 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000005c.scope: Consumed 15.989s CPU time.
Nov 29 03:14:44 np0005539552 systemd-machined[196379]: Machine qemu-34-instance-0000005c terminated.
Nov 29 03:14:45 np0005539552 neutron-haproxy-ovnmeta-88cc8f67-0d68-413a-b508-63fae18f1c0c[270319]: [NOTICE]   (270352) : haproxy version is 2.8.14-c23fe91
Nov 29 03:14:45 np0005539552 neutron-haproxy-ovnmeta-88cc8f67-0d68-413a-b508-63fae18f1c0c[270319]: [NOTICE]   (270352) : path to executable is /usr/sbin/haproxy
Nov 29 03:14:45 np0005539552 neutron-haproxy-ovnmeta-88cc8f67-0d68-413a-b508-63fae18f1c0c[270319]: [WARNING]  (270352) : Exiting Master process...
Nov 29 03:14:45 np0005539552 neutron-haproxy-ovnmeta-88cc8f67-0d68-413a-b508-63fae18f1c0c[270319]: [ALERT]    (270352) : Current worker (270354) exited with code 143 (Terminated)
Nov 29 03:14:45 np0005539552 neutron-haproxy-ovnmeta-88cc8f67-0d68-413a-b508-63fae18f1c0c[270319]: [WARNING]  (270352) : All workers exited. Exiting... (0)
Nov 29 03:14:45 np0005539552 systemd[1]: libpod-b7d57b4094194c2d19dc0cf787e6676050471a5a8f8a1be05aedf73cff689e5b.scope: Deactivated successfully.
Nov 29 03:14:45 np0005539552 podman[271658]: 2025-11-29 08:14:45.029867455 +0000 UTC m=+0.048688419 container died b7d57b4094194c2d19dc0cf787e6676050471a5a8f8a1be05aedf73cff689e5b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88cc8f67-0d68-413a-b508-63fae18f1c0c, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:14:45 np0005539552 nova_compute[233724]: 2025-11-29 08:14:45.049 233728 INFO nova.virt.libvirt.driver [-] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Instance destroyed successfully.#033[00m
Nov 29 03:14:45 np0005539552 nova_compute[233724]: 2025-11-29 08:14:45.050 233728 DEBUG nova.objects.instance [None req-d5714181-393f-4f99-b5c1-6b417a915903 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Lazy-loading 'resources' on Instance uuid c32e74e2-e74f-4877-8130-ad35d31bb992 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:45 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b7d57b4094194c2d19dc0cf787e6676050471a5a8f8a1be05aedf73cff689e5b-userdata-shm.mount: Deactivated successfully.
Nov 29 03:14:45 np0005539552 systemd[1]: var-lib-containers-storage-overlay-f40401ac65f88f73496c54958b7c9ab5eb1bed665149534e9f63c76fea2cdf48-merged.mount: Deactivated successfully.
Nov 29 03:14:45 np0005539552 nova_compute[233724]: 2025-11-29 08:14:45.061 233728 DEBUG nova.compute.manager [req-5f643c0e-3e1f-44c9-9c6e-69ae9b7e3005 req-67f065de-4c24-44c7-af1d-ab3c93a5de30 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Received event network-vif-unplugged-2dc39626-7aae-4e0c-a70b-e08d83c9788b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:45 np0005539552 nova_compute[233724]: 2025-11-29 08:14:45.062 233728 DEBUG oslo_concurrency.lockutils [req-5f643c0e-3e1f-44c9-9c6e-69ae9b7e3005 req-67f065de-4c24-44c7-af1d-ab3c93a5de30 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "c32e74e2-e74f-4877-8130-ad35d31bb992-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:45 np0005539552 nova_compute[233724]: 2025-11-29 08:14:45.063 233728 DEBUG oslo_concurrency.lockutils [req-5f643c0e-3e1f-44c9-9c6e-69ae9b7e3005 req-67f065de-4c24-44c7-af1d-ab3c93a5de30 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c32e74e2-e74f-4877-8130-ad35d31bb992-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:45 np0005539552 nova_compute[233724]: 2025-11-29 08:14:45.063 233728 DEBUG oslo_concurrency.lockutils [req-5f643c0e-3e1f-44c9-9c6e-69ae9b7e3005 req-67f065de-4c24-44c7-af1d-ab3c93a5de30 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c32e74e2-e74f-4877-8130-ad35d31bb992-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:45 np0005539552 nova_compute[233724]: 2025-11-29 08:14:45.063 233728 DEBUG nova.compute.manager [req-5f643c0e-3e1f-44c9-9c6e-69ae9b7e3005 req-67f065de-4c24-44c7-af1d-ab3c93a5de30 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] No waiting events found dispatching network-vif-unplugged-2dc39626-7aae-4e0c-a70b-e08d83c9788b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:14:45 np0005539552 nova_compute[233724]: 2025-11-29 08:14:45.063 233728 DEBUG nova.compute.manager [req-5f643c0e-3e1f-44c9-9c6e-69ae9b7e3005 req-67f065de-4c24-44c7-af1d-ab3c93a5de30 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Received event network-vif-unplugged-2dc39626-7aae-4e0c-a70b-e08d83c9788b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:14:45 np0005539552 nova_compute[233724]: 2025-11-29 08:14:45.065 233728 DEBUG nova.virt.libvirt.vif [None req-d5714181-393f-4f99-b5c1-6b417a915903 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:13:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1574280677',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1574280677',id=92,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE1erYbCXqwQQYhX/uR9pDNm/1t/pGAfklA44NJhGMIL8E6zP+f6ImuAmBaXR5JpOEpRJertjSkrs1uRBMGT5Sn9Wu4jIvd5fp8AA7ZdhfA39Eye8MjgGqmthA4Ol0v56A==',key_name='tempest-keypair-306133375',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:13:58Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b5f1f0d72cd0427a8cda48db244caf6c',ramdisk_id='',reservation_id='r-1l34dvi1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input
_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-1178715901',owner_user_name='tempest-TaggedAttachmentsTest-1178715901-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:13:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='90573489491c4659ba4a8ccbd6b896a7',uuid=c32e74e2-e74f-4877-8130-ad35d31bb992,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2dc39626-7aae-4e0c-a70b-e08d83c9788b", "address": "fa:16:3e:91:96:6a", "network": {"id": "88cc8f67-0d68-413a-b508-63fae18f1c0c", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-374579760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5f1f0d72cd0427a8cda48db244caf6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2dc39626-7a", "ovs_interfaceid": "2dc39626-7aae-4e0c-a70b-e08d83c9788b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:14:45 np0005539552 nova_compute[233724]: 2025-11-29 08:14:45.066 233728 DEBUG nova.network.os_vif_util [None req-d5714181-393f-4f99-b5c1-6b417a915903 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Converting VIF {"id": "2dc39626-7aae-4e0c-a70b-e08d83c9788b", "address": "fa:16:3e:91:96:6a", "network": {"id": "88cc8f67-0d68-413a-b508-63fae18f1c0c", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-374579760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5f1f0d72cd0427a8cda48db244caf6c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2dc39626-7a", "ovs_interfaceid": "2dc39626-7aae-4e0c-a70b-e08d83c9788b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:14:45 np0005539552 nova_compute[233724]: 2025-11-29 08:14:45.066 233728 DEBUG nova.network.os_vif_util [None req-d5714181-393f-4f99-b5c1-6b417a915903 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:91:96:6a,bridge_name='br-int',has_traffic_filtering=True,id=2dc39626-7aae-4e0c-a70b-e08d83c9788b,network=Network(88cc8f67-0d68-413a-b508-63fae18f1c0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2dc39626-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:14:45 np0005539552 nova_compute[233724]: 2025-11-29 08:14:45.067 233728 DEBUG os_vif [None req-d5714181-393f-4f99-b5c1-6b417a915903 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:91:96:6a,bridge_name='br-int',has_traffic_filtering=True,id=2dc39626-7aae-4e0c-a70b-e08d83c9788b,network=Network(88cc8f67-0d68-413a-b508-63fae18f1c0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2dc39626-7a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:14:45 np0005539552 nova_compute[233724]: 2025-11-29 08:14:45.068 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:45 np0005539552 nova_compute[233724]: 2025-11-29 08:14:45.068 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2dc39626-7a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:45 np0005539552 nova_compute[233724]: 2025-11-29 08:14:45.069 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:45 np0005539552 nova_compute[233724]: 2025-11-29 08:14:45.071 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:14:45 np0005539552 nova_compute[233724]: 2025-11-29 08:14:45.072 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:45 np0005539552 podman[271658]: 2025-11-29 08:14:45.07248727 +0000 UTC m=+0.091308264 container cleanup b7d57b4094194c2d19dc0cf787e6676050471a5a8f8a1be05aedf73cff689e5b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88cc8f67-0d68-413a-b508-63fae18f1c0c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0)
Nov 29 03:14:45 np0005539552 nova_compute[233724]: 2025-11-29 08:14:45.073 233728 INFO os_vif [None req-d5714181-393f-4f99-b5c1-6b417a915903 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:91:96:6a,bridge_name='br-int',has_traffic_filtering=True,id=2dc39626-7aae-4e0c-a70b-e08d83c9788b,network=Network(88cc8f67-0d68-413a-b508-63fae18f1c0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2dc39626-7a')#033[00m
Nov 29 03:14:45 np0005539552 systemd[1]: libpod-conmon-b7d57b4094194c2d19dc0cf787e6676050471a5a8f8a1be05aedf73cff689e5b.scope: Deactivated successfully.
Nov 29 03:14:45 np0005539552 podman[271707]: 2025-11-29 08:14:45.13352631 +0000 UTC m=+0.035305650 container remove b7d57b4094194c2d19dc0cf787e6676050471a5a8f8a1be05aedf73cff689e5b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88cc8f67-0d68-413a-b508-63fae18f1c0c, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 03:14:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:45.140 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3b2f7846-6f5f-4e4f-9331-cbdb3bb198a9]: (4, ('Sat Nov 29 08:14:44 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-88cc8f67-0d68-413a-b508-63fae18f1c0c (b7d57b4094194c2d19dc0cf787e6676050471a5a8f8a1be05aedf73cff689e5b)\nb7d57b4094194c2d19dc0cf787e6676050471a5a8f8a1be05aedf73cff689e5b\nSat Nov 29 08:14:45 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-88cc8f67-0d68-413a-b508-63fae18f1c0c (b7d57b4094194c2d19dc0cf787e6676050471a5a8f8a1be05aedf73cff689e5b)\nb7d57b4094194c2d19dc0cf787e6676050471a5a8f8a1be05aedf73cff689e5b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:45.142 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[fc43cfe0-550c-4206-8d1c-d39dcb567e54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:45.142 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88cc8f67-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:45 np0005539552 kernel: tap88cc8f67-00: left promiscuous mode
Nov 29 03:14:45 np0005539552 nova_compute[233724]: 2025-11-29 08:14:45.144 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:45 np0005539552 nova_compute[233724]: 2025-11-29 08:14:45.157 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:45.160 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[bcb63808-09a0-47a4-b469-05a9dfeb6530]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:45.175 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[232a3676-5d9f-4c1b-b7b3-f17277648dd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:45.175 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[af5d7bf8-9c75-4323-a5bb-61c612db0f70]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:45.191 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0d5b38ed-872b-4781-bfdd-27149b46bfae]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 708054, 'reachable_time': 20742, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271733, 'error': None, 'target': 'ovnmeta-88cc8f67-0d68-413a-b508-63fae18f1c0c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:45.192 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-88cc8f67-0d68-413a-b508-63fae18f1c0c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:14:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:45.193 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[0162fd59-5526-45a9-aac8-be37d4ab9034]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:45 np0005539552 systemd[1]: run-netns-ovnmeta\x2d88cc8f67\x2d0d68\x2d413a\x2db508\x2d63fae18f1c0c.mount: Deactivated successfully.
Nov 29 03:14:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:14:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:45.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:14:45 np0005539552 nova_compute[233724]: 2025-11-29 08:14:45.471 233728 INFO nova.virt.libvirt.driver [None req-d5714181-393f-4f99-b5c1-6b417a915903 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Deleting instance files /var/lib/nova/instances/c32e74e2-e74f-4877-8130-ad35d31bb992_del#033[00m
Nov 29 03:14:45 np0005539552 nova_compute[233724]: 2025-11-29 08:14:45.473 233728 INFO nova.virt.libvirt.driver [None req-d5714181-393f-4f99-b5c1-6b417a915903 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Deletion of /var/lib/nova/instances/c32e74e2-e74f-4877-8130-ad35d31bb992_del complete#033[00m
Nov 29 03:14:45 np0005539552 nova_compute[233724]: 2025-11-29 08:14:45.564 233728 INFO nova.compute.manager [None req-d5714181-393f-4f99-b5c1-6b417a915903 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Took 0.75 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:14:45 np0005539552 nova_compute[233724]: 2025-11-29 08:14:45.565 233728 DEBUG oslo.service.loopingcall [None req-d5714181-393f-4f99-b5c1-6b417a915903 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:14:45 np0005539552 nova_compute[233724]: 2025-11-29 08:14:45.566 233728 DEBUG nova.compute.manager [-] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:14:45 np0005539552 nova_compute[233724]: 2025-11-29 08:14:45.566 233728 DEBUG nova.network.neutron [-] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:14:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:45.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:45 np0005539552 nova_compute[233724]: 2025-11-29 08:14:45.684 233728 DEBUG nova.network.neutron [-] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:14:45 np0005539552 nova_compute[233724]: 2025-11-29 08:14:45.717 233728 INFO nova.compute.manager [-] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Took 1.73 seconds to deallocate network for instance.#033[00m
Nov 29 03:14:45 np0005539552 nova_compute[233724]: 2025-11-29 08:14:45.775 233728 DEBUG oslo_concurrency.lockutils [None req-3a87bb33-4b31-498a-84c6-8a2e8785f5c7 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:45 np0005539552 nova_compute[233724]: 2025-11-29 08:14:45.776 233728 DEBUG oslo_concurrency.lockutils [None req-3a87bb33-4b31-498a-84c6-8a2e8785f5c7 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:45 np0005539552 nova_compute[233724]: 2025-11-29 08:14:45.872 233728 DEBUG oslo_concurrency.processutils [None req-3a87bb33-4b31-498a-84c6-8a2e8785f5c7 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:45 np0005539552 nova_compute[233724]: 2025-11-29 08:14:45.990 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:46 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:14:46 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1236166695' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:14:46 np0005539552 nova_compute[233724]: 2025-11-29 08:14:46.320 233728 DEBUG oslo_concurrency.processutils [None req-3a87bb33-4b31-498a-84c6-8a2e8785f5c7 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:46 np0005539552 nova_compute[233724]: 2025-11-29 08:14:46.326 233728 DEBUG nova.compute.provider_tree [None req-3a87bb33-4b31-498a-84c6-8a2e8785f5c7 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:14:46 np0005539552 nova_compute[233724]: 2025-11-29 08:14:46.341 233728 DEBUG nova.scheduler.client.report [None req-3a87bb33-4b31-498a-84c6-8a2e8785f5c7 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:14:46 np0005539552 nova_compute[233724]: 2025-11-29 08:14:46.372 233728 DEBUG oslo_concurrency.lockutils [None req-3a87bb33-4b31-498a-84c6-8a2e8785f5c7 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:46 np0005539552 nova_compute[233724]: 2025-11-29 08:14:46.395 233728 INFO nova.scheduler.client.report [None req-3a87bb33-4b31-498a-84c6-8a2e8785f5c7 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Deleted allocations for instance 19e85fae-c57e-409b-95f7-b53ddb4c928e#033[00m
Nov 29 03:14:46 np0005539552 nova_compute[233724]: 2025-11-29 08:14:46.504 233728 DEBUG oslo_concurrency.lockutils [None req-3a87bb33-4b31-498a-84c6-8a2e8785f5c7 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "19e85fae-c57e-409b-95f7-b53ddb4c928e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.230s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:46 np0005539552 nova_compute[233724]: 2025-11-29 08:14:46.853 233728 DEBUG nova.network.neutron [-] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:14:46 np0005539552 nova_compute[233724]: 2025-11-29 08:14:46.878 233728 INFO nova.compute.manager [-] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Took 1.31 seconds to deallocate network for instance.#033[00m
Nov 29 03:14:46 np0005539552 nova_compute[233724]: 2025-11-29 08:14:46.926 233728 DEBUG oslo_concurrency.lockutils [None req-d5714181-393f-4f99-b5c1-6b417a915903 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:46 np0005539552 nova_compute[233724]: 2025-11-29 08:14:46.926 233728 DEBUG oslo_concurrency.lockutils [None req-d5714181-393f-4f99-b5c1-6b417a915903 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:46 np0005539552 nova_compute[233724]: 2025-11-29 08:14:46.990 233728 DEBUG oslo_concurrency.processutils [None req-d5714181-393f-4f99-b5c1-6b417a915903 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.036 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.156 233728 DEBUG nova.compute.manager [req-3aa96c23-1ee3-43fa-9de3-9a4222ceb0a8 req-d0d89a92-1185-4bb9-8404-d7de0288ea4c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Received event network-vif-plugged-2dc39626-7aae-4e0c-a70b-e08d83c9788b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.156 233728 DEBUG oslo_concurrency.lockutils [req-3aa96c23-1ee3-43fa-9de3-9a4222ceb0a8 req-d0d89a92-1185-4bb9-8404-d7de0288ea4c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "c32e74e2-e74f-4877-8130-ad35d31bb992-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.157 233728 DEBUG oslo_concurrency.lockutils [req-3aa96c23-1ee3-43fa-9de3-9a4222ceb0a8 req-d0d89a92-1185-4bb9-8404-d7de0288ea4c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c32e74e2-e74f-4877-8130-ad35d31bb992-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.157 233728 DEBUG oslo_concurrency.lockutils [req-3aa96c23-1ee3-43fa-9de3-9a4222ceb0a8 req-d0d89a92-1185-4bb9-8404-d7de0288ea4c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c32e74e2-e74f-4877-8130-ad35d31bb992-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.157 233728 DEBUG nova.compute.manager [req-3aa96c23-1ee3-43fa-9de3-9a4222ceb0a8 req-d0d89a92-1185-4bb9-8404-d7de0288ea4c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] No waiting events found dispatching network-vif-plugged-2dc39626-7aae-4e0c-a70b-e08d83c9788b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.157 233728 WARNING nova.compute.manager [req-3aa96c23-1ee3-43fa-9de3-9a4222ceb0a8 req-d0d89a92-1185-4bb9-8404-d7de0288ea4c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Received unexpected event network-vif-plugged-2dc39626-7aae-4e0c-a70b-e08d83c9788b for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.158 233728 DEBUG nova.compute.manager [req-3aa96c23-1ee3-43fa-9de3-9a4222ceb0a8 req-d0d89a92-1185-4bb9-8404-d7de0288ea4c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Received event network-vif-deleted-2dc39626-7aae-4e0c-a70b-e08d83c9788b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:14:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:47.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.254 233728 DEBUG oslo_concurrency.lockutils [None req-d260eba3-f928-44b7-af24-fbac41e4621f 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Acquiring lock "36048c92-5df2-425d-b12f-1ce0326cc6a1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.255 233728 DEBUG oslo_concurrency.lockutils [None req-d260eba3-f928-44b7-af24-fbac41e4621f 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "36048c92-5df2-425d-b12f-1ce0326cc6a1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.255 233728 DEBUG oslo_concurrency.lockutils [None req-d260eba3-f928-44b7-af24-fbac41e4621f 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Acquiring lock "36048c92-5df2-425d-b12f-1ce0326cc6a1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.255 233728 DEBUG oslo_concurrency.lockutils [None req-d260eba3-f928-44b7-af24-fbac41e4621f 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "36048c92-5df2-425d-b12f-1ce0326cc6a1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.255 233728 DEBUG oslo_concurrency.lockutils [None req-d260eba3-f928-44b7-af24-fbac41e4621f 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "36048c92-5df2-425d-b12f-1ce0326cc6a1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.257 233728 INFO nova.compute.manager [None req-d260eba3-f928-44b7-af24-fbac41e4621f 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Terminating instance#033[00m
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.257 233728 DEBUG nova.compute.manager [None req-d260eba3-f928-44b7-af24-fbac41e4621f 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:14:47 np0005539552 kernel: tap1d12c166-4c (unregistering): left promiscuous mode
Nov 29 03:14:47 np0005539552 NetworkManager[48926]: <info>  [1764404087.3216] device (tap1d12c166-4c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:14:47 np0005539552 ovn_controller[133798]: 2025-11-29T08:14:47Z|00330|binding|INFO|Releasing lport 1d12c166-4cae-49ec-ab9b-149d65ceb0b6 from this chassis (sb_readonly=0)
Nov 29 03:14:47 np0005539552 ovn_controller[133798]: 2025-11-29T08:14:47Z|00331|binding|INFO|Setting lport 1d12c166-4cae-49ec-ab9b-149d65ceb0b6 down in Southbound
Nov 29 03:14:47 np0005539552 ovn_controller[133798]: 2025-11-29T08:14:47Z|00332|binding|INFO|Removing iface tap1d12c166-4c ovn-installed in OVS
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.329 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:47.336 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:af:cf 10.100.0.10'], port_security=['fa:16:3e:b2:af:cf 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '36048c92-5df2-425d-b12f-1ce0326cc6a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5ce08321-9ca9-47d5-b99b-65a439440787', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '17c0ff0fdeac43fc8fa0d7bedad67c34', 'neutron:revision_number': '6', 'neutron:security_group_ids': '9e0588e8-cc01-4cf1-ba71-74f90ca3214d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=65c90a62-2d0d-4ced-b7e5-a1b1d91ba84b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=1d12c166-4cae-49ec-ab9b-149d65ceb0b6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:14:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:47.338 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 1d12c166-4cae-49ec-ab9b-149d65ceb0b6 in datapath 5ce08321-9ca9-47d5-b99b-65a439440787 unbound from our chassis#033[00m
Nov 29 03:14:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:47.341 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5ce08321-9ca9-47d5-b99b-65a439440787, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:14:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:47.349 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[8392a8fd-e7f7-481b-a9f7-abbd00674240]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:47.350 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787 namespace which is not needed anymore#033[00m
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.350 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:47 np0005539552 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000005a.scope: Deactivated successfully.
Nov 29 03:14:47 np0005539552 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000005a.scope: Consumed 14.814s CPU time.
Nov 29 03:14:47 np0005539552 systemd-machined[196379]: Machine qemu-35-instance-0000005a terminated.
Nov 29 03:14:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:14:47 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2074242109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.430 233728 DEBUG oslo_concurrency.processutils [None req-d5714181-393f-4f99-b5c1-6b417a915903 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.435 233728 DEBUG nova.compute.provider_tree [None req-d5714181-393f-4f99-b5c1-6b417a915903 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:14:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.457 233728 DEBUG nova.scheduler.client.report [None req-d5714181-393f-4f99-b5c1-6b417a915903 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:14:47 np0005539552 neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787[270978]: [NOTICE]   (270983) : haproxy version is 2.8.14-c23fe91
Nov 29 03:14:47 np0005539552 neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787[270978]: [NOTICE]   (270983) : path to executable is /usr/sbin/haproxy
Nov 29 03:14:47 np0005539552 neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787[270978]: [WARNING]  (270983) : Exiting Master process...
Nov 29 03:14:47 np0005539552 neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787[270978]: [ALERT]    (270983) : Current worker (270985) exited with code 143 (Terminated)
Nov 29 03:14:47 np0005539552 neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787[270978]: [WARNING]  (270983) : All workers exited. Exiting... (0)
Nov 29 03:14:47 np0005539552 systemd[1]: libpod-95e13795805fa847feb1e9226fdfecabd90f9c8d7b736835e369c0ece9bdf9b8.scope: Deactivated successfully.
Nov 29 03:14:47 np0005539552 podman[271805]: 2025-11-29 08:14:47.484602266 +0000 UTC m=+0.056102319 container died 95e13795805fa847feb1e9226fdfecabd90f9c8d7b736835e369c0ece9bdf9b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.485 233728 DEBUG oslo_concurrency.lockutils [None req-d5714181-393f-4f99-b5c1-6b417a915903 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.559s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.495 233728 INFO nova.virt.libvirt.driver [-] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Instance destroyed successfully.#033[00m
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.496 233728 DEBUG nova.objects.instance [None req-d260eba3-f928-44b7-af24-fbac41e4621f 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lazy-loading 'resources' on Instance uuid 36048c92-5df2-425d-b12f-1ce0326cc6a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.509 233728 DEBUG nova.virt.libvirt.vif [None req-d260eba3-f928-44b7-af24-fbac41e4621f 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:13:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-607468552',display_name='tempest-ListServerFiltersTestJSON-instance-607468552',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-607468552',id=90,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:13:48Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='17c0ff0fdeac43fc8fa0d7bedad67c34',ramdisk_id='',reservation_id='r-angclfpm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model=
'virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-825347861',owner_user_name='tempest-ListServerFiltersTestJSON-825347861-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:14:25Z,user_data=None,user_id='05e59f4debd946ad9b7a4bac0e968bc6',uuid=36048c92-5df2-425d-b12f-1ce0326cc6a1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1d12c166-4cae-49ec-ab9b-149d65ceb0b6", "address": "fa:16:3e:b2:af:cf", "network": {"id": "5ce08321-9ca9-47d5-b99b-65a439440787", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1544923692-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17c0ff0fdeac43fc8fa0d7bedad67c34", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d12c166-4c", "ovs_interfaceid": "1d12c166-4cae-49ec-ab9b-149d65ceb0b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.509 233728 DEBUG nova.network.os_vif_util [None req-d260eba3-f928-44b7-af24-fbac41e4621f 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Converting VIF {"id": "1d12c166-4cae-49ec-ab9b-149d65ceb0b6", "address": "fa:16:3e:b2:af:cf", "network": {"id": "5ce08321-9ca9-47d5-b99b-65a439440787", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1544923692-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17c0ff0fdeac43fc8fa0d7bedad67c34", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d12c166-4c", "ovs_interfaceid": "1d12c166-4cae-49ec-ab9b-149d65ceb0b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.510 233728 DEBUG nova.network.os_vif_util [None req-d260eba3-f928-44b7-af24-fbac41e4621f 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b2:af:cf,bridge_name='br-int',has_traffic_filtering=True,id=1d12c166-4cae-49ec-ab9b-149d65ceb0b6,network=Network(5ce08321-9ca9-47d5-b99b-65a439440787),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d12c166-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.510 233728 DEBUG os_vif [None req-d260eba3-f928-44b7-af24-fbac41e4621f 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:af:cf,bridge_name='br-int',has_traffic_filtering=True,id=1d12c166-4cae-49ec-ab9b-149d65ceb0b6,network=Network(5ce08321-9ca9-47d5-b99b-65a439440787),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d12c166-4c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.512 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.512 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d12c166-4c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.514 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.517 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:14:47 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-95e13795805fa847feb1e9226fdfecabd90f9c8d7b736835e369c0ece9bdf9b8-userdata-shm.mount: Deactivated successfully.
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.519 233728 INFO os_vif [None req-d260eba3-f928-44b7-af24-fbac41e4621f 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:af:cf,bridge_name='br-int',has_traffic_filtering=True,id=1d12c166-4cae-49ec-ab9b-149d65ceb0b6,network=Network(5ce08321-9ca9-47d5-b99b-65a439440787),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d12c166-4c')#033[00m
Nov 29 03:14:47 np0005539552 systemd[1]: var-lib-containers-storage-overlay-596e38491023d84493ca2f51c4ed9be876969181fdbe903392d31bdbeee31bb2-merged.mount: Deactivated successfully.
Nov 29 03:14:47 np0005539552 podman[271805]: 2025-11-29 08:14:47.530335563 +0000 UTC m=+0.101835556 container cleanup 95e13795805fa847feb1e9226fdfecabd90f9c8d7b736835e369c0ece9bdf9b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:14:47 np0005539552 systemd[1]: libpod-conmon-95e13795805fa847feb1e9226fdfecabd90f9c8d7b736835e369c0ece9bdf9b8.scope: Deactivated successfully.
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.541 233728 INFO nova.scheduler.client.report [None req-d5714181-393f-4f99-b5c1-6b417a915903 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Deleted allocations for instance c32e74e2-e74f-4877-8130-ad35d31bb992#033[00m
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.569 233728 DEBUG nova.compute.manager [req-d8f5533d-2677-41ca-88bd-5c7b293ce59e req-f3399946-7e74-479d-bfb8-0fb9621fdf40 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Received event network-vif-unplugged-1d12c166-4cae-49ec-ab9b-149d65ceb0b6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.569 233728 DEBUG oslo_concurrency.lockutils [req-d8f5533d-2677-41ca-88bd-5c7b293ce59e req-f3399946-7e74-479d-bfb8-0fb9621fdf40 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "36048c92-5df2-425d-b12f-1ce0326cc6a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.570 233728 DEBUG oslo_concurrency.lockutils [req-d8f5533d-2677-41ca-88bd-5c7b293ce59e req-f3399946-7e74-479d-bfb8-0fb9621fdf40 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "36048c92-5df2-425d-b12f-1ce0326cc6a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.570 233728 DEBUG oslo_concurrency.lockutils [req-d8f5533d-2677-41ca-88bd-5c7b293ce59e req-f3399946-7e74-479d-bfb8-0fb9621fdf40 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "36048c92-5df2-425d-b12f-1ce0326cc6a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.570 233728 DEBUG nova.compute.manager [req-d8f5533d-2677-41ca-88bd-5c7b293ce59e req-f3399946-7e74-479d-bfb8-0fb9621fdf40 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] No waiting events found dispatching network-vif-unplugged-1d12c166-4cae-49ec-ab9b-149d65ceb0b6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.570 233728 DEBUG nova.compute.manager [req-d8f5533d-2677-41ca-88bd-5c7b293ce59e req-f3399946-7e74-479d-bfb8-0fb9621fdf40 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Received event network-vif-unplugged-1d12c166-4cae-49ec-ab9b-149d65ceb0b6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:14:47 np0005539552 podman[271862]: 2025-11-29 08:14:47.599119991 +0000 UTC m=+0.046538091 container remove 95e13795805fa847feb1e9226fdfecabd90f9c8d7b736835e369c0ece9bdf9b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:14:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:47.604 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f575be15-83cb-4bac-9a4c-a9fd9fb45afc]: (4, ('Sat Nov 29 08:14:47 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787 (95e13795805fa847feb1e9226fdfecabd90f9c8d7b736835e369c0ece9bdf9b8)\n95e13795805fa847feb1e9226fdfecabd90f9c8d7b736835e369c0ece9bdf9b8\nSat Nov 29 08:14:47 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787 (95e13795805fa847feb1e9226fdfecabd90f9c8d7b736835e369c0ece9bdf9b8)\n95e13795805fa847feb1e9226fdfecabd90f9c8d7b736835e369c0ece9bdf9b8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:47.605 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[fe41b051-6f77-4a3b-b8b3-970b2e750edd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:47.606 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5ce08321-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.608 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:47 np0005539552 kernel: tap5ce08321-90: left promiscuous mode
Nov 29 03:14:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:47.613 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[8ae3be8a-0de8-489d-9b38-9c0d5e26a13f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.626 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:47.629 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3b857859-10a4-47e4-a40b-6b4d843dfa71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:47.630 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[704b16b0-f990-443c-9459-90ac031df9f4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:47.645 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c246743c-a335-4c79-8417-cbf42b319855]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 711082, 'reachable_time': 18786, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271880, 'error': None, 'target': 'ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:47 np0005539552 systemd[1]: run-netns-ovnmeta\x2d5ce08321\x2d9ca9\x2d47d5\x2db99b\x2d65a439440787.mount: Deactivated successfully.
Nov 29 03:14:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:47.647 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:14:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:14:47.647 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[d48d6ee3-c19a-4513-8fc9-b5914c4260ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.650 233728 DEBUG oslo_concurrency.lockutils [None req-d5714181-393f-4f99-b5c1-6b417a915903 90573489491c4659ba4a8ccbd6b896a7 b5f1f0d72cd0427a8cda48db244caf6c - - default default] Lock "c32e74e2-e74f-4877-8130-ad35d31bb992" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:47.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.687 233728 DEBUG nova.compute.manager [req-a865ded9-26fb-46c1-9954-52c7e799b027 req-2c27dc96-f3cf-4d86-acca-3e281820736b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Received event network-vif-deleted-d8b38a34-8274-43e4-8ebd-3924de5c5ba7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.906 233728 INFO nova.virt.libvirt.driver [None req-d260eba3-f928-44b7-af24-fbac41e4621f 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Deleting instance files /var/lib/nova/instances/36048c92-5df2-425d-b12f-1ce0326cc6a1_del#033[00m
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.907 233728 INFO nova.virt.libvirt.driver [None req-d260eba3-f928-44b7-af24-fbac41e4621f 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Deletion of /var/lib/nova/instances/36048c92-5df2-425d-b12f-1ce0326cc6a1_del complete#033[00m
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.962 233728 INFO nova.compute.manager [None req-d260eba3-f928-44b7-af24-fbac41e4621f 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Took 0.70 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.962 233728 DEBUG oslo.service.loopingcall [None req-d260eba3-f928-44b7-af24-fbac41e4621f 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.963 233728 DEBUG nova.compute.manager [-] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:14:47 np0005539552 nova_compute[233724]: 2025-11-29 08:14:47.963 233728 DEBUG nova.network.neutron [-] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:14:49 np0005539552 nova_compute[233724]: 2025-11-29 08:14:49.191 233728 DEBUG nova.network.neutron [-] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:14:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:49.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:49 np0005539552 nova_compute[233724]: 2025-11-29 08:14:49.209 233728 INFO nova.compute.manager [-] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Took 1.25 seconds to deallocate network for instance.#033[00m
Nov 29 03:14:49 np0005539552 nova_compute[233724]: 2025-11-29 08:14:49.251 233728 DEBUG oslo_concurrency.lockutils [None req-d260eba3-f928-44b7-af24-fbac41e4621f 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:49 np0005539552 nova_compute[233724]: 2025-11-29 08:14:49.252 233728 DEBUG oslo_concurrency.lockutils [None req-d260eba3-f928-44b7-af24-fbac41e4621f 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:49 np0005539552 nova_compute[233724]: 2025-11-29 08:14:49.294 233728 DEBUG oslo_concurrency.processutils [None req-d260eba3-f928-44b7-af24-fbac41e4621f 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:49.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:49 np0005539552 nova_compute[233724]: 2025-11-29 08:14:49.700 233728 DEBUG nova.compute.manager [req-30e22b44-e72e-4835-81e5-3d6c49170209 req-f8e87ed9-bcfc-498b-a056-5af790777445 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Received event network-vif-plugged-1d12c166-4cae-49ec-ab9b-149d65ceb0b6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:49 np0005539552 nova_compute[233724]: 2025-11-29 08:14:49.701 233728 DEBUG oslo_concurrency.lockutils [req-30e22b44-e72e-4835-81e5-3d6c49170209 req-f8e87ed9-bcfc-498b-a056-5af790777445 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "36048c92-5df2-425d-b12f-1ce0326cc6a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:49 np0005539552 nova_compute[233724]: 2025-11-29 08:14:49.701 233728 DEBUG oslo_concurrency.lockutils [req-30e22b44-e72e-4835-81e5-3d6c49170209 req-f8e87ed9-bcfc-498b-a056-5af790777445 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "36048c92-5df2-425d-b12f-1ce0326cc6a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:49 np0005539552 nova_compute[233724]: 2025-11-29 08:14:49.701 233728 DEBUG oslo_concurrency.lockutils [req-30e22b44-e72e-4835-81e5-3d6c49170209 req-f8e87ed9-bcfc-498b-a056-5af790777445 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "36048c92-5df2-425d-b12f-1ce0326cc6a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:49 np0005539552 nova_compute[233724]: 2025-11-29 08:14:49.701 233728 DEBUG nova.compute.manager [req-30e22b44-e72e-4835-81e5-3d6c49170209 req-f8e87ed9-bcfc-498b-a056-5af790777445 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] No waiting events found dispatching network-vif-plugged-1d12c166-4cae-49ec-ab9b-149d65ceb0b6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:14:49 np0005539552 nova_compute[233724]: 2025-11-29 08:14:49.702 233728 WARNING nova.compute.manager [req-30e22b44-e72e-4835-81e5-3d6c49170209 req-f8e87ed9-bcfc-498b-a056-5af790777445 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Received unexpected event network-vif-plugged-1d12c166-4cae-49ec-ab9b-149d65ceb0b6 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:14:49 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:14:49 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1424800365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:14:49 np0005539552 nova_compute[233724]: 2025-11-29 08:14:49.728 233728 DEBUG oslo_concurrency.processutils [None req-d260eba3-f928-44b7-af24-fbac41e4621f 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:49 np0005539552 nova_compute[233724]: 2025-11-29 08:14:49.733 233728 DEBUG nova.compute.provider_tree [None req-d260eba3-f928-44b7-af24-fbac41e4621f 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:14:49 np0005539552 nova_compute[233724]: 2025-11-29 08:14:49.752 233728 DEBUG nova.scheduler.client.report [None req-d260eba3-f928-44b7-af24-fbac41e4621f 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:14:49 np0005539552 nova_compute[233724]: 2025-11-29 08:14:49.802 233728 DEBUG oslo_concurrency.lockutils [None req-d260eba3-f928-44b7-af24-fbac41e4621f 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.550s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:49 np0005539552 nova_compute[233724]: 2025-11-29 08:14:49.829 233728 INFO nova.scheduler.client.report [None req-d260eba3-f928-44b7-af24-fbac41e4621f 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Deleted allocations for instance 36048c92-5df2-425d-b12f-1ce0326cc6a1#033[00m
Nov 29 03:14:49 np0005539552 nova_compute[233724]: 2025-11-29 08:14:49.934 233728 DEBUG oslo_concurrency.lockutils [None req-d260eba3-f928-44b7-af24-fbac41e4621f 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "36048c92-5df2-425d-b12f-1ce0326cc6a1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:49 np0005539552 nova_compute[233724]: 2025-11-29 08:14:49.946 233728 DEBUG nova.compute.manager [req-30671f17-5ce3-4137-9fbf-c86dbd9a4971 req-4000d8e4-21ba-43bf-9149-d7f8ee76079c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Received event network-vif-deleted-1d12c166-4cae-49ec-ab9b-149d65ceb0b6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:51 np0005539552 nova_compute[233724]: 2025-11-29 08:14:51.040 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:51.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:51 np0005539552 nova_compute[233724]: 2025-11-29 08:14:51.305 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:51 np0005539552 nova_compute[233724]: 2025-11-29 08:14:51.575 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:51.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:52 np0005539552 nova_compute[233724]: 2025-11-29 08:14:52.515 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:14:52 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/572610875' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:14:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:14:52 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/572610875' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:14:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:53.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:53.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:54 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:14:54 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:14:54 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:14:54 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:14:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:55.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:55.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:55 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:14:55 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:14:55 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:14:56 np0005539552 nova_compute[233724]: 2025-11-29 08:14:56.042 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:14:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:57.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:14:57 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:57 np0005539552 nova_compute[233724]: 2025-11-29 08:14:57.517 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:57.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:57 np0005539552 podman[272093]: 2025-11-29 08:14:57.994670476 +0000 UTC m=+0.065189603 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:14:58 np0005539552 podman[272092]: 2025-11-29 08:14:58.008730223 +0000 UTC m=+0.078647724 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:14:58 np0005539552 podman[272094]: 2025-11-29 08:14:58.072502797 +0000 UTC m=+0.143869417 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Nov 29 03:14:58 np0005539552 nova_compute[233724]: 2025-11-29 08:14:58.511 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404083.5104556, 19e85fae-c57e-409b-95f7-b53ddb4c928e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:14:58 np0005539552 nova_compute[233724]: 2025-11-29 08:14:58.512 233728 INFO nova.compute.manager [-] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:14:58 np0005539552 nova_compute[233724]: 2025-11-29 08:14:58.541 233728 DEBUG nova.compute.manager [None req-f05d0bf3-a366-4014-ad48-ad1f9f94862f - - - - - -] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:14:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:59.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:14:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:59.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:00 np0005539552 nova_compute[233724]: 2025-11-29 08:15:00.049 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404085.0479236, c32e74e2-e74f-4877-8130-ad35d31bb992 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:15:00 np0005539552 nova_compute[233724]: 2025-11-29 08:15:00.050 233728 INFO nova.compute.manager [-] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:15:00 np0005539552 nova_compute[233724]: 2025-11-29 08:15:00.072 233728 DEBUG nova.compute.manager [None req-1309e515-f55e-4669-aa87-5a534a534c16 - - - - - -] [instance: c32e74e2-e74f-4877-8130-ad35d31bb992] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:15:00 np0005539552 nova_compute[233724]: 2025-11-29 08:15:00.360 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:00 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:00.360 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:15:00 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:00.361 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:15:01 np0005539552 nova_compute[233724]: 2025-11-29 08:15:01.044 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:01.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:01.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:02 np0005539552 nova_compute[233724]: 2025-11-29 08:15:02.494 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404087.4933896, 36048c92-5df2-425d-b12f-1ce0326cc6a1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:15:02 np0005539552 nova_compute[233724]: 2025-11-29 08:15:02.495 233728 INFO nova.compute.manager [-] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:15:02 np0005539552 nova_compute[233724]: 2025-11-29 08:15:02.530 233728 DEBUG nova.compute.manager [None req-73c32ee8-c96a-431d-8556-a8557b11ef80 - - - - - -] [instance: 36048c92-5df2-425d-b12f-1ce0326cc6a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:15:02 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:15:02 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:15:02 np0005539552 nova_compute[233724]: 2025-11-29 08:15:02.560 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:03.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:03 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #82. Immutable memtables: 0.
Nov 29 03:15:03 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:15:03.601279) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:15:03 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 82
Nov 29 03:15:03 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404103601919, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 1321, "num_deletes": 252, "total_data_size": 2878307, "memory_usage": 2914416, "flush_reason": "Manual Compaction"}
Nov 29 03:15:03 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #83: started
Nov 29 03:15:03 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404103617404, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 83, "file_size": 1857339, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42480, "largest_seqno": 43796, "table_properties": {"data_size": 1851536, "index_size": 3070, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13174, "raw_average_key_size": 20, "raw_value_size": 1839648, "raw_average_value_size": 2856, "num_data_blocks": 135, "num_entries": 644, "num_filter_entries": 644, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404011, "oldest_key_time": 1764404011, "file_creation_time": 1764404103, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:15:03 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 15683 microseconds, and 8225 cpu microseconds.
Nov 29 03:15:03 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:15:03 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:15:03.617476) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #83: 1857339 bytes OK
Nov 29 03:15:03 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:15:03.617501) [db/memtable_list.cc:519] [default] Level-0 commit table #83 started
Nov 29 03:15:03 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:15:03.619094) [db/memtable_list.cc:722] [default] Level-0 commit table #83: memtable #1 done
Nov 29 03:15:03 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:15:03.619119) EVENT_LOG_v1 {"time_micros": 1764404103619112, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:15:03 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:15:03.619142) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:15:03 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 2871928, prev total WAL file size 2871928, number of live WAL files 2.
Nov 29 03:15:03 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000079.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:15:03 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:15:03.620407) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Nov 29 03:15:03 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:15:03 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [83(1813KB)], [81(10109KB)]
Nov 29 03:15:03 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404103620460, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [83], "files_L6": [81], "score": -1, "input_data_size": 12209782, "oldest_snapshot_seqno": -1}
Nov 29 03:15:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:03 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #84: 7416 keys, 10269546 bytes, temperature: kUnknown
Nov 29 03:15:03 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404103677565, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 84, "file_size": 10269546, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10221436, "index_size": 28478, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18565, "raw_key_size": 192319, "raw_average_key_size": 25, "raw_value_size": 10089942, "raw_average_value_size": 1360, "num_data_blocks": 1118, "num_entries": 7416, "num_filter_entries": 7416, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764404103, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 84, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:15:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:15:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:03.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:15:03 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:15:03 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:15:03.677801) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 10269546 bytes
Nov 29 03:15:03 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:15:03.679258) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 213.5 rd, 179.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 9.9 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(12.1) write-amplify(5.5) OK, records in: 7939, records dropped: 523 output_compression: NoCompression
Nov 29 03:15:03 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:15:03.679274) EVENT_LOG_v1 {"time_micros": 1764404103679266, "job": 50, "event": "compaction_finished", "compaction_time_micros": 57193, "compaction_time_cpu_micros": 24256, "output_level": 6, "num_output_files": 1, "total_output_size": 10269546, "num_input_records": 7939, "num_output_records": 7416, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:15:03 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:15:03 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404103679658, "job": 50, "event": "table_file_deletion", "file_number": 83}
Nov 29 03:15:03 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000081.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:15:03 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404103681219, "job": 50, "event": "table_file_deletion", "file_number": 81}
Nov 29 03:15:03 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:15:03.620285) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:15:03 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:15:03.681274) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:15:03 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:15:03.681281) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:15:03 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:15:03.681283) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:15:03 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:15:03.681284) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:15:03 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:15:03.681286) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:15:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:05.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:05.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:06 np0005539552 nova_compute[233724]: 2025-11-29 08:15:06.084 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:15:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:07.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:15:07 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:07.363 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:07 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:07 np0005539552 nova_compute[233724]: 2025-11-29 08:15:07.564 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:07.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:08 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:15:08 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3600.1 total, 600.0 interval
Cumulative writes: 8334 writes, 43K keys, 8334 commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.03 MB/s
Cumulative WAL: 8334 writes, 8334 syncs, 1.00 writes per sync, written: 0.09 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1802 writes, 9404 keys, 1802 commit groups, 1.0 writes per commit group, ingest: 17.58 MB, 0.03 MB/s
Interval WAL: 1802 writes, 1802 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     20.8      2.59              0.16        25    0.103       0      0       0.0       0.0
  L6      1/0    9.79 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.5     79.0     67.0      3.65              0.63        24    0.152    147K    13K       0.0       0.0
 Sum      1/0    9.79 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.5     46.2     47.8      6.23              0.79        49    0.127    147K    13K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.7    120.4    119.2      0.72              0.22        14    0.051     53K   4112       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0     79.0     67.0      3.65              0.63        24    0.152    147K    13K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     21.2      2.54              0.16        24    0.106       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.044       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 3600.1 total, 600.0 interval
Flush(GB): cumulative 0.053, interval 0.012
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.29 GB write, 0.08 MB/s write, 0.28 GB read, 0.08 MB/s read, 6.2 seconds
Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 0.7 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x560bf13bd1f0#2 capacity: 304.00 MB usage: 31.59 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000236 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(1790,30.46 MB,10.0213%) FilterBlock(49,414.30 KB,0.133088%) IndexBlock(49,736.56 KB,0.236612%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Nov 29 03:15:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:15:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:09.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:15:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:09.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:10 np0005539552 nova_compute[233724]: 2025-11-29 08:15:10.953 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:15:10 np0005539552 nova_compute[233724]: 2025-11-29 08:15:10.982 233728 DEBUG oslo_concurrency.lockutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Acquiring lock "27ab9d23-2828-4fea-8ff9-99751c744ded" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:10 np0005539552 nova_compute[233724]: 2025-11-29 08:15:10.982 233728 DEBUG oslo_concurrency.lockutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "27ab9d23-2828-4fea-8ff9-99751c744ded" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:10 np0005539552 nova_compute[233724]: 2025-11-29 08:15:10.996 233728 DEBUG nova.compute.manager [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:15:11 np0005539552 nova_compute[233724]: 2025-11-29 08:15:11.080 233728 DEBUG oslo_concurrency.lockutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:11 np0005539552 nova_compute[233724]: 2025-11-29 08:15:11.080 233728 DEBUG oslo_concurrency.lockutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:11 np0005539552 nova_compute[233724]: 2025-11-29 08:15:11.086 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:11 np0005539552 nova_compute[233724]: 2025-11-29 08:15:11.090 233728 DEBUG nova.virt.hardware [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:15:11 np0005539552 nova_compute[233724]: 2025-11-29 08:15:11.090 233728 INFO nova.compute.claims [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:15:11 np0005539552 nova_compute[233724]: 2025-11-29 08:15:11.201 233728 DEBUG oslo_concurrency.processutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:15:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:11.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:15:11 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:15:11 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1857724748' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:15:11 np0005539552 nova_compute[233724]: 2025-11-29 08:15:11.632 233728 DEBUG oslo_concurrency.processutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:11 np0005539552 nova_compute[233724]: 2025-11-29 08:15:11.639 233728 DEBUG nova.compute.provider_tree [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:15:11 np0005539552 nova_compute[233724]: 2025-11-29 08:15:11.653 233728 DEBUG nova.scheduler.client.report [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:15:11 np0005539552 nova_compute[233724]: 2025-11-29 08:15:11.680 233728 DEBUG oslo_concurrency.lockutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:11 np0005539552 nova_compute[233724]: 2025-11-29 08:15:11.681 233728 DEBUG nova.compute.manager [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:15:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:15:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:11.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:15:11 np0005539552 nova_compute[233724]: 2025-11-29 08:15:11.735 233728 DEBUG nova.compute.manager [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:15:11 np0005539552 nova_compute[233724]: 2025-11-29 08:15:11.736 233728 DEBUG nova.network.neutron [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:15:11 np0005539552 nova_compute[233724]: 2025-11-29 08:15:11.763 233728 INFO nova.virt.libvirt.driver [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:15:11 np0005539552 nova_compute[233724]: 2025-11-29 08:15:11.789 233728 DEBUG nova.compute.manager [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:15:11 np0005539552 nova_compute[233724]: 2025-11-29 08:15:11.896 233728 DEBUG nova.compute.manager [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:15:11 np0005539552 nova_compute[233724]: 2025-11-29 08:15:11.898 233728 DEBUG nova.virt.libvirt.driver [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:15:11 np0005539552 nova_compute[233724]: 2025-11-29 08:15:11.898 233728 INFO nova.virt.libvirt.driver [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Creating image(s)#033[00m
Nov 29 03:15:11 np0005539552 nova_compute[233724]: 2025-11-29 08:15:11.927 233728 DEBUG nova.storage.rbd_utils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] rbd image 27ab9d23-2828-4fea-8ff9-99751c744ded_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:15:11 np0005539552 nova_compute[233724]: 2025-11-29 08:15:11.962 233728 DEBUG nova.storage.rbd_utils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] rbd image 27ab9d23-2828-4fea-8ff9-99751c744ded_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:15:11 np0005539552 nova_compute[233724]: 2025-11-29 08:15:11.988 233728 DEBUG nova.storage.rbd_utils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] rbd image 27ab9d23-2828-4fea-8ff9-99751c744ded_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:15:11 np0005539552 nova_compute[233724]: 2025-11-29 08:15:11.991 233728 DEBUG oslo_concurrency.processutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:12 np0005539552 nova_compute[233724]: 2025-11-29 08:15:12.055 233728 DEBUG oslo_concurrency.processutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:12 np0005539552 nova_compute[233724]: 2025-11-29 08:15:12.056 233728 DEBUG oslo_concurrency.lockutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:12 np0005539552 nova_compute[233724]: 2025-11-29 08:15:12.056 233728 DEBUG oslo_concurrency.lockutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:12 np0005539552 nova_compute[233724]: 2025-11-29 08:15:12.057 233728 DEBUG oslo_concurrency.lockutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:12 np0005539552 nova_compute[233724]: 2025-11-29 08:15:12.082 233728 DEBUG nova.storage.rbd_utils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] rbd image 27ab9d23-2828-4fea-8ff9-99751c744ded_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:15:12 np0005539552 nova_compute[233724]: 2025-11-29 08:15:12.085 233728 DEBUG oslo_concurrency.processutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 27ab9d23-2828-4fea-8ff9-99751c744ded_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:12 np0005539552 nova_compute[233724]: 2025-11-29 08:15:12.126 233728 DEBUG nova.policy [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0bd9df09f1324e3f9dba099f03ffe1c6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '647c3591c2b940409293763c6c83e58c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:15:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:12 np0005539552 nova_compute[233724]: 2025-11-29 08:15:12.567 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:12 np0005539552 nova_compute[233724]: 2025-11-29 08:15:12.817 233728 DEBUG oslo_concurrency.processutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 27ab9d23-2828-4fea-8ff9-99751c744ded_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.732s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:12 np0005539552 nova_compute[233724]: 2025-11-29 08:15:12.914 233728 DEBUG nova.storage.rbd_utils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] resizing rbd image 27ab9d23-2828-4fea-8ff9-99751c744ded_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:15:13 np0005539552 nova_compute[233724]: 2025-11-29 08:15:13.002 233728 DEBUG nova.network.neutron [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Successfully created port: 1ae85427-ec76-4ddf-ba54-fa21a4d16202 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:15:13 np0005539552 nova_compute[233724]: 2025-11-29 08:15:13.111 233728 DEBUG nova.objects.instance [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lazy-loading 'migration_context' on Instance uuid 27ab9d23-2828-4fea-8ff9-99751c744ded obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:15:13 np0005539552 nova_compute[233724]: 2025-11-29 08:15:13.125 233728 DEBUG nova.virt.libvirt.driver [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:15:13 np0005539552 nova_compute[233724]: 2025-11-29 08:15:13.125 233728 DEBUG nova.virt.libvirt.driver [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Ensure instance console log exists: /var/lib/nova/instances/27ab9d23-2828-4fea-8ff9-99751c744ded/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:15:13 np0005539552 nova_compute[233724]: 2025-11-29 08:15:13.126 233728 DEBUG oslo_concurrency.lockutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:13 np0005539552 nova_compute[233724]: 2025-11-29 08:15:13.126 233728 DEBUG oslo_concurrency.lockutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:13 np0005539552 nova_compute[233724]: 2025-11-29 08:15:13.127 233728 DEBUG oslo_concurrency.lockutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:13.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:13.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:15.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:15.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:16 np0005539552 nova_compute[233724]: 2025-11-29 08:15:16.087 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:16 np0005539552 nova_compute[233724]: 2025-11-29 08:15:16.105 233728 DEBUG nova.network.neutron [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Successfully updated port: 1ae85427-ec76-4ddf-ba54-fa21a4d16202 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:15:16 np0005539552 nova_compute[233724]: 2025-11-29 08:15:16.127 233728 DEBUG oslo_concurrency.lockutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Acquiring lock "refresh_cache-27ab9d23-2828-4fea-8ff9-99751c744ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:15:16 np0005539552 nova_compute[233724]: 2025-11-29 08:15:16.128 233728 DEBUG oslo_concurrency.lockutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Acquired lock "refresh_cache-27ab9d23-2828-4fea-8ff9-99751c744ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:15:16 np0005539552 nova_compute[233724]: 2025-11-29 08:15:16.128 233728 DEBUG nova.network.neutron [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:15:16 np0005539552 nova_compute[233724]: 2025-11-29 08:15:16.209 233728 DEBUG nova.compute.manager [req-650653d8-e76d-4fc8-a7a4-288672d0453a req-bd5e813a-b0b3-47dc-abaf-fd4814c9f870 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Received event network-changed-1ae85427-ec76-4ddf-ba54-fa21a4d16202 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:15:16 np0005539552 nova_compute[233724]: 2025-11-29 08:15:16.209 233728 DEBUG nova.compute.manager [req-650653d8-e76d-4fc8-a7a4-288672d0453a req-bd5e813a-b0b3-47dc-abaf-fd4814c9f870 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Refreshing instance network info cache due to event network-changed-1ae85427-ec76-4ddf-ba54-fa21a4d16202. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:15:16 np0005539552 nova_compute[233724]: 2025-11-29 08:15:16.209 233728 DEBUG oslo_concurrency.lockutils [req-650653d8-e76d-4fc8-a7a4-288672d0453a req-bd5e813a-b0b3-47dc-abaf-fd4814c9f870 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-27ab9d23-2828-4fea-8ff9-99751c744ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:15:16 np0005539552 nova_compute[233724]: 2025-11-29 08:15:16.893 233728 DEBUG nova.network.neutron [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:15:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:17.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:17 np0005539552 nova_compute[233724]: 2025-11-29 08:15:17.569 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:17.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:17 np0005539552 nova_compute[233724]: 2025-11-29 08:15:17.802 233728 DEBUG nova.network.neutron [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Updating instance_info_cache with network_info: [{"id": "1ae85427-ec76-4ddf-ba54-fa21a4d16202", "address": "fa:16:3e:47:78:01", "network": {"id": "ab70b036-b5ab-4377-b081-f4b82fdb05c5", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1993818804-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "647c3591c2b940409293763c6c83e58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ae85427-ec", "ovs_interfaceid": "1ae85427-ec76-4ddf-ba54-fa21a4d16202", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:15:17 np0005539552 nova_compute[233724]: 2025-11-29 08:15:17.832 233728 DEBUG oslo_concurrency.lockutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Releasing lock "refresh_cache-27ab9d23-2828-4fea-8ff9-99751c744ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:15:17 np0005539552 nova_compute[233724]: 2025-11-29 08:15:17.832 233728 DEBUG nova.compute.manager [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Instance network_info: |[{"id": "1ae85427-ec76-4ddf-ba54-fa21a4d16202", "address": "fa:16:3e:47:78:01", "network": {"id": "ab70b036-b5ab-4377-b081-f4b82fdb05c5", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1993818804-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "647c3591c2b940409293763c6c83e58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ae85427-ec", "ovs_interfaceid": "1ae85427-ec76-4ddf-ba54-fa21a4d16202", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:15:17 np0005539552 nova_compute[233724]: 2025-11-29 08:15:17.833 233728 DEBUG oslo_concurrency.lockutils [req-650653d8-e76d-4fc8-a7a4-288672d0453a req-bd5e813a-b0b3-47dc-abaf-fd4814c9f870 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-27ab9d23-2828-4fea-8ff9-99751c744ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:15:17 np0005539552 nova_compute[233724]: 2025-11-29 08:15:17.833 233728 DEBUG nova.network.neutron [req-650653d8-e76d-4fc8-a7a4-288672d0453a req-bd5e813a-b0b3-47dc-abaf-fd4814c9f870 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Refreshing network info cache for port 1ae85427-ec76-4ddf-ba54-fa21a4d16202 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:15:17 np0005539552 nova_compute[233724]: 2025-11-29 08:15:17.836 233728 DEBUG nova.virt.libvirt.driver [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Start _get_guest_xml network_info=[{"id": "1ae85427-ec76-4ddf-ba54-fa21a4d16202", "address": "fa:16:3e:47:78:01", "network": {"id": "ab70b036-b5ab-4377-b081-f4b82fdb05c5", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1993818804-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "647c3591c2b940409293763c6c83e58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ae85427-ec", "ovs_interfaceid": "1ae85427-ec76-4ddf-ba54-fa21a4d16202", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:15:17 np0005539552 nova_compute[233724]: 2025-11-29 08:15:17.840 233728 WARNING nova.virt.libvirt.driver [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:15:17 np0005539552 nova_compute[233724]: 2025-11-29 08:15:17.844 233728 DEBUG nova.virt.libvirt.host [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:15:17 np0005539552 nova_compute[233724]: 2025-11-29 08:15:17.844 233728 DEBUG nova.virt.libvirt.host [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:15:17 np0005539552 nova_compute[233724]: 2025-11-29 08:15:17.848 233728 DEBUG nova.virt.libvirt.host [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:15:17 np0005539552 nova_compute[233724]: 2025-11-29 08:15:17.849 233728 DEBUG nova.virt.libvirt.host [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:15:17 np0005539552 nova_compute[233724]: 2025-11-29 08:15:17.850 233728 DEBUG nova.virt.libvirt.driver [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:15:17 np0005539552 nova_compute[233724]: 2025-11-29 08:15:17.850 233728 DEBUG nova.virt.hardware [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:15:17 np0005539552 nova_compute[233724]: 2025-11-29 08:15:17.850 233728 DEBUG nova.virt.hardware [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:15:17 np0005539552 nova_compute[233724]: 2025-11-29 08:15:17.850 233728 DEBUG nova.virt.hardware [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:15:17 np0005539552 nova_compute[233724]: 2025-11-29 08:15:17.851 233728 DEBUG nova.virt.hardware [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:15:17 np0005539552 nova_compute[233724]: 2025-11-29 08:15:17.851 233728 DEBUG nova.virt.hardware [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:15:17 np0005539552 nova_compute[233724]: 2025-11-29 08:15:17.851 233728 DEBUG nova.virt.hardware [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:15:17 np0005539552 nova_compute[233724]: 2025-11-29 08:15:17.851 233728 DEBUG nova.virt.hardware [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:15:17 np0005539552 nova_compute[233724]: 2025-11-29 08:15:17.851 233728 DEBUG nova.virt.hardware [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:15:17 np0005539552 nova_compute[233724]: 2025-11-29 08:15:17.852 233728 DEBUG nova.virt.hardware [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:15:17 np0005539552 nova_compute[233724]: 2025-11-29 08:15:17.852 233728 DEBUG nova.virt.hardware [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:15:17 np0005539552 nova_compute[233724]: 2025-11-29 08:15:17.852 233728 DEBUG nova.virt.hardware [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:15:17 np0005539552 nova_compute[233724]: 2025-11-29 08:15:17.855 233728 DEBUG oslo_concurrency.processutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:15:18 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1638222891' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:15:18 np0005539552 nova_compute[233724]: 2025-11-29 08:15:18.284 233728 DEBUG oslo_concurrency.processutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:18 np0005539552 nova_compute[233724]: 2025-11-29 08:15:18.307 233728 DEBUG nova.storage.rbd_utils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] rbd image 27ab9d23-2828-4fea-8ff9-99751c744ded_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:15:18 np0005539552 nova_compute[233724]: 2025-11-29 08:15:18.310 233728 DEBUG oslo_concurrency.processutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:15:18 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1178258888' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:15:18 np0005539552 nova_compute[233724]: 2025-11-29 08:15:18.739 233728 DEBUG oslo_concurrency.processutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:18 np0005539552 nova_compute[233724]: 2025-11-29 08:15:18.741 233728 DEBUG nova.virt.libvirt.vif [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:15:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1895741234',display_name='tempest-tempest.common.compute-instance-1895741234-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1895741234-1',id=97,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='647c3591c2b940409293763c6c83e58c',ramdisk_id='',reservation_id='r-vw058oln',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-2058984420',owner_user_name='tempest-Multi
pleCreateTestJSON-2058984420-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:15:11Z,user_data=None,user_id='0bd9df09f1324e3f9dba099f03ffe1c6',uuid=27ab9d23-2828-4fea-8ff9-99751c744ded,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1ae85427-ec76-4ddf-ba54-fa21a4d16202", "address": "fa:16:3e:47:78:01", "network": {"id": "ab70b036-b5ab-4377-b081-f4b82fdb05c5", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1993818804-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "647c3591c2b940409293763c6c83e58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ae85427-ec", "ovs_interfaceid": "1ae85427-ec76-4ddf-ba54-fa21a4d16202", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:15:18 np0005539552 nova_compute[233724]: 2025-11-29 08:15:18.741 233728 DEBUG nova.network.os_vif_util [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Converting VIF {"id": "1ae85427-ec76-4ddf-ba54-fa21a4d16202", "address": "fa:16:3e:47:78:01", "network": {"id": "ab70b036-b5ab-4377-b081-f4b82fdb05c5", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1993818804-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "647c3591c2b940409293763c6c83e58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ae85427-ec", "ovs_interfaceid": "1ae85427-ec76-4ddf-ba54-fa21a4d16202", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:15:18 np0005539552 nova_compute[233724]: 2025-11-29 08:15:18.742 233728 DEBUG nova.network.os_vif_util [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:78:01,bridge_name='br-int',has_traffic_filtering=True,id=1ae85427-ec76-4ddf-ba54-fa21a4d16202,network=Network(ab70b036-b5ab-4377-b081-f4b82fdb05c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ae85427-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:15:18 np0005539552 nova_compute[233724]: 2025-11-29 08:15:18.743 233728 DEBUG nova.objects.instance [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lazy-loading 'pci_devices' on Instance uuid 27ab9d23-2828-4fea-8ff9-99751c744ded obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:15:18 np0005539552 nova_compute[233724]: 2025-11-29 08:15:18.761 233728 DEBUG nova.virt.libvirt.driver [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:15:18 np0005539552 nova_compute[233724]:  <uuid>27ab9d23-2828-4fea-8ff9-99751c744ded</uuid>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:  <name>instance-00000061</name>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:15:18 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:      <nova:name>tempest-tempest.common.compute-instance-1895741234-1</nova:name>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:15:17</nova:creationTime>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:15:18 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:        <nova:user uuid="0bd9df09f1324e3f9dba099f03ffe1c6">tempest-MultipleCreateTestJSON-2058984420-project-member</nova:user>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:        <nova:project uuid="647c3591c2b940409293763c6c83e58c">tempest-MultipleCreateTestJSON-2058984420</nova:project>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:        <nova:port uuid="1ae85427-ec76-4ddf-ba54-fa21a4d16202">
Nov 29 03:15:18 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:      <entry name="serial">27ab9d23-2828-4fea-8ff9-99751c744ded</entry>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:      <entry name="uuid">27ab9d23-2828-4fea-8ff9-99751c744ded</entry>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:15:18 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/27ab9d23-2828-4fea-8ff9-99751c744ded_disk">
Nov 29 03:15:18 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:15:18 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:15:18 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/27ab9d23-2828-4fea-8ff9-99751c744ded_disk.config">
Nov 29 03:15:18 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:15:18 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:15:18 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:47:78:01"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:      <target dev="tap1ae85427-ec"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:15:18 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/27ab9d23-2828-4fea-8ff9-99751c744ded/console.log" append="off"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:15:18 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:15:18 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:15:18 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:15:18 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:15:18 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:15:18 np0005539552 nova_compute[233724]: 2025-11-29 08:15:18.763 233728 DEBUG nova.compute.manager [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Preparing to wait for external event network-vif-plugged-1ae85427-ec76-4ddf-ba54-fa21a4d16202 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:15:18 np0005539552 nova_compute[233724]: 2025-11-29 08:15:18.764 233728 DEBUG oslo_concurrency.lockutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Acquiring lock "27ab9d23-2828-4fea-8ff9-99751c744ded-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:18 np0005539552 nova_compute[233724]: 2025-11-29 08:15:18.764 233728 DEBUG oslo_concurrency.lockutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "27ab9d23-2828-4fea-8ff9-99751c744ded-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:18 np0005539552 nova_compute[233724]: 2025-11-29 08:15:18.764 233728 DEBUG oslo_concurrency.lockutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "27ab9d23-2828-4fea-8ff9-99751c744ded-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:18 np0005539552 nova_compute[233724]: 2025-11-29 08:15:18.765 233728 DEBUG nova.virt.libvirt.vif [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:15:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1895741234',display_name='tempest-tempest.common.compute-instance-1895741234-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1895741234-1',id=97,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='647c3591c2b940409293763c6c83e58c',ramdisk_id='',reservation_id='r-vw058oln',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-2058984420',owner_user_name='tempest-MultipleCreateTestJSON-2058984420-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:15:11Z,user_data=None,user_id='0bd9df09f1324e3f9dba099f03ffe1c6',uuid=27ab9d23-2828-4fea-8ff9-99751c744ded,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1ae85427-ec76-4ddf-ba54-fa21a4d16202", "address": "fa:16:3e:47:78:01", "network": {"id": "ab70b036-b5ab-4377-b081-f4b82fdb05c5", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1993818804-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "647c3591c2b940409293763c6c83e58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ae85427-ec", "ovs_interfaceid": "1ae85427-ec76-4ddf-ba54-fa21a4d16202", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:15:18 np0005539552 nova_compute[233724]: 2025-11-29 08:15:18.766 233728 DEBUG nova.network.os_vif_util [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Converting VIF {"id": "1ae85427-ec76-4ddf-ba54-fa21a4d16202", "address": "fa:16:3e:47:78:01", "network": {"id": "ab70b036-b5ab-4377-b081-f4b82fdb05c5", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1993818804-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "647c3591c2b940409293763c6c83e58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ae85427-ec", "ovs_interfaceid": "1ae85427-ec76-4ddf-ba54-fa21a4d16202", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:15:18 np0005539552 nova_compute[233724]: 2025-11-29 08:15:18.766 233728 DEBUG nova.network.os_vif_util [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:78:01,bridge_name='br-int',has_traffic_filtering=True,id=1ae85427-ec76-4ddf-ba54-fa21a4d16202,network=Network(ab70b036-b5ab-4377-b081-f4b82fdb05c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ae85427-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:15:18 np0005539552 nova_compute[233724]: 2025-11-29 08:15:18.767 233728 DEBUG os_vif [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:78:01,bridge_name='br-int',has_traffic_filtering=True,id=1ae85427-ec76-4ddf-ba54-fa21a4d16202,network=Network(ab70b036-b5ab-4377-b081-f4b82fdb05c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ae85427-ec') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:15:18 np0005539552 nova_compute[233724]: 2025-11-29 08:15:18.767 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:18 np0005539552 nova_compute[233724]: 2025-11-29 08:15:18.768 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:18 np0005539552 nova_compute[233724]: 2025-11-29 08:15:18.769 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:15:18 np0005539552 nova_compute[233724]: 2025-11-29 08:15:18.772 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:18 np0005539552 nova_compute[233724]: 2025-11-29 08:15:18.773 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ae85427-ec, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:18 np0005539552 nova_compute[233724]: 2025-11-29 08:15:18.773 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1ae85427-ec, col_values=(('external_ids', {'iface-id': '1ae85427-ec76-4ddf-ba54-fa21a4d16202', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:47:78:01', 'vm-uuid': '27ab9d23-2828-4fea-8ff9-99751c744ded'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:18 np0005539552 nova_compute[233724]: 2025-11-29 08:15:18.775 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:18 np0005539552 NetworkManager[48926]: <info>  [1764404118.7757] manager: (tap1ae85427-ec): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/172)
Nov 29 03:15:18 np0005539552 nova_compute[233724]: 2025-11-29 08:15:18.776 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:15:18 np0005539552 nova_compute[233724]: 2025-11-29 08:15:18.780 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:18 np0005539552 nova_compute[233724]: 2025-11-29 08:15:18.781 233728 INFO os_vif [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:78:01,bridge_name='br-int',has_traffic_filtering=True,id=1ae85427-ec76-4ddf-ba54-fa21a4d16202,network=Network(ab70b036-b5ab-4377-b081-f4b82fdb05c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ae85427-ec')#033[00m
Nov 29 03:15:19 np0005539552 nova_compute[233724]: 2025-11-29 08:15:19.053 233728 DEBUG nova.virt.libvirt.driver [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:15:19 np0005539552 nova_compute[233724]: 2025-11-29 08:15:19.053 233728 DEBUG nova.virt.libvirt.driver [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:15:19 np0005539552 nova_compute[233724]: 2025-11-29 08:15:19.054 233728 DEBUG nova.virt.libvirt.driver [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] No VIF found with MAC fa:16:3e:47:78:01, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:15:19 np0005539552 nova_compute[233724]: 2025-11-29 08:15:19.054 233728 INFO nova.virt.libvirt.driver [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Using config drive#033[00m
Nov 29 03:15:19 np0005539552 nova_compute[233724]: 2025-11-29 08:15:19.081 233728 DEBUG nova.storage.rbd_utils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] rbd image 27ab9d23-2828-4fea-8ff9-99751c744ded_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:15:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:19.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:19 np0005539552 nova_compute[233724]: 2025-11-29 08:15:19.696 233728 INFO nova.virt.libvirt.driver [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Creating config drive at /var/lib/nova/instances/27ab9d23-2828-4fea-8ff9-99751c744ded/disk.config#033[00m
Nov 29 03:15:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:19.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:19 np0005539552 nova_compute[233724]: 2025-11-29 08:15:19.702 233728 DEBUG oslo_concurrency.processutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/27ab9d23-2828-4fea-8ff9-99751c744ded/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5u66twmj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:19 np0005539552 nova_compute[233724]: 2025-11-29 08:15:19.837 233728 DEBUG oslo_concurrency.processutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/27ab9d23-2828-4fea-8ff9-99751c744ded/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5u66twmj" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:19 np0005539552 nova_compute[233724]: 2025-11-29 08:15:19.863 233728 DEBUG nova.storage.rbd_utils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] rbd image 27ab9d23-2828-4fea-8ff9-99751c744ded_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:15:19 np0005539552 nova_compute[233724]: 2025-11-29 08:15:19.867 233728 DEBUG oslo_concurrency.processutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/27ab9d23-2828-4fea-8ff9-99751c744ded/disk.config 27ab9d23-2828-4fea-8ff9-99751c744ded_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:20 np0005539552 nova_compute[233724]: 2025-11-29 08:15:20.000 233728 DEBUG oslo_concurrency.processutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/27ab9d23-2828-4fea-8ff9-99751c744ded/disk.config 27ab9d23-2828-4fea-8ff9-99751c744ded_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:20 np0005539552 nova_compute[233724]: 2025-11-29 08:15:20.001 233728 INFO nova.virt.libvirt.driver [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Deleting local config drive /var/lib/nova/instances/27ab9d23-2828-4fea-8ff9-99751c744ded/disk.config because it was imported into RBD.#033[00m
Nov 29 03:15:20 np0005539552 kernel: tap1ae85427-ec: entered promiscuous mode
Nov 29 03:15:20 np0005539552 NetworkManager[48926]: <info>  [1764404120.0491] manager: (tap1ae85427-ec): new Tun device (/org/freedesktop/NetworkManager/Devices/173)
Nov 29 03:15:20 np0005539552 nova_compute[233724]: 2025-11-29 08:15:20.078 233728 DEBUG nova.network.neutron [req-650653d8-e76d-4fc8-a7a4-288672d0453a req-bd5e813a-b0b3-47dc-abaf-fd4814c9f870 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Updated VIF entry in instance network info cache for port 1ae85427-ec76-4ddf-ba54-fa21a4d16202. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:15:20 np0005539552 nova_compute[233724]: 2025-11-29 08:15:20.079 233728 DEBUG nova.network.neutron [req-650653d8-e76d-4fc8-a7a4-288672d0453a req-bd5e813a-b0b3-47dc-abaf-fd4814c9f870 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Updating instance_info_cache with network_info: [{"id": "1ae85427-ec76-4ddf-ba54-fa21a4d16202", "address": "fa:16:3e:47:78:01", "network": {"id": "ab70b036-b5ab-4377-b081-f4b82fdb05c5", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1993818804-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "647c3591c2b940409293763c6c83e58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ae85427-ec", "ovs_interfaceid": "1ae85427-ec76-4ddf-ba54-fa21a4d16202", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:15:20 np0005539552 nova_compute[233724]: 2025-11-29 08:15:20.095 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:20 np0005539552 ovn_controller[133798]: 2025-11-29T08:15:20Z|00333|binding|INFO|Claiming lport 1ae85427-ec76-4ddf-ba54-fa21a4d16202 for this chassis.
Nov 29 03:15:20 np0005539552 ovn_controller[133798]: 2025-11-29T08:15:20Z|00334|binding|INFO|1ae85427-ec76-4ddf-ba54-fa21a4d16202: Claiming fa:16:3e:47:78:01 10.100.0.3
Nov 29 03:15:20 np0005539552 nova_compute[233724]: 2025-11-29 08:15:20.097 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:20 np0005539552 systemd-udevd[272597]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:15:20 np0005539552 nova_compute[233724]: 2025-11-29 08:15:20.099 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:20 np0005539552 nova_compute[233724]: 2025-11-29 08:15:20.103 233728 DEBUG oslo_concurrency.lockutils [req-650653d8-e76d-4fc8-a7a4-288672d0453a req-bd5e813a-b0b3-47dc-abaf-fd4814c9f870 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-27ab9d23-2828-4fea-8ff9-99751c744ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:20.109 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:78:01 10.100.0.3'], port_security=['fa:16:3e:47:78:01 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '27ab9d23-2828-4fea-8ff9-99751c744ded', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ab70b036-b5ab-4377-b081-f4b82fdb05c5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '647c3591c2b940409293763c6c83e58c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '41c3698d-cc27-49de-8078-b06ee82fc1d7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8ab88e7d-4131-470c-a431-4c951fbab973, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=1ae85427-ec76-4ddf-ba54-fa21a4d16202) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:20.110 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 1ae85427-ec76-4ddf-ba54-fa21a4d16202 in datapath ab70b036-b5ab-4377-b081-f4b82fdb05c5 bound to our chassis#033[00m
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:20.111 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ab70b036-b5ab-4377-b081-f4b82fdb05c5#033[00m
Nov 29 03:15:20 np0005539552 NetworkManager[48926]: <info>  [1764404120.1162] device (tap1ae85427-ec): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:15:20 np0005539552 NetworkManager[48926]: <info>  [1764404120.1174] device (tap1ae85427-ec): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:20.125 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[42f6f526-a94f-489c-b768-c47866311990]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:20.126 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapab70b036-b1 in ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:20.129 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapab70b036-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:20.129 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[bbe17c41-ee94-4753-ad06-0ca87eafbd56]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:20.130 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[79b33352-bced-4f88-aa57-45d91e84700f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:20 np0005539552 systemd-machined[196379]: New machine qemu-37-instance-00000061.
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:20.145 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[571879a5-604f-4b83-ae1f-a67b7249dd99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:20 np0005539552 systemd[1]: Started Virtual Machine qemu-37-instance-00000061.
Nov 29 03:15:20 np0005539552 nova_compute[233724]: 2025-11-29 08:15:20.170 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:20.171 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6e905794-3be2-4293-9c33-ef16fe754fcf]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:20 np0005539552 ovn_controller[133798]: 2025-11-29T08:15:20Z|00335|binding|INFO|Setting lport 1ae85427-ec76-4ddf-ba54-fa21a4d16202 ovn-installed in OVS
Nov 29 03:15:20 np0005539552 ovn_controller[133798]: 2025-11-29T08:15:20Z|00336|binding|INFO|Setting lport 1ae85427-ec76-4ddf-ba54-fa21a4d16202 up in Southbound
Nov 29 03:15:20 np0005539552 nova_compute[233724]: 2025-11-29 08:15:20.175 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:20.198 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[f92c109e-d984-4dc5-b2b9-4d2103408569]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:20 np0005539552 NetworkManager[48926]: <info>  [1764404120.2069] manager: (tapab70b036-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/174)
Nov 29 03:15:20 np0005539552 systemd-udevd[272601]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:20.206 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f6f76bcd-5ddf-486f-886e-5063e0d2f546]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:20.234 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[cfd3bd16-fde8-425b-9f54-4698d7dcdeaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:20.237 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[a7285723-04b0-458e-aad9-80d9e7df5769]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:20 np0005539552 NetworkManager[48926]: <info>  [1764404120.2560] device (tapab70b036-b0): carrier: link connected
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:20.261 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[5bbe12bd-2c26-4c04-a4e7-ed0186365898]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:20.274 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[764b39a1-9947-4b78-9166-3aded26ada17]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapab70b036-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:43:6f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 106], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 716594, 'reachable_time': 40748, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272633, 'error': None, 'target': 'ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:20.288 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[eb4ec615-8067-42e5-b13c-e4b7a83bdaae]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe14:436f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 716594, 'tstamp': 716594}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272634, 'error': None, 'target': 'ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:20.307 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[60b8f68d-100a-4653-8e86-0e85e7cc00c7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapab70b036-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:43:6f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 106], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 716594, 'reachable_time': 40748, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 272635, 'error': None, 'target': 'ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:20.340 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[90a51428-ceef-45ff-bc05-954b37b48ece]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:20.398 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4eaa88ab-d2e8-4fa1-b753-473629cc4cb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:20.399 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapab70b036-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:20.399 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:20.400 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapab70b036-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:20 np0005539552 nova_compute[233724]: 2025-11-29 08:15:20.401 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:20 np0005539552 NetworkManager[48926]: <info>  [1764404120.4022] manager: (tapab70b036-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/175)
Nov 29 03:15:20 np0005539552 kernel: tapab70b036-b0: entered promiscuous mode
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:20.403 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapab70b036-b0, col_values=(('external_ids', {'iface-id': 'b68836c1-a6f1-4b18-aa8f-0c55204e98dc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:20 np0005539552 nova_compute[233724]: 2025-11-29 08:15:20.404 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:20 np0005539552 ovn_controller[133798]: 2025-11-29T08:15:20Z|00337|binding|INFO|Releasing lport b68836c1-a6f1-4b18-aa8f-0c55204e98dc from this chassis (sb_readonly=0)
Nov 29 03:15:20 np0005539552 nova_compute[233724]: 2025-11-29 08:15:20.418 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:20.420 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ab70b036-b5ab-4377-b081-f4b82fdb05c5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ab70b036-b5ab-4377-b081-f4b82fdb05c5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:20.421 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[cc734fa8-89fa-4919-8bcf-0e5e3f74aa85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:20.422 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-ab70b036-b5ab-4377-b081-f4b82fdb05c5
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/ab70b036-b5ab-4377-b081-f4b82fdb05c5.pid.haproxy
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID ab70b036-b5ab-4377-b081-f4b82fdb05c5
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:20.422 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5', 'env', 'PROCESS_TAG=haproxy-ab70b036-b5ab-4377-b081-f4b82fdb05c5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ab70b036-b5ab-4377-b081-f4b82fdb05c5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:15:20 np0005539552 nova_compute[233724]: 2025-11-29 08:15:20.446 233728 DEBUG nova.compute.manager [req-362ea4c3-b32f-4ca5-ab77-071d728d3b50 req-76988be3-1505-493c-89e2-c5a45aee5477 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Received event network-vif-plugged-1ae85427-ec76-4ddf-ba54-fa21a4d16202 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:15:20 np0005539552 nova_compute[233724]: 2025-11-29 08:15:20.447 233728 DEBUG oslo_concurrency.lockutils [req-362ea4c3-b32f-4ca5-ab77-071d728d3b50 req-76988be3-1505-493c-89e2-c5a45aee5477 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "27ab9d23-2828-4fea-8ff9-99751c744ded-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:20 np0005539552 nova_compute[233724]: 2025-11-29 08:15:20.447 233728 DEBUG oslo_concurrency.lockutils [req-362ea4c3-b32f-4ca5-ab77-071d728d3b50 req-76988be3-1505-493c-89e2-c5a45aee5477 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "27ab9d23-2828-4fea-8ff9-99751c744ded-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:20 np0005539552 nova_compute[233724]: 2025-11-29 08:15:20.447 233728 DEBUG oslo_concurrency.lockutils [req-362ea4c3-b32f-4ca5-ab77-071d728d3b50 req-76988be3-1505-493c-89e2-c5a45aee5477 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "27ab9d23-2828-4fea-8ff9-99751c744ded-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:20 np0005539552 nova_compute[233724]: 2025-11-29 08:15:20.448 233728 DEBUG nova.compute.manager [req-362ea4c3-b32f-4ca5-ab77-071d728d3b50 req-76988be3-1505-493c-89e2-c5a45aee5477 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Processing event network-vif-plugged-1ae85427-ec76-4ddf-ba54-fa21a4d16202 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:20.622 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:20.623 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:20.623 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:20 np0005539552 podman[272668]: 2025-11-29 08:15:20.810726495 +0000 UTC m=+0.046902191 container create 62ea53cbcffce23f657cda80b9c81e7af6684d8a1eb007e0c38766e572acb210 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 03:15:20 np0005539552 systemd[1]: Started libpod-conmon-62ea53cbcffce23f657cda80b9c81e7af6684d8a1eb007e0c38766e572acb210.scope.
Nov 29 03:15:20 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:15:20 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3594bb8fbe02bbab9ba38403d469998df0739a6a7bc72dbafb000edc837e0ea/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:15:20 np0005539552 podman[272668]: 2025-11-29 08:15:20.785100926 +0000 UTC m=+0.021276642 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:15:20 np0005539552 podman[272668]: 2025-11-29 08:15:20.888144735 +0000 UTC m=+0.124320451 container init 62ea53cbcffce23f657cda80b9c81e7af6684d8a1eb007e0c38766e572acb210 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true)
Nov 29 03:15:20 np0005539552 podman[272668]: 2025-11-29 08:15:20.893219531 +0000 UTC m=+0.129395227 container start 62ea53cbcffce23f657cda80b9c81e7af6684d8a1eb007e0c38766e572acb210 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2)
Nov 29 03:15:20 np0005539552 neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5[272683]: [NOTICE]   (272687) : New worker (272689) forked
Nov 29 03:15:20 np0005539552 neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5[272683]: [NOTICE]   (272687) : Loading success.
Nov 29 03:15:21 np0005539552 nova_compute[233724]: 2025-11-29 08:15:21.088 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:21 np0005539552 nova_compute[233724]: 2025-11-29 08:15:21.110 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404121.1099293, 27ab9d23-2828-4fea-8ff9-99751c744ded => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:15:21 np0005539552 nova_compute[233724]: 2025-11-29 08:15:21.110 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] VM Started (Lifecycle Event)#033[00m
Nov 29 03:15:21 np0005539552 nova_compute[233724]: 2025-11-29 08:15:21.112 233728 DEBUG nova.compute.manager [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:15:21 np0005539552 nova_compute[233724]: 2025-11-29 08:15:21.119 233728 DEBUG nova.virt.libvirt.driver [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:15:21 np0005539552 nova_compute[233724]: 2025-11-29 08:15:21.123 233728 INFO nova.virt.libvirt.driver [-] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Instance spawned successfully.#033[00m
Nov 29 03:15:21 np0005539552 nova_compute[233724]: 2025-11-29 08:15:21.123 233728 DEBUG nova.virt.libvirt.driver [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:15:21 np0005539552 nova_compute[233724]: 2025-11-29 08:15:21.140 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:15:21 np0005539552 nova_compute[233724]: 2025-11-29 08:15:21.149 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:15:21 np0005539552 nova_compute[233724]: 2025-11-29 08:15:21.157 233728 DEBUG nova.virt.libvirt.driver [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:15:21 np0005539552 nova_compute[233724]: 2025-11-29 08:15:21.158 233728 DEBUG nova.virt.libvirt.driver [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:15:21 np0005539552 nova_compute[233724]: 2025-11-29 08:15:21.159 233728 DEBUG nova.virt.libvirt.driver [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:15:21 np0005539552 nova_compute[233724]: 2025-11-29 08:15:21.159 233728 DEBUG nova.virt.libvirt.driver [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:15:21 np0005539552 nova_compute[233724]: 2025-11-29 08:15:21.160 233728 DEBUG nova.virt.libvirt.driver [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:15:21 np0005539552 nova_compute[233724]: 2025-11-29 08:15:21.161 233728 DEBUG nova.virt.libvirt.driver [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:15:21 np0005539552 nova_compute[233724]: 2025-11-29 08:15:21.173 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:15:21 np0005539552 nova_compute[233724]: 2025-11-29 08:15:21.174 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404121.1125662, 27ab9d23-2828-4fea-8ff9-99751c744ded => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:15:21 np0005539552 nova_compute[233724]: 2025-11-29 08:15:21.175 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:15:21 np0005539552 nova_compute[233724]: 2025-11-29 08:15:21.196 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:15:21 np0005539552 nova_compute[233724]: 2025-11-29 08:15:21.199 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404121.1181533, 27ab9d23-2828-4fea-8ff9-99751c744ded => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:15:21 np0005539552 nova_compute[233724]: 2025-11-29 08:15:21.199 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:15:21 np0005539552 nova_compute[233724]: 2025-11-29 08:15:21.221 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:15:21 np0005539552 nova_compute[233724]: 2025-11-29 08:15:21.223 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:15:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:21.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:21 np0005539552 nova_compute[233724]: 2025-11-29 08:15:21.230 233728 INFO nova.compute.manager [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Took 9.33 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:15:21 np0005539552 nova_compute[233724]: 2025-11-29 08:15:21.232 233728 DEBUG nova.compute.manager [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:15:21 np0005539552 nova_compute[233724]: 2025-11-29 08:15:21.239 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:15:21 np0005539552 nova_compute[233724]: 2025-11-29 08:15:21.284 233728 INFO nova.compute.manager [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Took 10.24 seconds to build instance.#033[00m
Nov 29 03:15:21 np0005539552 nova_compute[233724]: 2025-11-29 08:15:21.304 233728 DEBUG oslo_concurrency.lockutils [None req-f2e4daa3-fe4c-44e8-80c3-551dc5911d21 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "27ab9d23-2828-4fea-8ff9-99751c744ded" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.322s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:21.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:22 np0005539552 nova_compute[233724]: 2025-11-29 08:15:22.577 233728 DEBUG nova.compute.manager [req-d28bc20b-c6f0-44e6-9e13-735cae59508b req-b72e6869-589e-4d41-bd4b-d8fae599b743 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Received event network-vif-plugged-1ae85427-ec76-4ddf-ba54-fa21a4d16202 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:15:22 np0005539552 nova_compute[233724]: 2025-11-29 08:15:22.578 233728 DEBUG oslo_concurrency.lockutils [req-d28bc20b-c6f0-44e6-9e13-735cae59508b req-b72e6869-589e-4d41-bd4b-d8fae599b743 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "27ab9d23-2828-4fea-8ff9-99751c744ded-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:22 np0005539552 nova_compute[233724]: 2025-11-29 08:15:22.578 233728 DEBUG oslo_concurrency.lockutils [req-d28bc20b-c6f0-44e6-9e13-735cae59508b req-b72e6869-589e-4d41-bd4b-d8fae599b743 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "27ab9d23-2828-4fea-8ff9-99751c744ded-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:22 np0005539552 nova_compute[233724]: 2025-11-29 08:15:22.578 233728 DEBUG oslo_concurrency.lockutils [req-d28bc20b-c6f0-44e6-9e13-735cae59508b req-b72e6869-589e-4d41-bd4b-d8fae599b743 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "27ab9d23-2828-4fea-8ff9-99751c744ded-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:22 np0005539552 nova_compute[233724]: 2025-11-29 08:15:22.579 233728 DEBUG nova.compute.manager [req-d28bc20b-c6f0-44e6-9e13-735cae59508b req-b72e6869-589e-4d41-bd4b-d8fae599b743 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] No waiting events found dispatching network-vif-plugged-1ae85427-ec76-4ddf-ba54-fa21a4d16202 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:15:22 np0005539552 nova_compute[233724]: 2025-11-29 08:15:22.579 233728 WARNING nova.compute.manager [req-d28bc20b-c6f0-44e6-9e13-735cae59508b req-b72e6869-589e-4d41-bd4b-d8fae599b743 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Received unexpected event network-vif-plugged-1ae85427-ec76-4ddf-ba54-fa21a4d16202 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:15:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:23.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:23.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:23 np0005539552 nova_compute[233724]: 2025-11-29 08:15:23.777 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:23 np0005539552 nova_compute[233724]: 2025-11-29 08:15:23.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:15:23 np0005539552 nova_compute[233724]: 2025-11-29 08:15:23.956 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:23 np0005539552 nova_compute[233724]: 2025-11-29 08:15:23.957 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:23 np0005539552 nova_compute[233724]: 2025-11-29 08:15:23.957 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:23 np0005539552 nova_compute[233724]: 2025-11-29 08:15:23.957 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:15:23 np0005539552 nova_compute[233724]: 2025-11-29 08:15:23.958 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:24 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:15:24 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3301656376' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:15:24 np0005539552 nova_compute[233724]: 2025-11-29 08:15:24.404 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:24 np0005539552 nova_compute[233724]: 2025-11-29 08:15:24.535 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000061 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:15:24 np0005539552 nova_compute[233724]: 2025-11-29 08:15:24.536 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000061 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:15:24 np0005539552 nova_compute[233724]: 2025-11-29 08:15:24.556 233728 DEBUG oslo_concurrency.lockutils [None req-1456cc64-b726-473a-8c7c-c2db8797cfe2 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Acquiring lock "27ab9d23-2828-4fea-8ff9-99751c744ded" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:24 np0005539552 nova_compute[233724]: 2025-11-29 08:15:24.556 233728 DEBUG oslo_concurrency.lockutils [None req-1456cc64-b726-473a-8c7c-c2db8797cfe2 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "27ab9d23-2828-4fea-8ff9-99751c744ded" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:24 np0005539552 nova_compute[233724]: 2025-11-29 08:15:24.557 233728 DEBUG oslo_concurrency.lockutils [None req-1456cc64-b726-473a-8c7c-c2db8797cfe2 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Acquiring lock "27ab9d23-2828-4fea-8ff9-99751c744ded-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:24 np0005539552 nova_compute[233724]: 2025-11-29 08:15:24.557 233728 DEBUG oslo_concurrency.lockutils [None req-1456cc64-b726-473a-8c7c-c2db8797cfe2 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "27ab9d23-2828-4fea-8ff9-99751c744ded-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:24 np0005539552 nova_compute[233724]: 2025-11-29 08:15:24.558 233728 DEBUG oslo_concurrency.lockutils [None req-1456cc64-b726-473a-8c7c-c2db8797cfe2 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "27ab9d23-2828-4fea-8ff9-99751c744ded-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:24 np0005539552 nova_compute[233724]: 2025-11-29 08:15:24.559 233728 INFO nova.compute.manager [None req-1456cc64-b726-473a-8c7c-c2db8797cfe2 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Terminating instance#033[00m
Nov 29 03:15:24 np0005539552 nova_compute[233724]: 2025-11-29 08:15:24.560 233728 DEBUG nova.compute.manager [None req-1456cc64-b726-473a-8c7c-c2db8797cfe2 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:15:24 np0005539552 kernel: tap1ae85427-ec (unregistering): left promiscuous mode
Nov 29 03:15:24 np0005539552 NetworkManager[48926]: <info>  [1764404124.5971] device (tap1ae85427-ec): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:15:24 np0005539552 ovn_controller[133798]: 2025-11-29T08:15:24Z|00338|binding|INFO|Releasing lport 1ae85427-ec76-4ddf-ba54-fa21a4d16202 from this chassis (sb_readonly=0)
Nov 29 03:15:24 np0005539552 ovn_controller[133798]: 2025-11-29T08:15:24Z|00339|binding|INFO|Setting lport 1ae85427-ec76-4ddf-ba54-fa21a4d16202 down in Southbound
Nov 29 03:15:24 np0005539552 ovn_controller[133798]: 2025-11-29T08:15:24Z|00340|binding|INFO|Removing iface tap1ae85427-ec ovn-installed in OVS
Nov 29 03:15:24 np0005539552 nova_compute[233724]: 2025-11-29 08:15:24.611 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:24.620 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:78:01 10.100.0.3'], port_security=['fa:16:3e:47:78:01 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '27ab9d23-2828-4fea-8ff9-99751c744ded', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ab70b036-b5ab-4377-b081-f4b82fdb05c5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '647c3591c2b940409293763c6c83e58c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '41c3698d-cc27-49de-8078-b06ee82fc1d7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8ab88e7d-4131-470c-a431-4c951fbab973, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=1ae85427-ec76-4ddf-ba54-fa21a4d16202) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:15:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:24.621 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 1ae85427-ec76-4ddf-ba54-fa21a4d16202 in datapath ab70b036-b5ab-4377-b081-f4b82fdb05c5 unbound from our chassis#033[00m
Nov 29 03:15:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:24.622 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ab70b036-b5ab-4377-b081-f4b82fdb05c5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:15:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:24.623 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2931c4a8-2b3d-4c7e-b727-5a394c6d3125]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:24.623 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5 namespace which is not needed anymore#033[00m
Nov 29 03:15:24 np0005539552 nova_compute[233724]: 2025-11-29 08:15:24.627 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:24 np0005539552 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000061.scope: Deactivated successfully.
Nov 29 03:15:24 np0005539552 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000061.scope: Consumed 4.528s CPU time.
Nov 29 03:15:24 np0005539552 systemd-machined[196379]: Machine qemu-37-instance-00000061 terminated.
Nov 29 03:15:24 np0005539552 nova_compute[233724]: 2025-11-29 08:15:24.732 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:15:24 np0005539552 nova_compute[233724]: 2025-11-29 08:15:24.734 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4269MB free_disk=20.807247161865234GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:15:24 np0005539552 nova_compute[233724]: 2025-11-29 08:15:24.734 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:24 np0005539552 nova_compute[233724]: 2025-11-29 08:15:24.734 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:24 np0005539552 neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5[272683]: [NOTICE]   (272687) : haproxy version is 2.8.14-c23fe91
Nov 29 03:15:24 np0005539552 neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5[272683]: [NOTICE]   (272687) : path to executable is /usr/sbin/haproxy
Nov 29 03:15:24 np0005539552 neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5[272683]: [WARNING]  (272687) : Exiting Master process...
Nov 29 03:15:24 np0005539552 neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5[272683]: [WARNING]  (272687) : Exiting Master process...
Nov 29 03:15:24 np0005539552 neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5[272683]: [ALERT]    (272687) : Current worker (272689) exited with code 143 (Terminated)
Nov 29 03:15:24 np0005539552 neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5[272683]: [WARNING]  (272687) : All workers exited. Exiting... (0)
Nov 29 03:15:24 np0005539552 systemd[1]: libpod-62ea53cbcffce23f657cda80b9c81e7af6684d8a1eb007e0c38766e572acb210.scope: Deactivated successfully.
Nov 29 03:15:24 np0005539552 conmon[272683]: conmon 62ea53cbcffce23f657c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-62ea53cbcffce23f657cda80b9c81e7af6684d8a1eb007e0c38766e572acb210.scope/container/memory.events
Nov 29 03:15:24 np0005539552 podman[272787]: 2025-11-29 08:15:24.756478384 +0000 UTC m=+0.044670051 container died 62ea53cbcffce23f657cda80b9c81e7af6684d8a1eb007e0c38766e572acb210 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:15:24 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-62ea53cbcffce23f657cda80b9c81e7af6684d8a1eb007e0c38766e572acb210-userdata-shm.mount: Deactivated successfully.
Nov 29 03:15:24 np0005539552 systemd[1]: var-lib-containers-storage-overlay-b3594bb8fbe02bbab9ba38403d469998df0739a6a7bc72dbafb000edc837e0ea-merged.mount: Deactivated successfully.
Nov 29 03:15:24 np0005539552 nova_compute[233724]: 2025-11-29 08:15:24.789 233728 INFO nova.virt.libvirt.driver [-] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Instance destroyed successfully.#033[00m
Nov 29 03:15:24 np0005539552 nova_compute[233724]: 2025-11-29 08:15:24.790 233728 DEBUG nova.objects.instance [None req-1456cc64-b726-473a-8c7c-c2db8797cfe2 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lazy-loading 'resources' on Instance uuid 27ab9d23-2828-4fea-8ff9-99751c744ded obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:15:24 np0005539552 podman[272787]: 2025-11-29 08:15:24.798348849 +0000 UTC m=+0.086540516 container cleanup 62ea53cbcffce23f657cda80b9c81e7af6684d8a1eb007e0c38766e572acb210 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:15:24 np0005539552 systemd[1]: libpod-conmon-62ea53cbcffce23f657cda80b9c81e7af6684d8a1eb007e0c38766e572acb210.scope: Deactivated successfully.
Nov 29 03:15:24 np0005539552 nova_compute[233724]: 2025-11-29 08:15:24.838 233728 DEBUG nova.virt.libvirt.vif [None req-1456cc64-b726-473a-8c7c-c2db8797cfe2 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:15:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1895741234',display_name='tempest-tempest.common.compute-instance-1895741234-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1895741234-1',id=97,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:15:21Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='647c3591c2b940409293763c6c83e58c',ramdisk_id='',reservation_id='r-vw058oln',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-2058984420',owner_user_name='tempest-MultipleCreateTestJSON-2058984420-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:15:21Z,user_data=None,user_id='0bd9df09f1324e3f9dba099f03ffe1c6',uuid=27ab9d23-2828-4fea-8ff9-99751c744ded,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1ae85427-ec76-4ddf-ba54-fa21a4d16202", "address": "fa:16:3e:47:78:01", "network": {"id": "ab70b036-b5ab-4377-b081-f4b82fdb05c5", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1993818804-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "647c3591c2b940409293763c6c83e58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ae85427-ec", "ovs_interfaceid": "1ae85427-ec76-4ddf-ba54-fa21a4d16202", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:15:24 np0005539552 nova_compute[233724]: 2025-11-29 08:15:24.838 233728 DEBUG nova.network.os_vif_util [None req-1456cc64-b726-473a-8c7c-c2db8797cfe2 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Converting VIF {"id": "1ae85427-ec76-4ddf-ba54-fa21a4d16202", "address": "fa:16:3e:47:78:01", "network": {"id": "ab70b036-b5ab-4377-b081-f4b82fdb05c5", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1993818804-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "647c3591c2b940409293763c6c83e58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ae85427-ec", "ovs_interfaceid": "1ae85427-ec76-4ddf-ba54-fa21a4d16202", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:15:24 np0005539552 nova_compute[233724]: 2025-11-29 08:15:24.839 233728 DEBUG nova.network.os_vif_util [None req-1456cc64-b726-473a-8c7c-c2db8797cfe2 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:78:01,bridge_name='br-int',has_traffic_filtering=True,id=1ae85427-ec76-4ddf-ba54-fa21a4d16202,network=Network(ab70b036-b5ab-4377-b081-f4b82fdb05c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ae85427-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:15:24 np0005539552 nova_compute[233724]: 2025-11-29 08:15:24.839 233728 DEBUG os_vif [None req-1456cc64-b726-473a-8c7c-c2db8797cfe2 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:78:01,bridge_name='br-int',has_traffic_filtering=True,id=1ae85427-ec76-4ddf-ba54-fa21a4d16202,network=Network(ab70b036-b5ab-4377-b081-f4b82fdb05c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ae85427-ec') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:15:24 np0005539552 nova_compute[233724]: 2025-11-29 08:15:24.841 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:24 np0005539552 nova_compute[233724]: 2025-11-29 08:15:24.841 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ae85427-ec, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:24 np0005539552 nova_compute[233724]: 2025-11-29 08:15:24.843 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:24 np0005539552 nova_compute[233724]: 2025-11-29 08:15:24.845 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:15:24 np0005539552 nova_compute[233724]: 2025-11-29 08:15:24.848 233728 INFO os_vif [None req-1456cc64-b726-473a-8c7c-c2db8797cfe2 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:78:01,bridge_name='br-int',has_traffic_filtering=True,id=1ae85427-ec76-4ddf-ba54-fa21a4d16202,network=Network(ab70b036-b5ab-4377-b081-f4b82fdb05c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ae85427-ec')#033[00m
Nov 29 03:15:24 np0005539552 podman[272829]: 2025-11-29 08:15:24.851789694 +0000 UTC m=+0.035342650 container remove 62ea53cbcffce23f657cda80b9c81e7af6684d8a1eb007e0c38766e572acb210 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:15:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:24.857 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[72e5f6ef-d6e3-423a-8735-0c322f4e883d]: (4, ('Sat Nov 29 08:15:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5 (62ea53cbcffce23f657cda80b9c81e7af6684d8a1eb007e0c38766e572acb210)\n62ea53cbcffce23f657cda80b9c81e7af6684d8a1eb007e0c38766e572acb210\nSat Nov 29 08:15:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5 (62ea53cbcffce23f657cda80b9c81e7af6684d8a1eb007e0c38766e572acb210)\n62ea53cbcffce23f657cda80b9c81e7af6684d8a1eb007e0c38766e572acb210\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:24.858 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[14b0a29b-a3f2-4ba7-ba4d-1cbd1cac2fba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:24.859 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapab70b036-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:24 np0005539552 kernel: tapab70b036-b0: left promiscuous mode
Nov 29 03:15:24 np0005539552 nova_compute[233724]: 2025-11-29 08:15:24.864 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:24 np0005539552 nova_compute[233724]: 2025-11-29 08:15:24.874 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:24.877 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[271138fd-bbfa-43d6-ac4e-dbfa86aa0c8d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:24.891 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[9d07068a-cb5c-4604-9000-0a159f1685f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:24.892 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[822e3010-5967-4766-8c35-e2e38b8c7ad7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:24.907 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[12f776f1-d6cc-44c4-ae43-a0d4b76ea5b2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 716588, 'reachable_time': 20643, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272862, 'error': None, 'target': 'ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:24.909 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:15:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:24.909 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[9d453575-16f1-4e9b-9210-73f631fc047c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:24 np0005539552 systemd[1]: run-netns-ovnmeta\x2dab70b036\x2db5ab\x2d4377\x2db081\x2df4b82fdb05c5.mount: Deactivated successfully.
Nov 29 03:15:24 np0005539552 nova_compute[233724]: 2025-11-29 08:15:24.943 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 27ab9d23-2828-4fea-8ff9-99751c744ded actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:15:24 np0005539552 nova_compute[233724]: 2025-11-29 08:15:24.943 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:15:24 np0005539552 nova_compute[233724]: 2025-11-29 08:15:24.944 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:15:25 np0005539552 nova_compute[233724]: 2025-11-29 08:15:25.215 233728 DEBUG nova.compute.manager [req-938c1fa3-feb4-4931-9539-050786d5b7a4 req-43c9b1af-9018-491e-85ca-1de82626be9a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Received event network-vif-unplugged-1ae85427-ec76-4ddf-ba54-fa21a4d16202 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:15:25 np0005539552 nova_compute[233724]: 2025-11-29 08:15:25.215 233728 DEBUG oslo_concurrency.lockutils [req-938c1fa3-feb4-4931-9539-050786d5b7a4 req-43c9b1af-9018-491e-85ca-1de82626be9a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "27ab9d23-2828-4fea-8ff9-99751c744ded-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:25 np0005539552 nova_compute[233724]: 2025-11-29 08:15:25.215 233728 DEBUG oslo_concurrency.lockutils [req-938c1fa3-feb4-4931-9539-050786d5b7a4 req-43c9b1af-9018-491e-85ca-1de82626be9a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "27ab9d23-2828-4fea-8ff9-99751c744ded-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:25 np0005539552 nova_compute[233724]: 2025-11-29 08:15:25.216 233728 DEBUG oslo_concurrency.lockutils [req-938c1fa3-feb4-4931-9539-050786d5b7a4 req-43c9b1af-9018-491e-85ca-1de82626be9a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "27ab9d23-2828-4fea-8ff9-99751c744ded-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:25 np0005539552 nova_compute[233724]: 2025-11-29 08:15:25.216 233728 DEBUG nova.compute.manager [req-938c1fa3-feb4-4931-9539-050786d5b7a4 req-43c9b1af-9018-491e-85ca-1de82626be9a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] No waiting events found dispatching network-vif-unplugged-1ae85427-ec76-4ddf-ba54-fa21a4d16202 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:15:25 np0005539552 nova_compute[233724]: 2025-11-29 08:15:25.216 233728 DEBUG nova.compute.manager [req-938c1fa3-feb4-4931-9539-050786d5b7a4 req-43c9b1af-9018-491e-85ca-1de82626be9a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Received event network-vif-unplugged-1ae85427-ec76-4ddf-ba54-fa21a4d16202 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:15:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:15:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:25.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:15:25 np0005539552 nova_compute[233724]: 2025-11-29 08:15:25.309 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:25.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:25 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:15:25 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2726081362' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:15:25 np0005539552 nova_compute[233724]: 2025-11-29 08:15:25.734 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:25 np0005539552 nova_compute[233724]: 2025-11-29 08:15:25.740 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:15:25 np0005539552 nova_compute[233724]: 2025-11-29 08:15:25.774 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:15:25 np0005539552 nova_compute[233724]: 2025-11-29 08:15:25.813 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:15:25 np0005539552 nova_compute[233724]: 2025-11-29 08:15:25.814 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.080s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:26 np0005539552 nova_compute[233724]: 2025-11-29 08:15:26.090 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:26 np0005539552 nova_compute[233724]: 2025-11-29 08:15:26.217 233728 INFO nova.virt.libvirt.driver [None req-1456cc64-b726-473a-8c7c-c2db8797cfe2 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Deleting instance files /var/lib/nova/instances/27ab9d23-2828-4fea-8ff9-99751c744ded_del#033[00m
Nov 29 03:15:26 np0005539552 nova_compute[233724]: 2025-11-29 08:15:26.218 233728 INFO nova.virt.libvirt.driver [None req-1456cc64-b726-473a-8c7c-c2db8797cfe2 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Deletion of /var/lib/nova/instances/27ab9d23-2828-4fea-8ff9-99751c744ded_del complete#033[00m
Nov 29 03:15:26 np0005539552 nova_compute[233724]: 2025-11-29 08:15:26.291 233728 INFO nova.compute.manager [None req-1456cc64-b726-473a-8c7c-c2db8797cfe2 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Took 1.73 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:15:26 np0005539552 nova_compute[233724]: 2025-11-29 08:15:26.291 233728 DEBUG oslo.service.loopingcall [None req-1456cc64-b726-473a-8c7c-c2db8797cfe2 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:15:26 np0005539552 nova_compute[233724]: 2025-11-29 08:15:26.292 233728 DEBUG nova.compute.manager [-] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:15:26 np0005539552 nova_compute[233724]: 2025-11-29 08:15:26.292 233728 DEBUG nova.network.neutron [-] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:15:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:27.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:27 np0005539552 nova_compute[233724]: 2025-11-29 08:15:27.401 233728 DEBUG nova.compute.manager [req-5e32af8e-e2e7-4521-8df9-17ee1302baef req-05c62ba7-2b3b-46ba-be17-b1ea4e506f6e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Received event network-vif-plugged-1ae85427-ec76-4ddf-ba54-fa21a4d16202 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:15:27 np0005539552 nova_compute[233724]: 2025-11-29 08:15:27.401 233728 DEBUG oslo_concurrency.lockutils [req-5e32af8e-e2e7-4521-8df9-17ee1302baef req-05c62ba7-2b3b-46ba-be17-b1ea4e506f6e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "27ab9d23-2828-4fea-8ff9-99751c744ded-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:27 np0005539552 nova_compute[233724]: 2025-11-29 08:15:27.401 233728 DEBUG oslo_concurrency.lockutils [req-5e32af8e-e2e7-4521-8df9-17ee1302baef req-05c62ba7-2b3b-46ba-be17-b1ea4e506f6e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "27ab9d23-2828-4fea-8ff9-99751c744ded-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:27 np0005539552 nova_compute[233724]: 2025-11-29 08:15:27.402 233728 DEBUG oslo_concurrency.lockutils [req-5e32af8e-e2e7-4521-8df9-17ee1302baef req-05c62ba7-2b3b-46ba-be17-b1ea4e506f6e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "27ab9d23-2828-4fea-8ff9-99751c744ded-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:27 np0005539552 nova_compute[233724]: 2025-11-29 08:15:27.402 233728 DEBUG nova.compute.manager [req-5e32af8e-e2e7-4521-8df9-17ee1302baef req-05c62ba7-2b3b-46ba-be17-b1ea4e506f6e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] No waiting events found dispatching network-vif-plugged-1ae85427-ec76-4ddf-ba54-fa21a4d16202 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:15:27 np0005539552 nova_compute[233724]: 2025-11-29 08:15:27.402 233728 WARNING nova.compute.manager [req-5e32af8e-e2e7-4521-8df9-17ee1302baef req-05c62ba7-2b3b-46ba-be17-b1ea4e506f6e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Received unexpected event network-vif-plugged-1ae85427-ec76-4ddf-ba54-fa21a4d16202 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:15:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:27.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:27 np0005539552 nova_compute[233724]: 2025-11-29 08:15:27.803 233728 DEBUG nova.network.neutron [-] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:15:27 np0005539552 nova_compute[233724]: 2025-11-29 08:15:27.872 233728 INFO nova.compute.manager [-] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Took 1.58 seconds to deallocate network for instance.#033[00m
Nov 29 03:15:27 np0005539552 nova_compute[233724]: 2025-11-29 08:15:27.960 233728 DEBUG oslo_concurrency.lockutils [None req-1456cc64-b726-473a-8c7c-c2db8797cfe2 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:27 np0005539552 nova_compute[233724]: 2025-11-29 08:15:27.961 233728 DEBUG oslo_concurrency.lockutils [None req-1456cc64-b726-473a-8c7c-c2db8797cfe2 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:28 np0005539552 nova_compute[233724]: 2025-11-29 08:15:28.315 233728 DEBUG oslo_concurrency.processutils [None req-1456cc64-b726-473a-8c7c-c2db8797cfe2 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:15:28 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3676672610' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:15:28 np0005539552 nova_compute[233724]: 2025-11-29 08:15:28.757 233728 DEBUG oslo_concurrency.processutils [None req-1456cc64-b726-473a-8c7c-c2db8797cfe2 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:28 np0005539552 nova_compute[233724]: 2025-11-29 08:15:28.764 233728 DEBUG nova.compute.provider_tree [None req-1456cc64-b726-473a-8c7c-c2db8797cfe2 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:15:28 np0005539552 nova_compute[233724]: 2025-11-29 08:15:28.801 233728 DEBUG nova.scheduler.client.report [None req-1456cc64-b726-473a-8c7c-c2db8797cfe2 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:15:28 np0005539552 nova_compute[233724]: 2025-11-29 08:15:28.827 233728 DEBUG oslo_concurrency.lockutils [None req-1456cc64-b726-473a-8c7c-c2db8797cfe2 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.866s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:28 np0005539552 nova_compute[233724]: 2025-11-29 08:15:28.860 233728 INFO nova.scheduler.client.report [None req-1456cc64-b726-473a-8c7c-c2db8797cfe2 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Deleted allocations for instance 27ab9d23-2828-4fea-8ff9-99751c744ded#033[00m
Nov 29 03:15:28 np0005539552 podman[272911]: 2025-11-29 08:15:28.964412477 +0000 UTC m=+0.050182369 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 03:15:28 np0005539552 podman[272910]: 2025-11-29 08:15:28.972866335 +0000 UTC m=+0.062334086 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd)
Nov 29 03:15:29 np0005539552 podman[272912]: 2025-11-29 08:15:29.000580349 +0000 UTC m=+0.079624690 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 03:15:29 np0005539552 nova_compute[233724]: 2025-11-29 08:15:29.107 233728 DEBUG oslo_concurrency.lockutils [None req-1456cc64-b726-473a-8c7c-c2db8797cfe2 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "27ab9d23-2828-4fea-8ff9-99751c744ded" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.550s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:29.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:29 np0005539552 nova_compute[233724]: 2025-11-29 08:15:29.531 233728 DEBUG nova.compute.manager [req-88fa1af0-ef2e-462c-bdc7-b15441f46b7b req-886e36d5-223d-4a2b-8476-0ac0d677feb3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Received event network-vif-deleted-1ae85427-ec76-4ddf-ba54-fa21a4d16202 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:15:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:29.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:29 np0005539552 nova_compute[233724]: 2025-11-29 08:15:29.814 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:15:29 np0005539552 nova_compute[233724]: 2025-11-29 08:15:29.815 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:15:29 np0005539552 nova_compute[233724]: 2025-11-29 08:15:29.815 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:15:29 np0005539552 nova_compute[233724]: 2025-11-29 08:15:29.843 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:29 np0005539552 nova_compute[233724]: 2025-11-29 08:15:29.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:15:30 np0005539552 nova_compute[233724]: 2025-11-29 08:15:30.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:15:30 np0005539552 nova_compute[233724]: 2025-11-29 08:15:30.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:15:31 np0005539552 nova_compute[233724]: 2025-11-29 08:15:31.091 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:31 np0005539552 nova_compute[233724]: 2025-11-29 08:15:31.102 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:15:31 np0005539552 nova_compute[233724]: 2025-11-29 08:15:31.103 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:15:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:31.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:31.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:31 np0005539552 nova_compute[233724]: 2025-11-29 08:15:31.922 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:15:32 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:32 np0005539552 nova_compute[233724]: 2025-11-29 08:15:32.919 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:15:32 np0005539552 nova_compute[233724]: 2025-11-29 08:15:32.922 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:15:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:33.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:33.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:34 np0005539552 nova_compute[233724]: 2025-11-29 08:15:34.848 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:35.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:35.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:36 np0005539552 nova_compute[233724]: 2025-11-29 08:15:36.093 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:37 np0005539552 nova_compute[233724]: 2025-11-29 08:15:37.027 233728 DEBUG oslo_concurrency.lockutils [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Acquiring lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:37 np0005539552 nova_compute[233724]: 2025-11-29 08:15:37.027 233728 DEBUG oslo_concurrency.lockutils [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:37 np0005539552 nova_compute[233724]: 2025-11-29 08:15:37.057 233728 DEBUG nova.compute.manager [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:15:37 np0005539552 nova_compute[233724]: 2025-11-29 08:15:37.177 233728 DEBUG oslo_concurrency.lockutils [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:37 np0005539552 nova_compute[233724]: 2025-11-29 08:15:37.178 233728 DEBUG oslo_concurrency.lockutils [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:37 np0005539552 nova_compute[233724]: 2025-11-29 08:15:37.185 233728 DEBUG nova.virt.hardware [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:15:37 np0005539552 nova_compute[233724]: 2025-11-29 08:15:37.185 233728 INFO nova.compute.claims [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:15:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:15:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:37.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:15:37 np0005539552 nova_compute[233724]: 2025-11-29 08:15:37.452 233728 DEBUG oslo_concurrency.processutils [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:37.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:15:37 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2792369445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:15:37 np0005539552 nova_compute[233724]: 2025-11-29 08:15:37.871 233728 DEBUG oslo_concurrency.processutils [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:37 np0005539552 nova_compute[233724]: 2025-11-29 08:15:37.877 233728 DEBUG nova.compute.provider_tree [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:15:37 np0005539552 nova_compute[233724]: 2025-11-29 08:15:37.897 233728 DEBUG nova.scheduler.client.report [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:15:37 np0005539552 nova_compute[233724]: 2025-11-29 08:15:37.942 233728 DEBUG oslo_concurrency.lockutils [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:37 np0005539552 nova_compute[233724]: 2025-11-29 08:15:37.943 233728 DEBUG nova.compute.manager [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:15:38 np0005539552 nova_compute[233724]: 2025-11-29 08:15:38.012 233728 DEBUG nova.compute.manager [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:15:38 np0005539552 nova_compute[233724]: 2025-11-29 08:15:38.012 233728 DEBUG nova.network.neutron [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:15:38 np0005539552 nova_compute[233724]: 2025-11-29 08:15:38.052 233728 INFO nova.virt.libvirt.driver [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:15:38 np0005539552 nova_compute[233724]: 2025-11-29 08:15:38.088 233728 DEBUG nova.compute.manager [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:15:38 np0005539552 nova_compute[233724]: 2025-11-29 08:15:38.219 233728 DEBUG nova.compute.manager [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:15:38 np0005539552 nova_compute[233724]: 2025-11-29 08:15:38.221 233728 DEBUG nova.virt.libvirt.driver [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:15:38 np0005539552 nova_compute[233724]: 2025-11-29 08:15:38.222 233728 INFO nova.virt.libvirt.driver [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Creating image(s)
Nov 29 03:15:38 np0005539552 nova_compute[233724]: 2025-11-29 08:15:38.260 233728 DEBUG nova.storage.rbd_utils [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] rbd image 7dde4b1f-b13a-43ab-b40d-343e3c6e143e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:15:38 np0005539552 nova_compute[233724]: 2025-11-29 08:15:38.288 233728 DEBUG nova.storage.rbd_utils [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] rbd image 7dde4b1f-b13a-43ab-b40d-343e3c6e143e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:15:38 np0005539552 nova_compute[233724]: 2025-11-29 08:15:38.309 233728 DEBUG nova.storage.rbd_utils [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] rbd image 7dde4b1f-b13a-43ab-b40d-343e3c6e143e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:15:38 np0005539552 nova_compute[233724]: 2025-11-29 08:15:38.313 233728 DEBUG oslo_concurrency.processutils [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:15:38 np0005539552 nova_compute[233724]: 2025-11-29 08:15:38.369 233728 DEBUG nova.policy [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f4c89c9953854ecf96a802dc6055db9d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4da7fb77734a4135a6f8b5b70bed7a2f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:15:38 np0005539552 nova_compute[233724]: 2025-11-29 08:15:38.392 233728 DEBUG oslo_concurrency.processutils [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:15:38 np0005539552 nova_compute[233724]: 2025-11-29 08:15:38.393 233728 DEBUG oslo_concurrency.lockutils [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:15:38 np0005539552 nova_compute[233724]: 2025-11-29 08:15:38.394 233728 DEBUG oslo_concurrency.lockutils [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:15:38 np0005539552 nova_compute[233724]: 2025-11-29 08:15:38.394 233728 DEBUG oslo_concurrency.lockutils [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:15:38 np0005539552 nova_compute[233724]: 2025-11-29 08:15:38.417 233728 DEBUG nova.storage.rbd_utils [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] rbd image 7dde4b1f-b13a-43ab-b40d-343e3c6e143e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:15:38 np0005539552 nova_compute[233724]: 2025-11-29 08:15:38.421 233728 DEBUG oslo_concurrency.processutils [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 7dde4b1f-b13a-43ab-b40d-343e3c6e143e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:15:38 np0005539552 nova_compute[233724]: 2025-11-29 08:15:38.928 233728 DEBUG oslo_concurrency.processutils [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 7dde4b1f-b13a-43ab-b40d-343e3c6e143e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:15:38 np0005539552 nova_compute[233724]: 2025-11-29 08:15:38.992 233728 DEBUG nova.storage.rbd_utils [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] resizing rbd image 7dde4b1f-b13a-43ab-b40d-343e3c6e143e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:15:39 np0005539552 nova_compute[233724]: 2025-11-29 08:15:39.076 233728 DEBUG nova.objects.instance [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lazy-loading 'migration_context' on Instance uuid 7dde4b1f-b13a-43ab-b40d-343e3c6e143e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:15:39 np0005539552 nova_compute[233724]: 2025-11-29 08:15:39.098 233728 DEBUG nova.virt.libvirt.driver [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:15:39 np0005539552 nova_compute[233724]: 2025-11-29 08:15:39.098 233728 DEBUG nova.virt.libvirt.driver [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Ensure instance console log exists: /var/lib/nova/instances/7dde4b1f-b13a-43ab-b40d-343e3c6e143e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:15:39 np0005539552 nova_compute[233724]: 2025-11-29 08:15:39.099 233728 DEBUG oslo_concurrency.lockutils [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:15:39 np0005539552 nova_compute[233724]: 2025-11-29 08:15:39.100 233728 DEBUG oslo_concurrency.lockutils [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:15:39 np0005539552 nova_compute[233724]: 2025-11-29 08:15:39.100 233728 DEBUG oslo_concurrency.lockutils [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:15:39 np0005539552 nova_compute[233724]: 2025-11-29 08:15:39.249 233728 DEBUG oslo_concurrency.lockutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Acquiring lock "61d117f5-8412-446a-a3b7-cae3db576105" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:15:39 np0005539552 nova_compute[233724]: 2025-11-29 08:15:39.250 233728 DEBUG oslo_concurrency.lockutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "61d117f5-8412-446a-a3b7-cae3db576105" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:15:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:15:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:39.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:15:39 np0005539552 nova_compute[233724]: 2025-11-29 08:15:39.324 233728 DEBUG nova.compute.manager [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:15:39 np0005539552 nova_compute[233724]: 2025-11-29 08:15:39.435 233728 DEBUG oslo_concurrency.lockutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:15:39 np0005539552 nova_compute[233724]: 2025-11-29 08:15:39.436 233728 DEBUG oslo_concurrency.lockutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:15:39 np0005539552 nova_compute[233724]: 2025-11-29 08:15:39.441 233728 DEBUG nova.virt.hardware [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:15:39 np0005539552 nova_compute[233724]: 2025-11-29 08:15:39.441 233728 INFO nova.compute.claims [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Claim successful on node compute-2.ctlplane.example.com
Nov 29 03:15:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:15:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:39.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:15:39 np0005539552 nova_compute[233724]: 2025-11-29 08:15:39.768 233728 DEBUG oslo_concurrency.processutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:15:39 np0005539552 nova_compute[233724]: 2025-11-29 08:15:39.793 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404124.7862852, 27ab9d23-2828-4fea-8ff9-99751c744ded => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:15:39 np0005539552 nova_compute[233724]: 2025-11-29 08:15:39.794 233728 INFO nova.compute.manager [-] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] VM Stopped (Lifecycle Event)
Nov 29 03:15:39 np0005539552 nova_compute[233724]: 2025-11-29 08:15:39.842 233728 DEBUG nova.compute.manager [None req-c41b4515-84e7-4c82-a947-9bdff8beade2 - - - - - -] [instance: 27ab9d23-2828-4fea-8ff9-99751c744ded] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:15:39 np0005539552 nova_compute[233724]: 2025-11-29 08:15:39.850 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:40 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:15:40 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2195616368' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:15:40 np0005539552 nova_compute[233724]: 2025-11-29 08:15:40.213 233728 DEBUG oslo_concurrency.processutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:15:40 np0005539552 nova_compute[233724]: 2025-11-29 08:15:40.219 233728 DEBUG nova.compute.provider_tree [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:15:40 np0005539552 nova_compute[233724]: 2025-11-29 08:15:40.241 233728 DEBUG nova.scheduler.client.report [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:15:40 np0005539552 nova_compute[233724]: 2025-11-29 08:15:40.268 233728 DEBUG nova.network.neutron [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Successfully created port: 3ee6f630-8898-496e-8d41-aec58022b039 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:15:40 np0005539552 nova_compute[233724]: 2025-11-29 08:15:40.275 233728 DEBUG oslo_concurrency.lockutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:15:40 np0005539552 nova_compute[233724]: 2025-11-29 08:15:40.275 233728 DEBUG nova.compute.manager [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:15:40 np0005539552 nova_compute[233724]: 2025-11-29 08:15:40.405 233728 DEBUG nova.compute.manager [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:15:40 np0005539552 nova_compute[233724]: 2025-11-29 08:15:40.405 233728 DEBUG nova.network.neutron [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:15:40 np0005539552 nova_compute[233724]: 2025-11-29 08:15:40.478 233728 INFO nova.virt.libvirt.driver [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:15:40 np0005539552 nova_compute[233724]: 2025-11-29 08:15:40.503 233728 DEBUG nova.compute.manager [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:15:40 np0005539552 nova_compute[233724]: 2025-11-29 08:15:40.715 233728 DEBUG nova.compute.manager [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:15:40 np0005539552 nova_compute[233724]: 2025-11-29 08:15:40.716 233728 DEBUG nova.virt.libvirt.driver [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:15:40 np0005539552 nova_compute[233724]: 2025-11-29 08:15:40.716 233728 INFO nova.virt.libvirt.driver [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Creating image(s)
Nov 29 03:15:40 np0005539552 nova_compute[233724]: 2025-11-29 08:15:40.744 233728 DEBUG nova.storage.rbd_utils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] rbd image 61d117f5-8412-446a-a3b7-cae3db576105_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:15:40 np0005539552 nova_compute[233724]: 2025-11-29 08:15:40.773 233728 DEBUG nova.storage.rbd_utils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] rbd image 61d117f5-8412-446a-a3b7-cae3db576105_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:15:40 np0005539552 nova_compute[233724]: 2025-11-29 08:15:40.800 233728 DEBUG nova.storage.rbd_utils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] rbd image 61d117f5-8412-446a-a3b7-cae3db576105_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:15:40 np0005539552 nova_compute[233724]: 2025-11-29 08:15:40.805 233728 DEBUG oslo_concurrency.processutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:15:40 np0005539552 nova_compute[233724]: 2025-11-29 08:15:40.890 233728 DEBUG oslo_concurrency.processutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:15:40 np0005539552 nova_compute[233724]: 2025-11-29 08:15:40.891 233728 DEBUG oslo_concurrency.lockutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:15:40 np0005539552 nova_compute[233724]: 2025-11-29 08:15:40.892 233728 DEBUG oslo_concurrency.lockutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:15:40 np0005539552 nova_compute[233724]: 2025-11-29 08:15:40.892 233728 DEBUG oslo_concurrency.lockutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:15:40 np0005539552 nova_compute[233724]: 2025-11-29 08:15:40.917 233728 DEBUG nova.storage.rbd_utils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] rbd image 61d117f5-8412-446a-a3b7-cae3db576105_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:15:40 np0005539552 nova_compute[233724]: 2025-11-29 08:15:40.921 233728 DEBUG oslo_concurrency.processutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 61d117f5-8412-446a-a3b7-cae3db576105_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:15:41 np0005539552 nova_compute[233724]: 2025-11-29 08:15:41.094 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:41 np0005539552 nova_compute[233724]: 2025-11-29 08:15:41.165 233728 DEBUG nova.policy [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0bd9df09f1324e3f9dba099f03ffe1c6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '647c3591c2b940409293763c6c83e58c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:15:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:41.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:41 np0005539552 nova_compute[233724]: 2025-11-29 08:15:41.410 233728 DEBUG oslo_concurrency.processutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 61d117f5-8412-446a-a3b7-cae3db576105_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:15:41 np0005539552 nova_compute[233724]: 2025-11-29 08:15:41.541 233728 DEBUG nova.storage.rbd_utils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] resizing rbd image 61d117f5-8412-446a-a3b7-cae3db576105_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:15:41 np0005539552 nova_compute[233724]: 2025-11-29 08:15:41.650 233728 DEBUG nova.objects.instance [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lazy-loading 'migration_context' on Instance uuid 61d117f5-8412-446a-a3b7-cae3db576105 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:15:41 np0005539552 nova_compute[233724]: 2025-11-29 08:15:41.677 233728 DEBUG nova.virt.libvirt.driver [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:15:41 np0005539552 nova_compute[233724]: 2025-11-29 08:15:41.678 233728 DEBUG nova.virt.libvirt.driver [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Ensure instance console log exists: /var/lib/nova/instances/61d117f5-8412-446a-a3b7-cae3db576105/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:15:41 np0005539552 nova_compute[233724]: 2025-11-29 08:15:41.679 233728 DEBUG oslo_concurrency.lockutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:15:41 np0005539552 nova_compute[233724]: 2025-11-29 08:15:41.679 233728 DEBUG oslo_concurrency.lockutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:15:41 np0005539552 nova_compute[233724]: 2025-11-29 08:15:41.680 233728 DEBUG oslo_concurrency.lockutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:15:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:41.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:43 np0005539552 nova_compute[233724]: 2025-11-29 08:15:43.043 233728 DEBUG nova.network.neutron [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Successfully updated port: 3ee6f630-8898-496e-8d41-aec58022b039 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:15:43 np0005539552 nova_compute[233724]: 2025-11-29 08:15:43.058 233728 DEBUG oslo_concurrency.lockutils [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Acquiring lock "refresh_cache-7dde4b1f-b13a-43ab-b40d-343e3c6e143e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:15:43 np0005539552 nova_compute[233724]: 2025-11-29 08:15:43.058 233728 DEBUG oslo_concurrency.lockutils [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Acquired lock "refresh_cache-7dde4b1f-b13a-43ab-b40d-343e3c6e143e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:15:43 np0005539552 nova_compute[233724]: 2025-11-29 08:15:43.058 233728 DEBUG nova.network.neutron [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:15:43 np0005539552 nova_compute[233724]: 2025-11-29 08:15:43.114 233728 DEBUG nova.network.neutron [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Successfully created port: 1cd399cb-1be3-43a4-bbea-c1a0f48fff1e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:15:43 np0005539552 nova_compute[233724]: 2025-11-29 08:15:43.173 233728 DEBUG nova.compute.manager [req-93c2594a-2439-4bc9-8675-b1a147983aa4 req-12af0406-9e33-4ae8-a118-59cabe98a9aa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Received event network-changed-3ee6f630-8898-496e-8d41-aec58022b039 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:15:43 np0005539552 nova_compute[233724]: 2025-11-29 08:15:43.173 233728 DEBUG nova.compute.manager [req-93c2594a-2439-4bc9-8675-b1a147983aa4 req-12af0406-9e33-4ae8-a118-59cabe98a9aa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Refreshing instance network info cache due to event network-changed-3ee6f630-8898-496e-8d41-aec58022b039. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:15:43 np0005539552 nova_compute[233724]: 2025-11-29 08:15:43.174 233728 DEBUG oslo_concurrency.lockutils [req-93c2594a-2439-4bc9-8675-b1a147983aa4 req-12af0406-9e33-4ae8-a118-59cabe98a9aa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-7dde4b1f-b13a-43ab-b40d-343e3c6e143e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:15:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:43.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:43 np0005539552 nova_compute[233724]: 2025-11-29 08:15:43.347 233728 DEBUG nova.network.neutron [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:15:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:15:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:43.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:15:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:15:43 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4042745546' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:15:44 np0005539552 ceph-mgr[77480]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1950343944
Nov 29 03:15:44 np0005539552 nova_compute[233724]: 2025-11-29 08:15:44.606 233728 DEBUG nova.network.neutron [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Successfully updated port: 1cd399cb-1be3-43a4-bbea-c1a0f48fff1e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:15:44 np0005539552 nova_compute[233724]: 2025-11-29 08:15:44.630 233728 DEBUG oslo_concurrency.lockutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Acquiring lock "refresh_cache-61d117f5-8412-446a-a3b7-cae3db576105" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:15:44 np0005539552 nova_compute[233724]: 2025-11-29 08:15:44.630 233728 DEBUG oslo_concurrency.lockutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Acquired lock "refresh_cache-61d117f5-8412-446a-a3b7-cae3db576105" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:15:44 np0005539552 nova_compute[233724]: 2025-11-29 08:15:44.631 233728 DEBUG nova.network.neutron [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:15:44 np0005539552 nova_compute[233724]: 2025-11-29 08:15:44.835 233728 DEBUG nova.network.neutron [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Updating instance_info_cache with network_info: [{"id": "3ee6f630-8898-496e-8d41-aec58022b039", "address": "fa:16:3e:b2:5a:c4", "network": {"id": "9404f82f-199b-4eec-83ca-0eeb6b2d1ce8", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1742590140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4da7fb77734a4135a6f8b5b70bed7a2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ee6f630-88", "ovs_interfaceid": "3ee6f630-8898-496e-8d41-aec58022b039", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:15:44 np0005539552 nova_compute[233724]: 2025-11-29 08:15:44.853 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:44 np0005539552 nova_compute[233724]: 2025-11-29 08:15:44.869 233728 DEBUG oslo_concurrency.lockutils [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Releasing lock "refresh_cache-7dde4b1f-b13a-43ab-b40d-343e3c6e143e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:15:44 np0005539552 nova_compute[233724]: 2025-11-29 08:15:44.870 233728 DEBUG nova.compute.manager [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Instance network_info: |[{"id": "3ee6f630-8898-496e-8d41-aec58022b039", "address": "fa:16:3e:b2:5a:c4", "network": {"id": "9404f82f-199b-4eec-83ca-0eeb6b2d1ce8", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1742590140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4da7fb77734a4135a6f8b5b70bed7a2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ee6f630-88", "ovs_interfaceid": "3ee6f630-8898-496e-8d41-aec58022b039", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:15:44 np0005539552 nova_compute[233724]: 2025-11-29 08:15:44.870 233728 DEBUG oslo_concurrency.lockutils [req-93c2594a-2439-4bc9-8675-b1a147983aa4 req-12af0406-9e33-4ae8-a118-59cabe98a9aa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-7dde4b1f-b13a-43ab-b40d-343e3c6e143e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:15:44 np0005539552 nova_compute[233724]: 2025-11-29 08:15:44.870 233728 DEBUG nova.network.neutron [req-93c2594a-2439-4bc9-8675-b1a147983aa4 req-12af0406-9e33-4ae8-a118-59cabe98a9aa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Refreshing network info cache for port 3ee6f630-8898-496e-8d41-aec58022b039 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:15:44 np0005539552 nova_compute[233724]: 2025-11-29 08:15:44.873 233728 DEBUG nova.virt.libvirt.driver [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Start _get_guest_xml network_info=[{"id": "3ee6f630-8898-496e-8d41-aec58022b039", "address": "fa:16:3e:b2:5a:c4", "network": {"id": "9404f82f-199b-4eec-83ca-0eeb6b2d1ce8", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1742590140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4da7fb77734a4135a6f8b5b70bed7a2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ee6f630-88", "ovs_interfaceid": "3ee6f630-8898-496e-8d41-aec58022b039", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:15:44 np0005539552 nova_compute[233724]: 2025-11-29 08:15:44.876 233728 WARNING nova.virt.libvirt.driver [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:15:44 np0005539552 nova_compute[233724]: 2025-11-29 08:15:44.882 233728 DEBUG nova.virt.libvirt.host [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:15:44 np0005539552 nova_compute[233724]: 2025-11-29 08:15:44.883 233728 DEBUG nova.virt.libvirt.host [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:15:44 np0005539552 nova_compute[233724]: 2025-11-29 08:15:44.887 233728 DEBUG nova.virt.libvirt.host [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:15:44 np0005539552 nova_compute[233724]: 2025-11-29 08:15:44.887 233728 DEBUG nova.virt.libvirt.host [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:15:44 np0005539552 nova_compute[233724]: 2025-11-29 08:15:44.888 233728 DEBUG nova.virt.libvirt.driver [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:15:44 np0005539552 nova_compute[233724]: 2025-11-29 08:15:44.888 233728 DEBUG nova.virt.hardware [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:15:44 np0005539552 nova_compute[233724]: 2025-11-29 08:15:44.889 233728 DEBUG nova.virt.hardware [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:15:44 np0005539552 nova_compute[233724]: 2025-11-29 08:15:44.889 233728 DEBUG nova.virt.hardware [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:15:44 np0005539552 nova_compute[233724]: 2025-11-29 08:15:44.889 233728 DEBUG nova.virt.hardware [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:15:44 np0005539552 nova_compute[233724]: 2025-11-29 08:15:44.889 233728 DEBUG nova.virt.hardware [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:15:44 np0005539552 nova_compute[233724]: 2025-11-29 08:15:44.890 233728 DEBUG nova.virt.hardware [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:15:44 np0005539552 nova_compute[233724]: 2025-11-29 08:15:44.890 233728 DEBUG nova.virt.hardware [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:15:44 np0005539552 nova_compute[233724]: 2025-11-29 08:15:44.890 233728 DEBUG nova.virt.hardware [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:15:44 np0005539552 nova_compute[233724]: 2025-11-29 08:15:44.890 233728 DEBUG nova.virt.hardware [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:15:44 np0005539552 nova_compute[233724]: 2025-11-29 08:15:44.890 233728 DEBUG nova.virt.hardware [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:15:44 np0005539552 nova_compute[233724]: 2025-11-29 08:15:44.891 233728 DEBUG nova.virt.hardware [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:15:44 np0005539552 nova_compute[233724]: 2025-11-29 08:15:44.893 233728 DEBUG oslo_concurrency.processutils [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:45 np0005539552 nova_compute[233724]: 2025-11-29 08:15:45.119 233728 DEBUG nova.network.neutron [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:15:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:45.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:45 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:15:45 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2395939766' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:15:45 np0005539552 nova_compute[233724]: 2025-11-29 08:15:45.354 233728 DEBUG oslo_concurrency.processutils [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:45 np0005539552 nova_compute[233724]: 2025-11-29 08:15:45.382 233728 DEBUG nova.storage.rbd_utils [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] rbd image 7dde4b1f-b13a-43ab-b40d-343e3c6e143e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:15:45 np0005539552 nova_compute[233724]: 2025-11-29 08:15:45.386 233728 DEBUG oslo_concurrency.processutils [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:45 np0005539552 nova_compute[233724]: 2025-11-29 08:15:45.414 233728 DEBUG nova.compute.manager [req-25d8306f-8076-4394-a244-7b0e6542501e req-a84a3ac0-85e4-415a-9645-0e693603ff60 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Received event network-changed-1cd399cb-1be3-43a4-bbea-c1a0f48fff1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:15:45 np0005539552 nova_compute[233724]: 2025-11-29 08:15:45.415 233728 DEBUG nova.compute.manager [req-25d8306f-8076-4394-a244-7b0e6542501e req-a84a3ac0-85e4-415a-9645-0e693603ff60 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Refreshing instance network info cache due to event network-changed-1cd399cb-1be3-43a4-bbea-c1a0f48fff1e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:15:45 np0005539552 nova_compute[233724]: 2025-11-29 08:15:45.415 233728 DEBUG oslo_concurrency.lockutils [req-25d8306f-8076-4394-a244-7b0e6542501e req-a84a3ac0-85e4-415a-9645-0e693603ff60 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-61d117f5-8412-446a-a3b7-cae3db576105" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:15:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:15:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:45.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:15:45 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:15:45 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2435636844' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:15:45 np0005539552 nova_compute[233724]: 2025-11-29 08:15:45.834 233728 DEBUG oslo_concurrency.processutils [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:45 np0005539552 nova_compute[233724]: 2025-11-29 08:15:45.836 233728 DEBUG nova.virt.libvirt.vif [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:15:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1556842152',display_name='tempest-ServerRescueTestJSON-server-1556842152',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1556842152',id=99,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4da7fb77734a4135a6f8b5b70bed7a2f',ramdisk_id='',reservation_id='r-02j02pz3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-640276387',owner_user_name='tempest-ServerRescueTestJSON-640276387-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:15:38Z,user_data=None,user_id='f4c89c9953854ecf96a802dc6055db9d',uuid=7dde4b1f-b13a-43ab-b40d-343e3c6e143e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3ee6f630-8898-496e-8d41-aec58022b039", "address": "fa:16:3e:b2:5a:c4", "network": {"id": "9404f82f-199b-4eec-83ca-0eeb6b2d1ce8", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1742590140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4da7fb77734a4135a6f8b5b70bed7a2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ee6f630-88", "ovs_interfaceid": "3ee6f630-8898-496e-8d41-aec58022b039", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:15:45 np0005539552 nova_compute[233724]: 2025-11-29 08:15:45.836 233728 DEBUG nova.network.os_vif_util [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Converting VIF {"id": "3ee6f630-8898-496e-8d41-aec58022b039", "address": "fa:16:3e:b2:5a:c4", "network": {"id": "9404f82f-199b-4eec-83ca-0eeb6b2d1ce8", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1742590140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4da7fb77734a4135a6f8b5b70bed7a2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ee6f630-88", "ovs_interfaceid": "3ee6f630-8898-496e-8d41-aec58022b039", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:15:45 np0005539552 nova_compute[233724]: 2025-11-29 08:15:45.837 233728 DEBUG nova.network.os_vif_util [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b2:5a:c4,bridge_name='br-int',has_traffic_filtering=True,id=3ee6f630-8898-496e-8d41-aec58022b039,network=Network(9404f82f-199b-4eec-83ca-0eeb6b2d1ce8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ee6f630-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:15:45 np0005539552 nova_compute[233724]: 2025-11-29 08:15:45.838 233728 DEBUG nova.objects.instance [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lazy-loading 'pci_devices' on Instance uuid 7dde4b1f-b13a-43ab-b40d-343e3c6e143e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:15:45 np0005539552 nova_compute[233724]: 2025-11-29 08:15:45.865 233728 DEBUG nova.virt.libvirt.driver [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:15:45 np0005539552 nova_compute[233724]:  <uuid>7dde4b1f-b13a-43ab-b40d-343e3c6e143e</uuid>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:  <name>instance-00000063</name>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:15:45 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:      <nova:name>tempest-ServerRescueTestJSON-server-1556842152</nova:name>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:15:44</nova:creationTime>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:15:45 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:        <nova:user uuid="f4c89c9953854ecf96a802dc6055db9d">tempest-ServerRescueTestJSON-640276387-project-member</nova:user>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:        <nova:project uuid="4da7fb77734a4135a6f8b5b70bed7a2f">tempest-ServerRescueTestJSON-640276387</nova:project>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:        <nova:port uuid="3ee6f630-8898-496e-8d41-aec58022b039">
Nov 29 03:15:45 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:      <entry name="serial">7dde4b1f-b13a-43ab-b40d-343e3c6e143e</entry>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:      <entry name="uuid">7dde4b1f-b13a-43ab-b40d-343e3c6e143e</entry>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:15:45 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/7dde4b1f-b13a-43ab-b40d-343e3c6e143e_disk">
Nov 29 03:15:45 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:15:45 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:15:45 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/7dde4b1f-b13a-43ab-b40d-343e3c6e143e_disk.config">
Nov 29 03:15:45 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:15:45 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:15:45 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:b2:5a:c4"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:      <target dev="tap3ee6f630-88"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:15:45 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/7dde4b1f-b13a-43ab-b40d-343e3c6e143e/console.log" append="off"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:15:45 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:15:45 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:15:45 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:15:45 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:15:45 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:15:45 np0005539552 nova_compute[233724]: 2025-11-29 08:15:45.866 233728 DEBUG nova.compute.manager [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Preparing to wait for external event network-vif-plugged-3ee6f630-8898-496e-8d41-aec58022b039 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:15:45 np0005539552 nova_compute[233724]: 2025-11-29 08:15:45.867 233728 DEBUG oslo_concurrency.lockutils [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Acquiring lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:45 np0005539552 nova_compute[233724]: 2025-11-29 08:15:45.868 233728 DEBUG oslo_concurrency.lockutils [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:45 np0005539552 nova_compute[233724]: 2025-11-29 08:15:45.868 233728 DEBUG oslo_concurrency.lockutils [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:45 np0005539552 nova_compute[233724]: 2025-11-29 08:15:45.869 233728 DEBUG nova.virt.libvirt.vif [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:15:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1556842152',display_name='tempest-ServerRescueTestJSON-server-1556842152',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1556842152',id=99,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4da7fb77734a4135a6f8b5b70bed7a2f',ramdisk_id='',reservation_id='r-02j02pz3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-640276387',owner_user_name='tempest-ServerRescueTestJSON-640276387-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:15:38Z,user_data=None,user_id='f4c89c9953854ecf96a802dc6055db9d',uuid=7dde4b1f-b13a-43ab-b40d-343e3c6e143e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3ee6f630-8898-496e-8d41-aec58022b039", "address": "fa:16:3e:b2:5a:c4", "network": {"id": "9404f82f-199b-4eec-83ca-0eeb6b2d1ce8", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1742590140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4da7fb77734a4135a6f8b5b70bed7a2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ee6f630-88", "ovs_interfaceid": "3ee6f630-8898-496e-8d41-aec58022b039", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:15:45 np0005539552 nova_compute[233724]: 2025-11-29 08:15:45.869 233728 DEBUG nova.network.os_vif_util [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Converting VIF {"id": "3ee6f630-8898-496e-8d41-aec58022b039", "address": "fa:16:3e:b2:5a:c4", "network": {"id": "9404f82f-199b-4eec-83ca-0eeb6b2d1ce8", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1742590140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4da7fb77734a4135a6f8b5b70bed7a2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ee6f630-88", "ovs_interfaceid": "3ee6f630-8898-496e-8d41-aec58022b039", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:15:45 np0005539552 nova_compute[233724]: 2025-11-29 08:15:45.870 233728 DEBUG nova.network.os_vif_util [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b2:5a:c4,bridge_name='br-int',has_traffic_filtering=True,id=3ee6f630-8898-496e-8d41-aec58022b039,network=Network(9404f82f-199b-4eec-83ca-0eeb6b2d1ce8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ee6f630-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:15:45 np0005539552 nova_compute[233724]: 2025-11-29 08:15:45.871 233728 DEBUG os_vif [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:5a:c4,bridge_name='br-int',has_traffic_filtering=True,id=3ee6f630-8898-496e-8d41-aec58022b039,network=Network(9404f82f-199b-4eec-83ca-0eeb6b2d1ce8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ee6f630-88') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:15:45 np0005539552 nova_compute[233724]: 2025-11-29 08:15:45.871 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:45 np0005539552 nova_compute[233724]: 2025-11-29 08:15:45.872 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:45 np0005539552 nova_compute[233724]: 2025-11-29 08:15:45.873 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:15:45 np0005539552 nova_compute[233724]: 2025-11-29 08:15:45.876 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:45 np0005539552 nova_compute[233724]: 2025-11-29 08:15:45.876 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3ee6f630-88, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:45 np0005539552 nova_compute[233724]: 2025-11-29 08:15:45.876 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3ee6f630-88, col_values=(('external_ids', {'iface-id': '3ee6f630-8898-496e-8d41-aec58022b039', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b2:5a:c4', 'vm-uuid': '7dde4b1f-b13a-43ab-b40d-343e3c6e143e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:45 np0005539552 nova_compute[233724]: 2025-11-29 08:15:45.878 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:45 np0005539552 NetworkManager[48926]: <info>  [1764404145.8790] manager: (tap3ee6f630-88): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/176)
Nov 29 03:15:45 np0005539552 nova_compute[233724]: 2025-11-29 08:15:45.881 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:15:45 np0005539552 nova_compute[233724]: 2025-11-29 08:15:45.886 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:45 np0005539552 nova_compute[233724]: 2025-11-29 08:15:45.887 233728 INFO os_vif [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:5a:c4,bridge_name='br-int',has_traffic_filtering=True,id=3ee6f630-8898-496e-8d41-aec58022b039,network=Network(9404f82f-199b-4eec-83ca-0eeb6b2d1ce8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ee6f630-88')#033[00m
Nov 29 03:15:45 np0005539552 nova_compute[233724]: 2025-11-29 08:15:45.964 233728 DEBUG nova.virt.libvirt.driver [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:15:45 np0005539552 nova_compute[233724]: 2025-11-29 08:15:45.965 233728 DEBUG nova.virt.libvirt.driver [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:15:45 np0005539552 nova_compute[233724]: 2025-11-29 08:15:45.965 233728 DEBUG nova.virt.libvirt.driver [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] No VIF found with MAC fa:16:3e:b2:5a:c4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:15:45 np0005539552 nova_compute[233724]: 2025-11-29 08:15:45.966 233728 INFO nova.virt.libvirt.driver [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Using config drive#033[00m
Nov 29 03:15:45 np0005539552 nova_compute[233724]: 2025-11-29 08:15:45.993 233728 DEBUG nova.storage.rbd_utils [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] rbd image 7dde4b1f-b13a-43ab-b40d-343e3c6e143e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:15:46 np0005539552 nova_compute[233724]: 2025-11-29 08:15:46.095 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:46 np0005539552 nova_compute[233724]: 2025-11-29 08:15:46.622 233728 INFO nova.virt.libvirt.driver [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Creating config drive at /var/lib/nova/instances/7dde4b1f-b13a-43ab-b40d-343e3c6e143e/disk.config#033[00m
Nov 29 03:15:46 np0005539552 nova_compute[233724]: 2025-11-29 08:15:46.628 233728 DEBUG oslo_concurrency.processutils [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7dde4b1f-b13a-43ab-b40d-343e3c6e143e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsddseoco execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:46 np0005539552 nova_compute[233724]: 2025-11-29 08:15:46.758 233728 DEBUG oslo_concurrency.processutils [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7dde4b1f-b13a-43ab-b40d-343e3c6e143e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsddseoco" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:46 np0005539552 nova_compute[233724]: 2025-11-29 08:15:46.790 233728 DEBUG nova.storage.rbd_utils [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] rbd image 7dde4b1f-b13a-43ab-b40d-343e3c6e143e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:15:46 np0005539552 nova_compute[233724]: 2025-11-29 08:15:46.794 233728 DEBUG oslo_concurrency.processutils [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7dde4b1f-b13a-43ab-b40d-343e3c6e143e/disk.config 7dde4b1f-b13a-43ab-b40d-343e3c6e143e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:46 np0005539552 nova_compute[233724]: 2025-11-29 08:15:46.982 233728 DEBUG oslo_concurrency.processutils [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7dde4b1f-b13a-43ab-b40d-343e3c6e143e/disk.config 7dde4b1f-b13a-43ab-b40d-343e3c6e143e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.188s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:46 np0005539552 nova_compute[233724]: 2025-11-29 08:15:46.983 233728 INFO nova.virt.libvirt.driver [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Deleting local config drive /var/lib/nova/instances/7dde4b1f-b13a-43ab-b40d-343e3c6e143e/disk.config because it was imported into RBD.#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.017 233728 DEBUG nova.network.neutron [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Updating instance_info_cache with network_info: [{"id": "1cd399cb-1be3-43a4-bbea-c1a0f48fff1e", "address": "fa:16:3e:74:d6:24", "network": {"id": "ab70b036-b5ab-4377-b081-f4b82fdb05c5", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1993818804-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "647c3591c2b940409293763c6c83e58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cd399cb-1b", "ovs_interfaceid": "1cd399cb-1be3-43a4-bbea-c1a0f48fff1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:15:47 np0005539552 kernel: tap3ee6f630-88: entered promiscuous mode
Nov 29 03:15:47 np0005539552 NetworkManager[48926]: <info>  [1764404147.0271] manager: (tap3ee6f630-88): new Tun device (/org/freedesktop/NetworkManager/Devices/177)
Nov 29 03:15:47 np0005539552 ovn_controller[133798]: 2025-11-29T08:15:47Z|00341|binding|INFO|Claiming lport 3ee6f630-8898-496e-8d41-aec58022b039 for this chassis.
Nov 29 03:15:47 np0005539552 ovn_controller[133798]: 2025-11-29T08:15:47Z|00342|binding|INFO|3ee6f630-8898-496e-8d41-aec58022b039: Claiming fa:16:3e:b2:5a:c4 10.100.0.6
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.028 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.032 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.042 233728 DEBUG oslo_concurrency.lockutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Releasing lock "refresh_cache-61d117f5-8412-446a-a3b7-cae3db576105" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.043 233728 DEBUG nova.compute.manager [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Instance network_info: |[{"id": "1cd399cb-1be3-43a4-bbea-c1a0f48fff1e", "address": "fa:16:3e:74:d6:24", "network": {"id": "ab70b036-b5ab-4377-b081-f4b82fdb05c5", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1993818804-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "647c3591c2b940409293763c6c83e58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cd399cb-1b", "ovs_interfaceid": "1cd399cb-1be3-43a4-bbea-c1a0f48fff1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.043 233728 DEBUG oslo_concurrency.lockutils [req-25d8306f-8076-4394-a244-7b0e6542501e req-a84a3ac0-85e4-415a-9645-0e693603ff60 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-61d117f5-8412-446a-a3b7-cae3db576105" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.043 233728 DEBUG nova.network.neutron [req-25d8306f-8076-4394-a244-7b0e6542501e req-a84a3ac0-85e4-415a-9645-0e693603ff60 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Refreshing network info cache for port 1cd399cb-1be3-43a4-bbea-c1a0f48fff1e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.046 233728 DEBUG nova.virt.libvirt.driver [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Start _get_guest_xml network_info=[{"id": "1cd399cb-1be3-43a4-bbea-c1a0f48fff1e", "address": "fa:16:3e:74:d6:24", "network": {"id": "ab70b036-b5ab-4377-b081-f4b82fdb05c5", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1993818804-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "647c3591c2b940409293763c6c83e58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cd399cb-1b", "ovs_interfaceid": "1cd399cb-1be3-43a4-bbea-c1a0f48fff1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:15:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:47.048 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:5a:c4 10.100.0.6'], port_security=['fa:16:3e:b2:5a:c4 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7dde4b1f-b13a-43ab-b40d-343e3c6e143e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9404f82f-199b-4eec-83ca-0eeb6b2d1ce8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4da7fb77734a4135a6f8b5b70bed7a2f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5d9eaea9-a53e-4ac6-a82a-9c11849e63d4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eff10a56-99f6-4778-b800-4c9f705b38bd, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=3ee6f630-8898-496e-8d41-aec58022b039) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:15:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:47.049 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 3ee6f630-8898-496e-8d41-aec58022b039 in datapath 9404f82f-199b-4eec-83ca-0eeb6b2d1ce8 bound to our chassis#033[00m
Nov 29 03:15:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:47.050 143400 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9404f82f-199b-4eec-83ca-0eeb6b2d1ce8 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 29 03:15:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:47.052 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[fe3fd219-c6eb-4e1b-a033-4d46a28d32cd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:47 np0005539552 systemd-udevd[273547]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.055 233728 WARNING nova.virt.libvirt.driver [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:15:47 np0005539552 systemd-machined[196379]: New machine qemu-38-instance-00000063.
Nov 29 03:15:47 np0005539552 NetworkManager[48926]: <info>  [1764404147.0655] device (tap3ee6f630-88): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:15:47 np0005539552 NetworkManager[48926]: <info>  [1764404147.0665] device (tap3ee6f630-88): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.066 233728 DEBUG nova.virt.libvirt.host [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.067 233728 DEBUG nova.virt.libvirt.host [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.073 233728 DEBUG nova.virt.libvirt.host [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.073 233728 DEBUG nova.virt.libvirt.host [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.074 233728 DEBUG nova.virt.libvirt.driver [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.074 233728 DEBUG nova.virt.hardware [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.074 233728 DEBUG nova.virt.hardware [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.074 233728 DEBUG nova.virt.hardware [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.075 233728 DEBUG nova.virt.hardware [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.075 233728 DEBUG nova.virt.hardware [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.075 233728 DEBUG nova.virt.hardware [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:15:47 np0005539552 systemd[1]: Started Virtual Machine qemu-38-instance-00000063.
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.075 233728 DEBUG nova.virt.hardware [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.076 233728 DEBUG nova.virt.hardware [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.077 233728 DEBUG nova.virt.hardware [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.077 233728 DEBUG nova.virt.hardware [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.077 233728 DEBUG nova.virt.hardware [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.080 233728 DEBUG oslo_concurrency.processutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:47 np0005539552 ovn_controller[133798]: 2025-11-29T08:15:47Z|00343|binding|INFO|Setting lport 3ee6f630-8898-496e-8d41-aec58022b039 ovn-installed in OVS
Nov 29 03:15:47 np0005539552 ovn_controller[133798]: 2025-11-29T08:15:47Z|00344|binding|INFO|Setting lport 3ee6f630-8898-496e-8d41-aec58022b039 up in Southbound
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.108 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:15:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:47.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:15:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.511 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404147.5111115, 7dde4b1f-b13a-43ab-b40d-343e3c6e143e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.512 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] VM Started (Lifecycle Event)#033[00m
Nov 29 03:15:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:15:47 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2448880736' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.535 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.539 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404147.5113797, 7dde4b1f-b13a-43ab-b40d-343e3c6e143e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.539 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.541 233728 DEBUG oslo_concurrency.processutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.569 233728 DEBUG nova.storage.rbd_utils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] rbd image 61d117f5-8412-446a-a3b7-cae3db576105_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.572 233728 DEBUG oslo_concurrency.processutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.597 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.599 233728 DEBUG nova.compute.manager [req-da5102a4-f2eb-42a4-b671-6b44db4ddf22 req-d9fc3d86-16ae-4c20-a716-72c609e442f4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Received event network-vif-plugged-3ee6f630-8898-496e-8d41-aec58022b039 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.599 233728 DEBUG oslo_concurrency.lockutils [req-da5102a4-f2eb-42a4-b671-6b44db4ddf22 req-d9fc3d86-16ae-4c20-a716-72c609e442f4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.599 233728 DEBUG oslo_concurrency.lockutils [req-da5102a4-f2eb-42a4-b671-6b44db4ddf22 req-d9fc3d86-16ae-4c20-a716-72c609e442f4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.600 233728 DEBUG oslo_concurrency.lockutils [req-da5102a4-f2eb-42a4-b671-6b44db4ddf22 req-d9fc3d86-16ae-4c20-a716-72c609e442f4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.600 233728 DEBUG nova.compute.manager [req-da5102a4-f2eb-42a4-b671-6b44db4ddf22 req-d9fc3d86-16ae-4c20-a716-72c609e442f4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Processing event network-vif-plugged-3ee6f630-8898-496e-8d41-aec58022b039 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.601 233728 DEBUG nova.compute.manager [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.606 233728 DEBUG nova.virt.libvirt.driver [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.607 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404147.6046343, 7dde4b1f-b13a-43ab-b40d-343e3c6e143e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.607 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.613 233728 INFO nova.virt.libvirt.driver [-] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Instance spawned successfully.#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.614 233728 DEBUG nova.virt.libvirt.driver [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.659 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.667 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.670 233728 DEBUG nova.virt.libvirt.driver [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.670 233728 DEBUG nova.virt.libvirt.driver [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.672 233728 DEBUG nova.virt.libvirt.driver [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.672 233728 DEBUG nova.virt.libvirt.driver [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.673 233728 DEBUG nova.virt.libvirt.driver [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.673 233728 DEBUG nova.virt.libvirt.driver [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.727 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:15:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:47.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.770 233728 DEBUG nova.network.neutron [req-93c2594a-2439-4bc9-8675-b1a147983aa4 req-12af0406-9e33-4ae8-a118-59cabe98a9aa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Updated VIF entry in instance network info cache for port 3ee6f630-8898-496e-8d41-aec58022b039. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.771 233728 DEBUG nova.network.neutron [req-93c2594a-2439-4bc9-8675-b1a147983aa4 req-12af0406-9e33-4ae8-a118-59cabe98a9aa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Updating instance_info_cache with network_info: [{"id": "3ee6f630-8898-496e-8d41-aec58022b039", "address": "fa:16:3e:b2:5a:c4", "network": {"id": "9404f82f-199b-4eec-83ca-0eeb6b2d1ce8", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1742590140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4da7fb77734a4135a6f8b5b70bed7a2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ee6f630-88", "ovs_interfaceid": "3ee6f630-8898-496e-8d41-aec58022b039", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.811 233728 DEBUG oslo_concurrency.lockutils [req-93c2594a-2439-4bc9-8675-b1a147983aa4 req-12af0406-9e33-4ae8-a118-59cabe98a9aa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-7dde4b1f-b13a-43ab-b40d-343e3c6e143e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.825 233728 INFO nova.compute.manager [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Took 9.61 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.826 233728 DEBUG nova.compute.manager [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.951 233728 INFO nova.compute.manager [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Took 10.82 seconds to build instance.#033[00m
Nov 29 03:15:47 np0005539552 nova_compute[233724]: 2025-11-29 08:15:47.981 233728 DEBUG oslo_concurrency.lockutils [None req-7025723f-0636-4b26-a61d-f20fa0592fd4 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.953s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:15:47 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1830520960' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:15:48 np0005539552 nova_compute[233724]: 2025-11-29 08:15:48.012 233728 DEBUG oslo_concurrency.processutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:48 np0005539552 nova_compute[233724]: 2025-11-29 08:15:48.013 233728 DEBUG nova.virt.libvirt.vif [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:15:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-700323681',display_name='tempest-MultipleCreateTestJSON-server-700323681-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-700323681-2',id=101,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='647c3591c2b940409293763c6c83e58c',ramdisk_id='',reservation_id='r-a6zfs0z0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-2058984420',owner_user_name='tempest-MultipleCreat
eTestJSON-2058984420-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:15:40Z,user_data=None,user_id='0bd9df09f1324e3f9dba099f03ffe1c6',uuid=61d117f5-8412-446a-a3b7-cae3db576105,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1cd399cb-1be3-43a4-bbea-c1a0f48fff1e", "address": "fa:16:3e:74:d6:24", "network": {"id": "ab70b036-b5ab-4377-b081-f4b82fdb05c5", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1993818804-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "647c3591c2b940409293763c6c83e58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cd399cb-1b", "ovs_interfaceid": "1cd399cb-1be3-43a4-bbea-c1a0f48fff1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:15:48 np0005539552 nova_compute[233724]: 2025-11-29 08:15:48.014 233728 DEBUG nova.network.os_vif_util [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Converting VIF {"id": "1cd399cb-1be3-43a4-bbea-c1a0f48fff1e", "address": "fa:16:3e:74:d6:24", "network": {"id": "ab70b036-b5ab-4377-b081-f4b82fdb05c5", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1993818804-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "647c3591c2b940409293763c6c83e58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cd399cb-1b", "ovs_interfaceid": "1cd399cb-1be3-43a4-bbea-c1a0f48fff1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:15:48 np0005539552 nova_compute[233724]: 2025-11-29 08:15:48.015 233728 DEBUG nova.network.os_vif_util [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:74:d6:24,bridge_name='br-int',has_traffic_filtering=True,id=1cd399cb-1be3-43a4-bbea-c1a0f48fff1e,network=Network(ab70b036-b5ab-4377-b081-f4b82fdb05c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cd399cb-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:15:48 np0005539552 nova_compute[233724]: 2025-11-29 08:15:48.016 233728 DEBUG nova.objects.instance [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lazy-loading 'pci_devices' on Instance uuid 61d117f5-8412-446a-a3b7-cae3db576105 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:15:48 np0005539552 nova_compute[233724]: 2025-11-29 08:15:48.036 233728 DEBUG nova.virt.libvirt.driver [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:15:48 np0005539552 nova_compute[233724]:  <uuid>61d117f5-8412-446a-a3b7-cae3db576105</uuid>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:  <name>instance-00000065</name>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:15:48 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:      <nova:name>tempest-MultipleCreateTestJSON-server-700323681-2</nova:name>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:15:47</nova:creationTime>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:15:48 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:        <nova:user uuid="0bd9df09f1324e3f9dba099f03ffe1c6">tempest-MultipleCreateTestJSON-2058984420-project-member</nova:user>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:        <nova:project uuid="647c3591c2b940409293763c6c83e58c">tempest-MultipleCreateTestJSON-2058984420</nova:project>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:        <nova:port uuid="1cd399cb-1be3-43a4-bbea-c1a0f48fff1e">
Nov 29 03:15:48 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:      <entry name="serial">61d117f5-8412-446a-a3b7-cae3db576105</entry>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:      <entry name="uuid">61d117f5-8412-446a-a3b7-cae3db576105</entry>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:15:48 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/61d117f5-8412-446a-a3b7-cae3db576105_disk">
Nov 29 03:15:48 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:15:48 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:15:48 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/61d117f5-8412-446a-a3b7-cae3db576105_disk.config">
Nov 29 03:15:48 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:15:48 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:15:48 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:74:d6:24"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:      <target dev="tap1cd399cb-1b"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:15:48 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/61d117f5-8412-446a-a3b7-cae3db576105/console.log" append="off"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:15:48 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:15:48 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:15:48 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:15:48 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:15:48 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:15:48 np0005539552 nova_compute[233724]: 2025-11-29 08:15:48.037 233728 DEBUG nova.compute.manager [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Preparing to wait for external event network-vif-plugged-1cd399cb-1be3-43a4-bbea-c1a0f48fff1e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:15:48 np0005539552 nova_compute[233724]: 2025-11-29 08:15:48.038 233728 DEBUG oslo_concurrency.lockutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Acquiring lock "61d117f5-8412-446a-a3b7-cae3db576105-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:48 np0005539552 nova_compute[233724]: 2025-11-29 08:15:48.038 233728 DEBUG oslo_concurrency.lockutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "61d117f5-8412-446a-a3b7-cae3db576105-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:48 np0005539552 nova_compute[233724]: 2025-11-29 08:15:48.038 233728 DEBUG oslo_concurrency.lockutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "61d117f5-8412-446a-a3b7-cae3db576105-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:48 np0005539552 nova_compute[233724]: 2025-11-29 08:15:48.039 233728 DEBUG nova.virt.libvirt.vif [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:15:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-700323681',display_name='tempest-MultipleCreateTestJSON-server-700323681-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-700323681-2',id=101,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='647c3591c2b940409293763c6c83e58c',ramdisk_id='',reservation_id='r-a6zfs0z0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-2058984420',owner_user_name='tempest-Mul
tipleCreateTestJSON-2058984420-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:15:40Z,user_data=None,user_id='0bd9df09f1324e3f9dba099f03ffe1c6',uuid=61d117f5-8412-446a-a3b7-cae3db576105,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1cd399cb-1be3-43a4-bbea-c1a0f48fff1e", "address": "fa:16:3e:74:d6:24", "network": {"id": "ab70b036-b5ab-4377-b081-f4b82fdb05c5", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1993818804-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "647c3591c2b940409293763c6c83e58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cd399cb-1b", "ovs_interfaceid": "1cd399cb-1be3-43a4-bbea-c1a0f48fff1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:15:48 np0005539552 nova_compute[233724]: 2025-11-29 08:15:48.039 233728 DEBUG nova.network.os_vif_util [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Converting VIF {"id": "1cd399cb-1be3-43a4-bbea-c1a0f48fff1e", "address": "fa:16:3e:74:d6:24", "network": {"id": "ab70b036-b5ab-4377-b081-f4b82fdb05c5", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1993818804-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "647c3591c2b940409293763c6c83e58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cd399cb-1b", "ovs_interfaceid": "1cd399cb-1be3-43a4-bbea-c1a0f48fff1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:15:48 np0005539552 nova_compute[233724]: 2025-11-29 08:15:48.039 233728 DEBUG nova.network.os_vif_util [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:74:d6:24,bridge_name='br-int',has_traffic_filtering=True,id=1cd399cb-1be3-43a4-bbea-c1a0f48fff1e,network=Network(ab70b036-b5ab-4377-b081-f4b82fdb05c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cd399cb-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:15:48 np0005539552 nova_compute[233724]: 2025-11-29 08:15:48.040 233728 DEBUG os_vif [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:74:d6:24,bridge_name='br-int',has_traffic_filtering=True,id=1cd399cb-1be3-43a4-bbea-c1a0f48fff1e,network=Network(ab70b036-b5ab-4377-b081-f4b82fdb05c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cd399cb-1b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:15:48 np0005539552 nova_compute[233724]: 2025-11-29 08:15:48.040 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:48 np0005539552 nova_compute[233724]: 2025-11-29 08:15:48.041 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:48 np0005539552 nova_compute[233724]: 2025-11-29 08:15:48.041 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:15:48 np0005539552 nova_compute[233724]: 2025-11-29 08:15:48.043 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:48 np0005539552 nova_compute[233724]: 2025-11-29 08:15:48.044 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1cd399cb-1b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:48 np0005539552 nova_compute[233724]: 2025-11-29 08:15:48.044 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1cd399cb-1b, col_values=(('external_ids', {'iface-id': '1cd399cb-1be3-43a4-bbea-c1a0f48fff1e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:74:d6:24', 'vm-uuid': '61d117f5-8412-446a-a3b7-cae3db576105'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:48 np0005539552 nova_compute[233724]: 2025-11-29 08:15:48.045 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:48 np0005539552 NetworkManager[48926]: <info>  [1764404148.0467] manager: (tap1cd399cb-1b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/178)
Nov 29 03:15:48 np0005539552 nova_compute[233724]: 2025-11-29 08:15:48.048 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:15:48 np0005539552 nova_compute[233724]: 2025-11-29 08:15:48.051 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:48 np0005539552 nova_compute[233724]: 2025-11-29 08:15:48.052 233728 INFO os_vif [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:74:d6:24,bridge_name='br-int',has_traffic_filtering=True,id=1cd399cb-1be3-43a4-bbea-c1a0f48fff1e,network=Network(ab70b036-b5ab-4377-b081-f4b82fdb05c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cd399cb-1b')#033[00m
Nov 29 03:15:48 np0005539552 nova_compute[233724]: 2025-11-29 08:15:48.145 233728 DEBUG nova.virt.libvirt.driver [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:15:48 np0005539552 nova_compute[233724]: 2025-11-29 08:15:48.145 233728 DEBUG nova.virt.libvirt.driver [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:15:48 np0005539552 nova_compute[233724]: 2025-11-29 08:15:48.145 233728 DEBUG nova.virt.libvirt.driver [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] No VIF found with MAC fa:16:3e:74:d6:24, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:15:48 np0005539552 nova_compute[233724]: 2025-11-29 08:15:48.146 233728 INFO nova.virt.libvirt.driver [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Using config drive#033[00m
Nov 29 03:15:48 np0005539552 nova_compute[233724]: 2025-11-29 08:15:48.171 233728 DEBUG nova.storage.rbd_utils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] rbd image 61d117f5-8412-446a-a3b7-cae3db576105_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:15:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:49.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:49 np0005539552 nova_compute[233724]: 2025-11-29 08:15:49.664 233728 INFO nova.virt.libvirt.driver [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Creating config drive at /var/lib/nova/instances/61d117f5-8412-446a-a3b7-cae3db576105/disk.config#033[00m
Nov 29 03:15:49 np0005539552 nova_compute[233724]: 2025-11-29 08:15:49.671 233728 DEBUG oslo_concurrency.processutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/61d117f5-8412-446a-a3b7-cae3db576105/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptu2322hw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:49.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:49 np0005539552 nova_compute[233724]: 2025-11-29 08:15:49.752 233728 DEBUG nova.compute.manager [req-e8d7fbbb-8936-4c1f-95d5-9c51f50ca47e req-4280b2d6-2f1b-4d8f-8b46-6ff1d2245db6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Received event network-vif-plugged-3ee6f630-8898-496e-8d41-aec58022b039 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:15:49 np0005539552 nova_compute[233724]: 2025-11-29 08:15:49.753 233728 DEBUG oslo_concurrency.lockutils [req-e8d7fbbb-8936-4c1f-95d5-9c51f50ca47e req-4280b2d6-2f1b-4d8f-8b46-6ff1d2245db6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:49 np0005539552 nova_compute[233724]: 2025-11-29 08:15:49.753 233728 DEBUG oslo_concurrency.lockutils [req-e8d7fbbb-8936-4c1f-95d5-9c51f50ca47e req-4280b2d6-2f1b-4d8f-8b46-6ff1d2245db6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:49 np0005539552 nova_compute[233724]: 2025-11-29 08:15:49.753 233728 DEBUG oslo_concurrency.lockutils [req-e8d7fbbb-8936-4c1f-95d5-9c51f50ca47e req-4280b2d6-2f1b-4d8f-8b46-6ff1d2245db6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:49 np0005539552 nova_compute[233724]: 2025-11-29 08:15:49.753 233728 DEBUG nova.compute.manager [req-e8d7fbbb-8936-4c1f-95d5-9c51f50ca47e req-4280b2d6-2f1b-4d8f-8b46-6ff1d2245db6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] No waiting events found dispatching network-vif-plugged-3ee6f630-8898-496e-8d41-aec58022b039 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:15:49 np0005539552 nova_compute[233724]: 2025-11-29 08:15:49.754 233728 WARNING nova.compute.manager [req-e8d7fbbb-8936-4c1f-95d5-9c51f50ca47e req-4280b2d6-2f1b-4d8f-8b46-6ff1d2245db6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Received unexpected event network-vif-plugged-3ee6f630-8898-496e-8d41-aec58022b039 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:15:49 np0005539552 nova_compute[233724]: 2025-11-29 08:15:49.804 233728 DEBUG oslo_concurrency.processutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/61d117f5-8412-446a-a3b7-cae3db576105/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptu2322hw" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:50 np0005539552 nova_compute[233724]: 2025-11-29 08:15:50.098 233728 DEBUG nova.storage.rbd_utils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] rbd image 61d117f5-8412-446a-a3b7-cae3db576105_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:15:50 np0005539552 nova_compute[233724]: 2025-11-29 08:15:50.105 233728 DEBUG oslo_concurrency.processutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/61d117f5-8412-446a-a3b7-cae3db576105/disk.config 61d117f5-8412-446a-a3b7-cae3db576105_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:50 np0005539552 nova_compute[233724]: 2025-11-29 08:15:50.786 233728 DEBUG oslo_concurrency.processutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/61d117f5-8412-446a-a3b7-cae3db576105/disk.config 61d117f5-8412-446a-a3b7-cae3db576105_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.681s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:50 np0005539552 nova_compute[233724]: 2025-11-29 08:15:50.787 233728 INFO nova.virt.libvirt.driver [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Deleting local config drive /var/lib/nova/instances/61d117f5-8412-446a-a3b7-cae3db576105/disk.config because it was imported into RBD.#033[00m
Nov 29 03:15:50 np0005539552 kernel: tap1cd399cb-1b: entered promiscuous mode
Nov 29 03:15:50 np0005539552 NetworkManager[48926]: <info>  [1764404150.8314] manager: (tap1cd399cb-1b): new Tun device (/org/freedesktop/NetworkManager/Devices/179)
Nov 29 03:15:50 np0005539552 nova_compute[233724]: 2025-11-29 08:15:50.833 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:50 np0005539552 ovn_controller[133798]: 2025-11-29T08:15:50Z|00345|binding|INFO|Claiming lport 1cd399cb-1be3-43a4-bbea-c1a0f48fff1e for this chassis.
Nov 29 03:15:50 np0005539552 ovn_controller[133798]: 2025-11-29T08:15:50Z|00346|binding|INFO|1cd399cb-1be3-43a4-bbea-c1a0f48fff1e: Claiming fa:16:3e:74:d6:24 10.100.0.13
Nov 29 03:15:50 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:50.838 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:74:d6:24 10.100.0.13'], port_security=['fa:16:3e:74:d6:24 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '61d117f5-8412-446a-a3b7-cae3db576105', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ab70b036-b5ab-4377-b081-f4b82fdb05c5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '647c3591c2b940409293763c6c83e58c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '41c3698d-cc27-49de-8078-b06ee82fc1d7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8ab88e7d-4131-470c-a431-4c951fbab973, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=1cd399cb-1be3-43a4-bbea-c1a0f48fff1e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:15:50 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:50.839 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 1cd399cb-1be3-43a4-bbea-c1a0f48fff1e in datapath ab70b036-b5ab-4377-b081-f4b82fdb05c5 bound to our chassis#033[00m
Nov 29 03:15:50 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:50.841 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ab70b036-b5ab-4377-b081-f4b82fdb05c5#033[00m
Nov 29 03:15:50 np0005539552 ovn_controller[133798]: 2025-11-29T08:15:50Z|00347|binding|INFO|Setting lport 1cd399cb-1be3-43a4-bbea-c1a0f48fff1e ovn-installed in OVS
Nov 29 03:15:50 np0005539552 ovn_controller[133798]: 2025-11-29T08:15:50Z|00348|binding|INFO|Setting lport 1cd399cb-1be3-43a4-bbea-c1a0f48fff1e up in Southbound
Nov 29 03:15:50 np0005539552 nova_compute[233724]: 2025-11-29 08:15:50.855 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:50 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:50.856 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[10b0e510-2120-467e-b5ad-e9f487376550]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:50 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:50.856 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapab70b036-b1 in ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:15:50 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:50.859 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapab70b036-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:15:50 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:50.859 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0219bcf7-582b-4f6d-987b-d2996d88c69c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:50 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:50.863 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b4233817-453e-4974-ad62-945d2a57e4bd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:50 np0005539552 systemd-machined[196379]: New machine qemu-39-instance-00000065.
Nov 29 03:15:50 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:50.881 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[452b445a-a69c-40bd-9888-a0b78a704410]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:50 np0005539552 systemd[1]: Started Virtual Machine qemu-39-instance-00000065.
Nov 29 03:15:50 np0005539552 systemd-udevd[273738]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:15:50 np0005539552 NetworkManager[48926]: <info>  [1764404150.9038] device (tap1cd399cb-1b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:15:50 np0005539552 NetworkManager[48926]: <info>  [1764404150.9050] device (tap1cd399cb-1b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:15:50 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:50.905 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ea8e6002-c262-4904-9cd7-0b0dca098154]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:50 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:50.931 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[e58bddd8-4bb2-4863-bfa5-cce4b263538c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:50 np0005539552 systemd-udevd[273741]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:15:50 np0005539552 NetworkManager[48926]: <info>  [1764404150.9455] manager: (tapab70b036-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/180)
Nov 29 03:15:50 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:50.944 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[00f458bc-308a-40eb-a547-f2b9a1f66714]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:50 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:50.975 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[3e8d4eda-8574-4fc4-adb1-34cddcc45800]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:50 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:50.980 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[eb5f5880-3822-48a0-b8b9-da0f376c5a91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:51 np0005539552 NetworkManager[48926]: <info>  [1764404151.0069] device (tapab70b036-b0): carrier: link connected
Nov 29 03:15:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:51.013 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[2fac4e99-bf9a-446d-8955-a15b667d27e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:51.032 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[20820f3c-1fcb-477d-89d8-c2b538276002]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapab70b036-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:43:6f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 110], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719669, 'reachable_time': 16818, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273769, 'error': None, 'target': 'ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:51.046 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d029a410-a39c-4c0b-94ce-5737189391f2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe14:436f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 719669, 'tstamp': 719669}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273770, 'error': None, 'target': 'ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:51.066 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[cd041236-842a-4d0c-8f02-1e5284972b60]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapab70b036-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:43:6f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 110], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719669, 'reachable_time': 16818, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 273771, 'error': None, 'target': 'ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:51 np0005539552 nova_compute[233724]: 2025-11-29 08:15:51.097 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:51.102 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a68d9abd-043d-41e8-a86b-5d447a59a131]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:51 np0005539552 nova_compute[233724]: 2025-11-29 08:15:51.115 233728 DEBUG nova.network.neutron [req-25d8306f-8076-4394-a244-7b0e6542501e req-a84a3ac0-85e4-415a-9645-0e693603ff60 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Updated VIF entry in instance network info cache for port 1cd399cb-1be3-43a4-bbea-c1a0f48fff1e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:15:51 np0005539552 nova_compute[233724]: 2025-11-29 08:15:51.116 233728 DEBUG nova.network.neutron [req-25d8306f-8076-4394-a244-7b0e6542501e req-a84a3ac0-85e4-415a-9645-0e693603ff60 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Updating instance_info_cache with network_info: [{"id": "1cd399cb-1be3-43a4-bbea-c1a0f48fff1e", "address": "fa:16:3e:74:d6:24", "network": {"id": "ab70b036-b5ab-4377-b081-f4b82fdb05c5", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1993818804-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "647c3591c2b940409293763c6c83e58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cd399cb-1b", "ovs_interfaceid": "1cd399cb-1be3-43a4-bbea-c1a0f48fff1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:15:51 np0005539552 nova_compute[233724]: 2025-11-29 08:15:51.155 233728 DEBUG oslo_concurrency.lockutils [req-25d8306f-8076-4394-a244-7b0e6542501e req-a84a3ac0-85e4-415a-9645-0e693603ff60 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-61d117f5-8412-446a-a3b7-cae3db576105" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:15:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:51.171 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[7f6a0f2a-52ff-4358-82fb-ffbf05e4c3ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:51.173 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapab70b036-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:51.174 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:15:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:51.174 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapab70b036-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:51 np0005539552 kernel: tapab70b036-b0: entered promiscuous mode
Nov 29 03:15:51 np0005539552 NetworkManager[48926]: <info>  [1764404151.1769] manager: (tapab70b036-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/181)
Nov 29 03:15:51 np0005539552 nova_compute[233724]: 2025-11-29 08:15:51.176 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:51.184 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapab70b036-b0, col_values=(('external_ids', {'iface-id': 'b68836c1-a6f1-4b18-aa8f-0c55204e98dc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:51 np0005539552 ovn_controller[133798]: 2025-11-29T08:15:51Z|00349|binding|INFO|Releasing lport b68836c1-a6f1-4b18-aa8f-0c55204e98dc from this chassis (sb_readonly=0)
Nov 29 03:15:51 np0005539552 nova_compute[233724]: 2025-11-29 08:15:51.186 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:51.193 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ab70b036-b5ab-4377-b081-f4b82fdb05c5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ab70b036-b5ab-4377-b081-f4b82fdb05c5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:15:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:51.194 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[739a4a2a-638b-4e25-80b2-62b7f8c4fe21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:51.194 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:15:51 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:15:51 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:15:51 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-ab70b036-b5ab-4377-b081-f4b82fdb05c5
Nov 29 03:15:51 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:15:51 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:15:51 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:15:51 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/ab70b036-b5ab-4377-b081-f4b82fdb05c5.pid.haproxy
Nov 29 03:15:51 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:15:51 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:15:51 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:15:51 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:15:51 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:15:51 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:15:51 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:15:51 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:15:51 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:15:51 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:15:51 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:15:51 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:15:51 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:15:51 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:15:51 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:15:51 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:15:51 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:15:51 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:15:51 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:15:51 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:15:51 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID ab70b036-b5ab-4377-b081-f4b82fdb05c5
Nov 29 03:15:51 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:15:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:51.195 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5', 'env', 'PROCESS_TAG=haproxy-ab70b036-b5ab-4377-b081-f4b82fdb05c5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ab70b036-b5ab-4377-b081-f4b82fdb05c5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:15:51 np0005539552 nova_compute[233724]: 2025-11-29 08:15:51.201 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:51.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:51 np0005539552 podman[273828]: 2025-11-29 08:15:51.566424677 +0000 UTC m=+0.052374879 container create ae96ea9f17a1689e9c6c49c57546d9615f47780a64dcc4f9d25e7c6817488d46 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 03:15:51 np0005539552 systemd[1]: Started libpod-conmon-ae96ea9f17a1689e9c6c49c57546d9615f47780a64dcc4f9d25e7c6817488d46.scope.
Nov 29 03:15:51 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:15:51 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf51b676101221032d12e4d3c4c37608388bb664f00514d027059de3ebb185ab/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:15:51 np0005539552 podman[273828]: 2025-11-29 08:15:51.539544624 +0000 UTC m=+0.025494846 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:15:51 np0005539552 podman[273828]: 2025-11-29 08:15:51.636089328 +0000 UTC m=+0.122039530 container init ae96ea9f17a1689e9c6c49c57546d9615f47780a64dcc4f9d25e7c6817488d46 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 03:15:51 np0005539552 podman[273828]: 2025-11-29 08:15:51.641807582 +0000 UTC m=+0.127757784 container start ae96ea9f17a1689e9c6c49c57546d9615f47780a64dcc4f9d25e7c6817488d46 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2)
Nov 29 03:15:51 np0005539552 neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5[273856]: [NOTICE]   (273863) : New worker (273866) forked
Nov 29 03:15:51 np0005539552 neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5[273856]: [NOTICE]   (273863) : Loading success.
Nov 29 03:15:51 np0005539552 nova_compute[233724]: 2025-11-29 08:15:51.698 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404151.698447, 61d117f5-8412-446a-a3b7-cae3db576105 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:15:51 np0005539552 nova_compute[233724]: 2025-11-29 08:15:51.699 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] VM Started (Lifecycle Event)#033[00m
Nov 29 03:15:51 np0005539552 nova_compute[233724]: 2025-11-29 08:15:51.742 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:15:51 np0005539552 nova_compute[233724]: 2025-11-29 08:15:51.746 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404151.699747, 61d117f5-8412-446a-a3b7-cae3db576105 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:15:51 np0005539552 nova_compute[233724]: 2025-11-29 08:15:51.746 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:15:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:51.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:15:51 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.3 total, 600.0 interval#012Cumulative writes: 36K writes, 150K keys, 36K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.04 MB/s#012Cumulative WAL: 36K writes, 12K syncs, 2.87 writes per sync, written: 0.15 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 11K writes, 46K keys, 11K commit groups, 1.0 writes per commit group, ingest: 49.93 MB, 0.08 MB/s#012Interval WAL: 11K writes, 4610 syncs, 2.57 writes per sync, written: 0.05 GB, 0.08 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 03:15:51 np0005539552 nova_compute[233724]: 2025-11-29 08:15:51.770 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:15:51 np0005539552 nova_compute[233724]: 2025-11-29 08:15:51.773 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:15:51 np0005539552 nova_compute[233724]: 2025-11-29 08:15:51.821 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:15:52 np0005539552 nova_compute[233724]: 2025-11-29 08:15:52.286 233728 INFO nova.compute.manager [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Rescuing#033[00m
Nov 29 03:15:52 np0005539552 nova_compute[233724]: 2025-11-29 08:15:52.286 233728 DEBUG oslo_concurrency.lockutils [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Acquiring lock "refresh_cache-7dde4b1f-b13a-43ab-b40d-343e3c6e143e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:15:52 np0005539552 nova_compute[233724]: 2025-11-29 08:15:52.287 233728 DEBUG oslo_concurrency.lockutils [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Acquired lock "refresh_cache-7dde4b1f-b13a-43ab-b40d-343e3c6e143e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:15:52 np0005539552 nova_compute[233724]: 2025-11-29 08:15:52.287 233728 DEBUG nova.network.neutron [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:15:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:53 np0005539552 nova_compute[233724]: 2025-11-29 08:15:53.047 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:53.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:53.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:55 np0005539552 nova_compute[233724]: 2025-11-29 08:15:55.230 233728 DEBUG nova.compute.manager [req-95b12bde-7113-4e8a-8919-837f9b38d934 req-3b0f5058-6ece-4a56-9358-5756fafbe7d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Received event network-vif-plugged-1cd399cb-1be3-43a4-bbea-c1a0f48fff1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:15:55 np0005539552 nova_compute[233724]: 2025-11-29 08:15:55.231 233728 DEBUG oslo_concurrency.lockutils [req-95b12bde-7113-4e8a-8919-837f9b38d934 req-3b0f5058-6ece-4a56-9358-5756fafbe7d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "61d117f5-8412-446a-a3b7-cae3db576105-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:55 np0005539552 nova_compute[233724]: 2025-11-29 08:15:55.231 233728 DEBUG oslo_concurrency.lockutils [req-95b12bde-7113-4e8a-8919-837f9b38d934 req-3b0f5058-6ece-4a56-9358-5756fafbe7d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "61d117f5-8412-446a-a3b7-cae3db576105-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:55 np0005539552 nova_compute[233724]: 2025-11-29 08:15:55.232 233728 DEBUG oslo_concurrency.lockutils [req-95b12bde-7113-4e8a-8919-837f9b38d934 req-3b0f5058-6ece-4a56-9358-5756fafbe7d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "61d117f5-8412-446a-a3b7-cae3db576105-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:55 np0005539552 nova_compute[233724]: 2025-11-29 08:15:55.232 233728 DEBUG nova.compute.manager [req-95b12bde-7113-4e8a-8919-837f9b38d934 req-3b0f5058-6ece-4a56-9358-5756fafbe7d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Processing event network-vif-plugged-1cd399cb-1be3-43a4-bbea-c1a0f48fff1e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:15:55 np0005539552 nova_compute[233724]: 2025-11-29 08:15:55.233 233728 DEBUG nova.compute.manager [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:15:55 np0005539552 nova_compute[233724]: 2025-11-29 08:15:55.244 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404155.2369938, 61d117f5-8412-446a-a3b7-cae3db576105 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:15:55 np0005539552 nova_compute[233724]: 2025-11-29 08:15:55.252 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:15:55 np0005539552 nova_compute[233724]: 2025-11-29 08:15:55.257 233728 DEBUG nova.virt.libvirt.driver [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:15:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:15:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:55.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:15:55 np0005539552 nova_compute[233724]: 2025-11-29 08:15:55.273 233728 INFO nova.virt.libvirt.driver [-] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Instance spawned successfully.#033[00m
Nov 29 03:15:55 np0005539552 nova_compute[233724]: 2025-11-29 08:15:55.274 233728 DEBUG nova.virt.libvirt.driver [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:15:55 np0005539552 nova_compute[233724]: 2025-11-29 08:15:55.277 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:15:55 np0005539552 nova_compute[233724]: 2025-11-29 08:15:55.293 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:15:55 np0005539552 nova_compute[233724]: 2025-11-29 08:15:55.298 233728 DEBUG nova.virt.libvirt.driver [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:15:55 np0005539552 nova_compute[233724]: 2025-11-29 08:15:55.299 233728 DEBUG nova.virt.libvirt.driver [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:15:55 np0005539552 nova_compute[233724]: 2025-11-29 08:15:55.299 233728 DEBUG nova.virt.libvirt.driver [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:15:55 np0005539552 nova_compute[233724]: 2025-11-29 08:15:55.300 233728 DEBUG nova.virt.libvirt.driver [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:15:55 np0005539552 nova_compute[233724]: 2025-11-29 08:15:55.301 233728 DEBUG nova.virt.libvirt.driver [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:15:55 np0005539552 nova_compute[233724]: 2025-11-29 08:15:55.301 233728 DEBUG nova.virt.libvirt.driver [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:15:55 np0005539552 nova_compute[233724]: 2025-11-29 08:15:55.347 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:15:55 np0005539552 nova_compute[233724]: 2025-11-29 08:15:55.442 233728 INFO nova.compute.manager [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Took 14.73 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:15:55 np0005539552 nova_compute[233724]: 2025-11-29 08:15:55.443 233728 DEBUG nova.compute.manager [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:15:55 np0005539552 nova_compute[233724]: 2025-11-29 08:15:55.519 233728 INFO nova.compute.manager [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Took 16.15 seconds to build instance.#033[00m
Nov 29 03:15:55 np0005539552 nova_compute[233724]: 2025-11-29 08:15:55.538 233728 DEBUG oslo_concurrency.lockutils [None req-00f586ed-7633-49d8-9d62-883d06316131 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "61d117f5-8412-446a-a3b7-cae3db576105" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.288s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:15:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:55.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:15:56 np0005539552 nova_compute[233724]: 2025-11-29 08:15:56.099 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:56 np0005539552 nova_compute[233724]: 2025-11-29 08:15:56.437 233728 DEBUG nova.network.neutron [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Updating instance_info_cache with network_info: [{"id": "3ee6f630-8898-496e-8d41-aec58022b039", "address": "fa:16:3e:b2:5a:c4", "network": {"id": "9404f82f-199b-4eec-83ca-0eeb6b2d1ce8", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1742590140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4da7fb77734a4135a6f8b5b70bed7a2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ee6f630-88", "ovs_interfaceid": "3ee6f630-8898-496e-8d41-aec58022b039", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:15:56 np0005539552 nova_compute[233724]: 2025-11-29 08:15:56.464 233728 DEBUG oslo_concurrency.lockutils [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Releasing lock "refresh_cache-7dde4b1f-b13a-43ab-b40d-343e3c6e143e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:15:56 np0005539552 nova_compute[233724]: 2025-11-29 08:15:56.946 233728 DEBUG nova.virt.libvirt.driver [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:15:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:15:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:57.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:15:57 np0005539552 nova_compute[233724]: 2025-11-29 08:15:57.478 233728 DEBUG nova.compute.manager [req-7ae47f12-9dd4-40d1-9e93-020888c89969 req-d9f87e20-557f-408c-8e4c-c9bbb457a27a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Received event network-vif-plugged-1cd399cb-1be3-43a4-bbea-c1a0f48fff1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:15:57 np0005539552 nova_compute[233724]: 2025-11-29 08:15:57.479 233728 DEBUG oslo_concurrency.lockutils [req-7ae47f12-9dd4-40d1-9e93-020888c89969 req-d9f87e20-557f-408c-8e4c-c9bbb457a27a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "61d117f5-8412-446a-a3b7-cae3db576105-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:57 np0005539552 nova_compute[233724]: 2025-11-29 08:15:57.479 233728 DEBUG oslo_concurrency.lockutils [req-7ae47f12-9dd4-40d1-9e93-020888c89969 req-d9f87e20-557f-408c-8e4c-c9bbb457a27a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "61d117f5-8412-446a-a3b7-cae3db576105-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:57 np0005539552 nova_compute[233724]: 2025-11-29 08:15:57.480 233728 DEBUG oslo_concurrency.lockutils [req-7ae47f12-9dd4-40d1-9e93-020888c89969 req-d9f87e20-557f-408c-8e4c-c9bbb457a27a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "61d117f5-8412-446a-a3b7-cae3db576105-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:57 np0005539552 nova_compute[233724]: 2025-11-29 08:15:57.480 233728 DEBUG nova.compute.manager [req-7ae47f12-9dd4-40d1-9e93-020888c89969 req-d9f87e20-557f-408c-8e4c-c9bbb457a27a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] No waiting events found dispatching network-vif-plugged-1cd399cb-1be3-43a4-bbea-c1a0f48fff1e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:15:57 np0005539552 nova_compute[233724]: 2025-11-29 08:15:57.481 233728 WARNING nova.compute.manager [req-7ae47f12-9dd4-40d1-9e93-020888c89969 req-d9f87e20-557f-408c-8e4c-c9bbb457a27a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Received unexpected event network-vif-plugged-1cd399cb-1be3-43a4-bbea-c1a0f48fff1e for instance with vm_state active and task_state None.#033[00m
Nov 29 03:15:57 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:15:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:57.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:15:58 np0005539552 nova_compute[233724]: 2025-11-29 08:15:58.059 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:58 np0005539552 nova_compute[233724]: 2025-11-29 08:15:58.814 233728 DEBUG oslo_concurrency.lockutils [None req-f4737401-24bc-4a1a-8482-e0b5c8de7cce 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Acquiring lock "61d117f5-8412-446a-a3b7-cae3db576105" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:58 np0005539552 nova_compute[233724]: 2025-11-29 08:15:58.815 233728 DEBUG oslo_concurrency.lockutils [None req-f4737401-24bc-4a1a-8482-e0b5c8de7cce 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "61d117f5-8412-446a-a3b7-cae3db576105" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:58 np0005539552 nova_compute[233724]: 2025-11-29 08:15:58.815 233728 DEBUG oslo_concurrency.lockutils [None req-f4737401-24bc-4a1a-8482-e0b5c8de7cce 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Acquiring lock "61d117f5-8412-446a-a3b7-cae3db576105-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:58 np0005539552 nova_compute[233724]: 2025-11-29 08:15:58.815 233728 DEBUG oslo_concurrency.lockutils [None req-f4737401-24bc-4a1a-8482-e0b5c8de7cce 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "61d117f5-8412-446a-a3b7-cae3db576105-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:58 np0005539552 nova_compute[233724]: 2025-11-29 08:15:58.816 233728 DEBUG oslo_concurrency.lockutils [None req-f4737401-24bc-4a1a-8482-e0b5c8de7cce 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "61d117f5-8412-446a-a3b7-cae3db576105-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:58 np0005539552 nova_compute[233724]: 2025-11-29 08:15:58.817 233728 INFO nova.compute.manager [None req-f4737401-24bc-4a1a-8482-e0b5c8de7cce 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Terminating instance#033[00m
Nov 29 03:15:58 np0005539552 nova_compute[233724]: 2025-11-29 08:15:58.818 233728 DEBUG nova.compute.manager [None req-f4737401-24bc-4a1a-8482-e0b5c8de7cce 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:15:59 np0005539552 kernel: tap1cd399cb-1b (unregistering): left promiscuous mode
Nov 29 03:15:59 np0005539552 NetworkManager[48926]: <info>  [1764404159.1963] device (tap1cd399cb-1b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:15:59 np0005539552 nova_compute[233724]: 2025-11-29 08:15:59.205 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:59 np0005539552 ovn_controller[133798]: 2025-11-29T08:15:59Z|00350|binding|INFO|Releasing lport 1cd399cb-1be3-43a4-bbea-c1a0f48fff1e from this chassis (sb_readonly=0)
Nov 29 03:15:59 np0005539552 ovn_controller[133798]: 2025-11-29T08:15:59Z|00351|binding|INFO|Setting lport 1cd399cb-1be3-43a4-bbea-c1a0f48fff1e down in Southbound
Nov 29 03:15:59 np0005539552 ovn_controller[133798]: 2025-11-29T08:15:59Z|00352|binding|INFO|Removing iface tap1cd399cb-1b ovn-installed in OVS
Nov 29 03:15:59 np0005539552 nova_compute[233724]: 2025-11-29 08:15:59.212 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:59.214 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:74:d6:24 10.100.0.13'], port_security=['fa:16:3e:74:d6:24 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '61d117f5-8412-446a-a3b7-cae3db576105', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ab70b036-b5ab-4377-b081-f4b82fdb05c5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '647c3591c2b940409293763c6c83e58c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '41c3698d-cc27-49de-8078-b06ee82fc1d7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8ab88e7d-4131-470c-a431-4c951fbab973, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=1cd399cb-1be3-43a4-bbea-c1a0f48fff1e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:15:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:59.216 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 1cd399cb-1be3-43a4-bbea-c1a0f48fff1e in datapath ab70b036-b5ab-4377-b081-f4b82fdb05c5 unbound from our chassis#033[00m
Nov 29 03:15:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:59.218 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ab70b036-b5ab-4377-b081-f4b82fdb05c5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:15:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:59.219 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f09a48d3-2728-41fd-a22c-d3a10fc3a303]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:59.224 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5 namespace which is not needed anymore#033[00m
Nov 29 03:15:59 np0005539552 nova_compute[233724]: 2025-11-29 08:15:59.228 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:59 np0005539552 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000065.scope: Deactivated successfully.
Nov 29 03:15:59 np0005539552 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000065.scope: Consumed 4.385s CPU time.
Nov 29 03:15:59 np0005539552 systemd-machined[196379]: Machine qemu-39-instance-00000065 terminated.
Nov 29 03:15:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:15:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:59.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:15:59 np0005539552 podman[273934]: 2025-11-29 08:15:59.30259434 +0000 UTC m=+0.080591387 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, 
container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:15:59 np0005539552 podman[273931]: 2025-11-29 08:15:59.332863543 +0000 UTC m=+0.111931888 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:15:59 np0005539552 podman[273935]: 2025-11-29 08:15:59.33946667 +0000 UTC m=+0.109059601 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller)
Nov 29 03:15:59 np0005539552 neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5[273856]: [NOTICE]   (273863) : haproxy version is 2.8.14-c23fe91
Nov 29 03:15:59 np0005539552 neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5[273856]: [NOTICE]   (273863) : path to executable is /usr/sbin/haproxy
Nov 29 03:15:59 np0005539552 neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5[273856]: [WARNING]  (273863) : Exiting Master process...
Nov 29 03:15:59 np0005539552 neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5[273856]: [ALERT]    (273863) : Current worker (273866) exited with code 143 (Terminated)
Nov 29 03:15:59 np0005539552 neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5[273856]: [WARNING]  (273863) : All workers exited. Exiting... (0)
Nov 29 03:15:59 np0005539552 systemd[1]: libpod-ae96ea9f17a1689e9c6c49c57546d9615f47780a64dcc4f9d25e7c6817488d46.scope: Deactivated successfully.
Nov 29 03:15:59 np0005539552 conmon[273856]: conmon ae96ea9f17a1689e9c6c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ae96ea9f17a1689e9c6c49c57546d9615f47780a64dcc4f9d25e7c6817488d46.scope/container/memory.events
Nov 29 03:15:59 np0005539552 podman[274010]: 2025-11-29 08:15:59.370972437 +0000 UTC m=+0.047489117 container died ae96ea9f17a1689e9c6c49c57546d9615f47780a64dcc4f9d25e7c6817488d46 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 03:15:59 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ae96ea9f17a1689e9c6c49c57546d9615f47780a64dcc4f9d25e7c6817488d46-userdata-shm.mount: Deactivated successfully.
Nov 29 03:15:59 np0005539552 systemd[1]: var-lib-containers-storage-overlay-bf51b676101221032d12e4d3c4c37608388bb664f00514d027059de3ebb185ab-merged.mount: Deactivated successfully.
Nov 29 03:15:59 np0005539552 podman[274010]: 2025-11-29 08:15:59.410603962 +0000 UTC m=+0.087120622 container cleanup ae96ea9f17a1689e9c6c49c57546d9615f47780a64dcc4f9d25e7c6817488d46 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 03:15:59 np0005539552 systemd[1]: libpod-conmon-ae96ea9f17a1689e9c6c49c57546d9615f47780a64dcc4f9d25e7c6817488d46.scope: Deactivated successfully.
Nov 29 03:15:59 np0005539552 nova_compute[233724]: 2025-11-29 08:15:59.451 233728 INFO nova.virt.libvirt.driver [-] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Instance destroyed successfully.#033[00m
Nov 29 03:15:59 np0005539552 nova_compute[233724]: 2025-11-29 08:15:59.452 233728 DEBUG nova.objects.instance [None req-f4737401-24bc-4a1a-8482-e0b5c8de7cce 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lazy-loading 'resources' on Instance uuid 61d117f5-8412-446a-a3b7-cae3db576105 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:15:59 np0005539552 nova_compute[233724]: 2025-11-29 08:15:59.479 233728 DEBUG nova.virt.libvirt.vif [None req-f4737401-24bc-4a1a-8482-e0b5c8de7cce 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:15:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-700323681',display_name='tempest-MultipleCreateTestJSON-server-700323681-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-700323681-2',id=101,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-11-29T08:15:55Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='647c3591c2b940409293763c6c83e58c',ramdisk_id='',reservation_id='r-a6zfs0z0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',i
mage_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-2058984420',owner_user_name='tempest-MultipleCreateTestJSON-2058984420-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:15:55Z,user_data=None,user_id='0bd9df09f1324e3f9dba099f03ffe1c6',uuid=61d117f5-8412-446a-a3b7-cae3db576105,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1cd399cb-1be3-43a4-bbea-c1a0f48fff1e", "address": "fa:16:3e:74:d6:24", "network": {"id": "ab70b036-b5ab-4377-b081-f4b82fdb05c5", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1993818804-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "647c3591c2b940409293763c6c83e58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cd399cb-1b", "ovs_interfaceid": "1cd399cb-1be3-43a4-bbea-c1a0f48fff1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:15:59 np0005539552 nova_compute[233724]: 2025-11-29 08:15:59.480 233728 DEBUG nova.network.os_vif_util [None req-f4737401-24bc-4a1a-8482-e0b5c8de7cce 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Converting VIF {"id": "1cd399cb-1be3-43a4-bbea-c1a0f48fff1e", "address": "fa:16:3e:74:d6:24", "network": {"id": "ab70b036-b5ab-4377-b081-f4b82fdb05c5", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1993818804-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "647c3591c2b940409293763c6c83e58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cd399cb-1b", "ovs_interfaceid": "1cd399cb-1be3-43a4-bbea-c1a0f48fff1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:15:59 np0005539552 nova_compute[233724]: 2025-11-29 08:15:59.481 233728 DEBUG nova.network.os_vif_util [None req-f4737401-24bc-4a1a-8482-e0b5c8de7cce 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:74:d6:24,bridge_name='br-int',has_traffic_filtering=True,id=1cd399cb-1be3-43a4-bbea-c1a0f48fff1e,network=Network(ab70b036-b5ab-4377-b081-f4b82fdb05c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cd399cb-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:15:59 np0005539552 nova_compute[233724]: 2025-11-29 08:15:59.481 233728 DEBUG os_vif [None req-f4737401-24bc-4a1a-8482-e0b5c8de7cce 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:74:d6:24,bridge_name='br-int',has_traffic_filtering=True,id=1cd399cb-1be3-43a4-bbea-c1a0f48fff1e,network=Network(ab70b036-b5ab-4377-b081-f4b82fdb05c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cd399cb-1b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:15:59 np0005539552 nova_compute[233724]: 2025-11-29 08:15:59.483 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:59 np0005539552 podman[274048]: 2025-11-29 08:15:59.482977526 +0000 UTC m=+0.049295625 container remove ae96ea9f17a1689e9c6c49c57546d9615f47780a64dcc4f9d25e7c6817488d46 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 03:15:59 np0005539552 nova_compute[233724]: 2025-11-29 08:15:59.483 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1cd399cb-1b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:59 np0005539552 nova_compute[233724]: 2025-11-29 08:15:59.486 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:59 np0005539552 nova_compute[233724]: 2025-11-29 08:15:59.488 233728 INFO os_vif [None req-f4737401-24bc-4a1a-8482-e0b5c8de7cce 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:74:d6:24,bridge_name='br-int',has_traffic_filtering=True,id=1cd399cb-1be3-43a4-bbea-c1a0f48fff1e,network=Network(ab70b036-b5ab-4377-b081-f4b82fdb05c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cd399cb-1b')#033[00m
Nov 29 03:15:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:59.494 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[5b9cc5bd-aa6c-4441-866b-650b4833e760]: (4, ('Sat Nov 29 08:15:59 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5 (ae96ea9f17a1689e9c6c49c57546d9615f47780a64dcc4f9d25e7c6817488d46)\nae96ea9f17a1689e9c6c49c57546d9615f47780a64dcc4f9d25e7c6817488d46\nSat Nov 29 08:15:59 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5 (ae96ea9f17a1689e9c6c49c57546d9615f47780a64dcc4f9d25e7c6817488d46)\nae96ea9f17a1689e9c6c49c57546d9615f47780a64dcc4f9d25e7c6817488d46\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:59.496 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[92b5abcd-b906-41f9-95d1-e570a3cde86f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:59.497 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapab70b036-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:59 np0005539552 kernel: tapab70b036-b0: left promiscuous mode
Nov 29 03:15:59 np0005539552 nova_compute[233724]: 2025-11-29 08:15:59.507 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:59 np0005539552 nova_compute[233724]: 2025-11-29 08:15:59.514 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:59.518 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[34500588-c2e7-44b1-8d1c-7f0b637beddd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:59.537 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[18816104-e4a3-469c-a870-40444063b655]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:59.539 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[7b74f27f-3b00-4ef1-82a7-4fddad6cf53f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:59.556 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[247b5621-58b2-4227-ab27-7de0e37ade47]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719661, 'reachable_time': 34310, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274091, 'error': None, 'target': 'ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:59 np0005539552 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 03:15:59 np0005539552 systemd[1]: run-netns-ovnmeta\x2dab70b036\x2db5ab\x2d4377\x2db081\x2df4b82fdb05c5.mount: Deactivated successfully.
Nov 29 03:15:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:59.570 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ab70b036-b5ab-4377-b081-f4b82fdb05c5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:15:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:15:59.570 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[e686d609-6b71-429f-9af6-44c7911804a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:15:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:59.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:00 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:00.706 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:16:00 np0005539552 nova_compute[233724]: 2025-11-29 08:16:00.707 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:00 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:00.708 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:16:01 np0005539552 nova_compute[233724]: 2025-11-29 08:16:01.100 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:01.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:16:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:01.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:16:01 np0005539552 nova_compute[233724]: 2025-11-29 08:16:01.887 233728 DEBUG nova.compute.manager [req-b10e6d00-7fdb-4888-b8f9-d3b037fb2f76 req-cbe3515d-fca8-4fad-86fa-1b2cbbe34dcc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Received event network-vif-unplugged-1cd399cb-1be3-43a4-bbea-c1a0f48fff1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:01 np0005539552 nova_compute[233724]: 2025-11-29 08:16:01.888 233728 DEBUG oslo_concurrency.lockutils [req-b10e6d00-7fdb-4888-b8f9-d3b037fb2f76 req-cbe3515d-fca8-4fad-86fa-1b2cbbe34dcc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "61d117f5-8412-446a-a3b7-cae3db576105-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:01 np0005539552 nova_compute[233724]: 2025-11-29 08:16:01.889 233728 DEBUG oslo_concurrency.lockutils [req-b10e6d00-7fdb-4888-b8f9-d3b037fb2f76 req-cbe3515d-fca8-4fad-86fa-1b2cbbe34dcc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "61d117f5-8412-446a-a3b7-cae3db576105-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:01 np0005539552 nova_compute[233724]: 2025-11-29 08:16:01.890 233728 DEBUG oslo_concurrency.lockutils [req-b10e6d00-7fdb-4888-b8f9-d3b037fb2f76 req-cbe3515d-fca8-4fad-86fa-1b2cbbe34dcc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "61d117f5-8412-446a-a3b7-cae3db576105-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:01 np0005539552 nova_compute[233724]: 2025-11-29 08:16:01.890 233728 DEBUG nova.compute.manager [req-b10e6d00-7fdb-4888-b8f9-d3b037fb2f76 req-cbe3515d-fca8-4fad-86fa-1b2cbbe34dcc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] No waiting events found dispatching network-vif-unplugged-1cd399cb-1be3-43a4-bbea-c1a0f48fff1e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:16:01 np0005539552 nova_compute[233724]: 2025-11-29 08:16:01.891 233728 DEBUG nova.compute.manager [req-b10e6d00-7fdb-4888-b8f9-d3b037fb2f76 req-cbe3515d-fca8-4fad-86fa-1b2cbbe34dcc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Received event network-vif-unplugged-1cd399cb-1be3-43a4-bbea-c1a0f48fff1e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:16:01 np0005539552 nova_compute[233724]: 2025-11-29 08:16:01.891 233728 DEBUG nova.compute.manager [req-b10e6d00-7fdb-4888-b8f9-d3b037fb2f76 req-cbe3515d-fca8-4fad-86fa-1b2cbbe34dcc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Received event network-vif-plugged-1cd399cb-1be3-43a4-bbea-c1a0f48fff1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:01 np0005539552 nova_compute[233724]: 2025-11-29 08:16:01.892 233728 DEBUG oslo_concurrency.lockutils [req-b10e6d00-7fdb-4888-b8f9-d3b037fb2f76 req-cbe3515d-fca8-4fad-86fa-1b2cbbe34dcc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "61d117f5-8412-446a-a3b7-cae3db576105-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:01 np0005539552 nova_compute[233724]: 2025-11-29 08:16:01.893 233728 DEBUG oslo_concurrency.lockutils [req-b10e6d00-7fdb-4888-b8f9-d3b037fb2f76 req-cbe3515d-fca8-4fad-86fa-1b2cbbe34dcc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "61d117f5-8412-446a-a3b7-cae3db576105-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:01 np0005539552 nova_compute[233724]: 2025-11-29 08:16:01.893 233728 DEBUG oslo_concurrency.lockutils [req-b10e6d00-7fdb-4888-b8f9-d3b037fb2f76 req-cbe3515d-fca8-4fad-86fa-1b2cbbe34dcc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "61d117f5-8412-446a-a3b7-cae3db576105-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:01 np0005539552 nova_compute[233724]: 2025-11-29 08:16:01.894 233728 DEBUG nova.compute.manager [req-b10e6d00-7fdb-4888-b8f9-d3b037fb2f76 req-cbe3515d-fca8-4fad-86fa-1b2cbbe34dcc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] No waiting events found dispatching network-vif-plugged-1cd399cb-1be3-43a4-bbea-c1a0f48fff1e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:16:01 np0005539552 nova_compute[233724]: 2025-11-29 08:16:01.894 233728 WARNING nova.compute.manager [req-b10e6d00-7fdb-4888-b8f9-d3b037fb2f76 req-cbe3515d-fca8-4fad-86fa-1b2cbbe34dcc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Received unexpected event network-vif-plugged-1cd399cb-1be3-43a4-bbea-c1a0f48fff1e for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:16:01 np0005539552 nova_compute[233724]: 2025-11-29 08:16:01.995 233728 INFO nova.virt.libvirt.driver [None req-f4737401-24bc-4a1a-8482-e0b5c8de7cce 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Deleting instance files /var/lib/nova/instances/61d117f5-8412-446a-a3b7-cae3db576105_del#033[00m
Nov 29 03:16:01 np0005539552 nova_compute[233724]: 2025-11-29 08:16:01.996 233728 INFO nova.virt.libvirt.driver [None req-f4737401-24bc-4a1a-8482-e0b5c8de7cce 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Deletion of /var/lib/nova/instances/61d117f5-8412-446a-a3b7-cae3db576105_del complete#033[00m
Nov 29 03:16:02 np0005539552 nova_compute[233724]: 2025-11-29 08:16:02.054 233728 INFO nova.compute.manager [None req-f4737401-24bc-4a1a-8482-e0b5c8de7cce 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Took 3.24 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:16:02 np0005539552 nova_compute[233724]: 2025-11-29 08:16:02.054 233728 DEBUG oslo.service.loopingcall [None req-f4737401-24bc-4a1a-8482-e0b5c8de7cce 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:16:02 np0005539552 nova_compute[233724]: 2025-11-29 08:16:02.055 233728 DEBUG nova.compute.manager [-] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:16:02 np0005539552 nova_compute[233724]: 2025-11-29 08:16:02.055 233728 DEBUG nova.network.neutron [-] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:16:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:03.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:03.710 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:16:03 np0005539552 nova_compute[233724]: 2025-11-29 08:16:03.711 233728 DEBUG nova.network.neutron [-] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:16:03 np0005539552 nova_compute[233724]: 2025-11-29 08:16:03.739 233728 INFO nova.compute.manager [-] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Took 1.68 seconds to deallocate network for instance.#033[00m
Nov 29 03:16:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:16:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:03.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:16:03 np0005539552 nova_compute[233724]: 2025-11-29 08:16:03.840 233728 DEBUG oslo_concurrency.lockutils [None req-f4737401-24bc-4a1a-8482-e0b5c8de7cce 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:03 np0005539552 nova_compute[233724]: 2025-11-29 08:16:03.840 233728 DEBUG oslo_concurrency.lockutils [None req-f4737401-24bc-4a1a-8482-e0b5c8de7cce 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:03 np0005539552 nova_compute[233724]: 2025-11-29 08:16:03.871 233728 DEBUG nova.compute.manager [req-6dae6efc-3813-4f56-8b87-a66910ae50d1 req-5da29347-2512-4ff0-8aa6-5934cf7c30f9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Received event network-vif-deleted-1cd399cb-1be3-43a4-bbea-c1a0f48fff1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:03 np0005539552 nova_compute[233724]: 2025-11-29 08:16:03.879 233728 DEBUG nova.scheduler.client.report [None req-f4737401-24bc-4a1a-8482-e0b5c8de7cce 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Refreshing inventories for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:16:03 np0005539552 nova_compute[233724]: 2025-11-29 08:16:03.916 233728 DEBUG nova.scheduler.client.report [None req-f4737401-24bc-4a1a-8482-e0b5c8de7cce 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Updating ProviderTree inventory for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:16:03 np0005539552 nova_compute[233724]: 2025-11-29 08:16:03.917 233728 DEBUG nova.compute.provider_tree [None req-f4737401-24bc-4a1a-8482-e0b5c8de7cce 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Updating inventory in ProviderTree for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 03:16:03 np0005539552 nova_compute[233724]: 2025-11-29 08:16:03.937 233728 DEBUG nova.scheduler.client.report [None req-f4737401-24bc-4a1a-8482-e0b5c8de7cce 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Refreshing aggregate associations for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:16:03 np0005539552 nova_compute[233724]: 2025-11-29 08:16:03.959 233728 DEBUG nova.scheduler.client.report [None req-f4737401-24bc-4a1a-8482-e0b5c8de7cce 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Refreshing trait associations for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371, traits: HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_MMX,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_FDC,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:16:04 np0005539552 nova_compute[233724]: 2025-11-29 08:16:04.219 233728 DEBUG oslo_concurrency.processutils [None req-f4737401-24bc-4a1a-8482-e0b5c8de7cce 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:04 np0005539552 nova_compute[233724]: 2025-11-29 08:16:04.486 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:04 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:16:04 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/404452789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:16:04 np0005539552 nova_compute[233724]: 2025-11-29 08:16:04.695 233728 DEBUG oslo_concurrency.processutils [None req-f4737401-24bc-4a1a-8482-e0b5c8de7cce 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:04 np0005539552 nova_compute[233724]: 2025-11-29 08:16:04.701 233728 DEBUG nova.compute.provider_tree [None req-f4737401-24bc-4a1a-8482-e0b5c8de7cce 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:16:04 np0005539552 nova_compute[233724]: 2025-11-29 08:16:04.719 233728 DEBUG nova.scheduler.client.report [None req-f4737401-24bc-4a1a-8482-e0b5c8de7cce 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:16:04 np0005539552 nova_compute[233724]: 2025-11-29 08:16:04.742 233728 DEBUG oslo_concurrency.lockutils [None req-f4737401-24bc-4a1a-8482-e0b5c8de7cce 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.901s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:04 np0005539552 nova_compute[233724]: 2025-11-29 08:16:04.787 233728 INFO nova.scheduler.client.report [None req-f4737401-24bc-4a1a-8482-e0b5c8de7cce 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Deleted allocations for instance 61d117f5-8412-446a-a3b7-cae3db576105#033[00m
Nov 29 03:16:04 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:16:04 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:16:04 np0005539552 nova_compute[233724]: 2025-11-29 08:16:04.961 233728 DEBUG oslo_concurrency.lockutils [None req-f4737401-24bc-4a1a-8482-e0b5c8de7cce 0bd9df09f1324e3f9dba099f03ffe1c6 647c3591c2b940409293763c6c83e58c - - default default] Lock "61d117f5-8412-446a-a3b7-cae3db576105" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.146s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:05.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:05.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:06 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e295 e295: 3 total, 3 up, 3 in
Nov 29 03:16:06 np0005539552 nova_compute[233724]: 2025-11-29 08:16:06.103 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:06 np0005539552 nova_compute[233724]: 2025-11-29 08:16:06.991 233728 DEBUG nova.virt.libvirt.driver [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 29 03:16:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:07.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:07 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:16:07 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:16:07 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:16:07 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:16:07 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:16:07 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:16:07 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:16:07 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:07.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:09.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:09 np0005539552 nova_compute[233724]: 2025-11-29 08:16:09.488 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:09 np0005539552 nova_compute[233724]: 2025-11-29 08:16:09.687 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:16:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:09.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:16:09 np0005539552 kernel: tap3ee6f630-88 (unregistering): left promiscuous mode
Nov 29 03:16:09 np0005539552 NetworkManager[48926]: <info>  [1764404169.9378] device (tap3ee6f630-88): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:16:09 np0005539552 nova_compute[233724]: 2025-11-29 08:16:09.947 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:09 np0005539552 ovn_controller[133798]: 2025-11-29T08:16:09Z|00353|binding|INFO|Releasing lport 3ee6f630-8898-496e-8d41-aec58022b039 from this chassis (sb_readonly=0)
Nov 29 03:16:09 np0005539552 ovn_controller[133798]: 2025-11-29T08:16:09Z|00354|binding|INFO|Setting lport 3ee6f630-8898-496e-8d41-aec58022b039 down in Southbound
Nov 29 03:16:09 np0005539552 ovn_controller[133798]: 2025-11-29T08:16:09Z|00355|binding|INFO|Removing iface tap3ee6f630-88 ovn-installed in OVS
Nov 29 03:16:09 np0005539552 nova_compute[233724]: 2025-11-29 08:16:09.949 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:09.967 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:5a:c4 10.100.0.6'], port_security=['fa:16:3e:b2:5a:c4 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7dde4b1f-b13a-43ab-b40d-343e3c6e143e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9404f82f-199b-4eec-83ca-0eeb6b2d1ce8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4da7fb77734a4135a6f8b5b70bed7a2f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5d9eaea9-a53e-4ac6-a82a-9c11849e63d4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eff10a56-99f6-4778-b800-4c9f705b38bd, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=3ee6f630-8898-496e-8d41-aec58022b039) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:16:09 np0005539552 nova_compute[233724]: 2025-11-29 08:16:09.967 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:09.968 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 3ee6f630-8898-496e-8d41-aec58022b039 in datapath 9404f82f-199b-4eec-83ca-0eeb6b2d1ce8 unbound from our chassis#033[00m
Nov 29 03:16:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:09.969 143400 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9404f82f-199b-4eec-83ca-0eeb6b2d1ce8 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 29 03:16:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:09.970 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ae5d8bc3-1637-4721-a480-15bb29d60265]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:10 np0005539552 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000063.scope: Deactivated successfully.
Nov 29 03:16:10 np0005539552 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000063.scope: Consumed 14.316s CPU time.
Nov 29 03:16:10 np0005539552 systemd-machined[196379]: Machine qemu-38-instance-00000063 terminated.
Nov 29 03:16:10 np0005539552 kernel: tap3ee6f630-88: entered promiscuous mode
Nov 29 03:16:10 np0005539552 kernel: tap3ee6f630-88 (unregistering): left promiscuous mode
Nov 29 03:16:10 np0005539552 NetworkManager[48926]: <info>  [1764404170.1708] manager: (tap3ee6f630-88): new Tun device (/org/freedesktop/NetworkManager/Devices/182)
Nov 29 03:16:10 np0005539552 ovn_controller[133798]: 2025-11-29T08:16:10Z|00356|binding|INFO|Claiming lport 3ee6f630-8898-496e-8d41-aec58022b039 for this chassis.
Nov 29 03:16:10 np0005539552 ovn_controller[133798]: 2025-11-29T08:16:10Z|00357|binding|INFO|3ee6f630-8898-496e-8d41-aec58022b039: Claiming fa:16:3e:b2:5a:c4 10.100.0.6
Nov 29 03:16:10 np0005539552 nova_compute[233724]: 2025-11-29 08:16:10.175 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:10.181 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:5a:c4 10.100.0.6'], port_security=['fa:16:3e:b2:5a:c4 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7dde4b1f-b13a-43ab-b40d-343e3c6e143e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9404f82f-199b-4eec-83ca-0eeb6b2d1ce8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4da7fb77734a4135a6f8b5b70bed7a2f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5d9eaea9-a53e-4ac6-a82a-9c11849e63d4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eff10a56-99f6-4778-b800-4c9f705b38bd, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=3ee6f630-8898-496e-8d41-aec58022b039) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:16:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:10.182 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 3ee6f630-8898-496e-8d41-aec58022b039 in datapath 9404f82f-199b-4eec-83ca-0eeb6b2d1ce8 bound to our chassis#033[00m
Nov 29 03:16:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:10.183 143400 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9404f82f-199b-4eec-83ca-0eeb6b2d1ce8 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 29 03:16:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:10.184 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0f4f97e7-d2e3-4c9a-b0d5-700a4bcc9019]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:10 np0005539552 nova_compute[233724]: 2025-11-29 08:16:10.184 233728 INFO nova.virt.libvirt.driver [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Instance shutdown successfully after 13 seconds.#033[00m
Nov 29 03:16:10 np0005539552 nova_compute[233724]: 2025-11-29 08:16:10.189 233728 INFO nova.virt.libvirt.driver [-] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Instance destroyed successfully.#033[00m
Nov 29 03:16:10 np0005539552 nova_compute[233724]: 2025-11-29 08:16:10.190 233728 DEBUG nova.objects.instance [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lazy-loading 'numa_topology' on Instance uuid 7dde4b1f-b13a-43ab-b40d-343e3c6e143e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:16:10 np0005539552 nova_compute[233724]: 2025-11-29 08:16:10.193 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:10 np0005539552 ovn_controller[133798]: 2025-11-29T08:16:10Z|00358|binding|INFO|Setting lport 3ee6f630-8898-496e-8d41-aec58022b039 ovn-installed in OVS
Nov 29 03:16:10 np0005539552 ovn_controller[133798]: 2025-11-29T08:16:10Z|00359|binding|INFO|Setting lport 3ee6f630-8898-496e-8d41-aec58022b039 up in Southbound
Nov 29 03:16:10 np0005539552 ovn_controller[133798]: 2025-11-29T08:16:10Z|00360|binding|INFO|Releasing lport 3ee6f630-8898-496e-8d41-aec58022b039 from this chassis (sb_readonly=1)
Nov 29 03:16:10 np0005539552 ovn_controller[133798]: 2025-11-29T08:16:10Z|00361|if_status|INFO|Dropped 1 log messages in last 1531 seconds (most recently, 1531 seconds ago) due to excessive rate
Nov 29 03:16:10 np0005539552 ovn_controller[133798]: 2025-11-29T08:16:10Z|00362|if_status|INFO|Not setting lport 3ee6f630-8898-496e-8d41-aec58022b039 down as sb is readonly
Nov 29 03:16:10 np0005539552 nova_compute[233724]: 2025-11-29 08:16:10.196 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:10 np0005539552 ovn_controller[133798]: 2025-11-29T08:16:10Z|00363|binding|INFO|Removing iface tap3ee6f630-88 ovn-installed in OVS
Nov 29 03:16:10 np0005539552 nova_compute[233724]: 2025-11-29 08:16:10.197 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:10 np0005539552 ovn_controller[133798]: 2025-11-29T08:16:10Z|00364|binding|INFO|Releasing lport 3ee6f630-8898-496e-8d41-aec58022b039 from this chassis (sb_readonly=0)
Nov 29 03:16:10 np0005539552 ovn_controller[133798]: 2025-11-29T08:16:10Z|00365|binding|INFO|Setting lport 3ee6f630-8898-496e-8d41-aec58022b039 down in Southbound
Nov 29 03:16:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:10.205 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:5a:c4 10.100.0.6'], port_security=['fa:16:3e:b2:5a:c4 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7dde4b1f-b13a-43ab-b40d-343e3c6e143e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9404f82f-199b-4eec-83ca-0eeb6b2d1ce8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4da7fb77734a4135a6f8b5b70bed7a2f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5d9eaea9-a53e-4ac6-a82a-9c11849e63d4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eff10a56-99f6-4778-b800-4c9f705b38bd, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=3ee6f630-8898-496e-8d41-aec58022b039) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:16:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:10.206 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 3ee6f630-8898-496e-8d41-aec58022b039 in datapath 9404f82f-199b-4eec-83ca-0eeb6b2d1ce8 unbound from our chassis#033[00m
Nov 29 03:16:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:10.207 143400 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9404f82f-199b-4eec-83ca-0eeb6b2d1ce8 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 29 03:16:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:10.208 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[9a8245ea-3e2b-4296-9e4e-2cff94bf4965]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:10 np0005539552 nova_compute[233724]: 2025-11-29 08:16:10.211 233728 INFO nova.virt.libvirt.driver [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Attempting rescue#033[00m
Nov 29 03:16:10 np0005539552 nova_compute[233724]: 2025-11-29 08:16:10.212 233728 DEBUG nova.virt.libvirt.driver [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Nov 29 03:16:10 np0005539552 nova_compute[233724]: 2025-11-29 08:16:10.212 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:10 np0005539552 nova_compute[233724]: 2025-11-29 08:16:10.216 233728 DEBUG nova.virt.libvirt.driver [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 29 03:16:10 np0005539552 nova_compute[233724]: 2025-11-29 08:16:10.216 233728 INFO nova.virt.libvirt.driver [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Creating image(s)#033[00m
Nov 29 03:16:10 np0005539552 nova_compute[233724]: 2025-11-29 08:16:10.239 233728 DEBUG nova.storage.rbd_utils [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] rbd image 7dde4b1f-b13a-43ab-b40d-343e3c6e143e_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:10 np0005539552 nova_compute[233724]: 2025-11-29 08:16:10.243 233728 DEBUG nova.objects.instance [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lazy-loading 'trusted_certs' on Instance uuid 7dde4b1f-b13a-43ab-b40d-343e3c6e143e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:16:10 np0005539552 nova_compute[233724]: 2025-11-29 08:16:10.275 233728 DEBUG nova.storage.rbd_utils [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] rbd image 7dde4b1f-b13a-43ab-b40d-343e3c6e143e_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:10 np0005539552 nova_compute[233724]: 2025-11-29 08:16:10.298 233728 DEBUG nova.storage.rbd_utils [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] rbd image 7dde4b1f-b13a-43ab-b40d-343e3c6e143e_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:10 np0005539552 nova_compute[233724]: 2025-11-29 08:16:10.302 233728 DEBUG oslo_concurrency.processutils [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:10 np0005539552 nova_compute[233724]: 2025-11-29 08:16:10.374 233728 DEBUG oslo_concurrency.processutils [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:10 np0005539552 nova_compute[233724]: 2025-11-29 08:16:10.375 233728 DEBUG oslo_concurrency.lockutils [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:10 np0005539552 nova_compute[233724]: 2025-11-29 08:16:10.376 233728 DEBUG oslo_concurrency.lockutils [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:10 np0005539552 nova_compute[233724]: 2025-11-29 08:16:10.376 233728 DEBUG oslo_concurrency.lockutils [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:10 np0005539552 nova_compute[233724]: 2025-11-29 08:16:10.400 233728 DEBUG nova.storage.rbd_utils [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] rbd image 7dde4b1f-b13a-43ab-b40d-343e3c6e143e_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:10 np0005539552 nova_compute[233724]: 2025-11-29 08:16:10.404 233728 DEBUG oslo_concurrency.processutils [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 7dde4b1f-b13a-43ab-b40d-343e3c6e143e_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:10 np0005539552 nova_compute[233724]: 2025-11-29 08:16:10.694 233728 DEBUG oslo_concurrency.processutils [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 7dde4b1f-b13a-43ab-b40d-343e3c6e143e_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.291s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:10 np0005539552 nova_compute[233724]: 2025-11-29 08:16:10.695 233728 DEBUG nova.objects.instance [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lazy-loading 'migration_context' on Instance uuid 7dde4b1f-b13a-43ab-b40d-343e3c6e143e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:16:10 np0005539552 nova_compute[233724]: 2025-11-29 08:16:10.710 233728 DEBUG nova.virt.libvirt.driver [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:16:10 np0005539552 nova_compute[233724]: 2025-11-29 08:16:10.711 233728 DEBUG nova.virt.libvirt.driver [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Start _get_guest_xml network_info=[{"id": "3ee6f630-8898-496e-8d41-aec58022b039", "address": "fa:16:3e:b2:5a:c4", "network": {"id": "9404f82f-199b-4eec-83ca-0eeb6b2d1ce8", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1742590140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1742590140-network", "vif_mac": "fa:16:3e:b2:5a:c4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4da7fb77734a4135a6f8b5b70bed7a2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ee6f630-88", "ovs_interfaceid": "3ee6f630-8898-496e-8d41-aec58022b039", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '4873db8c-b414-4e95-acd9-77caabebe722', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:16:10 np0005539552 nova_compute[233724]: 2025-11-29 08:16:10.711 233728 DEBUG nova.objects.instance [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lazy-loading 'resources' on Instance uuid 7dde4b1f-b13a-43ab-b40d-343e3c6e143e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:16:10 np0005539552 nova_compute[233724]: 2025-11-29 08:16:10.733 233728 WARNING nova.virt.libvirt.driver [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:16:10 np0005539552 nova_compute[233724]: 2025-11-29 08:16:10.737 233728 DEBUG nova.virt.libvirt.host [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:16:10 np0005539552 nova_compute[233724]: 2025-11-29 08:16:10.738 233728 DEBUG nova.virt.libvirt.host [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:16:10 np0005539552 nova_compute[233724]: 2025-11-29 08:16:10.741 233728 DEBUG nova.virt.libvirt.host [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:16:10 np0005539552 nova_compute[233724]: 2025-11-29 08:16:10.741 233728 DEBUG nova.virt.libvirt.host [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:16:10 np0005539552 nova_compute[233724]: 2025-11-29 08:16:10.742 233728 DEBUG nova.virt.libvirt.driver [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:16:10 np0005539552 nova_compute[233724]: 2025-11-29 08:16:10.742 233728 DEBUG nova.virt.hardware [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:16:10 np0005539552 nova_compute[233724]: 2025-11-29 08:16:10.743 233728 DEBUG nova.virt.hardware [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:16:10 np0005539552 nova_compute[233724]: 2025-11-29 08:16:10.743 233728 DEBUG nova.virt.hardware [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:16:10 np0005539552 nova_compute[233724]: 2025-11-29 08:16:10.743 233728 DEBUG nova.virt.hardware [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:16:10 np0005539552 nova_compute[233724]: 2025-11-29 08:16:10.743 233728 DEBUG nova.virt.hardware [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:16:10 np0005539552 nova_compute[233724]: 2025-11-29 08:16:10.744 233728 DEBUG nova.virt.hardware [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:16:10 np0005539552 nova_compute[233724]: 2025-11-29 08:16:10.744 233728 DEBUG nova.virt.hardware [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:16:10 np0005539552 nova_compute[233724]: 2025-11-29 08:16:10.744 233728 DEBUG nova.virt.hardware [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:16:10 np0005539552 nova_compute[233724]: 2025-11-29 08:16:10.744 233728 DEBUG nova.virt.hardware [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:16:10 np0005539552 nova_compute[233724]: 2025-11-29 08:16:10.745 233728 DEBUG nova.virt.hardware [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:16:10 np0005539552 nova_compute[233724]: 2025-11-29 08:16:10.745 233728 DEBUG nova.virt.hardware [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:16:10 np0005539552 nova_compute[233724]: 2025-11-29 08:16:10.745 233728 DEBUG nova.objects.instance [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lazy-loading 'vcpu_model' on Instance uuid 7dde4b1f-b13a-43ab-b40d-343e3c6e143e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:16:10 np0005539552 nova_compute[233724]: 2025-11-29 08:16:10.759 233728 DEBUG oslo_concurrency.processutils [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:11 np0005539552 nova_compute[233724]: 2025-11-29 08:16:11.105 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:11 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:16:11 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3254555464' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:16:11 np0005539552 nova_compute[233724]: 2025-11-29 08:16:11.180 233728 DEBUG oslo_concurrency.processutils [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:11 np0005539552 nova_compute[233724]: 2025-11-29 08:16:11.181 233728 DEBUG oslo_concurrency.processutils [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:11.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:11 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:16:11 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4028105540' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:16:11 np0005539552 nova_compute[233724]: 2025-11-29 08:16:11.637 233728 DEBUG oslo_concurrency.processutils [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:11 np0005539552 nova_compute[233724]: 2025-11-29 08:16:11.638 233728 DEBUG oslo_concurrency.processutils [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:11.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:16:12 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3110164704' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.110 233728 DEBUG oslo_concurrency.processutils [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.112 233728 DEBUG nova.virt.libvirt.vif [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:15:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1556842152',display_name='tempest-ServerRescueTestJSON-server-1556842152',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1556842152',id=99,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:15:47Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4da7fb77734a4135a6f8b5b70bed7a2f',ramdisk_id='',reservation_id='r-02j02pz3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-640276387',owner_user_name='tempest-ServerRescueTestJSON-640276387-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:15:47Z,user_data=None,user_id='f4c89c9953854ecf96a802dc6055db9d',uuid=7dde4b1f-b13a-43ab-b40d-343e3c6e143e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3ee6f630-8898-496e-8d41-aec58022b039", "address": "fa:16:3e:b2:5a:c4", "network": {"id": "9404f82f-199b-4eec-83ca-0eeb6b2d1ce8", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1742590140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1742590140-network", "vif_mac": "fa:16:3e:b2:5a:c4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4da7fb77734a4135a6f8b5b70bed7a2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ee6f630-88", "ovs_interfaceid": "3ee6f630-8898-496e-8d41-aec58022b039", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.112 233728 DEBUG nova.network.os_vif_util [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Converting VIF {"id": "3ee6f630-8898-496e-8d41-aec58022b039", "address": "fa:16:3e:b2:5a:c4", "network": {"id": "9404f82f-199b-4eec-83ca-0eeb6b2d1ce8", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1742590140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1742590140-network", "vif_mac": "fa:16:3e:b2:5a:c4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4da7fb77734a4135a6f8b5b70bed7a2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ee6f630-88", "ovs_interfaceid": "3ee6f630-8898-496e-8d41-aec58022b039", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.113 233728 DEBUG nova.network.os_vif_util [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b2:5a:c4,bridge_name='br-int',has_traffic_filtering=True,id=3ee6f630-8898-496e-8d41-aec58022b039,network=Network(9404f82f-199b-4eec-83ca-0eeb6b2d1ce8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ee6f630-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.114 233728 DEBUG nova.objects.instance [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lazy-loading 'pci_devices' on Instance uuid 7dde4b1f-b13a-43ab-b40d-343e3c6e143e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.127 233728 DEBUG nova.virt.libvirt.driver [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:16:12 np0005539552 nova_compute[233724]:  <uuid>7dde4b1f-b13a-43ab-b40d-343e3c6e143e</uuid>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:  <name>instance-00000063</name>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:16:12 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:      <nova:name>tempest-ServerRescueTestJSON-server-1556842152</nova:name>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:16:10</nova:creationTime>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:16:12 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:        <nova:user uuid="f4c89c9953854ecf96a802dc6055db9d">tempest-ServerRescueTestJSON-640276387-project-member</nova:user>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:        <nova:project uuid="4da7fb77734a4135a6f8b5b70bed7a2f">tempest-ServerRescueTestJSON-640276387</nova:project>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:        <nova:port uuid="3ee6f630-8898-496e-8d41-aec58022b039">
Nov 29 03:16:12 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:      <entry name="serial">7dde4b1f-b13a-43ab-b40d-343e3c6e143e</entry>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:      <entry name="uuid">7dde4b1f-b13a-43ab-b40d-343e3c6e143e</entry>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:16:12 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/7dde4b1f-b13a-43ab-b40d-343e3c6e143e_disk.rescue">
Nov 29 03:16:12 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:16:12 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:16:12 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/7dde4b1f-b13a-43ab-b40d-343e3c6e143e_disk">
Nov 29 03:16:12 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:16:12 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:      <target dev="vdb" bus="virtio"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:16:12 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/7dde4b1f-b13a-43ab-b40d-343e3c6e143e_disk.config.rescue">
Nov 29 03:16:12 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:16:12 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:16:12 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:b2:5a:c4"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:      <target dev="tap3ee6f630-88"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:16:12 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/7dde4b1f-b13a-43ab-b40d-343e3c6e143e/console.log" append="off"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:16:12 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:16:12 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:16:12 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:16:12 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:16:12 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.139 233728 INFO nova.virt.libvirt.driver [-] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Instance destroyed successfully.#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.192 233728 DEBUG nova.virt.libvirt.driver [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.192 233728 DEBUG nova.virt.libvirt.driver [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.192 233728 DEBUG nova.virt.libvirt.driver [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.192 233728 DEBUG nova.virt.libvirt.driver [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] No VIF found with MAC fa:16:3e:b2:5a:c4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.193 233728 INFO nova.virt.libvirt.driver [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Using config drive#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.217 233728 DEBUG nova.storage.rbd_utils [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] rbd image 7dde4b1f-b13a-43ab-b40d-343e3c6e143e_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.240 233728 DEBUG nova.compute.manager [req-7674461b-5fa1-4a5b-a4d5-2a56e2091ef2 req-5e95b03b-616a-4acb-92ef-292a339f8f87 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Received event network-vif-unplugged-3ee6f630-8898-496e-8d41-aec58022b039 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.240 233728 DEBUG oslo_concurrency.lockutils [req-7674461b-5fa1-4a5b-a4d5-2a56e2091ef2 req-5e95b03b-616a-4acb-92ef-292a339f8f87 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.241 233728 DEBUG oslo_concurrency.lockutils [req-7674461b-5fa1-4a5b-a4d5-2a56e2091ef2 req-5e95b03b-616a-4acb-92ef-292a339f8f87 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.241 233728 DEBUG oslo_concurrency.lockutils [req-7674461b-5fa1-4a5b-a4d5-2a56e2091ef2 req-5e95b03b-616a-4acb-92ef-292a339f8f87 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.241 233728 DEBUG nova.compute.manager [req-7674461b-5fa1-4a5b-a4d5-2a56e2091ef2 req-5e95b03b-616a-4acb-92ef-292a339f8f87 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] No waiting events found dispatching network-vif-unplugged-3ee6f630-8898-496e-8d41-aec58022b039 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.241 233728 WARNING nova.compute.manager [req-7674461b-5fa1-4a5b-a4d5-2a56e2091ef2 req-5e95b03b-616a-4acb-92ef-292a339f8f87 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Received unexpected event network-vif-unplugged-3ee6f630-8898-496e-8d41-aec58022b039 for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.242 233728 DEBUG nova.compute.manager [req-7674461b-5fa1-4a5b-a4d5-2a56e2091ef2 req-5e95b03b-616a-4acb-92ef-292a339f8f87 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Received event network-vif-plugged-3ee6f630-8898-496e-8d41-aec58022b039 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.242 233728 DEBUG oslo_concurrency.lockutils [req-7674461b-5fa1-4a5b-a4d5-2a56e2091ef2 req-5e95b03b-616a-4acb-92ef-292a339f8f87 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.242 233728 DEBUG oslo_concurrency.lockutils [req-7674461b-5fa1-4a5b-a4d5-2a56e2091ef2 req-5e95b03b-616a-4acb-92ef-292a339f8f87 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.242 233728 DEBUG oslo_concurrency.lockutils [req-7674461b-5fa1-4a5b-a4d5-2a56e2091ef2 req-5e95b03b-616a-4acb-92ef-292a339f8f87 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.243 233728 DEBUG nova.compute.manager [req-7674461b-5fa1-4a5b-a4d5-2a56e2091ef2 req-5e95b03b-616a-4acb-92ef-292a339f8f87 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] No waiting events found dispatching network-vif-plugged-3ee6f630-8898-496e-8d41-aec58022b039 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.243 233728 WARNING nova.compute.manager [req-7674461b-5fa1-4a5b-a4d5-2a56e2091ef2 req-5e95b03b-616a-4acb-92ef-292a339f8f87 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Received unexpected event network-vif-plugged-3ee6f630-8898-496e-8d41-aec58022b039 for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.243 233728 DEBUG nova.compute.manager [req-7674461b-5fa1-4a5b-a4d5-2a56e2091ef2 req-5e95b03b-616a-4acb-92ef-292a339f8f87 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Received event network-vif-plugged-3ee6f630-8898-496e-8d41-aec58022b039 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.243 233728 DEBUG oslo_concurrency.lockutils [req-7674461b-5fa1-4a5b-a4d5-2a56e2091ef2 req-5e95b03b-616a-4acb-92ef-292a339f8f87 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.244 233728 DEBUG oslo_concurrency.lockutils [req-7674461b-5fa1-4a5b-a4d5-2a56e2091ef2 req-5e95b03b-616a-4acb-92ef-292a339f8f87 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.244 233728 DEBUG oslo_concurrency.lockutils [req-7674461b-5fa1-4a5b-a4d5-2a56e2091ef2 req-5e95b03b-616a-4acb-92ef-292a339f8f87 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.244 233728 DEBUG nova.compute.manager [req-7674461b-5fa1-4a5b-a4d5-2a56e2091ef2 req-5e95b03b-616a-4acb-92ef-292a339f8f87 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] No waiting events found dispatching network-vif-plugged-3ee6f630-8898-496e-8d41-aec58022b039 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.244 233728 WARNING nova.compute.manager [req-7674461b-5fa1-4a5b-a4d5-2a56e2091ef2 req-5e95b03b-616a-4acb-92ef-292a339f8f87 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Received unexpected event network-vif-plugged-3ee6f630-8898-496e-8d41-aec58022b039 for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.245 233728 DEBUG nova.compute.manager [req-7674461b-5fa1-4a5b-a4d5-2a56e2091ef2 req-5e95b03b-616a-4acb-92ef-292a339f8f87 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Received event network-vif-plugged-3ee6f630-8898-496e-8d41-aec58022b039 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.245 233728 DEBUG oslo_concurrency.lockutils [req-7674461b-5fa1-4a5b-a4d5-2a56e2091ef2 req-5e95b03b-616a-4acb-92ef-292a339f8f87 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.245 233728 DEBUG oslo_concurrency.lockutils [req-7674461b-5fa1-4a5b-a4d5-2a56e2091ef2 req-5e95b03b-616a-4acb-92ef-292a339f8f87 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.245 233728 DEBUG oslo_concurrency.lockutils [req-7674461b-5fa1-4a5b-a4d5-2a56e2091ef2 req-5e95b03b-616a-4acb-92ef-292a339f8f87 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.246 233728 DEBUG nova.compute.manager [req-7674461b-5fa1-4a5b-a4d5-2a56e2091ef2 req-5e95b03b-616a-4acb-92ef-292a339f8f87 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] No waiting events found dispatching network-vif-plugged-3ee6f630-8898-496e-8d41-aec58022b039 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.246 233728 WARNING nova.compute.manager [req-7674461b-5fa1-4a5b-a4d5-2a56e2091ef2 req-5e95b03b-616a-4acb-92ef-292a339f8f87 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Received unexpected event network-vif-plugged-3ee6f630-8898-496e-8d41-aec58022b039 for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.246 233728 DEBUG nova.compute.manager [req-7674461b-5fa1-4a5b-a4d5-2a56e2091ef2 req-5e95b03b-616a-4acb-92ef-292a339f8f87 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Received event network-vif-unplugged-3ee6f630-8898-496e-8d41-aec58022b039 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.246 233728 DEBUG oslo_concurrency.lockutils [req-7674461b-5fa1-4a5b-a4d5-2a56e2091ef2 req-5e95b03b-616a-4acb-92ef-292a339f8f87 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.247 233728 DEBUG oslo_concurrency.lockutils [req-7674461b-5fa1-4a5b-a4d5-2a56e2091ef2 req-5e95b03b-616a-4acb-92ef-292a339f8f87 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.247 233728 DEBUG oslo_concurrency.lockutils [req-7674461b-5fa1-4a5b-a4d5-2a56e2091ef2 req-5e95b03b-616a-4acb-92ef-292a339f8f87 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.247 233728 DEBUG nova.compute.manager [req-7674461b-5fa1-4a5b-a4d5-2a56e2091ef2 req-5e95b03b-616a-4acb-92ef-292a339f8f87 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] No waiting events found dispatching network-vif-unplugged-3ee6f630-8898-496e-8d41-aec58022b039 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.247 233728 WARNING nova.compute.manager [req-7674461b-5fa1-4a5b-a4d5-2a56e2091ef2 req-5e95b03b-616a-4acb-92ef-292a339f8f87 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Received unexpected event network-vif-unplugged-3ee6f630-8898-496e-8d41-aec58022b039 for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.248 233728 DEBUG nova.compute.manager [req-7674461b-5fa1-4a5b-a4d5-2a56e2091ef2 req-5e95b03b-616a-4acb-92ef-292a339f8f87 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Received event network-vif-plugged-3ee6f630-8898-496e-8d41-aec58022b039 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.248 233728 DEBUG oslo_concurrency.lockutils [req-7674461b-5fa1-4a5b-a4d5-2a56e2091ef2 req-5e95b03b-616a-4acb-92ef-292a339f8f87 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.248 233728 DEBUG oslo_concurrency.lockutils [req-7674461b-5fa1-4a5b-a4d5-2a56e2091ef2 req-5e95b03b-616a-4acb-92ef-292a339f8f87 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.248 233728 DEBUG oslo_concurrency.lockutils [req-7674461b-5fa1-4a5b-a4d5-2a56e2091ef2 req-5e95b03b-616a-4acb-92ef-292a339f8f87 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.249 233728 DEBUG nova.compute.manager [req-7674461b-5fa1-4a5b-a4d5-2a56e2091ef2 req-5e95b03b-616a-4acb-92ef-292a339f8f87 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] No waiting events found dispatching network-vif-plugged-3ee6f630-8898-496e-8d41-aec58022b039 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.249 233728 WARNING nova.compute.manager [req-7674461b-5fa1-4a5b-a4d5-2a56e2091ef2 req-5e95b03b-616a-4acb-92ef-292a339f8f87 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Received unexpected event network-vif-plugged-3ee6f630-8898-496e-8d41-aec58022b039 for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.251 233728 DEBUG nova.objects.instance [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lazy-loading 'ec2_ids' on Instance uuid 7dde4b1f-b13a-43ab-b40d-343e3c6e143e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.281 233728 DEBUG nova.objects.instance [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lazy-loading 'keypairs' on Instance uuid 7dde4b1f-b13a-43ab-b40d-343e3c6e143e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:16:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:12 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:16:12 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.816 233728 INFO nova.virt.libvirt.driver [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Creating config drive at /var/lib/nova/instances/7dde4b1f-b13a-43ab-b40d-343e3c6e143e/disk.config.rescue#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.820 233728 DEBUG oslo_concurrency.processutils [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7dde4b1f-b13a-43ab-b40d-343e3c6e143e/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj033g76e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.961 233728 DEBUG oslo_concurrency.processutils [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7dde4b1f-b13a-43ab-b40d-343e3c6e143e/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj033g76e" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.988 233728 DEBUG nova.storage.rbd_utils [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] rbd image 7dde4b1f-b13a-43ab-b40d-343e3c6e143e_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:12 np0005539552 nova_compute[233724]: 2025-11-29 08:16:12.992 233728 DEBUG oslo_concurrency.processutils [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7dde4b1f-b13a-43ab-b40d-343e3c6e143e/disk.config.rescue 7dde4b1f-b13a-43ab-b40d-343e3c6e143e_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:13 np0005539552 nova_compute[233724]: 2025-11-29 08:16:13.146 233728 DEBUG oslo_concurrency.processutils [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7dde4b1f-b13a-43ab-b40d-343e3c6e143e/disk.config.rescue 7dde4b1f-b13a-43ab-b40d-343e3c6e143e_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:13 np0005539552 nova_compute[233724]: 2025-11-29 08:16:13.147 233728 INFO nova.virt.libvirt.driver [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Deleting local config drive /var/lib/nova/instances/7dde4b1f-b13a-43ab-b40d-343e3c6e143e/disk.config.rescue because it was imported into RBD.#033[00m
Nov 29 03:16:13 np0005539552 kernel: tap3ee6f630-88: entered promiscuous mode
Nov 29 03:16:13 np0005539552 NetworkManager[48926]: <info>  [1764404173.1930] manager: (tap3ee6f630-88): new Tun device (/org/freedesktop/NetworkManager/Devices/183)
Nov 29 03:16:13 np0005539552 nova_compute[233724]: 2025-11-29 08:16:13.193 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:13 np0005539552 ovn_controller[133798]: 2025-11-29T08:16:13Z|00366|binding|INFO|Claiming lport 3ee6f630-8898-496e-8d41-aec58022b039 for this chassis.
Nov 29 03:16:13 np0005539552 ovn_controller[133798]: 2025-11-29T08:16:13Z|00367|binding|INFO|3ee6f630-8898-496e-8d41-aec58022b039: Claiming fa:16:3e:b2:5a:c4 10.100.0.6
Nov 29 03:16:13 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:13.204 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:5a:c4 10.100.0.6'], port_security=['fa:16:3e:b2:5a:c4 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7dde4b1f-b13a-43ab-b40d-343e3c6e143e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9404f82f-199b-4eec-83ca-0eeb6b2d1ce8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4da7fb77734a4135a6f8b5b70bed7a2f', 'neutron:revision_number': '7', 'neutron:security_group_ids': '5d9eaea9-a53e-4ac6-a82a-9c11849e63d4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eff10a56-99f6-4778-b800-4c9f705b38bd, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=3ee6f630-8898-496e-8d41-aec58022b039) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:16:13 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:13.205 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 3ee6f630-8898-496e-8d41-aec58022b039 in datapath 9404f82f-199b-4eec-83ca-0eeb6b2d1ce8 bound to our chassis#033[00m
Nov 29 03:16:13 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:13.205 143400 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9404f82f-199b-4eec-83ca-0eeb6b2d1ce8 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 29 03:16:13 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:13.206 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[770c11ad-4a54-4e4b-b8c5-4badf92f4414]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:13 np0005539552 nova_compute[233724]: 2025-11-29 08:16:13.209 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:13 np0005539552 ovn_controller[133798]: 2025-11-29T08:16:13Z|00368|binding|INFO|Setting lport 3ee6f630-8898-496e-8d41-aec58022b039 ovn-installed in OVS
Nov 29 03:16:13 np0005539552 ovn_controller[133798]: 2025-11-29T08:16:13Z|00369|binding|INFO|Setting lport 3ee6f630-8898-496e-8d41-aec58022b039 up in Southbound
Nov 29 03:16:13 np0005539552 nova_compute[233724]: 2025-11-29 08:16:13.212 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:13 np0005539552 nova_compute[233724]: 2025-11-29 08:16:13.214 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:13 np0005539552 systemd-udevd[274669]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:16:13 np0005539552 systemd-machined[196379]: New machine qemu-40-instance-00000063.
Nov 29 03:16:13 np0005539552 NetworkManager[48926]: <info>  [1764404173.2288] device (tap3ee6f630-88): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:16:13 np0005539552 NetworkManager[48926]: <info>  [1764404173.2299] device (tap3ee6f630-88): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:16:13 np0005539552 systemd[1]: Started Virtual Machine qemu-40-instance-00000063.
Nov 29 03:16:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:16:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:13.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:16:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:13.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:14 np0005539552 nova_compute[233724]: 2025-11-29 08:16:14.334 233728 DEBUG nova.compute.manager [req-e622d479-3051-4956-8a6b-94948466dc6e req-6d399e6e-99e5-48d1-a167-2ad66f50b021 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Received event network-vif-plugged-3ee6f630-8898-496e-8d41-aec58022b039 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:14 np0005539552 nova_compute[233724]: 2025-11-29 08:16:14.335 233728 DEBUG oslo_concurrency.lockutils [req-e622d479-3051-4956-8a6b-94948466dc6e req-6d399e6e-99e5-48d1-a167-2ad66f50b021 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:14 np0005539552 nova_compute[233724]: 2025-11-29 08:16:14.335 233728 DEBUG oslo_concurrency.lockutils [req-e622d479-3051-4956-8a6b-94948466dc6e req-6d399e6e-99e5-48d1-a167-2ad66f50b021 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:14 np0005539552 nova_compute[233724]: 2025-11-29 08:16:14.335 233728 DEBUG oslo_concurrency.lockutils [req-e622d479-3051-4956-8a6b-94948466dc6e req-6d399e6e-99e5-48d1-a167-2ad66f50b021 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:14 np0005539552 nova_compute[233724]: 2025-11-29 08:16:14.335 233728 DEBUG nova.compute.manager [req-e622d479-3051-4956-8a6b-94948466dc6e req-6d399e6e-99e5-48d1-a167-2ad66f50b021 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] No waiting events found dispatching network-vif-plugged-3ee6f630-8898-496e-8d41-aec58022b039 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:16:14 np0005539552 nova_compute[233724]: 2025-11-29 08:16:14.335 233728 WARNING nova.compute.manager [req-e622d479-3051-4956-8a6b-94948466dc6e req-6d399e6e-99e5-48d1-a167-2ad66f50b021 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Received unexpected event network-vif-plugged-3ee6f630-8898-496e-8d41-aec58022b039 for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:16:14 np0005539552 nova_compute[233724]: 2025-11-29 08:16:14.336 233728 DEBUG nova.compute.manager [req-e622d479-3051-4956-8a6b-94948466dc6e req-6d399e6e-99e5-48d1-a167-2ad66f50b021 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Received event network-vif-plugged-3ee6f630-8898-496e-8d41-aec58022b039 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:14 np0005539552 nova_compute[233724]: 2025-11-29 08:16:14.336 233728 DEBUG oslo_concurrency.lockutils [req-e622d479-3051-4956-8a6b-94948466dc6e req-6d399e6e-99e5-48d1-a167-2ad66f50b021 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:14 np0005539552 nova_compute[233724]: 2025-11-29 08:16:14.336 233728 DEBUG oslo_concurrency.lockutils [req-e622d479-3051-4956-8a6b-94948466dc6e req-6d399e6e-99e5-48d1-a167-2ad66f50b021 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:14 np0005539552 nova_compute[233724]: 2025-11-29 08:16:14.336 233728 DEBUG oslo_concurrency.lockutils [req-e622d479-3051-4956-8a6b-94948466dc6e req-6d399e6e-99e5-48d1-a167-2ad66f50b021 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:14 np0005539552 nova_compute[233724]: 2025-11-29 08:16:14.336 233728 DEBUG nova.compute.manager [req-e622d479-3051-4956-8a6b-94948466dc6e req-6d399e6e-99e5-48d1-a167-2ad66f50b021 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] No waiting events found dispatching network-vif-plugged-3ee6f630-8898-496e-8d41-aec58022b039 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:16:14 np0005539552 nova_compute[233724]: 2025-11-29 08:16:14.337 233728 WARNING nova.compute.manager [req-e622d479-3051-4956-8a6b-94948466dc6e req-6d399e6e-99e5-48d1-a167-2ad66f50b021 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Received unexpected event network-vif-plugged-3ee6f630-8898-496e-8d41-aec58022b039 for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:16:14 np0005539552 nova_compute[233724]: 2025-11-29 08:16:14.347 233728 DEBUG nova.virt.libvirt.host [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Removed pending event for 7dde4b1f-b13a-43ab-b40d-343e3c6e143e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:16:14 np0005539552 nova_compute[233724]: 2025-11-29 08:16:14.348 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404174.347244, 7dde4b1f-b13a-43ab-b40d-343e3c6e143e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:16:14 np0005539552 nova_compute[233724]: 2025-11-29 08:16:14.348 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:16:14 np0005539552 nova_compute[233724]: 2025-11-29 08:16:14.353 233728 DEBUG nova.compute.manager [None req-8fce33cd-c6c2-4091-a7f7-2e072611d575 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:16:14 np0005539552 nova_compute[233724]: 2025-11-29 08:16:14.382 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:16:14 np0005539552 nova_compute[233724]: 2025-11-29 08:16:14.385 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:16:14 np0005539552 nova_compute[233724]: 2025-11-29 08:16:14.411 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Nov 29 03:16:14 np0005539552 nova_compute[233724]: 2025-11-29 08:16:14.412 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404174.3507483, 7dde4b1f-b13a-43ab-b40d-343e3c6e143e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:16:14 np0005539552 nova_compute[233724]: 2025-11-29 08:16:14.412 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] VM Started (Lifecycle Event)#033[00m
Nov 29 03:16:14 np0005539552 nova_compute[233724]: 2025-11-29 08:16:14.434 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:16:14 np0005539552 nova_compute[233724]: 2025-11-29 08:16:14.439 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:16:14 np0005539552 nova_compute[233724]: 2025-11-29 08:16:14.450 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404159.4495282, 61d117f5-8412-446a-a3b7-cae3db576105 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:16:14 np0005539552 nova_compute[233724]: 2025-11-29 08:16:14.450 233728 INFO nova.compute.manager [-] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:16:14 np0005539552 nova_compute[233724]: 2025-11-29 08:16:14.472 233728 DEBUG nova.compute.manager [None req-7dea45b1-811c-4a49-9993-41d52f6823f4 - - - - - -] [instance: 61d117f5-8412-446a-a3b7-cae3db576105] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:16:14 np0005539552 nova_compute[233724]: 2025-11-29 08:16:14.490 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:15.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:15.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:16 np0005539552 nova_compute[233724]: 2025-11-29 08:16:16.107 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:16:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:17.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:16:17 np0005539552 nova_compute[233724]: 2025-11-29 08:16:17.314 233728 INFO nova.compute.manager [None req-3d47d5ac-bb89-4469-93ca-8ef3aaced445 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Unrescuing#033[00m
Nov 29 03:16:17 np0005539552 nova_compute[233724]: 2025-11-29 08:16:17.315 233728 DEBUG oslo_concurrency.lockutils [None req-3d47d5ac-bb89-4469-93ca-8ef3aaced445 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Acquiring lock "refresh_cache-7dde4b1f-b13a-43ab-b40d-343e3c6e143e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:16:17 np0005539552 nova_compute[233724]: 2025-11-29 08:16:17.315 233728 DEBUG oslo_concurrency.lockutils [None req-3d47d5ac-bb89-4469-93ca-8ef3aaced445 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Acquired lock "refresh_cache-7dde4b1f-b13a-43ab-b40d-343e3c6e143e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:16:17 np0005539552 nova_compute[233724]: 2025-11-29 08:16:17.315 233728 DEBUG nova.network.neutron [None req-3d47d5ac-bb89-4469-93ca-8ef3aaced445 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:16:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:17.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:19.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:19 np0005539552 nova_compute[233724]: 2025-11-29 08:16:19.492 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:19 np0005539552 nova_compute[233724]: 2025-11-29 08:16:19.506 233728 DEBUG nova.network.neutron [None req-3d47d5ac-bb89-4469-93ca-8ef3aaced445 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Updating instance_info_cache with network_info: [{"id": "3ee6f630-8898-496e-8d41-aec58022b039", "address": "fa:16:3e:b2:5a:c4", "network": {"id": "9404f82f-199b-4eec-83ca-0eeb6b2d1ce8", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1742590140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4da7fb77734a4135a6f8b5b70bed7a2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ee6f630-88", "ovs_interfaceid": "3ee6f630-8898-496e-8d41-aec58022b039", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:16:19 np0005539552 nova_compute[233724]: 2025-11-29 08:16:19.528 233728 DEBUG oslo_concurrency.lockutils [None req-3d47d5ac-bb89-4469-93ca-8ef3aaced445 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Releasing lock "refresh_cache-7dde4b1f-b13a-43ab-b40d-343e3c6e143e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:16:19 np0005539552 nova_compute[233724]: 2025-11-29 08:16:19.528 233728 DEBUG nova.objects.instance [None req-3d47d5ac-bb89-4469-93ca-8ef3aaced445 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lazy-loading 'flavor' on Instance uuid 7dde4b1f-b13a-43ab-b40d-343e3c6e143e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:16:19 np0005539552 kernel: tap3ee6f630-88 (unregistering): left promiscuous mode
Nov 29 03:16:19 np0005539552 NetworkManager[48926]: <info>  [1764404179.6010] device (tap3ee6f630-88): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:16:19 np0005539552 nova_compute[233724]: 2025-11-29 08:16:19.606 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:19 np0005539552 ovn_controller[133798]: 2025-11-29T08:16:19Z|00370|binding|INFO|Releasing lport 3ee6f630-8898-496e-8d41-aec58022b039 from this chassis (sb_readonly=0)
Nov 29 03:16:19 np0005539552 ovn_controller[133798]: 2025-11-29T08:16:19Z|00371|binding|INFO|Setting lport 3ee6f630-8898-496e-8d41-aec58022b039 down in Southbound
Nov 29 03:16:19 np0005539552 ovn_controller[133798]: 2025-11-29T08:16:19Z|00372|binding|INFO|Removing iface tap3ee6f630-88 ovn-installed in OVS
Nov 29 03:16:19 np0005539552 nova_compute[233724]: 2025-11-29 08:16:19.608 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:19.614 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:5a:c4 10.100.0.6'], port_security=['fa:16:3e:b2:5a:c4 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7dde4b1f-b13a-43ab-b40d-343e3c6e143e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9404f82f-199b-4eec-83ca-0eeb6b2d1ce8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4da7fb77734a4135a6f8b5b70bed7a2f', 'neutron:revision_number': '8', 'neutron:security_group_ids': '5d9eaea9-a53e-4ac6-a82a-9c11849e63d4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eff10a56-99f6-4778-b800-4c9f705b38bd, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=3ee6f630-8898-496e-8d41-aec58022b039) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:16:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:19.615 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 3ee6f630-8898-496e-8d41-aec58022b039 in datapath 9404f82f-199b-4eec-83ca-0eeb6b2d1ce8 unbound from our chassis#033[00m
Nov 29 03:16:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:19.616 143400 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9404f82f-199b-4eec-83ca-0eeb6b2d1ce8 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 29 03:16:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:19.617 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[8e7e1c1e-d2f5-489a-8e6b-5574f0c45302]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:19 np0005539552 nova_compute[233724]: 2025-11-29 08:16:19.624 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:19 np0005539552 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000063.scope: Deactivated successfully.
Nov 29 03:16:19 np0005539552 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000063.scope: Consumed 6.313s CPU time.
Nov 29 03:16:19 np0005539552 systemd-machined[196379]: Machine qemu-40-instance-00000063 terminated.
Nov 29 03:16:19 np0005539552 kernel: tap3ee6f630-88: entered promiscuous mode
Nov 29 03:16:19 np0005539552 kernel: tap3ee6f630-88 (unregistering): left promiscuous mode
Nov 29 03:16:19 np0005539552 NetworkManager[48926]: <info>  [1764404179.7711] manager: (tap3ee6f630-88): new Tun device (/org/freedesktop/NetworkManager/Devices/184)
Nov 29 03:16:19 np0005539552 nova_compute[233724]: 2025-11-29 08:16:19.774 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:19.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:19 np0005539552 nova_compute[233724]: 2025-11-29 08:16:19.788 233728 INFO nova.virt.libvirt.driver [-] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Instance destroyed successfully.#033[00m
Nov 29 03:16:19 np0005539552 nova_compute[233724]: 2025-11-29 08:16:19.788 233728 DEBUG nova.objects.instance [None req-3d47d5ac-bb89-4469-93ca-8ef3aaced445 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lazy-loading 'numa_topology' on Instance uuid 7dde4b1f-b13a-43ab-b40d-343e3c6e143e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:16:19 np0005539552 kernel: tap3ee6f630-88: entered promiscuous mode
Nov 29 03:16:19 np0005539552 NetworkManager[48926]: <info>  [1764404179.8825] manager: (tap3ee6f630-88): new Tun device (/org/freedesktop/NetworkManager/Devices/185)
Nov 29 03:16:19 np0005539552 systemd-udevd[274797]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:16:19 np0005539552 ovn_controller[133798]: 2025-11-29T08:16:19Z|00373|binding|INFO|Claiming lport 3ee6f630-8898-496e-8d41-aec58022b039 for this chassis.
Nov 29 03:16:19 np0005539552 ovn_controller[133798]: 2025-11-29T08:16:19Z|00374|binding|INFO|3ee6f630-8898-496e-8d41-aec58022b039: Claiming fa:16:3e:b2:5a:c4 10.100.0.6
Nov 29 03:16:19 np0005539552 nova_compute[233724]: 2025-11-29 08:16:19.883 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:19.892 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:5a:c4 10.100.0.6'], port_security=['fa:16:3e:b2:5a:c4 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7dde4b1f-b13a-43ab-b40d-343e3c6e143e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9404f82f-199b-4eec-83ca-0eeb6b2d1ce8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4da7fb77734a4135a6f8b5b70bed7a2f', 'neutron:revision_number': '8', 'neutron:security_group_ids': '5d9eaea9-a53e-4ac6-a82a-9c11849e63d4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eff10a56-99f6-4778-b800-4c9f705b38bd, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=3ee6f630-8898-496e-8d41-aec58022b039) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:16:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:19.893 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 3ee6f630-8898-496e-8d41-aec58022b039 in datapath 9404f82f-199b-4eec-83ca-0eeb6b2d1ce8 bound to our chassis#033[00m
Nov 29 03:16:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:19.894 143400 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9404f82f-199b-4eec-83ca-0eeb6b2d1ce8 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 29 03:16:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:19.894 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d83cde3b-1427-4b8f-8701-91de1f3f37b1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:19 np0005539552 NetworkManager[48926]: <info>  [1764404179.8965] device (tap3ee6f630-88): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:16:19 np0005539552 NetworkManager[48926]: <info>  [1764404179.8977] device (tap3ee6f630-88): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:16:19 np0005539552 ovn_controller[133798]: 2025-11-29T08:16:19Z|00375|binding|INFO|Setting lport 3ee6f630-8898-496e-8d41-aec58022b039 ovn-installed in OVS
Nov 29 03:16:19 np0005539552 ovn_controller[133798]: 2025-11-29T08:16:19Z|00376|binding|INFO|Setting lport 3ee6f630-8898-496e-8d41-aec58022b039 up in Southbound
Nov 29 03:16:19 np0005539552 nova_compute[233724]: 2025-11-29 08:16:19.900 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:19 np0005539552 nova_compute[233724]: 2025-11-29 08:16:19.903 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:19 np0005539552 systemd-machined[196379]: New machine qemu-41-instance-00000063.
Nov 29 03:16:19 np0005539552 systemd[1]: Started Virtual Machine qemu-41-instance-00000063.
Nov 29 03:16:20 np0005539552 nova_compute[233724]: 2025-11-29 08:16:19.999 233728 DEBUG nova.compute.manager [req-5339e9cf-389e-4789-831b-16a34c6272f2 req-f5781351-a285-4ae2-8850-0beb5335a0c7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Received event network-vif-unplugged-3ee6f630-8898-496e-8d41-aec58022b039 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:20 np0005539552 nova_compute[233724]: 2025-11-29 08:16:20.000 233728 DEBUG oslo_concurrency.lockutils [req-5339e9cf-389e-4789-831b-16a34c6272f2 req-f5781351-a285-4ae2-8850-0beb5335a0c7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:20 np0005539552 nova_compute[233724]: 2025-11-29 08:16:20.000 233728 DEBUG oslo_concurrency.lockutils [req-5339e9cf-389e-4789-831b-16a34c6272f2 req-f5781351-a285-4ae2-8850-0beb5335a0c7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:20 np0005539552 nova_compute[233724]: 2025-11-29 08:16:20.000 233728 DEBUG oslo_concurrency.lockutils [req-5339e9cf-389e-4789-831b-16a34c6272f2 req-f5781351-a285-4ae2-8850-0beb5335a0c7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:20 np0005539552 nova_compute[233724]: 2025-11-29 08:16:20.000 233728 DEBUG nova.compute.manager [req-5339e9cf-389e-4789-831b-16a34c6272f2 req-f5781351-a285-4ae2-8850-0beb5335a0c7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] No waiting events found dispatching network-vif-unplugged-3ee6f630-8898-496e-8d41-aec58022b039 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:16:20 np0005539552 nova_compute[233724]: 2025-11-29 08:16:20.001 233728 WARNING nova.compute.manager [req-5339e9cf-389e-4789-831b-16a34c6272f2 req-f5781351-a285-4ae2-8850-0beb5335a0c7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Received unexpected event network-vif-unplugged-3ee6f630-8898-496e-8d41-aec58022b039 for instance with vm_state rescued and task_state unrescuing.#033[00m
Nov 29 03:16:20 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:16:20 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1838499533' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:16:20 np0005539552 nova_compute[233724]: 2025-11-29 08:16:20.502 233728 DEBUG nova.virt.libvirt.host [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Removed pending event for 7dde4b1f-b13a-43ab-b40d-343e3c6e143e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:16:20 np0005539552 nova_compute[233724]: 2025-11-29 08:16:20.503 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404180.5015006, 7dde4b1f-b13a-43ab-b40d-343e3c6e143e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:16:20 np0005539552 nova_compute[233724]: 2025-11-29 08:16:20.503 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:16:20 np0005539552 nova_compute[233724]: 2025-11-29 08:16:20.528 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:16:20 np0005539552 nova_compute[233724]: 2025-11-29 08:16:20.532 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:16:20 np0005539552 nova_compute[233724]: 2025-11-29 08:16:20.566 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Nov 29 03:16:20 np0005539552 nova_compute[233724]: 2025-11-29 08:16:20.567 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404180.502544, 7dde4b1f-b13a-43ab-b40d-343e3c6e143e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:16:20 np0005539552 nova_compute[233724]: 2025-11-29 08:16:20.568 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] VM Started (Lifecycle Event)#033[00m
Nov 29 03:16:20 np0005539552 nova_compute[233724]: 2025-11-29 08:16:20.594 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:16:20 np0005539552 nova_compute[233724]: 2025-11-29 08:16:20.597 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:16:20 np0005539552 nova_compute[233724]: 2025-11-29 08:16:20.615 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Nov 29 03:16:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:20.622 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:20.623 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:20.623 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:20 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e296 e296: 3 total, 3 up, 3 in
Nov 29 03:16:20 np0005539552 nova_compute[233724]: 2025-11-29 08:16:20.919 233728 DEBUG nova.compute.manager [None req-3d47d5ac-bb89-4469-93ca-8ef3aaced445 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:16:21 np0005539552 nova_compute[233724]: 2025-11-29 08:16:21.109 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:21.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:21.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:22 np0005539552 nova_compute[233724]: 2025-11-29 08:16:22.694 233728 DEBUG nova.compute.manager [req-071cb9d6-3cd4-48c3-885f-0c8f3eea563b req-719afa51-1bd3-48fe-8aac-7cc7e2d2edc4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Received event network-vif-plugged-3ee6f630-8898-496e-8d41-aec58022b039 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:22 np0005539552 nova_compute[233724]: 2025-11-29 08:16:22.694 233728 DEBUG oslo_concurrency.lockutils [req-071cb9d6-3cd4-48c3-885f-0c8f3eea563b req-719afa51-1bd3-48fe-8aac-7cc7e2d2edc4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:22 np0005539552 nova_compute[233724]: 2025-11-29 08:16:22.695 233728 DEBUG oslo_concurrency.lockutils [req-071cb9d6-3cd4-48c3-885f-0c8f3eea563b req-719afa51-1bd3-48fe-8aac-7cc7e2d2edc4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:22 np0005539552 nova_compute[233724]: 2025-11-29 08:16:22.695 233728 DEBUG oslo_concurrency.lockutils [req-071cb9d6-3cd4-48c3-885f-0c8f3eea563b req-719afa51-1bd3-48fe-8aac-7cc7e2d2edc4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:22 np0005539552 nova_compute[233724]: 2025-11-29 08:16:22.695 233728 DEBUG nova.compute.manager [req-071cb9d6-3cd4-48c3-885f-0c8f3eea563b req-719afa51-1bd3-48fe-8aac-7cc7e2d2edc4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] No waiting events found dispatching network-vif-plugged-3ee6f630-8898-496e-8d41-aec58022b039 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:16:22 np0005539552 nova_compute[233724]: 2025-11-29 08:16:22.695 233728 WARNING nova.compute.manager [req-071cb9d6-3cd4-48c3-885f-0c8f3eea563b req-719afa51-1bd3-48fe-8aac-7cc7e2d2edc4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Received unexpected event network-vif-plugged-3ee6f630-8898-496e-8d41-aec58022b039 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:16:22 np0005539552 nova_compute[233724]: 2025-11-29 08:16:22.695 233728 DEBUG nova.compute.manager [req-071cb9d6-3cd4-48c3-885f-0c8f3eea563b req-719afa51-1bd3-48fe-8aac-7cc7e2d2edc4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Received event network-vif-plugged-3ee6f630-8898-496e-8d41-aec58022b039 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:22 np0005539552 nova_compute[233724]: 2025-11-29 08:16:22.696 233728 DEBUG oslo_concurrency.lockutils [req-071cb9d6-3cd4-48c3-885f-0c8f3eea563b req-719afa51-1bd3-48fe-8aac-7cc7e2d2edc4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:22 np0005539552 nova_compute[233724]: 2025-11-29 08:16:22.696 233728 DEBUG oslo_concurrency.lockutils [req-071cb9d6-3cd4-48c3-885f-0c8f3eea563b req-719afa51-1bd3-48fe-8aac-7cc7e2d2edc4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:22 np0005539552 nova_compute[233724]: 2025-11-29 08:16:22.696 233728 DEBUG oslo_concurrency.lockutils [req-071cb9d6-3cd4-48c3-885f-0c8f3eea563b req-719afa51-1bd3-48fe-8aac-7cc7e2d2edc4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:22 np0005539552 nova_compute[233724]: 2025-11-29 08:16:22.696 233728 DEBUG nova.compute.manager [req-071cb9d6-3cd4-48c3-885f-0c8f3eea563b req-719afa51-1bd3-48fe-8aac-7cc7e2d2edc4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] No waiting events found dispatching network-vif-plugged-3ee6f630-8898-496e-8d41-aec58022b039 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:16:22 np0005539552 nova_compute[233724]: 2025-11-29 08:16:22.696 233728 WARNING nova.compute.manager [req-071cb9d6-3cd4-48c3-885f-0c8f3eea563b req-719afa51-1bd3-48fe-8aac-7cc7e2d2edc4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Received unexpected event network-vif-plugged-3ee6f630-8898-496e-8d41-aec58022b039 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:16:22 np0005539552 nova_compute[233724]: 2025-11-29 08:16:22.697 233728 DEBUG nova.compute.manager [req-071cb9d6-3cd4-48c3-885f-0c8f3eea563b req-719afa51-1bd3-48fe-8aac-7cc7e2d2edc4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Received event network-vif-plugged-3ee6f630-8898-496e-8d41-aec58022b039 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:22 np0005539552 nova_compute[233724]: 2025-11-29 08:16:22.697 233728 DEBUG oslo_concurrency.lockutils [req-071cb9d6-3cd4-48c3-885f-0c8f3eea563b req-719afa51-1bd3-48fe-8aac-7cc7e2d2edc4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:22 np0005539552 nova_compute[233724]: 2025-11-29 08:16:22.697 233728 DEBUG oslo_concurrency.lockutils [req-071cb9d6-3cd4-48c3-885f-0c8f3eea563b req-719afa51-1bd3-48fe-8aac-7cc7e2d2edc4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:22 np0005539552 nova_compute[233724]: 2025-11-29 08:16:22.697 233728 DEBUG oslo_concurrency.lockutils [req-071cb9d6-3cd4-48c3-885f-0c8f3eea563b req-719afa51-1bd3-48fe-8aac-7cc7e2d2edc4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:22 np0005539552 nova_compute[233724]: 2025-11-29 08:16:22.697 233728 DEBUG nova.compute.manager [req-071cb9d6-3cd4-48c3-885f-0c8f3eea563b req-719afa51-1bd3-48fe-8aac-7cc7e2d2edc4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] No waiting events found dispatching network-vif-plugged-3ee6f630-8898-496e-8d41-aec58022b039 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:16:22 np0005539552 nova_compute[233724]: 2025-11-29 08:16:22.698 233728 WARNING nova.compute.manager [req-071cb9d6-3cd4-48c3-885f-0c8f3eea563b req-719afa51-1bd3-48fe-8aac-7cc7e2d2edc4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Received unexpected event network-vif-plugged-3ee6f630-8898-496e-8d41-aec58022b039 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:16:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:23.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:23 np0005539552 nova_compute[233724]: 2025-11-29 08:16:23.675 233728 DEBUG oslo_concurrency.lockutils [None req-c854de51-272f-4860-808e-a28ac2088169 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Acquiring lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:23 np0005539552 nova_compute[233724]: 2025-11-29 08:16:23.676 233728 DEBUG oslo_concurrency.lockutils [None req-c854de51-272f-4860-808e-a28ac2088169 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:23 np0005539552 nova_compute[233724]: 2025-11-29 08:16:23.676 233728 DEBUG oslo_concurrency.lockutils [None req-c854de51-272f-4860-808e-a28ac2088169 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Acquiring lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:23 np0005539552 nova_compute[233724]: 2025-11-29 08:16:23.676 233728 DEBUG oslo_concurrency.lockutils [None req-c854de51-272f-4860-808e-a28ac2088169 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:23 np0005539552 nova_compute[233724]: 2025-11-29 08:16:23.677 233728 DEBUG oslo_concurrency.lockutils [None req-c854de51-272f-4860-808e-a28ac2088169 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:23 np0005539552 nova_compute[233724]: 2025-11-29 08:16:23.678 233728 INFO nova.compute.manager [None req-c854de51-272f-4860-808e-a28ac2088169 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Terminating instance#033[00m
Nov 29 03:16:23 np0005539552 nova_compute[233724]: 2025-11-29 08:16:23.678 233728 DEBUG nova.compute.manager [None req-c854de51-272f-4860-808e-a28ac2088169 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:16:23 np0005539552 kernel: tap3ee6f630-88 (unregistering): left promiscuous mode
Nov 29 03:16:23 np0005539552 NetworkManager[48926]: <info>  [1764404183.7122] device (tap3ee6f630-88): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:16:23 np0005539552 ovn_controller[133798]: 2025-11-29T08:16:23Z|00377|binding|INFO|Releasing lport 3ee6f630-8898-496e-8d41-aec58022b039 from this chassis (sb_readonly=0)
Nov 29 03:16:23 np0005539552 ovn_controller[133798]: 2025-11-29T08:16:23Z|00378|binding|INFO|Setting lport 3ee6f630-8898-496e-8d41-aec58022b039 down in Southbound
Nov 29 03:16:23 np0005539552 nova_compute[233724]: 2025-11-29 08:16:23.722 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:23 np0005539552 ovn_controller[133798]: 2025-11-29T08:16:23Z|00379|binding|INFO|Removing iface tap3ee6f630-88 ovn-installed in OVS
Nov 29 03:16:23 np0005539552 nova_compute[233724]: 2025-11-29 08:16:23.725 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:23.733 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:5a:c4 10.100.0.6'], port_security=['fa:16:3e:b2:5a:c4 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7dde4b1f-b13a-43ab-b40d-343e3c6e143e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9404f82f-199b-4eec-83ca-0eeb6b2d1ce8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4da7fb77734a4135a6f8b5b70bed7a2f', 'neutron:revision_number': '10', 'neutron:security_group_ids': '5d9eaea9-a53e-4ac6-a82a-9c11849e63d4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eff10a56-99f6-4778-b800-4c9f705b38bd, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=3ee6f630-8898-496e-8d41-aec58022b039) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:16:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:23.735 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 3ee6f630-8898-496e-8d41-aec58022b039 in datapath 9404f82f-199b-4eec-83ca-0eeb6b2d1ce8 unbound from our chassis#033[00m
Nov 29 03:16:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:23.736 143400 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9404f82f-199b-4eec-83ca-0eeb6b2d1ce8 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 29 03:16:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:23.737 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c5805bf4-8d9d-443c-a69d-7652224de083]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:23 np0005539552 nova_compute[233724]: 2025-11-29 08:16:23.743 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:23 np0005539552 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000063.scope: Deactivated successfully.
Nov 29 03:16:23 np0005539552 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000063.scope: Consumed 3.840s CPU time.
Nov 29 03:16:23 np0005539552 systemd-machined[196379]: Machine qemu-41-instance-00000063 terminated.
Nov 29 03:16:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:23.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:23 np0005539552 kernel: tap3ee6f630-88: entered promiscuous mode
Nov 29 03:16:23 np0005539552 NetworkManager[48926]: <info>  [1764404183.8954] manager: (tap3ee6f630-88): new Tun device (/org/freedesktop/NetworkManager/Devices/186)
Nov 29 03:16:23 np0005539552 nova_compute[233724]: 2025-11-29 08:16:23.895 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:23 np0005539552 ovn_controller[133798]: 2025-11-29T08:16:23Z|00380|binding|INFO|Claiming lport 3ee6f630-8898-496e-8d41-aec58022b039 for this chassis.
Nov 29 03:16:23 np0005539552 ovn_controller[133798]: 2025-11-29T08:16:23Z|00381|binding|INFO|3ee6f630-8898-496e-8d41-aec58022b039: Claiming fa:16:3e:b2:5a:c4 10.100.0.6
Nov 29 03:16:23 np0005539552 kernel: tap3ee6f630-88 (unregistering): left promiscuous mode
Nov 29 03:16:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:23.906 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:5a:c4 10.100.0.6'], port_security=['fa:16:3e:b2:5a:c4 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7dde4b1f-b13a-43ab-b40d-343e3c6e143e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9404f82f-199b-4eec-83ca-0eeb6b2d1ce8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4da7fb77734a4135a6f8b5b70bed7a2f', 'neutron:revision_number': '10', 'neutron:security_group_ids': '5d9eaea9-a53e-4ac6-a82a-9c11849e63d4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eff10a56-99f6-4778-b800-4c9f705b38bd, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=3ee6f630-8898-496e-8d41-aec58022b039) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:16:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:23.908 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 3ee6f630-8898-496e-8d41-aec58022b039 in datapath 9404f82f-199b-4eec-83ca-0eeb6b2d1ce8 bound to our chassis#033[00m
Nov 29 03:16:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:23.908 143400 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9404f82f-199b-4eec-83ca-0eeb6b2d1ce8 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 29 03:16:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:23.909 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3eeb6b29-bcaf-4ac7-8f82-18463419a9e2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:23 np0005539552 nova_compute[233724]: 2025-11-29 08:16:23.914 233728 INFO nova.virt.libvirt.driver [-] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Instance destroyed successfully.#033[00m
Nov 29 03:16:23 np0005539552 nova_compute[233724]: 2025-11-29 08:16:23.914 233728 DEBUG nova.objects.instance [None req-c854de51-272f-4860-808e-a28ac2088169 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lazy-loading 'resources' on Instance uuid 7dde4b1f-b13a-43ab-b40d-343e3c6e143e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:16:23 np0005539552 nova_compute[233724]: 2025-11-29 08:16:23.917 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:23 np0005539552 ovn_controller[133798]: 2025-11-29T08:16:23Z|00382|binding|INFO|Setting lport 3ee6f630-8898-496e-8d41-aec58022b039 ovn-installed in OVS
Nov 29 03:16:23 np0005539552 ovn_controller[133798]: 2025-11-29T08:16:23Z|00383|binding|INFO|Setting lport 3ee6f630-8898-496e-8d41-aec58022b039 up in Southbound
Nov 29 03:16:23 np0005539552 ovn_controller[133798]: 2025-11-29T08:16:23Z|00384|binding|INFO|Releasing lport 3ee6f630-8898-496e-8d41-aec58022b039 from this chassis (sb_readonly=1)
Nov 29 03:16:23 np0005539552 ovn_controller[133798]: 2025-11-29T08:16:23Z|00385|if_status|INFO|Dropped 2 log messages in last 14 seconds (most recently, 14 seconds ago) due to excessive rate
Nov 29 03:16:23 np0005539552 ovn_controller[133798]: 2025-11-29T08:16:23Z|00386|if_status|INFO|Not setting lport 3ee6f630-8898-496e-8d41-aec58022b039 down as sb is readonly
Nov 29 03:16:23 np0005539552 nova_compute[233724]: 2025-11-29 08:16:23.921 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:23 np0005539552 ovn_controller[133798]: 2025-11-29T08:16:23Z|00387|binding|INFO|Removing iface tap3ee6f630-88 ovn-installed in OVS
Nov 29 03:16:23 np0005539552 nova_compute[233724]: 2025-11-29 08:16:23.922 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:23 np0005539552 ovn_controller[133798]: 2025-11-29T08:16:23Z|00388|binding|INFO|Releasing lport 3ee6f630-8898-496e-8d41-aec58022b039 from this chassis (sb_readonly=0)
Nov 29 03:16:23 np0005539552 ovn_controller[133798]: 2025-11-29T08:16:23Z|00389|binding|INFO|Setting lport 3ee6f630-8898-496e-8d41-aec58022b039 down in Southbound
Nov 29 03:16:23 np0005539552 nova_compute[233724]: 2025-11-29 08:16:23.931 233728 DEBUG nova.virt.libvirt.vif [None req-c854de51-272f-4860-808e-a28ac2088169 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:15:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1556842152',display_name='tempest-ServerRescueTestJSON-server-1556842152',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1556842152',id=99,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:16:14Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4da7fb77734a4135a6f8b5b70bed7a2f',ramdisk_id='',reservation_id='r-02j02pz3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-640276387',owner_user_name='tempest-ServerRescueTestJSON-640276387-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:16:20Z,user_data=None,user_id='f4c89c9953854ecf96a802dc6055db9d',uuid=7dde4b1f-b13a-43ab-b40d-343e3c6e143e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3ee6f630-8898-496e-8d41-aec58022b039", "address": "fa:16:3e:b2:5a:c4", "network": {"id": "9404f82f-199b-4eec-83ca-0eeb6b2d1ce8", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1742590140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4da7fb77734a4135a6f8b5b70bed7a2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ee6f630-88", "ovs_interfaceid": "3ee6f630-8898-496e-8d41-aec58022b039", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:16:23 np0005539552 nova_compute[233724]: 2025-11-29 08:16:23.931 233728 DEBUG nova.network.os_vif_util [None req-c854de51-272f-4860-808e-a28ac2088169 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Converting VIF {"id": "3ee6f630-8898-496e-8d41-aec58022b039", "address": "fa:16:3e:b2:5a:c4", "network": {"id": "9404f82f-199b-4eec-83ca-0eeb6b2d1ce8", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1742590140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4da7fb77734a4135a6f8b5b70bed7a2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ee6f630-88", "ovs_interfaceid": "3ee6f630-8898-496e-8d41-aec58022b039", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:16:23 np0005539552 nova_compute[233724]: 2025-11-29 08:16:23.932 233728 DEBUG nova.network.os_vif_util [None req-c854de51-272f-4860-808e-a28ac2088169 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b2:5a:c4,bridge_name='br-int',has_traffic_filtering=True,id=3ee6f630-8898-496e-8d41-aec58022b039,network=Network(9404f82f-199b-4eec-83ca-0eeb6b2d1ce8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ee6f630-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:16:23 np0005539552 nova_compute[233724]: 2025-11-29 08:16:23.932 233728 DEBUG os_vif [None req-c854de51-272f-4860-808e-a28ac2088169 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b2:5a:c4,bridge_name='br-int',has_traffic_filtering=True,id=3ee6f630-8898-496e-8d41-aec58022b039,network=Network(9404f82f-199b-4eec-83ca-0eeb6b2d1ce8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ee6f630-88') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:16:23 np0005539552 nova_compute[233724]: 2025-11-29 08:16:23.934 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:23 np0005539552 nova_compute[233724]: 2025-11-29 08:16:23.935 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3ee6f630-88, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:16:23 np0005539552 nova_compute[233724]: 2025-11-29 08:16:23.936 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:23 np0005539552 nova_compute[233724]: 2025-11-29 08:16:23.937 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:23 np0005539552 nova_compute[233724]: 2025-11-29 08:16:23.940 233728 INFO os_vif [None req-c854de51-272f-4860-808e-a28ac2088169 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b2:5a:c4,bridge_name='br-int',has_traffic_filtering=True,id=3ee6f630-8898-496e-8d41-aec58022b039,network=Network(9404f82f-199b-4eec-83ca-0eeb6b2d1ce8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ee6f630-88')#033[00m
Nov 29 03:16:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:23.940 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:5a:c4 10.100.0.6'], port_security=['fa:16:3e:b2:5a:c4 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7dde4b1f-b13a-43ab-b40d-343e3c6e143e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9404f82f-199b-4eec-83ca-0eeb6b2d1ce8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4da7fb77734a4135a6f8b5b70bed7a2f', 'neutron:revision_number': '10', 'neutron:security_group_ids': '5d9eaea9-a53e-4ac6-a82a-9c11849e63d4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eff10a56-99f6-4778-b800-4c9f705b38bd, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=3ee6f630-8898-496e-8d41-aec58022b039) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:16:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:23.941 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 3ee6f630-8898-496e-8d41-aec58022b039 in datapath 9404f82f-199b-4eec-83ca-0eeb6b2d1ce8 unbound from our chassis#033[00m
Nov 29 03:16:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:23.941 143400 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9404f82f-199b-4eec-83ca-0eeb6b2d1ce8 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 29 03:16:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:23.942 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4880419b-0369-4ee1-a139-06f8ea2674d2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:24 np0005539552 nova_compute[233724]: 2025-11-29 08:16:24.331 233728 INFO nova.virt.libvirt.driver [None req-c854de51-272f-4860-808e-a28ac2088169 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Deleting instance files /var/lib/nova/instances/7dde4b1f-b13a-43ab-b40d-343e3c6e143e_del#033[00m
Nov 29 03:16:24 np0005539552 nova_compute[233724]: 2025-11-29 08:16:24.332 233728 INFO nova.virt.libvirt.driver [None req-c854de51-272f-4860-808e-a28ac2088169 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Deletion of /var/lib/nova/instances/7dde4b1f-b13a-43ab-b40d-343e3c6e143e_del complete#033[00m
Nov 29 03:16:24 np0005539552 nova_compute[233724]: 2025-11-29 08:16:24.416 233728 INFO nova.compute.manager [None req-c854de51-272f-4860-808e-a28ac2088169 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Took 0.74 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:16:24 np0005539552 nova_compute[233724]: 2025-11-29 08:16:24.416 233728 DEBUG oslo.service.loopingcall [None req-c854de51-272f-4860-808e-a28ac2088169 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:16:24 np0005539552 nova_compute[233724]: 2025-11-29 08:16:24.417 233728 DEBUG nova.compute.manager [-] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:16:24 np0005539552 nova_compute[233724]: 2025-11-29 08:16:24.417 233728 DEBUG nova.network.neutron [-] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:16:24 np0005539552 nova_compute[233724]: 2025-11-29 08:16:24.931 233728 DEBUG nova.compute.manager [req-c35c31c8-677c-46f0-8efe-e18c87be520a req-5d8ec76d-67a6-476e-81ae-2f6730458d49 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Received event network-vif-unplugged-3ee6f630-8898-496e-8d41-aec58022b039 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:24 np0005539552 nova_compute[233724]: 2025-11-29 08:16:24.932 233728 DEBUG oslo_concurrency.lockutils [req-c35c31c8-677c-46f0-8efe-e18c87be520a req-5d8ec76d-67a6-476e-81ae-2f6730458d49 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:24 np0005539552 nova_compute[233724]: 2025-11-29 08:16:24.933 233728 DEBUG oslo_concurrency.lockutils [req-c35c31c8-677c-46f0-8efe-e18c87be520a req-5d8ec76d-67a6-476e-81ae-2f6730458d49 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:24 np0005539552 nova_compute[233724]: 2025-11-29 08:16:24.933 233728 DEBUG oslo_concurrency.lockutils [req-c35c31c8-677c-46f0-8efe-e18c87be520a req-5d8ec76d-67a6-476e-81ae-2f6730458d49 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:24 np0005539552 nova_compute[233724]: 2025-11-29 08:16:24.933 233728 DEBUG nova.compute.manager [req-c35c31c8-677c-46f0-8efe-e18c87be520a req-5d8ec76d-67a6-476e-81ae-2f6730458d49 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] No waiting events found dispatching network-vif-unplugged-3ee6f630-8898-496e-8d41-aec58022b039 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:16:24 np0005539552 nova_compute[233724]: 2025-11-29 08:16:24.933 233728 DEBUG nova.compute.manager [req-c35c31c8-677c-46f0-8efe-e18c87be520a req-5d8ec76d-67a6-476e-81ae-2f6730458d49 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Received event network-vif-unplugged-3ee6f630-8898-496e-8d41-aec58022b039 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:16:24 np0005539552 nova_compute[233724]: 2025-11-29 08:16:24.934 233728 DEBUG nova.compute.manager [req-c35c31c8-677c-46f0-8efe-e18c87be520a req-5d8ec76d-67a6-476e-81ae-2f6730458d49 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Received event network-vif-plugged-3ee6f630-8898-496e-8d41-aec58022b039 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:24 np0005539552 nova_compute[233724]: 2025-11-29 08:16:24.934 233728 DEBUG oslo_concurrency.lockutils [req-c35c31c8-677c-46f0-8efe-e18c87be520a req-5d8ec76d-67a6-476e-81ae-2f6730458d49 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:24 np0005539552 nova_compute[233724]: 2025-11-29 08:16:24.934 233728 DEBUG oslo_concurrency.lockutils [req-c35c31c8-677c-46f0-8efe-e18c87be520a req-5d8ec76d-67a6-476e-81ae-2f6730458d49 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:24 np0005539552 nova_compute[233724]: 2025-11-29 08:16:24.934 233728 DEBUG oslo_concurrency.lockutils [req-c35c31c8-677c-46f0-8efe-e18c87be520a req-5d8ec76d-67a6-476e-81ae-2f6730458d49 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:24 np0005539552 nova_compute[233724]: 2025-11-29 08:16:24.935 233728 DEBUG nova.compute.manager [req-c35c31c8-677c-46f0-8efe-e18c87be520a req-5d8ec76d-67a6-476e-81ae-2f6730458d49 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] No waiting events found dispatching network-vif-plugged-3ee6f630-8898-496e-8d41-aec58022b039 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:16:24 np0005539552 nova_compute[233724]: 2025-11-29 08:16:24.935 233728 WARNING nova.compute.manager [req-c35c31c8-677c-46f0-8efe-e18c87be520a req-5d8ec76d-67a6-476e-81ae-2f6730458d49 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Received unexpected event network-vif-plugged-3ee6f630-8898-496e-8d41-aec58022b039 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:16:24 np0005539552 nova_compute[233724]: 2025-11-29 08:16:24.935 233728 DEBUG nova.compute.manager [req-c35c31c8-677c-46f0-8efe-e18c87be520a req-5d8ec76d-67a6-476e-81ae-2f6730458d49 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Received event network-vif-plugged-3ee6f630-8898-496e-8d41-aec58022b039 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:24 np0005539552 nova_compute[233724]: 2025-11-29 08:16:24.935 233728 DEBUG oslo_concurrency.lockutils [req-c35c31c8-677c-46f0-8efe-e18c87be520a req-5d8ec76d-67a6-476e-81ae-2f6730458d49 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:24 np0005539552 nova_compute[233724]: 2025-11-29 08:16:24.936 233728 DEBUG oslo_concurrency.lockutils [req-c35c31c8-677c-46f0-8efe-e18c87be520a req-5d8ec76d-67a6-476e-81ae-2f6730458d49 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:24 np0005539552 nova_compute[233724]: 2025-11-29 08:16:24.936 233728 DEBUG oslo_concurrency.lockutils [req-c35c31c8-677c-46f0-8efe-e18c87be520a req-5d8ec76d-67a6-476e-81ae-2f6730458d49 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:24 np0005539552 nova_compute[233724]: 2025-11-29 08:16:24.936 233728 DEBUG nova.compute.manager [req-c35c31c8-677c-46f0-8efe-e18c87be520a req-5d8ec76d-67a6-476e-81ae-2f6730458d49 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] No waiting events found dispatching network-vif-plugged-3ee6f630-8898-496e-8d41-aec58022b039 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:16:24 np0005539552 nova_compute[233724]: 2025-11-29 08:16:24.936 233728 WARNING nova.compute.manager [req-c35c31c8-677c-46f0-8efe-e18c87be520a req-5d8ec76d-67a6-476e-81ae-2f6730458d49 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Received unexpected event network-vif-plugged-3ee6f630-8898-496e-8d41-aec58022b039 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:16:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:16:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:25.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:16:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:25.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:25 np0005539552 nova_compute[233724]: 2025-11-29 08:16:25.810 233728 DEBUG nova.network.neutron [-] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:16:25 np0005539552 nova_compute[233724]: 2025-11-29 08:16:25.848 233728 INFO nova.compute.manager [-] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Took 1.43 seconds to deallocate network for instance.#033[00m
Nov 29 03:16:25 np0005539552 nova_compute[233724]: 2025-11-29 08:16:25.919 233728 DEBUG oslo_concurrency.lockutils [None req-c854de51-272f-4860-808e-a28ac2088169 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:25 np0005539552 nova_compute[233724]: 2025-11-29 08:16:25.919 233728 DEBUG oslo_concurrency.lockutils [None req-c854de51-272f-4860-808e-a28ac2088169 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:25 np0005539552 nova_compute[233724]: 2025-11-29 08:16:25.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:25 np0005539552 nova_compute[233724]: 2025-11-29 08:16:25.951 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:25 np0005539552 nova_compute[233724]: 2025-11-29 08:16:25.974 233728 DEBUG oslo_concurrency.processutils [None req-c854de51-272f-4860-808e-a28ac2088169 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:26 np0005539552 nova_compute[233724]: 2025-11-29 08:16:26.110 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:26 np0005539552 nova_compute[233724]: 2025-11-29 08:16:26.178 233728 DEBUG nova.compute.manager [req-39dec46d-65dc-4951-b6be-0b61afef5d85 req-a99c3ba3-5648-433a-9d37-9a5b3f8d95e0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Received event network-vif-deleted-3ee6f630-8898-496e-8d41-aec58022b039 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:26 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:16:26 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3591158569' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:16:26 np0005539552 nova_compute[233724]: 2025-11-29 08:16:26.429 233728 DEBUG oslo_concurrency.processutils [None req-c854de51-272f-4860-808e-a28ac2088169 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:26 np0005539552 nova_compute[233724]: 2025-11-29 08:16:26.434 233728 DEBUG nova.compute.provider_tree [None req-c854de51-272f-4860-808e-a28ac2088169 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:16:26 np0005539552 nova_compute[233724]: 2025-11-29 08:16:26.481 233728 DEBUG nova.scheduler.client.report [None req-c854de51-272f-4860-808e-a28ac2088169 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:16:26 np0005539552 nova_compute[233724]: 2025-11-29 08:16:26.539 233728 DEBUG oslo_concurrency.lockutils [None req-c854de51-272f-4860-808e-a28ac2088169 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:26 np0005539552 nova_compute[233724]: 2025-11-29 08:16:26.542 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:26 np0005539552 nova_compute[233724]: 2025-11-29 08:16:26.542 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:26 np0005539552 nova_compute[233724]: 2025-11-29 08:16:26.542 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:16:26 np0005539552 nova_compute[233724]: 2025-11-29 08:16:26.542 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:26 np0005539552 nova_compute[233724]: 2025-11-29 08:16:26.613 233728 INFO nova.scheduler.client.report [None req-c854de51-272f-4860-808e-a28ac2088169 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Deleted allocations for instance 7dde4b1f-b13a-43ab-b40d-343e3c6e143e#033[00m
Nov 29 03:16:26 np0005539552 nova_compute[233724]: 2025-11-29 08:16:26.822 233728 DEBUG oslo_concurrency.lockutils [None req-c854de51-272f-4860-808e-a28ac2088169 f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.146s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:26 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:16:26 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1456183994' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:16:26 np0005539552 nova_compute[233724]: 2025-11-29 08:16:26.978 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:27 np0005539552 nova_compute[233724]: 2025-11-29 08:16:27.130 233728 DEBUG nova.compute.manager [req-ecf9e313-4c6b-44f9-acbf-0a28f91872b1 req-32f3a200-c75b-4bfe-bc0b-9269f9a76a81 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Received event network-vif-plugged-3ee6f630-8898-496e-8d41-aec58022b039 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:27 np0005539552 nova_compute[233724]: 2025-11-29 08:16:27.130 233728 DEBUG oslo_concurrency.lockutils [req-ecf9e313-4c6b-44f9-acbf-0a28f91872b1 req-32f3a200-c75b-4bfe-bc0b-9269f9a76a81 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:27 np0005539552 nova_compute[233724]: 2025-11-29 08:16:27.131 233728 DEBUG oslo_concurrency.lockutils [req-ecf9e313-4c6b-44f9-acbf-0a28f91872b1 req-32f3a200-c75b-4bfe-bc0b-9269f9a76a81 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:27 np0005539552 nova_compute[233724]: 2025-11-29 08:16:27.131 233728 DEBUG oslo_concurrency.lockutils [req-ecf9e313-4c6b-44f9-acbf-0a28f91872b1 req-32f3a200-c75b-4bfe-bc0b-9269f9a76a81 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:27 np0005539552 nova_compute[233724]: 2025-11-29 08:16:27.131 233728 DEBUG nova.compute.manager [req-ecf9e313-4c6b-44f9-acbf-0a28f91872b1 req-32f3a200-c75b-4bfe-bc0b-9269f9a76a81 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] No waiting events found dispatching network-vif-plugged-3ee6f630-8898-496e-8d41-aec58022b039 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:16:27 np0005539552 nova_compute[233724]: 2025-11-29 08:16:27.131 233728 WARNING nova.compute.manager [req-ecf9e313-4c6b-44f9-acbf-0a28f91872b1 req-32f3a200-c75b-4bfe-bc0b-9269f9a76a81 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Received unexpected event network-vif-plugged-3ee6f630-8898-496e-8d41-aec58022b039 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:16:27 np0005539552 nova_compute[233724]: 2025-11-29 08:16:27.131 233728 DEBUG nova.compute.manager [req-ecf9e313-4c6b-44f9-acbf-0a28f91872b1 req-32f3a200-c75b-4bfe-bc0b-9269f9a76a81 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Received event network-vif-unplugged-3ee6f630-8898-496e-8d41-aec58022b039 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:27 np0005539552 nova_compute[233724]: 2025-11-29 08:16:27.132 233728 DEBUG oslo_concurrency.lockutils [req-ecf9e313-4c6b-44f9-acbf-0a28f91872b1 req-32f3a200-c75b-4bfe-bc0b-9269f9a76a81 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:27 np0005539552 nova_compute[233724]: 2025-11-29 08:16:27.132 233728 DEBUG oslo_concurrency.lockutils [req-ecf9e313-4c6b-44f9-acbf-0a28f91872b1 req-32f3a200-c75b-4bfe-bc0b-9269f9a76a81 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:27 np0005539552 nova_compute[233724]: 2025-11-29 08:16:27.132 233728 DEBUG oslo_concurrency.lockutils [req-ecf9e313-4c6b-44f9-acbf-0a28f91872b1 req-32f3a200-c75b-4bfe-bc0b-9269f9a76a81 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:27 np0005539552 nova_compute[233724]: 2025-11-29 08:16:27.132 233728 DEBUG nova.compute.manager [req-ecf9e313-4c6b-44f9-acbf-0a28f91872b1 req-32f3a200-c75b-4bfe-bc0b-9269f9a76a81 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] No waiting events found dispatching network-vif-unplugged-3ee6f630-8898-496e-8d41-aec58022b039 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:16:27 np0005539552 nova_compute[233724]: 2025-11-29 08:16:27.132 233728 WARNING nova.compute.manager [req-ecf9e313-4c6b-44f9-acbf-0a28f91872b1 req-32f3a200-c75b-4bfe-bc0b-9269f9a76a81 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Received unexpected event network-vif-unplugged-3ee6f630-8898-496e-8d41-aec58022b039 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:16:27 np0005539552 nova_compute[233724]: 2025-11-29 08:16:27.132 233728 DEBUG nova.compute.manager [req-ecf9e313-4c6b-44f9-acbf-0a28f91872b1 req-32f3a200-c75b-4bfe-bc0b-9269f9a76a81 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Received event network-vif-plugged-3ee6f630-8898-496e-8d41-aec58022b039 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:27 np0005539552 nova_compute[233724]: 2025-11-29 08:16:27.133 233728 DEBUG oslo_concurrency.lockutils [req-ecf9e313-4c6b-44f9-acbf-0a28f91872b1 req-32f3a200-c75b-4bfe-bc0b-9269f9a76a81 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:27 np0005539552 nova_compute[233724]: 2025-11-29 08:16:27.133 233728 DEBUG oslo_concurrency.lockutils [req-ecf9e313-4c6b-44f9-acbf-0a28f91872b1 req-32f3a200-c75b-4bfe-bc0b-9269f9a76a81 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:27 np0005539552 nova_compute[233724]: 2025-11-29 08:16:27.133 233728 DEBUG oslo_concurrency.lockutils [req-ecf9e313-4c6b-44f9-acbf-0a28f91872b1 req-32f3a200-c75b-4bfe-bc0b-9269f9a76a81 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7dde4b1f-b13a-43ab-b40d-343e3c6e143e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:27 np0005539552 nova_compute[233724]: 2025-11-29 08:16:27.133 233728 DEBUG nova.compute.manager [req-ecf9e313-4c6b-44f9-acbf-0a28f91872b1 req-32f3a200-c75b-4bfe-bc0b-9269f9a76a81 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] No waiting events found dispatching network-vif-plugged-3ee6f630-8898-496e-8d41-aec58022b039 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:16:27 np0005539552 nova_compute[233724]: 2025-11-29 08:16:27.133 233728 WARNING nova.compute.manager [req-ecf9e313-4c6b-44f9-acbf-0a28f91872b1 req-32f3a200-c75b-4bfe-bc0b-9269f9a76a81 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Received unexpected event network-vif-plugged-3ee6f630-8898-496e-8d41-aec58022b039 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:16:27 np0005539552 nova_compute[233724]: 2025-11-29 08:16:27.152 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:16:27 np0005539552 nova_compute[233724]: 2025-11-29 08:16:27.153 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4426MB free_disk=20.81185531616211GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:16:27 np0005539552 nova_compute[233724]: 2025-11-29 08:16:27.153 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:27 np0005539552 nova_compute[233724]: 2025-11-29 08:16:27.153 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:27 np0005539552 nova_compute[233724]: 2025-11-29 08:16:27.237 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:16:27 np0005539552 nova_compute[233724]: 2025-11-29 08:16:27.238 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:16:27 np0005539552 nova_compute[233724]: 2025-11-29 08:16:27.283 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:27.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:16:27 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2806176661' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:16:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:27 np0005539552 nova_compute[233724]: 2025-11-29 08:16:27.732 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:27 np0005539552 nova_compute[233724]: 2025-11-29 08:16:27.740 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:16:27 np0005539552 nova_compute[233724]: 2025-11-29 08:16:27.783 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:16:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:16:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:27.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:16:27 np0005539552 nova_compute[233724]: 2025-11-29 08:16:27.829 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:16:27 np0005539552 nova_compute[233724]: 2025-11-29 08:16:27.829 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:28 np0005539552 nova_compute[233724]: 2025-11-29 08:16:28.943 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:28 np0005539552 nova_compute[233724]: 2025-11-29 08:16:28.945 233728 DEBUG oslo_concurrency.lockutils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Acquiring lock "ad6070e8-74bc-4df7-9c2d-5da5da175238" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:28 np0005539552 nova_compute[233724]: 2025-11-29 08:16:28.945 233728 DEBUG oslo_concurrency.lockutils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:28 np0005539552 nova_compute[233724]: 2025-11-29 08:16:28.966 233728 DEBUG oslo_concurrency.lockutils [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Acquiring lock "ac865ef4-0329-4b54-b51e-3b0fdb0462d4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:28 np0005539552 nova_compute[233724]: 2025-11-29 08:16:28.967 233728 DEBUG oslo_concurrency.lockutils [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Lock "ac865ef4-0329-4b54-b51e-3b0fdb0462d4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:28 np0005539552 nova_compute[233724]: 2025-11-29 08:16:28.980 233728 DEBUG nova.compute.manager [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:16:29 np0005539552 nova_compute[233724]: 2025-11-29 08:16:29.006 233728 DEBUG nova.compute.manager [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:16:29 np0005539552 nova_compute[233724]: 2025-11-29 08:16:29.090 233728 DEBUG oslo_concurrency.lockutils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:29 np0005539552 nova_compute[233724]: 2025-11-29 08:16:29.091 233728 DEBUG oslo_concurrency.lockutils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:29 np0005539552 nova_compute[233724]: 2025-11-29 08:16:29.096 233728 DEBUG nova.virt.hardware [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:16:29 np0005539552 nova_compute[233724]: 2025-11-29 08:16:29.097 233728 INFO nova.compute.claims [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:16:29 np0005539552 nova_compute[233724]: 2025-11-29 08:16:29.124 233728 DEBUG oslo_concurrency.lockutils [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:29.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:29 np0005539552 nova_compute[233724]: 2025-11-29 08:16:29.325 233728 DEBUG oslo_concurrency.processutils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:29 np0005539552 podman[275013]: 2025-11-29 08:16:29.713787655 +0000 UTC m=+0.062452969 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 03:16:29 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:16:29 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3068443143' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:16:29 np0005539552 podman[275012]: 2025-11-29 08:16:29.740586325 +0000 UTC m=+0.089689480 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:16:29 np0005539552 podman[275014]: 2025-11-29 08:16:29.747482931 +0000 UTC m=+0.086313240 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 03:16:29 np0005539552 nova_compute[233724]: 2025-11-29 08:16:29.761 233728 DEBUG oslo_concurrency.processutils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:29 np0005539552 nova_compute[233724]: 2025-11-29 08:16:29.766 233728 DEBUG nova.compute.provider_tree [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:16:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:29.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:29 np0005539552 nova_compute[233724]: 2025-11-29 08:16:29.803 233728 DEBUG nova.scheduler.client.report [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:16:29 np0005539552 nova_compute[233724]: 2025-11-29 08:16:29.854 233728 DEBUG oslo_concurrency.lockutils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:29 np0005539552 nova_compute[233724]: 2025-11-29 08:16:29.855 233728 DEBUG nova.compute.manager [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:16:29 np0005539552 nova_compute[233724]: 2025-11-29 08:16:29.861 233728 DEBUG oslo_concurrency.lockutils [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.737s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:29 np0005539552 nova_compute[233724]: 2025-11-29 08:16:29.869 233728 DEBUG nova.virt.hardware [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:16:29 np0005539552 nova_compute[233724]: 2025-11-29 08:16:29.869 233728 INFO nova.compute.claims [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:16:29 np0005539552 nova_compute[233724]: 2025-11-29 08:16:29.932 233728 DEBUG nova.compute.manager [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:16:29 np0005539552 nova_compute[233724]: 2025-11-29 08:16:29.933 233728 DEBUG nova.network.neutron [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:16:29 np0005539552 nova_compute[233724]: 2025-11-29 08:16:29.959 233728 INFO nova.virt.libvirt.driver [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:16:30 np0005539552 nova_compute[233724]: 2025-11-29 08:16:30.022 233728 DEBUG nova.compute.manager [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:16:30 np0005539552 nova_compute[233724]: 2025-11-29 08:16:30.044 233728 DEBUG oslo_concurrency.processutils [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:30 np0005539552 nova_compute[233724]: 2025-11-29 08:16:30.106 233728 INFO nova.virt.block_device [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Booting with volume 618924f8-fe57-45de-85ec-593336349be7 at /dev/vda#033[00m
Nov 29 03:16:30 np0005539552 nova_compute[233724]: 2025-11-29 08:16:30.465 233728 DEBUG os_brick.utils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:16:30 np0005539552 nova_compute[233724]: 2025-11-29 08:16:30.466 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:30 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:16:30 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1172092710' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:16:30 np0005539552 nova_compute[233724]: 2025-11-29 08:16:30.491 243418 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:30 np0005539552 nova_compute[233724]: 2025-11-29 08:16:30.491 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[09283a43-69c0-4ece-86a5-118be7cab6bf]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:30 np0005539552 nova_compute[233724]: 2025-11-29 08:16:30.492 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:30 np0005539552 nova_compute[233724]: 2025-11-29 08:16:30.500 243418 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:30 np0005539552 nova_compute[233724]: 2025-11-29 08:16:30.501 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[5f85aefb-463b-4ce2-9e22-465448b1dcf5]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0e73df9328', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:30 np0005539552 nova_compute[233724]: 2025-11-29 08:16:30.502 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:30 np0005539552 nova_compute[233724]: 2025-11-29 08:16:30.510 243418 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:30 np0005539552 nova_compute[233724]: 2025-11-29 08:16:30.511 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[cd540c2b-50fc-432b-b9cf-d60c4c4607b9]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:30 np0005539552 nova_compute[233724]: 2025-11-29 08:16:30.512 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[afe030e4-a9b2-40bd-89d6-9d702466fe75]: (4, '6fbde64a-d978-4f1a-a29d-a77e1f5a1987') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:30 np0005539552 nova_compute[233724]: 2025-11-29 08:16:30.515 233728 DEBUG oslo_concurrency.processutils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:30 np0005539552 nova_compute[233724]: 2025-11-29 08:16:30.540 233728 DEBUG oslo_concurrency.processutils [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:30 np0005539552 nova_compute[233724]: 2025-11-29 08:16:30.543 233728 DEBUG oslo_concurrency.processutils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] CMD "nvme version" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:30 np0005539552 nova_compute[233724]: 2025-11-29 08:16:30.545 233728 DEBUG os_brick.initiator.connectors.lightos [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:16:30 np0005539552 nova_compute[233724]: 2025-11-29 08:16:30.546 233728 DEBUG os_brick.initiator.connectors.lightos [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:16:30 np0005539552 nova_compute[233724]: 2025-11-29 08:16:30.546 233728 DEBUG os_brick.initiator.connectors.lightos [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:16:30 np0005539552 nova_compute[233724]: 2025-11-29 08:16:30.546 233728 DEBUG os_brick.utils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] <== get_connector_properties: return (81ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0e73df9328', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '6fbde64a-d978-4f1a-a29d-a77e1f5a1987', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:16:30 np0005539552 nova_compute[233724]: 2025-11-29 08:16:30.546 233728 DEBUG nova.virt.block_device [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Updating existing volume attachment record: 49cf7d42-57e3-4052-bb5a-bbc8ab6da5bd _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:16:30 np0005539552 nova_compute[233724]: 2025-11-29 08:16:30.552 233728 DEBUG nova.compute.provider_tree [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:16:30 np0005539552 nova_compute[233724]: 2025-11-29 08:16:30.576 233728 DEBUG nova.scheduler.client.report [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:16:30 np0005539552 nova_compute[233724]: 2025-11-29 08:16:30.613 233728 DEBUG oslo_concurrency.lockutils [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:30 np0005539552 nova_compute[233724]: 2025-11-29 08:16:30.614 233728 DEBUG nova.compute.manager [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:16:30 np0005539552 nova_compute[233724]: 2025-11-29 08:16:30.738 233728 DEBUG nova.compute.manager [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:16:30 np0005539552 nova_compute[233724]: 2025-11-29 08:16:30.739 233728 DEBUG nova.network.neutron [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:16:30 np0005539552 nova_compute[233724]: 2025-11-29 08:16:30.794 233728 INFO nova.virt.libvirt.driver [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:16:30 np0005539552 nova_compute[233724]: 2025-11-29 08:16:30.829 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:30 np0005539552 nova_compute[233724]: 2025-11-29 08:16:30.830 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:30 np0005539552 nova_compute[233724]: 2025-11-29 08:16:30.830 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:16:30 np0005539552 nova_compute[233724]: 2025-11-29 08:16:30.883 233728 DEBUG nova.compute.manager [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:16:30 np0005539552 nova_compute[233724]: 2025-11-29 08:16:30.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:30 np0005539552 nova_compute[233724]: 2025-11-29 08:16:30.989 233728 DEBUG nova.policy [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2646d924f10246c98f4ee29d496eb0f3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9e6235234e63419ead82cbd9a07d500f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:16:31 np0005539552 nova_compute[233724]: 2025-11-29 08:16:31.070 233728 DEBUG nova.compute.manager [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:16:31 np0005539552 nova_compute[233724]: 2025-11-29 08:16:31.071 233728 DEBUG nova.virt.libvirt.driver [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:16:31 np0005539552 nova_compute[233724]: 2025-11-29 08:16:31.072 233728 INFO nova.virt.libvirt.driver [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Creating image(s)#033[00m
Nov 29 03:16:31 np0005539552 nova_compute[233724]: 2025-11-29 08:16:31.094 233728 DEBUG nova.storage.rbd_utils [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] rbd image ac865ef4-0329-4b54-b51e-3b0fdb0462d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:31 np0005539552 nova_compute[233724]: 2025-11-29 08:16:31.118 233728 DEBUG nova.storage.rbd_utils [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] rbd image ac865ef4-0329-4b54-b51e-3b0fdb0462d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:31 np0005539552 nova_compute[233724]: 2025-11-29 08:16:31.148 233728 DEBUG nova.storage.rbd_utils [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] rbd image ac865ef4-0329-4b54-b51e-3b0fdb0462d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:31 np0005539552 nova_compute[233724]: 2025-11-29 08:16:31.153 233728 DEBUG oslo_concurrency.processutils [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:31 np0005539552 nova_compute[233724]: 2025-11-29 08:16:31.187 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:31 np0005539552 nova_compute[233724]: 2025-11-29 08:16:31.194 233728 DEBUG nova.policy [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f5ee3792ff1a4aa3ab899edb89244703', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3dddf50fa9834b84b8c792205ab0590e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:16:31 np0005539552 nova_compute[233724]: 2025-11-29 08:16:31.234 233728 DEBUG oslo_concurrency.processutils [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:31 np0005539552 nova_compute[233724]: 2025-11-29 08:16:31.235 233728 DEBUG oslo_concurrency.lockutils [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:31 np0005539552 nova_compute[233724]: 2025-11-29 08:16:31.236 233728 DEBUG oslo_concurrency.lockutils [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:31 np0005539552 nova_compute[233724]: 2025-11-29 08:16:31.236 233728 DEBUG oslo_concurrency.lockutils [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:31 np0005539552 nova_compute[233724]: 2025-11-29 08:16:31.262 233728 DEBUG nova.storage.rbd_utils [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] rbd image ac865ef4-0329-4b54-b51e-3b0fdb0462d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:31 np0005539552 nova_compute[233724]: 2025-11-29 08:16:31.267 233728 DEBUG oslo_concurrency.processutils [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 ac865ef4-0329-4b54-b51e-3b0fdb0462d4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:31.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:31 np0005539552 nova_compute[233724]: 2025-11-29 08:16:31.557 233728 DEBUG oslo_concurrency.processutils [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 ac865ef4-0329-4b54-b51e-3b0fdb0462d4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.289s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:31 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:16:31 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4101889337' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:16:31 np0005539552 nova_compute[233724]: 2025-11-29 08:16:31.623 233728 DEBUG nova.storage.rbd_utils [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] resizing rbd image ac865ef4-0329-4b54-b51e-3b0fdb0462d4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:16:31 np0005539552 nova_compute[233724]: 2025-11-29 08:16:31.728 233728 DEBUG nova.objects.instance [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Lazy-loading 'migration_context' on Instance uuid ac865ef4-0329-4b54-b51e-3b0fdb0462d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:16:31 np0005539552 nova_compute[233724]: 2025-11-29 08:16:31.754 233728 DEBUG nova.virt.libvirt.driver [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:16:31 np0005539552 nova_compute[233724]: 2025-11-29 08:16:31.755 233728 DEBUG nova.virt.libvirt.driver [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Ensure instance console log exists: /var/lib/nova/instances/ac865ef4-0329-4b54-b51e-3b0fdb0462d4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:16:31 np0005539552 nova_compute[233724]: 2025-11-29 08:16:31.755 233728 DEBUG oslo_concurrency.lockutils [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:31 np0005539552 nova_compute[233724]: 2025-11-29 08:16:31.755 233728 DEBUG oslo_concurrency.lockutils [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:31 np0005539552 nova_compute[233724]: 2025-11-29 08:16:31.756 233728 DEBUG oslo_concurrency.lockutils [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:31.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:31 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e297 e297: 3 total, 3 up, 3 in
Nov 29 03:16:32 np0005539552 nova_compute[233724]: 2025-11-29 08:16:32.205 233728 INFO nova.virt.block_device [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Booting with volume 5c8ade3d-ec33-464b-bbb2-0e63ad58b8e9 at /dev/vdb#033[00m
Nov 29 03:16:32 np0005539552 nova_compute[233724]: 2025-11-29 08:16:32.478 233728 DEBUG nova.network.neutron [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Successfully created port: 7c7e09b9-26df-4135-9c27-6dc6a737f677 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:16:32 np0005539552 nova_compute[233724]: 2025-11-29 08:16:32.527 233728 DEBUG os_brick.utils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:16:32 np0005539552 nova_compute[233724]: 2025-11-29 08:16:32.528 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:32 np0005539552 nova_compute[233724]: 2025-11-29 08:16:32.538 243418 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:32 np0005539552 nova_compute[233724]: 2025-11-29 08:16:32.538 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[162e5e0b-f055-4ccc-9044-47de9d9d1ab4]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:32 np0005539552 nova_compute[233724]: 2025-11-29 08:16:32.539 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:32 np0005539552 nova_compute[233724]: 2025-11-29 08:16:32.546 243418 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:32 np0005539552 nova_compute[233724]: 2025-11-29 08:16:32.546 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[a9c708da-7735-44dc-ab2e-a0db2b29eab5]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0e73df9328', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:32 np0005539552 nova_compute[233724]: 2025-11-29 08:16:32.548 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:32 np0005539552 nova_compute[233724]: 2025-11-29 08:16:32.554 243418 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:32 np0005539552 nova_compute[233724]: 2025-11-29 08:16:32.554 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[d6d50dc3-cc7f-429c-a680-790f0afc7cd8]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:32 np0005539552 nova_compute[233724]: 2025-11-29 08:16:32.555 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[240db69a-12eb-4aa1-8867-fc038f140d9f]: (4, '6fbde64a-d978-4f1a-a29d-a77e1f5a1987') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:32 np0005539552 nova_compute[233724]: 2025-11-29 08:16:32.555 233728 DEBUG oslo_concurrency.processutils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:32 np0005539552 nova_compute[233724]: 2025-11-29 08:16:32.578 233728 DEBUG oslo_concurrency.processutils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] CMD "nvme version" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:32 np0005539552 nova_compute[233724]: 2025-11-29 08:16:32.581 233728 DEBUG os_brick.initiator.connectors.lightos [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:16:32 np0005539552 nova_compute[233724]: 2025-11-29 08:16:32.581 233728 DEBUG os_brick.initiator.connectors.lightos [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:16:32 np0005539552 nova_compute[233724]: 2025-11-29 08:16:32.581 233728 DEBUG os_brick.initiator.connectors.lightos [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:16:32 np0005539552 nova_compute[233724]: 2025-11-29 08:16:32.582 233728 DEBUG os_brick.utils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] <== get_connector_properties: return (54ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0e73df9328', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '6fbde64a-d978-4f1a-a29d-a77e1f5a1987', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:16:32 np0005539552 nova_compute[233724]: 2025-11-29 08:16:32.582 233728 DEBUG nova.virt.block_device [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Updating existing volume attachment record: b2bb1101-20a8-4887-ba52-422e4e2fe68b _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:16:32 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:32 np0005539552 nova_compute[233724]: 2025-11-29 08:16:32.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:32 np0005539552 nova_compute[233724]: 2025-11-29 08:16:32.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:16:32 np0005539552 nova_compute[233724]: 2025-11-29 08:16:32.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:16:32 np0005539552 nova_compute[233724]: 2025-11-29 08:16:32.964 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 03:16:32 np0005539552 nova_compute[233724]: 2025-11-29 08:16:32.964 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 03:16:32 np0005539552 nova_compute[233724]: 2025-11-29 08:16:32.965 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:16:32 np0005539552 nova_compute[233724]: 2025-11-29 08:16:32.965 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:32 np0005539552 nova_compute[233724]: 2025-11-29 08:16:32.965 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:33 np0005539552 nova_compute[233724]: 2025-11-29 08:16:33.017 233728 DEBUG nova.network.neutron [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Successfully created port: 082c8475-1c86-43c9-b68c-dafbd502311e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:16:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:16:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:33.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:16:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:16:33 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3774267824' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:16:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:33.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:33 np0005539552 nova_compute[233724]: 2025-11-29 08:16:33.946 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:33 np0005539552 nova_compute[233724]: 2025-11-29 08:16:33.960 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:34 np0005539552 nova_compute[233724]: 2025-11-29 08:16:34.005 233728 DEBUG nova.network.neutron [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Successfully created port: d87e0ecc-102f-4db6-b802-352374722987 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:16:34 np0005539552 nova_compute[233724]: 2025-11-29 08:16:34.101 233728 INFO nova.virt.block_device [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Booting with volume 414e4eae-2565-42a3-bb35-fdd1ef6e127f at /dev/vdc#033[00m
Nov 29 03:16:34 np0005539552 nova_compute[233724]: 2025-11-29 08:16:34.261 233728 DEBUG nova.network.neutron [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Successfully updated port: 7c7e09b9-26df-4135-9c27-6dc6a737f677 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:16:34 np0005539552 nova_compute[233724]: 2025-11-29 08:16:34.282 233728 DEBUG oslo_concurrency.lockutils [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Acquiring lock "refresh_cache-ac865ef4-0329-4b54-b51e-3b0fdb0462d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:16:34 np0005539552 nova_compute[233724]: 2025-11-29 08:16:34.282 233728 DEBUG oslo_concurrency.lockutils [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Acquired lock "refresh_cache-ac865ef4-0329-4b54-b51e-3b0fdb0462d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:16:34 np0005539552 nova_compute[233724]: 2025-11-29 08:16:34.283 233728 DEBUG nova.network.neutron [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:16:34 np0005539552 nova_compute[233724]: 2025-11-29 08:16:34.289 233728 DEBUG os_brick.utils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:16:34 np0005539552 nova_compute[233724]: 2025-11-29 08:16:34.291 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:34 np0005539552 nova_compute[233724]: 2025-11-29 08:16:34.300 243418 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:34 np0005539552 nova_compute[233724]: 2025-11-29 08:16:34.301 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[76e82466-406e-4952-ba51-7987d49c08d2]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:34 np0005539552 nova_compute[233724]: 2025-11-29 08:16:34.306 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:34 np0005539552 nova_compute[233724]: 2025-11-29 08:16:34.315 243418 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:34 np0005539552 nova_compute[233724]: 2025-11-29 08:16:34.315 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[945780fb-4087-41f3-91d3-0c066afb0ca3]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0e73df9328', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:34 np0005539552 nova_compute[233724]: 2025-11-29 08:16:34.317 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:34 np0005539552 nova_compute[233724]: 2025-11-29 08:16:34.327 243418 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:34 np0005539552 nova_compute[233724]: 2025-11-29 08:16:34.327 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[30f0b13f-1517-4b19-bafb-acdb5335f78c]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:34 np0005539552 nova_compute[233724]: 2025-11-29 08:16:34.329 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[2387e677-9de5-42a5-9c70-c383a1016606]: (4, '6fbde64a-d978-4f1a-a29d-a77e1f5a1987') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:34 np0005539552 nova_compute[233724]: 2025-11-29 08:16:34.330 233728 DEBUG oslo_concurrency.processutils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:34 np0005539552 nova_compute[233724]: 2025-11-29 08:16:34.373 233728 DEBUG oslo_concurrency.processutils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] CMD "nvme version" returned: 0 in 0.043s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:34 np0005539552 nova_compute[233724]: 2025-11-29 08:16:34.378 233728 DEBUG os_brick.initiator.connectors.lightos [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:16:34 np0005539552 nova_compute[233724]: 2025-11-29 08:16:34.379 233728 DEBUG os_brick.initiator.connectors.lightos [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:16:34 np0005539552 nova_compute[233724]: 2025-11-29 08:16:34.379 233728 DEBUG os_brick.initiator.connectors.lightos [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:16:34 np0005539552 nova_compute[233724]: 2025-11-29 08:16:34.380 233728 DEBUG os_brick.utils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] <== get_connector_properties: return (89ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0e73df9328', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '6fbde64a-d978-4f1a-a29d-a77e1f5a1987', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:16:34 np0005539552 nova_compute[233724]: 2025-11-29 08:16:34.381 233728 DEBUG nova.virt.block_device [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Updating existing volume attachment record: 87251777-4c3d-4b32-b1b1-ffbc41a78285 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:16:34 np0005539552 nova_compute[233724]: 2025-11-29 08:16:34.392 233728 DEBUG nova.compute.manager [req-2782b58d-2cf8-4c33-a64d-16d87c34adf1 req-58cbd9f5-bd41-4a89-8e88-6bd6a98dd357 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Received event network-changed-7c7e09b9-26df-4135-9c27-6dc6a737f677 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:34 np0005539552 nova_compute[233724]: 2025-11-29 08:16:34.393 233728 DEBUG nova.compute.manager [req-2782b58d-2cf8-4c33-a64d-16d87c34adf1 req-58cbd9f5-bd41-4a89-8e88-6bd6a98dd357 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Refreshing instance network info cache due to event network-changed-7c7e09b9-26df-4135-9c27-6dc6a737f677. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:16:34 np0005539552 nova_compute[233724]: 2025-11-29 08:16:34.393 233728 DEBUG oslo_concurrency.lockutils [req-2782b58d-2cf8-4c33-a64d-16d87c34adf1 req-58cbd9f5-bd41-4a89-8e88-6bd6a98dd357 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-ac865ef4-0329-4b54-b51e-3b0fdb0462d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:16:34 np0005539552 nova_compute[233724]: 2025-11-29 08:16:34.629 233728 DEBUG nova.network.neutron [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:16:34 np0005539552 nova_compute[233724]: 2025-11-29 08:16:34.711 233728 DEBUG nova.network.neutron [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Successfully created port: 341d2ffb-54ae-4f73-b5fa-028f5d68084c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:16:34 np0005539552 nova_compute[233724]: 2025-11-29 08:16:34.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:35 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:16:35 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3076258244' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:16:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:16:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:35.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:16:35 np0005539552 nova_compute[233724]: 2025-11-29 08:16:35.590 233728 DEBUG nova.compute.manager [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:16:35 np0005539552 nova_compute[233724]: 2025-11-29 08:16:35.592 233728 DEBUG nova.virt.libvirt.driver [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:16:35 np0005539552 nova_compute[233724]: 2025-11-29 08:16:35.592 233728 INFO nova.virt.libvirt.driver [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Creating image(s)#033[00m
Nov 29 03:16:35 np0005539552 nova_compute[233724]: 2025-11-29 08:16:35.592 233728 DEBUG nova.virt.libvirt.driver [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:16:35 np0005539552 nova_compute[233724]: 2025-11-29 08:16:35.593 233728 DEBUG nova.virt.libvirt.driver [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Ensure instance console log exists: /var/lib/nova/instances/ad6070e8-74bc-4df7-9c2d-5da5da175238/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:16:35 np0005539552 nova_compute[233724]: 2025-11-29 08:16:35.593 233728 DEBUG oslo_concurrency.lockutils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:35 np0005539552 nova_compute[233724]: 2025-11-29 08:16:35.593 233728 DEBUG oslo_concurrency.lockutils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:35 np0005539552 nova_compute[233724]: 2025-11-29 08:16:35.593 233728 DEBUG oslo_concurrency.lockutils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:16:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:35.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:16:35 np0005539552 nova_compute[233724]: 2025-11-29 08:16:35.836 233728 DEBUG nova.network.neutron [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Successfully created port: 120af4a6-f9dd-4b7a-8756-ec894c6253de _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:16:36 np0005539552 nova_compute[233724]: 2025-11-29 08:16:36.116 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:36 np0005539552 nova_compute[233724]: 2025-11-29 08:16:36.519 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:36 np0005539552 radosgw[83248]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Nov 29 03:16:36 np0005539552 radosgw[83248]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Nov 29 03:16:36 np0005539552 radosgw[83248]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Nov 29 03:16:36 np0005539552 radosgw[83248]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Nov 29 03:16:36 np0005539552 radosgw[83248]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Nov 29 03:16:36 np0005539552 radosgw[83248]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Nov 29 03:16:36 np0005539552 radosgw[83248]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Nov 29 03:16:36 np0005539552 radosgw[83248]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Nov 29 03:16:36 np0005539552 nova_compute[233724]: 2025-11-29 08:16:36.974 233728 DEBUG nova.network.neutron [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Updating instance_info_cache with network_info: [{"id": "7c7e09b9-26df-4135-9c27-6dc6a737f677", "address": "fa:16:3e:58:4a:d5", "network": {"id": "95b18418-cbf2-49f0-9943-55139251c2d2", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-568647385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3dddf50fa9834b84b8c792205ab0590e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c7e09b9-26", "ovs_interfaceid": "7c7e09b9-26df-4135-9c27-6dc6a737f677", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.000 233728 DEBUG oslo_concurrency.lockutils [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Releasing lock "refresh_cache-ac865ef4-0329-4b54-b51e-3b0fdb0462d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.000 233728 DEBUG nova.compute.manager [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Instance network_info: |[{"id": "7c7e09b9-26df-4135-9c27-6dc6a737f677", "address": "fa:16:3e:58:4a:d5", "network": {"id": "95b18418-cbf2-49f0-9943-55139251c2d2", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-568647385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3dddf50fa9834b84b8c792205ab0590e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c7e09b9-26", "ovs_interfaceid": "7c7e09b9-26df-4135-9c27-6dc6a737f677", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.001 233728 DEBUG oslo_concurrency.lockutils [req-2782b58d-2cf8-4c33-a64d-16d87c34adf1 req-58cbd9f5-bd41-4a89-8e88-6bd6a98dd357 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-ac865ef4-0329-4b54-b51e-3b0fdb0462d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.001 233728 DEBUG nova.network.neutron [req-2782b58d-2cf8-4c33-a64d-16d87c34adf1 req-58cbd9f5-bd41-4a89-8e88-6bd6a98dd357 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Refreshing network info cache for port 7c7e09b9-26df-4135-9c27-6dc6a737f677 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.005 233728 DEBUG nova.virt.libvirt.driver [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Start _get_guest_xml network_info=[{"id": "7c7e09b9-26df-4135-9c27-6dc6a737f677", "address": "fa:16:3e:58:4a:d5", "network": {"id": "95b18418-cbf2-49f0-9943-55139251c2d2", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-568647385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3dddf50fa9834b84b8c792205ab0590e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c7e09b9-26", "ovs_interfaceid": "7c7e09b9-26df-4135-9c27-6dc6a737f677", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.010 233728 WARNING nova.virt.libvirt.driver [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.015 233728 DEBUG nova.virt.libvirt.host [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.015 233728 DEBUG nova.virt.libvirt.host [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.021 233728 DEBUG nova.virt.libvirt.host [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.022 233728 DEBUG nova.virt.libvirt.host [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.023 233728 DEBUG nova.virt.libvirt.driver [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.023 233728 DEBUG nova.virt.hardware [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.024 233728 DEBUG nova.virt.hardware [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.024 233728 DEBUG nova.virt.hardware [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.024 233728 DEBUG nova.virt.hardware [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.024 233728 DEBUG nova.virt.hardware [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.024 233728 DEBUG nova.virt.hardware [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.025 233728 DEBUG nova.virt.hardware [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.025 233728 DEBUG nova.virt.hardware [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.025 233728 DEBUG nova.virt.hardware [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.025 233728 DEBUG nova.virt.hardware [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.025 233728 DEBUG nova.virt.hardware [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.028 233728 DEBUG oslo_concurrency.processutils [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:16:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:37.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:16:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:16:37 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/301539236' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.473 233728 DEBUG oslo_concurrency.processutils [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.500 233728 DEBUG nova.storage.rbd_utils [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] rbd image ac865ef4-0329-4b54-b51e-3b0fdb0462d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.504 233728 DEBUG oslo_concurrency.processutils [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:37.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:16:37 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1226420825' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.919 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.934 233728 DEBUG oslo_concurrency.processutils [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.936 233728 DEBUG nova.virt.libvirt.vif [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:16:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-NoVNCConsoleTestJSON-server-256449347',display_name='tempest-NoVNCConsoleTestJSON-server-256449347',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-novncconsoletestjson-server-256449347',id=103,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3dddf50fa9834b84b8c792205ab0590e',ramdisk_id='',reservation_id='r-cx6w9850',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-NoVNCConsoleTestJSON-2070418454',owner_user_name='tempest-NoVNCConsoleTestJSON-2070
418454-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:16:30Z,user_data=None,user_id='f5ee3792ff1a4aa3ab899edb89244703',uuid=ac865ef4-0329-4b54-b51e-3b0fdb0462d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7c7e09b9-26df-4135-9c27-6dc6a737f677", "address": "fa:16:3e:58:4a:d5", "network": {"id": "95b18418-cbf2-49f0-9943-55139251c2d2", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-568647385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3dddf50fa9834b84b8c792205ab0590e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c7e09b9-26", "ovs_interfaceid": "7c7e09b9-26df-4135-9c27-6dc6a737f677", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.936 233728 DEBUG nova.network.os_vif_util [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Converting VIF {"id": "7c7e09b9-26df-4135-9c27-6dc6a737f677", "address": "fa:16:3e:58:4a:d5", "network": {"id": "95b18418-cbf2-49f0-9943-55139251c2d2", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-568647385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3dddf50fa9834b84b8c792205ab0590e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c7e09b9-26", "ovs_interfaceid": "7c7e09b9-26df-4135-9c27-6dc6a737f677", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.938 233728 DEBUG nova.network.os_vif_util [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:4a:d5,bridge_name='br-int',has_traffic_filtering=True,id=7c7e09b9-26df-4135-9c27-6dc6a737f677,network=Network(95b18418-cbf2-49f0-9943-55139251c2d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c7e09b9-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.940 233728 DEBUG nova.objects.instance [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Lazy-loading 'pci_devices' on Instance uuid ac865ef4-0329-4b54-b51e-3b0fdb0462d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.965 233728 DEBUG nova.virt.libvirt.driver [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:16:37 np0005539552 nova_compute[233724]:  <uuid>ac865ef4-0329-4b54-b51e-3b0fdb0462d4</uuid>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:  <name>instance-00000067</name>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:16:37 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:      <nova:name>tempest-NoVNCConsoleTestJSON-server-256449347</nova:name>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:16:37</nova:creationTime>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:16:37 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:        <nova:user uuid="f5ee3792ff1a4aa3ab899edb89244703">tempest-NoVNCConsoleTestJSON-2070418454-project-member</nova:user>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:        <nova:project uuid="3dddf50fa9834b84b8c792205ab0590e">tempest-NoVNCConsoleTestJSON-2070418454</nova:project>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:        <nova:port uuid="7c7e09b9-26df-4135-9c27-6dc6a737f677">
Nov 29 03:16:37 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:      <entry name="serial">ac865ef4-0329-4b54-b51e-3b0fdb0462d4</entry>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:      <entry name="uuid">ac865ef4-0329-4b54-b51e-3b0fdb0462d4</entry>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:16:37 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/ac865ef4-0329-4b54-b51e-3b0fdb0462d4_disk">
Nov 29 03:16:37 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:16:37 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:16:37 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/ac865ef4-0329-4b54-b51e-3b0fdb0462d4_disk.config">
Nov 29 03:16:37 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:16:37 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:16:37 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:58:4a:d5"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:      <target dev="tap7c7e09b9-26"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:16:37 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/ac865ef4-0329-4b54-b51e-3b0fdb0462d4/console.log" append="off"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:16:37 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:16:37 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:16:37 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:16:37 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:16:37 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.966 233728 DEBUG nova.compute.manager [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Preparing to wait for external event network-vif-plugged-7c7e09b9-26df-4135-9c27-6dc6a737f677 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.966 233728 DEBUG oslo_concurrency.lockutils [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Acquiring lock "ac865ef4-0329-4b54-b51e-3b0fdb0462d4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.966 233728 DEBUG oslo_concurrency.lockutils [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Lock "ac865ef4-0329-4b54-b51e-3b0fdb0462d4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.967 233728 DEBUG oslo_concurrency.lockutils [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Lock "ac865ef4-0329-4b54-b51e-3b0fdb0462d4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.968 233728 DEBUG nova.virt.libvirt.vif [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:16:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-NoVNCConsoleTestJSON-server-256449347',display_name='tempest-NoVNCConsoleTestJSON-server-256449347',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-novncconsoletestjson-server-256449347',id=103,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3dddf50fa9834b84b8c792205ab0590e',ramdisk_id='',reservation_id='r-cx6w9850',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-NoVNCConsoleTestJSON-2070418454',owner_user_name='tempest-NoVNCConsoleTestJSON-2070418454-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:16:30Z,user_data=None,user_id='f5ee3792ff1a4aa3ab899edb89244703',uuid=ac865ef4-0329-4b54-b51e-3b0fdb0462d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7c7e09b9-26df-4135-9c27-6dc6a737f677", "address": "fa:16:3e:58:4a:d5", "network": {"id": "95b18418-cbf2-49f0-9943-55139251c2d2", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-568647385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3dddf50fa9834b84b8c792205ab0590e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c7e09b9-26", "ovs_interfaceid": "7c7e09b9-26df-4135-9c27-6dc6a737f677", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.968 233728 DEBUG nova.network.os_vif_util [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Converting VIF {"id": "7c7e09b9-26df-4135-9c27-6dc6a737f677", "address": "fa:16:3e:58:4a:d5", "network": {"id": "95b18418-cbf2-49f0-9943-55139251c2d2", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-568647385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3dddf50fa9834b84b8c792205ab0590e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c7e09b9-26", "ovs_interfaceid": "7c7e09b9-26df-4135-9c27-6dc6a737f677", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.969 233728 DEBUG nova.network.os_vif_util [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:4a:d5,bridge_name='br-int',has_traffic_filtering=True,id=7c7e09b9-26df-4135-9c27-6dc6a737f677,network=Network(95b18418-cbf2-49f0-9943-55139251c2d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c7e09b9-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.969 233728 DEBUG os_vif [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:4a:d5,bridge_name='br-int',has_traffic_filtering=True,id=7c7e09b9-26df-4135-9c27-6dc6a737f677,network=Network(95b18418-cbf2-49f0-9943-55139251c2d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c7e09b9-26') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.970 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.970 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.971 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.974 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.975 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c7e09b9-26, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.975 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7c7e09b9-26, col_values=(('external_ids', {'iface-id': '7c7e09b9-26df-4135-9c27-6dc6a737f677', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:58:4a:d5', 'vm-uuid': 'ac865ef4-0329-4b54-b51e-3b0fdb0462d4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.976 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:37 np0005539552 NetworkManager[48926]: <info>  [1764404197.9775] manager: (tap7c7e09b9-26): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/187)
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.979 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.983 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:37 np0005539552 nova_compute[233724]: 2025-11-29 08:16:37.983 233728 INFO os_vif [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:4a:d5,bridge_name='br-int',has_traffic_filtering=True,id=7c7e09b9-26df-4135-9c27-6dc6a737f677,network=Network(95b18418-cbf2-49f0-9943-55139251c2d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c7e09b9-26')#033[00m
Nov 29 03:16:38 np0005539552 nova_compute[233724]: 2025-11-29 08:16:38.035 233728 DEBUG nova.virt.libvirt.driver [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:16:38 np0005539552 nova_compute[233724]: 2025-11-29 08:16:38.035 233728 DEBUG nova.virt.libvirt.driver [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:16:38 np0005539552 nova_compute[233724]: 2025-11-29 08:16:38.036 233728 DEBUG nova.virt.libvirt.driver [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] No VIF found with MAC fa:16:3e:58:4a:d5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:16:38 np0005539552 nova_compute[233724]: 2025-11-29 08:16:38.036 233728 INFO nova.virt.libvirt.driver [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Using config drive#033[00m
Nov 29 03:16:38 np0005539552 nova_compute[233724]: 2025-11-29 08:16:38.063 233728 DEBUG nova.storage.rbd_utils [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] rbd image ac865ef4-0329-4b54-b51e-3b0fdb0462d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:38 np0005539552 nova_compute[233724]: 2025-11-29 08:16:38.362 233728 DEBUG nova.network.neutron [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Successfully created port: b25ceb20-79ac-43b0-8487-65bcf31a0a2f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:16:38 np0005539552 nova_compute[233724]: 2025-11-29 08:16:38.668 233728 INFO nova.virt.libvirt.driver [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Creating config drive at /var/lib/nova/instances/ac865ef4-0329-4b54-b51e-3b0fdb0462d4/disk.config#033[00m
Nov 29 03:16:38 np0005539552 nova_compute[233724]: 2025-11-29 08:16:38.673 233728 DEBUG oslo_concurrency.processutils [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ac865ef4-0329-4b54-b51e-3b0fdb0462d4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp82f_0rmm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:38 np0005539552 nova_compute[233724]: 2025-11-29 08:16:38.808 233728 DEBUG oslo_concurrency.processutils [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ac865ef4-0329-4b54-b51e-3b0fdb0462d4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp82f_0rmm" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:38 np0005539552 nova_compute[233724]: 2025-11-29 08:16:38.838 233728 DEBUG nova.storage.rbd_utils [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] rbd image ac865ef4-0329-4b54-b51e-3b0fdb0462d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:38 np0005539552 nova_compute[233724]: 2025-11-29 08:16:38.841 233728 DEBUG oslo_concurrency.processutils [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ac865ef4-0329-4b54-b51e-3b0fdb0462d4/disk.config ac865ef4-0329-4b54-b51e-3b0fdb0462d4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:38 np0005539552 nova_compute[233724]: 2025-11-29 08:16:38.913 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404183.912016, 7dde4b1f-b13a-43ab-b40d-343e3c6e143e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:16:38 np0005539552 nova_compute[233724]: 2025-11-29 08:16:38.913 233728 INFO nova.compute.manager [-] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:16:38 np0005539552 nova_compute[233724]: 2025-11-29 08:16:38.964 233728 DEBUG nova.compute.manager [None req-12b43f67-21f5-4c42-9347-6790c7b8a928 - - - - - -] [instance: 7dde4b1f-b13a-43ab-b40d-343e3c6e143e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:16:38 np0005539552 nova_compute[233724]: 2025-11-29 08:16:38.986 233728 DEBUG oslo_concurrency.processutils [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ac865ef4-0329-4b54-b51e-3b0fdb0462d4/disk.config ac865ef4-0329-4b54-b51e-3b0fdb0462d4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:38 np0005539552 nova_compute[233724]: 2025-11-29 08:16:38.986 233728 INFO nova.virt.libvirt.driver [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Deleting local config drive /var/lib/nova/instances/ac865ef4-0329-4b54-b51e-3b0fdb0462d4/disk.config because it was imported into RBD.#033[00m
Nov 29 03:16:39 np0005539552 kernel: tap7c7e09b9-26: entered promiscuous mode
Nov 29 03:16:39 np0005539552 NetworkManager[48926]: <info>  [1764404199.0492] manager: (tap7c7e09b9-26): new Tun device (/org/freedesktop/NetworkManager/Devices/188)
Nov 29 03:16:39 np0005539552 nova_compute[233724]: 2025-11-29 08:16:39.049 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:16:39Z|00390|binding|INFO|Claiming lport 7c7e09b9-26df-4135-9c27-6dc6a737f677 for this chassis.
Nov 29 03:16:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:16:39Z|00391|binding|INFO|7c7e09b9-26df-4135-9c27-6dc6a737f677: Claiming fa:16:3e:58:4a:d5 10.100.0.14
Nov 29 03:16:39 np0005539552 nova_compute[233724]: 2025-11-29 08:16:39.059 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:39.070 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:4a:d5 10.100.0.14'], port_security=['fa:16:3e:58:4a:d5 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'ac865ef4-0329-4b54-b51e-3b0fdb0462d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-95b18418-cbf2-49f0-9943-55139251c2d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3dddf50fa9834b84b8c792205ab0590e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '039b8f9f-62c3-42c2-bdba-ea726d54a80a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dba5ec84-8854-4fe0-b5ba-f22095f6f059, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=7c7e09b9-26df-4135-9c27-6dc6a737f677) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:39.072 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 7c7e09b9-26df-4135-9c27-6dc6a737f677 in datapath 95b18418-cbf2-49f0-9943-55139251c2d2 bound to our chassis#033[00m
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:39.074 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 95b18418-cbf2-49f0-9943-55139251c2d2#033[00m
Nov 29 03:16:39 np0005539552 systemd-machined[196379]: New machine qemu-42-instance-00000067.
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:39.087 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[7bdc02b7-dffe-4451-b64d-b6ffc33cf1fa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:39.088 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap95b18418-c1 in ovnmeta-95b18418-cbf2-49f0-9943-55139251c2d2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:39.090 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap95b18418-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:39.090 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[cfe81cc4-8fb4-437c-9d03-32d4aadbf346]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:39.090 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[fdef0e17-01a4-4cab-a8dc-6f7b14a2c5ce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:39.101 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[e23c96d2-e4e2-469b-80de-57dfb303b90c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:39 np0005539552 systemd[1]: Started Virtual Machine qemu-42-instance-00000067.
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:39.126 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a107ed85-87d9-46ad-aa18-aaafa62f3376]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:16:39Z|00392|binding|INFO|Setting lport 7c7e09b9-26df-4135-9c27-6dc6a737f677 ovn-installed in OVS
Nov 29 03:16:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:16:39Z|00393|binding|INFO|Setting lport 7c7e09b9-26df-4135-9c27-6dc6a737f677 up in Southbound
Nov 29 03:16:39 np0005539552 nova_compute[233724]: 2025-11-29 08:16:39.132 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:39 np0005539552 systemd-udevd[275475]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:16:39 np0005539552 NetworkManager[48926]: <info>  [1764404199.1455] device (tap7c7e09b9-26): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:16:39 np0005539552 NetworkManager[48926]: <info>  [1764404199.1464] device (tap7c7e09b9-26): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:39.158 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[e1cae818-ef06-42ce-a340-a1d12ca1547c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:39 np0005539552 systemd-udevd[275482]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:39.163 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[1158a8b2-03ae-4d68-8d4d-0d5ffaf33180]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:39 np0005539552 NetworkManager[48926]: <info>  [1764404199.1643] manager: (tap95b18418-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/189)
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:39.197 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[e7d4b942-e0ed-4d5b-b68f-bba3cc2b0620]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:39.200 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[63d5d4a4-dce8-477f-baa1-15299749e1dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:39 np0005539552 NetworkManager[48926]: <info>  [1764404199.2240] device (tap95b18418-c0): carrier: link connected
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:39.230 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[df83de34-3cd3-45d3-ad7d-410e4384ee55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:39.247 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a7f39774-f9ad-47cb-9017-c61dd2e8ccb5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap95b18418-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:91:63:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 118], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 724491, 'reachable_time': 30537, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275505, 'error': None, 'target': 'ovnmeta-95b18418-cbf2-49f0-9943-55139251c2d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:39.260 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b959eeeb-4d14-4450-88b8-42815ea18120]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe91:63b4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 724491, 'tstamp': 724491}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275506, 'error': None, 'target': 'ovnmeta-95b18418-cbf2-49f0-9943-55139251c2d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:39.279 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a063ac1e-648d-453d-bc63-6a7ceb9b0130]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap95b18418-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:91:63:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 180, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 180, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 118], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 724491, 'reachable_time': 30537, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 275507, 'error': None, 'target': 'ovnmeta-95b18418-cbf2-49f0-9943-55139251c2d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:39.310 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[1c63092a-2c3c-47fd-a3d5-0767f25709a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:39.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:39.363 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[35129920-596d-4d68-a3d6-5bffb66fc881]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:39.364 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap95b18418-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:39.365 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:39.365 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap95b18418-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:16:39 np0005539552 NetworkManager[48926]: <info>  [1764404199.3673] manager: (tap95b18418-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/190)
Nov 29 03:16:39 np0005539552 nova_compute[233724]: 2025-11-29 08:16:39.366 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:39 np0005539552 kernel: tap95b18418-c0: entered promiscuous mode
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:39.369 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap95b18418-c0, col_values=(('external_ids', {'iface-id': '9e4240a5-9efe-4407-ae96-38c5bda6b85a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:16:39 np0005539552 nova_compute[233724]: 2025-11-29 08:16:39.370 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:16:39Z|00394|binding|INFO|Releasing lport 9e4240a5-9efe-4407-ae96-38c5bda6b85a from this chassis (sb_readonly=0)
Nov 29 03:16:39 np0005539552 nova_compute[233724]: 2025-11-29 08:16:39.384 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:39.385 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/95b18418-cbf2-49f0-9943-55139251c2d2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/95b18418-cbf2-49f0-9943-55139251c2d2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:39.386 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[41e92698-3ad5-49d0-a038-58732a00cb9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:39.387 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-95b18418-cbf2-49f0-9943-55139251c2d2
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/95b18418-cbf2-49f0-9943-55139251c2d2.pid.haproxy
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 95b18418-cbf2-49f0-9943-55139251c2d2
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:16:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:39.388 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-95b18418-cbf2-49f0-9943-55139251c2d2', 'env', 'PROCESS_TAG=haproxy-95b18418-cbf2-49f0-9943-55139251c2d2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/95b18418-cbf2-49f0-9943-55139251c2d2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:16:39 np0005539552 nova_compute[233724]: 2025-11-29 08:16:39.561 233728 DEBUG nova.compute.manager [req-53c6c459-1b22-40f2-aba2-df6ab91f2a56 req-ca361c97-6955-4002-88b2-81cb24b14195 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Received event network-vif-plugged-7c7e09b9-26df-4135-9c27-6dc6a737f677 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:39 np0005539552 nova_compute[233724]: 2025-11-29 08:16:39.561 233728 DEBUG oslo_concurrency.lockutils [req-53c6c459-1b22-40f2-aba2-df6ab91f2a56 req-ca361c97-6955-4002-88b2-81cb24b14195 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ac865ef4-0329-4b54-b51e-3b0fdb0462d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:39 np0005539552 nova_compute[233724]: 2025-11-29 08:16:39.561 233728 DEBUG oslo_concurrency.lockutils [req-53c6c459-1b22-40f2-aba2-df6ab91f2a56 req-ca361c97-6955-4002-88b2-81cb24b14195 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ac865ef4-0329-4b54-b51e-3b0fdb0462d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:39 np0005539552 nova_compute[233724]: 2025-11-29 08:16:39.561 233728 DEBUG oslo_concurrency.lockutils [req-53c6c459-1b22-40f2-aba2-df6ab91f2a56 req-ca361c97-6955-4002-88b2-81cb24b14195 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ac865ef4-0329-4b54-b51e-3b0fdb0462d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:39 np0005539552 nova_compute[233724]: 2025-11-29 08:16:39.562 233728 DEBUG nova.compute.manager [req-53c6c459-1b22-40f2-aba2-df6ab91f2a56 req-ca361c97-6955-4002-88b2-81cb24b14195 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Processing event network-vif-plugged-7c7e09b9-26df-4135-9c27-6dc6a737f677 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:16:39 np0005539552 nova_compute[233724]: 2025-11-29 08:16:39.634 233728 DEBUG nova.network.neutron [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Successfully updated port: 082c8475-1c86-43c9-b68c-dafbd502311e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:16:39 np0005539552 nova_compute[233724]: 2025-11-29 08:16:39.806 233728 DEBUG nova.compute.manager [req-7f9d2338-3b19-4d5a-8c9a-6c62e54ea5fe req-8c34bbcf-1cfb-472f-a5d0-44929073d1ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received event network-changed-082c8475-1c86-43c9-b68c-dafbd502311e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:39 np0005539552 nova_compute[233724]: 2025-11-29 08:16:39.806 233728 DEBUG nova.compute.manager [req-7f9d2338-3b19-4d5a-8c9a-6c62e54ea5fe req-8c34bbcf-1cfb-472f-a5d0-44929073d1ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Refreshing instance network info cache due to event network-changed-082c8475-1c86-43c9-b68c-dafbd502311e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:16:39 np0005539552 nova_compute[233724]: 2025-11-29 08:16:39.806 233728 DEBUG oslo_concurrency.lockutils [req-7f9d2338-3b19-4d5a-8c9a-6c62e54ea5fe req-8c34bbcf-1cfb-472f-a5d0-44929073d1ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-ad6070e8-74bc-4df7-9c2d-5da5da175238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:16:39 np0005539552 nova_compute[233724]: 2025-11-29 08:16:39.807 233728 DEBUG oslo_concurrency.lockutils [req-7f9d2338-3b19-4d5a-8c9a-6c62e54ea5fe req-8c34bbcf-1cfb-472f-a5d0-44929073d1ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-ad6070e8-74bc-4df7-9c2d-5da5da175238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:16:39 np0005539552 nova_compute[233724]: 2025-11-29 08:16:39.807 233728 DEBUG nova.network.neutron [req-7f9d2338-3b19-4d5a-8c9a-6c62e54ea5fe req-8c34bbcf-1cfb-472f-a5d0-44929073d1ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Refreshing network info cache for port 082c8475-1c86-43c9-b68c-dafbd502311e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:16:39 np0005539552 podman[275553]: 2025-11-29 08:16:39.712981372 +0000 UTC m=+0.022929467 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:16:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:39.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:39 np0005539552 podman[275553]: 2025-11-29 08:16:39.820798008 +0000 UTC m=+0.130746103 container create 683769ae2e3c36b4548891439198d434dcd0a88cbdaf135c84a6166200077962 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-95b18418-cbf2-49f0-9943-55139251c2d2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Nov 29 03:16:39 np0005539552 systemd[1]: Started libpod-conmon-683769ae2e3c36b4548891439198d434dcd0a88cbdaf135c84a6166200077962.scope.
Nov 29 03:16:39 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:16:39 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89a9fbf8f7d7aa9ea1ff9c426d2c16cad5d829b3f8d829668534c9b9255180ab/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:16:39 np0005539552 podman[275553]: 2025-11-29 08:16:39.903368947 +0000 UTC m=+0.213317062 container init 683769ae2e3c36b4548891439198d434dcd0a88cbdaf135c84a6166200077962 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-95b18418-cbf2-49f0-9943-55139251c2d2, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 03:16:39 np0005539552 podman[275553]: 2025-11-29 08:16:39.909861101 +0000 UTC m=+0.219809196 container start 683769ae2e3c36b4548891439198d434dcd0a88cbdaf135c84a6166200077962 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-95b18418-cbf2-49f0-9943-55139251c2d2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:16:39 np0005539552 neutron-haproxy-ovnmeta-95b18418-cbf2-49f0-9943-55139251c2d2[275588]: [NOTICE]   (275600) : New worker (275602) forked
Nov 29 03:16:39 np0005539552 neutron-haproxy-ovnmeta-95b18418-cbf2-49f0-9943-55139251c2d2[275588]: [NOTICE]   (275600) : Loading success.
Nov 29 03:16:39 np0005539552 nova_compute[233724]: 2025-11-29 08:16:39.987 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404199.9864306, ac865ef4-0329-4b54-b51e-3b0fdb0462d4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:16:39 np0005539552 nova_compute[233724]: 2025-11-29 08:16:39.987 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] VM Started (Lifecycle Event)
Nov 29 03:16:39 np0005539552 nova_compute[233724]: 2025-11-29 08:16:39.989 233728 DEBUG nova.compute.manager [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:16:39 np0005539552 nova_compute[233724]: 2025-11-29 08:16:39.992 233728 DEBUG nova.virt.libvirt.driver [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:16:39 np0005539552 nova_compute[233724]: 2025-11-29 08:16:39.995 233728 INFO nova.virt.libvirt.driver [-] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Instance spawned successfully.
Nov 29 03:16:39 np0005539552 nova_compute[233724]: 2025-11-29 08:16:39.995 233728 DEBUG nova.virt.libvirt.driver [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:16:40 np0005539552 nova_compute[233724]: 2025-11-29 08:16:40.029 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:16:40 np0005539552 nova_compute[233724]: 2025-11-29 08:16:40.035 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:16:40 np0005539552 nova_compute[233724]: 2025-11-29 08:16:40.041 233728 DEBUG nova.virt.libvirt.driver [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:16:40 np0005539552 nova_compute[233724]: 2025-11-29 08:16:40.042 233728 DEBUG nova.virt.libvirt.driver [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:16:40 np0005539552 nova_compute[233724]: 2025-11-29 08:16:40.042 233728 DEBUG nova.virt.libvirt.driver [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:16:40 np0005539552 nova_compute[233724]: 2025-11-29 08:16:40.043 233728 DEBUG nova.virt.libvirt.driver [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:16:40 np0005539552 nova_compute[233724]: 2025-11-29 08:16:40.043 233728 DEBUG nova.virt.libvirt.driver [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:16:40 np0005539552 nova_compute[233724]: 2025-11-29 08:16:40.044 233728 DEBUG nova.virt.libvirt.driver [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:16:40 np0005539552 nova_compute[233724]: 2025-11-29 08:16:40.050 233728 DEBUG nova.network.neutron [req-2782b58d-2cf8-4c33-a64d-16d87c34adf1 req-58cbd9f5-bd41-4a89-8e88-6bd6a98dd357 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Updated VIF entry in instance network info cache for port 7c7e09b9-26df-4135-9c27-6dc6a737f677. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:16:40 np0005539552 nova_compute[233724]: 2025-11-29 08:16:40.051 233728 DEBUG nova.network.neutron [req-2782b58d-2cf8-4c33-a64d-16d87c34adf1 req-58cbd9f5-bd41-4a89-8e88-6bd6a98dd357 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Updating instance_info_cache with network_info: [{"id": "7c7e09b9-26df-4135-9c27-6dc6a737f677", "address": "fa:16:3e:58:4a:d5", "network": {"id": "95b18418-cbf2-49f0-9943-55139251c2d2", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-568647385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3dddf50fa9834b84b8c792205ab0590e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c7e09b9-26", "ovs_interfaceid": "7c7e09b9-26df-4135-9c27-6dc6a737f677", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:16:40 np0005539552 nova_compute[233724]: 2025-11-29 08:16:40.091 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:16:40 np0005539552 nova_compute[233724]: 2025-11-29 08:16:40.092 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404199.9865453, ac865ef4-0329-4b54-b51e-3b0fdb0462d4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:16:40 np0005539552 nova_compute[233724]: 2025-11-29 08:16:40.092 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] VM Paused (Lifecycle Event)
Nov 29 03:16:40 np0005539552 nova_compute[233724]: 2025-11-29 08:16:40.094 233728 DEBUG oslo_concurrency.lockutils [req-2782b58d-2cf8-4c33-a64d-16d87c34adf1 req-58cbd9f5-bd41-4a89-8e88-6bd6a98dd357 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-ac865ef4-0329-4b54-b51e-3b0fdb0462d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:16:40 np0005539552 nova_compute[233724]: 2025-11-29 08:16:40.113 233728 DEBUG nova.network.neutron [req-7f9d2338-3b19-4d5a-8c9a-6c62e54ea5fe req-8c34bbcf-1cfb-472f-a5d0-44929073d1ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:16:40 np0005539552 nova_compute[233724]: 2025-11-29 08:16:40.133 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:16:40 np0005539552 nova_compute[233724]: 2025-11-29 08:16:40.136 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404199.9911015, ac865ef4-0329-4b54-b51e-3b0fdb0462d4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:16:40 np0005539552 nova_compute[233724]: 2025-11-29 08:16:40.137 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] VM Resumed (Lifecycle Event)
Nov 29 03:16:40 np0005539552 nova_compute[233724]: 2025-11-29 08:16:40.190 233728 INFO nova.compute.manager [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Took 9.12 seconds to spawn the instance on the hypervisor.
Nov 29 03:16:40 np0005539552 nova_compute[233724]: 2025-11-29 08:16:40.190 233728 DEBUG nova.compute.manager [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:16:40 np0005539552 nova_compute[233724]: 2025-11-29 08:16:40.254 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:16:40 np0005539552 nova_compute[233724]: 2025-11-29 08:16:40.257 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:16:40 np0005539552 nova_compute[233724]: 2025-11-29 08:16:40.309 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:16:40 np0005539552 nova_compute[233724]: 2025-11-29 08:16:40.322 233728 INFO nova.compute.manager [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Took 11.24 seconds to build instance.
Nov 29 03:16:40 np0005539552 nova_compute[233724]: 2025-11-29 08:16:40.350 233728 DEBUG oslo_concurrency.lockutils [None req-432ae445-71d3-4df4-81e3-b354a5f36023 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Lock "ac865ef4-0329-4b54-b51e-3b0fdb0462d4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.383s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:16:41 np0005539552 nova_compute[233724]: 2025-11-29 08:16:41.117 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:16:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:41.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:41 np0005539552 nova_compute[233724]: 2025-11-29 08:16:41.398 233728 DEBUG nova.network.neutron [req-7f9d2338-3b19-4d5a-8c9a-6c62e54ea5fe req-8c34bbcf-1cfb-472f-a5d0-44929073d1ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:16:41 np0005539552 nova_compute[233724]: 2025-11-29 08:16:41.418 233728 DEBUG oslo_concurrency.lockutils [req-7f9d2338-3b19-4d5a-8c9a-6c62e54ea5fe req-8c34bbcf-1cfb-472f-a5d0-44929073d1ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-ad6070e8-74bc-4df7-9c2d-5da5da175238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:16:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:41.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:41 np0005539552 nova_compute[233724]: 2025-11-29 08:16:41.836 233728 DEBUG nova.compute.manager [req-e53a533b-c15a-4ffb-8dd2-acb40e711359 req-34905b36-83b1-44b4-b3fd-0ff23d9225e1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Received event network-vif-plugged-7c7e09b9-26df-4135-9c27-6dc6a737f677 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:16:41 np0005539552 nova_compute[233724]: 2025-11-29 08:16:41.836 233728 DEBUG oslo_concurrency.lockutils [req-e53a533b-c15a-4ffb-8dd2-acb40e711359 req-34905b36-83b1-44b4-b3fd-0ff23d9225e1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ac865ef4-0329-4b54-b51e-3b0fdb0462d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:16:41 np0005539552 nova_compute[233724]: 2025-11-29 08:16:41.837 233728 DEBUG oslo_concurrency.lockutils [req-e53a533b-c15a-4ffb-8dd2-acb40e711359 req-34905b36-83b1-44b4-b3fd-0ff23d9225e1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ac865ef4-0329-4b54-b51e-3b0fdb0462d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:16:41 np0005539552 nova_compute[233724]: 2025-11-29 08:16:41.837 233728 DEBUG oslo_concurrency.lockutils [req-e53a533b-c15a-4ffb-8dd2-acb40e711359 req-34905b36-83b1-44b4-b3fd-0ff23d9225e1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ac865ef4-0329-4b54-b51e-3b0fdb0462d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:16:41 np0005539552 nova_compute[233724]: 2025-11-29 08:16:41.837 233728 DEBUG nova.compute.manager [req-e53a533b-c15a-4ffb-8dd2-acb40e711359 req-34905b36-83b1-44b4-b3fd-0ff23d9225e1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] No waiting events found dispatching network-vif-plugged-7c7e09b9-26df-4135-9c27-6dc6a737f677 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:16:41 np0005539552 nova_compute[233724]: 2025-11-29 08:16:41.838 233728 WARNING nova.compute.manager [req-e53a533b-c15a-4ffb-8dd2-acb40e711359 req-34905b36-83b1-44b4-b3fd-0ff23d9225e1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Received unexpected event network-vif-plugged-7c7e09b9-26df-4135-9c27-6dc6a737f677 for instance with vm_state active and task_state None.
Nov 29 03:16:42 np0005539552 nova_compute[233724]: 2025-11-29 08:16:42.615 233728 DEBUG nova.network.neutron [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Successfully updated port: eb187f09-9e48-4b8c-9111-744004bcec05 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:16:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:43 np0005539552 nova_compute[233724]: 2025-11-29 08:16:43.018 233728 DEBUG nova.compute.manager [req-0a641126-03ac-4596-9bfe-28e979d40c69 req-89380eaf-016c-481a-a131-edeb60bd9310 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received event network-changed-eb187f09-9e48-4b8c-9111-744004bcec05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:16:43 np0005539552 nova_compute[233724]: 2025-11-29 08:16:43.018 233728 DEBUG nova.compute.manager [req-0a641126-03ac-4596-9bfe-28e979d40c69 req-89380eaf-016c-481a-a131-edeb60bd9310 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Refreshing instance network info cache due to event network-changed-eb187f09-9e48-4b8c-9111-744004bcec05. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:16:43 np0005539552 nova_compute[233724]: 2025-11-29 08:16:43.019 233728 DEBUG oslo_concurrency.lockutils [req-0a641126-03ac-4596-9bfe-28e979d40c69 req-89380eaf-016c-481a-a131-edeb60bd9310 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-ad6070e8-74bc-4df7-9c2d-5da5da175238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:16:43 np0005539552 nova_compute[233724]: 2025-11-29 08:16:43.019 233728 DEBUG oslo_concurrency.lockutils [req-0a641126-03ac-4596-9bfe-28e979d40c69 req-89380eaf-016c-481a-a131-edeb60bd9310 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-ad6070e8-74bc-4df7-9c2d-5da5da175238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:16:43 np0005539552 nova_compute[233724]: 2025-11-29 08:16:43.019 233728 DEBUG nova.network.neutron [req-0a641126-03ac-4596-9bfe-28e979d40c69 req-89380eaf-016c-481a-a131-edeb60bd9310 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Refreshing network info cache for port eb187f09-9e48-4b8c-9111-744004bcec05 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:16:43 np0005539552 nova_compute[233724]: 2025-11-29 08:16:43.020 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:16:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 03:16:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:43.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 03:16:43 np0005539552 nova_compute[233724]: 2025-11-29 08:16:43.362 233728 DEBUG nova.network.neutron [req-0a641126-03ac-4596-9bfe-28e979d40c69 req-89380eaf-016c-481a-a131-edeb60bd9310 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:16:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:43.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:44 np0005539552 nova_compute[233724]: 2025-11-29 08:16:44.404 233728 DEBUG nova.compute.manager [None req-c4fed2bb-7c71-408f-9ff8-a53350310a7b f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Getting vnc console get_vnc_console /usr/lib/python3.9/site-packages/nova/compute/manager.py:7196
Nov 29 03:16:44 np0005539552 nova_compute[233724]: 2025-11-29 08:16:44.593 233728 DEBUG nova.network.neutron [req-0a641126-03ac-4596-9bfe-28e979d40c69 req-89380eaf-016c-481a-a131-edeb60bd9310 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:16:44 np0005539552 nova_compute[233724]: 2025-11-29 08:16:44.622 233728 DEBUG oslo_concurrency.lockutils [req-0a641126-03ac-4596-9bfe-28e979d40c69 req-89380eaf-016c-481a-a131-edeb60bd9310 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-ad6070e8-74bc-4df7-9c2d-5da5da175238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:16:44 np0005539552 nova_compute[233724]: 2025-11-29 08:16:44.829 233728 DEBUG nova.network.neutron [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Successfully updated port: 580c06c8-3541-4c20-b8a7-c02c5f9efe4b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:16:45 np0005539552 nova_compute[233724]: 2025-11-29 08:16:45.170 233728 DEBUG nova.compute.manager [None req-507a5d58-1a1c-4c95-8dde-2556b1c30840 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Getting vnc console get_vnc_console /usr/lib/python3.9/site-packages/nova/compute/manager.py:7196
Nov 29 03:16:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:16:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:45.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:16:45 np0005539552 nova_compute[233724]: 2025-11-29 08:16:45.399 233728 DEBUG nova.compute.manager [req-cea8ffc1-a30d-456f-9230-326f891aa6d5 req-15cb7ce3-8ddf-4cae-adcd-bbba969d2671 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received event network-changed-580c06c8-3541-4c20-b8a7-c02c5f9efe4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:16:45 np0005539552 nova_compute[233724]: 2025-11-29 08:16:45.400 233728 DEBUG nova.compute.manager [req-cea8ffc1-a30d-456f-9230-326f891aa6d5 req-15cb7ce3-8ddf-4cae-adcd-bbba969d2671 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Refreshing instance network info cache due to event network-changed-580c06c8-3541-4c20-b8a7-c02c5f9efe4b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:16:45 np0005539552 nova_compute[233724]: 2025-11-29 08:16:45.400 233728 DEBUG oslo_concurrency.lockutils [req-cea8ffc1-a30d-456f-9230-326f891aa6d5 req-15cb7ce3-8ddf-4cae-adcd-bbba969d2671 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-ad6070e8-74bc-4df7-9c2d-5da5da175238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:16:45 np0005539552 nova_compute[233724]: 2025-11-29 08:16:45.401 233728 DEBUG oslo_concurrency.lockutils [req-cea8ffc1-a30d-456f-9230-326f891aa6d5 req-15cb7ce3-8ddf-4cae-adcd-bbba969d2671 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-ad6070e8-74bc-4df7-9c2d-5da5da175238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:16:45 np0005539552 nova_compute[233724]: 2025-11-29 08:16:45.401 233728 DEBUG nova.network.neutron [req-cea8ffc1-a30d-456f-9230-326f891aa6d5 req-15cb7ce3-8ddf-4cae-adcd-bbba969d2671 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Refreshing network info cache for port 580c06c8-3541-4c20-b8a7-c02c5f9efe4b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:16:45 np0005539552 nova_compute[233724]: 2025-11-29 08:16:45.705 233728 DEBUG oslo_concurrency.lockutils [None req-51fc6459-0c2a-46f0-8cfd-076237b937b5 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Acquiring lock "ac865ef4-0329-4b54-b51e-3b0fdb0462d4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:16:45 np0005539552 nova_compute[233724]: 2025-11-29 08:16:45.706 233728 DEBUG oslo_concurrency.lockutils [None req-51fc6459-0c2a-46f0-8cfd-076237b937b5 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Lock "ac865ef4-0329-4b54-b51e-3b0fdb0462d4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:16:45 np0005539552 nova_compute[233724]: 2025-11-29 08:16:45.706 233728 DEBUG oslo_concurrency.lockutils [None req-51fc6459-0c2a-46f0-8cfd-076237b937b5 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Acquiring lock "ac865ef4-0329-4b54-b51e-3b0fdb0462d4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:16:45 np0005539552 nova_compute[233724]: 2025-11-29 08:16:45.706 233728 DEBUG oslo_concurrency.lockutils [None req-51fc6459-0c2a-46f0-8cfd-076237b937b5 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Lock "ac865ef4-0329-4b54-b51e-3b0fdb0462d4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:16:45 np0005539552 nova_compute[233724]: 2025-11-29 08:16:45.706 233728 DEBUG oslo_concurrency.lockutils [None req-51fc6459-0c2a-46f0-8cfd-076237b937b5 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Lock "ac865ef4-0329-4b54-b51e-3b0fdb0462d4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:16:45 np0005539552 nova_compute[233724]: 2025-11-29 08:16:45.707 233728 INFO nova.compute.manager [None req-51fc6459-0c2a-46f0-8cfd-076237b937b5 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Terminating instance
Nov 29 03:16:45 np0005539552 nova_compute[233724]: 2025-11-29 08:16:45.708 233728 DEBUG nova.compute.manager [None req-51fc6459-0c2a-46f0-8cfd-076237b937b5 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 03:16:45 np0005539552 kernel: tap7c7e09b9-26 (unregistering): left promiscuous mode
Nov 29 03:16:45 np0005539552 nova_compute[233724]: 2025-11-29 08:16:45.751 233728 DEBUG nova.network.neutron [req-cea8ffc1-a30d-456f-9230-326f891aa6d5 req-15cb7ce3-8ddf-4cae-adcd-bbba969d2671 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:16:45 np0005539552 NetworkManager[48926]: <info>  [1764404205.7527] device (tap7c7e09b9-26): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:16:45 np0005539552 nova_compute[233724]: 2025-11-29 08:16:45.765 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:45 np0005539552 ovn_controller[133798]: 2025-11-29T08:16:45Z|00395|binding|INFO|Releasing lport 7c7e09b9-26df-4135-9c27-6dc6a737f677 from this chassis (sb_readonly=0)
Nov 29 03:16:45 np0005539552 ovn_controller[133798]: 2025-11-29T08:16:45Z|00396|binding|INFO|Setting lport 7c7e09b9-26df-4135-9c27-6dc6a737f677 down in Southbound
Nov 29 03:16:45 np0005539552 ovn_controller[133798]: 2025-11-29T08:16:45Z|00397|binding|INFO|Removing iface tap7c7e09b9-26 ovn-installed in OVS
Nov 29 03:16:45 np0005539552 nova_compute[233724]: 2025-11-29 08:16:45.767 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:45.783 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:4a:d5 10.100.0.14'], port_security=['fa:16:3e:58:4a:d5 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'ac865ef4-0329-4b54-b51e-3b0fdb0462d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-95b18418-cbf2-49f0-9943-55139251c2d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3dddf50fa9834b84b8c792205ab0590e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '039b8f9f-62c3-42c2-bdba-ea726d54a80a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dba5ec84-8854-4fe0-b5ba-f22095f6f059, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=7c7e09b9-26df-4135-9c27-6dc6a737f677) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:16:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:45.785 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 7c7e09b9-26df-4135-9c27-6dc6a737f677 in datapath 95b18418-cbf2-49f0-9943-55139251c2d2 unbound from our chassis#033[00m
Nov 29 03:16:45 np0005539552 nova_compute[233724]: 2025-11-29 08:16:45.786 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:45.787 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 95b18418-cbf2-49f0-9943-55139251c2d2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:16:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:45.788 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[1ea3149f-b1ac-4b37-8ee3-141a6b66d8e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:45.788 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-95b18418-cbf2-49f0-9943-55139251c2d2 namespace which is not needed anymore#033[00m
Nov 29 03:16:45 np0005539552 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000067.scope: Deactivated successfully.
Nov 29 03:16:45 np0005539552 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000067.scope: Consumed 6.582s CPU time.
Nov 29 03:16:45 np0005539552 systemd-machined[196379]: Machine qemu-42-instance-00000067 terminated.
Nov 29 03:16:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:45.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:45 np0005539552 neutron-haproxy-ovnmeta-95b18418-cbf2-49f0-9943-55139251c2d2[275588]: [NOTICE]   (275600) : haproxy version is 2.8.14-c23fe91
Nov 29 03:16:45 np0005539552 neutron-haproxy-ovnmeta-95b18418-cbf2-49f0-9943-55139251c2d2[275588]: [NOTICE]   (275600) : path to executable is /usr/sbin/haproxy
Nov 29 03:16:45 np0005539552 neutron-haproxy-ovnmeta-95b18418-cbf2-49f0-9943-55139251c2d2[275588]: [WARNING]  (275600) : Exiting Master process...
Nov 29 03:16:45 np0005539552 neutron-haproxy-ovnmeta-95b18418-cbf2-49f0-9943-55139251c2d2[275588]: [ALERT]    (275600) : Current worker (275602) exited with code 143 (Terminated)
Nov 29 03:16:45 np0005539552 neutron-haproxy-ovnmeta-95b18418-cbf2-49f0-9943-55139251c2d2[275588]: [WARNING]  (275600) : All workers exited. Exiting... (0)
Nov 29 03:16:45 np0005539552 systemd[1]: libpod-683769ae2e3c36b4548891439198d434dcd0a88cbdaf135c84a6166200077962.scope: Deactivated successfully.
Nov 29 03:16:45 np0005539552 podman[275639]: 2025-11-29 08:16:45.930155188 +0000 UTC m=+0.049317036 container died 683769ae2e3c36b4548891439198d434dcd0a88cbdaf135c84a6166200077962 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-95b18418-cbf2-49f0-9943-55139251c2d2, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0)
Nov 29 03:16:45 np0005539552 nova_compute[233724]: 2025-11-29 08:16:45.949 233728 INFO nova.virt.libvirt.driver [-] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Instance destroyed successfully.#033[00m
Nov 29 03:16:45 np0005539552 nova_compute[233724]: 2025-11-29 08:16:45.949 233728 DEBUG nova.objects.instance [None req-51fc6459-0c2a-46f0-8cfd-076237b937b5 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Lazy-loading 'resources' on Instance uuid ac865ef4-0329-4b54-b51e-3b0fdb0462d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:16:45 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-683769ae2e3c36b4548891439198d434dcd0a88cbdaf135c84a6166200077962-userdata-shm.mount: Deactivated successfully.
Nov 29 03:16:45 np0005539552 systemd[1]: var-lib-containers-storage-overlay-89a9fbf8f7d7aa9ea1ff9c426d2c16cad5d829b3f8d829668534c9b9255180ab-merged.mount: Deactivated successfully.
Nov 29 03:16:45 np0005539552 podman[275639]: 2025-11-29 08:16:45.962847436 +0000 UTC m=+0.082009264 container cleanup 683769ae2e3c36b4548891439198d434dcd0a88cbdaf135c84a6166200077962 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-95b18418-cbf2-49f0-9943-55139251c2d2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:16:45 np0005539552 nova_compute[233724]: 2025-11-29 08:16:45.968 233728 DEBUG nova.virt.libvirt.vif [None req-51fc6459-0c2a-46f0-8cfd-076237b937b5 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:16:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-NoVNCConsoleTestJSON-server-256449347',display_name='tempest-NoVNCConsoleTestJSON-server-256449347',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-novncconsoletestjson-server-256449347',id=103,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:16:40Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3dddf50fa9834b84b8c792205ab0590e',ramdisk_id='',reservation_id='r-cx6w9850',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_d
isk='1',image_min_ram='0',owner_project_name='tempest-NoVNCConsoleTestJSON-2070418454',owner_user_name='tempest-NoVNCConsoleTestJSON-2070418454-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:16:40Z,user_data=None,user_id='f5ee3792ff1a4aa3ab899edb89244703',uuid=ac865ef4-0329-4b54-b51e-3b0fdb0462d4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7c7e09b9-26df-4135-9c27-6dc6a737f677", "address": "fa:16:3e:58:4a:d5", "network": {"id": "95b18418-cbf2-49f0-9943-55139251c2d2", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-568647385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3dddf50fa9834b84b8c792205ab0590e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c7e09b9-26", "ovs_interfaceid": "7c7e09b9-26df-4135-9c27-6dc6a737f677", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:16:45 np0005539552 nova_compute[233724]: 2025-11-29 08:16:45.968 233728 DEBUG nova.network.os_vif_util [None req-51fc6459-0c2a-46f0-8cfd-076237b937b5 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Converting VIF {"id": "7c7e09b9-26df-4135-9c27-6dc6a737f677", "address": "fa:16:3e:58:4a:d5", "network": {"id": "95b18418-cbf2-49f0-9943-55139251c2d2", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-568647385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3dddf50fa9834b84b8c792205ab0590e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c7e09b9-26", "ovs_interfaceid": "7c7e09b9-26df-4135-9c27-6dc6a737f677", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:16:45 np0005539552 nova_compute[233724]: 2025-11-29 08:16:45.969 233728 DEBUG nova.network.os_vif_util [None req-51fc6459-0c2a-46f0-8cfd-076237b937b5 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:4a:d5,bridge_name='br-int',has_traffic_filtering=True,id=7c7e09b9-26df-4135-9c27-6dc6a737f677,network=Network(95b18418-cbf2-49f0-9943-55139251c2d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c7e09b9-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:16:45 np0005539552 nova_compute[233724]: 2025-11-29 08:16:45.969 233728 DEBUG os_vif [None req-51fc6459-0c2a-46f0-8cfd-076237b937b5 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:4a:d5,bridge_name='br-int',has_traffic_filtering=True,id=7c7e09b9-26df-4135-9c27-6dc6a737f677,network=Network(95b18418-cbf2-49f0-9943-55139251c2d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c7e09b9-26') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:16:45 np0005539552 nova_compute[233724]: 2025-11-29 08:16:45.971 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:45 np0005539552 nova_compute[233724]: 2025-11-29 08:16:45.971 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c7e09b9-26, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:16:45 np0005539552 nova_compute[233724]: 2025-11-29 08:16:45.973 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:45 np0005539552 nova_compute[233724]: 2025-11-29 08:16:45.977 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:16:45 np0005539552 nova_compute[233724]: 2025-11-29 08:16:45.979 233728 INFO os_vif [None req-51fc6459-0c2a-46f0-8cfd-076237b937b5 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:4a:d5,bridge_name='br-int',has_traffic_filtering=True,id=7c7e09b9-26df-4135-9c27-6dc6a737f677,network=Network(95b18418-cbf2-49f0-9943-55139251c2d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c7e09b9-26')#033[00m
Nov 29 03:16:45 np0005539552 systemd[1]: libpod-conmon-683769ae2e3c36b4548891439198d434dcd0a88cbdaf135c84a6166200077962.scope: Deactivated successfully.
Nov 29 03:16:46 np0005539552 podman[275682]: 2025-11-29 08:16:46.032492447 +0000 UTC m=+0.043591512 container remove 683769ae2e3c36b4548891439198d434dcd0a88cbdaf135c84a6166200077962 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-95b18418-cbf2-49f0-9943-55139251c2d2, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:16:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:46.039 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b55335a4-57f7-4574-9640-edc8ed1bd637]: (4, ('Sat Nov 29 08:16:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-95b18418-cbf2-49f0-9943-55139251c2d2 (683769ae2e3c36b4548891439198d434dcd0a88cbdaf135c84a6166200077962)\n683769ae2e3c36b4548891439198d434dcd0a88cbdaf135c84a6166200077962\nSat Nov 29 08:16:45 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-95b18418-cbf2-49f0-9943-55139251c2d2 (683769ae2e3c36b4548891439198d434dcd0a88cbdaf135c84a6166200077962)\n683769ae2e3c36b4548891439198d434dcd0a88cbdaf135c84a6166200077962\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:46.042 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[9e6e1c55-900c-41ad-bc1c-cf34f8d5c5ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:46.043 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap95b18418-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:16:46 np0005539552 kernel: tap95b18418-c0: left promiscuous mode
Nov 29 03:16:46 np0005539552 nova_compute[233724]: 2025-11-29 08:16:46.046 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:46 np0005539552 nova_compute[233724]: 2025-11-29 08:16:46.062 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:46.067 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[1d9461c6-cb79-4595-8aaa-9cf28cf67567]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:46.080 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a2ca3c21-b124-4415-818c-75e70a3cc041]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:46.081 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0af99f08-5c0c-4027-834a-d11dfe7dd9c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:46.096 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ce609861-fc34-4b18-a8a0-8ccba8db3ea9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 724484, 'reachable_time': 41634, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275713, 'error': None, 'target': 'ovnmeta-95b18418-cbf2-49f0-9943-55139251c2d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:46 np0005539552 systemd[1]: run-netns-ovnmeta\x2d95b18418\x2dcbf2\x2d49f0\x2d9943\x2d55139251c2d2.mount: Deactivated successfully.
Nov 29 03:16:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:46.100 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-95b18418-cbf2-49f0-9943-55139251c2d2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:16:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:16:46.100 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[09b82333-2aee-481d-a14f-aa569a87e856]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:46 np0005539552 nova_compute[233724]: 2025-11-29 08:16:46.118 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:46 np0005539552 nova_compute[233724]: 2025-11-29 08:16:46.138 233728 DEBUG nova.compute.manager [req-bf7cf9fc-470a-4c83-8621-11e65d0f938d req-79547caf-c4cb-409a-826e-3852ae920a41 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Received event network-vif-unplugged-7c7e09b9-26df-4135-9c27-6dc6a737f677 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:46 np0005539552 nova_compute[233724]: 2025-11-29 08:16:46.138 233728 DEBUG oslo_concurrency.lockutils [req-bf7cf9fc-470a-4c83-8621-11e65d0f938d req-79547caf-c4cb-409a-826e-3852ae920a41 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ac865ef4-0329-4b54-b51e-3b0fdb0462d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:46 np0005539552 nova_compute[233724]: 2025-11-29 08:16:46.139 233728 DEBUG oslo_concurrency.lockutils [req-bf7cf9fc-470a-4c83-8621-11e65d0f938d req-79547caf-c4cb-409a-826e-3852ae920a41 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ac865ef4-0329-4b54-b51e-3b0fdb0462d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:46 np0005539552 nova_compute[233724]: 2025-11-29 08:16:46.139 233728 DEBUG oslo_concurrency.lockutils [req-bf7cf9fc-470a-4c83-8621-11e65d0f938d req-79547caf-c4cb-409a-826e-3852ae920a41 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ac865ef4-0329-4b54-b51e-3b0fdb0462d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:46 np0005539552 nova_compute[233724]: 2025-11-29 08:16:46.139 233728 DEBUG nova.compute.manager [req-bf7cf9fc-470a-4c83-8621-11e65d0f938d req-79547caf-c4cb-409a-826e-3852ae920a41 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] No waiting events found dispatching network-vif-unplugged-7c7e09b9-26df-4135-9c27-6dc6a737f677 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:16:46 np0005539552 nova_compute[233724]: 2025-11-29 08:16:46.139 233728 DEBUG nova.compute.manager [req-bf7cf9fc-470a-4c83-8621-11e65d0f938d req-79547caf-c4cb-409a-826e-3852ae920a41 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Received event network-vif-unplugged-7c7e09b9-26df-4135-9c27-6dc6a737f677 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:16:46 np0005539552 nova_compute[233724]: 2025-11-29 08:16:46.282 233728 DEBUG nova.network.neutron [req-cea8ffc1-a30d-456f-9230-326f891aa6d5 req-15cb7ce3-8ddf-4cae-adcd-bbba969d2671 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:16:46 np0005539552 nova_compute[233724]: 2025-11-29 08:16:46.298 233728 DEBUG oslo_concurrency.lockutils [req-cea8ffc1-a30d-456f-9230-326f891aa6d5 req-15cb7ce3-8ddf-4cae-adcd-bbba969d2671 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-ad6070e8-74bc-4df7-9c2d-5da5da175238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:16:46 np0005539552 nova_compute[233724]: 2025-11-29 08:16:46.370 233728 INFO nova.virt.libvirt.driver [None req-51fc6459-0c2a-46f0-8cfd-076237b937b5 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Deleting instance files /var/lib/nova/instances/ac865ef4-0329-4b54-b51e-3b0fdb0462d4_del#033[00m
Nov 29 03:16:46 np0005539552 nova_compute[233724]: 2025-11-29 08:16:46.371 233728 INFO nova.virt.libvirt.driver [None req-51fc6459-0c2a-46f0-8cfd-076237b937b5 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Deletion of /var/lib/nova/instances/ac865ef4-0329-4b54-b51e-3b0fdb0462d4_del complete#033[00m
Nov 29 03:16:46 np0005539552 nova_compute[233724]: 2025-11-29 08:16:46.437 233728 INFO nova.compute.manager [None req-51fc6459-0c2a-46f0-8cfd-076237b937b5 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:16:46 np0005539552 nova_compute[233724]: 2025-11-29 08:16:46.438 233728 DEBUG oslo.service.loopingcall [None req-51fc6459-0c2a-46f0-8cfd-076237b937b5 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:16:46 np0005539552 nova_compute[233724]: 2025-11-29 08:16:46.438 233728 DEBUG nova.compute.manager [-] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:16:46 np0005539552 nova_compute[233724]: 2025-11-29 08:16:46.438 233728 DEBUG nova.network.neutron [-] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:16:46 np0005539552 nova_compute[233724]: 2025-11-29 08:16:46.913 233728 DEBUG nova.network.neutron [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Successfully updated port: d87e0ecc-102f-4db6-b802-352374722987 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:16:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:47.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:47.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:48 np0005539552 nova_compute[233724]: 2025-11-29 08:16:48.334 233728 DEBUG nova.compute.manager [req-c855cdcd-5fa1-4548-acd7-4ab89c5c043b req-95367c3d-25fc-4740-8352-5104dd9c8ad6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Received event network-vif-plugged-7c7e09b9-26df-4135-9c27-6dc6a737f677 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:48 np0005539552 nova_compute[233724]: 2025-11-29 08:16:48.334 233728 DEBUG oslo_concurrency.lockutils [req-c855cdcd-5fa1-4548-acd7-4ab89c5c043b req-95367c3d-25fc-4740-8352-5104dd9c8ad6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ac865ef4-0329-4b54-b51e-3b0fdb0462d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:48 np0005539552 nova_compute[233724]: 2025-11-29 08:16:48.335 233728 DEBUG oslo_concurrency.lockutils [req-c855cdcd-5fa1-4548-acd7-4ab89c5c043b req-95367c3d-25fc-4740-8352-5104dd9c8ad6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ac865ef4-0329-4b54-b51e-3b0fdb0462d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:48 np0005539552 nova_compute[233724]: 2025-11-29 08:16:48.335 233728 DEBUG oslo_concurrency.lockutils [req-c855cdcd-5fa1-4548-acd7-4ab89c5c043b req-95367c3d-25fc-4740-8352-5104dd9c8ad6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ac865ef4-0329-4b54-b51e-3b0fdb0462d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:48 np0005539552 nova_compute[233724]: 2025-11-29 08:16:48.335 233728 DEBUG nova.compute.manager [req-c855cdcd-5fa1-4548-acd7-4ab89c5c043b req-95367c3d-25fc-4740-8352-5104dd9c8ad6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] No waiting events found dispatching network-vif-plugged-7c7e09b9-26df-4135-9c27-6dc6a737f677 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:16:48 np0005539552 nova_compute[233724]: 2025-11-29 08:16:48.335 233728 WARNING nova.compute.manager [req-c855cdcd-5fa1-4548-acd7-4ab89c5c043b req-95367c3d-25fc-4740-8352-5104dd9c8ad6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Received unexpected event network-vif-plugged-7c7e09b9-26df-4135-9c27-6dc6a737f677 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:16:48 np0005539552 nova_compute[233724]: 2025-11-29 08:16:48.625 233728 DEBUG nova.network.neutron [-] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:16:48 np0005539552 nova_compute[233724]: 2025-11-29 08:16:48.648 233728 INFO nova.compute.manager [-] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Took 2.21 seconds to deallocate network for instance.#033[00m
Nov 29 03:16:48 np0005539552 nova_compute[233724]: 2025-11-29 08:16:48.712 233728 DEBUG oslo_concurrency.lockutils [None req-51fc6459-0c2a-46f0-8cfd-076237b937b5 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:48 np0005539552 nova_compute[233724]: 2025-11-29 08:16:48.713 233728 DEBUG oslo_concurrency.lockutils [None req-51fc6459-0c2a-46f0-8cfd-076237b937b5 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:48 np0005539552 nova_compute[233724]: 2025-11-29 08:16:48.854 233728 DEBUG oslo_concurrency.processutils [None req-51fc6459-0c2a-46f0-8cfd-076237b937b5 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:48 np0005539552 nova_compute[233724]: 2025-11-29 08:16:48.988 233728 DEBUG nova.compute.manager [req-d4f8fbb5-8765-41b3-8310-400ec4d0cdac req-d6ac2b3e-5a91-45dd-970e-7a1eb7156c9f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received event network-changed-d87e0ecc-102f-4db6-b802-352374722987 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:48 np0005539552 nova_compute[233724]: 2025-11-29 08:16:48.988 233728 DEBUG nova.compute.manager [req-d4f8fbb5-8765-41b3-8310-400ec4d0cdac req-d6ac2b3e-5a91-45dd-970e-7a1eb7156c9f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Refreshing instance network info cache due to event network-changed-d87e0ecc-102f-4db6-b802-352374722987. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:16:48 np0005539552 nova_compute[233724]: 2025-11-29 08:16:48.989 233728 DEBUG oslo_concurrency.lockutils [req-d4f8fbb5-8765-41b3-8310-400ec4d0cdac req-d6ac2b3e-5a91-45dd-970e-7a1eb7156c9f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-ad6070e8-74bc-4df7-9c2d-5da5da175238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:16:48 np0005539552 nova_compute[233724]: 2025-11-29 08:16:48.989 233728 DEBUG oslo_concurrency.lockutils [req-d4f8fbb5-8765-41b3-8310-400ec4d0cdac req-d6ac2b3e-5a91-45dd-970e-7a1eb7156c9f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-ad6070e8-74bc-4df7-9c2d-5da5da175238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:16:48 np0005539552 nova_compute[233724]: 2025-11-29 08:16:48.989 233728 DEBUG nova.network.neutron [req-d4f8fbb5-8765-41b3-8310-400ec4d0cdac req-d6ac2b3e-5a91-45dd-970e-7a1eb7156c9f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Refreshing network info cache for port d87e0ecc-102f-4db6-b802-352374722987 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:16:49 np0005539552 nova_compute[233724]: 2025-11-29 08:16:49.171 233728 DEBUG nova.compute.manager [req-be8206d7-f597-49bd-bee8-92e3d57f1f09 req-03d9a3c7-837c-4d8a-b9d1-2165e12866ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Received event network-vif-deleted-7c7e09b9-26df-4135-9c27-6dc6a737f677 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:49 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:16:49 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1249739077' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:16:49 np0005539552 nova_compute[233724]: 2025-11-29 08:16:49.315 233728 DEBUG oslo_concurrency.processutils [None req-51fc6459-0c2a-46f0-8cfd-076237b937b5 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:49 np0005539552 nova_compute[233724]: 2025-11-29 08:16:49.321 233728 DEBUG nova.compute.provider_tree [None req-51fc6459-0c2a-46f0-8cfd-076237b937b5 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:16:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:49.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:49 np0005539552 nova_compute[233724]: 2025-11-29 08:16:49.344 233728 DEBUG nova.scheduler.client.report [None req-51fc6459-0c2a-46f0-8cfd-076237b937b5 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:16:49 np0005539552 nova_compute[233724]: 2025-11-29 08:16:49.350 233728 DEBUG nova.network.neutron [req-d4f8fbb5-8765-41b3-8310-400ec4d0cdac req-d6ac2b3e-5a91-45dd-970e-7a1eb7156c9f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:16:49 np0005539552 nova_compute[233724]: 2025-11-29 08:16:49.407 233728 DEBUG oslo_concurrency.lockutils [None req-51fc6459-0c2a-46f0-8cfd-076237b937b5 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:49 np0005539552 nova_compute[233724]: 2025-11-29 08:16:49.411 233728 DEBUG nova.network.neutron [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Successfully updated port: 341d2ffb-54ae-4f73-b5fa-028f5d68084c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:16:49 np0005539552 nova_compute[233724]: 2025-11-29 08:16:49.466 233728 INFO nova.scheduler.client.report [None req-51fc6459-0c2a-46f0-8cfd-076237b937b5 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Deleted allocations for instance ac865ef4-0329-4b54-b51e-3b0fdb0462d4#033[00m
Nov 29 03:16:49 np0005539552 nova_compute[233724]: 2025-11-29 08:16:49.602 233728 DEBUG oslo_concurrency.lockutils [None req-51fc6459-0c2a-46f0-8cfd-076237b937b5 f5ee3792ff1a4aa3ab899edb89244703 3dddf50fa9834b84b8c792205ab0590e - - default default] Lock "ac865ef4-0329-4b54-b51e-3b0fdb0462d4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.897s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:49.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:49 np0005539552 nova_compute[233724]: 2025-11-29 08:16:49.911 233728 DEBUG nova.network.neutron [req-d4f8fbb5-8765-41b3-8310-400ec4d0cdac req-d6ac2b3e-5a91-45dd-970e-7a1eb7156c9f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:16:49 np0005539552 nova_compute[233724]: 2025-11-29 08:16:49.933 233728 DEBUG oslo_concurrency.lockutils [req-d4f8fbb5-8765-41b3-8310-400ec4d0cdac req-d6ac2b3e-5a91-45dd-970e-7a1eb7156c9f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-ad6070e8-74bc-4df7-9c2d-5da5da175238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:16:51 np0005539552 nova_compute[233724]: 2025-11-29 08:16:51.020 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:51 np0005539552 nova_compute[233724]: 2025-11-29 08:16:51.089 233728 DEBUG nova.compute.manager [req-6e026b2d-a9ce-40ad-beb0-693abc03999a req-070ea81d-8d11-4015-b7a9-98cab1343b94 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received event network-changed-341d2ffb-54ae-4f73-b5fa-028f5d68084c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:51 np0005539552 nova_compute[233724]: 2025-11-29 08:16:51.090 233728 DEBUG nova.compute.manager [req-6e026b2d-a9ce-40ad-beb0-693abc03999a req-070ea81d-8d11-4015-b7a9-98cab1343b94 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Refreshing instance network info cache due to event network-changed-341d2ffb-54ae-4f73-b5fa-028f5d68084c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:16:51 np0005539552 nova_compute[233724]: 2025-11-29 08:16:51.090 233728 DEBUG oslo_concurrency.lockutils [req-6e026b2d-a9ce-40ad-beb0-693abc03999a req-070ea81d-8d11-4015-b7a9-98cab1343b94 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-ad6070e8-74bc-4df7-9c2d-5da5da175238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:16:51 np0005539552 nova_compute[233724]: 2025-11-29 08:16:51.091 233728 DEBUG oslo_concurrency.lockutils [req-6e026b2d-a9ce-40ad-beb0-693abc03999a req-070ea81d-8d11-4015-b7a9-98cab1343b94 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-ad6070e8-74bc-4df7-9c2d-5da5da175238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:16:51 np0005539552 nova_compute[233724]: 2025-11-29 08:16:51.091 233728 DEBUG nova.network.neutron [req-6e026b2d-a9ce-40ad-beb0-693abc03999a req-070ea81d-8d11-4015-b7a9-98cab1343b94 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Refreshing network info cache for port 341d2ffb-54ae-4f73-b5fa-028f5d68084c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:16:51 np0005539552 nova_compute[233724]: 2025-11-29 08:16:51.121 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:51 np0005539552 nova_compute[233724]: 2025-11-29 08:16:51.269 233728 DEBUG nova.network.neutron [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Successfully updated port: 120af4a6-f9dd-4b7a-8756-ec894c6253de _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:16:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:51.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:51 np0005539552 nova_compute[233724]: 2025-11-29 08:16:51.398 233728 DEBUG nova.network.neutron [req-6e026b2d-a9ce-40ad-beb0-693abc03999a req-070ea81d-8d11-4015-b7a9-98cab1343b94 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:16:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:51.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:52 np0005539552 nova_compute[233724]: 2025-11-29 08:16:52.605 233728 DEBUG nova.network.neutron [req-6e026b2d-a9ce-40ad-beb0-693abc03999a req-070ea81d-8d11-4015-b7a9-98cab1343b94 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:16:52 np0005539552 nova_compute[233724]: 2025-11-29 08:16:52.629 233728 DEBUG oslo_concurrency.lockutils [req-6e026b2d-a9ce-40ad-beb0-693abc03999a req-070ea81d-8d11-4015-b7a9-98cab1343b94 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-ad6070e8-74bc-4df7-9c2d-5da5da175238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:16:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:53 np0005539552 nova_compute[233724]: 2025-11-29 08:16:53.066 233728 DEBUG nova.network.neutron [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Successfully updated port: b25ceb20-79ac-43b0-8487-65bcf31a0a2f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:16:53 np0005539552 nova_compute[233724]: 2025-11-29 08:16:53.080 233728 DEBUG oslo_concurrency.lockutils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Acquiring lock "refresh_cache-ad6070e8-74bc-4df7-9c2d-5da5da175238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:16:53 np0005539552 nova_compute[233724]: 2025-11-29 08:16:53.081 233728 DEBUG oslo_concurrency.lockutils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Acquired lock "refresh_cache-ad6070e8-74bc-4df7-9c2d-5da5da175238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:16:53 np0005539552 nova_compute[233724]: 2025-11-29 08:16:53.081 233728 DEBUG nova.network.neutron [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:16:53 np0005539552 nova_compute[233724]: 2025-11-29 08:16:53.198 233728 DEBUG nova.compute.manager [req-d7a39628-fd12-404b-a341-1e671c2ece17 req-66f9990a-7ab0-416d-90b7-b5a90d332839 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received event network-changed-120af4a6-f9dd-4b7a-8756-ec894c6253de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:53 np0005539552 nova_compute[233724]: 2025-11-29 08:16:53.199 233728 DEBUG nova.compute.manager [req-d7a39628-fd12-404b-a341-1e671c2ece17 req-66f9990a-7ab0-416d-90b7-b5a90d332839 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Refreshing instance network info cache due to event network-changed-120af4a6-f9dd-4b7a-8756-ec894c6253de. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:16:53 np0005539552 nova_compute[233724]: 2025-11-29 08:16:53.200 233728 DEBUG oslo_concurrency.lockutils [req-d7a39628-fd12-404b-a341-1e671c2ece17 req-66f9990a-7ab0-416d-90b7-b5a90d332839 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-ad6070e8-74bc-4df7-9c2d-5da5da175238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:16:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:53.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:53 np0005539552 nova_compute[233724]: 2025-11-29 08:16:53.351 233728 DEBUG nova.network.neutron [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:16:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:53.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:16:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:55.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:16:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:55.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:56 np0005539552 nova_compute[233724]: 2025-11-29 08:16:56.024 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:56 np0005539552 nova_compute[233724]: 2025-11-29 08:16:56.122 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:16:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:57.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:16:57 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:57.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:59 np0005539552 nova_compute[233724]: 2025-11-29 08:16:59.208 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:59.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:16:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:16:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:59.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:16:59 np0005539552 podman[275800]: 2025-11-29 08:16:59.964451742 +0000 UTC m=+0.052015747 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 29 03:16:59 np0005539552 podman[275799]: 2025-11-29 08:16:59.994268003 +0000 UTC m=+0.082105656 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 29 03:17:00 np0005539552 podman[275801]: 2025-11-29 08:17:00.019397918 +0000 UTC m=+0.102919385 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 29 03:17:00 np0005539552 nova_compute[233724]: 2025-11-29 08:17:00.946 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404205.9448626, ac865ef4-0329-4b54-b51e-3b0fdb0462d4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:17:00 np0005539552 nova_compute[233724]: 2025-11-29 08:17:00.946 233728 INFO nova.compute.manager [-] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:17:00 np0005539552 nova_compute[233724]: 2025-11-29 08:17:00.969 233728 DEBUG nova.compute.manager [None req-73dfd502-515b-4cfa-80f9-1851255f1d41 - - - - - -] [instance: ac865ef4-0329-4b54-b51e-3b0fdb0462d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:17:01 np0005539552 nova_compute[233724]: 2025-11-29 08:17:01.027 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:01 np0005539552 nova_compute[233724]: 2025-11-29 08:17:01.124 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:01.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:01.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:03.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:17:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:03.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:17:04 np0005539552 nova_compute[233724]: 2025-11-29 08:17:04.035 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:04 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:04.034 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:17:04 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:04.037 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:17:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:05.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:05.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:06.040 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:06 np0005539552 nova_compute[233724]: 2025-11-29 08:17:06.067 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:06 np0005539552 nova_compute[233724]: 2025-11-29 08:17:06.130 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:06 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Nov 29 03:17:06 np0005539552 nova_compute[233724]: 2025-11-29 08:17:06.631 233728 DEBUG nova.network.neutron [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Updating instance_info_cache with network_info: [{"id": "082c8475-1c86-43c9-b68c-dafbd502311e", "address": "fa:16:3e:bf:be:93", "network": {"id": "9cbd3709-4a58-47d0-b193-4d753a5463a6", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-537724310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap082c8475-1c", "ovs_interfaceid": "082c8475-1c86-43c9-b68c-dafbd502311e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "eb187f09-9e48-4b8c-9111-744004bcec05", "address": "fa:16:3e:b0:ef:da", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.254", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": 
null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb187f09-9e", "ovs_interfaceid": "eb187f09-9e48-4b8c-9111-744004bcec05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "580c06c8-3541-4c20-b8a7-c02c5f9efe4b", "address": "fa:16:3e:1a:e2:8b", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580c06c8-35", "ovs_interfaceid": "580c06c8-3541-4c20-b8a7-c02c5f9efe4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "d87e0ecc-102f-4db6-b802-352374722987", "address": "fa:16:3e:dd:61:ae", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd87e0ecc-10", "ovs_interfaceid": "d87e0ecc-102f-4db6-b802-352374722987", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "341d2ffb-54ae-4f73-b5fa-028f5d68084c", "address": "fa:16:3e:0a:5b:43", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.209", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341d2ffb-54", "ovs_interfaceid": "341d2ffb-54ae-4f73-b5fa-028f5d68084c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "120af4a6-f9dd-4b7a-8756-ec894c6253de", "address": "fa:16:3e:56:50:8c", "network": {"id": "22531800-2588-4563-9214-766267df7c54", "bridge": "br-int", "label": "tempest-device-tagging-net2-1207720522", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, 
"meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap120af4a6-f9", "ovs_interfaceid": "120af4a6-f9dd-4b7a-8756-ec894c6253de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b25ceb20-79ac-43b0-8487-65bcf31a0a2f", "address": "fa:16:3e:42:2b:da", "network": {"id": "22531800-2588-4563-9214-766267df7c54", "bridge": "br-int", "label": "tempest-device-tagging-net2-1207720522", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb25ceb20-79", "ovs_interfaceid": "b25ceb20-79ac-43b0-8487-65bcf31a0a2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:17:06 np0005539552 nova_compute[233724]: 2025-11-29 08:17:06.661 233728 DEBUG oslo_concurrency.lockutils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Releasing lock "refresh_cache-ad6070e8-74bc-4df7-9c2d-5da5da175238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:17:06 np0005539552 nova_compute[233724]: 2025-11-29 08:17:06.661 233728 DEBUG nova.compute.manager [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Instance network_info: |[{"id": "082c8475-1c86-43c9-b68c-dafbd502311e", "address": "fa:16:3e:bf:be:93", "network": {"id": "9cbd3709-4a58-47d0-b193-4d753a5463a6", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-537724310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap082c8475-1c", "ovs_interfaceid": "082c8475-1c86-43c9-b68c-dafbd502311e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "eb187f09-9e48-4b8c-9111-744004bcec05", "address": "fa:16:3e:b0:ef:da", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.254", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb187f09-9e", "ovs_interfaceid": "eb187f09-9e48-4b8c-9111-744004bcec05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "580c06c8-3541-4c20-b8a7-c02c5f9efe4b", "address": "fa:16:3e:1a:e2:8b", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580c06c8-35", "ovs_interfaceid": "580c06c8-3541-4c20-b8a7-c02c5f9efe4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "d87e0ecc-102f-4db6-b802-352374722987", "address": "fa:16:3e:dd:61:ae", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd87e0ecc-10", "ovs_interfaceid": "d87e0ecc-102f-4db6-b802-352374722987", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "341d2ffb-54ae-4f73-b5fa-028f5d68084c", "address": "fa:16:3e:0a:5b:43", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.209", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341d2ffb-54", "ovs_interfaceid": "341d2ffb-54ae-4f73-b5fa-028f5d68084c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "120af4a6-f9dd-4b7a-8756-ec894c6253de", "address": "fa:16:3e:56:50:8c", "network": {"id": "22531800-2588-4563-9214-766267df7c54", "bridge": "br-int", "label": "tempest-device-tagging-net2-1207720522", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, 
"meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap120af4a6-f9", "ovs_interfaceid": "120af4a6-f9dd-4b7a-8756-ec894c6253de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b25ceb20-79ac-43b0-8487-65bcf31a0a2f", "address": "fa:16:3e:42:2b:da", "network": {"id": "22531800-2588-4563-9214-766267df7c54", "bridge": "br-int", "label": "tempest-device-tagging-net2-1207720522", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb25ceb20-79", "ovs_interfaceid": "b25ceb20-79ac-43b0-8487-65bcf31a0a2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:17:06 np0005539552 nova_compute[233724]: 2025-11-29 08:17:06.662 233728 DEBUG oslo_concurrency.lockutils [req-d7a39628-fd12-404b-a341-1e671c2ece17 req-66f9990a-7ab0-416d-90b7-b5a90d332839 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-ad6070e8-74bc-4df7-9c2d-5da5da175238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:17:06 np0005539552 nova_compute[233724]: 2025-11-29 08:17:06.662 233728 DEBUG nova.network.neutron [req-d7a39628-fd12-404b-a341-1e671c2ece17 req-66f9990a-7ab0-416d-90b7-b5a90d332839 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Refreshing network info cache for port 120af4a6-f9dd-4b7a-8756-ec894c6253de _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:17:06 np0005539552 nova_compute[233724]: 2025-11-29 08:17:06.669 233728 DEBUG nova.virt.libvirt.driver [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Start _get_guest_xml network_info=[{"id": "082c8475-1c86-43c9-b68c-dafbd502311e", "address": "fa:16:3e:bf:be:93", "network": {"id": "9cbd3709-4a58-47d0-b193-4d753a5463a6", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-537724310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap082c8475-1c", "ovs_interfaceid": "082c8475-1c86-43c9-b68c-dafbd502311e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "eb187f09-9e48-4b8c-9111-744004bcec05", "address": "fa:16:3e:b0:ef:da", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.254", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, 
"tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb187f09-9e", "ovs_interfaceid": "eb187f09-9e48-4b8c-9111-744004bcec05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "580c06c8-3541-4c20-b8a7-c02c5f9efe4b", "address": "fa:16:3e:1a:e2:8b", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580c06c8-35", "ovs_interfaceid": "580c06c8-3541-4c20-b8a7-c02c5f9efe4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "d87e0ecc-102f-4db6-b802-352374722987", "address": "fa:16:3e:dd:61:ae", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd87e0ecc-10", "ovs_interfaceid": "d87e0ecc-102f-4db6-b802-352374722987", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "341d2ffb-54ae-4f73-b5fa-028f5d68084c", "address": "fa:16:3e:0a:5b:43", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.209", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341d2ffb-54", "ovs_interfaceid": "341d2ffb-54ae-4f73-b5fa-028f5d68084c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "120af4a6-f9dd-4b7a-8756-ec894c6253de", "address": "fa:16:3e:56:50:8c", "network": {"id": "22531800-2588-4563-9214-766267df7c54", "bridge": "br-int", "label": "tempest-device-tagging-net2-1207720522", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, 
"meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap120af4a6-f9", "ovs_interfaceid": "120af4a6-f9dd-4b7a-8756-ec894c6253de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b25ceb20-79ac-43b0-8487-65bcf31a0a2f", "address": "fa:16:3e:42:2b:da", "network": {"id": "22531800-2588-4563-9214-766267df7c54", "bridge": "br-int", "label": "tempest-device-tagging-net2-1207720522", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb25ceb20-79", "ovs_interfaceid": "b25ceb20-79ac-43b0-8487-65bcf31a0a2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk', 'boot_index': '2'}, '/dev/vdc': {'bus': 'virtio', 'dev': 'vdc', 'type': 'disk', 'boot_index': '3'}, 
'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T0
Nov 29 03:17:06 np0005539552 nova_compute[233724]: in_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-618924f8-fe57-45de-85ec-593336349be7', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '618924f8-fe57-45de-85ec-593336349be7', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'ad6070e8-74bc-4df7-9c2d-5da5da175238', 'attached_at': '', 'detached_at': '', 'volume_id': '618924f8-fe57-45de-85ec-593336349be7', 'serial': '618924f8-fe57-45de-85ec-593336349be7'}, 'delete_on_termination': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'attachment_id': '49cf7d42-57e3-4052-bb5a-bbc8ab6da5bd', 'volume_type': None}, {'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-5c8ade3d-ec33-464b-bbb2-0e63ad58b8e9', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '5c8ade3d-ec33-464b-bbb2-0e63ad58b8e9', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'ad6070e8-74bc-4df7-9c2d-5da5da175238', 'attached_at': '', 'detached_at': '', 'volume_id': '5c8ade3d-ec33-464b-bbb2-0e63ad58b8e9', 
'serial': '5c8ade3d-ec33-464b-bbb2-0e63ad58b8e9'}, 'delete_on_termination': False, 'guest_format': None, 'boot_index': 1, 'device_type': 'disk', 'disk_bus': 'virtio', 'mount_device': '/dev/vdb', 'attachment_id': 'b2bb1101-20a8-4887-ba52-422e4e2fe68b', 'volume_type': None}, {'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-414e4eae-2565-42a3-bb35-fdd1ef6e127f', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '414e4eae-2565-42a3-bb35-fdd1ef6e127f', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'ad6070e8-74bc-4df7-9c2d-5da5da175238', 'attached_at': '', 'detached_at': '', 'volume_id': '414e4eae-2565-42a3-bb35-fdd1ef6e127f', 'serial': '414e4eae-2565-42a3-bb35-fdd1ef6e127f'}, 'delete_on_termination': False, 'guest_format': None, 'boot_index': 2, 'device_type': 'disk', 'disk_bus': 'virtio', 'mount_device': '/dev/vdc', 'attachment_id': '87251777-4c3d-4b32-b1b1-ffbc41a78285', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:17:06 np0005539552 nova_compute[233724]: 2025-11-29 08:17:06.674 233728 WARNING nova.virt.libvirt.driver [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:17:06 np0005539552 nova_compute[233724]: 2025-11-29 08:17:06.699 233728 DEBUG nova.virt.libvirt.host [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:17:06 np0005539552 nova_compute[233724]: 2025-11-29 08:17:06.700 233728 DEBUG nova.virt.libvirt.host [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:17:06 np0005539552 nova_compute[233724]: 2025-11-29 08:17:06.715 233728 DEBUG nova.virt.libvirt.host [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:17:06 np0005539552 nova_compute[233724]: 2025-11-29 08:17:06.716 233728 DEBUG nova.virt.libvirt.host [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:17:06 np0005539552 nova_compute[233724]: 2025-11-29 08:17:06.717 233728 DEBUG nova.virt.libvirt.driver [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:17:06 np0005539552 nova_compute[233724]: 2025-11-29 08:17:06.717 233728 DEBUG nova.virt.hardware [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:17:06 np0005539552 nova_compute[233724]: 2025-11-29 08:17:06.718 233728 DEBUG nova.virt.hardware [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:17:06 np0005539552 nova_compute[233724]: 2025-11-29 08:17:06.718 233728 DEBUG nova.virt.hardware [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:17:06 np0005539552 nova_compute[233724]: 2025-11-29 08:17:06.718 233728 DEBUG nova.virt.hardware [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:17:06 np0005539552 nova_compute[233724]: 2025-11-29 08:17:06.719 233728 DEBUG nova.virt.hardware [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:17:06 np0005539552 nova_compute[233724]: 2025-11-29 08:17:06.719 233728 DEBUG nova.virt.hardware [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:17:06 np0005539552 nova_compute[233724]: 2025-11-29 08:17:06.719 233728 DEBUG nova.virt.hardware [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:17:06 np0005539552 nova_compute[233724]: 2025-11-29 08:17:06.719 233728 DEBUG nova.virt.hardware [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:17:06 np0005539552 nova_compute[233724]: 2025-11-29 08:17:06.719 233728 DEBUG nova.virt.hardware [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:17:06 np0005539552 nova_compute[233724]: 2025-11-29 08:17:06.719 233728 DEBUG nova.virt.hardware [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:17:06 np0005539552 nova_compute[233724]: 2025-11-29 08:17:06.720 233728 DEBUG nova.virt.hardware [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:17:06 np0005539552 nova_compute[233724]: 2025-11-29 08:17:06.747 233728 DEBUG nova.storage.rbd_utils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] rbd image ad6070e8-74bc-4df7-9c2d-5da5da175238_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:17:06 np0005539552 nova_compute[233724]: 2025-11-29 08:17:06.751 233728 DEBUG oslo_concurrency.processutils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:07 np0005539552 rsyslogd[1004]: message too long (8192) with configured size 8096, begin of message is: 2025-11-29 08:17:06.669 233728 DEBUG nova.virt.libvirt.driver [None req-56662c67 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 29 03:17:07 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:17:07 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/249430449' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.193 233728 DEBUG oslo_concurrency.processutils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.297 233728 DEBUG nova.virt.libvirt.vif [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:16:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-294430620',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-294430620',id=102,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAO7RYcvoKKbeCNJWF2AQ95TNCsqK6uMqyPlBh/hzt9hd2A7t9lApW32VsjJK/EoYsg5Sa2rEdj+spCWxKDVwvc4e1llqE5HacHIZ9OjbL+s968KEhF8bE6BdLOxIR6jww==',key_name='tempest-keypair-2058686798',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9e6235234e63419ead82cbd9a07d500f',ramdisk_id='',reservation_id='r-hsa7naig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio'
,image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1166451646',owner_user_name='tempest-TaggedBootDevicesTest-1166451646-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:16:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2646d924f10246c98f4ee29d496eb0f3',uuid=ad6070e8-74bc-4df7-9c2d-5da5da175238,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "082c8475-1c86-43c9-b68c-dafbd502311e", "address": "fa:16:3e:bf:be:93", "network": {"id": "9cbd3709-4a58-47d0-b193-4d753a5463a6", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-537724310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap082c8475-1c", "ovs_interfaceid": "082c8475-1c86-43c9-b68c-dafbd502311e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.298 233728 DEBUG nova.network.os_vif_util [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Converting VIF {"id": "082c8475-1c86-43c9-b68c-dafbd502311e", "address": "fa:16:3e:bf:be:93", "network": {"id": "9cbd3709-4a58-47d0-b193-4d753a5463a6", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-537724310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap082c8475-1c", "ovs_interfaceid": "082c8475-1c86-43c9-b68c-dafbd502311e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.299 233728 DEBUG nova.network.os_vif_util [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:be:93,bridge_name='br-int',has_traffic_filtering=True,id=082c8475-1c86-43c9-b68c-dafbd502311e,network=Network(9cbd3709-4a58-47d0-b193-4d753a5463a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap082c8475-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.300 233728 DEBUG nova.virt.libvirt.vif [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:16:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-294430620',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-294430620',id=102,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAO7RYcvoKKbeCNJWF2AQ95TNCsqK6uMqyPlBh/hzt9hd2A7t9lApW32VsjJK/EoYsg5Sa2rEdj+spCWxKDVwvc4e1llqE5HacHIZ9OjbL+s968KEhF8bE6BdLOxIR6jww==',key_name='tempest-keypair-2058686798',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9e6235234e63419ead82cbd9a07d500f',ramdisk_id='',reservation_id='r-hsa7naig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio'
,image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1166451646',owner_user_name='tempest-TaggedBootDevicesTest-1166451646-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:16:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2646d924f10246c98f4ee29d496eb0f3',uuid=ad6070e8-74bc-4df7-9c2d-5da5da175238,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eb187f09-9e48-4b8c-9111-744004bcec05", "address": "fa:16:3e:b0:ef:da", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.254", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb187f09-9e", "ovs_interfaceid": "eb187f09-9e48-4b8c-9111-744004bcec05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.300 233728 DEBUG nova.network.os_vif_util [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Converting VIF {"id": "eb187f09-9e48-4b8c-9111-744004bcec05", "address": "fa:16:3e:b0:ef:da", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.254", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb187f09-9e", "ovs_interfaceid": "eb187f09-9e48-4b8c-9111-744004bcec05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.301 233728 DEBUG nova.network.os_vif_util [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:ef:da,bridge_name='br-int',has_traffic_filtering=True,id=eb187f09-9e48-4b8c-9111-744004bcec05,network=Network(69989adc-a022-484c-921c-4ddb36b3b0a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapeb187f09-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.302 233728 DEBUG nova.virt.libvirt.vif [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:16:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-294430620',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-294430620',id=102,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAO7RYcvoKKbeCNJWF2AQ95TNCsqK6uMqyPlBh/hzt9hd2A7t9lApW32VsjJK/EoYsg5Sa2rEdj+spCWxKDVwvc4e1llqE5HacHIZ9OjbL+s968KEhF8bE6BdLOxIR6jww==',key_name='tempest-keypair-2058686798',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9e6235234e63419ead82cbd9a07d500f',ramdisk_id='',reservation_id='r-hsa7naig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio'
,image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1166451646',owner_user_name='tempest-TaggedBootDevicesTest-1166451646-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:16:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2646d924f10246c98f4ee29d496eb0f3',uuid=ad6070e8-74bc-4df7-9c2d-5da5da175238,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "580c06c8-3541-4c20-b8a7-c02c5f9efe4b", "address": "fa:16:3e:1a:e2:8b", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580c06c8-35", "ovs_interfaceid": "580c06c8-3541-4c20-b8a7-c02c5f9efe4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.302 233728 DEBUG nova.network.os_vif_util [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Converting VIF {"id": "580c06c8-3541-4c20-b8a7-c02c5f9efe4b", "address": "fa:16:3e:1a:e2:8b", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580c06c8-35", "ovs_interfaceid": "580c06c8-3541-4c20-b8a7-c02c5f9efe4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.303 233728 DEBUG nova.network.os_vif_util [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:e2:8b,bridge_name='br-int',has_traffic_filtering=True,id=580c06c8-3541-4c20-b8a7-c02c5f9efe4b,network=Network(69989adc-a022-484c-921c-4ddb36b3b0a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap580c06c8-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.303 233728 DEBUG nova.virt.libvirt.vif [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:16:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-294430620',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-294430620',id=102,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAO7RYcvoKKbeCNJWF2AQ95TNCsqK6uMqyPlBh/hzt9hd2A7t9lApW32VsjJK/EoYsg5Sa2rEdj+spCWxKDVwvc4e1llqE5HacHIZ9OjbL+s968KEhF8bE6BdLOxIR6jww==',key_name='tempest-keypair-2058686798',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9e6235234e63419ead82cbd9a07d500f',ramdisk_id='',reservation_id='r-hsa7naig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio'
,image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1166451646',owner_user_name='tempest-TaggedBootDevicesTest-1166451646-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:16:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2646d924f10246c98f4ee29d496eb0f3',uuid=ad6070e8-74bc-4df7-9c2d-5da5da175238,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d87e0ecc-102f-4db6-b802-352374722987", "address": "fa:16:3e:dd:61:ae", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd87e0ecc-10", "ovs_interfaceid": "d87e0ecc-102f-4db6-b802-352374722987", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.304 233728 DEBUG nova.network.os_vif_util [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Converting VIF {"id": "d87e0ecc-102f-4db6-b802-352374722987", "address": "fa:16:3e:dd:61:ae", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd87e0ecc-10", "ovs_interfaceid": "d87e0ecc-102f-4db6-b802-352374722987", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.304 233728 DEBUG nova.network.os_vif_util [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:61:ae,bridge_name='br-int',has_traffic_filtering=True,id=d87e0ecc-102f-4db6-b802-352374722987,network=Network(69989adc-a022-484c-921c-4ddb36b3b0a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd87e0ecc-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.305 233728 DEBUG nova.virt.libvirt.vif [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:16:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-294430620',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-294430620',id=102,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAO7RYcvoKKbeCNJWF2AQ95TNCsqK6uMqyPlBh/hzt9hd2A7t9lApW32VsjJK/EoYsg5Sa2rEdj+spCWxKDVwvc4e1llqE5HacHIZ9OjbL+s968KEhF8bE6BdLOxIR6jww==',key_name='tempest-keypair-2058686798',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9e6235234e63419ead82cbd9a07d500f',ramdisk_id='',reservation_id='r-hsa7naig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio'
,image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1166451646',owner_user_name='tempest-TaggedBootDevicesTest-1166451646-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:16:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2646d924f10246c98f4ee29d496eb0f3',uuid=ad6070e8-74bc-4df7-9c2d-5da5da175238,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "341d2ffb-54ae-4f73-b5fa-028f5d68084c", "address": "fa:16:3e:0a:5b:43", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.209", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341d2ffb-54", "ovs_interfaceid": "341d2ffb-54ae-4f73-b5fa-028f5d68084c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.305 233728 DEBUG nova.network.os_vif_util [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Converting VIF {"id": "341d2ffb-54ae-4f73-b5fa-028f5d68084c", "address": "fa:16:3e:0a:5b:43", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.209", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341d2ffb-54", "ovs_interfaceid": "341d2ffb-54ae-4f73-b5fa-028f5d68084c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.306 233728 DEBUG nova.network.os_vif_util [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0a:5b:43,bridge_name='br-int',has_traffic_filtering=True,id=341d2ffb-54ae-4f73-b5fa-028f5d68084c,network=Network(69989adc-a022-484c-921c-4ddb36b3b0a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap341d2ffb-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.307 233728 DEBUG nova.virt.libvirt.vif [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:16:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-294430620',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-294430620',id=102,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAO7RYcvoKKbeCNJWF2AQ95TNCsqK6uMqyPlBh/hzt9hd2A7t9lApW32VsjJK/EoYsg5Sa2rEdj+spCWxKDVwvc4e1llqE5HacHIZ9OjbL+s968KEhF8bE6BdLOxIR6jww==',key_name='tempest-keypair-2058686798',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9e6235234e63419ead82cbd9a07d500f',ramdisk_id='',reservation_id='r-hsa7naig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio'
,image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1166451646',owner_user_name='tempest-TaggedBootDevicesTest-1166451646-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:16:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2646d924f10246c98f4ee29d496eb0f3',uuid=ad6070e8-74bc-4df7-9c2d-5da5da175238,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "120af4a6-f9dd-4b7a-8756-ec894c6253de", "address": "fa:16:3e:56:50:8c", "network": {"id": "22531800-2588-4563-9214-766267df7c54", "bridge": "br-int", "label": "tempest-device-tagging-net2-1207720522", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap120af4a6-f9", "ovs_interfaceid": "120af4a6-f9dd-4b7a-8756-ec894c6253de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.307 233728 DEBUG nova.network.os_vif_util [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Converting VIF {"id": "120af4a6-f9dd-4b7a-8756-ec894c6253de", "address": "fa:16:3e:56:50:8c", "network": {"id": "22531800-2588-4563-9214-766267df7c54", "bridge": "br-int", "label": "tempest-device-tagging-net2-1207720522", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap120af4a6-f9", "ovs_interfaceid": "120af4a6-f9dd-4b7a-8756-ec894c6253de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.307 233728 DEBUG nova.network.os_vif_util [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:50:8c,bridge_name='br-int',has_traffic_filtering=True,id=120af4a6-f9dd-4b7a-8756-ec894c6253de,network=Network(22531800-2588-4563-9214-766267df7c54),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap120af4a6-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.308 233728 DEBUG nova.virt.libvirt.vif [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:16:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-294430620',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-294430620',id=102,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAO7RYcvoKKbeCNJWF2AQ95TNCsqK6uMqyPlBh/hzt9hd2A7t9lApW32VsjJK/EoYsg5Sa2rEdj+spCWxKDVwvc4e1llqE5HacHIZ9OjbL+s968KEhF8bE6BdLOxIR6jww==',key_name='tempest-keypair-2058686798',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9e6235234e63419ead82cbd9a07d500f',ramdisk_id='',reservation_id='r-hsa7naig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio'
,image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1166451646',owner_user_name='tempest-TaggedBootDevicesTest-1166451646-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:16:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2646d924f10246c98f4ee29d496eb0f3',uuid=ad6070e8-74bc-4df7-9c2d-5da5da175238,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b25ceb20-79ac-43b0-8487-65bcf31a0a2f", "address": "fa:16:3e:42:2b:da", "network": {"id": "22531800-2588-4563-9214-766267df7c54", "bridge": "br-int", "label": "tempest-device-tagging-net2-1207720522", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb25ceb20-79", "ovs_interfaceid": "b25ceb20-79ac-43b0-8487-65bcf31a0a2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.309 233728 DEBUG nova.network.os_vif_util [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Converting VIF {"id": "b25ceb20-79ac-43b0-8487-65bcf31a0a2f", "address": "fa:16:3e:42:2b:da", "network": {"id": "22531800-2588-4563-9214-766267df7c54", "bridge": "br-int", "label": "tempest-device-tagging-net2-1207720522", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb25ceb20-79", "ovs_interfaceid": "b25ceb20-79ac-43b0-8487-65bcf31a0a2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.309 233728 DEBUG nova.network.os_vif_util [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:42:2b:da,bridge_name='br-int',has_traffic_filtering=True,id=b25ceb20-79ac-43b0-8487-65bcf31a0a2f,network=Network(22531800-2588-4563-9214-766267df7c54),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb25ceb20-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.310 233728 DEBUG nova.objects.instance [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Lazy-loading 'pci_devices' on Instance uuid ad6070e8-74bc-4df7-9c2d-5da5da175238 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.339 233728 DEBUG nova.virt.libvirt.driver [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:17:07 np0005539552 nova_compute[233724]:  <uuid>ad6070e8-74bc-4df7-9c2d-5da5da175238</uuid>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:  <name>instance-00000066</name>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <nova:name>tempest-device-tagging-server-294430620</nova:name>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:17:06</nova:creationTime>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:17:07 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:        <nova:user uuid="2646d924f10246c98f4ee29d496eb0f3">tempest-TaggedBootDevicesTest-1166451646-project-member</nova:user>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:        <nova:project uuid="9e6235234e63419ead82cbd9a07d500f">tempest-TaggedBootDevicesTest-1166451646</nova:project>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:        <nova:port uuid="082c8475-1c86-43c9-b68c-dafbd502311e">
Nov 29 03:17:07 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:        <nova:port uuid="eb187f09-9e48-4b8c-9111-744004bcec05">
Nov 29 03:17:07 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.1.1.254" ipVersion="4"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:        <nova:port uuid="580c06c8-3541-4c20-b8a7-c02c5f9efe4b">
Nov 29 03:17:07 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.1.1.28" ipVersion="4"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:        <nova:port uuid="d87e0ecc-102f-4db6-b802-352374722987">
Nov 29 03:17:07 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.1.1.203" ipVersion="4"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:        <nova:port uuid="341d2ffb-54ae-4f73-b5fa-028f5d68084c">
Nov 29 03:17:07 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.1.1.209" ipVersion="4"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:        <nova:port uuid="120af4a6-f9dd-4b7a-8756-ec894c6253de">
Nov 29 03:17:07 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.2.2.100" ipVersion="4"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:        <nova:port uuid="b25ceb20-79ac-43b0-8487-65bcf31a0a2f">
Nov 29 03:17:07 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.2.2.200" ipVersion="4"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <entry name="serial">ad6070e8-74bc-4df7-9c2d-5da5da175238</entry>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <entry name="uuid">ad6070e8-74bc-4df7-9c2d-5da5da175238</entry>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/ad6070e8-74bc-4df7-9c2d-5da5da175238_disk.config">
Nov 29 03:17:07 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:17:07 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="volumes/volume-618924f8-fe57-45de-85ec-593336349be7">
Nov 29 03:17:07 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:17:07 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <serial>618924f8-fe57-45de-85ec-593336349be7</serial>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="volumes/volume-5c8ade3d-ec33-464b-bbb2-0e63ad58b8e9">
Nov 29 03:17:07 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:17:07 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <target dev="vdb" bus="virtio"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <serial>5c8ade3d-ec33-464b-bbb2-0e63ad58b8e9</serial>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="volumes/volume-414e4eae-2565-42a3-bb35-fdd1ef6e127f">
Nov 29 03:17:07 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:17:07 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <target dev="vdc" bus="virtio"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <serial>414e4eae-2565-42a3-bb35-fdd1ef6e127f</serial>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:bf:be:93"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <target dev="tap082c8475-1c"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:b0:ef:da"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <target dev="tapeb187f09-9e"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:1a:e2:8b"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <target dev="tap580c06c8-35"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:dd:61:ae"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <target dev="tapd87e0ecc-10"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:0a:5b:43"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <target dev="tap341d2ffb-54"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:56:50:8c"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <target dev="tap120af4a6-f9"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:42:2b:da"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <target dev="tapb25ceb20-79"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/ad6070e8-74bc-4df7-9c2d-5da5da175238/console.log" append="off"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:17:07 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:17:07 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:17:07 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:17:07 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.340 233728 DEBUG nova.compute.manager [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Preparing to wait for external event network-vif-plugged-082c8475-1c86-43c9-b68c-dafbd502311e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.340 233728 DEBUG oslo_concurrency.lockutils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Acquiring lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.340 233728 DEBUG oslo_concurrency.lockutils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.341 233728 DEBUG oslo_concurrency.lockutils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.341 233728 DEBUG nova.compute.manager [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Preparing to wait for external event network-vif-plugged-eb187f09-9e48-4b8c-9111-744004bcec05 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.341 233728 DEBUG oslo_concurrency.lockutils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Acquiring lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.342 233728 DEBUG oslo_concurrency.lockutils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.342 233728 DEBUG oslo_concurrency.lockutils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.342 233728 DEBUG nova.compute.manager [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Preparing to wait for external event network-vif-plugged-580c06c8-3541-4c20-b8a7-c02c5f9efe4b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.343 233728 DEBUG oslo_concurrency.lockutils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Acquiring lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.343 233728 DEBUG oslo_concurrency.lockutils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.343 233728 DEBUG oslo_concurrency.lockutils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.343 233728 DEBUG nova.compute.manager [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Preparing to wait for external event network-vif-plugged-d87e0ecc-102f-4db6-b802-352374722987 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.344 233728 DEBUG oslo_concurrency.lockutils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Acquiring lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.344 233728 DEBUG oslo_concurrency.lockutils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.344 233728 DEBUG oslo_concurrency.lockutils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.344 233728 DEBUG nova.compute.manager [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Preparing to wait for external event network-vif-plugged-341d2ffb-54ae-4f73-b5fa-028f5d68084c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.345 233728 DEBUG oslo_concurrency.lockutils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Acquiring lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.345 233728 DEBUG oslo_concurrency.lockutils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.345 233728 DEBUG oslo_concurrency.lockutils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.345 233728 DEBUG nova.compute.manager [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Preparing to wait for external event network-vif-plugged-120af4a6-f9dd-4b7a-8756-ec894c6253de prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.345 233728 DEBUG oslo_concurrency.lockutils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Acquiring lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.346 233728 DEBUG oslo_concurrency.lockutils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.346 233728 DEBUG oslo_concurrency.lockutils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.346 233728 DEBUG nova.compute.manager [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Preparing to wait for external event network-vif-plugged-b25ceb20-79ac-43b0-8487-65bcf31a0a2f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.346 233728 DEBUG oslo_concurrency.lockutils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Acquiring lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.347 233728 DEBUG oslo_concurrency.lockutils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.347 233728 DEBUG oslo_concurrency.lockutils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.348 233728 DEBUG nova.virt.libvirt.vif [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:16:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-294430620',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-294430620',id=102,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAO7RYcvoKKbeCNJWF2AQ95TNCsqK6uMqyPlBh/hzt9hd2A7t9lApW32VsjJK/EoYsg5Sa2rEdj+spCWxKDVwvc4e1llqE5HacHIZ9OjbL+s968KEhF8bE6BdLOxIR6jww==',key_name='tempest-keypair-2058686798',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9e6235234e63419ead82cbd9a07d500f',ramdisk_id='',reservation_id='r-hsa7naig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1166451646',owner_user_name='tempest-TaggedBootDevicesTest-1166451646-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:16:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2646d924f10246c98f4ee29d496eb0f3',uuid=ad6070e8-74bc-4df7-9c2d-5da5da175238,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "082c8475-1c86-43c9-b68c-dafbd502311e", "address": "fa:16:3e:bf:be:93", "network": {"id": "9cbd3709-4a58-47d0-b193-4d753a5463a6", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-537724310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap082c8475-1c", "ovs_interfaceid": "082c8475-1c86-43c9-b68c-dafbd502311e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.348 233728 DEBUG nova.network.os_vif_util [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Converting VIF {"id": "082c8475-1c86-43c9-b68c-dafbd502311e", "address": "fa:16:3e:bf:be:93", "network": {"id": "9cbd3709-4a58-47d0-b193-4d753a5463a6", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-537724310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap082c8475-1c", "ovs_interfaceid": "082c8475-1c86-43c9-b68c-dafbd502311e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.349 233728 DEBUG nova.network.os_vif_util [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:be:93,bridge_name='br-int',has_traffic_filtering=True,id=082c8475-1c86-43c9-b68c-dafbd502311e,network=Network(9cbd3709-4a58-47d0-b193-4d753a5463a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap082c8475-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.349 233728 DEBUG os_vif [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:be:93,bridge_name='br-int',has_traffic_filtering=True,id=082c8475-1c86-43c9-b68c-dafbd502311e,network=Network(9cbd3709-4a58-47d0-b193-4d753a5463a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap082c8475-1c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.350 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.350 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.351 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:17:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 03:17:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:07.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.354 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.355 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap082c8475-1c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.355 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap082c8475-1c, col_values=(('external_ids', {'iface-id': '082c8475-1c86-43c9-b68c-dafbd502311e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bf:be:93', 'vm-uuid': 'ad6070e8-74bc-4df7-9c2d-5da5da175238'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.405 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:07 np0005539552 NetworkManager[48926]: <info>  [1764404227.4062] manager: (tap082c8475-1c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/191)
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.407 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.413 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.414 233728 INFO os_vif [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:be:93,bridge_name='br-int',has_traffic_filtering=True,id=082c8475-1c86-43c9-b68c-dafbd502311e,network=Network(9cbd3709-4a58-47d0-b193-4d753a5463a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap082c8475-1c')#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.415 233728 DEBUG nova.virt.libvirt.vif [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:16:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-294430620',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-294430620',id=102,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAO7RYcvoKKbeCNJWF2AQ95TNCsqK6uMqyPlBh/hzt9hd2A7t9lApW32VsjJK/EoYsg5Sa2rEdj+spCWxKDVwvc4e1llqE5HacHIZ9OjbL+s968KEhF8bE6BdLOxIR6jww==',key_name='tempest-keypair-2058686798',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9e6235234e63419ead82cbd9a07d500f',ramdisk_id='',reservation_id='r-hsa7naig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1166451646',owner_user_name='tempest-TaggedBootDevicesTest-1166451646-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:16:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2646d924f10246c98f4ee29d496eb0f3',uuid=ad6070e8-74bc-4df7-9c2d-5da5da175238,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eb187f09-9e48-4b8c-9111-744004bcec05", "address": "fa:16:3e:b0:ef:da", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.254", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb187f09-9e", "ovs_interfaceid": "eb187f09-9e48-4b8c-9111-744004bcec05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.415 233728 DEBUG nova.network.os_vif_util [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Converting VIF {"id": "eb187f09-9e48-4b8c-9111-744004bcec05", "address": "fa:16:3e:b0:ef:da", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.254", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb187f09-9e", "ovs_interfaceid": "eb187f09-9e48-4b8c-9111-744004bcec05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.416 233728 DEBUG nova.network.os_vif_util [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:ef:da,bridge_name='br-int',has_traffic_filtering=True,id=eb187f09-9e48-4b8c-9111-744004bcec05,network=Network(69989adc-a022-484c-921c-4ddb36b3b0a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapeb187f09-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.416 233728 DEBUG os_vif [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:ef:da,bridge_name='br-int',has_traffic_filtering=True,id=eb187f09-9e48-4b8c-9111-744004bcec05,network=Network(69989adc-a022-484c-921c-4ddb36b3b0a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapeb187f09-9e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.417 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.417 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.417 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.419 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.420 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeb187f09-9e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.420 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapeb187f09-9e, col_values=(('external_ids', {'iface-id': 'eb187f09-9e48-4b8c-9111-744004bcec05', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b0:ef:da', 'vm-uuid': 'ad6070e8-74bc-4df7-9c2d-5da5da175238'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.422 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:07 np0005539552 NetworkManager[48926]: <info>  [1764404227.4239] manager: (tapeb187f09-9e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/192)
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.424 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.460 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.461 233728 INFO os_vif [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:ef:da,bridge_name='br-int',has_traffic_filtering=True,id=eb187f09-9e48-4b8c-9111-744004bcec05,network=Network(69989adc-a022-484c-921c-4ddb36b3b0a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapeb187f09-9e')#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.462 233728 DEBUG nova.virt.libvirt.vif [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:16:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-294430620',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-294430620',id=102,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAO7RYcvoKKbeCNJWF2AQ95TNCsqK6uMqyPlBh/hzt9hd2A7t9lApW32VsjJK/EoYsg5Sa2rEdj+spCWxKDVwvc4e1llqE5HacHIZ9OjbL+s968KEhF8bE6BdLOxIR6jww==',key_name='tempest-keypair-2058686798',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9e6235234e63419ead82cbd9a07d500f',ramdisk_id='',reservation_id='r-hsa7naig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1166451646',owner_user_name='tempest-TaggedBootDevicesTest-1166451646-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:16:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2646d924f10246c98f4ee29d496eb0f3',uuid=ad6070e8-74bc-4df7-9c2d-5da5da175238,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "580c06c8-3541-4c20-b8a7-c02c5f9efe4b", "address": "fa:16:3e:1a:e2:8b", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580c06c8-35", "ovs_interfaceid": "580c06c8-3541-4c20-b8a7-c02c5f9efe4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.462 233728 DEBUG nova.network.os_vif_util [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Converting VIF {"id": "580c06c8-3541-4c20-b8a7-c02c5f9efe4b", "address": "fa:16:3e:1a:e2:8b", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580c06c8-35", "ovs_interfaceid": "580c06c8-3541-4c20-b8a7-c02c5f9efe4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.463 233728 DEBUG nova.network.os_vif_util [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:e2:8b,bridge_name='br-int',has_traffic_filtering=True,id=580c06c8-3541-4c20-b8a7-c02c5f9efe4b,network=Network(69989adc-a022-484c-921c-4ddb36b3b0a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap580c06c8-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.463 233728 DEBUG os_vif [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:e2:8b,bridge_name='br-int',has_traffic_filtering=True,id=580c06c8-3541-4c20-b8a7-c02c5f9efe4b,network=Network(69989adc-a022-484c-921c-4ddb36b3b0a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap580c06c8-35') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.463 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.464 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.464 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.466 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.466 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap580c06c8-35, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.467 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap580c06c8-35, col_values=(('external_ids', {'iface-id': '580c06c8-3541-4c20-b8a7-c02c5f9efe4b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1a:e2:8b', 'vm-uuid': 'ad6070e8-74bc-4df7-9c2d-5da5da175238'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.468 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:07 np0005539552 NetworkManager[48926]: <info>  [1764404227.4689] manager: (tap580c06c8-35): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/193)
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.470 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.477 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.478 233728 INFO os_vif [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:e2:8b,bridge_name='br-int',has_traffic_filtering=True,id=580c06c8-3541-4c20-b8a7-c02c5f9efe4b,network=Network(69989adc-a022-484c-921c-4ddb36b3b0a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap580c06c8-35')#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.478 233728 DEBUG nova.virt.libvirt.vif [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:16:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-294430620',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-294430620',id=102,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAO7RYcvoKKbeCNJWF2AQ95TNCsqK6uMqyPlBh/hzt9hd2A7t9lApW32VsjJK/EoYsg5Sa2rEdj+spCWxKDVwvc4e1llqE5HacHIZ9OjbL+s968KEhF8bE6BdLOxIR6jww==',key_name='tempest-keypair-2058686798',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9e6235234e63419ead82cbd9a07d500f',ramdisk_id='',reservation_id='r-hsa7naig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1166451646',owner_user_name='tempest-TaggedBootDevicesTest-1166451646-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:16:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2646d924f10246c98f4ee29d496eb0f3',uuid=ad6070e8-74bc-4df7-9c2d-5da5da175238,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d87e0ecc-102f-4db6-b802-352374722987", "address": "fa:16:3e:dd:61:ae", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd87e0ecc-10", "ovs_interfaceid": "d87e0ecc-102f-4db6-b802-352374722987", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.479 233728 DEBUG nova.network.os_vif_util [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Converting VIF {"id": "d87e0ecc-102f-4db6-b802-352374722987", "address": "fa:16:3e:dd:61:ae", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd87e0ecc-10", "ovs_interfaceid": "d87e0ecc-102f-4db6-b802-352374722987", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.479 233728 DEBUG nova.network.os_vif_util [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:61:ae,bridge_name='br-int',has_traffic_filtering=True,id=d87e0ecc-102f-4db6-b802-352374722987,network=Network(69989adc-a022-484c-921c-4ddb36b3b0a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd87e0ecc-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.479 233728 DEBUG os_vif [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:61:ae,bridge_name='br-int',has_traffic_filtering=True,id=d87e0ecc-102f-4db6-b802-352374722987,network=Network(69989adc-a022-484c-921c-4ddb36b3b0a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd87e0ecc-10') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.480 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.480 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.480 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.482 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.482 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd87e0ecc-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.483 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd87e0ecc-10, col_values=(('external_ids', {'iface-id': 'd87e0ecc-102f-4db6-b802-352374722987', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dd:61:ae', 'vm-uuid': 'ad6070e8-74bc-4df7-9c2d-5da5da175238'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.484 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:07 np0005539552 NetworkManager[48926]: <info>  [1764404227.4848] manager: (tapd87e0ecc-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/194)
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.486 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.500 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.500 233728 INFO os_vif [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:61:ae,bridge_name='br-int',has_traffic_filtering=True,id=d87e0ecc-102f-4db6-b802-352374722987,network=Network(69989adc-a022-484c-921c-4ddb36b3b0a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd87e0ecc-10')#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.501 233728 DEBUG nova.virt.libvirt.vif [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:16:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-294430620',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-294430620',id=102,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAO7RYcvoKKbeCNJWF2AQ95TNCsqK6uMqyPlBh/hzt9hd2A7t9lApW32VsjJK/EoYsg5Sa2rEdj+spCWxKDVwvc4e1llqE5HacHIZ9OjbL+s968KEhF8bE6BdLOxIR6jww==',key_name='tempest-keypair-2058686798',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9e6235234e63419ead82cbd9a07d500f',ramdisk_id='',reservation_id='r-hsa7naig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_mode
l='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1166451646',owner_user_name='tempest-TaggedBootDevicesTest-1166451646-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:16:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2646d924f10246c98f4ee29d496eb0f3',uuid=ad6070e8-74bc-4df7-9c2d-5da5da175238,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "341d2ffb-54ae-4f73-b5fa-028f5d68084c", "address": "fa:16:3e:0a:5b:43", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.209", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341d2ffb-54", "ovs_interfaceid": "341d2ffb-54ae-4f73-b5fa-028f5d68084c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.501 233728 DEBUG nova.network.os_vif_util [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Converting VIF {"id": "341d2ffb-54ae-4f73-b5fa-028f5d68084c", "address": "fa:16:3e:0a:5b:43", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.209", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341d2ffb-54", "ovs_interfaceid": "341d2ffb-54ae-4f73-b5fa-028f5d68084c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.502 233728 DEBUG nova.network.os_vif_util [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0a:5b:43,bridge_name='br-int',has_traffic_filtering=True,id=341d2ffb-54ae-4f73-b5fa-028f5d68084c,network=Network(69989adc-a022-484c-921c-4ddb36b3b0a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap341d2ffb-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.502 233728 DEBUG os_vif [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0a:5b:43,bridge_name='br-int',has_traffic_filtering=True,id=341d2ffb-54ae-4f73-b5fa-028f5d68084c,network=Network(69989adc-a022-484c-921c-4ddb36b3b0a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap341d2ffb-54') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.502 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.502 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.503 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.505 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.505 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap341d2ffb-54, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.505 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap341d2ffb-54, col_values=(('external_ids', {'iface-id': '341d2ffb-54ae-4f73-b5fa-028f5d68084c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0a:5b:43', 'vm-uuid': 'ad6070e8-74bc-4df7-9c2d-5da5da175238'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.507 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:07 np0005539552 NetworkManager[48926]: <info>  [1764404227.5078] manager: (tap341d2ffb-54): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/195)
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.509 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.523 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.525 233728 INFO os_vif [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0a:5b:43,bridge_name='br-int',has_traffic_filtering=True,id=341d2ffb-54ae-4f73-b5fa-028f5d68084c,network=Network(69989adc-a022-484c-921c-4ddb36b3b0a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap341d2ffb-54')#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.526 233728 DEBUG nova.virt.libvirt.vif [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:16:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-294430620',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-294430620',id=102,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAO7RYcvoKKbeCNJWF2AQ95TNCsqK6uMqyPlBh/hzt9hd2A7t9lApW32VsjJK/EoYsg5Sa2rEdj+spCWxKDVwvc4e1llqE5HacHIZ9OjbL+s968KEhF8bE6BdLOxIR6jww==',key_name='tempest-keypair-2058686798',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9e6235234e63419ead82cbd9a07d500f',ramdisk_id='',reservation_id='r-hsa7naig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_mode
l='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1166451646',owner_user_name='tempest-TaggedBootDevicesTest-1166451646-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:16:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2646d924f10246c98f4ee29d496eb0f3',uuid=ad6070e8-74bc-4df7-9c2d-5da5da175238,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "120af4a6-f9dd-4b7a-8756-ec894c6253de", "address": "fa:16:3e:56:50:8c", "network": {"id": "22531800-2588-4563-9214-766267df7c54", "bridge": "br-int", "label": "tempest-device-tagging-net2-1207720522", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap120af4a6-f9", "ovs_interfaceid": "120af4a6-f9dd-4b7a-8756-ec894c6253de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.526 233728 DEBUG nova.network.os_vif_util [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Converting VIF {"id": "120af4a6-f9dd-4b7a-8756-ec894c6253de", "address": "fa:16:3e:56:50:8c", "network": {"id": "22531800-2588-4563-9214-766267df7c54", "bridge": "br-int", "label": "tempest-device-tagging-net2-1207720522", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap120af4a6-f9", "ovs_interfaceid": "120af4a6-f9dd-4b7a-8756-ec894c6253de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.527 233728 DEBUG nova.network.os_vif_util [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:50:8c,bridge_name='br-int',has_traffic_filtering=True,id=120af4a6-f9dd-4b7a-8756-ec894c6253de,network=Network(22531800-2588-4563-9214-766267df7c54),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap120af4a6-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.527 233728 DEBUG os_vif [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:50:8c,bridge_name='br-int',has_traffic_filtering=True,id=120af4a6-f9dd-4b7a-8756-ec894c6253de,network=Network(22531800-2588-4563-9214-766267df7c54),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap120af4a6-f9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.527 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.528 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.528 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.530 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.530 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap120af4a6-f9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.531 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap120af4a6-f9, col_values=(('external_ids', {'iface-id': '120af4a6-f9dd-4b7a-8756-ec894c6253de', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:56:50:8c', 'vm-uuid': 'ad6070e8-74bc-4df7-9c2d-5da5da175238'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.532 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:07 np0005539552 NetworkManager[48926]: <info>  [1764404227.5328] manager: (tap120af4a6-f9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/196)
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.535 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.546 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.548 233728 INFO os_vif [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:50:8c,bridge_name='br-int',has_traffic_filtering=True,id=120af4a6-f9dd-4b7a-8756-ec894c6253de,network=Network(22531800-2588-4563-9214-766267df7c54),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap120af4a6-f9')#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.549 233728 DEBUG nova.virt.libvirt.vif [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:16:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-294430620',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-294430620',id=102,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAO7RYcvoKKbeCNJWF2AQ95TNCsqK6uMqyPlBh/hzt9hd2A7t9lApW32VsjJK/EoYsg5Sa2rEdj+spCWxKDVwvc4e1llqE5HacHIZ9OjbL+s968KEhF8bE6BdLOxIR6jww==',key_name='tempest-keypair-2058686798',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9e6235234e63419ead82cbd9a07d500f',ramdisk_id='',reservation_id='r-hsa7naig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_mode
l='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1166451646',owner_user_name='tempest-TaggedBootDevicesTest-1166451646-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:16:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2646d924f10246c98f4ee29d496eb0f3',uuid=ad6070e8-74bc-4df7-9c2d-5da5da175238,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b25ceb20-79ac-43b0-8487-65bcf31a0a2f", "address": "fa:16:3e:42:2b:da", "network": {"id": "22531800-2588-4563-9214-766267df7c54", "bridge": "br-int", "label": "tempest-device-tagging-net2-1207720522", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb25ceb20-79", "ovs_interfaceid": "b25ceb20-79ac-43b0-8487-65bcf31a0a2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.549 233728 DEBUG nova.network.os_vif_util [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Converting VIF {"id": "b25ceb20-79ac-43b0-8487-65bcf31a0a2f", "address": "fa:16:3e:42:2b:da", "network": {"id": "22531800-2588-4563-9214-766267df7c54", "bridge": "br-int", "label": "tempest-device-tagging-net2-1207720522", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb25ceb20-79", "ovs_interfaceid": "b25ceb20-79ac-43b0-8487-65bcf31a0a2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.550 233728 DEBUG nova.network.os_vif_util [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:42:2b:da,bridge_name='br-int',has_traffic_filtering=True,id=b25ceb20-79ac-43b0-8487-65bcf31a0a2f,network=Network(22531800-2588-4563-9214-766267df7c54),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb25ceb20-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.550 233728 DEBUG os_vif [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:2b:da,bridge_name='br-int',has_traffic_filtering=True,id=b25ceb20-79ac-43b0-8487-65bcf31a0a2f,network=Network(22531800-2588-4563-9214-766267df7c54),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb25ceb20-79') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.550 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.551 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.551 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.553 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.553 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb25ceb20-79, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.554 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb25ceb20-79, col_values=(('external_ids', {'iface-id': 'b25ceb20-79ac-43b0-8487-65bcf31a0a2f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:42:2b:da', 'vm-uuid': 'ad6070e8-74bc-4df7-9c2d-5da5da175238'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.555 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:07 np0005539552 NetworkManager[48926]: <info>  [1764404227.5563] manager: (tapb25ceb20-79): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/197)
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.558 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.573 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.574 233728 INFO os_vif [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:2b:da,bridge_name='br-int',has_traffic_filtering=True,id=b25ceb20-79ac-43b0-8487-65bcf31a0a2f,network=Network(22531800-2588-4563-9214-766267df7c54),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb25ceb20-79')#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.700 233728 DEBUG nova.virt.libvirt.driver [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.701 233728 DEBUG nova.virt.libvirt.driver [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.701 233728 DEBUG nova.virt.libvirt.driver [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] No VIF found with MAC fa:16:3e:bf:be:93, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.702 233728 DEBUG nova.virt.libvirt.driver [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] No VIF found with MAC fa:16:3e:0a:5b:43, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.702 233728 INFO nova.virt.libvirt.driver [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Using config drive#033[00m
Nov 29 03:17:07 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:07 np0005539552 nova_compute[233724]: 2025-11-29 08:17:07.729 233728 DEBUG nova.storage.rbd_utils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] rbd image ad6070e8-74bc-4df7-9c2d-5da5da175238_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:17:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:07.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:08 np0005539552 nova_compute[233724]: 2025-11-29 08:17:08.536 233728 INFO nova.virt.libvirt.driver [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Creating config drive at /var/lib/nova/instances/ad6070e8-74bc-4df7-9c2d-5da5da175238/disk.config#033[00m
Nov 29 03:17:08 np0005539552 nova_compute[233724]: 2025-11-29 08:17:08.542 233728 DEBUG oslo_concurrency.processutils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ad6070e8-74bc-4df7-9c2d-5da5da175238/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0ql4h4j5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:08 np0005539552 nova_compute[233724]: 2025-11-29 08:17:08.675 233728 DEBUG oslo_concurrency.processutils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ad6070e8-74bc-4df7-9c2d-5da5da175238/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0ql4h4j5" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:08 np0005539552 nova_compute[233724]: 2025-11-29 08:17:08.699 233728 DEBUG nova.storage.rbd_utils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] rbd image ad6070e8-74bc-4df7-9c2d-5da5da175238_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:17:08 np0005539552 nova_compute[233724]: 2025-11-29 08:17:08.702 233728 DEBUG oslo_concurrency.processutils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ad6070e8-74bc-4df7-9c2d-5da5da175238/disk.config ad6070e8-74bc-4df7-9c2d-5da5da175238_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:08 np0005539552 nova_compute[233724]: 2025-11-29 08:17:08.847 233728 DEBUG nova.network.neutron [req-d7a39628-fd12-404b-a341-1e671c2ece17 req-66f9990a-7ab0-416d-90b7-b5a90d332839 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Updated VIF entry in instance network info cache for port 120af4a6-f9dd-4b7a-8756-ec894c6253de. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:17:08 np0005539552 nova_compute[233724]: 2025-11-29 08:17:08.848 233728 DEBUG nova.network.neutron [req-d7a39628-fd12-404b-a341-1e671c2ece17 req-66f9990a-7ab0-416d-90b7-b5a90d332839 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Updating instance_info_cache with network_info: [{"id": "082c8475-1c86-43c9-b68c-dafbd502311e", "address": "fa:16:3e:bf:be:93", "network": {"id": "9cbd3709-4a58-47d0-b193-4d753a5463a6", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-537724310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap082c8475-1c", "ovs_interfaceid": "082c8475-1c86-43c9-b68c-dafbd502311e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "eb187f09-9e48-4b8c-9111-744004bcec05", "address": "fa:16:3e:b0:ef:da", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.254", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb187f09-9e", "ovs_interfaceid": "eb187f09-9e48-4b8c-9111-744004bcec05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "580c06c8-3541-4c20-b8a7-c02c5f9efe4b", "address": "fa:16:3e:1a:e2:8b", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580c06c8-35", "ovs_interfaceid": "580c06c8-3541-4c20-b8a7-c02c5f9efe4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "d87e0ecc-102f-4db6-b802-352374722987", "address": "fa:16:3e:dd:61:ae", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd87e0ecc-10", "ovs_interfaceid": "d87e0ecc-102f-4db6-b802-352374722987", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "341d2ffb-54ae-4f73-b5fa-028f5d68084c", "address": "fa:16:3e:0a:5b:43", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.209", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341d2ffb-54", "ovs_interfaceid": "341d2ffb-54ae-4f73-b5fa-028f5d68084c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "120af4a6-f9dd-4b7a-8756-ec894c6253de", "address": "fa:16:3e:56:50:8c", "network": {"id": "22531800-2588-4563-9214-766267df7c54", "bridge": "br-int", "label": "tempest-device-tagging-net2-1207720522", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap120af4a6-f9", "ovs_interfaceid": "120af4a6-f9dd-4b7a-8756-ec894c6253de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b25ceb20-79ac-43b0-8487-65bcf31a0a2f", "address": "fa:16:3e:42:2b:da", "network": {"id": "22531800-2588-4563-9214-766267df7c54", "bridge": "br-int", "label": "tempest-device-tagging-net2-1207720522", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb25ceb20-79", "ovs_interfaceid": "b25ceb20-79ac-43b0-8487-65bcf31a0a2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:17:08 np0005539552 nova_compute[233724]: 2025-11-29 08:17:08.860 233728 DEBUG oslo_concurrency.processutils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ad6070e8-74bc-4df7-9c2d-5da5da175238/disk.config ad6070e8-74bc-4df7-9c2d-5da5da175238_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:08 np0005539552 nova_compute[233724]: 2025-11-29 08:17:08.861 233728 INFO nova.virt.libvirt.driver [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Deleting local config drive /var/lib/nova/instances/ad6070e8-74bc-4df7-9c2d-5da5da175238/disk.config because it was imported into RBD.#033[00m
Nov 29 03:17:08 np0005539552 nova_compute[233724]: 2025-11-29 08:17:08.873 233728 DEBUG oslo_concurrency.lockutils [req-d7a39628-fd12-404b-a341-1e671c2ece17 req-66f9990a-7ab0-416d-90b7-b5a90d332839 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-ad6070e8-74bc-4df7-9c2d-5da5da175238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:17:08 np0005539552 nova_compute[233724]: 2025-11-29 08:17:08.873 233728 DEBUG nova.compute.manager [req-d7a39628-fd12-404b-a341-1e671c2ece17 req-66f9990a-7ab0-416d-90b7-b5a90d332839 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received event network-changed-b25ceb20-79ac-43b0-8487-65bcf31a0a2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:08 np0005539552 nova_compute[233724]: 2025-11-29 08:17:08.874 233728 DEBUG nova.compute.manager [req-d7a39628-fd12-404b-a341-1e671c2ece17 req-66f9990a-7ab0-416d-90b7-b5a90d332839 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Refreshing instance network info cache due to event network-changed-b25ceb20-79ac-43b0-8487-65bcf31a0a2f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:17:08 np0005539552 nova_compute[233724]: 2025-11-29 08:17:08.874 233728 DEBUG oslo_concurrency.lockutils [req-d7a39628-fd12-404b-a341-1e671c2ece17 req-66f9990a-7ab0-416d-90b7-b5a90d332839 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-ad6070e8-74bc-4df7-9c2d-5da5da175238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:17:08 np0005539552 nova_compute[233724]: 2025-11-29 08:17:08.874 233728 DEBUG oslo_concurrency.lockutils [req-d7a39628-fd12-404b-a341-1e671c2ece17 req-66f9990a-7ab0-416d-90b7-b5a90d332839 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-ad6070e8-74bc-4df7-9c2d-5da5da175238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:17:08 np0005539552 nova_compute[233724]: 2025-11-29 08:17:08.874 233728 DEBUG nova.network.neutron [req-d7a39628-fd12-404b-a341-1e671c2ece17 req-66f9990a-7ab0-416d-90b7-b5a90d332839 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Refreshing network info cache for port b25ceb20-79ac-43b0-8487-65bcf31a0a2f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:17:08 np0005539552 NetworkManager[48926]: <info>  [1764404228.9114] manager: (tap082c8475-1c): new Tun device (/org/freedesktop/NetworkManager/Devices/198)
Nov 29 03:17:08 np0005539552 kernel: tapeb187f09-9e: entered promiscuous mode
Nov 29 03:17:08 np0005539552 NetworkManager[48926]: <info>  [1764404228.9252] manager: (tapeb187f09-9e): new Tun device (/org/freedesktop/NetworkManager/Devices/199)
Nov 29 03:17:08 np0005539552 kernel: tap082c8475-1c: entered promiscuous mode
Nov 29 03:17:08 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:08Z|00398|binding|INFO|Claiming lport 082c8475-1c86-43c9-b68c-dafbd502311e for this chassis.
Nov 29 03:17:08 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:08Z|00399|binding|INFO|082c8475-1c86-43c9-b68c-dafbd502311e: Claiming fa:16:3e:bf:be:93 10.100.0.8
Nov 29 03:17:08 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:08Z|00400|binding|INFO|Claiming lport eb187f09-9e48-4b8c-9111-744004bcec05 for this chassis.
Nov 29 03:17:08 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:08Z|00401|binding|INFO|eb187f09-9e48-4b8c-9111-744004bcec05: Claiming fa:16:3e:b0:ef:da 10.1.1.254
Nov 29 03:17:08 np0005539552 nova_compute[233724]: 2025-11-29 08:17:08.931 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:08 np0005539552 kernel: tap580c06c8-35: entered promiscuous mode
Nov 29 03:17:08 np0005539552 NetworkManager[48926]: <info>  [1764404228.9407] manager: (tap580c06c8-35): new Tun device (/org/freedesktop/NetworkManager/Devices/200)
Nov 29 03:17:08 np0005539552 systemd-udevd[276018]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:17:08 np0005539552 systemd-udevd[276019]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:17:08 np0005539552 systemd-udevd[276021]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:17:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:08.947 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:ef:da 10.1.1.254'], port_security=['fa:16:3e:b0:ef:da 10.1.1.254'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest-58017298', 'neutron:cidrs': '10.1.1.254/24', 'neutron:device_id': 'ad6070e8-74bc-4df7-9c2d-5da5da175238', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-69989adc-a022-484c-921c-4ddb36b3b0a9', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest-58017298', 'neutron:project_id': '9e6235234e63419ead82cbd9a07d500f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4a7ebb69-572b-44c7-b6ca-83fcf8475427', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12a9ce2a-5d12-4041-9945-ab4e61b9a63e, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=eb187f09-9e48-4b8c-9111-744004bcec05) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:17:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:08.948 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:be:93 10.100.0.8'], port_security=['fa:16:3e:bf:be:93 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'ad6070e8-74bc-4df7-9c2d-5da5da175238', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9cbd3709-4a58-47d0-b193-4d753a5463a6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9e6235234e63419ead82cbd9a07d500f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8357ffa4-accb-4ad8-8d09-5b268fa29af8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=24911a34-643e-4f3c-a875-90bab1df8aa9, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=082c8475-1c86-43c9-b68c-dafbd502311e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:17:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:08.949 143400 INFO neutron.agent.ovn.metadata.agent [-] Port eb187f09-9e48-4b8c-9111-744004bcec05 in datapath 69989adc-a022-484c-921c-4ddb36b3b0a9 bound to our chassis#033[00m
Nov 29 03:17:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:08.950 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 69989adc-a022-484c-921c-4ddb36b3b0a9#033[00m
Nov 29 03:17:08 np0005539552 NetworkManager[48926]: <info>  [1764404228.9578] device (tap580c06c8-35): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:17:08 np0005539552 NetworkManager[48926]: <info>  [1764404228.9589] device (tap082c8475-1c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:17:08 np0005539552 NetworkManager[48926]: <info>  [1764404228.9612] manager: (tapd87e0ecc-10): new Tun device (/org/freedesktop/NetworkManager/Devices/201)
Nov 29 03:17:08 np0005539552 NetworkManager[48926]: <info>  [1764404228.9616] device (tap580c06c8-35): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:17:08 np0005539552 NetworkManager[48926]: <info>  [1764404228.9620] device (tap082c8475-1c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:17:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:08.963 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e53a571f-a06d-4851-b288-e46471055a95]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:08.964 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap69989adc-a1 in ovnmeta-69989adc-a022-484c-921c-4ddb36b3b0a9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:17:08 np0005539552 NetworkManager[48926]: <info>  [1764404228.9665] device (tapeb187f09-9e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:17:08 np0005539552 NetworkManager[48926]: <info>  [1764404228.9672] device (tapeb187f09-9e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:17:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:08.966 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap69989adc-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:17:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:08.966 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f2d57e66-de5f-4ee5-bb18-c55fb93713c1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:08.968 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[313d1e70-6d6f-456f-bca9-de9c41ce79e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:08 np0005539552 NetworkManager[48926]: <info>  [1764404228.9786] manager: (tap341d2ffb-54): new Tun device (/org/freedesktop/NetworkManager/Devices/202)
Nov 29 03:17:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:08.985 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[009c0c34-f165-4cd5-943f-bc6a04479029]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:08 np0005539552 NetworkManager[48926]: <info>  [1764404228.9978] manager: (tap120af4a6-f9): new Tun device (/org/freedesktop/NetworkManager/Devices/203)
Nov 29 03:17:09 np0005539552 NetworkManager[48926]: <info>  [1764404229.0116] manager: (tapb25ceb20-79): new Tun device (/org/freedesktop/NetworkManager/Devices/204)
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:09.011 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[91a0d55d-b5df-4936-b63b-b2443ccdcdaa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:09.039 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[909a06b3-629e-4b3c-8195-67459c3476a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:09 np0005539552 systemd-machined[196379]: New machine qemu-43-instance-00000066.
Nov 29 03:17:09 np0005539552 NetworkManager[48926]: <info>  [1764404229.0479] manager: (tap69989adc-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/205)
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:09.047 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[61ccbd66-8648-4b1e-88c4-21bf79af8aa9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:09 np0005539552 systemd[1]: Started Virtual Machine qemu-43-instance-00000066.
Nov 29 03:17:09 np0005539552 nova_compute[233724]: 2025-11-29 08:17:09.064 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:09 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:09Z|00402|binding|INFO|Claiming lport 580c06c8-3541-4c20-b8a7-c02c5f9efe4b for this chassis.
Nov 29 03:17:09 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:09Z|00403|binding|INFO|580c06c8-3541-4c20-b8a7-c02c5f9efe4b: Claiming fa:16:3e:1a:e2:8b 10.1.1.28
Nov 29 03:17:09 np0005539552 kernel: tapd87e0ecc-10: entered promiscuous mode
Nov 29 03:17:09 np0005539552 kernel: tapb25ceb20-79: entered promiscuous mode
Nov 29 03:17:09 np0005539552 kernel: tap341d2ffb-54: entered promiscuous mode
Nov 29 03:17:09 np0005539552 kernel: tap120af4a6-f9: entered promiscuous mode
Nov 29 03:17:09 np0005539552 NetworkManager[48926]: <info>  [1764404229.0709] device (tapd87e0ecc-10): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:09.070 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:e2:8b 10.1.1.28'], port_security=['fa:16:3e:1a:e2:8b 10.1.1.28'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest-164481673', 'neutron:cidrs': '10.1.1.28/24', 'neutron:device_id': 'ad6070e8-74bc-4df7-9c2d-5da5da175238', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-69989adc-a022-484c-921c-4ddb36b3b0a9', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest-164481673', 'neutron:project_id': '9e6235234e63419ead82cbd9a07d500f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4a7ebb69-572b-44c7-b6ca-83fcf8475427', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12a9ce2a-5d12-4041-9945-ab4e61b9a63e, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=580c06c8-3541-4c20-b8a7-c02c5f9efe4b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:17:09 np0005539552 NetworkManager[48926]: <info>  [1764404229.0720] device (tapb25ceb20-79): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:17:09 np0005539552 NetworkManager[48926]: <info>  [1764404229.0730] device (tap341d2ffb-54): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:17:09 np0005539552 NetworkManager[48926]: <info>  [1764404229.0736] device (tap120af4a6-f9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:17:09 np0005539552 NetworkManager[48926]: <info>  [1764404229.0743] device (tapd87e0ecc-10): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:17:09 np0005539552 NetworkManager[48926]: <info>  [1764404229.0747] device (tapb25ceb20-79): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:17:09 np0005539552 NetworkManager[48926]: <info>  [1764404229.0752] device (tap341d2ffb-54): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:17:09 np0005539552 NetworkManager[48926]: <info>  [1764404229.0754] device (tap120af4a6-f9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:17:09 np0005539552 nova_compute[233724]: 2025-11-29 08:17:09.075 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:09 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:09Z|00404|binding|INFO|Claiming lport 120af4a6-f9dd-4b7a-8756-ec894c6253de for this chassis.
Nov 29 03:17:09 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:09Z|00405|binding|INFO|120af4a6-f9dd-4b7a-8756-ec894c6253de: Claiming fa:16:3e:56:50:8c 10.2.2.100
Nov 29 03:17:09 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:09Z|00406|binding|INFO|Claiming lport d87e0ecc-102f-4db6-b802-352374722987 for this chassis.
Nov 29 03:17:09 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:09Z|00407|binding|INFO|d87e0ecc-102f-4db6-b802-352374722987: Claiming fa:16:3e:dd:61:ae 10.1.1.203
Nov 29 03:17:09 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:09Z|00408|binding|INFO|Claiming lport b25ceb20-79ac-43b0-8487-65bcf31a0a2f for this chassis.
Nov 29 03:17:09 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:09Z|00409|binding|INFO|b25ceb20-79ac-43b0-8487-65bcf31a0a2f: Claiming fa:16:3e:42:2b:da 10.2.2.200
Nov 29 03:17:09 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:09Z|00410|binding|INFO|Claiming lport 341d2ffb-54ae-4f73-b5fa-028f5d68084c for this chassis.
Nov 29 03:17:09 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:09Z|00411|binding|INFO|341d2ffb-54ae-4f73-b5fa-028f5d68084c: Claiming fa:16:3e:0a:5b:43 10.1.1.209
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:09.083 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:50:8c 10.2.2.100'], port_security=['fa:16:3e:56:50:8c 10.2.2.100'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.100/24', 'neutron:device_id': 'ad6070e8-74bc-4df7-9c2d-5da5da175238', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-22531800-2588-4563-9214-766267df7c54', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9e6235234e63419ead82cbd9a07d500f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8357ffa4-accb-4ad8-8d09-5b268fa29af8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d562832a-41c1-42fd-9b76-af9eebcd14e7, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=120af4a6-f9dd-4b7a-8756-ec894c6253de) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:09.085 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:2b:da 10.2.2.200'], port_security=['fa:16:3e:42:2b:da 10.2.2.200'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.200/24', 'neutron:device_id': 'ad6070e8-74bc-4df7-9c2d-5da5da175238', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-22531800-2588-4563-9214-766267df7c54', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9e6235234e63419ead82cbd9a07d500f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8357ffa4-accb-4ad8-8d09-5b268fa29af8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d562832a-41c1-42fd-9b76-af9eebcd14e7, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=b25ceb20-79ac-43b0-8487-65bcf31a0a2f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:09.087 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0a:5b:43 10.1.1.209'], port_security=['fa:16:3e:0a:5b:43 10.1.1.209'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.209/24', 'neutron:device_id': 'ad6070e8-74bc-4df7-9c2d-5da5da175238', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-69989adc-a022-484c-921c-4ddb36b3b0a9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9e6235234e63419ead82cbd9a07d500f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8357ffa4-accb-4ad8-8d09-5b268fa29af8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12a9ce2a-5d12-4041-9945-ab4e61b9a63e, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=341d2ffb-54ae-4f73-b5fa-028f5d68084c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:09.088 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:61:ae 10.1.1.203'], port_security=['fa:16:3e:dd:61:ae 10.1.1.203'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.203/24', 'neutron:device_id': 'ad6070e8-74bc-4df7-9c2d-5da5da175238', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-69989adc-a022-484c-921c-4ddb36b3b0a9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9e6235234e63419ead82cbd9a07d500f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8357ffa4-accb-4ad8-8d09-5b268fa29af8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12a9ce2a-5d12-4041-9945-ab4e61b9a63e, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=d87e0ecc-102f-4db6-b802-352374722987) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:09.084 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[512e0020-2331-4bf0-97be-741e004caae2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:09.093 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[aef0f705-5cfd-403e-925e-9b2f3e7836bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:09 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:09Z|00412|binding|INFO|Setting lport eb187f09-9e48-4b8c-9111-744004bcec05 ovn-installed in OVS
Nov 29 03:17:09 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:09Z|00413|binding|INFO|Setting lport eb187f09-9e48-4b8c-9111-744004bcec05 up in Southbound
Nov 29 03:17:09 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:09Z|00414|binding|INFO|Setting lport 082c8475-1c86-43c9-b68c-dafbd502311e ovn-installed in OVS
Nov 29 03:17:09 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:09Z|00415|binding|INFO|Setting lport 082c8475-1c86-43c9-b68c-dafbd502311e up in Southbound
Nov 29 03:17:09 np0005539552 nova_compute[233724]: 2025-11-29 08:17:09.095 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:09 np0005539552 NetworkManager[48926]: <info>  [1764404229.1160] device (tap69989adc-a0): carrier: link connected
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:09.121 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[e837ea00-10bd-4d28-8fb9-525e1e83c57b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:09.137 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[83db656f-47b8-42f2-98cf-9d47582a577d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap69989adc-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a3:95:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 127], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727480, 'reachable_time': 19624, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276076, 'error': None, 'target': 'ovnmeta-69989adc-a022-484c-921c-4ddb36b3b0a9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:09.151 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f4621ac8-e30b-41eb-a08e-8d05250c806a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea3:952a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727480, 'tstamp': 727480}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276077, 'error': None, 'target': 'ovnmeta-69989adc-a022-484c-921c-4ddb36b3b0a9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:09.166 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[1d2aa950-0d21-4fdb-8811-0c39bf8a6472]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap69989adc-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a3:95:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 127], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727480, 'reachable_time': 19624, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 276079, 'error': None, 'target': 'ovnmeta-69989adc-a022-484c-921c-4ddb36b3b0a9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:09 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:09Z|00416|binding|INFO|Setting lport 341d2ffb-54ae-4f73-b5fa-028f5d68084c ovn-installed in OVS
Nov 29 03:17:09 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:09Z|00417|binding|INFO|Setting lport 341d2ffb-54ae-4f73-b5fa-028f5d68084c up in Southbound
Nov 29 03:17:09 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:09Z|00418|binding|INFO|Setting lport 580c06c8-3541-4c20-b8a7-c02c5f9efe4b ovn-installed in OVS
Nov 29 03:17:09 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:09Z|00419|binding|INFO|Setting lport 580c06c8-3541-4c20-b8a7-c02c5f9efe4b up in Southbound
Nov 29 03:17:09 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:09Z|00420|binding|INFO|Setting lport b25ceb20-79ac-43b0-8487-65bcf31a0a2f ovn-installed in OVS
Nov 29 03:17:09 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:09Z|00421|binding|INFO|Setting lport b25ceb20-79ac-43b0-8487-65bcf31a0a2f up in Southbound
Nov 29 03:17:09 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:09Z|00422|binding|INFO|Setting lport 120af4a6-f9dd-4b7a-8756-ec894c6253de ovn-installed in OVS
Nov 29 03:17:09 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:09Z|00423|binding|INFO|Setting lport 120af4a6-f9dd-4b7a-8756-ec894c6253de up in Southbound
Nov 29 03:17:09 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:09Z|00424|binding|INFO|Setting lport d87e0ecc-102f-4db6-b802-352374722987 ovn-installed in OVS
Nov 29 03:17:09 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:09Z|00425|binding|INFO|Setting lport d87e0ecc-102f-4db6-b802-352374722987 up in Southbound
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:09.191 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a2296389-1e60-4559-a72f-75f1d604003f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:09 np0005539552 nova_compute[233724]: 2025-11-29 08:17:09.192 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:09.245 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[03232ffe-a77e-4e6d-9d3b-58e1b2953a95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:09.247 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap69989adc-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:09.247 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:09.247 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap69989adc-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:09 np0005539552 NetworkManager[48926]: <info>  [1764404229.2498] manager: (tap69989adc-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/206)
Nov 29 03:17:09 np0005539552 kernel: tap69989adc-a0: entered promiscuous mode
Nov 29 03:17:09 np0005539552 nova_compute[233724]: 2025-11-29 08:17:09.249 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:09.252 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap69989adc-a0, col_values=(('external_ids', {'iface-id': 'd594d0ee-2dd3-4c06-b632-72a747d80deb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:09 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:09Z|00426|binding|INFO|Releasing lport d594d0ee-2dd3-4c06-b632-72a747d80deb from this chassis (sb_readonly=0)
Nov 29 03:17:09 np0005539552 nova_compute[233724]: 2025-11-29 08:17:09.253 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:09 np0005539552 nova_compute[233724]: 2025-11-29 08:17:09.268 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:09.270 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/69989adc-a022-484c-921c-4ddb36b3b0a9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/69989adc-a022-484c-921c-4ddb36b3b0a9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:09.271 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[dda6ab32-1bfd-4b36-bb2e-9c03c3036fff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:09.272 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-69989adc-a022-484c-921c-4ddb36b3b0a9
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/69989adc-a022-484c-921c-4ddb36b3b0a9.pid.haproxy
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 69989adc-a022-484c-921c-4ddb36b3b0a9
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:09.274 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-69989adc-a022-484c-921c-4ddb36b3b0a9', 'env', 'PROCESS_TAG=haproxy-69989adc-a022-484c-921c-4ddb36b3b0a9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/69989adc-a022-484c-921c-4ddb36b3b0a9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:17:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:17:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:09.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:17:09 np0005539552 nova_compute[233724]: 2025-11-29 08:17:09.637 233728 DEBUG nova.compute.manager [req-be25d2ed-9f55-4b5f-8106-2d3ff2e27a8f req-b4bed478-7b8e-472a-8293-3a58fc30e364 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received event network-vif-plugged-082c8475-1c86-43c9-b68c-dafbd502311e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:09 np0005539552 nova_compute[233724]: 2025-11-29 08:17:09.638 233728 DEBUG oslo_concurrency.lockutils [req-be25d2ed-9f55-4b5f-8106-2d3ff2e27a8f req-b4bed478-7b8e-472a-8293-3a58fc30e364 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:09 np0005539552 nova_compute[233724]: 2025-11-29 08:17:09.638 233728 DEBUG oslo_concurrency.lockutils [req-be25d2ed-9f55-4b5f-8106-2d3ff2e27a8f req-b4bed478-7b8e-472a-8293-3a58fc30e364 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:09 np0005539552 nova_compute[233724]: 2025-11-29 08:17:09.638 233728 DEBUG oslo_concurrency.lockutils [req-be25d2ed-9f55-4b5f-8106-2d3ff2e27a8f req-b4bed478-7b8e-472a-8293-3a58fc30e364 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:09 np0005539552 nova_compute[233724]: 2025-11-29 08:17:09.638 233728 DEBUG nova.compute.manager [req-be25d2ed-9f55-4b5f-8106-2d3ff2e27a8f req-b4bed478-7b8e-472a-8293-3a58fc30e364 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Processing event network-vif-plugged-082c8475-1c86-43c9-b68c-dafbd502311e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:17:09 np0005539552 podman[276111]: 2025-11-29 08:17:09.66303001 +0000 UTC m=+0.084532482 container create 870c999d94bbd61f5e30d0cb78a603949607c0d27d94d9afcdbe2ade5c0a2a0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-69989adc-a022-484c-921c-4ddb36b3b0a9, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS)
Nov 29 03:17:09 np0005539552 nova_compute[233724]: 2025-11-29 08:17:09.674 233728 DEBUG nova.compute.manager [req-8d600186-5892-4512-87d2-bc5b01137a6e req-18b2955a-6789-45a3-a72e-ab597ec411c4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received event network-vif-plugged-eb187f09-9e48-4b8c-9111-744004bcec05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:09 np0005539552 nova_compute[233724]: 2025-11-29 08:17:09.675 233728 DEBUG oslo_concurrency.lockutils [req-8d600186-5892-4512-87d2-bc5b01137a6e req-18b2955a-6789-45a3-a72e-ab597ec411c4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:09 np0005539552 nova_compute[233724]: 2025-11-29 08:17:09.675 233728 DEBUG oslo_concurrency.lockutils [req-8d600186-5892-4512-87d2-bc5b01137a6e req-18b2955a-6789-45a3-a72e-ab597ec411c4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:09 np0005539552 nova_compute[233724]: 2025-11-29 08:17:09.675 233728 DEBUG oslo_concurrency.lockutils [req-8d600186-5892-4512-87d2-bc5b01137a6e req-18b2955a-6789-45a3-a72e-ab597ec411c4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:09 np0005539552 nova_compute[233724]: 2025-11-29 08:17:09.675 233728 DEBUG nova.compute.manager [req-8d600186-5892-4512-87d2-bc5b01137a6e req-18b2955a-6789-45a3-a72e-ab597ec411c4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Processing event network-vif-plugged-eb187f09-9e48-4b8c-9111-744004bcec05 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:17:09 np0005539552 podman[276111]: 2025-11-29 08:17:09.601850516 +0000 UTC m=+0.023353018 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:17:09 np0005539552 systemd[1]: Started libpod-conmon-870c999d94bbd61f5e30d0cb78a603949607c0d27d94d9afcdbe2ade5c0a2a0c.scope.
Nov 29 03:17:09 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:17:09 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d98abae8fdccedb042a653205b031ab42c7661223bbc77a35eefda08c68592a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:17:09 np0005539552 podman[276111]: 2025-11-29 08:17:09.750776297 +0000 UTC m=+0.172278789 container init 870c999d94bbd61f5e30d0cb78a603949607c0d27d94d9afcdbe2ade5c0a2a0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-69989adc-a022-484c-921c-4ddb36b3b0a9, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Nov 29 03:17:09 np0005539552 podman[276111]: 2025-11-29 08:17:09.75644976 +0000 UTC m=+0.177952232 container start 870c999d94bbd61f5e30d0cb78a603949607c0d27d94d9afcdbe2ade5c0a2a0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-69989adc-a022-484c-921c-4ddb36b3b0a9, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 03:17:09 np0005539552 neutron-haproxy-ovnmeta-69989adc-a022-484c-921c-4ddb36b3b0a9[276125]: [NOTICE]   (276129) : New worker (276131) forked
Nov 29 03:17:09 np0005539552 neutron-haproxy-ovnmeta-69989adc-a022-484c-921c-4ddb36b3b0a9[276125]: [NOTICE]   (276129) : Loading success.
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:09.820 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 082c8475-1c86-43c9-b68c-dafbd502311e in datapath 9cbd3709-4a58-47d0-b193-4d753a5463a6 unbound from our chassis#033[00m
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:09.823 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9cbd3709-4a58-47d0-b193-4d753a5463a6#033[00m
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:09.834 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c2476ba2-07f2-486d-a1f6-1bc6714d6cc8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:09.835 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9cbd3709-41 in ovnmeta-9cbd3709-4a58-47d0-b193-4d753a5463a6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:09.836 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9cbd3709-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:09.836 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[01eecff9-6b95-4912-8e35-1a76f3640b01]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:09.837 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[22c4d55e-f7c6-4634-a536-eb1521430022]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:09.848 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[e570a0a5-1245-480e-b41d-432e89cc6aea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:17:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:09.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:09.872 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ac50f756-489b-4bef-8a1d-b1976effe194]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:09.903 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[60b08041-ca94-4d9e-8782-73e0f87f0df8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:09.913 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d1c59109-4086-4f0e-a31a-ce5fae8ed699]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:09 np0005539552 NetworkManager[48926]: <info>  [1764404229.9141] manager: (tap9cbd3709-40): new Veth device (/org/freedesktop/NetworkManager/Devices/207)
Nov 29 03:17:09 np0005539552 systemd-udevd[276056]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:09.950 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[0742c8d4-f22d-44ff-8a0e-5da16321a26b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:09.954 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[3fca5a85-16f3-42b8-9b85-cddbe772ce09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:09 np0005539552 NetworkManager[48926]: <info>  [1764404229.9801] device (tap9cbd3709-40): carrier: link connected
Nov 29 03:17:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:09.987 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[d42168a3-7a40-4d25-9fdc-57fc75dd3e8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:10.002 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[526f196e-034e-41ef-ba7a-63b63e3c348a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9cbd3709-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:ed:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 128], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727566, 'reachable_time': 15631, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276150, 'error': None, 'target': 'ovnmeta-9cbd3709-4a58-47d0-b193-4d753a5463a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:10 np0005539552 nova_compute[233724]: 2025-11-29 08:17:10.010 233728 DEBUG nova.compute.manager [req-8313795b-d506-475c-9cd9-1c79036460a6 req-e6d7140b-98d4-4281-82d9-a16f6a655827 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received event network-vif-plugged-b25ceb20-79ac-43b0-8487-65bcf31a0a2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:10 np0005539552 nova_compute[233724]: 2025-11-29 08:17:10.010 233728 DEBUG oslo_concurrency.lockutils [req-8313795b-d506-475c-9cd9-1c79036460a6 req-e6d7140b-98d4-4281-82d9-a16f6a655827 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:10 np0005539552 nova_compute[233724]: 2025-11-29 08:17:10.011 233728 DEBUG oslo_concurrency.lockutils [req-8313795b-d506-475c-9cd9-1c79036460a6 req-e6d7140b-98d4-4281-82d9-a16f6a655827 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:10 np0005539552 nova_compute[233724]: 2025-11-29 08:17:10.011 233728 DEBUG oslo_concurrency.lockutils [req-8313795b-d506-475c-9cd9-1c79036460a6 req-e6d7140b-98d4-4281-82d9-a16f6a655827 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:10 np0005539552 nova_compute[233724]: 2025-11-29 08:17:10.011 233728 DEBUG nova.compute.manager [req-8313795b-d506-475c-9cd9-1c79036460a6 req-e6d7140b-98d4-4281-82d9-a16f6a655827 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Processing event network-vif-plugged-b25ceb20-79ac-43b0-8487-65bcf31a0a2f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:10.022 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3c4750e6-7630-49a1-86bf-08cba7aaa20c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe71:ed7a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727566, 'tstamp': 727566}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276158, 'error': None, 'target': 'ovnmeta-9cbd3709-4a58-47d0-b193-4d753a5463a6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:10.040 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6187486c-7630-40b2-8fed-4fbf860c9528]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9cbd3709-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:ed:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 128], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727566, 'reachable_time': 15631, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 276169, 'error': None, 'target': 'ovnmeta-9cbd3709-4a58-47d0-b193-4d753a5463a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:10.071 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2cc32930-f725-4e6e-9646-37232034b326]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:10.126 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f7b1e0cf-d167-4ee7-bf0c-45535b1a202d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:10.128 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9cbd3709-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:10.128 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:10.129 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9cbd3709-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:10 np0005539552 NetworkManager[48926]: <info>  [1764404230.1312] manager: (tap9cbd3709-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/208)
Nov 29 03:17:10 np0005539552 kernel: tap9cbd3709-40: entered promiscuous mode
Nov 29 03:17:10 np0005539552 nova_compute[233724]: 2025-11-29 08:17:10.131 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:10.134 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9cbd3709-40, col_values=(('external_ids', {'iface-id': 'e3216152-714e-4f1b-a915-1a42856d0b72'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:10 np0005539552 nova_compute[233724]: 2025-11-29 08:17:10.135 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:10 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:10Z|00427|binding|INFO|Releasing lport e3216152-714e-4f1b-a915-1a42856d0b72 from this chassis (sb_readonly=0)
Nov 29 03:17:10 np0005539552 nova_compute[233724]: 2025-11-29 08:17:10.151 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:10.153 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9cbd3709-4a58-47d0-b193-4d753a5463a6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9cbd3709-4a58-47d0-b193-4d753a5463a6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:10.154 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0705c4fd-bd0a-46db-8dab-56ce22dd626c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:10.154 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-9cbd3709-4a58-47d0-b193-4d753a5463a6
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/9cbd3709-4a58-47d0-b193-4d753a5463a6.pid.haproxy
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 9cbd3709-4a58-47d0-b193-4d753a5463a6
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:10.155 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9cbd3709-4a58-47d0-b193-4d753a5463a6', 'env', 'PROCESS_TAG=haproxy-9cbd3709-4a58-47d0-b193-4d753a5463a6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9cbd3709-4a58-47d0-b193-4d753a5463a6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:17:10 np0005539552 nova_compute[233724]: 2025-11-29 08:17:10.326 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404230.326012, ad6070e8-74bc-4df7-9c2d-5da5da175238 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:17:10 np0005539552 nova_compute[233724]: 2025-11-29 08:17:10.327 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] VM Started (Lifecycle Event)#033[00m
Nov 29 03:17:10 np0005539552 nova_compute[233724]: 2025-11-29 08:17:10.375 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:17:10 np0005539552 nova_compute[233724]: 2025-11-29 08:17:10.380 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404230.3286285, ad6070e8-74bc-4df7-9c2d-5da5da175238 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:17:10 np0005539552 nova_compute[233724]: 2025-11-29 08:17:10.380 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:17:10 np0005539552 nova_compute[233724]: 2025-11-29 08:17:10.421 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:17:10 np0005539552 nova_compute[233724]: 2025-11-29 08:17:10.424 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:17:10 np0005539552 nova_compute[233724]: 2025-11-29 08:17:10.454 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:17:10 np0005539552 podman[276267]: 2025-11-29 08:17:10.525459971 +0000 UTC m=+0.043890631 container create 8303456657ef739a79c39b4c313d59cb274fd46d8eb32a783d195af8a616e832 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9cbd3709-4a58-47d0-b193-4d753a5463a6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 03:17:10 np0005539552 systemd[1]: Started libpod-conmon-8303456657ef739a79c39b4c313d59cb274fd46d8eb32a783d195af8a616e832.scope.
Nov 29 03:17:10 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:17:10 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9aa4b1aa92590b50fd6581823e75dbe61f786a9991f1fa810e3ece2f8a9035bf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:17:10 np0005539552 podman[276267]: 2025-11-29 08:17:10.503172502 +0000 UTC m=+0.021603172 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:17:10 np0005539552 podman[276267]: 2025-11-29 08:17:10.601175825 +0000 UTC m=+0.119606505 container init 8303456657ef739a79c39b4c313d59cb274fd46d8eb32a783d195af8a616e832 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9cbd3709-4a58-47d0-b193-4d753a5463a6, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 29 03:17:10 np0005539552 podman[276267]: 2025-11-29 08:17:10.606258151 +0000 UTC m=+0.124688811 container start 8303456657ef739a79c39b4c313d59cb274fd46d8eb32a783d195af8a616e832 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9cbd3709-4a58-47d0-b193-4d753a5463a6, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 03:17:10 np0005539552 neutron-haproxy-ovnmeta-9cbd3709-4a58-47d0-b193-4d753a5463a6[276282]: [NOTICE]   (276286) : New worker (276288) forked
Nov 29 03:17:10 np0005539552 neutron-haproxy-ovnmeta-9cbd3709-4a58-47d0-b193-4d753a5463a6[276282]: [NOTICE]   (276286) : Loading success.
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:10.660 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 580c06c8-3541-4c20-b8a7-c02c5f9efe4b in datapath 69989adc-a022-484c-921c-4ddb36b3b0a9 unbound from our chassis#033[00m
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:10.662 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 69989adc-a022-484c-921c-4ddb36b3b0a9#033[00m
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:10.676 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2d68a3c2-3a15-462d-9fa9-2f19b245536a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:10.707 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[9d8bdd49-e2e8-4d53-802d-c54d5330fa15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:10.710 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[fdb2dfd0-105a-4964-bc65-adf33a48e73b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:10.738 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[58a5e345-32a5-4cad-8b59-c16c1c0696e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:10.757 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[fdb66493-8b18-445e-b77d-3fae7bee8f42]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap69989adc-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a3:95:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 127], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727480, 'reachable_time': 19624, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276303, 'error': None, 'target': 'ovnmeta-69989adc-a022-484c-921c-4ddb36b3b0a9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:10.774 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d8c41804-4d68-4bb4-b156-8198625ccf03]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap69989adc-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727490, 'tstamp': 727490}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276304, 'error': None, 'target': 'ovnmeta-69989adc-a022-484c-921c-4ddb36b3b0a9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.1.1.2'], ['IFA_LOCAL', '10.1.1.2'], ['IFA_BROADCAST', '10.1.1.255'], ['IFA_LABEL', 'tap69989adc-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727492, 'tstamp': 727492}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276304, 'error': None, 'target': 'ovnmeta-69989adc-a022-484c-921c-4ddb36b3b0a9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:10.775 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap69989adc-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:10 np0005539552 nova_compute[233724]: 2025-11-29 08:17:10.777 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:10 np0005539552 nova_compute[233724]: 2025-11-29 08:17:10.778 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:10.778 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap69989adc-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:10.778 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:10.779 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap69989adc-a0, col_values=(('external_ids', {'iface-id': 'd594d0ee-2dd3-4c06-b632-72a747d80deb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:10.779 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:10.780 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 120af4a6-f9dd-4b7a-8756-ec894c6253de in datapath 22531800-2588-4563-9214-766267df7c54 unbound from our chassis#033[00m
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:10.781 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 22531800-2588-4563-9214-766267df7c54#033[00m
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:10.791 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[912d6fb4-3ccd-4498-bdd2-29acbd75df19]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:10.792 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap22531800-21 in ovnmeta-22531800-2588-4563-9214-766267df7c54 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:10.793 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap22531800-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:10.794 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[25693401-de6d-40c3-9b17-6037d6d63a65]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:10.795 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e3f1afca-1164-46d5-9b30-1293f85071f0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:10.806 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[4f91a96a-c71f-494f-9c6f-25aa0bc25e4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:10.829 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4eeb6241-593d-4fb1-b9db-04ac3ca7ef72]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:10.855 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[b32b5eda-8c94-40f8-bbe7-6674438d4b06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:10 np0005539552 NetworkManager[48926]: <info>  [1764404230.8623] manager: (tap22531800-20): new Veth device (/org/freedesktop/NetworkManager/Devices/209)
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:10.863 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[954f900f-e4c4-4fdf-a314-bfa884bdefc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:10.891 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[aada3a46-17e5-41d3-8e27-19bcdc6602a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:10.893 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[d05675ce-620f-4540-941a-c65448462bab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:10 np0005539552 NetworkManager[48926]: <info>  [1764404230.9123] device (tap22531800-20): carrier: link connected
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:10.918 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[9ab8de3f-4666-48be-b9a1-96fe54f64490]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:10.934 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[60d863f6-f4bc-4c5e-bf88-bd5f83755659]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap22531800-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d7:d1:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727660, 'reachable_time': 26555, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276315, 'error': None, 'target': 'ovnmeta-22531800-2588-4563-9214-766267df7c54', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:10.953 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[8524359f-9e12-40cd-b764-309794e19465]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed7:d171'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727660, 'tstamp': 727660}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276316, 'error': None, 'target': 'ovnmeta-22531800-2588-4563-9214-766267df7c54', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:10.972 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[76f40e0a-3492-40af-a295-d674f252ed76]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap22531800-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d7:d1:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727660, 'reachable_time': 26555, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 276317, 'error': None, 'target': 'ovnmeta-22531800-2588-4563-9214-766267df7c54', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.001 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a8499bd6-59dc-48a9-882b-88ad6f5e59da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.061 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[512db433-396b-4cd0-a8b5-b6e86dd7f824]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.063 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap22531800-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.063 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.063 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap22531800-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:11 np0005539552 NetworkManager[48926]: <info>  [1764404231.0807] manager: (tap22531800-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/210)
Nov 29 03:17:11 np0005539552 nova_compute[233724]: 2025-11-29 08:17:11.080 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:11 np0005539552 kernel: tap22531800-20: entered promiscuous mode
Nov 29 03:17:11 np0005539552 nova_compute[233724]: 2025-11-29 08:17:11.082 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.082 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap22531800-20, col_values=(('external_ids', {'iface-id': '92ed063a-574d-4b3d-9fe4-dcebddd764ca'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:11 np0005539552 nova_compute[233724]: 2025-11-29 08:17:11.083 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:11 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:11Z|00428|binding|INFO|Releasing lport 92ed063a-574d-4b3d-9fe4-dcebddd764ca from this chassis (sb_readonly=0)
Nov 29 03:17:11 np0005539552 nova_compute[233724]: 2025-11-29 08:17:11.099 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.100 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/22531800-2588-4563-9214-766267df7c54.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/22531800-2588-4563-9214-766267df7c54.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.101 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[1d8c2f7c-5682-400f-b38c-ac9e81c4d155]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.102 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-22531800-2588-4563-9214-766267df7c54
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/22531800-2588-4563-9214-766267df7c54.pid.haproxy
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 22531800-2588-4563-9214-766267df7c54
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.102 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-22531800-2588-4563-9214-766267df7c54', 'env', 'PROCESS_TAG=haproxy-22531800-2588-4563-9214-766267df7c54', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/22531800-2588-4563-9214-766267df7c54.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:17:11 np0005539552 nova_compute[233724]: 2025-11-29 08:17:11.128 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:11.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:11 np0005539552 podman[276347]: 2025-11-29 08:17:11.443037012 +0000 UTC m=+0.046088009 container create 418d0554d14d9686847b79cbc98609e3dcb766f58ec336f895e6dde0a9ff2e55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22531800-2588-4563-9214-766267df7c54, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 03:17:11 np0005539552 systemd[1]: Started libpod-conmon-418d0554d14d9686847b79cbc98609e3dcb766f58ec336f895e6dde0a9ff2e55.scope.
Nov 29 03:17:11 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:17:11 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85509d81a220e6ca70ce8b1f63fd113458b3128f50a35d0a2a539cba8cf6f4fb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:17:11 np0005539552 podman[276347]: 2025-11-29 08:17:11.512045956 +0000 UTC m=+0.115096993 container init 418d0554d14d9686847b79cbc98609e3dcb766f58ec336f895e6dde0a9ff2e55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22531800-2588-4563-9214-766267df7c54, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 03:17:11 np0005539552 podman[276347]: 2025-11-29 08:17:11.419826878 +0000 UTC m=+0.022877895 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:17:11 np0005539552 podman[276347]: 2025-11-29 08:17:11.516925547 +0000 UTC m=+0.119976554 container start 418d0554d14d9686847b79cbc98609e3dcb766f58ec336f895e6dde0a9ff2e55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22531800-2588-4563-9214-766267df7c54, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:17:11 np0005539552 neutron-haproxy-ovnmeta-22531800-2588-4563-9214-766267df7c54[276362]: [NOTICE]   (276366) : New worker (276368) forked
Nov 29 03:17:11 np0005539552 neutron-haproxy-ovnmeta-22531800-2588-4563-9214-766267df7c54[276362]: [NOTICE]   (276366) : Loading success.
Nov 29 03:17:11 np0005539552 nova_compute[233724]: 2025-11-29 08:17:11.551 233728 DEBUG nova.network.neutron [req-d7a39628-fd12-404b-a341-1e671c2ece17 req-66f9990a-7ab0-416d-90b7-b5a90d332839 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Updated VIF entry in instance network info cache for port b25ceb20-79ac-43b0-8487-65bcf31a0a2f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:17:11 np0005539552 nova_compute[233724]: 2025-11-29 08:17:11.552 233728 DEBUG nova.network.neutron [req-d7a39628-fd12-404b-a341-1e671c2ece17 req-66f9990a-7ab0-416d-90b7-b5a90d332839 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Updating instance_info_cache with network_info: [{"id": "082c8475-1c86-43c9-b68c-dafbd502311e", "address": "fa:16:3e:bf:be:93", "network": {"id": "9cbd3709-4a58-47d0-b193-4d753a5463a6", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-537724310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap082c8475-1c", "ovs_interfaceid": "082c8475-1c86-43c9-b68c-dafbd502311e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "eb187f09-9e48-4b8c-9111-744004bcec05", "address": "fa:16:3e:b0:ef:da", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.254", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb187f09-9e", "ovs_interfaceid": "eb187f09-9e48-4b8c-9111-744004bcec05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "580c06c8-3541-4c20-b8a7-c02c5f9efe4b", "address": "fa:16:3e:1a:e2:8b", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580c06c8-35", "ovs_interfaceid": "580c06c8-3541-4c20-b8a7-c02c5f9efe4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "d87e0ecc-102f-4db6-b802-352374722987", "address": "fa:16:3e:dd:61:ae", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd87e0ecc-10", "ovs_interfaceid": "d87e0ecc-102f-4db6-b802-352374722987", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "341d2ffb-54ae-4f73-b5fa-028f5d68084c", "address": "fa:16:3e:0a:5b:43", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.209", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341d2ffb-54", "ovs_interfaceid": "341d2ffb-54ae-4f73-b5fa-028f5d68084c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "120af4a6-f9dd-4b7a-8756-ec894c6253de", "address": "fa:16:3e:56:50:8c", "network": {"id": "22531800-2588-4563-9214-766267df7c54", "bridge": "br-int", "label": "tempest-device-tagging-net2-1207720522", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap120af4a6-f9", "ovs_interfaceid": "120af4a6-f9dd-4b7a-8756-ec894c6253de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b25ceb20-79ac-43b0-8487-65bcf31a0a2f", "address": "fa:16:3e:42:2b:da", "network": {"id": "22531800-2588-4563-9214-766267df7c54", "bridge": "br-int", "label": "tempest-device-tagging-net2-1207720522", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb25ceb20-79", "ovs_interfaceid": "b25ceb20-79ac-43b0-8487-65bcf31a0a2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.564 143400 INFO neutron.agent.ovn.metadata.agent [-] Port b25ceb20-79ac-43b0-8487-65bcf31a0a2f in datapath 22531800-2588-4563-9214-766267df7c54 unbound from our chassis#033[00m
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.566 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 22531800-2588-4563-9214-766267df7c54#033[00m
Nov 29 03:17:11 np0005539552 nova_compute[233724]: 2025-11-29 08:17:11.572 233728 DEBUG oslo_concurrency.lockutils [req-d7a39628-fd12-404b-a341-1e671c2ece17 req-66f9990a-7ab0-416d-90b7-b5a90d332839 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-ad6070e8-74bc-4df7-9c2d-5da5da175238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.580 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[cbb8717d-29c9-46a5-af9f-eb16c8c4e350]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.608 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[24faab30-35f0-4c10-87b5-cc6c3c3ffbe8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.610 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[26792d8d-6ab0-4292-9684-93275caa2912]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:11 np0005539552 nova_compute[233724]: 2025-11-29 08:17:11.624 233728 DEBUG nova.compute.manager [req-585fec19-eb3a-491f-913b-70e1bc22be73 req-09dd5035-0279-4a8a-889f-6ec85d2426c1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received event network-vif-plugged-580c06c8-3541-4c20-b8a7-c02c5f9efe4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:11 np0005539552 nova_compute[233724]: 2025-11-29 08:17:11.624 233728 DEBUG oslo_concurrency.lockutils [req-585fec19-eb3a-491f-913b-70e1bc22be73 req-09dd5035-0279-4a8a-889f-6ec85d2426c1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:11 np0005539552 nova_compute[233724]: 2025-11-29 08:17:11.625 233728 DEBUG oslo_concurrency.lockutils [req-585fec19-eb3a-491f-913b-70e1bc22be73 req-09dd5035-0279-4a8a-889f-6ec85d2426c1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:11 np0005539552 nova_compute[233724]: 2025-11-29 08:17:11.625 233728 DEBUG oslo_concurrency.lockutils [req-585fec19-eb3a-491f-913b-70e1bc22be73 req-09dd5035-0279-4a8a-889f-6ec85d2426c1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:11 np0005539552 nova_compute[233724]: 2025-11-29 08:17:11.625 233728 DEBUG nova.compute.manager [req-585fec19-eb3a-491f-913b-70e1bc22be73 req-09dd5035-0279-4a8a-889f-6ec85d2426c1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Processing event network-vif-plugged-580c06c8-3541-4c20-b8a7-c02c5f9efe4b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.636 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[5dfb3d31-23da-4843-9896-045b79cbdf67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.652 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d3f2204f-1ee3-4c16-926f-8aacb3d3eaa0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap22531800-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d7:d1:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 5, 'rx_bytes': 180, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 5, 'rx_bytes': 180, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727660, 'reachable_time': 26555, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276382, 'error': None, 'target': 'ovnmeta-22531800-2588-4563-9214-766267df7c54', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.665 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[9055699c-2dab-4d98-a541-86c7b311138b]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.2.2.2'], ['IFA_LOCAL', '10.2.2.2'], ['IFA_BROADCAST', '10.2.2.255'], ['IFA_LABEL', 'tap22531800-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727671, 'tstamp': 727671}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276383, 'error': None, 'target': 'ovnmeta-22531800-2588-4563-9214-766267df7c54', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap22531800-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727674, 'tstamp': 727674}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276383, 'error': None, 'target': 'ovnmeta-22531800-2588-4563-9214-766267df7c54', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.666 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap22531800-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:11 np0005539552 nova_compute[233724]: 2025-11-29 08:17:11.668 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:11 np0005539552 nova_compute[233724]: 2025-11-29 08:17:11.669 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.669 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap22531800-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.669 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.670 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap22531800-20, col_values=(('external_ids', {'iface-id': '92ed063a-574d-4b3d-9fe4-dcebddd764ca'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.670 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.671 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 341d2ffb-54ae-4f73-b5fa-028f5d68084c in datapath 69989adc-a022-484c-921c-4ddb36b3b0a9 unbound from our chassis#033[00m
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.673 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 69989adc-a022-484c-921c-4ddb36b3b0a9#033[00m
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.686 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3dbfdee2-0e43-4cd7-a635-db12a5e214f2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.714 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[5aec1b8a-e613-4e8b-981c-66d8bbf6abf2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.716 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[4d4dd156-be1d-41e5-9d46-cd26ec363037]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.739 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[7b37c4c8-6f1e-481e-9dc6-3ddb9934ff35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.752 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2d5488cc-9c04-4647-9298-13b42cda8cf5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap69989adc-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a3:95:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 127], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727480, 'reachable_time': 19624, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276389, 'error': None, 'target': 'ovnmeta-69989adc-a022-484c-921c-4ddb36b3b0a9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.763 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[419aedbe-8fad-4228-9a23-0233faa6c17e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap69989adc-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727490, 'tstamp': 727490}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276390, 'error': None, 'target': 'ovnmeta-69989adc-a022-484c-921c-4ddb36b3b0a9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.1.1.2'], ['IFA_LOCAL', '10.1.1.2'], ['IFA_BROADCAST', '10.1.1.255'], ['IFA_LABEL', 'tap69989adc-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727492, 'tstamp': 727492}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276390, 'error': None, 'target': 'ovnmeta-69989adc-a022-484c-921c-4ddb36b3b0a9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.764 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap69989adc-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:11 np0005539552 nova_compute[233724]: 2025-11-29 08:17:11.766 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:11 np0005539552 nova_compute[233724]: 2025-11-29 08:17:11.767 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.767 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap69989adc-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.767 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.767 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap69989adc-a0, col_values=(('external_ids', {'iface-id': 'd594d0ee-2dd3-4c06-b632-72a747d80deb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.768 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.769 143400 INFO neutron.agent.ovn.metadata.agent [-] Port d87e0ecc-102f-4db6-b802-352374722987 in datapath 69989adc-a022-484c-921c-4ddb36b3b0a9 unbound from our chassis#033[00m
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.770 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 69989adc-a022-484c-921c-4ddb36b3b0a9#033[00m
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.781 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c5bbdb75-6bdc-4573-bb7c-1dfd0e29f288]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.808 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[d30ce177-004e-4b93-9e44-fe52e8215586]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.811 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[8282213f-aa0d-4db1-bd5a-0d197ee3b830]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.834 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[cb0422e8-587e-499a-89e4-cb4eb5c25f9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.849 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d9855d52-8829-48f5-ad2e-01066b57342e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap69989adc-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a3:95:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 127], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727480, 'reachable_time': 19624, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276396, 'error': None, 'target': 'ovnmeta-69989adc-a022-484c-921c-4ddb36b3b0a9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.864 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[26cbbfd9-e523-44e3-a544-af4103319246]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap69989adc-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727490, 'tstamp': 727490}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276397, 'error': None, 'target': 'ovnmeta-69989adc-a022-484c-921c-4ddb36b3b0a9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.1.1.2'], ['IFA_LOCAL', '10.1.1.2'], ['IFA_BROADCAST', '10.1.1.255'], ['IFA_LABEL', 'tap69989adc-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727492, 'tstamp': 727492}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276397, 'error': None, 'target': 'ovnmeta-69989adc-a022-484c-921c-4ddb36b3b0a9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:17:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:11.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.866 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap69989adc-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:11 np0005539552 nova_compute[233724]: 2025-11-29 08:17:11.867 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:11 np0005539552 nova_compute[233724]: 2025-11-29 08:17:11.868 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.870 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap69989adc-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.870 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.871 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap69989adc-a0, col_values=(('external_ids', {'iface-id': 'd594d0ee-2dd3-4c06-b632-72a747d80deb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:11.871 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:17:11 np0005539552 nova_compute[233724]: 2025-11-29 08:17:11.938 233728 DEBUG nova.compute.manager [req-86ce57e5-3c3b-4834-bbef-6689fa3f0f47 req-69e08bba-32e2-4be0-b516-05c7020cb6cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received event network-vif-plugged-eb187f09-9e48-4b8c-9111-744004bcec05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:11 np0005539552 nova_compute[233724]: 2025-11-29 08:17:11.939 233728 DEBUG oslo_concurrency.lockutils [req-86ce57e5-3c3b-4834-bbef-6689fa3f0f47 req-69e08bba-32e2-4be0-b516-05c7020cb6cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:11 np0005539552 nova_compute[233724]: 2025-11-29 08:17:11.939 233728 DEBUG oslo_concurrency.lockutils [req-86ce57e5-3c3b-4834-bbef-6689fa3f0f47 req-69e08bba-32e2-4be0-b516-05c7020cb6cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:11 np0005539552 nova_compute[233724]: 2025-11-29 08:17:11.940 233728 DEBUG oslo_concurrency.lockutils [req-86ce57e5-3c3b-4834-bbef-6689fa3f0f47 req-69e08bba-32e2-4be0-b516-05c7020cb6cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:11 np0005539552 nova_compute[233724]: 2025-11-29 08:17:11.940 233728 DEBUG nova.compute.manager [req-86ce57e5-3c3b-4834-bbef-6689fa3f0f47 req-69e08bba-32e2-4be0-b516-05c7020cb6cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] No event matching network-vif-plugged-eb187f09-9e48-4b8c-9111-744004bcec05 in dict_keys([('network-vif-plugged', 'd87e0ecc-102f-4db6-b802-352374722987'), ('network-vif-plugged', '341d2ffb-54ae-4f73-b5fa-028f5d68084c'), ('network-vif-plugged', '120af4a6-f9dd-4b7a-8756-ec894c6253de')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Nov 29 03:17:11 np0005539552 nova_compute[233724]: 2025-11-29 08:17:11.941 233728 WARNING nova.compute.manager [req-86ce57e5-3c3b-4834-bbef-6689fa3f0f47 req-69e08bba-32e2-4be0-b516-05c7020cb6cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received unexpected event network-vif-plugged-eb187f09-9e48-4b8c-9111-744004bcec05 for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:17:11 np0005539552 nova_compute[233724]: 2025-11-29 08:17:11.941 233728 DEBUG nova.compute.manager [req-86ce57e5-3c3b-4834-bbef-6689fa3f0f47 req-69e08bba-32e2-4be0-b516-05c7020cb6cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received event network-vif-plugged-120af4a6-f9dd-4b7a-8756-ec894c6253de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:11 np0005539552 nova_compute[233724]: 2025-11-29 08:17:11.941 233728 DEBUG oslo_concurrency.lockutils [req-86ce57e5-3c3b-4834-bbef-6689fa3f0f47 req-69e08bba-32e2-4be0-b516-05c7020cb6cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:11 np0005539552 nova_compute[233724]: 2025-11-29 08:17:11.942 233728 DEBUG oslo_concurrency.lockutils [req-86ce57e5-3c3b-4834-bbef-6689fa3f0f47 req-69e08bba-32e2-4be0-b516-05c7020cb6cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:11 np0005539552 nova_compute[233724]: 2025-11-29 08:17:11.942 233728 DEBUG oslo_concurrency.lockutils [req-86ce57e5-3c3b-4834-bbef-6689fa3f0f47 req-69e08bba-32e2-4be0-b516-05c7020cb6cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:11 np0005539552 nova_compute[233724]: 2025-11-29 08:17:11.943 233728 DEBUG nova.compute.manager [req-86ce57e5-3c3b-4834-bbef-6689fa3f0f47 req-69e08bba-32e2-4be0-b516-05c7020cb6cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Processing event network-vif-plugged-120af4a6-f9dd-4b7a-8756-ec894c6253de _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:17:11 np0005539552 nova_compute[233724]: 2025-11-29 08:17:11.943 233728 DEBUG nova.compute.manager [req-86ce57e5-3c3b-4834-bbef-6689fa3f0f47 req-69e08bba-32e2-4be0-b516-05c7020cb6cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received event network-vif-plugged-120af4a6-f9dd-4b7a-8756-ec894c6253de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:11 np0005539552 nova_compute[233724]: 2025-11-29 08:17:11.943 233728 DEBUG oslo_concurrency.lockutils [req-86ce57e5-3c3b-4834-bbef-6689fa3f0f47 req-69e08bba-32e2-4be0-b516-05c7020cb6cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:11 np0005539552 nova_compute[233724]: 2025-11-29 08:17:11.944 233728 DEBUG oslo_concurrency.lockutils [req-86ce57e5-3c3b-4834-bbef-6689fa3f0f47 req-69e08bba-32e2-4be0-b516-05c7020cb6cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:11 np0005539552 nova_compute[233724]: 2025-11-29 08:17:11.944 233728 DEBUG oslo_concurrency.lockutils [req-86ce57e5-3c3b-4834-bbef-6689fa3f0f47 req-69e08bba-32e2-4be0-b516-05c7020cb6cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:11 np0005539552 nova_compute[233724]: 2025-11-29 08:17:11.945 233728 DEBUG nova.compute.manager [req-86ce57e5-3c3b-4834-bbef-6689fa3f0f47 req-69e08bba-32e2-4be0-b516-05c7020cb6cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] No event matching network-vif-plugged-120af4a6-f9dd-4b7a-8756-ec894c6253de in dict_keys([('network-vif-plugged', 'd87e0ecc-102f-4db6-b802-352374722987'), ('network-vif-plugged', '341d2ffb-54ae-4f73-b5fa-028f5d68084c')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Nov 29 03:17:11 np0005539552 nova_compute[233724]: 2025-11-29 08:17:11.945 233728 WARNING nova.compute.manager [req-86ce57e5-3c3b-4834-bbef-6689fa3f0f47 req-69e08bba-32e2-4be0-b516-05c7020cb6cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received unexpected event network-vif-plugged-120af4a6-f9dd-4b7a-8756-ec894c6253de for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:17:11 np0005539552 nova_compute[233724]: 2025-11-29 08:17:11.948 233728 DEBUG nova.compute.manager [req-849e935f-1ba2-48b3-bfdd-5b577bcc6622 req-d7d153c3-fd8d-4556-be03-90d8c39d4fce 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received event network-vif-plugged-082c8475-1c86-43c9-b68c-dafbd502311e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:11 np0005539552 nova_compute[233724]: 2025-11-29 08:17:11.949 233728 DEBUG oslo_concurrency.lockutils [req-849e935f-1ba2-48b3-bfdd-5b577bcc6622 req-d7d153c3-fd8d-4556-be03-90d8c39d4fce 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:11 np0005539552 nova_compute[233724]: 2025-11-29 08:17:11.949 233728 DEBUG oslo_concurrency.lockutils [req-849e935f-1ba2-48b3-bfdd-5b577bcc6622 req-d7d153c3-fd8d-4556-be03-90d8c39d4fce 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:11 np0005539552 nova_compute[233724]: 2025-11-29 08:17:11.950 233728 DEBUG oslo_concurrency.lockutils [req-849e935f-1ba2-48b3-bfdd-5b577bcc6622 req-d7d153c3-fd8d-4556-be03-90d8c39d4fce 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:11 np0005539552 nova_compute[233724]: 2025-11-29 08:17:11.950 233728 DEBUG nova.compute.manager [req-849e935f-1ba2-48b3-bfdd-5b577bcc6622 req-d7d153c3-fd8d-4556-be03-90d8c39d4fce 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] No event matching network-vif-plugged-082c8475-1c86-43c9-b68c-dafbd502311e in dict_keys([('network-vif-plugged', 'd87e0ecc-102f-4db6-b802-352374722987'), ('network-vif-plugged', '341d2ffb-54ae-4f73-b5fa-028f5d68084c')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Nov 29 03:17:11 np0005539552 nova_compute[233724]: 2025-11-29 08:17:11.950 233728 WARNING nova.compute.manager [req-849e935f-1ba2-48b3-bfdd-5b577bcc6622 req-d7d153c3-fd8d-4556-be03-90d8c39d4fce 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received unexpected event network-vif-plugged-082c8475-1c86-43c9-b68c-dafbd502311e for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:17:12 np0005539552 nova_compute[233724]: 2025-11-29 08:17:12.220 233728 DEBUG nova.compute.manager [req-e0e8d3ce-267b-4d5a-a519-c6789bb4480c req-ef79f076-19f8-4451-a667-d471e1b4a6c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received event network-vif-plugged-b25ceb20-79ac-43b0-8487-65bcf31a0a2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:12 np0005539552 nova_compute[233724]: 2025-11-29 08:17:12.221 233728 DEBUG oslo_concurrency.lockutils [req-e0e8d3ce-267b-4d5a-a519-c6789bb4480c req-ef79f076-19f8-4451-a667-d471e1b4a6c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:12 np0005539552 nova_compute[233724]: 2025-11-29 08:17:12.221 233728 DEBUG oslo_concurrency.lockutils [req-e0e8d3ce-267b-4d5a-a519-c6789bb4480c req-ef79f076-19f8-4451-a667-d471e1b4a6c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:12 np0005539552 nova_compute[233724]: 2025-11-29 08:17:12.221 233728 DEBUG oslo_concurrency.lockutils [req-e0e8d3ce-267b-4d5a-a519-c6789bb4480c req-ef79f076-19f8-4451-a667-d471e1b4a6c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:12 np0005539552 nova_compute[233724]: 2025-11-29 08:17:12.222 233728 DEBUG nova.compute.manager [req-e0e8d3ce-267b-4d5a-a519-c6789bb4480c req-ef79f076-19f8-4451-a667-d471e1b4a6c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] No event matching network-vif-plugged-b25ceb20-79ac-43b0-8487-65bcf31a0a2f in dict_keys([('network-vif-plugged', 'd87e0ecc-102f-4db6-b802-352374722987'), ('network-vif-plugged', '341d2ffb-54ae-4f73-b5fa-028f5d68084c')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Nov 29 03:17:12 np0005539552 nova_compute[233724]: 2025-11-29 08:17:12.222 233728 WARNING nova.compute.manager [req-e0e8d3ce-267b-4d5a-a519-c6789bb4480c req-ef79f076-19f8-4451-a667-d471e1b4a6c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received unexpected event network-vif-plugged-b25ceb20-79ac-43b0-8487-65bcf31a0a2f for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:17:12 np0005539552 nova_compute[233724]: 2025-11-29 08:17:12.222 233728 DEBUG nova.compute.manager [req-e0e8d3ce-267b-4d5a-a519-c6789bb4480c req-ef79f076-19f8-4451-a667-d471e1b4a6c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received event network-vif-plugged-d87e0ecc-102f-4db6-b802-352374722987 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:12 np0005539552 nova_compute[233724]: 2025-11-29 08:17:12.223 233728 DEBUG oslo_concurrency.lockutils [req-e0e8d3ce-267b-4d5a-a519-c6789bb4480c req-ef79f076-19f8-4451-a667-d471e1b4a6c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:12 np0005539552 nova_compute[233724]: 2025-11-29 08:17:12.223 233728 DEBUG oslo_concurrency.lockutils [req-e0e8d3ce-267b-4d5a-a519-c6789bb4480c req-ef79f076-19f8-4451-a667-d471e1b4a6c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:12 np0005539552 nova_compute[233724]: 2025-11-29 08:17:12.223 233728 DEBUG oslo_concurrency.lockutils [req-e0e8d3ce-267b-4d5a-a519-c6789bb4480c req-ef79f076-19f8-4451-a667-d471e1b4a6c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:12 np0005539552 nova_compute[233724]: 2025-11-29 08:17:12.223 233728 DEBUG nova.compute.manager [req-e0e8d3ce-267b-4d5a-a519-c6789bb4480c req-ef79f076-19f8-4451-a667-d471e1b4a6c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Processing event network-vif-plugged-d87e0ecc-102f-4db6-b802-352374722987 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:17:12 np0005539552 nova_compute[233724]: 2025-11-29 08:17:12.224 233728 DEBUG nova.compute.manager [req-e0e8d3ce-267b-4d5a-a519-c6789bb4480c req-ef79f076-19f8-4451-a667-d471e1b4a6c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received event network-vif-plugged-d87e0ecc-102f-4db6-b802-352374722987 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:12 np0005539552 nova_compute[233724]: 2025-11-29 08:17:12.224 233728 DEBUG oslo_concurrency.lockutils [req-e0e8d3ce-267b-4d5a-a519-c6789bb4480c req-ef79f076-19f8-4451-a667-d471e1b4a6c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:12 np0005539552 nova_compute[233724]: 2025-11-29 08:17:12.224 233728 DEBUG oslo_concurrency.lockutils [req-e0e8d3ce-267b-4d5a-a519-c6789bb4480c req-ef79f076-19f8-4451-a667-d471e1b4a6c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:12 np0005539552 nova_compute[233724]: 2025-11-29 08:17:12.224 233728 DEBUG oslo_concurrency.lockutils [req-e0e8d3ce-267b-4d5a-a519-c6789bb4480c req-ef79f076-19f8-4451-a667-d471e1b4a6c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:12 np0005539552 nova_compute[233724]: 2025-11-29 08:17:12.225 233728 DEBUG nova.compute.manager [req-e0e8d3ce-267b-4d5a-a519-c6789bb4480c req-ef79f076-19f8-4451-a667-d471e1b4a6c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] No event matching network-vif-plugged-d87e0ecc-102f-4db6-b802-352374722987 in dict_keys([('network-vif-plugged', '341d2ffb-54ae-4f73-b5fa-028f5d68084c')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Nov 29 03:17:12 np0005539552 nova_compute[233724]: 2025-11-29 08:17:12.225 233728 WARNING nova.compute.manager [req-e0e8d3ce-267b-4d5a-a519-c6789bb4480c req-ef79f076-19f8-4451-a667-d471e1b4a6c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received unexpected event network-vif-plugged-d87e0ecc-102f-4db6-b802-352374722987 for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:17:12 np0005539552 nova_compute[233724]: 2025-11-29 08:17:12.556 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:13.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:13 np0005539552 nova_compute[233724]: 2025-11-29 08:17:13.816 233728 DEBUG nova.compute.manager [req-38e66565-a58b-428a-8b09-de5c9829bf14 req-5b41b819-f101-42a2-8857-5187d36fa705 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received event network-vif-plugged-580c06c8-3541-4c20-b8a7-c02c5f9efe4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:13 np0005539552 nova_compute[233724]: 2025-11-29 08:17:13.822 233728 DEBUG oslo_concurrency.lockutils [req-38e66565-a58b-428a-8b09-de5c9829bf14 req-5b41b819-f101-42a2-8857-5187d36fa705 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:13 np0005539552 nova_compute[233724]: 2025-11-29 08:17:13.823 233728 DEBUG oslo_concurrency.lockutils [req-38e66565-a58b-428a-8b09-de5c9829bf14 req-5b41b819-f101-42a2-8857-5187d36fa705 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:13 np0005539552 nova_compute[233724]: 2025-11-29 08:17:13.823 233728 DEBUG oslo_concurrency.lockutils [req-38e66565-a58b-428a-8b09-de5c9829bf14 req-5b41b819-f101-42a2-8857-5187d36fa705 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:13 np0005539552 nova_compute[233724]: 2025-11-29 08:17:13.823 233728 DEBUG nova.compute.manager [req-38e66565-a58b-428a-8b09-de5c9829bf14 req-5b41b819-f101-42a2-8857-5187d36fa705 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] No event matching network-vif-plugged-580c06c8-3541-4c20-b8a7-c02c5f9efe4b in dict_keys([('network-vif-plugged', '341d2ffb-54ae-4f73-b5fa-028f5d68084c')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Nov 29 03:17:13 np0005539552 nova_compute[233724]: 2025-11-29 08:17:13.823 233728 WARNING nova.compute.manager [req-38e66565-a58b-428a-8b09-de5c9829bf14 req-5b41b819-f101-42a2-8857-5187d36fa705 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received unexpected event network-vif-plugged-580c06c8-3541-4c20-b8a7-c02c5f9efe4b for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:17:13 np0005539552 nova_compute[233724]: 2025-11-29 08:17:13.824 233728 DEBUG nova.compute.manager [req-38e66565-a58b-428a-8b09-de5c9829bf14 req-5b41b819-f101-42a2-8857-5187d36fa705 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received event network-vif-plugged-341d2ffb-54ae-4f73-b5fa-028f5d68084c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:13 np0005539552 nova_compute[233724]: 2025-11-29 08:17:13.824 233728 DEBUG oslo_concurrency.lockutils [req-38e66565-a58b-428a-8b09-de5c9829bf14 req-5b41b819-f101-42a2-8857-5187d36fa705 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:13 np0005539552 nova_compute[233724]: 2025-11-29 08:17:13.824 233728 DEBUG oslo_concurrency.lockutils [req-38e66565-a58b-428a-8b09-de5c9829bf14 req-5b41b819-f101-42a2-8857-5187d36fa705 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:13 np0005539552 nova_compute[233724]: 2025-11-29 08:17:13.824 233728 DEBUG oslo_concurrency.lockutils [req-38e66565-a58b-428a-8b09-de5c9829bf14 req-5b41b819-f101-42a2-8857-5187d36fa705 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:13 np0005539552 nova_compute[233724]: 2025-11-29 08:17:13.824 233728 DEBUG nova.compute.manager [req-38e66565-a58b-428a-8b09-de5c9829bf14 req-5b41b819-f101-42a2-8857-5187d36fa705 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Processing event network-vif-plugged-341d2ffb-54ae-4f73-b5fa-028f5d68084c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:17:13 np0005539552 nova_compute[233724]: 2025-11-29 08:17:13.824 233728 DEBUG nova.compute.manager [req-38e66565-a58b-428a-8b09-de5c9829bf14 req-5b41b819-f101-42a2-8857-5187d36fa705 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received event network-vif-plugged-341d2ffb-54ae-4f73-b5fa-028f5d68084c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:13 np0005539552 nova_compute[233724]: 2025-11-29 08:17:13.825 233728 DEBUG oslo_concurrency.lockutils [req-38e66565-a58b-428a-8b09-de5c9829bf14 req-5b41b819-f101-42a2-8857-5187d36fa705 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:13 np0005539552 nova_compute[233724]: 2025-11-29 08:17:13.825 233728 DEBUG oslo_concurrency.lockutils [req-38e66565-a58b-428a-8b09-de5c9829bf14 req-5b41b819-f101-42a2-8857-5187d36fa705 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:13 np0005539552 nova_compute[233724]: 2025-11-29 08:17:13.825 233728 DEBUG oslo_concurrency.lockutils [req-38e66565-a58b-428a-8b09-de5c9829bf14 req-5b41b819-f101-42a2-8857-5187d36fa705 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:13 np0005539552 nova_compute[233724]: 2025-11-29 08:17:13.825 233728 DEBUG nova.compute.manager [req-38e66565-a58b-428a-8b09-de5c9829bf14 req-5b41b819-f101-42a2-8857-5187d36fa705 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] No waiting events found dispatching network-vif-plugged-341d2ffb-54ae-4f73-b5fa-028f5d68084c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:17:13 np0005539552 nova_compute[233724]: 2025-11-29 08:17:13.825 233728 WARNING nova.compute.manager [req-38e66565-a58b-428a-8b09-de5c9829bf14 req-5b41b819-f101-42a2-8857-5187d36fa705 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received unexpected event network-vif-plugged-341d2ffb-54ae-4f73-b5fa-028f5d68084c for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:17:13 np0005539552 nova_compute[233724]: 2025-11-29 08:17:13.826 233728 DEBUG nova.compute.manager [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Instance event wait completed in 3 seconds for network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:17:13 np0005539552 nova_compute[233724]: 2025-11-29 08:17:13.831 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404233.8309636, ad6070e8-74bc-4df7-9c2d-5da5da175238 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:17:13 np0005539552 nova_compute[233724]: 2025-11-29 08:17:13.831 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:17:13 np0005539552 nova_compute[233724]: 2025-11-29 08:17:13.833 233728 DEBUG nova.virt.libvirt.driver [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:17:13 np0005539552 nova_compute[233724]: 2025-11-29 08:17:13.836 233728 INFO nova.virt.libvirt.driver [-] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Instance spawned successfully.#033[00m
Nov 29 03:17:13 np0005539552 nova_compute[233724]: 2025-11-29 08:17:13.836 233728 DEBUG nova.virt.libvirt.driver [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:17:13 np0005539552 nova_compute[233724]: 2025-11-29 08:17:13.867 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:17:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:13.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:13 np0005539552 nova_compute[233724]: 2025-11-29 08:17:13.874 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:17:13 np0005539552 nova_compute[233724]: 2025-11-29 08:17:13.878 233728 DEBUG nova.virt.libvirt.driver [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:17:13 np0005539552 nova_compute[233724]: 2025-11-29 08:17:13.878 233728 DEBUG nova.virt.libvirt.driver [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:17:13 np0005539552 nova_compute[233724]: 2025-11-29 08:17:13.879 233728 DEBUG nova.virt.libvirt.driver [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:17:13 np0005539552 nova_compute[233724]: 2025-11-29 08:17:13.879 233728 DEBUG nova.virt.libvirt.driver [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:17:13 np0005539552 nova_compute[233724]: 2025-11-29 08:17:13.879 233728 DEBUG nova.virt.libvirt.driver [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:17:13 np0005539552 nova_compute[233724]: 2025-11-29 08:17:13.880 233728 DEBUG nova.virt.libvirt.driver [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:17:13 np0005539552 nova_compute[233724]: 2025-11-29 08:17:13.936 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:17:14 np0005539552 nova_compute[233724]: 2025-11-29 08:17:14.026 233728 INFO nova.compute.manager [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Took 38.44 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:17:14 np0005539552 nova_compute[233724]: 2025-11-29 08:17:14.026 233728 DEBUG nova.compute.manager [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:17:14 np0005539552 nova_compute[233724]: 2025-11-29 08:17:14.109 233728 INFO nova.compute.manager [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Took 45.05 seconds to build instance.#033[00m
Nov 29 03:17:14 np0005539552 nova_compute[233724]: 2025-11-29 08:17:14.138 233728 DEBUG oslo_concurrency.lockutils [None req-56662c67-b11d-45e4-adbc-1b6d6b8beeb0 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 45.193s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:14 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:17:14 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:17:14 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:17:14 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:17:14 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:17:14 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:17:14 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 03:17:14 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 03:17:15 np0005539552 podman[276791]: 2025-11-29 08:17:15.021320278 +0000 UTC m=+0.044007213 container create a8457d45d803cc0011ae4a02942cd02e72209ee5ed150991251f289d12cb3d0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 03:17:15 np0005539552 systemd[1]: Started libpod-conmon-a8457d45d803cc0011ae4a02942cd02e72209ee5ed150991251f289d12cb3d0f.scope.
Nov 29 03:17:15 np0005539552 podman[276791]: 2025-11-29 08:17:15.004252979 +0000 UTC m=+0.026939944 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:17:15 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:17:15 np0005539552 podman[276791]: 2025-11-29 08:17:15.138824435 +0000 UTC m=+0.161511390 container init a8457d45d803cc0011ae4a02942cd02e72209ee5ed150991251f289d12cb3d0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_chatterjee, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:17:15 np0005539552 podman[276791]: 2025-11-29 08:17:15.146794449 +0000 UTC m=+0.169481374 container start a8457d45d803cc0011ae4a02942cd02e72209ee5ed150991251f289d12cb3d0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_chatterjee, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Nov 29 03:17:15 np0005539552 tender_chatterjee[276807]: 167 167
Nov 29 03:17:15 np0005539552 systemd[1]: libpod-a8457d45d803cc0011ae4a02942cd02e72209ee5ed150991251f289d12cb3d0f.scope: Deactivated successfully.
Nov 29 03:17:15 np0005539552 podman[276791]: 2025-11-29 08:17:15.161792792 +0000 UTC m=+0.184479727 container attach a8457d45d803cc0011ae4a02942cd02e72209ee5ed150991251f289d12cb3d0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_chatterjee, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 03:17:15 np0005539552 podman[276791]: 2025-11-29 08:17:15.162681846 +0000 UTC m=+0.185368791 container died a8457d45d803cc0011ae4a02942cd02e72209ee5ed150991251f289d12cb3d0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_chatterjee, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:17:15 np0005539552 systemd[1]: var-lib-containers-storage-overlay-3a6b27d9acff91656341a1f4fadc16035d27ff4e848138bc02fc9d87915b7631-merged.mount: Deactivated successfully.
Nov 29 03:17:15 np0005539552 podman[276791]: 2025-11-29 08:17:15.272548308 +0000 UTC m=+0.295235263 container remove a8457d45d803cc0011ae4a02942cd02e72209ee5ed150991251f289d12cb3d0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Nov 29 03:17:15 np0005539552 systemd[1]: libpod-conmon-a8457d45d803cc0011ae4a02942cd02e72209ee5ed150991251f289d12cb3d0f.scope: Deactivated successfully.
Nov 29 03:17:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:15.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:15 np0005539552 podman[276831]: 2025-11-29 08:17:15.508704122 +0000 UTC m=+0.042087841 container create fe1837beb91338482bc571911a506897532fa4f0028095e9fc480d6251247847 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_almeida, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:17:15 np0005539552 systemd[1]: Started libpod-conmon-fe1837beb91338482bc571911a506897532fa4f0028095e9fc480d6251247847.scope.
Nov 29 03:17:15 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:17:15 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af9546fabfe4a819c82b27a3a2bac345a9186ee376fa63345b5f64306837c2dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:17:15 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af9546fabfe4a819c82b27a3a2bac345a9186ee376fa63345b5f64306837c2dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:17:15 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af9546fabfe4a819c82b27a3a2bac345a9186ee376fa63345b5f64306837c2dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:17:15 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af9546fabfe4a819c82b27a3a2bac345a9186ee376fa63345b5f64306837c2dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:17:15 np0005539552 podman[276831]: 2025-11-29 08:17:15.580752108 +0000 UTC m=+0.114135837 container init fe1837beb91338482bc571911a506897532fa4f0028095e9fc480d6251247847 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_almeida, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Nov 29 03:17:15 np0005539552 podman[276831]: 2025-11-29 08:17:15.490399461 +0000 UTC m=+0.023783200 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:17:15 np0005539552 podman[276831]: 2025-11-29 08:17:15.588594449 +0000 UTC m=+0.121978168 container start fe1837beb91338482bc571911a506897532fa4f0028095e9fc480d6251247847 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_almeida, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:17:15 np0005539552 podman[276831]: 2025-11-29 08:17:15.591728123 +0000 UTC m=+0.125111842 container attach fe1837beb91338482bc571911a506897532fa4f0028095e9fc480d6251247847 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_almeida, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 03:17:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:15.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:16 np0005539552 nova_compute[233724]: 2025-11-29 08:17:16.132 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:16 np0005539552 affectionate_almeida[276848]: [
Nov 29 03:17:16 np0005539552 affectionate_almeida[276848]:    {
Nov 29 03:17:16 np0005539552 affectionate_almeida[276848]:        "available": false,
Nov 29 03:17:16 np0005539552 affectionate_almeida[276848]:        "ceph_device": false,
Nov 29 03:17:16 np0005539552 affectionate_almeida[276848]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 03:17:16 np0005539552 affectionate_almeida[276848]:        "lsm_data": {},
Nov 29 03:17:16 np0005539552 affectionate_almeida[276848]:        "lvs": [],
Nov 29 03:17:16 np0005539552 affectionate_almeida[276848]:        "path": "/dev/sr0",
Nov 29 03:17:16 np0005539552 affectionate_almeida[276848]:        "rejected_reasons": [
Nov 29 03:17:16 np0005539552 affectionate_almeida[276848]:            "Insufficient space (<5GB)",
Nov 29 03:17:16 np0005539552 affectionate_almeida[276848]:            "Has a FileSystem"
Nov 29 03:17:16 np0005539552 affectionate_almeida[276848]:        ],
Nov 29 03:17:16 np0005539552 affectionate_almeida[276848]:        "sys_api": {
Nov 29 03:17:16 np0005539552 affectionate_almeida[276848]:            "actuators": null,
Nov 29 03:17:16 np0005539552 affectionate_almeida[276848]:            "device_nodes": "sr0",
Nov 29 03:17:16 np0005539552 affectionate_almeida[276848]:            "devname": "sr0",
Nov 29 03:17:16 np0005539552 affectionate_almeida[276848]:            "human_readable_size": "482.00 KB",
Nov 29 03:17:16 np0005539552 affectionate_almeida[276848]:            "id_bus": "ata",
Nov 29 03:17:16 np0005539552 affectionate_almeida[276848]:            "model": "QEMU DVD-ROM",
Nov 29 03:17:16 np0005539552 affectionate_almeida[276848]:            "nr_requests": "2",
Nov 29 03:17:16 np0005539552 affectionate_almeida[276848]:            "parent": "/dev/sr0",
Nov 29 03:17:16 np0005539552 affectionate_almeida[276848]:            "partitions": {},
Nov 29 03:17:16 np0005539552 affectionate_almeida[276848]:            "path": "/dev/sr0",
Nov 29 03:17:16 np0005539552 affectionate_almeida[276848]:            "removable": "1",
Nov 29 03:17:16 np0005539552 affectionate_almeida[276848]:            "rev": "2.5+",
Nov 29 03:17:16 np0005539552 affectionate_almeida[276848]:            "ro": "0",
Nov 29 03:17:16 np0005539552 affectionate_almeida[276848]:            "rotational": "1",
Nov 29 03:17:16 np0005539552 affectionate_almeida[276848]:            "sas_address": "",
Nov 29 03:17:16 np0005539552 affectionate_almeida[276848]:            "sas_device_handle": "",
Nov 29 03:17:16 np0005539552 affectionate_almeida[276848]:            "scheduler_mode": "mq-deadline",
Nov 29 03:17:16 np0005539552 affectionate_almeida[276848]:            "sectors": 0,
Nov 29 03:17:16 np0005539552 affectionate_almeida[276848]:            "sectorsize": "2048",
Nov 29 03:17:16 np0005539552 affectionate_almeida[276848]:            "size": 493568.0,
Nov 29 03:17:16 np0005539552 affectionate_almeida[276848]:            "support_discard": "2048",
Nov 29 03:17:16 np0005539552 affectionate_almeida[276848]:            "type": "disk",
Nov 29 03:17:16 np0005539552 affectionate_almeida[276848]:            "vendor": "QEMU"
Nov 29 03:17:16 np0005539552 affectionate_almeida[276848]:        }
Nov 29 03:17:16 np0005539552 affectionate_almeida[276848]:    }
Nov 29 03:17:16 np0005539552 affectionate_almeida[276848]: ]
Nov 29 03:17:16 np0005539552 systemd[1]: libpod-fe1837beb91338482bc571911a506897532fa4f0028095e9fc480d6251247847.scope: Deactivated successfully.
Nov 29 03:17:16 np0005539552 systemd[1]: libpod-fe1837beb91338482bc571911a506897532fa4f0028095e9fc480d6251247847.scope: Consumed 1.227s CPU time.
Nov 29 03:17:16 np0005539552 podman[276831]: 2025-11-29 08:17:16.827159035 +0000 UTC m=+1.360542754 container died fe1837beb91338482bc571911a506897532fa4f0028095e9fc480d6251247847 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_almeida, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:17:16 np0005539552 systemd[1]: var-lib-containers-storage-overlay-af9546fabfe4a819c82b27a3a2bac345a9186ee376fa63345b5f64306837c2dd-merged.mount: Deactivated successfully.
Nov 29 03:17:16 np0005539552 podman[276831]: 2025-11-29 08:17:16.884555877 +0000 UTC m=+1.417939596 container remove fe1837beb91338482bc571911a506897532fa4f0028095e9fc480d6251247847 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:17:16 np0005539552 systemd[1]: libpod-conmon-fe1837beb91338482bc571911a506897532fa4f0028095e9fc480d6251247847.scope: Deactivated successfully.
Nov 29 03:17:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:17.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:17 np0005539552 nova_compute[233724]: 2025-11-29 08:17:17.558 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:17.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:17 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:17:17 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:17:17 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:17:17 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:17:17 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:17:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:19.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:19 np0005539552 NetworkManager[48926]: <info>  [1764404239.7669] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/211)
Nov 29 03:17:19 np0005539552 NetworkManager[48926]: <info>  [1764404239.7679] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/212)
Nov 29 03:17:19 np0005539552 nova_compute[233724]: 2025-11-29 08:17:19.767 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:19.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:19 np0005539552 nova_compute[233724]: 2025-11-29 08:17:19.947 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:19 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:19Z|00429|binding|INFO|Releasing lport e3216152-714e-4f1b-a915-1a42856d0b72 from this chassis (sb_readonly=0)
Nov 29 03:17:19 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:19Z|00430|binding|INFO|Releasing lport 92ed063a-574d-4b3d-9fe4-dcebddd764ca from this chassis (sb_readonly=0)
Nov 29 03:17:19 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:19Z|00431|binding|INFO|Releasing lport d594d0ee-2dd3-4c06-b632-72a747d80deb from this chassis (sb_readonly=0)
Nov 29 03:17:19 np0005539552 nova_compute[233724]: 2025-11-29 08:17:19.972 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:20 np0005539552 nova_compute[233724]: 2025-11-29 08:17:20.303 233728 DEBUG nova.compute.manager [req-514066ca-878f-439f-b35c-93e1b7845b3e req-de220483-8d35-42f4-afad-7f7e5e999a66 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received event network-changed-082c8475-1c86-43c9-b68c-dafbd502311e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:20 np0005539552 nova_compute[233724]: 2025-11-29 08:17:20.304 233728 DEBUG nova.compute.manager [req-514066ca-878f-439f-b35c-93e1b7845b3e req-de220483-8d35-42f4-afad-7f7e5e999a66 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Refreshing instance network info cache due to event network-changed-082c8475-1c86-43c9-b68c-dafbd502311e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:17:20 np0005539552 nova_compute[233724]: 2025-11-29 08:17:20.304 233728 DEBUG oslo_concurrency.lockutils [req-514066ca-878f-439f-b35c-93e1b7845b3e req-de220483-8d35-42f4-afad-7f7e5e999a66 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-ad6070e8-74bc-4df7-9c2d-5da5da175238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:17:20 np0005539552 nova_compute[233724]: 2025-11-29 08:17:20.304 233728 DEBUG oslo_concurrency.lockutils [req-514066ca-878f-439f-b35c-93e1b7845b3e req-de220483-8d35-42f4-afad-7f7e5e999a66 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-ad6070e8-74bc-4df7-9c2d-5da5da175238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:17:20 np0005539552 nova_compute[233724]: 2025-11-29 08:17:20.305 233728 DEBUG nova.network.neutron [req-514066ca-878f-439f-b35c-93e1b7845b3e req-de220483-8d35-42f4-afad-7f7e5e999a66 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Refreshing network info cache for port 082c8475-1c86-43c9-b68c-dafbd502311e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:17:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:20.624 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:20.624 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:20.625 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:21 np0005539552 nova_compute[233724]: 2025-11-29 08:17:21.133 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:21.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:17:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:21.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:17:22 np0005539552 nova_compute[233724]: 2025-11-29 08:17:22.560 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:22 np0005539552 nova_compute[233724]: 2025-11-29 08:17:22.697 233728 DEBUG nova.network.neutron [req-514066ca-878f-439f-b35c-93e1b7845b3e req-de220483-8d35-42f4-afad-7f7e5e999a66 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Updated VIF entry in instance network info cache for port 082c8475-1c86-43c9-b68c-dafbd502311e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:17:22 np0005539552 nova_compute[233724]: 2025-11-29 08:17:22.698 233728 DEBUG nova.network.neutron [req-514066ca-878f-439f-b35c-93e1b7845b3e req-de220483-8d35-42f4-afad-7f7e5e999a66 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Updating instance_info_cache with network_info: [{"id": "082c8475-1c86-43c9-b68c-dafbd502311e", "address": "fa:16:3e:bf:be:93", "network": {"id": "9cbd3709-4a58-47d0-b193-4d753a5463a6", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-537724310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap082c8475-1c", "ovs_interfaceid": "082c8475-1c86-43c9-b68c-dafbd502311e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "eb187f09-9e48-4b8c-9111-744004bcec05", "address": "fa:16:3e:b0:ef:da", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.254", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb187f09-9e", "ovs_interfaceid": "eb187f09-9e48-4b8c-9111-744004bcec05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "580c06c8-3541-4c20-b8a7-c02c5f9efe4b", "address": "fa:16:3e:1a:e2:8b", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580c06c8-35", "ovs_interfaceid": "580c06c8-3541-4c20-b8a7-c02c5f9efe4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "d87e0ecc-102f-4db6-b802-352374722987", "address": "fa:16:3e:dd:61:ae", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd87e0ecc-10", "ovs_interfaceid": "d87e0ecc-102f-4db6-b802-352374722987", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "341d2ffb-54ae-4f73-b5fa-028f5d68084c", "address": "fa:16:3e:0a:5b:43", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.209", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341d2ffb-54", "ovs_interfaceid": "341d2ffb-54ae-4f73-b5fa-028f5d68084c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "120af4a6-f9dd-4b7a-8756-ec894c6253de", "address": "fa:16:3e:56:50:8c", "network": {"id": "22531800-2588-4563-9214-766267df7c54", "bridge": "br-int", "label": "tempest-device-tagging-net2-1207720522", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap120af4a6-f9", "ovs_interfaceid": "120af4a6-f9dd-4b7a-8756-ec894c6253de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b25ceb20-79ac-43b0-8487-65bcf31a0a2f", "address": "fa:16:3e:42:2b:da", "network": {"id": "22531800-2588-4563-9214-766267df7c54", "bridge": "br-int", "label": "tempest-device-tagging-net2-1207720522", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb25ceb20-79", "ovs_interfaceid": "b25ceb20-79ac-43b0-8487-65bcf31a0a2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:17:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:22 np0005539552 nova_compute[233724]: 2025-11-29 08:17:22.783 233728 DEBUG oslo_concurrency.lockutils [req-514066ca-878f-439f-b35c-93e1b7845b3e req-de220483-8d35-42f4-afad-7f7e5e999a66 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-ad6070e8-74bc-4df7-9c2d-5da5da175238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:17:23 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:17:23 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:17:23 np0005539552 nova_compute[233724]: 2025-11-29 08:17:23.358 233728 DEBUG oslo_concurrency.lockutils [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "258dfc76-0ea9-4521-a3fc-5d64b3632451" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:23 np0005539552 nova_compute[233724]: 2025-11-29 08:17:23.359 233728 DEBUG oslo_concurrency.lockutils [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:17:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:23.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:17:23 np0005539552 nova_compute[233724]: 2025-11-29 08:17:23.384 233728 DEBUG nova.compute.manager [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:17:23 np0005539552 nova_compute[233724]: 2025-11-29 08:17:23.491 233728 DEBUG oslo_concurrency.lockutils [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:23 np0005539552 nova_compute[233724]: 2025-11-29 08:17:23.492 233728 DEBUG oslo_concurrency.lockutils [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:23 np0005539552 nova_compute[233724]: 2025-11-29 08:17:23.504 233728 DEBUG nova.virt.hardware [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:17:23 np0005539552 nova_compute[233724]: 2025-11-29 08:17:23.505 233728 INFO nova.compute.claims [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:17:23 np0005539552 nova_compute[233724]: 2025-11-29 08:17:23.634 233728 DEBUG oslo_concurrency.processutils [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:17:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:23.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:17:24 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:17:24 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3237146854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:17:24 np0005539552 nova_compute[233724]: 2025-11-29 08:17:24.165 233728 DEBUG oslo_concurrency.processutils [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:24 np0005539552 nova_compute[233724]: 2025-11-29 08:17:24.171 233728 DEBUG nova.compute.provider_tree [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:17:24 np0005539552 nova_compute[233724]: 2025-11-29 08:17:24.185 233728 DEBUG nova.scheduler.client.report [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:17:24 np0005539552 nova_compute[233724]: 2025-11-29 08:17:24.223 233728 DEBUG oslo_concurrency.lockutils [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:24 np0005539552 nova_compute[233724]: 2025-11-29 08:17:24.224 233728 DEBUG nova.compute.manager [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:17:24 np0005539552 nova_compute[233724]: 2025-11-29 08:17:24.284 233728 DEBUG nova.compute.manager [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:17:24 np0005539552 nova_compute[233724]: 2025-11-29 08:17:24.284 233728 DEBUG nova.network.neutron [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:17:24 np0005539552 nova_compute[233724]: 2025-11-29 08:17:24.308 233728 INFO nova.virt.libvirt.driver [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:17:24 np0005539552 nova_compute[233724]: 2025-11-29 08:17:24.361 233728 DEBUG nova.compute.manager [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:17:24 np0005539552 nova_compute[233724]: 2025-11-29 08:17:24.473 233728 DEBUG nova.compute.manager [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:17:24 np0005539552 nova_compute[233724]: 2025-11-29 08:17:24.474 233728 DEBUG nova.virt.libvirt.driver [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:17:24 np0005539552 nova_compute[233724]: 2025-11-29 08:17:24.475 233728 INFO nova.virt.libvirt.driver [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Creating image(s)#033[00m
Nov 29 03:17:24 np0005539552 nova_compute[233724]: 2025-11-29 08:17:24.500 233728 DEBUG nova.storage.rbd_utils [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] rbd image 258dfc76-0ea9-4521-a3fc-5d64b3632451_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:17:24 np0005539552 nova_compute[233724]: 2025-11-29 08:17:24.527 233728 DEBUG nova.storage.rbd_utils [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] rbd image 258dfc76-0ea9-4521-a3fc-5d64b3632451_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:17:24 np0005539552 nova_compute[233724]: 2025-11-29 08:17:24.553 233728 DEBUG nova.storage.rbd_utils [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] rbd image 258dfc76-0ea9-4521-a3fc-5d64b3632451_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:17:24 np0005539552 nova_compute[233724]: 2025-11-29 08:17:24.557 233728 DEBUG oslo_concurrency.processutils [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:24 np0005539552 nova_compute[233724]: 2025-11-29 08:17:24.625 233728 DEBUG oslo_concurrency.processutils [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
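The `qemu-img info --output=json` call above returns a JSON document describing the cached base image. A sketch of reading it; the field values below are illustrative placeholders, not the actual output captured on this host:

```python
import json

# Illustrative stand-in for what `qemu-img info --force-share --output=json`
# prints for the base image queried in the log; real values will differ.
sample = '''{
  "virtual-size": 1073741824,
  "format": "raw",
  "actual-size": 46137344,
  "filename": "/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488"
}'''

info = json.loads(sample)
virtual_gb = info['virtual-size'] / (1024 ** 3)   # virtual-size is in bytes
```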
Nov 29 03:17:24 np0005539552 nova_compute[233724]: 2025-11-29 08:17:24.626 233728 DEBUG oslo_concurrency.lockutils [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:24 np0005539552 nova_compute[233724]: 2025-11-29 08:17:24.626 233728 DEBUG oslo_concurrency.lockutils [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:24 np0005539552 nova_compute[233724]: 2025-11-29 08:17:24.627 233728 DEBUG oslo_concurrency.lockutils [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:24 np0005539552 nova_compute[233724]: 2025-11-29 08:17:24.657 233728 DEBUG nova.storage.rbd_utils [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] rbd image 258dfc76-0ea9-4521-a3fc-5d64b3632451_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:17:24 np0005539552 nova_compute[233724]: 2025-11-29 08:17:24.662 233728 DEBUG oslo_concurrency.processutils [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 258dfc76-0ea9-4521-a3fc-5d64b3632451_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:24 np0005539552 nova_compute[233724]: 2025-11-29 08:17:24.918 233728 DEBUG nova.policy [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '80ceb9112b3a4f119c05f21fd617af11', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '26e3508b949a4dbf960d7befc8f27869', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:17:24 np0005539552 nova_compute[233724]: 2025-11-29 08:17:24.948 233728 DEBUG oslo_concurrency.processutils [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 258dfc76-0ea9-4521-a3fc-5d64b3632451_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.286s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:25 np0005539552 nova_compute[233724]: 2025-11-29 08:17:25.016 233728 DEBUG nova.storage.rbd_utils [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] resizing rbd image 258dfc76-0ea9-4521-a3fc-5d64b3632451_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
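The resize target above, 1073741824, is simply a 1 GiB root disk expressed in bytes (the same 1 GB that later shows up as the instance's `DISK_GB` allocation). A one-line check:

```python
# 1073741824 bytes == 1 GiB: the flavor's root disk size converted to bytes,
# which is what the rbd resize above targets.
GiB = 1024 ** 3
root_gb = 1                 # matches the DISK_GB allocation seen later in this log
target_bytes = root_gb * GiB
```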
Nov 29 03:17:25 np0005539552 nova_compute[233724]: 2025-11-29 08:17:25.123 233728 DEBUG nova.objects.instance [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'migration_context' on Instance uuid 258dfc76-0ea9-4521-a3fc-5d64b3632451 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:17:25 np0005539552 nova_compute[233724]: 2025-11-29 08:17:25.139 233728 DEBUG nova.virt.libvirt.driver [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:17:25 np0005539552 nova_compute[233724]: 2025-11-29 08:17:25.139 233728 DEBUG nova.virt.libvirt.driver [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Ensure instance console log exists: /var/lib/nova/instances/258dfc76-0ea9-4521-a3fc-5d64b3632451/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:17:25 np0005539552 nova_compute[233724]: 2025-11-29 08:17:25.140 233728 DEBUG oslo_concurrency.lockutils [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:25 np0005539552 nova_compute[233724]: 2025-11-29 08:17:25.140 233728 DEBUG oslo_concurrency.lockutils [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:25 np0005539552 nova_compute[233724]: 2025-11-29 08:17:25.140 233728 DEBUG oslo_concurrency.lockutils [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:25.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:25 np0005539552 nova_compute[233724]: 2025-11-29 08:17:25.732 233728 DEBUG nova.network.neutron [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Successfully created port: 524180cf-279c-48d6-8bf1-04f8f159aef6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:17:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:17:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:25.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
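The radosgw "beast" access lines above have a stable shape: connection pointer, client IP, user, timestamp, request line, status, size, and latency. A hedged parsing sketch; the regex is written against the format shown here, not against a documented grammar:

```python
import re

# One of the beast access-log lines from above, verbatim (minus the syslog prefix).
line = ('beast: 0x7fec965d86f0: 192.168.122.100 - anonymous '
        '[29/Nov/2025:08:17:25.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.001000027s')

# Pattern inferred from the lines in this log; adjust if your radosgw build
# formats access lines differently.
m = re.search(r'beast: \S+: (\S+) - (\S+) \[([^\]]+)\] "([^"]+)" '
              r'(\d+) (\d+).*latency=([\d.]+)s', line)
ip, user, ts, request, status, size, latency = m.groups()
```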
Nov 29 03:17:25 np0005539552 nova_compute[233724]: 2025-11-29 08:17:25.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:17:25 np0005539552 nova_compute[233724]: 2025-11-29 08:17:25.953 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:25 np0005539552 nova_compute[233724]: 2025-11-29 08:17:25.954 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:25 np0005539552 nova_compute[233724]: 2025-11-29 08:17:25.955 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:25 np0005539552 nova_compute[233724]: 2025-11-29 08:17:25.955 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:17:25 np0005539552 nova_compute[233724]: 2025-11-29 08:17:25.956 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:26 np0005539552 nova_compute[233724]: 2025-11-29 08:17:26.137 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:26 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:17:26 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3926014525' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:17:26 np0005539552 nova_compute[233724]: 2025-11-29 08:17:26.458 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:26 np0005539552 nova_compute[233724]: 2025-11-29 08:17:26.719 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000066 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:17:26 np0005539552 nova_compute[233724]: 2025-11-29 08:17:26.721 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000066 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:17:26 np0005539552 nova_compute[233724]: 2025-11-29 08:17:26.721 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000066 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:17:26 np0005539552 nova_compute[233724]: 2025-11-29 08:17:26.722 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000066 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:17:26 np0005539552 nova_compute[233724]: 2025-11-29 08:17:26.925 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:17:26 np0005539552 nova_compute[233724]: 2025-11-29 08:17:26.926 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4247MB free_disk=20.848217010498047GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:17:26 np0005539552 nova_compute[233724]: 2025-11-29 08:17:26.927 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:26 np0005539552 nova_compute[233724]: 2025-11-29 08:17:26.927 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:26 np0005539552 nova_compute[233724]: 2025-11-29 08:17:26.995 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance ad6070e8-74bc-4df7-9c2d-5da5da175238 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:17:26 np0005539552 nova_compute[233724]: 2025-11-29 08:17:26.997 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 258dfc76-0ea9-4521-a3fc-5d64b3632451 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:17:26 np0005539552 nova_compute[233724]: 2025-11-29 08:17:26.997 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:17:26 np0005539552 nova_compute[233724]: 2025-11-29 08:17:26.997 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
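The `used_ram=768MB` in the final resource view above is consistent with the 512 MB reserved in the inventory plus the two instances' 128 MB placement allocations logged just before. A sketch, assuming that accounting (reserved host memory counted into `used_ram`):

```python
# Reconstruct the "Final resource view" memory figure from values in this log:
# 512 MB reserved (inventory) + two instances at 128 MB each (allocations).
reserved_mb = 512
instance_allocs_mb = [128, 128]
used_ram = reserved_mb + sum(instance_allocs_mb)
```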
Nov 29 03:17:27 np0005539552 nova_compute[233724]: 2025-11-29 08:17:27.056 233728 DEBUG nova.network.neutron [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Successfully updated port: 524180cf-279c-48d6-8bf1-04f8f159aef6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:17:27 np0005539552 nova_compute[233724]: 2025-11-29 08:17:27.061 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:27 np0005539552 nova_compute[233724]: 2025-11-29 08:17:27.095 233728 DEBUG oslo_concurrency.lockutils [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "refresh_cache-258dfc76-0ea9-4521-a3fc-5d64b3632451" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:17:27 np0005539552 nova_compute[233724]: 2025-11-29 08:17:27.096 233728 DEBUG oslo_concurrency.lockutils [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquired lock "refresh_cache-258dfc76-0ea9-4521-a3fc-5d64b3632451" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:17:27 np0005539552 nova_compute[233724]: 2025-11-29 08:17:27.096 233728 DEBUG nova.network.neutron [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:17:27 np0005539552 nova_compute[233724]: 2025-11-29 08:17:27.219 233728 DEBUG nova.compute.manager [req-e6e70484-4330-4d5f-8c76-ee124398b13f req-fba9637e-071e-4942-bca1-4ca7e9d5a054 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received event network-changed-524180cf-279c-48d6-8bf1-04f8f159aef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:27 np0005539552 nova_compute[233724]: 2025-11-29 08:17:27.220 233728 DEBUG nova.compute.manager [req-e6e70484-4330-4d5f-8c76-ee124398b13f req-fba9637e-071e-4942-bca1-4ca7e9d5a054 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Refreshing instance network info cache due to event network-changed-524180cf-279c-48d6-8bf1-04f8f159aef6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:17:27 np0005539552 nova_compute[233724]: 2025-11-29 08:17:27.220 233728 DEBUG oslo_concurrency.lockutils [req-e6e70484-4330-4d5f-8c76-ee124398b13f req-fba9637e-071e-4942-bca1-4ca7e9d5a054 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-258dfc76-0ea9-4521-a3fc-5d64b3632451" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:17:27 np0005539552 nova_compute[233724]: 2025-11-29 08:17:27.301 233728 DEBUG nova.network.neutron [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:17:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:17:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:27.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:17:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:17:27 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/165246815' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:17:27 np0005539552 nova_compute[233724]: 2025-11-29 08:17:27.520 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:27 np0005539552 nova_compute[233724]: 2025-11-29 08:17:27.525 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:17:27 np0005539552 nova_compute[233724]: 2025-11-29 08:17:27.548 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:17:27 np0005539552 nova_compute[233724]: 2025-11-29 08:17:27.561 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:27 np0005539552 nova_compute[233724]: 2025-11-29 08:17:27.572 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:17:27 np0005539552 nova_compute[233724]: 2025-11-29 08:17:27.572 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:27.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:29 np0005539552 nova_compute[233724]: 2025-11-29 08:17:29.219 233728 DEBUG nova.network.neutron [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Updating instance_info_cache with network_info: [{"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
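The `network_info` cache updated above is a JSON list of VIFs; each carries the port's MAC (`address`) and fixed IPs nested under `network.subnets[].ips[]`. A sketch extracting them, trimmed to the fields actually used here:

```python
import json

# Trimmed copy of the network_info entry from the cache update above,
# keeping only the fields this sketch reads.
network_info = json.loads('''[{
  "id": "524180cf-279c-48d6-8bf1-04f8f159aef6",
  "address": "fa:16:3e:b8:49:96",
  "network": {"subnets": [{"cidr": "10.100.0.0/28",
                           "ips": [{"address": "10.100.0.5",
                                    "type": "fixed"}]}]}
}]''')

vif = network_info[0]
fixed_ips = [ip['address']
             for subnet in vif['network']['subnets']
             for ip in subnet['ips'] if ip['type'] == 'fixed']
```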
Nov 29 03:17:29 np0005539552 nova_compute[233724]: 2025-11-29 08:17:29.241 233728 DEBUG oslo_concurrency.lockutils [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Releasing lock "refresh_cache-258dfc76-0ea9-4521-a3fc-5d64b3632451" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:17:29 np0005539552 nova_compute[233724]: 2025-11-29 08:17:29.241 233728 DEBUG nova.compute.manager [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Instance network_info: |[{"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:17:29 np0005539552 nova_compute[233724]: 2025-11-29 08:17:29.241 233728 DEBUG oslo_concurrency.lockutils [req-e6e70484-4330-4d5f-8c76-ee124398b13f req-fba9637e-071e-4942-bca1-4ca7e9d5a054 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-258dfc76-0ea9-4521-a3fc-5d64b3632451" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:17:29 np0005539552 nova_compute[233724]: 2025-11-29 08:17:29.242 233728 DEBUG nova.network.neutron [req-e6e70484-4330-4d5f-8c76-ee124398b13f req-fba9637e-071e-4942-bca1-4ca7e9d5a054 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Refreshing network info cache for port 524180cf-279c-48d6-8bf1-04f8f159aef6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:17:29 np0005539552 nova_compute[233724]: 2025-11-29 08:17:29.245 233728 DEBUG nova.virt.libvirt.driver [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Start _get_guest_xml network_info=[{"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:17:29 np0005539552 nova_compute[233724]: 2025-11-29 08:17:29.249 233728 WARNING nova.virt.libvirt.driver [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:17:29 np0005539552 nova_compute[233724]: 2025-11-29 08:17:29.253 233728 DEBUG nova.virt.libvirt.host [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:17:29 np0005539552 nova_compute[233724]: 2025-11-29 08:17:29.254 233728 DEBUG nova.virt.libvirt.host [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:17:29 np0005539552 nova_compute[233724]: 2025-11-29 08:17:29.256 233728 DEBUG nova.virt.libvirt.host [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:17:29 np0005539552 nova_compute[233724]: 2025-11-29 08:17:29.257 233728 DEBUG nova.virt.libvirt.host [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:17:29 np0005539552 nova_compute[233724]: 2025-11-29 08:17:29.258 233728 DEBUG nova.virt.libvirt.driver [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:17:29 np0005539552 nova_compute[233724]: 2025-11-29 08:17:29.259 233728 DEBUG nova.virt.hardware [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:17:29 np0005539552 nova_compute[233724]: 2025-11-29 08:17:29.259 233728 DEBUG nova.virt.hardware [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:17:29 np0005539552 nova_compute[233724]: 2025-11-29 08:17:29.259 233728 DEBUG nova.virt.hardware [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:17:29 np0005539552 nova_compute[233724]: 2025-11-29 08:17:29.259 233728 DEBUG nova.virt.hardware [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:17:29 np0005539552 nova_compute[233724]: 2025-11-29 08:17:29.260 233728 DEBUG nova.virt.hardware [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:17:29 np0005539552 nova_compute[233724]: 2025-11-29 08:17:29.260 233728 DEBUG nova.virt.hardware [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:17:29 np0005539552 nova_compute[233724]: 2025-11-29 08:17:29.260 233728 DEBUG nova.virt.hardware [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:17:29 np0005539552 nova_compute[233724]: 2025-11-29 08:17:29.260 233728 DEBUG nova.virt.hardware [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:17:29 np0005539552 nova_compute[233724]: 2025-11-29 08:17:29.261 233728 DEBUG nova.virt.hardware [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:17:29 np0005539552 nova_compute[233724]: 2025-11-29 08:17:29.261 233728 DEBUG nova.virt.hardware [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:17:29 np0005539552 nova_compute[233724]: 2025-11-29 08:17:29.261 233728 DEBUG nova.virt.hardware [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:17:29 np0005539552 nova_compute[233724]: 2025-11-29 08:17:29.264 233728 DEBUG oslo_concurrency.processutils [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:29.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:29 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:29Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bf:be:93 10.100.0.8
Nov 29 03:17:29 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:29Z|00052|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bf:be:93 10.100.0.8
Nov 29 03:17:29 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:29Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:dd:61:ae 10.1.1.203
Nov 29 03:17:29 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:29Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:dd:61:ae 10.1.1.203
Nov 29 03:17:29 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:17:29 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2094678072' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:17:29 np0005539552 nova_compute[233724]: 2025-11-29 08:17:29.737 233728 DEBUG oslo_concurrency.processutils [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:29 np0005539552 nova_compute[233724]: 2025-11-29 08:17:29.766 233728 DEBUG nova.storage.rbd_utils [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] rbd image 258dfc76-0ea9-4521-a3fc-5d64b3632451_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:17:29 np0005539552 nova_compute[233724]: 2025-11-29 08:17:29.772 233728 DEBUG oslo_concurrency.processutils [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:29 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:29Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1a:e2:8b 10.1.1.28
Nov 29 03:17:29 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:29Z|00056|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1a:e2:8b 10.1.1.28
Nov 29 03:17:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:29.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:29 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:29Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:42:2b:da 10.2.2.200
Nov 29 03:17:29 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:29Z|00058|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:42:2b:da 10.2.2.200
Nov 29 03:17:30 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:30Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:56:50:8c 10.2.2.100
Nov 29 03:17:30 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:30Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:56:50:8c 10.2.2.100
Nov 29 03:17:30 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:17:30 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2280119669' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:17:30 np0005539552 nova_compute[233724]: 2025-11-29 08:17:30.240 233728 DEBUG oslo_concurrency.processutils [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:30 np0005539552 nova_compute[233724]: 2025-11-29 08:17:30.243 233728 DEBUG nova.virt.libvirt.vif [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:17:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1950416616',display_name='tempest-ServerActionsTestJSON-server-1950416616',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1950416616',id=108,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHzpWtKVBxR8y0ptyf26y7qDtzaZ8kbONkoZ9pomjaUJfrobt3UrzOwJRKUVsAcnHq9vyCWex553L84ouC5hX916iXo50xuUU5ZZ/mR8SlhwWlkwNt3Z2Xuyrzlm/13P0A==',key_name='tempest-keypair-2034735121',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='26e3508b949a4dbf960d7befc8f27869',ramdisk_id='',reservation_id='r-f50l1w69',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-2111371935',owner_user_name='tempest-ServerActionsTestJSON-2111371935-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:17:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='80ceb9112b3a4f119c05f21fd617af11',uuid=258dfc76-0ea9-4521-a3fc-5d64b3632451,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:17:30 np0005539552 nova_compute[233724]: 2025-11-29 08:17:30.244 233728 DEBUG nova.network.os_vif_util [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converting VIF {"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:17:30 np0005539552 nova_compute[233724]: 2025-11-29 08:17:30.246 233728 DEBUG nova.network.os_vif_util [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:49:96,bridge_name='br-int',has_traffic_filtering=True,id=524180cf-279c-48d6-8bf1-04f8f159aef6,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap524180cf-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:17:30 np0005539552 nova_compute[233724]: 2025-11-29 08:17:30.248 233728 DEBUG nova.objects.instance [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'pci_devices' on Instance uuid 258dfc76-0ea9-4521-a3fc-5d64b3632451 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:17:30 np0005539552 nova_compute[233724]: 2025-11-29 08:17:30.271 233728 DEBUG nova.virt.libvirt.driver [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:17:30 np0005539552 nova_compute[233724]:  <uuid>258dfc76-0ea9-4521-a3fc-5d64b3632451</uuid>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:  <name>instance-0000006c</name>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:17:30 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:      <nova:name>tempest-ServerActionsTestJSON-server-1950416616</nova:name>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:17:29</nova:creationTime>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:17:30 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:        <nova:user uuid="80ceb9112b3a4f119c05f21fd617af11">tempest-ServerActionsTestJSON-2111371935-project-member</nova:user>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:        <nova:project uuid="26e3508b949a4dbf960d7befc8f27869">tempest-ServerActionsTestJSON-2111371935</nova:project>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:        <nova:port uuid="524180cf-279c-48d6-8bf1-04f8f159aef6">
Nov 29 03:17:30 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:      <entry name="serial">258dfc76-0ea9-4521-a3fc-5d64b3632451</entry>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:      <entry name="uuid">258dfc76-0ea9-4521-a3fc-5d64b3632451</entry>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:17:30 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/258dfc76-0ea9-4521-a3fc-5d64b3632451_disk">
Nov 29 03:17:30 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:17:30 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:17:30 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/258dfc76-0ea9-4521-a3fc-5d64b3632451_disk.config">
Nov 29 03:17:30 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:17:30 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:17:30 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:b8:49:96"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:      <target dev="tap524180cf-27"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:17:30 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/258dfc76-0ea9-4521-a3fc-5d64b3632451/console.log" append="off"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:17:30 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:17:30 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:17:30 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:17:30 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:17:30 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:17:30 np0005539552 nova_compute[233724]: 2025-11-29 08:17:30.273 233728 DEBUG nova.compute.manager [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Preparing to wait for external event network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:17:30 np0005539552 nova_compute[233724]: 2025-11-29 08:17:30.273 233728 DEBUG oslo_concurrency.lockutils [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:30 np0005539552 nova_compute[233724]: 2025-11-29 08:17:30.273 233728 DEBUG oslo_concurrency.lockutils [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:30 np0005539552 nova_compute[233724]: 2025-11-29 08:17:30.274 233728 DEBUG oslo_concurrency.lockutils [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:30 np0005539552 nova_compute[233724]: 2025-11-29 08:17:30.275 233728 DEBUG nova.virt.libvirt.vif [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:17:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1950416616',display_name='tempest-ServerActionsTestJSON-server-1950416616',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1950416616',id=108,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHzpWtKVBxR8y0ptyf26y7qDtzaZ8kbONkoZ9pomjaUJfrobt3UrzOwJRKUVsAcnHq9vyCWex553L84ouC5hX916iXo50xuUU5ZZ/mR8SlhwWlkwNt3Z2Xuyrzlm/13P0A==',key_name='tempest-keypair-2034735121',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='26e3508b949a4dbf960d7befc8f27869',ramdisk_id='',reservation_id='r-f50l1w69',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-2111371935',owner_user_name='tempest-ServerActionsTestJSON-2111371935-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:17:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='80ceb9112b3a4f119c05f21fd617af11',uuid=258dfc76-0ea9-4521-a3fc-5d64b3632451,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:17:30 np0005539552 nova_compute[233724]: 2025-11-29 08:17:30.275 233728 DEBUG nova.network.os_vif_util [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converting VIF {"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:17:30 np0005539552 nova_compute[233724]: 2025-11-29 08:17:30.276 233728 DEBUG nova.network.os_vif_util [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:49:96,bridge_name='br-int',has_traffic_filtering=True,id=524180cf-279c-48d6-8bf1-04f8f159aef6,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap524180cf-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:17:30 np0005539552 nova_compute[233724]: 2025-11-29 08:17:30.276 233728 DEBUG os_vif [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:49:96,bridge_name='br-int',has_traffic_filtering=True,id=524180cf-279c-48d6-8bf1-04f8f159aef6,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap524180cf-27') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:17:30 np0005539552 nova_compute[233724]: 2025-11-29 08:17:30.276 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:30 np0005539552 nova_compute[233724]: 2025-11-29 08:17:30.277 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:30 np0005539552 nova_compute[233724]: 2025-11-29 08:17:30.277 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:17:30 np0005539552 nova_compute[233724]: 2025-11-29 08:17:30.280 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:30 np0005539552 nova_compute[233724]: 2025-11-29 08:17:30.281 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap524180cf-27, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:30 np0005539552 nova_compute[233724]: 2025-11-29 08:17:30.281 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap524180cf-27, col_values=(('external_ids', {'iface-id': '524180cf-279c-48d6-8bf1-04f8f159aef6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b8:49:96', 'vm-uuid': '258dfc76-0ea9-4521-a3fc-5d64b3632451'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:30 np0005539552 nova_compute[233724]: 2025-11-29 08:17:30.283 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:30 np0005539552 NetworkManager[48926]: <info>  [1764404250.2846] manager: (tap524180cf-27): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/213)
Nov 29 03:17:30 np0005539552 nova_compute[233724]: 2025-11-29 08:17:30.287 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:17:30 np0005539552 nova_compute[233724]: 2025-11-29 08:17:30.290 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:30 np0005539552 nova_compute[233724]: 2025-11-29 08:17:30.292 233728 INFO os_vif [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:49:96,bridge_name='br-int',has_traffic_filtering=True,id=524180cf-279c-48d6-8bf1-04f8f159aef6,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap524180cf-27')#033[00m
Nov 29 03:17:30 np0005539552 nova_compute[233724]: 2025-11-29 08:17:30.344 233728 DEBUG nova.virt.libvirt.driver [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:17:30 np0005539552 nova_compute[233724]: 2025-11-29 08:17:30.345 233728 DEBUG nova.virt.libvirt.driver [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:17:30 np0005539552 nova_compute[233724]: 2025-11-29 08:17:30.345 233728 DEBUG nova.virt.libvirt.driver [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] No VIF found with MAC fa:16:3e:b8:49:96, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:17:30 np0005539552 nova_compute[233724]: 2025-11-29 08:17:30.345 233728 INFO nova.virt.libvirt.driver [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Using config drive#033[00m
Nov 29 03:17:30 np0005539552 nova_compute[233724]: 2025-11-29 08:17:30.372 233728 DEBUG nova.storage.rbd_utils [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] rbd image 258dfc76-0ea9-4521-a3fc-5d64b3632451_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:17:30 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:30Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0a:5b:43 10.1.1.209
Nov 29 03:17:30 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:30Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0a:5b:43 10.1.1.209
Nov 29 03:17:30 np0005539552 nova_compute[233724]: 2025-11-29 08:17:30.572 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:17:30 np0005539552 nova_compute[233724]: 2025-11-29 08:17:30.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:17:30 np0005539552 podman[278532]: 2025-11-29 08:17:30.985394751 +0000 UTC m=+0.059043117 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:17:30 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:30Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b0:ef:da 10.1.1.254
Nov 29 03:17:30 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:30Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b0:ef:da 10.1.1.254
Nov 29 03:17:31 np0005539552 podman[278533]: 2025-11-29 08:17:31.009496739 +0000 UTC m=+0.083005892 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:17:31 np0005539552 podman[278531]: 2025-11-29 08:17:31.017790051 +0000 UTC m=+0.092646010 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 03:17:31 np0005539552 nova_compute[233724]: 2025-11-29 08:17:31.137 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:31 np0005539552 nova_compute[233724]: 2025-11-29 08:17:31.148 233728 INFO nova.virt.libvirt.driver [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Creating config drive at /var/lib/nova/instances/258dfc76-0ea9-4521-a3fc-5d64b3632451/disk.config#033[00m
Nov 29 03:17:31 np0005539552 nova_compute[233724]: 2025-11-29 08:17:31.153 233728 DEBUG oslo_concurrency.processutils [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/258dfc76-0ea9-4521-a3fc-5d64b3632451/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdwy9d3vd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:31 np0005539552 nova_compute[233724]: 2025-11-29 08:17:31.294 233728 DEBUG oslo_concurrency.processutils [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/258dfc76-0ea9-4521-a3fc-5d64b3632451/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdwy9d3vd" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:31 np0005539552 nova_compute[233724]: 2025-11-29 08:17:31.319 233728 DEBUG nova.storage.rbd_utils [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] rbd image 258dfc76-0ea9-4521-a3fc-5d64b3632451_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:17:31 np0005539552 nova_compute[233724]: 2025-11-29 08:17:31.322 233728 DEBUG oslo_concurrency.processutils [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/258dfc76-0ea9-4521-a3fc-5d64b3632451/disk.config 258dfc76-0ea9-4521-a3fc-5d64b3632451_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:31.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:31 np0005539552 nova_compute[233724]: 2025-11-29 08:17:31.491 233728 DEBUG oslo_concurrency.processutils [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/258dfc76-0ea9-4521-a3fc-5d64b3632451/disk.config 258dfc76-0ea9-4521-a3fc-5d64b3632451_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:31 np0005539552 nova_compute[233724]: 2025-11-29 08:17:31.492 233728 INFO nova.virt.libvirt.driver [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Deleting local config drive /var/lib/nova/instances/258dfc76-0ea9-4521-a3fc-5d64b3632451/disk.config because it was imported into RBD.#033[00m
Nov 29 03:17:31 np0005539552 kernel: tap524180cf-27: entered promiscuous mode
Nov 29 03:17:31 np0005539552 NetworkManager[48926]: <info>  [1764404251.5524] manager: (tap524180cf-27): new Tun device (/org/freedesktop/NetworkManager/Devices/214)
Nov 29 03:17:31 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:31Z|00432|binding|INFO|Claiming lport 524180cf-279c-48d6-8bf1-04f8f159aef6 for this chassis.
Nov 29 03:17:31 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:31Z|00433|binding|INFO|524180cf-279c-48d6-8bf1-04f8f159aef6: Claiming fa:16:3e:b8:49:96 10.100.0.5
Nov 29 03:17:31 np0005539552 nova_compute[233724]: 2025-11-29 08:17:31.553 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:31 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:31Z|00434|binding|INFO|Setting lport 524180cf-279c-48d6-8bf1-04f8f159aef6 ovn-installed in OVS
Nov 29 03:17:31 np0005539552 nova_compute[233724]: 2025-11-29 08:17:31.571 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:31 np0005539552 nova_compute[233724]: 2025-11-29 08:17:31.576 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:31 np0005539552 systemd-udevd[278646]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:17:31 np0005539552 NetworkManager[48926]: <info>  [1764404251.5911] device (tap524180cf-27): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:17:31 np0005539552 NetworkManager[48926]: <info>  [1764404251.5917] device (tap524180cf-27): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:17:31 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:31Z|00435|binding|INFO|Setting lport 524180cf-279c-48d6-8bf1-04f8f159aef6 up in Southbound
Nov 29 03:17:31 np0005539552 systemd-machined[196379]: New machine qemu-44-instance-0000006c.
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:31.633 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:49:96 10.100.0.5'], port_security=['fa:16:3e:b8:49:96 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '258dfc76-0ea9-4521-a3fc-5d64b3632451', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '26e3508b949a4dbf960d7befc8f27869', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f8b3ac18-c5ae-4ce5-b905-769d2e675d6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37614949-afe4-4907-8dd7-b52152148378, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=524180cf-279c-48d6-8bf1-04f8f159aef6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:31.634 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 524180cf-279c-48d6-8bf1-04f8f159aef6 in datapath 58fd104d-4342-482d-ae9e-dbb4b9fa6788 bound to our chassis#033[00m
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:31.636 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 58fd104d-4342-482d-ae9e-dbb4b9fa6788#033[00m
Nov 29 03:17:31 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:31Z|00436|binding|INFO|Releasing lport e3216152-714e-4f1b-a915-1a42856d0b72 from this chassis (sb_readonly=0)
Nov 29 03:17:31 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:31Z|00437|binding|INFO|Releasing lport 92ed063a-574d-4b3d-9fe4-dcebddd764ca from this chassis (sb_readonly=0)
Nov 29 03:17:31 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:31Z|00438|binding|INFO|Releasing lport d594d0ee-2dd3-4c06-b632-72a747d80deb from this chassis (sb_readonly=0)
Nov 29 03:17:31 np0005539552 systemd[1]: Started Virtual Machine qemu-44-instance-0000006c.
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:31.647 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[72cfea83-0818-4517-b594-2d4e3409d3c8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:31.648 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap58fd104d-41 in ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:31.650 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap58fd104d-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:31.650 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3ec665aa-9664-4342-b525-adf10c8e909c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:31.651 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[05487869-84c0-4eb3-8455-17141130f99b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:31.663 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[9b64a864-6fe0-45d0-86c5-21033db411a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:31.684 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[7d2a27e6-3403-48a3-a8be-2333f4891b9e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:31.710 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[6fb336e3-2be0-4026-b889-9b8897ed04f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:31 np0005539552 systemd-udevd[278648]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:17:31 np0005539552 NetworkManager[48926]: <info>  [1764404251.7176] manager: (tap58fd104d-40): new Veth device (/org/freedesktop/NetworkManager/Devices/215)
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:31.717 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4a9d547c-70d1-48a1-a79f-86d3e502722f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:31.750 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[03deddc2-76a3-4333-9b23-64cbd8c5e8b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:31.753 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[16fd71e5-107b-4ef0-9f34-a0fe29b6af1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:31 np0005539552 nova_compute[233724]: 2025-11-29 08:17:31.760 233728 DEBUG nova.network.neutron [req-e6e70484-4330-4d5f-8c76-ee124398b13f req-fba9637e-071e-4942-bca1-4ca7e9d5a054 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Updated VIF entry in instance network info cache for port 524180cf-279c-48d6-8bf1-04f8f159aef6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:17:31 np0005539552 nova_compute[233724]: 2025-11-29 08:17:31.761 233728 DEBUG nova.network.neutron [req-e6e70484-4330-4d5f-8c76-ee124398b13f req-fba9637e-071e-4942-bca1-4ca7e9d5a054 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Updating instance_info_cache with network_info: [{"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:17:31 np0005539552 NetworkManager[48926]: <info>  [1764404251.7782] device (tap58fd104d-40): carrier: link connected
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:31.783 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[4598c631-f56c-4545-ad91-9e2b66602d0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:31 np0005539552 nova_compute[233724]: 2025-11-29 08:17:31.789 233728 DEBUG oslo_concurrency.lockutils [req-e6e70484-4330-4d5f-8c76-ee124398b13f req-fba9637e-071e-4942-bca1-4ca7e9d5a054 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-258dfc76-0ea9-4521-a3fc-5d64b3632451" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:31.799 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[136bd820-ce03-48b0-88e9-50f9ef66ac93]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58fd104d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:26:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 131], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 729746, 'reachable_time': 27320, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278682, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:31.812 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[71cd9766-78c5-4b92-bd3c-589aed9871b3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea8:261e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 729746, 'tstamp': 729746}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278683, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:31.830 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ad8fc0f5-8c63-4100-8e1f-eed5f0d8ffc4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58fd104d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:26:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 131], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 729746, 'reachable_time': 27320, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 278684, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:31.858 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0f3c680e-f97f-4112-82ba-fd7b60ab6992]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:31.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:31 np0005539552 nova_compute[233724]: 2025-11-29 08:17:31.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:31.922 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0a46153c-b817-4881-abf9-b4a2567ef355]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:31 np0005539552 nova_compute[233724]: 2025-11-29 08:17:31.923 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:31.924 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58fd104d-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:31.924 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:31.924 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58fd104d-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:31 np0005539552 nova_compute[233724]: 2025-11-29 08:17:31.926 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:31 np0005539552 NetworkManager[48926]: <info>  [1764404251.9271] manager: (tap58fd104d-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/216)
Nov 29 03:17:31 np0005539552 kernel: tap58fd104d-40: entered promiscuous mode
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:31.930 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap58fd104d-40, col_values=(('external_ids', {'iface-id': '49c2d2fc-d147-42b8-8b87-df4d04283e61'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:31 np0005539552 nova_compute[233724]: 2025-11-29 08:17:31.931 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:31 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:31Z|00439|binding|INFO|Releasing lport 49c2d2fc-d147-42b8-8b87-df4d04283e61 from this chassis (sb_readonly=0)
Nov 29 03:17:31 np0005539552 nova_compute[233724]: 2025-11-29 08:17:31.945 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:31.948 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/58fd104d-4342-482d-ae9e-dbb4b9fa6788.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/58fd104d-4342-482d-ae9e-dbb4b9fa6788.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:31.949 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[88767891-91d0-4079-8b83-76ed5a7b61db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:31.949 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-58fd104d-4342-482d-ae9e-dbb4b9fa6788
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/58fd104d-4342-482d-ae9e-dbb4b9fa6788.pid.haproxy
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 58fd104d-4342-482d-ae9e-dbb4b9fa6788
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:17:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:31.950 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'env', 'PROCESS_TAG=haproxy-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/58fd104d-4342-482d-ae9e-dbb4b9fa6788.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:17:32 np0005539552 nova_compute[233724]: 2025-11-29 08:17:32.263 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404252.2629848, 258dfc76-0ea9-4521-a3fc-5d64b3632451 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:17:32 np0005539552 nova_compute[233724]: 2025-11-29 08:17:32.264 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] VM Started (Lifecycle Event)#033[00m
Nov 29 03:17:32 np0005539552 nova_compute[233724]: 2025-11-29 08:17:32.316 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:17:32 np0005539552 nova_compute[233724]: 2025-11-29 08:17:32.322 233728 DEBUG nova.compute.manager [req-0d9a29b2-f4fa-402b-8040-86c81a3a51b5 req-40cac93a-b303-4f2b-8f01-32c26225ae30 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received event network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:32 np0005539552 nova_compute[233724]: 2025-11-29 08:17:32.322 233728 DEBUG oslo_concurrency.lockutils [req-0d9a29b2-f4fa-402b-8040-86c81a3a51b5 req-40cac93a-b303-4f2b-8f01-32c26225ae30 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:32 np0005539552 nova_compute[233724]: 2025-11-29 08:17:32.323 233728 DEBUG oslo_concurrency.lockutils [req-0d9a29b2-f4fa-402b-8040-86c81a3a51b5 req-40cac93a-b303-4f2b-8f01-32c26225ae30 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:32 np0005539552 nova_compute[233724]: 2025-11-29 08:17:32.324 233728 DEBUG oslo_concurrency.lockutils [req-0d9a29b2-f4fa-402b-8040-86c81a3a51b5 req-40cac93a-b303-4f2b-8f01-32c26225ae30 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:32 np0005539552 nova_compute[233724]: 2025-11-29 08:17:32.325 233728 DEBUG nova.compute.manager [req-0d9a29b2-f4fa-402b-8040-86c81a3a51b5 req-40cac93a-b303-4f2b-8f01-32c26225ae30 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Processing event network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:17:32 np0005539552 nova_compute[233724]: 2025-11-29 08:17:32.327 233728 DEBUG nova.compute.manager [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:17:32 np0005539552 nova_compute[233724]: 2025-11-29 08:17:32.333 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:17:32 np0005539552 nova_compute[233724]: 2025-11-29 08:17:32.335 233728 DEBUG nova.virt.libvirt.driver [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:17:32 np0005539552 nova_compute[233724]: 2025-11-29 08:17:32.338 233728 INFO nova.virt.libvirt.driver [-] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Instance spawned successfully.#033[00m
Nov 29 03:17:32 np0005539552 nova_compute[233724]: 2025-11-29 08:17:32.338 233728 DEBUG nova.virt.libvirt.driver [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:17:32 np0005539552 nova_compute[233724]: 2025-11-29 08:17:32.367 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:17:32 np0005539552 nova_compute[233724]: 2025-11-29 08:17:32.368 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404252.2632992, 258dfc76-0ea9-4521-a3fc-5d64b3632451 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:17:32 np0005539552 nova_compute[233724]: 2025-11-29 08:17:32.368 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:17:32 np0005539552 nova_compute[233724]: 2025-11-29 08:17:32.373 233728 DEBUG nova.virt.libvirt.driver [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:17:32 np0005539552 nova_compute[233724]: 2025-11-29 08:17:32.373 233728 DEBUG nova.virt.libvirt.driver [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:17:32 np0005539552 nova_compute[233724]: 2025-11-29 08:17:32.373 233728 DEBUG nova.virt.libvirt.driver [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:17:32 np0005539552 nova_compute[233724]: 2025-11-29 08:17:32.374 233728 DEBUG nova.virt.libvirt.driver [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:17:32 np0005539552 nova_compute[233724]: 2025-11-29 08:17:32.374 233728 DEBUG nova.virt.libvirt.driver [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:17:32 np0005539552 nova_compute[233724]: 2025-11-29 08:17:32.374 233728 DEBUG nova.virt.libvirt.driver [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:17:32 np0005539552 nova_compute[233724]: 2025-11-29 08:17:32.394 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:17:32 np0005539552 nova_compute[233724]: 2025-11-29 08:17:32.397 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404252.3311987, 258dfc76-0ea9-4521-a3fc-5d64b3632451 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:17:32 np0005539552 nova_compute[233724]: 2025-11-29 08:17:32.397 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:17:32 np0005539552 podman[278758]: 2025-11-29 08:17:32.415516492 +0000 UTC m=+0.104922090 container create f123c61cd04c6721c46044acfdc9ef4e61dac1c912d498a016f3be4a5aaa5dfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:17:32 np0005539552 nova_compute[233724]: 2025-11-29 08:17:32.432 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:17:32 np0005539552 nova_compute[233724]: 2025-11-29 08:17:32.437 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:17:32 np0005539552 nova_compute[233724]: 2025-11-29 08:17:32.449 233728 INFO nova.compute.manager [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Took 7.98 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:17:32 np0005539552 nova_compute[233724]: 2025-11-29 08:17:32.449 233728 DEBUG nova.compute.manager [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:17:32 np0005539552 podman[278758]: 2025-11-29 08:17:32.371921611 +0000 UTC m=+0.061327299 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:17:32 np0005539552 systemd[1]: Started libpod-conmon-f123c61cd04c6721c46044acfdc9ef4e61dac1c912d498a016f3be4a5aaa5dfa.scope.
Nov 29 03:17:32 np0005539552 nova_compute[233724]: 2025-11-29 08:17:32.471 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:17:32 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:17:32 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97e41e1f16ba7f430bc3c31e8be8bab7d700d7eb3f19d9b630f0abc1127b3690/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:17:32 np0005539552 nova_compute[233724]: 2025-11-29 08:17:32.514 233728 INFO nova.compute.manager [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Took 9.08 seconds to build instance.#033[00m
Nov 29 03:17:32 np0005539552 podman[278758]: 2025-11-29 08:17:32.527588783 +0000 UTC m=+0.216994411 container init f123c61cd04c6721c46044acfdc9ef4e61dac1c912d498a016f3be4a5aaa5dfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 03:17:32 np0005539552 podman[278758]: 2025-11-29 08:17:32.536171964 +0000 UTC m=+0.225577562 container start f123c61cd04c6721c46044acfdc9ef4e61dac1c912d498a016f3be4a5aaa5dfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Nov 29 03:17:32 np0005539552 nova_compute[233724]: 2025-11-29 08:17:32.544 233728 DEBUG oslo_concurrency.lockutils [None req-1fc960b3-d9c7-4285-8886-0d8a94827aa7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.184s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:32 np0005539552 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[278773]: [NOTICE]   (278777) : New worker (278779) forked
Nov 29 03:17:32 np0005539552 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[278773]: [NOTICE]   (278777) : Loading success.
Nov 29 03:17:32 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:32 np0005539552 nova_compute[233724]: 2025-11-29 08:17:32.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:17:32 np0005539552 nova_compute[233724]: 2025-11-29 08:17:32.925 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:17:32 np0005539552 nova_compute[233724]: 2025-11-29 08:17:32.925 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:17:33 np0005539552 nova_compute[233724]: 2025-11-29 08:17:33.136 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "refresh_cache-ad6070e8-74bc-4df7-9c2d-5da5da175238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:17:33 np0005539552 nova_compute[233724]: 2025-11-29 08:17:33.137 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquired lock "refresh_cache-ad6070e8-74bc-4df7-9c2d-5da5da175238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:17:33 np0005539552 nova_compute[233724]: 2025-11-29 08:17:33.137 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:17:33 np0005539552 nova_compute[233724]: 2025-11-29 08:17:33.138 233728 DEBUG nova.objects.instance [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ad6070e8-74bc-4df7-9c2d-5da5da175238 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:17:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:33.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:33.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:34 np0005539552 nova_compute[233724]: 2025-11-29 08:17:34.459 233728 DEBUG nova.compute.manager [req-cb5cdfb1-cc9f-4b7a-984c-37c60d44f07b req-4ece806c-3b27-4217-8504-9a18e3fe7c2b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received event network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:34 np0005539552 nova_compute[233724]: 2025-11-29 08:17:34.460 233728 DEBUG oslo_concurrency.lockutils [req-cb5cdfb1-cc9f-4b7a-984c-37c60d44f07b req-4ece806c-3b27-4217-8504-9a18e3fe7c2b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:34 np0005539552 nova_compute[233724]: 2025-11-29 08:17:34.461 233728 DEBUG oslo_concurrency.lockutils [req-cb5cdfb1-cc9f-4b7a-984c-37c60d44f07b req-4ece806c-3b27-4217-8504-9a18e3fe7c2b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:34 np0005539552 nova_compute[233724]: 2025-11-29 08:17:34.462 233728 DEBUG oslo_concurrency.lockutils [req-cb5cdfb1-cc9f-4b7a-984c-37c60d44f07b req-4ece806c-3b27-4217-8504-9a18e3fe7c2b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:34 np0005539552 nova_compute[233724]: 2025-11-29 08:17:34.462 233728 DEBUG nova.compute.manager [req-cb5cdfb1-cc9f-4b7a-984c-37c60d44f07b req-4ece806c-3b27-4217-8504-9a18e3fe7c2b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] No waiting events found dispatching network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:17:34 np0005539552 nova_compute[233724]: 2025-11-29 08:17:34.463 233728 WARNING nova.compute.manager [req-cb5cdfb1-cc9f-4b7a-984c-37c60d44f07b req-4ece806c-3b27-4217-8504-9a18e3fe7c2b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received unexpected event network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:17:35 np0005539552 nova_compute[233724]: 2025-11-29 08:17:35.221 233728 DEBUG nova.compute.manager [req-f11c7371-5e22-4e1c-9cc5-8102467901ac req-034c8bd7-fa38-44aa-9db1-e314a3e830a5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received event network-changed-524180cf-279c-48d6-8bf1-04f8f159aef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:35 np0005539552 nova_compute[233724]: 2025-11-29 08:17:35.222 233728 DEBUG nova.compute.manager [req-f11c7371-5e22-4e1c-9cc5-8102467901ac req-034c8bd7-fa38-44aa-9db1-e314a3e830a5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Refreshing instance network info cache due to event network-changed-524180cf-279c-48d6-8bf1-04f8f159aef6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:17:35 np0005539552 nova_compute[233724]: 2025-11-29 08:17:35.223 233728 DEBUG oslo_concurrency.lockutils [req-f11c7371-5e22-4e1c-9cc5-8102467901ac req-034c8bd7-fa38-44aa-9db1-e314a3e830a5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-258dfc76-0ea9-4521-a3fc-5d64b3632451" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:17:35 np0005539552 nova_compute[233724]: 2025-11-29 08:17:35.223 233728 DEBUG oslo_concurrency.lockutils [req-f11c7371-5e22-4e1c-9cc5-8102467901ac req-034c8bd7-fa38-44aa-9db1-e314a3e830a5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-258dfc76-0ea9-4521-a3fc-5d64b3632451" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:17:35 np0005539552 nova_compute[233724]: 2025-11-29 08:17:35.223 233728 DEBUG nova.network.neutron [req-f11c7371-5e22-4e1c-9cc5-8102467901ac req-034c8bd7-fa38-44aa-9db1-e314a3e830a5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Refreshing network info cache for port 524180cf-279c-48d6-8bf1-04f8f159aef6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:17:35 np0005539552 nova_compute[233724]: 2025-11-29 08:17:35.284 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:35.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:35 np0005539552 nova_compute[233724]: 2025-11-29 08:17:35.720 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:17:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:35.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:17:36 np0005539552 nova_compute[233724]: 2025-11-29 08:17:36.143 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:36 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e298 e298: 3 total, 3 up, 3 in
Nov 29 03:17:37 np0005539552 nova_compute[233724]: 2025-11-29 08:17:37.025 233728 DEBUG nova.network.neutron [req-f11c7371-5e22-4e1c-9cc5-8102467901ac req-034c8bd7-fa38-44aa-9db1-e314a3e830a5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Updated VIF entry in instance network info cache for port 524180cf-279c-48d6-8bf1-04f8f159aef6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:17:37 np0005539552 nova_compute[233724]: 2025-11-29 08:17:37.026 233728 DEBUG nova.network.neutron [req-f11c7371-5e22-4e1c-9cc5-8102467901ac req-034c8bd7-fa38-44aa-9db1-e314a3e830a5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Updating instance_info_cache with network_info: [{"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:17:37 np0005539552 nova_compute[233724]: 2025-11-29 08:17:37.048 233728 DEBUG oslo_concurrency.lockutils [req-f11c7371-5e22-4e1c-9cc5-8102467901ac req-034c8bd7-fa38-44aa-9db1-e314a3e830a5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-258dfc76-0ea9-4521-a3fc-5d64b3632451" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:17:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e299 e299: 3 total, 3 up, 3 in
Nov 29 03:17:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:37.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:37.558 143505 DEBUG eventlet.wsgi.server [-] (143505) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Nov 29 03:17:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:37.559 143505 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /openstack/latest/meta_data.json HTTP/1.0#015
Nov 29 03:17:37 np0005539552 ovn_metadata_agent[143394]: Accept: */*#015
Nov 29 03:17:37 np0005539552 ovn_metadata_agent[143394]: Connection: close#015
Nov 29 03:17:37 np0005539552 ovn_metadata_agent[143394]: Content-Type: text/plain#015
Nov 29 03:17:37 np0005539552 ovn_metadata_agent[143394]: Host: 169.254.169.254#015
Nov 29 03:17:37 np0005539552 ovn_metadata_agent[143394]: User-Agent: curl/7.84.0#015
Nov 29 03:17:37 np0005539552 ovn_metadata_agent[143394]: X-Forwarded-For: 10.100.0.8#015
Nov 29 03:17:37 np0005539552 ovn_metadata_agent[143394]: X-Ovn-Network-Id: 9cbd3709-4a58-47d0-b193-4d753a5463a6 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Nov 29 03:17:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:37.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:38.373 143505 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Nov 29 03:17:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:38.374 143505 INFO eventlet.wsgi.server [-] 10.100.0.8,<local> "GET /openstack/latest/meta_data.json HTTP/1.1" status: 200  len: 2550 time: 0.8147895#033[00m
Nov 29 03:17:38 np0005539552 haproxy-metadata-proxy-9cbd3709-4a58-47d0-b193-4d753a5463a6[276288]: 10.100.0.8:53392 [29/Nov/2025:08:17:37.557] listener listener/metadata 0/0/0/817/817 200 2534 - - ---- 1/1/0/0/0 0/0 "GET /openstack/latest/meta_data.json HTTP/1.1"
Nov 29 03:17:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e300 e300: 3 total, 3 up, 3 in
Nov 29 03:17:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:17:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4076246319' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:17:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:17:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4076246319' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:17:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:39.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:39 np0005539552 nova_compute[233724]: 2025-11-29 08:17:39.465 233728 DEBUG oslo_concurrency.lockutils [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Acquiring lock "ad6070e8-74bc-4df7-9c2d-5da5da175238" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:39 np0005539552 nova_compute[233724]: 2025-11-29 08:17:39.465 233728 DEBUG oslo_concurrency.lockutils [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:39 np0005539552 nova_compute[233724]: 2025-11-29 08:17:39.466 233728 DEBUG oslo_concurrency.lockutils [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Acquiring lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:39 np0005539552 nova_compute[233724]: 2025-11-29 08:17:39.466 233728 DEBUG oslo_concurrency.lockutils [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:39 np0005539552 nova_compute[233724]: 2025-11-29 08:17:39.467 233728 DEBUG oslo_concurrency.lockutils [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:39 np0005539552 nova_compute[233724]: 2025-11-29 08:17:39.468 233728 INFO nova.compute.manager [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Terminating instance#033[00m
Nov 29 03:17:39 np0005539552 nova_compute[233724]: 2025-11-29 08:17:39.470 233728 DEBUG nova.compute.manager [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:17:39 np0005539552 kernel: tap082c8475-1c (unregistering): left promiscuous mode
Nov 29 03:17:39 np0005539552 NetworkManager[48926]: <info>  [1764404259.5479] device (tap082c8475-1c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:17:39 np0005539552 nova_compute[233724]: 2025-11-29 08:17:39.562 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:39Z|00440|binding|INFO|Releasing lport 082c8475-1c86-43c9-b68c-dafbd502311e from this chassis (sb_readonly=0)
Nov 29 03:17:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:39Z|00441|binding|INFO|Setting lport 082c8475-1c86-43c9-b68c-dafbd502311e down in Southbound
Nov 29 03:17:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:39Z|00442|binding|INFO|Removing iface tap082c8475-1c ovn-installed in OVS
Nov 29 03:17:39 np0005539552 nova_compute[233724]: 2025-11-29 08:17:39.565 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:39 np0005539552 kernel: tapeb187f09-9e (unregistering): left promiscuous mode
Nov 29 03:17:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:39.572 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:be:93 10.100.0.8'], port_security=['fa:16:3e:bf:be:93 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'ad6070e8-74bc-4df7-9c2d-5da5da175238', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9cbd3709-4a58-47d0-b193-4d753a5463a6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9e6235234e63419ead82cbd9a07d500f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8357ffa4-accb-4ad8-8d09-5b268fa29af8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.187'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=24911a34-643e-4f3c-a875-90bab1df8aa9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=082c8475-1c86-43c9-b68c-dafbd502311e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:17:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:39.573 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 082c8475-1c86-43c9-b68c-dafbd502311e in datapath 9cbd3709-4a58-47d0-b193-4d753a5463a6 unbound from our chassis#033[00m
Nov 29 03:17:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:39.575 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9cbd3709-4a58-47d0-b193-4d753a5463a6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:17:39 np0005539552 NetworkManager[48926]: <info>  [1764404259.5776] device (tapeb187f09-9e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:17:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:39.578 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b222b177-4d72-4def-b1a5-9f2d4700407c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:39.579 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9cbd3709-4a58-47d0-b193-4d753a5463a6 namespace which is not needed anymore#033[00m
Nov 29 03:17:39 np0005539552 nova_compute[233724]: 2025-11-29 08:17:39.588 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:39 np0005539552 kernel: tap580c06c8-35 (unregistering): left promiscuous mode
Nov 29 03:17:39 np0005539552 NetworkManager[48926]: <info>  [1764404259.5985] device (tap580c06c8-35): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:17:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:39Z|00443|binding|INFO|Releasing lport eb187f09-9e48-4b8c-9111-744004bcec05 from this chassis (sb_readonly=0)
Nov 29 03:17:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:39Z|00444|binding|INFO|Setting lport eb187f09-9e48-4b8c-9111-744004bcec05 down in Southbound
Nov 29 03:17:39 np0005539552 nova_compute[233724]: 2025-11-29 08:17:39.622 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:39Z|00445|binding|INFO|Removing iface tapeb187f09-9e ovn-installed in OVS
Nov 29 03:17:39 np0005539552 nova_compute[233724]: 2025-11-29 08:17:39.624 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:39 np0005539552 kernel: tapd87e0ecc-10 (unregistering): left promiscuous mode
Nov 29 03:17:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:39.637 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:ef:da 10.1.1.254'], port_security=['fa:16:3e:b0:ef:da 10.1.1.254'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest-58017298', 'neutron:cidrs': '10.1.1.254/24', 'neutron:device_id': 'ad6070e8-74bc-4df7-9c2d-5da5da175238', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-69989adc-a022-484c-921c-4ddb36b3b0a9', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest-58017298', 'neutron:project_id': '9e6235234e63419ead82cbd9a07d500f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4a7ebb69-572b-44c7-b6ca-83fcf8475427', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12a9ce2a-5d12-4041-9945-ab4e61b9a63e, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=eb187f09-9e48-4b8c-9111-744004bcec05) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:17:39 np0005539552 NetworkManager[48926]: <info>  [1764404259.6388] device (tapd87e0ecc-10): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:17:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:39Z|00446|binding|INFO|Releasing lport 580c06c8-3541-4c20-b8a7-c02c5f9efe4b from this chassis (sb_readonly=0)
Nov 29 03:17:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:39Z|00447|binding|INFO|Setting lport 580c06c8-3541-4c20-b8a7-c02c5f9efe4b down in Southbound
Nov 29 03:17:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:39Z|00448|binding|INFO|Removing iface tap580c06c8-35 ovn-installed in OVS
Nov 29 03:17:39 np0005539552 nova_compute[233724]: 2025-11-29 08:17:39.644 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:39 np0005539552 nova_compute[233724]: 2025-11-29 08:17:39.645 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:39.651 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:e2:8b 10.1.1.28'], port_security=['fa:16:3e:1a:e2:8b 10.1.1.28'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest-164481673', 'neutron:cidrs': '10.1.1.28/24', 'neutron:device_id': 'ad6070e8-74bc-4df7-9c2d-5da5da175238', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-69989adc-a022-484c-921c-4ddb36b3b0a9', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest-164481673', 'neutron:project_id': '9e6235234e63419ead82cbd9a07d500f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4a7ebb69-572b-44c7-b6ca-83fcf8475427', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12a9ce2a-5d12-4041-9945-ab4e61b9a63e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=580c06c8-3541-4c20-b8a7-c02c5f9efe4b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:17:39 np0005539552 nova_compute[233724]: 2025-11-29 08:17:39.652 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:39 np0005539552 kernel: tap341d2ffb-54 (unregistering): left promiscuous mode
Nov 29 03:17:39 np0005539552 NetworkManager[48926]: <info>  [1764404259.6720] device (tap341d2ffb-54): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:17:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:39Z|00449|binding|INFO|Releasing lport d87e0ecc-102f-4db6-b802-352374722987 from this chassis (sb_readonly=0)
Nov 29 03:17:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:39Z|00450|binding|INFO|Setting lport d87e0ecc-102f-4db6-b802-352374722987 down in Southbound
Nov 29 03:17:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:39Z|00451|binding|INFO|Removing iface tapd87e0ecc-10 ovn-installed in OVS
Nov 29 03:17:39 np0005539552 nova_compute[233724]: 2025-11-29 08:17:39.682 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:39 np0005539552 nova_compute[233724]: 2025-11-29 08:17:39.684 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:39 np0005539552 nova_compute[233724]: 2025-11-29 08:17:39.688 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:39.690 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:61:ae 10.1.1.203'], port_security=['fa:16:3e:dd:61:ae 10.1.1.203'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.203/24', 'neutron:device_id': 'ad6070e8-74bc-4df7-9c2d-5da5da175238', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-69989adc-a022-484c-921c-4ddb36b3b0a9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9e6235234e63419ead82cbd9a07d500f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8357ffa4-accb-4ad8-8d09-5b268fa29af8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12a9ce2a-5d12-4041-9945-ab4e61b9a63e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=d87e0ecc-102f-4db6-b802-352374722987) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:17:39 np0005539552 kernel: tap120af4a6-f9 (unregistering): left promiscuous mode
Nov 29 03:17:39 np0005539552 NetworkManager[48926]: <info>  [1764404259.7015] device (tap120af4a6-f9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:17:39 np0005539552 neutron-haproxy-ovnmeta-9cbd3709-4a58-47d0-b193-4d753a5463a6[276282]: [NOTICE]   (276286) : haproxy version is 2.8.14-c23fe91
Nov 29 03:17:39 np0005539552 neutron-haproxy-ovnmeta-9cbd3709-4a58-47d0-b193-4d753a5463a6[276282]: [NOTICE]   (276286) : path to executable is /usr/sbin/haproxy
Nov 29 03:17:39 np0005539552 neutron-haproxy-ovnmeta-9cbd3709-4a58-47d0-b193-4d753a5463a6[276282]: [WARNING]  (276286) : Exiting Master process...
Nov 29 03:17:39 np0005539552 neutron-haproxy-ovnmeta-9cbd3709-4a58-47d0-b193-4d753a5463a6[276282]: [ALERT]    (276286) : Current worker (276288) exited with code 143 (Terminated)
Nov 29 03:17:39 np0005539552 neutron-haproxy-ovnmeta-9cbd3709-4a58-47d0-b193-4d753a5463a6[276282]: [WARNING]  (276286) : All workers exited. Exiting... (0)
Nov 29 03:17:39 np0005539552 systemd[1]: libpod-8303456657ef739a79c39b4c313d59cb274fd46d8eb32a783d195af8a616e832.scope: Deactivated successfully.
Nov 29 03:17:39 np0005539552 kernel: tapb25ceb20-79 (unregistering): left promiscuous mode
Nov 29 03:17:39 np0005539552 NetworkManager[48926]: <info>  [1764404259.7284] device (tapb25ceb20-79): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:17:39 np0005539552 podman[278875]: 2025-11-29 08:17:39.728589159 +0000 UTC m=+0.051836013 container died 8303456657ef739a79c39b4c313d59cb274fd46d8eb32a783d195af8a616e832 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9cbd3709-4a58-47d0-b193-4d753a5463a6, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:17:39 np0005539552 nova_compute[233724]: 2025-11-29 08:17:39.731 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:39Z|00452|binding|INFO|Releasing lport 120af4a6-f9dd-4b7a-8756-ec894c6253de from this chassis (sb_readonly=0)
Nov 29 03:17:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:39Z|00453|binding|INFO|Setting lport 120af4a6-f9dd-4b7a-8756-ec894c6253de down in Southbound
Nov 29 03:17:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:39Z|00454|binding|INFO|Releasing lport 341d2ffb-54ae-4f73-b5fa-028f5d68084c from this chassis (sb_readonly=0)
Nov 29 03:17:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:39Z|00455|binding|INFO|Setting lport 341d2ffb-54ae-4f73-b5fa-028f5d68084c down in Southbound
Nov 29 03:17:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:39Z|00456|binding|INFO|Removing iface tap120af4a6-f9 ovn-installed in OVS
Nov 29 03:17:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:39Z|00457|binding|INFO|Removing iface tap341d2ffb-54 ovn-installed in OVS
Nov 29 03:17:39 np0005539552 nova_compute[233724]: 2025-11-29 08:17:39.747 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:39 np0005539552 nova_compute[233724]: 2025-11-29 08:17:39.757 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:39.758 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:50:8c 10.2.2.100'], port_security=['fa:16:3e:56:50:8c 10.2.2.100'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.100/24', 'neutron:device_id': 'ad6070e8-74bc-4df7-9c2d-5da5da175238', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-22531800-2588-4563-9214-766267df7c54', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9e6235234e63419ead82cbd9a07d500f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8357ffa4-accb-4ad8-8d09-5b268fa29af8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d562832a-41c1-42fd-9b76-af9eebcd14e7, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=120af4a6-f9dd-4b7a-8756-ec894c6253de) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:17:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:39.759 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0a:5b:43 10.1.1.209'], port_security=['fa:16:3e:0a:5b:43 10.1.1.209'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.209/24', 'neutron:device_id': 'ad6070e8-74bc-4df7-9c2d-5da5da175238', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-69989adc-a022-484c-921c-4ddb36b3b0a9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9e6235234e63419ead82cbd9a07d500f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8357ffa4-accb-4ad8-8d09-5b268fa29af8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12a9ce2a-5d12-4041-9945-ab4e61b9a63e, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=341d2ffb-54ae-4f73-b5fa-028f5d68084c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:17:39 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8303456657ef739a79c39b4c313d59cb274fd46d8eb32a783d195af8a616e832-userdata-shm.mount: Deactivated successfully.
Nov 29 03:17:39 np0005539552 systemd[1]: var-lib-containers-storage-overlay-9aa4b1aa92590b50fd6581823e75dbe61f786a9991f1fa810e3ece2f8a9035bf-merged.mount: Deactivated successfully.
Nov 29 03:17:39 np0005539552 podman[278875]: 2025-11-29 08:17:39.780754591 +0000 UTC m=+0.104001435 container cleanup 8303456657ef739a79c39b4c313d59cb274fd46d8eb32a783d195af8a616e832 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9cbd3709-4a58-47d0-b193-4d753a5463a6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 03:17:39 np0005539552 systemd[1]: libpod-conmon-8303456657ef739a79c39b4c313d59cb274fd46d8eb32a783d195af8a616e832.scope: Deactivated successfully.
Nov 29 03:17:39 np0005539552 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000066.scope: Deactivated successfully.
Nov 29 03:17:39 np0005539552 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000066.scope: Consumed 18.038s CPU time.
Nov 29 03:17:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:39Z|00458|binding|INFO|Releasing lport b25ceb20-79ac-43b0-8487-65bcf31a0a2f from this chassis (sb_readonly=0)
Nov 29 03:17:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:39Z|00459|binding|INFO|Setting lport b25ceb20-79ac-43b0-8487-65bcf31a0a2f down in Southbound
Nov 29 03:17:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:39Z|00460|binding|INFO|Removing iface tapb25ceb20-79 ovn-installed in OVS
Nov 29 03:17:39 np0005539552 podman[278924]: 2025-11-29 08:17:39.839206291 +0000 UTC m=+0.036984644 container remove 8303456657ef739a79c39b4c313d59cb274fd46d8eb32a783d195af8a616e832 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9cbd3709-4a58-47d0-b193-4d753a5463a6, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:17:39 np0005539552 nova_compute[233724]: 2025-11-29 08:17:39.837 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:39 np0005539552 systemd-machined[196379]: Machine qemu-43-instance-00000066 terminated.
Nov 29 03:17:39 np0005539552 nova_compute[233724]: 2025-11-29 08:17:39.841 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:39.847 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[183a01b8-0385-4321-ad54-416e387c2579]: (4, ('Sat Nov 29 08:17:39 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9cbd3709-4a58-47d0-b193-4d753a5463a6 (8303456657ef739a79c39b4c313d59cb274fd46d8eb32a783d195af8a616e832)\n8303456657ef739a79c39b4c313d59cb274fd46d8eb32a783d195af8a616e832\nSat Nov 29 08:17:39 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9cbd3709-4a58-47d0-b193-4d753a5463a6 (8303456657ef739a79c39b4c313d59cb274fd46d8eb32a783d195af8a616e832)\n8303456657ef739a79c39b4c313d59cb274fd46d8eb32a783d195af8a616e832\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:39.848 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[84abd987-7e6f-43f6-a696-63584f42462b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:39.849 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9cbd3709-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:39 np0005539552 kernel: tap9cbd3709-40: left promiscuous mode
Nov 29 03:17:39 np0005539552 nova_compute[233724]: 2025-11-29 08:17:39.852 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:39 np0005539552 nova_compute[233724]: 2025-11-29 08:17:39.855 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:39.859 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:2b:da 10.2.2.200'], port_security=['fa:16:3e:42:2b:da 10.2.2.200'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.200/24', 'neutron:device_id': 'ad6070e8-74bc-4df7-9c2d-5da5da175238', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-22531800-2588-4563-9214-766267df7c54', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9e6235234e63419ead82cbd9a07d500f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8357ffa4-accb-4ad8-8d09-5b268fa29af8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d562832a-41c1-42fd-9b76-af9eebcd14e7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=b25ceb20-79ac-43b0-8487-65bcf31a0a2f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:17:39 np0005539552 NetworkManager[48926]: <info>  [1764404259.8877] manager: (tap082c8475-1c): new Tun device (/org/freedesktop/NetworkManager/Devices/217)
Nov 29 03:17:39 np0005539552 nova_compute[233724]: 2025-11-29 08:17:39.890 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:39.894 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[621881af-2752-4ff4-86fa-ee3b606766c5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:39.906 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b385666f-0e36-4d68-80b7-3d2fdfdcee81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:39.907 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[51cb6328-ad07-45e0-aaad-d9e558772d57]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:39.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:39.922 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[18f4c2f8-05eb-4550-93d3-aa6cc0b49e70]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727558, 'reachable_time': 28710, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278967, 'error': None, 'target': 'ovnmeta-9cbd3709-4a58-47d0-b193-4d753a5463a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:39 np0005539552 systemd[1]: run-netns-ovnmeta\x2d9cbd3709\x2d4a58\x2d47d0\x2db193\x2d4d753a5463a6.mount: Deactivated successfully.
Nov 29 03:17:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:39.929 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9cbd3709-4a58-47d0-b193-4d753a5463a6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:17:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:39.929 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[a5cb8cd0-57d0-45d1-b28c-eb31cef6e600]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:39.930 143400 INFO neutron.agent.ovn.metadata.agent [-] Port eb187f09-9e48-4b8c-9111-744004bcec05 in datapath 69989adc-a022-484c-921c-4ddb36b3b0a9 unbound from our chassis#033[00m
Nov 29 03:17:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:39.931 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 69989adc-a022-484c-921c-4ddb36b3b0a9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:17:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:39.932 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c5e9a9e7-1b0b-425d-a52b-32d6fc52b452]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:39.932 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-69989adc-a022-484c-921c-4ddb36b3b0a9 namespace which is not needed anymore#033[00m
Nov 29 03:17:39 np0005539552 nova_compute[233724]: 2025-11-29 08:17:39.973 233728 DEBUG nova.compute.manager [req-96d05456-8a64-4289-94fc-3e2f0ee22590 req-c3ac36b0-da4a-4f60-9358-412e931d370b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received event network-vif-unplugged-082c8475-1c86-43c9-b68c-dafbd502311e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:39 np0005539552 nova_compute[233724]: 2025-11-29 08:17:39.973 233728 DEBUG oslo_concurrency.lockutils [req-96d05456-8a64-4289-94fc-3e2f0ee22590 req-c3ac36b0-da4a-4f60-9358-412e931d370b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:39 np0005539552 nova_compute[233724]: 2025-11-29 08:17:39.974 233728 DEBUG oslo_concurrency.lockutils [req-96d05456-8a64-4289-94fc-3e2f0ee22590 req-c3ac36b0-da4a-4f60-9358-412e931d370b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:39 np0005539552 nova_compute[233724]: 2025-11-29 08:17:39.974 233728 DEBUG oslo_concurrency.lockutils [req-96d05456-8a64-4289-94fc-3e2f0ee22590 req-c3ac36b0-da4a-4f60-9358-412e931d370b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:39 np0005539552 nova_compute[233724]: 2025-11-29 08:17:39.974 233728 DEBUG nova.compute.manager [req-96d05456-8a64-4289-94fc-3e2f0ee22590 req-c3ac36b0-da4a-4f60-9358-412e931d370b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] No waiting events found dispatching network-vif-unplugged-082c8475-1c86-43c9-b68c-dafbd502311e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:17:39 np0005539552 nova_compute[233724]: 2025-11-29 08:17:39.974 233728 DEBUG nova.compute.manager [req-96d05456-8a64-4289-94fc-3e2f0ee22590 req-c3ac36b0-da4a-4f60-9358-412e931d370b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received event network-vif-unplugged-082c8475-1c86-43c9-b68c-dafbd502311e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:17:39 np0005539552 nova_compute[233724]: 2025-11-29 08:17:39.995 233728 INFO nova.virt.libvirt.driver [-] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Instance destroyed successfully.#033[00m
Nov 29 03:17:39 np0005539552 nova_compute[233724]: 2025-11-29 08:17:39.996 233728 DEBUG nova.objects.instance [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Lazy-loading 'resources' on Instance uuid ad6070e8-74bc-4df7-9c2d-5da5da175238 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.016 233728 DEBUG nova.virt.libvirt.vif [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:16:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-294430620',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-294430620',id=102,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAO7RYcvoKKbeCNJWF2AQ95TNCsqK6uMqyPlBh/hzt9hd2A7t9lApW32VsjJK/EoYsg5Sa2rEdj+spCWxKDVwvc4e1llqE5HacHIZ9OjbL+s968KEhF8bE6BdLOxIR6jww==',key_name='tempest-keypair-2058686798',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:17:14Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9e6235234e63419ead82cbd9a07d500f',ramdisk_id='',reservation_id='r-hsa7naig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input
_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-1166451646',owner_user_name='tempest-TaggedBootDevicesTest-1166451646-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:17:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2646d924f10246c98f4ee29d496eb0f3',uuid=ad6070e8-74bc-4df7-9c2d-5da5da175238,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "082c8475-1c86-43c9-b68c-dafbd502311e", "address": "fa:16:3e:bf:be:93", "network": {"id": "9cbd3709-4a58-47d0-b193-4d753a5463a6", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-537724310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap082c8475-1c", "ovs_interfaceid": "082c8475-1c86-43c9-b68c-dafbd502311e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.016 233728 DEBUG nova.network.os_vif_util [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Converting VIF {"id": "082c8475-1c86-43c9-b68c-dafbd502311e", "address": "fa:16:3e:bf:be:93", "network": {"id": "9cbd3709-4a58-47d0-b193-4d753a5463a6", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-537724310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap082c8475-1c", "ovs_interfaceid": "082c8475-1c86-43c9-b68c-dafbd502311e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.017 233728 DEBUG nova.network.os_vif_util [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bf:be:93,bridge_name='br-int',has_traffic_filtering=True,id=082c8475-1c86-43c9-b68c-dafbd502311e,network=Network(9cbd3709-4a58-47d0-b193-4d753a5463a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap082c8475-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.017 233728 DEBUG os_vif [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bf:be:93,bridge_name='br-int',has_traffic_filtering=True,id=082c8475-1c86-43c9-b68c-dafbd502311e,network=Network(9cbd3709-4a58-47d0-b193-4d753a5463a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap082c8475-1c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.018 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.019 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap082c8475-1c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.020 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.022 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.035 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.038 233728 INFO os_vif [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bf:be:93,bridge_name='br-int',has_traffic_filtering=True,id=082c8475-1c86-43c9-b68c-dafbd502311e,network=Network(9cbd3709-4a58-47d0-b193-4d753a5463a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap082c8475-1c')#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.039 233728 DEBUG nova.virt.libvirt.vif [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:16:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-294430620',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-294430620',id=102,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAO7RYcvoKKbeCNJWF2AQ95TNCsqK6uMqyPlBh/hzt9hd2A7t9lApW32VsjJK/EoYsg5Sa2rEdj+spCWxKDVwvc4e1llqE5HacHIZ9OjbL+s968KEhF8bE6BdLOxIR6jww==',key_name='tempest-keypair-2058686798',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:17:14Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9e6235234e63419ead82cbd9a07d500f',ramdisk_id='',reservation_id='r-hsa7naig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input
_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-1166451646',owner_user_name='tempest-TaggedBootDevicesTest-1166451646-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:17:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2646d924f10246c98f4ee29d496eb0f3',uuid=ad6070e8-74bc-4df7-9c2d-5da5da175238,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "eb187f09-9e48-4b8c-9111-744004bcec05", "address": "fa:16:3e:b0:ef:da", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.254", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb187f09-9e", "ovs_interfaceid": "eb187f09-9e48-4b8c-9111-744004bcec05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.039 233728 DEBUG nova.network.os_vif_util [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Converting VIF {"id": "eb187f09-9e48-4b8c-9111-744004bcec05", "address": "fa:16:3e:b0:ef:da", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.254", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb187f09-9e", "ovs_interfaceid": "eb187f09-9e48-4b8c-9111-744004bcec05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.040 233728 DEBUG nova.network.os_vif_util [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:ef:da,bridge_name='br-int',has_traffic_filtering=True,id=eb187f09-9e48-4b8c-9111-744004bcec05,network=Network(69989adc-a022-484c-921c-4ddb36b3b0a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapeb187f09-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.040 233728 DEBUG os_vif [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:ef:da,bridge_name='br-int',has_traffic_filtering=True,id=eb187f09-9e48-4b8c-9111-744004bcec05,network=Network(69989adc-a022-484c-921c-4ddb36b3b0a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapeb187f09-9e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.041 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.042 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb187f09-9e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.043 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.045 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.054 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.056 233728 INFO os_vif [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:ef:da,bridge_name='br-int',has_traffic_filtering=True,id=eb187f09-9e48-4b8c-9111-744004bcec05,network=Network(69989adc-a022-484c-921c-4ddb36b3b0a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapeb187f09-9e')#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.057 233728 DEBUG nova.virt.libvirt.vif [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:16:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-294430620',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-294430620',id=102,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAO7RYcvoKKbeCNJWF2AQ95TNCsqK6uMqyPlBh/hzt9hd2A7t9lApW32VsjJK/EoYsg5Sa2rEdj+spCWxKDVwvc4e1llqE5HacHIZ9OjbL+s968KEhF8bE6BdLOxIR6jww==',key_name='tempest-keypair-2058686798',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:17:14Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9e6235234e63419ead82cbd9a07d500f',ramdisk_id='',reservation_id='r-hsa7naig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input
_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-1166451646',owner_user_name='tempest-TaggedBootDevicesTest-1166451646-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:17:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2646d924f10246c98f4ee29d496eb0f3',uuid=ad6070e8-74bc-4df7-9c2d-5da5da175238,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "580c06c8-3541-4c20-b8a7-c02c5f9efe4b", "address": "fa:16:3e:1a:e2:8b", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580c06c8-35", "ovs_interfaceid": "580c06c8-3541-4c20-b8a7-c02c5f9efe4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:17:40 np0005539552 neutron-haproxy-ovnmeta-69989adc-a022-484c-921c-4ddb36b3b0a9[276125]: [NOTICE]   (276129) : haproxy version is 2.8.14-c23fe91
Nov 29 03:17:40 np0005539552 neutron-haproxy-ovnmeta-69989adc-a022-484c-921c-4ddb36b3b0a9[276125]: [NOTICE]   (276129) : path to executable is /usr/sbin/haproxy
Nov 29 03:17:40 np0005539552 neutron-haproxy-ovnmeta-69989adc-a022-484c-921c-4ddb36b3b0a9[276125]: [WARNING]  (276129) : Exiting Master process...
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.057 233728 DEBUG nova.network.os_vif_util [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Converting VIF {"id": "580c06c8-3541-4c20-b8a7-c02c5f9efe4b", "address": "fa:16:3e:1a:e2:8b", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580c06c8-35", "ovs_interfaceid": "580c06c8-3541-4c20-b8a7-c02c5f9efe4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.058 233728 DEBUG nova.network.os_vif_util [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:e2:8b,bridge_name='br-int',has_traffic_filtering=True,id=580c06c8-3541-4c20-b8a7-c02c5f9efe4b,network=Network(69989adc-a022-484c-921c-4ddb36b3b0a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap580c06c8-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.058 233728 DEBUG os_vif [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:e2:8b,bridge_name='br-int',has_traffic_filtering=True,id=580c06c8-3541-4c20-b8a7-c02c5f9efe4b,network=Network(69989adc-a022-484c-921c-4ddb36b3b0a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap580c06c8-35') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:17:40 np0005539552 neutron-haproxy-ovnmeta-69989adc-a022-484c-921c-4ddb36b3b0a9[276125]: [ALERT]    (276129) : Current worker (276131) exited with code 143 (Terminated)
Nov 29 03:17:40 np0005539552 neutron-haproxy-ovnmeta-69989adc-a022-484c-921c-4ddb36b3b0a9[276125]: [WARNING]  (276129) : All workers exited. Exiting... (0)
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.059 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.060 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap580c06c8-35, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.061 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:40 np0005539552 systemd[1]: libpod-870c999d94bbd61f5e30d0cb78a603949607c0d27d94d9afcdbe2ade5c0a2a0c.scope: Deactivated successfully.
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.063 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:17:40 np0005539552 podman[279052]: 2025-11-29 08:17:40.068482541 +0000 UTC m=+0.045546955 container died 870c999d94bbd61f5e30d0cb78a603949607c0d27d94d9afcdbe2ade5c0a2a0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-69989adc-a022-484c-921c-4ddb36b3b0a9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.072 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.075 233728 INFO os_vif [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:e2:8b,bridge_name='br-int',has_traffic_filtering=True,id=580c06c8-3541-4c20-b8a7-c02c5f9efe4b,network=Network(69989adc-a022-484c-921c-4ddb36b3b0a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap580c06c8-35')#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.076 233728 DEBUG nova.virt.libvirt.vif [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:16:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-294430620',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-294430620',id=102,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAO7RYcvoKKbeCNJWF2AQ95TNCsqK6uMqyPlBh/hzt9hd2A7t9lApW32VsjJK/EoYsg5Sa2rEdj+spCWxKDVwvc4e1llqE5HacHIZ9OjbL+s968KEhF8bE6BdLOxIR6jww==',key_name='tempest-keypair-2058686798',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:17:14Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9e6235234e63419ead82cbd9a07d500f',ramdisk_id='',reservation_id='r-hsa7naig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-1166451646',owner_user_name='tempest-TaggedBootDevicesTest-1166451646-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:17:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2646d924f10246c98f4ee29d496eb0f3',uuid=ad6070e8-74bc-4df7-9c2d-5da5da175238,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d87e0ecc-102f-4db6-b802-352374722987", "address": "fa:16:3e:dd:61:ae", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd87e0ecc-10", "ovs_interfaceid": "d87e0ecc-102f-4db6-b802-352374722987", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.076 233728 DEBUG nova.network.os_vif_util [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Converting VIF {"id": "d87e0ecc-102f-4db6-b802-352374722987", "address": "fa:16:3e:dd:61:ae", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd87e0ecc-10", "ovs_interfaceid": "d87e0ecc-102f-4db6-b802-352374722987", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.077 233728 DEBUG nova.network.os_vif_util [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:61:ae,bridge_name='br-int',has_traffic_filtering=True,id=d87e0ecc-102f-4db6-b802-352374722987,network=Network(69989adc-a022-484c-921c-4ddb36b3b0a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd87e0ecc-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.077 233728 DEBUG os_vif [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:61:ae,bridge_name='br-int',has_traffic_filtering=True,id=d87e0ecc-102f-4db6-b802-352374722987,network=Network(69989adc-a022-484c-921c-4ddb36b3b0a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd87e0ecc-10') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.078 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.079 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd87e0ecc-10, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.081 233728 DEBUG nova.compute.manager [req-3169028d-9324-4414-bef0-a21fda3626ca req-96e54908-4601-40de-a261-4663696fe760 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received event network-vif-unplugged-d87e0ecc-102f-4db6-b802-352374722987 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.082 233728 DEBUG oslo_concurrency.lockutils [req-3169028d-9324-4414-bef0-a21fda3626ca req-96e54908-4601-40de-a261-4663696fe760 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.082 233728 DEBUG oslo_concurrency.lockutils [req-3169028d-9324-4414-bef0-a21fda3626ca req-96e54908-4601-40de-a261-4663696fe760 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.082 233728 DEBUG oslo_concurrency.lockutils [req-3169028d-9324-4414-bef0-a21fda3626ca req-96e54908-4601-40de-a261-4663696fe760 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.082 233728 DEBUG nova.compute.manager [req-3169028d-9324-4414-bef0-a21fda3626ca req-96e54908-4601-40de-a261-4663696fe760 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] No waiting events found dispatching network-vif-unplugged-d87e0ecc-102f-4db6-b802-352374722987 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.083 233728 DEBUG nova.compute.manager [req-3169028d-9324-4414-bef0-a21fda3626ca req-96e54908-4601-40de-a261-4663696fe760 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received event network-vif-unplugged-d87e0ecc-102f-4db6-b802-352374722987 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.083 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.084 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.088 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.090 233728 INFO os_vif [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:61:ae,bridge_name='br-int',has_traffic_filtering=True,id=d87e0ecc-102f-4db6-b802-352374722987,network=Network(69989adc-a022-484c-921c-4ddb36b3b0a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd87e0ecc-10')#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.090 233728 DEBUG nova.virt.libvirt.vif [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:16:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-294430620',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-294430620',id=102,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAO7RYcvoKKbeCNJWF2AQ95TNCsqK6uMqyPlBh/hzt9hd2A7t9lApW32VsjJK/EoYsg5Sa2rEdj+spCWxKDVwvc4e1llqE5HacHIZ9OjbL+s968KEhF8bE6BdLOxIR6jww==',key_name='tempest-keypair-2058686798',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:17:14Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9e6235234e63419ead82cbd9a07d500f',ramdisk_id='',reservation_id='r-hsa7naig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-1166451646',owner_user_name='tempest-TaggedBootDevicesTest-1166451646-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:17:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2646d924f10246c98f4ee29d496eb0f3',uuid=ad6070e8-74bc-4df7-9c2d-5da5da175238,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "341d2ffb-54ae-4f73-b5fa-028f5d68084c", "address": "fa:16:3e:0a:5b:43", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.209", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341d2ffb-54", "ovs_interfaceid": "341d2ffb-54ae-4f73-b5fa-028f5d68084c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.091 233728 DEBUG nova.network.os_vif_util [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Converting VIF {"id": "341d2ffb-54ae-4f73-b5fa-028f5d68084c", "address": "fa:16:3e:0a:5b:43", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.209", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341d2ffb-54", "ovs_interfaceid": "341d2ffb-54ae-4f73-b5fa-028f5d68084c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.091 233728 DEBUG nova.network.os_vif_util [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0a:5b:43,bridge_name='br-int',has_traffic_filtering=True,id=341d2ffb-54ae-4f73-b5fa-028f5d68084c,network=Network(69989adc-a022-484c-921c-4ddb36b3b0a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap341d2ffb-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.092 233728 DEBUG os_vif [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0a:5b:43,bridge_name='br-int',has_traffic_filtering=True,id=341d2ffb-54ae-4f73-b5fa-028f5d68084c,network=Network(69989adc-a022-484c-921c-4ddb36b3b0a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap341d2ffb-54') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.093 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:40 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-870c999d94bbd61f5e30d0cb78a603949607c0d27d94d9afcdbe2ade5c0a2a0c-userdata-shm.mount: Deactivated successfully.
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.093 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap341d2ffb-54, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:40 np0005539552 systemd[1]: var-lib-containers-storage-overlay-6d98abae8fdccedb042a653205b031ab42c7661223bbc77a35eefda08c68592a-merged.mount: Deactivated successfully.
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.095 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.097 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.102 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.104 233728 INFO os_vif [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0a:5b:43,bridge_name='br-int',has_traffic_filtering=True,id=341d2ffb-54ae-4f73-b5fa-028f5d68084c,network=Network(69989adc-a022-484c-921c-4ddb36b3b0a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap341d2ffb-54')#033[00m
Nov 29 03:17:40 np0005539552 podman[279052]: 2025-11-29 08:17:40.105163237 +0000 UTC m=+0.082227651 container cleanup 870c999d94bbd61f5e30d0cb78a603949607c0d27d94d9afcdbe2ade5c0a2a0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-69989adc-a022-484c-921c-4ddb36b3b0a9, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.105 233728 DEBUG nova.virt.libvirt.vif [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:16:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-294430620',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-294430620',id=102,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAO7RYcvoKKbeCNJWF2AQ95TNCsqK6uMqyPlBh/hzt9hd2A7t9lApW32VsjJK/EoYsg5Sa2rEdj+spCWxKDVwvc4e1llqE5HacHIZ9OjbL+s968KEhF8bE6BdLOxIR6jww==',key_name='tempest-keypair-2058686798',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:17:14Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9e6235234e63419ead82cbd9a07d500f',ramdisk_id='',reservation_id='r-hsa7naig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-1166451646',owner_user_name='tempest-TaggedBootDevicesTest-1166451646-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:17:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2646d924f10246c98f4ee29d496eb0f3',uuid=ad6070e8-74bc-4df7-9c2d-5da5da175238,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "120af4a6-f9dd-4b7a-8756-ec894c6253de", "address": "fa:16:3e:56:50:8c", "network": {"id": "22531800-2588-4563-9214-766267df7c54", "bridge": "br-int", "label": "tempest-device-tagging-net2-1207720522", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap120af4a6-f9", "ovs_interfaceid": "120af4a6-f9dd-4b7a-8756-ec894c6253de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.105 233728 DEBUG nova.network.os_vif_util [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Converting VIF {"id": "120af4a6-f9dd-4b7a-8756-ec894c6253de", "address": "fa:16:3e:56:50:8c", "network": {"id": "22531800-2588-4563-9214-766267df7c54", "bridge": "br-int", "label": "tempest-device-tagging-net2-1207720522", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap120af4a6-f9", "ovs_interfaceid": "120af4a6-f9dd-4b7a-8756-ec894c6253de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.106 233728 DEBUG nova.network.os_vif_util [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:50:8c,bridge_name='br-int',has_traffic_filtering=True,id=120af4a6-f9dd-4b7a-8756-ec894c6253de,network=Network(22531800-2588-4563-9214-766267df7c54),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap120af4a6-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.106 233728 DEBUG os_vif [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:50:8c,bridge_name='br-int',has_traffic_filtering=True,id=120af4a6-f9dd-4b7a-8756-ec894c6253de,network=Network(22531800-2588-4563-9214-766267df7c54),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap120af4a6-f9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.107 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.108 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap120af4a6-f9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.109 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.111 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:17:40 np0005539552 systemd[1]: libpod-conmon-870c999d94bbd61f5e30d0cb78a603949607c0d27d94d9afcdbe2ade5c0a2a0c.scope: Deactivated successfully.
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.117 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.119 233728 INFO os_vif [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:50:8c,bridge_name='br-int',has_traffic_filtering=True,id=120af4a6-f9dd-4b7a-8756-ec894c6253de,network=Network(22531800-2588-4563-9214-766267df7c54),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap120af4a6-f9')#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.119 233728 DEBUG nova.virt.libvirt.vif [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:16:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-294430620',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-294430620',id=102,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAO7RYcvoKKbeCNJWF2AQ95TNCsqK6uMqyPlBh/hzt9hd2A7t9lApW32VsjJK/EoYsg5Sa2rEdj+spCWxKDVwvc4e1llqE5HacHIZ9OjbL+s968KEhF8bE6BdLOxIR6jww==',key_name='tempest-keypair-2058686798',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:17:14Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9e6235234e63419ead82cbd9a07d500f',ramdisk_id='',reservation_id='r-hsa7naig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-1166451646',owner_user_name='tempest-TaggedBootDevicesTest-1166451646-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:17:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2646d924f10246c98f4ee29d496eb0f3',uuid=ad6070e8-74bc-4df7-9c2d-5da5da175238,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b25ceb20-79ac-43b0-8487-65bcf31a0a2f", "address": "fa:16:3e:42:2b:da", "network": {"id": "22531800-2588-4563-9214-766267df7c54", "bridge": "br-int", "label": "tempest-device-tagging-net2-1207720522", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb25ceb20-79", "ovs_interfaceid": "b25ceb20-79ac-43b0-8487-65bcf31a0a2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.120 233728 DEBUG nova.network.os_vif_util [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Converting VIF {"id": "b25ceb20-79ac-43b0-8487-65bcf31a0a2f", "address": "fa:16:3e:42:2b:da", "network": {"id": "22531800-2588-4563-9214-766267df7c54", "bridge": "br-int", "label": "tempest-device-tagging-net2-1207720522", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb25ceb20-79", "ovs_interfaceid": "b25ceb20-79ac-43b0-8487-65bcf31a0a2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.120 233728 DEBUG nova.network.os_vif_util [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:42:2b:da,bridge_name='br-int',has_traffic_filtering=True,id=b25ceb20-79ac-43b0-8487-65bcf31a0a2f,network=Network(22531800-2588-4563-9214-766267df7c54),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb25ceb20-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.121 233728 DEBUG os_vif [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:2b:da,bridge_name='br-int',has_traffic_filtering=True,id=b25ceb20-79ac-43b0-8487-65bcf31a0a2f,network=Network(22531800-2588-4563-9214-766267df7c54),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb25ceb20-79') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.122 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.122 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb25ceb20-79, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.126 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.127 233728 INFO os_vif [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:2b:da,bridge_name='br-int',has_traffic_filtering=True,id=b25ceb20-79ac-43b0-8487-65bcf31a0a2f,network=Network(22531800-2588-4563-9214-766267df7c54),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb25ceb20-79')#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.165 233728 DEBUG nova.compute.manager [req-b55a3e57-031a-4b89-915a-9ca45b887d85 req-bbed75e9-9494-4139-ae4f-9f41cd98e799 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received event network-vif-unplugged-eb187f09-9e48-4b8c-9111-744004bcec05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:40 np0005539552 podman[279103]: 2025-11-29 08:17:40.170701847 +0000 UTC m=+0.044276330 container remove 870c999d94bbd61f5e30d0cb78a603949607c0d27d94d9afcdbe2ade5c0a2a0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-69989adc-a022-484c-921c-4ddb36b3b0a9, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.165 233728 DEBUG oslo_concurrency.lockutils [req-b55a3e57-031a-4b89-915a-9ca45b887d85 req-bbed75e9-9494-4139-ae4f-9f41cd98e799 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.172 233728 DEBUG oslo_concurrency.lockutils [req-b55a3e57-031a-4b89-915a-9ca45b887d85 req-bbed75e9-9494-4139-ae4f-9f41cd98e799 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.172 233728 DEBUG oslo_concurrency.lockutils [req-b55a3e57-031a-4b89-915a-9ca45b887d85 req-bbed75e9-9494-4139-ae4f-9f41cd98e799 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.172 233728 DEBUG nova.compute.manager [req-b55a3e57-031a-4b89-915a-9ca45b887d85 req-bbed75e9-9494-4139-ae4f-9f41cd98e799 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] No waiting events found dispatching network-vif-unplugged-eb187f09-9e48-4b8c-9111-744004bcec05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.172 233728 DEBUG nova.compute.manager [req-b55a3e57-031a-4b89-915a-9ca45b887d85 req-bbed75e9-9494-4139-ae4f-9f41cd98e799 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received event network-vif-unplugged-eb187f09-9e48-4b8c-9111-744004bcec05 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:17:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:40.185 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[cba93943-aae9-4ba3-bd52-22eb98740fc6]: (4, ('Sat Nov 29 08:17:40 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-69989adc-a022-484c-921c-4ddb36b3b0a9 (870c999d94bbd61f5e30d0cb78a603949607c0d27d94d9afcdbe2ade5c0a2a0c)\n870c999d94bbd61f5e30d0cb78a603949607c0d27d94d9afcdbe2ade5c0a2a0c\nSat Nov 29 08:17:40 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-69989adc-a022-484c-921c-4ddb36b3b0a9 (870c999d94bbd61f5e30d0cb78a603949607c0d27d94d9afcdbe2ade5c0a2a0c)\n870c999d94bbd61f5e30d0cb78a603949607c0d27d94d9afcdbe2ade5c0a2a0c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:40.187 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ad4fa7f5-5372-4cb8-8898-9bca6cabad40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:40.188 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap69989adc-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.190 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:40 np0005539552 kernel: tap69989adc-a0: left promiscuous mode
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.193 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:40.195 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6da03cf3-4ce5-48df-a8ec-832e627f87da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.207 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:40.211 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a71f0d8f-1415-4db8-8614-27398d3eaf46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:40.212 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[73fb5e9f-b148-4a83-a555-ec50db2f9429]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:40.226 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[9bfbe9a4-ee35-41ac-a3ef-7dc567e79935]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727472, 'reachable_time': 17224, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279136, 'error': None, 'target': 'ovnmeta-69989adc-a022-484c-921c-4ddb36b3b0a9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:40.229 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-69989adc-a022-484c-921c-4ddb36b3b0a9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:17:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:40.229 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[1e120d21-034e-45ea-a0f6-df9ee15e8161]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:40.229 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 580c06c8-3541-4c20-b8a7-c02c5f9efe4b in datapath 69989adc-a022-484c-921c-4ddb36b3b0a9 unbound from our chassis#033[00m
Nov 29 03:17:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:40.231 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 69989adc-a022-484c-921c-4ddb36b3b0a9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:17:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:40.232 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[173b3bc3-38b0-456e-95ae-c85ebfc90197]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:40.232 143400 INFO neutron.agent.ovn.metadata.agent [-] Port d87e0ecc-102f-4db6-b802-352374722987 in datapath 69989adc-a022-484c-921c-4ddb36b3b0a9 unbound from our chassis#033[00m
Nov 29 03:17:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:40.234 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 69989adc-a022-484c-921c-4ddb36b3b0a9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:17:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:40.234 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a3956bf8-8696-42ef-896b-f0956bca25a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:40.235 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 120af4a6-f9dd-4b7a-8756-ec894c6253de in datapath 22531800-2588-4563-9214-766267df7c54 unbound from our chassis#033[00m
Nov 29 03:17:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:40.236 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 22531800-2588-4563-9214-766267df7c54, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:17:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:40.236 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f9e1f0bd-313b-46b7-80d1-098610a26a90]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:40.237 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-22531800-2588-4563-9214-766267df7c54 namespace which is not needed anymore#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.348 233728 INFO nova.virt.libvirt.driver [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Deleting instance files /var/lib/nova/instances/ad6070e8-74bc-4df7-9c2d-5da5da175238_del#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.349 233728 INFO nova.virt.libvirt.driver [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Deletion of /var/lib/nova/instances/ad6070e8-74bc-4df7-9c2d-5da5da175238_del complete#033[00m
Nov 29 03:17:40 np0005539552 neutron-haproxy-ovnmeta-22531800-2588-4563-9214-766267df7c54[276362]: [NOTICE]   (276366) : haproxy version is 2.8.14-c23fe91
Nov 29 03:17:40 np0005539552 neutron-haproxy-ovnmeta-22531800-2588-4563-9214-766267df7c54[276362]: [NOTICE]   (276366) : path to executable is /usr/sbin/haproxy
Nov 29 03:17:40 np0005539552 neutron-haproxy-ovnmeta-22531800-2588-4563-9214-766267df7c54[276362]: [WARNING]  (276366) : Exiting Master process...
Nov 29 03:17:40 np0005539552 neutron-haproxy-ovnmeta-22531800-2588-4563-9214-766267df7c54[276362]: [ALERT]    (276366) : Current worker (276368) exited with code 143 (Terminated)
Nov 29 03:17:40 np0005539552 neutron-haproxy-ovnmeta-22531800-2588-4563-9214-766267df7c54[276362]: [WARNING]  (276366) : All workers exited. Exiting... (0)
Nov 29 03:17:40 np0005539552 systemd[1]: libpod-418d0554d14d9686847b79cbc98609e3dcb766f58ec336f895e6dde0a9ff2e55.scope: Deactivated successfully.
Nov 29 03:17:40 np0005539552 podman[279154]: 2025-11-29 08:17:40.388478608 +0000 UTC m=+0.067550865 container died 418d0554d14d9686847b79cbc98609e3dcb766f58ec336f895e6dde0a9ff2e55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22531800-2588-4563-9214-766267df7c54, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.417 233728 INFO nova.compute.manager [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Took 0.95 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.417 233728 DEBUG oslo.service.loopingcall [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.418 233728 DEBUG nova.compute.manager [-] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.418 233728 DEBUG nova.network.neutron [-] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:17:40 np0005539552 podman[279154]: 2025-11-29 08:17:40.432952783 +0000 UTC m=+0.112025030 container cleanup 418d0554d14d9686847b79cbc98609e3dcb766f58ec336f895e6dde0a9ff2e55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22531800-2588-4563-9214-766267df7c54, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:17:40 np0005539552 systemd[1]: libpod-conmon-418d0554d14d9686847b79cbc98609e3dcb766f58ec336f895e6dde0a9ff2e55.scope: Deactivated successfully.
Nov 29 03:17:40 np0005539552 podman[279181]: 2025-11-29 08:17:40.516898039 +0000 UTC m=+0.053026146 container remove 418d0554d14d9686847b79cbc98609e3dcb766f58ec336f895e6dde0a9ff2e55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22531800-2588-4563-9214-766267df7c54, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:17:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:40.530 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0d29bd4a-9f9b-4e3a-bc46-832c3e545c63]: (4, ('Sat Nov 29 08:17:40 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-22531800-2588-4563-9214-766267df7c54 (418d0554d14d9686847b79cbc98609e3dcb766f58ec336f895e6dde0a9ff2e55)\n418d0554d14d9686847b79cbc98609e3dcb766f58ec336f895e6dde0a9ff2e55\nSat Nov 29 08:17:40 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-22531800-2588-4563-9214-766267df7c54 (418d0554d14d9686847b79cbc98609e3dcb766f58ec336f895e6dde0a9ff2e55)\n418d0554d14d9686847b79cbc98609e3dcb766f58ec336f895e6dde0a9ff2e55\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:40.532 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f0279bf3-a973-41ea-9e30-a5668aa6cc6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:40.534 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap22531800-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.537 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:40 np0005539552 kernel: tap22531800-20: left promiscuous mode
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.539 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:40.542 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[93fa0fa6-83d3-40e2-8903-bf9dc02d97c4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:40 np0005539552 nova_compute[233724]: 2025-11-29 08:17:40.556 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:40.560 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[942ab2a3-8081-448c-a41e-cd26be30f5bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:40.563 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[055118eb-357f-4ee8-9a35-0abee9fa7358]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:40.581 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[768b2238-7e54-4ddd-9f77-43cb756ed9ef]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727653, 'reachable_time': 34161, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279196, 'error': None, 'target': 'ovnmeta-22531800-2588-4563-9214-766267df7c54', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:40.583 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-22531800-2588-4563-9214-766267df7c54 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:17:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:40.583 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[9b6256b4-ca2a-4627-b252-557c71ba17b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:40.584 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 341d2ffb-54ae-4f73-b5fa-028f5d68084c in datapath 69989adc-a022-484c-921c-4ddb36b3b0a9 unbound from our chassis#033[00m
Nov 29 03:17:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:40.586 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 69989adc-a022-484c-921c-4ddb36b3b0a9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:17:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:40.586 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[63984a13-7605-48b6-950b-5a70f928b1f4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:40.587 143400 INFO neutron.agent.ovn.metadata.agent [-] Port b25ceb20-79ac-43b0-8487-65bcf31a0a2f in datapath 22531800-2588-4563-9214-766267df7c54 unbound from our chassis#033[00m
Nov 29 03:17:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:40.588 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 22531800-2588-4563-9214-766267df7c54, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:17:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:17:40.589 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[487a6046-ab10-439e-8f15-917e3b4b586e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:40 np0005539552 systemd[1]: var-lib-containers-storage-overlay-85509d81a220e6ca70ce8b1f63fd113458b3128f50a35d0a2a539cba8cf6f4fb-merged.mount: Deactivated successfully.
Nov 29 03:17:40 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-418d0554d14d9686847b79cbc98609e3dcb766f58ec336f895e6dde0a9ff2e55-userdata-shm.mount: Deactivated successfully.
Nov 29 03:17:40 np0005539552 systemd[1]: run-netns-ovnmeta\x2d22531800\x2d2588\x2d4563\x2d9214\x2d766267df7c54.mount: Deactivated successfully.
Nov 29 03:17:40 np0005539552 systemd[1]: run-netns-ovnmeta\x2d69989adc\x2da022\x2d484c\x2d921c\x2d4ddb36b3b0a9.mount: Deactivated successfully.
Nov 29 03:17:41 np0005539552 nova_compute[233724]: 2025-11-29 08:17:41.144 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:41.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:41 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e301 e301: 3 total, 3 up, 3 in
Nov 29 03:17:41 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #85. Immutable memtables: 0.
Nov 29 03:17:41 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:17:41.864696) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:17:41 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 85
Nov 29 03:17:41 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404261864783, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 2115, "num_deletes": 257, "total_data_size": 4878249, "memory_usage": 4959168, "flush_reason": "Manual Compaction"}
Nov 29 03:17:41 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #86: started
Nov 29 03:17:41 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404261887917, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 86, "file_size": 3153680, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43802, "largest_seqno": 45911, "table_properties": {"data_size": 3144977, "index_size": 5261, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18918, "raw_average_key_size": 20, "raw_value_size": 3127233, "raw_average_value_size": 3388, "num_data_blocks": 228, "num_entries": 923, "num_filter_entries": 923, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404104, "oldest_key_time": 1764404104, "file_creation_time": 1764404261, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:17:41 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 23267 microseconds, and 11628 cpu microseconds.
Nov 29 03:17:41 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:17:41 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:17:41.887964) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #86: 3153680 bytes OK
Nov 29 03:17:41 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:17:41.887985) [db/memtable_list.cc:519] [default] Level-0 commit table #86 started
Nov 29 03:17:41 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:17:41.890190) [db/memtable_list.cc:722] [default] Level-0 commit table #86: memtable #1 done
Nov 29 03:17:41 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:17:41.890205) EVENT_LOG_v1 {"time_micros": 1764404261890200, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:17:41 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:17:41.890223) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:17:41 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 4868647, prev total WAL file size 4868647, number of live WAL files 2.
Nov 29 03:17:41 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000082.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:17:41 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:17:41.891474) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323536' seq:72057594037927935, type:22 .. '6C6F676D0031353037' seq:0, type:0; will stop at (end)
Nov 29 03:17:41 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:17:41 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [86(3079KB)], [84(10028KB)]
Nov 29 03:17:41 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404261891508, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [86], "files_L6": [84], "score": -1, "input_data_size": 13423226, "oldest_snapshot_seqno": -1}
Nov 29 03:17:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:17:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:41.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:17:41 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #87: 7805 keys, 13254820 bytes, temperature: kUnknown
Nov 29 03:17:41 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404261966692, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 87, "file_size": 13254820, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13201140, "index_size": 33088, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19525, "raw_key_size": 201627, "raw_average_key_size": 25, "raw_value_size": 13060072, "raw_average_value_size": 1673, "num_data_blocks": 1310, "num_entries": 7805, "num_filter_entries": 7805, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764404261, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 87, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:17:41 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:17:41 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:17:41.966943) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 13254820 bytes
Nov 29 03:17:41 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:17:41.968413) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 178.4 rd, 176.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 9.8 +0.0 blob) out(12.6 +0.0 blob), read-write-amplify(8.5) write-amplify(4.2) OK, records in: 8339, records dropped: 534 output_compression: NoCompression
Nov 29 03:17:41 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:17:41.968487) EVENT_LOG_v1 {"time_micros": 1764404261968448, "job": 52, "event": "compaction_finished", "compaction_time_micros": 75256, "compaction_time_cpu_micros": 33544, "output_level": 6, "num_output_files": 1, "total_output_size": 13254820, "num_input_records": 8339, "num_output_records": 7805, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:17:41 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:17:41 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404261969445, "job": 52, "event": "table_file_deletion", "file_number": 86}
Nov 29 03:17:41 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000084.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:17:41 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404261971154, "job": 52, "event": "table_file_deletion", "file_number": 84}
Nov 29 03:17:41 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:17:41.891390) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:17:41 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:17:41.971201) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:17:41 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:17:41.971205) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:17:41 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:17:41.971206) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:17:41 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:17:41.971208) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:17:41 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:17:41.971209) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:17:42 np0005539552 nova_compute[233724]: 2025-11-29 08:17:42.071 233728 DEBUG nova.compute.manager [req-cde0a292-991e-4054-9e71-376a61275e1e req-fdb7c211-20d7-47a0-b514-dbe6bf20b0d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received event network-vif-plugged-082c8475-1c86-43c9-b68c-dafbd502311e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:42 np0005539552 nova_compute[233724]: 2025-11-29 08:17:42.072 233728 DEBUG oslo_concurrency.lockutils [req-cde0a292-991e-4054-9e71-376a61275e1e req-fdb7c211-20d7-47a0-b514-dbe6bf20b0d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:42 np0005539552 nova_compute[233724]: 2025-11-29 08:17:42.072 233728 DEBUG oslo_concurrency.lockutils [req-cde0a292-991e-4054-9e71-376a61275e1e req-fdb7c211-20d7-47a0-b514-dbe6bf20b0d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:42 np0005539552 nova_compute[233724]: 2025-11-29 08:17:42.072 233728 DEBUG oslo_concurrency.lockutils [req-cde0a292-991e-4054-9e71-376a61275e1e req-fdb7c211-20d7-47a0-b514-dbe6bf20b0d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:42 np0005539552 nova_compute[233724]: 2025-11-29 08:17:42.072 233728 DEBUG nova.compute.manager [req-cde0a292-991e-4054-9e71-376a61275e1e req-fdb7c211-20d7-47a0-b514-dbe6bf20b0d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] No waiting events found dispatching network-vif-plugged-082c8475-1c86-43c9-b68c-dafbd502311e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:17:42 np0005539552 nova_compute[233724]: 2025-11-29 08:17:42.073 233728 WARNING nova.compute.manager [req-cde0a292-991e-4054-9e71-376a61275e1e req-fdb7c211-20d7-47a0-b514-dbe6bf20b0d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received unexpected event network-vif-plugged-082c8475-1c86-43c9-b68c-dafbd502311e for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:17:42 np0005539552 nova_compute[233724]: 2025-11-29 08:17:42.156 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Updating instance_info_cache with network_info: [{"id": "082c8475-1c86-43c9-b68c-dafbd502311e", "address": "fa:16:3e:bf:be:93", "network": {"id": "9cbd3709-4a58-47d0-b193-4d753a5463a6", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-537724310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap082c8475-1c", "ovs_interfaceid": "082c8475-1c86-43c9-b68c-dafbd502311e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "eb187f09-9e48-4b8c-9111-744004bcec05", "address": "fa:16:3e:b0:ef:da", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.254", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": 
null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb187f09-9e", "ovs_interfaceid": "eb187f09-9e48-4b8c-9111-744004bcec05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "580c06c8-3541-4c20-b8a7-c02c5f9efe4b", "address": "fa:16:3e:1a:e2:8b", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580c06c8-35", "ovs_interfaceid": "580c06c8-3541-4c20-b8a7-c02c5f9efe4b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "d87e0ecc-102f-4db6-b802-352374722987", "address": "fa:16:3e:dd:61:ae", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd87e0ecc-10", "ovs_interfaceid": "d87e0ecc-102f-4db6-b802-352374722987", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "341d2ffb-54ae-4f73-b5fa-028f5d68084c", "address": "fa:16:3e:0a:5b:43", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.209", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341d2ffb-54", "ovs_interfaceid": "341d2ffb-54ae-4f73-b5fa-028f5d68084c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "120af4a6-f9dd-4b7a-8756-ec894c6253de", "address": "fa:16:3e:56:50:8c", "network": {"id": "22531800-2588-4563-9214-766267df7c54", "bridge": "br-int", "label": "tempest-device-tagging-net2-1207720522", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, 
"meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap120af4a6-f9", "ovs_interfaceid": "120af4a6-f9dd-4b7a-8756-ec894c6253de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b25ceb20-79ac-43b0-8487-65bcf31a0a2f", "address": "fa:16:3e:42:2b:da", "network": {"id": "22531800-2588-4563-9214-766267df7c54", "bridge": "br-int", "label": "tempest-device-tagging-net2-1207720522", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb25ceb20-79", "ovs_interfaceid": "b25ceb20-79ac-43b0-8487-65bcf31a0a2f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:17:42 np0005539552 nova_compute[233724]: 2025-11-29 08:17:42.179 233728 DEBUG nova.compute.manager [req-d933e8db-dd49-4212-ba30-6e76cc463a2d req-9546a647-4f5d-40a6-9b66-982e7208ae5d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received event network-vif-plugged-d87e0ecc-102f-4db6-b802-352374722987 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:42 np0005539552 nova_compute[233724]: 2025-11-29 08:17:42.179 233728 DEBUG oslo_concurrency.lockutils [req-d933e8db-dd49-4212-ba30-6e76cc463a2d req-9546a647-4f5d-40a6-9b66-982e7208ae5d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:42 np0005539552 nova_compute[233724]: 2025-11-29 08:17:42.180 233728 DEBUG oslo_concurrency.lockutils [req-d933e8db-dd49-4212-ba30-6e76cc463a2d req-9546a647-4f5d-40a6-9b66-982e7208ae5d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:42 np0005539552 nova_compute[233724]: 2025-11-29 08:17:42.180 233728 DEBUG oslo_concurrency.lockutils [req-d933e8db-dd49-4212-ba30-6e76cc463a2d req-9546a647-4f5d-40a6-9b66-982e7208ae5d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:42 np0005539552 nova_compute[233724]: 2025-11-29 08:17:42.180 233728 DEBUG nova.compute.manager [req-d933e8db-dd49-4212-ba30-6e76cc463a2d req-9546a647-4f5d-40a6-9b66-982e7208ae5d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] No waiting events found dispatching network-vif-plugged-d87e0ecc-102f-4db6-b802-352374722987 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:17:42 np0005539552 nova_compute[233724]: 2025-11-29 08:17:42.180 233728 WARNING nova.compute.manager [req-d933e8db-dd49-4212-ba30-6e76cc463a2d req-9546a647-4f5d-40a6-9b66-982e7208ae5d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received unexpected event network-vif-plugged-d87e0ecc-102f-4db6-b802-352374722987 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:17:42 np0005539552 nova_compute[233724]: 2025-11-29 08:17:42.180 233728 DEBUG nova.compute.manager [req-d933e8db-dd49-4212-ba30-6e76cc463a2d req-9546a647-4f5d-40a6-9b66-982e7208ae5d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received event network-vif-unplugged-b25ceb20-79ac-43b0-8487-65bcf31a0a2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:42 np0005539552 nova_compute[233724]: 2025-11-29 08:17:42.180 233728 DEBUG oslo_concurrency.lockutils [req-d933e8db-dd49-4212-ba30-6e76cc463a2d req-9546a647-4f5d-40a6-9b66-982e7208ae5d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:42 np0005539552 nova_compute[233724]: 2025-11-29 08:17:42.180 233728 DEBUG oslo_concurrency.lockutils [req-d933e8db-dd49-4212-ba30-6e76cc463a2d req-9546a647-4f5d-40a6-9b66-982e7208ae5d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:42 np0005539552 nova_compute[233724]: 2025-11-29 08:17:42.181 233728 DEBUG oslo_concurrency.lockutils [req-d933e8db-dd49-4212-ba30-6e76cc463a2d req-9546a647-4f5d-40a6-9b66-982e7208ae5d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:42 np0005539552 nova_compute[233724]: 2025-11-29 08:17:42.181 233728 DEBUG nova.compute.manager [req-d933e8db-dd49-4212-ba30-6e76cc463a2d req-9546a647-4f5d-40a6-9b66-982e7208ae5d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] No waiting events found dispatching network-vif-unplugged-b25ceb20-79ac-43b0-8487-65bcf31a0a2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:17:42 np0005539552 nova_compute[233724]: 2025-11-29 08:17:42.181 233728 DEBUG nova.compute.manager [req-d933e8db-dd49-4212-ba30-6e76cc463a2d req-9546a647-4f5d-40a6-9b66-982e7208ae5d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received event network-vif-unplugged-b25ceb20-79ac-43b0-8487-65bcf31a0a2f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:17:42 np0005539552 nova_compute[233724]: 2025-11-29 08:17:42.181 233728 DEBUG nova.compute.manager [req-d933e8db-dd49-4212-ba30-6e76cc463a2d req-9546a647-4f5d-40a6-9b66-982e7208ae5d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received event network-vif-plugged-b25ceb20-79ac-43b0-8487-65bcf31a0a2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:42 np0005539552 nova_compute[233724]: 2025-11-29 08:17:42.181 233728 DEBUG oslo_concurrency.lockutils [req-d933e8db-dd49-4212-ba30-6e76cc463a2d req-9546a647-4f5d-40a6-9b66-982e7208ae5d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:42 np0005539552 nova_compute[233724]: 2025-11-29 08:17:42.181 233728 DEBUG oslo_concurrency.lockutils [req-d933e8db-dd49-4212-ba30-6e76cc463a2d req-9546a647-4f5d-40a6-9b66-982e7208ae5d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:42 np0005539552 nova_compute[233724]: 2025-11-29 08:17:42.181 233728 DEBUG oslo_concurrency.lockutils [req-d933e8db-dd49-4212-ba30-6e76cc463a2d req-9546a647-4f5d-40a6-9b66-982e7208ae5d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:42 np0005539552 nova_compute[233724]: 2025-11-29 08:17:42.182 233728 DEBUG nova.compute.manager [req-d933e8db-dd49-4212-ba30-6e76cc463a2d req-9546a647-4f5d-40a6-9b66-982e7208ae5d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] No waiting events found dispatching network-vif-plugged-b25ceb20-79ac-43b0-8487-65bcf31a0a2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:17:42 np0005539552 nova_compute[233724]: 2025-11-29 08:17:42.182 233728 WARNING nova.compute.manager [req-d933e8db-dd49-4212-ba30-6e76cc463a2d req-9546a647-4f5d-40a6-9b66-982e7208ae5d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received unexpected event network-vif-plugged-b25ceb20-79ac-43b0-8487-65bcf31a0a2f for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:17:42 np0005539552 nova_compute[233724]: 2025-11-29 08:17:42.209 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Releasing lock "refresh_cache-ad6070e8-74bc-4df7-9c2d-5da5da175238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:17:42 np0005539552 nova_compute[233724]: 2025-11-29 08:17:42.209 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:17:42 np0005539552 nova_compute[233724]: 2025-11-29 08:17:42.209 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:17:42 np0005539552 nova_compute[233724]: 2025-11-29 08:17:42.210 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:17:42 np0005539552 nova_compute[233724]: 2025-11-29 08:17:42.210 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:17:42 np0005539552 nova_compute[233724]: 2025-11-29 08:17:42.325 233728 DEBUG nova.compute.manager [req-803d7b3b-a6e1-441e-92be-dc806c6f6504 req-9550a061-c9da-4cce-a753-f8909d33d700 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received event network-vif-plugged-eb187f09-9e48-4b8c-9111-744004bcec05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:42 np0005539552 nova_compute[233724]: 2025-11-29 08:17:42.325 233728 DEBUG oslo_concurrency.lockutils [req-803d7b3b-a6e1-441e-92be-dc806c6f6504 req-9550a061-c9da-4cce-a753-f8909d33d700 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:42 np0005539552 nova_compute[233724]: 2025-11-29 08:17:42.325 233728 DEBUG oslo_concurrency.lockutils [req-803d7b3b-a6e1-441e-92be-dc806c6f6504 req-9550a061-c9da-4cce-a753-f8909d33d700 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:42 np0005539552 nova_compute[233724]: 2025-11-29 08:17:42.326 233728 DEBUG oslo_concurrency.lockutils [req-803d7b3b-a6e1-441e-92be-dc806c6f6504 req-9550a061-c9da-4cce-a753-f8909d33d700 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:42 np0005539552 nova_compute[233724]: 2025-11-29 08:17:42.326 233728 DEBUG nova.compute.manager [req-803d7b3b-a6e1-441e-92be-dc806c6f6504 req-9550a061-c9da-4cce-a753-f8909d33d700 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] No waiting events found dispatching network-vif-plugged-eb187f09-9e48-4b8c-9111-744004bcec05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:17:42 np0005539552 nova_compute[233724]: 2025-11-29 08:17:42.326 233728 WARNING nova.compute.manager [req-803d7b3b-a6e1-441e-92be-dc806c6f6504 req-9550a061-c9da-4cce-a753-f8909d33d700 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received unexpected event network-vif-plugged-eb187f09-9e48-4b8c-9111-744004bcec05 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:17:42 np0005539552 nova_compute[233724]: 2025-11-29 08:17:42.326 233728 DEBUG nova.compute.manager [req-803d7b3b-a6e1-441e-92be-dc806c6f6504 req-9550a061-c9da-4cce-a753-f8909d33d700 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received event network-vif-unplugged-120af4a6-f9dd-4b7a-8756-ec894c6253de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:42 np0005539552 nova_compute[233724]: 2025-11-29 08:17:42.327 233728 DEBUG oslo_concurrency.lockutils [req-803d7b3b-a6e1-441e-92be-dc806c6f6504 req-9550a061-c9da-4cce-a753-f8909d33d700 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:42 np0005539552 nova_compute[233724]: 2025-11-29 08:17:42.327 233728 DEBUG oslo_concurrency.lockutils [req-803d7b3b-a6e1-441e-92be-dc806c6f6504 req-9550a061-c9da-4cce-a753-f8909d33d700 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:42 np0005539552 nova_compute[233724]: 2025-11-29 08:17:42.327 233728 DEBUG oslo_concurrency.lockutils [req-803d7b3b-a6e1-441e-92be-dc806c6f6504 req-9550a061-c9da-4cce-a753-f8909d33d700 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:42 np0005539552 nova_compute[233724]: 2025-11-29 08:17:42.327 233728 DEBUG nova.compute.manager [req-803d7b3b-a6e1-441e-92be-dc806c6f6504 req-9550a061-c9da-4cce-a753-f8909d33d700 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] No waiting events found dispatching network-vif-unplugged-120af4a6-f9dd-4b7a-8756-ec894c6253de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:17:42 np0005539552 nova_compute[233724]: 2025-11-29 08:17:42.327 233728 DEBUG nova.compute.manager [req-803d7b3b-a6e1-441e-92be-dc806c6f6504 req-9550a061-c9da-4cce-a753-f8909d33d700 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received event network-vif-unplugged-120af4a6-f9dd-4b7a-8756-ec894c6253de for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:17:42 np0005539552 nova_compute[233724]: 2025-11-29 08:17:42.328 233728 DEBUG nova.compute.manager [req-803d7b3b-a6e1-441e-92be-dc806c6f6504 req-9550a061-c9da-4cce-a753-f8909d33d700 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received event network-vif-plugged-120af4a6-f9dd-4b7a-8756-ec894c6253de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:42 np0005539552 nova_compute[233724]: 2025-11-29 08:17:42.328 233728 DEBUG oslo_concurrency.lockutils [req-803d7b3b-a6e1-441e-92be-dc806c6f6504 req-9550a061-c9da-4cce-a753-f8909d33d700 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:42 np0005539552 nova_compute[233724]: 2025-11-29 08:17:42.328 233728 DEBUG oslo_concurrency.lockutils [req-803d7b3b-a6e1-441e-92be-dc806c6f6504 req-9550a061-c9da-4cce-a753-f8909d33d700 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:42 np0005539552 nova_compute[233724]: 2025-11-29 08:17:42.328 233728 DEBUG oslo_concurrency.lockutils [req-803d7b3b-a6e1-441e-92be-dc806c6f6504 req-9550a061-c9da-4cce-a753-f8909d33d700 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:42 np0005539552 nova_compute[233724]: 2025-11-29 08:17:42.328 233728 DEBUG nova.compute.manager [req-803d7b3b-a6e1-441e-92be-dc806c6f6504 req-9550a061-c9da-4cce-a753-f8909d33d700 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] No waiting events found dispatching network-vif-plugged-120af4a6-f9dd-4b7a-8756-ec894c6253de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:17:42 np0005539552 nova_compute[233724]: 2025-11-29 08:17:42.329 233728 WARNING nova.compute.manager [req-803d7b3b-a6e1-441e-92be-dc806c6f6504 req-9550a061-c9da-4cce-a753-f8909d33d700 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received unexpected event network-vif-plugged-120af4a6-f9dd-4b7a-8756-ec894c6253de for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:17:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:43 np0005539552 nova_compute[233724]: 2025-11-29 08:17:43.204 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:17:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:17:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:43.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:17:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:43.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:44 np0005539552 nova_compute[233724]: 2025-11-29 08:17:44.178 233728 DEBUG nova.compute.manager [req-e5be4839-bccf-40df-aef0-806abca5abbe req-59787357-d8a7-42f1-a651-d6fe7e93f5fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received event network-vif-deleted-d87e0ecc-102f-4db6-b802-352374722987 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:44 np0005539552 nova_compute[233724]: 2025-11-29 08:17:44.179 233728 INFO nova.compute.manager [req-e5be4839-bccf-40df-aef0-806abca5abbe req-59787357-d8a7-42f1-a651-d6fe7e93f5fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Neutron deleted interface d87e0ecc-102f-4db6-b802-352374722987; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 03:17:44 np0005539552 nova_compute[233724]: 2025-11-29 08:17:44.180 233728 DEBUG nova.network.neutron [req-e5be4839-bccf-40df-aef0-806abca5abbe req-59787357-d8a7-42f1-a651-d6fe7e93f5fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Updating instance_info_cache with network_info: [{"id": "082c8475-1c86-43c9-b68c-dafbd502311e", "address": "fa:16:3e:bf:be:93", "network": {"id": "9cbd3709-4a58-47d0-b193-4d753a5463a6", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-537724310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap082c8475-1c", "ovs_interfaceid": "082c8475-1c86-43c9-b68c-dafbd502311e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "eb187f09-9e48-4b8c-9111-744004bcec05", "address": "fa:16:3e:b0:ef:da", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.254", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb187f09-9e", "ovs_interfaceid": "eb187f09-9e48-4b8c-9111-744004bcec05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "580c06c8-3541-4c20-b8a7-c02c5f9efe4b", "address": "fa:16:3e:1a:e2:8b", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580c06c8-35", "ovs_interfaceid": "580c06c8-3541-4c20-b8a7-c02c5f9efe4b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "341d2ffb-54ae-4f73-b5fa-028f5d68084c", "address": "fa:16:3e:0a:5b:43", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.209", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341d2ffb-54", "ovs_interfaceid": "341d2ffb-54ae-4f73-b5fa-028f5d68084c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "120af4a6-f9dd-4b7a-8756-ec894c6253de", "address": "fa:16:3e:56:50:8c", "network": {"id": "22531800-2588-4563-9214-766267df7c54", "bridge": "br-int", "label": "tempest-device-tagging-net2-1207720522", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap120af4a6-f9", "ovs_interfaceid": "120af4a6-f9dd-4b7a-8756-ec894c6253de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b25ceb20-79ac-43b0-8487-65bcf31a0a2f", "address": "fa:16:3e:42:2b:da", "network": {"id": "22531800-2588-4563-9214-766267df7c54", "bridge": "br-int", "label": "tempest-device-tagging-net2-1207720522", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb25ceb20-79", "ovs_interfaceid": "b25ceb20-79ac-43b0-8487-65bcf31a0a2f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:17:44 np0005539552 nova_compute[233724]: 2025-11-29 08:17:44.231 233728 DEBUG nova.compute.manager [req-e5be4839-bccf-40df-aef0-806abca5abbe req-59787357-d8a7-42f1-a651-d6fe7e93f5fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Detach interface failed, port_id=d87e0ecc-102f-4db6-b802-352374722987, reason: Instance ad6070e8-74bc-4df7-9c2d-5da5da175238 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 29 03:17:44 np0005539552 nova_compute[233724]: 2025-11-29 08:17:44.673 233728 DEBUG nova.compute.manager [req-a8ab5b8a-361f-426b-95ab-5ab920a67262 req-b6860415-86af-44ad-85e0-f39492ced3cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received event network-vif-unplugged-341d2ffb-54ae-4f73-b5fa-028f5d68084c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:44 np0005539552 nova_compute[233724]: 2025-11-29 08:17:44.674 233728 DEBUG oslo_concurrency.lockutils [req-a8ab5b8a-361f-426b-95ab-5ab920a67262 req-b6860415-86af-44ad-85e0-f39492ced3cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:44 np0005539552 nova_compute[233724]: 2025-11-29 08:17:44.674 233728 DEBUG oslo_concurrency.lockutils [req-a8ab5b8a-361f-426b-95ab-5ab920a67262 req-b6860415-86af-44ad-85e0-f39492ced3cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:44 np0005539552 nova_compute[233724]: 2025-11-29 08:17:44.675 233728 DEBUG oslo_concurrency.lockutils [req-a8ab5b8a-361f-426b-95ab-5ab920a67262 req-b6860415-86af-44ad-85e0-f39492ced3cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:44 np0005539552 nova_compute[233724]: 2025-11-29 08:17:44.675 233728 DEBUG nova.compute.manager [req-a8ab5b8a-361f-426b-95ab-5ab920a67262 req-b6860415-86af-44ad-85e0-f39492ced3cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] No waiting events found dispatching network-vif-unplugged-341d2ffb-54ae-4f73-b5fa-028f5d68084c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:17:44 np0005539552 nova_compute[233724]: 2025-11-29 08:17:44.675 233728 DEBUG nova.compute.manager [req-a8ab5b8a-361f-426b-95ab-5ab920a67262 req-b6860415-86af-44ad-85e0-f39492ced3cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received event network-vif-unplugged-341d2ffb-54ae-4f73-b5fa-028f5d68084c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:17:45 np0005539552 nova_compute[233724]: 2025-11-29 08:17:45.168 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:45.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:45 np0005539552 nova_compute[233724]: 2025-11-29 08:17:45.748 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:17:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:45.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:17:46 np0005539552 nova_compute[233724]: 2025-11-29 08:17:46.146 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:46 np0005539552 nova_compute[233724]: 2025-11-29 08:17:46.528 233728 DEBUG nova.compute.manager [req-c3a6dbb2-78b8-4834-a18b-853d1a48778a req-d4139389-6642-41cd-9736-cc73d4bab01a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received event network-vif-deleted-082c8475-1c86-43c9-b68c-dafbd502311e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:46 np0005539552 nova_compute[233724]: 2025-11-29 08:17:46.528 233728 INFO nova.compute.manager [req-c3a6dbb2-78b8-4834-a18b-853d1a48778a req-d4139389-6642-41cd-9736-cc73d4bab01a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Neutron deleted interface 082c8475-1c86-43c9-b68c-dafbd502311e; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 03:17:46 np0005539552 nova_compute[233724]: 2025-11-29 08:17:46.529 233728 DEBUG nova.network.neutron [req-c3a6dbb2-78b8-4834-a18b-853d1a48778a req-d4139389-6642-41cd-9736-cc73d4bab01a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Updating instance_info_cache with network_info: [{"id": "eb187f09-9e48-4b8c-9111-744004bcec05", "address": "fa:16:3e:b0:ef:da", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.254", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb187f09-9e", "ovs_interfaceid": "eb187f09-9e48-4b8c-9111-744004bcec05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "580c06c8-3541-4c20-b8a7-c02c5f9efe4b", "address": "fa:16:3e:1a:e2:8b", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580c06c8-35", "ovs_interfaceid": "580c06c8-3541-4c20-b8a7-c02c5f9efe4b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "341d2ffb-54ae-4f73-b5fa-028f5d68084c", "address": "fa:16:3e:0a:5b:43", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.209", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341d2ffb-54", "ovs_interfaceid": "341d2ffb-54ae-4f73-b5fa-028f5d68084c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "120af4a6-f9dd-4b7a-8756-ec894c6253de", "address": "fa:16:3e:56:50:8c", "network": {"id": "22531800-2588-4563-9214-766267df7c54", "bridge": "br-int", "label": "tempest-device-tagging-net2-1207720522", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap120af4a6-f9", "ovs_interfaceid": "120af4a6-f9dd-4b7a-8756-ec894c6253de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b25ceb20-79ac-43b0-8487-65bcf31a0a2f", "address": "fa:16:3e:42:2b:da", "network": {"id": "22531800-2588-4563-9214-766267df7c54", "bridge": "br-int", "label": "tempest-device-tagging-net2-1207720522", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb25ceb20-79", "ovs_interfaceid": "b25ceb20-79ac-43b0-8487-65bcf31a0a2f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:17:46 np0005539552 nova_compute[233724]: 2025-11-29 08:17:46.551 233728 DEBUG nova.network.neutron [-] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:17:46 np0005539552 nova_compute[233724]: 2025-11-29 08:17:46.562 233728 DEBUG nova.compute.manager [req-c3a6dbb2-78b8-4834-a18b-853d1a48778a req-d4139389-6642-41cd-9736-cc73d4bab01a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Detach interface failed, port_id=082c8475-1c86-43c9-b68c-dafbd502311e, reason: Instance ad6070e8-74bc-4df7-9c2d-5da5da175238 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 29 03:17:46 np0005539552 nova_compute[233724]: 2025-11-29 08:17:46.563 233728 DEBUG nova.compute.manager [req-c3a6dbb2-78b8-4834-a18b-853d1a48778a req-d4139389-6642-41cd-9736-cc73d4bab01a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received event network-vif-deleted-120af4a6-f9dd-4b7a-8756-ec894c6253de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:46 np0005539552 nova_compute[233724]: 2025-11-29 08:17:46.564 233728 INFO nova.compute.manager [req-c3a6dbb2-78b8-4834-a18b-853d1a48778a req-d4139389-6642-41cd-9736-cc73d4bab01a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Neutron deleted interface 120af4a6-f9dd-4b7a-8756-ec894c6253de; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 03:17:46 np0005539552 nova_compute[233724]: 2025-11-29 08:17:46.565 233728 DEBUG nova.network.neutron [req-c3a6dbb2-78b8-4834-a18b-853d1a48778a req-d4139389-6642-41cd-9736-cc73d4bab01a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Updating instance_info_cache with network_info: [{"id": "eb187f09-9e48-4b8c-9111-744004bcec05", "address": "fa:16:3e:b0:ef:da", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.254", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb187f09-9e", "ovs_interfaceid": "eb187f09-9e48-4b8c-9111-744004bcec05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "580c06c8-3541-4c20-b8a7-c02c5f9efe4b", "address": "fa:16:3e:1a:e2:8b", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580c06c8-35", "ovs_interfaceid": "580c06c8-3541-4c20-b8a7-c02c5f9efe4b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "341d2ffb-54ae-4f73-b5fa-028f5d68084c", "address": "fa:16:3e:0a:5b:43", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.209", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341d2ffb-54", "ovs_interfaceid": "341d2ffb-54ae-4f73-b5fa-028f5d68084c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b25ceb20-79ac-43b0-8487-65bcf31a0a2f", "address": "fa:16:3e:42:2b:da", "network": {"id": "22531800-2588-4563-9214-766267df7c54", "bridge": "br-int", "label": "tempest-device-tagging-net2-1207720522", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb25ceb20-79", "ovs_interfaceid": "b25ceb20-79ac-43b0-8487-65bcf31a0a2f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:17:46 np0005539552 nova_compute[233724]: 2025-11-29 08:17:46.571 233728 INFO nova.compute.manager [-] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Took 6.15 seconds to deallocate network for instance.#033[00m
Nov 29 03:17:46 np0005539552 nova_compute[233724]: 2025-11-29 08:17:46.593 233728 DEBUG nova.compute.manager [req-c3a6dbb2-78b8-4834-a18b-853d1a48778a req-d4139389-6642-41cd-9736-cc73d4bab01a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Detach interface failed, port_id=120af4a6-f9dd-4b7a-8756-ec894c6253de, reason: Instance ad6070e8-74bc-4df7-9c2d-5da5da175238 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 29 03:17:46 np0005539552 nova_compute[233724]: 2025-11-29 08:17:46.594 233728 DEBUG nova.compute.manager [req-c3a6dbb2-78b8-4834-a18b-853d1a48778a req-d4139389-6642-41cd-9736-cc73d4bab01a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received event network-vif-deleted-341d2ffb-54ae-4f73-b5fa-028f5d68084c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:46 np0005539552 nova_compute[233724]: 2025-11-29 08:17:46.595 233728 INFO nova.compute.manager [req-c3a6dbb2-78b8-4834-a18b-853d1a48778a req-d4139389-6642-41cd-9736-cc73d4bab01a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Neutron deleted interface 341d2ffb-54ae-4f73-b5fa-028f5d68084c; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 03:17:46 np0005539552 nova_compute[233724]: 2025-11-29 08:17:46.596 233728 DEBUG nova.network.neutron [req-c3a6dbb2-78b8-4834-a18b-853d1a48778a req-d4139389-6642-41cd-9736-cc73d4bab01a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Updating instance_info_cache with network_info: [{"id": "eb187f09-9e48-4b8c-9111-744004bcec05", "address": "fa:16:3e:b0:ef:da", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.254", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb187f09-9e", "ovs_interfaceid": "eb187f09-9e48-4b8c-9111-744004bcec05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "580c06c8-3541-4c20-b8a7-c02c5f9efe4b", "address": "fa:16:3e:1a:e2:8b", "network": {"id": "69989adc-a022-484c-921c-4ddb36b3b0a9", "bridge": "br-int", "label": "tempest-device-tagging-net1-852500129", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580c06c8-35", "ovs_interfaceid": "580c06c8-3541-4c20-b8a7-c02c5f9efe4b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "b25ceb20-79ac-43b0-8487-65bcf31a0a2f", "address": "fa:16:3e:42:2b:da", "network": {"id": "22531800-2588-4563-9214-766267df7c54", "bridge": "br-int", "label": "tempest-device-tagging-net2-1207720522", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e6235234e63419ead82cbd9a07d500f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb25ceb20-79", "ovs_interfaceid": "b25ceb20-79ac-43b0-8487-65bcf31a0a2f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:17:46 np0005539552 nova_compute[233724]: 2025-11-29 08:17:46.898 233728 DEBUG nova.compute.manager [req-c3a6dbb2-78b8-4834-a18b-853d1a48778a req-d4139389-6642-41cd-9736-cc73d4bab01a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Detach interface failed, port_id=341d2ffb-54ae-4f73-b5fa-028f5d68084c, reason: Instance ad6070e8-74bc-4df7-9c2d-5da5da175238 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 29 03:17:46 np0005539552 nova_compute[233724]: 2025-11-29 08:17:46.964 233728 DEBUG nova.compute.manager [req-b23399f2-2a42-4380-b1f4-6799cdd0c4a1 req-71cbb998-5752-4ba1-bef4-15d0147bc4e5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received event network-vif-plugged-341d2ffb-54ae-4f73-b5fa-028f5d68084c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:46 np0005539552 nova_compute[233724]: 2025-11-29 08:17:46.965 233728 DEBUG oslo_concurrency.lockutils [req-b23399f2-2a42-4380-b1f4-6799cdd0c4a1 req-71cbb998-5752-4ba1-bef4-15d0147bc4e5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:46 np0005539552 nova_compute[233724]: 2025-11-29 08:17:46.965 233728 DEBUG oslo_concurrency.lockutils [req-b23399f2-2a42-4380-b1f4-6799cdd0c4a1 req-71cbb998-5752-4ba1-bef4-15d0147bc4e5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:46 np0005539552 nova_compute[233724]: 2025-11-29 08:17:46.965 233728 DEBUG oslo_concurrency.lockutils [req-b23399f2-2a42-4380-b1f4-6799cdd0c4a1 req-71cbb998-5752-4ba1-bef4-15d0147bc4e5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:46 np0005539552 nova_compute[233724]: 2025-11-29 08:17:46.965 233728 DEBUG nova.compute.manager [req-b23399f2-2a42-4380-b1f4-6799cdd0c4a1 req-71cbb998-5752-4ba1-bef4-15d0147bc4e5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] No waiting events found dispatching network-vif-plugged-341d2ffb-54ae-4f73-b5fa-028f5d68084c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:17:46 np0005539552 nova_compute[233724]: 2025-11-29 08:17:46.965 233728 WARNING nova.compute.manager [req-b23399f2-2a42-4380-b1f4-6799cdd0c4a1 req-71cbb998-5752-4ba1-bef4-15d0147bc4e5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received unexpected event network-vif-plugged-341d2ffb-54ae-4f73-b5fa-028f5d68084c for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:17:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:17:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:47.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:17:47 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:47Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b8:49:96 10.100.0.5
Nov 29 03:17:47 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:47Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b8:49:96 10.100.0.5
Nov 29 03:17:47 np0005539552 nova_compute[233724]: 2025-11-29 08:17:47.691 233728 INFO nova.compute.manager [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Took 1.12 seconds to detach 3 volumes for instance.#033[00m
Nov 29 03:17:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:47 np0005539552 nova_compute[233724]: 2025-11-29 08:17:47.745 233728 DEBUG oslo_concurrency.lockutils [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:47 np0005539552 nova_compute[233724]: 2025-11-29 08:17:47.746 233728 DEBUG oslo_concurrency.lockutils [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:47 np0005539552 nova_compute[233724]: 2025-11-29 08:17:47.847 233728 DEBUG oslo_concurrency.processutils [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:47.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:17:48 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/172950468' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:17:48 np0005539552 nova_compute[233724]: 2025-11-29 08:17:48.311 233728 DEBUG oslo_concurrency.processutils [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:48 np0005539552 nova_compute[233724]: 2025-11-29 08:17:48.318 233728 DEBUG nova.compute.provider_tree [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:17:48 np0005539552 nova_compute[233724]: 2025-11-29 08:17:48.333 233728 DEBUG nova.scheduler.client.report [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:17:48 np0005539552 nova_compute[233724]: 2025-11-29 08:17:48.354 233728 DEBUG oslo_concurrency.lockutils [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:48 np0005539552 nova_compute[233724]: 2025-11-29 08:17:48.387 233728 INFO nova.scheduler.client.report [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Deleted allocations for instance ad6070e8-74bc-4df7-9c2d-5da5da175238#033[00m
Nov 29 03:17:48 np0005539552 nova_compute[233724]: 2025-11-29 08:17:48.574 233728 DEBUG oslo_concurrency.lockutils [None req-e5d88464-9781-4650-8324-fbc5a7c6d5e8 2646d924f10246c98f4ee29d496eb0f3 9e6235234e63419ead82cbd9a07d500f - - default default] Lock "ad6070e8-74bc-4df7-9c2d-5da5da175238" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.108s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:48 np0005539552 nova_compute[233724]: 2025-11-29 08:17:48.687 233728 DEBUG nova.compute.manager [req-a6351bf8-bfc4-4fe7-bd83-dd0b5f756ba8 req-b8b8effa-4b00-41bf-89e3-775e0360bcf9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Received event network-vif-deleted-b25ceb20-79ac-43b0-8487-65bcf31a0a2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:48 np0005539552 nova_compute[233724]: 2025-11-29 08:17:48.687 233728 INFO nova.compute.manager [req-a6351bf8-bfc4-4fe7-bd83-dd0b5f756ba8 req-b8b8effa-4b00-41bf-89e3-775e0360bcf9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Neutron deleted interface b25ceb20-79ac-43b0-8487-65bcf31a0a2f; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 03:17:48 np0005539552 nova_compute[233724]: 2025-11-29 08:17:48.688 233728 DEBUG nova.network.neutron [req-a6351bf8-bfc4-4fe7-bd83-dd0b5f756ba8 req-b8b8effa-4b00-41bf-89e3-775e0360bcf9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Nov 29 03:17:48 np0005539552 nova_compute[233724]: 2025-11-29 08:17:48.690 233728 DEBUG nova.compute.manager [req-a6351bf8-bfc4-4fe7-bd83-dd0b5f756ba8 req-b8b8effa-4b00-41bf-89e3-775e0360bcf9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Detach interface failed, port_id=b25ceb20-79ac-43b0-8487-65bcf31a0a2f, reason: Instance ad6070e8-74bc-4df7-9c2d-5da5da175238 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 29 03:17:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:49.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:49 np0005539552 nova_compute[233724]: 2025-11-29 08:17:49.747 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:49.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:50 np0005539552 nova_compute[233724]: 2025-11-29 08:17:50.169 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:51 np0005539552 nova_compute[233724]: 2025-11-29 08:17:51.148 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:17:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:51.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:17:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:51.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:53.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:53.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:54 np0005539552 nova_compute[233724]: 2025-11-29 08:17:54.992 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404259.9907405, ad6070e8-74bc-4df7-9c2d-5da5da175238 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:17:54 np0005539552 nova_compute[233724]: 2025-11-29 08:17:54.993 233728 INFO nova.compute.manager [-] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:17:55 np0005539552 nova_compute[233724]: 2025-11-29 08:17:55.011 233728 DEBUG nova.compute.manager [None req-6a5ef4ea-9fa3-4ab2-9588-27e389afaaa6 - - - - - -] [instance: ad6070e8-74bc-4df7-9c2d-5da5da175238] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:17:55 np0005539552 nova_compute[233724]: 2025-11-29 08:17:55.173 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:55.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:55.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:56 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e302 e302: 3 total, 3 up, 3 in
Nov 29 03:17:56 np0005539552 nova_compute[233724]: 2025-11-29 08:17:56.151 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:57.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:57 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:57.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e303 e303: 3 total, 3 up, 3 in
Nov 29 03:17:58 np0005539552 ovn_controller[133798]: 2025-11-29T08:17:58Z|00461|binding|INFO|Releasing lport 49c2d2fc-d147-42b8-8b87-df4d04283e61 from this chassis (sb_readonly=0)
Nov 29 03:17:58 np0005539552 nova_compute[233724]: 2025-11-29 08:17:58.433 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:59.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:59 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e304 e304: 3 total, 3 up, 3 in
Nov 29 03:17:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:17:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:59.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:00 np0005539552 nova_compute[233724]: 2025-11-29 08:18:00.175 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:01 np0005539552 nova_compute[233724]: 2025-11-29 08:18:01.152 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:01.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:01.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:02 np0005539552 podman[279283]: 2025-11-29 08:18:02.021938327 +0000 UTC m=+0.093690118 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 03:18:02 np0005539552 podman[279284]: 2025-11-29 08:18:02.036720734 +0000 UTC m=+0.110120429 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 03:18:02 np0005539552 podman[279285]: 2025-11-29 08:18:02.042478099 +0000 UTC m=+0.114483337 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 03:18:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:18:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:03.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:18:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:18:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:03.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:18:05 np0005539552 nova_compute[233724]: 2025-11-29 08:18:05.177 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:05.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:05.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:06 np0005539552 nova_compute[233724]: 2025-11-29 08:18:06.154 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:06 np0005539552 ovn_controller[133798]: 2025-11-29T08:18:06Z|00462|binding|INFO|Releasing lport 49c2d2fc-d147-42b8-8b87-df4d04283e61 from this chassis (sb_readonly=0)
Nov 29 03:18:07 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e305 e305: 3 total, 3 up, 3 in
Nov 29 03:18:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:07.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:07 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:07.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:18:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:09.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:18:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:09.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:10 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:18:10 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3972340860' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:18:10 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:18:10 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3972340860' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:18:10 np0005539552 nova_compute[233724]: 2025-11-29 08:18:10.179 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:10 np0005539552 nova_compute[233724]: 2025-11-29 08:18:10.274 233728 DEBUG oslo_concurrency.lockutils [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "258dfc76-0ea9-4521-a3fc-5d64b3632451" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:10 np0005539552 nova_compute[233724]: 2025-11-29 08:18:10.275 233728 DEBUG oslo_concurrency.lockutils [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:10 np0005539552 nova_compute[233724]: 2025-11-29 08:18:10.275 233728 INFO nova.compute.manager [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Rebooting instance#033[00m
Nov 29 03:18:10 np0005539552 nova_compute[233724]: 2025-11-29 08:18:10.318 233728 DEBUG oslo_concurrency.lockutils [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "refresh_cache-258dfc76-0ea9-4521-a3fc-5d64b3632451" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:18:10 np0005539552 nova_compute[233724]: 2025-11-29 08:18:10.319 233728 DEBUG oslo_concurrency.lockutils [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquired lock "refresh_cache-258dfc76-0ea9-4521-a3fc-5d64b3632451" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:18:10 np0005539552 nova_compute[233724]: 2025-11-29 08:18:10.319 233728 DEBUG nova.network.neutron [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:18:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:10.504 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:18:10 np0005539552 nova_compute[233724]: 2025-11-29 08:18:10.505 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:10.506 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:18:11 np0005539552 nova_compute[233724]: 2025-11-29 08:18:11.156 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:11.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:11.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:12 np0005539552 nova_compute[233724]: 2025-11-29 08:18:12.961 233728 DEBUG nova.network.neutron [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Updating instance_info_cache with network_info: [{"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:18:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:13.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:18:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:13.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:18:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:14.508 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:14 np0005539552 nova_compute[233724]: 2025-11-29 08:18:14.511 233728 DEBUG oslo_concurrency.lockutils [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Releasing lock "refresh_cache-258dfc76-0ea9-4521-a3fc-5d64b3632451" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:18:14 np0005539552 nova_compute[233724]: 2025-11-29 08:18:14.512 233728 DEBUG nova.compute.manager [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:18:14 np0005539552 kernel: tap524180cf-27 (unregistering): left promiscuous mode
Nov 29 03:18:14 np0005539552 NetworkManager[48926]: <info>  [1764404294.8737] device (tap524180cf-27): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:18:14 np0005539552 ovn_controller[133798]: 2025-11-29T08:18:14Z|00463|binding|INFO|Releasing lport 524180cf-279c-48d6-8bf1-04f8f159aef6 from this chassis (sb_readonly=0)
Nov 29 03:18:14 np0005539552 ovn_controller[133798]: 2025-11-29T08:18:14Z|00464|binding|INFO|Setting lport 524180cf-279c-48d6-8bf1-04f8f159aef6 down in Southbound
Nov 29 03:18:14 np0005539552 nova_compute[233724]: 2025-11-29 08:18:14.882 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:14 np0005539552 ovn_controller[133798]: 2025-11-29T08:18:14Z|00465|binding|INFO|Removing iface tap524180cf-27 ovn-installed in OVS
Nov 29 03:18:14 np0005539552 nova_compute[233724]: 2025-11-29 08:18:14.884 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:14.896 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:49:96 10.100.0.5'], port_security=['fa:16:3e:b8:49:96 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '258dfc76-0ea9-4521-a3fc-5d64b3632451', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '26e3508b949a4dbf960d7befc8f27869', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f8b3ac18-c5ae-4ce5-b905-769d2e675d6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.241'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37614949-afe4-4907-8dd7-b52152148378, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=524180cf-279c-48d6-8bf1-04f8f159aef6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:18:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:14.897 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 524180cf-279c-48d6-8bf1-04f8f159aef6 in datapath 58fd104d-4342-482d-ae9e-dbb4b9fa6788 unbound from our chassis#033[00m
Nov 29 03:18:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:14.899 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 58fd104d-4342-482d-ae9e-dbb4b9fa6788, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:18:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:14.900 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[71ffb461-0b42-4c77-b5fb-43b95ef02d8d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:14.900 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 namespace which is not needed anymore#033[00m
Nov 29 03:18:14 np0005539552 nova_compute[233724]: 2025-11-29 08:18:14.906 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:14 np0005539552 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d0000006c.scope: Deactivated successfully.
Nov 29 03:18:14 np0005539552 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d0000006c.scope: Consumed 15.257s CPU time.
Nov 29 03:18:14 np0005539552 systemd-machined[196379]: Machine qemu-44-instance-0000006c terminated.
Nov 29 03:18:15 np0005539552 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[278773]: [NOTICE]   (278777) : haproxy version is 2.8.14-c23fe91
Nov 29 03:18:15 np0005539552 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[278773]: [NOTICE]   (278777) : path to executable is /usr/sbin/haproxy
Nov 29 03:18:15 np0005539552 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[278773]: [WARNING]  (278777) : Exiting Master process...
Nov 29 03:18:15 np0005539552 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[278773]: [ALERT]    (278777) : Current worker (278779) exited with code 143 (Terminated)
Nov 29 03:18:15 np0005539552 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[278773]: [WARNING]  (278777) : All workers exited. Exiting... (0)
Nov 29 03:18:15 np0005539552 systemd[1]: libpod-f123c61cd04c6721c46044acfdc9ef4e61dac1c912d498a016f3be4a5aaa5dfa.scope: Deactivated successfully.
Nov 29 03:18:15 np0005539552 podman[279375]: 2025-11-29 08:18:15.035697693 +0000 UTC m=+0.048451612 container died f123c61cd04c6721c46044acfdc9ef4e61dac1c912d498a016f3be4a5aaa5dfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:18:15 np0005539552 nova_compute[233724]: 2025-11-29 08:18:15.066 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:15 np0005539552 nova_compute[233724]: 2025-11-29 08:18:15.071 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:15 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f123c61cd04c6721c46044acfdc9ef4e61dac1c912d498a016f3be4a5aaa5dfa-userdata-shm.mount: Deactivated successfully.
Nov 29 03:18:15 np0005539552 systemd[1]: var-lib-containers-storage-overlay-97e41e1f16ba7f430bc3c31e8be8bab7d700d7eb3f19d9b630f0abc1127b3690-merged.mount: Deactivated successfully.
Nov 29 03:18:15 np0005539552 nova_compute[233724]: 2025-11-29 08:18:15.084 233728 INFO nova.virt.libvirt.driver [-] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Instance destroyed successfully.#033[00m
Nov 29 03:18:15 np0005539552 podman[279375]: 2025-11-29 08:18:15.085251615 +0000 UTC m=+0.098005534 container cleanup f123c61cd04c6721c46044acfdc9ef4e61dac1c912d498a016f3be4a5aaa5dfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:18:15 np0005539552 nova_compute[233724]: 2025-11-29 08:18:15.084 233728 DEBUG nova.objects.instance [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'resources' on Instance uuid 258dfc76-0ea9-4521-a3fc-5d64b3632451 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:18:15 np0005539552 systemd[1]: libpod-conmon-f123c61cd04c6721c46044acfdc9ef4e61dac1c912d498a016f3be4a5aaa5dfa.scope: Deactivated successfully.
Nov 29 03:18:15 np0005539552 nova_compute[233724]: 2025-11-29 08:18:15.112 233728 DEBUG nova.virt.libvirt.vif [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:17:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1950416616',display_name='tempest-ServerActionsTestJSON-server-1950416616',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1950416616',id=108,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHzpWtKVBxR8y0ptyf26y7qDtzaZ8kbONkoZ9pomjaUJfrobt3UrzOwJRKUVsAcnHq9vyCWex553L84ouC5hX916iXo50xuUU5ZZ/mR8SlhwWlkwNt3Z2Xuyrzlm/13P0A==',key_name='tempest-keypair-2034735121',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:17:32Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='26e3508b949a4dbf960d7befc8f27869',ramdisk_id='',reservation_id='r-f50l1w69',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-2111371935',owner_user_name='tempest-ServerActionsTestJSON-2111371935-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:18:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='80ceb9112b3a4f119c05f21fd617af11',uuid=258dfc76-0ea9-4521-a3fc-5d64b3632451,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:18:15 np0005539552 nova_compute[233724]: 2025-11-29 08:18:15.113 233728 DEBUG nova.network.os_vif_util [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converting VIF {"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:18:15 np0005539552 nova_compute[233724]: 2025-11-29 08:18:15.114 233728 DEBUG nova.network.os_vif_util [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b8:49:96,bridge_name='br-int',has_traffic_filtering=True,id=524180cf-279c-48d6-8bf1-04f8f159aef6,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap524180cf-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:18:15 np0005539552 nova_compute[233724]: 2025-11-29 08:18:15.115 233728 DEBUG os_vif [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:49:96,bridge_name='br-int',has_traffic_filtering=True,id=524180cf-279c-48d6-8bf1-04f8f159aef6,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap524180cf-27') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:18:15 np0005539552 nova_compute[233724]: 2025-11-29 08:18:15.116 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:15 np0005539552 nova_compute[233724]: 2025-11-29 08:18:15.117 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap524180cf-27, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:15 np0005539552 nova_compute[233724]: 2025-11-29 08:18:15.118 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:15 np0005539552 nova_compute[233724]: 2025-11-29 08:18:15.119 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:15 np0005539552 nova_compute[233724]: 2025-11-29 08:18:15.122 233728 INFO os_vif [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:49:96,bridge_name='br-int',has_traffic_filtering=True,id=524180cf-279c-48d6-8bf1-04f8f159aef6,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap524180cf-27')#033[00m
Nov 29 03:18:15 np0005539552 nova_compute[233724]: 2025-11-29 08:18:15.130 233728 DEBUG nova.virt.libvirt.driver [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Start _get_guest_xml network_info=[{"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:18:15 np0005539552 nova_compute[233724]: 2025-11-29 08:18:15.135 233728 WARNING nova.virt.libvirt.driver [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:18:15 np0005539552 nova_compute[233724]: 2025-11-29 08:18:15.150 233728 DEBUG nova.virt.libvirt.host [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:18:15 np0005539552 nova_compute[233724]: 2025-11-29 08:18:15.151 233728 DEBUG nova.virt.libvirt.host [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:18:15 np0005539552 nova_compute[233724]: 2025-11-29 08:18:15.156 233728 DEBUG nova.virt.libvirt.host [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:18:15 np0005539552 nova_compute[233724]: 2025-11-29 08:18:15.156 233728 DEBUG nova.virt.libvirt.host [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:18:15 np0005539552 nova_compute[233724]: 2025-11-29 08:18:15.157 233728 DEBUG nova.virt.libvirt.driver [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:18:15 np0005539552 nova_compute[233724]: 2025-11-29 08:18:15.157 233728 DEBUG nova.virt.hardware [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:18:15 np0005539552 nova_compute[233724]: 2025-11-29 08:18:15.158 233728 DEBUG nova.virt.hardware [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:18:15 np0005539552 nova_compute[233724]: 2025-11-29 08:18:15.158 233728 DEBUG nova.virt.hardware [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:18:15 np0005539552 podman[279414]: 2025-11-29 08:18:15.158463821 +0000 UTC m=+0.048479104 container remove f123c61cd04c6721c46044acfdc9ef4e61dac1c912d498a016f3be4a5aaa5dfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS)
Nov 29 03:18:15 np0005539552 nova_compute[233724]: 2025-11-29 08:18:15.158 233728 DEBUG nova.virt.hardware [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:18:15 np0005539552 nova_compute[233724]: 2025-11-29 08:18:15.158 233728 DEBUG nova.virt.hardware [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:18:15 np0005539552 nova_compute[233724]: 2025-11-29 08:18:15.159 233728 DEBUG nova.virt.hardware [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:18:15 np0005539552 nova_compute[233724]: 2025-11-29 08:18:15.159 233728 DEBUG nova.virt.hardware [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:18:15 np0005539552 nova_compute[233724]: 2025-11-29 08:18:15.159 233728 DEBUG nova.virt.hardware [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:18:15 np0005539552 nova_compute[233724]: 2025-11-29 08:18:15.159 233728 DEBUG nova.virt.hardware [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:18:15 np0005539552 nova_compute[233724]: 2025-11-29 08:18:15.159 233728 DEBUG nova.virt.hardware [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:18:15 np0005539552 nova_compute[233724]: 2025-11-29 08:18:15.160 233728 DEBUG nova.virt.hardware [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:18:15 np0005539552 nova_compute[233724]: 2025-11-29 08:18:15.160 233728 DEBUG nova.objects.instance [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 258dfc76-0ea9-4521-a3fc-5d64b3632451 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:18:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:15.165 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6f52e81e-b9c8-4b4e-9b5a-2ede661f5da5]: (4, ('Sat Nov 29 08:18:14 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 (f123c61cd04c6721c46044acfdc9ef4e61dac1c912d498a016f3be4a5aaa5dfa)\nf123c61cd04c6721c46044acfdc9ef4e61dac1c912d498a016f3be4a5aaa5dfa\nSat Nov 29 08:18:15 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 (f123c61cd04c6721c46044acfdc9ef4e61dac1c912d498a016f3be4a5aaa5dfa)\nf123c61cd04c6721c46044acfdc9ef4e61dac1c912d498a016f3be4a5aaa5dfa\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:15.166 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[44f5379c-bc61-4605-858c-57441a304b68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:15.167 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58fd104d-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:15 np0005539552 nova_compute[233724]: 2025-11-29 08:18:15.169 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:15 np0005539552 kernel: tap58fd104d-40: left promiscuous mode
Nov 29 03:18:15 np0005539552 nova_compute[233724]: 2025-11-29 08:18:15.178 233728 DEBUG oslo_concurrency.processutils [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:15.186 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[eadfaeca-6b55-4034-954b-eddcc70e9e91]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:15.205 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a46d3cd8-0052-4626-be82-7bc5f8fb0abf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:15 np0005539552 nova_compute[233724]: 2025-11-29 08:18:15.205 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:15.206 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a1a97353-7fc5-4104-9373-4b7df78f1803]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:15.223 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0f02f452-0808-460f-a865-bcccb28f7c97]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 729739, 'reachable_time': 36904, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279430, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:15.226 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:18:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:15.226 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[43565c48-0679-4d51-8190-5f16ab0fb588]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:15 np0005539552 systemd[1]: run-netns-ovnmeta\x2d58fd104d\x2d4342\x2d482d\x2dae9e\x2ddbb4b9fa6788.mount: Deactivated successfully.
Nov 29 03:18:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:15.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:15 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:18:15 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/344853702' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:18:15 np0005539552 nova_compute[233724]: 2025-11-29 08:18:15.639 233728 DEBUG oslo_concurrency.processutils [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:15 np0005539552 nova_compute[233724]: 2025-11-29 08:18:15.678 233728 DEBUG oslo_concurrency.processutils [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:15.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:16 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:18:16 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1025108438' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:18:16 np0005539552 nova_compute[233724]: 2025-11-29 08:18:16.158 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:16 np0005539552 nova_compute[233724]: 2025-11-29 08:18:16.267 233728 DEBUG oslo_concurrency.processutils [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:16 np0005539552 nova_compute[233724]: 2025-11-29 08:18:16.269 233728 DEBUG nova.virt.libvirt.vif [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:17:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1950416616',display_name='tempest-ServerActionsTestJSON-server-1950416616',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1950416616',id=108,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHzpWtKVBxR8y0ptyf26y7qDtzaZ8kbONkoZ9pomjaUJfrobt3UrzOwJRKUVsAcnHq9vyCWex553L84ouC5hX916iXo50xuUU5ZZ/mR8SlhwWlkwNt3Z2Xuyrzlm/13P0A==',key_name='tempest-keypair-2034735121',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:17:32Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='26e3508b949a4dbf960d7befc8f27869',ramdisk_id='',reservation_id='r-f50l1w69',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-2111371935',owner_user_name='tempest-ServerActionsTestJSON-2111371935-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:18:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='80ceb9112b3a4f119c05f21fd617af11',uuid=258dfc76-0ea9-4521-a3fc-5d64b3632451,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:18:16 np0005539552 nova_compute[233724]: 2025-11-29 08:18:16.270 233728 DEBUG nova.network.os_vif_util [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converting VIF {"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:18:16 np0005539552 nova_compute[233724]: 2025-11-29 08:18:16.271 233728 DEBUG nova.network.os_vif_util [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b8:49:96,bridge_name='br-int',has_traffic_filtering=True,id=524180cf-279c-48d6-8bf1-04f8f159aef6,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap524180cf-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:18:16 np0005539552 nova_compute[233724]: 2025-11-29 08:18:16.272 233728 DEBUG nova.objects.instance [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'pci_devices' on Instance uuid 258dfc76-0ea9-4521-a3fc-5d64b3632451 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:18:16 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #88. Immutable memtables: 0.
Nov 29 03:18:16 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:18:16.805894) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:18:16 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 88
Nov 29 03:18:16 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404296805951, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 605, "num_deletes": 252, "total_data_size": 922431, "memory_usage": 933448, "flush_reason": "Manual Compaction"}
Nov 29 03:18:16 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #89: started
Nov 29 03:18:17 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404297081341, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 89, "file_size": 607843, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45916, "largest_seqno": 46516, "table_properties": {"data_size": 604733, "index_size": 1018, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7508, "raw_average_key_size": 19, "raw_value_size": 598457, "raw_average_value_size": 1554, "num_data_blocks": 45, "num_entries": 385, "num_filter_entries": 385, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404262, "oldest_key_time": 1764404262, "file_creation_time": 1764404296, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:18:17 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 275525 microseconds, and 2140 cpu microseconds.
Nov 29 03:18:17 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:18:17 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:18:17.081415) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #89: 607843 bytes OK
Nov 29 03:18:17 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:18:17.081441) [db/memtable_list.cc:519] [default] Level-0 commit table #89 started
Nov 29 03:18:17 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:18:17.084829) [db/memtable_list.cc:722] [default] Level-0 commit table #89: memtable #1 done
Nov 29 03:18:17 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:18:17.084883) EVENT_LOG_v1 {"time_micros": 1764404297084871, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:18:17 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:18:17.084912) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:18:17 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 918988, prev total WAL file size 918988, number of live WAL files 2.
Nov 29 03:18:17 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000085.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:18:17 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:18:17.085910) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Nov 29 03:18:17 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:18:17 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [89(593KB)], [87(12MB)]
Nov 29 03:18:17 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404297085951, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [89], "files_L6": [87], "score": -1, "input_data_size": 13862663, "oldest_snapshot_seqno": -1}
Nov 29 03:18:17 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #90: 7674 keys, 11981206 bytes, temperature: kUnknown
Nov 29 03:18:17 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404297167270, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 90, "file_size": 11981206, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11929432, "index_size": 31502, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19205, "raw_key_size": 199679, "raw_average_key_size": 26, "raw_value_size": 11791525, "raw_average_value_size": 1536, "num_data_blocks": 1237, "num_entries": 7674, "num_filter_entries": 7674, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764404297, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 90, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:18:17 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:18:17 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:18:17.167549) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 11981206 bytes
Nov 29 03:18:17 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:18:17.190355) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 170.3 rd, 147.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 12.6 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(42.5) write-amplify(19.7) OK, records in: 8190, records dropped: 516 output_compression: NoCompression
Nov 29 03:18:17 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:18:17.190403) EVENT_LOG_v1 {"time_micros": 1764404297190386, "job": 54, "event": "compaction_finished", "compaction_time_micros": 81406, "compaction_time_cpu_micros": 35324, "output_level": 6, "num_output_files": 1, "total_output_size": 11981206, "num_input_records": 8190, "num_output_records": 7674, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:18:17 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:18:17 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404297190837, "job": 54, "event": "table_file_deletion", "file_number": 89}
Nov 29 03:18:17 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000087.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:18:17 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404297194901, "job": 54, "event": "table_file_deletion", "file_number": 87}
Nov 29 03:18:17 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:18:17.085801) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:18:17 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:18:17.194970) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:18:17 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:18:17.194975) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:18:17 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:18:17.194977) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:18:17 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:18:17.194978) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:18:17 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:18:17.194980) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:18:17 np0005539552 nova_compute[233724]: 2025-11-29 08:18:17.397 233728 DEBUG nova.compute.manager [req-03157fb4-8a53-47d5-bc17-bec6dbd5bf33 req-3eb0fa3a-c177-4b9c-8024-fb1e1b84cd0f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received event network-vif-unplugged-524180cf-279c-48d6-8bf1-04f8f159aef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:18:17 np0005539552 nova_compute[233724]: 2025-11-29 08:18:17.397 233728 DEBUG oslo_concurrency.lockutils [req-03157fb4-8a53-47d5-bc17-bec6dbd5bf33 req-3eb0fa3a-c177-4b9c-8024-fb1e1b84cd0f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:17 np0005539552 nova_compute[233724]: 2025-11-29 08:18:17.397 233728 DEBUG oslo_concurrency.lockutils [req-03157fb4-8a53-47d5-bc17-bec6dbd5bf33 req-3eb0fa3a-c177-4b9c-8024-fb1e1b84cd0f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:17 np0005539552 nova_compute[233724]: 2025-11-29 08:18:17.398 233728 DEBUG oslo_concurrency.lockutils [req-03157fb4-8a53-47d5-bc17-bec6dbd5bf33 req-3eb0fa3a-c177-4b9c-8024-fb1e1b84cd0f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:17 np0005539552 nova_compute[233724]: 2025-11-29 08:18:17.398 233728 DEBUG nova.compute.manager [req-03157fb4-8a53-47d5-bc17-bec6dbd5bf33 req-3eb0fa3a-c177-4b9c-8024-fb1e1b84cd0f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] No waiting events found dispatching network-vif-unplugged-524180cf-279c-48d6-8bf1-04f8f159aef6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:18:17 np0005539552 nova_compute[233724]: 2025-11-29 08:18:17.398 233728 WARNING nova.compute.manager [req-03157fb4-8a53-47d5-bc17-bec6dbd5bf33 req-3eb0fa3a-c177-4b9c-8024-fb1e1b84cd0f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received unexpected event network-vif-unplugged-524180cf-279c-48d6-8bf1-04f8f159aef6 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Nov 29 03:18:17 np0005539552 nova_compute[233724]: 2025-11-29 08:18:17.419 233728 DEBUG nova.virt.libvirt.driver [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:18:17 np0005539552 nova_compute[233724]:  <uuid>258dfc76-0ea9-4521-a3fc-5d64b3632451</uuid>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:  <name>instance-0000006c</name>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:18:17 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:      <nova:name>tempest-ServerActionsTestJSON-server-1950416616</nova:name>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:18:15</nova:creationTime>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:18:17 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:        <nova:user uuid="80ceb9112b3a4f119c05f21fd617af11">tempest-ServerActionsTestJSON-2111371935-project-member</nova:user>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:        <nova:project uuid="26e3508b949a4dbf960d7befc8f27869">tempest-ServerActionsTestJSON-2111371935</nova:project>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:        <nova:port uuid="524180cf-279c-48d6-8bf1-04f8f159aef6">
Nov 29 03:18:17 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:      <entry name="serial">258dfc76-0ea9-4521-a3fc-5d64b3632451</entry>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:      <entry name="uuid">258dfc76-0ea9-4521-a3fc-5d64b3632451</entry>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:18:17 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/258dfc76-0ea9-4521-a3fc-5d64b3632451_disk">
Nov 29 03:18:17 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:18:17 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:18:17 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/258dfc76-0ea9-4521-a3fc-5d64b3632451_disk.config">
Nov 29 03:18:17 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:18:17 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:18:17 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:b8:49:96"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:      <target dev="tap524180cf-27"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:18:17 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/258dfc76-0ea9-4521-a3fc-5d64b3632451/console.log" append="off"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <input type="keyboard" bus="usb"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:18:17 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:18:17 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:18:17 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:18:17 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:18:17 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:18:17 np0005539552 nova_compute[233724]: 2025-11-29 08:18:17.420 233728 DEBUG nova.virt.libvirt.driver [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:18:17 np0005539552 nova_compute[233724]: 2025-11-29 08:18:17.420 233728 DEBUG nova.virt.libvirt.driver [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:18:17 np0005539552 nova_compute[233724]: 2025-11-29 08:18:17.421 233728 DEBUG nova.virt.libvirt.vif [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:17:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1950416616',display_name='tempest-ServerActionsTestJSON-server-1950416616',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1950416616',id=108,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHzpWtKVBxR8y0ptyf26y7qDtzaZ8kbONkoZ9pomjaUJfrobt3UrzOwJRKUVsAcnHq9vyCWex553L84ouC5hX916iXo50xuUU5ZZ/mR8SlhwWlkwNt3Z2Xuyrzlm/13P0A==',key_name='tempest-keypair-2034735121',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:17:32Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='26e3508b949a4dbf960d7befc8f27869',ramdisk_id='',reservation_id='r-f50l1w69',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-2111371935',owner_user_name='tempest-ServerActionsTestJSON-2111371935-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:18:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='80ceb9112b3a4f119c05f21fd617af11',uuid=258dfc76-0ea9-4521-a3fc-5d64b3632451,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:18:17 np0005539552 nova_compute[233724]: 2025-11-29 08:18:17.422 233728 DEBUG nova.network.os_vif_util [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converting VIF {"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:18:17 np0005539552 nova_compute[233724]: 2025-11-29 08:18:17.422 233728 DEBUG nova.network.os_vif_util [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b8:49:96,bridge_name='br-int',has_traffic_filtering=True,id=524180cf-279c-48d6-8bf1-04f8f159aef6,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap524180cf-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:18:17 np0005539552 nova_compute[233724]: 2025-11-29 08:18:17.423 233728 DEBUG os_vif [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:49:96,bridge_name='br-int',has_traffic_filtering=True,id=524180cf-279c-48d6-8bf1-04f8f159aef6,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap524180cf-27') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:18:17 np0005539552 nova_compute[233724]: 2025-11-29 08:18:17.424 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:17 np0005539552 nova_compute[233724]: 2025-11-29 08:18:17.424 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:17 np0005539552 nova_compute[233724]: 2025-11-29 08:18:17.425 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:18:17 np0005539552 nova_compute[233724]: 2025-11-29 08:18:17.428 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:17 np0005539552 nova_compute[233724]: 2025-11-29 08:18:17.428 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap524180cf-27, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:17 np0005539552 nova_compute[233724]: 2025-11-29 08:18:17.429 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap524180cf-27, col_values=(('external_ids', {'iface-id': '524180cf-279c-48d6-8bf1-04f8f159aef6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b8:49:96', 'vm-uuid': '258dfc76-0ea9-4521-a3fc-5d64b3632451'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:17 np0005539552 nova_compute[233724]: 2025-11-29 08:18:17.430 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:17.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:17 np0005539552 NetworkManager[48926]: <info>  [1764404297.4316] manager: (tap524180cf-27): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/218)
Nov 29 03:18:17 np0005539552 nova_compute[233724]: 2025-11-29 08:18:17.433 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:18:17 np0005539552 nova_compute[233724]: 2025-11-29 08:18:17.436 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:17 np0005539552 nova_compute[233724]: 2025-11-29 08:18:17.437 233728 INFO os_vif [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:49:96,bridge_name='br-int',has_traffic_filtering=True,id=524180cf-279c-48d6-8bf1-04f8f159aef6,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap524180cf-27')#033[00m
Nov 29 03:18:17 np0005539552 kernel: tap524180cf-27: entered promiscuous mode
Nov 29 03:18:17 np0005539552 NetworkManager[48926]: <info>  [1764404297.5088] manager: (tap524180cf-27): new Tun device (/org/freedesktop/NetworkManager/Devices/219)
Nov 29 03:18:17 np0005539552 ovn_controller[133798]: 2025-11-29T08:18:17Z|00466|binding|INFO|Claiming lport 524180cf-279c-48d6-8bf1-04f8f159aef6 for this chassis.
Nov 29 03:18:17 np0005539552 ovn_controller[133798]: 2025-11-29T08:18:17Z|00467|binding|INFO|524180cf-279c-48d6-8bf1-04f8f159aef6: Claiming fa:16:3e:b8:49:96 10.100.0.5
Nov 29 03:18:17 np0005539552 systemd-udevd[279355]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:18:17 np0005539552 nova_compute[233724]: 2025-11-29 08:18:17.510 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:17.518 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:49:96 10.100.0.5'], port_security=['fa:16:3e:b8:49:96 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '258dfc76-0ea9-4521-a3fc-5d64b3632451', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '26e3508b949a4dbf960d7befc8f27869', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'f8b3ac18-c5ae-4ce5-b905-769d2e675d6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.241'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37614949-afe4-4907-8dd7-b52152148378, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=524180cf-279c-48d6-8bf1-04f8f159aef6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:17.519 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 524180cf-279c-48d6-8bf1-04f8f159aef6 in datapath 58fd104d-4342-482d-ae9e-dbb4b9fa6788 bound to our chassis#033[00m
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:17.521 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 58fd104d-4342-482d-ae9e-dbb4b9fa6788#033[00m
Nov 29 03:18:17 np0005539552 NetworkManager[48926]: <info>  [1764404297.5246] device (tap524180cf-27): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:18:17 np0005539552 NetworkManager[48926]: <info>  [1764404297.5256] device (tap524180cf-27): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:18:17 np0005539552 ovn_controller[133798]: 2025-11-29T08:18:17Z|00468|binding|INFO|Setting lport 524180cf-279c-48d6-8bf1-04f8f159aef6 ovn-installed in OVS
Nov 29 03:18:17 np0005539552 ovn_controller[133798]: 2025-11-29T08:18:17Z|00469|binding|INFO|Setting lport 524180cf-279c-48d6-8bf1-04f8f159aef6 up in Southbound
Nov 29 03:18:17 np0005539552 nova_compute[233724]: 2025-11-29 08:18:17.526 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:17 np0005539552 nova_compute[233724]: 2025-11-29 08:18:17.528 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:17.534 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ddf51353-8f01-4bdf-a6fa-2cd69aa19b07]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:17.535 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap58fd104d-41 in ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:17.536 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap58fd104d-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:17.536 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[08559ff5-63ed-45dc-8e5f-09f9be07a617]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:17.537 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[98e4bda9-59fa-4b3d-85ad-368faaefe491]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:17 np0005539552 systemd-machined[196379]: New machine qemu-45-instance-0000006c.
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:17.550 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[f54f5564-2db7-4a15-8c90-17eed8c13671]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:17 np0005539552 systemd[1]: Started Virtual Machine qemu-45-instance-0000006c.
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:17.574 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[5e2d7c0e-f4bf-4deb-a0c7-671b44147b84]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:17.604 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[95cf061c-3cf3-41af-b339-faeeaf5be9cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:17.610 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6f0eea61-2728-4591-822f-bda10d4a1993]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:17 np0005539552 NetworkManager[48926]: <info>  [1764404297.6113] manager: (tap58fd104d-40): new Veth device (/org/freedesktop/NetworkManager/Devices/220)
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:17.644 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[1e5b5998-46d2-47a6-93d9-3bf1a1e80153]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:17.648 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[fcd7ad2b-00a5-43e2-859f-5dc2e73d8049]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:17 np0005539552 NetworkManager[48926]: <info>  [1764404297.6737] device (tap58fd104d-40): carrier: link connected
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:17.681 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[7cd411ed-e3ee-4659-a092-305d4ae41a77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:17.700 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4fa1283f-1770-4994-8718-e4443a8fd688]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58fd104d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:26:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 141], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 734336, 'reachable_time': 31159, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279589, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:17.718 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[64f030a0-016d-4a23-938f-a7ed6bad2152]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea8:261e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 734336, 'tstamp': 734336}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279590, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:17.739 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ca74028f-98d5-4458-9fd1-555dee4acf0f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58fd104d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:26:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 141], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 734336, 'reachable_time': 31159, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 279591, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:17.768 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f4e82e0f-1a6b-477a-9030-30b98f041e4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:17.825 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f71dbc9d-e8b3-4344-8e8c-d43a713e99bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:17.826 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58fd104d-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:17.826 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:17.827 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58fd104d-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:17 np0005539552 nova_compute[233724]: 2025-11-29 08:18:17.828 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:17 np0005539552 kernel: tap58fd104d-40: entered promiscuous mode
Nov 29 03:18:17 np0005539552 NetworkManager[48926]: <info>  [1764404297.8307] manager: (tap58fd104d-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/221)
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:17.832 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap58fd104d-40, col_values=(('external_ids', {'iface-id': '49c2d2fc-d147-42b8-8b87-df4d04283e61'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:17 np0005539552 nova_compute[233724]: 2025-11-29 08:18:17.833 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:17 np0005539552 ovn_controller[133798]: 2025-11-29T08:18:17Z|00470|binding|INFO|Releasing lport 49c2d2fc-d147-42b8-8b87-df4d04283e61 from this chassis (sb_readonly=0)
Nov 29 03:18:17 np0005539552 nova_compute[233724]: 2025-11-29 08:18:17.846 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:17.847 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/58fd104d-4342-482d-ae9e-dbb4b9fa6788.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/58fd104d-4342-482d-ae9e-dbb4b9fa6788.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:17.848 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f1d4c302-024f-48a7-b374-301280ddee70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:17.849 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-58fd104d-4342-482d-ae9e-dbb4b9fa6788
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/58fd104d-4342-482d-ae9e-dbb4b9fa6788.pid.haproxy
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 58fd104d-4342-482d-ae9e-dbb4b9fa6788
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:18:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:17.850 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'env', 'PROCESS_TAG=haproxy-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/58fd104d-4342-482d-ae9e-dbb4b9fa6788.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:18:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:18:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:17.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:18:18 np0005539552 nova_compute[233724]: 2025-11-29 08:18:18.112 233728 DEBUG nova.virt.libvirt.host [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Removed pending event for 258dfc76-0ea9-4521-a3fc-5d64b3632451 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:18:18 np0005539552 nova_compute[233724]: 2025-11-29 08:18:18.113 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404298.112572, 258dfc76-0ea9-4521-a3fc-5d64b3632451 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:18:18 np0005539552 nova_compute[233724]: 2025-11-29 08:18:18.113 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:18:18 np0005539552 nova_compute[233724]: 2025-11-29 08:18:18.115 233728 DEBUG nova.compute.manager [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:18:18 np0005539552 nova_compute[233724]: 2025-11-29 08:18:18.118 233728 INFO nova.virt.libvirt.driver [-] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Instance rebooted successfully.#033[00m
Nov 29 03:18:18 np0005539552 nova_compute[233724]: 2025-11-29 08:18:18.119 233728 DEBUG nova.compute.manager [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:18:18 np0005539552 nova_compute[233724]: 2025-11-29 08:18:18.129 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:18:18 np0005539552 nova_compute[233724]: 2025-11-29 08:18:18.133 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:18:18 np0005539552 nova_compute[233724]: 2025-11-29 08:18:18.158 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Nov 29 03:18:18 np0005539552 nova_compute[233724]: 2025-11-29 08:18:18.159 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404298.1133785, 258dfc76-0ea9-4521-a3fc-5d64b3632451 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:18:18 np0005539552 nova_compute[233724]: 2025-11-29 08:18:18.159 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] VM Started (Lifecycle Event)#033[00m
Nov 29 03:18:18 np0005539552 nova_compute[233724]: 2025-11-29 08:18:18.165 233728 DEBUG oslo_concurrency.lockutils [None req-b42e0ec4-26e7-4276-924d-669f8ffb02c2 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 7.890s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:18 np0005539552 nova_compute[233724]: 2025-11-29 08:18:18.181 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:18:18 np0005539552 nova_compute[233724]: 2025-11-29 08:18:18.184 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:18:18 np0005539552 podman[279666]: 2025-11-29 08:18:18.235736777 +0000 UTC m=+0.045163474 container create 6ce2d11df7925b43089ec7bd3b6b2279581a5aec89da2cb90271ef86f392dbfc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:18:18 np0005539552 systemd[1]: Started libpod-conmon-6ce2d11df7925b43089ec7bd3b6b2279581a5aec89da2cb90271ef86f392dbfc.scope.
Nov 29 03:18:18 np0005539552 podman[279666]: 2025-11-29 08:18:18.212260527 +0000 UTC m=+0.021687244 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:18:18 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:18:18 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1ded47c045e36b10ccd4032f7c8c0ba08533b4ed43ea63857c4b8f3ae6106d2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:18:18 np0005539552 podman[279666]: 2025-11-29 08:18:18.339249158 +0000 UTC m=+0.148675905 container init 6ce2d11df7925b43089ec7bd3b6b2279581a5aec89da2cb90271ef86f392dbfc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:18:18 np0005539552 podman[279666]: 2025-11-29 08:18:18.345853326 +0000 UTC m=+0.155280033 container start 6ce2d11df7925b43089ec7bd3b6b2279581a5aec89da2cb90271ef86f392dbfc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:18:18 np0005539552 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[279681]: [NOTICE]   (279685) : New worker (279687) forked
Nov 29 03:18:18 np0005539552 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[279681]: [NOTICE]   (279685) : Loading success.
Nov 29 03:18:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:19.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:19 np0005539552 nova_compute[233724]: 2025-11-29 08:18:19.505 233728 DEBUG nova.compute.manager [req-7ef5d2d4-47e7-40be-9470-a54c44a2ae99 req-aa9923f0-b199-455a-a772-77831cd8f9fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received event network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:18:19 np0005539552 nova_compute[233724]: 2025-11-29 08:18:19.506 233728 DEBUG oslo_concurrency.lockutils [req-7ef5d2d4-47e7-40be-9470-a54c44a2ae99 req-aa9923f0-b199-455a-a772-77831cd8f9fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:19 np0005539552 nova_compute[233724]: 2025-11-29 08:18:19.507 233728 DEBUG oslo_concurrency.lockutils [req-7ef5d2d4-47e7-40be-9470-a54c44a2ae99 req-aa9923f0-b199-455a-a772-77831cd8f9fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:19 np0005539552 nova_compute[233724]: 2025-11-29 08:18:19.507 233728 DEBUG oslo_concurrency.lockutils [req-7ef5d2d4-47e7-40be-9470-a54c44a2ae99 req-aa9923f0-b199-455a-a772-77831cd8f9fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:19 np0005539552 nova_compute[233724]: 2025-11-29 08:18:19.508 233728 DEBUG nova.compute.manager [req-7ef5d2d4-47e7-40be-9470-a54c44a2ae99 req-aa9923f0-b199-455a-a772-77831cd8f9fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] No waiting events found dispatching network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:18:19 np0005539552 nova_compute[233724]: 2025-11-29 08:18:19.508 233728 WARNING nova.compute.manager [req-7ef5d2d4-47e7-40be-9470-a54c44a2ae99 req-aa9923f0-b199-455a-a772-77831cd8f9fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received unexpected event network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:18:19 np0005539552 nova_compute[233724]: 2025-11-29 08:18:19.508 233728 DEBUG nova.compute.manager [req-7ef5d2d4-47e7-40be-9470-a54c44a2ae99 req-aa9923f0-b199-455a-a772-77831cd8f9fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received event network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:18:19 np0005539552 nova_compute[233724]: 2025-11-29 08:18:19.509 233728 DEBUG oslo_concurrency.lockutils [req-7ef5d2d4-47e7-40be-9470-a54c44a2ae99 req-aa9923f0-b199-455a-a772-77831cd8f9fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:19 np0005539552 nova_compute[233724]: 2025-11-29 08:18:19.509 233728 DEBUG oslo_concurrency.lockutils [req-7ef5d2d4-47e7-40be-9470-a54c44a2ae99 req-aa9923f0-b199-455a-a772-77831cd8f9fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:19 np0005539552 nova_compute[233724]: 2025-11-29 08:18:19.509 233728 DEBUG oslo_concurrency.lockutils [req-7ef5d2d4-47e7-40be-9470-a54c44a2ae99 req-aa9923f0-b199-455a-a772-77831cd8f9fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:19 np0005539552 nova_compute[233724]: 2025-11-29 08:18:19.510 233728 DEBUG nova.compute.manager [req-7ef5d2d4-47e7-40be-9470-a54c44a2ae99 req-aa9923f0-b199-455a-a772-77831cd8f9fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] No waiting events found dispatching network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:18:19 np0005539552 nova_compute[233724]: 2025-11-29 08:18:19.510 233728 WARNING nova.compute.manager [req-7ef5d2d4-47e7-40be-9470-a54c44a2ae99 req-aa9923f0-b199-455a-a772-77831cd8f9fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received unexpected event network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:18:19 np0005539552 nova_compute[233724]: 2025-11-29 08:18:19.511 233728 DEBUG nova.compute.manager [req-7ef5d2d4-47e7-40be-9470-a54c44a2ae99 req-aa9923f0-b199-455a-a772-77831cd8f9fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received event network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:18:19 np0005539552 nova_compute[233724]: 2025-11-29 08:18:19.511 233728 DEBUG oslo_concurrency.lockutils [req-7ef5d2d4-47e7-40be-9470-a54c44a2ae99 req-aa9923f0-b199-455a-a772-77831cd8f9fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:19 np0005539552 nova_compute[233724]: 2025-11-29 08:18:19.511 233728 DEBUG oslo_concurrency.lockutils [req-7ef5d2d4-47e7-40be-9470-a54c44a2ae99 req-aa9923f0-b199-455a-a772-77831cd8f9fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:19 np0005539552 nova_compute[233724]: 2025-11-29 08:18:19.512 233728 DEBUG oslo_concurrency.lockutils [req-7ef5d2d4-47e7-40be-9470-a54c44a2ae99 req-aa9923f0-b199-455a-a772-77831cd8f9fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:19 np0005539552 nova_compute[233724]: 2025-11-29 08:18:19.512 233728 DEBUG nova.compute.manager [req-7ef5d2d4-47e7-40be-9470-a54c44a2ae99 req-aa9923f0-b199-455a-a772-77831cd8f9fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] No waiting events found dispatching network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:18:19 np0005539552 nova_compute[233724]: 2025-11-29 08:18:19.512 233728 WARNING nova.compute.manager [req-7ef5d2d4-47e7-40be-9470-a54c44a2ae99 req-aa9923f0-b199-455a-a772-77831cd8f9fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received unexpected event network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:18:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:18:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:19.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:18:20 np0005539552 nova_compute[233724]: 2025-11-29 08:18:20.355 233728 INFO nova.compute.manager [None req-3d13bcf6-1d75-4c22-9bb1-c5b48c38e77e 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Get console output#033[00m
Nov 29 03:18:20 np0005539552 nova_compute[233724]: 2025-11-29 08:18:20.364 233728 INFO oslo.privsep.daemon [None req-3d13bcf6-1d75-4c22-9bb1-c5b48c38e77e 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpteq8r9r5/privsep.sock']#033[00m
Nov 29 03:18:20 np0005539552 ovn_controller[133798]: 2025-11-29T08:18:20Z|00471|binding|INFO|Releasing lport 49c2d2fc-d147-42b8-8b87-df4d04283e61 from this chassis (sb_readonly=0)
Nov 29 03:18:20 np0005539552 nova_compute[233724]: 2025-11-29 08:18:20.464 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:20.624 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:20.626 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:20.627 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:21 np0005539552 nova_compute[233724]: 2025-11-29 08:18:21.128 233728 INFO oslo.privsep.daemon [None req-3d13bcf6-1d75-4c22-9bb1-c5b48c38e77e 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Nov 29 03:18:21 np0005539552 nova_compute[233724]: 2025-11-29 08:18:21.003 279702 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 29 03:18:21 np0005539552 nova_compute[233724]: 2025-11-29 08:18:21.010 279702 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 29 03:18:21 np0005539552 nova_compute[233724]: 2025-11-29 08:18:21.014 279702 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Nov 29 03:18:21 np0005539552 nova_compute[233724]: 2025-11-29 08:18:21.014 279702 INFO oslo.privsep.daemon [-] privsep daemon running as pid 279702#033[00m
Nov 29 03:18:21 np0005539552 nova_compute[233724]: 2025-11-29 08:18:21.160 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:21.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:18:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:21.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:18:22 np0005539552 nova_compute[233724]: 2025-11-29 08:18:22.489 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:23.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:18:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:23.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:18:24 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:18:25 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:18:25 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:18:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:25.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:25 np0005539552 nova_compute[233724]: 2025-11-29 08:18:25.742 233728 DEBUG oslo_concurrency.lockutils [None req-83c17e1f-c8c2-463f-8dc2-3e04a317aabf 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "258dfc76-0ea9-4521-a3fc-5d64b3632451" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:25 np0005539552 nova_compute[233724]: 2025-11-29 08:18:25.744 233728 DEBUG oslo_concurrency.lockutils [None req-83c17e1f-c8c2-463f-8dc2-3e04a317aabf 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:25 np0005539552 nova_compute[233724]: 2025-11-29 08:18:25.744 233728 DEBUG nova.compute.manager [None req-83c17e1f-c8c2-463f-8dc2-3e04a317aabf 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:18:25 np0005539552 nova_compute[233724]: 2025-11-29 08:18:25.748 233728 DEBUG nova.compute.manager [None req-83c17e1f-c8c2-463f-8dc2-3e04a317aabf 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Nov 29 03:18:25 np0005539552 nova_compute[233724]: 2025-11-29 08:18:25.749 233728 DEBUG nova.objects.instance [None req-83c17e1f-c8c2-463f-8dc2-3e04a317aabf 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'flavor' on Instance uuid 258dfc76-0ea9-4521-a3fc-5d64b3632451 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:18:25 np0005539552 nova_compute[233724]: 2025-11-29 08:18:25.774 233728 DEBUG nova.virt.libvirt.driver [None req-83c17e1f-c8c2-463f-8dc2-3e04a317aabf 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:18:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:25.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:26 np0005539552 nova_compute[233724]: 2025-11-29 08:18:26.162 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:26 np0005539552 nova_compute[233724]: 2025-11-29 08:18:26.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:18:26 np0005539552 nova_compute[233724]: 2025-11-29 08:18:26.943 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:26 np0005539552 nova_compute[233724]: 2025-11-29 08:18:26.943 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:26 np0005539552 nova_compute[233724]: 2025-11-29 08:18:26.943 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:26 np0005539552 nova_compute[233724]: 2025-11-29 08:18:26.943 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:18:26 np0005539552 nova_compute[233724]: 2025-11-29 08:18:26.943 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:27 np0005539552 ovn_controller[133798]: 2025-11-29T08:18:27Z|00472|binding|INFO|Releasing lport 49c2d2fc-d147-42b8-8b87-df4d04283e61 from this chassis (sb_readonly=0)
Nov 29 03:18:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:18:27 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/175405999' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:18:27 np0005539552 nova_compute[233724]: 2025-11-29 08:18:27.403 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:27.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:27 np0005539552 nova_compute[233724]: 2025-11-29 08:18:27.445 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:27 np0005539552 nova_compute[233724]: 2025-11-29 08:18:27.483 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:18:27 np0005539552 nova_compute[233724]: 2025-11-29 08:18:27.483 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:18:27 np0005539552 nova_compute[233724]: 2025-11-29 08:18:27.490 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:27 np0005539552 nova_compute[233724]: 2025-11-29 08:18:27.651 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:18:27 np0005539552 nova_compute[233724]: 2025-11-29 08:18:27.654 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4263MB free_disk=20.897159576416016GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:18:27 np0005539552 nova_compute[233724]: 2025-11-29 08:18:27.654 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:18:27 np0005539552 nova_compute[233724]: 2025-11-29 08:18:27.654 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:18:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:27 np0005539552 nova_compute[233724]: 2025-11-29 08:18:27.771 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 258dfc76-0ea9-4521-a3fc-5d64b3632451 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:18:27 np0005539552 nova_compute[233724]: 2025-11-29 08:18:27.771 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 03:18:27 np0005539552 nova_compute[233724]: 2025-11-29 08:18:27.771 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 03:18:27 np0005539552 nova_compute[233724]: 2025-11-29 08:18:27.841 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:18:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:27.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:18:28 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2930210643' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:18:28 np0005539552 nova_compute[233724]: 2025-11-29 08:18:28.278 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:18:28 np0005539552 nova_compute[233724]: 2025-11-29 08:18:28.282 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:18:28 np0005539552 nova_compute[233724]: 2025-11-29 08:18:28.300 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:18:28 np0005539552 nova_compute[233724]: 2025-11-29 08:18:28.329 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:18:28 np0005539552 nova_compute[233724]: 2025-11-29 08:18:28.329 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:18:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:18:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:29.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:18:29 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e306 e306: 3 total, 3 up, 3 in
Nov 29 03:18:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:29.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:30 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:18:30 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:18:31 np0005539552 ovn_controller[133798]: 2025-11-29T08:18:31Z|00473|binding|INFO|Releasing lport 49c2d2fc-d147-42b8-8b87-df4d04283e61 from this chassis (sb_readonly=0)
Nov 29 03:18:31 np0005539552 ovn_controller[133798]: 2025-11-29T08:18:31Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b8:49:96 10.100.0.5
Nov 29 03:18:31 np0005539552 nova_compute[233724]: 2025-11-29 08:18:31.063 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:18:31 np0005539552 nova_compute[233724]: 2025-11-29 08:18:31.163 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:18:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:31.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:31.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:32 np0005539552 nova_compute[233724]: 2025-11-29 08:18:32.330 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:18:32 np0005539552 nova_compute[233724]: 2025-11-29 08:18:32.527 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:18:32 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:32 np0005539552 nova_compute[233724]: 2025-11-29 08:18:32.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:18:32 np0005539552 nova_compute[233724]: 2025-11-29 08:18:32.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:18:32 np0005539552 nova_compute[233724]: 2025-11-29 08:18:32.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:18:32 np0005539552 nova_compute[233724]: 2025-11-29 08:18:32.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:18:32 np0005539552 podman[279936]: 2025-11-29 08:18:32.967116269 +0000 UTC m=+0.057295850 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:18:32 np0005539552 podman[279935]: 2025-11-29 08:18:32.975149765 +0000 UTC m=+0.066783615 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:18:33 np0005539552 podman[279937]: 2025-11-29 08:18:33.002430717 +0000 UTC m=+0.091681563 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:18:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:33.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e307 e307: 3 total, 3 up, 3 in
Nov 29 03:18:33 np0005539552 nova_compute[233724]: 2025-11-29 08:18:33.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:18:33 np0005539552 nova_compute[233724]: 2025-11-29 08:18:33.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:18:33 np0005539552 nova_compute[233724]: 2025-11-29 08:18:33.950 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "refresh_cache-258dfc76-0ea9-4521-a3fc-5d64b3632451" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:18:33 np0005539552 nova_compute[233724]: 2025-11-29 08:18:33.950 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquired lock "refresh_cache-258dfc76-0ea9-4521-a3fc-5d64b3632451" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:18:33 np0005539552 nova_compute[233724]: 2025-11-29 08:18:33.950 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 03:18:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:33.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:35.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:35 np0005539552 nova_compute[233724]: 2025-11-29 08:18:35.505 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Updating instance_info_cache with network_info: [{"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:18:35 np0005539552 nova_compute[233724]: 2025-11-29 08:18:35.542 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Releasing lock "refresh_cache-258dfc76-0ea9-4521-a3fc-5d64b3632451" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:18:35 np0005539552 nova_compute[233724]: 2025-11-29 08:18:35.542 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 03:18:35 np0005539552 nova_compute[233724]: 2025-11-29 08:18:35.543 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:18:35 np0005539552 nova_compute[233724]: 2025-11-29 08:18:35.816 233728 DEBUG nova.virt.libvirt.driver [None req-83c17e1f-c8c2-463f-8dc2-3e04a317aabf 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 29 03:18:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:35.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:36 np0005539552 nova_compute[233724]: 2025-11-29 08:18:36.207 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:18:36 np0005539552 nova_compute[233724]: 2025-11-29 08:18:36.537 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:18:36 np0005539552 nova_compute[233724]: 2025-11-29 08:18:36.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:18:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e308 e308: 3 total, 3 up, 3 in
Nov 29 03:18:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:37.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:37 np0005539552 nova_compute[233724]: 2025-11-29 08:18:37.576 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:18:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:37 np0005539552 nova_compute[233724]: 2025-11-29 08:18:37.865 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:18:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:37.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:18:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2068386756' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:18:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:18:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2068386756' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:18:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:39.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:39 np0005539552 kernel: tap524180cf-27 (unregistering): left promiscuous mode
Nov 29 03:18:39 np0005539552 NetworkManager[48926]: <info>  [1764404319.5741] device (tap524180cf-27): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:18:39 np0005539552 nova_compute[233724]: 2025-11-29 08:18:39.582 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:18:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:18:39Z|00474|binding|INFO|Releasing lport 524180cf-279c-48d6-8bf1-04f8f159aef6 from this chassis (sb_readonly=0)
Nov 29 03:18:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:18:39Z|00475|binding|INFO|Setting lport 524180cf-279c-48d6-8bf1-04f8f159aef6 down in Southbound
Nov 29 03:18:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:18:39Z|00476|binding|INFO|Removing iface tap524180cf-27 ovn-installed in OVS
Nov 29 03:18:39 np0005539552 nova_compute[233724]: 2025-11-29 08:18:39.586 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:18:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:39.599 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:49:96 10.100.0.5'], port_security=['fa:16:3e:b8:49:96 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '258dfc76-0ea9-4521-a3fc-5d64b3632451', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '26e3508b949a4dbf960d7befc8f27869', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'f8b3ac18-c5ae-4ce5-b905-769d2e675d6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.241'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37614949-afe4-4907-8dd7-b52152148378, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=524180cf-279c-48d6-8bf1-04f8f159aef6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:18:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:39.600 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 524180cf-279c-48d6-8bf1-04f8f159aef6 in datapath 58fd104d-4342-482d-ae9e-dbb4b9fa6788 unbound from our chassis
Nov 29 03:18:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:39.601 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 58fd104d-4342-482d-ae9e-dbb4b9fa6788, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 03:18:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:39.603 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[25a424cf-56ad-4af4-85c5-85905e1bba84]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:18:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:39.604 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 namespace which is not needed anymore
Nov 29 03:18:39 np0005539552 nova_compute[233724]: 2025-11-29 08:18:39.607 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:18:39 np0005539552 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d0000006c.scope: Deactivated successfully.
Nov 29 03:18:39 np0005539552 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d0000006c.scope: Consumed 14.536s CPU time.
Nov 29 03:18:39 np0005539552 systemd-machined[196379]: Machine qemu-45-instance-0000006c terminated.
Nov 29 03:18:39 np0005539552 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[279681]: [NOTICE]   (279685) : haproxy version is 2.8.14-c23fe91
Nov 29 03:18:39 np0005539552 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[279681]: [NOTICE]   (279685) : path to executable is /usr/sbin/haproxy
Nov 29 03:18:39 np0005539552 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[279681]: [WARNING]  (279685) : Exiting Master process...
Nov 29 03:18:39 np0005539552 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[279681]: [ALERT]    (279685) : Current worker (279687) exited with code 143 (Terminated)
Nov 29 03:18:39 np0005539552 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[279681]: [WARNING]  (279685) : All workers exited. Exiting... (0)
Nov 29 03:18:39 np0005539552 systemd[1]: libpod-6ce2d11df7925b43089ec7bd3b6b2279581a5aec89da2cb90271ef86f392dbfc.scope: Deactivated successfully.
Nov 29 03:18:39 np0005539552 podman[280077]: 2025-11-29 08:18:39.748293106 +0000 UTC m=+0.046651774 container died 6ce2d11df7925b43089ec7bd3b6b2279581a5aec89da2cb90271ef86f392dbfc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:18:39 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6ce2d11df7925b43089ec7bd3b6b2279581a5aec89da2cb90271ef86f392dbfc-userdata-shm.mount: Deactivated successfully.
Nov 29 03:18:39 np0005539552 systemd[1]: var-lib-containers-storage-overlay-d1ded47c045e36b10ccd4032f7c8c0ba08533b4ed43ea63857c4b8f3ae6106d2-merged.mount: Deactivated successfully.
Nov 29 03:18:39 np0005539552 podman[280077]: 2025-11-29 08:18:39.792181895 +0000 UTC m=+0.090540563 container cleanup 6ce2d11df7925b43089ec7bd3b6b2279581a5aec89da2cb90271ef86f392dbfc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 03:18:39 np0005539552 systemd[1]: libpod-conmon-6ce2d11df7925b43089ec7bd3b6b2279581a5aec89da2cb90271ef86f392dbfc.scope: Deactivated successfully.
Nov 29 03:18:39 np0005539552 nova_compute[233724]: 2025-11-29 08:18:39.834 233728 INFO nova.virt.libvirt.driver [None req-83c17e1f-c8c2-463f-8dc2-3e04a317aabf 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Instance shutdown successfully after 14 seconds.#033[00m
Nov 29 03:18:39 np0005539552 nova_compute[233724]: 2025-11-29 08:18:39.840 233728 INFO nova.virt.libvirt.driver [-] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Instance destroyed successfully.#033[00m
Nov 29 03:18:39 np0005539552 nova_compute[233724]: 2025-11-29 08:18:39.840 233728 DEBUG nova.objects.instance [None req-83c17e1f-c8c2-463f-8dc2-3e04a317aabf 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'numa_topology' on Instance uuid 258dfc76-0ea9-4521-a3fc-5d64b3632451 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:18:39 np0005539552 podman[280110]: 2025-11-29 08:18:39.857995593 +0000 UTC m=+0.045095592 container remove 6ce2d11df7925b43089ec7bd3b6b2279581a5aec89da2cb90271ef86f392dbfc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 03:18:39 np0005539552 nova_compute[233724]: 2025-11-29 08:18:39.861 233728 DEBUG nova.compute.manager [None req-83c17e1f-c8c2-463f-8dc2-3e04a317aabf 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:18:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:39.864 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0eaefca8-c61d-487c-bb85-4408002e1ba7]: (4, ('Sat Nov 29 08:18:39 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 (6ce2d11df7925b43089ec7bd3b6b2279581a5aec89da2cb90271ef86f392dbfc)\n6ce2d11df7925b43089ec7bd3b6b2279581a5aec89da2cb90271ef86f392dbfc\nSat Nov 29 08:18:39 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 (6ce2d11df7925b43089ec7bd3b6b2279581a5aec89da2cb90271ef86f392dbfc)\n6ce2d11df7925b43089ec7bd3b6b2279581a5aec89da2cb90271ef86f392dbfc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:39.866 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[9aae2222-9f59-4518-aba8-bde4141a8de9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:39.866 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58fd104d-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:39 np0005539552 nova_compute[233724]: 2025-11-29 08:18:39.920 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:39 np0005539552 kernel: tap58fd104d-40: left promiscuous mode
Nov 29 03:18:39 np0005539552 nova_compute[233724]: 2025-11-29 08:18:39.926 233728 DEBUG oslo_concurrency.lockutils [None req-83c17e1f-c8c2-463f-8dc2-3e04a317aabf 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 14.183s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:39 np0005539552 nova_compute[233724]: 2025-11-29 08:18:39.940 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:39.943 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0dc9cc37-3afa-4794-87b9-48decc884c39]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:39.956 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4678f98f-7c22-4688-996b-f598a3f568a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:39.957 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4f1ae0b0-8b91-44db-8654-ead54735c060]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:39.974 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e3ae5391-09ca-4f90-b4f4-0733ad248621]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 734328, 'reachable_time': 28877, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280137, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:39.976 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:18:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:39.977 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[d9350235-ddba-4333-953b-8739ea00f195]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:39 np0005539552 systemd[1]: run-netns-ovnmeta\x2d58fd104d\x2d4342\x2d482d\x2dae9e\x2ddbb4b9fa6788.mount: Deactivated successfully.
Nov 29 03:18:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:40.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:40 np0005539552 nova_compute[233724]: 2025-11-29 08:18:40.368 233728 DEBUG nova.compute.manager [req-c1075612-e639-4aa5-9ed6-be71e6b0d511 req-97e17853-dfdd-4f63-887c-8318276fef06 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received event network-vif-unplugged-524180cf-279c-48d6-8bf1-04f8f159aef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:18:40 np0005539552 nova_compute[233724]: 2025-11-29 08:18:40.369 233728 DEBUG oslo_concurrency.lockutils [req-c1075612-e639-4aa5-9ed6-be71e6b0d511 req-97e17853-dfdd-4f63-887c-8318276fef06 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:40 np0005539552 nova_compute[233724]: 2025-11-29 08:18:40.369 233728 DEBUG oslo_concurrency.lockutils [req-c1075612-e639-4aa5-9ed6-be71e6b0d511 req-97e17853-dfdd-4f63-887c-8318276fef06 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:40 np0005539552 nova_compute[233724]: 2025-11-29 08:18:40.369 233728 DEBUG oslo_concurrency.lockutils [req-c1075612-e639-4aa5-9ed6-be71e6b0d511 req-97e17853-dfdd-4f63-887c-8318276fef06 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:40 np0005539552 nova_compute[233724]: 2025-11-29 08:18:40.369 233728 DEBUG nova.compute.manager [req-c1075612-e639-4aa5-9ed6-be71e6b0d511 req-97e17853-dfdd-4f63-887c-8318276fef06 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] No waiting events found dispatching network-vif-unplugged-524180cf-279c-48d6-8bf1-04f8f159aef6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:18:40 np0005539552 nova_compute[233724]: 2025-11-29 08:18:40.369 233728 WARNING nova.compute.manager [req-c1075612-e639-4aa5-9ed6-be71e6b0d511 req-97e17853-dfdd-4f63-887c-8318276fef06 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received unexpected event network-vif-unplugged-524180cf-279c-48d6-8bf1-04f8f159aef6 for instance with vm_state stopped and task_state None.#033[00m
Nov 29 03:18:40 np0005539552 nova_compute[233724]: 2025-11-29 08:18:40.919 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:18:41 np0005539552 nova_compute[233724]: 2025-11-29 08:18:41.209 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:41.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:42.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:42 np0005539552 nova_compute[233724]: 2025-11-29 08:18:42.207 233728 DEBUG nova.objects.instance [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'flavor' on Instance uuid 258dfc76-0ea9-4521-a3fc-5d64b3632451 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:18:42 np0005539552 nova_compute[233724]: 2025-11-29 08:18:42.228 233728 DEBUG oslo_concurrency.lockutils [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "refresh_cache-258dfc76-0ea9-4521-a3fc-5d64b3632451" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:18:42 np0005539552 nova_compute[233724]: 2025-11-29 08:18:42.228 233728 DEBUG oslo_concurrency.lockutils [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquired lock "refresh_cache-258dfc76-0ea9-4521-a3fc-5d64b3632451" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:18:42 np0005539552 nova_compute[233724]: 2025-11-29 08:18:42.229 233728 DEBUG nova.network.neutron [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:18:42 np0005539552 nova_compute[233724]: 2025-11-29 08:18:42.229 233728 DEBUG nova.objects.instance [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'info_cache' on Instance uuid 258dfc76-0ea9-4521-a3fc-5d64b3632451 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:18:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e309 e309: 3 total, 3 up, 3 in
Nov 29 03:18:42 np0005539552 nova_compute[233724]: 2025-11-29 08:18:42.512 233728 DEBUG nova.compute.manager [req-cfdf53c4-4447-4995-b6b0-b318e77d75ee req-de93eb69-f513-4eb5-a8f2-a77b487e1b0e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received event network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:18:42 np0005539552 nova_compute[233724]: 2025-11-29 08:18:42.513 233728 DEBUG oslo_concurrency.lockutils [req-cfdf53c4-4447-4995-b6b0-b318e77d75ee req-de93eb69-f513-4eb5-a8f2-a77b487e1b0e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:42 np0005539552 nova_compute[233724]: 2025-11-29 08:18:42.513 233728 DEBUG oslo_concurrency.lockutils [req-cfdf53c4-4447-4995-b6b0-b318e77d75ee req-de93eb69-f513-4eb5-a8f2-a77b487e1b0e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:42 np0005539552 nova_compute[233724]: 2025-11-29 08:18:42.513 233728 DEBUG oslo_concurrency.lockutils [req-cfdf53c4-4447-4995-b6b0-b318e77d75ee req-de93eb69-f513-4eb5-a8f2-a77b487e1b0e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:42 np0005539552 nova_compute[233724]: 2025-11-29 08:18:42.513 233728 DEBUG nova.compute.manager [req-cfdf53c4-4447-4995-b6b0-b318e77d75ee req-de93eb69-f513-4eb5-a8f2-a77b487e1b0e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] No waiting events found dispatching network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:18:42 np0005539552 nova_compute[233724]: 2025-11-29 08:18:42.513 233728 WARNING nova.compute.manager [req-cfdf53c4-4447-4995-b6b0-b318e77d75ee req-de93eb69-f513-4eb5-a8f2-a77b487e1b0e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received unexpected event network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 for instance with vm_state stopped and task_state powering-on.#033[00m
Nov 29 03:18:42 np0005539552 nova_compute[233724]: 2025-11-29 08:18:42.578 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:43 np0005539552 nova_compute[233724]: 2025-11-29 08:18:43.234 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:43.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:44.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:44 np0005539552 nova_compute[233724]: 2025-11-29 08:18:44.182 233728 DEBUG nova.network.neutron [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Updating instance_info_cache with network_info: [{"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:18:44 np0005539552 nova_compute[233724]: 2025-11-29 08:18:44.211 233728 DEBUG oslo_concurrency.lockutils [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Releasing lock "refresh_cache-258dfc76-0ea9-4521-a3fc-5d64b3632451" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:18:44 np0005539552 nova_compute[233724]: 2025-11-29 08:18:44.237 233728 INFO nova.virt.libvirt.driver [-] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Instance destroyed successfully.#033[00m
Nov 29 03:18:44 np0005539552 nova_compute[233724]: 2025-11-29 08:18:44.238 233728 DEBUG nova.objects.instance [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'numa_topology' on Instance uuid 258dfc76-0ea9-4521-a3fc-5d64b3632451 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:18:44 np0005539552 nova_compute[233724]: 2025-11-29 08:18:44.250 233728 DEBUG nova.objects.instance [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'resources' on Instance uuid 258dfc76-0ea9-4521-a3fc-5d64b3632451 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:18:44 np0005539552 nova_compute[233724]: 2025-11-29 08:18:44.265 233728 DEBUG nova.virt.libvirt.vif [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:17:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1950416616',display_name='tempest-ServerActionsTestJSON-server-1950416616',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1950416616',id=108,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHzpWtKVBxR8y0ptyf26y7qDtzaZ8kbONkoZ9pomjaUJfrobt3UrzOwJRKUVsAcnHq9vyCWex553L84ouC5hX916iXo50xuUU5ZZ/mR8SlhwWlkwNt3Z2Xuyrzlm/13P0A==',key_name='tempest-keypair-2034735121',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:17:32Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='26e3508b949a4dbf960d7befc8f27869',ramdisk_id='',reservation_id='r-f50l1w69',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-2111371935',owner_user_name='tempest-ServerActionsTestJSON-2111371935-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:18:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='80ceb9112b3a4f119c05f21fd617af11',uuid=258dfc76-0ea9-4521-a3fc-5d64b3632451,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:18:44 np0005539552 nova_compute[233724]: 2025-11-29 08:18:44.266 233728 DEBUG nova.network.os_vif_util [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converting VIF {"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:18:44 np0005539552 nova_compute[233724]: 2025-11-29 08:18:44.267 233728 DEBUG nova.network.os_vif_util [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:49:96,bridge_name='br-int',has_traffic_filtering=True,id=524180cf-279c-48d6-8bf1-04f8f159aef6,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap524180cf-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:18:44 np0005539552 nova_compute[233724]: 2025-11-29 08:18:44.267 233728 DEBUG os_vif [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:49:96,bridge_name='br-int',has_traffic_filtering=True,id=524180cf-279c-48d6-8bf1-04f8f159aef6,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap524180cf-27') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:18:44 np0005539552 nova_compute[233724]: 2025-11-29 08:18:44.269 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:44 np0005539552 nova_compute[233724]: 2025-11-29 08:18:44.270 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap524180cf-27, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:44 np0005539552 nova_compute[233724]: 2025-11-29 08:18:44.272 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:44 np0005539552 nova_compute[233724]: 2025-11-29 08:18:44.275 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:18:44 np0005539552 nova_compute[233724]: 2025-11-29 08:18:44.278 233728 INFO os_vif [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:49:96,bridge_name='br-int',has_traffic_filtering=True,id=524180cf-279c-48d6-8bf1-04f8f159aef6,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap524180cf-27')#033[00m
Nov 29 03:18:44 np0005539552 nova_compute[233724]: 2025-11-29 08:18:44.286 233728 DEBUG nova.virt.libvirt.driver [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Start _get_guest_xml network_info=[{"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:18:44 np0005539552 nova_compute[233724]: 2025-11-29 08:18:44.290 233728 WARNING nova.virt.libvirt.driver [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:18:44 np0005539552 nova_compute[233724]: 2025-11-29 08:18:44.296 233728 DEBUG nova.virt.libvirt.host [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:18:44 np0005539552 nova_compute[233724]: 2025-11-29 08:18:44.297 233728 DEBUG nova.virt.libvirt.host [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:18:44 np0005539552 nova_compute[233724]: 2025-11-29 08:18:44.301 233728 DEBUG nova.virt.libvirt.host [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:18:44 np0005539552 nova_compute[233724]: 2025-11-29 08:18:44.301 233728 DEBUG nova.virt.libvirt.host [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:18:44 np0005539552 nova_compute[233724]: 2025-11-29 08:18:44.302 233728 DEBUG nova.virt.libvirt.driver [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:18:44 np0005539552 nova_compute[233724]: 2025-11-29 08:18:44.303 233728 DEBUG nova.virt.hardware [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:18:44 np0005539552 nova_compute[233724]: 2025-11-29 08:18:44.304 233728 DEBUG nova.virt.hardware [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:18:44 np0005539552 nova_compute[233724]: 2025-11-29 08:18:44.304 233728 DEBUG nova.virt.hardware [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:18:44 np0005539552 nova_compute[233724]: 2025-11-29 08:18:44.304 233728 DEBUG nova.virt.hardware [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:18:44 np0005539552 nova_compute[233724]: 2025-11-29 08:18:44.304 233728 DEBUG nova.virt.hardware [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:18:44 np0005539552 nova_compute[233724]: 2025-11-29 08:18:44.305 233728 DEBUG nova.virt.hardware [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:18:44 np0005539552 nova_compute[233724]: 2025-11-29 08:18:44.305 233728 DEBUG nova.virt.hardware [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:18:44 np0005539552 nova_compute[233724]: 2025-11-29 08:18:44.305 233728 DEBUG nova.virt.hardware [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:18:44 np0005539552 nova_compute[233724]: 2025-11-29 08:18:44.306 233728 DEBUG nova.virt.hardware [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:18:44 np0005539552 nova_compute[233724]: 2025-11-29 08:18:44.306 233728 DEBUG nova.virt.hardware [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:18:44 np0005539552 nova_compute[233724]: 2025-11-29 08:18:44.306 233728 DEBUG nova.virt.hardware [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:18:44 np0005539552 nova_compute[233724]: 2025-11-29 08:18:44.307 233728 DEBUG nova.objects.instance [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 258dfc76-0ea9-4521-a3fc-5d64b3632451 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:18:44 np0005539552 nova_compute[233724]: 2025-11-29 08:18:44.329 233728 DEBUG oslo_concurrency.processutils [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:44 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:18:44 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1874194647' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:18:44 np0005539552 nova_compute[233724]: 2025-11-29 08:18:44.827 233728 DEBUG oslo_concurrency.processutils [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:44 np0005539552 nova_compute[233724]: 2025-11-29 08:18:44.870 233728 DEBUG oslo_concurrency.processutils [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:44 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:18:44 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/670153958' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:18:44 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:18:44 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/670153958' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:18:45 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:18:45 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1846045109' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:18:45 np0005539552 nova_compute[233724]: 2025-11-29 08:18:45.399 233728 DEBUG oslo_concurrency.processutils [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:45 np0005539552 nova_compute[233724]: 2025-11-29 08:18:45.403 233728 DEBUG nova.virt.libvirt.vif [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:17:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1950416616',display_name='tempest-ServerActionsTestJSON-server-1950416616',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1950416616',id=108,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHzpWtKVBxR8y0ptyf26y7qDtzaZ8kbONkoZ9pomjaUJfrobt3UrzOwJRKUVsAcnHq9vyCWex553L84ouC5hX916iXo50xuUU5ZZ/mR8SlhwWlkwNt3Z2Xuyrzlm/13P0A==',key_name='tempest-keypair-2034735121',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:17:32Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='26e3508b949a4dbf960d7befc8f27869',ramdisk_id='',reservation_id='r-f50l1w69',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-2111371935',owner_user_name='tempest-ServerActionsTestJSON-2111371935-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:18:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='80ceb9112b3a4f119c05f21fd617af11',uuid=258dfc76-0ea9-4521-a3fc-5d64b3632451,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:18:45 np0005539552 nova_compute[233724]: 2025-11-29 08:18:45.404 233728 DEBUG nova.network.os_vif_util [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converting VIF {"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:18:45 np0005539552 nova_compute[233724]: 2025-11-29 08:18:45.405 233728 DEBUG nova.network.os_vif_util [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:49:96,bridge_name='br-int',has_traffic_filtering=True,id=524180cf-279c-48d6-8bf1-04f8f159aef6,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap524180cf-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:18:45 np0005539552 nova_compute[233724]: 2025-11-29 08:18:45.407 233728 DEBUG nova.objects.instance [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'pci_devices' on Instance uuid 258dfc76-0ea9-4521-a3fc-5d64b3632451 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:18:45 np0005539552 nova_compute[233724]: 2025-11-29 08:18:45.426 233728 DEBUG nova.virt.libvirt.driver [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:18:45 np0005539552 nova_compute[233724]:  <uuid>258dfc76-0ea9-4521-a3fc-5d64b3632451</uuid>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:  <name>instance-0000006c</name>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:18:45 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:      <nova:name>tempest-ServerActionsTestJSON-server-1950416616</nova:name>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:18:44</nova:creationTime>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:18:45 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:        <nova:user uuid="80ceb9112b3a4f119c05f21fd617af11">tempest-ServerActionsTestJSON-2111371935-project-member</nova:user>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:        <nova:project uuid="26e3508b949a4dbf960d7befc8f27869">tempest-ServerActionsTestJSON-2111371935</nova:project>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:        <nova:port uuid="524180cf-279c-48d6-8bf1-04f8f159aef6">
Nov 29 03:18:45 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:      <entry name="serial">258dfc76-0ea9-4521-a3fc-5d64b3632451</entry>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:      <entry name="uuid">258dfc76-0ea9-4521-a3fc-5d64b3632451</entry>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:18:45 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/258dfc76-0ea9-4521-a3fc-5d64b3632451_disk">
Nov 29 03:18:45 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:18:45 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:18:45 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/258dfc76-0ea9-4521-a3fc-5d64b3632451_disk.config">
Nov 29 03:18:45 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:18:45 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:18:45 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:b8:49:96"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:      <target dev="tap524180cf-27"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:18:45 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/258dfc76-0ea9-4521-a3fc-5d64b3632451/console.log" append="off"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <input type="keyboard" bus="usb"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:18:45 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:18:45 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:18:45 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:18:45 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:18:45 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:18:45 np0005539552 nova_compute[233724]: 2025-11-29 08:18:45.430 233728 DEBUG nova.virt.libvirt.driver [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:18:45 np0005539552 nova_compute[233724]: 2025-11-29 08:18:45.430 233728 DEBUG nova.virt.libvirt.driver [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:18:45 np0005539552 nova_compute[233724]: 2025-11-29 08:18:45.431 233728 DEBUG nova.virt.libvirt.vif [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:17:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1950416616',display_name='tempest-ServerActionsTestJSON-server-1950416616',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1950416616',id=108,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHzpWtKVBxR8y0ptyf26y7qDtzaZ8kbONkoZ9pomjaUJfrobt3UrzOwJRKUVsAcnHq9vyCWex553L84ouC5hX916iXo50xuUU5ZZ/mR8SlhwWlkwNt3Z2Xuyrzlm/13P0A==',key_name='tempest-keypair-2034735121',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:17:32Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='26e3508b949a4dbf960d7befc8f27869',ramdisk_id='',reservation_id='r-f50l1w69',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-2111371935',owner_user_name='tempest-ServerActionsTestJSON-2111371935-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:18:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='80ceb9112b3a4f119c05f21fd617af11',uuid=258dfc76-0ea9-4521-a3fc-5d64b3632451,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:18:45 np0005539552 nova_compute[233724]: 2025-11-29 08:18:45.432 233728 DEBUG nova.network.os_vif_util [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converting VIF {"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:18:45 np0005539552 nova_compute[233724]: 2025-11-29 08:18:45.433 233728 DEBUG nova.network.os_vif_util [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:49:96,bridge_name='br-int',has_traffic_filtering=True,id=524180cf-279c-48d6-8bf1-04f8f159aef6,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap524180cf-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:18:45 np0005539552 nova_compute[233724]: 2025-11-29 08:18:45.434 233728 DEBUG os_vif [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:49:96,bridge_name='br-int',has_traffic_filtering=True,id=524180cf-279c-48d6-8bf1-04f8f159aef6,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap524180cf-27') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:18:45 np0005539552 nova_compute[233724]: 2025-11-29 08:18:45.434 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:45 np0005539552 nova_compute[233724]: 2025-11-29 08:18:45.435 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:45 np0005539552 nova_compute[233724]: 2025-11-29 08:18:45.436 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:18:45 np0005539552 nova_compute[233724]: 2025-11-29 08:18:45.439 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:45 np0005539552 nova_compute[233724]: 2025-11-29 08:18:45.440 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap524180cf-27, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:45 np0005539552 nova_compute[233724]: 2025-11-29 08:18:45.441 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap524180cf-27, col_values=(('external_ids', {'iface-id': '524180cf-279c-48d6-8bf1-04f8f159aef6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b8:49:96', 'vm-uuid': '258dfc76-0ea9-4521-a3fc-5d64b3632451'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:45 np0005539552 nova_compute[233724]: 2025-11-29 08:18:45.443 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:45 np0005539552 NetworkManager[48926]: <info>  [1764404325.4446] manager: (tap524180cf-27): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/222)
Nov 29 03:18:45 np0005539552 nova_compute[233724]: 2025-11-29 08:18:45.446 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:18:45 np0005539552 nova_compute[233724]: 2025-11-29 08:18:45.449 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:45 np0005539552 nova_compute[233724]: 2025-11-29 08:18:45.450 233728 INFO os_vif [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:49:96,bridge_name='br-int',has_traffic_filtering=True,id=524180cf-279c-48d6-8bf1-04f8f159aef6,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap524180cf-27')#033[00m
Nov 29 03:18:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:18:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:45.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:18:45 np0005539552 kernel: tap524180cf-27: entered promiscuous mode
Nov 29 03:18:45 np0005539552 NetworkManager[48926]: <info>  [1764404325.5125] manager: (tap524180cf-27): new Tun device (/org/freedesktop/NetworkManager/Devices/223)
Nov 29 03:18:45 np0005539552 nova_compute[233724]: 2025-11-29 08:18:45.512 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:45 np0005539552 ovn_controller[133798]: 2025-11-29T08:18:45Z|00477|binding|INFO|Claiming lport 524180cf-279c-48d6-8bf1-04f8f159aef6 for this chassis.
Nov 29 03:18:45 np0005539552 ovn_controller[133798]: 2025-11-29T08:18:45Z|00478|binding|INFO|524180cf-279c-48d6-8bf1-04f8f159aef6: Claiming fa:16:3e:b8:49:96 10.100.0.5
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:45.519 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:49:96 10.100.0.5'], port_security=['fa:16:3e:b8:49:96 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '258dfc76-0ea9-4521-a3fc-5d64b3632451', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '26e3508b949a4dbf960d7befc8f27869', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'f8b3ac18-c5ae-4ce5-b905-769d2e675d6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.241'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37614949-afe4-4907-8dd7-b52152148378, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=524180cf-279c-48d6-8bf1-04f8f159aef6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:45.520 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 524180cf-279c-48d6-8bf1-04f8f159aef6 in datapath 58fd104d-4342-482d-ae9e-dbb4b9fa6788 bound to our chassis#033[00m
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:45.521 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 58fd104d-4342-482d-ae9e-dbb4b9fa6788#033[00m
Nov 29 03:18:45 np0005539552 ovn_controller[133798]: 2025-11-29T08:18:45Z|00479|binding|INFO|Setting lport 524180cf-279c-48d6-8bf1-04f8f159aef6 ovn-installed in OVS
Nov 29 03:18:45 np0005539552 ovn_controller[133798]: 2025-11-29T08:18:45Z|00480|binding|INFO|Setting lport 524180cf-279c-48d6-8bf1-04f8f159aef6 up in Southbound
Nov 29 03:18:45 np0005539552 nova_compute[233724]: 2025-11-29 08:18:45.529 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:45 np0005539552 nova_compute[233724]: 2025-11-29 08:18:45.531 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:45.533 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a49df622-bc20-4d22-8c68-579e449693f5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:45.533 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap58fd104d-41 in ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:18:45 np0005539552 nova_compute[233724]: 2025-11-29 08:18:45.535 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:45.535 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap58fd104d-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:45.535 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[81b1b239-35c1-41b0-bc80-a38e26b2af2d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:45.536 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ce709dba-f7dc-40ca-b42b-788254fc0f93]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:45 np0005539552 systemd-udevd[280218]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:45.548 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[f0a14cb2-48ad-488e-9512-3d3f83b31867]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:45 np0005539552 NetworkManager[48926]: <info>  [1764404325.5525] device (tap524180cf-27): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:18:45 np0005539552 NetworkManager[48926]: <info>  [1764404325.5533] device (tap524180cf-27): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:18:45 np0005539552 systemd-machined[196379]: New machine qemu-46-instance-0000006c.
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:45.561 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b70de3a0-d741-4970-bcb8-24a7e9397bb5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:45 np0005539552 systemd[1]: Started Virtual Machine qemu-46-instance-0000006c.
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:45.586 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[92db8ca3-c245-4d22-b32f-f3e83615fffa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:45.591 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[17db9d68-42c8-4f21-a01c-22e4668db5aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:45 np0005539552 NetworkManager[48926]: <info>  [1764404325.5938] manager: (tap58fd104d-40): new Veth device (/org/freedesktop/NetworkManager/Devices/224)
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:45.624 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[d29d131d-a6fa-4dc9-ab3c-3327d8cc8198]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:45.627 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[a9c0d205-23f9-434f-a783-291a2b5ef4f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:45 np0005539552 NetworkManager[48926]: <info>  [1764404325.6529] device (tap58fd104d-40): carrier: link connected
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:45.659 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[7bd9bb63-c8a2-492e-91e1-59652cb47b85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:45.676 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[19139036-bf3e-4b58-bfc2-4e6437155794]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58fd104d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:26:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 144], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 737134, 'reachable_time': 16273, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280253, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:45.694 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[7c510f47-8bd6-4b26-9069-5a35a1a5dff6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea8:261e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 737134, 'tstamp': 737134}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280254, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:45.714 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[35bbe890-2499-408a-acf1-1dfcc0d8bc3b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58fd104d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:26:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 144], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 737134, 'reachable_time': 16273, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 280255, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:45.747 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[aae6121a-4d0c-4622-ba0d-d869c2e73397]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:45.807 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d27a492d-cd99-49b9-a9e5-ba1d650ebbe9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:45.808 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58fd104d-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:45.809 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:45.809 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58fd104d-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:45 np0005539552 kernel: tap58fd104d-40: entered promiscuous mode
Nov 29 03:18:45 np0005539552 nova_compute[233724]: 2025-11-29 08:18:45.811 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:45 np0005539552 NetworkManager[48926]: <info>  [1764404325.8119] manager: (tap58fd104d-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/225)
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:45.817 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap58fd104d-40, col_values=(('external_ids', {'iface-id': '49c2d2fc-d147-42b8-8b87-df4d04283e61'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:45 np0005539552 nova_compute[233724]: 2025-11-29 08:18:45.818 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:45 np0005539552 ovn_controller[133798]: 2025-11-29T08:18:45Z|00481|binding|INFO|Releasing lport 49c2d2fc-d147-42b8-8b87-df4d04283e61 from this chassis (sb_readonly=0)
Nov 29 03:18:45 np0005539552 nova_compute[233724]: 2025-11-29 08:18:45.831 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:45.832 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/58fd104d-4342-482d-ae9e-dbb4b9fa6788.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/58fd104d-4342-482d-ae9e-dbb4b9fa6788.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:45.833 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c5043094-4188-4020-8a1a-bc636a1945f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:45.834 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-58fd104d-4342-482d-ae9e-dbb4b9fa6788
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/58fd104d-4342-482d-ae9e-dbb4b9fa6788.pid.haproxy
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 58fd104d-4342-482d-ae9e-dbb4b9fa6788
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:18:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:18:45.834 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'env', 'PROCESS_TAG=haproxy-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/58fd104d-4342-482d-ae9e-dbb4b9fa6788.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:18:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:18:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:46.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:18:46 np0005539552 podman[280323]: 2025-11-29 08:18:46.19842167 +0000 UTC m=+0.051249178 container create 4cb8d1f21e79724c1c21e260d4f994c428605bcfbd18acdcdc9998e90186c9ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 03:18:46 np0005539552 nova_compute[233724]: 2025-11-29 08:18:46.211 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:46 np0005539552 systemd[1]: Started libpod-conmon-4cb8d1f21e79724c1c21e260d4f994c428605bcfbd18acdcdc9998e90186c9ce.scope.
Nov 29 03:18:46 np0005539552 podman[280323]: 2025-11-29 08:18:46.172178485 +0000 UTC m=+0.025006013 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:18:46 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:18:46 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/041bd1fd1338f04652cc51ac8df981cb2f4f35930d8638f0afdc4443e958ed9b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:18:46 np0005539552 podman[280323]: 2025-11-29 08:18:46.287057851 +0000 UTC m=+0.139885379 container init 4cb8d1f21e79724c1c21e260d4f994c428605bcfbd18acdcdc9998e90186c9ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:18:46 np0005539552 podman[280323]: 2025-11-29 08:18:46.29410113 +0000 UTC m=+0.146928638 container start 4cb8d1f21e79724c1c21e260d4f994c428605bcfbd18acdcdc9998e90186c9ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 03:18:46 np0005539552 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[280338]: [NOTICE]   (280342) : New worker (280344) forked
Nov 29 03:18:46 np0005539552 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[280338]: [NOTICE]   (280342) : Loading success.
Nov 29 03:18:46 np0005539552 nova_compute[233724]: 2025-11-29 08:18:46.493 233728 DEBUG nova.virt.libvirt.host [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Removed pending event for 258dfc76-0ea9-4521-a3fc-5d64b3632451 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:18:46 np0005539552 nova_compute[233724]: 2025-11-29 08:18:46.495 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404326.49315, 258dfc76-0ea9-4521-a3fc-5d64b3632451 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:18:46 np0005539552 nova_compute[233724]: 2025-11-29 08:18:46.495 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:18:46 np0005539552 nova_compute[233724]: 2025-11-29 08:18:46.498 233728 DEBUG nova.compute.manager [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:18:46 np0005539552 nova_compute[233724]: 2025-11-29 08:18:46.504 233728 INFO nova.virt.libvirt.driver [-] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Instance rebooted successfully.#033[00m
Nov 29 03:18:46 np0005539552 nova_compute[233724]: 2025-11-29 08:18:46.505 233728 DEBUG nova.compute.manager [None req-34c8c01c-6b98-454e-98bc-a7efd35242b9 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:18:46 np0005539552 nova_compute[233724]: 2025-11-29 08:18:46.531 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:18:46 np0005539552 nova_compute[233724]: 2025-11-29 08:18:46.537 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:18:46 np0005539552 nova_compute[233724]: 2025-11-29 08:18:46.566 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] During sync_power_state the instance has a pending task (powering-on). Skip.#033[00m
Nov 29 03:18:46 np0005539552 nova_compute[233724]: 2025-11-29 08:18:46.566 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404326.4944973, 258dfc76-0ea9-4521-a3fc-5d64b3632451 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:18:46 np0005539552 nova_compute[233724]: 2025-11-29 08:18:46.566 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] VM Started (Lifecycle Event)#033[00m
Nov 29 03:18:46 np0005539552 nova_compute[233724]: 2025-11-29 08:18:46.593 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:18:46 np0005539552 nova_compute[233724]: 2025-11-29 08:18:46.600 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:18:47 np0005539552 nova_compute[233724]: 2025-11-29 08:18:47.441 233728 DEBUG nova.compute.manager [req-a8c5e768-af66-40d9-a390-15628b903683 req-07d18f06-14d2-4d66-aea1-0e314dfe1ffc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received event network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:18:47 np0005539552 nova_compute[233724]: 2025-11-29 08:18:47.442 233728 DEBUG oslo_concurrency.lockutils [req-a8c5e768-af66-40d9-a390-15628b903683 req-07d18f06-14d2-4d66-aea1-0e314dfe1ffc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:47 np0005539552 nova_compute[233724]: 2025-11-29 08:18:47.442 233728 DEBUG oslo_concurrency.lockutils [req-a8c5e768-af66-40d9-a390-15628b903683 req-07d18f06-14d2-4d66-aea1-0e314dfe1ffc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:47 np0005539552 nova_compute[233724]: 2025-11-29 08:18:47.442 233728 DEBUG oslo_concurrency.lockutils [req-a8c5e768-af66-40d9-a390-15628b903683 req-07d18f06-14d2-4d66-aea1-0e314dfe1ffc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:47 np0005539552 nova_compute[233724]: 2025-11-29 08:18:47.443 233728 DEBUG nova.compute.manager [req-a8c5e768-af66-40d9-a390-15628b903683 req-07d18f06-14d2-4d66-aea1-0e314dfe1ffc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] No waiting events found dispatching network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:18:47 np0005539552 nova_compute[233724]: 2025-11-29 08:18:47.443 233728 WARNING nova.compute.manager [req-a8c5e768-af66-40d9-a390-15628b903683 req-07d18f06-14d2-4d66-aea1-0e314dfe1ffc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received unexpected event network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:18:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:47.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:48.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:49.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:49 np0005539552 nova_compute[233724]: 2025-11-29 08:18:49.523 233728 DEBUG nova.compute.manager [req-dcbbdfa8-93ff-4a4b-9f6b-509fa89ca88c req-6a7861fa-6d10-44ed-be0c-5e4d1d33d1e0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received event network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:18:49 np0005539552 nova_compute[233724]: 2025-11-29 08:18:49.523 233728 DEBUG oslo_concurrency.lockutils [req-dcbbdfa8-93ff-4a4b-9f6b-509fa89ca88c req-6a7861fa-6d10-44ed-be0c-5e4d1d33d1e0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:49 np0005539552 nova_compute[233724]: 2025-11-29 08:18:49.524 233728 DEBUG oslo_concurrency.lockutils [req-dcbbdfa8-93ff-4a4b-9f6b-509fa89ca88c req-6a7861fa-6d10-44ed-be0c-5e4d1d33d1e0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:49 np0005539552 nova_compute[233724]: 2025-11-29 08:18:49.524 233728 DEBUG oslo_concurrency.lockutils [req-dcbbdfa8-93ff-4a4b-9f6b-509fa89ca88c req-6a7861fa-6d10-44ed-be0c-5e4d1d33d1e0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:49 np0005539552 nova_compute[233724]: 2025-11-29 08:18:49.524 233728 DEBUG nova.compute.manager [req-dcbbdfa8-93ff-4a4b-9f6b-509fa89ca88c req-6a7861fa-6d10-44ed-be0c-5e4d1d33d1e0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] No waiting events found dispatching network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:18:49 np0005539552 nova_compute[233724]: 2025-11-29 08:18:49.524 233728 WARNING nova.compute.manager [req-dcbbdfa8-93ff-4a4b-9f6b-509fa89ca88c req-6a7861fa-6d10-44ed-be0c-5e4d1d33d1e0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received unexpected event network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:18:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:50.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:50 np0005539552 nova_compute[233724]: 2025-11-29 08:18:50.439 233728 INFO nova.compute.manager [None req-410bd3d0-20b8-45ab-bd27-4a71daa72a38 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Pausing#033[00m
Nov 29 03:18:50 np0005539552 nova_compute[233724]: 2025-11-29 08:18:50.440 233728 DEBUG nova.objects.instance [None req-410bd3d0-20b8-45ab-bd27-4a71daa72a38 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'flavor' on Instance uuid 258dfc76-0ea9-4521-a3fc-5d64b3632451 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:18:50 np0005539552 nova_compute[233724]: 2025-11-29 08:18:50.444 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:50 np0005539552 nova_compute[233724]: 2025-11-29 08:18:50.466 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404330.4660883, 258dfc76-0ea9-4521-a3fc-5d64b3632451 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:18:50 np0005539552 nova_compute[233724]: 2025-11-29 08:18:50.466 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:18:50 np0005539552 nova_compute[233724]: 2025-11-29 08:18:50.468 233728 DEBUG nova.compute.manager [None req-410bd3d0-20b8-45ab-bd27-4a71daa72a38 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:18:50 np0005539552 nova_compute[233724]: 2025-11-29 08:18:50.496 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:18:50 np0005539552 nova_compute[233724]: 2025-11-29 08:18:50.500 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:18:50 np0005539552 nova_compute[233724]: 2025-11-29 08:18:50.530 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Nov 29 03:18:51 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:18:51 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3593754662' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:18:51 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:18:51 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3593754662' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:18:51 np0005539552 nova_compute[233724]: 2025-11-29 08:18:51.249 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:51.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:52.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:52 np0005539552 nova_compute[233724]: 2025-11-29 08:18:52.500 233728 INFO nova.compute.manager [None req-b1cc271b-19f5-4bc6-9429-edfc5974125c 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Unpausing#033[00m
Nov 29 03:18:52 np0005539552 nova_compute[233724]: 2025-11-29 08:18:52.501 233728 DEBUG nova.objects.instance [None req-b1cc271b-19f5-4bc6-9429-edfc5974125c 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'flavor' on Instance uuid 258dfc76-0ea9-4521-a3fc-5d64b3632451 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:18:52 np0005539552 nova_compute[233724]: 2025-11-29 08:18:52.532 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404332.5318377, 258dfc76-0ea9-4521-a3fc-5d64b3632451 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:18:52 np0005539552 nova_compute[233724]: 2025-11-29 08:18:52.532 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:18:52 np0005539552 virtqemud[233098]: argument unsupported: QEMU guest agent is not configured
Nov 29 03:18:52 np0005539552 nova_compute[233724]: 2025-11-29 08:18:52.536 233728 DEBUG nova.virt.libvirt.guest [None req-b1cc271b-19f5-4bc6-9429-edfc5974125c 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Nov 29 03:18:52 np0005539552 nova_compute[233724]: 2025-11-29 08:18:52.536 233728 DEBUG nova.compute.manager [None req-b1cc271b-19f5-4bc6-9429-edfc5974125c 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:18:52 np0005539552 nova_compute[233724]: 2025-11-29 08:18:52.564 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:18:52 np0005539552 nova_compute[233724]: 2025-11-29 08:18:52.568 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:18:52 np0005539552 nova_compute[233724]: 2025-11-29 08:18:52.598 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] During sync_power_state the instance has a pending task (unpausing). Skip.#033[00m
Nov 29 03:18:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:18:52 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1866180322' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:18:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:18:52 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1866180322' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:18:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:53.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:53 np0005539552 ovn_controller[133798]: 2025-11-29T08:18:53Z|00482|binding|INFO|Releasing lport 49c2d2fc-d147-42b8-8b87-df4d04283e61 from this chassis (sb_readonly=0)
Nov 29 03:18:53 np0005539552 nova_compute[233724]: 2025-11-29 08:18:53.702 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:54.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:54 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:18:54 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3234143537' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:18:54 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:18:54 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3234143537' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:18:55 np0005539552 nova_compute[233724]: 2025-11-29 08:18:55.447 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:55.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:18:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:56.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:18:56 np0005539552 nova_compute[233724]: 2025-11-29 08:18:56.302 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:57.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:57 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:18:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:58.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:18:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:18:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:18:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:59.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:00.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:00 np0005539552 nova_compute[233724]: 2025-11-29 08:19:00.450 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:01 np0005539552 nova_compute[233724]: 2025-11-29 08:19:01.304 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:01.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:02.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:02 np0005539552 nova_compute[233724]: 2025-11-29 08:19:02.758 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:02 np0005539552 ovn_controller[133798]: 2025-11-29T08:19:02Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b8:49:96 10.100.0.5
Nov 29 03:19:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:03.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:04 np0005539552 podman[280425]: 2025-11-29 08:19:04.004737675 +0000 UTC m=+0.079764284 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Nov 29 03:19:04 np0005539552 podman[280426]: 2025-11-29 08:19:04.033527989 +0000 UTC m=+0.097719287 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 03:19:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:04.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:04 np0005539552 podman[280427]: 2025-11-29 08:19:04.114635508 +0000 UTC m=+0.174589412 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 29 03:19:05 np0005539552 nova_compute[233724]: 2025-11-29 08:19:05.454 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:05.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:06.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:06 np0005539552 nova_compute[233724]: 2025-11-29 08:19:06.305 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:06 np0005539552 nova_compute[233724]: 2025-11-29 08:19:06.487 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:07.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:07 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:08.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:09.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:09 np0005539552 ovn_controller[133798]: 2025-11-29T08:19:09Z|00483|binding|INFO|Releasing lport 49c2d2fc-d147-42b8-8b87-df4d04283e61 from this chassis (sb_readonly=0)
Nov 29 03:19:09 np0005539552 nova_compute[233724]: 2025-11-29 08:19:09.759 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:10.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:10 np0005539552 nova_compute[233724]: 2025-11-29 08:19:10.456 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:10 np0005539552 nova_compute[233724]: 2025-11-29 08:19:10.567 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:10.568 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:19:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:10.569 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:19:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:10.571 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:11 np0005539552 nova_compute[233724]: 2025-11-29 08:19:11.307 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:11.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:11 np0005539552 nova_compute[233724]: 2025-11-29 08:19:11.622 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:12.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:12 np0005539552 nova_compute[233724]: 2025-11-29 08:19:12.244 233728 DEBUG oslo_concurrency.lockutils [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "258dfc76-0ea9-4521-a3fc-5d64b3632451" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:12 np0005539552 nova_compute[233724]: 2025-11-29 08:19:12.244 233728 DEBUG oslo_concurrency.lockutils [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:12 np0005539552 nova_compute[233724]: 2025-11-29 08:19:12.244 233728 INFO nova.compute.manager [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Rebooting instance#033[00m
Nov 29 03:19:12 np0005539552 nova_compute[233724]: 2025-11-29 08:19:12.260 233728 DEBUG oslo_concurrency.lockutils [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "refresh_cache-258dfc76-0ea9-4521-a3fc-5d64b3632451" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:19:12 np0005539552 nova_compute[233724]: 2025-11-29 08:19:12.261 233728 DEBUG oslo_concurrency.lockutils [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquired lock "refresh_cache-258dfc76-0ea9-4521-a3fc-5d64b3632451" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:19:12 np0005539552 nova_compute[233724]: 2025-11-29 08:19:12.261 233728 DEBUG nova.network.neutron [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:19:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:13.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:14.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:14 np0005539552 nova_compute[233724]: 2025-11-29 08:19:14.093 233728 DEBUG nova.network.neutron [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Updating instance_info_cache with network_info: [{"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:19:14 np0005539552 nova_compute[233724]: 2025-11-29 08:19:14.116 233728 DEBUG oslo_concurrency.lockutils [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Releasing lock "refresh_cache-258dfc76-0ea9-4521-a3fc-5d64b3632451" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:19:14 np0005539552 nova_compute[233724]: 2025-11-29 08:19:14.117 233728 DEBUG nova.compute.manager [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:19:14 np0005539552 kernel: tap524180cf-27 (unregistering): left promiscuous mode
Nov 29 03:19:14 np0005539552 NetworkManager[48926]: <info>  [1764404354.2956] device (tap524180cf-27): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:19:14 np0005539552 nova_compute[233724]: 2025-11-29 08:19:14.306 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:14 np0005539552 ovn_controller[133798]: 2025-11-29T08:19:14Z|00484|binding|INFO|Releasing lport 524180cf-279c-48d6-8bf1-04f8f159aef6 from this chassis (sb_readonly=0)
Nov 29 03:19:14 np0005539552 ovn_controller[133798]: 2025-11-29T08:19:14Z|00485|binding|INFO|Setting lport 524180cf-279c-48d6-8bf1-04f8f159aef6 down in Southbound
Nov 29 03:19:14 np0005539552 ovn_controller[133798]: 2025-11-29T08:19:14Z|00486|binding|INFO|Removing iface tap524180cf-27 ovn-installed in OVS
Nov 29 03:19:14 np0005539552 nova_compute[233724]: 2025-11-29 08:19:14.308 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:14 np0005539552 nova_compute[233724]: 2025-11-29 08:19:14.340 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:14 np0005539552 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d0000006c.scope: Deactivated successfully.
Nov 29 03:19:14 np0005539552 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d0000006c.scope: Consumed 15.149s CPU time.
Nov 29 03:19:14 np0005539552 systemd-machined[196379]: Machine qemu-46-instance-0000006c terminated.
Nov 29 03:19:14 np0005539552 nova_compute[233724]: 2025-11-29 08:19:14.462 233728 INFO nova.virt.libvirt.driver [-] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Instance destroyed successfully.#033[00m
Nov 29 03:19:14 np0005539552 nova_compute[233724]: 2025-11-29 08:19:14.463 233728 DEBUG nova.objects.instance [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'resources' on Instance uuid 258dfc76-0ea9-4521-a3fc-5d64b3632451 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:14.464 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:49:96 10.100.0.5'], port_security=['fa:16:3e:b8:49:96 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '258dfc76-0ea9-4521-a3fc-5d64b3632451', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '26e3508b949a4dbf960d7befc8f27869', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'f8b3ac18-c5ae-4ce5-b905-769d2e675d6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.241', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37614949-afe4-4907-8dd7-b52152148378, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=524180cf-279c-48d6-8bf1-04f8f159aef6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:19:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:14.465 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 524180cf-279c-48d6-8bf1-04f8f159aef6 in datapath 58fd104d-4342-482d-ae9e-dbb4b9fa6788 unbound from our chassis#033[00m
Nov 29 03:19:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:14.466 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 58fd104d-4342-482d-ae9e-dbb4b9fa6788, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:19:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:14.467 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c7fba321-934d-4c5d-be4f-7a31f8309c1b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:14.468 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 namespace which is not needed anymore#033[00m
Nov 29 03:19:14 np0005539552 nova_compute[233724]: 2025-11-29 08:19:14.475 233728 DEBUG nova.virt.libvirt.vif [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:17:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1950416616',display_name='tempest-ServerActionsTestJSON-server-1950416616',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1950416616',id=108,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHzpWtKVBxR8y0ptyf26y7qDtzaZ8kbONkoZ9pomjaUJfrobt3UrzOwJRKUVsAcnHq9vyCWex553L84ouC5hX916iXo50xuUU5ZZ/mR8SlhwWlkwNt3Z2Xuyrzlm/13P0A==',key_name='tempest-keypair-2034735121',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:17:32Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='26e3508b949a4dbf960d7befc8f27869',ramdisk_id='',reservation_id='r-f50l1w69',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-2111371935',owner_user_name='tempest-ServerActionsTestJSON-2111371935-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:19:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='80ceb9112b3a4f119c05f21fd617af11',uuid=258dfc76-0ea9-4521-a3fc-5d64b3632451,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:19:14 np0005539552 nova_compute[233724]: 2025-11-29 08:19:14.475 233728 DEBUG nova.network.os_vif_util [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converting VIF {"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:19:14 np0005539552 nova_compute[233724]: 2025-11-29 08:19:14.476 233728 DEBUG nova.network.os_vif_util [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b8:49:96,bridge_name='br-int',has_traffic_filtering=True,id=524180cf-279c-48d6-8bf1-04f8f159aef6,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap524180cf-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:19:14 np0005539552 nova_compute[233724]: 2025-11-29 08:19:14.477 233728 DEBUG os_vif [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:49:96,bridge_name='br-int',has_traffic_filtering=True,id=524180cf-279c-48d6-8bf1-04f8f159aef6,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap524180cf-27') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:19:14 np0005539552 nova_compute[233724]: 2025-11-29 08:19:14.478 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:14 np0005539552 nova_compute[233724]: 2025-11-29 08:19:14.479 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap524180cf-27, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:14 np0005539552 nova_compute[233724]: 2025-11-29 08:19:14.480 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:14 np0005539552 nova_compute[233724]: 2025-11-29 08:19:14.481 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:14 np0005539552 nova_compute[233724]: 2025-11-29 08:19:14.485 233728 INFO os_vif [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:49:96,bridge_name='br-int',has_traffic_filtering=True,id=524180cf-279c-48d6-8bf1-04f8f159aef6,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap524180cf-27')#033[00m
Nov 29 03:19:14 np0005539552 nova_compute[233724]: 2025-11-29 08:19:14.492 233728 DEBUG nova.virt.libvirt.driver [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Start _get_guest_xml network_info=[{"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:19:14 np0005539552 nova_compute[233724]: 2025-11-29 08:19:14.496 233728 WARNING nova.virt.libvirt.driver [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:19:14 np0005539552 nova_compute[233724]: 2025-11-29 08:19:14.500 233728 DEBUG nova.virt.libvirt.host [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:19:14 np0005539552 nova_compute[233724]: 2025-11-29 08:19:14.501 233728 DEBUG nova.virt.libvirt.host [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:19:14 np0005539552 nova_compute[233724]: 2025-11-29 08:19:14.505 233728 DEBUG nova.virt.libvirt.host [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:19:14 np0005539552 nova_compute[233724]: 2025-11-29 08:19:14.505 233728 DEBUG nova.virt.libvirt.host [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:19:14 np0005539552 nova_compute[233724]: 2025-11-29 08:19:14.508 233728 DEBUG nova.virt.libvirt.driver [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:19:14 np0005539552 nova_compute[233724]: 2025-11-29 08:19:14.508 233728 DEBUG nova.virt.hardware [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:19:14 np0005539552 nova_compute[233724]: 2025-11-29 08:19:14.509 233728 DEBUG nova.virt.hardware [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:19:14 np0005539552 nova_compute[233724]: 2025-11-29 08:19:14.509 233728 DEBUG nova.virt.hardware [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:19:14 np0005539552 nova_compute[233724]: 2025-11-29 08:19:14.509 233728 DEBUG nova.virt.hardware [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:19:14 np0005539552 nova_compute[233724]: 2025-11-29 08:19:14.510 233728 DEBUG nova.virt.hardware [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:19:14 np0005539552 nova_compute[233724]: 2025-11-29 08:19:14.510 233728 DEBUG nova.virt.hardware [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:19:14 np0005539552 nova_compute[233724]: 2025-11-29 08:19:14.510 233728 DEBUG nova.virt.hardware [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:19:14 np0005539552 nova_compute[233724]: 2025-11-29 08:19:14.511 233728 DEBUG nova.virt.hardware [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:19:14 np0005539552 nova_compute[233724]: 2025-11-29 08:19:14.511 233728 DEBUG nova.virt.hardware [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:19:14 np0005539552 nova_compute[233724]: 2025-11-29 08:19:14.511 233728 DEBUG nova.virt.hardware [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:19:14 np0005539552 nova_compute[233724]: 2025-11-29 08:19:14.512 233728 DEBUG nova.virt.hardware [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:19:14 np0005539552 nova_compute[233724]: 2025-11-29 08:19:14.512 233728 DEBUG nova.objects.instance [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 258dfc76-0ea9-4521-a3fc-5d64b3632451 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:14 np0005539552 nova_compute[233724]: 2025-11-29 08:19:14.530 233728 DEBUG oslo_concurrency.processutils [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:14 np0005539552 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[280338]: [NOTICE]   (280342) : haproxy version is 2.8.14-c23fe91
Nov 29 03:19:14 np0005539552 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[280338]: [NOTICE]   (280342) : path to executable is /usr/sbin/haproxy
Nov 29 03:19:14 np0005539552 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[280338]: [WARNING]  (280342) : Exiting Master process...
Nov 29 03:19:14 np0005539552 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[280338]: [ALERT]    (280342) : Current worker (280344) exited with code 143 (Terminated)
Nov 29 03:19:14 np0005539552 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[280338]: [WARNING]  (280342) : All workers exited. Exiting... (0)
Nov 29 03:19:14 np0005539552 systemd[1]: libpod-4cb8d1f21e79724c1c21e260d4f994c428605bcfbd18acdcdc9998e90186c9ce.scope: Deactivated successfully.
Nov 29 03:19:14 np0005539552 podman[280531]: 2025-11-29 08:19:14.603272069 +0000 UTC m=+0.048059982 container died 4cb8d1f21e79724c1c21e260d4f994c428605bcfbd18acdcdc9998e90186c9ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 03:19:14 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4cb8d1f21e79724c1c21e260d4f994c428605bcfbd18acdcdc9998e90186c9ce-userdata-shm.mount: Deactivated successfully.
Nov 29 03:19:14 np0005539552 systemd[1]: var-lib-containers-storage-overlay-041bd1fd1338f04652cc51ac8df981cb2f4f35930d8638f0afdc4443e958ed9b-merged.mount: Deactivated successfully.
Nov 29 03:19:14 np0005539552 podman[280531]: 2025-11-29 08:19:14.64386122 +0000 UTC m=+0.088649123 container cleanup 4cb8d1f21e79724c1c21e260d4f994c428605bcfbd18acdcdc9998e90186c9ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:19:14 np0005539552 systemd[1]: libpod-conmon-4cb8d1f21e79724c1c21e260d4f994c428605bcfbd18acdcdc9998e90186c9ce.scope: Deactivated successfully.
Nov 29 03:19:14 np0005539552 podman[280563]: 2025-11-29 08:19:14.718048993 +0000 UTC m=+0.049333606 container remove 4cb8d1f21e79724c1c21e260d4f994c428605bcfbd18acdcdc9998e90186c9ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:19:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:14.725 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e502175c-4294-433f-94b0-1cc0fbb17f5b]: (4, ('Sat Nov 29 08:19:14 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 (4cb8d1f21e79724c1c21e260d4f994c428605bcfbd18acdcdc9998e90186c9ce)\n4cb8d1f21e79724c1c21e260d4f994c428605bcfbd18acdcdc9998e90186c9ce\nSat Nov 29 08:19:14 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 (4cb8d1f21e79724c1c21e260d4f994c428605bcfbd18acdcdc9998e90186c9ce)\n4cb8d1f21e79724c1c21e260d4f994c428605bcfbd18acdcdc9998e90186c9ce\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:14.727 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[9856d4ef-84e6-4d82-ac26-afc29eb5bad2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:14.728 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58fd104d-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:14 np0005539552 nova_compute[233724]: 2025-11-29 08:19:14.730 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:14 np0005539552 kernel: tap58fd104d-40: left promiscuous mode
Nov 29 03:19:14 np0005539552 nova_compute[233724]: 2025-11-29 08:19:14.736 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:14.738 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[7a22b47a-82c6-49bf-b436-15f10925e554]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:14.756 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6eb998f6-cf18-49c4-a7ef-1a5c903dac46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:14.757 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[56770234-ed81-4bcb-ab89-2ffcd0c1c9fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:14 np0005539552 nova_compute[233724]: 2025-11-29 08:19:14.759 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:14.775 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b6027733-6c06-4035-b090-a40a3ef7cd3a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 737127, 'reachable_time': 40465, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280596, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:14 np0005539552 systemd[1]: run-netns-ovnmeta\x2d58fd104d\x2d4342\x2d482d\x2dae9e\x2ddbb4b9fa6788.mount: Deactivated successfully.
Nov 29 03:19:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:14.777 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:19:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:14.778 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[b15e4002-be9f-40d2-a32d-aa4a666364ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:14 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:19:14 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3921117887' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:19:14 np0005539552 nova_compute[233724]: 2025-11-29 08:19:14.969 233728 DEBUG oslo_concurrency.processutils [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:15 np0005539552 nova_compute[233724]: 2025-11-29 08:19:15.008 233728 DEBUG nova.compute.manager [req-4ffdf84a-6b77-41b0-a6bc-cd4d791a0911 req-3e55bc84-71ad-43b8-a95a-633cd2a9f721 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received event network-vif-unplugged-524180cf-279c-48d6-8bf1-04f8f159aef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:15 np0005539552 nova_compute[233724]: 2025-11-29 08:19:15.008 233728 DEBUG oslo_concurrency.lockutils [req-4ffdf84a-6b77-41b0-a6bc-cd4d791a0911 req-3e55bc84-71ad-43b8-a95a-633cd2a9f721 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:15 np0005539552 nova_compute[233724]: 2025-11-29 08:19:15.009 233728 DEBUG oslo_concurrency.lockutils [req-4ffdf84a-6b77-41b0-a6bc-cd4d791a0911 req-3e55bc84-71ad-43b8-a95a-633cd2a9f721 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:15 np0005539552 nova_compute[233724]: 2025-11-29 08:19:15.009 233728 DEBUG oslo_concurrency.lockutils [req-4ffdf84a-6b77-41b0-a6bc-cd4d791a0911 req-3e55bc84-71ad-43b8-a95a-633cd2a9f721 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:15 np0005539552 nova_compute[233724]: 2025-11-29 08:19:15.009 233728 DEBUG nova.compute.manager [req-4ffdf84a-6b77-41b0-a6bc-cd4d791a0911 req-3e55bc84-71ad-43b8-a95a-633cd2a9f721 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] No waiting events found dispatching network-vif-unplugged-524180cf-279c-48d6-8bf1-04f8f159aef6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:19:15 np0005539552 nova_compute[233724]: 2025-11-29 08:19:15.010 233728 WARNING nova.compute.manager [req-4ffdf84a-6b77-41b0-a6bc-cd4d791a0911 req-3e55bc84-71ad-43b8-a95a-633cd2a9f721 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received unexpected event network-vif-unplugged-524180cf-279c-48d6-8bf1-04f8f159aef6 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Nov 29 03:19:15 np0005539552 nova_compute[233724]: 2025-11-29 08:19:15.014 233728 DEBUG oslo_concurrency.processutils [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:15 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:19:15 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3184862710' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:19:15 np0005539552 nova_compute[233724]: 2025-11-29 08:19:15.486 233728 DEBUG oslo_concurrency.processutils [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:15 np0005539552 nova_compute[233724]: 2025-11-29 08:19:15.487 233728 DEBUG nova.virt.libvirt.vif [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:17:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1950416616',display_name='tempest-ServerActionsTestJSON-server-1950416616',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1950416616',id=108,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHzpWtKVBxR8y0ptyf26y7qDtzaZ8kbONkoZ9pomjaUJfrobt3UrzOwJRKUVsAcnHq9vyCWex553L84ouC5hX916iXo50xuUU5ZZ/mR8SlhwWlkwNt3Z2Xuyrzlm/13P0A==',key_name='tempest-keypair-2034735121',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:17:32Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='26e3508b949a4dbf960d7befc8f27869',ramdisk_id='',reservation_id='r-f50l1w69',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-2111371935',owner_user_name='tempest-ServerActionsTestJSON-2111371935-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:19:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='80ceb9112b3a4f119c05f21fd617af11',uuid=258dfc76-0ea9-4521-a3fc-5d64b3632451,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:19:15 np0005539552 nova_compute[233724]: 2025-11-29 08:19:15.488 233728 DEBUG nova.network.os_vif_util [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converting VIF {"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:19:15 np0005539552 nova_compute[233724]: 2025-11-29 08:19:15.489 233728 DEBUG nova.network.os_vif_util [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b8:49:96,bridge_name='br-int',has_traffic_filtering=True,id=524180cf-279c-48d6-8bf1-04f8f159aef6,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap524180cf-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:19:15 np0005539552 nova_compute[233724]: 2025-11-29 08:19:15.490 233728 DEBUG nova.objects.instance [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'pci_devices' on Instance uuid 258dfc76-0ea9-4521-a3fc-5d64b3632451 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:15.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:15 np0005539552 nova_compute[233724]: 2025-11-29 08:19:15.507 233728 DEBUG nova.virt.libvirt.driver [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:19:15 np0005539552 nova_compute[233724]:  <uuid>258dfc76-0ea9-4521-a3fc-5d64b3632451</uuid>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:  <name>instance-0000006c</name>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:19:15 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:      <nova:name>tempest-ServerActionsTestJSON-server-1950416616</nova:name>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:19:14</nova:creationTime>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:19:15 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:        <nova:user uuid="80ceb9112b3a4f119c05f21fd617af11">tempest-ServerActionsTestJSON-2111371935-project-member</nova:user>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:        <nova:project uuid="26e3508b949a4dbf960d7befc8f27869">tempest-ServerActionsTestJSON-2111371935</nova:project>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:        <nova:port uuid="524180cf-279c-48d6-8bf1-04f8f159aef6">
Nov 29 03:19:15 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:      <entry name="serial">258dfc76-0ea9-4521-a3fc-5d64b3632451</entry>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:      <entry name="uuid">258dfc76-0ea9-4521-a3fc-5d64b3632451</entry>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:19:15 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/258dfc76-0ea9-4521-a3fc-5d64b3632451_disk">
Nov 29 03:19:15 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:19:15 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:19:15 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/258dfc76-0ea9-4521-a3fc-5d64b3632451_disk.config">
Nov 29 03:19:15 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:19:15 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:19:15 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:b8:49:96"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:      <target dev="tap524180cf-27"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:19:15 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/258dfc76-0ea9-4521-a3fc-5d64b3632451/console.log" append="off"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <input type="keyboard" bus="usb"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:19:15 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:19:15 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:19:15 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:19:15 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:19:15 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:19:15 np0005539552 nova_compute[233724]: 2025-11-29 08:19:15.507 233728 DEBUG nova.virt.libvirt.driver [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:19:15 np0005539552 nova_compute[233724]: 2025-11-29 08:19:15.508 233728 DEBUG nova.virt.libvirt.driver [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:19:15 np0005539552 nova_compute[233724]: 2025-11-29 08:19:15.508 233728 DEBUG nova.virt.libvirt.vif [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:17:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1950416616',display_name='tempest-ServerActionsTestJSON-server-1950416616',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1950416616',id=108,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHzpWtKVBxR8y0ptyf26y7qDtzaZ8kbONkoZ9pomjaUJfrobt3UrzOwJRKUVsAcnHq9vyCWex553L84ouC5hX916iXo50xuUU5ZZ/mR8SlhwWlkwNt3Z2Xuyrzlm/13P0A==',key_name='tempest-keypair-2034735121',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:17:32Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='26e3508b949a4dbf960d7befc8f27869',ramdisk_id='',reservation_id='r-f50l1w69',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-2111371935',owner_user_name='tempest-ServerActionsTestJSON-2111371935-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:19:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='80ceb9112b3a4f119c05f21fd617af11',uuid=258dfc76-0ea9-4521-a3fc-5d64b3632451,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:19:15 np0005539552 nova_compute[233724]: 2025-11-29 08:19:15.508 233728 DEBUG nova.network.os_vif_util [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converting VIF {"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:19:15 np0005539552 nova_compute[233724]: 2025-11-29 08:19:15.509 233728 DEBUG nova.network.os_vif_util [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b8:49:96,bridge_name='br-int',has_traffic_filtering=True,id=524180cf-279c-48d6-8bf1-04f8f159aef6,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap524180cf-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:19:15 np0005539552 nova_compute[233724]: 2025-11-29 08:19:15.509 233728 DEBUG os_vif [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:49:96,bridge_name='br-int',has_traffic_filtering=True,id=524180cf-279c-48d6-8bf1-04f8f159aef6,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap524180cf-27') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:19:15 np0005539552 nova_compute[233724]: 2025-11-29 08:19:15.510 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:15 np0005539552 nova_compute[233724]: 2025-11-29 08:19:15.510 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:15 np0005539552 nova_compute[233724]: 2025-11-29 08:19:15.510 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:19:15 np0005539552 nova_compute[233724]: 2025-11-29 08:19:15.512 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:15 np0005539552 nova_compute[233724]: 2025-11-29 08:19:15.513 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap524180cf-27, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:15 np0005539552 nova_compute[233724]: 2025-11-29 08:19:15.514 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap524180cf-27, col_values=(('external_ids', {'iface-id': '524180cf-279c-48d6-8bf1-04f8f159aef6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b8:49:96', 'vm-uuid': '258dfc76-0ea9-4521-a3fc-5d64b3632451'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:15 np0005539552 nova_compute[233724]: 2025-11-29 08:19:15.515 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:15 np0005539552 NetworkManager[48926]: <info>  [1764404355.5169] manager: (tap524180cf-27): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/226)
Nov 29 03:19:15 np0005539552 nova_compute[233724]: 2025-11-29 08:19:15.519 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:19:15 np0005539552 nova_compute[233724]: 2025-11-29 08:19:15.524 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:15 np0005539552 nova_compute[233724]: 2025-11-29 08:19:15.525 233728 INFO os_vif [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:49:96,bridge_name='br-int',has_traffic_filtering=True,id=524180cf-279c-48d6-8bf1-04f8f159aef6,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap524180cf-27')#033[00m
Nov 29 03:19:15 np0005539552 systemd-udevd[280496]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:19:15 np0005539552 kernel: tap524180cf-27: entered promiscuous mode
Nov 29 03:19:15 np0005539552 NetworkManager[48926]: <info>  [1764404355.6278] manager: (tap524180cf-27): new Tun device (/org/freedesktop/NetworkManager/Devices/227)
Nov 29 03:19:15 np0005539552 nova_compute[233724]: 2025-11-29 08:19:15.628 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:15 np0005539552 ovn_controller[133798]: 2025-11-29T08:19:15Z|00487|binding|INFO|Claiming lport 524180cf-279c-48d6-8bf1-04f8f159aef6 for this chassis.
Nov 29 03:19:15 np0005539552 ovn_controller[133798]: 2025-11-29T08:19:15Z|00488|binding|INFO|524180cf-279c-48d6-8bf1-04f8f159aef6: Claiming fa:16:3e:b8:49:96 10.100.0.5
Nov 29 03:19:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:15.638 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:49:96 10.100.0.5'], port_security=['fa:16:3e:b8:49:96 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '258dfc76-0ea9-4521-a3fc-5d64b3632451', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '26e3508b949a4dbf960d7befc8f27869', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'f8b3ac18-c5ae-4ce5-b905-769d2e675d6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.241'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37614949-afe4-4907-8dd7-b52152148378, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=524180cf-279c-48d6-8bf1-04f8f159aef6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:19:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:15.640 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 524180cf-279c-48d6-8bf1-04f8f159aef6 in datapath 58fd104d-4342-482d-ae9e-dbb4b9fa6788 bound to our chassis#033[00m
Nov 29 03:19:15 np0005539552 NetworkManager[48926]: <info>  [1764404355.6417] device (tap524180cf-27): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:19:15 np0005539552 NetworkManager[48926]: <info>  [1764404355.6437] device (tap524180cf-27): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:19:15 np0005539552 ovn_controller[133798]: 2025-11-29T08:19:15Z|00489|binding|INFO|Setting lport 524180cf-279c-48d6-8bf1-04f8f159aef6 ovn-installed in OVS
Nov 29 03:19:15 np0005539552 ovn_controller[133798]: 2025-11-29T08:19:15Z|00490|binding|INFO|Setting lport 524180cf-279c-48d6-8bf1-04f8f159aef6 up in Southbound
Nov 29 03:19:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:15.643 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 58fd104d-4342-482d-ae9e-dbb4b9fa6788#033[00m
Nov 29 03:19:15 np0005539552 nova_compute[233724]: 2025-11-29 08:19:15.646 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:15 np0005539552 nova_compute[233724]: 2025-11-29 08:19:15.648 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:15.660 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3f636886-66f5-4a3b-be28-110fe1919312]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:15.662 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap58fd104d-41 in ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:19:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:15.663 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap58fd104d-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:19:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:15.663 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4cccf324-4452-42fb-b852-e6c4fb01487b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:15.664 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3b2e4a5c-adcf-4d38-bf94-34fa5d1b99be]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:15 np0005539552 systemd-machined[196379]: New machine qemu-47-instance-0000006c.
Nov 29 03:19:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:15.683 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[091ee805-153b-4298-8d9a-05adbcee1051]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:15 np0005539552 systemd[1]: Started Virtual Machine qemu-47-instance-0000006c.
Nov 29 03:19:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:15.713 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[168bc95a-f1ed-417b-9439-053651b2b544]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:15.755 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[00e374d4-a905-4a99-bcb7-07ce0a29b5ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:15 np0005539552 NetworkManager[48926]: <info>  [1764404355.7649] manager: (tap58fd104d-40): new Veth device (/org/freedesktop/NetworkManager/Devices/228)
Nov 29 03:19:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:15.764 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[8b99d37c-6cef-4e5a-8ab4-44d68b44a539]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:15.798 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[720f526a-30a2-44ba-8584-17b8aed0ec6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:15.801 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[b84cde4f-7c71-4ed2-ae62-38a8c3b9b6ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:15 np0005539552 NetworkManager[48926]: <info>  [1764404355.8241] device (tap58fd104d-40): carrier: link connected
Nov 29 03:19:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:15.833 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[4f9a325c-1016-49a7-bd64-402723340174]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:15.851 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[26593c99-ec36-4636-97c8-a23268accbde]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58fd104d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:26:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 147], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 740151, 'reachable_time': 37925, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280683, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:15.865 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[da3484e9-4136-4143-b860-4e4fc25c11f5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea8:261e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 740151, 'tstamp': 740151}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280684, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:15.879 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b20894d5-16de-4fdb-8502-8f982a56d781]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58fd104d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:26:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 147], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 740151, 'reachable_time': 37925, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 280685, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:15.913 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[44d84354-fdea-44a4-a369-705841f3538c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:15.984 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a5d50080-1236-4cf2-837a-5e6cc29e0273]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:15.985 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58fd104d-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:15.985 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:19:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:15.985 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58fd104d-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:15 np0005539552 NetworkManager[48926]: <info>  [1764404355.9877] manager: (tap58fd104d-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/229)
Nov 29 03:19:15 np0005539552 kernel: tap58fd104d-40: entered promiscuous mode
Nov 29 03:19:15 np0005539552 nova_compute[233724]: 2025-11-29 08:19:15.987 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:15 np0005539552 nova_compute[233724]: 2025-11-29 08:19:15.990 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:15.991 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap58fd104d-40, col_values=(('external_ids', {'iface-id': '49c2d2fc-d147-42b8-8b87-df4d04283e61'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:15 np0005539552 ovn_controller[133798]: 2025-11-29T08:19:15Z|00491|binding|INFO|Releasing lport 49c2d2fc-d147-42b8-8b87-df4d04283e61 from this chassis (sb_readonly=0)
Nov 29 03:19:16 np0005539552 nova_compute[233724]: 2025-11-29 08:19:16.011 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:16 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:16.012 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/58fd104d-4342-482d-ae9e-dbb4b9fa6788.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/58fd104d-4342-482d-ae9e-dbb4b9fa6788.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:19:16 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:16.013 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[733c2fe2-e04c-47a0-a69f-d2337d3b09aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:16 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:16.014 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:19:16 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:19:16 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:19:16 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-58fd104d-4342-482d-ae9e-dbb4b9fa6788
Nov 29 03:19:16 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:19:16 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:19:16 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:19:16 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/58fd104d-4342-482d-ae9e-dbb4b9fa6788.pid.haproxy
Nov 29 03:19:16 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:19:16 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:19:16 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:19:16 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:19:16 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:19:16 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:19:16 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:19:16 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:19:16 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:19:16 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:19:16 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:19:16 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:19:16 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:19:16 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:19:16 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:19:16 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:19:16 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:19:16 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:19:16 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:19:16 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:19:16 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 58fd104d-4342-482d-ae9e-dbb4b9fa6788
Nov 29 03:19:16 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:19:16 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:16.015 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'env', 'PROCESS_TAG=haproxy-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/58fd104d-4342-482d-ae9e-dbb4b9fa6788.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:19:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:16.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:16 np0005539552 nova_compute[233724]: 2025-11-29 08:19:16.153 233728 DEBUG nova.compute.manager [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:19:16 np0005539552 nova_compute[233724]: 2025-11-29 08:19:16.154 233728 DEBUG nova.virt.libvirt.host [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Removed pending event for 258dfc76-0ea9-4521-a3fc-5d64b3632451 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:19:16 np0005539552 nova_compute[233724]: 2025-11-29 08:19:16.154 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404356.1523318, 258dfc76-0ea9-4521-a3fc-5d64b3632451 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:19:16 np0005539552 nova_compute[233724]: 2025-11-29 08:19:16.154 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:19:16 np0005539552 nova_compute[233724]: 2025-11-29 08:19:16.159 233728 INFO nova.virt.libvirt.driver [-] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Instance rebooted successfully.#033[00m
Nov 29 03:19:16 np0005539552 nova_compute[233724]: 2025-11-29 08:19:16.160 233728 DEBUG nova.compute.manager [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:19:16 np0005539552 nova_compute[233724]: 2025-11-29 08:19:16.195 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:19:16 np0005539552 nova_compute[233724]: 2025-11-29 08:19:16.198 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:19:16 np0005539552 nova_compute[233724]: 2025-11-29 08:19:16.232 233728 DEBUG oslo_concurrency.lockutils [None req-348382ad-2d09-4df6-bc44-8337a1307f0f 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 3.988s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:16 np0005539552 nova_compute[233724]: 2025-11-29 08:19:16.249 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404356.1535366, 258dfc76-0ea9-4521-a3fc-5d64b3632451 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:19:16 np0005539552 nova_compute[233724]: 2025-11-29 08:19:16.249 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] VM Started (Lifecycle Event)#033[00m
Nov 29 03:19:16 np0005539552 nova_compute[233724]: 2025-11-29 08:19:16.272 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:19:16 np0005539552 nova_compute[233724]: 2025-11-29 08:19:16.276 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:19:16 np0005539552 nova_compute[233724]: 2025-11-29 08:19:16.309 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:16 np0005539552 podman[280759]: 2025-11-29 08:19:16.383777855 +0000 UTC m=+0.048354280 container create a6b40a28fa47a9502647693fbaa1de68d3cc6a77c76fc707b17e7d38f32ae1f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:19:16 np0005539552 systemd[1]: Started libpod-conmon-a6b40a28fa47a9502647693fbaa1de68d3cc6a77c76fc707b17e7d38f32ae1f8.scope.
Nov 29 03:19:16 np0005539552 podman[280759]: 2025-11-29 08:19:16.359832592 +0000 UTC m=+0.024409047 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:19:16 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:19:16 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41967c2e3bb4c46a5e5a42fa9884f6669e59e03429fd544dfad50c07ff3d3f2e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:19:16 np0005539552 podman[280759]: 2025-11-29 08:19:16.487914383 +0000 UTC m=+0.152490848 container init a6b40a28fa47a9502647693fbaa1de68d3cc6a77c76fc707b17e7d38f32ae1f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 29 03:19:16 np0005539552 podman[280759]: 2025-11-29 08:19:16.494175821 +0000 UTC m=+0.158752256 container start a6b40a28fa47a9502647693fbaa1de68d3cc6a77c76fc707b17e7d38f32ae1f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:19:16 np0005539552 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[280774]: [NOTICE]   (280778) : New worker (280780) forked
Nov 29 03:19:16 np0005539552 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[280774]: [NOTICE]   (280778) : Loading success.
Nov 29 03:19:17 np0005539552 nova_compute[233724]: 2025-11-29 08:19:17.035 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:19:17 np0005539552 nova_compute[233724]: 2025-11-29 08:19:17.100 233728 DEBUG nova.compute.manager [req-9049383b-45e2-4435-9357-299c7908fbb8 req-a0bf81be-155c-491b-9a3e-59766be74058 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received event network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:19:17 np0005539552 nova_compute[233724]: 2025-11-29 08:19:17.101 233728 DEBUG oslo_concurrency.lockutils [req-9049383b-45e2-4435-9357-299c7908fbb8 req-a0bf81be-155c-491b-9a3e-59766be74058 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:19:17 np0005539552 nova_compute[233724]: 2025-11-29 08:19:17.101 233728 DEBUG oslo_concurrency.lockutils [req-9049383b-45e2-4435-9357-299c7908fbb8 req-a0bf81be-155c-491b-9a3e-59766be74058 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:19:17 np0005539552 nova_compute[233724]: 2025-11-29 08:19:17.101 233728 DEBUG oslo_concurrency.lockutils [req-9049383b-45e2-4435-9357-299c7908fbb8 req-a0bf81be-155c-491b-9a3e-59766be74058 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:19:17 np0005539552 nova_compute[233724]: 2025-11-29 08:19:17.102 233728 DEBUG nova.compute.manager [req-9049383b-45e2-4435-9357-299c7908fbb8 req-a0bf81be-155c-491b-9a3e-59766be74058 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] No waiting events found dispatching network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:19:17 np0005539552 nova_compute[233724]: 2025-11-29 08:19:17.102 233728 WARNING nova.compute.manager [req-9049383b-45e2-4435-9357-299c7908fbb8 req-a0bf81be-155c-491b-9a3e-59766be74058 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received unexpected event network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 for instance with vm_state active and task_state None.
Nov 29 03:19:17 np0005539552 nova_compute[233724]: 2025-11-29 08:19:17.102 233728 DEBUG nova.compute.manager [req-9049383b-45e2-4435-9357-299c7908fbb8 req-a0bf81be-155c-491b-9a3e-59766be74058 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received event network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:19:17 np0005539552 nova_compute[233724]: 2025-11-29 08:19:17.102 233728 DEBUG oslo_concurrency.lockutils [req-9049383b-45e2-4435-9357-299c7908fbb8 req-a0bf81be-155c-491b-9a3e-59766be74058 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:19:17 np0005539552 nova_compute[233724]: 2025-11-29 08:19:17.103 233728 DEBUG oslo_concurrency.lockutils [req-9049383b-45e2-4435-9357-299c7908fbb8 req-a0bf81be-155c-491b-9a3e-59766be74058 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:19:17 np0005539552 nova_compute[233724]: 2025-11-29 08:19:17.103 233728 DEBUG oslo_concurrency.lockutils [req-9049383b-45e2-4435-9357-299c7908fbb8 req-a0bf81be-155c-491b-9a3e-59766be74058 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:19:17 np0005539552 nova_compute[233724]: 2025-11-29 08:19:17.103 233728 DEBUG nova.compute.manager [req-9049383b-45e2-4435-9357-299c7908fbb8 req-a0bf81be-155c-491b-9a3e-59766be74058 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] No waiting events found dispatching network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:19:17 np0005539552 nova_compute[233724]: 2025-11-29 08:19:17.103 233728 WARNING nova.compute.manager [req-9049383b-45e2-4435-9357-299c7908fbb8 req-a0bf81be-155c-491b-9a3e-59766be74058 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received unexpected event network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 for instance with vm_state active and task_state None.
Nov 29 03:19:17 np0005539552 nova_compute[233724]: 2025-11-29 08:19:17.103 233728 DEBUG nova.compute.manager [req-9049383b-45e2-4435-9357-299c7908fbb8 req-a0bf81be-155c-491b-9a3e-59766be74058 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received event network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:19:17 np0005539552 nova_compute[233724]: 2025-11-29 08:19:17.104 233728 DEBUG oslo_concurrency.lockutils [req-9049383b-45e2-4435-9357-299c7908fbb8 req-a0bf81be-155c-491b-9a3e-59766be74058 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:19:17 np0005539552 nova_compute[233724]: 2025-11-29 08:19:17.104 233728 DEBUG oslo_concurrency.lockutils [req-9049383b-45e2-4435-9357-299c7908fbb8 req-a0bf81be-155c-491b-9a3e-59766be74058 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:19:17 np0005539552 nova_compute[233724]: 2025-11-29 08:19:17.104 233728 DEBUG oslo_concurrency.lockutils [req-9049383b-45e2-4435-9357-299c7908fbb8 req-a0bf81be-155c-491b-9a3e-59766be74058 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:19:17 np0005539552 nova_compute[233724]: 2025-11-29 08:19:17.105 233728 DEBUG nova.compute.manager [req-9049383b-45e2-4435-9357-299c7908fbb8 req-a0bf81be-155c-491b-9a3e-59766be74058 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] No waiting events found dispatching network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:19:17 np0005539552 nova_compute[233724]: 2025-11-29 08:19:17.105 233728 WARNING nova.compute.manager [req-9049383b-45e2-4435-9357-299c7908fbb8 req-a0bf81be-155c-491b-9a3e-59766be74058 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received unexpected event network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 for instance with vm_state active and task_state None.
Nov 29 03:19:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:17.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:17 np0005539552 nova_compute[233724]: 2025-11-29 08:19:17.706 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:19:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:18.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:19.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:20.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:20 np0005539552 nova_compute[233724]: 2025-11-29 08:19:20.516 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:19:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:20.625 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:19:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:20.625 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:19:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:20.626 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:19:20 np0005539552 nova_compute[233724]: 2025-11-29 08:19:20.742 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:19:21 np0005539552 nova_compute[233724]: 2025-11-29 08:19:21.312 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:19:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:21.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:22.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:23.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:24.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:24 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:19:24 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2561721018' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:19:24 np0005539552 nova_compute[233724]: 2025-11-29 08:19:24.494 233728 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Acquiring lock "e4597066-9505-4b87-8747-5fb38218a4ed" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:19:24 np0005539552 nova_compute[233724]: 2025-11-29 08:19:24.495 233728 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lock "e4597066-9505-4b87-8747-5fb38218a4ed" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:19:24 np0005539552 nova_compute[233724]: 2025-11-29 08:19:24.519 233728 DEBUG nova.compute.manager [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:19:24 np0005539552 nova_compute[233724]: 2025-11-29 08:19:24.630 233728 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:19:24 np0005539552 nova_compute[233724]: 2025-11-29 08:19:24.630 233728 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:19:24 np0005539552 nova_compute[233724]: 2025-11-29 08:19:24.638 233728 DEBUG nova.virt.hardware [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:19:24 np0005539552 nova_compute[233724]: 2025-11-29 08:19:24.638 233728 INFO nova.compute.claims [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Claim successful on node compute-2.ctlplane.example.com
Nov 29 03:19:24 np0005539552 nova_compute[233724]: 2025-11-29 08:19:24.817 233728 DEBUG oslo_concurrency.processutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:19:25 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:19:25 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/119904201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:19:25 np0005539552 nova_compute[233724]: 2025-11-29 08:19:25.281 233728 DEBUG oslo_concurrency.processutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:19:25 np0005539552 nova_compute[233724]: 2025-11-29 08:19:25.288 233728 DEBUG nova.compute.provider_tree [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:19:25 np0005539552 nova_compute[233724]: 2025-11-29 08:19:25.303 233728 DEBUG nova.scheduler.client.report [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:19:25 np0005539552 nova_compute[233724]: 2025-11-29 08:19:25.341 233728 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:19:25 np0005539552 nova_compute[233724]: 2025-11-29 08:19:25.342 233728 DEBUG nova.compute.manager [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:19:25 np0005539552 nova_compute[233724]: 2025-11-29 08:19:25.386 233728 DEBUG nova.compute.manager [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:19:25 np0005539552 nova_compute[233724]: 2025-11-29 08:19:25.387 233728 DEBUG nova.network.neutron [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:19:25 np0005539552 nova_compute[233724]: 2025-11-29 08:19:25.412 233728 INFO nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:19:25 np0005539552 nova_compute[233724]: 2025-11-29 08:19:25.453 233728 DEBUG nova.compute.manager [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:19:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:25.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:25 np0005539552 nova_compute[233724]: 2025-11-29 08:19:25.519 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:19:25 np0005539552 nova_compute[233724]: 2025-11-29 08:19:25.629 233728 DEBUG nova.compute.manager [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:19:25 np0005539552 nova_compute[233724]: 2025-11-29 08:19:25.631 233728 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:19:25 np0005539552 nova_compute[233724]: 2025-11-29 08:19:25.631 233728 INFO nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Creating image(s)
Nov 29 03:19:25 np0005539552 nova_compute[233724]: 2025-11-29 08:19:25.659 233728 DEBUG nova.storage.rbd_utils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] rbd image e4597066-9505-4b87-8747-5fb38218a4ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:19:25 np0005539552 nova_compute[233724]: 2025-11-29 08:19:25.688 233728 DEBUG nova.storage.rbd_utils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] rbd image e4597066-9505-4b87-8747-5fb38218a4ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:19:25 np0005539552 nova_compute[233724]: 2025-11-29 08:19:25.718 233728 DEBUG nova.storage.rbd_utils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] rbd image e4597066-9505-4b87-8747-5fb38218a4ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:19:25 np0005539552 nova_compute[233724]: 2025-11-29 08:19:25.723 233728 DEBUG oslo_concurrency.processutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:19:25 np0005539552 nova_compute[233724]: 2025-11-29 08:19:25.758 233728 DEBUG nova.policy [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '14d293467f8e498eaa87b6b8976b34d9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '27fd30263a7f4717b84946720a5770b5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:19:25 np0005539552 nova_compute[233724]: 2025-11-29 08:19:25.809 233728 DEBUG oslo_concurrency.processutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:19:25 np0005539552 nova_compute[233724]: 2025-11-29 08:19:25.810 233728 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:19:25 np0005539552 nova_compute[233724]: 2025-11-29 08:19:25.811 233728 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:19:25 np0005539552 nova_compute[233724]: 2025-11-29 08:19:25.811 233728 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:19:25 np0005539552 nova_compute[233724]: 2025-11-29 08:19:25.842 233728 DEBUG nova.storage.rbd_utils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] rbd image e4597066-9505-4b87-8747-5fb38218a4ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:19:25 np0005539552 nova_compute[233724]: 2025-11-29 08:19:25.846 233728 DEBUG oslo_concurrency.processutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 e4597066-9505-4b87-8747-5fb38218a4ed_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:19:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:26.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:26 np0005539552 nova_compute[233724]: 2025-11-29 08:19:26.176 233728 DEBUG oslo_concurrency.processutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 e4597066-9505-4b87-8747-5fb38218a4ed_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.330s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:26 np0005539552 nova_compute[233724]: 2025-11-29 08:19:26.247 233728 DEBUG nova.storage.rbd_utils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] resizing rbd image e4597066-9505-4b87-8747-5fb38218a4ed_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:19:26 np0005539552 nova_compute[233724]: 2025-11-29 08:19:26.331 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:26 np0005539552 nova_compute[233724]: 2025-11-29 08:19:26.400 233728 DEBUG nova.objects.instance [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lazy-loading 'migration_context' on Instance uuid e4597066-9505-4b87-8747-5fb38218a4ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:26 np0005539552 nova_compute[233724]: 2025-11-29 08:19:26.420 233728 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:19:26 np0005539552 nova_compute[233724]: 2025-11-29 08:19:26.421 233728 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Ensure instance console log exists: /var/lib/nova/instances/e4597066-9505-4b87-8747-5fb38218a4ed/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:19:26 np0005539552 nova_compute[233724]: 2025-11-29 08:19:26.422 233728 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:26 np0005539552 nova_compute[233724]: 2025-11-29 08:19:26.422 233728 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:26 np0005539552 nova_compute[233724]: 2025-11-29 08:19:26.423 233728 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:27.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:27 np0005539552 nova_compute[233724]: 2025-11-29 08:19:27.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:27 np0005539552 nova_compute[233724]: 2025-11-29 08:19:27.948 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:27 np0005539552 nova_compute[233724]: 2025-11-29 08:19:27.948 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:27 np0005539552 nova_compute[233724]: 2025-11-29 08:19:27.949 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:27 np0005539552 nova_compute[233724]: 2025-11-29 08:19:27.949 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:19:27 np0005539552 nova_compute[233724]: 2025-11-29 08:19:27.949 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:27 np0005539552 nova_compute[233724]: 2025-11-29 08:19:27.975 233728 DEBUG nova.network.neutron [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Successfully created port: bd25bee0-38fc-455c-bf9d-67420114b821 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:19:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:28.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:19:28 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1797535795' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:19:28 np0005539552 nova_compute[233724]: 2025-11-29 08:19:28.378 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:28 np0005539552 nova_compute[233724]: 2025-11-29 08:19:28.458 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:19:28 np0005539552 nova_compute[233724]: 2025-11-29 08:19:28.458 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:19:28 np0005539552 nova_compute[233724]: 2025-11-29 08:19:28.602 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:19:28 np0005539552 nova_compute[233724]: 2025-11-29 08:19:28.604 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4279MB free_disk=20.921844482421875GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:19:28 np0005539552 nova_compute[233724]: 2025-11-29 08:19:28.604 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:28 np0005539552 nova_compute[233724]: 2025-11-29 08:19:28.604 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:28 np0005539552 nova_compute[233724]: 2025-11-29 08:19:28.683 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 258dfc76-0ea9-4521-a3fc-5d64b3632451 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:19:28 np0005539552 nova_compute[233724]: 2025-11-29 08:19:28.684 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance e4597066-9505-4b87-8747-5fb38218a4ed actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:19:28 np0005539552 nova_compute[233724]: 2025-11-29 08:19:28.685 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:19:28 np0005539552 nova_compute[233724]: 2025-11-29 08:19:28.685 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:19:28 np0005539552 nova_compute[233724]: 2025-11-29 08:19:28.846 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:29 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:19:29 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4004711868' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:19:29 np0005539552 nova_compute[233724]: 2025-11-29 08:19:29.292 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:29 np0005539552 nova_compute[233724]: 2025-11-29 08:19:29.301 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:19:29 np0005539552 nova_compute[233724]: 2025-11-29 08:19:29.339 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:19:29 np0005539552 nova_compute[233724]: 2025-11-29 08:19:29.369 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:19:29 np0005539552 nova_compute[233724]: 2025-11-29 08:19:29.369 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.765s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:29.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:30 np0005539552 ovn_controller[133798]: 2025-11-29T08:19:30Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b8:49:96 10.100.0.5
Nov 29 03:19:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:30.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:30 np0005539552 nova_compute[233724]: 2025-11-29 08:19:30.141 233728 DEBUG nova.network.neutron [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Successfully updated port: bd25bee0-38fc-455c-bf9d-67420114b821 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:19:30 np0005539552 nova_compute[233724]: 2025-11-29 08:19:30.158 233728 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Acquiring lock "refresh_cache-e4597066-9505-4b87-8747-5fb38218a4ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:19:30 np0005539552 nova_compute[233724]: 2025-11-29 08:19:30.159 233728 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Acquired lock "refresh_cache-e4597066-9505-4b87-8747-5fb38218a4ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:19:30 np0005539552 nova_compute[233724]: 2025-11-29 08:19:30.159 233728 DEBUG nova.network.neutron [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:19:30 np0005539552 nova_compute[233724]: 2025-11-29 08:19:30.522 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:31 np0005539552 nova_compute[233724]: 2025-11-29 08:19:31.253 233728 DEBUG nova.compute.manager [req-866d990d-c043-4378-939d-0f267cd9edee req-f4f36a95-f68b-4c7b-a1cf-b18f07a77fe7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Received event network-changed-bd25bee0-38fc-455c-bf9d-67420114b821 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:31 np0005539552 nova_compute[233724]: 2025-11-29 08:19:31.254 233728 DEBUG nova.compute.manager [req-866d990d-c043-4378-939d-0f267cd9edee req-f4f36a95-f68b-4c7b-a1cf-b18f07a77fe7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Refreshing instance network info cache due to event network-changed-bd25bee0-38fc-455c-bf9d-67420114b821. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:19:31 np0005539552 nova_compute[233724]: 2025-11-29 08:19:31.254 233728 DEBUG oslo_concurrency.lockutils [req-866d990d-c043-4378-939d-0f267cd9edee req-f4f36a95-f68b-4c7b-a1cf-b18f07a77fe7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-e4597066-9505-4b87-8747-5fb38218a4ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:19:31 np0005539552 nova_compute[233724]: 2025-11-29 08:19:31.267 233728 DEBUG nova.network.neutron [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:19:31 np0005539552 nova_compute[233724]: 2025-11-29 08:19:31.315 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:31 np0005539552 podman[281255]: 2025-11-29 08:19:31.434488966 +0000 UTC m=+0.069705884 container exec 2e449e743ed8b622f7d948671af16b72c407124db8f8583030f534a9757aa05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:19:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:31.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:31 np0005539552 podman[281255]: 2025-11-29 08:19:31.521992337 +0000 UTC m=+0.157209235 container exec_died 2e449e743ed8b622f7d948671af16b72c407124db8f8583030f534a9757aa05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 03:19:32 np0005539552 podman[281409]: 2025-11-29 08:19:32.090327086 +0000 UTC m=+0.071445780 container exec fbc97e64df6fe35a26c07fd8827c68b935db0d4237087be449d0fd015b40e005 (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-2-efzvmt)
Nov 29 03:19:32 np0005539552 podman[281409]: 2025-11-29 08:19:32.104059705 +0000 UTC m=+0.085178399 container exec_died fbc97e64df6fe35a26c07fd8827c68b935db0d4237087be449d0fd015b40e005 (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-2-efzvmt)
Nov 29 03:19:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:32.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:32 np0005539552 podman[281476]: 2025-11-29 08:19:32.469941945 +0000 UTC m=+0.178290011 container exec 6c20d6486ea31b9565e62504d583e2d8c9512b707e8e2015e3cef66c15f3b3e6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, version=2.2.4, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, release=1793, io.openshift.expose-services=)
Nov 29 03:19:32 np0005539552 podman[281496]: 2025-11-29 08:19:32.574940436 +0000 UTC m=+0.087860211 container exec_died 6c20d6486ea31b9565e62504d583e2d8c9512b707e8e2015e3cef66c15f3b3e6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, architecture=x86_64, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, vcs-type=git, version=2.2.4, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=keepalived for Ceph, name=keepalived, io.buildah.version=1.28.2)
Nov 29 03:19:32 np0005539552 podman[281476]: 2025-11-29 08:19:32.581949525 +0000 UTC m=+0.290297561 container exec_died 6c20d6486ea31b9565e62504d583e2d8c9512b707e8e2015e3cef66c15f3b3e6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, com.redhat.component=keepalived-container, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, vcs-type=git, version=2.2.4, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9)
Nov 29 03:19:32 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:32 np0005539552 nova_compute[233724]: 2025-11-29 08:19:32.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:32 np0005539552 nova_compute[233724]: 2025-11-29 08:19:32.925 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:32 np0005539552 nova_compute[233724]: 2025-11-29 08:19:32.925 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:19:32 np0005539552 nova_compute[233724]: 2025-11-29 08:19:32.925 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:32 np0005539552 nova_compute[233724]: 2025-11-29 08:19:32.925 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 03:19:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:33.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:33 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:19:33 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:19:33 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 03:19:33 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:19:33 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:19:33 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:19:33 np0005539552 nova_compute[233724]: 2025-11-29 08:19:33.935 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:34.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:34 np0005539552 nova_compute[233724]: 2025-11-29 08:19:34.920 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:34 np0005539552 nova_compute[233724]: 2025-11-29 08:19:34.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:34 np0005539552 nova_compute[233724]: 2025-11-29 08:19:34.923 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:19:34 np0005539552 nova_compute[233724]: 2025-11-29 08:19:34.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:19:34 np0005539552 nova_compute[233724]: 2025-11-29 08:19:34.948 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 03:19:34 np0005539552 podman[281643]: 2025-11-29 08:19:34.993491414 +0000 UTC m=+0.075029857 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:19:35 np0005539552 podman[281642]: 2025-11-29 08:19:35.004056017 +0000 UTC m=+0.081753367 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 29 03:19:35 np0005539552 podman[281644]: 2025-11-29 08:19:35.080298156 +0000 UTC m=+0.151474341 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 03:19:35 np0005539552 nova_compute[233724]: 2025-11-29 08:19:35.099 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "refresh_cache-258dfc76-0ea9-4521-a3fc-5d64b3632451" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:19:35 np0005539552 nova_compute[233724]: 2025-11-29 08:19:35.099 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquired lock "refresh_cache-258dfc76-0ea9-4521-a3fc-5d64b3632451" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:19:35 np0005539552 nova_compute[233724]: 2025-11-29 08:19:35.099 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:19:35 np0005539552 nova_compute[233724]: 2025-11-29 08:19:35.100 233728 DEBUG nova.objects.instance [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 258dfc76-0ea9-4521-a3fc-5d64b3632451 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:35 np0005539552 nova_compute[233724]: 2025-11-29 08:19:35.167 233728 DEBUG nova.network.neutron [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Updating instance_info_cache with network_info: [{"id": "bd25bee0-38fc-455c-bf9d-67420114b821", "address": "fa:16:3e:15:18:d6", "network": {"id": "26782821-34df-4010-9d17-f8854e221b4e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1086271893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27fd30263a7f4717b84946720a5770b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd25bee0-38", "ovs_interfaceid": "bd25bee0-38fc-455c-bf9d-67420114b821", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:19:35 np0005539552 nova_compute[233724]: 2025-11-29 08:19:35.186 233728 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Releasing lock "refresh_cache-e4597066-9505-4b87-8747-5fb38218a4ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:19:35 np0005539552 nova_compute[233724]: 2025-11-29 08:19:35.186 233728 DEBUG nova.compute.manager [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Instance network_info: |[{"id": "bd25bee0-38fc-455c-bf9d-67420114b821", "address": "fa:16:3e:15:18:d6", "network": {"id": "26782821-34df-4010-9d17-f8854e221b4e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1086271893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27fd30263a7f4717b84946720a5770b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd25bee0-38", "ovs_interfaceid": "bd25bee0-38fc-455c-bf9d-67420114b821", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:19:35 np0005539552 nova_compute[233724]: 2025-11-29 08:19:35.187 233728 DEBUG oslo_concurrency.lockutils [req-866d990d-c043-4378-939d-0f267cd9edee req-f4f36a95-f68b-4c7b-a1cf-b18f07a77fe7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-e4597066-9505-4b87-8747-5fb38218a4ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:19:35 np0005539552 nova_compute[233724]: 2025-11-29 08:19:35.187 233728 DEBUG nova.network.neutron [req-866d990d-c043-4378-939d-0f267cd9edee req-f4f36a95-f68b-4c7b-a1cf-b18f07a77fe7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Refreshing network info cache for port bd25bee0-38fc-455c-bf9d-67420114b821 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:19:35 np0005539552 nova_compute[233724]: 2025-11-29 08:19:35.190 233728 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Start _get_guest_xml network_info=[{"id": "bd25bee0-38fc-455c-bf9d-67420114b821", "address": "fa:16:3e:15:18:d6", "network": {"id": "26782821-34df-4010-9d17-f8854e221b4e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1086271893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27fd30263a7f4717b84946720a5770b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd25bee0-38", "ovs_interfaceid": "bd25bee0-38fc-455c-bf9d-67420114b821", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:19:35 np0005539552 nova_compute[233724]: 2025-11-29 08:19:35.197 233728 WARNING nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:19:35 np0005539552 nova_compute[233724]: 2025-11-29 08:19:35.202 233728 DEBUG nova.virt.libvirt.host [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:19:35 np0005539552 nova_compute[233724]: 2025-11-29 08:19:35.203 233728 DEBUG nova.virt.libvirt.host [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:19:35 np0005539552 nova_compute[233724]: 2025-11-29 08:19:35.206 233728 DEBUG nova.virt.libvirt.host [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:19:35 np0005539552 nova_compute[233724]: 2025-11-29 08:19:35.206 233728 DEBUG nova.virt.libvirt.host [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:19:35 np0005539552 nova_compute[233724]: 2025-11-29 08:19:35.208 233728 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:19:35 np0005539552 nova_compute[233724]: 2025-11-29 08:19:35.208 233728 DEBUG nova.virt.hardware [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:19:35 np0005539552 nova_compute[233724]: 2025-11-29 08:19:35.208 233728 DEBUG nova.virt.hardware [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:19:35 np0005539552 nova_compute[233724]: 2025-11-29 08:19:35.209 233728 DEBUG nova.virt.hardware [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:19:35 np0005539552 nova_compute[233724]: 2025-11-29 08:19:35.209 233728 DEBUG nova.virt.hardware [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:19:35 np0005539552 nova_compute[233724]: 2025-11-29 08:19:35.209 233728 DEBUG nova.virt.hardware [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:19:35 np0005539552 nova_compute[233724]: 2025-11-29 08:19:35.209 233728 DEBUG nova.virt.hardware [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:19:35 np0005539552 nova_compute[233724]: 2025-11-29 08:19:35.210 233728 DEBUG nova.virt.hardware [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:19:35 np0005539552 nova_compute[233724]: 2025-11-29 08:19:35.210 233728 DEBUG nova.virt.hardware [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:19:35 np0005539552 nova_compute[233724]: 2025-11-29 08:19:35.210 233728 DEBUG nova.virt.hardware [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:19:35 np0005539552 nova_compute[233724]: 2025-11-29 08:19:35.210 233728 DEBUG nova.virt.hardware [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:19:35 np0005539552 nova_compute[233724]: 2025-11-29 08:19:35.211 233728 DEBUG nova.virt.hardware [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:19:35 np0005539552 nova_compute[233724]: 2025-11-29 08:19:35.214 233728 DEBUG oslo_concurrency.processutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:35.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:35 np0005539552 nova_compute[233724]: 2025-11-29 08:19:35.523 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:35 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:19:35 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2935078028' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:19:35 np0005539552 nova_compute[233724]: 2025-11-29 08:19:35.699 233728 DEBUG oslo_concurrency.processutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:35 np0005539552 nova_compute[233724]: 2025-11-29 08:19:35.725 233728 DEBUG nova.storage.rbd_utils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] rbd image e4597066-9505-4b87-8747-5fb38218a4ed_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:35 np0005539552 nova_compute[233724]: 2025-11-29 08:19:35.729 233728 DEBUG oslo_concurrency.processutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:36.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:36 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:19:36 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1038492915' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:19:36 np0005539552 nova_compute[233724]: 2025-11-29 08:19:36.153 233728 DEBUG oslo_concurrency.processutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:36 np0005539552 nova_compute[233724]: 2025-11-29 08:19:36.155 233728 DEBUG nova.virt.libvirt.vif [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:19:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1670228511',display_name='tempest-ListServersNegativeTestJSON-server-1670228511-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1670228511-1',id=113,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='27fd30263a7f4717b84946720a5770b5',ramdisk_id='',reservation_id='r-oe57na6j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-1508942438',owner_user_name='tempest-ListServersNegativeTestJSON-1508942438-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:19:25Z,user_data=None,user_id='14d293467f8e498eaa87b6b8976b34d9',uuid=e4597066-9505-4b87-8747-5fb38218a4ed,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bd25bee0-38fc-455c-bf9d-67420114b821", "address": "fa:16:3e:15:18:d6", "network": {"id": "26782821-34df-4010-9d17-f8854e221b4e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1086271893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27fd30263a7f4717b84946720a5770b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd25bee0-38", "ovs_interfaceid": "bd25bee0-38fc-455c-bf9d-67420114b821", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:19:36 np0005539552 nova_compute[233724]: 2025-11-29 08:19:36.156 233728 DEBUG nova.network.os_vif_util [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Converting VIF {"id": "bd25bee0-38fc-455c-bf9d-67420114b821", "address": "fa:16:3e:15:18:d6", "network": {"id": "26782821-34df-4010-9d17-f8854e221b4e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1086271893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27fd30263a7f4717b84946720a5770b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd25bee0-38", "ovs_interfaceid": "bd25bee0-38fc-455c-bf9d-67420114b821", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:19:36 np0005539552 nova_compute[233724]: 2025-11-29 08:19:36.157 233728 DEBUG nova.network.os_vif_util [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:18:d6,bridge_name='br-int',has_traffic_filtering=True,id=bd25bee0-38fc-455c-bf9d-67420114b821,network=Network(26782821-34df-4010-9d17-f8854e221b4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd25bee0-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:19:36 np0005539552 nova_compute[233724]: 2025-11-29 08:19:36.158 233728 DEBUG nova.objects.instance [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lazy-loading 'pci_devices' on Instance uuid e4597066-9505-4b87-8747-5fb38218a4ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:36 np0005539552 nova_compute[233724]: 2025-11-29 08:19:36.248 233728 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:19:36 np0005539552 nova_compute[233724]:  <uuid>e4597066-9505-4b87-8747-5fb38218a4ed</uuid>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:  <name>instance-00000071</name>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:19:36 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:      <nova:name>tempest-ListServersNegativeTestJSON-server-1670228511-1</nova:name>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:19:35</nova:creationTime>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:19:36 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:        <nova:user uuid="14d293467f8e498eaa87b6b8976b34d9">tempest-ListServersNegativeTestJSON-1508942438-project-member</nova:user>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:        <nova:project uuid="27fd30263a7f4717b84946720a5770b5">tempest-ListServersNegativeTestJSON-1508942438</nova:project>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:        <nova:port uuid="bd25bee0-38fc-455c-bf9d-67420114b821">
Nov 29 03:19:36 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:      <entry name="serial">e4597066-9505-4b87-8747-5fb38218a4ed</entry>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:      <entry name="uuid">e4597066-9505-4b87-8747-5fb38218a4ed</entry>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:19:36 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/e4597066-9505-4b87-8747-5fb38218a4ed_disk">
Nov 29 03:19:36 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:19:36 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:19:36 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/e4597066-9505-4b87-8747-5fb38218a4ed_disk.config">
Nov 29 03:19:36 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:19:36 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:19:36 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:15:18:d6"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:      <target dev="tapbd25bee0-38"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:19:36 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/e4597066-9505-4b87-8747-5fb38218a4ed/console.log" append="off"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:19:36 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:19:36 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:19:36 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:19:36 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:19:36 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:19:36 np0005539552 nova_compute[233724]: 2025-11-29 08:19:36.250 233728 DEBUG nova.compute.manager [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Preparing to wait for external event network-vif-plugged-bd25bee0-38fc-455c-bf9d-67420114b821 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:19:36 np0005539552 nova_compute[233724]: 2025-11-29 08:19:36.250 233728 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Acquiring lock "e4597066-9505-4b87-8747-5fb38218a4ed-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:36 np0005539552 nova_compute[233724]: 2025-11-29 08:19:36.251 233728 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lock "e4597066-9505-4b87-8747-5fb38218a4ed-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:36 np0005539552 nova_compute[233724]: 2025-11-29 08:19:36.251 233728 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lock "e4597066-9505-4b87-8747-5fb38218a4ed-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:36 np0005539552 nova_compute[233724]: 2025-11-29 08:19:36.252 233728 DEBUG nova.virt.libvirt.vif [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:19:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1670228511',display_name='tempest-ListServersNegativeTestJSON-server-1670228511-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1670228511-1',id=113,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='27fd30263a7f4717b84946720a5770b5',ramdisk_id='',reservation_id='r-oe57na6j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-1508942438',owner_user_name='tempest-ListServersNegativeTestJSON-1508942438-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:19:25Z,user_data=None,user_id='14d293467f8e498eaa87b6b8976b34d9',uuid=e4597066-9505-4b87-8747-5fb38218a4ed,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bd25bee0-38fc-455c-bf9d-67420114b821", "address": "fa:16:3e:15:18:d6", "network": {"id": "26782821-34df-4010-9d17-f8854e221b4e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1086271893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27fd30263a7f4717b84946720a5770b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd25bee0-38", "ovs_interfaceid": "bd25bee0-38fc-455c-bf9d-67420114b821", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:19:36 np0005539552 nova_compute[233724]: 2025-11-29 08:19:36.252 233728 DEBUG nova.network.os_vif_util [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Converting VIF {"id": "bd25bee0-38fc-455c-bf9d-67420114b821", "address": "fa:16:3e:15:18:d6", "network": {"id": "26782821-34df-4010-9d17-f8854e221b4e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1086271893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27fd30263a7f4717b84946720a5770b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd25bee0-38", "ovs_interfaceid": "bd25bee0-38fc-455c-bf9d-67420114b821", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:19:36 np0005539552 nova_compute[233724]: 2025-11-29 08:19:36.253 233728 DEBUG nova.network.os_vif_util [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:18:d6,bridge_name='br-int',has_traffic_filtering=True,id=bd25bee0-38fc-455c-bf9d-67420114b821,network=Network(26782821-34df-4010-9d17-f8854e221b4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd25bee0-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:19:36 np0005539552 nova_compute[233724]: 2025-11-29 08:19:36.253 233728 DEBUG os_vif [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:18:d6,bridge_name='br-int',has_traffic_filtering=True,id=bd25bee0-38fc-455c-bf9d-67420114b821,network=Network(26782821-34df-4010-9d17-f8854e221b4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd25bee0-38') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:19:36 np0005539552 nova_compute[233724]: 2025-11-29 08:19:36.254 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:36 np0005539552 nova_compute[233724]: 2025-11-29 08:19:36.254 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:36 np0005539552 nova_compute[233724]: 2025-11-29 08:19:36.255 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:19:36 np0005539552 nova_compute[233724]: 2025-11-29 08:19:36.258 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:36 np0005539552 nova_compute[233724]: 2025-11-29 08:19:36.259 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbd25bee0-38, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:36 np0005539552 nova_compute[233724]: 2025-11-29 08:19:36.259 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbd25bee0-38, col_values=(('external_ids', {'iface-id': 'bd25bee0-38fc-455c-bf9d-67420114b821', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:15:18:d6', 'vm-uuid': 'e4597066-9505-4b87-8747-5fb38218a4ed'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:36 np0005539552 NetworkManager[48926]: <info>  [1764404376.3092] manager: (tapbd25bee0-38): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/230)
Nov 29 03:19:36 np0005539552 nova_compute[233724]: 2025-11-29 08:19:36.312 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:36 np0005539552 nova_compute[233724]: 2025-11-29 08:19:36.314 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:19:36 np0005539552 nova_compute[233724]: 2025-11-29 08:19:36.318 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:36 np0005539552 nova_compute[233724]: 2025-11-29 08:19:36.319 233728 INFO os_vif [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:18:d6,bridge_name='br-int',has_traffic_filtering=True,id=bd25bee0-38fc-455c-bf9d-67420114b821,network=Network(26782821-34df-4010-9d17-f8854e221b4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd25bee0-38')#033[00m
Nov 29 03:19:36 np0005539552 nova_compute[233724]: 2025-11-29 08:19:36.320 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
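The ovsdbapp transaction above (AddPortCommand on br-int, then DbSetCommand writing `external_ids` on the Interface row) corresponds roughly to a single `ovs-vsctl` invocation. A sketch that builds that equivalent command line from the values in this log; the `ovs_plug_command` helper and the rendered flags are my illustration of what the os-vif OVS plugin's native OVSDB backend does, not a quote from its code:

```python
def ovs_plug_command(bridge: str, port: str, external_ids: dict) -> list:
    """Build the ovs-vsctl equivalent of AddPortCommand + DbSetCommand:
    add the port to the bridge (idempotently) and tag its Interface row
    so that ovn-controller can bind the logical port to this tap device."""
    cmd = ["ovs-vsctl", "--may-exist", "add-port", bridge, port,
           "--", "set", "Interface", port]
    cmd += [f"external_ids:{key}={value}" for key, value in external_ids.items()]
    return cmd

# Values taken from the DbSetCommand logged above.
cmd = ovs_plug_command(
    "br-int", "tapbd25bee0-38",
    {
        "iface-id": "bd25bee0-38fc-455c-bf9d-67420114b821",
        "iface-status": "active",
        "attached-mac": "fa:16:3e:15:18:d6",
        "vm-uuid": "e4597066-9505-4b87-8747-5fb38218a4ed",
    },
)
print(" ".join(cmd))
```

The `iface-id` matching the Neutron port UUID is what lets OVN claim the port; the subsequent `network-vif-plugged` event that Nova is waiting for fires once that binding goes active.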
Nov 29 03:19:36 np0005539552 nova_compute[233724]: 2025-11-29 08:19:36.391 233728 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:19:36 np0005539552 nova_compute[233724]: 2025-11-29 08:19:36.392 233728 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:19:36 np0005539552 nova_compute[233724]: 2025-11-29 08:19:36.392 233728 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] No VIF found with MAC fa:16:3e:15:18:d6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:19:36 np0005539552 nova_compute[233724]: 2025-11-29 08:19:36.393 233728 INFO nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Using config drive#033[00m
Nov 29 03:19:36 np0005539552 nova_compute[233724]: 2025-11-29 08:19:36.425 233728 DEBUG nova.storage.rbd_utils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] rbd image e4597066-9505-4b87-8747-5fb38218a4ed_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:37 np0005539552 nova_compute[233724]: 2025-11-29 08:19:37.320 233728 INFO nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Creating config drive at /var/lib/nova/instances/e4597066-9505-4b87-8747-5fb38218a4ed/disk.config#033[00m
Nov 29 03:19:37 np0005539552 nova_compute[233724]: 2025-11-29 08:19:37.325 233728 DEBUG oslo_concurrency.processutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e4597066-9505-4b87-8747-5fb38218a4ed/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4lutjvou execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:37 np0005539552 nova_compute[233724]: 2025-11-29 08:19:37.440 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Updating instance_info_cache with network_info: [{"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:19:37 np0005539552 nova_compute[233724]: 2025-11-29 08:19:37.459 233728 DEBUG oslo_concurrency.processutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e4597066-9505-4b87-8747-5fb38218a4ed/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4lutjvou" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:37 np0005539552 nova_compute[233724]: 2025-11-29 08:19:37.489 233728 DEBUG nova.storage.rbd_utils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] rbd image e4597066-9505-4b87-8747-5fb38218a4ed_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:37 np0005539552 nova_compute[233724]: 2025-11-29 08:19:37.493 233728 DEBUG oslo_concurrency.processutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e4597066-9505-4b87-8747-5fb38218a4ed/disk.config e4597066-9505-4b87-8747-5fb38218a4ed_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
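The config-drive flow logged above has two steps: build an ISO 9660 image (Joliet + Rock Ridge, volume label `config-2`) in a local staging directory with mkisofs, then upload it into the Ceph `vms` pool with `rbd import` so libvirt can attach it as the SATA cdrom defined in the domain XML. A sketch reconstructing the two argv lists from the log; this is a replay aid with the paths and Ceph identity copied from the log lines, not Nova's actual implementation:

```python
def mkisofs_cmd(iso_path: str, staging_dir: str, publisher: str) -> list:
    """Config-drive ISO build: -J (Joliet) and -r (Rock Ridge) for long
    names, -V config-2 so cloud-init can find the drive by label."""
    return ["/usr/bin/mkisofs", "-o", iso_path, "-ldots", "-allow-lowercase",
            "-allow-multidot", "-l", "-publisher", publisher,
            "-quiet", "-J", "-r", "-V", "config-2", staging_dir]

def rbd_import_cmd(iso_path: str, image_name: str,
                   pool: str = "vms", user: str = "openstack") -> list:
    """Upload the ISO into Ceph as a format-2 RBD image."""
    return ["rbd", "import", "--pool", pool, iso_path, image_name,
            "--image-format=2", "--id", user, "--conf", "/etc/ceph/ceph.conf"]

inst = "e4597066-9505-4b87-8747-5fb38218a4ed"
iso = mkisofs_cmd(f"/var/lib/nova/instances/{inst}/disk.config",
                  "/tmp/tmp4lutjvou",
                  "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9")
rbd = rbd_import_cmd(f"/var/lib/nova/instances/{inst}/disk.config",
                     f"{inst}_disk.config")
```

Passing the multi-word publisher string as one argv element (as oslo's processutils does) avoids the quoting ambiguity visible in the flattened log line.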
Nov 29 03:19:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:37.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:37 np0005539552 nova_compute[233724]: 2025-11-29 08:19:37.527 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Releasing lock "refresh_cache-258dfc76-0ea9-4521-a3fc-5d64b3632451" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:19:37 np0005539552 nova_compute[233724]: 2025-11-29 08:19:37.528 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:19:37 np0005539552 nova_compute[233724]: 2025-11-29 08:19:37.528 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:37 np0005539552 nova_compute[233724]: 2025-11-29 08:19:37.529 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:37 np0005539552 nova_compute[233724]: 2025-11-29 08:19:37.529 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:37 np0005539552 nova_compute[233724]: 2025-11-29 08:19:37.691 233728 DEBUG oslo_concurrency.processutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e4597066-9505-4b87-8747-5fb38218a4ed/disk.config e4597066-9505-4b87-8747-5fb38218a4ed_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:37 np0005539552 nova_compute[233724]: 2025-11-29 08:19:37.691 233728 INFO nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Deleting local config drive /var/lib/nova/instances/e4597066-9505-4b87-8747-5fb38218a4ed/disk.config because it was imported into RBD.#033[00m
Nov 29 03:19:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:37 np0005539552 kernel: tapbd25bee0-38: entered promiscuous mode
Nov 29 03:19:37 np0005539552 NetworkManager[48926]: <info>  [1764404377.7449] manager: (tapbd25bee0-38): new Tun device (/org/freedesktop/NetworkManager/Devices/231)
Nov 29 03:19:37 np0005539552 ovn_controller[133798]: 2025-11-29T08:19:37Z|00492|binding|INFO|Claiming lport bd25bee0-38fc-455c-bf9d-67420114b821 for this chassis.
Nov 29 03:19:37 np0005539552 ovn_controller[133798]: 2025-11-29T08:19:37Z|00493|binding|INFO|bd25bee0-38fc-455c-bf9d-67420114b821: Claiming fa:16:3e:15:18:d6 10.100.0.13
Nov 29 03:19:37 np0005539552 nova_compute[233724]: 2025-11-29 08:19:37.746 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:37.756 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:18:d6 10.100.0.13'], port_security=['fa:16:3e:15:18:d6 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e4597066-9505-4b87-8747-5fb38218a4ed', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-26782821-34df-4010-9d17-f8854e221b4e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27fd30263a7f4717b84946720a5770b5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '45467816-71fb-46ea-84fa-c25f49ea2a6e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c8dbca6-8e74-4fae-981d-344fddfca3c7, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=bd25bee0-38fc-455c-bf9d-67420114b821) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:19:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:37.757 143400 INFO neutron.agent.ovn.metadata.agent [-] Port bd25bee0-38fc-455c-bf9d-67420114b821 in datapath 26782821-34df-4010-9d17-f8854e221b4e bound to our chassis#033[00m
Nov 29 03:19:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:37.758 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 26782821-34df-4010-9d17-f8854e221b4e#033[00m
Nov 29 03:19:37 np0005539552 ovn_controller[133798]: 2025-11-29T08:19:37Z|00494|binding|INFO|Setting lport bd25bee0-38fc-455c-bf9d-67420114b821 ovn-installed in OVS
Nov 29 03:19:37 np0005539552 ovn_controller[133798]: 2025-11-29T08:19:37Z|00495|binding|INFO|Setting lport bd25bee0-38fc-455c-bf9d-67420114b821 up in Southbound
Nov 29 03:19:37 np0005539552 systemd-udevd[281895]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:19:37 np0005539552 nova_compute[233724]: 2025-11-29 08:19:37.775 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:37.776 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[18159efc-0b25-41ce-9d30-bb4fdc7606e2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:37.778 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap26782821-31 in ovnmeta-26782821-34df-4010-9d17-f8854e221b4e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:19:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:37.780 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap26782821-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:19:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:37.780 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b98ee3b7-369b-4432-bfdb-53d966985ce4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:37.783 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[867c87a0-87e8-4d8d-8160-8fa5a849058b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:37 np0005539552 systemd-machined[196379]: New machine qemu-48-instance-00000071.
Nov 29 03:19:37 np0005539552 nova_compute[233724]: 2025-11-29 08:19:37.788 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:37 np0005539552 NetworkManager[48926]: <info>  [1764404377.7909] device (tapbd25bee0-38): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:19:37 np0005539552 NetworkManager[48926]: <info>  [1764404377.7918] device (tapbd25bee0-38): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:19:37 np0005539552 systemd[1]: Started Virtual Machine qemu-48-instance-00000071.
Nov 29 03:19:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:37.796 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[179ac8eb-f097-40c6-985b-6abed42b5196]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:37.809 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[8d16da25-7521-4112-a74d-7662590f0202]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:37.842 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[f32d032d-c375-413c-abf5-6f5f6b04e508]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:37.847 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[382adf0a-9d59-492e-a132-349bd074696e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:37 np0005539552 NetworkManager[48926]: <info>  [1764404377.8487] manager: (tap26782821-30): new Veth device (/org/freedesktop/NetworkManager/Devices/232)
Nov 29 03:19:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:37.877 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[6d8b1227-8576-49aa-b8f9-89ca1c916e6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:37.881 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[321614b3-9484-4020-9fa8-253beabd7054]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:37 np0005539552 NetworkManager[48926]: <info>  [1764404377.9030] device (tap26782821-30): carrier: link connected
Nov 29 03:19:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:37.909 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[c7bcd47e-9572-4de1-99c9-f0171ba8d611]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:37.924 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[deac2a9f-60b6-4fdb-87dd-25569436a989]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap26782821-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:13:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 742359, 'reachable_time': 39989, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281928, 'error': None, 'target': 'ovnmeta-26782821-34df-4010-9d17-f8854e221b4e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:37.939 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[eda1d2dd-569e-4f6f-8deb-6155f58891b8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8c:13dd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 742359, 'tstamp': 742359}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281929, 'error': None, 'target': 'ovnmeta-26782821-34df-4010-9d17-f8854e221b4e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:37.953 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[32f7521a-5df5-480a-aa62-99a6a3345ce2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap26782821-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:13:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 742359, 'reachable_time': 39989, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 281930, 'error': None, 'target': 'ovnmeta-26782821-34df-4010-9d17-f8854e221b4e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:37.982 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f2405c0d-8d98-4d04-b8be-2e445724a17b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:38.032 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[aebacc50-1dce-4fbd-b45a-1d3354661c23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:38.033 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26782821-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:38.033 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:19:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:38.034 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap26782821-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:38 np0005539552 NetworkManager[48926]: <info>  [1764404378.0363] manager: (tap26782821-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/233)
Nov 29 03:19:38 np0005539552 kernel: tap26782821-30: entered promiscuous mode
Nov 29 03:19:38 np0005539552 nova_compute[233724]: 2025-11-29 08:19:38.036 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:38 np0005539552 nova_compute[233724]: 2025-11-29 08:19:38.038 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:38.039 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap26782821-30, col_values=(('external_ids', {'iface-id': 'a221b966-8231-4ead-a0ed-2978f32e8746'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:38 np0005539552 nova_compute[233724]: 2025-11-29 08:19:38.040 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:38 np0005539552 ovn_controller[133798]: 2025-11-29T08:19:38Z|00496|binding|INFO|Releasing lport a221b966-8231-4ead-a0ed-2978f32e8746 from this chassis (sb_readonly=0)
Nov 29 03:19:38 np0005539552 nova_compute[233724]: 2025-11-29 08:19:38.054 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:38 np0005539552 nova_compute[233724]: 2025-11-29 08:19:38.057 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:38.058 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/26782821-34df-4010-9d17-f8854e221b4e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/26782821-34df-4010-9d17-f8854e221b4e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:19:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:38.058 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0d9e5edc-b455-412b-8fb3-d978baa4b382]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:38.059 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:19:38 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:19:38 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:19:38 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-26782821-34df-4010-9d17-f8854e221b4e
Nov 29 03:19:38 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:19:38 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:19:38 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:19:38 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/26782821-34df-4010-9d17-f8854e221b4e.pid.haproxy
Nov 29 03:19:38 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:19:38 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:19:38 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:19:38 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:19:38 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:19:38 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:19:38 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:19:38 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:19:38 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:19:38 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:19:38 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:19:38 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:19:38 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:19:38 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:19:38 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:19:38 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:19:38 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:19:38 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:19:38 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:19:38 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:19:38 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 26782821-34df-4010-9d17-f8854e221b4e
Nov 29 03:19:38 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:19:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:38.060 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-26782821-34df-4010-9d17-f8854e221b4e', 'env', 'PROCESS_TAG=haproxy-26782821-34df-4010-9d17-f8854e221b4e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/26782821-34df-4010-9d17-f8854e221b4e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:19:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:38.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:38 np0005539552 nova_compute[233724]: 2025-11-29 08:19:38.253 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404378.2527497, e4597066-9505-4b87-8747-5fb38218a4ed => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:19:38 np0005539552 nova_compute[233724]: 2025-11-29 08:19:38.253 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] VM Started (Lifecycle Event)
Nov 29 03:19:38 np0005539552 nova_compute[233724]: 2025-11-29 08:19:38.281 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:19:38 np0005539552 nova_compute[233724]: 2025-11-29 08:19:38.284 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404378.2529943, e4597066-9505-4b87-8747-5fb38218a4ed => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:19:38 np0005539552 nova_compute[233724]: 2025-11-29 08:19:38.284 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] VM Paused (Lifecycle Event)
Nov 29 03:19:38 np0005539552 nova_compute[233724]: 2025-11-29 08:19:38.307 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:19:38 np0005539552 nova_compute[233724]: 2025-11-29 08:19:38.311 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:19:38 np0005539552 nova_compute[233724]: 2025-11-29 08:19:38.337 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:19:38 np0005539552 podman[282004]: 2025-11-29 08:19:38.410303292 +0000 UTC m=+0.043343706 container create 26ab40f8802fc20f2af20515d4f99b85576758be2c2e34061e6fa6c3a7ce41b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26782821-34df-4010-9d17-f8854e221b4e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:19:38 np0005539552 systemd[1]: Started libpod-conmon-26ab40f8802fc20f2af20515d4f99b85576758be2c2e34061e6fa6c3a7ce41b2.scope.
Nov 29 03:19:38 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:19:38 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a363ed740b5e9574bf92cefb78ede021df82ce44e7cf311aa1125f4ae3141c6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:19:38 np0005539552 podman[282004]: 2025-11-29 08:19:38.477690512 +0000 UTC m=+0.110730956 container init 26ab40f8802fc20f2af20515d4f99b85576758be2c2e34061e6fa6c3a7ce41b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26782821-34df-4010-9d17-f8854e221b4e, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:19:38 np0005539552 podman[282004]: 2025-11-29 08:19:38.483389976 +0000 UTC m=+0.116430390 container start 26ab40f8802fc20f2af20515d4f99b85576758be2c2e34061e6fa6c3a7ce41b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26782821-34df-4010-9d17-f8854e221b4e, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 03:19:38 np0005539552 podman[282004]: 2025-11-29 08:19:38.389295408 +0000 UTC m=+0.022335842 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:19:38 np0005539552 neutron-haproxy-ovnmeta-26782821-34df-4010-9d17-f8854e221b4e[282020]: [NOTICE]   (282024) : New worker (282026) forked
Nov 29 03:19:38 np0005539552 neutron-haproxy-ovnmeta-26782821-34df-4010-9d17-f8854e221b4e[282020]: [NOTICE]   (282024) : Loading success.
Nov 29 03:19:38 np0005539552 nova_compute[233724]: 2025-11-29 08:19:38.611 233728 DEBUG nova.compute.manager [req-20318501-cf7d-4b73-8d3c-4019251a7dbe req-fe5cc763-28e6-4341-91d6-c88af1905017 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Received event network-vif-plugged-bd25bee0-38fc-455c-bf9d-67420114b821 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:19:38 np0005539552 nova_compute[233724]: 2025-11-29 08:19:38.612 233728 DEBUG oslo_concurrency.lockutils [req-20318501-cf7d-4b73-8d3c-4019251a7dbe req-fe5cc763-28e6-4341-91d6-c88af1905017 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "e4597066-9505-4b87-8747-5fb38218a4ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:19:38 np0005539552 nova_compute[233724]: 2025-11-29 08:19:38.612 233728 DEBUG oslo_concurrency.lockutils [req-20318501-cf7d-4b73-8d3c-4019251a7dbe req-fe5cc763-28e6-4341-91d6-c88af1905017 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e4597066-9505-4b87-8747-5fb38218a4ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:19:38 np0005539552 nova_compute[233724]: 2025-11-29 08:19:38.612 233728 DEBUG oslo_concurrency.lockutils [req-20318501-cf7d-4b73-8d3c-4019251a7dbe req-fe5cc763-28e6-4341-91d6-c88af1905017 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e4597066-9505-4b87-8747-5fb38218a4ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:19:38 np0005539552 nova_compute[233724]: 2025-11-29 08:19:38.612 233728 DEBUG nova.compute.manager [req-20318501-cf7d-4b73-8d3c-4019251a7dbe req-fe5cc763-28e6-4341-91d6-c88af1905017 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Processing event network-vif-plugged-bd25bee0-38fc-455c-bf9d-67420114b821 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 03:19:38 np0005539552 nova_compute[233724]: 2025-11-29 08:19:38.613 233728 DEBUG nova.compute.manager [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:19:38 np0005539552 nova_compute[233724]: 2025-11-29 08:19:38.617 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404378.6170382, e4597066-9505-4b87-8747-5fb38218a4ed => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:19:38 np0005539552 nova_compute[233724]: 2025-11-29 08:19:38.617 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] VM Resumed (Lifecycle Event)
Nov 29 03:19:38 np0005539552 nova_compute[233724]: 2025-11-29 08:19:38.618 233728 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:19:38 np0005539552 nova_compute[233724]: 2025-11-29 08:19:38.622 233728 INFO nova.virt.libvirt.driver [-] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Instance spawned successfully.
Nov 29 03:19:38 np0005539552 nova_compute[233724]: 2025-11-29 08:19:38.622 233728 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:19:38 np0005539552 nova_compute[233724]: 2025-11-29 08:19:38.635 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:19:38 np0005539552 nova_compute[233724]: 2025-11-29 08:19:38.641 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:19:38 np0005539552 nova_compute[233724]: 2025-11-29 08:19:38.646 233728 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:19:38 np0005539552 nova_compute[233724]: 2025-11-29 08:19:38.646 233728 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:19:38 np0005539552 nova_compute[233724]: 2025-11-29 08:19:38.647 233728 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:19:38 np0005539552 nova_compute[233724]: 2025-11-29 08:19:38.647 233728 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:19:38 np0005539552 nova_compute[233724]: 2025-11-29 08:19:38.648 233728 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:19:38 np0005539552 nova_compute[233724]: 2025-11-29 08:19:38.648 233728 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:19:38 np0005539552 nova_compute[233724]: 2025-11-29 08:19:38.670 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:19:38 np0005539552 nova_compute[233724]: 2025-11-29 08:19:38.700 233728 INFO nova.compute.manager [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Took 13.07 seconds to spawn the instance on the hypervisor.
Nov 29 03:19:38 np0005539552 nova_compute[233724]: 2025-11-29 08:19:38.701 233728 DEBUG nova.compute.manager [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:19:38 np0005539552 nova_compute[233724]: 2025-11-29 08:19:38.725 233728 DEBUG nova.network.neutron [req-866d990d-c043-4378-939d-0f267cd9edee req-f4f36a95-f68b-4c7b-a1cf-b18f07a77fe7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Updated VIF entry in instance network info cache for port bd25bee0-38fc-455c-bf9d-67420114b821. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:19:38 np0005539552 nova_compute[233724]: 2025-11-29 08:19:38.726 233728 DEBUG nova.network.neutron [req-866d990d-c043-4378-939d-0f267cd9edee req-f4f36a95-f68b-4c7b-a1cf-b18f07a77fe7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Updating instance_info_cache with network_info: [{"id": "bd25bee0-38fc-455c-bf9d-67420114b821", "address": "fa:16:3e:15:18:d6", "network": {"id": "26782821-34df-4010-9d17-f8854e221b4e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1086271893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27fd30263a7f4717b84946720a5770b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd25bee0-38", "ovs_interfaceid": "bd25bee0-38fc-455c-bf9d-67420114b821", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:19:38 np0005539552 nova_compute[233724]: 2025-11-29 08:19:38.745 233728 DEBUG oslo_concurrency.lockutils [req-866d990d-c043-4378-939d-0f267cd9edee req-f4f36a95-f68b-4c7b-a1cf-b18f07a77fe7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-e4597066-9505-4b87-8747-5fb38218a4ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:19:38 np0005539552 nova_compute[233724]: 2025-11-29 08:19:38.796 233728 INFO nova.compute.manager [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Took 14.19 seconds to build instance.
Nov 29 03:19:38 np0005539552 nova_compute[233724]: 2025-11-29 08:19:38.818 233728 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lock "e4597066-9505-4b87-8747-5fb38218a4ed" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.323s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:19:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:19:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1104703845' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:19:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:19:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1104703845' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:19:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:39.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:40.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:40 np0005539552 nova_compute[233724]: 2025-11-29 08:19:40.978 233728 DEBUG nova.compute.manager [req-6a31ff33-7209-4e8d-b480-35f87ed6280d req-0b5dc0ed-f5fc-451f-8307-e5cb8fc1916a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Received event network-vif-plugged-bd25bee0-38fc-455c-bf9d-67420114b821 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:19:40 np0005539552 nova_compute[233724]: 2025-11-29 08:19:40.979 233728 DEBUG oslo_concurrency.lockutils [req-6a31ff33-7209-4e8d-b480-35f87ed6280d req-0b5dc0ed-f5fc-451f-8307-e5cb8fc1916a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "e4597066-9505-4b87-8747-5fb38218a4ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:19:40 np0005539552 nova_compute[233724]: 2025-11-29 08:19:40.979 233728 DEBUG oslo_concurrency.lockutils [req-6a31ff33-7209-4e8d-b480-35f87ed6280d req-0b5dc0ed-f5fc-451f-8307-e5cb8fc1916a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e4597066-9505-4b87-8747-5fb38218a4ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:19:40 np0005539552 nova_compute[233724]: 2025-11-29 08:19:40.979 233728 DEBUG oslo_concurrency.lockutils [req-6a31ff33-7209-4e8d-b480-35f87ed6280d req-0b5dc0ed-f5fc-451f-8307-e5cb8fc1916a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e4597066-9505-4b87-8747-5fb38218a4ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:19:40 np0005539552 nova_compute[233724]: 2025-11-29 08:19:40.979 233728 DEBUG nova.compute.manager [req-6a31ff33-7209-4e8d-b480-35f87ed6280d req-0b5dc0ed-f5fc-451f-8307-e5cb8fc1916a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] No waiting events found dispatching network-vif-plugged-bd25bee0-38fc-455c-bf9d-67420114b821 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:19:40 np0005539552 nova_compute[233724]: 2025-11-29 08:19:40.979 233728 WARNING nova.compute.manager [req-6a31ff33-7209-4e8d-b480-35f87ed6280d req-0b5dc0ed-f5fc-451f-8307-e5cb8fc1916a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Received unexpected event network-vif-plugged-bd25bee0-38fc-455c-bf9d-67420114b821 for instance with vm_state active and task_state None.
Nov 29 03:19:41 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:19:41 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:19:41 np0005539552 nova_compute[233724]: 2025-11-29 08:19:41.309 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:19:41 np0005539552 nova_compute[233724]: 2025-11-29 08:19:41.323 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:19:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:41 np0005539552 nova_compute[233724]: 2025-11-29 08:19:41.529 233728 DEBUG oslo_concurrency.lockutils [None req-10224294-d0fd-4b4a-8fff-f7c2971bf5f3 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Acquiring lock "e4597066-9505-4b87-8747-5fb38218a4ed" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:19:41 np0005539552 nova_compute[233724]: 2025-11-29 08:19:41.529 233728 DEBUG oslo_concurrency.lockutils [None req-10224294-d0fd-4b4a-8fff-f7c2971bf5f3 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lock "e4597066-9505-4b87-8747-5fb38218a4ed" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:19:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:41.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:41 np0005539552 nova_compute[233724]: 2025-11-29 08:19:41.529 233728 DEBUG oslo_concurrency.lockutils [None req-10224294-d0fd-4b4a-8fff-f7c2971bf5f3 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Acquiring lock "e4597066-9505-4b87-8747-5fb38218a4ed-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:19:41 np0005539552 nova_compute[233724]: 2025-11-29 08:19:41.530 233728 DEBUG oslo_concurrency.lockutils [None req-10224294-d0fd-4b4a-8fff-f7c2971bf5f3 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lock "e4597066-9505-4b87-8747-5fb38218a4ed-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:19:41 np0005539552 nova_compute[233724]: 2025-11-29 08:19:41.530 233728 DEBUG oslo_concurrency.lockutils [None req-10224294-d0fd-4b4a-8fff-f7c2971bf5f3 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lock "e4597066-9505-4b87-8747-5fb38218a4ed-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:19:41 np0005539552 nova_compute[233724]: 2025-11-29 08:19:41.531 233728 INFO nova.compute.manager [None req-10224294-d0fd-4b4a-8fff-f7c2971bf5f3 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Terminating instance
Nov 29 03:19:41 np0005539552 nova_compute[233724]: 2025-11-29 08:19:41.531 233728 DEBUG nova.compute.manager [None req-10224294-d0fd-4b4a-8fff-f7c2971bf5f3 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 03:19:41 np0005539552 kernel: tapbd25bee0-38 (unregistering): left promiscuous mode
Nov 29 03:19:41 np0005539552 NetworkManager[48926]: <info>  [1764404381.7097] device (tapbd25bee0-38): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:19:41 np0005539552 nova_compute[233724]: 2025-11-29 08:19:41.715 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:19:41 np0005539552 ovn_controller[133798]: 2025-11-29T08:19:41Z|00497|binding|INFO|Releasing lport bd25bee0-38fc-455c-bf9d-67420114b821 from this chassis (sb_readonly=0)
Nov 29 03:19:41 np0005539552 ovn_controller[133798]: 2025-11-29T08:19:41Z|00498|binding|INFO|Setting lport bd25bee0-38fc-455c-bf9d-67420114b821 down in Southbound
Nov 29 03:19:41 np0005539552 ovn_controller[133798]: 2025-11-29T08:19:41Z|00499|binding|INFO|Removing iface tapbd25bee0-38 ovn-installed in OVS
Nov 29 03:19:41 np0005539552 nova_compute[233724]: 2025-11-29 08:19:41.719 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:19:41 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:41.735 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:18:d6 10.100.0.13'], port_security=['fa:16:3e:15:18:d6 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e4597066-9505-4b87-8747-5fb38218a4ed', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-26782821-34df-4010-9d17-f8854e221b4e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27fd30263a7f4717b84946720a5770b5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '45467816-71fb-46ea-84fa-c25f49ea2a6e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c8dbca6-8e74-4fae-981d-344fddfca3c7, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=bd25bee0-38fc-455c-bf9d-67420114b821) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:19:41 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:41.737 143400 INFO neutron.agent.ovn.metadata.agent [-] Port bd25bee0-38fc-455c-bf9d-67420114b821 in datapath 26782821-34df-4010-9d17-f8854e221b4e unbound from our chassis
Nov 29 03:19:41 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:41.739 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 26782821-34df-4010-9d17-f8854e221b4e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 03:19:41 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:41.756 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[446be641-038a-49c1-b44d-cee8c6ce02a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:19:41 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:41.757 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-26782821-34df-4010-9d17-f8854e221b4e namespace which is not needed anymore
Nov 29 03:19:41 np0005539552 nova_compute[233724]: 2025-11-29 08:19:41.769 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:19:41 np0005539552 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d00000071.scope: Deactivated successfully.
Nov 29 03:19:41 np0005539552 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d00000071.scope: Consumed 3.499s CPU time.
Nov 29 03:19:41 np0005539552 systemd-machined[196379]: Machine qemu-48-instance-00000071 terminated.
Nov 29 03:19:41 np0005539552 neutron-haproxy-ovnmeta-26782821-34df-4010-9d17-f8854e221b4e[282020]: [NOTICE]   (282024) : haproxy version is 2.8.14-c23fe91
Nov 29 03:19:41 np0005539552 neutron-haproxy-ovnmeta-26782821-34df-4010-9d17-f8854e221b4e[282020]: [NOTICE]   (282024) : path to executable is /usr/sbin/haproxy
Nov 29 03:19:41 np0005539552 neutron-haproxy-ovnmeta-26782821-34df-4010-9d17-f8854e221b4e[282020]: [WARNING]  (282024) : Exiting Master process...
Nov 29 03:19:41 np0005539552 neutron-haproxy-ovnmeta-26782821-34df-4010-9d17-f8854e221b4e[282020]: [WARNING]  (282024) : Exiting Master process...
Nov 29 03:19:41 np0005539552 neutron-haproxy-ovnmeta-26782821-34df-4010-9d17-f8854e221b4e[282020]: [ALERT]    (282024) : Current worker (282026) exited with code 143 (Terminated)
Nov 29 03:19:41 np0005539552 neutron-haproxy-ovnmeta-26782821-34df-4010-9d17-f8854e221b4e[282020]: [WARNING]  (282024) : All workers exited. Exiting... (0)
Nov 29 03:19:41 np0005539552 systemd[1]: libpod-26ab40f8802fc20f2af20515d4f99b85576758be2c2e34061e6fa6c3a7ce41b2.scope: Deactivated successfully.
Nov 29 03:19:41 np0005539552 podman[282111]: 2025-11-29 08:19:41.915829363 +0000 UTC m=+0.043780967 container died 26ab40f8802fc20f2af20515d4f99b85576758be2c2e34061e6fa6c3a7ce41b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26782821-34df-4010-9d17-f8854e221b4e, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:19:41 np0005539552 nova_compute[233724]: 2025-11-29 08:19:41.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:19:41 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-26ab40f8802fc20f2af20515d4f99b85576758be2c2e34061e6fa6c3a7ce41b2-userdata-shm.mount: Deactivated successfully.
Nov 29 03:19:41 np0005539552 systemd[1]: var-lib-containers-storage-overlay-5a363ed740b5e9574bf92cefb78ede021df82ce44e7cf311aa1125f4ae3141c6-merged.mount: Deactivated successfully.
Nov 29 03:19:41 np0005539552 nova_compute[233724]: 2025-11-29 08:19:41.971 233728 INFO nova.virt.libvirt.driver [-] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Instance destroyed successfully.
Nov 29 03:19:41 np0005539552 nova_compute[233724]: 2025-11-29 08:19:41.971 233728 DEBUG nova.objects.instance [None req-10224294-d0fd-4b4a-8fff-f7c2971bf5f3 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lazy-loading 'resources' on Instance uuid e4597066-9505-4b87-8747-5fb38218a4ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:19:41 np0005539552 podman[282111]: 2025-11-29 08:19:41.972098445 +0000 UTC m=+0.100050059 container cleanup 26ab40f8802fc20f2af20515d4f99b85576758be2c2e34061e6fa6c3a7ce41b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26782821-34df-4010-9d17-f8854e221b4e, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:19:41 np0005539552 systemd[1]: libpod-conmon-26ab40f8802fc20f2af20515d4f99b85576758be2c2e34061e6fa6c3a7ce41b2.scope: Deactivated successfully.
Nov 29 03:19:41 np0005539552 nova_compute[233724]: 2025-11-29 08:19:41.987 233728 DEBUG nova.virt.libvirt.vif [None req-10224294-d0fd-4b4a-8fff-f7c2971bf5f3 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:19:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1670228511',display_name='tempest-ListServersNegativeTestJSON-server-1670228511-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1670228511-1',id=113,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:19:38Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='27fd30263a7f4717b84946720a5770b5',ramdisk_id='',reservation_id='r-oe57na6j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vi
f_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-1508942438',owner_user_name='tempest-ListServersNegativeTestJSON-1508942438-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:19:38Z,user_data=None,user_id='14d293467f8e498eaa87b6b8976b34d9',uuid=e4597066-9505-4b87-8747-5fb38218a4ed,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bd25bee0-38fc-455c-bf9d-67420114b821", "address": "fa:16:3e:15:18:d6", "network": {"id": "26782821-34df-4010-9d17-f8854e221b4e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1086271893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27fd30263a7f4717b84946720a5770b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd25bee0-38", "ovs_interfaceid": "bd25bee0-38fc-455c-bf9d-67420114b821", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:19:41 np0005539552 nova_compute[233724]: 2025-11-29 08:19:41.987 233728 DEBUG nova.network.os_vif_util [None req-10224294-d0fd-4b4a-8fff-f7c2971bf5f3 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Converting VIF {"id": "bd25bee0-38fc-455c-bf9d-67420114b821", "address": "fa:16:3e:15:18:d6", "network": {"id": "26782821-34df-4010-9d17-f8854e221b4e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1086271893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27fd30263a7f4717b84946720a5770b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd25bee0-38", "ovs_interfaceid": "bd25bee0-38fc-455c-bf9d-67420114b821", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:19:41 np0005539552 nova_compute[233724]: 2025-11-29 08:19:41.988 233728 DEBUG nova.network.os_vif_util [None req-10224294-d0fd-4b4a-8fff-f7c2971bf5f3 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:18:d6,bridge_name='br-int',has_traffic_filtering=True,id=bd25bee0-38fc-455c-bf9d-67420114b821,network=Network(26782821-34df-4010-9d17-f8854e221b4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd25bee0-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:19:41 np0005539552 nova_compute[233724]: 2025-11-29 08:19:41.989 233728 DEBUG os_vif [None req-10224294-d0fd-4b4a-8fff-f7c2971bf5f3 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:18:d6,bridge_name='br-int',has_traffic_filtering=True,id=bd25bee0-38fc-455c-bf9d-67420114b821,network=Network(26782821-34df-4010-9d17-f8854e221b4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd25bee0-38') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:19:41 np0005539552 nova_compute[233724]: 2025-11-29 08:19:41.990 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:41 np0005539552 nova_compute[233724]: 2025-11-29 08:19:41.990 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbd25bee0-38, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:41 np0005539552 nova_compute[233724]: 2025-11-29 08:19:41.992 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:41 np0005539552 nova_compute[233724]: 2025-11-29 08:19:41.994 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:41 np0005539552 nova_compute[233724]: 2025-11-29 08:19:41.997 233728 INFO os_vif [None req-10224294-d0fd-4b4a-8fff-f7c2971bf5f3 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:18:d6,bridge_name='br-int',has_traffic_filtering=True,id=bd25bee0-38fc-455c-bf9d-67420114b821,network=Network(26782821-34df-4010-9d17-f8854e221b4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd25bee0-38')#033[00m
Nov 29 03:19:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:42.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:42 np0005539552 podman[282153]: 2025-11-29 08:19:42.186357571 +0000 UTC m=+0.187537629 container remove 26ab40f8802fc20f2af20515d4f99b85576758be2c2e34061e6fa6c3a7ce41b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26782821-34df-4010-9d17-f8854e221b4e, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:19:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:42.194 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3e2a2901-9278-4cd4-8a93-4d83997634b5]: (4, ('Sat Nov 29 08:19:41 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-26782821-34df-4010-9d17-f8854e221b4e (26ab40f8802fc20f2af20515d4f99b85576758be2c2e34061e6fa6c3a7ce41b2)\n26ab40f8802fc20f2af20515d4f99b85576758be2c2e34061e6fa6c3a7ce41b2\nSat Nov 29 08:19:41 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-26782821-34df-4010-9d17-f8854e221b4e (26ab40f8802fc20f2af20515d4f99b85576758be2c2e34061e6fa6c3a7ce41b2)\n26ab40f8802fc20f2af20515d4f99b85576758be2c2e34061e6fa6c3a7ce41b2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:42.198 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[adad1f75-8c1d-4aca-a5af-af88aeba5abc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:42.199 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26782821-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:42 np0005539552 nova_compute[233724]: 2025-11-29 08:19:42.202 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:42 np0005539552 kernel: tap26782821-30: left promiscuous mode
Nov 29 03:19:42 np0005539552 nova_compute[233724]: 2025-11-29 08:19:42.215 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:42.219 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[1320d2d3-3cc9-4d28-8346-254756683446]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:42.237 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d916dfa7-9c3a-4a54-90d2-a9d72efa9305]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:42.238 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[fe6c4d36-687e-47b3-bda1-526aa05722b9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:42.252 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d3fc0593-cfb2-4a2e-82fd-7effd42cff64]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 742352, 'reachable_time': 34316, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282186, 'error': None, 'target': 'ovnmeta-26782821-34df-4010-9d17-f8854e221b4e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:42.255 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-26782821-34df-4010-9d17-f8854e221b4e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:19:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:42.255 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[b0c984fa-9f41-4a8f-b215-7475ff6fb4ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:42 np0005539552 systemd[1]: run-netns-ovnmeta\x2d26782821\x2d34df\x2d4010\x2d9d17\x2df8854e221b4e.mount: Deactivated successfully.
Nov 29 03:19:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:42 np0005539552 nova_compute[233724]: 2025-11-29 08:19:42.867 233728 INFO nova.virt.libvirt.driver [None req-10224294-d0fd-4b4a-8fff-f7c2971bf5f3 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Deleting instance files /var/lib/nova/instances/e4597066-9505-4b87-8747-5fb38218a4ed_del#033[00m
Nov 29 03:19:42 np0005539552 nova_compute[233724]: 2025-11-29 08:19:42.868 233728 INFO nova.virt.libvirt.driver [None req-10224294-d0fd-4b4a-8fff-f7c2971bf5f3 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Deletion of /var/lib/nova/instances/e4597066-9505-4b87-8747-5fb38218a4ed_del complete#033[00m
Nov 29 03:19:42 np0005539552 nova_compute[233724]: 2025-11-29 08:19:42.915 233728 INFO nova.compute.manager [None req-10224294-d0fd-4b4a-8fff-f7c2971bf5f3 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Took 1.38 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:19:42 np0005539552 nova_compute[233724]: 2025-11-29 08:19:42.916 233728 DEBUG oslo.service.loopingcall [None req-10224294-d0fd-4b4a-8fff-f7c2971bf5f3 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:19:42 np0005539552 nova_compute[233724]: 2025-11-29 08:19:42.916 233728 DEBUG nova.compute.manager [-] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:19:42 np0005539552 nova_compute[233724]: 2025-11-29 08:19:42.916 233728 DEBUG nova.network.neutron [-] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:19:43 np0005539552 nova_compute[233724]: 2025-11-29 08:19:43.057 233728 DEBUG nova.compute.manager [req-208234f0-cde0-4c95-94a2-ed7791e7013e req-6ea6f412-02d4-44fd-8026-b599bfaa93df 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Received event network-vif-unplugged-bd25bee0-38fc-455c-bf9d-67420114b821 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:43 np0005539552 nova_compute[233724]: 2025-11-29 08:19:43.057 233728 DEBUG oslo_concurrency.lockutils [req-208234f0-cde0-4c95-94a2-ed7791e7013e req-6ea6f412-02d4-44fd-8026-b599bfaa93df 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "e4597066-9505-4b87-8747-5fb38218a4ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:43 np0005539552 nova_compute[233724]: 2025-11-29 08:19:43.058 233728 DEBUG oslo_concurrency.lockutils [req-208234f0-cde0-4c95-94a2-ed7791e7013e req-6ea6f412-02d4-44fd-8026-b599bfaa93df 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e4597066-9505-4b87-8747-5fb38218a4ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:43 np0005539552 nova_compute[233724]: 2025-11-29 08:19:43.059 233728 DEBUG oslo_concurrency.lockutils [req-208234f0-cde0-4c95-94a2-ed7791e7013e req-6ea6f412-02d4-44fd-8026-b599bfaa93df 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e4597066-9505-4b87-8747-5fb38218a4ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:43 np0005539552 nova_compute[233724]: 2025-11-29 08:19:43.059 233728 DEBUG nova.compute.manager [req-208234f0-cde0-4c95-94a2-ed7791e7013e req-6ea6f412-02d4-44fd-8026-b599bfaa93df 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] No waiting events found dispatching network-vif-unplugged-bd25bee0-38fc-455c-bf9d-67420114b821 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:19:43 np0005539552 nova_compute[233724]: 2025-11-29 08:19:43.059 233728 DEBUG nova.compute.manager [req-208234f0-cde0-4c95-94a2-ed7791e7013e req-6ea6f412-02d4-44fd-8026-b599bfaa93df 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Received event network-vif-unplugged-bd25bee0-38fc-455c-bf9d-67420114b821 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:19:43 np0005539552 nova_compute[233724]: 2025-11-29 08:19:43.060 233728 DEBUG nova.compute.manager [req-208234f0-cde0-4c95-94a2-ed7791e7013e req-6ea6f412-02d4-44fd-8026-b599bfaa93df 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Received event network-vif-plugged-bd25bee0-38fc-455c-bf9d-67420114b821 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:43 np0005539552 nova_compute[233724]: 2025-11-29 08:19:43.060 233728 DEBUG oslo_concurrency.lockutils [req-208234f0-cde0-4c95-94a2-ed7791e7013e req-6ea6f412-02d4-44fd-8026-b599bfaa93df 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "e4597066-9505-4b87-8747-5fb38218a4ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:43 np0005539552 nova_compute[233724]: 2025-11-29 08:19:43.060 233728 DEBUG oslo_concurrency.lockutils [req-208234f0-cde0-4c95-94a2-ed7791e7013e req-6ea6f412-02d4-44fd-8026-b599bfaa93df 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e4597066-9505-4b87-8747-5fb38218a4ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:43 np0005539552 nova_compute[233724]: 2025-11-29 08:19:43.061 233728 DEBUG oslo_concurrency.lockutils [req-208234f0-cde0-4c95-94a2-ed7791e7013e req-6ea6f412-02d4-44fd-8026-b599bfaa93df 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e4597066-9505-4b87-8747-5fb38218a4ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:43 np0005539552 nova_compute[233724]: 2025-11-29 08:19:43.061 233728 DEBUG nova.compute.manager [req-208234f0-cde0-4c95-94a2-ed7791e7013e req-6ea6f412-02d4-44fd-8026-b599bfaa93df 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] No waiting events found dispatching network-vif-plugged-bd25bee0-38fc-455c-bf9d-67420114b821 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:19:43 np0005539552 nova_compute[233724]: 2025-11-29 08:19:43.061 233728 WARNING nova.compute.manager [req-208234f0-cde0-4c95-94a2-ed7791e7013e req-6ea6f412-02d4-44fd-8026-b599bfaa93df 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Received unexpected event network-vif-plugged-bd25bee0-38fc-455c-bf9d-67420114b821 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:19:43 np0005539552 nova_compute[233724]: 2025-11-29 08:19:43.427 233728 DEBUG nova.network.neutron [-] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:19:43 np0005539552 nova_compute[233724]: 2025-11-29 08:19:43.453 233728 INFO nova.compute.manager [-] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Took 0.54 seconds to deallocate network for instance.#033[00m
Nov 29 03:19:43 np0005539552 nova_compute[233724]: 2025-11-29 08:19:43.521 233728 DEBUG oslo_concurrency.lockutils [None req-10224294-d0fd-4b4a-8fff-f7c2971bf5f3 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:43 np0005539552 nova_compute[233724]: 2025-11-29 08:19:43.521 233728 DEBUG oslo_concurrency.lockutils [None req-10224294-d0fd-4b4a-8fff-f7c2971bf5f3 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:43 np0005539552 nova_compute[233724]: 2025-11-29 08:19:43.528 233728 DEBUG nova.compute.manager [req-d7dc5ba5-4425-4864-b2e4-cf3defef81b7 req-ec843b38-365b-47b5-873e-e5de7359e152 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Received event network-vif-deleted-bd25bee0-38fc-455c-bf9d-67420114b821 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:43.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:43 np0005539552 nova_compute[233724]: 2025-11-29 08:19:43.573 233728 DEBUG oslo_concurrency.processutils [None req-10224294-d0fd-4b4a-8fff-f7c2971bf5f3 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:43 np0005539552 nova_compute[233724]: 2025-11-29 08:19:43.935 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:43 np0005539552 nova_compute[233724]: 2025-11-29 08:19:43.936 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 03:19:43 np0005539552 nova_compute[233724]: 2025-11-29 08:19:43.953 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 03:19:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:19:43 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3157995532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:19:43 np0005539552 nova_compute[233724]: 2025-11-29 08:19:43.981 233728 DEBUG oslo_concurrency.processutils [None req-10224294-d0fd-4b4a-8fff-f7c2971bf5f3 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:43 np0005539552 nova_compute[233724]: 2025-11-29 08:19:43.987 233728 DEBUG nova.compute.provider_tree [None req-10224294-d0fd-4b4a-8fff-f7c2971bf5f3 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:19:44 np0005539552 nova_compute[233724]: 2025-11-29 08:19:44.002 233728 DEBUG nova.scheduler.client.report [None req-10224294-d0fd-4b4a-8fff-f7c2971bf5f3 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:19:44 np0005539552 nova_compute[233724]: 2025-11-29 08:19:44.028 233728 DEBUG oslo_concurrency.lockutils [None req-10224294-d0fd-4b4a-8fff-f7c2971bf5f3 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.507s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:44 np0005539552 nova_compute[233724]: 2025-11-29 08:19:44.055 233728 INFO nova.scheduler.client.report [None req-10224294-d0fd-4b4a-8fff-f7c2971bf5f3 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Deleted allocations for instance e4597066-9505-4b87-8747-5fb38218a4ed#033[00m
Nov 29 03:19:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:44.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:44 np0005539552 nova_compute[233724]: 2025-11-29 08:19:44.130 233728 DEBUG oslo_concurrency.lockutils [None req-10224294-d0fd-4b4a-8fff-f7c2971bf5f3 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lock "e4597066-9505-4b87-8747-5fb38218a4ed" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:45.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:46.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:46 np0005539552 nova_compute[233724]: 2025-11-29 08:19:46.325 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:46 np0005539552 nova_compute[233724]: 2025-11-29 08:19:46.994 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:47.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:48.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:49.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:50.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:51 np0005539552 nova_compute[233724]: 2025-11-29 08:19:51.327 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:51.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:51 np0005539552 nova_compute[233724]: 2025-11-29 08:19:51.996 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:52.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:53.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:53 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:53.895 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:19:53 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:19:53.896 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:19:53 np0005539552 nova_compute[233724]: 2025-11-29 08:19:53.904 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:54.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:55.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:56.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:56 np0005539552 nova_compute[233724]: 2025-11-29 08:19:56.329 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:56 np0005539552 nova_compute[233724]: 2025-11-29 08:19:56.969 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404381.9674978, e4597066-9505-4b87-8747-5fb38218a4ed => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:19:56 np0005539552 nova_compute[233724]: 2025-11-29 08:19:56.969 233728 INFO nova.compute.manager [-] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:19:56 np0005539552 nova_compute[233724]: 2025-11-29 08:19:56.985 233728 DEBUG nova.compute.manager [None req-28e9569d-7acb-4541-8d8c-7660e26eac1d - - - - - -] [instance: e4597066-9505-4b87-8747-5fb38218a4ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:19:56 np0005539552 nova_compute[233724]: 2025-11-29 08:19:56.998 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:57 np0005539552 ovn_controller[133798]: 2025-11-29T08:19:57Z|00500|binding|INFO|Releasing lport 49c2d2fc-d147-42b8-8b87-df4d04283e61 from this chassis (sb_readonly=0)
Nov 29 03:19:57 np0005539552 nova_compute[233724]: 2025-11-29 08:19:57.192 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:19:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:57.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:19:57 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:58.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:19:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:59.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:00 np0005539552 ceph-mon[77121]: overall HEALTH_OK
Nov 29 03:20:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:00.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:01 np0005539552 nova_compute[233724]: 2025-11-29 08:20:01.331 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:01.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:02 np0005539552 nova_compute[233724]: 2025-11-29 08:20:02.000 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:02.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:03.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:03.899 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:04.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:05.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:05 np0005539552 podman[282276]: 2025-11-29 08:20:05.988521284 +0000 UTC m=+0.063308932 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 03:20:05 np0005539552 podman[282275]: 2025-11-29 08:20:05.998900893 +0000 UTC m=+0.075016596 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 03:20:06 np0005539552 podman[282277]: 2025-11-29 08:20:06.022872477 +0000 UTC m=+0.097783168 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 03:20:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:06.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:06 np0005539552 nova_compute[233724]: 2025-11-29 08:20:06.334 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:07 np0005539552 nova_compute[233724]: 2025-11-29 08:20:07.002 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:07.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:07 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:08 np0005539552 nova_compute[233724]: 2025-11-29 08:20:08.151 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:08.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:09.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:09 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:20:09 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3584073830' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:20:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:10.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:11 np0005539552 nova_compute[233724]: 2025-11-29 08:20:11.336 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:11.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:11 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e310 e310: 3 total, 3 up, 3 in
Nov 29 03:20:12 np0005539552 nova_compute[233724]: 2025-11-29 08:20:12.005 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:12.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:13 np0005539552 nova_compute[233724]: 2025-11-29 08:20:13.050 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:13.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:14.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:15.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:16.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:16 np0005539552 nova_compute[233724]: 2025-11-29 08:20:16.376 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:17 np0005539552 nova_compute[233724]: 2025-11-29 08:20:17.008 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:17.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e311 e311: 3 total, 3 up, 3 in
Nov 29 03:20:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:18.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:18 np0005539552 nova_compute[233724]: 2025-11-29 08:20:18.597 233728 DEBUG oslo_concurrency.lockutils [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "07f760bf-6984-45e9-8e85-3d297e812553" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:18 np0005539552 nova_compute[233724]: 2025-11-29 08:20:18.597 233728 DEBUG oslo_concurrency.lockutils [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "07f760bf-6984-45e9-8e85-3d297e812553" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:18 np0005539552 nova_compute[233724]: 2025-11-29 08:20:18.614 233728 DEBUG nova.compute.manager [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:20:18 np0005539552 nova_compute[233724]: 2025-11-29 08:20:18.687 233728 DEBUG oslo_concurrency.lockutils [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:18 np0005539552 nova_compute[233724]: 2025-11-29 08:20:18.688 233728 DEBUG oslo_concurrency.lockutils [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:18 np0005539552 nova_compute[233724]: 2025-11-29 08:20:18.695 233728 DEBUG nova.virt.hardware [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:20:18 np0005539552 nova_compute[233724]: 2025-11-29 08:20:18.696 233728 INFO nova.compute.claims [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:20:18 np0005539552 nova_compute[233724]: 2025-11-29 08:20:18.918 233728 DEBUG oslo_concurrency.processutils [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:19 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:20:19 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4070529151' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:20:19 np0005539552 nova_compute[233724]: 2025-11-29 08:20:19.343 233728 DEBUG oslo_concurrency.processutils [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:19 np0005539552 nova_compute[233724]: 2025-11-29 08:20:19.350 233728 DEBUG nova.compute.provider_tree [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:20:19 np0005539552 nova_compute[233724]: 2025-11-29 08:20:19.364 233728 DEBUG nova.scheduler.client.report [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:20:19 np0005539552 nova_compute[233724]: 2025-11-29 08:20:19.382 233728 DEBUG oslo_concurrency.lockutils [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:19 np0005539552 nova_compute[233724]: 2025-11-29 08:20:19.383 233728 DEBUG nova.compute.manager [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:20:19 np0005539552 nova_compute[233724]: 2025-11-29 08:20:19.425 233728 DEBUG nova.compute.manager [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:20:19 np0005539552 nova_compute[233724]: 2025-11-29 08:20:19.425 233728 DEBUG nova.network.neutron [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:20:19 np0005539552 nova_compute[233724]: 2025-11-29 08:20:19.452 233728 INFO nova.virt.libvirt.driver [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:20:19 np0005539552 nova_compute[233724]: 2025-11-29 08:20:19.472 233728 DEBUG nova.compute.manager [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:20:19 np0005539552 nova_compute[233724]: 2025-11-29 08:20:19.549 233728 DEBUG nova.compute.manager [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:20:19 np0005539552 nova_compute[233724]: 2025-11-29 08:20:19.551 233728 DEBUG nova.virt.libvirt.driver [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:20:19 np0005539552 nova_compute[233724]: 2025-11-29 08:20:19.551 233728 INFO nova.virt.libvirt.driver [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Creating image(s)#033[00m
Nov 29 03:20:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:19.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:19 np0005539552 nova_compute[233724]: 2025-11-29 08:20:19.576 233728 DEBUG nova.storage.rbd_utils [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] rbd image 07f760bf-6984-45e9-8e85-3d297e812553_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:19 np0005539552 nova_compute[233724]: 2025-11-29 08:20:19.602 233728 DEBUG nova.storage.rbd_utils [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] rbd image 07f760bf-6984-45e9-8e85-3d297e812553_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:19 np0005539552 nova_compute[233724]: 2025-11-29 08:20:19.661 233728 DEBUG nova.storage.rbd_utils [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] rbd image 07f760bf-6984-45e9-8e85-3d297e812553_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:19 np0005539552 nova_compute[233724]: 2025-11-29 08:20:19.665 233728 DEBUG oslo_concurrency.processutils [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:19 np0005539552 nova_compute[233724]: 2025-11-29 08:20:19.697 233728 DEBUG nova.policy [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1552f15deb524705a9456cbe9b54c429', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0bace34c102e4d56b089fd695d324f10', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:20:19 np0005539552 nova_compute[233724]: 2025-11-29 08:20:19.733 233728 DEBUG oslo_concurrency.processutils [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:19 np0005539552 nova_compute[233724]: 2025-11-29 08:20:19.734 233728 DEBUG oslo_concurrency.lockutils [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:19 np0005539552 nova_compute[233724]: 2025-11-29 08:20:19.734 233728 DEBUG oslo_concurrency.lockutils [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:19 np0005539552 nova_compute[233724]: 2025-11-29 08:20:19.735 233728 DEBUG oslo_concurrency.lockutils [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:19 np0005539552 nova_compute[233724]: 2025-11-29 08:20:19.762 233728 DEBUG nova.storage.rbd_utils [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] rbd image 07f760bf-6984-45e9-8e85-3d297e812553_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:19 np0005539552 nova_compute[233724]: 2025-11-29 08:20:19.767 233728 DEBUG oslo_concurrency.processutils [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 07f760bf-6984-45e9-8e85-3d297e812553_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:20.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:20 np0005539552 nova_compute[233724]: 2025-11-29 08:20:20.321 233728 DEBUG oslo_concurrency.processutils [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 07f760bf-6984-45e9-8e85-3d297e812553_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:20 np0005539552 nova_compute[233724]: 2025-11-29 08:20:20.384 233728 DEBUG nova.storage.rbd_utils [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] resizing rbd image 07f760bf-6984-45e9-8e85-3d297e812553_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:20:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:20.626 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:20.627 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:20.627 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:20 np0005539552 nova_compute[233724]: 2025-11-29 08:20:20.665 233728 DEBUG nova.objects.instance [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lazy-loading 'migration_context' on Instance uuid 07f760bf-6984-45e9-8e85-3d297e812553 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:20 np0005539552 nova_compute[233724]: 2025-11-29 08:20:20.679 233728 DEBUG nova.virt.libvirt.driver [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:20:20 np0005539552 nova_compute[233724]: 2025-11-29 08:20:20.680 233728 DEBUG nova.virt.libvirt.driver [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Ensure instance console log exists: /var/lib/nova/instances/07f760bf-6984-45e9-8e85-3d297e812553/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:20:20 np0005539552 nova_compute[233724]: 2025-11-29 08:20:20.680 233728 DEBUG oslo_concurrency.lockutils [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:20 np0005539552 nova_compute[233724]: 2025-11-29 08:20:20.681 233728 DEBUG oslo_concurrency.lockutils [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:20 np0005539552 nova_compute[233724]: 2025-11-29 08:20:20.681 233728 DEBUG oslo_concurrency.lockutils [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:21 np0005539552 nova_compute[233724]: 2025-11-29 08:20:21.347 233728 DEBUG nova.network.neutron [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Successfully created port: 5511e511-2310-4811-8313-3722fcf49758 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:20:21 np0005539552 nova_compute[233724]: 2025-11-29 08:20:21.379 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:21.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:22 np0005539552 nova_compute[233724]: 2025-11-29 08:20:22.009 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:22.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:23.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:24.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:24 np0005539552 nova_compute[233724]: 2025-11-29 08:20:24.356 233728 DEBUG nova.network.neutron [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Successfully updated port: 5511e511-2310-4811-8313-3722fcf49758 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:20:24 np0005539552 nova_compute[233724]: 2025-11-29 08:20:24.520 233728 DEBUG nova.compute.manager [req-0abed140-183f-4d92-bad7-ad92e4f9d889 req-7f2aa0fe-3d59-4d42-9441-4f28710ed971 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Received event network-changed-5511e511-2310-4811-8313-3722fcf49758 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:24 np0005539552 nova_compute[233724]: 2025-11-29 08:20:24.520 233728 DEBUG nova.compute.manager [req-0abed140-183f-4d92-bad7-ad92e4f9d889 req-7f2aa0fe-3d59-4d42-9441-4f28710ed971 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Refreshing instance network info cache due to event network-changed-5511e511-2310-4811-8313-3722fcf49758. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:20:24 np0005539552 nova_compute[233724]: 2025-11-29 08:20:24.520 233728 DEBUG oslo_concurrency.lockutils [req-0abed140-183f-4d92-bad7-ad92e4f9d889 req-7f2aa0fe-3d59-4d42-9441-4f28710ed971 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-07f760bf-6984-45e9-8e85-3d297e812553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:20:24 np0005539552 nova_compute[233724]: 2025-11-29 08:20:24.521 233728 DEBUG oslo_concurrency.lockutils [req-0abed140-183f-4d92-bad7-ad92e4f9d889 req-7f2aa0fe-3d59-4d42-9441-4f28710ed971 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-07f760bf-6984-45e9-8e85-3d297e812553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:20:24 np0005539552 nova_compute[233724]: 2025-11-29 08:20:24.521 233728 DEBUG nova.network.neutron [req-0abed140-183f-4d92-bad7-ad92e4f9d889 req-7f2aa0fe-3d59-4d42-9441-4f28710ed971 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Refreshing network info cache for port 5511e511-2310-4811-8313-3722fcf49758 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:20:24 np0005539552 nova_compute[233724]: 2025-11-29 08:20:24.565 233728 DEBUG oslo_concurrency.lockutils [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "refresh_cache-07f760bf-6984-45e9-8e85-3d297e812553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:20:25 np0005539552 nova_compute[233724]: 2025-11-29 08:20:25.117 233728 DEBUG nova.network.neutron [req-0abed140-183f-4d92-bad7-ad92e4f9d889 req-7f2aa0fe-3d59-4d42-9441-4f28710ed971 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:20:25 np0005539552 nova_compute[233724]: 2025-11-29 08:20:25.435 233728 DEBUG nova.network.neutron [req-0abed140-183f-4d92-bad7-ad92e4f9d889 req-7f2aa0fe-3d59-4d42-9441-4f28710ed971 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:20:25 np0005539552 nova_compute[233724]: 2025-11-29 08:20:25.476 233728 DEBUG oslo_concurrency.lockutils [req-0abed140-183f-4d92-bad7-ad92e4f9d889 req-7f2aa0fe-3d59-4d42-9441-4f28710ed971 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-07f760bf-6984-45e9-8e85-3d297e812553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:20:25 np0005539552 nova_compute[233724]: 2025-11-29 08:20:25.477 233728 DEBUG oslo_concurrency.lockutils [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquired lock "refresh_cache-07f760bf-6984-45e9-8e85-3d297e812553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:20:25 np0005539552 nova_compute[233724]: 2025-11-29 08:20:25.477 233728 DEBUG nova.network.neutron [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:20:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:25.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:26.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:26 np0005539552 nova_compute[233724]: 2025-11-29 08:20:26.222 233728 DEBUG nova.network.neutron [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:20:26 np0005539552 nova_compute[233724]: 2025-11-29 08:20:26.382 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:27 np0005539552 nova_compute[233724]: 2025-11-29 08:20:27.011 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e312 e312: 3 total, 3 up, 3 in
Nov 29 03:20:27 np0005539552 nova_compute[233724]: 2025-11-29 08:20:27.079 233728 DEBUG nova.network.neutron [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Updating instance_info_cache with network_info: [{"id": "5511e511-2310-4811-8313-3722fcf49758", "address": "fa:16:3e:f3:b9:47", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5511e511-23", "ovs_interfaceid": "5511e511-2310-4811-8313-3722fcf49758", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:20:27 np0005539552 nova_compute[233724]: 2025-11-29 08:20:27.141 233728 DEBUG oslo_concurrency.lockutils [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Releasing lock "refresh_cache-07f760bf-6984-45e9-8e85-3d297e812553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:20:27 np0005539552 nova_compute[233724]: 2025-11-29 08:20:27.142 233728 DEBUG nova.compute.manager [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Instance network_info: |[{"id": "5511e511-2310-4811-8313-3722fcf49758", "address": "fa:16:3e:f3:b9:47", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5511e511-23", "ovs_interfaceid": "5511e511-2310-4811-8313-3722fcf49758", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:20:27 np0005539552 nova_compute[233724]: 2025-11-29 08:20:27.144 233728 DEBUG nova.virt.libvirt.driver [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Start _get_guest_xml network_info=[{"id": "5511e511-2310-4811-8313-3722fcf49758", "address": "fa:16:3e:f3:b9:47", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5511e511-23", "ovs_interfaceid": "5511e511-2310-4811-8313-3722fcf49758", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:20:27 np0005539552 nova_compute[233724]: 2025-11-29 08:20:27.148 233728 WARNING nova.virt.libvirt.driver [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:20:27 np0005539552 nova_compute[233724]: 2025-11-29 08:20:27.154 233728 DEBUG nova.virt.libvirt.host [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:20:27 np0005539552 nova_compute[233724]: 2025-11-29 08:20:27.154 233728 DEBUG nova.virt.libvirt.host [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:20:27 np0005539552 nova_compute[233724]: 2025-11-29 08:20:27.157 233728 DEBUG nova.virt.libvirt.host [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:20:27 np0005539552 nova_compute[233724]: 2025-11-29 08:20:27.157 233728 DEBUG nova.virt.libvirt.host [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:20:27 np0005539552 nova_compute[233724]: 2025-11-29 08:20:27.158 233728 DEBUG nova.virt.libvirt.driver [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:20:27 np0005539552 nova_compute[233724]: 2025-11-29 08:20:27.158 233728 DEBUG nova.virt.hardware [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:20:27 np0005539552 nova_compute[233724]: 2025-11-29 08:20:27.159 233728 DEBUG nova.virt.hardware [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:20:27 np0005539552 nova_compute[233724]: 2025-11-29 08:20:27.159 233728 DEBUG nova.virt.hardware [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:20:27 np0005539552 nova_compute[233724]: 2025-11-29 08:20:27.159 233728 DEBUG nova.virt.hardware [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:20:27 np0005539552 nova_compute[233724]: 2025-11-29 08:20:27.159 233728 DEBUG nova.virt.hardware [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:20:27 np0005539552 nova_compute[233724]: 2025-11-29 08:20:27.159 233728 DEBUG nova.virt.hardware [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:20:27 np0005539552 nova_compute[233724]: 2025-11-29 08:20:27.160 233728 DEBUG nova.virt.hardware [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:20:27 np0005539552 nova_compute[233724]: 2025-11-29 08:20:27.160 233728 DEBUG nova.virt.hardware [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:20:27 np0005539552 nova_compute[233724]: 2025-11-29 08:20:27.160 233728 DEBUG nova.virt.hardware [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:20:27 np0005539552 nova_compute[233724]: 2025-11-29 08:20:27.160 233728 DEBUG nova.virt.hardware [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:20:27 np0005539552 nova_compute[233724]: 2025-11-29 08:20:27.161 233728 DEBUG nova.virt.hardware [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:20:27 np0005539552 nova_compute[233724]: 2025-11-29 08:20:27.163 233728 DEBUG oslo_concurrency.processutils [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:27.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:20:27 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2137148420' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:20:27 np0005539552 nova_compute[233724]: 2025-11-29 08:20:27.621 233728 DEBUG oslo_concurrency.processutils [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:27 np0005539552 nova_compute[233724]: 2025-11-29 08:20:27.653 233728 DEBUG nova.storage.rbd_utils [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] rbd image 07f760bf-6984-45e9-8e85-3d297e812553_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:27 np0005539552 nova_compute[233724]: 2025-11-29 08:20:27.658 233728 DEBUG oslo_concurrency.processutils [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:20:28 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1675288545' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:20:28 np0005539552 nova_compute[233724]: 2025-11-29 08:20:28.088 233728 DEBUG oslo_concurrency.processutils [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:28 np0005539552 nova_compute[233724]: 2025-11-29 08:20:28.090 233728 DEBUG nova.virt.libvirt.vif [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:20:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1370062521',display_name='tempest-ServerActionsTestOtherA-server-1370062521',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1370062521',id=117,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGZDzu/2PA5Jq1/mLvX2aaGG/WgUsRbb7Dsx3sFYSYL50dOuvFvn9ZiS3sRkHwVTZXl3/vg+NRcU0ds7Zzbdh2bvajGjb9Qxq1UtC5+8x+Wx/kUkrK3lVnVkeCLnrxzmbg==',key_name='tempest-keypair-186857524',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0bace34c102e4d56b089fd695d324f10',ramdisk_id='',reservation_id='r-o3h6qlt0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1954650991',owner_user_name='tempest-ServerActionsTestOtherA-1954650991-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:20:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1552f15deb524705a9456cbe9b54c429',uuid=07f760bf-6984-45e9-8e85-3d297e812553,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5511e511-2310-4811-8313-3722fcf49758", "address": "fa:16:3e:f3:b9:47", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5511e511-23", "ovs_interfaceid": "5511e511-2310-4811-8313-3722fcf49758", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:20:28 np0005539552 nova_compute[233724]: 2025-11-29 08:20:28.091 233728 DEBUG nova.network.os_vif_util [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converting VIF {"id": "5511e511-2310-4811-8313-3722fcf49758", "address": "fa:16:3e:f3:b9:47", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5511e511-23", "ovs_interfaceid": "5511e511-2310-4811-8313-3722fcf49758", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:20:28 np0005539552 nova_compute[233724]: 2025-11-29 08:20:28.092 233728 DEBUG nova.network.os_vif_util [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:b9:47,bridge_name='br-int',has_traffic_filtering=True,id=5511e511-2310-4811-8313-3722fcf49758,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5511e511-23') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:20:28 np0005539552 nova_compute[233724]: 2025-11-29 08:20:28.093 233728 DEBUG nova.objects.instance [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lazy-loading 'pci_devices' on Instance uuid 07f760bf-6984-45e9-8e85-3d297e812553 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:28 np0005539552 nova_compute[233724]: 2025-11-29 08:20:28.108 233728 DEBUG nova.virt.libvirt.driver [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:20:28 np0005539552 nova_compute[233724]:  <uuid>07f760bf-6984-45e9-8e85-3d297e812553</uuid>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:  <name>instance-00000075</name>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:20:28 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:      <nova:name>tempest-ServerActionsTestOtherA-server-1370062521</nova:name>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:20:27</nova:creationTime>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:20:28 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:        <nova:user uuid="1552f15deb524705a9456cbe9b54c429">tempest-ServerActionsTestOtherA-1954650991-project-member</nova:user>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:        <nova:project uuid="0bace34c102e4d56b089fd695d324f10">tempest-ServerActionsTestOtherA-1954650991</nova:project>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:        <nova:port uuid="5511e511-2310-4811-8313-3722fcf49758">
Nov 29 03:20:28 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:      <entry name="serial">07f760bf-6984-45e9-8e85-3d297e812553</entry>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:      <entry name="uuid">07f760bf-6984-45e9-8e85-3d297e812553</entry>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:20:28 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/07f760bf-6984-45e9-8e85-3d297e812553_disk">
Nov 29 03:20:28 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:20:28 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:20:28 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/07f760bf-6984-45e9-8e85-3d297e812553_disk.config">
Nov 29 03:20:28 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:20:28 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:20:28 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:f3:b9:47"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:      <target dev="tap5511e511-23"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:20:28 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/07f760bf-6984-45e9-8e85-3d297e812553/console.log" append="off"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:20:28 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:20:28 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:20:28 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:20:28 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:20:28 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:20:28 np0005539552 nova_compute[233724]: 2025-11-29 08:20:28.109 233728 DEBUG nova.compute.manager [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Preparing to wait for external event network-vif-plugged-5511e511-2310-4811-8313-3722fcf49758 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:20:28 np0005539552 nova_compute[233724]: 2025-11-29 08:20:28.109 233728 DEBUG oslo_concurrency.lockutils [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "07f760bf-6984-45e9-8e85-3d297e812553-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:28 np0005539552 nova_compute[233724]: 2025-11-29 08:20:28.110 233728 DEBUG oslo_concurrency.lockutils [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "07f760bf-6984-45e9-8e85-3d297e812553-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:28 np0005539552 nova_compute[233724]: 2025-11-29 08:20:28.110 233728 DEBUG oslo_concurrency.lockutils [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "07f760bf-6984-45e9-8e85-3d297e812553-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:28 np0005539552 nova_compute[233724]: 2025-11-29 08:20:28.110 233728 DEBUG nova.virt.libvirt.vif [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:20:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1370062521',display_name='tempest-ServerActionsTestOtherA-server-1370062521',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1370062521',id=117,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGZDzu/2PA5Jq1/mLvX2aaGG/WgUsRbb7Dsx3sFYSYL50dOuvFvn9ZiS3sRkHwVTZXl3/vg+NRcU0ds7Zzbdh2bvajGjb9Qxq1UtC5+8x+Wx/kUkrK3lVnVkeCLnrxzmbg==',key_name='tempest-keypair-186857524',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0bace34c102e4d56b089fd695d324f10',ramdisk_id='',reservation_id='r-o3h6qlt0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1954650991',owner_user_name='tempest-ServerActionsTestOtherA-1954650991-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:20:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1552f15deb524705a9456cbe9b54c429',uuid=07f760bf-6984-45e9-8e85-3d297e812553,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5511e511-2310-4811-8313-3722fcf49758", "address": "fa:16:3e:f3:b9:47", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5511e511-23", "ovs_interfaceid": "5511e511-2310-4811-8313-3722fcf49758", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:20:28 np0005539552 nova_compute[233724]: 2025-11-29 08:20:28.111 233728 DEBUG nova.network.os_vif_util [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converting VIF {"id": "5511e511-2310-4811-8313-3722fcf49758", "address": "fa:16:3e:f3:b9:47", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5511e511-23", "ovs_interfaceid": "5511e511-2310-4811-8313-3722fcf49758", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:20:28 np0005539552 nova_compute[233724]: 2025-11-29 08:20:28.111 233728 DEBUG nova.network.os_vif_util [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:b9:47,bridge_name='br-int',has_traffic_filtering=True,id=5511e511-2310-4811-8313-3722fcf49758,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5511e511-23') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:20:28 np0005539552 nova_compute[233724]: 2025-11-29 08:20:28.112 233728 DEBUG os_vif [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:b9:47,bridge_name='br-int',has_traffic_filtering=True,id=5511e511-2310-4811-8313-3722fcf49758,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5511e511-23') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:20:28 np0005539552 nova_compute[233724]: 2025-11-29 08:20:28.112 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:28 np0005539552 nova_compute[233724]: 2025-11-29 08:20:28.113 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:28 np0005539552 nova_compute[233724]: 2025-11-29 08:20:28.113 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:20:28 np0005539552 nova_compute[233724]: 2025-11-29 08:20:28.116 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:28 np0005539552 nova_compute[233724]: 2025-11-29 08:20:28.116 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5511e511-23, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:28 np0005539552 nova_compute[233724]: 2025-11-29 08:20:28.116 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5511e511-23, col_values=(('external_ids', {'iface-id': '5511e511-2310-4811-8313-3722fcf49758', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f3:b9:47', 'vm-uuid': '07f760bf-6984-45e9-8e85-3d297e812553'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:28 np0005539552 NetworkManager[48926]: <info>  [1764404428.1188] manager: (tap5511e511-23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/234)
Nov 29 03:20:28 np0005539552 nova_compute[233724]: 2025-11-29 08:20:28.120 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:20:28 np0005539552 nova_compute[233724]: 2025-11-29 08:20:28.126 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:28 np0005539552 nova_compute[233724]: 2025-11-29 08:20:28.127 233728 INFO os_vif [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:b9:47,bridge_name='br-int',has_traffic_filtering=True,id=5511e511-2310-4811-8313-3722fcf49758,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5511e511-23')#033[00m
Nov 29 03:20:28 np0005539552 nova_compute[233724]: 2025-11-29 08:20:28.181 233728 DEBUG nova.virt.libvirt.driver [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:20:28 np0005539552 nova_compute[233724]: 2025-11-29 08:20:28.182 233728 DEBUG nova.virt.libvirt.driver [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:20:28 np0005539552 nova_compute[233724]: 2025-11-29 08:20:28.182 233728 DEBUG nova.virt.libvirt.driver [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] No VIF found with MAC fa:16:3e:f3:b9:47, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:20:28 np0005539552 nova_compute[233724]: 2025-11-29 08:20:28.183 233728 INFO nova.virt.libvirt.driver [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Using config drive#033[00m
Nov 29 03:20:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:28.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:28 np0005539552 nova_compute[233724]: 2025-11-29 08:20:28.211 233728 DEBUG nova.storage.rbd_utils [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] rbd image 07f760bf-6984-45e9-8e85-3d297e812553_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:28 np0005539552 nova_compute[233724]: 2025-11-29 08:20:28.544 233728 INFO nova.virt.libvirt.driver [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Creating config drive at /var/lib/nova/instances/07f760bf-6984-45e9-8e85-3d297e812553/disk.config#033[00m
Nov 29 03:20:28 np0005539552 nova_compute[233724]: 2025-11-29 08:20:28.557 233728 DEBUG oslo_concurrency.processutils [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/07f760bf-6984-45e9-8e85-3d297e812553/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppieifomg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:28 np0005539552 nova_compute[233724]: 2025-11-29 08:20:28.713 233728 DEBUG oslo_concurrency.processutils [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/07f760bf-6984-45e9-8e85-3d297e812553/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppieifomg" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:28 np0005539552 nova_compute[233724]: 2025-11-29 08:20:28.746 233728 DEBUG nova.storage.rbd_utils [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] rbd image 07f760bf-6984-45e9-8e85-3d297e812553_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:28 np0005539552 nova_compute[233724]: 2025-11-29 08:20:28.750 233728 DEBUG oslo_concurrency.processutils [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/07f760bf-6984-45e9-8e85-3d297e812553/disk.config 07f760bf-6984-45e9-8e85-3d297e812553_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:28 np0005539552 nova_compute[233724]: 2025-11-29 08:20:28.918 233728 DEBUG oslo_concurrency.processutils [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/07f760bf-6984-45e9-8e85-3d297e812553/disk.config 07f760bf-6984-45e9-8e85-3d297e812553_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:28 np0005539552 nova_compute[233724]: 2025-11-29 08:20:28.919 233728 INFO nova.virt.libvirt.driver [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Deleting local config drive /var/lib/nova/instances/07f760bf-6984-45e9-8e85-3d297e812553/disk.config because it was imported into RBD.#033[00m
Nov 29 03:20:28 np0005539552 nova_compute[233724]: 2025-11-29 08:20:28.942 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:20:28 np0005539552 nova_compute[233724]: 2025-11-29 08:20:28.961 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:28 np0005539552 nova_compute[233724]: 2025-11-29 08:20:28.961 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:28 np0005539552 nova_compute[233724]: 2025-11-29 08:20:28.962 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:28 np0005539552 nova_compute[233724]: 2025-11-29 08:20:28.962 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:20:28 np0005539552 nova_compute[233724]: 2025-11-29 08:20:28.962 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:28 np0005539552 kernel: tap5511e511-23: entered promiscuous mode
Nov 29 03:20:28 np0005539552 NetworkManager[48926]: <info>  [1764404428.9649] manager: (tap5511e511-23): new Tun device (/org/freedesktop/NetworkManager/Devices/235)
Nov 29 03:20:28 np0005539552 ovn_controller[133798]: 2025-11-29T08:20:28Z|00501|binding|INFO|Claiming lport 5511e511-2310-4811-8313-3722fcf49758 for this chassis.
Nov 29 03:20:28 np0005539552 ovn_controller[133798]: 2025-11-29T08:20:28Z|00502|binding|INFO|5511e511-2310-4811-8313-3722fcf49758: Claiming fa:16:3e:f3:b9:47 10.100.0.8
Nov 29 03:20:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:28.973 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:b9:47 10.100.0.8'], port_security=['fa:16:3e:f3:b9:47 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '07f760bf-6984-45e9-8e85-3d297e812553', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0bace34c102e4d56b089fd695d324f10', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7465c0fc-60f6-4695-93cd-f6ab8b97c365', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a26ea06d-6837-4c64-a5e9-9d9016316b21, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=5511e511-2310-4811-8313-3722fcf49758) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:20:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:28.974 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 5511e511-2310-4811-8313-3722fcf49758 in datapath 7fc1dfc3-8d7f-4854-980d-37a93f366035 bound to our chassis#033[00m
Nov 29 03:20:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:28.975 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7fc1dfc3-8d7f-4854-980d-37a93f366035#033[00m
Nov 29 03:20:28 np0005539552 ovn_controller[133798]: 2025-11-29T08:20:28Z|00503|binding|INFO|Setting lport 5511e511-2310-4811-8313-3722fcf49758 ovn-installed in OVS
Nov 29 03:20:28 np0005539552 ovn_controller[133798]: 2025-11-29T08:20:28Z|00504|binding|INFO|Setting lport 5511e511-2310-4811-8313-3722fcf49758 up in Southbound
Nov 29 03:20:28 np0005539552 nova_compute[233724]: 2025-11-29 08:20:28.987 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:28.989 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[23f996ed-053e-4d54-a41a-32922660e884]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:28.990 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7fc1dfc3-81 in ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:20:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:28.992 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7fc1dfc3-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:20:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:28.992 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[65cf7ef8-cb6c-4795-84f5-33cbfad529a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:28.993 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[9d7c4eb3-e974-43c9-8635-844aab8dafd2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:28 np0005539552 systemd-machined[196379]: New machine qemu-49-instance-00000075.
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:29.004 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[b69ab0a2-d776-4c7b-8eba-ba9668ceee9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:29 np0005539552 systemd[1]: Started Virtual Machine qemu-49-instance-00000075.
Nov 29 03:20:29 np0005539552 systemd-udevd[282736]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:29.024 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c75d926a-6740-4a88-8afc-e9389f19977e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:29 np0005539552 NetworkManager[48926]: <info>  [1764404429.0319] device (tap5511e511-23): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:20:29 np0005539552 NetworkManager[48926]: <info>  [1764404429.0329] device (tap5511e511-23): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:29.058 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[91029eae-2273-488c-b1f7-b562f6a3ffab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:29.062 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e77237f8-18f5-443d-aef3-55d12be77f7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:29 np0005539552 NetworkManager[48926]: <info>  [1764404429.0635] manager: (tap7fc1dfc3-80): new Veth device (/org/freedesktop/NetworkManager/Devices/236)
Nov 29 03:20:29 np0005539552 systemd-udevd[282738]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:29.098 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[cda1d611-eb08-4757-a548-70d6e75c6413]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:29.103 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[1b810148-1c69-4b8b-af1e-bf3ee95348fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:29 np0005539552 NetworkManager[48926]: <info>  [1764404429.1254] device (tap7fc1dfc3-80): carrier: link connected
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:29.133 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[f19496ed-88a4-494e-9bb6-e1c3de133df4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:29.151 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a855955e-b4fe-4814-a055-6de9fa09aa28]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7fc1dfc3-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:27:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 152], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 747481, 'reachable_time': 35298, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282785, 'error': None, 'target': 'ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:29.163 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[820d0c86-232b-4c8e-8077-bd470f8171bf]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5d:273e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 747481, 'tstamp': 747481}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282786, 'error': None, 'target': 'ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:29.178 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2b63a9ce-e135-4a0f-9f6a-0dd5f1da03f2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7fc1dfc3-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:27:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 152], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 747481, 'reachable_time': 35298, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 282787, 'error': None, 'target': 'ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:29.205 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4b175762-a356-497a-8517-1175de1eb978]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:29.264 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a67e874a-1bf2-468d-b92a-029e2b7a2e5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:29.265 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7fc1dfc3-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:29.265 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:29.266 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7fc1dfc3-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:29 np0005539552 NetworkManager[48926]: <info>  [1764404429.2683] manager: (tap7fc1dfc3-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/237)
Nov 29 03:20:29 np0005539552 kernel: tap7fc1dfc3-80: entered promiscuous mode
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:29.273 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7fc1dfc3-80, col_values=(('external_ids', {'iface-id': '79109459-2a40-4b69-936e-ac2a2aa77985'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:29 np0005539552 nova_compute[233724]: 2025-11-29 08:20:29.273 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:29 np0005539552 ovn_controller[133798]: 2025-11-29T08:20:29Z|00505|binding|INFO|Releasing lport 79109459-2a40-4b69-936e-ac2a2aa77985 from this chassis (sb_readonly=0)
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:29.277 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7fc1dfc3-8d7f-4854-980d-37a93f366035.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7fc1dfc3-8d7f-4854-980d-37a93f366035.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:29.285 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2b84cfa5-7466-4369-90a1-1eeb5f915048]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:29.286 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-7fc1dfc3-8d7f-4854-980d-37a93f366035
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/7fc1dfc3-8d7f-4854-980d-37a93f366035.pid.haproxy
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 7fc1dfc3-8d7f-4854-980d-37a93f366035
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:20:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:29.287 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'env', 'PROCESS_TAG=haproxy-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7fc1dfc3-8d7f-4854-980d-37a93f366035.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:20:29 np0005539552 nova_compute[233724]: 2025-11-29 08:20:29.290 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:29 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:20:29 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/328645500' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:20:29 np0005539552 nova_compute[233724]: 2025-11-29 08:20:29.416 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:29 np0005539552 nova_compute[233724]: 2025-11-29 08:20:29.460 233728 DEBUG nova.compute.manager [req-ebfe8a26-182d-4a18-b9ce-f5b3a83080d2 req-22c6647f-c13c-4ee8-98fc-b9843103a2b6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Received event network-vif-plugged-5511e511-2310-4811-8313-3722fcf49758 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:29 np0005539552 nova_compute[233724]: 2025-11-29 08:20:29.461 233728 DEBUG oslo_concurrency.lockutils [req-ebfe8a26-182d-4a18-b9ce-f5b3a83080d2 req-22c6647f-c13c-4ee8-98fc-b9843103a2b6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "07f760bf-6984-45e9-8e85-3d297e812553-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:29 np0005539552 nova_compute[233724]: 2025-11-29 08:20:29.461 233728 DEBUG oslo_concurrency.lockutils [req-ebfe8a26-182d-4a18-b9ce-f5b3a83080d2 req-22c6647f-c13c-4ee8-98fc-b9843103a2b6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "07f760bf-6984-45e9-8e85-3d297e812553-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:29 np0005539552 nova_compute[233724]: 2025-11-29 08:20:29.462 233728 DEBUG oslo_concurrency.lockutils [req-ebfe8a26-182d-4a18-b9ce-f5b3a83080d2 req-22c6647f-c13c-4ee8-98fc-b9843103a2b6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "07f760bf-6984-45e9-8e85-3d297e812553-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:29 np0005539552 nova_compute[233724]: 2025-11-29 08:20:29.462 233728 DEBUG nova.compute.manager [req-ebfe8a26-182d-4a18-b9ce-f5b3a83080d2 req-22c6647f-c13c-4ee8-98fc-b9843103a2b6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Processing event network-vif-plugged-5511e511-2310-4811-8313-3722fcf49758 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:20:29 np0005539552 nova_compute[233724]: 2025-11-29 08:20:29.485 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:20:29 np0005539552 nova_compute[233724]: 2025-11-29 08:20:29.485 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:20:29 np0005539552 nova_compute[233724]: 2025-11-29 08:20:29.489 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:20:29 np0005539552 nova_compute[233724]: 2025-11-29 08:20:29.489 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:20:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:29.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:29 np0005539552 nova_compute[233724]: 2025-11-29 08:20:29.673 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:20:29 np0005539552 nova_compute[233724]: 2025-11-29 08:20:29.674 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4172MB free_disk=20.836986541748047GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:20:29 np0005539552 nova_compute[233724]: 2025-11-29 08:20:29.674 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:29 np0005539552 nova_compute[233724]: 2025-11-29 08:20:29.675 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:29 np0005539552 podman[282822]: 2025-11-29 08:20:29.680825978 +0000 UTC m=+0.066155058 container create d2ba70ea3c1739564b202d605bbb50491b8340d5fbacc6f7c9bf72c7b1a2dd9a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:20:29 np0005539552 systemd[1]: Started libpod-conmon-d2ba70ea3c1739564b202d605bbb50491b8340d5fbacc6f7c9bf72c7b1a2dd9a.scope.
Nov 29 03:20:29 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:20:29 np0005539552 podman[282822]: 2025-11-29 08:20:29.64368032 +0000 UTC m=+0.029009420 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:20:29 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e63430c57f5c5c618653a2abbfb7602362c632d5722556f56102ab91da5ff6a2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:20:29 np0005539552 podman[282822]: 2025-11-29 08:20:29.74974884 +0000 UTC m=+0.135077920 container init d2ba70ea3c1739564b202d605bbb50491b8340d5fbacc6f7c9bf72c7b1a2dd9a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:20:29 np0005539552 podman[282822]: 2025-11-29 08:20:29.754451346 +0000 UTC m=+0.139780436 container start d2ba70ea3c1739564b202d605bbb50491b8340d5fbacc6f7c9bf72c7b1a2dd9a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 03:20:29 np0005539552 neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035[282873]: [NOTICE]   (282882) : New worker (282884) forked
Nov 29 03:20:29 np0005539552 neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035[282873]: [NOTICE]   (282882) : Loading success.
Nov 29 03:20:29 np0005539552 nova_compute[233724]: 2025-11-29 08:20:29.819 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404429.8194094, 07f760bf-6984-45e9-8e85-3d297e812553 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:20:29 np0005539552 nova_compute[233724]: 2025-11-29 08:20:29.820 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] VM Started (Lifecycle Event)#033[00m
Nov 29 03:20:29 np0005539552 nova_compute[233724]: 2025-11-29 08:20:29.821 233728 DEBUG nova.compute.manager [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:20:29 np0005539552 nova_compute[233724]: 2025-11-29 08:20:29.823 233728 DEBUG nova.virt.libvirt.driver [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:20:29 np0005539552 nova_compute[233724]: 2025-11-29 08:20:29.826 233728 INFO nova.virt.libvirt.driver [-] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Instance spawned successfully.#033[00m
Nov 29 03:20:29 np0005539552 nova_compute[233724]: 2025-11-29 08:20:29.826 233728 DEBUG nova.virt.libvirt.driver [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:20:29 np0005539552 nova_compute[233724]: 2025-11-29 08:20:29.849 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:20:29 np0005539552 nova_compute[233724]: 2025-11-29 08:20:29.853 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:20:29 np0005539552 nova_compute[233724]: 2025-11-29 08:20:29.855 233728 DEBUG nova.virt.libvirt.driver [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:20:29 np0005539552 nova_compute[233724]: 2025-11-29 08:20:29.856 233728 DEBUG nova.virt.libvirt.driver [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:20:29 np0005539552 nova_compute[233724]: 2025-11-29 08:20:29.856 233728 DEBUG nova.virt.libvirt.driver [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:20:29 np0005539552 nova_compute[233724]: 2025-11-29 08:20:29.856 233728 DEBUG nova.virt.libvirt.driver [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:20:29 np0005539552 nova_compute[233724]: 2025-11-29 08:20:29.857 233728 DEBUG nova.virt.libvirt.driver [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:20:29 np0005539552 nova_compute[233724]: 2025-11-29 08:20:29.857 233728 DEBUG nova.virt.libvirt.driver [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:20:29 np0005539552 nova_compute[233724]: 2025-11-29 08:20:29.885 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:20:29 np0005539552 nova_compute[233724]: 2025-11-29 08:20:29.886 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404429.8201544, 07f760bf-6984-45e9-8e85-3d297e812553 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:20:29 np0005539552 nova_compute[233724]: 2025-11-29 08:20:29.886 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:20:29 np0005539552 nova_compute[233724]: 2025-11-29 08:20:29.902 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 258dfc76-0ea9-4521-a3fc-5d64b3632451 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:20:29 np0005539552 nova_compute[233724]: 2025-11-29 08:20:29.902 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 07f760bf-6984-45e9-8e85-3d297e812553 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:20:29 np0005539552 nova_compute[233724]: 2025-11-29 08:20:29.903 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:20:29 np0005539552 nova_compute[233724]: 2025-11-29 08:20:29.903 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:20:29 np0005539552 nova_compute[233724]: 2025-11-29 08:20:29.919 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:20:29 np0005539552 nova_compute[233724]: 2025-11-29 08:20:29.921 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404429.8233428, 07f760bf-6984-45e9-8e85-3d297e812553 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:20:29 np0005539552 nova_compute[233724]: 2025-11-29 08:20:29.921 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:20:29 np0005539552 nova_compute[233724]: 2025-11-29 08:20:29.928 233728 INFO nova.compute.manager [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Took 10.38 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:20:29 np0005539552 nova_compute[233724]: 2025-11-29 08:20:29.929 233728 DEBUG nova.compute.manager [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:20:29 np0005539552 nova_compute[233724]: 2025-11-29 08:20:29.958 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:20:29 np0005539552 nova_compute[233724]: 2025-11-29 08:20:29.960 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:20:29 np0005539552 nova_compute[233724]: 2025-11-29 08:20:29.988 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:20:30 np0005539552 nova_compute[233724]: 2025-11-29 08:20:30.002 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:30 np0005539552 nova_compute[233724]: 2025-11-29 08:20:30.030 233728 INFO nova.compute.manager [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Took 11.37 seconds to build instance.#033[00m
Nov 29 03:20:30 np0005539552 nova_compute[233724]: 2025-11-29 08:20:30.067 233728 DEBUG oslo_concurrency.lockutils [None req-92291552-da3f-47ff-8764-fb05399788d5 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "07f760bf-6984-45e9-8e85-3d297e812553" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.469s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:30.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:30 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:20:30 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1256861687' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:20:30 np0005539552 nova_compute[233724]: 2025-11-29 08:20:30.458 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:30 np0005539552 nova_compute[233724]: 2025-11-29 08:20:30.462 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:20:30 np0005539552 nova_compute[233724]: 2025-11-29 08:20:30.486 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:20:30 np0005539552 nova_compute[233724]: 2025-11-29 08:20:30.517 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:20:30 np0005539552 nova_compute[233724]: 2025-11-29 08:20:30.517 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.843s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:31 np0005539552 nova_compute[233724]: 2025-11-29 08:20:31.384 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:31 np0005539552 nova_compute[233724]: 2025-11-29 08:20:31.557 233728 DEBUG nova.compute.manager [req-d4759b85-f124-4bd3-b5e1-234966ad640b req-8c0f52db-1b8f-4064-9355-b95e87aab087 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Received event network-vif-plugged-5511e511-2310-4811-8313-3722fcf49758 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:31 np0005539552 nova_compute[233724]: 2025-11-29 08:20:31.558 233728 DEBUG oslo_concurrency.lockutils [req-d4759b85-f124-4bd3-b5e1-234966ad640b req-8c0f52db-1b8f-4064-9355-b95e87aab087 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "07f760bf-6984-45e9-8e85-3d297e812553-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:31 np0005539552 nova_compute[233724]: 2025-11-29 08:20:31.558 233728 DEBUG oslo_concurrency.lockutils [req-d4759b85-f124-4bd3-b5e1-234966ad640b req-8c0f52db-1b8f-4064-9355-b95e87aab087 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "07f760bf-6984-45e9-8e85-3d297e812553-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:31 np0005539552 nova_compute[233724]: 2025-11-29 08:20:31.559 233728 DEBUG oslo_concurrency.lockutils [req-d4759b85-f124-4bd3-b5e1-234966ad640b req-8c0f52db-1b8f-4064-9355-b95e87aab087 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "07f760bf-6984-45e9-8e85-3d297e812553-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:31 np0005539552 nova_compute[233724]: 2025-11-29 08:20:31.559 233728 DEBUG nova.compute.manager [req-d4759b85-f124-4bd3-b5e1-234966ad640b req-8c0f52db-1b8f-4064-9355-b95e87aab087 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] No waiting events found dispatching network-vif-plugged-5511e511-2310-4811-8313-3722fcf49758 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:20:31 np0005539552 nova_compute[233724]: 2025-11-29 08:20:31.560 233728 WARNING nova.compute.manager [req-d4759b85-f124-4bd3-b5e1-234966ad640b req-8c0f52db-1b8f-4064-9355-b95e87aab087 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Received unexpected event network-vif-plugged-5511e511-2310-4811-8313-3722fcf49758 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:20:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 03:20:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:31.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 03:20:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:32.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:32 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:33 np0005539552 nova_compute[233724]: 2025-11-29 08:20:33.120 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:33.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:34.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:34 np0005539552 nova_compute[233724]: 2025-11-29 08:20:34.514 233728 DEBUG nova.compute.manager [req-f0b03224-2994-4a6b-ba0b-0fe625481501 req-feb72ba8-e0a2-4c6f-a948-443aab649050 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Received event network-changed-5511e511-2310-4811-8313-3722fcf49758 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:34 np0005539552 nova_compute[233724]: 2025-11-29 08:20:34.515 233728 DEBUG nova.compute.manager [req-f0b03224-2994-4a6b-ba0b-0fe625481501 req-feb72ba8-e0a2-4c6f-a948-443aab649050 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Refreshing instance network info cache due to event network-changed-5511e511-2310-4811-8313-3722fcf49758. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:20:34 np0005539552 nova_compute[233724]: 2025-11-29 08:20:34.515 233728 DEBUG oslo_concurrency.lockutils [req-f0b03224-2994-4a6b-ba0b-0fe625481501 req-feb72ba8-e0a2-4c6f-a948-443aab649050 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-07f760bf-6984-45e9-8e85-3d297e812553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:20:34 np0005539552 nova_compute[233724]: 2025-11-29 08:20:34.515 233728 DEBUG oslo_concurrency.lockutils [req-f0b03224-2994-4a6b-ba0b-0fe625481501 req-feb72ba8-e0a2-4c6f-a948-443aab649050 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-07f760bf-6984-45e9-8e85-3d297e812553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:20:34 np0005539552 nova_compute[233724]: 2025-11-29 08:20:34.515 233728 DEBUG nova.network.neutron [req-f0b03224-2994-4a6b-ba0b-0fe625481501 req-feb72ba8-e0a2-4c6f-a948-443aab649050 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Refreshing network info cache for port 5511e511-2310-4811-8313-3722fcf49758 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:20:35 np0005539552 nova_compute[233724]: 2025-11-29 08:20:35.498 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:20:35 np0005539552 nova_compute[233724]: 2025-11-29 08:20:35.499 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:20:35 np0005539552 nova_compute[233724]: 2025-11-29 08:20:35.499 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:20:35 np0005539552 nova_compute[233724]: 2025-11-29 08:20:35.499 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:20:35 np0005539552 nova_compute[233724]: 2025-11-29 08:20:35.500 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:20:35 np0005539552 nova_compute[233724]: 2025-11-29 08:20:35.500 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:20:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:35.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:35 np0005539552 nova_compute[233724]: 2025-11-29 08:20:35.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:20:35 np0005539552 nova_compute[233724]: 2025-11-29 08:20:35.925 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:20:35 np0005539552 nova_compute[233724]: 2025-11-29 08:20:35.925 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:20:36 np0005539552 nova_compute[233724]: 2025-11-29 08:20:36.093 233728 DEBUG nova.network.neutron [req-f0b03224-2994-4a6b-ba0b-0fe625481501 req-feb72ba8-e0a2-4c6f-a948-443aab649050 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Updated VIF entry in instance network info cache for port 5511e511-2310-4811-8313-3722fcf49758. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:20:36 np0005539552 nova_compute[233724]: 2025-11-29 08:20:36.094 233728 DEBUG nova.network.neutron [req-f0b03224-2994-4a6b-ba0b-0fe625481501 req-feb72ba8-e0a2-4c6f-a948-443aab649050 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Updating instance_info_cache with network_info: [{"id": "5511e511-2310-4811-8313-3722fcf49758", "address": "fa:16:3e:f3:b9:47", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5511e511-23", "ovs_interfaceid": "5511e511-2310-4811-8313-3722fcf49758", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:20:36 np0005539552 nova_compute[233724]: 2025-11-29 08:20:36.112 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "refresh_cache-258dfc76-0ea9-4521-a3fc-5d64b3632451" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:20:36 np0005539552 nova_compute[233724]: 2025-11-29 08:20:36.113 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquired lock "refresh_cache-258dfc76-0ea9-4521-a3fc-5d64b3632451" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:20:36 np0005539552 nova_compute[233724]: 2025-11-29 08:20:36.113 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:20:36 np0005539552 nova_compute[233724]: 2025-11-29 08:20:36.113 233728 DEBUG nova.objects.instance [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 258dfc76-0ea9-4521-a3fc-5d64b3632451 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:36 np0005539552 nova_compute[233724]: 2025-11-29 08:20:36.145 233728 DEBUG oslo_concurrency.lockutils [req-f0b03224-2994-4a6b-ba0b-0fe625481501 req-feb72ba8-e0a2-4c6f-a948-443aab649050 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-07f760bf-6984-45e9-8e85-3d297e812553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:20:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:36.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:36 np0005539552 nova_compute[233724]: 2025-11-29 08:20:36.386 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:36 np0005539552 podman[282923]: 2025-11-29 08:20:36.978556613 +0000 UTC m=+0.063009894 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:20:36 np0005539552 podman[282922]: 2025-11-29 08:20:36.982229201 +0000 UTC m=+0.072822117 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:20:37 np0005539552 podman[282924]: 2025-11-29 08:20:37.037499576 +0000 UTC m=+0.122632315 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 29 03:20:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:37.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:37 np0005539552 nova_compute[233724]: 2025-11-29 08:20:37.737 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Updating instance_info_cache with network_info: [{"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:20:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:37 np0005539552 nova_compute[233724]: 2025-11-29 08:20:37.754 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Releasing lock "refresh_cache-258dfc76-0ea9-4521-a3fc-5d64b3632451" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:20:37 np0005539552 nova_compute[233724]: 2025-11-29 08:20:37.755 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:20:37 np0005539552 nova_compute[233724]: 2025-11-29 08:20:37.755 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:20:37 np0005539552 nova_compute[233724]: 2025-11-29 08:20:37.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:20:38 np0005539552 nova_compute[233724]: 2025-11-29 08:20:38.123 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:38.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:39.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:39.898 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:20:39 np0005539552 nova_compute[233724]: 2025-11-29 08:20:39.899 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:39.900 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:20:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:40.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:41 np0005539552 nova_compute[233724]: 2025-11-29 08:20:41.388 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:41 np0005539552 nova_compute[233724]: 2025-11-29 08:20:41.432 233728 DEBUG oslo_concurrency.lockutils [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "refresh_cache-258dfc76-0ea9-4521-a3fc-5d64b3632451" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:20:41 np0005539552 nova_compute[233724]: 2025-11-29 08:20:41.433 233728 DEBUG oslo_concurrency.lockutils [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquired lock "refresh_cache-258dfc76-0ea9-4521-a3fc-5d64b3632451" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:20:41 np0005539552 nova_compute[233724]: 2025-11-29 08:20:41.433 233728 DEBUG nova.network.neutron [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:20:41 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:20:41 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:20:41 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:20:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:41.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:41 np0005539552 ovn_controller[133798]: 2025-11-29T08:20:41Z|00506|binding|INFO|Releasing lport 49c2d2fc-d147-42b8-8b87-df4d04283e61 from this chassis (sb_readonly=0)
Nov 29 03:20:41 np0005539552 ovn_controller[133798]: 2025-11-29T08:20:41Z|00507|binding|INFO|Releasing lport 79109459-2a40-4b69-936e-ac2a2aa77985 from this chassis (sb_readonly=0)
Nov 29 03:20:41 np0005539552 nova_compute[233724]: 2025-11-29 08:20:41.974 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:42.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:42 np0005539552 nova_compute[233724]: 2025-11-29 08:20:42.918 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:20:42 np0005539552 nova_compute[233724]: 2025-11-29 08:20:42.924 233728 DEBUG nova.network.neutron [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Updating instance_info_cache with network_info: [{"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:20:42 np0005539552 nova_compute[233724]: 2025-11-29 08:20:42.943 233728 DEBUG oslo_concurrency.lockutils [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Releasing lock "refresh_cache-258dfc76-0ea9-4521-a3fc-5d64b3632451" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:20:43 np0005539552 ovn_controller[133798]: 2025-11-29T08:20:43Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f3:b9:47 10.100.0.8
Nov 29 03:20:43 np0005539552 ovn_controller[133798]: 2025-11-29T08:20:43Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f3:b9:47 10.100.0.8
Nov 29 03:20:43 np0005539552 nova_compute[233724]: 2025-11-29 08:20:43.033 233728 DEBUG nova.virt.libvirt.driver [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Nov 29 03:20:43 np0005539552 nova_compute[233724]: 2025-11-29 08:20:43.034 233728 DEBUG nova.virt.libvirt.volume.remotefs [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Creating file /var/lib/nova/instances/258dfc76-0ea9-4521-a3fc-5d64b3632451/32da07062069414bbca7af3971abeb64.tmp on remote host 192.168.122.100 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Nov 29 03:20:43 np0005539552 nova_compute[233724]: 2025-11-29 08:20:43.035 233728 DEBUG oslo_concurrency.processutils [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/258dfc76-0ea9-4521-a3fc-5d64b3632451/32da07062069414bbca7af3971abeb64.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:43 np0005539552 nova_compute[233724]: 2025-11-29 08:20:43.125 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:43 np0005539552 nova_compute[233724]: 2025-11-29 08:20:43.545 233728 DEBUG oslo_concurrency.processutils [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/258dfc76-0ea9-4521-a3fc-5d64b3632451/32da07062069414bbca7af3971abeb64.tmp" returned: 1 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:43 np0005539552 nova_compute[233724]: 2025-11-29 08:20:43.546 233728 DEBUG oslo_concurrency.processutils [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] 'ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/258dfc76-0ea9-4521-a3fc-5d64b3632451/32da07062069414bbca7af3971abeb64.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 29 03:20:43 np0005539552 nova_compute[233724]: 2025-11-29 08:20:43.546 233728 DEBUG nova.virt.libvirt.volume.remotefs [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Creating directory /var/lib/nova/instances/258dfc76-0ea9-4521-a3fc-5d64b3632451 on remote host 192.168.122.100 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Nov 29 03:20:43 np0005539552 nova_compute[233724]: 2025-11-29 08:20:43.546 233728 DEBUG oslo_concurrency.processutils [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/258dfc76-0ea9-4521-a3fc-5d64b3632451 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:43.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:43 np0005539552 nova_compute[233724]: 2025-11-29 08:20:43.763 233728 DEBUG oslo_concurrency.processutils [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/258dfc76-0ea9-4521-a3fc-5d64b3632451" returned: 0 in 0.216s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:43 np0005539552 nova_compute[233724]: 2025-11-29 08:20:43.771 233728 DEBUG nova.virt.libvirt.driver [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:20:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:44.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:44.902 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:45.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:46 np0005539552 kernel: tap524180cf-27 (unregistering): left promiscuous mode
Nov 29 03:20:46 np0005539552 NetworkManager[48926]: <info>  [1764404446.0775] device (tap524180cf-27): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:20:46 np0005539552 ovn_controller[133798]: 2025-11-29T08:20:46Z|00508|binding|INFO|Releasing lport 524180cf-279c-48d6-8bf1-04f8f159aef6 from this chassis (sb_readonly=0)
Nov 29 03:20:46 np0005539552 ovn_controller[133798]: 2025-11-29T08:20:46Z|00509|binding|INFO|Setting lport 524180cf-279c-48d6-8bf1-04f8f159aef6 down in Southbound
Nov 29 03:20:46 np0005539552 nova_compute[233724]: 2025-11-29 08:20:46.093 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:46 np0005539552 ovn_controller[133798]: 2025-11-29T08:20:46Z|00510|binding|INFO|Removing iface tap524180cf-27 ovn-installed in OVS
Nov 29 03:20:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:46.102 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:49:96 10.100.0.5'], port_security=['fa:16:3e:b8:49:96 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '258dfc76-0ea9-4521-a3fc-5d64b3632451', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '26e3508b949a4dbf960d7befc8f27869', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'f8b3ac18-c5ae-4ce5-b905-769d2e675d6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.241', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37614949-afe4-4907-8dd7-b52152148378, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=524180cf-279c-48d6-8bf1-04f8f159aef6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:20:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:46.104 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 524180cf-279c-48d6-8bf1-04f8f159aef6 in datapath 58fd104d-4342-482d-ae9e-dbb4b9fa6788 unbound from our chassis#033[00m
Nov 29 03:20:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:46.107 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 58fd104d-4342-482d-ae9e-dbb4b9fa6788, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:20:46 np0005539552 nova_compute[233724]: 2025-11-29 08:20:46.109 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:46.109 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[7e58903a-0a95-4536-9c66-0c63199f0ff9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:46.111 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 namespace which is not needed anymore#033[00m
Nov 29 03:20:46 np0005539552 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d0000006c.scope: Deactivated successfully.
Nov 29 03:20:46 np0005539552 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d0000006c.scope: Consumed 17.124s CPU time.
Nov 29 03:20:46 np0005539552 systemd-machined[196379]: Machine qemu-47-instance-0000006c terminated.
Nov 29 03:20:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:46.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:46 np0005539552 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[280774]: [NOTICE]   (280778) : haproxy version is 2.8.14-c23fe91
Nov 29 03:20:46 np0005539552 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[280774]: [NOTICE]   (280778) : path to executable is /usr/sbin/haproxy
Nov 29 03:20:46 np0005539552 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[280774]: [WARNING]  (280778) : Exiting Master process...
Nov 29 03:20:46 np0005539552 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[280774]: [WARNING]  (280778) : Exiting Master process...
Nov 29 03:20:46 np0005539552 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[280774]: [ALERT]    (280778) : Current worker (280780) exited with code 143 (Terminated)
Nov 29 03:20:46 np0005539552 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[280774]: [WARNING]  (280778) : All workers exited. Exiting... (0)
Nov 29 03:20:46 np0005539552 systemd[1]: libpod-a6b40a28fa47a9502647693fbaa1de68d3cc6a77c76fc707b17e7d38f32ae1f8.scope: Deactivated successfully.
Nov 29 03:20:46 np0005539552 podman[283195]: 2025-11-29 08:20:46.307334576 +0000 UTC m=+0.074294777 container died a6b40a28fa47a9502647693fbaa1de68d3cc6a77c76fc707b17e7d38f32ae1f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 03:20:46 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a6b40a28fa47a9502647693fbaa1de68d3cc6a77c76fc707b17e7d38f32ae1f8-userdata-shm.mount: Deactivated successfully.
Nov 29 03:20:46 np0005539552 systemd[1]: var-lib-containers-storage-overlay-41967c2e3bb4c46a5e5a42fa9884f6669e59e03429fd544dfad50c07ff3d3f2e-merged.mount: Deactivated successfully.
Nov 29 03:20:46 np0005539552 podman[283195]: 2025-11-29 08:20:46.357557345 +0000 UTC m=+0.124517516 container cleanup a6b40a28fa47a9502647693fbaa1de68d3cc6a77c76fc707b17e7d38f32ae1f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:20:46 np0005539552 systemd[1]: libpod-conmon-a6b40a28fa47a9502647693fbaa1de68d3cc6a77c76fc707b17e7d38f32ae1f8.scope: Deactivated successfully.
Nov 29 03:20:46 np0005539552 nova_compute[233724]: 2025-11-29 08:20:46.390 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:46 np0005539552 podman[283237]: 2025-11-29 08:20:46.426279571 +0000 UTC m=+0.045121153 container remove a6b40a28fa47a9502647693fbaa1de68d3cc6a77c76fc707b17e7d38f32ae1f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:20:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:46.434 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3bace2b3-0faf-4b4c-bd4c-2db7f3b4e5cb]: (4, ('Sat Nov 29 08:20:46 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 (a6b40a28fa47a9502647693fbaa1de68d3cc6a77c76fc707b17e7d38f32ae1f8)\na6b40a28fa47a9502647693fbaa1de68d3cc6a77c76fc707b17e7d38f32ae1f8\nSat Nov 29 08:20:46 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 (a6b40a28fa47a9502647693fbaa1de68d3cc6a77c76fc707b17e7d38f32ae1f8)\na6b40a28fa47a9502647693fbaa1de68d3cc6a77c76fc707b17e7d38f32ae1f8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:46.435 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b48a820e-1cf2-4e33-9adc-8be0e7b4bd10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:46.436 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58fd104d-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:46 np0005539552 nova_compute[233724]: 2025-11-29 08:20:46.439 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:46 np0005539552 kernel: tap58fd104d-40: left promiscuous mode
Nov 29 03:20:46 np0005539552 nova_compute[233724]: 2025-11-29 08:20:46.471 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:46.474 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[1ee45b0e-b213-444b-b1f1-fd65946a2637]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:46.493 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6c74e078-3fb2-48df-ad7b-acfe78f5cacf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:46 np0005539552 nova_compute[233724]: 2025-11-29 08:20:46.494 233728 DEBUG nova.compute.manager [req-05966754-7c54-4e50-af0e-7f4d2358513a req-d1068a83-b6f6-4984-bd42-9b22956cfceb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received event network-vif-unplugged-524180cf-279c-48d6-8bf1-04f8f159aef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:46 np0005539552 nova_compute[233724]: 2025-11-29 08:20:46.494 233728 DEBUG oslo_concurrency.lockutils [req-05966754-7c54-4e50-af0e-7f4d2358513a req-d1068a83-b6f6-4984-bd42-9b22956cfceb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:46 np0005539552 nova_compute[233724]: 2025-11-29 08:20:46.495 233728 DEBUG oslo_concurrency.lockutils [req-05966754-7c54-4e50-af0e-7f4d2358513a req-d1068a83-b6f6-4984-bd42-9b22956cfceb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:46.494 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[1770117b-58b9-4ae6-b05e-680e803ba8fc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:46 np0005539552 nova_compute[233724]: 2025-11-29 08:20:46.495 233728 DEBUG oslo_concurrency.lockutils [req-05966754-7c54-4e50-af0e-7f4d2358513a req-d1068a83-b6f6-4984-bd42-9b22956cfceb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:46 np0005539552 nova_compute[233724]: 2025-11-29 08:20:46.495 233728 DEBUG nova.compute.manager [req-05966754-7c54-4e50-af0e-7f4d2358513a req-d1068a83-b6f6-4984-bd42-9b22956cfceb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] No waiting events found dispatching network-vif-unplugged-524180cf-279c-48d6-8bf1-04f8f159aef6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:20:46 np0005539552 nova_compute[233724]: 2025-11-29 08:20:46.496 233728 WARNING nova.compute.manager [req-05966754-7c54-4e50-af0e-7f4d2358513a req-d1068a83-b6f6-4984-bd42-9b22956cfceb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received unexpected event network-vif-unplugged-524180cf-279c-48d6-8bf1-04f8f159aef6 for instance with vm_state active and task_state resize_migrating.#033[00m
Nov 29 03:20:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:46.516 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[9059f88f-9e1d-4280-85ff-b673e2ac70da]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 740143, 'reachable_time': 19835, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283255, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:46.518 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:20:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:20:46.518 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[69473b4a-0882-4422-8661-3c69098ef441]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:46 np0005539552 systemd[1]: run-netns-ovnmeta\x2d58fd104d\x2d4342\x2d482d\x2dae9e\x2ddbb4b9fa6788.mount: Deactivated successfully.
Nov 29 03:20:46 np0005539552 nova_compute[233724]: 2025-11-29 08:20:46.693 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:46 np0005539552 nova_compute[233724]: 2025-11-29 08:20:46.791 233728 INFO nova.virt.libvirt.driver [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Instance shutdown successfully after 3 seconds.#033[00m
Nov 29 03:20:46 np0005539552 nova_compute[233724]: 2025-11-29 08:20:46.797 233728 INFO nova.virt.libvirt.driver [-] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Instance destroyed successfully.#033[00m
Nov 29 03:20:46 np0005539552 nova_compute[233724]: 2025-11-29 08:20:46.798 233728 DEBUG nova.virt.libvirt.vif [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:17:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1950416616',display_name='tempest-ServerActionsTestJSON-server-1950416616',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1950416616',id=108,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHzpWtKVBxR8y0ptyf26y7qDtzaZ8kbONkoZ9pomjaUJfrobt3UrzOwJRKUVsAcnHq9vyCWex553L84ouC5hX916iXo50xuUU5ZZ/mR8SlhwWlkwNt3Z2Xuyrzlm/13P0A==',key_name='tempest-keypair-2034735121',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:17:32Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='26e3508b949a4dbf960d7befc8f27869',ramdisk_id='',reservation_id='r-f50l1w69',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-2111371935',owner_user_name='tempest-ServerActionsTestJSON-2111371935-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:20:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='80ceb9112b3a4f119c05f21fd617af11',uuid=258dfc76-0ea9-4521-a3fc-5d64b3632451,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": 
"tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1145729544-network", "vif_mac": "fa:16:3e:b8:49:96"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:20:46 np0005539552 nova_compute[233724]: 2025-11-29 08:20:46.799 233728 DEBUG nova.network.os_vif_util [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converting VIF {"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1145729544-network", "vif_mac": "fa:16:3e:b8:49:96"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:20:46 np0005539552 nova_compute[233724]: 2025-11-29 08:20:46.800 233728 DEBUG nova.network.os_vif_util [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b8:49:96,bridge_name='br-int',has_traffic_filtering=True,id=524180cf-279c-48d6-8bf1-04f8f159aef6,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap524180cf-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:20:46 np0005539552 nova_compute[233724]: 2025-11-29 08:20:46.800 233728 DEBUG os_vif [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:49:96,bridge_name='br-int',has_traffic_filtering=True,id=524180cf-279c-48d6-8bf1-04f8f159aef6,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap524180cf-27') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:20:46 np0005539552 nova_compute[233724]: 2025-11-29 08:20:46.802 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:46 np0005539552 nova_compute[233724]: 2025-11-29 08:20:46.802 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap524180cf-27, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:46 np0005539552 nova_compute[233724]: 2025-11-29 08:20:46.804 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:46 np0005539552 nova_compute[233724]: 2025-11-29 08:20:46.805 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:46 np0005539552 nova_compute[233724]: 2025-11-29 08:20:46.807 233728 INFO os_vif [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:49:96,bridge_name='br-int',has_traffic_filtering=True,id=524180cf-279c-48d6-8bf1-04f8f159aef6,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap524180cf-27')#033[00m
Nov 29 03:20:46 np0005539552 nova_compute[233724]: 2025-11-29 08:20:46.812 233728 DEBUG nova.virt.libvirt.driver [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:20:46 np0005539552 nova_compute[233724]: 2025-11-29 08:20:46.812 233728 DEBUG nova.virt.libvirt.driver [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:20:46 np0005539552 nova_compute[233724]: 2025-11-29 08:20:46.959 233728 DEBUG neutronclient.v2_0.client [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 524180cf-279c-48d6-8bf1-04f8f159aef6 for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Nov 29 03:20:47 np0005539552 nova_compute[233724]: 2025-11-29 08:20:47.073 233728 DEBUG oslo_concurrency.lockutils [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:47 np0005539552 nova_compute[233724]: 2025-11-29 08:20:47.074 233728 DEBUG oslo_concurrency.lockutils [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:47 np0005539552 nova_compute[233724]: 2025-11-29 08:20:47.075 233728 DEBUG oslo_concurrency.lockutils [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:47.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:48 np0005539552 nova_compute[233724]: 2025-11-29 08:20:47.999 233728 DEBUG nova.compute.manager [req-ddc13388-d0a1-490d-97f6-432e4e021666 req-e71f8f3f-6e07-4a38-a952-2eab17d7cf0a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received event network-changed-524180cf-279c-48d6-8bf1-04f8f159aef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:48 np0005539552 nova_compute[233724]: 2025-11-29 08:20:47.999 233728 DEBUG nova.compute.manager [req-ddc13388-d0a1-490d-97f6-432e4e021666 req-e71f8f3f-6e07-4a38-a952-2eab17d7cf0a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Refreshing instance network info cache due to event network-changed-524180cf-279c-48d6-8bf1-04f8f159aef6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:20:48 np0005539552 nova_compute[233724]: 2025-11-29 08:20:48.000 233728 DEBUG oslo_concurrency.lockutils [req-ddc13388-d0a1-490d-97f6-432e4e021666 req-e71f8f3f-6e07-4a38-a952-2eab17d7cf0a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-258dfc76-0ea9-4521-a3fc-5d64b3632451" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:20:48 np0005539552 nova_compute[233724]: 2025-11-29 08:20:48.000 233728 DEBUG oslo_concurrency.lockutils [req-ddc13388-d0a1-490d-97f6-432e4e021666 req-e71f8f3f-6e07-4a38-a952-2eab17d7cf0a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-258dfc76-0ea9-4521-a3fc-5d64b3632451" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:20:48 np0005539552 nova_compute[233724]: 2025-11-29 08:20:48.000 233728 DEBUG nova.network.neutron [req-ddc13388-d0a1-490d-97f6-432e4e021666 req-e71f8f3f-6e07-4a38-a952-2eab17d7cf0a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Refreshing network info cache for port 524180cf-279c-48d6-8bf1-04f8f159aef6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:20:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:48.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:48 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:20:48 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:20:48 np0005539552 nova_compute[233724]: 2025-11-29 08:20:48.797 233728 DEBUG nova.compute.manager [req-a72075e0-9d7b-4452-aab0-e96f611fd8d5 req-4b6de820-94cf-4da0-bd3a-ecc1c72a4d6d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received event network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:48 np0005539552 nova_compute[233724]: 2025-11-29 08:20:48.798 233728 DEBUG oslo_concurrency.lockutils [req-a72075e0-9d7b-4452-aab0-e96f611fd8d5 req-4b6de820-94cf-4da0-bd3a-ecc1c72a4d6d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:48 np0005539552 nova_compute[233724]: 2025-11-29 08:20:48.798 233728 DEBUG oslo_concurrency.lockutils [req-a72075e0-9d7b-4452-aab0-e96f611fd8d5 req-4b6de820-94cf-4da0-bd3a-ecc1c72a4d6d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:48 np0005539552 nova_compute[233724]: 2025-11-29 08:20:48.798 233728 DEBUG oslo_concurrency.lockutils [req-a72075e0-9d7b-4452-aab0-e96f611fd8d5 req-4b6de820-94cf-4da0-bd3a-ecc1c72a4d6d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:48 np0005539552 nova_compute[233724]: 2025-11-29 08:20:48.798 233728 DEBUG nova.compute.manager [req-a72075e0-9d7b-4452-aab0-e96f611fd8d5 req-4b6de820-94cf-4da0-bd3a-ecc1c72a4d6d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] No waiting events found dispatching network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:20:48 np0005539552 nova_compute[233724]: 2025-11-29 08:20:48.798 233728 WARNING nova.compute.manager [req-a72075e0-9d7b-4452-aab0-e96f611fd8d5 req-4b6de820-94cf-4da0-bd3a-ecc1c72a4d6d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received unexpected event network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:20:49 np0005539552 nova_compute[233724]: 2025-11-29 08:20:49.525 233728 DEBUG nova.network.neutron [req-ddc13388-d0a1-490d-97f6-432e4e021666 req-e71f8f3f-6e07-4a38-a952-2eab17d7cf0a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Updated VIF entry in instance network info cache for port 524180cf-279c-48d6-8bf1-04f8f159aef6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:20:49 np0005539552 nova_compute[233724]: 2025-11-29 08:20:49.526 233728 DEBUG nova.network.neutron [req-ddc13388-d0a1-490d-97f6-432e4e021666 req-e71f8f3f-6e07-4a38-a952-2eab17d7cf0a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Updating instance_info_cache with network_info: [{"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:20:49 np0005539552 nova_compute[233724]: 2025-11-29 08:20:49.539 233728 DEBUG oslo_concurrency.lockutils [req-ddc13388-d0a1-490d-97f6-432e4e021666 req-e71f8f3f-6e07-4a38-a952-2eab17d7cf0a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-258dfc76-0ea9-4521-a3fc-5d64b3632451" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:20:49 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e313 e313: 3 total, 3 up, 3 in
Nov 29 03:20:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:49.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:50.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:50 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:20:50 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3442838477' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:20:50 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:20:50 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3442838477' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:20:51 np0005539552 nova_compute[233724]: 2025-11-29 08:20:51.036 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:51 np0005539552 nova_compute[233724]: 2025-11-29 08:20:51.392 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:51.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:51 np0005539552 nova_compute[233724]: 2025-11-29 08:20:51.805 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:52.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:53.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:54.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:54 np0005539552 nova_compute[233724]: 2025-11-29 08:20:54.542 233728 DEBUG nova.compute.manager [req-6a5c5593-a91d-4ce9-a612-f2bc6014b332 req-933553b1-65f7-43a3-8de9-318e3c5f572b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received event network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:54 np0005539552 nova_compute[233724]: 2025-11-29 08:20:54.543 233728 DEBUG oslo_concurrency.lockutils [req-6a5c5593-a91d-4ce9-a612-f2bc6014b332 req-933553b1-65f7-43a3-8de9-318e3c5f572b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:54 np0005539552 nova_compute[233724]: 2025-11-29 08:20:54.543 233728 DEBUG oslo_concurrency.lockutils [req-6a5c5593-a91d-4ce9-a612-f2bc6014b332 req-933553b1-65f7-43a3-8de9-318e3c5f572b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:54 np0005539552 nova_compute[233724]: 2025-11-29 08:20:54.543 233728 DEBUG oslo_concurrency.lockutils [req-6a5c5593-a91d-4ce9-a612-f2bc6014b332 req-933553b1-65f7-43a3-8de9-318e3c5f572b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:54 np0005539552 nova_compute[233724]: 2025-11-29 08:20:54.544 233728 DEBUG nova.compute.manager [req-6a5c5593-a91d-4ce9-a612-f2bc6014b332 req-933553b1-65f7-43a3-8de9-318e3c5f572b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] No waiting events found dispatching network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:20:54 np0005539552 nova_compute[233724]: 2025-11-29 08:20:54.544 233728 WARNING nova.compute.manager [req-6a5c5593-a91d-4ce9-a612-f2bc6014b332 req-933553b1-65f7-43a3-8de9-318e3c5f572b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received unexpected event network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 for instance with vm_state resized and task_state None.#033[00m
Nov 29 03:20:54 np0005539552 nova_compute[233724]: 2025-11-29 08:20:54.646 233728 DEBUG oslo_concurrency.lockutils [None req-a957dede-48ec-4b45-b91d-3d68c66ad51c 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "258dfc76-0ea9-4521-a3fc-5d64b3632451" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:54 np0005539552 nova_compute[233724]: 2025-11-29 08:20:54.647 233728 DEBUG oslo_concurrency.lockutils [None req-a957dede-48ec-4b45-b91d-3d68c66ad51c 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:54 np0005539552 nova_compute[233724]: 2025-11-29 08:20:54.647 233728 DEBUG nova.compute.manager [None req-a957dede-48ec-4b45-b91d-3d68c66ad51c 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Going to confirm migration 17 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679#033[00m
Nov 29 03:20:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:55.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:56.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:56 np0005539552 nova_compute[233724]: 2025-11-29 08:20:56.246 233728 DEBUG neutronclient.v2_0.client [None req-a957dede-48ec-4b45-b91d-3d68c66ad51c 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 524180cf-279c-48d6-8bf1-04f8f159aef6 for host compute-2.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Nov 29 03:20:56 np0005539552 nova_compute[233724]: 2025-11-29 08:20:56.247 233728 DEBUG oslo_concurrency.lockutils [None req-a957dede-48ec-4b45-b91d-3d68c66ad51c 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "refresh_cache-258dfc76-0ea9-4521-a3fc-5d64b3632451" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:20:56 np0005539552 nova_compute[233724]: 2025-11-29 08:20:56.247 233728 DEBUG oslo_concurrency.lockutils [None req-a957dede-48ec-4b45-b91d-3d68c66ad51c 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquired lock "refresh_cache-258dfc76-0ea9-4521-a3fc-5d64b3632451" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:20:56 np0005539552 nova_compute[233724]: 2025-11-29 08:20:56.248 233728 DEBUG nova.network.neutron [None req-a957dede-48ec-4b45-b91d-3d68c66ad51c 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:20:56 np0005539552 nova_compute[233724]: 2025-11-29 08:20:56.248 233728 DEBUG nova.objects.instance [None req-a957dede-48ec-4b45-b91d-3d68c66ad51c 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'info_cache' on Instance uuid 258dfc76-0ea9-4521-a3fc-5d64b3632451 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:56 np0005539552 nova_compute[233724]: 2025-11-29 08:20:56.394 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:56 np0005539552 nova_compute[233724]: 2025-11-29 08:20:56.722 233728 DEBUG nova.compute.manager [req-b37d265c-3f1d-4387-9180-972bbe19d0c5 req-fd80acaa-7f8a-4bba-980d-0980f1f167a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received event network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:56 np0005539552 nova_compute[233724]: 2025-11-29 08:20:56.722 233728 DEBUG oslo_concurrency.lockutils [req-b37d265c-3f1d-4387-9180-972bbe19d0c5 req-fd80acaa-7f8a-4bba-980d-0980f1f167a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:56 np0005539552 nova_compute[233724]: 2025-11-29 08:20:56.723 233728 DEBUG oslo_concurrency.lockutils [req-b37d265c-3f1d-4387-9180-972bbe19d0c5 req-fd80acaa-7f8a-4bba-980d-0980f1f167a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:56 np0005539552 nova_compute[233724]: 2025-11-29 08:20:56.723 233728 DEBUG oslo_concurrency.lockutils [req-b37d265c-3f1d-4387-9180-972bbe19d0c5 req-fd80acaa-7f8a-4bba-980d-0980f1f167a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:56 np0005539552 nova_compute[233724]: 2025-11-29 08:20:56.723 233728 DEBUG nova.compute.manager [req-b37d265c-3f1d-4387-9180-972bbe19d0c5 req-fd80acaa-7f8a-4bba-980d-0980f1f167a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] No waiting events found dispatching network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:20:56 np0005539552 nova_compute[233724]: 2025-11-29 08:20:56.723 233728 WARNING nova.compute.manager [req-b37d265c-3f1d-4387-9180-972bbe19d0c5 req-fd80acaa-7f8a-4bba-980d-0980f1f167a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received unexpected event network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 for instance with vm_state resized and task_state None.#033[00m
Nov 29 03:20:56 np0005539552 nova_compute[233724]: 2025-11-29 08:20:56.807 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:57.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:57 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:58.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:58 np0005539552 nova_compute[233724]: 2025-11-29 08:20:58.758 233728 DEBUG nova.network.neutron [None req-a957dede-48ec-4b45-b91d-3d68c66ad51c 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Updating instance_info_cache with network_info: [{"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:20:59 np0005539552 nova_compute[233724]: 2025-11-29 08:20:59.339 233728 DEBUG oslo_concurrency.lockutils [None req-a957dede-48ec-4b45-b91d-3d68c66ad51c 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Releasing lock "refresh_cache-258dfc76-0ea9-4521-a3fc-5d64b3632451" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:20:59 np0005539552 nova_compute[233724]: 2025-11-29 08:20:59.339 233728 DEBUG nova.objects.instance [None req-a957dede-48ec-4b45-b91d-3d68c66ad51c 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'migration_context' on Instance uuid 258dfc76-0ea9-4521-a3fc-5d64b3632451 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:59 np0005539552 nova_compute[233724]: 2025-11-29 08:20:59.446 233728 DEBUG nova.storage.rbd_utils [None req-a957dede-48ec-4b45-b91d-3d68c66ad51c 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] removing snapshot(nova-resize) on rbd image(258dfc76-0ea9-4521-a3fc-5d64b3632451_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:20:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:20:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:59.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:00.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:00 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e314 e314: 3 total, 3 up, 3 in
Nov 29 03:21:01 np0005539552 nova_compute[233724]: 2025-11-29 08:21:01.333 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404446.3322806, 258dfc76-0ea9-4521-a3fc-5d64b3632451 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:21:01 np0005539552 nova_compute[233724]: 2025-11-29 08:21:01.334 233728 INFO nova.compute.manager [-] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:21:01 np0005539552 nova_compute[233724]: 2025-11-29 08:21:01.386 233728 DEBUG nova.compute.manager [None req-504b1357-79ee-4ef5-bcc1-9975ed0a2788 - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:01 np0005539552 nova_compute[233724]: 2025-11-29 08:21:01.389 233728 DEBUG nova.compute.manager [None req-504b1357-79ee-4ef5-bcc1-9975ed0a2788 - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: resized, current task_state: None, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:21:01 np0005539552 nova_compute[233724]: 2025-11-29 08:21:01.396 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:01 np0005539552 nova_compute[233724]: 2025-11-29 08:21:01.478 233728 INFO nova.compute.manager [None req-504b1357-79ee-4ef5-bcc1-9975ed0a2788 - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-2.ctlplane.example.com#033[00m
Nov 29 03:21:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:01.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:01 np0005539552 nova_compute[233724]: 2025-11-29 08:21:01.677 233728 DEBUG nova.virt.libvirt.vif [None req-a957dede-48ec-4b45-b91d-3d68c66ad51c 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:17:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1950416616',display_name='tempest-ServerActionsTestJSON-server-1950416616',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1950416616',id=108,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHzpWtKVBxR8y0ptyf26y7qDtzaZ8kbONkoZ9pomjaUJfrobt3UrzOwJRKUVsAcnHq9vyCWex553L84ouC5hX916iXo50xuUU5ZZ/mR8SlhwWlkwNt3Z2Xuyrzlm/13P0A==',key_name='tempest-keypair-2034735121',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:20:51Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='26e3508b949a4dbf960d7befc8f27869',ramdisk_id='',reservation_id='r-f50l1w69',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-2111371935',owner_user_name='tempest-ServerActionsTestJSON-2111371935-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:20:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='80ceb9112b3a4f119c05f21fd617af11',uuid=258dfc76-0ea9-4521-a3fc-5d64b3632451,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:21:01 np0005539552 nova_compute[233724]: 2025-11-29 08:21:01.678 233728 DEBUG nova.network.os_vif_util [None req-a957dede-48ec-4b45-b91d-3d68c66ad51c 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converting VIF {"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:21:01 np0005539552 nova_compute[233724]: 2025-11-29 08:21:01.679 233728 DEBUG nova.network.os_vif_util [None req-a957dede-48ec-4b45-b91d-3d68c66ad51c 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b8:49:96,bridge_name='br-int',has_traffic_filtering=True,id=524180cf-279c-48d6-8bf1-04f8f159aef6,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap524180cf-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:21:01 np0005539552 nova_compute[233724]: 2025-11-29 08:21:01.679 233728 DEBUG os_vif [None req-a957dede-48ec-4b45-b91d-3d68c66ad51c 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:49:96,bridge_name='br-int',has_traffic_filtering=True,id=524180cf-279c-48d6-8bf1-04f8f159aef6,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap524180cf-27') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:21:01 np0005539552 nova_compute[233724]: 2025-11-29 08:21:01.681 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:01 np0005539552 nova_compute[233724]: 2025-11-29 08:21:01.681 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap524180cf-27, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:01 np0005539552 nova_compute[233724]: 2025-11-29 08:21:01.681 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:21:01 np0005539552 nova_compute[233724]: 2025-11-29 08:21:01.684 233728 INFO os_vif [None req-a957dede-48ec-4b45-b91d-3d68c66ad51c 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:49:96,bridge_name='br-int',has_traffic_filtering=True,id=524180cf-279c-48d6-8bf1-04f8f159aef6,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap524180cf-27')#033[00m
Nov 29 03:21:01 np0005539552 nova_compute[233724]: 2025-11-29 08:21:01.684 233728 DEBUG oslo_concurrency.lockutils [None req-a957dede-48ec-4b45-b91d-3d68c66ad51c 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:01 np0005539552 nova_compute[233724]: 2025-11-29 08:21:01.684 233728 DEBUG oslo_concurrency.lockutils [None req-a957dede-48ec-4b45-b91d-3d68c66ad51c 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:01 np0005539552 nova_compute[233724]: 2025-11-29 08:21:01.810 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:01 np0005539552 nova_compute[233724]: 2025-11-29 08:21:01.881 233728 DEBUG oslo_concurrency.processutils [None req-a957dede-48ec-4b45-b91d-3d68c66ad51c 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:01 np0005539552 ovn_controller[133798]: 2025-11-29T08:21:01Z|00511|binding|INFO|Releasing lport 79109459-2a40-4b69-936e-ac2a2aa77985 from this chassis (sb_readonly=0)
Nov 29 03:21:01 np0005539552 nova_compute[233724]: 2025-11-29 08:21:01.954 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:02.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:21:02 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1103106095' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:21:02 np0005539552 nova_compute[233724]: 2025-11-29 08:21:02.299 233728 DEBUG oslo_concurrency.processutils [None req-a957dede-48ec-4b45-b91d-3d68c66ad51c 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:02 np0005539552 nova_compute[233724]: 2025-11-29 08:21:02.308 233728 DEBUG nova.compute.provider_tree [None req-a957dede-48ec-4b45-b91d-3d68c66ad51c 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:21:02 np0005539552 nova_compute[233724]: 2025-11-29 08:21:02.379 233728 DEBUG nova.scheduler.client.report [None req-a957dede-48ec-4b45-b91d-3d68c66ad51c 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:21:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:02 np0005539552 nova_compute[233724]: 2025-11-29 08:21:02.883 233728 DEBUG oslo_concurrency.lockutils [None req-a957dede-48ec-4b45-b91d-3d68c66ad51c 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 1.198s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:03 np0005539552 nova_compute[233724]: 2025-11-29 08:21:03.164 233728 INFO nova.scheduler.client.report [None req-a957dede-48ec-4b45-b91d-3d68c66ad51c 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Deleted allocation for migration 638f6982-41f5-48ad-af75-0ac5e6748344#033[00m
Nov 29 03:21:03 np0005539552 nova_compute[233724]: 2025-11-29 08:21:03.411 233728 DEBUG oslo_concurrency.lockutils [None req-a957dede-48ec-4b45-b91d-3d68c66ad51c 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 8.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:03.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:04.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:05.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:06.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:06 np0005539552 nova_compute[233724]: 2025-11-29 08:21:06.398 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:06 np0005539552 nova_compute[233724]: 2025-11-29 08:21:06.812 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:07.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:07 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:07 np0005539552 podman[283429]: 2025-11-29 08:21:07.982602495 +0000 UTC m=+0.066442736 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 03:21:07 np0005539552 podman[283428]: 2025-11-29 08:21:07.987704942 +0000 UTC m=+0.076853026 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 29 03:21:08 np0005539552 podman[283430]: 2025-11-29 08:21:08.009564439 +0000 UTC m=+0.098008374 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 29 03:21:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:08.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:09.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:21:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:10.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:21:11 np0005539552 nova_compute[233724]: 2025-11-29 08:21:11.399 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:11.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:11 np0005539552 nova_compute[233724]: 2025-11-29 08:21:11.814 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e315 e315: 3 total, 3 up, 3 in
Nov 29 03:21:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:12.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:13.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:14.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:14 np0005539552 nova_compute[233724]: 2025-11-29 08:21:14.622 233728 DEBUG oslo_concurrency.lockutils [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "5004bd0f-c699-46d7-b535-b3a7db186a87" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:14 np0005539552 nova_compute[233724]: 2025-11-29 08:21:14.623 233728 DEBUG oslo_concurrency.lockutils [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:14 np0005539552 nova_compute[233724]: 2025-11-29 08:21:14.648 233728 DEBUG nova.compute.manager [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:21:14 np0005539552 nova_compute[233724]: 2025-11-29 08:21:14.738 233728 DEBUG oslo_concurrency.lockutils [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:14 np0005539552 nova_compute[233724]: 2025-11-29 08:21:14.739 233728 DEBUG oslo_concurrency.lockutils [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:14 np0005539552 nova_compute[233724]: 2025-11-29 08:21:14.749 233728 DEBUG nova.virt.hardware [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:21:14 np0005539552 nova_compute[233724]: 2025-11-29 08:21:14.749 233728 INFO nova.compute.claims [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:21:14 np0005539552 nova_compute[233724]: 2025-11-29 08:21:14.835 233728 DEBUG nova.scheduler.client.report [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Refreshing inventories for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:21:14 np0005539552 nova_compute[233724]: 2025-11-29 08:21:14.856 233728 DEBUG nova.scheduler.client.report [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Updating ProviderTree inventory for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:21:14 np0005539552 nova_compute[233724]: 2025-11-29 08:21:14.857 233728 DEBUG nova.compute.provider_tree [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Updating inventory in ProviderTree for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 03:21:14 np0005539552 nova_compute[233724]: 2025-11-29 08:21:14.870 233728 DEBUG nova.scheduler.client.report [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Refreshing aggregate associations for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:21:14 np0005539552 nova_compute[233724]: 2025-11-29 08:21:14.889 233728 DEBUG nova.scheduler.client.report [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Refreshing trait associations for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371, traits: HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_MMX,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_FDC,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:21:14 np0005539552 nova_compute[233724]: 2025-11-29 08:21:14.930 233728 DEBUG oslo_concurrency.processutils [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:15 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:21:15 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/259900248' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:21:15 np0005539552 nova_compute[233724]: 2025-11-29 08:21:15.359 233728 DEBUG oslo_concurrency.processutils [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:15 np0005539552 nova_compute[233724]: 2025-11-29 08:21:15.365 233728 DEBUG nova.compute.provider_tree [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:21:15 np0005539552 nova_compute[233724]: 2025-11-29 08:21:15.379 233728 DEBUG nova.scheduler.client.report [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:21:15 np0005539552 nova_compute[233724]: 2025-11-29 08:21:15.399 233728 DEBUG oslo_concurrency.lockutils [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:15 np0005539552 nova_compute[233724]: 2025-11-29 08:21:15.400 233728 DEBUG nova.compute.manager [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:21:15 np0005539552 nova_compute[233724]: 2025-11-29 08:21:15.446 233728 DEBUG nova.compute.manager [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:21:15 np0005539552 nova_compute[233724]: 2025-11-29 08:21:15.447 233728 DEBUG nova.network.neutron [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:21:15 np0005539552 nova_compute[233724]: 2025-11-29 08:21:15.468 233728 INFO nova.virt.libvirt.driver [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:21:15 np0005539552 nova_compute[233724]: 2025-11-29 08:21:15.490 233728 DEBUG nova.compute.manager [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:21:15 np0005539552 nova_compute[233724]: 2025-11-29 08:21:15.588 233728 DEBUG nova.compute.manager [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:21:15 np0005539552 nova_compute[233724]: 2025-11-29 08:21:15.589 233728 DEBUG nova.virt.libvirt.driver [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:21:15 np0005539552 nova_compute[233724]: 2025-11-29 08:21:15.590 233728 INFO nova.virt.libvirt.driver [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Creating image(s)#033[00m
Nov 29 03:21:15 np0005539552 nova_compute[233724]: 2025-11-29 08:21:15.613 233728 DEBUG nova.storage.rbd_utils [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] rbd image 5004bd0f-c699-46d7-b535-b3a7db186a87_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:15 np0005539552 nova_compute[233724]: 2025-11-29 08:21:15.638 233728 DEBUG nova.storage.rbd_utils [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] rbd image 5004bd0f-c699-46d7-b535-b3a7db186a87_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:15.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:15 np0005539552 nova_compute[233724]: 2025-11-29 08:21:15.662 233728 DEBUG nova.storage.rbd_utils [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] rbd image 5004bd0f-c699-46d7-b535-b3a7db186a87_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:15 np0005539552 nova_compute[233724]: 2025-11-29 08:21:15.665 233728 DEBUG oslo_concurrency.processutils [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:15 np0005539552 nova_compute[233724]: 2025-11-29 08:21:15.730 233728 DEBUG oslo_concurrency.processutils [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:15 np0005539552 nova_compute[233724]: 2025-11-29 08:21:15.731 233728 DEBUG oslo_concurrency.lockutils [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:15 np0005539552 nova_compute[233724]: 2025-11-29 08:21:15.732 233728 DEBUG oslo_concurrency.lockutils [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:15 np0005539552 nova_compute[233724]: 2025-11-29 08:21:15.733 233728 DEBUG oslo_concurrency.lockutils [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:15 np0005539552 nova_compute[233724]: 2025-11-29 08:21:15.758 233728 DEBUG nova.storage.rbd_utils [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] rbd image 5004bd0f-c699-46d7-b535-b3a7db186a87_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:15 np0005539552 nova_compute[233724]: 2025-11-29 08:21:15.760 233728 DEBUG oslo_concurrency.processutils [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 5004bd0f-c699-46d7-b535-b3a7db186a87_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:16 np0005539552 nova_compute[233724]: 2025-11-29 08:21:16.061 233728 DEBUG oslo_concurrency.processutils [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 5004bd0f-c699-46d7-b535-b3a7db186a87_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.300s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:16 np0005539552 nova_compute[233724]: 2025-11-29 08:21:16.125 233728 DEBUG nova.storage.rbd_utils [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] resizing rbd image 5004bd0f-c699-46d7-b535-b3a7db186a87_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:21:16 np0005539552 nova_compute[233724]: 2025-11-29 08:21:16.252 233728 DEBUG nova.objects.instance [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'migration_context' on Instance uuid 5004bd0f-c699-46d7-b535-b3a7db186a87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:21:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:16.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:16 np0005539552 nova_compute[233724]: 2025-11-29 08:21:16.277 233728 DEBUG nova.virt.libvirt.driver [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:21:16 np0005539552 nova_compute[233724]: 2025-11-29 08:21:16.278 233728 DEBUG nova.virt.libvirt.driver [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Ensure instance console log exists: /var/lib/nova/instances/5004bd0f-c699-46d7-b535-b3a7db186a87/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:21:16 np0005539552 nova_compute[233724]: 2025-11-29 08:21:16.279 233728 DEBUG oslo_concurrency.lockutils [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:16 np0005539552 nova_compute[233724]: 2025-11-29 08:21:16.279 233728 DEBUG oslo_concurrency.lockutils [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:16 np0005539552 nova_compute[233724]: 2025-11-29 08:21:16.280 233728 DEBUG oslo_concurrency.lockutils [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:16 np0005539552 nova_compute[233724]: 2025-11-29 08:21:16.317 233728 DEBUG nova.policy [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '80ceb9112b3a4f119c05f21fd617af11', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '26e3508b949a4dbf960d7befc8f27869', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:21:16 np0005539552 nova_compute[233724]: 2025-11-29 08:21:16.401 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:16 np0005539552 nova_compute[233724]: 2025-11-29 08:21:16.816 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:17 np0005539552 nova_compute[233724]: 2025-11-29 08:21:17.266 233728 DEBUG nova.network.neutron [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Successfully created port: d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:21:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:17.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:18.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:18 np0005539552 nova_compute[233724]: 2025-11-29 08:21:18.553 233728 DEBUG nova.network.neutron [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Successfully updated port: d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:21:18 np0005539552 nova_compute[233724]: 2025-11-29 08:21:18.571 233728 DEBUG oslo_concurrency.lockutils [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "refresh_cache-5004bd0f-c699-46d7-b535-b3a7db186a87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:21:18 np0005539552 nova_compute[233724]: 2025-11-29 08:21:18.571 233728 DEBUG oslo_concurrency.lockutils [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquired lock "refresh_cache-5004bd0f-c699-46d7-b535-b3a7db186a87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:21:18 np0005539552 nova_compute[233724]: 2025-11-29 08:21:18.571 233728 DEBUG nova.network.neutron [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:21:18 np0005539552 nova_compute[233724]: 2025-11-29 08:21:18.638 233728 DEBUG nova.compute.manager [req-55e4262c-c8bc-40c3-9358-e8db40a1fc89 req-3432f458-ae68-4628-ab73-f7de430ba5e9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Received event network-changed-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:18 np0005539552 nova_compute[233724]: 2025-11-29 08:21:18.639 233728 DEBUG nova.compute.manager [req-55e4262c-c8bc-40c3-9358-e8db40a1fc89 req-3432f458-ae68-4628-ab73-f7de430ba5e9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Refreshing instance network info cache due to event network-changed-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:21:18 np0005539552 nova_compute[233724]: 2025-11-29 08:21:18.639 233728 DEBUG oslo_concurrency.lockutils [req-55e4262c-c8bc-40c3-9358-e8db40a1fc89 req-3432f458-ae68-4628-ab73-f7de430ba5e9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-5004bd0f-c699-46d7-b535-b3a7db186a87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:21:19 np0005539552 nova_compute[233724]: 2025-11-29 08:21:19.278 233728 DEBUG nova.network.neutron [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:21:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:19.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:20 np0005539552 nova_compute[233724]: 2025-11-29 08:21:20.110 233728 DEBUG nova.network.neutron [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Updating instance_info_cache with network_info: [{"id": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "address": "fa:16:3e:47:25:11", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5e26252-e0", "ovs_interfaceid": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:21:20 np0005539552 nova_compute[233724]: 2025-11-29 08:21:20.141 233728 DEBUG oslo_concurrency.lockutils [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Releasing lock "refresh_cache-5004bd0f-c699-46d7-b535-b3a7db186a87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:21:20 np0005539552 nova_compute[233724]: 2025-11-29 08:21:20.142 233728 DEBUG nova.compute.manager [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Instance network_info: |[{"id": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "address": "fa:16:3e:47:25:11", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5e26252-e0", "ovs_interfaceid": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:21:20 np0005539552 nova_compute[233724]: 2025-11-29 08:21:20.142 233728 DEBUG oslo_concurrency.lockutils [req-55e4262c-c8bc-40c3-9358-e8db40a1fc89 req-3432f458-ae68-4628-ab73-f7de430ba5e9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-5004bd0f-c699-46d7-b535-b3a7db186a87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:21:20 np0005539552 nova_compute[233724]: 2025-11-29 08:21:20.143 233728 DEBUG nova.network.neutron [req-55e4262c-c8bc-40c3-9358-e8db40a1fc89 req-3432f458-ae68-4628-ab73-f7de430ba5e9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Refreshing network info cache for port d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:21:20 np0005539552 nova_compute[233724]: 2025-11-29 08:21:20.146 233728 DEBUG nova.virt.libvirt.driver [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Start _get_guest_xml network_info=[{"id": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "address": "fa:16:3e:47:25:11", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5e26252-e0", "ovs_interfaceid": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:21:20 np0005539552 nova_compute[233724]: 2025-11-29 08:21:20.150 233728 WARNING nova.virt.libvirt.driver [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:21:20 np0005539552 nova_compute[233724]: 2025-11-29 08:21:20.156 233728 DEBUG nova.virt.libvirt.host [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:21:20 np0005539552 nova_compute[233724]: 2025-11-29 08:21:20.157 233728 DEBUG nova.virt.libvirt.host [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:21:20 np0005539552 nova_compute[233724]: 2025-11-29 08:21:20.161 233728 DEBUG nova.virt.libvirt.host [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:21:20 np0005539552 nova_compute[233724]: 2025-11-29 08:21:20.161 233728 DEBUG nova.virt.libvirt.host [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:21:20 np0005539552 nova_compute[233724]: 2025-11-29 08:21:20.163 233728 DEBUG nova.virt.libvirt.driver [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:21:20 np0005539552 nova_compute[233724]: 2025-11-29 08:21:20.163 233728 DEBUG nova.virt.hardware [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:21:20 np0005539552 nova_compute[233724]: 2025-11-29 08:21:20.164 233728 DEBUG nova.virt.hardware [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:21:20 np0005539552 nova_compute[233724]: 2025-11-29 08:21:20.164 233728 DEBUG nova.virt.hardware [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:21:20 np0005539552 nova_compute[233724]: 2025-11-29 08:21:20.164 233728 DEBUG nova.virt.hardware [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:21:20 np0005539552 nova_compute[233724]: 2025-11-29 08:21:20.164 233728 DEBUG nova.virt.hardware [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:21:20 np0005539552 nova_compute[233724]: 2025-11-29 08:21:20.165 233728 DEBUG nova.virt.hardware [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:21:20 np0005539552 nova_compute[233724]: 2025-11-29 08:21:20.165 233728 DEBUG nova.virt.hardware [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:21:20 np0005539552 nova_compute[233724]: 2025-11-29 08:21:20.165 233728 DEBUG nova.virt.hardware [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:21:20 np0005539552 nova_compute[233724]: 2025-11-29 08:21:20.165 233728 DEBUG nova.virt.hardware [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:21:20 np0005539552 nova_compute[233724]: 2025-11-29 08:21:20.166 233728 DEBUG nova.virt.hardware [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:21:20 np0005539552 nova_compute[233724]: 2025-11-29 08:21:20.166 233728 DEBUG nova.virt.hardware [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:21:20 np0005539552 nova_compute[233724]: 2025-11-29 08:21:20.170 233728 DEBUG oslo_concurrency.processutils [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:20.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:20 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:21:20 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3677631476' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:21:20 np0005539552 nova_compute[233724]: 2025-11-29 08:21:20.619 233728 DEBUG oslo_concurrency.processutils [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:21:20.627 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:21:20.628 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:21:20.628 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:20 np0005539552 nova_compute[233724]: 2025-11-29 08:21:20.643 233728 DEBUG nova.storage.rbd_utils [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] rbd image 5004bd0f-c699-46d7-b535-b3a7db186a87_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:20 np0005539552 nova_compute[233724]: 2025-11-29 08:21:20.647 233728 DEBUG oslo_concurrency.processutils [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:21 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:21:21 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3171391090' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:21:21 np0005539552 nova_compute[233724]: 2025-11-29 08:21:21.067 233728 DEBUG oslo_concurrency.processutils [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:21 np0005539552 nova_compute[233724]: 2025-11-29 08:21:21.069 233728 DEBUG nova.virt.libvirt.vif [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:21:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1253218278',display_name='tempest-ServerActionsTestJSON-server-1253218278',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1253218278',id=121,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHzpWtKVBxR8y0ptyf26y7qDtzaZ8kbONkoZ9pomjaUJfrobt3UrzOwJRKUVsAcnHq9vyCWex553L84ouC5hX916iXo50xuUU5ZZ/mR8SlhwWlkwNt3Z2Xuyrzlm/13P0A==',key_name='tempest-keypair-2034735121',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='26e3508b949a4dbf960d7befc8f27869',ramdisk_id='',reservation_id='r-84wu0jt1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-2111371935',owner_user_name='tempest-ServerActionsTestJSON-2111371935-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:21:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='80ceb9112b3a4f119c05f21fd617af11',uuid=5004bd0f-c699-46d7-b535-b3a7db186a87,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "address": "fa:16:3e:47:25:11", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5e26252-e0", "ovs_interfaceid": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:21:21 np0005539552 nova_compute[233724]: 2025-11-29 08:21:21.069 233728 DEBUG nova.network.os_vif_util [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converting VIF {"id": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "address": "fa:16:3e:47:25:11", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5e26252-e0", "ovs_interfaceid": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:21:21 np0005539552 nova_compute[233724]: 2025-11-29 08:21:21.070 233728 DEBUG nova.network.os_vif_util [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:25:11,bridge_name='br-int',has_traffic_filtering=True,id=d5e26252-e0d3-4a6b-8b18-b2f4cb7db432,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5e26252-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:21:21 np0005539552 nova_compute[233724]: 2025-11-29 08:21:21.072 233728 DEBUG nova.objects.instance [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5004bd0f-c699-46d7-b535-b3a7db186a87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:21:21 np0005539552 nova_compute[233724]: 2025-11-29 08:21:21.160 233728 DEBUG nova.virt.libvirt.driver [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:21:21 np0005539552 nova_compute[233724]:  <uuid>5004bd0f-c699-46d7-b535-b3a7db186a87</uuid>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:  <name>instance-00000079</name>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:21:21 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:      <nova:name>tempest-ServerActionsTestJSON-server-1253218278</nova:name>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:21:20</nova:creationTime>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:21:21 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:        <nova:user uuid="80ceb9112b3a4f119c05f21fd617af11">tempest-ServerActionsTestJSON-2111371935-project-member</nova:user>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:        <nova:project uuid="26e3508b949a4dbf960d7befc8f27869">tempest-ServerActionsTestJSON-2111371935</nova:project>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:        <nova:port uuid="d5e26252-e0d3-4a6b-8b18-b2f4cb7db432">
Nov 29 03:21:21 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:      <entry name="serial">5004bd0f-c699-46d7-b535-b3a7db186a87</entry>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:      <entry name="uuid">5004bd0f-c699-46d7-b535-b3a7db186a87</entry>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:21:21 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/5004bd0f-c699-46d7-b535-b3a7db186a87_disk">
Nov 29 03:21:21 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:21:21 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:21:21 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/5004bd0f-c699-46d7-b535-b3a7db186a87_disk.config">
Nov 29 03:21:21 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:21:21 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:21:21 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:47:25:11"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:      <target dev="tapd5e26252-e0"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:21:21 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/5004bd0f-c699-46d7-b535-b3a7db186a87/console.log" append="off"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:21:21 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:21:21 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:21:21 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:21:21 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:21:21 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:21:21 np0005539552 nova_compute[233724]: 2025-11-29 08:21:21.162 233728 DEBUG nova.compute.manager [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Preparing to wait for external event network-vif-plugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:21:21 np0005539552 nova_compute[233724]: 2025-11-29 08:21:21.162 233728 DEBUG oslo_concurrency.lockutils [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:21 np0005539552 nova_compute[233724]: 2025-11-29 08:21:21.162 233728 DEBUG oslo_concurrency.lockutils [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:21 np0005539552 nova_compute[233724]: 2025-11-29 08:21:21.163 233728 DEBUG oslo_concurrency.lockutils [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:21 np0005539552 nova_compute[233724]: 2025-11-29 08:21:21.163 233728 DEBUG nova.virt.libvirt.vif [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:21:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1253218278',display_name='tempest-ServerActionsTestJSON-server-1253218278',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1253218278',id=121,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHzpWtKVBxR8y0ptyf26y7qDtzaZ8kbONkoZ9pomjaUJfrobt3UrzOwJRKUVsAcnHq9vyCWex553L84ouC5hX916iXo50xuUU5ZZ/mR8SlhwWlkwNt3Z2Xuyrzlm/13P0A==',key_name='tempest-keypair-2034735121',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='26e3508b949a4dbf960d7befc8f27869',ramdisk_id='',reservation_id='r-84wu0jt1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-2111371935',owner_user_name='tempest-ServerActionsTestJSON-2111371935-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:21:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='80ceb9112b3a4f119c05f21fd617af11',uuid=5004bd0f-c699-46d7-b535-b3a7db186a87,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "address": "fa:16:3e:47:25:11", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5e26252-e0", "ovs_interfaceid": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:21:21 np0005539552 nova_compute[233724]: 2025-11-29 08:21:21.163 233728 DEBUG nova.network.os_vif_util [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converting VIF {"id": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "address": "fa:16:3e:47:25:11", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5e26252-e0", "ovs_interfaceid": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:21:21 np0005539552 nova_compute[233724]: 2025-11-29 08:21:21.164 233728 DEBUG nova.network.os_vif_util [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:25:11,bridge_name='br-int',has_traffic_filtering=True,id=d5e26252-e0d3-4a6b-8b18-b2f4cb7db432,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5e26252-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:21:21 np0005539552 nova_compute[233724]: 2025-11-29 08:21:21.164 233728 DEBUG os_vif [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:25:11,bridge_name='br-int',has_traffic_filtering=True,id=d5e26252-e0d3-4a6b-8b18-b2f4cb7db432,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5e26252-e0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:21:21 np0005539552 nova_compute[233724]: 2025-11-29 08:21:21.165 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:21 np0005539552 nova_compute[233724]: 2025-11-29 08:21:21.166 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:21 np0005539552 nova_compute[233724]: 2025-11-29 08:21:21.166 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:21:21 np0005539552 nova_compute[233724]: 2025-11-29 08:21:21.169 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:21 np0005539552 nova_compute[233724]: 2025-11-29 08:21:21.169 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd5e26252-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:21 np0005539552 nova_compute[233724]: 2025-11-29 08:21:21.170 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd5e26252-e0, col_values=(('external_ids', {'iface-id': 'd5e26252-e0d3-4a6b-8b18-b2f4cb7db432', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:47:25:11', 'vm-uuid': '5004bd0f-c699-46d7-b535-b3a7db186a87'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:21 np0005539552 nova_compute[233724]: 2025-11-29 08:21:21.172 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:21 np0005539552 NetworkManager[48926]: <info>  [1764404481.1731] manager: (tapd5e26252-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/238)
Nov 29 03:21:21 np0005539552 nova_compute[233724]: 2025-11-29 08:21:21.174 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:21:21 np0005539552 nova_compute[233724]: 2025-11-29 08:21:21.180 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:21 np0005539552 nova_compute[233724]: 2025-11-29 08:21:21.181 233728 INFO os_vif [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:25:11,bridge_name='br-int',has_traffic_filtering=True,id=d5e26252-e0d3-4a6b-8b18-b2f4cb7db432,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5e26252-e0')#033[00m
Nov 29 03:21:21 np0005539552 nova_compute[233724]: 2025-11-29 08:21:21.237 233728 DEBUG nova.virt.libvirt.driver [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:21:21 np0005539552 nova_compute[233724]: 2025-11-29 08:21:21.237 233728 DEBUG nova.virt.libvirt.driver [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:21:21 np0005539552 nova_compute[233724]: 2025-11-29 08:21:21.237 233728 DEBUG nova.virt.libvirt.driver [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] No VIF found with MAC fa:16:3e:47:25:11, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:21:21 np0005539552 nova_compute[233724]: 2025-11-29 08:21:21.238 233728 INFO nova.virt.libvirt.driver [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Using config drive#033[00m
Nov 29 03:21:21 np0005539552 nova_compute[233724]: 2025-11-29 08:21:21.264 233728 DEBUG nova.storage.rbd_utils [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] rbd image 5004bd0f-c699-46d7-b535-b3a7db186a87_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:21 np0005539552 nova_compute[233724]: 2025-11-29 08:21:21.403 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:21 np0005539552 nova_compute[233724]: 2025-11-29 08:21:21.574 233728 INFO nova.virt.libvirt.driver [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Creating config drive at /var/lib/nova/instances/5004bd0f-c699-46d7-b535-b3a7db186a87/disk.config#033[00m
Nov 29 03:21:21 np0005539552 nova_compute[233724]: 2025-11-29 08:21:21.581 233728 DEBUG oslo_concurrency.processutils [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5004bd0f-c699-46d7-b535-b3a7db186a87/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9cb9l4qz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:21 np0005539552 nova_compute[233724]: 2025-11-29 08:21:21.615 233728 DEBUG nova.network.neutron [req-55e4262c-c8bc-40c3-9358-e8db40a1fc89 req-3432f458-ae68-4628-ab73-f7de430ba5e9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Updated VIF entry in instance network info cache for port d5e26252-e0d3-4a6b-8b18-b2f4cb7db432. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:21:21 np0005539552 nova_compute[233724]: 2025-11-29 08:21:21.616 233728 DEBUG nova.network.neutron [req-55e4262c-c8bc-40c3-9358-e8db40a1fc89 req-3432f458-ae68-4628-ab73-f7de430ba5e9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Updating instance_info_cache with network_info: [{"id": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "address": "fa:16:3e:47:25:11", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5e26252-e0", "ovs_interfaceid": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:21:21 np0005539552 nova_compute[233724]: 2025-11-29 08:21:21.637 233728 DEBUG oslo_concurrency.lockutils [req-55e4262c-c8bc-40c3-9358-e8db40a1fc89 req-3432f458-ae68-4628-ab73-f7de430ba5e9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-5004bd0f-c699-46d7-b535-b3a7db186a87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:21:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:21:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:21.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:21:21 np0005539552 nova_compute[233724]: 2025-11-29 08:21:21.724 233728 DEBUG oslo_concurrency.processutils [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5004bd0f-c699-46d7-b535-b3a7db186a87/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9cb9l4qz" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:21 np0005539552 nova_compute[233724]: 2025-11-29 08:21:21.766 233728 DEBUG nova.storage.rbd_utils [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] rbd image 5004bd0f-c699-46d7-b535-b3a7db186a87_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:21 np0005539552 nova_compute[233724]: 2025-11-29 08:21:21.770 233728 DEBUG oslo_concurrency.processutils [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5004bd0f-c699-46d7-b535-b3a7db186a87/disk.config 5004bd0f-c699-46d7-b535-b3a7db186a87_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:22.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:23 np0005539552 nova_compute[233724]: 2025-11-29 08:21:23.387 233728 DEBUG oslo_concurrency.processutils [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5004bd0f-c699-46d7-b535-b3a7db186a87/disk.config 5004bd0f-c699-46d7-b535-b3a7db186a87_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:23 np0005539552 nova_compute[233724]: 2025-11-29 08:21:23.388 233728 INFO nova.virt.libvirt.driver [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Deleting local config drive /var/lib/nova/instances/5004bd0f-c699-46d7-b535-b3a7db186a87/disk.config because it was imported into RBD.#033[00m
Nov 29 03:21:23 np0005539552 kernel: tapd5e26252-e0: entered promiscuous mode
Nov 29 03:21:23 np0005539552 nova_compute[233724]: 2025-11-29 08:21:23.433 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:23 np0005539552 ovn_controller[133798]: 2025-11-29T08:21:23Z|00512|binding|INFO|Claiming lport d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 for this chassis.
Nov 29 03:21:23 np0005539552 ovn_controller[133798]: 2025-11-29T08:21:23Z|00513|binding|INFO|d5e26252-e0d3-4a6b-8b18-b2f4cb7db432: Claiming fa:16:3e:47:25:11 10.100.0.14
Nov 29 03:21:23 np0005539552 NetworkManager[48926]: <info>  [1764404483.4355] manager: (tapd5e26252-e0): new Tun device (/org/freedesktop/NetworkManager/Devices/239)
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:21:23.446 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:25:11 10.100.0.14'], port_security=['fa:16:3e:47:25:11 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '5004bd0f-c699-46d7-b535-b3a7db186a87', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '26e3508b949a4dbf960d7befc8f27869', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f8b3ac18-c5ae-4ce5-b905-769d2e675d6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37614949-afe4-4907-8dd7-b52152148378, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=d5e26252-e0d3-4a6b-8b18-b2f4cb7db432) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:21:23.447 143400 INFO neutron.agent.ovn.metadata.agent [-] Port d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 in datapath 58fd104d-4342-482d-ae9e-dbb4b9fa6788 bound to our chassis#033[00m
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:21:23.448 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 58fd104d-4342-482d-ae9e-dbb4b9fa6788#033[00m
Nov 29 03:21:23 np0005539552 ovn_controller[133798]: 2025-11-29T08:21:23Z|00514|binding|INFO|Setting lport d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 ovn-installed in OVS
Nov 29 03:21:23 np0005539552 ovn_controller[133798]: 2025-11-29T08:21:23Z|00515|binding|INFO|Setting lport d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 up in Southbound
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:21:23.459 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[23e14440-1f15-411d-b267-8e709e88ea3a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:21:23.460 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap58fd104d-41 in ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:21:23 np0005539552 nova_compute[233724]: 2025-11-29 08:21:23.460 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:21:23.461 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap58fd104d-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:21:23.461 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[8742de78-e2bc-4ef2-8d24-771eb3b40d08]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:21:23.462 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2185a7fd-1038-486a-ac10-b946aa79c2fa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:23 np0005539552 systemd-udevd[283878]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:21:23 np0005539552 nova_compute[233724]: 2025-11-29 08:21:23.464 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:21:23.473 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[73219e14-609f-4515-89a9-92981ff19b15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:23 np0005539552 systemd-machined[196379]: New machine qemu-50-instance-00000079.
Nov 29 03:21:23 np0005539552 NetworkManager[48926]: <info>  [1764404483.4801] device (tapd5e26252-e0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:21:23 np0005539552 NetworkManager[48926]: <info>  [1764404483.4814] device (tapd5e26252-e0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:21:23.487 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ecb1df52-0025-46e5-9edc-b1e6932d3072]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:23 np0005539552 systemd[1]: Started Virtual Machine qemu-50-instance-00000079.
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:21:23.517 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[6405d84f-43e8-4e22-9a24-d70be25fbf18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:23 np0005539552 systemd-udevd[283883]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:21:23.522 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[abbb868c-4d22-4f62-86dd-ebb5f5db1bd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:23 np0005539552 NetworkManager[48926]: <info>  [1764404483.5237] manager: (tap58fd104d-40): new Veth device (/org/freedesktop/NetworkManager/Devices/240)
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:21:23.564 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[e62b2342-7a4f-4a7b-8249-5643fb146562]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:21:23.567 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[022e38b0-b98a-4a7f-999d-67223fe7d37b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:23 np0005539552 NetworkManager[48926]: <info>  [1764404483.5935] device (tap58fd104d-40): carrier: link connected
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:21:23.600 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[6c6ee604-270e-4c1a-9f77-7a131ef7f8a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:21:23.617 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[1d5add59-2599-4000-b246-1dc8e6ca843a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58fd104d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:26:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 752928, 'reachable_time': 41770, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283911, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:21:23.634 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6b58fe2d-28e1-4276-aba2-9b7d485e8882]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea8:261e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 752928, 'tstamp': 752928}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283912, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:23.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:21:23.655 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[00e31a31-0a31-4e70-b1bf-faa6de22fb8d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58fd104d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:26:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 752928, 'reachable_time': 41770, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 283913, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:21:23.688 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2f11f4a1-ee60-4b60-8540-4b3a48f1afdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:21:23.740 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ee580f21-cfd6-4e11-a2ad-14df367def1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:23 np0005539552 nova_compute[233724]: 2025-11-29 08:21:23.740 233728 DEBUG nova.compute.manager [req-d38c61e4-33a0-40a3-80d5-5a6e93193cc9 req-878991df-a7a3-4ae7-b847-d243af2f17ec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Received event network-vif-plugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:23 np0005539552 nova_compute[233724]: 2025-11-29 08:21:23.741 233728 DEBUG oslo_concurrency.lockutils [req-d38c61e4-33a0-40a3-80d5-5a6e93193cc9 req-878991df-a7a3-4ae7-b847-d243af2f17ec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:21:23.741 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58fd104d-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:23 np0005539552 nova_compute[233724]: 2025-11-29 08:21:23.741 233728 DEBUG oslo_concurrency.lockutils [req-d38c61e4-33a0-40a3-80d5-5a6e93193cc9 req-878991df-a7a3-4ae7-b847-d243af2f17ec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:21:23.741 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:21:23 np0005539552 nova_compute[233724]: 2025-11-29 08:21:23.742 233728 DEBUG oslo_concurrency.lockutils [req-d38c61e4-33a0-40a3-80d5-5a6e93193cc9 req-878991df-a7a3-4ae7-b847-d243af2f17ec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:21:23.742 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58fd104d-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:23 np0005539552 nova_compute[233724]: 2025-11-29 08:21:23.742 233728 DEBUG nova.compute.manager [req-d38c61e4-33a0-40a3-80d5-5a6e93193cc9 req-878991df-a7a3-4ae7-b847-d243af2f17ec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Processing event network-vif-plugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:21:23 np0005539552 kernel: tap58fd104d-40: entered promiscuous mode
Nov 29 03:21:23 np0005539552 nova_compute[233724]: 2025-11-29 08:21:23.743 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:23 np0005539552 NetworkManager[48926]: <info>  [1764404483.7448] manager: (tap58fd104d-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/241)
Nov 29 03:21:23 np0005539552 nova_compute[233724]: 2025-11-29 08:21:23.747 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:21:23.748 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap58fd104d-40, col_values=(('external_ids', {'iface-id': '49c2d2fc-d147-42b8-8b87-df4d04283e61'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:23 np0005539552 nova_compute[233724]: 2025-11-29 08:21:23.750 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:23 np0005539552 ovn_controller[133798]: 2025-11-29T08:21:23Z|00516|binding|INFO|Releasing lport 49c2d2fc-d147-42b8-8b87-df4d04283e61 from this chassis (sb_readonly=0)
Nov 29 03:21:23 np0005539552 nova_compute[233724]: 2025-11-29 08:21:23.765 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:21:23.767 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/58fd104d-4342-482d-ae9e-dbb4b9fa6788.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/58fd104d-4342-482d-ae9e-dbb4b9fa6788.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:21:23.768 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4495b839-ebbc-45f1-a298-57ee0ce7cb6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:21:23.769 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-58fd104d-4342-482d-ae9e-dbb4b9fa6788
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/58fd104d-4342-482d-ae9e-dbb4b9fa6788.pid.haproxy
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 58fd104d-4342-482d-ae9e-dbb4b9fa6788
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:21:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:21:23.771 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'env', 'PROCESS_TAG=haproxy-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/58fd104d-4342-482d-ae9e-dbb4b9fa6788.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:21:24 np0005539552 podman[283980]: 2025-11-29 08:21:24.18463094 +0000 UTC m=+0.071284526 container create 4534ff03db005b8f9674dfd055c71d71e147122d2547c8d449164b6d7f4b1765 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:21:24 np0005539552 systemd[1]: Started libpod-conmon-4534ff03db005b8f9674dfd055c71d71e147122d2547c8d449164b6d7f4b1765.scope.
Nov 29 03:21:24 np0005539552 nova_compute[233724]: 2025-11-29 08:21:24.223 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404484.22347, 5004bd0f-c699-46d7-b535-b3a7db186a87 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:21:24 np0005539552 nova_compute[233724]: 2025-11-29 08:21:24.224 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] VM Started (Lifecycle Event)#033[00m
Nov 29 03:21:24 np0005539552 nova_compute[233724]: 2025-11-29 08:21:24.226 233728 DEBUG nova.compute.manager [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:21:24 np0005539552 nova_compute[233724]: 2025-11-29 08:21:24.230 233728 DEBUG nova.virt.libvirt.driver [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:21:24 np0005539552 nova_compute[233724]: 2025-11-29 08:21:24.234 233728 INFO nova.virt.libvirt.driver [-] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Instance spawned successfully.#033[00m
Nov 29 03:21:24 np0005539552 nova_compute[233724]: 2025-11-29 08:21:24.234 233728 DEBUG nova.virt.libvirt.driver [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:21:24 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:21:24 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a34a3874e687cdfd244282da0b63ef0ade509683f0c721b5675e985b4f2f1506/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:21:24 np0005539552 podman[283980]: 2025-11-29 08:21:24.152180488 +0000 UTC m=+0.038834134 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:21:24 np0005539552 nova_compute[233724]: 2025-11-29 08:21:24.245 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:24 np0005539552 nova_compute[233724]: 2025-11-29 08:21:24.252 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:21:24 np0005539552 nova_compute[233724]: 2025-11-29 08:21:24.256 233728 DEBUG nova.virt.libvirt.driver [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:24 np0005539552 nova_compute[233724]: 2025-11-29 08:21:24.256 233728 DEBUG nova.virt.libvirt.driver [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:24 np0005539552 nova_compute[233724]: 2025-11-29 08:21:24.256 233728 DEBUG nova.virt.libvirt.driver [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:24 np0005539552 nova_compute[233724]: 2025-11-29 08:21:24.257 233728 DEBUG nova.virt.libvirt.driver [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:24 np0005539552 nova_compute[233724]: 2025-11-29 08:21:24.257 233728 DEBUG nova.virt.libvirt.driver [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:24 np0005539552 nova_compute[233724]: 2025-11-29 08:21:24.258 233728 DEBUG nova.virt.libvirt.driver [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:24 np0005539552 podman[283980]: 2025-11-29 08:21:24.261333891 +0000 UTC m=+0.147987537 container init 4534ff03db005b8f9674dfd055c71d71e147122d2547c8d449164b6d7f4b1765 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 29 03:21:24 np0005539552 podman[283980]: 2025-11-29 08:21:24.266420057 +0000 UTC m=+0.153073643 container start 4534ff03db005b8f9674dfd055c71d71e147122d2547c8d449164b6d7f4b1765 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 03:21:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:24.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:24 np0005539552 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[284001]: [NOTICE]   (284005) : New worker (284007) forked
Nov 29 03:21:24 np0005539552 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[284001]: [NOTICE]   (284005) : Loading success.
Nov 29 03:21:24 np0005539552 nova_compute[233724]: 2025-11-29 08:21:24.291 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:21:24 np0005539552 nova_compute[233724]: 2025-11-29 08:21:24.291 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404484.2247818, 5004bd0f-c699-46d7-b535-b3a7db186a87 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:21:24 np0005539552 nova_compute[233724]: 2025-11-29 08:21:24.291 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:21:24 np0005539552 nova_compute[233724]: 2025-11-29 08:21:24.320 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:24 np0005539552 nova_compute[233724]: 2025-11-29 08:21:24.324 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404484.2299888, 5004bd0f-c699-46d7-b535-b3a7db186a87 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:21:24 np0005539552 nova_compute[233724]: 2025-11-29 08:21:24.324 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:21:24 np0005539552 nova_compute[233724]: 2025-11-29 08:21:24.329 233728 INFO nova.compute.manager [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Took 8.74 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:21:24 np0005539552 nova_compute[233724]: 2025-11-29 08:21:24.329 233728 DEBUG nova.compute.manager [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:24 np0005539552 nova_compute[233724]: 2025-11-29 08:21:24.357 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:24 np0005539552 nova_compute[233724]: 2025-11-29 08:21:24.360 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:21:24 np0005539552 nova_compute[233724]: 2025-11-29 08:21:24.386 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:21:24 np0005539552 nova_compute[233724]: 2025-11-29 08:21:24.400 233728 INFO nova.compute.manager [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Took 9.69 seconds to build instance.#033[00m
Nov 29 03:21:24 np0005539552 nova_compute[233724]: 2025-11-29 08:21:24.414 233728 DEBUG oslo_concurrency.lockutils [None req-e7b47f10-b098-4877-ae02-40860f368821 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:25.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:25 np0005539552 nova_compute[233724]: 2025-11-29 08:21:25.920 233728 DEBUG nova.compute.manager [req-c871f3ae-b627-410f-9b45-54f20e25bbf5 req-db2e35bc-6af1-46e9-ae21-4aca6eef21ff 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Received event network-vif-plugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:25 np0005539552 nova_compute[233724]: 2025-11-29 08:21:25.921 233728 DEBUG oslo_concurrency.lockutils [req-c871f3ae-b627-410f-9b45-54f20e25bbf5 req-db2e35bc-6af1-46e9-ae21-4aca6eef21ff 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:25 np0005539552 nova_compute[233724]: 2025-11-29 08:21:25.921 233728 DEBUG oslo_concurrency.lockutils [req-c871f3ae-b627-410f-9b45-54f20e25bbf5 req-db2e35bc-6af1-46e9-ae21-4aca6eef21ff 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:25 np0005539552 nova_compute[233724]: 2025-11-29 08:21:25.922 233728 DEBUG oslo_concurrency.lockutils [req-c871f3ae-b627-410f-9b45-54f20e25bbf5 req-db2e35bc-6af1-46e9-ae21-4aca6eef21ff 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:25 np0005539552 nova_compute[233724]: 2025-11-29 08:21:25.922 233728 DEBUG nova.compute.manager [req-c871f3ae-b627-410f-9b45-54f20e25bbf5 req-db2e35bc-6af1-46e9-ae21-4aca6eef21ff 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] No waiting events found dispatching network-vif-plugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:21:25 np0005539552 nova_compute[233724]: 2025-11-29 08:21:25.922 233728 WARNING nova.compute.manager [req-c871f3ae-b627-410f-9b45-54f20e25bbf5 req-db2e35bc-6af1-46e9-ae21-4aca6eef21ff 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Received unexpected event network-vif-plugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:21:25 np0005539552 nova_compute[233724]: 2025-11-29 08:21:25.955 233728 DEBUG nova.compute.manager [req-269e1dc6-f026-4d6a-8a69-c0f3d93457e2 req-871c7d43-dc04-4589-b48d-c7305c875f1b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Received event network-changed-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:25 np0005539552 nova_compute[233724]: 2025-11-29 08:21:25.956 233728 DEBUG nova.compute.manager [req-269e1dc6-f026-4d6a-8a69-c0f3d93457e2 req-871c7d43-dc04-4589-b48d-c7305c875f1b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Refreshing instance network info cache due to event network-changed-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:21:25 np0005539552 nova_compute[233724]: 2025-11-29 08:21:25.956 233728 DEBUG oslo_concurrency.lockutils [req-269e1dc6-f026-4d6a-8a69-c0f3d93457e2 req-871c7d43-dc04-4589-b48d-c7305c875f1b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-5004bd0f-c699-46d7-b535-b3a7db186a87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:21:25 np0005539552 nova_compute[233724]: 2025-11-29 08:21:25.956 233728 DEBUG oslo_concurrency.lockutils [req-269e1dc6-f026-4d6a-8a69-c0f3d93457e2 req-871c7d43-dc04-4589-b48d-c7305c875f1b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-5004bd0f-c699-46d7-b535-b3a7db186a87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:21:25 np0005539552 nova_compute[233724]: 2025-11-29 08:21:25.957 233728 DEBUG nova.network.neutron [req-269e1dc6-f026-4d6a-8a69-c0f3d93457e2 req-871c7d43-dc04-4589-b48d-c7305c875f1b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Refreshing network info cache for port d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:21:26 np0005539552 nova_compute[233724]: 2025-11-29 08:21:26.007 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:21:26.006 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:21:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:21:26.008 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:21:26 np0005539552 nova_compute[233724]: 2025-11-29 08:21:26.172 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:26.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:26 np0005539552 nova_compute[233724]: 2025-11-29 08:21:26.406 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:27 np0005539552 nova_compute[233724]: 2025-11-29 08:21:27.079 233728 DEBUG nova.network.neutron [req-269e1dc6-f026-4d6a-8a69-c0f3d93457e2 req-871c7d43-dc04-4589-b48d-c7305c875f1b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Updated VIF entry in instance network info cache for port d5e26252-e0d3-4a6b-8b18-b2f4cb7db432. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:21:27 np0005539552 nova_compute[233724]: 2025-11-29 08:21:27.080 233728 DEBUG nova.network.neutron [req-269e1dc6-f026-4d6a-8a69-c0f3d93457e2 req-871c7d43-dc04-4589-b48d-c7305c875f1b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Updating instance_info_cache with network_info: [{"id": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "address": "fa:16:3e:47:25:11", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5e26252-e0", "ovs_interfaceid": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:21:27 np0005539552 nova_compute[233724]: 2025-11-29 08:21:27.105 233728 DEBUG oslo_concurrency.lockutils [req-269e1dc6-f026-4d6a-8a69-c0f3d93457e2 req-871c7d43-dc04-4589-b48d-c7305c875f1b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-5004bd0f-c699-46d7-b535-b3a7db186a87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:21:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:27.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:28.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:28 np0005539552 nova_compute[233724]: 2025-11-29 08:21:28.325 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:29.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:29 np0005539552 nova_compute[233724]: 2025-11-29 08:21:29.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:21:29 np0005539552 nova_compute[233724]: 2025-11-29 08:21:29.950 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:29 np0005539552 nova_compute[233724]: 2025-11-29 08:21:29.951 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:29 np0005539552 nova_compute[233724]: 2025-11-29 08:21:29.951 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:29 np0005539552 nova_compute[233724]: 2025-11-29 08:21:29.951 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:21:29 np0005539552 nova_compute[233724]: 2025-11-29 08:21:29.952 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:30.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:30 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:21:30 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/509041031' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:21:30 np0005539552 nova_compute[233724]: 2025-11-29 08:21:30.379 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:30 np0005539552 nova_compute[233724]: 2025-11-29 08:21:30.457 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:21:30 np0005539552 nova_compute[233724]: 2025-11-29 08:21:30.457 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:21:30 np0005539552 nova_compute[233724]: 2025-11-29 08:21:30.461 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:21:30 np0005539552 nova_compute[233724]: 2025-11-29 08:21:30.461 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:21:30 np0005539552 nova_compute[233724]: 2025-11-29 08:21:30.640 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:21:30 np0005539552 nova_compute[233724]: 2025-11-29 08:21:30.641 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4079MB free_disk=20.831424713134766GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:21:30 np0005539552 nova_compute[233724]: 2025-11-29 08:21:30.641 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:30 np0005539552 nova_compute[233724]: 2025-11-29 08:21:30.642 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:30 np0005539552 nova_compute[233724]: 2025-11-29 08:21:30.725 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 07f760bf-6984-45e9-8e85-3d297e812553 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:21:30 np0005539552 nova_compute[233724]: 2025-11-29 08:21:30.726 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 5004bd0f-c699-46d7-b535-b3a7db186a87 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:21:30 np0005539552 nova_compute[233724]: 2025-11-29 08:21:30.726 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:21:30 np0005539552 nova_compute[233724]: 2025-11-29 08:21:30.726 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:21:30 np0005539552 nova_compute[233724]: 2025-11-29 08:21:30.778 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:31 np0005539552 nova_compute[233724]: 2025-11-29 08:21:31.175 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:31 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:21:31 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1887516697' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:21:31 np0005539552 nova_compute[233724]: 2025-11-29 08:21:31.204 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:31 np0005539552 nova_compute[233724]: 2025-11-29 08:21:31.211 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:21:31 np0005539552 nova_compute[233724]: 2025-11-29 08:21:31.227 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:21:31 np0005539552 nova_compute[233724]: 2025-11-29 08:21:31.250 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:21:31 np0005539552 nova_compute[233724]: 2025-11-29 08:21:31.251 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:31 np0005539552 nova_compute[233724]: 2025-11-29 08:21:31.408 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:31.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:32.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:32 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #91. Immutable memtables: 0.
Nov 29 03:21:32 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:21:32.611311) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:21:32 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 91
Nov 29 03:21:32 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404492611444, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 2458, "num_deletes": 255, "total_data_size": 5533485, "memory_usage": 5623936, "flush_reason": "Manual Compaction"}
Nov 29 03:21:32 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #92: started
Nov 29 03:21:32 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404492638419, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 92, "file_size": 3625781, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 46521, "largest_seqno": 48974, "table_properties": {"data_size": 3615901, "index_size": 6182, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21788, "raw_average_key_size": 20, "raw_value_size": 3595663, "raw_average_value_size": 3454, "num_data_blocks": 268, "num_entries": 1041, "num_filter_entries": 1041, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404297, "oldest_key_time": 1764404297, "file_creation_time": 1764404492, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:21:32 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 27149 microseconds, and 8855 cpu microseconds.
Nov 29 03:21:32 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:21:32 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:21:32.638481) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #92: 3625781 bytes OK
Nov 29 03:21:32 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:21:32.638502) [db/memtable_list.cc:519] [default] Level-0 commit table #92 started
Nov 29 03:21:32 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:21:32.639605) [db/memtable_list.cc:722] [default] Level-0 commit table #92: memtable #1 done
Nov 29 03:21:32 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:21:32.639617) EVENT_LOG_v1 {"time_micros": 1764404492639613, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:21:32 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:21:32.639652) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:21:32 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 5522528, prev total WAL file size 5522528, number of live WAL files 2.
Nov 29 03:21:32 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000088.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:21:32 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:21:32.640829) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Nov 29 03:21:32 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:21:32 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [92(3540KB)], [90(11MB)]
Nov 29 03:21:32 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404492640878, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [92], "files_L6": [90], "score": -1, "input_data_size": 15606987, "oldest_snapshot_seqno": -1}
Nov 29 03:21:32 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #93: 8188 keys, 13656440 bytes, temperature: kUnknown
Nov 29 03:21:32 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404492734753, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 93, "file_size": 13656440, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13599949, "index_size": 34939, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20485, "raw_key_size": 211507, "raw_average_key_size": 25, "raw_value_size": 13451845, "raw_average_value_size": 1642, "num_data_blocks": 1376, "num_entries": 8188, "num_filter_entries": 8188, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764404492, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 93, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:21:32 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:21:32 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:21:32.735028) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 13656440 bytes
Nov 29 03:21:32 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:21:32.736827) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 166.1 rd, 145.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 11.4 +0.0 blob) out(13.0 +0.0 blob), read-write-amplify(8.1) write-amplify(3.8) OK, records in: 8715, records dropped: 527 output_compression: NoCompression
Nov 29 03:21:32 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:21:32.736852) EVENT_LOG_v1 {"time_micros": 1764404492736838, "job": 56, "event": "compaction_finished", "compaction_time_micros": 93949, "compaction_time_cpu_micros": 28559, "output_level": 6, "num_output_files": 1, "total_output_size": 13656440, "num_input_records": 8715, "num_output_records": 8188, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:21:32 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:21:32 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404492737581, "job": 56, "event": "table_file_deletion", "file_number": 92}
Nov 29 03:21:32 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000090.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:21:32 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404492739849, "job": 56, "event": "table_file_deletion", "file_number": 90}
Nov 29 03:21:32 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:21:32.640719) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:21:32 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:21:32.739932) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:21:32 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:21:32.739937) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:21:32 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:21:32.739939) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:21:32 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:21:32.739940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:21:32 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:21:32.739942) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:21:32 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:33.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:21:34.010 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:34.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:34 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 8.
Nov 29 03:21:35 np0005539552 nova_compute[233724]: 2025-11-29 08:21:35.253 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:21:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:35.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:35 np0005539552 nova_compute[233724]: 2025-11-29 08:21:35.919 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:21:35 np0005539552 nova_compute[233724]: 2025-11-29 08:21:35.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:21:35 np0005539552 nova_compute[233724]: 2025-11-29 08:21:35.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:21:35 np0005539552 nova_compute[233724]: 2025-11-29 08:21:35.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:21:35 np0005539552 nova_compute[233724]: 2025-11-29 08:21:35.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:21:36 np0005539552 nova_compute[233724]: 2025-11-29 08:21:36.180 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:36.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:36 np0005539552 nova_compute[233724]: 2025-11-29 08:21:36.411 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:36 np0005539552 nova_compute[233724]: 2025-11-29 08:21:36.561 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:36 np0005539552 nova_compute[233724]: 2025-11-29 08:21:36.925 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:21:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:37.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:37 np0005539552 nova_compute[233724]: 2025-11-29 08:21:37.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:21:37 np0005539552 nova_compute[233724]: 2025-11-29 08:21:37.925 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:21:38 np0005539552 podman[284098]: 2025-11-29 08:21:38.193770207 +0000 UTC m=+0.063886928 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, 
org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:21:38 np0005539552 podman[284097]: 2025-11-29 08:21:38.19837941 +0000 UTC m=+0.068557403 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 03:21:38 np0005539552 podman[284099]: 2025-11-29 08:21:38.225658023 +0000 UTC m=+0.093775700 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller)
Nov 29 03:21:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:38.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:38 np0005539552 nova_compute[233724]: 2025-11-29 08:21:38.306 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "refresh_cache-07f760bf-6984-45e9-8e85-3d297e812553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:21:38 np0005539552 nova_compute[233724]: 2025-11-29 08:21:38.306 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquired lock "refresh_cache-07f760bf-6984-45e9-8e85-3d297e812553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:21:38 np0005539552 nova_compute[233724]: 2025-11-29 08:21:38.306 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:21:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:21:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1398087224' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:21:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:21:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1398087224' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:21:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:21:39Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:47:25:11 10.100.0.14
Nov 29 03:21:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:21:39Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:47:25:11 10.100.0.14
Nov 29 03:21:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:39.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:40.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:40 np0005539552 nova_compute[233724]: 2025-11-29 08:21:40.801 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Updating instance_info_cache with network_info: [{"id": "5511e511-2310-4811-8313-3722fcf49758", "address": "fa:16:3e:f3:b9:47", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5511e511-23", "ovs_interfaceid": "5511e511-2310-4811-8313-3722fcf49758", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:21:40 np0005539552 nova_compute[233724]: 2025-11-29 08:21:40.832 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Releasing lock "refresh_cache-07f760bf-6984-45e9-8e85-3d297e812553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:21:40 np0005539552 nova_compute[233724]: 2025-11-29 08:21:40.832 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:21:40 np0005539552 nova_compute[233724]: 2025-11-29 08:21:40.832 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:21:41 np0005539552 nova_compute[233724]: 2025-11-29 08:21:41.186 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:41 np0005539552 nova_compute[233724]: 2025-11-29 08:21:41.413 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:41.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:42.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:43.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:44.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:45.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:46 np0005539552 nova_compute[233724]: 2025-11-29 08:21:46.189 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:46.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:46 np0005539552 nova_compute[233724]: 2025-11-29 08:21:46.415 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:47.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:48.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:49.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:49 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:21:49 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:21:49 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:21:50 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e316 e316: 3 total, 3 up, 3 in
Nov 29 03:21:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:50.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:51 np0005539552 nova_compute[233724]: 2025-11-29 08:21:51.194 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:51 np0005539552 nova_compute[233724]: 2025-11-29 08:21:51.417 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:51.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:52 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #94. Immutable memtables: 0.
Nov 29 03:21:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:21:52.053261) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:21:52 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 94
Nov 29 03:21:52 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404512053293, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 491, "num_deletes": 250, "total_data_size": 563746, "memory_usage": 573688, "flush_reason": "Manual Compaction"}
Nov 29 03:21:52 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #95: started
Nov 29 03:21:52 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404512057019, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 95, "file_size": 309886, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 48979, "largest_seqno": 49465, "table_properties": {"data_size": 307356, "index_size": 566, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 7141, "raw_average_key_size": 20, "raw_value_size": 302043, "raw_average_value_size": 875, "num_data_blocks": 25, "num_entries": 345, "num_filter_entries": 345, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404493, "oldest_key_time": 1764404493, "file_creation_time": 1764404512, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:21:52 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 3793 microseconds, and 1446 cpu microseconds.
Nov 29 03:21:52 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:21:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:21:52.057053) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #95: 309886 bytes OK
Nov 29 03:21:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:21:52.057072) [db/memtable_list.cc:519] [default] Level-0 commit table #95 started
Nov 29 03:21:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:21:52.058491) [db/memtable_list.cc:722] [default] Level-0 commit table #95: memtable #1 done
Nov 29 03:21:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:21:52.058503) EVENT_LOG_v1 {"time_micros": 1764404512058499, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:21:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:21:52.058516) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:21:52 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 560767, prev total WAL file size 560767, number of live WAL files 2.
Nov 29 03:21:52 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000091.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:21:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:21:52.059039) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353032' seq:72057594037927935, type:22 .. '6D6772737461740031373533' seq:0, type:0; will stop at (end)
Nov 29 03:21:52 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:21:52 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [95(302KB)], [93(13MB)]
Nov 29 03:21:52 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404512059108, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [95], "files_L6": [93], "score": -1, "input_data_size": 13966326, "oldest_snapshot_seqno": -1}
Nov 29 03:21:52 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #96: 8023 keys, 10180548 bytes, temperature: kUnknown
Nov 29 03:21:52 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404512137480, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 96, "file_size": 10180548, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10129743, "index_size": 29637, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20101, "raw_key_size": 208324, "raw_average_key_size": 25, "raw_value_size": 9989140, "raw_average_value_size": 1245, "num_data_blocks": 1154, "num_entries": 8023, "num_filter_entries": 8023, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764404512, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 96, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:21:52 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:21:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:21:52.137774) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 10180548 bytes
Nov 29 03:21:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:21:52.140158) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 178.0 rd, 129.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 13.0 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(77.9) write-amplify(32.9) OK, records in: 8533, records dropped: 510 output_compression: NoCompression
Nov 29 03:21:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:21:52.140194) EVENT_LOG_v1 {"time_micros": 1764404512140181, "job": 58, "event": "compaction_finished", "compaction_time_micros": 78447, "compaction_time_cpu_micros": 27278, "output_level": 6, "num_output_files": 1, "total_output_size": 10180548, "num_input_records": 8533, "num_output_records": 8023, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:21:52 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:21:52 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404512140420, "job": 58, "event": "table_file_deletion", "file_number": 95}
Nov 29 03:21:52 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000093.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:21:52 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404512142543, "job": 58, "event": "table_file_deletion", "file_number": 93}
Nov 29 03:21:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:21:52.058854) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:21:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:21:52.142628) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:21:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:21:52.142634) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:21:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:21:52.142637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:21:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:21:52.142641) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:21:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:21:52.142643) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:21:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:21:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:52.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:21:52 np0005539552 nova_compute[233724]: 2025-11-29 08:21:52.717 233728 DEBUG oslo_concurrency.lockutils [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "8f2e1d97-ea5c-43f8-a05f-2f531213d241" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:52 np0005539552 nova_compute[233724]: 2025-11-29 08:21:52.717 233728 DEBUG oslo_concurrency.lockutils [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "8f2e1d97-ea5c-43f8-a05f-2f531213d241" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:52 np0005539552 nova_compute[233724]: 2025-11-29 08:21:52.736 233728 DEBUG nova.compute.manager [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:21:52 np0005539552 nova_compute[233724]: 2025-11-29 08:21:52.820 233728 DEBUG oslo_concurrency.lockutils [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:52 np0005539552 nova_compute[233724]: 2025-11-29 08:21:52.821 233728 DEBUG oslo_concurrency.lockutils [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:52 np0005539552 nova_compute[233724]: 2025-11-29 08:21:52.829 233728 DEBUG nova.virt.hardware [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:21:52 np0005539552 nova_compute[233724]: 2025-11-29 08:21:52.829 233728 INFO nova.compute.claims [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:21:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:52 np0005539552 nova_compute[233724]: 2025-11-29 08:21:52.992 233728 DEBUG oslo_concurrency.processutils [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:21:53 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/592928129' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:21:53 np0005539552 nova_compute[233724]: 2025-11-29 08:21:53.505 233728 DEBUG oslo_concurrency.processutils [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:53 np0005539552 nova_compute[233724]: 2025-11-29 08:21:53.511 233728 DEBUG nova.compute.provider_tree [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:21:53 np0005539552 nova_compute[233724]: 2025-11-29 08:21:53.564 233728 DEBUG nova.scheduler.client.report [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:21:53 np0005539552 nova_compute[233724]: 2025-11-29 08:21:53.633 233728 DEBUG oslo_concurrency.lockutils [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.812s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:53 np0005539552 nova_compute[233724]: 2025-11-29 08:21:53.634 233728 DEBUG nova.compute.manager [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:21:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:53.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:53 np0005539552 nova_compute[233724]: 2025-11-29 08:21:53.807 233728 DEBUG nova.compute.manager [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:21:53 np0005539552 nova_compute[233724]: 2025-11-29 08:21:53.807 233728 DEBUG nova.network.neutron [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:21:53 np0005539552 nova_compute[233724]: 2025-11-29 08:21:53.869 233728 INFO nova.virt.libvirt.driver [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:21:53 np0005539552 nova_compute[233724]: 2025-11-29 08:21:53.885 233728 DEBUG nova.compute.manager [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:21:54 np0005539552 nova_compute[233724]: 2025-11-29 08:21:54.020 233728 DEBUG nova.policy [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1552f15deb524705a9456cbe9b54c429', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0bace34c102e4d56b089fd695d324f10', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:21:54 np0005539552 nova_compute[233724]: 2025-11-29 08:21:54.073 233728 DEBUG nova.compute.manager [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:21:54 np0005539552 nova_compute[233724]: 2025-11-29 08:21:54.074 233728 DEBUG nova.virt.libvirt.driver [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:21:54 np0005539552 nova_compute[233724]: 2025-11-29 08:21:54.075 233728 INFO nova.virt.libvirt.driver [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Creating image(s)#033[00m
Nov 29 03:21:54 np0005539552 nova_compute[233724]: 2025-11-29 08:21:54.101 233728 DEBUG nova.storage.rbd_utils [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] rbd image 8f2e1d97-ea5c-43f8-a05f-2f531213d241_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:54 np0005539552 nova_compute[233724]: 2025-11-29 08:21:54.126 233728 DEBUG nova.storage.rbd_utils [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] rbd image 8f2e1d97-ea5c-43f8-a05f-2f531213d241_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:54 np0005539552 nova_compute[233724]: 2025-11-29 08:21:54.151 233728 DEBUG nova.storage.rbd_utils [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] rbd image 8f2e1d97-ea5c-43f8-a05f-2f531213d241_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:54 np0005539552 nova_compute[233724]: 2025-11-29 08:21:54.154 233728 DEBUG oslo_concurrency.processutils [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:54 np0005539552 nova_compute[233724]: 2025-11-29 08:21:54.242 233728 DEBUG oslo_concurrency.processutils [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:54 np0005539552 nova_compute[233724]: 2025-11-29 08:21:54.243 233728 DEBUG oslo_concurrency.lockutils [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:54 np0005539552 nova_compute[233724]: 2025-11-29 08:21:54.244 233728 DEBUG oslo_concurrency.lockutils [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:54 np0005539552 nova_compute[233724]: 2025-11-29 08:21:54.244 233728 DEBUG oslo_concurrency.lockutils [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:54 np0005539552 nova_compute[233724]: 2025-11-29 08:21:54.268 233728 DEBUG nova.storage.rbd_utils [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] rbd image 8f2e1d97-ea5c-43f8-a05f-2f531213d241_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:54 np0005539552 nova_compute[233724]: 2025-11-29 08:21:54.272 233728 DEBUG oslo_concurrency.processutils [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 8f2e1d97-ea5c-43f8-a05f-2f531213d241_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:54.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:54 np0005539552 nova_compute[233724]: 2025-11-29 08:21:54.792 233728 DEBUG oslo_concurrency.processutils [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 8f2e1d97-ea5c-43f8-a05f-2f531213d241_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:54 np0005539552 nova_compute[233724]: 2025-11-29 08:21:54.864 233728 DEBUG nova.storage.rbd_utils [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] resizing rbd image 8f2e1d97-ea5c-43f8-a05f-2f531213d241_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:21:54 np0005539552 nova_compute[233724]: 2025-11-29 08:21:54.898 233728 DEBUG nova.network.neutron [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Successfully created port: f2132fc4-6960-41d7-ba5a-a4fdffd50d3a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:21:54 np0005539552 nova_compute[233724]: 2025-11-29 08:21:54.960 233728 DEBUG nova.objects.instance [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lazy-loading 'migration_context' on Instance uuid 8f2e1d97-ea5c-43f8-a05f-2f531213d241 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:21:54 np0005539552 nova_compute[233724]: 2025-11-29 08:21:54.975 233728 DEBUG nova.virt.libvirt.driver [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:21:54 np0005539552 nova_compute[233724]: 2025-11-29 08:21:54.976 233728 DEBUG nova.virt.libvirt.driver [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Ensure instance console log exists: /var/lib/nova/instances/8f2e1d97-ea5c-43f8-a05f-2f531213d241/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:21:54 np0005539552 nova_compute[233724]: 2025-11-29 08:21:54.976 233728 DEBUG oslo_concurrency.lockutils [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:54 np0005539552 nova_compute[233724]: 2025-11-29 08:21:54.977 233728 DEBUG oslo_concurrency.lockutils [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:54 np0005539552 nova_compute[233724]: 2025-11-29 08:21:54.977 233728 DEBUG oslo_concurrency.lockutils [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:55.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:56 np0005539552 nova_compute[233724]: 2025-11-29 08:21:56.198 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:56 np0005539552 nova_compute[233724]: 2025-11-29 08:21:56.288 233728 DEBUG nova.network.neutron [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Successfully updated port: f2132fc4-6960-41d7-ba5a-a4fdffd50d3a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:21:56 np0005539552 nova_compute[233724]: 2025-11-29 08:21:56.308 233728 DEBUG oslo_concurrency.lockutils [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "refresh_cache-8f2e1d97-ea5c-43f8-a05f-2f531213d241" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:21:56 np0005539552 nova_compute[233724]: 2025-11-29 08:21:56.308 233728 DEBUG oslo_concurrency.lockutils [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquired lock "refresh_cache-8f2e1d97-ea5c-43f8-a05f-2f531213d241" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:21:56 np0005539552 nova_compute[233724]: 2025-11-29 08:21:56.309 233728 DEBUG nova.network.neutron [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:21:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:56.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:56 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:21:56 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:21:56 np0005539552 nova_compute[233724]: 2025-11-29 08:21:56.406 233728 DEBUG nova.compute.manager [req-e9ddb5fd-ff22-4329-85a8-8bbb24b7af8b req-f7e09a62-0cc4-4a91-81a0-abea1db81097 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Received event network-changed-f2132fc4-6960-41d7-ba5a-a4fdffd50d3a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:56 np0005539552 nova_compute[233724]: 2025-11-29 08:21:56.407 233728 DEBUG nova.compute.manager [req-e9ddb5fd-ff22-4329-85a8-8bbb24b7af8b req-f7e09a62-0cc4-4a91-81a0-abea1db81097 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Refreshing instance network info cache due to event network-changed-f2132fc4-6960-41d7-ba5a-a4fdffd50d3a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:21:56 np0005539552 nova_compute[233724]: 2025-11-29 08:21:56.407 233728 DEBUG oslo_concurrency.lockutils [req-e9ddb5fd-ff22-4329-85a8-8bbb24b7af8b req-f7e09a62-0cc4-4a91-81a0-abea1db81097 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-8f2e1d97-ea5c-43f8-a05f-2f531213d241" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:21:56 np0005539552 nova_compute[233724]: 2025-11-29 08:21:56.420 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:56 np0005539552 nova_compute[233724]: 2025-11-29 08:21:56.901 233728 DEBUG nova.network.neutron [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:21:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:57.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:57 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:58.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:58 np0005539552 nova_compute[233724]: 2025-11-29 08:21:58.356 233728 DEBUG nova.network.neutron [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Updating instance_info_cache with network_info: [{"id": "f2132fc4-6960-41d7-ba5a-a4fdffd50d3a", "address": "fa:16:3e:18:89:66", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2132fc4-69", "ovs_interfaceid": "f2132fc4-6960-41d7-ba5a-a4fdffd50d3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:21:58 np0005539552 nova_compute[233724]: 2025-11-29 08:21:58.374 233728 DEBUG oslo_concurrency.lockutils [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Releasing lock "refresh_cache-8f2e1d97-ea5c-43f8-a05f-2f531213d241" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:21:58 np0005539552 nova_compute[233724]: 2025-11-29 08:21:58.375 233728 DEBUG nova.compute.manager [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Instance network_info: |[{"id": "f2132fc4-6960-41d7-ba5a-a4fdffd50d3a", "address": "fa:16:3e:18:89:66", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2132fc4-69", "ovs_interfaceid": "f2132fc4-6960-41d7-ba5a-a4fdffd50d3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:21:58 np0005539552 nova_compute[233724]: 2025-11-29 08:21:58.376 233728 DEBUG oslo_concurrency.lockutils [req-e9ddb5fd-ff22-4329-85a8-8bbb24b7af8b req-f7e09a62-0cc4-4a91-81a0-abea1db81097 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-8f2e1d97-ea5c-43f8-a05f-2f531213d241" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:21:58 np0005539552 nova_compute[233724]: 2025-11-29 08:21:58.377 233728 DEBUG nova.network.neutron [req-e9ddb5fd-ff22-4329-85a8-8bbb24b7af8b req-f7e09a62-0cc4-4a91-81a0-abea1db81097 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Refreshing network info cache for port f2132fc4-6960-41d7-ba5a-a4fdffd50d3a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:21:58 np0005539552 nova_compute[233724]: 2025-11-29 08:21:58.381 233728 DEBUG nova.virt.libvirt.driver [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Start _get_guest_xml network_info=[{"id": "f2132fc4-6960-41d7-ba5a-a4fdffd50d3a", "address": "fa:16:3e:18:89:66", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2132fc4-69", "ovs_interfaceid": "f2132fc4-6960-41d7-ba5a-a4fdffd50d3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:21:58 np0005539552 nova_compute[233724]: 2025-11-29 08:21:58.387 233728 WARNING nova.virt.libvirt.driver [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:21:58 np0005539552 nova_compute[233724]: 2025-11-29 08:21:58.397 233728 DEBUG nova.virt.libvirt.host [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:21:58 np0005539552 nova_compute[233724]: 2025-11-29 08:21:58.398 233728 DEBUG nova.virt.libvirt.host [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:21:58 np0005539552 nova_compute[233724]: 2025-11-29 08:21:58.405 233728 DEBUG nova.virt.libvirt.host [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:21:58 np0005539552 nova_compute[233724]: 2025-11-29 08:21:58.406 233728 DEBUG nova.virt.libvirt.host [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:21:58 np0005539552 nova_compute[233724]: 2025-11-29 08:21:58.408 233728 DEBUG nova.virt.libvirt.driver [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:21:58 np0005539552 nova_compute[233724]: 2025-11-29 08:21:58.408 233728 DEBUG nova.virt.hardware [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:21:58 np0005539552 nova_compute[233724]: 2025-11-29 08:21:58.408 233728 DEBUG nova.virt.hardware [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:21:58 np0005539552 nova_compute[233724]: 2025-11-29 08:21:58.409 233728 DEBUG nova.virt.hardware [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:21:58 np0005539552 nova_compute[233724]: 2025-11-29 08:21:58.409 233728 DEBUG nova.virt.hardware [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:21:58 np0005539552 nova_compute[233724]: 2025-11-29 08:21:58.409 233728 DEBUG nova.virt.hardware [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:21:58 np0005539552 nova_compute[233724]: 2025-11-29 08:21:58.409 233728 DEBUG nova.virt.hardware [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:21:58 np0005539552 nova_compute[233724]: 2025-11-29 08:21:58.410 233728 DEBUG nova.virt.hardware [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:21:58 np0005539552 nova_compute[233724]: 2025-11-29 08:21:58.410 233728 DEBUG nova.virt.hardware [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:21:58 np0005539552 nova_compute[233724]: 2025-11-29 08:21:58.410 233728 DEBUG nova.virt.hardware [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:21:58 np0005539552 nova_compute[233724]: 2025-11-29 08:21:58.410 233728 DEBUG nova.virt.hardware [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:21:58 np0005539552 nova_compute[233724]: 2025-11-29 08:21:58.411 233728 DEBUG nova.virt.hardware [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:21:58 np0005539552 nova_compute[233724]: 2025-11-29 08:21:58.414 233728 DEBUG oslo_concurrency.processutils [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:21:58 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2678952480' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:21:58 np0005539552 nova_compute[233724]: 2025-11-29 08:21:58.898 233728 DEBUG oslo_concurrency.processutils [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:58 np0005539552 nova_compute[233724]: 2025-11-29 08:21:58.934 233728 DEBUG nova.storage.rbd_utils [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] rbd image 8f2e1d97-ea5c-43f8-a05f-2f531213d241_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:58 np0005539552 nova_compute[233724]: 2025-11-29 08:21:58.940 233728 DEBUG oslo_concurrency.processutils [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:59 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:21:59 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2342398711' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:21:59 np0005539552 ovn_controller[133798]: 2025-11-29T08:21:59Z|00517|binding|INFO|Releasing lport 49c2d2fc-d147-42b8-8b87-df4d04283e61 from this chassis (sb_readonly=0)
Nov 29 03:21:59 np0005539552 ovn_controller[133798]: 2025-11-29T08:21:59Z|00518|binding|INFO|Releasing lport 79109459-2a40-4b69-936e-ac2a2aa77985 from this chassis (sb_readonly=0)
Nov 29 03:21:59 np0005539552 nova_compute[233724]: 2025-11-29 08:21:59.483 233728 DEBUG oslo_concurrency.processutils [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:59 np0005539552 nova_compute[233724]: 2025-11-29 08:21:59.486 233728 DEBUG nova.virt.libvirt.vif [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:21:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1988120291',display_name='tempest-tempest.common.compute-instance-1988120291',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1988120291',id=123,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIJKHc7P4YQ541Hb485a6CMBwKiv24SeFFAW+eSzfZUCbyqL5VCFCZG4/k60hFCpsJ0Tv3h26Uo1xnh37nbsxuBRWgRRa38dV/cocbcLIxwndoSfRH3ORp+Lk/2eG6aanw==',key_name='tempest-keypair-406941470',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0bace34c102e4d56b089fd695d324f10',ramdisk_id='',reservation_id='r-4pdufn0n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1954650991',owner_user_name='tempest-ServerActionsTestOtherA-1954650991-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:21:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1552f15deb524705a9456cbe9b54c429',uuid=8f2e1d97-ea5c-43f8-a05f-2f531213d241,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f2132fc4-6960-41d7-ba5a-a4fdffd50d3a", "address": "fa:16:3e:18:89:66", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2132fc4-69", "ovs_interfaceid": "f2132fc4-6960-41d7-ba5a-a4fdffd50d3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:21:59 np0005539552 nova_compute[233724]: 2025-11-29 08:21:59.486 233728 DEBUG nova.network.os_vif_util [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converting VIF {"id": "f2132fc4-6960-41d7-ba5a-a4fdffd50d3a", "address": "fa:16:3e:18:89:66", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2132fc4-69", "ovs_interfaceid": "f2132fc4-6960-41d7-ba5a-a4fdffd50d3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:21:59 np0005539552 nova_compute[233724]: 2025-11-29 08:21:59.488 233728 DEBUG nova.network.os_vif_util [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:18:89:66,bridge_name='br-int',has_traffic_filtering=True,id=f2132fc4-6960-41d7-ba5a-a4fdffd50d3a,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2132fc4-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:21:59 np0005539552 nova_compute[233724]: 2025-11-29 08:21:59.489 233728 DEBUG nova.objects.instance [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8f2e1d97-ea5c-43f8-a05f-2f531213d241 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:21:59 np0005539552 nova_compute[233724]: 2025-11-29 08:21:59.517 233728 DEBUG nova.virt.libvirt.driver [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:21:59 np0005539552 nova_compute[233724]:  <uuid>8f2e1d97-ea5c-43f8-a05f-2f531213d241</uuid>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:  <name>instance-0000007b</name>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:21:59 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:      <nova:name>tempest-tempest.common.compute-instance-1988120291</nova:name>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:21:58</nova:creationTime>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:21:59 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:        <nova:user uuid="1552f15deb524705a9456cbe9b54c429">tempest-ServerActionsTestOtherA-1954650991-project-member</nova:user>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:        <nova:project uuid="0bace34c102e4d56b089fd695d324f10">tempest-ServerActionsTestOtherA-1954650991</nova:project>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:        <nova:port uuid="f2132fc4-6960-41d7-ba5a-a4fdffd50d3a">
Nov 29 03:21:59 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:      <entry name="serial">8f2e1d97-ea5c-43f8-a05f-2f531213d241</entry>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:      <entry name="uuid">8f2e1d97-ea5c-43f8-a05f-2f531213d241</entry>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:21:59 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/8f2e1d97-ea5c-43f8-a05f-2f531213d241_disk">
Nov 29 03:21:59 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:21:59 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:21:59 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/8f2e1d97-ea5c-43f8-a05f-2f531213d241_disk.config">
Nov 29 03:21:59 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:21:59 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:21:59 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:18:89:66"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:      <target dev="tapf2132fc4-69"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:21:59 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/8f2e1d97-ea5c-43f8-a05f-2f531213d241/console.log" append="off"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:21:59 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:21:59 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:21:59 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:21:59 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:21:59 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:21:59 np0005539552 nova_compute[233724]: 2025-11-29 08:21:59.519 233728 DEBUG nova.compute.manager [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Preparing to wait for external event network-vif-plugged-f2132fc4-6960-41d7-ba5a-a4fdffd50d3a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:21:59 np0005539552 nova_compute[233724]: 2025-11-29 08:21:59.520 233728 DEBUG oslo_concurrency.lockutils [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "8f2e1d97-ea5c-43f8-a05f-2f531213d241-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:59 np0005539552 nova_compute[233724]: 2025-11-29 08:21:59.520 233728 DEBUG oslo_concurrency.lockutils [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "8f2e1d97-ea5c-43f8-a05f-2f531213d241-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:59 np0005539552 nova_compute[233724]: 2025-11-29 08:21:59.520 233728 DEBUG oslo_concurrency.lockutils [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "8f2e1d97-ea5c-43f8-a05f-2f531213d241-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:59 np0005539552 nova_compute[233724]: 2025-11-29 08:21:59.521 233728 DEBUG nova.virt.libvirt.vif [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:21:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1988120291',display_name='tempest-tempest.common.compute-instance-1988120291',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1988120291',id=123,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIJKHc7P4YQ541Hb485a6CMBwKiv24SeFFAW+eSzfZUCbyqL5VCFCZG4/k60hFCpsJ0Tv3h26Uo1xnh37nbsxuBRWgRRa38dV/cocbcLIxwndoSfRH3ORp+Lk/2eG6aanw==',key_name='tempest-keypair-406941470',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0bace34c102e4d56b089fd695d324f10',ramdisk_id='',reservation_id='r-4pdufn0n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1954650991',owner_user_name='tempest-ServerActionsTestOtherA-1954650991-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:21:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1552f15deb524705a9456cbe9b54c429',uuid=8f2e1d97-ea5c-43f8-a05f-2f531213d241,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f2132fc4-6960-41d7-ba5a-a4fdffd50d3a", "address": "fa:16:3e:18:89:66", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2132fc4-69", "ovs_interfaceid": "f2132fc4-6960-41d7-ba5a-a4fdffd50d3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:21:59 np0005539552 nova_compute[233724]: 2025-11-29 08:21:59.522 233728 DEBUG nova.network.os_vif_util [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converting VIF {"id": "f2132fc4-6960-41d7-ba5a-a4fdffd50d3a", "address": "fa:16:3e:18:89:66", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2132fc4-69", "ovs_interfaceid": "f2132fc4-6960-41d7-ba5a-a4fdffd50d3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:21:59 np0005539552 nova_compute[233724]: 2025-11-29 08:21:59.523 233728 DEBUG nova.network.os_vif_util [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:18:89:66,bridge_name='br-int',has_traffic_filtering=True,id=f2132fc4-6960-41d7-ba5a-a4fdffd50d3a,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2132fc4-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:21:59 np0005539552 nova_compute[233724]: 2025-11-29 08:21:59.523 233728 DEBUG os_vif [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:89:66,bridge_name='br-int',has_traffic_filtering=True,id=f2132fc4-6960-41d7-ba5a-a4fdffd50d3a,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2132fc4-69') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:21:59 np0005539552 nova_compute[233724]: 2025-11-29 08:21:59.524 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:59 np0005539552 nova_compute[233724]: 2025-11-29 08:21:59.525 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:59 np0005539552 nova_compute[233724]: 2025-11-29 08:21:59.525 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:21:59 np0005539552 nova_compute[233724]: 2025-11-29 08:21:59.529 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:59 np0005539552 nova_compute[233724]: 2025-11-29 08:21:59.530 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf2132fc4-69, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:59 np0005539552 nova_compute[233724]: 2025-11-29 08:21:59.530 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf2132fc4-69, col_values=(('external_ids', {'iface-id': 'f2132fc4-6960-41d7-ba5a-a4fdffd50d3a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:18:89:66', 'vm-uuid': '8f2e1d97-ea5c-43f8-a05f-2f531213d241'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:59 np0005539552 NetworkManager[48926]: <info>  [1764404519.5370] manager: (tapf2132fc4-69): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/242)
Nov 29 03:21:59 np0005539552 nova_compute[233724]: 2025-11-29 08:21:59.535 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:59 np0005539552 nova_compute[233724]: 2025-11-29 08:21:59.538 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:21:59 np0005539552 nova_compute[233724]: 2025-11-29 08:21:59.554 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:59 np0005539552 nova_compute[233724]: 2025-11-29 08:21:59.555 233728 INFO os_vif [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:89:66,bridge_name='br-int',has_traffic_filtering=True,id=f2132fc4-6960-41d7-ba5a-a4fdffd50d3a,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2132fc4-69')#033[00m
Nov 29 03:21:59 np0005539552 nova_compute[233724]: 2025-11-29 08:21:59.556 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:59 np0005539552 nova_compute[233724]: 2025-11-29 08:21:59.626 233728 DEBUG nova.virt.libvirt.driver [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:21:59 np0005539552 nova_compute[233724]: 2025-11-29 08:21:59.627 233728 DEBUG nova.virt.libvirt.driver [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:21:59 np0005539552 nova_compute[233724]: 2025-11-29 08:21:59.627 233728 DEBUG nova.virt.libvirt.driver [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] No VIF found with MAC fa:16:3e:18:89:66, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:21:59 np0005539552 nova_compute[233724]: 2025-11-29 08:21:59.630 233728 INFO nova.virt.libvirt.driver [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Using config drive#033[00m
Nov 29 03:21:59 np0005539552 nova_compute[233724]: 2025-11-29 08:21:59.652 233728 DEBUG nova.storage.rbd_utils [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] rbd image 8f2e1d97-ea5c-43f8-a05f-2f531213d241_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:21:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:59.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:00 np0005539552 nova_compute[233724]: 2025-11-29 08:22:00.331 233728 INFO nova.virt.libvirt.driver [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Creating config drive at /var/lib/nova/instances/8f2e1d97-ea5c-43f8-a05f-2f531213d241/disk.config#033[00m
Nov 29 03:22:00 np0005539552 nova_compute[233724]: 2025-11-29 08:22:00.338 233728 DEBUG oslo_concurrency.processutils [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8f2e1d97-ea5c-43f8-a05f-2f531213d241/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa31z27x7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:00 np0005539552 nova_compute[233724]: 2025-11-29 08:22:00.472 233728 DEBUG oslo_concurrency.processutils [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8f2e1d97-ea5c-43f8-a05f-2f531213d241/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa31z27x7" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:00 np0005539552 nova_compute[233724]: 2025-11-29 08:22:00.497 233728 DEBUG nova.storage.rbd_utils [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] rbd image 8f2e1d97-ea5c-43f8-a05f-2f531213d241_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:22:00 np0005539552 nova_compute[233724]: 2025-11-29 08:22:00.500 233728 DEBUG oslo_concurrency.processutils [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8f2e1d97-ea5c-43f8-a05f-2f531213d241/disk.config 8f2e1d97-ea5c-43f8-a05f-2f531213d241_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:00.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:00 np0005539552 nova_compute[233724]: 2025-11-29 08:22:00.757 233728 DEBUG oslo_concurrency.processutils [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8f2e1d97-ea5c-43f8-a05f-2f531213d241/disk.config 8f2e1d97-ea5c-43f8-a05f-2f531213d241_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.257s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:00 np0005539552 nova_compute[233724]: 2025-11-29 08:22:00.757 233728 INFO nova.virt.libvirt.driver [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Deleting local config drive /var/lib/nova/instances/8f2e1d97-ea5c-43f8-a05f-2f531213d241/disk.config because it was imported into RBD.#033[00m
Nov 29 03:22:00 np0005539552 NetworkManager[48926]: <info>  [1764404520.8063] manager: (tapf2132fc4-69): new Tun device (/org/freedesktop/NetworkManager/Devices/243)
Nov 29 03:22:00 np0005539552 kernel: tapf2132fc4-69: entered promiscuous mode
Nov 29 03:22:00 np0005539552 ovn_controller[133798]: 2025-11-29T08:22:00Z|00519|binding|INFO|Claiming lport f2132fc4-6960-41d7-ba5a-a4fdffd50d3a for this chassis.
Nov 29 03:22:00 np0005539552 ovn_controller[133798]: 2025-11-29T08:22:00Z|00520|binding|INFO|f2132fc4-6960-41d7-ba5a-a4fdffd50d3a: Claiming fa:16:3e:18:89:66 10.100.0.4
Nov 29 03:22:00 np0005539552 nova_compute[233724]: 2025-11-29 08:22:00.810 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:00 np0005539552 ovn_controller[133798]: 2025-11-29T08:22:00Z|00521|binding|INFO|Setting lport f2132fc4-6960-41d7-ba5a-a4fdffd50d3a ovn-installed in OVS
Nov 29 03:22:00 np0005539552 nova_compute[233724]: 2025-11-29 08:22:00.825 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:00 np0005539552 nova_compute[233724]: 2025-11-29 08:22:00.829 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:00 np0005539552 systemd-udevd[284760]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:22:00 np0005539552 systemd-machined[196379]: New machine qemu-51-instance-0000007b.
Nov 29 03:22:00 np0005539552 systemd[1]: Started Virtual Machine qemu-51-instance-0000007b.
Nov 29 03:22:00 np0005539552 NetworkManager[48926]: <info>  [1764404520.8593] device (tapf2132fc4-69): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:22:00 np0005539552 NetworkManager[48926]: <info>  [1764404520.8606] device (tapf2132fc4-69): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:22:01 np0005539552 nova_compute[233724]: 2025-11-29 08:22:01.422 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:01 np0005539552 nova_compute[233724]: 2025-11-29 08:22:01.573 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404521.5733767, 8f2e1d97-ea5c-43f8-a05f-2f531213d241 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:22:01 np0005539552 nova_compute[233724]: 2025-11-29 08:22:01.574 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] VM Started (Lifecycle Event)#033[00m
Nov 29 03:22:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:01.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:02 np0005539552 ovn_controller[133798]: 2025-11-29T08:22:02Z|00522|binding|INFO|Setting lport f2132fc4-6960-41d7-ba5a-a4fdffd50d3a up in Southbound
Nov 29 03:22:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:02.409 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:18:89:66 10.100.0.4'], port_security=['fa:16:3e:18:89:66 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '8f2e1d97-ea5c-43f8-a05f-2f531213d241', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0bace34c102e4d56b089fd695d324f10', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8800adc1-baf2-4222-bbe6-bd173edc1243', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a26ea06d-6837-4c64-a5e9-9d9016316b21, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=f2132fc4-6960-41d7-ba5a-a4fdffd50d3a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:22:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:02.411 143400 INFO neutron.agent.ovn.metadata.agent [-] Port f2132fc4-6960-41d7-ba5a-a4fdffd50d3a in datapath 7fc1dfc3-8d7f-4854-980d-37a93f366035 bound to our chassis#033[00m
Nov 29 03:22:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:02.412 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7fc1dfc3-8d7f-4854-980d-37a93f366035#033[00m
Nov 29 03:22:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:02.430 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[fce08016-c4e0-48e5-9a68-f84d11eb8193]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:02.469 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[1a4ce439-326c-436f-9e69-2db747560640]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:02.475 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[7a5b0e3f-233b-485e-a9a6-84bfbf4e8798]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:02 np0005539552 nova_compute[233724]: 2025-11-29 08:22:02.477 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:22:02 np0005539552 nova_compute[233724]: 2025-11-29 08:22:02.481 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404521.5736318, 8f2e1d97-ea5c-43f8-a05f-2f531213d241 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:22:02 np0005539552 nova_compute[233724]: 2025-11-29 08:22:02.481 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:22:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:02.501 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[975bd29a-27d2-44f3-90c3-d6f9ba09568d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:02.520 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d309935a-ffac-41ef-bd17-c1107db0da40]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7fc1dfc3-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:27:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 152], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 747481, 'reachable_time': 35298, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284816, 'error': None, 'target': 'ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:02 np0005539552 nova_compute[233724]: 2025-11-29 08:22:02.526 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:22:02 np0005539552 nova_compute[233724]: 2025-11-29 08:22:02.530 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:22:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:02.541 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4e8ced02-cef4-485c-a6a5-30fcb9b5793a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7fc1dfc3-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 747491, 'tstamp': 747491}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284817, 'error': None, 'target': 'ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7fc1dfc3-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 747494, 'tstamp': 747494}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284817, 'error': None, 'target': 'ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:02.543 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7fc1dfc3-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:02 np0005539552 nova_compute[233724]: 2025-11-29 08:22:02.544 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:02 np0005539552 nova_compute[233724]: 2025-11-29 08:22:02.545 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:02.546 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7fc1dfc3-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:02.546 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:22:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:02.547 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7fc1dfc3-80, col_values=(('external_ids', {'iface-id': '79109459-2a40-4b69-936e-ac2a2aa77985'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:02 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:02.547 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:22:02 np0005539552 nova_compute[233724]: 2025-11-29 08:22:02.586 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:22:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:02.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:02 np0005539552 nova_compute[233724]: 2025-11-29 08:22:02.878 233728 DEBUG nova.network.neutron [req-e9ddb5fd-ff22-4329-85a8-8bbb24b7af8b req-f7e09a62-0cc4-4a91-81a0-abea1db81097 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Updated VIF entry in instance network info cache for port f2132fc4-6960-41d7-ba5a-a4fdffd50d3a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:22:02 np0005539552 nova_compute[233724]: 2025-11-29 08:22:02.879 233728 DEBUG nova.network.neutron [req-e9ddb5fd-ff22-4329-85a8-8bbb24b7af8b req-f7e09a62-0cc4-4a91-81a0-abea1db81097 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Updating instance_info_cache with network_info: [{"id": "f2132fc4-6960-41d7-ba5a-a4fdffd50d3a", "address": "fa:16:3e:18:89:66", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2132fc4-69", "ovs_interfaceid": "f2132fc4-6960-41d7-ba5a-a4fdffd50d3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:22:03 np0005539552 nova_compute[233724]: 2025-11-29 08:22:03.655 233728 DEBUG oslo_concurrency.lockutils [req-e9ddb5fd-ff22-4329-85a8-8bbb24b7af8b req-f7e09a62-0cc4-4a91-81a0-abea1db81097 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-8f2e1d97-ea5c-43f8-a05f-2f531213d241" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:22:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:03.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:04 np0005539552 nova_compute[233724]: 2025-11-29 08:22:04.537 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:04.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:04 np0005539552 nova_compute[233724]: 2025-11-29 08:22:04.917 233728 DEBUG oslo_concurrency.lockutils [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "refresh_cache-5004bd0f-c699-46d7-b535-b3a7db186a87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:22:04 np0005539552 nova_compute[233724]: 2025-11-29 08:22:04.917 233728 DEBUG oslo_concurrency.lockutils [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquired lock "refresh_cache-5004bd0f-c699-46d7-b535-b3a7db186a87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:22:04 np0005539552 nova_compute[233724]: 2025-11-29 08:22:04.918 233728 DEBUG nova.network.neutron [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:22:05 np0005539552 nova_compute[233724]: 2025-11-29 08:22:05.046 233728 DEBUG nova.compute.manager [req-474d4e1f-1a44-4309-a231-ea68c14ac6de req-59a65470-4c9f-46db-ac8f-f0a6819bb95b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Received event network-vif-plugged-f2132fc4-6960-41d7-ba5a-a4fdffd50d3a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:05 np0005539552 nova_compute[233724]: 2025-11-29 08:22:05.046 233728 DEBUG oslo_concurrency.lockutils [req-474d4e1f-1a44-4309-a231-ea68c14ac6de req-59a65470-4c9f-46db-ac8f-f0a6819bb95b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8f2e1d97-ea5c-43f8-a05f-2f531213d241-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:05 np0005539552 nova_compute[233724]: 2025-11-29 08:22:05.046 233728 DEBUG oslo_concurrency.lockutils [req-474d4e1f-1a44-4309-a231-ea68c14ac6de req-59a65470-4c9f-46db-ac8f-f0a6819bb95b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8f2e1d97-ea5c-43f8-a05f-2f531213d241-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:05 np0005539552 nova_compute[233724]: 2025-11-29 08:22:05.047 233728 DEBUG oslo_concurrency.lockutils [req-474d4e1f-1a44-4309-a231-ea68c14ac6de req-59a65470-4c9f-46db-ac8f-f0a6819bb95b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8f2e1d97-ea5c-43f8-a05f-2f531213d241-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:05 np0005539552 nova_compute[233724]: 2025-11-29 08:22:05.047 233728 DEBUG nova.compute.manager [req-474d4e1f-1a44-4309-a231-ea68c14ac6de req-59a65470-4c9f-46db-ac8f-f0a6819bb95b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Processing event network-vif-plugged-f2132fc4-6960-41d7-ba5a-a4fdffd50d3a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:22:05 np0005539552 nova_compute[233724]: 2025-11-29 08:22:05.048 233728 DEBUG nova.compute.manager [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:22:05 np0005539552 nova_compute[233724]: 2025-11-29 08:22:05.051 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404525.05153, 8f2e1d97-ea5c-43f8-a05f-2f531213d241 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:22:05 np0005539552 nova_compute[233724]: 2025-11-29 08:22:05.052 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:22:05 np0005539552 nova_compute[233724]: 2025-11-29 08:22:05.053 233728 DEBUG nova.virt.libvirt.driver [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:22:05 np0005539552 nova_compute[233724]: 2025-11-29 08:22:05.058 233728 INFO nova.virt.libvirt.driver [-] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Instance spawned successfully.#033[00m
Nov 29 03:22:05 np0005539552 nova_compute[233724]: 2025-11-29 08:22:05.059 233728 DEBUG nova.virt.libvirt.driver [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:22:05 np0005539552 nova_compute[233724]: 2025-11-29 08:22:05.114 233728 DEBUG nova.virt.libvirt.driver [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:22:05 np0005539552 nova_compute[233724]: 2025-11-29 08:22:05.114 233728 DEBUG nova.virt.libvirt.driver [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:22:05 np0005539552 nova_compute[233724]: 2025-11-29 08:22:05.114 233728 DEBUG nova.virt.libvirt.driver [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:22:05 np0005539552 nova_compute[233724]: 2025-11-29 08:22:05.115 233728 DEBUG nova.virt.libvirt.driver [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:22:05 np0005539552 nova_compute[233724]: 2025-11-29 08:22:05.115 233728 DEBUG nova.virt.libvirt.driver [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:22:05 np0005539552 nova_compute[233724]: 2025-11-29 08:22:05.116 233728 DEBUG nova.virt.libvirt.driver [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:22:05 np0005539552 nova_compute[233724]: 2025-11-29 08:22:05.120 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:22:05 np0005539552 nova_compute[233724]: 2025-11-29 08:22:05.124 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:22:05 np0005539552 nova_compute[233724]: 2025-11-29 08:22:05.306 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:22:05 np0005539552 nova_compute[233724]: 2025-11-29 08:22:05.337 233728 INFO nova.compute.manager [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Took 11.26 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:22:05 np0005539552 nova_compute[233724]: 2025-11-29 08:22:05.337 233728 DEBUG nova.compute.manager [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:22:05 np0005539552 nova_compute[233724]: 2025-11-29 08:22:05.451 233728 INFO nova.compute.manager [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Took 12.67 seconds to build instance.#033[00m
Nov 29 03:22:05 np0005539552 nova_compute[233724]: 2025-11-29 08:22:05.476 233728 DEBUG oslo_concurrency.lockutils [None req-859ab465-de2e-4b2f-a93e-3f000c517fa7 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "8f2e1d97-ea5c-43f8-a05f-2f531213d241" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:05.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:06 np0005539552 nova_compute[233724]: 2025-11-29 08:22:06.424 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:06.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:07 np0005539552 nova_compute[233724]: 2025-11-29 08:22:07.152 233728 DEBUG nova.compute.manager [req-b9f23350-fc2c-435c-aa08-8b36bdd4ae31 req-2919f597-37df-4f16-abb6-90c341fb3998 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Received event network-vif-plugged-f2132fc4-6960-41d7-ba5a-a4fdffd50d3a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:07 np0005539552 nova_compute[233724]: 2025-11-29 08:22:07.153 233728 DEBUG oslo_concurrency.lockutils [req-b9f23350-fc2c-435c-aa08-8b36bdd4ae31 req-2919f597-37df-4f16-abb6-90c341fb3998 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8f2e1d97-ea5c-43f8-a05f-2f531213d241-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:07 np0005539552 nova_compute[233724]: 2025-11-29 08:22:07.153 233728 DEBUG oslo_concurrency.lockutils [req-b9f23350-fc2c-435c-aa08-8b36bdd4ae31 req-2919f597-37df-4f16-abb6-90c341fb3998 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8f2e1d97-ea5c-43f8-a05f-2f531213d241-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:07 np0005539552 nova_compute[233724]: 2025-11-29 08:22:07.153 233728 DEBUG oslo_concurrency.lockutils [req-b9f23350-fc2c-435c-aa08-8b36bdd4ae31 req-2919f597-37df-4f16-abb6-90c341fb3998 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8f2e1d97-ea5c-43f8-a05f-2f531213d241-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:07 np0005539552 nova_compute[233724]: 2025-11-29 08:22:07.154 233728 DEBUG nova.compute.manager [req-b9f23350-fc2c-435c-aa08-8b36bdd4ae31 req-2919f597-37df-4f16-abb6-90c341fb3998 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] No waiting events found dispatching network-vif-plugged-f2132fc4-6960-41d7-ba5a-a4fdffd50d3a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:22:07 np0005539552 nova_compute[233724]: 2025-11-29 08:22:07.154 233728 WARNING nova.compute.manager [req-b9f23350-fc2c-435c-aa08-8b36bdd4ae31 req-2919f597-37df-4f16-abb6-90c341fb3998 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Received unexpected event network-vif-plugged-f2132fc4-6960-41d7-ba5a-a4fdffd50d3a for instance with vm_state active and task_state None.#033[00m
Nov 29 03:22:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:07.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:07 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:08 np0005539552 nova_compute[233724]: 2025-11-29 08:22:08.262 233728 DEBUG nova.network.neutron [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Updating instance_info_cache with network_info: [{"id": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "address": "fa:16:3e:47:25:11", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5e26252-e0", "ovs_interfaceid": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:22:08 np0005539552 nova_compute[233724]: 2025-11-29 08:22:08.284 233728 DEBUG oslo_concurrency.lockutils [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Releasing lock "refresh_cache-5004bd0f-c699-46d7-b535-b3a7db186a87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:22:08 np0005539552 nova_compute[233724]: 2025-11-29 08:22:08.379 233728 DEBUG nova.virt.libvirt.driver [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Nov 29 03:22:08 np0005539552 nova_compute[233724]: 2025-11-29 08:22:08.380 233728 DEBUG nova.virt.libvirt.volume.remotefs [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Creating file /var/lib/nova/instances/5004bd0f-c699-46d7-b535-b3a7db186a87/0c6312933c134f7bbaa46eaa45170111.tmp on remote host 192.168.122.100 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Nov 29 03:22:08 np0005539552 nova_compute[233724]: 2025-11-29 08:22:08.380 233728 DEBUG oslo_concurrency.processutils [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/5004bd0f-c699-46d7-b535-b3a7db186a87/0c6312933c134f7bbaa46eaa45170111.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:08.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:08 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #52. Immutable memtables: 0.
Nov 29 03:22:08 np0005539552 nova_compute[233724]: 2025-11-29 08:22:08.829 233728 DEBUG oslo_concurrency.processutils [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/5004bd0f-c699-46d7-b535-b3a7db186a87/0c6312933c134f7bbaa46eaa45170111.tmp" returned: 1 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:08 np0005539552 nova_compute[233724]: 2025-11-29 08:22:08.830 233728 DEBUG oslo_concurrency.processutils [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] 'ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/5004bd0f-c699-46d7-b535-b3a7db186a87/0c6312933c134f7bbaa46eaa45170111.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 29 03:22:08 np0005539552 nova_compute[233724]: 2025-11-29 08:22:08.831 233728 DEBUG nova.virt.libvirt.volume.remotefs [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Creating directory /var/lib/nova/instances/5004bd0f-c699-46d7-b535-b3a7db186a87 on remote host 192.168.122.100 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Nov 29 03:22:08 np0005539552 nova_compute[233724]: 2025-11-29 08:22:08.831 233728 DEBUG oslo_concurrency.processutils [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/5004bd0f-c699-46d7-b535-b3a7db186a87 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:08 np0005539552 podman[284825]: 2025-11-29 08:22:08.993214454 +0000 UTC m=+0.065957335 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Nov 29 03:22:09 np0005539552 podman[284824]: 2025-11-29 08:22:09.021535726 +0000 UTC m=+0.097087693 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 03:22:09 np0005539552 podman[284826]: 2025-11-29 08:22:09.037930078 +0000 UTC m=+0.104488083 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 03:22:09 np0005539552 nova_compute[233724]: 2025-11-29 08:22:09.044 233728 DEBUG oslo_concurrency.processutils [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/5004bd0f-c699-46d7-b535-b3a7db186a87" returned: 0 in 0.213s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:09 np0005539552 nova_compute[233724]: 2025-11-29 08:22:09.049 233728 DEBUG nova.virt.libvirt.driver [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:22:09 np0005539552 nova_compute[233724]: 2025-11-29 08:22:09.123 233728 DEBUG nova.compute.manager [req-45f8e7e1-7e46-4703-b56c-0190dec15d61 req-e60352fb-f9d1-4a8c-86fd-fa10ca8e5c6a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Received event network-changed-f2132fc4-6960-41d7-ba5a-a4fdffd50d3a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:09 np0005539552 nova_compute[233724]: 2025-11-29 08:22:09.124 233728 DEBUG nova.compute.manager [req-45f8e7e1-7e46-4703-b56c-0190dec15d61 req-e60352fb-f9d1-4a8c-86fd-fa10ca8e5c6a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Refreshing instance network info cache due to event network-changed-f2132fc4-6960-41d7-ba5a-a4fdffd50d3a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:22:09 np0005539552 nova_compute[233724]: 2025-11-29 08:22:09.124 233728 DEBUG oslo_concurrency.lockutils [req-45f8e7e1-7e46-4703-b56c-0190dec15d61 req-e60352fb-f9d1-4a8c-86fd-fa10ca8e5c6a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-8f2e1d97-ea5c-43f8-a05f-2f531213d241" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:22:09 np0005539552 nova_compute[233724]: 2025-11-29 08:22:09.124 233728 DEBUG oslo_concurrency.lockutils [req-45f8e7e1-7e46-4703-b56c-0190dec15d61 req-e60352fb-f9d1-4a8c-86fd-fa10ca8e5c6a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-8f2e1d97-ea5c-43f8-a05f-2f531213d241" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:22:09 np0005539552 nova_compute[233724]: 2025-11-29 08:22:09.125 233728 DEBUG nova.network.neutron [req-45f8e7e1-7e46-4703-b56c-0190dec15d61 req-e60352fb-f9d1-4a8c-86fd-fa10ca8e5c6a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Refreshing network info cache for port f2132fc4-6960-41d7-ba5a-a4fdffd50d3a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:22:09 np0005539552 nova_compute[233724]: 2025-11-29 08:22:09.540 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:09 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e317 e317: 3 total, 3 up, 3 in
Nov 29 03:22:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:09.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:10.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:10 np0005539552 nova_compute[233724]: 2025-11-29 08:22:10.686 233728 DEBUG nova.network.neutron [req-45f8e7e1-7e46-4703-b56c-0190dec15d61 req-e60352fb-f9d1-4a8c-86fd-fa10ca8e5c6a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Updated VIF entry in instance network info cache for port f2132fc4-6960-41d7-ba5a-a4fdffd50d3a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:22:10 np0005539552 nova_compute[233724]: 2025-11-29 08:22:10.686 233728 DEBUG nova.network.neutron [req-45f8e7e1-7e46-4703-b56c-0190dec15d61 req-e60352fb-f9d1-4a8c-86fd-fa10ca8e5c6a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Updating instance_info_cache with network_info: [{"id": "f2132fc4-6960-41d7-ba5a-a4fdffd50d3a", "address": "fa:16:3e:18:89:66", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2132fc4-69", "ovs_interfaceid": "f2132fc4-6960-41d7-ba5a-a4fdffd50d3a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:22:10 np0005539552 nova_compute[233724]: 2025-11-29 08:22:10.703 233728 DEBUG oslo_concurrency.lockutils [req-45f8e7e1-7e46-4703-b56c-0190dec15d61 req-e60352fb-f9d1-4a8c-86fd-fa10ca8e5c6a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-8f2e1d97-ea5c-43f8-a05f-2f531213d241" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:22:11 np0005539552 kernel: tapd5e26252-e0 (unregistering): left promiscuous mode
Nov 29 03:22:11 np0005539552 NetworkManager[48926]: <info>  [1764404531.3672] device (tapd5e26252-e0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:22:11 np0005539552 ovn_controller[133798]: 2025-11-29T08:22:11Z|00523|binding|INFO|Releasing lport d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 from this chassis (sb_readonly=0)
Nov 29 03:22:11 np0005539552 ovn_controller[133798]: 2025-11-29T08:22:11Z|00524|binding|INFO|Setting lport d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 down in Southbound
Nov 29 03:22:11 np0005539552 nova_compute[233724]: 2025-11-29 08:22:11.375 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:11 np0005539552 ovn_controller[133798]: 2025-11-29T08:22:11Z|00525|binding|INFO|Removing iface tapd5e26252-e0 ovn-installed in OVS
Nov 29 03:22:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:11.383 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:25:11 10.100.0.14'], port_security=['fa:16:3e:47:25:11 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '5004bd0f-c699-46d7-b535-b3a7db186a87', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '26e3508b949a4dbf960d7befc8f27869', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f8b3ac18-c5ae-4ce5-b905-769d2e675d6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.241'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37614949-afe4-4907-8dd7-b52152148378, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=d5e26252-e0d3-4a6b-8b18-b2f4cb7db432) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:22:11 np0005539552 nova_compute[233724]: 2025-11-29 08:22:11.384 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:11.384 143400 INFO neutron.agent.ovn.metadata.agent [-] Port d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 in datapath 58fd104d-4342-482d-ae9e-dbb4b9fa6788 unbound from our chassis#033[00m
Nov 29 03:22:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:11.386 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 58fd104d-4342-482d-ae9e-dbb4b9fa6788, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:22:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:11.387 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f06ccf73-3248-4758-a677-7b2dc9e5ee2a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:11.392 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 namespace which is not needed anymore#033[00m
Nov 29 03:22:11 np0005539552 nova_compute[233724]: 2025-11-29 08:22:11.396 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:11 np0005539552 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d00000079.scope: Deactivated successfully.
Nov 29 03:22:11 np0005539552 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d00000079.scope: Consumed 16.171s CPU time.
Nov 29 03:22:11 np0005539552 systemd-machined[196379]: Machine qemu-50-instance-00000079 terminated.
Nov 29 03:22:11 np0005539552 nova_compute[233724]: 2025-11-29 08:22:11.425 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:11 np0005539552 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[284001]: [NOTICE]   (284005) : haproxy version is 2.8.14-c23fe91
Nov 29 03:22:11 np0005539552 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[284001]: [NOTICE]   (284005) : path to executable is /usr/sbin/haproxy
Nov 29 03:22:11 np0005539552 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[284001]: [WARNING]  (284005) : Exiting Master process...
Nov 29 03:22:11 np0005539552 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[284001]: [ALERT]    (284005) : Current worker (284007) exited with code 143 (Terminated)
Nov 29 03:22:11 np0005539552 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[284001]: [WARNING]  (284005) : All workers exited. Exiting... (0)
Nov 29 03:22:11 np0005539552 systemd[1]: libpod-4534ff03db005b8f9674dfd055c71d71e147122d2547c8d449164b6d7f4b1765.scope: Deactivated successfully.
Nov 29 03:22:11 np0005539552 podman[284911]: 2025-11-29 08:22:11.535140676 +0000 UTC m=+0.044467997 container died 4534ff03db005b8f9674dfd055c71d71e147122d2547c8d449164b6d7f4b1765 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:22:11 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4534ff03db005b8f9674dfd055c71d71e147122d2547c8d449164b6d7f4b1765-userdata-shm.mount: Deactivated successfully.
Nov 29 03:22:11 np0005539552 systemd[1]: var-lib-containers-storage-overlay-a34a3874e687cdfd244282da0b63ef0ade509683f0c721b5675e985b4f2f1506-merged.mount: Deactivated successfully.
Nov 29 03:22:11 np0005539552 podman[284911]: 2025-11-29 08:22:11.577553027 +0000 UTC m=+0.086880348 container cleanup 4534ff03db005b8f9674dfd055c71d71e147122d2547c8d449164b6d7f4b1765 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:22:11 np0005539552 systemd[1]: libpod-conmon-4534ff03db005b8f9674dfd055c71d71e147122d2547c8d449164b6d7f4b1765.scope: Deactivated successfully.
Nov 29 03:22:11 np0005539552 podman[284943]: 2025-11-29 08:22:11.645754012 +0000 UTC m=+0.041467706 container remove 4534ff03db005b8f9674dfd055c71d71e147122d2547c8d449164b6d7f4b1765 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:22:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:11.651 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[7e5b4955-a78e-4a8a-858e-f6e3c6e821e4]: (4, ('Sat Nov 29 08:22:11 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 (4534ff03db005b8f9674dfd055c71d71e147122d2547c8d449164b6d7f4b1765)\n4534ff03db005b8f9674dfd055c71d71e147122d2547c8d449164b6d7f4b1765\nSat Nov 29 08:22:11 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 (4534ff03db005b8f9674dfd055c71d71e147122d2547c8d449164b6d7f4b1765)\n4534ff03db005b8f9674dfd055c71d71e147122d2547c8d449164b6d7f4b1765\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:11.653 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[991d8278-6fb0-49cc-b656-cb16b956f9c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:11.654 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58fd104d-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:11 np0005539552 nova_compute[233724]: 2025-11-29 08:22:11.656 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:11 np0005539552 kernel: tap58fd104d-40: left promiscuous mode
Nov 29 03:22:11 np0005539552 nova_compute[233724]: 2025-11-29 08:22:11.674 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:11.678 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6773151a-87a3-4a52-9dcb-877caccf0d45]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:11.695 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2b352839-b036-48a1-8837-be050d151e89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:11.696 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[76f138ae-ee49-40c1-952f-2f8e294b36c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:22:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:11.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:22:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:11.715 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ccb76f3e-ed2d-4d20-b5d7-3110a8a5e8ed]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 752920, 'reachable_time': 39071, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284969, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:11 np0005539552 systemd[1]: run-netns-ovnmeta\x2d58fd104d\x2d4342\x2d482d\x2dae9e\x2ddbb4b9fa6788.mount: Deactivated successfully.
Nov 29 03:22:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:11.720 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:22:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:11.720 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[c4cb15cd-89af-4c1a-bfd0-c8bb2d05a0bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:12 np0005539552 nova_compute[233724]: 2025-11-29 08:22:12.067 233728 INFO nova.virt.libvirt.driver [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Instance shutdown successfully after 3 seconds.#033[00m
Nov 29 03:22:12 np0005539552 nova_compute[233724]: 2025-11-29 08:22:12.074 233728 INFO nova.virt.libvirt.driver [-] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Instance destroyed successfully.#033[00m
Nov 29 03:22:12 np0005539552 nova_compute[233724]: 2025-11-29 08:22:12.076 233728 DEBUG nova.virt.libvirt.vif [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:21:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1253218278',display_name='tempest-ServerActionsTestJSON-server-1253218278',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1253218278',id=121,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHzpWtKVBxR8y0ptyf26y7qDtzaZ8kbONkoZ9pomjaUJfrobt3UrzOwJRKUVsAcnHq9vyCWex553L84ouC5hX916iXo50xuUU5ZZ/mR8SlhwWlkwNt3Z2Xuyrzlm/13P0A==',key_name='tempest-keypair-2034735121',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:21:24Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='26e3508b949a4dbf960d7befc8f27869',ramdisk_id='',reservation_id='r-84wu0jt1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-2111371935',owner_user_name='tempest-ServerActionsTestJSON-2111371935-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:22:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='80ceb9112b3a4f119c05f21fd617af11',uuid=5004bd0f-c699-46d7-b535-b3a7db186a87,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "address": "fa:16:3e:47:25:11", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": 
"tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1145729544-network", "vif_mac": "fa:16:3e:47:25:11"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5e26252-e0", "ovs_interfaceid": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:22:12 np0005539552 nova_compute[233724]: 2025-11-29 08:22:12.077 233728 DEBUG nova.network.os_vif_util [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converting VIF {"id": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "address": "fa:16:3e:47:25:11", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1145729544-network", "vif_mac": "fa:16:3e:47:25:11"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5e26252-e0", "ovs_interfaceid": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:22:12 np0005539552 nova_compute[233724]: 2025-11-29 08:22:12.078 233728 DEBUG nova.network.os_vif_util [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:47:25:11,bridge_name='br-int',has_traffic_filtering=True,id=d5e26252-e0d3-4a6b-8b18-b2f4cb7db432,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5e26252-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:22:12 np0005539552 nova_compute[233724]: 2025-11-29 08:22:12.079 233728 DEBUG os_vif [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:47:25:11,bridge_name='br-int',has_traffic_filtering=True,id=d5e26252-e0d3-4a6b-8b18-b2f4cb7db432,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5e26252-e0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:22:12 np0005539552 nova_compute[233724]: 2025-11-29 08:22:12.082 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:12 np0005539552 nova_compute[233724]: 2025-11-29 08:22:12.082 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5e26252-e0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:12 np0005539552 nova_compute[233724]: 2025-11-29 08:22:12.084 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:12 np0005539552 nova_compute[233724]: 2025-11-29 08:22:12.090 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:22:12 np0005539552 nova_compute[233724]: 2025-11-29 08:22:12.094 233728 INFO os_vif [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:47:25:11,bridge_name='br-int',has_traffic_filtering=True,id=d5e26252-e0d3-4a6b-8b18-b2f4cb7db432,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5e26252-e0')#033[00m
Nov 29 03:22:12 np0005539552 nova_compute[233724]: 2025-11-29 08:22:12.099 233728 DEBUG nova.virt.libvirt.driver [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:22:12 np0005539552 nova_compute[233724]: 2025-11-29 08:22:12.099 233728 DEBUG nova.virt.libvirt.driver [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:22:12 np0005539552 nova_compute[233724]: 2025-11-29 08:22:12.328 233728 DEBUG neutronclient.v2_0.client [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Nov 29 03:22:12 np0005539552 nova_compute[233724]: 2025-11-29 08:22:12.430 233728 DEBUG oslo_concurrency.lockutils [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:12 np0005539552 nova_compute[233724]: 2025-11-29 08:22:12.431 233728 DEBUG oslo_concurrency.lockutils [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:12 np0005539552 nova_compute[233724]: 2025-11-29 08:22:12.431 233728 DEBUG oslo_concurrency.lockutils [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:12.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:12 np0005539552 nova_compute[233724]: 2025-11-29 08:22:12.931 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:13 np0005539552 nova_compute[233724]: 2025-11-29 08:22:13.237 233728 DEBUG nova.compute.manager [req-2a525007-efe5-40df-bfe7-a829f9126d88 req-c9faee5e-00de-4c87-9d19-44ab6ad4d497 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Received event network-vif-unplugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:13 np0005539552 nova_compute[233724]: 2025-11-29 08:22:13.238 233728 DEBUG oslo_concurrency.lockutils [req-2a525007-efe5-40df-bfe7-a829f9126d88 req-c9faee5e-00de-4c87-9d19-44ab6ad4d497 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:13 np0005539552 nova_compute[233724]: 2025-11-29 08:22:13.238 233728 DEBUG oslo_concurrency.lockutils [req-2a525007-efe5-40df-bfe7-a829f9126d88 req-c9faee5e-00de-4c87-9d19-44ab6ad4d497 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:13 np0005539552 nova_compute[233724]: 2025-11-29 08:22:13.238 233728 DEBUG oslo_concurrency.lockutils [req-2a525007-efe5-40df-bfe7-a829f9126d88 req-c9faee5e-00de-4c87-9d19-44ab6ad4d497 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:13 np0005539552 nova_compute[233724]: 2025-11-29 08:22:13.238 233728 DEBUG nova.compute.manager [req-2a525007-efe5-40df-bfe7-a829f9126d88 req-c9faee5e-00de-4c87-9d19-44ab6ad4d497 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] No waiting events found dispatching network-vif-unplugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:22:13 np0005539552 nova_compute[233724]: 2025-11-29 08:22:13.238 233728 WARNING nova.compute.manager [req-2a525007-efe5-40df-bfe7-a829f9126d88 req-c9faee5e-00de-4c87-9d19-44ab6ad4d497 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Received unexpected event network-vif-unplugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:22:13 np0005539552 nova_compute[233724]: 2025-11-29 08:22:13.239 233728 DEBUG nova.compute.manager [req-2a525007-efe5-40df-bfe7-a829f9126d88 req-c9faee5e-00de-4c87-9d19-44ab6ad4d497 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Received event network-vif-plugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:13 np0005539552 nova_compute[233724]: 2025-11-29 08:22:13.239 233728 DEBUG oslo_concurrency.lockutils [req-2a525007-efe5-40df-bfe7-a829f9126d88 req-c9faee5e-00de-4c87-9d19-44ab6ad4d497 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:13 np0005539552 nova_compute[233724]: 2025-11-29 08:22:13.239 233728 DEBUG oslo_concurrency.lockutils [req-2a525007-efe5-40df-bfe7-a829f9126d88 req-c9faee5e-00de-4c87-9d19-44ab6ad4d497 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:13 np0005539552 nova_compute[233724]: 2025-11-29 08:22:13.239 233728 DEBUG oslo_concurrency.lockutils [req-2a525007-efe5-40df-bfe7-a829f9126d88 req-c9faee5e-00de-4c87-9d19-44ab6ad4d497 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:13 np0005539552 nova_compute[233724]: 2025-11-29 08:22:13.239 233728 DEBUG nova.compute.manager [req-2a525007-efe5-40df-bfe7-a829f9126d88 req-c9faee5e-00de-4c87-9d19-44ab6ad4d497 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] No waiting events found dispatching network-vif-plugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:22:13 np0005539552 nova_compute[233724]: 2025-11-29 08:22:13.239 233728 WARNING nova.compute.manager [req-2a525007-efe5-40df-bfe7-a829f9126d88 req-c9faee5e-00de-4c87-9d19-44ab6ad4d497 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Received unexpected event network-vif-plugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:22:13 np0005539552 nova_compute[233724]: 2025-11-29 08:22:13.581 233728 DEBUG nova.compute.manager [req-9ad72338-5840-4bb4-b53a-ced3db456e5a req-0eda5bf9-472c-425e-ae4d-0f4984de3aae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Received event network-changed-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:13 np0005539552 nova_compute[233724]: 2025-11-29 08:22:13.582 233728 DEBUG nova.compute.manager [req-9ad72338-5840-4bb4-b53a-ced3db456e5a req-0eda5bf9-472c-425e-ae4d-0f4984de3aae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Refreshing instance network info cache due to event network-changed-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:22:13 np0005539552 nova_compute[233724]: 2025-11-29 08:22:13.582 233728 DEBUG oslo_concurrency.lockutils [req-9ad72338-5840-4bb4-b53a-ced3db456e5a req-0eda5bf9-472c-425e-ae4d-0f4984de3aae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-5004bd0f-c699-46d7-b535-b3a7db186a87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:22:13 np0005539552 nova_compute[233724]: 2025-11-29 08:22:13.582 233728 DEBUG oslo_concurrency.lockutils [req-9ad72338-5840-4bb4-b53a-ced3db456e5a req-0eda5bf9-472c-425e-ae4d-0f4984de3aae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-5004bd0f-c699-46d7-b535-b3a7db186a87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:22:13 np0005539552 nova_compute[233724]: 2025-11-29 08:22:13.582 233728 DEBUG nova.network.neutron [req-9ad72338-5840-4bb4-b53a-ced3db456e5a req-0eda5bf9-472c-425e-ae4d-0f4984de3aae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Refreshing network info cache for port d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:22:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:13.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:14.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:15 np0005539552 nova_compute[233724]: 2025-11-29 08:22:15.463 233728 DEBUG nova.network.neutron [req-9ad72338-5840-4bb4-b53a-ced3db456e5a req-0eda5bf9-472c-425e-ae4d-0f4984de3aae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Updated VIF entry in instance network info cache for port d5e26252-e0d3-4a6b-8b18-b2f4cb7db432. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:22:15 np0005539552 nova_compute[233724]: 2025-11-29 08:22:15.464 233728 DEBUG nova.network.neutron [req-9ad72338-5840-4bb4-b53a-ced3db456e5a req-0eda5bf9-472c-425e-ae4d-0f4984de3aae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Updating instance_info_cache with network_info: [{"id": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "address": "fa:16:3e:47:25:11", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5e26252-e0", "ovs_interfaceid": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:22:15 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e318 e318: 3 total, 3 up, 3 in
Nov 29 03:22:15 np0005539552 nova_compute[233724]: 2025-11-29 08:22:15.479 233728 DEBUG oslo_concurrency.lockutils [req-9ad72338-5840-4bb4-b53a-ced3db456e5a req-0eda5bf9-472c-425e-ae4d-0f4984de3aae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-5004bd0f-c699-46d7-b535-b3a7db186a87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:22:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:22:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:15.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:22:16 np0005539552 nova_compute[233724]: 2025-11-29 08:22:16.288 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:16 np0005539552 nova_compute[233724]: 2025-11-29 08:22:16.427 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:16.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e319 e319: 3 total, 3 up, 3 in
Nov 29 03:22:17 np0005539552 nova_compute[233724]: 2025-11-29 08:22:17.085 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:17 np0005539552 nova_compute[233724]: 2025-11-29 08:22:17.424 233728 DEBUG nova.compute.manager [req-94c47667-3d08-4db5-883a-87a5f444510f req-e1cfefe4-4faf-4344-b679-e12552a1dd5b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Received event network-vif-plugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:17 np0005539552 nova_compute[233724]: 2025-11-29 08:22:17.424 233728 DEBUG oslo_concurrency.lockutils [req-94c47667-3d08-4db5-883a-87a5f444510f req-e1cfefe4-4faf-4344-b679-e12552a1dd5b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:17 np0005539552 nova_compute[233724]: 2025-11-29 08:22:17.425 233728 DEBUG oslo_concurrency.lockutils [req-94c47667-3d08-4db5-883a-87a5f444510f req-e1cfefe4-4faf-4344-b679-e12552a1dd5b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:17 np0005539552 nova_compute[233724]: 2025-11-29 08:22:17.425 233728 DEBUG oslo_concurrency.lockutils [req-94c47667-3d08-4db5-883a-87a5f444510f req-e1cfefe4-4faf-4344-b679-e12552a1dd5b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:17 np0005539552 nova_compute[233724]: 2025-11-29 08:22:17.425 233728 DEBUG nova.compute.manager [req-94c47667-3d08-4db5-883a-87a5f444510f req-e1cfefe4-4faf-4344-b679-e12552a1dd5b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] No waiting events found dispatching network-vif-plugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:22:17 np0005539552 nova_compute[233724]: 2025-11-29 08:22:17.426 233728 WARNING nova.compute.manager [req-94c47667-3d08-4db5-883a-87a5f444510f req-e1cfefe4-4faf-4344-b679-e12552a1dd5b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Received unexpected event network-vif-plugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 for instance with vm_state active and task_state resize_finish.#033[00m
Nov 29 03:22:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:17.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:18.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:19.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:20 np0005539552 nova_compute[233724]: 2025-11-29 08:22:20.418 233728 DEBUG nova.compute.manager [req-44f3752c-0c4c-4c6a-ac35-ad70a2bdf619 req-9152ca51-e4bf-48a2-a6c8-b458606401ef 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Received event network-vif-plugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:20 np0005539552 nova_compute[233724]: 2025-11-29 08:22:20.419 233728 DEBUG oslo_concurrency.lockutils [req-44f3752c-0c4c-4c6a-ac35-ad70a2bdf619 req-9152ca51-e4bf-48a2-a6c8-b458606401ef 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:20 np0005539552 nova_compute[233724]: 2025-11-29 08:22:20.419 233728 DEBUG oslo_concurrency.lockutils [req-44f3752c-0c4c-4c6a-ac35-ad70a2bdf619 req-9152ca51-e4bf-48a2-a6c8-b458606401ef 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:20 np0005539552 nova_compute[233724]: 2025-11-29 08:22:20.419 233728 DEBUG oslo_concurrency.lockutils [req-44f3752c-0c4c-4c6a-ac35-ad70a2bdf619 req-9152ca51-e4bf-48a2-a6c8-b458606401ef 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:20 np0005539552 nova_compute[233724]: 2025-11-29 08:22:20.419 233728 DEBUG nova.compute.manager [req-44f3752c-0c4c-4c6a-ac35-ad70a2bdf619 req-9152ca51-e4bf-48a2-a6c8-b458606401ef 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] No waiting events found dispatching network-vif-plugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:22:20 np0005539552 nova_compute[233724]: 2025-11-29 08:22:20.420 233728 WARNING nova.compute.manager [req-44f3752c-0c4c-4c6a-ac35-ad70a2bdf619 req-9152ca51-e4bf-48a2-a6c8-b458606401ef 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Received unexpected event network-vif-plugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 for instance with vm_state resized and task_state None.#033[00m
Nov 29 03:22:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:20.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:20.628 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:20.629 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:20.629 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:21 np0005539552 ovn_controller[133798]: 2025-11-29T08:22:21Z|00074|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:18:89:66 10.100.0.4
Nov 29 03:22:21 np0005539552 ovn_controller[133798]: 2025-11-29T08:22:21Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:18:89:66 10.100.0.4
Nov 29 03:22:21 np0005539552 nova_compute[233724]: 2025-11-29 08:22:21.432 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:21.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:22 np0005539552 nova_compute[233724]: 2025-11-29 08:22:22.087 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:22.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:23.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:24.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:25 np0005539552 nova_compute[233724]: 2025-11-29 08:22:25.275 233728 DEBUG nova.compute.manager [req-97866b7f-9572-4eac-b9d5-1c73598fb066 req-7a2b315f-79c4-431e-8811-f83f474f28fe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Received event network-vif-unplugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:25 np0005539552 nova_compute[233724]: 2025-11-29 08:22:25.275 233728 DEBUG oslo_concurrency.lockutils [req-97866b7f-9572-4eac-b9d5-1c73598fb066 req-7a2b315f-79c4-431e-8811-f83f474f28fe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:25 np0005539552 nova_compute[233724]: 2025-11-29 08:22:25.275 233728 DEBUG oslo_concurrency.lockutils [req-97866b7f-9572-4eac-b9d5-1c73598fb066 req-7a2b315f-79c4-431e-8811-f83f474f28fe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:25 np0005539552 nova_compute[233724]: 2025-11-29 08:22:25.276 233728 DEBUG oslo_concurrency.lockutils [req-97866b7f-9572-4eac-b9d5-1c73598fb066 req-7a2b315f-79c4-431e-8811-f83f474f28fe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:25 np0005539552 nova_compute[233724]: 2025-11-29 08:22:25.276 233728 DEBUG nova.compute.manager [req-97866b7f-9572-4eac-b9d5-1c73598fb066 req-7a2b315f-79c4-431e-8811-f83f474f28fe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] No waiting events found dispatching network-vif-unplugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:22:25 np0005539552 nova_compute[233724]: 2025-11-29 08:22:25.276 233728 WARNING nova.compute.manager [req-97866b7f-9572-4eac-b9d5-1c73598fb066 req-7a2b315f-79c4-431e-8811-f83f474f28fe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Received unexpected event network-vif-unplugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 for instance with vm_state resized and task_state resize_reverting.#033[00m
Nov 29 03:22:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:25.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:25 np0005539552 nova_compute[233724]: 2025-11-29 08:22:25.990 233728 INFO nova.compute.manager [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Swapping old allocation on dict_keys(['29c97280-aaf3-4c7f-a78a-1c9e8d025371']) held by migration fa6c73cf-52c4-43d6-af42-5901aa2f0931 for instance#033[00m
Nov 29 03:22:26 np0005539552 nova_compute[233724]: 2025-11-29 08:22:26.015 233728 DEBUG nova.scheduler.client.report [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Overwriting current allocation {'allocations': {'a73c606e-2495-4af4-b703-8d4b3001fdf5': {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}, 'generation': 81}}, 'project_id': '26e3508b949a4dbf960d7befc8f27869', 'user_id': '80ceb9112b3a4f119c05f21fd617af11', 'consumer_generation': 1} on consumer 5004bd0f-c699-46d7-b535-b3a7db186a87 move_allocations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:2018#033[00m
Nov 29 03:22:26 np0005539552 nova_compute[233724]: 2025-11-29 08:22:26.231 233728 INFO nova.network.neutron [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Updating port d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 with attributes {'binding:host_id': 'compute-2.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Nov 29 03:22:26 np0005539552 nova_compute[233724]: 2025-11-29 08:22:26.435 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:26 np0005539552 nova_compute[233724]: 2025-11-29 08:22:26.596 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404531.5951211, 5004bd0f-c699-46d7-b535-b3a7db186a87 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:22:26 np0005539552 nova_compute[233724]: 2025-11-29 08:22:26.597 233728 INFO nova.compute.manager [-] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:22:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:26.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:26 np0005539552 nova_compute[233724]: 2025-11-29 08:22:26.874 233728 DEBUG nova.compute.manager [None req-f6c59285-fbf7-4592-9c9d-6c455c33a28a - - - - - -] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:22:26 np0005539552 nova_compute[233724]: 2025-11-29 08:22:26.878 233728 DEBUG nova.compute.manager [None req-f6c59285-fbf7-4592-9c9d-6c455c33a28a - - - - - -] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:22:26 np0005539552 nova_compute[233724]: 2025-11-29 08:22:26.895 233728 INFO nova.compute.manager [None req-f6c59285-fbf7-4592-9c9d-6c455c33a28a - - - - - -] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] During sync_power_state the instance has a pending task (resize_reverting). Skip.#033[00m
Nov 29 03:22:27 np0005539552 nova_compute[233724]: 2025-11-29 08:22:27.089 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:27 np0005539552 nova_compute[233724]: 2025-11-29 08:22:27.298 233728 DEBUG oslo_concurrency.lockutils [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "refresh_cache-5004bd0f-c699-46d7-b535-b3a7db186a87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:22:27 np0005539552 nova_compute[233724]: 2025-11-29 08:22:27.298 233728 DEBUG oslo_concurrency.lockutils [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquired lock "refresh_cache-5004bd0f-c699-46d7-b535-b3a7db186a87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:22:27 np0005539552 nova_compute[233724]: 2025-11-29 08:22:27.298 233728 DEBUG nova.network.neutron [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:22:27 np0005539552 nova_compute[233724]: 2025-11-29 08:22:27.672 233728 DEBUG nova.compute.manager [req-a3a51699-2f08-4744-b1db-dab191547d29 req-4861dcbb-86a8-483c-b26d-c72286ec44c7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Received event network-vif-plugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:27 np0005539552 nova_compute[233724]: 2025-11-29 08:22:27.672 233728 DEBUG oslo_concurrency.lockutils [req-a3a51699-2f08-4744-b1db-dab191547d29 req-4861dcbb-86a8-483c-b26d-c72286ec44c7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:27 np0005539552 nova_compute[233724]: 2025-11-29 08:22:27.673 233728 DEBUG oslo_concurrency.lockutils [req-a3a51699-2f08-4744-b1db-dab191547d29 req-4861dcbb-86a8-483c-b26d-c72286ec44c7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:27 np0005539552 nova_compute[233724]: 2025-11-29 08:22:27.673 233728 DEBUG oslo_concurrency.lockutils [req-a3a51699-2f08-4744-b1db-dab191547d29 req-4861dcbb-86a8-483c-b26d-c72286ec44c7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:27 np0005539552 nova_compute[233724]: 2025-11-29 08:22:27.673 233728 DEBUG nova.compute.manager [req-a3a51699-2f08-4744-b1db-dab191547d29 req-4861dcbb-86a8-483c-b26d-c72286ec44c7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] No waiting events found dispatching network-vif-plugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:22:27 np0005539552 nova_compute[233724]: 2025-11-29 08:22:27.673 233728 WARNING nova.compute.manager [req-a3a51699-2f08-4744-b1db-dab191547d29 req-4861dcbb-86a8-483c-b26d-c72286ec44c7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Received unexpected event network-vif-plugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 for instance with vm_state resized and task_state resize_reverting.#033[00m
Nov 29 03:22:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:27.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:28.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:28 np0005539552 nova_compute[233724]: 2025-11-29 08:22:28.644 233728 DEBUG nova.network.neutron [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Updating instance_info_cache with network_info: [{"id": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "address": "fa:16:3e:47:25:11", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5e26252-e0", "ovs_interfaceid": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:22:28 np0005539552 nova_compute[233724]: 2025-11-29 08:22:28.678 233728 DEBUG oslo_concurrency.lockutils [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Releasing lock "refresh_cache-5004bd0f-c699-46d7-b535-b3a7db186a87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:22:28 np0005539552 nova_compute[233724]: 2025-11-29 08:22:28.678 233728 DEBUG nova.virt.libvirt.driver [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Starting finish_revert_migration finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11843#033[00m
Nov 29 03:22:28 np0005539552 nova_compute[233724]: 2025-11-29 08:22:28.755 233728 DEBUG nova.storage.rbd_utils [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] rolling back rbd image(5004bd0f-c699-46d7-b535-b3a7db186a87_disk) to snapshot(nova-resize) rollback_to_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:505#033[00m
Nov 29 03:22:28 np0005539552 nova_compute[233724]: 2025-11-29 08:22:28.871 233728 DEBUG nova.storage.rbd_utils [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] removing snapshot(nova-resize) on rbd image(5004bd0f-c699-46d7-b535-b3a7db186a87_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:22:29 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e320 e320: 3 total, 3 up, 3 in
Nov 29 03:22:29 np0005539552 nova_compute[233724]: 2025-11-29 08:22:29.652 233728 DEBUG nova.virt.libvirt.driver [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Start _get_guest_xml network_info=[{"id": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "address": "fa:16:3e:47:25:11", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5e26252-e0", "ovs_interfaceid": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:22:29 np0005539552 nova_compute[233724]: 2025-11-29 08:22:29.656 233728 WARNING nova.virt.libvirt.driver [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:22:29 np0005539552 nova_compute[233724]: 2025-11-29 08:22:29.660 233728 DEBUG nova.virt.libvirt.host [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:22:29 np0005539552 nova_compute[233724]: 2025-11-29 08:22:29.661 233728 DEBUG nova.virt.libvirt.host [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:22:29 np0005539552 nova_compute[233724]: 2025-11-29 08:22:29.663 233728 DEBUG nova.virt.libvirt.host [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:22:29 np0005539552 nova_compute[233724]: 2025-11-29 08:22:29.664 233728 DEBUG nova.virt.libvirt.host [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:22:29 np0005539552 nova_compute[233724]: 2025-11-29 08:22:29.665 233728 DEBUG nova.virt.libvirt.driver [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:22:29 np0005539552 nova_compute[233724]: 2025-11-29 08:22:29.665 233728 DEBUG nova.virt.hardware [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:22:29 np0005539552 nova_compute[233724]: 2025-11-29 08:22:29.666 233728 DEBUG nova.virt.hardware [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:22:29 np0005539552 nova_compute[233724]: 2025-11-29 08:22:29.666 233728 DEBUG nova.virt.hardware [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:22:29 np0005539552 nova_compute[233724]: 2025-11-29 08:22:29.666 233728 DEBUG nova.virt.hardware [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:22:29 np0005539552 nova_compute[233724]: 2025-11-29 08:22:29.666 233728 DEBUG nova.virt.hardware [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:22:29 np0005539552 nova_compute[233724]: 2025-11-29 08:22:29.666 233728 DEBUG nova.virt.hardware [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:22:29 np0005539552 nova_compute[233724]: 2025-11-29 08:22:29.667 233728 DEBUG nova.virt.hardware [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:22:29 np0005539552 nova_compute[233724]: 2025-11-29 08:22:29.667 233728 DEBUG nova.virt.hardware [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:22:29 np0005539552 nova_compute[233724]: 2025-11-29 08:22:29.667 233728 DEBUG nova.virt.hardware [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:22:29 np0005539552 nova_compute[233724]: 2025-11-29 08:22:29.667 233728 DEBUG nova.virt.hardware [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:22:29 np0005539552 nova_compute[233724]: 2025-11-29 08:22:29.667 233728 DEBUG nova.virt.hardware [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:22:29 np0005539552 nova_compute[233724]: 2025-11-29 08:22:29.668 233728 DEBUG nova.objects.instance [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 5004bd0f-c699-46d7-b535-b3a7db186a87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:22:29 np0005539552 nova_compute[233724]: 2025-11-29 08:22:29.683 233728 DEBUG oslo_concurrency.processutils [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:29.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:29 np0005539552 nova_compute[233724]: 2025-11-29 08:22:29.770 233728 DEBUG nova.compute.manager [req-2e6f97ef-83d0-4c61-ac32-3ccf8d6a6417 req-f4972280-43bc-4c22-b3c8-7fe1ef9cdf9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Received event network-changed-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:29 np0005539552 nova_compute[233724]: 2025-11-29 08:22:29.771 233728 DEBUG nova.compute.manager [req-2e6f97ef-83d0-4c61-ac32-3ccf8d6a6417 req-f4972280-43bc-4c22-b3c8-7fe1ef9cdf9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Refreshing instance network info cache due to event network-changed-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:22:29 np0005539552 nova_compute[233724]: 2025-11-29 08:22:29.771 233728 DEBUG oslo_concurrency.lockutils [req-2e6f97ef-83d0-4c61-ac32-3ccf8d6a6417 req-f4972280-43bc-4c22-b3c8-7fe1ef9cdf9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-5004bd0f-c699-46d7-b535-b3a7db186a87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:22:29 np0005539552 nova_compute[233724]: 2025-11-29 08:22:29.772 233728 DEBUG oslo_concurrency.lockutils [req-2e6f97ef-83d0-4c61-ac32-3ccf8d6a6417 req-f4972280-43bc-4c22-b3c8-7fe1ef9cdf9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-5004bd0f-c699-46d7-b535-b3a7db186a87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:22:29 np0005539552 nova_compute[233724]: 2025-11-29 08:22:29.772 233728 DEBUG nova.network.neutron [req-2e6f97ef-83d0-4c61-ac32-3ccf8d6a6417 req-f4972280-43bc-4c22-b3c8-7fe1ef9cdf9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Refreshing network info cache for port d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:22:30 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:22:30 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/251863706' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:22:30 np0005539552 nova_compute[233724]: 2025-11-29 08:22:30.107 233728 DEBUG oslo_concurrency.processutils [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:30 np0005539552 nova_compute[233724]: 2025-11-29 08:22:30.148 233728 DEBUG oslo_concurrency.processutils [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:30 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:22:30 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1115448986' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:22:30 np0005539552 nova_compute[233724]: 2025-11-29 08:22:30.598 233728 DEBUG oslo_concurrency.processutils [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:30 np0005539552 nova_compute[233724]: 2025-11-29 08:22:30.600 233728 DEBUG nova.virt.libvirt.vif [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:21:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1253218278',display_name='tempest-ServerActionsTestJSON-server-1253218278',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1253218278',id=121,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHzpWtKVBxR8y0ptyf26y7qDtzaZ8kbONkoZ9pomjaUJfrobt3UrzOwJRKUVsAcnHq9vyCWex553L84ouC5hX916iXo50xuUU5ZZ/mR8SlhwWlkwNt3Z2Xuyrzlm/13P0A==',key_name='tempest-keypair-2034735121',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:22:17Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='26e3508b949a4dbf960d7befc8f27869',ramdisk_id='',reservation_id='r-84wu0jt1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-2111371935',owner_user_name='tempest-ServerActionsTestJSON-2111371935-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:22:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='80ceb9112b3a4f119c05f21fd617af11',uuid=5004bd0f-c699-46d7-b535-b3a7db186a87,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "address": "fa:16:3e:47:25:11", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": 
"tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5e26252-e0", "ovs_interfaceid": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:22:30 np0005539552 nova_compute[233724]: 2025-11-29 08:22:30.600 233728 DEBUG nova.network.os_vif_util [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converting VIF {"id": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "address": "fa:16:3e:47:25:11", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5e26252-e0", "ovs_interfaceid": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:22:30 np0005539552 nova_compute[233724]: 2025-11-29 08:22:30.601 233728 DEBUG nova.network.os_vif_util [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:25:11,bridge_name='br-int',has_traffic_filtering=True,id=d5e26252-e0d3-4a6b-8b18-b2f4cb7db432,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5e26252-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:22:30 np0005539552 nova_compute[233724]: 2025-11-29 08:22:30.605 233728 DEBUG nova.virt.libvirt.driver [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:22:30 np0005539552 nova_compute[233724]:  <uuid>5004bd0f-c699-46d7-b535-b3a7db186a87</uuid>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:  <name>instance-00000079</name>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:22:30 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:      <nova:name>tempest-ServerActionsTestJSON-server-1253218278</nova:name>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:22:29</nova:creationTime>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:22:30 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:        <nova:user uuid="80ceb9112b3a4f119c05f21fd617af11">tempest-ServerActionsTestJSON-2111371935-project-member</nova:user>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:        <nova:project uuid="26e3508b949a4dbf960d7befc8f27869">tempest-ServerActionsTestJSON-2111371935</nova:project>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:        <nova:port uuid="d5e26252-e0d3-4a6b-8b18-b2f4cb7db432">
Nov 29 03:22:30 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:      <entry name="serial">5004bd0f-c699-46d7-b535-b3a7db186a87</entry>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:      <entry name="uuid">5004bd0f-c699-46d7-b535-b3a7db186a87</entry>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:22:30 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/5004bd0f-c699-46d7-b535-b3a7db186a87_disk">
Nov 29 03:22:30 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:22:30 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:22:30 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/5004bd0f-c699-46d7-b535-b3a7db186a87_disk.config">
Nov 29 03:22:30 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:22:30 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:22:30 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:47:25:11"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:      <target dev="tapd5e26252-e0"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:22:30 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/5004bd0f-c699-46d7-b535-b3a7db186a87/console.log" append="off"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <input type="keyboard" bus="usb"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:22:30 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:22:30 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:22:30 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:22:30 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:22:30 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:22:30 np0005539552 nova_compute[233724]: 2025-11-29 08:22:30.606 233728 DEBUG nova.compute.manager [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Preparing to wait for external event network-vif-plugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:22:30 np0005539552 nova_compute[233724]: 2025-11-29 08:22:30.607 233728 DEBUG oslo_concurrency.lockutils [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:30 np0005539552 nova_compute[233724]: 2025-11-29 08:22:30.607 233728 DEBUG oslo_concurrency.lockutils [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:30 np0005539552 nova_compute[233724]: 2025-11-29 08:22:30.608 233728 DEBUG oslo_concurrency.lockutils [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:30 np0005539552 nova_compute[233724]: 2025-11-29 08:22:30.608 233728 DEBUG nova.virt.libvirt.vif [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:21:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1253218278',display_name='tempest-ServerActionsTestJSON-server-1253218278',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1253218278',id=121,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHzpWtKVBxR8y0ptyf26y7qDtzaZ8kbONkoZ9pomjaUJfrobt3UrzOwJRKUVsAcnHq9vyCWex553L84ouC5hX916iXo50xuUU5ZZ/mR8SlhwWlkwNt3Z2Xuyrzlm/13P0A==',key_name='tempest-keypair-2034735121',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:22:17Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='26e3508b949a4dbf960d7befc8f27869',ramdisk_id='',reservation_id='r-84wu0jt1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-2111371935',owner_user_name='tempest-ServerActionsTestJSON-2111371935-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:22:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='80ceb9112b3a4f119c05f21fd617af11',uuid=5004bd0f-c699-46d7-b535-b3a7db186a87,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "address": "fa:16:3e:47:25:11", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": 
"tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5e26252-e0", "ovs_interfaceid": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:22:30 np0005539552 nova_compute[233724]: 2025-11-29 08:22:30.609 233728 DEBUG nova.network.os_vif_util [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converting VIF {"id": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "address": "fa:16:3e:47:25:11", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5e26252-e0", "ovs_interfaceid": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:22:30 np0005539552 nova_compute[233724]: 2025-11-29 08:22:30.609 233728 DEBUG nova.network.os_vif_util [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:25:11,bridge_name='br-int',has_traffic_filtering=True,id=d5e26252-e0d3-4a6b-8b18-b2f4cb7db432,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5e26252-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:22:30 np0005539552 nova_compute[233724]: 2025-11-29 08:22:30.610 233728 DEBUG os_vif [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:25:11,bridge_name='br-int',has_traffic_filtering=True,id=d5e26252-e0d3-4a6b-8b18-b2f4cb7db432,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5e26252-e0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:22:30 np0005539552 nova_compute[233724]: 2025-11-29 08:22:30.610 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:30 np0005539552 nova_compute[233724]: 2025-11-29 08:22:30.611 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:30 np0005539552 nova_compute[233724]: 2025-11-29 08:22:30.612 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:22:30 np0005539552 nova_compute[233724]: 2025-11-29 08:22:30.614 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:30 np0005539552 nova_compute[233724]: 2025-11-29 08:22:30.614 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd5e26252-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:30 np0005539552 nova_compute[233724]: 2025-11-29 08:22:30.615 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd5e26252-e0, col_values=(('external_ids', {'iface-id': 'd5e26252-e0d3-4a6b-8b18-b2f4cb7db432', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:47:25:11', 'vm-uuid': '5004bd0f-c699-46d7-b535-b3a7db186a87'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:30 np0005539552 nova_compute[233724]: 2025-11-29 08:22:30.616 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:30 np0005539552 NetworkManager[48926]: <info>  [1764404550.6177] manager: (tapd5e26252-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/244)
Nov 29 03:22:30 np0005539552 nova_compute[233724]: 2025-11-29 08:22:30.620 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:22:30 np0005539552 nova_compute[233724]: 2025-11-29 08:22:30.622 233728 INFO os_vif [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:25:11,bridge_name='br-int',has_traffic_filtering=True,id=d5e26252-e0d3-4a6b-8b18-b2f4cb7db432,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5e26252-e0')#033[00m
Nov 29 03:22:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:30.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:30 np0005539552 kernel: tapd5e26252-e0: entered promiscuous mode
Nov 29 03:22:30 np0005539552 NetworkManager[48926]: <info>  [1764404550.7096] manager: (tapd5e26252-e0): new Tun device (/org/freedesktop/NetworkManager/Devices/245)
Nov 29 03:22:30 np0005539552 ovn_controller[133798]: 2025-11-29T08:22:30Z|00526|binding|INFO|Claiming lport d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 for this chassis.
Nov 29 03:22:30 np0005539552 ovn_controller[133798]: 2025-11-29T08:22:30Z|00527|binding|INFO|d5e26252-e0d3-4a6b-8b18-b2f4cb7db432: Claiming fa:16:3e:47:25:11 10.100.0.14
Nov 29 03:22:30 np0005539552 nova_compute[233724]: 2025-11-29 08:22:30.710 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:30.717 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:25:11 10.100.0.14'], port_security=['fa:16:3e:47:25:11 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '5004bd0f-c699-46d7-b535-b3a7db186a87', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '26e3508b949a4dbf960d7befc8f27869', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'f8b3ac18-c5ae-4ce5-b905-769d2e675d6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.241'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37614949-afe4-4907-8dd7-b52152148378, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=d5e26252-e0d3-4a6b-8b18-b2f4cb7db432) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:22:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:30.718 143400 INFO neutron.agent.ovn.metadata.agent [-] Port d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 in datapath 58fd104d-4342-482d-ae9e-dbb4b9fa6788 bound to our chassis#033[00m
Nov 29 03:22:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:30.720 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 58fd104d-4342-482d-ae9e-dbb4b9fa6788#033[00m
Nov 29 03:22:30 np0005539552 ovn_controller[133798]: 2025-11-29T08:22:30Z|00528|binding|INFO|Setting lport d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 ovn-installed in OVS
Nov 29 03:22:30 np0005539552 ovn_controller[133798]: 2025-11-29T08:22:30Z|00529|binding|INFO|Setting lport d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 up in Southbound
Nov 29 03:22:30 np0005539552 nova_compute[233724]: 2025-11-29 08:22:30.726 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:30 np0005539552 nova_compute[233724]: 2025-11-29 08:22:30.729 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:30.733 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[7ebe8e32-4218-4b4a-a7ce-ff3448bf88c2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:30.734 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap58fd104d-41 in ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:22:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:30.735 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap58fd104d-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:22:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:30.736 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[42bab197-ca13-46b6-8ee8-a6cfa016df37]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:30.736 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[dbf4f518-4597-4657-8f21-3a7bcbc806a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:30 np0005539552 systemd-machined[196379]: New machine qemu-52-instance-00000079.
Nov 29 03:22:30 np0005539552 systemd-udevd[285163]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:22:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:30.750 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[39a12d9b-e966-4556-b5dc-631b8170362d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:30 np0005539552 NetworkManager[48926]: <info>  [1764404550.7650] device (tapd5e26252-e0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:22:30 np0005539552 NetworkManager[48926]: <info>  [1764404550.7658] device (tapd5e26252-e0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:22:30 np0005539552 systemd[1]: Started Virtual Machine qemu-52-instance-00000079.
Nov 29 03:22:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:30.768 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3deed510-5699-4457-a55d-cf60e60d3364]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:30.798 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[5b80697c-499d-4dd6-96f9-b2691e75f754]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:30.806 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[747f9f4d-a6d9-42dd-afc2-b01679d74295]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:30 np0005539552 NetworkManager[48926]: <info>  [1764404550.8084] manager: (tap58fd104d-40): new Veth device (/org/freedesktop/NetworkManager/Devices/246)
Nov 29 03:22:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:30.840 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[9fc10099-359c-413d-8b8f-1470cecb0257]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:30.843 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[50086bd9-492b-428e-aae0-f0ef59f0e7e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:30 np0005539552 NetworkManager[48926]: <info>  [1764404550.8685] device (tap58fd104d-40): carrier: link connected
Nov 29 03:22:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:30.875 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[eff19f07-2c56-432b-9a25-3d99d14adfbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:30.891 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2b12d788-099b-4a2a-a567-362fcec1c3f7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58fd104d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:26:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 759655, 'reachable_time': 31989, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285194, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:30.907 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e5abdf7b-3947-47aa-8deb-2263519f07dc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea8:261e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 759655, 'tstamp': 759655}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285195, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:30 np0005539552 nova_compute[233724]: 2025-11-29 08:22:30.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:30.926 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[7df858df-0046-4ffd-8565-d17c9021e26e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58fd104d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:26:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 759655, 'reachable_time': 31989, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 285196, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:30.951 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[cdcbcc50-76af-4b22-b91e-1f4766e2716f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:31.005 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c5793966-fe8d-40a7-9b6b-3f195b1ef7ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:31.006 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58fd104d-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:31.007 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:22:31 np0005539552 nova_compute[233724]: 2025-11-29 08:22:31.007 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:31.007 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58fd104d-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:31 np0005539552 nova_compute[233724]: 2025-11-29 08:22:31.007 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:31 np0005539552 nova_compute[233724]: 2025-11-29 08:22:31.007 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:31 np0005539552 kernel: tap58fd104d-40: entered promiscuous mode
Nov 29 03:22:31 np0005539552 NetworkManager[48926]: <info>  [1764404551.0097] manager: (tap58fd104d-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/247)
Nov 29 03:22:31 np0005539552 nova_compute[233724]: 2025-11-29 08:22:31.011 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:22:31 np0005539552 nova_compute[233724]: 2025-11-29 08:22:31.011 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:31.012 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap58fd104d-40, col_values=(('external_ids', {'iface-id': '49c2d2fc-d147-42b8-8b87-df4d04283e61'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:31 np0005539552 ovn_controller[133798]: 2025-11-29T08:22:31Z|00530|binding|INFO|Releasing lport 49c2d2fc-d147-42b8-8b87-df4d04283e61 from this chassis (sb_readonly=0)
Nov 29 03:22:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:31.027 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/58fd104d-4342-482d-ae9e-dbb4b9fa6788.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/58fd104d-4342-482d-ae9e-dbb4b9fa6788.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:22:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:31.028 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d4691191-6e83-4e53-a52f-35cf538f72b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:31.029 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:22:31 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:22:31 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:22:31 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-58fd104d-4342-482d-ae9e-dbb4b9fa6788
Nov 29 03:22:31 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:22:31 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:22:31 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:22:31 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/58fd104d-4342-482d-ae9e-dbb4b9fa6788.pid.haproxy
Nov 29 03:22:31 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:22:31 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:22:31 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:22:31 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:22:31 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:22:31 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:22:31 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:22:31 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:22:31 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:22:31 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:22:31 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:22:31 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:22:31 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:22:31 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:22:31 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:22:31 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:22:31 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:22:31 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:22:31 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:22:31 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:22:31 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 58fd104d-4342-482d-ae9e-dbb4b9fa6788
Nov 29 03:22:31 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:22:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:31.031 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'env', 'PROCESS_TAG=haproxy-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/58fd104d-4342-482d-ae9e-dbb4b9fa6788.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:22:31 np0005539552 nova_compute[233724]: 2025-11-29 08:22:31.040 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:31 np0005539552 nova_compute[233724]: 2025-11-29 08:22:31.381 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404551.3809726, 5004bd0f-c699-46d7-b535-b3a7db186a87 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:22:31 np0005539552 nova_compute[233724]: 2025-11-29 08:22:31.382 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] VM Started (Lifecycle Event)#033[00m
Nov 29 03:22:31 np0005539552 nova_compute[233724]: 2025-11-29 08:22:31.400 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:22:31 np0005539552 nova_compute[233724]: 2025-11-29 08:22:31.404 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404551.3817308, 5004bd0f-c699-46d7-b535-b3a7db186a87 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:22:31 np0005539552 nova_compute[233724]: 2025-11-29 08:22:31.404 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:22:31 np0005539552 nova_compute[233724]: 2025-11-29 08:22:31.427 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:22:31 np0005539552 podman[285289]: 2025-11-29 08:22:31.430789237 +0000 UTC m=+0.057622232 container create 0f5d7dc496b58853c04cc44fda3d9a1d56704ae8716471391f7ce46539f36b03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:22:31 np0005539552 nova_compute[233724]: 2025-11-29 08:22:31.431 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:22:31 np0005539552 nova_compute[233724]: 2025-11-29 08:22:31.441 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:31 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:22:31 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1054651002' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:22:31 np0005539552 nova_compute[233724]: 2025-11-29 08:22:31.465 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] During sync_power_state the instance has a pending task (resize_reverting). Skip.#033[00m
Nov 29 03:22:31 np0005539552 nova_compute[233724]: 2025-11-29 08:22:31.466 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:31 np0005539552 systemd[1]: Started libpod-conmon-0f5d7dc496b58853c04cc44fda3d9a1d56704ae8716471391f7ce46539f36b03.scope.
Nov 29 03:22:31 np0005539552 podman[285289]: 2025-11-29 08:22:31.400561774 +0000 UTC m=+0.027394789 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:22:31 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:22:31 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95ad69dc2432068cfb6e28623f27193671f5e0657e09ed4f979f632890b54ec8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:22:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:31.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:31 np0005539552 podman[285289]: 2025-11-29 08:22:31.725492446 +0000 UTC m=+0.352325521 container init 0f5d7dc496b58853c04cc44fda3d9a1d56704ae8716471391f7ce46539f36b03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:22:31 np0005539552 podman[285289]: 2025-11-29 08:22:31.730938823 +0000 UTC m=+0.357771848 container start 0f5d7dc496b58853c04cc44fda3d9a1d56704ae8716471391f7ce46539f36b03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true)
Nov 29 03:22:31 np0005539552 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[285306]: [NOTICE]   (285311) : New worker (285313) forked
Nov 29 03:22:31 np0005539552 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[285306]: [NOTICE]   (285311) : Loading success.
Nov 29 03:22:31 np0005539552 nova_compute[233724]: 2025-11-29 08:22:31.833 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:31.835 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:22:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:31.856 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:22:31 np0005539552 nova_compute[233724]: 2025-11-29 08:22:31.857 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:22:31 np0005539552 nova_compute[233724]: 2025-11-29 08:22:31.857 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:22:31 np0005539552 nova_compute[233724]: 2025-11-29 08:22:31.860 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:22:31 np0005539552 nova_compute[233724]: 2025-11-29 08:22:31.860 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:22:31 np0005539552 nova_compute[233724]: 2025-11-29 08:22:31.862 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-0000007b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:22:31 np0005539552 nova_compute[233724]: 2025-11-29 08:22:31.863 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-0000007b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:22:31 np0005539552 nova_compute[233724]: 2025-11-29 08:22:31.873 233728 DEBUG nova.network.neutron [req-2e6f97ef-83d0-4c61-ac32-3ccf8d6a6417 req-f4972280-43bc-4c22-b3c8-7fe1ef9cdf9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Updated VIF entry in instance network info cache for port d5e26252-e0d3-4a6b-8b18-b2f4cb7db432. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:22:31 np0005539552 nova_compute[233724]: 2025-11-29 08:22:31.874 233728 DEBUG nova.network.neutron [req-2e6f97ef-83d0-4c61-ac32-3ccf8d6a6417 req-f4972280-43bc-4c22-b3c8-7fe1ef9cdf9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Updating instance_info_cache with network_info: [{"id": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "address": "fa:16:3e:47:25:11", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5e26252-e0", "ovs_interfaceid": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:22:31 np0005539552 nova_compute[233724]: 2025-11-29 08:22:31.898 233728 DEBUG oslo_concurrency.lockutils [req-2e6f97ef-83d0-4c61-ac32-3ccf8d6a6417 req-f4972280-43bc-4c22-b3c8-7fe1ef9cdf9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-5004bd0f-c699-46d7-b535-b3a7db186a87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:22:31 np0005539552 nova_compute[233724]: 2025-11-29 08:22:31.935 233728 DEBUG nova.compute.manager [req-89b71a96-ee0b-42d3-bb91-5a5b4d83af04 req-b77175aa-0016-4f73-9324-e56530abc5ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Received event network-vif-plugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:31 np0005539552 nova_compute[233724]: 2025-11-29 08:22:31.935 233728 DEBUG oslo_concurrency.lockutils [req-89b71a96-ee0b-42d3-bb91-5a5b4d83af04 req-b77175aa-0016-4f73-9324-e56530abc5ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:31 np0005539552 nova_compute[233724]: 2025-11-29 08:22:31.935 233728 DEBUG oslo_concurrency.lockutils [req-89b71a96-ee0b-42d3-bb91-5a5b4d83af04 req-b77175aa-0016-4f73-9324-e56530abc5ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:31 np0005539552 nova_compute[233724]: 2025-11-29 08:22:31.935 233728 DEBUG oslo_concurrency.lockutils [req-89b71a96-ee0b-42d3-bb91-5a5b4d83af04 req-b77175aa-0016-4f73-9324-e56530abc5ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:31 np0005539552 nova_compute[233724]: 2025-11-29 08:22:31.936 233728 DEBUG nova.compute.manager [req-89b71a96-ee0b-42d3-bb91-5a5b4d83af04 req-b77175aa-0016-4f73-9324-e56530abc5ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Processing event network-vif-plugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:22:31 np0005539552 nova_compute[233724]: 2025-11-29 08:22:31.936 233728 DEBUG nova.compute.manager [req-89b71a96-ee0b-42d3-bb91-5a5b4d83af04 req-b77175aa-0016-4f73-9324-e56530abc5ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Received event network-vif-plugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:31 np0005539552 nova_compute[233724]: 2025-11-29 08:22:31.936 233728 DEBUG oslo_concurrency.lockutils [req-89b71a96-ee0b-42d3-bb91-5a5b4d83af04 req-b77175aa-0016-4f73-9324-e56530abc5ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:31 np0005539552 nova_compute[233724]: 2025-11-29 08:22:31.936 233728 DEBUG oslo_concurrency.lockutils [req-89b71a96-ee0b-42d3-bb91-5a5b4d83af04 req-b77175aa-0016-4f73-9324-e56530abc5ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:31 np0005539552 nova_compute[233724]: 2025-11-29 08:22:31.936 233728 DEBUG oslo_concurrency.lockutils [req-89b71a96-ee0b-42d3-bb91-5a5b4d83af04 req-b77175aa-0016-4f73-9324-e56530abc5ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:31 np0005539552 nova_compute[233724]: 2025-11-29 08:22:31.937 233728 DEBUG nova.compute.manager [req-89b71a96-ee0b-42d3-bb91-5a5b4d83af04 req-b77175aa-0016-4f73-9324-e56530abc5ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] No waiting events found dispatching network-vif-plugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:22:31 np0005539552 nova_compute[233724]: 2025-11-29 08:22:31.937 233728 WARNING nova.compute.manager [req-89b71a96-ee0b-42d3-bb91-5a5b4d83af04 req-b77175aa-0016-4f73-9324-e56530abc5ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Received unexpected event network-vif-plugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 for instance with vm_state resized and task_state resize_reverting.#033[00m
Nov 29 03:22:31 np0005539552 nova_compute[233724]: 2025-11-29 08:22:31.937 233728 DEBUG nova.compute.manager [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:22:31 np0005539552 nova_compute[233724]: 2025-11-29 08:22:31.940 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404551.9405153, 5004bd0f-c699-46d7-b535-b3a7db186a87 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:22:31 np0005539552 nova_compute[233724]: 2025-11-29 08:22:31.941 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:22:31 np0005539552 nova_compute[233724]: 2025-11-29 08:22:31.944 233728 INFO nova.virt.libvirt.driver [-] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Instance running successfully.#033[00m
Nov 29 03:22:31 np0005539552 nova_compute[233724]: 2025-11-29 08:22:31.945 233728 DEBUG nova.virt.libvirt.driver [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] finish_revert_migration finished successfully. finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11887#033[00m
Nov 29 03:22:31 np0005539552 nova_compute[233724]: 2025-11-29 08:22:31.974 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:22:31 np0005539552 nova_compute[233724]: 2025-11-29 08:22:31.977 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:22:32 np0005539552 nova_compute[233724]: 2025-11-29 08:22:32.024 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] During sync_power_state the instance has a pending task (resize_reverting). Skip.#033[00m
Nov 29 03:22:32 np0005539552 nova_compute[233724]: 2025-11-29 08:22:32.043 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:22:32 np0005539552 nova_compute[233724]: 2025-11-29 08:22:32.044 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3866MB free_disk=20.795360565185547GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:22:32 np0005539552 nova_compute[233724]: 2025-11-29 08:22:32.044 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:32 np0005539552 nova_compute[233724]: 2025-11-29 08:22:32.044 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:32 np0005539552 nova_compute[233724]: 2025-11-29 08:22:32.069 233728 INFO nova.compute.manager [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Updating instance to original state: 'active'#033[00m
Nov 29 03:22:32 np0005539552 nova_compute[233724]: 2025-11-29 08:22:32.157 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 07f760bf-6984-45e9-8e85-3d297e812553 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:22:32 np0005539552 nova_compute[233724]: 2025-11-29 08:22:32.157 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 8f2e1d97-ea5c-43f8-a05f-2f531213d241 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:22:32 np0005539552 nova_compute[233724]: 2025-11-29 08:22:32.157 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 5004bd0f-c699-46d7-b535-b3a7db186a87 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:22:32 np0005539552 nova_compute[233724]: 2025-11-29 08:22:32.158 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:22:32 np0005539552 nova_compute[233724]: 2025-11-29 08:22:32.158 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:22:32 np0005539552 nova_compute[233724]: 2025-11-29 08:22:32.247 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:32.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:32 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:22:32 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2924993727' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:22:32 np0005539552 nova_compute[233724]: 2025-11-29 08:22:32.731 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:32 np0005539552 nova_compute[233724]: 2025-11-29 08:22:32.736 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:22:32 np0005539552 nova_compute[233724]: 2025-11-29 08:22:32.752 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:22:32 np0005539552 nova_compute[233724]: 2025-11-29 08:22:32.775 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:22:32 np0005539552 nova_compute[233724]: 2025-11-29 08:22:32.776 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:32 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:33.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:34.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:35 np0005539552 nova_compute[233724]: 2025-11-29 08:22:35.160 233728 DEBUG oslo_concurrency.lockutils [None req-eb90573f-d7e2-4734-aee5-f5e7c8c32319 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "5004bd0f-c699-46d7-b535-b3a7db186a87" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:35 np0005539552 nova_compute[233724]: 2025-11-29 08:22:35.160 233728 DEBUG oslo_concurrency.lockutils [None req-eb90573f-d7e2-4734-aee5-f5e7c8c32319 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:35 np0005539552 nova_compute[233724]: 2025-11-29 08:22:35.161 233728 DEBUG oslo_concurrency.lockutils [None req-eb90573f-d7e2-4734-aee5-f5e7c8c32319 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:35 np0005539552 nova_compute[233724]: 2025-11-29 08:22:35.161 233728 DEBUG oslo_concurrency.lockutils [None req-eb90573f-d7e2-4734-aee5-f5e7c8c32319 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:35 np0005539552 nova_compute[233724]: 2025-11-29 08:22:35.161 233728 DEBUG oslo_concurrency.lockutils [None req-eb90573f-d7e2-4734-aee5-f5e7c8c32319 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:35 np0005539552 nova_compute[233724]: 2025-11-29 08:22:35.163 233728 INFO nova.compute.manager [None req-eb90573f-d7e2-4734-aee5-f5e7c8c32319 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Terminating instance#033[00m
Nov 29 03:22:35 np0005539552 nova_compute[233724]: 2025-11-29 08:22:35.164 233728 DEBUG nova.compute.manager [None req-eb90573f-d7e2-4734-aee5-f5e7c8c32319 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:22:35 np0005539552 kernel: tapd5e26252-e0 (unregistering): left promiscuous mode
Nov 29 03:22:35 np0005539552 NetworkManager[48926]: <info>  [1764404555.2056] device (tapd5e26252-e0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:22:35 np0005539552 nova_compute[233724]: 2025-11-29 08:22:35.217 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:35 np0005539552 ovn_controller[133798]: 2025-11-29T08:22:35Z|00531|binding|INFO|Releasing lport d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 from this chassis (sb_readonly=0)
Nov 29 03:22:35 np0005539552 ovn_controller[133798]: 2025-11-29T08:22:35Z|00532|binding|INFO|Setting lport d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 down in Southbound
Nov 29 03:22:35 np0005539552 ovn_controller[133798]: 2025-11-29T08:22:35Z|00533|binding|INFO|Removing iface tapd5e26252-e0 ovn-installed in OVS
Nov 29 03:22:35 np0005539552 nova_compute[233724]: 2025-11-29 08:22:35.221 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:35 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:35.224 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:25:11 10.100.0.14'], port_security=['fa:16:3e:47:25:11 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '5004bd0f-c699-46d7-b535-b3a7db186a87', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '26e3508b949a4dbf960d7befc8f27869', 'neutron:revision_number': '12', 'neutron:security_group_ids': 'f8b3ac18-c5ae-4ce5-b905-769d2e675d6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.241', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37614949-afe4-4907-8dd7-b52152148378, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=d5e26252-e0d3-4a6b-8b18-b2f4cb7db432) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:22:35 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:35.228 143400 INFO neutron.agent.ovn.metadata.agent [-] Port d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 in datapath 58fd104d-4342-482d-ae9e-dbb4b9fa6788 unbound from our chassis#033[00m
Nov 29 03:22:35 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:35.230 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 58fd104d-4342-482d-ae9e-dbb4b9fa6788, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:22:35 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:35.232 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0ccbc45d-66e6-4970-ac08-fbfd5047375e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:35 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:35.232 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 namespace which is not needed anymore#033[00m
Nov 29 03:22:35 np0005539552 nova_compute[233724]: 2025-11-29 08:22:35.239 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:35 np0005539552 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d00000079.scope: Deactivated successfully.
Nov 29 03:22:35 np0005539552 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d00000079.scope: Consumed 4.047s CPU time.
Nov 29 03:22:35 np0005539552 systemd-machined[196379]: Machine qemu-52-instance-00000079 terminated.
Nov 29 03:22:35 np0005539552 nova_compute[233724]: 2025-11-29 08:22:35.386 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:35 np0005539552 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[285306]: [NOTICE]   (285311) : haproxy version is 2.8.14-c23fe91
Nov 29 03:22:35 np0005539552 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[285306]: [NOTICE]   (285311) : path to executable is /usr/sbin/haproxy
Nov 29 03:22:35 np0005539552 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[285306]: [WARNING]  (285311) : Exiting Master process...
Nov 29 03:22:35 np0005539552 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[285306]: [ALERT]    (285311) : Current worker (285313) exited with code 143 (Terminated)
Nov 29 03:22:35 np0005539552 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[285306]: [WARNING]  (285311) : All workers exited. Exiting... (0)
Nov 29 03:22:35 np0005539552 nova_compute[233724]: 2025-11-29 08:22:35.391 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:35 np0005539552 systemd[1]: libpod-0f5d7dc496b58853c04cc44fda3d9a1d56704ae8716471391f7ce46539f36b03.scope: Deactivated successfully.
Nov 29 03:22:35 np0005539552 nova_compute[233724]: 2025-11-29 08:22:35.394 233728 INFO nova.virt.libvirt.driver [-] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Instance destroyed successfully.#033[00m
Nov 29 03:22:35 np0005539552 nova_compute[233724]: 2025-11-29 08:22:35.394 233728 DEBUG nova.objects.instance [None req-eb90573f-d7e2-4734-aee5-f5e7c8c32319 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'resources' on Instance uuid 5004bd0f-c699-46d7-b535-b3a7db186a87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:22:35 np0005539552 podman[285371]: 2025-11-29 08:22:35.398368527 +0000 UTC m=+0.055354880 container died 0f5d7dc496b58853c04cc44fda3d9a1d56704ae8716471391f7ce46539f36b03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:22:35 np0005539552 nova_compute[233724]: 2025-11-29 08:22:35.410 233728 DEBUG nova.virt.libvirt.vif [None req-eb90573f-d7e2-4734-aee5-f5e7c8c32319 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:21:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1253218278',display_name='tempest-ServerActionsTestJSON-server-1253218278',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1253218278',id=121,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHzpWtKVBxR8y0ptyf26y7qDtzaZ8kbONkoZ9pomjaUJfrobt3UrzOwJRKUVsAcnHq9vyCWex553L84ouC5hX916iXo50xuUU5ZZ/mR8SlhwWlkwNt3Z2Xuyrzlm/13P0A==',key_name='tempest-keypair-2034735121',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:22:31Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='26e3508b949a4dbf960d7befc8f27869',ramdisk_id='',reservation_id='r-84wu0jt1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-2111371935',owner_user_name='tempest-ServerActionsTestJSON-2111371935-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:22:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='80ceb9112b3a4f119c05f21fd617af11',uuid=5004bd0f-c699-46d7-b535-b3a7db186a87,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "address": "fa:16:3e:47:25:11", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5e26252-e0", "ovs_interfaceid": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:22:35 np0005539552 nova_compute[233724]: 2025-11-29 08:22:35.410 233728 DEBUG nova.network.os_vif_util [None req-eb90573f-d7e2-4734-aee5-f5e7c8c32319 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converting VIF {"id": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "address": "fa:16:3e:47:25:11", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5e26252-e0", "ovs_interfaceid": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:22:35 np0005539552 nova_compute[233724]: 2025-11-29 08:22:35.411 233728 DEBUG nova.network.os_vif_util [None req-eb90573f-d7e2-4734-aee5-f5e7c8c32319 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:25:11,bridge_name='br-int',has_traffic_filtering=True,id=d5e26252-e0d3-4a6b-8b18-b2f4cb7db432,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5e26252-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:22:35 np0005539552 nova_compute[233724]: 2025-11-29 08:22:35.412 233728 DEBUG os_vif [None req-eb90573f-d7e2-4734-aee5-f5e7c8c32319 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:25:11,bridge_name='br-int',has_traffic_filtering=True,id=d5e26252-e0d3-4a6b-8b18-b2f4cb7db432,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5e26252-e0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:22:35 np0005539552 nova_compute[233724]: 2025-11-29 08:22:35.417 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:35 np0005539552 nova_compute[233724]: 2025-11-29 08:22:35.417 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5e26252-e0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:35 np0005539552 nova_compute[233724]: 2025-11-29 08:22:35.419 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:35 np0005539552 nova_compute[233724]: 2025-11-29 08:22:35.422 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:22:35 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0f5d7dc496b58853c04cc44fda3d9a1d56704ae8716471391f7ce46539f36b03-userdata-shm.mount: Deactivated successfully.
Nov 29 03:22:35 np0005539552 nova_compute[233724]: 2025-11-29 08:22:35.424 233728 INFO os_vif [None req-eb90573f-d7e2-4734-aee5-f5e7c8c32319 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:25:11,bridge_name='br-int',has_traffic_filtering=True,id=d5e26252-e0d3-4a6b-8b18-b2f4cb7db432,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5e26252-e0')#033[00m
Nov 29 03:22:35 np0005539552 systemd[1]: var-lib-containers-storage-overlay-95ad69dc2432068cfb6e28623f27193671f5e0657e09ed4f979f632890b54ec8-merged.mount: Deactivated successfully.
Nov 29 03:22:35 np0005539552 podman[285371]: 2025-11-29 08:22:35.436217595 +0000 UTC m=+0.093203938 container cleanup 0f5d7dc496b58853c04cc44fda3d9a1d56704ae8716471391f7ce46539f36b03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:22:35 np0005539552 systemd[1]: libpod-conmon-0f5d7dc496b58853c04cc44fda3d9a1d56704ae8716471391f7ce46539f36b03.scope: Deactivated successfully.
Nov 29 03:22:35 np0005539552 podman[285423]: 2025-11-29 08:22:35.496255411 +0000 UTC m=+0.038433775 container remove 0f5d7dc496b58853c04cc44fda3d9a1d56704ae8716471391f7ce46539f36b03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:22:35 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:35.502 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[38ecc340-4701-4997-8500-53d36600192d]: (4, ('Sat Nov 29 08:22:35 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 (0f5d7dc496b58853c04cc44fda3d9a1d56704ae8716471391f7ce46539f36b03)\n0f5d7dc496b58853c04cc44fda3d9a1d56704ae8716471391f7ce46539f36b03\nSat Nov 29 08:22:35 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 (0f5d7dc496b58853c04cc44fda3d9a1d56704ae8716471391f7ce46539f36b03)\n0f5d7dc496b58853c04cc44fda3d9a1d56704ae8716471391f7ce46539f36b03\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:35 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:35.504 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4145a07c-9321-4390-b5e1-73202fb4fca9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:35 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:35.505 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58fd104d-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:35 np0005539552 nova_compute[233724]: 2025-11-29 08:22:35.507 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:35 np0005539552 kernel: tap58fd104d-40: left promiscuous mode
Nov 29 03:22:35 np0005539552 nova_compute[233724]: 2025-11-29 08:22:35.521 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:35 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:35.524 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3984ed95-304f-4434-a9c9-f7e86a6655d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:35 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:35.542 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f93a5704-efae-4109-abb8-88a55165d1e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:35 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:35.544 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[479a38ec-2c2a-464d-9a43-ba516c4b810b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:35 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:35.559 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[bfbc45f9-83f4-467b-ad94-c9fe31b7374d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 759648, 'reachable_time': 33136, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285440, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:35 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:35.561 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:22:35 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:35.561 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[e833fd37-8624-430c-941e-4afff14e38d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:35 np0005539552 systemd[1]: run-netns-ovnmeta\x2d58fd104d\x2d4342\x2d482d\x2dae9e\x2ddbb4b9fa6788.mount: Deactivated successfully.
Nov 29 03:22:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:35.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:35 np0005539552 nova_compute[233724]: 2025-11-29 08:22:35.776 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:35 np0005539552 nova_compute[233724]: 2025-11-29 08:22:35.829 233728 INFO nova.virt.libvirt.driver [None req-eb90573f-d7e2-4734-aee5-f5e7c8c32319 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Deleting instance files /var/lib/nova/instances/5004bd0f-c699-46d7-b535-b3a7db186a87_del#033[00m
Nov 29 03:22:35 np0005539552 nova_compute[233724]: 2025-11-29 08:22:35.830 233728 INFO nova.virt.libvirt.driver [None req-eb90573f-d7e2-4734-aee5-f5e7c8c32319 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Deletion of /var/lib/nova/instances/5004bd0f-c699-46d7-b535-b3a7db186a87_del complete#033[00m
Nov 29 03:22:35 np0005539552 nova_compute[233724]: 2025-11-29 08:22:35.878 233728 INFO nova.compute.manager [None req-eb90573f-d7e2-4734-aee5-f5e7c8c32319 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Took 0.71 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:22:35 np0005539552 nova_compute[233724]: 2025-11-29 08:22:35.878 233728 DEBUG oslo.service.loopingcall [None req-eb90573f-d7e2-4734-aee5-f5e7c8c32319 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:22:35 np0005539552 nova_compute[233724]: 2025-11-29 08:22:35.878 233728 DEBUG nova.compute.manager [-] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:22:35 np0005539552 nova_compute[233724]: 2025-11-29 08:22:35.879 233728 DEBUG nova.network.neutron [-] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:22:35 np0005539552 nova_compute[233724]: 2025-11-29 08:22:35.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:35 np0005539552 nova_compute[233724]: 2025-11-29 08:22:35.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:35 np0005539552 nova_compute[233724]: 2025-11-29 08:22:35.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:35 np0005539552 nova_compute[233724]: 2025-11-29 08:22:35.923 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:22:36 np0005539552 nova_compute[233724]: 2025-11-29 08:22:36.147 233728 DEBUG nova.compute.manager [req-bb5f0d86-be5c-4450-9f87-e9d7647d7a62 req-0c77eb70-6511-46a2-af62-6d51dd6af2ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Received event network-vif-unplugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:36 np0005539552 nova_compute[233724]: 2025-11-29 08:22:36.147 233728 DEBUG oslo_concurrency.lockutils [req-bb5f0d86-be5c-4450-9f87-e9d7647d7a62 req-0c77eb70-6511-46a2-af62-6d51dd6af2ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:36 np0005539552 nova_compute[233724]: 2025-11-29 08:22:36.147 233728 DEBUG oslo_concurrency.lockutils [req-bb5f0d86-be5c-4450-9f87-e9d7647d7a62 req-0c77eb70-6511-46a2-af62-6d51dd6af2ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:36 np0005539552 nova_compute[233724]: 2025-11-29 08:22:36.148 233728 DEBUG oslo_concurrency.lockutils [req-bb5f0d86-be5c-4450-9f87-e9d7647d7a62 req-0c77eb70-6511-46a2-af62-6d51dd6af2ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:36 np0005539552 nova_compute[233724]: 2025-11-29 08:22:36.148 233728 DEBUG nova.compute.manager [req-bb5f0d86-be5c-4450-9f87-e9d7647d7a62 req-0c77eb70-6511-46a2-af62-6d51dd6af2ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] No waiting events found dispatching network-vif-unplugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:22:36 np0005539552 nova_compute[233724]: 2025-11-29 08:22:36.148 233728 DEBUG nova.compute.manager [req-bb5f0d86-be5c-4450-9f87-e9d7647d7a62 req-0c77eb70-6511-46a2-af62-6d51dd6af2ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Received event network-vif-unplugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:22:36 np0005539552 nova_compute[233724]: 2025-11-29 08:22:36.148 233728 DEBUG nova.compute.manager [req-bb5f0d86-be5c-4450-9f87-e9d7647d7a62 req-0c77eb70-6511-46a2-af62-6d51dd6af2ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Received event network-vif-plugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:36 np0005539552 nova_compute[233724]: 2025-11-29 08:22:36.149 233728 DEBUG oslo_concurrency.lockutils [req-bb5f0d86-be5c-4450-9f87-e9d7647d7a62 req-0c77eb70-6511-46a2-af62-6d51dd6af2ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:36 np0005539552 nova_compute[233724]: 2025-11-29 08:22:36.149 233728 DEBUG oslo_concurrency.lockutils [req-bb5f0d86-be5c-4450-9f87-e9d7647d7a62 req-0c77eb70-6511-46a2-af62-6d51dd6af2ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:36 np0005539552 nova_compute[233724]: 2025-11-29 08:22:36.149 233728 DEBUG oslo_concurrency.lockutils [req-bb5f0d86-be5c-4450-9f87-e9d7647d7a62 req-0c77eb70-6511-46a2-af62-6d51dd6af2ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:36 np0005539552 nova_compute[233724]: 2025-11-29 08:22:36.149 233728 DEBUG nova.compute.manager [req-bb5f0d86-be5c-4450-9f87-e9d7647d7a62 req-0c77eb70-6511-46a2-af62-6d51dd6af2ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] No waiting events found dispatching network-vif-plugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:22:36 np0005539552 nova_compute[233724]: 2025-11-29 08:22:36.149 233728 WARNING nova.compute.manager [req-bb5f0d86-be5c-4450-9f87-e9d7647d7a62 req-0c77eb70-6511-46a2-af62-6d51dd6af2ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Received unexpected event network-vif-plugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:22:36 np0005539552 nova_compute[233724]: 2025-11-29 08:22:36.438 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:36.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:36 np0005539552 nova_compute[233724]: 2025-11-29 08:22:36.756 233728 DEBUG nova.network.neutron [-] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:22:36 np0005539552 nova_compute[233724]: 2025-11-29 08:22:36.810 233728 INFO nova.compute.manager [-] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Took 0.93 seconds to deallocate network for instance.#033[00m
Nov 29 03:22:36 np0005539552 nova_compute[233724]: 2025-11-29 08:22:36.885 233728 DEBUG oslo_concurrency.lockutils [None req-eb90573f-d7e2-4734-aee5-f5e7c8c32319 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:36 np0005539552 nova_compute[233724]: 2025-11-29 08:22:36.885 233728 DEBUG oslo_concurrency.lockutils [None req-eb90573f-d7e2-4734-aee5-f5e7c8c32319 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:36 np0005539552 nova_compute[233724]: 2025-11-29 08:22:36.978 233728 DEBUG oslo_concurrency.processutils [None req-eb90573f-d7e2-4734-aee5-f5e7c8c32319 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e321 e321: 3 total, 3 up, 3 in
Nov 29 03:22:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:22:37 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2761378390' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:22:37 np0005539552 nova_compute[233724]: 2025-11-29 08:22:37.427 233728 DEBUG oslo_concurrency.processutils [None req-eb90573f-d7e2-4734-aee5-f5e7c8c32319 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:37 np0005539552 nova_compute[233724]: 2025-11-29 08:22:37.433 233728 DEBUG nova.compute.provider_tree [None req-eb90573f-d7e2-4734-aee5-f5e7c8c32319 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:22:37 np0005539552 nova_compute[233724]: 2025-11-29 08:22:37.450 233728 DEBUG nova.scheduler.client.report [None req-eb90573f-d7e2-4734-aee5-f5e7c8c32319 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:22:37 np0005539552 nova_compute[233724]: 2025-11-29 08:22:37.469 233728 DEBUG oslo_concurrency.lockutils [None req-eb90573f-d7e2-4734-aee5-f5e7c8c32319 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:37 np0005539552 nova_compute[233724]: 2025-11-29 08:22:37.489 233728 INFO nova.scheduler.client.report [None req-eb90573f-d7e2-4734-aee5-f5e7c8c32319 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Deleted allocations for instance 5004bd0f-c699-46d7-b535-b3a7db186a87#033[00m
Nov 29 03:22:37 np0005539552 nova_compute[233724]: 2025-11-29 08:22:37.561 233728 DEBUG oslo_concurrency.lockutils [None req-eb90573f-d7e2-4734-aee5-f5e7c8c32319 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.401s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:37.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:37 np0005539552 nova_compute[233724]: 2025-11-29 08:22:37.919 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:37 np0005539552 nova_compute[233724]: 2025-11-29 08:22:37.922 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:38 np0005539552 nova_compute[233724]: 2025-11-29 08:22:38.261 233728 DEBUG nova.compute.manager [req-462ed6bc-a2e4-4f40-a390-a43d254d55be req-34696560-4861-4e5d-8d45-edd31db81a78 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Received event network-vif-deleted-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:38 np0005539552 ovn_controller[133798]: 2025-11-29T08:22:38Z|00534|binding|INFO|Releasing lport 79109459-2a40-4b69-936e-ac2a2aa77985 from this chassis (sb_readonly=0)
Nov 29 03:22:38 np0005539552 nova_compute[233724]: 2025-11-29 08:22:38.626 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:38.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:38 np0005539552 ovn_controller[133798]: 2025-11-29T08:22:38Z|00535|binding|INFO|Releasing lport 79109459-2a40-4b69-936e-ac2a2aa77985 from this chassis (sb_readonly=0)
Nov 29 03:22:38 np0005539552 nova_compute[233724]: 2025-11-29 08:22:38.821 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:38.862 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:38 np0005539552 nova_compute[233724]: 2025-11-29 08:22:38.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:38 np0005539552 nova_compute[233724]: 2025-11-29 08:22:38.923 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:22:38 np0005539552 nova_compute[233724]: 2025-11-29 08:22:38.923 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:22:39 np0005539552 nova_compute[233724]: 2025-11-29 08:22:39.094 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "refresh_cache-07f760bf-6984-45e9-8e85-3d297e812553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:22:39 np0005539552 nova_compute[233724]: 2025-11-29 08:22:39.095 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquired lock "refresh_cache-07f760bf-6984-45e9-8e85-3d297e812553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:22:39 np0005539552 nova_compute[233724]: 2025-11-29 08:22:39.095 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:22:39 np0005539552 nova_compute[233724]: 2025-11-29 08:22:39.095 233728 DEBUG nova.objects.instance [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 07f760bf-6984-45e9-8e85-3d297e812553 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:22:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:39.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:39 np0005539552 podman[285517]: 2025-11-29 08:22:39.97042091 +0000 UTC m=+0.053129259 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent)
Nov 29 03:22:39 np0005539552 podman[285516]: 2025-11-29 08:22:39.976853943 +0000 UTC m=+0.057526678 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 03:22:39 np0005539552 podman[285518]: 2025-11-29 08:22:39.997878659 +0000 UTC m=+0.078707518 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true)
Nov 29 03:22:40 np0005539552 nova_compute[233724]: 2025-11-29 08:22:40.420 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:40.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:41 np0005539552 nova_compute[233724]: 2025-11-29 08:22:41.440 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:41 np0005539552 nova_compute[233724]: 2025-11-29 08:22:41.549 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Updating instance_info_cache with network_info: [{"id": "5511e511-2310-4811-8313-3722fcf49758", "address": "fa:16:3e:f3:b9:47", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5511e511-23", "ovs_interfaceid": "5511e511-2310-4811-8313-3722fcf49758", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:22:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:41.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:41 np0005539552 nova_compute[233724]: 2025-11-29 08:22:41.928 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Releasing lock "refresh_cache-07f760bf-6984-45e9-8e85-3d297e812553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:22:41 np0005539552 nova_compute[233724]: 2025-11-29 08:22:41.928 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:22:41 np0005539552 nova_compute[233724]: 2025-11-29 08:22:41.929 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:42.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:42 np0005539552 nova_compute[233724]: 2025-11-29 08:22:42.755 233728 DEBUG oslo_concurrency.lockutils [None req-edc1957d-acd4-4433-a117-474697bf52d8 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "8f2e1d97-ea5c-43f8-a05f-2f531213d241" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:42 np0005539552 nova_compute[233724]: 2025-11-29 08:22:42.755 233728 DEBUG oslo_concurrency.lockutils [None req-edc1957d-acd4-4433-a117-474697bf52d8 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "8f2e1d97-ea5c-43f8-a05f-2f531213d241" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:42 np0005539552 nova_compute[233724]: 2025-11-29 08:22:42.783 233728 DEBUG nova.objects.instance [None req-edc1957d-acd4-4433-a117-474697bf52d8 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lazy-loading 'flavor' on Instance uuid 8f2e1d97-ea5c-43f8-a05f-2f531213d241 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:22:42 np0005539552 nova_compute[233724]: 2025-11-29 08:22:42.832 233728 DEBUG oslo_concurrency.lockutils [None req-edc1957d-acd4-4433-a117-474697bf52d8 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "8f2e1d97-ea5c-43f8-a05f-2f531213d241" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:43 np0005539552 nova_compute[233724]: 2025-11-29 08:22:43.188 233728 DEBUG oslo_concurrency.lockutils [None req-edc1957d-acd4-4433-a117-474697bf52d8 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "8f2e1d97-ea5c-43f8-a05f-2f531213d241" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:43 np0005539552 nova_compute[233724]: 2025-11-29 08:22:43.188 233728 DEBUG oslo_concurrency.lockutils [None req-edc1957d-acd4-4433-a117-474697bf52d8 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "8f2e1d97-ea5c-43f8-a05f-2f531213d241" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:43 np0005539552 nova_compute[233724]: 2025-11-29 08:22:43.189 233728 INFO nova.compute.manager [None req-edc1957d-acd4-4433-a117-474697bf52d8 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Attaching volume ca708729-36db-4dd2-9f09-88d87c483376 to /dev/vdb#033[00m
Nov 29 03:22:43 np0005539552 nova_compute[233724]: 2025-11-29 08:22:43.441 233728 DEBUG os_brick.utils [None req-edc1957d-acd4-4433-a117-474697bf52d8 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:22:43 np0005539552 nova_compute[233724]: 2025-11-29 08:22:43.443 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:43 np0005539552 nova_compute[233724]: 2025-11-29 08:22:43.453 243418 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:43 np0005539552 nova_compute[233724]: 2025-11-29 08:22:43.454 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[8db28ece-e32f-4f8b-80f2-252ac072f5fd]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:43 np0005539552 nova_compute[233724]: 2025-11-29 08:22:43.455 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:43 np0005539552 nova_compute[233724]: 2025-11-29 08:22:43.465 243418 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:43 np0005539552 nova_compute[233724]: 2025-11-29 08:22:43.465 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[7f8faf96-a1c0-4bb9-9e0d-ad74f3b8445e]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0e73df9328', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:43 np0005539552 nova_compute[233724]: 2025-11-29 08:22:43.467 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:43 np0005539552 nova_compute[233724]: 2025-11-29 08:22:43.483 243418 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:43 np0005539552 nova_compute[233724]: 2025-11-29 08:22:43.483 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[4335f377-d4e5-4166-8350-b35b8fed94df]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:43 np0005539552 nova_compute[233724]: 2025-11-29 08:22:43.485 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[fbdf8603-6ade-41fe-bfb6-aff78b0797be]: (4, '6fbde64a-d978-4f1a-a29d-a77e1f5a1987') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:43 np0005539552 nova_compute[233724]: 2025-11-29 08:22:43.485 233728 DEBUG oslo_concurrency.processutils [None req-edc1957d-acd4-4433-a117-474697bf52d8 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:43 np0005539552 nova_compute[233724]: 2025-11-29 08:22:43.516 233728 DEBUG oslo_concurrency.processutils [None req-edc1957d-acd4-4433-a117-474697bf52d8 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "nvme version" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:43 np0005539552 nova_compute[233724]: 2025-11-29 08:22:43.518 233728 DEBUG os_brick.initiator.connectors.lightos [None req-edc1957d-acd4-4433-a117-474697bf52d8 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:22:43 np0005539552 nova_compute[233724]: 2025-11-29 08:22:43.518 233728 DEBUG os_brick.initiator.connectors.lightos [None req-edc1957d-acd4-4433-a117-474697bf52d8 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:22:43 np0005539552 nova_compute[233724]: 2025-11-29 08:22:43.519 233728 DEBUG os_brick.initiator.connectors.lightos [None req-edc1957d-acd4-4433-a117-474697bf52d8 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:22:43 np0005539552 nova_compute[233724]: 2025-11-29 08:22:43.519 233728 DEBUG os_brick.utils [None req-edc1957d-acd4-4433-a117-474697bf52d8 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] <== get_connector_properties: return (76ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0e73df9328', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '6fbde64a-d978-4f1a-a29d-a77e1f5a1987', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:22:43 np0005539552 nova_compute[233724]: 2025-11-29 08:22:43.519 233728 DEBUG nova.virt.block_device [None req-edc1957d-acd4-4433-a117-474697bf52d8 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Updating existing volume attachment record: d6278c17-8f17-4d71-a6a0-36665baf96a1 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:22:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:43.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:44 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:22:44 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1728296579' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:22:44 np0005539552 nova_compute[233724]: 2025-11-29 08:22:44.439 233728 DEBUG nova.objects.instance [None req-edc1957d-acd4-4433-a117-474697bf52d8 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lazy-loading 'flavor' on Instance uuid 8f2e1d97-ea5c-43f8-a05f-2f531213d241 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:22:44 np0005539552 nova_compute[233724]: 2025-11-29 08:22:44.468 233728 DEBUG nova.virt.libvirt.driver [None req-edc1957d-acd4-4433-a117-474697bf52d8 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Attempting to attach volume ca708729-36db-4dd2-9f09-88d87c483376 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Nov 29 03:22:44 np0005539552 nova_compute[233724]: 2025-11-29 08:22:44.472 233728 DEBUG nova.virt.libvirt.guest [None req-edc1957d-acd4-4433-a117-474697bf52d8 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 03:22:44 np0005539552 nova_compute[233724]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:22:44 np0005539552 nova_compute[233724]:  <source protocol="rbd" name="volumes/volume-ca708729-36db-4dd2-9f09-88d87c483376">
Nov 29 03:22:44 np0005539552 nova_compute[233724]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:22:44 np0005539552 nova_compute[233724]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:22:44 np0005539552 nova_compute[233724]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:22:44 np0005539552 nova_compute[233724]:  </source>
Nov 29 03:22:44 np0005539552 nova_compute[233724]:  <auth username="openstack">
Nov 29 03:22:44 np0005539552 nova_compute[233724]:    <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:22:44 np0005539552 nova_compute[233724]:  </auth>
Nov 29 03:22:44 np0005539552 nova_compute[233724]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:22:44 np0005539552 nova_compute[233724]:  <serial>ca708729-36db-4dd2-9f09-88d87c483376</serial>
Nov 29 03:22:44 np0005539552 nova_compute[233724]: </disk>
Nov 29 03:22:44 np0005539552 nova_compute[233724]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 03:22:44 np0005539552 nova_compute[233724]: 2025-11-29 08:22:44.613 233728 DEBUG nova.virt.libvirt.driver [None req-edc1957d-acd4-4433-a117-474697bf52d8 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:22:44 np0005539552 nova_compute[233724]: 2025-11-29 08:22:44.614 233728 DEBUG nova.virt.libvirt.driver [None req-edc1957d-acd4-4433-a117-474697bf52d8 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:22:44 np0005539552 nova_compute[233724]: 2025-11-29 08:22:44.614 233728 DEBUG nova.virt.libvirt.driver [None req-edc1957d-acd4-4433-a117-474697bf52d8 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:22:44 np0005539552 nova_compute[233724]: 2025-11-29 08:22:44.615 233728 DEBUG nova.virt.libvirt.driver [None req-edc1957d-acd4-4433-a117-474697bf52d8 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] No VIF found with MAC fa:16:3e:18:89:66, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:22:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:44.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:44 np0005539552 nova_compute[233724]: 2025-11-29 08:22:44.808 233728 DEBUG oslo_concurrency.lockutils [None req-edc1957d-acd4-4433-a117-474697bf52d8 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "8f2e1d97-ea5c-43f8-a05f-2f531213d241" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:45 np0005539552 nova_compute[233724]: 2025-11-29 08:22:45.422 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:45.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:46 np0005539552 nova_compute[233724]: 2025-11-29 08:22:46.441 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:46.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:47.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:47 np0005539552 nova_compute[233724]: 2025-11-29 08:22:47.925 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:48 np0005539552 nova_compute[233724]: 2025-11-29 08:22:48.470 233728 INFO nova.compute.manager [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Rebuilding instance#033[00m
Nov 29 03:22:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:48.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:48 np0005539552 nova_compute[233724]: 2025-11-29 08:22:48.870 233728 DEBUG nova.objects.instance [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 8f2e1d97-ea5c-43f8-a05f-2f531213d241 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:22:48 np0005539552 nova_compute[233724]: 2025-11-29 08:22:48.889 233728 DEBUG nova.compute.manager [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:22:48 np0005539552 nova_compute[233724]: 2025-11-29 08:22:48.936 233728 DEBUG nova.objects.instance [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lazy-loading 'pci_requests' on Instance uuid 8f2e1d97-ea5c-43f8-a05f-2f531213d241 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:22:48 np0005539552 nova_compute[233724]: 2025-11-29 08:22:48.948 233728 DEBUG nova.objects.instance [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8f2e1d97-ea5c-43f8-a05f-2f531213d241 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:22:48 np0005539552 nova_compute[233724]: 2025-11-29 08:22:48.957 233728 DEBUG nova.objects.instance [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lazy-loading 'resources' on Instance uuid 8f2e1d97-ea5c-43f8-a05f-2f531213d241 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:22:48 np0005539552 nova_compute[233724]: 2025-11-29 08:22:48.970 233728 DEBUG nova.objects.instance [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lazy-loading 'migration_context' on Instance uuid 8f2e1d97-ea5c-43f8-a05f-2f531213d241 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:22:48 np0005539552 nova_compute[233724]: 2025-11-29 08:22:48.981 233728 DEBUG nova.objects.instance [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 29 03:22:48 np0005539552 nova_compute[233724]: 2025-11-29 08:22:48.985 233728 DEBUG nova.virt.libvirt.driver [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:22:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:49.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:50 np0005539552 nova_compute[233724]: 2025-11-29 08:22:50.391 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404555.3903863, 5004bd0f-c699-46d7-b535-b3a7db186a87 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:22:50 np0005539552 nova_compute[233724]: 2025-11-29 08:22:50.392 233728 INFO nova.compute.manager [-] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:22:50 np0005539552 nova_compute[233724]: 2025-11-29 08:22:50.410 233728 DEBUG nova.compute.manager [None req-5c03a745-6d59-47a0-8a7d-3a4d08c1663d - - - - - -] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:22:50 np0005539552 nova_compute[233724]: 2025-11-29 08:22:50.423 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:50 np0005539552 nova_compute[233724]: 2025-11-29 08:22:50.642 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:50.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:51 np0005539552 nova_compute[233724]: 2025-11-29 08:22:51.443 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:51.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:52 np0005539552 nova_compute[233724]: 2025-11-29 08:22:52.001 233728 INFO nova.virt.libvirt.driver [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Instance shutdown successfully after 3 seconds.#033[00m
Nov 29 03:22:52 np0005539552 kernel: tapf2132fc4-69 (unregistering): left promiscuous mode
Nov 29 03:22:52 np0005539552 NetworkManager[48926]: <info>  [1764404572.0099] device (tapf2132fc4-69): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:22:52 np0005539552 ovn_controller[133798]: 2025-11-29T08:22:52Z|00536|binding|INFO|Releasing lport f2132fc4-6960-41d7-ba5a-a4fdffd50d3a from this chassis (sb_readonly=0)
Nov 29 03:22:52 np0005539552 ovn_controller[133798]: 2025-11-29T08:22:52Z|00537|binding|INFO|Setting lport f2132fc4-6960-41d7-ba5a-a4fdffd50d3a down in Southbound
Nov 29 03:22:52 np0005539552 ovn_controller[133798]: 2025-11-29T08:22:52Z|00538|binding|INFO|Removing iface tapf2132fc4-69 ovn-installed in OVS
Nov 29 03:22:52 np0005539552 nova_compute[233724]: 2025-11-29 08:22:52.023 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:52.028 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:18:89:66 10.100.0.4'], port_security=['fa:16:3e:18:89:66 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '8f2e1d97-ea5c-43f8-a05f-2f531213d241', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0bace34c102e4d56b089fd695d324f10', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8800adc1-baf2-4222-bbe6-bd173edc1243', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.247'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a26ea06d-6837-4c64-a5e9-9d9016316b21, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=f2132fc4-6960-41d7-ba5a-a4fdffd50d3a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:22:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:52.030 143400 INFO neutron.agent.ovn.metadata.agent [-] Port f2132fc4-6960-41d7-ba5a-a4fdffd50d3a in datapath 7fc1dfc3-8d7f-4854-980d-37a93f366035 unbound from our chassis#033[00m
Nov 29 03:22:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:52.032 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7fc1dfc3-8d7f-4854-980d-37a93f366035#033[00m
Nov 29 03:22:52 np0005539552 nova_compute[233724]: 2025-11-29 08:22:52.037 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:52.054 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[06ecc772-2807-493f-b199-06b68620e7d2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:52.084 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[00474bd6-4e7a-4e85-9fce-85264635f43b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:52 np0005539552 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000007b.scope: Deactivated successfully.
Nov 29 03:22:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:52.087 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[68741da6-e562-49bb-84fb-5619446571f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:52 np0005539552 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000007b.scope: Consumed 18.354s CPU time.
Nov 29 03:22:52 np0005539552 systemd-machined[196379]: Machine qemu-51-instance-0000007b terminated.
Nov 29 03:22:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:52.112 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[5f8af970-cd13-4cbf-aae6-6be33bac248a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:52.129 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4cfad9c6-2438-4aa1-9e36-6305c3b425c1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7fc1dfc3-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:27:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 152], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 747481, 'reachable_time': 35298, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285629, 'error': None, 'target': 'ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:52.144 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2ac18c84-c67a-4396-b717-772534b5444b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7fc1dfc3-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 747491, 'tstamp': 747491}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285630, 'error': None, 'target': 'ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7fc1dfc3-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 747494, 'tstamp': 747494}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285630, 'error': None, 'target': 'ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:52.146 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7fc1dfc3-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:52 np0005539552 nova_compute[233724]: 2025-11-29 08:22:52.147 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:52 np0005539552 nova_compute[233724]: 2025-11-29 08:22:52.151 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:52.151 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7fc1dfc3-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:52.151 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:22:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:52.152 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7fc1dfc3-80, col_values=(('external_ids', {'iface-id': '79109459-2a40-4b69-936e-ac2a2aa77985'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:52.152 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:22:52 np0005539552 nova_compute[233724]: 2025-11-29 08:22:52.244 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:52 np0005539552 nova_compute[233724]: 2025-11-29 08:22:52.248 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:52 np0005539552 nova_compute[233724]: 2025-11-29 08:22:52.249 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:52 np0005539552 nova_compute[233724]: 2025-11-29 08:22:52.254 233728 INFO nova.virt.libvirt.driver [-] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Instance destroyed successfully.#033[00m
Nov 29 03:22:52 np0005539552 nova_compute[233724]: 2025-11-29 08:22:52.381 233728 DEBUG nova.compute.manager [req-6ed05485-5b34-4f18-8471-c832c5eb5655 req-486d5a3e-fc71-47c5-970d-df57d57f6d83 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Received event network-vif-unplugged-f2132fc4-6960-41d7-ba5a-a4fdffd50d3a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:52 np0005539552 nova_compute[233724]: 2025-11-29 08:22:52.381 233728 DEBUG oslo_concurrency.lockutils [req-6ed05485-5b34-4f18-8471-c832c5eb5655 req-486d5a3e-fc71-47c5-970d-df57d57f6d83 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8f2e1d97-ea5c-43f8-a05f-2f531213d241-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:52 np0005539552 nova_compute[233724]: 2025-11-29 08:22:52.382 233728 DEBUG oslo_concurrency.lockutils [req-6ed05485-5b34-4f18-8471-c832c5eb5655 req-486d5a3e-fc71-47c5-970d-df57d57f6d83 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8f2e1d97-ea5c-43f8-a05f-2f531213d241-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:52 np0005539552 nova_compute[233724]: 2025-11-29 08:22:52.382 233728 DEBUG oslo_concurrency.lockutils [req-6ed05485-5b34-4f18-8471-c832c5eb5655 req-486d5a3e-fc71-47c5-970d-df57d57f6d83 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8f2e1d97-ea5c-43f8-a05f-2f531213d241-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:52 np0005539552 nova_compute[233724]: 2025-11-29 08:22:52.382 233728 DEBUG nova.compute.manager [req-6ed05485-5b34-4f18-8471-c832c5eb5655 req-486d5a3e-fc71-47c5-970d-df57d57f6d83 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] No waiting events found dispatching network-vif-unplugged-f2132fc4-6960-41d7-ba5a-a4fdffd50d3a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:22:52 np0005539552 nova_compute[233724]: 2025-11-29 08:22:52.383 233728 WARNING nova.compute.manager [req-6ed05485-5b34-4f18-8471-c832c5eb5655 req-486d5a3e-fc71-47c5-970d-df57d57f6d83 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Received unexpected event network-vif-unplugged-f2132fc4-6960-41d7-ba5a-a4fdffd50d3a for instance with vm_state active and task_state rebuilding.#033[00m
Nov 29 03:22:52 np0005539552 nova_compute[233724]: 2025-11-29 08:22:52.453 233728 INFO nova.compute.manager [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Detaching volume ca708729-36db-4dd2-9f09-88d87c483376#033[00m
Nov 29 03:22:52 np0005539552 nova_compute[233724]: 2025-11-29 08:22:52.709 233728 INFO nova.virt.block_device [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Attempting to driver detach volume ca708729-36db-4dd2-9f09-88d87c483376 from mountpoint /dev/vdb#033[00m
Nov 29 03:22:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:52.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:52 np0005539552 nova_compute[233724]: 2025-11-29 08:22:52.959 233728 DEBUG nova.virt.libvirt.driver [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Attempting to detach device vdb from instance 8f2e1d97-ea5c-43f8-a05f-2f531213d241 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:22:52 np0005539552 nova_compute[233724]: 2025-11-29 08:22:52.960 233728 DEBUG nova.virt.libvirt.guest [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:22:52 np0005539552 nova_compute[233724]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:22:52 np0005539552 nova_compute[233724]:  <source protocol="rbd" name="volumes/volume-ca708729-36db-4dd2-9f09-88d87c483376">
Nov 29 03:22:52 np0005539552 nova_compute[233724]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:22:52 np0005539552 nova_compute[233724]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:22:52 np0005539552 nova_compute[233724]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:22:52 np0005539552 nova_compute[233724]:  </source>
Nov 29 03:22:52 np0005539552 nova_compute[233724]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:22:52 np0005539552 nova_compute[233724]:  <serial>ca708729-36db-4dd2-9f09-88d87c483376</serial>
Nov 29 03:22:52 np0005539552 nova_compute[233724]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:22:52 np0005539552 nova_compute[233724]: </disk>
Nov 29 03:22:52 np0005539552 nova_compute[233724]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:22:52 np0005539552 nova_compute[233724]: 2025-11-29 08:22:52.974 233728 INFO nova.virt.libvirt.driver [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Successfully detached device vdb from instance 8f2e1d97-ea5c-43f8-a05f-2f531213d241 from the persistent domain config.#033[00m
Nov 29 03:22:53 np0005539552 nova_compute[233724]: 2025-11-29 08:22:53.241 233728 INFO nova.virt.libvirt.driver [-] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Instance destroyed successfully.#033[00m
Nov 29 03:22:53 np0005539552 nova_compute[233724]: 2025-11-29 08:22:53.242 233728 DEBUG nova.virt.libvirt.vif [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:21:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1988120291',display_name='tempest-ServerActionsTestOtherA-server-1561025200',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1988120291',id=123,image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIJKHc7P4YQ541Hb485a6CMBwKiv24SeFFAW+eSzfZUCbyqL5VCFCZG4/k60hFCpsJ0Tv3h26Uo1xnh37nbsxuBRWgRRa38dV/cocbcLIxwndoSfRH3ORp+Lk/2eG6aanw==',key_name='tempest-keypair-406941470',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:22:05Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='0bace34c102e4d56b089fd695d324f10',ramdisk_id='',reservation_id='r-4pdufn0n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1954650991',owner_user_name='tempest-ServerActionsTestOtherA-1954650991-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:22:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1552f15deb524705a9456cbe9b54c429',uuid=8f2e1d97-ea5c-43f8-a05f-2f531213d241,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f2132fc4-6960-41d7-ba5a-a4fdffd50d3a", "address": "fa:16:3e:18:89:66", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2132fc4-69", "ovs_interfaceid": "f2132fc4-6960-41d7-ba5a-a4fdffd50d3a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:22:53 np0005539552 nova_compute[233724]: 2025-11-29 08:22:53.242 233728 DEBUG nova.network.os_vif_util [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converting VIF {"id": "f2132fc4-6960-41d7-ba5a-a4fdffd50d3a", "address": "fa:16:3e:18:89:66", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2132fc4-69", "ovs_interfaceid": "f2132fc4-6960-41d7-ba5a-a4fdffd50d3a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:22:53 np0005539552 nova_compute[233724]: 2025-11-29 08:22:53.243 233728 DEBUG nova.network.os_vif_util [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:18:89:66,bridge_name='br-int',has_traffic_filtering=True,id=f2132fc4-6960-41d7-ba5a-a4fdffd50d3a,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2132fc4-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:22:53 np0005539552 nova_compute[233724]: 2025-11-29 08:22:53.244 233728 DEBUG os_vif [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:18:89:66,bridge_name='br-int',has_traffic_filtering=True,id=f2132fc4-6960-41d7-ba5a-a4fdffd50d3a,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2132fc4-69') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:22:53 np0005539552 nova_compute[233724]: 2025-11-29 08:22:53.246 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:53 np0005539552 nova_compute[233724]: 2025-11-29 08:22:53.246 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf2132fc4-69, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:53 np0005539552 nova_compute[233724]: 2025-11-29 08:22:53.249 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:53 np0005539552 nova_compute[233724]: 2025-11-29 08:22:53.252 233728 INFO os_vif [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:18:89:66,bridge_name='br-int',has_traffic_filtering=True,id=f2132fc4-6960-41d7-ba5a-a4fdffd50d3a,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2132fc4-69')#033[00m
Nov 29 03:22:53 np0005539552 nova_compute[233724]: 2025-11-29 08:22:53.648 233728 INFO nova.virt.libvirt.driver [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Deleting instance files /var/lib/nova/instances/8f2e1d97-ea5c-43f8-a05f-2f531213d241_del#033[00m
Nov 29 03:22:53 np0005539552 nova_compute[233724]: 2025-11-29 08:22:53.649 233728 INFO nova.virt.libvirt.driver [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Deletion of /var/lib/nova/instances/8f2e1d97-ea5c-43f8-a05f-2f531213d241_del complete#033[00m
Nov 29 03:22:53 np0005539552 nova_compute[233724]: 2025-11-29 08:22:53.742 233728 INFO nova.virt.block_device [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Booting with volume ca708729-36db-4dd2-9f09-88d87c483376 at /dev/vdb#033[00m
Nov 29 03:22:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:53.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:53 np0005539552 nova_compute[233724]: 2025-11-29 08:22:53.937 233728 DEBUG os_brick.utils [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:22:53 np0005539552 nova_compute[233724]: 2025-11-29 08:22:53.938 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:53 np0005539552 nova_compute[233724]: 2025-11-29 08:22:53.949 243418 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:53 np0005539552 nova_compute[233724]: 2025-11-29 08:22:53.950 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[19bc588e-a266-458e-a9f5-a75381b75f59]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:53 np0005539552 nova_compute[233724]: 2025-11-29 08:22:53.951 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:53 np0005539552 nova_compute[233724]: 2025-11-29 08:22:53.959 243418 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:53 np0005539552 nova_compute[233724]: 2025-11-29 08:22:53.959 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[ee93bc3a-e627-4f65-a7af-2e3a1dc5201f]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0e73df9328', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:53 np0005539552 nova_compute[233724]: 2025-11-29 08:22:53.961 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:53 np0005539552 nova_compute[233724]: 2025-11-29 08:22:53.970 243418 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:53 np0005539552 nova_compute[233724]: 2025-11-29 08:22:53.970 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[d2682662-4c31-4ecc-8ac0-764f45901439]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:53 np0005539552 nova_compute[233724]: 2025-11-29 08:22:53.971 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[5f90b5fa-251f-4101-ac9a-78150bf52928]: (4, '6fbde64a-d978-4f1a-a29d-a77e1f5a1987') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:53 np0005539552 nova_compute[233724]: 2025-11-29 08:22:53.972 233728 DEBUG oslo_concurrency.processutils [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:54 np0005539552 nova_compute[233724]: 2025-11-29 08:22:54.003 233728 DEBUG oslo_concurrency.processutils [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "nvme version" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:54 np0005539552 nova_compute[233724]: 2025-11-29 08:22:54.006 233728 DEBUG os_brick.initiator.connectors.lightos [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:22:54 np0005539552 nova_compute[233724]: 2025-11-29 08:22:54.006 233728 DEBUG os_brick.initiator.connectors.lightos [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:22:54 np0005539552 nova_compute[233724]: 2025-11-29 08:22:54.007 233728 DEBUG os_brick.initiator.connectors.lightos [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:22:54 np0005539552 nova_compute[233724]: 2025-11-29 08:22:54.007 233728 DEBUG os_brick.utils [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] <== get_connector_properties: return (69ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0e73df9328', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '6fbde64a-d978-4f1a-a29d-a77e1f5a1987', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:22:54 np0005539552 nova_compute[233724]: 2025-11-29 08:22:54.007 233728 DEBUG nova.virt.block_device [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Updating existing volume attachment record: e5403fcc-f5f5-410c-90a4-5c76b06c0975 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:22:54 np0005539552 nova_compute[233724]: 2025-11-29 08:22:54.538 233728 DEBUG nova.compute.manager [req-8f38244c-eb2c-40c5-b83c-14c34c094d7a req-5914a141-6fb6-4f03-8093-51443e2923cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Received event network-vif-plugged-f2132fc4-6960-41d7-ba5a-a4fdffd50d3a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:54 np0005539552 nova_compute[233724]: 2025-11-29 08:22:54.539 233728 DEBUG oslo_concurrency.lockutils [req-8f38244c-eb2c-40c5-b83c-14c34c094d7a req-5914a141-6fb6-4f03-8093-51443e2923cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8f2e1d97-ea5c-43f8-a05f-2f531213d241-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:54 np0005539552 nova_compute[233724]: 2025-11-29 08:22:54.539 233728 DEBUG oslo_concurrency.lockutils [req-8f38244c-eb2c-40c5-b83c-14c34c094d7a req-5914a141-6fb6-4f03-8093-51443e2923cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8f2e1d97-ea5c-43f8-a05f-2f531213d241-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:54 np0005539552 nova_compute[233724]: 2025-11-29 08:22:54.540 233728 DEBUG oslo_concurrency.lockutils [req-8f38244c-eb2c-40c5-b83c-14c34c094d7a req-5914a141-6fb6-4f03-8093-51443e2923cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8f2e1d97-ea5c-43f8-a05f-2f531213d241-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:54 np0005539552 nova_compute[233724]: 2025-11-29 08:22:54.540 233728 DEBUG nova.compute.manager [req-8f38244c-eb2c-40c5-b83c-14c34c094d7a req-5914a141-6fb6-4f03-8093-51443e2923cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] No waiting events found dispatching network-vif-plugged-f2132fc4-6960-41d7-ba5a-a4fdffd50d3a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:22:54 np0005539552 nova_compute[233724]: 2025-11-29 08:22:54.540 233728 WARNING nova.compute.manager [req-8f38244c-eb2c-40c5-b83c-14c34c094d7a req-5914a141-6fb6-4f03-8093-51443e2923cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Received unexpected event network-vif-plugged-f2132fc4-6960-41d7-ba5a-a4fdffd50d3a for instance with vm_state active and task_state rebuild_block_device_mapping.#033[00m
Nov 29 03:22:54 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:22:54 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2895186106' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:22:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:54.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:55 np0005539552 nova_compute[233724]: 2025-11-29 08:22:55.127 233728 DEBUG nova.virt.libvirt.driver [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:22:55 np0005539552 nova_compute[233724]: 2025-11-29 08:22:55.128 233728 INFO nova.virt.libvirt.driver [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Creating image(s)#033[00m
Nov 29 03:22:55 np0005539552 nova_compute[233724]: 2025-11-29 08:22:55.155 233728 DEBUG nova.storage.rbd_utils [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] rbd image 8f2e1d97-ea5c-43f8-a05f-2f531213d241_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:22:55 np0005539552 nova_compute[233724]: 2025-11-29 08:22:55.187 233728 DEBUG nova.storage.rbd_utils [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] rbd image 8f2e1d97-ea5c-43f8-a05f-2f531213d241_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:22:55 np0005539552 nova_compute[233724]: 2025-11-29 08:22:55.219 233728 DEBUG nova.storage.rbd_utils [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] rbd image 8f2e1d97-ea5c-43f8-a05f-2f531213d241_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:22:55 np0005539552 nova_compute[233724]: 2025-11-29 08:22:55.223 233728 DEBUG oslo_concurrency.processutils [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:55 np0005539552 nova_compute[233724]: 2025-11-29 08:22:55.287 233728 DEBUG oslo_concurrency.processutils [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:55 np0005539552 nova_compute[233724]: 2025-11-29 08:22:55.289 233728 DEBUG oslo_concurrency.lockutils [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "6e1589dfec5abd76868fdc022175780e085b08de" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:55 np0005539552 nova_compute[233724]: 2025-11-29 08:22:55.290 233728 DEBUG oslo_concurrency.lockutils [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "6e1589dfec5abd76868fdc022175780e085b08de" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:55 np0005539552 nova_compute[233724]: 2025-11-29 08:22:55.291 233728 DEBUG oslo_concurrency.lockutils [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "6e1589dfec5abd76868fdc022175780e085b08de" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:55 np0005539552 nova_compute[233724]: 2025-11-29 08:22:55.314 233728 DEBUG nova.storage.rbd_utils [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] rbd image 8f2e1d97-ea5c-43f8-a05f-2f531213d241_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:22:55 np0005539552 nova_compute[233724]: 2025-11-29 08:22:55.317 233728 DEBUG oslo_concurrency.processutils [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de 8f2e1d97-ea5c-43f8-a05f-2f531213d241_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:55 np0005539552 nova_compute[233724]: 2025-11-29 08:22:55.342 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:55 np0005539552 nova_compute[233724]: 2025-11-29 08:22:55.647 233728 DEBUG oslo_concurrency.processutils [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de 8f2e1d97-ea5c-43f8-a05f-2f531213d241_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.330s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:55 np0005539552 nova_compute[233724]: 2025-11-29 08:22:55.720 233728 DEBUG nova.storage.rbd_utils [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] resizing rbd image 8f2e1d97-ea5c-43f8-a05f-2f531213d241_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:22:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:55.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:55 np0005539552 nova_compute[233724]: 2025-11-29 08:22:55.846 233728 DEBUG nova.virt.libvirt.driver [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:22:55 np0005539552 nova_compute[233724]: 2025-11-29 08:22:55.847 233728 DEBUG nova.virt.libvirt.driver [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Ensure instance console log exists: /var/lib/nova/instances/8f2e1d97-ea5c-43f8-a05f-2f531213d241/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:22:55 np0005539552 nova_compute[233724]: 2025-11-29 08:22:55.848 233728 DEBUG oslo_concurrency.lockutils [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:55 np0005539552 nova_compute[233724]: 2025-11-29 08:22:55.848 233728 DEBUG oslo_concurrency.lockutils [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:55 np0005539552 nova_compute[233724]: 2025-11-29 08:22:55.848 233728 DEBUG oslo_concurrency.lockutils [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:55 np0005539552 nova_compute[233724]: 2025-11-29 08:22:55.852 233728 DEBUG nova.virt.libvirt.driver [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Start _get_guest_xml network_info=[{"id": "f2132fc4-6960-41d7-ba5a-a4fdffd50d3a", "address": "fa:16:3e:18:89:66", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2132fc4-69", "ovs_interfaceid": "f2132fc4-6960-41d7-ba5a-a4fdffd50d3a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:36Z,direct_url=<?>,disk_format='qcow2',id=93eccffb-bacd-407f-af6f-64451dee7b21,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-ca708729-36db-4dd2-9f09-88d87c483376', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'ca708729-36db-4dd2-9f09-88d87c483376', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '8f2e1d97-ea5c-43f8-a05f-2f531213d241', 'attached_at': '', 'detached_at': '', 'volume_id': 'ca708729-36db-4dd2-9f09-88d87c483376', 'serial': 'ca708729-36db-4dd2-9f09-88d87c483376'}, 'delete_on_termination': False, 'guest_format': None, 'boot_index': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'mount_device': '/dev/vdb', 'attachment_id': 'e5403fcc-f5f5-410c-90a4-5c76b06c0975', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:22:55 np0005539552 nova_compute[233724]: 2025-11-29 08:22:55.859 233728 WARNING nova.virt.libvirt.driver [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Nov 29 03:22:55 np0005539552 nova_compute[233724]: 2025-11-29 08:22:55.867 233728 DEBUG nova.virt.libvirt.host [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:22:55 np0005539552 nova_compute[233724]: 2025-11-29 08:22:55.868 233728 DEBUG nova.virt.libvirt.host [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:22:55 np0005539552 nova_compute[233724]: 2025-11-29 08:22:55.875 233728 DEBUG nova.virt.libvirt.host [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:22:55 np0005539552 nova_compute[233724]: 2025-11-29 08:22:55.876 233728 DEBUG nova.virt.libvirt.host [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:22:55 np0005539552 nova_compute[233724]: 2025-11-29 08:22:55.877 233728 DEBUG nova.virt.libvirt.driver [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:22:55 np0005539552 nova_compute[233724]: 2025-11-29 08:22:55.878 233728 DEBUG nova.virt.hardware [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:36Z,direct_url=<?>,disk_format='qcow2',id=93eccffb-bacd-407f-af6f-64451dee7b21,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:22:55 np0005539552 nova_compute[233724]: 2025-11-29 08:22:55.878 233728 DEBUG nova.virt.hardware [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:22:55 np0005539552 nova_compute[233724]: 2025-11-29 08:22:55.879 233728 DEBUG nova.virt.hardware [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:22:55 np0005539552 nova_compute[233724]: 2025-11-29 08:22:55.879 233728 DEBUG nova.virt.hardware [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:22:55 np0005539552 nova_compute[233724]: 2025-11-29 08:22:55.879 233728 DEBUG nova.virt.hardware [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:22:55 np0005539552 nova_compute[233724]: 2025-11-29 08:22:55.879 233728 DEBUG nova.virt.hardware [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:22:55 np0005539552 nova_compute[233724]: 2025-11-29 08:22:55.880 233728 DEBUG nova.virt.hardware [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:22:55 np0005539552 nova_compute[233724]: 2025-11-29 08:22:55.880 233728 DEBUG nova.virt.hardware [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:22:55 np0005539552 nova_compute[233724]: 2025-11-29 08:22:55.880 233728 DEBUG nova.virt.hardware [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:22:55 np0005539552 nova_compute[233724]: 2025-11-29 08:22:55.881 233728 DEBUG nova.virt.hardware [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:22:55 np0005539552 nova_compute[233724]: 2025-11-29 08:22:55.881 233728 DEBUG nova.virt.hardware [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:22:55 np0005539552 nova_compute[233724]: 2025-11-29 08:22:55.881 233728 DEBUG nova.objects.instance [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 8f2e1d97-ea5c-43f8-a05f-2f531213d241 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:22:55 np0005539552 nova_compute[233724]: 2025-11-29 08:22:55.903 233728 DEBUG oslo_concurrency.processutils [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:56 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:22:56 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2180512095' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:22:56 np0005539552 nova_compute[233724]: 2025-11-29 08:22:56.398 233728 DEBUG oslo_concurrency.processutils [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:56 np0005539552 nova_compute[233724]: 2025-11-29 08:22:56.435 233728 DEBUG nova.storage.rbd_utils [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] rbd image 8f2e1d97-ea5c-43f8-a05f-2f531213d241_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:22:56 np0005539552 nova_compute[233724]: 2025-11-29 08:22:56.443 233728 DEBUG oslo_concurrency.processutils [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:56 np0005539552 nova_compute[233724]: 2025-11-29 08:22:56.480 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:56.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:56 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:22:56 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3796838703' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:22:56 np0005539552 nova_compute[233724]: 2025-11-29 08:22:56.897 233728 DEBUG oslo_concurrency.processutils [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:56 np0005539552 nova_compute[233724]: 2025-11-29 08:22:56.942 233728 DEBUG nova.virt.libvirt.vif [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:21:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1988120291',display_name='tempest-ServerActionsTestOtherA-server-1561025200',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1988120291',id=123,image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIJKHc7P4YQ541Hb485a6CMBwKiv24SeFFAW+eSzfZUCbyqL5VCFCZG4/k60hFCpsJ0Tv3h26Uo1xnh37nbsxuBRWgRRa38dV/cocbcLIxwndoSfRH3ORp+Lk/2eG6aanw==',key_name='tempest-keypair-406941470',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:22:05Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='0bace34c102e4d56b089fd695d324f10',ramdisk_id='',reservation_id='r-4pdufn0n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1954650991',owner_user_name='tempest-ServerActionsTestOtherA-1954650991-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:22:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1552f15deb524705a9456cbe9b54c429',uuid=8f2e1d97-ea5c-43f8-a05f-2f531213d241,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f2132fc4-6960-41d7-ba5a-a4fdffd50d3a", "address": "fa:16:3e:18:89:66", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2132fc4-69", "ovs_interfaceid": "f2132fc4-6960-41d7-ba5a-a4fdffd50d3a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:22:56 np0005539552 nova_compute[233724]: 2025-11-29 08:22:56.942 233728 DEBUG nova.network.os_vif_util [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converting VIF {"id": "f2132fc4-6960-41d7-ba5a-a4fdffd50d3a", "address": "fa:16:3e:18:89:66", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2132fc4-69", "ovs_interfaceid": "f2132fc4-6960-41d7-ba5a-a4fdffd50d3a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:22:56 np0005539552 nova_compute[233724]: 2025-11-29 08:22:56.944 233728 DEBUG nova.network.os_vif_util [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:18:89:66,bridge_name='br-int',has_traffic_filtering=True,id=f2132fc4-6960-41d7-ba5a-a4fdffd50d3a,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2132fc4-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:22:56 np0005539552 nova_compute[233724]: 2025-11-29 08:22:56.947 233728 DEBUG nova.virt.libvirt.driver [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:22:56 np0005539552 nova_compute[233724]:  <uuid>8f2e1d97-ea5c-43f8-a05f-2f531213d241</uuid>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:  <name>instance-0000007b</name>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:22:56 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:      <nova:name>tempest-ServerActionsTestOtherA-server-1561025200</nova:name>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:22:55</nova:creationTime>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:22:56 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:        <nova:user uuid="1552f15deb524705a9456cbe9b54c429">tempest-ServerActionsTestOtherA-1954650991-project-member</nova:user>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:        <nova:project uuid="0bace34c102e4d56b089fd695d324f10">tempest-ServerActionsTestOtherA-1954650991</nova:project>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="93eccffb-bacd-407f-af6f-64451dee7b21"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:        <nova:port uuid="f2132fc4-6960-41d7-ba5a-a4fdffd50d3a">
Nov 29 03:22:56 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:      <entry name="serial">8f2e1d97-ea5c-43f8-a05f-2f531213d241</entry>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:      <entry name="uuid">8f2e1d97-ea5c-43f8-a05f-2f531213d241</entry>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:22:56 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/8f2e1d97-ea5c-43f8-a05f-2f531213d241_disk">
Nov 29 03:22:56 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:22:56 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:22:56 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/8f2e1d97-ea5c-43f8-a05f-2f531213d241_disk.config">
Nov 29 03:22:56 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:22:56 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:22:56 np0005539552 nova_compute[233724]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="volumes/volume-ca708729-36db-4dd2-9f09-88d87c483376">
Nov 29 03:22:56 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:22:56 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:      <target dev="vdb" bus="virtio"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:      <serial>ca708729-36db-4dd2-9f09-88d87c483376</serial>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:22:56 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:18:89:66"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:      <target dev="tapf2132fc4-69"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:22:56 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/8f2e1d97-ea5c-43f8-a05f-2f531213d241/console.log" append="off"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:22:56 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:22:56 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:22:56 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:22:56 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:22:56 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:22:56 np0005539552 nova_compute[233724]: 2025-11-29 08:22:56.949 233728 DEBUG nova.virt.libvirt.vif [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:21:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1988120291',display_name='tempest-ServerActionsTestOtherA-server-1561025200',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1988120291',id=123,image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIJKHc7P4YQ541Hb485a6CMBwKiv24SeFFAW+eSzfZUCbyqL5VCFCZG4/k60hFCpsJ0Tv3h26Uo1xnh37nbsxuBRWgRRa38dV/cocbcLIxwndoSfRH3ORp+Lk/2eG6aanw==',key_name='tempest-keypair-406941470',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:22:05Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='0bace34c102e4d56b089fd695d324f10',ramdisk_id='',reservation_id='r-4pdufn0n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1954650991',owner_user_name='tempest-ServerActionsTestOtherA-1954650991-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:22:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1552f15deb524705a9456cbe9b54c429',uuid=8f2e1d97-ea5c-43f8-a05f-2f531213d241,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f2132fc4-6960-41d7-ba5a-a4fdffd50d3a", "address": "fa:16:3e:18:89:66", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2132fc4-69", "ovs_interfaceid": "f2132fc4-6960-41d7-ba5a-a4fdffd50d3a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:22:56 np0005539552 nova_compute[233724]: 2025-11-29 08:22:56.949 233728 DEBUG nova.network.os_vif_util [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converting VIF {"id": "f2132fc4-6960-41d7-ba5a-a4fdffd50d3a", "address": "fa:16:3e:18:89:66", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2132fc4-69", "ovs_interfaceid": "f2132fc4-6960-41d7-ba5a-a4fdffd50d3a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:22:56 np0005539552 nova_compute[233724]: 2025-11-29 08:22:56.950 233728 DEBUG nova.network.os_vif_util [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:18:89:66,bridge_name='br-int',has_traffic_filtering=True,id=f2132fc4-6960-41d7-ba5a-a4fdffd50d3a,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2132fc4-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:22:56 np0005539552 nova_compute[233724]: 2025-11-29 08:22:56.950 233728 DEBUG os_vif [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:18:89:66,bridge_name='br-int',has_traffic_filtering=True,id=f2132fc4-6960-41d7-ba5a-a4fdffd50d3a,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2132fc4-69') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:22:56 np0005539552 nova_compute[233724]: 2025-11-29 08:22:56.951 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:56 np0005539552 nova_compute[233724]: 2025-11-29 08:22:56.951 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:56 np0005539552 nova_compute[233724]: 2025-11-29 08:22:56.951 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:22:56 np0005539552 nova_compute[233724]: 2025-11-29 08:22:56.954 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:56 np0005539552 nova_compute[233724]: 2025-11-29 08:22:56.954 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf2132fc4-69, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:56 np0005539552 nova_compute[233724]: 2025-11-29 08:22:56.955 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf2132fc4-69, col_values=(('external_ids', {'iface-id': 'f2132fc4-6960-41d7-ba5a-a4fdffd50d3a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:18:89:66', 'vm-uuid': '8f2e1d97-ea5c-43f8-a05f-2f531213d241'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:56 np0005539552 nova_compute[233724]: 2025-11-29 08:22:56.956 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:56 np0005539552 NetworkManager[48926]: <info>  [1764404576.9579] manager: (tapf2132fc4-69): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/248)
Nov 29 03:22:56 np0005539552 nova_compute[233724]: 2025-11-29 08:22:56.959 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:22:56 np0005539552 nova_compute[233724]: 2025-11-29 08:22:56.962 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:56 np0005539552 nova_compute[233724]: 2025-11-29 08:22:56.963 233728 INFO os_vif [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:18:89:66,bridge_name='br-int',has_traffic_filtering=True,id=f2132fc4-6960-41d7-ba5a-a4fdffd50d3a,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2132fc4-69')#033[00m
Nov 29 03:22:57 np0005539552 nova_compute[233724]: 2025-11-29 08:22:57.019 233728 DEBUG nova.virt.libvirt.driver [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:22:57 np0005539552 nova_compute[233724]: 2025-11-29 08:22:57.019 233728 DEBUG nova.virt.libvirt.driver [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:22:57 np0005539552 nova_compute[233724]: 2025-11-29 08:22:57.019 233728 DEBUG nova.virt.libvirt.driver [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:22:57 np0005539552 nova_compute[233724]: 2025-11-29 08:22:57.020 233728 DEBUG nova.virt.libvirt.driver [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] No VIF found with MAC fa:16:3e:18:89:66, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:22:57 np0005539552 nova_compute[233724]: 2025-11-29 08:22:57.020 233728 INFO nova.virt.libvirt.driver [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Using config drive#033[00m
Nov 29 03:22:57 np0005539552 nova_compute[233724]: 2025-11-29 08:22:57.041 233728 DEBUG nova.storage.rbd_utils [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] rbd image 8f2e1d97-ea5c-43f8-a05f-2f531213d241_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:22:57 np0005539552 nova_compute[233724]: 2025-11-29 08:22:57.059 233728 DEBUG nova.objects.instance [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 8f2e1d97-ea5c-43f8-a05f-2f531213d241 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:22:57 np0005539552 nova_compute[233724]: 2025-11-29 08:22:57.109 233728 DEBUG nova.objects.instance [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lazy-loading 'keypairs' on Instance uuid 8f2e1d97-ea5c-43f8-a05f-2f531213d241 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:22:57 np0005539552 nova_compute[233724]: 2025-11-29 08:22:57.632 233728 INFO nova.virt.libvirt.driver [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Creating config drive at /var/lib/nova/instances/8f2e1d97-ea5c-43f8-a05f-2f531213d241/disk.config#033[00m
Nov 29 03:22:57 np0005539552 nova_compute[233724]: 2025-11-29 08:22:57.640 233728 DEBUG oslo_concurrency.processutils [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8f2e1d97-ea5c-43f8-a05f-2f531213d241/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq3vqvk5f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:57 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:22:57 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:22:57 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:22:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:57.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:57 np0005539552 nova_compute[233724]: 2025-11-29 08:22:57.777 233728 DEBUG oslo_concurrency.processutils [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8f2e1d97-ea5c-43f8-a05f-2f531213d241/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq3vqvk5f" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:57 np0005539552 nova_compute[233724]: 2025-11-29 08:22:57.804 233728 DEBUG nova.storage.rbd_utils [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] rbd image 8f2e1d97-ea5c-43f8-a05f-2f531213d241_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:22:57 np0005539552 nova_compute[233724]: 2025-11-29 08:22:57.808 233728 DEBUG oslo_concurrency.processutils [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8f2e1d97-ea5c-43f8-a05f-2f531213d241/disk.config 8f2e1d97-ea5c-43f8-a05f-2f531213d241_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:57 np0005539552 nova_compute[233724]: 2025-11-29 08:22:57.831 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:57 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:57 np0005539552 nova_compute[233724]: 2025-11-29 08:22:57.962 233728 DEBUG oslo_concurrency.processutils [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8f2e1d97-ea5c-43f8-a05f-2f531213d241/disk.config 8f2e1d97-ea5c-43f8-a05f-2f531213d241_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:57 np0005539552 nova_compute[233724]: 2025-11-29 08:22:57.962 233728 INFO nova.virt.libvirt.driver [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Deleting local config drive /var/lib/nova/instances/8f2e1d97-ea5c-43f8-a05f-2f531213d241/disk.config because it was imported into RBD.#033[00m
Nov 29 03:22:58 np0005539552 kernel: tapf2132fc4-69: entered promiscuous mode
Nov 29 03:22:58 np0005539552 NetworkManager[48926]: <info>  [1764404578.0122] manager: (tapf2132fc4-69): new Tun device (/org/freedesktop/NetworkManager/Devices/249)
Nov 29 03:22:58 np0005539552 nova_compute[233724]: 2025-11-29 08:22:58.014 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:58 np0005539552 ovn_controller[133798]: 2025-11-29T08:22:58Z|00539|binding|INFO|Claiming lport f2132fc4-6960-41d7-ba5a-a4fdffd50d3a for this chassis.
Nov 29 03:22:58 np0005539552 ovn_controller[133798]: 2025-11-29T08:22:58Z|00540|binding|INFO|f2132fc4-6960-41d7-ba5a-a4fdffd50d3a: Claiming fa:16:3e:18:89:66 10.100.0.4
Nov 29 03:22:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:58.021 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:18:89:66 10.100.0.4'], port_security=['fa:16:3e:18:89:66 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '8f2e1d97-ea5c-43f8-a05f-2f531213d241', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0bace34c102e4d56b089fd695d324f10', 'neutron:revision_number': '5', 'neutron:security_group_ids': '8800adc1-baf2-4222-bbe6-bd173edc1243', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.247'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a26ea06d-6837-4c64-a5e9-9d9016316b21, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=f2132fc4-6960-41d7-ba5a-a4fdffd50d3a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:22:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:58.029 143400 INFO neutron.agent.ovn.metadata.agent [-] Port f2132fc4-6960-41d7-ba5a-a4fdffd50d3a in datapath 7fc1dfc3-8d7f-4854-980d-37a93f366035 bound to our chassis#033[00m
Nov 29 03:22:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:58.030 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7fc1dfc3-8d7f-4854-980d-37a93f366035#033[00m
Nov 29 03:22:58 np0005539552 ovn_controller[133798]: 2025-11-29T08:22:58Z|00541|binding|INFO|Setting lport f2132fc4-6960-41d7-ba5a-a4fdffd50d3a ovn-installed in OVS
Nov 29 03:22:58 np0005539552 ovn_controller[133798]: 2025-11-29T08:22:58Z|00542|binding|INFO|Setting lport f2132fc4-6960-41d7-ba5a-a4fdffd50d3a up in Southbound
Nov 29 03:22:58 np0005539552 nova_compute[233724]: 2025-11-29 08:22:58.038 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:58 np0005539552 nova_compute[233724]: 2025-11-29 08:22:58.040 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:58.046 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f3ffbe90-8173-48a2-ad0b-05bf8ea9b7aa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:58 np0005539552 systemd-machined[196379]: New machine qemu-53-instance-0000007b.
Nov 29 03:22:58 np0005539552 systemd[1]: Started Virtual Machine qemu-53-instance-0000007b.
Nov 29 03:22:58 np0005539552 systemd-udevd[286110]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:22:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:58.080 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[336fcdc6-4aad-4410-a723-2c615619a01c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:58.083 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[73a9e0d6-3317-4a2c-9448-aa0005263300]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:58 np0005539552 NetworkManager[48926]: <info>  [1764404578.0900] device (tapf2132fc4-69): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:22:58 np0005539552 NetworkManager[48926]: <info>  [1764404578.0910] device (tapf2132fc4-69): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:22:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:58.114 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[9f875d93-7592-4f7f-91e7-6ef2dc359a34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:58.133 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[5f551dab-12f9-453a-a48e-5e2243f826ef]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7fc1dfc3-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:27:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 152], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 747481, 'reachable_time': 35298, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286120, 'error': None, 'target': 'ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:58.148 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[92e58866-e6c8-480d-87eb-d7b8dbf5b4fd]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7fc1dfc3-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 747491, 'tstamp': 747491}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286121, 'error': None, 'target': 'ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7fc1dfc3-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 747494, 'tstamp': 747494}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286121, 'error': None, 'target': 'ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:58.150 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7fc1dfc3-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:58 np0005539552 nova_compute[233724]: 2025-11-29 08:22:58.151 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:58 np0005539552 nova_compute[233724]: 2025-11-29 08:22:58.152 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:58.153 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7fc1dfc3-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:58.154 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:22:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:58.154 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7fc1dfc3-80, col_values=(('external_ids', {'iface-id': '79109459-2a40-4b69-936e-ac2a2aa77985'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:22:58.154 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:22:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:58.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:58 np0005539552 nova_compute[233724]: 2025-11-29 08:22:58.873 233728 DEBUG nova.virt.libvirt.host [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Removed pending event for 8f2e1d97-ea5c-43f8-a05f-2f531213d241 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:22:58 np0005539552 nova_compute[233724]: 2025-11-29 08:22:58.874 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404578.8735049, 8f2e1d97-ea5c-43f8-a05f-2f531213d241 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:22:58 np0005539552 nova_compute[233724]: 2025-11-29 08:22:58.874 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:22:58 np0005539552 nova_compute[233724]: 2025-11-29 08:22:58.877 233728 DEBUG nova.compute.manager [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:22:58 np0005539552 nova_compute[233724]: 2025-11-29 08:22:58.877 233728 DEBUG nova.virt.libvirt.driver [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:22:58 np0005539552 nova_compute[233724]: 2025-11-29 08:22:58.880 233728 INFO nova.virt.libvirt.driver [-] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Instance spawned successfully.#033[00m
Nov 29 03:22:58 np0005539552 nova_compute[233724]: 2025-11-29 08:22:58.880 233728 DEBUG nova.virt.libvirt.driver [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:22:58 np0005539552 nova_compute[233724]: 2025-11-29 08:22:58.919 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:22:58 np0005539552 nova_compute[233724]: 2025-11-29 08:22:58.923 233728 DEBUG nova.virt.libvirt.driver [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:22:58 np0005539552 nova_compute[233724]: 2025-11-29 08:22:58.923 233728 DEBUG nova.virt.libvirt.driver [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:22:58 np0005539552 nova_compute[233724]: 2025-11-29 08:22:58.923 233728 DEBUG nova.virt.libvirt.driver [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:22:58 np0005539552 nova_compute[233724]: 2025-11-29 08:22:58.924 233728 DEBUG nova.virt.libvirt.driver [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:22:58 np0005539552 nova_compute[233724]: 2025-11-29 08:22:58.924 233728 DEBUG nova.virt.libvirt.driver [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:22:58 np0005539552 nova_compute[233724]: 2025-11-29 08:22:58.924 233728 DEBUG nova.virt.libvirt.driver [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:22:58 np0005539552 nova_compute[233724]: 2025-11-29 08:22:58.929 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:22:58 np0005539552 nova_compute[233724]: 2025-11-29 08:22:58.971 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 29 03:22:58 np0005539552 nova_compute[233724]: 2025-11-29 08:22:58.971 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404578.8743296, 8f2e1d97-ea5c-43f8-a05f-2f531213d241 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:22:58 np0005539552 nova_compute[233724]: 2025-11-29 08:22:58.971 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] VM Started (Lifecycle Event)#033[00m
Nov 29 03:22:59 np0005539552 nova_compute[233724]: 2025-11-29 08:22:59.021 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:22:59 np0005539552 nova_compute[233724]: 2025-11-29 08:22:59.023 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:22:59 np0005539552 nova_compute[233724]: 2025-11-29 08:22:59.043 233728 DEBUG nova.compute.manager [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:22:59 np0005539552 nova_compute[233724]: 2025-11-29 08:22:59.054 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 29 03:22:59 np0005539552 nova_compute[233724]: 2025-11-29 08:22:59.128 233728 DEBUG oslo_concurrency.lockutils [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:59 np0005539552 nova_compute[233724]: 2025-11-29 08:22:59.129 233728 DEBUG oslo_concurrency.lockutils [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:59 np0005539552 nova_compute[233724]: 2025-11-29 08:22:59.129 233728 DEBUG nova.objects.instance [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 29 03:22:59 np0005539552 nova_compute[233724]: 2025-11-29 08:22:59.216 233728 DEBUG oslo_concurrency.lockutils [None req-9c1c88cc-dcaf-471b-a7fe-2777e500a22d 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.087s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:22:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:59.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:00 np0005539552 nova_compute[233724]: 2025-11-29 08:23:00.352 233728 DEBUG nova.compute.manager [req-79111ef2-0e49-4915-b445-5a93abc3164c req-f170a125-1db0-4be4-87d7-9201c4c18de5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Received event network-vif-plugged-f2132fc4-6960-41d7-ba5a-a4fdffd50d3a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:23:00 np0005539552 nova_compute[233724]: 2025-11-29 08:23:00.353 233728 DEBUG oslo_concurrency.lockutils [req-79111ef2-0e49-4915-b445-5a93abc3164c req-f170a125-1db0-4be4-87d7-9201c4c18de5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8f2e1d97-ea5c-43f8-a05f-2f531213d241-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:00 np0005539552 nova_compute[233724]: 2025-11-29 08:23:00.353 233728 DEBUG oslo_concurrency.lockutils [req-79111ef2-0e49-4915-b445-5a93abc3164c req-f170a125-1db0-4be4-87d7-9201c4c18de5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8f2e1d97-ea5c-43f8-a05f-2f531213d241-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:00 np0005539552 nova_compute[233724]: 2025-11-29 08:23:00.353 233728 DEBUG oslo_concurrency.lockutils [req-79111ef2-0e49-4915-b445-5a93abc3164c req-f170a125-1db0-4be4-87d7-9201c4c18de5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8f2e1d97-ea5c-43f8-a05f-2f531213d241-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:00 np0005539552 nova_compute[233724]: 2025-11-29 08:23:00.354 233728 DEBUG nova.compute.manager [req-79111ef2-0e49-4915-b445-5a93abc3164c req-f170a125-1db0-4be4-87d7-9201c4c18de5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] No waiting events found dispatching network-vif-plugged-f2132fc4-6960-41d7-ba5a-a4fdffd50d3a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:23:00 np0005539552 nova_compute[233724]: 2025-11-29 08:23:00.354 233728 WARNING nova.compute.manager [req-79111ef2-0e49-4915-b445-5a93abc3164c req-f170a125-1db0-4be4-87d7-9201c4c18de5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Received unexpected event network-vif-plugged-f2132fc4-6960-41d7-ba5a-a4fdffd50d3a for instance with vm_state active and task_state None.#033[00m
Nov 29 03:23:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:00.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:01 np0005539552 nova_compute[233724]: 2025-11-29 08:23:01.447 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:01.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:01 np0005539552 nova_compute[233724]: 2025-11-29 08:23:01.957 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:02 np0005539552 nova_compute[233724]: 2025-11-29 08:23:02.263 233728 DEBUG oslo_concurrency.lockutils [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Acquiring lock "422691e1-bbe5-46bc-a828-8b7842bdbca6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:02 np0005539552 nova_compute[233724]: 2025-11-29 08:23:02.264 233728 DEBUG oslo_concurrency.lockutils [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Lock "422691e1-bbe5-46bc-a828-8b7842bdbca6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:02 np0005539552 nova_compute[233724]: 2025-11-29 08:23:02.286 233728 DEBUG nova.compute.manager [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:23:02 np0005539552 nova_compute[233724]: 2025-11-29 08:23:02.359 233728 DEBUG oslo_concurrency.lockutils [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:02 np0005539552 nova_compute[233724]: 2025-11-29 08:23:02.359 233728 DEBUG oslo_concurrency.lockutils [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:02 np0005539552 nova_compute[233724]: 2025-11-29 08:23:02.365 233728 DEBUG nova.virt.hardware [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:23:02 np0005539552 nova_compute[233724]: 2025-11-29 08:23:02.366 233728 INFO nova.compute.claims [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:23:02 np0005539552 nova_compute[233724]: 2025-11-29 08:23:02.453 233728 DEBUG nova.compute.manager [req-79cb6f8a-8927-44a5-b8c0-f05e756d064a req-79e1581e-5faf-4ba8-a081-2d3a84c0bd4c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Received event network-vif-plugged-f2132fc4-6960-41d7-ba5a-a4fdffd50d3a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:23:02 np0005539552 nova_compute[233724]: 2025-11-29 08:23:02.453 233728 DEBUG oslo_concurrency.lockutils [req-79cb6f8a-8927-44a5-b8c0-f05e756d064a req-79e1581e-5faf-4ba8-a081-2d3a84c0bd4c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8f2e1d97-ea5c-43f8-a05f-2f531213d241-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:02 np0005539552 nova_compute[233724]: 2025-11-29 08:23:02.454 233728 DEBUG oslo_concurrency.lockutils [req-79cb6f8a-8927-44a5-b8c0-f05e756d064a req-79e1581e-5faf-4ba8-a081-2d3a84c0bd4c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8f2e1d97-ea5c-43f8-a05f-2f531213d241-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:02 np0005539552 nova_compute[233724]: 2025-11-29 08:23:02.454 233728 DEBUG oslo_concurrency.lockutils [req-79cb6f8a-8927-44a5-b8c0-f05e756d064a req-79e1581e-5faf-4ba8-a081-2d3a84c0bd4c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8f2e1d97-ea5c-43f8-a05f-2f531213d241-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:02 np0005539552 nova_compute[233724]: 2025-11-29 08:23:02.454 233728 DEBUG nova.compute.manager [req-79cb6f8a-8927-44a5-b8c0-f05e756d064a req-79e1581e-5faf-4ba8-a081-2d3a84c0bd4c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] No waiting events found dispatching network-vif-plugged-f2132fc4-6960-41d7-ba5a-a4fdffd50d3a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:23:02 np0005539552 nova_compute[233724]: 2025-11-29 08:23:02.455 233728 WARNING nova.compute.manager [req-79cb6f8a-8927-44a5-b8c0-f05e756d064a req-79e1581e-5faf-4ba8-a081-2d3a84c0bd4c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Received unexpected event network-vif-plugged-f2132fc4-6960-41d7-ba5a-a4fdffd50d3a for instance with vm_state active and task_state None.#033[00m
Nov 29 03:23:02 np0005539552 nova_compute[233724]: 2025-11-29 08:23:02.591 233728 DEBUG oslo_concurrency.processutils [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:02.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:23:03 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1868686437' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:23:03 np0005539552 nova_compute[233724]: 2025-11-29 08:23:03.049 233728 DEBUG oslo_concurrency.processutils [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:03 np0005539552 nova_compute[233724]: 2025-11-29 08:23:03.057 233728 DEBUG nova.compute.provider_tree [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:23:03 np0005539552 nova_compute[233724]: 2025-11-29 08:23:03.295 233728 DEBUG nova.scheduler.client.report [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:23:03 np0005539552 nova_compute[233724]: 2025-11-29 08:23:03.323 233728 DEBUG oslo_concurrency.lockutils [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.964s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:03 np0005539552 nova_compute[233724]: 2025-11-29 08:23:03.324 233728 DEBUG nova.compute.manager [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:23:03 np0005539552 nova_compute[233724]: 2025-11-29 08:23:03.380 233728 DEBUG nova.compute.manager [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:23:03 np0005539552 nova_compute[233724]: 2025-11-29 08:23:03.381 233728 DEBUG nova.network.neutron [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:23:03 np0005539552 nova_compute[233724]: 2025-11-29 08:23:03.406 233728 INFO nova.virt.libvirt.driver [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:23:03 np0005539552 nova_compute[233724]: 2025-11-29 08:23:03.431 233728 DEBUG nova.compute.manager [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:23:03 np0005539552 nova_compute[233724]: 2025-11-29 08:23:03.554 233728 DEBUG nova.compute.manager [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:23:03 np0005539552 nova_compute[233724]: 2025-11-29 08:23:03.556 233728 DEBUG nova.virt.libvirt.driver [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:23:03 np0005539552 nova_compute[233724]: 2025-11-29 08:23:03.556 233728 INFO nova.virt.libvirt.driver [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Creating image(s)#033[00m
Nov 29 03:23:03 np0005539552 nova_compute[233724]: 2025-11-29 08:23:03.580 233728 DEBUG nova.storage.rbd_utils [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] rbd image 422691e1-bbe5-46bc-a828-8b7842bdbca6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:23:03 np0005539552 nova_compute[233724]: 2025-11-29 08:23:03.608 233728 DEBUG nova.storage.rbd_utils [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] rbd image 422691e1-bbe5-46bc-a828-8b7842bdbca6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:23:03 np0005539552 nova_compute[233724]: 2025-11-29 08:23:03.639 233728 DEBUG nova.storage.rbd_utils [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] rbd image 422691e1-bbe5-46bc-a828-8b7842bdbca6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:23:03 np0005539552 nova_compute[233724]: 2025-11-29 08:23:03.643 233728 DEBUG oslo_concurrency.processutils [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:03 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:23:03 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:23:03 np0005539552 nova_compute[233724]: 2025-11-29 08:23:03.703 233728 DEBUG nova.policy [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ac72c18a5c6f43faa861ede8af0e4363', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ac24a4866299495cb28b7d3f281ec632', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:23:03 np0005539552 nova_compute[233724]: 2025-11-29 08:23:03.730 233728 DEBUG oslo_concurrency.processutils [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:03 np0005539552 nova_compute[233724]: 2025-11-29 08:23:03.730 233728 DEBUG oslo_concurrency.lockutils [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:03 np0005539552 nova_compute[233724]: 2025-11-29 08:23:03.731 233728 DEBUG oslo_concurrency.lockutils [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:03 np0005539552 nova_compute[233724]: 2025-11-29 08:23:03.731 233728 DEBUG oslo_concurrency.lockutils [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:03 np0005539552 nova_compute[233724]: 2025-11-29 08:23:03.758 233728 DEBUG nova.storage.rbd_utils [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] rbd image 422691e1-bbe5-46bc-a828-8b7842bdbca6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:23:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:03.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:03 np0005539552 nova_compute[233724]: 2025-11-29 08:23:03.763 233728 DEBUG oslo_concurrency.processutils [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 422691e1-bbe5-46bc-a828-8b7842bdbca6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:03 np0005539552 nova_compute[233724]: 2025-11-29 08:23:03.975 233728 DEBUG oslo_concurrency.lockutils [None req-e4d406b0-50b3-470b-b9bf-c254f89f4b2b 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "8f2e1d97-ea5c-43f8-a05f-2f531213d241" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:03 np0005539552 nova_compute[233724]: 2025-11-29 08:23:03.976 233728 DEBUG oslo_concurrency.lockutils [None req-e4d406b0-50b3-470b-b9bf-c254f89f4b2b 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "8f2e1d97-ea5c-43f8-a05f-2f531213d241" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:03 np0005539552 nova_compute[233724]: 2025-11-29 08:23:03.996 233728 INFO nova.compute.manager [None req-e4d406b0-50b3-470b-b9bf-c254f89f4b2b 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Detaching volume ca708729-36db-4dd2-9f09-88d87c483376#033[00m
Nov 29 03:23:04 np0005539552 nova_compute[233724]: 2025-11-29 08:23:04.015 233728 DEBUG oslo_concurrency.processutils [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 422691e1-bbe5-46bc-a828-8b7842bdbca6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.253s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:04 np0005539552 nova_compute[233724]: 2025-11-29 08:23:04.087 233728 DEBUG nova.storage.rbd_utils [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] resizing rbd image 422691e1-bbe5-46bc-a828-8b7842bdbca6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:23:04 np0005539552 nova_compute[233724]: 2025-11-29 08:23:04.181 233728 DEBUG nova.objects.instance [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Lazy-loading 'migration_context' on Instance uuid 422691e1-bbe5-46bc-a828-8b7842bdbca6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:23:04 np0005539552 nova_compute[233724]: 2025-11-29 08:23:04.211 233728 DEBUG nova.virt.libvirt.driver [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:23:04 np0005539552 nova_compute[233724]: 2025-11-29 08:23:04.212 233728 DEBUG nova.virt.libvirt.driver [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Ensure instance console log exists: /var/lib/nova/instances/422691e1-bbe5-46bc-a828-8b7842bdbca6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:23:04 np0005539552 nova_compute[233724]: 2025-11-29 08:23:04.212 233728 DEBUG oslo_concurrency.lockutils [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:04 np0005539552 nova_compute[233724]: 2025-11-29 08:23:04.213 233728 DEBUG oslo_concurrency.lockutils [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:04 np0005539552 nova_compute[233724]: 2025-11-29 08:23:04.213 233728 DEBUG oslo_concurrency.lockutils [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:04 np0005539552 nova_compute[233724]: 2025-11-29 08:23:04.313 233728 INFO nova.virt.block_device [None req-e4d406b0-50b3-470b-b9bf-c254f89f4b2b 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Attempting to driver detach volume ca708729-36db-4dd2-9f09-88d87c483376 from mountpoint /dev/vdb#033[00m
Nov 29 03:23:04 np0005539552 nova_compute[233724]: 2025-11-29 08:23:04.325 233728 DEBUG nova.virt.libvirt.driver [None req-e4d406b0-50b3-470b-b9bf-c254f89f4b2b 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Attempting to detach device vdb from instance 8f2e1d97-ea5c-43f8-a05f-2f531213d241 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:23:04 np0005539552 nova_compute[233724]: 2025-11-29 08:23:04.326 233728 DEBUG nova.virt.libvirt.guest [None req-e4d406b0-50b3-470b-b9bf-c254f89f4b2b 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:23:04 np0005539552 nova_compute[233724]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:23:04 np0005539552 nova_compute[233724]:  <source protocol="rbd" name="volumes/volume-ca708729-36db-4dd2-9f09-88d87c483376">
Nov 29 03:23:04 np0005539552 nova_compute[233724]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:23:04 np0005539552 nova_compute[233724]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:23:04 np0005539552 nova_compute[233724]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:23:04 np0005539552 nova_compute[233724]:  </source>
Nov 29 03:23:04 np0005539552 nova_compute[233724]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:23:04 np0005539552 nova_compute[233724]:  <serial>ca708729-36db-4dd2-9f09-88d87c483376</serial>
Nov 29 03:23:04 np0005539552 nova_compute[233724]:  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Nov 29 03:23:04 np0005539552 nova_compute[233724]: </disk>
Nov 29 03:23:04 np0005539552 nova_compute[233724]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:23:04 np0005539552 nova_compute[233724]: 2025-11-29 08:23:04.333 233728 INFO nova.virt.libvirt.driver [None req-e4d406b0-50b3-470b-b9bf-c254f89f4b2b 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Successfully detached device vdb from instance 8f2e1d97-ea5c-43f8-a05f-2f531213d241 from the persistent domain config.#033[00m
Nov 29 03:23:04 np0005539552 nova_compute[233724]: 2025-11-29 08:23:04.333 233728 DEBUG nova.virt.libvirt.driver [None req-e4d406b0-50b3-470b-b9bf-c254f89f4b2b 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 8f2e1d97-ea5c-43f8-a05f-2f531213d241 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 29 03:23:04 np0005539552 nova_compute[233724]: 2025-11-29 08:23:04.334 233728 DEBUG nova.virt.libvirt.guest [None req-e4d406b0-50b3-470b-b9bf-c254f89f4b2b 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:23:04 np0005539552 nova_compute[233724]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:23:04 np0005539552 nova_compute[233724]:  <source protocol="rbd" name="volumes/volume-ca708729-36db-4dd2-9f09-88d87c483376">
Nov 29 03:23:04 np0005539552 nova_compute[233724]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:23:04 np0005539552 nova_compute[233724]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:23:04 np0005539552 nova_compute[233724]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:23:04 np0005539552 nova_compute[233724]:  </source>
Nov 29 03:23:04 np0005539552 nova_compute[233724]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:23:04 np0005539552 nova_compute[233724]:  <serial>ca708729-36db-4dd2-9f09-88d87c483376</serial>
Nov 29 03:23:04 np0005539552 nova_compute[233724]:  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Nov 29 03:23:04 np0005539552 nova_compute[233724]: </disk>
Nov 29 03:23:04 np0005539552 nova_compute[233724]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:23:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:04.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:04 np0005539552 nova_compute[233724]: 2025-11-29 08:23:04.868 233728 DEBUG nova.network.neutron [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Successfully created port: a93c763e-abba-4489-ad7c-fbe9bb7222b7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:23:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:05.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:06 np0005539552 nova_compute[233724]: 2025-11-29 08:23:06.252 233728 DEBUG nova.virt.libvirt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Received event <DeviceRemovedEvent: 1764404586.251923, 8f2e1d97-ea5c-43f8-a05f-2f531213d241 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 29 03:23:06 np0005539552 nova_compute[233724]: 2025-11-29 08:23:06.254 233728 DEBUG nova.virt.libvirt.driver [None req-e4d406b0-50b3-470b-b9bf-c254f89f4b2b 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 8f2e1d97-ea5c-43f8-a05f-2f531213d241 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 29 03:23:06 np0005539552 nova_compute[233724]: 2025-11-29 08:23:06.257 233728 INFO nova.virt.libvirt.driver [None req-e4d406b0-50b3-470b-b9bf-c254f89f4b2b 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Successfully detached device vdb from instance 8f2e1d97-ea5c-43f8-a05f-2f531213d241 from the live domain config.#033[00m
Nov 29 03:23:06 np0005539552 nova_compute[233724]: 2025-11-29 08:23:06.449 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:06 np0005539552 nova_compute[233724]: 2025-11-29 08:23:06.568 233728 DEBUG nova.network.neutron [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Successfully updated port: a93c763e-abba-4489-ad7c-fbe9bb7222b7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:23:06 np0005539552 nova_compute[233724]: 2025-11-29 08:23:06.586 233728 DEBUG oslo_concurrency.lockutils [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Acquiring lock "refresh_cache-422691e1-bbe5-46bc-a828-8b7842bdbca6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:23:06 np0005539552 nova_compute[233724]: 2025-11-29 08:23:06.586 233728 DEBUG oslo_concurrency.lockutils [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Acquired lock "refresh_cache-422691e1-bbe5-46bc-a828-8b7842bdbca6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:23:06 np0005539552 nova_compute[233724]: 2025-11-29 08:23:06.586 233728 DEBUG nova.network.neutron [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:23:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:06.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:06 np0005539552 nova_compute[233724]: 2025-11-29 08:23:06.727 233728 DEBUG nova.objects.instance [None req-e4d406b0-50b3-470b-b9bf-c254f89f4b2b 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lazy-loading 'flavor' on Instance uuid 8f2e1d97-ea5c-43f8-a05f-2f531213d241 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:23:06 np0005539552 nova_compute[233724]: 2025-11-29 08:23:06.734 233728 DEBUG nova.compute.manager [req-11b4f5f2-28cf-49d3-b619-a2659ca7c1c0 req-3f81dbdb-0ebc-40fa-943e-7598014ef6f8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Received event network-changed-a93c763e-abba-4489-ad7c-fbe9bb7222b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:23:06 np0005539552 nova_compute[233724]: 2025-11-29 08:23:06.734 233728 DEBUG nova.compute.manager [req-11b4f5f2-28cf-49d3-b619-a2659ca7c1c0 req-3f81dbdb-0ebc-40fa-943e-7598014ef6f8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Refreshing instance network info cache due to event network-changed-a93c763e-abba-4489-ad7c-fbe9bb7222b7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:23:06 np0005539552 nova_compute[233724]: 2025-11-29 08:23:06.735 233728 DEBUG oslo_concurrency.lockutils [req-11b4f5f2-28cf-49d3-b619-a2659ca7c1c0 req-3f81dbdb-0ebc-40fa-943e-7598014ef6f8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-422691e1-bbe5-46bc-a828-8b7842bdbca6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:23:06 np0005539552 nova_compute[233724]: 2025-11-29 08:23:06.830 233728 DEBUG oslo_concurrency.lockutils [None req-e4d406b0-50b3-470b-b9bf-c254f89f4b2b 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "8f2e1d97-ea5c-43f8-a05f-2f531213d241" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 2.854s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:06 np0005539552 nova_compute[233724]: 2025-11-29 08:23:06.891 233728 DEBUG nova.network.neutron [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:23:06 np0005539552 nova_compute[233724]: 2025-11-29 08:23:06.959 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:07.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:07 np0005539552 nova_compute[233724]: 2025-11-29 08:23:07.775 233728 DEBUG oslo_concurrency.lockutils [None req-017d5264-8caf-45bb-a039-27be5295cb74 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "8f2e1d97-ea5c-43f8-a05f-2f531213d241" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:07 np0005539552 nova_compute[233724]: 2025-11-29 08:23:07.776 233728 DEBUG oslo_concurrency.lockutils [None req-017d5264-8caf-45bb-a039-27be5295cb74 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "8f2e1d97-ea5c-43f8-a05f-2f531213d241" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:07 np0005539552 nova_compute[233724]: 2025-11-29 08:23:07.776 233728 DEBUG oslo_concurrency.lockutils [None req-017d5264-8caf-45bb-a039-27be5295cb74 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "8f2e1d97-ea5c-43f8-a05f-2f531213d241-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:07 np0005539552 nova_compute[233724]: 2025-11-29 08:23:07.776 233728 DEBUG oslo_concurrency.lockutils [None req-017d5264-8caf-45bb-a039-27be5295cb74 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "8f2e1d97-ea5c-43f8-a05f-2f531213d241-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:07 np0005539552 nova_compute[233724]: 2025-11-29 08:23:07.776 233728 DEBUG oslo_concurrency.lockutils [None req-017d5264-8caf-45bb-a039-27be5295cb74 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "8f2e1d97-ea5c-43f8-a05f-2f531213d241-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:07 np0005539552 nova_compute[233724]: 2025-11-29 08:23:07.777 233728 INFO nova.compute.manager [None req-017d5264-8caf-45bb-a039-27be5295cb74 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Terminating instance#033[00m
Nov 29 03:23:07 np0005539552 nova_compute[233724]: 2025-11-29 08:23:07.778 233728 DEBUG nova.compute.manager [None req-017d5264-8caf-45bb-a039-27be5295cb74 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:23:07 np0005539552 kernel: tapf2132fc4-69 (unregistering): left promiscuous mode
Nov 29 03:23:07 np0005539552 NetworkManager[48926]: <info>  [1764404587.8225] device (tapf2132fc4-69): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:23:07 np0005539552 nova_compute[233724]: 2025-11-29 08:23:07.829 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:07 np0005539552 ovn_controller[133798]: 2025-11-29T08:23:07Z|00543|binding|INFO|Releasing lport f2132fc4-6960-41d7-ba5a-a4fdffd50d3a from this chassis (sb_readonly=0)
Nov 29 03:23:07 np0005539552 ovn_controller[133798]: 2025-11-29T08:23:07Z|00544|binding|INFO|Setting lport f2132fc4-6960-41d7-ba5a-a4fdffd50d3a down in Southbound
Nov 29 03:23:07 np0005539552 ovn_controller[133798]: 2025-11-29T08:23:07Z|00545|binding|INFO|Removing iface tapf2132fc4-69 ovn-installed in OVS
Nov 29 03:23:07 np0005539552 nova_compute[233724]: 2025-11-29 08:23:07.834 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:07 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:07.841 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:18:89:66 10.100.0.4'], port_security=['fa:16:3e:18:89:66 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '8f2e1d97-ea5c-43f8-a05f-2f531213d241', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0bace34c102e4d56b089fd695d324f10', 'neutron:revision_number': '6', 'neutron:security_group_ids': '8800adc1-baf2-4222-bbe6-bd173edc1243', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.247', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a26ea06d-6837-4c64-a5e9-9d9016316b21, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=f2132fc4-6960-41d7-ba5a-a4fdffd50d3a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:23:07 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:07.843 143400 INFO neutron.agent.ovn.metadata.agent [-] Port f2132fc4-6960-41d7-ba5a-a4fdffd50d3a in datapath 7fc1dfc3-8d7f-4854-980d-37a93f366035 unbound from our chassis#033[00m
Nov 29 03:23:07 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:07.846 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7fc1dfc3-8d7f-4854-980d-37a93f366035#033[00m
Nov 29 03:23:07 np0005539552 nova_compute[233724]: 2025-11-29 08:23:07.846 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:07 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:07.872 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e63a90df-c35f-42d2-8cda-d76f7de2ce42]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:07 np0005539552 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d0000007b.scope: Deactivated successfully.
Nov 29 03:23:07 np0005539552 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d0000007b.scope: Consumed 9.884s CPU time.
Nov 29 03:23:07 np0005539552 systemd-machined[196379]: Machine qemu-53-instance-0000007b terminated.
Nov 29 03:23:07 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:07.909 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[cca713b8-379d-4a57-93a9-9323983d6c45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:07 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:07.912 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[8b9fb2df-e709-40b1-8922-da5baabaa01f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:07 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:07.939 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[881a97be-c949-4c64-b7ef-98fa221f77b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:07 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:07 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:07.960 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2fe6c2a7-1fdb-4924-8b26-2b006154067f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7fc1dfc3-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:27:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 152], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 747481, 'reachable_time': 35298, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286490, 'error': None, 'target': 'ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:07 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:07.980 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3b5e267a-7308-408b-9132-59c1f6c96caa]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7fc1dfc3-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 747491, 'tstamp': 747491}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286491, 'error': None, 'target': 'ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7fc1dfc3-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 747494, 'tstamp': 747494}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286491, 'error': None, 'target': 'ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:07 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:07.982 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7fc1dfc3-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:07 np0005539552 nova_compute[233724]: 2025-11-29 08:23:07.984 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:07 np0005539552 nova_compute[233724]: 2025-11-29 08:23:07.988 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:07 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:07.990 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7fc1dfc3-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:07 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:07.990 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:23:07 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:07.990 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7fc1dfc3-80, col_values=(('external_ids', {'iface-id': '79109459-2a40-4b69-936e-ac2a2aa77985'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:07 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:07.990 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.016 233728 INFO nova.virt.libvirt.driver [-] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Instance destroyed successfully.#033[00m
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.016 233728 DEBUG nova.objects.instance [None req-017d5264-8caf-45bb-a039-27be5295cb74 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lazy-loading 'resources' on Instance uuid 8f2e1d97-ea5c-43f8-a05f-2f531213d241 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.041 233728 DEBUG nova.virt.libvirt.vif [None req-017d5264-8caf-45bb-a039-27be5295cb74 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:21:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1988120291',display_name='tempest-ServerActionsTestOtherA-server-1561025200',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1988120291',id=123,image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIJKHc7P4YQ541Hb485a6CMBwKiv24SeFFAW+eSzfZUCbyqL5VCFCZG4/k60hFCpsJ0Tv3h26Uo1xnh37nbsxuBRWgRRa38dV/cocbcLIxwndoSfRH3ORp+Lk/2eG6aanw==',key_name='tempest-keypair-406941470',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:22:59Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0bace34c102e4d56b089fd695d324f10',ramdisk_id='',reservation_id='r-4pdufn0n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1954650991',owner_user_name='tempest-ServerActionsTestOtherA-1954650991-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:22:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1552f15deb524705a9456cbe9b54c429',uuid=8f2e1d97-ea5c-43f8-a05f-2f531213d241,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f2132fc4-6960-41d7-ba5a-a4fdffd50d3a", "address": "fa:16:3e:18:89:66", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": 
[{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2132fc4-69", "ovs_interfaceid": "f2132fc4-6960-41d7-ba5a-a4fdffd50d3a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.042 233728 DEBUG nova.network.os_vif_util [None req-017d5264-8caf-45bb-a039-27be5295cb74 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converting VIF {"id": "f2132fc4-6960-41d7-ba5a-a4fdffd50d3a", "address": "fa:16:3e:18:89:66", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2132fc4-69", "ovs_interfaceid": "f2132fc4-6960-41d7-ba5a-a4fdffd50d3a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.043 233728 DEBUG nova.network.os_vif_util [None req-017d5264-8caf-45bb-a039-27be5295cb74 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:18:89:66,bridge_name='br-int',has_traffic_filtering=True,id=f2132fc4-6960-41d7-ba5a-a4fdffd50d3a,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2132fc4-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.044 233728 DEBUG os_vif [None req-017d5264-8caf-45bb-a039-27be5295cb74 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:18:89:66,bridge_name='br-int',has_traffic_filtering=True,id=f2132fc4-6960-41d7-ba5a-a4fdffd50d3a,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2132fc4-69') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.046 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.047 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf2132fc4-69, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.051 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.053 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.056 233728 INFO os_vif [None req-017d5264-8caf-45bb-a039-27be5295cb74 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:18:89:66,bridge_name='br-int',has_traffic_filtering=True,id=f2132fc4-6960-41d7-ba5a-a4fdffd50d3a,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2132fc4-69')#033[00m
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.087 233728 DEBUG nova.network.neutron [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Updating instance_info_cache with network_info: [{"id": "a93c763e-abba-4489-ad7c-fbe9bb7222b7", "address": "fa:16:3e:2c:6a:93", "network": {"id": "08ca2a6f-4677-47d6-80c2-9dfab3beaf03", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1798725435-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac24a4866299495cb28b7d3f281ec632", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa93c763e-ab", "ovs_interfaceid": "a93c763e-abba-4489-ad7c-fbe9bb7222b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.114 233728 DEBUG oslo_concurrency.lockutils [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Releasing lock "refresh_cache-422691e1-bbe5-46bc-a828-8b7842bdbca6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.114 233728 DEBUG nova.compute.manager [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Instance network_info: |[{"id": "a93c763e-abba-4489-ad7c-fbe9bb7222b7", "address": "fa:16:3e:2c:6a:93", "network": {"id": "08ca2a6f-4677-47d6-80c2-9dfab3beaf03", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1798725435-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac24a4866299495cb28b7d3f281ec632", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa93c763e-ab", "ovs_interfaceid": "a93c763e-abba-4489-ad7c-fbe9bb7222b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.115 233728 DEBUG oslo_concurrency.lockutils [req-11b4f5f2-28cf-49d3-b619-a2659ca7c1c0 req-3f81dbdb-0ebc-40fa-943e-7598014ef6f8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-422691e1-bbe5-46bc-a828-8b7842bdbca6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.115 233728 DEBUG nova.network.neutron [req-11b4f5f2-28cf-49d3-b619-a2659ca7c1c0 req-3f81dbdb-0ebc-40fa-943e-7598014ef6f8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Refreshing network info cache for port a93c763e-abba-4489-ad7c-fbe9bb7222b7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.118 233728 DEBUG nova.virt.libvirt.driver [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Start _get_guest_xml network_info=[{"id": "a93c763e-abba-4489-ad7c-fbe9bb7222b7", "address": "fa:16:3e:2c:6a:93", "network": {"id": "08ca2a6f-4677-47d6-80c2-9dfab3beaf03", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1798725435-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac24a4866299495cb28b7d3f281ec632", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa93c763e-ab", "ovs_interfaceid": "a93c763e-abba-4489-ad7c-fbe9bb7222b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.124 233728 WARNING nova.virt.libvirt.driver [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.130 233728 DEBUG nova.virt.libvirt.host [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.131 233728 DEBUG nova.virt.libvirt.host [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.141 233728 DEBUG nova.virt.libvirt.host [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.142 233728 DEBUG nova.virt.libvirt.host [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.144 233728 DEBUG nova.virt.libvirt.driver [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.145 233728 DEBUG nova.virt.hardware [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.146 233728 DEBUG nova.virt.hardware [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.146 233728 DEBUG nova.virt.hardware [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.147 233728 DEBUG nova.virt.hardware [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.147 233728 DEBUG nova.virt.hardware [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.147 233728 DEBUG nova.virt.hardware [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.148 233728 DEBUG nova.virt.hardware [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.148 233728 DEBUG nova.virt.hardware [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.149 233728 DEBUG nova.virt.hardware [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.149 233728 DEBUG nova.virt.hardware [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.150 233728 DEBUG nova.virt.hardware [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.154 233728 DEBUG oslo_concurrency.processutils [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:23:08 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3008959333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.631 233728 DEBUG oslo_concurrency.processutils [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.656 233728 DEBUG nova.storage.rbd_utils [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] rbd image 422691e1-bbe5-46bc-a828-8b7842bdbca6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.660 233728 DEBUG oslo_concurrency.processutils [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:08.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.858 233728 DEBUG nova.compute.manager [req-399e84ca-0cbd-476c-8f60-750949a3fc78 req-32a3cc8d-9cd6-4ba0-9688-392cef989c1b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Received event network-vif-unplugged-f2132fc4-6960-41d7-ba5a-a4fdffd50d3a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.859 233728 DEBUG oslo_concurrency.lockutils [req-399e84ca-0cbd-476c-8f60-750949a3fc78 req-32a3cc8d-9cd6-4ba0-9688-392cef989c1b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8f2e1d97-ea5c-43f8-a05f-2f531213d241-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.859 233728 DEBUG oslo_concurrency.lockutils [req-399e84ca-0cbd-476c-8f60-750949a3fc78 req-32a3cc8d-9cd6-4ba0-9688-392cef989c1b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8f2e1d97-ea5c-43f8-a05f-2f531213d241-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.859 233728 DEBUG oslo_concurrency.lockutils [req-399e84ca-0cbd-476c-8f60-750949a3fc78 req-32a3cc8d-9cd6-4ba0-9688-392cef989c1b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8f2e1d97-ea5c-43f8-a05f-2f531213d241-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.859 233728 DEBUG nova.compute.manager [req-399e84ca-0cbd-476c-8f60-750949a3fc78 req-32a3cc8d-9cd6-4ba0-9688-392cef989c1b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] No waiting events found dispatching network-vif-unplugged-f2132fc4-6960-41d7-ba5a-a4fdffd50d3a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.860 233728 DEBUG nova.compute.manager [req-399e84ca-0cbd-476c-8f60-750949a3fc78 req-32a3cc8d-9cd6-4ba0-9688-392cef989c1b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Received event network-vif-unplugged-f2132fc4-6960-41d7-ba5a-a4fdffd50d3a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.860 233728 DEBUG nova.compute.manager [req-399e84ca-0cbd-476c-8f60-750949a3fc78 req-32a3cc8d-9cd6-4ba0-9688-392cef989c1b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Received event network-vif-plugged-f2132fc4-6960-41d7-ba5a-a4fdffd50d3a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.860 233728 DEBUG oslo_concurrency.lockutils [req-399e84ca-0cbd-476c-8f60-750949a3fc78 req-32a3cc8d-9cd6-4ba0-9688-392cef989c1b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8f2e1d97-ea5c-43f8-a05f-2f531213d241-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.860 233728 DEBUG oslo_concurrency.lockutils [req-399e84ca-0cbd-476c-8f60-750949a3fc78 req-32a3cc8d-9cd6-4ba0-9688-392cef989c1b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8f2e1d97-ea5c-43f8-a05f-2f531213d241-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.861 233728 DEBUG oslo_concurrency.lockutils [req-399e84ca-0cbd-476c-8f60-750949a3fc78 req-32a3cc8d-9cd6-4ba0-9688-392cef989c1b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8f2e1d97-ea5c-43f8-a05f-2f531213d241-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.861 233728 DEBUG nova.compute.manager [req-399e84ca-0cbd-476c-8f60-750949a3fc78 req-32a3cc8d-9cd6-4ba0-9688-392cef989c1b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] No waiting events found dispatching network-vif-plugged-f2132fc4-6960-41d7-ba5a-a4fdffd50d3a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:23:08 np0005539552 nova_compute[233724]: 2025-11-29 08:23:08.861 233728 WARNING nova.compute.manager [req-399e84ca-0cbd-476c-8f60-750949a3fc78 req-32a3cc8d-9cd6-4ba0-9688-392cef989c1b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Received unexpected event network-vif-plugged-f2132fc4-6960-41d7-ba5a-a4fdffd50d3a for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:23:09 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:23:09 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/9650241' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:23:09 np0005539552 nova_compute[233724]: 2025-11-29 08:23:09.108 233728 DEBUG oslo_concurrency.processutils [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:09 np0005539552 nova_compute[233724]: 2025-11-29 08:23:09.110 233728 DEBUG nova.virt.libvirt.vif [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:23:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-1723376396',display_name='tempest-ServerMetadataTestJSON-server-1723376396',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-1723376396',id=127,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ac24a4866299495cb28b7d3f281ec632',ramdisk_id='',reservation_id='r-sr4jia2k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-2113564496',owner_user_name='tempest-ServerMetadata
TestJSON-2113564496-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:23:03Z,user_data=None,user_id='ac72c18a5c6f43faa861ede8af0e4363',uuid=422691e1-bbe5-46bc-a828-8b7842bdbca6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a93c763e-abba-4489-ad7c-fbe9bb7222b7", "address": "fa:16:3e:2c:6a:93", "network": {"id": "08ca2a6f-4677-47d6-80c2-9dfab3beaf03", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1798725435-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac24a4866299495cb28b7d3f281ec632", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa93c763e-ab", "ovs_interfaceid": "a93c763e-abba-4489-ad7c-fbe9bb7222b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:23:09 np0005539552 nova_compute[233724]: 2025-11-29 08:23:09.110 233728 DEBUG nova.network.os_vif_util [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Converting VIF {"id": "a93c763e-abba-4489-ad7c-fbe9bb7222b7", "address": "fa:16:3e:2c:6a:93", "network": {"id": "08ca2a6f-4677-47d6-80c2-9dfab3beaf03", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1798725435-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac24a4866299495cb28b7d3f281ec632", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa93c763e-ab", "ovs_interfaceid": "a93c763e-abba-4489-ad7c-fbe9bb7222b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:23:09 np0005539552 nova_compute[233724]: 2025-11-29 08:23:09.111 233728 DEBUG nova.network.os_vif_util [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2c:6a:93,bridge_name='br-int',has_traffic_filtering=True,id=a93c763e-abba-4489-ad7c-fbe9bb7222b7,network=Network(08ca2a6f-4677-47d6-80c2-9dfab3beaf03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa93c763e-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:23:09 np0005539552 nova_compute[233724]: 2025-11-29 08:23:09.112 233728 DEBUG nova.objects.instance [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Lazy-loading 'pci_devices' on Instance uuid 422691e1-bbe5-46bc-a828-8b7842bdbca6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:23:09 np0005539552 nova_compute[233724]: 2025-11-29 08:23:09.120 233728 INFO nova.virt.libvirt.driver [None req-017d5264-8caf-45bb-a039-27be5295cb74 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Deleting instance files /var/lib/nova/instances/8f2e1d97-ea5c-43f8-a05f-2f531213d241_del#033[00m
Nov 29 03:23:09 np0005539552 nova_compute[233724]: 2025-11-29 08:23:09.120 233728 INFO nova.virt.libvirt.driver [None req-017d5264-8caf-45bb-a039-27be5295cb74 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Deletion of /var/lib/nova/instances/8f2e1d97-ea5c-43f8-a05f-2f531213d241_del complete#033[00m
Nov 29 03:23:09 np0005539552 nova_compute[233724]: 2025-11-29 08:23:09.145 233728 DEBUG nova.virt.libvirt.driver [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:23:09 np0005539552 nova_compute[233724]:  <uuid>422691e1-bbe5-46bc-a828-8b7842bdbca6</uuid>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:  <name>instance-0000007f</name>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:23:09 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:      <nova:name>tempest-ServerMetadataTestJSON-server-1723376396</nova:name>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:23:08</nova:creationTime>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:23:09 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:        <nova:user uuid="ac72c18a5c6f43faa861ede8af0e4363">tempest-ServerMetadataTestJSON-2113564496-project-member</nova:user>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:        <nova:project uuid="ac24a4866299495cb28b7d3f281ec632">tempest-ServerMetadataTestJSON-2113564496</nova:project>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:        <nova:port uuid="a93c763e-abba-4489-ad7c-fbe9bb7222b7">
Nov 29 03:23:09 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:      <entry name="serial">422691e1-bbe5-46bc-a828-8b7842bdbca6</entry>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:      <entry name="uuid">422691e1-bbe5-46bc-a828-8b7842bdbca6</entry>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:23:09 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/422691e1-bbe5-46bc-a828-8b7842bdbca6_disk">
Nov 29 03:23:09 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:23:09 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:23:09 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/422691e1-bbe5-46bc-a828-8b7842bdbca6_disk.config">
Nov 29 03:23:09 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:23:09 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:23:09 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:2c:6a:93"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:      <target dev="tapa93c763e-ab"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:23:09 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/422691e1-bbe5-46bc-a828-8b7842bdbca6/console.log" append="off"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:23:09 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:23:09 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:23:09 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:23:09 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:23:09 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:23:09 np0005539552 nova_compute[233724]: 2025-11-29 08:23:09.146 233728 DEBUG nova.compute.manager [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Preparing to wait for external event network-vif-plugged-a93c763e-abba-4489-ad7c-fbe9bb7222b7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:23:09 np0005539552 nova_compute[233724]: 2025-11-29 08:23:09.147 233728 DEBUG oslo_concurrency.lockutils [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Acquiring lock "422691e1-bbe5-46bc-a828-8b7842bdbca6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:09 np0005539552 nova_compute[233724]: 2025-11-29 08:23:09.147 233728 DEBUG oslo_concurrency.lockutils [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Lock "422691e1-bbe5-46bc-a828-8b7842bdbca6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:09 np0005539552 nova_compute[233724]: 2025-11-29 08:23:09.148 233728 DEBUG oslo_concurrency.lockutils [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Lock "422691e1-bbe5-46bc-a828-8b7842bdbca6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:09 np0005539552 nova_compute[233724]: 2025-11-29 08:23:09.149 233728 DEBUG nova.virt.libvirt.vif [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:23:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-1723376396',display_name='tempest-ServerMetadataTestJSON-server-1723376396',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-1723376396',id=127,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ac24a4866299495cb28b7d3f281ec632',ramdisk_id='',reservation_id='r-sr4jia2k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-2113564496',owner_user_name='tempest-ServerMetadataTestJSON-2113564496-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:23:03Z,user_data=None,user_id='ac72c18a5c6f43faa861ede8af0e4363',uuid=422691e1-bbe5-46bc-a828-8b7842bdbca6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a93c763e-abba-4489-ad7c-fbe9bb7222b7", "address": "fa:16:3e:2c:6a:93", "network": {"id": "08ca2a6f-4677-47d6-80c2-9dfab3beaf03", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1798725435-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac24a4866299495cb28b7d3f281ec632", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa93c763e-ab", "ovs_interfaceid": "a93c763e-abba-4489-ad7c-fbe9bb7222b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:23:09 np0005539552 nova_compute[233724]: 2025-11-29 08:23:09.150 233728 DEBUG nova.network.os_vif_util [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Converting VIF {"id": "a93c763e-abba-4489-ad7c-fbe9bb7222b7", "address": "fa:16:3e:2c:6a:93", "network": {"id": "08ca2a6f-4677-47d6-80c2-9dfab3beaf03", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1798725435-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac24a4866299495cb28b7d3f281ec632", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa93c763e-ab", "ovs_interfaceid": "a93c763e-abba-4489-ad7c-fbe9bb7222b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:23:09 np0005539552 nova_compute[233724]: 2025-11-29 08:23:09.151 233728 DEBUG nova.network.os_vif_util [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2c:6a:93,bridge_name='br-int',has_traffic_filtering=True,id=a93c763e-abba-4489-ad7c-fbe9bb7222b7,network=Network(08ca2a6f-4677-47d6-80c2-9dfab3beaf03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa93c763e-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:23:09 np0005539552 nova_compute[233724]: 2025-11-29 08:23:09.151 233728 DEBUG os_vif [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:6a:93,bridge_name='br-int',has_traffic_filtering=True,id=a93c763e-abba-4489-ad7c-fbe9bb7222b7,network=Network(08ca2a6f-4677-47d6-80c2-9dfab3beaf03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa93c763e-ab') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:23:09 np0005539552 nova_compute[233724]: 2025-11-29 08:23:09.153 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:09 np0005539552 nova_compute[233724]: 2025-11-29 08:23:09.154 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:09 np0005539552 nova_compute[233724]: 2025-11-29 08:23:09.154 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:23:09 np0005539552 nova_compute[233724]: 2025-11-29 08:23:09.158 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:09 np0005539552 nova_compute[233724]: 2025-11-29 08:23:09.159 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa93c763e-ab, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:09 np0005539552 nova_compute[233724]: 2025-11-29 08:23:09.159 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa93c763e-ab, col_values=(('external_ids', {'iface-id': 'a93c763e-abba-4489-ad7c-fbe9bb7222b7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2c:6a:93', 'vm-uuid': '422691e1-bbe5-46bc-a828-8b7842bdbca6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:09 np0005539552 nova_compute[233724]: 2025-11-29 08:23:09.161 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:09 np0005539552 NetworkManager[48926]: <info>  [1764404589.1628] manager: (tapa93c763e-ab): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/250)
Nov 29 03:23:09 np0005539552 nova_compute[233724]: 2025-11-29 08:23:09.164 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:23:09 np0005539552 nova_compute[233724]: 2025-11-29 08:23:09.169 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:09 np0005539552 nova_compute[233724]: 2025-11-29 08:23:09.170 233728 INFO os_vif [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:6a:93,bridge_name='br-int',has_traffic_filtering=True,id=a93c763e-abba-4489-ad7c-fbe9bb7222b7,network=Network(08ca2a6f-4677-47d6-80c2-9dfab3beaf03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa93c763e-ab')#033[00m
Nov 29 03:23:09 np0005539552 nova_compute[233724]: 2025-11-29 08:23:09.174 233728 INFO nova.compute.manager [None req-017d5264-8caf-45bb-a039-27be5295cb74 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Took 1.40 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:23:09 np0005539552 nova_compute[233724]: 2025-11-29 08:23:09.174 233728 DEBUG oslo.service.loopingcall [None req-017d5264-8caf-45bb-a039-27be5295cb74 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:23:09 np0005539552 nova_compute[233724]: 2025-11-29 08:23:09.174 233728 DEBUG nova.compute.manager [-] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:23:09 np0005539552 nova_compute[233724]: 2025-11-29 08:23:09.174 233728 DEBUG nova.network.neutron [-] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:23:09 np0005539552 nova_compute[233724]: 2025-11-29 08:23:09.228 233728 DEBUG nova.virt.libvirt.driver [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:23:09 np0005539552 nova_compute[233724]: 2025-11-29 08:23:09.228 233728 DEBUG nova.virt.libvirt.driver [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:23:09 np0005539552 nova_compute[233724]: 2025-11-29 08:23:09.229 233728 DEBUG nova.virt.libvirt.driver [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] No VIF found with MAC fa:16:3e:2c:6a:93, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:23:09 np0005539552 nova_compute[233724]: 2025-11-29 08:23:09.229 233728 INFO nova.virt.libvirt.driver [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Using config drive#033[00m
Nov 29 03:23:09 np0005539552 nova_compute[233724]: 2025-11-29 08:23:09.258 233728 DEBUG nova.storage.rbd_utils [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] rbd image 422691e1-bbe5-46bc-a828-8b7842bdbca6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:23:09 np0005539552 nova_compute[233724]: 2025-11-29 08:23:09.702 233728 INFO nova.virt.libvirt.driver [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Creating config drive at /var/lib/nova/instances/422691e1-bbe5-46bc-a828-8b7842bdbca6/disk.config#033[00m
Nov 29 03:23:09 np0005539552 nova_compute[233724]: 2025-11-29 08:23:09.709 233728 DEBUG oslo_concurrency.processutils [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/422691e1-bbe5-46bc-a828-8b7842bdbca6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsqdd6sfx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:09.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:09 np0005539552 nova_compute[233724]: 2025-11-29 08:23:09.863 233728 DEBUG oslo_concurrency.processutils [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/422691e1-bbe5-46bc-a828-8b7842bdbca6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsqdd6sfx" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.069 233728 DEBUG nova.storage.rbd_utils [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] rbd image 422691e1-bbe5-46bc-a828-8b7842bdbca6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.075 233728 DEBUG oslo_concurrency.processutils [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/422691e1-bbe5-46bc-a828-8b7842bdbca6/disk.config 422691e1-bbe5-46bc-a828-8b7842bdbca6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.124 233728 DEBUG nova.network.neutron [req-11b4f5f2-28cf-49d3-b619-a2659ca7c1c0 req-3f81dbdb-0ebc-40fa-943e-7598014ef6f8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Updated VIF entry in instance network info cache for port a93c763e-abba-4489-ad7c-fbe9bb7222b7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.125 233728 DEBUG nova.network.neutron [req-11b4f5f2-28cf-49d3-b619-a2659ca7c1c0 req-3f81dbdb-0ebc-40fa-943e-7598014ef6f8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Updating instance_info_cache with network_info: [{"id": "a93c763e-abba-4489-ad7c-fbe9bb7222b7", "address": "fa:16:3e:2c:6a:93", "network": {"id": "08ca2a6f-4677-47d6-80c2-9dfab3beaf03", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1798725435-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac24a4866299495cb28b7d3f281ec632", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa93c763e-ab", "ovs_interfaceid": "a93c763e-abba-4489-ad7c-fbe9bb7222b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.132 233728 DEBUG nova.network.neutron [-] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.145 233728 DEBUG oslo_concurrency.lockutils [req-11b4f5f2-28cf-49d3-b619-a2659ca7c1c0 req-3f81dbdb-0ebc-40fa-943e-7598014ef6f8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-422691e1-bbe5-46bc-a828-8b7842bdbca6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.151 233728 INFO nova.compute.manager [-] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Took 0.98 seconds to deallocate network for instance.#033[00m
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.209 233728 DEBUG oslo_concurrency.lockutils [None req-017d5264-8caf-45bb-a039-27be5295cb74 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.209 233728 DEBUG oslo_concurrency.lockutils [None req-017d5264-8caf-45bb-a039-27be5295cb74 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.230 233728 DEBUG oslo_concurrency.processutils [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/422691e1-bbe5-46bc-a828-8b7842bdbca6/disk.config 422691e1-bbe5-46bc-a828-8b7842bdbca6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.231 233728 INFO nova.virt.libvirt.driver [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Deleting local config drive /var/lib/nova/instances/422691e1-bbe5-46bc-a828-8b7842bdbca6/disk.config because it was imported into RBD.#033[00m
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.246 233728 DEBUG nova.compute.manager [req-aee80fca-c635-41da-b96a-3d4aa41a1c39 req-12cb7f2a-4d19-4e64-9b74-f215371df309 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Received event network-vif-deleted-f2132fc4-6960-41d7-ba5a-a4fdffd50d3a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:23:10 np0005539552 kernel: tapa93c763e-ab: entered promiscuous mode
Nov 29 03:23:10 np0005539552 systemd-udevd[286483]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:23:10 np0005539552 NetworkManager[48926]: <info>  [1764404590.2809] manager: (tapa93c763e-ab): new Tun device (/org/freedesktop/NetworkManager/Devices/251)
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.280 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:10 np0005539552 ovn_controller[133798]: 2025-11-29T08:23:10Z|00546|binding|INFO|Claiming lport a93c763e-abba-4489-ad7c-fbe9bb7222b7 for this chassis.
Nov 29 03:23:10 np0005539552 ovn_controller[133798]: 2025-11-29T08:23:10Z|00547|binding|INFO|a93c763e-abba-4489-ad7c-fbe9bb7222b7: Claiming fa:16:3e:2c:6a:93 10.100.0.11
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:10.287 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2c:6a:93 10.100.0.11'], port_security=['fa:16:3e:2c:6a:93 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '422691e1-bbe5-46bc-a828-8b7842bdbca6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08ca2a6f-4677-47d6-80c2-9dfab3beaf03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ac24a4866299495cb28b7d3f281ec632', 'neutron:revision_number': '2', 'neutron:security_group_ids': '253d2b2f-9235-45b1-961c-20b677339f7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9dae082-60d1-4cda-8fbe-bda54fb2c7c8, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=a93c763e-abba-4489-ad7c-fbe9bb7222b7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:10.288 143400 INFO neutron.agent.ovn.metadata.agent [-] Port a93c763e-abba-4489-ad7c-fbe9bb7222b7 in datapath 08ca2a6f-4677-47d6-80c2-9dfab3beaf03 bound to our chassis#033[00m
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:10.289 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 08ca2a6f-4677-47d6-80c2-9dfab3beaf03#033[00m
Nov 29 03:23:10 np0005539552 NetworkManager[48926]: <info>  [1764404590.2900] device (tapa93c763e-ab): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:23:10 np0005539552 NetworkManager[48926]: <info>  [1764404590.2914] device (tapa93c763e-ab): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:23:10 np0005539552 ovn_controller[133798]: 2025-11-29T08:23:10Z|00548|binding|INFO|Setting lport a93c763e-abba-4489-ad7c-fbe9bb7222b7 ovn-installed in OVS
Nov 29 03:23:10 np0005539552 ovn_controller[133798]: 2025-11-29T08:23:10Z|00549|binding|INFO|Setting lport a93c763e-abba-4489-ad7c-fbe9bb7222b7 up in Southbound
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:10.301 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e9ca1ba4-2110-426c-8170-76796347eb6a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.302 233728 DEBUG oslo_concurrency.processutils [None req-017d5264-8caf-45bb-a039-27be5295cb74 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:10.302 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap08ca2a6f-41 in ovnmeta-08ca2a6f-4677-47d6-80c2-9dfab3beaf03 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:10.304 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap08ca2a6f-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:10.304 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[fd4a66eb-2117-4e8e-a7bb-6996110a5781]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:10.305 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[91090cba-f7da-46c2-8966-c89774a004be]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:10.320 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[51dcd5b0-3f28-43af-9afb-a24c4e8a5303]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:10 np0005539552 systemd-machined[196379]: New machine qemu-54-instance-0000007f.
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.332 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:10 np0005539552 systemd[1]: Started Virtual Machine qemu-54-instance-0000007f.
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:10.346 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[579743aa-dfff-4039-a7ba-e877f463e3a3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:10 np0005539552 podman[286660]: 2025-11-29 08:23:10.36787425 +0000 UTC m=+0.060228801 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd)
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:10.377 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[38662801-635e-4d92-9cdf-fe6a37112365]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:10 np0005539552 NetworkManager[48926]: <info>  [1764404590.3847] manager: (tap08ca2a6f-40): new Veth device (/org/freedesktop/NetworkManager/Devices/252)
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:10.383 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[08c81602-522d-4de6-9287-3c5c93fb4747]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:10 np0005539552 podman[286662]: 2025-11-29 08:23:10.418224245 +0000 UTC m=+0.107690919 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:10.425 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[3486937c-e140-48ab-8a72-6bbe69caeb8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:10 np0005539552 podman[286663]: 2025-11-29 08:23:10.430067544 +0000 UTC m=+0.118853149 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:10.428 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[a537dd53-0993-46c1-b121-f0ab6c138f6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:10 np0005539552 NetworkManager[48926]: <info>  [1764404590.4610] device (tap08ca2a6f-40): carrier: link connected
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:10.466 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[8a6ea60f-5f9b-4138-a7ce-f14f7e01e6a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:10.481 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f53554fa-be44-455a-86c0-b878680864aa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08ca2a6f-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:4c:34'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 165], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763614, 'reachable_time': 18985, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286771, 'error': None, 'target': 'ovnmeta-08ca2a6f-4677-47d6-80c2-9dfab3beaf03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:10.495 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[bd2457e0-aef9-41df-a1ae-8febc52c69f8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed6:4c34'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763614, 'tstamp': 763614}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286772, 'error': None, 'target': 'ovnmeta-08ca2a6f-4677-47d6-80c2-9dfab3beaf03', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:10.511 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0a5231b1-7bdb-4227-9690-ff1b949be308]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08ca2a6f-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:4c:34'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 165], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763614, 'reachable_time': 18985, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 286773, 'error': None, 'target': 'ovnmeta-08ca2a6f-4677-47d6-80c2-9dfab3beaf03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:10.545 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ee9bc809-06d4-42f4-8b31-a71fae6478ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:10.603 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c9bbce29-f8ce-42d0-be20-63efae5151bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:10.604 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08ca2a6f-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:10.605 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:10.605 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08ca2a6f-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.607 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:10 np0005539552 kernel: tap08ca2a6f-40: entered promiscuous mode
Nov 29 03:23:10 np0005539552 NetworkManager[48926]: <info>  [1764404590.6099] manager: (tap08ca2a6f-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/253)
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:10.610 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap08ca2a6f-40, col_values=(('external_ids', {'iface-id': 'f3b065f7-a803-4169-841e-651867ed42b4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:10 np0005539552 ovn_controller[133798]: 2025-11-29T08:23:10Z|00550|binding|INFO|Releasing lport f3b065f7-a803-4169-841e-651867ed42b4 from this chassis (sb_readonly=0)
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.611 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.633 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:10.635 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/08ca2a6f-4677-47d6-80c2-9dfab3beaf03.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/08ca2a6f-4677-47d6-80c2-9dfab3beaf03.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:10.636 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[aa573340-1b85-4bbe-a9fb-ce81d947232d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:10.636 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-08ca2a6f-4677-47d6-80c2-9dfab3beaf03
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/08ca2a6f-4677-47d6-80c2-9dfab3beaf03.pid.haproxy
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 08ca2a6f-4677-47d6-80c2-9dfab3beaf03
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:23:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:10.638 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-08ca2a6f-4677-47d6-80c2-9dfab3beaf03', 'env', 'PROCESS_TAG=haproxy-08ca2a6f-4677-47d6-80c2-9dfab3beaf03', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/08ca2a6f-4677-47d6-80c2-9dfab3beaf03.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:23:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:10.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:10 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:23:10 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2066362800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.789 233728 DEBUG oslo_concurrency.processutils [None req-017d5264-8caf-45bb-a039-27be5295cb74 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.795 233728 DEBUG nova.compute.provider_tree [None req-017d5264-8caf-45bb-a039-27be5295cb74 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.808 233728 DEBUG nova.scheduler.client.report [None req-017d5264-8caf-45bb-a039-27be5295cb74 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.826 233728 DEBUG oslo_concurrency.lockutils [None req-017d5264-8caf-45bb-a039-27be5295cb74 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.858 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404590.8583992, 422691e1-bbe5-46bc-a828-8b7842bdbca6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.859 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] VM Started (Lifecycle Event)#033[00m
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.862 233728 INFO nova.scheduler.client.report [None req-017d5264-8caf-45bb-a039-27be5295cb74 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Deleted allocations for instance 8f2e1d97-ea5c-43f8-a05f-2f531213d241#033[00m
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.895 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.898 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404590.8607647, 422691e1-bbe5-46bc-a828-8b7842bdbca6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.898 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.924 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.928 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.943 233728 DEBUG oslo_concurrency.lockutils [None req-017d5264-8caf-45bb-a039-27be5295cb74 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "8f2e1d97-ea5c-43f8-a05f-2f531213d241" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.167s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.946 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.974 233728 DEBUG nova.compute.manager [req-b0078b15-75f0-4604-936b-e8107e63d1cd req-b39fecc6-4a88-4112-afb6-2dabaf390b75 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Received event network-vif-plugged-a93c763e-abba-4489-ad7c-fbe9bb7222b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.975 233728 DEBUG oslo_concurrency.lockutils [req-b0078b15-75f0-4604-936b-e8107e63d1cd req-b39fecc6-4a88-4112-afb6-2dabaf390b75 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "422691e1-bbe5-46bc-a828-8b7842bdbca6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.975 233728 DEBUG oslo_concurrency.lockutils [req-b0078b15-75f0-4604-936b-e8107e63d1cd req-b39fecc6-4a88-4112-afb6-2dabaf390b75 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "422691e1-bbe5-46bc-a828-8b7842bdbca6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.975 233728 DEBUG oslo_concurrency.lockutils [req-b0078b15-75f0-4604-936b-e8107e63d1cd req-b39fecc6-4a88-4112-afb6-2dabaf390b75 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "422691e1-bbe5-46bc-a828-8b7842bdbca6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.976 233728 DEBUG nova.compute.manager [req-b0078b15-75f0-4604-936b-e8107e63d1cd req-b39fecc6-4a88-4112-afb6-2dabaf390b75 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Processing event network-vif-plugged-a93c763e-abba-4489-ad7c-fbe9bb7222b7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.976 233728 DEBUG nova.compute.manager [req-b0078b15-75f0-4604-936b-e8107e63d1cd req-b39fecc6-4a88-4112-afb6-2dabaf390b75 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Received event network-vif-plugged-a93c763e-abba-4489-ad7c-fbe9bb7222b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.976 233728 DEBUG oslo_concurrency.lockutils [req-b0078b15-75f0-4604-936b-e8107e63d1cd req-b39fecc6-4a88-4112-afb6-2dabaf390b75 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "422691e1-bbe5-46bc-a828-8b7842bdbca6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.976 233728 DEBUG oslo_concurrency.lockutils [req-b0078b15-75f0-4604-936b-e8107e63d1cd req-b39fecc6-4a88-4112-afb6-2dabaf390b75 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "422691e1-bbe5-46bc-a828-8b7842bdbca6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.977 233728 DEBUG oslo_concurrency.lockutils [req-b0078b15-75f0-4604-936b-e8107e63d1cd req-b39fecc6-4a88-4112-afb6-2dabaf390b75 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "422691e1-bbe5-46bc-a828-8b7842bdbca6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.977 233728 DEBUG nova.compute.manager [req-b0078b15-75f0-4604-936b-e8107e63d1cd req-b39fecc6-4a88-4112-afb6-2dabaf390b75 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] No waiting events found dispatching network-vif-plugged-a93c763e-abba-4489-ad7c-fbe9bb7222b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.977 233728 WARNING nova.compute.manager [req-b0078b15-75f0-4604-936b-e8107e63d1cd req-b39fecc6-4a88-4112-afb6-2dabaf390b75 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Received unexpected event network-vif-plugged-a93c763e-abba-4489-ad7c-fbe9bb7222b7 for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.978 233728 DEBUG nova.compute.manager [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.981 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404590.9806082, 422691e1-bbe5-46bc-a828-8b7842bdbca6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.981 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.982 233728 DEBUG nova.virt.libvirt.driver [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.985 233728 INFO nova.virt.libvirt.driver [-] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Instance spawned successfully.#033[00m
Nov 29 03:23:10 np0005539552 nova_compute[233724]: 2025-11-29 08:23:10.985 233728 DEBUG nova.virt.libvirt.driver [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:23:11 np0005539552 nova_compute[233724]: 2025-11-29 08:23:11.012 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:23:11 np0005539552 nova_compute[233724]: 2025-11-29 08:23:11.020 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:23:11 np0005539552 nova_compute[233724]: 2025-11-29 08:23:11.023 233728 DEBUG nova.virt.libvirt.driver [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:23:11 np0005539552 nova_compute[233724]: 2025-11-29 08:23:11.024 233728 DEBUG nova.virt.libvirt.driver [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:23:11 np0005539552 nova_compute[233724]: 2025-11-29 08:23:11.024 233728 DEBUG nova.virt.libvirt.driver [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:23:11 np0005539552 nova_compute[233724]: 2025-11-29 08:23:11.025 233728 DEBUG nova.virt.libvirt.driver [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:23:11 np0005539552 nova_compute[233724]: 2025-11-29 08:23:11.025 233728 DEBUG nova.virt.libvirt.driver [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:23:11 np0005539552 nova_compute[233724]: 2025-11-29 08:23:11.025 233728 DEBUG nova.virt.libvirt.driver [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:23:11 np0005539552 podman[286850]: 2025-11-29 08:23:11.058171743 +0000 UTC m=+0.052374320 container create 84d4992938a09573e6dd2152a855882f3c9b6846ed1e2779dfe7794f371b00e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08ca2a6f-4677-47d6-80c2-9dfab3beaf03, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:23:11 np0005539552 nova_compute[233724]: 2025-11-29 08:23:11.067 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:23:11 np0005539552 systemd[1]: Started libpod-conmon-84d4992938a09573e6dd2152a855882f3c9b6846ed1e2779dfe7794f371b00e3.scope.
Nov 29 03:23:11 np0005539552 nova_compute[233724]: 2025-11-29 08:23:11.097 233728 INFO nova.compute.manager [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Took 7.54 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:23:11 np0005539552 nova_compute[233724]: 2025-11-29 08:23:11.098 233728 DEBUG nova.compute.manager [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:23:11 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:23:11 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ca223ee8bb2dfef45ccf292eeb1e3bd3424abc35cad5cc8dd0acacc4b286c47/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:23:11 np0005539552 podman[286850]: 2025-11-29 08:23:11.032156453 +0000 UTC m=+0.026359050 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:23:11 np0005539552 podman[286850]: 2025-11-29 08:23:11.138179016 +0000 UTC m=+0.132381593 container init 84d4992938a09573e6dd2152a855882f3c9b6846ed1e2779dfe7794f371b00e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08ca2a6f-4677-47d6-80c2-9dfab3beaf03, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:23:11 np0005539552 podman[286850]: 2025-11-29 08:23:11.142935674 +0000 UTC m=+0.137138241 container start 84d4992938a09573e6dd2152a855882f3c9b6846ed1e2779dfe7794f371b00e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08ca2a6f-4677-47d6-80c2-9dfab3beaf03, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:23:11 np0005539552 neutron-haproxy-ovnmeta-08ca2a6f-4677-47d6-80c2-9dfab3beaf03[286865]: [NOTICE]   (286869) : New worker (286871) forked
Nov 29 03:23:11 np0005539552 neutron-haproxy-ovnmeta-08ca2a6f-4677-47d6-80c2-9dfab3beaf03[286865]: [NOTICE]   (286869) : Loading success.
Nov 29 03:23:11 np0005539552 nova_compute[233724]: 2025-11-29 08:23:11.177 233728 INFO nova.compute.manager [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Took 8.84 seconds to build instance.#033[00m
Nov 29 03:23:11 np0005539552 nova_compute[233724]: 2025-11-29 08:23:11.192 233728 DEBUG oslo_concurrency.lockutils [None req-73fcf872-37c1-4551-a913-9cac81acaaef ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Lock "422691e1-bbe5-46bc-a828-8b7842bdbca6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.929s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:11 np0005539552 nova_compute[233724]: 2025-11-29 08:23:11.452 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:11.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:23:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:12.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:23:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:13.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:14 np0005539552 nova_compute[233724]: 2025-11-29 08:23:14.163 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:14.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:15.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:15 np0005539552 nova_compute[233724]: 2025-11-29 08:23:15.851 233728 DEBUG oslo_concurrency.lockutils [None req-400c45b5-4d90-4aba-a312-b0ddbef21342 ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Acquiring lock "422691e1-bbe5-46bc-a828-8b7842bdbca6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:15 np0005539552 nova_compute[233724]: 2025-11-29 08:23:15.852 233728 DEBUG oslo_concurrency.lockutils [None req-400c45b5-4d90-4aba-a312-b0ddbef21342 ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Lock "422691e1-bbe5-46bc-a828-8b7842bdbca6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:15 np0005539552 nova_compute[233724]: 2025-11-29 08:23:15.852 233728 DEBUG oslo_concurrency.lockutils [None req-400c45b5-4d90-4aba-a312-b0ddbef21342 ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Acquiring lock "422691e1-bbe5-46bc-a828-8b7842bdbca6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:15 np0005539552 nova_compute[233724]: 2025-11-29 08:23:15.853 233728 DEBUG oslo_concurrency.lockutils [None req-400c45b5-4d90-4aba-a312-b0ddbef21342 ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Lock "422691e1-bbe5-46bc-a828-8b7842bdbca6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:15 np0005539552 nova_compute[233724]: 2025-11-29 08:23:15.853 233728 DEBUG oslo_concurrency.lockutils [None req-400c45b5-4d90-4aba-a312-b0ddbef21342 ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Lock "422691e1-bbe5-46bc-a828-8b7842bdbca6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:15 np0005539552 nova_compute[233724]: 2025-11-29 08:23:15.854 233728 INFO nova.compute.manager [None req-400c45b5-4d90-4aba-a312-b0ddbef21342 ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Terminating instance#033[00m
Nov 29 03:23:15 np0005539552 nova_compute[233724]: 2025-11-29 08:23:15.855 233728 DEBUG nova.compute.manager [None req-400c45b5-4d90-4aba-a312-b0ddbef21342 ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:23:15 np0005539552 kernel: tapa93c763e-ab (unregistering): left promiscuous mode
Nov 29 03:23:15 np0005539552 NetworkManager[48926]: <info>  [1764404595.8900] device (tapa93c763e-ab): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:23:15 np0005539552 ovn_controller[133798]: 2025-11-29T08:23:15Z|00551|binding|INFO|Releasing lport a93c763e-abba-4489-ad7c-fbe9bb7222b7 from this chassis (sb_readonly=0)
Nov 29 03:23:15 np0005539552 ovn_controller[133798]: 2025-11-29T08:23:15Z|00552|binding|INFO|Setting lport a93c763e-abba-4489-ad7c-fbe9bb7222b7 down in Southbound
Nov 29 03:23:15 np0005539552 ovn_controller[133798]: 2025-11-29T08:23:15Z|00553|binding|INFO|Removing iface tapa93c763e-ab ovn-installed in OVS
Nov 29 03:23:15 np0005539552 nova_compute[233724]: 2025-11-29 08:23:15.905 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:15 np0005539552 nova_compute[233724]: 2025-11-29 08:23:15.907 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:15.916 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2c:6a:93 10.100.0.11'], port_security=['fa:16:3e:2c:6a:93 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '422691e1-bbe5-46bc-a828-8b7842bdbca6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08ca2a6f-4677-47d6-80c2-9dfab3beaf03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ac24a4866299495cb28b7d3f281ec632', 'neutron:revision_number': '4', 'neutron:security_group_ids': '253d2b2f-9235-45b1-961c-20b677339f7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9dae082-60d1-4cda-8fbe-bda54fb2c7c8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=a93c763e-abba-4489-ad7c-fbe9bb7222b7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:23:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:15.917 143400 INFO neutron.agent.ovn.metadata.agent [-] Port a93c763e-abba-4489-ad7c-fbe9bb7222b7 in datapath 08ca2a6f-4677-47d6-80c2-9dfab3beaf03 unbound from our chassis#033[00m
Nov 29 03:23:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:15.919 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 08ca2a6f-4677-47d6-80c2-9dfab3beaf03, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:23:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:15.919 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d4bcb21e-17f2-4823-b021-fefe3b6e5bbc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:15.920 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-08ca2a6f-4677-47d6-80c2-9dfab3beaf03 namespace which is not needed anymore#033[00m
Nov 29 03:23:15 np0005539552 nova_compute[233724]: 2025-11-29 08:23:15.938 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:15 np0005539552 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d0000007f.scope: Deactivated successfully.
Nov 29 03:23:15 np0005539552 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d0000007f.scope: Consumed 5.551s CPU time.
Nov 29 03:23:15 np0005539552 systemd-machined[196379]: Machine qemu-54-instance-0000007f terminated.
Nov 29 03:23:16 np0005539552 neutron-haproxy-ovnmeta-08ca2a6f-4677-47d6-80c2-9dfab3beaf03[286865]: [NOTICE]   (286869) : haproxy version is 2.8.14-c23fe91
Nov 29 03:23:16 np0005539552 neutron-haproxy-ovnmeta-08ca2a6f-4677-47d6-80c2-9dfab3beaf03[286865]: [NOTICE]   (286869) : path to executable is /usr/sbin/haproxy
Nov 29 03:23:16 np0005539552 neutron-haproxy-ovnmeta-08ca2a6f-4677-47d6-80c2-9dfab3beaf03[286865]: [WARNING]  (286869) : Exiting Master process...
Nov 29 03:23:16 np0005539552 neutron-haproxy-ovnmeta-08ca2a6f-4677-47d6-80c2-9dfab3beaf03[286865]: [WARNING]  (286869) : Exiting Master process...
Nov 29 03:23:16 np0005539552 neutron-haproxy-ovnmeta-08ca2a6f-4677-47d6-80c2-9dfab3beaf03[286865]: [ALERT]    (286869) : Current worker (286871) exited with code 143 (Terminated)
Nov 29 03:23:16 np0005539552 neutron-haproxy-ovnmeta-08ca2a6f-4677-47d6-80c2-9dfab3beaf03[286865]: [WARNING]  (286869) : All workers exited. Exiting... (0)
Nov 29 03:23:16 np0005539552 systemd[1]: libpod-84d4992938a09573e6dd2152a855882f3c9b6846ed1e2779dfe7794f371b00e3.scope: Deactivated successfully.
Nov 29 03:23:16 np0005539552 podman[286905]: 2025-11-29 08:23:16.053511026 +0000 UTC m=+0.041601561 container died 84d4992938a09573e6dd2152a855882f3c9b6846ed1e2779dfe7794f371b00e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08ca2a6f-4677-47d6-80c2-9dfab3beaf03, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:23:16 np0005539552 nova_compute[233724]: 2025-11-29 08:23:16.073 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:16 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-84d4992938a09573e6dd2152a855882f3c9b6846ed1e2779dfe7794f371b00e3-userdata-shm.mount: Deactivated successfully.
Nov 29 03:23:16 np0005539552 nova_compute[233724]: 2025-11-29 08:23:16.081 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:16 np0005539552 systemd[1]: var-lib-containers-storage-overlay-1ca223ee8bb2dfef45ccf292eeb1e3bd3424abc35cad5cc8dd0acacc4b286c47-merged.mount: Deactivated successfully.
Nov 29 03:23:16 np0005539552 nova_compute[233724]: 2025-11-29 08:23:16.088 233728 DEBUG nova.compute.manager [req-dab235ca-808c-4fcb-b1cb-ea617b20e608 req-d49e4647-502e-4d37-8a53-d1679e40b407 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Received event network-vif-unplugged-a93c763e-abba-4489-ad7c-fbe9bb7222b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:23:16 np0005539552 nova_compute[233724]: 2025-11-29 08:23:16.089 233728 DEBUG oslo_concurrency.lockutils [req-dab235ca-808c-4fcb-b1cb-ea617b20e608 req-d49e4647-502e-4d37-8a53-d1679e40b407 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "422691e1-bbe5-46bc-a828-8b7842bdbca6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:16 np0005539552 nova_compute[233724]: 2025-11-29 08:23:16.089 233728 DEBUG oslo_concurrency.lockutils [req-dab235ca-808c-4fcb-b1cb-ea617b20e608 req-d49e4647-502e-4d37-8a53-d1679e40b407 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "422691e1-bbe5-46bc-a828-8b7842bdbca6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:16 np0005539552 nova_compute[233724]: 2025-11-29 08:23:16.089 233728 DEBUG oslo_concurrency.lockutils [req-dab235ca-808c-4fcb-b1cb-ea617b20e608 req-d49e4647-502e-4d37-8a53-d1679e40b407 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "422691e1-bbe5-46bc-a828-8b7842bdbca6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:16 np0005539552 nova_compute[233724]: 2025-11-29 08:23:16.089 233728 DEBUG nova.compute.manager [req-dab235ca-808c-4fcb-b1cb-ea617b20e608 req-d49e4647-502e-4d37-8a53-d1679e40b407 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] No waiting events found dispatching network-vif-unplugged-a93c763e-abba-4489-ad7c-fbe9bb7222b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:23:16 np0005539552 nova_compute[233724]: 2025-11-29 08:23:16.090 233728 DEBUG nova.compute.manager [req-dab235ca-808c-4fcb-b1cb-ea617b20e608 req-d49e4647-502e-4d37-8a53-d1679e40b407 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Received event network-vif-unplugged-a93c763e-abba-4489-ad7c-fbe9bb7222b7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:23:16 np0005539552 nova_compute[233724]: 2025-11-29 08:23:16.090 233728 INFO nova.virt.libvirt.driver [-] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Instance destroyed successfully.#033[00m
Nov 29 03:23:16 np0005539552 nova_compute[233724]: 2025-11-29 08:23:16.090 233728 DEBUG nova.objects.instance [None req-400c45b5-4d90-4aba-a312-b0ddbef21342 ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Lazy-loading 'resources' on Instance uuid 422691e1-bbe5-46bc-a828-8b7842bdbca6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:23:16 np0005539552 podman[286905]: 2025-11-29 08:23:16.094426577 +0000 UTC m=+0.082517112 container cleanup 84d4992938a09573e6dd2152a855882f3c9b6846ed1e2779dfe7794f371b00e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08ca2a6f-4677-47d6-80c2-9dfab3beaf03, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:23:16 np0005539552 systemd[1]: libpod-conmon-84d4992938a09573e6dd2152a855882f3c9b6846ed1e2779dfe7794f371b00e3.scope: Deactivated successfully.
Nov 29 03:23:16 np0005539552 nova_compute[233724]: 2025-11-29 08:23:16.111 233728 DEBUG nova.virt.libvirt.vif [None req-400c45b5-4d90-4aba-a312-b0ddbef21342 ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:23:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-1723376396',display_name='tempest-ServerMetadataTestJSON-server-1723376396',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-1723376396',id=127,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:23:11Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={key1='alt1',key2='value2',key3='value3'},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ac24a4866299495cb28b7d3f281ec632',ramdisk_id='',reservation_id='r-sr4jia2k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model=
'virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataTestJSON-2113564496',owner_user_name='tempest-ServerMetadataTestJSON-2113564496-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:23:15Z,user_data=None,user_id='ac72c18a5c6f43faa861ede8af0e4363',uuid=422691e1-bbe5-46bc-a828-8b7842bdbca6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a93c763e-abba-4489-ad7c-fbe9bb7222b7", "address": "fa:16:3e:2c:6a:93", "network": {"id": "08ca2a6f-4677-47d6-80c2-9dfab3beaf03", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1798725435-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac24a4866299495cb28b7d3f281ec632", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa93c763e-ab", "ovs_interfaceid": "a93c763e-abba-4489-ad7c-fbe9bb7222b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:23:16 np0005539552 nova_compute[233724]: 2025-11-29 08:23:16.112 233728 DEBUG nova.network.os_vif_util [None req-400c45b5-4d90-4aba-a312-b0ddbef21342 ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Converting VIF {"id": "a93c763e-abba-4489-ad7c-fbe9bb7222b7", "address": "fa:16:3e:2c:6a:93", "network": {"id": "08ca2a6f-4677-47d6-80c2-9dfab3beaf03", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1798725435-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac24a4866299495cb28b7d3f281ec632", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa93c763e-ab", "ovs_interfaceid": "a93c763e-abba-4489-ad7c-fbe9bb7222b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:23:16 np0005539552 nova_compute[233724]: 2025-11-29 08:23:16.113 233728 DEBUG nova.network.os_vif_util [None req-400c45b5-4d90-4aba-a312-b0ddbef21342 ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2c:6a:93,bridge_name='br-int',has_traffic_filtering=True,id=a93c763e-abba-4489-ad7c-fbe9bb7222b7,network=Network(08ca2a6f-4677-47d6-80c2-9dfab3beaf03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa93c763e-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:23:16 np0005539552 nova_compute[233724]: 2025-11-29 08:23:16.114 233728 DEBUG os_vif [None req-400c45b5-4d90-4aba-a312-b0ddbef21342 ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:6a:93,bridge_name='br-int',has_traffic_filtering=True,id=a93c763e-abba-4489-ad7c-fbe9bb7222b7,network=Network(08ca2a6f-4677-47d6-80c2-9dfab3beaf03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa93c763e-ab') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:23:16 np0005539552 nova_compute[233724]: 2025-11-29 08:23:16.115 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:16 np0005539552 nova_compute[233724]: 2025-11-29 08:23:16.115 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa93c763e-ab, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:16 np0005539552 nova_compute[233724]: 2025-11-29 08:23:16.117 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:16 np0005539552 nova_compute[233724]: 2025-11-29 08:23:16.118 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:16 np0005539552 nova_compute[233724]: 2025-11-29 08:23:16.120 233728 INFO os_vif [None req-400c45b5-4d90-4aba-a312-b0ddbef21342 ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:6a:93,bridge_name='br-int',has_traffic_filtering=True,id=a93c763e-abba-4489-ad7c-fbe9bb7222b7,network=Network(08ca2a6f-4677-47d6-80c2-9dfab3beaf03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa93c763e-ab')#033[00m
Nov 29 03:23:16 np0005539552 podman[286943]: 2025-11-29 08:23:16.159894988 +0000 UTC m=+0.044081607 container remove 84d4992938a09573e6dd2152a855882f3c9b6846ed1e2779dfe7794f371b00e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08ca2a6f-4677-47d6-80c2-9dfab3beaf03, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:23:16 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:16.165 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ae8a5a0c-0570-4859-bf17-3da65a4f0277]: (4, ('Sat Nov 29 08:23:16 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-08ca2a6f-4677-47d6-80c2-9dfab3beaf03 (84d4992938a09573e6dd2152a855882f3c9b6846ed1e2779dfe7794f371b00e3)\n84d4992938a09573e6dd2152a855882f3c9b6846ed1e2779dfe7794f371b00e3\nSat Nov 29 08:23:16 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-08ca2a6f-4677-47d6-80c2-9dfab3beaf03 (84d4992938a09573e6dd2152a855882f3c9b6846ed1e2779dfe7794f371b00e3)\n84d4992938a09573e6dd2152a855882f3c9b6846ed1e2779dfe7794f371b00e3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:16 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:16.166 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c0d8c0a7-35d3-41c1-912f-5f6236590483]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:16 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:16.167 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08ca2a6f-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:16 np0005539552 nova_compute[233724]: 2025-11-29 08:23:16.169 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:16 np0005539552 kernel: tap08ca2a6f-40: left promiscuous mode
Nov 29 03:23:16 np0005539552 nova_compute[233724]: 2025-11-29 08:23:16.183 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:16 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:16.186 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[41be14c2-f4c6-4e7f-88dc-0e5f4a89cad8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:16 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:16.206 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2b24f308-3a01-4764-9b15-ebfb6f4eda00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:16 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:16.207 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[26524260-19cf-4839-907a-a58198d84dca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:16 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:16.222 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[10a72f39-a9b1-4328-bb52-1043468e8074]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763605, 'reachable_time': 21475, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286976, 'error': None, 'target': 'ovnmeta-08ca2a6f-4677-47d6-80c2-9dfab3beaf03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:16 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:16.224 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-08ca2a6f-4677-47d6-80c2-9dfab3beaf03 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:23:16 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:16.224 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[2368b531-0cf8-4b96-bd19-93e0df9e3a26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:16 np0005539552 systemd[1]: run-netns-ovnmeta\x2d08ca2a6f\x2d4677\x2d47d6\x2d80c2\x2d9dfab3beaf03.mount: Deactivated successfully.
Nov 29 03:23:16 np0005539552 nova_compute[233724]: 2025-11-29 08:23:16.456 233728 INFO nova.virt.libvirt.driver [None req-400c45b5-4d90-4aba-a312-b0ddbef21342 ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Deleting instance files /var/lib/nova/instances/422691e1-bbe5-46bc-a828-8b7842bdbca6_del#033[00m
Nov 29 03:23:16 np0005539552 nova_compute[233724]: 2025-11-29 08:23:16.457 233728 INFO nova.virt.libvirt.driver [None req-400c45b5-4d90-4aba-a312-b0ddbef21342 ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Deletion of /var/lib/nova/instances/422691e1-bbe5-46bc-a828-8b7842bdbca6_del complete#033[00m
Nov 29 03:23:16 np0005539552 nova_compute[233724]: 2025-11-29 08:23:16.460 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:16 np0005539552 nova_compute[233724]: 2025-11-29 08:23:16.524 233728 INFO nova.compute.manager [None req-400c45b5-4d90-4aba-a312-b0ddbef21342 ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Took 0.67 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:23:16 np0005539552 nova_compute[233724]: 2025-11-29 08:23:16.525 233728 DEBUG oslo.service.loopingcall [None req-400c45b5-4d90-4aba-a312-b0ddbef21342 ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:23:16 np0005539552 nova_compute[233724]: 2025-11-29 08:23:16.525 233728 DEBUG nova.compute.manager [-] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:23:16 np0005539552 nova_compute[233724]: 2025-11-29 08:23:16.526 233728 DEBUG nova.network.neutron [-] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:23:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:16.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:17 np0005539552 nova_compute[233724]: 2025-11-29 08:23:17.115 233728 DEBUG nova.network.neutron [-] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:23:17 np0005539552 nova_compute[233724]: 2025-11-29 08:23:17.134 233728 INFO nova.compute.manager [-] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Took 0.61 seconds to deallocate network for instance.#033[00m
Nov 29 03:23:17 np0005539552 nova_compute[233724]: 2025-11-29 08:23:17.186 233728 DEBUG oslo_concurrency.lockutils [None req-400c45b5-4d90-4aba-a312-b0ddbef21342 ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:17 np0005539552 nova_compute[233724]: 2025-11-29 08:23:17.187 233728 DEBUG oslo_concurrency.lockutils [None req-400c45b5-4d90-4aba-a312-b0ddbef21342 ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:17 np0005539552 nova_compute[233724]: 2025-11-29 08:23:17.234 233728 DEBUG nova.compute.manager [req-24f4720a-5b1f-4005-85ae-adaae3915fd5 req-2bdb0fa6-403c-4bb9-8b0b-83ab66ced861 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Received event network-vif-deleted-a93c763e-abba-4489-ad7c-fbe9bb7222b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:23:17 np0005539552 nova_compute[233724]: 2025-11-29 08:23:17.254 233728 DEBUG oslo_concurrency.processutils [None req-400c45b5-4d90-4aba-a312-b0ddbef21342 ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:23:17 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/709344351' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:23:17 np0005539552 nova_compute[233724]: 2025-11-29 08:23:17.766 233728 DEBUG oslo_concurrency.processutils [None req-400c45b5-4d90-4aba-a312-b0ddbef21342 ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:17 np0005539552 nova_compute[233724]: 2025-11-29 08:23:17.773 233728 DEBUG nova.compute.provider_tree [None req-400c45b5-4d90-4aba-a312-b0ddbef21342 ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:23:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:17.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:17 np0005539552 nova_compute[233724]: 2025-11-29 08:23:17.807 233728 DEBUG nova.scheduler.client.report [None req-400c45b5-4d90-4aba-a312-b0ddbef21342 ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:23:17 np0005539552 nova_compute[233724]: 2025-11-29 08:23:17.827 233728 DEBUG oslo_concurrency.lockutils [None req-400c45b5-4d90-4aba-a312-b0ddbef21342 ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:17 np0005539552 nova_compute[233724]: 2025-11-29 08:23:17.871 233728 INFO nova.scheduler.client.report [None req-400c45b5-4d90-4aba-a312-b0ddbef21342 ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Deleted allocations for instance 422691e1-bbe5-46bc-a828-8b7842bdbca6#033[00m
Nov 29 03:23:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:17 np0005539552 nova_compute[233724]: 2025-11-29 08:23:17.977 233728 DEBUG oslo_concurrency.lockutils [None req-400c45b5-4d90-4aba-a312-b0ddbef21342 ac72c18a5c6f43faa861ede8af0e4363 ac24a4866299495cb28b7d3f281ec632 - - default default] Lock "422691e1-bbe5-46bc-a828-8b7842bdbca6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.125s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:18 np0005539552 nova_compute[233724]: 2025-11-29 08:23:18.220 233728 DEBUG nova.compute.manager [req-ac83e36f-5b70-4a2d-b805-42ac6d4ace69 req-8f79e1c1-17cc-416b-9b27-a6141128cd20 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Received event network-vif-plugged-a93c763e-abba-4489-ad7c-fbe9bb7222b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:23:18 np0005539552 nova_compute[233724]: 2025-11-29 08:23:18.221 233728 DEBUG oslo_concurrency.lockutils [req-ac83e36f-5b70-4a2d-b805-42ac6d4ace69 req-8f79e1c1-17cc-416b-9b27-a6141128cd20 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "422691e1-bbe5-46bc-a828-8b7842bdbca6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:18 np0005539552 nova_compute[233724]: 2025-11-29 08:23:18.222 233728 DEBUG oslo_concurrency.lockutils [req-ac83e36f-5b70-4a2d-b805-42ac6d4ace69 req-8f79e1c1-17cc-416b-9b27-a6141128cd20 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "422691e1-bbe5-46bc-a828-8b7842bdbca6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:18 np0005539552 nova_compute[233724]: 2025-11-29 08:23:18.222 233728 DEBUG oslo_concurrency.lockutils [req-ac83e36f-5b70-4a2d-b805-42ac6d4ace69 req-8f79e1c1-17cc-416b-9b27-a6141128cd20 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "422691e1-bbe5-46bc-a828-8b7842bdbca6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:18 np0005539552 nova_compute[233724]: 2025-11-29 08:23:18.222 233728 DEBUG nova.compute.manager [req-ac83e36f-5b70-4a2d-b805-42ac6d4ace69 req-8f79e1c1-17cc-416b-9b27-a6141128cd20 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] No waiting events found dispatching network-vif-plugged-a93c763e-abba-4489-ad7c-fbe9bb7222b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:23:18 np0005539552 nova_compute[233724]: 2025-11-29 08:23:18.223 233728 WARNING nova.compute.manager [req-ac83e36f-5b70-4a2d-b805-42ac6d4ace69 req-8f79e1c1-17cc-416b-9b27-a6141128cd20 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Received unexpected event network-vif-plugged-a93c763e-abba-4489-ad7c-fbe9bb7222b7 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:23:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:18.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:19.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:20.630 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:20.632 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:20.633 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:20.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:21 np0005539552 nova_compute[233724]: 2025-11-29 08:23:21.119 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:21 np0005539552 nova_compute[233724]: 2025-11-29 08:23:21.455 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:21.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:22.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:23 np0005539552 nova_compute[233724]: 2025-11-29 08:23:23.015 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404588.0137584, 8f2e1d97-ea5c-43f8-a05f-2f531213d241 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:23:23 np0005539552 nova_compute[233724]: 2025-11-29 08:23:23.015 233728 INFO nova.compute.manager [-] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:23:23 np0005539552 nova_compute[233724]: 2025-11-29 08:23:23.052 233728 DEBUG nova.compute.manager [None req-0d07336d-2dc1-4dc5-9d97-47374d335f4e - - - - - -] [instance: 8f2e1d97-ea5c-43f8-a05f-2f531213d241] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:23:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:23.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:24.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:25 np0005539552 ovn_controller[133798]: 2025-11-29T08:23:25Z|00554|binding|INFO|Releasing lport 79109459-2a40-4b69-936e-ac2a2aa77985 from this chassis (sb_readonly=0)
Nov 29 03:23:25 np0005539552 nova_compute[233724]: 2025-11-29 08:23:25.679 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:25.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:26 np0005539552 nova_compute[233724]: 2025-11-29 08:23:26.121 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:26 np0005539552 nova_compute[233724]: 2025-11-29 08:23:26.458 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:26.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:27.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:28.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:29.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:30.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:31 np0005539552 nova_compute[233724]: 2025-11-29 08:23:31.084 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404596.0830472, 422691e1-bbe5-46bc-a828-8b7842bdbca6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:23:31 np0005539552 nova_compute[233724]: 2025-11-29 08:23:31.084 233728 INFO nova.compute.manager [-] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:23:31 np0005539552 nova_compute[233724]: 2025-11-29 08:23:31.107 233728 DEBUG nova.compute.manager [None req-956aada0-a73c-4570-ae30-5477c6e32372 - - - - - -] [instance: 422691e1-bbe5-46bc-a828-8b7842bdbca6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:23:31 np0005539552 nova_compute[233724]: 2025-11-29 08:23:31.124 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:31 np0005539552 nova_compute[233724]: 2025-11-29 08:23:31.460 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:31.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:31 np0005539552 nova_compute[233724]: 2025-11-29 08:23:31.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:23:31 np0005539552 nova_compute[233724]: 2025-11-29 08:23:31.956 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:31 np0005539552 nova_compute[233724]: 2025-11-29 08:23:31.956 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:31 np0005539552 nova_compute[233724]: 2025-11-29 08:23:31.957 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:31 np0005539552 nova_compute[233724]: 2025-11-29 08:23:31.957 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:23:31 np0005539552 nova_compute[233724]: 2025-11-29 08:23:31.957 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:31.981 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:23:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:31.982 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:23:31 np0005539552 nova_compute[233724]: 2025-11-29 08:23:31.993 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:32 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:23:32 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1716233137' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:23:32 np0005539552 nova_compute[233724]: 2025-11-29 08:23:32.417 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:32 np0005539552 nova_compute[233724]: 2025-11-29 08:23:32.507 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:23:32 np0005539552 nova_compute[233724]: 2025-11-29 08:23:32.508 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:23:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:32.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:32 np0005539552 nova_compute[233724]: 2025-11-29 08:23:32.715 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:23:32 np0005539552 nova_compute[233724]: 2025-11-29 08:23:32.717 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4121MB free_disk=20.830638885498047GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:23:32 np0005539552 nova_compute[233724]: 2025-11-29 08:23:32.717 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:32 np0005539552 nova_compute[233724]: 2025-11-29 08:23:32.718 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:32 np0005539552 nova_compute[233724]: 2025-11-29 08:23:32.800 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 07f760bf-6984-45e9-8e85-3d297e812553 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:23:32 np0005539552 nova_compute[233724]: 2025-11-29 08:23:32.801 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:23:32 np0005539552 nova_compute[233724]: 2025-11-29 08:23:32.801 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:23:32 np0005539552 nova_compute[233724]: 2025-11-29 08:23:32.838 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:32 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:23:33 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3912294879' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:23:33 np0005539552 nova_compute[233724]: 2025-11-29 08:23:33.360 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:33 np0005539552 nova_compute[233724]: 2025-11-29 08:23:33.366 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:23:33 np0005539552 nova_compute[233724]: 2025-11-29 08:23:33.390 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:23:33 np0005539552 nova_compute[233724]: 2025-11-29 08:23:33.416 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:23:33 np0005539552 nova_compute[233724]: 2025-11-29 08:23:33.417 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:33.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:34.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:35 np0005539552 nova_compute[233724]: 2025-11-29 08:23:35.143 233728 DEBUG nova.virt.libvirt.driver [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Creating tmpfile /var/lib/nova/instances/tmpxu9nn7ze to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m
Nov 29 03:23:35 np0005539552 nova_compute[233724]: 2025-11-29 08:23:35.279 233728 DEBUG nova.compute.manager [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=18432,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpxu9nn7ze',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m
Nov 29 03:23:35 np0005539552 nova_compute[233724]: 2025-11-29 08:23:35.417 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:23:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:35.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:35 np0005539552 nova_compute[233724]: 2025-11-29 08:23:35.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:23:35 np0005539552 nova_compute[233724]: 2025-11-29 08:23:35.923 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:23:35 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:35.984 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:36 np0005539552 nova_compute[233724]: 2025-11-29 08:23:36.075 233728 DEBUG nova.compute.manager [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=18432,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpxu9nn7ze',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='27bb49e9-1b5b-452b-89e4-21008913f536',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m
Nov 29 03:23:36 np0005539552 nova_compute[233724]: 2025-11-29 08:23:36.116 233728 DEBUG oslo_concurrency.lockutils [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Acquiring lock "refresh_cache-27bb49e9-1b5b-452b-89e4-21008913f536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:23:36 np0005539552 nova_compute[233724]: 2025-11-29 08:23:36.116 233728 DEBUG oslo_concurrency.lockutils [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Acquired lock "refresh_cache-27bb49e9-1b5b-452b-89e4-21008913f536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:23:36 np0005539552 nova_compute[233724]: 2025-11-29 08:23:36.117 233728 DEBUG nova.network.neutron [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:23:36 np0005539552 nova_compute[233724]: 2025-11-29 08:23:36.127 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:36 np0005539552 nova_compute[233724]: 2025-11-29 08:23:36.462 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:36.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:36 np0005539552 nova_compute[233724]: 2025-11-29 08:23:36.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:23:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:37.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:37 np0005539552 nova_compute[233724]: 2025-11-29 08:23:37.835 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:37 np0005539552 nova_compute[233724]: 2025-11-29 08:23:37.851 233728 DEBUG nova.network.neutron [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Updating instance_info_cache with network_info: [{"id": "1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4", "address": "fa:16:3e:32:69:95", "network": {"id": "36ca7446-a7cc-4230-a5a5-4c818b881403", "bridge": "br-int", "label": "tempest-network-smoke--1073048915", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d2b1b3c-a9", "ovs_interfaceid": "1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:23:37 np0005539552 nova_compute[233724]: 2025-11-29 08:23:37.905 233728 DEBUG oslo_concurrency.lockutils [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Releasing lock "refresh_cache-27bb49e9-1b5b-452b-89e4-21008913f536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:23:37 np0005539552 nova_compute[233724]: 2025-11-29 08:23:37.907 233728 DEBUG nova.virt.libvirt.driver [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=18432,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpxu9nn7ze',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='27bb49e9-1b5b-452b-89e4-21008913f536',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m
Nov 29 03:23:37 np0005539552 nova_compute[233724]: 2025-11-29 08:23:37.908 233728 DEBUG nova.virt.libvirt.driver [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Creating instance directory: /var/lib/nova/instances/27bb49e9-1b5b-452b-89e4-21008913f536 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m
Nov 29 03:23:37 np0005539552 nova_compute[233724]: 2025-11-29 08:23:37.908 233728 DEBUG nova.virt.libvirt.driver [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Ensure instance console log exists: /var/lib/nova/instances/27bb49e9-1b5b-452b-89e4-21008913f536/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:23:37 np0005539552 nova_compute[233724]: 2025-11-29 08:23:37.908 233728 DEBUG nova.virt.libvirt.driver [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m
Nov 29 03:23:37 np0005539552 nova_compute[233724]: 2025-11-29 08:23:37.909 233728 DEBUG nova.virt.libvirt.vif [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:22:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-265826808',display_name='tempest-TestNetworkAdvancedServerOps-server-265826808',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-265826808',id=126,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBeXTTUn8A7zpFHcqlGQ90V4zAP7o75tcD3W3n7dQTartlXTpdu7VEmK0VYRLV8PgqFSlc7bWF2UZqww8/DhGK+DK739lPxQTOjWQ1ziHudEAIfQaT52tCAw6zsO+8sntg==',key_name='tempest-TestNetworkAdvancedServerOps-1403441276',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:23:11Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-sncacrhj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:23:11Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=27bb49e9-1b5b-452b-89e4-21008913f536,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4", "address": "fa:16:3e:32:69:95", "network": {"id": "36ca7446-a7cc-4230-a5a5-4c818b881403", "bridge": "br-int", "label": "tempest-network-smoke--1073048915", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap1d2b1b3c-a9", "ovs_interfaceid": "1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:23:37 np0005539552 nova_compute[233724]: 2025-11-29 08:23:37.910 233728 DEBUG nova.network.os_vif_util [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Converting VIF {"id": "1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4", "address": "fa:16:3e:32:69:95", "network": {"id": "36ca7446-a7cc-4230-a5a5-4c818b881403", "bridge": "br-int", "label": "tempest-network-smoke--1073048915", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap1d2b1b3c-a9", "ovs_interfaceid": "1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:23:37 np0005539552 nova_compute[233724]: 2025-11-29 08:23:37.910 233728 DEBUG nova.network.os_vif_util [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:32:69:95,bridge_name='br-int',has_traffic_filtering=True,id=1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4,network=Network(36ca7446-a7cc-4230-a5a5-4c818b881403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d2b1b3c-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:23:37 np0005539552 nova_compute[233724]: 2025-11-29 08:23:37.911 233728 DEBUG os_vif [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:32:69:95,bridge_name='br-int',has_traffic_filtering=True,id=1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4,network=Network(36ca7446-a7cc-4230-a5a5-4c818b881403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d2b1b3c-a9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:23:37 np0005539552 nova_compute[233724]: 2025-11-29 08:23:37.912 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:37 np0005539552 nova_compute[233724]: 2025-11-29 08:23:37.912 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:37 np0005539552 nova_compute[233724]: 2025-11-29 08:23:37.912 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:23:37 np0005539552 nova_compute[233724]: 2025-11-29 08:23:37.915 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:37 np0005539552 nova_compute[233724]: 2025-11-29 08:23:37.915 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1d2b1b3c-a9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:37 np0005539552 nova_compute[233724]: 2025-11-29 08:23:37.916 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1d2b1b3c-a9, col_values=(('external_ids', {'iface-id': '1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:32:69:95', 'vm-uuid': '27bb49e9-1b5b-452b-89e4-21008913f536'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:37 np0005539552 nova_compute[233724]: 2025-11-29 08:23:37.917 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:37 np0005539552 NetworkManager[48926]: <info>  [1764404617.9181] manager: (tap1d2b1b3c-a9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/254)
Nov 29 03:23:37 np0005539552 nova_compute[233724]: 2025-11-29 08:23:37.921 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:23:37 np0005539552 nova_compute[233724]: 2025-11-29 08:23:37.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:23:37 np0005539552 nova_compute[233724]: 2025-11-29 08:23:37.924 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:37 np0005539552 nova_compute[233724]: 2025-11-29 08:23:37.925 233728 INFO os_vif [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:32:69:95,bridge_name='br-int',has_traffic_filtering=True,id=1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4,network=Network(36ca7446-a7cc-4230-a5a5-4c818b881403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d2b1b3c-a9')#033[00m
Nov 29 03:23:37 np0005539552 nova_compute[233724]: 2025-11-29 08:23:37.926 233728 DEBUG nova.virt.libvirt.driver [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m
Nov 29 03:23:37 np0005539552 nova_compute[233724]: 2025-11-29 08:23:37.926 233728 DEBUG nova.compute.manager [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=18432,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpxu9nn7ze',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='27bb49e9-1b5b-452b-89e4-21008913f536',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m
Nov 29 03:23:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:23:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:38.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:23:38 np0005539552 nova_compute[233724]: 2025-11-29 08:23:38.919 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:23:38 np0005539552 nova_compute[233724]: 2025-11-29 08:23:38.922 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:23:38 np0005539552 nova_compute[233724]: 2025-11-29 08:23:38.923 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:23:38 np0005539552 nova_compute[233724]: 2025-11-29 08:23:38.945 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:23:38 np0005539552 nova_compute[233724]: 2025-11-29 08:23:38.946 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:23:38 np0005539552 nova_compute[233724]: 2025-11-29 08:23:38.946 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:23:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:23:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1821378413' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:23:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:23:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1821378413' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:23:39 np0005539552 nova_compute[233724]: 2025-11-29 08:23:39.357 233728 DEBUG nova.network.neutron [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Port 1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 updated with migration profile {'migrating_to': 'compute-2.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m
Nov 29 03:23:39 np0005539552 nova_compute[233724]: 2025-11-29 08:23:39.359 233728 DEBUG nova.compute.manager [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=18432,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpxu9nn7ze',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='27bb49e9-1b5b-452b-89e4-21008913f536',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m
Nov 29 03:23:39 np0005539552 systemd[1]: Starting libvirt proxy daemon...
Nov 29 03:23:39 np0005539552 systemd[1]: Started libvirt proxy daemon.
Nov 29 03:23:39 np0005539552 kernel: tap1d2b1b3c-a9: entered promiscuous mode
Nov 29 03:23:39 np0005539552 NetworkManager[48926]: <info>  [1764404619.6325] manager: (tap1d2b1b3c-a9): new Tun device (/org/freedesktop/NetworkManager/Devices/255)
Nov 29 03:23:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:23:39Z|00555|binding|INFO|Claiming lport 1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 for this additional chassis.
Nov 29 03:23:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:23:39Z|00556|binding|INFO|1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4: Claiming fa:16:3e:32:69:95 10.100.0.8
Nov 29 03:23:39 np0005539552 nova_compute[233724]: 2025-11-29 08:23:39.633 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:23:39Z|00557|binding|INFO|Setting lport 1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 ovn-installed in OVS
Nov 29 03:23:39 np0005539552 nova_compute[233724]: 2025-11-29 08:23:39.653 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:39 np0005539552 nova_compute[233724]: 2025-11-29 08:23:39.656 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:39 np0005539552 systemd-udevd[287194]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:23:39 np0005539552 systemd-machined[196379]: New machine qemu-55-instance-0000007e.
Nov 29 03:23:39 np0005539552 NetworkManager[48926]: <info>  [1764404619.6828] device (tap1d2b1b3c-a9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:23:39 np0005539552 NetworkManager[48926]: <info>  [1764404619.6837] device (tap1d2b1b3c-a9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:23:39 np0005539552 systemd[1]: Started Virtual Machine qemu-55-instance-0000007e.
Nov 29 03:23:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:39.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:40 np0005539552 nova_compute[233724]: 2025-11-29 08:23:40.582 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404620.5819461, 27bb49e9-1b5b-452b-89e4-21008913f536 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:23:40 np0005539552 nova_compute[233724]: 2025-11-29 08:23:40.583 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] VM Started (Lifecycle Event)#033[00m
Nov 29 03:23:40 np0005539552 nova_compute[233724]: 2025-11-29 08:23:40.604 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:23:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:40.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:40 np0005539552 podman[287248]: 2025-11-29 08:23:40.983958873 +0000 UTC m=+0.064256720 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:23:40 np0005539552 podman[287247]: 2025-11-29 08:23:40.988511756 +0000 UTC m=+0.069054889 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:23:41 np0005539552 podman[287249]: 2025-11-29 08:23:41.011097643 +0000 UTC m=+0.090011832 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:23:41 np0005539552 nova_compute[233724]: 2025-11-29 08:23:41.085 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404621.0842261, 27bb49e9-1b5b-452b-89e4-21008913f536 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:23:41 np0005539552 nova_compute[233724]: 2025-11-29 08:23:41.085 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:23:41 np0005539552 nova_compute[233724]: 2025-11-29 08:23:41.103 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:23:41 np0005539552 nova_compute[233724]: 2025-11-29 08:23:41.107 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:23:41 np0005539552 nova_compute[233724]: 2025-11-29 08:23:41.128 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] During the sync_power process the instance has moved from host compute-1.ctlplane.example.com to host compute-2.ctlplane.example.com#033[00m
Nov 29 03:23:41 np0005539552 nova_compute[233724]: 2025-11-29 08:23:41.470 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:41.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:42 np0005539552 ovn_controller[133798]: 2025-11-29T08:23:42Z|00558|binding|INFO|Claiming lport 1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 for this chassis.
Nov 29 03:23:42 np0005539552 ovn_controller[133798]: 2025-11-29T08:23:42Z|00559|binding|INFO|1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4: Claiming fa:16:3e:32:69:95 10.100.0.8
Nov 29 03:23:42 np0005539552 ovn_controller[133798]: 2025-11-29T08:23:42Z|00560|binding|INFO|Setting lport 1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 up in Southbound
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:42.663 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:69:95 10.100.0.8'], port_security=['fa:16:3e:32:69:95 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '27bb49e9-1b5b-452b-89e4-21008913f536', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-36ca7446-a7cc-4230-a5a5-4c818b881403', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4145ed6cde61439ebcc12fae2609b724', 'neutron:revision_number': '11', 'neutron:security_group_ids': '29b2a720-5603-492e-b672-0c12c21d24cc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.240'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0ee4ab43-1884-4c9a-b7dc-aa4995f42087, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:42.664 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 in datapath 36ca7446-a7cc-4230-a5a5-4c818b881403 bound to our chassis#033[00m
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:42.666 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 36ca7446-a7cc-4230-a5a5-4c818b881403#033[00m
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:42.676 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[cbc9dfeb-a886-4808-84cb-53aa267f4fa9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:42.677 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap36ca7446-a1 in ovnmeta-36ca7446-a7cc-4230-a5a5-4c818b881403 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:42.679 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap36ca7446-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:42.679 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b2aef95f-32f6-4f03-a25a-cbf63245d423]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:42.680 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[623ac6ba-00ae-4560-b5c4-991afe366870]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:42.694 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[b268d822-437c-449e-9a5c-88088fdc0a65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:42.719 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3b91dc8f-5315-4347-9b57-09cfdf7ec601]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:42.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:42.752 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[4e24d527-5903-416a-8539-faa2d9f88646]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:42 np0005539552 NetworkManager[48926]: <info>  [1764404622.7595] manager: (tap36ca7446-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/256)
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:42.759 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[7c9b0658-efbc-43c7-bb54-284a724dffae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:42.791 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[2a95db0e-f25d-4ee8-b2e9-b76419c07e57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:42 np0005539552 systemd-udevd[287318]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:42.794 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[4425c08e-7005-42a9-a722-bf9c544d688e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:42 np0005539552 NetworkManager[48926]: <info>  [1764404622.8240] device (tap36ca7446-a0): carrier: link connected
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:42.829 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[2e0f90e0-1a6d-4961-9a47-8ae954e8bf8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:42.848 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[1ea0abaa-c260-4f71-8067-d4873f2d72db]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap36ca7446-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4f:32:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 168], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 766851, 'reachable_time': 42574, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287337, 'error': None, 'target': 'ovnmeta-36ca7446-a7cc-4230-a5a5-4c818b881403', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:42.864 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6e0a55ff-24c6-4e8a-b9b3-02b4c1e0c1c6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4f:3266'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 766851, 'tstamp': 766851}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287338, 'error': None, 'target': 'ovnmeta-36ca7446-a7cc-4230-a5a5-4c818b881403', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:42.880 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ad2a5b95-e081-495b-ba93-b307ef4812f5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap36ca7446-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4f:32:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 168], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 766851, 'reachable_time': 42574, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 287339, 'error': None, 'target': 'ovnmeta-36ca7446-a7cc-4230-a5a5-4c818b881403', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:42.911 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[74509d9c-aa30-42dc-806a-0eaaa7d3770d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:42 np0005539552 nova_compute[233724]: 2025-11-29 08:23:42.918 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:42.965 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6dbddd3d-72c6-4afd-be79-857ce41a8dfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:42.966 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap36ca7446-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:42.966 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:42.966 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap36ca7446-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:42 np0005539552 nova_compute[233724]: 2025-11-29 08:23:42.968 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:42 np0005539552 NetworkManager[48926]: <info>  [1764404622.9689] manager: (tap36ca7446-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/257)
Nov 29 03:23:42 np0005539552 kernel: tap36ca7446-a0: entered promiscuous mode
Nov 29 03:23:42 np0005539552 nova_compute[233724]: 2025-11-29 08:23:42.970 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:42.971 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap36ca7446-a0, col_values=(('external_ids', {'iface-id': 'f0d0f672-ab06-43e3-bb90-8353b7804006'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:42 np0005539552 nova_compute[233724]: 2025-11-29 08:23:42.972 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:42 np0005539552 ovn_controller[133798]: 2025-11-29T08:23:42Z|00561|binding|INFO|Releasing lport f0d0f672-ab06-43e3-bb90-8353b7804006 from this chassis (sb_readonly=0)
Nov 29 03:23:42 np0005539552 nova_compute[233724]: 2025-11-29 08:23:42.987 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:42.988 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/36ca7446-a7cc-4230-a5a5-4c818b881403.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/36ca7446-a7cc-4230-a5a5-4c818b881403.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:42.989 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[1af6172b-3a2c-42da-b8d0-2a3ddbaa6666]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:42.990 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-36ca7446-a7cc-4230-a5a5-4c818b881403
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/36ca7446-a7cc-4230-a5a5-4c818b881403.pid.haproxy
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 36ca7446-a7cc-4230-a5a5-4c818b881403
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:23:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:42.991 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-36ca7446-a7cc-4230-a5a5-4c818b881403', 'env', 'PROCESS_TAG=haproxy-36ca7446-a7cc-4230-a5a5-4c818b881403', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/36ca7446-a7cc-4230-a5a5-4c818b881403.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:23:43 np0005539552 podman[287372]: 2025-11-29 08:23:43.383837344 +0000 UTC m=+0.064527717 container create 6bced42f5f157953c00fdcab83544f0acc15e7ab1f4f012f2b12e035f9d49ee1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-36ca7446-a7cc-4230-a5a5-4c818b881403, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:23:43 np0005539552 systemd[1]: Started libpod-conmon-6bced42f5f157953c00fdcab83544f0acc15e7ab1f4f012f2b12e035f9d49ee1.scope.
Nov 29 03:23:43 np0005539552 podman[287372]: 2025-11-29 08:23:43.347228229 +0000 UTC m=+0.027918712 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:23:43 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:23:43 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6450b03394cea53b5afdfaaf0e9a4e58ed42a4287d933bc23a9fcfad86fd9f05/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:23:43 np0005539552 podman[287372]: 2025-11-29 08:23:43.480498245 +0000 UTC m=+0.161188618 container init 6bced42f5f157953c00fdcab83544f0acc15e7ab1f4f012f2b12e035f9d49ee1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-36ca7446-a7cc-4230-a5a5-4c818b881403, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Nov 29 03:23:43 np0005539552 podman[287372]: 2025-11-29 08:23:43.486739533 +0000 UTC m=+0.167429896 container start 6bced42f5f157953c00fdcab83544f0acc15e7ab1f4f012f2b12e035f9d49ee1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-36ca7446-a7cc-4230-a5a5-4c818b881403, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:23:43 np0005539552 neutron-haproxy-ovnmeta-36ca7446-a7cc-4230-a5a5-4c818b881403[287387]: [NOTICE]   (287391) : New worker (287393) forked
Nov 29 03:23:43 np0005539552 neutron-haproxy-ovnmeta-36ca7446-a7cc-4230-a5a5-4c818b881403[287387]: [NOTICE]   (287391) : Loading success.
Nov 29 03:23:43 np0005539552 nova_compute[233724]: 2025-11-29 08:23:43.553 233728 INFO nova.compute.manager [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Post operation of migration started#033[00m
Nov 29 03:23:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:43.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:43 np0005539552 nova_compute[233724]: 2025-11-29 08:23:43.928 233728 DEBUG oslo_concurrency.lockutils [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Acquiring lock "refresh_cache-27bb49e9-1b5b-452b-89e4-21008913f536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:23:43 np0005539552 nova_compute[233724]: 2025-11-29 08:23:43.928 233728 DEBUG oslo_concurrency.lockutils [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Acquired lock "refresh_cache-27bb49e9-1b5b-452b-89e4-21008913f536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:23:43 np0005539552 nova_compute[233724]: 2025-11-29 08:23:43.929 233728 DEBUG nova.network.neutron [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:23:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:23:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:44.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:23:45 np0005539552 nova_compute[233724]: 2025-11-29 08:23:45.615 233728 DEBUG nova.network.neutron [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Updating instance_info_cache with network_info: [{"id": "1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4", "address": "fa:16:3e:32:69:95", "network": {"id": "36ca7446-a7cc-4230-a5a5-4c818b881403", "bridge": "br-int", "label": "tempest-network-smoke--1073048915", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d2b1b3c-a9", "ovs_interfaceid": "1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:23:45 np0005539552 nova_compute[233724]: 2025-11-29 08:23:45.657 233728 DEBUG oslo_concurrency.lockutils [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Releasing lock "refresh_cache-27bb49e9-1b5b-452b-89e4-21008913f536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:23:45 np0005539552 nova_compute[233724]: 2025-11-29 08:23:45.681 233728 DEBUG oslo_concurrency.lockutils [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:45 np0005539552 nova_compute[233724]: 2025-11-29 08:23:45.681 233728 DEBUG oslo_concurrency.lockutils [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:45 np0005539552 nova_compute[233724]: 2025-11-29 08:23:45.682 233728 DEBUG oslo_concurrency.lockutils [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:45 np0005539552 nova_compute[233724]: 2025-11-29 08:23:45.687 233728 INFO nova.virt.libvirt.driver [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m
Nov 29 03:23:45 np0005539552 virtqemud[233098]: Domain id=55 name='instance-0000007e' uuid=27bb49e9-1b5b-452b-89e4-21008913f536 is tainted: custom-monitor
Nov 29 03:23:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:45.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:46 np0005539552 nova_compute[233724]: 2025-11-29 08:23:46.474 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:46 np0005539552 nova_compute[233724]: 2025-11-29 08:23:46.697 233728 INFO nova.virt.libvirt.driver [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Sending announce-self command to QEMU monitor. Attempt 2 of 3#033[00m
Nov 29 03:23:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:46.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:47 np0005539552 ovn_controller[133798]: 2025-11-29T08:23:47Z|00562|binding|INFO|Releasing lport 79109459-2a40-4b69-936e-ac2a2aa77985 from this chassis (sb_readonly=0)
Nov 29 03:23:47 np0005539552 ovn_controller[133798]: 2025-11-29T08:23:47Z|00563|binding|INFO|Releasing lport f0d0f672-ab06-43e3-bb90-8353b7804006 from this chassis (sb_readonly=0)
Nov 29 03:23:47 np0005539552 nova_compute[233724]: 2025-11-29 08:23:47.486 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:47 np0005539552 nova_compute[233724]: 2025-11-29 08:23:47.703 233728 INFO nova.virt.libvirt.driver [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Sending announce-self command to QEMU monitor. Attempt 3 of 3#033[00m
Nov 29 03:23:47 np0005539552 nova_compute[233724]: 2025-11-29 08:23:47.707 233728 DEBUG nova.compute.manager [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:23:47 np0005539552 nova_compute[233724]: 2025-11-29 08:23:47.723 233728 DEBUG nova.objects.instance [None req-5f5b4cb1-1177-4efe-a61b-123a044cb5b7 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 29 03:23:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:47.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:47 np0005539552 nova_compute[233724]: 2025-11-29 08:23:47.920 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:23:48 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2905225700' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:23:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:48.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:49.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:23:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:50.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:23:51 np0005539552 nova_compute[233724]: 2025-11-29 08:23:51.476 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:51.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:52 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #97. Immutable memtables: 0.
Nov 29 03:23:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:23:52.328246) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:23:52 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 97
Nov 29 03:23:52 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404632328441, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 1584, "num_deletes": 260, "total_data_size": 3322357, "memory_usage": 3365680, "flush_reason": "Manual Compaction"}
Nov 29 03:23:52 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #98: started
Nov 29 03:23:52 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404632440724, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 98, "file_size": 2179333, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49471, "largest_seqno": 51049, "table_properties": {"data_size": 2172780, "index_size": 3624, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14730, "raw_average_key_size": 20, "raw_value_size": 2159196, "raw_average_value_size": 2949, "num_data_blocks": 160, "num_entries": 732, "num_filter_entries": 732, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404512, "oldest_key_time": 1764404512, "file_creation_time": 1764404632, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:23:52 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 112533 microseconds, and 5372 cpu microseconds.
Nov 29 03:23:52 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:23:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:23:52.440789) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #98: 2179333 bytes OK
Nov 29 03:23:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:23:52.440807) [db/memtable_list.cc:519] [default] Level-0 commit table #98 started
Nov 29 03:23:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:23:52.443755) [db/memtable_list.cc:722] [default] Level-0 commit table #98: memtable #1 done
Nov 29 03:23:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:23:52.443807) EVENT_LOG_v1 {"time_micros": 1764404632443796, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:23:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:23:52.443834) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:23:52 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 3314957, prev total WAL file size 3314957, number of live WAL files 2.
Nov 29 03:23:52 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000094.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:23:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:23:52.444756) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353036' seq:72057594037927935, type:22 .. '6C6F676D0031373630' seq:0, type:0; will stop at (end)
Nov 29 03:23:52 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:23:52 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [98(2128KB)], [96(9941KB)]
Nov 29 03:23:52 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404632444811, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [98], "files_L6": [96], "score": -1, "input_data_size": 12359881, "oldest_snapshot_seqno": -1}
Nov 29 03:23:52 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #99: 8219 keys, 12216345 bytes, temperature: kUnknown
Nov 29 03:23:52 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404632532554, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 99, "file_size": 12216345, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12161971, "index_size": 32694, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20613, "raw_key_size": 213558, "raw_average_key_size": 25, "raw_value_size": 12015812, "raw_average_value_size": 1461, "num_data_blocks": 1281, "num_entries": 8219, "num_filter_entries": 8219, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764404632, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 99, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:23:52 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:23:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:23:52.532818) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 12216345 bytes
Nov 29 03:23:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:23:52.534210) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 140.7 rd, 139.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 9.7 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(11.3) write-amplify(5.6) OK, records in: 8755, records dropped: 536 output_compression: NoCompression
Nov 29 03:23:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:23:52.534224) EVENT_LOG_v1 {"time_micros": 1764404632534217, "job": 60, "event": "compaction_finished", "compaction_time_micros": 87850, "compaction_time_cpu_micros": 30085, "output_level": 6, "num_output_files": 1, "total_output_size": 12216345, "num_input_records": 8755, "num_output_records": 8219, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:23:52 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:23:52 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404632534654, "job": 60, "event": "table_file_deletion", "file_number": 98}
Nov 29 03:23:52 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000096.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:23:52 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404632536345, "job": 60, "event": "table_file_deletion", "file_number": 96}
Nov 29 03:23:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:23:52.444693) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:23:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:23:52.536400) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:23:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:23:52.536406) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:23:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:23:52.536407) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:23:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:23:52.536409) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:23:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:23:52.536410) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:23:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:52.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:52 np0005539552 nova_compute[233724]: 2025-11-29 08:23:52.922 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:53.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:54.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:55 np0005539552 nova_compute[233724]: 2025-11-29 08:23:55.190 233728 INFO nova.compute.manager [None req-27647c49-d6c7-43d7-8a3b-2eaece2426e5 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Get console output#033[00m
Nov 29 03:23:55 np0005539552 nova_compute[233724]: 2025-11-29 08:23:55.194 279702 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:23:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:55.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:56 np0005539552 nova_compute[233724]: 2025-11-29 08:23:56.478 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:56.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:57.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:57 np0005539552 nova_compute[233724]: 2025-11-29 08:23:57.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:23:57 np0005539552 nova_compute[233724]: 2025-11-29 08:23:57.924 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:57 np0005539552 nova_compute[233724]: 2025-11-29 08:23:57.925 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:57 np0005539552 nova_compute[233724]: 2025-11-29 08:23:57.925 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:57 np0005539552 nova_compute[233724]: 2025-11-29 08:23:57.926 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:57 np0005539552 nova_compute[233724]: 2025-11-29 08:23:57.926 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:57 np0005539552 nova_compute[233724]: 2025-11-29 08:23:57.926 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:57 np0005539552 nova_compute[233724]: 2025-11-29 08:23:57.928 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:57 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:58 np0005539552 nova_compute[233724]: 2025-11-29 08:23:58.176 233728 DEBUG nova.virt.libvirt.imagecache [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100#033[00m
Nov 29 03:23:58 np0005539552 nova_compute[233724]: 2025-11-29 08:23:58.273 233728 DEBUG nova.virt.libvirt.imagecache [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Nov 29 03:23:58 np0005539552 nova_compute[233724]: 2025-11-29 08:23:58.273 233728 DEBUG nova.virt.libvirt.imagecache [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Image id 4873db8c-b414-4e95-acd9-77caabebe722 yields fingerprint f62ef5f82502d01c82174408aec7f3ac942e2488 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Nov 29 03:23:58 np0005539552 nova_compute[233724]: 2025-11-29 08:23:58.274 233728 INFO nova.virt.libvirt.imagecache [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] image 4873db8c-b414-4e95-acd9-77caabebe722 at (/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488): checking#033[00m
Nov 29 03:23:58 np0005539552 nova_compute[233724]: 2025-11-29 08:23:58.274 233728 DEBUG nova.virt.libvirt.imagecache [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] image 4873db8c-b414-4e95-acd9-77caabebe722 at (/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279#033[00m
Nov 29 03:23:58 np0005539552 nova_compute[233724]: 2025-11-29 08:23:58.276 233728 DEBUG nova.virt.libvirt.imagecache [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Nov 29 03:23:58 np0005539552 nova_compute[233724]: 2025-11-29 08:23:58.276 233728 DEBUG nova.virt.libvirt.imagecache [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] 07f760bf-6984-45e9-8e85-3d297e812553 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Nov 29 03:23:58 np0005539552 nova_compute[233724]: 2025-11-29 08:23:58.276 233728 DEBUG nova.virt.libvirt.imagecache [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] 27bb49e9-1b5b-452b-89e4-21008913f536 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Nov 29 03:23:58 np0005539552 nova_compute[233724]: 2025-11-29 08:23:58.276 233728 WARNING nova.virt.libvirt.imagecache [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/d8f87e6814a39f74799532642e7be3e998da5505#033[00m
Nov 29 03:23:58 np0005539552 nova_compute[233724]: 2025-11-29 08:23:58.277 233728 WARNING nova.virt.libvirt.imagecache [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de#033[00m
Nov 29 03:23:58 np0005539552 nova_compute[233724]: 2025-11-29 08:23:58.277 233728 INFO nova.virt.libvirt.imagecache [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Active base files: /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488#033[00m
Nov 29 03:23:58 np0005539552 nova_compute[233724]: 2025-11-29 08:23:58.277 233728 INFO nova.virt.libvirt.imagecache [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Removable base files: /var/lib/nova/instances/_base/d8f87e6814a39f74799532642e7be3e998da5505 /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de#033[00m
Nov 29 03:23:58 np0005539552 nova_compute[233724]: 2025-11-29 08:23:58.277 233728 INFO nova.virt.libvirt.imagecache [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/d8f87e6814a39f74799532642e7be3e998da5505#033[00m
Nov 29 03:23:58 np0005539552 nova_compute[233724]: 2025-11-29 08:23:58.278 233728 INFO nova.virt.libvirt.imagecache [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de#033[00m
Nov 29 03:23:58 np0005539552 nova_compute[233724]: 2025-11-29 08:23:58.278 233728 DEBUG nova.virt.libvirt.imagecache [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Nov 29 03:23:58 np0005539552 nova_compute[233724]: 2025-11-29 08:23:58.278 233728 DEBUG nova.virt.libvirt.imagecache [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Nov 29 03:23:58 np0005539552 nova_compute[233724]: 2025-11-29 08:23:58.278 233728 DEBUG nova.virt.libvirt.imagecache [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Nov 29 03:23:58 np0005539552 nova_compute[233724]: 2025-11-29 08:23:58.278 233728 INFO nova.virt.libvirt.imagecache [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66#033[00m
Nov 29 03:23:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:58.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:59 np0005539552 nova_compute[233724]: 2025-11-29 08:23:59.173 233728 DEBUG oslo_concurrency.lockutils [None req-3d1ddf68-c55a-4d3c-8930-83f88746bff9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "27bb49e9-1b5b-452b-89e4-21008913f536" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:59 np0005539552 nova_compute[233724]: 2025-11-29 08:23:59.173 233728 DEBUG oslo_concurrency.lockutils [None req-3d1ddf68-c55a-4d3c-8930-83f88746bff9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "27bb49e9-1b5b-452b-89e4-21008913f536" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:59 np0005539552 nova_compute[233724]: 2025-11-29 08:23:59.174 233728 DEBUG oslo_concurrency.lockutils [None req-3d1ddf68-c55a-4d3c-8930-83f88746bff9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "27bb49e9-1b5b-452b-89e4-21008913f536-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:59 np0005539552 nova_compute[233724]: 2025-11-29 08:23:59.174 233728 DEBUG oslo_concurrency.lockutils [None req-3d1ddf68-c55a-4d3c-8930-83f88746bff9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "27bb49e9-1b5b-452b-89e4-21008913f536-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:59 np0005539552 nova_compute[233724]: 2025-11-29 08:23:59.175 233728 DEBUG oslo_concurrency.lockutils [None req-3d1ddf68-c55a-4d3c-8930-83f88746bff9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "27bb49e9-1b5b-452b-89e4-21008913f536-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:59 np0005539552 nova_compute[233724]: 2025-11-29 08:23:59.177 233728 INFO nova.compute.manager [None req-3d1ddf68-c55a-4d3c-8930-83f88746bff9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Terminating instance#033[00m
Nov 29 03:23:59 np0005539552 nova_compute[233724]: 2025-11-29 08:23:59.179 233728 DEBUG nova.compute.manager [None req-3d1ddf68-c55a-4d3c-8930-83f88746bff9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:23:59 np0005539552 kernel: tap1d2b1b3c-a9 (unregistering): left promiscuous mode
Nov 29 03:23:59 np0005539552 NetworkManager[48926]: <info>  [1764404639.5050] device (tap1d2b1b3c-a9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:23:59 np0005539552 nova_compute[233724]: 2025-11-29 08:23:59.516 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:59 np0005539552 ovn_controller[133798]: 2025-11-29T08:23:59Z|00564|binding|INFO|Releasing lport 1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 from this chassis (sb_readonly=0)
Nov 29 03:23:59 np0005539552 ovn_controller[133798]: 2025-11-29T08:23:59Z|00565|binding|INFO|Setting lport 1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 down in Southbound
Nov 29 03:23:59 np0005539552 ovn_controller[133798]: 2025-11-29T08:23:59Z|00566|binding|INFO|Removing iface tap1d2b1b3c-a9 ovn-installed in OVS
Nov 29 03:23:59 np0005539552 nova_compute[233724]: 2025-11-29 08:23:59.519 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:59.525 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:69:95 10.100.0.8'], port_security=['fa:16:3e:32:69:95 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '27bb49e9-1b5b-452b-89e4-21008913f536', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-36ca7446-a7cc-4230-a5a5-4c818b881403', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4145ed6cde61439ebcc12fae2609b724', 'neutron:revision_number': '13', 'neutron:security_group_ids': '29b2a720-5603-492e-b672-0c12c21d24cc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0ee4ab43-1884-4c9a-b7dc-aa4995f42087, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:23:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:59.526 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 in datapath 36ca7446-a7cc-4230-a5a5-4c818b881403 unbound from our chassis#033[00m
Nov 29 03:23:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:59.527 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 36ca7446-a7cc-4230-a5a5-4c818b881403, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:23:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:59.529 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a372e124-c81b-41cb-bc37-9e814fc9417a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:59.529 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-36ca7446-a7cc-4230-a5a5-4c818b881403 namespace which is not needed anymore#033[00m
Nov 29 03:23:59 np0005539552 nova_compute[233724]: 2025-11-29 08:23:59.543 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:59 np0005539552 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d0000007e.scope: Deactivated successfully.
Nov 29 03:23:59 np0005539552 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d0000007e.scope: Consumed 2.137s CPU time.
Nov 29 03:23:59 np0005539552 systemd-machined[196379]: Machine qemu-55-instance-0000007e terminated.
Nov 29 03:23:59 np0005539552 kernel: tap1d2b1b3c-a9: entered promiscuous mode
Nov 29 03:23:59 np0005539552 kernel: tap1d2b1b3c-a9 (unregistering): left promiscuous mode
Nov 29 03:23:59 np0005539552 nova_compute[233724]: 2025-11-29 08:23:59.610 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:59 np0005539552 nova_compute[233724]: 2025-11-29 08:23:59.621 233728 INFO nova.virt.libvirt.driver [-] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Instance destroyed successfully.#033[00m
Nov 29 03:23:59 np0005539552 nova_compute[233724]: 2025-11-29 08:23:59.621 233728 DEBUG nova.objects.instance [None req-3d1ddf68-c55a-4d3c-8930-83f88746bff9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'resources' on Instance uuid 27bb49e9-1b5b-452b-89e4-21008913f536 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:23:59 np0005539552 nova_compute[233724]: 2025-11-29 08:23:59.642 233728 DEBUG nova.virt.libvirt.vif [None req-3d1ddf68-c55a-4d3c-8930-83f88746bff9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:22:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-265826808',display_name='tempest-TestNetworkAdvancedServerOps-server-265826808',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-265826808',id=126,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBeXTTUn8A7zpFHcqlGQ90V4zAP7o75tcD3W3n7dQTartlXTpdu7VEmK0VYRLV8PgqFSlc7bWF2UZqww8/DhGK+DK739lPxQTOjWQ1ziHudEAIfQaT52tCAw6zsO+8sntg==',key_name='tempest-TestNetworkAdvancedServerOps-1403441276',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:23:11Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-sncacrhj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:23:47Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=27bb49e9-1b5b-452b-89e4-21008913f536,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4", "address": "fa:16:3e:32:69:95", "network": {"id": "36ca7446-a7cc-4230-a5a5-4c818b881403", "bridge": "br-int", "label": "tempest-network-smoke--1073048915", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d2b1b3c-a9", "ovs_interfaceid": "1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:23:59 np0005539552 nova_compute[233724]: 2025-11-29 08:23:59.642 233728 DEBUG nova.network.os_vif_util [None req-3d1ddf68-c55a-4d3c-8930-83f88746bff9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converting VIF {"id": "1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4", "address": "fa:16:3e:32:69:95", "network": {"id": "36ca7446-a7cc-4230-a5a5-4c818b881403", "bridge": "br-int", "label": "tempest-network-smoke--1073048915", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d2b1b3c-a9", "ovs_interfaceid": "1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:23:59 np0005539552 nova_compute[233724]: 2025-11-29 08:23:59.643 233728 DEBUG nova.network.os_vif_util [None req-3d1ddf68-c55a-4d3c-8930-83f88746bff9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:32:69:95,bridge_name='br-int',has_traffic_filtering=True,id=1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4,network=Network(36ca7446-a7cc-4230-a5a5-4c818b881403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d2b1b3c-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:23:59 np0005539552 nova_compute[233724]: 2025-11-29 08:23:59.643 233728 DEBUG os_vif [None req-3d1ddf68-c55a-4d3c-8930-83f88746bff9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:32:69:95,bridge_name='br-int',has_traffic_filtering=True,id=1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4,network=Network(36ca7446-a7cc-4230-a5a5-4c818b881403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d2b1b3c-a9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:23:59 np0005539552 nova_compute[233724]: 2025-11-29 08:23:59.645 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:59 np0005539552 nova_compute[233724]: 2025-11-29 08:23:59.645 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d2b1b3c-a9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:59 np0005539552 nova_compute[233724]: 2025-11-29 08:23:59.647 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:59 np0005539552 nova_compute[233724]: 2025-11-29 08:23:59.649 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:23:59 np0005539552 nova_compute[233724]: 2025-11-29 08:23:59.652 233728 INFO os_vif [None req-3d1ddf68-c55a-4d3c-8930-83f88746bff9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:32:69:95,bridge_name='br-int',has_traffic_filtering=True,id=1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4,network=Network(36ca7446-a7cc-4230-a5a5-4c818b881403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d2b1b3c-a9')#033[00m
Nov 29 03:23:59 np0005539552 neutron-haproxy-ovnmeta-36ca7446-a7cc-4230-a5a5-4c818b881403[287387]: [NOTICE]   (287391) : haproxy version is 2.8.14-c23fe91
Nov 29 03:23:59 np0005539552 neutron-haproxy-ovnmeta-36ca7446-a7cc-4230-a5a5-4c818b881403[287387]: [NOTICE]   (287391) : path to executable is /usr/sbin/haproxy
Nov 29 03:23:59 np0005539552 neutron-haproxy-ovnmeta-36ca7446-a7cc-4230-a5a5-4c818b881403[287387]: [WARNING]  (287391) : Exiting Master process...
Nov 29 03:23:59 np0005539552 neutron-haproxy-ovnmeta-36ca7446-a7cc-4230-a5a5-4c818b881403[287387]: [WARNING]  (287391) : Exiting Master process...
Nov 29 03:23:59 np0005539552 neutron-haproxy-ovnmeta-36ca7446-a7cc-4230-a5a5-4c818b881403[287387]: [ALERT]    (287391) : Current worker (287393) exited with code 143 (Terminated)
Nov 29 03:23:59 np0005539552 neutron-haproxy-ovnmeta-36ca7446-a7cc-4230-a5a5-4c818b881403[287387]: [WARNING]  (287391) : All workers exited. Exiting... (0)
Nov 29 03:23:59 np0005539552 systemd[1]: libpod-6bced42f5f157953c00fdcab83544f0acc15e7ab1f4f012f2b12e035f9d49ee1.scope: Deactivated successfully.
Nov 29 03:23:59 np0005539552 podman[287502]: 2025-11-29 08:23:59.686973246 +0000 UTC m=+0.045548216 container died 6bced42f5f157953c00fdcab83544f0acc15e7ab1f4f012f2b12e035f9d49ee1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-36ca7446-a7cc-4230-a5a5-4c818b881403, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 03:23:59 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6bced42f5f157953c00fdcab83544f0acc15e7ab1f4f012f2b12e035f9d49ee1-userdata-shm.mount: Deactivated successfully.
Nov 29 03:23:59 np0005539552 systemd[1]: var-lib-containers-storage-overlay-6450b03394cea53b5afdfaaf0e9a4e58ed42a4287d933bc23a9fcfad86fd9f05-merged.mount: Deactivated successfully.
Nov 29 03:23:59 np0005539552 podman[287502]: 2025-11-29 08:23:59.731483934 +0000 UTC m=+0.090058914 container cleanup 6bced42f5f157953c00fdcab83544f0acc15e7ab1f4f012f2b12e035f9d49ee1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-36ca7446-a7cc-4230-a5a5-4c818b881403, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:23:59 np0005539552 systemd[1]: libpod-conmon-6bced42f5f157953c00fdcab83544f0acc15e7ab1f4f012f2b12e035f9d49ee1.scope: Deactivated successfully.
Nov 29 03:23:59 np0005539552 podman[287549]: 2025-11-29 08:23:59.805355211 +0000 UTC m=+0.050546041 container remove 6bced42f5f157953c00fdcab83544f0acc15e7ab1f4f012f2b12e035f9d49ee1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-36ca7446-a7cc-4230-a5a5-4c818b881403, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:23:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:59.812 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[9fcc855b-85d3-48cb-ab43-6310aacb47f5]: (4, ('Sat Nov 29 08:23:59 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-36ca7446-a7cc-4230-a5a5-4c818b881403 (6bced42f5f157953c00fdcab83544f0acc15e7ab1f4f012f2b12e035f9d49ee1)\n6bced42f5f157953c00fdcab83544f0acc15e7ab1f4f012f2b12e035f9d49ee1\nSat Nov 29 08:23:59 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-36ca7446-a7cc-4230-a5a5-4c818b881403 (6bced42f5f157953c00fdcab83544f0acc15e7ab1f4f012f2b12e035f9d49ee1)\n6bced42f5f157953c00fdcab83544f0acc15e7ab1f4f012f2b12e035f9d49ee1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:59.814 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[9b1b2c87-b8d5-4f79-9d8a-9ba35c91a155]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:59.815 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap36ca7446-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:59 np0005539552 nova_compute[233724]: 2025-11-29 08:23:59.817 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:59 np0005539552 kernel: tap36ca7446-a0: left promiscuous mode
Nov 29 03:23:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:23:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:59.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:59 np0005539552 nova_compute[233724]: 2025-11-29 08:23:59.831 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:59 np0005539552 nova_compute[233724]: 2025-11-29 08:23:59.832 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:59.833 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[5a8453f4-e526-4d1e-8a0e-4c156e783b28]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:59 np0005539552 nova_compute[233724]: 2025-11-29 08:23:59.848 233728 DEBUG nova.compute.manager [req-a7955e4b-e9bb-420a-8b32-3c1cc5622631 req-a512e9b3-47c5-4974-99ab-dd396a704993 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Received event network-vif-unplugged-1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:23:59 np0005539552 nova_compute[233724]: 2025-11-29 08:23:59.849 233728 DEBUG oslo_concurrency.lockutils [req-a7955e4b-e9bb-420a-8b32-3c1cc5622631 req-a512e9b3-47c5-4974-99ab-dd396a704993 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "27bb49e9-1b5b-452b-89e4-21008913f536-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:59 np0005539552 nova_compute[233724]: 2025-11-29 08:23:59.849 233728 DEBUG oslo_concurrency.lockutils [req-a7955e4b-e9bb-420a-8b32-3c1cc5622631 req-a512e9b3-47c5-4974-99ab-dd396a704993 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "27bb49e9-1b5b-452b-89e4-21008913f536-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:59 np0005539552 nova_compute[233724]: 2025-11-29 08:23:59.849 233728 DEBUG oslo_concurrency.lockutils [req-a7955e4b-e9bb-420a-8b32-3c1cc5622631 req-a512e9b3-47c5-4974-99ab-dd396a704993 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "27bb49e9-1b5b-452b-89e4-21008913f536-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:59 np0005539552 nova_compute[233724]: 2025-11-29 08:23:59.849 233728 DEBUG nova.compute.manager [req-a7955e4b-e9bb-420a-8b32-3c1cc5622631 req-a512e9b3-47c5-4974-99ab-dd396a704993 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] No waiting events found dispatching network-vif-unplugged-1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:23:59 np0005539552 nova_compute[233724]: 2025-11-29 08:23:59.849 233728 DEBUG nova.compute.manager [req-a7955e4b-e9bb-420a-8b32-3c1cc5622631 req-a512e9b3-47c5-4974-99ab-dd396a704993 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Received event network-vif-unplugged-1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:23:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:59.857 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[03fa64c6-a627-45fa-ad69-d69d62f335b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:59.858 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[1dd85a25-c475-46c1-a4c7-c98c57d9bfb3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:59.872 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b3b13bfc-5461-4e92-956a-3983c505986a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 766843, 'reachable_time': 37189, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287563, 'error': None, 'target': 'ovnmeta-36ca7446-a7cc-4230-a5a5-4c818b881403', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:59.876 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-36ca7446-a7cc-4230-a5a5-4c818b881403 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:23:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:23:59.877 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[acda3606-e0b3-4e30-9544-a30425e22f1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:59 np0005539552 systemd[1]: run-netns-ovnmeta\x2d36ca7446\x2da7cc\x2d4230\x2da5a5\x2d4c818b881403.mount: Deactivated successfully.
Nov 29 03:24:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:00.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:01 np0005539552 nova_compute[233724]: 2025-11-29 08:24:01.156 233728 INFO nova.virt.libvirt.driver [None req-3d1ddf68-c55a-4d3c-8930-83f88746bff9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Deleting instance files /var/lib/nova/instances/27bb49e9-1b5b-452b-89e4-21008913f536_del#033[00m
Nov 29 03:24:01 np0005539552 nova_compute[233724]: 2025-11-29 08:24:01.157 233728 INFO nova.virt.libvirt.driver [None req-3d1ddf68-c55a-4d3c-8930-83f88746bff9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Deletion of /var/lib/nova/instances/27bb49e9-1b5b-452b-89e4-21008913f536_del complete#033[00m
Nov 29 03:24:01 np0005539552 nova_compute[233724]: 2025-11-29 08:24:01.480 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:01 np0005539552 nova_compute[233724]: 2025-11-29 08:24:01.804 233728 DEBUG nova.compute.manager [req-95916cc5-afc5-46f1-bda8-c1538bbd95e0 req-bcad207c-fa6c-414b-871c-ef865257ac04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Received event network-changed-1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:24:01 np0005539552 nova_compute[233724]: 2025-11-29 08:24:01.805 233728 DEBUG nova.compute.manager [req-95916cc5-afc5-46f1-bda8-c1538bbd95e0 req-bcad207c-fa6c-414b-871c-ef865257ac04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Refreshing instance network info cache due to event network-changed-1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:24:01 np0005539552 nova_compute[233724]: 2025-11-29 08:24:01.805 233728 DEBUG oslo_concurrency.lockutils [req-95916cc5-afc5-46f1-bda8-c1538bbd95e0 req-bcad207c-fa6c-414b-871c-ef865257ac04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-27bb49e9-1b5b-452b-89e4-21008913f536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:24:01 np0005539552 nova_compute[233724]: 2025-11-29 08:24:01.805 233728 DEBUG oslo_concurrency.lockutils [req-95916cc5-afc5-46f1-bda8-c1538bbd95e0 req-bcad207c-fa6c-414b-871c-ef865257ac04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-27bb49e9-1b5b-452b-89e4-21008913f536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:24:01 np0005539552 nova_compute[233724]: 2025-11-29 08:24:01.805 233728 DEBUG nova.network.neutron [req-95916cc5-afc5-46f1-bda8-c1538bbd95e0 req-bcad207c-fa6c-414b-871c-ef865257ac04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Refreshing network info cache for port 1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:24:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:01.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:01 np0005539552 nova_compute[233724]: 2025-11-29 08:24:01.893 233728 INFO nova.compute.manager [None req-3d1ddf68-c55a-4d3c-8930-83f88746bff9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Took 2.71 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:24:01 np0005539552 nova_compute[233724]: 2025-11-29 08:24:01.894 233728 DEBUG oslo.service.loopingcall [None req-3d1ddf68-c55a-4d3c-8930-83f88746bff9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:24:01 np0005539552 nova_compute[233724]: 2025-11-29 08:24:01.894 233728 DEBUG nova.compute.manager [-] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:24:01 np0005539552 nova_compute[233724]: 2025-11-29 08:24:01.894 233728 DEBUG nova.network.neutron [-] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:24:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:02.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e322 e322: 3 total, 3 up, 3 in
Nov 29 03:24:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e322 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:03 np0005539552 nova_compute[233724]: 2025-11-29 08:24:03.255 233728 DEBUG nova.compute.manager [req-20afdecb-b377-4bfb-a80e-cc5d247aa470 req-fe509a96-9cc4-4884-82b2-20274f6efc92 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Received event network-vif-plugged-1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:24:03 np0005539552 nova_compute[233724]: 2025-11-29 08:24:03.256 233728 DEBUG oslo_concurrency.lockutils [req-20afdecb-b377-4bfb-a80e-cc5d247aa470 req-fe509a96-9cc4-4884-82b2-20274f6efc92 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "27bb49e9-1b5b-452b-89e4-21008913f536-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:03 np0005539552 nova_compute[233724]: 2025-11-29 08:24:03.257 233728 DEBUG oslo_concurrency.lockutils [req-20afdecb-b377-4bfb-a80e-cc5d247aa470 req-fe509a96-9cc4-4884-82b2-20274f6efc92 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "27bb49e9-1b5b-452b-89e4-21008913f536-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:03 np0005539552 nova_compute[233724]: 2025-11-29 08:24:03.257 233728 DEBUG oslo_concurrency.lockutils [req-20afdecb-b377-4bfb-a80e-cc5d247aa470 req-fe509a96-9cc4-4884-82b2-20274f6efc92 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "27bb49e9-1b5b-452b-89e4-21008913f536-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:03 np0005539552 nova_compute[233724]: 2025-11-29 08:24:03.257 233728 DEBUG nova.compute.manager [req-20afdecb-b377-4bfb-a80e-cc5d247aa470 req-fe509a96-9cc4-4884-82b2-20274f6efc92 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] No waiting events found dispatching network-vif-plugged-1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:24:03 np0005539552 nova_compute[233724]: 2025-11-29 08:24:03.258 233728 WARNING nova.compute.manager [req-20afdecb-b377-4bfb-a80e-cc5d247aa470 req-fe509a96-9cc4-4884-82b2-20274f6efc92 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Received unexpected event network-vif-plugged-1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:24:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:03.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:04 np0005539552 nova_compute[233724]: 2025-11-29 08:24:04.650 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:04.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:05 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:24:05 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:24:05 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:24:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:05.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:06 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e323 e323: 3 total, 3 up, 3 in
Nov 29 03:24:06 np0005539552 nova_compute[233724]: 2025-11-29 08:24:06.472 233728 DEBUG nova.network.neutron [-] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:24:06 np0005539552 nova_compute[233724]: 2025-11-29 08:24:06.481 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:06 np0005539552 nova_compute[233724]: 2025-11-29 08:24:06.490 233728 INFO nova.compute.manager [-] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Took 4.60 seconds to deallocate network for instance.#033[00m
Nov 29 03:24:06 np0005539552 nova_compute[233724]: 2025-11-29 08:24:06.529 233728 DEBUG nova.compute.manager [req-8119f4f2-dea3-4fdb-a8d3-6710ad2b0947 req-76eb3b01-5dc0-4ba3-9106-ee57ac25320f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Received event network-vif-deleted-1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:24:06 np0005539552 nova_compute[233724]: 2025-11-29 08:24:06.546 233728 DEBUG oslo_concurrency.lockutils [None req-3d1ddf68-c55a-4d3c-8930-83f88746bff9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:06 np0005539552 nova_compute[233724]: 2025-11-29 08:24:06.546 233728 DEBUG oslo_concurrency.lockutils [None req-3d1ddf68-c55a-4d3c-8930-83f88746bff9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:06 np0005539552 nova_compute[233724]: 2025-11-29 08:24:06.551 233728 DEBUG oslo_concurrency.lockutils [None req-3d1ddf68-c55a-4d3c-8930-83f88746bff9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:06 np0005539552 nova_compute[233724]: 2025-11-29 08:24:06.586 233728 INFO nova.scheduler.client.report [None req-3d1ddf68-c55a-4d3c-8930-83f88746bff9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Deleted allocations for instance 27bb49e9-1b5b-452b-89e4-21008913f536#033[00m
Nov 29 03:24:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:06.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:07.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:07 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e324 e324: 3 total, 3 up, 3 in
Nov 29 03:24:08 np0005539552 nova_compute[233724]: 2025-11-29 08:24:08.581 233728 DEBUG nova.network.neutron [req-95916cc5-afc5-46f1-bda8-c1538bbd95e0 req-bcad207c-fa6c-414b-871c-ef865257ac04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Updated VIF entry in instance network info cache for port 1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:24:08 np0005539552 nova_compute[233724]: 2025-11-29 08:24:08.582 233728 DEBUG nova.network.neutron [req-95916cc5-afc5-46f1-bda8-c1538bbd95e0 req-bcad207c-fa6c-414b-871c-ef865257ac04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Updating instance_info_cache with network_info: [{"id": "1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4", "address": "fa:16:3e:32:69:95", "network": {"id": "36ca7446-a7cc-4230-a5a5-4c818b881403", "bridge": "br-int", "label": "tempest-network-smoke--1073048915", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d2b1b3c-a9", "ovs_interfaceid": "1d2b1b3c-a9e1-472c-a52c-1f54e5de6ac4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:24:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:08.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:08 np0005539552 nova_compute[233724]: 2025-11-29 08:24:08.936 233728 DEBUG oslo_concurrency.lockutils [req-95916cc5-afc5-46f1-bda8-c1538bbd95e0 req-bcad207c-fa6c-414b-871c-ef865257ac04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-27bb49e9-1b5b-452b-89e4-21008913f536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:24:09 np0005539552 nova_compute[233724]: 2025-11-29 08:24:09.010 233728 DEBUG oslo_concurrency.lockutils [None req-3d1ddf68-c55a-4d3c-8930-83f88746bff9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "27bb49e9-1b5b-452b-89e4-21008913f536" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:09 np0005539552 nova_compute[233724]: 2025-11-29 08:24:09.653 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:09.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:10.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:11 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:24:11 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:24:11 np0005539552 nova_compute[233724]: 2025-11-29 08:24:11.484 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:11.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:11 np0005539552 podman[287752]: 2025-11-29 08:24:11.965812404 +0000 UTC m=+0.058669549 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 03:24:11 np0005539552 podman[287753]: 2025-11-29 08:24:11.975612978 +0000 UTC m=+0.062314838 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:24:12 np0005539552 podman[287754]: 2025-11-29 08:24:12.039714322 +0000 UTC m=+0.124245203 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:24:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:12.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:13 np0005539552 nova_compute[233724]: 2025-11-29 08:24:13.269 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:13.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:14 np0005539552 nova_compute[233724]: 2025-11-29 08:24:14.059 233728 DEBUG nova.compute.manager [req-7c61771f-6740-46eb-a7c5-689c91e21b36 req-c989c264-481e-489a-bf3b-fd6022a83328 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Received event network-changed-5511e511-2310-4811-8313-3722fcf49758 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:24:14 np0005539552 nova_compute[233724]: 2025-11-29 08:24:14.059 233728 DEBUG nova.compute.manager [req-7c61771f-6740-46eb-a7c5-689c91e21b36 req-c989c264-481e-489a-bf3b-fd6022a83328 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Refreshing instance network info cache due to event network-changed-5511e511-2310-4811-8313-3722fcf49758. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:24:14 np0005539552 nova_compute[233724]: 2025-11-29 08:24:14.059 233728 DEBUG oslo_concurrency.lockutils [req-7c61771f-6740-46eb-a7c5-689c91e21b36 req-c989c264-481e-489a-bf3b-fd6022a83328 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-07f760bf-6984-45e9-8e85-3d297e812553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:24:14 np0005539552 nova_compute[233724]: 2025-11-29 08:24:14.060 233728 DEBUG oslo_concurrency.lockutils [req-7c61771f-6740-46eb-a7c5-689c91e21b36 req-c989c264-481e-489a-bf3b-fd6022a83328 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-07f760bf-6984-45e9-8e85-3d297e812553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:24:14 np0005539552 nova_compute[233724]: 2025-11-29 08:24:14.060 233728 DEBUG nova.network.neutron [req-7c61771f-6740-46eb-a7c5-689c91e21b36 req-c989c264-481e-489a-bf3b-fd6022a83328 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Refreshing network info cache for port 5511e511-2310-4811-8313-3722fcf49758 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:24:14 np0005539552 nova_compute[233724]: 2025-11-29 08:24:14.618 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404639.6183803, 27bb49e9-1b5b-452b-89e4-21008913f536 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:24:14 np0005539552 nova_compute[233724]: 2025-11-29 08:24:14.619 233728 INFO nova.compute.manager [-] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:24:14 np0005539552 nova_compute[233724]: 2025-11-29 08:24:14.641 233728 DEBUG nova.compute.manager [None req-d7b55f13-6042-4ce1-840c-374d67128d3a - - - - - -] [instance: 27bb49e9-1b5b-452b-89e4-21008913f536] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:24:14 np0005539552 nova_compute[233724]: 2025-11-29 08:24:14.656 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:14.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:15.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:16 np0005539552 nova_compute[233724]: 2025-11-29 08:24:16.133 233728 DEBUG nova.network.neutron [req-7c61771f-6740-46eb-a7c5-689c91e21b36 req-c989c264-481e-489a-bf3b-fd6022a83328 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Updated VIF entry in instance network info cache for port 5511e511-2310-4811-8313-3722fcf49758. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:24:16 np0005539552 nova_compute[233724]: 2025-11-29 08:24:16.134 233728 DEBUG nova.network.neutron [req-7c61771f-6740-46eb-a7c5-689c91e21b36 req-c989c264-481e-489a-bf3b-fd6022a83328 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Updating instance_info_cache with network_info: [{"id": "5511e511-2310-4811-8313-3722fcf49758", "address": "fa:16:3e:f3:b9:47", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5511e511-23", "ovs_interfaceid": "5511e511-2310-4811-8313-3722fcf49758", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:24:16 np0005539552 nova_compute[233724]: 2025-11-29 08:24:16.158 233728 DEBUG oslo_concurrency.lockutils [req-7c61771f-6740-46eb-a7c5-689c91e21b36 req-c989c264-481e-489a-bf3b-fd6022a83328 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-07f760bf-6984-45e9-8e85-3d297e812553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:24:16 np0005539552 nova_compute[233724]: 2025-11-29 08:24:16.487 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:16.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e325 e325: 3 total, 3 up, 3 in
Nov 29 03:24:17 np0005539552 ovn_controller[133798]: 2025-11-29T08:24:17Z|00567|binding|INFO|Releasing lport 79109459-2a40-4b69-936e-ac2a2aa77985 from this chassis (sb_readonly=0)
Nov 29 03:24:17 np0005539552 nova_compute[233724]: 2025-11-29 08:24:17.477 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:17.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:18.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:19 np0005539552 nova_compute[233724]: 2025-11-29 08:24:19.160 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:19 np0005539552 nova_compute[233724]: 2025-11-29 08:24:19.658 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:19 np0005539552 nova_compute[233724]: 2025-11-29 08:24:19.777 233728 DEBUG nova.compute.manager [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Nov 29 03:24:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:19.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:19 np0005539552 nova_compute[233724]: 2025-11-29 08:24:19.897 233728 DEBUG oslo_concurrency.lockutils [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:19 np0005539552 nova_compute[233724]: 2025-11-29 08:24:19.898 233728 DEBUG oslo_concurrency.lockutils [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:19 np0005539552 nova_compute[233724]: 2025-11-29 08:24:19.934 233728 DEBUG nova.objects.instance [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lazy-loading 'pci_requests' on Instance uuid 2e7e8742-c504-412d-82cf-4087bd745c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:24:19 np0005539552 nova_compute[233724]: 2025-11-29 08:24:19.962 233728 DEBUG nova.virt.hardware [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:24:19 np0005539552 nova_compute[233724]: 2025-11-29 08:24:19.963 233728 INFO nova.compute.claims [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:24:19 np0005539552 nova_compute[233724]: 2025-11-29 08:24:19.963 233728 DEBUG nova.objects.instance [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lazy-loading 'resources' on Instance uuid 2e7e8742-c504-412d-82cf-4087bd745c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:24:19 np0005539552 nova_compute[233724]: 2025-11-29 08:24:19.986 233728 DEBUG nova.objects.instance [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2e7e8742-c504-412d-82cf-4087bd745c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:24:20 np0005539552 nova_compute[233724]: 2025-11-29 08:24:20.039 233728 INFO nova.compute.resource_tracker [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Updating resource usage from migration c6911945-fb45-4d46-8522-b58cdea15a0b#033[00m
Nov 29 03:24:20 np0005539552 nova_compute[233724]: 2025-11-29 08:24:20.040 233728 DEBUG nova.compute.resource_tracker [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Starting to track incoming migration c6911945-fb45-4d46-8522-b58cdea15a0b with flavor 709b029f-0458-4e40-a6ee-e1e02b48c06c _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Nov 29 03:24:20 np0005539552 nova_compute[233724]: 2025-11-29 08:24:20.137 233728 DEBUG oslo_concurrency.processutils [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:20 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:24:20 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3003832119' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:24:20 np0005539552 nova_compute[233724]: 2025-11-29 08:24:20.584 233728 DEBUG oslo_concurrency.processutils [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:20 np0005539552 nova_compute[233724]: 2025-11-29 08:24:20.594 233728 DEBUG nova.compute.provider_tree [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:24:20 np0005539552 nova_compute[233724]: 2025-11-29 08:24:20.620 233728 DEBUG nova.scheduler.client.report [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:24:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:20.631 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:20.631 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:20.632 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:20 np0005539552 nova_compute[233724]: 2025-11-29 08:24:20.649 233728 DEBUG oslo_concurrency.lockutils [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:20 np0005539552 nova_compute[233724]: 2025-11-29 08:24:20.649 233728 INFO nova.compute.manager [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Migrating#033[00m
Nov 29 03:24:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:20.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:21 np0005539552 nova_compute[233724]: 2025-11-29 08:24:21.489 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:21.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:22.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:23.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:24 np0005539552 systemd-logind[788]: New session 68 of user nova.
Nov 29 03:24:24 np0005539552 systemd[1]: Created slice User Slice of UID 42436.
Nov 29 03:24:24 np0005539552 systemd[1]: Starting User Runtime Directory /run/user/42436...
Nov 29 03:24:24 np0005539552 systemd[1]: Finished User Runtime Directory /run/user/42436.
Nov 29 03:24:24 np0005539552 systemd[1]: Starting User Manager for UID 42436...
Nov 29 03:24:24 np0005539552 systemd[287897]: Queued start job for default target Main User Target.
Nov 29 03:24:24 np0005539552 systemd[287897]: Created slice User Application Slice.
Nov 29 03:24:24 np0005539552 systemd[287897]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 03:24:24 np0005539552 systemd[287897]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 03:24:24 np0005539552 systemd[287897]: Reached target Paths.
Nov 29 03:24:24 np0005539552 systemd[287897]: Reached target Timers.
Nov 29 03:24:24 np0005539552 systemd[287897]: Starting D-Bus User Message Bus Socket...
Nov 29 03:24:24 np0005539552 systemd[287897]: Starting Create User's Volatile Files and Directories...
Nov 29 03:24:24 np0005539552 systemd[287897]: Finished Create User's Volatile Files and Directories.
Nov 29 03:24:24 np0005539552 systemd[287897]: Listening on D-Bus User Message Bus Socket.
Nov 29 03:24:24 np0005539552 systemd[287897]: Reached target Sockets.
Nov 29 03:24:24 np0005539552 systemd[287897]: Reached target Basic System.
Nov 29 03:24:24 np0005539552 systemd[287897]: Reached target Main User Target.
Nov 29 03:24:24 np0005539552 systemd[287897]: Startup finished in 137ms.
Nov 29 03:24:24 np0005539552 systemd[1]: Started User Manager for UID 42436.
Nov 29 03:24:24 np0005539552 systemd[1]: Started Session 68 of User nova.
Nov 29 03:24:24 np0005539552 systemd[1]: session-68.scope: Deactivated successfully.
Nov 29 03:24:24 np0005539552 systemd-logind[788]: Session 68 logged out. Waiting for processes to exit.
Nov 29 03:24:24 np0005539552 systemd-logind[788]: Removed session 68.
Nov 29 03:24:24 np0005539552 systemd-logind[788]: New session 70 of user nova.
Nov 29 03:24:24 np0005539552 systemd[1]: Started Session 70 of User nova.
Nov 29 03:24:24 np0005539552 systemd[1]: session-70.scope: Deactivated successfully.
Nov 29 03:24:24 np0005539552 systemd-logind[788]: Session 70 logged out. Waiting for processes to exit.
Nov 29 03:24:24 np0005539552 systemd-logind[788]: Removed session 70.
Nov 29 03:24:24 np0005539552 nova_compute[233724]: 2025-11-29 08:24:24.662 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:24.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:25 np0005539552 nova_compute[233724]: 2025-11-29 08:24:25.108 233728 DEBUG oslo_concurrency.lockutils [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Acquiring lock "4cfb409d-50a4-4256-83f3-6aef192ab489" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:25 np0005539552 nova_compute[233724]: 2025-11-29 08:24:25.109 233728 DEBUG oslo_concurrency.lockutils [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Lock "4cfb409d-50a4-4256-83f3-6aef192ab489" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:25 np0005539552 nova_compute[233724]: 2025-11-29 08:24:25.143 233728 DEBUG nova.compute.manager [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:24:25 np0005539552 nova_compute[233724]: 2025-11-29 08:24:25.228 233728 DEBUG oslo_concurrency.lockutils [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:25 np0005539552 nova_compute[233724]: 2025-11-29 08:24:25.229 233728 DEBUG oslo_concurrency.lockutils [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:25 np0005539552 nova_compute[233724]: 2025-11-29 08:24:25.235 233728 DEBUG nova.virt.hardware [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:24:25 np0005539552 nova_compute[233724]: 2025-11-29 08:24:25.235 233728 INFO nova.compute.claims [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:24:25 np0005539552 nova_compute[233724]: 2025-11-29 08:24:25.441 233728 DEBUG oslo_concurrency.processutils [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:25.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:25 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:24:25 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1246107364' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:24:25 np0005539552 nova_compute[233724]: 2025-11-29 08:24:25.909 233728 DEBUG oslo_concurrency.processutils [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:25 np0005539552 nova_compute[233724]: 2025-11-29 08:24:25.917 233728 DEBUG nova.compute.provider_tree [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:24:25 np0005539552 nova_compute[233724]: 2025-11-29 08:24:25.940 233728 DEBUG nova.scheduler.client.report [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:24:25 np0005539552 nova_compute[233724]: 2025-11-29 08:24:25.967 233728 DEBUG oslo_concurrency.lockutils [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:25 np0005539552 nova_compute[233724]: 2025-11-29 08:24:25.968 233728 DEBUG nova.compute.manager [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:24:26 np0005539552 nova_compute[233724]: 2025-11-29 08:24:26.022 233728 DEBUG nova.compute.manager [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:24:26 np0005539552 nova_compute[233724]: 2025-11-29 08:24:26.022 233728 DEBUG nova.network.neutron [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:24:26 np0005539552 nova_compute[233724]: 2025-11-29 08:24:26.046 233728 INFO nova.virt.libvirt.driver [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:24:26 np0005539552 nova_compute[233724]: 2025-11-29 08:24:26.069 233728 DEBUG nova.compute.manager [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:24:26 np0005539552 nova_compute[233724]: 2025-11-29 08:24:26.182 233728 DEBUG nova.compute.manager [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:24:26 np0005539552 nova_compute[233724]: 2025-11-29 08:24:26.184 233728 DEBUG nova.virt.libvirt.driver [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:24:26 np0005539552 nova_compute[233724]: 2025-11-29 08:24:26.184 233728 INFO nova.virt.libvirt.driver [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Creating image(s)#033[00m
Nov 29 03:24:26 np0005539552 nova_compute[233724]: 2025-11-29 08:24:26.209 233728 DEBUG nova.storage.rbd_utils [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] rbd image 4cfb409d-50a4-4256-83f3-6aef192ab489_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:24:26 np0005539552 nova_compute[233724]: 2025-11-29 08:24:26.240 233728 DEBUG nova.storage.rbd_utils [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] rbd image 4cfb409d-50a4-4256-83f3-6aef192ab489_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:24:26 np0005539552 nova_compute[233724]: 2025-11-29 08:24:26.270 233728 DEBUG nova.storage.rbd_utils [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] rbd image 4cfb409d-50a4-4256-83f3-6aef192ab489_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:24:26 np0005539552 nova_compute[233724]: 2025-11-29 08:24:26.275 233728 DEBUG oslo_concurrency.processutils [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:26 np0005539552 nova_compute[233724]: 2025-11-29 08:24:26.367 233728 DEBUG oslo_concurrency.processutils [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:26 np0005539552 nova_compute[233724]: 2025-11-29 08:24:26.369 233728 DEBUG oslo_concurrency.lockutils [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:26 np0005539552 nova_compute[233724]: 2025-11-29 08:24:26.370 233728 DEBUG oslo_concurrency.lockutils [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:26 np0005539552 nova_compute[233724]: 2025-11-29 08:24:26.371 233728 DEBUG oslo_concurrency.lockutils [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:26 np0005539552 nova_compute[233724]: 2025-11-29 08:24:26.415 233728 DEBUG nova.storage.rbd_utils [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] rbd image 4cfb409d-50a4-4256-83f3-6aef192ab489_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:24:26 np0005539552 nova_compute[233724]: 2025-11-29 08:24:26.420 233728 DEBUG oslo_concurrency.processutils [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 4cfb409d-50a4-4256-83f3-6aef192ab489_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:26 np0005539552 nova_compute[233724]: 2025-11-29 08:24:26.484 233728 DEBUG nova.policy [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd49b9d546af249fc80268c1bdd5884f6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '252fdae97c7e4dec97ac9a23d72747cc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:24:26 np0005539552 nova_compute[233724]: 2025-11-29 08:24:26.492 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:26.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:27 np0005539552 nova_compute[233724]: 2025-11-29 08:24:27.385 233728 DEBUG nova.network.neutron [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Successfully created port: d090c296-9e92-42d2-9d11-5d9eb058d748 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:24:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:27.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:28.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:28 np0005539552 nova_compute[233724]: 2025-11-29 08:24:28.866 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:28 np0005539552 nova_compute[233724]: 2025-11-29 08:24:28.904 233728 DEBUG nova.network.neutron [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Successfully updated port: d090c296-9e92-42d2-9d11-5d9eb058d748 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:24:28 np0005539552 nova_compute[233724]: 2025-11-29 08:24:28.929 233728 DEBUG oslo_concurrency.lockutils [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Acquiring lock "refresh_cache-4cfb409d-50a4-4256-83f3-6aef192ab489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:24:28 np0005539552 nova_compute[233724]: 2025-11-29 08:24:28.929 233728 DEBUG oslo_concurrency.lockutils [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Acquired lock "refresh_cache-4cfb409d-50a4-4256-83f3-6aef192ab489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:24:28 np0005539552 nova_compute[233724]: 2025-11-29 08:24:28.929 233728 DEBUG nova.network.neutron [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:24:28 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 03:24:29 np0005539552 nova_compute[233724]: 2025-11-29 08:24:29.212 233728 DEBUG nova.network.neutron [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:24:29 np0005539552 nova_compute[233724]: 2025-11-29 08:24:29.665 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:29.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:30 np0005539552 nova_compute[233724]: 2025-11-29 08:24:30.188 233728 DEBUG nova.compute.manager [req-76896e00-102c-4938-8642-752b4193ed3c req-085fd102-d737-4017-87ed-99d25c1c542b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Received event network-changed-d090c296-9e92-42d2-9d11-5d9eb058d748 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:24:30 np0005539552 nova_compute[233724]: 2025-11-29 08:24:30.189 233728 DEBUG nova.compute.manager [req-76896e00-102c-4938-8642-752b4193ed3c req-085fd102-d737-4017-87ed-99d25c1c542b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Refreshing instance network info cache due to event network-changed-d090c296-9e92-42d2-9d11-5d9eb058d748. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:24:30 np0005539552 nova_compute[233724]: 2025-11-29 08:24:30.189 233728 DEBUG oslo_concurrency.lockutils [req-76896e00-102c-4938-8642-752b4193ed3c req-085fd102-d737-4017-87ed-99d25c1c542b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-4cfb409d-50a4-4256-83f3-6aef192ab489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:24:30 np0005539552 nova_compute[233724]: 2025-11-29 08:24:30.444 233728 DEBUG nova.network.neutron [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Updating instance_info_cache with network_info: [{"id": "d090c296-9e92-42d2-9d11-5d9eb058d748", "address": "fa:16:3e:0b:2f:8f", "network": {"id": "c2c49bb3-4207-40c0-a26d-1cede90f8b00", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1464114744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "252fdae97c7e4dec97ac9a23d72747cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd090c296-9e", "ovs_interfaceid": "d090c296-9e92-42d2-9d11-5d9eb058d748", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:24:30 np0005539552 nova_compute[233724]: 2025-11-29 08:24:30.482 233728 DEBUG oslo_concurrency.lockutils [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Releasing lock "refresh_cache-4cfb409d-50a4-4256-83f3-6aef192ab489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:24:30 np0005539552 nova_compute[233724]: 2025-11-29 08:24:30.483 233728 DEBUG nova.compute.manager [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Instance network_info: |[{"id": "d090c296-9e92-42d2-9d11-5d9eb058d748", "address": "fa:16:3e:0b:2f:8f", "network": {"id": "c2c49bb3-4207-40c0-a26d-1cede90f8b00", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1464114744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "252fdae97c7e4dec97ac9a23d72747cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd090c296-9e", "ovs_interfaceid": "d090c296-9e92-42d2-9d11-5d9eb058d748", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:24:30 np0005539552 nova_compute[233724]: 2025-11-29 08:24:30.483 233728 DEBUG oslo_concurrency.lockutils [req-76896e00-102c-4938-8642-752b4193ed3c req-085fd102-d737-4017-87ed-99d25c1c542b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-4cfb409d-50a4-4256-83f3-6aef192ab489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:24:30 np0005539552 nova_compute[233724]: 2025-11-29 08:24:30.483 233728 DEBUG nova.network.neutron [req-76896e00-102c-4938-8642-752b4193ed3c req-085fd102-d737-4017-87ed-99d25c1c542b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Refreshing network info cache for port d090c296-9e92-42d2-9d11-5d9eb058d748 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:24:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:30.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:31 np0005539552 nova_compute[233724]: 2025-11-29 08:24:31.495 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:31.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:32 np0005539552 nova_compute[233724]: 2025-11-29 08:24:32.278 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:24:32 np0005539552 nova_compute[233724]: 2025-11-29 08:24:32.306 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:32 np0005539552 nova_compute[233724]: 2025-11-29 08:24:32.306 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:32 np0005539552 nova_compute[233724]: 2025-11-29 08:24:32.307 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:32 np0005539552 nova_compute[233724]: 2025-11-29 08:24:32.307 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:24:32 np0005539552 nova_compute[233724]: 2025-11-29 08:24:32.308 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:32 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:24:32 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/34878246' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:24:32 np0005539552 nova_compute[233724]: 2025-11-29 08:24:32.721 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:32 np0005539552 nova_compute[233724]: 2025-11-29 08:24:32.805 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:24:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:32 np0005539552 nova_compute[233724]: 2025-11-29 08:24:32.805 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:24:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:32.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:32 np0005539552 ceph-mds[83636]: mds.beacon.cephfs.compute-2.mmoati missed beacon ack from the monitors
Nov 29 03:24:32 np0005539552 nova_compute[233724]: 2025-11-29 08:24:32.960 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:24:32 np0005539552 nova_compute[233724]: 2025-11-29 08:24:32.961 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4128MB free_disk=20.85146713256836GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:24:32 np0005539552 nova_compute[233724]: 2025-11-29 08:24:32.961 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:32 np0005539552 nova_compute[233724]: 2025-11-29 08:24:32.962 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:32 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:33 np0005539552 nova_compute[233724]: 2025-11-29 08:24:33.005 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Migration for instance 2e7e8742-c504-412d-82cf-4087bd745c3e refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Nov 29 03:24:33 np0005539552 nova_compute[233724]: 2025-11-29 08:24:33.012 233728 DEBUG nova.network.neutron [req-76896e00-102c-4938-8642-752b4193ed3c req-085fd102-d737-4017-87ed-99d25c1c542b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Updated VIF entry in instance network info cache for port d090c296-9e92-42d2-9d11-5d9eb058d748. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:24:33 np0005539552 nova_compute[233724]: 2025-11-29 08:24:33.013 233728 DEBUG nova.network.neutron [req-76896e00-102c-4938-8642-752b4193ed3c req-085fd102-d737-4017-87ed-99d25c1c542b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Updating instance_info_cache with network_info: [{"id": "d090c296-9e92-42d2-9d11-5d9eb058d748", "address": "fa:16:3e:0b:2f:8f", "network": {"id": "c2c49bb3-4207-40c0-a26d-1cede90f8b00", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1464114744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "252fdae97c7e4dec97ac9a23d72747cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd090c296-9e", "ovs_interfaceid": "d090c296-9e92-42d2-9d11-5d9eb058d748", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:24:33 np0005539552 nova_compute[233724]: 2025-11-29 08:24:33.028 233728 INFO nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Updating resource usage from migration c6911945-fb45-4d46-8522-b58cdea15a0b#033[00m
Nov 29 03:24:33 np0005539552 nova_compute[233724]: 2025-11-29 08:24:33.028 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Starting to track incoming migration c6911945-fb45-4d46-8522-b58cdea15a0b with flavor 709b029f-0458-4e40-a6ee-e1e02b48c06c _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Nov 29 03:24:33 np0005539552 nova_compute[233724]: 2025-11-29 08:24:33.033 233728 DEBUG oslo_concurrency.lockutils [req-76896e00-102c-4938-8642-752b4193ed3c req-085fd102-d737-4017-87ed-99d25c1c542b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-4cfb409d-50a4-4256-83f3-6aef192ab489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:24:33 np0005539552 nova_compute[233724]: 2025-11-29 08:24:33.058 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 07f760bf-6984-45e9-8e85-3d297e812553 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:24:33 np0005539552 nova_compute[233724]: 2025-11-29 08:24:33.082 233728 WARNING nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 2e7e8742-c504-412d-82cf-4087bd745c3e has been moved to another host compute-1.ctlplane.example.com(compute-1.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'MEMORY_MB': 192, 'VCPU': 1}}.#033[00m
Nov 29 03:24:33 np0005539552 nova_compute[233724]: 2025-11-29 08:24:33.082 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 4cfb409d-50a4-4256-83f3-6aef192ab489 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:24:33 np0005539552 nova_compute[233724]: 2025-11-29 08:24:33.083 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:24:33 np0005539552 nova_compute[233724]: 2025-11-29 08:24:33.083 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=960MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:24:33 np0005539552 nova_compute[233724]: 2025-11-29 08:24:33.317 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:24:33 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/892500954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:24:33 np0005539552 nova_compute[233724]: 2025-11-29 08:24:33.805 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:33 np0005539552 nova_compute[233724]: 2025-11-29 08:24:33.812 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:24:33 np0005539552 nova_compute[233724]: 2025-11-29 08:24:33.832 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:24:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:33.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:33 np0005539552 nova_compute[233724]: 2025-11-29 08:24:33.869 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:24:33 np0005539552 nova_compute[233724]: 2025-11-29 08:24:33.869 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.907s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:33 np0005539552 nova_compute[233724]: 2025-11-29 08:24:33.870 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:24:33 np0005539552 nova_compute[233724]: 2025-11-29 08:24:33.870 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 03:24:33 np0005539552 nova_compute[233724]: 2025-11-29 08:24:33.978 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:34 np0005539552 nova_compute[233724]: 2025-11-29 08:24:34.400 233728 DEBUG oslo_concurrency.processutils [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 4cfb409d-50a4-4256-83f3-6aef192ab489_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 7.980s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:34 np0005539552 nova_compute[233724]: 2025-11-29 08:24:34.476 233728 DEBUG nova.storage.rbd_utils [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] resizing rbd image 4cfb409d-50a4-4256-83f3-6aef192ab489_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:24:34 np0005539552 systemd[1]: Stopping User Manager for UID 42436...
Nov 29 03:24:34 np0005539552 systemd[287897]: Activating special unit Exit the Session...
Nov 29 03:24:34 np0005539552 systemd[287897]: Stopped target Main User Target.
Nov 29 03:24:34 np0005539552 systemd[287897]: Stopped target Basic System.
Nov 29 03:24:34 np0005539552 systemd[287897]: Stopped target Paths.
Nov 29 03:24:34 np0005539552 systemd[287897]: Stopped target Sockets.
Nov 29 03:24:34 np0005539552 systemd[287897]: Stopped target Timers.
Nov 29 03:24:34 np0005539552 systemd[287897]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 29 03:24:34 np0005539552 systemd[287897]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 29 03:24:34 np0005539552 systemd[287897]: Closed D-Bus User Message Bus Socket.
Nov 29 03:24:34 np0005539552 systemd[287897]: Stopped Create User's Volatile Files and Directories.
Nov 29 03:24:34 np0005539552 systemd[287897]: Removed slice User Application Slice.
Nov 29 03:24:34 np0005539552 systemd[287897]: Reached target Shutdown.
Nov 29 03:24:34 np0005539552 systemd[287897]: Finished Exit the Session.
Nov 29 03:24:34 np0005539552 systemd[287897]: Reached target Exit the Session.
Nov 29 03:24:34 np0005539552 systemd[1]: user@42436.service: Deactivated successfully.
Nov 29 03:24:34 np0005539552 systemd[1]: Stopped User Manager for UID 42436.
Nov 29 03:24:34 np0005539552 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Nov 29 03:24:34 np0005539552 systemd[1]: run-user-42436.mount: Deactivated successfully.
Nov 29 03:24:34 np0005539552 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Nov 29 03:24:34 np0005539552 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Nov 29 03:24:34 np0005539552 systemd[1]: Removed slice User Slice of UID 42436.
Nov 29 03:24:34 np0005539552 nova_compute[233724]: 2025-11-29 08:24:34.695 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:34 np0005539552 nova_compute[233724]: 2025-11-29 08:24:34.704 233728 DEBUG nova.objects.instance [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Lazy-loading 'migration_context' on Instance uuid 4cfb409d-50a4-4256-83f3-6aef192ab489 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:24:34 np0005539552 nova_compute[233724]: 2025-11-29 08:24:34.718 233728 DEBUG nova.virt.libvirt.driver [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:24:34 np0005539552 nova_compute[233724]: 2025-11-29 08:24:34.718 233728 DEBUG nova.virt.libvirt.driver [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Ensure instance console log exists: /var/lib/nova/instances/4cfb409d-50a4-4256-83f3-6aef192ab489/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:24:34 np0005539552 nova_compute[233724]: 2025-11-29 08:24:34.719 233728 DEBUG oslo_concurrency.lockutils [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:34 np0005539552 nova_compute[233724]: 2025-11-29 08:24:34.719 233728 DEBUG oslo_concurrency.lockutils [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:34 np0005539552 nova_compute[233724]: 2025-11-29 08:24:34.719 233728 DEBUG oslo_concurrency.lockutils [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:34 np0005539552 nova_compute[233724]: 2025-11-29 08:24:34.722 233728 DEBUG nova.virt.libvirt.driver [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Start _get_guest_xml network_info=[{"id": "d090c296-9e92-42d2-9d11-5d9eb058d748", "address": "fa:16:3e:0b:2f:8f", "network": {"id": "c2c49bb3-4207-40c0-a26d-1cede90f8b00", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1464114744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "252fdae97c7e4dec97ac9a23d72747cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd090c296-9e", "ovs_interfaceid": "d090c296-9e92-42d2-9d11-5d9eb058d748", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:24:34 np0005539552 nova_compute[233724]: 2025-11-29 08:24:34.726 233728 WARNING nova.virt.libvirt.driver [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:24:34 np0005539552 nova_compute[233724]: 2025-11-29 08:24:34.731 233728 DEBUG nova.virt.libvirt.host [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:24:34 np0005539552 nova_compute[233724]: 2025-11-29 08:24:34.732 233728 DEBUG nova.virt.libvirt.host [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:24:34 np0005539552 nova_compute[233724]: 2025-11-29 08:24:34.734 233728 DEBUG nova.virt.libvirt.host [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:24:34 np0005539552 nova_compute[233724]: 2025-11-29 08:24:34.734 233728 DEBUG nova.virt.libvirt.host [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:24:34 np0005539552 nova_compute[233724]: 2025-11-29 08:24:34.735 233728 DEBUG nova.virt.libvirt.driver [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:24:34 np0005539552 nova_compute[233724]: 2025-11-29 08:24:34.736 233728 DEBUG nova.virt.hardware [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:24:34 np0005539552 nova_compute[233724]: 2025-11-29 08:24:34.736 233728 DEBUG nova.virt.hardware [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:24:34 np0005539552 nova_compute[233724]: 2025-11-29 08:24:34.736 233728 DEBUG nova.virt.hardware [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:24:34 np0005539552 nova_compute[233724]: 2025-11-29 08:24:34.737 233728 DEBUG nova.virt.hardware [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:24:34 np0005539552 nova_compute[233724]: 2025-11-29 08:24:34.737 233728 DEBUG nova.virt.hardware [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:24:34 np0005539552 nova_compute[233724]: 2025-11-29 08:24:34.737 233728 DEBUG nova.virt.hardware [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:24:34 np0005539552 nova_compute[233724]: 2025-11-29 08:24:34.737 233728 DEBUG nova.virt.hardware [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:24:34 np0005539552 nova_compute[233724]: 2025-11-29 08:24:34.737 233728 DEBUG nova.virt.hardware [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:24:34 np0005539552 nova_compute[233724]: 2025-11-29 08:24:34.738 233728 DEBUG nova.virt.hardware [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:24:34 np0005539552 nova_compute[233724]: 2025-11-29 08:24:34.738 233728 DEBUG nova.virt.hardware [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:24:34 np0005539552 nova_compute[233724]: 2025-11-29 08:24:34.738 233728 DEBUG nova.virt.hardware [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:24:34 np0005539552 nova_compute[233724]: 2025-11-29 08:24:34.741 233728 DEBUG oslo_concurrency.processutils [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:34.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:35 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:24:35 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3962211711' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:24:35 np0005539552 nova_compute[233724]: 2025-11-29 08:24:35.162 233728 DEBUG oslo_concurrency.processutils [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:35 np0005539552 nova_compute[233724]: 2025-11-29 08:24:35.194 233728 DEBUG nova.storage.rbd_utils [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] rbd image 4cfb409d-50a4-4256-83f3-6aef192ab489_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:24:35 np0005539552 nova_compute[233724]: 2025-11-29 08:24:35.198 233728 DEBUG oslo_concurrency.processutils [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:35 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:24:35 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/114218688' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:24:35 np0005539552 nova_compute[233724]: 2025-11-29 08:24:35.659 233728 DEBUG oslo_concurrency.processutils [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:35 np0005539552 nova_compute[233724]: 2025-11-29 08:24:35.661 233728 DEBUG nova.virt.libvirt.vif [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:24:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-938624690',display_name='tempest-ServerMetadataNegativeTestJSON-server-938624690',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-938624690',id=131,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='252fdae97c7e4dec97ac9a23d72747cc',ramdisk_id='',reservation_id='r-2jhpkygu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataNegativeTestJSON-1540077054',owner_user
_name='tempest-ServerMetadataNegativeTestJSON-1540077054-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:24:26Z,user_data=None,user_id='d49b9d546af249fc80268c1bdd5884f6',uuid=4cfb409d-50a4-4256-83f3-6aef192ab489,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d090c296-9e92-42d2-9d11-5d9eb058d748", "address": "fa:16:3e:0b:2f:8f", "network": {"id": "c2c49bb3-4207-40c0-a26d-1cede90f8b00", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1464114744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "252fdae97c7e4dec97ac9a23d72747cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd090c296-9e", "ovs_interfaceid": "d090c296-9e92-42d2-9d11-5d9eb058d748", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:24:35 np0005539552 nova_compute[233724]: 2025-11-29 08:24:35.662 233728 DEBUG nova.network.os_vif_util [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Converting VIF {"id": "d090c296-9e92-42d2-9d11-5d9eb058d748", "address": "fa:16:3e:0b:2f:8f", "network": {"id": "c2c49bb3-4207-40c0-a26d-1cede90f8b00", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1464114744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "252fdae97c7e4dec97ac9a23d72747cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd090c296-9e", "ovs_interfaceid": "d090c296-9e92-42d2-9d11-5d9eb058d748", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:24:35 np0005539552 nova_compute[233724]: 2025-11-29 08:24:35.663 233728 DEBUG nova.network.os_vif_util [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:2f:8f,bridge_name='br-int',has_traffic_filtering=True,id=d090c296-9e92-42d2-9d11-5d9eb058d748,network=Network(c2c49bb3-4207-40c0-a26d-1cede90f8b00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd090c296-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:24:35 np0005539552 nova_compute[233724]: 2025-11-29 08:24:35.664 233728 DEBUG nova.objects.instance [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Lazy-loading 'pci_devices' on Instance uuid 4cfb409d-50a4-4256-83f3-6aef192ab489 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:24:35 np0005539552 nova_compute[233724]: 2025-11-29 08:24:35.683 233728 DEBUG nova.virt.libvirt.driver [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:24:35 np0005539552 nova_compute[233724]:  <uuid>4cfb409d-50a4-4256-83f3-6aef192ab489</uuid>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:  <name>instance-00000083</name>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:24:35 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:      <nova:name>tempest-ServerMetadataNegativeTestJSON-server-938624690</nova:name>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:24:34</nova:creationTime>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:24:35 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:        <nova:user uuid="d49b9d546af249fc80268c1bdd5884f6">tempest-ServerMetadataNegativeTestJSON-1540077054-project-member</nova:user>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:        <nova:project uuid="252fdae97c7e4dec97ac9a23d72747cc">tempest-ServerMetadataNegativeTestJSON-1540077054</nova:project>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:        <nova:port uuid="d090c296-9e92-42d2-9d11-5d9eb058d748">
Nov 29 03:24:35 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:      <entry name="serial">4cfb409d-50a4-4256-83f3-6aef192ab489</entry>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:      <entry name="uuid">4cfb409d-50a4-4256-83f3-6aef192ab489</entry>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:24:35 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/4cfb409d-50a4-4256-83f3-6aef192ab489_disk">
Nov 29 03:24:35 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:24:35 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:24:35 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/4cfb409d-50a4-4256-83f3-6aef192ab489_disk.config">
Nov 29 03:24:35 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:24:35 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:24:35 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:0b:2f:8f"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:      <target dev="tapd090c296-9e"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:24:35 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/4cfb409d-50a4-4256-83f3-6aef192ab489/console.log" append="off"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:24:35 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:24:35 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:24:35 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:24:35 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:24:35 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:24:35 np0005539552 nova_compute[233724]: 2025-11-29 08:24:35.684 233728 DEBUG nova.compute.manager [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Preparing to wait for external event network-vif-plugged-d090c296-9e92-42d2-9d11-5d9eb058d748 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:24:35 np0005539552 nova_compute[233724]: 2025-11-29 08:24:35.685 233728 DEBUG oslo_concurrency.lockutils [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Acquiring lock "4cfb409d-50a4-4256-83f3-6aef192ab489-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:35 np0005539552 nova_compute[233724]: 2025-11-29 08:24:35.685 233728 DEBUG oslo_concurrency.lockutils [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Lock "4cfb409d-50a4-4256-83f3-6aef192ab489-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:35 np0005539552 nova_compute[233724]: 2025-11-29 08:24:35.685 233728 DEBUG oslo_concurrency.lockutils [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Lock "4cfb409d-50a4-4256-83f3-6aef192ab489-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:35 np0005539552 nova_compute[233724]: 2025-11-29 08:24:35.686 233728 DEBUG nova.virt.libvirt.vif [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:24:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-938624690',display_name='tempest-ServerMetadataNegativeTestJSON-server-938624690',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-938624690',id=131,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='252fdae97c7e4dec97ac9a23d72747cc',ramdisk_id='',reservation_id='r-2jhpkygu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataNegativeTestJSON-1540077054',owner_user_name='tempest-ServerMetadataNegativeTestJSON-1540077054-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:24:26Z,user_data=None,user_id='d49b9d546af249fc80268c1bdd5884f6',uuid=4cfb409d-50a4-4256-83f3-6aef192ab489,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d090c296-9e92-42d2-9d11-5d9eb058d748", "address": "fa:16:3e:0b:2f:8f", "network": {"id": "c2c49bb3-4207-40c0-a26d-1cede90f8b00", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1464114744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "252fdae97c7e4dec97ac9a23d72747cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd090c296-9e", "ovs_interfaceid": "d090c296-9e92-42d2-9d11-5d9eb058d748", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:24:35 np0005539552 nova_compute[233724]: 2025-11-29 08:24:35.686 233728 DEBUG nova.network.os_vif_util [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Converting VIF {"id": "d090c296-9e92-42d2-9d11-5d9eb058d748", "address": "fa:16:3e:0b:2f:8f", "network": {"id": "c2c49bb3-4207-40c0-a26d-1cede90f8b00", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1464114744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "252fdae97c7e4dec97ac9a23d72747cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd090c296-9e", "ovs_interfaceid": "d090c296-9e92-42d2-9d11-5d9eb058d748", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:24:35 np0005539552 nova_compute[233724]: 2025-11-29 08:24:35.687 233728 DEBUG nova.network.os_vif_util [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:2f:8f,bridge_name='br-int',has_traffic_filtering=True,id=d090c296-9e92-42d2-9d11-5d9eb058d748,network=Network(c2c49bb3-4207-40c0-a26d-1cede90f8b00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd090c296-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:24:35 np0005539552 nova_compute[233724]: 2025-11-29 08:24:35.687 233728 DEBUG os_vif [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:2f:8f,bridge_name='br-int',has_traffic_filtering=True,id=d090c296-9e92-42d2-9d11-5d9eb058d748,network=Network(c2c49bb3-4207-40c0-a26d-1cede90f8b00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd090c296-9e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:24:35 np0005539552 nova_compute[233724]: 2025-11-29 08:24:35.687 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:35 np0005539552 nova_compute[233724]: 2025-11-29 08:24:35.688 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:35 np0005539552 nova_compute[233724]: 2025-11-29 08:24:35.688 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:24:35 np0005539552 nova_compute[233724]: 2025-11-29 08:24:35.691 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:35 np0005539552 nova_compute[233724]: 2025-11-29 08:24:35.691 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd090c296-9e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:35 np0005539552 nova_compute[233724]: 2025-11-29 08:24:35.692 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd090c296-9e, col_values=(('external_ids', {'iface-id': 'd090c296-9e92-42d2-9d11-5d9eb058d748', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0b:2f:8f', 'vm-uuid': '4cfb409d-50a4-4256-83f3-6aef192ab489'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:35 np0005539552 nova_compute[233724]: 2025-11-29 08:24:35.693 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:35 np0005539552 NetworkManager[48926]: <info>  [1764404675.6946] manager: (tapd090c296-9e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/258)
Nov 29 03:24:35 np0005539552 nova_compute[233724]: 2025-11-29 08:24:35.696 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:24:35 np0005539552 nova_compute[233724]: 2025-11-29 08:24:35.701 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:35 np0005539552 nova_compute[233724]: 2025-11-29 08:24:35.703 233728 INFO os_vif [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:2f:8f,bridge_name='br-int',has_traffic_filtering=True,id=d090c296-9e92-42d2-9d11-5d9eb058d748,network=Network(c2c49bb3-4207-40c0-a26d-1cede90f8b00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd090c296-9e')#033[00m
Nov 29 03:24:35 np0005539552 nova_compute[233724]: 2025-11-29 08:24:35.760 233728 DEBUG nova.virt.libvirt.driver [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:24:35 np0005539552 nova_compute[233724]: 2025-11-29 08:24:35.760 233728 DEBUG nova.virt.libvirt.driver [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:24:35 np0005539552 nova_compute[233724]: 2025-11-29 08:24:35.761 233728 DEBUG nova.virt.libvirt.driver [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] No VIF found with MAC fa:16:3e:0b:2f:8f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:24:35 np0005539552 nova_compute[233724]: 2025-11-29 08:24:35.762 233728 INFO nova.virt.libvirt.driver [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Using config drive#033[00m
Nov 29 03:24:35 np0005539552 nova_compute[233724]: 2025-11-29 08:24:35.793 233728 DEBUG nova.storage.rbd_utils [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] rbd image 4cfb409d-50a4-4256-83f3-6aef192ab489_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:24:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:35.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:36 np0005539552 nova_compute[233724]: 2025-11-29 08:24:36.409 233728 INFO nova.virt.libvirt.driver [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Creating config drive at /var/lib/nova/instances/4cfb409d-50a4-4256-83f3-6aef192ab489/disk.config#033[00m
Nov 29 03:24:36 np0005539552 nova_compute[233724]: 2025-11-29 08:24:36.413 233728 DEBUG oslo_concurrency.processutils [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4cfb409d-50a4-4256-83f3-6aef192ab489/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxrr2ewx2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:36 np0005539552 nova_compute[233724]: 2025-11-29 08:24:36.497 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:36 np0005539552 nova_compute[233724]: 2025-11-29 08:24:36.551 233728 DEBUG oslo_concurrency.processutils [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4cfb409d-50a4-4256-83f3-6aef192ab489/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxrr2ewx2" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:36 np0005539552 nova_compute[233724]: 2025-11-29 08:24:36.578 233728 DEBUG nova.storage.rbd_utils [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] rbd image 4cfb409d-50a4-4256-83f3-6aef192ab489_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:24:36 np0005539552 nova_compute[233724]: 2025-11-29 08:24:36.582 233728 DEBUG oslo_concurrency.processutils [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4cfb409d-50a4-4256-83f3-6aef192ab489/disk.config 4cfb409d-50a4-4256-83f3-6aef192ab489_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:36 np0005539552 nova_compute[233724]: 2025-11-29 08:24:36.756 233728 DEBUG nova.compute.manager [req-de808072-f509-4bfd-aa6e-47962e6d14f1 req-c60648ef-580d-4c30-8c69-a5e0fe65d439 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Received event network-vif-unplugged-270b7a06-5cdd-4855-a693-0b30baf78df7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:24:36 np0005539552 nova_compute[233724]: 2025-11-29 08:24:36.757 233728 DEBUG oslo_concurrency.lockutils [req-de808072-f509-4bfd-aa6e-47962e6d14f1 req-c60648ef-580d-4c30-8c69-a5e0fe65d439 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "2e7e8742-c504-412d-82cf-4087bd745c3e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:36 np0005539552 nova_compute[233724]: 2025-11-29 08:24:36.757 233728 DEBUG oslo_concurrency.lockutils [req-de808072-f509-4bfd-aa6e-47962e6d14f1 req-c60648ef-580d-4c30-8c69-a5e0fe65d439 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2e7e8742-c504-412d-82cf-4087bd745c3e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:36 np0005539552 nova_compute[233724]: 2025-11-29 08:24:36.757 233728 DEBUG oslo_concurrency.lockutils [req-de808072-f509-4bfd-aa6e-47962e6d14f1 req-c60648ef-580d-4c30-8c69-a5e0fe65d439 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2e7e8742-c504-412d-82cf-4087bd745c3e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:36 np0005539552 nova_compute[233724]: 2025-11-29 08:24:36.758 233728 DEBUG nova.compute.manager [req-de808072-f509-4bfd-aa6e-47962e6d14f1 req-c60648ef-580d-4c30-8c69-a5e0fe65d439 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] No waiting events found dispatching network-vif-unplugged-270b7a06-5cdd-4855-a693-0b30baf78df7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:24:36 np0005539552 nova_compute[233724]: 2025-11-29 08:24:36.758 233728 WARNING nova.compute.manager [req-de808072-f509-4bfd-aa6e-47962e6d14f1 req-c60648ef-580d-4c30-8c69-a5e0fe65d439 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Received unexpected event network-vif-unplugged-270b7a06-5cdd-4855-a693-0b30baf78df7 for instance with vm_state active and task_state resize_migrating.#033[00m
Nov 29 03:24:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:36.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:36 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:36.954 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:24:36 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:36.955 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:24:36 np0005539552 nova_compute[233724]: 2025-11-29 08:24:36.955 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:37 np0005539552 nova_compute[233724]: 2025-11-29 08:24:37.096 233728 DEBUG oslo_concurrency.processutils [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4cfb409d-50a4-4256-83f3-6aef192ab489/disk.config 4cfb409d-50a4-4256-83f3-6aef192ab489_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:37 np0005539552 nova_compute[233724]: 2025-11-29 08:24:37.098 233728 INFO nova.virt.libvirt.driver [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Deleting local config drive /var/lib/nova/instances/4cfb409d-50a4-4256-83f3-6aef192ab489/disk.config because it was imported into RBD.#033[00m
Nov 29 03:24:37 np0005539552 kernel: tapd090c296-9e: entered promiscuous mode
Nov 29 03:24:37 np0005539552 NetworkManager[48926]: <info>  [1764404677.1463] manager: (tapd090c296-9e): new Tun device (/org/freedesktop/NetworkManager/Devices/259)
Nov 29 03:24:37 np0005539552 ovn_controller[133798]: 2025-11-29T08:24:37Z|00568|binding|INFO|Claiming lport d090c296-9e92-42d2-9d11-5d9eb058d748 for this chassis.
Nov 29 03:24:37 np0005539552 ovn_controller[133798]: 2025-11-29T08:24:37Z|00569|binding|INFO|d090c296-9e92-42d2-9d11-5d9eb058d748: Claiming fa:16:3e:0b:2f:8f 10.100.0.10
Nov 29 03:24:37 np0005539552 nova_compute[233724]: 2025-11-29 08:24:37.147 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:37.153 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:2f:8f 10.100.0.10'], port_security=['fa:16:3e:0b:2f:8f 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4cfb409d-50a4-4256-83f3-6aef192ab489', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c2c49bb3-4207-40c0-a26d-1cede90f8b00', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '252fdae97c7e4dec97ac9a23d72747cc', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ec5ce994-6de9-4b70-abd4-89e60c4a4274', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4dfbacd5-9971-4e30-ae61-a099b062f3b5, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=d090c296-9e92-42d2-9d11-5d9eb058d748) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:37.154 143400 INFO neutron.agent.ovn.metadata.agent [-] Port d090c296-9e92-42d2-9d11-5d9eb058d748 in datapath c2c49bb3-4207-40c0-a26d-1cede90f8b00 bound to our chassis#033[00m
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:37.156 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c2c49bb3-4207-40c0-a26d-1cede90f8b00#033[00m
Nov 29 03:24:37 np0005539552 ovn_controller[133798]: 2025-11-29T08:24:37Z|00570|binding|INFO|Setting lport d090c296-9e92-42d2-9d11-5d9eb058d748 ovn-installed in OVS
Nov 29 03:24:37 np0005539552 ovn_controller[133798]: 2025-11-29T08:24:37Z|00571|binding|INFO|Setting lport d090c296-9e92-42d2-9d11-5d9eb058d748 up in Southbound
Nov 29 03:24:37 np0005539552 nova_compute[233724]: 2025-11-29 08:24:37.162 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:37 np0005539552 nova_compute[233724]: 2025-11-29 08:24:37.165 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:37.168 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[766fabf6-245b-4370-8a1e-1e2dd091eb1f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:37.169 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc2c49bb3-41 in ovnmeta-c2c49bb3-4207-40c0-a26d-1cede90f8b00 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:37.171 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc2c49bb3-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:37.171 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e8b923fa-680f-4c95-96f0-0156a951b48e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:37.172 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4e5ce7c0-d51a-4d26-9804-f3e7772ec815]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:37 np0005539552 systemd-udevd[288301]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:24:37 np0005539552 systemd-machined[196379]: New machine qemu-56-instance-00000083.
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:37.183 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[79182127-7b68-47e1-8fdc-670a0996f4d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:37 np0005539552 systemd[1]: Started Virtual Machine qemu-56-instance-00000083.
Nov 29 03:24:37 np0005539552 NetworkManager[48926]: <info>  [1764404677.1953] device (tapd090c296-9e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:24:37 np0005539552 NetworkManager[48926]: <info>  [1764404677.1962] device (tapd090c296-9e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:37.207 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e1b7a685-2f77-433f-9675-fd1ae033cf46]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:37.232 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[51ed3aa5-ca78-4fcb-ac09-e08676b1e6bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:37 np0005539552 NetworkManager[48926]: <info>  [1764404677.2382] manager: (tapc2c49bb3-40): new Veth device (/org/freedesktop/NetworkManager/Devices/260)
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:37.237 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[7253fe70-01c1-4a8b-8d68-429399c49c2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:37 np0005539552 systemd-udevd[288304]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:37.266 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[ad6725f2-e377-4727-b394-92b2e07cef62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:37.269 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[50cddeb6-8b01-47fc-a559-c62408eb7ac2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:37 np0005539552 NetworkManager[48926]: <info>  [1764404677.2890] device (tapc2c49bb3-40): carrier: link connected
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:37.294 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[f16b17d6-b86a-4f7e-a9e9-a36edfcbadec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:37.313 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e464ca52-ce08-42f1-9467-ed3a8ba7851e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc2c49bb3-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:15:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 171], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 772297, 'reachable_time': 41447, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288332, 'error': None, 'target': 'ovnmeta-c2c49bb3-4207-40c0-a26d-1cede90f8b00', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:37.327 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[9a2746da-51b1-4b64-a38d-20d29dec2cfb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feca:15c8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 772297, 'tstamp': 772297}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288333, 'error': None, 'target': 'ovnmeta-c2c49bb3-4207-40c0-a26d-1cede90f8b00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:37.342 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[fe033b12-789b-4560-bd9d-50d865dfde7d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc2c49bb3-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:15:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 171], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 772297, 'reachable_time': 41447, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 288334, 'error': None, 'target': 'ovnmeta-c2c49bb3-4207-40c0-a26d-1cede90f8b00', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:37.373 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[02602f33-f261-4c27-8de1-8a93352fc53f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:37.443 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3f1c87a4-5f8a-488e-8f31-94fcac26e9d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:37.445 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc2c49bb3-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:37.445 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:37.446 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc2c49bb3-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:37 np0005539552 kernel: tapc2c49bb3-40: entered promiscuous mode
Nov 29 03:24:37 np0005539552 NetworkManager[48926]: <info>  [1764404677.4500] manager: (tapc2c49bb3-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/261)
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:37.454 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc2c49bb3-40, col_values=(('external_ids', {'iface-id': '5a37e4ef-493a-45f9-b2f7-92440bfb7e06'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:37 np0005539552 nova_compute[233724]: 2025-11-29 08:24:37.454 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:37 np0005539552 ovn_controller[133798]: 2025-11-29T08:24:37Z|00572|binding|INFO|Releasing lport 5a37e4ef-493a-45f9-b2f7-92440bfb7e06 from this chassis (sb_readonly=0)
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:37.458 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c2c49bb3-4207-40c0-a26d-1cede90f8b00.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c2c49bb3-4207-40c0-a26d-1cede90f8b00.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:37.459 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b242a140-99f2-42b7-b883-4493b510ee68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:37.460 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-c2c49bb3-4207-40c0-a26d-1cede90f8b00
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/c2c49bb3-4207-40c0-a26d-1cede90f8b00.pid.haproxy
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID c2c49bb3-4207-40c0-a26d-1cede90f8b00
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:24:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:37.461 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c2c49bb3-4207-40c0-a26d-1cede90f8b00', 'env', 'PROCESS_TAG=haproxy-c2c49bb3-4207-40c0-a26d-1cede90f8b00', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c2c49bb3-4207-40c0-a26d-1cede90f8b00.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:24:37 np0005539552 nova_compute[233724]: 2025-11-29 08:24:37.471 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:37 np0005539552 nova_compute[233724]: 2025-11-29 08:24:37.546 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:24:37 np0005539552 nova_compute[233724]: 2025-11-29 08:24:37.547 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:24:37 np0005539552 nova_compute[233724]: 2025-11-29 08:24:37.547 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:24:37 np0005539552 nova_compute[233724]: 2025-11-29 08:24:37.602 233728 DEBUG nova.compute.manager [req-93e8ed7d-e52f-4c4e-bbe9-0ed94d4c7bbf req-c6c84484-6779-4112-9cc8-0ad0b576dbcc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Received event network-vif-plugged-d090c296-9e92-42d2-9d11-5d9eb058d748 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:24:37 np0005539552 nova_compute[233724]: 2025-11-29 08:24:37.602 233728 DEBUG oslo_concurrency.lockutils [req-93e8ed7d-e52f-4c4e-bbe9-0ed94d4c7bbf req-c6c84484-6779-4112-9cc8-0ad0b576dbcc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "4cfb409d-50a4-4256-83f3-6aef192ab489-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:37 np0005539552 nova_compute[233724]: 2025-11-29 08:24:37.602 233728 DEBUG oslo_concurrency.lockutils [req-93e8ed7d-e52f-4c4e-bbe9-0ed94d4c7bbf req-c6c84484-6779-4112-9cc8-0ad0b576dbcc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "4cfb409d-50a4-4256-83f3-6aef192ab489-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:37 np0005539552 nova_compute[233724]: 2025-11-29 08:24:37.603 233728 DEBUG oslo_concurrency.lockutils [req-93e8ed7d-e52f-4c4e-bbe9-0ed94d4c7bbf req-c6c84484-6779-4112-9cc8-0ad0b576dbcc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "4cfb409d-50a4-4256-83f3-6aef192ab489-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:37 np0005539552 nova_compute[233724]: 2025-11-29 08:24:37.603 233728 DEBUG nova.compute.manager [req-93e8ed7d-e52f-4c4e-bbe9-0ed94d4c7bbf req-c6c84484-6779-4112-9cc8-0ad0b576dbcc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Processing event network-vif-plugged-d090c296-9e92-42d2-9d11-5d9eb058d748 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:24:37 np0005539552 nova_compute[233724]: 2025-11-29 08:24:37.800 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404677.8003297, 4cfb409d-50a4-4256-83f3-6aef192ab489 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:24:37 np0005539552 nova_compute[233724]: 2025-11-29 08:24:37.801 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] VM Started (Lifecycle Event)#033[00m
Nov 29 03:24:37 np0005539552 nova_compute[233724]: 2025-11-29 08:24:37.803 233728 DEBUG nova.compute.manager [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:24:37 np0005539552 nova_compute[233724]: 2025-11-29 08:24:37.807 233728 DEBUG nova.virt.libvirt.driver [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:24:37 np0005539552 nova_compute[233724]: 2025-11-29 08:24:37.810 233728 INFO nova.virt.libvirt.driver [-] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Instance spawned successfully.#033[00m
Nov 29 03:24:37 np0005539552 nova_compute[233724]: 2025-11-29 08:24:37.810 233728 DEBUG nova.virt.libvirt.driver [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:24:37 np0005539552 nova_compute[233724]: 2025-11-29 08:24:37.827 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:24:37 np0005539552 nova_compute[233724]: 2025-11-29 08:24:37.833 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:24:37 np0005539552 nova_compute[233724]: 2025-11-29 08:24:37.837 233728 DEBUG nova.virt.libvirt.driver [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:24:37 np0005539552 nova_compute[233724]: 2025-11-29 08:24:37.837 233728 DEBUG nova.virt.libvirt.driver [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:24:37 np0005539552 nova_compute[233724]: 2025-11-29 08:24:37.837 233728 DEBUG nova.virt.libvirt.driver [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:24:37 np0005539552 nova_compute[233724]: 2025-11-29 08:24:37.838 233728 DEBUG nova.virt.libvirt.driver [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:24:37 np0005539552 nova_compute[233724]: 2025-11-29 08:24:37.838 233728 DEBUG nova.virt.libvirt.driver [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:24:37 np0005539552 nova_compute[233724]: 2025-11-29 08:24:37.839 233728 DEBUG nova.virt.libvirt.driver [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:24:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:37.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:37 np0005539552 podman[288408]: 2025-11-29 08:24:37.872883957 +0000 UTC m=+0.055882745 container create 0ecbc241975e4248a51836be4dc215f6b9d26c5b39aedb69ce04536fb4c6946a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c2c49bb3-4207-40c0-a26d-1cede90f8b00, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:24:37 np0005539552 nova_compute[233724]: 2025-11-29 08:24:37.876 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:24:37 np0005539552 nova_compute[233724]: 2025-11-29 08:24:37.876 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404677.801315, 4cfb409d-50a4-4256-83f3-6aef192ab489 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:24:37 np0005539552 nova_compute[233724]: 2025-11-29 08:24:37.877 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:24:37 np0005539552 systemd[1]: Started libpod-conmon-0ecbc241975e4248a51836be4dc215f6b9d26c5b39aedb69ce04536fb4c6946a.scope.
Nov 29 03:24:37 np0005539552 nova_compute[233724]: 2025-11-29 08:24:37.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:24:37 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:24:37 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/182bd6212002960d3126f362db40d7dee52374d690a4677a56bd81a1716c0ab9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:24:37 np0005539552 podman[288408]: 2025-11-29 08:24:37.844302498 +0000 UTC m=+0.027301306 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:24:37 np0005539552 podman[288408]: 2025-11-29 08:24:37.947282909 +0000 UTC m=+0.130281717 container init 0ecbc241975e4248a51836be4dc215f6b9d26c5b39aedb69ce04536fb4c6946a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c2c49bb3-4207-40c0-a26d-1cede90f8b00, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:24:37 np0005539552 nova_compute[233724]: 2025-11-29 08:24:37.951 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:24:37 np0005539552 podman[288408]: 2025-11-29 08:24:37.954337569 +0000 UTC m=+0.137336357 container start 0ecbc241975e4248a51836be4dc215f6b9d26c5b39aedb69ce04536fb4c6946a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c2c49bb3-4207-40c0-a26d-1cede90f8b00, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Nov 29 03:24:37 np0005539552 nova_compute[233724]: 2025-11-29 08:24:37.955 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404677.8056533, 4cfb409d-50a4-4256-83f3-6aef192ab489 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:24:37 np0005539552 nova_compute[233724]: 2025-11-29 08:24:37.955 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:24:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:37 np0005539552 neutron-haproxy-ovnmeta-c2c49bb3-4207-40c0-a26d-1cede90f8b00[288422]: [NOTICE]   (288426) : New worker (288428) forked
Nov 29 03:24:37 np0005539552 neutron-haproxy-ovnmeta-c2c49bb3-4207-40c0-a26d-1cede90f8b00[288422]: [NOTICE]   (288426) : Loading success.
Nov 29 03:24:37 np0005539552 nova_compute[233724]: 2025-11-29 08:24:37.989 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:24:37 np0005539552 nova_compute[233724]: 2025-11-29 08:24:37.993 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:24:37 np0005539552 nova_compute[233724]: 2025-11-29 08:24:37.999 233728 INFO nova.compute.manager [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Took 11.82 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:24:38 np0005539552 nova_compute[233724]: 2025-11-29 08:24:37.999 233728 DEBUG nova.compute.manager [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:24:38 np0005539552 nova_compute[233724]: 2025-11-29 08:24:38.036 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:24:38 np0005539552 nova_compute[233724]: 2025-11-29 08:24:38.103 233728 INFO nova.compute.manager [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Took 12.90 seconds to build instance.#033[00m
Nov 29 03:24:38 np0005539552 nova_compute[233724]: 2025-11-29 08:24:38.130 233728 DEBUG oslo_concurrency.lockutils [None req-554ce39b-b3e6-4506-af5e-5e6895133644 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Lock "4cfb409d-50a4-4256-83f3-6aef192ab489" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.021s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:38 np0005539552 nova_compute[233724]: 2025-11-29 08:24:38.399 233728 INFO nova.network.neutron [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Updating port 270b7a06-5cdd-4855-a693-0b30baf78df7 with attributes {'binding:host_id': 'compute-2.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Nov 29 03:24:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:38.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:38 np0005539552 nova_compute[233724]: 2025-11-29 08:24:38.909 233728 DEBUG nova.compute.manager [req-c8e511b5-b6c2-4eec-919e-63660638d162 req-29d788f5-5a43-43cf-ba57-b9754070192d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Received event network-vif-plugged-270b7a06-5cdd-4855-a693-0b30baf78df7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:24:38 np0005539552 nova_compute[233724]: 2025-11-29 08:24:38.910 233728 DEBUG oslo_concurrency.lockutils [req-c8e511b5-b6c2-4eec-919e-63660638d162 req-29d788f5-5a43-43cf-ba57-b9754070192d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "2e7e8742-c504-412d-82cf-4087bd745c3e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:38 np0005539552 nova_compute[233724]: 2025-11-29 08:24:38.910 233728 DEBUG oslo_concurrency.lockutils [req-c8e511b5-b6c2-4eec-919e-63660638d162 req-29d788f5-5a43-43cf-ba57-b9754070192d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2e7e8742-c504-412d-82cf-4087bd745c3e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:38 np0005539552 nova_compute[233724]: 2025-11-29 08:24:38.910 233728 DEBUG oslo_concurrency.lockutils [req-c8e511b5-b6c2-4eec-919e-63660638d162 req-29d788f5-5a43-43cf-ba57-b9754070192d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2e7e8742-c504-412d-82cf-4087bd745c3e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:38 np0005539552 nova_compute[233724]: 2025-11-29 08:24:38.910 233728 DEBUG nova.compute.manager [req-c8e511b5-b6c2-4eec-919e-63660638d162 req-29d788f5-5a43-43cf-ba57-b9754070192d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] No waiting events found dispatching network-vif-plugged-270b7a06-5cdd-4855-a693-0b30baf78df7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:24:38 np0005539552 nova_compute[233724]: 2025-11-29 08:24:38.911 233728 WARNING nova.compute.manager [req-c8e511b5-b6c2-4eec-919e-63660638d162 req-29d788f5-5a43-43cf-ba57-b9754070192d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Received unexpected event network-vif-plugged-270b7a06-5cdd-4855-a693-0b30baf78df7 for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:24:38 np0005539552 nova_compute[233724]: 2025-11-29 08:24:38.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:24:38 np0005539552 nova_compute[233724]: 2025-11-29 08:24:38.925 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:24:38 np0005539552 nova_compute[233724]: 2025-11-29 08:24:38.925 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:24:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:24:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3390109656' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:24:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:24:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3390109656' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:24:39 np0005539552 nova_compute[233724]: 2025-11-29 08:24:39.518 233728 DEBUG oslo_concurrency.lockutils [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "refresh_cache-2e7e8742-c504-412d-82cf-4087bd745c3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:24:39 np0005539552 nova_compute[233724]: 2025-11-29 08:24:39.519 233728 DEBUG oslo_concurrency.lockutils [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquired lock "refresh_cache-2e7e8742-c504-412d-82cf-4087bd745c3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:24:39 np0005539552 nova_compute[233724]: 2025-11-29 08:24:39.519 233728 DEBUG nova.network.neutron [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:24:39 np0005539552 nova_compute[233724]: 2025-11-29 08:24:39.787 233728 DEBUG nova.compute.manager [req-999dd2f5-2766-42bc-86b1-e36f7ae1d019 req-20d3ee74-08af-481f-806c-5fcf46c7b3a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Received event network-vif-plugged-d090c296-9e92-42d2-9d11-5d9eb058d748 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:24:39 np0005539552 nova_compute[233724]: 2025-11-29 08:24:39.788 233728 DEBUG oslo_concurrency.lockutils [req-999dd2f5-2766-42bc-86b1-e36f7ae1d019 req-20d3ee74-08af-481f-806c-5fcf46c7b3a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "4cfb409d-50a4-4256-83f3-6aef192ab489-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:39 np0005539552 nova_compute[233724]: 2025-11-29 08:24:39.788 233728 DEBUG oslo_concurrency.lockutils [req-999dd2f5-2766-42bc-86b1-e36f7ae1d019 req-20d3ee74-08af-481f-806c-5fcf46c7b3a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "4cfb409d-50a4-4256-83f3-6aef192ab489-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:39 np0005539552 nova_compute[233724]: 2025-11-29 08:24:39.788 233728 DEBUG oslo_concurrency.lockutils [req-999dd2f5-2766-42bc-86b1-e36f7ae1d019 req-20d3ee74-08af-481f-806c-5fcf46c7b3a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "4cfb409d-50a4-4256-83f3-6aef192ab489-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:39 np0005539552 nova_compute[233724]: 2025-11-29 08:24:39.789 233728 DEBUG nova.compute.manager [req-999dd2f5-2766-42bc-86b1-e36f7ae1d019 req-20d3ee74-08af-481f-806c-5fcf46c7b3a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] No waiting events found dispatching network-vif-plugged-d090c296-9e92-42d2-9d11-5d9eb058d748 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:24:39 np0005539552 nova_compute[233724]: 2025-11-29 08:24:39.789 233728 WARNING nova.compute.manager [req-999dd2f5-2766-42bc-86b1-e36f7ae1d019 req-20d3ee74-08af-481f-806c-5fcf46c7b3a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Received unexpected event network-vif-plugged-d090c296-9e92-42d2-9d11-5d9eb058d748 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:24:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:39.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:40 np0005539552 nova_compute[233724]: 2025-11-29 08:24:40.694 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:40.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:40 np0005539552 nova_compute[233724]: 2025-11-29 08:24:40.918 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:24:40 np0005539552 nova_compute[233724]: 2025-11-29 08:24:40.922 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:24:40 np0005539552 nova_compute[233724]: 2025-11-29 08:24:40.923 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:24:40 np0005539552 nova_compute[233724]: 2025-11-29 08:24:40.923 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:24:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:40.957 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:41 np0005539552 nova_compute[233724]: 2025-11-29 08:24:41.095 233728 DEBUG nova.compute.manager [req-721ac8e6-dc9a-403e-a802-628ec599446d req-33e974eb-4d9a-4e8a-96bb-35fb5c5dfd61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Received event network-changed-270b7a06-5cdd-4855-a693-0b30baf78df7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:24:41 np0005539552 nova_compute[233724]: 2025-11-29 08:24:41.096 233728 DEBUG nova.compute.manager [req-721ac8e6-dc9a-403e-a802-628ec599446d req-33e974eb-4d9a-4e8a-96bb-35fb5c5dfd61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Refreshing instance network info cache due to event network-changed-270b7a06-5cdd-4855-a693-0b30baf78df7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:24:41 np0005539552 nova_compute[233724]: 2025-11-29 08:24:41.096 233728 DEBUG oslo_concurrency.lockutils [req-721ac8e6-dc9a-403e-a802-628ec599446d req-33e974eb-4d9a-4e8a-96bb-35fb5c5dfd61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-2e7e8742-c504-412d-82cf-4087bd745c3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:24:41 np0005539552 nova_compute[233724]: 2025-11-29 08:24:41.252 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "refresh_cache-07f760bf-6984-45e9-8e85-3d297e812553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:24:41 np0005539552 nova_compute[233724]: 2025-11-29 08:24:41.253 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquired lock "refresh_cache-07f760bf-6984-45e9-8e85-3d297e812553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:24:41 np0005539552 nova_compute[233724]: 2025-11-29 08:24:41.254 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:24:41 np0005539552 nova_compute[233724]: 2025-11-29 08:24:41.255 233728 DEBUG nova.objects.instance [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 07f760bf-6984-45e9-8e85-3d297e812553 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:24:41 np0005539552 nova_compute[233724]: 2025-11-29 08:24:41.498 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:41 np0005539552 nova_compute[233724]: 2025-11-29 08:24:41.565 233728 DEBUG nova.network.neutron [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Updating instance_info_cache with network_info: [{"id": "270b7a06-5cdd-4855-a693-0b30baf78df7", "address": "fa:16:3e:74:cb:18", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap270b7a06-5c", "ovs_interfaceid": "270b7a06-5cdd-4855-a693-0b30baf78df7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:24:41 np0005539552 nova_compute[233724]: 2025-11-29 08:24:41.623 233728 DEBUG oslo_concurrency.lockutils [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Releasing lock "refresh_cache-2e7e8742-c504-412d-82cf-4087bd745c3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:24:41 np0005539552 nova_compute[233724]: 2025-11-29 08:24:41.630 233728 DEBUG oslo_concurrency.lockutils [req-721ac8e6-dc9a-403e-a802-628ec599446d req-33e974eb-4d9a-4e8a-96bb-35fb5c5dfd61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-2e7e8742-c504-412d-82cf-4087bd745c3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:24:41 np0005539552 nova_compute[233724]: 2025-11-29 08:24:41.631 233728 DEBUG nova.network.neutron [req-721ac8e6-dc9a-403e-a802-628ec599446d req-33e974eb-4d9a-4e8a-96bb-35fb5c5dfd61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Refreshing network info cache for port 270b7a06-5cdd-4855-a693-0b30baf78df7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:24:41 np0005539552 nova_compute[233724]: 2025-11-29 08:24:41.744 233728 DEBUG os_brick.utils [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:24:41 np0005539552 nova_compute[233724]: 2025-11-29 08:24:41.746 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:41 np0005539552 nova_compute[233724]: 2025-11-29 08:24:41.758 243418 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:41 np0005539552 nova_compute[233724]: 2025-11-29 08:24:41.759 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[e4897f99-d8d6-4473-8c88-f2e8eb0c58ec]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:41 np0005539552 nova_compute[233724]: 2025-11-29 08:24:41.761 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:41 np0005539552 nova_compute[233724]: 2025-11-29 08:24:41.770 243418 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:41 np0005539552 nova_compute[233724]: 2025-11-29 08:24:41.771 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[ec0ab6ef-c7f9-44f5-b38c-05be82c92438]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0e73df9328', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:41 np0005539552 nova_compute[233724]: 2025-11-29 08:24:41.773 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:41 np0005539552 nova_compute[233724]: 2025-11-29 08:24:41.784 243418 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:41 np0005539552 nova_compute[233724]: 2025-11-29 08:24:41.784 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[1dcdaeee-3b64-4653-8148-1011fef81aef]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:41 np0005539552 nova_compute[233724]: 2025-11-29 08:24:41.786 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[84040aa1-8f25-4c59-9715-d331f050ed33]: (4, '6fbde64a-d978-4f1a-a29d-a77e1f5a1987') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:41 np0005539552 nova_compute[233724]: 2025-11-29 08:24:41.787 233728 DEBUG oslo_concurrency.processutils [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:41 np0005539552 nova_compute[233724]: 2025-11-29 08:24:41.817 233728 DEBUG oslo_concurrency.processutils [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "nvme version" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:41 np0005539552 nova_compute[233724]: 2025-11-29 08:24:41.821 233728 DEBUG os_brick.initiator.connectors.lightos [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:24:41 np0005539552 nova_compute[233724]: 2025-11-29 08:24:41.821 233728 DEBUG os_brick.initiator.connectors.lightos [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:24:41 np0005539552 nova_compute[233724]: 2025-11-29 08:24:41.821 233728 DEBUG os_brick.initiator.connectors.lightos [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:24:41 np0005539552 nova_compute[233724]: 2025-11-29 08:24:41.822 233728 DEBUG os_brick.utils [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] <== get_connector_properties: return (77ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0e73df9328', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '6fbde64a-d978-4f1a-a29d-a77e1f5a1987', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:24:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:41.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:24:42 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4171420990' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:24:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:42.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:42 np0005539552 podman[288500]: 2025-11-29 08:24:42.9771635 +0000 UTC m=+0.055833633 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:24:42 np0005539552 podman[288499]: 2025-11-29 08:24:42.981871637 +0000 UTC m=+0.060359675 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:24:43 np0005539552 podman[288501]: 2025-11-29 08:24:43.021672588 +0000 UTC m=+0.094314589 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.067 233728 DEBUG nova.virt.libvirt.driver [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.070 233728 DEBUG nova.virt.libvirt.driver [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.070 233728 INFO nova.virt.libvirt.driver [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Creating image(s)#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.071 233728 DEBUG nova.virt.libvirt.driver [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.071 233728 DEBUG nova.virt.libvirt.driver [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Ensure instance console log exists: /var/lib/nova/instances/2e7e8742-c504-412d-82cf-4087bd745c3e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.071 233728 DEBUG oslo_concurrency.lockutils [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.072 233728 DEBUG oslo_concurrency.lockutils [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.072 233728 DEBUG oslo_concurrency.lockutils [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.075 233728 DEBUG nova.virt.libvirt.driver [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Start _get_guest_xml network_info=[{"id": "270b7a06-5cdd-4855-a693-0b30baf78df7", "address": "fa:16:3e:74:cb:18", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-644729119-network", "vif_mac": "fa:16:3e:74:cb:18"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap270b7a06-5c", "ovs_interfaceid": "270b7a06-5cdd-4855-a693-0b30baf78df7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-d807030f-7b93-4396-9211-17a740c6b338', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'd807030f-7b93-4396-9211-17a740c6b338', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'attaching', 'instance': '2e7e8742-c504-412d-82cf-4087bd745c3e', 'attached_at': '2025-11-29T08:24:42.000000', 'detached_at': '', 'volume_id': 'd807030f-7b93-4396-9211-17a740c6b338', 'serial': 'd807030f-7b93-4396-9211-17a740c6b338'}, 'delete_on_termination': True, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'attachment_id': '7c339ab5-eb8a-44ea-b6fd-0808065b0014', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.080 233728 WARNING nova.virt.libvirt.driver [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.085 233728 DEBUG nova.virt.libvirt.host [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.087 233728 DEBUG nova.virt.libvirt.host [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.094 233728 DEBUG nova.virt.libvirt.host [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.095 233728 DEBUG nova.virt.libvirt.host [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.096 233728 DEBUG nova.virt.libvirt.driver [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.097 233728 DEBUG nova.virt.hardware [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='709b029f-0458-4e40-a6ee-e1e02b48c06c',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.097 233728 DEBUG nova.virt.hardware [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.097 233728 DEBUG nova.virt.hardware [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.098 233728 DEBUG nova.virt.hardware [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.098 233728 DEBUG nova.virt.hardware [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.098 233728 DEBUG nova.virt.hardware [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.098 233728 DEBUG nova.virt.hardware [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.099 233728 DEBUG nova.virt.hardware [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.099 233728 DEBUG nova.virt.hardware [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.099 233728 DEBUG nova.virt.hardware [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.099 233728 DEBUG nova.virt.hardware [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.100 233728 DEBUG nova.objects.instance [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 2e7e8742-c504-412d-82cf-4087bd745c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.154 233728 DEBUG oslo_concurrency.processutils [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.500 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Updating instance_info_cache with network_info: [{"id": "5511e511-2310-4811-8313-3722fcf49758", "address": "fa:16:3e:f3:b9:47", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5511e511-23", "ovs_interfaceid": "5511e511-2310-4811-8313-3722fcf49758", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.536 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Releasing lock "refresh_cache-07f760bf-6984-45e9-8e85-3d297e812553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.537 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:24:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:24:43 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1795713750' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.608 233728 DEBUG oslo_concurrency.processutils [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.646 233728 DEBUG nova.virt.libvirt.vif [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:23:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1414239367',display_name='tempest-ServerActionsTestOtherA-server-1414239367',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1414239367',id=130,image_ref='',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGZDzu/2PA5Jq1/mLvX2aaGG/WgUsRbb7Dsx3sFYSYL50dOuvFvn9ZiS3sRkHwVTZXl3/vg+NRcU0ds7Zzbdh2bvajGjb9Qxq1UtC5+8x+Wx/kUkrK3lVnVkeCLnrxzmbg==',key_name='tempest-keypair-186857524',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:24:05Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='0bace34c102e4d56b089fd695d324f10',ramdisk_id='',reservation_id='r-sj7sy73r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherA-1954650991',owner_user_name='tempest-ServerActionsTestOtherA-1954650991-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:24:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1552f15deb524705a9456cbe9b54c429',uuid=2e7e8742-c504-412d-82cf-4087bd745c3e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "270b7a06-5cdd-4855-a693-0b30baf78df7", "address": "fa:16:3e:74:cb:18", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": 
[{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-644729119-network", "vif_mac": "fa:16:3e:74:cb:18"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap270b7a06-5c", "ovs_interfaceid": "270b7a06-5cdd-4855-a693-0b30baf78df7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.647 233728 DEBUG nova.network.os_vif_util [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converting VIF {"id": "270b7a06-5cdd-4855-a693-0b30baf78df7", "address": "fa:16:3e:74:cb:18", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-644729119-network", "vif_mac": "fa:16:3e:74:cb:18"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap270b7a06-5c", "ovs_interfaceid": "270b7a06-5cdd-4855-a693-0b30baf78df7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.648 233728 DEBUG nova.network.os_vif_util [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:74:cb:18,bridge_name='br-int',has_traffic_filtering=True,id=270b7a06-5cdd-4855-a693-0b30baf78df7,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap270b7a06-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.651 233728 DEBUG nova.virt.libvirt.driver [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:24:43 np0005539552 nova_compute[233724]:  <uuid>2e7e8742-c504-412d-82cf-4087bd745c3e</uuid>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:  <name>instance-00000082</name>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:  <memory>196608</memory>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:24:43 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:      <nova:name>tempest-ServerActionsTestOtherA-server-1414239367</nova:name>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:24:43</nova:creationTime>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.micro">
Nov 29 03:24:43 np0005539552 nova_compute[233724]:        <nova:memory>192</nova:memory>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:        <nova:user uuid="1552f15deb524705a9456cbe9b54c429">tempest-ServerActionsTestOtherA-1954650991-project-member</nova:user>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:        <nova:project uuid="0bace34c102e4d56b089fd695d324f10">tempest-ServerActionsTestOtherA-1954650991</nova:project>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:        <nova:port uuid="270b7a06-5cdd-4855-a693-0b30baf78df7">
Nov 29 03:24:43 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:      <entry name="serial">2e7e8742-c504-412d-82cf-4087bd745c3e</entry>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:      <entry name="uuid">2e7e8742-c504-412d-82cf-4087bd745c3e</entry>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:24:43 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/2e7e8742-c504-412d-82cf-4087bd745c3e_disk.config">
Nov 29 03:24:43 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:24:43 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:24:43 np0005539552 nova_compute[233724]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="volumes/volume-d807030f-7b93-4396-9211-17a740c6b338">
Nov 29 03:24:43 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:24:43 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:      <serial>d807030f-7b93-4396-9211-17a740c6b338</serial>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:24:43 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:74:cb:18"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:      <target dev="tap270b7a06-5c"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:24:43 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/2e7e8742-c504-412d-82cf-4087bd745c3e/console.log" append="off"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:24:43 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:24:43 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:24:43 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:24:43 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:24:43 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.658 233728 DEBUG nova.virt.libvirt.vif [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:23:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1414239367',display_name='tempest-ServerActionsTestOtherA-server-1414239367',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1414239367',id=130,image_ref='',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGZDzu/2PA5Jq1/mLvX2aaGG/WgUsRbb7Dsx3sFYSYL50dOuvFvn9ZiS3sRkHwVTZXl3/vg+NRcU0ds7Zzbdh2bvajGjb9Qxq1UtC5+8x+Wx/kUkrK3lVnVkeCLnrxzmbg==',key_name='tempest-keypair-186857524',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:24:05Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='0bace34c102e4d56b089fd695d324f10',ramdisk_id='',reservation_id='r-sj7sy73r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherA-1954650991',owner_user_name='tempest-ServerActionsTestOtherA-1954650991-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:24:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1552f15deb524705a9456cbe9b54c429',uuid=2e7e8742-c504-412d-82cf-4087bd745c3e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "270b7a06-5cdd-4855-a693-0b30baf78df7", "address": "fa:16:3e:74:cb:18", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": 
[{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-644729119-network", "vif_mac": "fa:16:3e:74:cb:18"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap270b7a06-5c", "ovs_interfaceid": "270b7a06-5cdd-4855-a693-0b30baf78df7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.659 233728 DEBUG nova.network.os_vif_util [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converting VIF {"id": "270b7a06-5cdd-4855-a693-0b30baf78df7", "address": "fa:16:3e:74:cb:18", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-644729119-network", "vif_mac": "fa:16:3e:74:cb:18"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap270b7a06-5c", "ovs_interfaceid": "270b7a06-5cdd-4855-a693-0b30baf78df7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.660 233728 DEBUG nova.network.os_vif_util [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:74:cb:18,bridge_name='br-int',has_traffic_filtering=True,id=270b7a06-5cdd-4855-a693-0b30baf78df7,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap270b7a06-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.660 233728 DEBUG os_vif [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:74:cb:18,bridge_name='br-int',has_traffic_filtering=True,id=270b7a06-5cdd-4855-a693-0b30baf78df7,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap270b7a06-5c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.661 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.662 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.662 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.667 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.668 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap270b7a06-5c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.668 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap270b7a06-5c, col_values=(('external_ids', {'iface-id': '270b7a06-5cdd-4855-a693-0b30baf78df7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:74:cb:18', 'vm-uuid': '2e7e8742-c504-412d-82cf-4087bd745c3e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.670 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:43 np0005539552 NetworkManager[48926]: <info>  [1764404683.6720] manager: (tap270b7a06-5c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/262)
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.674 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.679 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.680 233728 INFO os_vif [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:74:cb:18,bridge_name='br-int',has_traffic_filtering=True,id=270b7a06-5cdd-4855-a693-0b30baf78df7,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap270b7a06-5c')#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.734 233728 DEBUG nova.virt.libvirt.driver [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.735 233728 DEBUG nova.virt.libvirt.driver [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.735 233728 DEBUG nova.virt.libvirt.driver [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] No VIF found with MAC fa:16:3e:74:cb:18, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.736 233728 INFO nova.virt.libvirt.driver [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Using config drive#033[00m
Nov 29 03:24:43 np0005539552 kernel: tap270b7a06-5c: entered promiscuous mode
Nov 29 03:24:43 np0005539552 NetworkManager[48926]: <info>  [1764404683.8309] manager: (tap270b7a06-5c): new Tun device (/org/freedesktop/NetworkManager/Devices/263)
Nov 29 03:24:43 np0005539552 ovn_controller[133798]: 2025-11-29T08:24:43Z|00573|binding|INFO|Claiming lport 270b7a06-5cdd-4855-a693-0b30baf78df7 for this chassis.
Nov 29 03:24:43 np0005539552 ovn_controller[133798]: 2025-11-29T08:24:43Z|00574|binding|INFO|270b7a06-5cdd-4855-a693-0b30baf78df7: Claiming fa:16:3e:74:cb:18 10.100.0.14
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.840 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:43.850 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:74:cb:18 10.100.0.14'], port_security=['fa:16:3e:74:cb:18 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '2e7e8742-c504-412d-82cf-4087bd745c3e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0bace34c102e4d56b089fd695d324f10', 'neutron:revision_number': '6', 'neutron:security_group_ids': '7465c0fc-60f6-4695-93cd-f6ab8b97c365', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.226'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a26ea06d-6837-4c64-a5e9-9d9016316b21, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=270b7a06-5cdd-4855-a693-0b30baf78df7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:24:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:43.851 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 270b7a06-5cdd-4855-a693-0b30baf78df7 in datapath 7fc1dfc3-8d7f-4854-980d-37a93f366035 bound to our chassis#033[00m
Nov 29 03:24:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:43.853 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7fc1dfc3-8d7f-4854-980d-37a93f366035#033[00m
Nov 29 03:24:43 np0005539552 ovn_controller[133798]: 2025-11-29T08:24:43Z|00575|binding|INFO|Setting lport 270b7a06-5cdd-4855-a693-0b30baf78df7 ovn-installed in OVS
Nov 29 03:24:43 np0005539552 ovn_controller[133798]: 2025-11-29T08:24:43Z|00576|binding|INFO|Setting lport 270b7a06-5cdd-4855-a693-0b30baf78df7 up in Southbound
Nov 29 03:24:43 np0005539552 nova_compute[233724]: 2025-11-29 08:24:43.872 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:43.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:43 np0005539552 systemd-udevd[288631]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:24:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:43.881 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[63225fe4-a467-442c-a984-5c4387c9b1e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:43 np0005539552 systemd-machined[196379]: New machine qemu-57-instance-00000082.
Nov 29 03:24:43 np0005539552 NetworkManager[48926]: <info>  [1764404683.8951] device (tap270b7a06-5c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:24:43 np0005539552 NetworkManager[48926]: <info>  [1764404683.8959] device (tap270b7a06-5c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:24:43 np0005539552 systemd[1]: Started Virtual Machine qemu-57-instance-00000082.
Nov 29 03:24:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:43.914 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[6107c49b-046f-45e2-907a-988d98887079]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:43.919 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[18f145d7-3ee2-4f39-a92d-a3abb3eb7eb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:43.949 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[1fffb3fa-7f99-4a8c-b0b0-77aba864d453]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:43.984 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[60cdd069-eace-4b35-a91d-cb24c4f736ba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7fc1dfc3-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:27:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 152], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 747481, 'reachable_time': 35298, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288644, 'error': None, 'target': 'ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:43.999 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[5163914b-0815-4a67-82d1-bc0fc06c2776]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7fc1dfc3-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 747491, 'tstamp': 747491}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288646, 'error': None, 'target': 'ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7fc1dfc3-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 747494, 'tstamp': 747494}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288646, 'error': None, 'target': 'ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:44.001 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7fc1dfc3-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:44 np0005539552 nova_compute[233724]: 2025-11-29 08:24:44.002 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:44 np0005539552 nova_compute[233724]: 2025-11-29 08:24:44.004 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:44.004 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7fc1dfc3-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:44.005 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:24:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:44.005 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7fc1dfc3-80, col_values=(('external_ids', {'iface-id': '79109459-2a40-4b69-936e-ac2a2aa77985'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:44.005 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:24:44 np0005539552 nova_compute[233724]: 2025-11-29 08:24:44.389 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404684.3888314, 2e7e8742-c504-412d-82cf-4087bd745c3e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:24:44 np0005539552 nova_compute[233724]: 2025-11-29 08:24:44.390 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:24:44 np0005539552 nova_compute[233724]: 2025-11-29 08:24:44.392 233728 DEBUG nova.compute.manager [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:24:44 np0005539552 nova_compute[233724]: 2025-11-29 08:24:44.396 233728 INFO nova.virt.libvirt.driver [-] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Instance running successfully.#033[00m
Nov 29 03:24:44 np0005539552 virtqemud[233098]: argument unsupported: QEMU guest agent is not configured
Nov 29 03:24:44 np0005539552 nova_compute[233724]: 2025-11-29 08:24:44.400 233728 DEBUG nova.virt.libvirt.guest [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Nov 29 03:24:44 np0005539552 nova_compute[233724]: 2025-11-29 08:24:44.400 233728 DEBUG nova.virt.libvirt.driver [None req-f8a9c80f-6700-4260-921a-7b31319b47c2 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Nov 29 03:24:44 np0005539552 nova_compute[233724]: 2025-11-29 08:24:44.416 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:24:44 np0005539552 nova_compute[233724]: 2025-11-29 08:24:44.424 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:24:44 np0005539552 nova_compute[233724]: 2025-11-29 08:24:44.463 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Nov 29 03:24:44 np0005539552 nova_compute[233724]: 2025-11-29 08:24:44.464 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404684.3904881, 2e7e8742-c504-412d-82cf-4087bd745c3e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:24:44 np0005539552 nova_compute[233724]: 2025-11-29 08:24:44.464 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] VM Started (Lifecycle Event)#033[00m
Nov 29 03:24:44 np0005539552 nova_compute[233724]: 2025-11-29 08:24:44.506 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:24:44 np0005539552 nova_compute[233724]: 2025-11-29 08:24:44.511 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:24:44 np0005539552 nova_compute[233724]: 2025-11-29 08:24:44.537 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Nov 29 03:24:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:44.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:44 np0005539552 nova_compute[233724]: 2025-11-29 08:24:44.904 233728 DEBUG oslo_concurrency.lockutils [None req-00ecf5d2-8b23-4642-b169-874e3b4d1a64 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Acquiring lock "4cfb409d-50a4-4256-83f3-6aef192ab489" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:44 np0005539552 nova_compute[233724]: 2025-11-29 08:24:44.905 233728 DEBUG oslo_concurrency.lockutils [None req-00ecf5d2-8b23-4642-b169-874e3b4d1a64 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Lock "4cfb409d-50a4-4256-83f3-6aef192ab489" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:44 np0005539552 nova_compute[233724]: 2025-11-29 08:24:44.905 233728 DEBUG oslo_concurrency.lockutils [None req-00ecf5d2-8b23-4642-b169-874e3b4d1a64 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Acquiring lock "4cfb409d-50a4-4256-83f3-6aef192ab489-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:44 np0005539552 nova_compute[233724]: 2025-11-29 08:24:44.905 233728 DEBUG oslo_concurrency.lockutils [None req-00ecf5d2-8b23-4642-b169-874e3b4d1a64 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Lock "4cfb409d-50a4-4256-83f3-6aef192ab489-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:44 np0005539552 nova_compute[233724]: 2025-11-29 08:24:44.905 233728 DEBUG oslo_concurrency.lockutils [None req-00ecf5d2-8b23-4642-b169-874e3b4d1a64 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Lock "4cfb409d-50a4-4256-83f3-6aef192ab489-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:44 np0005539552 nova_compute[233724]: 2025-11-29 08:24:44.907 233728 INFO nova.compute.manager [None req-00ecf5d2-8b23-4642-b169-874e3b4d1a64 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Terminating instance#033[00m
Nov 29 03:24:44 np0005539552 nova_compute[233724]: 2025-11-29 08:24:44.907 233728 DEBUG nova.compute.manager [None req-00ecf5d2-8b23-4642-b169-874e3b4d1a64 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:24:44 np0005539552 kernel: tapd090c296-9e (unregistering): left promiscuous mode
Nov 29 03:24:44 np0005539552 NetworkManager[48926]: <info>  [1764404684.9556] device (tapd090c296-9e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:24:44 np0005539552 ovn_controller[133798]: 2025-11-29T08:24:44Z|00577|binding|INFO|Releasing lport d090c296-9e92-42d2-9d11-5d9eb058d748 from this chassis (sb_readonly=0)
Nov 29 03:24:44 np0005539552 ovn_controller[133798]: 2025-11-29T08:24:44Z|00578|binding|INFO|Setting lport d090c296-9e92-42d2-9d11-5d9eb058d748 down in Southbound
Nov 29 03:24:44 np0005539552 ovn_controller[133798]: 2025-11-29T08:24:44Z|00579|binding|INFO|Removing iface tapd090c296-9e ovn-installed in OVS
Nov 29 03:24:44 np0005539552 nova_compute[233724]: 2025-11-29 08:24:44.966 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:44.975 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:2f:8f 10.100.0.10'], port_security=['fa:16:3e:0b:2f:8f 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4cfb409d-50a4-4256-83f3-6aef192ab489', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c2c49bb3-4207-40c0-a26d-1cede90f8b00', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '252fdae97c7e4dec97ac9a23d72747cc', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ec5ce994-6de9-4b70-abd4-89e60c4a4274', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4dfbacd5-9971-4e30-ae61-a099b062f3b5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=d090c296-9e92-42d2-9d11-5d9eb058d748) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:24:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:44.977 143400 INFO neutron.agent.ovn.metadata.agent [-] Port d090c296-9e92-42d2-9d11-5d9eb058d748 in datapath c2c49bb3-4207-40c0-a26d-1cede90f8b00 unbound from our chassis#033[00m
Nov 29 03:24:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:44.978 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c2c49bb3-4207-40c0-a26d-1cede90f8b00, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:24:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:44.979 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[34f5c7ca-ba50-4067-a1cd-6352b2fd0a86]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:44.980 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c2c49bb3-4207-40c0-a26d-1cede90f8b00 namespace which is not needed anymore#033[00m
Nov 29 03:24:44 np0005539552 nova_compute[233724]: 2025-11-29 08:24:44.982 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:45 np0005539552 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000083.scope: Deactivated successfully.
Nov 29 03:24:45 np0005539552 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000083.scope: Consumed 7.852s CPU time.
Nov 29 03:24:45 np0005539552 systemd-machined[196379]: Machine qemu-56-instance-00000083 terminated.
Nov 29 03:24:45 np0005539552 NetworkManager[48926]: <info>  [1764404685.1269] manager: (tapd090c296-9e): new Tun device (/org/freedesktop/NetworkManager/Devices/264)
Nov 29 03:24:45 np0005539552 neutron-haproxy-ovnmeta-c2c49bb3-4207-40c0-a26d-1cede90f8b00[288422]: [NOTICE]   (288426) : haproxy version is 2.8.14-c23fe91
Nov 29 03:24:45 np0005539552 neutron-haproxy-ovnmeta-c2c49bb3-4207-40c0-a26d-1cede90f8b00[288422]: [NOTICE]   (288426) : path to executable is /usr/sbin/haproxy
Nov 29 03:24:45 np0005539552 neutron-haproxy-ovnmeta-c2c49bb3-4207-40c0-a26d-1cede90f8b00[288422]: [WARNING]  (288426) : Exiting Master process...
Nov 29 03:24:45 np0005539552 neutron-haproxy-ovnmeta-c2c49bb3-4207-40c0-a26d-1cede90f8b00[288422]: [ALERT]    (288426) : Current worker (288428) exited with code 143 (Terminated)
Nov 29 03:24:45 np0005539552 neutron-haproxy-ovnmeta-c2c49bb3-4207-40c0-a26d-1cede90f8b00[288422]: [WARNING]  (288426) : All workers exited. Exiting... (0)
Nov 29 03:24:45 np0005539552 systemd[1]: libpod-0ecbc241975e4248a51836be4dc215f6b9d26c5b39aedb69ce04536fb4c6946a.scope: Deactivated successfully.
Nov 29 03:24:45 np0005539552 nova_compute[233724]: 2025-11-29 08:24:45.143 233728 INFO nova.virt.libvirt.driver [-] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Instance destroyed successfully.#033[00m
Nov 29 03:24:45 np0005539552 nova_compute[233724]: 2025-11-29 08:24:45.144 233728 DEBUG nova.objects.instance [None req-00ecf5d2-8b23-4642-b169-874e3b4d1a64 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Lazy-loading 'resources' on Instance uuid 4cfb409d-50a4-4256-83f3-6aef192ab489 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:24:45 np0005539552 nova_compute[233724]: 2025-11-29 08:24:45.148 233728 DEBUG nova.network.neutron [req-721ac8e6-dc9a-403e-a802-628ec599446d req-33e974eb-4d9a-4e8a-96bb-35fb5c5dfd61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Updated VIF entry in instance network info cache for port 270b7a06-5cdd-4855-a693-0b30baf78df7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:24:45 np0005539552 nova_compute[233724]: 2025-11-29 08:24:45.148 233728 DEBUG nova.network.neutron [req-721ac8e6-dc9a-403e-a802-628ec599446d req-33e974eb-4d9a-4e8a-96bb-35fb5c5dfd61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Updating instance_info_cache with network_info: [{"id": "270b7a06-5cdd-4855-a693-0b30baf78df7", "address": "fa:16:3e:74:cb:18", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap270b7a06-5c", "ovs_interfaceid": "270b7a06-5cdd-4855-a693-0b30baf78df7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:24:45 np0005539552 podman[288711]: 2025-11-29 08:24:45.15104547 +0000 UTC m=+0.058675310 container died 0ecbc241975e4248a51836be4dc215f6b9d26c5b39aedb69ce04536fb4c6946a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c2c49bb3-4207-40c0-a26d-1cede90f8b00, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:24:45 np0005539552 nova_compute[233724]: 2025-11-29 08:24:45.176 233728 DEBUG oslo_concurrency.lockutils [req-721ac8e6-dc9a-403e-a802-628ec599446d req-33e974eb-4d9a-4e8a-96bb-35fb5c5dfd61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-2e7e8742-c504-412d-82cf-4087bd745c3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:24:45 np0005539552 nova_compute[233724]: 2025-11-29 08:24:45.177 233728 DEBUG nova.virt.libvirt.vif [None req-00ecf5d2-8b23-4642-b169-874e3b4d1a64 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:24:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-938624690',display_name='tempest-ServerMetadataNegativeTestJSON-server-938624690',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-938624690',id=131,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:24:38Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='252fdae97c7e4dec97ac9a23d72747cc',ramdisk_id='',reservation_id='r-2jhpkygu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataNegativeTestJSON-1540077054',owner_user_name='tempest-ServerMetadataNegativeTestJSON-1540077054-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:24:38Z,user_data=None,user_id='d49b9d546af249fc80268c1bdd5884f6',uuid=4cfb409d-50a4-4256-83f3-6aef192ab489,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d090c296-9e92-42d2-9d11-5d9eb058d748", "address": "fa:16:3e:0b:2f:8f", "network": {"id": "c2c49bb3-4207-40c0-a26d-1cede90f8b00", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1464114744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "252fdae97c7e4dec97ac9a23d72747cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd090c296-9e", "ovs_interfaceid": "d090c296-9e92-42d2-9d11-5d9eb058d748", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:24:45 np0005539552 nova_compute[233724]: 2025-11-29 08:24:45.178 233728 DEBUG nova.network.os_vif_util [None req-00ecf5d2-8b23-4642-b169-874e3b4d1a64 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Converting VIF {"id": "d090c296-9e92-42d2-9d11-5d9eb058d748", "address": "fa:16:3e:0b:2f:8f", "network": {"id": "c2c49bb3-4207-40c0-a26d-1cede90f8b00", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1464114744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "252fdae97c7e4dec97ac9a23d72747cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd090c296-9e", "ovs_interfaceid": "d090c296-9e92-42d2-9d11-5d9eb058d748", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:24:45 np0005539552 nova_compute[233724]: 2025-11-29 08:24:45.178 233728 DEBUG nova.network.os_vif_util [None req-00ecf5d2-8b23-4642-b169-874e3b4d1a64 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:2f:8f,bridge_name='br-int',has_traffic_filtering=True,id=d090c296-9e92-42d2-9d11-5d9eb058d748,network=Network(c2c49bb3-4207-40c0-a26d-1cede90f8b00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd090c296-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:24:45 np0005539552 nova_compute[233724]: 2025-11-29 08:24:45.179 233728 DEBUG os_vif [None req-00ecf5d2-8b23-4642-b169-874e3b4d1a64 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:2f:8f,bridge_name='br-int',has_traffic_filtering=True,id=d090c296-9e92-42d2-9d11-5d9eb058d748,network=Network(c2c49bb3-4207-40c0-a26d-1cede90f8b00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd090c296-9e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:24:45 np0005539552 nova_compute[233724]: 2025-11-29 08:24:45.180 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:45 np0005539552 nova_compute[233724]: 2025-11-29 08:24:45.181 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd090c296-9e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:45 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0ecbc241975e4248a51836be4dc215f6b9d26c5b39aedb69ce04536fb4c6946a-userdata-shm.mount: Deactivated successfully.
Nov 29 03:24:45 np0005539552 nova_compute[233724]: 2025-11-29 08:24:45.182 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:45 np0005539552 systemd[1]: var-lib-containers-storage-overlay-182bd6212002960d3126f362db40d7dee52374d690a4677a56bd81a1716c0ab9-merged.mount: Deactivated successfully.
Nov 29 03:24:45 np0005539552 nova_compute[233724]: 2025-11-29 08:24:45.184 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:24:45 np0005539552 nova_compute[233724]: 2025-11-29 08:24:45.186 233728 INFO os_vif [None req-00ecf5d2-8b23-4642-b169-874e3b4d1a64 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:2f:8f,bridge_name='br-int',has_traffic_filtering=True,id=d090c296-9e92-42d2-9d11-5d9eb058d748,network=Network(c2c49bb3-4207-40c0-a26d-1cede90f8b00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd090c296-9e')#033[00m
Nov 29 03:24:45 np0005539552 podman[288711]: 2025-11-29 08:24:45.194028216 +0000 UTC m=+0.101658036 container cleanup 0ecbc241975e4248a51836be4dc215f6b9d26c5b39aedb69ce04536fb4c6946a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c2c49bb3-4207-40c0-a26d-1cede90f8b00, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:24:45 np0005539552 systemd[1]: libpod-conmon-0ecbc241975e4248a51836be4dc215f6b9d26c5b39aedb69ce04536fb4c6946a.scope: Deactivated successfully.
Nov 29 03:24:45 np0005539552 podman[288762]: 2025-11-29 08:24:45.255350555 +0000 UTC m=+0.041401934 container remove 0ecbc241975e4248a51836be4dc215f6b9d26c5b39aedb69ce04536fb4c6946a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c2c49bb3-4207-40c0-a26d-1cede90f8b00, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:24:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:45.264 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[51ce73e6-5848-484a-b043-2b2ab1582fcc]: (4, ('Sat Nov 29 08:24:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c2c49bb3-4207-40c0-a26d-1cede90f8b00 (0ecbc241975e4248a51836be4dc215f6b9d26c5b39aedb69ce04536fb4c6946a)\n0ecbc241975e4248a51836be4dc215f6b9d26c5b39aedb69ce04536fb4c6946a\nSat Nov 29 08:24:45 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c2c49bb3-4207-40c0-a26d-1cede90f8b00 (0ecbc241975e4248a51836be4dc215f6b9d26c5b39aedb69ce04536fb4c6946a)\n0ecbc241975e4248a51836be4dc215f6b9d26c5b39aedb69ce04536fb4c6946a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:45.266 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e937caa7-c639-4ec6-9ca4-3893753ffd92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:45.267 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc2c49bb3-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:45 np0005539552 kernel: tapc2c49bb3-40: left promiscuous mode
Nov 29 03:24:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:45.273 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[517d8c8f-d057-4581-a059-61e69471bf6d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:45 np0005539552 nova_compute[233724]: 2025-11-29 08:24:45.269 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:45.285 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[8f166f3e-5e43-41be-8d27-26be27293f7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:45.286 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[677fc2d8-5244-4bc3-aebf-e620dec31e03]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:45 np0005539552 nova_compute[233724]: 2025-11-29 08:24:45.288 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:45 np0005539552 nova_compute[233724]: 2025-11-29 08:24:45.296 233728 DEBUG nova.compute.manager [req-63d259b7-a025-401a-a2b8-0a6d392f2c12 req-6d2a70ec-3ca8-4e33-80d9-4c48bfc9b8f5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Received event network-vif-unplugged-d090c296-9e92-42d2-9d11-5d9eb058d748 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:24:45 np0005539552 nova_compute[233724]: 2025-11-29 08:24:45.296 233728 DEBUG oslo_concurrency.lockutils [req-63d259b7-a025-401a-a2b8-0a6d392f2c12 req-6d2a70ec-3ca8-4e33-80d9-4c48bfc9b8f5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "4cfb409d-50a4-4256-83f3-6aef192ab489-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:45 np0005539552 nova_compute[233724]: 2025-11-29 08:24:45.299 233728 DEBUG oslo_concurrency.lockutils [req-63d259b7-a025-401a-a2b8-0a6d392f2c12 req-6d2a70ec-3ca8-4e33-80d9-4c48bfc9b8f5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "4cfb409d-50a4-4256-83f3-6aef192ab489-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:45 np0005539552 nova_compute[233724]: 2025-11-29 08:24:45.299 233728 DEBUG oslo_concurrency.lockutils [req-63d259b7-a025-401a-a2b8-0a6d392f2c12 req-6d2a70ec-3ca8-4e33-80d9-4c48bfc9b8f5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "4cfb409d-50a4-4256-83f3-6aef192ab489-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:45 np0005539552 nova_compute[233724]: 2025-11-29 08:24:45.300 233728 DEBUG nova.compute.manager [req-63d259b7-a025-401a-a2b8-0a6d392f2c12 req-6d2a70ec-3ca8-4e33-80d9-4c48bfc9b8f5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] No waiting events found dispatching network-vif-unplugged-d090c296-9e92-42d2-9d11-5d9eb058d748 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:24:45 np0005539552 nova_compute[233724]: 2025-11-29 08:24:45.300 233728 DEBUG nova.compute.manager [req-63d259b7-a025-401a-a2b8-0a6d392f2c12 req-6d2a70ec-3ca8-4e33-80d9-4c48bfc9b8f5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Received event network-vif-unplugged-d090c296-9e92-42d2-9d11-5d9eb058d748 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:24:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:45.302 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4e34fdd7-9b32-410c-9313-30a3f2cafb76]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 772291, 'reachable_time': 19851, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288785, 'error': None, 'target': 'ovnmeta-c2c49bb3-4207-40c0-a26d-1cede90f8b00', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:45.304 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c2c49bb3-4207-40c0-a26d-1cede90f8b00 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:24:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:24:45.305 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[3e19af49-307e-4be1-acdd-84b544587d45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:45 np0005539552 systemd[1]: run-netns-ovnmeta\x2dc2c49bb3\x2d4207\x2d40c0\x2da26d\x2d1cede90f8b00.mount: Deactivated successfully.
Nov 29 03:24:45 np0005539552 nova_compute[233724]: 2025-11-29 08:24:45.580 233728 INFO nova.virt.libvirt.driver [None req-00ecf5d2-8b23-4642-b169-874e3b4d1a64 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Deleting instance files /var/lib/nova/instances/4cfb409d-50a4-4256-83f3-6aef192ab489_del#033[00m
Nov 29 03:24:45 np0005539552 nova_compute[233724]: 2025-11-29 08:24:45.581 233728 INFO nova.virt.libvirt.driver [None req-00ecf5d2-8b23-4642-b169-874e3b4d1a64 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Deletion of /var/lib/nova/instances/4cfb409d-50a4-4256-83f3-6aef192ab489_del complete#033[00m
Nov 29 03:24:45 np0005539552 nova_compute[233724]: 2025-11-29 08:24:45.644 233728 INFO nova.compute.manager [None req-00ecf5d2-8b23-4642-b169-874e3b4d1a64 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Took 0.74 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:24:45 np0005539552 nova_compute[233724]: 2025-11-29 08:24:45.644 233728 DEBUG oslo.service.loopingcall [None req-00ecf5d2-8b23-4642-b169-874e3b4d1a64 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:24:45 np0005539552 nova_compute[233724]: 2025-11-29 08:24:45.645 233728 DEBUG nova.compute.manager [-] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:24:45 np0005539552 nova_compute[233724]: 2025-11-29 08:24:45.645 233728 DEBUG nova.network.neutron [-] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:24:45 np0005539552 nova_compute[233724]: 2025-11-29 08:24:45.705 233728 DEBUG nova.compute.manager [req-23b53c94-0d25-4518-ab99-cb4f6ef5848a req-1f36678e-ff8b-438f-8c76-96e3bc9113be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Received event network-vif-plugged-270b7a06-5cdd-4855-a693-0b30baf78df7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:24:45 np0005539552 nova_compute[233724]: 2025-11-29 08:24:45.705 233728 DEBUG oslo_concurrency.lockutils [req-23b53c94-0d25-4518-ab99-cb4f6ef5848a req-1f36678e-ff8b-438f-8c76-96e3bc9113be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "2e7e8742-c504-412d-82cf-4087bd745c3e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:45 np0005539552 nova_compute[233724]: 2025-11-29 08:24:45.705 233728 DEBUG oslo_concurrency.lockutils [req-23b53c94-0d25-4518-ab99-cb4f6ef5848a req-1f36678e-ff8b-438f-8c76-96e3bc9113be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2e7e8742-c504-412d-82cf-4087bd745c3e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:45 np0005539552 nova_compute[233724]: 2025-11-29 08:24:45.706 233728 DEBUG oslo_concurrency.lockutils [req-23b53c94-0d25-4518-ab99-cb4f6ef5848a req-1f36678e-ff8b-438f-8c76-96e3bc9113be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2e7e8742-c504-412d-82cf-4087bd745c3e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:45 np0005539552 nova_compute[233724]: 2025-11-29 08:24:45.706 233728 DEBUG nova.compute.manager [req-23b53c94-0d25-4518-ab99-cb4f6ef5848a req-1f36678e-ff8b-438f-8c76-96e3bc9113be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] No waiting events found dispatching network-vif-plugged-270b7a06-5cdd-4855-a693-0b30baf78df7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:24:45 np0005539552 nova_compute[233724]: 2025-11-29 08:24:45.706 233728 WARNING nova.compute.manager [req-23b53c94-0d25-4518-ab99-cb4f6ef5848a req-1f36678e-ff8b-438f-8c76-96e3bc9113be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Received unexpected event network-vif-plugged-270b7a06-5cdd-4855-a693-0b30baf78df7 for instance with vm_state resized and task_state None.#033[00m
Nov 29 03:24:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:45.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:46 np0005539552 nova_compute[233724]: 2025-11-29 08:24:46.501 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:46 np0005539552 nova_compute[233724]: 2025-11-29 08:24:46.623 233728 DEBUG nova.network.neutron [-] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:24:46 np0005539552 nova_compute[233724]: 2025-11-29 08:24:46.695 233728 INFO nova.compute.manager [-] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Took 1.05 seconds to deallocate network for instance.#033[00m
Nov 29 03:24:46 np0005539552 nova_compute[233724]: 2025-11-29 08:24:46.799 233728 DEBUG oslo_concurrency.lockutils [None req-00ecf5d2-8b23-4642-b169-874e3b4d1a64 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:46 np0005539552 nova_compute[233724]: 2025-11-29 08:24:46.800 233728 DEBUG oslo_concurrency.lockutils [None req-00ecf5d2-8b23-4642-b169-874e3b4d1a64 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:46.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:46 np0005539552 nova_compute[233724]: 2025-11-29 08:24:46.898 233728 DEBUG oslo_concurrency.processutils [None req-00ecf5d2-8b23-4642-b169-874e3b4d1a64 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:24:47 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/528307141' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:24:47 np0005539552 nova_compute[233724]: 2025-11-29 08:24:47.360 233728 DEBUG oslo_concurrency.processutils [None req-00ecf5d2-8b23-4642-b169-874e3b4d1a64 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:47 np0005539552 nova_compute[233724]: 2025-11-29 08:24:47.368 233728 DEBUG nova.compute.provider_tree [None req-00ecf5d2-8b23-4642-b169-874e3b4d1a64 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:24:47 np0005539552 nova_compute[233724]: 2025-11-29 08:24:47.394 233728 DEBUG nova.scheduler.client.report [None req-00ecf5d2-8b23-4642-b169-874e3b4d1a64 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:24:47 np0005539552 nova_compute[233724]: 2025-11-29 08:24:47.444 233728 DEBUG nova.compute.manager [req-408ac8c9-cc62-4db1-b13b-d88e24fabae4 req-4492672d-abf4-44b3-8575-ab41775b06d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Received event network-vif-plugged-d090c296-9e92-42d2-9d11-5d9eb058d748 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:24:47 np0005539552 nova_compute[233724]: 2025-11-29 08:24:47.445 233728 DEBUG oslo_concurrency.lockutils [req-408ac8c9-cc62-4db1-b13b-d88e24fabae4 req-4492672d-abf4-44b3-8575-ab41775b06d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "4cfb409d-50a4-4256-83f3-6aef192ab489-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:47 np0005539552 nova_compute[233724]: 2025-11-29 08:24:47.445 233728 DEBUG oslo_concurrency.lockutils [req-408ac8c9-cc62-4db1-b13b-d88e24fabae4 req-4492672d-abf4-44b3-8575-ab41775b06d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "4cfb409d-50a4-4256-83f3-6aef192ab489-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:47 np0005539552 nova_compute[233724]: 2025-11-29 08:24:47.445 233728 DEBUG oslo_concurrency.lockutils [req-408ac8c9-cc62-4db1-b13b-d88e24fabae4 req-4492672d-abf4-44b3-8575-ab41775b06d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "4cfb409d-50a4-4256-83f3-6aef192ab489-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:47 np0005539552 nova_compute[233724]: 2025-11-29 08:24:47.446 233728 DEBUG nova.compute.manager [req-408ac8c9-cc62-4db1-b13b-d88e24fabae4 req-4492672d-abf4-44b3-8575-ab41775b06d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] No waiting events found dispatching network-vif-plugged-d090c296-9e92-42d2-9d11-5d9eb058d748 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:24:47 np0005539552 nova_compute[233724]: 2025-11-29 08:24:47.446 233728 WARNING nova.compute.manager [req-408ac8c9-cc62-4db1-b13b-d88e24fabae4 req-4492672d-abf4-44b3-8575-ab41775b06d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Received unexpected event network-vif-plugged-d090c296-9e92-42d2-9d11-5d9eb058d748 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:24:47 np0005539552 nova_compute[233724]: 2025-11-29 08:24:47.447 233728 DEBUG oslo_concurrency.lockutils [None req-00ecf5d2-8b23-4642-b169-874e3b4d1a64 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:47 np0005539552 nova_compute[233724]: 2025-11-29 08:24:47.490 233728 INFO nova.scheduler.client.report [None req-00ecf5d2-8b23-4642-b169-874e3b4d1a64 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Deleted allocations for instance 4cfb409d-50a4-4256-83f3-6aef192ab489#033[00m
Nov 29 03:24:47 np0005539552 nova_compute[233724]: 2025-11-29 08:24:47.618 233728 DEBUG oslo_concurrency.lockutils [None req-00ecf5d2-8b23-4642-b169-874e3b4d1a64 d49b9d546af249fc80268c1bdd5884f6 252fdae97c7e4dec97ac9a23d72747cc - - default default] Lock "4cfb409d-50a4-4256-83f3-6aef192ab489" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:47 np0005539552 nova_compute[233724]: 2025-11-29 08:24:47.808 233728 DEBUG nova.compute.manager [req-0908d56e-8428-4596-993f-c15e54eb6f23 req-b41b0332-754f-4317-b785-0806544d9ee3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Received event network-vif-plugged-270b7a06-5cdd-4855-a693-0b30baf78df7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:24:47 np0005539552 nova_compute[233724]: 2025-11-29 08:24:47.809 233728 DEBUG oslo_concurrency.lockutils [req-0908d56e-8428-4596-993f-c15e54eb6f23 req-b41b0332-754f-4317-b785-0806544d9ee3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "2e7e8742-c504-412d-82cf-4087bd745c3e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:47 np0005539552 nova_compute[233724]: 2025-11-29 08:24:47.810 233728 DEBUG oslo_concurrency.lockutils [req-0908d56e-8428-4596-993f-c15e54eb6f23 req-b41b0332-754f-4317-b785-0806544d9ee3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2e7e8742-c504-412d-82cf-4087bd745c3e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:47 np0005539552 nova_compute[233724]: 2025-11-29 08:24:47.810 233728 DEBUG oslo_concurrency.lockutils [req-0908d56e-8428-4596-993f-c15e54eb6f23 req-b41b0332-754f-4317-b785-0806544d9ee3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2e7e8742-c504-412d-82cf-4087bd745c3e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:47 np0005539552 nova_compute[233724]: 2025-11-29 08:24:47.810 233728 DEBUG nova.compute.manager [req-0908d56e-8428-4596-993f-c15e54eb6f23 req-b41b0332-754f-4317-b785-0806544d9ee3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] No waiting events found dispatching network-vif-plugged-270b7a06-5cdd-4855-a693-0b30baf78df7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:24:47 np0005539552 nova_compute[233724]: 2025-11-29 08:24:47.811 233728 WARNING nova.compute.manager [req-0908d56e-8428-4596-993f-c15e54eb6f23 req-b41b0332-754f-4317-b785-0806544d9ee3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Received unexpected event network-vif-plugged-270b7a06-5cdd-4855-a693-0b30baf78df7 for instance with vm_state resized and task_state None.#033[00m
Nov 29 03:24:47 np0005539552 nova_compute[233724]: 2025-11-29 08:24:47.811 233728 DEBUG nova.compute.manager [req-0908d56e-8428-4596-993f-c15e54eb6f23 req-b41b0332-754f-4317-b785-0806544d9ee3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Received event network-vif-deleted-d090c296-9e92-42d2-9d11-5d9eb058d748 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:24:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:47.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:48.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:49 np0005539552 nova_compute[233724]: 2025-11-29 08:24:49.533 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:24:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:49.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:50 np0005539552 nova_compute[233724]: 2025-11-29 08:24:50.183 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:50.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:51 np0005539552 nova_compute[233724]: 2025-11-29 08:24:51.504 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:51.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:52.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:53.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:53 np0005539552 ovn_controller[133798]: 2025-11-29T08:24:53Z|00580|binding|INFO|Releasing lport 79109459-2a40-4b69-936e-ac2a2aa77985 from this chassis (sb_readonly=0)
Nov 29 03:24:54 np0005539552 nova_compute[233724]: 2025-11-29 08:24:54.067 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:54 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #100. Immutable memtables: 0.
Nov 29 03:24:54 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:24:54.368373) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:24:54 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 100
Nov 29 03:24:54 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404694368458, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 871, "num_deletes": 252, "total_data_size": 1606033, "memory_usage": 1623440, "flush_reason": "Manual Compaction"}
Nov 29 03:24:54 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #101: started
Nov 29 03:24:54 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404694376888, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 101, "file_size": 1058759, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 51055, "largest_seqno": 51920, "table_properties": {"data_size": 1054685, "index_size": 1790, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9687, "raw_average_key_size": 20, "raw_value_size": 1046270, "raw_average_value_size": 2170, "num_data_blocks": 78, "num_entries": 482, "num_filter_entries": 482, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404632, "oldest_key_time": 1764404632, "file_creation_time": 1764404694, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:24:54 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 8544 microseconds, and 4311 cpu microseconds.
Nov 29 03:24:54 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:24:54 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:24:54.376927) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #101: 1058759 bytes OK
Nov 29 03:24:54 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:24:54.376944) [db/memtable_list.cc:519] [default] Level-0 commit table #101 started
Nov 29 03:24:54 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:24:54.378543) [db/memtable_list.cc:722] [default] Level-0 commit table #101: memtable #1 done
Nov 29 03:24:54 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:24:54.378564) EVENT_LOG_v1 {"time_micros": 1764404694378558, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:24:54 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:24:54.378583) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:24:54 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 1601532, prev total WAL file size 1601532, number of live WAL files 2.
Nov 29 03:24:54 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000097.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:24:54 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:24:54.379246) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Nov 29 03:24:54 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:24:54 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [101(1033KB)], [99(11MB)]
Nov 29 03:24:54 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404694379275, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [101], "files_L6": [99], "score": -1, "input_data_size": 13275104, "oldest_snapshot_seqno": -1}
Nov 29 03:24:54 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #102: 8181 keys, 11322583 bytes, temperature: kUnknown
Nov 29 03:24:54 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404694458959, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 102, "file_size": 11322583, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11269209, "index_size": 31822, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20485, "raw_key_size": 213605, "raw_average_key_size": 26, "raw_value_size": 11124362, "raw_average_value_size": 1359, "num_data_blocks": 1239, "num_entries": 8181, "num_filter_entries": 8181, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764404694, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 102, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:24:54 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:24:54 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:24:54.459230) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 11322583 bytes
Nov 29 03:24:54 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:24:54.461089) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 166.4 rd, 141.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 11.7 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(23.2) write-amplify(10.7) OK, records in: 8701, records dropped: 520 output_compression: NoCompression
Nov 29 03:24:54 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:24:54.461109) EVENT_LOG_v1 {"time_micros": 1764404694461100, "job": 62, "event": "compaction_finished", "compaction_time_micros": 79774, "compaction_time_cpu_micros": 26855, "output_level": 6, "num_output_files": 1, "total_output_size": 11322583, "num_input_records": 8701, "num_output_records": 8181, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:24:54 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:24:54 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404694461412, "job": 62, "event": "table_file_deletion", "file_number": 101}
Nov 29 03:24:54 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000099.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:24:54 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404694463566, "job": 62, "event": "table_file_deletion", "file_number": 99}
Nov 29 03:24:54 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:24:54.379173) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:24:54 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:24:54.463664) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:24:54 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:24:54.463670) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:24:54 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:24:54.463671) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:24:54 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:24:54.463673) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:24:54 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:24:54.463674) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:24:54 np0005539552 nova_compute[233724]: 2025-11-29 08:24:54.804 233728 INFO nova.compute.manager [None req-168d2678-7c6d-4390-9c5b-2b528b372731 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Get console output#033[00m
Nov 29 03:24:54 np0005539552 nova_compute[233724]: 2025-11-29 08:24:54.810 279702 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:24:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:54.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:54 np0005539552 nova_compute[233724]: 2025-11-29 08:24:54.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:24:55 np0005539552 nova_compute[233724]: 2025-11-29 08:24:55.184 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:55.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:56 np0005539552 nova_compute[233724]: 2025-11-29 08:24:56.505 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:56.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:56 np0005539552 nova_compute[233724]: 2025-11-29 08:24:56.941 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:24:56 np0005539552 nova_compute[233724]: 2025-11-29 08:24:56.942 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 03:24:56 np0005539552 nova_compute[233724]: 2025-11-29 08:24:56.955 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 03:24:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:57.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:57 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:58 np0005539552 ovn_controller[133798]: 2025-11-29T08:24:58Z|00076|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:74:cb:18 10.100.0.14
Nov 29 03:24:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:58.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:24:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:59.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:00 np0005539552 nova_compute[233724]: 2025-11-29 08:25:00.143 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404685.1408153, 4cfb409d-50a4-4256-83f3-6aef192ab489 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:25:00 np0005539552 nova_compute[233724]: 2025-11-29 08:25:00.144 233728 INFO nova.compute.manager [-] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:25:00 np0005539552 nova_compute[233724]: 2025-11-29 08:25:00.161 233728 DEBUG nova.compute.manager [None req-02e7bb91-97ad-4517-b39f-48e4ea6033e6 - - - - - -] [instance: 4cfb409d-50a4-4256-83f3-6aef192ab489] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:25:00 np0005539552 nova_compute[233724]: 2025-11-29 08:25:00.186 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:00 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e326 e326: 3 total, 3 up, 3 in
Nov 29 03:25:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:00.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:01 np0005539552 nova_compute[233724]: 2025-11-29 08:25:01.508 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:01.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:02 np0005539552 nova_compute[233724]: 2025-11-29 08:25:02.790 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:02.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e326 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e327 e327: 3 total, 3 up, 3 in
Nov 29 03:25:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:03.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:04 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e328 e328: 3 total, 3 up, 3 in
Nov 29 03:25:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:04.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:05 np0005539552 nova_compute[233724]: 2025-11-29 08:25:05.189 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:05 np0005539552 nova_compute[233724]: 2025-11-29 08:25:05.419 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:05.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:06 np0005539552 nova_compute[233724]: 2025-11-29 08:25:06.509 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:06.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:07.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:07 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:08 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:25:08 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 9992 writes, 52K keys, 9992 commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.03 MB/s#012Cumulative WAL: 9992 writes, 9992 syncs, 1.00 writes per sync, written: 0.11 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1658 writes, 8253 keys, 1658 commit groups, 1.0 writes per commit group, ingest: 16.32 MB, 0.03 MB/s#012Interval WAL: 1658 writes, 1658 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     21.2      3.04              0.19        31    0.098       0      0       0.0       0.0#012  L6      1/0   10.80 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   4.9     88.5     75.6      4.14              0.81        30    0.138    198K    16K       0.0       0.0#012 Sum      1/0   10.80 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.9     51.1     52.6      7.18              1.00        61    0.118    198K    16K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.6     83.0     84.1      0.95              0.22        12    0.079     51K   3143       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0     88.5     75.6      4.14              0.81        30    0.138    198K    16K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     21.5      2.99              0.19        30    0.100       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.044       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 4200.1 total, 600.0 interval#012Flush(GB): cumulative 0.063, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.37 GB write, 0.09 MB/s write, 0.36 GB read, 0.09 MB/s read, 7.2 seconds#012Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.08 GB read, 0.13 MB/s read, 0.9 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560bf13bd1f0#2 capacity: 304.00 MB usage: 39.81 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000282 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(2244,38.32 MB,12.6049%) FilterBlock(61,557.30 KB,0.179025%) IndexBlock(61,965.31 KB,0.310095%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 03:25:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:08.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:09 np0005539552 nova_compute[233724]: 2025-11-29 08:25:09.866 233728 DEBUG oslo_concurrency.lockutils [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Acquiring lock "faaf0451-7134-4014-aa11-8019f10ffe8a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:09 np0005539552 nova_compute[233724]: 2025-11-29 08:25:09.867 233728 DEBUG oslo_concurrency.lockutils [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Lock "faaf0451-7134-4014-aa11-8019f10ffe8a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:09 np0005539552 nova_compute[233724]: 2025-11-29 08:25:09.886 233728 DEBUG nova.compute.manager [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:25:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:09.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:09 np0005539552 nova_compute[233724]: 2025-11-29 08:25:09.956 233728 DEBUG oslo_concurrency.lockutils [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:09 np0005539552 nova_compute[233724]: 2025-11-29 08:25:09.957 233728 DEBUG oslo_concurrency.lockutils [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:09 np0005539552 nova_compute[233724]: 2025-11-29 08:25:09.965 233728 DEBUG nova.virt.hardware [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:25:09 np0005539552 nova_compute[233724]: 2025-11-29 08:25:09.965 233728 INFO nova.compute.claims [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:25:10 np0005539552 nova_compute[233724]: 2025-11-29 08:25:10.093 233728 DEBUG oslo_concurrency.processutils [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:10 np0005539552 nova_compute[233724]: 2025-11-29 08:25:10.191 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:10 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:25:10 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3976358579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:25:10 np0005539552 nova_compute[233724]: 2025-11-29 08:25:10.528 233728 DEBUG oslo_concurrency.processutils [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:10 np0005539552 nova_compute[233724]: 2025-11-29 08:25:10.535 233728 DEBUG nova.compute.provider_tree [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:25:10 np0005539552 nova_compute[233724]: 2025-11-29 08:25:10.559 233728 DEBUG nova.scheduler.client.report [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:25:10 np0005539552 nova_compute[233724]: 2025-11-29 08:25:10.582 233728 DEBUG oslo_concurrency.lockutils [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:10 np0005539552 nova_compute[233724]: 2025-11-29 08:25:10.583 233728 DEBUG nova.compute.manager [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:25:10 np0005539552 nova_compute[233724]: 2025-11-29 08:25:10.637 233728 DEBUG nova.compute.manager [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:25:10 np0005539552 nova_compute[233724]: 2025-11-29 08:25:10.637 233728 DEBUG nova.network.neutron [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:25:10 np0005539552 nova_compute[233724]: 2025-11-29 08:25:10.672 233728 INFO nova.virt.libvirt.driver [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:25:10 np0005539552 nova_compute[233724]: 2025-11-29 08:25:10.691 233728 DEBUG nova.compute.manager [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:25:10 np0005539552 nova_compute[233724]: 2025-11-29 08:25:10.810 233728 DEBUG nova.compute.manager [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:25:10 np0005539552 nova_compute[233724]: 2025-11-29 08:25:10.811 233728 DEBUG nova.virt.libvirt.driver [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:25:10 np0005539552 nova_compute[233724]: 2025-11-29 08:25:10.812 233728 INFO nova.virt.libvirt.driver [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Creating image(s)#033[00m
Nov 29 03:25:10 np0005539552 nova_compute[233724]: 2025-11-29 08:25:10.841 233728 DEBUG nova.storage.rbd_utils [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] rbd image faaf0451-7134-4014-aa11-8019f10ffe8a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:25:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:10.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:10 np0005539552 nova_compute[233724]: 2025-11-29 08:25:10.870 233728 DEBUG nova.storage.rbd_utils [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] rbd image faaf0451-7134-4014-aa11-8019f10ffe8a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:25:10 np0005539552 nova_compute[233724]: 2025-11-29 08:25:10.896 233728 DEBUG nova.storage.rbd_utils [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] rbd image faaf0451-7134-4014-aa11-8019f10ffe8a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
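[Editor's note] The three "rbd image … does not exist" probes above are Nova's RBD image backend checking whether the instance's root disk already exists in the `vms` pool before creating it. The image name follows Nova's `<instance_uuid>_disk` convention, visible verbatim in the log. A minimal sketch of that naming convention, assuming the default disk label and no config override:

```python
# Sketch of Nova's RBD disk-naming convention (assumption: default "_disk"
# suffix as seen in the log lines above; nova also derives "<uuid>_disk.swap",
# "<uuid>_disk.config" etc. for other disks).
def instance_disk_name(instance_uuid, disk_label="disk"):
    return "%s_%s" % (instance_uuid, disk_label)
```

The name produced for the instance being built here matches the one Nova probes for: `faaf0451-7134-4014-aa11-8019f10ffe8a_disk`.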
Nov 29 03:25:10 np0005539552 nova_compute[233724]: 2025-11-29 08:25:10.900 233728 DEBUG oslo_concurrency.processutils [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:10 np0005539552 nova_compute[233724]: 2025-11-29 08:25:10.932 233728 DEBUG nova.policy [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e5346d862b0f4465aa9162f206696903', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dd5ce4a2eb794cdd850dd88487f89b9a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:25:10 np0005539552 nova_compute[233724]: 2025-11-29 08:25:10.977 233728 DEBUG oslo_concurrency.processutils [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
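[Editor's note] The `qemu-img info` call above is wrapped in `oslo_concurrency.prlimit`, which caps the child's address space (1 GiB) and CPU time (30 s) so that a malformed image cannot wedge nova-compute, and `--output=json` makes the result machine-parseable. A sketch that mirrors the logged argv (construction only, nothing is executed) plus a hypothetical, trimmed-down sample of the JSON payload — the real output carries more keys:

```python
import json

# Mirror of the resource-capped invocation logged above; the address-space and
# CPU limits default to the values seen in the log.
def capped_qemu_img_info_argv(path, address_space=1073741824, cpu_seconds=30):
    return [
        "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
        "--as=%d" % address_space, "--cpu=%d" % cpu_seconds,
        "--", "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info", path, "--force-share", "--output=json",
    ]

# Hypothetical sample of a `qemu-img info --output=json` result for the cached
# base image (illustrative values, not taken from this log).
SAMPLE_INFO = json.loads('{"virtual-size": 1073741824, "format": "raw"}')
```

`--force-share` lets the info call succeed even if another process holds the image open, which matters for a shared `_base` cache directory.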
Nov 29 03:25:10 np0005539552 nova_compute[233724]: 2025-11-29 08:25:10.978 233728 DEBUG oslo_concurrency.lockutils [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:10 np0005539552 nova_compute[233724]: 2025-11-29 08:25:10.979 233728 DEBUG oslo_concurrency.lockutils [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:10 np0005539552 nova_compute[233724]: 2025-11-29 08:25:10.979 233728 DEBUG oslo_concurrency.lockutils [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:11 np0005539552 nova_compute[233724]: 2025-11-29 08:25:11.008 233728 DEBUG nova.storage.rbd_utils [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] rbd image faaf0451-7134-4014-aa11-8019f10ffe8a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:25:11 np0005539552 nova_compute[233724]: 2025-11-29 08:25:11.012 233728 DEBUG oslo_concurrency.processutils [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 faaf0451-7134-4014-aa11-8019f10ffe8a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:11 np0005539552 nova_compute[233724]: 2025-11-29 08:25:11.291 233728 DEBUG oslo_concurrency.processutils [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 faaf0451-7134-4014-aa11-8019f10ffe8a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.279s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:11 np0005539552 nova_compute[233724]: 2025-11-29 08:25:11.368 233728 DEBUG nova.storage.rbd_utils [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] resizing rbd image faaf0451-7134-4014-aa11-8019f10ffe8a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
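[Editor's note] The two steps above create the root disk: the cached base image is imported into the `vms` pool as `<uuid>_disk`, then grown to the flavor's root size — `root_gb=1`, i.e. 1 GiB = 1073741824 bytes, exactly the resize target rbd_utils logs. A sketch that reproduces the logged `rbd import` argv and the size arithmetic (construction only, nothing is executed; no Ceph cluster is needed):

```python
GiB = 1024 ** 3  # rbd_utils resizes in bytes; 1 GiB == 1073741824

# Mirror of the logged "rbd import" command; defaults match the log
# (cephx user "openstack", conf /etc/ceph/ceph.conf, image format 2).
def rbd_import_argv(pool, src, image, user="openstack", conf="/etc/ceph/ceph.conf"):
    return ["rbd", "import", "--pool", pool, src, image,
            "--image-format=2", "--id", user, "--conf", conf]

root_bytes = 1 * GiB  # flavor root_gb=1 -> the 1073741824 resize target
```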
Nov 29 03:25:11 np0005539552 nova_compute[233724]: 2025-11-29 08:25:11.463 233728 DEBUG nova.objects.instance [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Lazy-loading 'migration_context' on Instance uuid faaf0451-7134-4014-aa11-8019f10ffe8a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:11 np0005539552 nova_compute[233724]: 2025-11-29 08:25:11.479 233728 DEBUG nova.virt.libvirt.driver [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:25:11 np0005539552 nova_compute[233724]: 2025-11-29 08:25:11.480 233728 DEBUG nova.virt.libvirt.driver [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Ensure instance console log exists: /var/lib/nova/instances/faaf0451-7134-4014-aa11-8019f10ffe8a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:25:11 np0005539552 nova_compute[233724]: 2025-11-29 08:25:11.480 233728 DEBUG oslo_concurrency.lockutils [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:11 np0005539552 nova_compute[233724]: 2025-11-29 08:25:11.480 233728 DEBUG oslo_concurrency.lockutils [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:11 np0005539552 nova_compute[233724]: 2025-11-29 08:25:11.481 233728 DEBUG oslo_concurrency.lockutils [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:11 np0005539552 nova_compute[233724]: 2025-11-29 08:25:11.513 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:11 np0005539552 nova_compute[233724]: 2025-11-29 08:25:11.620 233728 DEBUG oslo_concurrency.lockutils [None req-0627c272-9bd1-409f-8a5c-b5dd71ab0338 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "2e7e8742-c504-412d-82cf-4087bd745c3e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:11 np0005539552 nova_compute[233724]: 2025-11-29 08:25:11.620 233728 DEBUG oslo_concurrency.lockutils [None req-0627c272-9bd1-409f-8a5c-b5dd71ab0338 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "2e7e8742-c504-412d-82cf-4087bd745c3e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:11 np0005539552 nova_compute[233724]: 2025-11-29 08:25:11.621 233728 DEBUG oslo_concurrency.lockutils [None req-0627c272-9bd1-409f-8a5c-b5dd71ab0338 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "2e7e8742-c504-412d-82cf-4087bd745c3e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:11 np0005539552 nova_compute[233724]: 2025-11-29 08:25:11.621 233728 DEBUG oslo_concurrency.lockutils [None req-0627c272-9bd1-409f-8a5c-b5dd71ab0338 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "2e7e8742-c504-412d-82cf-4087bd745c3e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:11 np0005539552 nova_compute[233724]: 2025-11-29 08:25:11.622 233728 DEBUG oslo_concurrency.lockutils [None req-0627c272-9bd1-409f-8a5c-b5dd71ab0338 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "2e7e8742-c504-412d-82cf-4087bd745c3e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:11 np0005539552 nova_compute[233724]: 2025-11-29 08:25:11.623 233728 INFO nova.compute.manager [None req-0627c272-9bd1-409f-8a5c-b5dd71ab0338 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Terminating instance#033[00m
Nov 29 03:25:11 np0005539552 nova_compute[233724]: 2025-11-29 08:25:11.624 233728 DEBUG nova.compute.manager [None req-0627c272-9bd1-409f-8a5c-b5dd71ab0338 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:25:11 np0005539552 kernel: tap270b7a06-5c (unregistering): left promiscuous mode
Nov 29 03:25:11 np0005539552 NetworkManager[48926]: <info>  [1764404711.6776] device (tap270b7a06-5c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:25:11 np0005539552 nova_compute[233724]: 2025-11-29 08:25:11.691 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:11 np0005539552 ovn_controller[133798]: 2025-11-29T08:25:11Z|00581|binding|INFO|Releasing lport 270b7a06-5cdd-4855-a693-0b30baf78df7 from this chassis (sb_readonly=0)
Nov 29 03:25:11 np0005539552 ovn_controller[133798]: 2025-11-29T08:25:11Z|00582|binding|INFO|Setting lport 270b7a06-5cdd-4855-a693-0b30baf78df7 down in Southbound
Nov 29 03:25:11 np0005539552 ovn_controller[133798]: 2025-11-29T08:25:11Z|00583|binding|INFO|Removing iface tap270b7a06-5c ovn-installed in OVS
Nov 29 03:25:11 np0005539552 nova_compute[233724]: 2025-11-29 08:25:11.696 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:11.704 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:74:cb:18 10.100.0.14'], port_security=['fa:16:3e:74:cb:18 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '2e7e8742-c504-412d-82cf-4087bd745c3e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0bace34c102e4d56b089fd695d324f10', 'neutron:revision_number': '8', 'neutron:security_group_ids': '7465c0fc-60f6-4695-93cd-f6ab8b97c365', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.226', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a26ea06d-6837-4c64-a5e9-9d9016316b21, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=270b7a06-5cdd-4855-a693-0b30baf78df7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:25:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:11.707 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 270b7a06-5cdd-4855-a693-0b30baf78df7 in datapath 7fc1dfc3-8d7f-4854-980d-37a93f366035 unbound from our chassis#033[00m
Nov 29 03:25:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:11.711 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7fc1dfc3-8d7f-4854-980d-37a93f366035#033[00m
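[Editor's note] The Port_Binding update above flips `up=[True]` to `[False]` and clears `chassis`, which the metadata agent reports as "unbound from our chassis". A simplified model of that transition check, using plain dicts as hypothetical stand-ins for the real ovsdbapp `Row` objects (the actual logic lives in neutron's `PortBindingUpdatedEvent` handling and is considerably richer):

```python
# Decide how a Port_Binding row changed relative to this chassis.
# old_row/new_row are hypothetical dicts with a "chassis" list, standing in
# for ovsdbapp Row objects.
def binding_transition(old_row, new_row, our_chassis):
    was_ours = our_chassis in old_row.get("chassis", [])
    is_ours = our_chassis in new_row.get("chassis", [])
    if was_ours and not is_ours:
        return "unbound"   # -> agent tears down / reprovisions metadata
    if not was_ours and is_ours:
        return "bound"     # -> agent provisions metadata for the datapath
    return "unchanged"
```

In the log, the unbind is immediately followed by "Provisioning metadata for network …" because other ports on the same datapath are still bound here.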
Nov 29 03:25:11 np0005539552 nova_compute[233724]: 2025-11-29 08:25:11.712 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:11.731 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2fcebe5d-5d6c-47e8-bb50-f415c14b6c37]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:11 np0005539552 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000082.scope: Deactivated successfully.
Nov 29 03:25:11 np0005539552 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000082.scope: Consumed 14.267s CPU time.
Nov 29 03:25:11 np0005539552 systemd-machined[196379]: Machine qemu-57-instance-00000082 terminated.
Nov 29 03:25:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:11.765 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[18ff08fa-602e-4ac3-9be6-364ddd45b163]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:11.769 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[1843b996-1317-4a44-8a0c-4f849efdb411]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:11.801 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[b617e897-c484-43c2-bd5d-5b4b82b30d7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:11.819 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[34b2e68f-7588-4268-8ad9-567374c68a23]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7fc1dfc3-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:27:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 152], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 747481, 'reachable_time': 37336, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289209, 'error': None, 'target': 'ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:11.839 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[5ca739d2-d399-4314-a733-87424f221a69]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7fc1dfc3-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 747491, 'tstamp': 747491}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289210, 'error': None, 'target': 'ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7fc1dfc3-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 747494, 'tstamp': 747494}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289210, 'error': None, 'target': 'ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:11.841 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7fc1dfc3-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:11 np0005539552 nova_compute[233724]: 2025-11-29 08:25:11.843 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:11 np0005539552 nova_compute[233724]: 2025-11-29 08:25:11.854 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:11.855 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7fc1dfc3-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:11.855 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:25:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:11.856 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7fc1dfc3-80, col_values=(('external_ids', {'iface-id': '79109459-2a40-4b69-936e-ac2a2aa77985'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:11.856 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:25:11 np0005539552 nova_compute[233724]: 2025-11-29 08:25:11.865 233728 INFO nova.virt.libvirt.driver [-] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Instance destroyed successfully.#033[00m
Nov 29 03:25:11 np0005539552 nova_compute[233724]: 2025-11-29 08:25:11.866 233728 DEBUG nova.objects.instance [None req-0627c272-9bd1-409f-8a5c-b5dd71ab0338 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lazy-loading 'resources' on Instance uuid 2e7e8742-c504-412d-82cf-4087bd745c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:11 np0005539552 nova_compute[233724]: 2025-11-29 08:25:11.899 233728 DEBUG nova.virt.libvirt.vif [None req-0627c272-9bd1-409f-8a5c-b5dd71ab0338 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:23:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1414239367',display_name='tempest-ServerActionsTestOtherA-server-1414239367',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1414239367',id=130,image_ref='',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGZDzu/2PA5Jq1/mLvX2aaGG/WgUsRbb7Dsx3sFYSYL50dOuvFvn9ZiS3sRkHwVTZXl3/vg+NRcU0ds7Zzbdh2bvajGjb9Qxq1UtC5+8x+Wx/kUkrK3lVnVkeCLnrxzmbg==',key_name='tempest-keypair-186857524',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:24:44Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0bace34c102e4d56b089fd695d324f10',ramdisk_id='',reservation_id='r-sj7sy73r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-ServerActionsTestOtherA-1954650991',owner_user_name='tempest-ServerActionsTestOtherA-1954650991-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:24:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1552f15deb524705a9456cbe9b54c429',uuid=2e7e8742-c504-412d-82cf-4087bd745c3e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "270b7a06-5cdd-4855-a693-0b30baf78df7", "address": "fa:16:3e:74:cb:18", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", 
"version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap270b7a06-5c", "ovs_interfaceid": "270b7a06-5cdd-4855-a693-0b30baf78df7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:25:11 np0005539552 nova_compute[233724]: 2025-11-29 08:25:11.899 233728 DEBUG nova.network.os_vif_util [None req-0627c272-9bd1-409f-8a5c-b5dd71ab0338 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converting VIF {"id": "270b7a06-5cdd-4855-a693-0b30baf78df7", "address": "fa:16:3e:74:cb:18", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap270b7a06-5c", "ovs_interfaceid": "270b7a06-5cdd-4855-a693-0b30baf78df7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:25:11 np0005539552 nova_compute[233724]: 2025-11-29 08:25:11.900 233728 DEBUG nova.network.os_vif_util [None req-0627c272-9bd1-409f-8a5c-b5dd71ab0338 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:74:cb:18,bridge_name='br-int',has_traffic_filtering=True,id=270b7a06-5cdd-4855-a693-0b30baf78df7,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap270b7a06-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:25:11 np0005539552 nova_compute[233724]: 2025-11-29 08:25:11.901 233728 DEBUG os_vif [None req-0627c272-9bd1-409f-8a5c-b5dd71ab0338 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:74:cb:18,bridge_name='br-int',has_traffic_filtering=True,id=270b7a06-5cdd-4855-a693-0b30baf78df7,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap270b7a06-5c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:25:11 np0005539552 nova_compute[233724]: 2025-11-29 08:25:11.902 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:11 np0005539552 nova_compute[233724]: 2025-11-29 08:25:11.903 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap270b7a06-5c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:11 np0005539552 nova_compute[233724]: 2025-11-29 08:25:11.904 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:11 np0005539552 nova_compute[233724]: 2025-11-29 08:25:11.906 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:11.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:11 np0005539552 nova_compute[233724]: 2025-11-29 08:25:11.908 233728 INFO os_vif [None req-0627c272-9bd1-409f-8a5c-b5dd71ab0338 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:74:cb:18,bridge_name='br-int',has_traffic_filtering=True,id=270b7a06-5cdd-4855-a693-0b30baf78df7,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap270b7a06-5c')#033[00m
Nov 29 03:25:11 np0005539552 nova_compute[233724]: 2025-11-29 08:25:11.995 233728 DEBUG nova.compute.manager [req-918f379c-bbef-4899-ad19-fcbee6aa6227 req-f7a22ff8-9d46-4867-900e-9b9afa3c04f5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Received event network-vif-unplugged-270b7a06-5cdd-4855-a693-0b30baf78df7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:11 np0005539552 nova_compute[233724]: 2025-11-29 08:25:11.996 233728 DEBUG oslo_concurrency.lockutils [req-918f379c-bbef-4899-ad19-fcbee6aa6227 req-f7a22ff8-9d46-4867-900e-9b9afa3c04f5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "2e7e8742-c504-412d-82cf-4087bd745c3e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:11 np0005539552 nova_compute[233724]: 2025-11-29 08:25:11.996 233728 DEBUG oslo_concurrency.lockutils [req-918f379c-bbef-4899-ad19-fcbee6aa6227 req-f7a22ff8-9d46-4867-900e-9b9afa3c04f5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2e7e8742-c504-412d-82cf-4087bd745c3e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:11 np0005539552 nova_compute[233724]: 2025-11-29 08:25:11.996 233728 DEBUG oslo_concurrency.lockutils [req-918f379c-bbef-4899-ad19-fcbee6aa6227 req-f7a22ff8-9d46-4867-900e-9b9afa3c04f5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2e7e8742-c504-412d-82cf-4087bd745c3e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:11 np0005539552 nova_compute[233724]: 2025-11-29 08:25:11.997 233728 DEBUG nova.compute.manager [req-918f379c-bbef-4899-ad19-fcbee6aa6227 req-f7a22ff8-9d46-4867-900e-9b9afa3c04f5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] No waiting events found dispatching network-vif-unplugged-270b7a06-5cdd-4855-a693-0b30baf78df7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:25:11 np0005539552 nova_compute[233724]: 2025-11-29 08:25:11.997 233728 DEBUG nova.compute.manager [req-918f379c-bbef-4899-ad19-fcbee6aa6227 req-f7a22ff8-9d46-4867-900e-9b9afa3c04f5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Received event network-vif-unplugged-270b7a06-5cdd-4855-a693-0b30baf78df7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:25:12 np0005539552 nova_compute[233724]: 2025-11-29 08:25:12.135 233728 INFO nova.virt.libvirt.driver [None req-0627c272-9bd1-409f-8a5c-b5dd71ab0338 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Deleting instance files /var/lib/nova/instances/2e7e8742-c504-412d-82cf-4087bd745c3e_del#033[00m
Nov 29 03:25:12 np0005539552 nova_compute[233724]: 2025-11-29 08:25:12.136 233728 INFO nova.virt.libvirt.driver [None req-0627c272-9bd1-409f-8a5c-b5dd71ab0338 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Deletion of /var/lib/nova/instances/2e7e8742-c504-412d-82cf-4087bd745c3e_del complete#033[00m
Nov 29 03:25:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e329 e329: 3 total, 3 up, 3 in
Nov 29 03:25:12 np0005539552 nova_compute[233724]: 2025-11-29 08:25:12.195 233728 INFO nova.compute.manager [None req-0627c272-9bd1-409f-8a5c-b5dd71ab0338 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Took 0.57 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:25:12 np0005539552 nova_compute[233724]: 2025-11-29 08:25:12.196 233728 DEBUG oslo.service.loopingcall [None req-0627c272-9bd1-409f-8a5c-b5dd71ab0338 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:25:12 np0005539552 nova_compute[233724]: 2025-11-29 08:25:12.196 233728 DEBUG nova.compute.manager [-] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:25:12 np0005539552 nova_compute[233724]: 2025-11-29 08:25:12.196 233728 DEBUG nova.network.neutron [-] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:25:12 np0005539552 nova_compute[233724]: 2025-11-29 08:25:12.597 233728 DEBUG nova.network.neutron [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Successfully created port: f81081eb-fee3-4706-a6df-e1400380a3be _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:25:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:12.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:13 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:25:13 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:25:13 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:25:13 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:25:13 np0005539552 nova_compute[233724]: 2025-11-29 08:25:13.504 233728 DEBUG nova.network.neutron [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Successfully updated port: f81081eb-fee3-4706-a6df-e1400380a3be _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:25:13 np0005539552 nova_compute[233724]: 2025-11-29 08:25:13.529 233728 DEBUG oslo_concurrency.lockutils [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Acquiring lock "refresh_cache-faaf0451-7134-4014-aa11-8019f10ffe8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:25:13 np0005539552 nova_compute[233724]: 2025-11-29 08:25:13.529 233728 DEBUG oslo_concurrency.lockutils [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Acquired lock "refresh_cache-faaf0451-7134-4014-aa11-8019f10ffe8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:25:13 np0005539552 nova_compute[233724]: 2025-11-29 08:25:13.530 233728 DEBUG nova.network.neutron [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:25:13 np0005539552 nova_compute[233724]: 2025-11-29 08:25:13.592 233728 DEBUG nova.compute.manager [req-255d8169-ea2c-4f61-a111-32cb59e761fd req-8ad0670f-a177-4928-ac2e-05549dabea61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Received event network-changed-f81081eb-fee3-4706-a6df-e1400380a3be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:13 np0005539552 nova_compute[233724]: 2025-11-29 08:25:13.592 233728 DEBUG nova.compute.manager [req-255d8169-ea2c-4f61-a111-32cb59e761fd req-8ad0670f-a177-4928-ac2e-05549dabea61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Refreshing instance network info cache due to event network-changed-f81081eb-fee3-4706-a6df-e1400380a3be. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:25:13 np0005539552 nova_compute[233724]: 2025-11-29 08:25:13.593 233728 DEBUG oslo_concurrency.lockutils [req-255d8169-ea2c-4f61-a111-32cb59e761fd req-8ad0670f-a177-4928-ac2e-05549dabea61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-faaf0451-7134-4014-aa11-8019f10ffe8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:25:13 np0005539552 nova_compute[233724]: 2025-11-29 08:25:13.692 233728 DEBUG nova.network.neutron [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:25:13 np0005539552 nova_compute[233724]: 2025-11-29 08:25:13.792 233728 DEBUG nova.network.neutron [-] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:25:13 np0005539552 nova_compute[233724]: 2025-11-29 08:25:13.812 233728 INFO nova.compute.manager [-] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Took 1.62 seconds to deallocate network for instance.#033[00m
Nov 29 03:25:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:13.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:13 np0005539552 podman[289243]: 2025-11-29 08:25:13.990260285 +0000 UTC m=+0.065750230 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:25:14 np0005539552 podman[289244]: 2025-11-29 08:25:14.000846039 +0000 UTC m=+0.071583037 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 03:25:14 np0005539552 podman[289245]: 2025-11-29 08:25:14.043470316 +0000 UTC m=+0.118711395 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:25:14 np0005539552 nova_compute[233724]: 2025-11-29 08:25:14.128 233728 DEBUG nova.compute.manager [req-5efa89bf-05e6-419e-814d-805daebf9145 req-109facd2-a7e0-4200-a5b6-a77172413aa4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Received event network-vif-plugged-270b7a06-5cdd-4855-a693-0b30baf78df7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:14 np0005539552 nova_compute[233724]: 2025-11-29 08:25:14.128 233728 DEBUG oslo_concurrency.lockutils [req-5efa89bf-05e6-419e-814d-805daebf9145 req-109facd2-a7e0-4200-a5b6-a77172413aa4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "2e7e8742-c504-412d-82cf-4087bd745c3e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:14 np0005539552 nova_compute[233724]: 2025-11-29 08:25:14.129 233728 DEBUG oslo_concurrency.lockutils [req-5efa89bf-05e6-419e-814d-805daebf9145 req-109facd2-a7e0-4200-a5b6-a77172413aa4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2e7e8742-c504-412d-82cf-4087bd745c3e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:14 np0005539552 nova_compute[233724]: 2025-11-29 08:25:14.129 233728 DEBUG oslo_concurrency.lockutils [req-5efa89bf-05e6-419e-814d-805daebf9145 req-109facd2-a7e0-4200-a5b6-a77172413aa4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2e7e8742-c504-412d-82cf-4087bd745c3e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:14 np0005539552 nova_compute[233724]: 2025-11-29 08:25:14.129 233728 DEBUG nova.compute.manager [req-5efa89bf-05e6-419e-814d-805daebf9145 req-109facd2-a7e0-4200-a5b6-a77172413aa4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] No waiting events found dispatching network-vif-plugged-270b7a06-5cdd-4855-a693-0b30baf78df7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:25:14 np0005539552 nova_compute[233724]: 2025-11-29 08:25:14.129 233728 WARNING nova.compute.manager [req-5efa89bf-05e6-419e-814d-805daebf9145 req-109facd2-a7e0-4200-a5b6-a77172413aa4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Received unexpected event network-vif-plugged-270b7a06-5cdd-4855-a693-0b30baf78df7 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:25:14 np0005539552 nova_compute[233724]: 2025-11-29 08:25:14.130 233728 DEBUG nova.compute.manager [req-5efa89bf-05e6-419e-814d-805daebf9145 req-109facd2-a7e0-4200-a5b6-a77172413aa4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Received event network-vif-deleted-270b7a06-5cdd-4855-a693-0b30baf78df7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:14 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:25:14 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:25:14 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:25:14 np0005539552 nova_compute[233724]: 2025-11-29 08:25:14.202 233728 INFO nova.compute.manager [None req-0627c272-9bd1-409f-8a5c-b5dd71ab0338 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Took 0.39 seconds to detach 1 volumes for instance.#033[00m
Nov 29 03:25:14 np0005539552 nova_compute[233724]: 2025-11-29 08:25:14.203 233728 DEBUG nova.compute.manager [None req-0627c272-9bd1-409f-8a5c-b5dd71ab0338 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Deleting volume: d807030f-7b93-4396-9211-17a740c6b338 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Nov 29 03:25:14 np0005539552 nova_compute[233724]: 2025-11-29 08:25:14.449 233728 DEBUG oslo_concurrency.lockutils [None req-0627c272-9bd1-409f-8a5c-b5dd71ab0338 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:14 np0005539552 nova_compute[233724]: 2025-11-29 08:25:14.450 233728 DEBUG oslo_concurrency.lockutils [None req-0627c272-9bd1-409f-8a5c-b5dd71ab0338 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:14 np0005539552 nova_compute[233724]: 2025-11-29 08:25:14.455 233728 DEBUG oslo_concurrency.lockutils [None req-0627c272-9bd1-409f-8a5c-b5dd71ab0338 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:14 np0005539552 nova_compute[233724]: 2025-11-29 08:25:14.481 233728 INFO nova.scheduler.client.report [None req-0627c272-9bd1-409f-8a5c-b5dd71ab0338 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Deleted allocations for instance 2e7e8742-c504-412d-82cf-4087bd745c3e#033[00m
Nov 29 03:25:14 np0005539552 nova_compute[233724]: 2025-11-29 08:25:14.558 233728 DEBUG oslo_concurrency.lockutils [None req-0627c272-9bd1-409f-8a5c-b5dd71ab0338 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "2e7e8742-c504-412d-82cf-4087bd745c3e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.938s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:14.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:14 np0005539552 nova_compute[233724]: 2025-11-29 08:25:14.874 233728 DEBUG nova.network.neutron [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Updating instance_info_cache with network_info: [{"id": "f81081eb-fee3-4706-a6df-e1400380a3be", "address": "fa:16:3e:5e:c5:a7", "network": {"id": "f3bd4137-d392-4145-85ca-267270babe0f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-257813476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "dd5ce4a2eb794cdd850dd88487f89b9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf81081eb-fe", "ovs_interfaceid": "f81081eb-fee3-4706-a6df-e1400380a3be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:25:14 np0005539552 nova_compute[233724]: 2025-11-29 08:25:14.894 233728 DEBUG oslo_concurrency.lockutils [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Releasing lock "refresh_cache-faaf0451-7134-4014-aa11-8019f10ffe8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:25:14 np0005539552 nova_compute[233724]: 2025-11-29 08:25:14.894 233728 DEBUG nova.compute.manager [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Instance network_info: |[{"id": "f81081eb-fee3-4706-a6df-e1400380a3be", "address": "fa:16:3e:5e:c5:a7", "network": {"id": "f3bd4137-d392-4145-85ca-267270babe0f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-257813476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "dd5ce4a2eb794cdd850dd88487f89b9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf81081eb-fe", "ovs_interfaceid": "f81081eb-fee3-4706-a6df-e1400380a3be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:25:14 np0005539552 nova_compute[233724]: 2025-11-29 08:25:14.894 233728 DEBUG oslo_concurrency.lockutils [req-255d8169-ea2c-4f61-a111-32cb59e761fd req-8ad0670f-a177-4928-ac2e-05549dabea61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-faaf0451-7134-4014-aa11-8019f10ffe8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:25:14 np0005539552 nova_compute[233724]: 2025-11-29 08:25:14.895 233728 DEBUG nova.network.neutron [req-255d8169-ea2c-4f61-a111-32cb59e761fd req-8ad0670f-a177-4928-ac2e-05549dabea61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Refreshing network info cache for port f81081eb-fee3-4706-a6df-e1400380a3be _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:25:14 np0005539552 nova_compute[233724]: 2025-11-29 08:25:14.897 233728 DEBUG nova.virt.libvirt.driver [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Start _get_guest_xml network_info=[{"id": "f81081eb-fee3-4706-a6df-e1400380a3be", "address": "fa:16:3e:5e:c5:a7", "network": {"id": "f3bd4137-d392-4145-85ca-267270babe0f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-257813476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "dd5ce4a2eb794cdd850dd88487f89b9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf81081eb-fe", "ovs_interfaceid": "f81081eb-fee3-4706-a6df-e1400380a3be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:25:14 np0005539552 nova_compute[233724]: 2025-11-29 08:25:14.904 233728 WARNING nova.virt.libvirt.driver [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:25:14 np0005539552 nova_compute[233724]: 2025-11-29 08:25:14.907 233728 DEBUG nova.virt.libvirt.host [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:25:14 np0005539552 nova_compute[233724]: 2025-11-29 08:25:14.908 233728 DEBUG nova.virt.libvirt.host [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:25:14 np0005539552 nova_compute[233724]: 2025-11-29 08:25:14.912 233728 DEBUG nova.virt.libvirt.host [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:25:14 np0005539552 nova_compute[233724]: 2025-11-29 08:25:14.913 233728 DEBUG nova.virt.libvirt.host [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:25:14 np0005539552 nova_compute[233724]: 2025-11-29 08:25:14.914 233728 DEBUG nova.virt.libvirt.driver [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:25:14 np0005539552 nova_compute[233724]: 2025-11-29 08:25:14.914 233728 DEBUG nova.virt.hardware [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:25:14 np0005539552 nova_compute[233724]: 2025-11-29 08:25:14.914 233728 DEBUG nova.virt.hardware [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:25:14 np0005539552 nova_compute[233724]: 2025-11-29 08:25:14.914 233728 DEBUG nova.virt.hardware [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:25:14 np0005539552 nova_compute[233724]: 2025-11-29 08:25:14.914 233728 DEBUG nova.virt.hardware [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:25:14 np0005539552 nova_compute[233724]: 2025-11-29 08:25:14.915 233728 DEBUG nova.virt.hardware [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:25:14 np0005539552 nova_compute[233724]: 2025-11-29 08:25:14.915 233728 DEBUG nova.virt.hardware [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:25:14 np0005539552 nova_compute[233724]: 2025-11-29 08:25:14.915 233728 DEBUG nova.virt.hardware [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:25:14 np0005539552 nova_compute[233724]: 2025-11-29 08:25:14.915 233728 DEBUG nova.virt.hardware [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:25:14 np0005539552 nova_compute[233724]: 2025-11-29 08:25:14.916 233728 DEBUG nova.virt.hardware [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:25:14 np0005539552 nova_compute[233724]: 2025-11-29 08:25:14.916 233728 DEBUG nova.virt.hardware [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:25:14 np0005539552 nova_compute[233724]: 2025-11-29 08:25:14.916 233728 DEBUG nova.virt.hardware [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:25:14 np0005539552 nova_compute[233724]: 2025-11-29 08:25:14.918 233728 DEBUG oslo_concurrency.processutils [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:14 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:25:14 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2352090823' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:25:14 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:25:14 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2352090823' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:25:15 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:25:15 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/344247757' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:25:15 np0005539552 nova_compute[233724]: 2025-11-29 08:25:15.384 233728 DEBUG oslo_concurrency.processutils [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:15 np0005539552 nova_compute[233724]: 2025-11-29 08:25:15.415 233728 DEBUG nova.storage.rbd_utils [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] rbd image faaf0451-7134-4014-aa11-8019f10ffe8a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:25:15 np0005539552 nova_compute[233724]: 2025-11-29 08:25:15.419 233728 DEBUG oslo_concurrency.processutils [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:15 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:25:15 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2114982865' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:25:15 np0005539552 nova_compute[233724]: 2025-11-29 08:25:15.864 233728 DEBUG oslo_concurrency.processutils [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:15 np0005539552 nova_compute[233724]: 2025-11-29 08:25:15.868 233728 DEBUG nova.virt.libvirt.vif [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:25:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-157621080',display_name='tempest-ServerRescueTestJSONUnderV235-server-157621080',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-157621080',id=134,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dd5ce4a2eb794cdd850dd88487f89b9a',ramdisk_id='',reservation_id='r-b4txo28s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-1385777035',owner_user_nam
e='tempest-ServerRescueTestJSONUnderV235-1385777035-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:25:10Z,user_data=None,user_id='e5346d862b0f4465aa9162f206696903',uuid=faaf0451-7134-4014-aa11-8019f10ffe8a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f81081eb-fee3-4706-a6df-e1400380a3be", "address": "fa:16:3e:5e:c5:a7", "network": {"id": "f3bd4137-d392-4145-85ca-267270babe0f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-257813476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "dd5ce4a2eb794cdd850dd88487f89b9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf81081eb-fe", "ovs_interfaceid": "f81081eb-fee3-4706-a6df-e1400380a3be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:25:15 np0005539552 nova_compute[233724]: 2025-11-29 08:25:15.869 233728 DEBUG nova.network.os_vif_util [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Converting VIF {"id": "f81081eb-fee3-4706-a6df-e1400380a3be", "address": "fa:16:3e:5e:c5:a7", "network": {"id": "f3bd4137-d392-4145-85ca-267270babe0f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-257813476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "dd5ce4a2eb794cdd850dd88487f89b9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf81081eb-fe", "ovs_interfaceid": "f81081eb-fee3-4706-a6df-e1400380a3be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:25:15 np0005539552 nova_compute[233724]: 2025-11-29 08:25:15.871 233728 DEBUG nova.network.os_vif_util [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:c5:a7,bridge_name='br-int',has_traffic_filtering=True,id=f81081eb-fee3-4706-a6df-e1400380a3be,network=Network(f3bd4137-d392-4145-85ca-267270babe0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf81081eb-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:25:15 np0005539552 nova_compute[233724]: 2025-11-29 08:25:15.873 233728 DEBUG nova.objects.instance [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Lazy-loading 'pci_devices' on Instance uuid faaf0451-7134-4014-aa11-8019f10ffe8a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:15 np0005539552 nova_compute[233724]: 2025-11-29 08:25:15.894 233728 DEBUG nova.virt.libvirt.driver [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:25:15 np0005539552 nova_compute[233724]:  <uuid>faaf0451-7134-4014-aa11-8019f10ffe8a</uuid>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:  <name>instance-00000086</name>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:25:15 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:      <nova:name>tempest-ServerRescueTestJSONUnderV235-server-157621080</nova:name>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:25:14</nova:creationTime>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:25:15 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:        <nova:user uuid="e5346d862b0f4465aa9162f206696903">tempest-ServerRescueTestJSONUnderV235-1385777035-project-member</nova:user>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:        <nova:project uuid="dd5ce4a2eb794cdd850dd88487f89b9a">tempest-ServerRescueTestJSONUnderV235-1385777035</nova:project>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:        <nova:port uuid="f81081eb-fee3-4706-a6df-e1400380a3be">
Nov 29 03:25:15 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:      <entry name="serial">faaf0451-7134-4014-aa11-8019f10ffe8a</entry>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:      <entry name="uuid">faaf0451-7134-4014-aa11-8019f10ffe8a</entry>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:25:15 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/faaf0451-7134-4014-aa11-8019f10ffe8a_disk">
Nov 29 03:25:15 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:25:15 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:25:15 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/faaf0451-7134-4014-aa11-8019f10ffe8a_disk.config">
Nov 29 03:25:15 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:25:15 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:25:15 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:5e:c5:a7"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:      <target dev="tapf81081eb-fe"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:25:15 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/faaf0451-7134-4014-aa11-8019f10ffe8a/console.log" append="off"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:25:15 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:25:15 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:25:15 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:25:15 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:25:15 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:25:15 np0005539552 nova_compute[233724]: 2025-11-29 08:25:15.896 233728 DEBUG nova.compute.manager [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Preparing to wait for external event network-vif-plugged-f81081eb-fee3-4706-a6df-e1400380a3be prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:25:15 np0005539552 nova_compute[233724]: 2025-11-29 08:25:15.896 233728 DEBUG oslo_concurrency.lockutils [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Acquiring lock "faaf0451-7134-4014-aa11-8019f10ffe8a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:15 np0005539552 nova_compute[233724]: 2025-11-29 08:25:15.897 233728 DEBUG oslo_concurrency.lockutils [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Lock "faaf0451-7134-4014-aa11-8019f10ffe8a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:15 np0005539552 nova_compute[233724]: 2025-11-29 08:25:15.897 233728 DEBUG oslo_concurrency.lockutils [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Lock "faaf0451-7134-4014-aa11-8019f10ffe8a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:15 np0005539552 nova_compute[233724]: 2025-11-29 08:25:15.898 233728 DEBUG nova.virt.libvirt.vif [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:25:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-157621080',display_name='tempest-ServerRescueTestJSONUnderV235-server-157621080',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-157621080',id=134,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dd5ce4a2eb794cdd850dd88487f89b9a',ramdisk_id='',reservation_id='r-b4txo28s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-1385777035',owne
r_user_name='tempest-ServerRescueTestJSONUnderV235-1385777035-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:25:10Z,user_data=None,user_id='e5346d862b0f4465aa9162f206696903',uuid=faaf0451-7134-4014-aa11-8019f10ffe8a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f81081eb-fee3-4706-a6df-e1400380a3be", "address": "fa:16:3e:5e:c5:a7", "network": {"id": "f3bd4137-d392-4145-85ca-267270babe0f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-257813476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "dd5ce4a2eb794cdd850dd88487f89b9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf81081eb-fe", "ovs_interfaceid": "f81081eb-fee3-4706-a6df-e1400380a3be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:25:15 np0005539552 nova_compute[233724]: 2025-11-29 08:25:15.898 233728 DEBUG nova.network.os_vif_util [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Converting VIF {"id": "f81081eb-fee3-4706-a6df-e1400380a3be", "address": "fa:16:3e:5e:c5:a7", "network": {"id": "f3bd4137-d392-4145-85ca-267270babe0f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-257813476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "dd5ce4a2eb794cdd850dd88487f89b9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf81081eb-fe", "ovs_interfaceid": "f81081eb-fee3-4706-a6df-e1400380a3be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:25:15 np0005539552 nova_compute[233724]: 2025-11-29 08:25:15.898 233728 DEBUG nova.network.os_vif_util [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:c5:a7,bridge_name='br-int',has_traffic_filtering=True,id=f81081eb-fee3-4706-a6df-e1400380a3be,network=Network(f3bd4137-d392-4145-85ca-267270babe0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf81081eb-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:25:15 np0005539552 nova_compute[233724]: 2025-11-29 08:25:15.899 233728 DEBUG os_vif [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:c5:a7,bridge_name='br-int',has_traffic_filtering=True,id=f81081eb-fee3-4706-a6df-e1400380a3be,network=Network(f3bd4137-d392-4145-85ca-267270babe0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf81081eb-fe') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:25:15 np0005539552 nova_compute[233724]: 2025-11-29 08:25:15.899 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:15 np0005539552 nova_compute[233724]: 2025-11-29 08:25:15.900 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:15 np0005539552 nova_compute[233724]: 2025-11-29 08:25:15.900 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:25:15 np0005539552 nova_compute[233724]: 2025-11-29 08:25:15.903 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:15 np0005539552 nova_compute[233724]: 2025-11-29 08:25:15.903 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf81081eb-fe, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:15 np0005539552 nova_compute[233724]: 2025-11-29 08:25:15.904 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf81081eb-fe, col_values=(('external_ids', {'iface-id': 'f81081eb-fee3-4706-a6df-e1400380a3be', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5e:c5:a7', 'vm-uuid': 'faaf0451-7134-4014-aa11-8019f10ffe8a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:15 np0005539552 nova_compute[233724]: 2025-11-29 08:25:15.907 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:15 np0005539552 NetworkManager[48926]: <info>  [1764404715.9081] manager: (tapf81081eb-fe): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/265)
Nov 29 03:25:15 np0005539552 nova_compute[233724]: 2025-11-29 08:25:15.911 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:25:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:15.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:15 np0005539552 nova_compute[233724]: 2025-11-29 08:25:15.912 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:15 np0005539552 nova_compute[233724]: 2025-11-29 08:25:15.913 233728 INFO os_vif [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:c5:a7,bridge_name='br-int',has_traffic_filtering=True,id=f81081eb-fee3-4706-a6df-e1400380a3be,network=Network(f3bd4137-d392-4145-85ca-267270babe0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf81081eb-fe')#033[00m
Nov 29 03:25:15 np0005539552 nova_compute[233724]: 2025-11-29 08:25:15.968 233728 DEBUG nova.virt.libvirt.driver [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:25:15 np0005539552 nova_compute[233724]: 2025-11-29 08:25:15.968 233728 DEBUG nova.virt.libvirt.driver [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:25:15 np0005539552 nova_compute[233724]: 2025-11-29 08:25:15.968 233728 DEBUG nova.virt.libvirt.driver [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] No VIF found with MAC fa:16:3e:5e:c5:a7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:25:15 np0005539552 nova_compute[233724]: 2025-11-29 08:25:15.969 233728 INFO nova.virt.libvirt.driver [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Using config drive#033[00m
Nov 29 03:25:15 np0005539552 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 03:25:15 np0005539552 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 03:25:15 np0005539552 nova_compute[233724]: 2025-11-29 08:25:15.996 233728 DEBUG nova.storage.rbd_utils [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] rbd image faaf0451-7134-4014-aa11-8019f10ffe8a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:25:16 np0005539552 nova_compute[233724]: 2025-11-29 08:25:16.425 233728 INFO nova.virt.libvirt.driver [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Creating config drive at /var/lib/nova/instances/faaf0451-7134-4014-aa11-8019f10ffe8a/disk.config#033[00m
Nov 29 03:25:16 np0005539552 nova_compute[233724]: 2025-11-29 08:25:16.435 233728 DEBUG oslo_concurrency.processutils [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/faaf0451-7134-4014-aa11-8019f10ffe8a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj3f_6q81 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:16 np0005539552 nova_compute[233724]: 2025-11-29 08:25:16.470 233728 DEBUG nova.network.neutron [req-255d8169-ea2c-4f61-a111-32cb59e761fd req-8ad0670f-a177-4928-ac2e-05549dabea61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Updated VIF entry in instance network info cache for port f81081eb-fee3-4706-a6df-e1400380a3be. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:25:16 np0005539552 nova_compute[233724]: 2025-11-29 08:25:16.471 233728 DEBUG nova.network.neutron [req-255d8169-ea2c-4f61-a111-32cb59e761fd req-8ad0670f-a177-4928-ac2e-05549dabea61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Updating instance_info_cache with network_info: [{"id": "f81081eb-fee3-4706-a6df-e1400380a3be", "address": "fa:16:3e:5e:c5:a7", "network": {"id": "f3bd4137-d392-4145-85ca-267270babe0f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-257813476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "dd5ce4a2eb794cdd850dd88487f89b9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf81081eb-fe", "ovs_interfaceid": "f81081eb-fee3-4706-a6df-e1400380a3be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:25:16 np0005539552 nova_compute[233724]: 2025-11-29 08:25:16.486 233728 DEBUG oslo_concurrency.lockutils [req-255d8169-ea2c-4f61-a111-32cb59e761fd req-8ad0670f-a177-4928-ac2e-05549dabea61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-faaf0451-7134-4014-aa11-8019f10ffe8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:25:16 np0005539552 nova_compute[233724]: 2025-11-29 08:25:16.514 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:16 np0005539552 nova_compute[233724]: 2025-11-29 08:25:16.577 233728 DEBUG oslo_concurrency.processutils [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/faaf0451-7134-4014-aa11-8019f10ffe8a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj3f_6q81" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:16 np0005539552 nova_compute[233724]: 2025-11-29 08:25:16.609 233728 DEBUG nova.storage.rbd_utils [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] rbd image faaf0451-7134-4014-aa11-8019f10ffe8a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:25:16 np0005539552 nova_compute[233724]: 2025-11-29 08:25:16.613 233728 DEBUG oslo_concurrency.processutils [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/faaf0451-7134-4014-aa11-8019f10ffe8a/disk.config faaf0451-7134-4014-aa11-8019f10ffe8a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:16 np0005539552 nova_compute[233724]: 2025-11-29 08:25:16.804 233728 DEBUG oslo_concurrency.processutils [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/faaf0451-7134-4014-aa11-8019f10ffe8a/disk.config faaf0451-7134-4014-aa11-8019f10ffe8a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.191s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:16 np0005539552 nova_compute[233724]: 2025-11-29 08:25:16.805 233728 INFO nova.virt.libvirt.driver [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Deleting local config drive /var/lib/nova/instances/faaf0451-7134-4014-aa11-8019f10ffe8a/disk.config because it was imported into RBD.#033[00m
Nov 29 03:25:16 np0005539552 kernel: tapf81081eb-fe: entered promiscuous mode
Nov 29 03:25:16 np0005539552 NetworkManager[48926]: <info>  [1764404716.8620] manager: (tapf81081eb-fe): new Tun device (/org/freedesktop/NetworkManager/Devices/266)
Nov 29 03:25:16 np0005539552 ovn_controller[133798]: 2025-11-29T08:25:16Z|00584|binding|INFO|Claiming lport f81081eb-fee3-4706-a6df-e1400380a3be for this chassis.
Nov 29 03:25:16 np0005539552 ovn_controller[133798]: 2025-11-29T08:25:16Z|00585|binding|INFO|f81081eb-fee3-4706-a6df-e1400380a3be: Claiming fa:16:3e:5e:c5:a7 10.100.0.8
Nov 29 03:25:16 np0005539552 nova_compute[233724]: 2025-11-29 08:25:16.864 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:16 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:16.869 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:c5:a7 10.100.0.8'], port_security=['fa:16:3e:5e:c5:a7 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'faaf0451-7134-4014-aa11-8019f10ffe8a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3bd4137-d392-4145-85ca-267270babe0f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dd5ce4a2eb794cdd850dd88487f89b9a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '89135a4d-577c-4413-84fa-0487b26f7a91', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d40bb599-5a01-464e-b600-44bfc4dd511c, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=f81081eb-fee3-4706-a6df-e1400380a3be) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:25:16 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:16.870 143400 INFO neutron.agent.ovn.metadata.agent [-] Port f81081eb-fee3-4706-a6df-e1400380a3be in datapath f3bd4137-d392-4145-85ca-267270babe0f bound to our chassis#033[00m
Nov 29 03:25:16 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:16.871 143400 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network f3bd4137-d392-4145-85ca-267270babe0f or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 29 03:25:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:25:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:16.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:25:16 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:16.872 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[5c8dfe49-d874-4a5a-9e65-4d1f7ef2cebf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:16 np0005539552 ovn_controller[133798]: 2025-11-29T08:25:16Z|00586|binding|INFO|Setting lport f81081eb-fee3-4706-a6df-e1400380a3be up in Southbound
Nov 29 03:25:16 np0005539552 ovn_controller[133798]: 2025-11-29T08:25:16Z|00587|binding|INFO|Setting lport f81081eb-fee3-4706-a6df-e1400380a3be ovn-installed in OVS
Nov 29 03:25:16 np0005539552 nova_compute[233724]: 2025-11-29 08:25:16.883 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:16 np0005539552 nova_compute[233724]: 2025-11-29 08:25:16.887 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:16 np0005539552 systemd-machined[196379]: New machine qemu-58-instance-00000086.
Nov 29 03:25:16 np0005539552 systemd-udevd[289448]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:25:16 np0005539552 NetworkManager[48926]: <info>  [1764404716.9143] device (tapf81081eb-fe): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:25:16 np0005539552 NetworkManager[48926]: <info>  [1764404716.9152] device (tapf81081eb-fe): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:25:16 np0005539552 systemd[1]: Started Virtual Machine qemu-58-instance-00000086.
Nov 29 03:25:17 np0005539552 nova_compute[233724]: 2025-11-29 08:25:17.498 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404717.498396, faaf0451-7134-4014-aa11-8019f10ffe8a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:25:17 np0005539552 nova_compute[233724]: 2025-11-29 08:25:17.499 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] VM Started (Lifecycle Event)#033[00m
Nov 29 03:25:17 np0005539552 nova_compute[233724]: 2025-11-29 08:25:17.523 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:25:17 np0005539552 nova_compute[233724]: 2025-11-29 08:25:17.528 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404717.4994662, faaf0451-7134-4014-aa11-8019f10ffe8a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:25:17 np0005539552 nova_compute[233724]: 2025-11-29 08:25:17.529 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:25:17 np0005539552 nova_compute[233724]: 2025-11-29 08:25:17.549 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:25:17 np0005539552 nova_compute[233724]: 2025-11-29 08:25:17.553 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:25:17 np0005539552 nova_compute[233724]: 2025-11-29 08:25:17.571 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:25:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:17.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:18 np0005539552 nova_compute[233724]: 2025-11-29 08:25:18.593 233728 DEBUG nova.compute.manager [req-e0159feb-2e24-4884-a160-97c28837849f req-0b963106-1503-47bd-a8d7-b6637a7ed0db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Received event network-vif-plugged-f81081eb-fee3-4706-a6df-e1400380a3be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:18 np0005539552 nova_compute[233724]: 2025-11-29 08:25:18.594 233728 DEBUG oslo_concurrency.lockutils [req-e0159feb-2e24-4884-a160-97c28837849f req-0b963106-1503-47bd-a8d7-b6637a7ed0db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "faaf0451-7134-4014-aa11-8019f10ffe8a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:18 np0005539552 nova_compute[233724]: 2025-11-29 08:25:18.594 233728 DEBUG oslo_concurrency.lockutils [req-e0159feb-2e24-4884-a160-97c28837849f req-0b963106-1503-47bd-a8d7-b6637a7ed0db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "faaf0451-7134-4014-aa11-8019f10ffe8a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:25:18 np0005539552 nova_compute[233724]: 2025-11-29 08:25:18.595 233728 DEBUG oslo_concurrency.lockutils [req-e0159feb-2e24-4884-a160-97c28837849f req-0b963106-1503-47bd-a8d7-b6637a7ed0db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "faaf0451-7134-4014-aa11-8019f10ffe8a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:25:18 np0005539552 nova_compute[233724]: 2025-11-29 08:25:18.596 233728 DEBUG nova.compute.manager [req-e0159feb-2e24-4884-a160-97c28837849f req-0b963106-1503-47bd-a8d7-b6637a7ed0db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Processing event network-vif-plugged-f81081eb-fee3-4706-a6df-e1400380a3be _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 03:25:18 np0005539552 nova_compute[233724]: 2025-11-29 08:25:18.596 233728 DEBUG nova.compute.manager [req-e0159feb-2e24-4884-a160-97c28837849f req-0b963106-1503-47bd-a8d7-b6637a7ed0db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Received event network-vif-plugged-f81081eb-fee3-4706-a6df-e1400380a3be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:25:18 np0005539552 nova_compute[233724]: 2025-11-29 08:25:18.597 233728 DEBUG oslo_concurrency.lockutils [req-e0159feb-2e24-4884-a160-97c28837849f req-0b963106-1503-47bd-a8d7-b6637a7ed0db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "faaf0451-7134-4014-aa11-8019f10ffe8a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:25:18 np0005539552 nova_compute[233724]: 2025-11-29 08:25:18.599 233728 DEBUG oslo_concurrency.lockutils [req-e0159feb-2e24-4884-a160-97c28837849f req-0b963106-1503-47bd-a8d7-b6637a7ed0db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "faaf0451-7134-4014-aa11-8019f10ffe8a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:25:18 np0005539552 nova_compute[233724]: 2025-11-29 08:25:18.610 233728 DEBUG oslo_concurrency.lockutils [req-e0159feb-2e24-4884-a160-97c28837849f req-0b963106-1503-47bd-a8d7-b6637a7ed0db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "faaf0451-7134-4014-aa11-8019f10ffe8a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.012s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:25:18 np0005539552 nova_compute[233724]: 2025-11-29 08:25:18.611 233728 DEBUG nova.compute.manager [req-e0159feb-2e24-4884-a160-97c28837849f req-0b963106-1503-47bd-a8d7-b6637a7ed0db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] No waiting events found dispatching network-vif-plugged-f81081eb-fee3-4706-a6df-e1400380a3be pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:25:18 np0005539552 nova_compute[233724]: 2025-11-29 08:25:18.611 233728 WARNING nova.compute.manager [req-e0159feb-2e24-4884-a160-97c28837849f req-0b963106-1503-47bd-a8d7-b6637a7ed0db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Received unexpected event network-vif-plugged-f81081eb-fee3-4706-a6df-e1400380a3be for instance with vm_state building and task_state spawning.
Nov 29 03:25:18 np0005539552 nova_compute[233724]: 2025-11-29 08:25:18.612 233728 DEBUG nova.compute.manager [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:25:18 np0005539552 nova_compute[233724]: 2025-11-29 08:25:18.616 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404718.6163056, faaf0451-7134-4014-aa11-8019f10ffe8a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:25:18 np0005539552 nova_compute[233724]: 2025-11-29 08:25:18.616 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] VM Resumed (Lifecycle Event)
Nov 29 03:25:18 np0005539552 nova_compute[233724]: 2025-11-29 08:25:18.618 233728 DEBUG nova.virt.libvirt.driver [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:25:18 np0005539552 nova_compute[233724]: 2025-11-29 08:25:18.621 233728 INFO nova.virt.libvirt.driver [-] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Instance spawned successfully.
Nov 29 03:25:18 np0005539552 nova_compute[233724]: 2025-11-29 08:25:18.621 233728 DEBUG nova.virt.libvirt.driver [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:25:18 np0005539552 nova_compute[233724]: 2025-11-29 08:25:18.640 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:25:18 np0005539552 nova_compute[233724]: 2025-11-29 08:25:18.646 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:25:18 np0005539552 nova_compute[233724]: 2025-11-29 08:25:18.649 233728 DEBUG nova.virt.libvirt.driver [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:25:18 np0005539552 nova_compute[233724]: 2025-11-29 08:25:18.649 233728 DEBUG nova.virt.libvirt.driver [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:25:18 np0005539552 nova_compute[233724]: 2025-11-29 08:25:18.650 233728 DEBUG nova.virt.libvirt.driver [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:25:18 np0005539552 nova_compute[233724]: 2025-11-29 08:25:18.650 233728 DEBUG nova.virt.libvirt.driver [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:25:18 np0005539552 nova_compute[233724]: 2025-11-29 08:25:18.651 233728 DEBUG nova.virt.libvirt.driver [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:25:18 np0005539552 nova_compute[233724]: 2025-11-29 08:25:18.651 233728 DEBUG nova.virt.libvirt.driver [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:25:18 np0005539552 nova_compute[233724]: 2025-11-29 08:25:18.677 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:25:18 np0005539552 nova_compute[233724]: 2025-11-29 08:25:18.743 233728 INFO nova.compute.manager [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Took 7.93 seconds to spawn the instance on the hypervisor.
Nov 29 03:25:18 np0005539552 nova_compute[233724]: 2025-11-29 08:25:18.743 233728 DEBUG nova.compute.manager [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:25:18 np0005539552 nova_compute[233724]: 2025-11-29 08:25:18.825 233728 INFO nova.compute.manager [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Took 8.90 seconds to build instance.
Nov 29 03:25:18 np0005539552 nova_compute[233724]: 2025-11-29 08:25:18.843 233728 DEBUG oslo_concurrency.lockutils [None req-5efa1dc2-c3bf-4fcd-b67e-fd041e7eb114 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Lock "faaf0451-7134-4014-aa11-8019f10ffe8a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.976s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:25:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:18.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:19 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:25:19 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3132615839' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:25:19 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:25:19 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3132615839' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:25:19 np0005539552 nova_compute[233724]: 2025-11-29 08:25:19.738 233728 INFO nova.compute.manager [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Rescuing
Nov 29 03:25:19 np0005539552 nova_compute[233724]: 2025-11-29 08:25:19.739 233728 DEBUG oslo_concurrency.lockutils [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Acquiring lock "refresh_cache-faaf0451-7134-4014-aa11-8019f10ffe8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:25:19 np0005539552 nova_compute[233724]: 2025-11-29 08:25:19.739 233728 DEBUG oslo_concurrency.lockutils [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Acquired lock "refresh_cache-faaf0451-7134-4014-aa11-8019f10ffe8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:25:19 np0005539552 nova_compute[233724]: 2025-11-29 08:25:19.739 233728 DEBUG nova.network.neutron [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:25:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:19.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:20 np0005539552 nova_compute[233724]: 2025-11-29 08:25:20.202 233728 DEBUG oslo_concurrency.lockutils [None req-b582e0cf-5da6-4d42-9797-a924044c5362 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "07f760bf-6984-45e9-8e85-3d297e812553" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:25:20 np0005539552 nova_compute[233724]: 2025-11-29 08:25:20.202 233728 DEBUG oslo_concurrency.lockutils [None req-b582e0cf-5da6-4d42-9797-a924044c5362 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "07f760bf-6984-45e9-8e85-3d297e812553" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:25:20 np0005539552 nova_compute[233724]: 2025-11-29 08:25:20.203 233728 DEBUG oslo_concurrency.lockutils [None req-b582e0cf-5da6-4d42-9797-a924044c5362 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "07f760bf-6984-45e9-8e85-3d297e812553-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:25:20 np0005539552 nova_compute[233724]: 2025-11-29 08:25:20.203 233728 DEBUG oslo_concurrency.lockutils [None req-b582e0cf-5da6-4d42-9797-a924044c5362 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "07f760bf-6984-45e9-8e85-3d297e812553-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:25:20 np0005539552 nova_compute[233724]: 2025-11-29 08:25:20.203 233728 DEBUG oslo_concurrency.lockutils [None req-b582e0cf-5da6-4d42-9797-a924044c5362 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "07f760bf-6984-45e9-8e85-3d297e812553-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:25:20 np0005539552 nova_compute[233724]: 2025-11-29 08:25:20.204 233728 INFO nova.compute.manager [None req-b582e0cf-5da6-4d42-9797-a924044c5362 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Terminating instance
Nov 29 03:25:20 np0005539552 nova_compute[233724]: 2025-11-29 08:25:20.205 233728 DEBUG nova.compute.manager [None req-b582e0cf-5da6-4d42-9797-a924044c5362 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 03:25:20 np0005539552 kernel: tap5511e511-23 (unregistering): left promiscuous mode
Nov 29 03:25:20 np0005539552 NetworkManager[48926]: <info>  [1764404720.2672] device (tap5511e511-23): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:25:20 np0005539552 nova_compute[233724]: 2025-11-29 08:25:20.276 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:20 np0005539552 ovn_controller[133798]: 2025-11-29T08:25:20Z|00588|binding|INFO|Releasing lport 5511e511-2310-4811-8313-3722fcf49758 from this chassis (sb_readonly=0)
Nov 29 03:25:20 np0005539552 ovn_controller[133798]: 2025-11-29T08:25:20Z|00589|binding|INFO|Setting lport 5511e511-2310-4811-8313-3722fcf49758 down in Southbound
Nov 29 03:25:20 np0005539552 ovn_controller[133798]: 2025-11-29T08:25:20Z|00590|binding|INFO|Removing iface tap5511e511-23 ovn-installed in OVS
Nov 29 03:25:20 np0005539552 nova_compute[233724]: 2025-11-29 08:25:20.279 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:20.282 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:b9:47 10.100.0.8'], port_security=['fa:16:3e:f3:b9:47 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '07f760bf-6984-45e9-8e85-3d297e812553', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0bace34c102e4d56b089fd695d324f10', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7465c0fc-60f6-4695-93cd-f6ab8b97c365', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a26ea06d-6837-4c64-a5e9-9d9016316b21, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=5511e511-2310-4811-8313-3722fcf49758) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:25:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:20.283 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 5511e511-2310-4811-8313-3722fcf49758 in datapath 7fc1dfc3-8d7f-4854-980d-37a93f366035 unbound from our chassis
Nov 29 03:25:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:20.286 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7fc1dfc3-8d7f-4854-980d-37a93f366035, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 03:25:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:20.288 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[83307a5d-791f-4fac-8d37-017793d45872]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:25:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:20.289 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035 namespace which is not needed anymore
Nov 29 03:25:20 np0005539552 nova_compute[233724]: 2025-11-29 08:25:20.305 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:20 np0005539552 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d00000075.scope: Deactivated successfully.
Nov 29 03:25:20 np0005539552 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d00000075.scope: Consumed 25.941s CPU time.
Nov 29 03:25:20 np0005539552 systemd-machined[196379]: Machine qemu-49-instance-00000075 terminated.
Nov 29 03:25:20 np0005539552 nova_compute[233724]: 2025-11-29 08:25:20.424 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:20 np0005539552 neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035[282873]: [NOTICE]   (282882) : haproxy version is 2.8.14-c23fe91
Nov 29 03:25:20 np0005539552 neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035[282873]: [NOTICE]   (282882) : path to executable is /usr/sbin/haproxy
Nov 29 03:25:20 np0005539552 neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035[282873]: [WARNING]  (282882) : Exiting Master process...
Nov 29 03:25:20 np0005539552 neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035[282873]: [WARNING]  (282882) : Exiting Master process...
Nov 29 03:25:20 np0005539552 neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035[282873]: [ALERT]    (282882) : Current worker (282884) exited with code 143 (Terminated)
Nov 29 03:25:20 np0005539552 neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035[282873]: [WARNING]  (282882) : All workers exited. Exiting... (0)
Nov 29 03:25:20 np0005539552 nova_compute[233724]: 2025-11-29 08:25:20.432 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:20 np0005539552 systemd[1]: libpod-d2ba70ea3c1739564b202d605bbb50491b8340d5fbacc6f7c9bf72c7b1a2dd9a.scope: Deactivated successfully.
Nov 29 03:25:20 np0005539552 podman[289620]: 2025-11-29 08:25:20.439937299 +0000 UTC m=+0.047467118 container died d2ba70ea3c1739564b202d605bbb50491b8340d5fbacc6f7c9bf72c7b1a2dd9a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:25:20 np0005539552 nova_compute[233724]: 2025-11-29 08:25:20.439 233728 INFO nova.virt.libvirt.driver [-] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Instance destroyed successfully.
Nov 29 03:25:20 np0005539552 nova_compute[233724]: 2025-11-29 08:25:20.440 233728 DEBUG nova.objects.instance [None req-b582e0cf-5da6-4d42-9797-a924044c5362 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lazy-loading 'resources' on Instance uuid 07f760bf-6984-45e9-8e85-3d297e812553 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:25:20 np0005539552 nova_compute[233724]: 2025-11-29 08:25:20.455 233728 DEBUG nova.virt.libvirt.vif [None req-b582e0cf-5da6-4d42-9797-a924044c5362 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:20:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1370062521',display_name='tempest-ServerActionsTestOtherA-server-1370062521',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1370062521',id=117,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGZDzu/2PA5Jq1/mLvX2aaGG/WgUsRbb7Dsx3sFYSYL50dOuvFvn9ZiS3sRkHwVTZXl3/vg+NRcU0ds7Zzbdh2bvajGjb9Qxq1UtC5+8x+Wx/kUkrK3lVnVkeCLnrxzmbg==',key_name='tempest-keypair-186857524',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:20:29Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0bace34c102e4d56b089fd695d324f10',ramdisk_id='',reservation_id='r-o3h6qlt0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1954650991',owner_user_name='tempest-ServerActionsTestOtherA-1954650991-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:20:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1552f15deb524705a9456cbe9b54c429',uuid=07f760bf-6984-45e9-8e85-3d297e812553,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5511e511-2310-4811-8313-3722fcf49758", "address": "fa:16:3e:f3:b9:47", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5511e511-23", "ovs_interfaceid": "5511e511-2310-4811-8313-3722fcf49758", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 03:25:20 np0005539552 nova_compute[233724]: 2025-11-29 08:25:20.456 233728 DEBUG nova.network.os_vif_util [None req-b582e0cf-5da6-4d42-9797-a924044c5362 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converting VIF {"id": "5511e511-2310-4811-8313-3722fcf49758", "address": "fa:16:3e:f3:b9:47", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5511e511-23", "ovs_interfaceid": "5511e511-2310-4811-8313-3722fcf49758", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:25:20 np0005539552 nova_compute[233724]: 2025-11-29 08:25:20.457 233728 DEBUG nova.network.os_vif_util [None req-b582e0cf-5da6-4d42-9797-a924044c5362 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f3:b9:47,bridge_name='br-int',has_traffic_filtering=True,id=5511e511-2310-4811-8313-3722fcf49758,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5511e511-23') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:25:20 np0005539552 nova_compute[233724]: 2025-11-29 08:25:20.458 233728 DEBUG os_vif [None req-b582e0cf-5da6-4d42-9797-a924044c5362 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f3:b9:47,bridge_name='br-int',has_traffic_filtering=True,id=5511e511-2310-4811-8313-3722fcf49758,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5511e511-23') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 03:25:20 np0005539552 nova_compute[233724]: 2025-11-29 08:25:20.462 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:20 np0005539552 nova_compute[233724]: 2025-11-29 08:25:20.462 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5511e511-23, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:25:20 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d2ba70ea3c1739564b202d605bbb50491b8340d5fbacc6f7c9bf72c7b1a2dd9a-userdata-shm.mount: Deactivated successfully.
Nov 29 03:25:20 np0005539552 nova_compute[233724]: 2025-11-29 08:25:20.466 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:20 np0005539552 nova_compute[233724]: 2025-11-29 08:25:20.469 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 03:25:20 np0005539552 systemd[1]: var-lib-containers-storage-overlay-e63430c57f5c5c618653a2abbfb7602362c632d5722556f56102ab91da5ff6a2-merged.mount: Deactivated successfully.
Nov 29 03:25:20 np0005539552 nova_compute[233724]: 2025-11-29 08:25:20.472 233728 INFO os_vif [None req-b582e0cf-5da6-4d42-9797-a924044c5362 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f3:b9:47,bridge_name='br-int',has_traffic_filtering=True,id=5511e511-2310-4811-8313-3722fcf49758,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5511e511-23')
Nov 29 03:25:20 np0005539552 podman[289620]: 2025-11-29 08:25:20.480579722 +0000 UTC m=+0.088109541 container cleanup d2ba70ea3c1739564b202d605bbb50491b8340d5fbacc6f7c9bf72c7b1a2dd9a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:25:20 np0005539552 systemd[1]: libpod-conmon-d2ba70ea3c1739564b202d605bbb50491b8340d5fbacc6f7c9bf72c7b1a2dd9a.scope: Deactivated successfully.
Nov 29 03:25:20 np0005539552 podman[289673]: 2025-11-29 08:25:20.540259138 +0000 UTC m=+0.035781554 container remove d2ba70ea3c1739564b202d605bbb50491b8340d5fbacc6f7c9bf72c7b1a2dd9a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:25:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:20.548 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f9646b83-ec92-4145-a5c6-cc1f931d9937]: (4, ('Sat Nov 29 08:25:20 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035 (d2ba70ea3c1739564b202d605bbb50491b8340d5fbacc6f7c9bf72c7b1a2dd9a)\nd2ba70ea3c1739564b202d605bbb50491b8340d5fbacc6f7c9bf72c7b1a2dd9a\nSat Nov 29 08:25:20 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035 (d2ba70ea3c1739564b202d605bbb50491b8340d5fbacc6f7c9bf72c7b1a2dd9a)\nd2ba70ea3c1739564b202d605bbb50491b8340d5fbacc6f7c9bf72c7b1a2dd9a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:20.550 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[cdcd5a17-5f47-4717-a7ac-2029e6d2251b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:20.551 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7fc1dfc3-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:20 np0005539552 kernel: tap7fc1dfc3-80: left promiscuous mode
Nov 29 03:25:20 np0005539552 nova_compute[233724]: 2025-11-29 08:25:20.553 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:20.569 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[7552ba21-9294-4992-a227-342e59304889]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:20.581 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[41684d71-cb03-448a-87f7-637ea1cee0ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:20.582 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e7b484b3-23c4-4edc-a161-c542dd388153]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:20.599 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4a4f71a6-e319-4e2a-aa5e-d614d633dfa1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 747474, 'reachable_time': 34912, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289689, 'error': None, 'target': 'ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:20.601 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:25:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:20.601 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[ca91c3f7-9272-4bee-bb4a-8d9b9e71155a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:20 np0005539552 systemd[1]: run-netns-ovnmeta\x2d7fc1dfc3\x2d8d7f\x2d4854\x2d980d\x2d37a93f366035.mount: Deactivated successfully.
Nov 29 03:25:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:20.632 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:20.632 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:20.632 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:20 np0005539552 nova_compute[233724]: 2025-11-29 08:25:20.724 233728 DEBUG nova.compute.manager [req-d601dfd1-7227-4567-826f-84d9afe144fe req-20d77ffc-16d6-4e14-a261-328d766f3aad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Received event network-vif-unplugged-5511e511-2310-4811-8313-3722fcf49758 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:20 np0005539552 nova_compute[233724]: 2025-11-29 08:25:20.725 233728 DEBUG oslo_concurrency.lockutils [req-d601dfd1-7227-4567-826f-84d9afe144fe req-20d77ffc-16d6-4e14-a261-328d766f3aad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "07f760bf-6984-45e9-8e85-3d297e812553-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:20 np0005539552 nova_compute[233724]: 2025-11-29 08:25:20.725 233728 DEBUG oslo_concurrency.lockutils [req-d601dfd1-7227-4567-826f-84d9afe144fe req-20d77ffc-16d6-4e14-a261-328d766f3aad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "07f760bf-6984-45e9-8e85-3d297e812553-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:20 np0005539552 nova_compute[233724]: 2025-11-29 08:25:20.726 233728 DEBUG oslo_concurrency.lockutils [req-d601dfd1-7227-4567-826f-84d9afe144fe req-20d77ffc-16d6-4e14-a261-328d766f3aad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "07f760bf-6984-45e9-8e85-3d297e812553-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:20 np0005539552 nova_compute[233724]: 2025-11-29 08:25:20.726 233728 DEBUG nova.compute.manager [req-d601dfd1-7227-4567-826f-84d9afe144fe req-20d77ffc-16d6-4e14-a261-328d766f3aad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] No waiting events found dispatching network-vif-unplugged-5511e511-2310-4811-8313-3722fcf49758 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:25:20 np0005539552 nova_compute[233724]: 2025-11-29 08:25:20.726 233728 DEBUG nova.compute.manager [req-d601dfd1-7227-4567-826f-84d9afe144fe req-20d77ffc-16d6-4e14-a261-328d766f3aad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Received event network-vif-unplugged-5511e511-2310-4811-8313-3722fcf49758 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:25:20 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:25:20 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:25:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:20.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:20 np0005539552 nova_compute[233724]: 2025-11-29 08:25:20.919 233728 INFO nova.virt.libvirt.driver [None req-b582e0cf-5da6-4d42-9797-a924044c5362 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Deleting instance files /var/lib/nova/instances/07f760bf-6984-45e9-8e85-3d297e812553_del#033[00m
Nov 29 03:25:20 np0005539552 nova_compute[233724]: 2025-11-29 08:25:20.922 233728 INFO nova.virt.libvirt.driver [None req-b582e0cf-5da6-4d42-9797-a924044c5362 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Deletion of /var/lib/nova/instances/07f760bf-6984-45e9-8e85-3d297e812553_del complete#033[00m
Nov 29 03:25:20 np0005539552 nova_compute[233724]: 2025-11-29 08:25:20.969 233728 INFO nova.compute.manager [None req-b582e0cf-5da6-4d42-9797-a924044c5362 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Took 0.76 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:25:20 np0005539552 nova_compute[233724]: 2025-11-29 08:25:20.970 233728 DEBUG oslo.service.loopingcall [None req-b582e0cf-5da6-4d42-9797-a924044c5362 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:25:20 np0005539552 nova_compute[233724]: 2025-11-29 08:25:20.971 233728 DEBUG nova.compute.manager [-] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:25:20 np0005539552 nova_compute[233724]: 2025-11-29 08:25:20.971 233728 DEBUG nova.network.neutron [-] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:25:21 np0005539552 nova_compute[233724]: 2025-11-29 08:25:21.301 233728 DEBUG nova.network.neutron [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Updating instance_info_cache with network_info: [{"id": "f81081eb-fee3-4706-a6df-e1400380a3be", "address": "fa:16:3e:5e:c5:a7", "network": {"id": "f3bd4137-d392-4145-85ca-267270babe0f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-257813476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "dd5ce4a2eb794cdd850dd88487f89b9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf81081eb-fe", "ovs_interfaceid": "f81081eb-fee3-4706-a6df-e1400380a3be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:25:21 np0005539552 nova_compute[233724]: 2025-11-29 08:25:21.328 233728 DEBUG oslo_concurrency.lockutils [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Releasing lock "refresh_cache-faaf0451-7134-4014-aa11-8019f10ffe8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:25:21 np0005539552 nova_compute[233724]: 2025-11-29 08:25:21.516 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:21 np0005539552 nova_compute[233724]: 2025-11-29 08:25:21.525 233728 DEBUG nova.network.neutron [-] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:25:21 np0005539552 nova_compute[233724]: 2025-11-29 08:25:21.546 233728 INFO nova.compute.manager [-] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Took 0.57 seconds to deallocate network for instance.#033[00m
Nov 29 03:25:21 np0005539552 nova_compute[233724]: 2025-11-29 08:25:21.600 233728 DEBUG oslo_concurrency.lockutils [None req-b582e0cf-5da6-4d42-9797-a924044c5362 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:21 np0005539552 nova_compute[233724]: 2025-11-29 08:25:21.601 233728 DEBUG oslo_concurrency.lockutils [None req-b582e0cf-5da6-4d42-9797-a924044c5362 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:21 np0005539552 nova_compute[233724]: 2025-11-29 08:25:21.618 233728 DEBUG nova.virt.libvirt.driver [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:25:21 np0005539552 nova_compute[233724]: 2025-11-29 08:25:21.662 233728 DEBUG oslo_concurrency.processutils [None req-b582e0cf-5da6-4d42-9797-a924044c5362 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:25:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:21.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:25:22 np0005539552 nova_compute[233724]: 2025-11-29 08:25:22.075 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:25:22 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2689413226' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:25:22 np0005539552 nova_compute[233724]: 2025-11-29 08:25:22.147 233728 DEBUG oslo_concurrency.processutils [None req-b582e0cf-5da6-4d42-9797-a924044c5362 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:22 np0005539552 nova_compute[233724]: 2025-11-29 08:25:22.153 233728 DEBUG nova.compute.provider_tree [None req-b582e0cf-5da6-4d42-9797-a924044c5362 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:25:22 np0005539552 nova_compute[233724]: 2025-11-29 08:25:22.170 233728 DEBUG nova.scheduler.client.report [None req-b582e0cf-5da6-4d42-9797-a924044c5362 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:25:22 np0005539552 nova_compute[233724]: 2025-11-29 08:25:22.198 233728 DEBUG oslo_concurrency.lockutils [None req-b582e0cf-5da6-4d42-9797-a924044c5362 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:22 np0005539552 nova_compute[233724]: 2025-11-29 08:25:22.225 233728 INFO nova.scheduler.client.report [None req-b582e0cf-5da6-4d42-9797-a924044c5362 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Deleted allocations for instance 07f760bf-6984-45e9-8e85-3d297e812553#033[00m
Nov 29 03:25:22 np0005539552 nova_compute[233724]: 2025-11-29 08:25:22.246 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:22 np0005539552 nova_compute[233724]: 2025-11-29 08:25:22.299 233728 DEBUG oslo_concurrency.lockutils [None req-b582e0cf-5da6-4d42-9797-a924044c5362 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "07f760bf-6984-45e9-8e85-3d297e812553" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.097s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:22 np0005539552 nova_compute[233724]: 2025-11-29 08:25:22.818 233728 DEBUG nova.compute.manager [req-54a48350-143b-4239-91c2-14a56cb31e9b req-f27e62a8-71a8-4fe1-b072-2814ffe3ba09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Received event network-vif-plugged-5511e511-2310-4811-8313-3722fcf49758 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:22 np0005539552 nova_compute[233724]: 2025-11-29 08:25:22.819 233728 DEBUG oslo_concurrency.lockutils [req-54a48350-143b-4239-91c2-14a56cb31e9b req-f27e62a8-71a8-4fe1-b072-2814ffe3ba09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "07f760bf-6984-45e9-8e85-3d297e812553-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:22 np0005539552 nova_compute[233724]: 2025-11-29 08:25:22.819 233728 DEBUG oslo_concurrency.lockutils [req-54a48350-143b-4239-91c2-14a56cb31e9b req-f27e62a8-71a8-4fe1-b072-2814ffe3ba09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "07f760bf-6984-45e9-8e85-3d297e812553-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:22 np0005539552 nova_compute[233724]: 2025-11-29 08:25:22.819 233728 DEBUG oslo_concurrency.lockutils [req-54a48350-143b-4239-91c2-14a56cb31e9b req-f27e62a8-71a8-4fe1-b072-2814ffe3ba09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "07f760bf-6984-45e9-8e85-3d297e812553-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:22 np0005539552 nova_compute[233724]: 2025-11-29 08:25:22.819 233728 DEBUG nova.compute.manager [req-54a48350-143b-4239-91c2-14a56cb31e9b req-f27e62a8-71a8-4fe1-b072-2814ffe3ba09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] No waiting events found dispatching network-vif-plugged-5511e511-2310-4811-8313-3722fcf49758 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:25:22 np0005539552 nova_compute[233724]: 2025-11-29 08:25:22.820 233728 WARNING nova.compute.manager [req-54a48350-143b-4239-91c2-14a56cb31e9b req-f27e62a8-71a8-4fe1-b072-2814ffe3ba09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Received unexpected event network-vif-plugged-5511e511-2310-4811-8313-3722fcf49758 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:25:22 np0005539552 nova_compute[233724]: 2025-11-29 08:25:22.820 233728 DEBUG nova.compute.manager [req-54a48350-143b-4239-91c2-14a56cb31e9b req-f27e62a8-71a8-4fe1-b072-2814ffe3ba09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Received event network-vif-deleted-5511e511-2310-4811-8313-3722fcf49758 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:22.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:23.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:24.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:25 np0005539552 nova_compute[233724]: 2025-11-29 08:25:25.465 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:25.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:26 np0005539552 nova_compute[233724]: 2025-11-29 08:25:26.517 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:26 np0005539552 nova_compute[233724]: 2025-11-29 08:25:26.864 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404711.8623214, 2e7e8742-c504-412d-82cf-4087bd745c3e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:25:26 np0005539552 nova_compute[233724]: 2025-11-29 08:25:26.865 233728 INFO nova.compute.manager [-] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:25:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:26.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:26 np0005539552 nova_compute[233724]: 2025-11-29 08:25:26.888 233728 DEBUG nova.compute.manager [None req-33ae0b37-3c08-43a4-9bba-b626ab74bab4 - - - - - -] [instance: 2e7e8742-c504-412d-82cf-4087bd745c3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:25:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:27.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:28.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:29.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:30 np0005539552 nova_compute[233724]: 2025-11-29 08:25:30.468 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:30.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:31 np0005539552 nova_compute[233724]: 2025-11-29 08:25:31.519 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:31 np0005539552 nova_compute[233724]: 2025-11-29 08:25:31.667 233728 DEBUG nova.virt.libvirt.driver [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 29 03:25:31 np0005539552 nova_compute[233724]: 2025-11-29 08:25:31.787 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:25:31 np0005539552 nova_compute[233724]: 2025-11-29 08:25:31.806 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Triggering sync for uuid faaf0451-7134-4014-aa11-8019f10ffe8a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 29 03:25:31 np0005539552 nova_compute[233724]: 2025-11-29 08:25:31.807 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "faaf0451-7134-4014-aa11-8019f10ffe8a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:31 np0005539552 nova_compute[233724]: 2025-11-29 08:25:31.807 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "faaf0451-7134-4014-aa11-8019f10ffe8a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:31 np0005539552 nova_compute[233724]: 2025-11-29 08:25:31.807 233728 INFO nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Nov 29 03:25:31 np0005539552 nova_compute[233724]: 2025-11-29 08:25:31.808 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "faaf0451-7134-4014-aa11-8019f10ffe8a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:31 np0005539552 nova_compute[233724]: 2025-11-29 08:25:31.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:25:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:31.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:31 np0005539552 nova_compute[233724]: 2025-11-29 08:25:31.945 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:31 np0005539552 nova_compute[233724]: 2025-11-29 08:25:31.946 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:31 np0005539552 nova_compute[233724]: 2025-11-29 08:25:31.946 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:31 np0005539552 nova_compute[233724]: 2025-11-29 08:25:31.946 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:25:31 np0005539552 nova_compute[233724]: 2025-11-29 08:25:31.946 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:32 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:25:32 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3615627929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:25:32 np0005539552 nova_compute[233724]: 2025-11-29 08:25:32.353 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:32 np0005539552 nova_compute[233724]: 2025-11-29 08:25:32.436 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000086 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:25:32 np0005539552 nova_compute[233724]: 2025-11-29 08:25:32.436 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000086 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:25:32 np0005539552 nova_compute[233724]: 2025-11-29 08:25:32.597 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:25:32 np0005539552 nova_compute[233724]: 2025-11-29 08:25:32.599 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4165MB free_disk=20.866790771484375GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:25:32 np0005539552 nova_compute[233724]: 2025-11-29 08:25:32.599 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:32 np0005539552 nova_compute[233724]: 2025-11-29 08:25:32.600 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:32 np0005539552 nova_compute[233724]: 2025-11-29 08:25:32.839 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance faaf0451-7134-4014-aa11-8019f10ffe8a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:25:32 np0005539552 nova_compute[233724]: 2025-11-29 08:25:32.840 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:25:32 np0005539552 nova_compute[233724]: 2025-11-29 08:25:32.840 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:25:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:32.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:32 np0005539552 nova_compute[233724]: 2025-11-29 08:25:32.925 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:32 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:25:33 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2105695446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:25:33 np0005539552 nova_compute[233724]: 2025-11-29 08:25:33.408 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:33 np0005539552 nova_compute[233724]: 2025-11-29 08:25:33.415 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:25:33 np0005539552 nova_compute[233724]: 2025-11-29 08:25:33.455 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:25:33 np0005539552 nova_compute[233724]: 2025-11-29 08:25:33.541 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:25:33 np0005539552 nova_compute[233724]: 2025-11-29 08:25:33.541 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.941s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:33.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:34 np0005539552 kernel: tapf81081eb-fe (unregistering): left promiscuous mode
Nov 29 03:25:34 np0005539552 NetworkManager[48926]: <info>  [1764404734.4625] device (tapf81081eb-fe): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:25:34 np0005539552 nova_compute[233724]: 2025-11-29 08:25:34.467 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:34 np0005539552 ovn_controller[133798]: 2025-11-29T08:25:34Z|00591|binding|INFO|Releasing lport f81081eb-fee3-4706-a6df-e1400380a3be from this chassis (sb_readonly=0)
Nov 29 03:25:34 np0005539552 ovn_controller[133798]: 2025-11-29T08:25:34Z|00592|binding|INFO|Setting lport f81081eb-fee3-4706-a6df-e1400380a3be down in Southbound
Nov 29 03:25:34 np0005539552 ovn_controller[133798]: 2025-11-29T08:25:34Z|00593|binding|INFO|Removing iface tapf81081eb-fe ovn-installed in OVS
Nov 29 03:25:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:34.475 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:c5:a7 10.100.0.8'], port_security=['fa:16:3e:5e:c5:a7 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'faaf0451-7134-4014-aa11-8019f10ffe8a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3bd4137-d392-4145-85ca-267270babe0f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dd5ce4a2eb794cdd850dd88487f89b9a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '89135a4d-577c-4413-84fa-0487b26f7a91', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d40bb599-5a01-464e-b600-44bfc4dd511c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=f81081eb-fee3-4706-a6df-e1400380a3be) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:25:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:34.476 143400 INFO neutron.agent.ovn.metadata.agent [-] Port f81081eb-fee3-4706-a6df-e1400380a3be in datapath f3bd4137-d392-4145-85ca-267270babe0f unbound from our chassis#033[00m
Nov 29 03:25:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:34.477 143400 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network f3bd4137-d392-4145-85ca-267270babe0f or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 29 03:25:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:34.477 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2420ef0d-1058-4f8e-bbc3-bb057141e171]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:34 np0005539552 nova_compute[233724]: 2025-11-29 08:25:34.483 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:34 np0005539552 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000086.scope: Deactivated successfully.
Nov 29 03:25:34 np0005539552 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000086.scope: Consumed 13.759s CPU time.
Nov 29 03:25:34 np0005539552 systemd-machined[196379]: Machine qemu-58-instance-00000086 terminated.
Nov 29 03:25:34 np0005539552 kernel: tapf81081eb-fe: entered promiscuous mode
Nov 29 03:25:34 np0005539552 kernel: tapf81081eb-fe (unregistering): left promiscuous mode
Nov 29 03:25:34 np0005539552 nova_compute[233724]: 2025-11-29 08:25:34.696 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:34 np0005539552 nova_compute[233724]: 2025-11-29 08:25:34.713 233728 INFO nova.virt.libvirt.driver [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Instance shutdown successfully after 13 seconds.#033[00m
Nov 29 03:25:34 np0005539552 nova_compute[233724]: 2025-11-29 08:25:34.719 233728 INFO nova.virt.libvirt.driver [-] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Instance destroyed successfully.#033[00m
Nov 29 03:25:34 np0005539552 nova_compute[233724]: 2025-11-29 08:25:34.720 233728 DEBUG nova.objects.instance [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Lazy-loading 'numa_topology' on Instance uuid faaf0451-7134-4014-aa11-8019f10ffe8a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:34 np0005539552 nova_compute[233724]: 2025-11-29 08:25:34.726 233728 DEBUG nova.compute.manager [req-69d9cb14-35f9-4093-97e3-09d3e4eab583 req-c983378d-13ff-48d6-988e-a14e5b2eb845 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Received event network-vif-unplugged-f81081eb-fee3-4706-a6df-e1400380a3be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:34 np0005539552 nova_compute[233724]: 2025-11-29 08:25:34.727 233728 DEBUG oslo_concurrency.lockutils [req-69d9cb14-35f9-4093-97e3-09d3e4eab583 req-c983378d-13ff-48d6-988e-a14e5b2eb845 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "faaf0451-7134-4014-aa11-8019f10ffe8a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:34 np0005539552 nova_compute[233724]: 2025-11-29 08:25:34.727 233728 DEBUG oslo_concurrency.lockutils [req-69d9cb14-35f9-4093-97e3-09d3e4eab583 req-c983378d-13ff-48d6-988e-a14e5b2eb845 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "faaf0451-7134-4014-aa11-8019f10ffe8a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:34 np0005539552 nova_compute[233724]: 2025-11-29 08:25:34.727 233728 DEBUG oslo_concurrency.lockutils [req-69d9cb14-35f9-4093-97e3-09d3e4eab583 req-c983378d-13ff-48d6-988e-a14e5b2eb845 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "faaf0451-7134-4014-aa11-8019f10ffe8a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:34 np0005539552 nova_compute[233724]: 2025-11-29 08:25:34.727 233728 DEBUG nova.compute.manager [req-69d9cb14-35f9-4093-97e3-09d3e4eab583 req-c983378d-13ff-48d6-988e-a14e5b2eb845 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] No waiting events found dispatching network-vif-unplugged-f81081eb-fee3-4706-a6df-e1400380a3be pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:25:34 np0005539552 nova_compute[233724]: 2025-11-29 08:25:34.727 233728 WARNING nova.compute.manager [req-69d9cb14-35f9-4093-97e3-09d3e4eab583 req-c983378d-13ff-48d6-988e-a14e5b2eb845 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Received unexpected event network-vif-unplugged-f81081eb-fee3-4706-a6df-e1400380a3be for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:25:34 np0005539552 nova_compute[233724]: 2025-11-29 08:25:34.742 233728 INFO nova.virt.libvirt.driver [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Attempting rescue#033[00m
Nov 29 03:25:34 np0005539552 nova_compute[233724]: 2025-11-29 08:25:34.742 233728 DEBUG nova.virt.libvirt.driver [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Nov 29 03:25:34 np0005539552 nova_compute[233724]: 2025-11-29 08:25:34.748 233728 DEBUG nova.virt.libvirt.driver [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 29 03:25:34 np0005539552 nova_compute[233724]: 2025-11-29 08:25:34.749 233728 INFO nova.virt.libvirt.driver [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Creating image(s)#033[00m
Nov 29 03:25:34 np0005539552 nova_compute[233724]: 2025-11-29 08:25:34.771 233728 DEBUG nova.storage.rbd_utils [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] rbd image faaf0451-7134-4014-aa11-8019f10ffe8a_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:25:34 np0005539552 nova_compute[233724]: 2025-11-29 08:25:34.775 233728 DEBUG nova.objects.instance [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Lazy-loading 'trusted_certs' on Instance uuid faaf0451-7134-4014-aa11-8019f10ffe8a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:34 np0005539552 nova_compute[233724]: 2025-11-29 08:25:34.821 233728 DEBUG nova.storage.rbd_utils [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] rbd image faaf0451-7134-4014-aa11-8019f10ffe8a_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:25:34 np0005539552 nova_compute[233724]: 2025-11-29 08:25:34.844 233728 DEBUG nova.storage.rbd_utils [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] rbd image faaf0451-7134-4014-aa11-8019f10ffe8a_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:25:34 np0005539552 nova_compute[233724]: 2025-11-29 08:25:34.848 233728 DEBUG oslo_concurrency.processutils [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:34.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:34 np0005539552 nova_compute[233724]: 2025-11-29 08:25:34.911 233728 DEBUG oslo_concurrency.processutils [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:34 np0005539552 nova_compute[233724]: 2025-11-29 08:25:34.912 233728 DEBUG oslo_concurrency.lockutils [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:34 np0005539552 nova_compute[233724]: 2025-11-29 08:25:34.912 233728 DEBUG oslo_concurrency.lockutils [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:34 np0005539552 nova_compute[233724]: 2025-11-29 08:25:34.913 233728 DEBUG oslo_concurrency.lockutils [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:34 np0005539552 nova_compute[233724]: 2025-11-29 08:25:34.933 233728 DEBUG nova.storage.rbd_utils [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] rbd image faaf0451-7134-4014-aa11-8019f10ffe8a_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:25:34 np0005539552 nova_compute[233724]: 2025-11-29 08:25:34.936 233728 DEBUG oslo_concurrency.processutils [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 faaf0451-7134-4014-aa11-8019f10ffe8a_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:35 np0005539552 nova_compute[233724]: 2025-11-29 08:25:35.217 233728 DEBUG oslo_concurrency.processutils [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 faaf0451-7134-4014-aa11-8019f10ffe8a_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.281s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:35 np0005539552 nova_compute[233724]: 2025-11-29 08:25:35.218 233728 DEBUG nova.objects.instance [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Lazy-loading 'migration_context' on Instance uuid faaf0451-7134-4014-aa11-8019f10ffe8a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:35 np0005539552 nova_compute[233724]: 2025-11-29 08:25:35.230 233728 DEBUG nova.virt.libvirt.driver [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:25:35 np0005539552 nova_compute[233724]: 2025-11-29 08:25:35.231 233728 DEBUG nova.virt.libvirt.driver [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Start _get_guest_xml network_info=[{"id": "f81081eb-fee3-4706-a6df-e1400380a3be", "address": "fa:16:3e:5e:c5:a7", "network": {"id": "f3bd4137-d392-4145-85ca-267270babe0f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-257813476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-257813476-network", "vif_mac": "fa:16:3e:5e:c5:a7"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "dd5ce4a2eb794cdd850dd88487f89b9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf81081eb-fe", "ovs_interfaceid": "f81081eb-fee3-4706-a6df-e1400380a3be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '4873db8c-b414-4e95-acd9-77caabebe722', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:25:35 np0005539552 nova_compute[233724]: 2025-11-29 08:25:35.231 233728 DEBUG nova.objects.instance [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Lazy-loading 'resources' on Instance uuid faaf0451-7134-4014-aa11-8019f10ffe8a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:35 np0005539552 nova_compute[233724]: 2025-11-29 08:25:35.250 233728 WARNING nova.virt.libvirt.driver [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:25:35 np0005539552 nova_compute[233724]: 2025-11-29 08:25:35.253 233728 DEBUG nova.virt.libvirt.host [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:25:35 np0005539552 nova_compute[233724]: 2025-11-29 08:25:35.254 233728 DEBUG nova.virt.libvirt.host [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:25:35 np0005539552 nova_compute[233724]: 2025-11-29 08:25:35.257 233728 DEBUG nova.virt.libvirt.host [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:25:35 np0005539552 nova_compute[233724]: 2025-11-29 08:25:35.258 233728 DEBUG nova.virt.libvirt.host [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:25:35 np0005539552 nova_compute[233724]: 2025-11-29 08:25:35.259 233728 DEBUG nova.virt.libvirt.driver [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:25:35 np0005539552 nova_compute[233724]: 2025-11-29 08:25:35.259 233728 DEBUG nova.virt.hardware [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:25:35 np0005539552 nova_compute[233724]: 2025-11-29 08:25:35.260 233728 DEBUG nova.virt.hardware [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:25:35 np0005539552 nova_compute[233724]: 2025-11-29 08:25:35.260 233728 DEBUG nova.virt.hardware [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:25:35 np0005539552 nova_compute[233724]: 2025-11-29 08:25:35.261 233728 DEBUG nova.virt.hardware [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:25:35 np0005539552 nova_compute[233724]: 2025-11-29 08:25:35.261 233728 DEBUG nova.virt.hardware [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:25:35 np0005539552 nova_compute[233724]: 2025-11-29 08:25:35.261 233728 DEBUG nova.virt.hardware [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:25:35 np0005539552 nova_compute[233724]: 2025-11-29 08:25:35.261 233728 DEBUG nova.virt.hardware [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:25:35 np0005539552 nova_compute[233724]: 2025-11-29 08:25:35.262 233728 DEBUG nova.virt.hardware [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:25:35 np0005539552 nova_compute[233724]: 2025-11-29 08:25:35.262 233728 DEBUG nova.virt.hardware [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:25:35 np0005539552 nova_compute[233724]: 2025-11-29 08:25:35.262 233728 DEBUG nova.virt.hardware [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:25:35 np0005539552 nova_compute[233724]: 2025-11-29 08:25:35.262 233728 DEBUG nova.virt.hardware [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:25:35 np0005539552 nova_compute[233724]: 2025-11-29 08:25:35.263 233728 DEBUG nova.objects.instance [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Lazy-loading 'vcpu_model' on Instance uuid faaf0451-7134-4014-aa11-8019f10ffe8a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:35 np0005539552 nova_compute[233724]: 2025-11-29 08:25:35.284 233728 DEBUG oslo_concurrency.processutils [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:35 np0005539552 nova_compute[233724]: 2025-11-29 08:25:35.436 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404720.435274, 07f760bf-6984-45e9-8e85-3d297e812553 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:25:35 np0005539552 nova_compute[233724]: 2025-11-29 08:25:35.437 233728 INFO nova.compute.manager [-] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:25:35 np0005539552 nova_compute[233724]: 2025-11-29 08:25:35.460 233728 DEBUG nova.compute.manager [None req-f6b21864-92ee-4e53-8dbc-5e6faab4fc27 - - - - - -] [instance: 07f760bf-6984-45e9-8e85-3d297e812553] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:25:35 np0005539552 nova_compute[233724]: 2025-11-29 08:25:35.471 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:35 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:25:35 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4011522939' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:25:35 np0005539552 nova_compute[233724]: 2025-11-29 08:25:35.744 233728 DEBUG oslo_concurrency.processutils [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:35 np0005539552 nova_compute[233724]: 2025-11-29 08:25:35.745 233728 DEBUG oslo_concurrency.processutils [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:35.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:36 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:25:36 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2991465619' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:25:36 np0005539552 nova_compute[233724]: 2025-11-29 08:25:36.164 233728 DEBUG oslo_concurrency.processutils [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:36 np0005539552 nova_compute[233724]: 2025-11-29 08:25:36.165 233728 DEBUG oslo_concurrency.processutils [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:36 np0005539552 nova_compute[233724]: 2025-11-29 08:25:36.524 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:36 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:25:36 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1372825920' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:25:36 np0005539552 nova_compute[233724]: 2025-11-29 08:25:36.594 233728 DEBUG oslo_concurrency.processutils [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:36 np0005539552 nova_compute[233724]: 2025-11-29 08:25:36.595 233728 DEBUG nova.virt.libvirt.vif [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:25:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-157621080',display_name='tempest-ServerRescueTestJSONUnderV235-server-157621080',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-157621080',id=134,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:25:18Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dd5ce4a2eb794cdd850dd88487f89b9a',ramdisk_id='',reservation_id='r-b4txo28s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vi
f_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-1385777035',owner_user_name='tempest-ServerRescueTestJSONUnderV235-1385777035-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:25:18Z,user_data=None,user_id='e5346d862b0f4465aa9162f206696903',uuid=faaf0451-7134-4014-aa11-8019f10ffe8a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f81081eb-fee3-4706-a6df-e1400380a3be", "address": "fa:16:3e:5e:c5:a7", "network": {"id": "f3bd4137-d392-4145-85ca-267270babe0f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-257813476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-257813476-network", "vif_mac": "fa:16:3e:5e:c5:a7"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "dd5ce4a2eb794cdd850dd88487f89b9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf81081eb-fe", "ovs_interfaceid": "f81081eb-fee3-4706-a6df-e1400380a3be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:25:36 np0005539552 nova_compute[233724]: 2025-11-29 08:25:36.595 233728 DEBUG nova.network.os_vif_util [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Converting VIF {"id": "f81081eb-fee3-4706-a6df-e1400380a3be", "address": "fa:16:3e:5e:c5:a7", "network": {"id": "f3bd4137-d392-4145-85ca-267270babe0f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-257813476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-257813476-network", "vif_mac": "fa:16:3e:5e:c5:a7"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "dd5ce4a2eb794cdd850dd88487f89b9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf81081eb-fe", "ovs_interfaceid": "f81081eb-fee3-4706-a6df-e1400380a3be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:25:36 np0005539552 nova_compute[233724]: 2025-11-29 08:25:36.596 233728 DEBUG nova.network.os_vif_util [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5e:c5:a7,bridge_name='br-int',has_traffic_filtering=True,id=f81081eb-fee3-4706-a6df-e1400380a3be,network=Network(f3bd4137-d392-4145-85ca-267270babe0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf81081eb-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:25:36 np0005539552 nova_compute[233724]: 2025-11-29 08:25:36.597 233728 DEBUG nova.objects.instance [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Lazy-loading 'pci_devices' on Instance uuid faaf0451-7134-4014-aa11-8019f10ffe8a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:36 np0005539552 nova_compute[233724]: 2025-11-29 08:25:36.642 233728 DEBUG nova.virt.libvirt.driver [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:25:36 np0005539552 nova_compute[233724]:  <uuid>faaf0451-7134-4014-aa11-8019f10ffe8a</uuid>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:  <name>instance-00000086</name>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:25:36 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:      <nova:name>tempest-ServerRescueTestJSONUnderV235-server-157621080</nova:name>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:25:35</nova:creationTime>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:25:36 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:        <nova:user uuid="e5346d862b0f4465aa9162f206696903">tempest-ServerRescueTestJSONUnderV235-1385777035-project-member</nova:user>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:        <nova:project uuid="dd5ce4a2eb794cdd850dd88487f89b9a">tempest-ServerRescueTestJSONUnderV235-1385777035</nova:project>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:        <nova:port uuid="f81081eb-fee3-4706-a6df-e1400380a3be">
Nov 29 03:25:36 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:      <entry name="serial">faaf0451-7134-4014-aa11-8019f10ffe8a</entry>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:      <entry name="uuid">faaf0451-7134-4014-aa11-8019f10ffe8a</entry>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:25:36 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/faaf0451-7134-4014-aa11-8019f10ffe8a_disk.rescue">
Nov 29 03:25:36 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:25:36 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:25:36 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/faaf0451-7134-4014-aa11-8019f10ffe8a_disk">
Nov 29 03:25:36 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:25:36 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:      <target dev="vdb" bus="virtio"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:25:36 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/faaf0451-7134-4014-aa11-8019f10ffe8a_disk.config.rescue">
Nov 29 03:25:36 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:25:36 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:25:36 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:5e:c5:a7"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:      <target dev="tapf81081eb-fe"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:25:36 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/faaf0451-7134-4014-aa11-8019f10ffe8a/console.log" append="off"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:25:36 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:25:36 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:25:36 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:25:36 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:25:36 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 03:25:36 np0005539552 nova_compute[233724]: 2025-11-29 08:25:36.649 233728 INFO nova.virt.libvirt.driver [-] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Instance destroyed successfully.
Nov 29 03:25:36 np0005539552 nova_compute[233724]: 2025-11-29 08:25:36.730 233728 DEBUG nova.virt.libvirt.driver [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:25:36 np0005539552 nova_compute[233724]: 2025-11-29 08:25:36.731 233728 DEBUG nova.virt.libvirt.driver [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:25:36 np0005539552 nova_compute[233724]: 2025-11-29 08:25:36.732 233728 DEBUG nova.virt.libvirt.driver [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:25:36 np0005539552 nova_compute[233724]: 2025-11-29 08:25:36.732 233728 DEBUG nova.virt.libvirt.driver [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] No VIF found with MAC fa:16:3e:5e:c5:a7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 03:25:36 np0005539552 nova_compute[233724]: 2025-11-29 08:25:36.732 233728 INFO nova.virt.libvirt.driver [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Using config drive
Nov 29 03:25:36 np0005539552 nova_compute[233724]: 2025-11-29 08:25:36.766 233728 DEBUG nova.storage.rbd_utils [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] rbd image faaf0451-7134-4014-aa11-8019f10ffe8a_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:25:36 np0005539552 nova_compute[233724]: 2025-11-29 08:25:36.810 233728 DEBUG nova.objects.instance [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Lazy-loading 'ec2_ids' on Instance uuid faaf0451-7134-4014-aa11-8019f10ffe8a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:25:36 np0005539552 nova_compute[233724]: 2025-11-29 08:25:36.881 233728 DEBUG nova.objects.instance [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Lazy-loading 'keypairs' on Instance uuid faaf0451-7134-4014-aa11-8019f10ffe8a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:25:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:36.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:36 np0005539552 nova_compute[233724]: 2025-11-29 08:25:36.967 233728 DEBUG nova.compute.manager [req-b7165df0-5af1-4229-a947-7a6089b3726f req-5d7c13df-d491-48bc-8363-408d34e20c78 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Received event network-vif-plugged-f81081eb-fee3-4706-a6df-e1400380a3be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:25:36 np0005539552 nova_compute[233724]: 2025-11-29 08:25:36.968 233728 DEBUG oslo_concurrency.lockutils [req-b7165df0-5af1-4229-a947-7a6089b3726f req-5d7c13df-d491-48bc-8363-408d34e20c78 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "faaf0451-7134-4014-aa11-8019f10ffe8a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:25:36 np0005539552 nova_compute[233724]: 2025-11-29 08:25:36.968 233728 DEBUG oslo_concurrency.lockutils [req-b7165df0-5af1-4229-a947-7a6089b3726f req-5d7c13df-d491-48bc-8363-408d34e20c78 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "faaf0451-7134-4014-aa11-8019f10ffe8a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:25:36 np0005539552 nova_compute[233724]: 2025-11-29 08:25:36.968 233728 DEBUG oslo_concurrency.lockutils [req-b7165df0-5af1-4229-a947-7a6089b3726f req-5d7c13df-d491-48bc-8363-408d34e20c78 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "faaf0451-7134-4014-aa11-8019f10ffe8a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:25:36 np0005539552 nova_compute[233724]: 2025-11-29 08:25:36.968 233728 DEBUG nova.compute.manager [req-b7165df0-5af1-4229-a947-7a6089b3726f req-5d7c13df-d491-48bc-8363-408d34e20c78 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] No waiting events found dispatching network-vif-plugged-f81081eb-fee3-4706-a6df-e1400380a3be pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:25:36 np0005539552 nova_compute[233724]: 2025-11-29 08:25:36.969 233728 WARNING nova.compute.manager [req-b7165df0-5af1-4229-a947-7a6089b3726f req-5d7c13df-d491-48bc-8363-408d34e20c78 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Received unexpected event network-vif-plugged-f81081eb-fee3-4706-a6df-e1400380a3be for instance with vm_state active and task_state rescuing.
Nov 29 03:25:37 np0005539552 nova_compute[233724]: 2025-11-29 08:25:37.542 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:25:37 np0005539552 nova_compute[233724]: 2025-11-29 08:25:37.542 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:25:37 np0005539552 nova_compute[233724]: 2025-11-29 08:25:37.643 233728 INFO nova.virt.libvirt.driver [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Creating config drive at /var/lib/nova/instances/faaf0451-7134-4014-aa11-8019f10ffe8a/disk.config.rescue
Nov 29 03:25:37 np0005539552 nova_compute[233724]: 2025-11-29 08:25:37.653 233728 DEBUG oslo_concurrency.processutils [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/faaf0451-7134-4014-aa11-8019f10ffe8a/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxdhmwcxv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:25:37 np0005539552 nova_compute[233724]: 2025-11-29 08:25:37.794 233728 DEBUG oslo_concurrency.processutils [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/faaf0451-7134-4014-aa11-8019f10ffe8a/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxdhmwcxv" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:25:37 np0005539552 nova_compute[233724]: 2025-11-29 08:25:37.827 233728 DEBUG nova.storage.rbd_utils [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] rbd image faaf0451-7134-4014-aa11-8019f10ffe8a_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:25:37 np0005539552 nova_compute[233724]: 2025-11-29 08:25:37.831 233728 DEBUG oslo_concurrency.processutils [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/faaf0451-7134-4014-aa11-8019f10ffe8a/disk.config.rescue faaf0451-7134-4014-aa11-8019f10ffe8a_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:25:37 np0005539552 nova_compute[233724]: 2025-11-29 08:25:37.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:25:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:37.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:38 np0005539552 nova_compute[233724]: 2025-11-29 08:25:38.014 233728 DEBUG oslo_concurrency.processutils [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/faaf0451-7134-4014-aa11-8019f10ffe8a/disk.config.rescue faaf0451-7134-4014-aa11-8019f10ffe8a_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.184s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:25:38 np0005539552 nova_compute[233724]: 2025-11-29 08:25:38.015 233728 INFO nova.virt.libvirt.driver [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Deleting local config drive /var/lib/nova/instances/faaf0451-7134-4014-aa11-8019f10ffe8a/disk.config.rescue because it was imported into RBD.
Nov 29 03:25:38 np0005539552 kernel: tapf81081eb-fe: entered promiscuous mode
Nov 29 03:25:38 np0005539552 NetworkManager[48926]: <info>  [1764404738.0705] manager: (tapf81081eb-fe): new Tun device (/org/freedesktop/NetworkManager/Devices/267)
Nov 29 03:25:38 np0005539552 nova_compute[233724]: 2025-11-29 08:25:38.069 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:38 np0005539552 ovn_controller[133798]: 2025-11-29T08:25:38Z|00594|binding|INFO|Claiming lport f81081eb-fee3-4706-a6df-e1400380a3be for this chassis.
Nov 29 03:25:38 np0005539552 ovn_controller[133798]: 2025-11-29T08:25:38Z|00595|binding|INFO|f81081eb-fee3-4706-a6df-e1400380a3be: Claiming fa:16:3e:5e:c5:a7 10.100.0.8
Nov 29 03:25:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:38.075 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:c5:a7 10.100.0.8'], port_security=['fa:16:3e:5e:c5:a7 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'faaf0451-7134-4014-aa11-8019f10ffe8a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3bd4137-d392-4145-85ca-267270babe0f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dd5ce4a2eb794cdd850dd88487f89b9a', 'neutron:revision_number': '5', 'neutron:security_group_ids': '89135a4d-577c-4413-84fa-0487b26f7a91', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d40bb599-5a01-464e-b600-44bfc4dd511c, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=f81081eb-fee3-4706-a6df-e1400380a3be) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:25:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:38.077 143400 INFO neutron.agent.ovn.metadata.agent [-] Port f81081eb-fee3-4706-a6df-e1400380a3be in datapath f3bd4137-d392-4145-85ca-267270babe0f bound to our chassis
Nov 29 03:25:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:38.077 143400 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network f3bd4137-d392-4145-85ca-267270babe0f or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 29 03:25:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:38.078 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[94305efb-ae71-41eb-8a5a-0f3f1973611a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:25:38 np0005539552 ovn_controller[133798]: 2025-11-29T08:25:38Z|00596|binding|INFO|Setting lport f81081eb-fee3-4706-a6df-e1400380a3be ovn-installed in OVS
Nov 29 03:25:38 np0005539552 ovn_controller[133798]: 2025-11-29T08:25:38Z|00597|binding|INFO|Setting lport f81081eb-fee3-4706-a6df-e1400380a3be up in Southbound
Nov 29 03:25:38 np0005539552 nova_compute[233724]: 2025-11-29 08:25:38.087 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:38 np0005539552 nova_compute[233724]: 2025-11-29 08:25:38.093 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:38 np0005539552 systemd-udevd[290019]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:25:38 np0005539552 systemd-machined[196379]: New machine qemu-59-instance-00000086.
Nov 29 03:25:38 np0005539552 NetworkManager[48926]: <info>  [1764404738.1100] device (tapf81081eb-fe): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:25:38 np0005539552 NetworkManager[48926]: <info>  [1764404738.1115] device (tapf81081eb-fe): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:25:38 np0005539552 systemd[1]: Started Virtual Machine qemu-59-instance-00000086.
Nov 29 03:25:38 np0005539552 nova_compute[233724]: 2025-11-29 08:25:38.649 233728 DEBUG nova.virt.libvirt.host [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Removed pending event for faaf0451-7134-4014-aa11-8019f10ffe8a due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 29 03:25:38 np0005539552 nova_compute[233724]: 2025-11-29 08:25:38.650 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404738.6491416, faaf0451-7134-4014-aa11-8019f10ffe8a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:25:38 np0005539552 nova_compute[233724]: 2025-11-29 08:25:38.651 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] VM Resumed (Lifecycle Event)
Nov 29 03:25:38 np0005539552 nova_compute[233724]: 2025-11-29 08:25:38.655 233728 DEBUG nova.compute.manager [None req-5ef31313-fbb7-4bae-99f1-7ab20c11ef75 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:25:38 np0005539552 nova_compute[233724]: 2025-11-29 08:25:38.714 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:25:38 np0005539552 nova_compute[233724]: 2025-11-29 08:25:38.717 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:25:38 np0005539552 nova_compute[233724]: 2025-11-29 08:25:38.772 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] During sync_power_state the instance has a pending task (rescuing). Skip.
Nov 29 03:25:38 np0005539552 nova_compute[233724]: 2025-11-29 08:25:38.773 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404738.6504571, faaf0451-7134-4014-aa11-8019f10ffe8a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:25:38 np0005539552 nova_compute[233724]: 2025-11-29 08:25:38.773 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] VM Started (Lifecycle Event)
Nov 29 03:25:38 np0005539552 nova_compute[233724]: 2025-11-29 08:25:38.800 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:25:38 np0005539552 nova_compute[233724]: 2025-11-29 08:25:38.804 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:25:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:38.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:38 np0005539552 nova_compute[233724]: 2025-11-29 08:25:38.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:25:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:25:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/993021273' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:25:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:25:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/993021273' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:25:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:39.640 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:25:39 np0005539552 nova_compute[233724]: 2025-11-29 08:25:39.641 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:39.641 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:25:39 np0005539552 nova_compute[233724]: 2025-11-29 08:25:39.922 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:25:39 np0005539552 nova_compute[233724]: 2025-11-29 08:25:39.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:25:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:39.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:40 np0005539552 nova_compute[233724]: 2025-11-29 08:25:40.475 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:40.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:40 np0005539552 nova_compute[233724]: 2025-11-29 08:25:40.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:25:41 np0005539552 nova_compute[233724]: 2025-11-29 08:25:41.368 233728 DEBUG nova.compute.manager [req-7f92a3f7-2f38-4edb-874f-2917969ad727 req-f70cb8d8-dfab-4217-91d2-6500575b3337 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Received event network-changed-f81081eb-fee3-4706-a6df-e1400380a3be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:25:41 np0005539552 nova_compute[233724]: 2025-11-29 08:25:41.369 233728 DEBUG nova.compute.manager [req-7f92a3f7-2f38-4edb-874f-2917969ad727 req-f70cb8d8-dfab-4217-91d2-6500575b3337 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Refreshing instance network info cache due to event network-changed-f81081eb-fee3-4706-a6df-e1400380a3be. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:25:41 np0005539552 nova_compute[233724]: 2025-11-29 08:25:41.369 233728 DEBUG oslo_concurrency.lockutils [req-7f92a3f7-2f38-4edb-874f-2917969ad727 req-f70cb8d8-dfab-4217-91d2-6500575b3337 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-faaf0451-7134-4014-aa11-8019f10ffe8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:25:41 np0005539552 nova_compute[233724]: 2025-11-29 08:25:41.369 233728 DEBUG oslo_concurrency.lockutils [req-7f92a3f7-2f38-4edb-874f-2917969ad727 req-f70cb8d8-dfab-4217-91d2-6500575b3337 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-faaf0451-7134-4014-aa11-8019f10ffe8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:25:41 np0005539552 nova_compute[233724]: 2025-11-29 08:25:41.369 233728 DEBUG nova.network.neutron [req-7f92a3f7-2f38-4edb-874f-2917969ad727 req-f70cb8d8-dfab-4217-91d2-6500575b3337 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Refreshing network info cache for port f81081eb-fee3-4706-a6df-e1400380a3be _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:25:41 np0005539552 nova_compute[233724]: 2025-11-29 08:25:41.526 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:41 np0005539552 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 29 03:25:41 np0005539552 nova_compute[233724]: 2025-11-29 08:25:41.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:25:41 np0005539552 nova_compute[233724]: 2025-11-29 08:25:41.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:25:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:41.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:41 np0005539552 nova_compute[233724]: 2025-11-29 08:25:41.974 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:25:42 np0005539552 nova_compute[233724]: 2025-11-29 08:25:42.565 233728 DEBUG nova.network.neutron [req-7f92a3f7-2f38-4edb-874f-2917969ad727 req-f70cb8d8-dfab-4217-91d2-6500575b3337 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Updated VIF entry in instance network info cache for port f81081eb-fee3-4706-a6df-e1400380a3be. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:25:42 np0005539552 nova_compute[233724]: 2025-11-29 08:25:42.567 233728 DEBUG nova.network.neutron [req-7f92a3f7-2f38-4edb-874f-2917969ad727 req-f70cb8d8-dfab-4217-91d2-6500575b3337 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Updating instance_info_cache with network_info: [{"id": "f81081eb-fee3-4706-a6df-e1400380a3be", "address": "fa:16:3e:5e:c5:a7", "network": {"id": "f3bd4137-d392-4145-85ca-267270babe0f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-257813476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "dd5ce4a2eb794cdd850dd88487f89b9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf81081eb-fe", "ovs_interfaceid": "f81081eb-fee3-4706-a6df-e1400380a3be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:25:42 np0005539552 nova_compute[233724]: 2025-11-29 08:25:42.595 233728 DEBUG oslo_concurrency.lockutils [req-7f92a3f7-2f38-4edb-874f-2917969ad727 req-f70cb8d8-dfab-4217-91d2-6500575b3337 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-faaf0451-7134-4014-aa11-8019f10ffe8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:25:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:42.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:42 np0005539552 nova_compute[233724]: 2025-11-29 08:25:42.970 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:25:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e330 e330: 3 total, 3 up, 3 in
Nov 29 03:25:43 np0005539552 nova_compute[233724]: 2025-11-29 08:25:43.527 233728 DEBUG nova.compute.manager [req-f403ff39-d17f-466b-8f9c-b644c4fbcf66 req-cce7f5b5-cc04-407f-919c-c83979b22559 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Received event network-changed-f81081eb-fee3-4706-a6df-e1400380a3be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:43 np0005539552 nova_compute[233724]: 2025-11-29 08:25:43.528 233728 DEBUG nova.compute.manager [req-f403ff39-d17f-466b-8f9c-b644c4fbcf66 req-cce7f5b5-cc04-407f-919c-c83979b22559 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Refreshing instance network info cache due to event network-changed-f81081eb-fee3-4706-a6df-e1400380a3be. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:25:43 np0005539552 nova_compute[233724]: 2025-11-29 08:25:43.528 233728 DEBUG oslo_concurrency.lockutils [req-f403ff39-d17f-466b-8f9c-b644c4fbcf66 req-cce7f5b5-cc04-407f-919c-c83979b22559 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-faaf0451-7134-4014-aa11-8019f10ffe8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:25:43 np0005539552 nova_compute[233724]: 2025-11-29 08:25:43.528 233728 DEBUG oslo_concurrency.lockutils [req-f403ff39-d17f-466b-8f9c-b644c4fbcf66 req-cce7f5b5-cc04-407f-919c-c83979b22559 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-faaf0451-7134-4014-aa11-8019f10ffe8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:25:43 np0005539552 nova_compute[233724]: 2025-11-29 08:25:43.528 233728 DEBUG nova.network.neutron [req-f403ff39-d17f-466b-8f9c-b644c4fbcf66 req-cce7f5b5-cc04-407f-919c-c83979b22559 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Refreshing network info cache for port f81081eb-fee3-4706-a6df-e1400380a3be _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:25:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:43.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:44 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e331 e331: 3 total, 3 up, 3 in
Nov 29 03:25:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:44.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:44 np0005539552 podman[290146]: 2025-11-29 08:25:44.983511147 +0000 UTC m=+0.061346531 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Nov 29 03:25:44 np0005539552 podman[290145]: 2025-11-29 08:25:44.987304939 +0000 UTC m=+0.069151651 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:25:45 np0005539552 podman[290147]: 2025-11-29 08:25:45.014315016 +0000 UTC m=+0.100320180 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:25:45 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e332 e332: 3 total, 3 up, 3 in
Nov 29 03:25:45 np0005539552 nova_compute[233724]: 2025-11-29 08:25:45.455 233728 DEBUG nova.network.neutron [req-f403ff39-d17f-466b-8f9c-b644c4fbcf66 req-cce7f5b5-cc04-407f-919c-c83979b22559 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Updated VIF entry in instance network info cache for port f81081eb-fee3-4706-a6df-e1400380a3be. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:25:45 np0005539552 nova_compute[233724]: 2025-11-29 08:25:45.456 233728 DEBUG nova.network.neutron [req-f403ff39-d17f-466b-8f9c-b644c4fbcf66 req-cce7f5b5-cc04-407f-919c-c83979b22559 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Updating instance_info_cache with network_info: [{"id": "f81081eb-fee3-4706-a6df-e1400380a3be", "address": "fa:16:3e:5e:c5:a7", "network": {"id": "f3bd4137-d392-4145-85ca-267270babe0f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-257813476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "dd5ce4a2eb794cdd850dd88487f89b9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf81081eb-fe", "ovs_interfaceid": "f81081eb-fee3-4706-a6df-e1400380a3be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:25:45 np0005539552 nova_compute[233724]: 2025-11-29 08:25:45.477 233728 DEBUG oslo_concurrency.lockutils [req-f403ff39-d17f-466b-8f9c-b644c4fbcf66 req-cce7f5b5-cc04-407f-919c-c83979b22559 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-faaf0451-7134-4014-aa11-8019f10ffe8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:25:45 np0005539552 nova_compute[233724]: 2025-11-29 08:25:45.477 233728 DEBUG nova.compute.manager [req-f403ff39-d17f-466b-8f9c-b644c4fbcf66 req-cce7f5b5-cc04-407f-919c-c83979b22559 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Received event network-vif-plugged-f81081eb-fee3-4706-a6df-e1400380a3be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:45 np0005539552 nova_compute[233724]: 2025-11-29 08:25:45.477 233728 DEBUG oslo_concurrency.lockutils [req-f403ff39-d17f-466b-8f9c-b644c4fbcf66 req-cce7f5b5-cc04-407f-919c-c83979b22559 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "faaf0451-7134-4014-aa11-8019f10ffe8a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:45 np0005539552 nova_compute[233724]: 2025-11-29 08:25:45.478 233728 DEBUG oslo_concurrency.lockutils [req-f403ff39-d17f-466b-8f9c-b644c4fbcf66 req-cce7f5b5-cc04-407f-919c-c83979b22559 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "faaf0451-7134-4014-aa11-8019f10ffe8a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:45 np0005539552 nova_compute[233724]: 2025-11-29 08:25:45.478 233728 DEBUG oslo_concurrency.lockutils [req-f403ff39-d17f-466b-8f9c-b644c4fbcf66 req-cce7f5b5-cc04-407f-919c-c83979b22559 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "faaf0451-7134-4014-aa11-8019f10ffe8a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:45 np0005539552 nova_compute[233724]: 2025-11-29 08:25:45.478 233728 DEBUG nova.compute.manager [req-f403ff39-d17f-466b-8f9c-b644c4fbcf66 req-cce7f5b5-cc04-407f-919c-c83979b22559 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] No waiting events found dispatching network-vif-plugged-f81081eb-fee3-4706-a6df-e1400380a3be pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:25:45 np0005539552 nova_compute[233724]: 2025-11-29 08:25:45.478 233728 WARNING nova.compute.manager [req-f403ff39-d17f-466b-8f9c-b644c4fbcf66 req-cce7f5b5-cc04-407f-919c-c83979b22559 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Received unexpected event network-vif-plugged-f81081eb-fee3-4706-a6df-e1400380a3be for instance with vm_state rescued and task_state None.#033[00m
Nov 29 03:25:45 np0005539552 nova_compute[233724]: 2025-11-29 08:25:45.478 233728 DEBUG nova.compute.manager [req-f403ff39-d17f-466b-8f9c-b644c4fbcf66 req-cce7f5b5-cc04-407f-919c-c83979b22559 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Received event network-vif-plugged-f81081eb-fee3-4706-a6df-e1400380a3be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:45 np0005539552 nova_compute[233724]: 2025-11-29 08:25:45.479 233728 DEBUG oslo_concurrency.lockutils [req-f403ff39-d17f-466b-8f9c-b644c4fbcf66 req-cce7f5b5-cc04-407f-919c-c83979b22559 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "faaf0451-7134-4014-aa11-8019f10ffe8a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:45 np0005539552 nova_compute[233724]: 2025-11-29 08:25:45.479 233728 DEBUG oslo_concurrency.lockutils [req-f403ff39-d17f-466b-8f9c-b644c4fbcf66 req-cce7f5b5-cc04-407f-919c-c83979b22559 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "faaf0451-7134-4014-aa11-8019f10ffe8a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:45 np0005539552 nova_compute[233724]: 2025-11-29 08:25:45.479 233728 DEBUG oslo_concurrency.lockutils [req-f403ff39-d17f-466b-8f9c-b644c4fbcf66 req-cce7f5b5-cc04-407f-919c-c83979b22559 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "faaf0451-7134-4014-aa11-8019f10ffe8a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:45 np0005539552 nova_compute[233724]: 2025-11-29 08:25:45.479 233728 DEBUG nova.compute.manager [req-f403ff39-d17f-466b-8f9c-b644c4fbcf66 req-cce7f5b5-cc04-407f-919c-c83979b22559 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] No waiting events found dispatching network-vif-plugged-f81081eb-fee3-4706-a6df-e1400380a3be pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:25:45 np0005539552 nova_compute[233724]: 2025-11-29 08:25:45.479 233728 WARNING nova.compute.manager [req-f403ff39-d17f-466b-8f9c-b644c4fbcf66 req-cce7f5b5-cc04-407f-919c-c83979b22559 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Received unexpected event network-vif-plugged-f81081eb-fee3-4706-a6df-e1400380a3be for instance with vm_state rescued and task_state None.#033[00m
Nov 29 03:25:45 np0005539552 nova_compute[233724]: 2025-11-29 08:25:45.480 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:25:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:45.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:25:46 np0005539552 nova_compute[233724]: 2025-11-29 08:25:46.005 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:46 np0005539552 NetworkManager[48926]: <info>  [1764404746.0067] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/268)
Nov 29 03:25:46 np0005539552 NetworkManager[48926]: <info>  [1764404746.0081] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/269)
Nov 29 03:25:46 np0005539552 nova_compute[233724]: 2025-11-29 08:25:46.164 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:46 np0005539552 nova_compute[233724]: 2025-11-29 08:25:46.180 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:46 np0005539552 nova_compute[233724]: 2025-11-29 08:25:46.529 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:46 np0005539552 nova_compute[233724]: 2025-11-29 08:25:46.554 233728 DEBUG nova.compute.manager [req-2210871c-6e7a-47b8-9a7c-8c913e5f6e76 req-dec3f594-d9c6-4e44-87fa-192f5cb5af10 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Received event network-changed-f81081eb-fee3-4706-a6df-e1400380a3be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:46 np0005539552 nova_compute[233724]: 2025-11-29 08:25:46.555 233728 DEBUG nova.compute.manager [req-2210871c-6e7a-47b8-9a7c-8c913e5f6e76 req-dec3f594-d9c6-4e44-87fa-192f5cb5af10 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Refreshing instance network info cache due to event network-changed-f81081eb-fee3-4706-a6df-e1400380a3be. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:25:46 np0005539552 nova_compute[233724]: 2025-11-29 08:25:46.555 233728 DEBUG oslo_concurrency.lockutils [req-2210871c-6e7a-47b8-9a7c-8c913e5f6e76 req-dec3f594-d9c6-4e44-87fa-192f5cb5af10 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-faaf0451-7134-4014-aa11-8019f10ffe8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:25:46 np0005539552 nova_compute[233724]: 2025-11-29 08:25:46.555 233728 DEBUG oslo_concurrency.lockutils [req-2210871c-6e7a-47b8-9a7c-8c913e5f6e76 req-dec3f594-d9c6-4e44-87fa-192f5cb5af10 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-faaf0451-7134-4014-aa11-8019f10ffe8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:25:46 np0005539552 nova_compute[233724]: 2025-11-29 08:25:46.556 233728 DEBUG nova.network.neutron [req-2210871c-6e7a-47b8-9a7c-8c913e5f6e76 req-dec3f594-d9c6-4e44-87fa-192f5cb5af10 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Refreshing network info cache for port f81081eb-fee3-4706-a6df-e1400380a3be _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:25:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:46.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:47.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:48 np0005539552 nova_compute[233724]: 2025-11-29 08:25:48.144 233728 DEBUG nova.network.neutron [req-2210871c-6e7a-47b8-9a7c-8c913e5f6e76 req-dec3f594-d9c6-4e44-87fa-192f5cb5af10 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Updated VIF entry in instance network info cache for port f81081eb-fee3-4706-a6df-e1400380a3be. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:25:48 np0005539552 nova_compute[233724]: 2025-11-29 08:25:48.145 233728 DEBUG nova.network.neutron [req-2210871c-6e7a-47b8-9a7c-8c913e5f6e76 req-dec3f594-d9c6-4e44-87fa-192f5cb5af10 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Updating instance_info_cache with network_info: [{"id": "f81081eb-fee3-4706-a6df-e1400380a3be", "address": "fa:16:3e:5e:c5:a7", "network": {"id": "f3bd4137-d392-4145-85ca-267270babe0f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-257813476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "dd5ce4a2eb794cdd850dd88487f89b9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf81081eb-fe", "ovs_interfaceid": "f81081eb-fee3-4706-a6df-e1400380a3be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:25:48 np0005539552 nova_compute[233724]: 2025-11-29 08:25:48.170 233728 DEBUG oslo_concurrency.lockutils [req-2210871c-6e7a-47b8-9a7c-8c913e5f6e76 req-dec3f594-d9c6-4e44-87fa-192f5cb5af10 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-faaf0451-7134-4014-aa11-8019f10ffe8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:25:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:48.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:49.643 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:49.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:50 np0005539552 nova_compute[233724]: 2025-11-29 08:25:50.480 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:50 np0005539552 nova_compute[233724]: 2025-11-29 08:25:50.632 233728 DEBUG nova.compute.manager [req-6ec29450-0dbc-4fed-8abf-a9954305bf8c req-f6f7bf8f-6d1b-4f3e-baea-58f887331219 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Received event network-changed-f81081eb-fee3-4706-a6df-e1400380a3be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:50 np0005539552 nova_compute[233724]: 2025-11-29 08:25:50.632 233728 DEBUG nova.compute.manager [req-6ec29450-0dbc-4fed-8abf-a9954305bf8c req-f6f7bf8f-6d1b-4f3e-baea-58f887331219 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Refreshing instance network info cache due to event network-changed-f81081eb-fee3-4706-a6df-e1400380a3be. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:25:50 np0005539552 nova_compute[233724]: 2025-11-29 08:25:50.632 233728 DEBUG oslo_concurrency.lockutils [req-6ec29450-0dbc-4fed-8abf-a9954305bf8c req-f6f7bf8f-6d1b-4f3e-baea-58f887331219 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-faaf0451-7134-4014-aa11-8019f10ffe8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:25:50 np0005539552 nova_compute[233724]: 2025-11-29 08:25:50.632 233728 DEBUG oslo_concurrency.lockutils [req-6ec29450-0dbc-4fed-8abf-a9954305bf8c req-f6f7bf8f-6d1b-4f3e-baea-58f887331219 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-faaf0451-7134-4014-aa11-8019f10ffe8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:25:50 np0005539552 nova_compute[233724]: 2025-11-29 08:25:50.633 233728 DEBUG nova.network.neutron [req-6ec29450-0dbc-4fed-8abf-a9954305bf8c req-f6f7bf8f-6d1b-4f3e-baea-58f887331219 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Refreshing network info cache for port f81081eb-fee3-4706-a6df-e1400380a3be _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:25:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:50.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:51 np0005539552 nova_compute[233724]: 2025-11-29 08:25:51.531 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:51 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:25:51 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.3 total, 600.0 interval#012Cumulative writes: 48K writes, 197K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.05 MB/s#012Cumulative WAL: 48K writes, 17K syncs, 2.77 writes per sync, written: 0.19 GB, 0.05 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 11K writes, 47K keys, 11K commit groups, 1.0 writes per commit group, ingest: 44.58 MB, 0.07 MB/s#012Interval WAL: 11K writes, 4710 syncs, 2.49 writes per sync, written: 0.04 GB, 0.07 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 03:25:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:51.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e333 e333: 3 total, 3 up, 3 in
Nov 29 03:25:52 np0005539552 nova_compute[233724]: 2025-11-29 08:25:52.297 233728 DEBUG nova.network.neutron [req-6ec29450-0dbc-4fed-8abf-a9954305bf8c req-f6f7bf8f-6d1b-4f3e-baea-58f887331219 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Updated VIF entry in instance network info cache for port f81081eb-fee3-4706-a6df-e1400380a3be. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:25:52 np0005539552 nova_compute[233724]: 2025-11-29 08:25:52.298 233728 DEBUG nova.network.neutron [req-6ec29450-0dbc-4fed-8abf-a9954305bf8c req-f6f7bf8f-6d1b-4f3e-baea-58f887331219 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Updating instance_info_cache with network_info: [{"id": "f81081eb-fee3-4706-a6df-e1400380a3be", "address": "fa:16:3e:5e:c5:a7", "network": {"id": "f3bd4137-d392-4145-85ca-267270babe0f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-257813476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "dd5ce4a2eb794cdd850dd88487f89b9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf81081eb-fe", "ovs_interfaceid": "f81081eb-fee3-4706-a6df-e1400380a3be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:25:52 np0005539552 nova_compute[233724]: 2025-11-29 08:25:52.472 233728 DEBUG oslo_concurrency.lockutils [req-6ec29450-0dbc-4fed-8abf-a9954305bf8c req-f6f7bf8f-6d1b-4f3e-baea-58f887331219 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-faaf0451-7134-4014-aa11-8019f10ffe8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:25:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:52.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:53.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:25:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:54.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:25:55 np0005539552 nova_compute[233724]: 2025-11-29 08:25:55.483 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:55.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:56 np0005539552 nova_compute[233724]: 2025-11-29 08:25:56.533 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:56.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:57 np0005539552 nova_compute[233724]: 2025-11-29 08:25:57.096 233728 DEBUG oslo_concurrency.lockutils [None req-00d0c8c5-f51d-4967-84b4-99da1bf18ae7 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Acquiring lock "faaf0451-7134-4014-aa11-8019f10ffe8a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:57 np0005539552 nova_compute[233724]: 2025-11-29 08:25:57.097 233728 DEBUG oslo_concurrency.lockutils [None req-00d0c8c5-f51d-4967-84b4-99da1bf18ae7 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Lock "faaf0451-7134-4014-aa11-8019f10ffe8a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:57 np0005539552 nova_compute[233724]: 2025-11-29 08:25:57.097 233728 DEBUG oslo_concurrency.lockutils [None req-00d0c8c5-f51d-4967-84b4-99da1bf18ae7 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Acquiring lock "faaf0451-7134-4014-aa11-8019f10ffe8a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:57 np0005539552 nova_compute[233724]: 2025-11-29 08:25:57.098 233728 DEBUG oslo_concurrency.lockutils [None req-00d0c8c5-f51d-4967-84b4-99da1bf18ae7 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Lock "faaf0451-7134-4014-aa11-8019f10ffe8a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:57 np0005539552 nova_compute[233724]: 2025-11-29 08:25:57.098 233728 DEBUG oslo_concurrency.lockutils [None req-00d0c8c5-f51d-4967-84b4-99da1bf18ae7 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Lock "faaf0451-7134-4014-aa11-8019f10ffe8a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:57 np0005539552 nova_compute[233724]: 2025-11-29 08:25:57.100 233728 INFO nova.compute.manager [None req-00d0c8c5-f51d-4967-84b4-99da1bf18ae7 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Terminating instance#033[00m
Nov 29 03:25:57 np0005539552 nova_compute[233724]: 2025-11-29 08:25:57.102 233728 DEBUG nova.compute.manager [None req-00d0c8c5-f51d-4967-84b4-99da1bf18ae7 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:25:57 np0005539552 kernel: tapf81081eb-fe (unregistering): left promiscuous mode
Nov 29 03:25:57 np0005539552 NetworkManager[48926]: <info>  [1764404757.1798] device (tapf81081eb-fe): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:25:57 np0005539552 ovn_controller[133798]: 2025-11-29T08:25:57Z|00598|binding|INFO|Releasing lport f81081eb-fee3-4706-a6df-e1400380a3be from this chassis (sb_readonly=0)
Nov 29 03:25:57 np0005539552 ovn_controller[133798]: 2025-11-29T08:25:57Z|00599|binding|INFO|Setting lport f81081eb-fee3-4706-a6df-e1400380a3be down in Southbound
Nov 29 03:25:57 np0005539552 nova_compute[233724]: 2025-11-29 08:25:57.191 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:57 np0005539552 ovn_controller[133798]: 2025-11-29T08:25:57Z|00600|binding|INFO|Removing iface tapf81081eb-fe ovn-installed in OVS
Nov 29 03:25:57 np0005539552 nova_compute[233724]: 2025-11-29 08:25:57.196 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:57 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:57.200 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:c5:a7 10.100.0.8'], port_security=['fa:16:3e:5e:c5:a7 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'faaf0451-7134-4014-aa11-8019f10ffe8a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3bd4137-d392-4145-85ca-267270babe0f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dd5ce4a2eb794cdd850dd88487f89b9a', 'neutron:revision_number': '8', 'neutron:security_group_ids': '89135a4d-577c-4413-84fa-0487b26f7a91', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d40bb599-5a01-464e-b600-44bfc4dd511c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=f81081eb-fee3-4706-a6df-e1400380a3be) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:25:57 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:57.201 143400 INFO neutron.agent.ovn.metadata.agent [-] Port f81081eb-fee3-4706-a6df-e1400380a3be in datapath f3bd4137-d392-4145-85ca-267270babe0f unbound from our chassis#033[00m
Nov 29 03:25:57 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:57.202 143400 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network f3bd4137-d392-4145-85ca-267270babe0f or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 29 03:25:57 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:25:57.203 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[8b869838-76e5-4bd0-8658-fce77def0500]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:57 np0005539552 nova_compute[233724]: 2025-11-29 08:25:57.230 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:57 np0005539552 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000086.scope: Deactivated successfully.
Nov 29 03:25:57 np0005539552 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000086.scope: Consumed 13.852s CPU time.
Nov 29 03:25:57 np0005539552 systemd-machined[196379]: Machine qemu-59-instance-00000086 terminated.
Nov 29 03:25:57 np0005539552 nova_compute[233724]: 2025-11-29 08:25:57.339 233728 INFO nova.virt.libvirt.driver [-] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Instance destroyed successfully.#033[00m
Nov 29 03:25:57 np0005539552 nova_compute[233724]: 2025-11-29 08:25:57.340 233728 DEBUG nova.objects.instance [None req-00d0c8c5-f51d-4967-84b4-99da1bf18ae7 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Lazy-loading 'resources' on Instance uuid faaf0451-7134-4014-aa11-8019f10ffe8a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:57 np0005539552 nova_compute[233724]: 2025-11-29 08:25:57.365 233728 DEBUG nova.virt.libvirt.vif [None req-00d0c8c5-f51d-4967-84b4-99da1bf18ae7 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:25:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-157621080',display_name='tempest-ServerRescueTestJSONUnderV235-server-157621080',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-157621080',id=134,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:25:38Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dd5ce4a2eb794cdd850dd88487f89b9a',ramdisk_id='',reservation_id='r-b4txo28s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-1385777035',owner_user_name='tempest-ServerRescueTestJSONUnderV235-1385777035-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:25:38Z,user_data=None,user_id='e5346d862b0f4465aa9162f206696903',uuid=faaf0451-7134-4014-aa11-8019f10ffe8a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "f81081eb-fee3-4706-a6df-e1400380a3be", "address": "fa:16:3e:5e:c5:a7", "network": {"id": "f3bd4137-d392-4145-85ca-267270babe0f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-257813476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "dd5ce4a2eb794cdd850dd88487f89b9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf81081eb-fe", "ovs_interfaceid": "f81081eb-fee3-4706-a6df-e1400380a3be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:25:57 np0005539552 nova_compute[233724]: 2025-11-29 08:25:57.366 233728 DEBUG nova.network.os_vif_util [None req-00d0c8c5-f51d-4967-84b4-99da1bf18ae7 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Converting VIF {"id": "f81081eb-fee3-4706-a6df-e1400380a3be", "address": "fa:16:3e:5e:c5:a7", "network": {"id": "f3bd4137-d392-4145-85ca-267270babe0f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-257813476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "dd5ce4a2eb794cdd850dd88487f89b9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf81081eb-fe", "ovs_interfaceid": "f81081eb-fee3-4706-a6df-e1400380a3be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:25:57 np0005539552 nova_compute[233724]: 2025-11-29 08:25:57.366 233728 DEBUG nova.network.os_vif_util [None req-00d0c8c5-f51d-4967-84b4-99da1bf18ae7 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5e:c5:a7,bridge_name='br-int',has_traffic_filtering=True,id=f81081eb-fee3-4706-a6df-e1400380a3be,network=Network(f3bd4137-d392-4145-85ca-267270babe0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf81081eb-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:25:57 np0005539552 nova_compute[233724]: 2025-11-29 08:25:57.367 233728 DEBUG os_vif [None req-00d0c8c5-f51d-4967-84b4-99da1bf18ae7 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5e:c5:a7,bridge_name='br-int',has_traffic_filtering=True,id=f81081eb-fee3-4706-a6df-e1400380a3be,network=Network(f3bd4137-d392-4145-85ca-267270babe0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf81081eb-fe') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:25:57 np0005539552 nova_compute[233724]: 2025-11-29 08:25:57.368 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:57 np0005539552 nova_compute[233724]: 2025-11-29 08:25:57.369 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf81081eb-fe, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:57 np0005539552 nova_compute[233724]: 2025-11-29 08:25:57.370 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:57 np0005539552 nova_compute[233724]: 2025-11-29 08:25:57.373 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:57 np0005539552 nova_compute[233724]: 2025-11-29 08:25:57.376 233728 INFO os_vif [None req-00d0c8c5-f51d-4967-84b4-99da1bf18ae7 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5e:c5:a7,bridge_name='br-int',has_traffic_filtering=True,id=f81081eb-fee3-4706-a6df-e1400380a3be,network=Network(f3bd4137-d392-4145-85ca-267270babe0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf81081eb-fe')#033[00m
Nov 29 03:25:57 np0005539552 nova_compute[233724]: 2025-11-29 08:25:57.660 233728 DEBUG nova.compute.manager [req-547cd0a2-0f90-4afa-ad00-258cd3efbe1d req-0f632b9a-d729-41ec-96ff-2c4a6bb1973b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Received event network-vif-unplugged-f81081eb-fee3-4706-a6df-e1400380a3be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:57 np0005539552 nova_compute[233724]: 2025-11-29 08:25:57.661 233728 DEBUG oslo_concurrency.lockutils [req-547cd0a2-0f90-4afa-ad00-258cd3efbe1d req-0f632b9a-d729-41ec-96ff-2c4a6bb1973b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "faaf0451-7134-4014-aa11-8019f10ffe8a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:57 np0005539552 nova_compute[233724]: 2025-11-29 08:25:57.661 233728 DEBUG oslo_concurrency.lockutils [req-547cd0a2-0f90-4afa-ad00-258cd3efbe1d req-0f632b9a-d729-41ec-96ff-2c4a6bb1973b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "faaf0451-7134-4014-aa11-8019f10ffe8a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:57 np0005539552 nova_compute[233724]: 2025-11-29 08:25:57.662 233728 DEBUG oslo_concurrency.lockutils [req-547cd0a2-0f90-4afa-ad00-258cd3efbe1d req-0f632b9a-d729-41ec-96ff-2c4a6bb1973b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "faaf0451-7134-4014-aa11-8019f10ffe8a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:57 np0005539552 nova_compute[233724]: 2025-11-29 08:25:57.662 233728 DEBUG nova.compute.manager [req-547cd0a2-0f90-4afa-ad00-258cd3efbe1d req-0f632b9a-d729-41ec-96ff-2c4a6bb1973b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] No waiting events found dispatching network-vif-unplugged-f81081eb-fee3-4706-a6df-e1400380a3be pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:25:57 np0005539552 nova_compute[233724]: 2025-11-29 08:25:57.662 233728 DEBUG nova.compute.manager [req-547cd0a2-0f90-4afa-ad00-258cd3efbe1d req-0f632b9a-d729-41ec-96ff-2c4a6bb1973b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Received event network-vif-unplugged-f81081eb-fee3-4706-a6df-e1400380a3be for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:25:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:57.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:57 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:58 np0005539552 nova_compute[233724]: 2025-11-29 08:25:58.778 233728 INFO nova.virt.libvirt.driver [None req-00d0c8c5-f51d-4967-84b4-99da1bf18ae7 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Deleting instance files /var/lib/nova/instances/faaf0451-7134-4014-aa11-8019f10ffe8a_del#033[00m
Nov 29 03:25:58 np0005539552 nova_compute[233724]: 2025-11-29 08:25:58.780 233728 INFO nova.virt.libvirt.driver [None req-00d0c8c5-f51d-4967-84b4-99da1bf18ae7 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Deletion of /var/lib/nova/instances/faaf0451-7134-4014-aa11-8019f10ffe8a_del complete#033[00m
Nov 29 03:25:58 np0005539552 nova_compute[233724]: 2025-11-29 08:25:58.844 233728 INFO nova.compute.manager [None req-00d0c8c5-f51d-4967-84b4-99da1bf18ae7 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Took 1.74 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:25:58 np0005539552 nova_compute[233724]: 2025-11-29 08:25:58.845 233728 DEBUG oslo.service.loopingcall [None req-00d0c8c5-f51d-4967-84b4-99da1bf18ae7 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:25:58 np0005539552 nova_compute[233724]: 2025-11-29 08:25:58.846 233728 DEBUG nova.compute.manager [-] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:25:58 np0005539552 nova_compute[233724]: 2025-11-29 08:25:58.846 233728 DEBUG nova.network.neutron [-] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:25:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:58.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:25:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:59.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:59 np0005539552 nova_compute[233724]: 2025-11-29 08:25:59.975 233728 DEBUG nova.compute.manager [req-1212ed5c-f399-4d86-8499-c90828431744 req-263a9557-5fbe-4910-99b0-0853f3834c12 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Received event network-vif-plugged-f81081eb-fee3-4706-a6df-e1400380a3be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:59 np0005539552 nova_compute[233724]: 2025-11-29 08:25:59.975 233728 DEBUG oslo_concurrency.lockutils [req-1212ed5c-f399-4d86-8499-c90828431744 req-263a9557-5fbe-4910-99b0-0853f3834c12 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "faaf0451-7134-4014-aa11-8019f10ffe8a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:59 np0005539552 nova_compute[233724]: 2025-11-29 08:25:59.975 233728 DEBUG oslo_concurrency.lockutils [req-1212ed5c-f399-4d86-8499-c90828431744 req-263a9557-5fbe-4910-99b0-0853f3834c12 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "faaf0451-7134-4014-aa11-8019f10ffe8a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:59 np0005539552 nova_compute[233724]: 2025-11-29 08:25:59.976 233728 DEBUG oslo_concurrency.lockutils [req-1212ed5c-f399-4d86-8499-c90828431744 req-263a9557-5fbe-4910-99b0-0853f3834c12 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "faaf0451-7134-4014-aa11-8019f10ffe8a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:59 np0005539552 nova_compute[233724]: 2025-11-29 08:25:59.976 233728 DEBUG nova.compute.manager [req-1212ed5c-f399-4d86-8499-c90828431744 req-263a9557-5fbe-4910-99b0-0853f3834c12 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] No waiting events found dispatching network-vif-plugged-f81081eb-fee3-4706-a6df-e1400380a3be pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:25:59 np0005539552 nova_compute[233724]: 2025-11-29 08:25:59.976 233728 WARNING nova.compute.manager [req-1212ed5c-f399-4d86-8499-c90828431744 req-263a9557-5fbe-4910-99b0-0853f3834c12 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Received unexpected event network-vif-plugged-f81081eb-fee3-4706-a6df-e1400380a3be for instance with vm_state rescued and task_state deleting.#033[00m
Nov 29 03:26:00 np0005539552 nova_compute[233724]: 2025-11-29 08:26:00.475 233728 DEBUG nova.network.neutron [-] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:26:00 np0005539552 nova_compute[233724]: 2025-11-29 08:26:00.494 233728 INFO nova.compute.manager [-] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Took 1.65 seconds to deallocate network for instance.#033[00m
Nov 29 03:26:00 np0005539552 nova_compute[233724]: 2025-11-29 08:26:00.560 233728 DEBUG oslo_concurrency.lockutils [None req-00d0c8c5-f51d-4967-84b4-99da1bf18ae7 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:00 np0005539552 nova_compute[233724]: 2025-11-29 08:26:00.561 233728 DEBUG oslo_concurrency.lockutils [None req-00d0c8c5-f51d-4967-84b4-99da1bf18ae7 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:00 np0005539552 nova_compute[233724]: 2025-11-29 08:26:00.625 233728 DEBUG oslo_concurrency.processutils [None req-00d0c8c5-f51d-4967-84b4-99da1bf18ae7 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:00.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:00 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #53. Immutable memtables: 9.
Nov 29 03:26:01 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:26:01 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1229242631' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:26:01 np0005539552 nova_compute[233724]: 2025-11-29 08:26:01.071 233728 DEBUG oslo_concurrency.processutils [None req-00d0c8c5-f51d-4967-84b4-99da1bf18ae7 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:01 np0005539552 nova_compute[233724]: 2025-11-29 08:26:01.077 233728 DEBUG nova.compute.provider_tree [None req-00d0c8c5-f51d-4967-84b4-99da1bf18ae7 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:26:01 np0005539552 nova_compute[233724]: 2025-11-29 08:26:01.101 233728 DEBUG nova.scheduler.client.report [None req-00d0c8c5-f51d-4967-84b4-99da1bf18ae7 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:26:01 np0005539552 nova_compute[233724]: 2025-11-29 08:26:01.122 233728 DEBUG oslo_concurrency.lockutils [None req-00d0c8c5-f51d-4967-84b4-99da1bf18ae7 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.561s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:01 np0005539552 nova_compute[233724]: 2025-11-29 08:26:01.150 233728 INFO nova.scheduler.client.report [None req-00d0c8c5-f51d-4967-84b4-99da1bf18ae7 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Deleted allocations for instance faaf0451-7134-4014-aa11-8019f10ffe8a#033[00m
Nov 29 03:26:01 np0005539552 nova_compute[233724]: 2025-11-29 08:26:01.221 233728 DEBUG oslo_concurrency.lockutils [None req-00d0c8c5-f51d-4967-84b4-99da1bf18ae7 e5346d862b0f4465aa9162f206696903 dd5ce4a2eb794cdd850dd88487f89b9a - - default default] Lock "faaf0451-7134-4014-aa11-8019f10ffe8a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:01 np0005539552 nova_compute[233724]: 2025-11-29 08:26:01.535 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:01.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:02 np0005539552 nova_compute[233724]: 2025-11-29 08:26:02.102 233728 DEBUG nova.compute.manager [req-ec6ca296-3281-45df-9dca-01d43fc37a37 req-a7c08c1c-3efc-44b2-846f-fa92138edca7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Received event network-vif-deleted-f81081eb-fee3-4706-a6df-e1400380a3be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:02 np0005539552 nova_compute[233724]: 2025-11-29 08:26:02.370 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:02.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:03.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:04 np0005539552 nova_compute[233724]: 2025-11-29 08:26:04.144 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:04 np0005539552 nova_compute[233724]: 2025-11-29 08:26:04.389 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:04.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:05.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:06 np0005539552 nova_compute[233724]: 2025-11-29 08:26:06.537 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:06.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:07 np0005539552 nova_compute[233724]: 2025-11-29 08:26:07.372 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:07.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:07 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:08.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:09.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:10.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:11 np0005539552 nova_compute[233724]: 2025-11-29 08:26:11.539 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:11.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:12 np0005539552 nova_compute[233724]: 2025-11-29 08:26:12.338 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404757.3371675, faaf0451-7134-4014-aa11-8019f10ffe8a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:26:12 np0005539552 nova_compute[233724]: 2025-11-29 08:26:12.339 233728 INFO nova.compute.manager [-] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:26:12 np0005539552 nova_compute[233724]: 2025-11-29 08:26:12.358 233728 DEBUG nova.compute.manager [None req-407216c3-27b0-478c-ba37-e35578b9d3d3 - - - - - -] [instance: faaf0451-7134-4014-aa11-8019f10ffe8a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:26:12 np0005539552 nova_compute[233724]: 2025-11-29 08:26:12.373 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:26:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:12.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:26:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:13 np0005539552 nova_compute[233724]: 2025-11-29 08:26:13.144 233728 DEBUG oslo_concurrency.lockutils [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "50daa6f5-6598-439f-a542-38e8ae7aded0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:13 np0005539552 nova_compute[233724]: 2025-11-29 08:26:13.145 233728 DEBUG oslo_concurrency.lockutils [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "50daa6f5-6598-439f-a542-38e8ae7aded0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:13 np0005539552 nova_compute[233724]: 2025-11-29 08:26:13.167 233728 DEBUG nova.compute.manager [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:26:13 np0005539552 nova_compute[233724]: 2025-11-29 08:26:13.262 233728 DEBUG oslo_concurrency.lockutils [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:13 np0005539552 nova_compute[233724]: 2025-11-29 08:26:13.263 233728 DEBUG oslo_concurrency.lockutils [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:13 np0005539552 nova_compute[233724]: 2025-11-29 08:26:13.277 233728 DEBUG nova.virt.hardware [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:26:13 np0005539552 nova_compute[233724]: 2025-11-29 08:26:13.277 233728 INFO nova.compute.claims [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:26:13 np0005539552 nova_compute[233724]: 2025-11-29 08:26:13.418 233728 DEBUG oslo_concurrency.processutils [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:26:13 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1486396666' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:26:13 np0005539552 nova_compute[233724]: 2025-11-29 08:26:13.898 233728 DEBUG oslo_concurrency.processutils [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:13 np0005539552 nova_compute[233724]: 2025-11-29 08:26:13.905 233728 DEBUG nova.compute.provider_tree [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:26:13 np0005539552 nova_compute[233724]: 2025-11-29 08:26:13.927 233728 DEBUG nova.scheduler.client.report [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:26:13 np0005539552 nova_compute[233724]: 2025-11-29 08:26:13.953 233728 DEBUG oslo_concurrency.lockutils [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:13 np0005539552 nova_compute[233724]: 2025-11-29 08:26:13.954 233728 DEBUG nova.compute.manager [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:26:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:13.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:13 np0005539552 nova_compute[233724]: 2025-11-29 08:26:13.995 233728 DEBUG nova.compute.manager [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:26:13 np0005539552 nova_compute[233724]: 2025-11-29 08:26:13.996 233728 DEBUG nova.network.neutron [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:26:14 np0005539552 nova_compute[233724]: 2025-11-29 08:26:14.018 233728 INFO nova.virt.libvirt.driver [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:26:14 np0005539552 nova_compute[233724]: 2025-11-29 08:26:14.039 233728 DEBUG nova.compute.manager [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:26:14 np0005539552 nova_compute[233724]: 2025-11-29 08:26:14.138 233728 DEBUG nova.compute.manager [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:26:14 np0005539552 nova_compute[233724]: 2025-11-29 08:26:14.139 233728 DEBUG nova.virt.libvirt.driver [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:26:14 np0005539552 nova_compute[233724]: 2025-11-29 08:26:14.140 233728 INFO nova.virt.libvirt.driver [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Creating image(s)#033[00m
Nov 29 03:26:14 np0005539552 nova_compute[233724]: 2025-11-29 08:26:14.174 233728 DEBUG nova.storage.rbd_utils [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rbd image 50daa6f5-6598-439f-a542-38e8ae7aded0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:26:14 np0005539552 nova_compute[233724]: 2025-11-29 08:26:14.210 233728 DEBUG nova.storage.rbd_utils [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rbd image 50daa6f5-6598-439f-a542-38e8ae7aded0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:26:14 np0005539552 nova_compute[233724]: 2025-11-29 08:26:14.244 233728 DEBUG nova.storage.rbd_utils [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rbd image 50daa6f5-6598-439f-a542-38e8ae7aded0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:26:14 np0005539552 nova_compute[233724]: 2025-11-29 08:26:14.250 233728 DEBUG oslo_concurrency.processutils [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:14 np0005539552 nova_compute[233724]: 2025-11-29 08:26:14.291 233728 DEBUG nova.policy [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '873186539acb4bf9b90513e0e1beb56f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a9a83f8d8d7f4d08890407f978c05166', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:26:14 np0005539552 nova_compute[233724]: 2025-11-29 08:26:14.352 233728 DEBUG oslo_concurrency.processutils [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:14 np0005539552 nova_compute[233724]: 2025-11-29 08:26:14.353 233728 DEBUG oslo_concurrency.lockutils [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:14 np0005539552 nova_compute[233724]: 2025-11-29 08:26:14.353 233728 DEBUG oslo_concurrency.lockutils [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:14 np0005539552 nova_compute[233724]: 2025-11-29 08:26:14.354 233728 DEBUG oslo_concurrency.lockutils [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:14 np0005539552 nova_compute[233724]: 2025-11-29 08:26:14.387 233728 DEBUG nova.storage.rbd_utils [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rbd image 50daa6f5-6598-439f-a542-38e8ae7aded0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:26:14 np0005539552 nova_compute[233724]: 2025-11-29 08:26:14.392 233728 DEBUG oslo_concurrency.processutils [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 50daa6f5-6598-439f-a542-38e8ae7aded0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:14 np0005539552 nova_compute[233724]: 2025-11-29 08:26:14.737 233728 DEBUG oslo_concurrency.processutils [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 50daa6f5-6598-439f-a542-38e8ae7aded0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.345s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:14 np0005539552 nova_compute[233724]: 2025-11-29 08:26:14.811 233728 DEBUG nova.storage.rbd_utils [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] resizing rbd image 50daa6f5-6598-439f-a542-38e8ae7aded0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:26:14 np0005539552 nova_compute[233724]: 2025-11-29 08:26:14.929 233728 DEBUG nova.objects.instance [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'migration_context' on Instance uuid 50daa6f5-6598-439f-a542-38e8ae7aded0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:26:14 np0005539552 nova_compute[233724]: 2025-11-29 08:26:14.942 233728 DEBUG nova.virt.libvirt.driver [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:26:14 np0005539552 nova_compute[233724]: 2025-11-29 08:26:14.943 233728 DEBUG nova.virt.libvirt.driver [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Ensure instance console log exists: /var/lib/nova/instances/50daa6f5-6598-439f-a542-38e8ae7aded0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:26:14 np0005539552 nova_compute[233724]: 2025-11-29 08:26:14.943 233728 DEBUG oslo_concurrency.lockutils [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:14 np0005539552 nova_compute[233724]: 2025-11-29 08:26:14.944 233728 DEBUG oslo_concurrency.lockutils [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:14 np0005539552 nova_compute[233724]: 2025-11-29 08:26:14.944 233728 DEBUG oslo_concurrency.lockutils [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:14.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:15 np0005539552 nova_compute[233724]: 2025-11-29 08:26:15.093 233728 DEBUG nova.network.neutron [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Successfully created port: 91d5abbf-fd67-487f-bfaa-448b1daa5272 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:26:15 np0005539552 nova_compute[233724]: 2025-11-29 08:26:15.793 233728 DEBUG nova.network.neutron [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Successfully updated port: 91d5abbf-fd67-487f-bfaa-448b1daa5272 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:26:15 np0005539552 nova_compute[233724]: 2025-11-29 08:26:15.811 233728 DEBUG oslo_concurrency.lockutils [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "refresh_cache-50daa6f5-6598-439f-a542-38e8ae7aded0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:26:15 np0005539552 nova_compute[233724]: 2025-11-29 08:26:15.811 233728 DEBUG oslo_concurrency.lockutils [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquired lock "refresh_cache-50daa6f5-6598-439f-a542-38e8ae7aded0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:26:15 np0005539552 nova_compute[233724]: 2025-11-29 08:26:15.812 233728 DEBUG nova.network.neutron [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:26:15 np0005539552 nova_compute[233724]: 2025-11-29 08:26:15.898 233728 DEBUG nova.compute.manager [req-31818d60-6e08-4c83-a7a4-9f670c44ff46 req-0303c205-4bbd-461c-b4b7-1ba48c85321f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Received event network-changed-91d5abbf-fd67-487f-bfaa-448b1daa5272 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:15 np0005539552 nova_compute[233724]: 2025-11-29 08:26:15.899 233728 DEBUG nova.compute.manager [req-31818d60-6e08-4c83-a7a4-9f670c44ff46 req-0303c205-4bbd-461c-b4b7-1ba48c85321f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Refreshing instance network info cache due to event network-changed-91d5abbf-fd67-487f-bfaa-448b1daa5272. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:26:15 np0005539552 nova_compute[233724]: 2025-11-29 08:26:15.899 233728 DEBUG oslo_concurrency.lockutils [req-31818d60-6e08-4c83-a7a4-9f670c44ff46 req-0303c205-4bbd-461c-b4b7-1ba48c85321f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-50daa6f5-6598-439f-a542-38e8ae7aded0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:26:15 np0005539552 nova_compute[233724]: 2025-11-29 08:26:15.969 233728 DEBUG nova.network.neutron [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:26:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:15.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:16 np0005539552 podman[290536]: 2025-11-29 08:26:16.025897759 +0000 UTC m=+0.099190180 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:26:16 np0005539552 podman[290537]: 2025-11-29 08:26:16.035473687 +0000 UTC m=+0.106871877 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Nov 29 03:26:16 np0005539552 podman[290538]: 2025-11-29 08:26:16.053186943 +0000 UTC m=+0.125734464 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller)
Nov 29 03:26:16 np0005539552 nova_compute[233724]: 2025-11-29 08:26:16.541 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:16 np0005539552 nova_compute[233724]: 2025-11-29 08:26:16.890 233728 DEBUG nova.network.neutron [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Updating instance_info_cache with network_info: [{"id": "91d5abbf-fd67-487f-bfaa-448b1daa5272", "address": "fa:16:3e:71:14:96", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91d5abbf-fd", "ovs_interfaceid": "91d5abbf-fd67-487f-bfaa-448b1daa5272", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:26:16 np0005539552 nova_compute[233724]: 2025-11-29 08:26:16.916 233728 DEBUG oslo_concurrency.lockutils [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Releasing lock "refresh_cache-50daa6f5-6598-439f-a542-38e8ae7aded0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:26:16 np0005539552 nova_compute[233724]: 2025-11-29 08:26:16.917 233728 DEBUG nova.compute.manager [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Instance network_info: |[{"id": "91d5abbf-fd67-487f-bfaa-448b1daa5272", "address": "fa:16:3e:71:14:96", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91d5abbf-fd", "ovs_interfaceid": "91d5abbf-fd67-487f-bfaa-448b1daa5272", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:26:16 np0005539552 nova_compute[233724]: 2025-11-29 08:26:16.917 233728 DEBUG oslo_concurrency.lockutils [req-31818d60-6e08-4c83-a7a4-9f670c44ff46 req-0303c205-4bbd-461c-b4b7-1ba48c85321f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-50daa6f5-6598-439f-a542-38e8ae7aded0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:26:16 np0005539552 nova_compute[233724]: 2025-11-29 08:26:16.917 233728 DEBUG nova.network.neutron [req-31818d60-6e08-4c83-a7a4-9f670c44ff46 req-0303c205-4bbd-461c-b4b7-1ba48c85321f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Refreshing network info cache for port 91d5abbf-fd67-487f-bfaa-448b1daa5272 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:26:16 np0005539552 nova_compute[233724]: 2025-11-29 08:26:16.920 233728 DEBUG nova.virt.libvirt.driver [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Start _get_guest_xml network_info=[{"id": "91d5abbf-fd67-487f-bfaa-448b1daa5272", "address": "fa:16:3e:71:14:96", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91d5abbf-fd", "ovs_interfaceid": "91d5abbf-fd67-487f-bfaa-448b1daa5272", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:26:16 np0005539552 nova_compute[233724]: 2025-11-29 08:26:16.925 233728 WARNING nova.virt.libvirt.driver [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:26:16 np0005539552 nova_compute[233724]: 2025-11-29 08:26:16.933 233728 DEBUG nova.virt.libvirt.host [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:26:16 np0005539552 nova_compute[233724]: 2025-11-29 08:26:16.934 233728 DEBUG nova.virt.libvirt.host [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:26:16 np0005539552 nova_compute[233724]: 2025-11-29 08:26:16.937 233728 DEBUG nova.virt.libvirt.host [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:26:16 np0005539552 nova_compute[233724]: 2025-11-29 08:26:16.937 233728 DEBUG nova.virt.libvirt.host [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:26:16 np0005539552 nova_compute[233724]: 2025-11-29 08:26:16.938 233728 DEBUG nova.virt.libvirt.driver [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:26:16 np0005539552 nova_compute[233724]: 2025-11-29 08:26:16.938 233728 DEBUG nova.virt.hardware [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:26:16 np0005539552 nova_compute[233724]: 2025-11-29 08:26:16.939 233728 DEBUG nova.virt.hardware [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:26:16 np0005539552 nova_compute[233724]: 2025-11-29 08:26:16.939 233728 DEBUG nova.virt.hardware [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:26:16 np0005539552 nova_compute[233724]: 2025-11-29 08:26:16.939 233728 DEBUG nova.virt.hardware [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:26:16 np0005539552 nova_compute[233724]: 2025-11-29 08:26:16.939 233728 DEBUG nova.virt.hardware [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:26:16 np0005539552 nova_compute[233724]: 2025-11-29 08:26:16.939 233728 DEBUG nova.virt.hardware [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:26:16 np0005539552 nova_compute[233724]: 2025-11-29 08:26:16.939 233728 DEBUG nova.virt.hardware [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:26:16 np0005539552 nova_compute[233724]: 2025-11-29 08:26:16.939 233728 DEBUG nova.virt.hardware [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:26:16 np0005539552 nova_compute[233724]: 2025-11-29 08:26:16.940 233728 DEBUG nova.virt.hardware [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:26:16 np0005539552 nova_compute[233724]: 2025-11-29 08:26:16.940 233728 DEBUG nova.virt.hardware [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:26:16 np0005539552 nova_compute[233724]: 2025-11-29 08:26:16.940 233728 DEBUG nova.virt.hardware [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:26:16 np0005539552 nova_compute[233724]: 2025-11-29 08:26:16.943 233728 DEBUG oslo_concurrency.processutils [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:16.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:17 np0005539552 nova_compute[233724]: 2025-11-29 08:26:17.374 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:26:17 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/248724967' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:26:17 np0005539552 nova_compute[233724]: 2025-11-29 08:26:17.397 233728 DEBUG oslo_concurrency.processutils [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:17 np0005539552 nova_compute[233724]: 2025-11-29 08:26:17.431 233728 DEBUG nova.storage.rbd_utils [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rbd image 50daa6f5-6598-439f-a542-38e8ae7aded0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:26:17 np0005539552 nova_compute[233724]: 2025-11-29 08:26:17.436 233728 DEBUG oslo_concurrency.processutils [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:26:17 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/482426941' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:26:17 np0005539552 nova_compute[233724]: 2025-11-29 08:26:17.918 233728 DEBUG oslo_concurrency.processutils [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:17 np0005539552 nova_compute[233724]: 2025-11-29 08:26:17.921 233728 DEBUG nova.virt.libvirt.vif [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:26:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-419138925',display_name='tempest-ServerStableDeviceRescueTest-server-419138925',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-419138925',id=138,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDT3yKh1OQM/APLMkF69FKCCA5nBzuP29507Q5By6a2JvA70+RYIsFvSoa6+Z7yspJ+R+0ak0hPbMHRp8sVSoGCRfzVOzgUhwEXFYk/q/u+LWAX+bPUz2Gc3J/eOzCbxKw==',key_name='tempest-keypair-1950933442',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9a83f8d8d7f4d08890407f978c05166',ramdisk_id='',reservation_id='r-c9jn2vfu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-507673154',owner_user_name='tempest-ServerStableDeviceRescueTest-507673154-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:26:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='873186539acb4bf9b90513e0e1beb56f',uuid=50daa6f5-6598-439f-a542-38e8ae7aded0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "91d5abbf-fd67-487f-bfaa-448b1daa5272", "address": "fa:16:3e:71:14:96", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91d5abbf-fd", "ovs_interfaceid": "91d5abbf-fd67-487f-bfaa-448b1daa5272", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:26:17 np0005539552 nova_compute[233724]: 2025-11-29 08:26:17.921 233728 DEBUG nova.network.os_vif_util [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Converting VIF {"id": "91d5abbf-fd67-487f-bfaa-448b1daa5272", "address": "fa:16:3e:71:14:96", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91d5abbf-fd", "ovs_interfaceid": "91d5abbf-fd67-487f-bfaa-448b1daa5272", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:26:17 np0005539552 nova_compute[233724]: 2025-11-29 08:26:17.923 233728 DEBUG nova.network.os_vif_util [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:14:96,bridge_name='br-int',has_traffic_filtering=True,id=91d5abbf-fd67-487f-bfaa-448b1daa5272,network=Network(5da19f7d-3aa0-41e7-88b0-b9ef17fa4445),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91d5abbf-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:26:17 np0005539552 nova_compute[233724]: 2025-11-29 08:26:17.925 233728 DEBUG nova.objects.instance [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'pci_devices' on Instance uuid 50daa6f5-6598-439f-a542-38e8ae7aded0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:26:17 np0005539552 nova_compute[233724]: 2025-11-29 08:26:17.941 233728 DEBUG nova.virt.libvirt.driver [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:26:17 np0005539552 nova_compute[233724]:  <uuid>50daa6f5-6598-439f-a542-38e8ae7aded0</uuid>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:  <name>instance-0000008a</name>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:26:17 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:      <nova:name>tempest-ServerStableDeviceRescueTest-server-419138925</nova:name>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:26:16</nova:creationTime>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:26:17 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:        <nova:user uuid="873186539acb4bf9b90513e0e1beb56f">tempest-ServerStableDeviceRescueTest-507673154-project-member</nova:user>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:        <nova:project uuid="a9a83f8d8d7f4d08890407f978c05166">tempest-ServerStableDeviceRescueTest-507673154</nova:project>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:        <nova:port uuid="91d5abbf-fd67-487f-bfaa-448b1daa5272">
Nov 29 03:26:17 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:      <entry name="serial">50daa6f5-6598-439f-a542-38e8ae7aded0</entry>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:      <entry name="uuid">50daa6f5-6598-439f-a542-38e8ae7aded0</entry>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:26:17 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/50daa6f5-6598-439f-a542-38e8ae7aded0_disk">
Nov 29 03:26:17 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:26:17 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:26:17 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/50daa6f5-6598-439f-a542-38e8ae7aded0_disk.config">
Nov 29 03:26:17 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:26:17 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:26:17 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:71:14:96"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:      <target dev="tap91d5abbf-fd"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:26:17 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/50daa6f5-6598-439f-a542-38e8ae7aded0/console.log" append="off"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:26:17 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:26:17 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:26:17 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:26:17 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:26:17 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:26:17 np0005539552 nova_compute[233724]: 2025-11-29 08:26:17.942 233728 DEBUG nova.compute.manager [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Preparing to wait for external event network-vif-plugged-91d5abbf-fd67-487f-bfaa-448b1daa5272 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:26:17 np0005539552 nova_compute[233724]: 2025-11-29 08:26:17.943 233728 DEBUG oslo_concurrency.lockutils [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "50daa6f5-6598-439f-a542-38e8ae7aded0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:17 np0005539552 nova_compute[233724]: 2025-11-29 08:26:17.944 233728 DEBUG oslo_concurrency.lockutils [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "50daa6f5-6598-439f-a542-38e8ae7aded0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:17 np0005539552 nova_compute[233724]: 2025-11-29 08:26:17.944 233728 DEBUG oslo_concurrency.lockutils [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "50daa6f5-6598-439f-a542-38e8ae7aded0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:17 np0005539552 nova_compute[233724]: 2025-11-29 08:26:17.945 233728 DEBUG nova.virt.libvirt.vif [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:26:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-419138925',display_name='tempest-ServerStableDeviceRescueTest-server-419138925',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-419138925',id=138,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDT3yKh1OQM/APLMkF69FKCCA5nBzuP29507Q5By6a2JvA70+RYIsFvSoa6+Z7yspJ+R+0ak0hPbMHRp8sVSoGCRfzVOzgUhwEXFYk/q/u+LWAX+bPUz2Gc3J/eOzCbxKw==',key_name='tempest-keypair-1950933442',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9a83f8d8d7f4d08890407f978c05166',ramdisk_id='',reservation_id='r-c9jn2vfu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-507673154',owner_user_name='tempest-ServerStableDeviceRescueTest-507673154-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:26:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='873186539acb4bf9b90513e0e1beb56f',uuid=50daa6f5-6598-439f-a542-38e8ae7aded0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "91d5abbf-fd67-487f-bfaa-448b1daa5272", "address": "fa:16:3e:71:14:96", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91d5abbf-fd", "ovs_interfaceid": "91d5abbf-fd67-487f-bfaa-448b1daa5272", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:26:17 np0005539552 nova_compute[233724]: 2025-11-29 08:26:17.945 233728 DEBUG nova.network.os_vif_util [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Converting VIF {"id": "91d5abbf-fd67-487f-bfaa-448b1daa5272", "address": "fa:16:3e:71:14:96", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91d5abbf-fd", "ovs_interfaceid": "91d5abbf-fd67-487f-bfaa-448b1daa5272", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:26:17 np0005539552 nova_compute[233724]: 2025-11-29 08:26:17.946 233728 DEBUG nova.network.os_vif_util [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:14:96,bridge_name='br-int',has_traffic_filtering=True,id=91d5abbf-fd67-487f-bfaa-448b1daa5272,network=Network(5da19f7d-3aa0-41e7-88b0-b9ef17fa4445),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91d5abbf-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:26:17 np0005539552 nova_compute[233724]: 2025-11-29 08:26:17.947 233728 DEBUG os_vif [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:14:96,bridge_name='br-int',has_traffic_filtering=True,id=91d5abbf-fd67-487f-bfaa-448b1daa5272,network=Network(5da19f7d-3aa0-41e7-88b0-b9ef17fa4445),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91d5abbf-fd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:26:17 np0005539552 nova_compute[233724]: 2025-11-29 08:26:17.948 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:17 np0005539552 nova_compute[233724]: 2025-11-29 08:26:17.948 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:17 np0005539552 nova_compute[233724]: 2025-11-29 08:26:17.949 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:26:17 np0005539552 nova_compute[233724]: 2025-11-29 08:26:17.954 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:17 np0005539552 nova_compute[233724]: 2025-11-29 08:26:17.955 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap91d5abbf-fd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:17 np0005539552 nova_compute[233724]: 2025-11-29 08:26:17.955 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap91d5abbf-fd, col_values=(('external_ids', {'iface-id': '91d5abbf-fd67-487f-bfaa-448b1daa5272', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:71:14:96', 'vm-uuid': '50daa6f5-6598-439f-a542-38e8ae7aded0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:17 np0005539552 nova_compute[233724]: 2025-11-29 08:26:17.957 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:17 np0005539552 NetworkManager[48926]: <info>  [1764404777.9582] manager: (tap91d5abbf-fd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/270)
Nov 29 03:26:17 np0005539552 nova_compute[233724]: 2025-11-29 08:26:17.961 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:26:17 np0005539552 nova_compute[233724]: 2025-11-29 08:26:17.967 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:17 np0005539552 nova_compute[233724]: 2025-11-29 08:26:17.969 233728 INFO os_vif [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:14:96,bridge_name='br-int',has_traffic_filtering=True,id=91d5abbf-fd67-487f-bfaa-448b1daa5272,network=Network(5da19f7d-3aa0-41e7-88b0-b9ef17fa4445),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91d5abbf-fd')#033[00m
Nov 29 03:26:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:17.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:18 np0005539552 nova_compute[233724]: 2025-11-29 08:26:18.053 233728 DEBUG nova.virt.libvirt.driver [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:26:18 np0005539552 nova_compute[233724]: 2025-11-29 08:26:18.055 233728 DEBUG nova.virt.libvirt.driver [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:26:18 np0005539552 nova_compute[233724]: 2025-11-29 08:26:18.055 233728 DEBUG nova.virt.libvirt.driver [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] No VIF found with MAC fa:16:3e:71:14:96, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:26:18 np0005539552 nova_compute[233724]: 2025-11-29 08:26:18.056 233728 INFO nova.virt.libvirt.driver [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Using config drive#033[00m
Nov 29 03:26:18 np0005539552 nova_compute[233724]: 2025-11-29 08:26:18.084 233728 DEBUG nova.storage.rbd_utils [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rbd image 50daa6f5-6598-439f-a542-38e8ae7aded0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:26:18 np0005539552 nova_compute[233724]: 2025-11-29 08:26:18.290 233728 DEBUG nova.network.neutron [req-31818d60-6e08-4c83-a7a4-9f670c44ff46 req-0303c205-4bbd-461c-b4b7-1ba48c85321f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Updated VIF entry in instance network info cache for port 91d5abbf-fd67-487f-bfaa-448b1daa5272. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:26:18 np0005539552 nova_compute[233724]: 2025-11-29 08:26:18.291 233728 DEBUG nova.network.neutron [req-31818d60-6e08-4c83-a7a4-9f670c44ff46 req-0303c205-4bbd-461c-b4b7-1ba48c85321f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Updating instance_info_cache with network_info: [{"id": "91d5abbf-fd67-487f-bfaa-448b1daa5272", "address": "fa:16:3e:71:14:96", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91d5abbf-fd", "ovs_interfaceid": "91d5abbf-fd67-487f-bfaa-448b1daa5272", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:26:18 np0005539552 nova_compute[233724]: 2025-11-29 08:26:18.306 233728 DEBUG oslo_concurrency.lockutils [req-31818d60-6e08-4c83-a7a4-9f670c44ff46 req-0303c205-4bbd-461c-b4b7-1ba48c85321f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-50daa6f5-6598-439f-a542-38e8ae7aded0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:26:18 np0005539552 nova_compute[233724]: 2025-11-29 08:26:18.489 233728 INFO nova.virt.libvirt.driver [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Creating config drive at /var/lib/nova/instances/50daa6f5-6598-439f-a542-38e8ae7aded0/disk.config#033[00m
Nov 29 03:26:18 np0005539552 nova_compute[233724]: 2025-11-29 08:26:18.503 233728 DEBUG oslo_concurrency.processutils [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/50daa6f5-6598-439f-a542-38e8ae7aded0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz9zz64kl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:18 np0005539552 nova_compute[233724]: 2025-11-29 08:26:18.661 233728 DEBUG oslo_concurrency.processutils [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/50daa6f5-6598-439f-a542-38e8ae7aded0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz9zz64kl" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:18 np0005539552 nova_compute[233724]: 2025-11-29 08:26:18.709 233728 DEBUG nova.storage.rbd_utils [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rbd image 50daa6f5-6598-439f-a542-38e8ae7aded0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:26:18 np0005539552 nova_compute[233724]: 2025-11-29 08:26:18.714 233728 DEBUG oslo_concurrency.processutils [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/50daa6f5-6598-439f-a542-38e8ae7aded0/disk.config 50daa6f5-6598-439f-a542-38e8ae7aded0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:18 np0005539552 nova_compute[233724]: 2025-11-29 08:26:18.873 233728 DEBUG oslo_concurrency.processutils [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/50daa6f5-6598-439f-a542-38e8ae7aded0/disk.config 50daa6f5-6598-439f-a542-38e8ae7aded0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:18 np0005539552 nova_compute[233724]: 2025-11-29 08:26:18.874 233728 INFO nova.virt.libvirt.driver [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Deleting local config drive /var/lib/nova/instances/50daa6f5-6598-439f-a542-38e8ae7aded0/disk.config because it was imported into RBD.#033[00m
Nov 29 03:26:18 np0005539552 kernel: tap91d5abbf-fd: entered promiscuous mode
Nov 29 03:26:18 np0005539552 nova_compute[233724]: 2025-11-29 08:26:18.940 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:18 np0005539552 ovn_controller[133798]: 2025-11-29T08:26:18Z|00601|binding|INFO|Claiming lport 91d5abbf-fd67-487f-bfaa-448b1daa5272 for this chassis.
Nov 29 03:26:18 np0005539552 ovn_controller[133798]: 2025-11-29T08:26:18Z|00602|binding|INFO|91d5abbf-fd67-487f-bfaa-448b1daa5272: Claiming fa:16:3e:71:14:96 10.100.0.4
Nov 29 03:26:18 np0005539552 NetworkManager[48926]: <info>  [1764404778.9428] manager: (tap91d5abbf-fd): new Tun device (/org/freedesktop/NetworkManager/Devices/271)
Nov 29 03:26:18 np0005539552 nova_compute[233724]: 2025-11-29 08:26:18.944 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:18 np0005539552 nova_compute[233724]: 2025-11-29 08:26:18.951 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:18.960 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:14:96 10.100.0.4'], port_security=['fa:16:3e:71:14:96 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '50daa6f5-6598-439f-a542-38e8ae7aded0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9a83f8d8d7f4d08890407f978c05166', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b3bcb98f-0ed3-4c70-97d6-4df1974ced71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d0d36bf-5f41-4d6e-9e1b-1a2b5a9220ce, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=91d5abbf-fd67-487f-bfaa-448b1daa5272) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:26:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:18.961 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 91d5abbf-fd67-487f-bfaa-448b1daa5272 in datapath 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 bound to our chassis#033[00m
Nov 29 03:26:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:18.962 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445#033[00m
Nov 29 03:26:18 np0005539552 systemd-udevd[290736]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:26:18 np0005539552 systemd-machined[196379]: New machine qemu-60-instance-0000008a.
Nov 29 03:26:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:18.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:18.977 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f51e0746-d0a9-43c7-8779-2d2e69c1fc6d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:18.978 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5da19f7d-31 in ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:26:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:18.980 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5da19f7d-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:26:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:18.981 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b9d7e176-714e-4a30-9dc8-6c116a9e9f97]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:18.981 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c03f5789-db52-4ffb-9a8e-68010ac7a09d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:18 np0005539552 NetworkManager[48926]: <info>  [1764404778.9878] device (tap91d5abbf-fd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:26:18 np0005539552 NetworkManager[48926]: <info>  [1764404778.9891] device (tap91d5abbf-fd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:26:18 np0005539552 systemd[1]: Started Virtual Machine qemu-60-instance-0000008a.
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:18.999 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[218dabfa-9e71-44a4-b9e9-85836c5fdbff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:19 np0005539552 nova_compute[233724]: 2025-11-29 08:26:19.017 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:19 np0005539552 ovn_controller[133798]: 2025-11-29T08:26:19Z|00603|binding|INFO|Setting lport 91d5abbf-fd67-487f-bfaa-448b1daa5272 ovn-installed in OVS
Nov 29 03:26:19 np0005539552 ovn_controller[133798]: 2025-11-29T08:26:19Z|00604|binding|INFO|Setting lport 91d5abbf-fd67-487f-bfaa-448b1daa5272 up in Southbound
Nov 29 03:26:19 np0005539552 nova_compute[233724]: 2025-11-29 08:26:19.022 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:19.025 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[7fbe1316-06a7-4f8b-bd81-9650625976a4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:19.058 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[6dd8e4f3-75d6-40b7-bd32-247e2e748867]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:19 np0005539552 systemd-udevd[290739]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:26:19 np0005539552 NetworkManager[48926]: <info>  [1764404779.0667] manager: (tap5da19f7d-30): new Veth device (/org/freedesktop/NetworkManager/Devices/272)
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:19.065 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c1218be1-46ff-4cd7-bbdb-8ab09bbe0a8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:19.101 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[53d7512d-6b74-42e3-bed7-a9a8945aa544]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:19.104 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[494f8df1-1abd-4033-b174-7f5ed2e5ae2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:19 np0005539552 NetworkManager[48926]: <info>  [1764404779.1280] device (tap5da19f7d-30): carrier: link connected
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:19.133 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[8e10494a-6e9b-4e0b-bef9-f3bbdab02f7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:19.147 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e8b03df2-9d53-4e90-b7c1-9fcbf31e289c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5da19f7d-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:8e:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 181], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 782481, 'reachable_time': 30484, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290768, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:19.167 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[88568f08-4a81-4fa8-b15b-d5fe80cf9a04]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe18:8e20'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782481, 'tstamp': 782481}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290769, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:19 np0005539552 nova_compute[233724]: 2025-11-29 08:26:19.183 233728 DEBUG oslo_concurrency.lockutils [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Acquiring lock "bb261893-bfa1-4fdc-9c11-a33a733337ce" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:19 np0005539552 nova_compute[233724]: 2025-11-29 08:26:19.183 233728 DEBUG oslo_concurrency.lockutils [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "bb261893-bfa1-4fdc-9c11-a33a733337ce" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:19.193 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[23cb5298-91e4-47b0-be9c-efd3bd6a20ef]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5da19f7d-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:8e:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 181], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 782481, 'reachable_time': 30484, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290770, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:19 np0005539552 nova_compute[233724]: 2025-11-29 08:26:19.198 233728 DEBUG nova.compute.manager [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:19.228 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a9bc1261-f063-4509-becb-4e6fbc9c85bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:19 np0005539552 nova_compute[233724]: 2025-11-29 08:26:19.283 233728 DEBUG oslo_concurrency.lockutils [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:19 np0005539552 nova_compute[233724]: 2025-11-29 08:26:19.283 233728 DEBUG oslo_concurrency.lockutils [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:19 np0005539552 nova_compute[233724]: 2025-11-29 08:26:19.291 233728 DEBUG nova.virt.hardware [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:26:19 np0005539552 nova_compute[233724]: 2025-11-29 08:26:19.292 233728 INFO nova.compute.claims [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:19.305 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e53bc8b4-49ab-4a55-b5b3-10a364b3d496]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:19.306 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5da19f7d-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:19.307 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:19.307 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5da19f7d-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:19 np0005539552 kernel: tap5da19f7d-30: entered promiscuous mode
Nov 29 03:26:19 np0005539552 nova_compute[233724]: 2025-11-29 08:26:19.309 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:19 np0005539552 NetworkManager[48926]: <info>  [1764404779.3103] manager: (tap5da19f7d-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/273)
Nov 29 03:26:19 np0005539552 nova_compute[233724]: 2025-11-29 08:26:19.311 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:19.312 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5da19f7d-30, col_values=(('external_ids', {'iface-id': 'd4f0104e-3913-4399-9086-37cf4d16e7c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:19 np0005539552 ovn_controller[133798]: 2025-11-29T08:26:19Z|00605|binding|INFO|Releasing lport d4f0104e-3913-4399-9086-37cf4d16e7c7 from this chassis (sb_readonly=0)
Nov 29 03:26:19 np0005539552 nova_compute[233724]: 2025-11-29 08:26:19.315 233728 DEBUG nova.compute.manager [req-9e033f4c-b4f1-4efe-9edc-2ba369114cda req-ae7d8d39-a430-428d-a337-1edbca3d3a9c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Received event network-vif-plugged-91d5abbf-fd67-487f-bfaa-448b1daa5272 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:19 np0005539552 nova_compute[233724]: 2025-11-29 08:26:19.315 233728 DEBUG oslo_concurrency.lockutils [req-9e033f4c-b4f1-4efe-9edc-2ba369114cda req-ae7d8d39-a430-428d-a337-1edbca3d3a9c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "50daa6f5-6598-439f-a542-38e8ae7aded0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:19 np0005539552 nova_compute[233724]: 2025-11-29 08:26:19.316 233728 DEBUG oslo_concurrency.lockutils [req-9e033f4c-b4f1-4efe-9edc-2ba369114cda req-ae7d8d39-a430-428d-a337-1edbca3d3a9c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "50daa6f5-6598-439f-a542-38e8ae7aded0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:19 np0005539552 nova_compute[233724]: 2025-11-29 08:26:19.316 233728 DEBUG oslo_concurrency.lockutils [req-9e033f4c-b4f1-4efe-9edc-2ba369114cda req-ae7d8d39-a430-428d-a337-1edbca3d3a9c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "50daa6f5-6598-439f-a542-38e8ae7aded0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:19 np0005539552 nova_compute[233724]: 2025-11-29 08:26:19.316 233728 DEBUG nova.compute.manager [req-9e033f4c-b4f1-4efe-9edc-2ba369114cda req-ae7d8d39-a430-428d-a337-1edbca3d3a9c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Processing event network-vif-plugged-91d5abbf-fd67-487f-bfaa-448b1daa5272 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:26:19 np0005539552 nova_compute[233724]: 2025-11-29 08:26:19.317 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:19 np0005539552 nova_compute[233724]: 2025-11-29 08:26:19.332 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:19.333 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5da19f7d-3aa0-41e7-88b0-b9ef17fa4445.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5da19f7d-3aa0-41e7-88b0-b9ef17fa4445.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:19.334 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c648dac1-c94f-4b24-b9b8-15acd346ca4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:19.334 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/5da19f7d-3aa0-41e7-88b0-b9ef17fa4445.pid.haproxy
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:26:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:19.335 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'env', 'PROCESS_TAG=haproxy-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5da19f7d-3aa0-41e7-88b0-b9ef17fa4445.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:26:19 np0005539552 nova_compute[233724]: 2025-11-29 08:26:19.374 233728 DEBUG nova.scheduler.client.report [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Refreshing inventories for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:26:19 np0005539552 nova_compute[233724]: 2025-11-29 08:26:19.388 233728 DEBUG nova.scheduler.client.report [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Updating ProviderTree inventory for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:26:19 np0005539552 nova_compute[233724]: 2025-11-29 08:26:19.388 233728 DEBUG nova.compute.provider_tree [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Updating inventory in ProviderTree for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 03:26:19 np0005539552 nova_compute[233724]: 2025-11-29 08:26:19.412 233728 DEBUG nova.scheduler.client.report [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Refreshing aggregate associations for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:26:19 np0005539552 nova_compute[233724]: 2025-11-29 08:26:19.451 233728 DEBUG nova.scheduler.client.report [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Refreshing trait associations for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371, traits: HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_MMX,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_FDC,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:26:19 np0005539552 nova_compute[233724]: 2025-11-29 08:26:19.500 233728 DEBUG oslo_concurrency.processutils [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:19 np0005539552 podman[290820]: 2025-11-29 08:26:19.712286374 +0000 UTC m=+0.050347466 container create 718998a9663ae0d6b62e4241c3a07650c8164a91acc0530929423f405d36edea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2)
Nov 29 03:26:19 np0005539552 systemd[1]: Started libpod-conmon-718998a9663ae0d6b62e4241c3a07650c8164a91acc0530929423f405d36edea.scope.
Nov 29 03:26:19 np0005539552 podman[290820]: 2025-11-29 08:26:19.686564772 +0000 UTC m=+0.024625894 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:26:19 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:26:19 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cecd38971edec5433ccadd379d05272e0e1381bc47b1a56343d1fde220745b45/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:26:19 np0005539552 podman[290820]: 2025-11-29 08:26:19.814550505 +0000 UTC m=+0.152611617 container init 718998a9663ae0d6b62e4241c3a07650c8164a91acc0530929423f405d36edea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2)
Nov 29 03:26:19 np0005539552 podman[290820]: 2025-11-29 08:26:19.822438558 +0000 UTC m=+0.160499650 container start 718998a9663ae0d6b62e4241c3a07650c8164a91acc0530929423f405d36edea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:26:19 np0005539552 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[290853]: [NOTICE]   (290875) : New worker (290881) forked
Nov 29 03:26:19 np0005539552 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[290853]: [NOTICE]   (290875) : Loading success.
Nov 29 03:26:19 np0005539552 nova_compute[233724]: 2025-11-29 08:26:19.915 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404779.9144452, 50daa6f5-6598-439f-a542-38e8ae7aded0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:26:19 np0005539552 nova_compute[233724]: 2025-11-29 08:26:19.916 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] VM Started (Lifecycle Event)#033[00m
Nov 29 03:26:19 np0005539552 nova_compute[233724]: 2025-11-29 08:26:19.918 233728 DEBUG nova.compute.manager [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:26:19 np0005539552 nova_compute[233724]: 2025-11-29 08:26:19.922 233728 DEBUG nova.virt.libvirt.driver [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:26:19 np0005539552 nova_compute[233724]: 2025-11-29 08:26:19.926 233728 INFO nova.virt.libvirt.driver [-] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Instance spawned successfully.#033[00m
Nov 29 03:26:19 np0005539552 nova_compute[233724]: 2025-11-29 08:26:19.926 233728 DEBUG nova.virt.libvirt.driver [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:26:19 np0005539552 nova_compute[233724]: 2025-11-29 08:26:19.938 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:26:19 np0005539552 nova_compute[233724]: 2025-11-29 08:26:19.941 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:26:19 np0005539552 nova_compute[233724]: 2025-11-29 08:26:19.954 233728 DEBUG nova.virt.libvirt.driver [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:26:19 np0005539552 nova_compute[233724]: 2025-11-29 08:26:19.954 233728 DEBUG nova.virt.libvirt.driver [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:26:19 np0005539552 nova_compute[233724]: 2025-11-29 08:26:19.955 233728 DEBUG nova.virt.libvirt.driver [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:26:19 np0005539552 nova_compute[233724]: 2025-11-29 08:26:19.955 233728 DEBUG nova.virt.libvirt.driver [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:26:19 np0005539552 nova_compute[233724]: 2025-11-29 08:26:19.955 233728 DEBUG nova.virt.libvirt.driver [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:26:19 np0005539552 nova_compute[233724]: 2025-11-29 08:26:19.956 233728 DEBUG nova.virt.libvirt.driver [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:26:19 np0005539552 nova_compute[233724]: 2025-11-29 08:26:19.959 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:26:19 np0005539552 nova_compute[233724]: 2025-11-29 08:26:19.959 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404779.9147482, 50daa6f5-6598-439f-a542-38e8ae7aded0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:26:19 np0005539552 nova_compute[233724]: 2025-11-29 08:26:19.959 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:26:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:19.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:19 np0005539552 nova_compute[233724]: 2025-11-29 08:26:19.993 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:26:19 np0005539552 nova_compute[233724]: 2025-11-29 08:26:19.997 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404779.9211414, 50daa6f5-6598-439f-a542-38e8ae7aded0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:26:19 np0005539552 nova_compute[233724]: 2025-11-29 08:26:19.997 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:26:20 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:26:20 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2084618336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:26:20 np0005539552 nova_compute[233724]: 2025-11-29 08:26:20.031 233728 INFO nova.compute.manager [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Took 5.89 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:26:20 np0005539552 nova_compute[233724]: 2025-11-29 08:26:20.032 233728 DEBUG nova.compute.manager [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:26:20 np0005539552 nova_compute[233724]: 2025-11-29 08:26:20.035 233728 DEBUG oslo_concurrency.processutils [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:20 np0005539552 nova_compute[233724]: 2025-11-29 08:26:20.036 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:26:20 np0005539552 nova_compute[233724]: 2025-11-29 08:26:20.046 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:26:20 np0005539552 nova_compute[233724]: 2025-11-29 08:26:20.051 233728 DEBUG nova.compute.provider_tree [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:26:20 np0005539552 nova_compute[233724]: 2025-11-29 08:26:20.088 233728 DEBUG nova.scheduler.client.report [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:26:20 np0005539552 nova_compute[233724]: 2025-11-29 08:26:20.093 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:26:20 np0005539552 nova_compute[233724]: 2025-11-29 08:26:20.125 233728 DEBUG oslo_concurrency.lockutils [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.842s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:20 np0005539552 nova_compute[233724]: 2025-11-29 08:26:20.126 233728 DEBUG nova.compute.manager [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:26:20 np0005539552 nova_compute[233724]: 2025-11-29 08:26:20.141 233728 INFO nova.compute.manager [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Took 6.92 seconds to build instance.#033[00m
Nov 29 03:26:20 np0005539552 nova_compute[233724]: 2025-11-29 08:26:20.161 233728 DEBUG oslo_concurrency.lockutils [None req-47110656-6df7-4d15-adea-930000ea3f91 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "50daa6f5-6598-439f-a542-38e8ae7aded0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.016s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:20 np0005539552 nova_compute[233724]: 2025-11-29 08:26:20.183 233728 DEBUG nova.compute.manager [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:26:20 np0005539552 nova_compute[233724]: 2025-11-29 08:26:20.183 233728 DEBUG nova.network.neutron [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:26:20 np0005539552 nova_compute[233724]: 2025-11-29 08:26:20.217 233728 INFO nova.virt.libvirt.driver [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:26:20 np0005539552 nova_compute[233724]: 2025-11-29 08:26:20.252 233728 DEBUG nova.compute.manager [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:26:20 np0005539552 nova_compute[233724]: 2025-11-29 08:26:20.341 233728 DEBUG nova.compute.manager [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:26:20 np0005539552 nova_compute[233724]: 2025-11-29 08:26:20.342 233728 DEBUG nova.virt.libvirt.driver [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:26:20 np0005539552 nova_compute[233724]: 2025-11-29 08:26:20.343 233728 INFO nova.virt.libvirt.driver [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Creating image(s)#033[00m
Nov 29 03:26:20 np0005539552 nova_compute[233724]: 2025-11-29 08:26:20.380 233728 DEBUG nova.storage.rbd_utils [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] rbd image bb261893-bfa1-4fdc-9c11-a33a733337ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:26:20 np0005539552 nova_compute[233724]: 2025-11-29 08:26:20.437 233728 DEBUG nova.storage.rbd_utils [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] rbd image bb261893-bfa1-4fdc-9c11-a33a733337ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:26:20 np0005539552 nova_compute[233724]: 2025-11-29 08:26:20.478 233728 DEBUG nova.storage.rbd_utils [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] rbd image bb261893-bfa1-4fdc-9c11-a33a733337ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:26:20 np0005539552 nova_compute[233724]: 2025-11-29 08:26:20.483 233728 DEBUG oslo_concurrency.processutils [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:20 np0005539552 nova_compute[233724]: 2025-11-29 08:26:20.520 233728 DEBUG nova.policy [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '283f8136265e4425a5a31f840935b9ab', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ea7b24ea9d7b4d239b4741634ac3f10c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:26:20 np0005539552 nova_compute[233724]: 2025-11-29 08:26:20.570 233728 DEBUG oslo_concurrency.processutils [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:20 np0005539552 nova_compute[233724]: 2025-11-29 08:26:20.571 233728 DEBUG oslo_concurrency.lockutils [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:20 np0005539552 nova_compute[233724]: 2025-11-29 08:26:20.571 233728 DEBUG oslo_concurrency.lockutils [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:20 np0005539552 nova_compute[233724]: 2025-11-29 08:26:20.572 233728 DEBUG oslo_concurrency.lockutils [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:20 np0005539552 nova_compute[233724]: 2025-11-29 08:26:20.602 233728 DEBUG nova.storage.rbd_utils [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] rbd image bb261893-bfa1-4fdc-9c11-a33a733337ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:26:20 np0005539552 nova_compute[233724]: 2025-11-29 08:26:20.607 233728 DEBUG oslo_concurrency.processutils [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 bb261893-bfa1-4fdc-9c11-a33a733337ce_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:20.632 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:20.633 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:20.634 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:20 np0005539552 nova_compute[233724]: 2025-11-29 08:26:20.926 233728 DEBUG oslo_concurrency.processutils [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 bb261893-bfa1-4fdc-9c11-a33a733337ce_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.319s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:20.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:21 np0005539552 nova_compute[233724]: 2025-11-29 08:26:21.017 233728 DEBUG nova.storage.rbd_utils [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] resizing rbd image bb261893-bfa1-4fdc-9c11-a33a733337ce_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:26:21 np0005539552 nova_compute[233724]: 2025-11-29 08:26:21.137 233728 DEBUG nova.objects.instance [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lazy-loading 'migration_context' on Instance uuid bb261893-bfa1-4fdc-9c11-a33a733337ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:26:21 np0005539552 nova_compute[233724]: 2025-11-29 08:26:21.157 233728 DEBUG nova.virt.libvirt.driver [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:26:21 np0005539552 nova_compute[233724]: 2025-11-29 08:26:21.157 233728 DEBUG nova.virt.libvirt.driver [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Ensure instance console log exists: /var/lib/nova/instances/bb261893-bfa1-4fdc-9c11-a33a733337ce/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:26:21 np0005539552 nova_compute[233724]: 2025-11-29 08:26:21.158 233728 DEBUG oslo_concurrency.lockutils [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:21 np0005539552 nova_compute[233724]: 2025-11-29 08:26:21.158 233728 DEBUG oslo_concurrency.lockutils [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:21 np0005539552 nova_compute[233724]: 2025-11-29 08:26:21.158 233728 DEBUG oslo_concurrency.lockutils [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:21 np0005539552 nova_compute[233724]: 2025-11-29 08:26:21.303 233728 DEBUG nova.network.neutron [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Successfully created port: 0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:26:21 np0005539552 nova_compute[233724]: 2025-11-29 08:26:21.424 233728 DEBUG nova.compute.manager [req-fc6b4a33-8491-41bf-bb4b-73cbb8c3a9d6 req-e8b0d35e-b627-479b-9692-15695fe580ba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Received event network-vif-plugged-91d5abbf-fd67-487f-bfaa-448b1daa5272 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:21 np0005539552 nova_compute[233724]: 2025-11-29 08:26:21.425 233728 DEBUG oslo_concurrency.lockutils [req-fc6b4a33-8491-41bf-bb4b-73cbb8c3a9d6 req-e8b0d35e-b627-479b-9692-15695fe580ba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "50daa6f5-6598-439f-a542-38e8ae7aded0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:21 np0005539552 nova_compute[233724]: 2025-11-29 08:26:21.426 233728 DEBUG oslo_concurrency.lockutils [req-fc6b4a33-8491-41bf-bb4b-73cbb8c3a9d6 req-e8b0d35e-b627-479b-9692-15695fe580ba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "50daa6f5-6598-439f-a542-38e8ae7aded0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:21 np0005539552 nova_compute[233724]: 2025-11-29 08:26:21.426 233728 DEBUG oslo_concurrency.lockutils [req-fc6b4a33-8491-41bf-bb4b-73cbb8c3a9d6 req-e8b0d35e-b627-479b-9692-15695fe580ba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "50daa6f5-6598-439f-a542-38e8ae7aded0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:21 np0005539552 nova_compute[233724]: 2025-11-29 08:26:21.426 233728 DEBUG nova.compute.manager [req-fc6b4a33-8491-41bf-bb4b-73cbb8c3a9d6 req-e8b0d35e-b627-479b-9692-15695fe580ba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] No waiting events found dispatching network-vif-plugged-91d5abbf-fd67-487f-bfaa-448b1daa5272 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:26:21 np0005539552 nova_compute[233724]: 2025-11-29 08:26:21.426 233728 WARNING nova.compute.manager [req-fc6b4a33-8491-41bf-bb4b-73cbb8c3a9d6 req-e8b0d35e-b627-479b-9692-15695fe580ba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Received unexpected event network-vif-plugged-91d5abbf-fd67-487f-bfaa-448b1daa5272 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:26:21 np0005539552 nova_compute[233724]: 2025-11-29 08:26:21.542 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:21 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:26:21 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:26:21 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:26:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:21.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:22 np0005539552 nova_compute[233724]: 2025-11-29 08:26:22.144 233728 DEBUG nova.network.neutron [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Successfully updated port: 0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:26:22 np0005539552 nova_compute[233724]: 2025-11-29 08:26:22.159 233728 DEBUG oslo_concurrency.lockutils [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Acquiring lock "refresh_cache-bb261893-bfa1-4fdc-9c11-a33a733337ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:26:22 np0005539552 nova_compute[233724]: 2025-11-29 08:26:22.159 233728 DEBUG oslo_concurrency.lockutils [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Acquired lock "refresh_cache-bb261893-bfa1-4fdc-9c11-a33a733337ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:26:22 np0005539552 nova_compute[233724]: 2025-11-29 08:26:22.160 233728 DEBUG nova.network.neutron [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:26:22 np0005539552 nova_compute[233724]: 2025-11-29 08:26:22.297 233728 DEBUG nova.compute.manager [req-a00541a9-b78f-4a6c-8a57-98f78b2bb28f req-b1c43dd1-f850-4c27-b844-8229e2886d55 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Received event network-changed-0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:22 np0005539552 nova_compute[233724]: 2025-11-29 08:26:22.298 233728 DEBUG nova.compute.manager [req-a00541a9-b78f-4a6c-8a57-98f78b2bb28f req-b1c43dd1-f850-4c27-b844-8229e2886d55 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Refreshing instance network info cache due to event network-changed-0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:26:22 np0005539552 nova_compute[233724]: 2025-11-29 08:26:22.298 233728 DEBUG oslo_concurrency.lockutils [req-a00541a9-b78f-4a6c-8a57-98f78b2bb28f req-b1c43dd1-f850-4c27-b844-8229e2886d55 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-bb261893-bfa1-4fdc-9c11-a33a733337ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:26:22 np0005539552 nova_compute[233724]: 2025-11-29 08:26:22.374 233728 DEBUG nova.network.neutron [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:26:22 np0005539552 nova_compute[233724]: 2025-11-29 08:26:22.958 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:22.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:23 np0005539552 NetworkManager[48926]: <info>  [1764404783.3235] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/274)
Nov 29 03:26:23 np0005539552 nova_compute[233724]: 2025-11-29 08:26:23.324 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:23 np0005539552 NetworkManager[48926]: <info>  [1764404783.3252] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/275)
Nov 29 03:26:23 np0005539552 nova_compute[233724]: 2025-11-29 08:26:23.467 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:23 np0005539552 ovn_controller[133798]: 2025-11-29T08:26:23Z|00606|binding|INFO|Releasing lport d4f0104e-3913-4399-9086-37cf4d16e7c7 from this chassis (sb_readonly=0)
Nov 29 03:26:23 np0005539552 nova_compute[233724]: 2025-11-29 08:26:23.482 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:23.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:24 np0005539552 nova_compute[233724]: 2025-11-29 08:26:24.560 233728 DEBUG nova.network.neutron [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Updating instance_info_cache with network_info: [{"id": "0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d", "address": "fa:16:3e:72:4b:49", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e2e8c1e-12", "ovs_interfaceid": "0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:26:24 np0005539552 nova_compute[233724]: 2025-11-29 08:26:24.584 233728 DEBUG oslo_concurrency.lockutils [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Releasing lock "refresh_cache-bb261893-bfa1-4fdc-9c11-a33a733337ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:26:24 np0005539552 nova_compute[233724]: 2025-11-29 08:26:24.584 233728 DEBUG nova.compute.manager [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Instance network_info: |[{"id": "0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d", "address": "fa:16:3e:72:4b:49", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e2e8c1e-12", "ovs_interfaceid": "0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:26:24 np0005539552 nova_compute[233724]: 2025-11-29 08:26:24.585 233728 DEBUG oslo_concurrency.lockutils [req-a00541a9-b78f-4a6c-8a57-98f78b2bb28f req-b1c43dd1-f850-4c27-b844-8229e2886d55 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-bb261893-bfa1-4fdc-9c11-a33a733337ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:26:24 np0005539552 nova_compute[233724]: 2025-11-29 08:26:24.585 233728 DEBUG nova.network.neutron [req-a00541a9-b78f-4a6c-8a57-98f78b2bb28f req-b1c43dd1-f850-4c27-b844-8229e2886d55 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Refreshing network info cache for port 0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:26:24 np0005539552 nova_compute[233724]: 2025-11-29 08:26:24.590 233728 DEBUG nova.virt.libvirt.driver [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Start _get_guest_xml network_info=[{"id": "0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d", "address": "fa:16:3e:72:4b:49", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e2e8c1e-12", "ovs_interfaceid": "0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:26:24 np0005539552 nova_compute[233724]: 2025-11-29 08:26:24.596 233728 DEBUG nova.compute.manager [req-73a548a4-ffe1-4e96-a61a-c46314dd6731 req-b0d63781-2648-4b2f-9052-a4db70c8058e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Received event network-changed-91d5abbf-fd67-487f-bfaa-448b1daa5272 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:24 np0005539552 nova_compute[233724]: 2025-11-29 08:26:24.597 233728 DEBUG nova.compute.manager [req-73a548a4-ffe1-4e96-a61a-c46314dd6731 req-b0d63781-2648-4b2f-9052-a4db70c8058e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Refreshing instance network info cache due to event network-changed-91d5abbf-fd67-487f-bfaa-448b1daa5272. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:26:24 np0005539552 nova_compute[233724]: 2025-11-29 08:26:24.597 233728 DEBUG oslo_concurrency.lockutils [req-73a548a4-ffe1-4e96-a61a-c46314dd6731 req-b0d63781-2648-4b2f-9052-a4db70c8058e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-50daa6f5-6598-439f-a542-38e8ae7aded0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:26:24 np0005539552 nova_compute[233724]: 2025-11-29 08:26:24.597 233728 DEBUG oslo_concurrency.lockutils [req-73a548a4-ffe1-4e96-a61a-c46314dd6731 req-b0d63781-2648-4b2f-9052-a4db70c8058e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-50daa6f5-6598-439f-a542-38e8ae7aded0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:26:24 np0005539552 nova_compute[233724]: 2025-11-29 08:26:24.598 233728 DEBUG nova.network.neutron [req-73a548a4-ffe1-4e96-a61a-c46314dd6731 req-b0d63781-2648-4b2f-9052-a4db70c8058e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Refreshing network info cache for port 91d5abbf-fd67-487f-bfaa-448b1daa5272 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:26:24 np0005539552 nova_compute[233724]: 2025-11-29 08:26:24.601 233728 WARNING nova.virt.libvirt.driver [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:26:24 np0005539552 nova_compute[233724]: 2025-11-29 08:26:24.609 233728 DEBUG nova.virt.libvirt.host [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:26:24 np0005539552 nova_compute[233724]: 2025-11-29 08:26:24.610 233728 DEBUG nova.virt.libvirt.host [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:26:24 np0005539552 nova_compute[233724]: 2025-11-29 08:26:24.619 233728 DEBUG nova.virt.libvirt.host [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:26:24 np0005539552 nova_compute[233724]: 2025-11-29 08:26:24.620 233728 DEBUG nova.virt.libvirt.host [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:26:24 np0005539552 nova_compute[233724]: 2025-11-29 08:26:24.621 233728 DEBUG nova.virt.libvirt.driver [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:26:24 np0005539552 nova_compute[233724]: 2025-11-29 08:26:24.622 233728 DEBUG nova.virt.hardware [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:26:24 np0005539552 nova_compute[233724]: 2025-11-29 08:26:24.623 233728 DEBUG nova.virt.hardware [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:26:24 np0005539552 nova_compute[233724]: 2025-11-29 08:26:24.623 233728 DEBUG nova.virt.hardware [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:26:24 np0005539552 nova_compute[233724]: 2025-11-29 08:26:24.623 233728 DEBUG nova.virt.hardware [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:26:24 np0005539552 nova_compute[233724]: 2025-11-29 08:26:24.624 233728 DEBUG nova.virt.hardware [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:26:24 np0005539552 nova_compute[233724]: 2025-11-29 08:26:24.624 233728 DEBUG nova.virt.hardware [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:26:24 np0005539552 nova_compute[233724]: 2025-11-29 08:26:24.625 233728 DEBUG nova.virt.hardware [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:26:24 np0005539552 nova_compute[233724]: 2025-11-29 08:26:24.625 233728 DEBUG nova.virt.hardware [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:26:24 np0005539552 nova_compute[233724]: 2025-11-29 08:26:24.625 233728 DEBUG nova.virt.hardware [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:26:24 np0005539552 nova_compute[233724]: 2025-11-29 08:26:24.626 233728 DEBUG nova.virt.hardware [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:26:24 np0005539552 nova_compute[233724]: 2025-11-29 08:26:24.626 233728 DEBUG nova.virt.hardware [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:26:24 np0005539552 nova_compute[233724]: 2025-11-29 08:26:24.630 233728 DEBUG oslo_concurrency.processutils [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:24.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:25 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:26:25 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/941562522' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:26:25 np0005539552 nova_compute[233724]: 2025-11-29 08:26:25.096 233728 DEBUG oslo_concurrency.processutils [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:25 np0005539552 nova_compute[233724]: 2025-11-29 08:26:25.121 233728 DEBUG nova.storage.rbd_utils [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] rbd image bb261893-bfa1-4fdc-9c11-a33a733337ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:26:25 np0005539552 nova_compute[233724]: 2025-11-29 08:26:25.125 233728 DEBUG oslo_concurrency.processutils [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:25 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:26:25 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1325609072' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:26:25 np0005539552 nova_compute[233724]: 2025-11-29 08:26:25.575 233728 DEBUG oslo_concurrency.processutils [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:25 np0005539552 nova_compute[233724]: 2025-11-29 08:26:25.577 233728 DEBUG nova.virt.libvirt.vif [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:26:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1410922991',display_name='tempest-ServerRescueNegativeTestJSON-server-1410922991',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1410922991',id=140,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ea7b24ea9d7b4d239b4741634ac3f10c',ramdisk_id='',reservation_id='r-u00jfp1f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-2045177058',owner_user_name='tempest-ServerRescueNegativeTestJSON-2045177058-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:26:20Z,user_data=None,user_id='283f8136265e4425a5a31f840935b9ab',uuid=bb261893-bfa1-4fdc-9c11-a33a733337ce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d", "address": "fa:16:3e:72:4b:49", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e2e8c1e-12", "ovs_interfaceid": "0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:26:25 np0005539552 nova_compute[233724]: 2025-11-29 08:26:25.578 233728 DEBUG nova.network.os_vif_util [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Converting VIF {"id": "0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d", "address": "fa:16:3e:72:4b:49", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e2e8c1e-12", "ovs_interfaceid": "0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:26:25 np0005539552 nova_compute[233724]: 2025-11-29 08:26:25.579 233728 DEBUG nova.network.os_vif_util [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:72:4b:49,bridge_name='br-int',has_traffic_filtering=True,id=0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d,network=Network(4ca67fce-6116-4a0b-b0a9-c25b5adaad19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e2e8c1e-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:26:25 np0005539552 nova_compute[233724]: 2025-11-29 08:26:25.580 233728 DEBUG nova.objects.instance [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lazy-loading 'pci_devices' on Instance uuid bb261893-bfa1-4fdc-9c11-a33a733337ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:26:25 np0005539552 nova_compute[233724]: 2025-11-29 08:26:25.604 233728 DEBUG nova.virt.libvirt.driver [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:26:25 np0005539552 nova_compute[233724]:  <uuid>bb261893-bfa1-4fdc-9c11-a33a733337ce</uuid>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:  <name>instance-0000008c</name>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:26:25 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:      <nova:name>tempest-ServerRescueNegativeTestJSON-server-1410922991</nova:name>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:26:24</nova:creationTime>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:26:25 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:        <nova:user uuid="283f8136265e4425a5a31f840935b9ab">tempest-ServerRescueNegativeTestJSON-2045177058-project-member</nova:user>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:        <nova:project uuid="ea7b24ea9d7b4d239b4741634ac3f10c">tempest-ServerRescueNegativeTestJSON-2045177058</nova:project>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:        <nova:port uuid="0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d">
Nov 29 03:26:25 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:      <entry name="serial">bb261893-bfa1-4fdc-9c11-a33a733337ce</entry>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:      <entry name="uuid">bb261893-bfa1-4fdc-9c11-a33a733337ce</entry>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:26:25 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/bb261893-bfa1-4fdc-9c11-a33a733337ce_disk">
Nov 29 03:26:25 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:26:25 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:26:25 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/bb261893-bfa1-4fdc-9c11-a33a733337ce_disk.config">
Nov 29 03:26:25 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:26:25 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:26:25 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:72:4b:49"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:      <target dev="tap0e2e8c1e-12"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:26:25 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/bb261893-bfa1-4fdc-9c11-a33a733337ce/console.log" append="off"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:26:25 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:26:25 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:26:25 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:26:25 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:26:25 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:26:25 np0005539552 nova_compute[233724]: 2025-11-29 08:26:25.610 233728 DEBUG nova.compute.manager [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Preparing to wait for external event network-vif-plugged-0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:26:25 np0005539552 nova_compute[233724]: 2025-11-29 08:26:25.734 233728 DEBUG oslo_concurrency.lockutils [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Acquiring lock "bb261893-bfa1-4fdc-9c11-a33a733337ce-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:25 np0005539552 nova_compute[233724]: 2025-11-29 08:26:25.735 233728 DEBUG oslo_concurrency.lockutils [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "bb261893-bfa1-4fdc-9c11-a33a733337ce-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:25 np0005539552 nova_compute[233724]: 2025-11-29 08:26:25.735 233728 DEBUG oslo_concurrency.lockutils [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "bb261893-bfa1-4fdc-9c11-a33a733337ce-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:25 np0005539552 nova_compute[233724]: 2025-11-29 08:26:25.735 233728 DEBUG nova.virt.libvirt.vif [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:26:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1410922991',display_name='tempest-ServerRescueNegativeTestJSON-server-1410922991',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1410922991',id=140,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ea7b24ea9d7b4d239b4741634ac3f10c',ramdisk_id='',reservation_id='r-u00jfp1f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-2045177058',owner_user_name='tempest-ServerRescueNegativeTestJSON-2045177058-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:26:20Z,user_data=None,user_id='283f8136265e4425a5a31f840935b9ab',uuid=bb261893-bfa1-4fdc-9c11-a33a733337ce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d", "address": "fa:16:3e:72:4b:49", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e2e8c1e-12", "ovs_interfaceid": "0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:26:25 np0005539552 nova_compute[233724]: 2025-11-29 08:26:25.736 233728 DEBUG nova.network.os_vif_util [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Converting VIF {"id": "0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d", "address": "fa:16:3e:72:4b:49", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e2e8c1e-12", "ovs_interfaceid": "0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:26:25 np0005539552 nova_compute[233724]: 2025-11-29 08:26:25.736 233728 DEBUG nova.network.os_vif_util [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:72:4b:49,bridge_name='br-int',has_traffic_filtering=True,id=0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d,network=Network(4ca67fce-6116-4a0b-b0a9-c25b5adaad19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e2e8c1e-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:26:25 np0005539552 nova_compute[233724]: 2025-11-29 08:26:25.737 233728 DEBUG os_vif [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:4b:49,bridge_name='br-int',has_traffic_filtering=True,id=0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d,network=Network(4ca67fce-6116-4a0b-b0a9-c25b5adaad19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e2e8c1e-12') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:26:25 np0005539552 nova_compute[233724]: 2025-11-29 08:26:25.737 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:25 np0005539552 nova_compute[233724]: 2025-11-29 08:26:25.738 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:25 np0005539552 nova_compute[233724]: 2025-11-29 08:26:25.738 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:26:25 np0005539552 nova_compute[233724]: 2025-11-29 08:26:25.740 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:25 np0005539552 nova_compute[233724]: 2025-11-29 08:26:25.741 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0e2e8c1e-12, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:25 np0005539552 nova_compute[233724]: 2025-11-29 08:26:25.741 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0e2e8c1e-12, col_values=(('external_ids', {'iface-id': '0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:72:4b:49', 'vm-uuid': 'bb261893-bfa1-4fdc-9c11-a33a733337ce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:25 np0005539552 nova_compute[233724]: 2025-11-29 08:26:25.742 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:25 np0005539552 NetworkManager[48926]: <info>  [1764404785.7434] manager: (tap0e2e8c1e-12): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/276)
Nov 29 03:26:25 np0005539552 nova_compute[233724]: 2025-11-29 08:26:25.744 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:26:25 np0005539552 nova_compute[233724]: 2025-11-29 08:26:25.749 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:25 np0005539552 nova_compute[233724]: 2025-11-29 08:26:25.750 233728 INFO os_vif [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:4b:49,bridge_name='br-int',has_traffic_filtering=True,id=0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d,network=Network(4ca67fce-6116-4a0b-b0a9-c25b5adaad19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e2e8c1e-12')#033[00m
Nov 29 03:26:25 np0005539552 nova_compute[233724]: 2025-11-29 08:26:25.780 233728 DEBUG nova.network.neutron [req-a00541a9-b78f-4a6c-8a57-98f78b2bb28f req-b1c43dd1-f850-4c27-b844-8229e2886d55 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Updated VIF entry in instance network info cache for port 0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:26:25 np0005539552 nova_compute[233724]: 2025-11-29 08:26:25.781 233728 DEBUG nova.network.neutron [req-a00541a9-b78f-4a6c-8a57-98f78b2bb28f req-b1c43dd1-f850-4c27-b844-8229e2886d55 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Updating instance_info_cache with network_info: [{"id": "0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d", "address": "fa:16:3e:72:4b:49", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e2e8c1e-12", "ovs_interfaceid": "0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:26:25 np0005539552 nova_compute[233724]: 2025-11-29 08:26:25.783 233728 DEBUG nova.network.neutron [req-73a548a4-ffe1-4e96-a61a-c46314dd6731 req-b0d63781-2648-4b2f-9052-a4db70c8058e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Updated VIF entry in instance network info cache for port 91d5abbf-fd67-487f-bfaa-448b1daa5272. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:26:25 np0005539552 nova_compute[233724]: 2025-11-29 08:26:25.783 233728 DEBUG nova.network.neutron [req-73a548a4-ffe1-4e96-a61a-c46314dd6731 req-b0d63781-2648-4b2f-9052-a4db70c8058e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Updating instance_info_cache with network_info: [{"id": "91d5abbf-fd67-487f-bfaa-448b1daa5272", "address": "fa:16:3e:71:14:96", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91d5abbf-fd", "ovs_interfaceid": "91d5abbf-fd67-487f-bfaa-448b1daa5272", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:26:25 np0005539552 nova_compute[233724]: 2025-11-29 08:26:25.812 233728 DEBUG oslo_concurrency.lockutils [req-a00541a9-b78f-4a6c-8a57-98f78b2bb28f req-b1c43dd1-f850-4c27-b844-8229e2886d55 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-bb261893-bfa1-4fdc-9c11-a33a733337ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:26:25 np0005539552 nova_compute[233724]: 2025-11-29 08:26:25.817 233728 DEBUG oslo_concurrency.lockutils [req-73a548a4-ffe1-4e96-a61a-c46314dd6731 req-b0d63781-2648-4b2f-9052-a4db70c8058e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-50daa6f5-6598-439f-a542-38e8ae7aded0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:26:25 np0005539552 nova_compute[233724]: 2025-11-29 08:26:25.835 233728 DEBUG nova.virt.libvirt.driver [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:26:25 np0005539552 nova_compute[233724]: 2025-11-29 08:26:25.836 233728 DEBUG nova.virt.libvirt.driver [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:26:25 np0005539552 nova_compute[233724]: 2025-11-29 08:26:25.836 233728 DEBUG nova.virt.libvirt.driver [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] No VIF found with MAC fa:16:3e:72:4b:49, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:26:25 np0005539552 nova_compute[233724]: 2025-11-29 08:26:25.837 233728 INFO nova.virt.libvirt.driver [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Using config drive#033[00m
Nov 29 03:26:25 np0005539552 nova_compute[233724]: 2025-11-29 08:26:25.873 233728 DEBUG nova.storage.rbd_utils [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] rbd image bb261893-bfa1-4fdc-9c11-a33a733337ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:26:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:25.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:26 np0005539552 nova_compute[233724]: 2025-11-29 08:26:26.171 233728 INFO nova.virt.libvirt.driver [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Creating config drive at /var/lib/nova/instances/bb261893-bfa1-4fdc-9c11-a33a733337ce/disk.config#033[00m
Nov 29 03:26:26 np0005539552 nova_compute[233724]: 2025-11-29 08:26:26.177 233728 DEBUG oslo_concurrency.processutils [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bb261893-bfa1-4fdc-9c11-a33a733337ce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpov9ob611 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:26 np0005539552 nova_compute[233724]: 2025-11-29 08:26:26.308 233728 DEBUG oslo_concurrency.processutils [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bb261893-bfa1-4fdc-9c11-a33a733337ce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpov9ob611" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:26 np0005539552 nova_compute[233724]: 2025-11-29 08:26:26.342 233728 DEBUG nova.storage.rbd_utils [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] rbd image bb261893-bfa1-4fdc-9c11-a33a733337ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:26:26 np0005539552 nova_compute[233724]: 2025-11-29 08:26:26.347 233728 DEBUG oslo_concurrency.processutils [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bb261893-bfa1-4fdc-9c11-a33a733337ce/disk.config bb261893-bfa1-4fdc-9c11-a33a733337ce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:26 np0005539552 nova_compute[233724]: 2025-11-29 08:26:26.544 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:26 np0005539552 nova_compute[233724]: 2025-11-29 08:26:26.554 233728 DEBUG oslo_concurrency.processutils [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bb261893-bfa1-4fdc-9c11-a33a733337ce/disk.config bb261893-bfa1-4fdc-9c11-a33a733337ce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.207s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:26 np0005539552 nova_compute[233724]: 2025-11-29 08:26:26.555 233728 INFO nova.virt.libvirt.driver [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Deleting local config drive /var/lib/nova/instances/bb261893-bfa1-4fdc-9c11-a33a733337ce/disk.config because it was imported into RBD.#033[00m
Nov 29 03:26:26 np0005539552 kernel: tap0e2e8c1e-12: entered promiscuous mode
Nov 29 03:26:26 np0005539552 NetworkManager[48926]: <info>  [1764404786.6203] manager: (tap0e2e8c1e-12): new Tun device (/org/freedesktop/NetworkManager/Devices/277)
Nov 29 03:26:26 np0005539552 nova_compute[233724]: 2025-11-29 08:26:26.622 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:26 np0005539552 ovn_controller[133798]: 2025-11-29T08:26:26Z|00607|binding|INFO|Claiming lport 0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d for this chassis.
Nov 29 03:26:26 np0005539552 ovn_controller[133798]: 2025-11-29T08:26:26Z|00608|binding|INFO|0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d: Claiming fa:16:3e:72:4b:49 10.100.0.13
Nov 29 03:26:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:26.628 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:72:4b:49 10.100.0.13'], port_security=['fa:16:3e:72:4b:49 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'bb261893-bfa1-4fdc-9c11-a33a733337ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea7b24ea9d7b4d239b4741634ac3f10c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '525789b3-2118-4a66-bac0-ed0947cafa2d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2992474f-d5bf-4893-b4cc-2c774a8a9871, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:26:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:26.629 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d in datapath 4ca67fce-6116-4a0b-b0a9-c25b5adaad19 bound to our chassis#033[00m
Nov 29 03:26:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:26.630 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4ca67fce-6116-4a0b-b0a9-c25b5adaad19#033[00m
Nov 29 03:26:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:26.645 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[88c82d0e-cff7-4791-b116-7a3d17ce9aef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:26.647 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4ca67fce-61 in ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:26:26 np0005539552 systemd-udevd[291378]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:26:26 np0005539552 ovn_controller[133798]: 2025-11-29T08:26:26Z|00609|binding|INFO|Setting lport 0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d ovn-installed in OVS
Nov 29 03:26:26 np0005539552 ovn_controller[133798]: 2025-11-29T08:26:26Z|00610|binding|INFO|Setting lport 0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d up in Southbound
Nov 29 03:26:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:26.653 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4ca67fce-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:26:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:26.654 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0e3ce976-de20-4c71-bfa0-5796f0e7dfe9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:26 np0005539552 nova_compute[233724]: 2025-11-29 08:26:26.655 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:26.657 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6071af6a-51ad-4c19-88ed-acfb7a39e435]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:26 np0005539552 nova_compute[233724]: 2025-11-29 08:26:26.658 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:26 np0005539552 NetworkManager[48926]: <info>  [1764404786.6719] device (tap0e2e8c1e-12): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:26:26 np0005539552 NetworkManager[48926]: <info>  [1764404786.6727] device (tap0e2e8c1e-12): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:26:26 np0005539552 systemd-machined[196379]: New machine qemu-61-instance-0000008c.
Nov 29 03:26:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:26.685 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[86252152-c6e4-42a8-988f-3278b83b11ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:26.701 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[67639dd5-c79c-4d02-aec4-121d13f794c7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:26 np0005539552 systemd[1]: Started Virtual Machine qemu-61-instance-0000008c.
Nov 29 03:26:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:26.737 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[17e2ebb8-e658-4bb5-9e4d-449801e6d8b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:26 np0005539552 NetworkManager[48926]: <info>  [1764404786.7469] manager: (tap4ca67fce-60): new Veth device (/org/freedesktop/NetworkManager/Devices/278)
Nov 29 03:26:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:26.745 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[9008df7c-80d3-45e6-bee0-7fc78b1c6953]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:26.789 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[02696ab4-d0bc-41d8-b763-d8866dd631b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:26.793 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[3f62020d-7fd3-407e-b6f5-3b56571a7067]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:26 np0005539552 NetworkManager[48926]: <info>  [1764404786.8212] device (tap4ca67fce-60): carrier: link connected
Nov 29 03:26:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:26.827 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[a29a3455-01bb-40e6-8256-90413c6ba88f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:26.848 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6101a521-d6ff-4334-9320-93b6eeac9926]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4ca67fce-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:3c:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 183], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 783250, 'reachable_time': 28241, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291414, 'error': None, 'target': 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:26.864 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[7944786e-a972-4efd-a0e6-a5f530411cc6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe18:3cca'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 783250, 'tstamp': 783250}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291415, 'error': None, 'target': 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:26.882 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[db4bfb22-106f-485f-bf4d-ced784e3f487]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4ca67fce-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:3c:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 183], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 783250, 'reachable_time': 28241, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 291416, 'error': None, 'target': 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:26.920 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f027f111-ebc4-448b-8578-bd993a05cc04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:26.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:26.993 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6b6c156a-b7eb-4e09-8045-03f7c6213981]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:26.994 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ca67fce-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:26.995 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:26:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:26.995 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4ca67fce-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:26 np0005539552 nova_compute[233724]: 2025-11-29 08:26:26.996 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:26 np0005539552 NetworkManager[48926]: <info>  [1764404786.9973] manager: (tap4ca67fce-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/279)
Nov 29 03:26:26 np0005539552 kernel: tap4ca67fce-60: entered promiscuous mode
Nov 29 03:26:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:27.000 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4ca67fce-60, col_values=(('external_ids', {'iface-id': '6f99f0ed-ee75-45c3-abe1-1afc889fd227'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:27 np0005539552 ovn_controller[133798]: 2025-11-29T08:26:27Z|00611|binding|INFO|Releasing lport 6f99f0ed-ee75-45c3-abe1-1afc889fd227 from this chassis (sb_readonly=0)
Nov 29 03:26:27 np0005539552 nova_compute[233724]: 2025-11-29 08:26:27.035 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:27.036 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4ca67fce-6116-4a0b-b0a9-c25b5adaad19.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4ca67fce-6116-4a0b-b0a9-c25b5adaad19.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:26:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:27.037 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[7b9b59ad-6b8f-49c6-82a4-95aa6ed25e55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:27.038 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:26:27 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:26:27 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:26:27 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-4ca67fce-6116-4a0b-b0a9-c25b5adaad19
Nov 29 03:26:27 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:26:27 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:26:27 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:26:27 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/4ca67fce-6116-4a0b-b0a9-c25b5adaad19.pid.haproxy
Nov 29 03:26:27 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:26:27 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:26:27 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:26:27 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:26:27 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:26:27 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:26:27 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:26:27 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:26:27 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:26:27 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:26:27 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:26:27 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:26:27 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:26:27 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:26:27 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:26:27 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:26:27 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:26:27 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:26:27 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:26:27 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:26:27 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 4ca67fce-6116-4a0b-b0a9-c25b5adaad19
Nov 29 03:26:27 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:26:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:27.038 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'env', 'PROCESS_TAG=haproxy-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4ca67fce-6116-4a0b-b0a9-c25b5adaad19.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:26:27 np0005539552 nova_compute[233724]: 2025-11-29 08:26:27.412 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404787.4118433, bb261893-bfa1-4fdc-9c11-a33a733337ce => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:26:27 np0005539552 nova_compute[233724]: 2025-11-29 08:26:27.413 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] VM Started (Lifecycle Event)#033[00m
Nov 29 03:26:27 np0005539552 nova_compute[233724]: 2025-11-29 08:26:27.433 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:26:27 np0005539552 nova_compute[233724]: 2025-11-29 08:26:27.437 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404787.4121017, bb261893-bfa1-4fdc-9c11-a33a733337ce => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:26:27 np0005539552 nova_compute[233724]: 2025-11-29 08:26:27.437 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:26:27 np0005539552 podman[291489]: 2025-11-29 08:26:27.446053496 +0000 UTC m=+0.063269693 container create b2ac629770a22bfec7ecd23331cf03a39e457751f97ac74c57d543262d4fa247 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 03:26:27 np0005539552 nova_compute[233724]: 2025-11-29 08:26:27.453 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:26:27 np0005539552 nova_compute[233724]: 2025-11-29 08:26:27.458 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:26:27 np0005539552 nova_compute[233724]: 2025-11-29 08:26:27.477 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:26:27 np0005539552 systemd[1]: Started libpod-conmon-b2ac629770a22bfec7ecd23331cf03a39e457751f97ac74c57d543262d4fa247.scope.
Nov 29 03:26:27 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:26:27 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6416ace80f0ed23d2777ce894f83179c7928266e3cdaf24b66bd57a7752ab3af/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:26:27 np0005539552 podman[291489]: 2025-11-29 08:26:27.419261615 +0000 UTC m=+0.036477802 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:26:27 np0005539552 podman[291489]: 2025-11-29 08:26:27.526783808 +0000 UTC m=+0.143999995 container init b2ac629770a22bfec7ecd23331cf03a39e457751f97ac74c57d543262d4fa247 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:26:27 np0005539552 podman[291489]: 2025-11-29 08:26:27.536589392 +0000 UTC m=+0.153805559 container start b2ac629770a22bfec7ecd23331cf03a39e457751f97ac74c57d543262d4fa247 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:26:27 np0005539552 nova_compute[233724]: 2025-11-29 08:26:27.578 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:27 np0005539552 neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19[291505]: [NOTICE]   (291509) : New worker (291511) forked
Nov 29 03:26:27 np0005539552 neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19[291505]: [NOTICE]   (291509) : Loading success.
Nov 29 03:26:27 np0005539552 nova_compute[233724]: 2025-11-29 08:26:27.785 233728 DEBUG nova.compute.manager [req-1958312c-0239-46dc-8ee9-56ad505335da req-5819ee13-b336-488b-b181-a96ade5eeb60 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Received event network-vif-plugged-0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:27 np0005539552 nova_compute[233724]: 2025-11-29 08:26:27.785 233728 DEBUG oslo_concurrency.lockutils [req-1958312c-0239-46dc-8ee9-56ad505335da req-5819ee13-b336-488b-b181-a96ade5eeb60 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "bb261893-bfa1-4fdc-9c11-a33a733337ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:27 np0005539552 nova_compute[233724]: 2025-11-29 08:26:27.786 233728 DEBUG oslo_concurrency.lockutils [req-1958312c-0239-46dc-8ee9-56ad505335da req-5819ee13-b336-488b-b181-a96ade5eeb60 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bb261893-bfa1-4fdc-9c11-a33a733337ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:27 np0005539552 nova_compute[233724]: 2025-11-29 08:26:27.786 233728 DEBUG oslo_concurrency.lockutils [req-1958312c-0239-46dc-8ee9-56ad505335da req-5819ee13-b336-488b-b181-a96ade5eeb60 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bb261893-bfa1-4fdc-9c11-a33a733337ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:27 np0005539552 nova_compute[233724]: 2025-11-29 08:26:27.786 233728 DEBUG nova.compute.manager [req-1958312c-0239-46dc-8ee9-56ad505335da req-5819ee13-b336-488b-b181-a96ade5eeb60 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Processing event network-vif-plugged-0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:26:27 np0005539552 nova_compute[233724]: 2025-11-29 08:26:27.786 233728 DEBUG nova.compute.manager [req-1958312c-0239-46dc-8ee9-56ad505335da req-5819ee13-b336-488b-b181-a96ade5eeb60 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Received event network-vif-plugged-0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:27 np0005539552 nova_compute[233724]: 2025-11-29 08:26:27.787 233728 DEBUG oslo_concurrency.lockutils [req-1958312c-0239-46dc-8ee9-56ad505335da req-5819ee13-b336-488b-b181-a96ade5eeb60 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "bb261893-bfa1-4fdc-9c11-a33a733337ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:27 np0005539552 nova_compute[233724]: 2025-11-29 08:26:27.787 233728 DEBUG oslo_concurrency.lockutils [req-1958312c-0239-46dc-8ee9-56ad505335da req-5819ee13-b336-488b-b181-a96ade5eeb60 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bb261893-bfa1-4fdc-9c11-a33a733337ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:27 np0005539552 nova_compute[233724]: 2025-11-29 08:26:27.787 233728 DEBUG oslo_concurrency.lockutils [req-1958312c-0239-46dc-8ee9-56ad505335da req-5819ee13-b336-488b-b181-a96ade5eeb60 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bb261893-bfa1-4fdc-9c11-a33a733337ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:27 np0005539552 nova_compute[233724]: 2025-11-29 08:26:27.789 233728 DEBUG nova.compute.manager [req-1958312c-0239-46dc-8ee9-56ad505335da req-5819ee13-b336-488b-b181-a96ade5eeb60 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] No waiting events found dispatching network-vif-plugged-0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:26:27 np0005539552 nova_compute[233724]: 2025-11-29 08:26:27.789 233728 WARNING nova.compute.manager [req-1958312c-0239-46dc-8ee9-56ad505335da req-5819ee13-b336-488b-b181-a96ade5eeb60 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Received unexpected event network-vif-plugged-0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:26:27 np0005539552 nova_compute[233724]: 2025-11-29 08:26:27.790 233728 DEBUG nova.compute.manager [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:26:27 np0005539552 nova_compute[233724]: 2025-11-29 08:26:27.800 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404787.7930064, bb261893-bfa1-4fdc-9c11-a33a733337ce => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:26:27 np0005539552 nova_compute[233724]: 2025-11-29 08:26:27.802 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:26:27 np0005539552 nova_compute[233724]: 2025-11-29 08:26:27.803 233728 DEBUG nova.virt.libvirt.driver [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:26:27 np0005539552 nova_compute[233724]: 2025-11-29 08:26:27.815 233728 INFO nova.virt.libvirt.driver [-] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Instance spawned successfully.#033[00m
Nov 29 03:26:27 np0005539552 nova_compute[233724]: 2025-11-29 08:26:27.816 233728 DEBUG nova.virt.libvirt.driver [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:26:27 np0005539552 nova_compute[233724]: 2025-11-29 08:26:27.828 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:26:27 np0005539552 nova_compute[233724]: 2025-11-29 08:26:27.834 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:26:27 np0005539552 nova_compute[233724]: 2025-11-29 08:26:27.840 233728 DEBUG nova.virt.libvirt.driver [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:26:27 np0005539552 nova_compute[233724]: 2025-11-29 08:26:27.840 233728 DEBUG nova.virt.libvirt.driver [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:26:27 np0005539552 nova_compute[233724]: 2025-11-29 08:26:27.841 233728 DEBUG nova.virt.libvirt.driver [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:26:27 np0005539552 nova_compute[233724]: 2025-11-29 08:26:27.842 233728 DEBUG nova.virt.libvirt.driver [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:26:27 np0005539552 nova_compute[233724]: 2025-11-29 08:26:27.842 233728 DEBUG nova.virt.libvirt.driver [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:26:27 np0005539552 nova_compute[233724]: 2025-11-29 08:26:27.843 233728 DEBUG nova.virt.libvirt.driver [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:26:27 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:26:27 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:26:27 np0005539552 nova_compute[233724]: 2025-11-29 08:26:27.869 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:26:27 np0005539552 nova_compute[233724]: 2025-11-29 08:26:27.899 233728 INFO nova.compute.manager [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Took 7.56 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:26:27 np0005539552 nova_compute[233724]: 2025-11-29 08:26:27.900 233728 DEBUG nova.compute.manager [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:26:27 np0005539552 nova_compute[233724]: 2025-11-29 08:26:27.960 233728 INFO nova.compute.manager [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Took 8.71 seconds to build instance.#033[00m
Nov 29 03:26:27 np0005539552 nova_compute[233724]: 2025-11-29 08:26:27.975 233728 DEBUG oslo_concurrency.lockutils [None req-c4e2f639-3535-4987-9e49-982ea0a0e682 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "bb261893-bfa1-4fdc-9c11-a33a733337ce" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:27.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:28.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:29.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:30 np0005539552 nova_compute[233724]: 2025-11-29 08:26:30.680 233728 INFO nova.compute.manager [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Rescuing#033[00m
Nov 29 03:26:30 np0005539552 nova_compute[233724]: 2025-11-29 08:26:30.681 233728 DEBUG oslo_concurrency.lockutils [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Acquiring lock "refresh_cache-bb261893-bfa1-4fdc-9c11-a33a733337ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:26:30 np0005539552 nova_compute[233724]: 2025-11-29 08:26:30.682 233728 DEBUG oslo_concurrency.lockutils [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Acquired lock "refresh_cache-bb261893-bfa1-4fdc-9c11-a33a733337ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:26:30 np0005539552 nova_compute[233724]: 2025-11-29 08:26:30.682 233728 DEBUG nova.network.neutron [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:26:30 np0005539552 nova_compute[233724]: 2025-11-29 08:26:30.743 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:26:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:30.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:26:31 np0005539552 nova_compute[233724]: 2025-11-29 08:26:31.546 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:32.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:32 np0005539552 nova_compute[233724]: 2025-11-29 08:26:32.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:26:32 np0005539552 nova_compute[233724]: 2025-11-29 08:26:32.949 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:32 np0005539552 nova_compute[233724]: 2025-11-29 08:26:32.950 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:32 np0005539552 nova_compute[233724]: 2025-11-29 08:26:32.950 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:32 np0005539552 nova_compute[233724]: 2025-11-29 08:26:32.950 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:26:32 np0005539552 nova_compute[233724]: 2025-11-29 08:26:32.951 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:32 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:32.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:33 np0005539552 nova_compute[233724]: 2025-11-29 08:26:33.353 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:26:33 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/608272158' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:26:33 np0005539552 nova_compute[233724]: 2025-11-29 08:26:33.383 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:33 np0005539552 nova_compute[233724]: 2025-11-29 08:26:33.482 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-0000008a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:26:33 np0005539552 nova_compute[233724]: 2025-11-29 08:26:33.483 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-0000008a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:26:33 np0005539552 nova_compute[233724]: 2025-11-29 08:26:33.487 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:26:33 np0005539552 nova_compute[233724]: 2025-11-29 08:26:33.488 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:26:33 np0005539552 nova_compute[233724]: 2025-11-29 08:26:33.684 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:26:33 np0005539552 nova_compute[233724]: 2025-11-29 08:26:33.685 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4003MB free_disk=20.743305206298828GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:26:33 np0005539552 nova_compute[233724]: 2025-11-29 08:26:33.686 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:33 np0005539552 nova_compute[233724]: 2025-11-29 08:26:33.686 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:33 np0005539552 nova_compute[233724]: 2025-11-29 08:26:33.813 233728 DEBUG nova.network.neutron [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Updating instance_info_cache with network_info: [{"id": "0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d", "address": "fa:16:3e:72:4b:49", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e2e8c1e-12", "ovs_interfaceid": "0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:26:33 np0005539552 nova_compute[233724]: 2025-11-29 08:26:33.818 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 50daa6f5-6598-439f-a542-38e8ae7aded0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:26:33 np0005539552 nova_compute[233724]: 2025-11-29 08:26:33.819 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance bb261893-bfa1-4fdc-9c11-a33a733337ce actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:26:33 np0005539552 nova_compute[233724]: 2025-11-29 08:26:33.819 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:26:33 np0005539552 nova_compute[233724]: 2025-11-29 08:26:33.819 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:26:33 np0005539552 nova_compute[233724]: 2025-11-29 08:26:33.846 233728 DEBUG oslo_concurrency.lockutils [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Releasing lock "refresh_cache-bb261893-bfa1-4fdc-9c11-a33a733337ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:26:33 np0005539552 nova_compute[233724]: 2025-11-29 08:26:33.878 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:34.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:34 np0005539552 ovn_controller[133798]: 2025-11-29T08:26:34Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:71:14:96 10.100.0.4
Nov 29 03:26:34 np0005539552 ovn_controller[133798]: 2025-11-29T08:26:34Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:71:14:96 10.100.0.4
Nov 29 03:26:34 np0005539552 nova_compute[233724]: 2025-11-29 08:26:34.198 233728 DEBUG nova.virt.libvirt.driver [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:26:34 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:26:34 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3223347134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:26:34 np0005539552 nova_compute[233724]: 2025-11-29 08:26:34.348 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:34 np0005539552 nova_compute[233724]: 2025-11-29 08:26:34.356 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:26:34 np0005539552 nova_compute[233724]: 2025-11-29 08:26:34.383 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:26:34 np0005539552 nova_compute[233724]: 2025-11-29 08:26:34.409 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:26:34 np0005539552 nova_compute[233724]: 2025-11-29 08:26:34.410 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:35.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:35 np0005539552 nova_compute[233724]: 2025-11-29 08:26:35.746 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:36.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:36 np0005539552 nova_compute[233724]: 2025-11-29 08:26:36.549 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:36 np0005539552 nova_compute[233724]: 2025-11-29 08:26:36.960 233728 DEBUG oslo_concurrency.lockutils [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Acquiring lock "d2f10f56-bc70-4ac8-953c-99479942f88d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:36 np0005539552 nova_compute[233724]: 2025-11-29 08:26:36.960 233728 DEBUG oslo_concurrency.lockutils [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Lock "d2f10f56-bc70-4ac8-953c-99479942f88d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:36 np0005539552 nova_compute[233724]: 2025-11-29 08:26:36.981 233728 DEBUG nova.compute.manager [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:26:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:37.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:37 np0005539552 nova_compute[233724]: 2025-11-29 08:26:37.049 233728 DEBUG oslo_concurrency.lockutils [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:37 np0005539552 nova_compute[233724]: 2025-11-29 08:26:37.049 233728 DEBUG oslo_concurrency.lockutils [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:37 np0005539552 nova_compute[233724]: 2025-11-29 08:26:37.060 233728 DEBUG nova.virt.hardware [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:26:37 np0005539552 nova_compute[233724]: 2025-11-29 08:26:37.060 233728 INFO nova.compute.claims [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:26:37 np0005539552 nova_compute[233724]: 2025-11-29 08:26:37.229 233728 DEBUG oslo_concurrency.processutils [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:26:37 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/456286975' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:26:37 np0005539552 nova_compute[233724]: 2025-11-29 08:26:37.691 233728 DEBUG oslo_concurrency.processutils [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:37 np0005539552 nova_compute[233724]: 2025-11-29 08:26:37.697 233728 DEBUG nova.compute.provider_tree [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:26:37 np0005539552 nova_compute[233724]: 2025-11-29 08:26:37.728 233728 DEBUG nova.scheduler.client.report [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:26:37 np0005539552 nova_compute[233724]: 2025-11-29 08:26:37.755 233728 DEBUG oslo_concurrency.lockutils [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:37 np0005539552 nova_compute[233724]: 2025-11-29 08:26:37.756 233728 DEBUG nova.compute.manager [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:26:37 np0005539552 nova_compute[233724]: 2025-11-29 08:26:37.814 233728 DEBUG nova.compute.manager [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:26:37 np0005539552 nova_compute[233724]: 2025-11-29 08:26:37.814 233728 DEBUG nova.network.neutron [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:26:37 np0005539552 nova_compute[233724]: 2025-11-29 08:26:37.830 233728 INFO nova.virt.libvirt.driver [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:26:37 np0005539552 nova_compute[233724]: 2025-11-29 08:26:37.869 233728 DEBUG nova.compute.manager [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:26:37 np0005539552 nova_compute[233724]: 2025-11-29 08:26:37.963 233728 DEBUG nova.compute.manager [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:26:37 np0005539552 nova_compute[233724]: 2025-11-29 08:26:37.964 233728 DEBUG nova.virt.libvirt.driver [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:26:37 np0005539552 nova_compute[233724]: 2025-11-29 08:26:37.965 233728 INFO nova.virt.libvirt.driver [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Creating image(s)#033[00m
Nov 29 03:26:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:37 np0005539552 nova_compute[233724]: 2025-11-29 08:26:37.990 233728 DEBUG nova.storage.rbd_utils [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] rbd image d2f10f56-bc70-4ac8-953c-99479942f88d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:26:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:38.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:38 np0005539552 nova_compute[233724]: 2025-11-29 08:26:38.020 233728 DEBUG nova.storage.rbd_utils [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] rbd image d2f10f56-bc70-4ac8-953c-99479942f88d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:26:38 np0005539552 nova_compute[233724]: 2025-11-29 08:26:38.049 233728 DEBUG nova.storage.rbd_utils [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] rbd image d2f10f56-bc70-4ac8-953c-99479942f88d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:26:38 np0005539552 nova_compute[233724]: 2025-11-29 08:26:38.052 233728 DEBUG oslo_concurrency.processutils [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:38 np0005539552 nova_compute[233724]: 2025-11-29 08:26:38.118 233728 DEBUG oslo_concurrency.processutils [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:38 np0005539552 nova_compute[233724]: 2025-11-29 08:26:38.119 233728 DEBUG oslo_concurrency.lockutils [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:38 np0005539552 nova_compute[233724]: 2025-11-29 08:26:38.120 233728 DEBUG oslo_concurrency.lockutils [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:38 np0005539552 nova_compute[233724]: 2025-11-29 08:26:38.120 233728 DEBUG oslo_concurrency.lockutils [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:38 np0005539552 nova_compute[233724]: 2025-11-29 08:26:38.145 233728 DEBUG nova.storage.rbd_utils [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] rbd image d2f10f56-bc70-4ac8-953c-99479942f88d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:26:38 np0005539552 nova_compute[233724]: 2025-11-29 08:26:38.148 233728 DEBUG oslo_concurrency.processutils [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 d2f10f56-bc70-4ac8-953c-99479942f88d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:38 np0005539552 nova_compute[233724]: 2025-11-29 08:26:38.410 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:26:38 np0005539552 nova_compute[233724]: 2025-11-29 08:26:38.412 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:26:38 np0005539552 nova_compute[233724]: 2025-11-29 08:26:38.412 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:26:38 np0005539552 nova_compute[233724]: 2025-11-29 08:26:38.427 233728 DEBUG oslo_concurrency.processutils [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 d2f10f56-bc70-4ac8-953c-99479942f88d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.279s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:38 np0005539552 nova_compute[233724]: 2025-11-29 08:26:38.516 233728 DEBUG nova.storage.rbd_utils [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] resizing rbd image d2f10f56-bc70-4ac8-953c-99479942f88d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:26:38 np0005539552 nova_compute[233724]: 2025-11-29 08:26:38.575 233728 DEBUG nova.policy [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ac7e861322894ae387d8f7062a73dddb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '14c2056117984f1e86ae3f64a5fee48c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:26:38 np0005539552 nova_compute[233724]: 2025-11-29 08:26:38.659 233728 DEBUG nova.objects.instance [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Lazy-loading 'migration_context' on Instance uuid d2f10f56-bc70-4ac8-953c-99479942f88d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:26:38 np0005539552 nova_compute[233724]: 2025-11-29 08:26:38.673 233728 DEBUG nova.virt.libvirt.driver [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:26:38 np0005539552 nova_compute[233724]: 2025-11-29 08:26:38.673 233728 DEBUG nova.virt.libvirt.driver [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Ensure instance console log exists: /var/lib/nova/instances/d2f10f56-bc70-4ac8-953c-99479942f88d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:26:38 np0005539552 nova_compute[233724]: 2025-11-29 08:26:38.673 233728 DEBUG oslo_concurrency.lockutils [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:38 np0005539552 nova_compute[233724]: 2025-11-29 08:26:38.674 233728 DEBUG oslo_concurrency.lockutils [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:38 np0005539552 nova_compute[233724]: 2025-11-29 08:26:38.674 233728 DEBUG oslo_concurrency.lockutils [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:38 np0005539552 nova_compute[233724]: 2025-11-29 08:26:38.925 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:26:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:26:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1292572545' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:26:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:26:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1292572545' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:26:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:39.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:39 np0005539552 nova_compute[233724]: 2025-11-29 08:26:39.535 233728 DEBUG nova.network.neutron [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Successfully created port: 82c2db07-7de0-4b4c-920a-4a2b41cf480a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:26:39 np0005539552 nova_compute[233724]: 2025-11-29 08:26:39.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:26:39 np0005539552 nova_compute[233724]: 2025-11-29 08:26:39.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:26:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:40.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:40.470 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:26:40 np0005539552 nova_compute[233724]: 2025-11-29 08:26:40.470 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:40.472 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:26:40 np0005539552 nova_compute[233724]: 2025-11-29 08:26:40.686 233728 DEBUG nova.compute.manager [None req-9b0bcd78-c8bd-42fc-b53b-d01798216896 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:26:40 np0005539552 nova_compute[233724]: 2025-11-29 08:26:40.748 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:40 np0005539552 nova_compute[233724]: 2025-11-29 08:26:40.760 233728 INFO nova.compute.manager [None req-9b0bcd78-c8bd-42fc-b53b-d01798216896 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] instance snapshotting#033[00m
Nov 29 03:26:40 np0005539552 nova_compute[233724]: 2025-11-29 08:26:40.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:26:40 np0005539552 nova_compute[233724]: 2025-11-29 08:26:40.967 233728 DEBUG nova.network.neutron [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Successfully updated port: 82c2db07-7de0-4b4c-920a-4a2b41cf480a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:26:40 np0005539552 nova_compute[233724]: 2025-11-29 08:26:40.988 233728 DEBUG oslo_concurrency.lockutils [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Acquiring lock "refresh_cache-d2f10f56-bc70-4ac8-953c-99479942f88d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:26:40 np0005539552 nova_compute[233724]: 2025-11-29 08:26:40.989 233728 DEBUG oslo_concurrency.lockutils [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Acquired lock "refresh_cache-d2f10f56-bc70-4ac8-953c-99479942f88d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:26:40 np0005539552 nova_compute[233724]: 2025-11-29 08:26:40.989 233728 DEBUG nova.network.neutron [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:26:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:41.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:41 np0005539552 nova_compute[233724]: 2025-11-29 08:26:41.018 233728 INFO nova.virt.libvirt.driver [None req-9b0bcd78-c8bd-42fc-b53b-d01798216896 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Beginning live snapshot process#033[00m
Nov 29 03:26:41 np0005539552 nova_compute[233724]: 2025-11-29 08:26:41.078 233728 DEBUG nova.compute.manager [req-75820ceb-4db0-41b2-9909-3f152da2a6d1 req-becafabb-be50-43cf-99c2-76015de0c4e3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Received event network-changed-82c2db07-7de0-4b4c-920a-4a2b41cf480a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:41 np0005539552 nova_compute[233724]: 2025-11-29 08:26:41.078 233728 DEBUG nova.compute.manager [req-75820ceb-4db0-41b2-9909-3f152da2a6d1 req-becafabb-be50-43cf-99c2-76015de0c4e3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Refreshing instance network info cache due to event network-changed-82c2db07-7de0-4b4c-920a-4a2b41cf480a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:26:41 np0005539552 nova_compute[233724]: 2025-11-29 08:26:41.079 233728 DEBUG oslo_concurrency.lockutils [req-75820ceb-4db0-41b2-9909-3f152da2a6d1 req-becafabb-be50-43cf-99c2-76015de0c4e3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-d2f10f56-bc70-4ac8-953c-99479942f88d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:26:41 np0005539552 nova_compute[233724]: 2025-11-29 08:26:41.162 233728 DEBUG nova.virt.libvirt.imagebackend [None req-9b0bcd78-c8bd-42fc-b53b-d01798216896 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] No parent info for 4873db8c-b414-4e95-acd9-77caabebe722; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 29 03:26:41 np0005539552 nova_compute[233724]: 2025-11-29 08:26:41.343 233728 DEBUG nova.storage.rbd_utils [None req-9b0bcd78-c8bd-42fc-b53b-d01798216896 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] creating snapshot(dcd1759ac55d4b21b641cd090a8f619c) on rbd image(50daa6f5-6598-439f-a542-38e8ae7aded0_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:26:41 np0005539552 nova_compute[233724]: 2025-11-29 08:26:41.477 233728 DEBUG nova.network.neutron [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:26:41 np0005539552 ovn_controller[133798]: 2025-11-29T08:26:41Z|00079|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:72:4b:49 10.100.0.13
Nov 29 03:26:41 np0005539552 ovn_controller[133798]: 2025-11-29T08:26:41Z|00080|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:72:4b:49 10.100.0.13
Nov 29 03:26:41 np0005539552 nova_compute[233724]: 2025-11-29 08:26:41.550 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:42.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e334 e334: 3 total, 3 up, 3 in
Nov 29 03:26:42 np0005539552 nova_compute[233724]: 2025-11-29 08:26:42.292 233728 DEBUG nova.storage.rbd_utils [None req-9b0bcd78-c8bd-42fc-b53b-d01798216896 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] cloning vms/50daa6f5-6598-439f-a542-38e8ae7aded0_disk@dcd1759ac55d4b21b641cd090a8f619c to images/f5aee037-dd13-47e6-81af-147c317d7457 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:26:42 np0005539552 nova_compute[233724]: 2025-11-29 08:26:42.327 233728 DEBUG nova.network.neutron [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Updating instance_info_cache with network_info: [{"id": "82c2db07-7de0-4b4c-920a-4a2b41cf480a", "address": "fa:16:3e:ad:cb:50", "network": {"id": "8e5589bf-54e6-4aa1-8daf-200d8f338eb1", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-695845824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14c2056117984f1e86ae3f64a5fee48c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82c2db07-7d", "ovs_interfaceid": "82c2db07-7de0-4b4c-920a-4a2b41cf480a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:26:42 np0005539552 nova_compute[233724]: 2025-11-29 08:26:42.377 233728 DEBUG oslo_concurrency.lockutils [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Releasing lock "refresh_cache-d2f10f56-bc70-4ac8-953c-99479942f88d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:26:42 np0005539552 nova_compute[233724]: 2025-11-29 08:26:42.377 233728 DEBUG nova.compute.manager [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Instance network_info: |[{"id": "82c2db07-7de0-4b4c-920a-4a2b41cf480a", "address": "fa:16:3e:ad:cb:50", "network": {"id": "8e5589bf-54e6-4aa1-8daf-200d8f338eb1", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-695845824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14c2056117984f1e86ae3f64a5fee48c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82c2db07-7d", "ovs_interfaceid": "82c2db07-7de0-4b4c-920a-4a2b41cf480a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:26:42 np0005539552 nova_compute[233724]: 2025-11-29 08:26:42.378 233728 DEBUG oslo_concurrency.lockutils [req-75820ceb-4db0-41b2-9909-3f152da2a6d1 req-becafabb-be50-43cf-99c2-76015de0c4e3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-d2f10f56-bc70-4ac8-953c-99479942f88d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:26:42 np0005539552 nova_compute[233724]: 2025-11-29 08:26:42.378 233728 DEBUG nova.network.neutron [req-75820ceb-4db0-41b2-9909-3f152da2a6d1 req-becafabb-be50-43cf-99c2-76015de0c4e3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Refreshing network info cache for port 82c2db07-7de0-4b4c-920a-4a2b41cf480a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:26:42 np0005539552 nova_compute[233724]: 2025-11-29 08:26:42.382 233728 DEBUG nova.virt.libvirt.driver [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Start _get_guest_xml network_info=[{"id": "82c2db07-7de0-4b4c-920a-4a2b41cf480a", "address": "fa:16:3e:ad:cb:50", "network": {"id": "8e5589bf-54e6-4aa1-8daf-200d8f338eb1", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-695845824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14c2056117984f1e86ae3f64a5fee48c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82c2db07-7d", "ovs_interfaceid": "82c2db07-7de0-4b4c-920a-4a2b41cf480a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:26:42 np0005539552 nova_compute[233724]: 2025-11-29 08:26:42.385 233728 WARNING nova.virt.libvirt.driver [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:26:42 np0005539552 nova_compute[233724]: 2025-11-29 08:26:42.390 233728 DEBUG nova.virt.libvirt.host [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:26:42 np0005539552 nova_compute[233724]: 2025-11-29 08:26:42.391 233728 DEBUG nova.virt.libvirt.host [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:26:42 np0005539552 nova_compute[233724]: 2025-11-29 08:26:42.393 233728 DEBUG nova.virt.libvirt.host [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:26:42 np0005539552 nova_compute[233724]: 2025-11-29 08:26:42.393 233728 DEBUG nova.virt.libvirt.host [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:26:42 np0005539552 nova_compute[233724]: 2025-11-29 08:26:42.394 233728 DEBUG nova.virt.libvirt.driver [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:26:42 np0005539552 nova_compute[233724]: 2025-11-29 08:26:42.394 233728 DEBUG nova.virt.hardware [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:26:42 np0005539552 nova_compute[233724]: 2025-11-29 08:26:42.395 233728 DEBUG nova.virt.hardware [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:26:42 np0005539552 nova_compute[233724]: 2025-11-29 08:26:42.395 233728 DEBUG nova.virt.hardware [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:26:42 np0005539552 nova_compute[233724]: 2025-11-29 08:26:42.396 233728 DEBUG nova.virt.hardware [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:26:42 np0005539552 nova_compute[233724]: 2025-11-29 08:26:42.396 233728 DEBUG nova.virt.hardware [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:26:42 np0005539552 nova_compute[233724]: 2025-11-29 08:26:42.396 233728 DEBUG nova.virt.hardware [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:26:42 np0005539552 nova_compute[233724]: 2025-11-29 08:26:42.396 233728 DEBUG nova.virt.hardware [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:26:42 np0005539552 nova_compute[233724]: 2025-11-29 08:26:42.397 233728 DEBUG nova.virt.hardware [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:26:42 np0005539552 nova_compute[233724]: 2025-11-29 08:26:42.397 233728 DEBUG nova.virt.hardware [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:26:42 np0005539552 nova_compute[233724]: 2025-11-29 08:26:42.397 233728 DEBUG nova.virt.hardware [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:26:42 np0005539552 nova_compute[233724]: 2025-11-29 08:26:42.397 233728 DEBUG nova.virt.hardware [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:26:42 np0005539552 nova_compute[233724]: 2025-11-29 08:26:42.400 233728 DEBUG oslo_concurrency.processutils [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:42 np0005539552 nova_compute[233724]: 2025-11-29 08:26:42.437 233728 DEBUG nova.storage.rbd_utils [None req-9b0bcd78-c8bd-42fc-b53b-d01798216896 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] flattening images/f5aee037-dd13-47e6-81af-147c317d7457 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 29 03:26:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:26:42 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2199214613' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:26:42 np0005539552 nova_compute[233724]: 2025-11-29 08:26:42.848 233728 DEBUG oslo_concurrency.processutils [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:42 np0005539552 nova_compute[233724]: 2025-11-29 08:26:42.873 233728 DEBUG nova.storage.rbd_utils [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] rbd image d2f10f56-bc70-4ac8-953c-99479942f88d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:26:42 np0005539552 nova_compute[233724]: 2025-11-29 08:26:42.877 233728 DEBUG oslo_concurrency.processutils [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:42 np0005539552 nova_compute[233724]: 2025-11-29 08:26:42.918 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:26:42 np0005539552 nova_compute[233724]: 2025-11-29 08:26:42.922 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:26:42 np0005539552 nova_compute[233724]: 2025-11-29 08:26:42.923 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:26:42 np0005539552 nova_compute[233724]: 2025-11-29 08:26:42.923 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:26:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e334 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:26:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:43.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:26:43 np0005539552 nova_compute[233724]: 2025-11-29 08:26:43.113 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 03:26:43 np0005539552 nova_compute[233724]: 2025-11-29 08:26:43.114 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "refresh_cache-50daa6f5-6598-439f-a542-38e8ae7aded0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:26:43 np0005539552 nova_compute[233724]: 2025-11-29 08:26:43.115 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquired lock "refresh_cache-50daa6f5-6598-439f-a542-38e8ae7aded0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:26:43 np0005539552 nova_compute[233724]: 2025-11-29 08:26:43.115 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:26:43 np0005539552 nova_compute[233724]: 2025-11-29 08:26:43.115 233728 DEBUG nova.objects.instance [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 50daa6f5-6598-439f-a542-38e8ae7aded0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:26:43 np0005539552 nova_compute[233724]: 2025-11-29 08:26:43.193 233728 DEBUG nova.storage.rbd_utils [None req-9b0bcd78-c8bd-42fc-b53b-d01798216896 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] removing snapshot(dcd1759ac55d4b21b641cd090a8f619c) on rbd image(50daa6f5-6598-439f-a542-38e8ae7aded0_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:26:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:26:43 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2761593480' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:26:43 np0005539552 ovn_controller[133798]: 2025-11-29T08:26:43Z|00612|binding|INFO|Releasing lport d4f0104e-3913-4399-9086-37cf4d16e7c7 from this chassis (sb_readonly=0)
Nov 29 03:26:43 np0005539552 ovn_controller[133798]: 2025-11-29T08:26:43Z|00613|binding|INFO|Releasing lport 6f99f0ed-ee75-45c3-abe1-1afc889fd227 from this chassis (sb_readonly=0)
Nov 29 03:26:43 np0005539552 nova_compute[233724]: 2025-11-29 08:26:43.422 233728 DEBUG oslo_concurrency.processutils [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:43 np0005539552 nova_compute[233724]: 2025-11-29 08:26:43.423 233728 DEBUG nova.virt.libvirt.vif [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:26:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-864046135',display_name='tempest-ServerPasswordTestJSON-server-864046135',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-864046135',id=141,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='14c2056117984f1e86ae3f64a5fee48c',ramdisk_id='',reservation_id='r-yg68sbzt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-984232141',owner_user_name='tempest-ServerPasswordTest
JSON-984232141-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:26:37Z,user_data=None,user_id='ac7e861322894ae387d8f7062a73dddb',uuid=d2f10f56-bc70-4ac8-953c-99479942f88d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "82c2db07-7de0-4b4c-920a-4a2b41cf480a", "address": "fa:16:3e:ad:cb:50", "network": {"id": "8e5589bf-54e6-4aa1-8daf-200d8f338eb1", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-695845824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14c2056117984f1e86ae3f64a5fee48c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82c2db07-7d", "ovs_interfaceid": "82c2db07-7de0-4b4c-920a-4a2b41cf480a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:26:43 np0005539552 nova_compute[233724]: 2025-11-29 08:26:43.423 233728 DEBUG nova.network.os_vif_util [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Converting VIF {"id": "82c2db07-7de0-4b4c-920a-4a2b41cf480a", "address": "fa:16:3e:ad:cb:50", "network": {"id": "8e5589bf-54e6-4aa1-8daf-200d8f338eb1", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-695845824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14c2056117984f1e86ae3f64a5fee48c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82c2db07-7d", "ovs_interfaceid": "82c2db07-7de0-4b4c-920a-4a2b41cf480a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:26:43 np0005539552 nova_compute[233724]: 2025-11-29 08:26:43.424 233728 DEBUG nova.network.os_vif_util [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:cb:50,bridge_name='br-int',has_traffic_filtering=True,id=82c2db07-7de0-4b4c-920a-4a2b41cf480a,network=Network(8e5589bf-54e6-4aa1-8daf-200d8f338eb1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82c2db07-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:26:43 np0005539552 nova_compute[233724]: 2025-11-29 08:26:43.425 233728 DEBUG nova.objects.instance [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Lazy-loading 'pci_devices' on Instance uuid d2f10f56-bc70-4ac8-953c-99479942f88d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:26:43 np0005539552 nova_compute[233724]: 2025-11-29 08:26:43.454 233728 DEBUG nova.virt.libvirt.driver [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:26:43 np0005539552 nova_compute[233724]:  <uuid>d2f10f56-bc70-4ac8-953c-99479942f88d</uuid>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:  <name>instance-0000008d</name>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:26:43 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:      <nova:name>tempest-ServerPasswordTestJSON-server-864046135</nova:name>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:26:42</nova:creationTime>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:26:43 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:        <nova:user uuid="ac7e861322894ae387d8f7062a73dddb">tempest-ServerPasswordTestJSON-984232141-project-member</nova:user>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:        <nova:project uuid="14c2056117984f1e86ae3f64a5fee48c">tempest-ServerPasswordTestJSON-984232141</nova:project>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:        <nova:port uuid="82c2db07-7de0-4b4c-920a-4a2b41cf480a">
Nov 29 03:26:43 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:      <entry name="serial">d2f10f56-bc70-4ac8-953c-99479942f88d</entry>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:      <entry name="uuid">d2f10f56-bc70-4ac8-953c-99479942f88d</entry>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:26:43 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/d2f10f56-bc70-4ac8-953c-99479942f88d_disk">
Nov 29 03:26:43 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:26:43 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:26:43 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/d2f10f56-bc70-4ac8-953c-99479942f88d_disk.config">
Nov 29 03:26:43 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:26:43 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:26:43 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:ad:cb:50"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:      <target dev="tap82c2db07-7d"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:26:43 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/d2f10f56-bc70-4ac8-953c-99479942f88d/console.log" append="off"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:26:43 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:26:43 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:26:43 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:26:43 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:26:43 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:26:43 np0005539552 nova_compute[233724]: 2025-11-29 08:26:43.456 233728 DEBUG nova.compute.manager [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Preparing to wait for external event network-vif-plugged-82c2db07-7de0-4b4c-920a-4a2b41cf480a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:26:43 np0005539552 nova_compute[233724]: 2025-11-29 08:26:43.456 233728 DEBUG oslo_concurrency.lockutils [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Acquiring lock "d2f10f56-bc70-4ac8-953c-99479942f88d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:43 np0005539552 nova_compute[233724]: 2025-11-29 08:26:43.457 233728 DEBUG oslo_concurrency.lockutils [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Lock "d2f10f56-bc70-4ac8-953c-99479942f88d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:43 np0005539552 nova_compute[233724]: 2025-11-29 08:26:43.457 233728 DEBUG oslo_concurrency.lockutils [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Lock "d2f10f56-bc70-4ac8-953c-99479942f88d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:43 np0005539552 nova_compute[233724]: 2025-11-29 08:26:43.458 233728 DEBUG nova.virt.libvirt.vif [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:26:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-864046135',display_name='tempest-ServerPasswordTestJSON-server-864046135',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-864046135',id=141,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='14c2056117984f1e86ae3f64a5fee48c',ramdisk_id='',reservation_id='r-yg68sbzt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-984232141',owner_user_name='tempest-ServerPa
sswordTestJSON-984232141-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:26:37Z,user_data=None,user_id='ac7e861322894ae387d8f7062a73dddb',uuid=d2f10f56-bc70-4ac8-953c-99479942f88d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "82c2db07-7de0-4b4c-920a-4a2b41cf480a", "address": "fa:16:3e:ad:cb:50", "network": {"id": "8e5589bf-54e6-4aa1-8daf-200d8f338eb1", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-695845824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14c2056117984f1e86ae3f64a5fee48c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82c2db07-7d", "ovs_interfaceid": "82c2db07-7de0-4b4c-920a-4a2b41cf480a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:26:43 np0005539552 nova_compute[233724]: 2025-11-29 08:26:43.458 233728 DEBUG nova.network.os_vif_util [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Converting VIF {"id": "82c2db07-7de0-4b4c-920a-4a2b41cf480a", "address": "fa:16:3e:ad:cb:50", "network": {"id": "8e5589bf-54e6-4aa1-8daf-200d8f338eb1", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-695845824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14c2056117984f1e86ae3f64a5fee48c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82c2db07-7d", "ovs_interfaceid": "82c2db07-7de0-4b4c-920a-4a2b41cf480a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:26:43 np0005539552 nova_compute[233724]: 2025-11-29 08:26:43.459 233728 DEBUG nova.network.os_vif_util [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:cb:50,bridge_name='br-int',has_traffic_filtering=True,id=82c2db07-7de0-4b4c-920a-4a2b41cf480a,network=Network(8e5589bf-54e6-4aa1-8daf-200d8f338eb1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82c2db07-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:26:43 np0005539552 nova_compute[233724]: 2025-11-29 08:26:43.459 233728 DEBUG os_vif [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:cb:50,bridge_name='br-int',has_traffic_filtering=True,id=82c2db07-7de0-4b4c-920a-4a2b41cf480a,network=Network(8e5589bf-54e6-4aa1-8daf-200d8f338eb1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82c2db07-7d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:26:43 np0005539552 nova_compute[233724]: 2025-11-29 08:26:43.460 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:43 np0005539552 nova_compute[233724]: 2025-11-29 08:26:43.461 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:43 np0005539552 nova_compute[233724]: 2025-11-29 08:26:43.461 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:26:43 np0005539552 nova_compute[233724]: 2025-11-29 08:26:43.463 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:43 np0005539552 nova_compute[233724]: 2025-11-29 08:26:43.464 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap82c2db07-7d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:43 np0005539552 nova_compute[233724]: 2025-11-29 08:26:43.464 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap82c2db07-7d, col_values=(('external_ids', {'iface-id': '82c2db07-7de0-4b4c-920a-4a2b41cf480a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ad:cb:50', 'vm-uuid': 'd2f10f56-bc70-4ac8-953c-99479942f88d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:43 np0005539552 nova_compute[233724]: 2025-11-29 08:26:43.466 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:43 np0005539552 NetworkManager[48926]: <info>  [1764404803.4670] manager: (tap82c2db07-7d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/280)
Nov 29 03:26:43 np0005539552 nova_compute[233724]: 2025-11-29 08:26:43.471 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:26:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:43.473 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:43 np0005539552 nova_compute[233724]: 2025-11-29 08:26:43.490 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:43 np0005539552 nova_compute[233724]: 2025-11-29 08:26:43.493 233728 INFO os_vif [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:cb:50,bridge_name='br-int',has_traffic_filtering=True,id=82c2db07-7de0-4b4c-920a-4a2b41cf480a,network=Network(8e5589bf-54e6-4aa1-8daf-200d8f338eb1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82c2db07-7d')#033[00m
Nov 29 03:26:43 np0005539552 nova_compute[233724]: 2025-11-29 08:26:43.494 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:43 np0005539552 nova_compute[233724]: 2025-11-29 08:26:43.800 233728 DEBUG nova.virt.libvirt.driver [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:26:43 np0005539552 nova_compute[233724]: 2025-11-29 08:26:43.800 233728 DEBUG nova.virt.libvirt.driver [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:26:43 np0005539552 nova_compute[233724]: 2025-11-29 08:26:43.801 233728 DEBUG nova.virt.libvirt.driver [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] No VIF found with MAC fa:16:3e:ad:cb:50, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:26:43 np0005539552 nova_compute[233724]: 2025-11-29 08:26:43.802 233728 INFO nova.virt.libvirt.driver [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Using config drive#033[00m
Nov 29 03:26:43 np0005539552 nova_compute[233724]: 2025-11-29 08:26:43.832 233728 DEBUG nova.storage.rbd_utils [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] rbd image d2f10f56-bc70-4ac8-953c-99479942f88d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:26:43 np0005539552 nova_compute[233724]: 2025-11-29 08:26:43.973 233728 DEBUG nova.network.neutron [req-75820ceb-4db0-41b2-9909-3f152da2a6d1 req-becafabb-be50-43cf-99c2-76015de0c4e3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Updated VIF entry in instance network info cache for port 82c2db07-7de0-4b4c-920a-4a2b41cf480a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:26:43 np0005539552 nova_compute[233724]: 2025-11-29 08:26:43.974 233728 DEBUG nova.network.neutron [req-75820ceb-4db0-41b2-9909-3f152da2a6d1 req-becafabb-be50-43cf-99c2-76015de0c4e3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Updating instance_info_cache with network_info: [{"id": "82c2db07-7de0-4b4c-920a-4a2b41cf480a", "address": "fa:16:3e:ad:cb:50", "network": {"id": "8e5589bf-54e6-4aa1-8daf-200d8f338eb1", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-695845824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14c2056117984f1e86ae3f64a5fee48c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82c2db07-7d", "ovs_interfaceid": "82c2db07-7de0-4b4c-920a-4a2b41cf480a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:26:44 np0005539552 nova_compute[233724]: 2025-11-29 08:26:44.000 233728 DEBUG oslo_concurrency.lockutils [req-75820ceb-4db0-41b2-9909-3f152da2a6d1 req-becafabb-be50-43cf-99c2-76015de0c4e3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-d2f10f56-bc70-4ac8-953c-99479942f88d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:26:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:44.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:44 np0005539552 nova_compute[233724]: 2025-11-29 08:26:44.243 233728 DEBUG nova.virt.libvirt.driver [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 29 03:26:44 np0005539552 nova_compute[233724]: 2025-11-29 08:26:44.414 233728 INFO nova.virt.libvirt.driver [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Creating config drive at /var/lib/nova/instances/d2f10f56-bc70-4ac8-953c-99479942f88d/disk.config#033[00m
Nov 29 03:26:44 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e335 e335: 3 total, 3 up, 3 in
Nov 29 03:26:44 np0005539552 nova_compute[233724]: 2025-11-29 08:26:44.418 233728 DEBUG oslo_concurrency.processutils [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d2f10f56-bc70-4ac8-953c-99479942f88d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2exjpxz3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:44 np0005539552 nova_compute[233724]: 2025-11-29 08:26:44.456 233728 DEBUG nova.storage.rbd_utils [None req-9b0bcd78-c8bd-42fc-b53b-d01798216896 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] creating snapshot(snap) on rbd image(f5aee037-dd13-47e6-81af-147c317d7457) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:26:44 np0005539552 nova_compute[233724]: 2025-11-29 08:26:44.553 233728 DEBUG oslo_concurrency.processutils [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d2f10f56-bc70-4ac8-953c-99479942f88d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2exjpxz3" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:44 np0005539552 nova_compute[233724]: 2025-11-29 08:26:44.580 233728 DEBUG nova.storage.rbd_utils [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] rbd image d2f10f56-bc70-4ac8-953c-99479942f88d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:26:44 np0005539552 nova_compute[233724]: 2025-11-29 08:26:44.583 233728 DEBUG oslo_concurrency.processutils [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d2f10f56-bc70-4ac8-953c-99479942f88d/disk.config d2f10f56-bc70-4ac8-953c-99479942f88d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:44 np0005539552 nova_compute[233724]: 2025-11-29 08:26:44.735 233728 DEBUG oslo_concurrency.processutils [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d2f10f56-bc70-4ac8-953c-99479942f88d/disk.config d2f10f56-bc70-4ac8-953c-99479942f88d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:44 np0005539552 nova_compute[233724]: 2025-11-29 08:26:44.736 233728 INFO nova.virt.libvirt.driver [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Deleting local config drive /var/lib/nova/instances/d2f10f56-bc70-4ac8-953c-99479942f88d/disk.config because it was imported into RBD.#033[00m
Nov 29 03:26:44 np0005539552 NetworkManager[48926]: <info>  [1764404804.7831] manager: (tap82c2db07-7d): new Tun device (/org/freedesktop/NetworkManager/Devices/281)
Nov 29 03:26:44 np0005539552 kernel: tap82c2db07-7d: entered promiscuous mode
Nov 29 03:26:44 np0005539552 ovn_controller[133798]: 2025-11-29T08:26:44Z|00614|binding|INFO|Claiming lport 82c2db07-7de0-4b4c-920a-4a2b41cf480a for this chassis.
Nov 29 03:26:44 np0005539552 ovn_controller[133798]: 2025-11-29T08:26:44Z|00615|binding|INFO|82c2db07-7de0-4b4c-920a-4a2b41cf480a: Claiming fa:16:3e:ad:cb:50 10.100.0.13
Nov 29 03:26:44 np0005539552 nova_compute[233724]: 2025-11-29 08:26:44.785 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:44 np0005539552 ovn_controller[133798]: 2025-11-29T08:26:44Z|00616|binding|INFO|Setting lport 82c2db07-7de0-4b4c-920a-4a2b41cf480a ovn-installed in OVS
Nov 29 03:26:44 np0005539552 nova_compute[233724]: 2025-11-29 08:26:44.805 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:44 np0005539552 systemd-udevd[292140]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:26:44 np0005539552 systemd-machined[196379]: New machine qemu-62-instance-0000008d.
Nov 29 03:26:44 np0005539552 nova_compute[233724]: 2025-11-29 08:26:44.809 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:44 np0005539552 NetworkManager[48926]: <info>  [1764404804.8192] device (tap82c2db07-7d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:26:44 np0005539552 NetworkManager[48926]: <info>  [1764404804.8205] device (tap82c2db07-7d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:26:44 np0005539552 systemd[1]: Started Virtual Machine qemu-62-instance-0000008d.
Nov 29 03:26:44 np0005539552 nova_compute[233724]: 2025-11-29 08:26:44.845 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Updating instance_info_cache with network_info: [{"id": "91d5abbf-fd67-487f-bfaa-448b1daa5272", "address": "fa:16:3e:71:14:96", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91d5abbf-fd", "ovs_interfaceid": "91d5abbf-fd67-487f-bfaa-448b1daa5272", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:26:44 np0005539552 ovn_controller[133798]: 2025-11-29T08:26:44Z|00617|binding|INFO|Setting lport 82c2db07-7de0-4b4c-920a-4a2b41cf480a up in Southbound
Nov 29 03:26:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:44.866 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:cb:50 10.100.0.13'], port_security=['fa:16:3e:ad:cb:50 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd2f10f56-bc70-4ac8-953c-99479942f88d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e5589bf-54e6-4aa1-8daf-200d8f338eb1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '14c2056117984f1e86ae3f64a5fee48c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ecddd71e-3e5b-4520-9036-a04a23fee489', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=19148241-af59-40d5-827c-1361c53a843d, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=82c2db07-7de0-4b4c-920a-4a2b41cf480a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:26:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:44.867 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 82c2db07-7de0-4b4c-920a-4a2b41cf480a in datapath 8e5589bf-54e6-4aa1-8daf-200d8f338eb1 bound to our chassis#033[00m
Nov 29 03:26:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:44.868 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8e5589bf-54e6-4aa1-8daf-200d8f338eb1#033[00m
Nov 29 03:26:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:44.880 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[afdce4c1-9174-46ac-b844-1edb90a2502d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:44.881 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8e5589bf-51 in ovnmeta-8e5589bf-54e6-4aa1-8daf-200d8f338eb1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:26:44 np0005539552 nova_compute[233724]: 2025-11-29 08:26:44.881 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Releasing lock "refresh_cache-50daa6f5-6598-439f-a542-38e8ae7aded0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:26:44 np0005539552 nova_compute[233724]: 2025-11-29 08:26:44.881 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:26:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:44.883 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8e5589bf-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:26:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:44.883 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a5450d63-20f3-47f2-b31d-d5c43e261818]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:44.884 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d66775b7-8239-429c-a32c-9fe65767364c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:44.904 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[0bc555a8-31fa-450c-b01c-a16f743b36c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:44.919 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4471418d-7660-4006-b785-1f59a9ebbdc0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:44.948 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[8ff464b0-cccc-4951-84b8-d47ab0d7711f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:44 np0005539552 NetworkManager[48926]: <info>  [1764404804.9614] manager: (tap8e5589bf-50): new Veth device (/org/freedesktop/NetworkManager/Devices/282)
Nov 29 03:26:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:44.963 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[8266aa77-3097-42f9-8c89-e66356941bec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:44 np0005539552 systemd-udevd[292142]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:26:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:44.998 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[d1aa8141-9e75-4069-92fe-76b40206ec07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:45.001 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[6a838a1a-4606-4636-ad09-1b27aba821fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:45.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:45 np0005539552 NetworkManager[48926]: <info>  [1764404805.0251] device (tap8e5589bf-50): carrier: link connected
Nov 29 03:26:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:45.031 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[e6df0367-7ab5-414c-8c3a-73ddbe061f95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:45.047 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[65bdf298-4bec-4fb5-8160-e4e5116b2c0b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e5589bf-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:8d:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 185], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 785071, 'reachable_time': 44044, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292177, 'error': None, 'target': 'ovnmeta-8e5589bf-54e6-4aa1-8daf-200d8f338eb1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:45.063 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[110414a3-1f49-49a5-ba26-03268162de9b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef1:8d09'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 785071, 'tstamp': 785071}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292178, 'error': None, 'target': 'ovnmeta-8e5589bf-54e6-4aa1-8daf-200d8f338eb1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:45.088 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[8ba2ff82-7384-4b95-9621-1e7771508087]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e5589bf-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:8d:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 185], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 785071, 'reachable_time': 44044, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 292179, 'error': None, 'target': 'ovnmeta-8e5589bf-54e6-4aa1-8daf-200d8f338eb1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:45.117 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[bc3bf81b-7deb-4bc3-8abf-19d76711e1e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:45 np0005539552 nova_compute[233724]: 2025-11-29 08:26:45.143 233728 DEBUG nova.compute.manager [req-8abbc65e-dda5-4407-9aaf-6271ac36a287 req-64ec0576-3608-441c-b99b-ce75f63a47be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Received event network-vif-plugged-82c2db07-7de0-4b4c-920a-4a2b41cf480a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:45 np0005539552 nova_compute[233724]: 2025-11-29 08:26:45.144 233728 DEBUG oslo_concurrency.lockutils [req-8abbc65e-dda5-4407-9aaf-6271ac36a287 req-64ec0576-3608-441c-b99b-ce75f63a47be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d2f10f56-bc70-4ac8-953c-99479942f88d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:45 np0005539552 nova_compute[233724]: 2025-11-29 08:26:45.144 233728 DEBUG oslo_concurrency.lockutils [req-8abbc65e-dda5-4407-9aaf-6271ac36a287 req-64ec0576-3608-441c-b99b-ce75f63a47be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d2f10f56-bc70-4ac8-953c-99479942f88d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:45 np0005539552 nova_compute[233724]: 2025-11-29 08:26:45.144 233728 DEBUG oslo_concurrency.lockutils [req-8abbc65e-dda5-4407-9aaf-6271ac36a287 req-64ec0576-3608-441c-b99b-ce75f63a47be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d2f10f56-bc70-4ac8-953c-99479942f88d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:45 np0005539552 nova_compute[233724]: 2025-11-29 08:26:45.144 233728 DEBUG nova.compute.manager [req-8abbc65e-dda5-4407-9aaf-6271ac36a287 req-64ec0576-3608-441c-b99b-ce75f63a47be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Processing event network-vif-plugged-82c2db07-7de0-4b4c-920a-4a2b41cf480a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:26:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:45.178 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c684e1b3-ad9a-4323-8a1d-b44865bfd8e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:45.180 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e5589bf-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:45.180 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:26:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:45.180 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8e5589bf-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:45 np0005539552 nova_compute[233724]: 2025-11-29 08:26:45.182 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:45 np0005539552 NetworkManager[48926]: <info>  [1764404805.1831] manager: (tap8e5589bf-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/283)
Nov 29 03:26:45 np0005539552 kernel: tap8e5589bf-50: entered promiscuous mode
Nov 29 03:26:45 np0005539552 nova_compute[233724]: 2025-11-29 08:26:45.185 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:45.191 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8e5589bf-50, col_values=(('external_ids', {'iface-id': 'a3e3e756-6155-4eb7-a0d7-0fd7b56e7cdb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:45 np0005539552 nova_compute[233724]: 2025-11-29 08:26:45.192 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:45 np0005539552 ovn_controller[133798]: 2025-11-29T08:26:45Z|00618|binding|INFO|Releasing lport a3e3e756-6155-4eb7-a0d7-0fd7b56e7cdb from this chassis (sb_readonly=0)
Nov 29 03:26:45 np0005539552 nova_compute[233724]: 2025-11-29 08:26:45.193 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:45 np0005539552 nova_compute[233724]: 2025-11-29 08:26:45.206 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:45.207 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8e5589bf-54e6-4aa1-8daf-200d8f338eb1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8e5589bf-54e6-4aa1-8daf-200d8f338eb1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:26:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:45.208 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[79f29424-75ee-4ecb-89ad-0db975d7f156]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:45.209 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:26:45 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:26:45 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:26:45 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-8e5589bf-54e6-4aa1-8daf-200d8f338eb1
Nov 29 03:26:45 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:26:45 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:26:45 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:26:45 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/8e5589bf-54e6-4aa1-8daf-200d8f338eb1.pid.haproxy
Nov 29 03:26:45 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:26:45 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:26:45 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:26:45 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:26:45 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:26:45 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:26:45 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:26:45 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:26:45 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:26:45 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:26:45 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:26:45 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:26:45 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:26:45 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:26:45 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:26:45 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:26:45 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:26:45 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:26:45 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:26:45 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:26:45 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 8e5589bf-54e6-4aa1-8daf-200d8f338eb1
Nov 29 03:26:45 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:26:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:45.209 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8e5589bf-54e6-4aa1-8daf-200d8f338eb1', 'env', 'PROCESS_TAG=haproxy-8e5589bf-54e6-4aa1-8daf-200d8f338eb1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8e5589bf-54e6-4aa1-8daf-200d8f338eb1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:26:45 np0005539552 nova_compute[233724]: 2025-11-29 08:26:45.519 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404805.518886, d2f10f56-bc70-4ac8-953c-99479942f88d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:26:45 np0005539552 nova_compute[233724]: 2025-11-29 08:26:45.520 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] VM Started (Lifecycle Event)#033[00m
Nov 29 03:26:45 np0005539552 nova_compute[233724]: 2025-11-29 08:26:45.522 233728 DEBUG nova.compute.manager [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:26:45 np0005539552 nova_compute[233724]: 2025-11-29 08:26:45.525 233728 DEBUG nova.virt.libvirt.driver [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:26:45 np0005539552 nova_compute[233724]: 2025-11-29 08:26:45.528 233728 INFO nova.virt.libvirt.driver [-] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Instance spawned successfully.#033[00m
Nov 29 03:26:45 np0005539552 nova_compute[233724]: 2025-11-29 08:26:45.529 233728 DEBUG nova.virt.libvirt.driver [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:26:45 np0005539552 podman[292252]: 2025-11-29 08:26:45.548715287 +0000 UTC m=+0.034050098 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:26:45 np0005539552 nova_compute[233724]: 2025-11-29 08:26:45.652 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:26:45 np0005539552 nova_compute[233724]: 2025-11-29 08:26:45.659 233728 DEBUG nova.virt.libvirt.driver [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:26:45 np0005539552 nova_compute[233724]: 2025-11-29 08:26:45.660 233728 DEBUG nova.virt.libvirt.driver [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:26:45 np0005539552 nova_compute[233724]: 2025-11-29 08:26:45.661 233728 DEBUG nova.virt.libvirt.driver [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:26:45 np0005539552 nova_compute[233724]: 2025-11-29 08:26:45.662 233728 DEBUG nova.virt.libvirt.driver [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:26:45 np0005539552 nova_compute[233724]: 2025-11-29 08:26:45.663 233728 DEBUG nova.virt.libvirt.driver [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:26:45 np0005539552 nova_compute[233724]: 2025-11-29 08:26:45.664 233728 DEBUG nova.virt.libvirt.driver [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:26:45 np0005539552 nova_compute[233724]: 2025-11-29 08:26:45.671 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:26:45 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e336 e336: 3 total, 3 up, 3 in
Nov 29 03:26:45 np0005539552 podman[292252]: 2025-11-29 08:26:45.76479403 +0000 UTC m=+0.250128851 container create 09d463fe03547231e64f49aa67fa5c6d0723b22445d5c1ba607943d098df24b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e5589bf-54e6-4aa1-8daf-200d8f338eb1, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 03:26:45 np0005539552 nova_compute[233724]: 2025-11-29 08:26:45.780 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:26:45 np0005539552 nova_compute[233724]: 2025-11-29 08:26:45.782 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404805.5193202, d2f10f56-bc70-4ac8-953c-99479942f88d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:26:45 np0005539552 nova_compute[233724]: 2025-11-29 08:26:45.782 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:26:45 np0005539552 systemd[1]: Started libpod-conmon-09d463fe03547231e64f49aa67fa5c6d0723b22445d5c1ba607943d098df24b5.scope.
Nov 29 03:26:45 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:26:45 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f55eaf3bfa757bece0d3cd04f3b655ace4acd94a96196f6d870022fa4d8ed3e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:26:45 np0005539552 podman[292252]: 2025-11-29 08:26:45.942771459 +0000 UTC m=+0.428106270 container init 09d463fe03547231e64f49aa67fa5c6d0723b22445d5c1ba607943d098df24b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e5589bf-54e6-4aa1-8daf-200d8f338eb1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:26:45 np0005539552 nova_compute[233724]: 2025-11-29 08:26:45.945 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:26:45 np0005539552 podman[292252]: 2025-11-29 08:26:45.948055251 +0000 UTC m=+0.433390032 container start 09d463fe03547231e64f49aa67fa5c6d0723b22445d5c1ba607943d098df24b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e5589bf-54e6-4aa1-8daf-200d8f338eb1, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 29 03:26:45 np0005539552 nova_compute[233724]: 2025-11-29 08:26:45.952 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404805.5250335, d2f10f56-bc70-4ac8-953c-99479942f88d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:26:45 np0005539552 nova_compute[233724]: 2025-11-29 08:26:45.953 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:26:45 np0005539552 nova_compute[233724]: 2025-11-29 08:26:45.959 233728 INFO nova.compute.manager [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Took 8.00 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:26:45 np0005539552 nova_compute[233724]: 2025-11-29 08:26:45.960 233728 DEBUG nova.compute.manager [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:26:45 np0005539552 neutron-haproxy-ovnmeta-8e5589bf-54e6-4aa1-8daf-200d8f338eb1[292266]: [NOTICE]   (292270) : New worker (292272) forked
Nov 29 03:26:45 np0005539552 neutron-haproxy-ovnmeta-8e5589bf-54e6-4aa1-8daf-200d8f338eb1[292266]: [NOTICE]   (292270) : Loading success.
Nov 29 03:26:46 np0005539552 nova_compute[233724]: 2025-11-29 08:26:46.013 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:26:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:46.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:46 np0005539552 nova_compute[233724]: 2025-11-29 08:26:46.018 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:26:46 np0005539552 nova_compute[233724]: 2025-11-29 08:26:46.056 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:26:46 np0005539552 nova_compute[233724]: 2025-11-29 08:26:46.090 233728 INFO nova.compute.manager [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Took 9.06 seconds to build instance.#033[00m
Nov 29 03:26:46 np0005539552 nova_compute[233724]: 2025-11-29 08:26:46.139 233728 DEBUG oslo_concurrency.lockutils [None req-2f7d811e-7334-4619-b4b3-48ca7f5de654 ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Lock "d2f10f56-bc70-4ac8-953c-99479942f88d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.178s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:46 np0005539552 kernel: tap0e2e8c1e-12 (unregistering): left promiscuous mode
Nov 29 03:26:46 np0005539552 NetworkManager[48926]: <info>  [1764404806.5428] device (tap0e2e8c1e-12): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:26:46 np0005539552 ovn_controller[133798]: 2025-11-29T08:26:46Z|00619|binding|INFO|Releasing lport 0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d from this chassis (sb_readonly=0)
Nov 29 03:26:46 np0005539552 nova_compute[233724]: 2025-11-29 08:26:46.551 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:46 np0005539552 ovn_controller[133798]: 2025-11-29T08:26:46Z|00620|binding|INFO|Setting lport 0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d down in Southbound
Nov 29 03:26:46 np0005539552 ovn_controller[133798]: 2025-11-29T08:26:46Z|00621|binding|INFO|Removing iface tap0e2e8c1e-12 ovn-installed in OVS
Nov 29 03:26:46 np0005539552 nova_compute[233724]: 2025-11-29 08:26:46.558 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:46 np0005539552 nova_compute[233724]: 2025-11-29 08:26:46.575 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:46.593 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:72:4b:49 10.100.0.13'], port_security=['fa:16:3e:72:4b:49 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'bb261893-bfa1-4fdc-9c11-a33a733337ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea7b24ea9d7b4d239b4741634ac3f10c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '525789b3-2118-4a66-bac0-ed0947cafa2d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2992474f-d5bf-4893-b4cc-2c774a8a9871, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:26:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:46.595 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d in datapath 4ca67fce-6116-4a0b-b0a9-c25b5adaad19 unbound from our chassis#033[00m
Nov 29 03:26:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:46.597 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4ca67fce-6116-4a0b-b0a9-c25b5adaad19, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:26:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:46.602 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0162a76b-3502-4451-8be9-dee0cafe52a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:46 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:46.603 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19 namespace which is not needed anymore#033[00m
Nov 29 03:26:46 np0005539552 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d0000008c.scope: Deactivated successfully.
Nov 29 03:26:46 np0005539552 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d0000008c.scope: Consumed 14.465s CPU time.
Nov 29 03:26:46 np0005539552 systemd-machined[196379]: Machine qemu-61-instance-0000008c terminated.
Nov 29 03:26:46 np0005539552 podman[292286]: 2025-11-29 08:26:46.635690143 +0000 UTC m=+0.066687906 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, 
org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:26:46 np0005539552 podman[292281]: 2025-11-29 08:26:46.641669724 +0000 UTC m=+0.071992418 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125)
Nov 29 03:26:46 np0005539552 podman[292287]: 2025-11-29 08:26:46.671004833 +0000 UTC m=+0.096994911 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:26:46 np0005539552 neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19[291505]: [NOTICE]   (291509) : haproxy version is 2.8.14-c23fe91
Nov 29 03:26:46 np0005539552 neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19[291505]: [NOTICE]   (291509) : path to executable is /usr/sbin/haproxy
Nov 29 03:26:46 np0005539552 neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19[291505]: [WARNING]  (291509) : Exiting Master process...
Nov 29 03:26:46 np0005539552 neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19[291505]: [ALERT]    (291509) : Current worker (291511) exited with code 143 (Terminated)
Nov 29 03:26:46 np0005539552 neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19[291505]: [WARNING]  (291509) : All workers exited. Exiting... (0)
Nov 29 03:26:46 np0005539552 systemd[1]: libpod-b2ac629770a22bfec7ecd23331cf03a39e457751f97ac74c57d543262d4fa247.scope: Deactivated successfully.
Nov 29 03:26:46 np0005539552 podman[292362]: 2025-11-29 08:26:46.781118146 +0000 UTC m=+0.076048778 container died b2ac629770a22bfec7ecd23331cf03a39e457751f97ac74c57d543262d4fa247 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 03:26:46 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b2ac629770a22bfec7ecd23331cf03a39e457751f97ac74c57d543262d4fa247-userdata-shm.mount: Deactivated successfully.
Nov 29 03:26:46 np0005539552 systemd[1]: var-lib-containers-storage-overlay-6416ace80f0ed23d2777ce894f83179c7928266e3cdaf24b66bd57a7752ab3af-merged.mount: Deactivated successfully.
Nov 29 03:26:46 np0005539552 podman[292362]: 2025-11-29 08:26:46.999580923 +0000 UTC m=+0.294511555 container cleanup b2ac629770a22bfec7ecd23331cf03a39e457751f97ac74c57d543262d4fa247 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 03:26:47 np0005539552 systemd[1]: libpod-conmon-b2ac629770a22bfec7ecd23331cf03a39e457751f97ac74c57d543262d4fa247.scope: Deactivated successfully.
Nov 29 03:26:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:47.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:47 np0005539552 podman[292402]: 2025-11-29 08:26:47.166529584 +0000 UTC m=+0.142961897 container remove b2ac629770a22bfec7ecd23331cf03a39e457751f97ac74c57d543262d4fa247 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:26:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:47.177 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[08428f28-af13-4484-bd61-6c6f8528d569]: (4, ('Sat Nov 29 08:26:46 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19 (b2ac629770a22bfec7ecd23331cf03a39e457751f97ac74c57d543262d4fa247)\nb2ac629770a22bfec7ecd23331cf03a39e457751f97ac74c57d543262d4fa247\nSat Nov 29 08:26:47 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19 (b2ac629770a22bfec7ecd23331cf03a39e457751f97ac74c57d543262d4fa247)\nb2ac629770a22bfec7ecd23331cf03a39e457751f97ac74c57d543262d4fa247\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:47.181 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[258de8c6-f2c0-4f57-810a-e42fe97da3e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:47.183 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ca67fce-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:47 np0005539552 kernel: tap4ca67fce-60: left promiscuous mode
Nov 29 03:26:47 np0005539552 nova_compute[233724]: 2025-11-29 08:26:47.186 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:47 np0005539552 nova_compute[233724]: 2025-11-29 08:26:47.210 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:47.212 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a2b6c320-d450-4a95-88c9-54c16f7a2cd3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:47.226 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a9a57e2e-8867-4ac5-be8c-5bdcdcb5f764]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:47.229 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e1d4938c-88d3-41f9-8521-9dcd54db19c5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:47.245 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e2aeba33-5dfa-45f9-bf39-b4e8f3811921]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 783242, 'reachable_time': 33793, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292420, 'error': None, 'target': 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:47 np0005539552 systemd[1]: run-netns-ovnmeta\x2d4ca67fce\x2d6116\x2d4a0b\x2db0a9\x2dc25b5adaad19.mount: Deactivated successfully.
Nov 29 03:26:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:47.248 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:26:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:47.249 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[7c707472-5d7c-457c-ad75-0696e2271ccb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:47 np0005539552 nova_compute[233724]: 2025-11-29 08:26:47.258 233728 INFO nova.virt.libvirt.driver [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Instance shutdown successfully after 13 seconds.#033[00m
Nov 29 03:26:47 np0005539552 nova_compute[233724]: 2025-11-29 08:26:47.265 233728 INFO nova.virt.libvirt.driver [-] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Instance destroyed successfully.#033[00m
Nov 29 03:26:47 np0005539552 nova_compute[233724]: 2025-11-29 08:26:47.265 233728 DEBUG nova.objects.instance [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lazy-loading 'numa_topology' on Instance uuid bb261893-bfa1-4fdc-9c11-a33a733337ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:26:47 np0005539552 nova_compute[233724]: 2025-11-29 08:26:47.329 233728 INFO nova.virt.libvirt.driver [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Attempting rescue#033[00m
Nov 29 03:26:47 np0005539552 nova_compute[233724]: 2025-11-29 08:26:47.331 233728 DEBUG nova.virt.libvirt.driver [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Nov 29 03:26:47 np0005539552 nova_compute[233724]: 2025-11-29 08:26:47.336 233728 DEBUG nova.virt.libvirt.driver [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 29 03:26:47 np0005539552 nova_compute[233724]: 2025-11-29 08:26:47.337 233728 INFO nova.virt.libvirt.driver [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Creating image(s)#033[00m
Nov 29 03:26:47 np0005539552 nova_compute[233724]: 2025-11-29 08:26:47.363 233728 DEBUG nova.storage.rbd_utils [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] rbd image bb261893-bfa1-4fdc-9c11-a33a733337ce_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:26:47 np0005539552 nova_compute[233724]: 2025-11-29 08:26:47.367 233728 DEBUG nova.objects.instance [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lazy-loading 'trusted_certs' on Instance uuid bb261893-bfa1-4fdc-9c11-a33a733337ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:26:47 np0005539552 nova_compute[233724]: 2025-11-29 08:26:47.398 233728 DEBUG nova.compute.manager [req-190fef35-cdbd-4761-8484-a6e35c9da4ca req-a606c388-93dd-4337-8c01-92540f4e2d31 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Received event network-vif-plugged-82c2db07-7de0-4b4c-920a-4a2b41cf480a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:47 np0005539552 nova_compute[233724]: 2025-11-29 08:26:47.399 233728 DEBUG oslo_concurrency.lockutils [req-190fef35-cdbd-4761-8484-a6e35c9da4ca req-a606c388-93dd-4337-8c01-92540f4e2d31 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d2f10f56-bc70-4ac8-953c-99479942f88d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:47 np0005539552 nova_compute[233724]: 2025-11-29 08:26:47.399 233728 DEBUG oslo_concurrency.lockutils [req-190fef35-cdbd-4761-8484-a6e35c9da4ca req-a606c388-93dd-4337-8c01-92540f4e2d31 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d2f10f56-bc70-4ac8-953c-99479942f88d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:47 np0005539552 nova_compute[233724]: 2025-11-29 08:26:47.399 233728 DEBUG oslo_concurrency.lockutils [req-190fef35-cdbd-4761-8484-a6e35c9da4ca req-a606c388-93dd-4337-8c01-92540f4e2d31 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d2f10f56-bc70-4ac8-953c-99479942f88d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:47 np0005539552 nova_compute[233724]: 2025-11-29 08:26:47.399 233728 DEBUG nova.compute.manager [req-190fef35-cdbd-4761-8484-a6e35c9da4ca req-a606c388-93dd-4337-8c01-92540f4e2d31 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] No waiting events found dispatching network-vif-plugged-82c2db07-7de0-4b4c-920a-4a2b41cf480a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:26:47 np0005539552 nova_compute[233724]: 2025-11-29 08:26:47.400 233728 WARNING nova.compute.manager [req-190fef35-cdbd-4761-8484-a6e35c9da4ca req-a606c388-93dd-4337-8c01-92540f4e2d31 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Received unexpected event network-vif-plugged-82c2db07-7de0-4b4c-920a-4a2b41cf480a for instance with vm_state active and task_state None.#033[00m
Nov 29 03:26:47 np0005539552 nova_compute[233724]: 2025-11-29 08:26:47.400 233728 DEBUG nova.compute.manager [req-190fef35-cdbd-4761-8484-a6e35c9da4ca req-a606c388-93dd-4337-8c01-92540f4e2d31 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Received event network-vif-unplugged-0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:47 np0005539552 nova_compute[233724]: 2025-11-29 08:26:47.400 233728 DEBUG oslo_concurrency.lockutils [req-190fef35-cdbd-4761-8484-a6e35c9da4ca req-a606c388-93dd-4337-8c01-92540f4e2d31 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "bb261893-bfa1-4fdc-9c11-a33a733337ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:47 np0005539552 nova_compute[233724]: 2025-11-29 08:26:47.401 233728 DEBUG oslo_concurrency.lockutils [req-190fef35-cdbd-4761-8484-a6e35c9da4ca req-a606c388-93dd-4337-8c01-92540f4e2d31 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bb261893-bfa1-4fdc-9c11-a33a733337ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:47 np0005539552 nova_compute[233724]: 2025-11-29 08:26:47.401 233728 DEBUG oslo_concurrency.lockutils [req-190fef35-cdbd-4761-8484-a6e35c9da4ca req-a606c388-93dd-4337-8c01-92540f4e2d31 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bb261893-bfa1-4fdc-9c11-a33a733337ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:47 np0005539552 nova_compute[233724]: 2025-11-29 08:26:47.401 233728 DEBUG nova.compute.manager [req-190fef35-cdbd-4761-8484-a6e35c9da4ca req-a606c388-93dd-4337-8c01-92540f4e2d31 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] No waiting events found dispatching network-vif-unplugged-0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:26:47 np0005539552 nova_compute[233724]: 2025-11-29 08:26:47.401 233728 WARNING nova.compute.manager [req-190fef35-cdbd-4761-8484-a6e35c9da4ca req-a606c388-93dd-4337-8c01-92540f4e2d31 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Received unexpected event network-vif-unplugged-0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:26:47 np0005539552 nova_compute[233724]: 2025-11-29 08:26:47.433 233728 DEBUG nova.storage.rbd_utils [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] rbd image bb261893-bfa1-4fdc-9c11-a33a733337ce_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:26:47 np0005539552 nova_compute[233724]: 2025-11-29 08:26:47.465 233728 DEBUG nova.storage.rbd_utils [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] rbd image bb261893-bfa1-4fdc-9c11-a33a733337ce_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:26:47 np0005539552 nova_compute[233724]: 2025-11-29 08:26:47.470 233728 DEBUG oslo_concurrency.processutils [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:47 np0005539552 nova_compute[233724]: 2025-11-29 08:26:47.513 233728 INFO nova.virt.libvirt.driver [None req-9b0bcd78-c8bd-42fc-b53b-d01798216896 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Snapshot image upload complete#033[00m
Nov 29 03:26:47 np0005539552 nova_compute[233724]: 2025-11-29 08:26:47.514 233728 INFO nova.compute.manager [None req-9b0bcd78-c8bd-42fc-b53b-d01798216896 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Took 6.75 seconds to snapshot the instance on the hypervisor.#033[00m
Nov 29 03:26:47 np0005539552 nova_compute[233724]: 2025-11-29 08:26:47.550 233728 DEBUG oslo_concurrency.processutils [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:47 np0005539552 nova_compute[233724]: 2025-11-29 08:26:47.551 233728 DEBUG oslo_concurrency.lockutils [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:47 np0005539552 nova_compute[233724]: 2025-11-29 08:26:47.551 233728 DEBUG oslo_concurrency.lockutils [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:47 np0005539552 nova_compute[233724]: 2025-11-29 08:26:47.552 233728 DEBUG oslo_concurrency.lockutils [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:47 np0005539552 nova_compute[233724]: 2025-11-29 08:26:47.581 233728 DEBUG nova.storage.rbd_utils [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] rbd image bb261893-bfa1-4fdc-9c11-a33a733337ce_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:26:47 np0005539552 nova_compute[233724]: 2025-11-29 08:26:47.585 233728 DEBUG oslo_concurrency.processutils [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 bb261893-bfa1-4fdc-9c11-a33a733337ce_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:48.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:48 np0005539552 nova_compute[233724]: 2025-11-29 08:26:48.206 233728 DEBUG oslo_concurrency.processutils [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 bb261893-bfa1-4fdc-9c11-a33a733337ce_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:48 np0005539552 nova_compute[233724]: 2025-11-29 08:26:48.208 233728 DEBUG nova.objects.instance [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lazy-loading 'migration_context' on Instance uuid bb261893-bfa1-4fdc-9c11-a33a733337ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:26:48 np0005539552 nova_compute[233724]: 2025-11-29 08:26:48.467 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:48 np0005539552 nova_compute[233724]: 2025-11-29 08:26:48.549 233728 DEBUG nova.virt.libvirt.driver [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:26:48 np0005539552 nova_compute[233724]: 2025-11-29 08:26:48.551 233728 DEBUG nova.virt.libvirt.driver [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Start _get_guest_xml network_info=[{"id": "0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d", "address": "fa:16:3e:72:4b:49", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "vif_mac": "fa:16:3e:72:4b:49"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e2e8c1e-12", "ovs_interfaceid": "0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '4873db8c-b414-4e95-acd9-77caabebe722', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:26:48 np0005539552 nova_compute[233724]: 2025-11-29 08:26:48.551 233728 DEBUG nova.objects.instance [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lazy-loading 'resources' on Instance uuid bb261893-bfa1-4fdc-9c11-a33a733337ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:26:48 np0005539552 nova_compute[233724]: 2025-11-29 08:26:48.602 233728 WARNING nova.virt.libvirt.driver [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:26:48 np0005539552 nova_compute[233724]: 2025-11-29 08:26:48.609 233728 DEBUG nova.virt.libvirt.host [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:26:48 np0005539552 nova_compute[233724]: 2025-11-29 08:26:48.610 233728 DEBUG nova.virt.libvirt.host [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:26:48 np0005539552 nova_compute[233724]: 2025-11-29 08:26:48.613 233728 DEBUG nova.virt.libvirt.host [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:26:48 np0005539552 nova_compute[233724]: 2025-11-29 08:26:48.614 233728 DEBUG nova.virt.libvirt.host [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:26:48 np0005539552 nova_compute[233724]: 2025-11-29 08:26:48.616 233728 DEBUG nova.virt.libvirt.driver [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:26:48 np0005539552 nova_compute[233724]: 2025-11-29 08:26:48.617 233728 DEBUG nova.virt.hardware [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:26:48 np0005539552 nova_compute[233724]: 2025-11-29 08:26:48.618 233728 DEBUG nova.virt.hardware [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:26:48 np0005539552 nova_compute[233724]: 2025-11-29 08:26:48.618 233728 DEBUG nova.virt.hardware [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:26:48 np0005539552 nova_compute[233724]: 2025-11-29 08:26:48.618 233728 DEBUG nova.virt.hardware [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:26:48 np0005539552 nova_compute[233724]: 2025-11-29 08:26:48.619 233728 DEBUG nova.virt.hardware [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:26:48 np0005539552 nova_compute[233724]: 2025-11-29 08:26:48.619 233728 DEBUG nova.virt.hardware [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:26:48 np0005539552 nova_compute[233724]: 2025-11-29 08:26:48.620 233728 DEBUG nova.virt.hardware [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:26:48 np0005539552 nova_compute[233724]: 2025-11-29 08:26:48.620 233728 DEBUG nova.virt.hardware [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:26:48 np0005539552 nova_compute[233724]: 2025-11-29 08:26:48.621 233728 DEBUG nova.virt.hardware [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:26:48 np0005539552 nova_compute[233724]: 2025-11-29 08:26:48.621 233728 DEBUG nova.virt.hardware [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:26:48 np0005539552 nova_compute[233724]: 2025-11-29 08:26:48.621 233728 DEBUG nova.virt.hardware [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:26:48 np0005539552 nova_compute[233724]: 2025-11-29 08:26:48.622 233728 DEBUG nova.objects.instance [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lazy-loading 'vcpu_model' on Instance uuid bb261893-bfa1-4fdc-9c11-a33a733337ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:26:48 np0005539552 nova_compute[233724]: 2025-11-29 08:26:48.682 233728 DEBUG oslo_concurrency.processutils [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:48 np0005539552 nova_compute[233724]: 2025-11-29 08:26:48.721 233728 DEBUG oslo_concurrency.lockutils [None req-443dab97-5621-40c6-9aba-4d859885a3ba ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Acquiring lock "d2f10f56-bc70-4ac8-953c-99479942f88d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:48 np0005539552 nova_compute[233724]: 2025-11-29 08:26:48.723 233728 DEBUG oslo_concurrency.lockutils [None req-443dab97-5621-40c6-9aba-4d859885a3ba ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Lock "d2f10f56-bc70-4ac8-953c-99479942f88d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:48 np0005539552 nova_compute[233724]: 2025-11-29 08:26:48.723 233728 DEBUG oslo_concurrency.lockutils [None req-443dab97-5621-40c6-9aba-4d859885a3ba ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Acquiring lock "d2f10f56-bc70-4ac8-953c-99479942f88d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:48 np0005539552 nova_compute[233724]: 2025-11-29 08:26:48.724 233728 DEBUG oslo_concurrency.lockutils [None req-443dab97-5621-40c6-9aba-4d859885a3ba ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Lock "d2f10f56-bc70-4ac8-953c-99479942f88d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:48 np0005539552 nova_compute[233724]: 2025-11-29 08:26:48.725 233728 DEBUG oslo_concurrency.lockutils [None req-443dab97-5621-40c6-9aba-4d859885a3ba ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Lock "d2f10f56-bc70-4ac8-953c-99479942f88d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:48 np0005539552 nova_compute[233724]: 2025-11-29 08:26:48.727 233728 INFO nova.compute.manager [None req-443dab97-5621-40c6-9aba-4d859885a3ba ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Terminating instance#033[00m
Nov 29 03:26:48 np0005539552 nova_compute[233724]: 2025-11-29 08:26:48.729 233728 DEBUG nova.compute.manager [None req-443dab97-5621-40c6-9aba-4d859885a3ba ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:26:48 np0005539552 kernel: tap82c2db07-7d (unregistering): left promiscuous mode
Nov 29 03:26:48 np0005539552 NetworkManager[48926]: <info>  [1764404808.7858] device (tap82c2db07-7d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:26:48 np0005539552 ovn_controller[133798]: 2025-11-29T08:26:48Z|00622|binding|INFO|Releasing lport 82c2db07-7de0-4b4c-920a-4a2b41cf480a from this chassis (sb_readonly=0)
Nov 29 03:26:48 np0005539552 ovn_controller[133798]: 2025-11-29T08:26:48Z|00623|binding|INFO|Setting lport 82c2db07-7de0-4b4c-920a-4a2b41cf480a down in Southbound
Nov 29 03:26:48 np0005539552 ovn_controller[133798]: 2025-11-29T08:26:48Z|00624|binding|INFO|Removing iface tap82c2db07-7d ovn-installed in OVS
Nov 29 03:26:48 np0005539552 nova_compute[233724]: 2025-11-29 08:26:48.803 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:48 np0005539552 nova_compute[233724]: 2025-11-29 08:26:48.818 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:48 np0005539552 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d0000008d.scope: Deactivated successfully.
Nov 29 03:26:48 np0005539552 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d0000008d.scope: Consumed 3.951s CPU time.
Nov 29 03:26:48 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:48.843 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:cb:50 10.100.0.13'], port_security=['fa:16:3e:ad:cb:50 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd2f10f56-bc70-4ac8-953c-99479942f88d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e5589bf-54e6-4aa1-8daf-200d8f338eb1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '14c2056117984f1e86ae3f64a5fee48c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ecddd71e-3e5b-4520-9036-a04a23fee489', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=19148241-af59-40d5-827c-1361c53a843d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=82c2db07-7de0-4b4c-920a-4a2b41cf480a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:26:48 np0005539552 systemd-machined[196379]: Machine qemu-62-instance-0000008d terminated.
Nov 29 03:26:48 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:48.845 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 82c2db07-7de0-4b4c-920a-4a2b41cf480a in datapath 8e5589bf-54e6-4aa1-8daf-200d8f338eb1 unbound from our chassis#033[00m
Nov 29 03:26:48 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:48.846 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8e5589bf-54e6-4aa1-8daf-200d8f338eb1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:26:48 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:48.847 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3b3840ae-22d3-4bb5-acbd-b5f85c9a1c59]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:48 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:48.848 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8e5589bf-54e6-4aa1-8daf-200d8f338eb1 namespace which is not needed anymore#033[00m
Nov 29 03:26:48 np0005539552 kernel: tap82c2db07-7d: entered promiscuous mode
Nov 29 03:26:48 np0005539552 NetworkManager[48926]: <info>  [1764404808.9490] manager: (tap82c2db07-7d): new Tun device (/org/freedesktop/NetworkManager/Devices/284)
Nov 29 03:26:48 np0005539552 kernel: tap82c2db07-7d (unregistering): left promiscuous mode
Nov 29 03:26:48 np0005539552 nova_compute[233724]: 2025-11-29 08:26:48.955 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:48 np0005539552 ovn_controller[133798]: 2025-11-29T08:26:48Z|00625|binding|INFO|Claiming lport 82c2db07-7de0-4b4c-920a-4a2b41cf480a for this chassis.
Nov 29 03:26:48 np0005539552 ovn_controller[133798]: 2025-11-29T08:26:48Z|00626|binding|INFO|82c2db07-7de0-4b4c-920a-4a2b41cf480a: Claiming fa:16:3e:ad:cb:50 10.100.0.13
Nov 29 03:26:48 np0005539552 nova_compute[233724]: 2025-11-29 08:26:48.975 233728 INFO nova.virt.libvirt.driver [-] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Instance destroyed successfully.#033[00m
Nov 29 03:26:48 np0005539552 nova_compute[233724]: 2025-11-29 08:26:48.975 233728 DEBUG nova.objects.instance [None req-443dab97-5621-40c6-9aba-4d859885a3ba ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Lazy-loading 'resources' on Instance uuid d2f10f56-bc70-4ac8-953c-99479942f88d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:26:48 np0005539552 ovn_controller[133798]: 2025-11-29T08:26:48Z|00627|if_status|INFO|Dropped 3 log messages in last 625 seconds (most recently, 625 seconds ago) due to excessive rate
Nov 29 03:26:48 np0005539552 ovn_controller[133798]: 2025-11-29T08:26:48Z|00628|if_status|INFO|Not setting lport 82c2db07-7de0-4b4c-920a-4a2b41cf480a down as sb is readonly
Nov 29 03:26:48 np0005539552 nova_compute[233724]: 2025-11-29 08:26:48.980 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:48 np0005539552 nova_compute[233724]: 2025-11-29 08:26:48.981 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:49 np0005539552 neutron-haproxy-ovnmeta-8e5589bf-54e6-4aa1-8daf-200d8f338eb1[292266]: [NOTICE]   (292270) : haproxy version is 2.8.14-c23fe91
Nov 29 03:26:49 np0005539552 neutron-haproxy-ovnmeta-8e5589bf-54e6-4aa1-8daf-200d8f338eb1[292266]: [NOTICE]   (292270) : path to executable is /usr/sbin/haproxy
Nov 29 03:26:49 np0005539552 neutron-haproxy-ovnmeta-8e5589bf-54e6-4aa1-8daf-200d8f338eb1[292266]: [WARNING]  (292270) : Exiting Master process...
Nov 29 03:26:49 np0005539552 neutron-haproxy-ovnmeta-8e5589bf-54e6-4aa1-8daf-200d8f338eb1[292266]: [ALERT]    (292270) : Current worker (292272) exited with code 143 (Terminated)
Nov 29 03:26:49 np0005539552 neutron-haproxy-ovnmeta-8e5589bf-54e6-4aa1-8daf-200d8f338eb1[292266]: [WARNING]  (292270) : All workers exited. Exiting... (0)
Nov 29 03:26:49 np0005539552 systemd[1]: libpod-09d463fe03547231e64f49aa67fa5c6d0723b22445d5c1ba607943d098df24b5.scope: Deactivated successfully.
Nov 29 03:26:49 np0005539552 podman[292563]: 2025-11-29 08:26:49.012149382 +0000 UTC m=+0.081885964 container died 09d463fe03547231e64f49aa67fa5c6d0723b22445d5c1ba607943d098df24b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e5589bf-54e6-4aa1-8daf-200d8f338eb1, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 03:26:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:49.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:49 np0005539552 ovn_controller[133798]: 2025-11-29T08:26:49Z|00629|binding|INFO|Releasing lport 82c2db07-7de0-4b4c-920a-4a2b41cf480a from this chassis (sb_readonly=0)
Nov 29 03:26:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:49.086 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:cb:50 10.100.0.13'], port_security=['fa:16:3e:ad:cb:50 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd2f10f56-bc70-4ac8-953c-99479942f88d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e5589bf-54e6-4aa1-8daf-200d8f338eb1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '14c2056117984f1e86ae3f64a5fee48c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ecddd71e-3e5b-4520-9036-a04a23fee489', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=19148241-af59-40d5-827c-1361c53a843d, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=82c2db07-7de0-4b4c-920a-4a2b41cf480a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:26:49 np0005539552 nova_compute[233724]: 2025-11-29 08:26:49.089 233728 DEBUG nova.virt.libvirt.vif [None req-443dab97-5621-40c6-9aba-4d859885a3ba ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:26:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-864046135',display_name='tempest-ServerPasswordTestJSON-server-864046135',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-864046135',id=141,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:26:45Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='14c2056117984f1e86ae3f64a5fee48c',ramdisk_id='',reservation_id='r-yg68sbzt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerPasswordTestJSON-984232141',owner_user_name='tempest-ServerPasswordTestJSON-984232141-project-member',password_0='',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:26:47Z,user_data=None,user_id='ac7e861322894ae387d8f7062a73dddb',uuid=d2f10f56-bc70-4ac8-953c-99479942f88d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "82c2db07-7de0-4b4c-920a-4a2b41cf480a", "address": "fa:16:3e:ad:cb:50", "network": {"id": "8e5589bf-54e6-4aa1-8daf-200d8f338eb1", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-695845824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14c2056117984f1e86ae3f64a5fee48c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82c2db07-7d", "ovs_interfaceid": "82c2db07-7de0-4b4c-920a-4a2b41cf480a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:26:49 np0005539552 nova_compute[233724]: 2025-11-29 08:26:49.090 233728 DEBUG nova.network.os_vif_util [None req-443dab97-5621-40c6-9aba-4d859885a3ba ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Converting VIF {"id": "82c2db07-7de0-4b4c-920a-4a2b41cf480a", "address": "fa:16:3e:ad:cb:50", "network": {"id": "8e5589bf-54e6-4aa1-8daf-200d8f338eb1", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-695845824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "14c2056117984f1e86ae3f64a5fee48c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82c2db07-7d", "ovs_interfaceid": "82c2db07-7de0-4b4c-920a-4a2b41cf480a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:26:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:49.090 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:cb:50 10.100.0.13'], port_security=['fa:16:3e:ad:cb:50 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd2f10f56-bc70-4ac8-953c-99479942f88d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e5589bf-54e6-4aa1-8daf-200d8f338eb1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '14c2056117984f1e86ae3f64a5fee48c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ecddd71e-3e5b-4520-9036-a04a23fee489', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=19148241-af59-40d5-827c-1361c53a843d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=82c2db07-7de0-4b4c-920a-4a2b41cf480a) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:26:49 np0005539552 nova_compute[233724]: 2025-11-29 08:26:49.091 233728 DEBUG nova.network.os_vif_util [None req-443dab97-5621-40c6-9aba-4d859885a3ba ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:cb:50,bridge_name='br-int',has_traffic_filtering=True,id=82c2db07-7de0-4b4c-920a-4a2b41cf480a,network=Network(8e5589bf-54e6-4aa1-8daf-200d8f338eb1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82c2db07-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:26:49 np0005539552 nova_compute[233724]: 2025-11-29 08:26:49.091 233728 DEBUG os_vif [None req-443dab97-5621-40c6-9aba-4d859885a3ba ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:cb:50,bridge_name='br-int',has_traffic_filtering=True,id=82c2db07-7de0-4b4c-920a-4a2b41cf480a,network=Network(8e5589bf-54e6-4aa1-8daf-200d8f338eb1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82c2db07-7d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:26:49 np0005539552 nova_compute[233724]: 2025-11-29 08:26:49.093 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:49 np0005539552 nova_compute[233724]: 2025-11-29 08:26:49.093 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap82c2db07-7d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:49 np0005539552 nova_compute[233724]: 2025-11-29 08:26:49.095 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:49 np0005539552 nova_compute[233724]: 2025-11-29 08:26:49.098 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:26:49 np0005539552 nova_compute[233724]: 2025-11-29 08:26:49.100 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:49 np0005539552 nova_compute[233724]: 2025-11-29 08:26:49.105 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:49 np0005539552 nova_compute[233724]: 2025-11-29 08:26:49.108 233728 INFO os_vif [None req-443dab97-5621-40c6-9aba-4d859885a3ba ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:cb:50,bridge_name='br-int',has_traffic_filtering=True,id=82c2db07-7de0-4b4c-920a-4a2b41cf480a,network=Network(8e5589bf-54e6-4aa1-8daf-200d8f338eb1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82c2db07-7d')#033[00m
Nov 29 03:26:49 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-09d463fe03547231e64f49aa67fa5c6d0723b22445d5c1ba607943d098df24b5-userdata-shm.mount: Deactivated successfully.
Nov 29 03:26:49 np0005539552 systemd[1]: var-lib-containers-storage-overlay-3f55eaf3bfa757bece0d3cd04f3b655ace4acd94a96196f6d870022fa4d8ed3e-merged.mount: Deactivated successfully.
Nov 29 03:26:49 np0005539552 podman[292563]: 2025-11-29 08:26:49.1320994 +0000 UTC m=+0.201835992 container cleanup 09d463fe03547231e64f49aa67fa5c6d0723b22445d5c1ba607943d098df24b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e5589bf-54e6-4aa1-8daf-200d8f338eb1, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:26:49 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:26:49 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3415618587' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:26:49 np0005539552 systemd[1]: libpod-conmon-09d463fe03547231e64f49aa67fa5c6d0723b22445d5c1ba607943d098df24b5.scope: Deactivated successfully.
Nov 29 03:26:49 np0005539552 nova_compute[233724]: 2025-11-29 08:26:49.161 233728 DEBUG oslo_concurrency.processutils [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:49 np0005539552 nova_compute[233724]: 2025-11-29 08:26:49.162 233728 DEBUG oslo_concurrency.processutils [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:49 np0005539552 podman[292618]: 2025-11-29 08:26:49.223977722 +0000 UTC m=+0.066229723 container remove 09d463fe03547231e64f49aa67fa5c6d0723b22445d5c1ba607943d098df24b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e5589bf-54e6-4aa1-8daf-200d8f338eb1, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:26:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:49.230 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e7dc11b5-f1b0-4467-939f-72e4993b5909]: (4, ('Sat Nov 29 08:26:48 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8e5589bf-54e6-4aa1-8daf-200d8f338eb1 (09d463fe03547231e64f49aa67fa5c6d0723b22445d5c1ba607943d098df24b5)\n09d463fe03547231e64f49aa67fa5c6d0723b22445d5c1ba607943d098df24b5\nSat Nov 29 08:26:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8e5589bf-54e6-4aa1-8daf-200d8f338eb1 (09d463fe03547231e64f49aa67fa5c6d0723b22445d5c1ba607943d098df24b5)\n09d463fe03547231e64f49aa67fa5c6d0723b22445d5c1ba607943d098df24b5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:49.236 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3b6a4fea-a6e1-4616-baa5-c9eb96832c7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:49.238 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e5589bf-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:49 np0005539552 nova_compute[233724]: 2025-11-29 08:26:49.239 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:49 np0005539552 kernel: tap8e5589bf-50: left promiscuous mode
Nov 29 03:26:49 np0005539552 nova_compute[233724]: 2025-11-29 08:26:49.259 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:49.262 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[65261196-5446-48b7-baea-95c3e3cc524b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:49.275 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4aa9559e-7e7e-4a96-9abf-3d33b6c94fcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:49.276 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0ae9f80c-cf47-4aae-ba45-3490053aa272]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:49.289 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[115d6414-5fd3-4086-9dba-f25b6685646c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 785063, 'reachable_time': 26627, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292638, 'error': None, 'target': 'ovnmeta-8e5589bf-54e6-4aa1-8daf-200d8f338eb1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:49.292 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8e5589bf-54e6-4aa1-8daf-200d8f338eb1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:26:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:49.292 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[1b38ff6b-b95f-4f91-9da7-a873aabafb8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:49 np0005539552 systemd[1]: run-netns-ovnmeta\x2d8e5589bf\x2d54e6\x2d4aa1\x2d8daf\x2d200d8f338eb1.mount: Deactivated successfully.
Nov 29 03:26:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:49.293 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 82c2db07-7de0-4b4c-920a-4a2b41cf480a in datapath 8e5589bf-54e6-4aa1-8daf-200d8f338eb1 unbound from our chassis#033[00m
Nov 29 03:26:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:49.294 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8e5589bf-54e6-4aa1-8daf-200d8f338eb1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:26:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:49.295 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[5b0d7d02-ca23-490f-9595-2681e5cfeee5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:49.295 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 82c2db07-7de0-4b4c-920a-4a2b41cf480a in datapath 8e5589bf-54e6-4aa1-8daf-200d8f338eb1 unbound from our chassis#033[00m
Nov 29 03:26:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:49.296 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8e5589bf-54e6-4aa1-8daf-200d8f338eb1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:26:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:49.297 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e1a39fb6-c3c1-48e8-82e4-9a2cc3ceb4cb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:49 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:26:49 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3048545471' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:26:49 np0005539552 nova_compute[233724]: 2025-11-29 08:26:49.608 233728 DEBUG nova.compute.manager [req-9ca5a595-7a72-44fd-9dec-d2310d2a47b6 req-609cf3d3-c5d6-475c-aa96-a768fdbf2e6f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Received event network-vif-plugged-0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:49 np0005539552 nova_compute[233724]: 2025-11-29 08:26:49.609 233728 DEBUG oslo_concurrency.lockutils [req-9ca5a595-7a72-44fd-9dec-d2310d2a47b6 req-609cf3d3-c5d6-475c-aa96-a768fdbf2e6f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "bb261893-bfa1-4fdc-9c11-a33a733337ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:49 np0005539552 nova_compute[233724]: 2025-11-29 08:26:49.609 233728 DEBUG oslo_concurrency.lockutils [req-9ca5a595-7a72-44fd-9dec-d2310d2a47b6 req-609cf3d3-c5d6-475c-aa96-a768fdbf2e6f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bb261893-bfa1-4fdc-9c11-a33a733337ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:49 np0005539552 nova_compute[233724]: 2025-11-29 08:26:49.609 233728 DEBUG oslo_concurrency.lockutils [req-9ca5a595-7a72-44fd-9dec-d2310d2a47b6 req-609cf3d3-c5d6-475c-aa96-a768fdbf2e6f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bb261893-bfa1-4fdc-9c11-a33a733337ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:49 np0005539552 nova_compute[233724]: 2025-11-29 08:26:49.609 233728 DEBUG nova.compute.manager [req-9ca5a595-7a72-44fd-9dec-d2310d2a47b6 req-609cf3d3-c5d6-475c-aa96-a768fdbf2e6f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] No waiting events found dispatching network-vif-plugged-0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:26:49 np0005539552 nova_compute[233724]: 2025-11-29 08:26:49.610 233728 WARNING nova.compute.manager [req-9ca5a595-7a72-44fd-9dec-d2310d2a47b6 req-609cf3d3-c5d6-475c-aa96-a768fdbf2e6f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Received unexpected event network-vif-plugged-0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:26:49 np0005539552 nova_compute[233724]: 2025-11-29 08:26:49.610 233728 DEBUG nova.compute.manager [req-9ca5a595-7a72-44fd-9dec-d2310d2a47b6 req-609cf3d3-c5d6-475c-aa96-a768fdbf2e6f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Received event network-vif-unplugged-82c2db07-7de0-4b4c-920a-4a2b41cf480a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:49 np0005539552 nova_compute[233724]: 2025-11-29 08:26:49.610 233728 DEBUG oslo_concurrency.lockutils [req-9ca5a595-7a72-44fd-9dec-d2310d2a47b6 req-609cf3d3-c5d6-475c-aa96-a768fdbf2e6f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d2f10f56-bc70-4ac8-953c-99479942f88d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:49 np0005539552 nova_compute[233724]: 2025-11-29 08:26:49.610 233728 DEBUG oslo_concurrency.lockutils [req-9ca5a595-7a72-44fd-9dec-d2310d2a47b6 req-609cf3d3-c5d6-475c-aa96-a768fdbf2e6f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d2f10f56-bc70-4ac8-953c-99479942f88d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:49 np0005539552 nova_compute[233724]: 2025-11-29 08:26:49.610 233728 DEBUG oslo_concurrency.lockutils [req-9ca5a595-7a72-44fd-9dec-d2310d2a47b6 req-609cf3d3-c5d6-475c-aa96-a768fdbf2e6f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d2f10f56-bc70-4ac8-953c-99479942f88d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:49 np0005539552 nova_compute[233724]: 2025-11-29 08:26:49.611 233728 DEBUG nova.compute.manager [req-9ca5a595-7a72-44fd-9dec-d2310d2a47b6 req-609cf3d3-c5d6-475c-aa96-a768fdbf2e6f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] No waiting events found dispatching network-vif-unplugged-82c2db07-7de0-4b4c-920a-4a2b41cf480a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:26:49 np0005539552 nova_compute[233724]: 2025-11-29 08:26:49.611 233728 DEBUG nova.compute.manager [req-9ca5a595-7a72-44fd-9dec-d2310d2a47b6 req-609cf3d3-c5d6-475c-aa96-a768fdbf2e6f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Received event network-vif-unplugged-82c2db07-7de0-4b4c-920a-4a2b41cf480a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:26:49 np0005539552 nova_compute[233724]: 2025-11-29 08:26:49.626 233728 DEBUG oslo_concurrency.processutils [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:49 np0005539552 nova_compute[233724]: 2025-11-29 08:26:49.628 233728 DEBUG oslo_concurrency.processutils [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:49 np0005539552 nova_compute[233724]: 2025-11-29 08:26:49.878 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:26:49 np0005539552 nova_compute[233724]: 2025-11-29 08:26:49.893 233728 INFO nova.virt.libvirt.driver [None req-443dab97-5621-40c6-9aba-4d859885a3ba ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Deleting instance files /var/lib/nova/instances/d2f10f56-bc70-4ac8-953c-99479942f88d_del#033[00m
Nov 29 03:26:49 np0005539552 nova_compute[233724]: 2025-11-29 08:26:49.894 233728 INFO nova.virt.libvirt.driver [None req-443dab97-5621-40c6-9aba-4d859885a3ba ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Deletion of /var/lib/nova/instances/d2f10f56-bc70-4ac8-953c-99479942f88d_del complete#033[00m
Nov 29 03:26:49 np0005539552 nova_compute[233724]: 2025-11-29 08:26:49.972 233728 INFO nova.compute.manager [None req-443dab97-5621-40c6-9aba-4d859885a3ba ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Took 1.24 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:26:49 np0005539552 nova_compute[233724]: 2025-11-29 08:26:49.973 233728 DEBUG oslo.service.loopingcall [None req-443dab97-5621-40c6-9aba-4d859885a3ba ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:26:49 np0005539552 nova_compute[233724]: 2025-11-29 08:26:49.973 233728 DEBUG nova.compute.manager [-] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:26:49 np0005539552 nova_compute[233724]: 2025-11-29 08:26:49.973 233728 DEBUG nova.network.neutron [-] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:26:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:50.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:50 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:26:50 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/88554171' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:26:50 np0005539552 nova_compute[233724]: 2025-11-29 08:26:50.047 233728 DEBUG oslo_concurrency.processutils [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:50 np0005539552 nova_compute[233724]: 2025-11-29 08:26:50.048 233728 DEBUG nova.virt.libvirt.vif [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:26:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1410922991',display_name='tempest-ServerRescueNegativeTestJSON-server-1410922991',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1410922991',id=140,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:26:27Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ea7b24ea9d7b4d239b4741634ac3f10c',ramdisk_id='',reservation_id='r-u00jfp1f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vi
f_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-2045177058',owner_user_name='tempest-ServerRescueNegativeTestJSON-2045177058-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:26:27Z,user_data=None,user_id='283f8136265e4425a5a31f840935b9ab',uuid=bb261893-bfa1-4fdc-9c11-a33a733337ce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d", "address": "fa:16:3e:72:4b:49", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "vif_mac": "fa:16:3e:72:4b:49"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e2e8c1e-12", "ovs_interfaceid": "0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:26:50 np0005539552 nova_compute[233724]: 2025-11-29 08:26:50.049 233728 DEBUG nova.network.os_vif_util [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Converting VIF {"id": "0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d", "address": "fa:16:3e:72:4b:49", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "vif_mac": "fa:16:3e:72:4b:49"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e2e8c1e-12", "ovs_interfaceid": "0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:26:50 np0005539552 nova_compute[233724]: 2025-11-29 08:26:50.050 233728 DEBUG nova.network.os_vif_util [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:72:4b:49,bridge_name='br-int',has_traffic_filtering=True,id=0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d,network=Network(4ca67fce-6116-4a0b-b0a9-c25b5adaad19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e2e8c1e-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:26:50 np0005539552 nova_compute[233724]: 2025-11-29 08:26:50.050 233728 DEBUG nova.objects.instance [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lazy-loading 'pci_devices' on Instance uuid bb261893-bfa1-4fdc-9c11-a33a733337ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:26:50 np0005539552 nova_compute[233724]: 2025-11-29 08:26:50.064 233728 DEBUG nova.virt.libvirt.driver [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:26:50 np0005539552 nova_compute[233724]:  <uuid>bb261893-bfa1-4fdc-9c11-a33a733337ce</uuid>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:  <name>instance-0000008c</name>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:26:50 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:      <nova:name>tempest-ServerRescueNegativeTestJSON-server-1410922991</nova:name>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:26:48</nova:creationTime>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:26:50 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:        <nova:user uuid="283f8136265e4425a5a31f840935b9ab">tempest-ServerRescueNegativeTestJSON-2045177058-project-member</nova:user>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:        <nova:project uuid="ea7b24ea9d7b4d239b4741634ac3f10c">tempest-ServerRescueNegativeTestJSON-2045177058</nova:project>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:        <nova:port uuid="0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d">
Nov 29 03:26:50 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:      <entry name="serial">bb261893-bfa1-4fdc-9c11-a33a733337ce</entry>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:      <entry name="uuid">bb261893-bfa1-4fdc-9c11-a33a733337ce</entry>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:26:50 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/bb261893-bfa1-4fdc-9c11-a33a733337ce_disk.rescue">
Nov 29 03:26:50 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:26:50 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:26:50 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/bb261893-bfa1-4fdc-9c11-a33a733337ce_disk">
Nov 29 03:26:50 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:26:50 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:      <target dev="vdb" bus="virtio"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:26:50 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/bb261893-bfa1-4fdc-9c11-a33a733337ce_disk.config.rescue">
Nov 29 03:26:50 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:26:50 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:26:50 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:72:4b:49"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:      <target dev="tap0e2e8c1e-12"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:26:50 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/bb261893-bfa1-4fdc-9c11-a33a733337ce/console.log" append="off"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:26:50 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:26:50 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:26:50 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:26:50 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:26:50 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:26:50 np0005539552 nova_compute[233724]: 2025-11-29 08:26:50.070 233728 INFO nova.virt.libvirt.driver [-] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Instance destroyed successfully.#033[00m
Nov 29 03:26:50 np0005539552 nova_compute[233724]: 2025-11-29 08:26:50.174 233728 DEBUG nova.virt.libvirt.driver [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:26:50 np0005539552 nova_compute[233724]: 2025-11-29 08:26:50.175 233728 DEBUG nova.virt.libvirt.driver [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:26:50 np0005539552 nova_compute[233724]: 2025-11-29 08:26:50.176 233728 DEBUG nova.virt.libvirt.driver [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:26:50 np0005539552 nova_compute[233724]: 2025-11-29 08:26:50.176 233728 DEBUG nova.virt.libvirt.driver [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] No VIF found with MAC fa:16:3e:72:4b:49, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:26:50 np0005539552 nova_compute[233724]: 2025-11-29 08:26:50.176 233728 INFO nova.virt.libvirt.driver [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Using config drive#033[00m
Nov 29 03:26:50 np0005539552 nova_compute[233724]: 2025-11-29 08:26:50.199 233728 DEBUG nova.storage.rbd_utils [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] rbd image bb261893-bfa1-4fdc-9c11-a33a733337ce_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:26:50 np0005539552 nova_compute[233724]: 2025-11-29 08:26:50.234 233728 DEBUG nova.objects.instance [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lazy-loading 'ec2_ids' on Instance uuid bb261893-bfa1-4fdc-9c11-a33a733337ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:26:50 np0005539552 nova_compute[233724]: 2025-11-29 08:26:50.266 233728 DEBUG nova.objects.instance [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lazy-loading 'keypairs' on Instance uuid bb261893-bfa1-4fdc-9c11-a33a733337ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:26:50 np0005539552 nova_compute[233724]: 2025-11-29 08:26:50.602 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:50 np0005539552 nova_compute[233724]: 2025-11-29 08:26:50.636 233728 DEBUG nova.network.neutron [-] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:26:50 np0005539552 nova_compute[233724]: 2025-11-29 08:26:50.638 233728 DEBUG oslo_concurrency.lockutils [None req-b25ae970-2c9c-4035-8f74-f5286dc173c4 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "50daa6f5-6598-439f-a542-38e8ae7aded0" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:50 np0005539552 nova_compute[233724]: 2025-11-29 08:26:50.639 233728 DEBUG oslo_concurrency.lockutils [None req-b25ae970-2c9c-4035-8f74-f5286dc173c4 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "50daa6f5-6598-439f-a542-38e8ae7aded0" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:50 np0005539552 nova_compute[233724]: 2025-11-29 08:26:50.672 233728 INFO nova.compute.manager [-] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Took 0.70 seconds to deallocate network for instance.#033[00m
Nov 29 03:26:50 np0005539552 nova_compute[233724]: 2025-11-29 08:26:50.674 233728 DEBUG nova.objects.instance [None req-b25ae970-2c9c-4035-8f74-f5286dc173c4 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'flavor' on Instance uuid 50daa6f5-6598-439f-a542-38e8ae7aded0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:26:50 np0005539552 nova_compute[233724]: 2025-11-29 08:26:50.740 233728 DEBUG nova.compute.manager [req-cd055071-8eac-4ab7-86c8-968ec93c95f7 req-7629ec73-9bcd-4b78-b416-e91c3d41f644 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Received event network-vif-deleted-82c2db07-7de0-4b4c-920a-4a2b41cf480a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:50 np0005539552 nova_compute[233724]: 2025-11-29 08:26:50.743 233728 DEBUG oslo_concurrency.lockutils [None req-b25ae970-2c9c-4035-8f74-f5286dc173c4 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "50daa6f5-6598-439f-a542-38e8ae7aded0" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.105s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:50 np0005539552 nova_compute[233724]: 2025-11-29 08:26:50.745 233728 DEBUG oslo_concurrency.lockutils [None req-443dab97-5621-40c6-9aba-4d859885a3ba ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:50 np0005539552 nova_compute[233724]: 2025-11-29 08:26:50.746 233728 DEBUG oslo_concurrency.lockutils [None req-443dab97-5621-40c6-9aba-4d859885a3ba ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:50 np0005539552 nova_compute[233724]: 2025-11-29 08:26:50.764 233728 INFO nova.virt.libvirt.driver [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Creating config drive at /var/lib/nova/instances/bb261893-bfa1-4fdc-9c11-a33a733337ce/disk.config.rescue#033[00m
Nov 29 03:26:50 np0005539552 nova_compute[233724]: 2025-11-29 08:26:50.774 233728 DEBUG oslo_concurrency.processutils [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bb261893-bfa1-4fdc-9c11-a33a733337ce/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk1hgxyer execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:50 np0005539552 nova_compute[233724]: 2025-11-29 08:26:50.873 233728 DEBUG oslo_concurrency.processutils [None req-443dab97-5621-40c6-9aba-4d859885a3ba ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:50 np0005539552 nova_compute[233724]: 2025-11-29 08:26:50.915 233728 DEBUG oslo_concurrency.processutils [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bb261893-bfa1-4fdc-9c11-a33a733337ce/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk1hgxyer" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:50 np0005539552 nova_compute[233724]: 2025-11-29 08:26:50.941 233728 DEBUG nova.storage.rbd_utils [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] rbd image bb261893-bfa1-4fdc-9c11-a33a733337ce_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:26:50 np0005539552 nova_compute[233724]: 2025-11-29 08:26:50.945 233728 DEBUG oslo_concurrency.processutils [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bb261893-bfa1-4fdc-9c11-a33a733337ce/disk.config.rescue bb261893-bfa1-4fdc-9c11-a33a733337ce_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:51 np0005539552 nova_compute[233724]: 2025-11-29 08:26:51.013 233728 DEBUG oslo_concurrency.lockutils [None req-b25ae970-2c9c-4035-8f74-f5286dc173c4 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "50daa6f5-6598-439f-a542-38e8ae7aded0" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:51 np0005539552 nova_compute[233724]: 2025-11-29 08:26:51.014 233728 DEBUG oslo_concurrency.lockutils [None req-b25ae970-2c9c-4035-8f74-f5286dc173c4 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "50daa6f5-6598-439f-a542-38e8ae7aded0" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:51 np0005539552 nova_compute[233724]: 2025-11-29 08:26:51.014 233728 INFO nova.compute.manager [None req-b25ae970-2c9c-4035-8f74-f5286dc173c4 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Attaching volume 8a80d4c0-d642-46d8-a891-79c20b955b6a to /dev/vdb#033[00m
Nov 29 03:26:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:51.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:51 np0005539552 nova_compute[233724]: 2025-11-29 08:26:51.159 233728 DEBUG os_brick.utils [None req-b25ae970-2c9c-4035-8f74-f5286dc173c4 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:26:51 np0005539552 nova_compute[233724]: 2025-11-29 08:26:51.161 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:51 np0005539552 nova_compute[233724]: 2025-11-29 08:26:51.171 243418 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:51 np0005539552 nova_compute[233724]: 2025-11-29 08:26:51.172 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[8463d6a4-249a-4554-a06b-eb5a9dfd69ef]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:51 np0005539552 nova_compute[233724]: 2025-11-29 08:26:51.173 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:51 np0005539552 nova_compute[233724]: 2025-11-29 08:26:51.181 243418 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:51 np0005539552 nova_compute[233724]: 2025-11-29 08:26:51.181 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[86557c7a-b850-437b-b734-a2d620776cfe]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0e73df9328', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:51 np0005539552 nova_compute[233724]: 2025-11-29 08:26:51.183 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:51 np0005539552 nova_compute[233724]: 2025-11-29 08:26:51.192 243418 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:51 np0005539552 nova_compute[233724]: 2025-11-29 08:26:51.192 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[9c8f27c5-f90b-4fac-9bc0-f25b8a87c585]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:51 np0005539552 nova_compute[233724]: 2025-11-29 08:26:51.194 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[1bb749a6-3c40-47e4-8498-5d4189691c94]: (4, '6fbde64a-d978-4f1a-a29d-a77e1f5a1987') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:51 np0005539552 nova_compute[233724]: 2025-11-29 08:26:51.194 233728 DEBUG oslo_concurrency.processutils [None req-b25ae970-2c9c-4035-8f74-f5286dc173c4 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:51 np0005539552 nova_compute[233724]: 2025-11-29 08:26:51.226 233728 DEBUG oslo_concurrency.processutils [None req-b25ae970-2c9c-4035-8f74-f5286dc173c4 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "nvme version" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:51 np0005539552 nova_compute[233724]: 2025-11-29 08:26:51.228 233728 DEBUG os_brick.initiator.connectors.lightos [None req-b25ae970-2c9c-4035-8f74-f5286dc173c4 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:26:51 np0005539552 nova_compute[233724]: 2025-11-29 08:26:51.229 233728 DEBUG os_brick.initiator.connectors.lightos [None req-b25ae970-2c9c-4035-8f74-f5286dc173c4 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:26:51 np0005539552 nova_compute[233724]: 2025-11-29 08:26:51.229 233728 DEBUG os_brick.initiator.connectors.lightos [None req-b25ae970-2c9c-4035-8f74-f5286dc173c4 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:26:51 np0005539552 nova_compute[233724]: 2025-11-29 08:26:51.229 233728 DEBUG os_brick.utils [None req-b25ae970-2c9c-4035-8f74-f5286dc173c4 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] <== get_connector_properties: return (69ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0e73df9328', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '6fbde64a-d978-4f1a-a29d-a77e1f5a1987', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:26:51 np0005539552 nova_compute[233724]: 2025-11-29 08:26:51.230 233728 DEBUG nova.virt.block_device [None req-b25ae970-2c9c-4035-8f74-f5286dc173c4 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Updating existing volume attachment record: d79b5e8b-7431-43e3-acec-da217fba59ac _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:26:51 np0005539552 nova_compute[233724]: 2025-11-29 08:26:51.249 233728 DEBUG oslo_concurrency.processutils [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bb261893-bfa1-4fdc-9c11-a33a733337ce/disk.config.rescue bb261893-bfa1-4fdc-9c11-a33a733337ce_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.304s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:51 np0005539552 nova_compute[233724]: 2025-11-29 08:26:51.249 233728 INFO nova.virt.libvirt.driver [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Deleting local config drive /var/lib/nova/instances/bb261893-bfa1-4fdc-9c11-a33a733337ce/disk.config.rescue because it was imported into RBD.#033[00m
Nov 29 03:26:51 np0005539552 kernel: tap0e2e8c1e-12: entered promiscuous mode
Nov 29 03:26:51 np0005539552 systemd-udevd[292526]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:26:51 np0005539552 NetworkManager[48926]: <info>  [1764404811.3012] manager: (tap0e2e8c1e-12): new Tun device (/org/freedesktop/NetworkManager/Devices/285)
Nov 29 03:26:51 np0005539552 ovn_controller[133798]: 2025-11-29T08:26:51Z|00630|binding|INFO|Claiming lport 0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d for this chassis.
Nov 29 03:26:51 np0005539552 nova_compute[233724]: 2025-11-29 08:26:51.302 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:51 np0005539552 ovn_controller[133798]: 2025-11-29T08:26:51Z|00631|binding|INFO|0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d: Claiming fa:16:3e:72:4b:49 10.100.0.13
Nov 29 03:26:51 np0005539552 NetworkManager[48926]: <info>  [1764404811.3117] device (tap0e2e8c1e-12): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:26:51 np0005539552 NetworkManager[48926]: <info>  [1764404811.3126] device (tap0e2e8c1e-12): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:51.313 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:72:4b:49 10.100.0.13'], port_security=['fa:16:3e:72:4b:49 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'bb261893-bfa1-4fdc-9c11-a33a733337ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea7b24ea9d7b4d239b4741634ac3f10c', 'neutron:revision_number': '5', 'neutron:security_group_ids': '525789b3-2118-4a66-bac0-ed0947cafa2d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2992474f-d5bf-4893-b4cc-2c774a8a9871, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:51.314 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d in datapath 4ca67fce-6116-4a0b-b0a9-c25b5adaad19 bound to our chassis#033[00m
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:51.316 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4ca67fce-6116-4a0b-b0a9-c25b5adaad19#033[00m
Nov 29 03:26:51 np0005539552 ovn_controller[133798]: 2025-11-29T08:26:51Z|00632|binding|INFO|Setting lport 0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d ovn-installed in OVS
Nov 29 03:26:51 np0005539552 ovn_controller[133798]: 2025-11-29T08:26:51Z|00633|binding|INFO|Setting lport 0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d up in Southbound
Nov 29 03:26:51 np0005539552 nova_compute[233724]: 2025-11-29 08:26:51.329 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:51.329 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[700ece62-1a1e-4b93-8d53-dd4cc415b3f4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:51.330 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4ca67fce-61 in ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:51.331 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4ca67fce-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:51.332 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[178dc98f-27af-4b3c-b1c1-d822732d5165]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:51.332 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[046145a4-72d6-4ae5-a0a7-4c194387dfac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:51.344 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[0b9560ae-1795-432a-ac26-f0ad5934a830]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:51 np0005539552 systemd-machined[196379]: New machine qemu-63-instance-0000008c.
Nov 29 03:26:51 np0005539552 systemd[1]: Started Virtual Machine qemu-63-instance-0000008c.
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:51.369 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e188296d-dd4f-4bf8-b9b1-faef7852e8ee]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:51 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:26:51 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/218014964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:51.398 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[3f1247f2-b84d-4dac-af14-827c5b82def9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:51 np0005539552 nova_compute[233724]: 2025-11-29 08:26:51.399 233728 DEBUG oslo_concurrency.processutils [None req-443dab97-5621-40c6-9aba-4d859885a3ba ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:51 np0005539552 NetworkManager[48926]: <info>  [1764404811.4057] manager: (tap4ca67fce-60): new Veth device (/org/freedesktop/NetworkManager/Devices/286)
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:51.405 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[933bec76-3db8-4741-9d45-cfe970d9a3ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:51 np0005539552 nova_compute[233724]: 2025-11-29 08:26:51.408 233728 DEBUG nova.compute.provider_tree [None req-443dab97-5621-40c6-9aba-4d859885a3ba ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:26:51 np0005539552 nova_compute[233724]: 2025-11-29 08:26:51.425 233728 DEBUG nova.scheduler.client.report [None req-443dab97-5621-40c6-9aba-4d859885a3ba ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:51.441 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[1278d768-9654-4ca4-b651-97076d9943ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:51.444 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[f6a3f929-dfa3-4d93-bbb2-6727fee97385]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:51 np0005539552 nova_compute[233724]: 2025-11-29 08:26:51.448 233728 DEBUG oslo_concurrency.lockutils [None req-443dab97-5621-40c6-9aba-4d859885a3ba ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:51 np0005539552 NetworkManager[48926]: <info>  [1764404811.4685] device (tap4ca67fce-60): carrier: link connected
Nov 29 03:26:51 np0005539552 nova_compute[233724]: 2025-11-29 08:26:51.472 233728 INFO nova.scheduler.client.report [None req-443dab97-5621-40c6-9aba-4d859885a3ba ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Deleted allocations for instance d2f10f56-bc70-4ac8-953c-99479942f88d#033[00m
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:51.475 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[3b19e4bf-b02d-466a-8b7c-25ce78838e9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:51.491 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ed1814f5-c374-445c-8f1c-560b89de733e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4ca67fce-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:3c:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 189], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 785715, 'reachable_time': 15430, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292818, 'error': None, 'target': 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:51.511 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b5d10497-ba7c-46b3-9417-e7666e72dbfb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe18:3cca'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 785715, 'tstamp': 785715}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292819, 'error': None, 'target': 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:51.531 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2c30232a-a8ef-4217-9832-24cb41073d23]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4ca67fce-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:3c:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 189], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 785715, 'reachable_time': 15430, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 292820, 'error': None, 'target': 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:51 np0005539552 nova_compute[233724]: 2025-11-29 08:26:51.534 233728 DEBUG oslo_concurrency.lockutils [None req-443dab97-5621-40c6-9aba-4d859885a3ba ac7e861322894ae387d8f7062a73dddb 14c2056117984f1e86ae3f64a5fee48c - - default default] Lock "d2f10f56-bc70-4ac8-953c-99479942f88d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.812s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:51.568 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d900cfae-d62c-480a-b017-bb18637d2347]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:51 np0005539552 nova_compute[233724]: 2025-11-29 08:26:51.576 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:51.633 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[9c60571c-515f-4141-aea4-60b73b55d6f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:51.634 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ca67fce-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:51.635 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:51.635 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4ca67fce-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:51 np0005539552 kernel: tap4ca67fce-60: entered promiscuous mode
Nov 29 03:26:51 np0005539552 NetworkManager[48926]: <info>  [1764404811.6376] manager: (tap4ca67fce-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/287)
Nov 29 03:26:51 np0005539552 nova_compute[233724]: 2025-11-29 08:26:51.640 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:51.641 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4ca67fce-60, col_values=(('external_ids', {'iface-id': '6f99f0ed-ee75-45c3-abe1-1afc889fd227'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:51 np0005539552 ovn_controller[133798]: 2025-11-29T08:26:51Z|00634|binding|INFO|Releasing lport 6f99f0ed-ee75-45c3-abe1-1afc889fd227 from this chassis (sb_readonly=0)
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:51.646 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4ca67fce-6116-4a0b-b0a9-c25b5adaad19.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4ca67fce-6116-4a0b-b0a9-c25b5adaad19.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:51.647 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[85e1dbb1-c101-4617-90f9-459a323a441f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:51.648 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-4ca67fce-6116-4a0b-b0a9-c25b5adaad19
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/4ca67fce-6116-4a0b-b0a9-c25b5adaad19.pid.haproxy
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 4ca67fce-6116-4a0b-b0a9-c25b5adaad19
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:26:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:51.649 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'env', 'PROCESS_TAG=haproxy-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4ca67fce-6116-4a0b-b0a9-c25b5adaad19.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:26:51 np0005539552 nova_compute[233724]: 2025-11-29 08:26:51.658 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:51 np0005539552 nova_compute[233724]: 2025-11-29 08:26:51.771 233728 DEBUG nova.compute.manager [req-56a91ed8-cdbf-428f-8ff2-223fc49c4a61 req-1d5504e9-27dc-455a-98a9-ed637aa13259 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Received event network-vif-plugged-82c2db07-7de0-4b4c-920a-4a2b41cf480a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:51 np0005539552 nova_compute[233724]: 2025-11-29 08:26:51.772 233728 DEBUG oslo_concurrency.lockutils [req-56a91ed8-cdbf-428f-8ff2-223fc49c4a61 req-1d5504e9-27dc-455a-98a9-ed637aa13259 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d2f10f56-bc70-4ac8-953c-99479942f88d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:51 np0005539552 nova_compute[233724]: 2025-11-29 08:26:51.772 233728 DEBUG oslo_concurrency.lockutils [req-56a91ed8-cdbf-428f-8ff2-223fc49c4a61 req-1d5504e9-27dc-455a-98a9-ed637aa13259 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d2f10f56-bc70-4ac8-953c-99479942f88d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:51 np0005539552 nova_compute[233724]: 2025-11-29 08:26:51.772 233728 DEBUG oslo_concurrency.lockutils [req-56a91ed8-cdbf-428f-8ff2-223fc49c4a61 req-1d5504e9-27dc-455a-98a9-ed637aa13259 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d2f10f56-bc70-4ac8-953c-99479942f88d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:51 np0005539552 nova_compute[233724]: 2025-11-29 08:26:51.772 233728 DEBUG nova.compute.manager [req-56a91ed8-cdbf-428f-8ff2-223fc49c4a61 req-1d5504e9-27dc-455a-98a9-ed637aa13259 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] No waiting events found dispatching network-vif-plugged-82c2db07-7de0-4b4c-920a-4a2b41cf480a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:26:51 np0005539552 nova_compute[233724]: 2025-11-29 08:26:51.773 233728 WARNING nova.compute.manager [req-56a91ed8-cdbf-428f-8ff2-223fc49c4a61 req-1d5504e9-27dc-455a-98a9-ed637aa13259 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Received unexpected event network-vif-plugged-82c2db07-7de0-4b4c-920a-4a2b41cf480a for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:26:51 np0005539552 nova_compute[233724]: 2025-11-29 08:26:51.773 233728 DEBUG nova.compute.manager [req-56a91ed8-cdbf-428f-8ff2-223fc49c4a61 req-1d5504e9-27dc-455a-98a9-ed637aa13259 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Received event network-vif-plugged-0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:51 np0005539552 nova_compute[233724]: 2025-11-29 08:26:51.773 233728 DEBUG oslo_concurrency.lockutils [req-56a91ed8-cdbf-428f-8ff2-223fc49c4a61 req-1d5504e9-27dc-455a-98a9-ed637aa13259 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "bb261893-bfa1-4fdc-9c11-a33a733337ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:51 np0005539552 nova_compute[233724]: 2025-11-29 08:26:51.774 233728 DEBUG oslo_concurrency.lockutils [req-56a91ed8-cdbf-428f-8ff2-223fc49c4a61 req-1d5504e9-27dc-455a-98a9-ed637aa13259 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bb261893-bfa1-4fdc-9c11-a33a733337ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:51 np0005539552 nova_compute[233724]: 2025-11-29 08:26:51.774 233728 DEBUG oslo_concurrency.lockutils [req-56a91ed8-cdbf-428f-8ff2-223fc49c4a61 req-1d5504e9-27dc-455a-98a9-ed637aa13259 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bb261893-bfa1-4fdc-9c11-a33a733337ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:51 np0005539552 nova_compute[233724]: 2025-11-29 08:26:51.774 233728 DEBUG nova.compute.manager [req-56a91ed8-cdbf-428f-8ff2-223fc49c4a61 req-1d5504e9-27dc-455a-98a9-ed637aa13259 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] No waiting events found dispatching network-vif-plugged-0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:26:51 np0005539552 nova_compute[233724]: 2025-11-29 08:26:51.774 233728 WARNING nova.compute.manager [req-56a91ed8-cdbf-428f-8ff2-223fc49c4a61 req-1d5504e9-27dc-455a-98a9-ed637aa13259 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Received unexpected event network-vif-plugged-0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:26:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:26:52 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1887507088' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:26:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:52.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:52 np0005539552 nova_compute[233724]: 2025-11-29 08:26:52.123 233728 DEBUG nova.virt.libvirt.host [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Removed pending event for bb261893-bfa1-4fdc-9c11-a33a733337ce due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:26:52 np0005539552 nova_compute[233724]: 2025-11-29 08:26:52.124 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404812.1235602, bb261893-bfa1-4fdc-9c11-a33a733337ce => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:26:52 np0005539552 nova_compute[233724]: 2025-11-29 08:26:52.125 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:26:52 np0005539552 nova_compute[233724]: 2025-11-29 08:26:52.130 233728 DEBUG nova.compute.manager [None req-72a0f5e4-586a-4e7b-bc07-bd74ea585e9e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:26:52 np0005539552 podman[292906]: 2025-11-29 08:26:52.136185856 +0000 UTC m=+0.104935245 container create 9d54d8082fd4a641266c7257a37f309d64b779796f64c13a1ae3acbec0fe80f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 29 03:26:52 np0005539552 nova_compute[233724]: 2025-11-29 08:26:52.148 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:26:52 np0005539552 podman[292906]: 2025-11-29 08:26:52.059866682 +0000 UTC m=+0.028616131 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:26:52 np0005539552 nova_compute[233724]: 2025-11-29 08:26:52.152 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:26:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e337 e337: 3 total, 3 up, 3 in
Nov 29 03:26:52 np0005539552 systemd[1]: Started libpod-conmon-9d54d8082fd4a641266c7257a37f309d64b779796f64c13a1ae3acbec0fe80f1.scope.
Nov 29 03:26:52 np0005539552 nova_compute[233724]: 2025-11-29 08:26:52.215 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Nov 29 03:26:52 np0005539552 nova_compute[233724]: 2025-11-29 08:26:52.216 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404812.1244771, bb261893-bfa1-4fdc-9c11-a33a733337ce => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:26:52 np0005539552 nova_compute[233724]: 2025-11-29 08:26:52.217 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] VM Started (Lifecycle Event)#033[00m
Nov 29 03:26:52 np0005539552 nova_compute[233724]: 2025-11-29 08:26:52.229 233728 DEBUG nova.objects.instance [None req-b25ae970-2c9c-4035-8f74-f5286dc173c4 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'flavor' on Instance uuid 50daa6f5-6598-439f-a542-38e8ae7aded0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:26:52 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:26:52 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f370aeb1afde6a02a55d004f4439bcaf12cf73c643d93dfbb6adcff9bd5968e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:26:52 np0005539552 podman[292906]: 2025-11-29 08:26:52.256240906 +0000 UTC m=+0.224990315 container init 9d54d8082fd4a641266c7257a37f309d64b779796f64c13a1ae3acbec0fe80f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 03:26:52 np0005539552 podman[292906]: 2025-11-29 08:26:52.262165175 +0000 UTC m=+0.230914574 container start 9d54d8082fd4a641266c7257a37f309d64b779796f64c13a1ae3acbec0fe80f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:26:52 np0005539552 nova_compute[233724]: 2025-11-29 08:26:52.266 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:26:52 np0005539552 nova_compute[233724]: 2025-11-29 08:26:52.271 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:26:52 np0005539552 nova_compute[233724]: 2025-11-29 08:26:52.278 233728 DEBUG nova.virt.libvirt.driver [None req-b25ae970-2c9c-4035-8f74-f5286dc173c4 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Attempting to attach volume 8a80d4c0-d642-46d8-a891-79c20b955b6a with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Nov 29 03:26:52 np0005539552 nova_compute[233724]: 2025-11-29 08:26:52.280 233728 DEBUG nova.virt.libvirt.guest [None req-b25ae970-2c9c-4035-8f74-f5286dc173c4 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 03:26:52 np0005539552 nova_compute[233724]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:26:52 np0005539552 nova_compute[233724]:  <source protocol="rbd" name="volumes/volume-8a80d4c0-d642-46d8-a891-79c20b955b6a">
Nov 29 03:26:52 np0005539552 nova_compute[233724]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:26:52 np0005539552 nova_compute[233724]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:26:52 np0005539552 nova_compute[233724]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:26:52 np0005539552 nova_compute[233724]:  </source>
Nov 29 03:26:52 np0005539552 nova_compute[233724]:  <auth username="openstack">
Nov 29 03:26:52 np0005539552 nova_compute[233724]:    <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:26:52 np0005539552 nova_compute[233724]:  </auth>
Nov 29 03:26:52 np0005539552 nova_compute[233724]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:26:52 np0005539552 nova_compute[233724]:  <serial>8a80d4c0-d642-46d8-a891-79c20b955b6a</serial>
Nov 29 03:26:52 np0005539552 nova_compute[233724]: </disk>
Nov 29 03:26:52 np0005539552 nova_compute[233724]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 03:26:52 np0005539552 neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19[292928]: [NOTICE]   (292932) : New worker (292934) forked
Nov 29 03:26:52 np0005539552 neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19[292928]: [NOTICE]   (292932) : Loading success.
Nov 29 03:26:52 np0005539552 nova_compute[233724]: 2025-11-29 08:26:52.478 233728 DEBUG nova.virt.libvirt.driver [None req-b25ae970-2c9c-4035-8f74-f5286dc173c4 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:26:52 np0005539552 nova_compute[233724]: 2025-11-29 08:26:52.479 233728 DEBUG nova.virt.libvirt.driver [None req-b25ae970-2c9c-4035-8f74-f5286dc173c4 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:26:52 np0005539552 nova_compute[233724]: 2025-11-29 08:26:52.479 233728 DEBUG nova.virt.libvirt.driver [None req-b25ae970-2c9c-4035-8f74-f5286dc173c4 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:26:52 np0005539552 nova_compute[233724]: 2025-11-29 08:26:52.479 233728 DEBUG nova.virt.libvirt.driver [None req-b25ae970-2c9c-4035-8f74-f5286dc173c4 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] No VIF found with MAC fa:16:3e:71:14:96, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:26:52 np0005539552 nova_compute[233724]: 2025-11-29 08:26:52.753 233728 DEBUG oslo_concurrency.lockutils [None req-b25ae970-2c9c-4035-8f74-f5286dc173c4 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "50daa6f5-6598-439f-a542-38e8ae7aded0" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:53.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:53 np0005539552 nova_compute[233724]: 2025-11-29 08:26:53.841 233728 DEBUG nova.compute.manager [req-f49c585b-1e59-44f7-8a67-dbad14b381dc req-535b3d5c-5493-4540-906b-fbefcefbdd91 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Received event network-vif-plugged-0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:53 np0005539552 nova_compute[233724]: 2025-11-29 08:26:53.842 233728 DEBUG oslo_concurrency.lockutils [req-f49c585b-1e59-44f7-8a67-dbad14b381dc req-535b3d5c-5493-4540-906b-fbefcefbdd91 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "bb261893-bfa1-4fdc-9c11-a33a733337ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:53 np0005539552 nova_compute[233724]: 2025-11-29 08:26:53.842 233728 DEBUG oslo_concurrency.lockutils [req-f49c585b-1e59-44f7-8a67-dbad14b381dc req-535b3d5c-5493-4540-906b-fbefcefbdd91 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bb261893-bfa1-4fdc-9c11-a33a733337ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:53 np0005539552 nova_compute[233724]: 2025-11-29 08:26:53.842 233728 DEBUG oslo_concurrency.lockutils [req-f49c585b-1e59-44f7-8a67-dbad14b381dc req-535b3d5c-5493-4540-906b-fbefcefbdd91 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bb261893-bfa1-4fdc-9c11-a33a733337ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:53 np0005539552 nova_compute[233724]: 2025-11-29 08:26:53.843 233728 DEBUG nova.compute.manager [req-f49c585b-1e59-44f7-8a67-dbad14b381dc req-535b3d5c-5493-4540-906b-fbefcefbdd91 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] No waiting events found dispatching network-vif-plugged-0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:26:53 np0005539552 nova_compute[233724]: 2025-11-29 08:26:53.843 233728 WARNING nova.compute.manager [req-f49c585b-1e59-44f7-8a67-dbad14b381dc req-535b3d5c-5493-4540-906b-fbefcefbdd91 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Received unexpected event network-vif-plugged-0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d for instance with vm_state rescued and task_state None.#033[00m
Nov 29 03:26:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:54.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:54 np0005539552 nova_compute[233724]: 2025-11-29 08:26:54.097 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:54 np0005539552 nova_compute[233724]: 2025-11-29 08:26:54.104 233728 INFO nova.compute.manager [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Rescuing#033[00m
Nov 29 03:26:54 np0005539552 nova_compute[233724]: 2025-11-29 08:26:54.104 233728 DEBUG oslo_concurrency.lockutils [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "refresh_cache-50daa6f5-6598-439f-a542-38e8ae7aded0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:26:54 np0005539552 nova_compute[233724]: 2025-11-29 08:26:54.105 233728 DEBUG oslo_concurrency.lockutils [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquired lock "refresh_cache-50daa6f5-6598-439f-a542-38e8ae7aded0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:26:54 np0005539552 nova_compute[233724]: 2025-11-29 08:26:54.105 233728 DEBUG nova.network.neutron [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:26:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:55.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:55 np0005539552 ovn_controller[133798]: 2025-11-29T08:26:55Z|00635|binding|INFO|Releasing lport d4f0104e-3913-4399-9086-37cf4d16e7c7 from this chassis (sb_readonly=0)
Nov 29 03:26:55 np0005539552 ovn_controller[133798]: 2025-11-29T08:26:55Z|00636|binding|INFO|Releasing lport 6f99f0ed-ee75-45c3-abe1-1afc889fd227 from this chassis (sb_readonly=0)
Nov 29 03:26:55 np0005539552 nova_compute[233724]: 2025-11-29 08:26:55.504 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:55 np0005539552 nova_compute[233724]: 2025-11-29 08:26:55.785 233728 DEBUG nova.network.neutron [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Updating instance_info_cache with network_info: [{"id": "91d5abbf-fd67-487f-bfaa-448b1daa5272", "address": "fa:16:3e:71:14:96", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91d5abbf-fd", "ovs_interfaceid": "91d5abbf-fd67-487f-bfaa-448b1daa5272", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:26:55 np0005539552 nova_compute[233724]: 2025-11-29 08:26:55.810 233728 DEBUG oslo_concurrency.lockutils [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Releasing lock "refresh_cache-50daa6f5-6598-439f-a542-38e8ae7aded0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:26:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:56.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:56 np0005539552 nova_compute[233724]: 2025-11-29 08:26:56.577 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:56 np0005539552 nova_compute[233724]: 2025-11-29 08:26:56.820 233728 DEBUG nova.virt.libvirt.driver [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:26:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:57.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:57 np0005539552 nova_compute[233724]: 2025-11-29 08:26:57.511 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:57 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:58.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:26:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:59.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:59 np0005539552 kernel: tap91d5abbf-fd (unregistering): left promiscuous mode
Nov 29 03:26:59 np0005539552 NetworkManager[48926]: <info>  [1764404819.0981] device (tap91d5abbf-fd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:26:59 np0005539552 nova_compute[233724]: 2025-11-29 08:26:59.100 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:59 np0005539552 nova_compute[233724]: 2025-11-29 08:26:59.127 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:59 np0005539552 ovn_controller[133798]: 2025-11-29T08:26:59Z|00637|binding|INFO|Releasing lport 91d5abbf-fd67-487f-bfaa-448b1daa5272 from this chassis (sb_readonly=0)
Nov 29 03:26:59 np0005539552 ovn_controller[133798]: 2025-11-29T08:26:59Z|00638|binding|INFO|Setting lport 91d5abbf-fd67-487f-bfaa-448b1daa5272 down in Southbound
Nov 29 03:26:59 np0005539552 ovn_controller[133798]: 2025-11-29T08:26:59Z|00639|binding|INFO|Removing iface tap91d5abbf-fd ovn-installed in OVS
Nov 29 03:26:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:59.135 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:14:96 10.100.0.4'], port_security=['fa:16:3e:71:14:96 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '50daa6f5-6598-439f-a542-38e8ae7aded0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9a83f8d8d7f4d08890407f978c05166', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b3bcb98f-0ed3-4c70-97d6-4df1974ced71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.191'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d0d36bf-5f41-4d6e-9e1b-1a2b5a9220ce, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=91d5abbf-fd67-487f-bfaa-448b1daa5272) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:26:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:59.137 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 91d5abbf-fd67-487f-bfaa-448b1daa5272 in datapath 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 unbound from our chassis#033[00m
Nov 29 03:26:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:59.139 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:26:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:59.143 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[736c9bfa-028c-460e-98d1-704a08c2ce74]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:59.144 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 namespace which is not needed anymore#033[00m
Nov 29 03:26:59 np0005539552 nova_compute[233724]: 2025-11-29 08:26:59.153 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:59 np0005539552 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d0000008a.scope: Deactivated successfully.
Nov 29 03:26:59 np0005539552 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d0000008a.scope: Consumed 15.715s CPU time.
Nov 29 03:26:59 np0005539552 systemd-machined[196379]: Machine qemu-60-instance-0000008a terminated.
Nov 29 03:26:59 np0005539552 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[290853]: [NOTICE]   (290875) : haproxy version is 2.8.14-c23fe91
Nov 29 03:26:59 np0005539552 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[290853]: [NOTICE]   (290875) : path to executable is /usr/sbin/haproxy
Nov 29 03:26:59 np0005539552 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[290853]: [WARNING]  (290875) : Exiting Master process...
Nov 29 03:26:59 np0005539552 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[290853]: [WARNING]  (290875) : Exiting Master process...
Nov 29 03:26:59 np0005539552 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[290853]: [ALERT]    (290875) : Current worker (290881) exited with code 143 (Terminated)
Nov 29 03:26:59 np0005539552 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[290853]: [WARNING]  (290875) : All workers exited. Exiting... (0)
Nov 29 03:26:59 np0005539552 systemd[1]: libpod-718998a9663ae0d6b62e4241c3a07650c8164a91acc0530929423f405d36edea.scope: Deactivated successfully.
Nov 29 03:26:59 np0005539552 podman[292990]: 2025-11-29 08:26:59.267729854 +0000 UTC m=+0.041802675 container died 718998a9663ae0d6b62e4241c3a07650c8164a91acc0530929423f405d36edea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:26:59 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-718998a9663ae0d6b62e4241c3a07650c8164a91acc0530929423f405d36edea-userdata-shm.mount: Deactivated successfully.
Nov 29 03:26:59 np0005539552 systemd[1]: var-lib-containers-storage-overlay-cecd38971edec5433ccadd379d05272e0e1381bc47b1a56343d1fde220745b45-merged.mount: Deactivated successfully.
Nov 29 03:26:59 np0005539552 podman[292990]: 2025-11-29 08:26:59.309753445 +0000 UTC m=+0.083826256 container cleanup 718998a9663ae0d6b62e4241c3a07650c8164a91acc0530929423f405d36edea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:26:59 np0005539552 systemd[1]: libpod-conmon-718998a9663ae0d6b62e4241c3a07650c8164a91acc0530929423f405d36edea.scope: Deactivated successfully.
Nov 29 03:26:59 np0005539552 nova_compute[233724]: 2025-11-29 08:26:59.482 233728 DEBUG nova.compute.manager [req-2b89e7ce-dbae-4648-818e-3880d4be2955 req-8213543a-7e72-4981-b56d-d00fbcd23571 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Received event network-vif-unplugged-91d5abbf-fd67-487f-bfaa-448b1daa5272 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:59 np0005539552 nova_compute[233724]: 2025-11-29 08:26:59.483 233728 DEBUG oslo_concurrency.lockutils [req-2b89e7ce-dbae-4648-818e-3880d4be2955 req-8213543a-7e72-4981-b56d-d00fbcd23571 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "50daa6f5-6598-439f-a542-38e8ae7aded0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:59 np0005539552 nova_compute[233724]: 2025-11-29 08:26:59.483 233728 DEBUG oslo_concurrency.lockutils [req-2b89e7ce-dbae-4648-818e-3880d4be2955 req-8213543a-7e72-4981-b56d-d00fbcd23571 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "50daa6f5-6598-439f-a542-38e8ae7aded0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:59 np0005539552 nova_compute[233724]: 2025-11-29 08:26:59.484 233728 DEBUG oslo_concurrency.lockutils [req-2b89e7ce-dbae-4648-818e-3880d4be2955 req-8213543a-7e72-4981-b56d-d00fbcd23571 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "50daa6f5-6598-439f-a542-38e8ae7aded0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:59 np0005539552 nova_compute[233724]: 2025-11-29 08:26:59.484 233728 DEBUG nova.compute.manager [req-2b89e7ce-dbae-4648-818e-3880d4be2955 req-8213543a-7e72-4981-b56d-d00fbcd23571 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] No waiting events found dispatching network-vif-unplugged-91d5abbf-fd67-487f-bfaa-448b1daa5272 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:26:59 np0005539552 nova_compute[233724]: 2025-11-29 08:26:59.484 233728 WARNING nova.compute.manager [req-2b89e7ce-dbae-4648-818e-3880d4be2955 req-8213543a-7e72-4981-b56d-d00fbcd23571 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Received unexpected event network-vif-unplugged-91d5abbf-fd67-487f-bfaa-448b1daa5272 for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:26:59 np0005539552 podman[293021]: 2025-11-29 08:26:59.497166328 +0000 UTC m=+0.167018415 container remove 718998a9663ae0d6b62e4241c3a07650c8164a91acc0530929423f405d36edea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:26:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:59.504 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[16478768-0f79-49be-a5f5-5a75c4ac711d]: (4, ('Sat Nov 29 08:26:59 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 (718998a9663ae0d6b62e4241c3a07650c8164a91acc0530929423f405d36edea)\n718998a9663ae0d6b62e4241c3a07650c8164a91acc0530929423f405d36edea\nSat Nov 29 08:26:59 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 (718998a9663ae0d6b62e4241c3a07650c8164a91acc0530929423f405d36edea)\n718998a9663ae0d6b62e4241c3a07650c8164a91acc0530929423f405d36edea\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:59.506 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[85b63bcb-143c-4fb7-8bfe-9b038624664a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:59.507 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5da19f7d-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:59 np0005539552 nova_compute[233724]: 2025-11-29 08:26:59.510 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:59 np0005539552 kernel: tap5da19f7d-30: left promiscuous mode
Nov 29 03:26:59 np0005539552 nova_compute[233724]: 2025-11-29 08:26:59.538 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:59.540 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[385391b2-ef57-42e6-b321-ceb96c8b7225]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:59.552 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[49f86d81-64ce-4c70-98c1-2dfdff8385f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:59.553 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[843c970b-af8b-4a01-9689-7b873c95dadc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:59.573 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e2c0fa37-04cb-43cd-99f1-a71f63f2ab3c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 782473, 'reachable_time': 28062, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293049, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:59 np0005539552 systemd[1]: run-netns-ovnmeta\x2d5da19f7d\x2d3aa0\x2d41e7\x2d88b0\x2db9ef17fa4445.mount: Deactivated successfully.
Nov 29 03:26:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:59.577 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:26:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:26:59.577 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[48a616c5-827c-410f-ae06-2bea95ebb505]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:59 np0005539552 nova_compute[233724]: 2025-11-29 08:26:59.837 233728 INFO nova.virt.libvirt.driver [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Instance shutdown successfully after 3 seconds.#033[00m
Nov 29 03:26:59 np0005539552 nova_compute[233724]: 2025-11-29 08:26:59.841 233728 INFO nova.virt.libvirt.driver [-] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Instance destroyed successfully.#033[00m
Nov 29 03:26:59 np0005539552 nova_compute[233724]: 2025-11-29 08:26:59.842 233728 DEBUG nova.objects.instance [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'numa_topology' on Instance uuid 50daa6f5-6598-439f-a542-38e8ae7aded0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:26:59 np0005539552 nova_compute[233724]: 2025-11-29 08:26:59.856 233728 INFO nova.virt.libvirt.driver [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Attempting a stable device rescue#033[00m
Nov 29 03:27:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:27:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:00.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:27:00 np0005539552 nova_compute[233724]: 2025-11-29 08:27:00.501 233728 DEBUG nova.virt.libvirt.driver [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdc', 'type': 'disk'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Nov 29 03:27:00 np0005539552 nova_compute[233724]: 2025-11-29 08:27:00.508 233728 DEBUG nova.virt.libvirt.driver [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 29 03:27:00 np0005539552 nova_compute[233724]: 2025-11-29 08:27:00.509 233728 INFO nova.virt.libvirt.driver [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Creating image(s)#033[00m
Nov 29 03:27:00 np0005539552 nova_compute[233724]: 2025-11-29 08:27:00.538 233728 DEBUG nova.storage.rbd_utils [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rbd image 50daa6f5-6598-439f-a542-38e8ae7aded0_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:27:00 np0005539552 nova_compute[233724]: 2025-11-29 08:27:00.541 233728 DEBUG nova.objects.instance [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 50daa6f5-6598-439f-a542-38e8ae7aded0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:27:00 np0005539552 nova_compute[233724]: 2025-11-29 08:27:00.588 233728 DEBUG nova.storage.rbd_utils [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rbd image 50daa6f5-6598-439f-a542-38e8ae7aded0_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:27:00 np0005539552 nova_compute[233724]: 2025-11-29 08:27:00.616 233728 DEBUG nova.storage.rbd_utils [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rbd image 50daa6f5-6598-439f-a542-38e8ae7aded0_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:27:00 np0005539552 nova_compute[233724]: 2025-11-29 08:27:00.619 233728 DEBUG oslo_concurrency.lockutils [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "abf94ae5f8e93123cf9f1ffe08e52d084a708beb" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:00 np0005539552 nova_compute[233724]: 2025-11-29 08:27:00.620 233728 DEBUG oslo_concurrency.lockutils [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "abf94ae5f8e93123cf9f1ffe08e52d084a708beb" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:00 np0005539552 nova_compute[233724]: 2025-11-29 08:27:00.848 233728 DEBUG nova.virt.libvirt.imagebackend [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Image locations are: [{'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/f5aee037-dd13-47e6-81af-147c317d7457/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/f5aee037-dd13-47e6-81af-147c317d7457/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 29 03:27:00 np0005539552 nova_compute[233724]: 2025-11-29 08:27:00.899 233728 DEBUG nova.virt.libvirt.imagebackend [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Selected location: {'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/f5aee037-dd13-47e6-81af-147c317d7457/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Nov 29 03:27:00 np0005539552 nova_compute[233724]: 2025-11-29 08:27:00.900 233728 DEBUG nova.storage.rbd_utils [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] cloning images/f5aee037-dd13-47e6-81af-147c317d7457@snap to None/50daa6f5-6598-439f-a542-38e8ae7aded0_disk.rescue clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:27:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:01.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:01 np0005539552 nova_compute[233724]: 2025-11-29 08:27:01.420 233728 DEBUG oslo_concurrency.lockutils [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "abf94ae5f8e93123cf9f1ffe08e52d084a708beb" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:01 np0005539552 nova_compute[233724]: 2025-11-29 08:27:01.479 233728 DEBUG nova.objects.instance [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'migration_context' on Instance uuid 50daa6f5-6598-439f-a542-38e8ae7aded0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:27:01 np0005539552 nova_compute[233724]: 2025-11-29 08:27:01.524 233728 DEBUG nova.virt.libvirt.driver [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:27:01 np0005539552 nova_compute[233724]: 2025-11-29 08:27:01.528 233728 DEBUG nova.virt.libvirt.driver [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Start _get_guest_xml network_info=[{"id": "91d5abbf-fd67-487f-bfaa-448b1daa5272", "address": "fa:16:3e:71:14:96", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "vif_mac": "fa:16:3e:71:14:96"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91d5abbf-fd", "ovs_interfaceid": "91d5abbf-fd67-487f-bfaa-448b1daa5272", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdc', 'type': 'disk'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'f5aee037-dd13-47e6-81af-147c317d7457', 'kernel_id': '', 'ramdisk_id': ''} block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-8a80d4c0-d642-46d8-a891-79c20b955b6a', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '8a80d4c0-d642-46d8-a891-79c20b955b6a', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '50daa6f5-6598-439f-a542-38e8ae7aded0', 'attached_at': '', 'detached_at': '', 'volume_id': '8a80d4c0-d642-46d8-a891-79c20b955b6a', 'serial': '8a80d4c0-d642-46d8-a891-79c20b955b6a'}, 'delete_on_termination': False, 'guest_format': None, 'boot_index': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'mount_device': '/dev/vdb', 'attachment_id': 'd79b5e8b-7431-43e3-acec-da217fba59ac', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:27:01 np0005539552 nova_compute[233724]: 2025-11-29 08:27:01.528 233728 DEBUG nova.objects.instance [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'resources' on Instance uuid 50daa6f5-6598-439f-a542-38e8ae7aded0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:27:01 np0005539552 nova_compute[233724]: 2025-11-29 08:27:01.552 233728 WARNING nova.virt.libvirt.driver [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:27:01 np0005539552 nova_compute[233724]: 2025-11-29 08:27:01.557 233728 DEBUG nova.virt.libvirt.host [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:27:01 np0005539552 nova_compute[233724]: 2025-11-29 08:27:01.558 233728 DEBUG nova.virt.libvirt.host [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:27:01 np0005539552 nova_compute[233724]: 2025-11-29 08:27:01.561 233728 DEBUG nova.compute.manager [req-8597f96e-a4fb-42b6-91c3-61ab6078ca0b req-fad120f3-d6ff-4c24-86bf-180682de922f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Received event network-vif-plugged-91d5abbf-fd67-487f-bfaa-448b1daa5272 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:01 np0005539552 nova_compute[233724]: 2025-11-29 08:27:01.561 233728 DEBUG oslo_concurrency.lockutils [req-8597f96e-a4fb-42b6-91c3-61ab6078ca0b req-fad120f3-d6ff-4c24-86bf-180682de922f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "50daa6f5-6598-439f-a542-38e8ae7aded0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:01 np0005539552 nova_compute[233724]: 2025-11-29 08:27:01.561 233728 DEBUG oslo_concurrency.lockutils [req-8597f96e-a4fb-42b6-91c3-61ab6078ca0b req-fad120f3-d6ff-4c24-86bf-180682de922f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "50daa6f5-6598-439f-a542-38e8ae7aded0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:01 np0005539552 nova_compute[233724]: 2025-11-29 08:27:01.562 233728 DEBUG oslo_concurrency.lockutils [req-8597f96e-a4fb-42b6-91c3-61ab6078ca0b req-fad120f3-d6ff-4c24-86bf-180682de922f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "50daa6f5-6598-439f-a542-38e8ae7aded0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:01 np0005539552 nova_compute[233724]: 2025-11-29 08:27:01.562 233728 DEBUG nova.compute.manager [req-8597f96e-a4fb-42b6-91c3-61ab6078ca0b req-fad120f3-d6ff-4c24-86bf-180682de922f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] No waiting events found dispatching network-vif-plugged-91d5abbf-fd67-487f-bfaa-448b1daa5272 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:27:01 np0005539552 nova_compute[233724]: 2025-11-29 08:27:01.562 233728 WARNING nova.compute.manager [req-8597f96e-a4fb-42b6-91c3-61ab6078ca0b req-fad120f3-d6ff-4c24-86bf-180682de922f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Received unexpected event network-vif-plugged-91d5abbf-fd67-487f-bfaa-448b1daa5272 for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:27:01 np0005539552 nova_compute[233724]: 2025-11-29 08:27:01.565 233728 DEBUG nova.virt.libvirt.host [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:27:01 np0005539552 nova_compute[233724]: 2025-11-29 08:27:01.567 233728 DEBUG nova.virt.libvirt.host [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:27:01 np0005539552 nova_compute[233724]: 2025-11-29 08:27:01.568 233728 DEBUG nova.virt.libvirt.driver [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:27:01 np0005539552 nova_compute[233724]: 2025-11-29 08:27:01.568 233728 DEBUG nova.virt.hardware [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:27:01 np0005539552 nova_compute[233724]: 2025-11-29 08:27:01.569 233728 DEBUG nova.virt.hardware [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:27:01 np0005539552 nova_compute[233724]: 2025-11-29 08:27:01.569 233728 DEBUG nova.virt.hardware [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:27:01 np0005539552 nova_compute[233724]: 2025-11-29 08:27:01.569 233728 DEBUG nova.virt.hardware [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:27:01 np0005539552 nova_compute[233724]: 2025-11-29 08:27:01.570 233728 DEBUG nova.virt.hardware [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:27:01 np0005539552 nova_compute[233724]: 2025-11-29 08:27:01.570 233728 DEBUG nova.virt.hardware [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:27:01 np0005539552 nova_compute[233724]: 2025-11-29 08:27:01.570 233728 DEBUG nova.virt.hardware [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:27:01 np0005539552 nova_compute[233724]: 2025-11-29 08:27:01.571 233728 DEBUG nova.virt.hardware [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:27:01 np0005539552 nova_compute[233724]: 2025-11-29 08:27:01.571 233728 DEBUG nova.virt.hardware [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:27:01 np0005539552 nova_compute[233724]: 2025-11-29 08:27:01.571 233728 DEBUG nova.virt.hardware [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:27:01 np0005539552 nova_compute[233724]: 2025-11-29 08:27:01.571 233728 DEBUG nova.virt.hardware [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:27:01 np0005539552 nova_compute[233724]: 2025-11-29 08:27:01.572 233728 DEBUG nova.objects.instance [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 50daa6f5-6598-439f-a542-38e8ae7aded0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:27:01 np0005539552 nova_compute[233724]: 2025-11-29 08:27:01.579 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:01 np0005539552 nova_compute[233724]: 2025-11-29 08:27:01.588 233728 DEBUG oslo_concurrency.processutils [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:02.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:27:02 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4145068896' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:27:02 np0005539552 nova_compute[233724]: 2025-11-29 08:27:02.141 233728 DEBUG oslo_concurrency.processutils [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:02 np0005539552 nova_compute[233724]: 2025-11-29 08:27:02.185 233728 DEBUG oslo_concurrency.processutils [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:27:02 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/750126085' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:27:02 np0005539552 nova_compute[233724]: 2025-11-29 08:27:02.606 233728 DEBUG oslo_concurrency.processutils [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:02 np0005539552 nova_compute[233724]: 2025-11-29 08:27:02.743 233728 DEBUG oslo_concurrency.processutils [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:03.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:27:03 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/453545199' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:27:03 np0005539552 nova_compute[233724]: 2025-11-29 08:27:03.195 233728 DEBUG oslo_concurrency.processutils [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:03 np0005539552 nova_compute[233724]: 2025-11-29 08:27:03.197 233728 DEBUG nova.virt.libvirt.vif [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:26:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-419138925',display_name='tempest-ServerStableDeviceRescueTest-server-419138925',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-419138925',id=138,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDT3yKh1OQM/APLMkF69FKCCA5nBzuP29507Q5By6a2JvA70+RYIsFvSoa6+Z7yspJ+R+0ak0hPbMHRp8sVSoGCRfzVOzgUhwEXFYk/q/u+LWAX+bPUz2Gc3J/eOzCbxKw==',key_name='tempest-keypair-1950933442',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:26:20Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a9a83f8d8d7f4d08890407f978c05166',ramdisk_id='',reservation_id='r-c9jn2vfu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-507673154',owner_user_name='tempest-ServerStableDeviceRescueTest-507673154-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:26:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='873186539acb4bf9b90513e0e1beb56f',uuid=50daa6f5-6598-439f-a542-38e8ae7aded0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "91d5abbf-fd67-487f-bfaa-448b1daa5272", "address": "fa:16:3e:71:14:96", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "vif_mac": "fa:16:3e:71:14:96"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91d5abbf-fd", "ovs_interfaceid": "91d5abbf-fd67-487f-bfaa-448b1daa5272", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:27:03 np0005539552 nova_compute[233724]: 2025-11-29 08:27:03.198 233728 DEBUG nova.network.os_vif_util [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Converting VIF {"id": "91d5abbf-fd67-487f-bfaa-448b1daa5272", "address": "fa:16:3e:71:14:96", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "vif_mac": "fa:16:3e:71:14:96"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91d5abbf-fd", "ovs_interfaceid": "91d5abbf-fd67-487f-bfaa-448b1daa5272", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:27:03 np0005539552 nova_compute[233724]: 2025-11-29 08:27:03.199 233728 DEBUG nova.network.os_vif_util [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:71:14:96,bridge_name='br-int',has_traffic_filtering=True,id=91d5abbf-fd67-487f-bfaa-448b1daa5272,network=Network(5da19f7d-3aa0-41e7-88b0-b9ef17fa4445),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91d5abbf-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:27:03 np0005539552 nova_compute[233724]: 2025-11-29 08:27:03.201 233728 DEBUG nova.objects.instance [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'pci_devices' on Instance uuid 50daa6f5-6598-439f-a542-38e8ae7aded0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:27:03 np0005539552 nova_compute[233724]: 2025-11-29 08:27:03.219 233728 DEBUG nova.virt.libvirt.driver [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:27:03 np0005539552 nova_compute[233724]:  <uuid>50daa6f5-6598-439f-a542-38e8ae7aded0</uuid>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:  <name>instance-0000008a</name>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      <nova:name>tempest-ServerStableDeviceRescueTest-server-419138925</nova:name>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:27:01</nova:creationTime>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:27:03 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:        <nova:user uuid="873186539acb4bf9b90513e0e1beb56f">tempest-ServerStableDeviceRescueTest-507673154-project-member</nova:user>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:        <nova:project uuid="a9a83f8d8d7f4d08890407f978c05166">tempest-ServerStableDeviceRescueTest-507673154</nova:project>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:        <nova:port uuid="91d5abbf-fd67-487f-bfaa-448b1daa5272">
Nov 29 03:27:03 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      <entry name="serial">50daa6f5-6598-439f-a542-38e8ae7aded0</entry>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      <entry name="uuid">50daa6f5-6598-439f-a542-38e8ae7aded0</entry>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/50daa6f5-6598-439f-a542-38e8ae7aded0_disk">
Nov 29 03:27:03 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:27:03 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/50daa6f5-6598-439f-a542-38e8ae7aded0_disk.config">
Nov 29 03:27:03 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:27:03 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="volumes/volume-8a80d4c0-d642-46d8-a891-79c20b955b6a">
Nov 29 03:27:03 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:27:03 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      <target dev="vdb" bus="virtio"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      <serial>8a80d4c0-d642-46d8-a891-79c20b955b6a</serial>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/50daa6f5-6598-439f-a542-38e8ae7aded0_disk.rescue">
Nov 29 03:27:03 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:27:03 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      <target dev="vdc" bus="virtio"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      <boot order="1"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:71:14:96"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      <target dev="tap91d5abbf-fd"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/50daa6f5-6598-439f-a542-38e8ae7aded0/console.log" append="off"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:27:03 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:27:03 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:27:03 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:27:03 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:27:03 np0005539552 nova_compute[233724]: 2025-11-29 08:27:03.234 233728 INFO nova.virt.libvirt.driver [-] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Instance destroyed successfully.#033[00m
Nov 29 03:27:03 np0005539552 nova_compute[233724]: 2025-11-29 08:27:03.297 233728 DEBUG nova.virt.libvirt.driver [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:27:03 np0005539552 nova_compute[233724]: 2025-11-29 08:27:03.297 233728 DEBUG nova.virt.libvirt.driver [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:27:03 np0005539552 nova_compute[233724]: 2025-11-29 08:27:03.297 233728 DEBUG nova.virt.libvirt.driver [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:27:03 np0005539552 nova_compute[233724]: 2025-11-29 08:27:03.297 233728 DEBUG nova.virt.libvirt.driver [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:27:03 np0005539552 nova_compute[233724]: 2025-11-29 08:27:03.297 233728 DEBUG nova.virt.libvirt.driver [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] No VIF found with MAC fa:16:3e:71:14:96, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:27:03 np0005539552 nova_compute[233724]: 2025-11-29 08:27:03.298 233728 INFO nova.virt.libvirt.driver [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Using config drive#033[00m
Nov 29 03:27:03 np0005539552 nova_compute[233724]: 2025-11-29 08:27:03.326 233728 DEBUG nova.storage.rbd_utils [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rbd image 50daa6f5-6598-439f-a542-38e8ae7aded0_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:27:03 np0005539552 nova_compute[233724]: 2025-11-29 08:27:03.345 233728 DEBUG nova.objects.instance [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 50daa6f5-6598-439f-a542-38e8ae7aded0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:27:03 np0005539552 nova_compute[233724]: 2025-11-29 08:27:03.391 233728 DEBUG nova.objects.instance [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'keypairs' on Instance uuid 50daa6f5-6598-439f-a542-38e8ae7aded0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:27:03 np0005539552 nova_compute[233724]: 2025-11-29 08:27:03.861 233728 INFO nova.virt.libvirt.driver [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Creating config drive at /var/lib/nova/instances/50daa6f5-6598-439f-a542-38e8ae7aded0/disk.config.rescue#033[00m
Nov 29 03:27:03 np0005539552 nova_compute[233724]: 2025-11-29 08:27:03.867 233728 DEBUG oslo_concurrency.processutils [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/50daa6f5-6598-439f-a542-38e8ae7aded0/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjsowma9n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:03 np0005539552 nova_compute[233724]: 2025-11-29 08:27:03.965 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404808.9639392, d2f10f56-bc70-4ac8-953c-99479942f88d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:27:03 np0005539552 nova_compute[233724]: 2025-11-29 08:27:03.966 233728 INFO nova.compute.manager [-] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:27:03 np0005539552 nova_compute[233724]: 2025-11-29 08:27:03.988 233728 DEBUG nova.compute.manager [None req-e512e503-ba90-4ec8-baf4-92ae67a1a83c - - - - - -] [instance: d2f10f56-bc70-4ac8-953c-99479942f88d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:27:04 np0005539552 nova_compute[233724]: 2025-11-29 08:27:04.009 233728 DEBUG oslo_concurrency.processutils [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/50daa6f5-6598-439f-a542-38e8ae7aded0/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjsowma9n" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:04.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:04 np0005539552 nova_compute[233724]: 2025-11-29 08:27:04.039 233728 DEBUG nova.storage.rbd_utils [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rbd image 50daa6f5-6598-439f-a542-38e8ae7aded0_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:27:04 np0005539552 nova_compute[233724]: 2025-11-29 08:27:04.043 233728 DEBUG oslo_concurrency.processutils [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/50daa6f5-6598-439f-a542-38e8ae7aded0/disk.config.rescue 50daa6f5-6598-439f-a542-38e8ae7aded0_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:04 np0005539552 nova_compute[233724]: 2025-11-29 08:27:04.101 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:04 np0005539552 nova_compute[233724]: 2025-11-29 08:27:04.623 233728 DEBUG oslo_concurrency.processutils [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/50daa6f5-6598-439f-a542-38e8ae7aded0/disk.config.rescue 50daa6f5-6598-439f-a542-38e8ae7aded0_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:04 np0005539552 nova_compute[233724]: 2025-11-29 08:27:04.624 233728 INFO nova.virt.libvirt.driver [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Deleting local config drive /var/lib/nova/instances/50daa6f5-6598-439f-a542-38e8ae7aded0/disk.config.rescue because it was imported into RBD.#033[00m
Nov 29 03:27:04 np0005539552 kernel: tap91d5abbf-fd: entered promiscuous mode
Nov 29 03:27:04 np0005539552 NetworkManager[48926]: <info>  [1764404824.6872] manager: (tap91d5abbf-fd): new Tun device (/org/freedesktop/NetworkManager/Devices/288)
Nov 29 03:27:04 np0005539552 ovn_controller[133798]: 2025-11-29T08:27:04Z|00640|binding|INFO|Claiming lport 91d5abbf-fd67-487f-bfaa-448b1daa5272 for this chassis.
Nov 29 03:27:04 np0005539552 ovn_controller[133798]: 2025-11-29T08:27:04Z|00641|binding|INFO|91d5abbf-fd67-487f-bfaa-448b1daa5272: Claiming fa:16:3e:71:14:96 10.100.0.4
Nov 29 03:27:04 np0005539552 nova_compute[233724]: 2025-11-29 08:27:04.693 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:04 np0005539552 ovn_controller[133798]: 2025-11-29T08:27:04Z|00642|binding|INFO|Setting lport 91d5abbf-fd67-487f-bfaa-448b1daa5272 ovn-installed in OVS
Nov 29 03:27:04 np0005539552 nova_compute[233724]: 2025-11-29 08:27:04.714 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:04 np0005539552 systemd-udevd[293401]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:27:04 np0005539552 systemd-machined[196379]: New machine qemu-64-instance-0000008a.
Nov 29 03:27:04 np0005539552 NetworkManager[48926]: <info>  [1764404824.7344] device (tap91d5abbf-fd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:27:04 np0005539552 NetworkManager[48926]: <info>  [1764404824.7356] device (tap91d5abbf-fd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:27:04 np0005539552 systemd[1]: Started Virtual Machine qemu-64-instance-0000008a.
Nov 29 03:27:04 np0005539552 ovn_controller[133798]: 2025-11-29T08:27:04Z|00643|binding|INFO|Setting lport 91d5abbf-fd67-487f-bfaa-448b1daa5272 up in Southbound
Nov 29 03:27:04 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:04.916 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:14:96 10.100.0.4'], port_security=['fa:16:3e:71:14:96 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '50daa6f5-6598-439f-a542-38e8ae7aded0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9a83f8d8d7f4d08890407f978c05166', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'b3bcb98f-0ed3-4c70-97d6-4df1974ced71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.191'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d0d36bf-5f41-4d6e-9e1b-1a2b5a9220ce, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=91d5abbf-fd67-487f-bfaa-448b1daa5272) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:27:04 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:04.918 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 91d5abbf-fd67-487f-bfaa-448b1daa5272 in datapath 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 bound to our chassis#033[00m
Nov 29 03:27:04 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:04.919 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445#033[00m
Nov 29 03:27:04 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:04.930 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[13a7459c-2069-495d-a9df-462e75d77ff2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:04 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:04.931 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5da19f7d-31 in ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:27:04 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:04.933 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5da19f7d-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:27:04 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:04.933 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ae716336-d2de-4204-bda4-f9af4dc0a3b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:04 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:04.934 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3498ee25-9407-4508-9ee2-e8ed30e7bff0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:04 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:04.950 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[cd30901f-99ea-43fb-9d03-bcd68096fd24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:04 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:04.964 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[45c0ab0d-5d90-4855-8095-3899e882407e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:05.003 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[1f4eedea-a849-4cc1-96a1-e782c3d2f032]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:05 np0005539552 NetworkManager[48926]: <info>  [1764404825.0124] manager: (tap5da19f7d-30): new Veth device (/org/freedesktop/NetworkManager/Devices/289)
Nov 29 03:27:05 np0005539552 systemd-udevd[293404]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:27:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:05.015 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[5683d5e2-524c-439e-9ab6-a6b730d7ca76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:05.050 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[a57b679e-7d0b-4f53-a53d-63993b9e930b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:05.053 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[77a26202-ff82-4cc2-be65-a709227e68e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:05 np0005539552 NetworkManager[48926]: <info>  [1764404825.0775] device (tap5da19f7d-30): carrier: link connected
Nov 29 03:27:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:05.086 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[28499261-3339-4907-a769-73405c749f14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:05.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:05.110 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[806e39f5-e0e0-4043-96d2-34664f8fa5b7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5da19f7d-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:8e:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 192], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 787076, 'reachable_time': 16286, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293506, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:05.132 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6955ad4d-4e26-44e9-91da-0b7a3ef641ee]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe18:8e20'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 787076, 'tstamp': 787076}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293510, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:05.147 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[65d55fe3-98d1-4289-be8e-5c6fc1c9e90f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5da19f7d-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:8e:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 192], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 787076, 'reachable_time': 16286, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 293515, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:05.176 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f6582694-220e-4f2c-9b23-767a4b404d79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:05 np0005539552 nova_compute[233724]: 2025-11-29 08:27:05.213 233728 DEBUG nova.virt.libvirt.host [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Removed pending event for 50daa6f5-6598-439f-a542-38e8ae7aded0 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:27:05 np0005539552 nova_compute[233724]: 2025-11-29 08:27:05.214 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404825.2132962, 50daa6f5-6598-439f-a542-38e8ae7aded0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:27:05 np0005539552 ovn_controller[133798]: 2025-11-29T08:27:05Z|00081|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:72:4b:49 10.100.0.13
Nov 29 03:27:05 np0005539552 nova_compute[233724]: 2025-11-29 08:27:05.214 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:27:05 np0005539552 ovn_controller[133798]: 2025-11-29T08:27:05Z|00082|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:72:4b:49 10.100.0.13
Nov 29 03:27:05 np0005539552 nova_compute[233724]: 2025-11-29 08:27:05.221 233728 DEBUG nova.compute.manager [None req-ea6ae335-28ec-4dfd-961d-f37a7c429c3f 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:27:05 np0005539552 nova_compute[233724]: 2025-11-29 08:27:05.238 233728 DEBUG nova.compute.manager [req-5138d930-2716-402f-8b46-c2f65b0bebe6 req-fdb4cc86-5949-44bf-9515-760011fcf0e4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Received event network-vif-plugged-91d5abbf-fd67-487f-bfaa-448b1daa5272 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:05 np0005539552 nova_compute[233724]: 2025-11-29 08:27:05.239 233728 DEBUG oslo_concurrency.lockutils [req-5138d930-2716-402f-8b46-c2f65b0bebe6 req-fdb4cc86-5949-44bf-9515-760011fcf0e4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "50daa6f5-6598-439f-a542-38e8ae7aded0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:05 np0005539552 nova_compute[233724]: 2025-11-29 08:27:05.239 233728 DEBUG oslo_concurrency.lockutils [req-5138d930-2716-402f-8b46-c2f65b0bebe6 req-fdb4cc86-5949-44bf-9515-760011fcf0e4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "50daa6f5-6598-439f-a542-38e8ae7aded0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:05 np0005539552 nova_compute[233724]: 2025-11-29 08:27:05.240 233728 DEBUG oslo_concurrency.lockutils [req-5138d930-2716-402f-8b46-c2f65b0bebe6 req-fdb4cc86-5949-44bf-9515-760011fcf0e4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "50daa6f5-6598-439f-a542-38e8ae7aded0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:05 np0005539552 nova_compute[233724]: 2025-11-29 08:27:05.240 233728 DEBUG nova.compute.manager [req-5138d930-2716-402f-8b46-c2f65b0bebe6 req-fdb4cc86-5949-44bf-9515-760011fcf0e4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] No waiting events found dispatching network-vif-plugged-91d5abbf-fd67-487f-bfaa-448b1daa5272 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:27:05 np0005539552 nova_compute[233724]: 2025-11-29 08:27:05.241 233728 WARNING nova.compute.manager [req-5138d930-2716-402f-8b46-c2f65b0bebe6 req-fdb4cc86-5949-44bf-9515-760011fcf0e4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Received unexpected event network-vif-plugged-91d5abbf-fd67-487f-bfaa-448b1daa5272 for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:27:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:05.243 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6a7744af-379e-4d95-b25d-efbd4fbb4827]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:05.244 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5da19f7d-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:05.244 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:27:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:05.244 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5da19f7d-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:05 np0005539552 nova_compute[233724]: 2025-11-29 08:27:05.246 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:05 np0005539552 kernel: tap5da19f7d-30: entered promiscuous mode
Nov 29 03:27:05 np0005539552 NetworkManager[48926]: <info>  [1764404825.2478] manager: (tap5da19f7d-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/290)
Nov 29 03:27:05 np0005539552 nova_compute[233724]: 2025-11-29 08:27:05.247 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:05.249 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5da19f7d-30, col_values=(('external_ids', {'iface-id': 'd4f0104e-3913-4399-9086-37cf4d16e7c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:05 np0005539552 nova_compute[233724]: 2025-11-29 08:27:05.250 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:05 np0005539552 ovn_controller[133798]: 2025-11-29T08:27:05Z|00644|binding|INFO|Releasing lport d4f0104e-3913-4399-9086-37cf4d16e7c7 from this chassis (sb_readonly=0)
Nov 29 03:27:05 np0005539552 nova_compute[233724]: 2025-11-29 08:27:05.263 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:27:05 np0005539552 nova_compute[233724]: 2025-11-29 08:27:05.266 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:27:05 np0005539552 nova_compute[233724]: 2025-11-29 08:27:05.269 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:05.270 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5da19f7d-3aa0-41e7-88b0-b9ef17fa4445.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5da19f7d-3aa0-41e7-88b0-b9ef17fa4445.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:27:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:05.271 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[16cc69c2-c584-4243-8488-fabc94f5b6e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:05.272 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:27:05 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:27:05 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:27:05 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445
Nov 29 03:27:05 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:27:05 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:27:05 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:27:05 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/5da19f7d-3aa0-41e7-88b0-b9ef17fa4445.pid.haproxy
Nov 29 03:27:05 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:27:05 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:27:05 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:27:05 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:27:05 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:27:05 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:27:05 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:27:05 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:27:05 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:27:05 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:27:05 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:27:05 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:27:05 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:27:05 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:27:05 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:27:05 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:27:05 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:27:05 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:27:05 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:27:05 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:27:05 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445
Nov 29 03:27:05 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:27:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:05.273 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'env', 'PROCESS_TAG=haproxy-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5da19f7d-3aa0-41e7-88b0-b9ef17fa4445.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:27:05 np0005539552 nova_compute[233724]: 2025-11-29 08:27:05.300 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Nov 29 03:27:05 np0005539552 nova_compute[233724]: 2025-11-29 08:27:05.300 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404825.2166984, 50daa6f5-6598-439f-a542-38e8ae7aded0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:27:05 np0005539552 nova_compute[233724]: 2025-11-29 08:27:05.300 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] VM Started (Lifecycle Event)#033[00m
Nov 29 03:27:05 np0005539552 nova_compute[233724]: 2025-11-29 08:27:05.346 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:27:05 np0005539552 nova_compute[233724]: 2025-11-29 08:27:05.349 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:27:05 np0005539552 podman[293547]: 2025-11-29 08:27:05.608864277 +0000 UTC m=+0.044129759 container create b577651df8420951b6764e67621ad06ebe2d96e459da533799aafbb2abd49030 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 03:27:05 np0005539552 systemd[1]: Started libpod-conmon-b577651df8420951b6764e67621ad06ebe2d96e459da533799aafbb2abd49030.scope.
Nov 29 03:27:05 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:27:05 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13b9481a934ca5c1f4a20cf17279f490a0f10d529be24dbe9c45dbdeb5626e73/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:27:05 np0005539552 podman[293547]: 2025-11-29 08:27:05.58556164 +0000 UTC m=+0.020827122 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:27:05 np0005539552 podman[293547]: 2025-11-29 08:27:05.695897468 +0000 UTC m=+0.131162980 container init b577651df8420951b6764e67621ad06ebe2d96e459da533799aafbb2abd49030 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:27:05 np0005539552 podman[293547]: 2025-11-29 08:27:05.701036937 +0000 UTC m=+0.136302429 container start b577651df8420951b6764e67621ad06ebe2d96e459da533799aafbb2abd49030 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 03:27:05 np0005539552 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[293562]: [NOTICE]   (293566) : New worker (293568) forked
Nov 29 03:27:05 np0005539552 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[293562]: [NOTICE]   (293566) : Loading success.
Nov 29 03:27:05 np0005539552 nova_compute[233724]: 2025-11-29 08:27:05.905 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:06.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:06 np0005539552 nova_compute[233724]: 2025-11-29 08:27:06.580 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:07.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:07 np0005539552 nova_compute[233724]: 2025-11-29 08:27:07.248 233728 INFO nova.compute.manager [None req-c245fc50-acfb-483a-be8a-f7a5c3080a1a 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Unrescuing#033[00m
Nov 29 03:27:07 np0005539552 nova_compute[233724]: 2025-11-29 08:27:07.248 233728 DEBUG oslo_concurrency.lockutils [None req-c245fc50-acfb-483a-be8a-f7a5c3080a1a 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "refresh_cache-50daa6f5-6598-439f-a542-38e8ae7aded0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:27:07 np0005539552 nova_compute[233724]: 2025-11-29 08:27:07.248 233728 DEBUG oslo_concurrency.lockutils [None req-c245fc50-acfb-483a-be8a-f7a5c3080a1a 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquired lock "refresh_cache-50daa6f5-6598-439f-a542-38e8ae7aded0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:27:07 np0005539552 nova_compute[233724]: 2025-11-29 08:27:07.249 233728 DEBUG nova.network.neutron [None req-c245fc50-acfb-483a-be8a-f7a5c3080a1a 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:27:07 np0005539552 nova_compute[233724]: 2025-11-29 08:27:07.402 233728 DEBUG nova.compute.manager [req-44a99eca-da2c-4966-85df-cfcaf4e61203 req-328a4a8d-0abd-4fc8-8179-7b8cb38f7d75 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Received event network-vif-plugged-91d5abbf-fd67-487f-bfaa-448b1daa5272 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:07 np0005539552 nova_compute[233724]: 2025-11-29 08:27:07.402 233728 DEBUG oslo_concurrency.lockutils [req-44a99eca-da2c-4966-85df-cfcaf4e61203 req-328a4a8d-0abd-4fc8-8179-7b8cb38f7d75 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "50daa6f5-6598-439f-a542-38e8ae7aded0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:07 np0005539552 nova_compute[233724]: 2025-11-29 08:27:07.402 233728 DEBUG oslo_concurrency.lockutils [req-44a99eca-da2c-4966-85df-cfcaf4e61203 req-328a4a8d-0abd-4fc8-8179-7b8cb38f7d75 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "50daa6f5-6598-439f-a542-38e8ae7aded0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:07 np0005539552 nova_compute[233724]: 2025-11-29 08:27:07.402 233728 DEBUG oslo_concurrency.lockutils [req-44a99eca-da2c-4966-85df-cfcaf4e61203 req-328a4a8d-0abd-4fc8-8179-7b8cb38f7d75 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "50daa6f5-6598-439f-a542-38e8ae7aded0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:07 np0005539552 nova_compute[233724]: 2025-11-29 08:27:07.403 233728 DEBUG nova.compute.manager [req-44a99eca-da2c-4966-85df-cfcaf4e61203 req-328a4a8d-0abd-4fc8-8179-7b8cb38f7d75 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] No waiting events found dispatching network-vif-plugged-91d5abbf-fd67-487f-bfaa-448b1daa5272 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:27:07 np0005539552 nova_compute[233724]: 2025-11-29 08:27:07.403 233728 WARNING nova.compute.manager [req-44a99eca-da2c-4966-85df-cfcaf4e61203 req-328a4a8d-0abd-4fc8-8179-7b8cb38f7d75 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Received unexpected event network-vif-plugged-91d5abbf-fd67-487f-bfaa-448b1daa5272 for instance with vm_state rescued and task_state unrescuing.#033[00m
Nov 29 03:27:07 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:08.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:08 np0005539552 nova_compute[233724]: 2025-11-29 08:27:08.582 233728 DEBUG nova.network.neutron [None req-c245fc50-acfb-483a-be8a-f7a5c3080a1a 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Updating instance_info_cache with network_info: [{"id": "91d5abbf-fd67-487f-bfaa-448b1daa5272", "address": "fa:16:3e:71:14:96", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91d5abbf-fd", "ovs_interfaceid": "91d5abbf-fd67-487f-bfaa-448b1daa5272", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:27:08 np0005539552 nova_compute[233724]: 2025-11-29 08:27:08.595 233728 DEBUG oslo_concurrency.lockutils [None req-c245fc50-acfb-483a-be8a-f7a5c3080a1a 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Releasing lock "refresh_cache-50daa6f5-6598-439f-a542-38e8ae7aded0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:27:08 np0005539552 nova_compute[233724]: 2025-11-29 08:27:08.596 233728 DEBUG nova.objects.instance [None req-c245fc50-acfb-483a-be8a-f7a5c3080a1a 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'flavor' on Instance uuid 50daa6f5-6598-439f-a542-38e8ae7aded0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:27:08 np0005539552 kernel: tap91d5abbf-fd (unregistering): left promiscuous mode
Nov 29 03:27:08 np0005539552 NetworkManager[48926]: <info>  [1764404828.6837] device (tap91d5abbf-fd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:27:08 np0005539552 nova_compute[233724]: 2025-11-29 08:27:08.686 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:08 np0005539552 ovn_controller[133798]: 2025-11-29T08:27:08Z|00645|binding|INFO|Releasing lport 91d5abbf-fd67-487f-bfaa-448b1daa5272 from this chassis (sb_readonly=0)
Nov 29 03:27:08 np0005539552 ovn_controller[133798]: 2025-11-29T08:27:08Z|00646|binding|INFO|Setting lport 91d5abbf-fd67-487f-bfaa-448b1daa5272 down in Southbound
Nov 29 03:27:08 np0005539552 ovn_controller[133798]: 2025-11-29T08:27:08Z|00647|binding|INFO|Removing iface tap91d5abbf-fd ovn-installed in OVS
Nov 29 03:27:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:08.694 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:14:96 10.100.0.4'], port_security=['fa:16:3e:71:14:96 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '50daa6f5-6598-439f-a542-38e8ae7aded0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9a83f8d8d7f4d08890407f978c05166', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'b3bcb98f-0ed3-4c70-97d6-4df1974ced71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.191', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d0d36bf-5f41-4d6e-9e1b-1a2b5a9220ce, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=91d5abbf-fd67-487f-bfaa-448b1daa5272) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:27:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:08.696 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 91d5abbf-fd67-487f-bfaa-448b1daa5272 in datapath 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 unbound from our chassis#033[00m
Nov 29 03:27:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:08.698 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:27:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:08.699 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[bc7649a4-a5fc-483c-9f8a-380a6ba88fda]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:08.700 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 namespace which is not needed anymore#033[00m
Nov 29 03:27:08 np0005539552 nova_compute[233724]: 2025-11-29 08:27:08.709 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:08 np0005539552 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d0000008a.scope: Deactivated successfully.
Nov 29 03:27:08 np0005539552 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d0000008a.scope: Consumed 4.044s CPU time.
Nov 29 03:27:08 np0005539552 systemd-machined[196379]: Machine qemu-64-instance-0000008a terminated.
Nov 29 03:27:08 np0005539552 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[293562]: [NOTICE]   (293566) : haproxy version is 2.8.14-c23fe91
Nov 29 03:27:08 np0005539552 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[293562]: [NOTICE]   (293566) : path to executable is /usr/sbin/haproxy
Nov 29 03:27:08 np0005539552 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[293562]: [WARNING]  (293566) : Exiting Master process...
Nov 29 03:27:08 np0005539552 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[293562]: [WARNING]  (293566) : Exiting Master process...
Nov 29 03:27:08 np0005539552 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[293562]: [ALERT]    (293566) : Current worker (293568) exited with code 143 (Terminated)
Nov 29 03:27:08 np0005539552 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[293562]: [WARNING]  (293566) : All workers exited. Exiting... (0)
Nov 29 03:27:08 np0005539552 systemd[1]: libpod-b577651df8420951b6764e67621ad06ebe2d96e459da533799aafbb2abd49030.scope: Deactivated successfully.
Nov 29 03:27:08 np0005539552 podman[293603]: 2025-11-29 08:27:08.823340394 +0000 UTC m=+0.043512321 container died b577651df8420951b6764e67621ad06ebe2d96e459da533799aafbb2abd49030 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:27:08 np0005539552 nova_compute[233724]: 2025-11-29 08:27:08.840 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:08 np0005539552 nova_compute[233724]: 2025-11-29 08:27:08.847 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:08 np0005539552 nova_compute[233724]: 2025-11-29 08:27:08.851 233728 INFO nova.virt.libvirt.driver [-] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Instance destroyed successfully.#033[00m
Nov 29 03:27:08 np0005539552 nova_compute[233724]: 2025-11-29 08:27:08.851 233728 DEBUG nova.objects.instance [None req-c245fc50-acfb-483a-be8a-f7a5c3080a1a 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'numa_topology' on Instance uuid 50daa6f5-6598-439f-a542-38e8ae7aded0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:27:09 np0005539552 nova_compute[233724]: 2025-11-29 08:27:09.104 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:09.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:09 np0005539552 nova_compute[233724]: 2025-11-29 08:27:09.312 233728 DEBUG nova.compute.manager [req-89d00778-eefb-43a8-9198-dd6f39764373 req-682ba732-a524-413e-9cbf-86dcfe7d6992 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Received event network-changed-91d5abbf-fd67-487f-bfaa-448b1daa5272 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:09 np0005539552 nova_compute[233724]: 2025-11-29 08:27:09.313 233728 DEBUG nova.compute.manager [req-89d00778-eefb-43a8-9198-dd6f39764373 req-682ba732-a524-413e-9cbf-86dcfe7d6992 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Refreshing instance network info cache due to event network-changed-91d5abbf-fd67-487f-bfaa-448b1daa5272. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:27:09 np0005539552 nova_compute[233724]: 2025-11-29 08:27:09.315 233728 DEBUG oslo_concurrency.lockutils [req-89d00778-eefb-43a8-9198-dd6f39764373 req-682ba732-a524-413e-9cbf-86dcfe7d6992 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-50daa6f5-6598-439f-a542-38e8ae7aded0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:27:09 np0005539552 nova_compute[233724]: 2025-11-29 08:27:09.315 233728 DEBUG oslo_concurrency.lockutils [req-89d00778-eefb-43a8-9198-dd6f39764373 req-682ba732-a524-413e-9cbf-86dcfe7d6992 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-50daa6f5-6598-439f-a542-38e8ae7aded0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:27:09 np0005539552 nova_compute[233724]: 2025-11-29 08:27:09.315 233728 DEBUG nova.network.neutron [req-89d00778-eefb-43a8-9198-dd6f39764373 req-682ba732-a524-413e-9cbf-86dcfe7d6992 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Refreshing network info cache for port 91d5abbf-fd67-487f-bfaa-448b1daa5272 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:27:09 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b577651df8420951b6764e67621ad06ebe2d96e459da533799aafbb2abd49030-userdata-shm.mount: Deactivated successfully.
Nov 29 03:27:09 np0005539552 systemd[1]: var-lib-containers-storage-overlay-13b9481a934ca5c1f4a20cf17279f490a0f10d529be24dbe9c45dbdeb5626e73-merged.mount: Deactivated successfully.
Nov 29 03:27:09 np0005539552 systemd-udevd[293583]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:27:09 np0005539552 NetworkManager[48926]: <info>  [1764404829.3802] manager: (tap91d5abbf-fd): new Tun device (/org/freedesktop/NetworkManager/Devices/291)
Nov 29 03:27:09 np0005539552 kernel: tap91d5abbf-fd: entered promiscuous mode
Nov 29 03:27:09 np0005539552 NetworkManager[48926]: <info>  [1764404829.3902] device (tap91d5abbf-fd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:27:09 np0005539552 ovn_controller[133798]: 2025-11-29T08:27:09Z|00648|binding|INFO|Claiming lport 91d5abbf-fd67-487f-bfaa-448b1daa5272 for this chassis.
Nov 29 03:27:09 np0005539552 ovn_controller[133798]: 2025-11-29T08:27:09Z|00649|binding|INFO|91d5abbf-fd67-487f-bfaa-448b1daa5272: Claiming fa:16:3e:71:14:96 10.100.0.4
Nov 29 03:27:09 np0005539552 NetworkManager[48926]: <info>  [1764404829.3919] device (tap91d5abbf-fd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:27:09 np0005539552 nova_compute[233724]: 2025-11-29 08:27:09.391 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:09 np0005539552 nova_compute[233724]: 2025-11-29 08:27:09.394 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:09 np0005539552 ovn_controller[133798]: 2025-11-29T08:27:09Z|00650|binding|INFO|Setting lport 91d5abbf-fd67-487f-bfaa-448b1daa5272 ovn-installed in OVS
Nov 29 03:27:09 np0005539552 nova_compute[233724]: 2025-11-29 08:27:09.410 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:09 np0005539552 systemd-machined[196379]: New machine qemu-65-instance-0000008a.
Nov 29 03:27:09 np0005539552 systemd[1]: Started Virtual Machine qemu-65-instance-0000008a.
Nov 29 03:27:09 np0005539552 podman[293603]: 2025-11-29 08:27:09.487008171 +0000 UTC m=+0.707180108 container cleanup b577651df8420951b6764e67621ad06ebe2d96e459da533799aafbb2abd49030 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:27:09 np0005539552 ovn_controller[133798]: 2025-11-29T08:27:09Z|00651|binding|INFO|Setting lport 91d5abbf-fd67-487f-bfaa-448b1daa5272 up in Southbound
Nov 29 03:27:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:09.491 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:14:96 10.100.0.4'], port_security=['fa:16:3e:71:14:96 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '50daa6f5-6598-439f-a542-38e8ae7aded0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9a83f8d8d7f4d08890407f978c05166', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'b3bcb98f-0ed3-4c70-97d6-4df1974ced71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.191', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d0d36bf-5f41-4d6e-9e1b-1a2b5a9220ce, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=91d5abbf-fd67-487f-bfaa-448b1daa5272) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:27:09 np0005539552 nova_compute[233724]: 2025-11-29 08:27:09.494 233728 DEBUG nova.compute.manager [req-ac70a4fc-192b-4d0d-a80e-afa993fe6c23 req-2cb02544-dd8c-486b-9c68-fbc7b746a040 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Received event network-vif-unplugged-91d5abbf-fd67-487f-bfaa-448b1daa5272 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:09 np0005539552 nova_compute[233724]: 2025-11-29 08:27:09.494 233728 DEBUG oslo_concurrency.lockutils [req-ac70a4fc-192b-4d0d-a80e-afa993fe6c23 req-2cb02544-dd8c-486b-9c68-fbc7b746a040 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "50daa6f5-6598-439f-a542-38e8ae7aded0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:09 np0005539552 nova_compute[233724]: 2025-11-29 08:27:09.494 233728 DEBUG oslo_concurrency.lockutils [req-ac70a4fc-192b-4d0d-a80e-afa993fe6c23 req-2cb02544-dd8c-486b-9c68-fbc7b746a040 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "50daa6f5-6598-439f-a542-38e8ae7aded0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:09 np0005539552 nova_compute[233724]: 2025-11-29 08:27:09.495 233728 DEBUG oslo_concurrency.lockutils [req-ac70a4fc-192b-4d0d-a80e-afa993fe6c23 req-2cb02544-dd8c-486b-9c68-fbc7b746a040 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "50daa6f5-6598-439f-a542-38e8ae7aded0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:09 np0005539552 nova_compute[233724]: 2025-11-29 08:27:09.495 233728 DEBUG nova.compute.manager [req-ac70a4fc-192b-4d0d-a80e-afa993fe6c23 req-2cb02544-dd8c-486b-9c68-fbc7b746a040 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] No waiting events found dispatching network-vif-unplugged-91d5abbf-fd67-487f-bfaa-448b1daa5272 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:27:09 np0005539552 nova_compute[233724]: 2025-11-29 08:27:09.495 233728 WARNING nova.compute.manager [req-ac70a4fc-192b-4d0d-a80e-afa993fe6c23 req-2cb02544-dd8c-486b-9c68-fbc7b746a040 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Received unexpected event network-vif-unplugged-91d5abbf-fd67-487f-bfaa-448b1daa5272 for instance with vm_state rescued and task_state unrescuing.#033[00m
Nov 29 03:27:09 np0005539552 podman[293661]: 2025-11-29 08:27:09.939195288 +0000 UTC m=+0.423785534 container remove b577651df8420951b6764e67621ad06ebe2d96e459da533799aafbb2abd49030 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:27:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:09.944 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[7442a37d-72c2-4be3-9d63-947965c8853d]: (4, ('Sat Nov 29 08:27:08 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 (b577651df8420951b6764e67621ad06ebe2d96e459da533799aafbb2abd49030)\nb577651df8420951b6764e67621ad06ebe2d96e459da533799aafbb2abd49030\nSat Nov 29 08:27:09 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 (b577651df8420951b6764e67621ad06ebe2d96e459da533799aafbb2abd49030)\nb577651df8420951b6764e67621ad06ebe2d96e459da533799aafbb2abd49030\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:09.945 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c5935728-b400-4d1f-ab5e-0ab68ebe9f4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:09.946 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5da19f7d-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:09 np0005539552 kernel: tap5da19f7d-30: left promiscuous mode
Nov 29 03:27:09 np0005539552 nova_compute[233724]: 2025-11-29 08:27:09.948 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:09 np0005539552 nova_compute[233724]: 2025-11-29 08:27:09.966 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:09.968 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[551e44fd-3c51-4996-b1b0-37df4bc46cec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:09.981 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4bafd60b-c6eb-4ada-a7c7-2f8b45d89c01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:09.982 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[95ca4689-8f88-448c-b1f9-ae2a66159111]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:09 np0005539552 systemd[1]: libpod-conmon-b577651df8420951b6764e67621ad06ebe2d96e459da533799aafbb2abd49030.scope: Deactivated successfully.
Nov 29 03:27:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:09.997 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c96a37db-a765-4d77-8a7d-a638eb822877]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 787068, 'reachable_time': 29120, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293730, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:09.999 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:09.999 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[b60e9522-c80a-4dce-acf9-8899fa95f66c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:10.000 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 91d5abbf-fd67-487f-bfaa-448b1daa5272 in datapath 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 unbound from our chassis#033[00m
Nov 29 03:27:10 np0005539552 systemd[1]: run-netns-ovnmeta\x2d5da19f7d\x2d3aa0\x2d41e7\x2d88b0\x2db9ef17fa4445.mount: Deactivated successfully.
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:10.002 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445#033[00m
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:10.010 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b3b6ac8d-d0cb-4d3d-9f37-27a21ec259bc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:10.011 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5da19f7d-31 in ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:10.012 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5da19f7d-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:10.012 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ca529a0a-0879-4851-bcdb-386add08f953]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:10.013 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4ee71d85-255c-4fde-9226-2aaa404d1ce1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:10.025 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[c6ff401e-0503-417c-9454-56e9fcb2b9b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:10.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:10.047 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c582b5c7-06f9-42d2-8e01-11ac134edfea]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:10.075 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[a9201b97-84fb-40f9-8c79-9ba74bcfc125]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:10 np0005539552 NetworkManager[48926]: <info>  [1764404830.0838] manager: (tap5da19f7d-30): new Veth device (/org/freedesktop/NetworkManager/Devices/292)
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:10.083 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2c902b96-bbe4-46fe-a611-7e54122d6f8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:10 np0005539552 nova_compute[233724]: 2025-11-29 08:27:10.097 233728 DEBUG nova.virt.libvirt.host [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Removed pending event for 50daa6f5-6598-439f-a542-38e8ae7aded0 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:27:10 np0005539552 nova_compute[233724]: 2025-11-29 08:27:10.099 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404830.0976567, 50daa6f5-6598-439f-a542-38e8ae7aded0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:27:10 np0005539552 nova_compute[233724]: 2025-11-29 08:27:10.099 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:10.118 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[42899377-f60a-4a89-a03a-8ac75dfcaa0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:10 np0005539552 nova_compute[233724]: 2025-11-29 08:27:10.125 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:10.126 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[97027f1b-7ca1-4dce-963c-cd496082bbcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:10 np0005539552 nova_compute[233724]: 2025-11-29 08:27:10.129 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:27:10 np0005539552 NetworkManager[48926]: <info>  [1764404830.1525] device (tap5da19f7d-30): carrier: link connected
Nov 29 03:27:10 np0005539552 nova_compute[233724]: 2025-11-29 08:27:10.154 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Nov 29 03:27:10 np0005539552 nova_compute[233724]: 2025-11-29 08:27:10.154 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404830.0983732, 50daa6f5-6598-439f-a542-38e8ae7aded0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:27:10 np0005539552 nova_compute[233724]: 2025-11-29 08:27:10.154 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] VM Started (Lifecycle Event)#033[00m
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:10.158 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[20a0ab56-8421-4a83-b0bb-7348a53179d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:10.174 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[addc6edf-6d2b-43e7-b13a-c60f8b49e942]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5da19f7d-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:8e:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 195], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 787584, 'reachable_time': 17881, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293780, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:10 np0005539552 nova_compute[233724]: 2025-11-29 08:27:10.177 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:27:10 np0005539552 nova_compute[233724]: 2025-11-29 08:27:10.180 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:10.190 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[581cf2cd-524c-4de4-a38e-211600e8ba8f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe18:8e20'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 787584, 'tstamp': 787584}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293781, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:10.205 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f856576d-d727-46e4-b469-6ba38ce2a7ed]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5da19f7d-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:8e:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 195], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 787584, 'reachable_time': 17881, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 293782, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:10 np0005539552 nova_compute[233724]: 2025-11-29 08:27:10.215 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:10.232 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d620981c-5425-4d40-ba8d-8acd6c54546b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:10.306 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[bbe63d30-51d3-43d6-9785-09670bb53561]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:10.307 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5da19f7d-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:10.307 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:10.308 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5da19f7d-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:10 np0005539552 kernel: tap5da19f7d-30: entered promiscuous mode
Nov 29 03:27:10 np0005539552 NetworkManager[48926]: <info>  [1764404830.3106] manager: (tap5da19f7d-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/293)
Nov 29 03:27:10 np0005539552 nova_compute[233724]: 2025-11-29 08:27:10.310 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:10 np0005539552 nova_compute[233724]: 2025-11-29 08:27:10.312 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:10.314 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5da19f7d-30, col_values=(('external_ids', {'iface-id': 'd4f0104e-3913-4399-9086-37cf4d16e7c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:10 np0005539552 nova_compute[233724]: 2025-11-29 08:27:10.314 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:10 np0005539552 ovn_controller[133798]: 2025-11-29T08:27:10Z|00652|binding|INFO|Releasing lport d4f0104e-3913-4399-9086-37cf4d16e7c7 from this chassis (sb_readonly=0)
Nov 29 03:27:10 np0005539552 nova_compute[233724]: 2025-11-29 08:27:10.336 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:10.337 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5da19f7d-3aa0-41e7-88b0-b9ef17fa4445.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5da19f7d-3aa0-41e7-88b0-b9ef17fa4445.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:10.338 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[1b36c220-4a2f-4809-bbdc-9fc67e90b7c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:10.339 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/5da19f7d-3aa0-41e7-88b0-b9ef17fa4445.pid.haproxy
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:27:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:10.339 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'env', 'PROCESS_TAG=haproxy-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5da19f7d-3aa0-41e7-88b0-b9ef17fa4445.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:27:10 np0005539552 nova_compute[233724]: 2025-11-29 08:27:10.479 233728 DEBUG nova.compute.manager [None req-c245fc50-acfb-483a-be8a-f7a5c3080a1a 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:27:10 np0005539552 podman[293815]: 2025-11-29 08:27:10.752384477 +0000 UTC m=+0.052110663 container create d51c7df9fe7c9f2d2f190ec146da14164b985b87a38e726837cda5ed9d197c64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:27:10 np0005539552 systemd[1]: Started libpod-conmon-d51c7df9fe7c9f2d2f190ec146da14164b985b87a38e726837cda5ed9d197c64.scope.
Nov 29 03:27:10 np0005539552 podman[293815]: 2025-11-29 08:27:10.72869195 +0000 UTC m=+0.028418156 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:27:10 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:27:10 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0f7f06859b7627d298b7ae1938d91ec4aaf5acdefc0267de677c733b96dd326/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:27:10 np0005539552 podman[293815]: 2025-11-29 08:27:10.84354776 +0000 UTC m=+0.143274006 container init d51c7df9fe7c9f2d2f190ec146da14164b985b87a38e726837cda5ed9d197c64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:27:10 np0005539552 podman[293815]: 2025-11-29 08:27:10.852421569 +0000 UTC m=+0.152147765 container start d51c7df9fe7c9f2d2f190ec146da14164b985b87a38e726837cda5ed9d197c64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:27:10 np0005539552 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[293831]: [NOTICE]   (293835) : New worker (293837) forked
Nov 29 03:27:10 np0005539552 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[293831]: [NOTICE]   (293835) : Loading success.
Nov 29 03:27:10 np0005539552 nova_compute[233724]: 2025-11-29 08:27:10.924 233728 DEBUG nova.network.neutron [req-89d00778-eefb-43a8-9198-dd6f39764373 req-682ba732-a524-413e-9cbf-86dcfe7d6992 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Updated VIF entry in instance network info cache for port 91d5abbf-fd67-487f-bfaa-448b1daa5272. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:27:10 np0005539552 nova_compute[233724]: 2025-11-29 08:27:10.924 233728 DEBUG nova.network.neutron [req-89d00778-eefb-43a8-9198-dd6f39764373 req-682ba732-a524-413e-9cbf-86dcfe7d6992 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Updating instance_info_cache with network_info: [{"id": "91d5abbf-fd67-487f-bfaa-448b1daa5272", "address": "fa:16:3e:71:14:96", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91d5abbf-fd", "ovs_interfaceid": "91d5abbf-fd67-487f-bfaa-448b1daa5272", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:27:10 np0005539552 nova_compute[233724]: 2025-11-29 08:27:10.948 233728 DEBUG oslo_concurrency.lockutils [req-89d00778-eefb-43a8-9198-dd6f39764373 req-682ba732-a524-413e-9cbf-86dcfe7d6992 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-50daa6f5-6598-439f-a542-38e8ae7aded0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:27:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:11.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:11 np0005539552 nova_compute[233724]: 2025-11-29 08:27:11.385 233728 DEBUG nova.compute.manager [req-7ff92114-09bf-4603-b256-e2dfdb990337 req-52ab7d6e-bbb6-400e-b575-f4d6f5fac01a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Received event network-changed-91d5abbf-fd67-487f-bfaa-448b1daa5272 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:11 np0005539552 nova_compute[233724]: 2025-11-29 08:27:11.385 233728 DEBUG nova.compute.manager [req-7ff92114-09bf-4603-b256-e2dfdb990337 req-52ab7d6e-bbb6-400e-b575-f4d6f5fac01a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Refreshing instance network info cache due to event network-changed-91d5abbf-fd67-487f-bfaa-448b1daa5272. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:27:11 np0005539552 nova_compute[233724]: 2025-11-29 08:27:11.386 233728 DEBUG oslo_concurrency.lockutils [req-7ff92114-09bf-4603-b256-e2dfdb990337 req-52ab7d6e-bbb6-400e-b575-f4d6f5fac01a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-50daa6f5-6598-439f-a542-38e8ae7aded0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:27:11 np0005539552 nova_compute[233724]: 2025-11-29 08:27:11.386 233728 DEBUG oslo_concurrency.lockutils [req-7ff92114-09bf-4603-b256-e2dfdb990337 req-52ab7d6e-bbb6-400e-b575-f4d6f5fac01a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-50daa6f5-6598-439f-a542-38e8ae7aded0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:27:11 np0005539552 nova_compute[233724]: 2025-11-29 08:27:11.387 233728 DEBUG nova.network.neutron [req-7ff92114-09bf-4603-b256-e2dfdb990337 req-52ab7d6e-bbb6-400e-b575-f4d6f5fac01a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Refreshing network info cache for port 91d5abbf-fd67-487f-bfaa-448b1daa5272 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:27:11 np0005539552 nova_compute[233724]: 2025-11-29 08:27:11.582 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:11 np0005539552 nova_compute[233724]: 2025-11-29 08:27:11.593 233728 DEBUG nova.compute.manager [req-8286beb7-79f3-4ae8-aae4-f1773a5830c1 req-e773653c-6372-4b3a-87b8-5de9c8d6a725 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Received event network-vif-plugged-91d5abbf-fd67-487f-bfaa-448b1daa5272 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:11 np0005539552 nova_compute[233724]: 2025-11-29 08:27:11.595 233728 DEBUG oslo_concurrency.lockutils [req-8286beb7-79f3-4ae8-aae4-f1773a5830c1 req-e773653c-6372-4b3a-87b8-5de9c8d6a725 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "50daa6f5-6598-439f-a542-38e8ae7aded0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:11 np0005539552 nova_compute[233724]: 2025-11-29 08:27:11.596 233728 DEBUG oslo_concurrency.lockutils [req-8286beb7-79f3-4ae8-aae4-f1773a5830c1 req-e773653c-6372-4b3a-87b8-5de9c8d6a725 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "50daa6f5-6598-439f-a542-38e8ae7aded0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:11 np0005539552 nova_compute[233724]: 2025-11-29 08:27:11.596 233728 DEBUG oslo_concurrency.lockutils [req-8286beb7-79f3-4ae8-aae4-f1773a5830c1 req-e773653c-6372-4b3a-87b8-5de9c8d6a725 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "50daa6f5-6598-439f-a542-38e8ae7aded0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:11 np0005539552 nova_compute[233724]: 2025-11-29 08:27:11.597 233728 DEBUG nova.compute.manager [req-8286beb7-79f3-4ae8-aae4-f1773a5830c1 req-e773653c-6372-4b3a-87b8-5de9c8d6a725 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] No waiting events found dispatching network-vif-plugged-91d5abbf-fd67-487f-bfaa-448b1daa5272 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:27:11 np0005539552 nova_compute[233724]: 2025-11-29 08:27:11.598 233728 WARNING nova.compute.manager [req-8286beb7-79f3-4ae8-aae4-f1773a5830c1 req-e773653c-6372-4b3a-87b8-5de9c8d6a725 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Received unexpected event network-vif-plugged-91d5abbf-fd67-487f-bfaa-448b1daa5272 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:27:11 np0005539552 nova_compute[233724]: 2025-11-29 08:27:11.599 233728 DEBUG nova.compute.manager [req-8286beb7-79f3-4ae8-aae4-f1773a5830c1 req-e773653c-6372-4b3a-87b8-5de9c8d6a725 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Received event network-vif-plugged-91d5abbf-fd67-487f-bfaa-448b1daa5272 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:11 np0005539552 nova_compute[233724]: 2025-11-29 08:27:11.600 233728 DEBUG oslo_concurrency.lockutils [req-8286beb7-79f3-4ae8-aae4-f1773a5830c1 req-e773653c-6372-4b3a-87b8-5de9c8d6a725 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "50daa6f5-6598-439f-a542-38e8ae7aded0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:11 np0005539552 nova_compute[233724]: 2025-11-29 08:27:11.600 233728 DEBUG oslo_concurrency.lockutils [req-8286beb7-79f3-4ae8-aae4-f1773a5830c1 req-e773653c-6372-4b3a-87b8-5de9c8d6a725 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "50daa6f5-6598-439f-a542-38e8ae7aded0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:11 np0005539552 nova_compute[233724]: 2025-11-29 08:27:11.601 233728 DEBUG oslo_concurrency.lockutils [req-8286beb7-79f3-4ae8-aae4-f1773a5830c1 req-e773653c-6372-4b3a-87b8-5de9c8d6a725 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "50daa6f5-6598-439f-a542-38e8ae7aded0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:11 np0005539552 nova_compute[233724]: 2025-11-29 08:27:11.602 233728 DEBUG nova.compute.manager [req-8286beb7-79f3-4ae8-aae4-f1773a5830c1 req-e773653c-6372-4b3a-87b8-5de9c8d6a725 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] No waiting events found dispatching network-vif-plugged-91d5abbf-fd67-487f-bfaa-448b1daa5272 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:27:11 np0005539552 nova_compute[233724]: 2025-11-29 08:27:11.603 233728 WARNING nova.compute.manager [req-8286beb7-79f3-4ae8-aae4-f1773a5830c1 req-e773653c-6372-4b3a-87b8-5de9c8d6a725 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Received unexpected event network-vif-plugged-91d5abbf-fd67-487f-bfaa-448b1daa5272 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:27:11 np0005539552 nova_compute[233724]: 2025-11-29 08:27:11.604 233728 DEBUG nova.compute.manager [req-8286beb7-79f3-4ae8-aae4-f1773a5830c1 req-e773653c-6372-4b3a-87b8-5de9c8d6a725 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Received event network-vif-plugged-91d5abbf-fd67-487f-bfaa-448b1daa5272 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:11 np0005539552 nova_compute[233724]: 2025-11-29 08:27:11.605 233728 DEBUG oslo_concurrency.lockutils [req-8286beb7-79f3-4ae8-aae4-f1773a5830c1 req-e773653c-6372-4b3a-87b8-5de9c8d6a725 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "50daa6f5-6598-439f-a542-38e8ae7aded0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:11 np0005539552 nova_compute[233724]: 2025-11-29 08:27:11.605 233728 DEBUG oslo_concurrency.lockutils [req-8286beb7-79f3-4ae8-aae4-f1773a5830c1 req-e773653c-6372-4b3a-87b8-5de9c8d6a725 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "50daa6f5-6598-439f-a542-38e8ae7aded0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:11 np0005539552 nova_compute[233724]: 2025-11-29 08:27:11.606 233728 DEBUG oslo_concurrency.lockutils [req-8286beb7-79f3-4ae8-aae4-f1773a5830c1 req-e773653c-6372-4b3a-87b8-5de9c8d6a725 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "50daa6f5-6598-439f-a542-38e8ae7aded0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:11 np0005539552 nova_compute[233724]: 2025-11-29 08:27:11.607 233728 DEBUG nova.compute.manager [req-8286beb7-79f3-4ae8-aae4-f1773a5830c1 req-e773653c-6372-4b3a-87b8-5de9c8d6a725 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] No waiting events found dispatching network-vif-plugged-91d5abbf-fd67-487f-bfaa-448b1daa5272 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:27:11 np0005539552 nova_compute[233724]: 2025-11-29 08:27:11.608 233728 WARNING nova.compute.manager [req-8286beb7-79f3-4ae8-aae4-f1773a5830c1 req-e773653c-6372-4b3a-87b8-5de9c8d6a725 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Received unexpected event network-vif-plugged-91d5abbf-fd67-487f-bfaa-448b1daa5272 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:27:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:12.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:12 np0005539552 nova_compute[233724]: 2025-11-29 08:27:12.820 233728 DEBUG nova.network.neutron [req-7ff92114-09bf-4603-b256-e2dfdb990337 req-52ab7d6e-bbb6-400e-b575-f4d6f5fac01a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Updated VIF entry in instance network info cache for port 91d5abbf-fd67-487f-bfaa-448b1daa5272. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:27:12 np0005539552 nova_compute[233724]: 2025-11-29 08:27:12.820 233728 DEBUG nova.network.neutron [req-7ff92114-09bf-4603-b256-e2dfdb990337 req-52ab7d6e-bbb6-400e-b575-f4d6f5fac01a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Updating instance_info_cache with network_info: [{"id": "91d5abbf-fd67-487f-bfaa-448b1daa5272", "address": "fa:16:3e:71:14:96", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91d5abbf-fd", "ovs_interfaceid": "91d5abbf-fd67-487f-bfaa-448b1daa5272", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:27:12 np0005539552 nova_compute[233724]: 2025-11-29 08:27:12.838 233728 DEBUG oslo_concurrency.lockutils [req-7ff92114-09bf-4603-b256-e2dfdb990337 req-52ab7d6e-bbb6-400e-b575-f4d6f5fac01a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-50daa6f5-6598-439f-a542-38e8ae7aded0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:27:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:27:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:13.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:27:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:14.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:14 np0005539552 nova_compute[233724]: 2025-11-29 08:27:14.107 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:15.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:16.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:16 np0005539552 nova_compute[233724]: 2025-11-29 08:27:16.587 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:16 np0005539552 podman[293852]: 2025-11-29 08:27:16.969543132 +0000 UTC m=+0.060985012 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:27:16 np0005539552 podman[293851]: 2025-11-29 08:27:16.99657709 +0000 UTC m=+0.081850984 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 03:27:16 np0005539552 podman[293853]: 2025-11-29 08:27:16.996657672 +0000 UTC m=+0.085968044 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0)
Nov 29 03:27:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:17.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:18.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:19 np0005539552 nova_compute[233724]: 2025-11-29 08:27:19.109 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:19.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:20.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:20.633 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:20.633 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:20.634 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:27:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:21.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:27:21 np0005539552 nova_compute[233724]: 2025-11-29 08:27:21.589 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:22.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:22 np0005539552 ovn_controller[133798]: 2025-11-29T08:27:22Z|00083|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:71:14:96 10.100.0.4
Nov 29 03:27:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:23.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:24.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:24 np0005539552 nova_compute[233724]: 2025-11-29 08:27:24.112 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 03:27:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:25.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 03:27:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:26.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:26 np0005539552 nova_compute[233724]: 2025-11-29 08:27:26.591 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:27.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:28.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:29 np0005539552 nova_compute[233724]: 2025-11-29 08:27:29.115 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:29.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:29 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:27:29 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:27:29 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:27:29 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:27:29 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:27:29 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:27:29 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 03:27:29 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 03:27:29 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:27:29 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:27:29 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:27:29 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e338 e338: 3 total, 3 up, 3 in
Nov 29 03:27:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:27:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:30.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:27:30 np0005539552 nova_compute[233724]: 2025-11-29 08:27:30.943 233728 DEBUG oslo_concurrency.lockutils [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Acquiring lock "d6363749-92b8-41e7-860c-63dc695390e4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:30 np0005539552 nova_compute[233724]: 2025-11-29 08:27:30.943 233728 DEBUG oslo_concurrency.lockutils [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "d6363749-92b8-41e7-860c-63dc695390e4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:30 np0005539552 nova_compute[233724]: 2025-11-29 08:27:30.959 233728 DEBUG nova.compute.manager [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:27:31 np0005539552 nova_compute[233724]: 2025-11-29 08:27:31.029 233728 DEBUG oslo_concurrency.lockutils [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:31 np0005539552 nova_compute[233724]: 2025-11-29 08:27:31.030 233728 DEBUG oslo_concurrency.lockutils [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:31 np0005539552 nova_compute[233724]: 2025-11-29 08:27:31.037 233728 DEBUG nova.virt.hardware [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:27:31 np0005539552 nova_compute[233724]: 2025-11-29 08:27:31.037 233728 INFO nova.compute.claims [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:27:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:31.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:31 np0005539552 nova_compute[233724]: 2025-11-29 08:27:31.200 233728 DEBUG oslo_concurrency.processutils [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:31 np0005539552 nova_compute[233724]: 2025-11-29 08:27:31.593 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:31 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:27:31 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1089129319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:27:31 np0005539552 nova_compute[233724]: 2025-11-29 08:27:31.647 233728 DEBUG oslo_concurrency.processutils [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:31 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e339 e339: 3 total, 3 up, 3 in
Nov 29 03:27:31 np0005539552 nova_compute[233724]: 2025-11-29 08:27:31.657 233728 DEBUG nova.compute.provider_tree [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:27:31 np0005539552 nova_compute[233724]: 2025-11-29 08:27:31.674 233728 DEBUG nova.scheduler.client.report [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:27:31 np0005539552 nova_compute[233724]: 2025-11-29 08:27:31.721 233728 DEBUG oslo_concurrency.lockutils [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:31 np0005539552 nova_compute[233724]: 2025-11-29 08:27:31.722 233728 DEBUG nova.compute.manager [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:27:31 np0005539552 nova_compute[233724]: 2025-11-29 08:27:31.782 233728 DEBUG nova.compute.manager [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:27:31 np0005539552 nova_compute[233724]: 2025-11-29 08:27:31.782 233728 DEBUG nova.network.neutron [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:27:31 np0005539552 nova_compute[233724]: 2025-11-29 08:27:31.806 233728 INFO nova.virt.libvirt.driver [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:27:31 np0005539552 nova_compute[233724]: 2025-11-29 08:27:31.830 233728 DEBUG nova.compute.manager [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:27:31 np0005539552 nova_compute[233724]: 2025-11-29 08:27:31.912 233728 DEBUG nova.compute.manager [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:27:31 np0005539552 nova_compute[233724]: 2025-11-29 08:27:31.913 233728 DEBUG nova.virt.libvirt.driver [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:27:31 np0005539552 nova_compute[233724]: 2025-11-29 08:27:31.914 233728 INFO nova.virt.libvirt.driver [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Creating image(s)#033[00m
Nov 29 03:27:31 np0005539552 nova_compute[233724]: 2025-11-29 08:27:31.940 233728 DEBUG nova.storage.rbd_utils [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] rbd image d6363749-92b8-41e7-860c-63dc695390e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:27:31 np0005539552 nova_compute[233724]: 2025-11-29 08:27:31.970 233728 DEBUG nova.storage.rbd_utils [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] rbd image d6363749-92b8-41e7-860c-63dc695390e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:27:31 np0005539552 nova_compute[233724]: 2025-11-29 08:27:31.998 233728 DEBUG nova.storage.rbd_utils [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] rbd image d6363749-92b8-41e7-860c-63dc695390e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:27:32 np0005539552 nova_compute[233724]: 2025-11-29 08:27:32.002 233728 DEBUG oslo_concurrency.processutils [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:32 np0005539552 nova_compute[233724]: 2025-11-29 08:27:32.033 233728 DEBUG nova.policy [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '283f8136265e4425a5a31f840935b9ab', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ea7b24ea9d7b4d239b4741634ac3f10c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:27:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:32.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:32 np0005539552 nova_compute[233724]: 2025-11-29 08:27:32.080 233728 DEBUG oslo_concurrency.processutils [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:32 np0005539552 nova_compute[233724]: 2025-11-29 08:27:32.080 233728 DEBUG oslo_concurrency.lockutils [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:32 np0005539552 nova_compute[233724]: 2025-11-29 08:27:32.081 233728 DEBUG oslo_concurrency.lockutils [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:32 np0005539552 nova_compute[233724]: 2025-11-29 08:27:32.081 233728 DEBUG oslo_concurrency.lockutils [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:32 np0005539552 nova_compute[233724]: 2025-11-29 08:27:32.106 233728 DEBUG nova.storage.rbd_utils [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] rbd image d6363749-92b8-41e7-860c-63dc695390e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:27:32 np0005539552 nova_compute[233724]: 2025-11-29 08:27:32.110 233728 DEBUG oslo_concurrency.processutils [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 d6363749-92b8-41e7-860c-63dc695390e4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:32 np0005539552 nova_compute[233724]: 2025-11-29 08:27:32.392 233728 DEBUG oslo_concurrency.processutils [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 d6363749-92b8-41e7-860c-63dc695390e4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.282s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:32 np0005539552 nova_compute[233724]: 2025-11-29 08:27:32.458 233728 DEBUG nova.storage.rbd_utils [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] resizing rbd image d6363749-92b8-41e7-860c-63dc695390e4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:27:32 np0005539552 nova_compute[233724]: 2025-11-29 08:27:32.544 233728 DEBUG nova.objects.instance [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lazy-loading 'migration_context' on Instance uuid d6363749-92b8-41e7-860c-63dc695390e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:27:32 np0005539552 nova_compute[233724]: 2025-11-29 08:27:32.560 233728 DEBUG nova.virt.libvirt.driver [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:27:32 np0005539552 nova_compute[233724]: 2025-11-29 08:27:32.560 233728 DEBUG nova.virt.libvirt.driver [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Ensure instance console log exists: /var/lib/nova/instances/d6363749-92b8-41e7-860c-63dc695390e4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:27:32 np0005539552 nova_compute[233724]: 2025-11-29 08:27:32.561 233728 DEBUG oslo_concurrency.lockutils [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:32 np0005539552 nova_compute[233724]: 2025-11-29 08:27:32.561 233728 DEBUG oslo_concurrency.lockutils [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:32 np0005539552 nova_compute[233724]: 2025-11-29 08:27:32.561 233728 DEBUG oslo_concurrency.lockutils [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:32 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e340 e340: 3 total, 3 up, 3 in
Nov 29 03:27:32 np0005539552 nova_compute[233724]: 2025-11-29 08:27:32.924 233728 DEBUG nova.network.neutron [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Successfully created port: a6621efc-2904-4858-9a98-9c441a64d2ff _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:27:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:33.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:33 np0005539552 nova_compute[233724]: 2025-11-29 08:27:33.697 233728 DEBUG nova.network.neutron [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Successfully updated port: a6621efc-2904-4858-9a98-9c441a64d2ff _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:27:33 np0005539552 nova_compute[233724]: 2025-11-29 08:27:33.722 233728 DEBUG oslo_concurrency.lockutils [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Acquiring lock "refresh_cache-d6363749-92b8-41e7-860c-63dc695390e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:27:33 np0005539552 nova_compute[233724]: 2025-11-29 08:27:33.722 233728 DEBUG oslo_concurrency.lockutils [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Acquired lock "refresh_cache-d6363749-92b8-41e7-860c-63dc695390e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:27:33 np0005539552 nova_compute[233724]: 2025-11-29 08:27:33.722 233728 DEBUG nova.network.neutron [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:27:33 np0005539552 nova_compute[233724]: 2025-11-29 08:27:33.840 233728 DEBUG nova.compute.manager [req-a1614c80-3989-4488-9b6e-6a24b9bc2d40 req-91402112-2102-4116-b7d5-ddd6fc8ede47 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Received event network-changed-a6621efc-2904-4858-9a98-9c441a64d2ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:33 np0005539552 nova_compute[233724]: 2025-11-29 08:27:33.841 233728 DEBUG nova.compute.manager [req-a1614c80-3989-4488-9b6e-6a24b9bc2d40 req-91402112-2102-4116-b7d5-ddd6fc8ede47 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Refreshing instance network info cache due to event network-changed-a6621efc-2904-4858-9a98-9c441a64d2ff. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:27:33 np0005539552 nova_compute[233724]: 2025-11-29 08:27:33.841 233728 DEBUG oslo_concurrency.lockutils [req-a1614c80-3989-4488-9b6e-6a24b9bc2d40 req-91402112-2102-4116-b7d5-ddd6fc8ede47 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-d6363749-92b8-41e7-860c-63dc695390e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:27:33 np0005539552 nova_compute[233724]: 2025-11-29 08:27:33.893 233728 DEBUG nova.network.neutron [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:27:33 np0005539552 nova_compute[233724]: 2025-11-29 08:27:33.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:33 np0005539552 nova_compute[233724]: 2025-11-29 08:27:33.945 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:33 np0005539552 nova_compute[233724]: 2025-11-29 08:27:33.945 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:33 np0005539552 nova_compute[233724]: 2025-11-29 08:27:33.945 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:33 np0005539552 nova_compute[233724]: 2025-11-29 08:27:33.945 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:27:33 np0005539552 nova_compute[233724]: 2025-11-29 08:27:33.946 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:34.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:34 np0005539552 nova_compute[233724]: 2025-11-29 08:27:34.118 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:34 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:27:34 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2730725317' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:27:34 np0005539552 nova_compute[233724]: 2025-11-29 08:27:34.444 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:34 np0005539552 nova_compute[233724]: 2025-11-29 08:27:34.521 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-0000008a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:27:34 np0005539552 nova_compute[233724]: 2025-11-29 08:27:34.521 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-0000008a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:27:34 np0005539552 nova_compute[233724]: 2025-11-29 08:27:34.521 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-0000008a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:27:34 np0005539552 nova_compute[233724]: 2025-11-29 08:27:34.525 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:27:34 np0005539552 nova_compute[233724]: 2025-11-29 08:27:34.525 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:27:34 np0005539552 nova_compute[233724]: 2025-11-29 08:27:34.526 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:27:34 np0005539552 nova_compute[233724]: 2025-11-29 08:27:34.712 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:27:34 np0005539552 nova_compute[233724]: 2025-11-29 08:27:34.713 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3858MB free_disk=20.627620697021484GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:27:34 np0005539552 nova_compute[233724]: 2025-11-29 08:27:34.713 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:34 np0005539552 nova_compute[233724]: 2025-11-29 08:27:34.714 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:34 np0005539552 nova_compute[233724]: 2025-11-29 08:27:34.800 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 50daa6f5-6598-439f-a542-38e8ae7aded0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:27:34 np0005539552 nova_compute[233724]: 2025-11-29 08:27:34.800 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance bb261893-bfa1-4fdc-9c11-a33a733337ce actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:27:34 np0005539552 nova_compute[233724]: 2025-11-29 08:27:34.801 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance d6363749-92b8-41e7-860c-63dc695390e4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:27:34 np0005539552 nova_compute[233724]: 2025-11-29 08:27:34.801 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:27:34 np0005539552 nova_compute[233724]: 2025-11-29 08:27:34.801 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:27:34 np0005539552 nova_compute[233724]: 2025-11-29 08:27:34.875 233728 DEBUG nova.network.neutron [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Updating instance_info_cache with network_info: [{"id": "a6621efc-2904-4858-9a98-9c441a64d2ff", "address": "fa:16:3e:ba:8a:8e", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6621efc-29", "ovs_interfaceid": "a6621efc-2904-4858-9a98-9c441a64d2ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:27:34 np0005539552 nova_compute[233724]: 2025-11-29 08:27:34.934 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:34 np0005539552 nova_compute[233724]: 2025-11-29 08:27:34.973 233728 DEBUG oslo_concurrency.lockutils [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Releasing lock "refresh_cache-d6363749-92b8-41e7-860c-63dc695390e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:27:34 np0005539552 nova_compute[233724]: 2025-11-29 08:27:34.974 233728 DEBUG nova.compute.manager [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Instance network_info: |[{"id": "a6621efc-2904-4858-9a98-9c441a64d2ff", "address": "fa:16:3e:ba:8a:8e", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6621efc-29", "ovs_interfaceid": "a6621efc-2904-4858-9a98-9c441a64d2ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:27:34 np0005539552 nova_compute[233724]: 2025-11-29 08:27:34.975 233728 DEBUG oslo_concurrency.lockutils [req-a1614c80-3989-4488-9b6e-6a24b9bc2d40 req-91402112-2102-4116-b7d5-ddd6fc8ede47 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-d6363749-92b8-41e7-860c-63dc695390e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:27:34 np0005539552 nova_compute[233724]: 2025-11-29 08:27:34.975 233728 DEBUG nova.network.neutron [req-a1614c80-3989-4488-9b6e-6a24b9bc2d40 req-91402112-2102-4116-b7d5-ddd6fc8ede47 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Refreshing network info cache for port a6621efc-2904-4858-9a98-9c441a64d2ff _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:27:34 np0005539552 nova_compute[233724]: 2025-11-29 08:27:34.980 233728 DEBUG nova.virt.libvirt.driver [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Start _get_guest_xml network_info=[{"id": "a6621efc-2904-4858-9a98-9c441a64d2ff", "address": "fa:16:3e:ba:8a:8e", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6621efc-29", "ovs_interfaceid": "a6621efc-2904-4858-9a98-9c441a64d2ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:27:34 np0005539552 nova_compute[233724]: 2025-11-29 08:27:34.985 233728 WARNING nova.virt.libvirt.driver [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:27:34 np0005539552 nova_compute[233724]: 2025-11-29 08:27:34.991 233728 DEBUG nova.virt.libvirt.host [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:27:34 np0005539552 nova_compute[233724]: 2025-11-29 08:27:34.991 233728 DEBUG nova.virt.libvirt.host [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:27:34 np0005539552 nova_compute[233724]: 2025-11-29 08:27:34.998 233728 DEBUG nova.virt.libvirt.host [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:27:34 np0005539552 nova_compute[233724]: 2025-11-29 08:27:34.998 233728 DEBUG nova.virt.libvirt.host [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:27:35 np0005539552 nova_compute[233724]: 2025-11-29 08:27:35.000 233728 DEBUG nova.virt.libvirt.driver [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:27:35 np0005539552 nova_compute[233724]: 2025-11-29 08:27:35.000 233728 DEBUG nova.virt.hardware [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:27:35 np0005539552 nova_compute[233724]: 2025-11-29 08:27:35.000 233728 DEBUG nova.virt.hardware [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:27:35 np0005539552 nova_compute[233724]: 2025-11-29 08:27:35.001 233728 DEBUG nova.virt.hardware [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:27:35 np0005539552 nova_compute[233724]: 2025-11-29 08:27:35.001 233728 DEBUG nova.virt.hardware [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:27:35 np0005539552 nova_compute[233724]: 2025-11-29 08:27:35.001 233728 DEBUG nova.virt.hardware [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:27:35 np0005539552 nova_compute[233724]: 2025-11-29 08:27:35.001 233728 DEBUG nova.virt.hardware [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:27:35 np0005539552 nova_compute[233724]: 2025-11-29 08:27:35.002 233728 DEBUG nova.virt.hardware [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:27:35 np0005539552 nova_compute[233724]: 2025-11-29 08:27:35.002 233728 DEBUG nova.virt.hardware [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:27:35 np0005539552 nova_compute[233724]: 2025-11-29 08:27:35.002 233728 DEBUG nova.virt.hardware [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:27:35 np0005539552 nova_compute[233724]: 2025-11-29 08:27:35.003 233728 DEBUG nova.virt.hardware [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:27:35 np0005539552 nova_compute[233724]: 2025-11-29 08:27:35.003 233728 DEBUG nova.virt.hardware [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:27:35 np0005539552 nova_compute[233724]: 2025-11-29 08:27:35.007 233728 DEBUG oslo_concurrency.processutils [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:27:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:35.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:27:35 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:27:35 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2581519450' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:27:35 np0005539552 nova_compute[233724]: 2025-11-29 08:27:35.404 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:35 np0005539552 nova_compute[233724]: 2025-11-29 08:27:35.410 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:27:35 np0005539552 nova_compute[233724]: 2025-11-29 08:27:35.463 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:27:35 np0005539552 nova_compute[233724]: 2025-11-29 08:27:35.482 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:27:35 np0005539552 nova_compute[233724]: 2025-11-29 08:27:35.482 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:35 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:27:35 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3025774162' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:27:35 np0005539552 nova_compute[233724]: 2025-11-29 08:27:35.519 233728 DEBUG oslo_concurrency.processutils [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:35 np0005539552 nova_compute[233724]: 2025-11-29 08:27:35.553 233728 DEBUG nova.storage.rbd_utils [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] rbd image d6363749-92b8-41e7-860c-63dc695390e4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:27:35 np0005539552 nova_compute[233724]: 2025-11-29 08:27:35.558 233728 DEBUG oslo_concurrency.processutils [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:36 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:27:36 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1231121903' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:27:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:36.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:36 np0005539552 nova_compute[233724]: 2025-11-29 08:27:36.254 233728 DEBUG oslo_concurrency.processutils [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.696s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:36 np0005539552 nova_compute[233724]: 2025-11-29 08:27:36.256 233728 DEBUG nova.virt.libvirt.vif [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:27:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-474867473',display_name='tempest-ServerRescueNegativeTestJSON-server-474867473',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-474867473',id=144,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIKkQ7Zazeo/dx6F4Eu6MN8OAjR4BxckLM+7ouW/olmDfJC62bOHNKRmGAyxOWHpYnYgRnTecW30ZoVhQUqa4XTjBKJkd20WTjX5TvwIkgUKRgDOuqdsmup3NfferXDEOw==',key_name='tempest-keypair-472027593',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ea7b24ea9d7b4d239b4741634ac3f10c',ramdisk_id='',reservation_id='r-knv00qym',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-2045177058',owner_user_name='tempest-ServerRescueNegativeTestJSON-2045177058-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:27:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='283f8136265e4425a5a31f840935b9ab',uuid=d6363749-92b8-41e7-860c-63dc695390e4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a6621efc-2904-4858-9a98-9c441a64d2ff", "address": "fa:16:3e:ba:8a:8e", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6621efc-29", "ovs_interfaceid": "a6621efc-2904-4858-9a98-9c441a64d2ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:27:36 np0005539552 nova_compute[233724]: 2025-11-29 08:27:36.257 233728 DEBUG nova.network.os_vif_util [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Converting VIF {"id": "a6621efc-2904-4858-9a98-9c441a64d2ff", "address": "fa:16:3e:ba:8a:8e", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6621efc-29", "ovs_interfaceid": "a6621efc-2904-4858-9a98-9c441a64d2ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:27:36 np0005539552 nova_compute[233724]: 2025-11-29 08:27:36.258 233728 DEBUG nova.network.os_vif_util [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:8a:8e,bridge_name='br-int',has_traffic_filtering=True,id=a6621efc-2904-4858-9a98-9c441a64d2ff,network=Network(4ca67fce-6116-4a0b-b0a9-c25b5adaad19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6621efc-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:27:36 np0005539552 nova_compute[233724]: 2025-11-29 08:27:36.259 233728 DEBUG nova.objects.instance [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lazy-loading 'pci_devices' on Instance uuid d6363749-92b8-41e7-860c-63dc695390e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:27:36 np0005539552 nova_compute[233724]: 2025-11-29 08:27:36.280 233728 DEBUG nova.virt.libvirt.driver [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:27:36 np0005539552 nova_compute[233724]:  <uuid>d6363749-92b8-41e7-860c-63dc695390e4</uuid>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:  <name>instance-00000090</name>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:27:36 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:      <nova:name>tempest-ServerRescueNegativeTestJSON-server-474867473</nova:name>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:27:34</nova:creationTime>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:27:36 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:        <nova:user uuid="283f8136265e4425a5a31f840935b9ab">tempest-ServerRescueNegativeTestJSON-2045177058-project-member</nova:user>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:        <nova:project uuid="ea7b24ea9d7b4d239b4741634ac3f10c">tempest-ServerRescueNegativeTestJSON-2045177058</nova:project>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:        <nova:port uuid="a6621efc-2904-4858-9a98-9c441a64d2ff">
Nov 29 03:27:36 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:      <entry name="serial">d6363749-92b8-41e7-860c-63dc695390e4</entry>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:      <entry name="uuid">d6363749-92b8-41e7-860c-63dc695390e4</entry>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:27:36 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/d6363749-92b8-41e7-860c-63dc695390e4_disk">
Nov 29 03:27:36 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:27:36 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:27:36 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/d6363749-92b8-41e7-860c-63dc695390e4_disk.config">
Nov 29 03:27:36 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:27:36 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:27:36 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:ba:8a:8e"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:      <target dev="tapa6621efc-29"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:27:36 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/d6363749-92b8-41e7-860c-63dc695390e4/console.log" append="off"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:27:36 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:27:36 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:27:36 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:27:36 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:27:36 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:27:36 np0005539552 nova_compute[233724]: 2025-11-29 08:27:36.281 233728 DEBUG nova.compute.manager [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Preparing to wait for external event network-vif-plugged-a6621efc-2904-4858-9a98-9c441a64d2ff prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:27:36 np0005539552 nova_compute[233724]: 2025-11-29 08:27:36.282 233728 DEBUG oslo_concurrency.lockutils [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Acquiring lock "d6363749-92b8-41e7-860c-63dc695390e4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:36 np0005539552 nova_compute[233724]: 2025-11-29 08:27:36.282 233728 DEBUG oslo_concurrency.lockutils [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "d6363749-92b8-41e7-860c-63dc695390e4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:36 np0005539552 nova_compute[233724]: 2025-11-29 08:27:36.283 233728 DEBUG oslo_concurrency.lockutils [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "d6363749-92b8-41e7-860c-63dc695390e4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:36 np0005539552 nova_compute[233724]: 2025-11-29 08:27:36.283 233728 DEBUG nova.virt.libvirt.vif [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:27:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-474867473',display_name='tempest-ServerRescueNegativeTestJSON-server-474867473',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-474867473',id=144,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIKkQ7Zazeo/dx6F4Eu6MN8OAjR4BxckLM+7ouW/olmDfJC62bOHNKRmGAyxOWHpYnYgRnTecW30ZoVhQUqa4XTjBKJkd20WTjX5TvwIkgUKRgDOuqdsmup3NfferXDEOw==',key_name='tempest-keypair-472027593',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ea7b24ea9d7b4d239b4741634ac3f10c',ramdisk_id='',reservation_id='r-knv00qym',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-2045177058',owner_user_name='tempest-ServerRescueNegativeTestJSON-2045177058-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:27:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='283f8136265e4425a5a31f840935b9ab',uuid=d6363749-92b8-41e7-860c-63dc695390e4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a6621efc-2904-4858-9a98-9c441a64d2ff", "address": "fa:16:3e:ba:8a:8e", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6621efc-29", "ovs_interfaceid": "a6621efc-2904-4858-9a98-9c441a64d2ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:27:36 np0005539552 nova_compute[233724]: 2025-11-29 08:27:36.284 233728 DEBUG nova.network.os_vif_util [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Converting VIF {"id": "a6621efc-2904-4858-9a98-9c441a64d2ff", "address": "fa:16:3e:ba:8a:8e", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6621efc-29", "ovs_interfaceid": "a6621efc-2904-4858-9a98-9c441a64d2ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:27:36 np0005539552 nova_compute[233724]: 2025-11-29 08:27:36.284 233728 DEBUG nova.network.os_vif_util [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:8a:8e,bridge_name='br-int',has_traffic_filtering=True,id=a6621efc-2904-4858-9a98-9c441a64d2ff,network=Network(4ca67fce-6116-4a0b-b0a9-c25b5adaad19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6621efc-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:27:36 np0005539552 nova_compute[233724]: 2025-11-29 08:27:36.285 233728 DEBUG os_vif [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:8a:8e,bridge_name='br-int',has_traffic_filtering=True,id=a6621efc-2904-4858-9a98-9c441a64d2ff,network=Network(4ca67fce-6116-4a0b-b0a9-c25b5adaad19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6621efc-29') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:27:36 np0005539552 nova_compute[233724]: 2025-11-29 08:27:36.286 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:36 np0005539552 nova_compute[233724]: 2025-11-29 08:27:36.286 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:36 np0005539552 nova_compute[233724]: 2025-11-29 08:27:36.286 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:27:36 np0005539552 nova_compute[233724]: 2025-11-29 08:27:36.290 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:36 np0005539552 nova_compute[233724]: 2025-11-29 08:27:36.290 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa6621efc-29, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:36 np0005539552 nova_compute[233724]: 2025-11-29 08:27:36.291 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa6621efc-29, col_values=(('external_ids', {'iface-id': 'a6621efc-2904-4858-9a98-9c441a64d2ff', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ba:8a:8e', 'vm-uuid': 'd6363749-92b8-41e7-860c-63dc695390e4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:36 np0005539552 nova_compute[233724]: 2025-11-29 08:27:36.292 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:36 np0005539552 NetworkManager[48926]: <info>  [1764404856.2935] manager: (tapa6621efc-29): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/294)
Nov 29 03:27:36 np0005539552 nova_compute[233724]: 2025-11-29 08:27:36.295 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:27:36 np0005539552 nova_compute[233724]: 2025-11-29 08:27:36.304 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:36 np0005539552 nova_compute[233724]: 2025-11-29 08:27:36.304 233728 INFO os_vif [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:8a:8e,bridge_name='br-int',has_traffic_filtering=True,id=a6621efc-2904-4858-9a98-9c441a64d2ff,network=Network(4ca67fce-6116-4a0b-b0a9-c25b5adaad19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6621efc-29')#033[00m
Nov 29 03:27:36 np0005539552 nova_compute[233724]: 2025-11-29 08:27:36.566 233728 DEBUG nova.virt.libvirt.driver [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:27:36 np0005539552 nova_compute[233724]: 2025-11-29 08:27:36.567 233728 DEBUG nova.virt.libvirt.driver [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:27:36 np0005539552 nova_compute[233724]: 2025-11-29 08:27:36.567 233728 DEBUG nova.virt.libvirt.driver [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] No VIF found with MAC fa:16:3e:ba:8a:8e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:27:36 np0005539552 nova_compute[233724]: 2025-11-29 08:27:36.567 233728 INFO nova.virt.libvirt.driver [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Using config drive#033[00m
Nov 29 03:27:36 np0005539552 nova_compute[233724]: 2025-11-29 08:27:36.594 233728 DEBUG nova.storage.rbd_utils [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] rbd image d6363749-92b8-41e7-860c-63dc695390e4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:27:36 np0005539552 nova_compute[233724]: 2025-11-29 08:27:36.599 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:37.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:37 np0005539552 nova_compute[233724]: 2025-11-29 08:27:37.185 233728 INFO nova.virt.libvirt.driver [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Creating config drive at /var/lib/nova/instances/d6363749-92b8-41e7-860c-63dc695390e4/disk.config#033[00m
Nov 29 03:27:37 np0005539552 nova_compute[233724]: 2025-11-29 08:27:37.198 233728 DEBUG oslo_concurrency.processutils [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d6363749-92b8-41e7-860c-63dc695390e4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcquk7iyy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:37 np0005539552 nova_compute[233724]: 2025-11-29 08:27:37.337 233728 DEBUG oslo_concurrency.processutils [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d6363749-92b8-41e7-860c-63dc695390e4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcquk7iyy" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:37 np0005539552 nova_compute[233724]: 2025-11-29 08:27:37.368 233728 DEBUG nova.storage.rbd_utils [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] rbd image d6363749-92b8-41e7-860c-63dc695390e4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:27:37 np0005539552 nova_compute[233724]: 2025-11-29 08:27:37.372 233728 DEBUG oslo_concurrency.processutils [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d6363749-92b8-41e7-860c-63dc695390e4/disk.config d6363749-92b8-41e7-860c-63dc695390e4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:37 np0005539552 nova_compute[233724]: 2025-11-29 08:27:37.560 233728 DEBUG oslo_concurrency.processutils [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d6363749-92b8-41e7-860c-63dc695390e4/disk.config d6363749-92b8-41e7-860c-63dc695390e4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.188s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:37 np0005539552 nova_compute[233724]: 2025-11-29 08:27:37.561 233728 INFO nova.virt.libvirt.driver [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Deleting local config drive /var/lib/nova/instances/d6363749-92b8-41e7-860c-63dc695390e4/disk.config because it was imported into RBD.#033[00m
Nov 29 03:27:37 np0005539552 kernel: tapa6621efc-29: entered promiscuous mode
Nov 29 03:27:37 np0005539552 NetworkManager[48926]: <info>  [1764404857.6065] manager: (tapa6621efc-29): new Tun device (/org/freedesktop/NetworkManager/Devices/295)
Nov 29 03:27:37 np0005539552 nova_compute[233724]: 2025-11-29 08:27:37.607 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:37 np0005539552 ovn_controller[133798]: 2025-11-29T08:27:37Z|00653|binding|INFO|Claiming lport a6621efc-2904-4858-9a98-9c441a64d2ff for this chassis.
Nov 29 03:27:37 np0005539552 ovn_controller[133798]: 2025-11-29T08:27:37Z|00654|binding|INFO|a6621efc-2904-4858-9a98-9c441a64d2ff: Claiming fa:16:3e:ba:8a:8e 10.100.0.10
Nov 29 03:27:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:37.614 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:8a:8e 10.100.0.10'], port_security=['fa:16:3e:ba:8a:8e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'd6363749-92b8-41e7-860c-63dc695390e4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea7b24ea9d7b4d239b4741634ac3f10c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6d3fe511-bc37-4fc4-9176-e8e88cafdead', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2992474f-d5bf-4893-b4cc-2c774a8a9871, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=a6621efc-2904-4858-9a98-9c441a64d2ff) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:27:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:37.616 143400 INFO neutron.agent.ovn.metadata.agent [-] Port a6621efc-2904-4858-9a98-9c441a64d2ff in datapath 4ca67fce-6116-4a0b-b0a9-c25b5adaad19 bound to our chassis#033[00m
Nov 29 03:27:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:37.617 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4ca67fce-6116-4a0b-b0a9-c25b5adaad19#033[00m
Nov 29 03:27:37 np0005539552 ovn_controller[133798]: 2025-11-29T08:27:37Z|00655|binding|INFO|Setting lport a6621efc-2904-4858-9a98-9c441a64d2ff ovn-installed in OVS
Nov 29 03:27:37 np0005539552 ovn_controller[133798]: 2025-11-29T08:27:37Z|00656|binding|INFO|Setting lport a6621efc-2904-4858-9a98-9c441a64d2ff up in Southbound
Nov 29 03:27:37 np0005539552 nova_compute[233724]: 2025-11-29 08:27:37.625 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:37.634 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[bd21ad54-0dfb-41c7-a549-0b3f86340ebf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:37 np0005539552 systemd-udevd[294595]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:27:37 np0005539552 systemd-machined[196379]: New machine qemu-66-instance-00000090.
Nov 29 03:27:37 np0005539552 NetworkManager[48926]: <info>  [1764404857.6521] device (tapa6621efc-29): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:27:37 np0005539552 NetworkManager[48926]: <info>  [1764404857.6528] device (tapa6621efc-29): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:27:37 np0005539552 systemd[1]: Started Virtual Machine qemu-66-instance-00000090.
Nov 29 03:27:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:37.666 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[099ac412-0739-4c53-b7d7-78792afb5a8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:37.669 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[6a90f625-148f-4cd7-be83-67865b69d929]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:37.697 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[92d5afb5-f0ec-4fad-9256-42bd36dd0d35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:37.714 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[85f01bea-5ac3-4ba8-8445-14d7f3e976e6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4ca67fce-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:3c:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 189], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 785715, 'reachable_time': 15430, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294606, 'error': None, 'target': 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:37.728 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ffbce796-5a14-4e1d-af3a-b90deab09b1c]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4ca67fce-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 785728, 'tstamp': 785728}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294609, 'error': None, 'target': 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4ca67fce-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 785731, 'tstamp': 785731}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294609, 'error': None, 'target': 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:37.730 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ca67fce-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:37 np0005539552 nova_compute[233724]: 2025-11-29 08:27:37.731 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:37 np0005539552 nova_compute[233724]: 2025-11-29 08:27:37.732 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:37.734 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4ca67fce-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:37.735 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:27:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:37.735 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4ca67fce-60, col_values=(('external_ids', {'iface-id': '6f99f0ed-ee75-45c3-abe1-1afc889fd227'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:37.735 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:27:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:38.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:38 np0005539552 nova_compute[233724]: 2025-11-29 08:27:38.484 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:38 np0005539552 nova_compute[233724]: 2025-11-29 08:27:38.485 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:38 np0005539552 nova_compute[233724]: 2025-11-29 08:27:38.485 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:27:38 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:27:38 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:27:38 np0005539552 nova_compute[233724]: 2025-11-29 08:27:38.564 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404858.563792, d6363749-92b8-41e7-860c-63dc695390e4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:27:38 np0005539552 nova_compute[233724]: 2025-11-29 08:27:38.565 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d6363749-92b8-41e7-860c-63dc695390e4] VM Started (Lifecycle Event)#033[00m
Nov 29 03:27:38 np0005539552 nova_compute[233724]: 2025-11-29 08:27:38.587 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:27:38 np0005539552 nova_compute[233724]: 2025-11-29 08:27:38.591 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404858.5644295, d6363749-92b8-41e7-860c-63dc695390e4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:27:38 np0005539552 nova_compute[233724]: 2025-11-29 08:27:38.591 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d6363749-92b8-41e7-860c-63dc695390e4] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:27:38 np0005539552 nova_compute[233724]: 2025-11-29 08:27:38.609 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:27:38 np0005539552 nova_compute[233724]: 2025-11-29 08:27:38.612 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:27:38 np0005539552 nova_compute[233724]: 2025-11-29 08:27:38.630 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d6363749-92b8-41e7-860c-63dc695390e4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:27:38 np0005539552 nova_compute[233724]: 2025-11-29 08:27:38.904 233728 DEBUG nova.compute.manager [req-50025490-40c4-4cec-957e-ff7af720188b req-b06d9a45-7fa4-420d-8bd1-3e4c83d35b1c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Received event network-vif-plugged-a6621efc-2904-4858-9a98-9c441a64d2ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:38 np0005539552 nova_compute[233724]: 2025-11-29 08:27:38.904 233728 DEBUG oslo_concurrency.lockutils [req-50025490-40c4-4cec-957e-ff7af720188b req-b06d9a45-7fa4-420d-8bd1-3e4c83d35b1c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d6363749-92b8-41e7-860c-63dc695390e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:38 np0005539552 nova_compute[233724]: 2025-11-29 08:27:38.904 233728 DEBUG oslo_concurrency.lockutils [req-50025490-40c4-4cec-957e-ff7af720188b req-b06d9a45-7fa4-420d-8bd1-3e4c83d35b1c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d6363749-92b8-41e7-860c-63dc695390e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:38 np0005539552 nova_compute[233724]: 2025-11-29 08:27:38.905 233728 DEBUG oslo_concurrency.lockutils [req-50025490-40c4-4cec-957e-ff7af720188b req-b06d9a45-7fa4-420d-8bd1-3e4c83d35b1c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d6363749-92b8-41e7-860c-63dc695390e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:38 np0005539552 nova_compute[233724]: 2025-11-29 08:27:38.905 233728 DEBUG nova.compute.manager [req-50025490-40c4-4cec-957e-ff7af720188b req-b06d9a45-7fa4-420d-8bd1-3e4c83d35b1c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Processing event network-vif-plugged-a6621efc-2904-4858-9a98-9c441a64d2ff _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:27:38 np0005539552 nova_compute[233724]: 2025-11-29 08:27:38.906 233728 DEBUG nova.compute.manager [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:27:38 np0005539552 nova_compute[233724]: 2025-11-29 08:27:38.909 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404858.9090104, d6363749-92b8-41e7-860c-63dc695390e4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:27:38 np0005539552 nova_compute[233724]: 2025-11-29 08:27:38.909 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d6363749-92b8-41e7-860c-63dc695390e4] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:27:38 np0005539552 nova_compute[233724]: 2025-11-29 08:27:38.911 233728 DEBUG nova.virt.libvirt.driver [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:27:38 np0005539552 nova_compute[233724]: 2025-11-29 08:27:38.912 233728 DEBUG nova.network.neutron [req-a1614c80-3989-4488-9b6e-6a24b9bc2d40 req-91402112-2102-4116-b7d5-ddd6fc8ede47 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Updated VIF entry in instance network info cache for port a6621efc-2904-4858-9a98-9c441a64d2ff. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:27:38 np0005539552 nova_compute[233724]: 2025-11-29 08:27:38.913 233728 DEBUG nova.network.neutron [req-a1614c80-3989-4488-9b6e-6a24b9bc2d40 req-91402112-2102-4116-b7d5-ddd6fc8ede47 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Updating instance_info_cache with network_info: [{"id": "a6621efc-2904-4858-9a98-9c441a64d2ff", "address": "fa:16:3e:ba:8a:8e", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6621efc-29", "ovs_interfaceid": "a6621efc-2904-4858-9a98-9c441a64d2ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:27:38 np0005539552 nova_compute[233724]: 2025-11-29 08:27:38.916 233728 INFO nova.virt.libvirt.driver [-] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Instance spawned successfully.#033[00m
Nov 29 03:27:38 np0005539552 nova_compute[233724]: 2025-11-29 08:27:38.916 233728 DEBUG nova.virt.libvirt.driver [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:27:38 np0005539552 nova_compute[233724]: 2025-11-29 08:27:38.938 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:27:38 np0005539552 nova_compute[233724]: 2025-11-29 08:27:38.943 233728 DEBUG oslo_concurrency.lockutils [req-a1614c80-3989-4488-9b6e-6a24b9bc2d40 req-91402112-2102-4116-b7d5-ddd6fc8ede47 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-d6363749-92b8-41e7-860c-63dc695390e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:27:38 np0005539552 nova_compute[233724]: 2025-11-29 08:27:38.946 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:27:38 np0005539552 nova_compute[233724]: 2025-11-29 08:27:38.950 233728 DEBUG nova.virt.libvirt.driver [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:27:38 np0005539552 nova_compute[233724]: 2025-11-29 08:27:38.950 233728 DEBUG nova.virt.libvirt.driver [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:27:38 np0005539552 nova_compute[233724]: 2025-11-29 08:27:38.950 233728 DEBUG nova.virt.libvirt.driver [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:27:38 np0005539552 nova_compute[233724]: 2025-11-29 08:27:38.951 233728 DEBUG nova.virt.libvirt.driver [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:27:38 np0005539552 nova_compute[233724]: 2025-11-29 08:27:38.951 233728 DEBUG nova.virt.libvirt.driver [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:27:38 np0005539552 nova_compute[233724]: 2025-11-29 08:27:38.951 233728 DEBUG nova.virt.libvirt.driver [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:27:38 np0005539552 nova_compute[233724]: 2025-11-29 08:27:38.978 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d6363749-92b8-41e7-860c-63dc695390e4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:27:39 np0005539552 nova_compute[233724]: 2025-11-29 08:27:39.016 233728 INFO nova.compute.manager [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Took 7.10 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:27:39 np0005539552 nova_compute[233724]: 2025-11-29 08:27:39.016 233728 DEBUG nova.compute.manager [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:27:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:39.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:39 np0005539552 nova_compute[233724]: 2025-11-29 08:27:39.276 233728 INFO nova.compute.manager [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Took 8.27 seconds to build instance.#033[00m
Nov 29 03:27:39 np0005539552 nova_compute[233724]: 2025-11-29 08:27:39.383 233728 DEBUG oslo_concurrency.lockutils [None req-b1f01fe0-ddb5-4949-9378-746c62321847 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "d6363749-92b8-41e7-860c-63dc695390e4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.440s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:39 np0005539552 nova_compute[233724]: 2025-11-29 08:27:39.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:40.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:40 np0005539552 nova_compute[233724]: 2025-11-29 08:27:40.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:40 np0005539552 nova_compute[233724]: 2025-11-29 08:27:40.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:41 np0005539552 nova_compute[233724]: 2025-11-29 08:27:41.023 233728 DEBUG nova.compute.manager [req-6bde91e1-c401-49ac-90f9-14fd49a695d6 req-600ead18-9c1a-488c-b339-34ad15c1f834 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Received event network-vif-plugged-a6621efc-2904-4858-9a98-9c441a64d2ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:41 np0005539552 nova_compute[233724]: 2025-11-29 08:27:41.023 233728 DEBUG oslo_concurrency.lockutils [req-6bde91e1-c401-49ac-90f9-14fd49a695d6 req-600ead18-9c1a-488c-b339-34ad15c1f834 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d6363749-92b8-41e7-860c-63dc695390e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:41 np0005539552 nova_compute[233724]: 2025-11-29 08:27:41.024 233728 DEBUG oslo_concurrency.lockutils [req-6bde91e1-c401-49ac-90f9-14fd49a695d6 req-600ead18-9c1a-488c-b339-34ad15c1f834 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d6363749-92b8-41e7-860c-63dc695390e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:41 np0005539552 nova_compute[233724]: 2025-11-29 08:27:41.024 233728 DEBUG oslo_concurrency.lockutils [req-6bde91e1-c401-49ac-90f9-14fd49a695d6 req-600ead18-9c1a-488c-b339-34ad15c1f834 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d6363749-92b8-41e7-860c-63dc695390e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:41 np0005539552 nova_compute[233724]: 2025-11-29 08:27:41.024 233728 DEBUG nova.compute.manager [req-6bde91e1-c401-49ac-90f9-14fd49a695d6 req-600ead18-9c1a-488c-b339-34ad15c1f834 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] No waiting events found dispatching network-vif-plugged-a6621efc-2904-4858-9a98-9c441a64d2ff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:27:41 np0005539552 nova_compute[233724]: 2025-11-29 08:27:41.024 233728 WARNING nova.compute.manager [req-6bde91e1-c401-49ac-90f9-14fd49a695d6 req-600ead18-9c1a-488c-b339-34ad15c1f834 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Received unexpected event network-vif-plugged-a6621efc-2904-4858-9a98-9c441a64d2ff for instance with vm_state active and task_state None.#033[00m
Nov 29 03:27:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:41.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:41 np0005539552 nova_compute[233724]: 2025-11-29 08:27:41.293 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:41 np0005539552 nova_compute[233724]: 2025-11-29 08:27:41.598 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:41 np0005539552 nova_compute[233724]: 2025-11-29 08:27:41.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:41 np0005539552 nova_compute[233724]: 2025-11-29 08:27:41.980 233728 DEBUG oslo_concurrency.lockutils [None req-91eec934-26ad-4bf5-9d0e-d48f87d208f6 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "50daa6f5-6598-439f-a542-38e8ae7aded0" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:41 np0005539552 nova_compute[233724]: 2025-11-29 08:27:41.981 233728 DEBUG oslo_concurrency.lockutils [None req-91eec934-26ad-4bf5-9d0e-d48f87d208f6 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "50daa6f5-6598-439f-a542-38e8ae7aded0" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:41 np0005539552 nova_compute[233724]: 2025-11-29 08:27:41.992 233728 INFO nova.compute.manager [None req-91eec934-26ad-4bf5-9d0e-d48f87d208f6 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Detaching volume 8a80d4c0-d642-46d8-a891-79c20b955b6a#033[00m
Nov 29 03:27:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:27:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:42.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:27:42 np0005539552 nova_compute[233724]: 2025-11-29 08:27:42.153 233728 INFO nova.virt.block_device [None req-91eec934-26ad-4bf5-9d0e-d48f87d208f6 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Attempting to driver detach volume 8a80d4c0-d642-46d8-a891-79c20b955b6a from mountpoint /dev/vdb#033[00m
Nov 29 03:27:42 np0005539552 nova_compute[233724]: 2025-11-29 08:27:42.165 233728 DEBUG nova.virt.libvirt.driver [None req-91eec934-26ad-4bf5-9d0e-d48f87d208f6 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Attempting to detach device vdb from instance 50daa6f5-6598-439f-a542-38e8ae7aded0 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:27:42 np0005539552 nova_compute[233724]: 2025-11-29 08:27:42.166 233728 DEBUG nova.virt.libvirt.guest [None req-91eec934-26ad-4bf5-9d0e-d48f87d208f6 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:27:42 np0005539552 nova_compute[233724]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:27:42 np0005539552 nova_compute[233724]:  <source protocol="rbd" name="volumes/volume-8a80d4c0-d642-46d8-a891-79c20b955b6a">
Nov 29 03:27:42 np0005539552 nova_compute[233724]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:27:42 np0005539552 nova_compute[233724]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:27:42 np0005539552 nova_compute[233724]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:27:42 np0005539552 nova_compute[233724]:  </source>
Nov 29 03:27:42 np0005539552 nova_compute[233724]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:27:42 np0005539552 nova_compute[233724]:  <serial>8a80d4c0-d642-46d8-a891-79c20b955b6a</serial>
Nov 29 03:27:42 np0005539552 nova_compute[233724]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:27:42 np0005539552 nova_compute[233724]: </disk>
Nov 29 03:27:42 np0005539552 nova_compute[233724]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:27:42 np0005539552 nova_compute[233724]: 2025-11-29 08:27:42.177 233728 INFO nova.virt.libvirt.driver [None req-91eec934-26ad-4bf5-9d0e-d48f87d208f6 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Successfully detached device vdb from instance 50daa6f5-6598-439f-a542-38e8ae7aded0 from the persistent domain config.#033[00m
Nov 29 03:27:42 np0005539552 nova_compute[233724]: 2025-11-29 08:27:42.178 233728 DEBUG nova.virt.libvirt.driver [None req-91eec934-26ad-4bf5-9d0e-d48f87d208f6 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 50daa6f5-6598-439f-a542-38e8ae7aded0 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 29 03:27:42 np0005539552 nova_compute[233724]: 2025-11-29 08:27:42.179 233728 DEBUG nova.virt.libvirt.guest [None req-91eec934-26ad-4bf5-9d0e-d48f87d208f6 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:27:42 np0005539552 nova_compute[233724]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:27:42 np0005539552 nova_compute[233724]:  <source protocol="rbd" name="volumes/volume-8a80d4c0-d642-46d8-a891-79c20b955b6a">
Nov 29 03:27:42 np0005539552 nova_compute[233724]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:27:42 np0005539552 nova_compute[233724]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:27:42 np0005539552 nova_compute[233724]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:27:42 np0005539552 nova_compute[233724]:  </source>
Nov 29 03:27:42 np0005539552 nova_compute[233724]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:27:42 np0005539552 nova_compute[233724]:  <serial>8a80d4c0-d642-46d8-a891-79c20b955b6a</serial>
Nov 29 03:27:42 np0005539552 nova_compute[233724]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:27:42 np0005539552 nova_compute[233724]: </disk>
Nov 29 03:27:42 np0005539552 nova_compute[233724]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:27:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e341 e341: 3 total, 3 up, 3 in
Nov 29 03:27:42 np0005539552 nova_compute[233724]: 2025-11-29 08:27:42.301 233728 DEBUG nova.virt.libvirt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Received event <DeviceRemovedEvent: 1764404862.3010657, 50daa6f5-6598-439f-a542-38e8ae7aded0 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 29 03:27:42 np0005539552 nova_compute[233724]: 2025-11-29 08:27:42.304 233728 DEBUG nova.virt.libvirt.driver [None req-91eec934-26ad-4bf5-9d0e-d48f87d208f6 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 50daa6f5-6598-439f-a542-38e8ae7aded0 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 29 03:27:42 np0005539552 nova_compute[233724]: 2025-11-29 08:27:42.306 233728 INFO nova.virt.libvirt.driver [None req-91eec934-26ad-4bf5-9d0e-d48f87d208f6 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Successfully detached device vdb from instance 50daa6f5-6598-439f-a542-38e8ae7aded0 from the live domain config.#033[00m
Nov 29 03:27:42 np0005539552 nova_compute[233724]: 2025-11-29 08:27:42.484 233728 DEBUG nova.objects.instance [None req-91eec934-26ad-4bf5-9d0e-d48f87d208f6 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'flavor' on Instance uuid 50daa6f5-6598-439f-a542-38e8ae7aded0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:27:42 np0005539552 nova_compute[233724]: 2025-11-29 08:27:42.584 233728 DEBUG oslo_concurrency.lockutils [None req-91eec934-26ad-4bf5-9d0e-d48f87d208f6 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "50daa6f5-6598-439f-a542-38e8ae7aded0" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e341 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:43.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:43 np0005539552 nova_compute[233724]: 2025-11-29 08:27:43.267 233728 DEBUG nova.compute.manager [req-c7e7344d-f05a-4ca5-a810-6480dd57eea5 req-51112133-6bba-4a7a-9b9f-169743622d59 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Received event network-changed-a6621efc-2904-4858-9a98-9c441a64d2ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:43 np0005539552 nova_compute[233724]: 2025-11-29 08:27:43.268 233728 DEBUG nova.compute.manager [req-c7e7344d-f05a-4ca5-a810-6480dd57eea5 req-51112133-6bba-4a7a-9b9f-169743622d59 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Refreshing instance network info cache due to event network-changed-a6621efc-2904-4858-9a98-9c441a64d2ff. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:27:43 np0005539552 nova_compute[233724]: 2025-11-29 08:27:43.268 233728 DEBUG oslo_concurrency.lockutils [req-c7e7344d-f05a-4ca5-a810-6480dd57eea5 req-51112133-6bba-4a7a-9b9f-169743622d59 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-d6363749-92b8-41e7-860c-63dc695390e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:27:43 np0005539552 nova_compute[233724]: 2025-11-29 08:27:43.269 233728 DEBUG oslo_concurrency.lockutils [req-c7e7344d-f05a-4ca5-a810-6480dd57eea5 req-51112133-6bba-4a7a-9b9f-169743622d59 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-d6363749-92b8-41e7-860c-63dc695390e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:27:43 np0005539552 nova_compute[233724]: 2025-11-29 08:27:43.269 233728 DEBUG nova.network.neutron [req-c7e7344d-f05a-4ca5-a810-6480dd57eea5 req-51112133-6bba-4a7a-9b9f-169743622d59 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Refreshing network info cache for port a6621efc-2904-4858-9a98-9c441a64d2ff _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:27:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:27:43 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1317626881' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:27:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:43.704 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:27:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:43.705 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:27:43 np0005539552 nova_compute[233724]: 2025-11-29 08:27:43.706 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:44.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:44 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e342 e342: 3 total, 3 up, 3 in
Nov 29 03:27:44 np0005539552 nova_compute[233724]: 2025-11-29 08:27:44.918 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:44 np0005539552 nova_compute[233724]: 2025-11-29 08:27:44.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:44 np0005539552 nova_compute[233724]: 2025-11-29 08:27:44.923 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:27:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:45.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:45 np0005539552 nova_compute[233724]: 2025-11-29 08:27:45.513 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "refresh_cache-bb261893-bfa1-4fdc-9c11-a33a733337ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:27:45 np0005539552 nova_compute[233724]: 2025-11-29 08:27:45.514 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquired lock "refresh_cache-bb261893-bfa1-4fdc-9c11-a33a733337ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:27:45 np0005539552 nova_compute[233724]: 2025-11-29 08:27:45.514 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:27:45 np0005539552 nova_compute[233724]: 2025-11-29 08:27:45.566 233728 DEBUG nova.network.neutron [req-c7e7344d-f05a-4ca5-a810-6480dd57eea5 req-51112133-6bba-4a7a-9b9f-169743622d59 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Updated VIF entry in instance network info cache for port a6621efc-2904-4858-9a98-9c441a64d2ff. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:27:45 np0005539552 nova_compute[233724]: 2025-11-29 08:27:45.567 233728 DEBUG nova.network.neutron [req-c7e7344d-f05a-4ca5-a810-6480dd57eea5 req-51112133-6bba-4a7a-9b9f-169743622d59 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Updating instance_info_cache with network_info: [{"id": "a6621efc-2904-4858-9a98-9c441a64d2ff", "address": "fa:16:3e:ba:8a:8e", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6621efc-29", "ovs_interfaceid": "a6621efc-2904-4858-9a98-9c441a64d2ff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:27:45 np0005539552 nova_compute[233724]: 2025-11-29 08:27:45.590 233728 DEBUG oslo_concurrency.lockutils [req-c7e7344d-f05a-4ca5-a810-6480dd57eea5 req-51112133-6bba-4a7a-9b9f-169743622d59 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-d6363749-92b8-41e7-860c-63dc695390e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:27:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:46.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:46 np0005539552 nova_compute[233724]: 2025-11-29 08:27:46.296 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:46 np0005539552 nova_compute[233724]: 2025-11-29 08:27:46.600 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:46 np0005539552 nova_compute[233724]: 2025-11-29 08:27:46.868 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Updating instance_info_cache with network_info: [{"id": "0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d", "address": "fa:16:3e:72:4b:49", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e2e8c1e-12", "ovs_interfaceid": "0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:27:46 np0005539552 nova_compute[233724]: 2025-11-29 08:27:46.898 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Releasing lock "refresh_cache-bb261893-bfa1-4fdc-9c11-a33a733337ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:27:46 np0005539552 nova_compute[233724]: 2025-11-29 08:27:46.898 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:27:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:47.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:47 np0005539552 nova_compute[233724]: 2025-11-29 08:27:47.281 233728 DEBUG oslo_concurrency.lockutils [None req-f94e5d34-2fe1-420e-9245-4b53ad3abf17 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "50daa6f5-6598-439f-a542-38e8ae7aded0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:47 np0005539552 nova_compute[233724]: 2025-11-29 08:27:47.281 233728 DEBUG oslo_concurrency.lockutils [None req-f94e5d34-2fe1-420e-9245-4b53ad3abf17 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "50daa6f5-6598-439f-a542-38e8ae7aded0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:47 np0005539552 nova_compute[233724]: 2025-11-29 08:27:47.282 233728 DEBUG oslo_concurrency.lockutils [None req-f94e5d34-2fe1-420e-9245-4b53ad3abf17 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "50daa6f5-6598-439f-a542-38e8ae7aded0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:47 np0005539552 nova_compute[233724]: 2025-11-29 08:27:47.282 233728 DEBUG oslo_concurrency.lockutils [None req-f94e5d34-2fe1-420e-9245-4b53ad3abf17 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "50daa6f5-6598-439f-a542-38e8ae7aded0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:47 np0005539552 nova_compute[233724]: 2025-11-29 08:27:47.282 233728 DEBUG oslo_concurrency.lockutils [None req-f94e5d34-2fe1-420e-9245-4b53ad3abf17 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "50daa6f5-6598-439f-a542-38e8ae7aded0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:47 np0005539552 nova_compute[233724]: 2025-11-29 08:27:47.283 233728 INFO nova.compute.manager [None req-f94e5d34-2fe1-420e-9245-4b53ad3abf17 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Terminating instance#033[00m
Nov 29 03:27:47 np0005539552 nova_compute[233724]: 2025-11-29 08:27:47.284 233728 DEBUG nova.compute.manager [None req-f94e5d34-2fe1-420e-9245-4b53ad3abf17 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:27:47 np0005539552 kernel: tap91d5abbf-fd (unregistering): left promiscuous mode
Nov 29 03:27:47 np0005539552 NetworkManager[48926]: <info>  [1764404867.3696] device (tap91d5abbf-fd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:27:47 np0005539552 nova_compute[233724]: 2025-11-29 08:27:47.373 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:47 np0005539552 nova_compute[233724]: 2025-11-29 08:27:47.377 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:47 np0005539552 ovn_controller[133798]: 2025-11-29T08:27:47Z|00657|binding|INFO|Releasing lport 91d5abbf-fd67-487f-bfaa-448b1daa5272 from this chassis (sb_readonly=0)
Nov 29 03:27:47 np0005539552 ovn_controller[133798]: 2025-11-29T08:27:47Z|00658|binding|INFO|Setting lport 91d5abbf-fd67-487f-bfaa-448b1daa5272 down in Southbound
Nov 29 03:27:47 np0005539552 ovn_controller[133798]: 2025-11-29T08:27:47Z|00659|binding|INFO|Removing iface tap91d5abbf-fd ovn-installed in OVS
Nov 29 03:27:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:47.384 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:14:96 10.100.0.4'], port_security=['fa:16:3e:71:14:96 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '50daa6f5-6598-439f-a542-38e8ae7aded0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9a83f8d8d7f4d08890407f978c05166', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'b3bcb98f-0ed3-4c70-97d6-4df1974ced71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.191', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d0d36bf-5f41-4d6e-9e1b-1a2b5a9220ce, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=91d5abbf-fd67-487f-bfaa-448b1daa5272) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:27:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:47.385 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 91d5abbf-fd67-487f-bfaa-448b1daa5272 in datapath 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 unbound from our chassis#033[00m
Nov 29 03:27:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:47.387 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:27:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:47.391 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[feebca86-a51a-4fa9-b966-7a834d9ab11f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:47.392 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 namespace which is not needed anymore#033[00m
Nov 29 03:27:47 np0005539552 nova_compute[233724]: 2025-11-29 08:27:47.401 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:47 np0005539552 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d0000008a.scope: Deactivated successfully.
Nov 29 03:27:47 np0005539552 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d0000008a.scope: Consumed 13.936s CPU time.
Nov 29 03:27:47 np0005539552 systemd-machined[196379]: Machine qemu-65-instance-0000008a terminated.
Nov 29 03:27:47 np0005539552 podman[294763]: 2025-11-29 08:27:47.463774978 +0000 UTC m=+0.090075335 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3)
Nov 29 03:27:47 np0005539552 podman[294764]: 2025-11-29 08:27:47.469349918 +0000 UTC m=+0.064304771 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, 
org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:27:47 np0005539552 podman[294765]: 2025-11-29 08:27:47.498061271 +0000 UTC m=+0.116484126 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible)
Nov 29 03:27:47 np0005539552 kernel: tap91d5abbf-fd: entered promiscuous mode
Nov 29 03:27:47 np0005539552 NetworkManager[48926]: <info>  [1764404867.5080] manager: (tap91d5abbf-fd): new Tun device (/org/freedesktop/NetworkManager/Devices/296)
Nov 29 03:27:47 np0005539552 kernel: tap91d5abbf-fd (unregistering): left promiscuous mode
Nov 29 03:27:47 np0005539552 nova_compute[233724]: 2025-11-29 08:27:47.519 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:47 np0005539552 nova_compute[233724]: 2025-11-29 08:27:47.531 233728 INFO nova.virt.libvirt.driver [-] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Instance destroyed successfully.#033[00m
Nov 29 03:27:47 np0005539552 nova_compute[233724]: 2025-11-29 08:27:47.531 233728 DEBUG nova.objects.instance [None req-f94e5d34-2fe1-420e-9245-4b53ad3abf17 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'resources' on Instance uuid 50daa6f5-6598-439f-a542-38e8ae7aded0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:27:47 np0005539552 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[293831]: [NOTICE]   (293835) : haproxy version is 2.8.14-c23fe91
Nov 29 03:27:47 np0005539552 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[293831]: [NOTICE]   (293835) : path to executable is /usr/sbin/haproxy
Nov 29 03:27:47 np0005539552 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[293831]: [WARNING]  (293835) : Exiting Master process...
Nov 29 03:27:47 np0005539552 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[293831]: [WARNING]  (293835) : Exiting Master process...
Nov 29 03:27:47 np0005539552 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[293831]: [ALERT]    (293835) : Current worker (293837) exited with code 143 (Terminated)
Nov 29 03:27:47 np0005539552 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[293831]: [WARNING]  (293835) : All workers exited. Exiting... (0)
Nov 29 03:27:47 np0005539552 systemd[1]: libpod-d51c7df9fe7c9f2d2f190ec146da14164b985b87a38e726837cda5ed9d197c64.scope: Deactivated successfully.
Nov 29 03:27:47 np0005539552 podman[294849]: 2025-11-29 08:27:47.557658324 +0000 UTC m=+0.050135510 container died d51c7df9fe7c9f2d2f190ec146da14164b985b87a38e726837cda5ed9d197c64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:27:47 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d51c7df9fe7c9f2d2f190ec146da14164b985b87a38e726837cda5ed9d197c64-userdata-shm.mount: Deactivated successfully.
Nov 29 03:27:47 np0005539552 systemd[1]: var-lib-containers-storage-overlay-b0f7f06859b7627d298b7ae1938d91ec4aaf5acdefc0267de677c733b96dd326-merged.mount: Deactivated successfully.
Nov 29 03:27:47 np0005539552 podman[294849]: 2025-11-29 08:27:47.596345385 +0000 UTC m=+0.088822581 container cleanup d51c7df9fe7c9f2d2f190ec146da14164b985b87a38e726837cda5ed9d197c64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 03:27:47 np0005539552 systemd[1]: libpod-conmon-d51c7df9fe7c9f2d2f190ec146da14164b985b87a38e726837cda5ed9d197c64.scope: Deactivated successfully.
Nov 29 03:27:47 np0005539552 podman[294883]: 2025-11-29 08:27:47.662861075 +0000 UTC m=+0.043400649 container remove d51c7df9fe7c9f2d2f190ec146da14164b985b87a38e726837cda5ed9d197c64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:27:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:47.668 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[675993cc-36a1-4589-96c3-7e330cd8768c]: (4, ('Sat Nov 29 08:27:47 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 (d51c7df9fe7c9f2d2f190ec146da14164b985b87a38e726837cda5ed9d197c64)\nd51c7df9fe7c9f2d2f190ec146da14164b985b87a38e726837cda5ed9d197c64\nSat Nov 29 08:27:47 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 (d51c7df9fe7c9f2d2f190ec146da14164b985b87a38e726837cda5ed9d197c64)\nd51c7df9fe7c9f2d2f190ec146da14164b985b87a38e726837cda5ed9d197c64\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:47.670 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[fb01d0fa-4788-4b02-a5ca-85e6eb037c70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:47.671 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5da19f7d-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:47 np0005539552 kernel: tap5da19f7d-30: left promiscuous mode
Nov 29 03:27:47 np0005539552 nova_compute[233724]: 2025-11-29 08:27:47.672 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:47 np0005539552 nova_compute[233724]: 2025-11-29 08:27:47.678 233728 DEBUG nova.virt.libvirt.vif [None req-f94e5d34-2fe1-420e-9245-4b53ad3abf17 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:26:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-419138925',display_name='tempest-ServerStableDeviceRescueTest-server-419138925',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-419138925',id=138,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDT3yKh1OQM/APLMkF69FKCCA5nBzuP29507Q5By6a2JvA70+RYIsFvSoa6+Z7yspJ+R+0ak0hPbMHRp8sVSoGCRfzVOzgUhwEXFYk/q/u+LWAX+bPUz2Gc3J/eOzCbxKw==',key_name='tempest-keypair-1950933442',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:27:05Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a9a83f8d8d7f4d08890407f978c05166',ramdisk_id='',reservation_id='r-c9jn2vfu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-507673154',owner_user_name='tempest-ServerStableDeviceRescueTest-507673154-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:27:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='873186539acb4bf9b90513e0e1beb56f',uuid=50daa6f5-6598-439f-a542-38e8ae7aded0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "91d5abbf-fd67-487f-bfaa-448b1daa5272", "address": "fa:16:3e:71:14:96", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91d5abbf-fd", "ovs_interfaceid": "91d5abbf-fd67-487f-bfaa-448b1daa5272", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:27:47 np0005539552 nova_compute[233724]: 2025-11-29 08:27:47.679 233728 DEBUG nova.network.os_vif_util [None req-f94e5d34-2fe1-420e-9245-4b53ad3abf17 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Converting VIF {"id": "91d5abbf-fd67-487f-bfaa-448b1daa5272", "address": "fa:16:3e:71:14:96", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91d5abbf-fd", "ovs_interfaceid": "91d5abbf-fd67-487f-bfaa-448b1daa5272", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:27:47 np0005539552 nova_compute[233724]: 2025-11-29 08:27:47.680 233728 DEBUG nova.network.os_vif_util [None req-f94e5d34-2fe1-420e-9245-4b53ad3abf17 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:71:14:96,bridge_name='br-int',has_traffic_filtering=True,id=91d5abbf-fd67-487f-bfaa-448b1daa5272,network=Network(5da19f7d-3aa0-41e7-88b0-b9ef17fa4445),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91d5abbf-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:27:47 np0005539552 nova_compute[233724]: 2025-11-29 08:27:47.681 233728 DEBUG os_vif [None req-f94e5d34-2fe1-420e-9245-4b53ad3abf17 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:71:14:96,bridge_name='br-int',has_traffic_filtering=True,id=91d5abbf-fd67-487f-bfaa-448b1daa5272,network=Network(5da19f7d-3aa0-41e7-88b0-b9ef17fa4445),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91d5abbf-fd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:27:47 np0005539552 nova_compute[233724]: 2025-11-29 08:27:47.682 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:47 np0005539552 nova_compute[233724]: 2025-11-29 08:27:47.683 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap91d5abbf-fd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:47 np0005539552 nova_compute[233724]: 2025-11-29 08:27:47.684 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:47 np0005539552 nova_compute[233724]: 2025-11-29 08:27:47.686 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:27:47 np0005539552 nova_compute[233724]: 2025-11-29 08:27:47.693 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:47.695 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2ba49db1-c8f0-4ded-8fef-38c00023c360]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:47 np0005539552 nova_compute[233724]: 2025-11-29 08:27:47.696 233728 INFO os_vif [None req-f94e5d34-2fe1-420e-9245-4b53ad3abf17 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:71:14:96,bridge_name='br-int',has_traffic_filtering=True,id=91d5abbf-fd67-487f-bfaa-448b1daa5272,network=Network(5da19f7d-3aa0-41e7-88b0-b9ef17fa4445),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91d5abbf-fd')#033[00m
Nov 29 03:27:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:47.709 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[772d02a5-31a9-40f1-b157-6dacc6edf8af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:47.711 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f0fe3e4f-52bf-4dbc-a3bb-1b08f739059a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:47.726 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[dcbe4858-1d4d-4cf9-a988-23eb89a7740f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 787575, 'reachable_time': 20101, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294914, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:47 np0005539552 systemd[1]: run-netns-ovnmeta\x2d5da19f7d\x2d3aa0\x2d41e7\x2d88b0\x2db9ef17fa4445.mount: Deactivated successfully.
Nov 29 03:27:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:47.732 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:27:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:47.732 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[41ad6cfa-c99a-4e54-bd79-3ceb0d9705f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:48 np0005539552 nova_compute[233724]: 2025-11-29 08:27:47.999 233728 DEBUG nova.compute.manager [req-0e9c07ad-f1fa-42ae-8f2a-5d50e4982b67 req-a710d970-3f12-4751-a93e-dade4ad56af2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Received event network-vif-unplugged-91d5abbf-fd67-487f-bfaa-448b1daa5272 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:48 np0005539552 nova_compute[233724]: 2025-11-29 08:27:48.000 233728 DEBUG oslo_concurrency.lockutils [req-0e9c07ad-f1fa-42ae-8f2a-5d50e4982b67 req-a710d970-3f12-4751-a93e-dade4ad56af2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "50daa6f5-6598-439f-a542-38e8ae7aded0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:48 np0005539552 nova_compute[233724]: 2025-11-29 08:27:48.000 233728 DEBUG oslo_concurrency.lockutils [req-0e9c07ad-f1fa-42ae-8f2a-5d50e4982b67 req-a710d970-3f12-4751-a93e-dade4ad56af2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "50daa6f5-6598-439f-a542-38e8ae7aded0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:48 np0005539552 nova_compute[233724]: 2025-11-29 08:27:48.000 233728 DEBUG oslo_concurrency.lockutils [req-0e9c07ad-f1fa-42ae-8f2a-5d50e4982b67 req-a710d970-3f12-4751-a93e-dade4ad56af2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "50daa6f5-6598-439f-a542-38e8ae7aded0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:48 np0005539552 nova_compute[233724]: 2025-11-29 08:27:48.000 233728 DEBUG nova.compute.manager [req-0e9c07ad-f1fa-42ae-8f2a-5d50e4982b67 req-a710d970-3f12-4751-a93e-dade4ad56af2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] No waiting events found dispatching network-vif-unplugged-91d5abbf-fd67-487f-bfaa-448b1daa5272 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:27:48 np0005539552 nova_compute[233724]: 2025-11-29 08:27:48.000 233728 DEBUG nova.compute.manager [req-0e9c07ad-f1fa-42ae-8f2a-5d50e4982b67 req-a710d970-3f12-4751-a93e-dade4ad56af2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Received event network-vif-unplugged-91d5abbf-fd67-487f-bfaa-448b1daa5272 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:27:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:48.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:49.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:50.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:50 np0005539552 nova_compute[233724]: 2025-11-29 08:27:50.090 233728 DEBUG nova.compute.manager [req-d1d4a519-383b-4dcd-b455-b0be05e2d8ef req-43fa2cbc-a2f1-4f68-8d25-87a76ecf9ede 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Received event network-vif-plugged-91d5abbf-fd67-487f-bfaa-448b1daa5272 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:50 np0005539552 nova_compute[233724]: 2025-11-29 08:27:50.090 233728 DEBUG oslo_concurrency.lockutils [req-d1d4a519-383b-4dcd-b455-b0be05e2d8ef req-43fa2cbc-a2f1-4f68-8d25-87a76ecf9ede 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "50daa6f5-6598-439f-a542-38e8ae7aded0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:50 np0005539552 nova_compute[233724]: 2025-11-29 08:27:50.091 233728 DEBUG oslo_concurrency.lockutils [req-d1d4a519-383b-4dcd-b455-b0be05e2d8ef req-43fa2cbc-a2f1-4f68-8d25-87a76ecf9ede 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "50daa6f5-6598-439f-a542-38e8ae7aded0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:50 np0005539552 nova_compute[233724]: 2025-11-29 08:27:50.091 233728 DEBUG oslo_concurrency.lockutils [req-d1d4a519-383b-4dcd-b455-b0be05e2d8ef req-43fa2cbc-a2f1-4f68-8d25-87a76ecf9ede 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "50daa6f5-6598-439f-a542-38e8ae7aded0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:50 np0005539552 nova_compute[233724]: 2025-11-29 08:27:50.091 233728 DEBUG nova.compute.manager [req-d1d4a519-383b-4dcd-b455-b0be05e2d8ef req-43fa2cbc-a2f1-4f68-8d25-87a76ecf9ede 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] No waiting events found dispatching network-vif-plugged-91d5abbf-fd67-487f-bfaa-448b1daa5272 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:27:50 np0005539552 nova_compute[233724]: 2025-11-29 08:27:50.091 233728 WARNING nova.compute.manager [req-d1d4a519-383b-4dcd-b455-b0be05e2d8ef req-43fa2cbc-a2f1-4f68-8d25-87a76ecf9ede 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Received unexpected event network-vif-plugged-91d5abbf-fd67-487f-bfaa-448b1daa5272 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:27:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:51.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:51 np0005539552 nova_compute[233724]: 2025-11-29 08:27:51.603 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:27:51.707 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:51 np0005539552 nova_compute[233724]: 2025-11-29 08:27:51.990 233728 INFO nova.virt.libvirt.driver [None req-f94e5d34-2fe1-420e-9245-4b53ad3abf17 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Deleting instance files /var/lib/nova/instances/50daa6f5-6598-439f-a542-38e8ae7aded0_del#033[00m
Nov 29 03:27:51 np0005539552 nova_compute[233724]: 2025-11-29 08:27:51.991 233728 INFO nova.virt.libvirt.driver [None req-f94e5d34-2fe1-420e-9245-4b53ad3abf17 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Deletion of /var/lib/nova/instances/50daa6f5-6598-439f-a542-38e8ae7aded0_del complete#033[00m
Nov 29 03:27:52 np0005539552 nova_compute[233724]: 2025-11-29 08:27:52.043 233728 INFO nova.compute.manager [None req-f94e5d34-2fe1-420e-9245-4b53ad3abf17 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Took 4.76 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:27:52 np0005539552 nova_compute[233724]: 2025-11-29 08:27:52.043 233728 DEBUG oslo.service.loopingcall [None req-f94e5d34-2fe1-420e-9245-4b53ad3abf17 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:27:52 np0005539552 nova_compute[233724]: 2025-11-29 08:27:52.044 233728 DEBUG nova.compute.manager [-] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:27:52 np0005539552 nova_compute[233724]: 2025-11-29 08:27:52.044 233728 DEBUG nova.network.neutron [-] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:27:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:52.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e343 e343: 3 total, 3 up, 3 in
Nov 29 03:27:52 np0005539552 nova_compute[233724]: 2025-11-29 08:27:52.685 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:52 np0005539552 ovn_controller[133798]: 2025-11-29T08:27:52Z|00084|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ba:8a:8e 10.100.0.10
Nov 29 03:27:52 np0005539552 ovn_controller[133798]: 2025-11-29T08:27:52Z|00085|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ba:8a:8e 10.100.0.10
Nov 29 03:27:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:53.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:53 np0005539552 nova_compute[233724]: 2025-11-29 08:27:53.685 233728 DEBUG nova.compute.manager [req-5bc8a7b1-585b-4861-8344-ef5f22283683 req-c09f91c4-fba6-48e6-b0a2-55eae3395d2f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Received event network-vif-deleted-91d5abbf-fd67-487f-bfaa-448b1daa5272 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:53 np0005539552 nova_compute[233724]: 2025-11-29 08:27:53.686 233728 INFO nova.compute.manager [req-5bc8a7b1-585b-4861-8344-ef5f22283683 req-c09f91c4-fba6-48e6-b0a2-55eae3395d2f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Neutron deleted interface 91d5abbf-fd67-487f-bfaa-448b1daa5272; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 03:27:53 np0005539552 nova_compute[233724]: 2025-11-29 08:27:53.686 233728 DEBUG nova.network.neutron [req-5bc8a7b1-585b-4861-8344-ef5f22283683 req-c09f91c4-fba6-48e6-b0a2-55eae3395d2f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:27:53 np0005539552 nova_compute[233724]: 2025-11-29 08:27:53.688 233728 DEBUG nova.network.neutron [-] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:27:53 np0005539552 nova_compute[233724]: 2025-11-29 08:27:53.744 233728 DEBUG nova.compute.manager [req-5bc8a7b1-585b-4861-8344-ef5f22283683 req-c09f91c4-fba6-48e6-b0a2-55eae3395d2f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Detach interface failed, port_id=91d5abbf-fd67-487f-bfaa-448b1daa5272, reason: Instance 50daa6f5-6598-439f-a542-38e8ae7aded0 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 29 03:27:53 np0005539552 nova_compute[233724]: 2025-11-29 08:27:53.751 233728 INFO nova.compute.manager [-] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Took 1.71 seconds to deallocate network for instance.#033[00m
Nov 29 03:27:53 np0005539552 nova_compute[233724]: 2025-11-29 08:27:53.891 233728 DEBUG oslo_concurrency.lockutils [None req-f94e5d34-2fe1-420e-9245-4b53ad3abf17 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:53 np0005539552 nova_compute[233724]: 2025-11-29 08:27:53.892 233728 DEBUG oslo_concurrency.lockutils [None req-f94e5d34-2fe1-420e-9245-4b53ad3abf17 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:54 np0005539552 nova_compute[233724]: 2025-11-29 08:27:54.007 233728 DEBUG oslo_concurrency.processutils [None req-f94e5d34-2fe1-420e-9245-4b53ad3abf17 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:54.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:54 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:27:54 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3501959972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:27:54 np0005539552 nova_compute[233724]: 2025-11-29 08:27:54.466 233728 DEBUG oslo_concurrency.processutils [None req-f94e5d34-2fe1-420e-9245-4b53ad3abf17 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:27:54 np0005539552 nova_compute[233724]: 2025-11-29 08:27:54.472 233728 DEBUG nova.compute.provider_tree [None req-f94e5d34-2fe1-420e-9245-4b53ad3abf17 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:27:54 np0005539552 nova_compute[233724]: 2025-11-29 08:27:54.493 233728 DEBUG nova.scheduler.client.report [None req-f94e5d34-2fe1-420e-9245-4b53ad3abf17 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:27:54 np0005539552 nova_compute[233724]: 2025-11-29 08:27:54.521 233728 DEBUG oslo_concurrency.lockutils [None req-f94e5d34-2fe1-420e-9245-4b53ad3abf17 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:27:54 np0005539552 nova_compute[233724]: 2025-11-29 08:27:54.543 233728 INFO nova.scheduler.client.report [None req-f94e5d34-2fe1-420e-9245-4b53ad3abf17 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Deleted allocations for instance 50daa6f5-6598-439f-a542-38e8ae7aded0
Nov 29 03:27:54 np0005539552 nova_compute[233724]: 2025-11-29 08:27:54.599 233728 DEBUG oslo_concurrency.lockutils [None req-f94e5d34-2fe1-420e-9245-4b53ad3abf17 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "50daa6f5-6598-439f-a542-38e8ae7aded0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.317s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:27:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:55.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:27:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:56.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:27:56 np0005539552 nova_compute[233724]: 2025-11-29 08:27:56.606 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:27:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:57.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:57 np0005539552 nova_compute[233724]: 2025-11-29 08:27:57.688 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:27:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:27:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:58.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:27:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:59 np0005539552 nova_compute[233724]: 2025-11-29 08:27:59.033 233728 DEBUG oslo_concurrency.lockutils [None req-61a833e6-fcec-4a89-996b-24504ee9927c 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Acquiring lock "d6363749-92b8-41e7-860c-63dc695390e4" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:27:59 np0005539552 nova_compute[233724]: 2025-11-29 08:27:59.033 233728 DEBUG oslo_concurrency.lockutils [None req-61a833e6-fcec-4a89-996b-24504ee9927c 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "d6363749-92b8-41e7-860c-63dc695390e4" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:27:59 np0005539552 nova_compute[233724]: 2025-11-29 08:27:59.055 233728 DEBUG nova.objects.instance [None req-61a833e6-fcec-4a89-996b-24504ee9927c 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lazy-loading 'flavor' on Instance uuid d6363749-92b8-41e7-860c-63dc695390e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:27:59 np0005539552 nova_compute[233724]: 2025-11-29 08:27:59.095 233728 DEBUG oslo_concurrency.lockutils [None req-61a833e6-fcec-4a89-996b-24504ee9927c 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "d6363749-92b8-41e7-860c-63dc695390e4" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.061s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:27:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:27:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:59.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:59 np0005539552 nova_compute[233724]: 2025-11-29 08:27:59.414 233728 DEBUG oslo_concurrency.lockutils [None req-61a833e6-fcec-4a89-996b-24504ee9927c 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Acquiring lock "d6363749-92b8-41e7-860c-63dc695390e4" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:27:59 np0005539552 nova_compute[233724]: 2025-11-29 08:27:59.415 233728 DEBUG oslo_concurrency.lockutils [None req-61a833e6-fcec-4a89-996b-24504ee9927c 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "d6363749-92b8-41e7-860c-63dc695390e4" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:27:59 np0005539552 nova_compute[233724]: 2025-11-29 08:27:59.416 233728 INFO nova.compute.manager [None req-61a833e6-fcec-4a89-996b-24504ee9927c 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Attaching volume 213d91c6-0ee8-47c3-965c-92c80077e9ee to /dev/vdb
Nov 29 03:27:59 np0005539552 nova_compute[233724]: 2025-11-29 08:27:59.555 233728 DEBUG os_brick.utils [None req-61a833e6-fcec-4a89-996b-24504ee9927c 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 03:27:59 np0005539552 nova_compute[233724]: 2025-11-29 08:27:59.558 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:27:59 np0005539552 nova_compute[233724]: 2025-11-29 08:27:59.572 243418 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:27:59 np0005539552 nova_compute[233724]: 2025-11-29 08:27:59.572 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[c1249e3d-45f0-47a2-97e7-fbd3c30cbd92]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:27:59 np0005539552 nova_compute[233724]: 2025-11-29 08:27:59.575 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:27:59 np0005539552 nova_compute[233724]: 2025-11-29 08:27:59.587 243418 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:27:59 np0005539552 nova_compute[233724]: 2025-11-29 08:27:59.587 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[8858712f-a118-4ba7-9679-b90230325c03]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0e73df9328', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:27:59 np0005539552 nova_compute[233724]: 2025-11-29 08:27:59.589 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:27:59 np0005539552 nova_compute[233724]: 2025-11-29 08:27:59.598 243418 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:27:59 np0005539552 nova_compute[233724]: 2025-11-29 08:27:59.599 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[2cf4dc2c-aaa9-42aa-965e-94bc0d21cab8]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:27:59 np0005539552 nova_compute[233724]: 2025-11-29 08:27:59.601 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[0fc9a495-d0dd-43e6-8e18-a32e88899809]: (4, '6fbde64a-d978-4f1a-a29d-a77e1f5a1987') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:27:59 np0005539552 nova_compute[233724]: 2025-11-29 08:27:59.601 233728 DEBUG oslo_concurrency.processutils [None req-61a833e6-fcec-4a89-996b-24504ee9927c 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:27:59 np0005539552 nova_compute[233724]: 2025-11-29 08:27:59.633 233728 DEBUG oslo_concurrency.processutils [None req-61a833e6-fcec-4a89-996b-24504ee9927c 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CMD "nvme version" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:27:59 np0005539552 nova_compute[233724]: 2025-11-29 08:27:59.635 233728 DEBUG os_brick.initiator.connectors.lightos [None req-61a833e6-fcec-4a89-996b-24504ee9927c 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 03:27:59 np0005539552 nova_compute[233724]: 2025-11-29 08:27:59.635 233728 DEBUG os_brick.initiator.connectors.lightos [None req-61a833e6-fcec-4a89-996b-24504ee9927c 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 03:27:59 np0005539552 nova_compute[233724]: 2025-11-29 08:27:59.636 233728 DEBUG os_brick.initiator.connectors.lightos [None req-61a833e6-fcec-4a89-996b-24504ee9927c 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 03:27:59 np0005539552 nova_compute[233724]: 2025-11-29 08:27:59.636 233728 DEBUG os_brick.utils [None req-61a833e6-fcec-4a89-996b-24504ee9927c 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] <== get_connector_properties: return (79ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0e73df9328', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '6fbde64a-d978-4f1a-a29d-a77e1f5a1987', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 03:27:59 np0005539552 nova_compute[233724]: 2025-11-29 08:27:59.636 233728 DEBUG nova.virt.block_device [None req-61a833e6-fcec-4a89-996b-24504ee9927c 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Updating existing volume attachment record: 8ad5f0ac-9742-4cba-86de-0a554ab0d172 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 03:28:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:00.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:00 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:28:00 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/923280330' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:28:00 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #103. Immutable memtables: 0.
Nov 29 03:28:00 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:28:00.466232) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:28:00 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 103
Nov 29 03:28:00 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404880466336, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 2495, "num_deletes": 255, "total_data_size": 5674041, "memory_usage": 5753360, "flush_reason": "Manual Compaction"}
Nov 29 03:28:00 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #104: started
Nov 29 03:28:00 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404880494996, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 104, "file_size": 3687227, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 51925, "largest_seqno": 54415, "table_properties": {"data_size": 3676999, "index_size": 6466, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22074, "raw_average_key_size": 20, "raw_value_size": 3656249, "raw_average_value_size": 3475, "num_data_blocks": 279, "num_entries": 1052, "num_filter_entries": 1052, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404695, "oldest_key_time": 1764404695, "file_creation_time": 1764404880, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:28:00 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 28826 microseconds, and 15455 cpu microseconds.
Nov 29 03:28:00 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:28:00 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:28:00.495056) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #104: 3687227 bytes OK
Nov 29 03:28:00 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:28:00.495081) [db/memtable_list.cc:519] [default] Level-0 commit table #104 started
Nov 29 03:28:00 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:28:00.497355) [db/memtable_list.cc:722] [default] Level-0 commit table #104: memtable #1 done
Nov 29 03:28:00 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:28:00.497368) EVENT_LOG_v1 {"time_micros": 1764404880497364, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:28:00 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:28:00.497383) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:28:00 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 5662897, prev total WAL file size 5662897, number of live WAL files 2.
Nov 29 03:28:00 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000100.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:28:00 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:28:00.498906) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Nov 29 03:28:00 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:28:00 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [104(3600KB)], [102(10MB)]
Nov 29 03:28:00 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404880498971, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [104], "files_L6": [102], "score": -1, "input_data_size": 15009810, "oldest_snapshot_seqno": -1}
Nov 29 03:28:00 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #105: 8705 keys, 13127134 bytes, temperature: kUnknown
Nov 29 03:28:00 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404880618813, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 105, "file_size": 13127134, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13068549, "index_size": 35689, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21829, "raw_key_size": 225658, "raw_average_key_size": 25, "raw_value_size": 12912952, "raw_average_value_size": 1483, "num_data_blocks": 1397, "num_entries": 8705, "num_filter_entries": 8705, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764404880, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 105, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:28:00 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:28:00 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:28:00.619092) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 13127134 bytes
Nov 29 03:28:00 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:28:00.621388) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 125.2 rd, 109.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 10.8 +0.0 blob) out(12.5 +0.0 blob), read-write-amplify(7.6) write-amplify(3.6) OK, records in: 9233, records dropped: 528 output_compression: NoCompression
Nov 29 03:28:00 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:28:00.621404) EVENT_LOG_v1 {"time_micros": 1764404880621397, "job": 64, "event": "compaction_finished", "compaction_time_micros": 119898, "compaction_time_cpu_micros": 56753, "output_level": 6, "num_output_files": 1, "total_output_size": 13127134, "num_input_records": 9233, "num_output_records": 8705, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:28:00 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:28:00 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404880622125, "job": 64, "event": "table_file_deletion", "file_number": 104}
Nov 29 03:28:00 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000102.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:28:00 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404880624182, "job": 64, "event": "table_file_deletion", "file_number": 102}
Nov 29 03:28:00 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:28:00.498750) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:28:00 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:28:00.624259) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:28:00 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:28:00.624357) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:28:00 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:28:00.624361) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:28:00 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:28:00.624363) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:28:00 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:28:00.624365) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:28:00 np0005539552 nova_compute[233724]: 2025-11-29 08:28:00.630 233728 DEBUG nova.objects.instance [None req-61a833e6-fcec-4a89-996b-24504ee9927c 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lazy-loading 'flavor' on Instance uuid d6363749-92b8-41e7-860c-63dc695390e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:28:00 np0005539552 nova_compute[233724]: 2025-11-29 08:28:00.655 233728 DEBUG nova.virt.libvirt.driver [None req-61a833e6-fcec-4a89-996b-24504ee9927c 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Attempting to attach volume 213d91c6-0ee8-47c3-965c-92c80077e9ee with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 29 03:28:00 np0005539552 nova_compute[233724]: 2025-11-29 08:28:00.657 233728 DEBUG nova.virt.libvirt.guest [None req-61a833e6-fcec-4a89-996b-24504ee9927c 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 03:28:00 np0005539552 nova_compute[233724]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:28:00 np0005539552 nova_compute[233724]:  <source protocol="rbd" name="volumes/volume-213d91c6-0ee8-47c3-965c-92c80077e9ee">
Nov 29 03:28:00 np0005539552 nova_compute[233724]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:28:00 np0005539552 nova_compute[233724]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:28:00 np0005539552 nova_compute[233724]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:28:00 np0005539552 nova_compute[233724]:  </source>
Nov 29 03:28:00 np0005539552 nova_compute[233724]:  <auth username="openstack">
Nov 29 03:28:00 np0005539552 nova_compute[233724]:    <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:28:00 np0005539552 nova_compute[233724]:  </auth>
Nov 29 03:28:00 np0005539552 nova_compute[233724]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:28:00 np0005539552 nova_compute[233724]:  <serial>213d91c6-0ee8-47c3-965c-92c80077e9ee</serial>
Nov 29 03:28:00 np0005539552 nova_compute[233724]: </disk>
Nov 29 03:28:00 np0005539552 nova_compute[233724]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 29 03:28:00 np0005539552 nova_compute[233724]: 2025-11-29 08:28:00.766 233728 DEBUG nova.virt.libvirt.driver [None req-61a833e6-fcec-4a89-996b-24504ee9927c 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:28:00 np0005539552 nova_compute[233724]: 2025-11-29 08:28:00.766 233728 DEBUG nova.virt.libvirt.driver [None req-61a833e6-fcec-4a89-996b-24504ee9927c 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:28:00 np0005539552 nova_compute[233724]: 2025-11-29 08:28:00.766 233728 DEBUG nova.virt.libvirt.driver [None req-61a833e6-fcec-4a89-996b-24504ee9927c 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:28:00 np0005539552 nova_compute[233724]: 2025-11-29 08:28:00.767 233728 DEBUG nova.virt.libvirt.driver [None req-61a833e6-fcec-4a89-996b-24504ee9927c 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] No VIF found with MAC fa:16:3e:ba:8a:8e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 03:28:00 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:28:00 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3342655949' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:28:00 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:28:00 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3342655949' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:28:00 np0005539552 nova_compute[233724]: 2025-11-29 08:28:00.926 233728 DEBUG oslo_concurrency.lockutils [None req-61a833e6-fcec-4a89-996b-24504ee9927c 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "d6363749-92b8-41e7-860c-63dc695390e4" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.510s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:28:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:01.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:28:01 np0005539552 nova_compute[233724]: 2025-11-29 08:28:01.609 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:02.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:02 np0005539552 nova_compute[233724]: 2025-11-29 08:28:02.166 233728 INFO nova.compute.manager [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Rescuing#033[00m
Nov 29 03:28:02 np0005539552 nova_compute[233724]: 2025-11-29 08:28:02.166 233728 DEBUG oslo_concurrency.lockutils [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Acquiring lock "refresh_cache-d6363749-92b8-41e7-860c-63dc695390e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:28:02 np0005539552 nova_compute[233724]: 2025-11-29 08:28:02.167 233728 DEBUG oslo_concurrency.lockutils [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Acquired lock "refresh_cache-d6363749-92b8-41e7-860c-63dc695390e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:28:02 np0005539552 nova_compute[233724]: 2025-11-29 08:28:02.167 233728 DEBUG nova.network.neutron [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:28:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e344 e344: 3 total, 3 up, 3 in
Nov 29 03:28:02 np0005539552 nova_compute[233724]: 2025-11-29 08:28:02.527 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404867.5261242, 50daa6f5-6598-439f-a542-38e8ae7aded0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:28:02 np0005539552 nova_compute[233724]: 2025-11-29 08:28:02.528 233728 INFO nova.compute.manager [-] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:28:02 np0005539552 nova_compute[233724]: 2025-11-29 08:28:02.555 233728 DEBUG nova.compute.manager [None req-5877dcb7-6b8e-41ac-86f0-efc0032267ea - - - - - -] [instance: 50daa6f5-6598-439f-a542-38e8ae7aded0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:28:02 np0005539552 nova_compute[233724]: 2025-11-29 08:28:02.689 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e344 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:03.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:03 np0005539552 nova_compute[233724]: 2025-11-29 08:28:03.932 233728 DEBUG nova.network.neutron [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Updating instance_info_cache with network_info: [{"id": "a6621efc-2904-4858-9a98-9c441a64d2ff", "address": "fa:16:3e:ba:8a:8e", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6621efc-29", "ovs_interfaceid": "a6621efc-2904-4858-9a98-9c441a64d2ff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:28:03 np0005539552 nova_compute[233724]: 2025-11-29 08:28:03.951 233728 DEBUG oslo_concurrency.lockutils [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Releasing lock "refresh_cache-d6363749-92b8-41e7-860c-63dc695390e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:28:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:04.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:04 np0005539552 nova_compute[233724]: 2025-11-29 08:28:04.212 233728 DEBUG nova.virt.libvirt.driver [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:28:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:05.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:06.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:06 np0005539552 kernel: tapa6621efc-29 (unregistering): left promiscuous mode
Nov 29 03:28:06 np0005539552 NetworkManager[48926]: <info>  [1764404886.5206] device (tapa6621efc-29): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:28:06 np0005539552 nova_compute[233724]: 2025-11-29 08:28:06.529 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:06 np0005539552 ovn_controller[133798]: 2025-11-29T08:28:06Z|00660|binding|INFO|Releasing lport a6621efc-2904-4858-9a98-9c441a64d2ff from this chassis (sb_readonly=0)
Nov 29 03:28:06 np0005539552 ovn_controller[133798]: 2025-11-29T08:28:06Z|00661|binding|INFO|Setting lport a6621efc-2904-4858-9a98-9c441a64d2ff down in Southbound
Nov 29 03:28:06 np0005539552 ovn_controller[133798]: 2025-11-29T08:28:06Z|00662|binding|INFO|Removing iface tapa6621efc-29 ovn-installed in OVS
Nov 29 03:28:06 np0005539552 nova_compute[233724]: 2025-11-29 08:28:06.533 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:06.539 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:8a:8e 10.100.0.10'], port_security=['fa:16:3e:ba:8a:8e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'd6363749-92b8-41e7-860c-63dc695390e4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea7b24ea9d7b4d239b4741634ac3f10c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6d3fe511-bc37-4fc4-9176-e8e88cafdead', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.223'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2992474f-d5bf-4893-b4cc-2c774a8a9871, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=a6621efc-2904-4858-9a98-9c441a64d2ff) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:28:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:06.541 143400 INFO neutron.agent.ovn.metadata.agent [-] Port a6621efc-2904-4858-9a98-9c441a64d2ff in datapath 4ca67fce-6116-4a0b-b0a9-c25b5adaad19 unbound from our chassis#033[00m
Nov 29 03:28:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:06.542 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4ca67fce-6116-4a0b-b0a9-c25b5adaad19#033[00m
Nov 29 03:28:06 np0005539552 nova_compute[233724]: 2025-11-29 08:28:06.556 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:06.560 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[41cc3df5-d748-4c93-be48-c793fb1847b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:06.598 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[efe49845-203f-4094-8431-24a742969830]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:06 np0005539552 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d00000090.scope: Deactivated successfully.
Nov 29 03:28:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:06.601 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[23f36f42-5a7c-4962-a537-f6d6f931edc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:06 np0005539552 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d00000090.scope: Consumed 14.863s CPU time.
Nov 29 03:28:06 np0005539552 systemd-machined[196379]: Machine qemu-66-instance-00000090 terminated.
Nov 29 03:28:06 np0005539552 nova_compute[233724]: 2025-11-29 08:28:06.610 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:06.627 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[60103d82-8541-41f9-a30b-a432b42c5e44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:06.645 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[68a70a58-a28a-4f82-b7ca-b6d3f98bfe48]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4ca67fce-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:3c:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 189], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 785715, 'reachable_time': 15430, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295041, 'error': None, 'target': 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:06.661 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a5d80efa-98da-44de-88bb-18ade361b8c6]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4ca67fce-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 785728, 'tstamp': 785728}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295042, 'error': None, 'target': 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4ca67fce-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 785731, 'tstamp': 785731}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295042, 'error': None, 'target': 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:06.663 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ca67fce-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:06 np0005539552 nova_compute[233724]: 2025-11-29 08:28:06.664 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:06 np0005539552 nova_compute[233724]: 2025-11-29 08:28:06.669 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:06.669 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4ca67fce-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:06.669 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:28:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:06.669 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4ca67fce-60, col_values=(('external_ids', {'iface-id': '6f99f0ed-ee75-45c3-abe1-1afc889fd227'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:06.670 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:28:06 np0005539552 nova_compute[233724]: 2025-11-29 08:28:06.746 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:06 np0005539552 nova_compute[233724]: 2025-11-29 08:28:06.752 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:06 np0005539552 nova_compute[233724]: 2025-11-29 08:28:06.814 233728 DEBUG nova.compute.manager [req-b0c3b81a-8a07-468d-94da-bc347165ea67 req-56b6be80-f7e1-4d1e-93e9-9d9dc9bb863a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Received event network-vif-unplugged-a6621efc-2904-4858-9a98-9c441a64d2ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:28:06 np0005539552 nova_compute[233724]: 2025-11-29 08:28:06.815 233728 DEBUG oslo_concurrency.lockutils [req-b0c3b81a-8a07-468d-94da-bc347165ea67 req-56b6be80-f7e1-4d1e-93e9-9d9dc9bb863a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d6363749-92b8-41e7-860c-63dc695390e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:06 np0005539552 nova_compute[233724]: 2025-11-29 08:28:06.815 233728 DEBUG oslo_concurrency.lockutils [req-b0c3b81a-8a07-468d-94da-bc347165ea67 req-56b6be80-f7e1-4d1e-93e9-9d9dc9bb863a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d6363749-92b8-41e7-860c-63dc695390e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:06 np0005539552 nova_compute[233724]: 2025-11-29 08:28:06.815 233728 DEBUG oslo_concurrency.lockutils [req-b0c3b81a-8a07-468d-94da-bc347165ea67 req-56b6be80-f7e1-4d1e-93e9-9d9dc9bb863a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d6363749-92b8-41e7-860c-63dc695390e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:06 np0005539552 nova_compute[233724]: 2025-11-29 08:28:06.815 233728 DEBUG nova.compute.manager [req-b0c3b81a-8a07-468d-94da-bc347165ea67 req-56b6be80-f7e1-4d1e-93e9-9d9dc9bb863a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] No waiting events found dispatching network-vif-unplugged-a6621efc-2904-4858-9a98-9c441a64d2ff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:28:06 np0005539552 nova_compute[233724]: 2025-11-29 08:28:06.816 233728 WARNING nova.compute.manager [req-b0c3b81a-8a07-468d-94da-bc347165ea67 req-56b6be80-f7e1-4d1e-93e9-9d9dc9bb863a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Received unexpected event network-vif-unplugged-a6621efc-2904-4858-9a98-9c441a64d2ff for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:28:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:07.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:07 np0005539552 nova_compute[233724]: 2025-11-29 08:28:07.230 233728 INFO nova.virt.libvirt.driver [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Instance shutdown successfully after 3 seconds.#033[00m
Nov 29 03:28:07 np0005539552 nova_compute[233724]: 2025-11-29 08:28:07.235 233728 INFO nova.virt.libvirt.driver [-] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Instance destroyed successfully.#033[00m
Nov 29 03:28:07 np0005539552 nova_compute[233724]: 2025-11-29 08:28:07.235 233728 DEBUG nova.objects.instance [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lazy-loading 'numa_topology' on Instance uuid d6363749-92b8-41e7-860c-63dc695390e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:28:07 np0005539552 nova_compute[233724]: 2025-11-29 08:28:07.255 233728 INFO nova.virt.libvirt.driver [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Attempting rescue#033[00m
Nov 29 03:28:07 np0005539552 nova_compute[233724]: 2025-11-29 08:28:07.256 233728 DEBUG nova.virt.libvirt.driver [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Nov 29 03:28:07 np0005539552 nova_compute[233724]: 2025-11-29 08:28:07.259 233728 DEBUG nova.virt.libvirt.driver [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 29 03:28:07 np0005539552 nova_compute[233724]: 2025-11-29 08:28:07.259 233728 INFO nova.virt.libvirt.driver [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Creating image(s)#033[00m
Nov 29 03:28:07 np0005539552 nova_compute[233724]: 2025-11-29 08:28:07.278 233728 DEBUG nova.storage.rbd_utils [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] rbd image d6363749-92b8-41e7-860c-63dc695390e4_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:28:07 np0005539552 nova_compute[233724]: 2025-11-29 08:28:07.282 233728 DEBUG nova.objects.instance [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lazy-loading 'trusted_certs' on Instance uuid d6363749-92b8-41e7-860c-63dc695390e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:28:07 np0005539552 nova_compute[233724]: 2025-11-29 08:28:07.314 233728 DEBUG nova.storage.rbd_utils [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] rbd image d6363749-92b8-41e7-860c-63dc695390e4_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:28:07 np0005539552 nova_compute[233724]: 2025-11-29 08:28:07.336 233728 DEBUG nova.storage.rbd_utils [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] rbd image d6363749-92b8-41e7-860c-63dc695390e4_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:28:07 np0005539552 nova_compute[233724]: 2025-11-29 08:28:07.339 233728 DEBUG oslo_concurrency.processutils [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:07 np0005539552 nova_compute[233724]: 2025-11-29 08:28:07.403 233728 DEBUG oslo_concurrency.processutils [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:07 np0005539552 nova_compute[233724]: 2025-11-29 08:28:07.404 233728 DEBUG oslo_concurrency.lockutils [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:07 np0005539552 nova_compute[233724]: 2025-11-29 08:28:07.405 233728 DEBUG oslo_concurrency.lockutils [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:07 np0005539552 nova_compute[233724]: 2025-11-29 08:28:07.405 233728 DEBUG oslo_concurrency.lockutils [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:07 np0005539552 nova_compute[233724]: 2025-11-29 08:28:07.427 233728 DEBUG nova.storage.rbd_utils [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] rbd image d6363749-92b8-41e7-860c-63dc695390e4_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:28:07 np0005539552 nova_compute[233724]: 2025-11-29 08:28:07.431 233728 DEBUG oslo_concurrency.processutils [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 d6363749-92b8-41e7-860c-63dc695390e4_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:07 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e345 e345: 3 total, 3 up, 3 in
Nov 29 03:28:07 np0005539552 nova_compute[233724]: 2025-11-29 08:28:07.691 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:08.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:28:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:09.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:28:09 np0005539552 nova_compute[233724]: 2025-11-29 08:28:09.638 233728 DEBUG nova.compute.manager [req-6ff54fc6-6f8f-4506-bb64-3d03b02a9b49 req-40661b3a-23f9-4141-929d-7a4ca0cafc30 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Received event network-vif-plugged-a6621efc-2904-4858-9a98-9c441a64d2ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:28:09 np0005539552 nova_compute[233724]: 2025-11-29 08:28:09.640 233728 DEBUG oslo_concurrency.lockutils [req-6ff54fc6-6f8f-4506-bb64-3d03b02a9b49 req-40661b3a-23f9-4141-929d-7a4ca0cafc30 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d6363749-92b8-41e7-860c-63dc695390e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:09 np0005539552 nova_compute[233724]: 2025-11-29 08:28:09.641 233728 DEBUG oslo_concurrency.lockutils [req-6ff54fc6-6f8f-4506-bb64-3d03b02a9b49 req-40661b3a-23f9-4141-929d-7a4ca0cafc30 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d6363749-92b8-41e7-860c-63dc695390e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:09 np0005539552 nova_compute[233724]: 2025-11-29 08:28:09.641 233728 DEBUG oslo_concurrency.lockutils [req-6ff54fc6-6f8f-4506-bb64-3d03b02a9b49 req-40661b3a-23f9-4141-929d-7a4ca0cafc30 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d6363749-92b8-41e7-860c-63dc695390e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:09 np0005539552 nova_compute[233724]: 2025-11-29 08:28:09.641 233728 DEBUG nova.compute.manager [req-6ff54fc6-6f8f-4506-bb64-3d03b02a9b49 req-40661b3a-23f9-4141-929d-7a4ca0cafc30 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] No waiting events found dispatching network-vif-plugged-a6621efc-2904-4858-9a98-9c441a64d2ff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:28:09 np0005539552 nova_compute[233724]: 2025-11-29 08:28:09.641 233728 WARNING nova.compute.manager [req-6ff54fc6-6f8f-4506-bb64-3d03b02a9b49 req-40661b3a-23f9-4141-929d-7a4ca0cafc30 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Received unexpected event network-vif-plugged-a6621efc-2904-4858-9a98-9c441a64d2ff for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:28:09 np0005539552 nova_compute[233724]: 2025-11-29 08:28:09.713 233728 DEBUG oslo_concurrency.processutils [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 d6363749-92b8-41e7-860c-63dc695390e4_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.282s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:09 np0005539552 nova_compute[233724]: 2025-11-29 08:28:09.714 233728 DEBUG nova.objects.instance [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lazy-loading 'migration_context' on Instance uuid d6363749-92b8-41e7-860c-63dc695390e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:28:09 np0005539552 nova_compute[233724]: 2025-11-29 08:28:09.735 233728 DEBUG nova.virt.libvirt.driver [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:28:09 np0005539552 nova_compute[233724]: 2025-11-29 08:28:09.736 233728 DEBUG nova.virt.libvirt.driver [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Start _get_guest_xml network_info=[{"id": "a6621efc-2904-4858-9a98-9c441a64d2ff", "address": "fa:16:3e:ba:8a:8e", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "vif_mac": "fa:16:3e:ba:8a:8e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6621efc-29", "ovs_interfaceid": "a6621efc-2904-4858-9a98-9c441a64d2ff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '4873db8c-b414-4e95-acd9-77caabebe722', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:28:09 np0005539552 nova_compute[233724]: 2025-11-29 08:28:09.736 233728 DEBUG nova.objects.instance [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lazy-loading 'resources' on Instance uuid d6363749-92b8-41e7-860c-63dc695390e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:28:09 np0005539552 nova_compute[233724]: 2025-11-29 08:28:09.758 233728 WARNING nova.virt.libvirt.driver [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:28:09 np0005539552 nova_compute[233724]: 2025-11-29 08:28:09.763 233728 DEBUG nova.virt.libvirt.host [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:28:09 np0005539552 nova_compute[233724]: 2025-11-29 08:28:09.764 233728 DEBUG nova.virt.libvirt.host [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:28:09 np0005539552 nova_compute[233724]: 2025-11-29 08:28:09.768 233728 DEBUG nova.virt.libvirt.host [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:28:09 np0005539552 nova_compute[233724]: 2025-11-29 08:28:09.768 233728 DEBUG nova.virt.libvirt.host [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:28:09 np0005539552 nova_compute[233724]: 2025-11-29 08:28:09.769 233728 DEBUG nova.virt.libvirt.driver [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:28:09 np0005539552 nova_compute[233724]: 2025-11-29 08:28:09.770 233728 DEBUG nova.virt.hardware [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:28:09 np0005539552 nova_compute[233724]: 2025-11-29 08:28:09.770 233728 DEBUG nova.virt.hardware [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:28:09 np0005539552 nova_compute[233724]: 2025-11-29 08:28:09.770 233728 DEBUG nova.virt.hardware [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:28:09 np0005539552 nova_compute[233724]: 2025-11-29 08:28:09.771 233728 DEBUG nova.virt.hardware [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:28:09 np0005539552 nova_compute[233724]: 2025-11-29 08:28:09.771 233728 DEBUG nova.virt.hardware [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:28:09 np0005539552 nova_compute[233724]: 2025-11-29 08:28:09.771 233728 DEBUG nova.virt.hardware [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:28:09 np0005539552 nova_compute[233724]: 2025-11-29 08:28:09.771 233728 DEBUG nova.virt.hardware [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:28:09 np0005539552 nova_compute[233724]: 2025-11-29 08:28:09.772 233728 DEBUG nova.virt.hardware [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:28:09 np0005539552 nova_compute[233724]: 2025-11-29 08:28:09.772 233728 DEBUG nova.virt.hardware [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:28:09 np0005539552 nova_compute[233724]: 2025-11-29 08:28:09.772 233728 DEBUG nova.virt.hardware [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:28:09 np0005539552 nova_compute[233724]: 2025-11-29 08:28:09.772 233728 DEBUG nova.virt.hardware [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:28:09 np0005539552 nova_compute[233724]: 2025-11-29 08:28:09.772 233728 DEBUG nova.objects.instance [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lazy-loading 'vcpu_model' on Instance uuid d6363749-92b8-41e7-860c-63dc695390e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:28:09 np0005539552 nova_compute[233724]: 2025-11-29 08:28:09.795 233728 DEBUG oslo_concurrency.processutils [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:10.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:10 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:28:10 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3941839873' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:28:10 np0005539552 nova_compute[233724]: 2025-11-29 08:28:10.244 233728 DEBUG oslo_concurrency.processutils [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:10 np0005539552 nova_compute[233724]: 2025-11-29 08:28:10.246 233728 DEBUG oslo_concurrency.processutils [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:10 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:28:10 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2801500315' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:28:10 np0005539552 nova_compute[233724]: 2025-11-29 08:28:10.765 233728 DEBUG oslo_concurrency.processutils [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:10 np0005539552 nova_compute[233724]: 2025-11-29 08:28:10.766 233728 DEBUG oslo_concurrency.processutils [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:11 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:28:11 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3097167693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:28:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:11.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:11 np0005539552 nova_compute[233724]: 2025-11-29 08:28:11.264 233728 DEBUG oslo_concurrency.processutils [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:11 np0005539552 nova_compute[233724]: 2025-11-29 08:28:11.266 233728 DEBUG nova.virt.libvirt.vif [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:27:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-474867473',display_name='tempest-ServerRescueNegativeTestJSON-server-474867473',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-474867473',id=144,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIKkQ7Zazeo/dx6F4Eu6MN8OAjR4BxckLM+7ouW/olmDfJC62bOHNKRmGAyxOWHpYnYgRnTecW30ZoVhQUqa4XTjBKJkd20WTjX5TvwIkgUKRgDOuqdsmup3NfferXDEOw==',key_name='tempest-keypair-472027593',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:27:39Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ea7b24ea9d7b4d239b4741634ac3f10c',ramdisk_id='',reservation_id='r-knv00qym',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-2045177058',owner_user_name='tempest-ServerRescueNegativeTestJSON-2045177058-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:27:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='283f8136265e4425a5a31f840935b9ab',uuid=d6363749-92b8-41e7-860c-63dc695390e4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a6621efc-2904-4858-9a98-9c441a64d2ff", "address": "fa:16:3e:ba:8a:8e", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "vif_mac": "fa:16:3e:ba:8a:8e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6621efc-29", "ovs_interfaceid": "a6621efc-2904-4858-9a98-9c441a64d2ff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:28:11 np0005539552 nova_compute[233724]: 2025-11-29 08:28:11.267 233728 DEBUG nova.network.os_vif_util [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Converting VIF {"id": "a6621efc-2904-4858-9a98-9c441a64d2ff", "address": "fa:16:3e:ba:8a:8e", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "vif_mac": "fa:16:3e:ba:8a:8e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6621efc-29", "ovs_interfaceid": "a6621efc-2904-4858-9a98-9c441a64d2ff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:28:11 np0005539552 nova_compute[233724]: 2025-11-29 08:28:11.268 233728 DEBUG nova.network.os_vif_util [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ba:8a:8e,bridge_name='br-int',has_traffic_filtering=True,id=a6621efc-2904-4858-9a98-9c441a64d2ff,network=Network(4ca67fce-6116-4a0b-b0a9-c25b5adaad19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6621efc-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:28:11 np0005539552 nova_compute[233724]: 2025-11-29 08:28:11.269 233728 DEBUG nova.objects.instance [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lazy-loading 'pci_devices' on Instance uuid d6363749-92b8-41e7-860c-63dc695390e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:28:11 np0005539552 nova_compute[233724]: 2025-11-29 08:28:11.288 233728 DEBUG nova.virt.libvirt.driver [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:28:11 np0005539552 nova_compute[233724]:  <uuid>d6363749-92b8-41e7-860c-63dc695390e4</uuid>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:  <name>instance-00000090</name>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:28:11 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:      <nova:name>tempest-ServerRescueNegativeTestJSON-server-474867473</nova:name>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:28:09</nova:creationTime>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:28:11 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:        <nova:user uuid="283f8136265e4425a5a31f840935b9ab">tempest-ServerRescueNegativeTestJSON-2045177058-project-member</nova:user>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:        <nova:project uuid="ea7b24ea9d7b4d239b4741634ac3f10c">tempest-ServerRescueNegativeTestJSON-2045177058</nova:project>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:        <nova:port uuid="a6621efc-2904-4858-9a98-9c441a64d2ff">
Nov 29 03:28:11 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:      <entry name="serial">d6363749-92b8-41e7-860c-63dc695390e4</entry>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:      <entry name="uuid">d6363749-92b8-41e7-860c-63dc695390e4</entry>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:28:11 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/d6363749-92b8-41e7-860c-63dc695390e4_disk.rescue">
Nov 29 03:28:11 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:28:11 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:28:11 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/d6363749-92b8-41e7-860c-63dc695390e4_disk">
Nov 29 03:28:11 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:28:11 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:      <target dev="vdb" bus="virtio"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:28:11 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/d6363749-92b8-41e7-860c-63dc695390e4_disk.config.rescue">
Nov 29 03:28:11 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:28:11 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:28:11 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:ba:8a:8e"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:      <target dev="tapa6621efc-29"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:28:11 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/d6363749-92b8-41e7-860c-63dc695390e4/console.log" append="off"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:28:11 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:28:11 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:28:11 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:28:11 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:28:11 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:28:11 np0005539552 nova_compute[233724]: 2025-11-29 08:28:11.296 233728 INFO nova.virt.libvirt.driver [-] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Instance destroyed successfully.#033[00m
Nov 29 03:28:11 np0005539552 nova_compute[233724]: 2025-11-29 08:28:11.377 233728 DEBUG nova.virt.libvirt.driver [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:28:11 np0005539552 nova_compute[233724]: 2025-11-29 08:28:11.377 233728 DEBUG nova.virt.libvirt.driver [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:28:11 np0005539552 nova_compute[233724]: 2025-11-29 08:28:11.378 233728 DEBUG nova.virt.libvirt.driver [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:28:11 np0005539552 nova_compute[233724]: 2025-11-29 08:28:11.378 233728 DEBUG nova.virt.libvirt.driver [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] No VIF found with MAC fa:16:3e:ba:8a:8e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:28:11 np0005539552 nova_compute[233724]: 2025-11-29 08:28:11.379 233728 INFO nova.virt.libvirt.driver [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Using config drive#033[00m
Nov 29 03:28:11 np0005539552 nova_compute[233724]: 2025-11-29 08:28:11.417 233728 DEBUG nova.storage.rbd_utils [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] rbd image d6363749-92b8-41e7-860c-63dc695390e4_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:28:11 np0005539552 nova_compute[233724]: 2025-11-29 08:28:11.443 233728 DEBUG nova.objects.instance [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lazy-loading 'ec2_ids' on Instance uuid d6363749-92b8-41e7-860c-63dc695390e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:28:11 np0005539552 nova_compute[233724]: 2025-11-29 08:28:11.486 233728 DEBUG nova.objects.instance [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lazy-loading 'keypairs' on Instance uuid d6363749-92b8-41e7-860c-63dc695390e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:28:11 np0005539552 nova_compute[233724]: 2025-11-29 08:28:11.611 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:11 np0005539552 ovn_controller[133798]: 2025-11-29T08:28:11Z|00663|binding|INFO|Releasing lport 6f99f0ed-ee75-45c3-abe1-1afc889fd227 from this chassis (sb_readonly=0)
Nov 29 03:28:11 np0005539552 nova_compute[233724]: 2025-11-29 08:28:11.810 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:12.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:12 np0005539552 nova_compute[233724]: 2025-11-29 08:28:12.694 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:13.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:13 np0005539552 nova_compute[233724]: 2025-11-29 08:28:13.341 233728 INFO nova.virt.libvirt.driver [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Creating config drive at /var/lib/nova/instances/d6363749-92b8-41e7-860c-63dc695390e4/disk.config.rescue#033[00m
Nov 29 03:28:13 np0005539552 nova_compute[233724]: 2025-11-29 08:28:13.346 233728 DEBUG oslo_concurrency.processutils [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d6363749-92b8-41e7-860c-63dc695390e4/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmparvau9z4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:13 np0005539552 nova_compute[233724]: 2025-11-29 08:28:13.478 233728 DEBUG oslo_concurrency.processutils [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d6363749-92b8-41e7-860c-63dc695390e4/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmparvau9z4" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:13 np0005539552 nova_compute[233724]: 2025-11-29 08:28:13.514 233728 DEBUG nova.storage.rbd_utils [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] rbd image d6363749-92b8-41e7-860c-63dc695390e4_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:28:13 np0005539552 nova_compute[233724]: 2025-11-29 08:28:13.518 233728 DEBUG oslo_concurrency.processutils [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d6363749-92b8-41e7-860c-63dc695390e4/disk.config.rescue d6363749-92b8-41e7-860c-63dc695390e4_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:13 np0005539552 nova_compute[233724]: 2025-11-29 08:28:13.718 233728 DEBUG oslo_concurrency.processutils [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d6363749-92b8-41e7-860c-63dc695390e4/disk.config.rescue d6363749-92b8-41e7-860c-63dc695390e4_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.200s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:13 np0005539552 nova_compute[233724]: 2025-11-29 08:28:13.720 233728 INFO nova.virt.libvirt.driver [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Deleting local config drive /var/lib/nova/instances/d6363749-92b8-41e7-860c-63dc695390e4/disk.config.rescue because it was imported into RBD.#033[00m
Nov 29 03:28:13 np0005539552 kernel: tapa6621efc-29: entered promiscuous mode
Nov 29 03:28:13 np0005539552 NetworkManager[48926]: <info>  [1764404893.7803] manager: (tapa6621efc-29): new Tun device (/org/freedesktop/NetworkManager/Devices/297)
Nov 29 03:28:13 np0005539552 ovn_controller[133798]: 2025-11-29T08:28:13Z|00664|binding|INFO|Claiming lport a6621efc-2904-4858-9a98-9c441a64d2ff for this chassis.
Nov 29 03:28:13 np0005539552 ovn_controller[133798]: 2025-11-29T08:28:13Z|00665|binding|INFO|a6621efc-2904-4858-9a98-9c441a64d2ff: Claiming fa:16:3e:ba:8a:8e 10.100.0.10
Nov 29 03:28:13 np0005539552 nova_compute[233724]: 2025-11-29 08:28:13.780 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:13 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:13.794 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:8a:8e 10.100.0.10'], port_security=['fa:16:3e:ba:8a:8e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'd6363749-92b8-41e7-860c-63dc695390e4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea7b24ea9d7b4d239b4741634ac3f10c', 'neutron:revision_number': '5', 'neutron:security_group_ids': '6d3fe511-bc37-4fc4-9176-e8e88cafdead', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.223'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2992474f-d5bf-4893-b4cc-2c774a8a9871, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=a6621efc-2904-4858-9a98-9c441a64d2ff) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:28:13 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:13.795 143400 INFO neutron.agent.ovn.metadata.agent [-] Port a6621efc-2904-4858-9a98-9c441a64d2ff in datapath 4ca67fce-6116-4a0b-b0a9-c25b5adaad19 bound to our chassis#033[00m
Nov 29 03:28:13 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:13.796 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4ca67fce-6116-4a0b-b0a9-c25b5adaad19#033[00m
Nov 29 03:28:13 np0005539552 systemd-udevd[295295]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:28:13 np0005539552 NetworkManager[48926]: <info>  [1764404893.8159] device (tapa6621efc-29): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:28:13 np0005539552 systemd-machined[196379]: New machine qemu-67-instance-00000090.
Nov 29 03:28:13 np0005539552 NetworkManager[48926]: <info>  [1764404893.8173] device (tapa6621efc-29): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:28:13 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:13.821 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0c4d3cad-3974-40b7-b7d2-c32dc95fda99]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:13 np0005539552 systemd[1]: Started Virtual Machine qemu-67-instance-00000090.
Nov 29 03:28:13 np0005539552 ovn_controller[133798]: 2025-11-29T08:28:13Z|00666|binding|INFO|Setting lport a6621efc-2904-4858-9a98-9c441a64d2ff ovn-installed in OVS
Nov 29 03:28:13 np0005539552 ovn_controller[133798]: 2025-11-29T08:28:13Z|00667|binding|INFO|Setting lport a6621efc-2904-4858-9a98-9c441a64d2ff up in Southbound
Nov 29 03:28:13 np0005539552 nova_compute[233724]: 2025-11-29 08:28:13.829 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:13 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:13.848 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[a094b73d-5cd7-48e4-b4b2-e55b84687fea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:13 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:13.851 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[7cb1824f-5e81-4375-b1ab-75f66e042caa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:13 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:13.885 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[eb05e83d-59cd-4284-9cc5-cb620d357012]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:13 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:13.905 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4c670928-6623-4010-be62-15a77ceda49a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4ca67fce-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:3c:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 189], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 785715, 'reachable_time': 15430, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295307, 'error': None, 'target': 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:13 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:13.923 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f904ccfc-dd74-4ca8-a68d-c9e9e6132038]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4ca67fce-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 785728, 'tstamp': 785728}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295309, 'error': None, 'target': 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4ca67fce-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 785731, 'tstamp': 785731}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295309, 'error': None, 'target': 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:13 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:13.925 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ca67fce-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:13 np0005539552 nova_compute[233724]: 2025-11-29 08:28:13.927 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:13 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:13.929 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4ca67fce-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:13 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:13.929 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:28:13 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:13.930 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4ca67fce-60, col_values=(('external_ids', {'iface-id': '6f99f0ed-ee75-45c3-abe1-1afc889fd227'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:13 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:13.930 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:28:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:14.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:14 np0005539552 nova_compute[233724]: 2025-11-29 08:28:14.607 233728 DEBUG nova.compute.manager [req-b9abda1a-e6ba-4a9f-a259-128d3b2ecf9c req-2ad3af69-4bfb-4fbf-a858-811090aaf8ca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Received event network-vif-plugged-a6621efc-2904-4858-9a98-9c441a64d2ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:28:14 np0005539552 nova_compute[233724]: 2025-11-29 08:28:14.609 233728 DEBUG oslo_concurrency.lockutils [req-b9abda1a-e6ba-4a9f-a259-128d3b2ecf9c req-2ad3af69-4bfb-4fbf-a858-811090aaf8ca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d6363749-92b8-41e7-860c-63dc695390e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:14 np0005539552 nova_compute[233724]: 2025-11-29 08:28:14.610 233728 DEBUG oslo_concurrency.lockutils [req-b9abda1a-e6ba-4a9f-a259-128d3b2ecf9c req-2ad3af69-4bfb-4fbf-a858-811090aaf8ca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d6363749-92b8-41e7-860c-63dc695390e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:14 np0005539552 nova_compute[233724]: 2025-11-29 08:28:14.611 233728 DEBUG oslo_concurrency.lockutils [req-b9abda1a-e6ba-4a9f-a259-128d3b2ecf9c req-2ad3af69-4bfb-4fbf-a858-811090aaf8ca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d6363749-92b8-41e7-860c-63dc695390e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:14 np0005539552 nova_compute[233724]: 2025-11-29 08:28:14.611 233728 DEBUG nova.compute.manager [req-b9abda1a-e6ba-4a9f-a259-128d3b2ecf9c req-2ad3af69-4bfb-4fbf-a858-811090aaf8ca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] No waiting events found dispatching network-vif-plugged-a6621efc-2904-4858-9a98-9c441a64d2ff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:28:14 np0005539552 nova_compute[233724]: 2025-11-29 08:28:14.612 233728 WARNING nova.compute.manager [req-b9abda1a-e6ba-4a9f-a259-128d3b2ecf9c req-2ad3af69-4bfb-4fbf-a858-811090aaf8ca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Received unexpected event network-vif-plugged-a6621efc-2904-4858-9a98-9c441a64d2ff for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:28:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:15.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:16.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:16 np0005539552 nova_compute[233724]: 2025-11-29 08:28:16.613 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:16 np0005539552 nova_compute[233724]: 2025-11-29 08:28:16.737 233728 DEBUG nova.compute.manager [req-bb0716f6-dd35-4511-afa3-4cee5f7a2d50 req-c40ff07c-f390-48c5-8d3b-933f1d305815 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Received event network-vif-plugged-a6621efc-2904-4858-9a98-9c441a64d2ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:28:16 np0005539552 nova_compute[233724]: 2025-11-29 08:28:16.737 233728 DEBUG oslo_concurrency.lockutils [req-bb0716f6-dd35-4511-afa3-4cee5f7a2d50 req-c40ff07c-f390-48c5-8d3b-933f1d305815 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d6363749-92b8-41e7-860c-63dc695390e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:16 np0005539552 nova_compute[233724]: 2025-11-29 08:28:16.737 233728 DEBUG oslo_concurrency.lockutils [req-bb0716f6-dd35-4511-afa3-4cee5f7a2d50 req-c40ff07c-f390-48c5-8d3b-933f1d305815 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d6363749-92b8-41e7-860c-63dc695390e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:16 np0005539552 nova_compute[233724]: 2025-11-29 08:28:16.737 233728 DEBUG oslo_concurrency.lockutils [req-bb0716f6-dd35-4511-afa3-4cee5f7a2d50 req-c40ff07c-f390-48c5-8d3b-933f1d305815 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d6363749-92b8-41e7-860c-63dc695390e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:16 np0005539552 nova_compute[233724]: 2025-11-29 08:28:16.738 233728 DEBUG nova.compute.manager [req-bb0716f6-dd35-4511-afa3-4cee5f7a2d50 req-c40ff07c-f390-48c5-8d3b-933f1d305815 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] No waiting events found dispatching network-vif-plugged-a6621efc-2904-4858-9a98-9c441a64d2ff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:28:16 np0005539552 nova_compute[233724]: 2025-11-29 08:28:16.738 233728 WARNING nova.compute.manager [req-bb0716f6-dd35-4511-afa3-4cee5f7a2d50 req-c40ff07c-f390-48c5-8d3b-933f1d305815 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Received unexpected event network-vif-plugged-a6621efc-2904-4858-9a98-9c441a64d2ff for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:28:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:17.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:17 np0005539552 podman[295371]: 2025-11-29 08:28:17.580752919 +0000 UTC m=+0.077749883 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 29 03:28:17 np0005539552 podman[295372]: 2025-11-29 08:28:17.588772734 +0000 UTC m=+0.081600086 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:28:17 np0005539552 nova_compute[233724]: 2025-11-29 08:28:17.599 233728 DEBUG nova.virt.libvirt.host [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Removed pending event for d6363749-92b8-41e7-860c-63dc695390e4 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:28:17 np0005539552 nova_compute[233724]: 2025-11-29 08:28:17.599 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404897.598454, d6363749-92b8-41e7-860c-63dc695390e4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:28:17 np0005539552 nova_compute[233724]: 2025-11-29 08:28:17.599 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d6363749-92b8-41e7-860c-63dc695390e4] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:28:17 np0005539552 nova_compute[233724]: 2025-11-29 08:28:17.603 233728 DEBUG nova.compute.manager [None req-35bfcfdd-d64e-4e73-96cf-b0fe6cc4d1b5 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:28:17 np0005539552 nova_compute[233724]: 2025-11-29 08:28:17.617 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:28:17 np0005539552 nova_compute[233724]: 2025-11-29 08:28:17.620 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:28:17 np0005539552 nova_compute[233724]: 2025-11-29 08:28:17.643 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d6363749-92b8-41e7-860c-63dc695390e4] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Nov 29 03:28:17 np0005539552 nova_compute[233724]: 2025-11-29 08:28:17.644 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404897.601395, d6363749-92b8-41e7-860c-63dc695390e4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:28:17 np0005539552 nova_compute[233724]: 2025-11-29 08:28:17.644 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d6363749-92b8-41e7-860c-63dc695390e4] VM Started (Lifecycle Event)#033[00m
Nov 29 03:28:17 np0005539552 podman[295383]: 2025-11-29 08:28:17.658062079 +0000 UTC m=+0.129669830 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:28:17 np0005539552 nova_compute[233724]: 2025-11-29 08:28:17.669 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:28:17 np0005539552 nova_compute[233724]: 2025-11-29 08:28:17.673 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:28:17 np0005539552 nova_compute[233724]: 2025-11-29 08:28:17.696 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:18.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e346 e346: 3 total, 3 up, 3 in
Nov 29 03:28:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:19.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:19 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e347 e347: 3 total, 3 up, 3 in
Nov 29 03:28:19 np0005539552 nova_compute[233724]: 2025-11-29 08:28:19.438 233728 INFO nova.compute.manager [None req-9ec289e5-a7c7-4a97-ab96-669733d874ce 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Unrescuing#033[00m
Nov 29 03:28:19 np0005539552 nova_compute[233724]: 2025-11-29 08:28:19.439 233728 DEBUG oslo_concurrency.lockutils [None req-9ec289e5-a7c7-4a97-ab96-669733d874ce 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Acquiring lock "refresh_cache-d6363749-92b8-41e7-860c-63dc695390e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:28:19 np0005539552 nova_compute[233724]: 2025-11-29 08:28:19.439 233728 DEBUG oslo_concurrency.lockutils [None req-9ec289e5-a7c7-4a97-ab96-669733d874ce 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Acquired lock "refresh_cache-d6363749-92b8-41e7-860c-63dc695390e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:28:19 np0005539552 nova_compute[233724]: 2025-11-29 08:28:19.440 233728 DEBUG nova.network.neutron [None req-9ec289e5-a7c7-4a97-ab96-669733d874ce 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:28:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:20.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:20.633 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:20.635 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:20.636 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:20 np0005539552 nova_compute[233724]: 2025-11-29 08:28:20.717 233728 DEBUG nova.network.neutron [None req-9ec289e5-a7c7-4a97-ab96-669733d874ce 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Updating instance_info_cache with network_info: [{"id": "a6621efc-2904-4858-9a98-9c441a64d2ff", "address": "fa:16:3e:ba:8a:8e", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6621efc-29", "ovs_interfaceid": "a6621efc-2904-4858-9a98-9c441a64d2ff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:28:20 np0005539552 nova_compute[233724]: 2025-11-29 08:28:20.740 233728 DEBUG oslo_concurrency.lockutils [None req-9ec289e5-a7c7-4a97-ab96-669733d874ce 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Releasing lock "refresh_cache-d6363749-92b8-41e7-860c-63dc695390e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:28:20 np0005539552 nova_compute[233724]: 2025-11-29 08:28:20.741 233728 DEBUG nova.objects.instance [None req-9ec289e5-a7c7-4a97-ab96-669733d874ce 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lazy-loading 'flavor' on Instance uuid d6363749-92b8-41e7-860c-63dc695390e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:28:21 np0005539552 nova_compute[233724]: 2025-11-29 08:28:21.137 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:21.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:21 np0005539552 kernel: tapa6621efc-29 (unregistering): left promiscuous mode
Nov 29 03:28:21 np0005539552 NetworkManager[48926]: <info>  [1764404901.2929] device (tapa6621efc-29): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:28:21 np0005539552 ovn_controller[133798]: 2025-11-29T08:28:21Z|00668|binding|INFO|Releasing lport a6621efc-2904-4858-9a98-9c441a64d2ff from this chassis (sb_readonly=0)
Nov 29 03:28:21 np0005539552 ovn_controller[133798]: 2025-11-29T08:28:21Z|00669|binding|INFO|Setting lport a6621efc-2904-4858-9a98-9c441a64d2ff down in Southbound
Nov 29 03:28:21 np0005539552 ovn_controller[133798]: 2025-11-29T08:28:21Z|00670|binding|INFO|Removing iface tapa6621efc-29 ovn-installed in OVS
Nov 29 03:28:21 np0005539552 nova_compute[233724]: 2025-11-29 08:28:21.306 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:21 np0005539552 nova_compute[233724]: 2025-11-29 08:28:21.308 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:21.313 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:8a:8e 10.100.0.10'], port_security=['fa:16:3e:ba:8a:8e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'd6363749-92b8-41e7-860c-63dc695390e4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea7b24ea9d7b4d239b4741634ac3f10c', 'neutron:revision_number': '6', 'neutron:security_group_ids': '6d3fe511-bc37-4fc4-9176-e8e88cafdead', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.223', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2992474f-d5bf-4893-b4cc-2c774a8a9871, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=a6621efc-2904-4858-9a98-9c441a64d2ff) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:28:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:21.314 143400 INFO neutron.agent.ovn.metadata.agent [-] Port a6621efc-2904-4858-9a98-9c441a64d2ff in datapath 4ca67fce-6116-4a0b-b0a9-c25b5adaad19 unbound from our chassis#033[00m
Nov 29 03:28:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:21.318 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4ca67fce-6116-4a0b-b0a9-c25b5adaad19#033[00m
Nov 29 03:28:21 np0005539552 nova_compute[233724]: 2025-11-29 08:28:21.327 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:21.336 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[78a94fc9-f1b0-4906-834b-cebb6506c124]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:21 np0005539552 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d00000090.scope: Deactivated successfully.
Nov 29 03:28:21 np0005539552 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d00000090.scope: Consumed 3.874s CPU time.
Nov 29 03:28:21 np0005539552 systemd-machined[196379]: Machine qemu-67-instance-00000090 terminated.
Nov 29 03:28:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:21.369 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[5ece5b8a-c521-4757-a58e-a50ca249b87d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:21.372 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[2b7c9826-e10f-432a-b00b-6e5e3200e36b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:21 np0005539552 nova_compute[233724]: 2025-11-29 08:28:21.402 233728 INFO nova.virt.libvirt.driver [-] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Instance destroyed successfully.#033[00m
Nov 29 03:28:21 np0005539552 nova_compute[233724]: 2025-11-29 08:28:21.403 233728 DEBUG nova.objects.instance [None req-9ec289e5-a7c7-4a97-ab96-669733d874ce 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lazy-loading 'numa_topology' on Instance uuid d6363749-92b8-41e7-860c-63dc695390e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:28:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:21.404 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[f54880ba-5e98-4465-9c94-00957f0be934]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:21.428 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[cd240f50-8599-4ee5-a58b-eebd9db4841c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4ca67fce-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:3c:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 189], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 785715, 'reachable_time': 15430, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295509, 'error': None, 'target': 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:21.444 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[28599e9d-43db-4393-aa08-8db1c0e20e67]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4ca67fce-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 785728, 'tstamp': 785728}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295512, 'error': None, 'target': 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4ca67fce-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 785731, 'tstamp': 785731}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295512, 'error': None, 'target': 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:21.445 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ca67fce-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:21 np0005539552 nova_compute[233724]: 2025-11-29 08:28:21.447 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:21 np0005539552 nova_compute[233724]: 2025-11-29 08:28:21.450 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:21.451 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4ca67fce-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:21.452 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:28:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:21.452 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4ca67fce-60, col_values=(('external_ids', {'iface-id': '6f99f0ed-ee75-45c3-abe1-1afc889fd227'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:21.452 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:28:21 np0005539552 kernel: tapa6621efc-29: entered promiscuous mode
Nov 29 03:28:21 np0005539552 NetworkManager[48926]: <info>  [1764404901.4958] manager: (tapa6621efc-29): new Tun device (/org/freedesktop/NetworkManager/Devices/298)
Nov 29 03:28:21 np0005539552 ovn_controller[133798]: 2025-11-29T08:28:21Z|00671|binding|INFO|Claiming lport a6621efc-2904-4858-9a98-9c441a64d2ff for this chassis.
Nov 29 03:28:21 np0005539552 ovn_controller[133798]: 2025-11-29T08:28:21Z|00672|binding|INFO|a6621efc-2904-4858-9a98-9c441a64d2ff: Claiming fa:16:3e:ba:8a:8e 10.100.0.10
Nov 29 03:28:21 np0005539552 nova_compute[233724]: 2025-11-29 08:28:21.497 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:21 np0005539552 systemd-udevd[295463]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:28:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:21.503 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:8a:8e 10.100.0.10'], port_security=['fa:16:3e:ba:8a:8e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'd6363749-92b8-41e7-860c-63dc695390e4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea7b24ea9d7b4d239b4741634ac3f10c', 'neutron:revision_number': '6', 'neutron:security_group_ids': '6d3fe511-bc37-4fc4-9176-e8e88cafdead', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.223', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2992474f-d5bf-4893-b4cc-2c774a8a9871, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=a6621efc-2904-4858-9a98-9c441a64d2ff) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:28:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:21.504 143400 INFO neutron.agent.ovn.metadata.agent [-] Port a6621efc-2904-4858-9a98-9c441a64d2ff in datapath 4ca67fce-6116-4a0b-b0a9-c25b5adaad19 bound to our chassis#033[00m
Nov 29 03:28:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:21.505 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4ca67fce-6116-4a0b-b0a9-c25b5adaad19#033[00m
Nov 29 03:28:21 np0005539552 ovn_controller[133798]: 2025-11-29T08:28:21Z|00673|binding|INFO|Setting lport a6621efc-2904-4858-9a98-9c441a64d2ff ovn-installed in OVS
Nov 29 03:28:21 np0005539552 ovn_controller[133798]: 2025-11-29T08:28:21Z|00674|binding|INFO|Setting lport a6621efc-2904-4858-9a98-9c441a64d2ff up in Southbound
Nov 29 03:28:21 np0005539552 NetworkManager[48926]: <info>  [1764404901.5147] device (tapa6621efc-29): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:28:21 np0005539552 nova_compute[233724]: 2025-11-29 08:28:21.514 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:21 np0005539552 NetworkManager[48926]: <info>  [1764404901.5159] device (tapa6621efc-29): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:28:21 np0005539552 nova_compute[233724]: 2025-11-29 08:28:21.518 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:21.523 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[79c4ccd3-c1e2-4039-bff6-3720569a0386]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:21 np0005539552 systemd-machined[196379]: New machine qemu-68-instance-00000090.
Nov 29 03:28:21 np0005539552 systemd[1]: Started Virtual Machine qemu-68-instance-00000090.
Nov 29 03:28:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:21.554 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[d831dc68-f03d-441e-948e-8f91b4118abd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:21.557 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[d6161e25-a0ea-4d4f-b8ab-ef83f130a904]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:21.579 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[0237f700-176d-4895-8858-25943f2494c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:21.592 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3586dcb8-0496-4061-a66c-b8603fbad8a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4ca67fce-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:3c:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 189], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 785715, 'reachable_time': 15430, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295538, 'error': None, 'target': 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:21.606 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6e98816e-df66-4106-942e-d22cf406db89]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4ca67fce-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 785728, 'tstamp': 785728}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295540, 'error': None, 'target': 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4ca67fce-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 785731, 'tstamp': 785731}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295540, 'error': None, 'target': 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:21.607 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ca67fce-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:21 np0005539552 nova_compute[233724]: 2025-11-29 08:28:21.609 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:21 np0005539552 nova_compute[233724]: 2025-11-29 08:28:21.610 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:21.610 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4ca67fce-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:21.611 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:28:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:21.611 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4ca67fce-60, col_values=(('external_ids', {'iface-id': '6f99f0ed-ee75-45c3-abe1-1afc889fd227'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:21.611 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:28:21 np0005539552 nova_compute[233724]: 2025-11-29 08:28:21.614 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:22 np0005539552 nova_compute[233724]: 2025-11-29 08:28:22.092 233728 DEBUG nova.virt.libvirt.host [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Removed pending event for d6363749-92b8-41e7-860c-63dc695390e4 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:28:22 np0005539552 nova_compute[233724]: 2025-11-29 08:28:22.094 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404902.0920596, d6363749-92b8-41e7-860c-63dc695390e4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:28:22 np0005539552 nova_compute[233724]: 2025-11-29 08:28:22.094 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d6363749-92b8-41e7-860c-63dc695390e4] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:28:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:22.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:22 np0005539552 nova_compute[233724]: 2025-11-29 08:28:22.122 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:28:22 np0005539552 nova_compute[233724]: 2025-11-29 08:28:22.126 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:28:22 np0005539552 nova_compute[233724]: 2025-11-29 08:28:22.143 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d6363749-92b8-41e7-860c-63dc695390e4] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Nov 29 03:28:22 np0005539552 nova_compute[233724]: 2025-11-29 08:28:22.144 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404902.092442, d6363749-92b8-41e7-860c-63dc695390e4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:28:22 np0005539552 nova_compute[233724]: 2025-11-29 08:28:22.144 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d6363749-92b8-41e7-860c-63dc695390e4] VM Started (Lifecycle Event)#033[00m
Nov 29 03:28:22 np0005539552 nova_compute[233724]: 2025-11-29 08:28:22.167 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:28:22 np0005539552 nova_compute[233724]: 2025-11-29 08:28:22.171 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:28:22 np0005539552 nova_compute[233724]: 2025-11-29 08:28:22.187 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d6363749-92b8-41e7-860c-63dc695390e4] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Nov 29 03:28:22 np0005539552 nova_compute[233724]: 2025-11-29 08:28:22.698 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e347 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:23.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:23 np0005539552 nova_compute[233724]: 2025-11-29 08:28:23.506 233728 DEBUG nova.compute.manager [None req-9ec289e5-a7c7-4a97-ab96-669733d874ce 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:28:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:24.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:24 np0005539552 nova_compute[233724]: 2025-11-29 08:28:24.891 233728 DEBUG nova.compute.manager [req-7bd1d2d9-8251-459f-b448-7a1373fa5a71 req-34ff8d8b-e62b-477e-b477-7a535b9f170c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Received event network-vif-unplugged-a6621efc-2904-4858-9a98-9c441a64d2ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:28:24 np0005539552 nova_compute[233724]: 2025-11-29 08:28:24.892 233728 DEBUG oslo_concurrency.lockutils [req-7bd1d2d9-8251-459f-b448-7a1373fa5a71 req-34ff8d8b-e62b-477e-b477-7a535b9f170c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d6363749-92b8-41e7-860c-63dc695390e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:24 np0005539552 nova_compute[233724]: 2025-11-29 08:28:24.893 233728 DEBUG oslo_concurrency.lockutils [req-7bd1d2d9-8251-459f-b448-7a1373fa5a71 req-34ff8d8b-e62b-477e-b477-7a535b9f170c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d6363749-92b8-41e7-860c-63dc695390e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:24 np0005539552 nova_compute[233724]: 2025-11-29 08:28:24.894 233728 DEBUG oslo_concurrency.lockutils [req-7bd1d2d9-8251-459f-b448-7a1373fa5a71 req-34ff8d8b-e62b-477e-b477-7a535b9f170c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d6363749-92b8-41e7-860c-63dc695390e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:24 np0005539552 nova_compute[233724]: 2025-11-29 08:28:24.895 233728 DEBUG nova.compute.manager [req-7bd1d2d9-8251-459f-b448-7a1373fa5a71 req-34ff8d8b-e62b-477e-b477-7a535b9f170c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] No waiting events found dispatching network-vif-unplugged-a6621efc-2904-4858-9a98-9c441a64d2ff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:28:24 np0005539552 nova_compute[233724]: 2025-11-29 08:28:24.895 233728 WARNING nova.compute.manager [req-7bd1d2d9-8251-459f-b448-7a1373fa5a71 req-34ff8d8b-e62b-477e-b477-7a535b9f170c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Received unexpected event network-vif-unplugged-a6621efc-2904-4858-9a98-9c441a64d2ff for instance with vm_state active and task_state None.#033[00m
Nov 29 03:28:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:28:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:25.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:28:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:26.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:26 np0005539552 ovn_controller[133798]: 2025-11-29T08:28:26Z|00675|binding|INFO|Releasing lport 6f99f0ed-ee75-45c3-abe1-1afc889fd227 from this chassis (sb_readonly=0)
Nov 29 03:28:26 np0005539552 nova_compute[233724]: 2025-11-29 08:28:26.468 233728 DEBUG nova.compute.manager [req-fb7611d0-d6e7-4dcd-a0c9-5a82b07028e2 req-1ad550cc-7562-45bf-9e1a-a3565685dcd4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Received event network-changed-a6621efc-2904-4858-9a98-9c441a64d2ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:28:26 np0005539552 nova_compute[233724]: 2025-11-29 08:28:26.469 233728 DEBUG nova.compute.manager [req-fb7611d0-d6e7-4dcd-a0c9-5a82b07028e2 req-1ad550cc-7562-45bf-9e1a-a3565685dcd4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Refreshing instance network info cache due to event network-changed-a6621efc-2904-4858-9a98-9c441a64d2ff. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:28:26 np0005539552 nova_compute[233724]: 2025-11-29 08:28:26.469 233728 DEBUG oslo_concurrency.lockutils [req-fb7611d0-d6e7-4dcd-a0c9-5a82b07028e2 req-1ad550cc-7562-45bf-9e1a-a3565685dcd4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-d6363749-92b8-41e7-860c-63dc695390e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:28:26 np0005539552 nova_compute[233724]: 2025-11-29 08:28:26.469 233728 DEBUG oslo_concurrency.lockutils [req-fb7611d0-d6e7-4dcd-a0c9-5a82b07028e2 req-1ad550cc-7562-45bf-9e1a-a3565685dcd4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-d6363749-92b8-41e7-860c-63dc695390e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:28:26 np0005539552 nova_compute[233724]: 2025-11-29 08:28:26.469 233728 DEBUG nova.network.neutron [req-fb7611d0-d6e7-4dcd-a0c9-5a82b07028e2 req-1ad550cc-7562-45bf-9e1a-a3565685dcd4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Refreshing network info cache for port a6621efc-2904-4858-9a98-9c441a64d2ff _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:28:26 np0005539552 nova_compute[233724]: 2025-11-29 08:28:26.568 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:26 np0005539552 nova_compute[233724]: 2025-11-29 08:28:26.616 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:26 np0005539552 nova_compute[233724]: 2025-11-29 08:28:26.858 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:26 np0005539552 nova_compute[233724]: 2025-11-29 08:28:26.978 233728 DEBUG nova.compute.manager [req-2e991053-2050-489c-b27b-447162ea4cea req-de2b77fd-bf21-4da8-958d-eed7f5c745d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Received event network-vif-plugged-a6621efc-2904-4858-9a98-9c441a64d2ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:28:26 np0005539552 nova_compute[233724]: 2025-11-29 08:28:26.979 233728 DEBUG oslo_concurrency.lockutils [req-2e991053-2050-489c-b27b-447162ea4cea req-de2b77fd-bf21-4da8-958d-eed7f5c745d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d6363749-92b8-41e7-860c-63dc695390e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:26 np0005539552 nova_compute[233724]: 2025-11-29 08:28:26.980 233728 DEBUG oslo_concurrency.lockutils [req-2e991053-2050-489c-b27b-447162ea4cea req-de2b77fd-bf21-4da8-958d-eed7f5c745d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d6363749-92b8-41e7-860c-63dc695390e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:26 np0005539552 nova_compute[233724]: 2025-11-29 08:28:26.980 233728 DEBUG oslo_concurrency.lockutils [req-2e991053-2050-489c-b27b-447162ea4cea req-de2b77fd-bf21-4da8-958d-eed7f5c745d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d6363749-92b8-41e7-860c-63dc695390e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:26 np0005539552 nova_compute[233724]: 2025-11-29 08:28:26.980 233728 DEBUG nova.compute.manager [req-2e991053-2050-489c-b27b-447162ea4cea req-de2b77fd-bf21-4da8-958d-eed7f5c745d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] No waiting events found dispatching network-vif-plugged-a6621efc-2904-4858-9a98-9c441a64d2ff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:28:26 np0005539552 nova_compute[233724]: 2025-11-29 08:28:26.981 233728 WARNING nova.compute.manager [req-2e991053-2050-489c-b27b-447162ea4cea req-de2b77fd-bf21-4da8-958d-eed7f5c745d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Received unexpected event network-vif-plugged-a6621efc-2904-4858-9a98-9c441a64d2ff for instance with vm_state active and task_state None.#033[00m
Nov 29 03:28:26 np0005539552 nova_compute[233724]: 2025-11-29 08:28:26.981 233728 DEBUG nova.compute.manager [req-2e991053-2050-489c-b27b-447162ea4cea req-de2b77fd-bf21-4da8-958d-eed7f5c745d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Received event network-vif-plugged-a6621efc-2904-4858-9a98-9c441a64d2ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:28:26 np0005539552 nova_compute[233724]: 2025-11-29 08:28:26.982 233728 DEBUG oslo_concurrency.lockutils [req-2e991053-2050-489c-b27b-447162ea4cea req-de2b77fd-bf21-4da8-958d-eed7f5c745d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d6363749-92b8-41e7-860c-63dc695390e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:26 np0005539552 nova_compute[233724]: 2025-11-29 08:28:26.982 233728 DEBUG oslo_concurrency.lockutils [req-2e991053-2050-489c-b27b-447162ea4cea req-de2b77fd-bf21-4da8-958d-eed7f5c745d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d6363749-92b8-41e7-860c-63dc695390e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:26 np0005539552 nova_compute[233724]: 2025-11-29 08:28:26.983 233728 DEBUG oslo_concurrency.lockutils [req-2e991053-2050-489c-b27b-447162ea4cea req-de2b77fd-bf21-4da8-958d-eed7f5c745d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d6363749-92b8-41e7-860c-63dc695390e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:26 np0005539552 nova_compute[233724]: 2025-11-29 08:28:26.983 233728 DEBUG nova.compute.manager [req-2e991053-2050-489c-b27b-447162ea4cea req-de2b77fd-bf21-4da8-958d-eed7f5c745d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] No waiting events found dispatching network-vif-plugged-a6621efc-2904-4858-9a98-9c441a64d2ff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:28:26 np0005539552 nova_compute[233724]: 2025-11-29 08:28:26.984 233728 WARNING nova.compute.manager [req-2e991053-2050-489c-b27b-447162ea4cea req-de2b77fd-bf21-4da8-958d-eed7f5c745d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Received unexpected event network-vif-plugged-a6621efc-2904-4858-9a98-9c441a64d2ff for instance with vm_state active and task_state None.#033[00m
Nov 29 03:28:26 np0005539552 nova_compute[233724]: 2025-11-29 08:28:26.984 233728 DEBUG nova.compute.manager [req-2e991053-2050-489c-b27b-447162ea4cea req-de2b77fd-bf21-4da8-958d-eed7f5c745d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Received event network-vif-plugged-a6621efc-2904-4858-9a98-9c441a64d2ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:28:26 np0005539552 nova_compute[233724]: 2025-11-29 08:28:26.985 233728 DEBUG oslo_concurrency.lockutils [req-2e991053-2050-489c-b27b-447162ea4cea req-de2b77fd-bf21-4da8-958d-eed7f5c745d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d6363749-92b8-41e7-860c-63dc695390e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:26 np0005539552 nova_compute[233724]: 2025-11-29 08:28:26.985 233728 DEBUG oslo_concurrency.lockutils [req-2e991053-2050-489c-b27b-447162ea4cea req-de2b77fd-bf21-4da8-958d-eed7f5c745d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d6363749-92b8-41e7-860c-63dc695390e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:26 np0005539552 nova_compute[233724]: 2025-11-29 08:28:26.985 233728 DEBUG oslo_concurrency.lockutils [req-2e991053-2050-489c-b27b-447162ea4cea req-de2b77fd-bf21-4da8-958d-eed7f5c745d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d6363749-92b8-41e7-860c-63dc695390e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:26 np0005539552 nova_compute[233724]: 2025-11-29 08:28:26.986 233728 DEBUG nova.compute.manager [req-2e991053-2050-489c-b27b-447162ea4cea req-de2b77fd-bf21-4da8-958d-eed7f5c745d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] No waiting events found dispatching network-vif-plugged-a6621efc-2904-4858-9a98-9c441a64d2ff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:28:26 np0005539552 nova_compute[233724]: 2025-11-29 08:28:26.987 233728 WARNING nova.compute.manager [req-2e991053-2050-489c-b27b-447162ea4cea req-de2b77fd-bf21-4da8-958d-eed7f5c745d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Received unexpected event network-vif-plugged-a6621efc-2904-4858-9a98-9c441a64d2ff for instance with vm_state active and task_state None.#033[00m
Nov 29 03:28:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:27.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e348 e348: 3 total, 3 up, 3 in
Nov 29 03:28:27 np0005539552 nova_compute[233724]: 2025-11-29 08:28:27.699 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:28.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:28 np0005539552 nova_compute[233724]: 2025-11-29 08:28:28.613 233728 DEBUG nova.network.neutron [req-fb7611d0-d6e7-4dcd-a0c9-5a82b07028e2 req-1ad550cc-7562-45bf-9e1a-a3565685dcd4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Updated VIF entry in instance network info cache for port a6621efc-2904-4858-9a98-9c441a64d2ff. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:28:28 np0005539552 nova_compute[233724]: 2025-11-29 08:28:28.614 233728 DEBUG nova.network.neutron [req-fb7611d0-d6e7-4dcd-a0c9-5a82b07028e2 req-1ad550cc-7562-45bf-9e1a-a3565685dcd4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Updating instance_info_cache with network_info: [{"id": "a6621efc-2904-4858-9a98-9c441a64d2ff", "address": "fa:16:3e:ba:8a:8e", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6621efc-29", "ovs_interfaceid": "a6621efc-2904-4858-9a98-9c441a64d2ff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:28:28 np0005539552 nova_compute[233724]: 2025-11-29 08:28:28.630 233728 DEBUG oslo_concurrency.lockutils [req-fb7611d0-d6e7-4dcd-a0c9-5a82b07028e2 req-1ad550cc-7562-45bf-9e1a-a3565685dcd4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-d6363749-92b8-41e7-860c-63dc695390e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:28:28 np0005539552 nova_compute[233724]: 2025-11-29 08:28:28.689 233728 DEBUG nova.compute.manager [req-1919a3e6-9bfa-436d-80ce-dd6fd2b73a3d req-588fd8e7-ee3e-4800-84f9-57c241d3e1fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Received event network-changed-a6621efc-2904-4858-9a98-9c441a64d2ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:28:28 np0005539552 nova_compute[233724]: 2025-11-29 08:28:28.690 233728 DEBUG nova.compute.manager [req-1919a3e6-9bfa-436d-80ce-dd6fd2b73a3d req-588fd8e7-ee3e-4800-84f9-57c241d3e1fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Refreshing instance network info cache due to event network-changed-a6621efc-2904-4858-9a98-9c441a64d2ff. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:28:28 np0005539552 nova_compute[233724]: 2025-11-29 08:28:28.690 233728 DEBUG oslo_concurrency.lockutils [req-1919a3e6-9bfa-436d-80ce-dd6fd2b73a3d req-588fd8e7-ee3e-4800-84f9-57c241d3e1fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-d6363749-92b8-41e7-860c-63dc695390e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:28:28 np0005539552 nova_compute[233724]: 2025-11-29 08:28:28.691 233728 DEBUG oslo_concurrency.lockutils [req-1919a3e6-9bfa-436d-80ce-dd6fd2b73a3d req-588fd8e7-ee3e-4800-84f9-57c241d3e1fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-d6363749-92b8-41e7-860c-63dc695390e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:28:28 np0005539552 nova_compute[233724]: 2025-11-29 08:28:28.691 233728 DEBUG nova.network.neutron [req-1919a3e6-9bfa-436d-80ce-dd6fd2b73a3d req-588fd8e7-ee3e-4800-84f9-57c241d3e1fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Refreshing network info cache for port a6621efc-2904-4858-9a98-9c441a64d2ff _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:28:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:29.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:30.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:30 np0005539552 nova_compute[233724]: 2025-11-29 08:28:30.491 233728 DEBUG nova.network.neutron [req-1919a3e6-9bfa-436d-80ce-dd6fd2b73a3d req-588fd8e7-ee3e-4800-84f9-57c241d3e1fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Updated VIF entry in instance network info cache for port a6621efc-2904-4858-9a98-9c441a64d2ff. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:28:30 np0005539552 nova_compute[233724]: 2025-11-29 08:28:30.492 233728 DEBUG nova.network.neutron [req-1919a3e6-9bfa-436d-80ce-dd6fd2b73a3d req-588fd8e7-ee3e-4800-84f9-57c241d3e1fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Updating instance_info_cache with network_info: [{"id": "a6621efc-2904-4858-9a98-9c441a64d2ff", "address": "fa:16:3e:ba:8a:8e", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6621efc-29", "ovs_interfaceid": "a6621efc-2904-4858-9a98-9c441a64d2ff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:28:30 np0005539552 nova_compute[233724]: 2025-11-29 08:28:30.509 233728 DEBUG oslo_concurrency.lockutils [req-1919a3e6-9bfa-436d-80ce-dd6fd2b73a3d req-588fd8e7-ee3e-4800-84f9-57c241d3e1fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-d6363749-92b8-41e7-860c-63dc695390e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:28:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:31.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:31 np0005539552 nova_compute[233724]: 2025-11-29 08:28:31.618 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:32.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:32 np0005539552 nova_compute[233724]: 2025-11-29 08:28:32.701 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:33.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:34.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:34 np0005539552 nova_compute[233724]: 2025-11-29 08:28:34.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:28:34 np0005539552 nova_compute[233724]: 2025-11-29 08:28:34.945 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:34 np0005539552 nova_compute[233724]: 2025-11-29 08:28:34.945 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:34 np0005539552 nova_compute[233724]: 2025-11-29 08:28:34.945 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:34 np0005539552 nova_compute[233724]: 2025-11-29 08:28:34.946 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:28:34 np0005539552 nova_compute[233724]: 2025-11-29 08:28:34.946 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:35.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:35 np0005539552 ovn_controller[133798]: 2025-11-29T08:28:35Z|00086|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ba:8a:8e 10.100.0.10
Nov 29 03:28:35 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:28:35 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1006826391' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:28:35 np0005539552 nova_compute[233724]: 2025-11-29 08:28:35.389 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:35 np0005539552 nova_compute[233724]: 2025-11-29 08:28:35.460 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:28:35 np0005539552 nova_compute[233724]: 2025-11-29 08:28:35.460 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:28:35 np0005539552 nova_compute[233724]: 2025-11-29 08:28:35.461 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:28:35 np0005539552 nova_compute[233724]: 2025-11-29 08:28:35.464 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:28:35 np0005539552 nova_compute[233724]: 2025-11-29 08:28:35.464 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:28:35 np0005539552 nova_compute[233724]: 2025-11-29 08:28:35.465 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:28:35 np0005539552 nova_compute[233724]: 2025-11-29 08:28:35.640 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:28:35 np0005539552 nova_compute[233724]: 2025-11-29 08:28:35.641 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3820MB free_disk=20.77667999267578GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:28:35 np0005539552 nova_compute[233724]: 2025-11-29 08:28:35.641 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:35 np0005539552 nova_compute[233724]: 2025-11-29 08:28:35.642 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:35 np0005539552 nova_compute[233724]: 2025-11-29 08:28:35.737 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance bb261893-bfa1-4fdc-9c11-a33a733337ce actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:28:35 np0005539552 nova_compute[233724]: 2025-11-29 08:28:35.738 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance d6363749-92b8-41e7-860c-63dc695390e4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:28:35 np0005539552 nova_compute[233724]: 2025-11-29 08:28:35.738 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:28:35 np0005539552 nova_compute[233724]: 2025-11-29 08:28:35.738 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:28:35 np0005539552 nova_compute[233724]: 2025-11-29 08:28:35.816 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:36.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:36 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:28:36 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2998223228' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:28:36 np0005539552 nova_compute[233724]: 2025-11-29 08:28:36.252 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:36 np0005539552 nova_compute[233724]: 2025-11-29 08:28:36.260 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:28:36 np0005539552 nova_compute[233724]: 2025-11-29 08:28:36.274 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:28:36 np0005539552 nova_compute[233724]: 2025-11-29 08:28:36.297 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:28:36 np0005539552 nova_compute[233724]: 2025-11-29 08:28:36.298 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:36 np0005539552 nova_compute[233724]: 2025-11-29 08:28:36.620 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:37.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:37 np0005539552 nova_compute[233724]: 2025-11-29 08:28:37.702 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:38.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:28:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1947299602' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:28:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:28:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1947299602' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:28:39 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:28:39 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:28:39 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:28:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:39.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:39 np0005539552 nova_compute[233724]: 2025-11-29 08:28:39.299 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:28:39 np0005539552 nova_compute[233724]: 2025-11-29 08:28:39.299 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:28:39 np0005539552 nova_compute[233724]: 2025-11-29 08:28:39.300 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:28:39 np0005539552 nova_compute[233724]: 2025-11-29 08:28:39.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:28:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:40.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:40 np0005539552 nova_compute[233724]: 2025-11-29 08:28:40.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:28:40 np0005539552 nova_compute[233724]: 2025-11-29 08:28:40.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:28:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:41.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:41 np0005539552 nova_compute[233724]: 2025-11-29 08:28:41.623 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:41 np0005539552 nova_compute[233724]: 2025-11-29 08:28:41.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:28:42 np0005539552 nova_compute[233724]: 2025-11-29 08:28:42.128 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:42.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:42 np0005539552 nova_compute[233724]: 2025-11-29 08:28:42.589 233728 DEBUG oslo_concurrency.lockutils [None req-bda39776-a792-4b2a-8059-4ab5a5ddd80e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Acquiring lock "d6363749-92b8-41e7-860c-63dc695390e4" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:42 np0005539552 nova_compute[233724]: 2025-11-29 08:28:42.589 233728 DEBUG oslo_concurrency.lockutils [None req-bda39776-a792-4b2a-8059-4ab5a5ddd80e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "d6363749-92b8-41e7-860c-63dc695390e4" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:42 np0005539552 nova_compute[233724]: 2025-11-29 08:28:42.624 233728 INFO nova.compute.manager [None req-bda39776-a792-4b2a-8059-4ab5a5ddd80e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Detaching volume 213d91c6-0ee8-47c3-965c-92c80077e9ee#033[00m
Nov 29 03:28:42 np0005539552 nova_compute[233724]: 2025-11-29 08:28:42.703 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:42 np0005539552 nova_compute[233724]: 2025-11-29 08:28:42.824 233728 INFO nova.virt.block_device [None req-bda39776-a792-4b2a-8059-4ab5a5ddd80e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Attempting to driver detach volume 213d91c6-0ee8-47c3-965c-92c80077e9ee from mountpoint /dev/vdb#033[00m
Nov 29 03:28:42 np0005539552 nova_compute[233724]: 2025-11-29 08:28:42.837 233728 DEBUG nova.virt.libvirt.driver [None req-bda39776-a792-4b2a-8059-4ab5a5ddd80e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Attempting to detach device vdb from instance d6363749-92b8-41e7-860c-63dc695390e4 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:28:42 np0005539552 nova_compute[233724]: 2025-11-29 08:28:42.838 233728 DEBUG nova.virt.libvirt.guest [None req-bda39776-a792-4b2a-8059-4ab5a5ddd80e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:28:42 np0005539552 nova_compute[233724]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:28:42 np0005539552 nova_compute[233724]:  <source protocol="rbd" name="volumes/volume-213d91c6-0ee8-47c3-965c-92c80077e9ee">
Nov 29 03:28:42 np0005539552 nova_compute[233724]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:28:42 np0005539552 nova_compute[233724]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:28:42 np0005539552 nova_compute[233724]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:28:42 np0005539552 nova_compute[233724]:  </source>
Nov 29 03:28:42 np0005539552 nova_compute[233724]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:28:42 np0005539552 nova_compute[233724]:  <serial>213d91c6-0ee8-47c3-965c-92c80077e9ee</serial>
Nov 29 03:28:42 np0005539552 nova_compute[233724]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:28:42 np0005539552 nova_compute[233724]: </disk>
Nov 29 03:28:42 np0005539552 nova_compute[233724]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:28:42 np0005539552 nova_compute[233724]: 2025-11-29 08:28:42.845 233728 INFO nova.virt.libvirt.driver [None req-bda39776-a792-4b2a-8059-4ab5a5ddd80e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Successfully detached device vdb from instance d6363749-92b8-41e7-860c-63dc695390e4 from the persistent domain config.#033[00m
Nov 29 03:28:42 np0005539552 nova_compute[233724]: 2025-11-29 08:28:42.846 233728 DEBUG nova.virt.libvirt.driver [None req-bda39776-a792-4b2a-8059-4ab5a5ddd80e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance d6363749-92b8-41e7-860c-63dc695390e4 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 29 03:28:42 np0005539552 nova_compute[233724]: 2025-11-29 08:28:42.846 233728 DEBUG nova.virt.libvirt.guest [None req-bda39776-a792-4b2a-8059-4ab5a5ddd80e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:28:42 np0005539552 nova_compute[233724]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:28:42 np0005539552 nova_compute[233724]:  <source protocol="rbd" name="volumes/volume-213d91c6-0ee8-47c3-965c-92c80077e9ee">
Nov 29 03:28:42 np0005539552 nova_compute[233724]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:28:42 np0005539552 nova_compute[233724]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:28:42 np0005539552 nova_compute[233724]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:28:42 np0005539552 nova_compute[233724]:  </source>
Nov 29 03:28:42 np0005539552 nova_compute[233724]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:28:42 np0005539552 nova_compute[233724]:  <serial>213d91c6-0ee8-47c3-965c-92c80077e9ee</serial>
Nov 29 03:28:42 np0005539552 nova_compute[233724]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:28:42 np0005539552 nova_compute[233724]: </disk>
Nov 29 03:28:42 np0005539552 nova_compute[233724]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:28:42 np0005539552 nova_compute[233724]: 2025-11-29 08:28:42.896 233728 DEBUG nova.virt.libvirt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Received event <DeviceRemovedEvent: 1764404922.8962474, d6363749-92b8-41e7-860c-63dc695390e4 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 29 03:28:42 np0005539552 nova_compute[233724]: 2025-11-29 08:28:42.898 233728 DEBUG nova.virt.libvirt.driver [None req-bda39776-a792-4b2a-8059-4ab5a5ddd80e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance d6363749-92b8-41e7-860c-63dc695390e4 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 29 03:28:42 np0005539552 nova_compute[233724]: 2025-11-29 08:28:42.900 233728 INFO nova.virt.libvirt.driver [None req-bda39776-a792-4b2a-8059-4ab5a5ddd80e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Successfully detached device vdb from instance d6363749-92b8-41e7-860c-63dc695390e4 from the live domain config.#033[00m
Nov 29 03:28:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:43 np0005539552 nova_compute[233724]: 2025-11-29 08:28:43.163 233728 DEBUG nova.objects.instance [None req-bda39776-a792-4b2a-8059-4ab5a5ddd80e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lazy-loading 'flavor' on Instance uuid d6363749-92b8-41e7-860c-63dc695390e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:28:43 np0005539552 nova_compute[233724]: 2025-11-29 08:28:43.199 233728 DEBUG oslo_concurrency.lockutils [None req-bda39776-a792-4b2a-8059-4ab5a5ddd80e 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "d6363749-92b8-41e7-860c-63dc695390e4" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:43.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:44.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:44.274 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:28:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:44.275 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:28:44 np0005539552 nova_compute[233724]: 2025-11-29 08:28:44.274 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:44.275 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:44 np0005539552 nova_compute[233724]: 2025-11-29 08:28:44.393 233728 DEBUG oslo_concurrency.lockutils [None req-afdc9079-9a22-4446-ad90-feb8a0bf5ba8 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Acquiring lock "d6363749-92b8-41e7-860c-63dc695390e4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:44 np0005539552 nova_compute[233724]: 2025-11-29 08:28:44.394 233728 DEBUG oslo_concurrency.lockutils [None req-afdc9079-9a22-4446-ad90-feb8a0bf5ba8 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "d6363749-92b8-41e7-860c-63dc695390e4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:44 np0005539552 nova_compute[233724]: 2025-11-29 08:28:44.395 233728 DEBUG oslo_concurrency.lockutils [None req-afdc9079-9a22-4446-ad90-feb8a0bf5ba8 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Acquiring lock "d6363749-92b8-41e7-860c-63dc695390e4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:44 np0005539552 nova_compute[233724]: 2025-11-29 08:28:44.396 233728 DEBUG oslo_concurrency.lockutils [None req-afdc9079-9a22-4446-ad90-feb8a0bf5ba8 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "d6363749-92b8-41e7-860c-63dc695390e4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:44 np0005539552 nova_compute[233724]: 2025-11-29 08:28:44.396 233728 DEBUG oslo_concurrency.lockutils [None req-afdc9079-9a22-4446-ad90-feb8a0bf5ba8 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "d6363749-92b8-41e7-860c-63dc695390e4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:44 np0005539552 nova_compute[233724]: 2025-11-29 08:28:44.398 233728 INFO nova.compute.manager [None req-afdc9079-9a22-4446-ad90-feb8a0bf5ba8 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Terminating instance#033[00m
Nov 29 03:28:44 np0005539552 nova_compute[233724]: 2025-11-29 08:28:44.399 233728 DEBUG nova.compute.manager [None req-afdc9079-9a22-4446-ad90-feb8a0bf5ba8 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:28:44 np0005539552 kernel: tapa6621efc-29 (unregistering): left promiscuous mode
Nov 29 03:28:44 np0005539552 NetworkManager[48926]: <info>  [1764404924.5923] device (tapa6621efc-29): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:28:44 np0005539552 ovn_controller[133798]: 2025-11-29T08:28:44Z|00676|binding|INFO|Releasing lport a6621efc-2904-4858-9a98-9c441a64d2ff from this chassis (sb_readonly=0)
Nov 29 03:28:44 np0005539552 nova_compute[233724]: 2025-11-29 08:28:44.603 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:44 np0005539552 ovn_controller[133798]: 2025-11-29T08:28:44Z|00677|binding|INFO|Setting lport a6621efc-2904-4858-9a98-9c441a64d2ff down in Southbound
Nov 29 03:28:44 np0005539552 ovn_controller[133798]: 2025-11-29T08:28:44Z|00678|binding|INFO|Removing iface tapa6621efc-29 ovn-installed in OVS
Nov 29 03:28:44 np0005539552 nova_compute[233724]: 2025-11-29 08:28:44.612 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:44.616 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:8a:8e 10.100.0.10'], port_security=['fa:16:3e:ba:8a:8e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'd6363749-92b8-41e7-860c-63dc695390e4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea7b24ea9d7b4d239b4741634ac3f10c', 'neutron:revision_number': '8', 'neutron:security_group_ids': '6d3fe511-bc37-4fc4-9176-e8e88cafdead', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.223', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2992474f-d5bf-4893-b4cc-2c774a8a9871, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=a6621efc-2904-4858-9a98-9c441a64d2ff) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:28:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:44.619 143400 INFO neutron.agent.ovn.metadata.agent [-] Port a6621efc-2904-4858-9a98-9c441a64d2ff in datapath 4ca67fce-6116-4a0b-b0a9-c25b5adaad19 unbound from our chassis#033[00m
Nov 29 03:28:44 np0005539552 nova_compute[233724]: 2025-11-29 08:28:44.620 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:44.625 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4ca67fce-6116-4a0b-b0a9-c25b5adaad19#033[00m
Nov 29 03:28:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:44.644 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[9f00f012-d3bf-4950-91b0-f5ecee7b21f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:44 np0005539552 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d00000090.scope: Deactivated successfully.
Nov 29 03:28:44 np0005539552 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d00000090.scope: Consumed 14.123s CPU time.
Nov 29 03:28:44 np0005539552 systemd-machined[196379]: Machine qemu-68-instance-00000090 terminated.
Nov 29 03:28:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:44.678 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[6d67d791-92ff-47af-a6ae-34cf1cce8436]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:44.682 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[f1c76892-b899-457f-b5d3-8edb8a4bd561]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:44.722 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[696ce82a-dadf-4e27-949d-bb0f5faaf0be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:44.749 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[15057f8b-6775-4a1f-adc7-d6aa17f03353]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4ca67fce-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:3c:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 189], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 785715, 'reachable_time': 15430, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295877, 'error': None, 'target': 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:44.772 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b66aa358-4f32-4e16-b773-1fdb019f1ae9]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4ca67fce-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 785728, 'tstamp': 785728}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295878, 'error': None, 'target': 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4ca67fce-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 785731, 'tstamp': 785731}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295878, 'error': None, 'target': 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:44.774 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ca67fce-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:44 np0005539552 nova_compute[233724]: 2025-11-29 08:28:44.775 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:44 np0005539552 nova_compute[233724]: 2025-11-29 08:28:44.780 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:44.780 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4ca67fce-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:44.781 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:28:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:44.781 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4ca67fce-60, col_values=(('external_ids', {'iface-id': '6f99f0ed-ee75-45c3-abe1-1afc889fd227'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:44.781 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:28:44 np0005539552 kernel: tapa6621efc-29: entered promiscuous mode
Nov 29 03:28:44 np0005539552 kernel: tapa6621efc-29 (unregistering): left promiscuous mode
Nov 29 03:28:44 np0005539552 ovn_controller[133798]: 2025-11-29T08:28:44Z|00679|binding|INFO|Claiming lport a6621efc-2904-4858-9a98-9c441a64d2ff for this chassis.
Nov 29 03:28:44 np0005539552 nova_compute[233724]: 2025-11-29 08:28:44.824 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:44 np0005539552 ovn_controller[133798]: 2025-11-29T08:28:44Z|00680|binding|INFO|a6621efc-2904-4858-9a98-9c441a64d2ff: Claiming fa:16:3e:ba:8a:8e 10.100.0.10
Nov 29 03:28:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:44.832 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:8a:8e 10.100.0.10'], port_security=['fa:16:3e:ba:8a:8e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'd6363749-92b8-41e7-860c-63dc695390e4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea7b24ea9d7b4d239b4741634ac3f10c', 'neutron:revision_number': '8', 'neutron:security_group_ids': '6d3fe511-bc37-4fc4-9176-e8e88cafdead', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.223', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2992474f-d5bf-4893-b4cc-2c774a8a9871, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=a6621efc-2904-4858-9a98-9c441a64d2ff) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:28:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:44.833 143400 INFO neutron.agent.ovn.metadata.agent [-] Port a6621efc-2904-4858-9a98-9c441a64d2ff in datapath 4ca67fce-6116-4a0b-b0a9-c25b5adaad19 bound to our chassis#033[00m
Nov 29 03:28:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:44.835 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4ca67fce-6116-4a0b-b0a9-c25b5adaad19#033[00m
Nov 29 03:28:44 np0005539552 nova_compute[233724]: 2025-11-29 08:28:44.838 233728 INFO nova.virt.libvirt.driver [-] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Instance destroyed successfully.#033[00m
Nov 29 03:28:44 np0005539552 nova_compute[233724]: 2025-11-29 08:28:44.839 233728 DEBUG nova.objects.instance [None req-afdc9079-9a22-4446-ad90-feb8a0bf5ba8 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lazy-loading 'resources' on Instance uuid d6363749-92b8-41e7-860c-63dc695390e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:28:44 np0005539552 ovn_controller[133798]: 2025-11-29T08:28:44Z|00681|binding|INFO|Setting lport a6621efc-2904-4858-9a98-9c441a64d2ff ovn-installed in OVS
Nov 29 03:28:44 np0005539552 ovn_controller[133798]: 2025-11-29T08:28:44Z|00682|binding|INFO|Setting lport a6621efc-2904-4858-9a98-9c441a64d2ff up in Southbound
Nov 29 03:28:44 np0005539552 ovn_controller[133798]: 2025-11-29T08:28:44Z|00683|binding|INFO|Releasing lport a6621efc-2904-4858-9a98-9c441a64d2ff from this chassis (sb_readonly=1)
Nov 29 03:28:44 np0005539552 nova_compute[233724]: 2025-11-29 08:28:44.846 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:44 np0005539552 ovn_controller[133798]: 2025-11-29T08:28:44Z|00684|if_status|INFO|Dropped 2 log messages in last 116 seconds (most recently, 116 seconds ago) due to excessive rate
Nov 29 03:28:44 np0005539552 ovn_controller[133798]: 2025-11-29T08:28:44Z|00685|if_status|INFO|Not setting lport a6621efc-2904-4858-9a98-9c441a64d2ff down as sb is readonly
Nov 29 03:28:44 np0005539552 ovn_controller[133798]: 2025-11-29T08:28:44Z|00686|binding|INFO|Removing iface tapa6621efc-29 ovn-installed in OVS
Nov 29 03:28:44 np0005539552 nova_compute[233724]: 2025-11-29 08:28:44.848 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:44 np0005539552 ovn_controller[133798]: 2025-11-29T08:28:44Z|00687|binding|INFO|Releasing lport a6621efc-2904-4858-9a98-9c441a64d2ff from this chassis (sb_readonly=1)
Nov 29 03:28:44 np0005539552 ovn_controller[133798]: 2025-11-29T08:28:44Z|00688|binding|INFO|Setting lport a6621efc-2904-4858-9a98-9c441a64d2ff down in Southbound
Nov 29 03:28:44 np0005539552 nova_compute[233724]: 2025-11-29 08:28:44.852 233728 DEBUG nova.virt.libvirt.vif [None req-afdc9079-9a22-4446-ad90-feb8a0bf5ba8 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:27:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-474867473',display_name='tempest-ServerRescueNegativeTestJSON-server-474867473',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-474867473',id=144,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIKkQ7Zazeo/dx6F4Eu6MN8OAjR4BxckLM+7ouW/olmDfJC62bOHNKRmGAyxOWHpYnYgRnTecW30ZoVhQUqa4XTjBKJkd20WTjX5TvwIkgUKRgDOuqdsmup3NfferXDEOw==',key_name='tempest-keypair-472027593',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:28:17Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ea7b24ea9d7b4d239b4741634ac3f10c',ramdisk_id='',reservation_id='r-knv00qym',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-2045177058',owner_user_name='tempest-ServerRescueNegativeTestJSON-2045177058-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:28:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='283f8136265e4425a5a31f840935b9ab',uuid=d6363749-92b8-41e7-860c-63dc695390e4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a6621efc-2904-4858-9a98-9c441a64d2ff", "address": "fa:16:3e:ba:8a:8e", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6621efc-29", "ovs_interfaceid": "a6621efc-2904-4858-9a98-9c441a64d2ff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:28:44 np0005539552 nova_compute[233724]: 2025-11-29 08:28:44.853 233728 DEBUG nova.network.os_vif_util [None req-afdc9079-9a22-4446-ad90-feb8a0bf5ba8 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Converting VIF {"id": "a6621efc-2904-4858-9a98-9c441a64d2ff", "address": "fa:16:3e:ba:8a:8e", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6621efc-29", "ovs_interfaceid": "a6621efc-2904-4858-9a98-9c441a64d2ff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:28:44 np0005539552 nova_compute[233724]: 2025-11-29 08:28:44.853 233728 DEBUG nova.network.os_vif_util [None req-afdc9079-9a22-4446-ad90-feb8a0bf5ba8 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ba:8a:8e,bridge_name='br-int',has_traffic_filtering=True,id=a6621efc-2904-4858-9a98-9c441a64d2ff,network=Network(4ca67fce-6116-4a0b-b0a9-c25b5adaad19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6621efc-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:28:44 np0005539552 nova_compute[233724]: 2025-11-29 08:28:44.853 233728 DEBUG os_vif [None req-afdc9079-9a22-4446-ad90-feb8a0bf5ba8 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ba:8a:8e,bridge_name='br-int',has_traffic_filtering=True,id=a6621efc-2904-4858-9a98-9c441a64d2ff,network=Network(4ca67fce-6116-4a0b-b0a9-c25b5adaad19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6621efc-29') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:28:44 np0005539552 nova_compute[233724]: 2025-11-29 08:28:44.855 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:44 np0005539552 nova_compute[233724]: 2025-11-29 08:28:44.855 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa6621efc-29, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:44 np0005539552 nova_compute[233724]: 2025-11-29 08:28:44.857 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:28:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:44.858 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:8a:8e 10.100.0.10'], port_security=['fa:16:3e:ba:8a:8e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'd6363749-92b8-41e7-860c-63dc695390e4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea7b24ea9d7b4d239b4741634ac3f10c', 'neutron:revision_number': '8', 'neutron:security_group_ids': '6d3fe511-bc37-4fc4-9176-e8e88cafdead', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.223', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2992474f-d5bf-4893-b4cc-2c774a8a9871, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=a6621efc-2904-4858-9a98-9c441a64d2ff) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:28:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:44.858 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ed5a2df3-db78-4fc6-9f3d-5ced9afdd8d5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:44 np0005539552 nova_compute[233724]: 2025-11-29 08:28:44.862 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:44 np0005539552 nova_compute[233724]: 2025-11-29 08:28:44.866 233728 INFO os_vif [None req-afdc9079-9a22-4446-ad90-feb8a0bf5ba8 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ba:8a:8e,bridge_name='br-int',has_traffic_filtering=True,id=a6621efc-2904-4858-9a98-9c441a64d2ff,network=Network(4ca67fce-6116-4a0b-b0a9-c25b5adaad19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6621efc-29')#033[00m
Nov 29 03:28:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:44.894 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[259385ae-567b-4caf-9c4a-aaf5adf6dbdb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:44.897 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[6a400d23-6ec2-4ac3-9f31-0cf59bf6cc0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:44 np0005539552 nova_compute[233724]: 2025-11-29 08:28:44.919 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:28:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:44.936 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[06a38a39-f020-46a3-936b-75ec7aac50e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:44.954 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[76dbd614-aaee-493b-aaa7-502b14962422]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4ca67fce-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:3c:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 17, 'rx_bytes': 784, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 17, 'rx_bytes': 784, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 189], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 785715, 'reachable_time': 15430, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295912, 'error': None, 'target': 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:44.974 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[1f601426-2708-4400-9ea6-95567d7c6617]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4ca67fce-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 785728, 'tstamp': 785728}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295913, 'error': None, 'target': 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4ca67fce-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 785731, 'tstamp': 785731}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295913, 'error': None, 'target': 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:44.975 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ca67fce-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:44 np0005539552 nova_compute[233724]: 2025-11-29 08:28:44.977 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:44.979 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4ca67fce-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:44.979 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:28:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:44.979 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4ca67fce-60, col_values=(('external_ids', {'iface-id': '6f99f0ed-ee75-45c3-abe1-1afc889fd227'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:44.980 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:28:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:44.981 143400 INFO neutron.agent.ovn.metadata.agent [-] Port a6621efc-2904-4858-9a98-9c441a64d2ff in datapath 4ca67fce-6116-4a0b-b0a9-c25b5adaad19 unbound from our chassis#033[00m
Nov 29 03:28:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:44.983 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4ca67fce-6116-4a0b-b0a9-c25b5adaad19#033[00m
Nov 29 03:28:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:45.000 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[252866ec-63a2-430d-87e7-7e13b9f8cc08]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:45.031 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[5ad13321-0c0c-4702-bd8a-05f33fa73697]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:45.035 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[39f49616-d241-4fe4-8a87-3d1cb2e08ce6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:45.067 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[2d79c2c9-f66d-4a4b-90b9-90128d3009ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:45.088 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[dee81add-380a-4d4d-b8e2-57ffca93f940]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4ca67fce-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:3c:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 19, 'rx_bytes': 784, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 19, 'rx_bytes': 784, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 189], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 785715, 'reachable_time': 15430, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295920, 'error': None, 'target': 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:45.105 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[5b08b4be-53b3-4a28-a12a-b3122fbce371]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4ca67fce-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 785728, 'tstamp': 785728}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295921, 'error': None, 'target': 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4ca67fce-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 785731, 'tstamp': 785731}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295921, 'error': None, 'target': 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:45.106 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ca67fce-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:45 np0005539552 nova_compute[233724]: 2025-11-29 08:28:45.108 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:45 np0005539552 nova_compute[233724]: 2025-11-29 08:28:45.110 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:45.110 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4ca67fce-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:45.110 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:28:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:45.111 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4ca67fce-60, col_values=(('external_ids', {'iface-id': '6f99f0ed-ee75-45c3-abe1-1afc889fd227'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:45.111 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:28:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:45.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:45 np0005539552 nova_compute[233724]: 2025-11-29 08:28:45.260 233728 INFO nova.virt.libvirt.driver [None req-afdc9079-9a22-4446-ad90-feb8a0bf5ba8 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Deleting instance files /var/lib/nova/instances/d6363749-92b8-41e7-860c-63dc695390e4_del#033[00m
Nov 29 03:28:45 np0005539552 nova_compute[233724]: 2025-11-29 08:28:45.262 233728 INFO nova.virt.libvirt.driver [None req-afdc9079-9a22-4446-ad90-feb8a0bf5ba8 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Deletion of /var/lib/nova/instances/d6363749-92b8-41e7-860c-63dc695390e4_del complete#033[00m
Nov 29 03:28:45 np0005539552 nova_compute[233724]: 2025-11-29 08:28:45.334 233728 INFO nova.compute.manager [None req-afdc9079-9a22-4446-ad90-feb8a0bf5ba8 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Took 0.94 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:28:45 np0005539552 nova_compute[233724]: 2025-11-29 08:28:45.335 233728 DEBUG oslo.service.loopingcall [None req-afdc9079-9a22-4446-ad90-feb8a0bf5ba8 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:28:45 np0005539552 nova_compute[233724]: 2025-11-29 08:28:45.335 233728 DEBUG nova.compute.manager [-] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:28:45 np0005539552 nova_compute[233724]: 2025-11-29 08:28:45.335 233728 DEBUG nova.network.neutron [-] [instance: d6363749-92b8-41e7-860c-63dc695390e4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:28:45 np0005539552 nova_compute[233724]: 2025-11-29 08:28:45.872 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:45 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:28:45 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:28:45 np0005539552 nova_compute[233724]: 2025-11-29 08:28:45.922 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:28:45 np0005539552 nova_compute[233724]: 2025-11-29 08:28:45.923 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:28:45 np0005539552 nova_compute[233724]: 2025-11-29 08:28:45.923 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:28:45 np0005539552 nova_compute[233724]: 2025-11-29 08:28:45.942 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Nov 29 03:28:46 np0005539552 nova_compute[233724]: 2025-11-29 08:28:46.099 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "refresh_cache-bb261893-bfa1-4fdc-9c11-a33a733337ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:28:46 np0005539552 nova_compute[233724]: 2025-11-29 08:28:46.099 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquired lock "refresh_cache-bb261893-bfa1-4fdc-9c11-a33a733337ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:28:46 np0005539552 nova_compute[233724]: 2025-11-29 08:28:46.099 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:28:46 np0005539552 nova_compute[233724]: 2025-11-29 08:28:46.100 233728 DEBUG nova.objects.instance [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid bb261893-bfa1-4fdc-9c11-a33a733337ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:28:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:46.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:46 np0005539552 nova_compute[233724]: 2025-11-29 08:28:46.550 233728 DEBUG nova.network.neutron [-] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:28:46 np0005539552 nova_compute[233724]: 2025-11-29 08:28:46.573 233728 INFO nova.compute.manager [-] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Took 1.24 seconds to deallocate network for instance.#033[00m
Nov 29 03:28:46 np0005539552 nova_compute[233724]: 2025-11-29 08:28:46.624 233728 DEBUG oslo_concurrency.lockutils [None req-afdc9079-9a22-4446-ad90-feb8a0bf5ba8 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:46 np0005539552 nova_compute[233724]: 2025-11-29 08:28:46.626 233728 DEBUG oslo_concurrency.lockutils [None req-afdc9079-9a22-4446-ad90-feb8a0bf5ba8 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:46 np0005539552 nova_compute[233724]: 2025-11-29 08:28:46.627 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:46 np0005539552 nova_compute[233724]: 2025-11-29 08:28:46.644 233728 DEBUG nova.compute.manager [req-1dd12697-e78b-4fc9-b941-922d72c6d974 req-71d42802-834e-48ef-b7e7-477a679ecd53 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Received event network-vif-deleted-a6621efc-2904-4858-9a98-9c441a64d2ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:28:46 np0005539552 nova_compute[233724]: 2025-11-29 08:28:46.779 233728 DEBUG oslo_concurrency.processutils [None req-afdc9079-9a22-4446-ad90-feb8a0bf5ba8 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:28:47 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/991832023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:28:47 np0005539552 nova_compute[233724]: 2025-11-29 08:28:47.220 233728 DEBUG oslo_concurrency.processutils [None req-afdc9079-9a22-4446-ad90-feb8a0bf5ba8 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:47 np0005539552 nova_compute[233724]: 2025-11-29 08:28:47.229 233728 DEBUG nova.compute.provider_tree [None req-afdc9079-9a22-4446-ad90-feb8a0bf5ba8 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:28:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:47.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:47 np0005539552 nova_compute[233724]: 2025-11-29 08:28:47.258 233728 DEBUG nova.scheduler.client.report [None req-afdc9079-9a22-4446-ad90-feb8a0bf5ba8 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:28:47 np0005539552 nova_compute[233724]: 2025-11-29 08:28:47.290 233728 DEBUG oslo_concurrency.lockutils [None req-afdc9079-9a22-4446-ad90-feb8a0bf5ba8 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:47 np0005539552 nova_compute[233724]: 2025-11-29 08:28:47.395 233728 INFO nova.scheduler.client.report [None req-afdc9079-9a22-4446-ad90-feb8a0bf5ba8 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Deleted allocations for instance d6363749-92b8-41e7-860c-63dc695390e4#033[00m
Nov 29 03:28:47 np0005539552 nova_compute[233724]: 2025-11-29 08:28:47.485 233728 DEBUG oslo_concurrency.lockutils [None req-afdc9079-9a22-4446-ad90-feb8a0bf5ba8 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "d6363749-92b8-41e7-860c-63dc695390e4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.091s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:47 np0005539552 podman[295996]: 2025-11-29 08:28:47.972403355 +0000 UTC m=+0.060790677 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 29 03:28:47 np0005539552 podman[295995]: 2025-11-29 08:28:47.973843673 +0000 UTC m=+0.064723562 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible)
Nov 29 03:28:48 np0005539552 podman[295997]: 2025-11-29 08:28:48.004763485 +0000 UTC m=+0.092941582 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:28:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:48.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:49.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:49 np0005539552 nova_compute[233724]: 2025-11-29 08:28:49.796 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Updating instance_info_cache with network_info: [{"id": "0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d", "address": "fa:16:3e:72:4b:49", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e2e8c1e-12", "ovs_interfaceid": "0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:28:49 np0005539552 nova_compute[233724]: 2025-11-29 08:28:49.855 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Releasing lock "refresh_cache-bb261893-bfa1-4fdc-9c11-a33a733337ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:28:49 np0005539552 nova_compute[233724]: 2025-11-29 08:28:49.856 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:28:49 np0005539552 nova_compute[233724]: 2025-11-29 08:28:49.859 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:50.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:51.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:51 np0005539552 nova_compute[233724]: 2025-11-29 08:28:51.628 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:52.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:52 np0005539552 nova_compute[233724]: 2025-11-29 08:28:52.852 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:28:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:53.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:54 np0005539552 nova_compute[233724]: 2025-11-29 08:28:54.027 233728 DEBUG oslo_concurrency.lockutils [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "101e7b80-d529-4f2a-87df-44512ead5b00" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:54 np0005539552 nova_compute[233724]: 2025-11-29 08:28:54.028 233728 DEBUG oslo_concurrency.lockutils [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "101e7b80-d529-4f2a-87df-44512ead5b00" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:54 np0005539552 nova_compute[233724]: 2025-11-29 08:28:54.076 233728 DEBUG nova.compute.manager [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:28:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:54.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:54 np0005539552 nova_compute[233724]: 2025-11-29 08:28:54.369 233728 DEBUG oslo_concurrency.lockutils [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:54 np0005539552 nova_compute[233724]: 2025-11-29 08:28:54.370 233728 DEBUG oslo_concurrency.lockutils [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:54 np0005539552 nova_compute[233724]: 2025-11-29 08:28:54.377 233728 DEBUG nova.virt.hardware [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:28:54 np0005539552 nova_compute[233724]: 2025-11-29 08:28:54.378 233728 INFO nova.compute.claims [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:28:54 np0005539552 nova_compute[233724]: 2025-11-29 08:28:54.561 233728 DEBUG oslo_concurrency.lockutils [None req-d7329d74-50e5-49d8-88f0-2ab4dead2648 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Acquiring lock "bb261893-bfa1-4fdc-9c11-a33a733337ce" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:54 np0005539552 nova_compute[233724]: 2025-11-29 08:28:54.561 233728 DEBUG oslo_concurrency.lockutils [None req-d7329d74-50e5-49d8-88f0-2ab4dead2648 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "bb261893-bfa1-4fdc-9c11-a33a733337ce" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:54 np0005539552 nova_compute[233724]: 2025-11-29 08:28:54.561 233728 DEBUG oslo_concurrency.lockutils [None req-d7329d74-50e5-49d8-88f0-2ab4dead2648 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Acquiring lock "bb261893-bfa1-4fdc-9c11-a33a733337ce-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:54 np0005539552 nova_compute[233724]: 2025-11-29 08:28:54.562 233728 DEBUG oslo_concurrency.lockutils [None req-d7329d74-50e5-49d8-88f0-2ab4dead2648 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "bb261893-bfa1-4fdc-9c11-a33a733337ce-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:54 np0005539552 nova_compute[233724]: 2025-11-29 08:28:54.562 233728 DEBUG oslo_concurrency.lockutils [None req-d7329d74-50e5-49d8-88f0-2ab4dead2648 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "bb261893-bfa1-4fdc-9c11-a33a733337ce-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:54 np0005539552 nova_compute[233724]: 2025-11-29 08:28:54.563 233728 INFO nova.compute.manager [None req-d7329d74-50e5-49d8-88f0-2ab4dead2648 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Terminating instance#033[00m
Nov 29 03:28:54 np0005539552 nova_compute[233724]: 2025-11-29 08:28:54.564 233728 DEBUG nova.compute.manager [None req-d7329d74-50e5-49d8-88f0-2ab4dead2648 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:28:54 np0005539552 nova_compute[233724]: 2025-11-29 08:28:54.591 233728 DEBUG oslo_concurrency.processutils [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:54 np0005539552 kernel: tap0e2e8c1e-12 (unregistering): left promiscuous mode
Nov 29 03:28:54 np0005539552 NetworkManager[48926]: <info>  [1764404934.6445] device (tap0e2e8c1e-12): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:28:54 np0005539552 ovn_controller[133798]: 2025-11-29T08:28:54Z|00689|binding|INFO|Releasing lport 0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d from this chassis (sb_readonly=0)
Nov 29 03:28:54 np0005539552 ovn_controller[133798]: 2025-11-29T08:28:54Z|00690|binding|INFO|Setting lport 0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d down in Southbound
Nov 29 03:28:54 np0005539552 ovn_controller[133798]: 2025-11-29T08:28:54Z|00691|binding|INFO|Removing iface tap0e2e8c1e-12 ovn-installed in OVS
Nov 29 03:28:54 np0005539552 nova_compute[233724]: 2025-11-29 08:28:54.651 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:54 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:54.660 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:72:4b:49 10.100.0.13'], port_security=['fa:16:3e:72:4b:49 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'bb261893-bfa1-4fdc-9c11-a33a733337ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea7b24ea9d7b4d239b4741634ac3f10c', 'neutron:revision_number': '6', 'neutron:security_group_ids': '525789b3-2118-4a66-bac0-ed0947cafa2d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2992474f-d5bf-4893-b4cc-2c774a8a9871, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:28:54 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:54.663 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d in datapath 4ca67fce-6116-4a0b-b0a9-c25b5adaad19 unbound from our chassis#033[00m
Nov 29 03:28:54 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:54.666 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4ca67fce-6116-4a0b-b0a9-c25b5adaad19, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:28:54 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:54.668 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6a759286-1738-40de-822f-8af6a1482cdb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:54 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:54.669 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19 namespace which is not needed anymore#033[00m
Nov 29 03:28:54 np0005539552 nova_compute[233724]: 2025-11-29 08:28:54.689 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:54 np0005539552 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d0000008c.scope: Deactivated successfully.
Nov 29 03:28:54 np0005539552 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d0000008c.scope: Consumed 18.823s CPU time.
Nov 29 03:28:54 np0005539552 systemd-machined[196379]: Machine qemu-63-instance-0000008c terminated.
Nov 29 03:28:54 np0005539552 nova_compute[233724]: 2025-11-29 08:28:54.809 233728 INFO nova.virt.libvirt.driver [-] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Instance destroyed successfully.#033[00m
Nov 29 03:28:54 np0005539552 nova_compute[233724]: 2025-11-29 08:28:54.810 233728 DEBUG nova.objects.instance [None req-d7329d74-50e5-49d8-88f0-2ab4dead2648 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lazy-loading 'resources' on Instance uuid bb261893-bfa1-4fdc-9c11-a33a733337ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:28:54 np0005539552 neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19[292928]: [NOTICE]   (292932) : haproxy version is 2.8.14-c23fe91
Nov 29 03:28:54 np0005539552 neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19[292928]: [NOTICE]   (292932) : path to executable is /usr/sbin/haproxy
Nov 29 03:28:54 np0005539552 neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19[292928]: [WARNING]  (292932) : Exiting Master process...
Nov 29 03:28:54 np0005539552 neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19[292928]: [WARNING]  (292932) : Exiting Master process...
Nov 29 03:28:54 np0005539552 neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19[292928]: [ALERT]    (292932) : Current worker (292934) exited with code 143 (Terminated)
Nov 29 03:28:54 np0005539552 neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19[292928]: [WARNING]  (292932) : All workers exited. Exiting... (0)
Nov 29 03:28:54 np0005539552 systemd[1]: libpod-9d54d8082fd4a641266c7257a37f309d64b779796f64c13a1ae3acbec0fe80f1.scope: Deactivated successfully.
Nov 29 03:28:54 np0005539552 podman[296110]: 2025-11-29 08:28:54.835195332 +0000 UTC m=+0.054991651 container died 9d54d8082fd4a641266c7257a37f309d64b779796f64c13a1ae3acbec0fe80f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:28:54 np0005539552 nova_compute[233724]: 2025-11-29 08:28:54.848 233728 DEBUG nova.virt.libvirt.vif [None req-d7329d74-50e5-49d8-88f0-2ab4dead2648 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:26:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1410922991',display_name='tempest-ServerRescueNegativeTestJSON-server-1410922991',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1410922991',id=140,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:26:52Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ea7b24ea9d7b4d239b4741634ac3f10c',ramdisk_id='',reservation_id='r-u00jfp1f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif
_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-2045177058',owner_user_name='tempest-ServerRescueNegativeTestJSON-2045177058-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:26:52Z,user_data=None,user_id='283f8136265e4425a5a31f840935b9ab',uuid=bb261893-bfa1-4fdc-9c11-a33a733337ce,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d", "address": "fa:16:3e:72:4b:49", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e2e8c1e-12", "ovs_interfaceid": "0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:28:54 np0005539552 nova_compute[233724]: 2025-11-29 08:28:54.849 233728 DEBUG nova.network.os_vif_util [None req-d7329d74-50e5-49d8-88f0-2ab4dead2648 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Converting VIF {"id": "0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d", "address": "fa:16:3e:72:4b:49", "network": {"id": "4ca67fce-6116-4a0b-b0a9-c25b5adaad19", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-580908283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea7b24ea9d7b4d239b4741634ac3f10c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e2e8c1e-12", "ovs_interfaceid": "0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:28:54 np0005539552 nova_compute[233724]: 2025-11-29 08:28:54.851 233728 DEBUG nova.network.os_vif_util [None req-d7329d74-50e5-49d8-88f0-2ab4dead2648 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:72:4b:49,bridge_name='br-int',has_traffic_filtering=True,id=0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d,network=Network(4ca67fce-6116-4a0b-b0a9-c25b5adaad19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e2e8c1e-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:28:54 np0005539552 nova_compute[233724]: 2025-11-29 08:28:54.851 233728 DEBUG os_vif [None req-d7329d74-50e5-49d8-88f0-2ab4dead2648 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:72:4b:49,bridge_name='br-int',has_traffic_filtering=True,id=0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d,network=Network(4ca67fce-6116-4a0b-b0a9-c25b5adaad19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e2e8c1e-12') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:28:54 np0005539552 nova_compute[233724]: 2025-11-29 08:28:54.854 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:54 np0005539552 nova_compute[233724]: 2025-11-29 08:28:54.855 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0e2e8c1e-12, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:54 np0005539552 nova_compute[233724]: 2025-11-29 08:28:54.857 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:54 np0005539552 nova_compute[233724]: 2025-11-29 08:28:54.860 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:28:54 np0005539552 nova_compute[233724]: 2025-11-29 08:28:54.863 233728 INFO os_vif [None req-d7329d74-50e5-49d8-88f0-2ab4dead2648 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:72:4b:49,bridge_name='br-int',has_traffic_filtering=True,id=0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d,network=Network(4ca67fce-6116-4a0b-b0a9-c25b5adaad19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e2e8c1e-12')#033[00m
Nov 29 03:28:54 np0005539552 systemd[1]: var-lib-containers-storage-overlay-4f370aeb1afde6a02a55d004f4439bcaf12cf73c643d93dfbb6adcff9bd5968e-merged.mount: Deactivated successfully.
Nov 29 03:28:54 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9d54d8082fd4a641266c7257a37f309d64b779796f64c13a1ae3acbec0fe80f1-userdata-shm.mount: Deactivated successfully.
Nov 29 03:28:54 np0005539552 podman[296110]: 2025-11-29 08:28:54.884690113 +0000 UTC m=+0.104486442 container cleanup 9d54d8082fd4a641266c7257a37f309d64b779796f64c13a1ae3acbec0fe80f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 03:28:54 np0005539552 systemd[1]: libpod-conmon-9d54d8082fd4a641266c7257a37f309d64b779796f64c13a1ae3acbec0fe80f1.scope: Deactivated successfully.
Nov 29 03:28:54 np0005539552 podman[296164]: 2025-11-29 08:28:54.955138159 +0000 UTC m=+0.047938651 container remove 9d54d8082fd4a641266c7257a37f309d64b779796f64c13a1ae3acbec0fe80f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:28:54 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:54.961 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d322aeb4-8f4f-40ad-8236-f903e4953bcb]: (4, ('Sat Nov 29 08:28:54 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19 (9d54d8082fd4a641266c7257a37f309d64b779796f64c13a1ae3acbec0fe80f1)\n9d54d8082fd4a641266c7257a37f309d64b779796f64c13a1ae3acbec0fe80f1\nSat Nov 29 08:28:54 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19 (9d54d8082fd4a641266c7257a37f309d64b779796f64c13a1ae3acbec0fe80f1)\n9d54d8082fd4a641266c7257a37f309d64b779796f64c13a1ae3acbec0fe80f1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:54 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:54.963 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[bc3e5d42-e810-4221-b836-f9a0cabd97cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:54 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:54.964 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ca67fce-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:54 np0005539552 nova_compute[233724]: 2025-11-29 08:28:54.966 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:54 np0005539552 kernel: tap4ca67fce-60: left promiscuous mode
Nov 29 03:28:54 np0005539552 nova_compute[233724]: 2025-11-29 08:28:54.985 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:54 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:54.987 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[404cc01f-847f-4356-b6ac-f134e87af556]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:55 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:55.003 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[505bb9d6-cc39-43c8-975e-e7292ca74fab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:55 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:55.004 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[887ef21a-163c-4827-9f29-463b0fa18d48]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:55 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:28:55 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1355403263' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:28:55 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:55.019 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[dd1927f1-3b07-4aa0-b5d3-82590ef2da7a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 785708, 'reachable_time': 29019, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296185, 'error': None, 'target': 'ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:55 np0005539552 systemd[1]: run-netns-ovnmeta\x2d4ca67fce\x2d6116\x2d4a0b\x2db0a9\x2dc25b5adaad19.mount: Deactivated successfully.
Nov 29 03:28:55 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:55.023 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4ca67fce-6116-4a0b-b0a9-c25b5adaad19 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:28:55 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:28:55.023 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[85e4fbd8-1c63-4bc7-bfff-d4fba7c7df94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:55 np0005539552 nova_compute[233724]: 2025-11-29 08:28:55.041 233728 DEBUG oslo_concurrency.processutils [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:55 np0005539552 nova_compute[233724]: 2025-11-29 08:28:55.046 233728 DEBUG nova.compute.provider_tree [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:28:55 np0005539552 nova_compute[233724]: 2025-11-29 08:28:55.077 233728 DEBUG nova.scheduler.client.report [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:28:55 np0005539552 nova_compute[233724]: 2025-11-29 08:28:55.138 233728 DEBUG oslo_concurrency.lockutils [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:55 np0005539552 nova_compute[233724]: 2025-11-29 08:28:55.139 233728 DEBUG nova.compute.manager [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:28:55 np0005539552 nova_compute[233724]: 2025-11-29 08:28:55.240 233728 DEBUG nova.compute.manager [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:28:55 np0005539552 nova_compute[233724]: 2025-11-29 08:28:55.242 233728 DEBUG nova.network.neutron [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:28:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:55.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:55 np0005539552 nova_compute[233724]: 2025-11-29 08:28:55.272 233728 INFO nova.virt.libvirt.driver [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:28:55 np0005539552 nova_compute[233724]: 2025-11-29 08:28:55.324 233728 DEBUG nova.compute.manager [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:28:55 np0005539552 nova_compute[233724]: 2025-11-29 08:28:55.489 233728 DEBUG nova.compute.manager [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:28:55 np0005539552 nova_compute[233724]: 2025-11-29 08:28:55.491 233728 DEBUG nova.virt.libvirt.driver [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:28:55 np0005539552 nova_compute[233724]: 2025-11-29 08:28:55.491 233728 INFO nova.virt.libvirt.driver [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Creating image(s)#033[00m
Nov 29 03:28:55 np0005539552 nova_compute[233724]: 2025-11-29 08:28:55.521 233728 DEBUG nova.storage.rbd_utils [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 101e7b80-d529-4f2a-87df-44512ead5b00_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:28:55 np0005539552 nova_compute[233724]: 2025-11-29 08:28:55.550 233728 DEBUG nova.storage.rbd_utils [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 101e7b80-d529-4f2a-87df-44512ead5b00_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:28:55 np0005539552 nova_compute[233724]: 2025-11-29 08:28:55.571 233728 DEBUG nova.storage.rbd_utils [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 101e7b80-d529-4f2a-87df-44512ead5b00_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:28:55 np0005539552 nova_compute[233724]: 2025-11-29 08:28:55.575 233728 DEBUG oslo_concurrency.processutils [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:55 np0005539552 nova_compute[233724]: 2025-11-29 08:28:55.657 233728 DEBUG oslo_concurrency.processutils [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:55 np0005539552 nova_compute[233724]: 2025-11-29 08:28:55.658 233728 DEBUG oslo_concurrency.lockutils [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:55 np0005539552 nova_compute[233724]: 2025-11-29 08:28:55.659 233728 DEBUG oslo_concurrency.lockutils [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:55 np0005539552 nova_compute[233724]: 2025-11-29 08:28:55.659 233728 DEBUG oslo_concurrency.lockutils [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:55 np0005539552 nova_compute[233724]: 2025-11-29 08:28:55.683 233728 DEBUG nova.storage.rbd_utils [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 101e7b80-d529-4f2a-87df-44512ead5b00_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:28:55 np0005539552 nova_compute[233724]: 2025-11-29 08:28:55.688 233728 DEBUG oslo_concurrency.processutils [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 101e7b80-d529-4f2a-87df-44512ead5b00_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:55 np0005539552 nova_compute[233724]: 2025-11-29 08:28:55.836 233728 DEBUG nova.policy [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0741d46905e94415a372bd62751dff66', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5970d12b2c42419e889cd48de28c4b86', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:28:55 np0005539552 nova_compute[233724]: 2025-11-29 08:28:55.995 233728 DEBUG oslo_concurrency.processutils [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 101e7b80-d529-4f2a-87df-44512ead5b00_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.308s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:56 np0005539552 nova_compute[233724]: 2025-11-29 08:28:56.075 233728 DEBUG nova.storage.rbd_utils [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] resizing rbd image 101e7b80-d529-4f2a-87df-44512ead5b00_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:28:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:56.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:56 np0005539552 nova_compute[233724]: 2025-11-29 08:28:56.221 233728 DEBUG nova.objects.instance [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lazy-loading 'migration_context' on Instance uuid 101e7b80-d529-4f2a-87df-44512ead5b00 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:28:56 np0005539552 nova_compute[233724]: 2025-11-29 08:28:56.238 233728 DEBUG nova.virt.libvirt.driver [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:28:56 np0005539552 nova_compute[233724]: 2025-11-29 08:28:56.239 233728 DEBUG nova.virt.libvirt.driver [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Ensure instance console log exists: /var/lib/nova/instances/101e7b80-d529-4f2a-87df-44512ead5b00/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:28:56 np0005539552 nova_compute[233724]: 2025-11-29 08:28:56.239 233728 DEBUG oslo_concurrency.lockutils [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:56 np0005539552 nova_compute[233724]: 2025-11-29 08:28:56.240 233728 DEBUG oslo_concurrency.lockutils [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:56 np0005539552 nova_compute[233724]: 2025-11-29 08:28:56.240 233728 DEBUG oslo_concurrency.lockutils [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:56 np0005539552 nova_compute[233724]: 2025-11-29 08:28:56.591 233728 INFO nova.virt.libvirt.driver [None req-d7329d74-50e5-49d8-88f0-2ab4dead2648 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Deleting instance files /var/lib/nova/instances/bb261893-bfa1-4fdc-9c11-a33a733337ce_del#033[00m
Nov 29 03:28:56 np0005539552 nova_compute[233724]: 2025-11-29 08:28:56.592 233728 INFO nova.virt.libvirt.driver [None req-d7329d74-50e5-49d8-88f0-2ab4dead2648 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Deletion of /var/lib/nova/instances/bb261893-bfa1-4fdc-9c11-a33a733337ce_del complete#033[00m
Nov 29 03:28:56 np0005539552 nova_compute[233724]: 2025-11-29 08:28:56.631 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:56 np0005539552 nova_compute[233724]: 2025-11-29 08:28:56.899 233728 INFO nova.compute.manager [None req-d7329d74-50e5-49d8-88f0-2ab4dead2648 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Took 2.33 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:28:56 np0005539552 nova_compute[233724]: 2025-11-29 08:28:56.902 233728 DEBUG oslo.service.loopingcall [None req-d7329d74-50e5-49d8-88f0-2ab4dead2648 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:28:56 np0005539552 nova_compute[233724]: 2025-11-29 08:28:56.902 233728 DEBUG nova.compute.manager [-] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:28:56 np0005539552 nova_compute[233724]: 2025-11-29 08:28:56.903 233728 DEBUG nova.network.neutron [-] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:28:57 np0005539552 nova_compute[233724]: 2025-11-29 08:28:57.004 233728 DEBUG nova.network.neutron [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Successfully created port: 41f8db2f-85ae-4916-91a0-fedefca2c76e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:28:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:57.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:58.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:58 np0005539552 nova_compute[233724]: 2025-11-29 08:28:58.308 233728 DEBUG nova.compute.manager [req-b214920d-56e5-44f6-b8af-66a9a01c841f req-f6da463e-b309-42f1-81d8-e558e80bfa9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Received event network-vif-unplugged-0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:28:58 np0005539552 nova_compute[233724]: 2025-11-29 08:28:58.309 233728 DEBUG oslo_concurrency.lockutils [req-b214920d-56e5-44f6-b8af-66a9a01c841f req-f6da463e-b309-42f1-81d8-e558e80bfa9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "bb261893-bfa1-4fdc-9c11-a33a733337ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:58 np0005539552 nova_compute[233724]: 2025-11-29 08:28:58.309 233728 DEBUG oslo_concurrency.lockutils [req-b214920d-56e5-44f6-b8af-66a9a01c841f req-f6da463e-b309-42f1-81d8-e558e80bfa9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bb261893-bfa1-4fdc-9c11-a33a733337ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:58 np0005539552 nova_compute[233724]: 2025-11-29 08:28:58.309 233728 DEBUG oslo_concurrency.lockutils [req-b214920d-56e5-44f6-b8af-66a9a01c841f req-f6da463e-b309-42f1-81d8-e558e80bfa9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bb261893-bfa1-4fdc-9c11-a33a733337ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:58 np0005539552 nova_compute[233724]: 2025-11-29 08:28:58.310 233728 DEBUG nova.compute.manager [req-b214920d-56e5-44f6-b8af-66a9a01c841f req-f6da463e-b309-42f1-81d8-e558e80bfa9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] No waiting events found dispatching network-vif-unplugged-0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:28:58 np0005539552 nova_compute[233724]: 2025-11-29 08:28:58.310 233728 DEBUG nova.compute.manager [req-b214920d-56e5-44f6-b8af-66a9a01c841f req-f6da463e-b309-42f1-81d8-e558e80bfa9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Received event network-vif-unplugged-0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:28:58 np0005539552 nova_compute[233724]: 2025-11-29 08:28:58.311 233728 DEBUG nova.compute.manager [req-b214920d-56e5-44f6-b8af-66a9a01c841f req-f6da463e-b309-42f1-81d8-e558e80bfa9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Received event network-vif-plugged-0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:28:58 np0005539552 nova_compute[233724]: 2025-11-29 08:28:58.311 233728 DEBUG oslo_concurrency.lockutils [req-b214920d-56e5-44f6-b8af-66a9a01c841f req-f6da463e-b309-42f1-81d8-e558e80bfa9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "bb261893-bfa1-4fdc-9c11-a33a733337ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:58 np0005539552 nova_compute[233724]: 2025-11-29 08:28:58.312 233728 DEBUG oslo_concurrency.lockutils [req-b214920d-56e5-44f6-b8af-66a9a01c841f req-f6da463e-b309-42f1-81d8-e558e80bfa9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bb261893-bfa1-4fdc-9c11-a33a733337ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:58 np0005539552 nova_compute[233724]: 2025-11-29 08:28:58.312 233728 DEBUG oslo_concurrency.lockutils [req-b214920d-56e5-44f6-b8af-66a9a01c841f req-f6da463e-b309-42f1-81d8-e558e80bfa9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bb261893-bfa1-4fdc-9c11-a33a733337ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:58 np0005539552 nova_compute[233724]: 2025-11-29 08:28:58.312 233728 DEBUG nova.compute.manager [req-b214920d-56e5-44f6-b8af-66a9a01c841f req-f6da463e-b309-42f1-81d8-e558e80bfa9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] No waiting events found dispatching network-vif-plugged-0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:28:58 np0005539552 nova_compute[233724]: 2025-11-29 08:28:58.313 233728 WARNING nova.compute.manager [req-b214920d-56e5-44f6-b8af-66a9a01c841f req-f6da463e-b309-42f1-81d8-e558e80bfa9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Received unexpected event network-vif-plugged-0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d for instance with vm_state rescued and task_state deleting.#033[00m
Nov 29 03:28:58 np0005539552 nova_compute[233724]: 2025-11-29 08:28:58.424 233728 DEBUG nova.network.neutron [-] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:28:58 np0005539552 nova_compute[233724]: 2025-11-29 08:28:58.484 233728 INFO nova.compute.manager [-] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Took 1.58 seconds to deallocate network for instance.#033[00m
Nov 29 03:28:58 np0005539552 nova_compute[233724]: 2025-11-29 08:28:58.532 233728 DEBUG nova.network.neutron [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Successfully updated port: 41f8db2f-85ae-4916-91a0-fedefca2c76e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:28:58 np0005539552 nova_compute[233724]: 2025-11-29 08:28:58.562 233728 DEBUG oslo_concurrency.lockutils [None req-d7329d74-50e5-49d8-88f0-2ab4dead2648 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:58 np0005539552 nova_compute[233724]: 2025-11-29 08:28:58.562 233728 DEBUG oslo_concurrency.lockutils [None req-d7329d74-50e5-49d8-88f0-2ab4dead2648 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:58 np0005539552 nova_compute[233724]: 2025-11-29 08:28:58.571 233728 DEBUG oslo_concurrency.lockutils [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "refresh_cache-101e7b80-d529-4f2a-87df-44512ead5b00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:28:58 np0005539552 nova_compute[233724]: 2025-11-29 08:28:58.571 233728 DEBUG oslo_concurrency.lockutils [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquired lock "refresh_cache-101e7b80-d529-4f2a-87df-44512ead5b00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:28:58 np0005539552 nova_compute[233724]: 2025-11-29 08:28:58.572 233728 DEBUG nova.network.neutron [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:28:58 np0005539552 nova_compute[233724]: 2025-11-29 08:28:58.651 233728 DEBUG oslo_concurrency.processutils [None req-d7329d74-50e5-49d8-88f0-2ab4dead2648 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:58 np0005539552 nova_compute[233724]: 2025-11-29 08:28:58.701 233728 DEBUG nova.compute.manager [req-0a1b0ea5-1ba3-4363-8884-032fa080fbd3 req-be6044bf-e578-42cb-b353-80d38108fb57 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Received event network-changed-41f8db2f-85ae-4916-91a0-fedefca2c76e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:28:58 np0005539552 nova_compute[233724]: 2025-11-29 08:28:58.702 233728 DEBUG nova.compute.manager [req-0a1b0ea5-1ba3-4363-8884-032fa080fbd3 req-be6044bf-e578-42cb-b353-80d38108fb57 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Refreshing instance network info cache due to event network-changed-41f8db2f-85ae-4916-91a0-fedefca2c76e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:28:58 np0005539552 nova_compute[233724]: 2025-11-29 08:28:58.703 233728 DEBUG oslo_concurrency.lockutils [req-0a1b0ea5-1ba3-4363-8884-032fa080fbd3 req-be6044bf-e578-42cb-b353-80d38108fb57 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-101e7b80-d529-4f2a-87df-44512ead5b00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:28:59 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:28:59 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2524287970' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:28:59 np0005539552 nova_compute[233724]: 2025-11-29 08:28:59.112 233728 DEBUG oslo_concurrency.processutils [None req-d7329d74-50e5-49d8-88f0-2ab4dead2648 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:59 np0005539552 nova_compute[233724]: 2025-11-29 08:28:59.120 233728 DEBUG nova.compute.provider_tree [None req-d7329d74-50e5-49d8-88f0-2ab4dead2648 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:28:59 np0005539552 nova_compute[233724]: 2025-11-29 08:28:59.140 233728 DEBUG nova.scheduler.client.report [None req-d7329d74-50e5-49d8-88f0-2ab4dead2648 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:28:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:28:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:59.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:59 np0005539552 nova_compute[233724]: 2025-11-29 08:28:59.340 233728 DEBUG oslo_concurrency.lockutils [None req-d7329d74-50e5-49d8-88f0-2ab4dead2648 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.777s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:59 np0005539552 nova_compute[233724]: 2025-11-29 08:28:59.374 233728 INFO nova.scheduler.client.report [None req-d7329d74-50e5-49d8-88f0-2ab4dead2648 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Deleted allocations for instance bb261893-bfa1-4fdc-9c11-a33a733337ce#033[00m
Nov 29 03:28:59 np0005539552 nova_compute[233724]: 2025-11-29 08:28:59.414 233728 DEBUG nova.network.neutron [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:28:59 np0005539552 nova_compute[233724]: 2025-11-29 08:28:59.461 233728 DEBUG oslo_concurrency.lockutils [None req-d7329d74-50e5-49d8-88f0-2ab4dead2648 283f8136265e4425a5a31f840935b9ab ea7b24ea9d7b4d239b4741634ac3f10c - - default default] Lock "bb261893-bfa1-4fdc-9c11-a33a733337ce" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.900s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:59 np0005539552 nova_compute[233724]: 2025-11-29 08:28:59.837 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404924.8361442, d6363749-92b8-41e7-860c-63dc695390e4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:28:59 np0005539552 nova_compute[233724]: 2025-11-29 08:28:59.838 233728 INFO nova.compute.manager [-] [instance: d6363749-92b8-41e7-860c-63dc695390e4] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:28:59 np0005539552 nova_compute[233724]: 2025-11-29 08:28:59.858 233728 DEBUG nova.compute.manager [None req-4454a1bd-7153-49e7-ad5b-9838e90dbbf4 - - - - - -] [instance: d6363749-92b8-41e7-860c-63dc695390e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:28:59 np0005539552 nova_compute[233724]: 2025-11-29 08:28:59.860 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:00.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:00 np0005539552 nova_compute[233724]: 2025-11-29 08:29:00.455 233728 DEBUG nova.compute.manager [req-51fbf194-abcc-42ae-869f-634f33f58445 req-817564d5-06d4-43f7-a89c-8eb40127f8de 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Received event network-vif-deleted-0e2e8c1e-12c0-48fa-86c0-e8f3ab60421d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:01.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:01 np0005539552 nova_compute[233724]: 2025-11-29 08:29:01.633 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:02.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:03.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:03 np0005539552 nova_compute[233724]: 2025-11-29 08:29:03.377 233728 DEBUG nova.network.neutron [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Updating instance_info_cache with network_info: [{"id": "41f8db2f-85ae-4916-91a0-fedefca2c76e", "address": "fa:16:3e:02:b7:ca", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41f8db2f-85", "ovs_interfaceid": "41f8db2f-85ae-4916-91a0-fedefca2c76e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:29:03 np0005539552 nova_compute[233724]: 2025-11-29 08:29:03.423 233728 DEBUG oslo_concurrency.lockutils [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Releasing lock "refresh_cache-101e7b80-d529-4f2a-87df-44512ead5b00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:29:03 np0005539552 nova_compute[233724]: 2025-11-29 08:29:03.423 233728 DEBUG nova.compute.manager [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Instance network_info: |[{"id": "41f8db2f-85ae-4916-91a0-fedefca2c76e", "address": "fa:16:3e:02:b7:ca", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41f8db2f-85", "ovs_interfaceid": "41f8db2f-85ae-4916-91a0-fedefca2c76e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:29:03 np0005539552 nova_compute[233724]: 2025-11-29 08:29:03.423 233728 DEBUG oslo_concurrency.lockutils [req-0a1b0ea5-1ba3-4363-8884-032fa080fbd3 req-be6044bf-e578-42cb-b353-80d38108fb57 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-101e7b80-d529-4f2a-87df-44512ead5b00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:29:03 np0005539552 nova_compute[233724]: 2025-11-29 08:29:03.424 233728 DEBUG nova.network.neutron [req-0a1b0ea5-1ba3-4363-8884-032fa080fbd3 req-be6044bf-e578-42cb-b353-80d38108fb57 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Refreshing network info cache for port 41f8db2f-85ae-4916-91a0-fedefca2c76e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:29:03 np0005539552 nova_compute[233724]: 2025-11-29 08:29:03.427 233728 DEBUG nova.virt.libvirt.driver [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Start _get_guest_xml network_info=[{"id": "41f8db2f-85ae-4916-91a0-fedefca2c76e", "address": "fa:16:3e:02:b7:ca", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41f8db2f-85", "ovs_interfaceid": "41f8db2f-85ae-4916-91a0-fedefca2c76e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:29:03 np0005539552 nova_compute[233724]: 2025-11-29 08:29:03.432 233728 WARNING nova.virt.libvirt.driver [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:29:03 np0005539552 nova_compute[233724]: 2025-11-29 08:29:03.438 233728 DEBUG nova.virt.libvirt.host [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:29:03 np0005539552 nova_compute[233724]: 2025-11-29 08:29:03.438 233728 DEBUG nova.virt.libvirt.host [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:29:03 np0005539552 nova_compute[233724]: 2025-11-29 08:29:03.441 233728 DEBUG nova.virt.libvirt.host [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:29:03 np0005539552 nova_compute[233724]: 2025-11-29 08:29:03.441 233728 DEBUG nova.virt.libvirt.host [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:29:03 np0005539552 nova_compute[233724]: 2025-11-29 08:29:03.442 233728 DEBUG nova.virt.libvirt.driver [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:29:03 np0005539552 nova_compute[233724]: 2025-11-29 08:29:03.443 233728 DEBUG nova.virt.hardware [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:29:03 np0005539552 nova_compute[233724]: 2025-11-29 08:29:03.443 233728 DEBUG nova.virt.hardware [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:29:03 np0005539552 nova_compute[233724]: 2025-11-29 08:29:03.443 233728 DEBUG nova.virt.hardware [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:29:03 np0005539552 nova_compute[233724]: 2025-11-29 08:29:03.444 233728 DEBUG nova.virt.hardware [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:29:03 np0005539552 nova_compute[233724]: 2025-11-29 08:29:03.444 233728 DEBUG nova.virt.hardware [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:29:03 np0005539552 nova_compute[233724]: 2025-11-29 08:29:03.444 233728 DEBUG nova.virt.hardware [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:29:03 np0005539552 nova_compute[233724]: 2025-11-29 08:29:03.444 233728 DEBUG nova.virt.hardware [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:29:03 np0005539552 nova_compute[233724]: 2025-11-29 08:29:03.445 233728 DEBUG nova.virt.hardware [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:29:03 np0005539552 nova_compute[233724]: 2025-11-29 08:29:03.445 233728 DEBUG nova.virt.hardware [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:29:03 np0005539552 nova_compute[233724]: 2025-11-29 08:29:03.445 233728 DEBUG nova.virt.hardware [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:29:03 np0005539552 nova_compute[233724]: 2025-11-29 08:29:03.446 233728 DEBUG nova.virt.hardware [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:29:03 np0005539552 nova_compute[233724]: 2025-11-29 08:29:03.449 233728 DEBUG oslo_concurrency.processutils [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:29:03 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1032099395' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:29:03 np0005539552 nova_compute[233724]: 2025-11-29 08:29:03.940 233728 DEBUG oslo_concurrency.processutils [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:03 np0005539552 nova_compute[233724]: 2025-11-29 08:29:03.964 233728 DEBUG nova.storage.rbd_utils [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 101e7b80-d529-4f2a-87df-44512ead5b00_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:29:03 np0005539552 nova_compute[233724]: 2025-11-29 08:29:03.969 233728 DEBUG oslo_concurrency.processutils [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:04.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:04 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:29:04 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/455850950' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:29:04 np0005539552 nova_compute[233724]: 2025-11-29 08:29:04.416 233728 DEBUG oslo_concurrency.processutils [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:04 np0005539552 nova_compute[233724]: 2025-11-29 08:29:04.419 233728 DEBUG nova.virt.libvirt.vif [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:28:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-594734656',display_name='tempest-₡-594734656',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest--594734656',id=147,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5970d12b2c42419e889cd48de28c4b86',ramdisk_id='',reservation_id='r-lqhdbprg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1509574488',owner_user_name='tempest-ServersTestJSON-1509574488-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_cer
ts=None,updated_at=2025-11-29T08:28:55Z,user_data=None,user_id='0741d46905e94415a372bd62751dff66',uuid=101e7b80-d529-4f2a-87df-44512ead5b00,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "41f8db2f-85ae-4916-91a0-fedefca2c76e", "address": "fa:16:3e:02:b7:ca", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41f8db2f-85", "ovs_interfaceid": "41f8db2f-85ae-4916-91a0-fedefca2c76e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:29:04 np0005539552 nova_compute[233724]: 2025-11-29 08:29:04.420 233728 DEBUG nova.network.os_vif_util [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Converting VIF {"id": "41f8db2f-85ae-4916-91a0-fedefca2c76e", "address": "fa:16:3e:02:b7:ca", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41f8db2f-85", "ovs_interfaceid": "41f8db2f-85ae-4916-91a0-fedefca2c76e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:29:04 np0005539552 nova_compute[233724]: 2025-11-29 08:29:04.421 233728 DEBUG nova.network.os_vif_util [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:b7:ca,bridge_name='br-int',has_traffic_filtering=True,id=41f8db2f-85ae-4916-91a0-fedefca2c76e,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41f8db2f-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:29:04 np0005539552 nova_compute[233724]: 2025-11-29 08:29:04.423 233728 DEBUG nova.objects.instance [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lazy-loading 'pci_devices' on Instance uuid 101e7b80-d529-4f2a-87df-44512ead5b00 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:29:04 np0005539552 nova_compute[233724]: 2025-11-29 08:29:04.446 233728 DEBUG nova.virt.libvirt.driver [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:29:04 np0005539552 nova_compute[233724]:  <uuid>101e7b80-d529-4f2a-87df-44512ead5b00</uuid>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:  <name>instance-00000093</name>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:29:04 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:      <nova:name>tempest-₡-594734656</nova:name>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:29:03</nova:creationTime>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:29:04 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:        <nova:user uuid="0741d46905e94415a372bd62751dff66">tempest-ServersTestJSON-1509574488-project-member</nova:user>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:        <nova:project uuid="5970d12b2c42419e889cd48de28c4b86">tempest-ServersTestJSON-1509574488</nova:project>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:        <nova:port uuid="41f8db2f-85ae-4916-91a0-fedefca2c76e">
Nov 29 03:29:04 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:      <entry name="serial">101e7b80-d529-4f2a-87df-44512ead5b00</entry>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:      <entry name="uuid">101e7b80-d529-4f2a-87df-44512ead5b00</entry>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:29:04 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/101e7b80-d529-4f2a-87df-44512ead5b00_disk">
Nov 29 03:29:04 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:29:04 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:29:04 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/101e7b80-d529-4f2a-87df-44512ead5b00_disk.config">
Nov 29 03:29:04 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:29:04 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:29:04 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:02:b7:ca"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:      <target dev="tap41f8db2f-85"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:29:04 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/101e7b80-d529-4f2a-87df-44512ead5b00/console.log" append="off"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:29:04 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:29:04 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:29:04 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:29:04 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:29:04 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:29:04 np0005539552 nova_compute[233724]: 2025-11-29 08:29:04.448 233728 DEBUG nova.compute.manager [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Preparing to wait for external event network-vif-plugged-41f8db2f-85ae-4916-91a0-fedefca2c76e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:29:04 np0005539552 nova_compute[233724]: 2025-11-29 08:29:04.448 233728 DEBUG oslo_concurrency.lockutils [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "101e7b80-d529-4f2a-87df-44512ead5b00-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:04 np0005539552 nova_compute[233724]: 2025-11-29 08:29:04.449 233728 DEBUG oslo_concurrency.lockutils [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "101e7b80-d529-4f2a-87df-44512ead5b00-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:04 np0005539552 nova_compute[233724]: 2025-11-29 08:29:04.449 233728 DEBUG oslo_concurrency.lockutils [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "101e7b80-d529-4f2a-87df-44512ead5b00-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:04 np0005539552 nova_compute[233724]: 2025-11-29 08:29:04.450 233728 DEBUG nova.virt.libvirt.vif [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:28:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-594734656',display_name='tempest-₡-594734656',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest--594734656',id=147,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5970d12b2c42419e889cd48de28c4b86',ramdisk_id='',reservation_id='r-lqhdbprg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1509574488',owner_user_name='tempest-ServersTestJSON-1509574488-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:28:55Z,user_data=None,user_id='0741d46905e94415a372bd62751dff66',uuid=101e7b80-d529-4f2a-87df-44512ead5b00,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "41f8db2f-85ae-4916-91a0-fedefca2c76e", "address": "fa:16:3e:02:b7:ca", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41f8db2f-85", "ovs_interfaceid": "41f8db2f-85ae-4916-91a0-fedefca2c76e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:29:04 np0005539552 nova_compute[233724]: 2025-11-29 08:29:04.450 233728 DEBUG nova.network.os_vif_util [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Converting VIF {"id": "41f8db2f-85ae-4916-91a0-fedefca2c76e", "address": "fa:16:3e:02:b7:ca", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41f8db2f-85", "ovs_interfaceid": "41f8db2f-85ae-4916-91a0-fedefca2c76e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:29:04 np0005539552 nova_compute[233724]: 2025-11-29 08:29:04.451 233728 DEBUG nova.network.os_vif_util [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:b7:ca,bridge_name='br-int',has_traffic_filtering=True,id=41f8db2f-85ae-4916-91a0-fedefca2c76e,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41f8db2f-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:29:04 np0005539552 nova_compute[233724]: 2025-11-29 08:29:04.452 233728 DEBUG os_vif [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:b7:ca,bridge_name='br-int',has_traffic_filtering=True,id=41f8db2f-85ae-4916-91a0-fedefca2c76e,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41f8db2f-85') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:29:04 np0005539552 nova_compute[233724]: 2025-11-29 08:29:04.453 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:04 np0005539552 nova_compute[233724]: 2025-11-29 08:29:04.453 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:04 np0005539552 nova_compute[233724]: 2025-11-29 08:29:04.454 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:29:04 np0005539552 nova_compute[233724]: 2025-11-29 08:29:04.456 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:04 np0005539552 nova_compute[233724]: 2025-11-29 08:29:04.457 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41f8db2f-85, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:04 np0005539552 nova_compute[233724]: 2025-11-29 08:29:04.457 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap41f8db2f-85, col_values=(('external_ids', {'iface-id': '41f8db2f-85ae-4916-91a0-fedefca2c76e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:02:b7:ca', 'vm-uuid': '101e7b80-d529-4f2a-87df-44512ead5b00'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:04 np0005539552 nova_compute[233724]: 2025-11-29 08:29:04.459 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:04 np0005539552 NetworkManager[48926]: <info>  [1764404944.4606] manager: (tap41f8db2f-85): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/299)
Nov 29 03:29:04 np0005539552 nova_compute[233724]: 2025-11-29 08:29:04.462 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:29:04 np0005539552 nova_compute[233724]: 2025-11-29 08:29:04.464 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:04 np0005539552 nova_compute[233724]: 2025-11-29 08:29:04.466 233728 INFO os_vif [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:b7:ca,bridge_name='br-int',has_traffic_filtering=True,id=41f8db2f-85ae-4916-91a0-fedefca2c76e,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41f8db2f-85')#033[00m
Nov 29 03:29:04 np0005539552 nova_compute[233724]: 2025-11-29 08:29:04.533 233728 DEBUG nova.virt.libvirt.driver [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:29:04 np0005539552 nova_compute[233724]: 2025-11-29 08:29:04.534 233728 DEBUG nova.virt.libvirt.driver [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:29:04 np0005539552 nova_compute[233724]: 2025-11-29 08:29:04.534 233728 DEBUG nova.virt.libvirt.driver [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] No VIF found with MAC fa:16:3e:02:b7:ca, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:29:04 np0005539552 nova_compute[233724]: 2025-11-29 08:29:04.535 233728 INFO nova.virt.libvirt.driver [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Using config drive#033[00m
Nov 29 03:29:04 np0005539552 nova_compute[233724]: 2025-11-29 08:29:04.557 233728 DEBUG nova.storage.rbd_utils [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 101e7b80-d529-4f2a-87df-44512ead5b00_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:29:05 np0005539552 nova_compute[233724]: 2025-11-29 08:29:05.039 233728 INFO nova.virt.libvirt.driver [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Creating config drive at /var/lib/nova/instances/101e7b80-d529-4f2a-87df-44512ead5b00/disk.config#033[00m
Nov 29 03:29:05 np0005539552 nova_compute[233724]: 2025-11-29 08:29:05.047 233728 DEBUG oslo_concurrency.processutils [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/101e7b80-d529-4f2a-87df-44512ead5b00/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkgbwpt7e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:05 np0005539552 nova_compute[233724]: 2025-11-29 08:29:05.199 233728 DEBUG oslo_concurrency.processutils [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/101e7b80-d529-4f2a-87df-44512ead5b00/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkgbwpt7e" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:05 np0005539552 nova_compute[233724]: 2025-11-29 08:29:05.237 233728 DEBUG nova.storage.rbd_utils [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 101e7b80-d529-4f2a-87df-44512ead5b00_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:29:05 np0005539552 nova_compute[233724]: 2025-11-29 08:29:05.241 233728 DEBUG oslo_concurrency.processutils [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/101e7b80-d529-4f2a-87df-44512ead5b00/disk.config 101e7b80-d529-4f2a-87df-44512ead5b00_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:05.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:05 np0005539552 nova_compute[233724]: 2025-11-29 08:29:05.426 233728 DEBUG oslo_concurrency.processutils [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/101e7b80-d529-4f2a-87df-44512ead5b00/disk.config 101e7b80-d529-4f2a-87df-44512ead5b00_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.185s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:05 np0005539552 nova_compute[233724]: 2025-11-29 08:29:05.427 233728 INFO nova.virt.libvirt.driver [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Deleting local config drive /var/lib/nova/instances/101e7b80-d529-4f2a-87df-44512ead5b00/disk.config because it was imported into RBD.#033[00m
Nov 29 03:29:05 np0005539552 kernel: tap41f8db2f-85: entered promiscuous mode
Nov 29 03:29:05 np0005539552 NetworkManager[48926]: <info>  [1764404945.4829] manager: (tap41f8db2f-85): new Tun device (/org/freedesktop/NetworkManager/Devices/300)
Nov 29 03:29:05 np0005539552 ovn_controller[133798]: 2025-11-29T08:29:05Z|00692|binding|INFO|Claiming lport 41f8db2f-85ae-4916-91a0-fedefca2c76e for this chassis.
Nov 29 03:29:05 np0005539552 ovn_controller[133798]: 2025-11-29T08:29:05Z|00693|binding|INFO|41f8db2f-85ae-4916-91a0-fedefca2c76e: Claiming fa:16:3e:02:b7:ca 10.100.0.5
Nov 29 03:29:05 np0005539552 nova_compute[233724]: 2025-11-29 08:29:05.485 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:05.496 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:b7:ca 10.100.0.5'], port_security=['fa:16:3e:02:b7:ca 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '101e7b80-d529-4f2a-87df-44512ead5b00', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14ea2b48-9984-443b-82fc-568ae98723fc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5970d12b2c42419e889cd48de28c4b86', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1f4c15e1-3db4-4257-8a40-7ffdc4076590', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=deb2b192-93f0-4938-a0e1-77284f619a46, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=41f8db2f-85ae-4916-91a0-fedefca2c76e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:05.499 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 41f8db2f-85ae-4916-91a0-fedefca2c76e in datapath 14ea2b48-9984-443b-82fc-568ae98723fc bound to our chassis#033[00m
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:05.501 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 14ea2b48-9984-443b-82fc-568ae98723fc#033[00m
Nov 29 03:29:05 np0005539552 ovn_controller[133798]: 2025-11-29T08:29:05Z|00694|binding|INFO|Setting lport 41f8db2f-85ae-4916-91a0-fedefca2c76e ovn-installed in OVS
Nov 29 03:29:05 np0005539552 ovn_controller[133798]: 2025-11-29T08:29:05Z|00695|binding|INFO|Setting lport 41f8db2f-85ae-4916-91a0-fedefca2c76e up in Southbound
Nov 29 03:29:05 np0005539552 nova_compute[233724]: 2025-11-29 08:29:05.505 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:05 np0005539552 nova_compute[233724]: 2025-11-29 08:29:05.509 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:05 np0005539552 systemd-machined[196379]: New machine qemu-69-instance-00000093.
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:05.519 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[adf6fb48-cd2f-4006-8b4d-65bf7847e735]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:05.520 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap14ea2b48-91 in ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:05.523 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap14ea2b48-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:05.523 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b41ffca4-a16e-4894-8e65-b4257cb95a02]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:05.523 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f6335d72-d2d7-47e7-b182-18184b73e6b0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:05 np0005539552 systemd[1]: Started Virtual Machine qemu-69-instance-00000093.
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:05.542 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[04eeb2b3-d6e7-4d48-8ea4-6aab451a0b7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:05 np0005539552 nova_compute[233724]: 2025-11-29 08:29:05.557 233728 DEBUG nova.network.neutron [req-0a1b0ea5-1ba3-4363-8884-032fa080fbd3 req-be6044bf-e578-42cb-b353-80d38108fb57 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Updated VIF entry in instance network info cache for port 41f8db2f-85ae-4916-91a0-fedefca2c76e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:29:05 np0005539552 nova_compute[233724]: 2025-11-29 08:29:05.557 233728 DEBUG nova.network.neutron [req-0a1b0ea5-1ba3-4363-8884-032fa080fbd3 req-be6044bf-e578-42cb-b353-80d38108fb57 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Updating instance_info_cache with network_info: [{"id": "41f8db2f-85ae-4916-91a0-fedefca2c76e", "address": "fa:16:3e:02:b7:ca", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41f8db2f-85", "ovs_interfaceid": "41f8db2f-85ae-4916-91a0-fedefca2c76e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:29:05 np0005539552 systemd-udevd[296576]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:05.569 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[63a86ec6-4ae9-492c-980b-6a6d21e26a30]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:05 np0005539552 NetworkManager[48926]: <info>  [1764404945.5860] device (tap41f8db2f-85): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:29:05 np0005539552 NetworkManager[48926]: <info>  [1764404945.5874] device (tap41f8db2f-85): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:29:05 np0005539552 nova_compute[233724]: 2025-11-29 08:29:05.603 233728 DEBUG oslo_concurrency.lockutils [req-0a1b0ea5-1ba3-4363-8884-032fa080fbd3 req-be6044bf-e578-42cb-b353-80d38108fb57 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-101e7b80-d529-4f2a-87df-44512ead5b00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:05.611 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[bf6e7d6e-5825-47cb-83d3-3c4b8af20b9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:05 np0005539552 NetworkManager[48926]: <info>  [1764404945.6170] manager: (tap14ea2b48-90): new Veth device (/org/freedesktop/NetworkManager/Devices/301)
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:05.616 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[335b70c2-696f-487b-948d-e46284eaec2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:05.656 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[a0b38aa7-d1e6-4ef2-a829-168e912315a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:05.661 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[082149b1-4699-4f72-8670-8feffc01ef99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:05 np0005539552 NetworkManager[48926]: <info>  [1764404945.6903] device (tap14ea2b48-90): carrier: link connected
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:05.699 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[056c046c-1255-4594-868a-a3b33245a168]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:05.723 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ea04bfa1-203a-46ba-be98-e41478e38b81]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14ea2b48-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f8:16:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 205], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 799137, 'reachable_time': 16968, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296606, 'error': None, 'target': 'ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:05.742 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[59367c16-5e1a-410d-bbdb-4760037aab5e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef8:168b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 799137, 'tstamp': 799137}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296607, 'error': None, 'target': 'ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:05.764 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[997df571-c4fa-412d-8c49-a77e5d094a8a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14ea2b48-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f8:16:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 205], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 799137, 'reachable_time': 16968, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 296608, 'error': None, 'target': 'ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:05.798 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0f04db96-abb6-4dfa-940a-07a1d69fadcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:05 np0005539552 nova_compute[233724]: 2025-11-29 08:29:05.801 233728 DEBUG nova.compute.manager [req-5f82e20f-4cac-43af-9e0c-90b48a5b8973 req-92182bfc-9435-4c96-894f-f8e43fabdbba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Received event network-vif-plugged-41f8db2f-85ae-4916-91a0-fedefca2c76e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:05 np0005539552 nova_compute[233724]: 2025-11-29 08:29:05.802 233728 DEBUG oslo_concurrency.lockutils [req-5f82e20f-4cac-43af-9e0c-90b48a5b8973 req-92182bfc-9435-4c96-894f-f8e43fabdbba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "101e7b80-d529-4f2a-87df-44512ead5b00-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:05 np0005539552 nova_compute[233724]: 2025-11-29 08:29:05.802 233728 DEBUG oslo_concurrency.lockutils [req-5f82e20f-4cac-43af-9e0c-90b48a5b8973 req-92182bfc-9435-4c96-894f-f8e43fabdbba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "101e7b80-d529-4f2a-87df-44512ead5b00-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:05 np0005539552 nova_compute[233724]: 2025-11-29 08:29:05.802 233728 DEBUG oslo_concurrency.lockutils [req-5f82e20f-4cac-43af-9e0c-90b48a5b8973 req-92182bfc-9435-4c96-894f-f8e43fabdbba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "101e7b80-d529-4f2a-87df-44512ead5b00-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:05 np0005539552 nova_compute[233724]: 2025-11-29 08:29:05.803 233728 DEBUG nova.compute.manager [req-5f82e20f-4cac-43af-9e0c-90b48a5b8973 req-92182bfc-9435-4c96-894f-f8e43fabdbba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Processing event network-vif-plugged-41f8db2f-85ae-4916-91a0-fedefca2c76e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:05.878 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4b38ccd9-887e-4437-abcd-964b26f42f66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:05.880 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14ea2b48-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:05.880 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:05.881 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14ea2b48-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:05 np0005539552 kernel: tap14ea2b48-90: entered promiscuous mode
Nov 29 03:29:05 np0005539552 NetworkManager[48926]: <info>  [1764404945.8839] manager: (tap14ea2b48-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/302)
Nov 29 03:29:05 np0005539552 nova_compute[233724]: 2025-11-29 08:29:05.883 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:05.889 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap14ea2b48-90, col_values=(('external_ids', {'iface-id': '42f71355-5b3f-49f9-b3e9-d89b87086d5d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:05 np0005539552 ovn_controller[133798]: 2025-11-29T08:29:05Z|00696|binding|INFO|Releasing lport 42f71355-5b3f-49f9-b3e9-d89b87086d5d from this chassis (sb_readonly=0)
Nov 29 03:29:05 np0005539552 nova_compute[233724]: 2025-11-29 08:29:05.891 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:05.895 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/14ea2b48-9984-443b-82fc-568ae98723fc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/14ea2b48-9984-443b-82fc-568ae98723fc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:05.896 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4df2a40d-4e51-41d4-b626-83981170d3ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:05.897 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-14ea2b48-9984-443b-82fc-568ae98723fc
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/14ea2b48-9984-443b-82fc-568ae98723fc.pid.haproxy
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 14ea2b48-9984-443b-82fc-568ae98723fc
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:29:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:05.898 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc', 'env', 'PROCESS_TAG=haproxy-14ea2b48-9984-443b-82fc-568ae98723fc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/14ea2b48-9984-443b-82fc-568ae98723fc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:29:05 np0005539552 nova_compute[233724]: 2025-11-29 08:29:05.906 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:06.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:06 np0005539552 nova_compute[233724]: 2025-11-29 08:29:06.214 233728 DEBUG nova.compute.manager [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:29:06 np0005539552 nova_compute[233724]: 2025-11-29 08:29:06.215 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404946.2139237, 101e7b80-d529-4f2a-87df-44512ead5b00 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:29:06 np0005539552 nova_compute[233724]: 2025-11-29 08:29:06.216 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] VM Started (Lifecycle Event)#033[00m
Nov 29 03:29:06 np0005539552 nova_compute[233724]: 2025-11-29 08:29:06.219 233728 DEBUG nova.virt.libvirt.driver [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:29:06 np0005539552 nova_compute[233724]: 2025-11-29 08:29:06.223 233728 INFO nova.virt.libvirt.driver [-] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Instance spawned successfully.#033[00m
Nov 29 03:29:06 np0005539552 nova_compute[233724]: 2025-11-29 08:29:06.223 233728 DEBUG nova.virt.libvirt.driver [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:29:06 np0005539552 nova_compute[233724]: 2025-11-29 08:29:06.252 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:29:06 np0005539552 nova_compute[233724]: 2025-11-29 08:29:06.257 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:29:06 np0005539552 nova_compute[233724]: 2025-11-29 08:29:06.262 233728 DEBUG nova.virt.libvirt.driver [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:29:06 np0005539552 nova_compute[233724]: 2025-11-29 08:29:06.262 233728 DEBUG nova.virt.libvirt.driver [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:29:06 np0005539552 nova_compute[233724]: 2025-11-29 08:29:06.263 233728 DEBUG nova.virt.libvirt.driver [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:29:06 np0005539552 nova_compute[233724]: 2025-11-29 08:29:06.263 233728 DEBUG nova.virt.libvirt.driver [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:29:06 np0005539552 nova_compute[233724]: 2025-11-29 08:29:06.264 233728 DEBUG nova.virt.libvirt.driver [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:29:06 np0005539552 nova_compute[233724]: 2025-11-29 08:29:06.264 233728 DEBUG nova.virt.libvirt.driver [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:29:06 np0005539552 nova_compute[233724]: 2025-11-29 08:29:06.294 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:29:06 np0005539552 nova_compute[233724]: 2025-11-29 08:29:06.295 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404946.2142708, 101e7b80-d529-4f2a-87df-44512ead5b00 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:29:06 np0005539552 nova_compute[233724]: 2025-11-29 08:29:06.295 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:29:06 np0005539552 podman[296680]: 2025-11-29 08:29:06.30757711 +0000 UTC m=+0.055232937 container create 6b939d1c566d08ee86a333a4843c3759d3bca4e7e8e437c4927e27348cf24322 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:29:06 np0005539552 nova_compute[233724]: 2025-11-29 08:29:06.323 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:29:06 np0005539552 nova_compute[233724]: 2025-11-29 08:29:06.327 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404946.2199125, 101e7b80-d529-4f2a-87df-44512ead5b00 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:29:06 np0005539552 nova_compute[233724]: 2025-11-29 08:29:06.327 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:29:06 np0005539552 nova_compute[233724]: 2025-11-29 08:29:06.333 233728 INFO nova.compute.manager [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Took 10.84 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:29:06 np0005539552 nova_compute[233724]: 2025-11-29 08:29:06.334 233728 DEBUG nova.compute.manager [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:29:06 np0005539552 systemd[1]: Started libpod-conmon-6b939d1c566d08ee86a333a4843c3759d3bca4e7e8e437c4927e27348cf24322.scope.
Nov 29 03:29:06 np0005539552 nova_compute[233724]: 2025-11-29 08:29:06.365 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:29:06 np0005539552 nova_compute[233724]: 2025-11-29 08:29:06.368 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:29:06 np0005539552 podman[296680]: 2025-11-29 08:29:06.278140238 +0000 UTC m=+0.025796095 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:29:06 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:29:06 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/560a9a27e62ef0431177964e4874761eba8a850e56c313ced87e439b5cfaec5e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:29:06 np0005539552 nova_compute[233724]: 2025-11-29 08:29:06.396 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:29:06 np0005539552 podman[296680]: 2025-11-29 08:29:06.407272013 +0000 UTC m=+0.154927840 container init 6b939d1c566d08ee86a333a4843c3759d3bca4e7e8e437c4927e27348cf24322 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:29:06 np0005539552 nova_compute[233724]: 2025-11-29 08:29:06.410 233728 INFO nova.compute.manager [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Took 12.25 seconds to build instance.#033[00m
Nov 29 03:29:06 np0005539552 podman[296680]: 2025-11-29 08:29:06.419042209 +0000 UTC m=+0.166698016 container start 6b939d1c566d08ee86a333a4843c3759d3bca4e7e8e437c4927e27348cf24322 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:29:06 np0005539552 nova_compute[233724]: 2025-11-29 08:29:06.426 233728 DEBUG oslo_concurrency.lockutils [None req-64be9476-d86b-457a-a52c-f64170fc6a07 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "101e7b80-d529-4f2a-87df-44512ead5b00" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.398s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:06 np0005539552 neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc[296695]: [NOTICE]   (296699) : New worker (296701) forked
Nov 29 03:29:06 np0005539552 neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc[296695]: [NOTICE]   (296699) : Loading success.
Nov 29 03:29:06 np0005539552 nova_compute[233724]: 2025-11-29 08:29:06.636 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:07.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:08 np0005539552 nova_compute[233724]: 2025-11-29 08:29:08.058 233728 DEBUG nova.compute.manager [req-90137ffd-fd7b-40bf-b162-993623dd5257 req-b4275375-e6e0-4061-925f-06125d2421f4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Received event network-vif-plugged-41f8db2f-85ae-4916-91a0-fedefca2c76e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:08 np0005539552 nova_compute[233724]: 2025-11-29 08:29:08.059 233728 DEBUG oslo_concurrency.lockutils [req-90137ffd-fd7b-40bf-b162-993623dd5257 req-b4275375-e6e0-4061-925f-06125d2421f4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "101e7b80-d529-4f2a-87df-44512ead5b00-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:08 np0005539552 nova_compute[233724]: 2025-11-29 08:29:08.059 233728 DEBUG oslo_concurrency.lockutils [req-90137ffd-fd7b-40bf-b162-993623dd5257 req-b4275375-e6e0-4061-925f-06125d2421f4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "101e7b80-d529-4f2a-87df-44512ead5b00-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:08 np0005539552 nova_compute[233724]: 2025-11-29 08:29:08.060 233728 DEBUG oslo_concurrency.lockutils [req-90137ffd-fd7b-40bf-b162-993623dd5257 req-b4275375-e6e0-4061-925f-06125d2421f4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "101e7b80-d529-4f2a-87df-44512ead5b00-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:08 np0005539552 nova_compute[233724]: 2025-11-29 08:29:08.060 233728 DEBUG nova.compute.manager [req-90137ffd-fd7b-40bf-b162-993623dd5257 req-b4275375-e6e0-4061-925f-06125d2421f4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] No waiting events found dispatching network-vif-plugged-41f8db2f-85ae-4916-91a0-fedefca2c76e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:29:08 np0005539552 nova_compute[233724]: 2025-11-29 08:29:08.060 233728 WARNING nova.compute.manager [req-90137ffd-fd7b-40bf-b162-993623dd5257 req-b4275375-e6e0-4061-925f-06125d2421f4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Received unexpected event network-vif-plugged-41f8db2f-85ae-4916-91a0-fedefca2c76e for instance with vm_state active and task_state None.#033[00m
Nov 29 03:29:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:08.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:09.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:09 np0005539552 nova_compute[233724]: 2025-11-29 08:29:09.462 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:09 np0005539552 nova_compute[233724]: 2025-11-29 08:29:09.806 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404934.8056343, bb261893-bfa1-4fdc-9c11-a33a733337ce => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:29:09 np0005539552 nova_compute[233724]: 2025-11-29 08:29:09.807 233728 INFO nova.compute.manager [-] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:29:09 np0005539552 nova_compute[233724]: 2025-11-29 08:29:09.858 233728 DEBUG nova.compute.manager [None req-76b88168-f32f-4721-9336-38291ed3c841 - - - - - -] [instance: bb261893-bfa1-4fdc-9c11-a33a733337ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:29:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:10.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:11.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:11 np0005539552 nova_compute[233724]: 2025-11-29 08:29:11.637 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:11 np0005539552 ovn_controller[133798]: 2025-11-29T08:29:11Z|00697|binding|INFO|Releasing lport 42f71355-5b3f-49f9-b3e9-d89b87086d5d from this chassis (sb_readonly=0)
Nov 29 03:29:11 np0005539552 nova_compute[233724]: 2025-11-29 08:29:11.930 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:12 np0005539552 ovn_controller[133798]: 2025-11-29T08:29:12Z|00698|binding|INFO|Releasing lport 42f71355-5b3f-49f9-b3e9-d89b87086d5d from this chassis (sb_readonly=0)
Nov 29 03:29:12 np0005539552 nova_compute[233724]: 2025-11-29 08:29:12.105 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:12.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:13 np0005539552 nova_compute[233724]: 2025-11-29 08:29:13.148 233728 DEBUG oslo_concurrency.lockutils [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Acquiring lock "8ba722d8-f0b0-426b-a972-888ebce61a32" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:13 np0005539552 nova_compute[233724]: 2025-11-29 08:29:13.149 233728 DEBUG oslo_concurrency.lockutils [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "8ba722d8-f0b0-426b-a972-888ebce61a32" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:13 np0005539552 nova_compute[233724]: 2025-11-29 08:29:13.168 233728 DEBUG nova.compute.manager [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:29:13 np0005539552 nova_compute[233724]: 2025-11-29 08:29:13.253 233728 DEBUG oslo_concurrency.lockutils [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:13 np0005539552 nova_compute[233724]: 2025-11-29 08:29:13.254 233728 DEBUG oslo_concurrency.lockutils [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:13 np0005539552 nova_compute[233724]: 2025-11-29 08:29:13.263 233728 DEBUG nova.virt.hardware [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:29:13 np0005539552 nova_compute[233724]: 2025-11-29 08:29:13.263 233728 INFO nova.compute.claims [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:29:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:13.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:13 np0005539552 nova_compute[233724]: 2025-11-29 08:29:13.404 233728 DEBUG oslo_concurrency.processutils [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:29:13 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2708254378' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:29:13 np0005539552 nova_compute[233724]: 2025-11-29 08:29:13.876 233728 DEBUG oslo_concurrency.processutils [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:13 np0005539552 nova_compute[233724]: 2025-11-29 08:29:13.884 233728 DEBUG nova.compute.provider_tree [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:29:13 np0005539552 nova_compute[233724]: 2025-11-29 08:29:13.938 233728 DEBUG nova.scheduler.client.report [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:29:13 np0005539552 nova_compute[233724]: 2025-11-29 08:29:13.964 233728 DEBUG oslo_concurrency.lockutils [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:13 np0005539552 nova_compute[233724]: 2025-11-29 08:29:13.966 233728 DEBUG nova.compute.manager [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:29:14 np0005539552 nova_compute[233724]: 2025-11-29 08:29:14.018 233728 DEBUG nova.compute.manager [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:29:14 np0005539552 nova_compute[233724]: 2025-11-29 08:29:14.018 233728 DEBUG nova.network.neutron [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:29:14 np0005539552 nova_compute[233724]: 2025-11-29 08:29:14.049 233728 INFO nova.virt.libvirt.driver [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:29:14 np0005539552 nova_compute[233724]: 2025-11-29 08:29:14.067 233728 DEBUG nova.compute.manager [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:29:14 np0005539552 nova_compute[233724]: 2025-11-29 08:29:14.168 233728 DEBUG nova.compute.manager [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:29:14 np0005539552 nova_compute[233724]: 2025-11-29 08:29:14.169 233728 DEBUG nova.virt.libvirt.driver [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:29:14 np0005539552 nova_compute[233724]: 2025-11-29 08:29:14.170 233728 INFO nova.virt.libvirt.driver [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Creating image(s)#033[00m
Nov 29 03:29:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:14.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:14 np0005539552 nova_compute[233724]: 2025-11-29 08:29:14.207 233728 DEBUG nova.storage.rbd_utils [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] rbd image 8ba722d8-f0b0-426b-a972-888ebce61a32_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:29:14 np0005539552 nova_compute[233724]: 2025-11-29 08:29:14.248 233728 DEBUG nova.storage.rbd_utils [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] rbd image 8ba722d8-f0b0-426b-a972-888ebce61a32_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:29:14 np0005539552 nova_compute[233724]: 2025-11-29 08:29:14.291 233728 DEBUG nova.storage.rbd_utils [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] rbd image 8ba722d8-f0b0-426b-a972-888ebce61a32_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:29:14 np0005539552 nova_compute[233724]: 2025-11-29 08:29:14.300 233728 DEBUG oslo_concurrency.processutils [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:14 np0005539552 nova_compute[233724]: 2025-11-29 08:29:14.379 233728 DEBUG oslo_concurrency.processutils [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:14 np0005539552 nova_compute[233724]: 2025-11-29 08:29:14.380 233728 DEBUG oslo_concurrency.lockutils [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:14 np0005539552 nova_compute[233724]: 2025-11-29 08:29:14.380 233728 DEBUG oslo_concurrency.lockutils [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:14 np0005539552 nova_compute[233724]: 2025-11-29 08:29:14.381 233728 DEBUG oslo_concurrency.lockutils [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:14 np0005539552 nova_compute[233724]: 2025-11-29 08:29:14.405 233728 DEBUG nova.storage.rbd_utils [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] rbd image 8ba722d8-f0b0-426b-a972-888ebce61a32_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:29:14 np0005539552 nova_compute[233724]: 2025-11-29 08:29:14.408 233728 DEBUG oslo_concurrency.processutils [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 8ba722d8-f0b0-426b-a972-888ebce61a32_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:14 np0005539552 nova_compute[233724]: 2025-11-29 08:29:14.466 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:14 np0005539552 nova_compute[233724]: 2025-11-29 08:29:14.672 233728 DEBUG oslo_concurrency.processutils [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 8ba722d8-f0b0-426b-a972-888ebce61a32_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.264s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:14 np0005539552 nova_compute[233724]: 2025-11-29 08:29:14.751 233728 DEBUG nova.storage.rbd_utils [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] resizing rbd image 8ba722d8-f0b0-426b-a972-888ebce61a32_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:29:14 np0005539552 nova_compute[233724]: 2025-11-29 08:29:14.786 233728 DEBUG nova.policy [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '64b11a4dc36b4f55b85dbe846183be55', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ae71059d02774857be85797a3be0e4e6', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:29:14 np0005539552 nova_compute[233724]: 2025-11-29 08:29:14.843 233728 DEBUG nova.objects.instance [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lazy-loading 'migration_context' on Instance uuid 8ba722d8-f0b0-426b-a972-888ebce61a32 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:29:14 np0005539552 nova_compute[233724]: 2025-11-29 08:29:14.861 233728 DEBUG nova.virt.libvirt.driver [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:29:14 np0005539552 nova_compute[233724]: 2025-11-29 08:29:14.862 233728 DEBUG nova.virt.libvirt.driver [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Ensure instance console log exists: /var/lib/nova/instances/8ba722d8-f0b0-426b-a972-888ebce61a32/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:29:14 np0005539552 nova_compute[233724]: 2025-11-29 08:29:14.862 233728 DEBUG oslo_concurrency.lockutils [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:14 np0005539552 nova_compute[233724]: 2025-11-29 08:29:14.863 233728 DEBUG oslo_concurrency.lockutils [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:14 np0005539552 nova_compute[233724]: 2025-11-29 08:29:14.863 233728 DEBUG oslo_concurrency.lockutils [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:15.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:15 np0005539552 nova_compute[233724]: 2025-11-29 08:29:15.992 233728 DEBUG nova.network.neutron [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Successfully created port: a8931757-80f9-4e61-80ae-e6d2f1fc0dde _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:29:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:16.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:16 np0005539552 nova_compute[233724]: 2025-11-29 08:29:16.640 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:17.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:17 np0005539552 nova_compute[233724]: 2025-11-29 08:29:17.503 233728 DEBUG nova.network.neutron [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Successfully updated port: a8931757-80f9-4e61-80ae-e6d2f1fc0dde _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:29:17 np0005539552 nova_compute[233724]: 2025-11-29 08:29:17.526 233728 DEBUG oslo_concurrency.lockutils [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Acquiring lock "refresh_cache-8ba722d8-f0b0-426b-a972-888ebce61a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:29:17 np0005539552 nova_compute[233724]: 2025-11-29 08:29:17.527 233728 DEBUG oslo_concurrency.lockutils [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Acquired lock "refresh_cache-8ba722d8-f0b0-426b-a972-888ebce61a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:29:17 np0005539552 nova_compute[233724]: 2025-11-29 08:29:17.528 233728 DEBUG nova.network.neutron [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:29:17 np0005539552 nova_compute[233724]: 2025-11-29 08:29:17.810 233728 DEBUG nova.network.neutron [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:29:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:18.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:19 np0005539552 podman[296906]: 2025-11-29 08:29:19.001185228 +0000 UTC m=+0.092893280 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd)
Nov 29 03:29:19 np0005539552 podman[296907]: 2025-11-29 08:29:19.003876131 +0000 UTC m=+0.086919450 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.010 233728 DEBUG nova.network.neutron [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Updating instance_info_cache with network_info: [{"id": "a8931757-80f9-4e61-80ae-e6d2f1fc0dde", "address": "fa:16:3e:41:b1:70", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8931757-80", "ovs_interfaceid": "a8931757-80f9-4e61-80ae-e6d2f1fc0dde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:29:19 np0005539552 podman[296908]: 2025-11-29 08:29:19.015596186 +0000 UTC m=+0.093251710 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.039 233728 DEBUG oslo_concurrency.lockutils [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Releasing lock "refresh_cache-8ba722d8-f0b0-426b-a972-888ebce61a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.039 233728 DEBUG nova.compute.manager [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Instance network_info: |[{"id": "a8931757-80f9-4e61-80ae-e6d2f1fc0dde", "address": "fa:16:3e:41:b1:70", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8931757-80", "ovs_interfaceid": "a8931757-80f9-4e61-80ae-e6d2f1fc0dde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.041 233728 DEBUG nova.virt.libvirt.driver [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Start _get_guest_xml network_info=[{"id": "a8931757-80f9-4e61-80ae-e6d2f1fc0dde", "address": "fa:16:3e:41:b1:70", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8931757-80", "ovs_interfaceid": "a8931757-80f9-4e61-80ae-e6d2f1fc0dde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.045 233728 WARNING nova.virt.libvirt.driver [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.051 233728 DEBUG nova.virt.libvirt.host [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.051 233728 DEBUG nova.virt.libvirt.host [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.067 233728 DEBUG nova.virt.libvirt.host [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.068 233728 DEBUG nova.virt.libvirt.host [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.069 233728 DEBUG nova.virt.libvirt.driver [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.069 233728 DEBUG nova.virt.hardware [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.069 233728 DEBUG nova.virt.hardware [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.070 233728 DEBUG nova.virt.hardware [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.070 233728 DEBUG nova.virt.hardware [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.070 233728 DEBUG nova.virt.hardware [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.070 233728 DEBUG nova.virt.hardware [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.070 233728 DEBUG nova.virt.hardware [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.071 233728 DEBUG nova.virt.hardware [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.071 233728 DEBUG nova.virt.hardware [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.071 233728 DEBUG nova.virt.hardware [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.071 233728 DEBUG nova.virt.hardware [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.074 233728 DEBUG oslo_concurrency.processutils [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:19 np0005539552 ovn_controller[133798]: 2025-11-29T08:29:19Z|00087|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:02:b7:ca 10.100.0.5
Nov 29 03:29:19 np0005539552 ovn_controller[133798]: 2025-11-29T08:29:19Z|00088|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:02:b7:ca 10.100.0.5
Nov 29 03:29:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:19.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.346 233728 DEBUG nova.compute.manager [req-ed3e4c58-9127-43a0-b7c4-4a332d276b61 req-6d99b662-d713-457e-8ced-525f217b7410 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Received event network-changed-a8931757-80f9-4e61-80ae-e6d2f1fc0dde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.346 233728 DEBUG nova.compute.manager [req-ed3e4c58-9127-43a0-b7c4-4a332d276b61 req-6d99b662-d713-457e-8ced-525f217b7410 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Refreshing instance network info cache due to event network-changed-a8931757-80f9-4e61-80ae-e6d2f1fc0dde. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.347 233728 DEBUG oslo_concurrency.lockutils [req-ed3e4c58-9127-43a0-b7c4-4a332d276b61 req-6d99b662-d713-457e-8ced-525f217b7410 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-8ba722d8-f0b0-426b-a972-888ebce61a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.347 233728 DEBUG oslo_concurrency.lockutils [req-ed3e4c58-9127-43a0-b7c4-4a332d276b61 req-6d99b662-d713-457e-8ced-525f217b7410 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-8ba722d8-f0b0-426b-a972-888ebce61a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.347 233728 DEBUG nova.network.neutron [req-ed3e4c58-9127-43a0-b7c4-4a332d276b61 req-6d99b662-d713-457e-8ced-525f217b7410 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Refreshing network info cache for port a8931757-80f9-4e61-80ae-e6d2f1fc0dde _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.467 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:19 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:29:19 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2851023822' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.499 233728 DEBUG oslo_concurrency.processutils [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.521 233728 DEBUG nova.storage.rbd_utils [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] rbd image 8ba722d8-f0b0-426b-a972-888ebce61a32_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.526 233728 DEBUG oslo_concurrency.processutils [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:19 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e349 e349: 3 total, 3 up, 3 in
Nov 29 03:29:19 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:29:19 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/211303343' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.954 233728 DEBUG oslo_concurrency.processutils [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.957 233728 DEBUG nova.virt.libvirt.vif [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:29:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-1137448026',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-1137448026',id=148,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ae71059d02774857be85797a3be0e4e6',ramdisk_id='',reservation_id='r-brhr6vp6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1715153470',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1715153470-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:29:14Z,user_data=None,user_id='64b11a4dc36b4f55b85dbe846183be55',uuid=8ba722d8-f0b0-426b-a972-888ebce61a32,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a8931757-80f9-4e61-80ae-e6d2f1fc0dde", "address": "fa:16:3e:41:b1:70", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8931757-80", "ovs_interfaceid": "a8931757-80f9-4e61-80ae-e6d2f1fc0dde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.958 233728 DEBUG nova.network.os_vif_util [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Converting VIF {"id": "a8931757-80f9-4e61-80ae-e6d2f1fc0dde", "address": "fa:16:3e:41:b1:70", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8931757-80", "ovs_interfaceid": "a8931757-80f9-4e61-80ae-e6d2f1fc0dde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.959 233728 DEBUG nova.network.os_vif_util [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:b1:70,bridge_name='br-int',has_traffic_filtering=True,id=a8931757-80f9-4e61-80ae-e6d2f1fc0dde,network=Network(d9d41f0a-17f9-4df4-a453-04da996d63b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8931757-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.961 233728 DEBUG nova.objects.instance [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8ba722d8-f0b0-426b-a972-888ebce61a32 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.980 233728 DEBUG nova.virt.libvirt.driver [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:29:19 np0005539552 nova_compute[233724]:  <uuid>8ba722d8-f0b0-426b-a972-888ebce61a32</uuid>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:  <name>instance-00000094</name>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:29:19 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:      <nova:name>tempest-ServerBootFromVolumeStableRescueTest-server-1137448026</nova:name>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:29:19</nova:creationTime>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:29:19 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:        <nova:user uuid="64b11a4dc36b4f55b85dbe846183be55">tempest-ServerBootFromVolumeStableRescueTest-1715153470-project-member</nova:user>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:        <nova:project uuid="ae71059d02774857be85797a3be0e4e6">tempest-ServerBootFromVolumeStableRescueTest-1715153470</nova:project>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:        <nova:port uuid="a8931757-80f9-4e61-80ae-e6d2f1fc0dde">
Nov 29 03:29:19 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:      <entry name="serial">8ba722d8-f0b0-426b-a972-888ebce61a32</entry>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:      <entry name="uuid">8ba722d8-f0b0-426b-a972-888ebce61a32</entry>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:29:19 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/8ba722d8-f0b0-426b-a972-888ebce61a32_disk">
Nov 29 03:29:19 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:29:19 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:29:19 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/8ba722d8-f0b0-426b-a972-888ebce61a32_disk.config">
Nov 29 03:29:19 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:29:19 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:29:19 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:41:b1:70"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:      <target dev="tapa8931757-80"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:29:19 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/8ba722d8-f0b0-426b-a972-888ebce61a32/console.log" append="off"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:29:19 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:29:19 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:29:19 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:29:19 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:29:19 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.981 233728 DEBUG nova.compute.manager [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Preparing to wait for external event network-vif-plugged-a8931757-80f9-4e61-80ae-e6d2f1fc0dde prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.982 233728 DEBUG oslo_concurrency.lockutils [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Acquiring lock "8ba722d8-f0b0-426b-a972-888ebce61a32-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.982 233728 DEBUG oslo_concurrency.lockutils [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "8ba722d8-f0b0-426b-a972-888ebce61a32-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.983 233728 DEBUG oslo_concurrency.lockutils [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "8ba722d8-f0b0-426b-a972-888ebce61a32-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.983 233728 DEBUG nova.virt.libvirt.vif [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:29:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-1137448026',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-1137448026',id=148,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ae71059d02774857be85797a3be0e4e6',ramdisk_id='',reservation_id='r-brhr6vp6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1715153470',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1715153470-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:29:14Z,user_data=None,user_id='64b11a4dc36b4f55b85dbe846183be55',uuid=8ba722d8-f0b0-426b-a972-888ebce61a32,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a8931757-80f9-4e61-80ae-e6d2f1fc0dde", "address": "fa:16:3e:41:b1:70", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8931757-80", "ovs_interfaceid": "a8931757-80f9-4e61-80ae-e6d2f1fc0dde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.984 233728 DEBUG nova.network.os_vif_util [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Converting VIF {"id": "a8931757-80f9-4e61-80ae-e6d2f1fc0dde", "address": "fa:16:3e:41:b1:70", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8931757-80", "ovs_interfaceid": "a8931757-80f9-4e61-80ae-e6d2f1fc0dde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.985 233728 DEBUG nova.network.os_vif_util [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:b1:70,bridge_name='br-int',has_traffic_filtering=True,id=a8931757-80f9-4e61-80ae-e6d2f1fc0dde,network=Network(d9d41f0a-17f9-4df4-a453-04da996d63b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8931757-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.985 233728 DEBUG os_vif [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:b1:70,bridge_name='br-int',has_traffic_filtering=True,id=a8931757-80f9-4e61-80ae-e6d2f1fc0dde,network=Network(d9d41f0a-17f9-4df4-a453-04da996d63b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8931757-80') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.986 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.986 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.987 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.992 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.992 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa8931757-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.993 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa8931757-80, col_values=(('external_ids', {'iface-id': 'a8931757-80f9-4e61-80ae-e6d2f1fc0dde', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:41:b1:70', 'vm-uuid': '8ba722d8-f0b0-426b-a972-888ebce61a32'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.994 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:19 np0005539552 NetworkManager[48926]: <info>  [1764404959.9950] manager: (tapa8931757-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/303)
Nov 29 03:29:19 np0005539552 nova_compute[233724]: 2025-11-29 08:29:19.997 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:29:20 np0005539552 nova_compute[233724]: 2025-11-29 08:29:20.002 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:20 np0005539552 nova_compute[233724]: 2025-11-29 08:29:20.004 233728 INFO os_vif [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:b1:70,bridge_name='br-int',has_traffic_filtering=True,id=a8931757-80f9-4e61-80ae-e6d2f1fc0dde,network=Network(d9d41f0a-17f9-4df4-a453-04da996d63b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8931757-80')#033[00m
Nov 29 03:29:20 np0005539552 nova_compute[233724]: 2025-11-29 08:29:20.058 233728 DEBUG nova.virt.libvirt.driver [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:29:20 np0005539552 nova_compute[233724]: 2025-11-29 08:29:20.059 233728 DEBUG nova.virt.libvirt.driver [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:29:20 np0005539552 nova_compute[233724]: 2025-11-29 08:29:20.059 233728 DEBUG nova.virt.libvirt.driver [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] No VIF found with MAC fa:16:3e:41:b1:70, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:29:20 np0005539552 nova_compute[233724]: 2025-11-29 08:29:20.060 233728 INFO nova.virt.libvirt.driver [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Using config drive#033[00m
Nov 29 03:29:20 np0005539552 nova_compute[233724]: 2025-11-29 08:29:20.091 233728 DEBUG nova.storage.rbd_utils [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] rbd image 8ba722d8-f0b0-426b-a972-888ebce61a32_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:29:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:20.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:20.634 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:20.635 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:20.635 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:21 np0005539552 nova_compute[233724]: 2025-11-29 08:29:21.198 233728 INFO nova.virt.libvirt.driver [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Creating config drive at /var/lib/nova/instances/8ba722d8-f0b0-426b-a972-888ebce61a32/disk.config#033[00m
Nov 29 03:29:21 np0005539552 nova_compute[233724]: 2025-11-29 08:29:21.204 233728 DEBUG oslo_concurrency.processutils [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8ba722d8-f0b0-426b-a972-888ebce61a32/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6mluvjm6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:29:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:21.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:29:21 np0005539552 nova_compute[233724]: 2025-11-29 08:29:21.363 233728 DEBUG oslo_concurrency.processutils [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8ba722d8-f0b0-426b-a972-888ebce61a32/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6mluvjm6" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:21 np0005539552 nova_compute[233724]: 2025-11-29 08:29:21.399 233728 DEBUG nova.storage.rbd_utils [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] rbd image 8ba722d8-f0b0-426b-a972-888ebce61a32_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:29:21 np0005539552 nova_compute[233724]: 2025-11-29 08:29:21.405 233728 DEBUG oslo_concurrency.processutils [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8ba722d8-f0b0-426b-a972-888ebce61a32/disk.config 8ba722d8-f0b0-426b-a972-888ebce61a32_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:21 np0005539552 nova_compute[233724]: 2025-11-29 08:29:21.642 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:22 np0005539552 nova_compute[233724]: 2025-11-29 08:29:22.096 233728 DEBUG nova.network.neutron [req-ed3e4c58-9127-43a0-b7c4-4a332d276b61 req-6d99b662-d713-457e-8ced-525f217b7410 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Updated VIF entry in instance network info cache for port a8931757-80f9-4e61-80ae-e6d2f1fc0dde. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:29:22 np0005539552 nova_compute[233724]: 2025-11-29 08:29:22.097 233728 DEBUG nova.network.neutron [req-ed3e4c58-9127-43a0-b7c4-4a332d276b61 req-6d99b662-d713-457e-8ced-525f217b7410 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Updating instance_info_cache with network_info: [{"id": "a8931757-80f9-4e61-80ae-e6d2f1fc0dde", "address": "fa:16:3e:41:b1:70", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8931757-80", "ovs_interfaceid": "a8931757-80f9-4e61-80ae-e6d2f1fc0dde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:29:22 np0005539552 nova_compute[233724]: 2025-11-29 08:29:22.116 233728 DEBUG oslo_concurrency.lockutils [req-ed3e4c58-9127-43a0-b7c4-4a332d276b61 req-6d99b662-d713-457e-8ced-525f217b7410 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-8ba722d8-f0b0-426b-a972-888ebce61a32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:29:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:22.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:22 np0005539552 nova_compute[233724]: 2025-11-29 08:29:22.985 233728 DEBUG oslo_concurrency.processutils [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8ba722d8-f0b0-426b-a972-888ebce61a32/disk.config 8ba722d8-f0b0-426b-a972-888ebce61a32_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:22 np0005539552 nova_compute[233724]: 2025-11-29 08:29:22.985 233728 INFO nova.virt.libvirt.driver [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Deleting local config drive /var/lib/nova/instances/8ba722d8-f0b0-426b-a972-888ebce61a32/disk.config because it was imported into RBD.#033[00m
Nov 29 03:29:23 np0005539552 NetworkManager[48926]: <info>  [1764404963.0324] manager: (tapa8931757-80): new Tun device (/org/freedesktop/NetworkManager/Devices/304)
Nov 29 03:29:23 np0005539552 kernel: tapa8931757-80: entered promiscuous mode
Nov 29 03:29:23 np0005539552 ovn_controller[133798]: 2025-11-29T08:29:23Z|00699|binding|INFO|Claiming lport a8931757-80f9-4e61-80ae-e6d2f1fc0dde for this chassis.
Nov 29 03:29:23 np0005539552 ovn_controller[133798]: 2025-11-29T08:29:23Z|00700|binding|INFO|a8931757-80f9-4e61-80ae-e6d2f1fc0dde: Claiming fa:16:3e:41:b1:70 10.100.0.12
Nov 29 03:29:23 np0005539552 nova_compute[233724]: 2025-11-29 08:29:23.034 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:23.043 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:b1:70 10.100.0.12'], port_security=['fa:16:3e:41:b1:70 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '8ba722d8-f0b0-426b-a972-888ebce61a32', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ae71059d02774857be85797a3be0e4e6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9cdb0c1e-9792-4231-abe9-b49a2c7e81de', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43696b0d-f042-4e44-8852-c0333c8ffa4f, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=a8931757-80f9-4e61-80ae-e6d2f1fc0dde) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:23.044 143400 INFO neutron.agent.ovn.metadata.agent [-] Port a8931757-80f9-4e61-80ae-e6d2f1fc0dde in datapath d9d41f0a-17f9-4df4-a453-04da996d63b6 bound to our chassis#033[00m
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:23.046 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d9d41f0a-17f9-4df4-a453-04da996d63b6#033[00m
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:23.056 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[1bc55d48-68fd-4f3e-936d-3deac44aa7fe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:23.057 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd9d41f0a-11 in ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:23.059 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd9d41f0a-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:23.059 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[91276606-ec61-49f7-9901-e0c4b3b2a526]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:23.060 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0a5b892e-885c-4cb9-a9be-be67f7020333]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:23 np0005539552 systemd-machined[196379]: New machine qemu-70-instance-00000094.
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:23.072 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[dc071a8c-2c73-460e-9381-f19f3d952a1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:23 np0005539552 systemd[1]: Started Virtual Machine qemu-70-instance-00000094.
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:23.095 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f81a1ab8-e7e9-41ad-b5d4-72a40520fd2c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:23 np0005539552 systemd-udevd[297163]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:29:23 np0005539552 nova_compute[233724]: 2025-11-29 08:29:23.105 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:23 np0005539552 ovn_controller[133798]: 2025-11-29T08:29:23Z|00701|binding|INFO|Setting lport a8931757-80f9-4e61-80ae-e6d2f1fc0dde ovn-installed in OVS
Nov 29 03:29:23 np0005539552 ovn_controller[133798]: 2025-11-29T08:29:23Z|00702|binding|INFO|Setting lport a8931757-80f9-4e61-80ae-e6d2f1fc0dde up in Southbound
Nov 29 03:29:23 np0005539552 nova_compute[233724]: 2025-11-29 08:29:23.112 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:23 np0005539552 NetworkManager[48926]: <info>  [1764404963.1143] device (tapa8931757-80): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:29:23 np0005539552 NetworkManager[48926]: <info>  [1764404963.1152] device (tapa8931757-80): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:23.127 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[98f73dff-7dfd-4394-be88-ccdb4a697b38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:23.131 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d3667639-335d-47fa-aba3-18b8f5a48830]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:23 np0005539552 NetworkManager[48926]: <info>  [1764404963.1327] manager: (tapd9d41f0a-10): new Veth device (/org/freedesktop/NetworkManager/Devices/305)
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:23.165 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[c65415b3-4099-4d22-b9df-07b49d27dd18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:23.168 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[bb3aa52f-3e76-4a0e-8988-82eaddd52112]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e349 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:23 np0005539552 NetworkManager[48926]: <info>  [1764404963.1911] device (tapd9d41f0a-10): carrier: link connected
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:23.196 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[4d5d70d1-d5ea-490f-bdc0-215e28104717]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:23.215 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3f4de6f5-8ce9-49bf-9cee-0c803efc3947]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd9d41f0a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:28:87'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 207], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 800887, 'reachable_time': 24489, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297193, 'error': None, 'target': 'ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:23.228 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2d2584ad-9979-4891-9a71-59604e74f61e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe41:2887'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 800887, 'tstamp': 800887}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297194, 'error': None, 'target': 'ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:23.244 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d84d5869-af39-44b2-a5df-4dd88ed702fa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd9d41f0a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:28:87'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 207], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 800887, 'reachable_time': 24489, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 297195, 'error': None, 'target': 'ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:23.271 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6f1d491e-34e0-4c64-b20a-538db7b90ea2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:23.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:23.322 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[98702e99-68d1-4019-a7a8-aa23b16437ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:23.323 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9d41f0a-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:23.324 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:23.324 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd9d41f0a-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:23 np0005539552 nova_compute[233724]: 2025-11-29 08:29:23.325 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:23 np0005539552 kernel: tapd9d41f0a-10: entered promiscuous mode
Nov 29 03:29:23 np0005539552 nova_compute[233724]: 2025-11-29 08:29:23.328 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:23 np0005539552 NetworkManager[48926]: <info>  [1764404963.3296] manager: (tapd9d41f0a-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/306)
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:23.329 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd9d41f0a-10, col_values=(('external_ids', {'iface-id': 'f2118d1b-0f35-4211-8508-64237a2d816e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:23 np0005539552 nova_compute[233724]: 2025-11-29 08:29:23.331 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:23 np0005539552 ovn_controller[133798]: 2025-11-29T08:29:23Z|00703|binding|INFO|Releasing lport f2118d1b-0f35-4211-8508-64237a2d816e from this chassis (sb_readonly=0)
Nov 29 03:29:23 np0005539552 nova_compute[233724]: 2025-11-29 08:29:23.347 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:23.348 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d9d41f0a-17f9-4df4-a453-04da996d63b6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d9d41f0a-17f9-4df4-a453-04da996d63b6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:23.349 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[911ab5c9-89f7-4950-90ba-0f45266bed60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:23.349 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-d9d41f0a-17f9-4df4-a453-04da996d63b6
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/d9d41f0a-17f9-4df4-a453-04da996d63b6.pid.haproxy
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID d9d41f0a-17f9-4df4-a453-04da996d63b6
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:29:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:23.350 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'env', 'PROCESS_TAG=haproxy-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d9d41f0a-17f9-4df4-a453-04da996d63b6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:29:23 np0005539552 nova_compute[233724]: 2025-11-29 08:29:23.567 233728 DEBUG nova.compute.manager [req-916c95af-9a56-4e3c-859a-9dbb05b00850 req-62683f34-90d6-46cb-86fd-8aa923aec354 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Received event network-vif-plugged-a8931757-80f9-4e61-80ae-e6d2f1fc0dde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:23 np0005539552 nova_compute[233724]: 2025-11-29 08:29:23.568 233728 DEBUG oslo_concurrency.lockutils [req-916c95af-9a56-4e3c-859a-9dbb05b00850 req-62683f34-90d6-46cb-86fd-8aa923aec354 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8ba722d8-f0b0-426b-a972-888ebce61a32-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:23 np0005539552 nova_compute[233724]: 2025-11-29 08:29:23.568 233728 DEBUG oslo_concurrency.lockutils [req-916c95af-9a56-4e3c-859a-9dbb05b00850 req-62683f34-90d6-46cb-86fd-8aa923aec354 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8ba722d8-f0b0-426b-a972-888ebce61a32-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:23 np0005539552 nova_compute[233724]: 2025-11-29 08:29:23.568 233728 DEBUG oslo_concurrency.lockutils [req-916c95af-9a56-4e3c-859a-9dbb05b00850 req-62683f34-90d6-46cb-86fd-8aa923aec354 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8ba722d8-f0b0-426b-a972-888ebce61a32-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:23 np0005539552 nova_compute[233724]: 2025-11-29 08:29:23.569 233728 DEBUG nova.compute.manager [req-916c95af-9a56-4e3c-859a-9dbb05b00850 req-62683f34-90d6-46cb-86fd-8aa923aec354 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Processing event network-vif-plugged-a8931757-80f9-4e61-80ae-e6d2f1fc0dde _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:29:23 np0005539552 podman[297242]: 2025-11-29 08:29:23.69504155 +0000 UTC m=+0.049336338 container create 389577fa042971da2450401115112a6eea6945fa55ea4d562c413161afaeef0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:29:23 np0005539552 systemd[1]: Started libpod-conmon-389577fa042971da2450401115112a6eea6945fa55ea4d562c413161afaeef0c.scope.
Nov 29 03:29:23 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:29:23 np0005539552 podman[297242]: 2025-11-29 08:29:23.670029817 +0000 UTC m=+0.024324645 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:29:23 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6b81d924149ed42138083015f2e5c9ced140fc4f08b81b8acc1d1cf03cf296d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:29:23 np0005539552 podman[297242]: 2025-11-29 08:29:23.780492339 +0000 UTC m=+0.134787157 container init 389577fa042971da2450401115112a6eea6945fa55ea4d562c413161afaeef0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:29:23 np0005539552 podman[297242]: 2025-11-29 08:29:23.788114044 +0000 UTC m=+0.142408832 container start 389577fa042971da2450401115112a6eea6945fa55ea4d562c413161afaeef0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 29 03:29:23 np0005539552 neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6[297278]: [NOTICE]   (297288) : New worker (297290) forked
Nov 29 03:29:23 np0005539552 neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6[297278]: [NOTICE]   (297288) : Loading success.
Nov 29 03:29:23 np0005539552 nova_compute[233724]: 2025-11-29 08:29:23.869 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404963.8693724, 8ba722d8-f0b0-426b-a972-888ebce61a32 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:29:23 np0005539552 nova_compute[233724]: 2025-11-29 08:29:23.870 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] VM Started (Lifecycle Event)#033[00m
Nov 29 03:29:23 np0005539552 nova_compute[233724]: 2025-11-29 08:29:23.871 233728 DEBUG nova.compute.manager [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:29:23 np0005539552 nova_compute[233724]: 2025-11-29 08:29:23.874 233728 DEBUG nova.virt.libvirt.driver [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:29:23 np0005539552 nova_compute[233724]: 2025-11-29 08:29:23.877 233728 INFO nova.virt.libvirt.driver [-] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Instance spawned successfully.#033[00m
Nov 29 03:29:23 np0005539552 nova_compute[233724]: 2025-11-29 08:29:23.877 233728 DEBUG nova.virt.libvirt.driver [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:29:23 np0005539552 nova_compute[233724]: 2025-11-29 08:29:23.908 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:29:23 np0005539552 nova_compute[233724]: 2025-11-29 08:29:23.916 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:29:23 np0005539552 nova_compute[233724]: 2025-11-29 08:29:23.919 233728 DEBUG nova.virt.libvirt.driver [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:29:23 np0005539552 nova_compute[233724]: 2025-11-29 08:29:23.920 233728 DEBUG nova.virt.libvirt.driver [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:29:23 np0005539552 nova_compute[233724]: 2025-11-29 08:29:23.920 233728 DEBUG nova.virt.libvirt.driver [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:29:23 np0005539552 nova_compute[233724]: 2025-11-29 08:29:23.920 233728 DEBUG nova.virt.libvirt.driver [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:29:23 np0005539552 nova_compute[233724]: 2025-11-29 08:29:23.921 233728 DEBUG nova.virt.libvirt.driver [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:29:23 np0005539552 nova_compute[233724]: 2025-11-29 08:29:23.921 233728 DEBUG nova.virt.libvirt.driver [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:29:23 np0005539552 nova_compute[233724]: 2025-11-29 08:29:23.982 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:29:23 np0005539552 nova_compute[233724]: 2025-11-29 08:29:23.982 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404963.8694746, 8ba722d8-f0b0-426b-a972-888ebce61a32 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:29:23 np0005539552 nova_compute[233724]: 2025-11-29 08:29:23.983 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:29:24 np0005539552 nova_compute[233724]: 2025-11-29 08:29:24.018 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:29:24 np0005539552 nova_compute[233724]: 2025-11-29 08:29:24.021 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404963.8739147, 8ba722d8-f0b0-426b-a972-888ebce61a32 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:29:24 np0005539552 nova_compute[233724]: 2025-11-29 08:29:24.021 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:29:24 np0005539552 nova_compute[233724]: 2025-11-29 08:29:24.043 233728 INFO nova.compute.manager [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Took 9.87 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:29:24 np0005539552 nova_compute[233724]: 2025-11-29 08:29:24.044 233728 DEBUG nova.compute.manager [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:29:24 np0005539552 nova_compute[233724]: 2025-11-29 08:29:24.045 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:29:24 np0005539552 nova_compute[233724]: 2025-11-29 08:29:24.053 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:29:24 np0005539552 nova_compute[233724]: 2025-11-29 08:29:24.096 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:29:24 np0005539552 nova_compute[233724]: 2025-11-29 08:29:24.137 233728 INFO nova.compute.manager [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Took 10.92 seconds to build instance.#033[00m
Nov 29 03:29:24 np0005539552 nova_compute[233724]: 2025-11-29 08:29:24.152 233728 DEBUG oslo_concurrency.lockutils [None req-38da5b81-1f05-43b4-a47e-b5f2b2f703ec 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "8ba722d8-f0b0-426b-a972-888ebce61a32" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:24.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:24 np0005539552 nova_compute[233724]: 2025-11-29 08:29:24.995 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:25.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:25 np0005539552 nova_compute[233724]: 2025-11-29 08:29:25.673 233728 DEBUG nova.compute.manager [req-d5b2b04c-5ff0-41f8-a934-9e98404c16ac req-e54b81ac-3276-4ef6-a059-7f478f922ee5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Received event network-vif-plugged-a8931757-80f9-4e61-80ae-e6d2f1fc0dde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:25 np0005539552 nova_compute[233724]: 2025-11-29 08:29:25.673 233728 DEBUG oslo_concurrency.lockutils [req-d5b2b04c-5ff0-41f8-a934-9e98404c16ac req-e54b81ac-3276-4ef6-a059-7f478f922ee5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8ba722d8-f0b0-426b-a972-888ebce61a32-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:25 np0005539552 nova_compute[233724]: 2025-11-29 08:29:25.674 233728 DEBUG oslo_concurrency.lockutils [req-d5b2b04c-5ff0-41f8-a934-9e98404c16ac req-e54b81ac-3276-4ef6-a059-7f478f922ee5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8ba722d8-f0b0-426b-a972-888ebce61a32-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:25 np0005539552 nova_compute[233724]: 2025-11-29 08:29:25.674 233728 DEBUG oslo_concurrency.lockutils [req-d5b2b04c-5ff0-41f8-a934-9e98404c16ac req-e54b81ac-3276-4ef6-a059-7f478f922ee5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8ba722d8-f0b0-426b-a972-888ebce61a32-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:25 np0005539552 nova_compute[233724]: 2025-11-29 08:29:25.674 233728 DEBUG nova.compute.manager [req-d5b2b04c-5ff0-41f8-a934-9e98404c16ac req-e54b81ac-3276-4ef6-a059-7f478f922ee5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] No waiting events found dispatching network-vif-plugged-a8931757-80f9-4e61-80ae-e6d2f1fc0dde pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:29:25 np0005539552 nova_compute[233724]: 2025-11-29 08:29:25.674 233728 WARNING nova.compute.manager [req-d5b2b04c-5ff0-41f8-a934-9e98404c16ac req-e54b81ac-3276-4ef6-a059-7f478f922ee5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Received unexpected event network-vif-plugged-a8931757-80f9-4e61-80ae-e6d2f1fc0dde for instance with vm_state active and task_state None.#033[00m
Nov 29 03:29:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:26.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:26 np0005539552 nova_compute[233724]: 2025-11-29 08:29:26.645 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:26 np0005539552 nova_compute[233724]: 2025-11-29 08:29:26.910 233728 DEBUG nova.compute.manager [None req-d9a7c1c4-6630-4965-8b41-5fadfe1ea648 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:29:26 np0005539552 nova_compute[233724]: 2025-11-29 08:29:26.948 233728 INFO nova.compute.manager [None req-d9a7c1c4-6630-4965-8b41-5fadfe1ea648 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] instance snapshotting#033[00m
Nov 29 03:29:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e350 e350: 3 total, 3 up, 3 in
Nov 29 03:29:27 np0005539552 nova_compute[233724]: 2025-11-29 08:29:27.257 233728 INFO nova.virt.libvirt.driver [None req-d9a7c1c4-6630-4965-8b41-5fadfe1ea648 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Beginning live snapshot process#033[00m
Nov 29 03:29:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:27.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:27 np0005539552 nova_compute[233724]: 2025-11-29 08:29:27.438 233728 DEBUG nova.virt.libvirt.imagebackend [None req-d9a7c1c4-6630-4965-8b41-5fadfe1ea648 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] No parent info for 4873db8c-b414-4e95-acd9-77caabebe722; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 29 03:29:27 np0005539552 nova_compute[233724]: 2025-11-29 08:29:27.784 233728 DEBUG nova.storage.rbd_utils [None req-d9a7c1c4-6630-4965-8b41-5fadfe1ea648 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] creating snapshot(87d2391fc6874800ab8c7af1d40a3950) on rbd image(8ba722d8-f0b0-426b-a972-888ebce61a32_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:29:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:28.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e351 e351: 3 total, 3 up, 3 in
Nov 29 03:29:28 np0005539552 nova_compute[233724]: 2025-11-29 08:29:28.289 233728 DEBUG nova.storage.rbd_utils [None req-d9a7c1c4-6630-4965-8b41-5fadfe1ea648 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] cloning vms/8ba722d8-f0b0-426b-a972-888ebce61a32_disk@87d2391fc6874800ab8c7af1d40a3950 to images/2d9af198-baca-4fb0-8bbb-100141aac9db clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:29:28 np0005539552 nova_compute[233724]: 2025-11-29 08:29:28.394 233728 DEBUG nova.storage.rbd_utils [None req-d9a7c1c4-6630-4965-8b41-5fadfe1ea648 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] flattening images/2d9af198-baca-4fb0-8bbb-100141aac9db flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 29 03:29:28 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #54. Immutable memtables: 10.
Nov 29 03:29:28 np0005539552 nova_compute[233724]: 2025-11-29 08:29:28.658 233728 DEBUG nova.storage.rbd_utils [None req-d9a7c1c4-6630-4965-8b41-5fadfe1ea648 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] removing snapshot(87d2391fc6874800ab8c7af1d40a3950) on rbd image(8ba722d8-f0b0-426b-a972-888ebce61a32_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:29:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:29.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:29 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e352 e352: 3 total, 3 up, 3 in
Nov 29 03:29:29 np0005539552 nova_compute[233724]: 2025-11-29 08:29:29.353 233728 DEBUG nova.storage.rbd_utils [None req-d9a7c1c4-6630-4965-8b41-5fadfe1ea648 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] creating snapshot(snap) on rbd image(2d9af198-baca-4fb0-8bbb-100141aac9db) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:29:29 np0005539552 nova_compute[233724]: 2025-11-29 08:29:29.996 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:30.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:30 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e353 e353: 3 total, 3 up, 3 in
Nov 29 03:29:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:31.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:31 np0005539552 nova_compute[233724]: 2025-11-29 08:29:31.648 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:32.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:32 np0005539552 nova_compute[233724]: 2025-11-29 08:29:32.607 233728 INFO nova.virt.libvirt.driver [None req-d9a7c1c4-6630-4965-8b41-5fadfe1ea648 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Snapshot image upload complete#033[00m
Nov 29 03:29:32 np0005539552 nova_compute[233724]: 2025-11-29 08:29:32.608 233728 INFO nova.compute.manager [None req-d9a7c1c4-6630-4965-8b41-5fadfe1ea648 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Took 5.66 seconds to snapshot the instance on the hypervisor.#033[00m
Nov 29 03:29:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:29:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:33.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:29:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:34.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:34 np0005539552 nova_compute[233724]: 2025-11-29 08:29:34.998 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:35.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:35 np0005539552 nova_compute[233724]: 2025-11-29 08:29:35.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:29:35 np0005539552 nova_compute[233724]: 2025-11-29 08:29:35.950 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:35 np0005539552 nova_compute[233724]: 2025-11-29 08:29:35.950 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:35 np0005539552 nova_compute[233724]: 2025-11-29 08:29:35.950 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:35 np0005539552 nova_compute[233724]: 2025-11-29 08:29:35.951 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:29:35 np0005539552 nova_compute[233724]: 2025-11-29 08:29:35.951 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:36.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:36 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:29:36 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3488674039' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:29:36 np0005539552 nova_compute[233724]: 2025-11-29 08:29:36.425 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:36 np0005539552 nova_compute[233724]: 2025-11-29 08:29:36.512 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:29:36 np0005539552 nova_compute[233724]: 2025-11-29 08:29:36.513 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:29:36 np0005539552 nova_compute[233724]: 2025-11-29 08:29:36.517 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000094 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:29:36 np0005539552 nova_compute[233724]: 2025-11-29 08:29:36.517 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000094 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:29:36 np0005539552 nova_compute[233724]: 2025-11-29 08:29:36.649 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:36 np0005539552 nova_compute[233724]: 2025-11-29 08:29:36.701 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:29:36 np0005539552 nova_compute[233724]: 2025-11-29 08:29:36.702 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3907MB free_disk=20.817386627197266GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:29:36 np0005539552 nova_compute[233724]: 2025-11-29 08:29:36.703 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:36 np0005539552 nova_compute[233724]: 2025-11-29 08:29:36.703 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:36 np0005539552 nova_compute[233724]: 2025-11-29 08:29:36.856 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 101e7b80-d529-4f2a-87df-44512ead5b00 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:29:36 np0005539552 nova_compute[233724]: 2025-11-29 08:29:36.856 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 8ba722d8-f0b0-426b-a972-888ebce61a32 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:29:36 np0005539552 nova_compute[233724]: 2025-11-29 08:29:36.857 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:29:36 np0005539552 nova_compute[233724]: 2025-11-29 08:29:36.857 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:29:36 np0005539552 nova_compute[233724]: 2025-11-29 08:29:36.907 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:37.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:29:37 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3816601632' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:29:37 np0005539552 nova_compute[233724]: 2025-11-29 08:29:37.353 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:37 np0005539552 nova_compute[233724]: 2025-11-29 08:29:37.361 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:29:37 np0005539552 nova_compute[233724]: 2025-11-29 08:29:37.396 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:29:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:38.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:38 np0005539552 nova_compute[233724]: 2025-11-29 08:29:38.382 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:29:38 np0005539552 nova_compute[233724]: 2025-11-29 08:29:38.383 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e354 e354: 3 total, 3 up, 3 in
Nov 29 03:29:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:39.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:29:39Z|00089|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:41:b1:70 10.100.0.12
Nov 29 03:29:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:29:39Z|00090|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:41:b1:70 10.100.0.12
Nov 29 03:29:40 np0005539552 nova_compute[233724]: 2025-11-29 08:29:40.001 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:40.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:41.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:41 np0005539552 nova_compute[233724]: 2025-11-29 08:29:41.651 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:42.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:42 np0005539552 nova_compute[233724]: 2025-11-29 08:29:42.382 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:29:42 np0005539552 nova_compute[233724]: 2025-11-29 08:29:42.384 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:29:42 np0005539552 nova_compute[233724]: 2025-11-29 08:29:42.384 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:29:42 np0005539552 nova_compute[233724]: 2025-11-29 08:29:42.385 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:29:42 np0005539552 nova_compute[233724]: 2025-11-29 08:29:42.385 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:29:42 np0005539552 nova_compute[233724]: 2025-11-29 08:29:42.386 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:29:42 np0005539552 nova_compute[233724]: 2025-11-29 08:29:42.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:29:42 np0005539552 nova_compute[233724]: 2025-11-29 08:29:42.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:29:42 np0005539552 nova_compute[233724]: 2025-11-29 08:29:42.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 03:29:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:43.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:44.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:45 np0005539552 nova_compute[233724]: 2025-11-29 08:29:45.002 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:45.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:46 np0005539552 nova_compute[233724]: 2025-11-29 08:29:46.098 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:29:46 np0005539552 nova_compute[233724]: 2025-11-29 08:29:46.099 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:29:46 np0005539552 nova_compute[233724]: 2025-11-29 08:29:46.099 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:29:46 np0005539552 nova_compute[233724]: 2025-11-29 08:29:46.099 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:29:46 np0005539552 podman[297730]: 2025-11-29 08:29:46.170226587 +0000 UTC m=+0.127976144 container exec 2e449e743ed8b622f7d948671af16b72c407124db8f8583030f534a9757aa05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 03:29:46 np0005539552 nova_compute[233724]: 2025-11-29 08:29:46.193 233728 DEBUG oslo_concurrency.lockutils [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "1c33da68-e01e-42a2-8769-021721b7b0f4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:46 np0005539552 nova_compute[233724]: 2025-11-29 08:29:46.193 233728 DEBUG oslo_concurrency.lockutils [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "1c33da68-e01e-42a2-8769-021721b7b0f4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:46.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:46 np0005539552 nova_compute[233724]: 2025-11-29 08:29:46.259 233728 DEBUG nova.compute.manager [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:29:46 np0005539552 podman[297730]: 2025-11-29 08:29:46.310997965 +0000 UTC m=+0.268747522 container exec_died 2e449e743ed8b622f7d948671af16b72c407124db8f8583030f534a9757aa05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 03:29:46 np0005539552 nova_compute[233724]: 2025-11-29 08:29:46.386 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "refresh_cache-101e7b80-d529-4f2a-87df-44512ead5b00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:29:46 np0005539552 nova_compute[233724]: 2025-11-29 08:29:46.387 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquired lock "refresh_cache-101e7b80-d529-4f2a-87df-44512ead5b00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:29:46 np0005539552 nova_compute[233724]: 2025-11-29 08:29:46.387 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:29:46 np0005539552 nova_compute[233724]: 2025-11-29 08:29:46.387 233728 DEBUG nova.objects.instance [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 101e7b80-d529-4f2a-87df-44512ead5b00 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:29:46 np0005539552 nova_compute[233724]: 2025-11-29 08:29:46.548 233728 DEBUG oslo_concurrency.lockutils [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:46 np0005539552 nova_compute[233724]: 2025-11-29 08:29:46.548 233728 DEBUG oslo_concurrency.lockutils [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:46 np0005539552 nova_compute[233724]: 2025-11-29 08:29:46.553 233728 DEBUG nova.virt.hardware [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:29:46 np0005539552 nova_compute[233724]: 2025-11-29 08:29:46.554 233728 INFO nova.compute.claims [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:29:46 np0005539552 nova_compute[233724]: 2025-11-29 08:29:46.654 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:46 np0005539552 nova_compute[233724]: 2025-11-29 08:29:46.823 233728 DEBUG oslo_concurrency.processutils [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:47 np0005539552 podman[297896]: 2025-11-29 08:29:47.093681234 +0000 UTC m=+0.099935670 container exec fbc97e64df6fe35a26c07fd8827c68b935db0d4237087be449d0fd015b40e005 (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-2-efzvmt)
Nov 29 03:29:47 np0005539552 podman[297896]: 2025-11-29 08:29:47.246034263 +0000 UTC m=+0.252288669 container exec_died fbc97e64df6fe35a26c07fd8827c68b935db0d4237087be449d0fd015b40e005 (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-2-efzvmt)
Nov 29 03:29:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:29:47 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4127541899' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:29:47 np0005539552 nova_compute[233724]: 2025-11-29 08:29:47.298 233728 DEBUG oslo_concurrency.processutils [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:47 np0005539552 nova_compute[233724]: 2025-11-29 08:29:47.307 233728 DEBUG nova.compute.provider_tree [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:29:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:47.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:47 np0005539552 nova_compute[233724]: 2025-11-29 08:29:47.384 233728 DEBUG nova.scheduler.client.report [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:29:47 np0005539552 nova_compute[233724]: 2025-11-29 08:29:47.466 233728 DEBUG oslo_concurrency.lockutils [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.918s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:47 np0005539552 nova_compute[233724]: 2025-11-29 08:29:47.467 233728 DEBUG nova.compute.manager [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:29:47 np0005539552 nova_compute[233724]: 2025-11-29 08:29:47.567 233728 DEBUG nova.compute.manager [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:29:47 np0005539552 nova_compute[233724]: 2025-11-29 08:29:47.568 233728 DEBUG nova.network.neutron [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:29:47 np0005539552 nova_compute[233724]: 2025-11-29 08:29:47.692 233728 INFO nova.virt.libvirt.driver [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:29:47 np0005539552 podman[297971]: 2025-11-29 08:29:47.700612524 +0000 UTC m=+0.117643426 container exec 6c20d6486ea31b9565e62504d583e2d8c9512b707e8e2015e3cef66c15f3b3e6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr, vcs-type=git, architecture=x86_64, name=keepalived, vendor=Red Hat, Inc., description=keepalived for Ceph, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, distribution-scope=public, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=)
Nov 29 03:29:47 np0005539552 podman[297971]: 2025-11-29 08:29:47.764981406 +0000 UTC m=+0.182012258 container exec_died 6c20d6486ea31b9565e62504d583e2d8c9512b707e8e2015e3cef66c15f3b3e6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, distribution-scope=public, vcs-type=git, build-date=2023-02-22T09:23:20, name=keepalived, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Nov 29 03:29:47 np0005539552 nova_compute[233724]: 2025-11-29 08:29:47.778 233728 DEBUG nova.compute.manager [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:29:47 np0005539552 nova_compute[233724]: 2025-11-29 08:29:47.931 233728 DEBUG nova.compute.manager [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:29:47 np0005539552 nova_compute[233724]: 2025-11-29 08:29:47.933 233728 DEBUG nova.virt.libvirt.driver [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:29:47 np0005539552 nova_compute[233724]: 2025-11-29 08:29:47.933 233728 INFO nova.virt.libvirt.driver [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Creating image(s)#033[00m
Nov 29 03:29:47 np0005539552 nova_compute[233724]: 2025-11-29 08:29:47.966 233728 DEBUG nova.storage.rbd_utils [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 1c33da68-e01e-42a2-8769-021721b7b0f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:29:47 np0005539552 nova_compute[233724]: 2025-11-29 08:29:47.996 233728 DEBUG nova.storage.rbd_utils [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 1c33da68-e01e-42a2-8769-021721b7b0f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:29:48 np0005539552 nova_compute[233724]: 2025-11-29 08:29:48.022 233728 DEBUG nova.storage.rbd_utils [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 1c33da68-e01e-42a2-8769-021721b7b0f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:29:48 np0005539552 nova_compute[233724]: 2025-11-29 08:29:48.026 233728 DEBUG oslo_concurrency.processutils [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:48 np0005539552 nova_compute[233724]: 2025-11-29 08:29:48.066 233728 DEBUG nova.policy [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0741d46905e94415a372bd62751dff66', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5970d12b2c42419e889cd48de28c4b86', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:29:48 np0005539552 nova_compute[233724]: 2025-11-29 08:29:48.089 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Updating instance_info_cache with network_info: [{"id": "41f8db2f-85ae-4916-91a0-fedefca2c76e", "address": "fa:16:3e:02:b7:ca", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41f8db2f-85", "ovs_interfaceid": "41f8db2f-85ae-4916-91a0-fedefca2c76e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:29:48 np0005539552 nova_compute[233724]: 2025-11-29 08:29:48.120 233728 DEBUG oslo_concurrency.processutils [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:48 np0005539552 nova_compute[233724]: 2025-11-29 08:29:48.121 233728 DEBUG oslo_concurrency.lockutils [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:48 np0005539552 nova_compute[233724]: 2025-11-29 08:29:48.121 233728 DEBUG oslo_concurrency.lockutils [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:48 np0005539552 nova_compute[233724]: 2025-11-29 08:29:48.122 233728 DEBUG oslo_concurrency.lockutils [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:48 np0005539552 nova_compute[233724]: 2025-11-29 08:29:48.187 233728 DEBUG nova.storage.rbd_utils [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 1c33da68-e01e-42a2-8769-021721b7b0f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:29:48 np0005539552 nova_compute[233724]: 2025-11-29 08:29:48.190 233728 DEBUG oslo_concurrency.processutils [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 1c33da68-e01e-42a2-8769-021721b7b0f4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:48.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:48 np0005539552 nova_compute[233724]: 2025-11-29 08:29:48.220 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Releasing lock "refresh_cache-101e7b80-d529-4f2a-87df-44512ead5b00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:29:48 np0005539552 nova_compute[233724]: 2025-11-29 08:29:48.221 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:29:48 np0005539552 nova_compute[233724]: 2025-11-29 08:29:48.752 233728 DEBUG nova.network.neutron [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Successfully created port: e1f7780a-b948-4e6c-85f7-a9c35a0536a5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:29:49 np0005539552 nova_compute[233724]: 2025-11-29 08:29:49.162 233728 DEBUG oslo_concurrency.processutils [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 1c33da68-e01e-42a2-8769-021721b7b0f4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.971s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:49 np0005539552 nova_compute[233724]: 2025-11-29 08:29:49.230 233728 DEBUG nova.storage.rbd_utils [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] resizing rbd image 1c33da68-e01e-42a2-8769-021721b7b0f4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:29:49 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:29:49 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:29:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:49.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:49 np0005539552 nova_compute[233724]: 2025-11-29 08:29:49.875 233728 DEBUG nova.objects.instance [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lazy-loading 'migration_context' on Instance uuid 1c33da68-e01e-42a2-8769-021721b7b0f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:29:49 np0005539552 nova_compute[233724]: 2025-11-29 08:29:49.891 233728 DEBUG nova.virt.libvirt.driver [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:29:49 np0005539552 nova_compute[233724]: 2025-11-29 08:29:49.891 233728 DEBUG nova.virt.libvirt.driver [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Ensure instance console log exists: /var/lib/nova/instances/1c33da68-e01e-42a2-8769-021721b7b0f4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:29:49 np0005539552 nova_compute[233724]: 2025-11-29 08:29:49.892 233728 DEBUG oslo_concurrency.lockutils [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:49 np0005539552 nova_compute[233724]: 2025-11-29 08:29:49.892 233728 DEBUG oslo_concurrency.lockutils [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:49 np0005539552 nova_compute[233724]: 2025-11-29 08:29:49.892 233728 DEBUG oslo_concurrency.lockutils [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:49 np0005539552 podman[298302]: 2025-11-29 08:29:49.995361184 +0000 UTC m=+0.076754846 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:29:50 np0005539552 nova_compute[233724]: 2025-11-29 08:29:50.004 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:50 np0005539552 podman[298303]: 2025-11-29 08:29:50.018743793 +0000 UTC m=+0.092445088 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 29 03:29:50 np0005539552 podman[298304]: 2025-11-29 08:29:50.019388921 +0000 UTC m=+0.093501597 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:29:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:50.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:50 np0005539552 nova_compute[233724]: 2025-11-29 08:29:50.499 233728 DEBUG nova.network.neutron [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Successfully updated port: e1f7780a-b948-4e6c-85f7-a9c35a0536a5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:29:50 np0005539552 nova_compute[233724]: 2025-11-29 08:29:50.594 233728 DEBUG oslo_concurrency.lockutils [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "refresh_cache-1c33da68-e01e-42a2-8769-021721b7b0f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:29:50 np0005539552 nova_compute[233724]: 2025-11-29 08:29:50.595 233728 DEBUG oslo_concurrency.lockutils [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquired lock "refresh_cache-1c33da68-e01e-42a2-8769-021721b7b0f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:29:50 np0005539552 nova_compute[233724]: 2025-11-29 08:29:50.595 233728 DEBUG nova.network.neutron [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:29:50 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 03:29:50 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:29:50 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:29:50 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:29:50 np0005539552 nova_compute[233724]: 2025-11-29 08:29:50.804 233728 DEBUG nova.network.neutron [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:29:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:51.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:51 np0005539552 nova_compute[233724]: 2025-11-29 08:29:51.657 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:52.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:52 np0005539552 nova_compute[233724]: 2025-11-29 08:29:52.942 233728 DEBUG nova.compute.manager [req-5fcc6e8b-45e8-491e-b823-41896ed5fdb1 req-cbe3e655-103b-4b4b-bd96-4c9289c522c5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Received event network-changed-e1f7780a-b948-4e6c-85f7-a9c35a0536a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:52 np0005539552 nova_compute[233724]: 2025-11-29 08:29:52.942 233728 DEBUG nova.compute.manager [req-5fcc6e8b-45e8-491e-b823-41896ed5fdb1 req-cbe3e655-103b-4b4b-bd96-4c9289c522c5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Refreshing instance network info cache due to event network-changed-e1f7780a-b948-4e6c-85f7-a9c35a0536a5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:29:52 np0005539552 nova_compute[233724]: 2025-11-29 08:29:52.942 233728 DEBUG oslo_concurrency.lockutils [req-5fcc6e8b-45e8-491e-b823-41896ed5fdb1 req-cbe3e655-103b-4b4b-bd96-4c9289c522c5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-1c33da68-e01e-42a2-8769-021721b7b0f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:29:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:53.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:53 np0005539552 nova_compute[233724]: 2025-11-29 08:29:53.603 233728 DEBUG nova.network.neutron [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Updating instance_info_cache with network_info: [{"id": "e1f7780a-b948-4e6c-85f7-a9c35a0536a5", "address": "fa:16:3e:5a:67:4d", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1f7780a-b9", "ovs_interfaceid": "e1f7780a-b948-4e6c-85f7-a9c35a0536a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:29:53 np0005539552 nova_compute[233724]: 2025-11-29 08:29:53.630 233728 DEBUG oslo_concurrency.lockutils [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Releasing lock "refresh_cache-1c33da68-e01e-42a2-8769-021721b7b0f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:29:53 np0005539552 nova_compute[233724]: 2025-11-29 08:29:53.630 233728 DEBUG nova.compute.manager [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Instance network_info: |[{"id": "e1f7780a-b948-4e6c-85f7-a9c35a0536a5", "address": "fa:16:3e:5a:67:4d", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1f7780a-b9", "ovs_interfaceid": "e1f7780a-b948-4e6c-85f7-a9c35a0536a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:29:53 np0005539552 nova_compute[233724]: 2025-11-29 08:29:53.631 233728 DEBUG oslo_concurrency.lockutils [req-5fcc6e8b-45e8-491e-b823-41896ed5fdb1 req-cbe3e655-103b-4b4b-bd96-4c9289c522c5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-1c33da68-e01e-42a2-8769-021721b7b0f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:29:53 np0005539552 nova_compute[233724]: 2025-11-29 08:29:53.631 233728 DEBUG nova.network.neutron [req-5fcc6e8b-45e8-491e-b823-41896ed5fdb1 req-cbe3e655-103b-4b4b-bd96-4c9289c522c5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Refreshing network info cache for port e1f7780a-b948-4e6c-85f7-a9c35a0536a5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:29:53 np0005539552 nova_compute[233724]: 2025-11-29 08:29:53.634 233728 DEBUG nova.virt.libvirt.driver [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Start _get_guest_xml network_info=[{"id": "e1f7780a-b948-4e6c-85f7-a9c35a0536a5", "address": "fa:16:3e:5a:67:4d", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1f7780a-b9", "ovs_interfaceid": "e1f7780a-b948-4e6c-85f7-a9c35a0536a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:29:53 np0005539552 nova_compute[233724]: 2025-11-29 08:29:53.639 233728 WARNING nova.virt.libvirt.driver [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:29:53 np0005539552 nova_compute[233724]: 2025-11-29 08:29:53.645 233728 DEBUG nova.virt.libvirt.host [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:29:53 np0005539552 nova_compute[233724]: 2025-11-29 08:29:53.646 233728 DEBUG nova.virt.libvirt.host [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:29:53 np0005539552 nova_compute[233724]: 2025-11-29 08:29:53.653 233728 DEBUG nova.virt.libvirt.host [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:29:53 np0005539552 nova_compute[233724]: 2025-11-29 08:29:53.654 233728 DEBUG nova.virt.libvirt.host [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:29:53 np0005539552 nova_compute[233724]: 2025-11-29 08:29:53.655 233728 DEBUG nova.virt.libvirt.driver [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:29:53 np0005539552 nova_compute[233724]: 2025-11-29 08:29:53.655 233728 DEBUG nova.virt.hardware [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:29:53 np0005539552 nova_compute[233724]: 2025-11-29 08:29:53.656 233728 DEBUG nova.virt.hardware [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:29:53 np0005539552 nova_compute[233724]: 2025-11-29 08:29:53.656 233728 DEBUG nova.virt.hardware [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:29:53 np0005539552 nova_compute[233724]: 2025-11-29 08:29:53.657 233728 DEBUG nova.virt.hardware [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:29:53 np0005539552 nova_compute[233724]: 2025-11-29 08:29:53.657 233728 DEBUG nova.virt.hardware [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:29:53 np0005539552 nova_compute[233724]: 2025-11-29 08:29:53.657 233728 DEBUG nova.virt.hardware [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:29:53 np0005539552 nova_compute[233724]: 2025-11-29 08:29:53.657 233728 DEBUG nova.virt.hardware [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:29:53 np0005539552 nova_compute[233724]: 2025-11-29 08:29:53.658 233728 DEBUG nova.virt.hardware [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:29:53 np0005539552 nova_compute[233724]: 2025-11-29 08:29:53.658 233728 DEBUG nova.virt.hardware [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:29:53 np0005539552 nova_compute[233724]: 2025-11-29 08:29:53.658 233728 DEBUG nova.virt.hardware [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:29:53 np0005539552 nova_compute[233724]: 2025-11-29 08:29:53.659 233728 DEBUG nova.virt.hardware [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:29:53 np0005539552 nova_compute[233724]: 2025-11-29 08:29:53.663 233728 DEBUG oslo_concurrency.processutils [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:54 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:29:54 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/239670346' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:29:54 np0005539552 nova_compute[233724]: 2025-11-29 08:29:54.071 233728 DEBUG oslo_concurrency.processutils [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:54 np0005539552 nova_compute[233724]: 2025-11-29 08:29:54.098 233728 DEBUG nova.storage.rbd_utils [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 1c33da68-e01e-42a2-8769-021721b7b0f4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:29:54 np0005539552 nova_compute[233724]: 2025-11-29 08:29:54.103 233728 DEBUG oslo_concurrency.processutils [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:54.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:54 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:29:54 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3595116210' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:29:54 np0005539552 nova_compute[233724]: 2025-11-29 08:29:54.523 233728 DEBUG oslo_concurrency.processutils [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:54 np0005539552 nova_compute[233724]: 2025-11-29 08:29:54.526 233728 DEBUG nova.virt.libvirt.vif [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:29:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1316428910',display_name='tempest-ServersTestJSON-server-1316428910',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestjson-server-1316428910',id=152,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMSUSGedI5apqCgtWOmW9LYyu2hIt9SRb+qDhHjjsPNl5IeknjzqZbJNFhr0MnGiXAVE7b8aI70WMiPJPg9b1eL7NBtAAlxVSQHQl1eDK8YDvxqlG7ySwaPXoh+d+sAOOQ==',key_name='tempest-key-399192811',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5970d12b2c42419e889cd48de28c4b86',ramdisk_id='',reservation_id='r-3cqsasbe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1509574488',owner_user_name='tempest-ServersTestJSON-1509574488-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:29:47Z,user_data=None,user_id='0741d46905e94415a372bd62751dff66',uuid=1c33da68-e01e-42a2-8769-021721b7b0f4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e1f7780a-b948-4e6c-85f7-a9c35a0536a5", "address": "fa:16:3e:5a:67:4d", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1f7780a-b9", "ovs_interfaceid": "e1f7780a-b948-4e6c-85f7-a9c35a0536a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:29:54 np0005539552 nova_compute[233724]: 2025-11-29 08:29:54.527 233728 DEBUG nova.network.os_vif_util [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Converting VIF {"id": "e1f7780a-b948-4e6c-85f7-a9c35a0536a5", "address": "fa:16:3e:5a:67:4d", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1f7780a-b9", "ovs_interfaceid": "e1f7780a-b948-4e6c-85f7-a9c35a0536a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:29:54 np0005539552 nova_compute[233724]: 2025-11-29 08:29:54.529 233728 DEBUG nova.network.os_vif_util [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:67:4d,bridge_name='br-int',has_traffic_filtering=True,id=e1f7780a-b948-4e6c-85f7-a9c35a0536a5,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1f7780a-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:29:54 np0005539552 nova_compute[233724]: 2025-11-29 08:29:54.531 233728 DEBUG nova.objects.instance [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1c33da68-e01e-42a2-8769-021721b7b0f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:29:54 np0005539552 nova_compute[233724]: 2025-11-29 08:29:54.548 233728 DEBUG nova.virt.libvirt.driver [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:29:54 np0005539552 nova_compute[233724]:  <uuid>1c33da68-e01e-42a2-8769-021721b7b0f4</uuid>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:  <name>instance-00000098</name>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:29:54 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:      <nova:name>tempest-ServersTestJSON-server-1316428910</nova:name>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:29:53</nova:creationTime>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:29:54 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:        <nova:user uuid="0741d46905e94415a372bd62751dff66">tempest-ServersTestJSON-1509574488-project-member</nova:user>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:        <nova:project uuid="5970d12b2c42419e889cd48de28c4b86">tempest-ServersTestJSON-1509574488</nova:project>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:        <nova:port uuid="e1f7780a-b948-4e6c-85f7-a9c35a0536a5">
Nov 29 03:29:54 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:      <entry name="serial">1c33da68-e01e-42a2-8769-021721b7b0f4</entry>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:      <entry name="uuid">1c33da68-e01e-42a2-8769-021721b7b0f4</entry>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:29:54 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/1c33da68-e01e-42a2-8769-021721b7b0f4_disk">
Nov 29 03:29:54 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:29:54 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:29:54 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/1c33da68-e01e-42a2-8769-021721b7b0f4_disk.config">
Nov 29 03:29:54 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:29:54 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:29:54 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:5a:67:4d"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:      <target dev="tape1f7780a-b9"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:29:54 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/1c33da68-e01e-42a2-8769-021721b7b0f4/console.log" append="off"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:29:54 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:29:54 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:29:54 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:29:54 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:29:54 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:29:54 np0005539552 nova_compute[233724]: 2025-11-29 08:29:54.549 233728 DEBUG nova.compute.manager [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Preparing to wait for external event network-vif-plugged-e1f7780a-b948-4e6c-85f7-a9c35a0536a5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:29:54 np0005539552 nova_compute[233724]: 2025-11-29 08:29:54.550 233728 DEBUG oslo_concurrency.lockutils [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "1c33da68-e01e-42a2-8769-021721b7b0f4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:54 np0005539552 nova_compute[233724]: 2025-11-29 08:29:54.550 233728 DEBUG oslo_concurrency.lockutils [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "1c33da68-e01e-42a2-8769-021721b7b0f4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:54 np0005539552 nova_compute[233724]: 2025-11-29 08:29:54.550 233728 DEBUG oslo_concurrency.lockutils [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "1c33da68-e01e-42a2-8769-021721b7b0f4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:54 np0005539552 nova_compute[233724]: 2025-11-29 08:29:54.551 233728 DEBUG nova.virt.libvirt.vif [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:29:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1316428910',display_name='tempest-ServersTestJSON-server-1316428910',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestjson-server-1316428910',id=152,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMSUSGedI5apqCgtWOmW9LYyu2hIt9SRb+qDhHjjsPNl5IeknjzqZbJNFhr0MnGiXAVE7b8aI70WMiPJPg9b1eL7NBtAAlxVSQHQl1eDK8YDvxqlG7ySwaPXoh+d+sAOOQ==',key_name='tempest-key-399192811',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5970d12b2c42419e889cd48de28c4b86',ramdisk_id='',reservation_id='r-3cqsasbe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1509574488',owner_user_name='tempest-ServersTestJSON-1509574488-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:29:47Z,user_data=None,user_id='0741d46905e94415a372bd62751dff66',uuid=1c33da68-e01e-42a2-8769-021721b7b0f4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e1f7780a-b948-4e6c-85f7-a9c35a0536a5", "address": "fa:16:3e:5a:67:4d", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1f7780a-b9", "ovs_interfaceid": "e1f7780a-b948-4e6c-85f7-a9c35a0536a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:29:54 np0005539552 nova_compute[233724]: 2025-11-29 08:29:54.551 233728 DEBUG nova.network.os_vif_util [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Converting VIF {"id": "e1f7780a-b948-4e6c-85f7-a9c35a0536a5", "address": "fa:16:3e:5a:67:4d", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1f7780a-b9", "ovs_interfaceid": "e1f7780a-b948-4e6c-85f7-a9c35a0536a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:29:54 np0005539552 nova_compute[233724]: 2025-11-29 08:29:54.552 233728 DEBUG nova.network.os_vif_util [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:67:4d,bridge_name='br-int',has_traffic_filtering=True,id=e1f7780a-b948-4e6c-85f7-a9c35a0536a5,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1f7780a-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:29:54 np0005539552 nova_compute[233724]: 2025-11-29 08:29:54.552 233728 DEBUG os_vif [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:67:4d,bridge_name='br-int',has_traffic_filtering=True,id=e1f7780a-b948-4e6c-85f7-a9c35a0536a5,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1f7780a-b9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:29:54 np0005539552 nova_compute[233724]: 2025-11-29 08:29:54.553 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:54 np0005539552 nova_compute[233724]: 2025-11-29 08:29:54.553 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:54 np0005539552 nova_compute[233724]: 2025-11-29 08:29:54.554 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:29:54 np0005539552 nova_compute[233724]: 2025-11-29 08:29:54.556 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:54 np0005539552 nova_compute[233724]: 2025-11-29 08:29:54.556 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape1f7780a-b9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:54 np0005539552 nova_compute[233724]: 2025-11-29 08:29:54.557 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape1f7780a-b9, col_values=(('external_ids', {'iface-id': 'e1f7780a-b948-4e6c-85f7-a9c35a0536a5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5a:67:4d', 'vm-uuid': '1c33da68-e01e-42a2-8769-021721b7b0f4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:54 np0005539552 nova_compute[233724]: 2025-11-29 08:29:54.558 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:54 np0005539552 NetworkManager[48926]: <info>  [1764404994.5598] manager: (tape1f7780a-b9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/307)
Nov 29 03:29:54 np0005539552 nova_compute[233724]: 2025-11-29 08:29:54.561 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:29:54 np0005539552 nova_compute[233724]: 2025-11-29 08:29:54.564 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:54 np0005539552 nova_compute[233724]: 2025-11-29 08:29:54.565 233728 INFO os_vif [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:67:4d,bridge_name='br-int',has_traffic_filtering=True,id=e1f7780a-b948-4e6c-85f7-a9c35a0536a5,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1f7780a-b9')#033[00m
Nov 29 03:29:54 np0005539552 nova_compute[233724]: 2025-11-29 08:29:54.637 233728 DEBUG nova.network.neutron [req-5fcc6e8b-45e8-491e-b823-41896ed5fdb1 req-cbe3e655-103b-4b4b-bd96-4c9289c522c5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Updated VIF entry in instance network info cache for port e1f7780a-b948-4e6c-85f7-a9c35a0536a5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:29:54 np0005539552 nova_compute[233724]: 2025-11-29 08:29:54.638 233728 DEBUG nova.network.neutron [req-5fcc6e8b-45e8-491e-b823-41896ed5fdb1 req-cbe3e655-103b-4b4b-bd96-4c9289c522c5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Updating instance_info_cache with network_info: [{"id": "e1f7780a-b948-4e6c-85f7-a9c35a0536a5", "address": "fa:16:3e:5a:67:4d", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1f7780a-b9", "ovs_interfaceid": "e1f7780a-b948-4e6c-85f7-a9c35a0536a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:29:54 np0005539552 nova_compute[233724]: 2025-11-29 08:29:54.652 233728 DEBUG oslo_concurrency.lockutils [req-5fcc6e8b-45e8-491e-b823-41896ed5fdb1 req-cbe3e655-103b-4b4b-bd96-4c9289c522c5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-1c33da68-e01e-42a2-8769-021721b7b0f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:29:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:55.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:55 np0005539552 nova_compute[233724]: 2025-11-29 08:29:55.381 233728 DEBUG nova.virt.libvirt.driver [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:29:55 np0005539552 nova_compute[233724]: 2025-11-29 08:29:55.381 233728 DEBUG nova.virt.libvirt.driver [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:29:55 np0005539552 nova_compute[233724]: 2025-11-29 08:29:55.381 233728 DEBUG nova.virt.libvirt.driver [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] No VIF found with MAC fa:16:3e:5a:67:4d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:29:55 np0005539552 nova_compute[233724]: 2025-11-29 08:29:55.382 233728 INFO nova.virt.libvirt.driver [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Using config drive#033[00m
Nov 29 03:29:55 np0005539552 nova_compute[233724]: 2025-11-29 08:29:55.405 233728 DEBUG nova.storage.rbd_utils [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 1c33da68-e01e-42a2-8769-021721b7b0f4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:29:56 np0005539552 nova_compute[233724]: 2025-11-29 08:29:56.095 233728 INFO nova.virt.libvirt.driver [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Creating config drive at /var/lib/nova/instances/1c33da68-e01e-42a2-8769-021721b7b0f4/disk.config#033[00m
Nov 29 03:29:56 np0005539552 nova_compute[233724]: 2025-11-29 08:29:56.101 233728 DEBUG oslo_concurrency.processutils [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1c33da68-e01e-42a2-8769-021721b7b0f4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq9g7cpfr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:29:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:56.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:29:56 np0005539552 nova_compute[233724]: 2025-11-29 08:29:56.242 233728 DEBUG oslo_concurrency.processutils [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1c33da68-e01e-42a2-8769-021721b7b0f4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq9g7cpfr" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:56 np0005539552 nova_compute[233724]: 2025-11-29 08:29:56.271 233728 DEBUG nova.storage.rbd_utils [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 1c33da68-e01e-42a2-8769-021721b7b0f4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:29:56 np0005539552 nova_compute[233724]: 2025-11-29 08:29:56.275 233728 DEBUG oslo_concurrency.processutils [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1c33da68-e01e-42a2-8769-021721b7b0f4/disk.config 1c33da68-e01e-42a2-8769-021721b7b0f4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:56 np0005539552 nova_compute[233724]: 2025-11-29 08:29:56.660 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:57 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:29:57 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1265976792' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:29:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:57.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:58 np0005539552 nova_compute[233724]: 2025-11-29 08:29:58.197 233728 DEBUG oslo_concurrency.processutils [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1c33da68-e01e-42a2-8769-021721b7b0f4/disk.config 1c33da68-e01e-42a2-8769-021721b7b0f4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.922s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:58 np0005539552 nova_compute[233724]: 2025-11-29 08:29:58.198 233728 INFO nova.virt.libvirt.driver [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Deleting local config drive /var/lib/nova/instances/1c33da68-e01e-42a2-8769-021721b7b0f4/disk.config because it was imported into RBD.#033[00m
Nov 29 03:29:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:58.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:58 np0005539552 kernel: tape1f7780a-b9: entered promiscuous mode
Nov 29 03:29:58 np0005539552 NetworkManager[48926]: <info>  [1764404998.2604] manager: (tape1f7780a-b9): new Tun device (/org/freedesktop/NetworkManager/Devices/308)
Nov 29 03:29:58 np0005539552 ovn_controller[133798]: 2025-11-29T08:29:58Z|00704|binding|INFO|Claiming lport e1f7780a-b948-4e6c-85f7-a9c35a0536a5 for this chassis.
Nov 29 03:29:58 np0005539552 ovn_controller[133798]: 2025-11-29T08:29:58Z|00705|binding|INFO|e1f7780a-b948-4e6c-85f7-a9c35a0536a5: Claiming fa:16:3e:5a:67:4d 10.100.0.8
Nov 29 03:29:58 np0005539552 nova_compute[233724]: 2025-11-29 08:29:58.263 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:58.271 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5a:67:4d 10.100.0.8'], port_security=['fa:16:3e:5a:67:4d 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '1c33da68-e01e-42a2-8769-021721b7b0f4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14ea2b48-9984-443b-82fc-568ae98723fc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5970d12b2c42419e889cd48de28c4b86', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1f4c15e1-3db4-4257-8a40-7ffdc4076590', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=deb2b192-93f0-4938-a0e1-77284f619a46, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=e1f7780a-b948-4e6c-85f7-a9c35a0536a5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:29:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:58.274 143400 INFO neutron.agent.ovn.metadata.agent [-] Port e1f7780a-b948-4e6c-85f7-a9c35a0536a5 in datapath 14ea2b48-9984-443b-82fc-568ae98723fc bound to our chassis#033[00m
Nov 29 03:29:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:58.277 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 14ea2b48-9984-443b-82fc-568ae98723fc#033[00m
Nov 29 03:29:58 np0005539552 ovn_controller[133798]: 2025-11-29T08:29:58Z|00706|binding|INFO|Setting lport e1f7780a-b948-4e6c-85f7-a9c35a0536a5 ovn-installed in OVS
Nov 29 03:29:58 np0005539552 ovn_controller[133798]: 2025-11-29T08:29:58Z|00707|binding|INFO|Setting lport e1f7780a-b948-4e6c-85f7-a9c35a0536a5 up in Southbound
Nov 29 03:29:58 np0005539552 nova_compute[233724]: 2025-11-29 08:29:58.284 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:58 np0005539552 nova_compute[233724]: 2025-11-29 08:29:58.287 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:58 np0005539552 systemd-machined[196379]: New machine qemu-71-instance-00000098.
Nov 29 03:29:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:58.297 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[fabcfdfc-866d-4f67-af8d-fa02c521c12e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:58 np0005539552 systemd[1]: Started Virtual Machine qemu-71-instance-00000098.
Nov 29 03:29:58 np0005539552 systemd-udevd[298504]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:29:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:58.328 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[128f8c93-f0e2-44c0-a0f1-a0e2614f53d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:58 np0005539552 NetworkManager[48926]: <info>  [1764404998.3333] device (tape1f7780a-b9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:29:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:58.332 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[94fdcede-1ccd-4f3b-9081-208b8fae7638]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:58 np0005539552 NetworkManager[48926]: <info>  [1764404998.3341] device (tape1f7780a-b9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:29:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:58.361 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[dc1a6d21-32d0-4376-bcaa-13cfe1d8b059]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:58.380 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6ddf1031-9825-4cb6-9935-b0962aa3aaee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14ea2b48-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f8:16:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 205], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 799137, 'reachable_time': 16968, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298514, 'error': None, 'target': 'ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:58.398 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[9f3f8184-bfb4-4961-9fd2-a465ef7979a9]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap14ea2b48-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 799152, 'tstamp': 799152}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298516, 'error': None, 'target': 'ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap14ea2b48-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 799155, 'tstamp': 799155}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298516, 'error': None, 'target': 'ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:58.399 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14ea2b48-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:58 np0005539552 nova_compute[233724]: 2025-11-29 08:29:58.400 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:58 np0005539552 nova_compute[233724]: 2025-11-29 08:29:58.401 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:58.402 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14ea2b48-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:58.402 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:29:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:58.402 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap14ea2b48-90, col_values=(('external_ids', {'iface-id': '42f71355-5b3f-49f9-b3e9-d89b87086d5d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:58.403 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:29:58 np0005539552 nova_compute[233724]: 2025-11-29 08:29:58.542 233728 DEBUG nova.compute.manager [req-771017e8-b967-4bd1-8f17-4bd032844b3a req-d1b9115b-fd0d-43aa-b151-e6e6105b664e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Received event network-vif-plugged-e1f7780a-b948-4e6c-85f7-a9c35a0536a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:58 np0005539552 nova_compute[233724]: 2025-11-29 08:29:58.542 233728 DEBUG oslo_concurrency.lockutils [req-771017e8-b967-4bd1-8f17-4bd032844b3a req-d1b9115b-fd0d-43aa-b151-e6e6105b664e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "1c33da68-e01e-42a2-8769-021721b7b0f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:58 np0005539552 nova_compute[233724]: 2025-11-29 08:29:58.543 233728 DEBUG oslo_concurrency.lockutils [req-771017e8-b967-4bd1-8f17-4bd032844b3a req-d1b9115b-fd0d-43aa-b151-e6e6105b664e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1c33da68-e01e-42a2-8769-021721b7b0f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:58 np0005539552 nova_compute[233724]: 2025-11-29 08:29:58.543 233728 DEBUG oslo_concurrency.lockutils [req-771017e8-b967-4bd1-8f17-4bd032844b3a req-d1b9115b-fd0d-43aa-b151-e6e6105b664e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1c33da68-e01e-42a2-8769-021721b7b0f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:58 np0005539552 nova_compute[233724]: 2025-11-29 08:29:58.543 233728 DEBUG nova.compute.manager [req-771017e8-b967-4bd1-8f17-4bd032844b3a req-d1b9115b-fd0d-43aa-b151-e6e6105b664e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Processing event network-vif-plugged-e1f7780a-b948-4e6c-85f7-a9c35a0536a5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:29:59 np0005539552 nova_compute[233724]: 2025-11-29 08:29:59.002 233728 DEBUG nova.compute.manager [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:29:59 np0005539552 nova_compute[233724]: 2025-11-29 08:29:59.003 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404999.0031557, 1c33da68-e01e-42a2-8769-021721b7b0f4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:29:59 np0005539552 nova_compute[233724]: 2025-11-29 08:29:59.003 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] VM Started (Lifecycle Event)#033[00m
Nov 29 03:29:59 np0005539552 nova_compute[233724]: 2025-11-29 08:29:59.006 233728 DEBUG nova.virt.libvirt.driver [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:29:59 np0005539552 nova_compute[233724]: 2025-11-29 08:29:59.008 233728 INFO nova.virt.libvirt.driver [-] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Instance spawned successfully.#033[00m
Nov 29 03:29:59 np0005539552 nova_compute[233724]: 2025-11-29 08:29:59.009 233728 DEBUG nova.virt.libvirt.driver [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:29:59 np0005539552 nova_compute[233724]: 2025-11-29 08:29:59.031 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:29:59 np0005539552 nova_compute[233724]: 2025-11-29 08:29:59.035 233728 DEBUG nova.virt.libvirt.driver [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:29:59 np0005539552 nova_compute[233724]: 2025-11-29 08:29:59.036 233728 DEBUG nova.virt.libvirt.driver [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:29:59 np0005539552 nova_compute[233724]: 2025-11-29 08:29:59.036 233728 DEBUG nova.virt.libvirt.driver [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:29:59 np0005539552 nova_compute[233724]: 2025-11-29 08:29:59.036 233728 DEBUG nova.virt.libvirt.driver [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:29:59 np0005539552 nova_compute[233724]: 2025-11-29 08:29:59.037 233728 DEBUG nova.virt.libvirt.driver [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:29:59 np0005539552 nova_compute[233724]: 2025-11-29 08:29:59.037 233728 DEBUG nova.virt.libvirt.driver [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:29:59 np0005539552 nova_compute[233724]: 2025-11-29 08:29:59.042 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:29:59 np0005539552 nova_compute[233724]: 2025-11-29 08:29:59.075 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:29:59 np0005539552 nova_compute[233724]: 2025-11-29 08:29:59.075 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404999.004103, 1c33da68-e01e-42a2-8769-021721b7b0f4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:29:59 np0005539552 nova_compute[233724]: 2025-11-29 08:29:59.075 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:29:59 np0005539552 nova_compute[233724]: 2025-11-29 08:29:59.104 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:29:59 np0005539552 nova_compute[233724]: 2025-11-29 08:29:59.107 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764404999.0057614, 1c33da68-e01e-42a2-8769-021721b7b0f4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:29:59 np0005539552 nova_compute[233724]: 2025-11-29 08:29:59.108 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:29:59 np0005539552 nova_compute[233724]: 2025-11-29 08:29:59.133 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:29:59 np0005539552 nova_compute[233724]: 2025-11-29 08:29:59.136 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:29:59 np0005539552 nova_compute[233724]: 2025-11-29 08:29:59.148 233728 INFO nova.compute.manager [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Took 11.22 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:29:59 np0005539552 nova_compute[233724]: 2025-11-29 08:29:59.149 233728 DEBUG nova.compute.manager [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:29:59 np0005539552 nova_compute[233724]: 2025-11-29 08:29:59.176 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:29:59 np0005539552 nova_compute[233724]: 2025-11-29 08:29:59.210 233728 INFO nova.compute.manager [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Took 12.76 seconds to build instance.#033[00m
Nov 29 03:29:59 np0005539552 nova_compute[233724]: 2025-11-29 08:29:59.230 233728 DEBUG oslo_concurrency.lockutils [None req-6ff890e6-4408-4cc7-9a87-8354f5b633e0 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "1c33da68-e01e-42a2-8769-021721b7b0f4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:29:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:59.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:59.498 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:29:59 np0005539552 nova_compute[233724]: 2025-11-29 08:29:59.499 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:29:59.499 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:29:59 np0005539552 nova_compute[233724]: 2025-11-29 08:29:59.558 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:00 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:30:00 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:30:00 np0005539552 ceph-mon[77121]: overall HEALTH_OK
Nov 29 03:30:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:00.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:00 np0005539552 nova_compute[233724]: 2025-11-29 08:30:00.833 233728 DEBUG nova.compute.manager [req-6af5b7a1-2739-476b-819d-5cb0bde11e7b req-274332cc-a4fc-43c6-a353-f023fd39a9a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Received event network-vif-plugged-e1f7780a-b948-4e6c-85f7-a9c35a0536a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:00 np0005539552 nova_compute[233724]: 2025-11-29 08:30:00.834 233728 DEBUG oslo_concurrency.lockutils [req-6af5b7a1-2739-476b-819d-5cb0bde11e7b req-274332cc-a4fc-43c6-a353-f023fd39a9a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "1c33da68-e01e-42a2-8769-021721b7b0f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:00 np0005539552 nova_compute[233724]: 2025-11-29 08:30:00.834 233728 DEBUG oslo_concurrency.lockutils [req-6af5b7a1-2739-476b-819d-5cb0bde11e7b req-274332cc-a4fc-43c6-a353-f023fd39a9a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1c33da68-e01e-42a2-8769-021721b7b0f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:00 np0005539552 nova_compute[233724]: 2025-11-29 08:30:00.834 233728 DEBUG oslo_concurrency.lockutils [req-6af5b7a1-2739-476b-819d-5cb0bde11e7b req-274332cc-a4fc-43c6-a353-f023fd39a9a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1c33da68-e01e-42a2-8769-021721b7b0f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:00 np0005539552 nova_compute[233724]: 2025-11-29 08:30:00.835 233728 DEBUG nova.compute.manager [req-6af5b7a1-2739-476b-819d-5cb0bde11e7b req-274332cc-a4fc-43c6-a353-f023fd39a9a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] No waiting events found dispatching network-vif-plugged-e1f7780a-b948-4e6c-85f7-a9c35a0536a5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:30:00 np0005539552 nova_compute[233724]: 2025-11-29 08:30:00.835 233728 WARNING nova.compute.manager [req-6af5b7a1-2739-476b-819d-5cb0bde11e7b req-274332cc-a4fc-43c6-a353-f023fd39a9a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Received unexpected event network-vif-plugged-e1f7780a-b948-4e6c-85f7-a9c35a0536a5 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:30:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:30:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:01.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:30:01 np0005539552 nova_compute[233724]: 2025-11-29 08:30:01.581 233728 DEBUG oslo_concurrency.lockutils [None req-bb99d9a0-24a9-4912-bff1-88e6e42dddda 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "1c33da68-e01e-42a2-8769-021721b7b0f4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:01 np0005539552 nova_compute[233724]: 2025-11-29 08:30:01.582 233728 DEBUG oslo_concurrency.lockutils [None req-bb99d9a0-24a9-4912-bff1-88e6e42dddda 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "1c33da68-e01e-42a2-8769-021721b7b0f4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:01 np0005539552 nova_compute[233724]: 2025-11-29 08:30:01.582 233728 DEBUG oslo_concurrency.lockutils [None req-bb99d9a0-24a9-4912-bff1-88e6e42dddda 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "1c33da68-e01e-42a2-8769-021721b7b0f4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:01 np0005539552 nova_compute[233724]: 2025-11-29 08:30:01.582 233728 DEBUG oslo_concurrency.lockutils [None req-bb99d9a0-24a9-4912-bff1-88e6e42dddda 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "1c33da68-e01e-42a2-8769-021721b7b0f4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:01 np0005539552 nova_compute[233724]: 2025-11-29 08:30:01.582 233728 DEBUG oslo_concurrency.lockutils [None req-bb99d9a0-24a9-4912-bff1-88e6e42dddda 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "1c33da68-e01e-42a2-8769-021721b7b0f4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:01 np0005539552 nova_compute[233724]: 2025-11-29 08:30:01.583 233728 INFO nova.compute.manager [None req-bb99d9a0-24a9-4912-bff1-88e6e42dddda 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Terminating instance#033[00m
Nov 29 03:30:01 np0005539552 nova_compute[233724]: 2025-11-29 08:30:01.584 233728 DEBUG nova.compute.manager [None req-bb99d9a0-24a9-4912-bff1-88e6e42dddda 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:30:01 np0005539552 kernel: tape1f7780a-b9 (unregistering): left promiscuous mode
Nov 29 03:30:01 np0005539552 NetworkManager[48926]: <info>  [1764405001.6262] device (tape1f7780a-b9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:30:01 np0005539552 ovn_controller[133798]: 2025-11-29T08:30:01Z|00708|binding|INFO|Releasing lport e1f7780a-b948-4e6c-85f7-a9c35a0536a5 from this chassis (sb_readonly=0)
Nov 29 03:30:01 np0005539552 nova_compute[233724]: 2025-11-29 08:30:01.635 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:01 np0005539552 ovn_controller[133798]: 2025-11-29T08:30:01Z|00709|binding|INFO|Setting lport e1f7780a-b948-4e6c-85f7-a9c35a0536a5 down in Southbound
Nov 29 03:30:01 np0005539552 ovn_controller[133798]: 2025-11-29T08:30:01Z|00710|binding|INFO|Removing iface tape1f7780a-b9 ovn-installed in OVS
Nov 29 03:30:01 np0005539552 nova_compute[233724]: 2025-11-29 08:30:01.636 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:01 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:01.642 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5a:67:4d 10.100.0.8'], port_security=['fa:16:3e:5a:67:4d 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '1c33da68-e01e-42a2-8769-021721b7b0f4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14ea2b48-9984-443b-82fc-568ae98723fc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5970d12b2c42419e889cd48de28c4b86', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1f4c15e1-3db4-4257-8a40-7ffdc4076590', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=deb2b192-93f0-4938-a0e1-77284f619a46, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=e1f7780a-b948-4e6c-85f7-a9c35a0536a5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:30:01 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:01.642 143400 INFO neutron.agent.ovn.metadata.agent [-] Port e1f7780a-b948-4e6c-85f7-a9c35a0536a5 in datapath 14ea2b48-9984-443b-82fc-568ae98723fc unbound from our chassis#033[00m
Nov 29 03:30:01 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:01.644 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 14ea2b48-9984-443b-82fc-568ae98723fc#033[00m
Nov 29 03:30:01 np0005539552 nova_compute[233724]: 2025-11-29 08:30:01.654 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:01 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:01.660 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[1d251ca5-712a-47cd-b505-1f19b8a0f1a9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:01 np0005539552 nova_compute[233724]: 2025-11-29 08:30:01.661 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:01 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:01.691 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[83d75ba2-0d64-4639-8354-c2b819688ec7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:01 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:01.693 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[e21d61b2-3f07-400f-8723-5cd3349019e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:01 np0005539552 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d00000098.scope: Deactivated successfully.
Nov 29 03:30:01 np0005539552 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d00000098.scope: Consumed 3.317s CPU time.
Nov 29 03:30:01 np0005539552 systemd-machined[196379]: Machine qemu-71-instance-00000098 terminated.
Nov 29 03:30:01 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:01.726 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[b4554e26-2897-4523-8c8a-5e458ef127be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:01 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:01.743 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4db79bc4-ce28-4bd4-8094-1e5f699f9284]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14ea2b48-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f8:16:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 205], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 799137, 'reachable_time': 34095, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298623, 'error': None, 'target': 'ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:01 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:01.760 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[63ddf443-8a93-4aca-96fb-5b041aeecab4]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap14ea2b48-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 799152, 'tstamp': 799152}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298624, 'error': None, 'target': 'ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap14ea2b48-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 799155, 'tstamp': 799155}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298624, 'error': None, 'target': 'ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:01 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:01.762 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14ea2b48-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:01 np0005539552 nova_compute[233724]: 2025-11-29 08:30:01.763 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:01 np0005539552 nova_compute[233724]: 2025-11-29 08:30:01.767 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:01 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:01.767 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14ea2b48-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:01 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:01.768 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:30:01 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:01.768 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap14ea2b48-90, col_values=(('external_ids', {'iface-id': '42f71355-5b3f-49f9-b3e9-d89b87086d5d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:01 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:01.768 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:30:01 np0005539552 nova_compute[233724]: 2025-11-29 08:30:01.828 233728 INFO nova.virt.libvirt.driver [-] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Instance destroyed successfully.#033[00m
Nov 29 03:30:01 np0005539552 nova_compute[233724]: 2025-11-29 08:30:01.829 233728 DEBUG nova.objects.instance [None req-bb99d9a0-24a9-4912-bff1-88e6e42dddda 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lazy-loading 'resources' on Instance uuid 1c33da68-e01e-42a2-8769-021721b7b0f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:01 np0005539552 nova_compute[233724]: 2025-11-29 08:30:01.866 233728 DEBUG nova.virt.libvirt.vif [None req-bb99d9a0-24a9-4912-bff1-88e6e42dddda 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:29:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1316428910',display_name='tempest-ServersTestJSON-server-1316428910',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestjson-server-1316428910',id=152,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMSUSGedI5apqCgtWOmW9LYyu2hIt9SRb+qDhHjjsPNl5IeknjzqZbJNFhr0MnGiXAVE7b8aI70WMiPJPg9b1eL7NBtAAlxVSQHQl1eDK8YDvxqlG7ySwaPXoh+d+sAOOQ==',key_name='tempest-key-399192811',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:29:59Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5970d12b2c42419e889cd48de28c4b86',ramdisk_id='',reservation_id='r-3cqsasbe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1509574488',owner_user_name='tempest-ServersTestJSON-1509574488-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:29:59Z,user_data=None,user_id='0741d46905e94415a372bd62751dff66',uuid=1c33da68-e01e-42a2-8769-021721b7b0f4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e1f7780a-b948-4e6c-85f7-a9c35a0536a5", "address": "fa:16:3e:5a:67:4d", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1f7780a-b9", "ovs_interfaceid": "e1f7780a-b948-4e6c-85f7-a9c35a0536a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:30:01 np0005539552 nova_compute[233724]: 2025-11-29 08:30:01.867 233728 DEBUG nova.network.os_vif_util [None req-bb99d9a0-24a9-4912-bff1-88e6e42dddda 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Converting VIF {"id": "e1f7780a-b948-4e6c-85f7-a9c35a0536a5", "address": "fa:16:3e:5a:67:4d", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1f7780a-b9", "ovs_interfaceid": "e1f7780a-b948-4e6c-85f7-a9c35a0536a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:30:01 np0005539552 nova_compute[233724]: 2025-11-29 08:30:01.867 233728 DEBUG nova.network.os_vif_util [None req-bb99d9a0-24a9-4912-bff1-88e6e42dddda 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:67:4d,bridge_name='br-int',has_traffic_filtering=True,id=e1f7780a-b948-4e6c-85f7-a9c35a0536a5,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1f7780a-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:30:01 np0005539552 nova_compute[233724]: 2025-11-29 08:30:01.868 233728 DEBUG os_vif [None req-bb99d9a0-24a9-4912-bff1-88e6e42dddda 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:67:4d,bridge_name='br-int',has_traffic_filtering=True,id=e1f7780a-b948-4e6c-85f7-a9c35a0536a5,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1f7780a-b9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:30:01 np0005539552 nova_compute[233724]: 2025-11-29 08:30:01.869 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:01 np0005539552 nova_compute[233724]: 2025-11-29 08:30:01.869 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape1f7780a-b9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:01 np0005539552 nova_compute[233724]: 2025-11-29 08:30:01.871 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:01 np0005539552 nova_compute[233724]: 2025-11-29 08:30:01.872 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:01 np0005539552 nova_compute[233724]: 2025-11-29 08:30:01.874 233728 INFO os_vif [None req-bb99d9a0-24a9-4912-bff1-88e6e42dddda 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:67:4d,bridge_name='br-int',has_traffic_filtering=True,id=e1f7780a-b948-4e6c-85f7-a9c35a0536a5,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1f7780a-b9')#033[00m
Nov 29 03:30:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:02.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:02 np0005539552 nova_compute[233724]: 2025-11-29 08:30:02.236 233728 INFO nova.virt.libvirt.driver [None req-bb99d9a0-24a9-4912-bff1-88e6e42dddda 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Deleting instance files /var/lib/nova/instances/1c33da68-e01e-42a2-8769-021721b7b0f4_del#033[00m
Nov 29 03:30:02 np0005539552 nova_compute[233724]: 2025-11-29 08:30:02.237 233728 INFO nova.virt.libvirt.driver [None req-bb99d9a0-24a9-4912-bff1-88e6e42dddda 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Deletion of /var/lib/nova/instances/1c33da68-e01e-42a2-8769-021721b7b0f4_del complete#033[00m
Nov 29 03:30:02 np0005539552 nova_compute[233724]: 2025-11-29 08:30:02.291 233728 INFO nova.compute.manager [None req-bb99d9a0-24a9-4912-bff1-88e6e42dddda 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Took 0.71 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:30:02 np0005539552 nova_compute[233724]: 2025-11-29 08:30:02.291 233728 DEBUG oslo.service.loopingcall [None req-bb99d9a0-24a9-4912-bff1-88e6e42dddda 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:30:02 np0005539552 nova_compute[233724]: 2025-11-29 08:30:02.291 233728 DEBUG nova.compute.manager [-] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:30:02 np0005539552 nova_compute[233724]: 2025-11-29 08:30:02.291 233728 DEBUG nova.network.neutron [-] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:30:02 np0005539552 nova_compute[233724]: 2025-11-29 08:30:02.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:30:02 np0005539552 nova_compute[233724]: 2025-11-29 08:30:02.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 03:30:02 np0005539552 nova_compute[233724]: 2025-11-29 08:30:02.931 233728 DEBUG nova.compute.manager [req-5fa5d720-dc41-4afd-b5c0-cade8b425457 req-b8a0b831-4ac3-4a0f-9c8d-dc94b78b9a93 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Received event network-vif-unplugged-e1f7780a-b948-4e6c-85f7-a9c35a0536a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:02 np0005539552 nova_compute[233724]: 2025-11-29 08:30:02.932 233728 DEBUG oslo_concurrency.lockutils [req-5fa5d720-dc41-4afd-b5c0-cade8b425457 req-b8a0b831-4ac3-4a0f-9c8d-dc94b78b9a93 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "1c33da68-e01e-42a2-8769-021721b7b0f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:02 np0005539552 nova_compute[233724]: 2025-11-29 08:30:02.932 233728 DEBUG oslo_concurrency.lockutils [req-5fa5d720-dc41-4afd-b5c0-cade8b425457 req-b8a0b831-4ac3-4a0f-9c8d-dc94b78b9a93 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1c33da68-e01e-42a2-8769-021721b7b0f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:02 np0005539552 nova_compute[233724]: 2025-11-29 08:30:02.933 233728 DEBUG oslo_concurrency.lockutils [req-5fa5d720-dc41-4afd-b5c0-cade8b425457 req-b8a0b831-4ac3-4a0f-9c8d-dc94b78b9a93 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1c33da68-e01e-42a2-8769-021721b7b0f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:02 np0005539552 nova_compute[233724]: 2025-11-29 08:30:02.933 233728 DEBUG nova.compute.manager [req-5fa5d720-dc41-4afd-b5c0-cade8b425457 req-b8a0b831-4ac3-4a0f-9c8d-dc94b78b9a93 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] No waiting events found dispatching network-vif-unplugged-e1f7780a-b948-4e6c-85f7-a9c35a0536a5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:30:02 np0005539552 nova_compute[233724]: 2025-11-29 08:30:02.933 233728 DEBUG nova.compute.manager [req-5fa5d720-dc41-4afd-b5c0-cade8b425457 req-b8a0b831-4ac3-4a0f-9c8d-dc94b78b9a93 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Received event network-vif-unplugged-e1f7780a-b948-4e6c-85f7-a9c35a0536a5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:30:02 np0005539552 nova_compute[233724]: 2025-11-29 08:30:02.934 233728 DEBUG nova.compute.manager [req-5fa5d720-dc41-4afd-b5c0-cade8b425457 req-b8a0b831-4ac3-4a0f-9c8d-dc94b78b9a93 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Received event network-vif-plugged-e1f7780a-b948-4e6c-85f7-a9c35a0536a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:02 np0005539552 nova_compute[233724]: 2025-11-29 08:30:02.934 233728 DEBUG oslo_concurrency.lockutils [req-5fa5d720-dc41-4afd-b5c0-cade8b425457 req-b8a0b831-4ac3-4a0f-9c8d-dc94b78b9a93 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "1c33da68-e01e-42a2-8769-021721b7b0f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:02 np0005539552 nova_compute[233724]: 2025-11-29 08:30:02.934 233728 DEBUG oslo_concurrency.lockutils [req-5fa5d720-dc41-4afd-b5c0-cade8b425457 req-b8a0b831-4ac3-4a0f-9c8d-dc94b78b9a93 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1c33da68-e01e-42a2-8769-021721b7b0f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:02 np0005539552 nova_compute[233724]: 2025-11-29 08:30:02.935 233728 DEBUG oslo_concurrency.lockutils [req-5fa5d720-dc41-4afd-b5c0-cade8b425457 req-b8a0b831-4ac3-4a0f-9c8d-dc94b78b9a93 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1c33da68-e01e-42a2-8769-021721b7b0f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:02 np0005539552 nova_compute[233724]: 2025-11-29 08:30:02.935 233728 DEBUG nova.compute.manager [req-5fa5d720-dc41-4afd-b5c0-cade8b425457 req-b8a0b831-4ac3-4a0f-9c8d-dc94b78b9a93 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] No waiting events found dispatching network-vif-plugged-e1f7780a-b948-4e6c-85f7-a9c35a0536a5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:30:02 np0005539552 nova_compute[233724]: 2025-11-29 08:30:02.935 233728 WARNING nova.compute.manager [req-5fa5d720-dc41-4afd-b5c0-cade8b425457 req-b8a0b831-4ac3-4a0f-9c8d-dc94b78b9a93 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Received unexpected event network-vif-plugged-e1f7780a-b948-4e6c-85f7-a9c35a0536a5 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:30:02 np0005539552 nova_compute[233724]: 2025-11-29 08:30:02.944 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 03:30:03 np0005539552 nova_compute[233724]: 2025-11-29 08:30:03.136 233728 DEBUG nova.network.neutron [-] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:30:03 np0005539552 nova_compute[233724]: 2025-11-29 08:30:03.155 233728 INFO nova.compute.manager [-] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Took 0.86 seconds to deallocate network for instance.#033[00m
Nov 29 03:30:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:03 np0005539552 nova_compute[233724]: 2025-11-29 08:30:03.221 233728 DEBUG oslo_concurrency.lockutils [None req-bb99d9a0-24a9-4912-bff1-88e6e42dddda 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:03 np0005539552 nova_compute[233724]: 2025-11-29 08:30:03.221 233728 DEBUG oslo_concurrency.lockutils [None req-bb99d9a0-24a9-4912-bff1-88e6e42dddda 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:03 np0005539552 nova_compute[233724]: 2025-11-29 08:30:03.234 233728 DEBUG nova.compute.manager [req-27d0cb7f-1692-4238-b3bc-f154ee67ffe6 req-043f6180-fcd0-4982-b0f3-b5b3d4cc010f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Received event network-vif-deleted-e1f7780a-b948-4e6c-85f7-a9c35a0536a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:03 np0005539552 nova_compute[233724]: 2025-11-29 08:30:03.348 233728 DEBUG oslo_concurrency.processutils [None req-bb99d9a0-24a9-4912-bff1-88e6e42dddda 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:03.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:03.502 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:30:03 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3393426337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:30:03 np0005539552 nova_compute[233724]: 2025-11-29 08:30:03.783 233728 DEBUG oslo_concurrency.processutils [None req-bb99d9a0-24a9-4912-bff1-88e6e42dddda 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:30:03 np0005539552 nova_compute[233724]: 2025-11-29 08:30:03.791 233728 DEBUG nova.compute.provider_tree [None req-bb99d9a0-24a9-4912-bff1-88e6e42dddda 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:30:03 np0005539552 nova_compute[233724]: 2025-11-29 08:30:03.812 233728 DEBUG nova.scheduler.client.report [None req-bb99d9a0-24a9-4912-bff1-88e6e42dddda 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:30:03 np0005539552 nova_compute[233724]: 2025-11-29 08:30:03.841 233728 DEBUG oslo_concurrency.lockutils [None req-bb99d9a0-24a9-4912-bff1-88e6e42dddda 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:30:03 np0005539552 nova_compute[233724]: 2025-11-29 08:30:03.873 233728 INFO nova.scheduler.client.report [None req-bb99d9a0-24a9-4912-bff1-88e6e42dddda 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Deleted allocations for instance 1c33da68-e01e-42a2-8769-021721b7b0f4
Nov 29 03:30:03 np0005539552 nova_compute[233724]: 2025-11-29 08:30:03.926 233728 DEBUG oslo_concurrency.lockutils [None req-bb99d9a0-24a9-4912-bff1-88e6e42dddda 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "1c33da68-e01e-42a2-8769-021721b7b0f4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.345s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:30:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:04.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:30:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:05.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:30:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:06.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:06 np0005539552 nova_compute[233724]: 2025-11-29 08:30:06.664 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:06 np0005539552 nova_compute[233724]: 2025-11-29 08:30:06.872 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:06 np0005539552 nova_compute[233724]: 2025-11-29 08:30:06.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:30:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:07.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:08.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:09.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:10.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:30:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:11.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:30:11 np0005539552 nova_compute[233724]: 2025-11-29 08:30:11.665 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:11 np0005539552 nova_compute[233724]: 2025-11-29 08:30:11.875 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:12.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:13.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:13 np0005539552 nova_compute[233724]: 2025-11-29 08:30:13.409 233728 DEBUG oslo_concurrency.lockutils [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "c6851fde-7355-4735-8410-73aadae465f6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:30:13 np0005539552 nova_compute[233724]: 2025-11-29 08:30:13.409 233728 DEBUG oslo_concurrency.lockutils [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "c6851fde-7355-4735-8410-73aadae465f6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:30:13 np0005539552 nova_compute[233724]: 2025-11-29 08:30:13.448 233728 DEBUG nova.compute.manager [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:30:13 np0005539552 nova_compute[233724]: 2025-11-29 08:30:13.530 233728 DEBUG oslo_concurrency.lockutils [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:30:13 np0005539552 nova_compute[233724]: 2025-11-29 08:30:13.531 233728 DEBUG oslo_concurrency.lockutils [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:30:13 np0005539552 nova_compute[233724]: 2025-11-29 08:30:13.539 233728 DEBUG nova.virt.hardware [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:30:13 np0005539552 nova_compute[233724]: 2025-11-29 08:30:13.539 233728 INFO nova.compute.claims [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Claim successful on node compute-2.ctlplane.example.com
Nov 29 03:30:13 np0005539552 nova_compute[233724]: 2025-11-29 08:30:13.795 233728 DEBUG oslo_concurrency.processutils [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:30:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:14.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:14 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:30:14 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1735808097' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:30:14 np0005539552 nova_compute[233724]: 2025-11-29 08:30:14.283 233728 DEBUG oslo_concurrency.processutils [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:30:14 np0005539552 nova_compute[233724]: 2025-11-29 08:30:14.291 233728 DEBUG nova.compute.provider_tree [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:30:14 np0005539552 nova_compute[233724]: 2025-11-29 08:30:14.310 233728 DEBUG nova.scheduler.client.report [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:30:14 np0005539552 nova_compute[233724]: 2025-11-29 08:30:14.333 233728 DEBUG oslo_concurrency.lockutils [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:30:14 np0005539552 nova_compute[233724]: 2025-11-29 08:30:14.334 233728 DEBUG nova.compute.manager [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:30:14 np0005539552 nova_compute[233724]: 2025-11-29 08:30:14.382 233728 DEBUG nova.compute.manager [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:30:14 np0005539552 nova_compute[233724]: 2025-11-29 08:30:14.383 233728 DEBUG nova.network.neutron [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:30:14 np0005539552 nova_compute[233724]: 2025-11-29 08:30:14.414 233728 INFO nova.virt.libvirt.driver [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:30:14 np0005539552 nova_compute[233724]: 2025-11-29 08:30:14.439 233728 DEBUG nova.compute.manager [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:30:14 np0005539552 nova_compute[233724]: 2025-11-29 08:30:14.542 233728 DEBUG nova.compute.manager [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:30:14 np0005539552 nova_compute[233724]: 2025-11-29 08:30:14.546 233728 DEBUG nova.virt.libvirt.driver [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:30:14 np0005539552 nova_compute[233724]: 2025-11-29 08:30:14.547 233728 INFO nova.virt.libvirt.driver [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Creating image(s)
Nov 29 03:30:14 np0005539552 nova_compute[233724]: 2025-11-29 08:30:14.593 233728 DEBUG nova.storage.rbd_utils [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image c6851fde-7355-4735-8410-73aadae465f6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:30:14 np0005539552 nova_compute[233724]: 2025-11-29 08:30:14.632 233728 DEBUG nova.storage.rbd_utils [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image c6851fde-7355-4735-8410-73aadae465f6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:30:14 np0005539552 nova_compute[233724]: 2025-11-29 08:30:14.674 233728 DEBUG nova.storage.rbd_utils [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image c6851fde-7355-4735-8410-73aadae465f6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:30:14 np0005539552 nova_compute[233724]: 2025-11-29 08:30:14.680 233728 DEBUG oslo_concurrency.processutils [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:30:14 np0005539552 nova_compute[233724]: 2025-11-29 08:30:14.737 233728 DEBUG nova.policy [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fed6803a835e471f9bd60e3236e78e5d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4145ed6cde61439ebcc12fae2609b724', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:30:14 np0005539552 nova_compute[233724]: 2025-11-29 08:30:14.755 233728 DEBUG oslo_concurrency.processutils [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:30:14 np0005539552 nova_compute[233724]: 2025-11-29 08:30:14.756 233728 DEBUG oslo_concurrency.lockutils [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:30:14 np0005539552 nova_compute[233724]: 2025-11-29 08:30:14.756 233728 DEBUG oslo_concurrency.lockutils [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:30:14 np0005539552 nova_compute[233724]: 2025-11-29 08:30:14.757 233728 DEBUG oslo_concurrency.lockutils [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:30:14 np0005539552 nova_compute[233724]: 2025-11-29 08:30:14.801 233728 DEBUG nova.storage.rbd_utils [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image c6851fde-7355-4735-8410-73aadae465f6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:30:14 np0005539552 nova_compute[233724]: 2025-11-29 08:30:14.807 233728 DEBUG oslo_concurrency.processutils [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 c6851fde-7355-4735-8410-73aadae465f6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:30:15 np0005539552 nova_compute[233724]: 2025-11-29 08:30:15.079 233728 DEBUG oslo_concurrency.processutils [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 c6851fde-7355-4735-8410-73aadae465f6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:30:15 np0005539552 nova_compute[233724]: 2025-11-29 08:30:15.146 233728 DEBUG nova.storage.rbd_utils [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] resizing rbd image c6851fde-7355-4735-8410-73aadae465f6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:30:15 np0005539552 nova_compute[233724]: 2025-11-29 08:30:15.242 233728 DEBUG nova.objects.instance [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'migration_context' on Instance uuid c6851fde-7355-4735-8410-73aadae465f6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:30:15 np0005539552 nova_compute[233724]: 2025-11-29 08:30:15.255 233728 DEBUG nova.virt.libvirt.driver [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:30:15 np0005539552 nova_compute[233724]: 2025-11-29 08:30:15.255 233728 DEBUG nova.virt.libvirt.driver [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Ensure instance console log exists: /var/lib/nova/instances/c6851fde-7355-4735-8410-73aadae465f6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:30:15 np0005539552 nova_compute[233724]: 2025-11-29 08:30:15.255 233728 DEBUG oslo_concurrency.lockutils [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:30:15 np0005539552 nova_compute[233724]: 2025-11-29 08:30:15.256 233728 DEBUG oslo_concurrency.lockutils [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:30:15 np0005539552 nova_compute[233724]: 2025-11-29 08:30:15.256 233728 DEBUG oslo_concurrency.lockutils [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:30:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:15.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:15 np0005539552 nova_compute[233724]: 2025-11-29 08:30:15.961 233728 DEBUG nova.network.neutron [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Successfully created port: 3064c493-912e-4107-90c6-fd25cba7cf44 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:30:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:16.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:16 np0005539552 nova_compute[233724]: 2025-11-29 08:30:16.668 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:16 np0005539552 nova_compute[233724]: 2025-11-29 08:30:16.828 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405001.8269346, 1c33da68-e01e-42a2-8769-021721b7b0f4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:30:16 np0005539552 nova_compute[233724]: 2025-11-29 08:30:16.828 233728 INFO nova.compute.manager [-] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] VM Stopped (Lifecycle Event)
Nov 29 03:30:16 np0005539552 nova_compute[233724]: 2025-11-29 08:30:16.847 233728 DEBUG nova.compute.manager [None req-a88ae883-3b04-4bdc-8fd9-babcfbdc57c3 - - - - - -] [instance: 1c33da68-e01e-42a2-8769-021721b7b0f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:30:16 np0005539552 nova_compute[233724]: 2025-11-29 08:30:16.877 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:30:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:17.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:30:17 np0005539552 nova_compute[233724]: 2025-11-29 08:30:17.648 233728 DEBUG nova.network.neutron [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Successfully updated port: 3064c493-912e-4107-90c6-fd25cba7cf44 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:30:17 np0005539552 nova_compute[233724]: 2025-11-29 08:30:17.669 233728 DEBUG oslo_concurrency.lockutils [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "refresh_cache-c6851fde-7355-4735-8410-73aadae465f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:30:17 np0005539552 nova_compute[233724]: 2025-11-29 08:30:17.670 233728 DEBUG oslo_concurrency.lockutils [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquired lock "refresh_cache-c6851fde-7355-4735-8410-73aadae465f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:30:17 np0005539552 nova_compute[233724]: 2025-11-29 08:30:17.670 233728 DEBUG nova.network.neutron [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:30:18 np0005539552 nova_compute[233724]: 2025-11-29 08:30:18.009 233728 DEBUG nova.network.neutron [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:30:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:18.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:18 np0005539552 nova_compute[233724]: 2025-11-29 08:30:18.520 233728 DEBUG nova.compute.manager [req-ba0bee5c-5ef8-4d4a-b040-3fb8e56d2334 req-37706582-35c6-417f-9c9f-9385b3b12d86 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Received event network-changed-3064c493-912e-4107-90c6-fd25cba7cf44 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:18 np0005539552 nova_compute[233724]: 2025-11-29 08:30:18.520 233728 DEBUG nova.compute.manager [req-ba0bee5c-5ef8-4d4a-b040-3fb8e56d2334 req-37706582-35c6-417f-9c9f-9385b3b12d86 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Refreshing instance network info cache due to event network-changed-3064c493-912e-4107-90c6-fd25cba7cf44. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:30:18 np0005539552 nova_compute[233724]: 2025-11-29 08:30:18.520 233728 DEBUG oslo_concurrency.lockutils [req-ba0bee5c-5ef8-4d4a-b040-3fb8e56d2334 req-37706582-35c6-417f-9c9f-9385b3b12d86 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-c6851fde-7355-4735-8410-73aadae465f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:30:18 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #106. Immutable memtables: 0.
Nov 29 03:30:18 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:30:18.602430) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:30:18 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 106
Nov 29 03:30:18 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405018602781, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 1824, "num_deletes": 262, "total_data_size": 4000958, "memory_usage": 4066640, "flush_reason": "Manual Compaction"}
Nov 29 03:30:18 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #107: started
Nov 29 03:30:18 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405018617098, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 107, "file_size": 2627036, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54420, "largest_seqno": 56239, "table_properties": {"data_size": 2619315, "index_size": 4535, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 16959, "raw_average_key_size": 20, "raw_value_size": 2603552, "raw_average_value_size": 3167, "num_data_blocks": 197, "num_entries": 822, "num_filter_entries": 822, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404880, "oldest_key_time": 1764404880, "file_creation_time": 1764405018, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:30:18 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 14680 microseconds, and 5685 cpu microseconds.
Nov 29 03:30:18 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:30:18 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:30:18.617134) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #107: 2627036 bytes OK
Nov 29 03:30:18 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:30:18.617152) [db/memtable_list.cc:519] [default] Level-0 commit table #107 started
Nov 29 03:30:18 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:30:18.619178) [db/memtable_list.cc:722] [default] Level-0 commit table #107: memtable #1 done
Nov 29 03:30:18 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:30:18.619190) EVENT_LOG_v1 {"time_micros": 1764405018619186, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:30:18 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:30:18.619207) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:30:18 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 3992524, prev total WAL file size 3992524, number of live WAL files 2.
Nov 29 03:30:18 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000103.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:30:18 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:30:18.620092) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373539' seq:72057594037927935, type:22 .. '6C6F676D0032303131' seq:0, type:0; will stop at (end)
Nov 29 03:30:18 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:30:18 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [107(2565KB)], [105(12MB)]
Nov 29 03:30:18 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405018620117, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [107], "files_L6": [105], "score": -1, "input_data_size": 15754170, "oldest_snapshot_seqno": -1}
Nov 29 03:30:18 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #108: 8986 keys, 15596395 bytes, temperature: kUnknown
Nov 29 03:30:18 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405018733048, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 108, "file_size": 15596395, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15533210, "index_size": 39585, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22533, "raw_key_size": 232752, "raw_average_key_size": 25, "raw_value_size": 15370016, "raw_average_value_size": 1710, "num_data_blocks": 1559, "num_entries": 8986, "num_filter_entries": 8986, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764405018, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 108, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:30:18 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:30:18 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:30:18.733321) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 15596395 bytes
Nov 29 03:30:18 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:30:18.735820) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 139.4 rd, 138.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 12.5 +0.0 blob) out(14.9 +0.0 blob), read-write-amplify(11.9) write-amplify(5.9) OK, records in: 9527, records dropped: 541 output_compression: NoCompression
Nov 29 03:30:18 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:30:18.735844) EVENT_LOG_v1 {"time_micros": 1764405018735833, "job": 66, "event": "compaction_finished", "compaction_time_micros": 113010, "compaction_time_cpu_micros": 35200, "output_level": 6, "num_output_files": 1, "total_output_size": 15596395, "num_input_records": 9527, "num_output_records": 8986, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:30:18 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:30:18 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405018736455, "job": 66, "event": "table_file_deletion", "file_number": 107}
Nov 29 03:30:18 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000105.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:30:18 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405018739830, "job": 66, "event": "table_file_deletion", "file_number": 105}
Nov 29 03:30:18 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:30:18.620010) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:30:18 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:30:18.739886) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:30:18 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:30:18.739892) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:30:18 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:30:18.739894) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:30:18 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:30:18.739897) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:30:18 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:30:18.739899) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:30:19 np0005539552 nova_compute[233724]: 2025-11-29 08:30:19.122 233728 DEBUG nova.network.neutron [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Updating instance_info_cache with network_info: [{"id": "3064c493-912e-4107-90c6-fd25cba7cf44", "address": "fa:16:3e:b2:66:0f", "network": {"id": "b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62", "bridge": "br-int", "label": "tempest-network-smoke--1601480889", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3064c493-91", "ovs_interfaceid": "3064c493-912e-4107-90c6-fd25cba7cf44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:30:19 np0005539552 nova_compute[233724]: 2025-11-29 08:30:19.141 233728 DEBUG oslo_concurrency.lockutils [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Releasing lock "refresh_cache-c6851fde-7355-4735-8410-73aadae465f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:30:19 np0005539552 nova_compute[233724]: 2025-11-29 08:30:19.141 233728 DEBUG nova.compute.manager [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Instance network_info: |[{"id": "3064c493-912e-4107-90c6-fd25cba7cf44", "address": "fa:16:3e:b2:66:0f", "network": {"id": "b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62", "bridge": "br-int", "label": "tempest-network-smoke--1601480889", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3064c493-91", "ovs_interfaceid": "3064c493-912e-4107-90c6-fd25cba7cf44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:30:19 np0005539552 nova_compute[233724]: 2025-11-29 08:30:19.141 233728 DEBUG oslo_concurrency.lockutils [req-ba0bee5c-5ef8-4d4a-b040-3fb8e56d2334 req-37706582-35c6-417f-9c9f-9385b3b12d86 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-c6851fde-7355-4735-8410-73aadae465f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:30:19 np0005539552 nova_compute[233724]: 2025-11-29 08:30:19.142 233728 DEBUG nova.network.neutron [req-ba0bee5c-5ef8-4d4a-b040-3fb8e56d2334 req-37706582-35c6-417f-9c9f-9385b3b12d86 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Refreshing network info cache for port 3064c493-912e-4107-90c6-fd25cba7cf44 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:30:19 np0005539552 nova_compute[233724]: 2025-11-29 08:30:19.144 233728 DEBUG nova.virt.libvirt.driver [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Start _get_guest_xml network_info=[{"id": "3064c493-912e-4107-90c6-fd25cba7cf44", "address": "fa:16:3e:b2:66:0f", "network": {"id": "b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62", "bridge": "br-int", "label": "tempest-network-smoke--1601480889", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3064c493-91", "ovs_interfaceid": "3064c493-912e-4107-90c6-fd25cba7cf44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:30:19 np0005539552 nova_compute[233724]: 2025-11-29 08:30:19.148 233728 WARNING nova.virt.libvirt.driver [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:30:19 np0005539552 nova_compute[233724]: 2025-11-29 08:30:19.152 233728 DEBUG nova.virt.libvirt.host [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:30:19 np0005539552 nova_compute[233724]: 2025-11-29 08:30:19.152 233728 DEBUG nova.virt.libvirt.host [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:30:19 np0005539552 nova_compute[233724]: 2025-11-29 08:30:19.157 233728 DEBUG nova.virt.libvirt.host [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:30:19 np0005539552 nova_compute[233724]: 2025-11-29 08:30:19.158 233728 DEBUG nova.virt.libvirt.host [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:30:19 np0005539552 nova_compute[233724]: 2025-11-29 08:30:19.158 233728 DEBUG nova.virt.libvirt.driver [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:30:19 np0005539552 nova_compute[233724]: 2025-11-29 08:30:19.159 233728 DEBUG nova.virt.hardware [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:30:19 np0005539552 nova_compute[233724]: 2025-11-29 08:30:19.159 233728 DEBUG nova.virt.hardware [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:30:19 np0005539552 nova_compute[233724]: 2025-11-29 08:30:19.159 233728 DEBUG nova.virt.hardware [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:30:19 np0005539552 nova_compute[233724]: 2025-11-29 08:30:19.159 233728 DEBUG nova.virt.hardware [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:30:19 np0005539552 nova_compute[233724]: 2025-11-29 08:30:19.160 233728 DEBUG nova.virt.hardware [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:30:19 np0005539552 nova_compute[233724]: 2025-11-29 08:30:19.160 233728 DEBUG nova.virt.hardware [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:30:19 np0005539552 nova_compute[233724]: 2025-11-29 08:30:19.160 233728 DEBUG nova.virt.hardware [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:30:19 np0005539552 nova_compute[233724]: 2025-11-29 08:30:19.160 233728 DEBUG nova.virt.hardware [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:30:19 np0005539552 nova_compute[233724]: 2025-11-29 08:30:19.160 233728 DEBUG nova.virt.hardware [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:30:19 np0005539552 nova_compute[233724]: 2025-11-29 08:30:19.161 233728 DEBUG nova.virt.hardware [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:30:19 np0005539552 nova_compute[233724]: 2025-11-29 08:30:19.161 233728 DEBUG nova.virt.hardware [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:30:19 np0005539552 nova_compute[233724]: 2025-11-29 08:30:19.163 233728 DEBUG oslo_concurrency.processutils [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:19.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:19 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:30:19 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1394482548' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:30:19 np0005539552 nova_compute[233724]: 2025-11-29 08:30:19.647 233728 DEBUG oslo_concurrency.processutils [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:19 np0005539552 nova_compute[233724]: 2025-11-29 08:30:19.687 233728 DEBUG nova.storage.rbd_utils [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image c6851fde-7355-4735-8410-73aadae465f6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:30:19 np0005539552 nova_compute[233724]: 2025-11-29 08:30:19.693 233728 DEBUG oslo_concurrency.processutils [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:20 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:30:20 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/898145494' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:30:20 np0005539552 nova_compute[233724]: 2025-11-29 08:30:20.129 233728 DEBUG oslo_concurrency.processutils [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:20 np0005539552 nova_compute[233724]: 2025-11-29 08:30:20.130 233728 DEBUG nova.virt.libvirt.vif [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:30:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1997328567',display_name='tempest-TestNetworkAdvancedServerOps-server-1997328567',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1997328567',id=154,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLbTSkSAgnmnPzBXf4q5v2aQuSv4Xg726RvmfA2bFIyqLw5l1ShpVhV1m+XMa6BqUnaJ6e6Oj4O/ixK0Z4BjO5LUyfviwcT1zO/PwUOUOsCHcpb46BmEH+yGI88c1E0nAA==',key_name='tempest-TestNetworkAdvancedServerOps-1084441062',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-vfuyp447',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:30:14Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=c6851fde-7355-4735-8410-73aadae465f6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3064c493-912e-4107-90c6-fd25cba7cf44", "address": "fa:16:3e:b2:66:0f", "network": {"id": "b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62", "bridge": "br-int", "label": "tempest-network-smoke--1601480889", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3064c493-91", "ovs_interfaceid": "3064c493-912e-4107-90c6-fd25cba7cf44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:30:20 np0005539552 nova_compute[233724]: 2025-11-29 08:30:20.131 233728 DEBUG nova.network.os_vif_util [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converting VIF {"id": "3064c493-912e-4107-90c6-fd25cba7cf44", "address": "fa:16:3e:b2:66:0f", "network": {"id": "b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62", "bridge": "br-int", "label": "tempest-network-smoke--1601480889", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3064c493-91", "ovs_interfaceid": "3064c493-912e-4107-90c6-fd25cba7cf44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:30:20 np0005539552 nova_compute[233724]: 2025-11-29 08:30:20.131 233728 DEBUG nova.network.os_vif_util [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b2:66:0f,bridge_name='br-int',has_traffic_filtering=True,id=3064c493-912e-4107-90c6-fd25cba7cf44,network=Network(b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3064c493-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:30:20 np0005539552 nova_compute[233724]: 2025-11-29 08:30:20.132 233728 DEBUG nova.objects.instance [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'pci_devices' on Instance uuid c6851fde-7355-4735-8410-73aadae465f6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:20 np0005539552 nova_compute[233724]: 2025-11-29 08:30:20.157 233728 DEBUG nova.virt.libvirt.driver [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:30:20 np0005539552 nova_compute[233724]:  <uuid>c6851fde-7355-4735-8410-73aadae465f6</uuid>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:  <name>instance-0000009a</name>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:30:20 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1997328567</nova:name>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:30:19</nova:creationTime>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:30:20 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:        <nova:user uuid="fed6803a835e471f9bd60e3236e78e5d">tempest-TestNetworkAdvancedServerOps-274367929-project-member</nova:user>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:        <nova:project uuid="4145ed6cde61439ebcc12fae2609b724">tempest-TestNetworkAdvancedServerOps-274367929</nova:project>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:        <nova:port uuid="3064c493-912e-4107-90c6-fd25cba7cf44">
Nov 29 03:30:20 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:      <entry name="serial">c6851fde-7355-4735-8410-73aadae465f6</entry>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:      <entry name="uuid">c6851fde-7355-4735-8410-73aadae465f6</entry>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:30:20 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/c6851fde-7355-4735-8410-73aadae465f6_disk">
Nov 29 03:30:20 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:30:20 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:30:20 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/c6851fde-7355-4735-8410-73aadae465f6_disk.config">
Nov 29 03:30:20 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:30:20 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:30:20 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:b2:66:0f"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:      <target dev="tap3064c493-91"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:30:20 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/c6851fde-7355-4735-8410-73aadae465f6/console.log" append="off"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:30:20 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:30:20 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:30:20 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:30:20 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:30:20 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:30:20 np0005539552 nova_compute[233724]: 2025-11-29 08:30:20.158 233728 DEBUG nova.compute.manager [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Preparing to wait for external event network-vif-plugged-3064c493-912e-4107-90c6-fd25cba7cf44 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:30:20 np0005539552 nova_compute[233724]: 2025-11-29 08:30:20.158 233728 DEBUG oslo_concurrency.lockutils [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "c6851fde-7355-4735-8410-73aadae465f6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:20 np0005539552 nova_compute[233724]: 2025-11-29 08:30:20.159 233728 DEBUG oslo_concurrency.lockutils [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "c6851fde-7355-4735-8410-73aadae465f6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:20 np0005539552 nova_compute[233724]: 2025-11-29 08:30:20.159 233728 DEBUG oslo_concurrency.lockutils [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "c6851fde-7355-4735-8410-73aadae465f6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:20 np0005539552 nova_compute[233724]: 2025-11-29 08:30:20.160 233728 DEBUG nova.virt.libvirt.vif [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:30:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1997328567',display_name='tempest-TestNetworkAdvancedServerOps-server-1997328567',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1997328567',id=154,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLbTSkSAgnmnPzBXf4q5v2aQuSv4Xg726RvmfA2bFIyqLw5l1ShpVhV1m+XMa6BqUnaJ6e6Oj4O/ixK0Z4BjO5LUyfviwcT1zO/PwUOUOsCHcpb46BmEH+yGI88c1E0nAA==',key_name='tempest-TestNetworkAdvancedServerOps-1084441062',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-vfuyp447',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:30:14Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=c6851fde-7355-4735-8410-73aadae465f6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3064c493-912e-4107-90c6-fd25cba7cf44", "address": "fa:16:3e:b2:66:0f", "network": {"id": "b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62", "bridge": "br-int", "label": "tempest-network-smoke--1601480889", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3064c493-91", "ovs_interfaceid": "3064c493-912e-4107-90c6-fd25cba7cf44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:30:20 np0005539552 nova_compute[233724]: 2025-11-29 08:30:20.161 233728 DEBUG nova.network.os_vif_util [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converting VIF {"id": "3064c493-912e-4107-90c6-fd25cba7cf44", "address": "fa:16:3e:b2:66:0f", "network": {"id": "b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62", "bridge": "br-int", "label": "tempest-network-smoke--1601480889", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3064c493-91", "ovs_interfaceid": "3064c493-912e-4107-90c6-fd25cba7cf44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:30:20 np0005539552 nova_compute[233724]: 2025-11-29 08:30:20.161 233728 DEBUG nova.network.os_vif_util [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b2:66:0f,bridge_name='br-int',has_traffic_filtering=True,id=3064c493-912e-4107-90c6-fd25cba7cf44,network=Network(b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3064c493-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:30:20 np0005539552 nova_compute[233724]: 2025-11-29 08:30:20.162 233728 DEBUG os_vif [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:66:0f,bridge_name='br-int',has_traffic_filtering=True,id=3064c493-912e-4107-90c6-fd25cba7cf44,network=Network(b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3064c493-91') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:30:20 np0005539552 nova_compute[233724]: 2025-11-29 08:30:20.163 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:20 np0005539552 nova_compute[233724]: 2025-11-29 08:30:20.163 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:20 np0005539552 nova_compute[233724]: 2025-11-29 08:30:20.164 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:30:20 np0005539552 nova_compute[233724]: 2025-11-29 08:30:20.166 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:20 np0005539552 nova_compute[233724]: 2025-11-29 08:30:20.167 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3064c493-91, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:20 np0005539552 nova_compute[233724]: 2025-11-29 08:30:20.167 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3064c493-91, col_values=(('external_ids', {'iface-id': '3064c493-912e-4107-90c6-fd25cba7cf44', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b2:66:0f', 'vm-uuid': 'c6851fde-7355-4735-8410-73aadae465f6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:20 np0005539552 nova_compute[233724]: 2025-11-29 08:30:20.169 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:20 np0005539552 NetworkManager[48926]: <info>  [1764405020.1703] manager: (tap3064c493-91): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/309)
Nov 29 03:30:20 np0005539552 nova_compute[233724]: 2025-11-29 08:30:20.171 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:30:20 np0005539552 nova_compute[233724]: 2025-11-29 08:30:20.175 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:20 np0005539552 nova_compute[233724]: 2025-11-29 08:30:20.175 233728 INFO os_vif [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:66:0f,bridge_name='br-int',has_traffic_filtering=True,id=3064c493-912e-4107-90c6-fd25cba7cf44,network=Network(b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3064c493-91')#033[00m
Nov 29 03:30:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:20.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:20 np0005539552 podman[298995]: 2025-11-29 08:30:20.282800743 +0000 UTC m=+0.072923794 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:30:20 np0005539552 podman[298994]: 2025-11-29 08:30:20.29052026 +0000 UTC m=+0.080402954 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:30:20 np0005539552 nova_compute[233724]: 2025-11-29 08:30:20.294 233728 DEBUG nova.virt.libvirt.driver [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:30:20 np0005539552 nova_compute[233724]: 2025-11-29 08:30:20.295 233728 DEBUG nova.virt.libvirt.driver [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:30:20 np0005539552 nova_compute[233724]: 2025-11-29 08:30:20.295 233728 DEBUG nova.virt.libvirt.driver [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] No VIF found with MAC fa:16:3e:b2:66:0f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:30:20 np0005539552 nova_compute[233724]: 2025-11-29 08:30:20.296 233728 INFO nova.virt.libvirt.driver [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Using config drive#033[00m
Nov 29 03:30:20 np0005539552 nova_compute[233724]: 2025-11-29 08:30:20.324 233728 DEBUG nova.storage.rbd_utils [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image c6851fde-7355-4735-8410-73aadae465f6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:30:20 np0005539552 podman[298996]: 2025-11-29 08:30:20.354969054 +0000 UTC m=+0.118960222 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125)
Nov 29 03:30:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:20.636 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:20.636 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:20.637 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:20 np0005539552 nova_compute[233724]: 2025-11-29 08:30:20.788 233728 DEBUG nova.network.neutron [req-ba0bee5c-5ef8-4d4a-b040-3fb8e56d2334 req-37706582-35c6-417f-9c9f-9385b3b12d86 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Updated VIF entry in instance network info cache for port 3064c493-912e-4107-90c6-fd25cba7cf44. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:30:20 np0005539552 nova_compute[233724]: 2025-11-29 08:30:20.788 233728 DEBUG nova.network.neutron [req-ba0bee5c-5ef8-4d4a-b040-3fb8e56d2334 req-37706582-35c6-417f-9c9f-9385b3b12d86 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Updating instance_info_cache with network_info: [{"id": "3064c493-912e-4107-90c6-fd25cba7cf44", "address": "fa:16:3e:b2:66:0f", "network": {"id": "b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62", "bridge": "br-int", "label": "tempest-network-smoke--1601480889", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3064c493-91", "ovs_interfaceid": "3064c493-912e-4107-90c6-fd25cba7cf44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:30:20 np0005539552 nova_compute[233724]: 2025-11-29 08:30:20.802 233728 DEBUG oslo_concurrency.lockutils [req-ba0bee5c-5ef8-4d4a-b040-3fb8e56d2334 req-37706582-35c6-417f-9c9f-9385b3b12d86 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-c6851fde-7355-4735-8410-73aadae465f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:30:20 np0005539552 nova_compute[233724]: 2025-11-29 08:30:20.921 233728 INFO nova.virt.libvirt.driver [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Creating config drive at /var/lib/nova/instances/c6851fde-7355-4735-8410-73aadae465f6/disk.config#033[00m
Nov 29 03:30:20 np0005539552 nova_compute[233724]: 2025-11-29 08:30:20.933 233728 DEBUG oslo_concurrency.processutils [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c6851fde-7355-4735-8410-73aadae465f6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdehbwk02 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:21 np0005539552 nova_compute[233724]: 2025-11-29 08:30:21.084 233728 DEBUG oslo_concurrency.processutils [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c6851fde-7355-4735-8410-73aadae465f6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdehbwk02" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:21 np0005539552 nova_compute[233724]: 2025-11-29 08:30:21.128 233728 DEBUG nova.storage.rbd_utils [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image c6851fde-7355-4735-8410-73aadae465f6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:30:21 np0005539552 nova_compute[233724]: 2025-11-29 08:30:21.135 233728 DEBUG oslo_concurrency.processutils [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c6851fde-7355-4735-8410-73aadae465f6/disk.config c6851fde-7355-4735-8410-73aadae465f6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:21 np0005539552 nova_compute[233724]: 2025-11-29 08:30:21.332 233728 DEBUG oslo_concurrency.processutils [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c6851fde-7355-4735-8410-73aadae465f6/disk.config c6851fde-7355-4735-8410-73aadae465f6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:21 np0005539552 nova_compute[233724]: 2025-11-29 08:30:21.334 233728 INFO nova.virt.libvirt.driver [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Deleting local config drive /var/lib/nova/instances/c6851fde-7355-4735-8410-73aadae465f6/disk.config because it was imported into RBD.#033[00m
Nov 29 03:30:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:21.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:21 np0005539552 kernel: tap3064c493-91: entered promiscuous mode
Nov 29 03:30:21 np0005539552 ovn_controller[133798]: 2025-11-29T08:30:21Z|00711|binding|INFO|Claiming lport 3064c493-912e-4107-90c6-fd25cba7cf44 for this chassis.
Nov 29 03:30:21 np0005539552 ovn_controller[133798]: 2025-11-29T08:30:21Z|00712|binding|INFO|3064c493-912e-4107-90c6-fd25cba7cf44: Claiming fa:16:3e:b2:66:0f 10.100.0.13
Nov 29 03:30:21 np0005539552 nova_compute[233724]: 2025-11-29 08:30:21.394 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:21 np0005539552 NetworkManager[48926]: <info>  [1764405021.3958] manager: (tap3064c493-91): new Tun device (/org/freedesktop/NetworkManager/Devices/310)
Nov 29 03:30:21 np0005539552 nova_compute[233724]: 2025-11-29 08:30:21.400 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:21.407 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:66:0f 10.100.0.13'], port_security=['fa:16:3e:b2:66:0f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'c6851fde-7355-4735-8410-73aadae465f6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4145ed6cde61439ebcc12fae2609b724', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9ada5941-1c85-4f44-ade5-9cc90892652d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5db73f64-426f-4e0b-98ae-aef18864fc6a, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=3064c493-912e-4107-90c6-fd25cba7cf44) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:21.408 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 3064c493-912e-4107-90c6-fd25cba7cf44 in datapath b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62 bound to our chassis#033[00m
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:21.410 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62#033[00m
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:21.422 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[7c2babbe-7f70-4080-b07d-146cf01c2485]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:21.423 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb0a6bb86-d1 in ovnmeta-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:30:21 np0005539552 systemd-machined[196379]: New machine qemu-72-instance-0000009a.
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:21.424 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb0a6bb86-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:21.424 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[96f2f332-b19e-493d-ad65-83cd00008eb8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:21.425 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[9bca0a46-975f-49d7-8c68-9a174708436e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:21.436 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[53ac9a95-6815-48b5-8763-407d0d14e81e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:21.449 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[7e1cfe6c-d09c-4adc-b479-ee77edf62c2f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:21 np0005539552 nova_compute[233724]: 2025-11-29 08:30:21.466 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:21 np0005539552 systemd[1]: Started Virtual Machine qemu-72-instance-0000009a.
Nov 29 03:30:21 np0005539552 ovn_controller[133798]: 2025-11-29T08:30:21Z|00713|binding|INFO|Setting lport 3064c493-912e-4107-90c6-fd25cba7cf44 ovn-installed in OVS
Nov 29 03:30:21 np0005539552 ovn_controller[133798]: 2025-11-29T08:30:21Z|00714|binding|INFO|Setting lport 3064c493-912e-4107-90c6-fd25cba7cf44 up in Southbound
Nov 29 03:30:21 np0005539552 nova_compute[233724]: 2025-11-29 08:30:21.469 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:21.481 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[ae422a96-dac2-4050-a0fe-44afff8526eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:21.488 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2580c806-8844-49ce-b994-d8f80630a088]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:21 np0005539552 NetworkManager[48926]: <info>  [1764405021.4893] manager: (tapb0a6bb86-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/311)
Nov 29 03:30:21 np0005539552 systemd-udevd[299132]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:30:21 np0005539552 systemd-udevd[299133]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:30:21 np0005539552 NetworkManager[48926]: <info>  [1764405021.5087] device (tap3064c493-91): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:30:21 np0005539552 NetworkManager[48926]: <info>  [1764405021.5099] device (tap3064c493-91): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:21.524 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[5ec6748d-319e-4883-89ee-80edf0310c3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:21.528 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[e01a6769-0831-49ea-bccd-a88dec2f8a73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:21 np0005539552 NetworkManager[48926]: <info>  [1764405021.5525] device (tapb0a6bb86-d0): carrier: link connected
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:21.558 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[e95d3e9b-1425-4603-bf3e-86fdd36562de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:21.577 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2f7e1371-2037-463f-9e43-655467f84160]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb0a6bb86-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:51:27'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 211], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 806724, 'reachable_time': 22333, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299160, 'error': None, 'target': 'ovnmeta-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:21.594 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4ff1ed26-960b-47e8-9f22-cec94771aa62]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe02:5127'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 806724, 'tstamp': 806724}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299161, 'error': None, 'target': 'ovnmeta-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:21.612 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0841e671-ffe7-4737-bf4c-0f94a99c524c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb0a6bb86-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:51:27'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 211], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 806724, 'reachable_time': 22333, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 299162, 'error': None, 'target': 'ovnmeta-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:21.643 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e180924c-9f46-4278-93cc-a5c02d05834b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:21 np0005539552 nova_compute[233724]: 2025-11-29 08:30:21.668 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:21.707 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a4ab0a75-3756-4ff2-a61e-e4e1280a25e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:21.708 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb0a6bb86-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:21.709 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:21.709 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb0a6bb86-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:21 np0005539552 nova_compute[233724]: 2025-11-29 08:30:21.710 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:21 np0005539552 kernel: tapb0a6bb86-d0: entered promiscuous mode
Nov 29 03:30:21 np0005539552 NetworkManager[48926]: <info>  [1764405021.7118] manager: (tapb0a6bb86-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/312)
Nov 29 03:30:21 np0005539552 nova_compute[233724]: 2025-11-29 08:30:21.713 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:21.714 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb0a6bb86-d0, col_values=(('external_ids', {'iface-id': '1405bf80-edb0-434c-bb36-3b4fb078e261'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:21 np0005539552 nova_compute[233724]: 2025-11-29 08:30:21.715 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:21 np0005539552 ovn_controller[133798]: 2025-11-29T08:30:21Z|00715|binding|INFO|Releasing lport 1405bf80-edb0-434c-bb36-3b4fb078e261 from this chassis (sb_readonly=0)
Nov 29 03:30:21 np0005539552 nova_compute[233724]: 2025-11-29 08:30:21.731 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:21.732 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:21.732 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[fd06df41-d2f0-40c9-99e8-470eb538433a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:21.733 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62.pid.haproxy
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:30:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:21.734 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62', 'env', 'PROCESS_TAG=haproxy-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:30:21 np0005539552 nova_compute[233724]: 2025-11-29 08:30:21.811 233728 DEBUG nova.compute.manager [req-4aee3d45-d362-48d7-ac4a-db70a51484c3 req-68e88fc1-5fd4-49a6-bd26-64a483c5f8c5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Received event network-vif-plugged-3064c493-912e-4107-90c6-fd25cba7cf44 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:21 np0005539552 nova_compute[233724]: 2025-11-29 08:30:21.812 233728 DEBUG oslo_concurrency.lockutils [req-4aee3d45-d362-48d7-ac4a-db70a51484c3 req-68e88fc1-5fd4-49a6-bd26-64a483c5f8c5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "c6851fde-7355-4735-8410-73aadae465f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:21 np0005539552 nova_compute[233724]: 2025-11-29 08:30:21.812 233728 DEBUG oslo_concurrency.lockutils [req-4aee3d45-d362-48d7-ac4a-db70a51484c3 req-68e88fc1-5fd4-49a6-bd26-64a483c5f8c5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c6851fde-7355-4735-8410-73aadae465f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:21 np0005539552 nova_compute[233724]: 2025-11-29 08:30:21.813 233728 DEBUG oslo_concurrency.lockutils [req-4aee3d45-d362-48d7-ac4a-db70a51484c3 req-68e88fc1-5fd4-49a6-bd26-64a483c5f8c5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c6851fde-7355-4735-8410-73aadae465f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:21 np0005539552 nova_compute[233724]: 2025-11-29 08:30:21.813 233728 DEBUG nova.compute.manager [req-4aee3d45-d362-48d7-ac4a-db70a51484c3 req-68e88fc1-5fd4-49a6-bd26-64a483c5f8c5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Processing event network-vif-plugged-3064c493-912e-4107-90c6-fd25cba7cf44 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:30:22 np0005539552 podman[299229]: 2025-11-29 08:30:22.08237006 +0000 UTC m=+0.060365866 container create 3154f5431dc77ea06d43de69e5dc9b7632157eabfc3a6befbd7995e21fcd9c60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 03:30:22 np0005539552 systemd[1]: Started libpod-conmon-3154f5431dc77ea06d43de69e5dc9b7632157eabfc3a6befbd7995e21fcd9c60.scope.
Nov 29 03:30:22 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:30:22 np0005539552 nova_compute[233724]: 2025-11-29 08:30:22.139 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405022.1391308, c6851fde-7355-4735-8410-73aadae465f6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:30:22 np0005539552 nova_compute[233724]: 2025-11-29 08:30:22.140 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c6851fde-7355-4735-8410-73aadae465f6] VM Started (Lifecycle Event)#033[00m
Nov 29 03:30:22 np0005539552 podman[299229]: 2025-11-29 08:30:22.047141152 +0000 UTC m=+0.025137038 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:30:22 np0005539552 nova_compute[233724]: 2025-11-29 08:30:22.142 233728 DEBUG nova.compute.manager [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:30:22 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5c23adb2504c609ed78f02505a3eb4ddb228d4ba38edcbfc2aa5ef794d017d8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:30:22 np0005539552 nova_compute[233724]: 2025-11-29 08:30:22.150 233728 DEBUG nova.virt.libvirt.driver [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:30:22 np0005539552 nova_compute[233724]: 2025-11-29 08:30:22.154 233728 INFO nova.virt.libvirt.driver [-] [instance: c6851fde-7355-4735-8410-73aadae465f6] Instance spawned successfully.#033[00m
Nov 29 03:30:22 np0005539552 nova_compute[233724]: 2025-11-29 08:30:22.154 233728 DEBUG nova.virt.libvirt.driver [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:30:22 np0005539552 podman[299229]: 2025-11-29 08:30:22.158977171 +0000 UTC m=+0.136973017 container init 3154f5431dc77ea06d43de69e5dc9b7632157eabfc3a6befbd7995e21fcd9c60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 03:30:22 np0005539552 nova_compute[233724]: 2025-11-29 08:30:22.159 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c6851fde-7355-4735-8410-73aadae465f6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:30:22 np0005539552 nova_compute[233724]: 2025-11-29 08:30:22.162 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c6851fde-7355-4735-8410-73aadae465f6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:30:22 np0005539552 podman[299229]: 2025-11-29 08:30:22.164080928 +0000 UTC m=+0.142076734 container start 3154f5431dc77ea06d43de69e5dc9b7632157eabfc3a6befbd7995e21fcd9c60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:30:22 np0005539552 nova_compute[233724]: 2025-11-29 08:30:22.172 233728 DEBUG nova.virt.libvirt.driver [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:30:22 np0005539552 nova_compute[233724]: 2025-11-29 08:30:22.172 233728 DEBUG nova.virt.libvirt.driver [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:30:22 np0005539552 nova_compute[233724]: 2025-11-29 08:30:22.173 233728 DEBUG nova.virt.libvirt.driver [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:30:22 np0005539552 nova_compute[233724]: 2025-11-29 08:30:22.173 233728 DEBUG nova.virt.libvirt.driver [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:30:22 np0005539552 nova_compute[233724]: 2025-11-29 08:30:22.174 233728 DEBUG nova.virt.libvirt.driver [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:30:22 np0005539552 nova_compute[233724]: 2025-11-29 08:30:22.174 233728 DEBUG nova.virt.libvirt.driver [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:30:22 np0005539552 neutron-haproxy-ovnmeta-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62[299253]: [NOTICE]   (299257) : New worker (299259) forked
Nov 29 03:30:22 np0005539552 neutron-haproxy-ovnmeta-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62[299253]: [NOTICE]   (299257) : Loading success.
Nov 29 03:30:22 np0005539552 nova_compute[233724]: 2025-11-29 08:30:22.197 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c6851fde-7355-4735-8410-73aadae465f6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:30:22 np0005539552 nova_compute[233724]: 2025-11-29 08:30:22.197 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405022.1411183, c6851fde-7355-4735-8410-73aadae465f6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:30:22 np0005539552 nova_compute[233724]: 2025-11-29 08:30:22.197 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c6851fde-7355-4735-8410-73aadae465f6] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:30:22 np0005539552 nova_compute[233724]: 2025-11-29 08:30:22.224 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c6851fde-7355-4735-8410-73aadae465f6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:30:22 np0005539552 nova_compute[233724]: 2025-11-29 08:30:22.226 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405022.144545, c6851fde-7355-4735-8410-73aadae465f6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:30:22 np0005539552 nova_compute[233724]: 2025-11-29 08:30:22.227 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c6851fde-7355-4735-8410-73aadae465f6] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:30:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:22.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:22 np0005539552 nova_compute[233724]: 2025-11-29 08:30:22.257 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c6851fde-7355-4735-8410-73aadae465f6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:30:22 np0005539552 nova_compute[233724]: 2025-11-29 08:30:22.260 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c6851fde-7355-4735-8410-73aadae465f6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:30:22 np0005539552 nova_compute[233724]: 2025-11-29 08:30:22.282 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c6851fde-7355-4735-8410-73aadae465f6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:30:22 np0005539552 nova_compute[233724]: 2025-11-29 08:30:22.287 233728 INFO nova.compute.manager [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Took 7.74 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:30:22 np0005539552 nova_compute[233724]: 2025-11-29 08:30:22.287 233728 DEBUG nova.compute.manager [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:30:22 np0005539552 nova_compute[233724]: 2025-11-29 08:30:22.347 233728 INFO nova.compute.manager [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Took 8.84 seconds to build instance.#033[00m
Nov 29 03:30:22 np0005539552 nova_compute[233724]: 2025-11-29 08:30:22.363 233728 DEBUG oslo_concurrency.lockutils [None req-97e0fb13-9e17-4e7c-a72b-f52663549a76 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "c6851fde-7355-4735-8410-73aadae465f6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.953s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:23.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:23 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #109. Immutable memtables: 0.
Nov 29 03:30:23 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:30:23.611425) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:30:23 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 109
Nov 29 03:30:23 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405023611476, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 309, "num_deletes": 251, "total_data_size": 129072, "memory_usage": 135088, "flush_reason": "Manual Compaction"}
Nov 29 03:30:23 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #110: started
Nov 29 03:30:23 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405023613534, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 110, "file_size": 83952, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56244, "largest_seqno": 56548, "table_properties": {"data_size": 82022, "index_size": 158, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5667, "raw_average_key_size": 20, "raw_value_size": 78098, "raw_average_value_size": 279, "num_data_blocks": 7, "num_entries": 279, "num_filter_entries": 279, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405019, "oldest_key_time": 1764405019, "file_creation_time": 1764405023, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:30:23 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 2126 microseconds, and 788 cpu microseconds.
Nov 29 03:30:23 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:30:23 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:30:23.613562) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #110: 83952 bytes OK
Nov 29 03:30:23 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:30:23.613573) [db/memtable_list.cc:519] [default] Level-0 commit table #110 started
Nov 29 03:30:23 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:30:23.615246) [db/memtable_list.cc:722] [default] Level-0 commit table #110: memtable #1 done
Nov 29 03:30:23 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:30:23.615257) EVENT_LOG_v1 {"time_micros": 1764405023615254, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:30:23 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:30:23.615271) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:30:23 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 126838, prev total WAL file size 126838, number of live WAL files 2.
Nov 29 03:30:23 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000106.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:30:23 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:30:23.615734) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373532' seq:72057594037927935, type:22 .. '6D6772737461740032303034' seq:0, type:0; will stop at (end)
Nov 29 03:30:23 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:30:23 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [110(81KB)], [108(14MB)]
Nov 29 03:30:23 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405023615787, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [110], "files_L6": [108], "score": -1, "input_data_size": 15680347, "oldest_snapshot_seqno": -1}
Nov 29 03:30:23 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #111: 8755 keys, 11837834 bytes, temperature: kUnknown
Nov 29 03:30:23 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405023728658, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 111, "file_size": 11837834, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11781138, "index_size": 33736, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21893, "raw_key_size": 228159, "raw_average_key_size": 26, "raw_value_size": 11626832, "raw_average_value_size": 1328, "num_data_blocks": 1314, "num_entries": 8755, "num_filter_entries": 8755, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764405023, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 111, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:30:23 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:30:23 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:30:23.728910) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 11837834 bytes
Nov 29 03:30:23 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:30:23.730765) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 138.8 rd, 104.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 14.9 +0.0 blob) out(11.3 +0.0 blob), read-write-amplify(327.8) write-amplify(141.0) OK, records in: 9265, records dropped: 510 output_compression: NoCompression
Nov 29 03:30:23 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:30:23.730793) EVENT_LOG_v1 {"time_micros": 1764405023730781, "job": 68, "event": "compaction_finished", "compaction_time_micros": 112932, "compaction_time_cpu_micros": 51577, "output_level": 6, "num_output_files": 1, "total_output_size": 11837834, "num_input_records": 9265, "num_output_records": 8755, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:30:23 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:30:23 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405023730964, "job": 68, "event": "table_file_deletion", "file_number": 110}
Nov 29 03:30:23 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000108.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:30:23 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405023741550, "job": 68, "event": "table_file_deletion", "file_number": 108}
Nov 29 03:30:23 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:30:23.615605) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:30:23 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:30:23.741766) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:30:23 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:30:23.741775) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:30:23 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:30:23.741778) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:30:23 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:30:23.741781) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:30:23 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:30:23.741784) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:30:23 np0005539552 nova_compute[233724]: 2025-11-29 08:30:23.890 233728 DEBUG nova.compute.manager [req-1df9ba6f-6ae0-4044-83dd-da96dafdec31 req-70648358-4db2-4b06-b368-4c3c54374650 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Received event network-vif-plugged-3064c493-912e-4107-90c6-fd25cba7cf44 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:23 np0005539552 nova_compute[233724]: 2025-11-29 08:30:23.891 233728 DEBUG oslo_concurrency.lockutils [req-1df9ba6f-6ae0-4044-83dd-da96dafdec31 req-70648358-4db2-4b06-b368-4c3c54374650 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "c6851fde-7355-4735-8410-73aadae465f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:23 np0005539552 nova_compute[233724]: 2025-11-29 08:30:23.891 233728 DEBUG oslo_concurrency.lockutils [req-1df9ba6f-6ae0-4044-83dd-da96dafdec31 req-70648358-4db2-4b06-b368-4c3c54374650 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c6851fde-7355-4735-8410-73aadae465f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:23 np0005539552 nova_compute[233724]: 2025-11-29 08:30:23.892 233728 DEBUG oslo_concurrency.lockutils [req-1df9ba6f-6ae0-4044-83dd-da96dafdec31 req-70648358-4db2-4b06-b368-4c3c54374650 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c6851fde-7355-4735-8410-73aadae465f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:23 np0005539552 nova_compute[233724]: 2025-11-29 08:30:23.892 233728 DEBUG nova.compute.manager [req-1df9ba6f-6ae0-4044-83dd-da96dafdec31 req-70648358-4db2-4b06-b368-4c3c54374650 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] No waiting events found dispatching network-vif-plugged-3064c493-912e-4107-90c6-fd25cba7cf44 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:30:23 np0005539552 nova_compute[233724]: 2025-11-29 08:30:23.893 233728 WARNING nova.compute.manager [req-1df9ba6f-6ae0-4044-83dd-da96dafdec31 req-70648358-4db2-4b06-b368-4c3c54374650 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Received unexpected event network-vif-plugged-3064c493-912e-4107-90c6-fd25cba7cf44 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:30:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:24.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:25 np0005539552 nova_compute[233724]: 2025-11-29 08:30:25.168 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:25.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:26.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:26 np0005539552 nova_compute[233724]: 2025-11-29 08:30:26.672 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:26 np0005539552 nova_compute[233724]: 2025-11-29 08:30:26.864 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:26 np0005539552 NetworkManager[48926]: <info>  [1764405026.8662] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/313)
Nov 29 03:30:26 np0005539552 NetworkManager[48926]: <info>  [1764405026.8683] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/314)
Nov 29 03:30:26 np0005539552 nova_compute[233724]: 2025-11-29 08:30:26.986 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:26 np0005539552 ovn_controller[133798]: 2025-11-29T08:30:26Z|00716|binding|INFO|Releasing lport 42f71355-5b3f-49f9-b3e9-d89b87086d5d from this chassis (sb_readonly=0)
Nov 29 03:30:26 np0005539552 ovn_controller[133798]: 2025-11-29T08:30:26Z|00717|binding|INFO|Releasing lport f2118d1b-0f35-4211-8508-64237a2d816e from this chassis (sb_readonly=0)
Nov 29 03:30:26 np0005539552 ovn_controller[133798]: 2025-11-29T08:30:26Z|00718|binding|INFO|Releasing lport 1405bf80-edb0-434c-bb36-3b4fb078e261 from this chassis (sb_readonly=0)
Nov 29 03:30:27 np0005539552 nova_compute[233724]: 2025-11-29 08:30:27.004 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:30:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:27.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:30:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:28.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:28 np0005539552 nova_compute[233724]: 2025-11-29 08:30:28.287 233728 DEBUG nova.compute.manager [req-4e1fc79d-8612-4751-bb0a-8984c2cb9fb9 req-e6041766-5c6f-4a1a-be54-8a246ed16bd8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Received event network-changed-3064c493-912e-4107-90c6-fd25cba7cf44 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:28 np0005539552 nova_compute[233724]: 2025-11-29 08:30:28.287 233728 DEBUG nova.compute.manager [req-4e1fc79d-8612-4751-bb0a-8984c2cb9fb9 req-e6041766-5c6f-4a1a-be54-8a246ed16bd8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Refreshing instance network info cache due to event network-changed-3064c493-912e-4107-90c6-fd25cba7cf44. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:30:28 np0005539552 nova_compute[233724]: 2025-11-29 08:30:28.287 233728 DEBUG oslo_concurrency.lockutils [req-4e1fc79d-8612-4751-bb0a-8984c2cb9fb9 req-e6041766-5c6f-4a1a-be54-8a246ed16bd8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-c6851fde-7355-4735-8410-73aadae465f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:30:28 np0005539552 nova_compute[233724]: 2025-11-29 08:30:28.288 233728 DEBUG oslo_concurrency.lockutils [req-4e1fc79d-8612-4751-bb0a-8984c2cb9fb9 req-e6041766-5c6f-4a1a-be54-8a246ed16bd8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-c6851fde-7355-4735-8410-73aadae465f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:30:28 np0005539552 nova_compute[233724]: 2025-11-29 08:30:28.288 233728 DEBUG nova.network.neutron [req-4e1fc79d-8612-4751-bb0a-8984c2cb9fb9 req-e6041766-5c6f-4a1a-be54-8a246ed16bd8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Refreshing network info cache for port 3064c493-912e-4107-90c6-fd25cba7cf44 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:30:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:29.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:30 np0005539552 nova_compute[233724]: 2025-11-29 08:30:30.170 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:30.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:31 np0005539552 nova_compute[233724]: 2025-11-29 08:30:31.146 233728 DEBUG nova.network.neutron [req-4e1fc79d-8612-4751-bb0a-8984c2cb9fb9 req-e6041766-5c6f-4a1a-be54-8a246ed16bd8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Updated VIF entry in instance network info cache for port 3064c493-912e-4107-90c6-fd25cba7cf44. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:30:31 np0005539552 nova_compute[233724]: 2025-11-29 08:30:31.146 233728 DEBUG nova.network.neutron [req-4e1fc79d-8612-4751-bb0a-8984c2cb9fb9 req-e6041766-5c6f-4a1a-be54-8a246ed16bd8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Updating instance_info_cache with network_info: [{"id": "3064c493-912e-4107-90c6-fd25cba7cf44", "address": "fa:16:3e:b2:66:0f", "network": {"id": "b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62", "bridge": "br-int", "label": "tempest-network-smoke--1601480889", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3064c493-91", "ovs_interfaceid": "3064c493-912e-4107-90c6-fd25cba7cf44", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:30:31 np0005539552 nova_compute[233724]: 2025-11-29 08:30:31.170 233728 DEBUG oslo_concurrency.lockutils [req-4e1fc79d-8612-4751-bb0a-8984c2cb9fb9 req-e6041766-5c6f-4a1a-be54-8a246ed16bd8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-c6851fde-7355-4735-8410-73aadae465f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:30:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:31.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:31 np0005539552 nova_compute[233724]: 2025-11-29 08:30:31.674 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:30:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:32.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:30:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e355 e355: 3 total, 3 up, 3 in
Nov 29 03:30:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:33.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:33 np0005539552 nova_compute[233724]: 2025-11-29 08:30:33.994 233728 DEBUG oslo_concurrency.lockutils [None req-8e62979e-b98d-46e0-9638-f15233ba3e32 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Acquiring lock "8ba722d8-f0b0-426b-a972-888ebce61a32" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:33 np0005539552 nova_compute[233724]: 2025-11-29 08:30:33.994 233728 DEBUG oslo_concurrency.lockutils [None req-8e62979e-b98d-46e0-9638-f15233ba3e32 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "8ba722d8-f0b0-426b-a972-888ebce61a32" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:33 np0005539552 nova_compute[233724]: 2025-11-29 08:30:33.995 233728 DEBUG oslo_concurrency.lockutils [None req-8e62979e-b98d-46e0-9638-f15233ba3e32 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Acquiring lock "8ba722d8-f0b0-426b-a972-888ebce61a32-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:33 np0005539552 nova_compute[233724]: 2025-11-29 08:30:33.995 233728 DEBUG oslo_concurrency.lockutils [None req-8e62979e-b98d-46e0-9638-f15233ba3e32 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "8ba722d8-f0b0-426b-a972-888ebce61a32-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:33 np0005539552 nova_compute[233724]: 2025-11-29 08:30:33.995 233728 DEBUG oslo_concurrency.lockutils [None req-8e62979e-b98d-46e0-9638-f15233ba3e32 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "8ba722d8-f0b0-426b-a972-888ebce61a32-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:33 np0005539552 nova_compute[233724]: 2025-11-29 08:30:33.996 233728 INFO nova.compute.manager [None req-8e62979e-b98d-46e0-9638-f15233ba3e32 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Terminating instance#033[00m
Nov 29 03:30:33 np0005539552 nova_compute[233724]: 2025-11-29 08:30:33.997 233728 DEBUG nova.compute.manager [None req-8e62979e-b98d-46e0-9638-f15233ba3e32 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:30:34 np0005539552 kernel: tapa8931757-80 (unregistering): left promiscuous mode
Nov 29 03:30:34 np0005539552 NetworkManager[48926]: <info>  [1764405034.0579] device (tapa8931757-80): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:30:34 np0005539552 nova_compute[233724]: 2025-11-29 08:30:34.076 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:34 np0005539552 ovn_controller[133798]: 2025-11-29T08:30:34Z|00719|binding|INFO|Releasing lport a8931757-80f9-4e61-80ae-e6d2f1fc0dde from this chassis (sb_readonly=0)
Nov 29 03:30:34 np0005539552 ovn_controller[133798]: 2025-11-29T08:30:34Z|00720|binding|INFO|Setting lport a8931757-80f9-4e61-80ae-e6d2f1fc0dde down in Southbound
Nov 29 03:30:34 np0005539552 ovn_controller[133798]: 2025-11-29T08:30:34Z|00721|binding|INFO|Removing iface tapa8931757-80 ovn-installed in OVS
Nov 29 03:30:34 np0005539552 nova_compute[233724]: 2025-11-29 08:30:34.077 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:34.086 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:b1:70 10.100.0.12'], port_security=['fa:16:3e:41:b1:70 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '8ba722d8-f0b0-426b-a972-888ebce61a32', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ae71059d02774857be85797a3be0e4e6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9cdb0c1e-9792-4231-abe9-b49a2c7e81de', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43696b0d-f042-4e44-8852-c0333c8ffa4f, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=a8931757-80f9-4e61-80ae-e6d2f1fc0dde) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:30:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:34.087 143400 INFO neutron.agent.ovn.metadata.agent [-] Port a8931757-80f9-4e61-80ae-e6d2f1fc0dde in datapath d9d41f0a-17f9-4df4-a453-04da996d63b6 unbound from our chassis#033[00m
Nov 29 03:30:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:34.089 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d9d41f0a-17f9-4df4-a453-04da996d63b6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:30:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:34.090 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ccd5b5d4-a74a-4562-aa20-3092554a6c5a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:34.090 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6 namespace which is not needed anymore#033[00m
Nov 29 03:30:34 np0005539552 nova_compute[233724]: 2025-11-29 08:30:34.111 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:34 np0005539552 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d00000094.scope: Deactivated successfully.
Nov 29 03:30:34 np0005539552 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d00000094.scope: Consumed 16.190s CPU time.
Nov 29 03:30:34 np0005539552 systemd-machined[196379]: Machine qemu-70-instance-00000094 terminated.
Nov 29 03:30:34 np0005539552 nova_compute[233724]: 2025-11-29 08:30:34.236 233728 INFO nova.virt.libvirt.driver [-] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Instance destroyed successfully.#033[00m
Nov 29 03:30:34 np0005539552 nova_compute[233724]: 2025-11-29 08:30:34.237 233728 DEBUG nova.objects.instance [None req-8e62979e-b98d-46e0-9638-f15233ba3e32 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lazy-loading 'resources' on Instance uuid 8ba722d8-f0b0-426b-a972-888ebce61a32 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:34 np0005539552 nova_compute[233724]: 2025-11-29 08:30:34.262 233728 DEBUG nova.virt.libvirt.vif [None req-8e62979e-b98d-46e0-9638-f15233ba3e32 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:29:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-1137448026',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-1137448026',id=148,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:29:24Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ae71059d02774857be85797a3be0e4e6',ramdisk_id='',reservation_id='r-brhr6vp6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1715153470',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1715153470-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:29:32Z,user_data=None,user_id='64b11a4dc36b4f55b85dbe846183be55',uuid=8ba722d8-f0b0-426b-a972-888ebce61a32,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a8931757-80f9-4e61-80ae-e6d2f1fc0dde", "address": "fa:16:3e:41:b1:70", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8931757-80", "ovs_interfaceid": "a8931757-80f9-4e61-80ae-e6d2f1fc0dde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:30:34 np0005539552 nova_compute[233724]: 2025-11-29 08:30:34.263 233728 DEBUG nova.network.os_vif_util [None req-8e62979e-b98d-46e0-9638-f15233ba3e32 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Converting VIF {"id": "a8931757-80f9-4e61-80ae-e6d2f1fc0dde", "address": "fa:16:3e:41:b1:70", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8931757-80", "ovs_interfaceid": "a8931757-80f9-4e61-80ae-e6d2f1fc0dde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:30:34 np0005539552 nova_compute[233724]: 2025-11-29 08:30:34.264 233728 DEBUG nova.network.os_vif_util [None req-8e62979e-b98d-46e0-9638-f15233ba3e32 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:b1:70,bridge_name='br-int',has_traffic_filtering=True,id=a8931757-80f9-4e61-80ae-e6d2f1fc0dde,network=Network(d9d41f0a-17f9-4df4-a453-04da996d63b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8931757-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:30:34 np0005539552 nova_compute[233724]: 2025-11-29 08:30:34.266 233728 DEBUG os_vif [None req-8e62979e-b98d-46e0-9638-f15233ba3e32 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:b1:70,bridge_name='br-int',has_traffic_filtering=True,id=a8931757-80f9-4e61-80ae-e6d2f1fc0dde,network=Network(d9d41f0a-17f9-4df4-a453-04da996d63b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8931757-80') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:30:34 np0005539552 neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6[297278]: [NOTICE]   (297288) : haproxy version is 2.8.14-c23fe91
Nov 29 03:30:34 np0005539552 neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6[297278]: [NOTICE]   (297288) : path to executable is /usr/sbin/haproxy
Nov 29 03:30:34 np0005539552 neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6[297278]: [WARNING]  (297288) : Exiting Master process...
Nov 29 03:30:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:30:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:34.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:30:34 np0005539552 nova_compute[233724]: 2025-11-29 08:30:34.268 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:34 np0005539552 nova_compute[233724]: 2025-11-29 08:30:34.269 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa8931757-80, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:34 np0005539552 neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6[297278]: [ALERT]    (297288) : Current worker (297290) exited with code 143 (Terminated)
Nov 29 03:30:34 np0005539552 neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6[297278]: [WARNING]  (297288) : All workers exited. Exiting... (0)
Nov 29 03:30:34 np0005539552 nova_compute[233724]: 2025-11-29 08:30:34.271 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:34 np0005539552 nova_compute[233724]: 2025-11-29 08:30:34.273 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:30:34 np0005539552 systemd[1]: libpod-389577fa042971da2450401115112a6eea6945fa55ea4d562c413161afaeef0c.scope: Deactivated successfully.
Nov 29 03:30:34 np0005539552 nova_compute[233724]: 2025-11-29 08:30:34.275 233728 INFO os_vif [None req-8e62979e-b98d-46e0-9638-f15233ba3e32 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:b1:70,bridge_name='br-int',has_traffic_filtering=True,id=a8931757-80f9-4e61-80ae-e6d2f1fc0dde,network=Network(d9d41f0a-17f9-4df4-a453-04da996d63b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8931757-80')#033[00m
Nov 29 03:30:34 np0005539552 podman[299350]: 2025-11-29 08:30:34.279296366 +0000 UTC m=+0.062577045 container died 389577fa042971da2450401115112a6eea6945fa55ea4d562c413161afaeef0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:30:34 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-389577fa042971da2450401115112a6eea6945fa55ea4d562c413161afaeef0c-userdata-shm.mount: Deactivated successfully.
Nov 29 03:30:34 np0005539552 systemd[1]: var-lib-containers-storage-overlay-d6b81d924149ed42138083015f2e5c9ced140fc4f08b81b8acc1d1cf03cf296d-merged.mount: Deactivated successfully.
Nov 29 03:30:34 np0005539552 podman[299350]: 2025-11-29 08:30:34.327937155 +0000 UTC m=+0.111217824 container cleanup 389577fa042971da2450401115112a6eea6945fa55ea4d562c413161afaeef0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:30:34 np0005539552 systemd[1]: libpod-conmon-389577fa042971da2450401115112a6eea6945fa55ea4d562c413161afaeef0c.scope: Deactivated successfully.
Nov 29 03:30:34 np0005539552 podman[299410]: 2025-11-29 08:30:34.392351058 +0000 UTC m=+0.039570716 container remove 389577fa042971da2450401115112a6eea6945fa55ea4d562c413161afaeef0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:30:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:34.399 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[7a4cf033-3a89-49dc-afe4-cf11452bb90c]: (4, ('Sat Nov 29 08:30:34 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6 (389577fa042971da2450401115112a6eea6945fa55ea4d562c413161afaeef0c)\n389577fa042971da2450401115112a6eea6945fa55ea4d562c413161afaeef0c\nSat Nov 29 08:30:34 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6 (389577fa042971da2450401115112a6eea6945fa55ea4d562c413161afaeef0c)\n389577fa042971da2450401115112a6eea6945fa55ea4d562c413161afaeef0c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:34.400 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e6feeb9d-0c66-4e35-8318-817dbb9a7ca8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:34.402 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9d41f0a-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:34 np0005539552 nova_compute[233724]: 2025-11-29 08:30:34.404 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:34 np0005539552 kernel: tapd9d41f0a-10: left promiscuous mode
Nov 29 03:30:34 np0005539552 nova_compute[233724]: 2025-11-29 08:30:34.420 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:34.424 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3245eab6-c04d-4272-83f3-48b383e1d145]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:34.436 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[13ef4f2d-3286-4752-8d86-d79b72e7a2a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:34.437 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[60cefb4a-1d49-43a2-a6d0-b9fb45d068f4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:34.458 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[22387d79-1d19-4a52-a52c-54a84df1625e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 800880, 'reachable_time': 33195, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299425, 'error': None, 'target': 'ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:34.462 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:30:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:34.462 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[4a6a4426-faf8-47da-8278-c3eb89796103]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:34 np0005539552 systemd[1]: run-netns-ovnmeta\x2dd9d41f0a\x2d17f9\x2d4df4\x2da453\x2d04da996d63b6.mount: Deactivated successfully.
Nov 29 03:30:34 np0005539552 nova_compute[233724]: 2025-11-29 08:30:34.474 233728 DEBUG nova.compute.manager [req-65337dc7-3091-46da-b0ce-89b903f6f600 req-309ec119-aba5-464d-a090-e14d77c2cba7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Received event network-vif-unplugged-a8931757-80f9-4e61-80ae-e6d2f1fc0dde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:34 np0005539552 nova_compute[233724]: 2025-11-29 08:30:34.475 233728 DEBUG oslo_concurrency.lockutils [req-65337dc7-3091-46da-b0ce-89b903f6f600 req-309ec119-aba5-464d-a090-e14d77c2cba7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8ba722d8-f0b0-426b-a972-888ebce61a32-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:34 np0005539552 nova_compute[233724]: 2025-11-29 08:30:34.475 233728 DEBUG oslo_concurrency.lockutils [req-65337dc7-3091-46da-b0ce-89b903f6f600 req-309ec119-aba5-464d-a090-e14d77c2cba7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8ba722d8-f0b0-426b-a972-888ebce61a32-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:34 np0005539552 nova_compute[233724]: 2025-11-29 08:30:34.475 233728 DEBUG oslo_concurrency.lockutils [req-65337dc7-3091-46da-b0ce-89b903f6f600 req-309ec119-aba5-464d-a090-e14d77c2cba7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8ba722d8-f0b0-426b-a972-888ebce61a32-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:34 np0005539552 nova_compute[233724]: 2025-11-29 08:30:34.475 233728 DEBUG nova.compute.manager [req-65337dc7-3091-46da-b0ce-89b903f6f600 req-309ec119-aba5-464d-a090-e14d77c2cba7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] No waiting events found dispatching network-vif-unplugged-a8931757-80f9-4e61-80ae-e6d2f1fc0dde pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:30:34 np0005539552 nova_compute[233724]: 2025-11-29 08:30:34.476 233728 DEBUG nova.compute.manager [req-65337dc7-3091-46da-b0ce-89b903f6f600 req-309ec119-aba5-464d-a090-e14d77c2cba7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Received event network-vif-unplugged-a8931757-80f9-4e61-80ae-e6d2f1fc0dde for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:30:34 np0005539552 nova_compute[233724]: 2025-11-29 08:30:34.775 233728 INFO nova.virt.libvirt.driver [None req-8e62979e-b98d-46e0-9638-f15233ba3e32 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Deleting instance files /var/lib/nova/instances/8ba722d8-f0b0-426b-a972-888ebce61a32_del#033[00m
Nov 29 03:30:34 np0005539552 nova_compute[233724]: 2025-11-29 08:30:34.776 233728 INFO nova.virt.libvirt.driver [None req-8e62979e-b98d-46e0-9638-f15233ba3e32 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Deletion of /var/lib/nova/instances/8ba722d8-f0b0-426b-a972-888ebce61a32_del complete#033[00m
Nov 29 03:30:34 np0005539552 nova_compute[233724]: 2025-11-29 08:30:34.841 233728 INFO nova.compute.manager [None req-8e62979e-b98d-46e0-9638-f15233ba3e32 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Took 0.84 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:30:34 np0005539552 nova_compute[233724]: 2025-11-29 08:30:34.842 233728 DEBUG oslo.service.loopingcall [None req-8e62979e-b98d-46e0-9638-f15233ba3e32 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:30:34 np0005539552 nova_compute[233724]: 2025-11-29 08:30:34.843 233728 DEBUG nova.compute.manager [-] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:30:34 np0005539552 nova_compute[233724]: 2025-11-29 08:30:34.843 233728 DEBUG nova.network.neutron [-] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:30:35 np0005539552 ovn_controller[133798]: 2025-11-29T08:30:35Z|00091|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b2:66:0f 10.100.0.13
Nov 29 03:30:35 np0005539552 ovn_controller[133798]: 2025-11-29T08:30:35Z|00092|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b2:66:0f 10.100.0.13
Nov 29 03:30:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:35.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:35 np0005539552 nova_compute[233724]: 2025-11-29 08:30:35.483 233728 DEBUG nova.network.neutron [-] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:30:35 np0005539552 nova_compute[233724]: 2025-11-29 08:30:35.507 233728 INFO nova.compute.manager [-] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Took 0.66 seconds to deallocate network for instance.#033[00m
Nov 29 03:30:35 np0005539552 nova_compute[233724]: 2025-11-29 08:30:35.608 233728 DEBUG oslo_concurrency.lockutils [None req-8e62979e-b98d-46e0-9638-f15233ba3e32 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:35 np0005539552 nova_compute[233724]: 2025-11-29 08:30:35.608 233728 DEBUG oslo_concurrency.lockutils [None req-8e62979e-b98d-46e0-9638-f15233ba3e32 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:36 np0005539552 nova_compute[233724]: 2025-11-29 08:30:36.008 233728 DEBUG oslo_concurrency.processutils [None req-8e62979e-b98d-46e0-9638-f15233ba3e32 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:36.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:36 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:30:36 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2268465232' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:30:36 np0005539552 nova_compute[233724]: 2025-11-29 08:30:36.463 233728 DEBUG oslo_concurrency.processutils [None req-8e62979e-b98d-46e0-9638-f15233ba3e32 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:36 np0005539552 nova_compute[233724]: 2025-11-29 08:30:36.469 233728 DEBUG nova.compute.provider_tree [None req-8e62979e-b98d-46e0-9638-f15233ba3e32 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:30:36 np0005539552 nova_compute[233724]: 2025-11-29 08:30:36.492 233728 DEBUG nova.scheduler.client.report [None req-8e62979e-b98d-46e0-9638-f15233ba3e32 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:30:36 np0005539552 nova_compute[233724]: 2025-11-29 08:30:36.516 233728 DEBUG oslo_concurrency.lockutils [None req-8e62979e-b98d-46e0-9638-f15233ba3e32 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.908s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:36 np0005539552 nova_compute[233724]: 2025-11-29 08:30:36.544 233728 INFO nova.scheduler.client.report [None req-8e62979e-b98d-46e0-9638-f15233ba3e32 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Deleted allocations for instance 8ba722d8-f0b0-426b-a972-888ebce61a32#033[00m
Nov 29 03:30:36 np0005539552 nova_compute[233724]: 2025-11-29 08:30:36.565 233728 DEBUG nova.compute.manager [req-60b95b68-bdee-46f3-85e7-a23b99c660de req-a01861b1-0c52-486c-85f2-a0a74354eed4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Received event network-vif-plugged-a8931757-80f9-4e61-80ae-e6d2f1fc0dde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:36 np0005539552 nova_compute[233724]: 2025-11-29 08:30:36.565 233728 DEBUG oslo_concurrency.lockutils [req-60b95b68-bdee-46f3-85e7-a23b99c660de req-a01861b1-0c52-486c-85f2-a0a74354eed4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8ba722d8-f0b0-426b-a972-888ebce61a32-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:36 np0005539552 nova_compute[233724]: 2025-11-29 08:30:36.566 233728 DEBUG oslo_concurrency.lockutils [req-60b95b68-bdee-46f3-85e7-a23b99c660de req-a01861b1-0c52-486c-85f2-a0a74354eed4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8ba722d8-f0b0-426b-a972-888ebce61a32-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:36 np0005539552 nova_compute[233724]: 2025-11-29 08:30:36.566 233728 DEBUG oslo_concurrency.lockutils [req-60b95b68-bdee-46f3-85e7-a23b99c660de req-a01861b1-0c52-486c-85f2-a0a74354eed4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8ba722d8-f0b0-426b-a972-888ebce61a32-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:36 np0005539552 nova_compute[233724]: 2025-11-29 08:30:36.566 233728 DEBUG nova.compute.manager [req-60b95b68-bdee-46f3-85e7-a23b99c660de req-a01861b1-0c52-486c-85f2-a0a74354eed4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] No waiting events found dispatching network-vif-plugged-a8931757-80f9-4e61-80ae-e6d2f1fc0dde pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:30:36 np0005539552 nova_compute[233724]: 2025-11-29 08:30:36.566 233728 WARNING nova.compute.manager [req-60b95b68-bdee-46f3-85e7-a23b99c660de req-a01861b1-0c52-486c-85f2-a0a74354eed4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Received unexpected event network-vif-plugged-a8931757-80f9-4e61-80ae-e6d2f1fc0dde for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:30:36 np0005539552 nova_compute[233724]: 2025-11-29 08:30:36.662 233728 DEBUG oslo_concurrency.lockutils [None req-8e62979e-b98d-46e0-9638-f15233ba3e32 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "8ba722d8-f0b0-426b-a972-888ebce61a32" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:36 np0005539552 nova_compute[233724]: 2025-11-29 08:30:36.676 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:36 np0005539552 nova_compute[233724]: 2025-11-29 08:30:36.782 233728 DEBUG nova.compute.manager [req-595c7195-e39f-4d76-b81d-610836d98c02 req-0e25f409-efed-410f-b808-24d1b1656bf7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Received event network-vif-deleted-a8931757-80f9-4e61-80ae-e6d2f1fc0dde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:36 np0005539552 nova_compute[233724]: 2025-11-29 08:30:36.939 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:30:36 np0005539552 nova_compute[233724]: 2025-11-29 08:30:36.965 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:36 np0005539552 nova_compute[233724]: 2025-11-29 08:30:36.965 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:36 np0005539552 nova_compute[233724]: 2025-11-29 08:30:36.966 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:36 np0005539552 nova_compute[233724]: 2025-11-29 08:30:36.966 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:30:36 np0005539552 nova_compute[233724]: 2025-11-29 08:30:36.967 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:37.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:30:37 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4212506690' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:30:37 np0005539552 nova_compute[233724]: 2025-11-29 08:30:37.439 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:37 np0005539552 nova_compute[233724]: 2025-11-29 08:30:37.533 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:30:37 np0005539552 nova_compute[233724]: 2025-11-29 08:30:37.533 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:30:37 np0005539552 nova_compute[233724]: 2025-11-29 08:30:37.537 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-0000009a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:30:37 np0005539552 nova_compute[233724]: 2025-11-29 08:30:37.537 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-0000009a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:30:37 np0005539552 nova_compute[233724]: 2025-11-29 08:30:37.726 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:30:37 np0005539552 nova_compute[233724]: 2025-11-29 08:30:37.727 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3886MB free_disk=20.77368927001953GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:30:37 np0005539552 nova_compute[233724]: 2025-11-29 08:30:37.727 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:37 np0005539552 nova_compute[233724]: 2025-11-29 08:30:37.727 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:37 np0005539552 nova_compute[233724]: 2025-11-29 08:30:37.819 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 101e7b80-d529-4f2a-87df-44512ead5b00 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:30:37 np0005539552 nova_compute[233724]: 2025-11-29 08:30:37.819 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance c6851fde-7355-4735-8410-73aadae465f6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:30:37 np0005539552 nova_compute[233724]: 2025-11-29 08:30:37.819 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:30:37 np0005539552 nova_compute[233724]: 2025-11-29 08:30:37.819 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:30:37 np0005539552 nova_compute[233724]: 2025-11-29 08:30:37.888 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e355 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:38.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:30:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4107061576' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:30:38 np0005539552 nova_compute[233724]: 2025-11-29 08:30:38.330 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:38 np0005539552 nova_compute[233724]: 2025-11-29 08:30:38.335 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:30:38 np0005539552 nova_compute[233724]: 2025-11-29 08:30:38.348 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:30:38 np0005539552 nova_compute[233724]: 2025-11-29 08:30:38.377 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:30:38 np0005539552 nova_compute[233724]: 2025-11-29 08:30:38.377 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:39 np0005539552 nova_compute[233724]: 2025-11-29 08:30:39.273 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:30:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:39.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:30:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:30:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:40.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:30:40 np0005539552 nova_compute[233724]: 2025-11-29 08:30:40.518 233728 INFO nova.compute.manager [None req-e7254797-77d6-4e9a-bb58-049ded0ebc72 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Get console output#033[00m
Nov 29 03:30:40 np0005539552 nova_compute[233724]: 2025-11-29 08:30:40.524 279702 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:30:40 np0005539552 nova_compute[233724]: 2025-11-29 08:30:40.759 233728 DEBUG oslo_concurrency.lockutils [None req-508ccca5-23d4-49ab-9fa9-d9f7f8c569e2 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "c6851fde-7355-4735-8410-73aadae465f6" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:40 np0005539552 nova_compute[233724]: 2025-11-29 08:30:40.760 233728 DEBUG oslo_concurrency.lockutils [None req-508ccca5-23d4-49ab-9fa9-d9f7f8c569e2 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "c6851fde-7355-4735-8410-73aadae465f6" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:40 np0005539552 nova_compute[233724]: 2025-11-29 08:30:40.760 233728 DEBUG nova.compute.manager [None req-508ccca5-23d4-49ab-9fa9-d9f7f8c569e2 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:30:40 np0005539552 nova_compute[233724]: 2025-11-29 08:30:40.768 233728 DEBUG nova.compute.manager [None req-508ccca5-23d4-49ab-9fa9-d9f7f8c569e2 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Nov 29 03:30:40 np0005539552 nova_compute[233724]: 2025-11-29 08:30:40.770 233728 DEBUG nova.objects.instance [None req-508ccca5-23d4-49ab-9fa9-d9f7f8c569e2 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'flavor' on Instance uuid c6851fde-7355-4735-8410-73aadae465f6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:40 np0005539552 nova_compute[233724]: 2025-11-29 08:30:40.800 233728 DEBUG nova.virt.libvirt.driver [None req-508ccca5-23d4-49ab-9fa9-d9f7f8c569e2 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:30:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:41.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:41 np0005539552 nova_compute[233724]: 2025-11-29 08:30:41.679 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:41 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e356 e356: 3 total, 3 up, 3 in
Nov 29 03:30:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:42.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:42 np0005539552 nova_compute[233724]: 2025-11-29 08:30:42.362 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:30:42 np0005539552 nova_compute[233724]: 2025-11-29 08:30:42.362 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:30:42 np0005539552 nova_compute[233724]: 2025-11-29 08:30:42.362 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:30:42 np0005539552 nova_compute[233724]: 2025-11-29 08:30:42.362 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:30:42 np0005539552 nova_compute[233724]: 2025-11-29 08:30:42.363 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:30:42 np0005539552 nova_compute[233724]: 2025-11-29 08:30:42.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:30:42 np0005539552 nova_compute[233724]: 2025-11-29 08:30:42.925 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:30:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:43.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:43 np0005539552 nova_compute[233724]: 2025-11-29 08:30:43.821 233728 INFO nova.virt.libvirt.driver [None req-508ccca5-23d4-49ab-9fa9-d9f7f8c569e2 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Instance shutdown successfully after 3 seconds.#033[00m
Nov 29 03:30:44 np0005539552 kernel: tap3064c493-91 (unregistering): left promiscuous mode
Nov 29 03:30:44 np0005539552 NetworkManager[48926]: <info>  [1764405044.2394] device (tap3064c493-91): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:30:44 np0005539552 nova_compute[233724]: 2025-11-29 08:30:44.245 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:44 np0005539552 ovn_controller[133798]: 2025-11-29T08:30:44Z|00722|binding|INFO|Releasing lport 3064c493-912e-4107-90c6-fd25cba7cf44 from this chassis (sb_readonly=0)
Nov 29 03:30:44 np0005539552 ovn_controller[133798]: 2025-11-29T08:30:44Z|00723|binding|INFO|Setting lport 3064c493-912e-4107-90c6-fd25cba7cf44 down in Southbound
Nov 29 03:30:44 np0005539552 ovn_controller[133798]: 2025-11-29T08:30:44Z|00724|binding|INFO|Removing iface tap3064c493-91 ovn-installed in OVS
Nov 29 03:30:44 np0005539552 nova_compute[233724]: 2025-11-29 08:30:44.248 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:44 np0005539552 ceph-mgr[77480]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1950343944
Nov 29 03:30:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:44.254 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:66:0f 10.100.0.13'], port_security=['fa:16:3e:b2:66:0f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'c6851fde-7355-4735-8410-73aadae465f6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4145ed6cde61439ebcc12fae2609b724', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9ada5941-1c85-4f44-ade5-9cc90892652d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.196'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5db73f64-426f-4e0b-98ae-aef18864fc6a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=3064c493-912e-4107-90c6-fd25cba7cf44) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:30:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:44.255 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 3064c493-912e-4107-90c6-fd25cba7cf44 in datapath b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62 unbound from our chassis#033[00m
Nov 29 03:30:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:44.256 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:30:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:44.257 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b51d86f0-3d54-4ff1-9034-90f6cd9af3da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:44.258 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62 namespace which is not needed anymore#033[00m
Nov 29 03:30:44 np0005539552 nova_compute[233724]: 2025-11-29 08:30:44.274 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:44.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:44 np0005539552 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d0000009a.scope: Deactivated successfully.
Nov 29 03:30:44 np0005539552 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d0000009a.scope: Consumed 14.525s CPU time.
Nov 29 03:30:44 np0005539552 systemd-machined[196379]: Machine qemu-72-instance-0000009a terminated.
Nov 29 03:30:44 np0005539552 neutron-haproxy-ovnmeta-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62[299253]: [NOTICE]   (299257) : haproxy version is 2.8.14-c23fe91
Nov 29 03:30:44 np0005539552 neutron-haproxy-ovnmeta-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62[299253]: [NOTICE]   (299257) : path to executable is /usr/sbin/haproxy
Nov 29 03:30:44 np0005539552 neutron-haproxy-ovnmeta-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62[299253]: [WARNING]  (299257) : Exiting Master process...
Nov 29 03:30:44 np0005539552 neutron-haproxy-ovnmeta-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62[299253]: [WARNING]  (299257) : Exiting Master process...
Nov 29 03:30:44 np0005539552 neutron-haproxy-ovnmeta-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62[299253]: [ALERT]    (299257) : Current worker (299259) exited with code 143 (Terminated)
Nov 29 03:30:44 np0005539552 neutron-haproxy-ovnmeta-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62[299253]: [WARNING]  (299257) : All workers exited. Exiting... (0)
Nov 29 03:30:44 np0005539552 systemd[1]: libpod-3154f5431dc77ea06d43de69e5dc9b7632157eabfc3a6befbd7995e21fcd9c60.scope: Deactivated successfully.
Nov 29 03:30:44 np0005539552 podman[299574]: 2025-11-29 08:30:44.405279842 +0000 UTC m=+0.047233032 container died 3154f5431dc77ea06d43de69e5dc9b7632157eabfc3a6befbd7995e21fcd9c60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:30:44 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3154f5431dc77ea06d43de69e5dc9b7632157eabfc3a6befbd7995e21fcd9c60-userdata-shm.mount: Deactivated successfully.
Nov 29 03:30:44 np0005539552 systemd[1]: var-lib-containers-storage-overlay-d5c23adb2504c609ed78f02505a3eb4ddb228d4ba38edcbfc2aa5ef794d017d8-merged.mount: Deactivated successfully.
Nov 29 03:30:44 np0005539552 podman[299574]: 2025-11-29 08:30:44.441112316 +0000 UTC m=+0.083065506 container cleanup 3154f5431dc77ea06d43de69e5dc9b7632157eabfc3a6befbd7995e21fcd9c60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 03:30:44 np0005539552 systemd[1]: libpod-conmon-3154f5431dc77ea06d43de69e5dc9b7632157eabfc3a6befbd7995e21fcd9c60.scope: Deactivated successfully.
Nov 29 03:30:44 np0005539552 nova_compute[233724]: 2025-11-29 08:30:44.454 233728 INFO nova.virt.libvirt.driver [-] [instance: c6851fde-7355-4735-8410-73aadae465f6] Instance destroyed successfully.#033[00m
Nov 29 03:30:44 np0005539552 nova_compute[233724]: 2025-11-29 08:30:44.455 233728 DEBUG nova.objects.instance [None req-508ccca5-23d4-49ab-9fa9-d9f7f8c569e2 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'numa_topology' on Instance uuid c6851fde-7355-4735-8410-73aadae465f6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:44 np0005539552 nova_compute[233724]: 2025-11-29 08:30:44.468 233728 DEBUG nova.compute.manager [None req-508ccca5-23d4-49ab-9fa9-d9f7f8c569e2 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:30:44 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e357 e357: 3 total, 3 up, 3 in
Nov 29 03:30:44 np0005539552 nova_compute[233724]: 2025-11-29 08:30:44.513 233728 DEBUG oslo_concurrency.lockutils [None req-508ccca5-23d4-49ab-9fa9-d9f7f8c569e2 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "c6851fde-7355-4735-8410-73aadae465f6" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:44 np0005539552 podman[299611]: 2025-11-29 08:30:44.519112084 +0000 UTC m=+0.053007967 container remove 3154f5431dc77ea06d43de69e5dc9b7632157eabfc3a6befbd7995e21fcd9c60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Nov 29 03:30:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:44.528 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[5c8ac141-2620-4528-a8fa-2f5c48eedfb2]: (4, ('Sat Nov 29 08:30:44 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62 (3154f5431dc77ea06d43de69e5dc9b7632157eabfc3a6befbd7995e21fcd9c60)\n3154f5431dc77ea06d43de69e5dc9b7632157eabfc3a6befbd7995e21fcd9c60\nSat Nov 29 08:30:44 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62 (3154f5431dc77ea06d43de69e5dc9b7632157eabfc3a6befbd7995e21fcd9c60)\n3154f5431dc77ea06d43de69e5dc9b7632157eabfc3a6befbd7995e21fcd9c60\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:44.530 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[9a7f6f79-1982-42f7-99ec-a773285b869d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:44.531 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb0a6bb86-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:44 np0005539552 nova_compute[233724]: 2025-11-29 08:30:44.532 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:44 np0005539552 kernel: tapb0a6bb86-d0: left promiscuous mode
Nov 29 03:30:44 np0005539552 nova_compute[233724]: 2025-11-29 08:30:44.556 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:44.559 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[dd6555d7-ab59-4690-bc09-4f851f6cdea4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:44 np0005539552 nova_compute[233724]: 2025-11-29 08:30:44.562 233728 DEBUG nova.compute.manager [req-7e52bd0d-e79e-46df-99c7-1f9cae387e42 req-b7b96097-4af4-4ef4-adfe-2c795117460a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Received event network-vif-unplugged-3064c493-912e-4107-90c6-fd25cba7cf44 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:44 np0005539552 nova_compute[233724]: 2025-11-29 08:30:44.563 233728 DEBUG oslo_concurrency.lockutils [req-7e52bd0d-e79e-46df-99c7-1f9cae387e42 req-b7b96097-4af4-4ef4-adfe-2c795117460a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "c6851fde-7355-4735-8410-73aadae465f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:44 np0005539552 nova_compute[233724]: 2025-11-29 08:30:44.563 233728 DEBUG oslo_concurrency.lockutils [req-7e52bd0d-e79e-46df-99c7-1f9cae387e42 req-b7b96097-4af4-4ef4-adfe-2c795117460a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c6851fde-7355-4735-8410-73aadae465f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:44 np0005539552 nova_compute[233724]: 2025-11-29 08:30:44.563 233728 DEBUG oslo_concurrency.lockutils [req-7e52bd0d-e79e-46df-99c7-1f9cae387e42 req-b7b96097-4af4-4ef4-adfe-2c795117460a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c6851fde-7355-4735-8410-73aadae465f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:44 np0005539552 nova_compute[233724]: 2025-11-29 08:30:44.564 233728 DEBUG nova.compute.manager [req-7e52bd0d-e79e-46df-99c7-1f9cae387e42 req-b7b96097-4af4-4ef4-adfe-2c795117460a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] No waiting events found dispatching network-vif-unplugged-3064c493-912e-4107-90c6-fd25cba7cf44 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:30:44 np0005539552 nova_compute[233724]: 2025-11-29 08:30:44.564 233728 WARNING nova.compute.manager [req-7e52bd0d-e79e-46df-99c7-1f9cae387e42 req-b7b96097-4af4-4ef4-adfe-2c795117460a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Received unexpected event network-vif-unplugged-3064c493-912e-4107-90c6-fd25cba7cf44 for instance with vm_state stopped and task_state None.#033[00m
Nov 29 03:30:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:44.578 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[9959cda9-2f4e-487c-9f71-69226136c119]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:44.579 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[9a1cf28f-3a5a-4a22-9cc3-354039fa51d6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:44.596 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[94942a5c-d2af-41ba-8de3-f587cf1ed853]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 806716, 'reachable_time': 15206, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299634, 'error': None, 'target': 'ovnmeta-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:44.599 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:30:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:44.599 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[83763dca-f0c3-4e02-a443-e1d9efcaf7cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:44 np0005539552 systemd[1]: run-netns-ovnmeta\x2db0a6bb86\x2dd4c8\x2d4c3a\x2d8e0d\x2d7fe39083fb62.mount: Deactivated successfully.
Nov 29 03:30:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:30:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:45.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:30:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:30:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:46.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:30:46 np0005539552 nova_compute[233724]: 2025-11-29 08:30:46.681 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:46 np0005539552 nova_compute[233724]: 2025-11-29 08:30:46.759 233728 DEBUG nova.compute.manager [req-bcdbe9a4-ee74-42c0-9260-c22ee95ba463 req-e8520f8f-4495-4205-aa9a-733e9d2a4c37 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Received event network-vif-plugged-3064c493-912e-4107-90c6-fd25cba7cf44 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:46 np0005539552 nova_compute[233724]: 2025-11-29 08:30:46.760 233728 DEBUG oslo_concurrency.lockutils [req-bcdbe9a4-ee74-42c0-9260-c22ee95ba463 req-e8520f8f-4495-4205-aa9a-733e9d2a4c37 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "c6851fde-7355-4735-8410-73aadae465f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:46 np0005539552 nova_compute[233724]: 2025-11-29 08:30:46.760 233728 DEBUG oslo_concurrency.lockutils [req-bcdbe9a4-ee74-42c0-9260-c22ee95ba463 req-e8520f8f-4495-4205-aa9a-733e9d2a4c37 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c6851fde-7355-4735-8410-73aadae465f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:46 np0005539552 nova_compute[233724]: 2025-11-29 08:30:46.760 233728 DEBUG oslo_concurrency.lockutils [req-bcdbe9a4-ee74-42c0-9260-c22ee95ba463 req-e8520f8f-4495-4205-aa9a-733e9d2a4c37 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c6851fde-7355-4735-8410-73aadae465f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:46 np0005539552 nova_compute[233724]: 2025-11-29 08:30:46.761 233728 DEBUG nova.compute.manager [req-bcdbe9a4-ee74-42c0-9260-c22ee95ba463 req-e8520f8f-4495-4205-aa9a-733e9d2a4c37 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] No waiting events found dispatching network-vif-plugged-3064c493-912e-4107-90c6-fd25cba7cf44 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:30:46 np0005539552 nova_compute[233724]: 2025-11-29 08:30:46.761 233728 WARNING nova.compute.manager [req-bcdbe9a4-ee74-42c0-9260-c22ee95ba463 req-e8520f8f-4495-4205-aa9a-733e9d2a4c37 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Received unexpected event network-vif-plugged-3064c493-912e-4107-90c6-fd25cba7cf44 for instance with vm_state stopped and task_state None.#033[00m
Nov 29 03:30:47 np0005539552 nova_compute[233724]: 2025-11-29 08:30:47.288 233728 INFO nova.compute.manager [None req-a10221aa-6a97-47a7-96d5-cae0c837d794 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Get console output#033[00m
Nov 29 03:30:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:30:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:47.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:30:47 np0005539552 nova_compute[233724]: 2025-11-29 08:30:47.457 233728 DEBUG nova.objects.instance [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'flavor' on Instance uuid c6851fde-7355-4735-8410-73aadae465f6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:47 np0005539552 nova_compute[233724]: 2025-11-29 08:30:47.493 233728 DEBUG oslo_concurrency.lockutils [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "refresh_cache-c6851fde-7355-4735-8410-73aadae465f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:30:47 np0005539552 nova_compute[233724]: 2025-11-29 08:30:47.493 233728 DEBUG oslo_concurrency.lockutils [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquired lock "refresh_cache-c6851fde-7355-4735-8410-73aadae465f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:30:47 np0005539552 nova_compute[233724]: 2025-11-29 08:30:47.494 233728 DEBUG nova.network.neutron [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:30:47 np0005539552 nova_compute[233724]: 2025-11-29 08:30:47.494 233728 DEBUG nova.objects.instance [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'info_cache' on Instance uuid c6851fde-7355-4735-8410-73aadae465f6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:47 np0005539552 nova_compute[233724]: 2025-11-29 08:30:47.919 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:30:47 np0005539552 nova_compute[233724]: 2025-11-29 08:30:47.922 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:30:47 np0005539552 nova_compute[233724]: 2025-11-29 08:30:47.922 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:30:47 np0005539552 nova_compute[233724]: 2025-11-29 08:30:47.943 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:30:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:48.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:49 np0005539552 nova_compute[233724]: 2025-11-29 08:30:49.202 233728 DEBUG nova.network.neutron [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Updating instance_info_cache with network_info: [{"id": "3064c493-912e-4107-90c6-fd25cba7cf44", "address": "fa:16:3e:b2:66:0f", "network": {"id": "b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62", "bridge": "br-int", "label": "tempest-network-smoke--1601480889", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3064c493-91", "ovs_interfaceid": "3064c493-912e-4107-90c6-fd25cba7cf44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:30:49 np0005539552 nova_compute[233724]: 2025-11-29 08:30:49.226 233728 DEBUG oslo_concurrency.lockutils [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Releasing lock "refresh_cache-c6851fde-7355-4735-8410-73aadae465f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:30:49 np0005539552 nova_compute[233724]: 2025-11-29 08:30:49.234 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405034.232885, 8ba722d8-f0b0-426b-a972-888ebce61a32 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:30:49 np0005539552 nova_compute[233724]: 2025-11-29 08:30:49.234 233728 INFO nova.compute.manager [-] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:30:49 np0005539552 nova_compute[233724]: 2025-11-29 08:30:49.252 233728 DEBUG nova.compute.manager [None req-9ca0d146-cb4a-4319-9932-df10c3c53795 - - - - - -] [instance: 8ba722d8-f0b0-426b-a972-888ebce61a32] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:30:49 np0005539552 nova_compute[233724]: 2025-11-29 08:30:49.256 233728 INFO nova.virt.libvirt.driver [-] [instance: c6851fde-7355-4735-8410-73aadae465f6] Instance destroyed successfully.#033[00m
Nov 29 03:30:49 np0005539552 nova_compute[233724]: 2025-11-29 08:30:49.256 233728 DEBUG nova.objects.instance [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'numa_topology' on Instance uuid c6851fde-7355-4735-8410-73aadae465f6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:49 np0005539552 nova_compute[233724]: 2025-11-29 08:30:49.271 233728 DEBUG nova.objects.instance [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'resources' on Instance uuid c6851fde-7355-4735-8410-73aadae465f6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:49 np0005539552 nova_compute[233724]: 2025-11-29 08:30:49.278 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:49 np0005539552 nova_compute[233724]: 2025-11-29 08:30:49.281 233728 DEBUG nova.virt.libvirt.vif [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:30:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1997328567',display_name='tempest-TestNetworkAdvancedServerOps-server-1997328567',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1997328567',id=154,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLbTSkSAgnmnPzBXf4q5v2aQuSv4Xg726RvmfA2bFIyqLw5l1ShpVhV1m+XMa6BqUnaJ6e6Oj4O/ixK0Z4BjO5LUyfviwcT1zO/PwUOUOsCHcpb46BmEH+yGI88c1E0nAA==',key_name='tempest-TestNetworkAdvancedServerOps-1084441062',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:30:22Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-vfuyp447',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:30:44Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=c6851fde-7355-4735-8410-73aadae465f6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "3064c493-912e-4107-90c6-fd25cba7cf44", "address": "fa:16:3e:b2:66:0f", "network": {"id": "b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62", "bridge": "br-int", "label": "tempest-network-smoke--1601480889", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3064c493-91", "ovs_interfaceid": "3064c493-912e-4107-90c6-fd25cba7cf44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:30:49 np0005539552 nova_compute[233724]: 2025-11-29 08:30:49.281 233728 DEBUG nova.network.os_vif_util [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converting VIF {"id": "3064c493-912e-4107-90c6-fd25cba7cf44", "address": "fa:16:3e:b2:66:0f", "network": {"id": "b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62", "bridge": "br-int", "label": "tempest-network-smoke--1601480889", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3064c493-91", "ovs_interfaceid": "3064c493-912e-4107-90c6-fd25cba7cf44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:30:49 np0005539552 nova_compute[233724]: 2025-11-29 08:30:49.282 233728 DEBUG nova.network.os_vif_util [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b2:66:0f,bridge_name='br-int',has_traffic_filtering=True,id=3064c493-912e-4107-90c6-fd25cba7cf44,network=Network(b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3064c493-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:30:49 np0005539552 nova_compute[233724]: 2025-11-29 08:30:49.283 233728 DEBUG os_vif [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:66:0f,bridge_name='br-int',has_traffic_filtering=True,id=3064c493-912e-4107-90c6-fd25cba7cf44,network=Network(b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3064c493-91') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:30:49 np0005539552 nova_compute[233724]: 2025-11-29 08:30:49.284 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:49 np0005539552 nova_compute[233724]: 2025-11-29 08:30:49.284 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3064c493-91, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:49 np0005539552 nova_compute[233724]: 2025-11-29 08:30:49.285 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:49 np0005539552 nova_compute[233724]: 2025-11-29 08:30:49.287 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:30:49 np0005539552 nova_compute[233724]: 2025-11-29 08:30:49.288 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:49 np0005539552 nova_compute[233724]: 2025-11-29 08:30:49.290 233728 INFO os_vif [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:66:0f,bridge_name='br-int',has_traffic_filtering=True,id=3064c493-912e-4107-90c6-fd25cba7cf44,network=Network(b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3064c493-91')#033[00m
Nov 29 03:30:49 np0005539552 nova_compute[233724]: 2025-11-29 08:30:49.302 233728 DEBUG nova.virt.libvirt.driver [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Start _get_guest_xml network_info=[{"id": "3064c493-912e-4107-90c6-fd25cba7cf44", "address": "fa:16:3e:b2:66:0f", "network": {"id": "b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62", "bridge": "br-int", "label": "tempest-network-smoke--1601480889", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3064c493-91", "ovs_interfaceid": "3064c493-912e-4107-90c6-fd25cba7cf44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:30:49 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e358 e358: 3 total, 3 up, 3 in
Nov 29 03:30:49 np0005539552 nova_compute[233724]: 2025-11-29 08:30:49.307 233728 WARNING nova.virt.libvirt.driver [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:30:49 np0005539552 nova_compute[233724]: 2025-11-29 08:30:49.316 233728 DEBUG nova.virt.libvirt.host [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:30:49 np0005539552 nova_compute[233724]: 2025-11-29 08:30:49.318 233728 DEBUG nova.virt.libvirt.host [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:30:49 np0005539552 nova_compute[233724]: 2025-11-29 08:30:49.322 233728 DEBUG nova.virt.libvirt.host [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:30:49 np0005539552 nova_compute[233724]: 2025-11-29 08:30:49.323 233728 DEBUG nova.virt.libvirt.host [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:30:49 np0005539552 nova_compute[233724]: 2025-11-29 08:30:49.325 233728 DEBUG nova.virt.libvirt.driver [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:30:49 np0005539552 nova_compute[233724]: 2025-11-29 08:30:49.325 233728 DEBUG nova.virt.hardware [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:30:49 np0005539552 nova_compute[233724]: 2025-11-29 08:30:49.326 233728 DEBUG nova.virt.hardware [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:30:49 np0005539552 nova_compute[233724]: 2025-11-29 08:30:49.326 233728 DEBUG nova.virt.hardware [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:30:49 np0005539552 nova_compute[233724]: 2025-11-29 08:30:49.327 233728 DEBUG nova.virt.hardware [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:30:49 np0005539552 nova_compute[233724]: 2025-11-29 08:30:49.327 233728 DEBUG nova.virt.hardware [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:30:49 np0005539552 nova_compute[233724]: 2025-11-29 08:30:49.328 233728 DEBUG nova.virt.hardware [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:30:49 np0005539552 nova_compute[233724]: 2025-11-29 08:30:49.328 233728 DEBUG nova.virt.hardware [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:30:49 np0005539552 nova_compute[233724]: 2025-11-29 08:30:49.329 233728 DEBUG nova.virt.hardware [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:30:49 np0005539552 nova_compute[233724]: 2025-11-29 08:30:49.329 233728 DEBUG nova.virt.hardware [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:30:49 np0005539552 nova_compute[233724]: 2025-11-29 08:30:49.330 233728 DEBUG nova.virt.hardware [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:30:49 np0005539552 nova_compute[233724]: 2025-11-29 08:30:49.330 233728 DEBUG nova.virt.hardware [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:30:49 np0005539552 nova_compute[233724]: 2025-11-29 08:30:49.331 233728 DEBUG nova.objects.instance [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'vcpu_model' on Instance uuid c6851fde-7355-4735-8410-73aadae465f6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:49 np0005539552 nova_compute[233724]: 2025-11-29 08:30:49.355 233728 DEBUG oslo_concurrency.processutils [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:30:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:49.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:30:49 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:30:49 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4112878891' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:30:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:50.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:50 np0005539552 nova_compute[233724]: 2025-11-29 08:30:50.362 233728 DEBUG oslo_concurrency.processutils [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:50 np0005539552 podman[299678]: 2025-11-29 08:30:50.973098322 +0000 UTC m=+0.055975057 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:30:50 np0005539552 podman[299677]: 2025-11-29 08:30:50.975462715 +0000 UTC m=+0.064981079 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 03:30:51 np0005539552 podman[299679]: 2025-11-29 08:30:51.013595521 +0000 UTC m=+0.094812492 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:30:51 np0005539552 nova_compute[233724]: 2025-11-29 08:30:51.076 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:30:51 np0005539552 nova_compute[233724]: 2025-11-29 08:30:51.085 233728 DEBUG oslo_concurrency.processutils [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:51.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:51 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:30:51 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1228111850' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:30:51 np0005539552 nova_compute[233724]: 2025-11-29 08:30:51.690 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:51 np0005539552 ovn_controller[133798]: 2025-11-29T08:30:51Z|00725|binding|INFO|Releasing lport 42f71355-5b3f-49f9-b3e9-d89b87086d5d from this chassis (sb_readonly=0)
Nov 29 03:30:51 np0005539552 nova_compute[233724]: 2025-11-29 08:30:51.991 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:52 np0005539552 nova_compute[233724]: 2025-11-29 08:30:52.069 233728 DEBUG oslo_concurrency.processutils [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.983s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:52 np0005539552 nova_compute[233724]: 2025-11-29 08:30:52.070 233728 DEBUG nova.virt.libvirt.vif [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:30:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1997328567',display_name='tempest-TestNetworkAdvancedServerOps-server-1997328567',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1997328567',id=154,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLbTSkSAgnmnPzBXf4q5v2aQuSv4Xg726RvmfA2bFIyqLw5l1ShpVhV1m+XMa6BqUnaJ6e6Oj4O/ixK0Z4BjO5LUyfviwcT1zO/PwUOUOsCHcpb46BmEH+yGI88c1E0nAA==',key_name='tempest-TestNetworkAdvancedServerOps-1084441062',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:30:22Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-vfuyp447',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:30:44Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=c6851fde-7355-4735-8410-73aadae465f6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "3064c493-912e-4107-90c6-fd25cba7cf44", "address": "fa:16:3e:b2:66:0f", "network": {"id": "b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62", "bridge": "br-int", "label": "tempest-network-smoke--1601480889", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3064c493-91", "ovs_interfaceid": "3064c493-912e-4107-90c6-fd25cba7cf44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:30:52 np0005539552 nova_compute[233724]: 2025-11-29 08:30:52.070 233728 DEBUG nova.network.os_vif_util [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converting VIF {"id": "3064c493-912e-4107-90c6-fd25cba7cf44", "address": "fa:16:3e:b2:66:0f", "network": {"id": "b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62", "bridge": "br-int", "label": "tempest-network-smoke--1601480889", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3064c493-91", "ovs_interfaceid": "3064c493-912e-4107-90c6-fd25cba7cf44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:30:52 np0005539552 nova_compute[233724]: 2025-11-29 08:30:52.071 233728 DEBUG nova.network.os_vif_util [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b2:66:0f,bridge_name='br-int',has_traffic_filtering=True,id=3064c493-912e-4107-90c6-fd25cba7cf44,network=Network(b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3064c493-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:30:52 np0005539552 nova_compute[233724]: 2025-11-29 08:30:52.072 233728 DEBUG nova.objects.instance [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'pci_devices' on Instance uuid c6851fde-7355-4735-8410-73aadae465f6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:52 np0005539552 nova_compute[233724]: 2025-11-29 08:30:52.085 233728 DEBUG nova.virt.libvirt.driver [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:30:52 np0005539552 nova_compute[233724]:  <uuid>c6851fde-7355-4735-8410-73aadae465f6</uuid>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:  <name>instance-0000009a</name>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:30:52 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1997328567</nova:name>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:30:49</nova:creationTime>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:30:52 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:        <nova:user uuid="fed6803a835e471f9bd60e3236e78e5d">tempest-TestNetworkAdvancedServerOps-274367929-project-member</nova:user>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:        <nova:project uuid="4145ed6cde61439ebcc12fae2609b724">tempest-TestNetworkAdvancedServerOps-274367929</nova:project>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:        <nova:port uuid="3064c493-912e-4107-90c6-fd25cba7cf44">
Nov 29 03:30:52 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:      <entry name="serial">c6851fde-7355-4735-8410-73aadae465f6</entry>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:      <entry name="uuid">c6851fde-7355-4735-8410-73aadae465f6</entry>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:30:52 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/c6851fde-7355-4735-8410-73aadae465f6_disk">
Nov 29 03:30:52 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:30:52 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:30:52 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/c6851fde-7355-4735-8410-73aadae465f6_disk.config">
Nov 29 03:30:52 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:30:52 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:30:52 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:b2:66:0f"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:      <target dev="tap3064c493-91"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:30:52 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/c6851fde-7355-4735-8410-73aadae465f6/console.log" append="off"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <input type="keyboard" bus="usb"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:30:52 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:30:52 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:30:52 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:30:52 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:30:52 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:30:52 np0005539552 nova_compute[233724]: 2025-11-29 08:30:52.087 233728 DEBUG nova.virt.libvirt.driver [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] skipping disk for instance-0000009a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:30:52 np0005539552 nova_compute[233724]: 2025-11-29 08:30:52.088 233728 DEBUG nova.virt.libvirt.driver [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] skipping disk for instance-0000009a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:30:52 np0005539552 nova_compute[233724]: 2025-11-29 08:30:52.088 233728 DEBUG nova.virt.libvirt.vif [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:30:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1997328567',display_name='tempest-TestNetworkAdvancedServerOps-server-1997328567',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1997328567',id=154,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLbTSkSAgnmnPzBXf4q5v2aQuSv4Xg726RvmfA2bFIyqLw5l1ShpVhV1m+XMa6BqUnaJ6e6Oj4O/ixK0Z4BjO5LUyfviwcT1zO/PwUOUOsCHcpb46BmEH+yGI88c1E0nAA==',key_name='tempest-TestNetworkAdvancedServerOps-1084441062',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:30:22Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-vfuyp447',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:30:44Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=c6851fde-7355-4735-8410-73aadae465f6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "3064c493-912e-4107-90c6-fd25cba7cf44", "address": "fa:16:3e:b2:66:0f", "network": {"id": "b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62", "bridge": "br-int", "label": "tempest-network-smoke--1601480889", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3064c493-91", "ovs_interfaceid": "3064c493-912e-4107-90c6-fd25cba7cf44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:30:52 np0005539552 nova_compute[233724]: 2025-11-29 08:30:52.089 233728 DEBUG nova.network.os_vif_util [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converting VIF {"id": "3064c493-912e-4107-90c6-fd25cba7cf44", "address": "fa:16:3e:b2:66:0f", "network": {"id": "b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62", "bridge": "br-int", "label": "tempest-network-smoke--1601480889", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3064c493-91", "ovs_interfaceid": "3064c493-912e-4107-90c6-fd25cba7cf44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:30:52 np0005539552 nova_compute[233724]: 2025-11-29 08:30:52.089 233728 DEBUG nova.network.os_vif_util [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b2:66:0f,bridge_name='br-int',has_traffic_filtering=True,id=3064c493-912e-4107-90c6-fd25cba7cf44,network=Network(b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3064c493-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:30:52 np0005539552 nova_compute[233724]: 2025-11-29 08:30:52.090 233728 DEBUG os_vif [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:66:0f,bridge_name='br-int',has_traffic_filtering=True,id=3064c493-912e-4107-90c6-fd25cba7cf44,network=Network(b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3064c493-91') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:30:52 np0005539552 nova_compute[233724]: 2025-11-29 08:30:52.091 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:52 np0005539552 nova_compute[233724]: 2025-11-29 08:30:52.091 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:52 np0005539552 nova_compute[233724]: 2025-11-29 08:30:52.091 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:30:52 np0005539552 nova_compute[233724]: 2025-11-29 08:30:52.094 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:52 np0005539552 nova_compute[233724]: 2025-11-29 08:30:52.094 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3064c493-91, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:52 np0005539552 nova_compute[233724]: 2025-11-29 08:30:52.095 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3064c493-91, col_values=(('external_ids', {'iface-id': '3064c493-912e-4107-90c6-fd25cba7cf44', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b2:66:0f', 'vm-uuid': 'c6851fde-7355-4735-8410-73aadae465f6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:52 np0005539552 nova_compute[233724]: 2025-11-29 08:30:52.097 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:52 np0005539552 NetworkManager[48926]: <info>  [1764405052.0983] manager: (tap3064c493-91): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/315)
Nov 29 03:30:52 np0005539552 nova_compute[233724]: 2025-11-29 08:30:52.100 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:30:52 np0005539552 nova_compute[233724]: 2025-11-29 08:30:52.201 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:52 np0005539552 nova_compute[233724]: 2025-11-29 08:30:52.203 233728 INFO os_vif [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:66:0f,bridge_name='br-int',has_traffic_filtering=True,id=3064c493-912e-4107-90c6-fd25cba7cf44,network=Network(b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3064c493-91')#033[00m
Nov 29 03:30:52 np0005539552 ovn_controller[133798]: 2025-11-29T08:30:52Z|00726|binding|INFO|Releasing lport 42f71355-5b3f-49f9-b3e9-d89b87086d5d from this chassis (sb_readonly=0)
Nov 29 03:30:52 np0005539552 nova_compute[233724]: 2025-11-29 08:30:52.216 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:52 np0005539552 kernel: tap3064c493-91: entered promiscuous mode
Nov 29 03:30:52 np0005539552 ovn_controller[133798]: 2025-11-29T08:30:52Z|00727|binding|INFO|Claiming lport 3064c493-912e-4107-90c6-fd25cba7cf44 for this chassis.
Nov 29 03:30:52 np0005539552 ovn_controller[133798]: 2025-11-29T08:30:52Z|00728|binding|INFO|3064c493-912e-4107-90c6-fd25cba7cf44: Claiming fa:16:3e:b2:66:0f 10.100.0.13
Nov 29 03:30:52 np0005539552 nova_compute[233724]: 2025-11-29 08:30:52.280 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:52 np0005539552 nova_compute[233724]: 2025-11-29 08:30:52.286 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:52 np0005539552 NetworkManager[48926]: <info>  [1764405052.2875] manager: (tap3064c493-91): new Tun device (/org/freedesktop/NetworkManager/Devices/316)
Nov 29 03:30:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:52.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:52 np0005539552 NetworkManager[48926]: <info>  [1764405052.2905] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/317)
Nov 29 03:30:52 np0005539552 NetworkManager[48926]: <info>  [1764405052.2911] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/318)
Nov 29 03:30:52 np0005539552 nova_compute[233724]: 2025-11-29 08:30:52.289 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:52.295 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:66:0f 10.100.0.13'], port_security=['fa:16:3e:b2:66:0f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'c6851fde-7355-4735-8410-73aadae465f6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4145ed6cde61439ebcc12fae2609b724', 'neutron:revision_number': '5', 'neutron:security_group_ids': '9ada5941-1c85-4f44-ade5-9cc90892652d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.196'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5db73f64-426f-4e0b-98ae-aef18864fc6a, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=3064c493-912e-4107-90c6-fd25cba7cf44) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:52.296 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 3064c493-912e-4107-90c6-fd25cba7cf44 in datapath b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62 bound to our chassis#033[00m
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:52.297 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62#033[00m
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:52.310 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c31723ef-9337-4cb9-a326-5b4a67bf6707]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:52.311 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb0a6bb86-d1 in ovnmeta-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:52.313 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb0a6bb86-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:52.313 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[eaed74a4-4f8a-4f1b-ae7e-4159732312a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:52.314 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0356f277-a63f-44dc-b1ea-4351df8d0735]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:52 np0005539552 systemd-udevd[299774]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:52.326 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[cf0eda8b-c87d-40f9-bf8c-16da409e7d31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:52 np0005539552 NetworkManager[48926]: <info>  [1764405052.3321] device (tap3064c493-91): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:30:52 np0005539552 NetworkManager[48926]: <info>  [1764405052.3330] device (tap3064c493-91): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:30:52 np0005539552 systemd-machined[196379]: New machine qemu-73-instance-0000009a.
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:52.352 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[aff9496f-1672-4f7d-ba4d-1905f0c976da]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:52 np0005539552 systemd[1]: Started Virtual Machine qemu-73-instance-0000009a.
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:52.390 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[21c279db-71d2-41b5-991f-987a9c54e627]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:52 np0005539552 NetworkManager[48926]: <info>  [1764405052.3986] manager: (tapb0a6bb86-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/319)
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:52.397 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ff3ef6be-3566-42dc-836c-4bab4fe4b9b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:52 np0005539552 systemd-udevd[299779]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:30:52 np0005539552 nova_compute[233724]: 2025-11-29 08:30:52.422 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:52 np0005539552 ovn_controller[133798]: 2025-11-29T08:30:52Z|00729|binding|INFO|Releasing lport 42f71355-5b3f-49f9-b3e9-d89b87086d5d from this chassis (sb_readonly=0)
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:52.432 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[877a00bf-273c-4001-a527-de009cf0c78c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:52.437 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[9a1c60b0-2c03-406f-8ee1-2131dd5c6320]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:52 np0005539552 nova_compute[233724]: 2025-11-29 08:30:52.441 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:52 np0005539552 NetworkManager[48926]: <info>  [1764405052.4608] device (tapb0a6bb86-d0): carrier: link connected
Nov 29 03:30:52 np0005539552 ovn_controller[133798]: 2025-11-29T08:30:52Z|00730|binding|INFO|Setting lport 3064c493-912e-4107-90c6-fd25cba7cf44 ovn-installed in OVS
Nov 29 03:30:52 np0005539552 ovn_controller[133798]: 2025-11-29T08:30:52Z|00731|binding|INFO|Setting lport 3064c493-912e-4107-90c6-fd25cba7cf44 up in Southbound
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:52.466 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[518a3c66-0dca-4b77-9461-51aedb148bfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:52 np0005539552 nova_compute[233724]: 2025-11-29 08:30:52.467 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:52.483 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a67cf019-c493-4608-9a3c-7080e498e1e9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb0a6bb86-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:51:27'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 215], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 809814, 'reachable_time': 29013, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299808, 'error': None, 'target': 'ovnmeta-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:52.500 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[500ec6d3-1e8b-400a-84a3-174ee91de087]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe02:5127'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 809814, 'tstamp': 809814}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299809, 'error': None, 'target': 'ovnmeta-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:52.523 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[109f71f9-085e-4e14-a5f9-20cfbcba5e5a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb0a6bb86-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:51:27'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 215], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 809814, 'reachable_time': 29013, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 299810, 'error': None, 'target': 'ovnmeta-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:52.563 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a9585b6b-4102-469f-8834-f68c522109bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:52.643 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2ad67ddf-66b8-434c-ac62-620cd004cfc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:52.645 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb0a6bb86-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:52.646 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:52.647 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb0a6bb86-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:52 np0005539552 nova_compute[233724]: 2025-11-29 08:30:52.649 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:52 np0005539552 NetworkManager[48926]: <info>  [1764405052.6507] manager: (tapb0a6bb86-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/320)
Nov 29 03:30:52 np0005539552 kernel: tapb0a6bb86-d0: entered promiscuous mode
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:52.657 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb0a6bb86-d0, col_values=(('external_ids', {'iface-id': '1405bf80-edb0-434c-bb36-3b4fb078e261'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:52 np0005539552 nova_compute[233724]: 2025-11-29 08:30:52.658 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:52 np0005539552 ovn_controller[133798]: 2025-11-29T08:30:52Z|00732|binding|INFO|Releasing lport 1405bf80-edb0-434c-bb36-3b4fb078e261 from this chassis (sb_readonly=0)
Nov 29 03:30:52 np0005539552 nova_compute[233724]: 2025-11-29 08:30:52.691 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:52.694 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:52.695 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[622fc883-8522-4a27-97a5-726278e3a06f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:52.696 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62.pid.haproxy
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:30:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:30:52.697 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62', 'env', 'PROCESS_TAG=haproxy-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:30:52 np0005539552 nova_compute[233724]: 2025-11-29 08:30:52.856 233728 DEBUG nova.compute.manager [req-7189dba6-4ef3-4efb-a912-aa356a8aafc2 req-f18fda60-2ab2-49bc-b38d-8f809b79f0e4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Received event network-vif-plugged-3064c493-912e-4107-90c6-fd25cba7cf44 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:52 np0005539552 nova_compute[233724]: 2025-11-29 08:30:52.857 233728 DEBUG oslo_concurrency.lockutils [req-7189dba6-4ef3-4efb-a912-aa356a8aafc2 req-f18fda60-2ab2-49bc-b38d-8f809b79f0e4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "c6851fde-7355-4735-8410-73aadae465f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:52 np0005539552 nova_compute[233724]: 2025-11-29 08:30:52.857 233728 DEBUG oslo_concurrency.lockutils [req-7189dba6-4ef3-4efb-a912-aa356a8aafc2 req-f18fda60-2ab2-49bc-b38d-8f809b79f0e4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c6851fde-7355-4735-8410-73aadae465f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:52 np0005539552 nova_compute[233724]: 2025-11-29 08:30:52.858 233728 DEBUG oslo_concurrency.lockutils [req-7189dba6-4ef3-4efb-a912-aa356a8aafc2 req-f18fda60-2ab2-49bc-b38d-8f809b79f0e4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c6851fde-7355-4735-8410-73aadae465f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:52 np0005539552 nova_compute[233724]: 2025-11-29 08:30:52.858 233728 DEBUG nova.compute.manager [req-7189dba6-4ef3-4efb-a912-aa356a8aafc2 req-f18fda60-2ab2-49bc-b38d-8f809b79f0e4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] No waiting events found dispatching network-vif-plugged-3064c493-912e-4107-90c6-fd25cba7cf44 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:30:52 np0005539552 nova_compute[233724]: 2025-11-29 08:30:52.858 233728 WARNING nova.compute.manager [req-7189dba6-4ef3-4efb-a912-aa356a8aafc2 req-f18fda60-2ab2-49bc-b38d-8f809b79f0e4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Received unexpected event network-vif-plugged-3064c493-912e-4107-90c6-fd25cba7cf44 for instance with vm_state stopped and task_state powering-on.#033[00m
Nov 29 03:30:53 np0005539552 nova_compute[233724]: 2025-11-29 08:30:53.021 233728 DEBUG nova.virt.libvirt.host [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Removed pending event for c6851fde-7355-4735-8410-73aadae465f6 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:30:53 np0005539552 nova_compute[233724]: 2025-11-29 08:30:53.021 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405053.0207815, c6851fde-7355-4735-8410-73aadae465f6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:30:53 np0005539552 nova_compute[233724]: 2025-11-29 08:30:53.022 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c6851fde-7355-4735-8410-73aadae465f6] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:30:53 np0005539552 nova_compute[233724]: 2025-11-29 08:30:53.024 233728 DEBUG nova.compute.manager [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:30:53 np0005539552 nova_compute[233724]: 2025-11-29 08:30:53.027 233728 INFO nova.virt.libvirt.driver [-] [instance: c6851fde-7355-4735-8410-73aadae465f6] Instance rebooted successfully.#033[00m
Nov 29 03:30:53 np0005539552 nova_compute[233724]: 2025-11-29 08:30:53.028 233728 DEBUG nova.compute.manager [None req-58abd855-069e-438d-b786-186e055ff041 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:30:53 np0005539552 nova_compute[233724]: 2025-11-29 08:30:53.050 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c6851fde-7355-4735-8410-73aadae465f6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:30:53 np0005539552 nova_compute[233724]: 2025-11-29 08:30:53.053 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c6851fde-7355-4735-8410-73aadae465f6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:30:53 np0005539552 nova_compute[233724]: 2025-11-29 08:30:53.075 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c6851fde-7355-4735-8410-73aadae465f6] During sync_power_state the instance has a pending task (powering-on). Skip.#033[00m
Nov 29 03:30:53 np0005539552 nova_compute[233724]: 2025-11-29 08:30:53.075 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405053.0218382, c6851fde-7355-4735-8410-73aadae465f6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:30:53 np0005539552 nova_compute[233724]: 2025-11-29 08:30:53.075 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c6851fde-7355-4735-8410-73aadae465f6] VM Started (Lifecycle Event)#033[00m
Nov 29 03:30:53 np0005539552 nova_compute[233724]: 2025-11-29 08:30:53.090 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c6851fde-7355-4735-8410-73aadae465f6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:30:53 np0005539552 nova_compute[233724]: 2025-11-29 08:30:53.096 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: c6851fde-7355-4735-8410-73aadae465f6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:30:53 np0005539552 podman[299885]: 2025-11-29 08:30:53.104746206 +0000 UTC m=+0.076199541 container create 84885ef7d61a7e9902446bf4a17ab84c921ba345d6e6d4a65bb0c989933bf3ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:30:53 np0005539552 systemd[1]: Started libpod-conmon-84885ef7d61a7e9902446bf4a17ab84c921ba345d6e6d4a65bb0c989933bf3ae.scope.
Nov 29 03:30:53 np0005539552 podman[299885]: 2025-11-29 08:30:53.067018481 +0000 UTC m=+0.038471876 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:30:53 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:30:53 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bb488ae8577e64966e8103824a2b3f2befcfd988af5fb9c2475980141139cfd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:30:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:53 np0005539552 podman[299885]: 2025-11-29 08:30:53.19149335 +0000 UTC m=+0.162946715 container init 84885ef7d61a7e9902446bf4a17ab84c921ba345d6e6d4a65bb0c989933bf3ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:30:53 np0005539552 podman[299885]: 2025-11-29 08:30:53.196917106 +0000 UTC m=+0.168370431 container start 84885ef7d61a7e9902446bf4a17ab84c921ba345d6e6d4a65bb0c989933bf3ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:30:53 np0005539552 neutron-haproxy-ovnmeta-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62[299902]: [NOTICE]   (299906) : New worker (299908) forked
Nov 29 03:30:53 np0005539552 neutron-haproxy-ovnmeta-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62[299902]: [NOTICE]   (299906) : Loading success.
Nov 29 03:30:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:30:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:53.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:30:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:54.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:54 np0005539552 nova_compute[233724]: 2025-11-29 08:30:54.944 233728 DEBUG nova.compute.manager [req-372018b2-36df-4512-9c81-d1b6492f57c0 req-a5d138be-2e54-422c-aaaa-f92221295e16 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Received event network-vif-plugged-3064c493-912e-4107-90c6-fd25cba7cf44 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:54 np0005539552 nova_compute[233724]: 2025-11-29 08:30:54.945 233728 DEBUG oslo_concurrency.lockutils [req-372018b2-36df-4512-9c81-d1b6492f57c0 req-a5d138be-2e54-422c-aaaa-f92221295e16 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "c6851fde-7355-4735-8410-73aadae465f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:54 np0005539552 nova_compute[233724]: 2025-11-29 08:30:54.946 233728 DEBUG oslo_concurrency.lockutils [req-372018b2-36df-4512-9c81-d1b6492f57c0 req-a5d138be-2e54-422c-aaaa-f92221295e16 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c6851fde-7355-4735-8410-73aadae465f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:54 np0005539552 nova_compute[233724]: 2025-11-29 08:30:54.947 233728 DEBUG oslo_concurrency.lockutils [req-372018b2-36df-4512-9c81-d1b6492f57c0 req-a5d138be-2e54-422c-aaaa-f92221295e16 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c6851fde-7355-4735-8410-73aadae465f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:54 np0005539552 nova_compute[233724]: 2025-11-29 08:30:54.947 233728 DEBUG nova.compute.manager [req-372018b2-36df-4512-9c81-d1b6492f57c0 req-a5d138be-2e54-422c-aaaa-f92221295e16 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] No waiting events found dispatching network-vif-plugged-3064c493-912e-4107-90c6-fd25cba7cf44 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:30:54 np0005539552 nova_compute[233724]: 2025-11-29 08:30:54.948 233728 WARNING nova.compute.manager [req-372018b2-36df-4512-9c81-d1b6492f57c0 req-a5d138be-2e54-422c-aaaa-f92221295e16 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Received unexpected event network-vif-plugged-3064c493-912e-4107-90c6-fd25cba7cf44 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:30:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:55.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:30:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:56.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:30:56 np0005539552 nova_compute[233724]: 2025-11-29 08:30:56.693 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:57 np0005539552 nova_compute[233724]: 2025-11-29 08:30:57.096 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:57.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:30:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:58.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:30:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:30:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:30:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:59.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:31:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:00.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:01.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:01 np0005539552 nova_compute[233724]: 2025-11-29 08:31:01.695 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:02 np0005539552 nova_compute[233724]: 2025-11-29 08:31:02.099 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:02.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:03 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:31:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:03.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:04.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:04 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:31:04 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:31:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:05.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:06.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:06 np0005539552 nova_compute[233724]: 2025-11-29 08:31:06.698 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:06 np0005539552 ovn_controller[133798]: 2025-11-29T08:31:06Z|00093|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b2:66:0f 10.100.0.13
Nov 29 03:31:07 np0005539552 nova_compute[233724]: 2025-11-29 08:31:07.102 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:07 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:07.213 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:31:07 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:07.214 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:31:07 np0005539552 nova_compute[233724]: 2025-11-29 08:31:07.215 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:07.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:08.216 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:08.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:09.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:10.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:11 np0005539552 nova_compute[233724]: 2025-11-29 08:31:11.200 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:11.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:11 np0005539552 nova_compute[233724]: 2025-11-29 08:31:11.701 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:12 np0005539552 nova_compute[233724]: 2025-11-29 08:31:12.104 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:12.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:31:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:13.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:31:13 np0005539552 nova_compute[233724]: 2025-11-29 08:31:13.670 233728 INFO nova.compute.manager [None req-ef30c99a-2b0c-47c4-a9cc-98a3296f5df5 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Get console output#033[00m
Nov 29 03:31:13 np0005539552 nova_compute[233724]: 2025-11-29 08:31:13.679 279702 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:31:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:14.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:14 np0005539552 nova_compute[233724]: 2025-11-29 08:31:14.356 233728 DEBUG nova.compute.manager [req-3409ee8a-3c91-4986-b216-5ef30ae72aab req-40a006f4-160c-4e2a-bd24-2c11fcf0af20 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Received event network-changed-3064c493-912e-4107-90c6-fd25cba7cf44 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:31:14 np0005539552 nova_compute[233724]: 2025-11-29 08:31:14.357 233728 DEBUG nova.compute.manager [req-3409ee8a-3c91-4986-b216-5ef30ae72aab req-40a006f4-160c-4e2a-bd24-2c11fcf0af20 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Refreshing instance network info cache due to event network-changed-3064c493-912e-4107-90c6-fd25cba7cf44. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:31:14 np0005539552 nova_compute[233724]: 2025-11-29 08:31:14.357 233728 DEBUG oslo_concurrency.lockutils [req-3409ee8a-3c91-4986-b216-5ef30ae72aab req-40a006f4-160c-4e2a-bd24-2c11fcf0af20 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-c6851fde-7355-4735-8410-73aadae465f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:31:14 np0005539552 nova_compute[233724]: 2025-11-29 08:31:14.357 233728 DEBUG oslo_concurrency.lockutils [req-3409ee8a-3c91-4986-b216-5ef30ae72aab req-40a006f4-160c-4e2a-bd24-2c11fcf0af20 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-c6851fde-7355-4735-8410-73aadae465f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:31:14 np0005539552 nova_compute[233724]: 2025-11-29 08:31:14.357 233728 DEBUG nova.network.neutron [req-3409ee8a-3c91-4986-b216-5ef30ae72aab req-40a006f4-160c-4e2a-bd24-2c11fcf0af20 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Refreshing network info cache for port 3064c493-912e-4107-90c6-fd25cba7cf44 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:31:14 np0005539552 nova_compute[233724]: 2025-11-29 08:31:14.453 233728 DEBUG oslo_concurrency.lockutils [None req-476d5cb4-ca88-455f-859f-7c6731bdd2c6 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "c6851fde-7355-4735-8410-73aadae465f6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:14 np0005539552 nova_compute[233724]: 2025-11-29 08:31:14.454 233728 DEBUG oslo_concurrency.lockutils [None req-476d5cb4-ca88-455f-859f-7c6731bdd2c6 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "c6851fde-7355-4735-8410-73aadae465f6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:14 np0005539552 nova_compute[233724]: 2025-11-29 08:31:14.455 233728 DEBUG oslo_concurrency.lockutils [None req-476d5cb4-ca88-455f-859f-7c6731bdd2c6 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "c6851fde-7355-4735-8410-73aadae465f6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:14 np0005539552 nova_compute[233724]: 2025-11-29 08:31:14.455 233728 DEBUG oslo_concurrency.lockutils [None req-476d5cb4-ca88-455f-859f-7c6731bdd2c6 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "c6851fde-7355-4735-8410-73aadae465f6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:14 np0005539552 nova_compute[233724]: 2025-11-29 08:31:14.456 233728 DEBUG oslo_concurrency.lockutils [None req-476d5cb4-ca88-455f-859f-7c6731bdd2c6 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "c6851fde-7355-4735-8410-73aadae465f6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:14 np0005539552 nova_compute[233724]: 2025-11-29 08:31:14.458 233728 INFO nova.compute.manager [None req-476d5cb4-ca88-455f-859f-7c6731bdd2c6 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Terminating instance#033[00m
Nov 29 03:31:14 np0005539552 nova_compute[233724]: 2025-11-29 08:31:14.460 233728 DEBUG nova.compute.manager [None req-476d5cb4-ca88-455f-859f-7c6731bdd2c6 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:31:14 np0005539552 kernel: tap3064c493-91 (unregistering): left promiscuous mode
Nov 29 03:31:14 np0005539552 NetworkManager[48926]: <info>  [1764405074.7402] device (tap3064c493-91): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:31:14 np0005539552 nova_compute[233724]: 2025-11-29 08:31:14.754 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:14 np0005539552 ovn_controller[133798]: 2025-11-29T08:31:14Z|00733|binding|INFO|Releasing lport 3064c493-912e-4107-90c6-fd25cba7cf44 from this chassis (sb_readonly=0)
Nov 29 03:31:14 np0005539552 ovn_controller[133798]: 2025-11-29T08:31:14Z|00734|binding|INFO|Setting lport 3064c493-912e-4107-90c6-fd25cba7cf44 down in Southbound
Nov 29 03:31:14 np0005539552 ovn_controller[133798]: 2025-11-29T08:31:14Z|00735|binding|INFO|Removing iface tap3064c493-91 ovn-installed in OVS
Nov 29 03:31:14 np0005539552 nova_compute[233724]: 2025-11-29 08:31:14.756 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:14.763 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:66:0f 10.100.0.13'], port_security=['fa:16:3e:b2:66:0f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'c6851fde-7355-4735-8410-73aadae465f6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4145ed6cde61439ebcc12fae2609b724', 'neutron:revision_number': '6', 'neutron:security_group_ids': '9ada5941-1c85-4f44-ade5-9cc90892652d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5db73f64-426f-4e0b-98ae-aef18864fc6a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=3064c493-912e-4107-90c6-fd25cba7cf44) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:31:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:14.764 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 3064c493-912e-4107-90c6-fd25cba7cf44 in datapath b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62 unbound from our chassis#033[00m
Nov 29 03:31:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:14.766 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:31:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:14.767 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[76542429-400c-42d8-83d8-f5ec8af9bb85]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:14.768 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62 namespace which is not needed anymore#033[00m
Nov 29 03:31:14 np0005539552 nova_compute[233724]: 2025-11-29 08:31:14.769 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:14 np0005539552 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d0000009a.scope: Deactivated successfully.
Nov 29 03:31:14 np0005539552 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d0000009a.scope: Consumed 14.548s CPU time.
Nov 29 03:31:14 np0005539552 systemd-machined[196379]: Machine qemu-73-instance-0000009a terminated.
Nov 29 03:31:14 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:31:14 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2570811076' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:31:14 np0005539552 nova_compute[233724]: 2025-11-29 08:31:14.896 233728 INFO nova.virt.libvirt.driver [-] [instance: c6851fde-7355-4735-8410-73aadae465f6] Instance destroyed successfully.#033[00m
Nov 29 03:31:14 np0005539552 nova_compute[233724]: 2025-11-29 08:31:14.897 233728 DEBUG nova.objects.instance [None req-476d5cb4-ca88-455f-859f-7c6731bdd2c6 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'resources' on Instance uuid c6851fde-7355-4735-8410-73aadae465f6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:31:14 np0005539552 nova_compute[233724]: 2025-11-29 08:31:14.920 233728 DEBUG nova.virt.libvirt.vif [None req-476d5cb4-ca88-455f-859f-7c6731bdd2c6 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:30:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1997328567',display_name='tempest-TestNetworkAdvancedServerOps-server-1997328567',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1997328567',id=154,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLbTSkSAgnmnPzBXf4q5v2aQuSv4Xg726RvmfA2bFIyqLw5l1ShpVhV1m+XMa6BqUnaJ6e6Oj4O/ixK0Z4BjO5LUyfviwcT1zO/PwUOUOsCHcpb46BmEH+yGI88c1E0nAA==',key_name='tempest-TestNetworkAdvancedServerOps-1084441062',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:30:22Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-vfuyp447',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:30:53Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=c6851fde-7355-4735-8410-73aadae465f6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3064c493-912e-4107-90c6-fd25cba7cf44", "address": "fa:16:3e:b2:66:0f", "network": {"id": "b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62", "bridge": "br-int", "label": "tempest-network-smoke--1601480889", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3064c493-91", "ovs_interfaceid": "3064c493-912e-4107-90c6-fd25cba7cf44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:31:14 np0005539552 nova_compute[233724]: 2025-11-29 08:31:14.921 233728 DEBUG nova.network.os_vif_util [None req-476d5cb4-ca88-455f-859f-7c6731bdd2c6 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converting VIF {"id": "3064c493-912e-4107-90c6-fd25cba7cf44", "address": "fa:16:3e:b2:66:0f", "network": {"id": "b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62", "bridge": "br-int", "label": "tempest-network-smoke--1601480889", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3064c493-91", "ovs_interfaceid": "3064c493-912e-4107-90c6-fd25cba7cf44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:31:14 np0005539552 nova_compute[233724]: 2025-11-29 08:31:14.922 233728 DEBUG nova.network.os_vif_util [None req-476d5cb4-ca88-455f-859f-7c6731bdd2c6 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b2:66:0f,bridge_name='br-int',has_traffic_filtering=True,id=3064c493-912e-4107-90c6-fd25cba7cf44,network=Network(b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3064c493-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:31:14 np0005539552 nova_compute[233724]: 2025-11-29 08:31:14.922 233728 DEBUG os_vif [None req-476d5cb4-ca88-455f-859f-7c6731bdd2c6 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:66:0f,bridge_name='br-int',has_traffic_filtering=True,id=3064c493-912e-4107-90c6-fd25cba7cf44,network=Network(b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3064c493-91') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:31:14 np0005539552 nova_compute[233724]: 2025-11-29 08:31:14.924 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:14 np0005539552 nova_compute[233724]: 2025-11-29 08:31:14.925 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3064c493-91, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:14 np0005539552 neutron-haproxy-ovnmeta-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62[299902]: [NOTICE]   (299906) : haproxy version is 2.8.14-c23fe91
Nov 29 03:31:14 np0005539552 neutron-haproxy-ovnmeta-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62[299902]: [NOTICE]   (299906) : path to executable is /usr/sbin/haproxy
Nov 29 03:31:14 np0005539552 neutron-haproxy-ovnmeta-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62[299902]: [WARNING]  (299906) : Exiting Master process...
Nov 29 03:31:14 np0005539552 neutron-haproxy-ovnmeta-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62[299902]: [ALERT]    (299906) : Current worker (299908) exited with code 143 (Terminated)
Nov 29 03:31:14 np0005539552 neutron-haproxy-ovnmeta-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62[299902]: [WARNING]  (299906) : All workers exited. Exiting... (0)
Nov 29 03:31:14 np0005539552 nova_compute[233724]: 2025-11-29 08:31:14.928 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:14 np0005539552 nova_compute[233724]: 2025-11-29 08:31:14.931 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:31:14 np0005539552 systemd[1]: libpod-84885ef7d61a7e9902446bf4a17ab84c921ba345d6e6d4a65bb0c989933bf3ae.scope: Deactivated successfully.
Nov 29 03:31:14 np0005539552 nova_compute[233724]: 2025-11-29 08:31:14.933 233728 INFO os_vif [None req-476d5cb4-ca88-455f-859f-7c6731bdd2c6 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:66:0f,bridge_name='br-int',has_traffic_filtering=True,id=3064c493-912e-4107-90c6-fd25cba7cf44,network=Network(b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3064c493-91')#033[00m
Nov 29 03:31:14 np0005539552 podman[300139]: 2025-11-29 08:31:14.93758833 +0000 UTC m=+0.057740294 container died 84885ef7d61a7e9902446bf4a17ab84c921ba345d6e6d4a65bb0c989933bf3ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:31:14 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-84885ef7d61a7e9902446bf4a17ab84c921ba345d6e6d4a65bb0c989933bf3ae-userdata-shm.mount: Deactivated successfully.
Nov 29 03:31:14 np0005539552 systemd[1]: var-lib-containers-storage-overlay-1bb488ae8577e64966e8103824a2b3f2befcfd988af5fb9c2475980141139cfd-merged.mount: Deactivated successfully.
Nov 29 03:31:14 np0005539552 podman[300139]: 2025-11-29 08:31:14.982893589 +0000 UTC m=+0.103045533 container cleanup 84885ef7d61a7e9902446bf4a17ab84c921ba345d6e6d4a65bb0c989933bf3ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:31:14 np0005539552 systemd[1]: libpod-conmon-84885ef7d61a7e9902446bf4a17ab84c921ba345d6e6d4a65bb0c989933bf3ae.scope: Deactivated successfully.
Nov 29 03:31:15 np0005539552 podman[300198]: 2025-11-29 08:31:15.064451664 +0000 UTC m=+0.058981528 container remove 84885ef7d61a7e9902446bf4a17ab84c921ba345d6e6d4a65bb0c989933bf3ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:31:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:15.071 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[9e430cbd-79a9-4c19-9f05-76f035f15e8c]: (4, ('Sat Nov 29 08:31:14 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62 (84885ef7d61a7e9902446bf4a17ab84c921ba345d6e6d4a65bb0c989933bf3ae)\n84885ef7d61a7e9902446bf4a17ab84c921ba345d6e6d4a65bb0c989933bf3ae\nSat Nov 29 08:31:14 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62 (84885ef7d61a7e9902446bf4a17ab84c921ba345d6e6d4a65bb0c989933bf3ae)\n84885ef7d61a7e9902446bf4a17ab84c921ba345d6e6d4a65bb0c989933bf3ae\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:15.073 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0ec54a8e-ff7e-420d-b017-317da64d72b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:15.075 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb0a6bb86-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:15 np0005539552 kernel: tapb0a6bb86-d0: left promiscuous mode
Nov 29 03:31:15 np0005539552 nova_compute[233724]: 2025-11-29 08:31:15.078 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:15 np0005539552 nova_compute[233724]: 2025-11-29 08:31:15.109 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:15.112 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[72b1b4ed-63fc-4026-acf6-da39a3d8ffe0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:15.129 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a73556dd-6d49-46b4-9989-af4e290d6a1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:15.130 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[bfd71137-94e2-4200-8e34-fb84ec56c96b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:15.154 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[fea5e1d8-df26-4d78-ae14-f84f7bb2cdf3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 809807, 'reachable_time': 16263, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300213, 'error': None, 'target': 'ovnmeta-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:15.159 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:31:15 np0005539552 systemd[1]: run-netns-ovnmeta\x2db0a6bb86\x2dd4c8\x2d4c3a\x2d8e0d\x2d7fe39083fb62.mount: Deactivated successfully.
Nov 29 03:31:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:15.159 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[7453def7-0b36-4440-bc94-10c6eed2fcb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:15 np0005539552 nova_compute[233724]: 2025-11-29 08:31:15.239 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:15.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:16 np0005539552 nova_compute[233724]: 2025-11-29 08:31:16.207 233728 DEBUG nova.network.neutron [req-3409ee8a-3c91-4986-b216-5ef30ae72aab req-40a006f4-160c-4e2a-bd24-2c11fcf0af20 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Updated VIF entry in instance network info cache for port 3064c493-912e-4107-90c6-fd25cba7cf44. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:31:16 np0005539552 nova_compute[233724]: 2025-11-29 08:31:16.208 233728 DEBUG nova.network.neutron [req-3409ee8a-3c91-4986-b216-5ef30ae72aab req-40a006f4-160c-4e2a-bd24-2c11fcf0af20 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Updating instance_info_cache with network_info: [{"id": "3064c493-912e-4107-90c6-fd25cba7cf44", "address": "fa:16:3e:b2:66:0f", "network": {"id": "b0a6bb86-d4c8-4c3a-8e0d-7fe39083fb62", "bridge": "br-int", "label": "tempest-network-smoke--1601480889", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3064c493-91", "ovs_interfaceid": "3064c493-912e-4107-90c6-fd25cba7cf44", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:31:16 np0005539552 nova_compute[233724]: 2025-11-29 08:31:16.229 233728 DEBUG oslo_concurrency.lockutils [req-3409ee8a-3c91-4986-b216-5ef30ae72aab req-40a006f4-160c-4e2a-bd24-2c11fcf0af20 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-c6851fde-7355-4735-8410-73aadae465f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:31:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:16.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:16 np0005539552 nova_compute[233724]: 2025-11-29 08:31:16.470 233728 DEBUG nova.compute.manager [req-b5739584-8fa3-4621-b80a-fd21fead8af8 req-ffc7adcf-30ba-4fd2-9428-9afeb7663802 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Received event network-vif-unplugged-3064c493-912e-4107-90c6-fd25cba7cf44 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:31:16 np0005539552 nova_compute[233724]: 2025-11-29 08:31:16.471 233728 DEBUG oslo_concurrency.lockutils [req-b5739584-8fa3-4621-b80a-fd21fead8af8 req-ffc7adcf-30ba-4fd2-9428-9afeb7663802 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "c6851fde-7355-4735-8410-73aadae465f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:16 np0005539552 nova_compute[233724]: 2025-11-29 08:31:16.471 233728 DEBUG oslo_concurrency.lockutils [req-b5739584-8fa3-4621-b80a-fd21fead8af8 req-ffc7adcf-30ba-4fd2-9428-9afeb7663802 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c6851fde-7355-4735-8410-73aadae465f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:16 np0005539552 nova_compute[233724]: 2025-11-29 08:31:16.472 233728 DEBUG oslo_concurrency.lockutils [req-b5739584-8fa3-4621-b80a-fd21fead8af8 req-ffc7adcf-30ba-4fd2-9428-9afeb7663802 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c6851fde-7355-4735-8410-73aadae465f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:16 np0005539552 nova_compute[233724]: 2025-11-29 08:31:16.473 233728 DEBUG nova.compute.manager [req-b5739584-8fa3-4621-b80a-fd21fead8af8 req-ffc7adcf-30ba-4fd2-9428-9afeb7663802 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] No waiting events found dispatching network-vif-unplugged-3064c493-912e-4107-90c6-fd25cba7cf44 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:31:16 np0005539552 nova_compute[233724]: 2025-11-29 08:31:16.473 233728 DEBUG nova.compute.manager [req-b5739584-8fa3-4621-b80a-fd21fead8af8 req-ffc7adcf-30ba-4fd2-9428-9afeb7663802 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Received event network-vif-unplugged-3064c493-912e-4107-90c6-fd25cba7cf44 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:31:16 np0005539552 nova_compute[233724]: 2025-11-29 08:31:16.474 233728 DEBUG nova.compute.manager [req-b5739584-8fa3-4621-b80a-fd21fead8af8 req-ffc7adcf-30ba-4fd2-9428-9afeb7663802 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Received event network-vif-plugged-3064c493-912e-4107-90c6-fd25cba7cf44 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:31:16 np0005539552 nova_compute[233724]: 2025-11-29 08:31:16.474 233728 DEBUG oslo_concurrency.lockutils [req-b5739584-8fa3-4621-b80a-fd21fead8af8 req-ffc7adcf-30ba-4fd2-9428-9afeb7663802 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "c6851fde-7355-4735-8410-73aadae465f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:16 np0005539552 nova_compute[233724]: 2025-11-29 08:31:16.474 233728 DEBUG oslo_concurrency.lockutils [req-b5739584-8fa3-4621-b80a-fd21fead8af8 req-ffc7adcf-30ba-4fd2-9428-9afeb7663802 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c6851fde-7355-4735-8410-73aadae465f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:16 np0005539552 nova_compute[233724]: 2025-11-29 08:31:16.475 233728 DEBUG oslo_concurrency.lockutils [req-b5739584-8fa3-4621-b80a-fd21fead8af8 req-ffc7adcf-30ba-4fd2-9428-9afeb7663802 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c6851fde-7355-4735-8410-73aadae465f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:16 np0005539552 nova_compute[233724]: 2025-11-29 08:31:16.475 233728 DEBUG nova.compute.manager [req-b5739584-8fa3-4621-b80a-fd21fead8af8 req-ffc7adcf-30ba-4fd2-9428-9afeb7663802 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] No waiting events found dispatching network-vif-plugged-3064c493-912e-4107-90c6-fd25cba7cf44 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:31:16 np0005539552 nova_compute[233724]: 2025-11-29 08:31:16.475 233728 WARNING nova.compute.manager [req-b5739584-8fa3-4621-b80a-fd21fead8af8 req-ffc7adcf-30ba-4fd2-9428-9afeb7663802 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Received unexpected event network-vif-plugged-3064c493-912e-4107-90c6-fd25cba7cf44 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:31:16 np0005539552 nova_compute[233724]: 2025-11-29 08:31:16.704 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:16 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #112. Immutable memtables: 0.
Nov 29 03:31:16 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:31:16.709342) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:31:16 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 112
Nov 29 03:31:16 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405076709407, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 861, "num_deletes": 254, "total_data_size": 1558994, "memory_usage": 1586576, "flush_reason": "Manual Compaction"}
Nov 29 03:31:16 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #113: started
Nov 29 03:31:16 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405076720836, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 113, "file_size": 1027784, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56553, "largest_seqno": 57409, "table_properties": {"data_size": 1023736, "index_size": 1764, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9710, "raw_average_key_size": 20, "raw_value_size": 1015324, "raw_average_value_size": 2110, "num_data_blocks": 77, "num_entries": 481, "num_filter_entries": 481, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405023, "oldest_key_time": 1764405023, "file_creation_time": 1764405076, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:31:16 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 11597 microseconds, and 5412 cpu microseconds.
Nov 29 03:31:16 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:31:16 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:31:16.720937) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #113: 1027784 bytes OK
Nov 29 03:31:16 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:31:16.720981) [db/memtable_list.cc:519] [default] Level-0 commit table #113 started
Nov 29 03:31:16 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:31:16.723316) [db/memtable_list.cc:722] [default] Level-0 commit table #113: memtable #1 done
Nov 29 03:31:16 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:31:16.723332) EVENT_LOG_v1 {"time_micros": 1764405076723326, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:31:16 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:31:16.723350) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:31:16 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 1554489, prev total WAL file size 1554489, number of live WAL files 2.
Nov 29 03:31:16 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000109.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:31:16 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:31:16.724133) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Nov 29 03:31:16 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:31:16 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [113(1003KB)], [111(11MB)]
Nov 29 03:31:16 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405076724193, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [113], "files_L6": [111], "score": -1, "input_data_size": 12865618, "oldest_snapshot_seqno": -1}
Nov 29 03:31:16 np0005539552 nova_compute[233724]: 2025-11-29 08:31:16.726 233728 INFO nova.virt.libvirt.driver [None req-476d5cb4-ca88-455f-859f-7c6731bdd2c6 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Deleting instance files /var/lib/nova/instances/c6851fde-7355-4735-8410-73aadae465f6_del#033[00m
Nov 29 03:31:16 np0005539552 nova_compute[233724]: 2025-11-29 08:31:16.727 233728 INFO nova.virt.libvirt.driver [None req-476d5cb4-ca88-455f-859f-7c6731bdd2c6 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Deletion of /var/lib/nova/instances/c6851fde-7355-4735-8410-73aadae465f6_del complete#033[00m
Nov 29 03:31:16 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #114: 8712 keys, 11015122 bytes, temperature: kUnknown
Nov 29 03:31:16 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405076796748, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 114, "file_size": 11015122, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10959365, "index_size": 32861, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21829, "raw_key_size": 228064, "raw_average_key_size": 26, "raw_value_size": 10806388, "raw_average_value_size": 1240, "num_data_blocks": 1272, "num_entries": 8712, "num_filter_entries": 8712, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764405076, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 114, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:31:16 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:31:16 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:31:16.796983) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 11015122 bytes
Nov 29 03:31:16 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:31:16.798349) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 177.2 rd, 151.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 11.3 +0.0 blob) out(10.5 +0.0 blob), read-write-amplify(23.2) write-amplify(10.7) OK, records in: 9236, records dropped: 524 output_compression: NoCompression
Nov 29 03:31:16 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:31:16.798392) EVENT_LOG_v1 {"time_micros": 1764405076798376, "job": 70, "event": "compaction_finished", "compaction_time_micros": 72616, "compaction_time_cpu_micros": 29443, "output_level": 6, "num_output_files": 1, "total_output_size": 11015122, "num_input_records": 9236, "num_output_records": 8712, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:31:16 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:31:16 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405076798807, "job": 70, "event": "table_file_deletion", "file_number": 113}
Nov 29 03:31:16 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000111.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:31:16 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405076801202, "job": 70, "event": "table_file_deletion", "file_number": 111}
Nov 29 03:31:16 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:31:16.724040) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:31:16 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:31:16.801272) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:31:16 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:31:16.801279) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:31:16 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:31:16.801283) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:31:16 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:31:16.801286) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:31:16 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:31:16.801289) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:31:16 np0005539552 nova_compute[233724]: 2025-11-29 08:31:16.807 233728 INFO nova.compute.manager [None req-476d5cb4-ca88-455f-859f-7c6731bdd2c6 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Took 2.35 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:31:16 np0005539552 nova_compute[233724]: 2025-11-29 08:31:16.807 233728 DEBUG oslo.service.loopingcall [None req-476d5cb4-ca88-455f-859f-7c6731bdd2c6 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:31:16 np0005539552 nova_compute[233724]: 2025-11-29 08:31:16.808 233728 DEBUG nova.compute.manager [-] [instance: c6851fde-7355-4735-8410-73aadae465f6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:31:16 np0005539552 nova_compute[233724]: 2025-11-29 08:31:16.808 233728 DEBUG nova.network.neutron [-] [instance: c6851fde-7355-4735-8410-73aadae465f6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:31:17 np0005539552 nova_compute[233724]: 2025-11-29 08:31:17.296 233728 DEBUG nova.network.neutron [-] [instance: c6851fde-7355-4735-8410-73aadae465f6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:31:17 np0005539552 nova_compute[233724]: 2025-11-29 08:31:17.323 233728 INFO nova.compute.manager [-] [instance: c6851fde-7355-4735-8410-73aadae465f6] Took 0.51 seconds to deallocate network for instance.#033[00m
Nov 29 03:31:17 np0005539552 nova_compute[233724]: 2025-11-29 08:31:17.370 233728 DEBUG oslo_concurrency.lockutils [None req-476d5cb4-ca88-455f-859f-7c6731bdd2c6 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:17 np0005539552 nova_compute[233724]: 2025-11-29 08:31:17.371 233728 DEBUG oslo_concurrency.lockutils [None req-476d5cb4-ca88-455f-859f-7c6731bdd2c6 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:17 np0005539552 nova_compute[233724]: 2025-11-29 08:31:17.407 233728 DEBUG nova.compute.manager [req-32a802d5-4870-4b8f-b6a2-838c9654f758 req-e98df614-1f01-4797-88b1-23b822c35391 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c6851fde-7355-4735-8410-73aadae465f6] Received event network-vif-deleted-3064c493-912e-4107-90c6-fd25cba7cf44 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:31:17 np0005539552 nova_compute[233724]: 2025-11-29 08:31:17.431 233728 DEBUG oslo_concurrency.processutils [None req-476d5cb4-ca88-455f-859f-7c6731bdd2c6 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:17.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:17 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:31:17 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:31:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:31:17 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1019565276' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:31:17 np0005539552 nova_compute[233724]: 2025-11-29 08:31:17.852 233728 DEBUG oslo_concurrency.processutils [None req-476d5cb4-ca88-455f-859f-7c6731bdd2c6 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:17 np0005539552 nova_compute[233724]: 2025-11-29 08:31:17.860 233728 DEBUG nova.compute.provider_tree [None req-476d5cb4-ca88-455f-859f-7c6731bdd2c6 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:31:17 np0005539552 nova_compute[233724]: 2025-11-29 08:31:17.880 233728 DEBUG nova.scheduler.client.report [None req-476d5cb4-ca88-455f-859f-7c6731bdd2c6 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:31:17 np0005539552 nova_compute[233724]: 2025-11-29 08:31:17.899 233728 DEBUG oslo_concurrency.lockutils [None req-476d5cb4-ca88-455f-859f-7c6731bdd2c6 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.528s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:17 np0005539552 nova_compute[233724]: 2025-11-29 08:31:17.919 233728 INFO nova.scheduler.client.report [None req-476d5cb4-ca88-455f-859f-7c6731bdd2c6 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Deleted allocations for instance c6851fde-7355-4735-8410-73aadae465f6#033[00m
Nov 29 03:31:17 np0005539552 nova_compute[233724]: 2025-11-29 08:31:17.996 233728 DEBUG oslo_concurrency.lockutils [None req-476d5cb4-ca88-455f-859f-7c6731bdd2c6 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "c6851fde-7355-4735-8410-73aadae465f6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.542s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:18.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:19.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:19 np0005539552 nova_compute[233724]: 2025-11-29 08:31:19.931 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:20.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:20.637 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:31:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:20.637 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:31:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:20.638 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:31:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:21.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:21 np0005539552 ovn_controller[133798]: 2025-11-29T08:31:21Z|00736|binding|INFO|Releasing lport 42f71355-5b3f-49f9-b3e9-d89b87086d5d from this chassis (sb_readonly=0)
Nov 29 03:31:21 np0005539552 nova_compute[233724]: 2025-11-29 08:31:21.594 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:31:21 np0005539552 nova_compute[233724]: 2025-11-29 08:31:21.798 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:31:21 np0005539552 ovn_controller[133798]: 2025-11-29T08:31:21Z|00737|binding|INFO|Releasing lport 42f71355-5b3f-49f9-b3e9-d89b87086d5d from this chassis (sb_readonly=0)
Nov 29 03:31:21 np0005539552 nova_compute[233724]: 2025-11-29 08:31:21.809 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:31:22 np0005539552 podman[300294]: 2025-11-29 08:31:22.006951756 +0000 UTC m=+0.092709395 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 29 03:31:22 np0005539552 podman[300295]: 2025-11-29 08:31:22.022753741 +0000 UTC m=+0.095589313 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller)
Nov 29 03:31:22 np0005539552 podman[300293]: 2025-11-29 08:31:22.023727017 +0000 UTC m=+0.105558671 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 03:31:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:22.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:23.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:24 np0005539552 ceph-osd[79800]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Nov 29 03:31:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:24.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:24 np0005539552 nova_compute[233724]: 2025-11-29 08:31:24.934 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:31:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:25.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:26.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:26 np0005539552 nova_compute[233724]: 2025-11-29 08:31:26.800 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:31:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:27.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:28.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:29 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:31:29 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3017340376' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:31:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:29.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:29 np0005539552 nova_compute[233724]: 2025-11-29 08:31:29.895 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405074.89409, c6851fde-7355-4735-8410-73aadae465f6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:31:29 np0005539552 nova_compute[233724]: 2025-11-29 08:31:29.896 233728 INFO nova.compute.manager [-] [instance: c6851fde-7355-4735-8410-73aadae465f6] VM Stopped (Lifecycle Event)
Nov 29 03:31:29 np0005539552 nova_compute[233724]: 2025-11-29 08:31:29.916 233728 DEBUG nova.compute.manager [None req-1cd06337-da52-4b72-9c31-2f5f70f9b9ba - - - - - -] [instance: c6851fde-7355-4735-8410-73aadae465f6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:31:29 np0005539552 nova_compute[233724]: 2025-11-29 08:31:29.937 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:31:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:30.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:31.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:31 np0005539552 nova_compute[233724]: 2025-11-29 08:31:31.802 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:31:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:32.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:33.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:34.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:34 np0005539552 nova_compute[233724]: 2025-11-29 08:31:34.942 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:31:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:35.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:36.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:36 np0005539552 nova_compute[233724]: 2025-11-29 08:31:36.508 233728 DEBUG oslo_concurrency.lockutils [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "0728e9c2-5ec4-413b-aca6-7ee6cd645689" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:31:36 np0005539552 nova_compute[233724]: 2025-11-29 08:31:36.509 233728 DEBUG oslo_concurrency.lockutils [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "0728e9c2-5ec4-413b-aca6-7ee6cd645689" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:31:36 np0005539552 nova_compute[233724]: 2025-11-29 08:31:36.524 233728 DEBUG nova.compute.manager [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:31:36 np0005539552 nova_compute[233724]: 2025-11-29 08:31:36.602 233728 DEBUG oslo_concurrency.lockutils [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:31:36 np0005539552 nova_compute[233724]: 2025-11-29 08:31:36.603 233728 DEBUG oslo_concurrency.lockutils [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:31:36 np0005539552 nova_compute[233724]: 2025-11-29 08:31:36.614 233728 DEBUG nova.virt.hardware [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:31:36 np0005539552 nova_compute[233724]: 2025-11-29 08:31:36.615 233728 INFO nova.compute.claims [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Claim successful on node compute-2.ctlplane.example.com
Nov 29 03:31:36 np0005539552 nova_compute[233724]: 2025-11-29 08:31:36.766 233728 DEBUG nova.scheduler.client.report [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Refreshing inventories for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 03:31:36 np0005539552 nova_compute[233724]: 2025-11-29 08:31:36.789 233728 DEBUG nova.scheduler.client.report [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Updating ProviderTree inventory for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 03:31:36 np0005539552 nova_compute[233724]: 2025-11-29 08:31:36.789 233728 DEBUG nova.compute.provider_tree [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Updating inventory in ProviderTree for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 03:31:36 np0005539552 nova_compute[233724]: 2025-11-29 08:31:36.800 233728 DEBUG nova.scheduler.client.report [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Refreshing aggregate associations for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 03:31:36 np0005539552 nova_compute[233724]: 2025-11-29 08:31:36.804 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:31:36 np0005539552 nova_compute[233724]: 2025-11-29 08:31:36.820 233728 DEBUG nova.scheduler.client.report [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Refreshing trait associations for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371, traits: HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_MMX,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_FDC,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 03:31:36 np0005539552 nova_compute[233724]: 2025-11-29 08:31:36.872 233728 DEBUG oslo_concurrency.processutils [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:31:36 np0005539552 nova_compute[233724]: 2025-11-29 08:31:36.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:31:36 np0005539552 nova_compute[233724]: 2025-11-29 08:31:36.949 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:31:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:31:37 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3821788446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:31:37 np0005539552 nova_compute[233724]: 2025-11-29 08:31:37.384 233728 DEBUG oslo_concurrency.processutils [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:31:37 np0005539552 nova_compute[233724]: 2025-11-29 08:31:37.392 233728 DEBUG nova.compute.provider_tree [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:31:37 np0005539552 nova_compute[233724]: 2025-11-29 08:31:37.407 233728 DEBUG nova.scheduler.client.report [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:31:37 np0005539552 nova_compute[233724]: 2025-11-29 08:31:37.433 233728 DEBUG oslo_concurrency.lockutils [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.829s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:31:37 np0005539552 nova_compute[233724]: 2025-11-29 08:31:37.434 233728 DEBUG nova.compute.manager [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:31:37 np0005539552 nova_compute[233724]: 2025-11-29 08:31:37.438 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.489s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:31:37 np0005539552 nova_compute[233724]: 2025-11-29 08:31:37.439 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:31:37 np0005539552 nova_compute[233724]: 2025-11-29 08:31:37.439 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 03:31:37 np0005539552 nova_compute[233724]: 2025-11-29 08:31:37.439 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:31:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:31:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:37.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:31:37 np0005539552 nova_compute[233724]: 2025-11-29 08:31:37.523 233728 DEBUG nova.compute.manager [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:31:37 np0005539552 nova_compute[233724]: 2025-11-29 08:31:37.524 233728 DEBUG nova.network.neutron [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:31:37 np0005539552 nova_compute[233724]: 2025-11-29 08:31:37.548 233728 INFO nova.virt.libvirt.driver [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:31:37 np0005539552 nova_compute[233724]: 2025-11-29 08:31:37.569 233728 DEBUG nova.compute.manager [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:31:37 np0005539552 nova_compute[233724]: 2025-11-29 08:31:37.648 233728 DEBUG nova.compute.manager [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:31:37 np0005539552 nova_compute[233724]: 2025-11-29 08:31:37.651 233728 DEBUG nova.virt.libvirt.driver [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:31:37 np0005539552 nova_compute[233724]: 2025-11-29 08:31:37.651 233728 INFO nova.virt.libvirt.driver [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Creating image(s)
Nov 29 03:31:37 np0005539552 nova_compute[233724]: 2025-11-29 08:31:37.690 233728 DEBUG nova.storage.rbd_utils [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image 0728e9c2-5ec4-413b-aca6-7ee6cd645689_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:31:37 np0005539552 nova_compute[233724]: 2025-11-29 08:31:37.722 233728 DEBUG nova.storage.rbd_utils [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image 0728e9c2-5ec4-413b-aca6-7ee6cd645689_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:31:37 np0005539552 nova_compute[233724]: 2025-11-29 08:31:37.749 233728 DEBUG nova.storage.rbd_utils [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image 0728e9c2-5ec4-413b-aca6-7ee6cd645689_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:31:37 np0005539552 nova_compute[233724]: 2025-11-29 08:31:37.753 233728 DEBUG oslo_concurrency.processutils [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:37 np0005539552 nova_compute[233724]: 2025-11-29 08:31:37.837 233728 DEBUG oslo_concurrency.processutils [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:37 np0005539552 nova_compute[233724]: 2025-11-29 08:31:37.838 233728 DEBUG oslo_concurrency.lockutils [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:37 np0005539552 nova_compute[233724]: 2025-11-29 08:31:37.839 233728 DEBUG oslo_concurrency.lockutils [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:37 np0005539552 nova_compute[233724]: 2025-11-29 08:31:37.839 233728 DEBUG oslo_concurrency.lockutils [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:37 np0005539552 nova_compute[233724]: 2025-11-29 08:31:37.864 233728 DEBUG nova.storage.rbd_utils [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image 0728e9c2-5ec4-413b-aca6-7ee6cd645689_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:31:37 np0005539552 nova_compute[233724]: 2025-11-29 08:31:37.867 233728 DEBUG oslo_concurrency.processutils [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 0728e9c2-5ec4-413b-aca6-7ee6cd645689_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:31:37 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2812527462' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:31:37 np0005539552 nova_compute[233724]: 2025-11-29 08:31:37.911 233728 DEBUG nova.policy [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fed6803a835e471f9bd60e3236e78e5d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4145ed6cde61439ebcc12fae2609b724', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:31:37 np0005539552 nova_compute[233724]: 2025-11-29 08:31:37.915 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:37 np0005539552 nova_compute[233724]: 2025-11-29 08:31:37.986 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:31:37 np0005539552 nova_compute[233724]: 2025-11-29 08:31:37.987 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:31:38 np0005539552 nova_compute[233724]: 2025-11-29 08:31:38.136 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:31:38 np0005539552 nova_compute[233724]: 2025-11-29 08:31:38.137 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4061MB free_disk=20.852489471435547GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:31:38 np0005539552 nova_compute[233724]: 2025-11-29 08:31:38.137 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:38 np0005539552 nova_compute[233724]: 2025-11-29 08:31:38.137 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:38 np0005539552 nova_compute[233724]: 2025-11-29 08:31:38.190 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 101e7b80-d529-4f2a-87df-44512ead5b00 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:31:38 np0005539552 nova_compute[233724]: 2025-11-29 08:31:38.191 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 0728e9c2-5ec4-413b-aca6-7ee6cd645689 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:31:38 np0005539552 nova_compute[233724]: 2025-11-29 08:31:38.191 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:31:38 np0005539552 nova_compute[233724]: 2025-11-29 08:31:38.191 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:31:38 np0005539552 nova_compute[233724]: 2025-11-29 08:31:38.242 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:38.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:38 np0005539552 nova_compute[233724]: 2025-11-29 08:31:38.489 233728 DEBUG oslo_concurrency.processutils [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 0728e9c2-5ec4-413b-aca6-7ee6cd645689_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:38 np0005539552 nova_compute[233724]: 2025-11-29 08:31:38.580 233728 DEBUG nova.storage.rbd_utils [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] resizing rbd image 0728e9c2-5ec4-413b-aca6-7ee6cd645689_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:31:38 np0005539552 nova_compute[233724]: 2025-11-29 08:31:38.716 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:38 np0005539552 nova_compute[233724]: 2025-11-29 08:31:38.724 233728 DEBUG nova.objects.instance [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'migration_context' on Instance uuid 0728e9c2-5ec4-413b-aca6-7ee6cd645689 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:31:38 np0005539552 nova_compute[233724]: 2025-11-29 08:31:38.737 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:31:38 np0005539552 nova_compute[233724]: 2025-11-29 08:31:38.745 233728 DEBUG nova.virt.libvirt.driver [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:31:38 np0005539552 nova_compute[233724]: 2025-11-29 08:31:38.746 233728 DEBUG nova.virt.libvirt.driver [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Ensure instance console log exists: /var/lib/nova/instances/0728e9c2-5ec4-413b-aca6-7ee6cd645689/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:31:38 np0005539552 nova_compute[233724]: 2025-11-29 08:31:38.746 233728 DEBUG oslo_concurrency.lockutils [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:38 np0005539552 nova_compute[233724]: 2025-11-29 08:31:38.747 233728 DEBUG oslo_concurrency.lockutils [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:38 np0005539552 nova_compute[233724]: 2025-11-29 08:31:38.747 233728 DEBUG oslo_concurrency.lockutils [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:38 np0005539552 nova_compute[233724]: 2025-11-29 08:31:38.757 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:31:38 np0005539552 nova_compute[233724]: 2025-11-29 08:31:38.789 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:31:38 np0005539552 nova_compute[233724]: 2025-11-29 08:31:38.789 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:31:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/394469888' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:31:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:31:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/394469888' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:31:39 np0005539552 nova_compute[233724]: 2025-11-29 08:31:39.005 233728 DEBUG nova.network.neutron [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Successfully created port: 1fb50269-75d2-4b10-bc29-bc1d14be7a5e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:31:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:39.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:39 np0005539552 nova_compute[233724]: 2025-11-29 08:31:39.947 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:40 np0005539552 nova_compute[233724]: 2025-11-29 08:31:40.103 233728 DEBUG nova.network.neutron [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Successfully updated port: 1fb50269-75d2-4b10-bc29-bc1d14be7a5e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:31:40 np0005539552 nova_compute[233724]: 2025-11-29 08:31:40.120 233728 DEBUG oslo_concurrency.lockutils [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "refresh_cache-0728e9c2-5ec4-413b-aca6-7ee6cd645689" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:31:40 np0005539552 nova_compute[233724]: 2025-11-29 08:31:40.121 233728 DEBUG oslo_concurrency.lockutils [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquired lock "refresh_cache-0728e9c2-5ec4-413b-aca6-7ee6cd645689" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:31:40 np0005539552 nova_compute[233724]: 2025-11-29 08:31:40.121 233728 DEBUG nova.network.neutron [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:31:40 np0005539552 nova_compute[233724]: 2025-11-29 08:31:40.242 233728 DEBUG nova.compute.manager [req-ceed51a8-272d-4726-b927-df2cdd41aa3c req-4d626ad1-c516-47dc-be96-36671997d4bf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Received event network-changed-1fb50269-75d2-4b10-bc29-bc1d14be7a5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:31:40 np0005539552 nova_compute[233724]: 2025-11-29 08:31:40.243 233728 DEBUG nova.compute.manager [req-ceed51a8-272d-4726-b927-df2cdd41aa3c req-4d626ad1-c516-47dc-be96-36671997d4bf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Refreshing instance network info cache due to event network-changed-1fb50269-75d2-4b10-bc29-bc1d14be7a5e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:31:40 np0005539552 nova_compute[233724]: 2025-11-29 08:31:40.243 233728 DEBUG oslo_concurrency.lockutils [req-ceed51a8-272d-4726-b927-df2cdd41aa3c req-4d626ad1-c516-47dc-be96-36671997d4bf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-0728e9c2-5ec4-413b-aca6-7ee6cd645689" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:31:40 np0005539552 nova_compute[233724]: 2025-11-29 08:31:40.332 233728 DEBUG nova.network.neutron [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:31:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:40.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:41 np0005539552 nova_compute[233724]: 2025-11-29 08:31:41.242 233728 DEBUG nova.network.neutron [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Updating instance_info_cache with network_info: [{"id": "1fb50269-75d2-4b10-bc29-bc1d14be7a5e", "address": "fa:16:3e:8e:5e:c5", "network": {"id": "72c7496c-edc3-4e9f-b7d4-5c83cefc4119", "bridge": "br-int", "label": "tempest-network-smoke--2112416887", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1fb50269-75", "ovs_interfaceid": "1fb50269-75d2-4b10-bc29-bc1d14be7a5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:31:41 np0005539552 nova_compute[233724]: 2025-11-29 08:31:41.260 233728 DEBUG oslo_concurrency.lockutils [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Releasing lock "refresh_cache-0728e9c2-5ec4-413b-aca6-7ee6cd645689" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:31:41 np0005539552 nova_compute[233724]: 2025-11-29 08:31:41.261 233728 DEBUG nova.compute.manager [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Instance network_info: |[{"id": "1fb50269-75d2-4b10-bc29-bc1d14be7a5e", "address": "fa:16:3e:8e:5e:c5", "network": {"id": "72c7496c-edc3-4e9f-b7d4-5c83cefc4119", "bridge": "br-int", "label": "tempest-network-smoke--2112416887", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1fb50269-75", "ovs_interfaceid": "1fb50269-75d2-4b10-bc29-bc1d14be7a5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:31:41 np0005539552 nova_compute[233724]: 2025-11-29 08:31:41.261 233728 DEBUG oslo_concurrency.lockutils [req-ceed51a8-272d-4726-b927-df2cdd41aa3c req-4d626ad1-c516-47dc-be96-36671997d4bf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-0728e9c2-5ec4-413b-aca6-7ee6cd645689" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:31:41 np0005539552 nova_compute[233724]: 2025-11-29 08:31:41.262 233728 DEBUG nova.network.neutron [req-ceed51a8-272d-4726-b927-df2cdd41aa3c req-4d626ad1-c516-47dc-be96-36671997d4bf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Refreshing network info cache for port 1fb50269-75d2-4b10-bc29-bc1d14be7a5e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:31:41 np0005539552 nova_compute[233724]: 2025-11-29 08:31:41.267 233728 DEBUG nova.virt.libvirt.driver [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Start _get_guest_xml network_info=[{"id": "1fb50269-75d2-4b10-bc29-bc1d14be7a5e", "address": "fa:16:3e:8e:5e:c5", "network": {"id": "72c7496c-edc3-4e9f-b7d4-5c83cefc4119", "bridge": "br-int", "label": "tempest-network-smoke--2112416887", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1fb50269-75", "ovs_interfaceid": "1fb50269-75d2-4b10-bc29-bc1d14be7a5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:31:41 np0005539552 nova_compute[233724]: 2025-11-29 08:31:41.274 233728 WARNING nova.virt.libvirt.driver [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:31:41 np0005539552 nova_compute[233724]: 2025-11-29 08:31:41.283 233728 DEBUG nova.virt.libvirt.host [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:31:41 np0005539552 nova_compute[233724]: 2025-11-29 08:31:41.284 233728 DEBUG nova.virt.libvirt.host [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:31:41 np0005539552 nova_compute[233724]: 2025-11-29 08:31:41.289 233728 DEBUG nova.virt.libvirt.host [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:31:41 np0005539552 nova_compute[233724]: 2025-11-29 08:31:41.290 233728 DEBUG nova.virt.libvirt.host [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:31:41 np0005539552 nova_compute[233724]: 2025-11-29 08:31:41.291 233728 DEBUG nova.virt.libvirt.driver [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:31:41 np0005539552 nova_compute[233724]: 2025-11-29 08:31:41.291 233728 DEBUG nova.virt.hardware [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:31:41 np0005539552 nova_compute[233724]: 2025-11-29 08:31:41.292 233728 DEBUG nova.virt.hardware [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:31:41 np0005539552 nova_compute[233724]: 2025-11-29 08:31:41.292 233728 DEBUG nova.virt.hardware [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:31:41 np0005539552 nova_compute[233724]: 2025-11-29 08:31:41.292 233728 DEBUG nova.virt.hardware [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:31:41 np0005539552 nova_compute[233724]: 2025-11-29 08:31:41.292 233728 DEBUG nova.virt.hardware [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:31:41 np0005539552 nova_compute[233724]: 2025-11-29 08:31:41.293 233728 DEBUG nova.virt.hardware [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:31:41 np0005539552 nova_compute[233724]: 2025-11-29 08:31:41.293 233728 DEBUG nova.virt.hardware [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:31:41 np0005539552 nova_compute[233724]: 2025-11-29 08:31:41.293 233728 DEBUG nova.virt.hardware [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:31:41 np0005539552 nova_compute[233724]: 2025-11-29 08:31:41.293 233728 DEBUG nova.virt.hardware [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:31:41 np0005539552 nova_compute[233724]: 2025-11-29 08:31:41.293 233728 DEBUG nova.virt.hardware [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:31:41 np0005539552 nova_compute[233724]: 2025-11-29 08:31:41.294 233728 DEBUG nova.virt.hardware [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:31:41 np0005539552 nova_compute[233724]: 2025-11-29 08:31:41.296 233728 DEBUG oslo_concurrency.processutils [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:41.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:41 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:31:41 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2213307239' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:31:41 np0005539552 nova_compute[233724]: 2025-11-29 08:31:41.799 233728 DEBUG oslo_concurrency.processutils [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:41 np0005539552 nova_compute[233724]: 2025-11-29 08:31:41.830 233728 DEBUG nova.storage.rbd_utils [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image 0728e9c2-5ec4-413b-aca6-7ee6cd645689_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:31:41 np0005539552 nova_compute[233724]: 2025-11-29 08:31:41.834 233728 DEBUG oslo_concurrency.processutils [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:41 np0005539552 nova_compute[233724]: 2025-11-29 08:31:41.880 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:31:42 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1550465445' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:31:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:42.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:42 np0005539552 nova_compute[233724]: 2025-11-29 08:31:42.516 233728 DEBUG oslo_concurrency.processutils [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.682s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:42 np0005539552 nova_compute[233724]: 2025-11-29 08:31:42.518 233728 DEBUG nova.virt.libvirt.vif [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:31:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-372991495',display_name='tempest-TestNetworkAdvancedServerOps-server-372991495',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-372991495',id=160,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBT1pv+IY7jQTScxIMh5OvDUzH662JY1o2XQaNpf6yf2TjBvg2UagoYUJ4t7fuQIr5Qy1J4DOxtOPLrI+zJHAFWB2CqZtlO0XcZKiDQ7qxEMZSSyPr2ai/8DAEe8syPj1w==',key_name='tempest-TestNetworkAdvancedServerOps-1533546898',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-8uo5g4ct',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:31:37Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=0728e9c2-5ec4-413b-aca6-7ee6cd645689,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1fb50269-75d2-4b10-bc29-bc1d14be7a5e", "address": "fa:16:3e:8e:5e:c5", "network": {"id": "72c7496c-edc3-4e9f-b7d4-5c83cefc4119", "bridge": "br-int", "label": "tempest-network-smoke--2112416887", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1fb50269-75", "ovs_interfaceid": "1fb50269-75d2-4b10-bc29-bc1d14be7a5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:31:42 np0005539552 nova_compute[233724]: 2025-11-29 08:31:42.519 233728 DEBUG nova.network.os_vif_util [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converting VIF {"id": "1fb50269-75d2-4b10-bc29-bc1d14be7a5e", "address": "fa:16:3e:8e:5e:c5", "network": {"id": "72c7496c-edc3-4e9f-b7d4-5c83cefc4119", "bridge": "br-int", "label": "tempest-network-smoke--2112416887", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1fb50269-75", "ovs_interfaceid": "1fb50269-75d2-4b10-bc29-bc1d14be7a5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:31:42 np0005539552 nova_compute[233724]: 2025-11-29 08:31:42.520 233728 DEBUG nova.network.os_vif_util [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:5e:c5,bridge_name='br-int',has_traffic_filtering=True,id=1fb50269-75d2-4b10-bc29-bc1d14be7a5e,network=Network(72c7496c-edc3-4e9f-b7d4-5c83cefc4119),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1fb50269-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:31:42 np0005539552 nova_compute[233724]: 2025-11-29 08:31:42.521 233728 DEBUG nova.objects.instance [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0728e9c2-5ec4-413b-aca6-7ee6cd645689 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:31:42 np0005539552 nova_compute[233724]: 2025-11-29 08:31:42.530 233728 DEBUG nova.network.neutron [req-ceed51a8-272d-4726-b927-df2cdd41aa3c req-4d626ad1-c516-47dc-be96-36671997d4bf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Updated VIF entry in instance network info cache for port 1fb50269-75d2-4b10-bc29-bc1d14be7a5e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:31:42 np0005539552 nova_compute[233724]: 2025-11-29 08:31:42.530 233728 DEBUG nova.network.neutron [req-ceed51a8-272d-4726-b927-df2cdd41aa3c req-4d626ad1-c516-47dc-be96-36671997d4bf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Updating instance_info_cache with network_info: [{"id": "1fb50269-75d2-4b10-bc29-bc1d14be7a5e", "address": "fa:16:3e:8e:5e:c5", "network": {"id": "72c7496c-edc3-4e9f-b7d4-5c83cefc4119", "bridge": "br-int", "label": "tempest-network-smoke--2112416887", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1fb50269-75", "ovs_interfaceid": "1fb50269-75d2-4b10-bc29-bc1d14be7a5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:31:42 np0005539552 nova_compute[233724]: 2025-11-29 08:31:42.537 233728 DEBUG nova.virt.libvirt.driver [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:31:42 np0005539552 nova_compute[233724]:  <uuid>0728e9c2-5ec4-413b-aca6-7ee6cd645689</uuid>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:  <name>instance-000000a0</name>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:31:42 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-372991495</nova:name>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:31:41</nova:creationTime>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:31:42 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:        <nova:user uuid="fed6803a835e471f9bd60e3236e78e5d">tempest-TestNetworkAdvancedServerOps-274367929-project-member</nova:user>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:        <nova:project uuid="4145ed6cde61439ebcc12fae2609b724">tempest-TestNetworkAdvancedServerOps-274367929</nova:project>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:        <nova:port uuid="1fb50269-75d2-4b10-bc29-bc1d14be7a5e">
Nov 29 03:31:42 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:      <entry name="serial">0728e9c2-5ec4-413b-aca6-7ee6cd645689</entry>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:      <entry name="uuid">0728e9c2-5ec4-413b-aca6-7ee6cd645689</entry>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:31:42 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/0728e9c2-5ec4-413b-aca6-7ee6cd645689_disk">
Nov 29 03:31:42 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:31:42 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:31:42 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/0728e9c2-5ec4-413b-aca6-7ee6cd645689_disk.config">
Nov 29 03:31:42 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:31:42 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:31:42 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:8e:5e:c5"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:      <target dev="tap1fb50269-75"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:31:42 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/0728e9c2-5ec4-413b-aca6-7ee6cd645689/console.log" append="off"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:31:42 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:31:42 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:31:42 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:31:42 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:31:42 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:31:42 np0005539552 nova_compute[233724]: 2025-11-29 08:31:42.538 233728 DEBUG nova.compute.manager [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Preparing to wait for external event network-vif-plugged-1fb50269-75d2-4b10-bc29-bc1d14be7a5e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:31:42 np0005539552 nova_compute[233724]: 2025-11-29 08:31:42.539 233728 DEBUG oslo_concurrency.lockutils [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "0728e9c2-5ec4-413b-aca6-7ee6cd645689-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:42 np0005539552 nova_compute[233724]: 2025-11-29 08:31:42.539 233728 DEBUG oslo_concurrency.lockutils [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "0728e9c2-5ec4-413b-aca6-7ee6cd645689-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:42 np0005539552 nova_compute[233724]: 2025-11-29 08:31:42.540 233728 DEBUG oslo_concurrency.lockutils [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "0728e9c2-5ec4-413b-aca6-7ee6cd645689-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:42 np0005539552 nova_compute[233724]: 2025-11-29 08:31:42.540 233728 DEBUG nova.virt.libvirt.vif [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:31:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-372991495',display_name='tempest-TestNetworkAdvancedServerOps-server-372991495',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-372991495',id=160,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBT1pv+IY7jQTScxIMh5OvDUzH662JY1o2XQaNpf6yf2TjBvg2UagoYUJ4t7fuQIr5Qy1J4DOxtOPLrI+zJHAFWB2CqZtlO0XcZKiDQ7qxEMZSSyPr2ai/8DAEe8syPj1w==',key_name='tempest-TestNetworkAdvancedServerOps-1533546898',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-8uo5g4ct',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:31:37Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=0728e9c2-5ec4-413b-aca6-7ee6cd645689,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1fb50269-75d2-4b10-bc29-bc1d14be7a5e", "address": "fa:16:3e:8e:5e:c5", "network": {"id": "72c7496c-edc3-4e9f-b7d4-5c83cefc4119", "bridge": "br-int", "label": "tempest-network-smoke--2112416887", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1fb50269-75", "ovs_interfaceid": "1fb50269-75d2-4b10-bc29-bc1d14be7a5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:31:42 np0005539552 nova_compute[233724]: 2025-11-29 08:31:42.541 233728 DEBUG nova.network.os_vif_util [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converting VIF {"id": "1fb50269-75d2-4b10-bc29-bc1d14be7a5e", "address": "fa:16:3e:8e:5e:c5", "network": {"id": "72c7496c-edc3-4e9f-b7d4-5c83cefc4119", "bridge": "br-int", "label": "tempest-network-smoke--2112416887", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1fb50269-75", "ovs_interfaceid": "1fb50269-75d2-4b10-bc29-bc1d14be7a5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:31:42 np0005539552 nova_compute[233724]: 2025-11-29 08:31:42.542 233728 DEBUG nova.network.os_vif_util [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:5e:c5,bridge_name='br-int',has_traffic_filtering=True,id=1fb50269-75d2-4b10-bc29-bc1d14be7a5e,network=Network(72c7496c-edc3-4e9f-b7d4-5c83cefc4119),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1fb50269-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:31:42 np0005539552 nova_compute[233724]: 2025-11-29 08:31:42.542 233728 DEBUG os_vif [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:5e:c5,bridge_name='br-int',has_traffic_filtering=True,id=1fb50269-75d2-4b10-bc29-bc1d14be7a5e,network=Network(72c7496c-edc3-4e9f-b7d4-5c83cefc4119),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1fb50269-75') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:31:42 np0005539552 nova_compute[233724]: 2025-11-29 08:31:42.543 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:42 np0005539552 nova_compute[233724]: 2025-11-29 08:31:42.543 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:42 np0005539552 nova_compute[233724]: 2025-11-29 08:31:42.544 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:31:42 np0005539552 nova_compute[233724]: 2025-11-29 08:31:42.546 233728 DEBUG oslo_concurrency.lockutils [req-ceed51a8-272d-4726-b927-df2cdd41aa3c req-4d626ad1-c516-47dc-be96-36671997d4bf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-0728e9c2-5ec4-413b-aca6-7ee6cd645689" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:31:42 np0005539552 nova_compute[233724]: 2025-11-29 08:31:42.548 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:42 np0005539552 nova_compute[233724]: 2025-11-29 08:31:42.548 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1fb50269-75, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:42 np0005539552 nova_compute[233724]: 2025-11-29 08:31:42.549 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1fb50269-75, col_values=(('external_ids', {'iface-id': '1fb50269-75d2-4b10-bc29-bc1d14be7a5e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8e:5e:c5', 'vm-uuid': '0728e9c2-5ec4-413b-aca6-7ee6cd645689'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:42 np0005539552 nova_compute[233724]: 2025-11-29 08:31:42.551 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:42 np0005539552 NetworkManager[48926]: <info>  [1764405102.5521] manager: (tap1fb50269-75): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/321)
Nov 29 03:31:42 np0005539552 nova_compute[233724]: 2025-11-29 08:31:42.553 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:31:42 np0005539552 nova_compute[233724]: 2025-11-29 08:31:42.559 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:42 np0005539552 nova_compute[233724]: 2025-11-29 08:31:42.560 233728 INFO os_vif [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:5e:c5,bridge_name='br-int',has_traffic_filtering=True,id=1fb50269-75d2-4b10-bc29-bc1d14be7a5e,network=Network(72c7496c-edc3-4e9f-b7d4-5c83cefc4119),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1fb50269-75')#033[00m
Nov 29 03:31:42 np0005539552 nova_compute[233724]: 2025-11-29 08:31:42.622 233728 DEBUG nova.virt.libvirt.driver [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:31:42 np0005539552 nova_compute[233724]: 2025-11-29 08:31:42.622 233728 DEBUG nova.virt.libvirt.driver [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:31:42 np0005539552 nova_compute[233724]: 2025-11-29 08:31:42.623 233728 DEBUG nova.virt.libvirt.driver [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] No VIF found with MAC fa:16:3e:8e:5e:c5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:31:42 np0005539552 nova_compute[233724]: 2025-11-29 08:31:42.623 233728 INFO nova.virt.libvirt.driver [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Using config drive#033[00m
Nov 29 03:31:42 np0005539552 nova_compute[233724]: 2025-11-29 08:31:42.654 233728 DEBUG nova.storage.rbd_utils [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image 0728e9c2-5ec4-413b-aca6-7ee6cd645689_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:31:42 np0005539552 nova_compute[233724]: 2025-11-29 08:31:42.789 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:31:42 np0005539552 nova_compute[233724]: 2025-11-29 08:31:42.790 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:31:42 np0005539552 nova_compute[233724]: 2025-11-29 08:31:42.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:31:42 np0005539552 nova_compute[233724]: 2025-11-29 08:31:42.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:31:43 np0005539552 nova_compute[233724]: 2025-11-29 08:31:43.018 233728 INFO nova.virt.libvirt.driver [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Creating config drive at /var/lib/nova/instances/0728e9c2-5ec4-413b-aca6-7ee6cd645689/disk.config#033[00m
Nov 29 03:31:43 np0005539552 nova_compute[233724]: 2025-11-29 08:31:43.029 233728 DEBUG oslo_concurrency.processutils [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0728e9c2-5ec4-413b-aca6-7ee6cd645689/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp882e92s1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:43 np0005539552 nova_compute[233724]: 2025-11-29 08:31:43.174 233728 DEBUG oslo_concurrency.processutils [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0728e9c2-5ec4-413b-aca6-7ee6cd645689/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp882e92s1" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:43 np0005539552 nova_compute[233724]: 2025-11-29 08:31:43.218 233728 DEBUG nova.storage.rbd_utils [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image 0728e9c2-5ec4-413b-aca6-7ee6cd645689_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:31:43 np0005539552 nova_compute[233724]: 2025-11-29 08:31:43.222 233728 DEBUG oslo_concurrency.processutils [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0728e9c2-5ec4-413b-aca6-7ee6cd645689/disk.config 0728e9c2-5ec4-413b-aca6-7ee6cd645689_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:43 np0005539552 nova_compute[233724]: 2025-11-29 08:31:43.370 233728 DEBUG oslo_concurrency.processutils [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0728e9c2-5ec4-413b-aca6-7ee6cd645689/disk.config 0728e9c2-5ec4-413b-aca6-7ee6cd645689_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:43 np0005539552 nova_compute[233724]: 2025-11-29 08:31:43.371 233728 INFO nova.virt.libvirt.driver [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Deleting local config drive /var/lib/nova/instances/0728e9c2-5ec4-413b-aca6-7ee6cd645689/disk.config because it was imported into RBD.#033[00m
Nov 29 03:31:43 np0005539552 kernel: tap1fb50269-75: entered promiscuous mode
Nov 29 03:31:43 np0005539552 NetworkManager[48926]: <info>  [1764405103.4195] manager: (tap1fb50269-75): new Tun device (/org/freedesktop/NetworkManager/Devices/322)
Nov 29 03:31:43 np0005539552 nova_compute[233724]: 2025-11-29 08:31:43.419 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:43 np0005539552 ovn_controller[133798]: 2025-11-29T08:31:43Z|00738|binding|INFO|Claiming lport 1fb50269-75d2-4b10-bc29-bc1d14be7a5e for this chassis.
Nov 29 03:31:43 np0005539552 ovn_controller[133798]: 2025-11-29T08:31:43Z|00739|binding|INFO|1fb50269-75d2-4b10-bc29-bc1d14be7a5e: Claiming fa:16:3e:8e:5e:c5 10.100.0.10
Nov 29 03:31:43 np0005539552 nova_compute[233724]: 2025-11-29 08:31:43.424 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:43.432 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:5e:c5 10.100.0.10'], port_security=['fa:16:3e:8e:5e:c5 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '0728e9c2-5ec4-413b-aca6-7ee6cd645689', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-72c7496c-edc3-4e9f-b7d4-5c83cefc4119', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4145ed6cde61439ebcc12fae2609b724', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd914ffe7-3081-4101-a85e-37d351e89940', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c9fba1be-70a7-4a1c-89ef-e87d2756a5e7, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=1fb50269-75d2-4b10-bc29-bc1d14be7a5e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:43.433 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 1fb50269-75d2-4b10-bc29-bc1d14be7a5e in datapath 72c7496c-edc3-4e9f-b7d4-5c83cefc4119 bound to our chassis#033[00m
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:43.435 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 72c7496c-edc3-4e9f-b7d4-5c83cefc4119#033[00m
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:43.449 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[8a7fc6f4-7762-47cc-9be2-fbe0121d03eb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:43.450 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap72c7496c-e1 in ovnmeta-72c7496c-edc3-4e9f-b7d4-5c83cefc4119 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:43.452 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap72c7496c-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:43.452 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b1d95a68-c86a-45d7-b5fb-971b56af023c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:43.453 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e91408ad-00fd-4e85-828a-97a3a8cbdb3e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:43 np0005539552 systemd-machined[196379]: New machine qemu-74-instance-000000a0.
Nov 29 03:31:43 np0005539552 systemd-udevd[300838]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:43.466 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[170260cb-6978-42bf-adf1-399ceb3a8968]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:43 np0005539552 systemd[1]: Started Virtual Machine qemu-74-instance-000000a0.
Nov 29 03:31:43 np0005539552 NetworkManager[48926]: <info>  [1764405103.4707] device (tap1fb50269-75): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:31:43 np0005539552 NetworkManager[48926]: <info>  [1764405103.4717] device (tap1fb50269-75): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:43.497 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6b6e86cf-452d-483c-b98a-b35acf05129b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:43.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:43 np0005539552 nova_compute[233724]: 2025-11-29 08:31:43.512 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:43 np0005539552 ovn_controller[133798]: 2025-11-29T08:31:43Z|00740|binding|INFO|Setting lport 1fb50269-75d2-4b10-bc29-bc1d14be7a5e ovn-installed in OVS
Nov 29 03:31:43 np0005539552 ovn_controller[133798]: 2025-11-29T08:31:43Z|00741|binding|INFO|Setting lport 1fb50269-75d2-4b10-bc29-bc1d14be7a5e up in Southbound
Nov 29 03:31:43 np0005539552 nova_compute[233724]: 2025-11-29 08:31:43.515 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:43.531 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[df3d3870-73ed-487c-965c-f6a54d33cc71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:43 np0005539552 systemd-udevd[300841]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:43.536 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6528ccdd-4b97-49df-b652-8d6e0d17cb32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:43 np0005539552 NetworkManager[48926]: <info>  [1764405103.5382] manager: (tap72c7496c-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/323)
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:43.567 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[bf999b21-d19d-4cbb-a97b-6bda9c324061]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:43.570 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[c0e4124b-cefa-49e1-b467-fb2bf520857c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:43 np0005539552 NetworkManager[48926]: <info>  [1764405103.6006] device (tap72c7496c-e0): carrier: link connected
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:43.609 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[13461b66-0adb-4ce7-a4d1-b90749cf1c68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:43.634 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ff15802e-8acd-4755-af06-eacbc4964a02]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap72c7496c-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:53:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 218], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 814928, 'reachable_time': 22436, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300872, 'error': None, 'target': 'ovnmeta-72c7496c-edc3-4e9f-b7d4-5c83cefc4119', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:43.647 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c3d2da64-1b3e-4920-abd0-734af73681d4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef9:5356'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 814928, 'tstamp': 814928}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300873, 'error': None, 'target': 'ovnmeta-72c7496c-edc3-4e9f-b7d4-5c83cefc4119', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:43.671 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[46ff547e-daa6-4cdd-a0af-92c3d39b3bed]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap72c7496c-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:53:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 218], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 814928, 'reachable_time': 22436, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 300874, 'error': None, 'target': 'ovnmeta-72c7496c-edc3-4e9f-b7d4-5c83cefc4119', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:43 np0005539552 nova_compute[233724]: 2025-11-29 08:31:43.689 233728 DEBUG nova.compute.manager [req-55d453b2-a78c-44e2-b669-75860f5cb8a1 req-1247755b-5162-4494-9271-ee0ca53cdfd4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Received event network-vif-plugged-1fb50269-75d2-4b10-bc29-bc1d14be7a5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:31:43 np0005539552 nova_compute[233724]: 2025-11-29 08:31:43.690 233728 DEBUG oslo_concurrency.lockutils [req-55d453b2-a78c-44e2-b669-75860f5cb8a1 req-1247755b-5162-4494-9271-ee0ca53cdfd4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "0728e9c2-5ec4-413b-aca6-7ee6cd645689-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:43 np0005539552 nova_compute[233724]: 2025-11-29 08:31:43.691 233728 DEBUG oslo_concurrency.lockutils [req-55d453b2-a78c-44e2-b669-75860f5cb8a1 req-1247755b-5162-4494-9271-ee0ca53cdfd4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "0728e9c2-5ec4-413b-aca6-7ee6cd645689-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:43 np0005539552 nova_compute[233724]: 2025-11-29 08:31:43.692 233728 DEBUG oslo_concurrency.lockutils [req-55d453b2-a78c-44e2-b669-75860f5cb8a1 req-1247755b-5162-4494-9271-ee0ca53cdfd4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "0728e9c2-5ec4-413b-aca6-7ee6cd645689-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:43 np0005539552 nova_compute[233724]: 2025-11-29 08:31:43.692 233728 DEBUG nova.compute.manager [req-55d453b2-a78c-44e2-b669-75860f5cb8a1 req-1247755b-5162-4494-9271-ee0ca53cdfd4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Processing event network-vif-plugged-1fb50269-75d2-4b10-bc29-bc1d14be7a5e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:43.714 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2146b762-5b0f-4d5f-ae87-a97c5782e322]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:43.763 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ed8d4903-9671-4b4a-96d0-c5e0356d864a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:43.764 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72c7496c-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:43.764 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:43.765 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap72c7496c-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:43 np0005539552 NetworkManager[48926]: <info>  [1764405103.7675] manager: (tap72c7496c-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/324)
Nov 29 03:31:43 np0005539552 kernel: tap72c7496c-e0: entered promiscuous mode
Nov 29 03:31:43 np0005539552 nova_compute[233724]: 2025-11-29 08:31:43.766 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:43.771 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap72c7496c-e0, col_values=(('external_ids', {'iface-id': '950fb150-ff30-4c09-b1fb-acc48272e896'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:43 np0005539552 nova_compute[233724]: 2025-11-29 08:31:43.772 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:43 np0005539552 ovn_controller[133798]: 2025-11-29T08:31:43Z|00742|binding|INFO|Releasing lport 950fb150-ff30-4c09-b1fb-acc48272e896 from this chassis (sb_readonly=0)
Nov 29 03:31:43 np0005539552 nova_compute[233724]: 2025-11-29 08:31:43.786 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:43.787 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/72c7496c-edc3-4e9f-b7d4-5c83cefc4119.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/72c7496c-edc3-4e9f-b7d4-5c83cefc4119.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:43.788 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4013413a-e384-46bb-ae0b-e2ad5dded6e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:43.789 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-72c7496c-edc3-4e9f-b7d4-5c83cefc4119
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/72c7496c-edc3-4e9f-b7d4-5c83cefc4119.pid.haproxy
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 72c7496c-edc3-4e9f-b7d4-5c83cefc4119
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:31:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:43.789 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-72c7496c-edc3-4e9f-b7d4-5c83cefc4119', 'env', 'PROCESS_TAG=haproxy-72c7496c-edc3-4e9f-b7d4-5c83cefc4119', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/72c7496c-edc3-4e9f-b7d4-5c83cefc4119.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:31:43 np0005539552 nova_compute[233724]: 2025-11-29 08:31:43.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:31:43 np0005539552 nova_compute[233724]: 2025-11-29 08:31:43.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:31:43 np0005539552 nova_compute[233724]: 2025-11-29 08:31:43.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:31:43 np0005539552 nova_compute[233724]: 2025-11-29 08:31:43.927 233728 DEBUG nova.compute.manager [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:31:43 np0005539552 nova_compute[233724]: 2025-11-29 08:31:43.928 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405103.9281046, 0728e9c2-5ec4-413b-aca6-7ee6cd645689 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:31:43 np0005539552 nova_compute[233724]: 2025-11-29 08:31:43.928 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] VM Started (Lifecycle Event)#033[00m
Nov 29 03:31:43 np0005539552 nova_compute[233724]: 2025-11-29 08:31:43.933 233728 DEBUG nova.virt.libvirt.driver [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:31:43 np0005539552 nova_compute[233724]: 2025-11-29 08:31:43.936 233728 INFO nova.virt.libvirt.driver [-] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Instance spawned successfully.#033[00m
Nov 29 03:31:43 np0005539552 nova_compute[233724]: 2025-11-29 08:31:43.937 233728 DEBUG nova.virt.libvirt.driver [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:31:43 np0005539552 nova_compute[233724]: 2025-11-29 08:31:43.949 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:31:43 np0005539552 nova_compute[233724]: 2025-11-29 08:31:43.960 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:31:43 np0005539552 nova_compute[233724]: 2025-11-29 08:31:43.965 233728 DEBUG nova.virt.libvirt.driver [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:31:43 np0005539552 nova_compute[233724]: 2025-11-29 08:31:43.965 233728 DEBUG nova.virt.libvirt.driver [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:31:43 np0005539552 nova_compute[233724]: 2025-11-29 08:31:43.966 233728 DEBUG nova.virt.libvirt.driver [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:31:43 np0005539552 nova_compute[233724]: 2025-11-29 08:31:43.966 233728 DEBUG nova.virt.libvirt.driver [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:31:43 np0005539552 nova_compute[233724]: 2025-11-29 08:31:43.967 233728 DEBUG nova.virt.libvirt.driver [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:31:43 np0005539552 nova_compute[233724]: 2025-11-29 08:31:43.967 233728 DEBUG nova.virt.libvirt.driver [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:31:43 np0005539552 nova_compute[233724]: 2025-11-29 08:31:43.996 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:31:43 np0005539552 nova_compute[233724]: 2025-11-29 08:31:43.997 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405103.928857, 0728e9c2-5ec4-413b-aca6-7ee6cd645689 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:31:43 np0005539552 nova_compute[233724]: 2025-11-29 08:31:43.997 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:31:44 np0005539552 nova_compute[233724]: 2025-11-29 08:31:44.024 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:31:44 np0005539552 nova_compute[233724]: 2025-11-29 08:31:44.028 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405103.9319751, 0728e9c2-5ec4-413b-aca6-7ee6cd645689 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:31:44 np0005539552 nova_compute[233724]: 2025-11-29 08:31:44.028 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:31:44 np0005539552 nova_compute[233724]: 2025-11-29 08:31:44.052 233728 INFO nova.compute.manager [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Took 6.40 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:31:44 np0005539552 nova_compute[233724]: 2025-11-29 08:31:44.052 233728 DEBUG nova.compute.manager [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:31:44 np0005539552 nova_compute[233724]: 2025-11-29 08:31:44.092 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:31:44 np0005539552 nova_compute[233724]: 2025-11-29 08:31:44.095 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:31:44 np0005539552 nova_compute[233724]: 2025-11-29 08:31:44.130 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:31:44 np0005539552 nova_compute[233724]: 2025-11-29 08:31:44.150 233728 INFO nova.compute.manager [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Took 7.58 seconds to build instance.#033[00m
Nov 29 03:31:44 np0005539552 podman[300948]: 2025-11-29 08:31:44.162048191 +0000 UTC m=+0.057456767 container create 2bc513200b62e35a5214a06b28351bbd14db88aad2fb0db5fb3babbcb333e186 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72c7496c-edc3-4e9f-b7d4-5c83cefc4119, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:31:44 np0005539552 nova_compute[233724]: 2025-11-29 08:31:44.170 233728 DEBUG oslo_concurrency.lockutils [None req-1fb6a1e0-f526-462f-8720-e99e96839ae9 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "0728e9c2-5ec4-413b-aca6-7ee6cd645689" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:44 np0005539552 systemd[1]: Started libpod-conmon-2bc513200b62e35a5214a06b28351bbd14db88aad2fb0db5fb3babbcb333e186.scope.
Nov 29 03:31:44 np0005539552 podman[300948]: 2025-11-29 08:31:44.132385873 +0000 UTC m=+0.027794449 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:31:44 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:31:44 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbbbca3329ea61f260beb2bd3cc1555b2231ba95a498d361ba7ebc3013e085b0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:31:44 np0005539552 podman[300948]: 2025-11-29 08:31:44.262989817 +0000 UTC m=+0.158398443 container init 2bc513200b62e35a5214a06b28351bbd14db88aad2fb0db5fb3babbcb333e186 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72c7496c-edc3-4e9f-b7d4-5c83cefc4119, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:31:44 np0005539552 podman[300948]: 2025-11-29 08:31:44.269722418 +0000 UTC m=+0.165131024 container start 2bc513200b62e35a5214a06b28351bbd14db88aad2fb0db5fb3babbcb333e186 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72c7496c-edc3-4e9f-b7d4-5c83cefc4119, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:31:44 np0005539552 neutron-haproxy-ovnmeta-72c7496c-edc3-4e9f-b7d4-5c83cefc4119[300961]: [NOTICE]   (300965) : New worker (300967) forked
Nov 29 03:31:44 np0005539552 neutron-haproxy-ovnmeta-72c7496c-edc3-4e9f-b7d4-5c83cefc4119[300961]: [NOTICE]   (300965) : Loading success.
Nov 29 03:31:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:44.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:45.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:45 np0005539552 nova_compute[233724]: 2025-11-29 08:31:45.957 233728 DEBUG nova.compute.manager [req-d345e062-1bc9-43a1-8114-87e40bbde9c3 req-92546296-158a-4978-8c24-31d1f232372e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Received event network-vif-plugged-1fb50269-75d2-4b10-bc29-bc1d14be7a5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:31:45 np0005539552 nova_compute[233724]: 2025-11-29 08:31:45.958 233728 DEBUG oslo_concurrency.lockutils [req-d345e062-1bc9-43a1-8114-87e40bbde9c3 req-92546296-158a-4978-8c24-31d1f232372e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "0728e9c2-5ec4-413b-aca6-7ee6cd645689-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:45 np0005539552 nova_compute[233724]: 2025-11-29 08:31:45.959 233728 DEBUG oslo_concurrency.lockutils [req-d345e062-1bc9-43a1-8114-87e40bbde9c3 req-92546296-158a-4978-8c24-31d1f232372e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "0728e9c2-5ec4-413b-aca6-7ee6cd645689-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:45 np0005539552 nova_compute[233724]: 2025-11-29 08:31:45.959 233728 DEBUG oslo_concurrency.lockutils [req-d345e062-1bc9-43a1-8114-87e40bbde9c3 req-92546296-158a-4978-8c24-31d1f232372e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "0728e9c2-5ec4-413b-aca6-7ee6cd645689-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:45 np0005539552 nova_compute[233724]: 2025-11-29 08:31:45.959 233728 DEBUG nova.compute.manager [req-d345e062-1bc9-43a1-8114-87e40bbde9c3 req-92546296-158a-4978-8c24-31d1f232372e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] No waiting events found dispatching network-vif-plugged-1fb50269-75d2-4b10-bc29-bc1d14be7a5e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:31:45 np0005539552 nova_compute[233724]: 2025-11-29 08:31:45.960 233728 WARNING nova.compute.manager [req-d345e062-1bc9-43a1-8114-87e40bbde9c3 req-92546296-158a-4978-8c24-31d1f232372e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Received unexpected event network-vif-plugged-1fb50269-75d2-4b10-bc29-bc1d14be7a5e for instance with vm_state active and task_state None.#033[00m
Nov 29 03:31:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:46.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:46 np0005539552 NetworkManager[48926]: <info>  [1764405106.6551] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/325)
Nov 29 03:31:46 np0005539552 NetworkManager[48926]: <info>  [1764405106.6566] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/326)
Nov 29 03:31:46 np0005539552 nova_compute[233724]: 2025-11-29 08:31:46.664 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:46 np0005539552 nova_compute[233724]: 2025-11-29 08:31:46.822 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:46 np0005539552 nova_compute[233724]: 2025-11-29 08:31:46.826 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:46 np0005539552 ovn_controller[133798]: 2025-11-29T08:31:46Z|00743|binding|INFO|Releasing lport 42f71355-5b3f-49f9-b3e9-d89b87086d5d from this chassis (sb_readonly=0)
Nov 29 03:31:46 np0005539552 ovn_controller[133798]: 2025-11-29T08:31:46Z|00744|binding|INFO|Releasing lport 950fb150-ff30-4c09-b1fb-acc48272e896 from this chassis (sb_readonly=0)
Nov 29 03:31:46 np0005539552 nova_compute[233724]: 2025-11-29 08:31:46.845 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:47.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:47 np0005539552 nova_compute[233724]: 2025-11-29 08:31:47.550 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:47 np0005539552 nova_compute[233724]: 2025-11-29 08:31:47.919 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:31:48 np0005539552 nova_compute[233724]: 2025-11-29 08:31:48.099 233728 DEBUG nova.compute.manager [req-6a7071a7-35d5-45cd-8de2-7a3ef92edbb6 req-4d695fa0-5ca3-4c0e-ae55-e5ba2d66b10f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Received event network-changed-1fb50269-75d2-4b10-bc29-bc1d14be7a5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:31:48 np0005539552 nova_compute[233724]: 2025-11-29 08:31:48.100 233728 DEBUG nova.compute.manager [req-6a7071a7-35d5-45cd-8de2-7a3ef92edbb6 req-4d695fa0-5ca3-4c0e-ae55-e5ba2d66b10f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Refreshing instance network info cache due to event network-changed-1fb50269-75d2-4b10-bc29-bc1d14be7a5e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:31:48 np0005539552 nova_compute[233724]: 2025-11-29 08:31:48.100 233728 DEBUG oslo_concurrency.lockutils [req-6a7071a7-35d5-45cd-8de2-7a3ef92edbb6 req-4d695fa0-5ca3-4c0e-ae55-e5ba2d66b10f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-0728e9c2-5ec4-413b-aca6-7ee6cd645689" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:31:48 np0005539552 nova_compute[233724]: 2025-11-29 08:31:48.100 233728 DEBUG oslo_concurrency.lockutils [req-6a7071a7-35d5-45cd-8de2-7a3ef92edbb6 req-4d695fa0-5ca3-4c0e-ae55-e5ba2d66b10f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-0728e9c2-5ec4-413b-aca6-7ee6cd645689" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:31:48 np0005539552 nova_compute[233724]: 2025-11-29 08:31:48.100 233728 DEBUG nova.network.neutron [req-6a7071a7-35d5-45cd-8de2-7a3ef92edbb6 req-4d695fa0-5ca3-4c0e-ae55-e5ba2d66b10f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Refreshing network info cache for port 1fb50269-75d2-4b10-bc29-bc1d14be7a5e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:31:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:48.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:48 np0005539552 nova_compute[233724]: 2025-11-29 08:31:48.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:31:48 np0005539552 nova_compute[233724]: 2025-11-29 08:31:48.925 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:31:48 np0005539552 nova_compute[233724]: 2025-11-29 08:31:48.925 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:31:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:49.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:49 np0005539552 nova_compute[233724]: 2025-11-29 08:31:49.529 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "refresh_cache-101e7b80-d529-4f2a-87df-44512ead5b00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:31:49 np0005539552 nova_compute[233724]: 2025-11-29 08:31:49.529 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquired lock "refresh_cache-101e7b80-d529-4f2a-87df-44512ead5b00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:31:49 np0005539552 nova_compute[233724]: 2025-11-29 08:31:49.530 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:31:49 np0005539552 nova_compute[233724]: 2025-11-29 08:31:49.531 233728 DEBUG nova.objects.instance [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 101e7b80-d529-4f2a-87df-44512ead5b00 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:31:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:50.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:50 np0005539552 nova_compute[233724]: 2025-11-29 08:31:50.919 233728 DEBUG nova.network.neutron [req-6a7071a7-35d5-45cd-8de2-7a3ef92edbb6 req-4d695fa0-5ca3-4c0e-ae55-e5ba2d66b10f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Updated VIF entry in instance network info cache for port 1fb50269-75d2-4b10-bc29-bc1d14be7a5e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:31:50 np0005539552 nova_compute[233724]: 2025-11-29 08:31:50.921 233728 DEBUG nova.network.neutron [req-6a7071a7-35d5-45cd-8de2-7a3ef92edbb6 req-4d695fa0-5ca3-4c0e-ae55-e5ba2d66b10f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Updating instance_info_cache with network_info: [{"id": "1fb50269-75d2-4b10-bc29-bc1d14be7a5e", "address": "fa:16:3e:8e:5e:c5", "network": {"id": "72c7496c-edc3-4e9f-b7d4-5c83cefc4119", "bridge": "br-int", "label": "tempest-network-smoke--2112416887", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1fb50269-75", "ovs_interfaceid": "1fb50269-75d2-4b10-bc29-bc1d14be7a5e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:31:50 np0005539552 nova_compute[233724]: 2025-11-29 08:31:50.942 233728 DEBUG oslo_concurrency.lockutils [req-6a7071a7-35d5-45cd-8de2-7a3ef92edbb6 req-4d695fa0-5ca3-4c0e-ae55-e5ba2d66b10f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-0728e9c2-5ec4-413b-aca6-7ee6cd645689" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:31:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:51.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:51 np0005539552 nova_compute[233724]: 2025-11-29 08:31:51.831 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:52.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:52 np0005539552 nova_compute[233724]: 2025-11-29 08:31:52.552 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:52 np0005539552 nova_compute[233724]: 2025-11-29 08:31:52.954 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Updating instance_info_cache with network_info: [{"id": "41f8db2f-85ae-4916-91a0-fedefca2c76e", "address": "fa:16:3e:02:b7:ca", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41f8db2f-85", "ovs_interfaceid": "41f8db2f-85ae-4916-91a0-fedefca2c76e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:31:52 np0005539552 nova_compute[233724]: 2025-11-29 08:31:52.968 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Releasing lock "refresh_cache-101e7b80-d529-4f2a-87df-44512ead5b00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:31:52 np0005539552 nova_compute[233724]: 2025-11-29 08:31:52.969 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:31:52 np0005539552 podman[300985]: 2025-11-29 08:31:52.986756026 +0000 UTC m=+0.068466623 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 29 03:31:52 np0005539552 podman[300984]: 2025-11-29 08:31:52.993565559 +0000 UTC m=+0.077087315 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 03:31:53 np0005539552 podman[300986]: 2025-11-29 08:31:53.025592051 +0000 UTC m=+0.105774617 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:31:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:53.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:54.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:55.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:56.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:56 np0005539552 nova_compute[233724]: 2025-11-29 08:31:56.832 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:57.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:57 np0005539552 ovn_controller[133798]: 2025-11-29T08:31:57Z|00094|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8e:5e:c5 10.100.0.10
Nov 29 03:31:57 np0005539552 ovn_controller[133798]: 2025-11-29T08:31:57Z|00095|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8e:5e:c5 10.100.0.10
Nov 29 03:31:57 np0005539552 nova_compute[233724]: 2025-11-29 08:31:57.554 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:58.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:58 np0005539552 nova_compute[233724]: 2025-11-29 08:31:58.543 233728 DEBUG oslo_concurrency.lockutils [None req-297b8896-a162-49fa-b7c3-20d076a8b909 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "101e7b80-d529-4f2a-87df-44512ead5b00" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:58 np0005539552 nova_compute[233724]: 2025-11-29 08:31:58.544 233728 DEBUG oslo_concurrency.lockutils [None req-297b8896-a162-49fa-b7c3-20d076a8b909 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "101e7b80-d529-4f2a-87df-44512ead5b00" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:58 np0005539552 nova_compute[233724]: 2025-11-29 08:31:58.544 233728 DEBUG oslo_concurrency.lockutils [None req-297b8896-a162-49fa-b7c3-20d076a8b909 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "101e7b80-d529-4f2a-87df-44512ead5b00-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:58 np0005539552 nova_compute[233724]: 2025-11-29 08:31:58.545 233728 DEBUG oslo_concurrency.lockutils [None req-297b8896-a162-49fa-b7c3-20d076a8b909 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "101e7b80-d529-4f2a-87df-44512ead5b00-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:58 np0005539552 nova_compute[233724]: 2025-11-29 08:31:58.545 233728 DEBUG oslo_concurrency.lockutils [None req-297b8896-a162-49fa-b7c3-20d076a8b909 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "101e7b80-d529-4f2a-87df-44512ead5b00-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:58 np0005539552 nova_compute[233724]: 2025-11-29 08:31:58.547 233728 INFO nova.compute.manager [None req-297b8896-a162-49fa-b7c3-20d076a8b909 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Terminating instance#033[00m
Nov 29 03:31:58 np0005539552 nova_compute[233724]: 2025-11-29 08:31:58.549 233728 DEBUG nova.compute.manager [None req-297b8896-a162-49fa-b7c3-20d076a8b909 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:31:58 np0005539552 kernel: tap41f8db2f-85 (unregistering): left promiscuous mode
Nov 29 03:31:58 np0005539552 NetworkManager[48926]: <info>  [1764405118.6115] device (tap41f8db2f-85): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:31:58 np0005539552 ovn_controller[133798]: 2025-11-29T08:31:58Z|00745|binding|INFO|Releasing lport 41f8db2f-85ae-4916-91a0-fedefca2c76e from this chassis (sb_readonly=0)
Nov 29 03:31:58 np0005539552 nova_compute[233724]: 2025-11-29 08:31:58.629 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:58 np0005539552 ovn_controller[133798]: 2025-11-29T08:31:58Z|00746|binding|INFO|Setting lport 41f8db2f-85ae-4916-91a0-fedefca2c76e down in Southbound
Nov 29 03:31:58 np0005539552 ovn_controller[133798]: 2025-11-29T08:31:58Z|00747|binding|INFO|Removing iface tap41f8db2f-85 ovn-installed in OVS
Nov 29 03:31:58 np0005539552 nova_compute[233724]: 2025-11-29 08:31:58.632 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:58.637 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:b7:ca 10.100.0.5'], port_security=['fa:16:3e:02:b7:ca 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '101e7b80-d529-4f2a-87df-44512ead5b00', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14ea2b48-9984-443b-82fc-568ae98723fc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5970d12b2c42419e889cd48de28c4b86', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1f4c15e1-3db4-4257-8a40-7ffdc4076590', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=deb2b192-93f0-4938-a0e1-77284f619a46, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=41f8db2f-85ae-4916-91a0-fedefca2c76e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:31:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:58.639 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 41f8db2f-85ae-4916-91a0-fedefca2c76e in datapath 14ea2b48-9984-443b-82fc-568ae98723fc unbound from our chassis#033[00m
Nov 29 03:31:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:58.640 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 14ea2b48-9984-443b-82fc-568ae98723fc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:31:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:58.642 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ba942f1f-74f5-4e5b-989c-a1996ab2d104]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:58.643 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc namespace which is not needed anymore#033[00m
Nov 29 03:31:58 np0005539552 nova_compute[233724]: 2025-11-29 08:31:58.664 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:58 np0005539552 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d00000093.scope: Deactivated successfully.
Nov 29 03:31:58 np0005539552 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d00000093.scope: Consumed 21.558s CPU time.
Nov 29 03:31:58 np0005539552 systemd-machined[196379]: Machine qemu-69-instance-00000093 terminated.
Nov 29 03:31:58 np0005539552 nova_compute[233724]: 2025-11-29 08:31:58.774 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:58 np0005539552 nova_compute[233724]: 2025-11-29 08:31:58.780 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:58 np0005539552 nova_compute[233724]: 2025-11-29 08:31:58.790 233728 INFO nova.virt.libvirt.driver [-] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Instance destroyed successfully.#033[00m
Nov 29 03:31:58 np0005539552 nova_compute[233724]: 2025-11-29 08:31:58.790 233728 DEBUG nova.objects.instance [None req-297b8896-a162-49fa-b7c3-20d076a8b909 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lazy-loading 'resources' on Instance uuid 101e7b80-d529-4f2a-87df-44512ead5b00 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:31:58 np0005539552 neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc[296695]: [NOTICE]   (296699) : haproxy version is 2.8.14-c23fe91
Nov 29 03:31:58 np0005539552 neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc[296695]: [NOTICE]   (296699) : path to executable is /usr/sbin/haproxy
Nov 29 03:31:58 np0005539552 neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc[296695]: [WARNING]  (296699) : Exiting Master process...
Nov 29 03:31:58 np0005539552 neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc[296695]: [WARNING]  (296699) : Exiting Master process...
Nov 29 03:31:58 np0005539552 neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc[296695]: [ALERT]    (296699) : Current worker (296701) exited with code 143 (Terminated)
Nov 29 03:31:58 np0005539552 neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc[296695]: [WARNING]  (296699) : All workers exited. Exiting... (0)
Nov 29 03:31:58 np0005539552 systemd[1]: libpod-6b939d1c566d08ee86a333a4843c3759d3bca4e7e8e437c4927e27348cf24322.scope: Deactivated successfully.
Nov 29 03:31:58 np0005539552 nova_compute[233724]: 2025-11-29 08:31:58.812 233728 DEBUG nova.virt.libvirt.vif [None req-297b8896-a162-49fa-b7c3-20d076a8b909 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:28:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-₡-594734656',display_name='tempest-₡-594734656',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest--594734656',id=147,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:29:06Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5970d12b2c42419e889cd48de28c4b86',ramdisk_id='',reservation_id='r-lqhdbprg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-150957448
8',owner_user_name='tempest-ServersTestJSON-1509574488-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:29:06Z,user_data=None,user_id='0741d46905e94415a372bd62751dff66',uuid=101e7b80-d529-4f2a-87df-44512ead5b00,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "41f8db2f-85ae-4916-91a0-fedefca2c76e", "address": "fa:16:3e:02:b7:ca", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41f8db2f-85", "ovs_interfaceid": "41f8db2f-85ae-4916-91a0-fedefca2c76e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:31:58 np0005539552 nova_compute[233724]: 2025-11-29 08:31:58.813 233728 DEBUG nova.network.os_vif_util [None req-297b8896-a162-49fa-b7c3-20d076a8b909 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Converting VIF {"id": "41f8db2f-85ae-4916-91a0-fedefca2c76e", "address": "fa:16:3e:02:b7:ca", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41f8db2f-85", "ovs_interfaceid": "41f8db2f-85ae-4916-91a0-fedefca2c76e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:31:58 np0005539552 nova_compute[233724]: 2025-11-29 08:31:58.813 233728 DEBUG nova.network.os_vif_util [None req-297b8896-a162-49fa-b7c3-20d076a8b909 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:02:b7:ca,bridge_name='br-int',has_traffic_filtering=True,id=41f8db2f-85ae-4916-91a0-fedefca2c76e,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41f8db2f-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:31:58 np0005539552 nova_compute[233724]: 2025-11-29 08:31:58.814 233728 DEBUG os_vif [None req-297b8896-a162-49fa-b7c3-20d076a8b909 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:02:b7:ca,bridge_name='br-int',has_traffic_filtering=True,id=41f8db2f-85ae-4916-91a0-fedefca2c76e,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41f8db2f-85') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:31:58 np0005539552 nova_compute[233724]: 2025-11-29 08:31:58.815 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:58 np0005539552 nova_compute[233724]: 2025-11-29 08:31:58.815 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41f8db2f-85, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:58 np0005539552 nova_compute[233724]: 2025-11-29 08:31:58.817 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:58 np0005539552 nova_compute[233724]: 2025-11-29 08:31:58.819 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:31:58 np0005539552 podman[301069]: 2025-11-29 08:31:58.819722585 +0000 UTC m=+0.066513530 container died 6b939d1c566d08ee86a333a4843c3759d3bca4e7e8e437c4927e27348cf24322 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:31:58 np0005539552 nova_compute[233724]: 2025-11-29 08:31:58.824 233728 INFO os_vif [None req-297b8896-a162-49fa-b7c3-20d076a8b909 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:02:b7:ca,bridge_name='br-int',has_traffic_filtering=True,id=41f8db2f-85ae-4916-91a0-fedefca2c76e,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41f8db2f-85')#033[00m
Nov 29 03:31:58 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6b939d1c566d08ee86a333a4843c3759d3bca4e7e8e437c4927e27348cf24322-userdata-shm.mount: Deactivated successfully.
Nov 29 03:31:58 np0005539552 systemd[1]: var-lib-containers-storage-overlay-560a9a27e62ef0431177964e4874761eba8a850e56c313ced87e439b5cfaec5e-merged.mount: Deactivated successfully.
Nov 29 03:31:58 np0005539552 podman[301069]: 2025-11-29 08:31:58.870791349 +0000 UTC m=+0.117582254 container cleanup 6b939d1c566d08ee86a333a4843c3759d3bca4e7e8e437c4927e27348cf24322 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:31:58 np0005539552 systemd[1]: libpod-conmon-6b939d1c566d08ee86a333a4843c3759d3bca4e7e8e437c4927e27348cf24322.scope: Deactivated successfully.
Nov 29 03:31:58 np0005539552 nova_compute[233724]: 2025-11-29 08:31:58.911 233728 DEBUG nova.compute.manager [req-438dc95b-8f48-4f48-9514-c770690e7b44 req-399a966c-1fb2-47de-9455-27d69e464347 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Received event network-vif-unplugged-41f8db2f-85ae-4916-91a0-fedefca2c76e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:31:58 np0005539552 nova_compute[233724]: 2025-11-29 08:31:58.911 233728 DEBUG oslo_concurrency.lockutils [req-438dc95b-8f48-4f48-9514-c770690e7b44 req-399a966c-1fb2-47de-9455-27d69e464347 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "101e7b80-d529-4f2a-87df-44512ead5b00-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:58 np0005539552 nova_compute[233724]: 2025-11-29 08:31:58.912 233728 DEBUG oslo_concurrency.lockutils [req-438dc95b-8f48-4f48-9514-c770690e7b44 req-399a966c-1fb2-47de-9455-27d69e464347 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "101e7b80-d529-4f2a-87df-44512ead5b00-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:58 np0005539552 nova_compute[233724]: 2025-11-29 08:31:58.912 233728 DEBUG oslo_concurrency.lockutils [req-438dc95b-8f48-4f48-9514-c770690e7b44 req-399a966c-1fb2-47de-9455-27d69e464347 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "101e7b80-d529-4f2a-87df-44512ead5b00-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:58 np0005539552 nova_compute[233724]: 2025-11-29 08:31:58.912 233728 DEBUG nova.compute.manager [req-438dc95b-8f48-4f48-9514-c770690e7b44 req-399a966c-1fb2-47de-9455-27d69e464347 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] No waiting events found dispatching network-vif-unplugged-41f8db2f-85ae-4916-91a0-fedefca2c76e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:31:58 np0005539552 nova_compute[233724]: 2025-11-29 08:31:58.913 233728 DEBUG nova.compute.manager [req-438dc95b-8f48-4f48-9514-c770690e7b44 req-399a966c-1fb2-47de-9455-27d69e464347 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Received event network-vif-unplugged-41f8db2f-85ae-4916-91a0-fedefca2c76e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:31:58 np0005539552 podman[301127]: 2025-11-29 08:31:58.943714501 +0000 UTC m=+0.050236522 container remove 6b939d1c566d08ee86a333a4843c3759d3bca4e7e8e437c4927e27348cf24322 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:31:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:58.948 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[170aa806-aadf-4ba0-9bc9-e7f06943af58]: (4, ('Sat Nov 29 08:31:58 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc (6b939d1c566d08ee86a333a4843c3759d3bca4e7e8e437c4927e27348cf24322)\n6b939d1c566d08ee86a333a4843c3759d3bca4e7e8e437c4927e27348cf24322\nSat Nov 29 08:31:58 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc (6b939d1c566d08ee86a333a4843c3759d3bca4e7e8e437c4927e27348cf24322)\n6b939d1c566d08ee86a333a4843c3759d3bca4e7e8e437c4927e27348cf24322\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:58.950 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[8f609509-8efe-4e11-bd1c-185ca1967be5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:58.951 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14ea2b48-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:58 np0005539552 nova_compute[233724]: 2025-11-29 08:31:58.954 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:58 np0005539552 kernel: tap14ea2b48-90: left promiscuous mode
Nov 29 03:31:58 np0005539552 nova_compute[233724]: 2025-11-29 08:31:58.972 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:58.975 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0e149c73-5b58-4d14-a263-d93e327de215]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:58.996 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[faa4f63d-b470-464c-b550-8d269778073b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:58.997 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d731e1d7-324f-4143-890c-62db909e6bc6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:59.021 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[5a4204b9-6f7b-4d9a-afc6-8fea8b287e72]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 799129, 'reachable_time': 28398, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301142, 'error': None, 'target': 'ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:59.024 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:31:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:31:59.024 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[e451fb4f-cdb6-4a0e-bd33-476d7aa0cac7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:59 np0005539552 systemd[1]: run-netns-ovnmeta\x2d14ea2b48\x2d9984\x2d443b\x2d82fc\x2d568ae98723fc.mount: Deactivated successfully.
Nov 29 03:31:59 np0005539552 nova_compute[233724]: 2025-11-29 08:31:59.217 233728 INFO nova.virt.libvirt.driver [None req-297b8896-a162-49fa-b7c3-20d076a8b909 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Deleting instance files /var/lib/nova/instances/101e7b80-d529-4f2a-87df-44512ead5b00_del#033[00m
Nov 29 03:31:59 np0005539552 nova_compute[233724]: 2025-11-29 08:31:59.218 233728 INFO nova.virt.libvirt.driver [None req-297b8896-a162-49fa-b7c3-20d076a8b909 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Deletion of /var/lib/nova/instances/101e7b80-d529-4f2a-87df-44512ead5b00_del complete#033[00m
Nov 29 03:31:59 np0005539552 nova_compute[233724]: 2025-11-29 08:31:59.277 233728 INFO nova.compute.manager [None req-297b8896-a162-49fa-b7c3-20d076a8b909 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:31:59 np0005539552 nova_compute[233724]: 2025-11-29 08:31:59.278 233728 DEBUG oslo.service.loopingcall [None req-297b8896-a162-49fa-b7c3-20d076a8b909 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:31:59 np0005539552 nova_compute[233724]: 2025-11-29 08:31:59.278 233728 DEBUG nova.compute.manager [-] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:31:59 np0005539552 nova_compute[233724]: 2025-11-29 08:31:59.279 233728 DEBUG nova.network.neutron [-] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:31:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:31:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:59.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:00.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:01 np0005539552 nova_compute[233724]: 2025-11-29 08:32:01.181 233728 DEBUG nova.compute.manager [req-7f3143c3-615a-4bdf-9a84-f11755879dea req-5b453f72-2fa9-4718-9716-76ad679a9ce4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Received event network-vif-plugged-41f8db2f-85ae-4916-91a0-fedefca2c76e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:32:01 np0005539552 nova_compute[233724]: 2025-11-29 08:32:01.181 233728 DEBUG oslo_concurrency.lockutils [req-7f3143c3-615a-4bdf-9a84-f11755879dea req-5b453f72-2fa9-4718-9716-76ad679a9ce4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "101e7b80-d529-4f2a-87df-44512ead5b00-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:01 np0005539552 nova_compute[233724]: 2025-11-29 08:32:01.181 233728 DEBUG oslo_concurrency.lockutils [req-7f3143c3-615a-4bdf-9a84-f11755879dea req-5b453f72-2fa9-4718-9716-76ad679a9ce4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "101e7b80-d529-4f2a-87df-44512ead5b00-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:01 np0005539552 nova_compute[233724]: 2025-11-29 08:32:01.181 233728 DEBUG oslo_concurrency.lockutils [req-7f3143c3-615a-4bdf-9a84-f11755879dea req-5b453f72-2fa9-4718-9716-76ad679a9ce4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "101e7b80-d529-4f2a-87df-44512ead5b00-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:01 np0005539552 nova_compute[233724]: 2025-11-29 08:32:01.182 233728 DEBUG nova.compute.manager [req-7f3143c3-615a-4bdf-9a84-f11755879dea req-5b453f72-2fa9-4718-9716-76ad679a9ce4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] No waiting events found dispatching network-vif-plugged-41f8db2f-85ae-4916-91a0-fedefca2c76e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:32:01 np0005539552 nova_compute[233724]: 2025-11-29 08:32:01.182 233728 WARNING nova.compute.manager [req-7f3143c3-615a-4bdf-9a84-f11755879dea req-5b453f72-2fa9-4718-9716-76ad679a9ce4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Received unexpected event network-vif-plugged-41f8db2f-85ae-4916-91a0-fedefca2c76e for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:32:01 np0005539552 nova_compute[233724]: 2025-11-29 08:32:01.195 233728 DEBUG nova.network.neutron [-] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:32:01 np0005539552 nova_compute[233724]: 2025-11-29 08:32:01.213 233728 INFO nova.compute.manager [-] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Took 1.93 seconds to deallocate network for instance.#033[00m
Nov 29 03:32:01 np0005539552 nova_compute[233724]: 2025-11-29 08:32:01.262 233728 DEBUG oslo_concurrency.lockutils [None req-297b8896-a162-49fa-b7c3-20d076a8b909 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:01 np0005539552 nova_compute[233724]: 2025-11-29 08:32:01.262 233728 DEBUG oslo_concurrency.lockutils [None req-297b8896-a162-49fa-b7c3-20d076a8b909 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:01 np0005539552 nova_compute[233724]: 2025-11-29 08:32:01.325 233728 DEBUG oslo_concurrency.processutils [None req-297b8896-a162-49fa-b7c3-20d076a8b909 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:32:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:01.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:32:01 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:32:01 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/696573775' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:32:01 np0005539552 nova_compute[233724]: 2025-11-29 08:32:01.773 233728 DEBUG oslo_concurrency.processutils [None req-297b8896-a162-49fa-b7c3-20d076a8b909 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:01 np0005539552 nova_compute[233724]: 2025-11-29 08:32:01.781 233728 DEBUG nova.compute.provider_tree [None req-297b8896-a162-49fa-b7c3-20d076a8b909 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:32:01 np0005539552 nova_compute[233724]: 2025-11-29 08:32:01.802 233728 DEBUG nova.scheduler.client.report [None req-297b8896-a162-49fa-b7c3-20d076a8b909 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:32:01 np0005539552 nova_compute[233724]: 2025-11-29 08:32:01.835 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:01 np0005539552 nova_compute[233724]: 2025-11-29 08:32:01.840 233728 DEBUG oslo_concurrency.lockutils [None req-297b8896-a162-49fa-b7c3-20d076a8b909 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:01 np0005539552 nova_compute[233724]: 2025-11-29 08:32:01.864 233728 INFO nova.scheduler.client.report [None req-297b8896-a162-49fa-b7c3-20d076a8b909 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Deleted allocations for instance 101e7b80-d529-4f2a-87df-44512ead5b00#033[00m
Nov 29 03:32:01 np0005539552 nova_compute[233724]: 2025-11-29 08:32:01.942 233728 DEBUG oslo_concurrency.lockutils [None req-297b8896-a162-49fa-b7c3-20d076a8b909 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "101e7b80-d529-4f2a-87df-44512ead5b00" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.399s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:02.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:02 np0005539552 nova_compute[233724]: 2025-11-29 08:32:02.440 233728 INFO nova.compute.manager [None req-a1680325-84eb-45d1-872c-789a02e84e93 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Get console output#033[00m
Nov 29 03:32:02 np0005539552 nova_compute[233724]: 2025-11-29 08:32:02.447 279702 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:32:02 np0005539552 nova_compute[233724]: 2025-11-29 08:32:02.751 233728 DEBUG nova.objects.instance [None req-1c495288-630f-4d7c-b825-82758077eca1 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0728e9c2-5ec4-413b-aca6-7ee6cd645689 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:32:02 np0005539552 nova_compute[233724]: 2025-11-29 08:32:02.779 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405122.778575, 0728e9c2-5ec4-413b-aca6-7ee6cd645689 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:32:02 np0005539552 nova_compute[233724]: 2025-11-29 08:32:02.779 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:32:02 np0005539552 nova_compute[233724]: 2025-11-29 08:32:02.823 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:32:02 np0005539552 nova_compute[233724]: 2025-11-29 08:32:02.829 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:32:02 np0005539552 nova_compute[233724]: 2025-11-29 08:32:02.857 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Nov 29 03:32:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:03 np0005539552 nova_compute[233724]: 2025-11-29 08:32:03.274 233728 DEBUG nova.compute.manager [req-67f0d533-b991-48e2-b5d6-eda373a140b8 req-b4378963-e7c7-4538-ab67-1930f9f6a058 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Received event network-vif-deleted-41f8db2f-85ae-4916-91a0-fedefca2c76e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:32:03 np0005539552 kernel: tap1fb50269-75 (unregistering): left promiscuous mode
Nov 29 03:32:03 np0005539552 NetworkManager[48926]: <info>  [1764405123.5130] device (tap1fb50269-75): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:32:03 np0005539552 ovn_controller[133798]: 2025-11-29T08:32:03Z|00748|binding|INFO|Releasing lport 1fb50269-75d2-4b10-bc29-bc1d14be7a5e from this chassis (sb_readonly=0)
Nov 29 03:32:03 np0005539552 ovn_controller[133798]: 2025-11-29T08:32:03Z|00749|binding|INFO|Setting lport 1fb50269-75d2-4b10-bc29-bc1d14be7a5e down in Southbound
Nov 29 03:32:03 np0005539552 nova_compute[233724]: 2025-11-29 08:32:03.518 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:03 np0005539552 ovn_controller[133798]: 2025-11-29T08:32:03Z|00750|binding|INFO|Removing iface tap1fb50269-75 ovn-installed in OVS
Nov 29 03:32:03 np0005539552 nova_compute[233724]: 2025-11-29 08:32:03.520 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:03.525 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:5e:c5 10.100.0.10'], port_security=['fa:16:3e:8e:5e:c5 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '0728e9c2-5ec4-413b-aca6-7ee6cd645689', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-72c7496c-edc3-4e9f-b7d4-5c83cefc4119', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4145ed6cde61439ebcc12fae2609b724', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd914ffe7-3081-4101-a85e-37d351e89940', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.197'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c9fba1be-70a7-4a1c-89ef-e87d2756a5e7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=1fb50269-75d2-4b10-bc29-bc1d14be7a5e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:32:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:03.527 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 1fb50269-75d2-4b10-bc29-bc1d14be7a5e in datapath 72c7496c-edc3-4e9f-b7d4-5c83cefc4119 unbound from our chassis#033[00m
Nov 29 03:32:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:03.529 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 72c7496c-edc3-4e9f-b7d4-5c83cefc4119, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:32:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:03.531 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[26b816e5-4f8d-480d-b391-2fb76959358c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:03.531 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-72c7496c-edc3-4e9f-b7d4-5c83cefc4119 namespace which is not needed anymore#033[00m
Nov 29 03:32:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:03.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:03 np0005539552 nova_compute[233724]: 2025-11-29 08:32:03.556 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:03 np0005539552 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d000000a0.scope: Deactivated successfully.
Nov 29 03:32:03 np0005539552 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d000000a0.scope: Consumed 14.268s CPU time.
Nov 29 03:32:03 np0005539552 systemd-machined[196379]: Machine qemu-74-instance-000000a0 terminated.
Nov 29 03:32:03 np0005539552 neutron-haproxy-ovnmeta-72c7496c-edc3-4e9f-b7d4-5c83cefc4119[300961]: [NOTICE]   (300965) : haproxy version is 2.8.14-c23fe91
Nov 29 03:32:03 np0005539552 neutron-haproxy-ovnmeta-72c7496c-edc3-4e9f-b7d4-5c83cefc4119[300961]: [NOTICE]   (300965) : path to executable is /usr/sbin/haproxy
Nov 29 03:32:03 np0005539552 neutron-haproxy-ovnmeta-72c7496c-edc3-4e9f-b7d4-5c83cefc4119[300961]: [WARNING]  (300965) : Exiting Master process...
Nov 29 03:32:03 np0005539552 nova_compute[233724]: 2025-11-29 08:32:03.685 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:03 np0005539552 neutron-haproxy-ovnmeta-72c7496c-edc3-4e9f-b7d4-5c83cefc4119[300961]: [ALERT]    (300965) : Current worker (300967) exited with code 143 (Terminated)
Nov 29 03:32:03 np0005539552 neutron-haproxy-ovnmeta-72c7496c-edc3-4e9f-b7d4-5c83cefc4119[300961]: [WARNING]  (300965) : All workers exited. Exiting... (0)
Nov 29 03:32:03 np0005539552 systemd[1]: libpod-2bc513200b62e35a5214a06b28351bbd14db88aad2fb0db5fb3babbcb333e186.scope: Deactivated successfully.
Nov 29 03:32:03 np0005539552 nova_compute[233724]: 2025-11-29 08:32:03.693 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:03 np0005539552 podman[301246]: 2025-11-29 08:32:03.694652178 +0000 UTC m=+0.064074675 container died 2bc513200b62e35a5214a06b28351bbd14db88aad2fb0db5fb3babbcb333e186 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72c7496c-edc3-4e9f-b7d4-5c83cefc4119, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:32:03 np0005539552 nova_compute[233724]: 2025-11-29 08:32:03.699 233728 DEBUG nova.compute.manager [None req-1c495288-630f-4d7c-b825-82758077eca1 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:32:03 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2bc513200b62e35a5214a06b28351bbd14db88aad2fb0db5fb3babbcb333e186-userdata-shm.mount: Deactivated successfully.
Nov 29 03:32:03 np0005539552 systemd[1]: var-lib-containers-storage-overlay-cbbbca3329ea61f260beb2bd3cc1555b2231ba95a498d361ba7ebc3013e085b0-merged.mount: Deactivated successfully.
Nov 29 03:32:03 np0005539552 podman[301246]: 2025-11-29 08:32:03.735637601 +0000 UTC m=+0.105060118 container cleanup 2bc513200b62e35a5214a06b28351bbd14db88aad2fb0db5fb3babbcb333e186 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72c7496c-edc3-4e9f-b7d4-5c83cefc4119, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:32:03 np0005539552 systemd[1]: libpod-conmon-2bc513200b62e35a5214a06b28351bbd14db88aad2fb0db5fb3babbcb333e186.scope: Deactivated successfully.
Nov 29 03:32:03 np0005539552 nova_compute[233724]: 2025-11-29 08:32:03.817 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:03 np0005539552 podman[301283]: 2025-11-29 08:32:03.835294322 +0000 UTC m=+0.075607505 container remove 2bc513200b62e35a5214a06b28351bbd14db88aad2fb0db5fb3babbcb333e186 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72c7496c-edc3-4e9f-b7d4-5c83cefc4119, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:32:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:03.843 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c79db6f1-f75e-4dc1-8678-8c72fb7f13ec]: (4, ('Sat Nov 29 08:32:03 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-72c7496c-edc3-4e9f-b7d4-5c83cefc4119 (2bc513200b62e35a5214a06b28351bbd14db88aad2fb0db5fb3babbcb333e186)\n2bc513200b62e35a5214a06b28351bbd14db88aad2fb0db5fb3babbcb333e186\nSat Nov 29 08:32:03 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-72c7496c-edc3-4e9f-b7d4-5c83cefc4119 (2bc513200b62e35a5214a06b28351bbd14db88aad2fb0db5fb3babbcb333e186)\n2bc513200b62e35a5214a06b28351bbd14db88aad2fb0db5fb3babbcb333e186\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:03.846 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d2acdecf-0e8b-40ea-91d2-44d501ab83bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:03.847 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72c7496c-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:32:03 np0005539552 nova_compute[233724]: 2025-11-29 08:32:03.850 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:03 np0005539552 kernel: tap72c7496c-e0: left promiscuous mode
Nov 29 03:32:03 np0005539552 nova_compute[233724]: 2025-11-29 08:32:03.862 233728 DEBUG nova.compute.manager [req-532e4c67-b877-411a-b032-f41d26ac3f2d req-08e735b6-b69a-4cd2-999e-3d7b4260c0a9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Received event network-vif-unplugged-1fb50269-75d2-4b10-bc29-bc1d14be7a5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:32:03 np0005539552 nova_compute[233724]: 2025-11-29 08:32:03.862 233728 DEBUG oslo_concurrency.lockutils [req-532e4c67-b877-411a-b032-f41d26ac3f2d req-08e735b6-b69a-4cd2-999e-3d7b4260c0a9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "0728e9c2-5ec4-413b-aca6-7ee6cd645689-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:03 np0005539552 nova_compute[233724]: 2025-11-29 08:32:03.863 233728 DEBUG oslo_concurrency.lockutils [req-532e4c67-b877-411a-b032-f41d26ac3f2d req-08e735b6-b69a-4cd2-999e-3d7b4260c0a9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "0728e9c2-5ec4-413b-aca6-7ee6cd645689-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:03 np0005539552 nova_compute[233724]: 2025-11-29 08:32:03.863 233728 DEBUG oslo_concurrency.lockutils [req-532e4c67-b877-411a-b032-f41d26ac3f2d req-08e735b6-b69a-4cd2-999e-3d7b4260c0a9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "0728e9c2-5ec4-413b-aca6-7ee6cd645689-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:03 np0005539552 nova_compute[233724]: 2025-11-29 08:32:03.864 233728 DEBUG nova.compute.manager [req-532e4c67-b877-411a-b032-f41d26ac3f2d req-08e735b6-b69a-4cd2-999e-3d7b4260c0a9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] No waiting events found dispatching network-vif-unplugged-1fb50269-75d2-4b10-bc29-bc1d14be7a5e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:32:03 np0005539552 nova_compute[233724]: 2025-11-29 08:32:03.864 233728 WARNING nova.compute.manager [req-532e4c67-b877-411a-b032-f41d26ac3f2d req-08e735b6-b69a-4cd2-999e-3d7b4260c0a9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Received unexpected event network-vif-unplugged-1fb50269-75d2-4b10-bc29-bc1d14be7a5e for instance with vm_state suspended and task_state None.#033[00m
Nov 29 03:32:03 np0005539552 nova_compute[233724]: 2025-11-29 08:32:03.867 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:03.872 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[7986dc0e-41f4-40f4-8b8f-ee06ba384226]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:03.895 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ceb82db8-15ae-4305-8787-8c84e8ac4a0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:03.896 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[99de4fa6-4c81-45e6-8f18-b762b20457f6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:03.917 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[bd4dd86c-48aa-4480-9233-c9455eb46f8a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 814921, 'reachable_time': 36174, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301300, 'error': None, 'target': 'ovnmeta-72c7496c-edc3-4e9f-b7d4-5c83cefc4119', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:03 np0005539552 systemd[1]: run-netns-ovnmeta\x2d72c7496c\x2dedc3\x2d4e9f\x2db7d4\x2d5c83cefc4119.mount: Deactivated successfully.
Nov 29 03:32:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:03.920 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-72c7496c-edc3-4e9f-b7d4-5c83cefc4119 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:32:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:03.920 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[dd6e1bb8-0eae-4f5b-8e8d-f7765cffc6ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:04.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:32:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:05.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:32:05 np0005539552 nova_compute[233724]: 2025-11-29 08:32:05.990 233728 DEBUG nova.compute.manager [req-4ec9ee5f-31e9-480a-8eaa-39f13251eb65 req-f97d9f9d-8fc9-4698-be68-ec13e386678e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Received event network-vif-plugged-1fb50269-75d2-4b10-bc29-bc1d14be7a5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:32:05 np0005539552 nova_compute[233724]: 2025-11-29 08:32:05.991 233728 DEBUG oslo_concurrency.lockutils [req-4ec9ee5f-31e9-480a-8eaa-39f13251eb65 req-f97d9f9d-8fc9-4698-be68-ec13e386678e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "0728e9c2-5ec4-413b-aca6-7ee6cd645689-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:05 np0005539552 nova_compute[233724]: 2025-11-29 08:32:05.992 233728 DEBUG oslo_concurrency.lockutils [req-4ec9ee5f-31e9-480a-8eaa-39f13251eb65 req-f97d9f9d-8fc9-4698-be68-ec13e386678e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "0728e9c2-5ec4-413b-aca6-7ee6cd645689-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:05 np0005539552 nova_compute[233724]: 2025-11-29 08:32:05.992 233728 DEBUG oslo_concurrency.lockutils [req-4ec9ee5f-31e9-480a-8eaa-39f13251eb65 req-f97d9f9d-8fc9-4698-be68-ec13e386678e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "0728e9c2-5ec4-413b-aca6-7ee6cd645689-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:05 np0005539552 nova_compute[233724]: 2025-11-29 08:32:05.993 233728 DEBUG nova.compute.manager [req-4ec9ee5f-31e9-480a-8eaa-39f13251eb65 req-f97d9f9d-8fc9-4698-be68-ec13e386678e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] No waiting events found dispatching network-vif-plugged-1fb50269-75d2-4b10-bc29-bc1d14be7a5e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:32:05 np0005539552 nova_compute[233724]: 2025-11-29 08:32:05.993 233728 WARNING nova.compute.manager [req-4ec9ee5f-31e9-480a-8eaa-39f13251eb65 req-f97d9f9d-8fc9-4698-be68-ec13e386678e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Received unexpected event network-vif-plugged-1fb50269-75d2-4b10-bc29-bc1d14be7a5e for instance with vm_state suspended and task_state None.#033[00m
Nov 29 03:32:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:32:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:06.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:32:06 np0005539552 nova_compute[233724]: 2025-11-29 08:32:06.380 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:06 np0005539552 nova_compute[233724]: 2025-11-29 08:32:06.633 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:06 np0005539552 nova_compute[233724]: 2025-11-29 08:32:06.837 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:07 np0005539552 nova_compute[233724]: 2025-11-29 08:32:07.051 233728 INFO nova.compute.manager [None req-8e98e45f-6aff-4029-a8ee-2b74e7401ba4 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Get console output#033[00m
Nov 29 03:32:07 np0005539552 nova_compute[233724]: 2025-11-29 08:32:07.224 233728 INFO nova.compute.manager [None req-9130eff8-cfc7-4714-a7a2-ec01b86e090e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Resuming#033[00m
Nov 29 03:32:07 np0005539552 nova_compute[233724]: 2025-11-29 08:32:07.225 233728 DEBUG nova.objects.instance [None req-9130eff8-cfc7-4714-a7a2-ec01b86e090e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'flavor' on Instance uuid 0728e9c2-5ec4-413b-aca6-7ee6cd645689 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:32:07 np0005539552 nova_compute[233724]: 2025-11-29 08:32:07.261 233728 DEBUG oslo_concurrency.lockutils [None req-9130eff8-cfc7-4714-a7a2-ec01b86e090e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "refresh_cache-0728e9c2-5ec4-413b-aca6-7ee6cd645689" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:32:07 np0005539552 nova_compute[233724]: 2025-11-29 08:32:07.261 233728 DEBUG oslo_concurrency.lockutils [None req-9130eff8-cfc7-4714-a7a2-ec01b86e090e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquired lock "refresh_cache-0728e9c2-5ec4-413b-aca6-7ee6cd645689" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:32:07 np0005539552 nova_compute[233724]: 2025-11-29 08:32:07.261 233728 DEBUG nova.network.neutron [None req-9130eff8-cfc7-4714-a7a2-ec01b86e090e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:32:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 03:32:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:07.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 03:32:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:32:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:08.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:32:08 np0005539552 nova_compute[233724]: 2025-11-29 08:32:08.821 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:09 np0005539552 nova_compute[233724]: 2025-11-29 08:32:09.297 233728 DEBUG nova.network.neutron [None req-9130eff8-cfc7-4714-a7a2-ec01b86e090e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Updating instance_info_cache with network_info: [{"id": "1fb50269-75d2-4b10-bc29-bc1d14be7a5e", "address": "fa:16:3e:8e:5e:c5", "network": {"id": "72c7496c-edc3-4e9f-b7d4-5c83cefc4119", "bridge": "br-int", "label": "tempest-network-smoke--2112416887", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1fb50269-75", "ovs_interfaceid": "1fb50269-75d2-4b10-bc29-bc1d14be7a5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:32:09 np0005539552 nova_compute[233724]: 2025-11-29 08:32:09.320 233728 DEBUG oslo_concurrency.lockutils [None req-9130eff8-cfc7-4714-a7a2-ec01b86e090e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Releasing lock "refresh_cache-0728e9c2-5ec4-413b-aca6-7ee6cd645689" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:32:09 np0005539552 nova_compute[233724]: 2025-11-29 08:32:09.327 233728 DEBUG nova.virt.libvirt.vif [None req-9130eff8-cfc7-4714-a7a2-ec01b86e090e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:31:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-372991495',display_name='tempest-TestNetworkAdvancedServerOps-server-372991495',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-372991495',id=160,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBT1pv+IY7jQTScxIMh5OvDUzH662JY1o2XQaNpf6yf2TjBvg2UagoYUJ4t7fuQIr5Qy1J4DOxtOPLrI+zJHAFWB2CqZtlO0XcZKiDQ7qxEMZSSyPr2ai/8DAEe8syPj1w==',key_name='tempest-TestNetworkAdvancedServerOps-1533546898',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:31:44Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-8uo5g4ct',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:32:03Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=0728e9c2-5ec4-413b-aca6-7ee6cd645689,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "1fb50269-75d2-4b10-bc29-bc1d14be7a5e", "address": "fa:16:3e:8e:5e:c5", "network": {"id": "72c7496c-edc3-4e9f-b7d4-5c83cefc4119", "bridge": "br-int", "label": "tempest-network-smoke--2112416887", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1fb50269-75", "ovs_interfaceid": "1fb50269-75d2-4b10-bc29-bc1d14be7a5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:32:09 np0005539552 nova_compute[233724]: 2025-11-29 08:32:09.328 233728 DEBUG nova.network.os_vif_util [None req-9130eff8-cfc7-4714-a7a2-ec01b86e090e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converting VIF {"id": "1fb50269-75d2-4b10-bc29-bc1d14be7a5e", "address": "fa:16:3e:8e:5e:c5", "network": {"id": "72c7496c-edc3-4e9f-b7d4-5c83cefc4119", "bridge": "br-int", "label": "tempest-network-smoke--2112416887", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1fb50269-75", "ovs_interfaceid": "1fb50269-75d2-4b10-bc29-bc1d14be7a5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:32:09 np0005539552 nova_compute[233724]: 2025-11-29 08:32:09.329 233728 DEBUG nova.network.os_vif_util [None req-9130eff8-cfc7-4714-a7a2-ec01b86e090e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:5e:c5,bridge_name='br-int',has_traffic_filtering=True,id=1fb50269-75d2-4b10-bc29-bc1d14be7a5e,network=Network(72c7496c-edc3-4e9f-b7d4-5c83cefc4119),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1fb50269-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:32:09 np0005539552 nova_compute[233724]: 2025-11-29 08:32:09.329 233728 DEBUG os_vif [None req-9130eff8-cfc7-4714-a7a2-ec01b86e090e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:5e:c5,bridge_name='br-int',has_traffic_filtering=True,id=1fb50269-75d2-4b10-bc29-bc1d14be7a5e,network=Network(72c7496c-edc3-4e9f-b7d4-5c83cefc4119),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1fb50269-75') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:32:09 np0005539552 nova_compute[233724]: 2025-11-29 08:32:09.330 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:09 np0005539552 nova_compute[233724]: 2025-11-29 08:32:09.331 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:32:09 np0005539552 nova_compute[233724]: 2025-11-29 08:32:09.331 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:32:09 np0005539552 nova_compute[233724]: 2025-11-29 08:32:09.335 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:09 np0005539552 nova_compute[233724]: 2025-11-29 08:32:09.335 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1fb50269-75, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:32:09 np0005539552 nova_compute[233724]: 2025-11-29 08:32:09.336 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1fb50269-75, col_values=(('external_ids', {'iface-id': '1fb50269-75d2-4b10-bc29-bc1d14be7a5e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8e:5e:c5', 'vm-uuid': '0728e9c2-5ec4-413b-aca6-7ee6cd645689'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:32:09 np0005539552 nova_compute[233724]: 2025-11-29 08:32:09.336 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:32:09 np0005539552 nova_compute[233724]: 2025-11-29 08:32:09.337 233728 INFO os_vif [None req-9130eff8-cfc7-4714-a7a2-ec01b86e090e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:5e:c5,bridge_name='br-int',has_traffic_filtering=True,id=1fb50269-75d2-4b10-bc29-bc1d14be7a5e,network=Network(72c7496c-edc3-4e9f-b7d4-5c83cefc4119),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1fb50269-75')#033[00m
Nov 29 03:32:09 np0005539552 nova_compute[233724]: 2025-11-29 08:32:09.369 233728 DEBUG nova.objects.instance [None req-9130eff8-cfc7-4714-a7a2-ec01b86e090e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'numa_topology' on Instance uuid 0728e9c2-5ec4-413b-aca6-7ee6cd645689 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:32:09 np0005539552 kernel: tap1fb50269-75: entered promiscuous mode
Nov 29 03:32:09 np0005539552 nova_compute[233724]: 2025-11-29 08:32:09.456 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:09 np0005539552 NetworkManager[48926]: <info>  [1764405129.4579] manager: (tap1fb50269-75): new Tun device (/org/freedesktop/NetworkManager/Devices/327)
Nov 29 03:32:09 np0005539552 ovn_controller[133798]: 2025-11-29T08:32:09Z|00751|binding|INFO|Claiming lport 1fb50269-75d2-4b10-bc29-bc1d14be7a5e for this chassis.
Nov 29 03:32:09 np0005539552 ovn_controller[133798]: 2025-11-29T08:32:09Z|00752|binding|INFO|1fb50269-75d2-4b10-bc29-bc1d14be7a5e: Claiming fa:16:3e:8e:5e:c5 10.100.0.10
Nov 29 03:32:09 np0005539552 nova_compute[233724]: 2025-11-29 08:32:09.470 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:09 np0005539552 nova_compute[233724]: 2025-11-29 08:32:09.477 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:09 np0005539552 NetworkManager[48926]: <info>  [1764405129.4794] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/328)
Nov 29 03:32:09 np0005539552 NetworkManager[48926]: <info>  [1764405129.4800] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/329)
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:09.484 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:5e:c5 10.100.0.10'], port_security=['fa:16:3e:8e:5e:c5 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '0728e9c2-5ec4-413b-aca6-7ee6cd645689', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-72c7496c-edc3-4e9f-b7d4-5c83cefc4119', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4145ed6cde61439ebcc12fae2609b724', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'd914ffe7-3081-4101-a85e-37d351e89940', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.197'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c9fba1be-70a7-4a1c-89ef-e87d2756a5e7, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=1fb50269-75d2-4b10-bc29-bc1d14be7a5e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:09.485 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 1fb50269-75d2-4b10-bc29-bc1d14be7a5e in datapath 72c7496c-edc3-4e9f-b7d4-5c83cefc4119 bound to our chassis#033[00m
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:09.486 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 72c7496c-edc3-4e9f-b7d4-5c83cefc4119#033[00m
Nov 29 03:32:09 np0005539552 systemd-machined[196379]: New machine qemu-75-instance-000000a0.
Nov 29 03:32:09 np0005539552 systemd-udevd[301323]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:09.500 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[9fe5f8b7-bd14-4e4b-9920-623a22c6cd72]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:09.501 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap72c7496c-e1 in ovnmeta-72c7496c-edc3-4e9f-b7d4-5c83cefc4119 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:09.503 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap72c7496c-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:09.503 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[28255ea9-6cda-4e14-a230-ddea7273a0fb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:09.504 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[bb05c87e-40b5-4e4b-b4f6-3224457e2e05]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:09 np0005539552 systemd[1]: Started Virtual Machine qemu-75-instance-000000a0.
Nov 29 03:32:09 np0005539552 NetworkManager[48926]: <info>  [1764405129.5182] device (tap1fb50269-75): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:32:09 np0005539552 NetworkManager[48926]: <info>  [1764405129.5202] device (tap1fb50269-75): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:09.520 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[0571e678-d8de-4a8e-b461-425573bba8a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:09.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:09.553 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[44dc823c-07e6-4ab5-ad22-3ff68e4f8523]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:09.590 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[c55a4dbb-c207-46c2-b3a0-24eb8807b73f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:09 np0005539552 NetworkManager[48926]: <info>  [1764405129.6061] manager: (tap72c7496c-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/330)
Nov 29 03:32:09 np0005539552 systemd-udevd[301326]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:09.608 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d773acd2-8b07-400b-9508-11e55b721a9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:09.650 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[4259c54f-546e-4272-8dd8-9e49a74490c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:09.653 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[992bcea3-bbeb-4352-86b5-06a7afd783e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:09 np0005539552 NetworkManager[48926]: <info>  [1764405129.6806] device (tap72c7496c-e0): carrier: link connected
Nov 29 03:32:09 np0005539552 nova_compute[233724]: 2025-11-29 08:32:09.685 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:09.689 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[e15adcc7-b6f8-4809-81ba-c0f0707eea2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:09 np0005539552 nova_compute[233724]: 2025-11-29 08:32:09.704 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:09 np0005539552 ovn_controller[133798]: 2025-11-29T08:32:09Z|00753|binding|INFO|Setting lport 1fb50269-75d2-4b10-bc29-bc1d14be7a5e ovn-installed in OVS
Nov 29 03:32:09 np0005539552 ovn_controller[133798]: 2025-11-29T08:32:09Z|00754|binding|INFO|Setting lport 1fb50269-75d2-4b10-bc29-bc1d14be7a5e up in Southbound
Nov 29 03:32:09 np0005539552 nova_compute[233724]: 2025-11-29 08:32:09.715 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:09.727 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[cc0d0016-fc85-44a1-9f2b-36d7583bc497]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap72c7496c-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:53:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 222], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 817536, 'reachable_time': 27542, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301355, 'error': None, 'target': 'ovnmeta-72c7496c-edc3-4e9f-b7d4-5c83cefc4119', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:09.754 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6f88d8ef-7602-49c0-8b46-8b22bfbb1f32]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef9:5356'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 817536, 'tstamp': 817536}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301356, 'error': None, 'target': 'ovnmeta-72c7496c-edc3-4e9f-b7d4-5c83cefc4119', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:09.778 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b078d6c6-f4ac-4e71-9b8f-2083efb7f967]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap72c7496c-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:53:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 222], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 817536, 'reachable_time': 27542, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 301357, 'error': None, 'target': 'ovnmeta-72c7496c-edc3-4e9f-b7d4-5c83cefc4119', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:09.822 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[db0f924e-0342-405e-aa81-02e610a23d8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:09.892 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f7c31518-2dda-46b1-9cb2-b27e7293b3f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:09.893 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72c7496c-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:09.894 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:09.895 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap72c7496c-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:32:09 np0005539552 kernel: tap72c7496c-e0: entered promiscuous mode
Nov 29 03:32:09 np0005539552 nova_compute[233724]: 2025-11-29 08:32:09.897 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:09 np0005539552 NetworkManager[48926]: <info>  [1764405129.8977] manager: (tap72c7496c-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/331)
Nov 29 03:32:09 np0005539552 nova_compute[233724]: 2025-11-29 08:32:09.901 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:09.910 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap72c7496c-e0, col_values=(('external_ids', {'iface-id': '950fb150-ff30-4c09-b1fb-acc48272e896'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:32:09 np0005539552 nova_compute[233724]: 2025-11-29 08:32:09.911 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:09 np0005539552 ovn_controller[133798]: 2025-11-29T08:32:09Z|00755|binding|INFO|Releasing lport 950fb150-ff30-4c09-b1fb-acc48272e896 from this chassis (sb_readonly=0)
Nov 29 03:32:09 np0005539552 nova_compute[233724]: 2025-11-29 08:32:09.912 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:09.928 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/72c7496c-edc3-4e9f-b7d4-5c83cefc4119.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/72c7496c-edc3-4e9f-b7d4-5c83cefc4119.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:09.929 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[67142039-13a2-4b2d-b38a-8d74a1498bf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:09.931 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-72c7496c-edc3-4e9f-b7d4-5c83cefc4119
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/72c7496c-edc3-4e9f-b7d4-5c83cefc4119.pid.haproxy
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 72c7496c-edc3-4e9f-b7d4-5c83cefc4119
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:32:09 np0005539552 nova_compute[233724]: 2025-11-29 08:32:09.932 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:09 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:09.932 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-72c7496c-edc3-4e9f-b7d4-5c83cefc4119', 'env', 'PROCESS_TAG=haproxy-72c7496c-edc3-4e9f-b7d4-5c83cefc4119', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/72c7496c-edc3-4e9f-b7d4-5c83cefc4119.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:32:10 np0005539552 nova_compute[233724]: 2025-11-29 08:32:10.072 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:10.072 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:32:10 np0005539552 nova_compute[233724]: 2025-11-29 08:32:10.235 233728 DEBUG nova.compute.manager [req-4c88a3f4-2489-4115-bc72-c6f1a2bf898b req-db60e009-0a4a-4a37-9db0-994c4602d35c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Received event network-vif-plugged-1fb50269-75d2-4b10-bc29-bc1d14be7a5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:32:10 np0005539552 nova_compute[233724]: 2025-11-29 08:32:10.236 233728 DEBUG oslo_concurrency.lockutils [req-4c88a3f4-2489-4115-bc72-c6f1a2bf898b req-db60e009-0a4a-4a37-9db0-994c4602d35c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "0728e9c2-5ec4-413b-aca6-7ee6cd645689-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:10 np0005539552 nova_compute[233724]: 2025-11-29 08:32:10.237 233728 DEBUG oslo_concurrency.lockutils [req-4c88a3f4-2489-4115-bc72-c6f1a2bf898b req-db60e009-0a4a-4a37-9db0-994c4602d35c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "0728e9c2-5ec4-413b-aca6-7ee6cd645689-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:10 np0005539552 nova_compute[233724]: 2025-11-29 08:32:10.237 233728 DEBUG oslo_concurrency.lockutils [req-4c88a3f4-2489-4115-bc72-c6f1a2bf898b req-db60e009-0a4a-4a37-9db0-994c4602d35c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "0728e9c2-5ec4-413b-aca6-7ee6cd645689-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:10 np0005539552 nova_compute[233724]: 2025-11-29 08:32:10.238 233728 DEBUG nova.compute.manager [req-4c88a3f4-2489-4115-bc72-c6f1a2bf898b req-db60e009-0a4a-4a37-9db0-994c4602d35c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] No waiting events found dispatching network-vif-plugged-1fb50269-75d2-4b10-bc29-bc1d14be7a5e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:32:10 np0005539552 nova_compute[233724]: 2025-11-29 08:32:10.238 233728 WARNING nova.compute.manager [req-4c88a3f4-2489-4115-bc72-c6f1a2bf898b req-db60e009-0a4a-4a37-9db0-994c4602d35c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Received unexpected event network-vif-plugged-1fb50269-75d2-4b10-bc29-bc1d14be7a5e for instance with vm_state suspended and task_state resuming.#033[00m
Nov 29 03:32:10 np0005539552 nova_compute[233724]: 2025-11-29 08:32:10.239 233728 DEBUG nova.compute.manager [req-4c88a3f4-2489-4115-bc72-c6f1a2bf898b req-db60e009-0a4a-4a37-9db0-994c4602d35c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Received event network-vif-plugged-1fb50269-75d2-4b10-bc29-bc1d14be7a5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:32:10 np0005539552 nova_compute[233724]: 2025-11-29 08:32:10.239 233728 DEBUG oslo_concurrency.lockutils [req-4c88a3f4-2489-4115-bc72-c6f1a2bf898b req-db60e009-0a4a-4a37-9db0-994c4602d35c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "0728e9c2-5ec4-413b-aca6-7ee6cd645689-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:10 np0005539552 nova_compute[233724]: 2025-11-29 08:32:10.240 233728 DEBUG oslo_concurrency.lockutils [req-4c88a3f4-2489-4115-bc72-c6f1a2bf898b req-db60e009-0a4a-4a37-9db0-994c4602d35c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "0728e9c2-5ec4-413b-aca6-7ee6cd645689-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:10 np0005539552 nova_compute[233724]: 2025-11-29 08:32:10.240 233728 DEBUG oslo_concurrency.lockutils [req-4c88a3f4-2489-4115-bc72-c6f1a2bf898b req-db60e009-0a4a-4a37-9db0-994c4602d35c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "0728e9c2-5ec4-413b-aca6-7ee6cd645689-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:10 np0005539552 nova_compute[233724]: 2025-11-29 08:32:10.240 233728 DEBUG nova.compute.manager [req-4c88a3f4-2489-4115-bc72-c6f1a2bf898b req-db60e009-0a4a-4a37-9db0-994c4602d35c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] No waiting events found dispatching network-vif-plugged-1fb50269-75d2-4b10-bc29-bc1d14be7a5e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:32:10 np0005539552 nova_compute[233724]: 2025-11-29 08:32:10.241 233728 WARNING nova.compute.manager [req-4c88a3f4-2489-4115-bc72-c6f1a2bf898b req-db60e009-0a4a-4a37-9db0-994c4602d35c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Received unexpected event network-vif-plugged-1fb50269-75d2-4b10-bc29-bc1d14be7a5e for instance with vm_state suspended and task_state resuming.#033[00m
Nov 29 03:32:10 np0005539552 nova_compute[233724]: 2025-11-29 08:32:10.253 233728 DEBUG nova.virt.libvirt.host [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Removed pending event for 0728e9c2-5ec4-413b-aca6-7ee6cd645689 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:32:10 np0005539552 nova_compute[233724]: 2025-11-29 08:32:10.254 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405130.2534826, 0728e9c2-5ec4-413b-aca6-7ee6cd645689 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:32:10 np0005539552 nova_compute[233724]: 2025-11-29 08:32:10.254 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] VM Started (Lifecycle Event)#033[00m
Nov 29 03:32:10 np0005539552 nova_compute[233724]: 2025-11-29 08:32:10.270 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:32:10 np0005539552 nova_compute[233724]: 2025-11-29 08:32:10.299 233728 DEBUG nova.compute.manager [None req-9130eff8-cfc7-4714-a7a2-ec01b86e090e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:32:10 np0005539552 nova_compute[233724]: 2025-11-29 08:32:10.300 233728 DEBUG nova.objects.instance [None req-9130eff8-cfc7-4714-a7a2-ec01b86e090e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0728e9c2-5ec4-413b-aca6-7ee6cd645689 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:32:10 np0005539552 nova_compute[233724]: 2025-11-29 08:32:10.303 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:32:10 np0005539552 nova_compute[233724]: 2025-11-29 08:32:10.318 233728 INFO nova.virt.libvirt.driver [-] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Instance running successfully.#033[00m
Nov 29 03:32:10 np0005539552 virtqemud[233098]: argument unsupported: QEMU guest agent is not configured
Nov 29 03:32:10 np0005539552 nova_compute[233724]: 2025-11-29 08:32:10.320 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Nov 29 03:32:10 np0005539552 nova_compute[233724]: 2025-11-29 08:32:10.321 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405130.2576587, 0728e9c2-5ec4-413b-aca6-7ee6cd645689 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:32:10 np0005539552 nova_compute[233724]: 2025-11-29 08:32:10.321 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:32:10 np0005539552 nova_compute[233724]: 2025-11-29 08:32:10.323 233728 DEBUG nova.virt.libvirt.guest [None req-9130eff8-cfc7-4714-a7a2-ec01b86e090e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Nov 29 03:32:10 np0005539552 nova_compute[233724]: 2025-11-29 08:32:10.323 233728 DEBUG nova.compute.manager [None req-9130eff8-cfc7-4714-a7a2-ec01b86e090e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:32:10 np0005539552 podman[301433]: 2025-11-29 08:32:10.338064053 +0000 UTC m=+0.050605422 container create c6e843e9d0b2949a011f5e8c391ef00bdb6312a36014e79bc7c206690d74a45e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72c7496c-edc3-4e9f-b7d4-5c83cefc4119, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:32:10 np0005539552 nova_compute[233724]: 2025-11-29 08:32:10.348 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:32:10 np0005539552 nova_compute[233724]: 2025-11-29 08:32:10.351 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:32:10 np0005539552 systemd[1]: Started libpod-conmon-c6e843e9d0b2949a011f5e8c391ef00bdb6312a36014e79bc7c206690d74a45e.scope.
Nov 29 03:32:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:10.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:10 np0005539552 nova_compute[233724]: 2025-11-29 08:32:10.380 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Nov 29 03:32:10 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:32:10 np0005539552 podman[301433]: 2025-11-29 08:32:10.312502675 +0000 UTC m=+0.025044074 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:32:10 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/854e5927c9afdd52786e17497771b6afe8788259e3166126a2fb44c18fe1a625/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:32:10 np0005539552 podman[301433]: 2025-11-29 08:32:10.424955931 +0000 UTC m=+0.137497320 container init c6e843e9d0b2949a011f5e8c391ef00bdb6312a36014e79bc7c206690d74a45e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72c7496c-edc3-4e9f-b7d4-5c83cefc4119, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:32:10 np0005539552 podman[301433]: 2025-11-29 08:32:10.435006122 +0000 UTC m=+0.147547491 container start c6e843e9d0b2949a011f5e8c391ef00bdb6312a36014e79bc7c206690d74a45e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72c7496c-edc3-4e9f-b7d4-5c83cefc4119, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:32:10 np0005539552 neutron-haproxy-ovnmeta-72c7496c-edc3-4e9f-b7d4-5c83cefc4119[301448]: [NOTICE]   (301452) : New worker (301454) forked
Nov 29 03:32:10 np0005539552 neutron-haproxy-ovnmeta-72c7496c-edc3-4e9f-b7d4-5c83cefc4119[301448]: [NOTICE]   (301452) : Loading success.
Nov 29 03:32:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:10.482 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:32:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:10.483 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:32:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:11.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:11 np0005539552 nova_compute[233724]: 2025-11-29 08:32:11.550 233728 INFO nova.compute.manager [None req-2aa564b7-b7d9-43d0-9797-024747398ff8 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Get console output#033[00m
Nov 29 03:32:11 np0005539552 nova_compute[233724]: 2025-11-29 08:32:11.556 279702 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:32:11 np0005539552 nova_compute[233724]: 2025-11-29 08:32:11.839 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:12.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:12 np0005539552 nova_compute[233724]: 2025-11-29 08:32:12.409 233728 DEBUG oslo_concurrency.lockutils [None req-e374fbd0-adbc-458a-9d30-3c7ff3905ace fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "0728e9c2-5ec4-413b-aca6-7ee6cd645689" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:12 np0005539552 nova_compute[233724]: 2025-11-29 08:32:12.409 233728 DEBUG oslo_concurrency.lockutils [None req-e374fbd0-adbc-458a-9d30-3c7ff3905ace fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "0728e9c2-5ec4-413b-aca6-7ee6cd645689" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:12 np0005539552 nova_compute[233724]: 2025-11-29 08:32:12.409 233728 DEBUG oslo_concurrency.lockutils [None req-e374fbd0-adbc-458a-9d30-3c7ff3905ace fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "0728e9c2-5ec4-413b-aca6-7ee6cd645689-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:12 np0005539552 nova_compute[233724]: 2025-11-29 08:32:12.409 233728 DEBUG oslo_concurrency.lockutils [None req-e374fbd0-adbc-458a-9d30-3c7ff3905ace fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "0728e9c2-5ec4-413b-aca6-7ee6cd645689-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:12 np0005539552 nova_compute[233724]: 2025-11-29 08:32:12.410 233728 DEBUG oslo_concurrency.lockutils [None req-e374fbd0-adbc-458a-9d30-3c7ff3905ace fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "0728e9c2-5ec4-413b-aca6-7ee6cd645689-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:12 np0005539552 nova_compute[233724]: 2025-11-29 08:32:12.411 233728 INFO nova.compute.manager [None req-e374fbd0-adbc-458a-9d30-3c7ff3905ace fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Terminating instance#033[00m
Nov 29 03:32:12 np0005539552 nova_compute[233724]: 2025-11-29 08:32:12.411 233728 DEBUG nova.compute.manager [None req-e374fbd0-adbc-458a-9d30-3c7ff3905ace fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:32:12 np0005539552 kernel: tap1fb50269-75 (unregistering): left promiscuous mode
Nov 29 03:32:12 np0005539552 NetworkManager[48926]: <info>  [1764405132.4605] device (tap1fb50269-75): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:32:12 np0005539552 nova_compute[233724]: 2025-11-29 08:32:12.478 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:12 np0005539552 ovn_controller[133798]: 2025-11-29T08:32:12Z|00756|binding|INFO|Releasing lport 1fb50269-75d2-4b10-bc29-bc1d14be7a5e from this chassis (sb_readonly=0)
Nov 29 03:32:12 np0005539552 ovn_controller[133798]: 2025-11-29T08:32:12Z|00757|binding|INFO|Setting lport 1fb50269-75d2-4b10-bc29-bc1d14be7a5e down in Southbound
Nov 29 03:32:12 np0005539552 ovn_controller[133798]: 2025-11-29T08:32:12Z|00758|binding|INFO|Removing iface tap1fb50269-75 ovn-installed in OVS
Nov 29 03:32:12 np0005539552 nova_compute[233724]: 2025-11-29 08:32:12.480 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:12 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:12.486 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:5e:c5 10.100.0.10'], port_security=['fa:16:3e:8e:5e:c5 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '0728e9c2-5ec4-413b-aca6-7ee6cd645689', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-72c7496c-edc3-4e9f-b7d4-5c83cefc4119', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4145ed6cde61439ebcc12fae2609b724', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'd914ffe7-3081-4101-a85e-37d351e89940', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c9fba1be-70a7-4a1c-89ef-e87d2756a5e7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=1fb50269-75d2-4b10-bc29-bc1d14be7a5e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:32:12 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:12.487 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 1fb50269-75d2-4b10-bc29-bc1d14be7a5e in datapath 72c7496c-edc3-4e9f-b7d4-5c83cefc4119 unbound from our chassis#033[00m
Nov 29 03:32:12 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:12.489 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 72c7496c-edc3-4e9f-b7d4-5c83cefc4119, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:32:12 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:12.490 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3d00f577-daef-4917-82c3-8ddf4e0da37b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:12 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:12.490 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-72c7496c-edc3-4e9f-b7d4-5c83cefc4119 namespace which is not needed anymore#033[00m
Nov 29 03:32:12 np0005539552 nova_compute[233724]: 2025-11-29 08:32:12.505 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:12 np0005539552 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d000000a0.scope: Deactivated successfully.
Nov 29 03:32:12 np0005539552 systemd-machined[196379]: Machine qemu-75-instance-000000a0 terminated.
Nov 29 03:32:12 np0005539552 neutron-haproxy-ovnmeta-72c7496c-edc3-4e9f-b7d4-5c83cefc4119[301448]: [NOTICE]   (301452) : haproxy version is 2.8.14-c23fe91
Nov 29 03:32:12 np0005539552 neutron-haproxy-ovnmeta-72c7496c-edc3-4e9f-b7d4-5c83cefc4119[301448]: [NOTICE]   (301452) : path to executable is /usr/sbin/haproxy
Nov 29 03:32:12 np0005539552 neutron-haproxy-ovnmeta-72c7496c-edc3-4e9f-b7d4-5c83cefc4119[301448]: [WARNING]  (301452) : Exiting Master process...
Nov 29 03:32:12 np0005539552 neutron-haproxy-ovnmeta-72c7496c-edc3-4e9f-b7d4-5c83cefc4119[301448]: [ALERT]    (301452) : Current worker (301454) exited with code 143 (Terminated)
Nov 29 03:32:12 np0005539552 neutron-haproxy-ovnmeta-72c7496c-edc3-4e9f-b7d4-5c83cefc4119[301448]: [WARNING]  (301452) : All workers exited. Exiting... (0)
Nov 29 03:32:12 np0005539552 kernel: tap1fb50269-75: entered promiscuous mode
Nov 29 03:32:12 np0005539552 systemd[1]: libpod-c6e843e9d0b2949a011f5e8c391ef00bdb6312a36014e79bc7c206690d74a45e.scope: Deactivated successfully.
Nov 29 03:32:12 np0005539552 kernel: tap1fb50269-75 (unregistering): left promiscuous mode
Nov 29 03:32:12 np0005539552 conmon[301448]: conmon c6e843e9d0b2949a011f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c6e843e9d0b2949a011f5e8c391ef00bdb6312a36014e79bc7c206690d74a45e.scope/container/memory.events
Nov 29 03:32:12 np0005539552 NetworkManager[48926]: <info>  [1764405132.6394] manager: (tap1fb50269-75): new Tun device (/org/freedesktop/NetworkManager/Devices/332)
Nov 29 03:32:12 np0005539552 podman[301488]: 2025-11-29 08:32:12.640099701 +0000 UTC m=+0.051667540 container died c6e843e9d0b2949a011f5e8c391ef00bdb6312a36014e79bc7c206690d74a45e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72c7496c-edc3-4e9f-b7d4-5c83cefc4119, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:32:12 np0005539552 ovn_controller[133798]: 2025-11-29T08:32:12Z|00759|binding|INFO|Claiming lport 1fb50269-75d2-4b10-bc29-bc1d14be7a5e for this chassis.
Nov 29 03:32:12 np0005539552 ovn_controller[133798]: 2025-11-29T08:32:12Z|00760|binding|INFO|1fb50269-75d2-4b10-bc29-bc1d14be7a5e: Claiming fa:16:3e:8e:5e:c5 10.100.0.10
Nov 29 03:32:12 np0005539552 nova_compute[233724]: 2025-11-29 08:32:12.641 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:12 np0005539552 nova_compute[233724]: 2025-11-29 08:32:12.647 233728 DEBUG nova.compute.manager [req-1fddf883-2753-4628-812b-e185e1365d34 req-1b86fb01-ec77-458b-81ca-c9c064710ee8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Received event network-changed-1fb50269-75d2-4b10-bc29-bc1d14be7a5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:32:12 np0005539552 nova_compute[233724]: 2025-11-29 08:32:12.647 233728 DEBUG nova.compute.manager [req-1fddf883-2753-4628-812b-e185e1365d34 req-1b86fb01-ec77-458b-81ca-c9c064710ee8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Refreshing instance network info cache due to event network-changed-1fb50269-75d2-4b10-bc29-bc1d14be7a5e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:32:12 np0005539552 nova_compute[233724]: 2025-11-29 08:32:12.648 233728 DEBUG oslo_concurrency.lockutils [req-1fddf883-2753-4628-812b-e185e1365d34 req-1b86fb01-ec77-458b-81ca-c9c064710ee8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-0728e9c2-5ec4-413b-aca6-7ee6cd645689" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:32:12 np0005539552 nova_compute[233724]: 2025-11-29 08:32:12.648 233728 DEBUG oslo_concurrency.lockutils [req-1fddf883-2753-4628-812b-e185e1365d34 req-1b86fb01-ec77-458b-81ca-c9c064710ee8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-0728e9c2-5ec4-413b-aca6-7ee6cd645689" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:32:12 np0005539552 nova_compute[233724]: 2025-11-29 08:32:12.648 233728 DEBUG nova.network.neutron [req-1fddf883-2753-4628-812b-e185e1365d34 req-1b86fb01-ec77-458b-81ca-c9c064710ee8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Refreshing network info cache for port 1fb50269-75d2-4b10-bc29-bc1d14be7a5e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:32:12 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:12.649 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:5e:c5 10.100.0.10'], port_security=['fa:16:3e:8e:5e:c5 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '0728e9c2-5ec4-413b-aca6-7ee6cd645689', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-72c7496c-edc3-4e9f-b7d4-5c83cefc4119', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4145ed6cde61439ebcc12fae2609b724', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'd914ffe7-3081-4101-a85e-37d351e89940', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c9fba1be-70a7-4a1c-89ef-e87d2756a5e7, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=1fb50269-75d2-4b10-bc29-bc1d14be7a5e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:32:12 np0005539552 nova_compute[233724]: 2025-11-29 08:32:12.661 233728 INFO nova.virt.libvirt.driver [-] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Instance destroyed successfully.#033[00m
Nov 29 03:32:12 np0005539552 nova_compute[233724]: 2025-11-29 08:32:12.662 233728 DEBUG nova.objects.instance [None req-e374fbd0-adbc-458a-9d30-3c7ff3905ace fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'resources' on Instance uuid 0728e9c2-5ec4-413b-aca6-7ee6cd645689 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:32:12 np0005539552 ovn_controller[133798]: 2025-11-29T08:32:12Z|00761|binding|INFO|Setting lport 1fb50269-75d2-4b10-bc29-bc1d14be7a5e ovn-installed in OVS
Nov 29 03:32:12 np0005539552 ovn_controller[133798]: 2025-11-29T08:32:12Z|00762|binding|INFO|Setting lport 1fb50269-75d2-4b10-bc29-bc1d14be7a5e up in Southbound
Nov 29 03:32:12 np0005539552 ovn_controller[133798]: 2025-11-29T08:32:12Z|00763|binding|INFO|Releasing lport 1fb50269-75d2-4b10-bc29-bc1d14be7a5e from this chassis (sb_readonly=1)
Nov 29 03:32:12 np0005539552 ovn_controller[133798]: 2025-11-29T08:32:12Z|00764|if_status|INFO|Dropped 3 log messages in last 208 seconds (most recently, 208 seconds ago) due to excessive rate
Nov 29 03:32:12 np0005539552 ovn_controller[133798]: 2025-11-29T08:32:12Z|00765|if_status|INFO|Not setting lport 1fb50269-75d2-4b10-bc29-bc1d14be7a5e down as sb is readonly
Nov 29 03:32:12 np0005539552 ovn_controller[133798]: 2025-11-29T08:32:12Z|00766|binding|INFO|Removing iface tap1fb50269-75 ovn-installed in OVS
Nov 29 03:32:12 np0005539552 nova_compute[233724]: 2025-11-29 08:32:12.667 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:12 np0005539552 ovn_controller[133798]: 2025-11-29T08:32:12Z|00767|binding|INFO|Releasing lport 1fb50269-75d2-4b10-bc29-bc1d14be7a5e from this chassis (sb_readonly=0)
Nov 29 03:32:12 np0005539552 ovn_controller[133798]: 2025-11-29T08:32:12Z|00768|binding|INFO|Setting lport 1fb50269-75d2-4b10-bc29-bc1d14be7a5e down in Southbound
Nov 29 03:32:12 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c6e843e9d0b2949a011f5e8c391ef00bdb6312a36014e79bc7c206690d74a45e-userdata-shm.mount: Deactivated successfully.
Nov 29 03:32:12 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:12.679 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:5e:c5 10.100.0.10'], port_security=['fa:16:3e:8e:5e:c5 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '0728e9c2-5ec4-413b-aca6-7ee6cd645689', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-72c7496c-edc3-4e9f-b7d4-5c83cefc4119', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4145ed6cde61439ebcc12fae2609b724', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'd914ffe7-3081-4101-a85e-37d351e89940', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c9fba1be-70a7-4a1c-89ef-e87d2756a5e7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=1fb50269-75d2-4b10-bc29-bc1d14be7a5e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:32:12 np0005539552 systemd[1]: var-lib-containers-storage-overlay-854e5927c9afdd52786e17497771b6afe8788259e3166126a2fb44c18fe1a625-merged.mount: Deactivated successfully.
Nov 29 03:32:12 np0005539552 nova_compute[233724]: 2025-11-29 08:32:12.689 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:12 np0005539552 podman[301488]: 2025-11-29 08:32:12.698382999 +0000 UTC m=+0.109950808 container cleanup c6e843e9d0b2949a011f5e8c391ef00bdb6312a36014e79bc7c206690d74a45e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72c7496c-edc3-4e9f-b7d4-5c83cefc4119, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:32:12 np0005539552 systemd[1]: libpod-conmon-c6e843e9d0b2949a011f5e8c391ef00bdb6312a36014e79bc7c206690d74a45e.scope: Deactivated successfully.
Nov 29 03:32:12 np0005539552 nova_compute[233724]: 2025-11-29 08:32:12.748 233728 DEBUG nova.virt.libvirt.vif [None req-e374fbd0-adbc-458a-9d30-3c7ff3905ace fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:31:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-372991495',display_name='tempest-TestNetworkAdvancedServerOps-server-372991495',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-372991495',id=160,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBT1pv+IY7jQTScxIMh5OvDUzH662JY1o2XQaNpf6yf2TjBvg2UagoYUJ4t7fuQIr5Qy1J4DOxtOPLrI+zJHAFWB2CqZtlO0XcZKiDQ7qxEMZSSyPr2ai/8DAEe8syPj1w==',key_name='tempest-TestNetworkAdvancedServerOps-1533546898',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:31:44Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-8uo5g4ct',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:32:10Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=0728e9c2-5ec4-413b-aca6-7ee6cd645689,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1fb50269-75d2-4b10-bc29-bc1d14be7a5e", "address": "fa:16:3e:8e:5e:c5", "network": {"id": "72c7496c-edc3-4e9f-b7d4-5c83cefc4119", "bridge": "br-int", "label": "tempest-network-smoke--2112416887", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1fb50269-75", "ovs_interfaceid": "1fb50269-75d2-4b10-bc29-bc1d14be7a5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:32:12 np0005539552 nova_compute[233724]: 2025-11-29 08:32:12.749 233728 DEBUG nova.network.os_vif_util [None req-e374fbd0-adbc-458a-9d30-3c7ff3905ace fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converting VIF {"id": "1fb50269-75d2-4b10-bc29-bc1d14be7a5e", "address": "fa:16:3e:8e:5e:c5", "network": {"id": "72c7496c-edc3-4e9f-b7d4-5c83cefc4119", "bridge": "br-int", "label": "tempest-network-smoke--2112416887", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1fb50269-75", "ovs_interfaceid": "1fb50269-75d2-4b10-bc29-bc1d14be7a5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:32:12 np0005539552 nova_compute[233724]: 2025-11-29 08:32:12.749 233728 DEBUG nova.network.os_vif_util [None req-e374fbd0-adbc-458a-9d30-3c7ff3905ace fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:5e:c5,bridge_name='br-int',has_traffic_filtering=True,id=1fb50269-75d2-4b10-bc29-bc1d14be7a5e,network=Network(72c7496c-edc3-4e9f-b7d4-5c83cefc4119),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1fb50269-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:32:12 np0005539552 nova_compute[233724]: 2025-11-29 08:32:12.750 233728 DEBUG os_vif [None req-e374fbd0-adbc-458a-9d30-3c7ff3905ace fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:5e:c5,bridge_name='br-int',has_traffic_filtering=True,id=1fb50269-75d2-4b10-bc29-bc1d14be7a5e,network=Network(72c7496c-edc3-4e9f-b7d4-5c83cefc4119),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1fb50269-75') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:32:12 np0005539552 nova_compute[233724]: 2025-11-29 08:32:12.752 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:12 np0005539552 nova_compute[233724]: 2025-11-29 08:32:12.752 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1fb50269-75, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:32:12 np0005539552 podman[301527]: 2025-11-29 08:32:12.754699124 +0000 UTC m=+0.036822831 container remove c6e843e9d0b2949a011f5e8c391ef00bdb6312a36014e79bc7c206690d74a45e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72c7496c-edc3-4e9f-b7d4-5c83cefc4119, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 29 03:32:12 np0005539552 nova_compute[233724]: 2025-11-29 08:32:12.755 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:32:12 np0005539552 nova_compute[233724]: 2025-11-29 08:32:12.757 233728 INFO os_vif [None req-e374fbd0-adbc-458a-9d30-3c7ff3905ace fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:5e:c5,bridge_name='br-int',has_traffic_filtering=True,id=1fb50269-75d2-4b10-bc29-bc1d14be7a5e,network=Network(72c7496c-edc3-4e9f-b7d4-5c83cefc4119),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1fb50269-75')#033[00m
Nov 29 03:32:12 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:12.761 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2231d78c-3fa8-4ade-a19d-88e80179ca98]: (4, ('Sat Nov 29 08:32:12 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-72c7496c-edc3-4e9f-b7d4-5c83cefc4119 (c6e843e9d0b2949a011f5e8c391ef00bdb6312a36014e79bc7c206690d74a45e)\nc6e843e9d0b2949a011f5e8c391ef00bdb6312a36014e79bc7c206690d74a45e\nSat Nov 29 08:32:12 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-72c7496c-edc3-4e9f-b7d4-5c83cefc4119 (c6e843e9d0b2949a011f5e8c391ef00bdb6312a36014e79bc7c206690d74a45e)\nc6e843e9d0b2949a011f5e8c391ef00bdb6312a36014e79bc7c206690d74a45e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:12 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:12.762 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[5eb81520-7de6-4cec-9afe-2ffd5bc85099]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:12 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:12.763 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72c7496c-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:32:12 np0005539552 kernel: tap72c7496c-e0: left promiscuous mode
Nov 29 03:32:12 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:12.771 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[780d2a59-82d2-4c4a-a5b2-02949be5d5e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:12 np0005539552 nova_compute[233724]: 2025-11-29 08:32:12.781 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:12 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:12.785 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[404011ee-97ed-454c-9899-bf712db96cd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:12 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:12.786 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6591c23a-515a-4dc0-ab48-090df8a1db7c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:12 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:12.804 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[97b277f6-3965-4c2c-8fbe-6b8614ba0917]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 817527, 'reachable_time': 23789, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301558, 'error': None, 'target': 'ovnmeta-72c7496c-edc3-4e9f-b7d4-5c83cefc4119', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:12 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:12.807 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-72c7496c-edc3-4e9f-b7d4-5c83cefc4119 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:32:12 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:12.807 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[c7353bcf-0a28-4a26-9c95-cfc5f78e07f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:12 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:12.808 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 1fb50269-75d2-4b10-bc29-bc1d14be7a5e in datapath 72c7496c-edc3-4e9f-b7d4-5c83cefc4119 unbound from our chassis#033[00m
Nov 29 03:32:12 np0005539552 systemd[1]: run-netns-ovnmeta\x2d72c7496c\x2dedc3\x2d4e9f\x2db7d4\x2d5c83cefc4119.mount: Deactivated successfully.
Nov 29 03:32:12 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:12.810 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 72c7496c-edc3-4e9f-b7d4-5c83cefc4119, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:32:12 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:12.811 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a8e7a228-7488-4654-9be1-cbfe9dbe5f92]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:12 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:12.811 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 1fb50269-75d2-4b10-bc29-bc1d14be7a5e in datapath 72c7496c-edc3-4e9f-b7d4-5c83cefc4119 unbound from our chassis#033[00m
Nov 29 03:32:12 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:12.813 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 72c7496c-edc3-4e9f-b7d4-5c83cefc4119, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:32:12 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:12.813 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a683f354-1f5d-4896-91d1-8e1744555ae1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:13.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:13 np0005539552 nova_compute[233724]: 2025-11-29 08:32:13.787 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405118.7856095, 101e7b80-d529-4f2a-87df-44512ead5b00 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:32:13 np0005539552 nova_compute[233724]: 2025-11-29 08:32:13.787 233728 INFO nova.compute.manager [-] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:32:13 np0005539552 nova_compute[233724]: 2025-11-29 08:32:13.806 233728 DEBUG nova.compute.manager [None req-8d9f4002-fbec-439c-ab6c-5d22b466d1e2 - - - - - -] [instance: 101e7b80-d529-4f2a-87df-44512ead5b00] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:32:13 np0005539552 nova_compute[233724]: 2025-11-29 08:32:13.891 233728 DEBUG nova.network.neutron [req-1fddf883-2753-4628-812b-e185e1365d34 req-1b86fb01-ec77-458b-81ca-c9c064710ee8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Updated VIF entry in instance network info cache for port 1fb50269-75d2-4b10-bc29-bc1d14be7a5e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:32:13 np0005539552 nova_compute[233724]: 2025-11-29 08:32:13.892 233728 DEBUG nova.network.neutron [req-1fddf883-2753-4628-812b-e185e1365d34 req-1b86fb01-ec77-458b-81ca-c9c064710ee8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Updating instance_info_cache with network_info: [{"id": "1fb50269-75d2-4b10-bc29-bc1d14be7a5e", "address": "fa:16:3e:8e:5e:c5", "network": {"id": "72c7496c-edc3-4e9f-b7d4-5c83cefc4119", "bridge": "br-int", "label": "tempest-network-smoke--2112416887", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1fb50269-75", "ovs_interfaceid": "1fb50269-75d2-4b10-bc29-bc1d14be7a5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:32:13 np0005539552 nova_compute[233724]: 2025-11-29 08:32:13.914 233728 DEBUG oslo_concurrency.lockutils [req-1fddf883-2753-4628-812b-e185e1365d34 req-1b86fb01-ec77-458b-81ca-c9c064710ee8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-0728e9c2-5ec4-413b-aca6-7ee6cd645689" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:32:13 np0005539552 nova_compute[233724]: 2025-11-29 08:32:13.926 233728 INFO nova.virt.libvirt.driver [None req-e374fbd0-adbc-458a-9d30-3c7ff3905ace fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Deleting instance files /var/lib/nova/instances/0728e9c2-5ec4-413b-aca6-7ee6cd645689_del#033[00m
Nov 29 03:32:13 np0005539552 nova_compute[233724]: 2025-11-29 08:32:13.926 233728 INFO nova.virt.libvirt.driver [None req-e374fbd0-adbc-458a-9d30-3c7ff3905ace fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Deletion of /var/lib/nova/instances/0728e9c2-5ec4-413b-aca6-7ee6cd645689_del complete#033[00m
Nov 29 03:32:13 np0005539552 ceph-osd[79800]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Nov 29 03:32:13 np0005539552 nova_compute[233724]: 2025-11-29 08:32:13.977 233728 INFO nova.compute.manager [None req-e374fbd0-adbc-458a-9d30-3c7ff3905ace fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Took 1.57 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:32:13 np0005539552 nova_compute[233724]: 2025-11-29 08:32:13.978 233728 DEBUG oslo.service.loopingcall [None req-e374fbd0-adbc-458a-9d30-3c7ff3905ace fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:32:13 np0005539552 nova_compute[233724]: 2025-11-29 08:32:13.979 233728 DEBUG nova.compute.manager [-] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:32:13 np0005539552 nova_compute[233724]: 2025-11-29 08:32:13.979 233728 DEBUG nova.network.neutron [-] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:32:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:32:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:14.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:32:14 np0005539552 nova_compute[233724]: 2025-11-29 08:32:14.534 233728 DEBUG nova.network.neutron [-] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:32:14 np0005539552 nova_compute[233724]: 2025-11-29 08:32:14.554 233728 INFO nova.compute.manager [-] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Took 0.57 seconds to deallocate network for instance.#033[00m
Nov 29 03:32:14 np0005539552 nova_compute[233724]: 2025-11-29 08:32:14.611 233728 DEBUG oslo_concurrency.lockutils [None req-e374fbd0-adbc-458a-9d30-3c7ff3905ace fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:14 np0005539552 nova_compute[233724]: 2025-11-29 08:32:14.611 233728 DEBUG oslo_concurrency.lockutils [None req-e374fbd0-adbc-458a-9d30-3c7ff3905ace fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:14 np0005539552 nova_compute[233724]: 2025-11-29 08:32:14.679 233728 DEBUG oslo_concurrency.processutils [None req-e374fbd0-adbc-458a-9d30-3c7ff3905ace fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:14 np0005539552 nova_compute[233724]: 2025-11-29 08:32:14.775 233728 DEBUG nova.compute.manager [req-b8cc713f-0dc1-498f-b463-8db76a6ec8f9 req-e1f7a22d-7037-421d-a6a1-a099fa5027a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Received event network-vif-unplugged-1fb50269-75d2-4b10-bc29-bc1d14be7a5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:32:14 np0005539552 nova_compute[233724]: 2025-11-29 08:32:14.776 233728 DEBUG oslo_concurrency.lockutils [req-b8cc713f-0dc1-498f-b463-8db76a6ec8f9 req-e1f7a22d-7037-421d-a6a1-a099fa5027a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "0728e9c2-5ec4-413b-aca6-7ee6cd645689-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:14 np0005539552 nova_compute[233724]: 2025-11-29 08:32:14.777 233728 DEBUG oslo_concurrency.lockutils [req-b8cc713f-0dc1-498f-b463-8db76a6ec8f9 req-e1f7a22d-7037-421d-a6a1-a099fa5027a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "0728e9c2-5ec4-413b-aca6-7ee6cd645689-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:14 np0005539552 nova_compute[233724]: 2025-11-29 08:32:14.777 233728 DEBUG oslo_concurrency.lockutils [req-b8cc713f-0dc1-498f-b463-8db76a6ec8f9 req-e1f7a22d-7037-421d-a6a1-a099fa5027a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "0728e9c2-5ec4-413b-aca6-7ee6cd645689-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:14 np0005539552 nova_compute[233724]: 2025-11-29 08:32:14.778 233728 DEBUG nova.compute.manager [req-b8cc713f-0dc1-498f-b463-8db76a6ec8f9 req-e1f7a22d-7037-421d-a6a1-a099fa5027a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] No waiting events found dispatching network-vif-unplugged-1fb50269-75d2-4b10-bc29-bc1d14be7a5e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:32:14 np0005539552 nova_compute[233724]: 2025-11-29 08:32:14.778 233728 DEBUG nova.compute.manager [req-b8cc713f-0dc1-498f-b463-8db76a6ec8f9 req-e1f7a22d-7037-421d-a6a1-a099fa5027a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Received event network-vif-unplugged-1fb50269-75d2-4b10-bc29-bc1d14be7a5e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:32:14 np0005539552 nova_compute[233724]: 2025-11-29 08:32:14.779 233728 DEBUG nova.compute.manager [req-b8cc713f-0dc1-498f-b463-8db76a6ec8f9 req-e1f7a22d-7037-421d-a6a1-a099fa5027a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Received event network-vif-plugged-1fb50269-75d2-4b10-bc29-bc1d14be7a5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:32:14 np0005539552 nova_compute[233724]: 2025-11-29 08:32:14.780 233728 DEBUG oslo_concurrency.lockutils [req-b8cc713f-0dc1-498f-b463-8db76a6ec8f9 req-e1f7a22d-7037-421d-a6a1-a099fa5027a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "0728e9c2-5ec4-413b-aca6-7ee6cd645689-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:14 np0005539552 nova_compute[233724]: 2025-11-29 08:32:14.780 233728 DEBUG oslo_concurrency.lockutils [req-b8cc713f-0dc1-498f-b463-8db76a6ec8f9 req-e1f7a22d-7037-421d-a6a1-a099fa5027a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "0728e9c2-5ec4-413b-aca6-7ee6cd645689-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:14 np0005539552 nova_compute[233724]: 2025-11-29 08:32:14.781 233728 DEBUG oslo_concurrency.lockutils [req-b8cc713f-0dc1-498f-b463-8db76a6ec8f9 req-e1f7a22d-7037-421d-a6a1-a099fa5027a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "0728e9c2-5ec4-413b-aca6-7ee6cd645689-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:14 np0005539552 nova_compute[233724]: 2025-11-29 08:32:14.781 233728 DEBUG nova.compute.manager [req-b8cc713f-0dc1-498f-b463-8db76a6ec8f9 req-e1f7a22d-7037-421d-a6a1-a099fa5027a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] No waiting events found dispatching network-vif-plugged-1fb50269-75d2-4b10-bc29-bc1d14be7a5e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:32:14 np0005539552 nova_compute[233724]: 2025-11-29 08:32:14.781 233728 WARNING nova.compute.manager [req-b8cc713f-0dc1-498f-b463-8db76a6ec8f9 req-e1f7a22d-7037-421d-a6a1-a099fa5027a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Received unexpected event network-vif-plugged-1fb50269-75d2-4b10-bc29-bc1d14be7a5e for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:32:14 np0005539552 nova_compute[233724]: 2025-11-29 08:32:14.782 233728 DEBUG nova.compute.manager [req-b8cc713f-0dc1-498f-b463-8db76a6ec8f9 req-e1f7a22d-7037-421d-a6a1-a099fa5027a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Received event network-vif-plugged-1fb50269-75d2-4b10-bc29-bc1d14be7a5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:32:14 np0005539552 nova_compute[233724]: 2025-11-29 08:32:14.782 233728 DEBUG oslo_concurrency.lockutils [req-b8cc713f-0dc1-498f-b463-8db76a6ec8f9 req-e1f7a22d-7037-421d-a6a1-a099fa5027a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "0728e9c2-5ec4-413b-aca6-7ee6cd645689-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:14 np0005539552 nova_compute[233724]: 2025-11-29 08:32:14.783 233728 DEBUG oslo_concurrency.lockutils [req-b8cc713f-0dc1-498f-b463-8db76a6ec8f9 req-e1f7a22d-7037-421d-a6a1-a099fa5027a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "0728e9c2-5ec4-413b-aca6-7ee6cd645689-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:14 np0005539552 nova_compute[233724]: 2025-11-29 08:32:14.783 233728 DEBUG oslo_concurrency.lockutils [req-b8cc713f-0dc1-498f-b463-8db76a6ec8f9 req-e1f7a22d-7037-421d-a6a1-a099fa5027a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "0728e9c2-5ec4-413b-aca6-7ee6cd645689-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:14 np0005539552 nova_compute[233724]: 2025-11-29 08:32:14.784 233728 DEBUG nova.compute.manager [req-b8cc713f-0dc1-498f-b463-8db76a6ec8f9 req-e1f7a22d-7037-421d-a6a1-a099fa5027a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] No waiting events found dispatching network-vif-plugged-1fb50269-75d2-4b10-bc29-bc1d14be7a5e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:32:14 np0005539552 nova_compute[233724]: 2025-11-29 08:32:14.784 233728 WARNING nova.compute.manager [req-b8cc713f-0dc1-498f-b463-8db76a6ec8f9 req-e1f7a22d-7037-421d-a6a1-a099fa5027a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Received unexpected event network-vif-plugged-1fb50269-75d2-4b10-bc29-bc1d14be7a5e for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:32:14 np0005539552 nova_compute[233724]: 2025-11-29 08:32:14.785 233728 DEBUG nova.compute.manager [req-b8cc713f-0dc1-498f-b463-8db76a6ec8f9 req-e1f7a22d-7037-421d-a6a1-a099fa5027a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Received event network-vif-plugged-1fb50269-75d2-4b10-bc29-bc1d14be7a5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:32:14 np0005539552 nova_compute[233724]: 2025-11-29 08:32:14.785 233728 DEBUG oslo_concurrency.lockutils [req-b8cc713f-0dc1-498f-b463-8db76a6ec8f9 req-e1f7a22d-7037-421d-a6a1-a099fa5027a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "0728e9c2-5ec4-413b-aca6-7ee6cd645689-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:14 np0005539552 nova_compute[233724]: 2025-11-29 08:32:14.786 233728 DEBUG oslo_concurrency.lockutils [req-b8cc713f-0dc1-498f-b463-8db76a6ec8f9 req-e1f7a22d-7037-421d-a6a1-a099fa5027a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "0728e9c2-5ec4-413b-aca6-7ee6cd645689-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:14 np0005539552 nova_compute[233724]: 2025-11-29 08:32:14.786 233728 DEBUG oslo_concurrency.lockutils [req-b8cc713f-0dc1-498f-b463-8db76a6ec8f9 req-e1f7a22d-7037-421d-a6a1-a099fa5027a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "0728e9c2-5ec4-413b-aca6-7ee6cd645689-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:14 np0005539552 nova_compute[233724]: 2025-11-29 08:32:14.786 233728 DEBUG nova.compute.manager [req-b8cc713f-0dc1-498f-b463-8db76a6ec8f9 req-e1f7a22d-7037-421d-a6a1-a099fa5027a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] No waiting events found dispatching network-vif-plugged-1fb50269-75d2-4b10-bc29-bc1d14be7a5e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:32:14 np0005539552 nova_compute[233724]: 2025-11-29 08:32:14.787 233728 WARNING nova.compute.manager [req-b8cc713f-0dc1-498f-b463-8db76a6ec8f9 req-e1f7a22d-7037-421d-a6a1-a099fa5027a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Received unexpected event network-vif-plugged-1fb50269-75d2-4b10-bc29-bc1d14be7a5e for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:32:14 np0005539552 nova_compute[233724]: 2025-11-29 08:32:14.787 233728 DEBUG nova.compute.manager [req-b8cc713f-0dc1-498f-b463-8db76a6ec8f9 req-e1f7a22d-7037-421d-a6a1-a099fa5027a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Received event network-vif-unplugged-1fb50269-75d2-4b10-bc29-bc1d14be7a5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:32:14 np0005539552 nova_compute[233724]: 2025-11-29 08:32:14.788 233728 DEBUG oslo_concurrency.lockutils [req-b8cc713f-0dc1-498f-b463-8db76a6ec8f9 req-e1f7a22d-7037-421d-a6a1-a099fa5027a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "0728e9c2-5ec4-413b-aca6-7ee6cd645689-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:14 np0005539552 nova_compute[233724]: 2025-11-29 08:32:14.788 233728 DEBUG oslo_concurrency.lockutils [req-b8cc713f-0dc1-498f-b463-8db76a6ec8f9 req-e1f7a22d-7037-421d-a6a1-a099fa5027a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "0728e9c2-5ec4-413b-aca6-7ee6cd645689-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:14 np0005539552 nova_compute[233724]: 2025-11-29 08:32:14.789 233728 DEBUG oslo_concurrency.lockutils [req-b8cc713f-0dc1-498f-b463-8db76a6ec8f9 req-e1f7a22d-7037-421d-a6a1-a099fa5027a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "0728e9c2-5ec4-413b-aca6-7ee6cd645689-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:14 np0005539552 nova_compute[233724]: 2025-11-29 08:32:14.789 233728 DEBUG nova.compute.manager [req-b8cc713f-0dc1-498f-b463-8db76a6ec8f9 req-e1f7a22d-7037-421d-a6a1-a099fa5027a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] No waiting events found dispatching network-vif-unplugged-1fb50269-75d2-4b10-bc29-bc1d14be7a5e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:32:14 np0005539552 nova_compute[233724]: 2025-11-29 08:32:14.790 233728 DEBUG nova.compute.manager [req-b8cc713f-0dc1-498f-b463-8db76a6ec8f9 req-e1f7a22d-7037-421d-a6a1-a099fa5027a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Received event network-vif-unplugged-1fb50269-75d2-4b10-bc29-bc1d14be7a5e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:32:14 np0005539552 nova_compute[233724]: 2025-11-29 08:32:14.790 233728 DEBUG nova.compute.manager [req-b8cc713f-0dc1-498f-b463-8db76a6ec8f9 req-e1f7a22d-7037-421d-a6a1-a099fa5027a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Received event network-vif-plugged-1fb50269-75d2-4b10-bc29-bc1d14be7a5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:32:14 np0005539552 nova_compute[233724]: 2025-11-29 08:32:14.791 233728 DEBUG oslo_concurrency.lockutils [req-b8cc713f-0dc1-498f-b463-8db76a6ec8f9 req-e1f7a22d-7037-421d-a6a1-a099fa5027a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "0728e9c2-5ec4-413b-aca6-7ee6cd645689-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:14 np0005539552 nova_compute[233724]: 2025-11-29 08:32:14.791 233728 DEBUG oslo_concurrency.lockutils [req-b8cc713f-0dc1-498f-b463-8db76a6ec8f9 req-e1f7a22d-7037-421d-a6a1-a099fa5027a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "0728e9c2-5ec4-413b-aca6-7ee6cd645689-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:14 np0005539552 nova_compute[233724]: 2025-11-29 08:32:14.792 233728 DEBUG oslo_concurrency.lockutils [req-b8cc713f-0dc1-498f-b463-8db76a6ec8f9 req-e1f7a22d-7037-421d-a6a1-a099fa5027a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "0728e9c2-5ec4-413b-aca6-7ee6cd645689-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:14 np0005539552 nova_compute[233724]: 2025-11-29 08:32:14.792 233728 DEBUG nova.compute.manager [req-b8cc713f-0dc1-498f-b463-8db76a6ec8f9 req-e1f7a22d-7037-421d-a6a1-a099fa5027a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] No waiting events found dispatching network-vif-plugged-1fb50269-75d2-4b10-bc29-bc1d14be7a5e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:32:14 np0005539552 nova_compute[233724]: 2025-11-29 08:32:14.793 233728 WARNING nova.compute.manager [req-b8cc713f-0dc1-498f-b463-8db76a6ec8f9 req-e1f7a22d-7037-421d-a6a1-a099fa5027a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Received unexpected event network-vif-plugged-1fb50269-75d2-4b10-bc29-bc1d14be7a5e for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:32:15 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:32:15 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1070657420' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:32:15 np0005539552 nova_compute[233724]: 2025-11-29 08:32:15.158 233728 DEBUG oslo_concurrency.processutils [None req-e374fbd0-adbc-458a-9d30-3c7ff3905ace fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:15 np0005539552 nova_compute[233724]: 2025-11-29 08:32:15.162 233728 DEBUG nova.compute.provider_tree [None req-e374fbd0-adbc-458a-9d30-3c7ff3905ace fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:32:15 np0005539552 nova_compute[233724]: 2025-11-29 08:32:15.180 233728 DEBUG nova.scheduler.client.report [None req-e374fbd0-adbc-458a-9d30-3c7ff3905ace fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:32:15 np0005539552 nova_compute[233724]: 2025-11-29 08:32:15.208 233728 DEBUG oslo_concurrency.lockutils [None req-e374fbd0-adbc-458a-9d30-3c7ff3905ace fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:15 np0005539552 nova_compute[233724]: 2025-11-29 08:32:15.225 233728 INFO nova.scheduler.client.report [None req-e374fbd0-adbc-458a-9d30-3c7ff3905ace fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Deleted allocations for instance 0728e9c2-5ec4-413b-aca6-7ee6cd645689#033[00m
Nov 29 03:32:15 np0005539552 nova_compute[233724]: 2025-11-29 08:32:15.299 233728 DEBUG oslo_concurrency.lockutils [None req-e374fbd0-adbc-458a-9d30-3c7ff3905ace fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "0728e9c2-5ec4-413b-aca6-7ee6cd645689" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.890s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:32:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:15.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:32:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:32:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:16.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:32:16 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:32:16 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3406432144' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:32:16 np0005539552 nova_compute[233724]: 2025-11-29 08:32:16.841 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:16 np0005539552 nova_compute[233724]: 2025-11-29 08:32:16.845 233728 DEBUG nova.compute.manager [req-aed65457-8bca-4853-9625-525680f41933 req-ab1f28c2-528b-4cda-a0e9-7685541b3a06 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Received event network-vif-deleted-1fb50269-75d2-4b10-bc29-bc1d14be7a5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:32:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:17.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:17 np0005539552 nova_compute[233724]: 2025-11-29 08:32:17.754 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:18.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:18 np0005539552 nova_compute[233724]: 2025-11-29 08:32:18.586 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:18 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:32:18 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:32:18 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:32:18 np0005539552 nova_compute[233724]: 2025-11-29 08:32:18.790 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:32:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:19.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:32:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:20.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:20.638 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:20.639 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:20.639 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:32:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:21.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:32:21 np0005539552 nova_compute[233724]: 2025-11-29 08:32:21.843 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:22.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:22 np0005539552 nova_compute[233724]: 2025-11-29 08:32:22.756 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:23 np0005539552 podman[301749]: 2025-11-29 08:32:23.526555135 +0000 UTC m=+0.068402941 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 29 03:32:23 np0005539552 podman[301750]: 2025-11-29 08:32:23.543469431 +0000 UTC m=+0.074744012 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 03:32:23 np0005539552 podman[301751]: 2025-11-29 08:32:23.561365472 +0000 UTC m=+0.094264207 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:32:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:23.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:32:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:24.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:32:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:25.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:26.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:26 np0005539552 nova_compute[233724]: 2025-11-29 08:32:26.845 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:27.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:27 np0005539552 nova_compute[233724]: 2025-11-29 08:32:27.660 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405132.659014, 0728e9c2-5ec4-413b-aca6-7ee6cd645689 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:32:27 np0005539552 nova_compute[233724]: 2025-11-29 08:32:27.661 233728 INFO nova.compute.manager [-] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:32:27 np0005539552 nova_compute[233724]: 2025-11-29 08:32:27.685 233728 DEBUG nova.compute.manager [None req-202594d5-08de-4141-b95a-d9b5d0b9a0fc - - - - - -] [instance: 0728e9c2-5ec4-413b-aca6-7ee6cd645689] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:32:27 np0005539552 nova_compute[233724]: 2025-11-29 08:32:27.757 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:28.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:29 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:32:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:29.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:29 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #55. Immutable memtables: 11.
Nov 29 03:32:29 np0005539552 nova_compute[233724]: 2025-11-29 08:32:29.883 233728 DEBUG oslo_concurrency.lockutils [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:29 np0005539552 nova_compute[233724]: 2025-11-29 08:32:29.883 233728 DEBUG oslo_concurrency.lockutils [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:29 np0005539552 nova_compute[233724]: 2025-11-29 08:32:29.902 233728 DEBUG nova.compute.manager [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:32:29 np0005539552 nova_compute[233724]: 2025-11-29 08:32:29.997 233728 DEBUG oslo_concurrency.lockutils [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:29 np0005539552 nova_compute[233724]: 2025-11-29 08:32:29.998 233728 DEBUG oslo_concurrency.lockutils [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:30 np0005539552 nova_compute[233724]: 2025-11-29 08:32:30.008 233728 DEBUG nova.virt.hardware [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:32:30 np0005539552 nova_compute[233724]: 2025-11-29 08:32:30.008 233728 INFO nova.compute.claims [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:32:30 np0005539552 nova_compute[233724]: 2025-11-29 08:32:30.156 233728 DEBUG oslo_concurrency.processutils [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:30 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:32:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:30.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:30 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:32:30 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1933983418' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:32:30 np0005539552 nova_compute[233724]: 2025-11-29 08:32:30.642 233728 DEBUG oslo_concurrency.processutils [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:30 np0005539552 nova_compute[233724]: 2025-11-29 08:32:30.653 233728 DEBUG nova.compute.provider_tree [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:32:30 np0005539552 nova_compute[233724]: 2025-11-29 08:32:30.680 233728 DEBUG nova.scheduler.client.report [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:32:30 np0005539552 nova_compute[233724]: 2025-11-29 08:32:30.730 233728 DEBUG oslo_concurrency.lockutils [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:30 np0005539552 nova_compute[233724]: 2025-11-29 08:32:30.731 233728 DEBUG nova.compute.manager [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:32:30 np0005539552 nova_compute[233724]: 2025-11-29 08:32:30.779 233728 DEBUG nova.compute.manager [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:32:30 np0005539552 nova_compute[233724]: 2025-11-29 08:32:30.779 233728 DEBUG nova.network.neutron [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:32:30 np0005539552 nova_compute[233724]: 2025-11-29 08:32:30.803 233728 INFO nova.virt.libvirt.driver [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:32:30 np0005539552 nova_compute[233724]: 2025-11-29 08:32:30.825 233728 DEBUG nova.compute.manager [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:32:30 np0005539552 nova_compute[233724]: 2025-11-29 08:32:30.944 233728 DEBUG nova.compute.manager [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:32:30 np0005539552 nova_compute[233724]: 2025-11-29 08:32:30.947 233728 DEBUG nova.virt.libvirt.driver [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:32:30 np0005539552 nova_compute[233724]: 2025-11-29 08:32:30.948 233728 INFO nova.virt.libvirt.driver [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Creating image(s)#033[00m
Nov 29 03:32:30 np0005539552 nova_compute[233724]: 2025-11-29 08:32:30.980 233728 DEBUG nova.storage.rbd_utils [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] rbd image 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:32:31 np0005539552 nova_compute[233724]: 2025-11-29 08:32:31.013 233728 DEBUG nova.storage.rbd_utils [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] rbd image 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:32:31 np0005539552 nova_compute[233724]: 2025-11-29 08:32:31.044 233728 DEBUG nova.storage.rbd_utils [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] rbd image 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:32:31 np0005539552 nova_compute[233724]: 2025-11-29 08:32:31.048 233728 DEBUG oslo_concurrency.processutils [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:31 np0005539552 nova_compute[233724]: 2025-11-29 08:32:31.089 233728 DEBUG nova.policy [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c5b0953fb7cc415fb26cf4ffdd5908c6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd4f6db81949d487b853d7567f8a2e6d4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:32:31 np0005539552 nova_compute[233724]: 2025-11-29 08:32:31.132 233728 DEBUG oslo_concurrency.processutils [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:31 np0005539552 nova_compute[233724]: 2025-11-29 08:32:31.133 233728 DEBUG oslo_concurrency.lockutils [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:31 np0005539552 nova_compute[233724]: 2025-11-29 08:32:31.134 233728 DEBUG oslo_concurrency.lockutils [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:31 np0005539552 nova_compute[233724]: 2025-11-29 08:32:31.134 233728 DEBUG oslo_concurrency.lockutils [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:31 np0005539552 nova_compute[233724]: 2025-11-29 08:32:31.162 233728 DEBUG nova.storage.rbd_utils [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] rbd image 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:32:31 np0005539552 nova_compute[233724]: 2025-11-29 08:32:31.166 233728 DEBUG oslo_concurrency.processutils [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:32:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:31.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:32:31 np0005539552 nova_compute[233724]: 2025-11-29 08:32:31.847 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:32:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:32.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:32:32 np0005539552 nova_compute[233724]: 2025-11-29 08:32:32.522 233728 DEBUG nova.network.neutron [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Successfully created port: 5369324b-4a12-4cff-807c-444de53025fa _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:32:32 np0005539552 nova_compute[233724]: 2025-11-29 08:32:32.532 233728 DEBUG oslo_concurrency.processutils [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.366s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:32 np0005539552 nova_compute[233724]: 2025-11-29 08:32:32.635 233728 DEBUG nova.storage.rbd_utils [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] resizing rbd image 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:32:32 np0005539552 nova_compute[233724]: 2025-11-29 08:32:32.922 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:33 np0005539552 nova_compute[233724]: 2025-11-29 08:32:33.067 233728 DEBUG nova.objects.instance [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lazy-loading 'migration_context' on Instance uuid 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:32:33 np0005539552 nova_compute[233724]: 2025-11-29 08:32:33.094 233728 DEBUG nova.virt.libvirt.driver [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:32:33 np0005539552 nova_compute[233724]: 2025-11-29 08:32:33.095 233728 DEBUG nova.virt.libvirt.driver [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Ensure instance console log exists: /var/lib/nova/instances/8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:32:33 np0005539552 nova_compute[233724]: 2025-11-29 08:32:33.096 233728 DEBUG oslo_concurrency.lockutils [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:33 np0005539552 nova_compute[233724]: 2025-11-29 08:32:33.096 233728 DEBUG oslo_concurrency.lockutils [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:33 np0005539552 nova_compute[233724]: 2025-11-29 08:32:33.096 233728 DEBUG oslo_concurrency.lockutils [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:33.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:33 np0005539552 nova_compute[233724]: 2025-11-29 08:32:33.913 233728 DEBUG nova.network.neutron [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Successfully updated port: 5369324b-4a12-4cff-807c-444de53025fa _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:32:33 np0005539552 nova_compute[233724]: 2025-11-29 08:32:33.930 233728 DEBUG oslo_concurrency.lockutils [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "refresh_cache-8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:32:33 np0005539552 nova_compute[233724]: 2025-11-29 08:32:33.930 233728 DEBUG oslo_concurrency.lockutils [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquired lock "refresh_cache-8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:32:33 np0005539552 nova_compute[233724]: 2025-11-29 08:32:33.931 233728 DEBUG nova.network.neutron [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:32:34 np0005539552 nova_compute[233724]: 2025-11-29 08:32:34.049 233728 DEBUG nova.compute.manager [req-8e4ddf34-09fb-4be0-b5df-87f1377b40e5 req-5b3a5f21-3342-490d-b685-ee63423c284b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Received event network-changed-5369324b-4a12-4cff-807c-444de53025fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:32:34 np0005539552 nova_compute[233724]: 2025-11-29 08:32:34.050 233728 DEBUG nova.compute.manager [req-8e4ddf34-09fb-4be0-b5df-87f1377b40e5 req-5b3a5f21-3342-490d-b685-ee63423c284b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Refreshing instance network info cache due to event network-changed-5369324b-4a12-4cff-807c-444de53025fa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:32:34 np0005539552 nova_compute[233724]: 2025-11-29 08:32:34.051 233728 DEBUG oslo_concurrency.lockutils [req-8e4ddf34-09fb-4be0-b5df-87f1377b40e5 req-5b3a5f21-3342-490d-b685-ee63423c284b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:32:34 np0005539552 nova_compute[233724]: 2025-11-29 08:32:34.129 233728 DEBUG nova.network.neutron [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:32:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:34.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:35 np0005539552 nova_compute[233724]: 2025-11-29 08:32:35.386 233728 DEBUG nova.network.neutron [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Updating instance_info_cache with network_info: [{"id": "5369324b-4a12-4cff-807c-444de53025fa", "address": "fa:16:3e:a3:51:12", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5369324b-4a", "ovs_interfaceid": "5369324b-4a12-4cff-807c-444de53025fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:32:35 np0005539552 nova_compute[233724]: 2025-11-29 08:32:35.416 233728 DEBUG oslo_concurrency.lockutils [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Releasing lock "refresh_cache-8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:32:35 np0005539552 nova_compute[233724]: 2025-11-29 08:32:35.416 233728 DEBUG nova.compute.manager [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Instance network_info: |[{"id": "5369324b-4a12-4cff-807c-444de53025fa", "address": "fa:16:3e:a3:51:12", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5369324b-4a", "ovs_interfaceid": "5369324b-4a12-4cff-807c-444de53025fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:32:35 np0005539552 nova_compute[233724]: 2025-11-29 08:32:35.417 233728 DEBUG oslo_concurrency.lockutils [req-8e4ddf34-09fb-4be0-b5df-87f1377b40e5 req-5b3a5f21-3342-490d-b685-ee63423c284b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:32:35 np0005539552 nova_compute[233724]: 2025-11-29 08:32:35.417 233728 DEBUG nova.network.neutron [req-8e4ddf34-09fb-4be0-b5df-87f1377b40e5 req-5b3a5f21-3342-490d-b685-ee63423c284b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Refreshing network info cache for port 5369324b-4a12-4cff-807c-444de53025fa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:32:35 np0005539552 nova_compute[233724]: 2025-11-29 08:32:35.419 233728 DEBUG nova.virt.libvirt.driver [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Start _get_guest_xml network_info=[{"id": "5369324b-4a12-4cff-807c-444de53025fa", "address": "fa:16:3e:a3:51:12", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5369324b-4a", "ovs_interfaceid": "5369324b-4a12-4cff-807c-444de53025fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:32:35 np0005539552 nova_compute[233724]: 2025-11-29 08:32:35.424 233728 WARNING nova.virt.libvirt.driver [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:32:35 np0005539552 nova_compute[233724]: 2025-11-29 08:32:35.431 233728 DEBUG nova.virt.libvirt.host [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:32:35 np0005539552 nova_compute[233724]: 2025-11-29 08:32:35.432 233728 DEBUG nova.virt.libvirt.host [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:32:35 np0005539552 nova_compute[233724]: 2025-11-29 08:32:35.436 233728 DEBUG nova.virt.libvirt.host [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:32:35 np0005539552 nova_compute[233724]: 2025-11-29 08:32:35.436 233728 DEBUG nova.virt.libvirt.host [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:32:35 np0005539552 nova_compute[233724]: 2025-11-29 08:32:35.437 233728 DEBUG nova.virt.libvirt.driver [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:32:35 np0005539552 nova_compute[233724]: 2025-11-29 08:32:35.437 233728 DEBUG nova.virt.hardware [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:32:35 np0005539552 nova_compute[233724]: 2025-11-29 08:32:35.438 233728 DEBUG nova.virt.hardware [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:32:35 np0005539552 nova_compute[233724]: 2025-11-29 08:32:35.438 233728 DEBUG nova.virt.hardware [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:32:35 np0005539552 nova_compute[233724]: 2025-11-29 08:32:35.438 233728 DEBUG nova.virt.hardware [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:32:35 np0005539552 nova_compute[233724]: 2025-11-29 08:32:35.438 233728 DEBUG nova.virt.hardware [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:32:35 np0005539552 nova_compute[233724]: 2025-11-29 08:32:35.439 233728 DEBUG nova.virt.hardware [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:32:35 np0005539552 nova_compute[233724]: 2025-11-29 08:32:35.439 233728 DEBUG nova.virt.hardware [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:32:35 np0005539552 nova_compute[233724]: 2025-11-29 08:32:35.439 233728 DEBUG nova.virt.hardware [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:32:35 np0005539552 nova_compute[233724]: 2025-11-29 08:32:35.439 233728 DEBUG nova.virt.hardware [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:32:35 np0005539552 nova_compute[233724]: 2025-11-29 08:32:35.439 233728 DEBUG nova.virt.hardware [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:32:35 np0005539552 nova_compute[233724]: 2025-11-29 08:32:35.440 233728 DEBUG nova.virt.hardware [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:32:35 np0005539552 nova_compute[233724]: 2025-11-29 08:32:35.442 233728 DEBUG oslo_concurrency.processutils [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:35.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:35 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:32:35 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4205495701' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:32:35 np0005539552 nova_compute[233724]: 2025-11-29 08:32:35.976 233728 DEBUG oslo_concurrency.processutils [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:36 np0005539552 nova_compute[233724]: 2025-11-29 08:32:36.005 233728 DEBUG nova.storage.rbd_utils [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] rbd image 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:32:36 np0005539552 nova_compute[233724]: 2025-11-29 08:32:36.009 233728 DEBUG oslo_concurrency.processutils [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:36.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:36 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:32:36 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1508818477' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:32:36 np0005539552 nova_compute[233724]: 2025-11-29 08:32:36.478 233728 DEBUG oslo_concurrency.processutils [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:36 np0005539552 nova_compute[233724]: 2025-11-29 08:32:36.480 233728 DEBUG nova.virt.libvirt.vif [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:32:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='multiattach-server-0',id=164,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEFZDUAh1tFHT85mctamdge/Jlh9j7Mmalvlf2a+E48/dJ4b3TzL46vHd8+krJsRkbdr2BabH5xlFnXxT+hxq+KJlLzOnOaQuAWI18v9sbbjA8bZzR2tugMjasg7rWhFwg==',key_name='tempest-keypair-2058861619',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d4f6db81949d487b853d7567f8a2e6d4',ramdisk_id='',reservation_id='r-3f2qzfjr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-573425942',owner_user_name='tempest-AttachVolumeMultiAttachTest-573425942-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:32:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5b0953fb7cc415fb26cf4ffdd5908c6',uuid=8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5369324b-4a12-4cff-807c-444de53025fa", "address": "fa:16:3e:a3:51:12", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5369324b-4a", "ovs_interfaceid": "5369324b-4a12-4cff-807c-444de53025fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:32:36 np0005539552 nova_compute[233724]: 2025-11-29 08:32:36.480 233728 DEBUG nova.network.os_vif_util [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converting VIF {"id": "5369324b-4a12-4cff-807c-444de53025fa", "address": "fa:16:3e:a3:51:12", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5369324b-4a", "ovs_interfaceid": "5369324b-4a12-4cff-807c-444de53025fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:32:36 np0005539552 nova_compute[233724]: 2025-11-29 08:32:36.481 233728 DEBUG nova.network.os_vif_util [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:51:12,bridge_name='br-int',has_traffic_filtering=True,id=5369324b-4a12-4cff-807c-444de53025fa,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5369324b-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:32:36 np0005539552 nova_compute[233724]: 2025-11-29 08:32:36.482 233728 DEBUG nova.objects.instance [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:32:36 np0005539552 nova_compute[233724]: 2025-11-29 08:32:36.496 233728 DEBUG nova.virt.libvirt.driver [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:32:36 np0005539552 nova_compute[233724]:  <uuid>8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a</uuid>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:  <name>instance-000000a4</name>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:32:36 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:      <nova:name>multiattach-server-0</nova:name>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:32:35</nova:creationTime>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:32:36 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:        <nova:user uuid="c5b0953fb7cc415fb26cf4ffdd5908c6">tempest-AttachVolumeMultiAttachTest-573425942-project-member</nova:user>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:        <nova:project uuid="d4f6db81949d487b853d7567f8a2e6d4">tempest-AttachVolumeMultiAttachTest-573425942</nova:project>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:        <nova:port uuid="5369324b-4a12-4cff-807c-444de53025fa">
Nov 29 03:32:36 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:      <entry name="serial">8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a</entry>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:      <entry name="uuid">8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a</entry>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:32:36 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a_disk">
Nov 29 03:32:36 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:32:36 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:32:36 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a_disk.config">
Nov 29 03:32:36 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:32:36 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:32:36 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:a3:51:12"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:      <target dev="tap5369324b-4a"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:32:36 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a/console.log" append="off"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:32:36 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:32:36 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:32:36 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:32:36 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:32:36 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:32:36 np0005539552 nova_compute[233724]: 2025-11-29 08:32:36.497 233728 DEBUG nova.compute.manager [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Preparing to wait for external event network-vif-plugged-5369324b-4a12-4cff-807c-444de53025fa prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:32:36 np0005539552 nova_compute[233724]: 2025-11-29 08:32:36.497 233728 DEBUG oslo_concurrency.lockutils [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:36 np0005539552 nova_compute[233724]: 2025-11-29 08:32:36.498 233728 DEBUG oslo_concurrency.lockutils [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:36 np0005539552 nova_compute[233724]: 2025-11-29 08:32:36.498 233728 DEBUG oslo_concurrency.lockutils [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:36 np0005539552 nova_compute[233724]: 2025-11-29 08:32:36.499 233728 DEBUG nova.virt.libvirt.vif [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:32:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='multiattach-server-0',id=164,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEFZDUAh1tFHT85mctamdge/Jlh9j7Mmalvlf2a+E48/dJ4b3TzL46vHd8+krJsRkbdr2BabH5xlFnXxT+hxq+KJlLzOnOaQuAWI18v9sbbjA8bZzR2tugMjasg7rWhFwg==',key_name='tempest-keypair-2058861619',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d4f6db81949d487b853d7567f8a2e6d4',ramdisk_id='',reservation_id='r-3f2qzfjr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-573425942',owner_user_name='tempest-AttachVolumeMultiAttachTest-573425942-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:32:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5b0953fb7cc415fb26cf4ffdd5908c6',uuid=8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5369324b-4a12-4cff-807c-444de53025fa", "address": "fa:16:3e:a3:51:12", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5369324b-4a", "ovs_interfaceid": "5369324b-4a12-4cff-807c-444de53025fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:32:36 np0005539552 nova_compute[233724]: 2025-11-29 08:32:36.499 233728 DEBUG nova.network.os_vif_util [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converting VIF {"id": "5369324b-4a12-4cff-807c-444de53025fa", "address": "fa:16:3e:a3:51:12", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5369324b-4a", "ovs_interfaceid": "5369324b-4a12-4cff-807c-444de53025fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:32:36 np0005539552 nova_compute[233724]: 2025-11-29 08:32:36.499 233728 DEBUG nova.network.os_vif_util [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:51:12,bridge_name='br-int',has_traffic_filtering=True,id=5369324b-4a12-4cff-807c-444de53025fa,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5369324b-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:32:36 np0005539552 nova_compute[233724]: 2025-11-29 08:32:36.500 233728 DEBUG os_vif [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:51:12,bridge_name='br-int',has_traffic_filtering=True,id=5369324b-4a12-4cff-807c-444de53025fa,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5369324b-4a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:32:36 np0005539552 nova_compute[233724]: 2025-11-29 08:32:36.500 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:36 np0005539552 nova_compute[233724]: 2025-11-29 08:32:36.500 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:32:36 np0005539552 nova_compute[233724]: 2025-11-29 08:32:36.501 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:32:36 np0005539552 nova_compute[233724]: 2025-11-29 08:32:36.503 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:36 np0005539552 nova_compute[233724]: 2025-11-29 08:32:36.503 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5369324b-4a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:32:36 np0005539552 nova_compute[233724]: 2025-11-29 08:32:36.503 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5369324b-4a, col_values=(('external_ids', {'iface-id': '5369324b-4a12-4cff-807c-444de53025fa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a3:51:12', 'vm-uuid': '8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:32:36 np0005539552 nova_compute[233724]: 2025-11-29 08:32:36.505 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:36 np0005539552 NetworkManager[48926]: <info>  [1764405156.5062] manager: (tap5369324b-4a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/333)
Nov 29 03:32:36 np0005539552 nova_compute[233724]: 2025-11-29 08:32:36.507 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:32:36 np0005539552 nova_compute[233724]: 2025-11-29 08:32:36.514 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:36 np0005539552 nova_compute[233724]: 2025-11-29 08:32:36.515 233728 INFO os_vif [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:51:12,bridge_name='br-int',has_traffic_filtering=True,id=5369324b-4a12-4cff-807c-444de53025fa,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5369324b-4a')#033[00m
Nov 29 03:32:36 np0005539552 nova_compute[233724]: 2025-11-29 08:32:36.790 233728 DEBUG nova.virt.libvirt.driver [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:32:36 np0005539552 nova_compute[233724]: 2025-11-29 08:32:36.791 233728 DEBUG nova.virt.libvirt.driver [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:32:36 np0005539552 nova_compute[233724]: 2025-11-29 08:32:36.791 233728 DEBUG nova.virt.libvirt.driver [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] No VIF found with MAC fa:16:3e:a3:51:12, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:32:36 np0005539552 nova_compute[233724]: 2025-11-29 08:32:36.791 233728 INFO nova.virt.libvirt.driver [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Using config drive#033[00m
Nov 29 03:32:36 np0005539552 nova_compute[233724]: 2025-11-29 08:32:36.812 233728 DEBUG nova.storage.rbd_utils [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] rbd image 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:32:36 np0005539552 nova_compute[233724]: 2025-11-29 08:32:36.848 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:36 np0005539552 nova_compute[233724]: 2025-11-29 08:32:36.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:32:36 np0005539552 nova_compute[233724]: 2025-11-29 08:32:36.960 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:36 np0005539552 nova_compute[233724]: 2025-11-29 08:32:36.961 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:36 np0005539552 nova_compute[233724]: 2025-11-29 08:32:36.962 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:36 np0005539552 nova_compute[233724]: 2025-11-29 08:32:36.962 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:32:36 np0005539552 nova_compute[233724]: 2025-11-29 08:32:36.963 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:32:37 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3797317842' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:32:37 np0005539552 nova_compute[233724]: 2025-11-29 08:32:37.432 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:37 np0005539552 nova_compute[233724]: 2025-11-29 08:32:37.451 233728 INFO nova.virt.libvirt.driver [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Creating config drive at /var/lib/nova/instances/8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a/disk.config#033[00m
Nov 29 03:32:37 np0005539552 nova_compute[233724]: 2025-11-29 08:32:37.458 233728 DEBUG oslo_concurrency.processutils [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfgb55u7r execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:32:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:37.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:32:37 np0005539552 nova_compute[233724]: 2025-11-29 08:32:37.607 233728 DEBUG oslo_concurrency.processutils [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfgb55u7r" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:32:37 np0005539552 nova_compute[233724]: 2025-11-29 08:32:37.648 233728 DEBUG nova.storage.rbd_utils [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] rbd image 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:32:37 np0005539552 nova_compute[233724]: 2025-11-29 08:32:37.654 233728 DEBUG oslo_concurrency.processutils [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a/disk.config 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:32:37 np0005539552 nova_compute[233724]: 2025-11-29 08:32:37.759 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:32:37 np0005539552 nova_compute[233724]: 2025-11-29 08:32:37.760 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:32:37 np0005539552 nova_compute[233724]: 2025-11-29 08:32:37.913 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:32:37 np0005539552 nova_compute[233724]: 2025-11-29 08:32:37.914 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4163MB free_disk=20.785099029541016GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 03:32:37 np0005539552 nova_compute[233724]: 2025-11-29 08:32:37.914 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:32:37 np0005539552 nova_compute[233724]: 2025-11-29 08:32:37.914 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:32:37 np0005539552 nova_compute[233724]: 2025-11-29 08:32:37.949 233728 DEBUG nova.network.neutron [req-8e4ddf34-09fb-4be0-b5df-87f1377b40e5 req-5b3a5f21-3342-490d-b685-ee63423c284b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Updated VIF entry in instance network info cache for port 5369324b-4a12-4cff-807c-444de53025fa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:32:37 np0005539552 nova_compute[233724]: 2025-11-29 08:32:37.949 233728 DEBUG nova.network.neutron [req-8e4ddf34-09fb-4be0-b5df-87f1377b40e5 req-5b3a5f21-3342-490d-b685-ee63423c284b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Updating instance_info_cache with network_info: [{"id": "5369324b-4a12-4cff-807c-444de53025fa", "address": "fa:16:3e:a3:51:12", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5369324b-4a", "ovs_interfaceid": "5369324b-4a12-4cff-807c-444de53025fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:32:37 np0005539552 nova_compute[233724]: 2025-11-29 08:32:37.963 233728 DEBUG oslo_concurrency.lockutils [req-8e4ddf34-09fb-4be0-b5df-87f1377b40e5 req-5b3a5f21-3342-490d-b685-ee63423c284b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:32:37 np0005539552 nova_compute[233724]: 2025-11-29 08:32:37.975 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:32:37 np0005539552 nova_compute[233724]: 2025-11-29 08:32:37.976 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 03:32:37 np0005539552 nova_compute[233724]: 2025-11-29 08:32:37.976 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 03:32:38 np0005539552 nova_compute[233724]: 2025-11-29 08:32:38.020 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:32:38 np0005539552 nova_compute[233724]: 2025-11-29 08:32:38.207 233728 DEBUG oslo_concurrency.processutils [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a/disk.config 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:32:38 np0005539552 nova_compute[233724]: 2025-11-29 08:32:38.208 233728 INFO nova.virt.libvirt.driver [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Deleting local config drive /var/lib/nova/instances/8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a/disk.config because it was imported into RBD.
Nov 29 03:32:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:38 np0005539552 kernel: tap5369324b-4a: entered promiscuous mode
Nov 29 03:32:38 np0005539552 nova_compute[233724]: 2025-11-29 08:32:38.279 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:32:38 np0005539552 NetworkManager[48926]: <info>  [1764405158.2818] manager: (tap5369324b-4a): new Tun device (/org/freedesktop/NetworkManager/Devices/334)
Nov 29 03:32:38 np0005539552 ovn_controller[133798]: 2025-11-29T08:32:38Z|00769|binding|INFO|Claiming lport 5369324b-4a12-4cff-807c-444de53025fa for this chassis.
Nov 29 03:32:38 np0005539552 ovn_controller[133798]: 2025-11-29T08:32:38Z|00770|binding|INFO|5369324b-4a12-4cff-807c-444de53025fa: Claiming fa:16:3e:a3:51:12 10.100.0.11
Nov 29 03:32:38 np0005539552 nova_compute[233724]: 2025-11-29 08:32:38.288 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:32:38 np0005539552 nova_compute[233724]: 2025-11-29 08:32:38.303 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:32:38 np0005539552 NetworkManager[48926]: <info>  [1764405158.3248] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/335)
Nov 29 03:32:38 np0005539552 NetworkManager[48926]: <info>  [1764405158.3258] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/336)
Nov 29 03:32:38 np0005539552 nova_compute[233724]: 2025-11-29 08:32:38.323 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:38.330 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a3:51:12 10.100.0.11'], port_security=['fa:16:3e:a3:51:12 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4f6db81949d487b853d7567f8a2e6d4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '56b7aa4d-4e93-4da8-a338-5b87494d2fcd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=794eeb47-266a-47f4-b2a1-7a89e6c6ba82, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=5369324b-4a12-4cff-807c-444de53025fa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:38.332 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 5369324b-4a12-4cff-807c-444de53025fa in datapath ed50ff83-51d1-4b35-b85c-1cbe6fb812c6 bound to our chassis
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:38.334 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ed50ff83-51d1-4b35-b85c-1cbe6fb812c6
Nov 29 03:32:38 np0005539552 systemd-machined[196379]: New machine qemu-76-instance-000000a4.
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:38.352 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[49754276-a049-4ecc-9508-156d77d3a4f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:38.355 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH taped50ff83-51 in ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:38.358 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface taped50ff83-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:38.358 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[506d8637-d2e9-4f6a-921e-58099255115b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:32:38 np0005539552 systemd[1]: Started Virtual Machine qemu-76-instance-000000a4.
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:38.359 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f49e37eb-0a0e-4686-aa21-a1ea3605fe4c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:32:38 np0005539552 systemd-udevd[302265]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:38.380 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[0b6ad40a-456f-4881-a13f-1eae57600dbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:32:38 np0005539552 NetworkManager[48926]: <info>  [1764405158.3844] device (tap5369324b-4a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:32:38 np0005539552 NetworkManager[48926]: <info>  [1764405158.3852] device (tap5369324b-4a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:32:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:38.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:38.421 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[68aac4e1-0830-4f5f-b26d-d0480488d8e5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:32:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:32:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3719184743' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:38.459 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[d0d5520c-c2ce-459c-addf-5c8770777c77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:32:38 np0005539552 NetworkManager[48926]: <info>  [1764405158.4709] manager: (taped50ff83-50): new Veth device (/org/freedesktop/NetworkManager/Devices/337)
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:38.470 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[9d86a230-1fa3-4fc3-81a3-ce95b11e9a4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:32:38 np0005539552 systemd-udevd[302267]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:32:38 np0005539552 nova_compute[233724]: 2025-11-29 08:32:38.495 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:32:38 np0005539552 nova_compute[233724]: 2025-11-29 08:32:38.504 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:32:38 np0005539552 nova_compute[233724]: 2025-11-29 08:32:38.511 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:32:38 np0005539552 nova_compute[233724]: 2025-11-29 08:32:38.523 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:32:38 np0005539552 ovn_controller[133798]: 2025-11-29T08:32:38Z|00771|binding|INFO|Setting lport 5369324b-4a12-4cff-807c-444de53025fa ovn-installed in OVS
Nov 29 03:32:38 np0005539552 ovn_controller[133798]: 2025-11-29T08:32:38Z|00772|binding|INFO|Setting lport 5369324b-4a12-4cff-807c-444de53025fa up in Southbound
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:38.539 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[05d9b963-6005-441e-88c4-b40ac5769682]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:32:38 np0005539552 nova_compute[233724]: 2025-11-29 08:32:38.539 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:32:38 np0005539552 nova_compute[233724]: 2025-11-29 08:32:38.541 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:38.549 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[1d1ee6b6-2090-4bc4-9c7e-7512651d5254]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:32:38 np0005539552 NetworkManager[48926]: <info>  [1764405158.5814] device (taped50ff83-50): carrier: link connected
Nov 29 03:32:38 np0005539552 nova_compute[233724]: 2025-11-29 08:32:38.582 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:32:38 np0005539552 nova_compute[233724]: 2025-11-29 08:32:38.583 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:38.590 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[bcce727b-0e3c-4e09-adea-043c5ac10113]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:38.613 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ddf73079-e33e-4cb1-8b33-1043911f91fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'taped50ff83-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:60:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 225], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 820426, 'reachable_time': 40148, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302300, 'error': None, 'target': 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:38.634 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e452eef6-b7d0-4f90-a0a2-3ba01ddf657d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe51:60f2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 820426, 'tstamp': 820426}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302301, 'error': None, 'target': 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:38.658 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[daf4d5fe-12d7-484e-8f14-bf99f9967981]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'taped50ff83-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:60:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 225], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 820426, 'reachable_time': 40148, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 302302, 'error': None, 'target': 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:38.689 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0e44c26a-d9a5-47e4-a93c-1c8a479a1494]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:38.767 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b87b22a9-0a35-4b73-8cc3-b41433018d0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:38.768 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped50ff83-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:38.769 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:38.769 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=taped50ff83-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:32:38 np0005539552 NetworkManager[48926]: <info>  [1764405158.7722] manager: (taped50ff83-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/338)
Nov 29 03:32:38 np0005539552 kernel: taped50ff83-50: entered promiscuous mode
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:38.775 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=taped50ff83-50, col_values=(('external_ids', {'iface-id': '3b04b2c4-a6da-4677-b446-82ad68652b56'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:32:38 np0005539552 ovn_controller[133798]: 2025-11-29T08:32:38Z|00773|binding|INFO|Releasing lport 3b04b2c4-a6da-4677-b446-82ad68652b56 from this chassis (sb_readonly=0)
Nov 29 03:32:38 np0005539552 nova_compute[233724]: 2025-11-29 08:32:38.771 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:38 np0005539552 nova_compute[233724]: 2025-11-29 08:32:38.795 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:38.797 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ed50ff83-51d1-4b35-b85c-1cbe6fb812c6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ed50ff83-51d1-4b35-b85c-1cbe6fb812c6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:38.798 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f0b277bb-30a9-40f3-ab3d-b083ba8894a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:38.799 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/ed50ff83-51d1-4b35-b85c-1cbe6fb812c6.pid.haproxy
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID ed50ff83-51d1-4b35-b85c-1cbe6fb812c6
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:32:38 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:32:38.801 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'env', 'PROCESS_TAG=haproxy-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ed50ff83-51d1-4b35-b85c-1cbe6fb812c6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:32:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:32:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2502218910' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:32:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:32:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2502218910' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:32:39 np0005539552 podman[302335]: 2025-11-29 08:32:39.173902938 +0000 UTC m=+0.030875762 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:32:39 np0005539552 podman[302335]: 2025-11-29 08:32:39.270341353 +0000 UTC m=+0.127314147 container create 2f6d9cfd1d293067e66f1d55f06e08b4b45811d6360047891bbec63414bdb429 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:32:39 np0005539552 systemd[1]: Started libpod-conmon-2f6d9cfd1d293067e66f1d55f06e08b4b45811d6360047891bbec63414bdb429.scope.
Nov 29 03:32:39 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:32:39 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e89b1b6c42f4e70efd0d9366dd86ac5a1f7e82e2b64373cd151db32e57a6513/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:32:39 np0005539552 podman[302335]: 2025-11-29 08:32:39.392785467 +0000 UTC m=+0.249758301 container init 2f6d9cfd1d293067e66f1d55f06e08b4b45811d6360047891bbec63414bdb429 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:32:39 np0005539552 podman[302335]: 2025-11-29 08:32:39.404942134 +0000 UTC m=+0.261914958 container start 2f6d9cfd1d293067e66f1d55f06e08b4b45811d6360047891bbec63414bdb429 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:32:39 np0005539552 neutron-haproxy-ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6[302375]: [NOTICE]   (302390) : New worker (302392) forked
Nov 29 03:32:39 np0005539552 neutron-haproxy-ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6[302375]: [NOTICE]   (302390) : Loading success.
Nov 29 03:32:39 np0005539552 nova_compute[233724]: 2025-11-29 08:32:39.553 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405159.5529516, 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:32:39 np0005539552 nova_compute[233724]: 2025-11-29 08:32:39.554 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] VM Started (Lifecycle Event)#033[00m
Nov 29 03:32:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:39.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:39 np0005539552 nova_compute[233724]: 2025-11-29 08:32:39.976 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:40 np0005539552 nova_compute[233724]: 2025-11-29 08:32:40.278 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:32:40 np0005539552 nova_compute[233724]: 2025-11-29 08:32:40.284 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405159.5538185, 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:32:40 np0005539552 nova_compute[233724]: 2025-11-29 08:32:40.285 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:32:40 np0005539552 nova_compute[233724]: 2025-11-29 08:32:40.305 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:32:40 np0005539552 nova_compute[233724]: 2025-11-29 08:32:40.310 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:32:40 np0005539552 nova_compute[233724]: 2025-11-29 08:32:40.349 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:32:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:32:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:40.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:32:41 np0005539552 nova_compute[233724]: 2025-11-29 08:32:41.507 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:41.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:41 np0005539552 nova_compute[233724]: 2025-11-29 08:32:41.850 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:42.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:32:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:43.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:32:44 np0005539552 nova_compute[233724]: 2025-11-29 08:32:44.373 233728 DEBUG nova.compute.manager [req-1e73c23a-d68b-4202-b596-c57fdbe04df1 req-2b267c4d-6df5-4ae2-9972-ca077ee7b2df 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Received event network-vif-plugged-5369324b-4a12-4cff-807c-444de53025fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:32:44 np0005539552 nova_compute[233724]: 2025-11-29 08:32:44.375 233728 DEBUG oslo_concurrency.lockutils [req-1e73c23a-d68b-4202-b596-c57fdbe04df1 req-2b267c4d-6df5-4ae2-9972-ca077ee7b2df 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:44 np0005539552 nova_compute[233724]: 2025-11-29 08:32:44.376 233728 DEBUG oslo_concurrency.lockutils [req-1e73c23a-d68b-4202-b596-c57fdbe04df1 req-2b267c4d-6df5-4ae2-9972-ca077ee7b2df 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:44 np0005539552 nova_compute[233724]: 2025-11-29 08:32:44.376 233728 DEBUG oslo_concurrency.lockutils [req-1e73c23a-d68b-4202-b596-c57fdbe04df1 req-2b267c4d-6df5-4ae2-9972-ca077ee7b2df 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:44 np0005539552 nova_compute[233724]: 2025-11-29 08:32:44.376 233728 DEBUG nova.compute.manager [req-1e73c23a-d68b-4202-b596-c57fdbe04df1 req-2b267c4d-6df5-4ae2-9972-ca077ee7b2df 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Processing event network-vif-plugged-5369324b-4a12-4cff-807c-444de53025fa _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:32:44 np0005539552 nova_compute[233724]: 2025-11-29 08:32:44.377 233728 DEBUG nova.compute.manager [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:32:44 np0005539552 nova_compute[233724]: 2025-11-29 08:32:44.381 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405164.381493, 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:32:44 np0005539552 nova_compute[233724]: 2025-11-29 08:32:44.381 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:32:44 np0005539552 nova_compute[233724]: 2025-11-29 08:32:44.384 233728 DEBUG nova.virt.libvirt.driver [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:32:44 np0005539552 nova_compute[233724]: 2025-11-29 08:32:44.387 233728 INFO nova.virt.libvirt.driver [-] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Instance spawned successfully.#033[00m
Nov 29 03:32:44 np0005539552 nova_compute[233724]: 2025-11-29 08:32:44.387 233728 DEBUG nova.virt.libvirt.driver [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:32:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:44.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:44 np0005539552 nova_compute[233724]: 2025-11-29 08:32:44.438 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:32:44 np0005539552 nova_compute[233724]: 2025-11-29 08:32:44.445 233728 DEBUG nova.virt.libvirt.driver [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:32:44 np0005539552 nova_compute[233724]: 2025-11-29 08:32:44.445 233728 DEBUG nova.virt.libvirt.driver [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:32:44 np0005539552 nova_compute[233724]: 2025-11-29 08:32:44.446 233728 DEBUG nova.virt.libvirt.driver [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:32:44 np0005539552 nova_compute[233724]: 2025-11-29 08:32:44.446 233728 DEBUG nova.virt.libvirt.driver [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:32:44 np0005539552 nova_compute[233724]: 2025-11-29 08:32:44.446 233728 DEBUG nova.virt.libvirt.driver [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:32:44 np0005539552 nova_compute[233724]: 2025-11-29 08:32:44.447 233728 DEBUG nova.virt.libvirt.driver [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:32:44 np0005539552 nova_compute[233724]: 2025-11-29 08:32:44.451 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:32:44 np0005539552 nova_compute[233724]: 2025-11-29 08:32:44.534 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:32:44 np0005539552 nova_compute[233724]: 2025-11-29 08:32:44.583 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:32:44 np0005539552 nova_compute[233724]: 2025-11-29 08:32:44.584 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:32:44 np0005539552 nova_compute[233724]: 2025-11-29 08:32:44.584 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:32:44 np0005539552 nova_compute[233724]: 2025-11-29 08:32:44.584 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:32:44 np0005539552 nova_compute[233724]: 2025-11-29 08:32:44.584 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:32:44 np0005539552 nova_compute[233724]: 2025-11-29 08:32:44.584 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:32:44 np0005539552 nova_compute[233724]: 2025-11-29 08:32:44.755 233728 INFO nova.compute.manager [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Took 13.81 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:32:44 np0005539552 nova_compute[233724]: 2025-11-29 08:32:44.755 233728 DEBUG nova.compute.manager [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:32:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:45.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:45 np0005539552 nova_compute[233724]: 2025-11-29 08:32:45.850 233728 INFO nova.compute.manager [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Took 15.89 seconds to build instance.#033[00m
Nov 29 03:32:45 np0005539552 nova_compute[233724]: 2025-11-29 08:32:45.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:32:46 np0005539552 nova_compute[233724]: 2025-11-29 08:32:46.020 233728 DEBUG oslo_concurrency.lockutils [None req-d80ad57f-4dc2-4cc2-bd55-e1326a04766c c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.137s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:46.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:46 np0005539552 nova_compute[233724]: 2025-11-29 08:32:46.511 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:46 np0005539552 nova_compute[233724]: 2025-11-29 08:32:46.568 233728 DEBUG nova.compute.manager [req-75002bbe-9b39-4e44-ac4e-d3162fbf0165 req-115194bf-b0c7-4c16-b97c-1816c3fb0754 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Received event network-vif-plugged-5369324b-4a12-4cff-807c-444de53025fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:32:46 np0005539552 nova_compute[233724]: 2025-11-29 08:32:46.569 233728 DEBUG oslo_concurrency.lockutils [req-75002bbe-9b39-4e44-ac4e-d3162fbf0165 req-115194bf-b0c7-4c16-b97c-1816c3fb0754 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:46 np0005539552 nova_compute[233724]: 2025-11-29 08:32:46.569 233728 DEBUG oslo_concurrency.lockutils [req-75002bbe-9b39-4e44-ac4e-d3162fbf0165 req-115194bf-b0c7-4c16-b97c-1816c3fb0754 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:46 np0005539552 nova_compute[233724]: 2025-11-29 08:32:46.569 233728 DEBUG oslo_concurrency.lockutils [req-75002bbe-9b39-4e44-ac4e-d3162fbf0165 req-115194bf-b0c7-4c16-b97c-1816c3fb0754 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:46 np0005539552 nova_compute[233724]: 2025-11-29 08:32:46.569 233728 DEBUG nova.compute.manager [req-75002bbe-9b39-4e44-ac4e-d3162fbf0165 req-115194bf-b0c7-4c16-b97c-1816c3fb0754 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] No waiting events found dispatching network-vif-plugged-5369324b-4a12-4cff-807c-444de53025fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:32:46 np0005539552 nova_compute[233724]: 2025-11-29 08:32:46.569 233728 WARNING nova.compute.manager [req-75002bbe-9b39-4e44-ac4e-d3162fbf0165 req-115194bf-b0c7-4c16-b97c-1816c3fb0754 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Received unexpected event network-vif-plugged-5369324b-4a12-4cff-807c-444de53025fa for instance with vm_state active and task_state None.#033[00m
Nov 29 03:32:46 np0005539552 nova_compute[233724]: 2025-11-29 08:32:46.853 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:47.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:47 np0005539552 nova_compute[233724]: 2025-11-29 08:32:47.919 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:32:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:32:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:48.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:32:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:32:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:49.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:32:49 np0005539552 nova_compute[233724]: 2025-11-29 08:32:49.922 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:32:49 np0005539552 nova_compute[233724]: 2025-11-29 08:32:49.923 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:32:49 np0005539552 nova_compute[233724]: 2025-11-29 08:32:49.947 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:32:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:50.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:51 np0005539552 nova_compute[233724]: 2025-11-29 08:32:51.514 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:51.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:51 np0005539552 nova_compute[233724]: 2025-11-29 08:32:51.856 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:52.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:53 np0005539552 nova_compute[233724]: 2025-11-29 08:32:53.448 233728 DEBUG nova.compute.manager [req-f83470e6-de8e-4526-bd03-4e01e29765bd req-c0f4342d-443e-4078-a831-d5e670a07461 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Received event network-changed-5369324b-4a12-4cff-807c-444de53025fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:32:53 np0005539552 nova_compute[233724]: 2025-11-29 08:32:53.449 233728 DEBUG nova.compute.manager [req-f83470e6-de8e-4526-bd03-4e01e29765bd req-c0f4342d-443e-4078-a831-d5e670a07461 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Refreshing instance network info cache due to event network-changed-5369324b-4a12-4cff-807c-444de53025fa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:32:53 np0005539552 nova_compute[233724]: 2025-11-29 08:32:53.449 233728 DEBUG oslo_concurrency.lockutils [req-f83470e6-de8e-4526-bd03-4e01e29765bd req-c0f4342d-443e-4078-a831-d5e670a07461 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:32:53 np0005539552 nova_compute[233724]: 2025-11-29 08:32:53.449 233728 DEBUG oslo_concurrency.lockutils [req-f83470e6-de8e-4526-bd03-4e01e29765bd req-c0f4342d-443e-4078-a831-d5e670a07461 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:32:53 np0005539552 nova_compute[233724]: 2025-11-29 08:32:53.450 233728 DEBUG nova.network.neutron [req-f83470e6-de8e-4526-bd03-4e01e29765bd req-c0f4342d-443e-4078-a831-d5e670a07461 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Refreshing network info cache for port 5369324b-4a12-4cff-807c-444de53025fa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:32:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:53.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:53 np0005539552 nova_compute[233724]: 2025-11-29 08:32:53.944 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:32:54 np0005539552 podman[302471]: 2025-11-29 08:32:54.001836483 +0000 UTC m=+0.074481545 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Nov 29 03:32:54 np0005539552 podman[302470]: 2025-11-29 08:32:54.006520539 +0000 UTC m=+0.079103989 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:32:54 np0005539552 podman[302472]: 2025-11-29 08:32:54.036590318 +0000 UTC m=+0.109134547 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 03:32:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:54.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:32:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:55.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:32:55 np0005539552 nova_compute[233724]: 2025-11-29 08:32:55.941 233728 DEBUG nova.network.neutron [req-f83470e6-de8e-4526-bd03-4e01e29765bd req-c0f4342d-443e-4078-a831-d5e670a07461 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Updated VIF entry in instance network info cache for port 5369324b-4a12-4cff-807c-444de53025fa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:32:55 np0005539552 nova_compute[233724]: 2025-11-29 08:32:55.941 233728 DEBUG nova.network.neutron [req-f83470e6-de8e-4526-bd03-4e01e29765bd req-c0f4342d-443e-4078-a831-d5e670a07461 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Updating instance_info_cache with network_info: [{"id": "5369324b-4a12-4cff-807c-444de53025fa", "address": "fa:16:3e:a3:51:12", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5369324b-4a", "ovs_interfaceid": "5369324b-4a12-4cff-807c-444de53025fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:32:55 np0005539552 nova_compute[233724]: 2025-11-29 08:32:55.974 233728 DEBUG oslo_concurrency.lockutils [req-f83470e6-de8e-4526-bd03-4e01e29765bd req-c0f4342d-443e-4078-a831-d5e670a07461 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:32:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:32:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:56.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:32:56 np0005539552 nova_compute[233724]: 2025-11-29 08:32:56.516 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:56 np0005539552 nova_compute[233724]: 2025-11-29 08:32:56.862 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:57.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:58.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:59 np0005539552 nova_compute[233724]: 2025-11-29 08:32:59.001 233728 DEBUG oslo_concurrency.lockutils [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Acquiring lock "b116fe85-6509-4516-bc73-6cd5fd20ecc2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:59 np0005539552 nova_compute[233724]: 2025-11-29 08:32:59.001 233728 DEBUG oslo_concurrency.lockutils [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Lock "b116fe85-6509-4516-bc73-6cd5fd20ecc2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:59 np0005539552 nova_compute[233724]: 2025-11-29 08:32:59.045 233728 DEBUG nova.compute.manager [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:32:59 np0005539552 nova_compute[233724]: 2025-11-29 08:32:59.236 233728 DEBUG oslo_concurrency.lockutils [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:59 np0005539552 nova_compute[233724]: 2025-11-29 08:32:59.237 233728 DEBUG oslo_concurrency.lockutils [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:59 np0005539552 nova_compute[233724]: 2025-11-29 08:32:59.248 233728 DEBUG nova.virt.hardware [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:32:59 np0005539552 nova_compute[233724]: 2025-11-29 08:32:59.249 233728 INFO nova.compute.claims [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:32:59 np0005539552 ovn_controller[133798]: 2025-11-29T08:32:59Z|00096|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a3:51:12 10.100.0.11
Nov 29 03:32:59 np0005539552 ovn_controller[133798]: 2025-11-29T08:32:59Z|00097|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a3:51:12 10.100.0.11
Nov 29 03:32:59 np0005539552 nova_compute[233724]: 2025-11-29 08:32:59.455 233728 DEBUG oslo_concurrency.processutils [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:32:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:59.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:59 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:32:59 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1505205574' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:32:59 np0005539552 nova_compute[233724]: 2025-11-29 08:32:59.989 233728 DEBUG oslo_concurrency.processutils [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:59 np0005539552 nova_compute[233724]: 2025-11-29 08:32:59.999 233728 DEBUG nova.compute.provider_tree [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:33:00 np0005539552 nova_compute[233724]: 2025-11-29 08:33:00.374 233728 DEBUG nova.scheduler.client.report [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:33:00 np0005539552 nova_compute[233724]: 2025-11-29 08:33:00.402 233728 DEBUG oslo_concurrency.lockutils [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.166s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:00 np0005539552 nova_compute[233724]: 2025-11-29 08:33:00.403 233728 DEBUG nova.compute.manager [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:33:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:00.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:00 np0005539552 nova_compute[233724]: 2025-11-29 08:33:00.439 233728 DEBUG oslo_concurrency.lockutils [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Acquiring lock "64f6896c-17f2-4ceb-98b9-50a541c98b7b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:00 np0005539552 nova_compute[233724]: 2025-11-29 08:33:00.439 233728 DEBUG oslo_concurrency.lockutils [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "64f6896c-17f2-4ceb-98b9-50a541c98b7b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:00 np0005539552 nova_compute[233724]: 2025-11-29 08:33:00.473 233728 DEBUG nova.compute.manager [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:33:00 np0005539552 nova_compute[233724]: 2025-11-29 08:33:00.483 233728 DEBUG nova.compute.manager [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Nov 29 03:33:00 np0005539552 nova_compute[233724]: 2025-11-29 08:33:00.515 233728 INFO nova.virt.libvirt.driver [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:33:00 np0005539552 nova_compute[233724]: 2025-11-29 08:33:00.574 233728 DEBUG nova.compute.manager [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:33:00 np0005539552 nova_compute[233724]: 2025-11-29 08:33:00.585 233728 DEBUG oslo_concurrency.lockutils [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:00 np0005539552 nova_compute[233724]: 2025-11-29 08:33:00.585 233728 DEBUG oslo_concurrency.lockutils [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:00 np0005539552 nova_compute[233724]: 2025-11-29 08:33:00.593 233728 DEBUG nova.virt.hardware [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:33:00 np0005539552 nova_compute[233724]: 2025-11-29 08:33:00.593 233728 INFO nova.compute.claims [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:33:00 np0005539552 nova_compute[233724]: 2025-11-29 08:33:00.706 233728 DEBUG nova.compute.manager [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:33:00 np0005539552 nova_compute[233724]: 2025-11-29 08:33:00.707 233728 DEBUG nova.virt.libvirt.driver [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:33:00 np0005539552 nova_compute[233724]: 2025-11-29 08:33:00.708 233728 INFO nova.virt.libvirt.driver [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Creating image(s)#033[00m
Nov 29 03:33:00 np0005539552 nova_compute[233724]: 2025-11-29 08:33:00.733 233728 DEBUG nova.storage.rbd_utils [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] rbd image b116fe85-6509-4516-bc73-6cd5fd20ecc2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:33:00 np0005539552 nova_compute[233724]: 2025-11-29 08:33:00.768 233728 DEBUG nova.storage.rbd_utils [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] rbd image b116fe85-6509-4516-bc73-6cd5fd20ecc2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:33:00 np0005539552 nova_compute[233724]: 2025-11-29 08:33:00.804 233728 DEBUG nova.storage.rbd_utils [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] rbd image b116fe85-6509-4516-bc73-6cd5fd20ecc2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:33:00 np0005539552 nova_compute[233724]: 2025-11-29 08:33:00.809 233728 DEBUG oslo_concurrency.processutils [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:00 np0005539552 nova_compute[233724]: 2025-11-29 08:33:00.881 233728 DEBUG oslo_concurrency.processutils [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:00 np0005539552 nova_compute[233724]: 2025-11-29 08:33:00.882 233728 DEBUG oslo_concurrency.lockutils [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:00 np0005539552 nova_compute[233724]: 2025-11-29 08:33:00.882 233728 DEBUG oslo_concurrency.lockutils [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:00 np0005539552 nova_compute[233724]: 2025-11-29 08:33:00.883 233728 DEBUG oslo_concurrency.lockutils [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:00 np0005539552 nova_compute[233724]: 2025-11-29 08:33:00.915 233728 DEBUG nova.storage.rbd_utils [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] rbd image b116fe85-6509-4516-bc73-6cd5fd20ecc2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:33:00 np0005539552 nova_compute[233724]: 2025-11-29 08:33:00.919 233728 DEBUG oslo_concurrency.processutils [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 b116fe85-6509-4516-bc73-6cd5fd20ecc2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:00 np0005539552 nova_compute[233724]: 2025-11-29 08:33:00.973 233728 DEBUG oslo_concurrency.processutils [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:01 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:33:01 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/433576359' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:33:01 np0005539552 nova_compute[233724]: 2025-11-29 08:33:01.450 233728 DEBUG oslo_concurrency.processutils [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:01 np0005539552 nova_compute[233724]: 2025-11-29 08:33:01.455 233728 DEBUG nova.compute.provider_tree [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:33:01 np0005539552 nova_compute[233724]: 2025-11-29 08:33:01.477 233728 DEBUG nova.scheduler.client.report [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:33:01 np0005539552 nova_compute[233724]: 2025-11-29 08:33:01.503 233728 DEBUG oslo_concurrency.lockutils [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.917s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:01 np0005539552 nova_compute[233724]: 2025-11-29 08:33:01.503 233728 DEBUG nova.compute.manager [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:33:01 np0005539552 nova_compute[233724]: 2025-11-29 08:33:01.519 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:01 np0005539552 nova_compute[233724]: 2025-11-29 08:33:01.564 233728 DEBUG nova.compute.manager [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:33:01 np0005539552 nova_compute[233724]: 2025-11-29 08:33:01.564 233728 DEBUG nova.network.neutron [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:33:01 np0005539552 nova_compute[233724]: 2025-11-29 08:33:01.582 233728 INFO nova.virt.libvirt.driver [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:33:01 np0005539552 nova_compute[233724]: 2025-11-29 08:33:01.602 233728 DEBUG nova.compute.manager [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:33:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:01.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:01 np0005539552 nova_compute[233724]: 2025-11-29 08:33:01.772 233728 DEBUG nova.compute.manager [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:33:01 np0005539552 nova_compute[233724]: 2025-11-29 08:33:01.773 233728 DEBUG nova.virt.libvirt.driver [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:33:01 np0005539552 nova_compute[233724]: 2025-11-29 08:33:01.774 233728 INFO nova.virt.libvirt.driver [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Creating image(s)#033[00m
Nov 29 03:33:01 np0005539552 nova_compute[233724]: 2025-11-29 08:33:01.925 233728 DEBUG nova.storage.rbd_utils [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] rbd image 64f6896c-17f2-4ceb-98b9-50a541c98b7b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:33:01 np0005539552 nova_compute[233724]: 2025-11-29 08:33:01.951 233728 DEBUG nova.storage.rbd_utils [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] rbd image 64f6896c-17f2-4ceb-98b9-50a541c98b7b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:33:01 np0005539552 nova_compute[233724]: 2025-11-29 08:33:01.976 233728 DEBUG nova.storage.rbd_utils [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] rbd image 64f6896c-17f2-4ceb-98b9-50a541c98b7b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:33:01 np0005539552 nova_compute[233724]: 2025-11-29 08:33:01.980 233728 DEBUG oslo_concurrency.processutils [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:02 np0005539552 nova_compute[233724]: 2025-11-29 08:33:02.005 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:02 np0005539552 nova_compute[233724]: 2025-11-29 08:33:02.011 233728 DEBUG nova.policy [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5b0fe4d78df74554a3a5875ab629d59c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1981e9617628491f938ef0ef01c061c5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:33:02 np0005539552 nova_compute[233724]: 2025-11-29 08:33:02.046 233728 DEBUG oslo_concurrency.processutils [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:02 np0005539552 nova_compute[233724]: 2025-11-29 08:33:02.047 233728 DEBUG oslo_concurrency.lockutils [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:02 np0005539552 nova_compute[233724]: 2025-11-29 08:33:02.048 233728 DEBUG oslo_concurrency.lockutils [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:02 np0005539552 nova_compute[233724]: 2025-11-29 08:33:02.048 233728 DEBUG oslo_concurrency.lockutils [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:02 np0005539552 nova_compute[233724]: 2025-11-29 08:33:02.075 233728 DEBUG nova.storage.rbd_utils [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] rbd image 64f6896c-17f2-4ceb-98b9-50a541c98b7b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:33:02 np0005539552 nova_compute[233724]: 2025-11-29 08:33:02.078 233728 DEBUG oslo_concurrency.processutils [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 64f6896c-17f2-4ceb-98b9-50a541c98b7b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:02.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:02 np0005539552 nova_compute[233724]: 2025-11-29 08:33:02.468 233728 DEBUG oslo_concurrency.processutils [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 b116fe85-6509-4516-bc73-6cd5fd20ecc2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:02 np0005539552 nova_compute[233724]: 2025-11-29 08:33:02.596 233728 DEBUG nova.storage.rbd_utils [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] resizing rbd image b116fe85-6509-4516-bc73-6cd5fd20ecc2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:33:02 np0005539552 nova_compute[233724]: 2025-11-29 08:33:02.831 233728 DEBUG nova.objects.instance [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Lazy-loading 'migration_context' on Instance uuid b116fe85-6509-4516-bc73-6cd5fd20ecc2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:33:02 np0005539552 nova_compute[233724]: 2025-11-29 08:33:02.858 233728 DEBUG nova.virt.libvirt.driver [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:33:02 np0005539552 nova_compute[233724]: 2025-11-29 08:33:02.859 233728 DEBUG nova.virt.libvirt.driver [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Ensure instance console log exists: /var/lib/nova/instances/b116fe85-6509-4516-bc73-6cd5fd20ecc2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:33:02 np0005539552 nova_compute[233724]: 2025-11-29 08:33:02.859 233728 DEBUG oslo_concurrency.lockutils [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:02 np0005539552 nova_compute[233724]: 2025-11-29 08:33:02.860 233728 DEBUG oslo_concurrency.lockutils [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:02 np0005539552 nova_compute[233724]: 2025-11-29 08:33:02.860 233728 DEBUG oslo_concurrency.lockutils [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:02 np0005539552 nova_compute[233724]: 2025-11-29 08:33:02.861 233728 DEBUG nova.virt.libvirt.driver [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:33:02 np0005539552 nova_compute[233724]: 2025-11-29 08:33:02.865 233728 WARNING nova.virt.libvirt.driver [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:33:02 np0005539552 nova_compute[233724]: 2025-11-29 08:33:02.869 233728 DEBUG nova.virt.libvirt.host [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:33:02 np0005539552 nova_compute[233724]: 2025-11-29 08:33:02.870 233728 DEBUG nova.virt.libvirt.host [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:33:02 np0005539552 nova_compute[233724]: 2025-11-29 08:33:02.872 233728 DEBUG nova.virt.libvirt.host [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:33:02 np0005539552 nova_compute[233724]: 2025-11-29 08:33:02.873 233728 DEBUG nova.virt.libvirt.host [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:33:02 np0005539552 nova_compute[233724]: 2025-11-29 08:33:02.874 233728 DEBUG nova.virt.libvirt.driver [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:33:02 np0005539552 nova_compute[233724]: 2025-11-29 08:33:02.874 233728 DEBUG nova.virt.hardware [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:33:02 np0005539552 nova_compute[233724]: 2025-11-29 08:33:02.874 233728 DEBUG nova.virt.hardware [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:33:02 np0005539552 nova_compute[233724]: 2025-11-29 08:33:02.875 233728 DEBUG nova.virt.hardware [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:33:02 np0005539552 nova_compute[233724]: 2025-11-29 08:33:02.875 233728 DEBUG nova.virt.hardware [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:33:02 np0005539552 nova_compute[233724]: 2025-11-29 08:33:02.875 233728 DEBUG nova.virt.hardware [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:33:02 np0005539552 nova_compute[233724]: 2025-11-29 08:33:02.875 233728 DEBUG nova.virt.hardware [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:33:02 np0005539552 nova_compute[233724]: 2025-11-29 08:33:02.876 233728 DEBUG nova.virt.hardware [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:33:02 np0005539552 nova_compute[233724]: 2025-11-29 08:33:02.876 233728 DEBUG nova.virt.hardware [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:33:02 np0005539552 nova_compute[233724]: 2025-11-29 08:33:02.876 233728 DEBUG nova.virt.hardware [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:33:02 np0005539552 nova_compute[233724]: 2025-11-29 08:33:02.877 233728 DEBUG nova.virt.hardware [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:33:02 np0005539552 nova_compute[233724]: 2025-11-29 08:33:02.877 233728 DEBUG nova.virt.hardware [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:33:02 np0005539552 nova_compute[233724]: 2025-11-29 08:33:02.880 233728 DEBUG oslo_concurrency.processutils [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:03 np0005539552 nova_compute[233724]: 2025-11-29 08:33:03.024 233728 DEBUG oslo_concurrency.processutils [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 64f6896c-17f2-4ceb-98b9-50a541c98b7b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.946s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:03 np0005539552 nova_compute[233724]: 2025-11-29 08:33:03.119 233728 DEBUG nova.storage.rbd_utils [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] resizing rbd image 64f6896c-17f2-4ceb-98b9-50a541c98b7b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:33:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:33:03 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2555111945' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:33:03 np0005539552 nova_compute[233724]: 2025-11-29 08:33:03.411 233728 DEBUG oslo_concurrency.processutils [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:03 np0005539552 nova_compute[233724]: 2025-11-29 08:33:03.447 233728 DEBUG nova.storage.rbd_utils [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] rbd image b116fe85-6509-4516-bc73-6cd5fd20ecc2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:33:03 np0005539552 nova_compute[233724]: 2025-11-29 08:33:03.452 233728 DEBUG oslo_concurrency.processutils [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:03 np0005539552 nova_compute[233724]: 2025-11-29 08:33:03.549 233728 DEBUG nova.network.neutron [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Successfully created port: aa2f5192-1c39-497e-8a7b-31d50bd48eb7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:33:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:03.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:33:03 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/512138453' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:33:03 np0005539552 nova_compute[233724]: 2025-11-29 08:33:03.947 233728 DEBUG nova.objects.instance [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lazy-loading 'migration_context' on Instance uuid 64f6896c-17f2-4ceb-98b9-50a541c98b7b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:33:03 np0005539552 nova_compute[233724]: 2025-11-29 08:33:03.954 233728 DEBUG oslo_concurrency.processutils [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:03 np0005539552 nova_compute[233724]: 2025-11-29 08:33:03.955 233728 DEBUG nova.objects.instance [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Lazy-loading 'pci_devices' on Instance uuid b116fe85-6509-4516-bc73-6cd5fd20ecc2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:33:03 np0005539552 nova_compute[233724]: 2025-11-29 08:33:03.961 233728 DEBUG nova.virt.libvirt.driver [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:33:03 np0005539552 nova_compute[233724]: 2025-11-29 08:33:03.961 233728 DEBUG nova.virt.libvirt.driver [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Ensure instance console log exists: /var/lib/nova/instances/64f6896c-17f2-4ceb-98b9-50a541c98b7b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:33:03 np0005539552 nova_compute[233724]: 2025-11-29 08:33:03.962 233728 DEBUG oslo_concurrency.lockutils [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:03 np0005539552 nova_compute[233724]: 2025-11-29 08:33:03.962 233728 DEBUG oslo_concurrency.lockutils [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:03 np0005539552 nova_compute[233724]: 2025-11-29 08:33:03.962 233728 DEBUG oslo_concurrency.lockutils [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:03 np0005539552 nova_compute[233724]: 2025-11-29 08:33:03.969 233728 DEBUG nova.virt.libvirt.driver [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:33:03 np0005539552 nova_compute[233724]:  <uuid>b116fe85-6509-4516-bc73-6cd5fd20ecc2</uuid>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:  <name>instance-000000a7</name>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:33:03 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:      <nova:name>tempest-ServerShowV257Test-server-600349588</nova:name>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:33:02</nova:creationTime>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:33:03 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:        <nova:user uuid="7147127ad2c248a6977704a1850eb832">tempest-ServerShowV257Test-94781273-project-member</nova:user>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:        <nova:project uuid="61c2a7b8d2c741a1af85aefdb0eb7132">tempest-ServerShowV257Test-94781273</nova:project>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:      <nova:ports/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:      <entry name="serial">b116fe85-6509-4516-bc73-6cd5fd20ecc2</entry>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:      <entry name="uuid">b116fe85-6509-4516-bc73-6cd5fd20ecc2</entry>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:33:03 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/b116fe85-6509-4516-bc73-6cd5fd20ecc2_disk">
Nov 29 03:33:03 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:33:03 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:33:03 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/b116fe85-6509-4516-bc73-6cd5fd20ecc2_disk.config">
Nov 29 03:33:03 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:33:03 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:33:03 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/b116fe85-6509-4516-bc73-6cd5fd20ecc2/console.log" append="off"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:33:03 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:33:03 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:33:03 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:33:03 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:33:03 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:33:04 np0005539552 nova_compute[233724]: 2025-11-29 08:33:04.030 233728 DEBUG nova.virt.libvirt.driver [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:33:04 np0005539552 nova_compute[233724]: 2025-11-29 08:33:04.031 233728 DEBUG nova.virt.libvirt.driver [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:33:04 np0005539552 nova_compute[233724]: 2025-11-29 08:33:04.032 233728 INFO nova.virt.libvirt.driver [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Using config drive#033[00m
Nov 29 03:33:04 np0005539552 nova_compute[233724]: 2025-11-29 08:33:04.064 233728 DEBUG nova.storage.rbd_utils [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] rbd image b116fe85-6509-4516-bc73-6cd5fd20ecc2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:33:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:04.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:04 np0005539552 nova_compute[233724]: 2025-11-29 08:33:04.498 233728 INFO nova.virt.libvirt.driver [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Creating config drive at /var/lib/nova/instances/b116fe85-6509-4516-bc73-6cd5fd20ecc2/disk.config#033[00m
Nov 29 03:33:04 np0005539552 nova_compute[233724]: 2025-11-29 08:33:04.510 233728 DEBUG oslo_concurrency.processutils [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b116fe85-6509-4516-bc73-6cd5fd20ecc2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1ythf8_l execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:04 np0005539552 nova_compute[233724]: 2025-11-29 08:33:04.661 233728 DEBUG oslo_concurrency.processutils [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b116fe85-6509-4516-bc73-6cd5fd20ecc2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1ythf8_l" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:04 np0005539552 nova_compute[233724]: 2025-11-29 08:33:04.709 233728 DEBUG nova.storage.rbd_utils [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] rbd image b116fe85-6509-4516-bc73-6cd5fd20ecc2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:33:04 np0005539552 nova_compute[233724]: 2025-11-29 08:33:04.715 233728 DEBUG oslo_concurrency.processutils [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b116fe85-6509-4516-bc73-6cd5fd20ecc2/disk.config b116fe85-6509-4516-bc73-6cd5fd20ecc2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:04 np0005539552 nova_compute[233724]: 2025-11-29 08:33:04.984 233728 DEBUG oslo_concurrency.processutils [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b116fe85-6509-4516-bc73-6cd5fd20ecc2/disk.config b116fe85-6509-4516-bc73-6cd5fd20ecc2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.269s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:04 np0005539552 nova_compute[233724]: 2025-11-29 08:33:04.985 233728 INFO nova.virt.libvirt.driver [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Deleting local config drive /var/lib/nova/instances/b116fe85-6509-4516-bc73-6cd5fd20ecc2/disk.config because it was imported into RBD.#033[00m
Nov 29 03:33:05 np0005539552 systemd-machined[196379]: New machine qemu-77-instance-000000a7.
Nov 29 03:33:05 np0005539552 systemd[1]: Started Virtual Machine qemu-77-instance-000000a7.
Nov 29 03:33:05 np0005539552 nova_compute[233724]: 2025-11-29 08:33:05.208 233728 DEBUG nova.network.neutron [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Successfully updated port: aa2f5192-1c39-497e-8a7b-31d50bd48eb7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:33:05 np0005539552 nova_compute[233724]: 2025-11-29 08:33:05.340 233728 DEBUG nova.compute.manager [req-6c1f741c-6201-4138-8f68-a32af06c99f8 req-08c795dd-875b-4d66-a4f8-195af33c264e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Received event network-changed-aa2f5192-1c39-497e-8a7b-31d50bd48eb7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:33:05 np0005539552 nova_compute[233724]: 2025-11-29 08:33:05.340 233728 DEBUG nova.compute.manager [req-6c1f741c-6201-4138-8f68-a32af06c99f8 req-08c795dd-875b-4d66-a4f8-195af33c264e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Refreshing instance network info cache due to event network-changed-aa2f5192-1c39-497e-8a7b-31d50bd48eb7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:33:05 np0005539552 nova_compute[233724]: 2025-11-29 08:33:05.341 233728 DEBUG oslo_concurrency.lockutils [req-6c1f741c-6201-4138-8f68-a32af06c99f8 req-08c795dd-875b-4d66-a4f8-195af33c264e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-64f6896c-17f2-4ceb-98b9-50a541c98b7b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:33:05 np0005539552 nova_compute[233724]: 2025-11-29 08:33:05.342 233728 DEBUG oslo_concurrency.lockutils [req-6c1f741c-6201-4138-8f68-a32af06c99f8 req-08c795dd-875b-4d66-a4f8-195af33c264e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-64f6896c-17f2-4ceb-98b9-50a541c98b7b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:33:05 np0005539552 nova_compute[233724]: 2025-11-29 08:33:05.342 233728 DEBUG nova.network.neutron [req-6c1f741c-6201-4138-8f68-a32af06c99f8 req-08c795dd-875b-4d66-a4f8-195af33c264e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Refreshing network info cache for port aa2f5192-1c39-497e-8a7b-31d50bd48eb7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:33:05 np0005539552 nova_compute[233724]: 2025-11-29 08:33:05.402 233728 DEBUG oslo_concurrency.lockutils [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Acquiring lock "refresh_cache-64f6896c-17f2-4ceb-98b9-50a541c98b7b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:33:05 np0005539552 nova_compute[233724]: 2025-11-29 08:33:05.578 233728 DEBUG nova.network.neutron [req-6c1f741c-6201-4138-8f68-a32af06c99f8 req-08c795dd-875b-4d66-a4f8-195af33c264e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:33:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:05.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:05 np0005539552 nova_compute[233724]: 2025-11-29 08:33:05.791 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405185.7909877, b116fe85-6509-4516-bc73-6cd5fd20ecc2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:33:05 np0005539552 nova_compute[233724]: 2025-11-29 08:33:05.792 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:33:05 np0005539552 nova_compute[233724]: 2025-11-29 08:33:05.795 233728 DEBUG nova.compute.manager [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:33:05 np0005539552 nova_compute[233724]: 2025-11-29 08:33:05.795 233728 DEBUG nova.virt.libvirt.driver [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:33:05 np0005539552 nova_compute[233724]: 2025-11-29 08:33:05.799 233728 INFO nova.virt.libvirt.driver [-] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Instance spawned successfully.#033[00m
Nov 29 03:33:05 np0005539552 nova_compute[233724]: 2025-11-29 08:33:05.800 233728 DEBUG nova.virt.libvirt.driver [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:33:05 np0005539552 nova_compute[233724]: 2025-11-29 08:33:05.816 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:33:05 np0005539552 nova_compute[233724]: 2025-11-29 08:33:05.819 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:33:05 np0005539552 nova_compute[233724]: 2025-11-29 08:33:05.827 233728 DEBUG nova.virt.libvirt.driver [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:33:05 np0005539552 nova_compute[233724]: 2025-11-29 08:33:05.827 233728 DEBUG nova.virt.libvirt.driver [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:33:05 np0005539552 nova_compute[233724]: 2025-11-29 08:33:05.828 233728 DEBUG nova.virt.libvirt.driver [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:33:05 np0005539552 nova_compute[233724]: 2025-11-29 08:33:05.828 233728 DEBUG nova.virt.libvirt.driver [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:33:05 np0005539552 nova_compute[233724]: 2025-11-29 08:33:05.829 233728 DEBUG nova.virt.libvirt.driver [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:33:05 np0005539552 nova_compute[233724]: 2025-11-29 08:33:05.829 233728 DEBUG nova.virt.libvirt.driver [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:33:05 np0005539552 nova_compute[233724]: 2025-11-29 08:33:05.849 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:33:05 np0005539552 nova_compute[233724]: 2025-11-29 08:33:05.850 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405185.794215, b116fe85-6509-4516-bc73-6cd5fd20ecc2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:33:05 np0005539552 nova_compute[233724]: 2025-11-29 08:33:05.850 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] VM Started (Lifecycle Event)#033[00m
Nov 29 03:33:05 np0005539552 nova_compute[233724]: 2025-11-29 08:33:05.871 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:33:05 np0005539552 nova_compute[233724]: 2025-11-29 08:33:05.874 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:33:05 np0005539552 nova_compute[233724]: 2025-11-29 08:33:05.895 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:33:05 np0005539552 nova_compute[233724]: 2025-11-29 08:33:05.905 233728 INFO nova.compute.manager [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Took 5.20 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:33:05 np0005539552 nova_compute[233724]: 2025-11-29 08:33:05.906 233728 DEBUG nova.compute.manager [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:33:05 np0005539552 nova_compute[233724]: 2025-11-29 08:33:05.964 233728 INFO nova.compute.manager [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Took 6.79 seconds to build instance.#033[00m
Nov 29 03:33:05 np0005539552 nova_compute[233724]: 2025-11-29 08:33:05.984 233728 DEBUG oslo_concurrency.lockutils [None req-78b1e6b5-d97c-497a-9d26-c5dde0e3ca82 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Lock "b116fe85-6509-4516-bc73-6cd5fd20ecc2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.983s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:06 np0005539552 nova_compute[233724]: 2025-11-29 08:33:06.167 233728 DEBUG nova.network.neutron [req-6c1f741c-6201-4138-8f68-a32af06c99f8 req-08c795dd-875b-4d66-a4f8-195af33c264e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:33:06 np0005539552 nova_compute[233724]: 2025-11-29 08:33:06.325 233728 DEBUG oslo_concurrency.lockutils [req-6c1f741c-6201-4138-8f68-a32af06c99f8 req-08c795dd-875b-4d66-a4f8-195af33c264e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-64f6896c-17f2-4ceb-98b9-50a541c98b7b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:33:06 np0005539552 nova_compute[233724]: 2025-11-29 08:33:06.326 233728 DEBUG oslo_concurrency.lockutils [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Acquired lock "refresh_cache-64f6896c-17f2-4ceb-98b9-50a541c98b7b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:33:06 np0005539552 nova_compute[233724]: 2025-11-29 08:33:06.326 233728 DEBUG nova.network.neutron [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:33:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:06.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:06 np0005539552 nova_compute[233724]: 2025-11-29 08:33:06.521 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:06 np0005539552 nova_compute[233724]: 2025-11-29 08:33:06.535 233728 DEBUG nova.network.neutron [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:33:06 np0005539552 nova_compute[233724]: 2025-11-29 08:33:06.864 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:07.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:07 np0005539552 nova_compute[233724]: 2025-11-29 08:33:07.891 233728 DEBUG nova.network.neutron [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Updating instance_info_cache with network_info: [{"id": "aa2f5192-1c39-497e-8a7b-31d50bd48eb7", "address": "fa:16:3e:43:dd:70", "network": {"id": "d6d35cfb-cc41-4788-977c-b8e5140795a0", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1510670722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1981e9617628491f938ef0ef01c061c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa2f5192-1c", "ovs_interfaceid": "aa2f5192-1c39-497e-8a7b-31d50bd48eb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:33:07 np0005539552 nova_compute[233724]: 2025-11-29 08:33:07.912 233728 DEBUG oslo_concurrency.lockutils [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Releasing lock "refresh_cache-64f6896c-17f2-4ceb-98b9-50a541c98b7b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:33:07 np0005539552 nova_compute[233724]: 2025-11-29 08:33:07.913 233728 DEBUG nova.compute.manager [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Instance network_info: |[{"id": "aa2f5192-1c39-497e-8a7b-31d50bd48eb7", "address": "fa:16:3e:43:dd:70", "network": {"id": "d6d35cfb-cc41-4788-977c-b8e5140795a0", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1510670722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1981e9617628491f938ef0ef01c061c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa2f5192-1c", "ovs_interfaceid": "aa2f5192-1c39-497e-8a7b-31d50bd48eb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:33:07 np0005539552 nova_compute[233724]: 2025-11-29 08:33:07.915 233728 DEBUG nova.virt.libvirt.driver [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Start _get_guest_xml network_info=[{"id": "aa2f5192-1c39-497e-8a7b-31d50bd48eb7", "address": "fa:16:3e:43:dd:70", "network": {"id": "d6d35cfb-cc41-4788-977c-b8e5140795a0", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1510670722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1981e9617628491f938ef0ef01c061c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa2f5192-1c", "ovs_interfaceid": "aa2f5192-1c39-497e-8a7b-31d50bd48eb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:33:07 np0005539552 nova_compute[233724]: 2025-11-29 08:33:07.920 233728 WARNING nova.virt.libvirt.driver [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:33:07 np0005539552 nova_compute[233724]: 2025-11-29 08:33:07.924 233728 DEBUG nova.virt.libvirt.host [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:33:07 np0005539552 nova_compute[233724]: 2025-11-29 08:33:07.924 233728 DEBUG nova.virt.libvirt.host [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:33:07 np0005539552 nova_compute[233724]: 2025-11-29 08:33:07.931 233728 DEBUG nova.virt.libvirt.host [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:33:07 np0005539552 nova_compute[233724]: 2025-11-29 08:33:07.931 233728 DEBUG nova.virt.libvirt.host [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:33:07 np0005539552 nova_compute[233724]: 2025-11-29 08:33:07.932 233728 DEBUG nova.virt.libvirt.driver [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:33:07 np0005539552 nova_compute[233724]: 2025-11-29 08:33:07.933 233728 DEBUG nova.virt.hardware [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:33:07 np0005539552 nova_compute[233724]: 2025-11-29 08:33:07.933 233728 DEBUG nova.virt.hardware [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:33:07 np0005539552 nova_compute[233724]: 2025-11-29 08:33:07.933 233728 DEBUG nova.virt.hardware [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:33:07 np0005539552 nova_compute[233724]: 2025-11-29 08:33:07.934 233728 DEBUG nova.virt.hardware [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:33:07 np0005539552 nova_compute[233724]: 2025-11-29 08:33:07.934 233728 DEBUG nova.virt.hardware [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:33:07 np0005539552 nova_compute[233724]: 2025-11-29 08:33:07.934 233728 DEBUG nova.virt.hardware [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:33:07 np0005539552 nova_compute[233724]: 2025-11-29 08:33:07.935 233728 DEBUG nova.virt.hardware [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:33:07 np0005539552 nova_compute[233724]: 2025-11-29 08:33:07.935 233728 DEBUG nova.virt.hardware [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:33:07 np0005539552 nova_compute[233724]: 2025-11-29 08:33:07.935 233728 DEBUG nova.virt.hardware [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:33:07 np0005539552 nova_compute[233724]: 2025-11-29 08:33:07.936 233728 DEBUG nova.virt.hardware [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:33:07 np0005539552 nova_compute[233724]: 2025-11-29 08:33:07.936 233728 DEBUG nova.virt.hardware [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:33:07 np0005539552 nova_compute[233724]: 2025-11-29 08:33:07.939 233728 DEBUG oslo_concurrency.processutils [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:08 np0005539552 nova_compute[233724]: 2025-11-29 08:33:08.113 233728 INFO nova.compute.manager [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Rebuilding instance#033[00m
Nov 29 03:33:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:33:08 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3179831328' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:33:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:08.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:08 np0005539552 nova_compute[233724]: 2025-11-29 08:33:08.447 233728 DEBUG oslo_concurrency.processutils [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:08 np0005539552 nova_compute[233724]: 2025-11-29 08:33:08.477 233728 DEBUG nova.storage.rbd_utils [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] rbd image 64f6896c-17f2-4ceb-98b9-50a541c98b7b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:33:08 np0005539552 nova_compute[233724]: 2025-11-29 08:33:08.482 233728 DEBUG oslo_concurrency.processutils [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:08 np0005539552 nova_compute[233724]: 2025-11-29 08:33:08.523 233728 DEBUG nova.objects.instance [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Lazy-loading 'trusted_certs' on Instance uuid b116fe85-6509-4516-bc73-6cd5fd20ecc2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:33:08 np0005539552 nova_compute[233724]: 2025-11-29 08:33:08.547 233728 DEBUG nova.compute.manager [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:33:08 np0005539552 nova_compute[233724]: 2025-11-29 08:33:08.593 233728 DEBUG nova.objects.instance [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Lazy-loading 'pci_requests' on Instance uuid b116fe85-6509-4516-bc73-6cd5fd20ecc2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:33:08 np0005539552 nova_compute[233724]: 2025-11-29 08:33:08.608 233728 DEBUG nova.objects.instance [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Lazy-loading 'pci_devices' on Instance uuid b116fe85-6509-4516-bc73-6cd5fd20ecc2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:33:08 np0005539552 nova_compute[233724]: 2025-11-29 08:33:08.624 233728 DEBUG nova.objects.instance [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Lazy-loading 'resources' on Instance uuid b116fe85-6509-4516-bc73-6cd5fd20ecc2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:33:08 np0005539552 nova_compute[233724]: 2025-11-29 08:33:08.636 233728 DEBUG nova.objects.instance [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Lazy-loading 'migration_context' on Instance uuid b116fe85-6509-4516-bc73-6cd5fd20ecc2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:33:08 np0005539552 nova_compute[233724]: 2025-11-29 08:33:08.649 233728 DEBUG nova.objects.instance [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 29 03:33:08 np0005539552 nova_compute[233724]: 2025-11-29 08:33:08.655 233728 DEBUG nova.virt.libvirt.driver [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:33:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:33:08 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1742888457' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:33:09 np0005539552 nova_compute[233724]: 2025-11-29 08:33:09.105 233728 DEBUG oslo_concurrency.processutils [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:09 np0005539552 nova_compute[233724]: 2025-11-29 08:33:09.107 233728 DEBUG nova.virt.libvirt.vif [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:32:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-1071799825',display_name='tempest-AttachVolumeTestJSON-server-1071799825',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-1071799825',id=168,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH8z7rrPeKOEKI612gpBMVEvIGpH3EMiF+hWdu6tWRiUAF9IFXNb+B4J5+W6qT7uDPKVKxau5gwrOF36u0kpS+8En2biuDD+O0UgFddmbT40+04wXSPyzWQWr4KYgMABrA==',key_name='tempest-keypair-1751719941',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1981e9617628491f938ef0ef01c061c5',ramdisk_id='',reservation_id='r-zqu0onmi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeTestJSON-169198681',owner_user_name='tempest-AttachVolumeTestJSON-169198681-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:33:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5b0fe4d78df74554a3a5875ab629d59c',uuid=64f6896c-17f2-4ceb-98b9-50a541c98b7b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aa2f5192-1c39-497e-8a7b-31d50bd48eb7", "address": "fa:16:3e:43:dd:70", "network": {"id": "d6d35cfb-cc41-4788-977c-b8e5140795a0", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1510670722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1981e9617628491f938ef0ef01c061c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa2f5192-1c", "ovs_interfaceid": "aa2f5192-1c39-497e-8a7b-31d50bd48eb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:33:09 np0005539552 nova_compute[233724]: 2025-11-29 08:33:09.107 233728 DEBUG nova.network.os_vif_util [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Converting VIF {"id": "aa2f5192-1c39-497e-8a7b-31d50bd48eb7", "address": "fa:16:3e:43:dd:70", "network": {"id": "d6d35cfb-cc41-4788-977c-b8e5140795a0", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1510670722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1981e9617628491f938ef0ef01c061c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa2f5192-1c", "ovs_interfaceid": "aa2f5192-1c39-497e-8a7b-31d50bd48eb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:33:09 np0005539552 nova_compute[233724]: 2025-11-29 08:33:09.109 233728 DEBUG nova.network.os_vif_util [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:dd:70,bridge_name='br-int',has_traffic_filtering=True,id=aa2f5192-1c39-497e-8a7b-31d50bd48eb7,network=Network(d6d35cfb-cc41-4788-977c-b8e5140795a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa2f5192-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:33:09 np0005539552 nova_compute[233724]: 2025-11-29 08:33:09.111 233728 DEBUG nova.objects.instance [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 64f6896c-17f2-4ceb-98b9-50a541c98b7b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:33:09 np0005539552 nova_compute[233724]: 2025-11-29 08:33:09.133 233728 DEBUG nova.virt.libvirt.driver [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:33:09 np0005539552 nova_compute[233724]:  <uuid>64f6896c-17f2-4ceb-98b9-50a541c98b7b</uuid>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:  <name>instance-000000a8</name>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:33:09 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:      <nova:name>tempest-AttachVolumeTestJSON-server-1071799825</nova:name>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:33:07</nova:creationTime>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:33:09 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:        <nova:user uuid="5b0fe4d78df74554a3a5875ab629d59c">tempest-AttachVolumeTestJSON-169198681-project-member</nova:user>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:        <nova:project uuid="1981e9617628491f938ef0ef01c061c5">tempest-AttachVolumeTestJSON-169198681</nova:project>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:        <nova:port uuid="aa2f5192-1c39-497e-8a7b-31d50bd48eb7">
Nov 29 03:33:09 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:      <entry name="serial">64f6896c-17f2-4ceb-98b9-50a541c98b7b</entry>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:      <entry name="uuid">64f6896c-17f2-4ceb-98b9-50a541c98b7b</entry>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:33:09 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/64f6896c-17f2-4ceb-98b9-50a541c98b7b_disk">
Nov 29 03:33:09 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:33:09 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:33:09 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/64f6896c-17f2-4ceb-98b9-50a541c98b7b_disk.config">
Nov 29 03:33:09 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:33:09 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:33:09 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:43:dd:70"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:      <target dev="tapaa2f5192-1c"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:33:09 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/64f6896c-17f2-4ceb-98b9-50a541c98b7b/console.log" append="off"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:33:09 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:33:09 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:33:09 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:33:09 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:33:09 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:33:09 np0005539552 nova_compute[233724]: 2025-11-29 08:33:09.135 233728 DEBUG nova.compute.manager [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Preparing to wait for external event network-vif-plugged-aa2f5192-1c39-497e-8a7b-31d50bd48eb7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:33:09 np0005539552 nova_compute[233724]: 2025-11-29 08:33:09.135 233728 DEBUG oslo_concurrency.lockutils [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Acquiring lock "64f6896c-17f2-4ceb-98b9-50a541c98b7b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:09 np0005539552 nova_compute[233724]: 2025-11-29 08:33:09.136 233728 DEBUG oslo_concurrency.lockutils [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "64f6896c-17f2-4ceb-98b9-50a541c98b7b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:09 np0005539552 nova_compute[233724]: 2025-11-29 08:33:09.136 233728 DEBUG oslo_concurrency.lockutils [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "64f6896c-17f2-4ceb-98b9-50a541c98b7b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:09 np0005539552 nova_compute[233724]: 2025-11-29 08:33:09.137 233728 DEBUG nova.virt.libvirt.vif [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:32:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-1071799825',display_name='tempest-AttachVolumeTestJSON-server-1071799825',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-1071799825',id=168,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH8z7rrPeKOEKI612gpBMVEvIGpH3EMiF+hWdu6tWRiUAF9IFXNb+B4J5+W6qT7uDPKVKxau5gwrOF36u0kpS+8En2biuDD+O0UgFddmbT40+04wXSPyzWQWr4KYgMABrA==',key_name='tempest-keypair-1751719941',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1981e9617628491f938ef0ef01c061c5',ramdisk_id='',reservation_id='r-zqu0onmi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeTestJSON-169198681',owner_user_name='tempest-AttachVolumeTestJSON-169198681-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:33:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5b0fe4d78df74554a3a5875ab629d59c',uuid=64f6896c-17f2-4ceb-98b9-50a541c98b7b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aa2f5192-1c39-497e-8a7b-31d50bd48eb7", "address": "fa:16:3e:43:dd:70", "network": {"id": "d6d35cfb-cc41-4788-977c-b8e5140795a0", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1510670722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1981e9617628491f938ef0ef01c061c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa2f5192-1c", "ovs_interfaceid": "aa2f5192-1c39-497e-8a7b-31d50bd48eb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:33:09 np0005539552 nova_compute[233724]: 2025-11-29 08:33:09.138 233728 DEBUG nova.network.os_vif_util [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Converting VIF {"id": "aa2f5192-1c39-497e-8a7b-31d50bd48eb7", "address": "fa:16:3e:43:dd:70", "network": {"id": "d6d35cfb-cc41-4788-977c-b8e5140795a0", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1510670722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1981e9617628491f938ef0ef01c061c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa2f5192-1c", "ovs_interfaceid": "aa2f5192-1c39-497e-8a7b-31d50bd48eb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:33:09 np0005539552 nova_compute[233724]: 2025-11-29 08:33:09.138 233728 DEBUG nova.network.os_vif_util [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:dd:70,bridge_name='br-int',has_traffic_filtering=True,id=aa2f5192-1c39-497e-8a7b-31d50bd48eb7,network=Network(d6d35cfb-cc41-4788-977c-b8e5140795a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa2f5192-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:33:09 np0005539552 nova_compute[233724]: 2025-11-29 08:33:09.139 233728 DEBUG os_vif [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:dd:70,bridge_name='br-int',has_traffic_filtering=True,id=aa2f5192-1c39-497e-8a7b-31d50bd48eb7,network=Network(d6d35cfb-cc41-4788-977c-b8e5140795a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa2f5192-1c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:33:09 np0005539552 nova_compute[233724]: 2025-11-29 08:33:09.140 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:09 np0005539552 nova_compute[233724]: 2025-11-29 08:33:09.141 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:33:09 np0005539552 nova_compute[233724]: 2025-11-29 08:33:09.141 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:33:09 np0005539552 nova_compute[233724]: 2025-11-29 08:33:09.147 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:09 np0005539552 nova_compute[233724]: 2025-11-29 08:33:09.147 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa2f5192-1c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:33:09 np0005539552 nova_compute[233724]: 2025-11-29 08:33:09.148 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapaa2f5192-1c, col_values=(('external_ids', {'iface-id': 'aa2f5192-1c39-497e-8a7b-31d50bd48eb7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:43:dd:70', 'vm-uuid': '64f6896c-17f2-4ceb-98b9-50a541c98b7b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:33:09 np0005539552 nova_compute[233724]: 2025-11-29 08:33:09.149 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:09 np0005539552 NetworkManager[48926]: <info>  [1764405189.1515] manager: (tapaa2f5192-1c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/339)
Nov 29 03:33:09 np0005539552 nova_compute[233724]: 2025-11-29 08:33:09.153 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:33:09 np0005539552 nova_compute[233724]: 2025-11-29 08:33:09.157 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:09 np0005539552 nova_compute[233724]: 2025-11-29 08:33:09.158 233728 INFO os_vif [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:dd:70,bridge_name='br-int',has_traffic_filtering=True,id=aa2f5192-1c39-497e-8a7b-31d50bd48eb7,network=Network(d6d35cfb-cc41-4788-977c-b8e5140795a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa2f5192-1c')#033[00m
Nov 29 03:33:09 np0005539552 nova_compute[233724]: 2025-11-29 08:33:09.217 233728 DEBUG nova.virt.libvirt.driver [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:33:09 np0005539552 nova_compute[233724]: 2025-11-29 08:33:09.218 233728 DEBUG nova.virt.libvirt.driver [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:33:09 np0005539552 nova_compute[233724]: 2025-11-29 08:33:09.218 233728 DEBUG nova.virt.libvirt.driver [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] No VIF found with MAC fa:16:3e:43:dd:70, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:33:09 np0005539552 nova_compute[233724]: 2025-11-29 08:33:09.219 233728 INFO nova.virt.libvirt.driver [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Using config drive#033[00m
Nov 29 03:33:09 np0005539552 nova_compute[233724]: 2025-11-29 08:33:09.246 233728 DEBUG nova.storage.rbd_utils [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] rbd image 64f6896c-17f2-4ceb-98b9-50a541c98b7b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:33:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:09.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:09 np0005539552 nova_compute[233724]: 2025-11-29 08:33:09.708 233728 DEBUG nova.compute.manager [req-d04b3a70-2ee2-40e6-af9e-e68eac91853b req-bc826253-0081-4a20-a338-56fe399e6d07 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Received event network-changed-5369324b-4a12-4cff-807c-444de53025fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:33:09 np0005539552 nova_compute[233724]: 2025-11-29 08:33:09.709 233728 DEBUG nova.compute.manager [req-d04b3a70-2ee2-40e6-af9e-e68eac91853b req-bc826253-0081-4a20-a338-56fe399e6d07 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Refreshing instance network info cache due to event network-changed-5369324b-4a12-4cff-807c-444de53025fa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:33:09 np0005539552 nova_compute[233724]: 2025-11-29 08:33:09.709 233728 DEBUG oslo_concurrency.lockutils [req-d04b3a70-2ee2-40e6-af9e-e68eac91853b req-bc826253-0081-4a20-a338-56fe399e6d07 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:33:09 np0005539552 nova_compute[233724]: 2025-11-29 08:33:09.709 233728 DEBUG oslo_concurrency.lockutils [req-d04b3a70-2ee2-40e6-af9e-e68eac91853b req-bc826253-0081-4a20-a338-56fe399e6d07 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:33:09 np0005539552 nova_compute[233724]: 2025-11-29 08:33:09.710 233728 DEBUG nova.network.neutron [req-d04b3a70-2ee2-40e6-af9e-e68eac91853b req-bc826253-0081-4a20-a338-56fe399e6d07 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Refreshing network info cache for port 5369324b-4a12-4cff-807c-444de53025fa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:33:09 np0005539552 nova_compute[233724]: 2025-11-29 08:33:09.951 233728 INFO nova.virt.libvirt.driver [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Creating config drive at /var/lib/nova/instances/64f6896c-17f2-4ceb-98b9-50a541c98b7b/disk.config#033[00m
Nov 29 03:33:09 np0005539552 nova_compute[233724]: 2025-11-29 08:33:09.957 233728 DEBUG oslo_concurrency.processutils [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/64f6896c-17f2-4ceb-98b9-50a541c98b7b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8f0de9n5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:10 np0005539552 nova_compute[233724]: 2025-11-29 08:33:10.092 233728 DEBUG oslo_concurrency.processutils [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/64f6896c-17f2-4ceb-98b9-50a541c98b7b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8f0de9n5" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:10 np0005539552 nova_compute[233724]: 2025-11-29 08:33:10.122 233728 DEBUG nova.storage.rbd_utils [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] rbd image 64f6896c-17f2-4ceb-98b9-50a541c98b7b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:33:10 np0005539552 nova_compute[233724]: 2025-11-29 08:33:10.127 233728 DEBUG oslo_concurrency.processutils [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/64f6896c-17f2-4ceb-98b9-50a541c98b7b/disk.config 64f6896c-17f2-4ceb-98b9-50a541c98b7b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:10.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:10 np0005539552 nova_compute[233724]: 2025-11-29 08:33:10.776 233728 DEBUG oslo_concurrency.processutils [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/64f6896c-17f2-4ceb-98b9-50a541c98b7b/disk.config 64f6896c-17f2-4ceb-98b9-50a541c98b7b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.648s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:10 np0005539552 nova_compute[233724]: 2025-11-29 08:33:10.777 233728 INFO nova.virt.libvirt.driver [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Deleting local config drive /var/lib/nova/instances/64f6896c-17f2-4ceb-98b9-50a541c98b7b/disk.config because it was imported into RBD.#033[00m
Nov 29 03:33:10 np0005539552 kernel: tapaa2f5192-1c: entered promiscuous mode
Nov 29 03:33:10 np0005539552 NetworkManager[48926]: <info>  [1764405190.8320] manager: (tapaa2f5192-1c): new Tun device (/org/freedesktop/NetworkManager/Devices/340)
Nov 29 03:33:10 np0005539552 ovn_controller[133798]: 2025-11-29T08:33:10Z|00774|binding|INFO|Claiming lport aa2f5192-1c39-497e-8a7b-31d50bd48eb7 for this chassis.
Nov 29 03:33:10 np0005539552 ovn_controller[133798]: 2025-11-29T08:33:10Z|00775|binding|INFO|aa2f5192-1c39-497e-8a7b-31d50bd48eb7: Claiming fa:16:3e:43:dd:70 10.100.0.12
Nov 29 03:33:10 np0005539552 nova_compute[233724]: 2025-11-29 08:33:10.838 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:10.848 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:dd:70 10.100.0.12'], port_security=['fa:16:3e:43:dd:70 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '64f6896c-17f2-4ceb-98b9-50a541c98b7b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d6d35cfb-cc41-4788-977c-b8e5140795a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1981e9617628491f938ef0ef01c061c5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e0fe8682-2627-4ea9-b1b2-e4f9229a87b3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4f49e26a-f1b7-44a1-8f75-9c7ae476aa0d, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=aa2f5192-1c39-497e-8a7b-31d50bd48eb7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:33:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:10.849 143400 INFO neutron.agent.ovn.metadata.agent [-] Port aa2f5192-1c39-497e-8a7b-31d50bd48eb7 in datapath d6d35cfb-cc41-4788-977c-b8e5140795a0 bound to our chassis#033[00m
Nov 29 03:33:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:10.851 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d6d35cfb-cc41-4788-977c-b8e5140795a0#033[00m
Nov 29 03:33:10 np0005539552 systemd-udevd[303285]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:33:10 np0005539552 ovn_controller[133798]: 2025-11-29T08:33:10Z|00776|binding|INFO|Setting lport aa2f5192-1c39-497e-8a7b-31d50bd48eb7 ovn-installed in OVS
Nov 29 03:33:10 np0005539552 ovn_controller[133798]: 2025-11-29T08:33:10Z|00777|binding|INFO|Setting lport aa2f5192-1c39-497e-8a7b-31d50bd48eb7 up in Southbound
Nov 29 03:33:10 np0005539552 nova_compute[233724]: 2025-11-29 08:33:10.869 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:10 np0005539552 nova_compute[233724]: 2025-11-29 08:33:10.872 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:10.872 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d2e357a3-fbde-4dc6-a2c3-fcac2ec92116]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:10.874 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd6d35cfb-c1 in ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:33:10 np0005539552 systemd-machined[196379]: New machine qemu-78-instance-000000a8.
Nov 29 03:33:10 np0005539552 NetworkManager[48926]: <info>  [1764405190.8773] device (tapaa2f5192-1c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:33:10 np0005539552 NetworkManager[48926]: <info>  [1764405190.8788] device (tapaa2f5192-1c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:33:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:10.879 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd6d35cfb-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:33:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:10.879 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[34ad29c8-1ac7-444e-9e50-d844499d4cba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:10.881 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[25d8ff6a-f106-48d3-b4ab-36ca83da400f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:10.892 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[604274fe-4a70-4faf-9972-05064992b85d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:10 np0005539552 systemd[1]: Started Virtual Machine qemu-78-instance-000000a8.
Nov 29 03:33:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:10.918 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b503a9f8-0f32-4f84-969a-c4c127a9cbbd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:10.955 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[24194d79-da9e-4ae6-83fe-332b6fb446dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:10 np0005539552 NetworkManager[48926]: <info>  [1764405190.9620] manager: (tapd6d35cfb-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/341)
Nov 29 03:33:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:10.963 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6b3fde0d-f2fe-4ef4-832e-4db3f08d63a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:10.993 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[6436ef74-92a2-40bd-8f1b-754aaa093032]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:10 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:10.996 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[923d42ec-2e45-42af-b416-1adbf303c20d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:11 np0005539552 NetworkManager[48926]: <info>  [1764405191.0188] device (tapd6d35cfb-c0): carrier: link connected
Nov 29 03:33:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:11.024 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[2bc2f09b-48f9-4aa7-9cdc-22ec2cef2d72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:11.040 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4ec4ec7e-ceef-46d8-9664-15db28d1c762]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd6d35cfb-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5e:5b:88'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 227], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 823670, 'reachable_time': 39752, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303319, 'error': None, 'target': 'ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:11.056 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[983b8748-aecc-4b0f-96bb-36819d6e21d3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5e:5b88'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 823670, 'tstamp': 823670}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303320, 'error': None, 'target': 'ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:11.072 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[69739bdb-6a08-40d9-a981-0f6f13937714]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd6d35cfb-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5e:5b:88'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 227], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 823670, 'reachable_time': 39752, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 303321, 'error': None, 'target': 'ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:11.110 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3675af2e-421c-46e7-9f89-5e25a925cf66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:11.180 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[05136d2e-3c4d-4b46-ad78-158e06c0318b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:11.182 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd6d35cfb-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:33:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:11.182 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:33:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:11.183 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd6d35cfb-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:33:11 np0005539552 NetworkManager[48926]: <info>  [1764405191.1851] manager: (tapd6d35cfb-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/342)
Nov 29 03:33:11 np0005539552 kernel: tapd6d35cfb-c0: entered promiscuous mode
Nov 29 03:33:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:11.188 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd6d35cfb-c0, col_values=(('external_ids', {'iface-id': '070d06ed-b610-481b-b747-9c7d0eb2bcf2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:33:11 np0005539552 ovn_controller[133798]: 2025-11-29T08:33:11Z|00778|binding|INFO|Releasing lport 070d06ed-b610-481b-b747-9c7d0eb2bcf2 from this chassis (sb_readonly=0)
Nov 29 03:33:11 np0005539552 nova_compute[233724]: 2025-11-29 08:33:11.206 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:11 np0005539552 nova_compute[233724]: 2025-11-29 08:33:11.215 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:11 np0005539552 nova_compute[233724]: 2025-11-29 08:33:11.216 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:11.217 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d6d35cfb-cc41-4788-977c-b8e5140795a0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d6d35cfb-cc41-4788-977c-b8e5140795a0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:33:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:11.218 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a1389163-bb06-43a5-a434-b051b03a43b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:11.219 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:33:11 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:33:11 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:33:11 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-d6d35cfb-cc41-4788-977c-b8e5140795a0
Nov 29 03:33:11 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:33:11 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:33:11 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:33:11 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/d6d35cfb-cc41-4788-977c-b8e5140795a0.pid.haproxy
Nov 29 03:33:11 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:33:11 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:33:11 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:33:11 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:33:11 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:33:11 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:33:11 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:33:11 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:33:11 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:33:11 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:33:11 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:33:11 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:33:11 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:33:11 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:33:11 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:33:11 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:33:11 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:33:11 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:33:11 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:33:11 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:33:11 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID d6d35cfb-cc41-4788-977c-b8e5140795a0
Nov 29 03:33:11 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:33:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:11.219 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0', 'env', 'PROCESS_TAG=haproxy-d6d35cfb-cc41-4788-977c-b8e5140795a0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d6d35cfb-cc41-4788-977c-b8e5140795a0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:33:11 np0005539552 nova_compute[233724]: 2025-11-29 08:33:11.336 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405191.3357382, 64f6896c-17f2-4ceb-98b9-50a541c98b7b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:33:11 np0005539552 nova_compute[233724]: 2025-11-29 08:33:11.336 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] VM Started (Lifecycle Event)#033[00m
Nov 29 03:33:11 np0005539552 nova_compute[233724]: 2025-11-29 08:33:11.362 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:33:11 np0005539552 nova_compute[233724]: 2025-11-29 08:33:11.366 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405191.336596, 64f6896c-17f2-4ceb-98b9-50a541c98b7b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:33:11 np0005539552 nova_compute[233724]: 2025-11-29 08:33:11.367 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:33:11 np0005539552 nova_compute[233724]: 2025-11-29 08:33:11.389 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:33:11 np0005539552 nova_compute[233724]: 2025-11-29 08:33:11.392 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:33:11 np0005539552 nova_compute[233724]: 2025-11-29 08:33:11.399 233728 DEBUG nova.network.neutron [req-d04b3a70-2ee2-40e6-af9e-e68eac91853b req-bc826253-0081-4a20-a338-56fe399e6d07 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Updated VIF entry in instance network info cache for port 5369324b-4a12-4cff-807c-444de53025fa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:33:11 np0005539552 nova_compute[233724]: 2025-11-29 08:33:11.399 233728 DEBUG nova.network.neutron [req-d04b3a70-2ee2-40e6-af9e-e68eac91853b req-bc826253-0081-4a20-a338-56fe399e6d07 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Updating instance_info_cache with network_info: [{"id": "5369324b-4a12-4cff-807c-444de53025fa", "address": "fa:16:3e:a3:51:12", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5369324b-4a", "ovs_interfaceid": "5369324b-4a12-4cff-807c-444de53025fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:33:11 np0005539552 nova_compute[233724]: 2025-11-29 08:33:11.422 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:33:11 np0005539552 nova_compute[233724]: 2025-11-29 08:33:11.611 233728 DEBUG oslo_concurrency.lockutils [req-d04b3a70-2ee2-40e6-af9e-e68eac91853b req-bc826253-0081-4a20-a338-56fe399e6d07 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:33:11 np0005539552 nova_compute[233724]: 2025-11-29 08:33:11.614 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:11.614 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:33:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:11.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:11 np0005539552 podman[303395]: 2025-11-29 08:33:11.549666407 +0000 UTC m=+0.021156240 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:33:11 np0005539552 podman[303395]: 2025-11-29 08:33:11.690878237 +0000 UTC m=+0.162368050 container create 0cabf8fde97e5e132be4d210e7654a6fff6155e2ee5327239bc7c9a4bd105838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 03:33:11 np0005539552 systemd[1]: Started libpod-conmon-0cabf8fde97e5e132be4d210e7654a6fff6155e2ee5327239bc7c9a4bd105838.scope.
Nov 29 03:33:11 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:33:11 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d3c85f93d1106a4721e65968ec806d7d0ad6098df5184b122729b9ad667c9af/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:33:11 np0005539552 nova_compute[233724]: 2025-11-29 08:33:11.867 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:11 np0005539552 nova_compute[233724]: 2025-11-29 08:33:11.954 233728 DEBUG nova.compute.manager [req-57b24aa9-5be2-42e1-9891-6594ed1c2185 req-135a454c-ddd7-4139-98b2-ac5539f57389 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Received event network-vif-plugged-aa2f5192-1c39-497e-8a7b-31d50bd48eb7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:33:11 np0005539552 nova_compute[233724]: 2025-11-29 08:33:11.954 233728 DEBUG oslo_concurrency.lockutils [req-57b24aa9-5be2-42e1-9891-6594ed1c2185 req-135a454c-ddd7-4139-98b2-ac5539f57389 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "64f6896c-17f2-4ceb-98b9-50a541c98b7b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:11 np0005539552 nova_compute[233724]: 2025-11-29 08:33:11.955 233728 DEBUG oslo_concurrency.lockutils [req-57b24aa9-5be2-42e1-9891-6594ed1c2185 req-135a454c-ddd7-4139-98b2-ac5539f57389 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "64f6896c-17f2-4ceb-98b9-50a541c98b7b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:11 np0005539552 nova_compute[233724]: 2025-11-29 08:33:11.956 233728 DEBUG oslo_concurrency.lockutils [req-57b24aa9-5be2-42e1-9891-6594ed1c2185 req-135a454c-ddd7-4139-98b2-ac5539f57389 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "64f6896c-17f2-4ceb-98b9-50a541c98b7b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:11 np0005539552 nova_compute[233724]: 2025-11-29 08:33:11.956 233728 DEBUG nova.compute.manager [req-57b24aa9-5be2-42e1-9891-6594ed1c2185 req-135a454c-ddd7-4139-98b2-ac5539f57389 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Processing event network-vif-plugged-aa2f5192-1c39-497e-8a7b-31d50bd48eb7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:33:11 np0005539552 nova_compute[233724]: 2025-11-29 08:33:11.956 233728 DEBUG nova.compute.manager [req-57b24aa9-5be2-42e1-9891-6594ed1c2185 req-135a454c-ddd7-4139-98b2-ac5539f57389 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Received event network-vif-plugged-aa2f5192-1c39-497e-8a7b-31d50bd48eb7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:33:11 np0005539552 nova_compute[233724]: 2025-11-29 08:33:11.957 233728 DEBUG oslo_concurrency.lockutils [req-57b24aa9-5be2-42e1-9891-6594ed1c2185 req-135a454c-ddd7-4139-98b2-ac5539f57389 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "64f6896c-17f2-4ceb-98b9-50a541c98b7b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:11 np0005539552 nova_compute[233724]: 2025-11-29 08:33:11.958 233728 DEBUG oslo_concurrency.lockutils [req-57b24aa9-5be2-42e1-9891-6594ed1c2185 req-135a454c-ddd7-4139-98b2-ac5539f57389 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "64f6896c-17f2-4ceb-98b9-50a541c98b7b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:11 np0005539552 nova_compute[233724]: 2025-11-29 08:33:11.958 233728 DEBUG oslo_concurrency.lockutils [req-57b24aa9-5be2-42e1-9891-6594ed1c2185 req-135a454c-ddd7-4139-98b2-ac5539f57389 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "64f6896c-17f2-4ceb-98b9-50a541c98b7b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:11 np0005539552 nova_compute[233724]: 2025-11-29 08:33:11.958 233728 DEBUG nova.compute.manager [req-57b24aa9-5be2-42e1-9891-6594ed1c2185 req-135a454c-ddd7-4139-98b2-ac5539f57389 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] No waiting events found dispatching network-vif-plugged-aa2f5192-1c39-497e-8a7b-31d50bd48eb7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:33:11 np0005539552 nova_compute[233724]: 2025-11-29 08:33:11.959 233728 WARNING nova.compute.manager [req-57b24aa9-5be2-42e1-9891-6594ed1c2185 req-135a454c-ddd7-4139-98b2-ac5539f57389 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Received unexpected event network-vif-plugged-aa2f5192-1c39-497e-8a7b-31d50bd48eb7 for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:33:11 np0005539552 nova_compute[233724]: 2025-11-29 08:33:11.960 233728 DEBUG nova.compute.manager [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:33:11 np0005539552 nova_compute[233724]: 2025-11-29 08:33:11.965 233728 DEBUG nova.virt.libvirt.driver [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:33:11 np0005539552 nova_compute[233724]: 2025-11-29 08:33:11.971 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405191.9709306, 64f6896c-17f2-4ceb-98b9-50a541c98b7b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:33:11 np0005539552 nova_compute[233724]: 2025-11-29 08:33:11.972 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:33:11 np0005539552 podman[303395]: 2025-11-29 08:33:11.977294753 +0000 UTC m=+0.448784656 container init 0cabf8fde97e5e132be4d210e7654a6fff6155e2ee5327239bc7c9a4bd105838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:33:11 np0005539552 nova_compute[233724]: 2025-11-29 08:33:11.984 233728 INFO nova.virt.libvirt.driver [-] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Instance spawned successfully.#033[00m
Nov 29 03:33:11 np0005539552 nova_compute[233724]: 2025-11-29 08:33:11.985 233728 DEBUG nova.virt.libvirt.driver [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:33:11 np0005539552 podman[303395]: 2025-11-29 08:33:11.986548372 +0000 UTC m=+0.458038215 container start 0cabf8fde97e5e132be4d210e7654a6fff6155e2ee5327239bc7c9a4bd105838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 03:33:12 np0005539552 nova_compute[233724]: 2025-11-29 08:33:12.001 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:33:12 np0005539552 nova_compute[233724]: 2025-11-29 08:33:12.009 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:33:12 np0005539552 nova_compute[233724]: 2025-11-29 08:33:12.016 233728 DEBUG nova.virt.libvirt.driver [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:33:12 np0005539552 nova_compute[233724]: 2025-11-29 08:33:12.017 233728 DEBUG nova.virt.libvirt.driver [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:33:12 np0005539552 nova_compute[233724]: 2025-11-29 08:33:12.017 233728 DEBUG nova.virt.libvirt.driver [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:33:12 np0005539552 nova_compute[233724]: 2025-11-29 08:33:12.018 233728 DEBUG nova.virt.libvirt.driver [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:33:12 np0005539552 nova_compute[233724]: 2025-11-29 08:33:12.019 233728 DEBUG nova.virt.libvirt.driver [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:33:12 np0005539552 nova_compute[233724]: 2025-11-29 08:33:12.019 233728 DEBUG nova.virt.libvirt.driver [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:33:12 np0005539552 neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0[303410]: [NOTICE]   (303414) : New worker (303416) forked
Nov 29 03:33:12 np0005539552 neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0[303410]: [NOTICE]   (303414) : Loading success.
Nov 29 03:33:12 np0005539552 nova_compute[233724]: 2025-11-29 08:33:12.032 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:33:12 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:12.039 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:33:12 np0005539552 nova_compute[233724]: 2025-11-29 08:33:12.220 233728 INFO nova.compute.manager [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Took 10.45 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:33:12 np0005539552 nova_compute[233724]: 2025-11-29 08:33:12.221 233728 DEBUG nova.compute.manager [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:33:12 np0005539552 nova_compute[233724]: 2025-11-29 08:33:12.359 233728 INFO nova.compute.manager [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Took 11.80 seconds to build instance.#033[00m
Nov 29 03:33:12 np0005539552 nova_compute[233724]: 2025-11-29 08:33:12.429 233728 DEBUG oslo_concurrency.lockutils [None req-63e77629-45ef-4e9a-a448-ea257e3b79ba 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "64f6896c-17f2-4ceb-98b9-50a541c98b7b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.990s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:12.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:13 np0005539552 nova_compute[233724]: 2025-11-29 08:33:13.025 233728 DEBUG oslo_concurrency.lockutils [None req-5dd48fd6-1c0b-4c40-9c58-fb405363c4ec c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:13 np0005539552 nova_compute[233724]: 2025-11-29 08:33:13.026 233728 DEBUG oslo_concurrency.lockutils [None req-5dd48fd6-1c0b-4c40-9c58-fb405363c4ec c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:13 np0005539552 nova_compute[233724]: 2025-11-29 08:33:13.046 233728 DEBUG nova.objects.instance [None req-5dd48fd6-1c0b-4c40-9c58-fb405363c4ec c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lazy-loading 'flavor' on Instance uuid 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:33:13 np0005539552 nova_compute[233724]: 2025-11-29 08:33:13.079 233728 DEBUG oslo_concurrency.lockutils [None req-5dd48fd6-1c0b-4c40-9c58-fb405363c4ec c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.053s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:13 np0005539552 nova_compute[233724]: 2025-11-29 08:33:13.341 233728 DEBUG oslo_concurrency.lockutils [None req-5dd48fd6-1c0b-4c40-9c58-fb405363c4ec c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:13 np0005539552 nova_compute[233724]: 2025-11-29 08:33:13.342 233728 DEBUG oslo_concurrency.lockutils [None req-5dd48fd6-1c0b-4c40-9c58-fb405363c4ec c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:13 np0005539552 nova_compute[233724]: 2025-11-29 08:33:13.342 233728 INFO nova.compute.manager [None req-5dd48fd6-1c0b-4c40-9c58-fb405363c4ec c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Attaching volume d51465d5-c782-4ab5-86e5-16500d7ed93e to /dev/vdb#033[00m
Nov 29 03:33:13 np0005539552 nova_compute[233724]: 2025-11-29 08:33:13.588 233728 DEBUG os_brick.utils [None req-5dd48fd6-1c0b-4c40-9c58-fb405363c4ec c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:33:13 np0005539552 nova_compute[233724]: 2025-11-29 08:33:13.589 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:13 np0005539552 nova_compute[233724]: 2025-11-29 08:33:13.601 243418 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:13 np0005539552 nova_compute[233724]: 2025-11-29 08:33:13.601 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[16d8dfae-7377-4be8-9e5f-fe97799bd522]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:13 np0005539552 nova_compute[233724]: 2025-11-29 08:33:13.602 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:13 np0005539552 nova_compute[233724]: 2025-11-29 08:33:13.611 243418 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:13 np0005539552 nova_compute[233724]: 2025-11-29 08:33:13.611 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[42f7691a-c16c-4512-8ce3-445791e4e63d]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0e73df9328', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:13 np0005539552 nova_compute[233724]: 2025-11-29 08:33:13.613 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:13 np0005539552 nova_compute[233724]: 2025-11-29 08:33:13.621 243418 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:13 np0005539552 nova_compute[233724]: 2025-11-29 08:33:13.622 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[3e807e28-fb10-4c7c-bb5b-8970dd7ac8d9]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:13 np0005539552 nova_compute[233724]: 2025-11-29 08:33:13.623 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[5280b3a6-1c87-4fd3-8618-134548825228]: (4, '6fbde64a-d978-4f1a-a29d-a77e1f5a1987') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:13 np0005539552 nova_compute[233724]: 2025-11-29 08:33:13.623 233728 DEBUG oslo_concurrency.processutils [None req-5dd48fd6-1c0b-4c40-9c58-fb405363c4ec c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:13.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:13 np0005539552 nova_compute[233724]: 2025-11-29 08:33:13.652 233728 DEBUG oslo_concurrency.processutils [None req-5dd48fd6-1c0b-4c40-9c58-fb405363c4ec c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "nvme version" returned: 0 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:13 np0005539552 nova_compute[233724]: 2025-11-29 08:33:13.655 233728 DEBUG os_brick.initiator.connectors.lightos [None req-5dd48fd6-1c0b-4c40-9c58-fb405363c4ec c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:33:13 np0005539552 nova_compute[233724]: 2025-11-29 08:33:13.655 233728 DEBUG os_brick.initiator.connectors.lightos [None req-5dd48fd6-1c0b-4c40-9c58-fb405363c4ec c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:33:13 np0005539552 nova_compute[233724]: 2025-11-29 08:33:13.655 233728 DEBUG os_brick.initiator.connectors.lightos [None req-5dd48fd6-1c0b-4c40-9c58-fb405363c4ec c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:33:13 np0005539552 nova_compute[233724]: 2025-11-29 08:33:13.655 233728 DEBUG os_brick.utils [None req-5dd48fd6-1c0b-4c40-9c58-fb405363c4ec c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] <== get_connector_properties: return (67ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0e73df9328', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '6fbde64a-d978-4f1a-a29d-a77e1f5a1987', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:33:13 np0005539552 nova_compute[233724]: 2025-11-29 08:33:13.656 233728 DEBUG nova.virt.block_device [None req-5dd48fd6-1c0b-4c40-9c58-fb405363c4ec c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Updating existing volume attachment record: 1634e0ac-a969-4eaf-8b49-e4304a7b066e _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:33:14 np0005539552 nova_compute[233724]: 2025-11-29 08:33:14.150 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:14 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:33:14 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1774738147' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:33:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:14.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:14 np0005539552 nova_compute[233724]: 2025-11-29 08:33:14.453 233728 DEBUG nova.objects.instance [None req-5dd48fd6-1c0b-4c40-9c58-fb405363c4ec c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lazy-loading 'flavor' on Instance uuid 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:33:14 np0005539552 nova_compute[233724]: 2025-11-29 08:33:14.480 233728 DEBUG nova.virt.libvirt.driver [None req-5dd48fd6-1c0b-4c40-9c58-fb405363c4ec c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Attempting to attach volume d51465d5-c782-4ab5-86e5-16500d7ed93e with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Nov 29 03:33:14 np0005539552 nova_compute[233724]: 2025-11-29 08:33:14.483 233728 DEBUG nova.virt.libvirt.guest [None req-5dd48fd6-1c0b-4c40-9c58-fb405363c4ec c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 03:33:14 np0005539552 nova_compute[233724]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:33:14 np0005539552 nova_compute[233724]:  <source protocol="rbd" name="volumes/volume-d51465d5-c782-4ab5-86e5-16500d7ed93e">
Nov 29 03:33:14 np0005539552 nova_compute[233724]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:33:14 np0005539552 nova_compute[233724]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:33:14 np0005539552 nova_compute[233724]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:33:14 np0005539552 nova_compute[233724]:  </source>
Nov 29 03:33:14 np0005539552 nova_compute[233724]:  <auth username="openstack">
Nov 29 03:33:14 np0005539552 nova_compute[233724]:    <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:33:14 np0005539552 nova_compute[233724]:  </auth>
Nov 29 03:33:14 np0005539552 nova_compute[233724]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:33:14 np0005539552 nova_compute[233724]:  <serial>d51465d5-c782-4ab5-86e5-16500d7ed93e</serial>
Nov 29 03:33:14 np0005539552 nova_compute[233724]:  <shareable/>
Nov 29 03:33:14 np0005539552 nova_compute[233724]: </disk>
Nov 29 03:33:14 np0005539552 nova_compute[233724]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 03:33:14 np0005539552 nova_compute[233724]: 2025-11-29 08:33:14.624 233728 DEBUG nova.virt.libvirt.driver [None req-5dd48fd6-1c0b-4c40-9c58-fb405363c4ec c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:33:14 np0005539552 nova_compute[233724]: 2025-11-29 08:33:14.624 233728 DEBUG nova.virt.libvirt.driver [None req-5dd48fd6-1c0b-4c40-9c58-fb405363c4ec c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:33:14 np0005539552 nova_compute[233724]: 2025-11-29 08:33:14.625 233728 DEBUG nova.virt.libvirt.driver [None req-5dd48fd6-1c0b-4c40-9c58-fb405363c4ec c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:33:14 np0005539552 nova_compute[233724]: 2025-11-29 08:33:14.625 233728 DEBUG nova.virt.libvirt.driver [None req-5dd48fd6-1c0b-4c40-9c58-fb405363c4ec c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] No VIF found with MAC fa:16:3e:a3:51:12, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:33:14 np0005539552 nova_compute[233724]: 2025-11-29 08:33:14.870 233728 DEBUG oslo_concurrency.lockutils [None req-5dd48fd6-1c0b-4c40-9c58-fb405363c4ec c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.528s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:15.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:15 np0005539552 nova_compute[233724]: 2025-11-29 08:33:15.825 233728 DEBUG nova.compute.manager [req-aef44612-ba6f-4f4b-ac51-63d3b0a20bb0 req-0d3d5ff5-a84d-4cb0-87ca-526f91c2e65d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Received event network-changed-aa2f5192-1c39-497e-8a7b-31d50bd48eb7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:33:15 np0005539552 nova_compute[233724]: 2025-11-29 08:33:15.825 233728 DEBUG nova.compute.manager [req-aef44612-ba6f-4f4b-ac51-63d3b0a20bb0 req-0d3d5ff5-a84d-4cb0-87ca-526f91c2e65d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Refreshing instance network info cache due to event network-changed-aa2f5192-1c39-497e-8a7b-31d50bd48eb7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:33:15 np0005539552 nova_compute[233724]: 2025-11-29 08:33:15.826 233728 DEBUG oslo_concurrency.lockutils [req-aef44612-ba6f-4f4b-ac51-63d3b0a20bb0 req-0d3d5ff5-a84d-4cb0-87ca-526f91c2e65d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-64f6896c-17f2-4ceb-98b9-50a541c98b7b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:33:15 np0005539552 nova_compute[233724]: 2025-11-29 08:33:15.826 233728 DEBUG oslo_concurrency.lockutils [req-aef44612-ba6f-4f4b-ac51-63d3b0a20bb0 req-0d3d5ff5-a84d-4cb0-87ca-526f91c2e65d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-64f6896c-17f2-4ceb-98b9-50a541c98b7b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:33:15 np0005539552 nova_compute[233724]: 2025-11-29 08:33:15.826 233728 DEBUG nova.network.neutron [req-aef44612-ba6f-4f4b-ac51-63d3b0a20bb0 req-0d3d5ff5-a84d-4cb0-87ca-526f91c2e65d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Refreshing network info cache for port aa2f5192-1c39-497e-8a7b-31d50bd48eb7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:33:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:16.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:16 np0005539552 nova_compute[233724]: 2025-11-29 08:33:16.870 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:16 np0005539552 nova_compute[233724]: 2025-11-29 08:33:16.909 233728 DEBUG nova.network.neutron [req-aef44612-ba6f-4f4b-ac51-63d3b0a20bb0 req-0d3d5ff5-a84d-4cb0-87ca-526f91c2e65d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Updated VIF entry in instance network info cache for port aa2f5192-1c39-497e-8a7b-31d50bd48eb7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:33:16 np0005539552 nova_compute[233724]: 2025-11-29 08:33:16.909 233728 DEBUG nova.network.neutron [req-aef44612-ba6f-4f4b-ac51-63d3b0a20bb0 req-0d3d5ff5-a84d-4cb0-87ca-526f91c2e65d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Updating instance_info_cache with network_info: [{"id": "aa2f5192-1c39-497e-8a7b-31d50bd48eb7", "address": "fa:16:3e:43:dd:70", "network": {"id": "d6d35cfb-cc41-4788-977c-b8e5140795a0", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1510670722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1981e9617628491f938ef0ef01c061c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa2f5192-1c", "ovs_interfaceid": "aa2f5192-1c39-497e-8a7b-31d50bd48eb7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:33:16 np0005539552 nova_compute[233724]: 2025-11-29 08:33:16.924 233728 DEBUG oslo_concurrency.lockutils [req-aef44612-ba6f-4f4b-ac51-63d3b0a20bb0 req-0d3d5ff5-a84d-4cb0-87ca-526f91c2e65d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-64f6896c-17f2-4ceb-98b9-50a541c98b7b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:33:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:33:17 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2868787315' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:33:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:17.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:18.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:18 np0005539552 nova_compute[233724]: 2025-11-29 08:33:18.699 233728 DEBUG nova.virt.libvirt.driver [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 29 03:33:19 np0005539552 nova_compute[233724]: 2025-11-29 08:33:19.153 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:19.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:20.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:20.639 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:20.639 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:20.640 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:21.041 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:33:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:21.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:21 np0005539552 nova_compute[233724]: 2025-11-29 08:33:21.873 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:21 np0005539552 nova_compute[233724]: 2025-11-29 08:33:21.997 233728 DEBUG oslo_concurrency.lockutils [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "refresh_cache-8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:33:21 np0005539552 nova_compute[233724]: 2025-11-29 08:33:21.998 233728 DEBUG oslo_concurrency.lockutils [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquired lock "refresh_cache-8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:33:21 np0005539552 nova_compute[233724]: 2025-11-29 08:33:21.998 233728 DEBUG nova.network.neutron [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:33:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:22.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:23 np0005539552 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d000000a7.scope: Deactivated successfully.
Nov 29 03:33:23 np0005539552 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d000000a7.scope: Consumed 14.016s CPU time.
Nov 29 03:33:23 np0005539552 systemd-machined[196379]: Machine qemu-77-instance-000000a7 terminated.
Nov 29 03:33:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:23 np0005539552 nova_compute[233724]: 2025-11-29 08:33:23.312 233728 DEBUG nova.network.neutron [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Updating instance_info_cache with network_info: [{"id": "5369324b-4a12-4cff-807c-444de53025fa", "address": "fa:16:3e:a3:51:12", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5369324b-4a", "ovs_interfaceid": "5369324b-4a12-4cff-807c-444de53025fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:33:23 np0005539552 nova_compute[233724]: 2025-11-29 08:33:23.333 233728 DEBUG oslo_concurrency.lockutils [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Releasing lock "refresh_cache-8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:33:23 np0005539552 nova_compute[233724]: 2025-11-29 08:33:23.435 233728 DEBUG nova.virt.libvirt.driver [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Nov 29 03:33:23 np0005539552 nova_compute[233724]: 2025-11-29 08:33:23.436 233728 DEBUG nova.virt.libvirt.volume.remotefs [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Creating file /var/lib/nova/instances/8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a/c9e2554782bd4b3e8379d174028381f0.tmp on remote host 192.168.122.100 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Nov 29 03:33:23 np0005539552 nova_compute[233724]: 2025-11-29 08:33:23.437 233728 DEBUG oslo_concurrency.processutils [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a/c9e2554782bd4b3e8379d174028381f0.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:23.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:23 np0005539552 nova_compute[233724]: 2025-11-29 08:33:23.721 233728 INFO nova.virt.libvirt.driver [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Instance shutdown successfully after 15 seconds.#033[00m
Nov 29 03:33:23 np0005539552 nova_compute[233724]: 2025-11-29 08:33:23.728 233728 INFO nova.virt.libvirt.driver [-] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Instance destroyed successfully.#033[00m
Nov 29 03:33:23 np0005539552 nova_compute[233724]: 2025-11-29 08:33:23.734 233728 INFO nova.virt.libvirt.driver [-] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Instance destroyed successfully.#033[00m
Nov 29 03:33:24 np0005539552 nova_compute[233724]: 2025-11-29 08:33:24.035 233728 DEBUG oslo_concurrency.processutils [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a/c9e2554782bd4b3e8379d174028381f0.tmp" returned: 1 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:24 np0005539552 nova_compute[233724]: 2025-11-29 08:33:24.036 233728 DEBUG oslo_concurrency.processutils [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] 'ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a/c9e2554782bd4b3e8379d174028381f0.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 29 03:33:24 np0005539552 nova_compute[233724]: 2025-11-29 08:33:24.036 233728 DEBUG nova.virt.libvirt.volume.remotefs [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Creating directory /var/lib/nova/instances/8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a on remote host 192.168.122.100 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Nov 29 03:33:24 np0005539552 nova_compute[233724]: 2025-11-29 08:33:24.037 233728 DEBUG oslo_concurrency.processutils [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:24 np0005539552 podman[303531]: 2025-11-29 08:33:24.149460883 +0000 UTC m=+0.064712442 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:33:24 np0005539552 nova_compute[233724]: 2025-11-29 08:33:24.156 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:24 np0005539552 podman[303530]: 2025-11-29 08:33:24.169872182 +0000 UTC m=+0.092563511 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125)
Nov 29 03:33:24 np0005539552 podman[303532]: 2025-11-29 08:33:24.185537794 +0000 UTC m=+0.095113991 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:33:24 np0005539552 nova_compute[233724]: 2025-11-29 08:33:24.253 233728 DEBUG oslo_concurrency.processutils [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" returned: 0 in 0.216s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:24 np0005539552 nova_compute[233724]: 2025-11-29 08:33:24.256 233728 DEBUG nova.virt.libvirt.driver [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:33:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:24.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:25.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:26.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:26 np0005539552 nova_compute[233724]: 2025-11-29 08:33:26.875 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:27.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:28 np0005539552 nova_compute[233724]: 2025-11-29 08:33:28.200 233728 INFO nova.virt.libvirt.driver [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Deleting instance files /var/lib/nova/instances/b116fe85-6509-4516-bc73-6cd5fd20ecc2_del#033[00m
Nov 29 03:33:28 np0005539552 nova_compute[233724]: 2025-11-29 08:33:28.201 233728 INFO nova.virt.libvirt.driver [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Deletion of /var/lib/nova/instances/b116fe85-6509-4516-bc73-6cd5fd20ecc2_del complete#033[00m
Nov 29 03:33:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:28.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:28 np0005539552 kernel: tap5369324b-4a (unregistering): left promiscuous mode
Nov 29 03:33:28 np0005539552 NetworkManager[48926]: <info>  [1764405208.7753] device (tap5369324b-4a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:33:28 np0005539552 ovn_controller[133798]: 2025-11-29T08:33:28Z|00779|binding|INFO|Releasing lport 5369324b-4a12-4cff-807c-444de53025fa from this chassis (sb_readonly=0)
Nov 29 03:33:28 np0005539552 ovn_controller[133798]: 2025-11-29T08:33:28Z|00780|binding|INFO|Setting lport 5369324b-4a12-4cff-807c-444de53025fa down in Southbound
Nov 29 03:33:28 np0005539552 nova_compute[233724]: 2025-11-29 08:33:28.788 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:28 np0005539552 ovn_controller[133798]: 2025-11-29T08:33:28Z|00781|binding|INFO|Removing iface tap5369324b-4a ovn-installed in OVS
Nov 29 03:33:28 np0005539552 nova_compute[233724]: 2025-11-29 08:33:28.792 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:28 np0005539552 nova_compute[233724]: 2025-11-29 08:33:28.819 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:28.844 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a3:51:12 10.100.0.11'], port_security=['fa:16:3e:a3:51:12 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4f6db81949d487b853d7567f8a2e6d4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '56b7aa4d-4e93-4da8-a338-5b87494d2fcd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=794eeb47-266a-47f4-b2a1-7a89e6c6ba82, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=5369324b-4a12-4cff-807c-444de53025fa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:33:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:28.847 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 5369324b-4a12-4cff-807c-444de53025fa in datapath ed50ff83-51d1-4b35-b85c-1cbe6fb812c6 unbound from our chassis#033[00m
Nov 29 03:33:28 np0005539552 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d000000a4.scope: Deactivated successfully.
Nov 29 03:33:28 np0005539552 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d000000a4.scope: Consumed 16.275s CPU time.
Nov 29 03:33:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:28.850 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ed50ff83-51d1-4b35-b85c-1cbe6fb812c6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:33:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:28.851 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e4f01e5c-81f3-4033-a5fb-0c95513de0bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:28.852 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6 namespace which is not needed anymore#033[00m
Nov 29 03:33:28 np0005539552 systemd-machined[196379]: Machine qemu-76-instance-000000a4 terminated.
Nov 29 03:33:28 np0005539552 nova_compute[233724]: 2025-11-29 08:33:28.885 233728 DEBUG nova.virt.libvirt.driver [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:33:28 np0005539552 nova_compute[233724]: 2025-11-29 08:33:28.886 233728 INFO nova.virt.libvirt.driver [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Creating image(s)#033[00m
Nov 29 03:33:28 np0005539552 nova_compute[233724]: 2025-11-29 08:33:28.929 233728 DEBUG nova.storage.rbd_utils [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] rbd image b116fe85-6509-4516-bc73-6cd5fd20ecc2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:33:28 np0005539552 nova_compute[233724]: 2025-11-29 08:33:28.968 233728 DEBUG nova.storage.rbd_utils [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] rbd image b116fe85-6509-4516-bc73-6cd5fd20ecc2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:33:29 np0005539552 nova_compute[233724]: 2025-11-29 08:33:29.006 233728 DEBUG nova.storage.rbd_utils [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] rbd image b116fe85-6509-4516-bc73-6cd5fd20ecc2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:33:29 np0005539552 neutron-haproxy-ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6[302375]: [NOTICE]   (302390) : haproxy version is 2.8.14-c23fe91
Nov 29 03:33:29 np0005539552 neutron-haproxy-ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6[302375]: [NOTICE]   (302390) : path to executable is /usr/sbin/haproxy
Nov 29 03:33:29 np0005539552 neutron-haproxy-ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6[302375]: [WARNING]  (302390) : Exiting Master process...
Nov 29 03:33:29 np0005539552 neutron-haproxy-ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6[302375]: [WARNING]  (302390) : Exiting Master process...
Nov 29 03:33:29 np0005539552 neutron-haproxy-ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6[302375]: [ALERT]    (302390) : Current worker (302392) exited with code 143 (Terminated)
Nov 29 03:33:29 np0005539552 neutron-haproxy-ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6[302375]: [WARNING]  (302390) : All workers exited. Exiting... (0)
Nov 29 03:33:29 np0005539552 systemd[1]: libpod-2f6d9cfd1d293067e66f1d55f06e08b4b45811d6360047891bbec63414bdb429.scope: Deactivated successfully.
Nov 29 03:33:29 np0005539552 nova_compute[233724]: 2025-11-29 08:33:29.016 233728 DEBUG oslo_concurrency.processutils [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:29 np0005539552 podman[303659]: 2025-11-29 08:33:29.018229906 +0000 UTC m=+0.047986782 container died 2f6d9cfd1d293067e66f1d55f06e08b4b45811d6360047891bbec63414bdb429 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:33:29 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2f6d9cfd1d293067e66f1d55f06e08b4b45811d6360047891bbec63414bdb429-userdata-shm.mount: Deactivated successfully.
Nov 29 03:33:29 np0005539552 systemd[1]: var-lib-containers-storage-overlay-2e89b1b6c42f4e70efd0d9366dd86ac5a1f7e82e2b64373cd151db32e57a6513-merged.mount: Deactivated successfully.
Nov 29 03:33:29 np0005539552 podman[303659]: 2025-11-29 08:33:29.057579285 +0000 UTC m=+0.087336161 container cleanup 2f6d9cfd1d293067e66f1d55f06e08b4b45811d6360047891bbec63414bdb429 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 03:33:29 np0005539552 systemd[1]: libpod-conmon-2f6d9cfd1d293067e66f1d55f06e08b4b45811d6360047891bbec63414bdb429.scope: Deactivated successfully.
Nov 29 03:33:29 np0005539552 nova_compute[233724]: 2025-11-29 08:33:29.093 233728 DEBUG oslo_concurrency.processutils [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:29 np0005539552 nova_compute[233724]: 2025-11-29 08:33:29.096 233728 DEBUG oslo_concurrency.lockutils [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Acquiring lock "6e1589dfec5abd76868fdc022175780e085b08de" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:29 np0005539552 nova_compute[233724]: 2025-11-29 08:33:29.097 233728 DEBUG oslo_concurrency.lockutils [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Lock "6e1589dfec5abd76868fdc022175780e085b08de" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:29 np0005539552 nova_compute[233724]: 2025-11-29 08:33:29.098 233728 DEBUG oslo_concurrency.lockutils [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Lock "6e1589dfec5abd76868fdc022175780e085b08de" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:29 np0005539552 podman[303720]: 2025-11-29 08:33:29.134016502 +0000 UTC m=+0.054072086 container remove 2f6d9cfd1d293067e66f1d55f06e08b4b45811d6360047891bbec63414bdb429 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:33:29 np0005539552 nova_compute[233724]: 2025-11-29 08:33:29.140 233728 DEBUG nova.storage.rbd_utils [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] rbd image b116fe85-6509-4516-bc73-6cd5fd20ecc2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:33:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:29.143 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ba258a98-102f-4bbf-bf83-17bef43b1707]: (4, ('Sat Nov 29 08:33:28 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6 (2f6d9cfd1d293067e66f1d55f06e08b4b45811d6360047891bbec63414bdb429)\n2f6d9cfd1d293067e66f1d55f06e08b4b45811d6360047891bbec63414bdb429\nSat Nov 29 08:33:29 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6 (2f6d9cfd1d293067e66f1d55f06e08b4b45811d6360047891bbec63414bdb429)\n2f6d9cfd1d293067e66f1d55f06e08b4b45811d6360047891bbec63414bdb429\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:29.144 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c37228eb-db6c-445b-aaf3-719e73f24193]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:29 np0005539552 nova_compute[233724]: 2025-11-29 08:33:29.145 233728 DEBUG oslo_concurrency.processutils [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de b116fe85-6509-4516-bc73-6cd5fd20ecc2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:29.145 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped50ff83-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:33:29 np0005539552 kernel: taped50ff83-50: left promiscuous mode
Nov 29 03:33:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:29.171 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[38f5814f-1b26-4041-bc70-cf9ab10acd65]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:29 np0005539552 nova_compute[233724]: 2025-11-29 08:33:29.176 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:29 np0005539552 nova_compute[233724]: 2025-11-29 08:33:29.179 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:33:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:29.194 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[7de83f76-ab93-4d9e-b104-d6687c888fda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:29.195 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b911a93d-21c3-4623-93a8-8daf6c972ba7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:29 np0005539552 nova_compute[233724]: 2025-11-29 08:33:29.206 233728 DEBUG nova.compute.manager [req-7d4187fd-4f35-4d46-8c33-53069b18d43d req-7f1b6f80-fae3-431b-81af-39e73328b7d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Received event network-vif-unplugged-5369324b-4a12-4cff-807c-444de53025fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:33:29 np0005539552 nova_compute[233724]: 2025-11-29 08:33:29.206 233728 DEBUG oslo_concurrency.lockutils [req-7d4187fd-4f35-4d46-8c33-53069b18d43d req-7f1b6f80-fae3-431b-81af-39e73328b7d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:29 np0005539552 nova_compute[233724]: 2025-11-29 08:33:29.207 233728 DEBUG oslo_concurrency.lockutils [req-7d4187fd-4f35-4d46-8c33-53069b18d43d req-7f1b6f80-fae3-431b-81af-39e73328b7d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:29 np0005539552 nova_compute[233724]: 2025-11-29 08:33:29.207 233728 DEBUG oslo_concurrency.lockutils [req-7d4187fd-4f35-4d46-8c33-53069b18d43d req-7f1b6f80-fae3-431b-81af-39e73328b7d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:29 np0005539552 nova_compute[233724]: 2025-11-29 08:33:29.207 233728 DEBUG nova.compute.manager [req-7d4187fd-4f35-4d46-8c33-53069b18d43d req-7f1b6f80-fae3-431b-81af-39e73328b7d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] No waiting events found dispatching network-vif-unplugged-5369324b-4a12-4cff-807c-444de53025fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:33:29 np0005539552 nova_compute[233724]: 2025-11-29 08:33:29.207 233728 WARNING nova.compute.manager [req-7d4187fd-4f35-4d46-8c33-53069b18d43d req-7f1b6f80-fae3-431b-81af-39e73328b7d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Received unexpected event network-vif-unplugged-5369324b-4a12-4cff-807c-444de53025fa for instance with vm_state active and task_state resize_migrating.#033[00m
Nov 29 03:33:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:29.211 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[55eafb14-3902-4938-8ebd-8e415b910c31]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 820414, 'reachable_time': 23992, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303760, 'error': None, 'target': 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:29.214 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:33:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:29.214 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[e17383c0-34cc-49da-b86b-568c00f4ad58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:29 np0005539552 systemd[1]: run-netns-ovnmeta\x2ded50ff83\x2d51d1\x2d4b35\x2db85c\x2d1cbe6fb812c6.mount: Deactivated successfully.
Nov 29 03:33:29 np0005539552 nova_compute[233724]: 2025-11-29 08:33:29.285 233728 INFO nova.virt.libvirt.driver [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Instance shutdown successfully after 5 seconds.#033[00m
Nov 29 03:33:29 np0005539552 nova_compute[233724]: 2025-11-29 08:33:29.294 233728 INFO nova.virt.libvirt.driver [-] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Instance destroyed successfully.#033[00m
Nov 29 03:33:29 np0005539552 nova_compute[233724]: 2025-11-29 08:33:29.295 233728 DEBUG nova.virt.libvirt.vif [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:32:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='multiattach-server-0',id=164,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEFZDUAh1tFHT85mctamdge/Jlh9j7Mmalvlf2a+E48/dJ4b3TzL46vHd8+krJsRkbdr2BabH5xlFnXxT+hxq+KJlLzOnOaQuAWI18v9sbbjA8bZzR2tugMjasg7rWhFwg==',key_name='tempest-keypair-2058861619',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:32:44Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d4f6db81949d487b853d7567f8a2e6d4',ramdisk_id='',reservation_id='r-3f2qzfjr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',
image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-573425942',owner_user_name='tempest-AttachVolumeMultiAttachTest-573425942-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:33:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5b0953fb7cc415fb26cf4ffdd5908c6',uuid=8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5369324b-4a12-4cff-807c-444de53025fa", "address": "fa:16:3e:a3:51:12", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "vif_mac": "fa:16:3e:a3:51:12"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5369324b-4a", "ovs_interfaceid": "5369324b-4a12-4cff-807c-444de53025fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:33:29 np0005539552 nova_compute[233724]: 2025-11-29 08:33:29.296 233728 DEBUG nova.network.os_vif_util [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converting VIF {"id": "5369324b-4a12-4cff-807c-444de53025fa", "address": "fa:16:3e:a3:51:12", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "vif_mac": "fa:16:3e:a3:51:12"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5369324b-4a", "ovs_interfaceid": "5369324b-4a12-4cff-807c-444de53025fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:33:29 np0005539552 nova_compute[233724]: 2025-11-29 08:33:29.297 233728 DEBUG nova.network.os_vif_util [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a3:51:12,bridge_name='br-int',has_traffic_filtering=True,id=5369324b-4a12-4cff-807c-444de53025fa,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5369324b-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:33:29 np0005539552 nova_compute[233724]: 2025-11-29 08:33:29.297 233728 DEBUG os_vif [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a3:51:12,bridge_name='br-int',has_traffic_filtering=True,id=5369324b-4a12-4cff-807c-444de53025fa,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5369324b-4a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:33:29 np0005539552 nova_compute[233724]: 2025-11-29 08:33:29.298 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:29 np0005539552 nova_compute[233724]: 2025-11-29 08:33:29.299 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5369324b-4a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:33:29 np0005539552 nova_compute[233724]: 2025-11-29 08:33:29.300 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:29 np0005539552 nova_compute[233724]: 2025-11-29 08:33:29.302 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:33:29 np0005539552 nova_compute[233724]: 2025-11-29 08:33:29.304 233728 INFO os_vif [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a3:51:12,bridge_name='br-int',has_traffic_filtering=True,id=5369324b-4a12-4cff-807c-444de53025fa,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5369324b-4a')#033[00m
Nov 29 03:33:29 np0005539552 ovn_controller[133798]: 2025-11-29T08:33:29Z|00098|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:43:dd:70 10.100.0.12
Nov 29 03:33:29 np0005539552 ovn_controller[133798]: 2025-11-29T08:33:29Z|00099|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:43:dd:70 10.100.0.12
Nov 29 03:33:29 np0005539552 nova_compute[233724]: 2025-11-29 08:33:29.431 233728 DEBUG oslo_concurrency.processutils [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de b116fe85-6509-4516-bc73-6cd5fd20ecc2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.287s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:29 np0005539552 nova_compute[233724]: 2025-11-29 08:33:29.516 233728 DEBUG nova.storage.rbd_utils [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] resizing rbd image b116fe85-6509-4516-bc73-6cd5fd20ecc2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:33:29 np0005539552 nova_compute[233724]: 2025-11-29 08:33:29.565 233728 DEBUG nova.virt.libvirt.driver [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:33:29 np0005539552 nova_compute[233724]: 2025-11-29 08:33:29.565 233728 DEBUG nova.virt.libvirt.driver [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:33:29 np0005539552 nova_compute[233724]: 2025-11-29 08:33:29.566 233728 DEBUG nova.virt.libvirt.driver [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:33:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:29.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:30.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:30 np0005539552 nova_compute[233724]: 2025-11-29 08:33:30.978 233728 DEBUG nova.virt.libvirt.driver [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:33:30 np0005539552 nova_compute[233724]: 2025-11-29 08:33:30.980 233728 DEBUG nova.virt.libvirt.driver [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Ensure instance console log exists: /var/lib/nova/instances/b116fe85-6509-4516-bc73-6cd5fd20ecc2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:33:30 np0005539552 nova_compute[233724]: 2025-11-29 08:33:30.980 233728 DEBUG oslo_concurrency.lockutils [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:30 np0005539552 nova_compute[233724]: 2025-11-29 08:33:30.981 233728 DEBUG oslo_concurrency.lockutils [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:30 np0005539552 nova_compute[233724]: 2025-11-29 08:33:30.981 233728 DEBUG oslo_concurrency.lockutils [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:30 np0005539552 nova_compute[233724]: 2025-11-29 08:33:30.983 233728 DEBUG nova.virt.libvirt.driver [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:36Z,direct_url=<?>,disk_format='qcow2',id=93eccffb-bacd-407f-af6f-64451dee7b21,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:33:30 np0005539552 nova_compute[233724]: 2025-11-29 08:33:30.991 233728 WARNING nova.virt.libvirt.driver [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Nov 29 03:33:30 np0005539552 nova_compute[233724]: 2025-11-29 08:33:30.999 233728 DEBUG nova.virt.libvirt.host [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:33:31 np0005539552 nova_compute[233724]: 2025-11-29 08:33:30.999 233728 DEBUG nova.virt.libvirt.host [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:33:31 np0005539552 nova_compute[233724]: 2025-11-29 08:33:31.004 233728 DEBUG nova.virt.libvirt.host [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:33:31 np0005539552 nova_compute[233724]: 2025-11-29 08:33:31.004 233728 DEBUG nova.virt.libvirt.host [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:33:31 np0005539552 nova_compute[233724]: 2025-11-29 08:33:31.005 233728 DEBUG nova.virt.libvirt.driver [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:33:31 np0005539552 nova_compute[233724]: 2025-11-29 08:33:31.006 233728 DEBUG nova.virt.hardware [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:36Z,direct_url=<?>,disk_format='qcow2',id=93eccffb-bacd-407f-af6f-64451dee7b21,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:33:31 np0005539552 nova_compute[233724]: 2025-11-29 08:33:31.006 233728 DEBUG nova.virt.hardware [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:33:31 np0005539552 nova_compute[233724]: 2025-11-29 08:33:31.006 233728 DEBUG nova.virt.hardware [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:33:31 np0005539552 nova_compute[233724]: 2025-11-29 08:33:31.007 233728 DEBUG nova.virt.hardware [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:33:31 np0005539552 nova_compute[233724]: 2025-11-29 08:33:31.007 233728 DEBUG nova.virt.hardware [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:33:31 np0005539552 nova_compute[233724]: 2025-11-29 08:33:31.007 233728 DEBUG nova.virt.hardware [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:33:31 np0005539552 nova_compute[233724]: 2025-11-29 08:33:31.008 233728 DEBUG nova.virt.hardware [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:33:31 np0005539552 nova_compute[233724]: 2025-11-29 08:33:31.008 233728 DEBUG nova.virt.hardware [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:33:31 np0005539552 nova_compute[233724]: 2025-11-29 08:33:31.008 233728 DEBUG nova.virt.hardware [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:33:31 np0005539552 nova_compute[233724]: 2025-11-29 08:33:31.008 233728 DEBUG nova.virt.hardware [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:33:31 np0005539552 nova_compute[233724]: 2025-11-29 08:33:31.009 233728 DEBUG nova.virt.hardware [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:33:31 np0005539552 nova_compute[233724]: 2025-11-29 08:33:31.009 233728 DEBUG nova.objects.instance [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Lazy-loading 'vcpu_model' on Instance uuid b116fe85-6509-4516-bc73-6cd5fd20ecc2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:33:31 np0005539552 nova_compute[233724]: 2025-11-29 08:33:31.027 233728 DEBUG oslo_concurrency.processutils [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:31 np0005539552 nova_compute[233724]: 2025-11-29 08:33:31.333 233728 DEBUG nova.compute.manager [req-688f2c20-90a8-436e-8570-c605183e003c req-5975f3ab-4a24-40b7-8b92-8fedad47c862 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Received event network-vif-plugged-5369324b-4a12-4cff-807c-444de53025fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:33:31 np0005539552 nova_compute[233724]: 2025-11-29 08:33:31.334 233728 DEBUG oslo_concurrency.lockutils [req-688f2c20-90a8-436e-8570-c605183e003c req-5975f3ab-4a24-40b7-8b92-8fedad47c862 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:31 np0005539552 nova_compute[233724]: 2025-11-29 08:33:31.334 233728 DEBUG oslo_concurrency.lockutils [req-688f2c20-90a8-436e-8570-c605183e003c req-5975f3ab-4a24-40b7-8b92-8fedad47c862 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:31 np0005539552 nova_compute[233724]: 2025-11-29 08:33:31.335 233728 DEBUG oslo_concurrency.lockutils [req-688f2c20-90a8-436e-8570-c605183e003c req-5975f3ab-4a24-40b7-8b92-8fedad47c862 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:31 np0005539552 nova_compute[233724]: 2025-11-29 08:33:31.336 233728 DEBUG nova.compute.manager [req-688f2c20-90a8-436e-8570-c605183e003c req-5975f3ab-4a24-40b7-8b92-8fedad47c862 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] No waiting events found dispatching network-vif-plugged-5369324b-4a12-4cff-807c-444de53025fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:33:31 np0005539552 nova_compute[233724]: 2025-11-29 08:33:31.336 233728 WARNING nova.compute.manager [req-688f2c20-90a8-436e-8570-c605183e003c req-5975f3ab-4a24-40b7-8b92-8fedad47c862 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Received unexpected event network-vif-plugged-5369324b-4a12-4cff-807c-444de53025fa for instance with vm_state active and task_state resize_migrating.#033[00m
Nov 29 03:33:31 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:33:31 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:33:31 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1101149099' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:33:31 np0005539552 nova_compute[233724]: 2025-11-29 08:33:31.456 233728 DEBUG neutronclient.v2_0.client [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 5369324b-4a12-4cff-807c-444de53025fa for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Nov 29 03:33:31 np0005539552 nova_compute[233724]: 2025-11-29 08:33:31.471 233728 DEBUG oslo_concurrency.processutils [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:31 np0005539552 nova_compute[233724]: 2025-11-29 08:33:31.502 233728 DEBUG nova.storage.rbd_utils [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] rbd image b116fe85-6509-4516-bc73-6cd5fd20ecc2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:33:31 np0005539552 nova_compute[233724]: 2025-11-29 08:33:31.506 233728 DEBUG oslo_concurrency.processutils [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:31 np0005539552 nova_compute[233724]: 2025-11-29 08:33:31.608 233728 DEBUG oslo_concurrency.lockutils [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:31 np0005539552 nova_compute[233724]: 2025-11-29 08:33:31.609 233728 DEBUG oslo_concurrency.lockutils [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:31 np0005539552 nova_compute[233724]: 2025-11-29 08:33:31.609 233728 DEBUG oslo_concurrency.lockutils [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:31.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:31 np0005539552 nova_compute[233724]: 2025-11-29 08:33:31.878 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:31 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:33:31 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3386547186' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:33:31 np0005539552 nova_compute[233724]: 2025-11-29 08:33:31.949 233728 DEBUG oslo_concurrency.processutils [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:31 np0005539552 nova_compute[233724]: 2025-11-29 08:33:31.954 233728 DEBUG nova.virt.libvirt.driver [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:33:31 np0005539552 nova_compute[233724]:  <uuid>b116fe85-6509-4516-bc73-6cd5fd20ecc2</uuid>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:  <name>instance-000000a7</name>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:33:31 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:      <nova:name>tempest-ServerShowV257Test-server-600349588</nova:name>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:33:30</nova:creationTime>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:33:31 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:        <nova:user uuid="7147127ad2c248a6977704a1850eb832">tempest-ServerShowV257Test-94781273-project-member</nova:user>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:        <nova:project uuid="61c2a7b8d2c741a1af85aefdb0eb7132">tempest-ServerShowV257Test-94781273</nova:project>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="93eccffb-bacd-407f-af6f-64451dee7b21"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:      <nova:ports/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:      <entry name="serial">b116fe85-6509-4516-bc73-6cd5fd20ecc2</entry>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:      <entry name="uuid">b116fe85-6509-4516-bc73-6cd5fd20ecc2</entry>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:33:31 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/b116fe85-6509-4516-bc73-6cd5fd20ecc2_disk">
Nov 29 03:33:31 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:33:31 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:33:31 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/b116fe85-6509-4516-bc73-6cd5fd20ecc2_disk.config">
Nov 29 03:33:31 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:33:31 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:33:31 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/b116fe85-6509-4516-bc73-6cd5fd20ecc2/console.log" append="off"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:33:31 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:33:31 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:33:31 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:33:31 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:33:31 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 03:33:32 np0005539552 nova_compute[233724]: 2025-11-29 08:33:32.026 233728 DEBUG nova.virt.libvirt.driver [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:33:32 np0005539552 nova_compute[233724]: 2025-11-29 08:33:32.026 233728 DEBUG nova.virt.libvirt.driver [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:33:32 np0005539552 nova_compute[233724]: 2025-11-29 08:33:32.027 233728 INFO nova.virt.libvirt.driver [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Using config drive
Nov 29 03:33:32 np0005539552 nova_compute[233724]: 2025-11-29 08:33:32.055 233728 DEBUG nova.storage.rbd_utils [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] rbd image b116fe85-6509-4516-bc73-6cd5fd20ecc2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:33:32 np0005539552 nova_compute[233724]: 2025-11-29 08:33:32.077 233728 DEBUG nova.objects.instance [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Lazy-loading 'ec2_ids' on Instance uuid b116fe85-6509-4516-bc73-6cd5fd20ecc2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:33:32 np0005539552 nova_compute[233724]: 2025-11-29 08:33:32.114 233728 DEBUG nova.objects.instance [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Lazy-loading 'keypairs' on Instance uuid b116fe85-6509-4516-bc73-6cd5fd20ecc2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:33:32 np0005539552 nova_compute[233724]: 2025-11-29 08:33:32.342 233728 INFO nova.virt.libvirt.driver [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Creating config drive at /var/lib/nova/instances/b116fe85-6509-4516-bc73-6cd5fd20ecc2/disk.config
Nov 29 03:33:32 np0005539552 nova_compute[233724]: 2025-11-29 08:33:32.348 233728 DEBUG oslo_concurrency.processutils [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b116fe85-6509-4516-bc73-6cd5fd20ecc2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1z0dhodx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:33:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:32.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:32 np0005539552 nova_compute[233724]: 2025-11-29 08:33:32.501 233728 DEBUG oslo_concurrency.processutils [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b116fe85-6509-4516-bc73-6cd5fd20ecc2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1z0dhodx" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:33:32 np0005539552 nova_compute[233724]: 2025-11-29 08:33:32.533 233728 DEBUG nova.storage.rbd_utils [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] rbd image b116fe85-6509-4516-bc73-6cd5fd20ecc2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:33:32 np0005539552 nova_compute[233724]: 2025-11-29 08:33:32.537 233728 DEBUG oslo_concurrency.processutils [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b116fe85-6509-4516-bc73-6cd5fd20ecc2/disk.config b116fe85-6509-4516-bc73-6cd5fd20ecc2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:33:32 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:33:32 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:33:33 np0005539552 nova_compute[233724]: 2025-11-29 08:33:33.451 233728 DEBUG nova.compute.manager [req-093d7c0c-9ac4-4630-82a3-fe1d53c4ce4e req-c4128bce-d521-4f16-a840-0d4c7a8737e0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Received event network-changed-5369324b-4a12-4cff-807c-444de53025fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:33:33 np0005539552 nova_compute[233724]: 2025-11-29 08:33:33.452 233728 DEBUG nova.compute.manager [req-093d7c0c-9ac4-4630-82a3-fe1d53c4ce4e req-c4128bce-d521-4f16-a840-0d4c7a8737e0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Refreshing instance network info cache due to event network-changed-5369324b-4a12-4cff-807c-444de53025fa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:33:33 np0005539552 nova_compute[233724]: 2025-11-29 08:33:33.452 233728 DEBUG oslo_concurrency.lockutils [req-093d7c0c-9ac4-4630-82a3-fe1d53c4ce4e req-c4128bce-d521-4f16-a840-0d4c7a8737e0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:33:33 np0005539552 nova_compute[233724]: 2025-11-29 08:33:33.453 233728 DEBUG oslo_concurrency.lockutils [req-093d7c0c-9ac4-4630-82a3-fe1d53c4ce4e req-c4128bce-d521-4f16-a840-0d4c7a8737e0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:33:33 np0005539552 nova_compute[233724]: 2025-11-29 08:33:33.453 233728 DEBUG nova.network.neutron [req-093d7c0c-9ac4-4630-82a3-fe1d53c4ce4e req-c4128bce-d521-4f16-a840-0d4c7a8737e0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Refreshing network info cache for port 5369324b-4a12-4cff-807c-444de53025fa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:33:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:33.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:34 np0005539552 nova_compute[233724]: 2025-11-29 08:33:34.301 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:33:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:34.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:34 np0005539552 nova_compute[233724]: 2025-11-29 08:33:34.801 233728 DEBUG oslo_concurrency.processutils [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b116fe85-6509-4516-bc73-6cd5fd20ecc2/disk.config b116fe85-6509-4516-bc73-6cd5fd20ecc2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.264s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:33:34 np0005539552 nova_compute[233724]: 2025-11-29 08:33:34.802 233728 INFO nova.virt.libvirt.driver [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Deleting local config drive /var/lib/nova/instances/b116fe85-6509-4516-bc73-6cd5fd20ecc2/disk.config because it was imported into RBD.
Nov 29 03:33:34 np0005539552 systemd-machined[196379]: New machine qemu-79-instance-000000a7.
Nov 29 03:33:34 np0005539552 systemd[1]: Started Virtual Machine qemu-79-instance-000000a7.
Nov 29 03:33:35 np0005539552 nova_compute[233724]: 2025-11-29 08:33:35.252 233728 DEBUG nova.network.neutron [req-093d7c0c-9ac4-4630-82a3-fe1d53c4ce4e req-c4128bce-d521-4f16-a840-0d4c7a8737e0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Updated VIF entry in instance network info cache for port 5369324b-4a12-4cff-807c-444de53025fa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:33:35 np0005539552 nova_compute[233724]: 2025-11-29 08:33:35.253 233728 DEBUG nova.network.neutron [req-093d7c0c-9ac4-4630-82a3-fe1d53c4ce4e req-c4128bce-d521-4f16-a840-0d4c7a8737e0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Updating instance_info_cache with network_info: [{"id": "5369324b-4a12-4cff-807c-444de53025fa", "address": "fa:16:3e:a3:51:12", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5369324b-4a", "ovs_interfaceid": "5369324b-4a12-4cff-807c-444de53025fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:33:35 np0005539552 nova_compute[233724]: 2025-11-29 08:33:35.273 233728 DEBUG oslo_concurrency.lockutils [req-093d7c0c-9ac4-4630-82a3-fe1d53c4ce4e req-c4128bce-d521-4f16-a840-0d4c7a8737e0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:33:35 np0005539552 nova_compute[233724]: 2025-11-29 08:33:35.368 233728 DEBUG nova.virt.libvirt.host [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Removed pending event for b116fe85-6509-4516-bc73-6cd5fd20ecc2 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 29 03:33:35 np0005539552 nova_compute[233724]: 2025-11-29 08:33:35.369 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405215.3678463, b116fe85-6509-4516-bc73-6cd5fd20ecc2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:33:35 np0005539552 nova_compute[233724]: 2025-11-29 08:33:35.369 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] VM Resumed (Lifecycle Event)
Nov 29 03:33:35 np0005539552 nova_compute[233724]: 2025-11-29 08:33:35.371 233728 DEBUG nova.compute.manager [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:33:35 np0005539552 nova_compute[233724]: 2025-11-29 08:33:35.372 233728 DEBUG nova.virt.libvirt.driver [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:33:35 np0005539552 nova_compute[233724]: 2025-11-29 08:33:35.375 233728 INFO nova.virt.libvirt.driver [-] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Instance spawned successfully.
Nov 29 03:33:35 np0005539552 nova_compute[233724]: 2025-11-29 08:33:35.375 233728 DEBUG nova.virt.libvirt.driver [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:33:35 np0005539552 nova_compute[233724]: 2025-11-29 08:33:35.401 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:33:35 np0005539552 nova_compute[233724]: 2025-11-29 08:33:35.405 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:33:35 np0005539552 nova_compute[233724]: 2025-11-29 08:33:35.411 233728 DEBUG nova.virt.libvirt.driver [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:33:35 np0005539552 nova_compute[233724]: 2025-11-29 08:33:35.411 233728 DEBUG nova.virt.libvirt.driver [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:33:35 np0005539552 nova_compute[233724]: 2025-11-29 08:33:35.412 233728 DEBUG nova.virt.libvirt.driver [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:33:35 np0005539552 nova_compute[233724]: 2025-11-29 08:33:35.412 233728 DEBUG nova.virt.libvirt.driver [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:33:35 np0005539552 nova_compute[233724]: 2025-11-29 08:33:35.413 233728 DEBUG nova.virt.libvirt.driver [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:33:35 np0005539552 nova_compute[233724]: 2025-11-29 08:33:35.413 233728 DEBUG nova.virt.libvirt.driver [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:33:35 np0005539552 nova_compute[233724]: 2025-11-29 08:33:35.435 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 29 03:33:35 np0005539552 nova_compute[233724]: 2025-11-29 08:33:35.435 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405215.3679893, b116fe85-6509-4516-bc73-6cd5fd20ecc2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:33:35 np0005539552 nova_compute[233724]: 2025-11-29 08:33:35.435 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] VM Started (Lifecycle Event)
Nov 29 03:33:35 np0005539552 nova_compute[233724]: 2025-11-29 08:33:35.459 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:33:35 np0005539552 nova_compute[233724]: 2025-11-29 08:33:35.469 233728 DEBUG nova.compute.manager [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:33:35 np0005539552 nova_compute[233724]: 2025-11-29 08:33:35.472 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:33:35 np0005539552 nova_compute[233724]: 2025-11-29 08:33:35.508 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 29 03:33:35 np0005539552 nova_compute[233724]: 2025-11-29 08:33:35.555 233728 DEBUG oslo_concurrency.lockutils [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:33:35 np0005539552 nova_compute[233724]: 2025-11-29 08:33:35.556 233728 DEBUG oslo_concurrency.lockutils [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:33:35 np0005539552 nova_compute[233724]: 2025-11-29 08:33:35.556 233728 DEBUG nova.objects.instance [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 29 03:33:35 np0005539552 nova_compute[233724]: 2025-11-29 08:33:35.608 233728 DEBUG oslo_concurrency.lockutils [None req-9e1111cb-9929-450a-873f-a2781c9f449d 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.052s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:33:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:35.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:36.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:36 np0005539552 nova_compute[233724]: 2025-11-29 08:33:36.852 233728 DEBUG oslo_concurrency.lockutils [None req-8344f1ee-3bd4-405b-914b-4ed8406f9886 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Acquiring lock "b116fe85-6509-4516-bc73-6cd5fd20ecc2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:33:36 np0005539552 nova_compute[233724]: 2025-11-29 08:33:36.852 233728 DEBUG oslo_concurrency.lockutils [None req-8344f1ee-3bd4-405b-914b-4ed8406f9886 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Lock "b116fe85-6509-4516-bc73-6cd5fd20ecc2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:33:36 np0005539552 nova_compute[233724]: 2025-11-29 08:33:36.853 233728 DEBUG oslo_concurrency.lockutils [None req-8344f1ee-3bd4-405b-914b-4ed8406f9886 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Acquiring lock "b116fe85-6509-4516-bc73-6cd5fd20ecc2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:33:36 np0005539552 nova_compute[233724]: 2025-11-29 08:33:36.853 233728 DEBUG oslo_concurrency.lockutils [None req-8344f1ee-3bd4-405b-914b-4ed8406f9886 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Lock "b116fe85-6509-4516-bc73-6cd5fd20ecc2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:33:36 np0005539552 nova_compute[233724]: 2025-11-29 08:33:36.854 233728 DEBUG oslo_concurrency.lockutils [None req-8344f1ee-3bd4-405b-914b-4ed8406f9886 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Lock "b116fe85-6509-4516-bc73-6cd5fd20ecc2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:33:36 np0005539552 nova_compute[233724]: 2025-11-29 08:33:36.856 233728 INFO nova.compute.manager [None req-8344f1ee-3bd4-405b-914b-4ed8406f9886 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Terminating instance
Nov 29 03:33:36 np0005539552 nova_compute[233724]: 2025-11-29 08:33:36.857 233728 DEBUG oslo_concurrency.lockutils [None req-8344f1ee-3bd4-405b-914b-4ed8406f9886 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Acquiring lock "refresh_cache-b116fe85-6509-4516-bc73-6cd5fd20ecc2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:33:36 np0005539552 nova_compute[233724]: 2025-11-29 08:33:36.858 233728 DEBUG oslo_concurrency.lockutils [None req-8344f1ee-3bd4-405b-914b-4ed8406f9886 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Acquired lock "refresh_cache-b116fe85-6509-4516-bc73-6cd5fd20ecc2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:33:36 np0005539552 nova_compute[233724]: 2025-11-29 08:33:36.858 233728 DEBUG nova.network.neutron [None req-8344f1ee-3bd4-405b-914b-4ed8406f9886 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:33:36 np0005539552 nova_compute[233724]: 2025-11-29 08:33:36.883 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:33:36 np0005539552 nova_compute[233724]: 2025-11-29 08:33:36.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:33:36 np0005539552 nova_compute[233724]: 2025-11-29 08:33:36.947 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:33:36 np0005539552 nova_compute[233724]: 2025-11-29 08:33:36.948 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:33:36 np0005539552 nova_compute[233724]: 2025-11-29 08:33:36.949 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:36 np0005539552 nova_compute[233724]: 2025-11-29 08:33:36.949 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:33:36 np0005539552 nova_compute[233724]: 2025-11-29 08:33:36.950 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:33:37 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2570579186' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:33:37 np0005539552 nova_compute[233724]: 2025-11-29 08:33:37.440 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 03:33:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:37.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 03:33:37 np0005539552 nova_compute[233724]: 2025-11-29 08:33:37.931 233728 DEBUG nova.network.neutron [None req-8344f1ee-3bd4-405b-914b-4ed8406f9886 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:33:38 np0005539552 nova_compute[233724]: 2025-11-29 08:33:38.069 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:33:38 np0005539552 nova_compute[233724]: 2025-11-29 08:33:38.070 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:33:38 np0005539552 nova_compute[233724]: 2025-11-29 08:33:38.070 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:33:38 np0005539552 nova_compute[233724]: 2025-11-29 08:33:38.073 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-000000a7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:33:38 np0005539552 nova_compute[233724]: 2025-11-29 08:33:38.073 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-000000a7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:33:38 np0005539552 nova_compute[233724]: 2025-11-29 08:33:38.077 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-000000a8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:33:38 np0005539552 nova_compute[233724]: 2025-11-29 08:33:38.077 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-000000a8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:33:38 np0005539552 nova_compute[233724]: 2025-11-29 08:33:38.254 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:33:38 np0005539552 nova_compute[233724]: 2025-11-29 08:33:38.255 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3851MB free_disk=20.69408416748047GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:33:38 np0005539552 nova_compute[233724]: 2025-11-29 08:33:38.255 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:38 np0005539552 nova_compute[233724]: 2025-11-29 08:33:38.256 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:38 np0005539552 nova_compute[233724]: 2025-11-29 08:33:38.311 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Migration for instance 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Nov 29 03:33:38 np0005539552 nova_compute[233724]: 2025-11-29 08:33:38.339 233728 INFO nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Updating resource usage from migration 0f0b67e7-e113-40b3-9fa2-648865ec7b60#033[00m
Nov 29 03:33:38 np0005539552 nova_compute[233724]: 2025-11-29 08:33:38.340 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Starting to track outgoing migration 0f0b67e7-e113-40b3-9fa2-648865ec7b60 with flavor b4d0f3a6-e3dc-4216-aee8-148280e428cc _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1444#033[00m
Nov 29 03:33:38 np0005539552 nova_compute[233724]: 2025-11-29 08:33:38.374 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance b116fe85-6509-4516-bc73-6cd5fd20ecc2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:33:38 np0005539552 nova_compute[233724]: 2025-11-29 08:33:38.374 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 64f6896c-17f2-4ceb-98b9-50a541c98b7b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:33:38 np0005539552 nova_compute[233724]: 2025-11-29 08:33:38.374 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Migration 0f0b67e7-e113-40b3-9fa2-648865ec7b60 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Nov 29 03:33:38 np0005539552 nova_compute[233724]: 2025-11-29 08:33:38.375 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:33:38 np0005539552 nova_compute[233724]: 2025-11-29 08:33:38.375 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:33:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:38.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:38 np0005539552 nova_compute[233724]: 2025-11-29 08:33:38.517 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:33:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2942666447' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:33:38 np0005539552 nova_compute[233724]: 2025-11-29 08:33:38.955 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:38 np0005539552 nova_compute[233724]: 2025-11-29 08:33:38.963 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:33:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e359 e359: 3 total, 3 up, 3 in
Nov 29 03:33:39 np0005539552 nova_compute[233724]: 2025-11-29 08:33:39.095 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:33:39 np0005539552 nova_compute[233724]: 2025-11-29 08:33:39.122 233728 DEBUG nova.network.neutron [None req-8344f1ee-3bd4-405b-914b-4ed8406f9886 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:33:39 np0005539552 nova_compute[233724]: 2025-11-29 08:33:39.128 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:33:39 np0005539552 nova_compute[233724]: 2025-11-29 08:33:39.129 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.873s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:39 np0005539552 nova_compute[233724]: 2025-11-29 08:33:39.136 233728 DEBUG oslo_concurrency.lockutils [None req-8344f1ee-3bd4-405b-914b-4ed8406f9886 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Releasing lock "refresh_cache-b116fe85-6509-4516-bc73-6cd5fd20ecc2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:33:39 np0005539552 nova_compute[233724]: 2025-11-29 08:33:39.137 233728 DEBUG nova.compute.manager [None req-8344f1ee-3bd4-405b-914b-4ed8406f9886 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:33:39 np0005539552 nova_compute[233724]: 2025-11-29 08:33:39.303 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:39 np0005539552 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d000000a7.scope: Deactivated successfully.
Nov 29 03:33:39 np0005539552 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d000000a7.scope: Consumed 4.439s CPU time.
Nov 29 03:33:39 np0005539552 systemd-machined[196379]: Machine qemu-79-instance-000000a7 terminated.
Nov 29 03:33:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:39.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:39 np0005539552 nova_compute[233724]: 2025-11-29 08:33:39.765 233728 INFO nova.virt.libvirt.driver [-] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Instance destroyed successfully.#033[00m
Nov 29 03:33:39 np0005539552 nova_compute[233724]: 2025-11-29 08:33:39.766 233728 DEBUG nova.objects.instance [None req-8344f1ee-3bd4-405b-914b-4ed8406f9886 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Lazy-loading 'resources' on Instance uuid b116fe85-6509-4516-bc73-6cd5fd20ecc2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:33:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:40.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:41 np0005539552 nova_compute[233724]: 2025-11-29 08:33:41.190 233728 DEBUG nova.compute.manager [req-f2df386d-8d03-4883-8901-04875deada08 req-8ba8e7f8-6871-4a95-8310-79d2c453884a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Received event network-vif-plugged-5369324b-4a12-4cff-807c-444de53025fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:33:41 np0005539552 nova_compute[233724]: 2025-11-29 08:33:41.191 233728 DEBUG oslo_concurrency.lockutils [req-f2df386d-8d03-4883-8901-04875deada08 req-8ba8e7f8-6871-4a95-8310-79d2c453884a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:41 np0005539552 nova_compute[233724]: 2025-11-29 08:33:41.191 233728 DEBUG oslo_concurrency.lockutils [req-f2df386d-8d03-4883-8901-04875deada08 req-8ba8e7f8-6871-4a95-8310-79d2c453884a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:41 np0005539552 nova_compute[233724]: 2025-11-29 08:33:41.192 233728 DEBUG oslo_concurrency.lockutils [req-f2df386d-8d03-4883-8901-04875deada08 req-8ba8e7f8-6871-4a95-8310-79d2c453884a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:41 np0005539552 nova_compute[233724]: 2025-11-29 08:33:41.192 233728 DEBUG nova.compute.manager [req-f2df386d-8d03-4883-8901-04875deada08 req-8ba8e7f8-6871-4a95-8310-79d2c453884a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] No waiting events found dispatching network-vif-plugged-5369324b-4a12-4cff-807c-444de53025fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:33:41 np0005539552 nova_compute[233724]: 2025-11-29 08:33:41.192 233728 WARNING nova.compute.manager [req-f2df386d-8d03-4883-8901-04875deada08 req-8ba8e7f8-6871-4a95-8310-79d2c453884a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Received unexpected event network-vif-plugged-5369324b-4a12-4cff-807c-444de53025fa for instance with vm_state active and task_state resize_finish.#033[00m
Nov 29 03:33:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:41.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:41 np0005539552 nova_compute[233724]: 2025-11-29 08:33:41.883 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:42.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:42 np0005539552 nova_compute[233724]: 2025-11-29 08:33:42.749 233728 INFO nova.virt.libvirt.driver [None req-8344f1ee-3bd4-405b-914b-4ed8406f9886 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Deleting instance files /var/lib/nova/instances/b116fe85-6509-4516-bc73-6cd5fd20ecc2_del#033[00m
Nov 29 03:33:42 np0005539552 nova_compute[233724]: 2025-11-29 08:33:42.750 233728 INFO nova.virt.libvirt.driver [None req-8344f1ee-3bd4-405b-914b-4ed8406f9886 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Deletion of /var/lib/nova/instances/b116fe85-6509-4516-bc73-6cd5fd20ecc2_del complete#033[00m
Nov 29 03:33:42 np0005539552 nova_compute[233724]: 2025-11-29 08:33:42.801 233728 INFO nova.compute.manager [None req-8344f1ee-3bd4-405b-914b-4ed8406f9886 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Took 3.66 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:33:42 np0005539552 nova_compute[233724]: 2025-11-29 08:33:42.802 233728 DEBUG oslo.service.loopingcall [None req-8344f1ee-3bd4-405b-914b-4ed8406f9886 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:33:42 np0005539552 nova_compute[233724]: 2025-11-29 08:33:42.803 233728 DEBUG nova.compute.manager [-] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:33:42 np0005539552 nova_compute[233724]: 2025-11-29 08:33:42.803 233728 DEBUG nova.network.neutron [-] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:33:42 np0005539552 nova_compute[233724]: 2025-11-29 08:33:42.890 233728 DEBUG oslo_concurrency.lockutils [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "1aa3620a-ecca-49bf-ae39-50911ab7a562" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:42 np0005539552 nova_compute[233724]: 2025-11-29 08:33:42.890 233728 DEBUG oslo_concurrency.lockutils [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "1aa3620a-ecca-49bf-ae39-50911ab7a562" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:42 np0005539552 nova_compute[233724]: 2025-11-29 08:33:42.930 233728 DEBUG nova.compute.manager [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:33:42 np0005539552 nova_compute[233724]: 2025-11-29 08:33:42.965 233728 DEBUG nova.network.neutron [-] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:33:43 np0005539552 nova_compute[233724]: 2025-11-29 08:33:43.000 233728 DEBUG nova.network.neutron [-] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:33:43 np0005539552 nova_compute[233724]: 2025-11-29 08:33:43.031 233728 INFO nova.compute.manager [-] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Took 0.23 seconds to deallocate network for instance.#033[00m
Nov 29 03:33:43 np0005539552 nova_compute[233724]: 2025-11-29 08:33:43.052 233728 DEBUG oslo_concurrency.lockutils [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:43 np0005539552 nova_compute[233724]: 2025-11-29 08:33:43.053 233728 DEBUG oslo_concurrency.lockutils [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:43 np0005539552 nova_compute[233724]: 2025-11-29 08:33:43.062 233728 DEBUG nova.virt.hardware [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:33:43 np0005539552 nova_compute[233724]: 2025-11-29 08:33:43.062 233728 INFO nova.compute.claims [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:33:43 np0005539552 nova_compute[233724]: 2025-11-29 08:33:43.074 233728 DEBUG oslo_concurrency.lockutils [None req-8344f1ee-3bd4-405b-914b-4ed8406f9886 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:43 np0005539552 nova_compute[233724]: 2025-11-29 08:33:43.221 233728 DEBUG oslo_concurrency.processutils [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:43 np0005539552 nova_compute[233724]: 2025-11-29 08:33:43.341 233728 DEBUG nova.compute.manager [req-15642aae-010c-4f3d-a12e-e62a8ac8d2a2 req-6b29344e-4321-4beb-a679-4dcdc79131e8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Received event network-vif-plugged-5369324b-4a12-4cff-807c-444de53025fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:33:43 np0005539552 nova_compute[233724]: 2025-11-29 08:33:43.342 233728 DEBUG oslo_concurrency.lockutils [req-15642aae-010c-4f3d-a12e-e62a8ac8d2a2 req-6b29344e-4321-4beb-a679-4dcdc79131e8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:43 np0005539552 nova_compute[233724]: 2025-11-29 08:33:43.343 233728 DEBUG oslo_concurrency.lockutils [req-15642aae-010c-4f3d-a12e-e62a8ac8d2a2 req-6b29344e-4321-4beb-a679-4dcdc79131e8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:43 np0005539552 nova_compute[233724]: 2025-11-29 08:33:43.344 233728 DEBUG oslo_concurrency.lockutils [req-15642aae-010c-4f3d-a12e-e62a8ac8d2a2 req-6b29344e-4321-4beb-a679-4dcdc79131e8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:43 np0005539552 nova_compute[233724]: 2025-11-29 08:33:43.344 233728 DEBUG nova.compute.manager [req-15642aae-010c-4f3d-a12e-e62a8ac8d2a2 req-6b29344e-4321-4beb-a679-4dcdc79131e8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] No waiting events found dispatching network-vif-plugged-5369324b-4a12-4cff-807c-444de53025fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:33:43 np0005539552 nova_compute[233724]: 2025-11-29 08:33:43.345 233728 WARNING nova.compute.manager [req-15642aae-010c-4f3d-a12e-e62a8ac8d2a2 req-6b29344e-4321-4beb-a679-4dcdc79131e8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Received unexpected event network-vif-plugged-5369324b-4a12-4cff-807c-444de53025fa for instance with vm_state resized and task_state None.#033[00m
Nov 29 03:33:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:33:43 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4141451758' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:33:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:43 np0005539552 nova_compute[233724]: 2025-11-29 08:33:43.648 233728 DEBUG oslo_concurrency.processutils [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:43 np0005539552 nova_compute[233724]: 2025-11-29 08:33:43.655 233728 DEBUG nova.compute.provider_tree [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:33:43 np0005539552 nova_compute[233724]: 2025-11-29 08:33:43.673 233728 DEBUG nova.scheduler.client.report [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:33:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:43.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:43 np0005539552 nova_compute[233724]: 2025-11-29 08:33:43.714 233728 DEBUG oslo_concurrency.lockutils [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:43 np0005539552 nova_compute[233724]: 2025-11-29 08:33:43.715 233728 DEBUG nova.compute.manager [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:33:43 np0005539552 nova_compute[233724]: 2025-11-29 08:33:43.721 233728 DEBUG oslo_concurrency.lockutils [None req-8344f1ee-3bd4-405b-914b-4ed8406f9886 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:43 np0005539552 nova_compute[233724]: 2025-11-29 08:33:43.778 233728 DEBUG nova.compute.manager [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:33:43 np0005539552 nova_compute[233724]: 2025-11-29 08:33:43.778 233728 DEBUG nova.network.neutron [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:33:43 np0005539552 nova_compute[233724]: 2025-11-29 08:33:43.798 233728 INFO nova.virt.libvirt.driver [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:33:43 np0005539552 nova_compute[233724]: 2025-11-29 08:33:43.824 233728 DEBUG nova.compute.manager [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:33:43 np0005539552 nova_compute[233724]: 2025-11-29 08:33:43.866 233728 DEBUG oslo_concurrency.processutils [None req-8344f1ee-3bd4-405b-914b-4ed8406f9886 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:43 np0005539552 nova_compute[233724]: 2025-11-29 08:33:43.985 233728 DEBUG nova.compute.manager [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:33:43 np0005539552 nova_compute[233724]: 2025-11-29 08:33:43.987 233728 DEBUG nova.virt.libvirt.driver [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:33:43 np0005539552 nova_compute[233724]: 2025-11-29 08:33:43.987 233728 INFO nova.virt.libvirt.driver [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Creating image(s)#033[00m
Nov 29 03:33:44 np0005539552 nova_compute[233724]: 2025-11-29 08:33:44.019 233728 DEBUG nova.storage.rbd_utils [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 1aa3620a-ecca-49bf-ae39-50911ab7a562_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:33:44 np0005539552 nova_compute[233724]: 2025-11-29 08:33:44.055 233728 DEBUG nova.storage.rbd_utils [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 1aa3620a-ecca-49bf-ae39-50911ab7a562_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:33:44 np0005539552 nova_compute[233724]: 2025-11-29 08:33:44.079 233728 DEBUG nova.storage.rbd_utils [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 1aa3620a-ecca-49bf-ae39-50911ab7a562_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:33:44 np0005539552 nova_compute[233724]: 2025-11-29 08:33:44.083 233728 DEBUG oslo_concurrency.processutils [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:44 np0005539552 nova_compute[233724]: 2025-11-29 08:33:44.115 233728 DEBUG nova.policy [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4774e2851bc6407cb0fcde15bd24d1b3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0471b9b208874403aa3f0fbe7504ad19', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:33:44 np0005539552 nova_compute[233724]: 2025-11-29 08:33:44.119 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405209.01465, 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:33:44 np0005539552 nova_compute[233724]: 2025-11-29 08:33:44.119 233728 INFO nova.compute.manager [-] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:33:44 np0005539552 nova_compute[233724]: 2025-11-29 08:33:44.142 233728 DEBUG nova.compute.manager [None req-35b3e759-fae1-435e-9e68-cfbb94449c1e - - - - - -] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:33:44 np0005539552 nova_compute[233724]: 2025-11-29 08:33:44.148 233728 DEBUG nova.compute.manager [None req-35b3e759-fae1-435e-9e68-cfbb94449c1e - - - - - -] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: resized, current task_state: None, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:33:44 np0005539552 nova_compute[233724]: 2025-11-29 08:33:44.150 233728 DEBUG oslo_concurrency.processutils [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:44 np0005539552 nova_compute[233724]: 2025-11-29 08:33:44.151 233728 DEBUG oslo_concurrency.lockutils [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:44 np0005539552 nova_compute[233724]: 2025-11-29 08:33:44.152 233728 DEBUG oslo_concurrency.lockutils [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:44 np0005539552 nova_compute[233724]: 2025-11-29 08:33:44.153 233728 DEBUG oslo_concurrency.lockutils [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:44 np0005539552 nova_compute[233724]: 2025-11-29 08:33:44.182 233728 DEBUG nova.storage.rbd_utils [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 1aa3620a-ecca-49bf-ae39-50911ab7a562_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:33:44 np0005539552 nova_compute[233724]: 2025-11-29 08:33:44.186 233728 DEBUG oslo_concurrency.processutils [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 1aa3620a-ecca-49bf-ae39-50911ab7a562_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:44 np0005539552 nova_compute[233724]: 2025-11-29 08:33:44.213 233728 INFO nova.compute.manager [None req-35b3e759-fae1-435e-9e68-cfbb94449c1e - - - - - -] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-2.ctlplane.example.com#033[00m
Nov 29 03:33:44 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:33:44 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:33:44 np0005539552 nova_compute[233724]: 2025-11-29 08:33:44.305 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:44 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:33:44 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/469528751' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:33:44 np0005539552 nova_compute[233724]: 2025-11-29 08:33:44.333 233728 DEBUG oslo_concurrency.processutils [None req-8344f1ee-3bd4-405b-914b-4ed8406f9886 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:44 np0005539552 nova_compute[233724]: 2025-11-29 08:33:44.338 233728 DEBUG nova.compute.provider_tree [None req-8344f1ee-3bd4-405b-914b-4ed8406f9886 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:33:44 np0005539552 nova_compute[233724]: 2025-11-29 08:33:44.353 233728 DEBUG nova.scheduler.client.report [None req-8344f1ee-3bd4-405b-914b-4ed8406f9886 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:33:44 np0005539552 nova_compute[233724]: 2025-11-29 08:33:44.376 233728 DEBUG oslo_concurrency.lockutils [None req-8344f1ee-3bd4-405b-914b-4ed8406f9886 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:44 np0005539552 nova_compute[233724]: 2025-11-29 08:33:44.465 233728 INFO nova.scheduler.client.report [None req-8344f1ee-3bd4-405b-914b-4ed8406f9886 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Deleted allocations for instance b116fe85-6509-4516-bc73-6cd5fd20ecc2#033[00m
Nov 29 03:33:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:44.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:44 np0005539552 nova_compute[233724]: 2025-11-29 08:33:44.535 233728 DEBUG oslo_concurrency.lockutils [None req-8344f1ee-3bd4-405b-914b-4ed8406f9886 7147127ad2c248a6977704a1850eb832 61c2a7b8d2c741a1af85aefdb0eb7132 - - default default] Lock "b116fe85-6509-4516-bc73-6cd5fd20ecc2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:44 np0005539552 nova_compute[233724]: 2025-11-29 08:33:44.958 233728 DEBUG nova.network.neutron [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Successfully created port: 2ac4854b-6a8d-452d-a59a-40ea08ff4293 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:33:45 np0005539552 nova_compute[233724]: 2025-11-29 08:33:45.015 233728 DEBUG oslo_concurrency.lockutils [None req-e71eb99e-8aff-45b5-8552-13506d0fb486 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:45 np0005539552 nova_compute[233724]: 2025-11-29 08:33:45.016 233728 DEBUG oslo_concurrency.lockutils [None req-e71eb99e-8aff-45b5-8552-13506d0fb486 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:45 np0005539552 nova_compute[233724]: 2025-11-29 08:33:45.016 233728 DEBUG nova.compute.manager [None req-e71eb99e-8aff-45b5-8552-13506d0fb486 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Going to confirm migration 23 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679#033[00m
Nov 29 03:33:45 np0005539552 nova_compute[233724]: 2025-11-29 08:33:45.129 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:33:45 np0005539552 nova_compute[233724]: 2025-11-29 08:33:45.130 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:33:45 np0005539552 nova_compute[233724]: 2025-11-29 08:33:45.130 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:33:45 np0005539552 nova_compute[233724]: 2025-11-29 08:33:45.130 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:33:45 np0005539552 nova_compute[233724]: 2025-11-29 08:33:45.130 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:33:45 np0005539552 nova_compute[233724]: 2025-11-29 08:33:45.130 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:33:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:45.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:45 np0005539552 nova_compute[233724]: 2025-11-29 08:33:45.769 233728 DEBUG oslo_concurrency.processutils [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 1aa3620a-ecca-49bf-ae39-50911ab7a562_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:45 np0005539552 nova_compute[233724]: 2025-11-29 08:33:45.846 233728 DEBUG nova.storage.rbd_utils [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] resizing rbd image 1aa3620a-ecca-49bf-ae39-50911ab7a562_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:33:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:46.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:46 np0005539552 nova_compute[233724]: 2025-11-29 08:33:46.754 233728 DEBUG nova.network.neutron [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Successfully updated port: 2ac4854b-6a8d-452d-a59a-40ea08ff4293 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:33:46 np0005539552 nova_compute[233724]: 2025-11-29 08:33:46.755 233728 DEBUG neutronclient.v2_0.client [None req-e71eb99e-8aff-45b5-8552-13506d0fb486 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 5369324b-4a12-4cff-807c-444de53025fa for host compute-2.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Nov 29 03:33:46 np0005539552 nova_compute[233724]: 2025-11-29 08:33:46.756 233728 DEBUG oslo_concurrency.lockutils [None req-e71eb99e-8aff-45b5-8552-13506d0fb486 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "refresh_cache-8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:33:46 np0005539552 nova_compute[233724]: 2025-11-29 08:33:46.756 233728 DEBUG oslo_concurrency.lockutils [None req-e71eb99e-8aff-45b5-8552-13506d0fb486 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquired lock "refresh_cache-8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:33:46 np0005539552 nova_compute[233724]: 2025-11-29 08:33:46.756 233728 DEBUG nova.network.neutron [None req-e71eb99e-8aff-45b5-8552-13506d0fb486 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:33:46 np0005539552 nova_compute[233724]: 2025-11-29 08:33:46.756 233728 DEBUG nova.objects.instance [None req-e71eb99e-8aff-45b5-8552-13506d0fb486 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lazy-loading 'info_cache' on Instance uuid 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:33:46 np0005539552 nova_compute[233724]: 2025-11-29 08:33:46.758 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:33:46 np0005539552 nova_compute[233724]: 2025-11-29 08:33:46.760 233728 DEBUG nova.compute.manager [req-c061ddb6-e7f8-420f-985e-bcbc4f00da79 req-9523139b-9350-41de-82b9-52ba32b6fb75 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Received event network-changed-2ac4854b-6a8d-452d-a59a-40ea08ff4293 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:33:46 np0005539552 nova_compute[233724]: 2025-11-29 08:33:46.760 233728 DEBUG nova.compute.manager [req-c061ddb6-e7f8-420f-985e-bcbc4f00da79 req-9523139b-9350-41de-82b9-52ba32b6fb75 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Refreshing instance network info cache due to event network-changed-2ac4854b-6a8d-452d-a59a-40ea08ff4293. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:33:46 np0005539552 nova_compute[233724]: 2025-11-29 08:33:46.760 233728 DEBUG oslo_concurrency.lockutils [req-c061ddb6-e7f8-420f-985e-bcbc4f00da79 req-9523139b-9350-41de-82b9-52ba32b6fb75 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-1aa3620a-ecca-49bf-ae39-50911ab7a562" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:33:46 np0005539552 nova_compute[233724]: 2025-11-29 08:33:46.760 233728 DEBUG oslo_concurrency.lockutils [req-c061ddb6-e7f8-420f-985e-bcbc4f00da79 req-9523139b-9350-41de-82b9-52ba32b6fb75 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-1aa3620a-ecca-49bf-ae39-50911ab7a562" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:33:46 np0005539552 nova_compute[233724]: 2025-11-29 08:33:46.761 233728 DEBUG nova.network.neutron [req-c061ddb6-e7f8-420f-985e-bcbc4f00da79 req-9523139b-9350-41de-82b9-52ba32b6fb75 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Refreshing network info cache for port 2ac4854b-6a8d-452d-a59a-40ea08ff4293 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:33:46 np0005539552 nova_compute[233724]: 2025-11-29 08:33:46.796 233728 DEBUG oslo_concurrency.lockutils [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "refresh_cache-1aa3620a-ecca-49bf-ae39-50911ab7a562" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:33:46 np0005539552 nova_compute[233724]: 2025-11-29 08:33:46.885 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:47 np0005539552 nova_compute[233724]: 2025-11-29 08:33:47.285 233728 DEBUG nova.objects.instance [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lazy-loading 'migration_context' on Instance uuid 1aa3620a-ecca-49bf-ae39-50911ab7a562 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:33:47 np0005539552 nova_compute[233724]: 2025-11-29 08:33:47.318 233728 DEBUG nova.virt.libvirt.driver [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:33:47 np0005539552 nova_compute[233724]: 2025-11-29 08:33:47.319 233728 DEBUG nova.virt.libvirt.driver [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Ensure instance console log exists: /var/lib/nova/instances/1aa3620a-ecca-49bf-ae39-50911ab7a562/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:33:47 np0005539552 nova_compute[233724]: 2025-11-29 08:33:47.319 233728 DEBUG oslo_concurrency.lockutils [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:47 np0005539552 nova_compute[233724]: 2025-11-29 08:33:47.319 233728 DEBUG oslo_concurrency.lockutils [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:47 np0005539552 nova_compute[233724]: 2025-11-29 08:33:47.319 233728 DEBUG oslo_concurrency.lockutils [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:47 np0005539552 nova_compute[233724]: 2025-11-29 08:33:47.588 233728 DEBUG nova.network.neutron [req-c061ddb6-e7f8-420f-985e-bcbc4f00da79 req-9523139b-9350-41de-82b9-52ba32b6fb75 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:33:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:47.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:47 np0005539552 nova_compute[233724]: 2025-11-29 08:33:47.921 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:33:48 np0005539552 nova_compute[233724]: 2025-11-29 08:33:48.460 233728 DEBUG nova.network.neutron [req-c061ddb6-e7f8-420f-985e-bcbc4f00da79 req-9523139b-9350-41de-82b9-52ba32b6fb75 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:33:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:48.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:48 np0005539552 nova_compute[233724]: 2025-11-29 08:33:48.497 233728 DEBUG oslo_concurrency.lockutils [req-c061ddb6-e7f8-420f-985e-bcbc4f00da79 req-9523139b-9350-41de-82b9-52ba32b6fb75 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-1aa3620a-ecca-49bf-ae39-50911ab7a562" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:33:48 np0005539552 nova_compute[233724]: 2025-11-29 08:33:48.497 233728 DEBUG oslo_concurrency.lockutils [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquired lock "refresh_cache-1aa3620a-ecca-49bf-ae39-50911ab7a562" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:33:48 np0005539552 nova_compute[233724]: 2025-11-29 08:33:48.497 233728 DEBUG nova.network.neutron [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:33:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:48 np0005539552 nova_compute[233724]: 2025-11-29 08:33:48.792 233728 DEBUG nova.network.neutron [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:33:49 np0005539552 nova_compute[233724]: 2025-11-29 08:33:49.308 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:49 np0005539552 nova_compute[233724]: 2025-11-29 08:33:49.651 233728 DEBUG nova.network.neutron [None req-e71eb99e-8aff-45b5-8552-13506d0fb486 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Updating instance_info_cache with network_info: [{"id": "5369324b-4a12-4cff-807c-444de53025fa", "address": "fa:16:3e:a3:51:12", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5369324b-4a", "ovs_interfaceid": "5369324b-4a12-4cff-807c-444de53025fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:33:49 np0005539552 nova_compute[233724]: 2025-11-29 08:33:49.684 233728 DEBUG oslo_concurrency.lockutils [None req-e71eb99e-8aff-45b5-8552-13506d0fb486 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Releasing lock "refresh_cache-8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:33:49 np0005539552 nova_compute[233724]: 2025-11-29 08:33:49.685 233728 DEBUG nova.objects.instance [None req-e71eb99e-8aff-45b5-8552-13506d0fb486 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lazy-loading 'migration_context' on Instance uuid 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:33:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:49.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:50.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:50 np0005539552 nova_compute[233724]: 2025-11-29 08:33:50.701 233728 DEBUG oslo_concurrency.lockutils [None req-d6032b06-3e82-472a-b7e1-e362f25084b1 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Acquiring lock "64f6896c-17f2-4ceb-98b9-50a541c98b7b" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:50 np0005539552 nova_compute[233724]: 2025-11-29 08:33:50.703 233728 DEBUG oslo_concurrency.lockutils [None req-d6032b06-3e82-472a-b7e1-e362f25084b1 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "64f6896c-17f2-4ceb-98b9-50a541c98b7b" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:50 np0005539552 nova_compute[233724]: 2025-11-29 08:33:50.706 233728 DEBUG nova.storage.rbd_utils [None req-e71eb99e-8aff-45b5-8552-13506d0fb486 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] removing snapshot(nova-resize) on rbd image(8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:33:50 np0005539552 nova_compute[233724]: 2025-11-29 08:33:50.731 233728 DEBUG nova.objects.instance [None req-d6032b06-3e82-472a-b7e1-e362f25084b1 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lazy-loading 'flavor' on Instance uuid 64f6896c-17f2-4ceb-98b9-50a541c98b7b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:33:50 np0005539552 nova_compute[233724]: 2025-11-29 08:33:50.774 233728 DEBUG oslo_concurrency.lockutils [None req-d6032b06-3e82-472a-b7e1-e362f25084b1 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "64f6896c-17f2-4ceb-98b9-50a541c98b7b" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.071s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:50 np0005539552 nova_compute[233724]: 2025-11-29 08:33:50.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:33:50 np0005539552 nova_compute[233724]: 2025-11-29 08:33:50.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:33:50 np0005539552 nova_compute[233724]: 2025-11-29 08:33:50.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:33:50 np0005539552 nova_compute[233724]: 2025-11-29 08:33:50.942 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.004 233728 DEBUG nova.network.neutron [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Updating instance_info_cache with network_info: [{"id": "2ac4854b-6a8d-452d-a59a-40ea08ff4293", "address": "fa:16:3e:f9:5b:5b", "network": {"id": "bc08e02b-3e4b-4450-90c6-2c4703444c23", "bridge": "br-int", "label": "tempest-network-smoke--1498358289", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ac4854b-6a", "ovs_interfaceid": "2ac4854b-6a8d-452d-a59a-40ea08ff4293", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.035 233728 DEBUG oslo_concurrency.lockutils [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Releasing lock "refresh_cache-1aa3620a-ecca-49bf-ae39-50911ab7a562" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.035 233728 DEBUG nova.compute.manager [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Instance network_info: |[{"id": "2ac4854b-6a8d-452d-a59a-40ea08ff4293", "address": "fa:16:3e:f9:5b:5b", "network": {"id": "bc08e02b-3e4b-4450-90c6-2c4703444c23", "bridge": "br-int", "label": "tempest-network-smoke--1498358289", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ac4854b-6a", "ovs_interfaceid": "2ac4854b-6a8d-452d-a59a-40ea08ff4293", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.038 233728 DEBUG nova.virt.libvirt.driver [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Start _get_guest_xml network_info=[{"id": "2ac4854b-6a8d-452d-a59a-40ea08ff4293", "address": "fa:16:3e:f9:5b:5b", "network": {"id": "bc08e02b-3e4b-4450-90c6-2c4703444c23", "bridge": "br-int", "label": "tempest-network-smoke--1498358289", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ac4854b-6a", "ovs_interfaceid": "2ac4854b-6a8d-452d-a59a-40ea08ff4293", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.046 233728 WARNING nova.virt.libvirt.driver [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.050 233728 DEBUG nova.virt.libvirt.host [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.051 233728 DEBUG nova.virt.libvirt.host [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.055 233728 DEBUG nova.virt.libvirt.host [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.056 233728 DEBUG nova.virt.libvirt.host [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.057 233728 DEBUG nova.virt.libvirt.driver [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.057 233728 DEBUG nova.virt.hardware [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.058 233728 DEBUG nova.virt.hardware [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.058 233728 DEBUG nova.virt.hardware [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.058 233728 DEBUG nova.virt.hardware [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.059 233728 DEBUG nova.virt.hardware [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.059 233728 DEBUG nova.virt.hardware [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.059 233728 DEBUG nova.virt.hardware [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.060 233728 DEBUG nova.virt.hardware [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.060 233728 DEBUG nova.virt.hardware [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.060 233728 DEBUG nova.virt.hardware [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.060 233728 DEBUG nova.virt.hardware [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.063 233728 DEBUG oslo_concurrency.processutils [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.110 233728 DEBUG oslo_concurrency.lockutils [None req-d6032b06-3e82-472a-b7e1-e362f25084b1 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Acquiring lock "64f6896c-17f2-4ceb-98b9-50a541c98b7b" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.111 233728 DEBUG oslo_concurrency.lockutils [None req-d6032b06-3e82-472a-b7e1-e362f25084b1 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "64f6896c-17f2-4ceb-98b9-50a541c98b7b" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.111 233728 INFO nova.compute.manager [None req-d6032b06-3e82-472a-b7e1-e362f25084b1 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Attaching volume b180584d-b262-44fa-b3d3-a5cae1efac78 to /dev/vdb#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.182 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "refresh_cache-64f6896c-17f2-4ceb-98b9-50a541c98b7b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.183 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquired lock "refresh_cache-64f6896c-17f2-4ceb-98b9-50a541c98b7b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.183 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.183 233728 DEBUG nova.objects.instance [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 64f6896c-17f2-4ceb-98b9-50a541c98b7b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.295 233728 DEBUG os_brick.utils [None req-d6032b06-3e82-472a-b7e1-e362f25084b1 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.296 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.315 243418 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.315 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[1b664baf-9ef6-4422-95ce-a5a725f0693f]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.316 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.330 243418 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.330 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[f96a84d1-6203-4320-8f36-bce2aa1abafc]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0e73df9328', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.332 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.350 243418 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.350 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[e8398e41-4210-4352-a69e-81157bc44768]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.353 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[0e1b3538-935c-432e-8c9c-59ad4ad3ed92]: (4, '6fbde64a-d978-4f1a-a29d-a77e1f5a1987') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.360 233728 DEBUG oslo_concurrency.processutils [None req-d6032b06-3e82-472a-b7e1-e362f25084b1 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.389 233728 DEBUG oslo_concurrency.processutils [None req-d6032b06-3e82-472a-b7e1-e362f25084b1 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] CMD "nvme version" returned: 0 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.392 233728 DEBUG os_brick.initiator.connectors.lightos [None req-d6032b06-3e82-472a-b7e1-e362f25084b1 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.393 233728 DEBUG os_brick.initiator.connectors.lightos [None req-d6032b06-3e82-472a-b7e1-e362f25084b1 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.393 233728 DEBUG os_brick.initiator.connectors.lightos [None req-d6032b06-3e82-472a-b7e1-e362f25084b1 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.394 233728 DEBUG os_brick.utils [None req-d6032b06-3e82-472a-b7e1-e362f25084b1 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] <== get_connector_properties: return (98ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0e73df9328', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '6fbde64a-d978-4f1a-a29d-a77e1f5a1987', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.394 233728 DEBUG nova.virt.block_device [None req-d6032b06-3e82-472a-b7e1-e362f25084b1 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Updating existing volume attachment record: 90c7fc65-fd53-40da-9002-ead00ac629f3 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:33:51 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:33:51 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3465246009' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.537 233728 DEBUG oslo_concurrency.processutils [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.565 233728 DEBUG nova.storage.rbd_utils [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 1aa3620a-ecca-49bf-ae39-50911ab7a562_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.570 233728 DEBUG oslo_concurrency.processutils [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:51.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:51 np0005539552 nova_compute[233724]: 2025-11-29 08:33:51.888 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e360 e360: 3 total, 3 up, 3 in
Nov 29 03:33:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:33:52 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/754793567' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.083 233728 DEBUG oslo_concurrency.processutils [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.086 233728 DEBUG nova.virt.libvirt.vif [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:33:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1110573693',display_name='tempest-TestNetworkBasicOps-server-1110573693',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1110573693',id=169,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMeMAFLC8UXHYeqRMsgo4vCc+yHZb/bXPuLZbJ8zLTBrWGcwTomEZLuIN/ibqbhPHZF6cN972NcuNJo5BPXvofJKfDYsF2UdVA78jKWFBjJwJk+YknjzIiTx+qoa8vh0Mg==',key_name='tempest-TestNetworkBasicOps-280565887',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-tn0poxjr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:33:43Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=1aa3620a-ecca-49bf-ae39-50911ab7a562,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2ac4854b-6a8d-452d-a59a-40ea08ff4293", "address": "fa:16:3e:f9:5b:5b", "network": {"id": "bc08e02b-3e4b-4450-90c6-2c4703444c23", "bridge": "br-int", "label": "tempest-network-smoke--1498358289", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ac4854b-6a", "ovs_interfaceid": "2ac4854b-6a8d-452d-a59a-40ea08ff4293", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.086 233728 DEBUG nova.network.os_vif_util [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "2ac4854b-6a8d-452d-a59a-40ea08ff4293", "address": "fa:16:3e:f9:5b:5b", "network": {"id": "bc08e02b-3e4b-4450-90c6-2c4703444c23", "bridge": "br-int", "label": "tempest-network-smoke--1498358289", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ac4854b-6a", "ovs_interfaceid": "2ac4854b-6a8d-452d-a59a-40ea08ff4293", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.088 233728 DEBUG nova.network.os_vif_util [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:5b:5b,bridge_name='br-int',has_traffic_filtering=True,id=2ac4854b-6a8d-452d-a59a-40ea08ff4293,network=Network(bc08e02b-3e4b-4450-90c6-2c4703444c23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ac4854b-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.089 233728 DEBUG nova.objects.instance [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1aa3620a-ecca-49bf-ae39-50911ab7a562 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.111 233728 DEBUG nova.virt.libvirt.driver [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:33:52 np0005539552 nova_compute[233724]:  <uuid>1aa3620a-ecca-49bf-ae39-50911ab7a562</uuid>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:  <name>instance-000000a9</name>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:33:52 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:      <nova:name>tempest-TestNetworkBasicOps-server-1110573693</nova:name>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:33:51</nova:creationTime>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:33:52 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:        <nova:user uuid="4774e2851bc6407cb0fcde15bd24d1b3">tempest-TestNetworkBasicOps-828399474-project-member</nova:user>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:        <nova:project uuid="0471b9b208874403aa3f0fbe7504ad19">tempest-TestNetworkBasicOps-828399474</nova:project>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:        <nova:port uuid="2ac4854b-6a8d-452d-a59a-40ea08ff4293">
Nov 29 03:33:52 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.28" ipVersion="4"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:      <entry name="serial">1aa3620a-ecca-49bf-ae39-50911ab7a562</entry>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:      <entry name="uuid">1aa3620a-ecca-49bf-ae39-50911ab7a562</entry>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:33:52 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/1aa3620a-ecca-49bf-ae39-50911ab7a562_disk">
Nov 29 03:33:52 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:33:52 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:33:52 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/1aa3620a-ecca-49bf-ae39-50911ab7a562_disk.config">
Nov 29 03:33:52 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:33:52 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:33:52 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:f9:5b:5b"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:      <target dev="tap2ac4854b-6a"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:33:52 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/1aa3620a-ecca-49bf-ae39-50911ab7a562/console.log" append="off"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:33:52 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:33:52 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:33:52 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:33:52 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.113 233728 DEBUG nova.compute.manager [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Preparing to wait for external event network-vif-plugged-2ac4854b-6a8d-452d-a59a-40ea08ff4293 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.114 233728 DEBUG oslo_concurrency.lockutils [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "1aa3620a-ecca-49bf-ae39-50911ab7a562-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.114 233728 DEBUG oslo_concurrency.lockutils [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "1aa3620a-ecca-49bf-ae39-50911ab7a562-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.115 233728 DEBUG oslo_concurrency.lockutils [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "1aa3620a-ecca-49bf-ae39-50911ab7a562-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.115 233728 DEBUG nova.virt.libvirt.vif [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:33:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1110573693',display_name='tempest-TestNetworkBasicOps-server-1110573693',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1110573693',id=169,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMeMAFLC8UXHYeqRMsgo4vCc+yHZb/bXPuLZbJ8zLTBrWGcwTomEZLuIN/ibqbhPHZF6cN972NcuNJo5BPXvofJKfDYsF2UdVA78jKWFBjJwJk+YknjzIiTx+qoa8vh0Mg==',key_name='tempest-TestNetworkBasicOps-280565887',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-tn0poxjr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:33:43Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=1aa3620a-ecca-49bf-ae39-50911ab7a562,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2ac4854b-6a8d-452d-a59a-40ea08ff4293", "address": "fa:16:3e:f9:5b:5b", "network": {"id": "bc08e02b-3e4b-4450-90c6-2c4703444c23", "bridge": "br-int", "label": "tempest-network-smoke--1498358289", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ac4854b-6a", "ovs_interfaceid": "2ac4854b-6a8d-452d-a59a-40ea08ff4293", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.116 233728 DEBUG nova.network.os_vif_util [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "2ac4854b-6a8d-452d-a59a-40ea08ff4293", "address": "fa:16:3e:f9:5b:5b", "network": {"id": "bc08e02b-3e4b-4450-90c6-2c4703444c23", "bridge": "br-int", "label": "tempest-network-smoke--1498358289", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ac4854b-6a", "ovs_interfaceid": "2ac4854b-6a8d-452d-a59a-40ea08ff4293", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.116 233728 DEBUG nova.network.os_vif_util [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:5b:5b,bridge_name='br-int',has_traffic_filtering=True,id=2ac4854b-6a8d-452d-a59a-40ea08ff4293,network=Network(bc08e02b-3e4b-4450-90c6-2c4703444c23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ac4854b-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.117 233728 DEBUG os_vif [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:5b:5b,bridge_name='br-int',has_traffic_filtering=True,id=2ac4854b-6a8d-452d-a59a-40ea08ff4293,network=Network(bc08e02b-3e4b-4450-90c6-2c4703444c23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ac4854b-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.117 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.118 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.118 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.122 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.122 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2ac4854b-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.122 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2ac4854b-6a, col_values=(('external_ids', {'iface-id': '2ac4854b-6a8d-452d-a59a-40ea08ff4293', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f9:5b:5b', 'vm-uuid': '1aa3620a-ecca-49bf-ae39-50911ab7a562'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.124 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:52 np0005539552 NetworkManager[48926]: <info>  [1764405232.1256] manager: (tap2ac4854b-6a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/343)
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.127 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:33:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:33:52 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/957631055' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.138 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.141 233728 INFO os_vif [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:5b:5b,bridge_name='br-int',has_traffic_filtering=True,id=2ac4854b-6a8d-452d-a59a-40ea08ff4293,network=Network(bc08e02b-3e4b-4450-90c6-2c4703444c23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ac4854b-6a')#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.152 233728 DEBUG nova.virt.libvirt.vif [None req-e71eb99e-8aff-45b5-8552-13506d0fb486 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:32:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-0',id=164,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEFZDUAh1tFHT85mctamdge/Jlh9j7Mmalvlf2a+E48/dJ4b3TzL46vHd8+krJsRkbdr2BabH5xlFnXxT+hxq+KJlLzOnOaQuAWI18v9sbbjA8bZzR2tugMjasg7rWhFwg==',key_name='tempest-keypair-2058861619',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:33:42Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d4f6db81949d487b853d7567f8a2e6d4',ramdisk_id='',reservation_id='r-3f2qzfjr',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-573425942',owner_user_name='tempest-AttachVolumeMultiAttachTest-573425942-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:33:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5b0953fb7cc415fb26cf4ffdd5908c6',uuid=8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "5369324b-4a12-4cff-807c-444de53025fa", "address": "fa:16:3e:a3:51:12", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5369324b-4a", "ovs_interfaceid": "5369324b-4a12-4cff-807c-444de53025fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.153 233728 DEBUG nova.network.os_vif_util [None req-e71eb99e-8aff-45b5-8552-13506d0fb486 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converting VIF {"id": "5369324b-4a12-4cff-807c-444de53025fa", "address": "fa:16:3e:a3:51:12", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5369324b-4a", "ovs_interfaceid": "5369324b-4a12-4cff-807c-444de53025fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.153 233728 DEBUG nova.network.os_vif_util [None req-e71eb99e-8aff-45b5-8552-13506d0fb486 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a3:51:12,bridge_name='br-int',has_traffic_filtering=True,id=5369324b-4a12-4cff-807c-444de53025fa,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5369324b-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.154 233728 DEBUG os_vif [None req-e71eb99e-8aff-45b5-8552-13506d0fb486 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a3:51:12,bridge_name='br-int',has_traffic_filtering=True,id=5369324b-4a12-4cff-807c-444de53025fa,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5369324b-4a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.155 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.155 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5369324b-4a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.155 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.157 233728 INFO os_vif [None req-e71eb99e-8aff-45b5-8552-13506d0fb486 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a3:51:12,bridge_name='br-int',has_traffic_filtering=True,id=5369324b-4a12-4cff-807c-444de53025fa,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5369324b-4a')#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.157 233728 DEBUG oslo_concurrency.lockutils [None req-e71eb99e-8aff-45b5-8552-13506d0fb486 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.157 233728 DEBUG oslo_concurrency.lockutils [None req-e71eb99e-8aff-45b5-8552-13506d0fb486 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.217 233728 DEBUG nova.virt.libvirt.driver [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.217 233728 DEBUG nova.virt.libvirt.driver [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.217 233728 DEBUG nova.virt.libvirt.driver [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No VIF found with MAC fa:16:3e:f9:5b:5b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.218 233728 INFO nova.virt.libvirt.driver [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Using config drive#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.237 233728 DEBUG nova.storage.rbd_utils [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 1aa3620a-ecca-49bf-ae39-50911ab7a562_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.293 233728 DEBUG nova.objects.instance [None req-d6032b06-3e82-472a-b7e1-e362f25084b1 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lazy-loading 'flavor' on Instance uuid 64f6896c-17f2-4ceb-98b9-50a541c98b7b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.313 233728 DEBUG nova.virt.libvirt.driver [None req-d6032b06-3e82-472a-b7e1-e362f25084b1 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Attempting to attach volume b180584d-b262-44fa-b3d3-a5cae1efac78 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.315 233728 DEBUG nova.virt.libvirt.guest [None req-d6032b06-3e82-472a-b7e1-e362f25084b1 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 03:33:52 np0005539552 nova_compute[233724]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:  <source protocol="rbd" name="volumes/volume-b180584d-b262-44fa-b3d3-a5cae1efac78">
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:  </source>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:  <auth username="openstack">
Nov 29 03:33:52 np0005539552 nova_compute[233724]:    <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:  </auth>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:33:52 np0005539552 nova_compute[233724]:  <serial>b180584d-b262-44fa-b3d3-a5cae1efac78</serial>
Nov 29 03:33:52 np0005539552 nova_compute[233724]: </disk>
Nov 29 03:33:52 np0005539552 nova_compute[233724]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.320 233728 DEBUG oslo_concurrency.processutils [None req-e71eb99e-8aff-45b5-8552-13506d0fb486 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.448 233728 DEBUG nova.virt.libvirt.driver [None req-d6032b06-3e82-472a-b7e1-e362f25084b1 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.448 233728 DEBUG nova.virt.libvirt.driver [None req-d6032b06-3e82-472a-b7e1-e362f25084b1 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.448 233728 DEBUG nova.virt.libvirt.driver [None req-d6032b06-3e82-472a-b7e1-e362f25084b1 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.450 233728 DEBUG nova.virt.libvirt.driver [None req-d6032b06-3e82-472a-b7e1-e362f25084b1 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] No VIF found with MAC fa:16:3e:43:dd:70, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:33:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:52.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:33:52 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2483054964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.736 233728 DEBUG oslo_concurrency.processutils [None req-e71eb99e-8aff-45b5-8552-13506d0fb486 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.743 233728 DEBUG nova.compute.provider_tree [None req-e71eb99e-8aff-45b5-8552-13506d0fb486 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.771 233728 INFO nova.virt.libvirt.driver [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Creating config drive at /var/lib/nova/instances/1aa3620a-ecca-49bf-ae39-50911ab7a562/disk.config#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.783 233728 DEBUG oslo_concurrency.processutils [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1aa3620a-ecca-49bf-ae39-50911ab7a562/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzw8mxqfu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.824 233728 DEBUG nova.scheduler.client.report [None req-e71eb99e-8aff-45b5-8552-13506d0fb486 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.839 233728 DEBUG oslo_concurrency.lockutils [None req-d6032b06-3e82-472a-b7e1-e362f25084b1 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "64f6896c-17f2-4ceb-98b9-50a541c98b7b" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.902 233728 DEBUG oslo_concurrency.lockutils [None req-e71eb99e-8aff-45b5-8552-13506d0fb486 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.930 233728 DEBUG oslo_concurrency.processutils [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1aa3620a-ecca-49bf-ae39-50911ab7a562/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzw8mxqfu" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.962 233728 DEBUG nova.storage.rbd_utils [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 1aa3620a-ecca-49bf-ae39-50911ab7a562_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:33:52 np0005539552 nova_compute[233724]: 2025-11-29 08:33:52.966 233728 DEBUG oslo_concurrency.processutils [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1aa3620a-ecca-49bf-ae39-50911ab7a562/disk.config 1aa3620a-ecca-49bf-ae39-50911ab7a562_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:53 np0005539552 nova_compute[233724]: 2025-11-29 08:33:53.083 233728 INFO nova.scheduler.client.report [None req-e71eb99e-8aff-45b5-8552-13506d0fb486 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Deleted allocation for migration 0f0b67e7-e113-40b3-9fa2-648865ec7b60#033[00m
Nov 29 03:33:53 np0005539552 nova_compute[233724]: 2025-11-29 08:33:53.087 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Updating instance_info_cache with network_info: [{"id": "aa2f5192-1c39-497e-8a7b-31d50bd48eb7", "address": "fa:16:3e:43:dd:70", "network": {"id": "d6d35cfb-cc41-4788-977c-b8e5140795a0", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1510670722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1981e9617628491f938ef0ef01c061c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa2f5192-1c", "ovs_interfaceid": "aa2f5192-1c39-497e-8a7b-31d50bd48eb7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:33:53 np0005539552 nova_compute[233724]: 2025-11-29 08:33:53.111 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Releasing lock "refresh_cache-64f6896c-17f2-4ceb-98b9-50a541c98b7b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:33:53 np0005539552 nova_compute[233724]: 2025-11-29 08:33:53.112 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:33:53 np0005539552 nova_compute[233724]: 2025-11-29 08:33:53.154 233728 DEBUG oslo_concurrency.lockutils [None req-e71eb99e-8aff-45b5-8552-13506d0fb486 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 8.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:53 np0005539552 nova_compute[233724]: 2025-11-29 08:33:53.162 233728 DEBUG oslo_concurrency.processutils [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1aa3620a-ecca-49bf-ae39-50911ab7a562/disk.config 1aa3620a-ecca-49bf-ae39-50911ab7a562_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:53 np0005539552 nova_compute[233724]: 2025-11-29 08:33:53.163 233728 INFO nova.virt.libvirt.driver [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Deleting local config drive /var/lib/nova/instances/1aa3620a-ecca-49bf-ae39-50911ab7a562/disk.config because it was imported into RBD.#033[00m
Nov 29 03:33:53 np0005539552 nova_compute[233724]: 2025-11-29 08:33:53.210 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:53 np0005539552 kernel: tap2ac4854b-6a: entered promiscuous mode
Nov 29 03:33:53 np0005539552 ovn_controller[133798]: 2025-11-29T08:33:53Z|00782|binding|INFO|Claiming lport 2ac4854b-6a8d-452d-a59a-40ea08ff4293 for this chassis.
Nov 29 03:33:53 np0005539552 ovn_controller[133798]: 2025-11-29T08:33:53Z|00783|binding|INFO|2ac4854b-6a8d-452d-a59a-40ea08ff4293: Claiming fa:16:3e:f9:5b:5b 10.100.0.28
Nov 29 03:33:53 np0005539552 NetworkManager[48926]: <info>  [1764405233.2370] manager: (tap2ac4854b-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/344)
Nov 29 03:33:53 np0005539552 nova_compute[233724]: 2025-11-29 08:33:53.237 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:53.247 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:5b:5b 10.100.0.28'], port_security=['fa:16:3e:f9:5b:5b 10.100.0.28'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.28/28', 'neutron:device_id': '1aa3620a-ecca-49bf-ae39-50911ab7a562', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bc08e02b-3e4b-4450-90c6-2c4703444c23', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0471b9b208874403aa3f0fbe7504ad19', 'neutron:revision_number': '2', 'neutron:security_group_ids': '156d5fe0-779f-4535-88d0-c8b31b6804e2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c6811a8-ed3d-495c-b35f-bf9c02353bd9, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=2ac4854b-6a8d-452d-a59a-40ea08ff4293) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:53.249 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 2ac4854b-6a8d-452d-a59a-40ea08ff4293 in datapath bc08e02b-3e4b-4450-90c6-2c4703444c23 bound to our chassis#033[00m
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:53.253 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bc08e02b-3e4b-4450-90c6-2c4703444c23#033[00m
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:53.276 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[503bb828-dde1-42bf-936d-8829732e1a63]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:53.277 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbc08e02b-31 in ovnmeta-bc08e02b-3e4b-4450-90c6-2c4703444c23 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:53.280 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbc08e02b-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:53.280 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[09a08aad-4b56-47c5-9e40-8145679eb694]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:53.282 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2d62c193-613a-438d-ac31-6d28e8ad0a0a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:53 np0005539552 systemd-machined[196379]: New machine qemu-80-instance-000000a9.
Nov 29 03:33:53 np0005539552 systemd-udevd[304781]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:33:53 np0005539552 nova_compute[233724]: 2025-11-29 08:33:53.291 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:53 np0005539552 systemd[1]: Started Virtual Machine qemu-80-instance-000000a9.
Nov 29 03:33:53 np0005539552 ovn_controller[133798]: 2025-11-29T08:33:53Z|00784|binding|INFO|Setting lport 2ac4854b-6a8d-452d-a59a-40ea08ff4293 ovn-installed in OVS
Nov 29 03:33:53 np0005539552 ovn_controller[133798]: 2025-11-29T08:33:53Z|00785|binding|INFO|Setting lport 2ac4854b-6a8d-452d-a59a-40ea08ff4293 up in Southbound
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:53.298 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[91b34652-a36d-4895-bd44-a451bec815ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:53 np0005539552 nova_compute[233724]: 2025-11-29 08:33:53.300 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:53 np0005539552 NetworkManager[48926]: <info>  [1764405233.3127] device (tap2ac4854b-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:33:53 np0005539552 NetworkManager[48926]: <info>  [1764405233.3151] device (tap2ac4854b-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:53.316 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ba740d0e-268c-4b75-9d6a-d107ef64319e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:53.351 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[fa891062-d935-4a3e-8362-697eed20e7d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:53 np0005539552 systemd-udevd[304785]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:33:53 np0005539552 NetworkManager[48926]: <info>  [1764405233.3603] manager: (tapbc08e02b-30): new Veth device (/org/freedesktop/NetworkManager/Devices/345)
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:53.358 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c8862221-5acc-42d7-82a4-e5de3fe9dcf0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:53.404 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[45e30936-ec9b-47c5-a340-13c6736d8976]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:53.408 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[842f6cb6-0807-4008-ac08-937bedefcbf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:53 np0005539552 NetworkManager[48926]: <info>  [1764405233.4345] device (tapbc08e02b-30): carrier: link connected
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:53.440 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[edeaaad3-7cd3-4511-a824-d659fcc57f89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:53.466 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[07af27de-d4c0-4cb2-aa9d-8989cfaa6d61]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbc08e02b-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:81:08'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 230], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 827912, 'reachable_time': 35871, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304813, 'error': None, 'target': 'ovnmeta-bc08e02b-3e4b-4450-90c6-2c4703444c23', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:53.484 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6886149f-75cf-4844-b68d-24298179ec40]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee0:8108'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 827912, 'tstamp': 827912}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304814, 'error': None, 'target': 'ovnmeta-bc08e02b-3e4b-4450-90c6-2c4703444c23', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:53.502 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[99f40508-a38c-4696-b414-6979613eb8a9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbc08e02b-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:81:08'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 230], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 827912, 'reachable_time': 35871, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 304815, 'error': None, 'target': 'ovnmeta-bc08e02b-3e4b-4450-90c6-2c4703444c23', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:53.553 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f0ba71e5-3d52-4cf5-bca8-75ef1d3c0bd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:53.644 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[040ee9f5-cbef-4e83-8cfa-a63f84627d54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:53.645 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbc08e02b-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:53.646 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:53.646 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbc08e02b-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:33:53 np0005539552 NetworkManager[48926]: <info>  [1764405233.6493] manager: (tapbc08e02b-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/346)
Nov 29 03:33:53 np0005539552 kernel: tapbc08e02b-30: entered promiscuous mode
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:53.655 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbc08e02b-30, col_values=(('external_ids', {'iface-id': '73115267-500c-4e1f-8d82-c8363d29755f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:33:53 np0005539552 ovn_controller[133798]: 2025-11-29T08:33:53Z|00786|binding|INFO|Releasing lport 73115267-500c-4e1f-8d82-c8363d29755f from this chassis (sb_readonly=0)
Nov 29 03:33:53 np0005539552 nova_compute[233724]: 2025-11-29 08:33:53.666 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:53 np0005539552 nova_compute[233724]: 2025-11-29 08:33:53.685 233728 DEBUG nova.compute.manager [req-87fe1d84-85f0-45f6-8933-6c04d797fd5a req-6d4ea8d4-80d5-4074-8009-10ef0bd464b9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Received event network-vif-plugged-2ac4854b-6a8d-452d-a59a-40ea08ff4293 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:33:53 np0005539552 nova_compute[233724]: 2025-11-29 08:33:53.686 233728 DEBUG oslo_concurrency.lockutils [req-87fe1d84-85f0-45f6-8933-6c04d797fd5a req-6d4ea8d4-80d5-4074-8009-10ef0bd464b9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "1aa3620a-ecca-49bf-ae39-50911ab7a562-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:53 np0005539552 nova_compute[233724]: 2025-11-29 08:33:53.686 233728 DEBUG oslo_concurrency.lockutils [req-87fe1d84-85f0-45f6-8933-6c04d797fd5a req-6d4ea8d4-80d5-4074-8009-10ef0bd464b9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1aa3620a-ecca-49bf-ae39-50911ab7a562-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:53 np0005539552 nova_compute[233724]: 2025-11-29 08:33:53.687 233728 DEBUG oslo_concurrency.lockutils [req-87fe1d84-85f0-45f6-8933-6c04d797fd5a req-6d4ea8d4-80d5-4074-8009-10ef0bd464b9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1aa3620a-ecca-49bf-ae39-50911ab7a562-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:53 np0005539552 nova_compute[233724]: 2025-11-29 08:33:53.687 233728 DEBUG nova.compute.manager [req-87fe1d84-85f0-45f6-8933-6c04d797fd5a req-6d4ea8d4-80d5-4074-8009-10ef0bd464b9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Processing event network-vif-plugged-2ac4854b-6a8d-452d-a59a-40ea08ff4293 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:33:53 np0005539552 nova_compute[233724]: 2025-11-29 08:33:53.691 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:53 np0005539552 nova_compute[233724]: 2025-11-29 08:33:53.692 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:53.693 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bc08e02b-3e4b-4450-90c6-2c4703444c23.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bc08e02b-3e4b-4450-90c6-2c4703444c23.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:53.697 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[93c1995d-2fae-46b0-969b-1d41106fc1bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:53.697 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-bc08e02b-3e4b-4450-90c6-2c4703444c23
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/bc08e02b-3e4b-4450-90c6-2c4703444c23.pid.haproxy
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID bc08e02b-3e4b-4450-90c6-2c4703444c23
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:33:53 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:33:53.698 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bc08e02b-3e4b-4450-90c6-2c4703444c23', 'env', 'PROCESS_TAG=haproxy-bc08e02b-3e4b-4450-90c6-2c4703444c23', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bc08e02b-3e4b-4450-90c6-2c4703444c23.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:33:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:53.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:54 np0005539552 podman[304845]: 2025-11-29 08:33:54.140256678 +0000 UTC m=+0.071023252 container create 59010fc8c5fd38ef2c693b4bbf2c900b35c98af4dec97715060d699b96ba017f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bc08e02b-3e4b-4450-90c6-2c4703444c23, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:33:54 np0005539552 systemd[1]: Started libpod-conmon-59010fc8c5fd38ef2c693b4bbf2c900b35c98af4dec97715060d699b96ba017f.scope.
Nov 29 03:33:54 np0005539552 podman[304845]: 2025-11-29 08:33:54.100671563 +0000 UTC m=+0.031438227 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:33:54 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:33:54 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7af8dde786ce156f05cd2490d26193cc49a91f43a97c53285411a8a9fec2a67/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:33:54 np0005539552 podman[304845]: 2025-11-29 08:33:54.240722741 +0000 UTC m=+0.171489325 container init 59010fc8c5fd38ef2c693b4bbf2c900b35c98af4dec97715060d699b96ba017f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bc08e02b-3e4b-4450-90c6-2c4703444c23, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:33:54 np0005539552 podman[304845]: 2025-11-29 08:33:54.251673496 +0000 UTC m=+0.182440060 container start 59010fc8c5fd38ef2c693b4bbf2c900b35c98af4dec97715060d699b96ba017f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bc08e02b-3e4b-4450-90c6-2c4703444c23, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:33:54 np0005539552 neutron-haproxy-ovnmeta-bc08e02b-3e4b-4450-90c6-2c4703444c23[304863]: [NOTICE]   (304930) : New worker (304943) forked
Nov 29 03:33:54 np0005539552 neutron-haproxy-ovnmeta-bc08e02b-3e4b-4450-90c6-2c4703444c23[304863]: [NOTICE]   (304930) : Loading success.
Nov 29 03:33:54 np0005539552 podman[304862]: 2025-11-29 08:33:54.295044083 +0000 UTC m=+0.084059163 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 03:33:54 np0005539552 podman[304861]: 2025-11-29 08:33:54.299585085 +0000 UTC m=+0.096463636 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 29 03:33:54 np0005539552 podman[304886]: 2025-11-29 08:33:54.337804054 +0000 UTC m=+0.115071228 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 03:33:54 np0005539552 nova_compute[233724]: 2025-11-29 08:33:54.431 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405234.430414, 1aa3620a-ecca-49bf-ae39-50911ab7a562 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:33:54 np0005539552 nova_compute[233724]: 2025-11-29 08:33:54.431 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] VM Started (Lifecycle Event)#033[00m
Nov 29 03:33:54 np0005539552 nova_compute[233724]: 2025-11-29 08:33:54.434 233728 DEBUG nova.compute.manager [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:33:54 np0005539552 nova_compute[233724]: 2025-11-29 08:33:54.438 233728 DEBUG nova.virt.libvirt.driver [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:33:54 np0005539552 nova_compute[233724]: 2025-11-29 08:33:54.443 233728 INFO nova.virt.libvirt.driver [-] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Instance spawned successfully.#033[00m
Nov 29 03:33:54 np0005539552 nova_compute[233724]: 2025-11-29 08:33:54.443 233728 DEBUG nova.virt.libvirt.driver [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:33:54 np0005539552 nova_compute[233724]: 2025-11-29 08:33:54.460 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:33:54 np0005539552 nova_compute[233724]: 2025-11-29 08:33:54.464 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:33:54 np0005539552 nova_compute[233724]: 2025-11-29 08:33:54.478 233728 DEBUG nova.virt.libvirt.driver [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:33:54 np0005539552 nova_compute[233724]: 2025-11-29 08:33:54.480 233728 DEBUG nova.virt.libvirt.driver [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:33:54 np0005539552 nova_compute[233724]: 2025-11-29 08:33:54.481 233728 DEBUG nova.virt.libvirt.driver [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:33:54 np0005539552 nova_compute[233724]: 2025-11-29 08:33:54.482 233728 DEBUG nova.virt.libvirt.driver [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:33:54 np0005539552 nova_compute[233724]: 2025-11-29 08:33:54.483 233728 DEBUG nova.virt.libvirt.driver [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:33:54 np0005539552 nova_compute[233724]: 2025-11-29 08:33:54.484 233728 DEBUG nova.virt.libvirt.driver [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:33:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:54.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:54 np0005539552 nova_compute[233724]: 2025-11-29 08:33:54.501 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:33:54 np0005539552 nova_compute[233724]: 2025-11-29 08:33:54.502 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405234.4305568, 1aa3620a-ecca-49bf-ae39-50911ab7a562 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:33:54 np0005539552 nova_compute[233724]: 2025-11-29 08:33:54.502 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:33:54 np0005539552 nova_compute[233724]: 2025-11-29 08:33:54.544 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:33:54 np0005539552 nova_compute[233724]: 2025-11-29 08:33:54.548 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405234.4377277, 1aa3620a-ecca-49bf-ae39-50911ab7a562 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:33:54 np0005539552 nova_compute[233724]: 2025-11-29 08:33:54.548 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:33:54 np0005539552 nova_compute[233724]: 2025-11-29 08:33:54.565 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:33:54 np0005539552 nova_compute[233724]: 2025-11-29 08:33:54.568 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:33:54 np0005539552 nova_compute[233724]: 2025-11-29 08:33:54.573 233728 INFO nova.compute.manager [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Took 10.59 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:33:54 np0005539552 nova_compute[233724]: 2025-11-29 08:33:54.574 233728 DEBUG nova.compute.manager [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:33:54 np0005539552 nova_compute[233724]: 2025-11-29 08:33:54.599 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:33:54 np0005539552 nova_compute[233724]: 2025-11-29 08:33:54.643 233728 INFO nova.compute.manager [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Took 11.63 seconds to build instance.#033[00m
Nov 29 03:33:54 np0005539552 nova_compute[233724]: 2025-11-29 08:33:54.666 233728 DEBUG oslo_concurrency.lockutils [None req-716080dc-e97e-4863-9e7f-afec3f998b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "1aa3620a-ecca-49bf-ae39-50911ab7a562" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:54 np0005539552 nova_compute[233724]: 2025-11-29 08:33:54.762 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405219.761828, b116fe85-6509-4516-bc73-6cd5fd20ecc2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:33:54 np0005539552 nova_compute[233724]: 2025-11-29 08:33:54.763 233728 INFO nova.compute.manager [-] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:33:54 np0005539552 nova_compute[233724]: 2025-11-29 08:33:54.789 233728 DEBUG nova.compute.manager [None req-5b6f5d93-bfd7-4457-a35d-452a04ae5b18 - - - - - -] [instance: b116fe85-6509-4516-bc73-6cd5fd20ecc2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:33:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:55.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:55 np0005539552 nova_compute[233724]: 2025-11-29 08:33:55.849 233728 DEBUG nova.compute.manager [req-7e6b99e6-7ab3-4168-a345-37bf6b1204e2 req-25521164-2eab-410a-a68f-6d1d7849f740 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Received event network-vif-plugged-2ac4854b-6a8d-452d-a59a-40ea08ff4293 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:33:55 np0005539552 nova_compute[233724]: 2025-11-29 08:33:55.850 233728 DEBUG oslo_concurrency.lockutils [req-7e6b99e6-7ab3-4168-a345-37bf6b1204e2 req-25521164-2eab-410a-a68f-6d1d7849f740 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "1aa3620a-ecca-49bf-ae39-50911ab7a562-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:55 np0005539552 nova_compute[233724]: 2025-11-29 08:33:55.851 233728 DEBUG oslo_concurrency.lockutils [req-7e6b99e6-7ab3-4168-a345-37bf6b1204e2 req-25521164-2eab-410a-a68f-6d1d7849f740 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1aa3620a-ecca-49bf-ae39-50911ab7a562-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:55 np0005539552 nova_compute[233724]: 2025-11-29 08:33:55.851 233728 DEBUG oslo_concurrency.lockutils [req-7e6b99e6-7ab3-4168-a345-37bf6b1204e2 req-25521164-2eab-410a-a68f-6d1d7849f740 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1aa3620a-ecca-49bf-ae39-50911ab7a562-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:55 np0005539552 nova_compute[233724]: 2025-11-29 08:33:55.852 233728 DEBUG nova.compute.manager [req-7e6b99e6-7ab3-4168-a345-37bf6b1204e2 req-25521164-2eab-410a-a68f-6d1d7849f740 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] No waiting events found dispatching network-vif-plugged-2ac4854b-6a8d-452d-a59a-40ea08ff4293 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:33:55 np0005539552 nova_compute[233724]: 2025-11-29 08:33:55.853 233728 WARNING nova.compute.manager [req-7e6b99e6-7ab3-4168-a345-37bf6b1204e2 req-25521164-2eab-410a-a68f-6d1d7849f740 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Received unexpected event network-vif-plugged-2ac4854b-6a8d-452d-a59a-40ea08ff4293 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:33:55 np0005539552 nova_compute[233724]: 2025-11-29 08:33:55.971 233728 DEBUG oslo_concurrency.lockutils [None req-f41edf91-cf76-41d7-a5cc-361b230d4cd6 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Acquiring lock "64f6896c-17f2-4ceb-98b9-50a541c98b7b" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:55 np0005539552 nova_compute[233724]: 2025-11-29 08:33:55.972 233728 DEBUG oslo_concurrency.lockutils [None req-f41edf91-cf76-41d7-a5cc-361b230d4cd6 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "64f6896c-17f2-4ceb-98b9-50a541c98b7b" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:56 np0005539552 nova_compute[233724]: 2025-11-29 08:33:56.006 233728 DEBUG nova.objects.instance [None req-f41edf91-cf76-41d7-a5cc-361b230d4cd6 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lazy-loading 'flavor' on Instance uuid 64f6896c-17f2-4ceb-98b9-50a541c98b7b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:33:56 np0005539552 nova_compute[233724]: 2025-11-29 08:33:56.058 233728 DEBUG oslo_concurrency.lockutils [None req-f41edf91-cf76-41d7-a5cc-361b230d4cd6 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "64f6896c-17f2-4ceb-98b9-50a541c98b7b" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.086s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:56 np0005539552 nova_compute[233724]: 2025-11-29 08:33:56.326 233728 DEBUG oslo_concurrency.lockutils [None req-f41edf91-cf76-41d7-a5cc-361b230d4cd6 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Acquiring lock "64f6896c-17f2-4ceb-98b9-50a541c98b7b" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:56 np0005539552 nova_compute[233724]: 2025-11-29 08:33:56.327 233728 DEBUG oslo_concurrency.lockutils [None req-f41edf91-cf76-41d7-a5cc-361b230d4cd6 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "64f6896c-17f2-4ceb-98b9-50a541c98b7b" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:56 np0005539552 nova_compute[233724]: 2025-11-29 08:33:56.327 233728 INFO nova.compute.manager [None req-f41edf91-cf76-41d7-a5cc-361b230d4cd6 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Attaching volume 371e8431-582d-4b80-9978-50a1e275178d to /dev/vdc#033[00m
Nov 29 03:33:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:56.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:56 np0005539552 nova_compute[233724]: 2025-11-29 08:33:56.797 233728 DEBUG os_brick.utils [None req-f41edf91-cf76-41d7-a5cc-361b230d4cd6 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:33:56 np0005539552 nova_compute[233724]: 2025-11-29 08:33:56.799 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:56 np0005539552 nova_compute[233724]: 2025-11-29 08:33:56.817 243418 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:56 np0005539552 nova_compute[233724]: 2025-11-29 08:33:56.817 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[4da47e6f-71e8-4d10-be30-a7371f754236]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:56 np0005539552 nova_compute[233724]: 2025-11-29 08:33:56.818 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:56 np0005539552 nova_compute[233724]: 2025-11-29 08:33:56.831 243418 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:56 np0005539552 nova_compute[233724]: 2025-11-29 08:33:56.832 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[1fd7e325-b33d-4aec-94ed-053f8256ade0]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0e73df9328', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:56 np0005539552 nova_compute[233724]: 2025-11-29 08:33:56.834 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:56 np0005539552 nova_compute[233724]: 2025-11-29 08:33:56.847 243418 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:56 np0005539552 nova_compute[233724]: 2025-11-29 08:33:56.847 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[d3cbf662-28df-4516-9111-d1b5ca11d25b]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:56 np0005539552 nova_compute[233724]: 2025-11-29 08:33:56.849 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[8fd37573-1ba8-4493-b597-9b72b74db991]: (4, '6fbde64a-d978-4f1a-a29d-a77e1f5a1987') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:56 np0005539552 nova_compute[233724]: 2025-11-29 08:33:56.851 233728 DEBUG oslo_concurrency.processutils [None req-f41edf91-cf76-41d7-a5cc-361b230d4cd6 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:56 np0005539552 nova_compute[233724]: 2025-11-29 08:33:56.894 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:56 np0005539552 nova_compute[233724]: 2025-11-29 08:33:56.900 233728 DEBUG oslo_concurrency.processutils [None req-f41edf91-cf76-41d7-a5cc-361b230d4cd6 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] CMD "nvme version" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:56 np0005539552 nova_compute[233724]: 2025-11-29 08:33:56.903 233728 DEBUG os_brick.initiator.connectors.lightos [None req-f41edf91-cf76-41d7-a5cc-361b230d4cd6 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:33:56 np0005539552 nova_compute[233724]: 2025-11-29 08:33:56.904 233728 DEBUG os_brick.initiator.connectors.lightos [None req-f41edf91-cf76-41d7-a5cc-361b230d4cd6 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:33:56 np0005539552 nova_compute[233724]: 2025-11-29 08:33:56.905 233728 DEBUG os_brick.initiator.connectors.lightos [None req-f41edf91-cf76-41d7-a5cc-361b230d4cd6 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:33:56 np0005539552 nova_compute[233724]: 2025-11-29 08:33:56.906 233728 DEBUG os_brick.utils [None req-f41edf91-cf76-41d7-a5cc-361b230d4cd6 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] <== get_connector_properties: return (107ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0e73df9328', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '6fbde64a-d978-4f1a-a29d-a77e1f5a1987', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:33:56 np0005539552 nova_compute[233724]: 2025-11-29 08:33:56.906 233728 DEBUG nova.virt.block_device [None req-f41edf91-cf76-41d7-a5cc-361b230d4cd6 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Updating existing volume attachment record: 7a734bd1-604a-4cf3-87e6-114a00ac2e25 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:33:57 np0005539552 nova_compute[233724]: 2025-11-29 08:33:57.124 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:57.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:58 np0005539552 nova_compute[233724]: 2025-11-29 08:33:58.198 233728 DEBUG nova.objects.instance [None req-f41edf91-cf76-41d7-a5cc-361b230d4cd6 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lazy-loading 'flavor' on Instance uuid 64f6896c-17f2-4ceb-98b9-50a541c98b7b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:33:58 np0005539552 nova_compute[233724]: 2025-11-29 08:33:58.226 233728 DEBUG nova.virt.libvirt.driver [None req-f41edf91-cf76-41d7-a5cc-361b230d4cd6 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Attempting to attach volume 371e8431-582d-4b80-9978-50a1e275178d with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Nov 29 03:33:58 np0005539552 nova_compute[233724]: 2025-11-29 08:33:58.230 233728 DEBUG nova.virt.libvirt.guest [None req-f41edf91-cf76-41d7-a5cc-361b230d4cd6 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 03:33:58 np0005539552 nova_compute[233724]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:33:58 np0005539552 nova_compute[233724]:  <source protocol="rbd" name="volumes/volume-371e8431-582d-4b80-9978-50a1e275178d">
Nov 29 03:33:58 np0005539552 nova_compute[233724]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:33:58 np0005539552 nova_compute[233724]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:33:58 np0005539552 nova_compute[233724]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:33:58 np0005539552 nova_compute[233724]:  </source>
Nov 29 03:33:58 np0005539552 nova_compute[233724]:  <auth username="openstack">
Nov 29 03:33:58 np0005539552 nova_compute[233724]:    <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:33:58 np0005539552 nova_compute[233724]:  </auth>
Nov 29 03:33:58 np0005539552 nova_compute[233724]:  <target dev="vdc" bus="virtio"/>
Nov 29 03:33:58 np0005539552 nova_compute[233724]:  <serial>371e8431-582d-4b80-9978-50a1e275178d</serial>
Nov 29 03:33:58 np0005539552 nova_compute[233724]: </disk>
Nov 29 03:33:58 np0005539552 nova_compute[233724]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 03:33:58 np0005539552 nova_compute[233724]: 2025-11-29 08:33:58.312 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:58.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:59 np0005539552 nova_compute[233724]: 2025-11-29 08:33:59.114 233728 DEBUG nova.virt.libvirt.driver [None req-f41edf91-cf76-41d7-a5cc-361b230d4cd6 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:33:59 np0005539552 nova_compute[233724]: 2025-11-29 08:33:59.114 233728 DEBUG nova.virt.libvirt.driver [None req-f41edf91-cf76-41d7-a5cc-361b230d4cd6 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:33:59 np0005539552 nova_compute[233724]: 2025-11-29 08:33:59.115 233728 DEBUG nova.virt.libvirt.driver [None req-f41edf91-cf76-41d7-a5cc-361b230d4cd6 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:33:59 np0005539552 nova_compute[233724]: 2025-11-29 08:33:59.115 233728 DEBUG nova.virt.libvirt.driver [None req-f41edf91-cf76-41d7-a5cc-361b230d4cd6 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:33:59 np0005539552 nova_compute[233724]: 2025-11-29 08:33:59.115 233728 DEBUG nova.virt.libvirt.driver [None req-f41edf91-cf76-41d7-a5cc-361b230d4cd6 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] No VIF found with MAC fa:16:3e:43:dd:70, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:33:59 np0005539552 nova_compute[233724]: 2025-11-29 08:33:59.446 233728 DEBUG oslo_concurrency.lockutils [None req-f41edf91-cf76-41d7-a5cc-361b230d4cd6 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "64f6896c-17f2-4ceb-98b9-50a541c98b7b" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 3.119s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:33:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:59.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:59 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e361 e361: 3 total, 3 up, 3 in
Nov 29 03:34:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:00.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:01 np0005539552 nova_compute[233724]: 2025-11-29 08:34:01.293 233728 DEBUG oslo_concurrency.lockutils [None req-d268c697-2c9a-4546-a016-75bcb6cec7f7 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Acquiring lock "64f6896c-17f2-4ceb-98b9-50a541c98b7b" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:01 np0005539552 nova_compute[233724]: 2025-11-29 08:34:01.295 233728 DEBUG oslo_concurrency.lockutils [None req-d268c697-2c9a-4546-a016-75bcb6cec7f7 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "64f6896c-17f2-4ceb-98b9-50a541c98b7b" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:01 np0005539552 nova_compute[233724]: 2025-11-29 08:34:01.315 233728 INFO nova.compute.manager [None req-d268c697-2c9a-4546-a016-75bcb6cec7f7 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Detaching volume b180584d-b262-44fa-b3d3-a5cae1efac78#033[00m
Nov 29 03:34:01 np0005539552 nova_compute[233724]: 2025-11-29 08:34:01.695 233728 INFO nova.virt.block_device [None req-d268c697-2c9a-4546-a016-75bcb6cec7f7 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Attempting to driver detach volume b180584d-b262-44fa-b3d3-a5cae1efac78 from mountpoint /dev/vdb#033[00m
Nov 29 03:34:01 np0005539552 nova_compute[233724]: 2025-11-29 08:34:01.707 233728 DEBUG nova.virt.libvirt.driver [None req-d268c697-2c9a-4546-a016-75bcb6cec7f7 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Attempting to detach device vdb from instance 64f6896c-17f2-4ceb-98b9-50a541c98b7b from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:34:01 np0005539552 nova_compute[233724]: 2025-11-29 08:34:01.708 233728 DEBUG nova.virt.libvirt.guest [None req-d268c697-2c9a-4546-a016-75bcb6cec7f7 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:34:01 np0005539552 nova_compute[233724]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:34:01 np0005539552 nova_compute[233724]:  <source protocol="rbd" name="volumes/volume-b180584d-b262-44fa-b3d3-a5cae1efac78">
Nov 29 03:34:01 np0005539552 nova_compute[233724]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:34:01 np0005539552 nova_compute[233724]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:34:01 np0005539552 nova_compute[233724]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:34:01 np0005539552 nova_compute[233724]:  </source>
Nov 29 03:34:01 np0005539552 nova_compute[233724]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:34:01 np0005539552 nova_compute[233724]:  <serial>b180584d-b262-44fa-b3d3-a5cae1efac78</serial>
Nov 29 03:34:01 np0005539552 nova_compute[233724]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:34:01 np0005539552 nova_compute[233724]: </disk>
Nov 29 03:34:01 np0005539552 nova_compute[233724]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:34:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:01.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:01 np0005539552 nova_compute[233724]: 2025-11-29 08:34:01.720 233728 INFO nova.virt.libvirt.driver [None req-d268c697-2c9a-4546-a016-75bcb6cec7f7 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Successfully detached device vdb from instance 64f6896c-17f2-4ceb-98b9-50a541c98b7b from the persistent domain config.#033[00m
Nov 29 03:34:01 np0005539552 nova_compute[233724]: 2025-11-29 08:34:01.721 233728 DEBUG nova.virt.libvirt.driver [None req-d268c697-2c9a-4546-a016-75bcb6cec7f7 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 64f6896c-17f2-4ceb-98b9-50a541c98b7b from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 29 03:34:01 np0005539552 nova_compute[233724]: 2025-11-29 08:34:01.722 233728 DEBUG nova.virt.libvirt.guest [None req-d268c697-2c9a-4546-a016-75bcb6cec7f7 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:34:01 np0005539552 nova_compute[233724]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:34:01 np0005539552 nova_compute[233724]:  <source protocol="rbd" name="volumes/volume-b180584d-b262-44fa-b3d3-a5cae1efac78">
Nov 29 03:34:01 np0005539552 nova_compute[233724]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:34:01 np0005539552 nova_compute[233724]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:34:01 np0005539552 nova_compute[233724]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:34:01 np0005539552 nova_compute[233724]:  </source>
Nov 29 03:34:01 np0005539552 nova_compute[233724]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:34:01 np0005539552 nova_compute[233724]:  <serial>b180584d-b262-44fa-b3d3-a5cae1efac78</serial>
Nov 29 03:34:01 np0005539552 nova_compute[233724]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:34:01 np0005539552 nova_compute[233724]: </disk>
Nov 29 03:34:01 np0005539552 nova_compute[233724]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:34:01 np0005539552 nova_compute[233724]: 2025-11-29 08:34:01.891 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:01 np0005539552 nova_compute[233724]: 2025-11-29 08:34:01.927 233728 DEBUG nova.virt.libvirt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Received event <DeviceRemovedEvent: 1764405241.9272149, 64f6896c-17f2-4ceb-98b9-50a541c98b7b => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 29 03:34:01 np0005539552 nova_compute[233724]: 2025-11-29 08:34:01.929 233728 DEBUG nova.virt.libvirt.driver [None req-d268c697-2c9a-4546-a016-75bcb6cec7f7 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 64f6896c-17f2-4ceb-98b9-50a541c98b7b _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 29 03:34:01 np0005539552 nova_compute[233724]: 2025-11-29 08:34:01.932 233728 INFO nova.virt.libvirt.driver [None req-d268c697-2c9a-4546-a016-75bcb6cec7f7 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Successfully detached device vdb from instance 64f6896c-17f2-4ceb-98b9-50a541c98b7b from the live domain config.#033[00m
Nov 29 03:34:02 np0005539552 nova_compute[233724]: 2025-11-29 08:34:02.127 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:02 np0005539552 nova_compute[233724]: 2025-11-29 08:34:02.231 233728 DEBUG nova.objects.instance [None req-d268c697-2c9a-4546-a016-75bcb6cec7f7 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lazy-loading 'flavor' on Instance uuid 64f6896c-17f2-4ceb-98b9-50a541c98b7b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:34:02 np0005539552 nova_compute[233724]: 2025-11-29 08:34:02.290 233728 DEBUG oslo_concurrency.lockutils [None req-d268c697-2c9a-4546-a016-75bcb6cec7f7 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "64f6896c-17f2-4ceb-98b9-50a541c98b7b" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.995s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:02.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:03 np0005539552 nova_compute[233724]: 2025-11-29 08:34:03.281 233728 DEBUG oslo_concurrency.lockutils [None req-1ffb5ee8-c517-4011-bd83-5a52f460a8fe 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Acquiring lock "64f6896c-17f2-4ceb-98b9-50a541c98b7b" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:03 np0005539552 nova_compute[233724]: 2025-11-29 08:34:03.282 233728 DEBUG oslo_concurrency.lockutils [None req-1ffb5ee8-c517-4011-bd83-5a52f460a8fe 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "64f6896c-17f2-4ceb-98b9-50a541c98b7b" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:03 np0005539552 nova_compute[233724]: 2025-11-29 08:34:03.297 233728 INFO nova.compute.manager [None req-1ffb5ee8-c517-4011-bd83-5a52f460a8fe 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Detaching volume 371e8431-582d-4b80-9978-50a1e275178d#033[00m
Nov 29 03:34:03 np0005539552 nova_compute[233724]: 2025-11-29 08:34:03.440 233728 INFO nova.virt.block_device [None req-1ffb5ee8-c517-4011-bd83-5a52f460a8fe 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Attempting to driver detach volume 371e8431-582d-4b80-9978-50a1e275178d from mountpoint /dev/vdc#033[00m
Nov 29 03:34:03 np0005539552 nova_compute[233724]: 2025-11-29 08:34:03.448 233728 DEBUG nova.virt.libvirt.driver [None req-1ffb5ee8-c517-4011-bd83-5a52f460a8fe 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Attempting to detach device vdc from instance 64f6896c-17f2-4ceb-98b9-50a541c98b7b from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:34:03 np0005539552 nova_compute[233724]: 2025-11-29 08:34:03.448 233728 DEBUG nova.virt.libvirt.guest [None req-1ffb5ee8-c517-4011-bd83-5a52f460a8fe 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:34:03 np0005539552 nova_compute[233724]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:34:03 np0005539552 nova_compute[233724]:  <source protocol="rbd" name="volumes/volume-371e8431-582d-4b80-9978-50a1e275178d">
Nov 29 03:34:03 np0005539552 nova_compute[233724]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:34:03 np0005539552 nova_compute[233724]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:34:03 np0005539552 nova_compute[233724]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:34:03 np0005539552 nova_compute[233724]:  </source>
Nov 29 03:34:03 np0005539552 nova_compute[233724]:  <target dev="vdc" bus="virtio"/>
Nov 29 03:34:03 np0005539552 nova_compute[233724]:  <serial>371e8431-582d-4b80-9978-50a1e275178d</serial>
Nov 29 03:34:03 np0005539552 nova_compute[233724]:  <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Nov 29 03:34:03 np0005539552 nova_compute[233724]: </disk>
Nov 29 03:34:03 np0005539552 nova_compute[233724]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:34:03 np0005539552 nova_compute[233724]: 2025-11-29 08:34:03.459 233728 INFO nova.virt.libvirt.driver [None req-1ffb5ee8-c517-4011-bd83-5a52f460a8fe 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Successfully detached device vdc from instance 64f6896c-17f2-4ceb-98b9-50a541c98b7b from the persistent domain config.#033[00m
Nov 29 03:34:03 np0005539552 nova_compute[233724]: 2025-11-29 08:34:03.459 233728 DEBUG nova.virt.libvirt.driver [None req-1ffb5ee8-c517-4011-bd83-5a52f460a8fe 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] (1/8): Attempting to detach device vdc with device alias virtio-disk2 from instance 64f6896c-17f2-4ceb-98b9-50a541c98b7b from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 29 03:34:03 np0005539552 nova_compute[233724]: 2025-11-29 08:34:03.459 233728 DEBUG nova.virt.libvirt.guest [None req-1ffb5ee8-c517-4011-bd83-5a52f460a8fe 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:34:03 np0005539552 nova_compute[233724]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:34:03 np0005539552 nova_compute[233724]:  <source protocol="rbd" name="volumes/volume-371e8431-582d-4b80-9978-50a1e275178d">
Nov 29 03:34:03 np0005539552 nova_compute[233724]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:34:03 np0005539552 nova_compute[233724]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:34:03 np0005539552 nova_compute[233724]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:34:03 np0005539552 nova_compute[233724]:  </source>
Nov 29 03:34:03 np0005539552 nova_compute[233724]:  <target dev="vdc" bus="virtio"/>
Nov 29 03:34:03 np0005539552 nova_compute[233724]:  <serial>371e8431-582d-4b80-9978-50a1e275178d</serial>
Nov 29 03:34:03 np0005539552 nova_compute[233724]:  <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Nov 29 03:34:03 np0005539552 nova_compute[233724]: </disk>
Nov 29 03:34:03 np0005539552 nova_compute[233724]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:34:03 np0005539552 nova_compute[233724]: 2025-11-29 08:34:03.513 233728 DEBUG nova.virt.libvirt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Received event <DeviceRemovedEvent: 1764405243.513498, 64f6896c-17f2-4ceb-98b9-50a541c98b7b => virtio-disk2> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 29 03:34:03 np0005539552 nova_compute[233724]: 2025-11-29 08:34:03.515 233728 DEBUG nova.virt.libvirt.driver [None req-1ffb5ee8-c517-4011-bd83-5a52f460a8fe 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Start waiting for the detach event from libvirt for device vdc with device alias virtio-disk2 for instance 64f6896c-17f2-4ceb-98b9-50a541c98b7b _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 29 03:34:03 np0005539552 nova_compute[233724]: 2025-11-29 08:34:03.517 233728 INFO nova.virt.libvirt.driver [None req-1ffb5ee8-c517-4011-bd83-5a52f460a8fe 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Successfully detached device vdc from instance 64f6896c-17f2-4ceb-98b9-50a541c98b7b from the live domain config.#033[00m
Nov 29 03:34:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e361 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:03.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:03 np0005539552 nova_compute[233724]: 2025-11-29 08:34:03.851 233728 DEBUG nova.objects.instance [None req-1ffb5ee8-c517-4011-bd83-5a52f460a8fe 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lazy-loading 'flavor' on Instance uuid 64f6896c-17f2-4ceb-98b9-50a541c98b7b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:34:03 np0005539552 nova_compute[233724]: 2025-11-29 08:34:03.897 233728 DEBUG oslo_concurrency.lockutils [None req-1ffb5ee8-c517-4011-bd83-5a52f460a8fe 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "64f6896c-17f2-4ceb-98b9-50a541c98b7b" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:34:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:04.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:34:05 np0005539552 nova_compute[233724]: 2025-11-29 08:34:05.384 233728 DEBUG oslo_concurrency.lockutils [None req-564398a2-829e-4469-9061-b556e453c303 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Acquiring lock "64f6896c-17f2-4ceb-98b9-50a541c98b7b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:05 np0005539552 nova_compute[233724]: 2025-11-29 08:34:05.384 233728 DEBUG oslo_concurrency.lockutils [None req-564398a2-829e-4469-9061-b556e453c303 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "64f6896c-17f2-4ceb-98b9-50a541c98b7b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:05 np0005539552 nova_compute[233724]: 2025-11-29 08:34:05.384 233728 DEBUG oslo_concurrency.lockutils [None req-564398a2-829e-4469-9061-b556e453c303 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Acquiring lock "64f6896c-17f2-4ceb-98b9-50a541c98b7b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:05 np0005539552 nova_compute[233724]: 2025-11-29 08:34:05.385 233728 DEBUG oslo_concurrency.lockutils [None req-564398a2-829e-4469-9061-b556e453c303 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "64f6896c-17f2-4ceb-98b9-50a541c98b7b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:05 np0005539552 nova_compute[233724]: 2025-11-29 08:34:05.385 233728 DEBUG oslo_concurrency.lockutils [None req-564398a2-829e-4469-9061-b556e453c303 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "64f6896c-17f2-4ceb-98b9-50a541c98b7b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:05 np0005539552 nova_compute[233724]: 2025-11-29 08:34:05.386 233728 INFO nova.compute.manager [None req-564398a2-829e-4469-9061-b556e453c303 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Terminating instance#033[00m
Nov 29 03:34:05 np0005539552 nova_compute[233724]: 2025-11-29 08:34:05.386 233728 DEBUG nova.compute.manager [None req-564398a2-829e-4469-9061-b556e453c303 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:34:05 np0005539552 kernel: tapaa2f5192-1c (unregistering): left promiscuous mode
Nov 29 03:34:05 np0005539552 NetworkManager[48926]: <info>  [1764405245.4547] device (tapaa2f5192-1c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:34:05 np0005539552 nova_compute[233724]: 2025-11-29 08:34:05.458 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:05 np0005539552 nova_compute[233724]: 2025-11-29 08:34:05.460 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:05 np0005539552 ovn_controller[133798]: 2025-11-29T08:34:05Z|00787|binding|INFO|Releasing lport aa2f5192-1c39-497e-8a7b-31d50bd48eb7 from this chassis (sb_readonly=0)
Nov 29 03:34:05 np0005539552 ovn_controller[133798]: 2025-11-29T08:34:05Z|00788|binding|INFO|Setting lport aa2f5192-1c39-497e-8a7b-31d50bd48eb7 down in Southbound
Nov 29 03:34:05 np0005539552 ovn_controller[133798]: 2025-11-29T08:34:05Z|00789|binding|INFO|Removing iface tapaa2f5192-1c ovn-installed in OVS
Nov 29 03:34:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:05.467 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:dd:70 10.100.0.12'], port_security=['fa:16:3e:43:dd:70 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '64f6896c-17f2-4ceb-98b9-50a541c98b7b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d6d35cfb-cc41-4788-977c-b8e5140795a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1981e9617628491f938ef0ef01c061c5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e0fe8682-2627-4ea9-b1b2-e4f9229a87b3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.221'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4f49e26a-f1b7-44a1-8f75-9c7ae476aa0d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=aa2f5192-1c39-497e-8a7b-31d50bd48eb7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:34:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:05.468 143400 INFO neutron.agent.ovn.metadata.agent [-] Port aa2f5192-1c39-497e-8a7b-31d50bd48eb7 in datapath d6d35cfb-cc41-4788-977c-b8e5140795a0 unbound from our chassis#033[00m
Nov 29 03:34:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:05.470 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d6d35cfb-cc41-4788-977c-b8e5140795a0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:34:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:05.471 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[06b94ec6-f869-4162-bc20-8432a84be5fe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:05.471 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0 namespace which is not needed anymore#033[00m
Nov 29 03:34:05 np0005539552 nova_compute[233724]: 2025-11-29 08:34:05.493 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:05 np0005539552 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d000000a8.scope: Deactivated successfully.
Nov 29 03:34:05 np0005539552 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d000000a8.scope: Consumed 16.156s CPU time.
Nov 29 03:34:05 np0005539552 systemd-machined[196379]: Machine qemu-78-instance-000000a8 terminated.
Nov 29 03:34:05 np0005539552 nova_compute[233724]: 2025-11-29 08:34:05.618 233728 INFO nova.virt.libvirt.driver [-] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Instance destroyed successfully.#033[00m
Nov 29 03:34:05 np0005539552 nova_compute[233724]: 2025-11-29 08:34:05.621 233728 DEBUG nova.objects.instance [None req-564398a2-829e-4469-9061-b556e453c303 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lazy-loading 'resources' on Instance uuid 64f6896c-17f2-4ceb-98b9-50a541c98b7b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:34:05 np0005539552 neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0[303410]: [NOTICE]   (303414) : haproxy version is 2.8.14-c23fe91
Nov 29 03:34:05 np0005539552 neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0[303410]: [NOTICE]   (303414) : path to executable is /usr/sbin/haproxy
Nov 29 03:34:05 np0005539552 neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0[303410]: [WARNING]  (303414) : Exiting Master process...
Nov 29 03:34:05 np0005539552 neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0[303410]: [ALERT]    (303414) : Current worker (303416) exited with code 143 (Terminated)
Nov 29 03:34:05 np0005539552 neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0[303410]: [WARNING]  (303414) : All workers exited. Exiting... (0)
Nov 29 03:34:05 np0005539552 systemd[1]: libpod-0cabf8fde97e5e132be4d210e7654a6fff6155e2ee5327239bc7c9a4bd105838.scope: Deactivated successfully.
Nov 29 03:34:05 np0005539552 nova_compute[233724]: 2025-11-29 08:34:05.647 233728 DEBUG nova.virt.libvirt.vif [None req-564398a2-829e-4469-9061-b556e453c303 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:32:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-1071799825',display_name='tempest-AttachVolumeTestJSON-server-1071799825',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-1071799825',id=168,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH8z7rrPeKOEKI612gpBMVEvIGpH3EMiF+hWdu6tWRiUAF9IFXNb+B4J5+W6qT7uDPKVKxau5gwrOF36u0kpS+8En2biuDD+O0UgFddmbT40+04wXSPyzWQWr4KYgMABrA==',key_name='tempest-keypair-1751719941',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:33:12Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1981e9617628491f938ef0ef01c061c5',ramdisk_id='',reservation_id='r-zqu0onmi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeTestJSON-169198681',owner_user_name='tempest-AttachVolumeTestJSON-169198681-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:33:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5b0fe4d78df74554a3a5875ab629d59c',uuid=64f6896c-17f2-4ceb-98b9-50a541c98b7b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "aa2f5192-1c39-497e-8a7b-31d50bd48eb7", "address": "fa:16:3e:43:dd:70", "network": {"id": "d6d35cfb-cc41-4788-977c-b8e5140795a0", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1510670722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1981e9617628491f938ef0ef01c061c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa2f5192-1c", "ovs_interfaceid": "aa2f5192-1c39-497e-8a7b-31d50bd48eb7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:34:05 np0005539552 nova_compute[233724]: 2025-11-29 08:34:05.647 233728 DEBUG nova.network.os_vif_util [None req-564398a2-829e-4469-9061-b556e453c303 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Converting VIF {"id": "aa2f5192-1c39-497e-8a7b-31d50bd48eb7", "address": "fa:16:3e:43:dd:70", "network": {"id": "d6d35cfb-cc41-4788-977c-b8e5140795a0", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1510670722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1981e9617628491f938ef0ef01c061c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa2f5192-1c", "ovs_interfaceid": "aa2f5192-1c39-497e-8a7b-31d50bd48eb7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:34:05 np0005539552 nova_compute[233724]: 2025-11-29 08:34:05.648 233728 DEBUG nova.network.os_vif_util [None req-564398a2-829e-4469-9061-b556e453c303 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:43:dd:70,bridge_name='br-int',has_traffic_filtering=True,id=aa2f5192-1c39-497e-8a7b-31d50bd48eb7,network=Network(d6d35cfb-cc41-4788-977c-b8e5140795a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa2f5192-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:34:05 np0005539552 nova_compute[233724]: 2025-11-29 08:34:05.648 233728 DEBUG os_vif [None req-564398a2-829e-4469-9061-b556e453c303 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:43:dd:70,bridge_name='br-int',has_traffic_filtering=True,id=aa2f5192-1c39-497e-8a7b-31d50bd48eb7,network=Network(d6d35cfb-cc41-4788-977c-b8e5140795a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa2f5192-1c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:34:05 np0005539552 nova_compute[233724]: 2025-11-29 08:34:05.650 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:05 np0005539552 nova_compute[233724]: 2025-11-29 08:34:05.650 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa2f5192-1c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:05 np0005539552 nova_compute[233724]: 2025-11-29 08:34:05.652 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:05 np0005539552 nova_compute[233724]: 2025-11-29 08:34:05.654 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:34:05 np0005539552 podman[305098]: 2025-11-29 08:34:05.654612727 +0000 UTC m=+0.064783624 container died 0cabf8fde97e5e132be4d210e7654a6fff6155e2ee5327239bc7c9a4bd105838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0)
Nov 29 03:34:05 np0005539552 nova_compute[233724]: 2025-11-29 08:34:05.656 233728 INFO os_vif [None req-564398a2-829e-4469-9061-b556e453c303 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:43:dd:70,bridge_name='br-int',has_traffic_filtering=True,id=aa2f5192-1c39-497e-8a7b-31d50bd48eb7,network=Network(d6d35cfb-cc41-4788-977c-b8e5140795a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa2f5192-1c')#033[00m
Nov 29 03:34:05 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0cabf8fde97e5e132be4d210e7654a6fff6155e2ee5327239bc7c9a4bd105838-userdata-shm.mount: Deactivated successfully.
Nov 29 03:34:05 np0005539552 systemd[1]: var-lib-containers-storage-overlay-0d3c85f93d1106a4721e65968ec806d7d0ad6098df5184b122729b9ad667c9af-merged.mount: Deactivated successfully.
Nov 29 03:34:05 np0005539552 podman[305098]: 2025-11-29 08:34:05.695509037 +0000 UTC m=+0.105679934 container cleanup 0cabf8fde97e5e132be4d210e7654a6fff6155e2ee5327239bc7c9a4bd105838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:34:05 np0005539552 systemd[1]: libpod-conmon-0cabf8fde97e5e132be4d210e7654a6fff6155e2ee5327239bc7c9a4bd105838.scope: Deactivated successfully.
Nov 29 03:34:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:05.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:05 np0005539552 podman[305151]: 2025-11-29 08:34:05.751597986 +0000 UTC m=+0.037179891 container remove 0cabf8fde97e5e132be4d210e7654a6fff6155e2ee5327239bc7c9a4bd105838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:34:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:05.758 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4b105902-c463-4866-9592-20a1c9f130fe]: (4, ('Sat Nov 29 08:34:05 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0 (0cabf8fde97e5e132be4d210e7654a6fff6155e2ee5327239bc7c9a4bd105838)\n0cabf8fde97e5e132be4d210e7654a6fff6155e2ee5327239bc7c9a4bd105838\nSat Nov 29 08:34:05 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0 (0cabf8fde97e5e132be4d210e7654a6fff6155e2ee5327239bc7c9a4bd105838)\n0cabf8fde97e5e132be4d210e7654a6fff6155e2ee5327239bc7c9a4bd105838\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:05.759 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[52394be5-fe3e-4033-b682-54a4e64e29f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:05.760 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd6d35cfb-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:05 np0005539552 kernel: tapd6d35cfb-c0: left promiscuous mode
Nov 29 03:34:05 np0005539552 nova_compute[233724]: 2025-11-29 08:34:05.765 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:05 np0005539552 nova_compute[233724]: 2025-11-29 08:34:05.779 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:05.782 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[48a2e563-2213-44c4-895e-3487dfd29063]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:05.798 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[81515e82-40e6-4121-aebf-7bf086a54bcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:05.799 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c3fc4e68-2470-4b17-8374-73ac9d43fa11]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:05.816 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[987b9dc5-54f6-4327-a722-b299e6c5ae0a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 823662, 'reachable_time': 23022, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305167, 'error': None, 'target': 'ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:05.818 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d6d35cfb-cc41-4788-977c-b8e5140795a0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:34:05 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:05.818 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[9b99de5d-07c9-4d96-aaf7-fb77127a73ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:05 np0005539552 systemd[1]: run-netns-ovnmeta\x2dd6d35cfb\x2dcc41\x2d4788\x2d977c\x2db8e5140795a0.mount: Deactivated successfully.
Nov 29 03:34:05 np0005539552 nova_compute[233724]: 2025-11-29 08:34:05.948 233728 DEBUG nova.compute.manager [req-4c114480-ccd6-4165-8cd5-8e21ca83caca req-a9ab7e99-2a2b-4f59-8469-b273001d9c7a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Received event network-vif-unplugged-aa2f5192-1c39-497e-8a7b-31d50bd48eb7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:34:05 np0005539552 nova_compute[233724]: 2025-11-29 08:34:05.948 233728 DEBUG oslo_concurrency.lockutils [req-4c114480-ccd6-4165-8cd5-8e21ca83caca req-a9ab7e99-2a2b-4f59-8469-b273001d9c7a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "64f6896c-17f2-4ceb-98b9-50a541c98b7b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:05 np0005539552 nova_compute[233724]: 2025-11-29 08:34:05.948 233728 DEBUG oslo_concurrency.lockutils [req-4c114480-ccd6-4165-8cd5-8e21ca83caca req-a9ab7e99-2a2b-4f59-8469-b273001d9c7a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "64f6896c-17f2-4ceb-98b9-50a541c98b7b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:05 np0005539552 nova_compute[233724]: 2025-11-29 08:34:05.948 233728 DEBUG oslo_concurrency.lockutils [req-4c114480-ccd6-4165-8cd5-8e21ca83caca req-a9ab7e99-2a2b-4f59-8469-b273001d9c7a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "64f6896c-17f2-4ceb-98b9-50a541c98b7b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:05 np0005539552 nova_compute[233724]: 2025-11-29 08:34:05.949 233728 DEBUG nova.compute.manager [req-4c114480-ccd6-4165-8cd5-8e21ca83caca req-a9ab7e99-2a2b-4f59-8469-b273001d9c7a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] No waiting events found dispatching network-vif-unplugged-aa2f5192-1c39-497e-8a7b-31d50bd48eb7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:34:05 np0005539552 nova_compute[233724]: 2025-11-29 08:34:05.949 233728 DEBUG nova.compute.manager [req-4c114480-ccd6-4165-8cd5-8e21ca83caca req-a9ab7e99-2a2b-4f59-8469-b273001d9c7a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Received event network-vif-unplugged-aa2f5192-1c39-497e-8a7b-31d50bd48eb7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:34:06 np0005539552 nova_compute[233724]: 2025-11-29 08:34:06.086 233728 INFO nova.virt.libvirt.driver [None req-564398a2-829e-4469-9061-b556e453c303 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Deleting instance files /var/lib/nova/instances/64f6896c-17f2-4ceb-98b9-50a541c98b7b_del#033[00m
Nov 29 03:34:06 np0005539552 nova_compute[233724]: 2025-11-29 08:34:06.087 233728 INFO nova.virt.libvirt.driver [None req-564398a2-829e-4469-9061-b556e453c303 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Deletion of /var/lib/nova/instances/64f6896c-17f2-4ceb-98b9-50a541c98b7b_del complete#033[00m
Nov 29 03:34:06 np0005539552 nova_compute[233724]: 2025-11-29 08:34:06.158 233728 INFO nova.compute.manager [None req-564398a2-829e-4469-9061-b556e453c303 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Took 0.77 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:34:06 np0005539552 nova_compute[233724]: 2025-11-29 08:34:06.159 233728 DEBUG oslo.service.loopingcall [None req-564398a2-829e-4469-9061-b556e453c303 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:34:06 np0005539552 nova_compute[233724]: 2025-11-29 08:34:06.161 233728 DEBUG nova.compute.manager [-] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:34:06 np0005539552 nova_compute[233724]: 2025-11-29 08:34:06.161 233728 DEBUG nova.network.neutron [-] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:34:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:06.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:06 np0005539552 nova_compute[233724]: 2025-11-29 08:34:06.894 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:34:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:07.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:34:08 np0005539552 nova_compute[233724]: 2025-11-29 08:34:08.081 233728 DEBUG nova.compute.manager [req-4935a155-3506-48f4-a253-3b7d38917c33 req-649246c9-7dee-4a44-b65b-1c4ea8f6f9b6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Received event network-vif-plugged-aa2f5192-1c39-497e-8a7b-31d50bd48eb7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:34:08 np0005539552 nova_compute[233724]: 2025-11-29 08:34:08.081 233728 DEBUG oslo_concurrency.lockutils [req-4935a155-3506-48f4-a253-3b7d38917c33 req-649246c9-7dee-4a44-b65b-1c4ea8f6f9b6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "64f6896c-17f2-4ceb-98b9-50a541c98b7b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:08 np0005539552 nova_compute[233724]: 2025-11-29 08:34:08.081 233728 DEBUG oslo_concurrency.lockutils [req-4935a155-3506-48f4-a253-3b7d38917c33 req-649246c9-7dee-4a44-b65b-1c4ea8f6f9b6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "64f6896c-17f2-4ceb-98b9-50a541c98b7b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:08 np0005539552 nova_compute[233724]: 2025-11-29 08:34:08.082 233728 DEBUG oslo_concurrency.lockutils [req-4935a155-3506-48f4-a253-3b7d38917c33 req-649246c9-7dee-4a44-b65b-1c4ea8f6f9b6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "64f6896c-17f2-4ceb-98b9-50a541c98b7b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:08 np0005539552 nova_compute[233724]: 2025-11-29 08:34:08.082 233728 DEBUG nova.compute.manager [req-4935a155-3506-48f4-a253-3b7d38917c33 req-649246c9-7dee-4a44-b65b-1c4ea8f6f9b6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] No waiting events found dispatching network-vif-plugged-aa2f5192-1c39-497e-8a7b-31d50bd48eb7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:34:08 np0005539552 nova_compute[233724]: 2025-11-29 08:34:08.082 233728 WARNING nova.compute.manager [req-4935a155-3506-48f4-a253-3b7d38917c33 req-649246c9-7dee-4a44-b65b-1c4ea8f6f9b6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Received unexpected event network-vif-plugged-aa2f5192-1c39-497e-8a7b-31d50bd48eb7 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:34:08 np0005539552 nova_compute[233724]: 2025-11-29 08:34:08.403 233728 DEBUG nova.network.neutron [-] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:34:08 np0005539552 nova_compute[233724]: 2025-11-29 08:34:08.424 233728 INFO nova.compute.manager [-] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Took 2.26 seconds to deallocate network for instance.#033[00m
Nov 29 03:34:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:08.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:08 np0005539552 nova_compute[233724]: 2025-11-29 08:34:08.508 233728 DEBUG oslo_concurrency.lockutils [None req-564398a2-829e-4469-9061-b556e453c303 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:08 np0005539552 nova_compute[233724]: 2025-11-29 08:34:08.509 233728 DEBUG oslo_concurrency.lockutils [None req-564398a2-829e-4469-9061-b556e453c303 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:08 np0005539552 nova_compute[233724]: 2025-11-29 08:34:08.545 233728 DEBUG nova.compute.manager [req-4c1e1735-e7f7-4f79-8ff3-4f9b45aab156 req-aa530e0d-f014-43f4-ab9c-c20e5f08604e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Received event network-vif-deleted-aa2f5192-1c39-497e-8a7b-31d50bd48eb7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:34:08 np0005539552 nova_compute[233724]: 2025-11-29 08:34:08.600 233728 DEBUG oslo_concurrency.processutils [None req-564398a2-829e-4469-9061-b556e453c303 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e361 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:09 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:34:09 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3197720191' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:34:09 np0005539552 nova_compute[233724]: 2025-11-29 08:34:09.055 233728 DEBUG oslo_concurrency.processutils [None req-564398a2-829e-4469-9061-b556e453c303 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:09 np0005539552 nova_compute[233724]: 2025-11-29 08:34:09.064 233728 DEBUG nova.compute.provider_tree [None req-564398a2-829e-4469-9061-b556e453c303 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:34:09 np0005539552 nova_compute[233724]: 2025-11-29 08:34:09.082 233728 DEBUG nova.scheduler.client.report [None req-564398a2-829e-4469-9061-b556e453c303 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:34:09 np0005539552 nova_compute[233724]: 2025-11-29 08:34:09.111 233728 DEBUG oslo_concurrency.lockutils [None req-564398a2-829e-4469-9061-b556e453c303 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:09 np0005539552 nova_compute[233724]: 2025-11-29 08:34:09.140 233728 INFO nova.scheduler.client.report [None req-564398a2-829e-4469-9061-b556e453c303 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Deleted allocations for instance 64f6896c-17f2-4ceb-98b9-50a541c98b7b#033[00m
Nov 29 03:34:09 np0005539552 nova_compute[233724]: 2025-11-29 08:34:09.229 233728 DEBUG oslo_concurrency.lockutils [None req-564398a2-829e-4469-9061-b556e453c303 5b0fe4d78df74554a3a5875ab629d59c 1981e9617628491f938ef0ef01c061c5 - - default default] Lock "64f6896c-17f2-4ceb-98b9-50a541c98b7b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.845s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:09 np0005539552 ovn_controller[133798]: 2025-11-29T08:34:09Z|00100|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f9:5b:5b 10.100.0.28
Nov 29 03:34:09 np0005539552 ovn_controller[133798]: 2025-11-29T08:34:09Z|00101|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f9:5b:5b 10.100.0.28
Nov 29 03:34:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:09.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:10.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:10 np0005539552 nova_compute[233724]: 2025-11-29 08:34:10.653 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:11.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:11 np0005539552 nova_compute[233724]: 2025-11-29 08:34:11.895 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:12 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:12.010 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:34:12 np0005539552 nova_compute[233724]: 2025-11-29 08:34:12.010 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:12 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:12.011 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:34:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:12.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e361 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:13.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:14.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:14 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:34:14 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3777408999' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:34:14 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:34:14 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3777408999' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:34:15 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:15.013 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:15 np0005539552 nova_compute[233724]: 2025-11-29 08:34:15.656 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:15.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:34:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:16.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:34:16 np0005539552 nova_compute[233724]: 2025-11-29 08:34:16.900 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:34:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:17.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:34:18 np0005539552 nova_compute[233724]: 2025-11-29 08:34:18.311 233728 DEBUG oslo_concurrency.lockutils [None req-52e3aed8-5e2e-48e3-93a0-df74dc27f815 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "1aa3620a-ecca-49bf-ae39-50911ab7a562" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:18 np0005539552 nova_compute[233724]: 2025-11-29 08:34:18.311 233728 DEBUG oslo_concurrency.lockutils [None req-52e3aed8-5e2e-48e3-93a0-df74dc27f815 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "1aa3620a-ecca-49bf-ae39-50911ab7a562" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:18 np0005539552 nova_compute[233724]: 2025-11-29 08:34:18.312 233728 DEBUG oslo_concurrency.lockutils [None req-52e3aed8-5e2e-48e3-93a0-df74dc27f815 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "1aa3620a-ecca-49bf-ae39-50911ab7a562-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:18 np0005539552 nova_compute[233724]: 2025-11-29 08:34:18.312 233728 DEBUG oslo_concurrency.lockutils [None req-52e3aed8-5e2e-48e3-93a0-df74dc27f815 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "1aa3620a-ecca-49bf-ae39-50911ab7a562-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:18 np0005539552 nova_compute[233724]: 2025-11-29 08:34:18.312 233728 DEBUG oslo_concurrency.lockutils [None req-52e3aed8-5e2e-48e3-93a0-df74dc27f815 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "1aa3620a-ecca-49bf-ae39-50911ab7a562-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:18 np0005539552 nova_compute[233724]: 2025-11-29 08:34:18.314 233728 INFO nova.compute.manager [None req-52e3aed8-5e2e-48e3-93a0-df74dc27f815 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Terminating instance#033[00m
Nov 29 03:34:18 np0005539552 nova_compute[233724]: 2025-11-29 08:34:18.317 233728 DEBUG nova.compute.manager [None req-52e3aed8-5e2e-48e3-93a0-df74dc27f815 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:34:18 np0005539552 kernel: tap2ac4854b-6a (unregistering): left promiscuous mode
Nov 29 03:34:18 np0005539552 NetworkManager[48926]: <info>  [1764405258.4008] device (tap2ac4854b-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:34:18 np0005539552 ovn_controller[133798]: 2025-11-29T08:34:18Z|00790|binding|INFO|Releasing lport 2ac4854b-6a8d-452d-a59a-40ea08ff4293 from this chassis (sb_readonly=0)
Nov 29 03:34:18 np0005539552 ovn_controller[133798]: 2025-11-29T08:34:18Z|00791|binding|INFO|Setting lport 2ac4854b-6a8d-452d-a59a-40ea08ff4293 down in Southbound
Nov 29 03:34:18 np0005539552 nova_compute[233724]: 2025-11-29 08:34:18.410 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:18 np0005539552 ovn_controller[133798]: 2025-11-29T08:34:18Z|00792|binding|INFO|Removing iface tap2ac4854b-6a ovn-installed in OVS
Nov 29 03:34:18 np0005539552 nova_compute[233724]: 2025-11-29 08:34:18.413 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:18.421 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:5b:5b 10.100.0.28'], port_security=['fa:16:3e:f9:5b:5b 10.100.0.28'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.28/28', 'neutron:device_id': '1aa3620a-ecca-49bf-ae39-50911ab7a562', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bc08e02b-3e4b-4450-90c6-2c4703444c23', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0471b9b208874403aa3f0fbe7504ad19', 'neutron:revision_number': '4', 'neutron:security_group_ids': '156d5fe0-779f-4535-88d0-c8b31b6804e2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c6811a8-ed3d-495c-b35f-bf9c02353bd9, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=2ac4854b-6a8d-452d-a59a-40ea08ff4293) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:34:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:18.422 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 2ac4854b-6a8d-452d-a59a-40ea08ff4293 in datapath bc08e02b-3e4b-4450-90c6-2c4703444c23 unbound from our chassis#033[00m
Nov 29 03:34:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:18.424 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bc08e02b-3e4b-4450-90c6-2c4703444c23, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:34:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:18.426 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3e5a253a-f813-423c-95e1-6b44203cb6c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:18.427 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bc08e02b-3e4b-4450-90c6-2c4703444c23 namespace which is not needed anymore#033[00m
Nov 29 03:34:18 np0005539552 nova_compute[233724]: 2025-11-29 08:34:18.444 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:18 np0005539552 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d000000a9.scope: Deactivated successfully.
Nov 29 03:34:18 np0005539552 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d000000a9.scope: Consumed 15.300s CPU time.
Nov 29 03:34:18 np0005539552 systemd-machined[196379]: Machine qemu-80-instance-000000a9 terminated.
Nov 29 03:34:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:18.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:18 np0005539552 nova_compute[233724]: 2025-11-29 08:34:18.566 233728 INFO nova.virt.libvirt.driver [-] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Instance destroyed successfully.#033[00m
Nov 29 03:34:18 np0005539552 nova_compute[233724]: 2025-11-29 08:34:18.566 233728 DEBUG nova.objects.instance [None req-52e3aed8-5e2e-48e3-93a0-df74dc27f815 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lazy-loading 'resources' on Instance uuid 1aa3620a-ecca-49bf-ae39-50911ab7a562 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:34:18 np0005539552 nova_compute[233724]: 2025-11-29 08:34:18.601 233728 DEBUG nova.virt.libvirt.vif [None req-52e3aed8-5e2e-48e3-93a0-df74dc27f815 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:33:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1110573693',display_name='tempest-TestNetworkBasicOps-server-1110573693',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1110573693',id=169,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMeMAFLC8UXHYeqRMsgo4vCc+yHZb/bXPuLZbJ8zLTBrWGcwTomEZLuIN/ibqbhPHZF6cN972NcuNJo5BPXvofJKfDYsF2UdVA78jKWFBjJwJk+YknjzIiTx+qoa8vh0Mg==',key_name='tempest-TestNetworkBasicOps-280565887',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:33:54Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-tn0poxjr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:33:54Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=1aa3620a-ecca-49bf-ae39-50911ab7a562,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2ac4854b-6a8d-452d-a59a-40ea08ff4293", "address": "fa:16:3e:f9:5b:5b", "network": {"id": "bc08e02b-3e4b-4450-90c6-2c4703444c23", "bridge": "br-int", "label": "tempest-network-smoke--1498358289", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ac4854b-6a", "ovs_interfaceid": "2ac4854b-6a8d-452d-a59a-40ea08ff4293", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:34:18 np0005539552 nova_compute[233724]: 2025-11-29 08:34:18.603 233728 DEBUG nova.network.os_vif_util [None req-52e3aed8-5e2e-48e3-93a0-df74dc27f815 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "2ac4854b-6a8d-452d-a59a-40ea08ff4293", "address": "fa:16:3e:f9:5b:5b", "network": {"id": "bc08e02b-3e4b-4450-90c6-2c4703444c23", "bridge": "br-int", "label": "tempest-network-smoke--1498358289", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ac4854b-6a", "ovs_interfaceid": "2ac4854b-6a8d-452d-a59a-40ea08ff4293", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:34:18 np0005539552 nova_compute[233724]: 2025-11-29 08:34:18.605 233728 DEBUG nova.network.os_vif_util [None req-52e3aed8-5e2e-48e3-93a0-df74dc27f815 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:5b:5b,bridge_name='br-int',has_traffic_filtering=True,id=2ac4854b-6a8d-452d-a59a-40ea08ff4293,network=Network(bc08e02b-3e4b-4450-90c6-2c4703444c23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ac4854b-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:34:18 np0005539552 nova_compute[233724]: 2025-11-29 08:34:18.605 233728 DEBUG os_vif [None req-52e3aed8-5e2e-48e3-93a0-df74dc27f815 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:5b:5b,bridge_name='br-int',has_traffic_filtering=True,id=2ac4854b-6a8d-452d-a59a-40ea08ff4293,network=Network(bc08e02b-3e4b-4450-90c6-2c4703444c23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ac4854b-6a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:34:18 np0005539552 nova_compute[233724]: 2025-11-29 08:34:18.608 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:18 np0005539552 nova_compute[233724]: 2025-11-29 08:34:18.609 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ac4854b-6a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:18 np0005539552 nova_compute[233724]: 2025-11-29 08:34:18.611 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:18 np0005539552 nova_compute[233724]: 2025-11-29 08:34:18.614 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:34:18 np0005539552 nova_compute[233724]: 2025-11-29 08:34:18.617 233728 INFO os_vif [None req-52e3aed8-5e2e-48e3-93a0-df74dc27f815 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:5b:5b,bridge_name='br-int',has_traffic_filtering=True,id=2ac4854b-6a8d-452d-a59a-40ea08ff4293,network=Network(bc08e02b-3e4b-4450-90c6-2c4703444c23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ac4854b-6a')#033[00m
Nov 29 03:34:18 np0005539552 neutron-haproxy-ovnmeta-bc08e02b-3e4b-4450-90c6-2c4703444c23[304863]: [NOTICE]   (304930) : haproxy version is 2.8.14-c23fe91
Nov 29 03:34:18 np0005539552 neutron-haproxy-ovnmeta-bc08e02b-3e4b-4450-90c6-2c4703444c23[304863]: [NOTICE]   (304930) : path to executable is /usr/sbin/haproxy
Nov 29 03:34:18 np0005539552 neutron-haproxy-ovnmeta-bc08e02b-3e4b-4450-90c6-2c4703444c23[304863]: [WARNING]  (304930) : Exiting Master process...
Nov 29 03:34:18 np0005539552 neutron-haproxy-ovnmeta-bc08e02b-3e4b-4450-90c6-2c4703444c23[304863]: [WARNING]  (304930) : Exiting Master process...
Nov 29 03:34:18 np0005539552 neutron-haproxy-ovnmeta-bc08e02b-3e4b-4450-90c6-2c4703444c23[304863]: [ALERT]    (304930) : Current worker (304943) exited with code 143 (Terminated)
Nov 29 03:34:18 np0005539552 neutron-haproxy-ovnmeta-bc08e02b-3e4b-4450-90c6-2c4703444c23[304863]: [WARNING]  (304930) : All workers exited. Exiting... (0)
Nov 29 03:34:18 np0005539552 systemd[1]: libpod-59010fc8c5fd38ef2c693b4bbf2c900b35c98af4dec97715060d699b96ba017f.scope: Deactivated successfully.
Nov 29 03:34:18 np0005539552 podman[305227]: 2025-11-29 08:34:18.64456809 +0000 UTC m=+0.063114760 container died 59010fc8c5fd38ef2c693b4bbf2c900b35c98af4dec97715060d699b96ba017f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bc08e02b-3e4b-4450-90c6-2c4703444c23, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:34:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e361 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:18 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-59010fc8c5fd38ef2c693b4bbf2c900b35c98af4dec97715060d699b96ba017f-userdata-shm.mount: Deactivated successfully.
Nov 29 03:34:18 np0005539552 systemd[1]: var-lib-containers-storage-overlay-b7af8dde786ce156f05cd2490d26193cc49a91f43a97c53285411a8a9fec2a67-merged.mount: Deactivated successfully.
Nov 29 03:34:18 np0005539552 podman[305227]: 2025-11-29 08:34:18.695966502 +0000 UTC m=+0.114513152 container cleanup 59010fc8c5fd38ef2c693b4bbf2c900b35c98af4dec97715060d699b96ba017f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bc08e02b-3e4b-4450-90c6-2c4703444c23, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:34:18 np0005539552 systemd[1]: libpod-conmon-59010fc8c5fd38ef2c693b4bbf2c900b35c98af4dec97715060d699b96ba017f.scope: Deactivated successfully.
Nov 29 03:34:18 np0005539552 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 03:34:18 np0005539552 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 03:34:18 np0005539552 podman[305279]: 2025-11-29 08:34:18.791065291 +0000 UTC m=+0.062993416 container remove 59010fc8c5fd38ef2c693b4bbf2c900b35c98af4dec97715060d699b96ba017f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bc08e02b-3e4b-4450-90c6-2c4703444c23, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:34:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:18.801 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[470449ac-22a1-446c-bbf2-739b36f600f4]: (4, ('Sat Nov 29 08:34:18 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bc08e02b-3e4b-4450-90c6-2c4703444c23 (59010fc8c5fd38ef2c693b4bbf2c900b35c98af4dec97715060d699b96ba017f)\n59010fc8c5fd38ef2c693b4bbf2c900b35c98af4dec97715060d699b96ba017f\nSat Nov 29 08:34:18 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bc08e02b-3e4b-4450-90c6-2c4703444c23 (59010fc8c5fd38ef2c693b4bbf2c900b35c98af4dec97715060d699b96ba017f)\n59010fc8c5fd38ef2c693b4bbf2c900b35c98af4dec97715060d699b96ba017f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:18.806 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[9f99c9b8-1437-4726-a199-c873f03dffab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:18.807 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbc08e02b-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:18 np0005539552 nova_compute[233724]: 2025-11-29 08:34:18.810 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:18 np0005539552 kernel: tapbc08e02b-30: left promiscuous mode
Nov 29 03:34:18 np0005539552 nova_compute[233724]: 2025-11-29 08:34:18.831 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:18.834 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f48b6f07-b959-4670-9d02-3ca8161576b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:18.852 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[1f0eb24f-0e9d-4d5e-93d3-b64ce451d8bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:18.855 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d9c5b81a-1f47-4f13-8892-703c8742cc06]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:18 np0005539552 nova_compute[233724]: 2025-11-29 08:34:18.855 233728 DEBUG nova.compute.manager [req-3e33e888-56a1-4c3d-934d-3f703b48de67 req-a0f21959-2d5e-412c-89a0-700f8c0cc741 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Received event network-vif-unplugged-2ac4854b-6a8d-452d-a59a-40ea08ff4293 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:34:18 np0005539552 nova_compute[233724]: 2025-11-29 08:34:18.856 233728 DEBUG oslo_concurrency.lockutils [req-3e33e888-56a1-4c3d-934d-3f703b48de67 req-a0f21959-2d5e-412c-89a0-700f8c0cc741 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "1aa3620a-ecca-49bf-ae39-50911ab7a562-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:18 np0005539552 nova_compute[233724]: 2025-11-29 08:34:18.856 233728 DEBUG oslo_concurrency.lockutils [req-3e33e888-56a1-4c3d-934d-3f703b48de67 req-a0f21959-2d5e-412c-89a0-700f8c0cc741 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1aa3620a-ecca-49bf-ae39-50911ab7a562-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:18 np0005539552 nova_compute[233724]: 2025-11-29 08:34:18.856 233728 DEBUG oslo_concurrency.lockutils [req-3e33e888-56a1-4c3d-934d-3f703b48de67 req-a0f21959-2d5e-412c-89a0-700f8c0cc741 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1aa3620a-ecca-49bf-ae39-50911ab7a562-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:18 np0005539552 nova_compute[233724]: 2025-11-29 08:34:18.856 233728 DEBUG nova.compute.manager [req-3e33e888-56a1-4c3d-934d-3f703b48de67 req-a0f21959-2d5e-412c-89a0-700f8c0cc741 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] No waiting events found dispatching network-vif-unplugged-2ac4854b-6a8d-452d-a59a-40ea08ff4293 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:34:18 np0005539552 nova_compute[233724]: 2025-11-29 08:34:18.857 233728 DEBUG nova.compute.manager [req-3e33e888-56a1-4c3d-934d-3f703b48de67 req-a0f21959-2d5e-412c-89a0-700f8c0cc741 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Received event network-vif-unplugged-2ac4854b-6a8d-452d-a59a-40ea08ff4293 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:34:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:18.872 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a3882183-ce9e-47e4-84ed-78bc7b7c616a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 827903, 'reachable_time': 21341, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305300, 'error': None, 'target': 'ovnmeta-bc08e02b-3e4b-4450-90c6-2c4703444c23', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:18 np0005539552 systemd[1]: run-netns-ovnmeta\x2dbc08e02b\x2d3e4b\x2d4450\x2d90c6\x2d2c4703444c23.mount: Deactivated successfully.
Nov 29 03:34:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:18.876 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bc08e02b-3e4b-4450-90c6-2c4703444c23 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:34:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:18.876 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[a9879f23-e71a-4305-b7b7-2225a6f1326d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:19 np0005539552 nova_compute[233724]: 2025-11-29 08:34:19.213 233728 INFO nova.virt.libvirt.driver [None req-52e3aed8-5e2e-48e3-93a0-df74dc27f815 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Deleting instance files /var/lib/nova/instances/1aa3620a-ecca-49bf-ae39-50911ab7a562_del#033[00m
Nov 29 03:34:19 np0005539552 nova_compute[233724]: 2025-11-29 08:34:19.214 233728 INFO nova.virt.libvirt.driver [None req-52e3aed8-5e2e-48e3-93a0-df74dc27f815 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Deletion of /var/lib/nova/instances/1aa3620a-ecca-49bf-ae39-50911ab7a562_del complete#033[00m
Nov 29 03:34:19 np0005539552 nova_compute[233724]: 2025-11-29 08:34:19.305 233728 INFO nova.compute.manager [None req-52e3aed8-5e2e-48e3-93a0-df74dc27f815 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Took 0.99 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:34:19 np0005539552 nova_compute[233724]: 2025-11-29 08:34:19.306 233728 DEBUG oslo.service.loopingcall [None req-52e3aed8-5e2e-48e3-93a0-df74dc27f815 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:34:19 np0005539552 nova_compute[233724]: 2025-11-29 08:34:19.307 233728 DEBUG nova.compute.manager [-] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:34:19 np0005539552 nova_compute[233724]: 2025-11-29 08:34:19.307 233728 DEBUG nova.network.neutron [-] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:34:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:19.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:20 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e362 e362: 3 total, 3 up, 3 in
Nov 29 03:34:20 np0005539552 nova_compute[233724]: 2025-11-29 08:34:20.258 233728 DEBUG nova.network.neutron [-] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:34:20 np0005539552 nova_compute[233724]: 2025-11-29 08:34:20.285 233728 INFO nova.compute.manager [-] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Took 0.98 seconds to deallocate network for instance.#033[00m
Nov 29 03:34:20 np0005539552 nova_compute[233724]: 2025-11-29 08:34:20.341 233728 DEBUG oslo_concurrency.lockutils [None req-52e3aed8-5e2e-48e3-93a0-df74dc27f815 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:20 np0005539552 nova_compute[233724]: 2025-11-29 08:34:20.342 233728 DEBUG oslo_concurrency.lockutils [None req-52e3aed8-5e2e-48e3-93a0-df74dc27f815 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:20 np0005539552 nova_compute[233724]: 2025-11-29 08:34:20.377 233728 DEBUG nova.compute.manager [req-7a1f5709-45a1-4104-b016-942572958675 req-c6f86f99-b4cc-461e-801b-526f51b0d748 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Received event network-vif-deleted-2ac4854b-6a8d-452d-a59a-40ea08ff4293 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:34:20 np0005539552 nova_compute[233724]: 2025-11-29 08:34:20.400 233728 DEBUG oslo_concurrency.processutils [None req-52e3aed8-5e2e-48e3-93a0-df74dc27f815 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:20.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:20 np0005539552 nova_compute[233724]: 2025-11-29 08:34:20.617 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405245.6152089, 64f6896c-17f2-4ceb-98b9-50a541c98b7b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:34:20 np0005539552 nova_compute[233724]: 2025-11-29 08:34:20.618 233728 INFO nova.compute.manager [-] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:34:20 np0005539552 nova_compute[233724]: 2025-11-29 08:34:20.639 233728 DEBUG nova.compute.manager [None req-2afd8918-5229-4087-93ea-3ece02c836ba - - - - - -] [instance: 64f6896c-17f2-4ceb-98b9-50a541c98b7b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:34:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:20.640 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:20.640 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:20.641 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:20 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:34:20 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1456098968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:34:20 np0005539552 nova_compute[233724]: 2025-11-29 08:34:20.884 233728 DEBUG oslo_concurrency.processutils [None req-52e3aed8-5e2e-48e3-93a0-df74dc27f815 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:20 np0005539552 nova_compute[233724]: 2025-11-29 08:34:20.892 233728 DEBUG nova.compute.provider_tree [None req-52e3aed8-5e2e-48e3-93a0-df74dc27f815 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:34:20 np0005539552 nova_compute[233724]: 2025-11-29 08:34:20.920 233728 DEBUG nova.scheduler.client.report [None req-52e3aed8-5e2e-48e3-93a0-df74dc27f815 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:34:20 np0005539552 nova_compute[233724]: 2025-11-29 08:34:20.943 233728 DEBUG oslo_concurrency.lockutils [None req-52e3aed8-5e2e-48e3-93a0-df74dc27f815 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:20 np0005539552 nova_compute[233724]: 2025-11-29 08:34:20.969 233728 DEBUG nova.compute.manager [req-a6bfe0d4-69ea-4486-9d6f-2d8b220ec5e4 req-cbed75b2-9b87-4408-bfc0-3c972e171779 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Received event network-vif-plugged-2ac4854b-6a8d-452d-a59a-40ea08ff4293 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:34:20 np0005539552 nova_compute[233724]: 2025-11-29 08:34:20.970 233728 DEBUG oslo_concurrency.lockutils [req-a6bfe0d4-69ea-4486-9d6f-2d8b220ec5e4 req-cbed75b2-9b87-4408-bfc0-3c972e171779 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "1aa3620a-ecca-49bf-ae39-50911ab7a562-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:20 np0005539552 nova_compute[233724]: 2025-11-29 08:34:20.970 233728 DEBUG oslo_concurrency.lockutils [req-a6bfe0d4-69ea-4486-9d6f-2d8b220ec5e4 req-cbed75b2-9b87-4408-bfc0-3c972e171779 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1aa3620a-ecca-49bf-ae39-50911ab7a562-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:20 np0005539552 nova_compute[233724]: 2025-11-29 08:34:20.970 233728 DEBUG oslo_concurrency.lockutils [req-a6bfe0d4-69ea-4486-9d6f-2d8b220ec5e4 req-cbed75b2-9b87-4408-bfc0-3c972e171779 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1aa3620a-ecca-49bf-ae39-50911ab7a562-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:20 np0005539552 nova_compute[233724]: 2025-11-29 08:34:20.971 233728 DEBUG nova.compute.manager [req-a6bfe0d4-69ea-4486-9d6f-2d8b220ec5e4 req-cbed75b2-9b87-4408-bfc0-3c972e171779 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] No waiting events found dispatching network-vif-plugged-2ac4854b-6a8d-452d-a59a-40ea08ff4293 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:34:20 np0005539552 nova_compute[233724]: 2025-11-29 08:34:20.971 233728 WARNING nova.compute.manager [req-a6bfe0d4-69ea-4486-9d6f-2d8b220ec5e4 req-cbed75b2-9b87-4408-bfc0-3c972e171779 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Received unexpected event network-vif-plugged-2ac4854b-6a8d-452d-a59a-40ea08ff4293 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:34:20 np0005539552 nova_compute[233724]: 2025-11-29 08:34:20.973 233728 INFO nova.scheduler.client.report [None req-52e3aed8-5e2e-48e3-93a0-df74dc27f815 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Deleted allocations for instance 1aa3620a-ecca-49bf-ae39-50911ab7a562#033[00m
Nov 29 03:34:21 np0005539552 nova_compute[233724]: 2025-11-29 08:34:21.062 233728 DEBUG oslo_concurrency.lockutils [None req-52e3aed8-5e2e-48e3-93a0-df74dc27f815 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "1aa3620a-ecca-49bf-ae39-50911ab7a562" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:21 np0005539552 nova_compute[233724]: 2025-11-29 08:34:21.160 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:21 np0005539552 nova_compute[233724]: 2025-11-29 08:34:21.459 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:21.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:21 np0005539552 nova_compute[233724]: 2025-11-29 08:34:21.906 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:22.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:23 np0005539552 nova_compute[233724]: 2025-11-29 08:34:23.612 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e362 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:23.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:24.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:24 np0005539552 podman[305351]: 2025-11-29 08:34:24.620696761 +0000 UTC m=+0.083901949 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.license=GPLv2)
Nov 29 03:34:24 np0005539552 podman[305352]: 2025-11-29 08:34:24.620914697 +0000 UTC m=+0.078914855 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 29 03:34:24 np0005539552 podman[305353]: 2025-11-29 08:34:24.637950705 +0000 UTC m=+0.098834950 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 03:34:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:34:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:25.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:34:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:26.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:26 np0005539552 nova_compute[233724]: 2025-11-29 08:34:26.909 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e363 e363: 3 total, 3 up, 3 in
Nov 29 03:34:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:27.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:28.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:28 np0005539552 nova_compute[233724]: 2025-11-29 08:34:28.615 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:29.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:34:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:30.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:34:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:34:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:31.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:34:31 np0005539552 nova_compute[233724]: 2025-11-29 08:34:31.911 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:32.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:33 np0005539552 nova_compute[233724]: 2025-11-29 08:34:33.563 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405258.5603902, 1aa3620a-ecca-49bf-ae39-50911ab7a562 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:34:33 np0005539552 nova_compute[233724]: 2025-11-29 08:34:33.564 233728 INFO nova.compute.manager [-] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:34:33 np0005539552 nova_compute[233724]: 2025-11-29 08:34:33.587 233728 DEBUG nova.compute.manager [None req-b94b9fdd-f0d4-48e8-abee-7a991b97f90f - - - - - -] [instance: 1aa3620a-ecca-49bf-ae39-50911ab7a562] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:34:33 np0005539552 nova_compute[233724]: 2025-11-29 08:34:33.617 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:34:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:33.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:34:34 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e364 e364: 3 total, 3 up, 3 in
Nov 29 03:34:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:34.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:35.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:36.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:36 np0005539552 nova_compute[233724]: 2025-11-29 08:34:36.913 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:37.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:37 np0005539552 nova_compute[233724]: 2025-11-29 08:34:37.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:34:37 np0005539552 nova_compute[233724]: 2025-11-29 08:34:37.945 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:37 np0005539552 nova_compute[233724]: 2025-11-29 08:34:37.945 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:37 np0005539552 nova_compute[233724]: 2025-11-29 08:34:37.946 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:37 np0005539552 nova_compute[233724]: 2025-11-29 08:34:37.946 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:34:37 np0005539552 nova_compute[233724]: 2025-11-29 08:34:37.947 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:37 np0005539552 nova_compute[233724]: 2025-11-29 08:34:37.997 233728 DEBUG oslo_concurrency.lockutils [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquiring lock "ce8d0714-58a3-470e-bd4e-056510ea90cd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:37 np0005539552 nova_compute[233724]: 2025-11-29 08:34:37.997 233728 DEBUG oslo_concurrency.lockutils [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "ce8d0714-58a3-470e-bd4e-056510ea90cd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:38 np0005539552 nova_compute[233724]: 2025-11-29 08:34:38.010 233728 DEBUG nova.compute.manager [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:34:38 np0005539552 nova_compute[233724]: 2025-11-29 08:34:38.078 233728 DEBUG oslo_concurrency.lockutils [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:38 np0005539552 nova_compute[233724]: 2025-11-29 08:34:38.079 233728 DEBUG oslo_concurrency.lockutils [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:38 np0005539552 nova_compute[233724]: 2025-11-29 08:34:38.085 233728 DEBUG nova.virt.hardware [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:34:38 np0005539552 nova_compute[233724]: 2025-11-29 08:34:38.085 233728 INFO nova.compute.claims [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:34:38 np0005539552 nova_compute[233724]: 2025-11-29 08:34:38.166 233728 DEBUG oslo_concurrency.processutils [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:34:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2922840830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:34:38 np0005539552 nova_compute[233724]: 2025-11-29 08:34:38.425 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:38.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:38 np0005539552 nova_compute[233724]: 2025-11-29 08:34:38.619 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:38 np0005539552 nova_compute[233724]: 2025-11-29 08:34:38.652 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:34:38 np0005539552 nova_compute[233724]: 2025-11-29 08:34:38.654 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4216MB free_disk=20.760189056396484GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:34:38 np0005539552 nova_compute[233724]: 2025-11-29 08:34:38.655 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:34:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/436665699' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:34:38 np0005539552 nova_compute[233724]: 2025-11-29 08:34:38.717 233728 DEBUG oslo_concurrency.processutils [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:38 np0005539552 nova_compute[233724]: 2025-11-29 08:34:38.725 233728 DEBUG nova.compute.provider_tree [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:34:38 np0005539552 nova_compute[233724]: 2025-11-29 08:34:38.750 233728 DEBUG nova.scheduler.client.report [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:34:38 np0005539552 nova_compute[233724]: 2025-11-29 08:34:38.775 233728 DEBUG oslo_concurrency.lockutils [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:38 np0005539552 nova_compute[233724]: 2025-11-29 08:34:38.776 233728 DEBUG nova.compute.manager [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:34:38 np0005539552 nova_compute[233724]: 2025-11-29 08:34:38.781 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.126s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:38 np0005539552 nova_compute[233724]: 2025-11-29 08:34:38.845 233728 DEBUG nova.compute.manager [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:34:38 np0005539552 nova_compute[233724]: 2025-11-29 08:34:38.846 233728 DEBUG nova.network.neutron [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:34:38 np0005539552 nova_compute[233724]: 2025-11-29 08:34:38.861 233728 INFO nova.virt.libvirt.driver [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:34:38 np0005539552 nova_compute[233724]: 2025-11-29 08:34:38.877 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance ce8d0714-58a3-470e-bd4e-056510ea90cd actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:34:38 np0005539552 nova_compute[233724]: 2025-11-29 08:34:38.878 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:34:38 np0005539552 nova_compute[233724]: 2025-11-29 08:34:38.878 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:34:38 np0005539552 nova_compute[233724]: 2025-11-29 08:34:38.881 233728 DEBUG nova.compute.manager [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:34:38 np0005539552 nova_compute[233724]: 2025-11-29 08:34:38.910 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:34:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2683234377' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:34:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:34:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2683234377' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:34:38 np0005539552 nova_compute[233724]: 2025-11-29 08:34:38.989 233728 DEBUG nova.compute.manager [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:34:38 np0005539552 nova_compute[233724]: 2025-11-29 08:34:38.992 233728 DEBUG nova.virt.libvirt.driver [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:34:38 np0005539552 nova_compute[233724]: 2025-11-29 08:34:38.993 233728 INFO nova.virt.libvirt.driver [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Creating image(s)#033[00m
Nov 29 03:34:39 np0005539552 nova_compute[233724]: 2025-11-29 08:34:39.027 233728 DEBUG nova.storage.rbd_utils [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] rbd image ce8d0714-58a3-470e-bd4e-056510ea90cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:34:39 np0005539552 nova_compute[233724]: 2025-11-29 08:34:39.071 233728 DEBUG nova.storage.rbd_utils [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] rbd image ce8d0714-58a3-470e-bd4e-056510ea90cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:34:39 np0005539552 nova_compute[233724]: 2025-11-29 08:34:39.114 233728 DEBUG nova.storage.rbd_utils [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] rbd image ce8d0714-58a3-470e-bd4e-056510ea90cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:34:39 np0005539552 nova_compute[233724]: 2025-11-29 08:34:39.119 233728 DEBUG oslo_concurrency.processutils [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:39 np0005539552 nova_compute[233724]: 2025-11-29 08:34:39.170 233728 DEBUG nova.policy [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bdbcdbdc435844ee8d866288c969331b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '368e3a44279843f5947188dd045d65b6', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:34:39 np0005539552 nova_compute[233724]: 2025-11-29 08:34:39.222 233728 DEBUG oslo_concurrency.processutils [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:39 np0005539552 nova_compute[233724]: 2025-11-29 08:34:39.223 233728 DEBUG oslo_concurrency.lockutils [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:39 np0005539552 nova_compute[233724]: 2025-11-29 08:34:39.224 233728 DEBUG oslo_concurrency.lockutils [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:39 np0005539552 nova_compute[233724]: 2025-11-29 08:34:39.225 233728 DEBUG oslo_concurrency.lockutils [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:39 np0005539552 nova_compute[233724]: 2025-11-29 08:34:39.270 233728 DEBUG nova.storage.rbd_utils [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] rbd image ce8d0714-58a3-470e-bd4e-056510ea90cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:34:39 np0005539552 nova_compute[233724]: 2025-11-29 08:34:39.277 233728 DEBUG oslo_concurrency.processutils [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 ce8d0714-58a3-470e-bd4e-056510ea90cd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:34:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3569060628' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:34:39 np0005539552 nova_compute[233724]: 2025-11-29 08:34:39.376 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:39 np0005539552 nova_compute[233724]: 2025-11-29 08:34:39.383 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:34:39 np0005539552 nova_compute[233724]: 2025-11-29 08:34:39.401 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:34:39 np0005539552 nova_compute[233724]: 2025-11-29 08:34:39.435 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:34:39 np0005539552 nova_compute[233724]: 2025-11-29 08:34:39.436 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:39.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:39 np0005539552 nova_compute[233724]: 2025-11-29 08:34:39.851 233728 DEBUG oslo_concurrency.processutils [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 ce8d0714-58a3-470e-bd4e-056510ea90cd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:34:39 np0005539552 nova_compute[233724]: 2025-11-29 08:34:39.945 233728 DEBUG nova.storage.rbd_utils [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] resizing rbd image ce8d0714-58a3-470e-bd4e-056510ea90cd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:34:40 np0005539552 nova_compute[233724]: 2025-11-29 08:34:40.061 233728 DEBUG nova.objects.instance [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lazy-loading 'migration_context' on Instance uuid ce8d0714-58a3-470e-bd4e-056510ea90cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:34:40 np0005539552 nova_compute[233724]: 2025-11-29 08:34:40.081 233728 DEBUG nova.virt.libvirt.driver [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:34:40 np0005539552 nova_compute[233724]: 2025-11-29 08:34:40.081 233728 DEBUG nova.virt.libvirt.driver [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Ensure instance console log exists: /var/lib/nova/instances/ce8d0714-58a3-470e-bd4e-056510ea90cd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:34:40 np0005539552 nova_compute[233724]: 2025-11-29 08:34:40.082 233728 DEBUG oslo_concurrency.lockutils [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:34:40 np0005539552 nova_compute[233724]: 2025-11-29 08:34:40.083 233728 DEBUG oslo_concurrency.lockutils [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:34:40 np0005539552 nova_compute[233724]: 2025-11-29 08:34:40.083 233728 DEBUG oslo_concurrency.lockutils [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:34:40 np0005539552 nova_compute[233724]: 2025-11-29 08:34:40.243 233728 DEBUG nova.network.neutron [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Successfully created port: a49400ac-d3dd-4f0c-818e-9de880a48505 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:34:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:40.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:41.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:41 np0005539552 nova_compute[233724]: 2025-11-29 08:34:41.917 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:34:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:42.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:42 np0005539552 nova_compute[233724]: 2025-11-29 08:34:42.972 233728 DEBUG nova.network.neutron [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Successfully updated port: a49400ac-d3dd-4f0c-818e-9de880a48505 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:34:42 np0005539552 nova_compute[233724]: 2025-11-29 08:34:42.995 233728 DEBUG oslo_concurrency.lockutils [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquiring lock "refresh_cache-ce8d0714-58a3-470e-bd4e-056510ea90cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:34:42 np0005539552 nova_compute[233724]: 2025-11-29 08:34:42.996 233728 DEBUG oslo_concurrency.lockutils [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquired lock "refresh_cache-ce8d0714-58a3-470e-bd4e-056510ea90cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:34:42 np0005539552 nova_compute[233724]: 2025-11-29 08:34:42.996 233728 DEBUG nova.network.neutron [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:34:43 np0005539552 nova_compute[233724]: 2025-11-29 08:34:43.187 233728 DEBUG nova.network.neutron [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:34:43 np0005539552 nova_compute[233724]: 2025-11-29 08:34:43.327 233728 DEBUG nova.compute.manager [req-c62d2ae6-b82e-475b-9051-79c99ebe3115 req-92e8148f-a2f8-4b33-824d-6c13e24e96e2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Received event network-changed-a49400ac-d3dd-4f0c-818e-9de880a48505 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:34:43 np0005539552 nova_compute[233724]: 2025-11-29 08:34:43.328 233728 DEBUG nova.compute.manager [req-c62d2ae6-b82e-475b-9051-79c99ebe3115 req-92e8148f-a2f8-4b33-824d-6c13e24e96e2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Refreshing instance network info cache due to event network-changed-a49400ac-d3dd-4f0c-818e-9de880a48505. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:34:43 np0005539552 nova_compute[233724]: 2025-11-29 08:34:43.328 233728 DEBUG oslo_concurrency.lockutils [req-c62d2ae6-b82e-475b-9051-79c99ebe3115 req-92e8148f-a2f8-4b33-824d-6c13e24e96e2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-ce8d0714-58a3-470e-bd4e-056510ea90cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:34:43 np0005539552 nova_compute[233724]: 2025-11-29 08:34:43.622 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:34:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:43.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:44 np0005539552 nova_compute[233724]: 2025-11-29 08:34:44.438 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:34:44 np0005539552 nova_compute[233724]: 2025-11-29 08:34:44.440 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:34:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:44.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:44 np0005539552 nova_compute[233724]: 2025-11-29 08:34:44.926 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:34:45 np0005539552 nova_compute[233724]: 2025-11-29 08:34:45.291 233728 DEBUG nova.network.neutron [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Updating instance_info_cache with network_info: [{"id": "a49400ac-d3dd-4f0c-818e-9de880a48505", "address": "fa:16:3e:f3:44:de", "network": {"id": "0183ad73-05c1-46e4-ba3e-b87d7a948c3b", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1280517693-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368e3a44279843f5947188dd045d65b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa49400ac-d3", "ovs_interfaceid": "a49400ac-d3dd-4f0c-818e-9de880a48505", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:34:45 np0005539552 nova_compute[233724]: 2025-11-29 08:34:45.313 233728 DEBUG oslo_concurrency.lockutils [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Releasing lock "refresh_cache-ce8d0714-58a3-470e-bd4e-056510ea90cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:34:45 np0005539552 nova_compute[233724]: 2025-11-29 08:34:45.314 233728 DEBUG nova.compute.manager [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Instance network_info: |[{"id": "a49400ac-d3dd-4f0c-818e-9de880a48505", "address": "fa:16:3e:f3:44:de", "network": {"id": "0183ad73-05c1-46e4-ba3e-b87d7a948c3b", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1280517693-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368e3a44279843f5947188dd045d65b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa49400ac-d3", "ovs_interfaceid": "a49400ac-d3dd-4f0c-818e-9de880a48505", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 03:34:45 np0005539552 nova_compute[233724]: 2025-11-29 08:34:45.315 233728 DEBUG oslo_concurrency.lockutils [req-c62d2ae6-b82e-475b-9051-79c99ebe3115 req-92e8148f-a2f8-4b33-824d-6c13e24e96e2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-ce8d0714-58a3-470e-bd4e-056510ea90cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:34:45 np0005539552 nova_compute[233724]: 2025-11-29 08:34:45.315 233728 DEBUG nova.network.neutron [req-c62d2ae6-b82e-475b-9051-79c99ebe3115 req-92e8148f-a2f8-4b33-824d-6c13e24e96e2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Refreshing network info cache for port a49400ac-d3dd-4f0c-818e-9de880a48505 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:34:45 np0005539552 nova_compute[233724]: 2025-11-29 08:34:45.322 233728 DEBUG nova.virt.libvirt.driver [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Start _get_guest_xml network_info=[{"id": "a49400ac-d3dd-4f0c-818e-9de880a48505", "address": "fa:16:3e:f3:44:de", "network": {"id": "0183ad73-05c1-46e4-ba3e-b87d7a948c3b", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1280517693-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368e3a44279843f5947188dd045d65b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa49400ac-d3", "ovs_interfaceid": "a49400ac-d3dd-4f0c-818e-9de880a48505", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 03:34:45 np0005539552 nova_compute[233724]: 2025-11-29 08:34:45.330 233728 WARNING nova.virt.libvirt.driver [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:34:45 np0005539552 nova_compute[233724]: 2025-11-29 08:34:45.342 233728 DEBUG nova.virt.libvirt.host [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 03:34:45 np0005539552 nova_compute[233724]: 2025-11-29 08:34:45.343 233728 DEBUG nova.virt.libvirt.host [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 03:34:45 np0005539552 nova_compute[233724]: 2025-11-29 08:34:45.354 233728 DEBUG nova.virt.libvirt.host [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 03:34:45 np0005539552 nova_compute[233724]: 2025-11-29 08:34:45.355 233728 DEBUG nova.virt.libvirt.host [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 03:34:45 np0005539552 nova_compute[233724]: 2025-11-29 08:34:45.357 233728 DEBUG nova.virt.libvirt.driver [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 03:34:45 np0005539552 nova_compute[233724]: 2025-11-29 08:34:45.357 233728 DEBUG nova.virt.hardware [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 03:34:45 np0005539552 nova_compute[233724]: 2025-11-29 08:34:45.358 233728 DEBUG nova.virt.hardware [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 03:34:45 np0005539552 nova_compute[233724]: 2025-11-29 08:34:45.359 233728 DEBUG nova.virt.hardware [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 03:34:45 np0005539552 nova_compute[233724]: 2025-11-29 08:34:45.359 233728 DEBUG nova.virt.hardware [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 03:34:45 np0005539552 nova_compute[233724]: 2025-11-29 08:34:45.359 233728 DEBUG nova.virt.hardware [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 03:34:45 np0005539552 nova_compute[233724]: 2025-11-29 08:34:45.360 233728 DEBUG nova.virt.hardware [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 03:34:45 np0005539552 nova_compute[233724]: 2025-11-29 08:34:45.360 233728 DEBUG nova.virt.hardware [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 03:34:45 np0005539552 nova_compute[233724]: 2025-11-29 08:34:45.361 233728 DEBUG nova.virt.hardware [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 03:34:45 np0005539552 nova_compute[233724]: 2025-11-29 08:34:45.361 233728 DEBUG nova.virt.hardware [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 03:34:45 np0005539552 nova_compute[233724]: 2025-11-29 08:34:45.362 233728 DEBUG nova.virt.hardware [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 03:34:45 np0005539552 nova_compute[233724]: 2025-11-29 08:34:45.362 233728 DEBUG nova.virt.hardware [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 03:34:45 np0005539552 nova_compute[233724]: 2025-11-29 08:34:45.368 233728 DEBUG oslo_concurrency.processutils [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:34:45 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #115. Immutable memtables: 0.
Nov 29 03:34:45 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:34:45.369516) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:34:45 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 115
Nov 29 03:34:45 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405285369664, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 2445, "num_deletes": 253, "total_data_size": 5610146, "memory_usage": 5692928, "flush_reason": "Manual Compaction"}
Nov 29 03:34:45 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #116: started
Nov 29 03:34:45 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:34:45 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:34:45 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:34:45 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405285396834, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 116, "file_size": 3688570, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 57414, "largest_seqno": 59854, "table_properties": {"data_size": 3678752, "index_size": 6120, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21563, "raw_average_key_size": 20, "raw_value_size": 3658652, "raw_average_value_size": 3528, "num_data_blocks": 266, "num_entries": 1037, "num_filter_entries": 1037, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405077, "oldest_key_time": 1764405077, "file_creation_time": 1764405285, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:34:45 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 27368 microseconds, and 15713 cpu microseconds.
Nov 29 03:34:45 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:34:45 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:34:45.396889) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #116: 3688570 bytes OK
Nov 29 03:34:45 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:34:45.396915) [db/memtable_list.cc:519] [default] Level-0 commit table #116 started
Nov 29 03:34:45 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:34:45.398916) [db/memtable_list.cc:722] [default] Level-0 commit table #116: memtable #1 done
Nov 29 03:34:45 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:34:45.398944) EVENT_LOG_v1 {"time_micros": 1764405285398935, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:34:45 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:34:45.398972) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:34:45 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 5599258, prev total WAL file size 5599258, number of live WAL files 2.
Nov 29 03:34:45 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000112.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:34:45 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:34:45.401500) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Nov 29 03:34:45 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:34:45 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [116(3602KB)], [114(10MB)]
Nov 29 03:34:45 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405285401561, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [116], "files_L6": [114], "score": -1, "input_data_size": 14703692, "oldest_snapshot_seqno": -1}
Nov 29 03:34:45 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #117: 9224 keys, 12793988 bytes, temperature: kUnknown
Nov 29 03:34:45 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405285470939, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 117, "file_size": 12793988, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12733505, "index_size": 36307, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23109, "raw_key_size": 239843, "raw_average_key_size": 26, "raw_value_size": 12570442, "raw_average_value_size": 1362, "num_data_blocks": 1411, "num_entries": 9224, "num_filter_entries": 9224, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764405285, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 117, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:34:45 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:34:45 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:34:45.471274) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 12793988 bytes
Nov 29 03:34:45 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:34:45.472533) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 211.6 rd, 184.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 10.5 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(7.5) write-amplify(3.5) OK, records in: 9749, records dropped: 525 output_compression: NoCompression
Nov 29 03:34:45 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:34:45.472572) EVENT_LOG_v1 {"time_micros": 1764405285472557, "job": 72, "event": "compaction_finished", "compaction_time_micros": 69482, "compaction_time_cpu_micros": 32921, "output_level": 6, "num_output_files": 1, "total_output_size": 12793988, "num_input_records": 9749, "num_output_records": 9224, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:34:45 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:34:45 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405285473365, "job": 72, "event": "table_file_deletion", "file_number": 116}
Nov 29 03:34:45 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000114.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:34:45 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405285475340, "job": 72, "event": "table_file_deletion", "file_number": 114}
Nov 29 03:34:45 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:34:45.401379) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:34:45 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:34:45.475429) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:34:45 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:34:45.475435) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:34:45 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:34:45.475438) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:34:45 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:34:45.475441) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:34:45 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:34:45.475444) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:34:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:45.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:45 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:34:45 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1571010842' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:34:45 np0005539552 nova_compute[233724]: 2025-11-29 08:34:45.827 233728 DEBUG oslo_concurrency.processutils [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:45 np0005539552 nova_compute[233724]: 2025-11-29 08:34:45.872 233728 DEBUG nova.storage.rbd_utils [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] rbd image ce8d0714-58a3-470e-bd4e-056510ea90cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:34:45 np0005539552 nova_compute[233724]: 2025-11-29 08:34:45.879 233728 DEBUG oslo_concurrency.processutils [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:45 np0005539552 nova_compute[233724]: 2025-11-29 08:34:45.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:34:45 np0005539552 nova_compute[233724]: 2025-11-29 08:34:45.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:34:45 np0005539552 nova_compute[233724]: 2025-11-29 08:34:45.925 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:34:46 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:34:46 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/387940022' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:34:46 np0005539552 nova_compute[233724]: 2025-11-29 08:34:46.342 233728 DEBUG oslo_concurrency.processutils [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:46 np0005539552 nova_compute[233724]: 2025-11-29 08:34:46.344 233728 DEBUG nova.virt.libvirt.vif [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:34:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-783047069',display_name='tempest-AttachVolumeNegativeTest-server-783047069',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-783047069',id=172,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFQ1Xc+6dsBZceum/czMPofdd4FTNhf/ndQpYyoZjcVG+jFJxNhiOKOiwnZLYlv/os34IcmoJyaVTOwUT0/a2YuvnygYYqx4uGxp3ffKh3qL1focM/X4J2iBlrp5VjFccQ==',key_name='tempest-keypair-1947412987',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='368e3a44279843f5947188dd045d65b6',ramdisk_id='',reservation_id='r-hyq0lk0p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-1895715059',owner_user_name='tempest-AttachVolumeNegativeTest-1895715059-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:34:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='bdbcdbdc435844ee8d866288c969331b',uuid=ce8d0714-58a3-470e-bd4e-056510ea90cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a49400ac-d3dd-4f0c-818e-9de880a48505", "address": "fa:16:3e:f3:44:de", "network": {"id": "0183ad73-05c1-46e4-ba3e-b87d7a948c3b", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1280517693-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368e3a44279843f5947188dd045d65b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa49400ac-d3", "ovs_interfaceid": "a49400ac-d3dd-4f0c-818e-9de880a48505", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:34:46 np0005539552 nova_compute[233724]: 2025-11-29 08:34:46.345 233728 DEBUG nova.network.os_vif_util [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Converting VIF {"id": "a49400ac-d3dd-4f0c-818e-9de880a48505", "address": "fa:16:3e:f3:44:de", "network": {"id": "0183ad73-05c1-46e4-ba3e-b87d7a948c3b", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1280517693-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368e3a44279843f5947188dd045d65b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa49400ac-d3", "ovs_interfaceid": "a49400ac-d3dd-4f0c-818e-9de880a48505", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:34:46 np0005539552 nova_compute[233724]: 2025-11-29 08:34:46.347 233728 DEBUG nova.network.os_vif_util [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:44:de,bridge_name='br-int',has_traffic_filtering=True,id=a49400ac-d3dd-4f0c-818e-9de880a48505,network=Network(0183ad73-05c1-46e4-ba3e-b87d7a948c3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa49400ac-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:34:46 np0005539552 nova_compute[233724]: 2025-11-29 08:34:46.349 233728 DEBUG nova.objects.instance [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lazy-loading 'pci_devices' on Instance uuid ce8d0714-58a3-470e-bd4e-056510ea90cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:34:46 np0005539552 nova_compute[233724]: 2025-11-29 08:34:46.372 233728 DEBUG nova.virt.libvirt.driver [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:34:46 np0005539552 nova_compute[233724]:  <uuid>ce8d0714-58a3-470e-bd4e-056510ea90cd</uuid>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:  <name>instance-000000ac</name>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:34:46 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:      <nova:name>tempest-AttachVolumeNegativeTest-server-783047069</nova:name>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:34:45</nova:creationTime>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:34:46 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:        <nova:user uuid="bdbcdbdc435844ee8d866288c969331b">tempest-AttachVolumeNegativeTest-1895715059-project-member</nova:user>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:        <nova:project uuid="368e3a44279843f5947188dd045d65b6">tempest-AttachVolumeNegativeTest-1895715059</nova:project>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:        <nova:port uuid="a49400ac-d3dd-4f0c-818e-9de880a48505">
Nov 29 03:34:46 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:      <entry name="serial">ce8d0714-58a3-470e-bd4e-056510ea90cd</entry>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:      <entry name="uuid">ce8d0714-58a3-470e-bd4e-056510ea90cd</entry>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:34:46 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/ce8d0714-58a3-470e-bd4e-056510ea90cd_disk">
Nov 29 03:34:46 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:34:46 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:34:46 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/ce8d0714-58a3-470e-bd4e-056510ea90cd_disk.config">
Nov 29 03:34:46 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:34:46 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:34:46 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:f3:44:de"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:      <target dev="tapa49400ac-d3"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:34:46 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/ce8d0714-58a3-470e-bd4e-056510ea90cd/console.log" append="off"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:34:46 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:34:46 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:34:46 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:34:46 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:34:46 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:34:46 np0005539552 nova_compute[233724]: 2025-11-29 08:34:46.374 233728 DEBUG nova.compute.manager [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Preparing to wait for external event network-vif-plugged-a49400ac-d3dd-4f0c-818e-9de880a48505 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:34:46 np0005539552 nova_compute[233724]: 2025-11-29 08:34:46.374 233728 DEBUG oslo_concurrency.lockutils [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquiring lock "ce8d0714-58a3-470e-bd4e-056510ea90cd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:46 np0005539552 nova_compute[233724]: 2025-11-29 08:34:46.375 233728 DEBUG oslo_concurrency.lockutils [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "ce8d0714-58a3-470e-bd4e-056510ea90cd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:46 np0005539552 nova_compute[233724]: 2025-11-29 08:34:46.375 233728 DEBUG oslo_concurrency.lockutils [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "ce8d0714-58a3-470e-bd4e-056510ea90cd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:46 np0005539552 nova_compute[233724]: 2025-11-29 08:34:46.376 233728 DEBUG nova.virt.libvirt.vif [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:34:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-783047069',display_name='tempest-AttachVolumeNegativeTest-server-783047069',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-783047069',id=172,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFQ1Xc+6dsBZceum/czMPofdd4FTNhf/ndQpYyoZjcVG+jFJxNhiOKOiwnZLYlv/os34IcmoJyaVTOwUT0/a2YuvnygYYqx4uGxp3ffKh3qL1focM/X4J2iBlrp5VjFccQ==',key_name='tempest-keypair-1947412987',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='368e3a44279843f5947188dd045d65b6',ramdisk_id='',reservation_id='r-hyq0lk0p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-1895715059',owner_user_name='tempest-AttachVolumeNegativeTest-1895715059-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:34:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='bdbcdbdc435844ee8d866288c969331b',uuid=ce8d0714-58a3-470e-bd4e-056510ea90cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a49400ac-d3dd-4f0c-818e-9de880a48505", "address": "fa:16:3e:f3:44:de", "network": {"id": "0183ad73-05c1-46e4-ba3e-b87d7a948c3b", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1280517693-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368e3a44279843f5947188dd045d65b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa49400ac-d3", "ovs_interfaceid": "a49400ac-d3dd-4f0c-818e-9de880a48505", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:34:46 np0005539552 nova_compute[233724]: 2025-11-29 08:34:46.377 233728 DEBUG nova.network.os_vif_util [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Converting VIF {"id": "a49400ac-d3dd-4f0c-818e-9de880a48505", "address": "fa:16:3e:f3:44:de", "network": {"id": "0183ad73-05c1-46e4-ba3e-b87d7a948c3b", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1280517693-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368e3a44279843f5947188dd045d65b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa49400ac-d3", "ovs_interfaceid": "a49400ac-d3dd-4f0c-818e-9de880a48505", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:34:46 np0005539552 nova_compute[233724]: 2025-11-29 08:34:46.378 233728 DEBUG nova.network.os_vif_util [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:44:de,bridge_name='br-int',has_traffic_filtering=True,id=a49400ac-d3dd-4f0c-818e-9de880a48505,network=Network(0183ad73-05c1-46e4-ba3e-b87d7a948c3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa49400ac-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:34:46 np0005539552 nova_compute[233724]: 2025-11-29 08:34:46.379 233728 DEBUG os_vif [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:44:de,bridge_name='br-int',has_traffic_filtering=True,id=a49400ac-d3dd-4f0c-818e-9de880a48505,network=Network(0183ad73-05c1-46e4-ba3e-b87d7a948c3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa49400ac-d3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:34:46 np0005539552 nova_compute[233724]: 2025-11-29 08:34:46.380 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:46 np0005539552 nova_compute[233724]: 2025-11-29 08:34:46.381 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:46 np0005539552 nova_compute[233724]: 2025-11-29 08:34:46.381 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:34:46 np0005539552 nova_compute[233724]: 2025-11-29 08:34:46.388 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:46 np0005539552 nova_compute[233724]: 2025-11-29 08:34:46.388 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa49400ac-d3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:46 np0005539552 nova_compute[233724]: 2025-11-29 08:34:46.389 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa49400ac-d3, col_values=(('external_ids', {'iface-id': 'a49400ac-d3dd-4f0c-818e-9de880a48505', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f3:44:de', 'vm-uuid': 'ce8d0714-58a3-470e-bd4e-056510ea90cd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:46 np0005539552 nova_compute[233724]: 2025-11-29 08:34:46.391 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:46 np0005539552 NetworkManager[48926]: <info>  [1764405286.3924] manager: (tapa49400ac-d3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/347)
Nov 29 03:34:46 np0005539552 nova_compute[233724]: 2025-11-29 08:34:46.395 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:34:46 np0005539552 nova_compute[233724]: 2025-11-29 08:34:46.401 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:46 np0005539552 nova_compute[233724]: 2025-11-29 08:34:46.402 233728 INFO os_vif [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:44:de,bridge_name='br-int',has_traffic_filtering=True,id=a49400ac-d3dd-4f0c-818e-9de880a48505,network=Network(0183ad73-05c1-46e4-ba3e-b87d7a948c3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa49400ac-d3')#033[00m
Nov 29 03:34:46 np0005539552 nova_compute[233724]: 2025-11-29 08:34:46.496 233728 DEBUG nova.virt.libvirt.driver [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:34:46 np0005539552 nova_compute[233724]: 2025-11-29 08:34:46.497 233728 DEBUG nova.virt.libvirt.driver [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:34:46 np0005539552 nova_compute[233724]: 2025-11-29 08:34:46.497 233728 DEBUG nova.virt.libvirt.driver [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] No VIF found with MAC fa:16:3e:f3:44:de, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:34:46 np0005539552 nova_compute[233724]: 2025-11-29 08:34:46.498 233728 INFO nova.virt.libvirt.driver [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Using config drive#033[00m
Nov 29 03:34:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:46.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:46 np0005539552 nova_compute[233724]: 2025-11-29 08:34:46.544 233728 DEBUG nova.storage.rbd_utils [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] rbd image ce8d0714-58a3-470e-bd4e-056510ea90cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:34:46 np0005539552 nova_compute[233724]: 2025-11-29 08:34:46.919 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:46 np0005539552 nova_compute[233724]: 2025-11-29 08:34:46.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:34:46 np0005539552 nova_compute[233724]: 2025-11-29 08:34:46.929 233728 INFO nova.virt.libvirt.driver [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Creating config drive at /var/lib/nova/instances/ce8d0714-58a3-470e-bd4e-056510ea90cd/disk.config#033[00m
Nov 29 03:34:46 np0005539552 nova_compute[233724]: 2025-11-29 08:34:46.942 233728 DEBUG oslo_concurrency.processutils [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ce8d0714-58a3-470e-bd4e-056510ea90cd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdvkub91z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:47 np0005539552 nova_compute[233724]: 2025-11-29 08:34:47.050 233728 DEBUG nova.network.neutron [req-c62d2ae6-b82e-475b-9051-79c99ebe3115 req-92e8148f-a2f8-4b33-824d-6c13e24e96e2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Updated VIF entry in instance network info cache for port a49400ac-d3dd-4f0c-818e-9de880a48505. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:34:47 np0005539552 nova_compute[233724]: 2025-11-29 08:34:47.052 233728 DEBUG nova.network.neutron [req-c62d2ae6-b82e-475b-9051-79c99ebe3115 req-92e8148f-a2f8-4b33-824d-6c13e24e96e2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Updating instance_info_cache with network_info: [{"id": "a49400ac-d3dd-4f0c-818e-9de880a48505", "address": "fa:16:3e:f3:44:de", "network": {"id": "0183ad73-05c1-46e4-ba3e-b87d7a948c3b", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1280517693-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368e3a44279843f5947188dd045d65b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa49400ac-d3", "ovs_interfaceid": "a49400ac-d3dd-4f0c-818e-9de880a48505", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:34:47 np0005539552 nova_compute[233724]: 2025-11-29 08:34:47.077 233728 DEBUG oslo_concurrency.lockutils [req-c62d2ae6-b82e-475b-9051-79c99ebe3115 req-92e8148f-a2f8-4b33-824d-6c13e24e96e2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-ce8d0714-58a3-470e-bd4e-056510ea90cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:34:47 np0005539552 nova_compute[233724]: 2025-11-29 08:34:47.102 233728 DEBUG oslo_concurrency.processutils [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ce8d0714-58a3-470e-bd4e-056510ea90cd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdvkub91z" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:47 np0005539552 nova_compute[233724]: 2025-11-29 08:34:47.149 233728 DEBUG nova.storage.rbd_utils [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] rbd image ce8d0714-58a3-470e-bd4e-056510ea90cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:34:47 np0005539552 nova_compute[233724]: 2025-11-29 08:34:47.154 233728 DEBUG oslo_concurrency.processutils [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ce8d0714-58a3-470e-bd4e-056510ea90cd/disk.config ce8d0714-58a3-470e-bd4e-056510ea90cd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:34:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:47.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:34:48 np0005539552 nova_compute[233724]: 2025-11-29 08:34:48.087 233728 DEBUG oslo_concurrency.processutils [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ce8d0714-58a3-470e-bd4e-056510ea90cd/disk.config ce8d0714-58a3-470e-bd4e-056510ea90cd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.933s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:48 np0005539552 nova_compute[233724]: 2025-11-29 08:34:48.088 233728 INFO nova.virt.libvirt.driver [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Deleting local config drive /var/lib/nova/instances/ce8d0714-58a3-470e-bd4e-056510ea90cd/disk.config because it was imported into RBD.#033[00m
Nov 29 03:34:48 np0005539552 kernel: tapa49400ac-d3: entered promiscuous mode
Nov 29 03:34:48 np0005539552 NetworkManager[48926]: <info>  [1764405288.1679] manager: (tapa49400ac-d3): new Tun device (/org/freedesktop/NetworkManager/Devices/348)
Nov 29 03:34:48 np0005539552 ovn_controller[133798]: 2025-11-29T08:34:48Z|00793|binding|INFO|Claiming lport a49400ac-d3dd-4f0c-818e-9de880a48505 for this chassis.
Nov 29 03:34:48 np0005539552 nova_compute[233724]: 2025-11-29 08:34:48.169 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:48 np0005539552 ovn_controller[133798]: 2025-11-29T08:34:48Z|00794|binding|INFO|a49400ac-d3dd-4f0c-818e-9de880a48505: Claiming fa:16:3e:f3:44:de 10.100.0.3
Nov 29 03:34:48 np0005539552 nova_compute[233724]: 2025-11-29 08:34:48.177 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:48 np0005539552 nova_compute[233724]: 2025-11-29 08:34:48.185 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:48.192 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:44:de 10.100.0.3'], port_security=['fa:16:3e:f3:44:de 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ce8d0714-58a3-470e-bd4e-056510ea90cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0183ad73-05c1-46e4-ba3e-b87d7a948c3b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '368e3a44279843f5947188dd045d65b6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a027b644-782d-4129-90c4-f9aea2a9db99', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=93400fd2-d19f-44bb-bf19-75f9854fcf6d, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=a49400ac-d3dd-4f0c-818e-9de880a48505) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:48.194 143400 INFO neutron.agent.ovn.metadata.agent [-] Port a49400ac-d3dd-4f0c-818e-9de880a48505 in datapath 0183ad73-05c1-46e4-ba3e-b87d7a948c3b bound to our chassis#033[00m
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:48.197 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0183ad73-05c1-46e4-ba3e-b87d7a948c3b#033[00m
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:48.214 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[44e00622-6fe9-4175-a5c3-8bcce0eb55d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:48.215 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0183ad73-01 in ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:48.218 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0183ad73-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:48.218 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[62caac5f-b924-4104-8a76-45edd4a1c367]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:48.219 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[62281973-2053-4ae7-bcfd-cb046097fe48]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:48 np0005539552 systemd-machined[196379]: New machine qemu-81-instance-000000ac.
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:48.240 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[2c29ef8a-6aff-4fed-93a3-d69de49ef706]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:48 np0005539552 systemd[1]: Started Virtual Machine qemu-81-instance-000000ac.
Nov 29 03:34:48 np0005539552 systemd-udevd[306010]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:48.273 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[625f6da3-8204-462f-b73b-d9c2e7e3df15]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:48 np0005539552 NetworkManager[48926]: <info>  [1764405288.2850] device (tapa49400ac-d3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:34:48 np0005539552 NetworkManager[48926]: <info>  [1764405288.2866] device (tapa49400ac-d3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:34:48 np0005539552 nova_compute[233724]: 2025-11-29 08:34:48.288 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:48 np0005539552 ovn_controller[133798]: 2025-11-29T08:34:48Z|00795|binding|INFO|Setting lport a49400ac-d3dd-4f0c-818e-9de880a48505 ovn-installed in OVS
Nov 29 03:34:48 np0005539552 ovn_controller[133798]: 2025-11-29T08:34:48Z|00796|binding|INFO|Setting lport a49400ac-d3dd-4f0c-818e-9de880a48505 up in Southbound
Nov 29 03:34:48 np0005539552 nova_compute[233724]: 2025-11-29 08:34:48.292 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:48.326 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[f1972de0-0c22-4e5c-a6f1-123c48ad1190]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:48.333 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[99e7a1dd-d67c-4bff-99dd-6cb66f3ab750]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:48 np0005539552 NetworkManager[48926]: <info>  [1764405288.3346] manager: (tap0183ad73-00): new Veth device (/org/freedesktop/NetworkManager/Devices/349)
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:48.376 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[6188947e-15b6-4346-bf96-6732f371d32a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:48.379 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[9d2b1d16-da39-43ba-9332-f522b5012aca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:48 np0005539552 NetworkManager[48926]: <info>  [1764405288.4132] device (tap0183ad73-00): carrier: link connected
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:48.421 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[58c2e966-7e01-48f9-a705-5b9688692cb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:48.447 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0fda04f9-a229-42bc-b344-d2f51b48dda0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0183ad73-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:83:aa:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 234], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 833410, 'reachable_time': 41856, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306040, 'error': None, 'target': 'ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:48.466 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[838cd048-39e2-41e8-ba1f-ae563192a980]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe83:aad0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 833410, 'tstamp': 833410}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306041, 'error': None, 'target': 'ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:48.489 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[bb8695b7-7b42-42d4-b26f-0f57bd9739f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0183ad73-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:83:aa:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 234], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 833410, 'reachable_time': 41856, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 306042, 'error': None, 'target': 'ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:48.526 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d1e4597e-f7b6-4075-aaaf-03c5afb88140]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:48.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:48 np0005539552 nova_compute[233724]: 2025-11-29 08:34:48.588 233728 DEBUG nova.compute.manager [req-c40d7968-d646-4bf6-be2d-cf76410cab2d req-140fc4c7-5e49-4256-99b7-c986cb8b3e4b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Received event network-vif-plugged-a49400ac-d3dd-4f0c-818e-9de880a48505 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:34:48 np0005539552 nova_compute[233724]: 2025-11-29 08:34:48.588 233728 DEBUG oslo_concurrency.lockutils [req-c40d7968-d646-4bf6-be2d-cf76410cab2d req-140fc4c7-5e49-4256-99b7-c986cb8b3e4b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ce8d0714-58a3-470e-bd4e-056510ea90cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:48 np0005539552 nova_compute[233724]: 2025-11-29 08:34:48.589 233728 DEBUG oslo_concurrency.lockutils [req-c40d7968-d646-4bf6-be2d-cf76410cab2d req-140fc4c7-5e49-4256-99b7-c986cb8b3e4b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ce8d0714-58a3-470e-bd4e-056510ea90cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:48 np0005539552 nova_compute[233724]: 2025-11-29 08:34:48.589 233728 DEBUG oslo_concurrency.lockutils [req-c40d7968-d646-4bf6-be2d-cf76410cab2d req-140fc4c7-5e49-4256-99b7-c986cb8b3e4b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ce8d0714-58a3-470e-bd4e-056510ea90cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:48 np0005539552 nova_compute[233724]: 2025-11-29 08:34:48.589 233728 DEBUG nova.compute.manager [req-c40d7968-d646-4bf6-be2d-cf76410cab2d req-140fc4c7-5e49-4256-99b7-c986cb8b3e4b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Processing event network-vif-plugged-a49400ac-d3dd-4f0c-818e-9de880a48505 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:48.599 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[26e57d0a-ca19-4a08-b7ee-c91ffd0cbbeb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:48.601 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0183ad73-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:48.601 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:48.601 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0183ad73-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:48 np0005539552 nova_compute[233724]: 2025-11-29 08:34:48.603 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:48 np0005539552 kernel: tap0183ad73-00: entered promiscuous mode
Nov 29 03:34:48 np0005539552 NetworkManager[48926]: <info>  [1764405288.6043] manager: (tap0183ad73-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/350)
Nov 29 03:34:48 np0005539552 nova_compute[233724]: 2025-11-29 08:34:48.605 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:48.606 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0183ad73-00, col_values=(('external_ids', {'iface-id': 'c88b07d7-f4c8-49a1-9950-8275afef03b1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:48 np0005539552 nova_compute[233724]: 2025-11-29 08:34:48.607 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:48 np0005539552 ovn_controller[133798]: 2025-11-29T08:34:48Z|00797|binding|INFO|Releasing lport c88b07d7-f4c8-49a1-9950-8275afef03b1 from this chassis (sb_readonly=0)
Nov 29 03:34:48 np0005539552 nova_compute[233724]: 2025-11-29 08:34:48.633 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:48.634 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0183ad73-05c1-46e4-ba3e-b87d7a948c3b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0183ad73-05c1-46e4-ba3e-b87d7a948c3b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:48.641 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[37b924d3-ce26-4790-9ead-6c3a7b35433c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:48.642 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-0183ad73-05c1-46e4-ba3e-b87d7a948c3b
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/0183ad73-05c1-46e4-ba3e-b87d7a948c3b.pid.haproxy
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 0183ad73-05c1-46e4-ba3e-b87d7a948c3b
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:34:48 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:34:48.643 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b', 'env', 'PROCESS_TAG=haproxy-0183ad73-05c1-46e4-ba3e-b87d7a948c3b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0183ad73-05c1-46e4-ba3e-b87d7a948c3b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:34:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:49 np0005539552 podman[306111]: 2025-11-29 08:34:49.095889039 +0000 UTC m=+0.078628817 container create b2fff58b40d2afaa4a7f348feac3c04fcb00cb8a0659e44ffff72583aab39dd5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0)
Nov 29 03:34:49 np0005539552 podman[306111]: 2025-11-29 08:34:49.050553099 +0000 UTC m=+0.033292927 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:34:49 np0005539552 systemd[1]: Started libpod-conmon-b2fff58b40d2afaa4a7f348feac3c04fcb00cb8a0659e44ffff72583aab39dd5.scope.
Nov 29 03:34:49 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:34:49 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7c5c1d2b0c81674a21cd704363954ad3e14e9f06dccaed0b65d9feb3829e913/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:34:49 np0005539552 podman[306111]: 2025-11-29 08:34:49.231933079 +0000 UTC m=+0.214672917 container init b2fff58b40d2afaa4a7f348feac3c04fcb00cb8a0659e44ffff72583aab39dd5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:34:49 np0005539552 podman[306111]: 2025-11-29 08:34:49.244579519 +0000 UTC m=+0.227319297 container start b2fff58b40d2afaa4a7f348feac3c04fcb00cb8a0659e44ffff72583aab39dd5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:34:49 np0005539552 neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b[306126]: [NOTICE]   (306131) : New worker (306137) forked
Nov 29 03:34:49 np0005539552 neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b[306126]: [NOTICE]   (306131) : Loading success.
Nov 29 03:34:49 np0005539552 nova_compute[233724]: 2025-11-29 08:34:49.347 233728 DEBUG nova.compute.manager [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:34:49 np0005539552 nova_compute[233724]: 2025-11-29 08:34:49.348 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405289.3463888, ce8d0714-58a3-470e-bd4e-056510ea90cd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:34:49 np0005539552 nova_compute[233724]: 2025-11-29 08:34:49.348 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] VM Started (Lifecycle Event)#033[00m
Nov 29 03:34:49 np0005539552 nova_compute[233724]: 2025-11-29 08:34:49.352 233728 DEBUG nova.virt.libvirt.driver [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:34:49 np0005539552 nova_compute[233724]: 2025-11-29 08:34:49.357 233728 INFO nova.virt.libvirt.driver [-] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Instance spawned successfully.#033[00m
Nov 29 03:34:49 np0005539552 nova_compute[233724]: 2025-11-29 08:34:49.357 233728 DEBUG nova.virt.libvirt.driver [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:34:49 np0005539552 nova_compute[233724]: 2025-11-29 08:34:49.379 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:34:49 np0005539552 nova_compute[233724]: 2025-11-29 08:34:49.388 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:34:49 np0005539552 nova_compute[233724]: 2025-11-29 08:34:49.391 233728 DEBUG nova.virt.libvirt.driver [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:34:49 np0005539552 nova_compute[233724]: 2025-11-29 08:34:49.392 233728 DEBUG nova.virt.libvirt.driver [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:34:49 np0005539552 nova_compute[233724]: 2025-11-29 08:34:49.392 233728 DEBUG nova.virt.libvirt.driver [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:34:49 np0005539552 nova_compute[233724]: 2025-11-29 08:34:49.393 233728 DEBUG nova.virt.libvirt.driver [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:34:49 np0005539552 nova_compute[233724]: 2025-11-29 08:34:49.393 233728 DEBUG nova.virt.libvirt.driver [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:34:49 np0005539552 nova_compute[233724]: 2025-11-29 08:34:49.394 233728 DEBUG nova.virt.libvirt.driver [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:34:49 np0005539552 nova_compute[233724]: 2025-11-29 08:34:49.430 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:34:49 np0005539552 nova_compute[233724]: 2025-11-29 08:34:49.430 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405289.3476431, ce8d0714-58a3-470e-bd4e-056510ea90cd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:34:49 np0005539552 nova_compute[233724]: 2025-11-29 08:34:49.431 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:34:49 np0005539552 nova_compute[233724]: 2025-11-29 08:34:49.462 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:34:49 np0005539552 nova_compute[233724]: 2025-11-29 08:34:49.466 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405289.351637, ce8d0714-58a3-470e-bd4e-056510ea90cd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:34:49 np0005539552 nova_compute[233724]: 2025-11-29 08:34:49.466 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:34:49 np0005539552 nova_compute[233724]: 2025-11-29 08:34:49.474 233728 INFO nova.compute.manager [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Took 10.48 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:34:49 np0005539552 nova_compute[233724]: 2025-11-29 08:34:49.475 233728 DEBUG nova.compute.manager [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:34:49 np0005539552 nova_compute[233724]: 2025-11-29 08:34:49.485 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:34:49 np0005539552 nova_compute[233724]: 2025-11-29 08:34:49.487 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:34:49 np0005539552 nova_compute[233724]: 2025-11-29 08:34:49.525 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:34:49 np0005539552 nova_compute[233724]: 2025-11-29 08:34:49.539 233728 INFO nova.compute.manager [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Took 11.49 seconds to build instance.#033[00m
Nov 29 03:34:49 np0005539552 nova_compute[233724]: 2025-11-29 08:34:49.554 233728 DEBUG oslo_concurrency.lockutils [None req-373252c3-1af7-4ba9-99e5-a094c8b9184f bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "ce8d0714-58a3-470e-bd4e-056510ea90cd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:49.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:49 np0005539552 nova_compute[233724]: 2025-11-29 08:34:49.919 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:34:49 np0005539552 nova_compute[233724]: 2025-11-29 08:34:49.922 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:34:49 np0005539552 nova_compute[233724]: 2025-11-29 08:34:49.923 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 03:34:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:50.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:50 np0005539552 nova_compute[233724]: 2025-11-29 08:34:50.665 233728 DEBUG nova.compute.manager [req-35d35561-c2d2-45a8-98e6-a938dbc25c16 req-767c6a90-894a-4f98-9791-a53c5d107e7f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Received event network-vif-plugged-a49400ac-d3dd-4f0c-818e-9de880a48505 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:34:50 np0005539552 nova_compute[233724]: 2025-11-29 08:34:50.667 233728 DEBUG oslo_concurrency.lockutils [req-35d35561-c2d2-45a8-98e6-a938dbc25c16 req-767c6a90-894a-4f98-9791-a53c5d107e7f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ce8d0714-58a3-470e-bd4e-056510ea90cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:50 np0005539552 nova_compute[233724]: 2025-11-29 08:34:50.667 233728 DEBUG oslo_concurrency.lockutils [req-35d35561-c2d2-45a8-98e6-a938dbc25c16 req-767c6a90-894a-4f98-9791-a53c5d107e7f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ce8d0714-58a3-470e-bd4e-056510ea90cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:50 np0005539552 nova_compute[233724]: 2025-11-29 08:34:50.668 233728 DEBUG oslo_concurrency.lockutils [req-35d35561-c2d2-45a8-98e6-a938dbc25c16 req-767c6a90-894a-4f98-9791-a53c5d107e7f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ce8d0714-58a3-470e-bd4e-056510ea90cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:50 np0005539552 nova_compute[233724]: 2025-11-29 08:34:50.668 233728 DEBUG nova.compute.manager [req-35d35561-c2d2-45a8-98e6-a938dbc25c16 req-767c6a90-894a-4f98-9791-a53c5d107e7f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] No waiting events found dispatching network-vif-plugged-a49400ac-d3dd-4f0c-818e-9de880a48505 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:34:50 np0005539552 nova_compute[233724]: 2025-11-29 08:34:50.669 233728 WARNING nova.compute.manager [req-35d35561-c2d2-45a8-98e6-a938dbc25c16 req-767c6a90-894a-4f98-9791-a53c5d107e7f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Received unexpected event network-vif-plugged-a49400ac-d3dd-4f0c-818e-9de880a48505 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:34:51 np0005539552 NetworkManager[48926]: <info>  [1764405291.0008] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/351)
Nov 29 03:34:51 np0005539552 NetworkManager[48926]: <info>  [1764405291.0023] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/352)
Nov 29 03:34:51 np0005539552 nova_compute[233724]: 2025-11-29 08:34:51.001 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:51 np0005539552 nova_compute[233724]: 2025-11-29 08:34:51.196 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:51 np0005539552 ovn_controller[133798]: 2025-11-29T08:34:51Z|00798|binding|INFO|Releasing lport c88b07d7-f4c8-49a1-9950-8275afef03b1 from this chassis (sb_readonly=0)
Nov 29 03:34:51 np0005539552 nova_compute[233724]: 2025-11-29 08:34:51.225 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:51 np0005539552 nova_compute[233724]: 2025-11-29 08:34:51.280 233728 DEBUG nova.compute.manager [req-950ff2ea-caea-4d8c-8a7b-74f34503f189 req-5496033c-fdb5-420e-9c13-eb69cf6e89c7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Received event network-changed-a49400ac-d3dd-4f0c-818e-9de880a48505 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:34:51 np0005539552 nova_compute[233724]: 2025-11-29 08:34:51.281 233728 DEBUG nova.compute.manager [req-950ff2ea-caea-4d8c-8a7b-74f34503f189 req-5496033c-fdb5-420e-9c13-eb69cf6e89c7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Refreshing instance network info cache due to event network-changed-a49400ac-d3dd-4f0c-818e-9de880a48505. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:34:51 np0005539552 nova_compute[233724]: 2025-11-29 08:34:51.282 233728 DEBUG oslo_concurrency.lockutils [req-950ff2ea-caea-4d8c-8a7b-74f34503f189 req-5496033c-fdb5-420e-9c13-eb69cf6e89c7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-ce8d0714-58a3-470e-bd4e-056510ea90cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:34:51 np0005539552 nova_compute[233724]: 2025-11-29 08:34:51.282 233728 DEBUG oslo_concurrency.lockutils [req-950ff2ea-caea-4d8c-8a7b-74f34503f189 req-5496033c-fdb5-420e-9c13-eb69cf6e89c7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-ce8d0714-58a3-470e-bd4e-056510ea90cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:34:51 np0005539552 nova_compute[233724]: 2025-11-29 08:34:51.283 233728 DEBUG nova.network.neutron [req-950ff2ea-caea-4d8c-8a7b-74f34503f189 req-5496033c-fdb5-420e-9c13-eb69cf6e89c7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Refreshing network info cache for port a49400ac-d3dd-4f0c-818e-9de880a48505 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:34:51 np0005539552 nova_compute[233724]: 2025-11-29 08:34:51.391 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:34:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:51.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:34:51 np0005539552 nova_compute[233724]: 2025-11-29 08:34:51.923 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:52.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:52 np0005539552 nova_compute[233724]: 2025-11-29 08:34:52.635 233728 DEBUG nova.network.neutron [req-950ff2ea-caea-4d8c-8a7b-74f34503f189 req-5496033c-fdb5-420e-9c13-eb69cf6e89c7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Updated VIF entry in instance network info cache for port a49400ac-d3dd-4f0c-818e-9de880a48505. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:34:52 np0005539552 nova_compute[233724]: 2025-11-29 08:34:52.635 233728 DEBUG nova.network.neutron [req-950ff2ea-caea-4d8c-8a7b-74f34503f189 req-5496033c-fdb5-420e-9c13-eb69cf6e89c7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Updating instance_info_cache with network_info: [{"id": "a49400ac-d3dd-4f0c-818e-9de880a48505", "address": "fa:16:3e:f3:44:de", "network": {"id": "0183ad73-05c1-46e4-ba3e-b87d7a948c3b", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1280517693-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368e3a44279843f5947188dd045d65b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa49400ac-d3", "ovs_interfaceid": "a49400ac-d3dd-4f0c-818e-9de880a48505", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:34:52 np0005539552 nova_compute[233724]: 2025-11-29 08:34:52.746 233728 DEBUG oslo_concurrency.lockutils [req-950ff2ea-caea-4d8c-8a7b-74f34503f189 req-5496033c-fdb5-420e-9c13-eb69cf6e89c7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-ce8d0714-58a3-470e-bd4e-056510ea90cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:34:52 np0005539552 nova_compute[233724]: 2025-11-29 08:34:52.942 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:34:52 np0005539552 nova_compute[233724]: 2025-11-29 08:34:52.942 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:34:52 np0005539552 nova_compute[233724]: 2025-11-29 08:34:52.943 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:34:53 np0005539552 nova_compute[233724]: 2025-11-29 08:34:53.075 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "refresh_cache-ce8d0714-58a3-470e-bd4e-056510ea90cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:34:53 np0005539552 nova_compute[233724]: 2025-11-29 08:34:53.075 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquired lock "refresh_cache-ce8d0714-58a3-470e-bd4e-056510ea90cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:34:53 np0005539552 nova_compute[233724]: 2025-11-29 08:34:53.076 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:34:53 np0005539552 nova_compute[233724]: 2025-11-29 08:34:53.076 233728 DEBUG nova.objects.instance [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ce8d0714-58a3-470e-bd4e-056510ea90cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:34:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:53.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:34:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:54.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:34:54 np0005539552 nova_compute[233724]: 2025-11-29 08:34:54.607 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Updating instance_info_cache with network_info: [{"id": "a49400ac-d3dd-4f0c-818e-9de880a48505", "address": "fa:16:3e:f3:44:de", "network": {"id": "0183ad73-05c1-46e4-ba3e-b87d7a948c3b", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1280517693-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368e3a44279843f5947188dd045d65b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa49400ac-d3", "ovs_interfaceid": "a49400ac-d3dd-4f0c-818e-9de880a48505", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:34:54 np0005539552 nova_compute[233724]: 2025-11-29 08:34:54.624 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Releasing lock "refresh_cache-ce8d0714-58a3-470e-bd4e-056510ea90cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:34:54 np0005539552 nova_compute[233724]: 2025-11-29 08:34:54.624 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:34:55 np0005539552 podman[306153]: 2025-11-29 08:34:55.029216918 +0000 UTC m=+0.103441214 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:34:55 np0005539552 podman[306154]: 2025-11-29 08:34:55.029904296 +0000 UTC m=+0.103541546 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:34:55 np0005539552 podman[306155]: 2025-11-29 08:34:55.084880896 +0000 UTC m=+0.150877671 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 03:34:55 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:34:55 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:34:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:34:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:55.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:34:56 np0005539552 nova_compute[233724]: 2025-11-29 08:34:56.394 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:56.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:56 np0005539552 nova_compute[233724]: 2025-11-29 08:34:56.927 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:57.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:58.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:58 np0005539552 ceph-osd[79800]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Nov 29 03:34:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:34:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:59.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:00.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:00 np0005539552 nova_compute[233724]: 2025-11-29 08:35:00.602 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:01 np0005539552 nova_compute[233724]: 2025-11-29 08:35:01.398 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:01.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:01 np0005539552 nova_compute[233724]: 2025-11-29 08:35:01.926 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:02.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:03 np0005539552 ovn_controller[133798]: 2025-11-29T08:35:03Z|00102|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f3:44:de 10.100.0.3
Nov 29 03:35:03 np0005539552 ovn_controller[133798]: 2025-11-29T08:35:03Z|00103|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f3:44:de 10.100.0.3
Nov 29 03:35:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:03.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:04.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:05.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:05 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:35:05 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2392422890' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:35:06 np0005539552 nova_compute[233724]: 2025-11-29 08:35:06.401 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:06.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:06 np0005539552 nova_compute[233724]: 2025-11-29 08:35:06.928 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:07.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:08 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:35:08 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 11K writes, 60K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.03 MB/s#012Cumulative WAL: 11K writes, 11K syncs, 1.00 writes per sync, written: 0.12 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1650 writes, 8000 keys, 1650 commit groups, 1.0 writes per commit group, ingest: 16.34 MB, 0.03 MB/s#012Interval WAL: 1650 writes, 1650 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     24.0      3.12              0.23        36    0.087       0      0       0.0       0.0#012  L6      1/0   12.20 MB   0.0      0.4     0.1      0.4       0.4      0.0       0.0   5.0     94.5     80.9      4.63              1.02        35    0.132    245K    18K       0.0       0.0#012 Sum      1/0   12.20 MB   0.0      0.4     0.1      0.4       0.4      0.1       0.0   6.0     56.4     58.0      7.75              1.25        71    0.109    245K    18K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.8    123.3    125.7      0.57              0.25        10    0.057     47K   2628       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.4     0.1      0.4       0.4      0.0       0.0   0.0     94.5     80.9      4.63              1.02        35    0.132    245K    18K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     24.3      3.08              0.23        35    0.088       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.044       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 4800.1 total, 600.0 interval#012Flush(GB): cumulative 0.073, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.44 GB write, 0.09 MB/s write, 0.43 GB read, 0.09 MB/s read, 7.8 seconds#012Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.6 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560bf13bd1f0#2 capacity: 304.00 MB usage: 47.52 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000332 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2685,45.69 MB,15.0297%) FilterBlock(71,687.36 KB,0.220806%) IndexBlock(71,1.16 MB,0.380667%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 03:35:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:08.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:09.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:10.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:11 np0005539552 nova_compute[233724]: 2025-11-29 08:35:11.404 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:11.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:11 np0005539552 nova_compute[233724]: 2025-11-29 08:35:11.931 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:12.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e365 e365: 3 total, 3 up, 3 in
Nov 29 03:35:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:13.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:13 np0005539552 nova_compute[233724]: 2025-11-29 08:35:13.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:14.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:15.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:16 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:35:16.313 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:35:16 np0005539552 nova_compute[233724]: 2025-11-29 08:35:16.314 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:16 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:35:16.315 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:35:16 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:35:16.317 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:16 np0005539552 nova_compute[233724]: 2025-11-29 08:35:16.406 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:16.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:16 np0005539552 nova_compute[233724]: 2025-11-29 08:35:16.933 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:16 np0005539552 nova_compute[233724]: 2025-11-29 08:35:16.947 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:16 np0005539552 nova_compute[233724]: 2025-11-29 08:35:16.948 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 03:35:16 np0005539552 nova_compute[233724]: 2025-11-29 08:35:16.963 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 03:35:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:17.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:18.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e366 e366: 3 total, 3 up, 3 in
Nov 29 03:35:19 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e367 e367: 3 total, 3 up, 3 in
Nov 29 03:35:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:19.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:20.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:35:20.640 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:35:20.641 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:35:20.641 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:21 np0005539552 nova_compute[233724]: 2025-11-29 08:35:21.409 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:21.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:21 np0005539552 nova_compute[233724]: 2025-11-29 08:35:21.935 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:22.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:23.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:24.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:25 np0005539552 podman[306362]: 2025-11-29 08:35:25.208442051 +0000 UTC m=+0.059985385 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:35:25 np0005539552 podman[306361]: 2025-11-29 08:35:25.231900562 +0000 UTC m=+0.085532572 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:35:25 np0005539552 podman[306363]: 2025-11-29 08:35:25.28050066 +0000 UTC m=+0.121378507 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:35:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:25.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:25 np0005539552 nova_compute[233724]: 2025-11-29 08:35:25.892 233728 DEBUG oslo_concurrency.lockutils [None req-25423bdc-124d-469c-8284-2cc8dd70919e bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquiring lock "ce8d0714-58a3-470e-bd4e-056510ea90cd" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:25 np0005539552 nova_compute[233724]: 2025-11-29 08:35:25.893 233728 DEBUG oslo_concurrency.lockutils [None req-25423bdc-124d-469c-8284-2cc8dd70919e bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "ce8d0714-58a3-470e-bd4e-056510ea90cd" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:25 np0005539552 nova_compute[233724]: 2025-11-29 08:35:25.916 233728 DEBUG nova.objects.instance [None req-25423bdc-124d-469c-8284-2cc8dd70919e bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lazy-loading 'flavor' on Instance uuid ce8d0714-58a3-470e-bd4e-056510ea90cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:35:25 np0005539552 nova_compute[233724]: 2025-11-29 08:35:25.957 233728 DEBUG oslo_concurrency.lockutils [None req-25423bdc-124d-469c-8284-2cc8dd70919e bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "ce8d0714-58a3-470e-bd4e-056510ea90cd" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.064s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:26 np0005539552 nova_compute[233724]: 2025-11-29 08:35:26.319 233728 DEBUG oslo_concurrency.lockutils [None req-25423bdc-124d-469c-8284-2cc8dd70919e bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquiring lock "ce8d0714-58a3-470e-bd4e-056510ea90cd" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:26 np0005539552 nova_compute[233724]: 2025-11-29 08:35:26.320 233728 DEBUG oslo_concurrency.lockutils [None req-25423bdc-124d-469c-8284-2cc8dd70919e bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "ce8d0714-58a3-470e-bd4e-056510ea90cd" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:26 np0005539552 nova_compute[233724]: 2025-11-29 08:35:26.320 233728 INFO nova.compute.manager [None req-25423bdc-124d-469c-8284-2cc8dd70919e bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Attaching volume 2f39efc2-fe1b-41a9-96e3-2aca65cf0fa6 to /dev/vdb#033[00m
Nov 29 03:35:26 np0005539552 nova_compute[233724]: 2025-11-29 08:35:26.411 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:26 np0005539552 nova_compute[233724]: 2025-11-29 08:35:26.484 233728 DEBUG os_brick.utils [None req-25423bdc-124d-469c-8284-2cc8dd70919e bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:35:26 np0005539552 nova_compute[233724]: 2025-11-29 08:35:26.486 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:35:26 np0005539552 nova_compute[233724]: 2025-11-29 08:35:26.506 243418 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:35:26 np0005539552 nova_compute[233724]: 2025-11-29 08:35:26.506 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[adbe7fe6-4b9b-4271-8b73-a4372a1f5a90]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:26 np0005539552 nova_compute[233724]: 2025-11-29 08:35:26.509 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:35:26 np0005539552 nova_compute[233724]: 2025-11-29 08:35:26.519 243418 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:35:26 np0005539552 nova_compute[233724]: 2025-11-29 08:35:26.520 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[31bb608e-d33f-4494-8841-623489c23a37]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0e73df9328', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:26 np0005539552 nova_compute[233724]: 2025-11-29 08:35:26.522 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:35:26 np0005539552 nova_compute[233724]: 2025-11-29 08:35:26.533 243418 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:35:26 np0005539552 nova_compute[233724]: 2025-11-29 08:35:26.534 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[d29067c1-f98a-4fdd-a083-467403ff514d]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:26 np0005539552 nova_compute[233724]: 2025-11-29 08:35:26.536 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[127b5f19-7d8a-4cca-a1f6-368657b43673]: (4, '6fbde64a-d978-4f1a-a29d-a77e1f5a1987') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:26 np0005539552 nova_compute[233724]: 2025-11-29 08:35:26.537 233728 DEBUG oslo_concurrency.processutils [None req-25423bdc-124d-469c-8284-2cc8dd70919e bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:35:26 np0005539552 nova_compute[233724]: 2025-11-29 08:35:26.575 233728 DEBUG oslo_concurrency.processutils [None req-25423bdc-124d-469c-8284-2cc8dd70919e bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] CMD "nvme version" returned: 0 in 0.039s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:35:26 np0005539552 nova_compute[233724]: 2025-11-29 08:35:26.578 233728 DEBUG os_brick.initiator.connectors.lightos [None req-25423bdc-124d-469c-8284-2cc8dd70919e bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:35:26 np0005539552 nova_compute[233724]: 2025-11-29 08:35:26.579 233728 DEBUG os_brick.initiator.connectors.lightos [None req-25423bdc-124d-469c-8284-2cc8dd70919e bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:35:26 np0005539552 nova_compute[233724]: 2025-11-29 08:35:26.579 233728 DEBUG os_brick.initiator.connectors.lightos [None req-25423bdc-124d-469c-8284-2cc8dd70919e bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:35:26 np0005539552 nova_compute[233724]: 2025-11-29 08:35:26.580 233728 DEBUG os_brick.utils [None req-25423bdc-124d-469c-8284-2cc8dd70919e bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] <== get_connector_properties: return (94ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0e73df9328', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '6fbde64a-d978-4f1a-a29d-a77e1f5a1987', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:35:26 np0005539552 nova_compute[233724]: 2025-11-29 08:35:26.580 233728 DEBUG nova.virt.block_device [None req-25423bdc-124d-469c-8284-2cc8dd70919e bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Updating existing volume attachment record: 5353f30b-066b-4573-9b2e-963b5db7f711 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:35:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:26.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:26 np0005539552 nova_compute[233724]: 2025-11-29 08:35:26.938 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:27 np0005539552 nova_compute[233724]: 2025-11-29 08:35:27.268 233728 DEBUG nova.objects.instance [None req-25423bdc-124d-469c-8284-2cc8dd70919e bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lazy-loading 'flavor' on Instance uuid ce8d0714-58a3-470e-bd4e-056510ea90cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:35:27 np0005539552 nova_compute[233724]: 2025-11-29 08:35:27.296 233728 DEBUG nova.virt.libvirt.driver [None req-25423bdc-124d-469c-8284-2cc8dd70919e bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Attempting to attach volume 2f39efc2-fe1b-41a9-96e3-2aca65cf0fa6 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Nov 29 03:35:27 np0005539552 nova_compute[233724]: 2025-11-29 08:35:27.299 233728 DEBUG nova.virt.libvirt.guest [None req-25423bdc-124d-469c-8284-2cc8dd70919e bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 03:35:27 np0005539552 nova_compute[233724]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:35:27 np0005539552 nova_compute[233724]:  <source protocol="rbd" name="volumes/volume-2f39efc2-fe1b-41a9-96e3-2aca65cf0fa6">
Nov 29 03:35:27 np0005539552 nova_compute[233724]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:35:27 np0005539552 nova_compute[233724]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:35:27 np0005539552 nova_compute[233724]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:35:27 np0005539552 nova_compute[233724]:  </source>
Nov 29 03:35:27 np0005539552 nova_compute[233724]:  <auth username="openstack">
Nov 29 03:35:27 np0005539552 nova_compute[233724]:    <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:35:27 np0005539552 nova_compute[233724]:  </auth>
Nov 29 03:35:27 np0005539552 nova_compute[233724]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:35:27 np0005539552 nova_compute[233724]:  <serial>2f39efc2-fe1b-41a9-96e3-2aca65cf0fa6</serial>
Nov 29 03:35:27 np0005539552 nova_compute[233724]: </disk>
Nov 29 03:35:27 np0005539552 nova_compute[233724]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 03:35:27 np0005539552 nova_compute[233724]: 2025-11-29 08:35:27.441 233728 DEBUG nova.virt.libvirt.driver [None req-25423bdc-124d-469c-8284-2cc8dd70919e bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:35:27 np0005539552 nova_compute[233724]: 2025-11-29 08:35:27.444 233728 DEBUG nova.virt.libvirt.driver [None req-25423bdc-124d-469c-8284-2cc8dd70919e bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:35:27 np0005539552 nova_compute[233724]: 2025-11-29 08:35:27.444 233728 DEBUG nova.virt.libvirt.driver [None req-25423bdc-124d-469c-8284-2cc8dd70919e bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:35:27 np0005539552 nova_compute[233724]: 2025-11-29 08:35:27.444 233728 DEBUG nova.virt.libvirt.driver [None req-25423bdc-124d-469c-8284-2cc8dd70919e bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] No VIF found with MAC fa:16:3e:f3:44:de, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:35:27 np0005539552 nova_compute[233724]: 2025-11-29 08:35:27.676 233728 DEBUG oslo_concurrency.lockutils [None req-25423bdc-124d-469c-8284-2cc8dd70919e bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "ce8d0714-58a3-470e-bd4e-056510ea90cd" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.357s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:27.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:28.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:29 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e368 e368: 3 total, 3 up, 3 in
Nov 29 03:35:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:29.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:30.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:31 np0005539552 nova_compute[233724]: 2025-11-29 08:35:31.415 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:31 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:35:31 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3438126296' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:35:31 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:35:31 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3438126296' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:35:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:35:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:31.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:35:31 np0005539552 nova_compute[233724]: 2025-11-29 08:35:31.940 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:32.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e368 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:33.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:34.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:35 np0005539552 ovn_controller[133798]: 2025-11-29T08:35:35Z|00799|binding|INFO|Releasing lport c88b07d7-f4c8-49a1-9950-8275afef03b1 from this chassis (sb_readonly=0)
Nov 29 03:35:35 np0005539552 nova_compute[233724]: 2025-11-29 08:35:35.156 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:35 np0005539552 nova_compute[233724]: 2025-11-29 08:35:35.788 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:35 np0005539552 nova_compute[233724]: 2025-11-29 08:35:35.822 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Triggering sync for uuid ce8d0714-58a3-470e-bd4e-056510ea90cd _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 29 03:35:35 np0005539552 nova_compute[233724]: 2025-11-29 08:35:35.823 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "ce8d0714-58a3-470e-bd4e-056510ea90cd" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:35 np0005539552 nova_compute[233724]: 2025-11-29 08:35:35.824 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "ce8d0714-58a3-470e-bd4e-056510ea90cd" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:35 np0005539552 nova_compute[233724]: 2025-11-29 08:35:35.853 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "ce8d0714-58a3-470e-bd4e-056510ea90cd" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.029s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:35.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:36 np0005539552 nova_compute[233724]: 2025-11-29 08:35:36.418 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:36.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:36 np0005539552 nova_compute[233724]: 2025-11-29 08:35:36.942 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:37.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:37 np0005539552 nova_compute[233724]: 2025-11-29 08:35:37.910 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:37 np0005539552 nova_compute[233724]: 2025-11-29 08:35:37.922 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e369 e369: 3 total, 3 up, 3 in
Nov 29 03:35:37 np0005539552 nova_compute[233724]: 2025-11-29 08:35:37.944 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:37 np0005539552 nova_compute[233724]: 2025-11-29 08:35:37.944 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:37 np0005539552 nova_compute[233724]: 2025-11-29 08:35:37.944 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:37 np0005539552 nova_compute[233724]: 2025-11-29 08:35:37.945 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:35:37 np0005539552 nova_compute[233724]: 2025-11-29 08:35:37.945 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:35:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:35:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/875590461' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:35:38 np0005539552 nova_compute[233724]: 2025-11-29 08:35:38.444 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:35:38 np0005539552 nova_compute[233724]: 2025-11-29 08:35:38.549 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-000000ac as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:35:38 np0005539552 nova_compute[233724]: 2025-11-29 08:35:38.549 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-000000ac as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:35:38 np0005539552 nova_compute[233724]: 2025-11-29 08:35:38.549 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-000000ac as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:35:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:38.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:38 np0005539552 nova_compute[233724]: 2025-11-29 08:35:38.716 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:35:38 np0005539552 nova_compute[233724]: 2025-11-29 08:35:38.718 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4050MB free_disk=20.853771209716797GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:35:38 np0005539552 nova_compute[233724]: 2025-11-29 08:35:38.718 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:38 np0005539552 nova_compute[233724]: 2025-11-29 08:35:38.719 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:38 np0005539552 nova_compute[233724]: 2025-11-29 08:35:38.852 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance ce8d0714-58a3-470e-bd4e-056510ea90cd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:35:38 np0005539552 nova_compute[233724]: 2025-11-29 08:35:38.852 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:35:38 np0005539552 nova_compute[233724]: 2025-11-29 08:35:38.853 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:35:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:35:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3578082272' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:35:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:35:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3578082272' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:35:39 np0005539552 nova_compute[233724]: 2025-11-29 08:35:39.098 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:35:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e370 e370: 3 total, 3 up, 3 in
Nov 29 03:35:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:35:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3309131993' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:35:39 np0005539552 nova_compute[233724]: 2025-11-29 08:35:39.508 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:35:39 np0005539552 nova_compute[233724]: 2025-11-29 08:35:39.518 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:35:39 np0005539552 nova_compute[233724]: 2025-11-29 08:35:39.538 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:35:39 np0005539552 nova_compute[233724]: 2025-11-29 08:35:39.564 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:35:39 np0005539552 nova_compute[233724]: 2025-11-29 08:35:39.565 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.846s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:39.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:40 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e371 e371: 3 total, 3 up, 3 in
Nov 29 03:35:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:40.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:41 np0005539552 nova_compute[233724]: 2025-11-29 08:35:41.422 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:41.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:41 np0005539552 nova_compute[233724]: 2025-11-29 08:35:41.944 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:42.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:43.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:43 np0005539552 nova_compute[233724]: 2025-11-29 08:35:43.985 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:44 np0005539552 nova_compute[233724]: 2025-11-29 08:35:44.566 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:44 np0005539552 nova_compute[233724]: 2025-11-29 08:35:44.567 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:35:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:44.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:44 np0005539552 nova_compute[233724]: 2025-11-29 08:35:44.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:45.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:46 np0005539552 nova_compute[233724]: 2025-11-29 08:35:46.424 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:46.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:46 np0005539552 nova_compute[233724]: 2025-11-29 08:35:46.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:46 np0005539552 nova_compute[233724]: 2025-11-29 08:35:46.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:46 np0005539552 nova_compute[233724]: 2025-11-29 08:35:46.946 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:47.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:47 np0005539552 nova_compute[233724]: 2025-11-29 08:35:47.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:47 np0005539552 nova_compute[233724]: 2025-11-29 08:35:47.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:48.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:49 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e372 e372: 3 total, 3 up, 3 in
Nov 29 03:35:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:49.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:50.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:50 np0005539552 nova_compute[233724]: 2025-11-29 08:35:50.919 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:51 np0005539552 nova_compute[233724]: 2025-11-29 08:35:51.426 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:51 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:35:51 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.3 total, 600.0 interval#012Cumulative writes: 63K writes, 252K keys, 63K commit groups, 1.0 writes per commit group, ingest: 0.25 GB, 0.05 MB/s#012Cumulative WAL: 63K writes, 23K syncs, 2.72 writes per sync, written: 0.25 GB, 0.05 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 14K writes, 54K keys, 14K commit groups, 1.0 writes per commit group, ingest: 58.23 MB, 0.10 MB/s#012Interval WAL: 14K writes, 5564 syncs, 2.58 writes per sync, written: 0.06 GB, 0.10 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 03:35:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:51.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:51 np0005539552 nova_compute[233724]: 2025-11-29 08:35:51.950 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:52.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:52 np0005539552 nova_compute[233724]: 2025-11-29 08:35:52.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:52 np0005539552 nova_compute[233724]: 2025-11-29 08:35:52.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:35:52 np0005539552 nova_compute[233724]: 2025-11-29 08:35:52.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:35:53 np0005539552 nova_compute[233724]: 2025-11-29 08:35:53.170 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "refresh_cache-ce8d0714-58a3-470e-bd4e-056510ea90cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:35:53 np0005539552 nova_compute[233724]: 2025-11-29 08:35:53.170 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquired lock "refresh_cache-ce8d0714-58a3-470e-bd4e-056510ea90cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:35:53 np0005539552 nova_compute[233724]: 2025-11-29 08:35:53.171 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:35:53 np0005539552 nova_compute[233724]: 2025-11-29 08:35:53.171 233728 DEBUG nova.objects.instance [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ce8d0714-58a3-470e-bd4e-056510ea90cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:35:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:53.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:54.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:55 np0005539552 ovn_controller[133798]: 2025-11-29T08:35:55Z|00800|binding|INFO|Releasing lport c88b07d7-f4c8-49a1-9950-8275afef03b1 from this chassis (sb_readonly=0)
Nov 29 03:35:55 np0005539552 nova_compute[233724]: 2025-11-29 08:35:55.697 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:55.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:56 np0005539552 nova_compute[233724]: 2025-11-29 08:35:56.427 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:56.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:56 np0005539552 podman[306619]: 2025-11-29 08:35:56.910680919 +0000 UTC m=+1.184785220 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:35:56 np0005539552 podman[306618]: 2025-11-29 08:35:56.915682093 +0000 UTC m=+1.189668210 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:35:56 np0005539552 podman[306620]: 2025-11-29 08:35:56.939693389 +0000 UTC m=+1.198489987 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Nov 29 03:35:56 np0005539552 nova_compute[233724]: 2025-11-29 08:35:56.951 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:57.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:58 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:35:58 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:35:58 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:35:58 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:35:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:58.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:59 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #118. Immutable memtables: 0.
Nov 29 03:35:59 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:35:59.327860) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:35:59 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 118
Nov 29 03:35:59 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405359327902, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 1117, "num_deletes": 256, "total_data_size": 2278123, "memory_usage": 2312928, "flush_reason": "Manual Compaction"}
Nov 29 03:35:59 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #119: started
Nov 29 03:35:59 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405359336536, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 119, "file_size": 1492367, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 59859, "largest_seqno": 60971, "table_properties": {"data_size": 1487202, "index_size": 2625, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 10711, "raw_average_key_size": 18, "raw_value_size": 1476740, "raw_average_value_size": 2586, "num_data_blocks": 114, "num_entries": 571, "num_filter_entries": 571, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405286, "oldest_key_time": 1764405286, "file_creation_time": 1764405359, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:35:59 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 8715 microseconds, and 3781 cpu microseconds.
Nov 29 03:35:59 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:35:59 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:35:59.336572) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #119: 1492367 bytes OK
Nov 29 03:35:59 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:35:59.336588) [db/memtable_list.cc:519] [default] Level-0 commit table #119 started
Nov 29 03:35:59 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:35:59.338114) [db/memtable_list.cc:722] [default] Level-0 commit table #119: memtable #1 done
Nov 29 03:35:59 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:35:59.338125) EVENT_LOG_v1 {"time_micros": 1764405359338121, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:35:59 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:35:59.338138) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:35:59 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 2272580, prev total WAL file size 2276773, number of live WAL files 2.
Nov 29 03:35:59 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000115.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:35:59 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:35:59.338711) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323530' seq:72057594037927935, type:22 .. '6B7600353034' seq:0, type:0; will stop at (end)
Nov 29 03:35:59 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:35:59 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [119(1457KB)], [117(12MB)]
Nov 29 03:35:59 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405359338748, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [119], "files_L6": [117], "score": -1, "input_data_size": 14286355, "oldest_snapshot_seqno": -1}
Nov 29 03:35:59 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:35:59 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:35:59 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:35:59 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #120: 9264 keys, 13172989 bytes, temperature: kUnknown
Nov 29 03:35:59 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405359432143, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 120, "file_size": 13172989, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13111812, "index_size": 36937, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23173, "raw_key_size": 242549, "raw_average_key_size": 26, "raw_value_size": 12947402, "raw_average_value_size": 1397, "num_data_blocks": 1421, "num_entries": 9264, "num_filter_entries": 9264, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764405359, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 120, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:35:59 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:35:59 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:35:59.432383) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 13172989 bytes
Nov 29 03:35:59 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:35:59.433665) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.9 rd, 140.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 12.2 +0.0 blob) out(12.6 +0.0 blob), read-write-amplify(18.4) write-amplify(8.8) OK, records in: 9795, records dropped: 531 output_compression: NoCompression
Nov 29 03:35:59 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:35:59.433680) EVENT_LOG_v1 {"time_micros": 1764405359433673, "job": 74, "event": "compaction_finished", "compaction_time_micros": 93459, "compaction_time_cpu_micros": 30438, "output_level": 6, "num_output_files": 1, "total_output_size": 13172989, "num_input_records": 9795, "num_output_records": 9264, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:35:59 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:35:59 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405359434017, "job": 74, "event": "table_file_deletion", "file_number": 119}
Nov 29 03:35:59 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000117.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:35:59 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405359436264, "job": 74, "event": "table_file_deletion", "file_number": 117}
Nov 29 03:35:59 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:35:59.338643) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:35:59 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:35:59.436377) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:35:59 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:35:59.436384) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:35:59 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:35:59.436386) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:35:59 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:35:59.436388) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:35:59 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:35:59.436390) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:35:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:35:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:59.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:00.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:01 np0005539552 nova_compute[233724]: 2025-11-29 08:36:01.130 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Updating instance_info_cache with network_info: [{"id": "a49400ac-d3dd-4f0c-818e-9de880a48505", "address": "fa:16:3e:f3:44:de", "network": {"id": "0183ad73-05c1-46e4-ba3e-b87d7a948c3b", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1280517693-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368e3a44279843f5947188dd045d65b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa49400ac-d3", "ovs_interfaceid": "a49400ac-d3dd-4f0c-818e-9de880a48505", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:36:01 np0005539552 nova_compute[233724]: 2025-11-29 08:36:01.167 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Releasing lock "refresh_cache-ce8d0714-58a3-470e-bd4e-056510ea90cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:36:01 np0005539552 nova_compute[233724]: 2025-11-29 08:36:01.167 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:36:01 np0005539552 nova_compute[233724]: 2025-11-29 08:36:01.430 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:01.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:01 np0005539552 nova_compute[233724]: 2025-11-29 08:36:01.953 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:02.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:03.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:04.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:05.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:06 np0005539552 nova_compute[233724]: 2025-11-29 08:36:06.432 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:06.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:06 np0005539552 nova_compute[233724]: 2025-11-29 08:36:06.957 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:07.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:08 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:36:08 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:36:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:08.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:09.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:10.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:11 np0005539552 nova_compute[233724]: 2025-11-29 08:36:11.435 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 03:36:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:11.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 03:36:11 np0005539552 nova_compute[233724]: 2025-11-29 08:36:11.960 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:12 np0005539552 nova_compute[233724]: 2025-11-29 08:36:12.533 233728 DEBUG oslo_concurrency.lockutils [None req-22b0cc19-cb52-4be7-ab9e-aa7e57441578 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquiring lock "ce8d0714-58a3-470e-bd4e-056510ea90cd" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:12 np0005539552 nova_compute[233724]: 2025-11-29 08:36:12.534 233728 DEBUG oslo_concurrency.lockutils [None req-22b0cc19-cb52-4be7-ab9e-aa7e57441578 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "ce8d0714-58a3-470e-bd4e-056510ea90cd" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:12 np0005539552 nova_compute[233724]: 2025-11-29 08:36:12.553 233728 INFO nova.compute.manager [None req-22b0cc19-cb52-4be7-ab9e-aa7e57441578 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Detaching volume 2f39efc2-fe1b-41a9-96e3-2aca65cf0fa6#033[00m
Nov 29 03:36:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:12.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:12 np0005539552 nova_compute[233724]: 2025-11-29 08:36:12.837 233728 INFO nova.virt.block_device [None req-22b0cc19-cb52-4be7-ab9e-aa7e57441578 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Attempting to driver detach volume 2f39efc2-fe1b-41a9-96e3-2aca65cf0fa6 from mountpoint /dev/vdb#033[00m
Nov 29 03:36:12 np0005539552 nova_compute[233724]: 2025-11-29 08:36:12.851 233728 DEBUG nova.virt.libvirt.driver [None req-22b0cc19-cb52-4be7-ab9e-aa7e57441578 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Attempting to detach device vdb from instance ce8d0714-58a3-470e-bd4e-056510ea90cd from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:36:12 np0005539552 nova_compute[233724]: 2025-11-29 08:36:12.852 233728 DEBUG nova.virt.libvirt.guest [None req-22b0cc19-cb52-4be7-ab9e-aa7e57441578 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:36:12 np0005539552 nova_compute[233724]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:36:12 np0005539552 nova_compute[233724]:  <source protocol="rbd" name="volumes/volume-2f39efc2-fe1b-41a9-96e3-2aca65cf0fa6">
Nov 29 03:36:12 np0005539552 nova_compute[233724]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:36:12 np0005539552 nova_compute[233724]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:36:12 np0005539552 nova_compute[233724]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:36:12 np0005539552 nova_compute[233724]:  </source>
Nov 29 03:36:12 np0005539552 nova_compute[233724]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:36:12 np0005539552 nova_compute[233724]:  <serial>2f39efc2-fe1b-41a9-96e3-2aca65cf0fa6</serial>
Nov 29 03:36:12 np0005539552 nova_compute[233724]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:36:12 np0005539552 nova_compute[233724]: </disk>
Nov 29 03:36:12 np0005539552 nova_compute[233724]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:36:12 np0005539552 nova_compute[233724]: 2025-11-29 08:36:12.861 233728 INFO nova.virt.libvirt.driver [None req-22b0cc19-cb52-4be7-ab9e-aa7e57441578 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Successfully detached device vdb from instance ce8d0714-58a3-470e-bd4e-056510ea90cd from the persistent domain config.#033[00m
Nov 29 03:36:12 np0005539552 nova_compute[233724]: 2025-11-29 08:36:12.861 233728 DEBUG nova.virt.libvirt.driver [None req-22b0cc19-cb52-4be7-ab9e-aa7e57441578 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance ce8d0714-58a3-470e-bd4e-056510ea90cd from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 29 03:36:12 np0005539552 nova_compute[233724]: 2025-11-29 08:36:12.862 233728 DEBUG nova.virt.libvirt.guest [None req-22b0cc19-cb52-4be7-ab9e-aa7e57441578 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:36:12 np0005539552 nova_compute[233724]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:36:12 np0005539552 nova_compute[233724]:  <source protocol="rbd" name="volumes/volume-2f39efc2-fe1b-41a9-96e3-2aca65cf0fa6">
Nov 29 03:36:12 np0005539552 nova_compute[233724]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:36:12 np0005539552 nova_compute[233724]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:36:12 np0005539552 nova_compute[233724]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:36:12 np0005539552 nova_compute[233724]:  </source>
Nov 29 03:36:12 np0005539552 nova_compute[233724]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:36:12 np0005539552 nova_compute[233724]:  <serial>2f39efc2-fe1b-41a9-96e3-2aca65cf0fa6</serial>
Nov 29 03:36:12 np0005539552 nova_compute[233724]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:36:12 np0005539552 nova_compute[233724]: </disk>
Nov 29 03:36:12 np0005539552 nova_compute[233724]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:36:12 np0005539552 nova_compute[233724]: 2025-11-29 08:36:12.993 233728 DEBUG nova.virt.libvirt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Received event <DeviceRemovedEvent: 1764405372.9931874, ce8d0714-58a3-470e-bd4e-056510ea90cd => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 29 03:36:12 np0005539552 nova_compute[233724]: 2025-11-29 08:36:12.997 233728 DEBUG nova.virt.libvirt.driver [None req-22b0cc19-cb52-4be7-ab9e-aa7e57441578 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance ce8d0714-58a3-470e-bd4e-056510ea90cd _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 29 03:36:13 np0005539552 nova_compute[233724]: 2025-11-29 08:36:13.000 233728 INFO nova.virt.libvirt.driver [None req-22b0cc19-cb52-4be7-ab9e-aa7e57441578 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Successfully detached device vdb from instance ce8d0714-58a3-470e-bd4e-056510ea90cd from the live domain config.#033[00m
Nov 29 03:36:13 np0005539552 nova_compute[233724]: 2025-11-29 08:36:13.215 233728 DEBUG nova.objects.instance [None req-22b0cc19-cb52-4be7-ab9e-aa7e57441578 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lazy-loading 'flavor' on Instance uuid ce8d0714-58a3-470e-bd4e-056510ea90cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:36:13 np0005539552 nova_compute[233724]: 2025-11-29 08:36:13.260 233728 DEBUG oslo_concurrency.lockutils [None req-22b0cc19-cb52-4be7-ab9e-aa7e57441578 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "ce8d0714-58a3-470e-bd4e-056510ea90cd" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:13.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:14 np0005539552 nova_compute[233724]: 2025-11-29 08:36:14.060 233728 DEBUG oslo_concurrency.lockutils [None req-cb5225a4-f878-49e2-9853-104e8f1eed59 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquiring lock "ce8d0714-58a3-470e-bd4e-056510ea90cd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:14 np0005539552 nova_compute[233724]: 2025-11-29 08:36:14.061 233728 DEBUG oslo_concurrency.lockutils [None req-cb5225a4-f878-49e2-9853-104e8f1eed59 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "ce8d0714-58a3-470e-bd4e-056510ea90cd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:14 np0005539552 nova_compute[233724]: 2025-11-29 08:36:14.062 233728 DEBUG oslo_concurrency.lockutils [None req-cb5225a4-f878-49e2-9853-104e8f1eed59 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquiring lock "ce8d0714-58a3-470e-bd4e-056510ea90cd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:14 np0005539552 nova_compute[233724]: 2025-11-29 08:36:14.062 233728 DEBUG oslo_concurrency.lockutils [None req-cb5225a4-f878-49e2-9853-104e8f1eed59 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "ce8d0714-58a3-470e-bd4e-056510ea90cd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:14 np0005539552 nova_compute[233724]: 2025-11-29 08:36:14.063 233728 DEBUG oslo_concurrency.lockutils [None req-cb5225a4-f878-49e2-9853-104e8f1eed59 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "ce8d0714-58a3-470e-bd4e-056510ea90cd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:14 np0005539552 nova_compute[233724]: 2025-11-29 08:36:14.065 233728 INFO nova.compute.manager [None req-cb5225a4-f878-49e2-9853-104e8f1eed59 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Terminating instance#033[00m
Nov 29 03:36:14 np0005539552 nova_compute[233724]: 2025-11-29 08:36:14.066 233728 DEBUG nova.compute.manager [None req-cb5225a4-f878-49e2-9853-104e8f1eed59 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:36:14 np0005539552 kernel: tapa49400ac-d3 (unregistering): left promiscuous mode
Nov 29 03:36:14 np0005539552 NetworkManager[48926]: <info>  [1764405374.1312] device (tapa49400ac-d3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:36:14 np0005539552 nova_compute[233724]: 2025-11-29 08:36:14.136 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:14 np0005539552 ovn_controller[133798]: 2025-11-29T08:36:14Z|00801|binding|INFO|Releasing lport a49400ac-d3dd-4f0c-818e-9de880a48505 from this chassis (sb_readonly=0)
Nov 29 03:36:14 np0005539552 ovn_controller[133798]: 2025-11-29T08:36:14Z|00802|binding|INFO|Setting lport a49400ac-d3dd-4f0c-818e-9de880a48505 down in Southbound
Nov 29 03:36:14 np0005539552 ovn_controller[133798]: 2025-11-29T08:36:14Z|00803|binding|INFO|Removing iface tapa49400ac-d3 ovn-installed in OVS
Nov 29 03:36:14 np0005539552 nova_compute[233724]: 2025-11-29 08:36:14.138 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:36:14.142 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:44:de 10.100.0.3'], port_security=['fa:16:3e:f3:44:de 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ce8d0714-58a3-470e-bd4e-056510ea90cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0183ad73-05c1-46e4-ba3e-b87d7a948c3b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '368e3a44279843f5947188dd045d65b6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a027b644-782d-4129-90c4-f9aea2a9db99', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.185'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=93400fd2-d19f-44bb-bf19-75f9854fcf6d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=a49400ac-d3dd-4f0c-818e-9de880a48505) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:36:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:36:14.144 143400 INFO neutron.agent.ovn.metadata.agent [-] Port a49400ac-d3dd-4f0c-818e-9de880a48505 in datapath 0183ad73-05c1-46e4-ba3e-b87d7a948c3b unbound from our chassis#033[00m
Nov 29 03:36:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:36:14.145 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0183ad73-05c1-46e4-ba3e-b87d7a948c3b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:36:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:36:14.146 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[1f6c60d5-69e3-4c93-acea-232d958f8682]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:36:14.147 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b namespace which is not needed anymore#033[00m
Nov 29 03:36:14 np0005539552 nova_compute[233724]: 2025-11-29 08:36:14.154 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:14 np0005539552 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d000000ac.scope: Deactivated successfully.
Nov 29 03:36:14 np0005539552 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d000000ac.scope: Consumed 18.042s CPU time.
Nov 29 03:36:14 np0005539552 systemd-machined[196379]: Machine qemu-81-instance-000000ac terminated.
Nov 29 03:36:14 np0005539552 neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b[306126]: [NOTICE]   (306131) : haproxy version is 2.8.14-c23fe91
Nov 29 03:36:14 np0005539552 neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b[306126]: [NOTICE]   (306131) : path to executable is /usr/sbin/haproxy
Nov 29 03:36:14 np0005539552 neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b[306126]: [WARNING]  (306131) : Exiting Master process...
Nov 29 03:36:14 np0005539552 neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b[306126]: [WARNING]  (306131) : Exiting Master process...
Nov 29 03:36:14 np0005539552 neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b[306126]: [ALERT]    (306131) : Current worker (306137) exited with code 143 (Terminated)
Nov 29 03:36:14 np0005539552 neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b[306126]: [WARNING]  (306131) : All workers exited. Exiting... (0)
Nov 29 03:36:14 np0005539552 systemd[1]: libpod-b2fff58b40d2afaa4a7f348feac3c04fcb00cb8a0659e44ffff72583aab39dd5.scope: Deactivated successfully.
Nov 29 03:36:14 np0005539552 podman[306929]: 2025-11-29 08:36:14.281684913 +0000 UTC m=+0.045500125 container died b2fff58b40d2afaa4a7f348feac3c04fcb00cb8a0659e44ffff72583aab39dd5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:36:14 np0005539552 systemd[1]: var-lib-containers-storage-overlay-f7c5c1d2b0c81674a21cd704363954ad3e14e9f06dccaed0b65d9feb3829e913-merged.mount: Deactivated successfully.
Nov 29 03:36:14 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b2fff58b40d2afaa4a7f348feac3c04fcb00cb8a0659e44ffff72583aab39dd5-userdata-shm.mount: Deactivated successfully.
Nov 29 03:36:14 np0005539552 nova_compute[233724]: 2025-11-29 08:36:14.321 233728 INFO nova.virt.libvirt.driver [-] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Instance destroyed successfully.#033[00m
Nov 29 03:36:14 np0005539552 podman[306929]: 2025-11-29 08:36:14.321915126 +0000 UTC m=+0.085730348 container cleanup b2fff58b40d2afaa4a7f348feac3c04fcb00cb8a0659e44ffff72583aab39dd5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:36:14 np0005539552 nova_compute[233724]: 2025-11-29 08:36:14.321 233728 DEBUG nova.objects.instance [None req-cb5225a4-f878-49e2-9853-104e8f1eed59 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lazy-loading 'resources' on Instance uuid ce8d0714-58a3-470e-bd4e-056510ea90cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:36:14 np0005539552 nova_compute[233724]: 2025-11-29 08:36:14.335 233728 DEBUG nova.virt.libvirt.vif [None req-cb5225a4-f878-49e2-9853-104e8f1eed59 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:34:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-783047069',display_name='tempest-AttachVolumeNegativeTest-server-783047069',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-783047069',id=172,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFQ1Xc+6dsBZceum/czMPofdd4FTNhf/ndQpYyoZjcVG+jFJxNhiOKOiwnZLYlv/os34IcmoJyaVTOwUT0/a2YuvnygYYqx4uGxp3ffKh3qL1focM/X4J2iBlrp5VjFccQ==',key_name='tempest-keypair-1947412987',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:34:49Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='368e3a44279843f5947188dd045d65b6',ramdisk_id='',reservation_id='r-hyq0lk0p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeNegativeTest-1895715059',owner_user_name='tempest-AttachVolumeNegativeTest-1895715059-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:34:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='bdbcdbdc435844ee8d866288c969331b',uuid=ce8d0714-58a3-470e-bd4e-056510ea90cd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a49400ac-d3dd-4f0c-818e-9de880a48505", "address": "fa:16:3e:f3:44:de", "network": {"id": "0183ad73-05c1-46e4-ba3e-b87d7a948c3b", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1280517693-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368e3a44279843f5947188dd045d65b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa49400ac-d3", "ovs_interfaceid": "a49400ac-d3dd-4f0c-818e-9de880a48505", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:36:14 np0005539552 nova_compute[233724]: 2025-11-29 08:36:14.336 233728 DEBUG nova.network.os_vif_util [None req-cb5225a4-f878-49e2-9853-104e8f1eed59 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Converting VIF {"id": "a49400ac-d3dd-4f0c-818e-9de880a48505", "address": "fa:16:3e:f3:44:de", "network": {"id": "0183ad73-05c1-46e4-ba3e-b87d7a948c3b", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1280517693-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368e3a44279843f5947188dd045d65b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa49400ac-d3", "ovs_interfaceid": "a49400ac-d3dd-4f0c-818e-9de880a48505", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:36:14 np0005539552 nova_compute[233724]: 2025-11-29 08:36:14.337 233728 DEBUG nova.network.os_vif_util [None req-cb5225a4-f878-49e2-9853-104e8f1eed59 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f3:44:de,bridge_name='br-int',has_traffic_filtering=True,id=a49400ac-d3dd-4f0c-818e-9de880a48505,network=Network(0183ad73-05c1-46e4-ba3e-b87d7a948c3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa49400ac-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:36:14 np0005539552 nova_compute[233724]: 2025-11-29 08:36:14.337 233728 DEBUG os_vif [None req-cb5225a4-f878-49e2-9853-104e8f1eed59 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f3:44:de,bridge_name='br-int',has_traffic_filtering=True,id=a49400ac-d3dd-4f0c-818e-9de880a48505,network=Network(0183ad73-05c1-46e4-ba3e-b87d7a948c3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa49400ac-d3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:36:14 np0005539552 nova_compute[233724]: 2025-11-29 08:36:14.339 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:14 np0005539552 nova_compute[233724]: 2025-11-29 08:36:14.339 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa49400ac-d3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:14 np0005539552 nova_compute[233724]: 2025-11-29 08:36:14.341 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:14 np0005539552 systemd[1]: libpod-conmon-b2fff58b40d2afaa4a7f348feac3c04fcb00cb8a0659e44ffff72583aab39dd5.scope: Deactivated successfully.
Nov 29 03:36:14 np0005539552 nova_compute[233724]: 2025-11-29 08:36:14.343 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:14 np0005539552 nova_compute[233724]: 2025-11-29 08:36:14.345 233728 INFO os_vif [None req-cb5225a4-f878-49e2-9853-104e8f1eed59 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f3:44:de,bridge_name='br-int',has_traffic_filtering=True,id=a49400ac-d3dd-4f0c-818e-9de880a48505,network=Network(0183ad73-05c1-46e4-ba3e-b87d7a948c3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa49400ac-d3')#033[00m
Nov 29 03:36:14 np0005539552 podman[306968]: 2025-11-29 08:36:14.384679335 +0000 UTC m=+0.038444626 container remove b2fff58b40d2afaa4a7f348feac3c04fcb00cb8a0659e44ffff72583aab39dd5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:36:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:36:14.390 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[262ca525-d932-4e16-a1af-ae6f02bfa7a8]: (4, ('Sat Nov 29 08:36:14 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b (b2fff58b40d2afaa4a7f348feac3c04fcb00cb8a0659e44ffff72583aab39dd5)\nb2fff58b40d2afaa4a7f348feac3c04fcb00cb8a0659e44ffff72583aab39dd5\nSat Nov 29 08:36:14 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b (b2fff58b40d2afaa4a7f348feac3c04fcb00cb8a0659e44ffff72583aab39dd5)\nb2fff58b40d2afaa4a7f348feac3c04fcb00cb8a0659e44ffff72583aab39dd5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:36:14.392 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[91fb15b8-c420-462f-9c7a-363d14ecac27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:36:14.393 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0183ad73-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:14 np0005539552 nova_compute[233724]: 2025-11-29 08:36:14.395 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:14 np0005539552 kernel: tap0183ad73-00: left promiscuous mode
Nov 29 03:36:14 np0005539552 nova_compute[233724]: 2025-11-29 08:36:14.397 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:36:14.401 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b0607e87-464b-4172-849b-9b65e8323264]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:14 np0005539552 nova_compute[233724]: 2025-11-29 08:36:14.412 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:36:14.427 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[47cc7d2e-7a80-466c-96e4-82d1aa5176ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:36:14.428 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f1db1545-c998-4dc6-9069-651f65c7f5d9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:36:14.449 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f9520ca1-fa06-41d4-afe7-2f48cf8ba752]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 833400, 'reachable_time': 33967, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307002, 'error': None, 'target': 'ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:14 np0005539552 systemd[1]: run-netns-ovnmeta\x2d0183ad73\x2d05c1\x2d46e4\x2dba3e\x2db87d7a948c3b.mount: Deactivated successfully.
Nov 29 03:36:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:36:14.453 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:36:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:36:14.453 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[f11cd6bd-b057-4977-899a-3b07cc001f33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:14.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:14 np0005539552 nova_compute[233724]: 2025-11-29 08:36:14.855 233728 INFO nova.virt.libvirt.driver [None req-cb5225a4-f878-49e2-9853-104e8f1eed59 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Deleting instance files /var/lib/nova/instances/ce8d0714-58a3-470e-bd4e-056510ea90cd_del#033[00m
Nov 29 03:36:14 np0005539552 nova_compute[233724]: 2025-11-29 08:36:14.856 233728 INFO nova.virt.libvirt.driver [None req-cb5225a4-f878-49e2-9853-104e8f1eed59 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Deletion of /var/lib/nova/instances/ce8d0714-58a3-470e-bd4e-056510ea90cd_del complete#033[00m
Nov 29 03:36:14 np0005539552 nova_compute[233724]: 2025-11-29 08:36:14.922 233728 INFO nova.compute.manager [None req-cb5225a4-f878-49e2-9853-104e8f1eed59 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Took 0.85 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:36:14 np0005539552 nova_compute[233724]: 2025-11-29 08:36:14.923 233728 DEBUG oslo.service.loopingcall [None req-cb5225a4-f878-49e2-9853-104e8f1eed59 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:36:14 np0005539552 nova_compute[233724]: 2025-11-29 08:36:14.923 233728 DEBUG nova.compute.manager [-] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:36:14 np0005539552 nova_compute[233724]: 2025-11-29 08:36:14.923 233728 DEBUG nova.network.neutron [-] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:36:15 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e373 e373: 3 total, 3 up, 3 in
Nov 29 03:36:15 np0005539552 nova_compute[233724]: 2025-11-29 08:36:15.526 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:15 np0005539552 nova_compute[233724]: 2025-11-29 08:36:15.773 233728 DEBUG nova.compute.manager [req-6bbd8b6c-56de-4472-a980-e6b6f4cd6267 req-440b5c25-5dde-4304-8d6c-5f287ae130af 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Received event network-vif-unplugged-a49400ac-d3dd-4f0c-818e-9de880a48505 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:15 np0005539552 nova_compute[233724]: 2025-11-29 08:36:15.773 233728 DEBUG oslo_concurrency.lockutils [req-6bbd8b6c-56de-4472-a980-e6b6f4cd6267 req-440b5c25-5dde-4304-8d6c-5f287ae130af 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ce8d0714-58a3-470e-bd4e-056510ea90cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:15 np0005539552 nova_compute[233724]: 2025-11-29 08:36:15.773 233728 DEBUG oslo_concurrency.lockutils [req-6bbd8b6c-56de-4472-a980-e6b6f4cd6267 req-440b5c25-5dde-4304-8d6c-5f287ae130af 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ce8d0714-58a3-470e-bd4e-056510ea90cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:15 np0005539552 nova_compute[233724]: 2025-11-29 08:36:15.774 233728 DEBUG oslo_concurrency.lockutils [req-6bbd8b6c-56de-4472-a980-e6b6f4cd6267 req-440b5c25-5dde-4304-8d6c-5f287ae130af 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ce8d0714-58a3-470e-bd4e-056510ea90cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:15 np0005539552 nova_compute[233724]: 2025-11-29 08:36:15.774 233728 DEBUG nova.compute.manager [req-6bbd8b6c-56de-4472-a980-e6b6f4cd6267 req-440b5c25-5dde-4304-8d6c-5f287ae130af 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] No waiting events found dispatching network-vif-unplugged-a49400ac-d3dd-4f0c-818e-9de880a48505 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:36:15 np0005539552 nova_compute[233724]: 2025-11-29 08:36:15.774 233728 DEBUG nova.compute.manager [req-6bbd8b6c-56de-4472-a980-e6b6f4cd6267 req-440b5c25-5dde-4304-8d6c-5f287ae130af 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Received event network-vif-unplugged-a49400ac-d3dd-4f0c-818e-9de880a48505 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:36:15 np0005539552 nova_compute[233724]: 2025-11-29 08:36:15.775 233728 DEBUG nova.compute.manager [req-6bbd8b6c-56de-4472-a980-e6b6f4cd6267 req-440b5c25-5dde-4304-8d6c-5f287ae130af 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Received event network-vif-plugged-a49400ac-d3dd-4f0c-818e-9de880a48505 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:15 np0005539552 nova_compute[233724]: 2025-11-29 08:36:15.775 233728 DEBUG oslo_concurrency.lockutils [req-6bbd8b6c-56de-4472-a980-e6b6f4cd6267 req-440b5c25-5dde-4304-8d6c-5f287ae130af 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ce8d0714-58a3-470e-bd4e-056510ea90cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:15 np0005539552 nova_compute[233724]: 2025-11-29 08:36:15.775 233728 DEBUG oslo_concurrency.lockutils [req-6bbd8b6c-56de-4472-a980-e6b6f4cd6267 req-440b5c25-5dde-4304-8d6c-5f287ae130af 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ce8d0714-58a3-470e-bd4e-056510ea90cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:15 np0005539552 nova_compute[233724]: 2025-11-29 08:36:15.775 233728 DEBUG oslo_concurrency.lockutils [req-6bbd8b6c-56de-4472-a980-e6b6f4cd6267 req-440b5c25-5dde-4304-8d6c-5f287ae130af 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ce8d0714-58a3-470e-bd4e-056510ea90cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:15 np0005539552 nova_compute[233724]: 2025-11-29 08:36:15.775 233728 DEBUG nova.compute.manager [req-6bbd8b6c-56de-4472-a980-e6b6f4cd6267 req-440b5c25-5dde-4304-8d6c-5f287ae130af 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] No waiting events found dispatching network-vif-plugged-a49400ac-d3dd-4f0c-818e-9de880a48505 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:36:15 np0005539552 nova_compute[233724]: 2025-11-29 08:36:15.776 233728 WARNING nova.compute.manager [req-6bbd8b6c-56de-4472-a980-e6b6f4cd6267 req-440b5c25-5dde-4304-8d6c-5f287ae130af 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Received unexpected event network-vif-plugged-a49400ac-d3dd-4f0c-818e-9de880a48505 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:36:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:15.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:16 np0005539552 nova_compute[233724]: 2025-11-29 08:36:16.446 233728 DEBUG nova.network.neutron [-] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:36:16 np0005539552 nova_compute[233724]: 2025-11-29 08:36:16.473 233728 INFO nova.compute.manager [-] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Took 1.55 seconds to deallocate network for instance.#033[00m
Nov 29 03:36:16 np0005539552 nova_compute[233724]: 2025-11-29 08:36:16.516 233728 DEBUG oslo_concurrency.lockutils [None req-cb5225a4-f878-49e2-9853-104e8f1eed59 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:16 np0005539552 nova_compute[233724]: 2025-11-29 08:36:16.517 233728 DEBUG oslo_concurrency.lockutils [None req-cb5225a4-f878-49e2-9853-104e8f1eed59 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:16 np0005539552 nova_compute[233724]: 2025-11-29 08:36:16.582 233728 DEBUG oslo_concurrency.processutils [None req-cb5225a4-f878-49e2-9853-104e8f1eed59 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:16 np0005539552 nova_compute[233724]: 2025-11-29 08:36:16.630 233728 DEBUG nova.compute.manager [req-4ba4e184-f63c-4b8b-9ea6-e35112635602 req-c41398c0-8161-4cbc-b89a-3348dbe3cd9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Received event network-vif-deleted-a49400ac-d3dd-4f0c-818e-9de880a48505 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:16.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:16 np0005539552 nova_compute[233724]: 2025-11-29 08:36:16.962 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:36:17 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3941256038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:36:17 np0005539552 nova_compute[233724]: 2025-11-29 08:36:17.064 233728 DEBUG oslo_concurrency.processutils [None req-cb5225a4-f878-49e2-9853-104e8f1eed59 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:17 np0005539552 nova_compute[233724]: 2025-11-29 08:36:17.073 233728 DEBUG nova.compute.provider_tree [None req-cb5225a4-f878-49e2-9853-104e8f1eed59 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:36:17 np0005539552 nova_compute[233724]: 2025-11-29 08:36:17.106 233728 DEBUG nova.scheduler.client.report [None req-cb5225a4-f878-49e2-9853-104e8f1eed59 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:36:17 np0005539552 nova_compute[233724]: 2025-11-29 08:36:17.133 233728 DEBUG oslo_concurrency.lockutils [None req-cb5225a4-f878-49e2-9853-104e8f1eed59 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:17 np0005539552 nova_compute[233724]: 2025-11-29 08:36:17.166 233728 INFO nova.scheduler.client.report [None req-cb5225a4-f878-49e2-9853-104e8f1eed59 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Deleted allocations for instance ce8d0714-58a3-470e-bd4e-056510ea90cd#033[00m
Nov 29 03:36:17 np0005539552 nova_compute[233724]: 2025-11-29 08:36:17.241 233728 DEBUG oslo_concurrency.lockutils [None req-cb5225a4-f878-49e2-9853-104e8f1eed59 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "ce8d0714-58a3-470e-bd4e-056510ea90cd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:17.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:18.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:19 np0005539552 nova_compute[233724]: 2025-11-29 08:36:19.343 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:19.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:20 np0005539552 nova_compute[233724]: 2025-11-29 08:36:20.368 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:36:20.641 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:36:20.642 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:36:20.642 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:20.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:21.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:21 np0005539552 nova_compute[233724]: 2025-11-29 08:36:21.963 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:22 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:36:22.295 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:36:22 np0005539552 nova_compute[233724]: 2025-11-29 08:36:22.296 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:22 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:36:22.297 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:36:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:22.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:23.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:24 np0005539552 nova_compute[233724]: 2025-11-29 08:36:24.346 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:24 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e374 e374: 3 total, 3 up, 3 in
Nov 29 03:36:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:24.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:25.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:26.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:26 np0005539552 nova_compute[233724]: 2025-11-29 08:36:26.967 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:27 np0005539552 nova_compute[233724]: 2025-11-29 08:36:27.393 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:27 np0005539552 nova_compute[233724]: 2025-11-29 08:36:27.729 233728 DEBUG oslo_concurrency.lockutils [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquiring lock "9722353e-1f13-4d75-97a7-9b251f9385a6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:27 np0005539552 nova_compute[233724]: 2025-11-29 08:36:27.730 233728 DEBUG oslo_concurrency.lockutils [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "9722353e-1f13-4d75-97a7-9b251f9385a6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:27 np0005539552 nova_compute[233724]: 2025-11-29 08:36:27.756 233728 DEBUG nova.compute.manager [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:36:27 np0005539552 nova_compute[233724]: 2025-11-29 08:36:27.845 233728 DEBUG oslo_concurrency.lockutils [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:27 np0005539552 nova_compute[233724]: 2025-11-29 08:36:27.846 233728 DEBUG oslo_concurrency.lockutils [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:27 np0005539552 nova_compute[233724]: 2025-11-29 08:36:27.855 233728 DEBUG nova.virt.hardware [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:36:27 np0005539552 nova_compute[233724]: 2025-11-29 08:36:27.855 233728 INFO nova.compute.claims [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:36:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:27.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:27 np0005539552 nova_compute[233724]: 2025-11-29 08:36:27.975 233728 DEBUG oslo_concurrency.processutils [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:28 np0005539552 podman[307086]: 2025-11-29 08:36:28.015515959 +0000 UTC m=+0.089698174 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:36:28 np0005539552 podman[307085]: 2025-11-29 08:36:28.023579876 +0000 UTC m=+0.099885688 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:36:28 np0005539552 podman[307087]: 2025-11-29 08:36:28.068795123 +0000 UTC m=+0.137276564 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:36:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:36:28 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3404587930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:36:28 np0005539552 nova_compute[233724]: 2025-11-29 08:36:28.463 233728 DEBUG oslo_concurrency.processutils [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:28 np0005539552 nova_compute[233724]: 2025-11-29 08:36:28.472 233728 DEBUG nova.compute.provider_tree [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:36:28 np0005539552 nova_compute[233724]: 2025-11-29 08:36:28.496 233728 DEBUG nova.scheduler.client.report [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:36:28 np0005539552 nova_compute[233724]: 2025-11-29 08:36:28.521 233728 DEBUG oslo_concurrency.lockutils [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:28 np0005539552 nova_compute[233724]: 2025-11-29 08:36:28.522 233728 DEBUG nova.compute.manager [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:36:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:28.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:28 np0005539552 nova_compute[233724]: 2025-11-29 08:36:28.758 233728 DEBUG nova.compute.manager [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:36:28 np0005539552 nova_compute[233724]: 2025-11-29 08:36:28.759 233728 DEBUG nova.network.neutron [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:36:28 np0005539552 nova_compute[233724]: 2025-11-29 08:36:28.798 233728 INFO nova.virt.libvirt.driver [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:36:28 np0005539552 nova_compute[233724]: 2025-11-29 08:36:28.831 233728 DEBUG nova.compute.manager [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:36:28 np0005539552 nova_compute[233724]: 2025-11-29 08:36:28.974 233728 DEBUG nova.compute.manager [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:36:28 np0005539552 nova_compute[233724]: 2025-11-29 08:36:28.975 233728 DEBUG nova.virt.libvirt.driver [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:36:28 np0005539552 nova_compute[233724]: 2025-11-29 08:36:28.977 233728 INFO nova.virt.libvirt.driver [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Creating image(s)#033[00m
Nov 29 03:36:29 np0005539552 nova_compute[233724]: 2025-11-29 08:36:29.008 233728 DEBUG nova.storage.rbd_utils [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] rbd image 9722353e-1f13-4d75-97a7-9b251f9385a6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:36:29 np0005539552 nova_compute[233724]: 2025-11-29 08:36:29.035 233728 DEBUG nova.storage.rbd_utils [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] rbd image 9722353e-1f13-4d75-97a7-9b251f9385a6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:36:29 np0005539552 nova_compute[233724]: 2025-11-29 08:36:29.064 233728 DEBUG nova.storage.rbd_utils [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] rbd image 9722353e-1f13-4d75-97a7-9b251f9385a6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:36:29 np0005539552 nova_compute[233724]: 2025-11-29 08:36:29.068 233728 DEBUG oslo_concurrency.processutils [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:29 np0005539552 nova_compute[233724]: 2025-11-29 08:36:29.105 233728 DEBUG nova.policy [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bdbcdbdc435844ee8d866288c969331b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '368e3a44279843f5947188dd045d65b6', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:36:29 np0005539552 nova_compute[233724]: 2025-11-29 08:36:29.164 233728 DEBUG oslo_concurrency.processutils [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:29 np0005539552 nova_compute[233724]: 2025-11-29 08:36:29.165 233728 DEBUG oslo_concurrency.lockutils [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:29 np0005539552 nova_compute[233724]: 2025-11-29 08:36:29.166 233728 DEBUG oslo_concurrency.lockutils [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:29 np0005539552 nova_compute[233724]: 2025-11-29 08:36:29.167 233728 DEBUG oslo_concurrency.lockutils [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:29 np0005539552 nova_compute[233724]: 2025-11-29 08:36:29.194 233728 DEBUG nova.storage.rbd_utils [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] rbd image 9722353e-1f13-4d75-97a7-9b251f9385a6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:36:29 np0005539552 nova_compute[233724]: 2025-11-29 08:36:29.198 233728 DEBUG oslo_concurrency.processutils [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 9722353e-1f13-4d75-97a7-9b251f9385a6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:29 np0005539552 nova_compute[233724]: 2025-11-29 08:36:29.319 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405374.3178706, ce8d0714-58a3-470e-bd4e-056510ea90cd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:36:29 np0005539552 nova_compute[233724]: 2025-11-29 08:36:29.320 233728 INFO nova.compute.manager [-] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:36:29 np0005539552 nova_compute[233724]: 2025-11-29 08:36:29.345 233728 DEBUG nova.compute.manager [None req-1086db32-4a34-421f-88da-4c25c649de7d - - - - - -] [instance: ce8d0714-58a3-470e-bd4e-056510ea90cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:36:29 np0005539552 nova_compute[233724]: 2025-11-29 08:36:29.348 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:29 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #56. Immutable memtables: 12.
Nov 29 03:36:29 np0005539552 nova_compute[233724]: 2025-11-29 08:36:29.852 233728 DEBUG nova.network.neutron [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Successfully created port: 2280449f-eaa4-4191-81e7-63f3558de392 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:36:29 np0005539552 nova_compute[233724]: 2025-11-29 08:36:29.881 233728 DEBUG oslo_concurrency.processutils [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 9722353e-1f13-4d75-97a7-9b251f9385a6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.683s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:29 np0005539552 nova_compute[233724]: 2025-11-29 08:36:29.955 233728 DEBUG nova.storage.rbd_utils [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] resizing rbd image 9722353e-1f13-4d75-97a7-9b251f9385a6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:36:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:29.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:30 np0005539552 nova_compute[233724]: 2025-11-29 08:36:30.074 233728 DEBUG nova.objects.instance [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lazy-loading 'migration_context' on Instance uuid 9722353e-1f13-4d75-97a7-9b251f9385a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:36:30 np0005539552 nova_compute[233724]: 2025-11-29 08:36:30.092 233728 DEBUG nova.virt.libvirt.driver [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:36:30 np0005539552 nova_compute[233724]: 2025-11-29 08:36:30.092 233728 DEBUG nova.virt.libvirt.driver [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Ensure instance console log exists: /var/lib/nova/instances/9722353e-1f13-4d75-97a7-9b251f9385a6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:36:30 np0005539552 nova_compute[233724]: 2025-11-29 08:36:30.093 233728 DEBUG oslo_concurrency.lockutils [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:30 np0005539552 nova_compute[233724]: 2025-11-29 08:36:30.093 233728 DEBUG oslo_concurrency.lockutils [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:30 np0005539552 nova_compute[233724]: 2025-11-29 08:36:30.093 233728 DEBUG oslo_concurrency.lockutils [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:30.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:30 np0005539552 nova_compute[233724]: 2025-11-29 08:36:30.738 233728 DEBUG nova.network.neutron [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Successfully updated port: 2280449f-eaa4-4191-81e7-63f3558de392 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:36:30 np0005539552 nova_compute[233724]: 2025-11-29 08:36:30.752 233728 DEBUG oslo_concurrency.lockutils [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquiring lock "refresh_cache-9722353e-1f13-4d75-97a7-9b251f9385a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:36:30 np0005539552 nova_compute[233724]: 2025-11-29 08:36:30.752 233728 DEBUG oslo_concurrency.lockutils [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquired lock "refresh_cache-9722353e-1f13-4d75-97a7-9b251f9385a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:36:30 np0005539552 nova_compute[233724]: 2025-11-29 08:36:30.752 233728 DEBUG nova.network.neutron [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:36:30 np0005539552 nova_compute[233724]: 2025-11-29 08:36:30.850 233728 DEBUG nova.compute.manager [req-0cb8e782-90e6-455c-84b8-da2fcfe997f0 req-e09cc117-e53e-421a-928a-76af582505bc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Received event network-changed-2280449f-eaa4-4191-81e7-63f3558de392 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:30 np0005539552 nova_compute[233724]: 2025-11-29 08:36:30.851 233728 DEBUG nova.compute.manager [req-0cb8e782-90e6-455c-84b8-da2fcfe997f0 req-e09cc117-e53e-421a-928a-76af582505bc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Refreshing instance network info cache due to event network-changed-2280449f-eaa4-4191-81e7-63f3558de392. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:36:30 np0005539552 nova_compute[233724]: 2025-11-29 08:36:30.851 233728 DEBUG oslo_concurrency.lockutils [req-0cb8e782-90e6-455c-84b8-da2fcfe997f0 req-e09cc117-e53e-421a-928a-76af582505bc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-9722353e-1f13-4d75-97a7-9b251f9385a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:36:30 np0005539552 nova_compute[233724]: 2025-11-29 08:36:30.939 233728 DEBUG nova.network.neutron [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:36:30 np0005539552 nova_compute[233724]: 2025-11-29 08:36:30.991 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 03:36:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:31.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 03:36:31 np0005539552 nova_compute[233724]: 2025-11-29 08:36:31.969 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:32 np0005539552 nova_compute[233724]: 2025-11-29 08:36:32.026 233728 DEBUG nova.network.neutron [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Updating instance_info_cache with network_info: [{"id": "2280449f-eaa4-4191-81e7-63f3558de392", "address": "fa:16:3e:f3:a3:f2", "network": {"id": "0183ad73-05c1-46e4-ba3e-b87d7a948c3b", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1280517693-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368e3a44279843f5947188dd045d65b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2280449f-ea", "ovs_interfaceid": "2280449f-eaa4-4191-81e7-63f3558de392", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:36:32 np0005539552 nova_compute[233724]: 2025-11-29 08:36:32.182 233728 DEBUG oslo_concurrency.lockutils [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Releasing lock "refresh_cache-9722353e-1f13-4d75-97a7-9b251f9385a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:36:32 np0005539552 nova_compute[233724]: 2025-11-29 08:36:32.183 233728 DEBUG nova.compute.manager [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Instance network_info: |[{"id": "2280449f-eaa4-4191-81e7-63f3558de392", "address": "fa:16:3e:f3:a3:f2", "network": {"id": "0183ad73-05c1-46e4-ba3e-b87d7a948c3b", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1280517693-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368e3a44279843f5947188dd045d65b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2280449f-ea", "ovs_interfaceid": "2280449f-eaa4-4191-81e7-63f3558de392", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:36:32 np0005539552 nova_compute[233724]: 2025-11-29 08:36:32.184 233728 DEBUG oslo_concurrency.lockutils [req-0cb8e782-90e6-455c-84b8-da2fcfe997f0 req-e09cc117-e53e-421a-928a-76af582505bc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-9722353e-1f13-4d75-97a7-9b251f9385a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:36:32 np0005539552 nova_compute[233724]: 2025-11-29 08:36:32.184 233728 DEBUG nova.network.neutron [req-0cb8e782-90e6-455c-84b8-da2fcfe997f0 req-e09cc117-e53e-421a-928a-76af582505bc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Refreshing network info cache for port 2280449f-eaa4-4191-81e7-63f3558de392 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:36:32 np0005539552 nova_compute[233724]: 2025-11-29 08:36:32.187 233728 DEBUG nova.virt.libvirt.driver [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Start _get_guest_xml network_info=[{"id": "2280449f-eaa4-4191-81e7-63f3558de392", "address": "fa:16:3e:f3:a3:f2", "network": {"id": "0183ad73-05c1-46e4-ba3e-b87d7a948c3b", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1280517693-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368e3a44279843f5947188dd045d65b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2280449f-ea", "ovs_interfaceid": "2280449f-eaa4-4191-81e7-63f3558de392", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:36:32 np0005539552 nova_compute[233724]: 2025-11-29 08:36:32.192 233728 WARNING nova.virt.libvirt.driver [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:36:32 np0005539552 nova_compute[233724]: 2025-11-29 08:36:32.200 233728 DEBUG nova.virt.libvirt.host [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:36:32 np0005539552 nova_compute[233724]: 2025-11-29 08:36:32.202 233728 DEBUG nova.virt.libvirt.host [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:36:32 np0005539552 nova_compute[233724]: 2025-11-29 08:36:32.209 233728 DEBUG nova.virt.libvirt.host [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:36:32 np0005539552 nova_compute[233724]: 2025-11-29 08:36:32.210 233728 DEBUG nova.virt.libvirt.host [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:36:32 np0005539552 nova_compute[233724]: 2025-11-29 08:36:32.212 233728 DEBUG nova.virt.libvirt.driver [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:36:32 np0005539552 nova_compute[233724]: 2025-11-29 08:36:32.212 233728 DEBUG nova.virt.hardware [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:36:32 np0005539552 nova_compute[233724]: 2025-11-29 08:36:32.213 233728 DEBUG nova.virt.hardware [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:36:32 np0005539552 nova_compute[233724]: 2025-11-29 08:36:32.213 233728 DEBUG nova.virt.hardware [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:36:32 np0005539552 nova_compute[233724]: 2025-11-29 08:36:32.213 233728 DEBUG nova.virt.hardware [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:36:32 np0005539552 nova_compute[233724]: 2025-11-29 08:36:32.213 233728 DEBUG nova.virt.hardware [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:36:32 np0005539552 nova_compute[233724]: 2025-11-29 08:36:32.214 233728 DEBUG nova.virt.hardware [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:36:32 np0005539552 nova_compute[233724]: 2025-11-29 08:36:32.214 233728 DEBUG nova.virt.hardware [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:36:32 np0005539552 nova_compute[233724]: 2025-11-29 08:36:32.214 233728 DEBUG nova.virt.hardware [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:36:32 np0005539552 nova_compute[233724]: 2025-11-29 08:36:32.214 233728 DEBUG nova.virt.hardware [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:36:32 np0005539552 nova_compute[233724]: 2025-11-29 08:36:32.215 233728 DEBUG nova.virt.hardware [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:36:32 np0005539552 nova_compute[233724]: 2025-11-29 08:36:32.215 233728 DEBUG nova.virt.hardware [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:36:32 np0005539552 nova_compute[233724]: 2025-11-29 08:36:32.219 233728 DEBUG oslo_concurrency.processutils [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:32 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:36:32.299 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:32.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:32 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:36:32 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1346530368' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:36:32 np0005539552 nova_compute[233724]: 2025-11-29 08:36:32.705 233728 DEBUG oslo_concurrency.processutils [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:32 np0005539552 nova_compute[233724]: 2025-11-29 08:36:32.754 233728 DEBUG nova.storage.rbd_utils [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] rbd image 9722353e-1f13-4d75-97a7-9b251f9385a6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:36:32 np0005539552 nova_compute[233724]: 2025-11-29 08:36:32.761 233728 DEBUG oslo_concurrency.processutils [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:36:33 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4118169519' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:36:33 np0005539552 nova_compute[233724]: 2025-11-29 08:36:33.211 233728 DEBUG oslo_concurrency.processutils [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:33 np0005539552 nova_compute[233724]: 2025-11-29 08:36:33.214 233728 DEBUG nova.virt.libvirt.vif [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:36:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-342901945',display_name='tempest-AttachVolumeNegativeTest-server-342901945',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-342901945',id=177,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBArE1d/5p2l9lrEA9StKuKY59EldtG1oon4YLNeDGDwtrrZQXe7zoIorTOaPP76ZsCNpa+vU4LaA/PNGnyxd94M3NKP27HcQPtp5G4oihlp7zGzefgsxSVuA3cHAMabcKg==',key_name='tempest-keypair-1868622340',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='368e3a44279843f5947188dd045d65b6',ramdisk_id='',reservation_id='r-90dr7fk2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-1895715059',owner_user_name='tempest-AttachVolumeNegativeTest-1895715059-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:36:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='bdbcdbdc435844ee8d866288c969331b',uuid=9722353e-1f13-4d75-97a7-9b251f9385a6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2280449f-eaa4-4191-81e7-63f3558de392", "address": "fa:16:3e:f3:a3:f2", "network": {"id": "0183ad73-05c1-46e4-ba3e-b87d7a948c3b", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1280517693-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368e3a44279843f5947188dd045d65b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2280449f-ea", "ovs_interfaceid": "2280449f-eaa4-4191-81e7-63f3558de392", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:36:33 np0005539552 nova_compute[233724]: 2025-11-29 08:36:33.215 233728 DEBUG nova.network.os_vif_util [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Converting VIF {"id": "2280449f-eaa4-4191-81e7-63f3558de392", "address": "fa:16:3e:f3:a3:f2", "network": {"id": "0183ad73-05c1-46e4-ba3e-b87d7a948c3b", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1280517693-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368e3a44279843f5947188dd045d65b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2280449f-ea", "ovs_interfaceid": "2280449f-eaa4-4191-81e7-63f3558de392", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:36:33 np0005539552 nova_compute[233724]: 2025-11-29 08:36:33.216 233728 DEBUG nova.network.os_vif_util [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:a3:f2,bridge_name='br-int',has_traffic_filtering=True,id=2280449f-eaa4-4191-81e7-63f3558de392,network=Network(0183ad73-05c1-46e4-ba3e-b87d7a948c3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2280449f-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:36:33 np0005539552 nova_compute[233724]: 2025-11-29 08:36:33.218 233728 DEBUG nova.objects.instance [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9722353e-1f13-4d75-97a7-9b251f9385a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:36:33 np0005539552 nova_compute[233724]: 2025-11-29 08:36:33.238 233728 DEBUG nova.virt.libvirt.driver [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:36:33 np0005539552 nova_compute[233724]:  <uuid>9722353e-1f13-4d75-97a7-9b251f9385a6</uuid>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:  <name>instance-000000b1</name>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:36:33 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:      <nova:name>tempest-AttachVolumeNegativeTest-server-342901945</nova:name>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:36:32</nova:creationTime>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:36:33 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:        <nova:user uuid="bdbcdbdc435844ee8d866288c969331b">tempest-AttachVolumeNegativeTest-1895715059-project-member</nova:user>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:        <nova:project uuid="368e3a44279843f5947188dd045d65b6">tempest-AttachVolumeNegativeTest-1895715059</nova:project>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:        <nova:port uuid="2280449f-eaa4-4191-81e7-63f3558de392">
Nov 29 03:36:33 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:      <entry name="serial">9722353e-1f13-4d75-97a7-9b251f9385a6</entry>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:      <entry name="uuid">9722353e-1f13-4d75-97a7-9b251f9385a6</entry>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:36:33 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/9722353e-1f13-4d75-97a7-9b251f9385a6_disk">
Nov 29 03:36:33 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:36:33 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:36:33 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/9722353e-1f13-4d75-97a7-9b251f9385a6_disk.config">
Nov 29 03:36:33 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:36:33 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:36:33 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:f3:a3:f2"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:      <target dev="tap2280449f-ea"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:36:33 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/9722353e-1f13-4d75-97a7-9b251f9385a6/console.log" append="off"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:36:33 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:36:33 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:36:33 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:36:33 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:36:33 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:36:33 np0005539552 nova_compute[233724]: 2025-11-29 08:36:33.241 233728 DEBUG nova.compute.manager [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Preparing to wait for external event network-vif-plugged-2280449f-eaa4-4191-81e7-63f3558de392 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:36:33 np0005539552 nova_compute[233724]: 2025-11-29 08:36:33.241 233728 DEBUG oslo_concurrency.lockutils [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquiring lock "9722353e-1f13-4d75-97a7-9b251f9385a6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:33 np0005539552 nova_compute[233724]: 2025-11-29 08:36:33.242 233728 DEBUG oslo_concurrency.lockutils [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "9722353e-1f13-4d75-97a7-9b251f9385a6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:33 np0005539552 nova_compute[233724]: 2025-11-29 08:36:33.242 233728 DEBUG oslo_concurrency.lockutils [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "9722353e-1f13-4d75-97a7-9b251f9385a6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:33 np0005539552 nova_compute[233724]: 2025-11-29 08:36:33.243 233728 DEBUG nova.virt.libvirt.vif [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:36:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-342901945',display_name='tempest-AttachVolumeNegativeTest-server-342901945',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-342901945',id=177,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBArE1d/5p2l9lrEA9StKuKY59EldtG1oon4YLNeDGDwtrrZQXe7zoIorTOaPP76ZsCNpa+vU4LaA/PNGnyxd94M3NKP27HcQPtp5G4oihlp7zGzefgsxSVuA3cHAMabcKg==',key_name='tempest-keypair-1868622340',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='368e3a44279843f5947188dd045d65b6',ramdisk_id='',reservation_id='r-90dr7fk2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-1895715059',owner_user_name='tempest-AttachVolumeNegativeTest-1895715059-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:36:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='bdbcdbdc435844ee8d866288c969331b',uuid=9722353e-1f13-4d75-97a7-9b251f9385a6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2280449f-eaa4-4191-81e7-63f3558de392", "address": "fa:16:3e:f3:a3:f2", "network": {"id": "0183ad73-05c1-46e4-ba3e-b87d7a948c3b", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1280517693-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368e3a44279843f5947188dd045d65b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2280449f-ea", "ovs_interfaceid": "2280449f-eaa4-4191-81e7-63f3558de392", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:36:33 np0005539552 nova_compute[233724]: 2025-11-29 08:36:33.243 233728 DEBUG nova.network.os_vif_util [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Converting VIF {"id": "2280449f-eaa4-4191-81e7-63f3558de392", "address": "fa:16:3e:f3:a3:f2", "network": {"id": "0183ad73-05c1-46e4-ba3e-b87d7a948c3b", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1280517693-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368e3a44279843f5947188dd045d65b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2280449f-ea", "ovs_interfaceid": "2280449f-eaa4-4191-81e7-63f3558de392", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:36:33 np0005539552 nova_compute[233724]: 2025-11-29 08:36:33.244 233728 DEBUG nova.network.os_vif_util [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:a3:f2,bridge_name='br-int',has_traffic_filtering=True,id=2280449f-eaa4-4191-81e7-63f3558de392,network=Network(0183ad73-05c1-46e4-ba3e-b87d7a948c3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2280449f-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:36:33 np0005539552 nova_compute[233724]: 2025-11-29 08:36:33.245 233728 DEBUG os_vif [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:a3:f2,bridge_name='br-int',has_traffic_filtering=True,id=2280449f-eaa4-4191-81e7-63f3558de392,network=Network(0183ad73-05c1-46e4-ba3e-b87d7a948c3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2280449f-ea') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:36:33 np0005539552 nova_compute[233724]: 2025-11-29 08:36:33.246 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:33 np0005539552 nova_compute[233724]: 2025-11-29 08:36:33.247 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:33 np0005539552 nova_compute[233724]: 2025-11-29 08:36:33.248 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:36:33 np0005539552 nova_compute[233724]: 2025-11-29 08:36:33.253 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:33 np0005539552 nova_compute[233724]: 2025-11-29 08:36:33.253 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2280449f-ea, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:33 np0005539552 nova_compute[233724]: 2025-11-29 08:36:33.254 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2280449f-ea, col_values=(('external_ids', {'iface-id': '2280449f-eaa4-4191-81e7-63f3558de392', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f3:a3:f2', 'vm-uuid': '9722353e-1f13-4d75-97a7-9b251f9385a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:33 np0005539552 NetworkManager[48926]: <info>  [1764405393.2594] manager: (tap2280449f-ea): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/353)
Nov 29 03:36:33 np0005539552 nova_compute[233724]: 2025-11-29 08:36:33.261 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:33 np0005539552 nova_compute[233724]: 2025-11-29 08:36:33.264 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:36:33 np0005539552 nova_compute[233724]: 2025-11-29 08:36:33.266 233728 INFO os_vif [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:a3:f2,bridge_name='br-int',has_traffic_filtering=True,id=2280449f-eaa4-4191-81e7-63f3558de392,network=Network(0183ad73-05c1-46e4-ba3e-b87d7a948c3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2280449f-ea')#033[00m
Nov 29 03:36:33 np0005539552 nova_compute[233724]: 2025-11-29 08:36:33.329 233728 DEBUG nova.virt.libvirt.driver [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:36:33 np0005539552 nova_compute[233724]: 2025-11-29 08:36:33.331 233728 DEBUG nova.virt.libvirt.driver [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:36:33 np0005539552 nova_compute[233724]: 2025-11-29 08:36:33.331 233728 DEBUG nova.virt.libvirt.driver [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] No VIF found with MAC fa:16:3e:f3:a3:f2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:36:33 np0005539552 nova_compute[233724]: 2025-11-29 08:36:33.332 233728 INFO nova.virt.libvirt.driver [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Using config drive#033[00m
Nov 29 03:36:33 np0005539552 nova_compute[233724]: 2025-11-29 08:36:33.374 233728 DEBUG nova.storage.rbd_utils [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] rbd image 9722353e-1f13-4d75-97a7-9b251f9385a6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:36:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e375 e375: 3 total, 3 up, 3 in
Nov 29 03:36:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e375 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:33.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:34 np0005539552 nova_compute[233724]: 2025-11-29 08:36:34.036 233728 INFO nova.virt.libvirt.driver [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Creating config drive at /var/lib/nova/instances/9722353e-1f13-4d75-97a7-9b251f9385a6/disk.config#033[00m
Nov 29 03:36:34 np0005539552 nova_compute[233724]: 2025-11-29 08:36:34.048 233728 DEBUG oslo_concurrency.processutils [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9722353e-1f13-4d75-97a7-9b251f9385a6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplyqjdvax execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:34 np0005539552 nova_compute[233724]: 2025-11-29 08:36:34.202 233728 DEBUG oslo_concurrency.processutils [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9722353e-1f13-4d75-97a7-9b251f9385a6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplyqjdvax" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:34 np0005539552 nova_compute[233724]: 2025-11-29 08:36:34.246 233728 DEBUG nova.storage.rbd_utils [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] rbd image 9722353e-1f13-4d75-97a7-9b251f9385a6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:36:34 np0005539552 nova_compute[233724]: 2025-11-29 08:36:34.250 233728 DEBUG oslo_concurrency.processutils [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9722353e-1f13-4d75-97a7-9b251f9385a6/disk.config 9722353e-1f13-4d75-97a7-9b251f9385a6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:34 np0005539552 nova_compute[233724]: 2025-11-29 08:36:34.298 233728 DEBUG nova.network.neutron [req-0cb8e782-90e6-455c-84b8-da2fcfe997f0 req-e09cc117-e53e-421a-928a-76af582505bc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Updated VIF entry in instance network info cache for port 2280449f-eaa4-4191-81e7-63f3558de392. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:36:34 np0005539552 nova_compute[233724]: 2025-11-29 08:36:34.299 233728 DEBUG nova.network.neutron [req-0cb8e782-90e6-455c-84b8-da2fcfe997f0 req-e09cc117-e53e-421a-928a-76af582505bc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Updating instance_info_cache with network_info: [{"id": "2280449f-eaa4-4191-81e7-63f3558de392", "address": "fa:16:3e:f3:a3:f2", "network": {"id": "0183ad73-05c1-46e4-ba3e-b87d7a948c3b", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1280517693-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368e3a44279843f5947188dd045d65b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2280449f-ea", "ovs_interfaceid": "2280449f-eaa4-4191-81e7-63f3558de392", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:36:34 np0005539552 nova_compute[233724]: 2025-11-29 08:36:34.324 233728 DEBUG oslo_concurrency.lockutils [req-0cb8e782-90e6-455c-84b8-da2fcfe997f0 req-e09cc117-e53e-421a-928a-76af582505bc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-9722353e-1f13-4d75-97a7-9b251f9385a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:36:34 np0005539552 nova_compute[233724]: 2025-11-29 08:36:34.416 233728 DEBUG oslo_concurrency.processutils [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9722353e-1f13-4d75-97a7-9b251f9385a6/disk.config 9722353e-1f13-4d75-97a7-9b251f9385a6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:34 np0005539552 nova_compute[233724]: 2025-11-29 08:36:34.417 233728 INFO nova.virt.libvirt.driver [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Deleting local config drive /var/lib/nova/instances/9722353e-1f13-4d75-97a7-9b251f9385a6/disk.config because it was imported into RBD.#033[00m
Nov 29 03:36:34 np0005539552 kernel: tap2280449f-ea: entered promiscuous mode
Nov 29 03:36:34 np0005539552 NetworkManager[48926]: <info>  [1764405394.4950] manager: (tap2280449f-ea): new Tun device (/org/freedesktop/NetworkManager/Devices/354)
Nov 29 03:36:34 np0005539552 ovn_controller[133798]: 2025-11-29T08:36:34Z|00804|binding|INFO|Claiming lport 2280449f-eaa4-4191-81e7-63f3558de392 for this chassis.
Nov 29 03:36:34 np0005539552 ovn_controller[133798]: 2025-11-29T08:36:34Z|00805|binding|INFO|2280449f-eaa4-4191-81e7-63f3558de392: Claiming fa:16:3e:f3:a3:f2 10.100.0.4
Nov 29 03:36:34 np0005539552 nova_compute[233724]: 2025-11-29 08:36:34.498 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:36:34.506 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:a3:f2 10.100.0.4'], port_security=['fa:16:3e:f3:a3:f2 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '9722353e-1f13-4d75-97a7-9b251f9385a6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0183ad73-05c1-46e4-ba3e-b87d7a948c3b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '368e3a44279843f5947188dd045d65b6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0f63d38e-3a9d-4555-b85a-93753f7bb1d4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=93400fd2-d19f-44bb-bf19-75f9854fcf6d, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=2280449f-eaa4-4191-81e7-63f3558de392) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:36:34.510 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 2280449f-eaa4-4191-81e7-63f3558de392 in datapath 0183ad73-05c1-46e4-ba3e-b87d7a948c3b bound to our chassis#033[00m
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:36:34.515 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0183ad73-05c1-46e4-ba3e-b87d7a948c3b#033[00m
Nov 29 03:36:34 np0005539552 ovn_controller[133798]: 2025-11-29T08:36:34Z|00806|binding|INFO|Setting lport 2280449f-eaa4-4191-81e7-63f3558de392 ovn-installed in OVS
Nov 29 03:36:34 np0005539552 ovn_controller[133798]: 2025-11-29T08:36:34Z|00807|binding|INFO|Setting lport 2280449f-eaa4-4191-81e7-63f3558de392 up in Southbound
Nov 29 03:36:34 np0005539552 nova_compute[233724]: 2025-11-29 08:36:34.537 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:36:34.536 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6cce0caf-455f-4559-bca2-d36dd6bde181]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:36:34.538 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0183ad73-01 in ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:36:34 np0005539552 nova_compute[233724]: 2025-11-29 08:36:34.539 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:36:34.542 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0183ad73-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:36:34.542 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c61c3026-4027-4f2a-ab55-6f088286e51b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:36:34.543 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6e551f9d-fad9-4996-869c-3567848c1f0d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:34 np0005539552 systemd-udevd[307473]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:36:34 np0005539552 NetworkManager[48926]: <info>  [1764405394.5603] device (tap2280449f-ea): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:36:34 np0005539552 NetworkManager[48926]: <info>  [1764405394.5616] device (tap2280449f-ea): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:36:34.558 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[66b92349-8b0b-4913-9ba0-321c34daec42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:34 np0005539552 systemd-machined[196379]: New machine qemu-82-instance-000000b1.
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:36:34.578 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[8108c8d3-c5b9-4e48-bf5b-5efac1033adf]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:34 np0005539552 systemd[1]: Started Virtual Machine qemu-82-instance-000000b1.
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:36:34.616 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[90bff3d2-09ba-4c11-9f88-ba7de828bf86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:34 np0005539552 NetworkManager[48926]: <info>  [1764405394.6275] manager: (tap0183ad73-00): new Veth device (/org/freedesktop/NetworkManager/Devices/355)
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:36:34.627 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d13d5db2-db49-4e37-913f-24e057d0cced]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:34.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:36:34.679 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[5e38f549-1a3a-4df2-816f-ee67ed5b7ff7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:36:34.683 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[e4e7fa1d-c00f-4864-9dfd-f801de2e5644]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:34 np0005539552 NetworkManager[48926]: <info>  [1764405394.7149] device (tap0183ad73-00): carrier: link connected
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:36:34.726 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[eb1eed0f-c10b-4461-9542-264a3e65e683]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:36:34.753 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[93e9ec2c-6942-4caa-8484-84bd713cdb60]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0183ad73-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:83:aa:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 237], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 844040, 'reachable_time': 30949, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307506, 'error': None, 'target': 'ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:36:34.782 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[abb42d8f-363c-411e-bb29-91c234868059]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe83:aad0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 844040, 'tstamp': 844040}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307507, 'error': None, 'target': 'ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:36:34.809 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[8094596e-dc8a-4528-8521-693f931b182c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0183ad73-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:83:aa:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 237], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 844040, 'reachable_time': 30949, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 307508, 'error': None, 'target': 'ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:36:34.862 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ed7281d0-eff4-49c9-912e-9e4a841b08a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:36:34.945 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[624b48db-cdc3-44bf-9bd9-2990e1857b62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:36:34.947 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0183ad73-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:36:34.947 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:36:34.947 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0183ad73-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:34 np0005539552 NetworkManager[48926]: <info>  [1764405394.9505] manager: (tap0183ad73-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/356)
Nov 29 03:36:34 np0005539552 kernel: tap0183ad73-00: entered promiscuous mode
Nov 29 03:36:34 np0005539552 nova_compute[233724]: 2025-11-29 08:36:34.949 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:34 np0005539552 nova_compute[233724]: 2025-11-29 08:36:34.952 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:36:34.957 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0183ad73-00, col_values=(('external_ids', {'iface-id': 'c88b07d7-f4c8-49a1-9950-8275afef03b1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:34 np0005539552 nova_compute[233724]: 2025-11-29 08:36:34.958 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:34 np0005539552 ovn_controller[133798]: 2025-11-29T08:36:34Z|00808|binding|INFO|Releasing lport c88b07d7-f4c8-49a1-9950-8275afef03b1 from this chassis (sb_readonly=0)
Nov 29 03:36:34 np0005539552 nova_compute[233724]: 2025-11-29 08:36:34.959 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:36:34.960 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0183ad73-05c1-46e4-ba3e-b87d7a948c3b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0183ad73-05c1-46e4-ba3e-b87d7a948c3b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:36:34.961 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[51143aed-9917-4265-9204-438a1922a81f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:36:34.962 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-0183ad73-05c1-46e4-ba3e-b87d7a948c3b
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/0183ad73-05c1-46e4-ba3e-b87d7a948c3b.pid.haproxy
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 0183ad73-05c1-46e4-ba3e-b87d7a948c3b
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:36:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:36:34.963 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b', 'env', 'PROCESS_TAG=haproxy-0183ad73-05c1-46e4-ba3e-b87d7a948c3b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0183ad73-05c1-46e4-ba3e-b87d7a948c3b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:36:34 np0005539552 nova_compute[233724]: 2025-11-29 08:36:34.972 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:35 np0005539552 podman[307541]: 2025-11-29 08:36:35.389154343 +0000 UTC m=+0.065441592 container create 3564567f2e35552fb9dd76762e08d2096958c7cd08063da88c67935e8dc9e628 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 03:36:35 np0005539552 systemd[1]: Started libpod-conmon-3564567f2e35552fb9dd76762e08d2096958c7cd08063da88c67935e8dc9e628.scope.
Nov 29 03:36:35 np0005539552 podman[307541]: 2025-11-29 08:36:35.360489962 +0000 UTC m=+0.036777231 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:36:35 np0005539552 nova_compute[233724]: 2025-11-29 08:36:35.456 233728 DEBUG nova.compute.manager [req-e292b09b-6085-4a97-a5e5-f0c8bd3ea6c3 req-10910ce3-1cb4-4d2e-9d5e-23b8ae469b2f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Received event network-vif-plugged-2280449f-eaa4-4191-81e7-63f3558de392 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:35 np0005539552 nova_compute[233724]: 2025-11-29 08:36:35.457 233728 DEBUG oslo_concurrency.lockutils [req-e292b09b-6085-4a97-a5e5-f0c8bd3ea6c3 req-10910ce3-1cb4-4d2e-9d5e-23b8ae469b2f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9722353e-1f13-4d75-97a7-9b251f9385a6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:35 np0005539552 nova_compute[233724]: 2025-11-29 08:36:35.458 233728 DEBUG oslo_concurrency.lockutils [req-e292b09b-6085-4a97-a5e5-f0c8bd3ea6c3 req-10910ce3-1cb4-4d2e-9d5e-23b8ae469b2f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9722353e-1f13-4d75-97a7-9b251f9385a6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:35 np0005539552 nova_compute[233724]: 2025-11-29 08:36:35.458 233728 DEBUG oslo_concurrency.lockutils [req-e292b09b-6085-4a97-a5e5-f0c8bd3ea6c3 req-10910ce3-1cb4-4d2e-9d5e-23b8ae469b2f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9722353e-1f13-4d75-97a7-9b251f9385a6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:35 np0005539552 nova_compute[233724]: 2025-11-29 08:36:35.459 233728 DEBUG nova.compute.manager [req-e292b09b-6085-4a97-a5e5-f0c8bd3ea6c3 req-10910ce3-1cb4-4d2e-9d5e-23b8ae469b2f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Processing event network-vif-plugged-2280449f-eaa4-4191-81e7-63f3558de392 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:36:35 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:36:35 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f661f480276b8ec14e7042693c7a272831fcee2c470f7a2615aaf0da5e6e504e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:36:35 np0005539552 podman[307541]: 2025-11-29 08:36:35.489287897 +0000 UTC m=+0.165575166 container init 3564567f2e35552fb9dd76762e08d2096958c7cd08063da88c67935e8dc9e628 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 03:36:35 np0005539552 podman[307541]: 2025-11-29 08:36:35.499361188 +0000 UTC m=+0.175648427 container start 3564567f2e35552fb9dd76762e08d2096958c7cd08063da88c67935e8dc9e628 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:36:35 np0005539552 neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b[307557]: [NOTICE]   (307561) : New worker (307563) forked
Nov 29 03:36:35 np0005539552 neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b[307557]: [NOTICE]   (307561) : Loading success.
Nov 29 03:36:35 np0005539552 nova_compute[233724]: 2025-11-29 08:36:35.822 233728 DEBUG nova.compute.manager [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:36:35 np0005539552 nova_compute[233724]: 2025-11-29 08:36:35.822 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405395.8210642, 9722353e-1f13-4d75-97a7-9b251f9385a6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:36:35 np0005539552 nova_compute[233724]: 2025-11-29 08:36:35.823 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] VM Started (Lifecycle Event)#033[00m
Nov 29 03:36:35 np0005539552 nova_compute[233724]: 2025-11-29 08:36:35.828 233728 DEBUG nova.virt.libvirt.driver [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:36:35 np0005539552 nova_compute[233724]: 2025-11-29 08:36:35.832 233728 INFO nova.virt.libvirt.driver [-] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Instance spawned successfully.#033[00m
Nov 29 03:36:35 np0005539552 nova_compute[233724]: 2025-11-29 08:36:35.833 233728 DEBUG nova.virt.libvirt.driver [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:36:35 np0005539552 nova_compute[233724]: 2025-11-29 08:36:35.852 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:36:35 np0005539552 nova_compute[233724]: 2025-11-29 08:36:35.862 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:36:35 np0005539552 nova_compute[233724]: 2025-11-29 08:36:35.867 233728 DEBUG nova.virt.libvirt.driver [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:36:35 np0005539552 nova_compute[233724]: 2025-11-29 08:36:35.868 233728 DEBUG nova.virt.libvirt.driver [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:36:35 np0005539552 nova_compute[233724]: 2025-11-29 08:36:35.869 233728 DEBUG nova.virt.libvirt.driver [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:36:35 np0005539552 nova_compute[233724]: 2025-11-29 08:36:35.869 233728 DEBUG nova.virt.libvirt.driver [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:36:35 np0005539552 nova_compute[233724]: 2025-11-29 08:36:35.870 233728 DEBUG nova.virt.libvirt.driver [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:36:35 np0005539552 nova_compute[233724]: 2025-11-29 08:36:35.871 233728 DEBUG nova.virt.libvirt.driver [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:36:35 np0005539552 nova_compute[233724]: 2025-11-29 08:36:35.896 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:36:35 np0005539552 nova_compute[233724]: 2025-11-29 08:36:35.896 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405395.821474, 9722353e-1f13-4d75-97a7-9b251f9385a6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:36:35 np0005539552 nova_compute[233724]: 2025-11-29 08:36:35.897 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:36:35 np0005539552 nova_compute[233724]: 2025-11-29 08:36:35.945 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:36:35 np0005539552 nova_compute[233724]: 2025-11-29 08:36:35.951 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405395.827188, 9722353e-1f13-4d75-97a7-9b251f9385a6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:36:35 np0005539552 nova_compute[233724]: 2025-11-29 08:36:35.952 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:36:35 np0005539552 nova_compute[233724]: 2025-11-29 08:36:35.957 233728 INFO nova.compute.manager [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Took 6.98 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:36:35 np0005539552 nova_compute[233724]: 2025-11-29 08:36:35.957 233728 DEBUG nova.compute.manager [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:36:35 np0005539552 nova_compute[233724]: 2025-11-29 08:36:35.967 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:36:35 np0005539552 nova_compute[233724]: 2025-11-29 08:36:35.970 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:36:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:35.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:35 np0005539552 nova_compute[233724]: 2025-11-29 08:36:35.987 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:36:36 np0005539552 nova_compute[233724]: 2025-11-29 08:36:36.015 233728 INFO nova.compute.manager [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Took 8.21 seconds to build instance.#033[00m
Nov 29 03:36:36 np0005539552 nova_compute[233724]: 2025-11-29 08:36:36.033 233728 DEBUG oslo_concurrency.lockutils [None req-48a49098-4ff2-4b7a-8243-b9bcd13ea580 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "9722353e-1f13-4d75-97a7-9b251f9385a6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.303s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:36.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:36 np0005539552 nova_compute[233724]: 2025-11-29 08:36:36.975 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:37 np0005539552 nova_compute[233724]: 2025-11-29 08:36:37.547 233728 DEBUG nova.compute.manager [req-341d58db-7b0f-47d0-a9d8-52953688cda7 req-2ba11e25-3543-4ff5-8bb7-95526bd2a6d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Received event network-vif-plugged-2280449f-eaa4-4191-81e7-63f3558de392 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:37 np0005539552 nova_compute[233724]: 2025-11-29 08:36:37.547 233728 DEBUG oslo_concurrency.lockutils [req-341d58db-7b0f-47d0-a9d8-52953688cda7 req-2ba11e25-3543-4ff5-8bb7-95526bd2a6d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9722353e-1f13-4d75-97a7-9b251f9385a6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:37 np0005539552 nova_compute[233724]: 2025-11-29 08:36:37.548 233728 DEBUG oslo_concurrency.lockutils [req-341d58db-7b0f-47d0-a9d8-52953688cda7 req-2ba11e25-3543-4ff5-8bb7-95526bd2a6d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9722353e-1f13-4d75-97a7-9b251f9385a6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:37 np0005539552 nova_compute[233724]: 2025-11-29 08:36:37.548 233728 DEBUG oslo_concurrency.lockutils [req-341d58db-7b0f-47d0-a9d8-52953688cda7 req-2ba11e25-3543-4ff5-8bb7-95526bd2a6d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9722353e-1f13-4d75-97a7-9b251f9385a6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:37 np0005539552 nova_compute[233724]: 2025-11-29 08:36:37.548 233728 DEBUG nova.compute.manager [req-341d58db-7b0f-47d0-a9d8-52953688cda7 req-2ba11e25-3543-4ff5-8bb7-95526bd2a6d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] No waiting events found dispatching network-vif-plugged-2280449f-eaa4-4191-81e7-63f3558de392 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:36:37 np0005539552 nova_compute[233724]: 2025-11-29 08:36:37.548 233728 WARNING nova.compute.manager [req-341d58db-7b0f-47d0-a9d8-52953688cda7 req-2ba11e25-3543-4ff5-8bb7-95526bd2a6d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Received unexpected event network-vif-plugged-2280449f-eaa4-4191-81e7-63f3558de392 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:36:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:37.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:38 np0005539552 nova_compute[233724]: 2025-11-29 08:36:38.258 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e375 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:38.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:38 np0005539552 nova_compute[233724]: 2025-11-29 08:36:38.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:36:38 np0005539552 nova_compute[233724]: 2025-11-29 08:36:38.952 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:38 np0005539552 nova_compute[233724]: 2025-11-29 08:36:38.952 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:38 np0005539552 nova_compute[233724]: 2025-11-29 08:36:38.953 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:38 np0005539552 nova_compute[233724]: 2025-11-29 08:36:38.953 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:36:38 np0005539552 nova_compute[233724]: 2025-11-29 08:36:38.953 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:36:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1586273956' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:36:39 np0005539552 nova_compute[233724]: 2025-11-29 08:36:39.447 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:39 np0005539552 nova_compute[233724]: 2025-11-29 08:36:39.531 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-000000b1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:36:39 np0005539552 nova_compute[233724]: 2025-11-29 08:36:39.532 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-000000b1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:36:39 np0005539552 nova_compute[233724]: 2025-11-29 08:36:39.672 233728 DEBUG nova.compute.manager [req-e52db9b0-080b-44b5-9724-b4724de3c82c req-019ad946-d9b6-4e8b-a14d-9c88b7a2c8b3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Received event network-changed-2280449f-eaa4-4191-81e7-63f3558de392 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:39 np0005539552 nova_compute[233724]: 2025-11-29 08:36:39.673 233728 DEBUG nova.compute.manager [req-e52db9b0-080b-44b5-9724-b4724de3c82c req-019ad946-d9b6-4e8b-a14d-9c88b7a2c8b3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Refreshing instance network info cache due to event network-changed-2280449f-eaa4-4191-81e7-63f3558de392. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:36:39 np0005539552 nova_compute[233724]: 2025-11-29 08:36:39.673 233728 DEBUG oslo_concurrency.lockutils [req-e52db9b0-080b-44b5-9724-b4724de3c82c req-019ad946-d9b6-4e8b-a14d-9c88b7a2c8b3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-9722353e-1f13-4d75-97a7-9b251f9385a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:36:39 np0005539552 nova_compute[233724]: 2025-11-29 08:36:39.674 233728 DEBUG oslo_concurrency.lockutils [req-e52db9b0-080b-44b5-9724-b4724de3c82c req-019ad946-d9b6-4e8b-a14d-9c88b7a2c8b3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-9722353e-1f13-4d75-97a7-9b251f9385a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:36:39 np0005539552 nova_compute[233724]: 2025-11-29 08:36:39.674 233728 DEBUG nova.network.neutron [req-e52db9b0-080b-44b5-9724-b4724de3c82c req-019ad946-d9b6-4e8b-a14d-9c88b7a2c8b3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Refreshing network info cache for port 2280449f-eaa4-4191-81e7-63f3558de392 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:36:39 np0005539552 nova_compute[233724]: 2025-11-29 08:36:39.775 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:36:39 np0005539552 nova_compute[233724]: 2025-11-29 08:36:39.777 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4089MB free_disk=20.876419067382812GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:36:39 np0005539552 nova_compute[233724]: 2025-11-29 08:36:39.777 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:39 np0005539552 nova_compute[233724]: 2025-11-29 08:36:39.777 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:39 np0005539552 nova_compute[233724]: 2025-11-29 08:36:39.946 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 9722353e-1f13-4d75-97a7-9b251f9385a6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:36:39 np0005539552 nova_compute[233724]: 2025-11-29 08:36:39.946 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:36:39 np0005539552 nova_compute[233724]: 2025-11-29 08:36:39.947 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:36:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:36:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:39.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:36:39 np0005539552 nova_compute[233724]: 2025-11-29 08:36:39.980 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Refreshing inventories for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:36:40 np0005539552 nova_compute[233724]: 2025-11-29 08:36:40.001 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Updating ProviderTree inventory for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:36:40 np0005539552 nova_compute[233724]: 2025-11-29 08:36:40.001 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Updating inventory in ProviderTree for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 03:36:40 np0005539552 nova_compute[233724]: 2025-11-29 08:36:40.021 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Refreshing aggregate associations for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:36:40 np0005539552 nova_compute[233724]: 2025-11-29 08:36:40.041 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Refreshing trait associations for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371, traits: HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_MMX,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_FDC,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:36:40 np0005539552 nova_compute[233724]: 2025-11-29 08:36:40.092 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:40 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:36:40 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/38228335' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:36:40 np0005539552 nova_compute[233724]: 2025-11-29 08:36:40.535 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:40 np0005539552 nova_compute[233724]: 2025-11-29 08:36:40.541 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:36:40 np0005539552 nova_compute[233724]: 2025-11-29 08:36:40.578 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:36:40 np0005539552 nova_compute[233724]: 2025-11-29 08:36:40.609 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:36:40 np0005539552 nova_compute[233724]: 2025-11-29 08:36:40.611 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:40.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:41 np0005539552 nova_compute[233724]: 2025-11-29 08:36:41.976 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:41.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:42.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:43 np0005539552 nova_compute[233724]: 2025-11-29 08:36:43.260 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:43 np0005539552 nova_compute[233724]: 2025-11-29 08:36:43.542 233728 DEBUG nova.network.neutron [req-e52db9b0-080b-44b5-9724-b4724de3c82c req-019ad946-d9b6-4e8b-a14d-9c88b7a2c8b3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Updated VIF entry in instance network info cache for port 2280449f-eaa4-4191-81e7-63f3558de392. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:36:43 np0005539552 nova_compute[233724]: 2025-11-29 08:36:43.544 233728 DEBUG nova.network.neutron [req-e52db9b0-080b-44b5-9724-b4724de3c82c req-019ad946-d9b6-4e8b-a14d-9c88b7a2c8b3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Updating instance_info_cache with network_info: [{"id": "2280449f-eaa4-4191-81e7-63f3558de392", "address": "fa:16:3e:f3:a3:f2", "network": {"id": "0183ad73-05c1-46e4-ba3e-b87d7a948c3b", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1280517693-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368e3a44279843f5947188dd045d65b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2280449f-ea", "ovs_interfaceid": "2280449f-eaa4-4191-81e7-63f3558de392", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:36:43 np0005539552 nova_compute[233724]: 2025-11-29 08:36:43.561 233728 DEBUG oslo_concurrency.lockutils [req-e52db9b0-080b-44b5-9724-b4724de3c82c req-019ad946-d9b6-4e8b-a14d-9c88b7a2c8b3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-9722353e-1f13-4d75-97a7-9b251f9385a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:36:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e375 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:43.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:44 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #121. Immutable memtables: 0.
Nov 29 03:36:44 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:36:44.340972) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:36:44 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 121
Nov 29 03:36:44 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405404341000, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 762, "num_deletes": 259, "total_data_size": 1283997, "memory_usage": 1305120, "flush_reason": "Manual Compaction"}
Nov 29 03:36:44 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #122: started
Nov 29 03:36:44 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405404348176, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 122, "file_size": 846779, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 60976, "largest_seqno": 61733, "table_properties": {"data_size": 843135, "index_size": 1424, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8701, "raw_average_key_size": 19, "raw_value_size": 835517, "raw_average_value_size": 1856, "num_data_blocks": 63, "num_entries": 450, "num_filter_entries": 450, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405359, "oldest_key_time": 1764405359, "file_creation_time": 1764405404, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:36:44 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 7253 microseconds, and 3141 cpu microseconds.
Nov 29 03:36:44 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:36:44 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:36:44.348222) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #122: 846779 bytes OK
Nov 29 03:36:44 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:36:44.348239) [db/memtable_list.cc:519] [default] Level-0 commit table #122 started
Nov 29 03:36:44 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:36:44.349970) [db/memtable_list.cc:722] [default] Level-0 commit table #122: memtable #1 done
Nov 29 03:36:44 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:36:44.349981) EVENT_LOG_v1 {"time_micros": 1764405404349978, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:36:44 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:36:44.349996) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:36:44 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 1279880, prev total WAL file size 1279880, number of live WAL files 2.
Nov 29 03:36:44 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000118.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:36:44 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:36:44.350524) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303130' seq:72057594037927935, type:22 .. '6C6F676D0032323634' seq:0, type:0; will stop at (end)
Nov 29 03:36:44 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:36:44 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [122(826KB)], [120(12MB)]
Nov 29 03:36:44 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405404350582, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [122], "files_L6": [120], "score": -1, "input_data_size": 14019768, "oldest_snapshot_seqno": -1}
Nov 29 03:36:44 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #123: 9180 keys, 13876979 bytes, temperature: kUnknown
Nov 29 03:36:44 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405404445222, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 123, "file_size": 13876979, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13815220, "index_size": 37711, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22981, "raw_key_size": 241852, "raw_average_key_size": 26, "raw_value_size": 13651116, "raw_average_value_size": 1487, "num_data_blocks": 1450, "num_entries": 9180, "num_filter_entries": 9180, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764405404, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 123, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:36:44 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:36:44 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:36:44.445503) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 13876979 bytes
Nov 29 03:36:44 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:36:44.447028) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 148.0 rd, 146.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 12.6 +0.0 blob) out(13.2 +0.0 blob), read-write-amplify(32.9) write-amplify(16.4) OK, records in: 9714, records dropped: 534 output_compression: NoCompression
Nov 29 03:36:44 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:36:44.447053) EVENT_LOG_v1 {"time_micros": 1764405404447036, "job": 76, "event": "compaction_finished", "compaction_time_micros": 94707, "compaction_time_cpu_micros": 35494, "output_level": 6, "num_output_files": 1, "total_output_size": 13876979, "num_input_records": 9714, "num_output_records": 9180, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:36:44 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:36:44 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405404447274, "job": 76, "event": "table_file_deletion", "file_number": 122}
Nov 29 03:36:44 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000120.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:36:44 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405404449093, "job": 76, "event": "table_file_deletion", "file_number": 120}
Nov 29 03:36:44 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:36:44.350455) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:36:44 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:36:44.449151) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:36:44 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:36:44.449156) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:36:44 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:36:44.449158) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:36:44 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:36:44.449160) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:36:44 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:36:44.449161) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:36:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:44.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:45 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:36:45 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1604173440' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:36:45 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:36:45 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1604173440' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:36:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:45.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:46 np0005539552 nova_compute[233724]: 2025-11-29 08:36:46.612 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:36:46 np0005539552 nova_compute[233724]: 2025-11-29 08:36:46.612 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:36:46 np0005539552 nova_compute[233724]: 2025-11-29 08:36:46.612 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:36:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:46.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:46 np0005539552 nova_compute[233724]: 2025-11-29 08:36:46.975 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:47 np0005539552 nova_compute[233724]: 2025-11-29 08:36:47.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:36:47 np0005539552 nova_compute[233724]: 2025-11-29 08:36:47.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:36:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:47.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:48 np0005539552 ovn_controller[133798]: 2025-11-29T08:36:48Z|00809|binding|INFO|Releasing lport c88b07d7-f4c8-49a1-9950-8275afef03b1 from this chassis (sb_readonly=0)
Nov 29 03:36:48 np0005539552 nova_compute[233724]: 2025-11-29 08:36:48.146 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:48 np0005539552 nova_compute[233724]: 2025-11-29 08:36:48.261 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e375 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:36:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:48.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:36:48 np0005539552 nova_compute[233724]: 2025-11-29 08:36:48.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:36:48 np0005539552 nova_compute[233724]: 2025-11-29 08:36:48.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:36:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:36:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:49.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:36:50 np0005539552 ovn_controller[133798]: 2025-11-29T08:36:50Z|00104|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f3:a3:f2 10.100.0.4
Nov 29 03:36:50 np0005539552 ovn_controller[133798]: 2025-11-29T08:36:50Z|00105|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f3:a3:f2 10.100.0.4
Nov 29 03:36:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:50.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:50 np0005539552 nova_compute[233724]: 2025-11-29 08:36:50.918 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:36:51 np0005539552 nova_compute[233724]: 2025-11-29 08:36:51.292 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:51 np0005539552 nova_compute[233724]: 2025-11-29 08:36:51.979 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:51.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:52.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:53 np0005539552 ovn_controller[133798]: 2025-11-29T08:36:53Z|00810|binding|INFO|Releasing lport c88b07d7-f4c8-49a1-9950-8275afef03b1 from this chassis (sb_readonly=0)
Nov 29 03:36:53 np0005539552 nova_compute[233724]: 2025-11-29 08:36:53.263 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:53 np0005539552 nova_compute[233724]: 2025-11-29 08:36:53.315 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e375 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:53.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:54.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:54 np0005539552 nova_compute[233724]: 2025-11-29 08:36:54.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:36:54 np0005539552 nova_compute[233724]: 2025-11-29 08:36:54.923 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:36:54 np0005539552 nova_compute[233724]: 2025-11-29 08:36:54.923 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:36:55 np0005539552 nova_compute[233724]: 2025-11-29 08:36:55.111 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "refresh_cache-9722353e-1f13-4d75-97a7-9b251f9385a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:36:55 np0005539552 nova_compute[233724]: 2025-11-29 08:36:55.112 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquired lock "refresh_cache-9722353e-1f13-4d75-97a7-9b251f9385a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:36:55 np0005539552 nova_compute[233724]: 2025-11-29 08:36:55.112 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:36:55 np0005539552 nova_compute[233724]: 2025-11-29 08:36:55.112 233728 DEBUG nova.objects.instance [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9722353e-1f13-4d75-97a7-9b251f9385a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:36:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:56.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:56 np0005539552 nova_compute[233724]: 2025-11-29 08:36:56.400 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:56.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:56 np0005539552 nova_compute[233724]: 2025-11-29 08:36:56.982 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:57 np0005539552 nova_compute[233724]: 2025-11-29 08:36:57.088 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Updating instance_info_cache with network_info: [{"id": "2280449f-eaa4-4191-81e7-63f3558de392", "address": "fa:16:3e:f3:a3:f2", "network": {"id": "0183ad73-05c1-46e4-ba3e-b87d7a948c3b", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1280517693-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368e3a44279843f5947188dd045d65b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2280449f-ea", "ovs_interfaceid": "2280449f-eaa4-4191-81e7-63f3558de392", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:36:57 np0005539552 nova_compute[233724]: 2025-11-29 08:36:57.105 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Releasing lock "refresh_cache-9722353e-1f13-4d75-97a7-9b251f9385a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:36:57 np0005539552 nova_compute[233724]: 2025-11-29 08:36:57.105 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:36:57 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e376 e376: 3 total, 3 up, 3 in
Nov 29 03:36:58 np0005539552 nova_compute[233724]: 2025-11-29 08:36:58.265 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:58.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:36:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:58.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:59 np0005539552 podman[307724]: 2025-11-29 08:36:59.021184067 +0000 UTC m=+0.091933104 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:36:59 np0005539552 podman[307725]: 2025-11-29 08:36:59.034830274 +0000 UTC m=+0.092429537 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 03:36:59 np0005539552 podman[307726]: 2025-11-29 08:36:59.092644159 +0000 UTC m=+0.145089283 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 03:37:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:00.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:00.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:01 np0005539552 nova_compute[233724]: 2025-11-29 08:37:01.984 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:37:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:02.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:02.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:03 np0005539552 nova_compute[233724]: 2025-11-29 08:37:03.268 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:37:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:04 np0005539552 nova_compute[233724]: 2025-11-29 08:37:04.101 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:37:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:04.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:04.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:06.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:37:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:06.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:37:06 np0005539552 nova_compute[233724]: 2025-11-29 08:37:06.988 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:37:08 np0005539552 nova_compute[233724]: 2025-11-29 08:37:08.269 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:37:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:08.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:08.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:09 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:37:09 np0005539552 nova_compute[233724]: 2025-11-29 08:37:09.382 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:37:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:10.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:10.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:11 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:37:11 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:37:11 np0005539552 nova_compute[233724]: 2025-11-29 08:37:11.991 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:37:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:12.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:12.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:12 np0005539552 nova_compute[233724]: 2025-11-29 08:37:12.754 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:37:13 np0005539552 nova_compute[233724]: 2025-11-29 08:37:13.029 233728 DEBUG oslo_concurrency.lockutils [None req-7caa7bd6-0b95-43aa-9a7f-b102f12df3cd bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquiring lock "9722353e-1f13-4d75-97a7-9b251f9385a6" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:37:13 np0005539552 nova_compute[233724]: 2025-11-29 08:37:13.030 233728 DEBUG oslo_concurrency.lockutils [None req-7caa7bd6-0b95-43aa-9a7f-b102f12df3cd bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "9722353e-1f13-4d75-97a7-9b251f9385a6" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:37:13 np0005539552 nova_compute[233724]: 2025-11-29 08:37:13.054 233728 DEBUG nova.objects.instance [None req-7caa7bd6-0b95-43aa-9a7f-b102f12df3cd bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lazy-loading 'flavor' on Instance uuid 9722353e-1f13-4d75-97a7-9b251f9385a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:37:13 np0005539552 nova_compute[233724]: 2025-11-29 08:37:13.097 233728 DEBUG oslo_concurrency.lockutils [None req-7caa7bd6-0b95-43aa-9a7f-b102f12df3cd bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "9722353e-1f13-4d75-97a7-9b251f9385a6" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.068s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:37:13 np0005539552 nova_compute[233724]: 2025-11-29 08:37:13.272 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:37:13 np0005539552 nova_compute[233724]: 2025-11-29 08:37:13.403 233728 DEBUG oslo_concurrency.lockutils [None req-7caa7bd6-0b95-43aa-9a7f-b102f12df3cd bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquiring lock "9722353e-1f13-4d75-97a7-9b251f9385a6" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:37:13 np0005539552 nova_compute[233724]: 2025-11-29 08:37:13.404 233728 DEBUG oslo_concurrency.lockutils [None req-7caa7bd6-0b95-43aa-9a7f-b102f12df3cd bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "9722353e-1f13-4d75-97a7-9b251f9385a6" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:37:13 np0005539552 nova_compute[233724]: 2025-11-29 08:37:13.405 233728 INFO nova.compute.manager [None req-7caa7bd6-0b95-43aa-9a7f-b102f12df3cd bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Attaching volume a8b37ad1-e3f2-474d-a8f9-2676f112a82c to /dev/vdb
Nov 29 03:37:13 np0005539552 nova_compute[233724]: 2025-11-29 08:37:13.583 233728 DEBUG os_brick.utils [None req-7caa7bd6-0b95-43aa-9a7f-b102f12df3cd bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 03:37:13 np0005539552 nova_compute[233724]: 2025-11-29 08:37:13.585 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:37:13 np0005539552 nova_compute[233724]: 2025-11-29 08:37:13.604 243418 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:37:13 np0005539552 nova_compute[233724]: 2025-11-29 08:37:13.605 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[1fc71364-d8ba-427e-b680-badfcc9a8a02]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:37:13 np0005539552 nova_compute[233724]: 2025-11-29 08:37:13.606 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:37:13 np0005539552 nova_compute[233724]: 2025-11-29 08:37:13.619 243418 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:37:13 np0005539552 nova_compute[233724]: 2025-11-29 08:37:13.619 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[0d64def0-d938-4ffb-a58d-738a8e6ddba3]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0e73df9328', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:37:13 np0005539552 nova_compute[233724]: 2025-11-29 08:37:13.621 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:37:13 np0005539552 nova_compute[233724]: 2025-11-29 08:37:13.637 243418 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:37:13 np0005539552 nova_compute[233724]: 2025-11-29 08:37:13.637 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[2f139666-99d9-43ba-9bb5-22bbf5e761ac]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:37:13 np0005539552 nova_compute[233724]: 2025-11-29 08:37:13.639 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[09c63d0e-e62b-4ccb-a1a3-f4b4d9d3f78d]: (4, '6fbde64a-d978-4f1a-a29d-a77e1f5a1987') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:37:13 np0005539552 nova_compute[233724]: 2025-11-29 08:37:13.640 233728 DEBUG oslo_concurrency.processutils [None req-7caa7bd6-0b95-43aa-9a7f-b102f12df3cd bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:37:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:13 np0005539552 nova_compute[233724]: 2025-11-29 08:37:13.691 233728 DEBUG oslo_concurrency.processutils [None req-7caa7bd6-0b95-43aa-9a7f-b102f12df3cd bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] CMD "nvme version" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:37:13 np0005539552 nova_compute[233724]: 2025-11-29 08:37:13.695 233728 DEBUG os_brick.initiator.connectors.lightos [None req-7caa7bd6-0b95-43aa-9a7f-b102f12df3cd bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 03:37:13 np0005539552 nova_compute[233724]: 2025-11-29 08:37:13.696 233728 DEBUG os_brick.initiator.connectors.lightos [None req-7caa7bd6-0b95-43aa-9a7f-b102f12df3cd bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 03:37:13 np0005539552 nova_compute[233724]: 2025-11-29 08:37:13.696 233728 DEBUG os_brick.initiator.connectors.lightos [None req-7caa7bd6-0b95-43aa-9a7f-b102f12df3cd bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 03:37:13 np0005539552 nova_compute[233724]: 2025-11-29 08:37:13.697 233728 DEBUG os_brick.utils [None req-7caa7bd6-0b95-43aa-9a7f-b102f12df3cd bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] <== get_connector_properties: return (113ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0e73df9328', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '6fbde64a-d978-4f1a-a29d-a77e1f5a1987', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 03:37:13 np0005539552 nova_compute[233724]: 2025-11-29 08:37:13.698 233728 DEBUG nova.virt.block_device [None req-7caa7bd6-0b95-43aa-9a7f-b102f12df3cd bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Updating existing volume attachment record: 4c453f0f-2349-4554-a07a-8bc29d94a754 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 03:37:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:14.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:14 np0005539552 nova_compute[233724]: 2025-11-29 08:37:14.386 233728 DEBUG nova.objects.instance [None req-7caa7bd6-0b95-43aa-9a7f-b102f12df3cd bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lazy-loading 'flavor' on Instance uuid 9722353e-1f13-4d75-97a7-9b251f9385a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:37:14 np0005539552 nova_compute[233724]: 2025-11-29 08:37:14.408 233728 DEBUG nova.virt.libvirt.driver [None req-7caa7bd6-0b95-43aa-9a7f-b102f12df3cd bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Attempting to attach volume a8b37ad1-e3f2-474d-a8f9-2676f112a82c with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 29 03:37:14 np0005539552 nova_compute[233724]: 2025-11-29 08:37:14.411 233728 DEBUG nova.virt.libvirt.guest [None req-7caa7bd6-0b95-43aa-9a7f-b102f12df3cd bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 03:37:14 np0005539552 nova_compute[233724]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:37:14 np0005539552 nova_compute[233724]:  <source protocol="rbd" name="volumes/volume-a8b37ad1-e3f2-474d-a8f9-2676f112a82c">
Nov 29 03:37:14 np0005539552 nova_compute[233724]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:37:14 np0005539552 nova_compute[233724]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:37:14 np0005539552 nova_compute[233724]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:37:14 np0005539552 nova_compute[233724]:  </source>
Nov 29 03:37:14 np0005539552 nova_compute[233724]:  <auth username="openstack">
Nov 29 03:37:14 np0005539552 nova_compute[233724]:    <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:37:14 np0005539552 nova_compute[233724]:  </auth>
Nov 29 03:37:14 np0005539552 nova_compute[233724]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:37:14 np0005539552 nova_compute[233724]:  <serial>a8b37ad1-e3f2-474d-a8f9-2676f112a82c</serial>
Nov 29 03:37:14 np0005539552 nova_compute[233724]: </disk>
Nov 29 03:37:14 np0005539552 nova_compute[233724]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 29 03:37:14 np0005539552 nova_compute[233724]: 2025-11-29 08:37:14.568 233728 DEBUG nova.virt.libvirt.driver [None req-7caa7bd6-0b95-43aa-9a7f-b102f12df3cd bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:37:14 np0005539552 nova_compute[233724]: 2025-11-29 08:37:14.568 233728 DEBUG nova.virt.libvirt.driver [None req-7caa7bd6-0b95-43aa-9a7f-b102f12df3cd bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:37:14 np0005539552 nova_compute[233724]: 2025-11-29 08:37:14.569 233728 DEBUG nova.virt.libvirt.driver [None req-7caa7bd6-0b95-43aa-9a7f-b102f12df3cd bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:37:14 np0005539552 nova_compute[233724]: 2025-11-29 08:37:14.569 233728 DEBUG nova.virt.libvirt.driver [None req-7caa7bd6-0b95-43aa-9a7f-b102f12df3cd bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] No VIF found with MAC fa:16:3e:f3:a3:f2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 03:37:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:14.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:14 np0005539552 nova_compute[233724]: 2025-11-29 08:37:14.803 233728 DEBUG oslo_concurrency.lockutils [None req-7caa7bd6-0b95-43aa-9a7f-b102f12df3cd bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "9722353e-1f13-4d75-97a7-9b251f9385a6" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.398s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:37:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:16.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:16 np0005539552 nova_compute[233724]: 2025-11-29 08:37:16.589 233728 DEBUG oslo_concurrency.lockutils [None req-a7e14a8a-5918-4594-b933-de16b4639219 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquiring lock "9722353e-1f13-4d75-97a7-9b251f9385a6" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:37:16 np0005539552 nova_compute[233724]: 2025-11-29 08:37:16.590 233728 DEBUG oslo_concurrency.lockutils [None req-a7e14a8a-5918-4594-b933-de16b4639219 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "9722353e-1f13-4d75-97a7-9b251f9385a6" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:37:16 np0005539552 nova_compute[233724]: 2025-11-29 08:37:16.607 233728 INFO nova.compute.manager [None req-a7e14a8a-5918-4594-b933-de16b4639219 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Detaching volume a8b37ad1-e3f2-474d-a8f9-2676f112a82c
Nov 29 03:37:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:16.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:16 np0005539552 nova_compute[233724]: 2025-11-29 08:37:16.778 233728 INFO nova.virt.block_device [None req-a7e14a8a-5918-4594-b933-de16b4639219 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Attempting to driver detach volume a8b37ad1-e3f2-474d-a8f9-2676f112a82c from mountpoint /dev/vdb
Nov 29 03:37:16 np0005539552 nova_compute[233724]: 2025-11-29 08:37:16.786 233728 DEBUG nova.virt.libvirt.driver [None req-a7e14a8a-5918-4594-b933-de16b4639219 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Attempting to detach device vdb from instance 9722353e-1f13-4d75-97a7-9b251f9385a6 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 03:37:16 np0005539552 nova_compute[233724]: 2025-11-29 08:37:16.786 233728 DEBUG nova.virt.libvirt.guest [None req-a7e14a8a-5918-4594-b933-de16b4639219 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:37:16 np0005539552 nova_compute[233724]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:37:16 np0005539552 nova_compute[233724]:  <source protocol="rbd" name="volumes/volume-a8b37ad1-e3f2-474d-a8f9-2676f112a82c">
Nov 29 03:37:16 np0005539552 nova_compute[233724]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:37:16 np0005539552 nova_compute[233724]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:37:16 np0005539552 nova_compute[233724]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:37:16 np0005539552 nova_compute[233724]:  </source>
Nov 29 03:37:16 np0005539552 nova_compute[233724]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:37:16 np0005539552 nova_compute[233724]:  <serial>a8b37ad1-e3f2-474d-a8f9-2676f112a82c</serial>
Nov 29 03:37:16 np0005539552 nova_compute[233724]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:37:16 np0005539552 nova_compute[233724]: </disk>
Nov 29 03:37:16 np0005539552 nova_compute[233724]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 03:37:16 np0005539552 nova_compute[233724]: 2025-11-29 08:37:16.794 233728 INFO nova.virt.libvirt.driver [None req-a7e14a8a-5918-4594-b933-de16b4639219 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Successfully detached device vdb from instance 9722353e-1f13-4d75-97a7-9b251f9385a6 from the persistent domain config.
Nov 29 03:37:16 np0005539552 nova_compute[233724]: 2025-11-29 08:37:16.794 233728 DEBUG nova.virt.libvirt.driver [None req-a7e14a8a-5918-4594-b933-de16b4639219 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 9722353e-1f13-4d75-97a7-9b251f9385a6 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 03:37:16 np0005539552 nova_compute[233724]: 2025-11-29 08:37:16.795 233728 DEBUG nova.virt.libvirt.guest [None req-a7e14a8a-5918-4594-b933-de16b4639219 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:37:16 np0005539552 nova_compute[233724]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:37:16 np0005539552 nova_compute[233724]:  <source protocol="rbd" name="volumes/volume-a8b37ad1-e3f2-474d-a8f9-2676f112a82c">
Nov 29 03:37:16 np0005539552 nova_compute[233724]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:37:16 np0005539552 nova_compute[233724]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:37:16 np0005539552 nova_compute[233724]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:37:16 np0005539552 nova_compute[233724]:  </source>
Nov 29 03:37:16 np0005539552 nova_compute[233724]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:37:16 np0005539552 nova_compute[233724]:  <serial>a8b37ad1-e3f2-474d-a8f9-2676f112a82c</serial>
Nov 29 03:37:16 np0005539552 nova_compute[233724]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:37:16 np0005539552 nova_compute[233724]: </disk>
Nov 29 03:37:16 np0005539552 nova_compute[233724]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 03:37:16 np0005539552 nova_compute[233724]: 2025-11-29 08:37:16.918 233728 DEBUG nova.virt.libvirt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Received event <DeviceRemovedEvent: 1764405436.9180403, 9722353e-1f13-4d75-97a7-9b251f9385a6 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 03:37:16 np0005539552 nova_compute[233724]: 2025-11-29 08:37:16.921 233728 DEBUG nova.virt.libvirt.driver [None req-a7e14a8a-5918-4594-b933-de16b4639219 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 9722353e-1f13-4d75-97a7-9b251f9385a6 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 03:37:16 np0005539552 nova_compute[233724]: 2025-11-29 08:37:16.923 233728 INFO nova.virt.libvirt.driver [None req-a7e14a8a-5918-4594-b933-de16b4639219 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Successfully detached device vdb from instance 9722353e-1f13-4d75-97a7-9b251f9385a6 from the live domain config.
Nov 29 03:37:16 np0005539552 nova_compute[233724]: 2025-11-29 08:37:16.993 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:37:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e377 e377: 3 total, 3 up, 3 in
Nov 29 03:37:17 np0005539552 nova_compute[233724]: 2025-11-29 08:37:17.085 233728 DEBUG nova.objects.instance [None req-a7e14a8a-5918-4594-b933-de16b4639219 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lazy-loading 'flavor' on Instance uuid 9722353e-1f13-4d75-97a7-9b251f9385a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:37:17 np0005539552 nova_compute[233724]: 2025-11-29 08:37:17.119 233728 DEBUG oslo_concurrency.lockutils [None req-a7e14a8a-5918-4594-b933-de16b4639219 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "9722353e-1f13-4d75-97a7-9b251f9385a6" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.529s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.044 233728 DEBUG oslo_concurrency.lockutils [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Acquiring lock "5748313e-fbb3-409e-83e6-aff548491530" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.045 233728 DEBUG oslo_concurrency.lockutils [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "5748313e-fbb3-409e-83e6-aff548491530" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.069 233728 DEBUG nova.compute.manager [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.076 233728 DEBUG oslo_concurrency.lockutils [None req-573cf473-9a25-40e0-b727-140a97261993 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquiring lock "9722353e-1f13-4d75-97a7-9b251f9385a6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.077 233728 DEBUG oslo_concurrency.lockutils [None req-573cf473-9a25-40e0-b727-140a97261993 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "9722353e-1f13-4d75-97a7-9b251f9385a6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.077 233728 DEBUG oslo_concurrency.lockutils [None req-573cf473-9a25-40e0-b727-140a97261993 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquiring lock "9722353e-1f13-4d75-97a7-9b251f9385a6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.078 233728 DEBUG oslo_concurrency.lockutils [None req-573cf473-9a25-40e0-b727-140a97261993 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "9722353e-1f13-4d75-97a7-9b251f9385a6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.078 233728 DEBUG oslo_concurrency.lockutils [None req-573cf473-9a25-40e0-b727-140a97261993 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "9722353e-1f13-4d75-97a7-9b251f9385a6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.080 233728 INFO nova.compute.manager [None req-573cf473-9a25-40e0-b727-140a97261993 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Terminating instance#033[00m
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.082 233728 DEBUG nova.compute.manager [None req-573cf473-9a25-40e0-b727-140a97261993 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:37:18 np0005539552 kernel: tap2280449f-ea (unregistering): left promiscuous mode
Nov 29 03:37:18 np0005539552 NetworkManager[48926]: <info>  [1764405438.1467] device (tap2280449f-ea): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.151 233728 DEBUG oslo_concurrency.lockutils [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.151 233728 DEBUG oslo_concurrency.lockutils [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.159 233728 DEBUG nova.virt.hardware [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.160 233728 INFO nova.compute.claims [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.166 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:18 np0005539552 ovn_controller[133798]: 2025-11-29T08:37:18Z|00811|binding|INFO|Releasing lport 2280449f-eaa4-4191-81e7-63f3558de392 from this chassis (sb_readonly=0)
Nov 29 03:37:18 np0005539552 ovn_controller[133798]: 2025-11-29T08:37:18Z|00812|binding|INFO|Setting lport 2280449f-eaa4-4191-81e7-63f3558de392 down in Southbound
Nov 29 03:37:18 np0005539552 ovn_controller[133798]: 2025-11-29T08:37:18Z|00813|binding|INFO|Removing iface tap2280449f-ea ovn-installed in OVS
Nov 29 03:37:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:37:18.175 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:a3:f2 10.100.0.4'], port_security=['fa:16:3e:f3:a3:f2 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '9722353e-1f13-4d75-97a7-9b251f9385a6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0183ad73-05c1-46e4-ba3e-b87d7a948c3b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '368e3a44279843f5947188dd045d65b6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0f63d38e-3a9d-4555-b85a-93753f7bb1d4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.234'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=93400fd2-d19f-44bb-bf19-75f9854fcf6d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=2280449f-eaa4-4191-81e7-63f3558de392) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:37:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:37:18.177 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 2280449f-eaa4-4191-81e7-63f3558de392 in datapath 0183ad73-05c1-46e4-ba3e-b87d7a948c3b unbound from our chassis#033[00m
Nov 29 03:37:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:37:18.180 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0183ad73-05c1-46e4-ba3e-b87d7a948c3b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:37:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:37:18.182 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c97aca25-cdae-43ae-b162-0c4325d1142f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:37:18.182 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b namespace which is not needed anymore#033[00m
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.208 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:18 np0005539552 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d000000b1.scope: Deactivated successfully.
Nov 29 03:37:18 np0005539552 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d000000b1.scope: Consumed 16.352s CPU time.
Nov 29 03:37:18 np0005539552 systemd-machined[196379]: Machine qemu-82-instance-000000b1 terminated.
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.273 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.284 233728 DEBUG oslo_concurrency.processutils [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:37:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:18.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.323 233728 INFO nova.virt.libvirt.driver [-] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Instance destroyed successfully.#033[00m
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.325 233728 DEBUG nova.objects.instance [None req-573cf473-9a25-40e0-b727-140a97261993 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lazy-loading 'resources' on Instance uuid 9722353e-1f13-4d75-97a7-9b251f9385a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.342 233728 DEBUG nova.virt.libvirt.vif [None req-573cf473-9a25-40e0-b727-140a97261993 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:36:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-342901945',display_name='tempest-AttachVolumeNegativeTest-server-342901945',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-342901945',id=177,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBArE1d/5p2l9lrEA9StKuKY59EldtG1oon4YLNeDGDwtrrZQXe7zoIorTOaPP76ZsCNpa+vU4LaA/PNGnyxd94M3NKP27HcQPtp5G4oihlp7zGzefgsxSVuA3cHAMabcKg==',key_name='tempest-keypair-1868622340',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:36:35Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='368e3a44279843f5947188dd045d65b6',ramdisk_id='',reservation_id='r-90dr7fk2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeNegativeTest-1895715059',owner_user_name='tempest-AttachVolumeNegativeTest-1895715059-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:36:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='bdbcdbdc435844ee8d866288c969331b',uuid=9722353e-1f13-4d75-97a7-9b251f9385a6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2280449f-eaa4-4191-81e7-63f3558de392", "address": "fa:16:3e:f3:a3:f2", "network": {"id": "0183ad73-05c1-46e4-ba3e-b87d7a948c3b", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1280517693-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368e3a44279843f5947188dd045d65b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2280449f-ea", "ovs_interfaceid": "2280449f-eaa4-4191-81e7-63f3558de392", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.342 233728 DEBUG nova.network.os_vif_util [None req-573cf473-9a25-40e0-b727-140a97261993 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Converting VIF {"id": "2280449f-eaa4-4191-81e7-63f3558de392", "address": "fa:16:3e:f3:a3:f2", "network": {"id": "0183ad73-05c1-46e4-ba3e-b87d7a948c3b", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1280517693-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368e3a44279843f5947188dd045d65b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2280449f-ea", "ovs_interfaceid": "2280449f-eaa4-4191-81e7-63f3558de392", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.344 233728 DEBUG nova.network.os_vif_util [None req-573cf473-9a25-40e0-b727-140a97261993 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f3:a3:f2,bridge_name='br-int',has_traffic_filtering=True,id=2280449f-eaa4-4191-81e7-63f3558de392,network=Network(0183ad73-05c1-46e4-ba3e-b87d7a948c3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2280449f-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.346 233728 DEBUG os_vif [None req-573cf473-9a25-40e0-b727-140a97261993 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f3:a3:f2,bridge_name='br-int',has_traffic_filtering=True,id=2280449f-eaa4-4191-81e7-63f3558de392,network=Network(0183ad73-05c1-46e4-ba3e-b87d7a948c3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2280449f-ea') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.351 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.351 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2280449f-ea, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.354 233728 DEBUG nova.compute.manager [req-80b99b4c-141b-4f1b-9bc2-46262e219756 req-384a597c-733e-442b-9b52-69f02fc944b2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Received event network-vif-unplugged-2280449f-eaa4-4191-81e7-63f3558de392 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.354 233728 DEBUG oslo_concurrency.lockutils [req-80b99b4c-141b-4f1b-9bc2-46262e219756 req-384a597c-733e-442b-9b52-69f02fc944b2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9722353e-1f13-4d75-97a7-9b251f9385a6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:18 np0005539552 neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b[307557]: [NOTICE]   (307561) : haproxy version is 2.8.14-c23fe91
Nov 29 03:37:18 np0005539552 neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b[307557]: [NOTICE]   (307561) : path to executable is /usr/sbin/haproxy
Nov 29 03:37:18 np0005539552 neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b[307557]: [WARNING]  (307561) : Exiting Master process...
Nov 29 03:37:18 np0005539552 neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b[307557]: [WARNING]  (307561) : Exiting Master process...
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.355 233728 DEBUG oslo_concurrency.lockutils [req-80b99b4c-141b-4f1b-9bc2-46262e219756 req-384a597c-733e-442b-9b52-69f02fc944b2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9722353e-1f13-4d75-97a7-9b251f9385a6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.356 233728 DEBUG oslo_concurrency.lockutils [req-80b99b4c-141b-4f1b-9bc2-46262e219756 req-384a597c-733e-442b-9b52-69f02fc944b2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9722353e-1f13-4d75-97a7-9b251f9385a6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.356 233728 DEBUG nova.compute.manager [req-80b99b4c-141b-4f1b-9bc2-46262e219756 req-384a597c-733e-442b-9b52-69f02fc944b2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] No waiting events found dispatching network-vif-unplugged-2280449f-eaa4-4191-81e7-63f3558de392 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.356 233728 DEBUG nova.compute.manager [req-80b99b4c-141b-4f1b-9bc2-46262e219756 req-384a597c-733e-442b-9b52-69f02fc944b2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Received event network-vif-unplugged-2280449f-eaa4-4191-81e7-63f3558de392 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:37:18 np0005539552 neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b[307557]: [ALERT]    (307561) : Current worker (307563) exited with code 143 (Terminated)
Nov 29 03:37:18 np0005539552 neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b[307557]: [WARNING]  (307561) : All workers exited. Exiting... (0)
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.357 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.359 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:37:18 np0005539552 systemd[1]: libpod-3564567f2e35552fb9dd76762e08d2096958c7cd08063da88c67935e8dc9e628.scope: Deactivated successfully.
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.362 233728 INFO os_vif [None req-573cf473-9a25-40e0-b727-140a97261993 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f3:a3:f2,bridge_name='br-int',has_traffic_filtering=True,id=2280449f-eaa4-4191-81e7-63f3558de392,network=Network(0183ad73-05c1-46e4-ba3e-b87d7a948c3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2280449f-ea')#033[00m
Nov 29 03:37:18 np0005539552 podman[308038]: 2025-11-29 08:37:18.366464045 +0000 UTC m=+0.053723857 container died 3564567f2e35552fb9dd76762e08d2096958c7cd08063da88c67935e8dc9e628 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:37:18 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3564567f2e35552fb9dd76762e08d2096958c7cd08063da88c67935e8dc9e628-userdata-shm.mount: Deactivated successfully.
Nov 29 03:37:18 np0005539552 systemd[1]: var-lib-containers-storage-overlay-f661f480276b8ec14e7042693c7a272831fcee2c470f7a2615aaf0da5e6e504e-merged.mount: Deactivated successfully.
Nov 29 03:37:18 np0005539552 podman[308038]: 2025-11-29 08:37:18.426265514 +0000 UTC m=+0.113525326 container cleanup 3564567f2e35552fb9dd76762e08d2096958c7cd08063da88c67935e8dc9e628 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 03:37:18 np0005539552 systemd[1]: libpod-conmon-3564567f2e35552fb9dd76762e08d2096958c7cd08063da88c67935e8dc9e628.scope: Deactivated successfully.
Nov 29 03:37:18 np0005539552 podman[308110]: 2025-11-29 08:37:18.489864515 +0000 UTC m=+0.040330176 container remove 3564567f2e35552fb9dd76762e08d2096958c7cd08063da88c67935e8dc9e628 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 03:37:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:37:18.500 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[5d8352ca-d797-42c0-b5f9-fd25da14fa13]: (4, ('Sat Nov 29 08:37:18 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b (3564567f2e35552fb9dd76762e08d2096958c7cd08063da88c67935e8dc9e628)\n3564567f2e35552fb9dd76762e08d2096958c7cd08063da88c67935e8dc9e628\nSat Nov 29 08:37:18 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b (3564567f2e35552fb9dd76762e08d2096958c7cd08063da88c67935e8dc9e628)\n3564567f2e35552fb9dd76762e08d2096958c7cd08063da88c67935e8dc9e628\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:37:18.502 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ca928c47-af3c-4e10-b93f-8ab2bd988220]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:37:18.503 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0183ad73-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.505 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:18 np0005539552 kernel: tap0183ad73-00: left promiscuous mode
Nov 29 03:37:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:37:18.510 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f1fdbf02-b4df-437e-b364-e2e42bcde79e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.521 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:37:18.529 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[7e8eb4c4-72eb-4c3c-84c4-eddfdbd0588a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:37:18.530 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[affd45d0-5677-46bd-ac2f-d842355665ae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:37:18.549 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[7d80b127-7143-496e-a500-c428aba171a7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 844029, 'reachable_time': 41603, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308125, 'error': None, 'target': 'ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:18 np0005539552 systemd[1]: run-netns-ovnmeta\x2d0183ad73\x2d05c1\x2d46e4\x2dba3e\x2db87d7a948c3b.mount: Deactivated successfully.
Nov 29 03:37:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:37:18.554 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:37:18 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:37:18.555 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[26405e69-cf4d-47da-8ad3-321addce8559]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e377 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:37:18 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/845857750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:37:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:18.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.717 233728 DEBUG oslo_concurrency.processutils [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.724 233728 DEBUG nova.compute.provider_tree [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.741 233728 DEBUG nova.scheduler.client.report [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.768 233728 DEBUG oslo_concurrency.lockutils [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.769 233728 DEBUG nova.compute.manager [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.788 233728 INFO nova.virt.libvirt.driver [None req-573cf473-9a25-40e0-b727-140a97261993 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Deleting instance files /var/lib/nova/instances/9722353e-1f13-4d75-97a7-9b251f9385a6_del#033[00m
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.789 233728 INFO nova.virt.libvirt.driver [None req-573cf473-9a25-40e0-b727-140a97261993 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Deletion of /var/lib/nova/instances/9722353e-1f13-4d75-97a7-9b251f9385a6_del complete#033[00m
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.845 233728 DEBUG nova.compute.manager [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.846 233728 DEBUG nova.network.neutron [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.862 233728 INFO nova.compute.manager [None req-573cf473-9a25-40e0-b727-140a97261993 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Took 0.78 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.863 233728 DEBUG oslo.service.loopingcall [None req-573cf473-9a25-40e0-b727-140a97261993 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.863 233728 DEBUG nova.compute.manager [-] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.863 233728 DEBUG nova.network.neutron [-] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.870 233728 INFO nova.virt.libvirt.driver [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:37:18 np0005539552 nova_compute[233724]: 2025-11-29 08:37:18.894 233728 DEBUG nova.compute.manager [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:37:19 np0005539552 nova_compute[233724]: 2025-11-29 08:37:19.006 233728 DEBUG nova.compute.manager [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:37:19 np0005539552 nova_compute[233724]: 2025-11-29 08:37:19.007 233728 DEBUG nova.virt.libvirt.driver [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:37:19 np0005539552 nova_compute[233724]: 2025-11-29 08:37:19.008 233728 INFO nova.virt.libvirt.driver [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Creating image(s)#033[00m
Nov 29 03:37:19 np0005539552 nova_compute[233724]: 2025-11-29 08:37:19.043 233728 DEBUG nova.storage.rbd_utils [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] rbd image 5748313e-fbb3-409e-83e6-aff548491530_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:37:19 np0005539552 nova_compute[233724]: 2025-11-29 08:37:19.082 233728 DEBUG nova.storage.rbd_utils [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] rbd image 5748313e-fbb3-409e-83e6-aff548491530_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:37:19 np0005539552 nova_compute[233724]: 2025-11-29 08:37:19.118 233728 DEBUG nova.storage.rbd_utils [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] rbd image 5748313e-fbb3-409e-83e6-aff548491530_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:37:19 np0005539552 nova_compute[233724]: 2025-11-29 08:37:19.123 233728 DEBUG oslo_concurrency.processutils [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:37:19 np0005539552 nova_compute[233724]: 2025-11-29 08:37:19.223 233728 DEBUG oslo_concurrency.processutils [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:37:19 np0005539552 nova_compute[233724]: 2025-11-29 08:37:19.224 233728 DEBUG oslo_concurrency.lockutils [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:19 np0005539552 nova_compute[233724]: 2025-11-29 08:37:19.225 233728 DEBUG oslo_concurrency.lockutils [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:19 np0005539552 nova_compute[233724]: 2025-11-29 08:37:19.225 233728 DEBUG oslo_concurrency.lockutils [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:19 np0005539552 nova_compute[233724]: 2025-11-29 08:37:19.261 233728 DEBUG nova.storage.rbd_utils [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] rbd image 5748313e-fbb3-409e-83e6-aff548491530_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:37:19 np0005539552 nova_compute[233724]: 2025-11-29 08:37:19.266 233728 DEBUG oslo_concurrency.processutils [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 5748313e-fbb3-409e-83e6-aff548491530_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:37:19 np0005539552 nova_compute[233724]: 2025-11-29 08:37:19.313 233728 DEBUG nova.policy [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'eeba34466b8f4a1bb5f742f1e811053c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '889608c71d13429fb37793575792ae74', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:37:19 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:37:19 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:37:19 np0005539552 nova_compute[233724]: 2025-11-29 08:37:19.636 233728 DEBUG oslo_concurrency.processutils [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 5748313e-fbb3-409e-83e6-aff548491530_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.370s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:37:19 np0005539552 nova_compute[233724]: 2025-11-29 08:37:19.732 233728 DEBUG nova.storage.rbd_utils [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] resizing rbd image 5748313e-fbb3-409e-83e6-aff548491530_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:37:19 np0005539552 nova_compute[233724]: 2025-11-29 08:37:19.847 233728 DEBUG nova.objects.instance [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lazy-loading 'migration_context' on Instance uuid 5748313e-fbb3-409e-83e6-aff548491530 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:37:19 np0005539552 nova_compute[233724]: 2025-11-29 08:37:19.873 233728 DEBUG nova.virt.libvirt.driver [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:37:19 np0005539552 nova_compute[233724]: 2025-11-29 08:37:19.874 233728 DEBUG nova.virt.libvirt.driver [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Ensure instance console log exists: /var/lib/nova/instances/5748313e-fbb3-409e-83e6-aff548491530/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:37:19 np0005539552 nova_compute[233724]: 2025-11-29 08:37:19.875 233728 DEBUG oslo_concurrency.lockutils [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:19 np0005539552 nova_compute[233724]: 2025-11-29 08:37:19.875 233728 DEBUG oslo_concurrency.lockutils [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:19 np0005539552 nova_compute[233724]: 2025-11-29 08:37:19.875 233728 DEBUG oslo_concurrency.lockutils [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:20.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:20 np0005539552 nova_compute[233724]: 2025-11-29 08:37:20.351 233728 DEBUG nova.network.neutron [-] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:37:20 np0005539552 nova_compute[233724]: 2025-11-29 08:37:20.387 233728 INFO nova.compute.manager [-] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Took 1.52 seconds to deallocate network for instance.#033[00m
Nov 29 03:37:20 np0005539552 nova_compute[233724]: 2025-11-29 08:37:20.448 233728 DEBUG oslo_concurrency.lockutils [None req-573cf473-9a25-40e0-b727-140a97261993 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:20 np0005539552 nova_compute[233724]: 2025-11-29 08:37:20.449 233728 DEBUG oslo_concurrency.lockutils [None req-573cf473-9a25-40e0-b727-140a97261993 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:20 np0005539552 nova_compute[233724]: 2025-11-29 08:37:20.454 233728 DEBUG nova.network.neutron [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Successfully created port: 6fb79eb2-29a8-4947-8bac-7bed17841673 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:37:20 np0005539552 nova_compute[233724]: 2025-11-29 08:37:20.461 233728 DEBUG nova.compute.manager [req-17274296-c684-4f28-98fb-9fcbe33aeef4 req-c81dfb62-c97a-4acf-b1f9-6b2c194e9831 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Received event network-vif-plugged-2280449f-eaa4-4191-81e7-63f3558de392 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:37:20 np0005539552 nova_compute[233724]: 2025-11-29 08:37:20.461 233728 DEBUG oslo_concurrency.lockutils [req-17274296-c684-4f28-98fb-9fcbe33aeef4 req-c81dfb62-c97a-4acf-b1f9-6b2c194e9831 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9722353e-1f13-4d75-97a7-9b251f9385a6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:20 np0005539552 nova_compute[233724]: 2025-11-29 08:37:20.462 233728 DEBUG oslo_concurrency.lockutils [req-17274296-c684-4f28-98fb-9fcbe33aeef4 req-c81dfb62-c97a-4acf-b1f9-6b2c194e9831 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9722353e-1f13-4d75-97a7-9b251f9385a6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:20 np0005539552 nova_compute[233724]: 2025-11-29 08:37:20.462 233728 DEBUG oslo_concurrency.lockutils [req-17274296-c684-4f28-98fb-9fcbe33aeef4 req-c81dfb62-c97a-4acf-b1f9-6b2c194e9831 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9722353e-1f13-4d75-97a7-9b251f9385a6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:20 np0005539552 nova_compute[233724]: 2025-11-29 08:37:20.462 233728 DEBUG nova.compute.manager [req-17274296-c684-4f28-98fb-9fcbe33aeef4 req-c81dfb62-c97a-4acf-b1f9-6b2c194e9831 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] No waiting events found dispatching network-vif-plugged-2280449f-eaa4-4191-81e7-63f3558de392 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:37:20 np0005539552 nova_compute[233724]: 2025-11-29 08:37:20.462 233728 WARNING nova.compute.manager [req-17274296-c684-4f28-98fb-9fcbe33aeef4 req-c81dfb62-c97a-4acf-b1f9-6b2c194e9831 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Received unexpected event network-vif-plugged-2280449f-eaa4-4191-81e7-63f3558de392 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:37:20 np0005539552 nova_compute[233724]: 2025-11-29 08:37:20.546 233728 DEBUG oslo_concurrency.processutils [None req-573cf473-9a25-40e0-b727-140a97261993 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:37:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:37:20.643 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:37:20.643 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:37:20.643 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:20.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:21 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:37:21 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3173720601' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:37:21 np0005539552 nova_compute[233724]: 2025-11-29 08:37:21.039 233728 DEBUG oslo_concurrency.processutils [None req-573cf473-9a25-40e0-b727-140a97261993 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:37:21 np0005539552 nova_compute[233724]: 2025-11-29 08:37:21.049 233728 DEBUG nova.compute.provider_tree [None req-573cf473-9a25-40e0-b727-140a97261993 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:37:21 np0005539552 nova_compute[233724]: 2025-11-29 08:37:21.068 233728 DEBUG nova.scheduler.client.report [None req-573cf473-9a25-40e0-b727-140a97261993 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:37:21 np0005539552 nova_compute[233724]: 2025-11-29 08:37:21.097 233728 DEBUG oslo_concurrency.lockutils [None req-573cf473-9a25-40e0-b727-140a97261993 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:21 np0005539552 nova_compute[233724]: 2025-11-29 08:37:21.128 233728 DEBUG nova.compute.manager [req-44c9a416-6152-44f7-8d11-80770152a0a2 req-5e3e3ff4-fd6f-43b7-aace-71956f569066 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Received event network-vif-deleted-2280449f-eaa4-4191-81e7-63f3558de392 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:37:21 np0005539552 nova_compute[233724]: 2025-11-29 08:37:21.140 233728 INFO nova.scheduler.client.report [None req-573cf473-9a25-40e0-b727-140a97261993 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Deleted allocations for instance 9722353e-1f13-4d75-97a7-9b251f9385a6#033[00m
Nov 29 03:37:21 np0005539552 nova_compute[233724]: 2025-11-29 08:37:21.209 233728 DEBUG oslo_concurrency.lockutils [None req-573cf473-9a25-40e0-b727-140a97261993 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "9722353e-1f13-4d75-97a7-9b251f9385a6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.132s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:21 np0005539552 nova_compute[233724]: 2025-11-29 08:37:21.439 233728 DEBUG nova.network.neutron [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Successfully updated port: 6fb79eb2-29a8-4947-8bac-7bed17841673 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:37:21 np0005539552 nova_compute[233724]: 2025-11-29 08:37:21.468 233728 DEBUG oslo_concurrency.lockutils [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Acquiring lock "refresh_cache-5748313e-fbb3-409e-83e6-aff548491530" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:37:21 np0005539552 nova_compute[233724]: 2025-11-29 08:37:21.468 233728 DEBUG oslo_concurrency.lockutils [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Acquired lock "refresh_cache-5748313e-fbb3-409e-83e6-aff548491530" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:37:21 np0005539552 nova_compute[233724]: 2025-11-29 08:37:21.469 233728 DEBUG nova.network.neutron [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:37:21 np0005539552 nova_compute[233724]: 2025-11-29 08:37:21.632 233728 DEBUG nova.network.neutron [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:37:22 np0005539552 nova_compute[233724]: 2025-11-29 08:37:21.996 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:22.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:22 np0005539552 nova_compute[233724]: 2025-11-29 08:37:22.429 233728 DEBUG nova.network.neutron [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Updating instance_info_cache with network_info: [{"id": "6fb79eb2-29a8-4947-8bac-7bed17841673", "address": "fa:16:3e:8b:9f:50", "network": {"id": "d042f24f-c2f0-4843-9727-cc3720586596", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1228611198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "889608c71d13429fb37793575792ae74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fb79eb2-29", "ovs_interfaceid": "6fb79eb2-29a8-4947-8bac-7bed17841673", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:37:22 np0005539552 nova_compute[233724]: 2025-11-29 08:37:22.452 233728 DEBUG oslo_concurrency.lockutils [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Releasing lock "refresh_cache-5748313e-fbb3-409e-83e6-aff548491530" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:37:22 np0005539552 nova_compute[233724]: 2025-11-29 08:37:22.452 233728 DEBUG nova.compute.manager [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Instance network_info: |[{"id": "6fb79eb2-29a8-4947-8bac-7bed17841673", "address": "fa:16:3e:8b:9f:50", "network": {"id": "d042f24f-c2f0-4843-9727-cc3720586596", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1228611198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "889608c71d13429fb37793575792ae74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fb79eb2-29", "ovs_interfaceid": "6fb79eb2-29a8-4947-8bac-7bed17841673", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:37:22 np0005539552 nova_compute[233724]: 2025-11-29 08:37:22.455 233728 DEBUG nova.virt.libvirt.driver [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Start _get_guest_xml network_info=[{"id": "6fb79eb2-29a8-4947-8bac-7bed17841673", "address": "fa:16:3e:8b:9f:50", "network": {"id": "d042f24f-c2f0-4843-9727-cc3720586596", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1228611198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "889608c71d13429fb37793575792ae74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fb79eb2-29", "ovs_interfaceid": "6fb79eb2-29a8-4947-8bac-7bed17841673", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:37:22 np0005539552 nova_compute[233724]: 2025-11-29 08:37:22.461 233728 WARNING nova.virt.libvirt.driver [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:37:22 np0005539552 nova_compute[233724]: 2025-11-29 08:37:22.468 233728 DEBUG nova.virt.libvirt.host [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:37:22 np0005539552 nova_compute[233724]: 2025-11-29 08:37:22.469 233728 DEBUG nova.virt.libvirt.host [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:37:22 np0005539552 nova_compute[233724]: 2025-11-29 08:37:22.472 233728 DEBUG nova.virt.libvirt.host [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:37:22 np0005539552 nova_compute[233724]: 2025-11-29 08:37:22.473 233728 DEBUG nova.virt.libvirt.host [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:37:22 np0005539552 nova_compute[233724]: 2025-11-29 08:37:22.475 233728 DEBUG nova.virt.libvirt.driver [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:37:22 np0005539552 nova_compute[233724]: 2025-11-29 08:37:22.475 233728 DEBUG nova.virt.hardware [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:37:22 np0005539552 nova_compute[233724]: 2025-11-29 08:37:22.476 233728 DEBUG nova.virt.hardware [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:37:22 np0005539552 nova_compute[233724]: 2025-11-29 08:37:22.477 233728 DEBUG nova.virt.hardware [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:37:22 np0005539552 nova_compute[233724]: 2025-11-29 08:37:22.477 233728 DEBUG nova.virt.hardware [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:37:22 np0005539552 nova_compute[233724]: 2025-11-29 08:37:22.477 233728 DEBUG nova.virt.hardware [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:37:22 np0005539552 nova_compute[233724]: 2025-11-29 08:37:22.478 233728 DEBUG nova.virt.hardware [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:37:22 np0005539552 nova_compute[233724]: 2025-11-29 08:37:22.478 233728 DEBUG nova.virt.hardware [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:37:22 np0005539552 nova_compute[233724]: 2025-11-29 08:37:22.479 233728 DEBUG nova.virt.hardware [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:37:22 np0005539552 nova_compute[233724]: 2025-11-29 08:37:22.479 233728 DEBUG nova.virt.hardware [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:37:22 np0005539552 nova_compute[233724]: 2025-11-29 08:37:22.480 233728 DEBUG nova.virt.hardware [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:37:22 np0005539552 nova_compute[233724]: 2025-11-29 08:37:22.480 233728 DEBUG nova.virt.hardware [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:37:22 np0005539552 nova_compute[233724]: 2025-11-29 08:37:22.487 233728 DEBUG oslo_concurrency.processutils [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:37:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:22.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:37:22 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/857264792' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:37:22 np0005539552 nova_compute[233724]: 2025-11-29 08:37:22.923 233728 DEBUG oslo_concurrency.processutils [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:37:22 np0005539552 nova_compute[233724]: 2025-11-29 08:37:22.957 233728 DEBUG nova.storage.rbd_utils [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] rbd image 5748313e-fbb3-409e-83e6-aff548491530_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:37:22 np0005539552 nova_compute[233724]: 2025-11-29 08:37:22.960 233728 DEBUG oslo_concurrency.processutils [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:37:23 np0005539552 nova_compute[233724]: 2025-11-29 08:37:23.354 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:37:23 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4265755800' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:37:23 np0005539552 nova_compute[233724]: 2025-11-29 08:37:23.389 233728 DEBUG oslo_concurrency.processutils [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:37:23 np0005539552 nova_compute[233724]: 2025-11-29 08:37:23.392 233728 DEBUG nova.virt.libvirt.vif [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:37:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-716574022',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-716574022',id=180,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBSzclE1qHhmNtB+Dlhxw2Ps8HWdSPocEaWlsxDme1qVoS+t7CcJ/Bo6lrhqUu/ZA6JT3SxHX6WwNieCHu9AGemn9sAzHapyUGRyjBFuHCFhJn85rjzwwkttpV/QWN0gNg==',key_name='tempest-keypair-1744874454',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='889608c71d13429fb37793575792ae74',ramdisk_id='',reservation_id='r-pwfpf00q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeShelveTestJSON-907905934',owner_user_name='tempest-AttachVolumeShelveTestJSON-907905934-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:37:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eeba34466b8f4a1bb5f742f1e811053c',uuid=5748313e-fbb3-409e-83e6-aff548491530,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6fb79eb2-29a8-4947-8bac-7bed17841673", "address": "fa:16:3e:8b:9f:50", "network": {"id": "d042f24f-c2f0-4843-9727-cc3720586596", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1228611198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "889608c71d13429fb37793575792ae74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fb79eb2-29", "ovs_interfaceid": "6fb79eb2-29a8-4947-8bac-7bed17841673", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:37:23 np0005539552 nova_compute[233724]: 2025-11-29 08:37:23.393 233728 DEBUG nova.network.os_vif_util [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Converting VIF {"id": "6fb79eb2-29a8-4947-8bac-7bed17841673", "address": "fa:16:3e:8b:9f:50", "network": {"id": "d042f24f-c2f0-4843-9727-cc3720586596", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1228611198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "889608c71d13429fb37793575792ae74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fb79eb2-29", "ovs_interfaceid": "6fb79eb2-29a8-4947-8bac-7bed17841673", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:37:23 np0005539552 nova_compute[233724]: 2025-11-29 08:37:23.395 233728 DEBUG nova.network.os_vif_util [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:9f:50,bridge_name='br-int',has_traffic_filtering=True,id=6fb79eb2-29a8-4947-8bac-7bed17841673,network=Network(d042f24f-c2f0-4843-9727-cc3720586596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6fb79eb2-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:37:23 np0005539552 nova_compute[233724]: 2025-11-29 08:37:23.397 233728 DEBUG nova.objects.instance [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5748313e-fbb3-409e-83e6-aff548491530 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:37:23 np0005539552 nova_compute[233724]: 2025-11-29 08:37:23.422 233728 DEBUG nova.virt.libvirt.driver [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:37:23 np0005539552 nova_compute[233724]:  <uuid>5748313e-fbb3-409e-83e6-aff548491530</uuid>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:  <name>instance-000000b4</name>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:37:23 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:      <nova:name>tempest-AttachVolumeShelveTestJSON-server-716574022</nova:name>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:37:22</nova:creationTime>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:37:23 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:        <nova:user uuid="eeba34466b8f4a1bb5f742f1e811053c">tempest-AttachVolumeShelveTestJSON-907905934-project-member</nova:user>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:        <nova:project uuid="889608c71d13429fb37793575792ae74">tempest-AttachVolumeShelveTestJSON-907905934</nova:project>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:        <nova:port uuid="6fb79eb2-29a8-4947-8bac-7bed17841673">
Nov 29 03:37:23 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:      <entry name="serial">5748313e-fbb3-409e-83e6-aff548491530</entry>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:      <entry name="uuid">5748313e-fbb3-409e-83e6-aff548491530</entry>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:37:23 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/5748313e-fbb3-409e-83e6-aff548491530_disk">
Nov 29 03:37:23 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:37:23 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:37:23 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/5748313e-fbb3-409e-83e6-aff548491530_disk.config">
Nov 29 03:37:23 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:37:23 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:37:23 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:8b:9f:50"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:      <target dev="tap6fb79eb2-29"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:37:23 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/5748313e-fbb3-409e-83e6-aff548491530/console.log" append="off"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:37:23 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:37:23 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:37:23 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:37:23 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:37:23 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:37:23 np0005539552 nova_compute[233724]: 2025-11-29 08:37:23.424 233728 DEBUG nova.compute.manager [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Preparing to wait for external event network-vif-plugged-6fb79eb2-29a8-4947-8bac-7bed17841673 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:37:23 np0005539552 nova_compute[233724]: 2025-11-29 08:37:23.424 233728 DEBUG oslo_concurrency.lockutils [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Acquiring lock "5748313e-fbb3-409e-83e6-aff548491530-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:23 np0005539552 nova_compute[233724]: 2025-11-29 08:37:23.425 233728 DEBUG oslo_concurrency.lockutils [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "5748313e-fbb3-409e-83e6-aff548491530-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:23 np0005539552 nova_compute[233724]: 2025-11-29 08:37:23.426 233728 DEBUG oslo_concurrency.lockutils [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "5748313e-fbb3-409e-83e6-aff548491530-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:23 np0005539552 nova_compute[233724]: 2025-11-29 08:37:23.428 233728 DEBUG nova.virt.libvirt.vif [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:37:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-716574022',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-716574022',id=180,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBSzclE1qHhmNtB+Dlhxw2Ps8HWdSPocEaWlsxDme1qVoS+t7CcJ/Bo6lrhqUu/ZA6JT3SxHX6WwNieCHu9AGemn9sAzHapyUGRyjBFuHCFhJn85rjzwwkttpV/QWN0gNg==',key_name='tempest-keypair-1744874454',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='889608c71d13429fb37793575792ae74',ramdisk_id='',reservation_id='r-pwfpf00q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeShelveTestJSON-907905934',owner_user_name='tempest-AttachVolumeShelveTestJSON-907905934-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:37:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eeba34466b8f4a1bb5f742f1e811053c',uuid=5748313e-fbb3-409e-83e6-aff548491530,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6fb79eb2-29a8-4947-8bac-7bed17841673", "address": "fa:16:3e:8b:9f:50", "network": {"id": "d042f24f-c2f0-4843-9727-cc3720586596", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1228611198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "889608c71d13429fb37793575792ae74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fb79eb2-29", "ovs_interfaceid": "6fb79eb2-29a8-4947-8bac-7bed17841673", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:37:23 np0005539552 nova_compute[233724]: 2025-11-29 08:37:23.428 233728 DEBUG nova.network.os_vif_util [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Converting VIF {"id": "6fb79eb2-29a8-4947-8bac-7bed17841673", "address": "fa:16:3e:8b:9f:50", "network": {"id": "d042f24f-c2f0-4843-9727-cc3720586596", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1228611198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "889608c71d13429fb37793575792ae74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fb79eb2-29", "ovs_interfaceid": "6fb79eb2-29a8-4947-8bac-7bed17841673", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:37:23 np0005539552 nova_compute[233724]: 2025-11-29 08:37:23.430 233728 DEBUG nova.network.os_vif_util [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:9f:50,bridge_name='br-int',has_traffic_filtering=True,id=6fb79eb2-29a8-4947-8bac-7bed17841673,network=Network(d042f24f-c2f0-4843-9727-cc3720586596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6fb79eb2-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:37:23 np0005539552 nova_compute[233724]: 2025-11-29 08:37:23.432 233728 DEBUG os_vif [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:9f:50,bridge_name='br-int',has_traffic_filtering=True,id=6fb79eb2-29a8-4947-8bac-7bed17841673,network=Network(d042f24f-c2f0-4843-9727-cc3720586596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6fb79eb2-29') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:37:23 np0005539552 nova_compute[233724]: 2025-11-29 08:37:23.434 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:23 np0005539552 nova_compute[233724]: 2025-11-29 08:37:23.435 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:37:23 np0005539552 nova_compute[233724]: 2025-11-29 08:37:23.436 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:37:23 np0005539552 nova_compute[233724]: 2025-11-29 08:37:23.440 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:23 np0005539552 nova_compute[233724]: 2025-11-29 08:37:23.441 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6fb79eb2-29, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:37:23 np0005539552 nova_compute[233724]: 2025-11-29 08:37:23.442 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6fb79eb2-29, col_values=(('external_ids', {'iface-id': '6fb79eb2-29a8-4947-8bac-7bed17841673', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8b:9f:50', 'vm-uuid': '5748313e-fbb3-409e-83e6-aff548491530'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:37:23 np0005539552 nova_compute[233724]: 2025-11-29 08:37:23.444 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:23 np0005539552 NetworkManager[48926]: <info>  [1764405443.4455] manager: (tap6fb79eb2-29): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/357)
Nov 29 03:37:23 np0005539552 nova_compute[233724]: 2025-11-29 08:37:23.447 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:37:23 np0005539552 nova_compute[233724]: 2025-11-29 08:37:23.454 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:23 np0005539552 nova_compute[233724]: 2025-11-29 08:37:23.455 233728 INFO os_vif [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:9f:50,bridge_name='br-int',has_traffic_filtering=True,id=6fb79eb2-29a8-4947-8bac-7bed17841673,network=Network(d042f24f-c2f0-4843-9727-cc3720586596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6fb79eb2-29')#033[00m
Nov 29 03:37:23 np0005539552 nova_compute[233724]: 2025-11-29 08:37:23.547 233728 DEBUG nova.virt.libvirt.driver [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:37:23 np0005539552 nova_compute[233724]: 2025-11-29 08:37:23.548 233728 DEBUG nova.virt.libvirt.driver [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:37:23 np0005539552 nova_compute[233724]: 2025-11-29 08:37:23.548 233728 DEBUG nova.virt.libvirt.driver [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] No VIF found with MAC fa:16:3e:8b:9f:50, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:37:23 np0005539552 nova_compute[233724]: 2025-11-29 08:37:23.549 233728 INFO nova.virt.libvirt.driver [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Using config drive#033[00m
Nov 29 03:37:23 np0005539552 nova_compute[233724]: 2025-11-29 08:37:23.589 233728 DEBUG nova.storage.rbd_utils [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] rbd image 5748313e-fbb3-409e-83e6-aff548491530_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:37:23 np0005539552 nova_compute[233724]: 2025-11-29 08:37:23.601 233728 DEBUG nova.compute.manager [req-70fea9e2-7caa-48f1-9acd-3b752646264e req-b6454fb7-a9a3-48a7-a2a4-fad157b18d30 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Received event network-changed-6fb79eb2-29a8-4947-8bac-7bed17841673 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:37:23 np0005539552 nova_compute[233724]: 2025-11-29 08:37:23.601 233728 DEBUG nova.compute.manager [req-70fea9e2-7caa-48f1-9acd-3b752646264e req-b6454fb7-a9a3-48a7-a2a4-fad157b18d30 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Refreshing instance network info cache due to event network-changed-6fb79eb2-29a8-4947-8bac-7bed17841673. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:37:23 np0005539552 nova_compute[233724]: 2025-11-29 08:37:23.602 233728 DEBUG oslo_concurrency.lockutils [req-70fea9e2-7caa-48f1-9acd-3b752646264e req-b6454fb7-a9a3-48a7-a2a4-fad157b18d30 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-5748313e-fbb3-409e-83e6-aff548491530" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:37:23 np0005539552 nova_compute[233724]: 2025-11-29 08:37:23.602 233728 DEBUG oslo_concurrency.lockutils [req-70fea9e2-7caa-48f1-9acd-3b752646264e req-b6454fb7-a9a3-48a7-a2a4-fad157b18d30 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-5748313e-fbb3-409e-83e6-aff548491530" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:37:23 np0005539552 nova_compute[233724]: 2025-11-29 08:37:23.603 233728 DEBUG nova.network.neutron [req-70fea9e2-7caa-48f1-9acd-3b752646264e req-b6454fb7-a9a3-48a7-a2a4-fad157b18d30 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Refreshing network info cache for port 6fb79eb2-29a8-4947-8bac-7bed17841673 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:37:23 np0005539552 nova_compute[233724]: 2025-11-29 08:37:23.606 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:37:23.607 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:37:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:37:23.609 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:37:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e377 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:23 np0005539552 nova_compute[233724]: 2025-11-29 08:37:23.960 233728 INFO nova.virt.libvirt.driver [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Creating config drive at /var/lib/nova/instances/5748313e-fbb3-409e-83e6-aff548491530/disk.config#033[00m
Nov 29 03:37:23 np0005539552 nova_compute[233724]: 2025-11-29 08:37:23.972 233728 DEBUG oslo_concurrency.processutils [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5748313e-fbb3-409e-83e6-aff548491530/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5s7tu08m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:37:24 np0005539552 nova_compute[233724]: 2025-11-29 08:37:24.118 233728 DEBUG oslo_concurrency.processutils [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5748313e-fbb3-409e-83e6-aff548491530/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5s7tu08m" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:37:24 np0005539552 nova_compute[233724]: 2025-11-29 08:37:24.165 233728 DEBUG nova.storage.rbd_utils [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] rbd image 5748313e-fbb3-409e-83e6-aff548491530_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:37:24 np0005539552 nova_compute[233724]: 2025-11-29 08:37:24.170 233728 DEBUG oslo_concurrency.processutils [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5748313e-fbb3-409e-83e6-aff548491530/disk.config 5748313e-fbb3-409e-83e6-aff548491530_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:37:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:24.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:24 np0005539552 nova_compute[233724]: 2025-11-29 08:37:24.384 233728 DEBUG oslo_concurrency.processutils [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5748313e-fbb3-409e-83e6-aff548491530/disk.config 5748313e-fbb3-409e-83e6-aff548491530_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.214s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:37:24 np0005539552 nova_compute[233724]: 2025-11-29 08:37:24.385 233728 INFO nova.virt.libvirt.driver [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Deleting local config drive /var/lib/nova/instances/5748313e-fbb3-409e-83e6-aff548491530/disk.config because it was imported into RBD.#033[00m
Nov 29 03:37:24 np0005539552 kernel: tap6fb79eb2-29: entered promiscuous mode
Nov 29 03:37:24 np0005539552 nova_compute[233724]: 2025-11-29 08:37:24.455 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:24 np0005539552 ovn_controller[133798]: 2025-11-29T08:37:24Z|00814|binding|INFO|Claiming lport 6fb79eb2-29a8-4947-8bac-7bed17841673 for this chassis.
Nov 29 03:37:24 np0005539552 ovn_controller[133798]: 2025-11-29T08:37:24Z|00815|binding|INFO|6fb79eb2-29a8-4947-8bac-7bed17841673: Claiming fa:16:3e:8b:9f:50 10.100.0.3
Nov 29 03:37:24 np0005539552 NetworkManager[48926]: <info>  [1764405444.4568] manager: (tap6fb79eb2-29): new Tun device (/org/freedesktop/NetworkManager/Devices/358)
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:37:24.466 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:9f:50 10.100.0.3'], port_security=['fa:16:3e:8b:9f:50 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5748313e-fbb3-409e-83e6-aff548491530', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d042f24f-c2f0-4843-9727-cc3720586596', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '889608c71d13429fb37793575792ae74', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c266cfe3-9f5e-4a42-93d4-52df1525211e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ccac3ef-f009-44e6-937a-0ec744b8cfbf, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=6fb79eb2-29a8-4947-8bac-7bed17841673) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:37:24.467 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 6fb79eb2-29a8-4947-8bac-7bed17841673 in datapath d042f24f-c2f0-4843-9727-cc3720586596 bound to our chassis#033[00m
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:37:24.469 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d042f24f-c2f0-4843-9727-cc3720586596#033[00m
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:37:24.482 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[9e3d4f1e-c2bf-4bf3-a1ee-a51187c9a1c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:37:24.482 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd042f24f-c1 in ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:37:24.485 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd042f24f-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:37:24.485 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[893bbd3b-71e8-4783-9ed9-ea309916ecd4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:37:24.486 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3320d7f1-f280-48cc-ad5d-fafd8cad3b4e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:24 np0005539552 systemd-udevd[308505]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:37:24 np0005539552 ovn_controller[133798]: 2025-11-29T08:37:24Z|00816|binding|INFO|Setting lport 6fb79eb2-29a8-4947-8bac-7bed17841673 ovn-installed in OVS
Nov 29 03:37:24 np0005539552 ovn_controller[133798]: 2025-11-29T08:37:24Z|00817|binding|INFO|Setting lport 6fb79eb2-29a8-4947-8bac-7bed17841673 up in Southbound
Nov 29 03:37:24 np0005539552 nova_compute[233724]: 2025-11-29 08:37:24.499 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:24 np0005539552 nova_compute[233724]: 2025-11-29 08:37:24.504 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:24 np0005539552 NetworkManager[48926]: <info>  [1764405444.5067] device (tap6fb79eb2-29): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:37:24 np0005539552 NetworkManager[48926]: <info>  [1764405444.5080] device (tap6fb79eb2-29): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:37:24.507 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[fa847dc8-fb46-4359-8ccb-eae286339567]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:24 np0005539552 systemd-machined[196379]: New machine qemu-83-instance-000000b4.
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:37:24.521 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0dc24eb8-46fa-444f-8ddd-f4336c73de0b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:24 np0005539552 systemd[1]: Started Virtual Machine qemu-83-instance-000000b4.
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:37:24.556 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[c76327a3-3e9e-45b1-8330-d86079b004d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:24 np0005539552 NetworkManager[48926]: <info>  [1764405444.5677] manager: (tapd042f24f-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/359)
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:37:24.566 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[eb41c625-8db1-4d4e-8503-94f7dd393417]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:24 np0005539552 systemd-udevd[308509]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:37:24.608 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[c9b6dd80-e2e7-40d8-83b6-08d62310f7ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:37:24.612 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[5f18b9db-c599-4ec2-b849-014b09824437]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:24 np0005539552 NetworkManager[48926]: <info>  [1764405444.6368] device (tapd042f24f-c0): carrier: link connected
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:37:24.646 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[e2537e58-c539-465b-ba23-0db9c2481842]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:37:24.675 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[09bb3c29-8fd7-4caa-b82a-0f59be2ec83e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd042f24f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:67:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 240], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 849032, 'reachable_time': 22036, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308538, 'error': None, 'target': 'ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:37:24.702 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f9c73d00-40e4-44d4-949d-b80a0e9983e1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe44:6732'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 849032, 'tstamp': 849032}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308539, 'error': None, 'target': 'ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:24.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:37:24.732 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6b444ff8-0d1c-4eb1-910f-6caf896c630c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd042f24f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:67:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 240], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 849032, 'reachable_time': 22036, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 308540, 'error': None, 'target': 'ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:37:24.777 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f2d7bb6f-18cc-441c-a73b-e5e0a1fe374b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:37:24.874 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[170660be-82c6-475e-9e0d-9d39fdf32853]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:37:24.876 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd042f24f-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:37:24.876 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:37:24.877 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd042f24f-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:37:24 np0005539552 nova_compute[233724]: 2025-11-29 08:37:24.879 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:24 np0005539552 kernel: tapd042f24f-c0: entered promiscuous mode
Nov 29 03:37:24 np0005539552 NetworkManager[48926]: <info>  [1764405444.8820] manager: (tapd042f24f-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/360)
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:37:24.885 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd042f24f-c0, col_values=(('external_ids', {'iface-id': 'e44bf17c-ebb7-4e62-850a-20ff20a74960'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:37:24 np0005539552 nova_compute[233724]: 2025-11-29 08:37:24.887 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:24 np0005539552 nova_compute[233724]: 2025-11-29 08:37:24.888 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:24 np0005539552 ovn_controller[133798]: 2025-11-29T08:37:24Z|00818|binding|INFO|Releasing lport e44bf17c-ebb7-4e62-850a-20ff20a74960 from this chassis (sb_readonly=0)
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:37:24.890 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d042f24f-c2f0-4843-9727-cc3720586596.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d042f24f-c2f0-4843-9727-cc3720586596.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:37:24.898 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[219cdffa-49c9-4957-ba4e-60bd6b046e9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:37:24.900 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-d042f24f-c2f0-4843-9727-cc3720586596
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/d042f24f-c2f0-4843-9727-cc3720586596.pid.haproxy
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID d042f24f-c2f0-4843-9727-cc3720586596
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:37:24 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:37:24.901 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596', 'env', 'PROCESS_TAG=haproxy-d042f24f-c2f0-4843-9727-cc3720586596', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d042f24f-c2f0-4843-9727-cc3720586596.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:37:24 np0005539552 nova_compute[233724]: 2025-11-29 08:37:24.911 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:25 np0005539552 nova_compute[233724]: 2025-11-29 08:37:25.189 233728 DEBUG nova.network.neutron [req-70fea9e2-7caa-48f1-9acd-3b752646264e req-b6454fb7-a9a3-48a7-a2a4-fad157b18d30 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Updated VIF entry in instance network info cache for port 6fb79eb2-29a8-4947-8bac-7bed17841673. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:37:25 np0005539552 nova_compute[233724]: 2025-11-29 08:37:25.189 233728 DEBUG nova.network.neutron [req-70fea9e2-7caa-48f1-9acd-3b752646264e req-b6454fb7-a9a3-48a7-a2a4-fad157b18d30 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Updating instance_info_cache with network_info: [{"id": "6fb79eb2-29a8-4947-8bac-7bed17841673", "address": "fa:16:3e:8b:9f:50", "network": {"id": "d042f24f-c2f0-4843-9727-cc3720586596", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1228611198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "889608c71d13429fb37793575792ae74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fb79eb2-29", "ovs_interfaceid": "6fb79eb2-29a8-4947-8bac-7bed17841673", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:37:25 np0005539552 podman[308591]: 2025-11-29 08:37:25.300204021 +0000 UTC m=+0.026007401 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:37:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:26.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:26 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e378 e378: 3 total, 3 up, 3 in
Nov 29 03:37:26 np0005539552 nova_compute[233724]: 2025-11-29 08:37:26.358 233728 DEBUG nova.compute.manager [req-c3f983b2-394b-4107-8640-f5fba15dee43 req-6009fde2-856d-4f60-ad8c-8836fe6dcd2b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Received event network-vif-plugged-6fb79eb2-29a8-4947-8bac-7bed17841673 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:37:26 np0005539552 nova_compute[233724]: 2025-11-29 08:37:26.359 233728 DEBUG oslo_concurrency.lockutils [req-c3f983b2-394b-4107-8640-f5fba15dee43 req-6009fde2-856d-4f60-ad8c-8836fe6dcd2b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5748313e-fbb3-409e-83e6-aff548491530-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:26 np0005539552 nova_compute[233724]: 2025-11-29 08:37:26.359 233728 DEBUG oslo_concurrency.lockutils [req-c3f983b2-394b-4107-8640-f5fba15dee43 req-6009fde2-856d-4f60-ad8c-8836fe6dcd2b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5748313e-fbb3-409e-83e6-aff548491530-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:26 np0005539552 nova_compute[233724]: 2025-11-29 08:37:26.360 233728 DEBUG oslo_concurrency.lockutils [req-c3f983b2-394b-4107-8640-f5fba15dee43 req-6009fde2-856d-4f60-ad8c-8836fe6dcd2b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5748313e-fbb3-409e-83e6-aff548491530-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:26 np0005539552 nova_compute[233724]: 2025-11-29 08:37:26.361 233728 DEBUG nova.compute.manager [req-c3f983b2-394b-4107-8640-f5fba15dee43 req-6009fde2-856d-4f60-ad8c-8836fe6dcd2b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Processing event network-vif-plugged-6fb79eb2-29a8-4947-8bac-7bed17841673 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:37:26 np0005539552 nova_compute[233724]: 2025-11-29 08:37:26.364 233728 DEBUG oslo_concurrency.lockutils [req-70fea9e2-7caa-48f1-9acd-3b752646264e req-b6454fb7-a9a3-48a7-a2a4-fad157b18d30 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-5748313e-fbb3-409e-83e6-aff548491530" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:37:26 np0005539552 podman[308591]: 2025-11-29 08:37:26.395705107 +0000 UTC m=+1.121508497 container create 7c1cf27c95cd2cc1acef5249ef63f04f51594fce74d4eee46e40ce87000d76e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 29 03:37:26 np0005539552 systemd[1]: Started libpod-conmon-7c1cf27c95cd2cc1acef5249ef63f04f51594fce74d4eee46e40ce87000d76e4.scope.
Nov 29 03:37:26 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:37:26 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d693864fbc2da2ccb55910b59ab55d1e01a96325ad026905bda8a3eae5a4a37/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:37:26 np0005539552 podman[308591]: 2025-11-29 08:37:26.514988586 +0000 UTC m=+1.240791986 container init 7c1cf27c95cd2cc1acef5249ef63f04f51594fce74d4eee46e40ce87000d76e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 29 03:37:26 np0005539552 podman[308591]: 2025-11-29 08:37:26.525880469 +0000 UTC m=+1.251683849 container start 7c1cf27c95cd2cc1acef5249ef63f04f51594fce74d4eee46e40ce87000d76e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:37:26 np0005539552 neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596[308675]: [NOTICE]   (308684) : New worker (308687) forked
Nov 29 03:37:26 np0005539552 neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596[308675]: [NOTICE]   (308684) : Loading success.
Nov 29 03:37:26 np0005539552 nova_compute[233724]: 2025-11-29 08:37:26.573 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405446.572965, 5748313e-fbb3-409e-83e6-aff548491530 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:37:26 np0005539552 nova_compute[233724]: 2025-11-29 08:37:26.574 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 5748313e-fbb3-409e-83e6-aff548491530] VM Started (Lifecycle Event)#033[00m
Nov 29 03:37:26 np0005539552 nova_compute[233724]: 2025-11-29 08:37:26.575 233728 DEBUG nova.compute.manager [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:37:26 np0005539552 nova_compute[233724]: 2025-11-29 08:37:26.579 233728 DEBUG nova.virt.libvirt.driver [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:37:26 np0005539552 nova_compute[233724]: 2025-11-29 08:37:26.585 233728 INFO nova.virt.libvirt.driver [-] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Instance spawned successfully.#033[00m
Nov 29 03:37:26 np0005539552 nova_compute[233724]: 2025-11-29 08:37:26.586 233728 DEBUG nova.virt.libvirt.driver [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:37:26 np0005539552 nova_compute[233724]: 2025-11-29 08:37:26.599 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:37:26 np0005539552 nova_compute[233724]: 2025-11-29 08:37:26.604 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:37:26 np0005539552 nova_compute[233724]: 2025-11-29 08:37:26.618 233728 DEBUG nova.virt.libvirt.driver [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:37:26 np0005539552 nova_compute[233724]: 2025-11-29 08:37:26.619 233728 DEBUG nova.virt.libvirt.driver [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:37:26 np0005539552 nova_compute[233724]: 2025-11-29 08:37:26.619 233728 DEBUG nova.virt.libvirt.driver [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:37:26 np0005539552 nova_compute[233724]: 2025-11-29 08:37:26.620 233728 DEBUG nova.virt.libvirt.driver [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:37:26 np0005539552 nova_compute[233724]: 2025-11-29 08:37:26.621 233728 DEBUG nova.virt.libvirt.driver [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:37:26 np0005539552 nova_compute[233724]: 2025-11-29 08:37:26.621 233728 DEBUG nova.virt.libvirt.driver [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:37:26 np0005539552 nova_compute[233724]: 2025-11-29 08:37:26.632 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 5748313e-fbb3-409e-83e6-aff548491530] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:37:26 np0005539552 nova_compute[233724]: 2025-11-29 08:37:26.633 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405446.573244, 5748313e-fbb3-409e-83e6-aff548491530 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:37:26 np0005539552 nova_compute[233724]: 2025-11-29 08:37:26.634 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 5748313e-fbb3-409e-83e6-aff548491530] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:37:26 np0005539552 nova_compute[233724]: 2025-11-29 08:37:26.683 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:37:26 np0005539552 nova_compute[233724]: 2025-11-29 08:37:26.689 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405446.5779326, 5748313e-fbb3-409e-83e6-aff548491530 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:37:26 np0005539552 nova_compute[233724]: 2025-11-29 08:37:26.690 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 5748313e-fbb3-409e-83e6-aff548491530] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:37:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:26.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:26 np0005539552 nova_compute[233724]: 2025-11-29 08:37:26.737 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:37:26 np0005539552 nova_compute[233724]: 2025-11-29 08:37:26.742 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:37:26 np0005539552 nova_compute[233724]: 2025-11-29 08:37:26.776 233728 INFO nova.compute.manager [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Took 7.77 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:37:26 np0005539552 nova_compute[233724]: 2025-11-29 08:37:26.777 233728 DEBUG nova.compute.manager [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:37:26 np0005539552 nova_compute[233724]: 2025-11-29 08:37:26.778 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 5748313e-fbb3-409e-83e6-aff548491530] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:37:26 np0005539552 nova_compute[233724]: 2025-11-29 08:37:26.857 233728 INFO nova.compute.manager [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Took 8.73 seconds to build instance.#033[00m
Nov 29 03:37:26 np0005539552 nova_compute[233724]: 2025-11-29 08:37:26.873 233728 DEBUG oslo_concurrency.lockutils [None req-0672588e-c0b5-43cb-8c91-27423e65d401 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "5748313e-fbb3-409e-83e6-aff548491530" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.828s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:27 np0005539552 nova_compute[233724]: 2025-11-29 08:37:27.000 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:28.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:28 np0005539552 nova_compute[233724]: 2025-11-29 08:37:28.445 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:37:28.611 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:37:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:28 np0005539552 nova_compute[233724]: 2025-11-29 08:37:28.678 233728 DEBUG nova.compute.manager [req-8d46987a-ee72-40fb-83f4-6de3ff626b99 req-014dde7b-d638-4787-a47b-52fed2a88e9f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Received event network-vif-plugged-6fb79eb2-29a8-4947-8bac-7bed17841673 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:37:28 np0005539552 nova_compute[233724]: 2025-11-29 08:37:28.679 233728 DEBUG oslo_concurrency.lockutils [req-8d46987a-ee72-40fb-83f4-6de3ff626b99 req-014dde7b-d638-4787-a47b-52fed2a88e9f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5748313e-fbb3-409e-83e6-aff548491530-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:28 np0005539552 nova_compute[233724]: 2025-11-29 08:37:28.679 233728 DEBUG oslo_concurrency.lockutils [req-8d46987a-ee72-40fb-83f4-6de3ff626b99 req-014dde7b-d638-4787-a47b-52fed2a88e9f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5748313e-fbb3-409e-83e6-aff548491530-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:28 np0005539552 nova_compute[233724]: 2025-11-29 08:37:28.680 233728 DEBUG oslo_concurrency.lockutils [req-8d46987a-ee72-40fb-83f4-6de3ff626b99 req-014dde7b-d638-4787-a47b-52fed2a88e9f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5748313e-fbb3-409e-83e6-aff548491530-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:28 np0005539552 nova_compute[233724]: 2025-11-29 08:37:28.680 233728 DEBUG nova.compute.manager [req-8d46987a-ee72-40fb-83f4-6de3ff626b99 req-014dde7b-d638-4787-a47b-52fed2a88e9f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] No waiting events found dispatching network-vif-plugged-6fb79eb2-29a8-4947-8bac-7bed17841673 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:37:28 np0005539552 nova_compute[233724]: 2025-11-29 08:37:28.681 233728 WARNING nova.compute.manager [req-8d46987a-ee72-40fb-83f4-6de3ff626b99 req-014dde7b-d638-4787-a47b-52fed2a88e9f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Received unexpected event network-vif-plugged-6fb79eb2-29a8-4947-8bac-7bed17841673 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:37:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:28.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:29 np0005539552 podman[308698]: 2025-11-29 08:37:29.998108041 +0000 UTC m=+0.085850631 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:37:30 np0005539552 podman[308699]: 2025-11-29 08:37:30.012666413 +0000 UTC m=+0.098856991 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Nov 29 03:37:30 np0005539552 podman[308700]: 2025-11-29 08:37:30.036384501 +0000 UTC m=+0.108081929 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 03:37:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:30.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:30.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:30 np0005539552 nova_compute[233724]: 2025-11-29 08:37:30.813 233728 DEBUG nova.compute.manager [req-47e62357-2843-4aa9-ad5b-29aeab2991e5 req-e25948b1-db4a-4472-b477-df03e7ffc814 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Received event network-changed-6fb79eb2-29a8-4947-8bac-7bed17841673 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:37:30 np0005539552 nova_compute[233724]: 2025-11-29 08:37:30.813 233728 DEBUG nova.compute.manager [req-47e62357-2843-4aa9-ad5b-29aeab2991e5 req-e25948b1-db4a-4472-b477-df03e7ffc814 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Refreshing instance network info cache due to event network-changed-6fb79eb2-29a8-4947-8bac-7bed17841673. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:37:30 np0005539552 nova_compute[233724]: 2025-11-29 08:37:30.813 233728 DEBUG oslo_concurrency.lockutils [req-47e62357-2843-4aa9-ad5b-29aeab2991e5 req-e25948b1-db4a-4472-b477-df03e7ffc814 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-5748313e-fbb3-409e-83e6-aff548491530" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:37:30 np0005539552 nova_compute[233724]: 2025-11-29 08:37:30.814 233728 DEBUG oslo_concurrency.lockutils [req-47e62357-2843-4aa9-ad5b-29aeab2991e5 req-e25948b1-db4a-4472-b477-df03e7ffc814 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-5748313e-fbb3-409e-83e6-aff548491530" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:37:30 np0005539552 nova_compute[233724]: 2025-11-29 08:37:30.814 233728 DEBUG nova.network.neutron [req-47e62357-2843-4aa9-ad5b-29aeab2991e5 req-e25948b1-db4a-4472-b477-df03e7ffc814 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Refreshing network info cache for port 6fb79eb2-29a8-4947-8bac-7bed17841673 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:37:32 np0005539552 nova_compute[233724]: 2025-11-29 08:37:32.002 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:32.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:32 np0005539552 nova_compute[233724]: 2025-11-29 08:37:32.518 233728 DEBUG nova.network.neutron [req-47e62357-2843-4aa9-ad5b-29aeab2991e5 req-e25948b1-db4a-4472-b477-df03e7ffc814 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Updated VIF entry in instance network info cache for port 6fb79eb2-29a8-4947-8bac-7bed17841673. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:37:32 np0005539552 nova_compute[233724]: 2025-11-29 08:37:32.519 233728 DEBUG nova.network.neutron [req-47e62357-2843-4aa9-ad5b-29aeab2991e5 req-e25948b1-db4a-4472-b477-df03e7ffc814 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Updating instance_info_cache with network_info: [{"id": "6fb79eb2-29a8-4947-8bac-7bed17841673", "address": "fa:16:3e:8b:9f:50", "network": {"id": "d042f24f-c2f0-4843-9727-cc3720586596", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1228611198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "889608c71d13429fb37793575792ae74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fb79eb2-29", "ovs_interfaceid": "6fb79eb2-29a8-4947-8bac-7bed17841673", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:37:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:32.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:33 np0005539552 nova_compute[233724]: 2025-11-29 08:37:33.320 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405438.3198564, 9722353e-1f13-4d75-97a7-9b251f9385a6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:37:33 np0005539552 nova_compute[233724]: 2025-11-29 08:37:33.321 233728 INFO nova.compute.manager [-] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:37:33 np0005539552 nova_compute[233724]: 2025-11-29 08:37:33.443 233728 DEBUG oslo_concurrency.lockutils [req-47e62357-2843-4aa9-ad5b-29aeab2991e5 req-e25948b1-db4a-4472-b477-df03e7ffc814 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-5748313e-fbb3-409e-83e6-aff548491530" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:37:33 np0005539552 nova_compute[233724]: 2025-11-29 08:37:33.447 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:33 np0005539552 nova_compute[233724]: 2025-11-29 08:37:33.499 233728 DEBUG nova.compute.manager [None req-50619278-7cc4-47a7-b738-e9bd46a473c1 - - - - - -] [instance: 9722353e-1f13-4d75-97a7-9b251f9385a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:37:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:34.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:34.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:36.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:36.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:37 np0005539552 nova_compute[233724]: 2025-11-29 08:37:37.004 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:38.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:38 np0005539552 nova_compute[233724]: 2025-11-29 08:37:38.448 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:38.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:37:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1597330298' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:37:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:37:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1597330298' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:37:39 np0005539552 nova_compute[233724]: 2025-11-29 08:37:39.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:37:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:40.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:40.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:40 np0005539552 nova_compute[233724]: 2025-11-29 08:37:40.889 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:40 np0005539552 nova_compute[233724]: 2025-11-29 08:37:40.890 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:40 np0005539552 nova_compute[233724]: 2025-11-29 08:37:40.891 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:40 np0005539552 nova_compute[233724]: 2025-11-29 08:37:40.891 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:37:40 np0005539552 nova_compute[233724]: 2025-11-29 08:37:40.892 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:37:41 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:37:41 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3667331414' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:37:41 np0005539552 nova_compute[233724]: 2025-11-29 08:37:41.397 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:37:41 np0005539552 nova_compute[233724]: 2025-11-29 08:37:41.680 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-000000b4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:37:41 np0005539552 nova_compute[233724]: 2025-11-29 08:37:41.680 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-000000b4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:37:41 np0005539552 nova_compute[233724]: 2025-11-29 08:37:41.912 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:37:41 np0005539552 nova_compute[233724]: 2025-11-29 08:37:41.913 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4032MB free_disk=20.855499267578125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:37:41 np0005539552 nova_compute[233724]: 2025-11-29 08:37:41.913 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:41 np0005539552 nova_compute[233724]: 2025-11-29 08:37:41.913 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:42 np0005539552 nova_compute[233724]: 2025-11-29 08:37:42.005 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:42 np0005539552 nova_compute[233724]: 2025-11-29 08:37:42.045 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 5748313e-fbb3-409e-83e6-aff548491530 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:37:42 np0005539552 nova_compute[233724]: 2025-11-29 08:37:42.045 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:37:42 np0005539552 nova_compute[233724]: 2025-11-29 08:37:42.046 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:37:42 np0005539552 nova_compute[233724]: 2025-11-29 08:37:42.094 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:37:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:42.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:42 np0005539552 ovn_controller[133798]: 2025-11-29T08:37:42Z|00106|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8b:9f:50 10.100.0.3
Nov 29 03:37:42 np0005539552 ovn_controller[133798]: 2025-11-29T08:37:42Z|00107|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8b:9f:50 10.100.0.3
Nov 29 03:37:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:37:42 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1477436822' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:37:42 np0005539552 nova_compute[233724]: 2025-11-29 08:37:42.614 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:37:42 np0005539552 nova_compute[233724]: 2025-11-29 08:37:42.624 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:37:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:42.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:42 np0005539552 nova_compute[233724]: 2025-11-29 08:37:42.762 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:37:42 np0005539552 nova_compute[233724]: 2025-11-29 08:37:42.797 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:37:42 np0005539552 nova_compute[233724]: 2025-11-29 08:37:42.797 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.884s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:43 np0005539552 nova_compute[233724]: 2025-11-29 08:37:43.451 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:44.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:44.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:46.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:46.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:47 np0005539552 nova_compute[233724]: 2025-11-29 08:37:47.009 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:48.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:48 np0005539552 nova_compute[233724]: 2025-11-29 08:37:48.454 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:48.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:48 np0005539552 nova_compute[233724]: 2025-11-29 08:37:48.799 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:37:48 np0005539552 nova_compute[233724]: 2025-11-29 08:37:48.799 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:37:48 np0005539552 nova_compute[233724]: 2025-11-29 08:37:48.800 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:37:48 np0005539552 nova_compute[233724]: 2025-11-29 08:37:48.800 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:37:49 np0005539552 nova_compute[233724]: 2025-11-29 08:37:49.925 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:37:49 np0005539552 nova_compute[233724]: 2025-11-29 08:37:49.925 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:37:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:50.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:50.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:50 np0005539552 nova_compute[233724]: 2025-11-29 08:37:50.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:37:51 np0005539552 nova_compute[233724]: 2025-11-29 08:37:51.919 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:37:52 np0005539552 nova_compute[233724]: 2025-11-29 08:37:52.011 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:52.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:52.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:53 np0005539552 nova_compute[233724]: 2025-11-29 08:37:53.457 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:54.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:54.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:56.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:56.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:56 np0005539552 nova_compute[233724]: 2025-11-29 08:37:56.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:37:56 np0005539552 nova_compute[233724]: 2025-11-29 08:37:56.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:37:56 np0005539552 nova_compute[233724]: 2025-11-29 08:37:56.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:37:57 np0005539552 nova_compute[233724]: 2025-11-29 08:37:57.014 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:57 np0005539552 nova_compute[233724]: 2025-11-29 08:37:57.113 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "refresh_cache-5748313e-fbb3-409e-83e6-aff548491530" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:37:57 np0005539552 nova_compute[233724]: 2025-11-29 08:37:57.113 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquired lock "refresh_cache-5748313e-fbb3-409e-83e6-aff548491530" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:37:57 np0005539552 nova_compute[233724]: 2025-11-29 08:37:57.114 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:37:57 np0005539552 nova_compute[233724]: 2025-11-29 08:37:57.114 233728 DEBUG nova.objects.instance [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5748313e-fbb3-409e-83e6-aff548491530 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:37:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:58.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:58 np0005539552 nova_compute[233724]: 2025-11-29 08:37:58.460 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:37:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:58.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:00.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:00.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:01 np0005539552 podman[308876]: 2025-11-29 08:38:01.016483385 +0000 UTC m=+0.087952687 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:38:01 np0005539552 podman[308875]: 2025-11-29 08:38:01.022899948 +0000 UTC m=+0.095624474 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 29 03:38:01 np0005539552 podman[308877]: 2025-11-29 08:38:01.063031788 +0000 UTC m=+0.134344466 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Nov 29 03:38:01 np0005539552 nova_compute[233724]: 2025-11-29 08:38:01.118 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Updating instance_info_cache with network_info: [{"id": "6fb79eb2-29a8-4947-8bac-7bed17841673", "address": "fa:16:3e:8b:9f:50", "network": {"id": "d042f24f-c2f0-4843-9727-cc3720586596", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1228611198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "889608c71d13429fb37793575792ae74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fb79eb2-29", "ovs_interfaceid": "6fb79eb2-29a8-4947-8bac-7bed17841673", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:38:01 np0005539552 nova_compute[233724]: 2025-11-29 08:38:01.149 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Releasing lock "refresh_cache-5748313e-fbb3-409e-83e6-aff548491530" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:38:01 np0005539552 nova_compute[233724]: 2025-11-29 08:38:01.150 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:38:02 np0005539552 nova_compute[233724]: 2025-11-29 08:38:02.017 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:02.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:02.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:03 np0005539552 nova_compute[233724]: 2025-11-29 08:38:03.465 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:04 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #124. Immutable memtables: 0.
Nov 29 03:38:04 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:38:04.059988) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:38:04 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 124
Nov 29 03:38:04 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405484060044, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 1097, "num_deletes": 252, "total_data_size": 2176759, "memory_usage": 2221928, "flush_reason": "Manual Compaction"}
Nov 29 03:38:04 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #125: started
Nov 29 03:38:04 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405484073363, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 125, "file_size": 1435390, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61738, "largest_seqno": 62830, "table_properties": {"data_size": 1430581, "index_size": 2333, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11190, "raw_average_key_size": 20, "raw_value_size": 1420623, "raw_average_value_size": 2555, "num_data_blocks": 102, "num_entries": 556, "num_filter_entries": 556, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405405, "oldest_key_time": 1764405405, "file_creation_time": 1764405484, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:38:04 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 13753 microseconds, and 7807 cpu microseconds.
Nov 29 03:38:04 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:38:04 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:38:04.073721) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #125: 1435390 bytes OK
Nov 29 03:38:04 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:38:04.073761) [db/memtable_list.cc:519] [default] Level-0 commit table #125 started
Nov 29 03:38:04 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:38:04.077073) [db/memtable_list.cc:722] [default] Level-0 commit table #125: memtable #1 done
Nov 29 03:38:04 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:38:04.077119) EVENT_LOG_v1 {"time_micros": 1764405484077104, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:38:04 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:38:04.077149) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:38:04 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 2171360, prev total WAL file size 2171360, number of live WAL files 2.
Nov 29 03:38:04 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000121.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:38:04 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:38:04.078848) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Nov 29 03:38:04 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:38:04 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [125(1401KB)], [123(13MB)]
Nov 29 03:38:04 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405484078931, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [125], "files_L6": [123], "score": -1, "input_data_size": 15312369, "oldest_snapshot_seqno": -1}
Nov 29 03:38:04 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #126: 9215 keys, 13395642 bytes, temperature: kUnknown
Nov 29 03:38:04 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405484186512, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 126, "file_size": 13395642, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13334260, "index_size": 37270, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23045, "raw_key_size": 243378, "raw_average_key_size": 26, "raw_value_size": 13170157, "raw_average_value_size": 1429, "num_data_blocks": 1426, "num_entries": 9215, "num_filter_entries": 9215, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764405484, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 126, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:38:04 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:38:04 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:38:04.186848) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 13395642 bytes
Nov 29 03:38:04 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:38:04.188915) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 142.2 rd, 124.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 13.2 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(20.0) write-amplify(9.3) OK, records in: 9736, records dropped: 521 output_compression: NoCompression
Nov 29 03:38:04 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:38:04.188953) EVENT_LOG_v1 {"time_micros": 1764405484188936, "job": 78, "event": "compaction_finished", "compaction_time_micros": 107712, "compaction_time_cpu_micros": 51242, "output_level": 6, "num_output_files": 1, "total_output_size": 13395642, "num_input_records": 9736, "num_output_records": 9215, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:38:04 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:38:04 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405484189605, "job": 78, "event": "table_file_deletion", "file_number": 125}
Nov 29 03:38:04 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000123.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:38:04 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405484194374, "job": 78, "event": "table_file_deletion", "file_number": 123}
Nov 29 03:38:04 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:38:04.078669) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:38:04 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:38:04.194552) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:38:04 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:38:04.194561) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:38:04 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:38:04.194564) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:38:04 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:38:04.194569) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:38:04 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:38:04.194575) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:38:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:04.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:04 np0005539552 nova_compute[233724]: 2025-11-29 08:38:04.664 233728 DEBUG oslo_concurrency.lockutils [None req-bb2c625b-cb5e-439b-8767-684384f37b03 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Acquiring lock "5748313e-fbb3-409e-83e6-aff548491530" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:04 np0005539552 nova_compute[233724]: 2025-11-29 08:38:04.664 233728 DEBUG oslo_concurrency.lockutils [None req-bb2c625b-cb5e-439b-8767-684384f37b03 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "5748313e-fbb3-409e-83e6-aff548491530" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:04 np0005539552 nova_compute[233724]: 2025-11-29 08:38:04.665 233728 INFO nova.compute.manager [None req-bb2c625b-cb5e-439b-8767-684384f37b03 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Shelving#033[00m
Nov 29 03:38:04 np0005539552 nova_compute[233724]: 2025-11-29 08:38:04.689 233728 DEBUG nova.virt.libvirt.driver [None req-bb2c625b-cb5e-439b-8767-684384f37b03 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:38:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:04.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:06.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:06.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:06 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:38:06 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1567979303' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:38:06 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:38:06 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1567979303' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:38:07 np0005539552 nova_compute[233724]: 2025-11-29 08:38:07.019 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:07 np0005539552 kernel: tap6fb79eb2-29 (unregistering): left promiscuous mode
Nov 29 03:38:07 np0005539552 NetworkManager[48926]: <info>  [1764405487.1075] device (tap6fb79eb2-29): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:38:07 np0005539552 nova_compute[233724]: 2025-11-29 08:38:07.123 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:07 np0005539552 ovn_controller[133798]: 2025-11-29T08:38:07Z|00819|binding|INFO|Releasing lport 6fb79eb2-29a8-4947-8bac-7bed17841673 from this chassis (sb_readonly=0)
Nov 29 03:38:07 np0005539552 ovn_controller[133798]: 2025-11-29T08:38:07Z|00820|binding|INFO|Setting lport 6fb79eb2-29a8-4947-8bac-7bed17841673 down in Southbound
Nov 29 03:38:07 np0005539552 ovn_controller[133798]: 2025-11-29T08:38:07Z|00821|binding|INFO|Removing iface tap6fb79eb2-29 ovn-installed in OVS
Nov 29 03:38:07 np0005539552 nova_compute[233724]: 2025-11-29 08:38:07.128 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:07 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:07.134 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:9f:50 10.100.0.3'], port_security=['fa:16:3e:8b:9f:50 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5748313e-fbb3-409e-83e6-aff548491530', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d042f24f-c2f0-4843-9727-cc3720586596', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '889608c71d13429fb37793575792ae74', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c266cfe3-9f5e-4a42-93d4-52df1525211e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.237'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ccac3ef-f009-44e6-937a-0ec744b8cfbf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=6fb79eb2-29a8-4947-8bac-7bed17841673) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:38:07 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:07.136 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 6fb79eb2-29a8-4947-8bac-7bed17841673 in datapath d042f24f-c2f0-4843-9727-cc3720586596 unbound from our chassis#033[00m
Nov 29 03:38:07 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:07.138 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d042f24f-c2f0-4843-9727-cc3720586596, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:38:07 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:07.140 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[5c4c9993-fed6-4607-8209-294401ea05d1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:07 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:07.141 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596 namespace which is not needed anymore#033[00m
Nov 29 03:38:07 np0005539552 nova_compute[233724]: 2025-11-29 08:38:07.169 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:07 np0005539552 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d000000b4.scope: Deactivated successfully.
Nov 29 03:38:07 np0005539552 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d000000b4.scope: Consumed 15.628s CPU time.
Nov 29 03:38:07 np0005539552 systemd-machined[196379]: Machine qemu-83-instance-000000b4 terminated.
Nov 29 03:38:07 np0005539552 nova_compute[233724]: 2025-11-29 08:38:07.341 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:07 np0005539552 neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596[308675]: [NOTICE]   (308684) : haproxy version is 2.8.14-c23fe91
Nov 29 03:38:07 np0005539552 neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596[308675]: [NOTICE]   (308684) : path to executable is /usr/sbin/haproxy
Nov 29 03:38:07 np0005539552 neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596[308675]: [WARNING]  (308684) : Exiting Master process...
Nov 29 03:38:07 np0005539552 neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596[308675]: [ALERT]    (308684) : Current worker (308687) exited with code 143 (Terminated)
Nov 29 03:38:07 np0005539552 neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596[308675]: [WARNING]  (308684) : All workers exited. Exiting... (0)
Nov 29 03:38:07 np0005539552 systemd[1]: libpod-7c1cf27c95cd2cc1acef5249ef63f04f51594fce74d4eee46e40ce87000d76e4.scope: Deactivated successfully.
Nov 29 03:38:07 np0005539552 nova_compute[233724]: 2025-11-29 08:38:07.353 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:07 np0005539552 podman[309015]: 2025-11-29 08:38:07.355245002 +0000 UTC m=+0.070495368 container died 7c1cf27c95cd2cc1acef5249ef63f04f51594fce74d4eee46e40ce87000d76e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 03:38:07 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7c1cf27c95cd2cc1acef5249ef63f04f51594fce74d4eee46e40ce87000d76e4-userdata-shm.mount: Deactivated successfully.
Nov 29 03:38:07 np0005539552 systemd[1]: var-lib-containers-storage-overlay-7d693864fbc2da2ccb55910b59ab55d1e01a96325ad026905bda8a3eae5a4a37-merged.mount: Deactivated successfully.
Nov 29 03:38:07 np0005539552 podman[309015]: 2025-11-29 08:38:07.410449937 +0000 UTC m=+0.125700313 container cleanup 7c1cf27c95cd2cc1acef5249ef63f04f51594fce74d4eee46e40ce87000d76e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:38:07 np0005539552 systemd[1]: libpod-conmon-7c1cf27c95cd2cc1acef5249ef63f04f51594fce74d4eee46e40ce87000d76e4.scope: Deactivated successfully.
Nov 29 03:38:07 np0005539552 podman[309056]: 2025-11-29 08:38:07.497829848 +0000 UTC m=+0.054062936 container remove 7c1cf27c95cd2cc1acef5249ef63f04f51594fce74d4eee46e40ce87000d76e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:38:07 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:07.507 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[5c559575-15f0-4797-8bb5-fea9e6f073de]: (4, ('Sat Nov 29 08:38:07 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596 (7c1cf27c95cd2cc1acef5249ef63f04f51594fce74d4eee46e40ce87000d76e4)\n7c1cf27c95cd2cc1acef5249ef63f04f51594fce74d4eee46e40ce87000d76e4\nSat Nov 29 08:38:07 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596 (7c1cf27c95cd2cc1acef5249ef63f04f51594fce74d4eee46e40ce87000d76e4)\n7c1cf27c95cd2cc1acef5249ef63f04f51594fce74d4eee46e40ce87000d76e4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:07 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:07.510 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[5d546df6-6a17-4db1-a13b-ef2173e5a05e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:07 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:07.511 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd042f24f-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:07 np0005539552 nova_compute[233724]: 2025-11-29 08:38:07.513 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:07 np0005539552 kernel: tapd042f24f-c0: left promiscuous mode
Nov 29 03:38:07 np0005539552 nova_compute[233724]: 2025-11-29 08:38:07.537 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:07 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:07.540 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[21b78e08-fb35-4b4a-8a0e-bb662e5bbbf7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:07 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:07.557 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ca2452a6-56c1-458b-8d9a-711cfff56abf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:07 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:07.558 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[723eeaca-47b7-47b2-979b-f84abbaf2d00]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:07 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:07.580 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[291cc8c1-2854-4780-8f32-3579a95956f3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 849023, 'reachable_time': 33917, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309075, 'error': None, 'target': 'ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:07 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:07.583 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:38:07 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:07.583 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[1ede73d8-c0f1-4399-981b-37474fe46083]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:07 np0005539552 systemd[1]: run-netns-ovnmeta\x2dd042f24f\x2dc2f0\x2d4843\x2d9727\x2dcc3720586596.mount: Deactivated successfully.
Nov 29 03:38:07 np0005539552 nova_compute[233724]: 2025-11-29 08:38:07.711 233728 INFO nova.virt.libvirt.driver [None req-bb2c625b-cb5e-439b-8767-684384f37b03 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Instance shutdown successfully after 3 seconds.#033[00m
Nov 29 03:38:07 np0005539552 nova_compute[233724]: 2025-11-29 08:38:07.719 233728 INFO nova.virt.libvirt.driver [-] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Instance destroyed successfully.#033[00m
Nov 29 03:38:07 np0005539552 nova_compute[233724]: 2025-11-29 08:38:07.719 233728 DEBUG nova.objects.instance [None req-bb2c625b-cb5e-439b-8767-684384f37b03 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lazy-loading 'numa_topology' on Instance uuid 5748313e-fbb3-409e-83e6-aff548491530 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:38:08 np0005539552 nova_compute[233724]: 2025-11-29 08:38:08.090 233728 INFO nova.virt.libvirt.driver [None req-bb2c625b-cb5e-439b-8767-684384f37b03 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Beginning cold snapshot process#033[00m
Nov 29 03:38:08 np0005539552 nova_compute[233724]: 2025-11-29 08:38:08.276 233728 DEBUG nova.virt.libvirt.imagebackend [None req-bb2c625b-cb5e-439b-8767-684384f37b03 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] No parent info for 4873db8c-b414-4e95-acd9-77caabebe722; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 29 03:38:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:08.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:08 np0005539552 nova_compute[233724]: 2025-11-29 08:38:08.468 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:08 np0005539552 nova_compute[233724]: 2025-11-29 08:38:08.536 233728 DEBUG nova.storage.rbd_utils [None req-bb2c625b-cb5e-439b-8767-684384f37b03 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] creating snapshot(0dcc40e2e1e24980b44d86092ab761c1) on rbd image(5748313e-fbb3-409e-83e6-aff548491530_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:38:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e379 e379: 3 total, 3 up, 3 in
Nov 29 03:38:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:08.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:08 np0005539552 nova_compute[233724]: 2025-11-29 08:38:08.843 233728 DEBUG nova.storage.rbd_utils [None req-bb2c625b-cb5e-439b-8767-684384f37b03 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] cloning vms/5748313e-fbb3-409e-83e6-aff548491530_disk@0dcc40e2e1e24980b44d86092ab761c1 to images/c373ab38-0e65-4d5c-bf51-8234f0ed5ffd clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:38:09 np0005539552 nova_compute[233724]: 2025-11-29 08:38:09.456 233728 DEBUG nova.storage.rbd_utils [None req-bb2c625b-cb5e-439b-8767-684384f37b03 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] flattening images/c373ab38-0e65-4d5c-bf51-8234f0ed5ffd flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 29 03:38:09 np0005539552 nova_compute[233724]: 2025-11-29 08:38:09.509 233728 DEBUG nova.compute.manager [req-7abcf1d4-49b5-464e-be4f-dfe5e5e01dd7 req-0b70ef13-c379-4061-b4c9-1f4db0da7044 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Received event network-vif-unplugged-6fb79eb2-29a8-4947-8bac-7bed17841673 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:38:09 np0005539552 nova_compute[233724]: 2025-11-29 08:38:09.510 233728 DEBUG oslo_concurrency.lockutils [req-7abcf1d4-49b5-464e-be4f-dfe5e5e01dd7 req-0b70ef13-c379-4061-b4c9-1f4db0da7044 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5748313e-fbb3-409e-83e6-aff548491530-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:09 np0005539552 nova_compute[233724]: 2025-11-29 08:38:09.510 233728 DEBUG oslo_concurrency.lockutils [req-7abcf1d4-49b5-464e-be4f-dfe5e5e01dd7 req-0b70ef13-c379-4061-b4c9-1f4db0da7044 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5748313e-fbb3-409e-83e6-aff548491530-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:09 np0005539552 nova_compute[233724]: 2025-11-29 08:38:09.510 233728 DEBUG oslo_concurrency.lockutils [req-7abcf1d4-49b5-464e-be4f-dfe5e5e01dd7 req-0b70ef13-c379-4061-b4c9-1f4db0da7044 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5748313e-fbb3-409e-83e6-aff548491530-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:09 np0005539552 nova_compute[233724]: 2025-11-29 08:38:09.511 233728 DEBUG nova.compute.manager [req-7abcf1d4-49b5-464e-be4f-dfe5e5e01dd7 req-0b70ef13-c379-4061-b4c9-1f4db0da7044 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] No waiting events found dispatching network-vif-unplugged-6fb79eb2-29a8-4947-8bac-7bed17841673 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:38:09 np0005539552 nova_compute[233724]: 2025-11-29 08:38:09.511 233728 WARNING nova.compute.manager [req-7abcf1d4-49b5-464e-be4f-dfe5e5e01dd7 req-0b70ef13-c379-4061-b4c9-1f4db0da7044 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Received unexpected event network-vif-unplugged-6fb79eb2-29a8-4947-8bac-7bed17841673 for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Nov 29 03:38:09 np0005539552 nova_compute[233724]: 2025-11-29 08:38:09.511 233728 DEBUG nova.compute.manager [req-7abcf1d4-49b5-464e-be4f-dfe5e5e01dd7 req-0b70ef13-c379-4061-b4c9-1f4db0da7044 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Received event network-vif-plugged-6fb79eb2-29a8-4947-8bac-7bed17841673 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:38:09 np0005539552 nova_compute[233724]: 2025-11-29 08:38:09.511 233728 DEBUG oslo_concurrency.lockutils [req-7abcf1d4-49b5-464e-be4f-dfe5e5e01dd7 req-0b70ef13-c379-4061-b4c9-1f4db0da7044 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5748313e-fbb3-409e-83e6-aff548491530-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:09 np0005539552 nova_compute[233724]: 2025-11-29 08:38:09.512 233728 DEBUG oslo_concurrency.lockutils [req-7abcf1d4-49b5-464e-be4f-dfe5e5e01dd7 req-0b70ef13-c379-4061-b4c9-1f4db0da7044 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5748313e-fbb3-409e-83e6-aff548491530-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:09 np0005539552 nova_compute[233724]: 2025-11-29 08:38:09.512 233728 DEBUG oslo_concurrency.lockutils [req-7abcf1d4-49b5-464e-be4f-dfe5e5e01dd7 req-0b70ef13-c379-4061-b4c9-1f4db0da7044 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5748313e-fbb3-409e-83e6-aff548491530-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:09 np0005539552 nova_compute[233724]: 2025-11-29 08:38:09.512 233728 DEBUG nova.compute.manager [req-7abcf1d4-49b5-464e-be4f-dfe5e5e01dd7 req-0b70ef13-c379-4061-b4c9-1f4db0da7044 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] No waiting events found dispatching network-vif-plugged-6fb79eb2-29a8-4947-8bac-7bed17841673 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:38:09 np0005539552 nova_compute[233724]: 2025-11-29 08:38:09.512 233728 WARNING nova.compute.manager [req-7abcf1d4-49b5-464e-be4f-dfe5e5e01dd7 req-0b70ef13-c379-4061-b4c9-1f4db0da7044 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Received unexpected event network-vif-plugged-6fb79eb2-29a8-4947-8bac-7bed17841673 for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Nov 29 03:38:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:10 np0005539552 nova_compute[233724]: 2025-11-29 08:38:10.392 233728 DEBUG nova.storage.rbd_utils [None req-bb2c625b-cb5e-439b-8767-684384f37b03 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] removing snapshot(0dcc40e2e1e24980b44d86092ab761c1) on rbd image(5748313e-fbb3-409e-83e6-aff548491530_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:38:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:10.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:10.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:10 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e380 e380: 3 total, 3 up, 3 in
Nov 29 03:38:11 np0005539552 nova_compute[233724]: 2025-11-29 08:38:11.258 233728 DEBUG nova.storage.rbd_utils [None req-bb2c625b-cb5e-439b-8767-684384f37b03 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] creating snapshot(snap) on rbd image(c373ab38-0e65-4d5c-bf51-8234f0ed5ffd) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:38:12 np0005539552 nova_compute[233724]: 2025-11-29 08:38:12.024 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e381 e381: 3 total, 3 up, 3 in
Nov 29 03:38:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:12.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:38:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:12.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:38:13 np0005539552 nova_compute[233724]: 2025-11-29 08:38:13.471 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:13 np0005539552 nova_compute[233724]: 2025-11-29 08:38:13.647 233728 INFO nova.virt.libvirt.driver [None req-bb2c625b-cb5e-439b-8767-684384f37b03 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Snapshot image upload complete#033[00m
Nov 29 03:38:13 np0005539552 nova_compute[233724]: 2025-11-29 08:38:13.648 233728 DEBUG nova.compute.manager [None req-bb2c625b-cb5e-439b-8767-684384f37b03 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:38:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:13 np0005539552 nova_compute[233724]: 2025-11-29 08:38:13.701 233728 INFO nova.compute.manager [None req-bb2c625b-cb5e-439b-8767-684384f37b03 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Shelve offloading#033[00m
Nov 29 03:38:13 np0005539552 nova_compute[233724]: 2025-11-29 08:38:13.711 233728 INFO nova.virt.libvirt.driver [-] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Instance destroyed successfully.#033[00m
Nov 29 03:38:13 np0005539552 nova_compute[233724]: 2025-11-29 08:38:13.712 233728 DEBUG nova.compute.manager [None req-bb2c625b-cb5e-439b-8767-684384f37b03 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:38:13 np0005539552 nova_compute[233724]: 2025-11-29 08:38:13.715 233728 DEBUG oslo_concurrency.lockutils [None req-bb2c625b-cb5e-439b-8767-684384f37b03 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Acquiring lock "refresh_cache-5748313e-fbb3-409e-83e6-aff548491530" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:38:13 np0005539552 nova_compute[233724]: 2025-11-29 08:38:13.715 233728 DEBUG oslo_concurrency.lockutils [None req-bb2c625b-cb5e-439b-8767-684384f37b03 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Acquired lock "refresh_cache-5748313e-fbb3-409e-83e6-aff548491530" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:38:13 np0005539552 nova_compute[233724]: 2025-11-29 08:38:13.715 233728 DEBUG nova.network.neutron [None req-bb2c625b-cb5e-439b-8767-684384f37b03 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:38:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:14.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:14.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:15 np0005539552 nova_compute[233724]: 2025-11-29 08:38:15.128 233728 DEBUG nova.network.neutron [None req-bb2c625b-cb5e-439b-8767-684384f37b03 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Updating instance_info_cache with network_info: [{"id": "6fb79eb2-29a8-4947-8bac-7bed17841673", "address": "fa:16:3e:8b:9f:50", "network": {"id": "d042f24f-c2f0-4843-9727-cc3720586596", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1228611198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "889608c71d13429fb37793575792ae74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fb79eb2-29", "ovs_interfaceid": "6fb79eb2-29a8-4947-8bac-7bed17841673", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:38:15 np0005539552 nova_compute[233724]: 2025-11-29 08:38:15.149 233728 DEBUG oslo_concurrency.lockutils [None req-bb2c625b-cb5e-439b-8767-684384f37b03 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Releasing lock "refresh_cache-5748313e-fbb3-409e-83e6-aff548491530" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:38:15 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e382 e382: 3 total, 3 up, 3 in
Nov 29 03:38:15 np0005539552 nova_compute[233724]: 2025-11-29 08:38:15.958 233728 INFO nova.virt.libvirt.driver [-] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Instance destroyed successfully.#033[00m
Nov 29 03:38:15 np0005539552 nova_compute[233724]: 2025-11-29 08:38:15.958 233728 DEBUG nova.objects.instance [None req-bb2c625b-cb5e-439b-8767-684384f37b03 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lazy-loading 'resources' on Instance uuid 5748313e-fbb3-409e-83e6-aff548491530 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:38:15 np0005539552 nova_compute[233724]: 2025-11-29 08:38:15.970 233728 DEBUG nova.virt.libvirt.vif [None req-bb2c625b-cb5e-439b-8767-684384f37b03 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:37:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-716574022',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-716574022',id=180,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBSzclE1qHhmNtB+Dlhxw2Ps8HWdSPocEaWlsxDme1qVoS+t7CcJ/Bo6lrhqUu/ZA6JT3SxHX6WwNieCHu9AGemn9sAzHapyUGRyjBFuHCFhJn85rjzwwkttpV/QWN0gNg==',key_name='tempest-keypair-1744874454',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:37:26Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='889608c71d13429fb37793575792ae74',ramdisk_id='',reservation_id='r-pwfpf00q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeShelveTestJSON-907905934',owner_user_name='tempest-AttachVolumeShelveTestJSON-907905934-project-member',shelved_at='2025-11-29T08:38:13.647915',shelved_host='compute-2.ctlplane.example.com',shelved_image_id='c373ab38-0e65-4d5c-bf51-8234f0ed5ffd'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:38:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eeba34466b8f4a1bb5f742f1e811053c',uuid=5748313e-fbb3-409e-83e6-aff548491530,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "6fb79eb2-29a8-4947-8bac-7bed17841673", "address": "fa:16:3e:8b:9f:50", "network": {"id": 
"d042f24f-c2f0-4843-9727-cc3720586596", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1228611198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "889608c71d13429fb37793575792ae74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fb79eb2-29", "ovs_interfaceid": "6fb79eb2-29a8-4947-8bac-7bed17841673", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:38:15 np0005539552 nova_compute[233724]: 2025-11-29 08:38:15.971 233728 DEBUG nova.network.os_vif_util [None req-bb2c625b-cb5e-439b-8767-684384f37b03 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Converting VIF {"id": "6fb79eb2-29a8-4947-8bac-7bed17841673", "address": "fa:16:3e:8b:9f:50", "network": {"id": "d042f24f-c2f0-4843-9727-cc3720586596", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1228611198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "889608c71d13429fb37793575792ae74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fb79eb2-29", "ovs_interfaceid": "6fb79eb2-29a8-4947-8bac-7bed17841673", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:38:15 np0005539552 nova_compute[233724]: 2025-11-29 08:38:15.972 233728 DEBUG nova.network.os_vif_util [None req-bb2c625b-cb5e-439b-8767-684384f37b03 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:9f:50,bridge_name='br-int',has_traffic_filtering=True,id=6fb79eb2-29a8-4947-8bac-7bed17841673,network=Network(d042f24f-c2f0-4843-9727-cc3720586596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6fb79eb2-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:38:15 np0005539552 nova_compute[233724]: 2025-11-29 08:38:15.972 233728 DEBUG os_vif [None req-bb2c625b-cb5e-439b-8767-684384f37b03 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:9f:50,bridge_name='br-int',has_traffic_filtering=True,id=6fb79eb2-29a8-4947-8bac-7bed17841673,network=Network(d042f24f-c2f0-4843-9727-cc3720586596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6fb79eb2-29') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:38:15 np0005539552 nova_compute[233724]: 2025-11-29 08:38:15.974 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:15 np0005539552 nova_compute[233724]: 2025-11-29 08:38:15.974 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6fb79eb2-29, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:15 np0005539552 nova_compute[233724]: 2025-11-29 08:38:15.976 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:15 np0005539552 nova_compute[233724]: 2025-11-29 08:38:15.979 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:38:15 np0005539552 nova_compute[233724]: 2025-11-29 08:38:15.982 233728 INFO os_vif [None req-bb2c625b-cb5e-439b-8767-684384f37b03 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:9f:50,bridge_name='br-int',has_traffic_filtering=True,id=6fb79eb2-29a8-4947-8bac-7bed17841673,network=Network(d042f24f-c2f0-4843-9727-cc3720586596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6fb79eb2-29')#033[00m
Nov 29 03:38:16 np0005539552 nova_compute[233724]: 2025-11-29 08:38:16.059 233728 DEBUG nova.compute.manager [req-fc92b741-0bd5-4e50-81bd-dbdccf0cabaf req-a2495ef6-4361-4fca-87c3-87384fe31bb8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Received event network-changed-6fb79eb2-29a8-4947-8bac-7bed17841673 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:38:16 np0005539552 nova_compute[233724]: 2025-11-29 08:38:16.060 233728 DEBUG nova.compute.manager [req-fc92b741-0bd5-4e50-81bd-dbdccf0cabaf req-a2495ef6-4361-4fca-87c3-87384fe31bb8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Refreshing instance network info cache due to event network-changed-6fb79eb2-29a8-4947-8bac-7bed17841673. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:38:16 np0005539552 nova_compute[233724]: 2025-11-29 08:38:16.060 233728 DEBUG oslo_concurrency.lockutils [req-fc92b741-0bd5-4e50-81bd-dbdccf0cabaf req-a2495ef6-4361-4fca-87c3-87384fe31bb8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-5748313e-fbb3-409e-83e6-aff548491530" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:38:16 np0005539552 nova_compute[233724]: 2025-11-29 08:38:16.060 233728 DEBUG oslo_concurrency.lockutils [req-fc92b741-0bd5-4e50-81bd-dbdccf0cabaf req-a2495ef6-4361-4fca-87c3-87384fe31bb8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-5748313e-fbb3-409e-83e6-aff548491530" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:38:16 np0005539552 nova_compute[233724]: 2025-11-29 08:38:16.060 233728 DEBUG nova.network.neutron [req-fc92b741-0bd5-4e50-81bd-dbdccf0cabaf req-a2495ef6-4361-4fca-87c3-87384fe31bb8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Refreshing network info cache for port 6fb79eb2-29a8-4947-8bac-7bed17841673 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:38:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:16.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:16 np0005539552 nova_compute[233724]: 2025-11-29 08:38:16.734 233728 INFO nova.virt.libvirt.driver [None req-bb2c625b-cb5e-439b-8767-684384f37b03 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Deleting instance files /var/lib/nova/instances/5748313e-fbb3-409e-83e6-aff548491530_del#033[00m
Nov 29 03:38:16 np0005539552 nova_compute[233724]: 2025-11-29 08:38:16.735 233728 INFO nova.virt.libvirt.driver [None req-bb2c625b-cb5e-439b-8767-684384f37b03 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Deletion of /var/lib/nova/instances/5748313e-fbb3-409e-83e6-aff548491530_del complete#033[00m
Nov 29 03:38:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:16.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:16 np0005539552 nova_compute[233724]: 2025-11-29 08:38:16.831 233728 INFO nova.scheduler.client.report [None req-bb2c625b-cb5e-439b-8767-684384f37b03 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Deleted allocations for instance 5748313e-fbb3-409e-83e6-aff548491530#033[00m
Nov 29 03:38:16 np0005539552 nova_compute[233724]: 2025-11-29 08:38:16.890 233728 DEBUG oslo_concurrency.lockutils [None req-bb2c625b-cb5e-439b-8767-684384f37b03 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:16 np0005539552 nova_compute[233724]: 2025-11-29 08:38:16.890 233728 DEBUG oslo_concurrency.lockutils [None req-bb2c625b-cb5e-439b-8767-684384f37b03 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:16 np0005539552 nova_compute[233724]: 2025-11-29 08:38:16.926 233728 DEBUG oslo_concurrency.processutils [None req-bb2c625b-cb5e-439b-8767-684384f37b03 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:17 np0005539552 nova_compute[233724]: 2025-11-29 08:38:17.025 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:38:17 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/354924099' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:38:17 np0005539552 nova_compute[233724]: 2025-11-29 08:38:17.410 233728 DEBUG oslo_concurrency.processutils [None req-bb2c625b-cb5e-439b-8767-684384f37b03 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:17 np0005539552 nova_compute[233724]: 2025-11-29 08:38:17.418 233728 DEBUG nova.compute.provider_tree [None req-bb2c625b-cb5e-439b-8767-684384f37b03 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:38:17 np0005539552 nova_compute[233724]: 2025-11-29 08:38:17.824 233728 DEBUG nova.scheduler.client.report [None req-bb2c625b-cb5e-439b-8767-684384f37b03 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:38:18 np0005539552 nova_compute[233724]: 2025-11-29 08:38:18.057 233728 DEBUG oslo_concurrency.lockutils [None req-bb2c625b-cb5e-439b-8767-684384f37b03 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.166s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:18 np0005539552 nova_compute[233724]: 2025-11-29 08:38:18.139 233728 DEBUG oslo_concurrency.lockutils [None req-bb2c625b-cb5e-439b-8767-684384f37b03 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "5748313e-fbb3-409e-83e6-aff548491530" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 13.474s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:18.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:18 np0005539552 nova_compute[233724]: 2025-11-29 08:38:18.634 233728 DEBUG nova.network.neutron [req-fc92b741-0bd5-4e50-81bd-dbdccf0cabaf req-a2495ef6-4361-4fca-87c3-87384fe31bb8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Updated VIF entry in instance network info cache for port 6fb79eb2-29a8-4947-8bac-7bed17841673. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:38:18 np0005539552 nova_compute[233724]: 2025-11-29 08:38:18.635 233728 DEBUG nova.network.neutron [req-fc92b741-0bd5-4e50-81bd-dbdccf0cabaf req-a2495ef6-4361-4fca-87c3-87384fe31bb8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Updating instance_info_cache with network_info: [{"id": "6fb79eb2-29a8-4947-8bac-7bed17841673", "address": "fa:16:3e:8b:9f:50", "network": {"id": "d042f24f-c2f0-4843-9727-cc3720586596", "bridge": null, "label": "tempest-AttachVolumeShelveTestJSON-1228611198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "889608c71d13429fb37793575792ae74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap6fb79eb2-29", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:38:18 np0005539552 nova_compute[233724]: 2025-11-29 08:38:18.650 233728 DEBUG oslo_concurrency.lockutils [req-fc92b741-0bd5-4e50-81bd-dbdccf0cabaf req-a2495ef6-4361-4fca-87c3-87384fe31bb8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-5748313e-fbb3-409e-83e6-aff548491530" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:38:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:18.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:20 np0005539552 nova_compute[233724]: 2025-11-29 08:38:20.063 233728 DEBUG oslo_concurrency.lockutils [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Acquiring lock "608e1017-9da7-4ba6-a346-f047562d380b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:20 np0005539552 nova_compute[233724]: 2025-11-29 08:38:20.065 233728 DEBUG oslo_concurrency.lockutils [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lock "608e1017-9da7-4ba6-a346-f047562d380b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:20 np0005539552 nova_compute[233724]: 2025-11-29 08:38:20.080 233728 DEBUG nova.compute.manager [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:38:20 np0005539552 nova_compute[233724]: 2025-11-29 08:38:20.152 233728 DEBUG oslo_concurrency.lockutils [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:20 np0005539552 nova_compute[233724]: 2025-11-29 08:38:20.152 233728 DEBUG oslo_concurrency.lockutils [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:20 np0005539552 nova_compute[233724]: 2025-11-29 08:38:20.158 233728 DEBUG nova.virt.hardware [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:38:20 np0005539552 nova_compute[233724]: 2025-11-29 08:38:20.159 233728 INFO nova.compute.claims [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:38:20 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e383 e383: 3 total, 3 up, 3 in
Nov 29 03:38:20 np0005539552 nova_compute[233724]: 2025-11-29 08:38:20.313 233728 DEBUG oslo_concurrency.processutils [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:20.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:20.643 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:20.644 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:20.644 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:20 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:38:20 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1608646034' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:38:20 np0005539552 nova_compute[233724]: 2025-11-29 08:38:20.776 233728 DEBUG oslo_concurrency.processutils [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:20 np0005539552 nova_compute[233724]: 2025-11-29 08:38:20.786 233728 DEBUG nova.compute.provider_tree [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:38:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:20.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:20 np0005539552 nova_compute[233724]: 2025-11-29 08:38:20.977 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:21 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:38:21 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:38:21 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:38:21 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:38:21 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:38:21 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:38:21 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 03:38:21 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 03:38:21 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:38:21 np0005539552 nova_compute[233724]: 2025-11-29 08:38:21.033 233728 DEBUG nova.scheduler.client.report [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:38:21 np0005539552 nova_compute[233724]: 2025-11-29 08:38:21.071 233728 DEBUG oslo_concurrency.lockutils [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.919s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:21 np0005539552 nova_compute[233724]: 2025-11-29 08:38:21.073 233728 DEBUG nova.compute.manager [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:38:21 np0005539552 nova_compute[233724]: 2025-11-29 08:38:21.133 233728 DEBUG nova.compute.manager [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:38:21 np0005539552 nova_compute[233724]: 2025-11-29 08:38:21.134 233728 DEBUG nova.network.neutron [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:38:21 np0005539552 nova_compute[233724]: 2025-11-29 08:38:21.149 233728 INFO nova.virt.libvirt.driver [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:38:21 np0005539552 nova_compute[233724]: 2025-11-29 08:38:21.164 233728 DEBUG nova.compute.manager [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:38:21 np0005539552 nova_compute[233724]: 2025-11-29 08:38:21.260 233728 DEBUG nova.compute.manager [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:38:21 np0005539552 nova_compute[233724]: 2025-11-29 08:38:21.261 233728 DEBUG nova.virt.libvirt.driver [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:38:21 np0005539552 nova_compute[233724]: 2025-11-29 08:38:21.261 233728 INFO nova.virt.libvirt.driver [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Creating image(s)#033[00m
Nov 29 03:38:21 np0005539552 nova_compute[233724]: 2025-11-29 08:38:21.284 233728 DEBUG nova.storage.rbd_utils [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] rbd image 608e1017-9da7-4ba6-a346-f047562d380b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:38:21 np0005539552 nova_compute[233724]: 2025-11-29 08:38:21.327 233728 DEBUG nova.storage.rbd_utils [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] rbd image 608e1017-9da7-4ba6-a346-f047562d380b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:38:21 np0005539552 nova_compute[233724]: 2025-11-29 08:38:21.358 233728 DEBUG nova.storage.rbd_utils [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] rbd image 608e1017-9da7-4ba6-a346-f047562d380b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:38:21 np0005539552 nova_compute[233724]: 2025-11-29 08:38:21.362 233728 DEBUG oslo_concurrency.lockutils [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Acquiring lock "18b0193a1678e1adf0aa298b46c4af424203b75c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:21 np0005539552 nova_compute[233724]: 2025-11-29 08:38:21.363 233728 DEBUG oslo_concurrency.lockutils [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lock "18b0193a1678e1adf0aa298b46c4af424203b75c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:21 np0005539552 nova_compute[233724]: 2025-11-29 08:38:21.367 233728 DEBUG nova.policy [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e8b20745b2d14f70b64a43335faed2f4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8d5e30b74e6449dd90ecb667977d1fe9', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:38:21 np0005539552 nova_compute[233724]: 2025-11-29 08:38:21.614 233728 DEBUG nova.virt.libvirt.imagebackend [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Image locations are: [{'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/91e66a4f-7b40-4f67-810c-3642968fad68/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/91e66a4f-7b40-4f67-810c-3642968fad68/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 29 03:38:22 np0005539552 nova_compute[233724]: 2025-11-29 08:38:22.027 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:22 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:38:22 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:38:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:38:22 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1733351146' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:38:22 np0005539552 nova_compute[233724]: 2025-11-29 08:38:22.367 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405487.3621948, 5748313e-fbb3-409e-83e6-aff548491530 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:38:22 np0005539552 nova_compute[233724]: 2025-11-29 08:38:22.367 233728 INFO nova.compute.manager [-] [instance: 5748313e-fbb3-409e-83e6-aff548491530] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:38:22 np0005539552 nova_compute[233724]: 2025-11-29 08:38:22.392 233728 DEBUG nova.compute.manager [None req-89228b58-a952-4e49-8675-3d42af0a2dde - - - - - -] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:38:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:22.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:22 np0005539552 nova_compute[233724]: 2025-11-29 08:38:22.468 233728 DEBUG nova.network.neutron [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Successfully created port: 2769598c-70f6-487d-b82b-5c1920b0e91d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:38:22 np0005539552 nova_compute[233724]: 2025-11-29 08:38:22.699 233728 DEBUG oslo_concurrency.processutils [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/18b0193a1678e1adf0aa298b46c4af424203b75c.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:22.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:22 np0005539552 nova_compute[233724]: 2025-11-29 08:38:22.801 233728 DEBUG oslo_concurrency.processutils [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/18b0193a1678e1adf0aa298b46c4af424203b75c.part --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:22 np0005539552 nova_compute[233724]: 2025-11-29 08:38:22.803 233728 DEBUG nova.virt.images [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] 91e66a4f-7b40-4f67-810c-3642968fad68 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Nov 29 03:38:22 np0005539552 nova_compute[233724]: 2025-11-29 08:38:22.805 233728 DEBUG nova.privsep.utils [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Nov 29 03:38:22 np0005539552 nova_compute[233724]: 2025-11-29 08:38:22.805 233728 DEBUG oslo_concurrency.processutils [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/18b0193a1678e1adf0aa298b46c4af424203b75c.part /var/lib/nova/instances/_base/18b0193a1678e1adf0aa298b46c4af424203b75c.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:23 np0005539552 nova_compute[233724]: 2025-11-29 08:38:23.028 233728 DEBUG oslo_concurrency.processutils [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/18b0193a1678e1adf0aa298b46c4af424203b75c.part /var/lib/nova/instances/_base/18b0193a1678e1adf0aa298b46c4af424203b75c.converted" returned: 0 in 0.222s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:23 np0005539552 nova_compute[233724]: 2025-11-29 08:38:23.037 233728 DEBUG oslo_concurrency.processutils [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/18b0193a1678e1adf0aa298b46c4af424203b75c.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:23 np0005539552 nova_compute[233724]: 2025-11-29 08:38:23.106 233728 DEBUG oslo_concurrency.processutils [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/18b0193a1678e1adf0aa298b46c4af424203b75c.converted --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:23 np0005539552 nova_compute[233724]: 2025-11-29 08:38:23.109 233728 DEBUG oslo_concurrency.lockutils [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lock "18b0193a1678e1adf0aa298b46c4af424203b75c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:23 np0005539552 nova_compute[233724]: 2025-11-29 08:38:23.156 233728 DEBUG nova.storage.rbd_utils [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] rbd image 608e1017-9da7-4ba6-a346-f047562d380b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:38:23 np0005539552 nova_compute[233724]: 2025-11-29 08:38:23.161 233728 DEBUG oslo_concurrency.processutils [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/18b0193a1678e1adf0aa298b46c4af424203b75c 608e1017-9da7-4ba6-a346-f047562d380b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:23 np0005539552 nova_compute[233724]: 2025-11-29 08:38:23.301 233728 DEBUG nova.network.neutron [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Successfully updated port: 2769598c-70f6-487d-b82b-5c1920b0e91d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:38:23 np0005539552 nova_compute[233724]: 2025-11-29 08:38:23.319 233728 DEBUG oslo_concurrency.lockutils [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Acquiring lock "refresh_cache-608e1017-9da7-4ba6-a346-f047562d380b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:38:23 np0005539552 nova_compute[233724]: 2025-11-29 08:38:23.319 233728 DEBUG oslo_concurrency.lockutils [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Acquired lock "refresh_cache-608e1017-9da7-4ba6-a346-f047562d380b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:38:23 np0005539552 nova_compute[233724]: 2025-11-29 08:38:23.320 233728 DEBUG nova.network.neutron [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:38:23 np0005539552 nova_compute[233724]: 2025-11-29 08:38:23.400 233728 DEBUG nova.compute.manager [req-07ee8692-0f5e-4ce7-af4b-3b8d527c6bbe req-74626295-9949-4ade-82d4-85cba560fbca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Received event network-changed-2769598c-70f6-487d-b82b-5c1920b0e91d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:38:23 np0005539552 nova_compute[233724]: 2025-11-29 08:38:23.401 233728 DEBUG nova.compute.manager [req-07ee8692-0f5e-4ce7-af4b-3b8d527c6bbe req-74626295-9949-4ade-82d4-85cba560fbca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Refreshing instance network info cache due to event network-changed-2769598c-70f6-487d-b82b-5c1920b0e91d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:38:23 np0005539552 nova_compute[233724]: 2025-11-29 08:38:23.402 233728 DEBUG oslo_concurrency.lockutils [req-07ee8692-0f5e-4ce7-af4b-3b8d527c6bbe req-74626295-9949-4ade-82d4-85cba560fbca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-608e1017-9da7-4ba6-a346-f047562d380b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:38:23 np0005539552 nova_compute[233724]: 2025-11-29 08:38:23.520 233728 DEBUG nova.network.neutron [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:38:23 np0005539552 nova_compute[233724]: 2025-11-29 08:38:23.608 233728 DEBUG oslo_concurrency.processutils [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/18b0193a1678e1adf0aa298b46c4af424203b75c 608e1017-9da7-4ba6-a346-f047562d380b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e383 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:23 np0005539552 nova_compute[233724]: 2025-11-29 08:38:23.718 233728 DEBUG nova.storage.rbd_utils [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] resizing rbd image 608e1017-9da7-4ba6-a346-f047562d380b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:38:23 np0005539552 nova_compute[233724]: 2025-11-29 08:38:23.867 233728 DEBUG nova.objects.instance [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lazy-loading 'migration_context' on Instance uuid 608e1017-9da7-4ba6-a346-f047562d380b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:38:23 np0005539552 nova_compute[233724]: 2025-11-29 08:38:23.879 233728 DEBUG nova.virt.libvirt.driver [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:38:23 np0005539552 nova_compute[233724]: 2025-11-29 08:38:23.879 233728 DEBUG nova.virt.libvirt.driver [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Ensure instance console log exists: /var/lib/nova/instances/608e1017-9da7-4ba6-a346-f047562d380b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:38:23 np0005539552 nova_compute[233724]: 2025-11-29 08:38:23.880 233728 DEBUG oslo_concurrency.lockutils [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:23 np0005539552 nova_compute[233724]: 2025-11-29 08:38:23.880 233728 DEBUG oslo_concurrency.lockutils [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:23 np0005539552 nova_compute[233724]: 2025-11-29 08:38:23.880 233728 DEBUG oslo_concurrency.lockutils [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:23.930 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:38:23 np0005539552 nova_compute[233724]: 2025-11-29 08:38:23.930 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:23.932 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:38:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:24.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:24 np0005539552 nova_compute[233724]: 2025-11-29 08:38:24.572 233728 DEBUG nova.network.neutron [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Updating instance_info_cache with network_info: [{"id": "2769598c-70f6-487d-b82b-5c1920b0e91d", "address": "fa:16:3e:96:d3:7a", "network": {"id": "0430aba5-d0d7-4d98-ad87-552e6639c190", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1917226298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e30b74e6449dd90ecb667977d1fe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2769598c-70", "ovs_interfaceid": "2769598c-70f6-487d-b82b-5c1920b0e91d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:38:24 np0005539552 nova_compute[233724]: 2025-11-29 08:38:24.593 233728 DEBUG oslo_concurrency.lockutils [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Releasing lock "refresh_cache-608e1017-9da7-4ba6-a346-f047562d380b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:38:24 np0005539552 nova_compute[233724]: 2025-11-29 08:38:24.594 233728 DEBUG nova.compute.manager [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Instance network_info: |[{"id": "2769598c-70f6-487d-b82b-5c1920b0e91d", "address": "fa:16:3e:96:d3:7a", "network": {"id": "0430aba5-d0d7-4d98-ad87-552e6639c190", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1917226298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e30b74e6449dd90ecb667977d1fe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2769598c-70", "ovs_interfaceid": "2769598c-70f6-487d-b82b-5c1920b0e91d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:38:24 np0005539552 nova_compute[233724]: 2025-11-29 08:38:24.595 233728 DEBUG oslo_concurrency.lockutils [req-07ee8692-0f5e-4ce7-af4b-3b8d527c6bbe req-74626295-9949-4ade-82d4-85cba560fbca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-608e1017-9da7-4ba6-a346-f047562d380b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:38:24 np0005539552 nova_compute[233724]: 2025-11-29 08:38:24.595 233728 DEBUG nova.network.neutron [req-07ee8692-0f5e-4ce7-af4b-3b8d527c6bbe req-74626295-9949-4ade-82d4-85cba560fbca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Refreshing network info cache for port 2769598c-70f6-487d-b82b-5c1920b0e91d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:38:24 np0005539552 nova_compute[233724]: 2025-11-29 08:38:24.601 233728 DEBUG nova.virt.libvirt.driver [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Start _get_guest_xml network_info=[{"id": "2769598c-70f6-487d-b82b-5c1920b0e91d", "address": "fa:16:3e:96:d3:7a", "network": {"id": "0430aba5-d0d7-4d98-ad87-552e6639c190", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1917226298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e30b74e6449dd90ecb667977d1fe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2769598c-70", "ovs_interfaceid": "2769598c-70f6-487d-b82b-5c1920b0e91d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T08:38:13Z,direct_url=<?>,disk_format='qcow2',id=91e66a4f-7b40-4f67-810c-3642968fad68,min_disk=0,min_ram=0,name='tempest-scenario-img--334853480',owner='8d5e30b74e6449dd90ecb667977d1fe9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T08:38:15Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '91e66a4f-7b40-4f67-810c-3642968fad68'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:38:24 np0005539552 nova_compute[233724]: 2025-11-29 08:38:24.610 233728 WARNING nova.virt.libvirt.driver [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:38:24 np0005539552 nova_compute[233724]: 2025-11-29 08:38:24.622 233728 DEBUG nova.virt.libvirt.host [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:38:24 np0005539552 nova_compute[233724]: 2025-11-29 08:38:24.623 233728 DEBUG nova.virt.libvirt.host [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:38:24 np0005539552 nova_compute[233724]: 2025-11-29 08:38:24.628 233728 DEBUG nova.virt.libvirt.host [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:38:24 np0005539552 nova_compute[233724]: 2025-11-29 08:38:24.629 233728 DEBUG nova.virt.libvirt.host [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:38:24 np0005539552 nova_compute[233724]: 2025-11-29 08:38:24.632 233728 DEBUG nova.virt.libvirt.driver [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:38:24 np0005539552 nova_compute[233724]: 2025-11-29 08:38:24.633 233728 DEBUG nova.virt.hardware [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T08:38:13Z,direct_url=<?>,disk_format='qcow2',id=91e66a4f-7b40-4f67-810c-3642968fad68,min_disk=0,min_ram=0,name='tempest-scenario-img--334853480',owner='8d5e30b74e6449dd90ecb667977d1fe9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T08:38:15Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:38:24 np0005539552 nova_compute[233724]: 2025-11-29 08:38:24.634 233728 DEBUG nova.virt.hardware [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:38:24 np0005539552 nova_compute[233724]: 2025-11-29 08:38:24.634 233728 DEBUG nova.virt.hardware [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:38:24 np0005539552 nova_compute[233724]: 2025-11-29 08:38:24.634 233728 DEBUG nova.virt.hardware [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:38:24 np0005539552 nova_compute[233724]: 2025-11-29 08:38:24.635 233728 DEBUG nova.virt.hardware [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:38:24 np0005539552 nova_compute[233724]: 2025-11-29 08:38:24.635 233728 DEBUG nova.virt.hardware [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:38:24 np0005539552 nova_compute[233724]: 2025-11-29 08:38:24.636 233728 DEBUG nova.virt.hardware [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:38:24 np0005539552 nova_compute[233724]: 2025-11-29 08:38:24.636 233728 DEBUG nova.virt.hardware [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:38:24 np0005539552 nova_compute[233724]: 2025-11-29 08:38:24.637 233728 DEBUG nova.virt.hardware [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:38:24 np0005539552 nova_compute[233724]: 2025-11-29 08:38:24.637 233728 DEBUG nova.virt.hardware [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:38:24 np0005539552 nova_compute[233724]: 2025-11-29 08:38:24.638 233728 DEBUG nova.virt.hardware [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:38:24 np0005539552 nova_compute[233724]: 2025-11-29 08:38:24.644 233728 DEBUG oslo_concurrency.processutils [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:24.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:25 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:38:25 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3487485232' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:38:25 np0005539552 nova_compute[233724]: 2025-11-29 08:38:25.088 233728 DEBUG oslo_concurrency.processutils [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:25 np0005539552 nova_compute[233724]: 2025-11-29 08:38:25.122 233728 DEBUG nova.storage.rbd_utils [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] rbd image 608e1017-9da7-4ba6-a346-f047562d380b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:38:25 np0005539552 nova_compute[233724]: 2025-11-29 08:38:25.126 233728 DEBUG oslo_concurrency.processutils [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:25 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:38:25 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1749728916' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:38:25 np0005539552 nova_compute[233724]: 2025-11-29 08:38:25.565 233728 DEBUG oslo_concurrency.processutils [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:25 np0005539552 nova_compute[233724]: 2025-11-29 08:38:25.569 233728 DEBUG nova.virt.libvirt.vif [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:38:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-2075426497',display_name='tempest-TestMinimumBasicScenario-server-2075426497',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-2075426497',id=183,image_ref='91e66a4f-7b40-4f67-810c-3642968fad68',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEB08XnJEDg+1P30+zEgbUnzYodIZ3g6KGGPd+kzRxtGlihlN0qT4raS3B+Ikqn+VjW8vQOCmeKi8zzD2w95k51XZW3izu1i7RTrHUi/m2K00I63bNtT+aDtKNR89OWXnQ==',key_name='tempest-TestMinimumBasicScenario-1327155599',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8d5e30b74e6449dd90ecb667977d1fe9',ramdisk_id='',reservation_id='r-rh2ezghd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='91e66a4f-7b40-4f67-810c-3642968fad68',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestMinimumBasicScenario-1569311049',owner_user_name='tempest-TestMinimumBasicScenario-1569311049-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:38:21Z,user_data=None,user_id='e8b20745b2d14f70b64a43335faed2f4',uuid=608e1017-9da7-4ba6-a346-f047562d380b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2769598c-70f6-487d-b82b-5c1920b0e91d", "address": "fa:16:3e:96:d3:7a", "network": {"id": "0430aba5-d0d7-4d98-ad87-552e6639c190", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1917226298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e30b74e6449dd90ecb667977d1fe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2769598c-70", "ovs_interfaceid": "2769598c-70f6-487d-b82b-5c1920b0e91d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:38:25 np0005539552 nova_compute[233724]: 2025-11-29 08:38:25.570 233728 DEBUG nova.network.os_vif_util [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Converting VIF {"id": "2769598c-70f6-487d-b82b-5c1920b0e91d", "address": "fa:16:3e:96:d3:7a", "network": {"id": "0430aba5-d0d7-4d98-ad87-552e6639c190", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1917226298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e30b74e6449dd90ecb667977d1fe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2769598c-70", "ovs_interfaceid": "2769598c-70f6-487d-b82b-5c1920b0e91d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:38:25 np0005539552 nova_compute[233724]: 2025-11-29 08:38:25.572 233728 DEBUG nova.network.os_vif_util [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:d3:7a,bridge_name='br-int',has_traffic_filtering=True,id=2769598c-70f6-487d-b82b-5c1920b0e91d,network=Network(0430aba5-d0d7-4d98-ad87-552e6639c190),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2769598c-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:38:25 np0005539552 nova_compute[233724]: 2025-11-29 08:38:25.574 233728 DEBUG nova.objects.instance [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 608e1017-9da7-4ba6-a346-f047562d380b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:38:25 np0005539552 nova_compute[233724]: 2025-11-29 08:38:25.596 233728 DEBUG nova.virt.libvirt.driver [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:38:25 np0005539552 nova_compute[233724]:  <uuid>608e1017-9da7-4ba6-a346-f047562d380b</uuid>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:  <name>instance-000000b7</name>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:38:25 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:      <nova:name>tempest-TestMinimumBasicScenario-server-2075426497</nova:name>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:38:24</nova:creationTime>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:38:25 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:        <nova:user uuid="e8b20745b2d14f70b64a43335faed2f4">tempest-TestMinimumBasicScenario-1569311049-project-member</nova:user>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:        <nova:project uuid="8d5e30b74e6449dd90ecb667977d1fe9">tempest-TestMinimumBasicScenario-1569311049</nova:project>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="91e66a4f-7b40-4f67-810c-3642968fad68"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:        <nova:port uuid="2769598c-70f6-487d-b82b-5c1920b0e91d">
Nov 29 03:38:25 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:      <entry name="serial">608e1017-9da7-4ba6-a346-f047562d380b</entry>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:      <entry name="uuid">608e1017-9da7-4ba6-a346-f047562d380b</entry>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:38:25 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/608e1017-9da7-4ba6-a346-f047562d380b_disk">
Nov 29 03:38:25 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:38:25 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:38:25 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/608e1017-9da7-4ba6-a346-f047562d380b_disk.config">
Nov 29 03:38:25 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:38:25 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:38:25 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:96:d3:7a"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:      <target dev="tap2769598c-70"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:38:25 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/608e1017-9da7-4ba6-a346-f047562d380b/console.log" append="off"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:38:25 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:38:25 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:38:25 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:38:25 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:38:25 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:38:25 np0005539552 nova_compute[233724]: 2025-11-29 08:38:25.599 233728 DEBUG nova.compute.manager [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Preparing to wait for external event network-vif-plugged-2769598c-70f6-487d-b82b-5c1920b0e91d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:38:25 np0005539552 nova_compute[233724]: 2025-11-29 08:38:25.599 233728 DEBUG oslo_concurrency.lockutils [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Acquiring lock "608e1017-9da7-4ba6-a346-f047562d380b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:25 np0005539552 nova_compute[233724]: 2025-11-29 08:38:25.600 233728 DEBUG oslo_concurrency.lockutils [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lock "608e1017-9da7-4ba6-a346-f047562d380b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:25 np0005539552 nova_compute[233724]: 2025-11-29 08:38:25.600 233728 DEBUG oslo_concurrency.lockutils [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lock "608e1017-9da7-4ba6-a346-f047562d380b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:25 np0005539552 nova_compute[233724]: 2025-11-29 08:38:25.601 233728 DEBUG nova.virt.libvirt.vif [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:38:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-2075426497',display_name='tempest-TestMinimumBasicScenario-server-2075426497',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-2075426497',id=183,image_ref='91e66a4f-7b40-4f67-810c-3642968fad68',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEB08XnJEDg+1P30+zEgbUnzYodIZ3g6KGGPd+kzRxtGlihlN0qT4raS3B+Ikqn+VjW8vQOCmeKi8zzD2w95k51XZW3izu1i7RTrHUi/m2K00I63bNtT+aDtKNR89OWXnQ==',key_name='tempest-TestMinimumBasicScenario-1327155599',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8d5e30b74e6449dd90ecb667977d1fe9',ramdisk_id='',reservation_id='r-rh2ezghd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='91e66a4f-7b40-4f67-810c-3642968fad68',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestMinimumBasicScenario-1569311049',owner_user_name='tempest-TestMinimumBasicScenario-1569311049-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:38:21Z,user_data=None,user_id='e8b20745b2d14f70b64a43335faed2f4',uuid=608e1017-9da7-4ba6-a346-f047562d380b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2769598c-70f6-487d-b82b-5c1920b0e91d", "address": "fa:16:3e:96:d3:7a", "network": {"id": "0430aba5-d0d7-4d98-ad87-552e6639c190", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1917226298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "8d5e30b74e6449dd90ecb667977d1fe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2769598c-70", "ovs_interfaceid": "2769598c-70f6-487d-b82b-5c1920b0e91d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:38:25 np0005539552 nova_compute[233724]: 2025-11-29 08:38:25.601 233728 DEBUG nova.network.os_vif_util [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Converting VIF {"id": "2769598c-70f6-487d-b82b-5c1920b0e91d", "address": "fa:16:3e:96:d3:7a", "network": {"id": "0430aba5-d0d7-4d98-ad87-552e6639c190", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1917226298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e30b74e6449dd90ecb667977d1fe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2769598c-70", "ovs_interfaceid": "2769598c-70f6-487d-b82b-5c1920b0e91d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:38:25 np0005539552 nova_compute[233724]: 2025-11-29 08:38:25.601 233728 DEBUG nova.network.os_vif_util [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:d3:7a,bridge_name='br-int',has_traffic_filtering=True,id=2769598c-70f6-487d-b82b-5c1920b0e91d,network=Network(0430aba5-d0d7-4d98-ad87-552e6639c190),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2769598c-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:38:25 np0005539552 nova_compute[233724]: 2025-11-29 08:38:25.602 233728 DEBUG os_vif [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:d3:7a,bridge_name='br-int',has_traffic_filtering=True,id=2769598c-70f6-487d-b82b-5c1920b0e91d,network=Network(0430aba5-d0d7-4d98-ad87-552e6639c190),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2769598c-70') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:38:25 np0005539552 nova_compute[233724]: 2025-11-29 08:38:25.602 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:25 np0005539552 nova_compute[233724]: 2025-11-29 08:38:25.603 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:25 np0005539552 nova_compute[233724]: 2025-11-29 08:38:25.603 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:38:25 np0005539552 nova_compute[233724]: 2025-11-29 08:38:25.607 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:25 np0005539552 nova_compute[233724]: 2025-11-29 08:38:25.607 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2769598c-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:25 np0005539552 nova_compute[233724]: 2025-11-29 08:38:25.608 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2769598c-70, col_values=(('external_ids', {'iface-id': '2769598c-70f6-487d-b82b-5c1920b0e91d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:96:d3:7a', 'vm-uuid': '608e1017-9da7-4ba6-a346-f047562d380b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:25 np0005539552 nova_compute[233724]: 2025-11-29 08:38:25.609 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:25 np0005539552 NetworkManager[48926]: <info>  [1764405505.6112] manager: (tap2769598c-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/361)
Nov 29 03:38:25 np0005539552 nova_compute[233724]: 2025-11-29 08:38:25.612 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:38:25 np0005539552 nova_compute[233724]: 2025-11-29 08:38:25.616 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:25 np0005539552 nova_compute[233724]: 2025-11-29 08:38:25.617 233728 INFO os_vif [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:d3:7a,bridge_name='br-int',has_traffic_filtering=True,id=2769598c-70f6-487d-b82b-5c1920b0e91d,network=Network(0430aba5-d0d7-4d98-ad87-552e6639c190),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2769598c-70')#033[00m
Nov 29 03:38:25 np0005539552 nova_compute[233724]: 2025-11-29 08:38:25.678 233728 DEBUG nova.virt.libvirt.driver [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:38:25 np0005539552 nova_compute[233724]: 2025-11-29 08:38:25.680 233728 DEBUG nova.virt.libvirt.driver [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:38:25 np0005539552 nova_compute[233724]: 2025-11-29 08:38:25.680 233728 DEBUG nova.virt.libvirt.driver [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] No VIF found with MAC fa:16:3e:96:d3:7a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:38:25 np0005539552 nova_compute[233724]: 2025-11-29 08:38:25.681 233728 INFO nova.virt.libvirt.driver [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Using config drive#033[00m
Nov 29 03:38:25 np0005539552 nova_compute[233724]: 2025-11-29 08:38:25.717 233728 DEBUG nova.storage.rbd_utils [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] rbd image 608e1017-9da7-4ba6-a346-f047562d380b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:38:26 np0005539552 nova_compute[233724]: 2025-11-29 08:38:26.335 233728 INFO nova.virt.libvirt.driver [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Creating config drive at /var/lib/nova/instances/608e1017-9da7-4ba6-a346-f047562d380b/disk.config#033[00m
Nov 29 03:38:26 np0005539552 nova_compute[233724]: 2025-11-29 08:38:26.344 233728 DEBUG oslo_concurrency.processutils [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/608e1017-9da7-4ba6-a346-f047562d380b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp54zncm4w execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:26.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:26 np0005539552 nova_compute[233724]: 2025-11-29 08:38:26.502 233728 DEBUG oslo_concurrency.processutils [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/608e1017-9da7-4ba6-a346-f047562d380b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp54zncm4w" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:26.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:26 np0005539552 nova_compute[233724]: 2025-11-29 08:38:26.908 233728 DEBUG nova.storage.rbd_utils [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] rbd image 608e1017-9da7-4ba6-a346-f047562d380b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:38:26 np0005539552 nova_compute[233724]: 2025-11-29 08:38:26.913 233728 DEBUG oslo_concurrency.processutils [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/608e1017-9da7-4ba6-a346-f047562d380b/disk.config 608e1017-9da7-4ba6-a346-f047562d380b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:27 np0005539552 nova_compute[233724]: 2025-11-29 08:38:27.028 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:27 np0005539552 nova_compute[233724]: 2025-11-29 08:38:27.607 233728 DEBUG oslo_concurrency.processutils [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/608e1017-9da7-4ba6-a346-f047562d380b/disk.config 608e1017-9da7-4ba6-a346-f047562d380b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.694s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:27 np0005539552 nova_compute[233724]: 2025-11-29 08:38:27.608 233728 INFO nova.virt.libvirt.driver [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Deleting local config drive /var/lib/nova/instances/608e1017-9da7-4ba6-a346-f047562d380b/disk.config because it was imported into RBD.#033[00m
Nov 29 03:38:27 np0005539552 nova_compute[233724]: 2025-11-29 08:38:27.683 233728 DEBUG nova.network.neutron [req-07ee8692-0f5e-4ce7-af4b-3b8d527c6bbe req-74626295-9949-4ade-82d4-85cba560fbca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Updated VIF entry in instance network info cache for port 2769598c-70f6-487d-b82b-5c1920b0e91d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:38:27 np0005539552 nova_compute[233724]: 2025-11-29 08:38:27.685 233728 DEBUG nova.network.neutron [req-07ee8692-0f5e-4ce7-af4b-3b8d527c6bbe req-74626295-9949-4ade-82d4-85cba560fbca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Updating instance_info_cache with network_info: [{"id": "2769598c-70f6-487d-b82b-5c1920b0e91d", "address": "fa:16:3e:96:d3:7a", "network": {"id": "0430aba5-d0d7-4d98-ad87-552e6639c190", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1917226298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e30b74e6449dd90ecb667977d1fe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2769598c-70", "ovs_interfaceid": "2769598c-70f6-487d-b82b-5c1920b0e91d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:38:27 np0005539552 kernel: tap2769598c-70: entered promiscuous mode
Nov 29 03:38:27 np0005539552 NetworkManager[48926]: <info>  [1764405507.7005] manager: (tap2769598c-70): new Tun device (/org/freedesktop/NetworkManager/Devices/362)
Nov 29 03:38:27 np0005539552 nova_compute[233724]: 2025-11-29 08:38:27.702 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:27 np0005539552 ovn_controller[133798]: 2025-11-29T08:38:27Z|00822|binding|INFO|Claiming lport 2769598c-70f6-487d-b82b-5c1920b0e91d for this chassis.
Nov 29 03:38:27 np0005539552 ovn_controller[133798]: 2025-11-29T08:38:27Z|00823|binding|INFO|2769598c-70f6-487d-b82b-5c1920b0e91d: Claiming fa:16:3e:96:d3:7a 10.100.0.12
Nov 29 03:38:27 np0005539552 nova_compute[233724]: 2025-11-29 08:38:27.706 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:27.723 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:d3:7a 10.100.0.12'], port_security=['fa:16:3e:96:d3:7a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '608e1017-9da7-4ba6-a346-f047562d380b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0430aba5-d0d7-4d98-ad87-552e6639c190', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8d5e30b74e6449dd90ecb667977d1fe9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '52d8fe9a-0c55-4eea-ab3d-17059ad4962d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75451e9b-c915-4a2b-97ed-6cc2296328f6, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=2769598c-70f6-487d-b82b-5c1920b0e91d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:38:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:27.725 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 2769598c-70f6-487d-b82b-5c1920b0e91d in datapath 0430aba5-d0d7-4d98-ad87-552e6639c190 bound to our chassis#033[00m
Nov 29 03:38:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:27.729 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0430aba5-d0d7-4d98-ad87-552e6639c190#033[00m
Nov 29 03:38:27 np0005539552 nova_compute[233724]: 2025-11-29 08:38:27.736 233728 DEBUG oslo_concurrency.lockutils [req-07ee8692-0f5e-4ce7-af4b-3b8d527c6bbe req-74626295-9949-4ade-82d4-85cba560fbca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-608e1017-9da7-4ba6-a346-f047562d380b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:38:27 np0005539552 nova_compute[233724]: 2025-11-29 08:38:27.740 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:27 np0005539552 ovn_controller[133798]: 2025-11-29T08:38:27Z|00824|binding|INFO|Setting lport 2769598c-70f6-487d-b82b-5c1920b0e91d ovn-installed in OVS
Nov 29 03:38:27 np0005539552 ovn_controller[133798]: 2025-11-29T08:38:27Z|00825|binding|INFO|Setting lport 2769598c-70f6-487d-b82b-5c1920b0e91d up in Southbound
Nov 29 03:38:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:27.747 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[bf210365-558d-4f52-adea-d862df03a724]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:27.748 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0430aba5-d1 in ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:38:27 np0005539552 systemd-udevd[309913]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:38:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:27.750 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0430aba5-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:38:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:27.750 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[af60669d-4f48-4ee9-82e0-d375dc3f686f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:27.751 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e6fd334c-b88d-4764-82d1-333c5147f5fa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:27 np0005539552 systemd-machined[196379]: New machine qemu-84-instance-000000b7.
Nov 29 03:38:27 np0005539552 NetworkManager[48926]: <info>  [1764405507.7713] device (tap2769598c-70): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:38:27 np0005539552 NetworkManager[48926]: <info>  [1764405507.7723] device (tap2769598c-70): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:38:27 np0005539552 systemd[1]: Started Virtual Machine qemu-84-instance-000000b7.
Nov 29 03:38:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:27.776 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[9dae6c5e-f90b-40e3-8d26-64627480a910]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:27.791 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[aa632cf8-3b38-44fa-bfba-b52dcea27bfe]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:27.837 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[bcf937c6-121a-472d-a2a0-b351a6d95e38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:27 np0005539552 NetworkManager[48926]: <info>  [1764405507.8441] manager: (tap0430aba5-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/363)
Nov 29 03:38:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:27.843 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b2be6ca9-b963-40e6-bcc4-5c55a537fd23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:27 np0005539552 systemd-udevd[309916]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:38:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:27.880 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[508c8c04-7886-4afa-b267-bd262aca7a82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:27.883 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[f0559bc5-a674-4847-8c23-4e9f3eab96e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:27 np0005539552 NetworkManager[48926]: <info>  [1764405507.9114] device (tap0430aba5-d0): carrier: link connected
Nov 29 03:38:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:27.919 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[bcb55fb2-7a54-427e-bb04-5641e65135ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:27.942 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b2877032-f2cc-4ff7-8395-cc385f2ad90f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0430aba5-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5c:fd:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 243], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 855359, 'reachable_time': 41252, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309945, 'error': None, 'target': 'ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:27.961 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[5a0b23d0-c47f-4f51-8e5c-6596eb3a046a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5c:fd4a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 855359, 'tstamp': 855359}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309946, 'error': None, 'target': 'ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:27.982 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[8c553d88-e189-4fb4-a05c-e7fd275fe01d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0430aba5-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5c:fd:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 243], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 855359, 'reachable_time': 41252, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309947, 'error': None, 'target': 'ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:28.015 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4e98ce1e-a8da-43f7-94ce-1e187c6acc6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:28.093 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[1c272d47-e326-4849-9186-250ded8c7b36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:28.094 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0430aba5-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:28.095 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:38:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:28.095 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0430aba5-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:28 np0005539552 kernel: tap0430aba5-d0: entered promiscuous mode
Nov 29 03:38:28 np0005539552 NetworkManager[48926]: <info>  [1764405508.0980] manager: (tap0430aba5-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/364)
Nov 29 03:38:28 np0005539552 nova_compute[233724]: 2025-11-29 08:38:28.097 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:28.136 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0430aba5-d0, col_values=(('external_ids', {'iface-id': 'dac731d0-69cc-4042-8450-f886e5854f80'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:28 np0005539552 ovn_controller[133798]: 2025-11-29T08:38:28Z|00826|binding|INFO|Releasing lport dac731d0-69cc-4042-8450-f886e5854f80 from this chassis (sb_readonly=0)
Nov 29 03:38:28 np0005539552 nova_compute[233724]: 2025-11-29 08:38:28.137 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:28.140 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0430aba5-d0d7-4d98-ad87-552e6639c190.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0430aba5-d0d7-4d98-ad87-552e6639c190.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:38:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:28.141 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[37699b3c-b40d-4c19-bb47-9b9ee8de3e2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:28.142 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:38:28 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:38:28 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:38:28 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-0430aba5-d0d7-4d98-ad87-552e6639c190
Nov 29 03:38:28 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:38:28 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:38:28 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:38:28 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/0430aba5-d0d7-4d98-ad87-552e6639c190.pid.haproxy
Nov 29 03:38:28 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:38:28 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:38:28 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:38:28 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:38:28 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:38:28 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:38:28 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:38:28 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:38:28 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:38:28 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:38:28 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:38:28 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:38:28 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:38:28 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:38:28 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:38:28 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:38:28 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:38:28 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:38:28 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:38:28 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:38:28 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 0430aba5-d0d7-4d98-ad87-552e6639c190
Nov 29 03:38:28 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:38:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:28.144 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190', 'env', 'PROCESS_TAG=haproxy-0430aba5-d0d7-4d98-ad87-552e6639c190', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0430aba5-d0d7-4d98-ad87-552e6639c190.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:38:28 np0005539552 nova_compute[233724]: 2025-11-29 08:38:28.159 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:28 np0005539552 nova_compute[233724]: 2025-11-29 08:38:28.168 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:28 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:38:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:28.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:28 np0005539552 nova_compute[233724]: 2025-11-29 08:38:28.450 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405508.4490461, 608e1017-9da7-4ba6-a346-f047562d380b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:38:28 np0005539552 nova_compute[233724]: 2025-11-29 08:38:28.452 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] VM Started (Lifecycle Event)#033[00m
Nov 29 03:38:28 np0005539552 podman[310069]: 2025-11-29 08:38:28.509595991 +0000 UTC m=+0.020608946 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:38:28 np0005539552 nova_compute[233724]: 2025-11-29 08:38:28.664 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:38:28 np0005539552 nova_compute[233724]: 2025-11-29 08:38:28.670 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405508.449395, 608e1017-9da7-4ba6-a346-f047562d380b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:38:28 np0005539552 nova_compute[233724]: 2025-11-29 08:38:28.671 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:38:28 np0005539552 podman[310069]: 2025-11-29 08:38:28.682476053 +0000 UTC m=+0.193489028 container create 5b9305a8d008cd4847f1c3ac0eaa3ed24e5743dfe5f44989e7f52582620208df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:38:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e383 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:28 np0005539552 nova_compute[233724]: 2025-11-29 08:38:28.696 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:38:28 np0005539552 nova_compute[233724]: 2025-11-29 08:38:28.703 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:38:28 np0005539552 nova_compute[233724]: 2025-11-29 08:38:28.727 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:38:28 np0005539552 systemd[1]: Started libpod-conmon-5b9305a8d008cd4847f1c3ac0eaa3ed24e5743dfe5f44989e7f52582620208df.scope.
Nov 29 03:38:28 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:38:28 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99c34122b6419731832b3aa3314012427556a335b7cf3376f5061b71524eced0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:38:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:38:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:28.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:38:28 np0005539552 podman[310069]: 2025-11-29 08:38:28.809601923 +0000 UTC m=+0.320614868 container init 5b9305a8d008cd4847f1c3ac0eaa3ed24e5743dfe5f44989e7f52582620208df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 03:38:28 np0005539552 podman[310069]: 2025-11-29 08:38:28.816902179 +0000 UTC m=+0.327915104 container start 5b9305a8d008cd4847f1c3ac0eaa3ed24e5743dfe5f44989e7f52582620208df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:38:28 np0005539552 neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190[310085]: [NOTICE]   (310090) : New worker (310092) forked
Nov 29 03:38:28 np0005539552 neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190[310085]: [NOTICE]   (310090) : Loading success.
Nov 29 03:38:29 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:38:30 np0005539552 nova_compute[233724]: 2025-11-29 08:38:30.407 233728 DEBUG nova.compute.manager [req-d4fe4b4d-6bc6-4eff-b794-9b7d5324745c req-50c6a074-51cf-4f95-8e89-43db1676b2b5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Received event network-vif-plugged-2769598c-70f6-487d-b82b-5c1920b0e91d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:38:30 np0005539552 nova_compute[233724]: 2025-11-29 08:38:30.407 233728 DEBUG oslo_concurrency.lockutils [req-d4fe4b4d-6bc6-4eff-b794-9b7d5324745c req-50c6a074-51cf-4f95-8e89-43db1676b2b5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "608e1017-9da7-4ba6-a346-f047562d380b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:30 np0005539552 nova_compute[233724]: 2025-11-29 08:38:30.408 233728 DEBUG oslo_concurrency.lockutils [req-d4fe4b4d-6bc6-4eff-b794-9b7d5324745c req-50c6a074-51cf-4f95-8e89-43db1676b2b5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "608e1017-9da7-4ba6-a346-f047562d380b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:30 np0005539552 nova_compute[233724]: 2025-11-29 08:38:30.408 233728 DEBUG oslo_concurrency.lockutils [req-d4fe4b4d-6bc6-4eff-b794-9b7d5324745c req-50c6a074-51cf-4f95-8e89-43db1676b2b5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "608e1017-9da7-4ba6-a346-f047562d380b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:30 np0005539552 nova_compute[233724]: 2025-11-29 08:38:30.408 233728 DEBUG nova.compute.manager [req-d4fe4b4d-6bc6-4eff-b794-9b7d5324745c req-50c6a074-51cf-4f95-8e89-43db1676b2b5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Processing event network-vif-plugged-2769598c-70f6-487d-b82b-5c1920b0e91d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:38:30 np0005539552 nova_compute[233724]: 2025-11-29 08:38:30.408 233728 DEBUG nova.compute.manager [req-d4fe4b4d-6bc6-4eff-b794-9b7d5324745c req-50c6a074-51cf-4f95-8e89-43db1676b2b5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Received event network-vif-plugged-2769598c-70f6-487d-b82b-5c1920b0e91d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:38:30 np0005539552 nova_compute[233724]: 2025-11-29 08:38:30.409 233728 DEBUG oslo_concurrency.lockutils [req-d4fe4b4d-6bc6-4eff-b794-9b7d5324745c req-50c6a074-51cf-4f95-8e89-43db1676b2b5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "608e1017-9da7-4ba6-a346-f047562d380b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:30 np0005539552 nova_compute[233724]: 2025-11-29 08:38:30.409 233728 DEBUG oslo_concurrency.lockutils [req-d4fe4b4d-6bc6-4eff-b794-9b7d5324745c req-50c6a074-51cf-4f95-8e89-43db1676b2b5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "608e1017-9da7-4ba6-a346-f047562d380b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:30 np0005539552 nova_compute[233724]: 2025-11-29 08:38:30.409 233728 DEBUG oslo_concurrency.lockutils [req-d4fe4b4d-6bc6-4eff-b794-9b7d5324745c req-50c6a074-51cf-4f95-8e89-43db1676b2b5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "608e1017-9da7-4ba6-a346-f047562d380b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:30 np0005539552 nova_compute[233724]: 2025-11-29 08:38:30.409 233728 DEBUG nova.compute.manager [req-d4fe4b4d-6bc6-4eff-b794-9b7d5324745c req-50c6a074-51cf-4f95-8e89-43db1676b2b5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] No waiting events found dispatching network-vif-plugged-2769598c-70f6-487d-b82b-5c1920b0e91d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:38:30 np0005539552 nova_compute[233724]: 2025-11-29 08:38:30.410 233728 WARNING nova.compute.manager [req-d4fe4b4d-6bc6-4eff-b794-9b7d5324745c req-50c6a074-51cf-4f95-8e89-43db1676b2b5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Received unexpected event network-vif-plugged-2769598c-70f6-487d-b82b-5c1920b0e91d for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:38:30 np0005539552 nova_compute[233724]: 2025-11-29 08:38:30.410 233728 DEBUG nova.compute.manager [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:38:30 np0005539552 nova_compute[233724]: 2025-11-29 08:38:30.414 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405510.4141455, 608e1017-9da7-4ba6-a346-f047562d380b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:38:30 np0005539552 nova_compute[233724]: 2025-11-29 08:38:30.414 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:38:30 np0005539552 nova_compute[233724]: 2025-11-29 08:38:30.416 233728 DEBUG nova.virt.libvirt.driver [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:38:30 np0005539552 nova_compute[233724]: 2025-11-29 08:38:30.419 233728 INFO nova.virt.libvirt.driver [-] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Instance spawned successfully.#033[00m
Nov 29 03:38:30 np0005539552 nova_compute[233724]: 2025-11-29 08:38:30.420 233728 DEBUG nova.virt.libvirt.driver [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:38:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:30.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:30 np0005539552 nova_compute[233724]: 2025-11-29 08:38:30.452 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:38:30 np0005539552 nova_compute[233724]: 2025-11-29 08:38:30.457 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:38:30 np0005539552 nova_compute[233724]: 2025-11-29 08:38:30.460 233728 DEBUG nova.virt.libvirt.driver [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:38:30 np0005539552 nova_compute[233724]: 2025-11-29 08:38:30.460 233728 DEBUG nova.virt.libvirt.driver [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:38:30 np0005539552 nova_compute[233724]: 2025-11-29 08:38:30.460 233728 DEBUG nova.virt.libvirt.driver [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:38:30 np0005539552 nova_compute[233724]: 2025-11-29 08:38:30.460 233728 DEBUG nova.virt.libvirt.driver [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:38:30 np0005539552 nova_compute[233724]: 2025-11-29 08:38:30.461 233728 DEBUG nova.virt.libvirt.driver [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:38:30 np0005539552 nova_compute[233724]: 2025-11-29 08:38:30.461 233728 DEBUG nova.virt.libvirt.driver [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:38:30 np0005539552 nova_compute[233724]: 2025-11-29 08:38:30.487 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:38:30 np0005539552 nova_compute[233724]: 2025-11-29 08:38:30.524 233728 INFO nova.compute.manager [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Took 9.26 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:38:30 np0005539552 nova_compute[233724]: 2025-11-29 08:38:30.525 233728 DEBUG nova.compute.manager [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:38:30 np0005539552 nova_compute[233724]: 2025-11-29 08:38:30.604 233728 INFO nova.compute.manager [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Took 10.47 seconds to build instance.#033[00m
Nov 29 03:38:30 np0005539552 nova_compute[233724]: 2025-11-29 08:38:30.611 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:30 np0005539552 nova_compute[233724]: 2025-11-29 08:38:30.627 233728 DEBUG oslo_concurrency.lockutils [None req-bc70a6a5-86bc-4f79-bc3e-8bebbe1fdf8a e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lock "608e1017-9da7-4ba6-a346-f047562d380b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:30.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:30.935 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:31 np0005539552 podman[310104]: 2025-11-29 08:38:31.985870523 +0000 UTC m=+0.069707317 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible)
Nov 29 03:38:31 np0005539552 podman[310105]: 2025-11-29 08:38:31.993712634 +0000 UTC m=+0.075819621 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 03:38:32 np0005539552 nova_compute[233724]: 2025-11-29 08:38:32.029 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:32 np0005539552 podman[310106]: 2025-11-29 08:38:32.033683238 +0000 UTC m=+0.103517835 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Nov 29 03:38:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:32.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:32.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e383 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e384 e384: 3 total, 3 up, 3 in
Nov 29 03:38:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:34.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:34.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:34 np0005539552 nova_compute[233724]: 2025-11-29 08:38:34.885 233728 DEBUG oslo_concurrency.lockutils [None req-b2656514-2a50-4a43-96b2-2a2355cf7b4f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Acquiring lock "608e1017-9da7-4ba6-a346-f047562d380b" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:34 np0005539552 nova_compute[233724]: 2025-11-29 08:38:34.885 233728 DEBUG oslo_concurrency.lockutils [None req-b2656514-2a50-4a43-96b2-2a2355cf7b4f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lock "608e1017-9da7-4ba6-a346-f047562d380b" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:35 np0005539552 nova_compute[233724]: 2025-11-29 08:38:35.576 233728 DEBUG nova.objects.instance [None req-b2656514-2a50-4a43-96b2-2a2355cf7b4f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lazy-loading 'flavor' on Instance uuid 608e1017-9da7-4ba6-a346-f047562d380b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:38:35 np0005539552 ovn_controller[133798]: 2025-11-29T08:38:35Z|00827|binding|INFO|Releasing lport dac731d0-69cc-4042-8450-f886e5854f80 from this chassis (sb_readonly=0)
Nov 29 03:38:35 np0005539552 nova_compute[233724]: 2025-11-29 08:38:35.597 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:35 np0005539552 nova_compute[233724]: 2025-11-29 08:38:35.612 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:35 np0005539552 nova_compute[233724]: 2025-11-29 08:38:35.658 233728 DEBUG oslo_concurrency.lockutils [None req-b2656514-2a50-4a43-96b2-2a2355cf7b4f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lock "608e1017-9da7-4ba6-a346-f047562d380b" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.773s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:35 np0005539552 ovn_controller[133798]: 2025-11-29T08:38:35Z|00828|binding|INFO|Releasing lport dac731d0-69cc-4042-8450-f886e5854f80 from this chassis (sb_readonly=0)
Nov 29 03:38:35 np0005539552 nova_compute[233724]: 2025-11-29 08:38:35.852 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:35 np0005539552 nova_compute[233724]: 2025-11-29 08:38:35.903 233728 DEBUG oslo_concurrency.lockutils [None req-b2656514-2a50-4a43-96b2-2a2355cf7b4f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Acquiring lock "608e1017-9da7-4ba6-a346-f047562d380b" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:35 np0005539552 nova_compute[233724]: 2025-11-29 08:38:35.903 233728 DEBUG oslo_concurrency.lockutils [None req-b2656514-2a50-4a43-96b2-2a2355cf7b4f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lock "608e1017-9da7-4ba6-a346-f047562d380b" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:35 np0005539552 nova_compute[233724]: 2025-11-29 08:38:35.904 233728 INFO nova.compute.manager [None req-b2656514-2a50-4a43-96b2-2a2355cf7b4f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Attaching volume 4c2de305-8721-44a9-a539-b07eee5e101a to /dev/vdb#033[00m
Nov 29 03:38:36 np0005539552 nova_compute[233724]: 2025-11-29 08:38:36.072 233728 DEBUG os_brick.utils [None req-b2656514-2a50-4a43-96b2-2a2355cf7b4f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:38:36 np0005539552 nova_compute[233724]: 2025-11-29 08:38:36.074 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:36 np0005539552 nova_compute[233724]: 2025-11-29 08:38:36.092 243418 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:36 np0005539552 nova_compute[233724]: 2025-11-29 08:38:36.092 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[e5fbc43b-9533-4791-9cb4-932bbe9de257]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:36 np0005539552 nova_compute[233724]: 2025-11-29 08:38:36.095 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:36 np0005539552 nova_compute[233724]: 2025-11-29 08:38:36.111 243418 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:36 np0005539552 nova_compute[233724]: 2025-11-29 08:38:36.111 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[0712e58c-8041-4b93-a3f5-25dca9a62434]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0e73df9328', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:36 np0005539552 nova_compute[233724]: 2025-11-29 08:38:36.114 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:36 np0005539552 nova_compute[233724]: 2025-11-29 08:38:36.127 243418 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:36 np0005539552 nova_compute[233724]: 2025-11-29 08:38:36.127 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[37e32714-3795-40bf-939d-5ed24b97d9ba]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:36 np0005539552 nova_compute[233724]: 2025-11-29 08:38:36.129 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[0dfd1f15-3b30-42d9-80c0-9a9e920b8f1b]: (4, '6fbde64a-d978-4f1a-a29d-a77e1f5a1987') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:36 np0005539552 nova_compute[233724]: 2025-11-29 08:38:36.130 233728 DEBUG oslo_concurrency.processutils [None req-b2656514-2a50-4a43-96b2-2a2355cf7b4f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:36 np0005539552 nova_compute[233724]: 2025-11-29 08:38:36.185 233728 DEBUG oslo_concurrency.processutils [None req-b2656514-2a50-4a43-96b2-2a2355cf7b4f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] CMD "nvme version" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:36 np0005539552 nova_compute[233724]: 2025-11-29 08:38:36.189 233728 DEBUG os_brick.initiator.connectors.lightos [None req-b2656514-2a50-4a43-96b2-2a2355cf7b4f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:38:36 np0005539552 nova_compute[233724]: 2025-11-29 08:38:36.190 233728 DEBUG os_brick.initiator.connectors.lightos [None req-b2656514-2a50-4a43-96b2-2a2355cf7b4f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:38:36 np0005539552 nova_compute[233724]: 2025-11-29 08:38:36.191 233728 DEBUG os_brick.initiator.connectors.lightos [None req-b2656514-2a50-4a43-96b2-2a2355cf7b4f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:38:36 np0005539552 nova_compute[233724]: 2025-11-29 08:38:36.191 233728 DEBUG os_brick.utils [None req-b2656514-2a50-4a43-96b2-2a2355cf7b4f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] <== get_connector_properties: return (117ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0e73df9328', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '6fbde64a-d978-4f1a-a29d-a77e1f5a1987', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:38:36 np0005539552 nova_compute[233724]: 2025-11-29 08:38:36.192 233728 DEBUG nova.virt.block_device [None req-b2656514-2a50-4a43-96b2-2a2355cf7b4f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Updating existing volume attachment record: c1401724-e2f9-4264-9485-1111cd333829 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:38:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:36.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:36.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:36 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:38:36 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2583628070' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:38:36 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:38:36 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2583628070' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:38:36 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:38:36 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4107809963' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:38:36 np0005539552 nova_compute[233724]: 2025-11-29 08:38:36.994 233728 DEBUG nova.objects.instance [None req-b2656514-2a50-4a43-96b2-2a2355cf7b4f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lazy-loading 'flavor' on Instance uuid 608e1017-9da7-4ba6-a346-f047562d380b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:38:37 np0005539552 nova_compute[233724]: 2025-11-29 08:38:37.030 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:37 np0005539552 nova_compute[233724]: 2025-11-29 08:38:37.036 233728 DEBUG nova.virt.libvirt.driver [None req-b2656514-2a50-4a43-96b2-2a2355cf7b4f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Attempting to attach volume 4c2de305-8721-44a9-a539-b07eee5e101a with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Nov 29 03:38:37 np0005539552 nova_compute[233724]: 2025-11-29 08:38:37.039 233728 DEBUG nova.virt.libvirt.guest [None req-b2656514-2a50-4a43-96b2-2a2355cf7b4f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 03:38:37 np0005539552 nova_compute[233724]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:38:37 np0005539552 nova_compute[233724]:  <source protocol="rbd" name="volumes/volume-4c2de305-8721-44a9-a539-b07eee5e101a">
Nov 29 03:38:37 np0005539552 nova_compute[233724]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:38:37 np0005539552 nova_compute[233724]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:38:37 np0005539552 nova_compute[233724]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:38:37 np0005539552 nova_compute[233724]:  </source>
Nov 29 03:38:37 np0005539552 nova_compute[233724]:  <auth username="openstack">
Nov 29 03:38:37 np0005539552 nova_compute[233724]:    <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:38:37 np0005539552 nova_compute[233724]:  </auth>
Nov 29 03:38:37 np0005539552 nova_compute[233724]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:38:37 np0005539552 nova_compute[233724]:  <serial>4c2de305-8721-44a9-a539-b07eee5e101a</serial>
Nov 29 03:38:37 np0005539552 nova_compute[233724]: </disk>
Nov 29 03:38:37 np0005539552 nova_compute[233724]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 03:38:37 np0005539552 nova_compute[233724]: 2025-11-29 08:38:37.160 233728 DEBUG nova.virt.libvirt.driver [None req-b2656514-2a50-4a43-96b2-2a2355cf7b4f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:38:37 np0005539552 nova_compute[233724]: 2025-11-29 08:38:37.160 233728 DEBUG nova.virt.libvirt.driver [None req-b2656514-2a50-4a43-96b2-2a2355cf7b4f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:38:37 np0005539552 nova_compute[233724]: 2025-11-29 08:38:37.160 233728 DEBUG nova.virt.libvirt.driver [None req-b2656514-2a50-4a43-96b2-2a2355cf7b4f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:38:37 np0005539552 nova_compute[233724]: 2025-11-29 08:38:37.161 233728 DEBUG nova.virt.libvirt.driver [None req-b2656514-2a50-4a43-96b2-2a2355cf7b4f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] No VIF found with MAC fa:16:3e:96:d3:7a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:38:37 np0005539552 nova_compute[233724]: 2025-11-29 08:38:37.416 233728 DEBUG oslo_concurrency.lockutils [None req-b2656514-2a50-4a43-96b2-2a2355cf7b4f e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lock "608e1017-9da7-4ba6-a346-f047562d380b" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.513s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:38:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1189210426' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:38:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:38:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1189210426' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:38:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:38.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:38.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:38:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2973677856' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:38:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:38:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2973677856' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:38:40 np0005539552 nova_compute[233724]: 2025-11-29 08:38:40.251 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:40 np0005539552 NetworkManager[48926]: <info>  [1764405520.2523] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/365)
Nov 29 03:38:40 np0005539552 NetworkManager[48926]: <info>  [1764405520.2540] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/366)
Nov 29 03:38:40 np0005539552 nova_compute[233724]: 2025-11-29 08:38:40.418 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:40 np0005539552 ovn_controller[133798]: 2025-11-29T08:38:40Z|00829|binding|INFO|Releasing lport dac731d0-69cc-4042-8450-f886e5854f80 from this chassis (sb_readonly=0)
Nov 29 03:38:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:40.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:40 np0005539552 nova_compute[233724]: 2025-11-29 08:38:40.441 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:40 np0005539552 nova_compute[233724]: 2025-11-29 08:38:40.601 233728 DEBUG nova.compute.manager [req-4dc7782c-371d-4755-bb45-d525a2a1104b req-bd4b5785-6ada-48d0-9566-7f66b1923be6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Received event network-changed-2769598c-70f6-487d-b82b-5c1920b0e91d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:38:40 np0005539552 nova_compute[233724]: 2025-11-29 08:38:40.601 233728 DEBUG nova.compute.manager [req-4dc7782c-371d-4755-bb45-d525a2a1104b req-bd4b5785-6ada-48d0-9566-7f66b1923be6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Refreshing instance network info cache due to event network-changed-2769598c-70f6-487d-b82b-5c1920b0e91d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:38:40 np0005539552 nova_compute[233724]: 2025-11-29 08:38:40.601 233728 DEBUG oslo_concurrency.lockutils [req-4dc7782c-371d-4755-bb45-d525a2a1104b req-bd4b5785-6ada-48d0-9566-7f66b1923be6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-608e1017-9da7-4ba6-a346-f047562d380b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:38:40 np0005539552 nova_compute[233724]: 2025-11-29 08:38:40.602 233728 DEBUG oslo_concurrency.lockutils [req-4dc7782c-371d-4755-bb45-d525a2a1104b req-bd4b5785-6ada-48d0-9566-7f66b1923be6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-608e1017-9da7-4ba6-a346-f047562d380b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:38:40 np0005539552 nova_compute[233724]: 2025-11-29 08:38:40.602 233728 DEBUG nova.network.neutron [req-4dc7782c-371d-4755-bb45-d525a2a1104b req-bd4b5785-6ada-48d0-9566-7f66b1923be6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Refreshing network info cache for port 2769598c-70f6-487d-b82b-5c1920b0e91d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:38:40 np0005539552 nova_compute[233724]: 2025-11-29 08:38:40.613 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:40.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:40 np0005539552 nova_compute[233724]: 2025-11-29 08:38:40.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:38:40 np0005539552 nova_compute[233724]: 2025-11-29 08:38:40.951 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:40 np0005539552 nova_compute[233724]: 2025-11-29 08:38:40.953 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:40 np0005539552 nova_compute[233724]: 2025-11-29 08:38:40.953 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:40 np0005539552 nova_compute[233724]: 2025-11-29 08:38:40.954 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:38:40 np0005539552 nova_compute[233724]: 2025-11-29 08:38:40.955 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:41 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:38:41 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1383301281' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:38:41 np0005539552 nova_compute[233724]: 2025-11-29 08:38:41.466 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:41 np0005539552 nova_compute[233724]: 2025-11-29 08:38:41.542 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-000000b7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:38:41 np0005539552 nova_compute[233724]: 2025-11-29 08:38:41.543 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-000000b7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:38:41 np0005539552 nova_compute[233724]: 2025-11-29 08:38:41.543 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-000000b7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:38:41 np0005539552 nova_compute[233724]: 2025-11-29 08:38:41.787 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:38:41 np0005539552 nova_compute[233724]: 2025-11-29 08:38:41.788 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4028MB free_disk=20.92174530029297GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:38:41 np0005539552 nova_compute[233724]: 2025-11-29 08:38:41.789 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:41 np0005539552 nova_compute[233724]: 2025-11-29 08:38:41.789 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:41 np0005539552 nova_compute[233724]: 2025-11-29 08:38:41.867 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 608e1017-9da7-4ba6-a346-f047562d380b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:38:41 np0005539552 nova_compute[233724]: 2025-11-29 08:38:41.868 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:38:41 np0005539552 nova_compute[233724]: 2025-11-29 08:38:41.869 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:38:41 np0005539552 nova_compute[233724]: 2025-11-29 08:38:41.912 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:42 np0005539552 nova_compute[233724]: 2025-11-29 08:38:42.033 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:42 np0005539552 nova_compute[233724]: 2025-11-29 08:38:42.125 233728 DEBUG nova.compute.manager [req-e055265e-6a74-4409-b858-cb5a50361420 req-460e4679-20e5-4b1c-971c-b9e5eb9fcd38 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Received event network-changed-2769598c-70f6-487d-b82b-5c1920b0e91d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:38:42 np0005539552 nova_compute[233724]: 2025-11-29 08:38:42.126 233728 DEBUG nova.compute.manager [req-e055265e-6a74-4409-b858-cb5a50361420 req-460e4679-20e5-4b1c-971c-b9e5eb9fcd38 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Refreshing instance network info cache due to event network-changed-2769598c-70f6-487d-b82b-5c1920b0e91d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:38:42 np0005539552 nova_compute[233724]: 2025-11-29 08:38:42.126 233728 DEBUG oslo_concurrency.lockutils [req-e055265e-6a74-4409-b858-cb5a50361420 req-460e4679-20e5-4b1c-971c-b9e5eb9fcd38 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-608e1017-9da7-4ba6-a346-f047562d380b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:38:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:38:42 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1158041128' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:38:42 np0005539552 nova_compute[233724]: 2025-11-29 08:38:42.391 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:42 np0005539552 nova_compute[233724]: 2025-11-29 08:38:42.397 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:38:42 np0005539552 nova_compute[233724]: 2025-11-29 08:38:42.423 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:38:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:42.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:42 np0005539552 nova_compute[233724]: 2025-11-29 08:38:42.455 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:38:42 np0005539552 nova_compute[233724]: 2025-11-29 08:38:42.455 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:38:42 np0005539552 ovn_controller[133798]: 2025-11-29T08:38:42Z|00830|binding|INFO|Releasing lport dac731d0-69cc-4042-8450-f886e5854f80 from this chassis (sb_readonly=0)
Nov 29 03:38:42 np0005539552 nova_compute[233724]: 2025-11-29 08:38:42.602 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:38:42 np0005539552 nova_compute[233724]: 2025-11-29 08:38:42.711 233728 DEBUG nova.network.neutron [req-4dc7782c-371d-4755-bb45-d525a2a1104b req-bd4b5785-6ada-48d0-9566-7f66b1923be6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Updated VIF entry in instance network info cache for port 2769598c-70f6-487d-b82b-5c1920b0e91d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:38:42 np0005539552 nova_compute[233724]: 2025-11-29 08:38:42.712 233728 DEBUG nova.network.neutron [req-4dc7782c-371d-4755-bb45-d525a2a1104b req-bd4b5785-6ada-48d0-9566-7f66b1923be6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Updating instance_info_cache with network_info: [{"id": "2769598c-70f6-487d-b82b-5c1920b0e91d", "address": "fa:16:3e:96:d3:7a", "network": {"id": "0430aba5-d0d7-4d98-ad87-552e6639c190", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1917226298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e30b74e6449dd90ecb667977d1fe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2769598c-70", "ovs_interfaceid": "2769598c-70f6-487d-b82b-5c1920b0e91d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:38:42 np0005539552 nova_compute[233724]: 2025-11-29 08:38:42.747 233728 DEBUG oslo_concurrency.lockutils [req-4dc7782c-371d-4755-bb45-d525a2a1104b req-bd4b5785-6ada-48d0-9566-7f66b1923be6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-608e1017-9da7-4ba6-a346-f047562d380b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:38:42 np0005539552 nova_compute[233724]: 2025-11-29 08:38:42.748 233728 DEBUG oslo_concurrency.lockutils [req-e055265e-6a74-4409-b858-cb5a50361420 req-460e4679-20e5-4b1c-971c-b9e5eb9fcd38 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-608e1017-9da7-4ba6-a346-f047562d380b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:38:42 np0005539552 nova_compute[233724]: 2025-11-29 08:38:42.748 233728 DEBUG nova.network.neutron [req-e055265e-6a74-4409-b858-cb5a50361420 req-460e4679-20e5-4b1c-971c-b9e5eb9fcd38 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Refreshing network info cache for port 2769598c-70f6-487d-b82b-5c1920b0e91d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:38:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:42.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:43 np0005539552 nova_compute[233724]: 2025-11-29 08:38:43.323 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:38:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:44 np0005539552 nova_compute[233724]: 2025-11-29 08:38:44.231 233728 DEBUG nova.compute.manager [req-7fcb9654-2072-4a9d-8e7e-dc814bfb7d3f req-8612366c-d983-40e3-ad18-39e18ac02355 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Received event network-changed-2769598c-70f6-487d-b82b-5c1920b0e91d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:38:44 np0005539552 nova_compute[233724]: 2025-11-29 08:38:44.232 233728 DEBUG nova.compute.manager [req-7fcb9654-2072-4a9d-8e7e-dc814bfb7d3f req-8612366c-d983-40e3-ad18-39e18ac02355 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Refreshing instance network info cache due to event network-changed-2769598c-70f6-487d-b82b-5c1920b0e91d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:38:44 np0005539552 nova_compute[233724]: 2025-11-29 08:38:44.232 233728 DEBUG oslo_concurrency.lockutils [req-7fcb9654-2072-4a9d-8e7e-dc814bfb7d3f req-8612366c-d983-40e3-ad18-39e18ac02355 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-608e1017-9da7-4ba6-a346-f047562d380b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:38:44 np0005539552 nova_compute[233724]: 2025-11-29 08:38:44.344 233728 DEBUG nova.network.neutron [req-e055265e-6a74-4409-b858-cb5a50361420 req-460e4679-20e5-4b1c-971c-b9e5eb9fcd38 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Updated VIF entry in instance network info cache for port 2769598c-70f6-487d-b82b-5c1920b0e91d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:38:44 np0005539552 nova_compute[233724]: 2025-11-29 08:38:44.345 233728 DEBUG nova.network.neutron [req-e055265e-6a74-4409-b858-cb5a50361420 req-460e4679-20e5-4b1c-971c-b9e5eb9fcd38 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Updating instance_info_cache with network_info: [{"id": "2769598c-70f6-487d-b82b-5c1920b0e91d", "address": "fa:16:3e:96:d3:7a", "network": {"id": "0430aba5-d0d7-4d98-ad87-552e6639c190", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1917226298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e30b74e6449dd90ecb667977d1fe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2769598c-70", "ovs_interfaceid": "2769598c-70f6-487d-b82b-5c1920b0e91d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:38:44 np0005539552 nova_compute[233724]: 2025-11-29 08:38:44.363 233728 DEBUG oslo_concurrency.lockutils [req-e055265e-6a74-4409-b858-cb5a50361420 req-460e4679-20e5-4b1c-971c-b9e5eb9fcd38 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-608e1017-9da7-4ba6-a346-f047562d380b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:38:44 np0005539552 nova_compute[233724]: 2025-11-29 08:38:44.364 233728 DEBUG oslo_concurrency.lockutils [req-7fcb9654-2072-4a9d-8e7e-dc814bfb7d3f req-8612366c-d983-40e3-ad18-39e18ac02355 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-608e1017-9da7-4ba6-a346-f047562d380b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:38:44 np0005539552 nova_compute[233724]: 2025-11-29 08:38:44.364 233728 DEBUG nova.network.neutron [req-7fcb9654-2072-4a9d-8e7e-dc814bfb7d3f req-8612366c-d983-40e3-ad18-39e18ac02355 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Refreshing network info cache for port 2769598c-70f6-487d-b82b-5c1920b0e91d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:38:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:44.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:44 np0005539552 ovn_controller[133798]: 2025-11-29T08:38:44Z|00108|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:96:d3:7a 10.100.0.12
Nov 29 03:38:44 np0005539552 ovn_controller[133798]: 2025-11-29T08:38:44Z|00109|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:96:d3:7a 10.100.0.12
Nov 29 03:38:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:44.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:45 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e385 e385: 3 total, 3 up, 3 in
Nov 29 03:38:45 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #127. Immutable memtables: 0.
Nov 29 03:38:45 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:38:45.215363) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:38:45 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 127
Nov 29 03:38:45 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405525215459, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 857, "num_deletes": 254, "total_data_size": 1543074, "memory_usage": 1572144, "flush_reason": "Manual Compaction"}
Nov 29 03:38:45 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #128: started
Nov 29 03:38:45 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405525225125, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 128, "file_size": 776540, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 62835, "largest_seqno": 63687, "table_properties": {"data_size": 772681, "index_size": 1574, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 10208, "raw_average_key_size": 21, "raw_value_size": 764573, "raw_average_value_size": 1623, "num_data_blocks": 67, "num_entries": 471, "num_filter_entries": 471, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405485, "oldest_key_time": 1764405485, "file_creation_time": 1764405525, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:38:45 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 9899 microseconds, and 5181 cpu microseconds.
Nov 29 03:38:45 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:38:45 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:38:45.225265) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #128: 776540 bytes OK
Nov 29 03:38:45 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:38:45.225287) [db/memtable_list.cc:519] [default] Level-0 commit table #128 started
Nov 29 03:38:45 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:38:45.227142) [db/memtable_list.cc:722] [default] Level-0 commit table #128: memtable #1 done
Nov 29 03:38:45 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:38:45.227165) EVENT_LOG_v1 {"time_micros": 1764405525227159, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:38:45 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:38:45.227186) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:38:45 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 1538561, prev total WAL file size 1538561, number of live WAL files 2.
Nov 29 03:38:45 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000124.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:38:45 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:38:45.228160) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303033' seq:72057594037927935, type:22 .. '6D6772737461740032323535' seq:0, type:0; will stop at (end)
Nov 29 03:38:45 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:38:45 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [128(758KB)], [126(12MB)]
Nov 29 03:38:45 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405525228196, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [128], "files_L6": [126], "score": -1, "input_data_size": 14172182, "oldest_snapshot_seqno": -1}
Nov 29 03:38:45 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #129: 9174 keys, 10517633 bytes, temperature: kUnknown
Nov 29 03:38:45 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405525327845, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 129, "file_size": 10517633, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10460497, "index_size": 33087, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22981, "raw_key_size": 242883, "raw_average_key_size": 26, "raw_value_size": 10301119, "raw_average_value_size": 1122, "num_data_blocks": 1252, "num_entries": 9174, "num_filter_entries": 9174, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764405525, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 129, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:38:45 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:38:45 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:38:45.328153) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 10517633 bytes
Nov 29 03:38:45 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:38:45.329510) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 142.1 rd, 105.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 12.8 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(31.8) write-amplify(13.5) OK, records in: 9686, records dropped: 512 output_compression: NoCompression
Nov 29 03:38:45 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:38:45.329539) EVENT_LOG_v1 {"time_micros": 1764405525329526, "job": 80, "event": "compaction_finished", "compaction_time_micros": 99723, "compaction_time_cpu_micros": 47245, "output_level": 6, "num_output_files": 1, "total_output_size": 10517633, "num_input_records": 9686, "num_output_records": 9174, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:38:45 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:38:45 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405525329918, "job": 80, "event": "table_file_deletion", "file_number": 128}
Nov 29 03:38:45 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000126.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:38:45 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405525334122, "job": 80, "event": "table_file_deletion", "file_number": 126}
Nov 29 03:38:45 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:38:45.228027) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:38:45 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:38:45.334207) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:38:45 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:38:45.334216) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:38:45 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:38:45.334219) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:38:45 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:38:45.334222) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:38:45 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:38:45.334225) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:38:45 np0005539552 nova_compute[233724]: 2025-11-29 08:38:45.615 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:38:46 np0005539552 nova_compute[233724]: 2025-11-29 08:38:46.428 233728 DEBUG nova.network.neutron [req-7fcb9654-2072-4a9d-8e7e-dc814bfb7d3f req-8612366c-d983-40e3-ad18-39e18ac02355 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Updated VIF entry in instance network info cache for port 2769598c-70f6-487d-b82b-5c1920b0e91d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:38:46 np0005539552 nova_compute[233724]: 2025-11-29 08:38:46.429 233728 DEBUG nova.network.neutron [req-7fcb9654-2072-4a9d-8e7e-dc814bfb7d3f req-8612366c-d983-40e3-ad18-39e18ac02355 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Updating instance_info_cache with network_info: [{"id": "2769598c-70f6-487d-b82b-5c1920b0e91d", "address": "fa:16:3e:96:d3:7a", "network": {"id": "0430aba5-d0d7-4d98-ad87-552e6639c190", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1917226298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e30b74e6449dd90ecb667977d1fe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2769598c-70", "ovs_interfaceid": "2769598c-70f6-487d-b82b-5c1920b0e91d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:38:46 np0005539552 nova_compute[233724]: 2025-11-29 08:38:46.442 233728 DEBUG oslo_concurrency.lockutils [req-7fcb9654-2072-4a9d-8e7e-dc814bfb7d3f req-8612366c-d983-40e3-ad18-39e18ac02355 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-608e1017-9da7-4ba6-a346-f047562d380b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:38:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:46.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:46.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:47 np0005539552 nova_compute[233724]: 2025-11-29 08:38:47.036 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:38:47 np0005539552 nova_compute[233724]: 2025-11-29 08:38:47.794 233728 DEBUG nova.compute.manager [req-0c0925df-70e7-425d-a6b8-797d69ff1c0a req-95680b1f-226e-4ef9-9d25-99a3fccfba3d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Received event network-changed-2769598c-70f6-487d-b82b-5c1920b0e91d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:38:47 np0005539552 nova_compute[233724]: 2025-11-29 08:38:47.794 233728 DEBUG nova.compute.manager [req-0c0925df-70e7-425d-a6b8-797d69ff1c0a req-95680b1f-226e-4ef9-9d25-99a3fccfba3d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Refreshing instance network info cache due to event network-changed-2769598c-70f6-487d-b82b-5c1920b0e91d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:38:47 np0005539552 nova_compute[233724]: 2025-11-29 08:38:47.795 233728 DEBUG oslo_concurrency.lockutils [req-0c0925df-70e7-425d-a6b8-797d69ff1c0a req-95680b1f-226e-4ef9-9d25-99a3fccfba3d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-608e1017-9da7-4ba6-a346-f047562d380b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:38:47 np0005539552 nova_compute[233724]: 2025-11-29 08:38:47.795 233728 DEBUG oslo_concurrency.lockutils [req-0c0925df-70e7-425d-a6b8-797d69ff1c0a req-95680b1f-226e-4ef9-9d25-99a3fccfba3d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-608e1017-9da7-4ba6-a346-f047562d380b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:38:47 np0005539552 nova_compute[233724]: 2025-11-29 08:38:47.796 233728 DEBUG nova.network.neutron [req-0c0925df-70e7-425d-a6b8-797d69ff1c0a req-95680b1f-226e-4ef9-9d25-99a3fccfba3d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Refreshing network info cache for port 2769598c-70f6-487d-b82b-5c1920b0e91d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:38:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:48.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:48.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:49 np0005539552 nova_compute[233724]: 2025-11-29 08:38:49.350 233728 DEBUG nova.network.neutron [req-0c0925df-70e7-425d-a6b8-797d69ff1c0a req-95680b1f-226e-4ef9-9d25-99a3fccfba3d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Updated VIF entry in instance network info cache for port 2769598c-70f6-487d-b82b-5c1920b0e91d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:38:49 np0005539552 nova_compute[233724]: 2025-11-29 08:38:49.351 233728 DEBUG nova.network.neutron [req-0c0925df-70e7-425d-a6b8-797d69ff1c0a req-95680b1f-226e-4ef9-9d25-99a3fccfba3d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Updating instance_info_cache with network_info: [{"id": "2769598c-70f6-487d-b82b-5c1920b0e91d", "address": "fa:16:3e:96:d3:7a", "network": {"id": "0430aba5-d0d7-4d98-ad87-552e6639c190", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1917226298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e30b74e6449dd90ecb667977d1fe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2769598c-70", "ovs_interfaceid": "2769598c-70f6-487d-b82b-5c1920b0e91d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:38:49 np0005539552 nova_compute[233724]: 2025-11-29 08:38:49.369 233728 DEBUG oslo_concurrency.lockutils [req-0c0925df-70e7-425d-a6b8-797d69ff1c0a req-95680b1f-226e-4ef9-9d25-99a3fccfba3d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-608e1017-9da7-4ba6-a346-f047562d380b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:38:49 np0005539552 nova_compute[233724]: 2025-11-29 08:38:49.455 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:38:49 np0005539552 nova_compute[233724]: 2025-11-29 08:38:49.456 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:38:49 np0005539552 nova_compute[233724]: 2025-11-29 08:38:49.456 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:38:49 np0005539552 nova_compute[233724]: 2025-11-29 08:38:49.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:38:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:50.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:50 np0005539552 nova_compute[233724]: 2025-11-29 08:38:50.616 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:50.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:50 np0005539552 nova_compute[233724]: 2025-11-29 08:38:50.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:38:51 np0005539552 nova_compute[233724]: 2025-11-29 08:38:51.448 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:51 np0005539552 nova_compute[233724]: 2025-11-29 08:38:51.692 233728 DEBUG oslo_concurrency.lockutils [None req-725c0376-dcf1-4285-8cdc-8f3ae09381c4 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Acquiring lock "608e1017-9da7-4ba6-a346-f047562d380b" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:51 np0005539552 nova_compute[233724]: 2025-11-29 08:38:51.693 233728 DEBUG oslo_concurrency.lockutils [None req-725c0376-dcf1-4285-8cdc-8f3ae09381c4 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lock "608e1017-9da7-4ba6-a346-f047562d380b" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:51 np0005539552 nova_compute[233724]: 2025-11-29 08:38:51.694 233728 INFO nova.compute.manager [None req-725c0376-dcf1-4285-8cdc-8f3ae09381c4 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Rebooting instance#033[00m
Nov 29 03:38:51 np0005539552 nova_compute[233724]: 2025-11-29 08:38:51.718 233728 DEBUG oslo_concurrency.lockutils [None req-725c0376-dcf1-4285-8cdc-8f3ae09381c4 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Acquiring lock "refresh_cache-608e1017-9da7-4ba6-a346-f047562d380b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:38:51 np0005539552 nova_compute[233724]: 2025-11-29 08:38:51.719 233728 DEBUG oslo_concurrency.lockutils [None req-725c0376-dcf1-4285-8cdc-8f3ae09381c4 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Acquired lock "refresh_cache-608e1017-9da7-4ba6-a346-f047562d380b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:38:51 np0005539552 nova_compute[233724]: 2025-11-29 08:38:51.720 233728 DEBUG nova.network.neutron [None req-725c0376-dcf1-4285-8cdc-8f3ae09381c4 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:38:51 np0005539552 nova_compute[233724]: 2025-11-29 08:38:51.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:38:51 np0005539552 nova_compute[233724]: 2025-11-29 08:38:51.925 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:38:52 np0005539552 nova_compute[233724]: 2025-11-29 08:38:52.039 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:52.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:52.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:52 np0005539552 nova_compute[233724]: 2025-11-29 08:38:52.921 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:38:53 np0005539552 nova_compute[233724]: 2025-11-29 08:38:53.389 233728 DEBUG nova.network.neutron [None req-725c0376-dcf1-4285-8cdc-8f3ae09381c4 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Updating instance_info_cache with network_info: [{"id": "2769598c-70f6-487d-b82b-5c1920b0e91d", "address": "fa:16:3e:96:d3:7a", "network": {"id": "0430aba5-d0d7-4d98-ad87-552e6639c190", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1917226298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e30b74e6449dd90ecb667977d1fe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2769598c-70", "ovs_interfaceid": "2769598c-70f6-487d-b82b-5c1920b0e91d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:38:53 np0005539552 nova_compute[233724]: 2025-11-29 08:38:53.416 233728 DEBUG oslo_concurrency.lockutils [None req-725c0376-dcf1-4285-8cdc-8f3ae09381c4 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Releasing lock "refresh_cache-608e1017-9da7-4ba6-a346-f047562d380b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:38:53 np0005539552 nova_compute[233724]: 2025-11-29 08:38:53.418 233728 DEBUG nova.compute.manager [None req-725c0376-dcf1-4285-8cdc-8f3ae09381c4 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:38:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:54.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:54.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:55 np0005539552 nova_compute[233724]: 2025-11-29 08:38:55.619 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:55 np0005539552 kernel: tap2769598c-70 (unregistering): left promiscuous mode
Nov 29 03:38:55 np0005539552 NetworkManager[48926]: <info>  [1764405535.9794] device (tap2769598c-70): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:38:55 np0005539552 nova_compute[233724]: 2025-11-29 08:38:55.988 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:55 np0005539552 ovn_controller[133798]: 2025-11-29T08:38:55Z|00831|binding|INFO|Releasing lport 2769598c-70f6-487d-b82b-5c1920b0e91d from this chassis (sb_readonly=0)
Nov 29 03:38:55 np0005539552 ovn_controller[133798]: 2025-11-29T08:38:55Z|00832|binding|INFO|Setting lport 2769598c-70f6-487d-b82b-5c1920b0e91d down in Southbound
Nov 29 03:38:55 np0005539552 ovn_controller[133798]: 2025-11-29T08:38:55Z|00833|binding|INFO|Removing iface tap2769598c-70 ovn-installed in OVS
Nov 29 03:38:55 np0005539552 nova_compute[233724]: 2025-11-29 08:38:55.992 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:56.005 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:d3:7a 10.100.0.12'], port_security=['fa:16:3e:96:d3:7a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '608e1017-9da7-4ba6-a346-f047562d380b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0430aba5-d0d7-4d98-ad87-552e6639c190', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8d5e30b74e6449dd90ecb667977d1fe9', 'neutron:revision_number': '5', 'neutron:security_group_ids': '52d8fe9a-0c55-4eea-ab3d-17059ad4962d f54021b2-bca9-4a95-baa2-be07d8727376', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.216'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75451e9b-c915-4a2b-97ed-6cc2296328f6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=2769598c-70f6-487d-b82b-5c1920b0e91d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:38:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:56.008 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 2769598c-70f6-487d-b82b-5c1920b0e91d in datapath 0430aba5-d0d7-4d98-ad87-552e6639c190 unbound from our chassis#033[00m
Nov 29 03:38:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:56.011 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0430aba5-d0d7-4d98-ad87-552e6639c190, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:38:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:56.013 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b907a306-965b-447e-92d2-dae39a506e79]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:56.014 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190 namespace which is not needed anymore#033[00m
Nov 29 03:38:56 np0005539552 nova_compute[233724]: 2025-11-29 08:38:56.038 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:56 np0005539552 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d000000b7.scope: Deactivated successfully.
Nov 29 03:38:56 np0005539552 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d000000b7.scope: Consumed 15.402s CPU time.
Nov 29 03:38:56 np0005539552 systemd-machined[196379]: Machine qemu-84-instance-000000b7 terminated.
Nov 29 03:38:56 np0005539552 nova_compute[233724]: 2025-11-29 08:38:56.062 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:56 np0005539552 neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190[310085]: [NOTICE]   (310090) : haproxy version is 2.8.14-c23fe91
Nov 29 03:38:56 np0005539552 neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190[310085]: [NOTICE]   (310090) : path to executable is /usr/sbin/haproxy
Nov 29 03:38:56 np0005539552 neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190[310085]: [WARNING]  (310090) : Exiting Master process...
Nov 29 03:38:56 np0005539552 neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190[310085]: [ALERT]    (310090) : Current worker (310092) exited with code 143 (Terminated)
Nov 29 03:38:56 np0005539552 neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190[310085]: [WARNING]  (310090) : All workers exited. Exiting... (0)
Nov 29 03:38:56 np0005539552 systemd[1]: libpod-5b9305a8d008cd4847f1c3ac0eaa3ed24e5743dfe5f44989e7f52582620208df.scope: Deactivated successfully.
Nov 29 03:38:56 np0005539552 podman[310333]: 2025-11-29 08:38:56.187925451 +0000 UTC m=+0.071808394 container died 5b9305a8d008cd4847f1c3ac0eaa3ed24e5743dfe5f44989e7f52582620208df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 03:38:56 np0005539552 nova_compute[233724]: 2025-11-29 08:38:56.392 233728 DEBUG nova.compute.manager [req-7a0097d2-6c71-4b23-a879-a65274605d12 req-5638325a-13ab-4564-b23b-bc735ab7ad3d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Received event network-vif-unplugged-2769598c-70f6-487d-b82b-5c1920b0e91d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:38:56 np0005539552 nova_compute[233724]: 2025-11-29 08:38:56.393 233728 DEBUG oslo_concurrency.lockutils [req-7a0097d2-6c71-4b23-a879-a65274605d12 req-5638325a-13ab-4564-b23b-bc735ab7ad3d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "608e1017-9da7-4ba6-a346-f047562d380b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:56 np0005539552 nova_compute[233724]: 2025-11-29 08:38:56.393 233728 DEBUG oslo_concurrency.lockutils [req-7a0097d2-6c71-4b23-a879-a65274605d12 req-5638325a-13ab-4564-b23b-bc735ab7ad3d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "608e1017-9da7-4ba6-a346-f047562d380b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:56 np0005539552 nova_compute[233724]: 2025-11-29 08:38:56.393 233728 DEBUG oslo_concurrency.lockutils [req-7a0097d2-6c71-4b23-a879-a65274605d12 req-5638325a-13ab-4564-b23b-bc735ab7ad3d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "608e1017-9da7-4ba6-a346-f047562d380b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:56 np0005539552 nova_compute[233724]: 2025-11-29 08:38:56.393 233728 DEBUG nova.compute.manager [req-7a0097d2-6c71-4b23-a879-a65274605d12 req-5638325a-13ab-4564-b23b-bc735ab7ad3d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] No waiting events found dispatching network-vif-unplugged-2769598c-70f6-487d-b82b-5c1920b0e91d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:38:56 np0005539552 nova_compute[233724]: 2025-11-29 08:38:56.393 233728 WARNING nova.compute.manager [req-7a0097d2-6c71-4b23-a879-a65274605d12 req-5638325a-13ab-4564-b23b-bc735ab7ad3d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Received unexpected event network-vif-unplugged-2769598c-70f6-487d-b82b-5c1920b0e91d for instance with vm_state active and task_state reboot_started.#033[00m
Nov 29 03:38:56 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5b9305a8d008cd4847f1c3ac0eaa3ed24e5743dfe5f44989e7f52582620208df-userdata-shm.mount: Deactivated successfully.
Nov 29 03:38:56 np0005539552 systemd[1]: var-lib-containers-storage-overlay-99c34122b6419731832b3aa3314012427556a335b7cf3376f5061b71524eced0-merged.mount: Deactivated successfully.
Nov 29 03:38:56 np0005539552 podman[310333]: 2025-11-29 08:38:56.434079247 +0000 UTC m=+0.317962180 container cleanup 5b9305a8d008cd4847f1c3ac0eaa3ed24e5743dfe5f44989e7f52582620208df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 03:38:56 np0005539552 systemd[1]: libpod-conmon-5b9305a8d008cd4847f1c3ac0eaa3ed24e5743dfe5f44989e7f52582620208df.scope: Deactivated successfully.
Nov 29 03:38:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:56.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:56 np0005539552 podman[310374]: 2025-11-29 08:38:56.507057341 +0000 UTC m=+0.049639747 container remove 5b9305a8d008cd4847f1c3ac0eaa3ed24e5743dfe5f44989e7f52582620208df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 03:38:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:56.518 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[8a201c40-3859-4fa9-b38a-8cdab7549fb1]: (4, ('Sat Nov 29 08:38:56 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190 (5b9305a8d008cd4847f1c3ac0eaa3ed24e5743dfe5f44989e7f52582620208df)\n5b9305a8d008cd4847f1c3ac0eaa3ed24e5743dfe5f44989e7f52582620208df\nSat Nov 29 08:38:56 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190 (5b9305a8d008cd4847f1c3ac0eaa3ed24e5743dfe5f44989e7f52582620208df)\n5b9305a8d008cd4847f1c3ac0eaa3ed24e5743dfe5f44989e7f52582620208df\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:56.519 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[007efc9a-a51f-4657-ba34-fab6a97e9793]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:56.520 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0430aba5-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:56 np0005539552 kernel: tap0430aba5-d0: left promiscuous mode
Nov 29 03:38:56 np0005539552 nova_compute[233724]: 2025-11-29 08:38:56.522 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:56 np0005539552 nova_compute[233724]: 2025-11-29 08:38:56.538 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:56.541 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[545fc41e-5dea-4ecd-b35c-262d6d0f97d8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:56.557 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ae7e9eab-ba22-45c6-bd96-68f74294d109]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:56.558 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f5c2ed02-f71d-4dbe-b57e-f8a2ed0625b9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:56.573 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[243ce641-e7e0-44b4-87aa-d7fc756c89ca]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 855351, 'reachable_time': 15795, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310392, 'error': None, 'target': 'ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:56.576 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:38:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:56.577 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[22ed4b72-2def-49d9-af33-49dad999842e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:56 np0005539552 systemd[1]: run-netns-ovnmeta\x2d0430aba5\x2dd0d7\x2d4d98\x2dad87\x2d552e6639c190.mount: Deactivated successfully.
Nov 29 03:38:56 np0005539552 nova_compute[233724]: 2025-11-29 08:38:56.608 233728 INFO nova.virt.libvirt.driver [None req-725c0376-dcf1-4285-8cdc-8f3ae09381c4 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Instance shutdown successfully.#033[00m
Nov 29 03:38:56 np0005539552 kernel: tap2769598c-70: entered promiscuous mode
Nov 29 03:38:56 np0005539552 systemd-udevd[310315]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:38:56 np0005539552 NetworkManager[48926]: <info>  [1764405536.6712] manager: (tap2769598c-70): new Tun device (/org/freedesktop/NetworkManager/Devices/367)
Nov 29 03:38:56 np0005539552 ovn_controller[133798]: 2025-11-29T08:38:56Z|00834|binding|INFO|Claiming lport 2769598c-70f6-487d-b82b-5c1920b0e91d for this chassis.
Nov 29 03:38:56 np0005539552 ovn_controller[133798]: 2025-11-29T08:38:56Z|00835|binding|INFO|2769598c-70f6-487d-b82b-5c1920b0e91d: Claiming fa:16:3e:96:d3:7a 10.100.0.12
Nov 29 03:38:56 np0005539552 nova_compute[233724]: 2025-11-29 08:38:56.671 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:56 np0005539552 NetworkManager[48926]: <info>  [1764405536.6854] device (tap2769598c-70): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:38:56 np0005539552 NetworkManager[48926]: <info>  [1764405536.6861] device (tap2769598c-70): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:38:56 np0005539552 ovn_controller[133798]: 2025-11-29T08:38:56Z|00836|binding|INFO|Setting lport 2769598c-70f6-487d-b82b-5c1920b0e91d ovn-installed in OVS
Nov 29 03:38:56 np0005539552 nova_compute[233724]: 2025-11-29 08:38:56.707 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:56 np0005539552 systemd-machined[196379]: New machine qemu-85-instance-000000b7.
Nov 29 03:38:56 np0005539552 nova_compute[233724]: 2025-11-29 08:38:56.713 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:56 np0005539552 systemd[1]: Started Virtual Machine qemu-85-instance-000000b7.
Nov 29 03:38:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:56.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:56 np0005539552 ovn_controller[133798]: 2025-11-29T08:38:56Z|00837|binding|INFO|Setting lport 2769598c-70f6-487d-b82b-5c1920b0e91d up in Southbound
Nov 29 03:38:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:56.879 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:d3:7a 10.100.0.12'], port_security=['fa:16:3e:96:d3:7a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '608e1017-9da7-4ba6-a346-f047562d380b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0430aba5-d0d7-4d98-ad87-552e6639c190', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8d5e30b74e6449dd90ecb667977d1fe9', 'neutron:revision_number': '6', 'neutron:security_group_ids': '52d8fe9a-0c55-4eea-ab3d-17059ad4962d f54021b2-bca9-4a95-baa2-be07d8727376', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.216'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75451e9b-c915-4a2b-97ed-6cc2296328f6, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=2769598c-70f6-487d-b82b-5c1920b0e91d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:38:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:56.880 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 2769598c-70f6-487d-b82b-5c1920b0e91d in datapath 0430aba5-d0d7-4d98-ad87-552e6639c190 bound to our chassis#033[00m
Nov 29 03:38:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:56.882 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0430aba5-d0d7-4d98-ad87-552e6639c190#033[00m
Nov 29 03:38:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:56.894 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b864e391-cb9e-46d9-a979-0eaf508dbaf6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:56.895 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0430aba5-d1 in ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:38:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:56.897 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0430aba5-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:38:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:56.897 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[1c1c64b4-3b43-4b83-872d-d37b29990b62]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:56.898 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[eb1ccedb-3d79-4c9e-aba2-4148fe8bafea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:56.917 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[d8859a7c-57a9-4949-8181-0e92d693c49f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:56.943 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[8ba6ef81-4199-468d-a2dd-8fa5491f7d88]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:57 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:57.006 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[fa61258b-02f0-4698-8823-5835e67df889]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:57 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:57.017 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[32f817c9-2e68-4d73-9ef7-b95538400ff0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:57 np0005539552 NetworkManager[48926]: <info>  [1764405537.0193] manager: (tap0430aba5-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/368)
Nov 29 03:38:57 np0005539552 nova_compute[233724]: 2025-11-29 08:38:57.041 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:57 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:57.057 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[344b8d6f-6c8a-4592-8662-540fee89edd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:57 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:57.062 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[215a5cc5-725e-46f2-b18a-8f445becd863]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:57 np0005539552 NetworkManager[48926]: <info>  [1764405537.1011] device (tap0430aba5-d0): carrier: link connected
Nov 29 03:38:57 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:57.106 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[4cb6fe11-3d97-4b4c-b7ef-ae57880bb82b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:57 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:57.126 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[16995624-8682-4654-8215-05ae45d62faf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0430aba5-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5c:fd:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 246], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 858278, 'reachable_time': 17874, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310476, 'error': None, 'target': 'ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:57 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:57.146 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[42ad8fdd-ef2b-4f2e-af12-31b75704011c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5c:fd4a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 858278, 'tstamp': 858278}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310492, 'error': None, 'target': 'ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:57 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:57.167 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f6f6f565-9f96-4323-903c-9c56139e0af0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0430aba5-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5c:fd:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 246], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 858278, 'reachable_time': 17874, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310496, 'error': None, 'target': 'ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:57 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:57.201 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[bf29f752-ecc5-4a10-ba94-ba17d9484339]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:57 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:57.274 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c2aa4a44-438b-4e89-abdb-08303d18052d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:57 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:57.276 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0430aba5-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:57 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:57.277 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:38:57 np0005539552 nova_compute[233724]: 2025-11-29 08:38:57.276 233728 DEBUG nova.virt.libvirt.host [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Removed pending event for 608e1017-9da7-4ba6-a346-f047562d380b due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:38:57 np0005539552 nova_compute[233724]: 2025-11-29 08:38:57.277 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405537.2764027, 608e1017-9da7-4ba6-a346-f047562d380b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:38:57 np0005539552 nova_compute[233724]: 2025-11-29 08:38:57.277 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:38:57 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:57.277 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0430aba5-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:57 np0005539552 nova_compute[233724]: 2025-11-29 08:38:57.279 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:57 np0005539552 NetworkManager[48926]: <info>  [1764405537.2805] manager: (tap0430aba5-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/369)
Nov 29 03:38:57 np0005539552 kernel: tap0430aba5-d0: entered promiscuous mode
Nov 29 03:38:57 np0005539552 nova_compute[233724]: 2025-11-29 08:38:57.282 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:57 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:57.283 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0430aba5-d0, col_values=(('external_ids', {'iface-id': 'dac731d0-69cc-4042-8450-f886e5854f80'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:57 np0005539552 nova_compute[233724]: 2025-11-29 08:38:57.285 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:57 np0005539552 ovn_controller[133798]: 2025-11-29T08:38:57Z|00838|binding|INFO|Releasing lport dac731d0-69cc-4042-8450-f886e5854f80 from this chassis (sb_readonly=0)
Nov 29 03:38:57 np0005539552 nova_compute[233724]: 2025-11-29 08:38:57.285 233728 INFO nova.virt.libvirt.driver [-] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Instance running successfully.#033[00m
Nov 29 03:38:57 np0005539552 nova_compute[233724]: 2025-11-29 08:38:57.286 233728 INFO nova.virt.libvirt.driver [None req-725c0376-dcf1-4285-8cdc-8f3ae09381c4 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Instance soft rebooted successfully.#033[00m
Nov 29 03:38:57 np0005539552 nova_compute[233724]: 2025-11-29 08:38:57.286 233728 DEBUG nova.compute.manager [None req-725c0376-dcf1-4285-8cdc-8f3ae09381c4 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:38:57 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:57.288 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0430aba5-d0d7-4d98-ad87-552e6639c190.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0430aba5-d0d7-4d98-ad87-552e6639c190.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:38:57 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:57.290 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c4541de6-e163-4940-b462-ba79a7f30507]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:57 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:57.291 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:38:57 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:38:57 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:38:57 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-0430aba5-d0d7-4d98-ad87-552e6639c190
Nov 29 03:38:57 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:38:57 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:38:57 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:38:57 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/0430aba5-d0d7-4d98-ad87-552e6639c190.pid.haproxy
Nov 29 03:38:57 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:38:57 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:38:57 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:38:57 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:38:57 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:38:57 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:38:57 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:38:57 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:38:57 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:38:57 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:38:57 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:38:57 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:38:57 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:38:57 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:38:57 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:38:57 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:38:57 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:38:57 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:38:57 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:38:57 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:38:57 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 0430aba5-d0d7-4d98-ad87-552e6639c190
Nov 29 03:38:57 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:38:57 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:38:57.292 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190', 'env', 'PROCESS_TAG=haproxy-0430aba5-d0d7-4d98-ad87-552e6639c190', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0430aba5-d0d7-4d98-ad87-552e6639c190.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:38:57 np0005539552 nova_compute[233724]: 2025-11-29 08:38:57.299 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:57 np0005539552 nova_compute[233724]: 2025-11-29 08:38:57.314 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:38:57 np0005539552 nova_compute[233724]: 2025-11-29 08:38:57.319 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:38:57 np0005539552 nova_compute[233724]: 2025-11-29 08:38:57.358 233728 DEBUG oslo_concurrency.lockutils [None req-725c0376-dcf1-4285-8cdc-8f3ae09381c4 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lock "608e1017-9da7-4ba6-a346-f047562d380b" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 5.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:57 np0005539552 nova_compute[233724]: 2025-11-29 08:38:57.360 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] During sync_power_state the instance has a pending task (reboot_started). Skip.#033[00m
Nov 29 03:38:57 np0005539552 nova_compute[233724]: 2025-11-29 08:38:57.360 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405537.278122, 608e1017-9da7-4ba6-a346-f047562d380b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:38:57 np0005539552 nova_compute[233724]: 2025-11-29 08:38:57.360 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] VM Started (Lifecycle Event)#033[00m
Nov 29 03:38:57 np0005539552 nova_compute[233724]: 2025-11-29 08:38:57.387 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:38:57 np0005539552 nova_compute[233724]: 2025-11-29 08:38:57.392 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:38:57 np0005539552 podman[310534]: 2025-11-29 08:38:57.722555499 +0000 UTC m=+0.065602627 container create 3bff04941459125e7f0affae535ed6d45f09c9fafb0e604aa1779e66d240a778 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 03:38:57 np0005539552 systemd[1]: Started libpod-conmon-3bff04941459125e7f0affae535ed6d45f09c9fafb0e604aa1779e66d240a778.scope.
Nov 29 03:38:57 np0005539552 podman[310534]: 2025-11-29 08:38:57.693659371 +0000 UTC m=+0.036706519 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:38:57 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:38:57 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07eda304fb912086f3215d499bff50ee06a6a4cf0e824fbbdac30fffc943709e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:38:57 np0005539552 podman[310534]: 2025-11-29 08:38:57.833049143 +0000 UTC m=+0.176096361 container init 3bff04941459125e7f0affae535ed6d45f09c9fafb0e604aa1779e66d240a778 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 03:38:57 np0005539552 podman[310534]: 2025-11-29 08:38:57.838791668 +0000 UTC m=+0.181838826 container start 3bff04941459125e7f0affae535ed6d45f09c9fafb0e604aa1779e66d240a778 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true)
Nov 29 03:38:57 np0005539552 neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190[310550]: [NOTICE]   (310554) : New worker (310556) forked
Nov 29 03:38:57 np0005539552 neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190[310550]: [NOTICE]   (310554) : Loading success.
Nov 29 03:38:57 np0005539552 nova_compute[233724]: 2025-11-29 08:38:57.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:38:57 np0005539552 nova_compute[233724]: 2025-11-29 08:38:57.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:38:57 np0005539552 nova_compute[233724]: 2025-11-29 08:38:57.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:38:58 np0005539552 nova_compute[233724]: 2025-11-29 08:38:58.119 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "refresh_cache-608e1017-9da7-4ba6-a346-f047562d380b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:38:58 np0005539552 nova_compute[233724]: 2025-11-29 08:38:58.120 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquired lock "refresh_cache-608e1017-9da7-4ba6-a346-f047562d380b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:38:58 np0005539552 nova_compute[233724]: 2025-11-29 08:38:58.120 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:38:58 np0005539552 nova_compute[233724]: 2025-11-29 08:38:58.120 233728 DEBUG nova.objects.instance [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 608e1017-9da7-4ba6-a346-f047562d380b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:38:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:58.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:58 np0005539552 nova_compute[233724]: 2025-11-29 08:38:58.551 233728 DEBUG nova.compute.manager [req-d7c4a0ed-ad8b-40ba-ad5a-2b38a6d46f22 req-de244f26-20dd-496e-b586-eeab9afe2caa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Received event network-vif-plugged-2769598c-70f6-487d-b82b-5c1920b0e91d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:38:58 np0005539552 nova_compute[233724]: 2025-11-29 08:38:58.551 233728 DEBUG oslo_concurrency.lockutils [req-d7c4a0ed-ad8b-40ba-ad5a-2b38a6d46f22 req-de244f26-20dd-496e-b586-eeab9afe2caa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "608e1017-9da7-4ba6-a346-f047562d380b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:58 np0005539552 nova_compute[233724]: 2025-11-29 08:38:58.551 233728 DEBUG oslo_concurrency.lockutils [req-d7c4a0ed-ad8b-40ba-ad5a-2b38a6d46f22 req-de244f26-20dd-496e-b586-eeab9afe2caa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "608e1017-9da7-4ba6-a346-f047562d380b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:58 np0005539552 nova_compute[233724]: 2025-11-29 08:38:58.552 233728 DEBUG oslo_concurrency.lockutils [req-d7c4a0ed-ad8b-40ba-ad5a-2b38a6d46f22 req-de244f26-20dd-496e-b586-eeab9afe2caa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "608e1017-9da7-4ba6-a346-f047562d380b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:58 np0005539552 nova_compute[233724]: 2025-11-29 08:38:58.552 233728 DEBUG nova.compute.manager [req-d7c4a0ed-ad8b-40ba-ad5a-2b38a6d46f22 req-de244f26-20dd-496e-b586-eeab9afe2caa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] No waiting events found dispatching network-vif-plugged-2769598c-70f6-487d-b82b-5c1920b0e91d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:38:58 np0005539552 nova_compute[233724]: 2025-11-29 08:38:58.552 233728 WARNING nova.compute.manager [req-d7c4a0ed-ad8b-40ba-ad5a-2b38a6d46f22 req-de244f26-20dd-496e-b586-eeab9afe2caa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Received unexpected event network-vif-plugged-2769598c-70f6-487d-b82b-5c1920b0e91d for instance with vm_state active and task_state None.#033[00m
Nov 29 03:38:58 np0005539552 nova_compute[233724]: 2025-11-29 08:38:58.552 233728 DEBUG nova.compute.manager [req-d7c4a0ed-ad8b-40ba-ad5a-2b38a6d46f22 req-de244f26-20dd-496e-b586-eeab9afe2caa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Received event network-vif-plugged-2769598c-70f6-487d-b82b-5c1920b0e91d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:38:58 np0005539552 nova_compute[233724]: 2025-11-29 08:38:58.553 233728 DEBUG oslo_concurrency.lockutils [req-d7c4a0ed-ad8b-40ba-ad5a-2b38a6d46f22 req-de244f26-20dd-496e-b586-eeab9afe2caa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "608e1017-9da7-4ba6-a346-f047562d380b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:58 np0005539552 nova_compute[233724]: 2025-11-29 08:38:58.553 233728 DEBUG oslo_concurrency.lockutils [req-d7c4a0ed-ad8b-40ba-ad5a-2b38a6d46f22 req-de244f26-20dd-496e-b586-eeab9afe2caa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "608e1017-9da7-4ba6-a346-f047562d380b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:58 np0005539552 nova_compute[233724]: 2025-11-29 08:38:58.553 233728 DEBUG oslo_concurrency.lockutils [req-d7c4a0ed-ad8b-40ba-ad5a-2b38a6d46f22 req-de244f26-20dd-496e-b586-eeab9afe2caa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "608e1017-9da7-4ba6-a346-f047562d380b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:58 np0005539552 nova_compute[233724]: 2025-11-29 08:38:58.553 233728 DEBUG nova.compute.manager [req-d7c4a0ed-ad8b-40ba-ad5a-2b38a6d46f22 req-de244f26-20dd-496e-b586-eeab9afe2caa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] No waiting events found dispatching network-vif-plugged-2769598c-70f6-487d-b82b-5c1920b0e91d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:38:58 np0005539552 nova_compute[233724]: 2025-11-29 08:38:58.553 233728 WARNING nova.compute.manager [req-d7c4a0ed-ad8b-40ba-ad5a-2b38a6d46f22 req-de244f26-20dd-496e-b586-eeab9afe2caa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Received unexpected event network-vif-plugged-2769598c-70f6-487d-b82b-5c1920b0e91d for instance with vm_state active and task_state None.#033[00m
Nov 29 03:38:58 np0005539552 nova_compute[233724]: 2025-11-29 08:38:58.554 233728 DEBUG nova.compute.manager [req-d7c4a0ed-ad8b-40ba-ad5a-2b38a6d46f22 req-de244f26-20dd-496e-b586-eeab9afe2caa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Received event network-vif-plugged-2769598c-70f6-487d-b82b-5c1920b0e91d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:38:58 np0005539552 nova_compute[233724]: 2025-11-29 08:38:58.554 233728 DEBUG oslo_concurrency.lockutils [req-d7c4a0ed-ad8b-40ba-ad5a-2b38a6d46f22 req-de244f26-20dd-496e-b586-eeab9afe2caa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "608e1017-9da7-4ba6-a346-f047562d380b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:58 np0005539552 nova_compute[233724]: 2025-11-29 08:38:58.554 233728 DEBUG oslo_concurrency.lockutils [req-d7c4a0ed-ad8b-40ba-ad5a-2b38a6d46f22 req-de244f26-20dd-496e-b586-eeab9afe2caa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "608e1017-9da7-4ba6-a346-f047562d380b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:58 np0005539552 nova_compute[233724]: 2025-11-29 08:38:58.554 233728 DEBUG oslo_concurrency.lockutils [req-d7c4a0ed-ad8b-40ba-ad5a-2b38a6d46f22 req-de244f26-20dd-496e-b586-eeab9afe2caa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "608e1017-9da7-4ba6-a346-f047562d380b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:58 np0005539552 nova_compute[233724]: 2025-11-29 08:38:58.554 233728 DEBUG nova.compute.manager [req-d7c4a0ed-ad8b-40ba-ad5a-2b38a6d46f22 req-de244f26-20dd-496e-b586-eeab9afe2caa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] No waiting events found dispatching network-vif-plugged-2769598c-70f6-487d-b82b-5c1920b0e91d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:38:58 np0005539552 nova_compute[233724]: 2025-11-29 08:38:58.555 233728 WARNING nova.compute.manager [req-d7c4a0ed-ad8b-40ba-ad5a-2b38a6d46f22 req-de244f26-20dd-496e-b586-eeab9afe2caa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Received unexpected event network-vif-plugged-2769598c-70f6-487d-b82b-5c1920b0e91d for instance with vm_state active and task_state None.#033[00m
Nov 29 03:38:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:38:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:58.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:59 np0005539552 nova_compute[233724]: 2025-11-29 08:38:59.543 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Updating instance_info_cache with network_info: [{"id": "2769598c-70f6-487d-b82b-5c1920b0e91d", "address": "fa:16:3e:96:d3:7a", "network": {"id": "0430aba5-d0d7-4d98-ad87-552e6639c190", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1917226298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e30b74e6449dd90ecb667977d1fe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2769598c-70", "ovs_interfaceid": "2769598c-70f6-487d-b82b-5c1920b0e91d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:38:59 np0005539552 nova_compute[233724]: 2025-11-29 08:38:59.559 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Releasing lock "refresh_cache-608e1017-9da7-4ba6-a346-f047562d380b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:38:59 np0005539552 nova_compute[233724]: 2025-11-29 08:38:59.560 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:39:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.004000107s ======
Nov 29 03:39:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:00.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000107s
Nov 29 03:39:00 np0005539552 nova_compute[233724]: 2025-11-29 08:39:00.620 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:00.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:01 np0005539552 nova_compute[233724]: 2025-11-29 08:39:01.493 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:02 np0005539552 nova_compute[233724]: 2025-11-29 08:39:02.044 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:02.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:39:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:02.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:39:02 np0005539552 podman[310569]: 2025-11-29 08:39:02.985376019 +0000 UTC m=+0.069164083 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 03:39:03 np0005539552 podman[310570]: 2025-11-29 08:39:03.000890777 +0000 UTC m=+0.080773665 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 29 03:39:03 np0005539552 podman[310571]: 2025-11-29 08:39:03.032408085 +0000 UTC m=+0.108823900 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Nov 29 03:39:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:04.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:04.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:05 np0005539552 nova_compute[233724]: 2025-11-29 08:39:05.438 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:05 np0005539552 nova_compute[233724]: 2025-11-29 08:39:05.555 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:39:05 np0005539552 nova_compute[233724]: 2025-11-29 08:39:05.623 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:06.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:06.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:07 np0005539552 nova_compute[233724]: 2025-11-29 08:39:07.046 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:07 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e386 e386: 3 total, 3 up, 3 in
Nov 29 03:39:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:08.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:08.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:10.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:10 np0005539552 nova_compute[233724]: 2025-11-29 08:39:10.625 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:10 np0005539552 ovn_controller[133798]: 2025-11-29T08:39:10Z|00110|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:96:d3:7a 10.100.0.12
Nov 29 03:39:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:10.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:12 np0005539552 nova_compute[233724]: 2025-11-29 08:39:12.049 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:12.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:12.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:14.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:14.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:15 np0005539552 nova_compute[233724]: 2025-11-29 08:39:15.627 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:16.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:16.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:17 np0005539552 nova_compute[233724]: 2025-11-29 08:39:17.051 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:18.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:18.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:20.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:20 np0005539552 nova_compute[233724]: 2025-11-29 08:39:20.629 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:20.645 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:20.645 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:20.646 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:20.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:21 np0005539552 nova_compute[233724]: 2025-11-29 08:39:21.632 233728 DEBUG oslo_concurrency.lockutils [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Acquiring lock "ee3de57a-2652-4819-880a-6217c00a67a0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:21 np0005539552 nova_compute[233724]: 2025-11-29 08:39:21.633 233728 DEBUG oslo_concurrency.lockutils [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "ee3de57a-2652-4819-880a-6217c00a67a0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:21 np0005539552 nova_compute[233724]: 2025-11-29 08:39:21.651 233728 DEBUG nova.compute.manager [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:39:21 np0005539552 nova_compute[233724]: 2025-11-29 08:39:21.738 233728 DEBUG oslo_concurrency.lockutils [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:21 np0005539552 nova_compute[233724]: 2025-11-29 08:39:21.739 233728 DEBUG oslo_concurrency.lockutils [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:21 np0005539552 nova_compute[233724]: 2025-11-29 08:39:21.756 233728 DEBUG nova.virt.hardware [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:39:21 np0005539552 nova_compute[233724]: 2025-11-29 08:39:21.756 233728 INFO nova.compute.claims [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:39:21 np0005539552 nova_compute[233724]: 2025-11-29 08:39:21.855 233728 DEBUG oslo_concurrency.processutils [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:22 np0005539552 nova_compute[233724]: 2025-11-29 08:39:22.053 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:39:22 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/807526427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:39:22 np0005539552 nova_compute[233724]: 2025-11-29 08:39:22.339 233728 DEBUG oslo_concurrency.processutils [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:22 np0005539552 nova_compute[233724]: 2025-11-29 08:39:22.346 233728 DEBUG nova.compute.provider_tree [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:39:22 np0005539552 nova_compute[233724]: 2025-11-29 08:39:22.360 233728 DEBUG nova.scheduler.client.report [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:39:22 np0005539552 nova_compute[233724]: 2025-11-29 08:39:22.386 233728 DEBUG oslo_concurrency.lockutils [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:22 np0005539552 nova_compute[233724]: 2025-11-29 08:39:22.387 233728 DEBUG nova.compute.manager [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:39:22 np0005539552 nova_compute[233724]: 2025-11-29 08:39:22.471 233728 DEBUG nova.compute.manager [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:39:22 np0005539552 nova_compute[233724]: 2025-11-29 08:39:22.472 233728 DEBUG nova.network.neutron [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:39:22 np0005539552 nova_compute[233724]: 2025-11-29 08:39:22.490 233728 INFO nova.virt.libvirt.driver [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:39:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:22.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:22 np0005539552 nova_compute[233724]: 2025-11-29 08:39:22.516 233728 DEBUG nova.compute.manager [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:39:22 np0005539552 nova_compute[233724]: 2025-11-29 08:39:22.594 233728 INFO nova.virt.block_device [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Booting with volume 36eafc77-7bfd-44e7-b2f9-bca3cee5c1b3 at /dev/vda#033[00m
Nov 29 03:39:22 np0005539552 nova_compute[233724]: 2025-11-29 08:39:22.751 233728 DEBUG nova.policy [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'facf4db8501041ab9628ff9f5684c992', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '62ca01275fe34ea0af31d00b34d6d9a5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:39:22 np0005539552 nova_compute[233724]: 2025-11-29 08:39:22.768 233728 DEBUG os_brick.utils [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:39:22 np0005539552 nova_compute[233724]: 2025-11-29 08:39:22.770 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:22 np0005539552 nova_compute[233724]: 2025-11-29 08:39:22.787 243418 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:22 np0005539552 nova_compute[233724]: 2025-11-29 08:39:22.787 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[092f4eb0-f032-4dca-9758-c3429f35c182]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:22 np0005539552 nova_compute[233724]: 2025-11-29 08:39:22.789 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:22 np0005539552 nova_compute[233724]: 2025-11-29 08:39:22.801 243418 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:22 np0005539552 nova_compute[233724]: 2025-11-29 08:39:22.801 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[94a5c32c-01e9-46ab-9a4e-e60a55ecefaa]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0e73df9328', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:22 np0005539552 nova_compute[233724]: 2025-11-29 08:39:22.803 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:22 np0005539552 nova_compute[233724]: 2025-11-29 08:39:22.817 243418 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:22 np0005539552 nova_compute[233724]: 2025-11-29 08:39:22.817 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[af3bdbf4-b484-4c42-8863-a38967b3eb72]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:22 np0005539552 nova_compute[233724]: 2025-11-29 08:39:22.819 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[94dcbde3-2552-40ec-ab48-c8c3fe385d51]: (4, '6fbde64a-d978-4f1a-a29d-a77e1f5a1987') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:22 np0005539552 nova_compute[233724]: 2025-11-29 08:39:22.819 233728 DEBUG oslo_concurrency.processutils [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:22.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:22 np0005539552 nova_compute[233724]: 2025-11-29 08:39:22.860 233728 DEBUG oslo_concurrency.processutils [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] CMD "nvme version" returned: 0 in 0.041s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:22 np0005539552 nova_compute[233724]: 2025-11-29 08:39:22.863 233728 DEBUG os_brick.initiator.connectors.lightos [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:39:22 np0005539552 nova_compute[233724]: 2025-11-29 08:39:22.864 233728 DEBUG os_brick.initiator.connectors.lightos [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:39:22 np0005539552 nova_compute[233724]: 2025-11-29 08:39:22.864 233728 DEBUG os_brick.initiator.connectors.lightos [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:39:22 np0005539552 nova_compute[233724]: 2025-11-29 08:39:22.864 233728 DEBUG os_brick.utils [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] <== get_connector_properties: return (95ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0e73df9328', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '6fbde64a-d978-4f1a-a29d-a77e1f5a1987', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:39:22 np0005539552 nova_compute[233724]: 2025-11-29 08:39:22.865 233728 DEBUG nova.virt.block_device [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Updating existing volume attachment record: 3b30ba89-b316-4aa5-bdad-bc6b3c4d20bb _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:39:23 np0005539552 nova_compute[233724]: 2025-11-29 08:39:23.578 233728 DEBUG nova.network.neutron [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Successfully created port: dfb097a1-c82e-41b6-9a9f-57a7771a6e0a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:39:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:39:23 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3255736674' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:39:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:24 np0005539552 nova_compute[233724]: 2025-11-29 08:39:24.017 233728 DEBUG nova.compute.manager [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:39:24 np0005539552 nova_compute[233724]: 2025-11-29 08:39:24.019 233728 DEBUG nova.virt.libvirt.driver [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:39:24 np0005539552 nova_compute[233724]: 2025-11-29 08:39:24.020 233728 INFO nova.virt.libvirt.driver [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Creating image(s)#033[00m
Nov 29 03:39:24 np0005539552 nova_compute[233724]: 2025-11-29 08:39:24.021 233728 DEBUG nova.virt.libvirt.driver [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:39:24 np0005539552 nova_compute[233724]: 2025-11-29 08:39:24.021 233728 DEBUG nova.virt.libvirt.driver [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Ensure instance console log exists: /var/lib/nova/instances/ee3de57a-2652-4819-880a-6217c00a67a0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:39:24 np0005539552 nova_compute[233724]: 2025-11-29 08:39:24.022 233728 DEBUG oslo_concurrency.lockutils [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:24 np0005539552 nova_compute[233724]: 2025-11-29 08:39:24.023 233728 DEBUG oslo_concurrency.lockutils [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:24 np0005539552 nova_compute[233724]: 2025-11-29 08:39:24.023 233728 DEBUG oslo_concurrency.lockutils [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:24.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:24 np0005539552 nova_compute[233724]: 2025-11-29 08:39:24.789 233728 DEBUG nova.network.neutron [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Successfully updated port: dfb097a1-c82e-41b6-9a9f-57a7771a6e0a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:39:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:24.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:24 np0005539552 nova_compute[233724]: 2025-11-29 08:39:24.933 233728 DEBUG nova.compute.manager [req-ddd90f16-3730-4ec8-bfed-40a511aa70d6 req-bf50bd2c-fd63-43e0-b264-83ef0509b0d0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Received event network-changed-dfb097a1-c82e-41b6-9a9f-57a7771a6e0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:39:24 np0005539552 nova_compute[233724]: 2025-11-29 08:39:24.934 233728 DEBUG nova.compute.manager [req-ddd90f16-3730-4ec8-bfed-40a511aa70d6 req-bf50bd2c-fd63-43e0-b264-83ef0509b0d0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Refreshing instance network info cache due to event network-changed-dfb097a1-c82e-41b6-9a9f-57a7771a6e0a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:39:24 np0005539552 nova_compute[233724]: 2025-11-29 08:39:24.934 233728 DEBUG oslo_concurrency.lockutils [req-ddd90f16-3730-4ec8-bfed-40a511aa70d6 req-bf50bd2c-fd63-43e0-b264-83ef0509b0d0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-ee3de57a-2652-4819-880a-6217c00a67a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:39:24 np0005539552 nova_compute[233724]: 2025-11-29 08:39:24.935 233728 DEBUG oslo_concurrency.lockutils [req-ddd90f16-3730-4ec8-bfed-40a511aa70d6 req-bf50bd2c-fd63-43e0-b264-83ef0509b0d0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-ee3de57a-2652-4819-880a-6217c00a67a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:39:24 np0005539552 nova_compute[233724]: 2025-11-29 08:39:24.935 233728 DEBUG nova.network.neutron [req-ddd90f16-3730-4ec8-bfed-40a511aa70d6 req-bf50bd2c-fd63-43e0-b264-83ef0509b0d0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Refreshing network info cache for port dfb097a1-c82e-41b6-9a9f-57a7771a6e0a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:39:24 np0005539552 nova_compute[233724]: 2025-11-29 08:39:24.944 233728 DEBUG oslo_concurrency.lockutils [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Acquiring lock "refresh_cache-ee3de57a-2652-4819-880a-6217c00a67a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:39:25 np0005539552 nova_compute[233724]: 2025-11-29 08:39:25.131 233728 DEBUG nova.network.neutron [req-ddd90f16-3730-4ec8-bfed-40a511aa70d6 req-bf50bd2c-fd63-43e0-b264-83ef0509b0d0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:39:25 np0005539552 nova_compute[233724]: 2025-11-29 08:39:25.632 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:25 np0005539552 nova_compute[233724]: 2025-11-29 08:39:25.822 233728 DEBUG nova.network.neutron [req-ddd90f16-3730-4ec8-bfed-40a511aa70d6 req-bf50bd2c-fd63-43e0-b264-83ef0509b0d0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:39:25 np0005539552 nova_compute[233724]: 2025-11-29 08:39:25.848 233728 DEBUG oslo_concurrency.lockutils [req-ddd90f16-3730-4ec8-bfed-40a511aa70d6 req-bf50bd2c-fd63-43e0-b264-83ef0509b0d0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-ee3de57a-2652-4819-880a-6217c00a67a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:39:25 np0005539552 nova_compute[233724]: 2025-11-29 08:39:25.849 233728 DEBUG oslo_concurrency.lockutils [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Acquired lock "refresh_cache-ee3de57a-2652-4819-880a-6217c00a67a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:39:25 np0005539552 nova_compute[233724]: 2025-11-29 08:39:25.849 233728 DEBUG nova.network.neutron [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:39:26 np0005539552 nova_compute[233724]: 2025-11-29 08:39:26.150 233728 DEBUG nova.network.neutron [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:39:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:26.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:26.852 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:39:26 np0005539552 nova_compute[233724]: 2025-11-29 08:39:26.853 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:26.853 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:39:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:26.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.055 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:39:27 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1309058798' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.227 233728 DEBUG nova.network.neutron [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Updating instance_info_cache with network_info: [{"id": "dfb097a1-c82e-41b6-9a9f-57a7771a6e0a", "address": "fa:16:3e:0b:a6:18", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfb097a1-c8", "ovs_interfaceid": "dfb097a1-c82e-41b6-9a9f-57a7771a6e0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.248 233728 DEBUG oslo_concurrency.lockutils [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Releasing lock "refresh_cache-ee3de57a-2652-4819-880a-6217c00a67a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.248 233728 DEBUG nova.compute.manager [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Instance network_info: |[{"id": "dfb097a1-c82e-41b6-9a9f-57a7771a6e0a", "address": "fa:16:3e:0b:a6:18", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfb097a1-c8", "ovs_interfaceid": "dfb097a1-c82e-41b6-9a9f-57a7771a6e0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.254 233728 DEBUG nova.virt.libvirt.driver [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Start _get_guest_xml network_info=[{"id": "dfb097a1-c82e-41b6-9a9f-57a7771a6e0a", "address": "fa:16:3e:0b:a6:18", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfb097a1-c8", "ovs_interfaceid": "dfb097a1-c82e-41b6-9a9f-57a7771a6e0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-36eafc77-7bfd-44e7-b2f9-bca3cee5c1b3', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '36eafc77-7bfd-44e7-b2f9-bca3cee5c1b3', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'ee3de57a-2652-4819-880a-6217c00a67a0', 'attached_at': '', 'detached_at': '', 'volume_id': '36eafc77-7bfd-44e7-b2f9-bca3cee5c1b3', 'serial': '36eafc77-7bfd-44e7-b2f9-bca3cee5c1b3'}, 'delete_on_termination': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'attachment_id': '3b30ba89-b316-4aa5-bdad-bc6b3c4d20bb', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.260 233728 WARNING nova.virt.libvirt.driver [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.266 233728 DEBUG nova.virt.libvirt.host [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.267 233728 DEBUG nova.virt.libvirt.host [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.270 233728 DEBUG nova.virt.libvirt.host [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.271 233728 DEBUG nova.virt.libvirt.host [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.273 233728 DEBUG nova.virt.libvirt.driver [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.274 233728 DEBUG nova.virt.hardware [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.275 233728 DEBUG nova.virt.hardware [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.275 233728 DEBUG nova.virt.hardware [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.276 233728 DEBUG nova.virt.hardware [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.276 233728 DEBUG nova.virt.hardware [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.277 233728 DEBUG nova.virt.hardware [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.278 233728 DEBUG nova.virt.hardware [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.278 233728 DEBUG nova.virt.hardware [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.279 233728 DEBUG nova.virt.hardware [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.279 233728 DEBUG nova.virt.hardware [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.280 233728 DEBUG nova.virt.hardware [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.328 233728 DEBUG nova.storage.rbd_utils [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] rbd image ee3de57a-2652-4819-880a-6217c00a67a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.335 233728 DEBUG oslo_concurrency.processutils [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:39:27 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/949326258' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.810 233728 DEBUG oslo_concurrency.processutils [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.835 233728 DEBUG nova.virt.libvirt.vif [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:39:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-94439515',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-94439515',id=186,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFG6wdTr4YnTt5IOi90oQevRIaDEFT6evKD2WqzrA5InuHLLPBBDt+A3IDlfUfF0+VTQ8wx7jPD+CP0zgY5zll3JN5Id1HeD6V5ixHcQktu+0EcaYFcg2TVX8XapVterdw==',key_name='tempest-TestInstancesWithCinderVolumes-1193741997',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='62ca01275fe34ea0af31d00b34d6d9a5',ramdisk_id='',reservation_id='r-nd3eyux6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestInstancesWithCinderVolumes-911868990',owner_user_name='tempest-TestInstancesWithCinderVolumes-911868990-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:39:22Z,user_data=None,user_id='facf4db8501041ab9628ff9f5684c992',uuid=ee3de57a-2652-4819-880a-6217c00a67a0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dfb097a1-c82e-41b6-9a9f-57a7771a6e0a", "address": "fa:16:3e:0b:a6:18", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfb097a1-c8", "ovs_interfaceid": "dfb097a1-c82e-41b6-9a9f-57a7771a6e0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.836 233728 DEBUG nova.network.os_vif_util [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Converting VIF {"id": "dfb097a1-c82e-41b6-9a9f-57a7771a6e0a", "address": "fa:16:3e:0b:a6:18", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfb097a1-c8", "ovs_interfaceid": "dfb097a1-c82e-41b6-9a9f-57a7771a6e0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.837 233728 DEBUG nova.network.os_vif_util [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:a6:18,bridge_name='br-int',has_traffic_filtering=True,id=dfb097a1-c82e-41b6-9a9f-57a7771a6e0a,network=Network(c114bc23-cd62-4198-a95d-5595953a88bd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdfb097a1-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.838 233728 DEBUG nova.objects.instance [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lazy-loading 'pci_devices' on Instance uuid ee3de57a-2652-4819-880a-6217c00a67a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.857 233728 DEBUG nova.virt.libvirt.driver [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:39:27 np0005539552 nova_compute[233724]:  <uuid>ee3de57a-2652-4819-880a-6217c00a67a0</uuid>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:  <name>instance-000000ba</name>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:39:27 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:      <nova:name>tempest-TestInstancesWithCinderVolumes-server-94439515</nova:name>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:39:27</nova:creationTime>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:39:27 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:        <nova:user uuid="facf4db8501041ab9628ff9f5684c992">tempest-TestInstancesWithCinderVolumes-911868990-project-member</nova:user>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:        <nova:project uuid="62ca01275fe34ea0af31d00b34d6d9a5">tempest-TestInstancesWithCinderVolumes-911868990</nova:project>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:        <nova:port uuid="dfb097a1-c82e-41b6-9a9f-57a7771a6e0a">
Nov 29 03:39:27 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:      <entry name="serial">ee3de57a-2652-4819-880a-6217c00a67a0</entry>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:      <entry name="uuid">ee3de57a-2652-4819-880a-6217c00a67a0</entry>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:39:27 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/ee3de57a-2652-4819-880a-6217c00a67a0_disk.config">
Nov 29 03:39:27 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:39:27 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:39:27 np0005539552 nova_compute[233724]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="volumes/volume-36eafc77-7bfd-44e7-b2f9-bca3cee5c1b3">
Nov 29 03:39:27 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:39:27 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:      <serial>36eafc77-7bfd-44e7-b2f9-bca3cee5c1b3</serial>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:39:27 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:0b:a6:18"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:      <target dev="tapdfb097a1-c8"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:39:27 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/ee3de57a-2652-4819-880a-6217c00a67a0/console.log" append="off"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:39:27 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:39:27 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:39:27 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:39:27 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:39:27 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.859 233728 DEBUG nova.compute.manager [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Preparing to wait for external event network-vif-plugged-dfb097a1-c82e-41b6-9a9f-57a7771a6e0a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.859 233728 DEBUG oslo_concurrency.lockutils [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Acquiring lock "ee3de57a-2652-4819-880a-6217c00a67a0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.860 233728 DEBUG oslo_concurrency.lockutils [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "ee3de57a-2652-4819-880a-6217c00a67a0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.860 233728 DEBUG oslo_concurrency.lockutils [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "ee3de57a-2652-4819-880a-6217c00a67a0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.861 233728 DEBUG nova.virt.libvirt.vif [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:39:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-94439515',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-94439515',id=186,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFG6wdTr4YnTt5IOi90oQevRIaDEFT6evKD2WqzrA5InuHLLPBBDt+A3IDlfUfF0+VTQ8wx7jPD+CP0zgY5zll3JN5Id1HeD6V5ixHcQktu+0EcaYFcg2TVX8XapVterdw==',key_name='tempest-TestInstancesWithCinderVolumes-1193741997',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='62ca01275fe34ea0af31d00b34d6d9a5',ramdisk_id='',reservation_id='r-nd3eyux6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',o
wner_project_name='tempest-TestInstancesWithCinderVolumes-911868990',owner_user_name='tempest-TestInstancesWithCinderVolumes-911868990-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:39:22Z,user_data=None,user_id='facf4db8501041ab9628ff9f5684c992',uuid=ee3de57a-2652-4819-880a-6217c00a67a0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dfb097a1-c82e-41b6-9a9f-57a7771a6e0a", "address": "fa:16:3e:0b:a6:18", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfb097a1-c8", "ovs_interfaceid": "dfb097a1-c82e-41b6-9a9f-57a7771a6e0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.861 233728 DEBUG nova.network.os_vif_util [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Converting VIF {"id": "dfb097a1-c82e-41b6-9a9f-57a7771a6e0a", "address": "fa:16:3e:0b:a6:18", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfb097a1-c8", "ovs_interfaceid": "dfb097a1-c82e-41b6-9a9f-57a7771a6e0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.862 233728 DEBUG nova.network.os_vif_util [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:a6:18,bridge_name='br-int',has_traffic_filtering=True,id=dfb097a1-c82e-41b6-9a9f-57a7771a6e0a,network=Network(c114bc23-cd62-4198-a95d-5595953a88bd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdfb097a1-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.863 233728 DEBUG os_vif [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:a6:18,bridge_name='br-int',has_traffic_filtering=True,id=dfb097a1-c82e-41b6-9a9f-57a7771a6e0a,network=Network(c114bc23-cd62-4198-a95d-5595953a88bd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdfb097a1-c8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.863 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.864 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.864 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.869 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.869 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdfb097a1-c8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.870 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdfb097a1-c8, col_values=(('external_ids', {'iface-id': 'dfb097a1-c82e-41b6-9a9f-57a7771a6e0a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0b:a6:18', 'vm-uuid': 'ee3de57a-2652-4819-880a-6217c00a67a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.871 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:27 np0005539552 NetworkManager[48926]: <info>  [1764405567.8730] manager: (tapdfb097a1-c8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/370)
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.875 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.882 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.883 233728 INFO os_vif [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:a6:18,bridge_name='br-int',has_traffic_filtering=True,id=dfb097a1-c82e-41b6-9a9f-57a7771a6e0a,network=Network(c114bc23-cd62-4198-a95d-5595953a88bd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdfb097a1-c8')#033[00m
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.960 233728 DEBUG nova.virt.libvirt.driver [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.961 233728 DEBUG nova.virt.libvirt.driver [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.961 233728 DEBUG nova.virt.libvirt.driver [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] No VIF found with MAC fa:16:3e:0b:a6:18, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:39:27 np0005539552 nova_compute[233724]: 2025-11-29 08:39:27.962 233728 INFO nova.virt.libvirt.driver [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Using config drive#033[00m
Nov 29 03:39:28 np0005539552 nova_compute[233724]: 2025-11-29 08:39:28.004 233728 DEBUG nova.storage.rbd_utils [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] rbd image ee3de57a-2652-4819-880a-6217c00a67a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:39:28 np0005539552 nova_compute[233724]: 2025-11-29 08:39:28.419 233728 INFO nova.virt.libvirt.driver [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Creating config drive at /var/lib/nova/instances/ee3de57a-2652-4819-880a-6217c00a67a0/disk.config#033[00m
Nov 29 03:39:28 np0005539552 nova_compute[233724]: 2025-11-29 08:39:28.429 233728 DEBUG oslo_concurrency.processutils [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ee3de57a-2652-4819-880a-6217c00a67a0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8al5t4x4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:28.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:28 np0005539552 nova_compute[233724]: 2025-11-29 08:39:28.566 233728 DEBUG oslo_concurrency.processutils [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ee3de57a-2652-4819-880a-6217c00a67a0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8al5t4x4" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:28 np0005539552 nova_compute[233724]: 2025-11-29 08:39:28.598 233728 DEBUG nova.storage.rbd_utils [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] rbd image ee3de57a-2652-4819-880a-6217c00a67a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:39:28 np0005539552 nova_compute[233724]: 2025-11-29 08:39:28.602 233728 DEBUG oslo_concurrency.processutils [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ee3de57a-2652-4819-880a-6217c00a67a0/disk.config ee3de57a-2652-4819-880a-6217c00a67a0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:28 np0005539552 nova_compute[233724]: 2025-11-29 08:39:28.826 233728 DEBUG oslo_concurrency.processutils [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ee3de57a-2652-4819-880a-6217c00a67a0/disk.config ee3de57a-2652-4819-880a-6217c00a67a0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.224s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:28 np0005539552 nova_compute[233724]: 2025-11-29 08:39:28.827 233728 INFO nova.virt.libvirt.driver [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Deleting local config drive /var/lib/nova/instances/ee3de57a-2652-4819-880a-6217c00a67a0/disk.config because it was imported into RBD.#033[00m
Nov 29 03:39:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:28.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:28 np0005539552 kernel: tapdfb097a1-c8: entered promiscuous mode
Nov 29 03:39:28 np0005539552 NetworkManager[48926]: <info>  [1764405568.8824] manager: (tapdfb097a1-c8): new Tun device (/org/freedesktop/NetworkManager/Devices/371)
Nov 29 03:39:28 np0005539552 ovn_controller[133798]: 2025-11-29T08:39:28Z|00839|binding|INFO|Claiming lport dfb097a1-c82e-41b6-9a9f-57a7771a6e0a for this chassis.
Nov 29 03:39:28 np0005539552 nova_compute[233724]: 2025-11-29 08:39:28.883 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:28 np0005539552 ovn_controller[133798]: 2025-11-29T08:39:28Z|00840|binding|INFO|dfb097a1-c82e-41b6-9a9f-57a7771a6e0a: Claiming fa:16:3e:0b:a6:18 10.100.0.9
Nov 29 03:39:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:28.897 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:a6:18 10.100.0.9'], port_security=['fa:16:3e:0b:a6:18 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ee3de57a-2652-4819-880a-6217c00a67a0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c114bc23-cd62-4198-a95d-5595953a88bd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '62ca01275fe34ea0af31d00b34d6d9a5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9ca74358-0566-4f32-a6ba-a0c4dcd1723c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6cd3a0f0-9ad7-457d-b2e3-d5300cfee042, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=dfb097a1-c82e-41b6-9a9f-57a7771a6e0a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:39:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:28.898 143400 INFO neutron.agent.ovn.metadata.agent [-] Port dfb097a1-c82e-41b6-9a9f-57a7771a6e0a in datapath c114bc23-cd62-4198-a95d-5595953a88bd bound to our chassis#033[00m
Nov 29 03:39:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:28.900 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c114bc23-cd62-4198-a95d-5595953a88bd#033[00m
Nov 29 03:39:28 np0005539552 systemd-udevd[311007]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:39:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:28.914 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[de301ed7-6240-4f89-bbf8-ab6d2794bb78]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:28.915 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc114bc23-c1 in ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:39:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:28.917 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc114bc23-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:39:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:28.917 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[bce81274-f9e6-40de-8af6-2228ed503a04]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:28.918 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4af6635e-351e-49c8-b1a9-e2b461532f2e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:28 np0005539552 ovn_controller[133798]: 2025-11-29T08:39:28Z|00841|binding|INFO|Setting lport dfb097a1-c82e-41b6-9a9f-57a7771a6e0a ovn-installed in OVS
Nov 29 03:39:28 np0005539552 ovn_controller[133798]: 2025-11-29T08:39:28Z|00842|binding|INFO|Setting lport dfb097a1-c82e-41b6-9a9f-57a7771a6e0a up in Southbound
Nov 29 03:39:28 np0005539552 nova_compute[233724]: 2025-11-29 08:39:28.919 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:28 np0005539552 nova_compute[233724]: 2025-11-29 08:39:28.923 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:28 np0005539552 NetworkManager[48926]: <info>  [1764405568.9243] device (tapdfb097a1-c8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:39:28 np0005539552 NetworkManager[48926]: <info>  [1764405568.9253] device (tapdfb097a1-c8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:39:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:28.932 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[95c86300-1092-468e-8c17-700b03b10b28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:28 np0005539552 systemd-machined[196379]: New machine qemu-86-instance-000000ba.
Nov 29 03:39:28 np0005539552 systemd[1]: Started Virtual Machine qemu-86-instance-000000ba.
Nov 29 03:39:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:28.959 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[06631214-d843-4c88-a859-50e00ddde696]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:28.990 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[d7068d12-dc27-49d3-95dc-1bb9abc28b68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:28.994 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3b1bc885-5435-420a-8cc8-5424535c59a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:28 np0005539552 NetworkManager[48926]: <info>  [1764405568.9960] manager: (tapc114bc23-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/372)
Nov 29 03:39:28 np0005539552 systemd-udevd[311012]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:39:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:29.028 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[85eded23-e41b-4396-8dd9-04b57d19ec1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:29.030 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[5b05842b-1cbf-4e5a-b6fe-df7cfdb9550f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:29 np0005539552 NetworkManager[48926]: <info>  [1764405569.0507] device (tapc114bc23-c0): carrier: link connected
Nov 29 03:39:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:29.056 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[38303687-fa9b-4f29-b416-2585ea06e4e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:29.072 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3705c66d-76aa-4aed-a85f-5ed5a58d77ca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc114bc23-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c5:78:4d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 248], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 861473, 'reachable_time': 17134, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311045, 'error': None, 'target': 'ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:29.085 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6db267a7-c9a0-4707-bd68-6bef3604357f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec5:784d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 861473, 'tstamp': 861473}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311048, 'error': None, 'target': 'ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:29.105 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4a6b7500-27da-4303-ae75-ea17ec979d2d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc114bc23-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c5:78:4d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 180, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 180, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 248], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 861473, 'reachable_time': 17134, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311049, 'error': None, 'target': 'ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:29.129 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[db7b0054-c96a-41ac-95eb-67b73c76680f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:29.189 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c6160531-3c39-42ab-9e0a-c3bf25867121]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:29.191 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc114bc23-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:29.191 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:39:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:29.191 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc114bc23-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:29 np0005539552 kernel: tapc114bc23-c0: entered promiscuous mode
Nov 29 03:39:29 np0005539552 nova_compute[233724]: 2025-11-29 08:39:29.193 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:29 np0005539552 NetworkManager[48926]: <info>  [1764405569.1936] manager: (tapc114bc23-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/373)
Nov 29 03:39:29 np0005539552 nova_compute[233724]: 2025-11-29 08:39:29.194 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:29.201 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc114bc23-c0, col_values=(('external_ids', {'iface-id': '1642a0e3-a8d4-4ee4-8971-26f27541a04e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:29 np0005539552 nova_compute[233724]: 2025-11-29 08:39:29.202 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:29 np0005539552 ovn_controller[133798]: 2025-11-29T08:39:29Z|00843|binding|INFO|Releasing lport 1642a0e3-a8d4-4ee4-8971-26f27541a04e from this chassis (sb_readonly=0)
Nov 29 03:39:29 np0005539552 nova_compute[233724]: 2025-11-29 08:39:29.202 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:29.212 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c114bc23-cd62-4198-a95d-5595953a88bd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c114bc23-cd62-4198-a95d-5595953a88bd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:39:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:29.213 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f3be3c7d-c5fc-40d9-983b-875f22ec098e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:29 np0005539552 nova_compute[233724]: 2025-11-29 08:39:29.216 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:29.216 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:39:29 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:39:29 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:39:29 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-c114bc23-cd62-4198-a95d-5595953a88bd
Nov 29 03:39:29 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:39:29 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:39:29 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:39:29 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/c114bc23-cd62-4198-a95d-5595953a88bd.pid.haproxy
Nov 29 03:39:29 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:39:29 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:39:29 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:39:29 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:39:29 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:39:29 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:39:29 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:39:29 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:39:29 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:39:29 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:39:29 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:39:29 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:39:29 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:39:29 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:39:29 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:39:29 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:39:29 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:39:29 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:39:29 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:39:29 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:39:29 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID c114bc23-cd62-4198-a95d-5595953a88bd
Nov 29 03:39:29 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:39:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:29.217 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd', 'env', 'PROCESS_TAG=haproxy-c114bc23-cd62-4198-a95d-5595953a88bd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c114bc23-cd62-4198-a95d-5595953a88bd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:39:29 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:39:29 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:39:29 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:39:29 np0005539552 nova_compute[233724]: 2025-11-29 08:39:29.465 233728 DEBUG nova.compute.manager [req-84f62ff2-de87-48af-ba33-b261167fa5f2 req-901d68e3-c91e-4010-b221-485fc2fa4f9c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Received event network-vif-plugged-dfb097a1-c82e-41b6-9a9f-57a7771a6e0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:39:29 np0005539552 nova_compute[233724]: 2025-11-29 08:39:29.465 233728 DEBUG oslo_concurrency.lockutils [req-84f62ff2-de87-48af-ba33-b261167fa5f2 req-901d68e3-c91e-4010-b221-485fc2fa4f9c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ee3de57a-2652-4819-880a-6217c00a67a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:29 np0005539552 nova_compute[233724]: 2025-11-29 08:39:29.465 233728 DEBUG oslo_concurrency.lockutils [req-84f62ff2-de87-48af-ba33-b261167fa5f2 req-901d68e3-c91e-4010-b221-485fc2fa4f9c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ee3de57a-2652-4819-880a-6217c00a67a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:29 np0005539552 nova_compute[233724]: 2025-11-29 08:39:29.466 233728 DEBUG oslo_concurrency.lockutils [req-84f62ff2-de87-48af-ba33-b261167fa5f2 req-901d68e3-c91e-4010-b221-485fc2fa4f9c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ee3de57a-2652-4819-880a-6217c00a67a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:29 np0005539552 nova_compute[233724]: 2025-11-29 08:39:29.466 233728 DEBUG nova.compute.manager [req-84f62ff2-de87-48af-ba33-b261167fa5f2 req-901d68e3-c91e-4010-b221-485fc2fa4f9c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Processing event network-vif-plugged-dfb097a1-c82e-41b6-9a9f-57a7771a6e0a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:39:29 np0005539552 podman[311093]: 2025-11-29 08:39:29.619123441 +0000 UTC m=+0.048274751 container create ed9afb0afe3957ee4e4ebf7a6c57290890bea4a8af78facf5b1bf04cdf399da1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 03:39:29 np0005539552 systemd[1]: Started libpod-conmon-ed9afb0afe3957ee4e4ebf7a6c57290890bea4a8af78facf5b1bf04cdf399da1.scope.
Nov 29 03:39:29 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:39:29 np0005539552 podman[311093]: 2025-11-29 08:39:29.597161699 +0000 UTC m=+0.026313039 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:39:29 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/932134c72d55dc0ad244bc599f254238fa1755fb24f7c64026dc5bc15b5756bf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:39:29 np0005539552 podman[311093]: 2025-11-29 08:39:29.71122751 +0000 UTC m=+0.140378840 container init ed9afb0afe3957ee4e4ebf7a6c57290890bea4a8af78facf5b1bf04cdf399da1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Nov 29 03:39:29 np0005539552 podman[311093]: 2025-11-29 08:39:29.716595194 +0000 UTC m=+0.145746514 container start ed9afb0afe3957ee4e4ebf7a6c57290890bea4a8af78facf5b1bf04cdf399da1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0)
Nov 29 03:39:29 np0005539552 neutron-haproxy-ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd[311109]: [NOTICE]   (311113) : New worker (311115) forked
Nov 29 03:39:29 np0005539552 neutron-haproxy-ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd[311109]: [NOTICE]   (311113) : Loading success.
Nov 29 03:39:30 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #57. Immutable memtables: 13.
Nov 29 03:39:30 np0005539552 nova_compute[233724]: 2025-11-29 08:39:30.102 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405570.1017506, ee3de57a-2652-4819-880a-6217c00a67a0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:39:30 np0005539552 nova_compute[233724]: 2025-11-29 08:39:30.102 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] VM Started (Lifecycle Event)#033[00m
Nov 29 03:39:30 np0005539552 nova_compute[233724]: 2025-11-29 08:39:30.104 233728 DEBUG nova.compute.manager [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:39:30 np0005539552 nova_compute[233724]: 2025-11-29 08:39:30.107 233728 DEBUG nova.virt.libvirt.driver [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:39:30 np0005539552 nova_compute[233724]: 2025-11-29 08:39:30.112 233728 INFO nova.virt.libvirt.driver [-] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Instance spawned successfully.#033[00m
Nov 29 03:39:30 np0005539552 nova_compute[233724]: 2025-11-29 08:39:30.112 233728 DEBUG nova.virt.libvirt.driver [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:39:30 np0005539552 nova_compute[233724]: 2025-11-29 08:39:30.131 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:39:30 np0005539552 nova_compute[233724]: 2025-11-29 08:39:30.133 233728 DEBUG nova.virt.libvirt.driver [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:39:30 np0005539552 nova_compute[233724]: 2025-11-29 08:39:30.133 233728 DEBUG nova.virt.libvirt.driver [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:39:30 np0005539552 nova_compute[233724]: 2025-11-29 08:39:30.133 233728 DEBUG nova.virt.libvirt.driver [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:39:30 np0005539552 nova_compute[233724]: 2025-11-29 08:39:30.134 233728 DEBUG nova.virt.libvirt.driver [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:39:30 np0005539552 nova_compute[233724]: 2025-11-29 08:39:30.134 233728 DEBUG nova.virt.libvirt.driver [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:39:30 np0005539552 nova_compute[233724]: 2025-11-29 08:39:30.134 233728 DEBUG nova.virt.libvirt.driver [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:39:30 np0005539552 nova_compute[233724]: 2025-11-29 08:39:30.139 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:39:30 np0005539552 nova_compute[233724]: 2025-11-29 08:39:30.196 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:39:30 np0005539552 nova_compute[233724]: 2025-11-29 08:39:30.197 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405570.1018448, ee3de57a-2652-4819-880a-6217c00a67a0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:39:30 np0005539552 nova_compute[233724]: 2025-11-29 08:39:30.197 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:39:30 np0005539552 nova_compute[233724]: 2025-11-29 08:39:30.223 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:39:30 np0005539552 nova_compute[233724]: 2025-11-29 08:39:30.227 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405570.1070898, ee3de57a-2652-4819-880a-6217c00a67a0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:39:30 np0005539552 nova_compute[233724]: 2025-11-29 08:39:30.227 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:39:30 np0005539552 nova_compute[233724]: 2025-11-29 08:39:30.236 233728 INFO nova.compute.manager [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Took 6.22 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:39:30 np0005539552 nova_compute[233724]: 2025-11-29 08:39:30.236 233728 DEBUG nova.compute.manager [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:39:30 np0005539552 nova_compute[233724]: 2025-11-29 08:39:30.244 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:39:30 np0005539552 nova_compute[233724]: 2025-11-29 08:39:30.248 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:39:30 np0005539552 nova_compute[233724]: 2025-11-29 08:39:30.268 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:39:30 np0005539552 nova_compute[233724]: 2025-11-29 08:39:30.299 233728 INFO nova.compute.manager [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Took 8.59 seconds to build instance.#033[00m
Nov 29 03:39:30 np0005539552 nova_compute[233724]: 2025-11-29 08:39:30.314 233728 DEBUG oslo_concurrency.lockutils [None req-94b6275b-8684-43fc-98bd-1c29944624ce facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "ee3de57a-2652-4819-880a-6217c00a67a0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:30.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:30.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:31 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:31.856 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:32 np0005539552 nova_compute[233724]: 2025-11-29 08:39:32.057 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:32 np0005539552 nova_compute[233724]: 2025-11-29 08:39:32.090 233728 DEBUG nova.compute.manager [req-6faa4ab4-cc28-4373-ad0a-56043b0f783d req-757c132d-8064-4a2f-8577-0bbac43313c5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Received event network-vif-plugged-dfb097a1-c82e-41b6-9a9f-57a7771a6e0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:39:32 np0005539552 nova_compute[233724]: 2025-11-29 08:39:32.097 233728 DEBUG oslo_concurrency.lockutils [req-6faa4ab4-cc28-4373-ad0a-56043b0f783d req-757c132d-8064-4a2f-8577-0bbac43313c5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ee3de57a-2652-4819-880a-6217c00a67a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:32 np0005539552 nova_compute[233724]: 2025-11-29 08:39:32.097 233728 DEBUG oslo_concurrency.lockutils [req-6faa4ab4-cc28-4373-ad0a-56043b0f783d req-757c132d-8064-4a2f-8577-0bbac43313c5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ee3de57a-2652-4819-880a-6217c00a67a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:32 np0005539552 nova_compute[233724]: 2025-11-29 08:39:32.098 233728 DEBUG oslo_concurrency.lockutils [req-6faa4ab4-cc28-4373-ad0a-56043b0f783d req-757c132d-8064-4a2f-8577-0bbac43313c5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ee3de57a-2652-4819-880a-6217c00a67a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:32 np0005539552 nova_compute[233724]: 2025-11-29 08:39:32.098 233728 DEBUG nova.compute.manager [req-6faa4ab4-cc28-4373-ad0a-56043b0f783d req-757c132d-8064-4a2f-8577-0bbac43313c5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] No waiting events found dispatching network-vif-plugged-dfb097a1-c82e-41b6-9a9f-57a7771a6e0a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:39:32 np0005539552 nova_compute[233724]: 2025-11-29 08:39:32.099 233728 WARNING nova.compute.manager [req-6faa4ab4-cc28-4373-ad0a-56043b0f783d req-757c132d-8064-4a2f-8577-0bbac43313c5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Received unexpected event network-vif-plugged-dfb097a1-c82e-41b6-9a9f-57a7771a6e0a for instance with vm_state active and task_state None.#033[00m
Nov 29 03:39:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:32.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:32.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:32 np0005539552 nova_compute[233724]: 2025-11-29 08:39:32.872 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:33 np0005539552 nova_compute[233724]: 2025-11-29 08:39:33.651 233728 DEBUG oslo_concurrency.lockutils [None req-91741207-4d0b-40ec-a261-62c37205040f facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Acquiring lock "ee3de57a-2652-4819-880a-6217c00a67a0" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:33 np0005539552 nova_compute[233724]: 2025-11-29 08:39:33.651 233728 DEBUG oslo_concurrency.lockutils [None req-91741207-4d0b-40ec-a261-62c37205040f facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "ee3de57a-2652-4819-880a-6217c00a67a0" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:33 np0005539552 nova_compute[233724]: 2025-11-29 08:39:33.664 233728 DEBUG nova.objects.instance [None req-91741207-4d0b-40ec-a261-62c37205040f facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lazy-loading 'flavor' on Instance uuid ee3de57a-2652-4819-880a-6217c00a67a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:39:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:33 np0005539552 nova_compute[233724]: 2025-11-29 08:39:33.701 233728 DEBUG oslo_concurrency.lockutils [None req-91741207-4d0b-40ec-a261-62c37205040f facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "ee3de57a-2652-4819-880a-6217c00a67a0" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:33 np0005539552 nova_compute[233724]: 2025-11-29 08:39:33.938 233728 DEBUG oslo_concurrency.lockutils [None req-91741207-4d0b-40ec-a261-62c37205040f facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Acquiring lock "ee3de57a-2652-4819-880a-6217c00a67a0" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:33 np0005539552 nova_compute[233724]: 2025-11-29 08:39:33.939 233728 DEBUG oslo_concurrency.lockutils [None req-91741207-4d0b-40ec-a261-62c37205040f facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "ee3de57a-2652-4819-880a-6217c00a67a0" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:33 np0005539552 nova_compute[233724]: 2025-11-29 08:39:33.940 233728 INFO nova.compute.manager [None req-91741207-4d0b-40ec-a261-62c37205040f facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Attaching volume 35daf555-cff7-4c13-98d4-8c31451470ee to /dev/vdb#033[00m
Nov 29 03:39:33 np0005539552 podman[311168]: 2025-11-29 08:39:33.981339511 +0000 UTC m=+0.069950554 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:39:33 np0005539552 podman[311169]: 2025-11-29 08:39:33.982898743 +0000 UTC m=+0.070543470 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Nov 29 03:39:34 np0005539552 podman[311170]: 2025-11-29 08:39:34.027498063 +0000 UTC m=+0.103381403 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:39:34 np0005539552 nova_compute[233724]: 2025-11-29 08:39:34.110 233728 DEBUG os_brick.utils [None req-91741207-4d0b-40ec-a261-62c37205040f facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:39:34 np0005539552 nova_compute[233724]: 2025-11-29 08:39:34.111 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:34 np0005539552 nova_compute[233724]: 2025-11-29 08:39:34.129 243418 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:34 np0005539552 nova_compute[233724]: 2025-11-29 08:39:34.129 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[04f37481-74ed-42ec-9dee-18fb88af0493]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:34 np0005539552 nova_compute[233724]: 2025-11-29 08:39:34.131 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:34 np0005539552 nova_compute[233724]: 2025-11-29 08:39:34.141 243418 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:34 np0005539552 nova_compute[233724]: 2025-11-29 08:39:34.141 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[27b1d644-5e7a-475e-9d1d-4495396e1f65]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0e73df9328', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:34 np0005539552 nova_compute[233724]: 2025-11-29 08:39:34.143 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:34 np0005539552 nova_compute[233724]: 2025-11-29 08:39:34.161 243418 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:34 np0005539552 nova_compute[233724]: 2025-11-29 08:39:34.162 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[9ec74b47-7722-4009-b9af-4837e8f2b215]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:34 np0005539552 nova_compute[233724]: 2025-11-29 08:39:34.163 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[cfbb7b5a-e2c6-43e4-ab40-2825519996b6]: (4, '6fbde64a-d978-4f1a-a29d-a77e1f5a1987') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:34 np0005539552 nova_compute[233724]: 2025-11-29 08:39:34.164 233728 DEBUG oslo_concurrency.processutils [None req-91741207-4d0b-40ec-a261-62c37205040f facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:34 np0005539552 nova_compute[233724]: 2025-11-29 08:39:34.204 233728 DEBUG oslo_concurrency.processutils [None req-91741207-4d0b-40ec-a261-62c37205040f facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] CMD "nvme version" returned: 0 in 0.040s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:34 np0005539552 nova_compute[233724]: 2025-11-29 08:39:34.207 233728 DEBUG os_brick.initiator.connectors.lightos [None req-91741207-4d0b-40ec-a261-62c37205040f facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:39:34 np0005539552 nova_compute[233724]: 2025-11-29 08:39:34.207 233728 DEBUG os_brick.initiator.connectors.lightos [None req-91741207-4d0b-40ec-a261-62c37205040f facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:39:34 np0005539552 nova_compute[233724]: 2025-11-29 08:39:34.207 233728 DEBUG os_brick.initiator.connectors.lightos [None req-91741207-4d0b-40ec-a261-62c37205040f facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:39:34 np0005539552 nova_compute[233724]: 2025-11-29 08:39:34.208 233728 DEBUG os_brick.utils [None req-91741207-4d0b-40ec-a261-62c37205040f facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] <== get_connector_properties: return (97ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0e73df9328', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '6fbde64a-d978-4f1a-a29d-a77e1f5a1987', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:39:34 np0005539552 nova_compute[233724]: 2025-11-29 08:39:34.209 233728 DEBUG nova.virt.block_device [None req-91741207-4d0b-40ec-a261-62c37205040f facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Updating existing volume attachment record: 9adbbac5-c417-4c86-b7b6-2d408e80ef7a _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:39:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:34.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:34.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:34 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:39:34 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1317622863' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:39:35 np0005539552 nova_compute[233724]: 2025-11-29 08:39:35.060 233728 DEBUG nova.objects.instance [None req-91741207-4d0b-40ec-a261-62c37205040f facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lazy-loading 'flavor' on Instance uuid ee3de57a-2652-4819-880a-6217c00a67a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:39:35 np0005539552 nova_compute[233724]: 2025-11-29 08:39:35.098 233728 DEBUG nova.virt.libvirt.driver [None req-91741207-4d0b-40ec-a261-62c37205040f facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Attempting to attach volume 35daf555-cff7-4c13-98d4-8c31451470ee with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Nov 29 03:39:35 np0005539552 nova_compute[233724]: 2025-11-29 08:39:35.102 233728 DEBUG nova.virt.libvirt.guest [None req-91741207-4d0b-40ec-a261-62c37205040f facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 03:39:35 np0005539552 nova_compute[233724]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:39:35 np0005539552 nova_compute[233724]:  <source protocol="rbd" name="volumes/volume-35daf555-cff7-4c13-98d4-8c31451470ee">
Nov 29 03:39:35 np0005539552 nova_compute[233724]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:39:35 np0005539552 nova_compute[233724]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:39:35 np0005539552 nova_compute[233724]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:39:35 np0005539552 nova_compute[233724]:  </source>
Nov 29 03:39:35 np0005539552 nova_compute[233724]:  <auth username="openstack">
Nov 29 03:39:35 np0005539552 nova_compute[233724]:    <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:39:35 np0005539552 nova_compute[233724]:  </auth>
Nov 29 03:39:35 np0005539552 nova_compute[233724]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:39:35 np0005539552 nova_compute[233724]:  <serial>35daf555-cff7-4c13-98d4-8c31451470ee</serial>
Nov 29 03:39:35 np0005539552 nova_compute[233724]: </disk>
Nov 29 03:39:35 np0005539552 nova_compute[233724]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 03:39:35 np0005539552 nova_compute[233724]: 2025-11-29 08:39:35.275 233728 DEBUG nova.virt.libvirt.driver [None req-91741207-4d0b-40ec-a261-62c37205040f facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:39:35 np0005539552 nova_compute[233724]: 2025-11-29 08:39:35.276 233728 DEBUG nova.virt.libvirt.driver [None req-91741207-4d0b-40ec-a261-62c37205040f facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:39:35 np0005539552 nova_compute[233724]: 2025-11-29 08:39:35.277 233728 DEBUG nova.virt.libvirt.driver [None req-91741207-4d0b-40ec-a261-62c37205040f facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:39:35 np0005539552 nova_compute[233724]: 2025-11-29 08:39:35.278 233728 DEBUG nova.virt.libvirt.driver [None req-91741207-4d0b-40ec-a261-62c37205040f facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] No VIF found with MAC fa:16:3e:0b:a6:18, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:39:35 np0005539552 nova_compute[233724]: 2025-11-29 08:39:35.432 233728 DEBUG nova.compute.manager [req-4e60b52c-5358-4213-bd4c-1f0740922555 req-6a26f8bb-694d-4a8e-a870-5111e8105579 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Received event network-changed-2769598c-70f6-487d-b82b-5c1920b0e91d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:39:35 np0005539552 nova_compute[233724]: 2025-11-29 08:39:35.433 233728 DEBUG nova.compute.manager [req-4e60b52c-5358-4213-bd4c-1f0740922555 req-6a26f8bb-694d-4a8e-a870-5111e8105579 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Refreshing instance network info cache due to event network-changed-2769598c-70f6-487d-b82b-5c1920b0e91d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:39:35 np0005539552 nova_compute[233724]: 2025-11-29 08:39:35.434 233728 DEBUG oslo_concurrency.lockutils [req-4e60b52c-5358-4213-bd4c-1f0740922555 req-6a26f8bb-694d-4a8e-a870-5111e8105579 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-608e1017-9da7-4ba6-a346-f047562d380b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:39:35 np0005539552 nova_compute[233724]: 2025-11-29 08:39:35.435 233728 DEBUG oslo_concurrency.lockutils [req-4e60b52c-5358-4213-bd4c-1f0740922555 req-6a26f8bb-694d-4a8e-a870-5111e8105579 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-608e1017-9da7-4ba6-a346-f047562d380b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:39:35 np0005539552 nova_compute[233724]: 2025-11-29 08:39:35.435 233728 DEBUG nova.network.neutron [req-4e60b52c-5358-4213-bd4c-1f0740922555 req-6a26f8bb-694d-4a8e-a870-5111e8105579 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Refreshing network info cache for port 2769598c-70f6-487d-b82b-5c1920b0e91d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:39:35 np0005539552 nova_compute[233724]: 2025-11-29 08:39:35.516 233728 DEBUG oslo_concurrency.lockutils [None req-91741207-4d0b-40ec-a261-62c37205040f facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "ee3de57a-2652-4819-880a-6217c00a67a0" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:36.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:36 np0005539552 nova_compute[233724]: 2025-11-29 08:39:36.832 233728 DEBUG oslo_concurrency.lockutils [None req-8ffc58c8-cbbd-4ceb-890a-8833a3ca0da4 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Acquiring lock "ee3de57a-2652-4819-880a-6217c00a67a0" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:36 np0005539552 nova_compute[233724]: 2025-11-29 08:39:36.834 233728 DEBUG oslo_concurrency.lockutils [None req-8ffc58c8-cbbd-4ceb-890a-8833a3ca0da4 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "ee3de57a-2652-4819-880a-6217c00a67a0" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:36 np0005539552 nova_compute[233724]: 2025-11-29 08:39:36.857 233728 DEBUG nova.objects.instance [None req-8ffc58c8-cbbd-4ceb-890a-8833a3ca0da4 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lazy-loading 'flavor' on Instance uuid ee3de57a-2652-4819-880a-6217c00a67a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:39:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:36.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:36 np0005539552 nova_compute[233724]: 2025-11-29 08:39:36.889 233728 DEBUG oslo_concurrency.lockutils [None req-8ffc58c8-cbbd-4ceb-890a-8833a3ca0da4 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "ee3de57a-2652-4819-880a-6217c00a67a0" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.055s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:37 np0005539552 nova_compute[233724]: 2025-11-29 08:39:37.061 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:37 np0005539552 nova_compute[233724]: 2025-11-29 08:39:37.264 233728 DEBUG oslo_concurrency.lockutils [None req-8ffc58c8-cbbd-4ceb-890a-8833a3ca0da4 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Acquiring lock "ee3de57a-2652-4819-880a-6217c00a67a0" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:39:37 np0005539552 nova_compute[233724]: 2025-11-29 08:39:37.265 233728 DEBUG oslo_concurrency.lockutils [None req-8ffc58c8-cbbd-4ceb-890a-8833a3ca0da4 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "ee3de57a-2652-4819-880a-6217c00a67a0" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:39:37 np0005539552 nova_compute[233724]: 2025-11-29 08:39:37.266 233728 INFO nova.compute.manager [None req-8ffc58c8-cbbd-4ceb-890a-8833a3ca0da4 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Attaching volume 54fbb1b4-e0a6-4add-a8f8-3d00e149271f to /dev/vdc
Nov 29 03:39:37 np0005539552 nova_compute[233724]: 2025-11-29 08:39:37.455 233728 DEBUG os_brick.utils [None req-8ffc58c8-cbbd-4ceb-890a-8833a3ca0da4 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 03:39:37 np0005539552 nova_compute[233724]: 2025-11-29 08:39:37.457 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:39:37 np0005539552 nova_compute[233724]: 2025-11-29 08:39:37.467 243418 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:39:37 np0005539552 nova_compute[233724]: 2025-11-29 08:39:37.467 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[6e331994-a1d2-47a1-9b21-ba02ce71e1af]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:39:37 np0005539552 nova_compute[233724]: 2025-11-29 08:39:37.468 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:39:37 np0005539552 nova_compute[233724]: 2025-11-29 08:39:37.475 243418 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:39:37 np0005539552 nova_compute[233724]: 2025-11-29 08:39:37.475 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[2b0d7072-4a0e-43bd-98f2-c83be2dd8479]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0e73df9328', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:39:37 np0005539552 nova_compute[233724]: 2025-11-29 08:39:37.477 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:39:37 np0005539552 nova_compute[233724]: 2025-11-29 08:39:37.484 243418 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:39:37 np0005539552 nova_compute[233724]: 2025-11-29 08:39:37.484 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[40054f67-7fa4-4faa-853c-279f8f234404]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:39:37 np0005539552 nova_compute[233724]: 2025-11-29 08:39:37.486 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[c6514479-9d08-4ec7-82ef-d5480890ce76]: (4, '6fbde64a-d978-4f1a-a29d-a77e1f5a1987') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:39:37 np0005539552 nova_compute[233724]: 2025-11-29 08:39:37.486 233728 DEBUG oslo_concurrency.processutils [None req-8ffc58c8-cbbd-4ceb-890a-8833a3ca0da4 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:39:37 np0005539552 nova_compute[233724]: 2025-11-29 08:39:37.516 233728 DEBUG oslo_concurrency.processutils [None req-8ffc58c8-cbbd-4ceb-890a-8833a3ca0da4 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] CMD "nvme version" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:39:37 np0005539552 nova_compute[233724]: 2025-11-29 08:39:37.520 233728 DEBUG os_brick.initiator.connectors.lightos [None req-8ffc58c8-cbbd-4ceb-890a-8833a3ca0da4 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 03:39:37 np0005539552 nova_compute[233724]: 2025-11-29 08:39:37.521 233728 DEBUG os_brick.initiator.connectors.lightos [None req-8ffc58c8-cbbd-4ceb-890a-8833a3ca0da4 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 03:39:37 np0005539552 nova_compute[233724]: 2025-11-29 08:39:37.521 233728 DEBUG os_brick.initiator.connectors.lightos [None req-8ffc58c8-cbbd-4ceb-890a-8833a3ca0da4 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 03:39:37 np0005539552 nova_compute[233724]: 2025-11-29 08:39:37.522 233728 DEBUG os_brick.utils [None req-8ffc58c8-cbbd-4ceb-890a-8833a3ca0da4 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] <== get_connector_properties: return (66ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0e73df9328', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '6fbde64a-d978-4f1a-a29d-a77e1f5a1987', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 03:39:37 np0005539552 nova_compute[233724]: 2025-11-29 08:39:37.523 233728 DEBUG nova.virt.block_device [None req-8ffc58c8-cbbd-4ceb-890a-8833a3ca0da4 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Updating existing volume attachment record: ef76ea02-7b8f-4363-8cf6-5ad9e28b3ff4 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 03:39:37 np0005539552 nova_compute[233724]: 2025-11-29 08:39:37.875 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:39:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:39:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2934906898' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:39:38 np0005539552 nova_compute[233724]: 2025-11-29 08:39:38.394 233728 DEBUG nova.objects.instance [None req-8ffc58c8-cbbd-4ceb-890a-8833a3ca0da4 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lazy-loading 'flavor' on Instance uuid ee3de57a-2652-4819-880a-6217c00a67a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:39:38 np0005539552 nova_compute[233724]: 2025-11-29 08:39:38.451 233728 DEBUG nova.virt.libvirt.driver [None req-8ffc58c8-cbbd-4ceb-890a-8833a3ca0da4 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Attempting to attach volume 54fbb1b4-e0a6-4add-a8f8-3d00e149271f with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 29 03:39:38 np0005539552 nova_compute[233724]: 2025-11-29 08:39:38.454 233728 DEBUG nova.virt.libvirt.guest [None req-8ffc58c8-cbbd-4ceb-890a-8833a3ca0da4 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 03:39:38 np0005539552 nova_compute[233724]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:39:38 np0005539552 nova_compute[233724]:  <source protocol="rbd" name="volumes/volume-54fbb1b4-e0a6-4add-a8f8-3d00e149271f">
Nov 29 03:39:38 np0005539552 nova_compute[233724]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:39:38 np0005539552 nova_compute[233724]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:39:38 np0005539552 nova_compute[233724]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:39:38 np0005539552 nova_compute[233724]:  </source>
Nov 29 03:39:38 np0005539552 nova_compute[233724]:  <auth username="openstack">
Nov 29 03:39:38 np0005539552 nova_compute[233724]:    <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:39:38 np0005539552 nova_compute[233724]:  </auth>
Nov 29 03:39:38 np0005539552 nova_compute[233724]:  <target dev="vdc" bus="virtio"/>
Nov 29 03:39:38 np0005539552 nova_compute[233724]:  <serial>54fbb1b4-e0a6-4add-a8f8-3d00e149271f</serial>
Nov 29 03:39:38 np0005539552 nova_compute[233724]: </disk>
Nov 29 03:39:38 np0005539552 nova_compute[233724]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 29 03:39:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:38.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:38 np0005539552 nova_compute[233724]: 2025-11-29 08:39:38.713 233728 DEBUG nova.network.neutron [req-4e60b52c-5358-4213-bd4c-1f0740922555 req-6a26f8bb-694d-4a8e-a870-5111e8105579 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Updated VIF entry in instance network info cache for port 2769598c-70f6-487d-b82b-5c1920b0e91d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:39:38 np0005539552 nova_compute[233724]: 2025-11-29 08:39:38.714 233728 DEBUG nova.network.neutron [req-4e60b52c-5358-4213-bd4c-1f0740922555 req-6a26f8bb-694d-4a8e-a870-5111e8105579 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Updating instance_info_cache with network_info: [{"id": "2769598c-70f6-487d-b82b-5c1920b0e91d", "address": "fa:16:3e:96:d3:7a", "network": {"id": "0430aba5-d0d7-4d98-ad87-552e6639c190", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1917226298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e30b74e6449dd90ecb667977d1fe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2769598c-70", "ovs_interfaceid": "2769598c-70f6-487d-b82b-5c1920b0e91d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:39:38 np0005539552 nova_compute[233724]: 2025-11-29 08:39:38.733 233728 DEBUG oslo_concurrency.lockutils [req-4e60b52c-5358-4213-bd4c-1f0740922555 req-6a26f8bb-694d-4a8e-a870-5111e8105579 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-608e1017-9da7-4ba6-a346-f047562d380b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:39:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:38.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:38 np0005539552 nova_compute[233724]: 2025-11-29 08:39:38.952 233728 DEBUG nova.virt.libvirt.driver [None req-8ffc58c8-cbbd-4ceb-890a-8833a3ca0da4 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:39:38 np0005539552 nova_compute[233724]: 2025-11-29 08:39:38.953 233728 DEBUG nova.virt.libvirt.driver [None req-8ffc58c8-cbbd-4ceb-890a-8833a3ca0da4 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:39:38 np0005539552 nova_compute[233724]: 2025-11-29 08:39:38.953 233728 DEBUG nova.virt.libvirt.driver [None req-8ffc58c8-cbbd-4ceb-890a-8833a3ca0da4 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:39:38 np0005539552 nova_compute[233724]: 2025-11-29 08:39:38.953 233728 DEBUG nova.virt.libvirt.driver [None req-8ffc58c8-cbbd-4ceb-890a-8833a3ca0da4 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:39:38 np0005539552 nova_compute[233724]: 2025-11-29 08:39:38.954 233728 DEBUG nova.virt.libvirt.driver [None req-8ffc58c8-cbbd-4ceb-890a-8833a3ca0da4 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] No VIF found with MAC fa:16:3e:0b:a6:18, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 03:39:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:39:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2055336838' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:39:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:39:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2055336838' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:39:39 np0005539552 nova_compute[233724]: 2025-11-29 08:39:39.218 233728 DEBUG oslo_concurrency.lockutils [None req-8ffc58c8-cbbd-4ceb-890a-8833a3ca0da4 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "ee3de57a-2652-4819-880a-6217c00a67a0" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.953s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:39:40 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:39:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:40.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:40.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:41 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:39:41 np0005539552 nova_compute[233724]: 2025-11-29 08:39:41.664 233728 DEBUG nova.compute.manager [req-0212f89b-374a-4f07-82f1-e9a3e1b0922a req-a9a35c5a-c668-4840-85a2-ceb6e389ce94 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Received event network-changed-2769598c-70f6-487d-b82b-5c1920b0e91d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:39:41 np0005539552 nova_compute[233724]: 2025-11-29 08:39:41.666 233728 DEBUG nova.compute.manager [req-0212f89b-374a-4f07-82f1-e9a3e1b0922a req-a9a35c5a-c668-4840-85a2-ceb6e389ce94 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Refreshing instance network info cache due to event network-changed-2769598c-70f6-487d-b82b-5c1920b0e91d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:39:41 np0005539552 nova_compute[233724]: 2025-11-29 08:39:41.667 233728 DEBUG oslo_concurrency.lockutils [req-0212f89b-374a-4f07-82f1-e9a3e1b0922a req-a9a35c5a-c668-4840-85a2-ceb6e389ce94 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-608e1017-9da7-4ba6-a346-f047562d380b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:39:41 np0005539552 nova_compute[233724]: 2025-11-29 08:39:41.668 233728 DEBUG oslo_concurrency.lockutils [req-0212f89b-374a-4f07-82f1-e9a3e1b0922a req-a9a35c5a-c668-4840-85a2-ceb6e389ce94 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-608e1017-9da7-4ba6-a346-f047562d380b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:39:41 np0005539552 nova_compute[233724]: 2025-11-29 08:39:41.669 233728 DEBUG nova.network.neutron [req-0212f89b-374a-4f07-82f1-e9a3e1b0922a req-a9a35c5a-c668-4840-85a2-ceb6e389ce94 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Refreshing network info cache for port 2769598c-70f6-487d-b82b-5c1920b0e91d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:39:41 np0005539552 nova_compute[233724]: 2025-11-29 08:39:41.814 233728 DEBUG nova.compute.manager [req-e9b5c363-306f-430c-adc5-4d1b44ee491f req-956a5532-42af-47a0-9632-d04ffe86b1be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Received event network-changed-dfb097a1-c82e-41b6-9a9f-57a7771a6e0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:39:41 np0005539552 nova_compute[233724]: 2025-11-29 08:39:41.815 233728 DEBUG nova.compute.manager [req-e9b5c363-306f-430c-adc5-4d1b44ee491f req-956a5532-42af-47a0-9632-d04ffe86b1be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Refreshing instance network info cache due to event network-changed-dfb097a1-c82e-41b6-9a9f-57a7771a6e0a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:39:41 np0005539552 nova_compute[233724]: 2025-11-29 08:39:41.815 233728 DEBUG oslo_concurrency.lockutils [req-e9b5c363-306f-430c-adc5-4d1b44ee491f req-956a5532-42af-47a0-9632-d04ffe86b1be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-ee3de57a-2652-4819-880a-6217c00a67a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:39:41 np0005539552 nova_compute[233724]: 2025-11-29 08:39:41.815 233728 DEBUG oslo_concurrency.lockutils [req-e9b5c363-306f-430c-adc5-4d1b44ee491f req-956a5532-42af-47a0-9632-d04ffe86b1be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-ee3de57a-2652-4819-880a-6217c00a67a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:39:41 np0005539552 nova_compute[233724]: 2025-11-29 08:39:41.816 233728 DEBUG nova.network.neutron [req-e9b5c363-306f-430c-adc5-4d1b44ee491f req-956a5532-42af-47a0-9632-d04ffe86b1be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Refreshing network info cache for port dfb097a1-c82e-41b6-9a9f-57a7771a6e0a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:39:42 np0005539552 nova_compute[233724]: 2025-11-29 08:39:42.064 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:39:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:42.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:42 np0005539552 nova_compute[233724]: 2025-11-29 08:39:42.876 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:39:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:42.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:42 np0005539552 nova_compute[233724]: 2025-11-29 08:39:42.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:39:43 np0005539552 nova_compute[233724]: 2025-11-29 08:39:43.360 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:39:43 np0005539552 nova_compute[233724]: 2025-11-29 08:39:43.361 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:39:43 np0005539552 nova_compute[233724]: 2025-11-29 08:39:43.361 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:39:43 np0005539552 nova_compute[233724]: 2025-11-29 08:39:43.362 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 03:39:43 np0005539552 nova_compute[233724]: 2025-11-29 08:39:43.362 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:39:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:43 np0005539552 nova_compute[233724]: 2025-11-29 08:39:43.882 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:39:43 np0005539552 nova_compute[233724]: 2025-11-29 08:39:43.963 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-000000ba as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:39:43 np0005539552 nova_compute[233724]: 2025-11-29 08:39:43.964 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-000000ba as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:39:43 np0005539552 nova_compute[233724]: 2025-11-29 08:39:43.964 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-000000ba as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:39:43 np0005539552 nova_compute[233724]: 2025-11-29 08:39:43.965 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-000000ba as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:39:43 np0005539552 nova_compute[233724]: 2025-11-29 08:39:43.969 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-000000b7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:39:43 np0005539552 nova_compute[233724]: 2025-11-29 08:39:43.969 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-000000b7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:39:43 np0005539552 nova_compute[233724]: 2025-11-29 08:39:43.969 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-000000b7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:39:44 np0005539552 nova_compute[233724]: 2025-11-29 08:39:44.172 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:39:44 np0005539552 nova_compute[233724]: 2025-11-29 08:39:44.173 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3813MB free_disk=20.851093292236328GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 03:39:44 np0005539552 nova_compute[233724]: 2025-11-29 08:39:44.174 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:39:44 np0005539552 nova_compute[233724]: 2025-11-29 08:39:44.174 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:39:44 np0005539552 nova_compute[233724]: 2025-11-29 08:39:44.212 233728 DEBUG nova.compute.manager [req-0f61adbf-48f8-4c7e-b991-5ab233c3ae70 req-04c77b0e-cedc-4e83-b423-c9e6a860011d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Received event network-changed-dfb097a1-c82e-41b6-9a9f-57a7771a6e0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:39:44 np0005539552 nova_compute[233724]: 2025-11-29 08:39:44.212 233728 DEBUG nova.compute.manager [req-0f61adbf-48f8-4c7e-b991-5ab233c3ae70 req-04c77b0e-cedc-4e83-b423-c9e6a860011d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Refreshing instance network info cache due to event network-changed-dfb097a1-c82e-41b6-9a9f-57a7771a6e0a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:39:44 np0005539552 nova_compute[233724]: 2025-11-29 08:39:44.212 233728 DEBUG oslo_concurrency.lockutils [req-0f61adbf-48f8-4c7e-b991-5ab233c3ae70 req-04c77b0e-cedc-4e83-b423-c9e6a860011d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-ee3de57a-2652-4819-880a-6217c00a67a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:39:44 np0005539552 nova_compute[233724]: 2025-11-29 08:39:44.266 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 608e1017-9da7-4ba6-a346-f047562d380b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:39:44 np0005539552 nova_compute[233724]: 2025-11-29 08:39:44.266 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance ee3de57a-2652-4819-880a-6217c00a67a0 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:39:44 np0005539552 nova_compute[233724]: 2025-11-29 08:39:44.267 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:39:44 np0005539552 nova_compute[233724]: 2025-11-29 08:39:44.267 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:39:44 np0005539552 nova_compute[233724]: 2025-11-29 08:39:44.317 233728 DEBUG nova.network.neutron [req-0212f89b-374a-4f07-82f1-e9a3e1b0922a req-a9a35c5a-c668-4840-85a2-ceb6e389ce94 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Updated VIF entry in instance network info cache for port 2769598c-70f6-487d-b82b-5c1920b0e91d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:39:44 np0005539552 nova_compute[233724]: 2025-11-29 08:39:44.317 233728 DEBUG nova.network.neutron [req-0212f89b-374a-4f07-82f1-e9a3e1b0922a req-a9a35c5a-c668-4840-85a2-ceb6e389ce94 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Updating instance_info_cache with network_info: [{"id": "2769598c-70f6-487d-b82b-5c1920b0e91d", "address": "fa:16:3e:96:d3:7a", "network": {"id": "0430aba5-d0d7-4d98-ad87-552e6639c190", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1917226298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e30b74e6449dd90ecb667977d1fe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2769598c-70", "ovs_interfaceid": "2769598c-70f6-487d-b82b-5c1920b0e91d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:39:44 np0005539552 nova_compute[233724]: 2025-11-29 08:39:44.320 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:44 np0005539552 nova_compute[233724]: 2025-11-29 08:39:44.352 233728 DEBUG oslo_concurrency.lockutils [req-0212f89b-374a-4f07-82f1-e9a3e1b0922a req-a9a35c5a-c668-4840-85a2-ceb6e389ce94 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-608e1017-9da7-4ba6-a346-f047562d380b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:39:44 np0005539552 nova_compute[233724]: 2025-11-29 08:39:44.393 233728 DEBUG oslo_concurrency.lockutils [None req-3ee2461d-7f30-4e56-b2d0-6cec37ce9444 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Acquiring lock "608e1017-9da7-4ba6-a346-f047562d380b" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:44 np0005539552 nova_compute[233724]: 2025-11-29 08:39:44.393 233728 DEBUG oslo_concurrency.lockutils [None req-3ee2461d-7f30-4e56-b2d0-6cec37ce9444 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lock "608e1017-9da7-4ba6-a346-f047562d380b" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:44 np0005539552 nova_compute[233724]: 2025-11-29 08:39:44.410 233728 INFO nova.compute.manager [None req-3ee2461d-7f30-4e56-b2d0-6cec37ce9444 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Detaching volume 4c2de305-8721-44a9-a539-b07eee5e101a#033[00m
Nov 29 03:39:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:44.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:44 np0005539552 nova_compute[233724]: 2025-11-29 08:39:44.558 233728 INFO nova.virt.block_device [None req-3ee2461d-7f30-4e56-b2d0-6cec37ce9444 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Attempting to driver detach volume 4c2de305-8721-44a9-a539-b07eee5e101a from mountpoint /dev/vdb#033[00m
Nov 29 03:39:44 np0005539552 nova_compute[233724]: 2025-11-29 08:39:44.568 233728 DEBUG nova.virt.libvirt.driver [None req-3ee2461d-7f30-4e56-b2d0-6cec37ce9444 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Attempting to detach device vdb from instance 608e1017-9da7-4ba6-a346-f047562d380b from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:39:44 np0005539552 nova_compute[233724]: 2025-11-29 08:39:44.569 233728 DEBUG nova.virt.libvirt.guest [None req-3ee2461d-7f30-4e56-b2d0-6cec37ce9444 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:39:44 np0005539552 nova_compute[233724]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:39:44 np0005539552 nova_compute[233724]:  <source protocol="rbd" name="volumes/volume-4c2de305-8721-44a9-a539-b07eee5e101a">
Nov 29 03:39:44 np0005539552 nova_compute[233724]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:39:44 np0005539552 nova_compute[233724]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:39:44 np0005539552 nova_compute[233724]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:39:44 np0005539552 nova_compute[233724]:  </source>
Nov 29 03:39:44 np0005539552 nova_compute[233724]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:39:44 np0005539552 nova_compute[233724]:  <serial>4c2de305-8721-44a9-a539-b07eee5e101a</serial>
Nov 29 03:39:44 np0005539552 nova_compute[233724]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:39:44 np0005539552 nova_compute[233724]: </disk>
Nov 29 03:39:44 np0005539552 nova_compute[233724]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:39:44 np0005539552 nova_compute[233724]: 2025-11-29 08:39:44.577 233728 INFO nova.virt.libvirt.driver [None req-3ee2461d-7f30-4e56-b2d0-6cec37ce9444 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Successfully detached device vdb from instance 608e1017-9da7-4ba6-a346-f047562d380b from the persistent domain config.#033[00m
Nov 29 03:39:44 np0005539552 nova_compute[233724]: 2025-11-29 08:39:44.577 233728 DEBUG nova.virt.libvirt.driver [None req-3ee2461d-7f30-4e56-b2d0-6cec37ce9444 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 608e1017-9da7-4ba6-a346-f047562d380b from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 29 03:39:44 np0005539552 nova_compute[233724]: 2025-11-29 08:39:44.578 233728 DEBUG nova.virt.libvirt.guest [None req-3ee2461d-7f30-4e56-b2d0-6cec37ce9444 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:39:44 np0005539552 nova_compute[233724]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:39:44 np0005539552 nova_compute[233724]:  <source protocol="rbd" name="volumes/volume-4c2de305-8721-44a9-a539-b07eee5e101a">
Nov 29 03:39:44 np0005539552 nova_compute[233724]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:39:44 np0005539552 nova_compute[233724]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:39:44 np0005539552 nova_compute[233724]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:39:44 np0005539552 nova_compute[233724]:  </source>
Nov 29 03:39:44 np0005539552 nova_compute[233724]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:39:44 np0005539552 nova_compute[233724]:  <serial>4c2de305-8721-44a9-a539-b07eee5e101a</serial>
Nov 29 03:39:44 np0005539552 nova_compute[233724]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:39:44 np0005539552 nova_compute[233724]: </disk>
Nov 29 03:39:44 np0005539552 nova_compute[233724]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:39:44 np0005539552 nova_compute[233724]: 2025-11-29 08:39:44.639 233728 DEBUG nova.virt.libvirt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Received event <DeviceRemovedEvent: 1764405584.6395674, 608e1017-9da7-4ba6-a346-f047562d380b => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 29 03:39:44 np0005539552 nova_compute[233724]: 2025-11-29 08:39:44.641 233728 DEBUG nova.virt.libvirt.driver [None req-3ee2461d-7f30-4e56-b2d0-6cec37ce9444 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 608e1017-9da7-4ba6-a346-f047562d380b _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 29 03:39:44 np0005539552 nova_compute[233724]: 2025-11-29 08:39:44.643 233728 INFO nova.virt.libvirt.driver [None req-3ee2461d-7f30-4e56-b2d0-6cec37ce9444 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Successfully detached device vdb from instance 608e1017-9da7-4ba6-a346-f047562d380b from the live domain config.#033[00m
Nov 29 03:39:44 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:39:44 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1959832157' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:39:44 np0005539552 nova_compute[233724]: 2025-11-29 08:39:44.770 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:44 np0005539552 nova_compute[233724]: 2025-11-29 08:39:44.776 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:39:44 np0005539552 nova_compute[233724]: 2025-11-29 08:39:44.799 233728 DEBUG nova.objects.instance [None req-3ee2461d-7f30-4e56-b2d0-6cec37ce9444 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lazy-loading 'flavor' on Instance uuid 608e1017-9da7-4ba6-a346-f047562d380b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:39:44 np0005539552 nova_compute[233724]: 2025-11-29 08:39:44.801 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:39:44 np0005539552 nova_compute[233724]: 2025-11-29 08:39:44.829 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:39:44 np0005539552 nova_compute[233724]: 2025-11-29 08:39:44.830 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:44 np0005539552 nova_compute[233724]: 2025-11-29 08:39:44.831 233728 DEBUG nova.network.neutron [req-e9b5c363-306f-430c-adc5-4d1b44ee491f req-956a5532-42af-47a0-9632-d04ffe86b1be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Updated VIF entry in instance network info cache for port dfb097a1-c82e-41b6-9a9f-57a7771a6e0a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:39:44 np0005539552 nova_compute[233724]: 2025-11-29 08:39:44.832 233728 DEBUG nova.network.neutron [req-e9b5c363-306f-430c-adc5-4d1b44ee491f req-956a5532-42af-47a0-9632-d04ffe86b1be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Updating instance_info_cache with network_info: [{"id": "dfb097a1-c82e-41b6-9a9f-57a7771a6e0a", "address": "fa:16:3e:0b:a6:18", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfb097a1-c8", "ovs_interfaceid": "dfb097a1-c82e-41b6-9a9f-57a7771a6e0a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:39:44 np0005539552 nova_compute[233724]: 2025-11-29 08:39:44.837 233728 DEBUG oslo_concurrency.lockutils [None req-3ee2461d-7f30-4e56-b2d0-6cec37ce9444 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lock "608e1017-9da7-4ba6-a346-f047562d380b" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.443s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:44 np0005539552 nova_compute[233724]: 2025-11-29 08:39:44.851 233728 DEBUG oslo_concurrency.lockutils [req-e9b5c363-306f-430c-adc5-4d1b44ee491f req-956a5532-42af-47a0-9632-d04ffe86b1be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-ee3de57a-2652-4819-880a-6217c00a67a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:39:44 np0005539552 nova_compute[233724]: 2025-11-29 08:39:44.852 233728 DEBUG oslo_concurrency.lockutils [req-0f61adbf-48f8-4c7e-b991-5ab233c3ae70 req-04c77b0e-cedc-4e83-b423-c9e6a860011d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-ee3de57a-2652-4819-880a-6217c00a67a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:39:44 np0005539552 nova_compute[233724]: 2025-11-29 08:39:44.852 233728 DEBUG nova.network.neutron [req-0f61adbf-48f8-4c7e-b991-5ab233c3ae70 req-04c77b0e-cedc-4e83-b423-c9e6a860011d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Refreshing network info cache for port dfb097a1-c82e-41b6-9a9f-57a7771a6e0a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:39:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:44.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:45 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e387 e387: 3 total, 3 up, 3 in
Nov 29 03:39:45 np0005539552 ovn_controller[133798]: 2025-11-29T08:39:45Z|00111|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0b:a6:18 10.100.0.9
Nov 29 03:39:45 np0005539552 ovn_controller[133798]: 2025-11-29T08:39:45Z|00112|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0b:a6:18 10.100.0.9
Nov 29 03:39:46 np0005539552 nova_compute[233724]: 2025-11-29 08:39:46.522 233728 DEBUG nova.compute.manager [req-05d775fb-653a-4c17-96d8-d99a1cf91e2e req-e8fa0906-b028-45e3-aa70-080f7af19994 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Received event network-changed-dfb097a1-c82e-41b6-9a9f-57a7771a6e0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:39:46 np0005539552 nova_compute[233724]: 2025-11-29 08:39:46.522 233728 DEBUG nova.compute.manager [req-05d775fb-653a-4c17-96d8-d99a1cf91e2e req-e8fa0906-b028-45e3-aa70-080f7af19994 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Refreshing instance network info cache due to event network-changed-dfb097a1-c82e-41b6-9a9f-57a7771a6e0a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:39:46 np0005539552 nova_compute[233724]: 2025-11-29 08:39:46.523 233728 DEBUG oslo_concurrency.lockutils [req-05d775fb-653a-4c17-96d8-d99a1cf91e2e req-e8fa0906-b028-45e3-aa70-080f7af19994 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-ee3de57a-2652-4819-880a-6217c00a67a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:39:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:39:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:46.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:39:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:46.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:47 np0005539552 nova_compute[233724]: 2025-11-29 08:39:47.065 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:47 np0005539552 nova_compute[233724]: 2025-11-29 08:39:47.457 233728 DEBUG oslo_concurrency.lockutils [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "fdf22d9d-2476-4990-b493-ae3ab31a8bb8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:47 np0005539552 nova_compute[233724]: 2025-11-29 08:39:47.458 233728 DEBUG oslo_concurrency.lockutils [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "fdf22d9d-2476-4990-b493-ae3ab31a8bb8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:47 np0005539552 nova_compute[233724]: 2025-11-29 08:39:47.501 233728 DEBUG nova.network.neutron [req-0f61adbf-48f8-4c7e-b991-5ab233c3ae70 req-04c77b0e-cedc-4e83-b423-c9e6a860011d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Updated VIF entry in instance network info cache for port dfb097a1-c82e-41b6-9a9f-57a7771a6e0a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:39:47 np0005539552 nova_compute[233724]: 2025-11-29 08:39:47.502 233728 DEBUG nova.network.neutron [req-0f61adbf-48f8-4c7e-b991-5ab233c3ae70 req-04c77b0e-cedc-4e83-b423-c9e6a860011d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Updating instance_info_cache with network_info: [{"id": "dfb097a1-c82e-41b6-9a9f-57a7771a6e0a", "address": "fa:16:3e:0b:a6:18", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfb097a1-c8", "ovs_interfaceid": "dfb097a1-c82e-41b6-9a9f-57a7771a6e0a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:39:47 np0005539552 nova_compute[233724]: 2025-11-29 08:39:47.570 233728 DEBUG oslo_concurrency.lockutils [req-0f61adbf-48f8-4c7e-b991-5ab233c3ae70 req-04c77b0e-cedc-4e83-b423-c9e6a860011d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-ee3de57a-2652-4819-880a-6217c00a67a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:39:47 np0005539552 nova_compute[233724]: 2025-11-29 08:39:47.571 233728 DEBUG nova.compute.manager [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:39:47 np0005539552 nova_compute[233724]: 2025-11-29 08:39:47.577 233728 DEBUG oslo_concurrency.lockutils [req-05d775fb-653a-4c17-96d8-d99a1cf91e2e req-e8fa0906-b028-45e3-aa70-080f7af19994 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-ee3de57a-2652-4819-880a-6217c00a67a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:39:47 np0005539552 nova_compute[233724]: 2025-11-29 08:39:47.577 233728 DEBUG nova.network.neutron [req-05d775fb-653a-4c17-96d8-d99a1cf91e2e req-e8fa0906-b028-45e3-aa70-080f7af19994 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Refreshing network info cache for port dfb097a1-c82e-41b6-9a9f-57a7771a6e0a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:39:47 np0005539552 nova_compute[233724]: 2025-11-29 08:39:47.799 233728 DEBUG oslo_concurrency.lockutils [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:47 np0005539552 nova_compute[233724]: 2025-11-29 08:39:47.801 233728 DEBUG oslo_concurrency.lockutils [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:47 np0005539552 nova_compute[233724]: 2025-11-29 08:39:47.810 233728 DEBUG nova.virt.hardware [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:39:47 np0005539552 nova_compute[233724]: 2025-11-29 08:39:47.811 233728 INFO nova.compute.claims [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:39:47 np0005539552 nova_compute[233724]: 2025-11-29 08:39:47.878 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:48 np0005539552 nova_compute[233724]: 2025-11-29 08:39:48.127 233728 DEBUG oslo_concurrency.processutils [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:39:48 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1089873920' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:39:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:48.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:48 np0005539552 nova_compute[233724]: 2025-11-29 08:39:48.568 233728 DEBUG oslo_concurrency.processutils [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:48 np0005539552 nova_compute[233724]: 2025-11-29 08:39:48.581 233728 DEBUG nova.compute.provider_tree [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:39:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:39:48 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/371112818' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:39:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:39:48 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/371112818' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:39:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:48.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:48 np0005539552 nova_compute[233724]: 2025-11-29 08:39:48.934 233728 DEBUG nova.scheduler.client.report [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:39:48 np0005539552 nova_compute[233724]: 2025-11-29 08:39:48.985 233728 DEBUG oslo_concurrency.lockutils [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.185s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:48 np0005539552 nova_compute[233724]: 2025-11-29 08:39:48.986 233728 DEBUG nova.compute.manager [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.065 233728 DEBUG nova.compute.manager [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.066 233728 DEBUG nova.network.neutron [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.095 233728 INFO nova.virt.libvirt.driver [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.249 233728 DEBUG nova.compute.manager [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:39:49 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e388 e388: 3 total, 3 up, 3 in
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.335 233728 DEBUG oslo_concurrency.lockutils [None req-15d9642b-8972-4d5b-8865-4bf733757c13 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Acquiring lock "608e1017-9da7-4ba6-a346-f047562d380b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.336 233728 DEBUG oslo_concurrency.lockutils [None req-15d9642b-8972-4d5b-8865-4bf733757c13 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lock "608e1017-9da7-4ba6-a346-f047562d380b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.336 233728 DEBUG oslo_concurrency.lockutils [None req-15d9642b-8972-4d5b-8865-4bf733757c13 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Acquiring lock "608e1017-9da7-4ba6-a346-f047562d380b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.337 233728 DEBUG oslo_concurrency.lockutils [None req-15d9642b-8972-4d5b-8865-4bf733757c13 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lock "608e1017-9da7-4ba6-a346-f047562d380b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.337 233728 DEBUG oslo_concurrency.lockutils [None req-15d9642b-8972-4d5b-8865-4bf733757c13 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lock "608e1017-9da7-4ba6-a346-f047562d380b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.339 233728 INFO nova.compute.manager [None req-15d9642b-8972-4d5b-8865-4bf733757c13 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Terminating instance#033[00m
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.341 233728 DEBUG nova.compute.manager [None req-15d9642b-8972-4d5b-8865-4bf733757c13 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.368 233728 DEBUG nova.policy [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4774e2851bc6407cb0fcde15bd24d1b3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0471b9b208874403aa3f0fbe7504ad19', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.372 233728 DEBUG nova.compute.manager [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.373 233728 DEBUG nova.virt.libvirt.driver [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.373 233728 INFO nova.virt.libvirt.driver [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Creating image(s)#033[00m
Nov 29 03:39:49 np0005539552 kernel: tap2769598c-70 (unregistering): left promiscuous mode
Nov 29 03:39:49 np0005539552 NetworkManager[48926]: <info>  [1764405589.4057] device (tap2769598c-70): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.410 233728 DEBUG nova.storage.rbd_utils [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image fdf22d9d-2476-4990-b493-ae3ab31a8bb8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:39:49 np0005539552 ovn_controller[133798]: 2025-11-29T08:39:49Z|00844|binding|INFO|Releasing lport 2769598c-70f6-487d-b82b-5c1920b0e91d from this chassis (sb_readonly=0)
Nov 29 03:39:49 np0005539552 ovn_controller[133798]: 2025-11-29T08:39:49Z|00845|binding|INFO|Setting lport 2769598c-70f6-487d-b82b-5c1920b0e91d down in Southbound
Nov 29 03:39:49 np0005539552 ovn_controller[133798]: 2025-11-29T08:39:49Z|00846|binding|INFO|Removing iface tap2769598c-70 ovn-installed in OVS
Nov 29 03:39:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:49.426 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:d3:7a 10.100.0.12'], port_security=['fa:16:3e:96:d3:7a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '608e1017-9da7-4ba6-a346-f047562d380b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0430aba5-d0d7-4d98-ad87-552e6639c190', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8d5e30b74e6449dd90ecb667977d1fe9', 'neutron:revision_number': '8', 'neutron:security_group_ids': '52d8fe9a-0c55-4eea-ab3d-17059ad4962d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75451e9b-c915-4a2b-97ed-6cc2296328f6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=2769598c-70f6-487d-b82b-5c1920b0e91d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:39:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:49.427 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 2769598c-70f6-487d-b82b-5c1920b0e91d in datapath 0430aba5-d0d7-4d98-ad87-552e6639c190 unbound from our chassis#033[00m
Nov 29 03:39:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:49.428 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0430aba5-d0d7-4d98-ad87-552e6639c190, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:39:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:49.430 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6a003f2b-074c-499f-b951-852e44924c0b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:49.430 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190 namespace which is not needed anymore#033[00m
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.445 233728 DEBUG nova.storage.rbd_utils [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image fdf22d9d-2476-4990-b493-ae3ab31a8bb8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:39:49 np0005539552 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d000000b7.scope: Deactivated successfully.
Nov 29 03:39:49 np0005539552 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d000000b7.scope: Consumed 15.796s CPU time.
Nov 29 03:39:49 np0005539552 systemd-machined[196379]: Machine qemu-85-instance-000000b7 terminated.
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.484 233728 DEBUG nova.storage.rbd_utils [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image fdf22d9d-2476-4990-b493-ae3ab31a8bb8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.488 233728 DEBUG oslo_concurrency.processutils [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.517 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:49 np0005539552 neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190[310550]: [NOTICE]   (310554) : haproxy version is 2.8.14-c23fe91
Nov 29 03:39:49 np0005539552 neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190[310550]: [NOTICE]   (310554) : path to executable is /usr/sbin/haproxy
Nov 29 03:39:49 np0005539552 neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190[310550]: [WARNING]  (310554) : Exiting Master process...
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.561 233728 DEBUG oslo_concurrency.processutils [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:49 np0005539552 neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190[310550]: [ALERT]    (310554) : Current worker (310556) exited with code 143 (Terminated)
Nov 29 03:39:49 np0005539552 neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190[310550]: [WARNING]  (310554) : All workers exited. Exiting... (0)
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.562 233728 DEBUG oslo_concurrency.lockutils [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.563 233728 DEBUG oslo_concurrency.lockutils [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:49 np0005539552 systemd[1]: libpod-3bff04941459125e7f0affae535ed6d45f09c9fafb0e604aa1779e66d240a778.scope: Deactivated successfully.
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.563 233728 DEBUG oslo_concurrency.lockutils [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:49 np0005539552 podman[311535]: 2025-11-29 08:39:49.570090961 +0000 UTC m=+0.045912847 container died 3bff04941459125e7f0affae535ed6d45f09c9fafb0e604aa1779e66d240a778 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.593 233728 DEBUG nova.storage.rbd_utils [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image fdf22d9d-2476-4990-b493-ae3ab31a8bb8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:39:49 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3bff04941459125e7f0affae535ed6d45f09c9fafb0e604aa1779e66d240a778-userdata-shm.mount: Deactivated successfully.
Nov 29 03:39:49 np0005539552 systemd[1]: var-lib-containers-storage-overlay-07eda304fb912086f3215d499bff50ee06a6a4cf0e824fbbdac30fffc943709e-merged.mount: Deactivated successfully.
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.603 233728 DEBUG oslo_concurrency.processutils [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 fdf22d9d-2476-4990-b493-ae3ab31a8bb8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:49 np0005539552 podman[311535]: 2025-11-29 08:39:49.614151187 +0000 UTC m=+0.089973073 container cleanup 3bff04941459125e7f0affae535ed6d45f09c9fafb0e604aa1779e66d240a778 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:39:49 np0005539552 systemd[1]: libpod-conmon-3bff04941459125e7f0affae535ed6d45f09c9fafb0e604aa1779e66d240a778.scope: Deactivated successfully.
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.635 233728 INFO nova.virt.libvirt.driver [-] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Instance destroyed successfully.#033[00m
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.635 233728 DEBUG nova.objects.instance [None req-15d9642b-8972-4d5b-8865-4bf733757c13 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lazy-loading 'resources' on Instance uuid 608e1017-9da7-4ba6-a346-f047562d380b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.659 233728 DEBUG nova.virt.libvirt.vif [None req-15d9642b-8972-4d5b-8865-4bf733757c13 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:38:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-2075426497',display_name='tempest-TestMinimumBasicScenario-server-2075426497',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-2075426497',id=183,image_ref='91e66a4f-7b40-4f67-810c-3642968fad68',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEB08XnJEDg+1P30+zEgbUnzYodIZ3g6KGGPd+kzRxtGlihlN0qT4raS3B+Ikqn+VjW8vQOCmeKi8zzD2w95k51XZW3izu1i7RTrHUi/m2K00I63bNtT+aDtKNR89OWXnQ==',key_name='tempest-TestMinimumBasicScenario-1327155599',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:38:30Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8d5e30b74e6449dd90ecb667977d1fe9',ramdisk_id='',reservation_id='r-rh2ezghd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='91e66a4f-7b40-4f67-810c-3642968fad68',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestMinimumBasicScenario-1569311049',owner_user_name='tempest-TestMinimumBasicScenario-1569311049-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:38:57Z,user_data=None,user_id='e8b20745b2d14f70b64a43335faed2f4',uuid=608e1017-9da7-4ba6-a346-f047562d380b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2769598c-70f6-487d-b82b-5c1920b0e91d", "address": "fa:16:3e:96:d3:7a", "network": {"id": "0430aba5-d0d7-4d98-ad87-552e6639c190", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1917226298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e30b74e6449dd90ecb667977d1fe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2769598c-70", "ovs_interfaceid": "2769598c-70f6-487d-b82b-5c1920b0e91d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.659 233728 DEBUG nova.network.os_vif_util [None req-15d9642b-8972-4d5b-8865-4bf733757c13 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Converting VIF {"id": "2769598c-70f6-487d-b82b-5c1920b0e91d", "address": "fa:16:3e:96:d3:7a", "network": {"id": "0430aba5-d0d7-4d98-ad87-552e6639c190", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1917226298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e30b74e6449dd90ecb667977d1fe9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2769598c-70", "ovs_interfaceid": "2769598c-70f6-487d-b82b-5c1920b0e91d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.660 233728 DEBUG nova.network.os_vif_util [None req-15d9642b-8972-4d5b-8865-4bf733757c13 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:96:d3:7a,bridge_name='br-int',has_traffic_filtering=True,id=2769598c-70f6-487d-b82b-5c1920b0e91d,network=Network(0430aba5-d0d7-4d98-ad87-552e6639c190),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2769598c-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.660 233728 DEBUG os_vif [None req-15d9642b-8972-4d5b-8865-4bf733757c13 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:96:d3:7a,bridge_name='br-int',has_traffic_filtering=True,id=2769598c-70f6-487d-b82b-5c1920b0e91d,network=Network(0430aba5-d0d7-4d98-ad87-552e6639c190),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2769598c-70') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.663 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.663 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2769598c-70, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.665 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.667 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.669 233728 INFO os_vif [None req-15d9642b-8972-4d5b-8865-4bf733757c13 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:96:d3:7a,bridge_name='br-int',has_traffic_filtering=True,id=2769598c-70f6-487d-b82b-5c1920b0e91d,network=Network(0430aba5-d0d7-4d98-ad87-552e6639c190),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2769598c-70')#033[00m
Nov 29 03:39:49 np0005539552 podman[311594]: 2025-11-29 08:39:49.680316628 +0000 UTC m=+0.043851722 container remove 3bff04941459125e7f0affae535ed6d45f09c9fafb0e604aa1779e66d240a778 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:39:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:49.685 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[5a0b273a-b39b-4347-8eba-238669652afc]: (4, ('Sat Nov 29 08:39:49 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190 (3bff04941459125e7f0affae535ed6d45f09c9fafb0e604aa1779e66d240a778)\n3bff04941459125e7f0affae535ed6d45f09c9fafb0e604aa1779e66d240a778\nSat Nov 29 08:39:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190 (3bff04941459125e7f0affae535ed6d45f09c9fafb0e604aa1779e66d240a778)\n3bff04941459125e7f0affae535ed6d45f09c9fafb0e604aa1779e66d240a778\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:49.686 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3647df7d-c0be-4054-a71e-105c641e74d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:49.687 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0430aba5-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.689 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:49 np0005539552 kernel: tap0430aba5-d0: left promiscuous mode
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.704 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:49.705 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f15a9629-b88e-4885-9798-1298dd9b6a75]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:49.723 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[940b3e21-e8b0-4ff7-8eed-8c82e6b2cd14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:49.724 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[43e22ff5-79ed-4206-a0af-faa5fa0fc732]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:49.739 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[102ba905-0123-4dd8-b23d-5941dbbf2185]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 858268, 'reachable_time': 43282, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311643, 'error': None, 'target': 'ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:49.742 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0430aba5-d0d7-4d98-ad87-552e6639c190 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:39:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:49.742 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[f0b2c442-cc3a-486d-b06a-65efa52fc6bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:49 np0005539552 systemd[1]: run-netns-ovnmeta\x2d0430aba5\x2dd0d7\x2d4d98\x2dad87\x2d552e6639c190.mount: Deactivated successfully.
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.760 233728 DEBUG nova.compute.manager [req-7de0d8b3-308d-4150-8c0c-8aa7d99a9271 req-83e0331b-218e-490a-850f-a6815b50512f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Received event network-vif-unplugged-2769598c-70f6-487d-b82b-5c1920b0e91d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.761 233728 DEBUG oslo_concurrency.lockutils [req-7de0d8b3-308d-4150-8c0c-8aa7d99a9271 req-83e0331b-218e-490a-850f-a6815b50512f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "608e1017-9da7-4ba6-a346-f047562d380b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.761 233728 DEBUG oslo_concurrency.lockutils [req-7de0d8b3-308d-4150-8c0c-8aa7d99a9271 req-83e0331b-218e-490a-850f-a6815b50512f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "608e1017-9da7-4ba6-a346-f047562d380b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.762 233728 DEBUG oslo_concurrency.lockutils [req-7de0d8b3-308d-4150-8c0c-8aa7d99a9271 req-83e0331b-218e-490a-850f-a6815b50512f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "608e1017-9da7-4ba6-a346-f047562d380b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.762 233728 DEBUG nova.compute.manager [req-7de0d8b3-308d-4150-8c0c-8aa7d99a9271 req-83e0331b-218e-490a-850f-a6815b50512f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] No waiting events found dispatching network-vif-unplugged-2769598c-70f6-487d-b82b-5c1920b0e91d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.762 233728 DEBUG nova.compute.manager [req-7de0d8b3-308d-4150-8c0c-8aa7d99a9271 req-83e0331b-218e-490a-850f-a6815b50512f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Received event network-vif-unplugged-2769598c-70f6-487d-b82b-5c1920b0e91d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.834 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.835 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:39:49 np0005539552 nova_compute[233724]: 2025-11-29 08:39:49.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:39:50 np0005539552 nova_compute[233724]: 2025-11-29 08:39:50.188 233728 DEBUG nova.network.neutron [req-05d775fb-653a-4c17-96d8-d99a1cf91e2e req-e8fa0906-b028-45e3-aa70-080f7af19994 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Updated VIF entry in instance network info cache for port dfb097a1-c82e-41b6-9a9f-57a7771a6e0a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:39:50 np0005539552 nova_compute[233724]: 2025-11-29 08:39:50.189 233728 DEBUG nova.network.neutron [req-05d775fb-653a-4c17-96d8-d99a1cf91e2e req-e8fa0906-b028-45e3-aa70-080f7af19994 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Updating instance_info_cache with network_info: [{"id": "dfb097a1-c82e-41b6-9a9f-57a7771a6e0a", "address": "fa:16:3e:0b:a6:18", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfb097a1-c8", "ovs_interfaceid": "dfb097a1-c82e-41b6-9a9f-57a7771a6e0a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:39:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:50.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:50.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:50 np0005539552 nova_compute[233724]: 2025-11-29 08:39:50.958 233728 DEBUG oslo_concurrency.lockutils [req-05d775fb-653a-4c17-96d8-d99a1cf91e2e req-e8fa0906-b028-45e3-aa70-080f7af19994 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-ee3de57a-2652-4819-880a-6217c00a67a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:39:51 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e389 e389: 3 total, 3 up, 3 in
Nov 29 03:39:51 np0005539552 nova_compute[233724]: 2025-11-29 08:39:51.458 233728 DEBUG oslo_concurrency.processutils [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 fdf22d9d-2476-4990-b493-ae3ab31a8bb8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.855s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:51 np0005539552 nova_compute[233724]: 2025-11-29 08:39:51.538 233728 DEBUG nova.storage.rbd_utils [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] resizing rbd image fdf22d9d-2476-4990-b493-ae3ab31a8bb8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:39:52 np0005539552 nova_compute[233724]: 2025-11-29 08:39:52.152 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:52 np0005539552 nova_compute[233724]: 2025-11-29 08:39:52.248 233728 DEBUG nova.objects.instance [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lazy-loading 'migration_context' on Instance uuid fdf22d9d-2476-4990-b493-ae3ab31a8bb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:39:52 np0005539552 nova_compute[233724]: 2025-11-29 08:39:52.258 233728 DEBUG nova.network.neutron [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Successfully created port: 7528e8e3-686a-4176-98d8-1f4f2e96b5c0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:39:52 np0005539552 nova_compute[233724]: 2025-11-29 08:39:52.283 233728 DEBUG nova.virt.libvirt.driver [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:39:52 np0005539552 nova_compute[233724]: 2025-11-29 08:39:52.284 233728 DEBUG nova.virt.libvirt.driver [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Ensure instance console log exists: /var/lib/nova/instances/fdf22d9d-2476-4990-b493-ae3ab31a8bb8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:39:52 np0005539552 nova_compute[233724]: 2025-11-29 08:39:52.284 233728 DEBUG oslo_concurrency.lockutils [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:52 np0005539552 nova_compute[233724]: 2025-11-29 08:39:52.284 233728 DEBUG oslo_concurrency.lockutils [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:52 np0005539552 nova_compute[233724]: 2025-11-29 08:39:52.285 233728 DEBUG oslo_concurrency.lockutils [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:52 np0005539552 nova_compute[233724]: 2025-11-29 08:39:52.555 233728 DEBUG nova.compute.manager [req-d55b8a06-26de-4468-afbb-9f4c976a567c req-8037e252-1ea8-4679-8f74-7814197a1433 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Received event network-vif-plugged-2769598c-70f6-487d-b82b-5c1920b0e91d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:39:52 np0005539552 nova_compute[233724]: 2025-11-29 08:39:52.555 233728 DEBUG oslo_concurrency.lockutils [req-d55b8a06-26de-4468-afbb-9f4c976a567c req-8037e252-1ea8-4679-8f74-7814197a1433 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "608e1017-9da7-4ba6-a346-f047562d380b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:52 np0005539552 nova_compute[233724]: 2025-11-29 08:39:52.556 233728 DEBUG oslo_concurrency.lockutils [req-d55b8a06-26de-4468-afbb-9f4c976a567c req-8037e252-1ea8-4679-8f74-7814197a1433 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "608e1017-9da7-4ba6-a346-f047562d380b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:52 np0005539552 nova_compute[233724]: 2025-11-29 08:39:52.556 233728 DEBUG oslo_concurrency.lockutils [req-d55b8a06-26de-4468-afbb-9f4c976a567c req-8037e252-1ea8-4679-8f74-7814197a1433 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "608e1017-9da7-4ba6-a346-f047562d380b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:52 np0005539552 nova_compute[233724]: 2025-11-29 08:39:52.556 233728 DEBUG nova.compute.manager [req-d55b8a06-26de-4468-afbb-9f4c976a567c req-8037e252-1ea8-4679-8f74-7814197a1433 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] No waiting events found dispatching network-vif-plugged-2769598c-70f6-487d-b82b-5c1920b0e91d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:39:52 np0005539552 nova_compute[233724]: 2025-11-29 08:39:52.557 233728 WARNING nova.compute.manager [req-d55b8a06-26de-4468-afbb-9f4c976a567c req-8037e252-1ea8-4679-8f74-7814197a1433 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Received unexpected event network-vif-plugged-2769598c-70f6-487d-b82b-5c1920b0e91d for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:39:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:52.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:52.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:52 np0005539552 nova_compute[233724]: 2025-11-29 08:39:52.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:39:52 np0005539552 nova_compute[233724]: 2025-11-29 08:39:52.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:39:52 np0005539552 nova_compute[233724]: 2025-11-29 08:39:52.925 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:39:53 np0005539552 nova_compute[233724]: 2025-11-29 08:39:53.424 233728 INFO nova.virt.libvirt.driver [None req-15d9642b-8972-4d5b-8865-4bf733757c13 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Deleting instance files /var/lib/nova/instances/608e1017-9da7-4ba6-a346-f047562d380b_del#033[00m
Nov 29 03:39:53 np0005539552 nova_compute[233724]: 2025-11-29 08:39:53.426 233728 INFO nova.virt.libvirt.driver [None req-15d9642b-8972-4d5b-8865-4bf733757c13 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Deletion of /var/lib/nova/instances/608e1017-9da7-4ba6-a346-f047562d380b_del complete#033[00m
Nov 29 03:39:53 np0005539552 nova_compute[233724]: 2025-11-29 08:39:53.541 233728 DEBUG nova.compute.manager [req-62b7d0bb-cb0f-44ce-a3fd-cc7106578919 req-4c72382f-92c2-48ac-877c-5eb90a5b62fe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Received event network-changed-dfb097a1-c82e-41b6-9a9f-57a7771a6e0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:39:53 np0005539552 nova_compute[233724]: 2025-11-29 08:39:53.542 233728 DEBUG nova.compute.manager [req-62b7d0bb-cb0f-44ce-a3fd-cc7106578919 req-4c72382f-92c2-48ac-877c-5eb90a5b62fe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Refreshing instance network info cache due to event network-changed-dfb097a1-c82e-41b6-9a9f-57a7771a6e0a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:39:53 np0005539552 nova_compute[233724]: 2025-11-29 08:39:53.543 233728 DEBUG oslo_concurrency.lockutils [req-62b7d0bb-cb0f-44ce-a3fd-cc7106578919 req-4c72382f-92c2-48ac-877c-5eb90a5b62fe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-ee3de57a-2652-4819-880a-6217c00a67a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:39:53 np0005539552 nova_compute[233724]: 2025-11-29 08:39:53.543 233728 DEBUG oslo_concurrency.lockutils [req-62b7d0bb-cb0f-44ce-a3fd-cc7106578919 req-4c72382f-92c2-48ac-877c-5eb90a5b62fe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-ee3de57a-2652-4819-880a-6217c00a67a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:39:53 np0005539552 nova_compute[233724]: 2025-11-29 08:39:53.544 233728 DEBUG nova.network.neutron [req-62b7d0bb-cb0f-44ce-a3fd-cc7106578919 req-4c72382f-92c2-48ac-877c-5eb90a5b62fe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Refreshing network info cache for port dfb097a1-c82e-41b6-9a9f-57a7771a6e0a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:39:53 np0005539552 nova_compute[233724]: 2025-11-29 08:39:53.598 233728 INFO nova.compute.manager [None req-15d9642b-8972-4d5b-8865-4bf733757c13 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Took 4.26 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:39:53 np0005539552 nova_compute[233724]: 2025-11-29 08:39:53.599 233728 DEBUG oslo.service.loopingcall [None req-15d9642b-8972-4d5b-8865-4bf733757c13 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:39:53 np0005539552 nova_compute[233724]: 2025-11-29 08:39:53.600 233728 DEBUG nova.compute.manager [-] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:39:53 np0005539552 nova_compute[233724]: 2025-11-29 08:39:53.600 233728 DEBUG nova.network.neutron [-] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:39:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:53 np0005539552 nova_compute[233724]: 2025-11-29 08:39:53.771 233728 DEBUG nova.network.neutron [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Successfully updated port: 7528e8e3-686a-4176-98d8-1f4f2e96b5c0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:39:54 np0005539552 nova_compute[233724]: 2025-11-29 08:39:54.101 233728 DEBUG oslo_concurrency.lockutils [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "refresh_cache-fdf22d9d-2476-4990-b493-ae3ab31a8bb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:39:54 np0005539552 nova_compute[233724]: 2025-11-29 08:39:54.101 233728 DEBUG oslo_concurrency.lockutils [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquired lock "refresh_cache-fdf22d9d-2476-4990-b493-ae3ab31a8bb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:39:54 np0005539552 nova_compute[233724]: 2025-11-29 08:39:54.102 233728 DEBUG nova.network.neutron [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:39:54 np0005539552 nova_compute[233724]: 2025-11-29 08:39:54.301 233728 DEBUG nova.network.neutron [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:39:54 np0005539552 nova_compute[233724]: 2025-11-29 08:39:54.455 233728 DEBUG nova.network.neutron [-] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:39:54 np0005539552 nova_compute[233724]: 2025-11-29 08:39:54.475 233728 INFO nova.compute.manager [-] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Took 0.88 seconds to deallocate network for instance.#033[00m
Nov 29 03:39:54 np0005539552 nova_compute[233724]: 2025-11-29 08:39:54.529 233728 DEBUG oslo_concurrency.lockutils [None req-15d9642b-8972-4d5b-8865-4bf733757c13 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:54 np0005539552 nova_compute[233724]: 2025-11-29 08:39:54.529 233728 DEBUG oslo_concurrency.lockutils [None req-15d9642b-8972-4d5b-8865-4bf733757c13 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:54.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:54 np0005539552 nova_compute[233724]: 2025-11-29 08:39:54.685 233728 DEBUG nova.compute.manager [req-f5f7d8a2-25a8-4662-89da-3921898828bf req-34405eba-3793-4a70-a089-cc2d0948d435 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Received event network-vif-deleted-2769598c-70f6-487d-b82b-5c1920b0e91d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:39:54 np0005539552 nova_compute[233724]: 2025-11-29 08:39:54.686 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:54 np0005539552 nova_compute[233724]: 2025-11-29 08:39:54.789 233728 DEBUG oslo_concurrency.processutils [None req-15d9642b-8972-4d5b-8865-4bf733757c13 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:54.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:54 np0005539552 nova_compute[233724]: 2025-11-29 08:39:54.920 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:39:55 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:39:55 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2618872778' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:39:55 np0005539552 nova_compute[233724]: 2025-11-29 08:39:55.215 233728 DEBUG oslo_concurrency.processutils [None req-15d9642b-8972-4d5b-8865-4bf733757c13 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:55 np0005539552 nova_compute[233724]: 2025-11-29 08:39:55.226 233728 DEBUG nova.compute.provider_tree [None req-15d9642b-8972-4d5b-8865-4bf733757c13 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:39:55 np0005539552 nova_compute[233724]: 2025-11-29 08:39:55.252 233728 DEBUG nova.scheduler.client.report [None req-15d9642b-8972-4d5b-8865-4bf733757c13 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:39:55 np0005539552 nova_compute[233724]: 2025-11-29 08:39:55.280 233728 DEBUG oslo_concurrency.lockutils [None req-15d9642b-8972-4d5b-8865-4bf733757c13 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:55 np0005539552 nova_compute[233724]: 2025-11-29 08:39:55.303 233728 INFO nova.scheduler.client.report [None req-15d9642b-8972-4d5b-8865-4bf733757c13 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Deleted allocations for instance 608e1017-9da7-4ba6-a346-f047562d380b#033[00m
Nov 29 03:39:55 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e390 e390: 3 total, 3 up, 3 in
Nov 29 03:39:55 np0005539552 nova_compute[233724]: 2025-11-29 08:39:55.359 233728 DEBUG nova.network.neutron [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Updating instance_info_cache with network_info: [{"id": "7528e8e3-686a-4176-98d8-1f4f2e96b5c0", "address": "fa:16:3e:f4:bd:c7", "network": {"id": "f3d6a66c-1acd-4ae3-9639-b6444469c1fc", "bridge": "br-int", "label": "tempest-network-smoke--216176959", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7528e8e3-68", "ovs_interfaceid": "7528e8e3-686a-4176-98d8-1f4f2e96b5c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:39:55 np0005539552 nova_compute[233724]: 2025-11-29 08:39:55.405 233728 DEBUG oslo_concurrency.lockutils [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Releasing lock "refresh_cache-fdf22d9d-2476-4990-b493-ae3ab31a8bb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:39:55 np0005539552 nova_compute[233724]: 2025-11-29 08:39:55.406 233728 DEBUG nova.compute.manager [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Instance network_info: |[{"id": "7528e8e3-686a-4176-98d8-1f4f2e96b5c0", "address": "fa:16:3e:f4:bd:c7", "network": {"id": "f3d6a66c-1acd-4ae3-9639-b6444469c1fc", "bridge": "br-int", "label": "tempest-network-smoke--216176959", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7528e8e3-68", "ovs_interfaceid": "7528e8e3-686a-4176-98d8-1f4f2e96b5c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:39:55 np0005539552 nova_compute[233724]: 2025-11-29 08:39:55.408 233728 DEBUG nova.network.neutron [req-62b7d0bb-cb0f-44ce-a3fd-cc7106578919 req-4c72382f-92c2-48ac-877c-5eb90a5b62fe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Updated VIF entry in instance network info cache for port dfb097a1-c82e-41b6-9a9f-57a7771a6e0a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:39:55 np0005539552 nova_compute[233724]: 2025-11-29 08:39:55.409 233728 DEBUG nova.network.neutron [req-62b7d0bb-cb0f-44ce-a3fd-cc7106578919 req-4c72382f-92c2-48ac-877c-5eb90a5b62fe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Updating instance_info_cache with network_info: [{"id": "dfb097a1-c82e-41b6-9a9f-57a7771a6e0a", "address": "fa:16:3e:0b:a6:18", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfb097a1-c8", "ovs_interfaceid": "dfb097a1-c82e-41b6-9a9f-57a7771a6e0a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:39:55 np0005539552 nova_compute[233724]: 2025-11-29 08:39:55.415 233728 DEBUG nova.virt.libvirt.driver [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Start _get_guest_xml network_info=[{"id": "7528e8e3-686a-4176-98d8-1f4f2e96b5c0", "address": "fa:16:3e:f4:bd:c7", "network": {"id": "f3d6a66c-1acd-4ae3-9639-b6444469c1fc", "bridge": "br-int", "label": "tempest-network-smoke--216176959", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7528e8e3-68", "ovs_interfaceid": "7528e8e3-686a-4176-98d8-1f4f2e96b5c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:39:55 np0005539552 nova_compute[233724]: 2025-11-29 08:39:55.421 233728 DEBUG oslo_concurrency.lockutils [None req-15d9642b-8972-4d5b-8865-4bf733757c13 e8b20745b2d14f70b64a43335faed2f4 8d5e30b74e6449dd90ecb667977d1fe9 - - default default] Lock "608e1017-9da7-4ba6-a346-f047562d380b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.085s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:55 np0005539552 nova_compute[233724]: 2025-11-29 08:39:55.426 233728 DEBUG oslo_concurrency.lockutils [req-62b7d0bb-cb0f-44ce-a3fd-cc7106578919 req-4c72382f-92c2-48ac-877c-5eb90a5b62fe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-ee3de57a-2652-4819-880a-6217c00a67a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:39:55 np0005539552 nova_compute[233724]: 2025-11-29 08:39:55.428 233728 WARNING nova.virt.libvirt.driver [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:39:55 np0005539552 nova_compute[233724]: 2025-11-29 08:39:55.436 233728 DEBUG nova.virt.libvirt.host [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:39:55 np0005539552 nova_compute[233724]: 2025-11-29 08:39:55.437 233728 DEBUG nova.virt.libvirt.host [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:39:55 np0005539552 nova_compute[233724]: 2025-11-29 08:39:55.442 233728 DEBUG nova.virt.libvirt.host [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:39:55 np0005539552 nova_compute[233724]: 2025-11-29 08:39:55.442 233728 DEBUG nova.virt.libvirt.host [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:39:55 np0005539552 nova_compute[233724]: 2025-11-29 08:39:55.444 233728 DEBUG nova.virt.libvirt.driver [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:39:55 np0005539552 nova_compute[233724]: 2025-11-29 08:39:55.444 233728 DEBUG nova.virt.hardware [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:39:55 np0005539552 nova_compute[233724]: 2025-11-29 08:39:55.445 233728 DEBUG nova.virt.hardware [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:39:55 np0005539552 nova_compute[233724]: 2025-11-29 08:39:55.446 233728 DEBUG nova.virt.hardware [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:39:55 np0005539552 nova_compute[233724]: 2025-11-29 08:39:55.446 233728 DEBUG nova.virt.hardware [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:39:55 np0005539552 nova_compute[233724]: 2025-11-29 08:39:55.446 233728 DEBUG nova.virt.hardware [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:39:55 np0005539552 nova_compute[233724]: 2025-11-29 08:39:55.447 233728 DEBUG nova.virt.hardware [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:39:55 np0005539552 nova_compute[233724]: 2025-11-29 08:39:55.448 233728 DEBUG nova.virt.hardware [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:39:55 np0005539552 nova_compute[233724]: 2025-11-29 08:39:55.449 233728 DEBUG nova.virt.hardware [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:39:55 np0005539552 nova_compute[233724]: 2025-11-29 08:39:55.450 233728 DEBUG nova.virt.hardware [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:39:55 np0005539552 nova_compute[233724]: 2025-11-29 08:39:55.451 233728 DEBUG nova.virt.hardware [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:39:55 np0005539552 nova_compute[233724]: 2025-11-29 08:39:55.451 233728 DEBUG nova.virt.hardware [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:39:55 np0005539552 nova_compute[233724]: 2025-11-29 08:39:55.457 233728 DEBUG oslo_concurrency.processutils [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:55 np0005539552 nova_compute[233724]: 2025-11-29 08:39:55.736 233728 DEBUG nova.compute.manager [req-0e336ad0-27f8-49e3-aa63-7a0153161728 req-51c53faf-cffc-4d77-91b9-2219a4d87e05 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Received event network-changed-7528e8e3-686a-4176-98d8-1f4f2e96b5c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:39:55 np0005539552 nova_compute[233724]: 2025-11-29 08:39:55.737 233728 DEBUG nova.compute.manager [req-0e336ad0-27f8-49e3-aa63-7a0153161728 req-51c53faf-cffc-4d77-91b9-2219a4d87e05 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Refreshing instance network info cache due to event network-changed-7528e8e3-686a-4176-98d8-1f4f2e96b5c0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:39:55 np0005539552 nova_compute[233724]: 2025-11-29 08:39:55.737 233728 DEBUG oslo_concurrency.lockutils [req-0e336ad0-27f8-49e3-aa63-7a0153161728 req-51c53faf-cffc-4d77-91b9-2219a4d87e05 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-fdf22d9d-2476-4990-b493-ae3ab31a8bb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:39:55 np0005539552 nova_compute[233724]: 2025-11-29 08:39:55.738 233728 DEBUG oslo_concurrency.lockutils [req-0e336ad0-27f8-49e3-aa63-7a0153161728 req-51c53faf-cffc-4d77-91b9-2219a4d87e05 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-fdf22d9d-2476-4990-b493-ae3ab31a8bb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:39:55 np0005539552 nova_compute[233724]: 2025-11-29 08:39:55.738 233728 DEBUG nova.network.neutron [req-0e336ad0-27f8-49e3-aa63-7a0153161728 req-51c53faf-cffc-4d77-91b9-2219a4d87e05 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Refreshing network info cache for port 7528e8e3-686a-4176-98d8-1f4f2e96b5c0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:39:55 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:39:55 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2514443920' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:39:55 np0005539552 nova_compute[233724]: 2025-11-29 08:39:55.883 233728 DEBUG oslo_concurrency.processutils [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:55 np0005539552 nova_compute[233724]: 2025-11-29 08:39:55.912 233728 DEBUG nova.storage.rbd_utils [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image fdf22d9d-2476-4990-b493-ae3ab31a8bb8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:39:55 np0005539552 nova_compute[233724]: 2025-11-29 08:39:55.915 233728 DEBUG oslo_concurrency.processutils [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:56 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:39:56 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1916490888' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:39:56 np0005539552 nova_compute[233724]: 2025-11-29 08:39:56.323 233728 DEBUG oslo_concurrency.processutils [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:56 np0005539552 nova_compute[233724]: 2025-11-29 08:39:56.325 233728 DEBUG nova.virt.libvirt.vif [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:39:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1728799900',display_name='tempest-TestNetworkBasicOps-server-1728799900',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1728799900',id=189,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMEojQ/ihonNAxFl4nl8U5Nw8RaAwW1wcuM26oNQCtjyh9KayiU3RHToagneSHpUhEsGczqXRW6M+JoXTxDiy7I/4W8BpV9C3MBvNlpVX23JxQejvPKfmFeVLLv8VNJx4Q==',key_name='tempest-TestNetworkBasicOps-1631943157',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-kqqyxra5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:39:49Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=fdf22d9d-2476-4990-b493-ae3ab31a8bb8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7528e8e3-686a-4176-98d8-1f4f2e96b5c0", "address": "fa:16:3e:f4:bd:c7", "network": {"id": "f3d6a66c-1acd-4ae3-9639-b6444469c1fc", "bridge": "br-int", "label": "tempest-network-smoke--216176959", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7528e8e3-68", "ovs_interfaceid": "7528e8e3-686a-4176-98d8-1f4f2e96b5c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:39:56 np0005539552 nova_compute[233724]: 2025-11-29 08:39:56.326 233728 DEBUG nova.network.os_vif_util [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "7528e8e3-686a-4176-98d8-1f4f2e96b5c0", "address": "fa:16:3e:f4:bd:c7", "network": {"id": "f3d6a66c-1acd-4ae3-9639-b6444469c1fc", "bridge": "br-int", "label": "tempest-network-smoke--216176959", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7528e8e3-68", "ovs_interfaceid": "7528e8e3-686a-4176-98d8-1f4f2e96b5c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:39:56 np0005539552 nova_compute[233724]: 2025-11-29 08:39:56.327 233728 DEBUG nova.network.os_vif_util [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:bd:c7,bridge_name='br-int',has_traffic_filtering=True,id=7528e8e3-686a-4176-98d8-1f4f2e96b5c0,network=Network(f3d6a66c-1acd-4ae3-9639-b6444469c1fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7528e8e3-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:39:56 np0005539552 nova_compute[233724]: 2025-11-29 08:39:56.329 233728 DEBUG nova.objects.instance [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lazy-loading 'pci_devices' on Instance uuid fdf22d9d-2476-4990-b493-ae3ab31a8bb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:39:56 np0005539552 nova_compute[233724]: 2025-11-29 08:39:56.482 233728 DEBUG nova.virt.libvirt.driver [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:39:56 np0005539552 nova_compute[233724]:  <uuid>fdf22d9d-2476-4990-b493-ae3ab31a8bb8</uuid>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:  <name>instance-000000bd</name>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:39:56 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:      <nova:name>tempest-TestNetworkBasicOps-server-1728799900</nova:name>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:39:55</nova:creationTime>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:39:56 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:        <nova:user uuid="4774e2851bc6407cb0fcde15bd24d1b3">tempest-TestNetworkBasicOps-828399474-project-member</nova:user>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:        <nova:project uuid="0471b9b208874403aa3f0fbe7504ad19">tempest-TestNetworkBasicOps-828399474</nova:project>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:        <nova:port uuid="7528e8e3-686a-4176-98d8-1f4f2e96b5c0">
Nov 29 03:39:56 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.27" ipVersion="4"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:      <entry name="serial">fdf22d9d-2476-4990-b493-ae3ab31a8bb8</entry>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:      <entry name="uuid">fdf22d9d-2476-4990-b493-ae3ab31a8bb8</entry>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:39:56 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/fdf22d9d-2476-4990-b493-ae3ab31a8bb8_disk">
Nov 29 03:39:56 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:39:56 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:39:56 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/fdf22d9d-2476-4990-b493-ae3ab31a8bb8_disk.config">
Nov 29 03:39:56 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:39:56 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:39:56 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:f4:bd:c7"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:      <target dev="tap7528e8e3-68"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:39:56 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/fdf22d9d-2476-4990-b493-ae3ab31a8bb8/console.log" append="off"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:39:56 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:39:56 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:39:56 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:39:56 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:39:56 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:39:56 np0005539552 nova_compute[233724]: 2025-11-29 08:39:56.483 233728 DEBUG nova.compute.manager [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Preparing to wait for external event network-vif-plugged-7528e8e3-686a-4176-98d8-1f4f2e96b5c0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:39:56 np0005539552 nova_compute[233724]: 2025-11-29 08:39:56.484 233728 DEBUG oslo_concurrency.lockutils [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "fdf22d9d-2476-4990-b493-ae3ab31a8bb8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:56 np0005539552 nova_compute[233724]: 2025-11-29 08:39:56.485 233728 DEBUG oslo_concurrency.lockutils [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "fdf22d9d-2476-4990-b493-ae3ab31a8bb8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:56 np0005539552 nova_compute[233724]: 2025-11-29 08:39:56.485 233728 DEBUG oslo_concurrency.lockutils [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "fdf22d9d-2476-4990-b493-ae3ab31a8bb8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:56 np0005539552 nova_compute[233724]: 2025-11-29 08:39:56.486 233728 DEBUG nova.virt.libvirt.vif [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:39:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1728799900',display_name='tempest-TestNetworkBasicOps-server-1728799900',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1728799900',id=189,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMEojQ/ihonNAxFl4nl8U5Nw8RaAwW1wcuM26oNQCtjyh9KayiU3RHToagneSHpUhEsGczqXRW6M+JoXTxDiy7I/4W8BpV9C3MBvNlpVX23JxQejvPKfmFeVLLv8VNJx4Q==',key_name='tempest-TestNetworkBasicOps-1631943157',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-kqqyxra5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:39:49Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=fdf22d9d-2476-4990-b493-ae3ab31a8bb8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7528e8e3-686a-4176-98d8-1f4f2e96b5c0", "address": "fa:16:3e:f4:bd:c7", "network": {"id": "f3d6a66c-1acd-4ae3-9639-b6444469c1fc", "bridge": "br-int", "label": "tempest-network-smoke--216176959", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7528e8e3-68", "ovs_interfaceid": "7528e8e3-686a-4176-98d8-1f4f2e96b5c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:39:56 np0005539552 nova_compute[233724]: 2025-11-29 08:39:56.486 233728 DEBUG nova.network.os_vif_util [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "7528e8e3-686a-4176-98d8-1f4f2e96b5c0", "address": "fa:16:3e:f4:bd:c7", "network": {"id": "f3d6a66c-1acd-4ae3-9639-b6444469c1fc", "bridge": "br-int", "label": "tempest-network-smoke--216176959", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7528e8e3-68", "ovs_interfaceid": "7528e8e3-686a-4176-98d8-1f4f2e96b5c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:39:56 np0005539552 nova_compute[233724]: 2025-11-29 08:39:56.487 233728 DEBUG nova.network.os_vif_util [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:bd:c7,bridge_name='br-int',has_traffic_filtering=True,id=7528e8e3-686a-4176-98d8-1f4f2e96b5c0,network=Network(f3d6a66c-1acd-4ae3-9639-b6444469c1fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7528e8e3-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:39:56 np0005539552 nova_compute[233724]: 2025-11-29 08:39:56.488 233728 DEBUG os_vif [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:bd:c7,bridge_name='br-int',has_traffic_filtering=True,id=7528e8e3-686a-4176-98d8-1f4f2e96b5c0,network=Network(f3d6a66c-1acd-4ae3-9639-b6444469c1fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7528e8e3-68') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:39:56 np0005539552 nova_compute[233724]: 2025-11-29 08:39:56.489 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:56 np0005539552 nova_compute[233724]: 2025-11-29 08:39:56.489 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:56 np0005539552 nova_compute[233724]: 2025-11-29 08:39:56.490 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:39:56 np0005539552 nova_compute[233724]: 2025-11-29 08:39:56.494 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:56 np0005539552 nova_compute[233724]: 2025-11-29 08:39:56.495 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7528e8e3-68, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:56 np0005539552 nova_compute[233724]: 2025-11-29 08:39:56.496 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7528e8e3-68, col_values=(('external_ids', {'iface-id': '7528e8e3-686a-4176-98d8-1f4f2e96b5c0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f4:bd:c7', 'vm-uuid': 'fdf22d9d-2476-4990-b493-ae3ab31a8bb8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:56 np0005539552 nova_compute[233724]: 2025-11-29 08:39:56.497 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:56 np0005539552 NetworkManager[48926]: <info>  [1764405596.4984] manager: (tap7528e8e3-68): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/374)
Nov 29 03:39:56 np0005539552 nova_compute[233724]: 2025-11-29 08:39:56.500 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:39:56 np0005539552 nova_compute[233724]: 2025-11-29 08:39:56.504 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:56 np0005539552 nova_compute[233724]: 2025-11-29 08:39:56.504 233728 INFO os_vif [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:bd:c7,bridge_name='br-int',has_traffic_filtering=True,id=7528e8e3-686a-4176-98d8-1f4f2e96b5c0,network=Network(f3d6a66c-1acd-4ae3-9639-b6444469c1fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7528e8e3-68')#033[00m
Nov 29 03:39:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:56.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:56 np0005539552 nova_compute[233724]: 2025-11-29 08:39:56.696 233728 DEBUG nova.virt.libvirt.driver [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:39:56 np0005539552 nova_compute[233724]: 2025-11-29 08:39:56.697 233728 DEBUG nova.virt.libvirt.driver [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:39:56 np0005539552 nova_compute[233724]: 2025-11-29 08:39:56.697 233728 DEBUG nova.virt.libvirt.driver [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No VIF found with MAC fa:16:3e:f4:bd:c7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:39:56 np0005539552 nova_compute[233724]: 2025-11-29 08:39:56.698 233728 INFO nova.virt.libvirt.driver [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Using config drive#033[00m
Nov 29 03:39:56 np0005539552 nova_compute[233724]: 2025-11-29 08:39:56.733 233728 DEBUG nova.storage.rbd_utils [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image fdf22d9d-2476-4990-b493-ae3ab31a8bb8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:39:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:56.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:57 np0005539552 nova_compute[233724]: 2025-11-29 08:39:57.071 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:57 np0005539552 nova_compute[233724]: 2025-11-29 08:39:57.397 233728 INFO nova.virt.libvirt.driver [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Creating config drive at /var/lib/nova/instances/fdf22d9d-2476-4990-b493-ae3ab31a8bb8/disk.config#033[00m
Nov 29 03:39:57 np0005539552 nova_compute[233724]: 2025-11-29 08:39:57.408 233728 DEBUG oslo_concurrency.processutils [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fdf22d9d-2476-4990-b493-ae3ab31a8bb8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0i1my3sj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:57 np0005539552 nova_compute[233724]: 2025-11-29 08:39:57.457 233728 DEBUG nova.network.neutron [req-0e336ad0-27f8-49e3-aa63-7a0153161728 req-51c53faf-cffc-4d77-91b9-2219a4d87e05 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Updated VIF entry in instance network info cache for port 7528e8e3-686a-4176-98d8-1f4f2e96b5c0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:39:57 np0005539552 nova_compute[233724]: 2025-11-29 08:39:57.458 233728 DEBUG nova.network.neutron [req-0e336ad0-27f8-49e3-aa63-7a0153161728 req-51c53faf-cffc-4d77-91b9-2219a4d87e05 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Updating instance_info_cache with network_info: [{"id": "7528e8e3-686a-4176-98d8-1f4f2e96b5c0", "address": "fa:16:3e:f4:bd:c7", "network": {"id": "f3d6a66c-1acd-4ae3-9639-b6444469c1fc", "bridge": "br-int", "label": "tempest-network-smoke--216176959", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7528e8e3-68", "ovs_interfaceid": "7528e8e3-686a-4176-98d8-1f4f2e96b5c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:39:57 np0005539552 nova_compute[233724]: 2025-11-29 08:39:57.484 233728 DEBUG oslo_concurrency.lockutils [req-0e336ad0-27f8-49e3-aa63-7a0153161728 req-51c53faf-cffc-4d77-91b9-2219a4d87e05 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-fdf22d9d-2476-4990-b493-ae3ab31a8bb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:39:57 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e391 e391: 3 total, 3 up, 3 in
Nov 29 03:39:57 np0005539552 nova_compute[233724]: 2025-11-29 08:39:57.572 233728 DEBUG oslo_concurrency.processutils [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fdf22d9d-2476-4990-b493-ae3ab31a8bb8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0i1my3sj" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:57 np0005539552 nova_compute[233724]: 2025-11-29 08:39:57.625 233728 DEBUG nova.storage.rbd_utils [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image fdf22d9d-2476-4990-b493-ae3ab31a8bb8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:39:57 np0005539552 nova_compute[233724]: 2025-11-29 08:39:57.632 233728 DEBUG oslo_concurrency.processutils [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fdf22d9d-2476-4990-b493-ae3ab31a8bb8/disk.config fdf22d9d-2476-4990-b493-ae3ab31a8bb8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:57 np0005539552 nova_compute[233724]: 2025-11-29 08:39:57.683 233728 DEBUG oslo_concurrency.lockutils [None req-9148c0f7-52a2-4da9-b27f-631ce63f3e33 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Acquiring lock "ee3de57a-2652-4819-880a-6217c00a67a0" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:57 np0005539552 nova_compute[233724]: 2025-11-29 08:39:57.684 233728 DEBUG oslo_concurrency.lockutils [None req-9148c0f7-52a2-4da9-b27f-631ce63f3e33 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "ee3de57a-2652-4819-880a-6217c00a67a0" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:57 np0005539552 nova_compute[233724]: 2025-11-29 08:39:57.714 233728 INFO nova.compute.manager [None req-9148c0f7-52a2-4da9-b27f-631ce63f3e33 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Detaching volume 35daf555-cff7-4c13-98d4-8c31451470ee#033[00m
Nov 29 03:39:57 np0005539552 nova_compute[233724]: 2025-11-29 08:39:57.858 233728 DEBUG oslo_concurrency.processutils [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fdf22d9d-2476-4990-b493-ae3ab31a8bb8/disk.config fdf22d9d-2476-4990-b493-ae3ab31a8bb8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.226s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:57 np0005539552 nova_compute[233724]: 2025-11-29 08:39:57.859 233728 INFO nova.virt.libvirt.driver [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Deleting local config drive /var/lib/nova/instances/fdf22d9d-2476-4990-b493-ae3ab31a8bb8/disk.config because it was imported into RBD.#033[00m
Nov 29 03:39:57 np0005539552 nova_compute[233724]: 2025-11-29 08:39:57.896 233728 INFO nova.virt.block_device [None req-9148c0f7-52a2-4da9-b27f-631ce63f3e33 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Attempting to driver detach volume 35daf555-cff7-4c13-98d4-8c31451470ee from mountpoint /dev/vdb#033[00m
Nov 29 03:39:57 np0005539552 nova_compute[233724]: 2025-11-29 08:39:57.909 233728 DEBUG nova.virt.libvirt.driver [None req-9148c0f7-52a2-4da9-b27f-631ce63f3e33 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Attempting to detach device vdb from instance ee3de57a-2652-4819-880a-6217c00a67a0 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:39:57 np0005539552 nova_compute[233724]: 2025-11-29 08:39:57.910 233728 DEBUG nova.virt.libvirt.guest [None req-9148c0f7-52a2-4da9-b27f-631ce63f3e33 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:39:57 np0005539552 nova_compute[233724]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:39:57 np0005539552 nova_compute[233724]:  <source protocol="rbd" name="volumes/volume-35daf555-cff7-4c13-98d4-8c31451470ee">
Nov 29 03:39:57 np0005539552 nova_compute[233724]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:39:57 np0005539552 nova_compute[233724]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:39:57 np0005539552 nova_compute[233724]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:39:57 np0005539552 nova_compute[233724]:  </source>
Nov 29 03:39:57 np0005539552 nova_compute[233724]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:39:57 np0005539552 nova_compute[233724]:  <serial>35daf555-cff7-4c13-98d4-8c31451470ee</serial>
Nov 29 03:39:57 np0005539552 nova_compute[233724]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:39:57 np0005539552 nova_compute[233724]: </disk>
Nov 29 03:39:57 np0005539552 nova_compute[233724]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:39:57 np0005539552 kernel: tap7528e8e3-68: entered promiscuous mode
Nov 29 03:39:57 np0005539552 NetworkManager[48926]: <info>  [1764405597.9166] manager: (tap7528e8e3-68): new Tun device (/org/freedesktop/NetworkManager/Devices/375)
Nov 29 03:39:57 np0005539552 nova_compute[233724]: 2025-11-29 08:39:57.918 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:57 np0005539552 ovn_controller[133798]: 2025-11-29T08:39:57Z|00847|binding|INFO|Claiming lport 7528e8e3-686a-4176-98d8-1f4f2e96b5c0 for this chassis.
Nov 29 03:39:57 np0005539552 ovn_controller[133798]: 2025-11-29T08:39:57Z|00848|binding|INFO|7528e8e3-686a-4176-98d8-1f4f2e96b5c0: Claiming fa:16:3e:f4:bd:c7 10.100.0.27
Nov 29 03:39:57 np0005539552 nova_compute[233724]: 2025-11-29 08:39:57.924 233728 INFO nova.virt.libvirt.driver [None req-9148c0f7-52a2-4da9-b27f-631ce63f3e33 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Successfully detached device vdb from instance ee3de57a-2652-4819-880a-6217c00a67a0 from the persistent domain config.#033[00m
Nov 29 03:39:57 np0005539552 nova_compute[233724]: 2025-11-29 08:39:57.925 233728 DEBUG nova.virt.libvirt.driver [None req-9148c0f7-52a2-4da9-b27f-631ce63f3e33 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance ee3de57a-2652-4819-880a-6217c00a67a0 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 29 03:39:57 np0005539552 nova_compute[233724]: 2025-11-29 08:39:57.926 233728 DEBUG nova.virt.libvirt.guest [None req-9148c0f7-52a2-4da9-b27f-631ce63f3e33 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:39:57 np0005539552 nova_compute[233724]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:39:57 np0005539552 nova_compute[233724]:  <source protocol="rbd" name="volumes/volume-35daf555-cff7-4c13-98d4-8c31451470ee">
Nov 29 03:39:57 np0005539552 nova_compute[233724]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:39:57 np0005539552 nova_compute[233724]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:39:57 np0005539552 nova_compute[233724]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:39:57 np0005539552 nova_compute[233724]:  </source>
Nov 29 03:39:57 np0005539552 nova_compute[233724]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:39:57 np0005539552 nova_compute[233724]:  <serial>35daf555-cff7-4c13-98d4-8c31451470ee</serial>
Nov 29 03:39:57 np0005539552 nova_compute[233724]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:39:57 np0005539552 nova_compute[233724]: </disk>
Nov 29 03:39:57 np0005539552 nova_compute[233724]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:39:57 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:57.931 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:bd:c7 10.100.0.27'], port_security=['fa:16:3e:f4:bd:c7 10.100.0.27'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.27/28', 'neutron:device_id': 'fdf22d9d-2476-4990-b493-ae3ab31a8bb8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3d6a66c-1acd-4ae3-9639-b6444469c1fc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0471b9b208874403aa3f0fbe7504ad19', 'neutron:revision_number': '2', 'neutron:security_group_ids': '02b3859c-c921-45be-8567-c9fda4e094c8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7843eb0e-3b68-48d7-b889-5bece517c173, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=7528e8e3-686a-4176-98d8-1f4f2e96b5c0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:39:57 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:57.932 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 7528e8e3-686a-4176-98d8-1f4f2e96b5c0 in datapath f3d6a66c-1acd-4ae3-9639-b6444469c1fc bound to our chassis#033[00m
Nov 29 03:39:57 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:57.934 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3d6a66c-1acd-4ae3-9639-b6444469c1fc#033[00m
Nov 29 03:39:57 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:57.947 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[494ad96e-b1e4-4d26-85dc-49a64cdce8b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:57 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:57.948 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf3d6a66c-11 in ovnmeta-f3d6a66c-1acd-4ae3-9639-b6444469c1fc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:39:57 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:57.949 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf3d6a66c-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:39:57 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:57.949 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a1fa4555-6dad-4f09-901d-693f4d3aa34b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:57 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:57.950 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ed42f062-6df0-4cfd-b6a3-9437592677fe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:57 np0005539552 systemd-udevd[311882]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:39:57 np0005539552 NetworkManager[48926]: <info>  [1764405597.9634] device (tap7528e8e3-68): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:39:57 np0005539552 NetworkManager[48926]: <info>  [1764405597.9642] device (tap7528e8e3-68): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:39:57 np0005539552 systemd-machined[196379]: New machine qemu-87-instance-000000bd.
Nov 29 03:39:57 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:57.964 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[325dc319-abea-4edd-961f-61583f0e8d32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:57 np0005539552 systemd[1]: Started Virtual Machine qemu-87-instance-000000bd.
Nov 29 03:39:57 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:57.990 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2246d4fe-6cd9-43b6-b7cb-b382ec4ea683]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:57 np0005539552 nova_compute[233724]: 2025-11-29 08:39:57.994 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:57 np0005539552 ovn_controller[133798]: 2025-11-29T08:39:57Z|00849|binding|INFO|Setting lport 7528e8e3-686a-4176-98d8-1f4f2e96b5c0 ovn-installed in OVS
Nov 29 03:39:57 np0005539552 ovn_controller[133798]: 2025-11-29T08:39:57Z|00850|binding|INFO|Setting lport 7528e8e3-686a-4176-98d8-1f4f2e96b5c0 up in Southbound
Nov 29 03:39:57 np0005539552 nova_compute[233724]: 2025-11-29 08:39:57.996 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:58.018 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[de0c7184-993f-430d-b658-d44f20cff887]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:58.024 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ddc64c9a-8991-47e4-b301-e53c47586b80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:58 np0005539552 NetworkManager[48926]: <info>  [1764405598.0257] manager: (tapf3d6a66c-10): new Veth device (/org/freedesktop/NetworkManager/Devices/376)
Nov 29 03:39:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:58.064 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[dc1b9b5b-17dd-4a66-a574-b8d0f16270f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:58 np0005539552 nova_compute[233724]: 2025-11-29 08:39:58.066 233728 DEBUG nova.virt.libvirt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Received event <DeviceRemovedEvent: 1764405598.066199, ee3de57a-2652-4819-880a-6217c00a67a0 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 29 03:39:58 np0005539552 nova_compute[233724]: 2025-11-29 08:39:58.068 233728 DEBUG nova.virt.libvirt.driver [None req-9148c0f7-52a2-4da9-b27f-631ce63f3e33 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance ee3de57a-2652-4819-880a-6217c00a67a0 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 29 03:39:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:58.068 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[509911a0-d74b-4c70-98b1-cf277b197e3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:58 np0005539552 nova_compute[233724]: 2025-11-29 08:39:58.071 233728 INFO nova.virt.libvirt.driver [None req-9148c0f7-52a2-4da9-b27f-631ce63f3e33 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Successfully detached device vdb from instance ee3de57a-2652-4819-880a-6217c00a67a0 from the live domain config.#033[00m
Nov 29 03:39:58 np0005539552 NetworkManager[48926]: <info>  [1764405598.0915] device (tapf3d6a66c-10): carrier: link connected
Nov 29 03:39:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:58.098 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[eb3fe760-c995-46a8-a99e-70f31a982653]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:58.114 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[198a025a-622b-4985-a111-3907351691ef]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3d6a66c-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:1b:f5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 251], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 864377, 'reachable_time': 36298, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311917, 'error': None, 'target': 'ovnmeta-f3d6a66c-1acd-4ae3-9639-b6444469c1fc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:58.129 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[77607a26-2dd9-4a46-b9ab-6ebea1045f8c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fede:1bf5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 864377, 'tstamp': 864377}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311918, 'error': None, 'target': 'ovnmeta-f3d6a66c-1acd-4ae3-9639-b6444469c1fc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:58.144 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[40cddce4-9e95-4156-aef0-04ec3e13eedd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3d6a66c-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:1b:f5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 251], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 864377, 'reachable_time': 36298, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311919, 'error': None, 'target': 'ovnmeta-f3d6a66c-1acd-4ae3-9639-b6444469c1fc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:58.169 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[234e4200-685e-423f-86df-25b15d98b403]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:58.229 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d6aaa54a-1ba1-4edb-b544-87b3389eb74f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:58.232 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3d6a66c-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:58.234 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:39:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:58.235 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3d6a66c-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:58 np0005539552 nova_compute[233724]: 2025-11-29 08:39:58.238 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:58 np0005539552 kernel: tapf3d6a66c-10: entered promiscuous mode
Nov 29 03:39:58 np0005539552 NetworkManager[48926]: <info>  [1764405598.2391] manager: (tapf3d6a66c-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/377)
Nov 29 03:39:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:58.241 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3d6a66c-10, col_values=(('external_ids', {'iface-id': '1e240f8f-8745-4fcb-b4a3-c32894f2f8b3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:58 np0005539552 nova_compute[233724]: 2025-11-29 08:39:58.242 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:58 np0005539552 ovn_controller[133798]: 2025-11-29T08:39:58Z|00851|binding|INFO|Releasing lport 1e240f8f-8745-4fcb-b4a3-c32894f2f8b3 from this chassis (sb_readonly=0)
Nov 29 03:39:58 np0005539552 nova_compute[233724]: 2025-11-29 08:39:58.243 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:58.244 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f3d6a66c-1acd-4ae3-9639-b6444469c1fc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f3d6a66c-1acd-4ae3-9639-b6444469c1fc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:39:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:58.246 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6102f28e-1da8-4b81-ad59-d941594f9bc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:58.247 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:39:58 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:39:58 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:39:58 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-f3d6a66c-1acd-4ae3-9639-b6444469c1fc
Nov 29 03:39:58 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:39:58 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:39:58 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:39:58 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/f3d6a66c-1acd-4ae3-9639-b6444469c1fc.pid.haproxy
Nov 29 03:39:58 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:39:58 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:39:58 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:39:58 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:39:58 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:39:58 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:39:58 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:39:58 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:39:58 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:39:58 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:39:58 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:39:58 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:39:58 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:39:58 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:39:58 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:39:58 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:39:58 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:39:58 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:39:58 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:39:58 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:39:58 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID f3d6a66c-1acd-4ae3-9639-b6444469c1fc
Nov 29 03:39:58 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:39:58 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:39:58.248 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f3d6a66c-1acd-4ae3-9639-b6444469c1fc', 'env', 'PROCESS_TAG=haproxy-f3d6a66c-1acd-4ae3-9639-b6444469c1fc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f3d6a66c-1acd-4ae3-9639-b6444469c1fc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:39:58 np0005539552 nova_compute[233724]: 2025-11-29 08:39:58.257 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:58 np0005539552 nova_compute[233724]: 2025-11-29 08:39:58.262 233728 DEBUG nova.objects.instance [None req-9148c0f7-52a2-4da9-b27f-631ce63f3e33 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lazy-loading 'flavor' on Instance uuid ee3de57a-2652-4819-880a-6217c00a67a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:39:58 np0005539552 nova_compute[233724]: 2025-11-29 08:39:58.318 233728 DEBUG oslo_concurrency.lockutils [None req-9148c0f7-52a2-4da9-b27f-631ce63f3e33 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "ee3de57a-2652-4819-880a-6217c00a67a0" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:58 np0005539552 nova_compute[233724]: 2025-11-29 08:39:58.547 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405598.5466592, fdf22d9d-2476-4990-b493-ae3ab31a8bb8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:39:58 np0005539552 nova_compute[233724]: 2025-11-29 08:39:58.547 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] VM Started (Lifecycle Event)#033[00m
Nov 29 03:39:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:58.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:58 np0005539552 nova_compute[233724]: 2025-11-29 08:39:58.585 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:39:58 np0005539552 nova_compute[233724]: 2025-11-29 08:39:58.590 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405598.5514917, fdf22d9d-2476-4990-b493-ae3ab31a8bb8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:39:58 np0005539552 nova_compute[233724]: 2025-11-29 08:39:58.590 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:39:58 np0005539552 nova_compute[233724]: 2025-11-29 08:39:58.617 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:39:58 np0005539552 nova_compute[233724]: 2025-11-29 08:39:58.621 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:39:58 np0005539552 podman[311991]: 2025-11-29 08:39:58.642887466 +0000 UTC m=+0.066347987 container create 059c669d308d4378f5c3bf566e235368cd2a771609a64ec1b117c1d95f44f16f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3d6a66c-1acd-4ae3-9639-b6444469c1fc, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 03:39:58 np0005539552 nova_compute[233724]: 2025-11-29 08:39:58.649 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:39:58 np0005539552 systemd[1]: Started libpod-conmon-059c669d308d4378f5c3bf566e235368cd2a771609a64ec1b117c1d95f44f16f.scope.
Nov 29 03:39:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e391 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:58 np0005539552 podman[311991]: 2025-11-29 08:39:58.605264474 +0000 UTC m=+0.028725055 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:39:58 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:39:58 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/947c0b9d9c599a0442526ce2e678cd41de4e1494ab55d48db37dfcc8d6d0bec4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:39:58 np0005539552 podman[311991]: 2025-11-29 08:39:58.748639733 +0000 UTC m=+0.172100254 container init 059c669d308d4378f5c3bf566e235368cd2a771609a64ec1b117c1d95f44f16f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3d6a66c-1acd-4ae3-9639-b6444469c1fc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 03:39:58 np0005539552 podman[311991]: 2025-11-29 08:39:58.759184107 +0000 UTC m=+0.182644618 container start 059c669d308d4378f5c3bf566e235368cd2a771609a64ec1b117c1d95f44f16f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3d6a66c-1acd-4ae3-9639-b6444469c1fc, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:39:58 np0005539552 neutron-haproxy-ovnmeta-f3d6a66c-1acd-4ae3-9639-b6444469c1fc[312006]: [NOTICE]   (312010) : New worker (312012) forked
Nov 29 03:39:58 np0005539552 neutron-haproxy-ovnmeta-f3d6a66c-1acd-4ae3-9639-b6444469c1fc[312006]: [NOTICE]   (312010) : Loading success.
Nov 29 03:39:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:39:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:58.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:58 np0005539552 nova_compute[233724]: 2025-11-29 08:39:58.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:39:58 np0005539552 nova_compute[233724]: 2025-11-29 08:39:58.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:39:58 np0005539552 nova_compute[233724]: 2025-11-29 08:39:58.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:39:58 np0005539552 nova_compute[233724]: 2025-11-29 08:39:58.962 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 03:39:59 np0005539552 nova_compute[233724]: 2025-11-29 08:39:59.140 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "refresh_cache-ee3de57a-2652-4819-880a-6217c00a67a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:39:59 np0005539552 nova_compute[233724]: 2025-11-29 08:39:59.140 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquired lock "refresh_cache-ee3de57a-2652-4819-880a-6217c00a67a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:39:59 np0005539552 nova_compute[233724]: 2025-11-29 08:39:59.141 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:39:59 np0005539552 nova_compute[233724]: 2025-11-29 08:39:59.141 233728 DEBUG nova.objects.instance [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ee3de57a-2652-4819-880a-6217c00a67a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:40:00 np0005539552 nova_compute[233724]: 2025-11-29 08:40:00.121 233728 DEBUG nova.compute.manager [req-ecdfd53b-e3ab-419b-991d-f6850f52ce5e req-74d2895e-c38d-45e7-ba73-d96c8733f43d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Received event network-vif-plugged-7528e8e3-686a-4176-98d8-1f4f2e96b5c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:40:00 np0005539552 nova_compute[233724]: 2025-11-29 08:40:00.122 233728 DEBUG oslo_concurrency.lockutils [req-ecdfd53b-e3ab-419b-991d-f6850f52ce5e req-74d2895e-c38d-45e7-ba73-d96c8733f43d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "fdf22d9d-2476-4990-b493-ae3ab31a8bb8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:00 np0005539552 nova_compute[233724]: 2025-11-29 08:40:00.122 233728 DEBUG oslo_concurrency.lockutils [req-ecdfd53b-e3ab-419b-991d-f6850f52ce5e req-74d2895e-c38d-45e7-ba73-d96c8733f43d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "fdf22d9d-2476-4990-b493-ae3ab31a8bb8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:00 np0005539552 nova_compute[233724]: 2025-11-29 08:40:00.122 233728 DEBUG oslo_concurrency.lockutils [req-ecdfd53b-e3ab-419b-991d-f6850f52ce5e req-74d2895e-c38d-45e7-ba73-d96c8733f43d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "fdf22d9d-2476-4990-b493-ae3ab31a8bb8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:00 np0005539552 nova_compute[233724]: 2025-11-29 08:40:00.123 233728 DEBUG nova.compute.manager [req-ecdfd53b-e3ab-419b-991d-f6850f52ce5e req-74d2895e-c38d-45e7-ba73-d96c8733f43d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Processing event network-vif-plugged-7528e8e3-686a-4176-98d8-1f4f2e96b5c0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:40:00 np0005539552 nova_compute[233724]: 2025-11-29 08:40:00.123 233728 DEBUG nova.compute.manager [req-ecdfd53b-e3ab-419b-991d-f6850f52ce5e req-74d2895e-c38d-45e7-ba73-d96c8733f43d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Received event network-vif-plugged-7528e8e3-686a-4176-98d8-1f4f2e96b5c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:40:00 np0005539552 nova_compute[233724]: 2025-11-29 08:40:00.123 233728 DEBUG oslo_concurrency.lockutils [req-ecdfd53b-e3ab-419b-991d-f6850f52ce5e req-74d2895e-c38d-45e7-ba73-d96c8733f43d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "fdf22d9d-2476-4990-b493-ae3ab31a8bb8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:00 np0005539552 nova_compute[233724]: 2025-11-29 08:40:00.124 233728 DEBUG oslo_concurrency.lockutils [req-ecdfd53b-e3ab-419b-991d-f6850f52ce5e req-74d2895e-c38d-45e7-ba73-d96c8733f43d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "fdf22d9d-2476-4990-b493-ae3ab31a8bb8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:00 np0005539552 nova_compute[233724]: 2025-11-29 08:40:00.124 233728 DEBUG oslo_concurrency.lockutils [req-ecdfd53b-e3ab-419b-991d-f6850f52ce5e req-74d2895e-c38d-45e7-ba73-d96c8733f43d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "fdf22d9d-2476-4990-b493-ae3ab31a8bb8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:00 np0005539552 nova_compute[233724]: 2025-11-29 08:40:00.124 233728 DEBUG nova.compute.manager [req-ecdfd53b-e3ab-419b-991d-f6850f52ce5e req-74d2895e-c38d-45e7-ba73-d96c8733f43d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] No waiting events found dispatching network-vif-plugged-7528e8e3-686a-4176-98d8-1f4f2e96b5c0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:40:00 np0005539552 nova_compute[233724]: 2025-11-29 08:40:00.124 233728 WARNING nova.compute.manager [req-ecdfd53b-e3ab-419b-991d-f6850f52ce5e req-74d2895e-c38d-45e7-ba73-d96c8733f43d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Received unexpected event network-vif-plugged-7528e8e3-686a-4176-98d8-1f4f2e96b5c0 for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:40:00 np0005539552 nova_compute[233724]: 2025-11-29 08:40:00.125 233728 DEBUG nova.compute.manager [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:40:00 np0005539552 nova_compute[233724]: 2025-11-29 08:40:00.129 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405600.1289554, fdf22d9d-2476-4990-b493-ae3ab31a8bb8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:40:00 np0005539552 nova_compute[233724]: 2025-11-29 08:40:00.129 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:40:00 np0005539552 nova_compute[233724]: 2025-11-29 08:40:00.131 233728 DEBUG nova.virt.libvirt.driver [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:40:00 np0005539552 nova_compute[233724]: 2025-11-29 08:40:00.135 233728 INFO nova.virt.libvirt.driver [-] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Instance spawned successfully.#033[00m
Nov 29 03:40:00 np0005539552 nova_compute[233724]: 2025-11-29 08:40:00.136 233728 DEBUG nova.virt.libvirt.driver [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:40:00 np0005539552 nova_compute[233724]: 2025-11-29 08:40:00.159 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:40:00 np0005539552 nova_compute[233724]: 2025-11-29 08:40:00.163 233728 DEBUG nova.virt.libvirt.driver [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:40:00 np0005539552 nova_compute[233724]: 2025-11-29 08:40:00.163 233728 DEBUG nova.virt.libvirt.driver [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:40:00 np0005539552 nova_compute[233724]: 2025-11-29 08:40:00.164 233728 DEBUG nova.virt.libvirt.driver [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:40:00 np0005539552 nova_compute[233724]: 2025-11-29 08:40:00.164 233728 DEBUG nova.virt.libvirt.driver [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:40:00 np0005539552 nova_compute[233724]: 2025-11-29 08:40:00.165 233728 DEBUG nova.virt.libvirt.driver [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:40:00 np0005539552 nova_compute[233724]: 2025-11-29 08:40:00.165 233728 DEBUG nova.virt.libvirt.driver [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:40:00 np0005539552 nova_compute[233724]: 2025-11-29 08:40:00.169 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:40:00 np0005539552 nova_compute[233724]: 2025-11-29 08:40:00.216 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:40:00 np0005539552 nova_compute[233724]: 2025-11-29 08:40:00.273 233728 INFO nova.compute.manager [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Took 10.90 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:40:00 np0005539552 nova_compute[233724]: 2025-11-29 08:40:00.273 233728 DEBUG nova.compute.manager [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:40:00 np0005539552 nova_compute[233724]: 2025-11-29 08:40:00.348 233728 INFO nova.compute.manager [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Took 12.56 seconds to build instance.#033[00m
Nov 29 03:40:00 np0005539552 nova_compute[233724]: 2025-11-29 08:40:00.381 233728 DEBUG oslo_concurrency.lockutils [None req-6baf7570-a074-450a-8816-fa88b3fb1f95 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "fdf22d9d-2476-4990-b493-ae3ab31a8bb8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.923s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:00.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:00 np0005539552 nova_compute[233724]: 2025-11-29 08:40:00.605 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Updating instance_info_cache with network_info: [{"id": "dfb097a1-c82e-41b6-9a9f-57a7771a6e0a", "address": "fa:16:3e:0b:a6:18", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfb097a1-c8", "ovs_interfaceid": "dfb097a1-c82e-41b6-9a9f-57a7771a6e0a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:40:00 np0005539552 nova_compute[233724]: 2025-11-29 08:40:00.626 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Releasing lock "refresh_cache-ee3de57a-2652-4819-880a-6217c00a67a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:40:00 np0005539552 nova_compute[233724]: 2025-11-29 08:40:00.627 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:40:00 np0005539552 ceph-mon[77121]: overall HEALTH_OK
Nov 29 03:40:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:00.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:00 np0005539552 nova_compute[233724]: 2025-11-29 08:40:00.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:00 np0005539552 nova_compute[233724]: 2025-11-29 08:40:00.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 03:40:00 np0005539552 nova_compute[233724]: 2025-11-29 08:40:00.993 233728 DEBUG oslo_concurrency.lockutils [None req-07e9e7d0-8417-4155-b070-a05c07f97d75 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Acquiring lock "ee3de57a-2652-4819-880a-6217c00a67a0" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:00 np0005539552 nova_compute[233724]: 2025-11-29 08:40:00.994 233728 DEBUG oslo_concurrency.lockutils [None req-07e9e7d0-8417-4155-b070-a05c07f97d75 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "ee3de57a-2652-4819-880a-6217c00a67a0" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:01 np0005539552 nova_compute[233724]: 2025-11-29 08:40:01.016 233728 INFO nova.compute.manager [None req-07e9e7d0-8417-4155-b070-a05c07f97d75 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Detaching volume 54fbb1b4-e0a6-4add-a8f8-3d00e149271f#033[00m
Nov 29 03:40:01 np0005539552 nova_compute[233724]: 2025-11-29 08:40:01.196 233728 INFO nova.virt.block_device [None req-07e9e7d0-8417-4155-b070-a05c07f97d75 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Attempting to driver detach volume 54fbb1b4-e0a6-4add-a8f8-3d00e149271f from mountpoint /dev/vdc#033[00m
Nov 29 03:40:01 np0005539552 nova_compute[233724]: 2025-11-29 08:40:01.210 233728 DEBUG nova.virt.libvirt.driver [None req-07e9e7d0-8417-4155-b070-a05c07f97d75 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Attempting to detach device vdc from instance ee3de57a-2652-4819-880a-6217c00a67a0 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:40:01 np0005539552 nova_compute[233724]: 2025-11-29 08:40:01.211 233728 DEBUG nova.virt.libvirt.guest [None req-07e9e7d0-8417-4155-b070-a05c07f97d75 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:40:01 np0005539552 nova_compute[233724]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:40:01 np0005539552 nova_compute[233724]:  <source protocol="rbd" name="volumes/volume-54fbb1b4-e0a6-4add-a8f8-3d00e149271f">
Nov 29 03:40:01 np0005539552 nova_compute[233724]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:40:01 np0005539552 nova_compute[233724]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:40:01 np0005539552 nova_compute[233724]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:40:01 np0005539552 nova_compute[233724]:  </source>
Nov 29 03:40:01 np0005539552 nova_compute[233724]:  <target dev="vdc" bus="virtio"/>
Nov 29 03:40:01 np0005539552 nova_compute[233724]:  <serial>54fbb1b4-e0a6-4add-a8f8-3d00e149271f</serial>
Nov 29 03:40:01 np0005539552 nova_compute[233724]:  <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Nov 29 03:40:01 np0005539552 nova_compute[233724]: </disk>
Nov 29 03:40:01 np0005539552 nova_compute[233724]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:40:01 np0005539552 nova_compute[233724]: 2025-11-29 08:40:01.221 233728 INFO nova.virt.libvirt.driver [None req-07e9e7d0-8417-4155-b070-a05c07f97d75 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Successfully detached device vdc from instance ee3de57a-2652-4819-880a-6217c00a67a0 from the persistent domain config.#033[00m
Nov 29 03:40:01 np0005539552 nova_compute[233724]: 2025-11-29 08:40:01.221 233728 DEBUG nova.virt.libvirt.driver [None req-07e9e7d0-8417-4155-b070-a05c07f97d75 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] (1/8): Attempting to detach device vdc with device alias virtio-disk2 from instance ee3de57a-2652-4819-880a-6217c00a67a0 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 29 03:40:01 np0005539552 nova_compute[233724]: 2025-11-29 08:40:01.222 233728 DEBUG nova.virt.libvirt.guest [None req-07e9e7d0-8417-4155-b070-a05c07f97d75 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:40:01 np0005539552 nova_compute[233724]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:40:01 np0005539552 nova_compute[233724]:  <source protocol="rbd" name="volumes/volume-54fbb1b4-e0a6-4add-a8f8-3d00e149271f">
Nov 29 03:40:01 np0005539552 nova_compute[233724]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:40:01 np0005539552 nova_compute[233724]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:40:01 np0005539552 nova_compute[233724]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:40:01 np0005539552 nova_compute[233724]:  </source>
Nov 29 03:40:01 np0005539552 nova_compute[233724]:  <target dev="vdc" bus="virtio"/>
Nov 29 03:40:01 np0005539552 nova_compute[233724]:  <serial>54fbb1b4-e0a6-4add-a8f8-3d00e149271f</serial>
Nov 29 03:40:01 np0005539552 nova_compute[233724]:  <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Nov 29 03:40:01 np0005539552 nova_compute[233724]: </disk>
Nov 29 03:40:01 np0005539552 nova_compute[233724]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:40:01 np0005539552 nova_compute[233724]: 2025-11-29 08:40:01.358 233728 DEBUG nova.virt.libvirt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Received event <DeviceRemovedEvent: 1764405601.3583264, ee3de57a-2652-4819-880a-6217c00a67a0 => virtio-disk2> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 29 03:40:01 np0005539552 nova_compute[233724]: 2025-11-29 08:40:01.362 233728 DEBUG nova.virt.libvirt.driver [None req-07e9e7d0-8417-4155-b070-a05c07f97d75 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Start waiting for the detach event from libvirt for device vdc with device alias virtio-disk2 for instance ee3de57a-2652-4819-880a-6217c00a67a0 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 29 03:40:01 np0005539552 nova_compute[233724]: 2025-11-29 08:40:01.364 233728 INFO nova.virt.libvirt.driver [None req-07e9e7d0-8417-4155-b070-a05c07f97d75 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Successfully detached device vdc from instance ee3de57a-2652-4819-880a-6217c00a67a0 from the live domain config.#033[00m
Nov 29 03:40:01 np0005539552 ovn_controller[133798]: 2025-11-29T08:40:01Z|00852|binding|INFO|Releasing lport 1e240f8f-8745-4fcb-b4a3-c32894f2f8b3 from this chassis (sb_readonly=0)
Nov 29 03:40:01 np0005539552 ovn_controller[133798]: 2025-11-29T08:40:01Z|00853|binding|INFO|Releasing lport 1642a0e3-a8d4-4ee4-8971-26f27541a04e from this chassis (sb_readonly=0)
Nov 29 03:40:01 np0005539552 nova_compute[233724]: 2025-11-29 08:40:01.498 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:01 np0005539552 nova_compute[233724]: 2025-11-29 08:40:01.552 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:01 np0005539552 nova_compute[233724]: 2025-11-29 08:40:01.602 233728 DEBUG nova.objects.instance [None req-07e9e7d0-8417-4155-b070-a05c07f97d75 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lazy-loading 'flavor' on Instance uuid ee3de57a-2652-4819-880a-6217c00a67a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:40:01 np0005539552 nova_compute[233724]: 2025-11-29 08:40:01.636 233728 DEBUG oslo_concurrency.lockutils [None req-07e9e7d0-8417-4155-b070-a05c07f97d75 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "ee3de57a-2652-4819-880a-6217c00a67a0" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:02 np0005539552 nova_compute[233724]: 2025-11-29 08:40:02.073 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:02.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:02.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:03 np0005539552 nova_compute[233724]: 2025-11-29 08:40:03.096 233728 DEBUG nova.compute.manager [req-e3369868-eb3e-45d3-8e6e-602c0a152fcb req-9d2a0626-9551-44f4-962e-08f9e8b4de9c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Received event network-changed-dfb097a1-c82e-41b6-9a9f-57a7771a6e0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:40:03 np0005539552 nova_compute[233724]: 2025-11-29 08:40:03.096 233728 DEBUG nova.compute.manager [req-e3369868-eb3e-45d3-8e6e-602c0a152fcb req-9d2a0626-9551-44f4-962e-08f9e8b4de9c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Refreshing instance network info cache due to event network-changed-dfb097a1-c82e-41b6-9a9f-57a7771a6e0a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:40:03 np0005539552 nova_compute[233724]: 2025-11-29 08:40:03.097 233728 DEBUG oslo_concurrency.lockutils [req-e3369868-eb3e-45d3-8e6e-602c0a152fcb req-9d2a0626-9551-44f4-962e-08f9e8b4de9c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-ee3de57a-2652-4819-880a-6217c00a67a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:40:03 np0005539552 nova_compute[233724]: 2025-11-29 08:40:03.097 233728 DEBUG oslo_concurrency.lockutils [req-e3369868-eb3e-45d3-8e6e-602c0a152fcb req-9d2a0626-9551-44f4-962e-08f9e8b4de9c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-ee3de57a-2652-4819-880a-6217c00a67a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:40:03 np0005539552 nova_compute[233724]: 2025-11-29 08:40:03.097 233728 DEBUG nova.network.neutron [req-e3369868-eb3e-45d3-8e6e-602c0a152fcb req-9d2a0626-9551-44f4-962e-08f9e8b4de9c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Refreshing network info cache for port dfb097a1-c82e-41b6-9a9f-57a7771a6e0a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:40:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:40:03 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/748372546' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:40:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:40:03 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/748372546' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:40:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e391 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:04.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:04 np0005539552 nova_compute[233724]: 2025-11-29 08:40:04.632 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405589.5741746, 608e1017-9da7-4ba6-a346-f047562d380b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:40:04 np0005539552 nova_compute[233724]: 2025-11-29 08:40:04.633 233728 INFO nova.compute.manager [-] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:40:04 np0005539552 nova_compute[233724]: 2025-11-29 08:40:04.659 233728 DEBUG nova.compute.manager [None req-9395fe67-b990-4d39-8c9a-d6282de0e6e2 - - - - - -] [instance: 608e1017-9da7-4ba6-a346-f047562d380b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:40:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:04.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:05 np0005539552 podman[312029]: 2025-11-29 08:40:05.036267191 +0000 UTC m=+0.095815851 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:40:05 np0005539552 podman[312030]: 2025-11-29 08:40:05.037416952 +0000 UTC m=+0.092558183 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:40:05 np0005539552 podman[312031]: 2025-11-29 08:40:05.12098953 +0000 UTC m=+0.165672319 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:40:05 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e392 e392: 3 total, 3 up, 3 in
Nov 29 03:40:05 np0005539552 nova_compute[233724]: 2025-11-29 08:40:05.404 233728 DEBUG nova.network.neutron [req-e3369868-eb3e-45d3-8e6e-602c0a152fcb req-9d2a0626-9551-44f4-962e-08f9e8b4de9c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Updated VIF entry in instance network info cache for port dfb097a1-c82e-41b6-9a9f-57a7771a6e0a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:40:05 np0005539552 nova_compute[233724]: 2025-11-29 08:40:05.405 233728 DEBUG nova.network.neutron [req-e3369868-eb3e-45d3-8e6e-602c0a152fcb req-9d2a0626-9551-44f4-962e-08f9e8b4de9c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Updating instance_info_cache with network_info: [{"id": "dfb097a1-c82e-41b6-9a9f-57a7771a6e0a", "address": "fa:16:3e:0b:a6:18", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfb097a1-c8", "ovs_interfaceid": "dfb097a1-c82e-41b6-9a9f-57a7771a6e0a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:40:05 np0005539552 nova_compute[233724]: 2025-11-29 08:40:05.425 233728 DEBUG oslo_concurrency.lockutils [req-e3369868-eb3e-45d3-8e6e-602c0a152fcb req-9d2a0626-9551-44f4-962e-08f9e8b4de9c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-ee3de57a-2652-4819-880a-6217c00a67a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:40:06 np0005539552 nova_compute[233724]: 2025-11-29 08:40:06.500 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:06.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:06 np0005539552 ovn_controller[133798]: 2025-11-29T08:40:06Z|00854|binding|INFO|Releasing lport 1e240f8f-8745-4fcb-b4a3-c32894f2f8b3 from this chassis (sb_readonly=0)
Nov 29 03:40:06 np0005539552 ovn_controller[133798]: 2025-11-29T08:40:06Z|00855|binding|INFO|Releasing lport 1642a0e3-a8d4-4ee4-8971-26f27541a04e from this chassis (sb_readonly=0)
Nov 29 03:40:06 np0005539552 nova_compute[233724]: 2025-11-29 08:40:06.599 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:06 np0005539552 ovn_controller[133798]: 2025-11-29T08:40:06Z|00856|binding|INFO|Releasing lport 1e240f8f-8745-4fcb-b4a3-c32894f2f8b3 from this chassis (sb_readonly=0)
Nov 29 03:40:06 np0005539552 ovn_controller[133798]: 2025-11-29T08:40:06Z|00857|binding|INFO|Releasing lport 1642a0e3-a8d4-4ee4-8971-26f27541a04e from this chassis (sb_readonly=0)
Nov 29 03:40:06 np0005539552 nova_compute[233724]: 2025-11-29 08:40:06.868 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:06.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:07 np0005539552 nova_compute[233724]: 2025-11-29 08:40:07.076 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:08.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:40:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:08.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:40:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:10.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:10.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:11 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:40:11 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/920247134' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:40:11 np0005539552 nova_compute[233724]: 2025-11-29 08:40:11.502 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:12 np0005539552 nova_compute[233724]: 2025-11-29 08:40:12.078 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:12.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:12.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e393 e393: 3 total, 3 up, 3 in
Nov 29 03:40:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e393 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:14.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:14 np0005539552 ovn_controller[133798]: 2025-11-29T08:40:14Z|00113|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f4:bd:c7 10.100.0.27
Nov 29 03:40:14 np0005539552 ovn_controller[133798]: 2025-11-29T08:40:14Z|00114|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f4:bd:c7 10.100.0.27
Nov 29 03:40:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:40:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:14.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:40:15 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:40:15 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1750574917' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:40:16 np0005539552 nova_compute[233724]: 2025-11-29 08:40:16.504 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:16.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:16.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:17 np0005539552 nova_compute[233724]: 2025-11-29 08:40:17.080 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:18.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e393 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:18.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:20.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:40:20.646 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:40:20.647 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:40:20.647 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:20.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:20 np0005539552 nova_compute[233724]: 2025-11-29 08:40:20.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:21 np0005539552 nova_compute[233724]: 2025-11-29 08:40:21.506 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:22 np0005539552 nova_compute[233724]: 2025-11-29 08:40:22.082 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:22.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:22.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e393 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:24.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:24 np0005539552 nova_compute[233724]: 2025-11-29 08:40:24.831 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:24 np0005539552 NetworkManager[48926]: <info>  [1764405624.8329] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/378)
Nov 29 03:40:24 np0005539552 NetworkManager[48926]: <info>  [1764405624.8349] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/379)
Nov 29 03:40:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:24.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:25 np0005539552 nova_compute[233724]: 2025-11-29 08:40:25.058 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:25 np0005539552 ovn_controller[133798]: 2025-11-29T08:40:25Z|00858|binding|INFO|Releasing lport 1e240f8f-8745-4fcb-b4a3-c32894f2f8b3 from this chassis (sb_readonly=0)
Nov 29 03:40:25 np0005539552 ovn_controller[133798]: 2025-11-29T08:40:25Z|00859|binding|INFO|Releasing lport 1642a0e3-a8d4-4ee4-8971-26f27541a04e from this chassis (sb_readonly=0)
Nov 29 03:40:25 np0005539552 nova_compute[233724]: 2025-11-29 08:40:25.084 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:25 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e394 e394: 3 total, 3 up, 3 in
Nov 29 03:40:26 np0005539552 nova_compute[233724]: 2025-11-29 08:40:26.508 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:26.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:26.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:27 np0005539552 nova_compute[233724]: 2025-11-29 08:40:27.085 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:28 np0005539552 nova_compute[233724]: 2025-11-29 08:40:28.578 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:40:28.580 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:40:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:40:28.581 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:40:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:40:28.583 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:40:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:28.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:28.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:30 np0005539552 nova_compute[233724]: 2025-11-29 08:40:30.329 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:30.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:30.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:31 np0005539552 nova_compute[233724]: 2025-11-29 08:40:31.511 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:32 np0005539552 nova_compute[233724]: 2025-11-29 08:40:32.088 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:32 np0005539552 nova_compute[233724]: 2025-11-29 08:40:32.109 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:32 np0005539552 nova_compute[233724]: 2025-11-29 08:40:32.110 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 03:40:32 np0005539552 nova_compute[233724]: 2025-11-29 08:40:32.130 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 03:40:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:32.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:32.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:34.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:34.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:36 np0005539552 podman[312221]: 2025-11-29 08:40:36.026594019 +0000 UTC m=+0.097669350 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 03:40:36 np0005539552 podman[312222]: 2025-11-29 08:40:36.044396718 +0000 UTC m=+0.115583972 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 29 03:40:36 np0005539552 podman[312223]: 2025-11-29 08:40:36.073594064 +0000 UTC m=+0.143188475 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller)
Nov 29 03:40:36 np0005539552 nova_compute[233724]: 2025-11-29 08:40:36.513 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:36.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:36.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:37 np0005539552 nova_compute[233724]: 2025-11-29 08:40:37.091 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:38.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:38.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:40:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3610472005' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:40:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:40:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3610472005' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:40:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:40:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4233035083' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:40:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:40:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4233035083' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:40:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:40:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:40.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:40:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:40.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:41 np0005539552 nova_compute[233724]: 2025-11-29 08:40:41.517 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:41 np0005539552 podman[312459]: 2025-11-29 08:40:41.557129866 +0000 UTC m=+0.065979117 container exec 2e449e743ed8b622f7d948671af16b72c407124db8f8583030f534a9757aa05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:40:41 np0005539552 podman[312459]: 2025-11-29 08:40:41.724479721 +0000 UTC m=+0.233328912 container exec_died 2e449e743ed8b622f7d948671af16b72c407124db8f8583030f534a9757aa05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:40:42 np0005539552 nova_compute[233724]: 2025-11-29 08:40:42.094 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:42 np0005539552 podman[312614]: 2025-11-29 08:40:42.52317053 +0000 UTC m=+0.064569979 container exec fbc97e64df6fe35a26c07fd8827c68b935db0d4237087be449d0fd015b40e005 (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-2-efzvmt)
Nov 29 03:40:42 np0005539552 podman[312614]: 2025-11-29 08:40:42.538007659 +0000 UTC m=+0.079407098 container exec_died fbc97e64df6fe35a26c07fd8827c68b935db0d4237087be449d0fd015b40e005 (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-2-efzvmt)
Nov 29 03:40:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:42.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:42 np0005539552 podman[312680]: 2025-11-29 08:40:42.837861321 +0000 UTC m=+0.079523172 container exec 6c20d6486ea31b9565e62504d583e2d8c9512b707e8e2015e3cef66c15f3b3e6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., release=1793, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, distribution-scope=public, vcs-type=git, io.openshift.expose-services=)
Nov 29 03:40:42 np0005539552 podman[312680]: 2025-11-29 08:40:42.851383804 +0000 UTC m=+0.093045655 container exec_died 6c20d6486ea31b9565e62504d583e2d8c9512b707e8e2015e3cef66c15f3b3e6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, architecture=x86_64, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., com.redhat.component=keepalived-container, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Nov 29 03:40:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:42.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:43 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:40:43 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:40:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:40:43 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1119221638' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:40:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:40:43 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1119221638' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:40:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:43 np0005539552 nova_compute[233724]: 2025-11-29 08:40:43.944 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:43 np0005539552 nova_compute[233724]: 2025-11-29 08:40:43.975 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:43 np0005539552 nova_compute[233724]: 2025-11-29 08:40:43.976 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:43 np0005539552 nova_compute[233724]: 2025-11-29 08:40:43.976 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:43 np0005539552 nova_compute[233724]: 2025-11-29 08:40:43.977 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:40:43 np0005539552 nova_compute[233724]: 2025-11-29 08:40:43.977 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:40:44 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 03:40:44 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:40:44 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:40:44 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:40:44 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:40:44 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3017473597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:40:44 np0005539552 nova_compute[233724]: 2025-11-29 08:40:44.407 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:40:44 np0005539552 nova_compute[233724]: 2025-11-29 08:40:44.495 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-000000ba as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:40:44 np0005539552 nova_compute[233724]: 2025-11-29 08:40:44.496 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-000000ba as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:40:44 np0005539552 nova_compute[233724]: 2025-11-29 08:40:44.499 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-000000bd as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:40:44 np0005539552 nova_compute[233724]: 2025-11-29 08:40:44.500 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-000000bd as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:40:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:44.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:44 np0005539552 nova_compute[233724]: 2025-11-29 08:40:44.758 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:40:44 np0005539552 nova_compute[233724]: 2025-11-29 08:40:44.761 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3748MB free_disk=20.83011245727539GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:40:44 np0005539552 nova_compute[233724]: 2025-11-29 08:40:44.761 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:44 np0005539552 nova_compute[233724]: 2025-11-29 08:40:44.762 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:44.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:45 np0005539552 nova_compute[233724]: 2025-11-29 08:40:45.007 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance ee3de57a-2652-4819-880a-6217c00a67a0 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:40:45 np0005539552 nova_compute[233724]: 2025-11-29 08:40:45.007 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance fdf22d9d-2476-4990-b493-ae3ab31a8bb8 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:40:45 np0005539552 nova_compute[233724]: 2025-11-29 08:40:45.007 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:40:45 np0005539552 nova_compute[233724]: 2025-11-29 08:40:45.008 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:40:45 np0005539552 nova_compute[233724]: 2025-11-29 08:40:45.270 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:40:45 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:40:45 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1212784977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:40:45 np0005539552 nova_compute[233724]: 2025-11-29 08:40:45.817 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:40:45 np0005539552 nova_compute[233724]: 2025-11-29 08:40:45.822 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:40:45 np0005539552 nova_compute[233724]: 2025-11-29 08:40:45.836 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:40:45 np0005539552 nova_compute[233724]: 2025-11-29 08:40:45.852 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:40:45 np0005539552 nova_compute[233724]: 2025-11-29 08:40:45.852 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.090s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:46 np0005539552 nova_compute[233724]: 2025-11-29 08:40:46.520 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:46.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:46.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:47 np0005539552 nova_compute[233724]: 2025-11-29 08:40:47.097 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:48.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:48.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:50.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:50 np0005539552 nova_compute[233724]: 2025-11-29 08:40:50.831 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:50 np0005539552 nova_compute[233724]: 2025-11-29 08:40:50.833 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:40:50 np0005539552 nova_compute[233724]: 2025-11-29 08:40:50.925 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:50.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:51 np0005539552 nova_compute[233724]: 2025-11-29 08:40:51.523 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:51 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:40:51 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:40:51 np0005539552 nova_compute[233724]: 2025-11-29 08:40:51.809 233728 DEBUG oslo_concurrency.lockutils [None req-b50521c2-7134-4a70-9a54-e6aa3464c103 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "fdf22d9d-2476-4990-b493-ae3ab31a8bb8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:51 np0005539552 nova_compute[233724]: 2025-11-29 08:40:51.810 233728 DEBUG oslo_concurrency.lockutils [None req-b50521c2-7134-4a70-9a54-e6aa3464c103 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "fdf22d9d-2476-4990-b493-ae3ab31a8bb8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:51 np0005539552 nova_compute[233724]: 2025-11-29 08:40:51.811 233728 DEBUG oslo_concurrency.lockutils [None req-b50521c2-7134-4a70-9a54-e6aa3464c103 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "fdf22d9d-2476-4990-b493-ae3ab31a8bb8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:51 np0005539552 nova_compute[233724]: 2025-11-29 08:40:51.811 233728 DEBUG oslo_concurrency.lockutils [None req-b50521c2-7134-4a70-9a54-e6aa3464c103 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "fdf22d9d-2476-4990-b493-ae3ab31a8bb8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:51 np0005539552 nova_compute[233724]: 2025-11-29 08:40:51.812 233728 DEBUG oslo_concurrency.lockutils [None req-b50521c2-7134-4a70-9a54-e6aa3464c103 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "fdf22d9d-2476-4990-b493-ae3ab31a8bb8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:51 np0005539552 nova_compute[233724]: 2025-11-29 08:40:51.814 233728 INFO nova.compute.manager [None req-b50521c2-7134-4a70-9a54-e6aa3464c103 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Terminating instance#033[00m
Nov 29 03:40:51 np0005539552 nova_compute[233724]: 2025-11-29 08:40:51.816 233728 DEBUG nova.compute.manager [None req-b50521c2-7134-4a70-9a54-e6aa3464c103 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:40:51 np0005539552 nova_compute[233724]: 2025-11-29 08:40:51.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:52 np0005539552 nova_compute[233724]: 2025-11-29 08:40:52.099 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:52 np0005539552 kernel: tap7528e8e3-68 (unregistering): left promiscuous mode
Nov 29 03:40:52 np0005539552 NetworkManager[48926]: <info>  [1764405652.1472] device (tap7528e8e3-68): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:40:52 np0005539552 nova_compute[233724]: 2025-11-29 08:40:52.157 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:52 np0005539552 ovn_controller[133798]: 2025-11-29T08:40:52Z|00860|binding|INFO|Releasing lport 7528e8e3-686a-4176-98d8-1f4f2e96b5c0 from this chassis (sb_readonly=0)
Nov 29 03:40:52 np0005539552 ovn_controller[133798]: 2025-11-29T08:40:52Z|00861|binding|INFO|Setting lport 7528e8e3-686a-4176-98d8-1f4f2e96b5c0 down in Southbound
Nov 29 03:40:52 np0005539552 ovn_controller[133798]: 2025-11-29T08:40:52Z|00862|binding|INFO|Removing iface tap7528e8e3-68 ovn-installed in OVS
Nov 29 03:40:52 np0005539552 nova_compute[233724]: 2025-11-29 08:40:52.160 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:40:52.166 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:bd:c7 10.100.0.27'], port_security=['fa:16:3e:f4:bd:c7 10.100.0.27'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.27/28', 'neutron:device_id': 'fdf22d9d-2476-4990-b493-ae3ab31a8bb8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3d6a66c-1acd-4ae3-9639-b6444469c1fc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0471b9b208874403aa3f0fbe7504ad19', 'neutron:revision_number': '4', 'neutron:security_group_ids': '02b3859c-c921-45be-8567-c9fda4e094c8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7843eb0e-3b68-48d7-b889-5bece517c173, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=7528e8e3-686a-4176-98d8-1f4f2e96b5c0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:40:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:40:52.167 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 7528e8e3-686a-4176-98d8-1f4f2e96b5c0 in datapath f3d6a66c-1acd-4ae3-9639-b6444469c1fc unbound from our chassis#033[00m
Nov 29 03:40:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:40:52.168 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f3d6a66c-1acd-4ae3-9639-b6444469c1fc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:40:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:40:52.171 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d04c88ca-f5de-43bf-ba71-1b4a51516905]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:40:52.171 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f3d6a66c-1acd-4ae3-9639-b6444469c1fc namespace which is not needed anymore#033[00m
Nov 29 03:40:52 np0005539552 nova_compute[233724]: 2025-11-29 08:40:52.175 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:52 np0005539552 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d000000bd.scope: Deactivated successfully.
Nov 29 03:40:52 np0005539552 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d000000bd.scope: Consumed 16.178s CPU time.
Nov 29 03:40:52 np0005539552 systemd-machined[196379]: Machine qemu-87-instance-000000bd terminated.
Nov 29 03:40:52 np0005539552 neutron-haproxy-ovnmeta-f3d6a66c-1acd-4ae3-9639-b6444469c1fc[312006]: [NOTICE]   (312010) : haproxy version is 2.8.14-c23fe91
Nov 29 03:40:52 np0005539552 neutron-haproxy-ovnmeta-f3d6a66c-1acd-4ae3-9639-b6444469c1fc[312006]: [NOTICE]   (312010) : path to executable is /usr/sbin/haproxy
Nov 29 03:40:52 np0005539552 neutron-haproxy-ovnmeta-f3d6a66c-1acd-4ae3-9639-b6444469c1fc[312006]: [WARNING]  (312010) : Exiting Master process...
Nov 29 03:40:52 np0005539552 neutron-haproxy-ovnmeta-f3d6a66c-1acd-4ae3-9639-b6444469c1fc[312006]: [ALERT]    (312010) : Current worker (312012) exited with code 143 (Terminated)
Nov 29 03:40:52 np0005539552 neutron-haproxy-ovnmeta-f3d6a66c-1acd-4ae3-9639-b6444469c1fc[312006]: [WARNING]  (312010) : All workers exited. Exiting... (0)
Nov 29 03:40:52 np0005539552 systemd[1]: libpod-059c669d308d4378f5c3bf566e235368cd2a771609a64ec1b117c1d95f44f16f.scope: Deactivated successfully.
Nov 29 03:40:52 np0005539552 podman[313015]: 2025-11-29 08:40:52.311922247 +0000 UTC m=+0.043171854 container died 059c669d308d4378f5c3bf566e235368cd2a771609a64ec1b117c1d95f44f16f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3d6a66c-1acd-4ae3-9639-b6444469c1fc, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:40:52 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-059c669d308d4378f5c3bf566e235368cd2a771609a64ec1b117c1d95f44f16f-userdata-shm.mount: Deactivated successfully.
Nov 29 03:40:52 np0005539552 systemd[1]: var-lib-containers-storage-overlay-947c0b9d9c599a0442526ce2e678cd41de4e1494ab55d48db37dfcc8d6d0bec4-merged.mount: Deactivated successfully.
Nov 29 03:40:52 np0005539552 podman[313015]: 2025-11-29 08:40:52.351664286 +0000 UTC m=+0.082913903 container cleanup 059c669d308d4378f5c3bf566e235368cd2a771609a64ec1b117c1d95f44f16f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3d6a66c-1acd-4ae3-9639-b6444469c1fc, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:40:52 np0005539552 systemd[1]: libpod-conmon-059c669d308d4378f5c3bf566e235368cd2a771609a64ec1b117c1d95f44f16f.scope: Deactivated successfully.
Nov 29 03:40:52 np0005539552 podman[313046]: 2025-11-29 08:40:52.437094506 +0000 UTC m=+0.057056657 container remove 059c669d308d4378f5c3bf566e235368cd2a771609a64ec1b117c1d95f44f16f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3d6a66c-1acd-4ae3-9639-b6444469c1fc, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:40:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:40:52.443 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b03a4014-cdd4-4bdc-9fe4-9f7b64bcbc56]: (4, ('Sat Nov 29 08:40:52 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f3d6a66c-1acd-4ae3-9639-b6444469c1fc (059c669d308d4378f5c3bf566e235368cd2a771609a64ec1b117c1d95f44f16f)\n059c669d308d4378f5c3bf566e235368cd2a771609a64ec1b117c1d95f44f16f\nSat Nov 29 08:40:52 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f3d6a66c-1acd-4ae3-9639-b6444469c1fc (059c669d308d4378f5c3bf566e235368cd2a771609a64ec1b117c1d95f44f16f)\n059c669d308d4378f5c3bf566e235368cd2a771609a64ec1b117c1d95f44f16f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:40:52.446 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[66ac6d45-05ac-4ef9-aec6-ad2bcd724d0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:40:52.447 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3d6a66c-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:40:52 np0005539552 nova_compute[233724]: 2025-11-29 08:40:52.449 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:52 np0005539552 nova_compute[233724]: 2025-11-29 08:40:52.467 233728 INFO nova.virt.libvirt.driver [-] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Instance destroyed successfully.#033[00m
Nov 29 03:40:52 np0005539552 nova_compute[233724]: 2025-11-29 08:40:52.468 233728 DEBUG nova.objects.instance [None req-b50521c2-7134-4a70-9a54-e6aa3464c103 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lazy-loading 'resources' on Instance uuid fdf22d9d-2476-4990-b493-ae3ab31a8bb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:40:52 np0005539552 kernel: tapf3d6a66c-10: left promiscuous mode
Nov 29 03:40:52 np0005539552 nova_compute[233724]: 2025-11-29 08:40:52.483 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:52 np0005539552 nova_compute[233724]: 2025-11-29 08:40:52.489 233728 DEBUG nova.virt.libvirt.vif [None req-b50521c2-7134-4a70-9a54-e6aa3464c103 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:39:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1728799900',display_name='tempest-TestNetworkBasicOps-server-1728799900',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1728799900',id=189,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMEojQ/ihonNAxFl4nl8U5Nw8RaAwW1wcuM26oNQCtjyh9KayiU3RHToagneSHpUhEsGczqXRW6M+JoXTxDiy7I/4W8BpV9C3MBvNlpVX23JxQejvPKfmFeVLLv8VNJx4Q==',key_name='tempest-TestNetworkBasicOps-1631943157',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:40:00Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-kqqyxra5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:40:00Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=fdf22d9d-2476-4990-b493-ae3ab31a8bb8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7528e8e3-686a-4176-98d8-1f4f2e96b5c0", "address": "fa:16:3e:f4:bd:c7", "network": {"id": "f3d6a66c-1acd-4ae3-9639-b6444469c1fc", "bridge": "br-int", "label": "tempest-network-smoke--216176959", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7528e8e3-68", "ovs_interfaceid": "7528e8e3-686a-4176-98d8-1f4f2e96b5c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:40:52 np0005539552 nova_compute[233724]: 2025-11-29 08:40:52.491 233728 DEBUG nova.network.os_vif_util [None req-b50521c2-7134-4a70-9a54-e6aa3464c103 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "7528e8e3-686a-4176-98d8-1f4f2e96b5c0", "address": "fa:16:3e:f4:bd:c7", "network": {"id": "f3d6a66c-1acd-4ae3-9639-b6444469c1fc", "bridge": "br-int", "label": "tempest-network-smoke--216176959", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7528e8e3-68", "ovs_interfaceid": "7528e8e3-686a-4176-98d8-1f4f2e96b5c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:40:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:40:52.491 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[07e5f54c-1b06-4ed3-8727-17ef4ccd35c4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:52 np0005539552 nova_compute[233724]: 2025-11-29 08:40:52.493 233728 DEBUG nova.network.os_vif_util [None req-b50521c2-7134-4a70-9a54-e6aa3464c103 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:bd:c7,bridge_name='br-int',has_traffic_filtering=True,id=7528e8e3-686a-4176-98d8-1f4f2e96b5c0,network=Network(f3d6a66c-1acd-4ae3-9639-b6444469c1fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7528e8e3-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:40:52 np0005539552 nova_compute[233724]: 2025-11-29 08:40:52.494 233728 DEBUG os_vif [None req-b50521c2-7134-4a70-9a54-e6aa3464c103 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:bd:c7,bridge_name='br-int',has_traffic_filtering=True,id=7528e8e3-686a-4176-98d8-1f4f2e96b5c0,network=Network(f3d6a66c-1acd-4ae3-9639-b6444469c1fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7528e8e3-68') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:40:52 np0005539552 nova_compute[233724]: 2025-11-29 08:40:52.499 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:52 np0005539552 nova_compute[233724]: 2025-11-29 08:40:52.500 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7528e8e3-68, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:40:52 np0005539552 nova_compute[233724]: 2025-11-29 08:40:52.502 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:40:52.503 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[dc306dfc-05d4-4683-bf93-74b6dbfb1b2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:52 np0005539552 nova_compute[233724]: 2025-11-29 08:40:52.504 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:40:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:40:52.505 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2166ebda-143c-4b4b-8174-f853912260d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:52 np0005539552 nova_compute[233724]: 2025-11-29 08:40:52.511 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:52 np0005539552 nova_compute[233724]: 2025-11-29 08:40:52.514 233728 INFO os_vif [None req-b50521c2-7134-4a70-9a54-e6aa3464c103 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:bd:c7,bridge_name='br-int',has_traffic_filtering=True,id=7528e8e3-686a-4176-98d8-1f4f2e96b5c0,network=Network(f3d6a66c-1acd-4ae3-9639-b6444469c1fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7528e8e3-68')#033[00m
Nov 29 03:40:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:40:52.529 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[5f82eef8-01e8-4cbf-81d5-d588d52efae1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 864369, 'reachable_time': 20378, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313074, 'error': None, 'target': 'ovnmeta-f3d6a66c-1acd-4ae3-9639-b6444469c1fc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:52 np0005539552 systemd[1]: run-netns-ovnmeta\x2df3d6a66c\x2d1acd\x2d4ae3\x2d9639\x2db6444469c1fc.mount: Deactivated successfully.
Nov 29 03:40:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:40:52.533 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f3d6a66c-1acd-4ae3-9639-b6444469c1fc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:40:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:40:52.533 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[c7058ff3-b812-407c-a055-df91452f3213]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:52.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:52 np0005539552 nova_compute[233724]: 2025-11-29 08:40:52.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:52.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:53 np0005539552 nova_compute[233724]: 2025-11-29 08:40:53.520 233728 INFO nova.virt.libvirt.driver [None req-b50521c2-7134-4a70-9a54-e6aa3464c103 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Deleting instance files /var/lib/nova/instances/fdf22d9d-2476-4990-b493-ae3ab31a8bb8_del#033[00m
Nov 29 03:40:53 np0005539552 nova_compute[233724]: 2025-11-29 08:40:53.521 233728 INFO nova.virt.libvirt.driver [None req-b50521c2-7134-4a70-9a54-e6aa3464c103 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Deletion of /var/lib/nova/instances/fdf22d9d-2476-4990-b493-ae3ab31a8bb8_del complete#033[00m
Nov 29 03:40:53 np0005539552 nova_compute[233724]: 2025-11-29 08:40:53.577 233728 INFO nova.compute.manager [None req-b50521c2-7134-4a70-9a54-e6aa3464c103 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Took 1.76 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:40:53 np0005539552 nova_compute[233724]: 2025-11-29 08:40:53.578 233728 DEBUG oslo.service.loopingcall [None req-b50521c2-7134-4a70-9a54-e6aa3464c103 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:40:53 np0005539552 nova_compute[233724]: 2025-11-29 08:40:53.578 233728 DEBUG nova.compute.manager [-] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:40:53 np0005539552 nova_compute[233724]: 2025-11-29 08:40:53.579 233728 DEBUG nova.network.neutron [-] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:40:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:53 np0005539552 nova_compute[233724]: 2025-11-29 08:40:53.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:54 np0005539552 nova_compute[233724]: 2025-11-29 08:40:54.086 233728 DEBUG nova.compute.manager [req-76c5f976-86a4-436a-a90b-8a45a6d05ea3 req-2744e129-5c45-44d2-9f82-fd4796e293a7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Received event network-vif-unplugged-7528e8e3-686a-4176-98d8-1f4f2e96b5c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:40:54 np0005539552 nova_compute[233724]: 2025-11-29 08:40:54.087 233728 DEBUG oslo_concurrency.lockutils [req-76c5f976-86a4-436a-a90b-8a45a6d05ea3 req-2744e129-5c45-44d2-9f82-fd4796e293a7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "fdf22d9d-2476-4990-b493-ae3ab31a8bb8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:54 np0005539552 nova_compute[233724]: 2025-11-29 08:40:54.088 233728 DEBUG oslo_concurrency.lockutils [req-76c5f976-86a4-436a-a90b-8a45a6d05ea3 req-2744e129-5c45-44d2-9f82-fd4796e293a7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "fdf22d9d-2476-4990-b493-ae3ab31a8bb8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:54 np0005539552 nova_compute[233724]: 2025-11-29 08:40:54.088 233728 DEBUG oslo_concurrency.lockutils [req-76c5f976-86a4-436a-a90b-8a45a6d05ea3 req-2744e129-5c45-44d2-9f82-fd4796e293a7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "fdf22d9d-2476-4990-b493-ae3ab31a8bb8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:54 np0005539552 nova_compute[233724]: 2025-11-29 08:40:54.088 233728 DEBUG nova.compute.manager [req-76c5f976-86a4-436a-a90b-8a45a6d05ea3 req-2744e129-5c45-44d2-9f82-fd4796e293a7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] No waiting events found dispatching network-vif-unplugged-7528e8e3-686a-4176-98d8-1f4f2e96b5c0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:40:54 np0005539552 nova_compute[233724]: 2025-11-29 08:40:54.089 233728 DEBUG nova.compute.manager [req-76c5f976-86a4-436a-a90b-8a45a6d05ea3 req-2744e129-5c45-44d2-9f82-fd4796e293a7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Received event network-vif-unplugged-7528e8e3-686a-4176-98d8-1f4f2e96b5c0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:40:54 np0005539552 nova_compute[233724]: 2025-11-29 08:40:54.089 233728 DEBUG nova.compute.manager [req-76c5f976-86a4-436a-a90b-8a45a6d05ea3 req-2744e129-5c45-44d2-9f82-fd4796e293a7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Received event network-vif-plugged-7528e8e3-686a-4176-98d8-1f4f2e96b5c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:40:54 np0005539552 nova_compute[233724]: 2025-11-29 08:40:54.090 233728 DEBUG oslo_concurrency.lockutils [req-76c5f976-86a4-436a-a90b-8a45a6d05ea3 req-2744e129-5c45-44d2-9f82-fd4796e293a7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "fdf22d9d-2476-4990-b493-ae3ab31a8bb8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:54 np0005539552 nova_compute[233724]: 2025-11-29 08:40:54.090 233728 DEBUG oslo_concurrency.lockutils [req-76c5f976-86a4-436a-a90b-8a45a6d05ea3 req-2744e129-5c45-44d2-9f82-fd4796e293a7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "fdf22d9d-2476-4990-b493-ae3ab31a8bb8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:54 np0005539552 nova_compute[233724]: 2025-11-29 08:40:54.090 233728 DEBUG oslo_concurrency.lockutils [req-76c5f976-86a4-436a-a90b-8a45a6d05ea3 req-2744e129-5c45-44d2-9f82-fd4796e293a7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "fdf22d9d-2476-4990-b493-ae3ab31a8bb8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:54 np0005539552 nova_compute[233724]: 2025-11-29 08:40:54.091 233728 DEBUG nova.compute.manager [req-76c5f976-86a4-436a-a90b-8a45a6d05ea3 req-2744e129-5c45-44d2-9f82-fd4796e293a7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] No waiting events found dispatching network-vif-plugged-7528e8e3-686a-4176-98d8-1f4f2e96b5c0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:40:54 np0005539552 nova_compute[233724]: 2025-11-29 08:40:54.091 233728 WARNING nova.compute.manager [req-76c5f976-86a4-436a-a90b-8a45a6d05ea3 req-2744e129-5c45-44d2-9f82-fd4796e293a7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Received unexpected event network-vif-plugged-7528e8e3-686a-4176-98d8-1f4f2e96b5c0 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:40:54 np0005539552 nova_compute[233724]: 2025-11-29 08:40:54.248 233728 DEBUG nova.network.neutron [-] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:40:54 np0005539552 nova_compute[233724]: 2025-11-29 08:40:54.269 233728 INFO nova.compute.manager [-] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Took 0.69 seconds to deallocate network for instance.#033[00m
Nov 29 03:40:54 np0005539552 nova_compute[233724]: 2025-11-29 08:40:54.335 233728 DEBUG oslo_concurrency.lockutils [None req-b50521c2-7134-4a70-9a54-e6aa3464c103 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:54 np0005539552 nova_compute[233724]: 2025-11-29 08:40:54.336 233728 DEBUG oslo_concurrency.lockutils [None req-b50521c2-7134-4a70-9a54-e6aa3464c103 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:54 np0005539552 nova_compute[233724]: 2025-11-29 08:40:54.427 233728 DEBUG oslo_concurrency.processutils [None req-b50521c2-7134-4a70-9a54-e6aa3464c103 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:40:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:54.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:54 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:40:54 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/728158023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:40:54 np0005539552 nova_compute[233724]: 2025-11-29 08:40:54.898 233728 DEBUG oslo_concurrency.processutils [None req-b50521c2-7134-4a70-9a54-e6aa3464c103 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:40:54 np0005539552 nova_compute[233724]: 2025-11-29 08:40:54.907 233728 DEBUG nova.compute.provider_tree [None req-b50521c2-7134-4a70-9a54-e6aa3464c103 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:40:54 np0005539552 nova_compute[233724]: 2025-11-29 08:40:54.919 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:54 np0005539552 nova_compute[233724]: 2025-11-29 08:40:54.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:54.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:55 np0005539552 nova_compute[233724]: 2025-11-29 08:40:55.083 233728 DEBUG nova.scheduler.client.report [None req-b50521c2-7134-4a70-9a54-e6aa3464c103 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:40:55 np0005539552 nova_compute[233724]: 2025-11-29 08:40:55.105 233728 DEBUG oslo_concurrency.lockutils [None req-b50521c2-7134-4a70-9a54-e6aa3464c103 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:55 np0005539552 nova_compute[233724]: 2025-11-29 08:40:55.148 233728 INFO nova.scheduler.client.report [None req-b50521c2-7134-4a70-9a54-e6aa3464c103 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Deleted allocations for instance fdf22d9d-2476-4990-b493-ae3ab31a8bb8#033[00m
Nov 29 03:40:55 np0005539552 nova_compute[233724]: 2025-11-29 08:40:55.206 233728 DEBUG oslo_concurrency.lockutils [None req-b50521c2-7134-4a70-9a54-e6aa3464c103 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "fdf22d9d-2476-4990-b493-ae3ab31a8bb8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.396s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:56 np0005539552 nova_compute[233724]: 2025-11-29 08:40:56.183 233728 DEBUG nova.compute.manager [req-6056908f-7e9a-49f8-97f7-ec65b0d55ca1 req-a8a22719-90d5-4fac-ac9f-f6e7c2edcb22 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Received event network-vif-deleted-7528e8e3-686a-4176-98d8-1f4f2e96b5c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:40:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:56.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:40:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:56.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:40:57 np0005539552 nova_compute[233724]: 2025-11-29 08:40:57.102 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:57 np0005539552 nova_compute[233724]: 2025-11-29 08:40:57.502 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:58.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:58 np0005539552 nova_compute[233724]: 2025-11-29 08:40:58.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:58 np0005539552 nova_compute[233724]: 2025-11-29 08:40:58.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:40:58 np0005539552 nova_compute[233724]: 2025-11-29 08:40:58.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:40:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:40:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:58.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:59 np0005539552 nova_compute[233724]: 2025-11-29 08:40:59.187 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "refresh_cache-ee3de57a-2652-4819-880a-6217c00a67a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:40:59 np0005539552 nova_compute[233724]: 2025-11-29 08:40:59.188 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquired lock "refresh_cache-ee3de57a-2652-4819-880a-6217c00a67a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:40:59 np0005539552 nova_compute[233724]: 2025-11-29 08:40:59.188 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:40:59 np0005539552 nova_compute[233724]: 2025-11-29 08:40:59.188 233728 DEBUG nova.objects.instance [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ee3de57a-2652-4819-880a-6217c00a67a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:41:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:00.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:00 np0005539552 ovn_controller[133798]: 2025-11-29T08:41:00Z|00863|binding|INFO|Releasing lport 1642a0e3-a8d4-4ee4-8971-26f27541a04e from this chassis (sb_readonly=0)
Nov 29 03:41:00 np0005539552 ovn_controller[133798]: 2025-11-29T08:41:00Z|00864|binding|INFO|Releasing lport 1642a0e3-a8d4-4ee4-8971-26f27541a04e from this chassis (sb_readonly=0)
Nov 29 03:41:00 np0005539552 nova_compute[233724]: 2025-11-29 08:41:00.897 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:00.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:01 np0005539552 nova_compute[233724]: 2025-11-29 08:41:01.537 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Updating instance_info_cache with network_info: [{"id": "dfb097a1-c82e-41b6-9a9f-57a7771a6e0a", "address": "fa:16:3e:0b:a6:18", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfb097a1-c8", "ovs_interfaceid": "dfb097a1-c82e-41b6-9a9f-57a7771a6e0a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:41:01 np0005539552 nova_compute[233724]: 2025-11-29 08:41:01.554 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Releasing lock "refresh_cache-ee3de57a-2652-4819-880a-6217c00a67a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:41:01 np0005539552 nova_compute[233724]: 2025-11-29 08:41:01.555 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:41:02 np0005539552 nova_compute[233724]: 2025-11-29 08:41:02.105 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:02 np0005539552 nova_compute[233724]: 2025-11-29 08:41:02.503 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:41:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:02.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:41:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:02.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:04.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:04.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:06.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:06.894 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=56, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=55) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:41:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:06.895 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:41:06 np0005539552 nova_compute[233724]: 2025-11-29 08:41:06.895 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:06.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:06 np0005539552 podman[313125]: 2025-11-29 08:41:06.983822325 +0000 UTC m=+0.076634164 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 03:41:07 np0005539552 podman[313126]: 2025-11-29 08:41:07.002543619 +0000 UTC m=+0.081320390 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:41:07 np0005539552 podman[313130]: 2025-11-29 08:41:07.039399841 +0000 UTC m=+0.112498529 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:41:07 np0005539552 nova_compute[233724]: 2025-11-29 08:41:07.107 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:07 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e395 e395: 3 total, 3 up, 3 in
Nov 29 03:41:07 np0005539552 nova_compute[233724]: 2025-11-29 08:41:07.465 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405652.4642339, fdf22d9d-2476-4990-b493-ae3ab31a8bb8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:41:07 np0005539552 nova_compute[233724]: 2025-11-29 08:41:07.466 233728 INFO nova.compute.manager [-] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:41:07 np0005539552 nova_compute[233724]: 2025-11-29 08:41:07.492 233728 DEBUG nova.compute.manager [None req-ec40fb7a-d855-41b4-a977-5f37d1381bc9 - - - - - -] [instance: fdf22d9d-2476-4990-b493-ae3ab31a8bb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:41:07 np0005539552 nova_compute[233724]: 2025-11-29 08:41:07.504 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:08 np0005539552 ovn_controller[133798]: 2025-11-29T08:41:08Z|00865|binding|INFO|Releasing lport 1642a0e3-a8d4-4ee4-8971-26f27541a04e from this chassis (sb_readonly=0)
Nov 29 03:41:08 np0005539552 nova_compute[233724]: 2025-11-29 08:41:08.366 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:08.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:08 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:08.896 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '56'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:08.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:09 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:41:09 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/110723875' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:41:09 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:41:09 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/110723875' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:41:09 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e396 e396: 3 total, 3 up, 3 in
Nov 29 03:41:09 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #130. Immutable memtables: 0.
Nov 29 03:41:09 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:41:09.186884) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:41:09 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 130
Nov 29 03:41:09 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405669186969, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 1896, "num_deletes": 254, "total_data_size": 4153236, "memory_usage": 4222272, "flush_reason": "Manual Compaction"}
Nov 29 03:41:09 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #131: started
Nov 29 03:41:09 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405669206503, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 131, "file_size": 2726750, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63692, "largest_seqno": 65583, "table_properties": {"data_size": 2718957, "index_size": 4606, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17585, "raw_average_key_size": 20, "raw_value_size": 2702832, "raw_average_value_size": 3202, "num_data_blocks": 200, "num_entries": 844, "num_filter_entries": 844, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405525, "oldest_key_time": 1764405525, "file_creation_time": 1764405669, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:41:09 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 19692 microseconds, and 8438 cpu microseconds.
Nov 29 03:41:09 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:41:09 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:41:09.206578) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #131: 2726750 bytes OK
Nov 29 03:41:09 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:41:09.206597) [db/memtable_list.cc:519] [default] Level-0 commit table #131 started
Nov 29 03:41:09 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:41:09.209042) [db/memtable_list.cc:722] [default] Level-0 commit table #131: memtable #1 done
Nov 29 03:41:09 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:41:09.209056) EVENT_LOG_v1 {"time_micros": 1764405669209051, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:41:09 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:41:09.209070) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:41:09 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 4144546, prev total WAL file size 4144546, number of live WAL files 2.
Nov 29 03:41:09 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000127.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:41:09 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:41:09.210298) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Nov 29 03:41:09 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:41:09 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [131(2662KB)], [129(10MB)]
Nov 29 03:41:09 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405669210378, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [131], "files_L6": [129], "score": -1, "input_data_size": 13244383, "oldest_snapshot_seqno": -1}
Nov 29 03:41:09 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #132: 9491 keys, 11345461 bytes, temperature: kUnknown
Nov 29 03:41:09 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405669314513, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 132, "file_size": 11345461, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11285527, "index_size": 35117, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23749, "raw_key_size": 250508, "raw_average_key_size": 26, "raw_value_size": 11119863, "raw_average_value_size": 1171, "num_data_blocks": 1332, "num_entries": 9491, "num_filter_entries": 9491, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764405669, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 132, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:41:09 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:41:09 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:41:09.314869) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 11345461 bytes
Nov 29 03:41:09 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:41:09.317142) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 127.0 rd, 108.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 10.0 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(9.0) write-amplify(4.2) OK, records in: 10018, records dropped: 527 output_compression: NoCompression
Nov 29 03:41:09 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:41:09.317163) EVENT_LOG_v1 {"time_micros": 1764405669317152, "job": 82, "event": "compaction_finished", "compaction_time_micros": 104274, "compaction_time_cpu_micros": 56352, "output_level": 6, "num_output_files": 1, "total_output_size": 11345461, "num_input_records": 10018, "num_output_records": 9491, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:41:09 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:41:09 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405669317867, "job": 82, "event": "table_file_deletion", "file_number": 131}
Nov 29 03:41:09 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000129.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:41:09 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405669320141, "job": 82, "event": "table_file_deletion", "file_number": 129}
Nov 29 03:41:09 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:41:09.210165) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:41:09 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:41:09.320243) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:41:09 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:41:09.320248) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:41:09 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:41:09.320249) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:41:09 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:41:09.320251) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:41:09 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:41:09.320252) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:41:10 np0005539552 nova_compute[233724]: 2025-11-29 08:41:10.364 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:10 np0005539552 nova_compute[233724]: 2025-11-29 08:41:10.551 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:41:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:10.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:10.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:11 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:41:11 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2434967663' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:41:11 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:41:11 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2434967663' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:41:11 np0005539552 nova_compute[233724]: 2025-11-29 08:41:11.477 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:41:12 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4105365321' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:41:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:41:12 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4105365321' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:41:12 np0005539552 nova_compute[233724]: 2025-11-29 08:41:12.110 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:12 np0005539552 nova_compute[233724]: 2025-11-29 08:41:12.505 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:12.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:12.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:14 np0005539552 nova_compute[233724]: 2025-11-29 08:41:14.468 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:41:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:14.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:41:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:14.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:15 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e397 e397: 3 total, 3 up, 3 in
Nov 29 03:41:15 np0005539552 nova_compute[233724]: 2025-11-29 08:41:15.604 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:16.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:16.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:17 np0005539552 nova_compute[233724]: 2025-11-29 08:41:17.112 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:17 np0005539552 nova_compute[233724]: 2025-11-29 08:41:17.508 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:18.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:18.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:20 np0005539552 nova_compute[233724]: 2025-11-29 08:41:20.139 233728 DEBUG nova.compute.manager [req-50b4c606-40ac-4a5e-a9f6-067840d30eec req-ac182867-d075-4ef0-a7c9-96aa1235532a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Received event network-changed-dfb097a1-c82e-41b6-9a9f-57a7771a6e0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:41:20 np0005539552 nova_compute[233724]: 2025-11-29 08:41:20.139 233728 DEBUG nova.compute.manager [req-50b4c606-40ac-4a5e-a9f6-067840d30eec req-ac182867-d075-4ef0-a7c9-96aa1235532a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Refreshing instance network info cache due to event network-changed-dfb097a1-c82e-41b6-9a9f-57a7771a6e0a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:41:20 np0005539552 nova_compute[233724]: 2025-11-29 08:41:20.139 233728 DEBUG oslo_concurrency.lockutils [req-50b4c606-40ac-4a5e-a9f6-067840d30eec req-ac182867-d075-4ef0-a7c9-96aa1235532a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-ee3de57a-2652-4819-880a-6217c00a67a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:41:20 np0005539552 nova_compute[233724]: 2025-11-29 08:41:20.139 233728 DEBUG oslo_concurrency.lockutils [req-50b4c606-40ac-4a5e-a9f6-067840d30eec req-ac182867-d075-4ef0-a7c9-96aa1235532a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-ee3de57a-2652-4819-880a-6217c00a67a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:41:20 np0005539552 nova_compute[233724]: 2025-11-29 08:41:20.139 233728 DEBUG nova.network.neutron [req-50b4c606-40ac-4a5e-a9f6-067840d30eec req-ac182867-d075-4ef0-a7c9-96aa1235532a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Refreshing network info cache for port dfb097a1-c82e-41b6-9a9f-57a7771a6e0a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:41:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:20.648 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:20.648 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:20.650 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:20.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:20 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:41:20 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4157368880' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:41:20 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:41:20 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4157368880' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:41:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:20.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:21 np0005539552 nova_compute[233724]: 2025-11-29 08:41:21.663 233728 DEBUG nova.network.neutron [req-50b4c606-40ac-4a5e-a9f6-067840d30eec req-ac182867-d075-4ef0-a7c9-96aa1235532a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Updated VIF entry in instance network info cache for port dfb097a1-c82e-41b6-9a9f-57a7771a6e0a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:41:21 np0005539552 nova_compute[233724]: 2025-11-29 08:41:21.664 233728 DEBUG nova.network.neutron [req-50b4c606-40ac-4a5e-a9f6-067840d30eec req-ac182867-d075-4ef0-a7c9-96aa1235532a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Updating instance_info_cache with network_info: [{"id": "dfb097a1-c82e-41b6-9a9f-57a7771a6e0a", "address": "fa:16:3e:0b:a6:18", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfb097a1-c8", "ovs_interfaceid": "dfb097a1-c82e-41b6-9a9f-57a7771a6e0a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:41:21 np0005539552 nova_compute[233724]: 2025-11-29 08:41:21.690 233728 DEBUG oslo_concurrency.lockutils [req-50b4c606-40ac-4a5e-a9f6-067840d30eec req-ac182867-d075-4ef0-a7c9-96aa1235532a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-ee3de57a-2652-4819-880a-6217c00a67a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:41:22 np0005539552 nova_compute[233724]: 2025-11-29 08:41:22.115 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:22 np0005539552 nova_compute[233724]: 2025-11-29 08:41:22.510 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:22.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:22.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:23 np0005539552 nova_compute[233724]: 2025-11-29 08:41:23.297 233728 DEBUG oslo_concurrency.lockutils [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "dfa97913-59ab-4f13-87e6-48d356827763" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:23 np0005539552 nova_compute[233724]: 2025-11-29 08:41:23.297 233728 DEBUG oslo_concurrency.lockutils [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "dfa97913-59ab-4f13-87e6-48d356827763" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:23 np0005539552 nova_compute[233724]: 2025-11-29 08:41:23.317 233728 DEBUG nova.compute.manager [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:41:23 np0005539552 nova_compute[233724]: 2025-11-29 08:41:23.418 233728 DEBUG oslo_concurrency.lockutils [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:23 np0005539552 nova_compute[233724]: 2025-11-29 08:41:23.419 233728 DEBUG oslo_concurrency.lockutils [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:23 np0005539552 nova_compute[233724]: 2025-11-29 08:41:23.426 233728 DEBUG nova.virt.hardware [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:41:23 np0005539552 nova_compute[233724]: 2025-11-29 08:41:23.427 233728 INFO nova.compute.claims [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:41:23 np0005539552 nova_compute[233724]: 2025-11-29 08:41:23.592 233728 DEBUG oslo_concurrency.processutils [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:41:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:24 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:41:24 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1796958856' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:41:24 np0005539552 nova_compute[233724]: 2025-11-29 08:41:24.066 233728 DEBUG oslo_concurrency.processutils [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:41:24 np0005539552 nova_compute[233724]: 2025-11-29 08:41:24.076 233728 DEBUG nova.compute.provider_tree [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:41:24 np0005539552 nova_compute[233724]: 2025-11-29 08:41:24.100 233728 DEBUG nova.scheduler.client.report [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:41:24 np0005539552 nova_compute[233724]: 2025-11-29 08:41:24.125 233728 DEBUG oslo_concurrency.lockutils [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:24 np0005539552 nova_compute[233724]: 2025-11-29 08:41:24.126 233728 DEBUG nova.compute.manager [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:41:24 np0005539552 nova_compute[233724]: 2025-11-29 08:41:24.182 233728 DEBUG nova.compute.manager [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:41:24 np0005539552 nova_compute[233724]: 2025-11-29 08:41:24.183 233728 DEBUG nova.network.neutron [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:41:24 np0005539552 nova_compute[233724]: 2025-11-29 08:41:24.208 233728 INFO nova.virt.libvirt.driver [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:41:24 np0005539552 nova_compute[233724]: 2025-11-29 08:41:24.226 233728 DEBUG nova.compute.manager [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:41:24 np0005539552 nova_compute[233724]: 2025-11-29 08:41:24.304 233728 DEBUG nova.compute.manager [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:41:24 np0005539552 nova_compute[233724]: 2025-11-29 08:41:24.305 233728 DEBUG nova.virt.libvirt.driver [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:41:24 np0005539552 nova_compute[233724]: 2025-11-29 08:41:24.306 233728 INFO nova.virt.libvirt.driver [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Creating image(s)#033[00m
Nov 29 03:41:24 np0005539552 nova_compute[233724]: 2025-11-29 08:41:24.332 233728 DEBUG nova.storage.rbd_utils [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image dfa97913-59ab-4f13-87e6-48d356827763_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:41:24 np0005539552 nova_compute[233724]: 2025-11-29 08:41:24.354 233728 DEBUG nova.storage.rbd_utils [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image dfa97913-59ab-4f13-87e6-48d356827763_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:41:24 np0005539552 nova_compute[233724]: 2025-11-29 08:41:24.378 233728 DEBUG nova.storage.rbd_utils [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image dfa97913-59ab-4f13-87e6-48d356827763_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:41:24 np0005539552 nova_compute[233724]: 2025-11-29 08:41:24.382 233728 DEBUG oslo_concurrency.processutils [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:41:24 np0005539552 nova_compute[233724]: 2025-11-29 08:41:24.450 233728 DEBUG oslo_concurrency.processutils [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:41:24 np0005539552 nova_compute[233724]: 2025-11-29 08:41:24.451 233728 DEBUG oslo_concurrency.lockutils [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:24 np0005539552 nova_compute[233724]: 2025-11-29 08:41:24.451 233728 DEBUG oslo_concurrency.lockutils [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:24 np0005539552 nova_compute[233724]: 2025-11-29 08:41:24.451 233728 DEBUG oslo_concurrency.lockutils [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:24 np0005539552 nova_compute[233724]: 2025-11-29 08:41:24.472 233728 DEBUG nova.storage.rbd_utils [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image dfa97913-59ab-4f13-87e6-48d356827763_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:41:24 np0005539552 nova_compute[233724]: 2025-11-29 08:41:24.476 233728 DEBUG oslo_concurrency.processutils [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 dfa97913-59ab-4f13-87e6-48d356827763_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:41:24 np0005539552 nova_compute[233724]: 2025-11-29 08:41:24.510 233728 DEBUG nova.policy [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4774e2851bc6407cb0fcde15bd24d1b3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0471b9b208874403aa3f0fbe7504ad19', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:41:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:24.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:24.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:25 np0005539552 nova_compute[233724]: 2025-11-29 08:41:25.142 233728 DEBUG oslo_concurrency.processutils [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 dfa97913-59ab-4f13-87e6-48d356827763_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.666s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:41:25 np0005539552 nova_compute[233724]: 2025-11-29 08:41:25.240 233728 DEBUG nova.storage.rbd_utils [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] resizing rbd image dfa97913-59ab-4f13-87e6-48d356827763_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:41:25 np0005539552 nova_compute[233724]: 2025-11-29 08:41:25.349 233728 DEBUG nova.objects.instance [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lazy-loading 'migration_context' on Instance uuid dfa97913-59ab-4f13-87e6-48d356827763 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:41:25 np0005539552 nova_compute[233724]: 2025-11-29 08:41:25.367 233728 DEBUG nova.virt.libvirt.driver [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:41:25 np0005539552 nova_compute[233724]: 2025-11-29 08:41:25.368 233728 DEBUG nova.virt.libvirt.driver [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Ensure instance console log exists: /var/lib/nova/instances/dfa97913-59ab-4f13-87e6-48d356827763/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:41:25 np0005539552 nova_compute[233724]: 2025-11-29 08:41:25.368 233728 DEBUG oslo_concurrency.lockutils [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:25 np0005539552 nova_compute[233724]: 2025-11-29 08:41:25.369 233728 DEBUG oslo_concurrency.lockutils [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:25 np0005539552 nova_compute[233724]: 2025-11-29 08:41:25.369 233728 DEBUG oslo_concurrency.lockutils [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:25 np0005539552 nova_compute[233724]: 2025-11-29 08:41:25.712 233728 DEBUG nova.network.neutron [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Successfully updated port: a02576c9-0b5e-4f90-83fd-ced6081e0046 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:41:25 np0005539552 nova_compute[233724]: 2025-11-29 08:41:25.727 233728 DEBUG oslo_concurrency.lockutils [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "refresh_cache-dfa97913-59ab-4f13-87e6-48d356827763" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:41:25 np0005539552 nova_compute[233724]: 2025-11-29 08:41:25.728 233728 DEBUG oslo_concurrency.lockutils [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquired lock "refresh_cache-dfa97913-59ab-4f13-87e6-48d356827763" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:41:25 np0005539552 nova_compute[233724]: 2025-11-29 08:41:25.728 233728 DEBUG nova.network.neutron [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:41:25 np0005539552 nova_compute[233724]: 2025-11-29 08:41:25.922 233728 DEBUG nova.network.neutron [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:41:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:26.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:26 np0005539552 nova_compute[233724]: 2025-11-29 08:41:26.721 233728 DEBUG nova.network.neutron [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Updating instance_info_cache with network_info: [{"id": "a02576c9-0b5e-4f90-83fd-ced6081e0046", "address": "fa:16:3e:9b:ac:79", "network": {"id": "18d68586-48a1-4a55-b48c-8e6a0493b378", "bridge": "br-int", "label": "tempest-network-smoke--1460638043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa02576c9-0b", "ovs_interfaceid": "a02576c9-0b5e-4f90-83fd-ced6081e0046", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:41:26 np0005539552 nova_compute[233724]: 2025-11-29 08:41:26.772 233728 DEBUG oslo_concurrency.lockutils [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Releasing lock "refresh_cache-dfa97913-59ab-4f13-87e6-48d356827763" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:41:26 np0005539552 nova_compute[233724]: 2025-11-29 08:41:26.772 233728 DEBUG nova.compute.manager [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Instance network_info: |[{"id": "a02576c9-0b5e-4f90-83fd-ced6081e0046", "address": "fa:16:3e:9b:ac:79", "network": {"id": "18d68586-48a1-4a55-b48c-8e6a0493b378", "bridge": "br-int", "label": "tempest-network-smoke--1460638043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa02576c9-0b", "ovs_interfaceid": "a02576c9-0b5e-4f90-83fd-ced6081e0046", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:41:26 np0005539552 nova_compute[233724]: 2025-11-29 08:41:26.777 233728 DEBUG nova.virt.libvirt.driver [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Start _get_guest_xml network_info=[{"id": "a02576c9-0b5e-4f90-83fd-ced6081e0046", "address": "fa:16:3e:9b:ac:79", "network": {"id": "18d68586-48a1-4a55-b48c-8e6a0493b378", "bridge": "br-int", "label": "tempest-network-smoke--1460638043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa02576c9-0b", "ovs_interfaceid": "a02576c9-0b5e-4f90-83fd-ced6081e0046", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:41:26 np0005539552 nova_compute[233724]: 2025-11-29 08:41:26.783 233728 WARNING nova.virt.libvirt.driver [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:41:26 np0005539552 nova_compute[233724]: 2025-11-29 08:41:26.788 233728 DEBUG nova.virt.libvirt.host [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:41:26 np0005539552 nova_compute[233724]: 2025-11-29 08:41:26.789 233728 DEBUG nova.virt.libvirt.host [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:41:26 np0005539552 nova_compute[233724]: 2025-11-29 08:41:26.792 233728 DEBUG nova.virt.libvirt.host [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:41:26 np0005539552 nova_compute[233724]: 2025-11-29 08:41:26.793 233728 DEBUG nova.virt.libvirt.host [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:41:26 np0005539552 nova_compute[233724]: 2025-11-29 08:41:26.795 233728 DEBUG nova.virt.libvirt.driver [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:41:26 np0005539552 nova_compute[233724]: 2025-11-29 08:41:26.796 233728 DEBUG nova.virt.hardware [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:41:26 np0005539552 nova_compute[233724]: 2025-11-29 08:41:26.797 233728 DEBUG nova.virt.hardware [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:41:26 np0005539552 nova_compute[233724]: 2025-11-29 08:41:26.797 233728 DEBUG nova.virt.hardware [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:41:26 np0005539552 nova_compute[233724]: 2025-11-29 08:41:26.798 233728 DEBUG nova.virt.hardware [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:41:26 np0005539552 nova_compute[233724]: 2025-11-29 08:41:26.798 233728 DEBUG nova.virt.hardware [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:41:26 np0005539552 nova_compute[233724]: 2025-11-29 08:41:26.798 233728 DEBUG nova.virt.hardware [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:41:26 np0005539552 nova_compute[233724]: 2025-11-29 08:41:26.799 233728 DEBUG nova.virt.hardware [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:41:26 np0005539552 nova_compute[233724]: 2025-11-29 08:41:26.799 233728 DEBUG nova.virt.hardware [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:41:26 np0005539552 nova_compute[233724]: 2025-11-29 08:41:26.800 233728 DEBUG nova.virt.hardware [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:41:26 np0005539552 nova_compute[233724]: 2025-11-29 08:41:26.800 233728 DEBUG nova.virt.hardware [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:41:26 np0005539552 nova_compute[233724]: 2025-11-29 08:41:26.801 233728 DEBUG nova.virt.hardware [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:41:26 np0005539552 nova_compute[233724]: 2025-11-29 08:41:26.807 233728 DEBUG oslo_concurrency.processutils [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:41:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:26.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:27 np0005539552 nova_compute[233724]: 2025-11-29 08:41:27.117 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:41:27 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2922742563' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:41:27 np0005539552 nova_compute[233724]: 2025-11-29 08:41:27.274 233728 DEBUG oslo_concurrency.processutils [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:41:27 np0005539552 nova_compute[233724]: 2025-11-29 08:41:27.310 233728 DEBUG nova.storage.rbd_utils [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image dfa97913-59ab-4f13-87e6-48d356827763_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:41:27 np0005539552 nova_compute[233724]: 2025-11-29 08:41:27.316 233728 DEBUG oslo_concurrency.processutils [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:41:27 np0005539552 nova_compute[233724]: 2025-11-29 08:41:27.512 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:41:27 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3582724446' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:41:27 np0005539552 nova_compute[233724]: 2025-11-29 08:41:27.770 233728 DEBUG oslo_concurrency.processutils [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:41:27 np0005539552 nova_compute[233724]: 2025-11-29 08:41:27.773 233728 DEBUG nova.virt.libvirt.vif [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:41:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1437681169',display_name='tempest-TestNetworkBasicOps-server-1437681169',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1437681169',id=192,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN7dPms6GIG8K510MmN+uQbQqys2LGFdaegOlkWQqXJwICLLAMADam1h9Eh+7WSIz2rDMkfDbwB9UmpA6qe7juDgKenICfv1p3HLiaVP2FG7rOmOw5ZyFOEjIR8I1gRGaA==',key_name='tempest-TestNetworkBasicOps-1516534575',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-7xne95tw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:41:24Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=dfa97913-59ab-4f13-87e6-48d356827763,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a02576c9-0b5e-4f90-83fd-ced6081e0046", "address": "fa:16:3e:9b:ac:79", "network": {"id": "18d68586-48a1-4a55-b48c-8e6a0493b378", "bridge": "br-int", "label": "tempest-network-smoke--1460638043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa02576c9-0b", "ovs_interfaceid": "a02576c9-0b5e-4f90-83fd-ced6081e0046", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:41:27 np0005539552 nova_compute[233724]: 2025-11-29 08:41:27.774 233728 DEBUG nova.network.os_vif_util [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "a02576c9-0b5e-4f90-83fd-ced6081e0046", "address": "fa:16:3e:9b:ac:79", "network": {"id": "18d68586-48a1-4a55-b48c-8e6a0493b378", "bridge": "br-int", "label": "tempest-network-smoke--1460638043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa02576c9-0b", "ovs_interfaceid": "a02576c9-0b5e-4f90-83fd-ced6081e0046", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:41:27 np0005539552 nova_compute[233724]: 2025-11-29 08:41:27.775 233728 DEBUG nova.network.os_vif_util [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:ac:79,bridge_name='br-int',has_traffic_filtering=True,id=a02576c9-0b5e-4f90-83fd-ced6081e0046,network=Network(18d68586-48a1-4a55-b48c-8e6a0493b378),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa02576c9-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:41:27 np0005539552 nova_compute[233724]: 2025-11-29 08:41:27.777 233728 DEBUG nova.objects.instance [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lazy-loading 'pci_devices' on Instance uuid dfa97913-59ab-4f13-87e6-48d356827763 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:41:27 np0005539552 nova_compute[233724]: 2025-11-29 08:41:27.800 233728 DEBUG nova.virt.libvirt.driver [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:41:27 np0005539552 nova_compute[233724]:  <uuid>dfa97913-59ab-4f13-87e6-48d356827763</uuid>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:  <name>instance-000000c0</name>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:41:27 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:      <nova:name>tempest-TestNetworkBasicOps-server-1437681169</nova:name>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:41:26</nova:creationTime>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:41:27 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:        <nova:user uuid="4774e2851bc6407cb0fcde15bd24d1b3">tempest-TestNetworkBasicOps-828399474-project-member</nova:user>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:        <nova:project uuid="0471b9b208874403aa3f0fbe7504ad19">tempest-TestNetworkBasicOps-828399474</nova:project>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:        <nova:port uuid="a02576c9-0b5e-4f90-83fd-ced6081e0046">
Nov 29 03:41:27 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:      <entry name="serial">dfa97913-59ab-4f13-87e6-48d356827763</entry>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:      <entry name="uuid">dfa97913-59ab-4f13-87e6-48d356827763</entry>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:41:27 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/dfa97913-59ab-4f13-87e6-48d356827763_disk">
Nov 29 03:41:27 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:41:27 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:41:27 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/dfa97913-59ab-4f13-87e6-48d356827763_disk.config">
Nov 29 03:41:27 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:41:27 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:41:27 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:9b:ac:79"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:      <target dev="tapa02576c9-0b"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:41:27 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/dfa97913-59ab-4f13-87e6-48d356827763/console.log" append="off"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:41:27 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:41:27 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:41:27 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:41:27 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:41:27 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:41:27 np0005539552 nova_compute[233724]: 2025-11-29 08:41:27.802 233728 DEBUG nova.compute.manager [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Preparing to wait for external event network-vif-plugged-a02576c9-0b5e-4f90-83fd-ced6081e0046 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:41:27 np0005539552 nova_compute[233724]: 2025-11-29 08:41:27.802 233728 DEBUG oslo_concurrency.lockutils [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "dfa97913-59ab-4f13-87e6-48d356827763-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:27 np0005539552 nova_compute[233724]: 2025-11-29 08:41:27.803 233728 DEBUG oslo_concurrency.lockutils [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "dfa97913-59ab-4f13-87e6-48d356827763-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:27 np0005539552 nova_compute[233724]: 2025-11-29 08:41:27.803 233728 DEBUG oslo_concurrency.lockutils [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "dfa97913-59ab-4f13-87e6-48d356827763-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:27 np0005539552 nova_compute[233724]: 2025-11-29 08:41:27.805 233728 DEBUG nova.virt.libvirt.vif [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:41:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1437681169',display_name='tempest-TestNetworkBasicOps-server-1437681169',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1437681169',id=192,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN7dPms6GIG8K510MmN+uQbQqys2LGFdaegOlkWQqXJwICLLAMADam1h9Eh+7WSIz2rDMkfDbwB9UmpA6qe7juDgKenICfv1p3HLiaVP2FG7rOmOw5ZyFOEjIR8I1gRGaA==',key_name='tempest-TestNetworkBasicOps-1516534575',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-7xne95tw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:41:24Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=dfa97913-59ab-4f13-87e6-48d356827763,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a02576c9-0b5e-4f90-83fd-ced6081e0046", "address": "fa:16:3e:9b:ac:79", "network": {"id": "18d68586-48a1-4a55-b48c-8e6a0493b378", "bridge": "br-int", "label": "tempest-network-smoke--1460638043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa02576c9-0b", "ovs_interfaceid": "a02576c9-0b5e-4f90-83fd-ced6081e0046", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:41:27 np0005539552 nova_compute[233724]: 2025-11-29 08:41:27.805 233728 DEBUG nova.network.os_vif_util [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "a02576c9-0b5e-4f90-83fd-ced6081e0046", "address": "fa:16:3e:9b:ac:79", "network": {"id": "18d68586-48a1-4a55-b48c-8e6a0493b378", "bridge": "br-int", "label": "tempest-network-smoke--1460638043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa02576c9-0b", "ovs_interfaceid": "a02576c9-0b5e-4f90-83fd-ced6081e0046", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:41:27 np0005539552 nova_compute[233724]: 2025-11-29 08:41:27.806 233728 DEBUG nova.network.os_vif_util [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:ac:79,bridge_name='br-int',has_traffic_filtering=True,id=a02576c9-0b5e-4f90-83fd-ced6081e0046,network=Network(18d68586-48a1-4a55-b48c-8e6a0493b378),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa02576c9-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:41:27 np0005539552 nova_compute[233724]: 2025-11-29 08:41:27.807 233728 DEBUG os_vif [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:ac:79,bridge_name='br-int',has_traffic_filtering=True,id=a02576c9-0b5e-4f90-83fd-ced6081e0046,network=Network(18d68586-48a1-4a55-b48c-8e6a0493b378),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa02576c9-0b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:41:27 np0005539552 nova_compute[233724]: 2025-11-29 08:41:27.808 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:27 np0005539552 nova_compute[233724]: 2025-11-29 08:41:27.809 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:27 np0005539552 nova_compute[233724]: 2025-11-29 08:41:27.810 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:41:27 np0005539552 nova_compute[233724]: 2025-11-29 08:41:27.814 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:27 np0005539552 nova_compute[233724]: 2025-11-29 08:41:27.815 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa02576c9-0b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:27 np0005539552 nova_compute[233724]: 2025-11-29 08:41:27.815 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa02576c9-0b, col_values=(('external_ids', {'iface-id': 'a02576c9-0b5e-4f90-83fd-ced6081e0046', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9b:ac:79', 'vm-uuid': 'dfa97913-59ab-4f13-87e6-48d356827763'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:27 np0005539552 nova_compute[233724]: 2025-11-29 08:41:27.817 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:27 np0005539552 NetworkManager[48926]: <info>  [1764405687.8188] manager: (tapa02576c9-0b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/380)
Nov 29 03:41:27 np0005539552 nova_compute[233724]: 2025-11-29 08:41:27.822 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:41:27 np0005539552 nova_compute[233724]: 2025-11-29 08:41:27.827 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:27 np0005539552 nova_compute[233724]: 2025-11-29 08:41:27.828 233728 INFO os_vif [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:ac:79,bridge_name='br-int',has_traffic_filtering=True,id=a02576c9-0b5e-4f90-83fd-ced6081e0046,network=Network(18d68586-48a1-4a55-b48c-8e6a0493b378),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa02576c9-0b')#033[00m
Nov 29 03:41:27 np0005539552 nova_compute[233724]: 2025-11-29 08:41:27.835 233728 DEBUG nova.compute.manager [req-08b20e10-3631-4003-ae18-6e286fac4e8b req-7cf745e4-5fde-4abb-8d02-851aaf69d8ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Received event network-changed-a02576c9-0b5e-4f90-83fd-ced6081e0046 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:41:27 np0005539552 nova_compute[233724]: 2025-11-29 08:41:27.835 233728 DEBUG nova.compute.manager [req-08b20e10-3631-4003-ae18-6e286fac4e8b req-7cf745e4-5fde-4abb-8d02-851aaf69d8ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Refreshing instance network info cache due to event network-changed-a02576c9-0b5e-4f90-83fd-ced6081e0046. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:41:27 np0005539552 nova_compute[233724]: 2025-11-29 08:41:27.836 233728 DEBUG oslo_concurrency.lockutils [req-08b20e10-3631-4003-ae18-6e286fac4e8b req-7cf745e4-5fde-4abb-8d02-851aaf69d8ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-dfa97913-59ab-4f13-87e6-48d356827763" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:41:27 np0005539552 nova_compute[233724]: 2025-11-29 08:41:27.836 233728 DEBUG oslo_concurrency.lockutils [req-08b20e10-3631-4003-ae18-6e286fac4e8b req-7cf745e4-5fde-4abb-8d02-851aaf69d8ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-dfa97913-59ab-4f13-87e6-48d356827763" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:41:27 np0005539552 nova_compute[233724]: 2025-11-29 08:41:27.837 233728 DEBUG nova.network.neutron [req-08b20e10-3631-4003-ae18-6e286fac4e8b req-7cf745e4-5fde-4abb-8d02-851aaf69d8ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Refreshing network info cache for port a02576c9-0b5e-4f90-83fd-ced6081e0046 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:41:27 np0005539552 nova_compute[233724]: 2025-11-29 08:41:27.897 233728 DEBUG nova.virt.libvirt.driver [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:41:27 np0005539552 nova_compute[233724]: 2025-11-29 08:41:27.898 233728 DEBUG nova.virt.libvirt.driver [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:41:27 np0005539552 nova_compute[233724]: 2025-11-29 08:41:27.898 233728 DEBUG nova.virt.libvirt.driver [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No VIF found with MAC fa:16:3e:9b:ac:79, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:41:27 np0005539552 nova_compute[233724]: 2025-11-29 08:41:27.899 233728 INFO nova.virt.libvirt.driver [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Using config drive#033[00m
Nov 29 03:41:27 np0005539552 nova_compute[233724]: 2025-11-29 08:41:27.926 233728 DEBUG nova.storage.rbd_utils [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image dfa97913-59ab-4f13-87e6-48d356827763_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:41:28 np0005539552 nova_compute[233724]: 2025-11-29 08:41:28.049 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:28 np0005539552 nova_compute[233724]: 2025-11-29 08:41:28.166 233728 DEBUG oslo_concurrency.lockutils [None req-3122e16c-af6f-4c2a-8c31-91c013c58525 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Acquiring lock "ee3de57a-2652-4819-880a-6217c00a67a0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:28 np0005539552 nova_compute[233724]: 2025-11-29 08:41:28.167 233728 DEBUG oslo_concurrency.lockutils [None req-3122e16c-af6f-4c2a-8c31-91c013c58525 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "ee3de57a-2652-4819-880a-6217c00a67a0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:28 np0005539552 nova_compute[233724]: 2025-11-29 08:41:28.167 233728 DEBUG oslo_concurrency.lockutils [None req-3122e16c-af6f-4c2a-8c31-91c013c58525 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Acquiring lock "ee3de57a-2652-4819-880a-6217c00a67a0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:28 np0005539552 nova_compute[233724]: 2025-11-29 08:41:28.168 233728 DEBUG oslo_concurrency.lockutils [None req-3122e16c-af6f-4c2a-8c31-91c013c58525 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "ee3de57a-2652-4819-880a-6217c00a67a0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:28 np0005539552 nova_compute[233724]: 2025-11-29 08:41:28.169 233728 DEBUG oslo_concurrency.lockutils [None req-3122e16c-af6f-4c2a-8c31-91c013c58525 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "ee3de57a-2652-4819-880a-6217c00a67a0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:28 np0005539552 nova_compute[233724]: 2025-11-29 08:41:28.171 233728 INFO nova.compute.manager [None req-3122e16c-af6f-4c2a-8c31-91c013c58525 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Terminating instance#033[00m
Nov 29 03:41:28 np0005539552 nova_compute[233724]: 2025-11-29 08:41:28.173 233728 DEBUG nova.compute.manager [None req-3122e16c-af6f-4c2a-8c31-91c013c58525 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:41:28 np0005539552 kernel: tapdfb097a1-c8 (unregistering): left promiscuous mode
Nov 29 03:41:28 np0005539552 NetworkManager[48926]: <info>  [1764405688.2480] device (tapdfb097a1-c8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:41:28 np0005539552 nova_compute[233724]: 2025-11-29 08:41:28.257 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:28 np0005539552 ovn_controller[133798]: 2025-11-29T08:41:28Z|00866|binding|INFO|Releasing lport dfb097a1-c82e-41b6-9a9f-57a7771a6e0a from this chassis (sb_readonly=0)
Nov 29 03:41:28 np0005539552 ovn_controller[133798]: 2025-11-29T08:41:28Z|00867|binding|INFO|Setting lport dfb097a1-c82e-41b6-9a9f-57a7771a6e0a down in Southbound
Nov 29 03:41:28 np0005539552 ovn_controller[133798]: 2025-11-29T08:41:28Z|00868|binding|INFO|Removing iface tapdfb097a1-c8 ovn-installed in OVS
Nov 29 03:41:28 np0005539552 nova_compute[233724]: 2025-11-29 08:41:28.261 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:28.266 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:a6:18 10.100.0.9'], port_security=['fa:16:3e:0b:a6:18 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ee3de57a-2652-4819-880a-6217c00a67a0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c114bc23-cd62-4198-a95d-5595953a88bd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '62ca01275fe34ea0af31d00b34d6d9a5', 'neutron:revision_number': '6', 'neutron:security_group_ids': '9ca74358-0566-4f32-a6ba-a0c4dcd1723c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6cd3a0f0-9ad7-457d-b2e3-d5300cfee042, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=dfb097a1-c82e-41b6-9a9f-57a7771a6e0a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:41:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:28.268 143400 INFO neutron.agent.ovn.metadata.agent [-] Port dfb097a1-c82e-41b6-9a9f-57a7771a6e0a in datapath c114bc23-cd62-4198-a95d-5595953a88bd unbound from our chassis#033[00m
Nov 29 03:41:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:28.271 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c114bc23-cd62-4198-a95d-5595953a88bd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:41:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:28.275 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ea7c9d3b-d16f-4379-9464-88e98ad333fe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:28.276 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd namespace which is not needed anymore#033[00m
Nov 29 03:41:28 np0005539552 nova_compute[233724]: 2025-11-29 08:41:28.291 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:28 np0005539552 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d000000ba.scope: Deactivated successfully.
Nov 29 03:41:28 np0005539552 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d000000ba.scope: Consumed 21.736s CPU time.
Nov 29 03:41:28 np0005539552 systemd-machined[196379]: Machine qemu-86-instance-000000ba terminated.
Nov 29 03:41:28 np0005539552 nova_compute[233724]: 2025-11-29 08:41:28.427 233728 INFO nova.virt.libvirt.driver [-] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Instance destroyed successfully.#033[00m
Nov 29 03:41:28 np0005539552 nova_compute[233724]: 2025-11-29 08:41:28.429 233728 DEBUG nova.objects.instance [None req-3122e16c-af6f-4c2a-8c31-91c013c58525 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lazy-loading 'resources' on Instance uuid ee3de57a-2652-4819-880a-6217c00a67a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:41:28 np0005539552 nova_compute[233724]: 2025-11-29 08:41:28.447 233728 DEBUG nova.virt.libvirt.vif [None req-3122e16c-af6f-4c2a-8c31-91c013c58525 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:39:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-94439515',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-94439515',id=186,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFG6wdTr4YnTt5IOi90oQevRIaDEFT6evKD2WqzrA5InuHLLPBBDt+A3IDlfUfF0+VTQ8wx7jPD+CP0zgY5zll3JN5Id1HeD6V5ixHcQktu+0EcaYFcg2TVX8XapVterdw==',key_name='tempest-TestInstancesWithCinderVolumes-1193741997',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:39:30Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='62ca01275fe34ea0af31d00b34d6d9a5',ramdisk_id='',reservation_id='r-nd3eyux6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestInstancesWithCinderVolumes-911868990',owner_user_name='tempest-TestInstancesWithCinderVolumes-911868990-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:39:30Z,user_data=None,user_id='facf4db8501041ab9628ff9f5684c992',uuid=ee3de57a-2652-4819-880a-6217c00a67a0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dfb097a1-c82e-41b6-9a9f-57a7771a6e0a", "address": "fa:16:3e:0b:a6:18", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfb097a1-c8", "ovs_interfaceid": "dfb097a1-c82e-41b6-9a9f-57a7771a6e0a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:41:28 np0005539552 nova_compute[233724]: 2025-11-29 08:41:28.448 233728 DEBUG nova.network.os_vif_util [None req-3122e16c-af6f-4c2a-8c31-91c013c58525 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Converting VIF {"id": "dfb097a1-c82e-41b6-9a9f-57a7771a6e0a", "address": "fa:16:3e:0b:a6:18", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfb097a1-c8", "ovs_interfaceid": "dfb097a1-c82e-41b6-9a9f-57a7771a6e0a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:41:28 np0005539552 nova_compute[233724]: 2025-11-29 08:41:28.449 233728 DEBUG nova.network.os_vif_util [None req-3122e16c-af6f-4c2a-8c31-91c013c58525 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0b:a6:18,bridge_name='br-int',has_traffic_filtering=True,id=dfb097a1-c82e-41b6-9a9f-57a7771a6e0a,network=Network(c114bc23-cd62-4198-a95d-5595953a88bd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdfb097a1-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:41:28 np0005539552 nova_compute[233724]: 2025-11-29 08:41:28.450 233728 DEBUG os_vif [None req-3122e16c-af6f-4c2a-8c31-91c013c58525 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0b:a6:18,bridge_name='br-int',has_traffic_filtering=True,id=dfb097a1-c82e-41b6-9a9f-57a7771a6e0a,network=Network(c114bc23-cd62-4198-a95d-5595953a88bd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdfb097a1-c8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:41:28 np0005539552 nova_compute[233724]: 2025-11-29 08:41:28.452 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:28 np0005539552 nova_compute[233724]: 2025-11-29 08:41:28.453 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdfb097a1-c8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:28 np0005539552 nova_compute[233724]: 2025-11-29 08:41:28.455 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:28 np0005539552 nova_compute[233724]: 2025-11-29 08:41:28.464 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:28 np0005539552 nova_compute[233724]: 2025-11-29 08:41:28.469 233728 INFO nova.virt.libvirt.driver [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Creating config drive at /var/lib/nova/instances/dfa97913-59ab-4f13-87e6-48d356827763/disk.config#033[00m
Nov 29 03:41:28 np0005539552 neutron-haproxy-ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd[311109]: [NOTICE]   (311113) : haproxy version is 2.8.14-c23fe91
Nov 29 03:41:28 np0005539552 neutron-haproxy-ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd[311109]: [NOTICE]   (311113) : path to executable is /usr/sbin/haproxy
Nov 29 03:41:28 np0005539552 neutron-haproxy-ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd[311109]: [WARNING]  (311113) : Exiting Master process...
Nov 29 03:41:28 np0005539552 neutron-haproxy-ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd[311109]: [WARNING]  (311113) : Exiting Master process...
Nov 29 03:41:28 np0005539552 neutron-haproxy-ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd[311109]: [ALERT]    (311113) : Current worker (311115) exited with code 143 (Terminated)
Nov 29 03:41:28 np0005539552 neutron-haproxy-ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd[311109]: [WARNING]  (311113) : All workers exited. Exiting... (0)
Nov 29 03:41:28 np0005539552 nova_compute[233724]: 2025-11-29 08:41:28.476 233728 DEBUG oslo_concurrency.processutils [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dfa97913-59ab-4f13-87e6-48d356827763/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1rue391r execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:41:28 np0005539552 systemd[1]: libpod-ed9afb0afe3957ee4e4ebf7a6c57290890bea4a8af78facf5b1bf04cdf399da1.scope: Deactivated successfully.
Nov 29 03:41:28 np0005539552 podman[313546]: 2025-11-29 08:41:28.483079429 +0000 UTC m=+0.068977557 container died ed9afb0afe3957ee4e4ebf7a6c57290890bea4a8af78facf5b1bf04cdf399da1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:41:28 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ed9afb0afe3957ee4e4ebf7a6c57290890bea4a8af78facf5b1bf04cdf399da1-userdata-shm.mount: Deactivated successfully.
Nov 29 03:41:28 np0005539552 nova_compute[233724]: 2025-11-29 08:41:28.526 233728 INFO os_vif [None req-3122e16c-af6f-4c2a-8c31-91c013c58525 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0b:a6:18,bridge_name='br-int',has_traffic_filtering=True,id=dfb097a1-c82e-41b6-9a9f-57a7771a6e0a,network=Network(c114bc23-cd62-4198-a95d-5595953a88bd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdfb097a1-c8')#033[00m
Nov 29 03:41:28 np0005539552 systemd[1]: var-lib-containers-storage-overlay-932134c72d55dc0ad244bc599f254238fa1755fb24f7c64026dc5bc15b5756bf-merged.mount: Deactivated successfully.
Nov 29 03:41:28 np0005539552 podman[313546]: 2025-11-29 08:41:28.549629511 +0000 UTC m=+0.135527639 container cleanup ed9afb0afe3957ee4e4ebf7a6c57290890bea4a8af78facf5b1bf04cdf399da1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:41:28 np0005539552 systemd[1]: libpod-conmon-ed9afb0afe3957ee4e4ebf7a6c57290890bea4a8af78facf5b1bf04cdf399da1.scope: Deactivated successfully.
Nov 29 03:41:28 np0005539552 podman[313629]: 2025-11-29 08:41:28.637460795 +0000 UTC m=+0.054127648 container remove ed9afb0afe3957ee4e4ebf7a6c57290890bea4a8af78facf5b1bf04cdf399da1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 03:41:28 np0005539552 nova_compute[233724]: 2025-11-29 08:41:28.637 233728 DEBUG oslo_concurrency.processutils [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dfa97913-59ab-4f13-87e6-48d356827763/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1rue391r" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:41:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:28.644 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[98c3477f-fdcb-4099-8c08-63d63036324e]: (4, ('Sat Nov 29 08:41:28 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd (ed9afb0afe3957ee4e4ebf7a6c57290890bea4a8af78facf5b1bf04cdf399da1)\ned9afb0afe3957ee4e4ebf7a6c57290890bea4a8af78facf5b1bf04cdf399da1\nSat Nov 29 08:41:28 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd (ed9afb0afe3957ee4e4ebf7a6c57290890bea4a8af78facf5b1bf04cdf399da1)\ned9afb0afe3957ee4e4ebf7a6c57290890bea4a8af78facf5b1bf04cdf399da1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:28.646 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[cab0b983-15b6-4926-a6fb-cde1cb5dc977]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:28.647 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc114bc23-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:28 np0005539552 kernel: tapc114bc23-c0: left promiscuous mode
Nov 29 03:41:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:28.676 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6b50f450-0379-4f4d-b7ef-efd68b16b29f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:28 np0005539552 nova_compute[233724]: 2025-11-29 08:41:28.686 233728 DEBUG nova.storage.rbd_utils [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image dfa97913-59ab-4f13-87e6-48d356827763_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:41:28 np0005539552 nova_compute[233724]: 2025-11-29 08:41:28.693 233728 DEBUG oslo_concurrency.processutils [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dfa97913-59ab-4f13-87e6-48d356827763/disk.config dfa97913-59ab-4f13-87e6-48d356827763_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:41:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:28.693 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[13a72912-bbca-40f5-92bf-c5a0c4c16958]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:28.695 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a0d33d02-b73f-4366-8f4b-dcc2e6431bfc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:41:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:28.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:41:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:28.711 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[40662815-8a98-4a0c-89ca-b58e8651a873]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 861467, 'reachable_time': 22380, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313695, 'error': None, 'target': 'ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:28 np0005539552 systemd[1]: run-netns-ovnmeta\x2dc114bc23\x2dcd62\x2d4198\x2da95d\x2d5595953a88bd.mount: Deactivated successfully.
Nov 29 03:41:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:28.715 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:41:28 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:28.715 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[6f3e742f-cb88-4a57-9efa-42aa2764fa96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:28 np0005539552 nova_compute[233724]: 2025-11-29 08:41:28.742 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:28 np0005539552 nova_compute[233724]: 2025-11-29 08:41:28.913 233728 DEBUG oslo_concurrency.processutils [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dfa97913-59ab-4f13-87e6-48d356827763/disk.config dfa97913-59ab-4f13-87e6-48d356827763_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.220s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:41:28 np0005539552 nova_compute[233724]: 2025-11-29 08:41:28.915 233728 INFO nova.virt.libvirt.driver [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Deleting local config drive /var/lib/nova/instances/dfa97913-59ab-4f13-87e6-48d356827763/disk.config because it was imported into RBD.#033[00m
Nov 29 03:41:28 np0005539552 nova_compute[233724]: 2025-11-29 08:41:28.949 233728 INFO nova.virt.libvirt.driver [None req-3122e16c-af6f-4c2a-8c31-91c013c58525 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Deleting instance files /var/lib/nova/instances/ee3de57a-2652-4819-880a-6217c00a67a0_del#033[00m
Nov 29 03:41:28 np0005539552 nova_compute[233724]: 2025-11-29 08:41:28.950 233728 INFO nova.virt.libvirt.driver [None req-3122e16c-af6f-4c2a-8c31-91c013c58525 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Deletion of /var/lib/nova/instances/ee3de57a-2652-4819-880a-6217c00a67a0_del complete#033[00m
Nov 29 03:41:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:28.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:28 np0005539552 nova_compute[233724]: 2025-11-29 08:41:28.994 233728 INFO nova.compute.manager [None req-3122e16c-af6f-4c2a-8c31-91c013c58525 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Took 0.82 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:41:28 np0005539552 nova_compute[233724]: 2025-11-29 08:41:28.994 233728 DEBUG oslo.service.loopingcall [None req-3122e16c-af6f-4c2a-8c31-91c013c58525 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:41:28 np0005539552 nova_compute[233724]: 2025-11-29 08:41:28.995 233728 DEBUG nova.compute.manager [-] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:41:28 np0005539552 nova_compute[233724]: 2025-11-29 08:41:28.995 233728 DEBUG nova.network.neutron [-] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:41:28 np0005539552 kernel: tapa02576c9-0b: entered promiscuous mode
Nov 29 03:41:29 np0005539552 NetworkManager[48926]: <info>  [1764405689.0021] manager: (tapa02576c9-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/381)
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.002 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:29 np0005539552 systemd-udevd[313526]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:41:29 np0005539552 ovn_controller[133798]: 2025-11-29T08:41:29Z|00869|binding|INFO|Claiming lport a02576c9-0b5e-4f90-83fd-ced6081e0046 for this chassis.
Nov 29 03:41:29 np0005539552 ovn_controller[133798]: 2025-11-29T08:41:29Z|00870|binding|INFO|a02576c9-0b5e-4f90-83fd-ced6081e0046: Claiming fa:16:3e:9b:ac:79 10.100.0.9
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:29.008 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:ac:79 10.100.0.9'], port_security=['fa:16:3e:9b:ac:79 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-666232535', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'dfa97913-59ab-4f13-87e6-48d356827763', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18d68586-48a1-4a55-b48c-8e6a0493b378', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-666232535', 'neutron:project_id': '0471b9b208874403aa3f0fbe7504ad19', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e80d8f2f-092d-4879-a27f-82d3e7dad8a3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ce74417-6704-4d65-b5df-ee9ba219986e, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=a02576c9-0b5e-4f90-83fd-ced6081e0046) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:29.009 143400 INFO neutron.agent.ovn.metadata.agent [-] Port a02576c9-0b5e-4f90-83fd-ced6081e0046 in datapath 18d68586-48a1-4a55-b48c-8e6a0493b378 bound to our chassis#033[00m
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:29.010 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 18d68586-48a1-4a55-b48c-8e6a0493b378#033[00m
Nov 29 03:41:29 np0005539552 ovn_controller[133798]: 2025-11-29T08:41:29Z|00871|binding|INFO|Setting lport a02576c9-0b5e-4f90-83fd-ced6081e0046 ovn-installed in OVS
Nov 29 03:41:29 np0005539552 ovn_controller[133798]: 2025-11-29T08:41:29Z|00872|binding|INFO|Setting lport a02576c9-0b5e-4f90-83fd-ced6081e0046 up in Southbound
Nov 29 03:41:29 np0005539552 NetworkManager[48926]: <info>  [1764405689.0268] device (tapa02576c9-0b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:41:29 np0005539552 NetworkManager[48926]: <info>  [1764405689.0282] device (tapa02576c9-0b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:29.029 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4722bf51-5d4b-4403-b6b1-88792f6e4032]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:29.029 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap18d68586-41 in ovnmeta-18d68586-48a1-4a55-b48c-8e6a0493b378 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:29.031 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap18d68586-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:29.031 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[445baa7d-09ba-4e67-936f-e7e276c32ef4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.032 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:29.032 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[52f91908-4294-4760-aca0-9483119f5d47]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:29.047 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[5ed9cf3c-89b9-4057-a1a5-8cafbe31f5bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:29 np0005539552 systemd-machined[196379]: New machine qemu-88-instance-000000c0.
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:29.062 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[55c3d805-4d95-4178-95dc-aa880651d7ff]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:29 np0005539552 systemd[1]: Started Virtual Machine qemu-88-instance-000000c0.
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:29.100 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[9e13e5bd-d00f-4b66-b074-6bc01b4b356b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:29.107 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e68f904d-4eed-4d26-ba1e-80bb85a5bcab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:29 np0005539552 NetworkManager[48926]: <info>  [1764405689.1105] manager: (tap18d68586-40): new Veth device (/org/freedesktop/NetworkManager/Devices/382)
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:29.152 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[d6748d77-93ce-4c6a-9ddd-29e2cc95e4a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:29.155 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[3b177205-63fc-4c95-b971-e7773485d303]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:29 np0005539552 NetworkManager[48926]: <info>  [1764405689.1922] device (tap18d68586-40): carrier: link connected
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:29.198 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[2bc7d416-ccf8-421b-a661-6723c345ae1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:29.221 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d7208ee8-1c23-4717-98c5-4c0d4ec84739]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap18d68586-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ce:86:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 255], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 873488, 'reachable_time': 36491, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313761, 'error': None, 'target': 'ovnmeta-18d68586-48a1-4a55-b48c-8e6a0493b378', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:29.246 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[eff2f96d-87e9-4fe1-9f43-9d7f5863dc83]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fece:8668'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 873488, 'tstamp': 873488}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313762, 'error': None, 'target': 'ovnmeta-18d68586-48a1-4a55-b48c-8e6a0493b378', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:29.270 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c0feb2a6-e3e7-4d32-bf92-bc76f508141c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap18d68586-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ce:86:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 255], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 873488, 'reachable_time': 36491, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 313763, 'error': None, 'target': 'ovnmeta-18d68586-48a1-4a55-b48c-8e6a0493b378', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:29.316 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[457ed542-6d7b-44d9-9d77-5526f8ae627c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:29.422 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[532cb8f9-f85c-40ad-b00e-e3f393dd34e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:29.424 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18d68586-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:29.424 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:29.425 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap18d68586-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.428 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:29 np0005539552 NetworkManager[48926]: <info>  [1764405689.4292] manager: (tap18d68586-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/383)
Nov 29 03:41:29 np0005539552 kernel: tap18d68586-40: entered promiscuous mode
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:29.433 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap18d68586-40, col_values=(('external_ids', {'iface-id': 'd5416c31-8c19-445b-b33f-f0e9a55b063e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:29 np0005539552 ovn_controller[133798]: 2025-11-29T08:41:29Z|00873|binding|INFO|Releasing lport d5416c31-8c19-445b-b33f-f0e9a55b063e from this chassis (sb_readonly=0)
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.434 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.461 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:29.463 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/18d68586-48a1-4a55-b48c-8e6a0493b378.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/18d68586-48a1-4a55-b48c-8e6a0493b378.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:29.464 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[df96be95-6519-45c3-b7df-18016faa7245]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:29.466 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-18d68586-48a1-4a55-b48c-8e6a0493b378
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/18d68586-48a1-4a55-b48c-8e6a0493b378.pid.haproxy
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 18d68586-48a1-4a55-b48c-8e6a0493b378
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:41:29 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:29.467 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-18d68586-48a1-4a55-b48c-8e6a0493b378', 'env', 'PROCESS_TAG=haproxy-18d68586-48a1-4a55-b48c-8e6a0493b378', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/18d68586-48a1-4a55-b48c-8e6a0493b378.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.653 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405689.653194, dfa97913-59ab-4f13-87e6-48d356827763 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.654 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: dfa97913-59ab-4f13-87e6-48d356827763] VM Started (Lifecycle Event)#033[00m
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.672 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.676 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405689.6540008, dfa97913-59ab-4f13-87e6-48d356827763 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.676 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: dfa97913-59ab-4f13-87e6-48d356827763] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.693 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.696 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.716 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: dfa97913-59ab-4f13-87e6-48d356827763] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:41:29 np0005539552 podman[313835]: 2025-11-29 08:41:29.887136823 +0000 UTC m=+0.050390218 container create dc7a102c6e42965c024f30d686354c0ed3ef58bbe5b391d69ccf81fbd80edf59 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18d68586-48a1-4a55-b48c-8e6a0493b378, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.931 233728 DEBUG nova.compute.manager [req-68f688bf-d90c-48a6-b503-e7f8957d2da2 req-b5f44e45-adb9-4b21-b6af-b1522bd0b6c6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Received event network-vif-unplugged-dfb097a1-c82e-41b6-9a9f-57a7771a6e0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.931 233728 DEBUG oslo_concurrency.lockutils [req-68f688bf-d90c-48a6-b503-e7f8957d2da2 req-b5f44e45-adb9-4b21-b6af-b1522bd0b6c6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ee3de57a-2652-4819-880a-6217c00a67a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.932 233728 DEBUG oslo_concurrency.lockutils [req-68f688bf-d90c-48a6-b503-e7f8957d2da2 req-b5f44e45-adb9-4b21-b6af-b1522bd0b6c6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ee3de57a-2652-4819-880a-6217c00a67a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:29 np0005539552 systemd[1]: Started libpod-conmon-dc7a102c6e42965c024f30d686354c0ed3ef58bbe5b391d69ccf81fbd80edf59.scope.
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.932 233728 DEBUG oslo_concurrency.lockutils [req-68f688bf-d90c-48a6-b503-e7f8957d2da2 req-b5f44e45-adb9-4b21-b6af-b1522bd0b6c6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ee3de57a-2652-4819-880a-6217c00a67a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.933 233728 DEBUG nova.compute.manager [req-68f688bf-d90c-48a6-b503-e7f8957d2da2 req-b5f44e45-adb9-4b21-b6af-b1522bd0b6c6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] No waiting events found dispatching network-vif-unplugged-dfb097a1-c82e-41b6-9a9f-57a7771a6e0a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.934 233728 DEBUG nova.compute.manager [req-68f688bf-d90c-48a6-b503-e7f8957d2da2 req-b5f44e45-adb9-4b21-b6af-b1522bd0b6c6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Received event network-vif-unplugged-dfb097a1-c82e-41b6-9a9f-57a7771a6e0a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.934 233728 DEBUG nova.compute.manager [req-68f688bf-d90c-48a6-b503-e7f8957d2da2 req-b5f44e45-adb9-4b21-b6af-b1522bd0b6c6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Received event network-vif-plugged-dfb097a1-c82e-41b6-9a9f-57a7771a6e0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.934 233728 DEBUG oslo_concurrency.lockutils [req-68f688bf-d90c-48a6-b503-e7f8957d2da2 req-b5f44e45-adb9-4b21-b6af-b1522bd0b6c6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "ee3de57a-2652-4819-880a-6217c00a67a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.935 233728 DEBUG oslo_concurrency.lockutils [req-68f688bf-d90c-48a6-b503-e7f8957d2da2 req-b5f44e45-adb9-4b21-b6af-b1522bd0b6c6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ee3de57a-2652-4819-880a-6217c00a67a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.935 233728 DEBUG oslo_concurrency.lockutils [req-68f688bf-d90c-48a6-b503-e7f8957d2da2 req-b5f44e45-adb9-4b21-b6af-b1522bd0b6c6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "ee3de57a-2652-4819-880a-6217c00a67a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.936 233728 DEBUG nova.compute.manager [req-68f688bf-d90c-48a6-b503-e7f8957d2da2 req-b5f44e45-adb9-4b21-b6af-b1522bd0b6c6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] No waiting events found dispatching network-vif-plugged-dfb097a1-c82e-41b6-9a9f-57a7771a6e0a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.936 233728 WARNING nova.compute.manager [req-68f688bf-d90c-48a6-b503-e7f8957d2da2 req-b5f44e45-adb9-4b21-b6af-b1522bd0b6c6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Received unexpected event network-vif-plugged-dfb097a1-c82e-41b6-9a9f-57a7771a6e0a for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.936 233728 DEBUG nova.compute.manager [req-68f688bf-d90c-48a6-b503-e7f8957d2da2 req-b5f44e45-adb9-4b21-b6af-b1522bd0b6c6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Received event network-vif-plugged-a02576c9-0b5e-4f90-83fd-ced6081e0046 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.936 233728 DEBUG oslo_concurrency.lockutils [req-68f688bf-d90c-48a6-b503-e7f8957d2da2 req-b5f44e45-adb9-4b21-b6af-b1522bd0b6c6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "dfa97913-59ab-4f13-87e6-48d356827763-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.937 233728 DEBUG oslo_concurrency.lockutils [req-68f688bf-d90c-48a6-b503-e7f8957d2da2 req-b5f44e45-adb9-4b21-b6af-b1522bd0b6c6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "dfa97913-59ab-4f13-87e6-48d356827763-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.937 233728 DEBUG oslo_concurrency.lockutils [req-68f688bf-d90c-48a6-b503-e7f8957d2da2 req-b5f44e45-adb9-4b21-b6af-b1522bd0b6c6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "dfa97913-59ab-4f13-87e6-48d356827763-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.937 233728 DEBUG nova.compute.manager [req-68f688bf-d90c-48a6-b503-e7f8957d2da2 req-b5f44e45-adb9-4b21-b6af-b1522bd0b6c6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Processing event network-vif-plugged-a02576c9-0b5e-4f90-83fd-ced6081e0046 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.938 233728 DEBUG nova.compute.manager [req-68f688bf-d90c-48a6-b503-e7f8957d2da2 req-b5f44e45-adb9-4b21-b6af-b1522bd0b6c6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Received event network-vif-plugged-a02576c9-0b5e-4f90-83fd-ced6081e0046 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.938 233728 DEBUG oslo_concurrency.lockutils [req-68f688bf-d90c-48a6-b503-e7f8957d2da2 req-b5f44e45-adb9-4b21-b6af-b1522bd0b6c6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "dfa97913-59ab-4f13-87e6-48d356827763-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.938 233728 DEBUG oslo_concurrency.lockutils [req-68f688bf-d90c-48a6-b503-e7f8957d2da2 req-b5f44e45-adb9-4b21-b6af-b1522bd0b6c6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "dfa97913-59ab-4f13-87e6-48d356827763-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.938 233728 DEBUG oslo_concurrency.lockutils [req-68f688bf-d90c-48a6-b503-e7f8957d2da2 req-b5f44e45-adb9-4b21-b6af-b1522bd0b6c6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "dfa97913-59ab-4f13-87e6-48d356827763-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.939 233728 DEBUG nova.compute.manager [req-68f688bf-d90c-48a6-b503-e7f8957d2da2 req-b5f44e45-adb9-4b21-b6af-b1522bd0b6c6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] No waiting events found dispatching network-vif-plugged-a02576c9-0b5e-4f90-83fd-ced6081e0046 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.939 233728 WARNING nova.compute.manager [req-68f688bf-d90c-48a6-b503-e7f8957d2da2 req-b5f44e45-adb9-4b21-b6af-b1522bd0b6c6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Received unexpected event network-vif-plugged-a02576c9-0b5e-4f90-83fd-ced6081e0046 for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.940 233728 DEBUG nova.network.neutron [req-08b20e10-3631-4003-ae18-6e286fac4e8b req-7cf745e4-5fde-4abb-8d02-851aaf69d8ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Updated VIF entry in instance network info cache for port a02576c9-0b5e-4f90-83fd-ced6081e0046. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.940 233728 DEBUG nova.network.neutron [req-08b20e10-3631-4003-ae18-6e286fac4e8b req-7cf745e4-5fde-4abb-8d02-851aaf69d8ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Updating instance_info_cache with network_info: [{"id": "a02576c9-0b5e-4f90-83fd-ced6081e0046", "address": "fa:16:3e:9b:ac:79", "network": {"id": "18d68586-48a1-4a55-b48c-8e6a0493b378", "bridge": "br-int", "label": "tempest-network-smoke--1460638043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa02576c9-0b", "ovs_interfaceid": "a02576c9-0b5e-4f90-83fd-ced6081e0046", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.942 233728 DEBUG nova.compute.manager [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.947 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405689.9468176, dfa97913-59ab-4f13-87e6-48d356827763 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.948 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: dfa97913-59ab-4f13-87e6-48d356827763] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.950 233728 DEBUG nova.virt.libvirt.driver [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:41:29 np0005539552 podman[313835]: 2025-11-29 08:41:29.860510206 +0000 UTC m=+0.023763621 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.954 233728 INFO nova.virt.libvirt.driver [-] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Instance spawned successfully.#033[00m
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.954 233728 DEBUG nova.virt.libvirt.driver [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.970 233728 DEBUG oslo_concurrency.lockutils [req-08b20e10-3631-4003-ae18-6e286fac4e8b req-7cf745e4-5fde-4abb-8d02-851aaf69d8ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-dfa97913-59ab-4f13-87e6-48d356827763" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.972 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.979 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.982 233728 DEBUG nova.virt.libvirt.driver [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.983 233728 DEBUG nova.virt.libvirt.driver [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.983 233728 DEBUG nova.virt.libvirt.driver [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.984 233728 DEBUG nova.virt.libvirt.driver [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.985 233728 DEBUG nova.virt.libvirt.driver [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:41:29 np0005539552 nova_compute[233724]: 2025-11-29 08:41:29.985 233728 DEBUG nova.virt.libvirt.driver [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:41:29 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:41:29 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f1df1eaf36c8e7c16e154e3d08d3852f30d0b1db2e1e6c73b98bc85573e4079/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:41:30 np0005539552 podman[313835]: 2025-11-29 08:41:30.006601838 +0000 UTC m=+0.169855283 container init dc7a102c6e42965c024f30d686354c0ed3ef58bbe5b391d69ccf81fbd80edf59 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18d68586-48a1-4a55-b48c-8e6a0493b378, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:41:30 np0005539552 podman[313835]: 2025-11-29 08:41:30.013853864 +0000 UTC m=+0.177107259 container start dc7a102c6e42965c024f30d686354c0ed3ef58bbe5b391d69ccf81fbd80edf59 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18d68586-48a1-4a55-b48c-8e6a0493b378, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:41:30 np0005539552 nova_compute[233724]: 2025-11-29 08:41:30.015 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: dfa97913-59ab-4f13-87e6-48d356827763] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:41:30 np0005539552 neutron-haproxy-ovnmeta-18d68586-48a1-4a55-b48c-8e6a0493b378[313850]: [NOTICE]   (313854) : New worker (313856) forked
Nov 29 03:41:30 np0005539552 neutron-haproxy-ovnmeta-18d68586-48a1-4a55-b48c-8e6a0493b378[313850]: [NOTICE]   (313854) : Loading success.
Nov 29 03:41:30 np0005539552 nova_compute[233724]: 2025-11-29 08:41:30.059 233728 INFO nova.compute.manager [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Took 5.75 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:41:30 np0005539552 nova_compute[233724]: 2025-11-29 08:41:30.060 233728 DEBUG nova.compute.manager [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:41:30 np0005539552 nova_compute[233724]: 2025-11-29 08:41:30.140 233728 INFO nova.compute.manager [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Took 6.74 seconds to build instance.#033[00m
Nov 29 03:41:30 np0005539552 nova_compute[233724]: 2025-11-29 08:41:30.161 233728 DEBUG oslo_concurrency.lockutils [None req-f097dc88-1306-4526-a2d5-ad16bc3b3c24 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "dfa97913-59ab-4f13-87e6-48d356827763" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.864s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:30 np0005539552 nova_compute[233724]: 2025-11-29 08:41:30.456 233728 DEBUG nova.network.neutron [-] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:41:30 np0005539552 nova_compute[233724]: 2025-11-29 08:41:30.486 233728 INFO nova.compute.manager [-] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Took 1.49 seconds to deallocate network for instance.#033[00m
Nov 29 03:41:30 np0005539552 nova_compute[233724]: 2025-11-29 08:41:30.567 233728 DEBUG nova.compute.manager [req-714957b6-18d2-4744-8c30-26f7f18b4f81 req-348f28b6-5daf-421b-a5c0-0546292200f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Received event network-vif-deleted-dfb097a1-c82e-41b6-9a9f-57a7771a6e0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:41:30 np0005539552 nova_compute[233724]: 2025-11-29 08:41:30.666 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:30.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:30 np0005539552 nova_compute[233724]: 2025-11-29 08:41:30.717 233728 INFO nova.compute.manager [None req-3122e16c-af6f-4c2a-8c31-91c013c58525 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Took 0.23 seconds to detach 1 volumes for instance.#033[00m
Nov 29 03:41:30 np0005539552 nova_compute[233724]: 2025-11-29 08:41:30.761 233728 DEBUG oslo_concurrency.lockutils [None req-3122e16c-af6f-4c2a-8c31-91c013c58525 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:30 np0005539552 nova_compute[233724]: 2025-11-29 08:41:30.763 233728 DEBUG oslo_concurrency.lockutils [None req-3122e16c-af6f-4c2a-8c31-91c013c58525 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:30 np0005539552 nova_compute[233724]: 2025-11-29 08:41:30.839 233728 DEBUG oslo_concurrency.processutils [None req-3122e16c-af6f-4c2a-8c31-91c013c58525 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:41:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:30.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:31 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:41:31 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/543078166' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:41:31 np0005539552 nova_compute[233724]: 2025-11-29 08:41:31.304 233728 DEBUG oslo_concurrency.processutils [None req-3122e16c-af6f-4c2a-8c31-91c013c58525 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:41:31 np0005539552 nova_compute[233724]: 2025-11-29 08:41:31.310 233728 DEBUG nova.compute.provider_tree [None req-3122e16c-af6f-4c2a-8c31-91c013c58525 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:41:31 np0005539552 nova_compute[233724]: 2025-11-29 08:41:31.325 233728 DEBUG nova.scheduler.client.report [None req-3122e16c-af6f-4c2a-8c31-91c013c58525 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:41:31 np0005539552 nova_compute[233724]: 2025-11-29 08:41:31.344 233728 DEBUG oslo_concurrency.lockutils [None req-3122e16c-af6f-4c2a-8c31-91c013c58525 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:31 np0005539552 nova_compute[233724]: 2025-11-29 08:41:31.400 233728 INFO nova.scheduler.client.report [None req-3122e16c-af6f-4c2a-8c31-91c013c58525 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Deleted allocations for instance ee3de57a-2652-4819-880a-6217c00a67a0#033[00m
Nov 29 03:41:31 np0005539552 nova_compute[233724]: 2025-11-29 08:41:31.464 233728 DEBUG oslo_concurrency.lockutils [None req-3122e16c-af6f-4c2a-8c31-91c013c58525 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "ee3de57a-2652-4819-880a-6217c00a67a0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.297s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:32 np0005539552 nova_compute[233724]: 2025-11-29 08:41:32.120 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:32 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:41:32 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3785865185' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:41:32 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:41:32 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3785865185' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:41:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:32.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:32.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:33 np0005539552 nova_compute[233724]: 2025-11-29 08:41:33.456 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:41:33 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1616035418' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:41:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:41:33 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1616035418' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:41:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:33 np0005539552 nova_compute[233724]: 2025-11-29 08:41:33.923 233728 DEBUG nova.compute.manager [req-3d1c797c-246c-48cd-a022-7de9abdce90c req-147c705d-1ca9-4cb1-87bd-cf3ad3342895 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Received event network-changed-a02576c9-0b5e-4f90-83fd-ced6081e0046 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:41:33 np0005539552 nova_compute[233724]: 2025-11-29 08:41:33.924 233728 DEBUG nova.compute.manager [req-3d1c797c-246c-48cd-a022-7de9abdce90c req-147c705d-1ca9-4cb1-87bd-cf3ad3342895 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Refreshing instance network info cache due to event network-changed-a02576c9-0b5e-4f90-83fd-ced6081e0046. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:41:33 np0005539552 nova_compute[233724]: 2025-11-29 08:41:33.924 233728 DEBUG oslo_concurrency.lockutils [req-3d1c797c-246c-48cd-a022-7de9abdce90c req-147c705d-1ca9-4cb1-87bd-cf3ad3342895 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-dfa97913-59ab-4f13-87e6-48d356827763" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:41:33 np0005539552 nova_compute[233724]: 2025-11-29 08:41:33.925 233728 DEBUG oslo_concurrency.lockutils [req-3d1c797c-246c-48cd-a022-7de9abdce90c req-147c705d-1ca9-4cb1-87bd-cf3ad3342895 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-dfa97913-59ab-4f13-87e6-48d356827763" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:41:33 np0005539552 nova_compute[233724]: 2025-11-29 08:41:33.925 233728 DEBUG nova.network.neutron [req-3d1c797c-246c-48cd-a022-7de9abdce90c req-147c705d-1ca9-4cb1-87bd-cf3ad3342895 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Refreshing network info cache for port a02576c9-0b5e-4f90-83fd-ced6081e0046 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:41:34 np0005539552 nova_compute[233724]: 2025-11-29 08:41:34.114 233728 DEBUG oslo_concurrency.lockutils [None req-7fe4cba7-39a3-4c58-941b-ececacbac9a8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "dfa97913-59ab-4f13-87e6-48d356827763" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:34 np0005539552 nova_compute[233724]: 2025-11-29 08:41:34.115 233728 DEBUG oslo_concurrency.lockutils [None req-7fe4cba7-39a3-4c58-941b-ececacbac9a8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "dfa97913-59ab-4f13-87e6-48d356827763" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:34 np0005539552 nova_compute[233724]: 2025-11-29 08:41:34.116 233728 DEBUG oslo_concurrency.lockutils [None req-7fe4cba7-39a3-4c58-941b-ececacbac9a8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "dfa97913-59ab-4f13-87e6-48d356827763-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:34 np0005539552 nova_compute[233724]: 2025-11-29 08:41:34.116 233728 DEBUG oslo_concurrency.lockutils [None req-7fe4cba7-39a3-4c58-941b-ececacbac9a8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "dfa97913-59ab-4f13-87e6-48d356827763-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:34 np0005539552 nova_compute[233724]: 2025-11-29 08:41:34.117 233728 DEBUG oslo_concurrency.lockutils [None req-7fe4cba7-39a3-4c58-941b-ececacbac9a8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "dfa97913-59ab-4f13-87e6-48d356827763-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:34 np0005539552 nova_compute[233724]: 2025-11-29 08:41:34.119 233728 INFO nova.compute.manager [None req-7fe4cba7-39a3-4c58-941b-ececacbac9a8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Terminating instance#033[00m
Nov 29 03:41:34 np0005539552 nova_compute[233724]: 2025-11-29 08:41:34.120 233728 DEBUG nova.compute.manager [None req-7fe4cba7-39a3-4c58-941b-ececacbac9a8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:41:34 np0005539552 kernel: tapa02576c9-0b (unregistering): left promiscuous mode
Nov 29 03:41:34 np0005539552 NetworkManager[48926]: <info>  [1764405694.1661] device (tapa02576c9-0b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:41:34 np0005539552 nova_compute[233724]: 2025-11-29 08:41:34.184 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:34 np0005539552 ovn_controller[133798]: 2025-11-29T08:41:34Z|00874|binding|INFO|Releasing lport a02576c9-0b5e-4f90-83fd-ced6081e0046 from this chassis (sb_readonly=0)
Nov 29 03:41:34 np0005539552 ovn_controller[133798]: 2025-11-29T08:41:34Z|00875|binding|INFO|Setting lport a02576c9-0b5e-4f90-83fd-ced6081e0046 down in Southbound
Nov 29 03:41:34 np0005539552 ovn_controller[133798]: 2025-11-29T08:41:34Z|00876|binding|INFO|Removing iface tapa02576c9-0b ovn-installed in OVS
Nov 29 03:41:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:34.191 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:ac:79 10.100.0.9'], port_security=['fa:16:3e:9b:ac:79 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-666232535', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'dfa97913-59ab-4f13-87e6-48d356827763', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18d68586-48a1-4a55-b48c-8e6a0493b378', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-666232535', 'neutron:project_id': '0471b9b208874403aa3f0fbe7504ad19', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e80d8f2f-092d-4879-a27f-82d3e7dad8a3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.231'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ce74417-6704-4d65-b5df-ee9ba219986e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=a02576c9-0b5e-4f90-83fd-ced6081e0046) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:41:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:34.193 143400 INFO neutron.agent.ovn.metadata.agent [-] Port a02576c9-0b5e-4f90-83fd-ced6081e0046 in datapath 18d68586-48a1-4a55-b48c-8e6a0493b378 unbound from our chassis#033[00m
Nov 29 03:41:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:34.195 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 18d68586-48a1-4a55-b48c-8e6a0493b378, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:41:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:34.196 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0f4651fa-436a-4b7a-8419-f5df2dd4b407]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:34.196 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-18d68586-48a1-4a55-b48c-8e6a0493b378 namespace which is not needed anymore#033[00m
Nov 29 03:41:34 np0005539552 nova_compute[233724]: 2025-11-29 08:41:34.204 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:34 np0005539552 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d000000c0.scope: Deactivated successfully.
Nov 29 03:41:34 np0005539552 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d000000c0.scope: Consumed 4.925s CPU time.
Nov 29 03:41:34 np0005539552 systemd-machined[196379]: Machine qemu-88-instance-000000c0 terminated.
Nov 29 03:41:34 np0005539552 nova_compute[233724]: 2025-11-29 08:41:34.349 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:34 np0005539552 nova_compute[233724]: 2025-11-29 08:41:34.354 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:34 np0005539552 neutron-haproxy-ovnmeta-18d68586-48a1-4a55-b48c-8e6a0493b378[313850]: [NOTICE]   (313854) : haproxy version is 2.8.14-c23fe91
Nov 29 03:41:34 np0005539552 neutron-haproxy-ovnmeta-18d68586-48a1-4a55-b48c-8e6a0493b378[313850]: [NOTICE]   (313854) : path to executable is /usr/sbin/haproxy
Nov 29 03:41:34 np0005539552 neutron-haproxy-ovnmeta-18d68586-48a1-4a55-b48c-8e6a0493b378[313850]: [WARNING]  (313854) : Exiting Master process...
Nov 29 03:41:34 np0005539552 neutron-haproxy-ovnmeta-18d68586-48a1-4a55-b48c-8e6a0493b378[313850]: [WARNING]  (313854) : Exiting Master process...
Nov 29 03:41:34 np0005539552 neutron-haproxy-ovnmeta-18d68586-48a1-4a55-b48c-8e6a0493b378[313850]: [ALERT]    (313854) : Current worker (313856) exited with code 143 (Terminated)
Nov 29 03:41:34 np0005539552 neutron-haproxy-ovnmeta-18d68586-48a1-4a55-b48c-8e6a0493b378[313850]: [WARNING]  (313854) : All workers exited. Exiting... (0)
Nov 29 03:41:34 np0005539552 systemd[1]: libpod-dc7a102c6e42965c024f30d686354c0ed3ef58bbe5b391d69ccf81fbd80edf59.scope: Deactivated successfully.
Nov 29 03:41:34 np0005539552 nova_compute[233724]: 2025-11-29 08:41:34.369 233728 INFO nova.virt.libvirt.driver [-] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Instance destroyed successfully.#033[00m
Nov 29 03:41:34 np0005539552 nova_compute[233724]: 2025-11-29 08:41:34.369 233728 DEBUG nova.objects.instance [None req-7fe4cba7-39a3-4c58-941b-ececacbac9a8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lazy-loading 'resources' on Instance uuid dfa97913-59ab-4f13-87e6-48d356827763 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:41:34 np0005539552 podman[313914]: 2025-11-29 08:41:34.373180996 +0000 UTC m=+0.054684203 container died dc7a102c6e42965c024f30d686354c0ed3ef58bbe5b391d69ccf81fbd80edf59 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18d68586-48a1-4a55-b48c-8e6a0493b378, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:41:34 np0005539552 nova_compute[233724]: 2025-11-29 08:41:34.386 233728 DEBUG nova.virt.libvirt.vif [None req-7fe4cba7-39a3-4c58-941b-ececacbac9a8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:41:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1437681169',display_name='tempest-TestNetworkBasicOps-server-1437681169',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1437681169',id=192,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN7dPms6GIG8K510MmN+uQbQqys2LGFdaegOlkWQqXJwICLLAMADam1h9Eh+7WSIz2rDMkfDbwB9UmpA6qe7juDgKenICfv1p3HLiaVP2FG7rOmOw5ZyFOEjIR8I1gRGaA==',key_name='tempest-TestNetworkBasicOps-1516534575',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:41:30Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-7xne95tw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:41:30Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=dfa97913-59ab-4f13-87e6-48d356827763,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a02576c9-0b5e-4f90-83fd-ced6081e0046", "address": "fa:16:3e:9b:ac:79", "network": {"id": "18d68586-48a1-4a55-b48c-8e6a0493b378", "bridge": "br-int", "label": "tempest-network-smoke--1460638043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa02576c9-0b", "ovs_interfaceid": "a02576c9-0b5e-4f90-83fd-ced6081e0046", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:41:34 np0005539552 nova_compute[233724]: 2025-11-29 08:41:34.387 233728 DEBUG nova.network.os_vif_util [None req-7fe4cba7-39a3-4c58-941b-ececacbac9a8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "a02576c9-0b5e-4f90-83fd-ced6081e0046", "address": "fa:16:3e:9b:ac:79", "network": {"id": "18d68586-48a1-4a55-b48c-8e6a0493b378", "bridge": "br-int", "label": "tempest-network-smoke--1460638043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa02576c9-0b", "ovs_interfaceid": "a02576c9-0b5e-4f90-83fd-ced6081e0046", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:41:34 np0005539552 nova_compute[233724]: 2025-11-29 08:41:34.388 233728 DEBUG nova.network.os_vif_util [None req-7fe4cba7-39a3-4c58-941b-ececacbac9a8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:ac:79,bridge_name='br-int',has_traffic_filtering=True,id=a02576c9-0b5e-4f90-83fd-ced6081e0046,network=Network(18d68586-48a1-4a55-b48c-8e6a0493b378),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa02576c9-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:41:34 np0005539552 nova_compute[233724]: 2025-11-29 08:41:34.389 233728 DEBUG os_vif [None req-7fe4cba7-39a3-4c58-941b-ececacbac9a8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:ac:79,bridge_name='br-int',has_traffic_filtering=True,id=a02576c9-0b5e-4f90-83fd-ced6081e0046,network=Network(18d68586-48a1-4a55-b48c-8e6a0493b378),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa02576c9-0b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:41:34 np0005539552 nova_compute[233724]: 2025-11-29 08:41:34.391 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:34 np0005539552 nova_compute[233724]: 2025-11-29 08:41:34.391 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa02576c9-0b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:34 np0005539552 nova_compute[233724]: 2025-11-29 08:41:34.393 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:34 np0005539552 nova_compute[233724]: 2025-11-29 08:41:34.395 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:34 np0005539552 nova_compute[233724]: 2025-11-29 08:41:34.399 233728 INFO os_vif [None req-7fe4cba7-39a3-4c58-941b-ececacbac9a8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:ac:79,bridge_name='br-int',has_traffic_filtering=True,id=a02576c9-0b5e-4f90-83fd-ced6081e0046,network=Network(18d68586-48a1-4a55-b48c-8e6a0493b378),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa02576c9-0b')#033[00m
Nov 29 03:41:34 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dc7a102c6e42965c024f30d686354c0ed3ef58bbe5b391d69ccf81fbd80edf59-userdata-shm.mount: Deactivated successfully.
Nov 29 03:41:34 np0005539552 systemd[1]: var-lib-containers-storage-overlay-3f1df1eaf36c8e7c16e154e3d08d3852f30d0b1db2e1e6c73b98bc85573e4079-merged.mount: Deactivated successfully.
Nov 29 03:41:34 np0005539552 podman[313914]: 2025-11-29 08:41:34.418394283 +0000 UTC m=+0.099897490 container cleanup dc7a102c6e42965c024f30d686354c0ed3ef58bbe5b391d69ccf81fbd80edf59 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18d68586-48a1-4a55-b48c-8e6a0493b378, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 03:41:34 np0005539552 systemd[1]: libpod-conmon-dc7a102c6e42965c024f30d686354c0ed3ef58bbe5b391d69ccf81fbd80edf59.scope: Deactivated successfully.
Nov 29 03:41:34 np0005539552 podman[313966]: 2025-11-29 08:41:34.509901876 +0000 UTC m=+0.063797328 container remove dc7a102c6e42965c024f30d686354c0ed3ef58bbe5b391d69ccf81fbd80edf59 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18d68586-48a1-4a55-b48c-8e6a0493b378, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:41:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:34.517 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[eaff8813-f747-4fab-8f92-b1e556dd98fe]: (4, ('Sat Nov 29 08:41:34 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-18d68586-48a1-4a55-b48c-8e6a0493b378 (dc7a102c6e42965c024f30d686354c0ed3ef58bbe5b391d69ccf81fbd80edf59)\ndc7a102c6e42965c024f30d686354c0ed3ef58bbe5b391d69ccf81fbd80edf59\nSat Nov 29 08:41:34 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-18d68586-48a1-4a55-b48c-8e6a0493b378 (dc7a102c6e42965c024f30d686354c0ed3ef58bbe5b391d69ccf81fbd80edf59)\ndc7a102c6e42965c024f30d686354c0ed3ef58bbe5b391d69ccf81fbd80edf59\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:34.520 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a218ed08-ab1d-4738-ade2-cd6356bfb3c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:34.521 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18d68586-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:34 np0005539552 nova_compute[233724]: 2025-11-29 08:41:34.525 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:34 np0005539552 kernel: tap18d68586-40: left promiscuous mode
Nov 29 03:41:34 np0005539552 nova_compute[233724]: 2025-11-29 08:41:34.558 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:34.562 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[aa0c4a36-46e8-4e75-bbf5-3b78f9eea5ca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:34 np0005539552 nova_compute[233724]: 2025-11-29 08:41:34.571 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:34.579 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[1f82ceb5-7fa3-4b4c-8112-570423833d83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:34.580 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[56312632-d472-4a6e-b96f-bf84f5ab4259]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:34.603 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a574097d-0887-4ec2-ae99-b42fb6e90039]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 873478, 'reachable_time': 40014, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313987, 'error': None, 'target': 'ovnmeta-18d68586-48a1-4a55-b48c-8e6a0493b378', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:34.605 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-18d68586-48a1-4a55-b48c-8e6a0493b378 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:41:34 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:34.606 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[06c4ba4b-0945-4101-bb93-993b1ab083a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:34 np0005539552 systemd[1]: run-netns-ovnmeta\x2d18d68586\x2d48a1\x2d4a55\x2db48c\x2d8e6a0493b378.mount: Deactivated successfully.
Nov 29 03:41:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:34.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:34 np0005539552 nova_compute[233724]: 2025-11-29 08:41:34.848 233728 INFO nova.virt.libvirt.driver [None req-7fe4cba7-39a3-4c58-941b-ececacbac9a8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Deleting instance files /var/lib/nova/instances/dfa97913-59ab-4f13-87e6-48d356827763_del#033[00m
Nov 29 03:41:34 np0005539552 nova_compute[233724]: 2025-11-29 08:41:34.849 233728 INFO nova.virt.libvirt.driver [None req-7fe4cba7-39a3-4c58-941b-ececacbac9a8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Deletion of /var/lib/nova/instances/dfa97913-59ab-4f13-87e6-48d356827763_del complete#033[00m
Nov 29 03:41:34 np0005539552 nova_compute[233724]: 2025-11-29 08:41:34.968 233728 INFO nova.compute.manager [None req-7fe4cba7-39a3-4c58-941b-ececacbac9a8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Took 0.85 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:41:34 np0005539552 nova_compute[233724]: 2025-11-29 08:41:34.969 233728 DEBUG oslo.service.loopingcall [None req-7fe4cba7-39a3-4c58-941b-ececacbac9a8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:41:34 np0005539552 nova_compute[233724]: 2025-11-29 08:41:34.969 233728 DEBUG nova.compute.manager [-] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:41:34 np0005539552 nova_compute[233724]: 2025-11-29 08:41:34.969 233728 DEBUG nova.network.neutron [-] [instance: dfa97913-59ab-4f13-87e6-48d356827763] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:41:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:34.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:35 np0005539552 nova_compute[233724]: 2025-11-29 08:41:35.260 233728 DEBUG nova.network.neutron [req-3d1c797c-246c-48cd-a022-7de9abdce90c req-147c705d-1ca9-4cb1-87bd-cf3ad3342895 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Updated VIF entry in instance network info cache for port a02576c9-0b5e-4f90-83fd-ced6081e0046. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:41:35 np0005539552 nova_compute[233724]: 2025-11-29 08:41:35.261 233728 DEBUG nova.network.neutron [req-3d1c797c-246c-48cd-a022-7de9abdce90c req-147c705d-1ca9-4cb1-87bd-cf3ad3342895 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Updating instance_info_cache with network_info: [{"id": "a02576c9-0b5e-4f90-83fd-ced6081e0046", "address": "fa:16:3e:9b:ac:79", "network": {"id": "18d68586-48a1-4a55-b48c-8e6a0493b378", "bridge": "br-int", "label": "tempest-network-smoke--1460638043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa02576c9-0b", "ovs_interfaceid": "a02576c9-0b5e-4f90-83fd-ced6081e0046", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:41:35 np0005539552 nova_compute[233724]: 2025-11-29 08:41:35.290 233728 DEBUG oslo_concurrency.lockutils [req-3d1c797c-246c-48cd-a022-7de9abdce90c req-147c705d-1ca9-4cb1-87bd-cf3ad3342895 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-dfa97913-59ab-4f13-87e6-48d356827763" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:41:36 np0005539552 nova_compute[233724]: 2025-11-29 08:41:36.054 233728 DEBUG nova.compute.manager [req-eb9d6fce-0012-43c8-a267-6d464897bdd5 req-26ef3288-777c-4ee4-a810-fc9fb2f08308 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Received event network-vif-unplugged-a02576c9-0b5e-4f90-83fd-ced6081e0046 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:41:36 np0005539552 nova_compute[233724]: 2025-11-29 08:41:36.055 233728 DEBUG oslo_concurrency.lockutils [req-eb9d6fce-0012-43c8-a267-6d464897bdd5 req-26ef3288-777c-4ee4-a810-fc9fb2f08308 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "dfa97913-59ab-4f13-87e6-48d356827763-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:36 np0005539552 nova_compute[233724]: 2025-11-29 08:41:36.056 233728 DEBUG oslo_concurrency.lockutils [req-eb9d6fce-0012-43c8-a267-6d464897bdd5 req-26ef3288-777c-4ee4-a810-fc9fb2f08308 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "dfa97913-59ab-4f13-87e6-48d356827763-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:36 np0005539552 nova_compute[233724]: 2025-11-29 08:41:36.056 233728 DEBUG oslo_concurrency.lockutils [req-eb9d6fce-0012-43c8-a267-6d464897bdd5 req-26ef3288-777c-4ee4-a810-fc9fb2f08308 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "dfa97913-59ab-4f13-87e6-48d356827763-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:36 np0005539552 nova_compute[233724]: 2025-11-29 08:41:36.057 233728 DEBUG nova.compute.manager [req-eb9d6fce-0012-43c8-a267-6d464897bdd5 req-26ef3288-777c-4ee4-a810-fc9fb2f08308 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] No waiting events found dispatching network-vif-unplugged-a02576c9-0b5e-4f90-83fd-ced6081e0046 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:41:36 np0005539552 nova_compute[233724]: 2025-11-29 08:41:36.057 233728 DEBUG nova.compute.manager [req-eb9d6fce-0012-43c8-a267-6d464897bdd5 req-26ef3288-777c-4ee4-a810-fc9fb2f08308 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Received event network-vif-unplugged-a02576c9-0b5e-4f90-83fd-ced6081e0046 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:41:36 np0005539552 nova_compute[233724]: 2025-11-29 08:41:36.057 233728 DEBUG nova.compute.manager [req-eb9d6fce-0012-43c8-a267-6d464897bdd5 req-26ef3288-777c-4ee4-a810-fc9fb2f08308 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Received event network-vif-plugged-a02576c9-0b5e-4f90-83fd-ced6081e0046 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:41:36 np0005539552 nova_compute[233724]: 2025-11-29 08:41:36.058 233728 DEBUG oslo_concurrency.lockutils [req-eb9d6fce-0012-43c8-a267-6d464897bdd5 req-26ef3288-777c-4ee4-a810-fc9fb2f08308 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "dfa97913-59ab-4f13-87e6-48d356827763-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:36 np0005539552 nova_compute[233724]: 2025-11-29 08:41:36.058 233728 DEBUG oslo_concurrency.lockutils [req-eb9d6fce-0012-43c8-a267-6d464897bdd5 req-26ef3288-777c-4ee4-a810-fc9fb2f08308 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "dfa97913-59ab-4f13-87e6-48d356827763-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:36 np0005539552 nova_compute[233724]: 2025-11-29 08:41:36.059 233728 DEBUG oslo_concurrency.lockutils [req-eb9d6fce-0012-43c8-a267-6d464897bdd5 req-26ef3288-777c-4ee4-a810-fc9fb2f08308 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "dfa97913-59ab-4f13-87e6-48d356827763-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:36 np0005539552 nova_compute[233724]: 2025-11-29 08:41:36.059 233728 DEBUG nova.compute.manager [req-eb9d6fce-0012-43c8-a267-6d464897bdd5 req-26ef3288-777c-4ee4-a810-fc9fb2f08308 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] No waiting events found dispatching network-vif-plugged-a02576c9-0b5e-4f90-83fd-ced6081e0046 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:41:36 np0005539552 nova_compute[233724]: 2025-11-29 08:41:36.060 233728 WARNING nova.compute.manager [req-eb9d6fce-0012-43c8-a267-6d464897bdd5 req-26ef3288-777c-4ee4-a810-fc9fb2f08308 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Received unexpected event network-vif-plugged-a02576c9-0b5e-4f90-83fd-ced6081e0046 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:41:36 np0005539552 nova_compute[233724]: 2025-11-29 08:41:36.163 233728 DEBUG nova.network.neutron [-] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:41:36 np0005539552 nova_compute[233724]: 2025-11-29 08:41:36.177 233728 INFO nova.compute.manager [-] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Took 1.21 seconds to deallocate network for instance.#033[00m
Nov 29 03:41:36 np0005539552 nova_compute[233724]: 2025-11-29 08:41:36.220 233728 DEBUG oslo_concurrency.lockutils [None req-7fe4cba7-39a3-4c58-941b-ececacbac9a8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:36 np0005539552 nova_compute[233724]: 2025-11-29 08:41:36.221 233728 DEBUG oslo_concurrency.lockutils [None req-7fe4cba7-39a3-4c58-941b-ececacbac9a8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:36 np0005539552 nova_compute[233724]: 2025-11-29 08:41:36.261 233728 DEBUG oslo_concurrency.processutils [None req-7fe4cba7-39a3-4c58-941b-ececacbac9a8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:41:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:41:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:36.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:41:36 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:41:36 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/295015146' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:41:36 np0005539552 nova_compute[233724]: 2025-11-29 08:41:36.779 233728 DEBUG oslo_concurrency.processutils [None req-7fe4cba7-39a3-4c58-941b-ececacbac9a8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:41:36 np0005539552 nova_compute[233724]: 2025-11-29 08:41:36.789 233728 DEBUG nova.compute.provider_tree [None req-7fe4cba7-39a3-4c58-941b-ececacbac9a8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:41:36 np0005539552 nova_compute[233724]: 2025-11-29 08:41:36.809 233728 DEBUG nova.scheduler.client.report [None req-7fe4cba7-39a3-4c58-941b-ececacbac9a8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:41:36 np0005539552 nova_compute[233724]: 2025-11-29 08:41:36.844 233728 DEBUG oslo_concurrency.lockutils [None req-7fe4cba7-39a3-4c58-941b-ececacbac9a8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:36 np0005539552 nova_compute[233724]: 2025-11-29 08:41:36.869 233728 INFO nova.scheduler.client.report [None req-7fe4cba7-39a3-4c58-941b-ececacbac9a8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Deleted allocations for instance dfa97913-59ab-4f13-87e6-48d356827763#033[00m
Nov 29 03:41:36 np0005539552 nova_compute[233724]: 2025-11-29 08:41:36.932 233728 DEBUG oslo_concurrency.lockutils [None req-7fe4cba7-39a3-4c58-941b-ececacbac9a8 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "dfa97913-59ab-4f13-87e6-48d356827763" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.817s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:36.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:37 np0005539552 nova_compute[233724]: 2025-11-29 08:41:37.122 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e398 e398: 3 total, 3 up, 3 in
Nov 29 03:41:37 np0005539552 podman[314016]: 2025-11-29 08:41:37.985701026 +0000 UTC m=+0.072954715 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Nov 29 03:41:37 np0005539552 podman[314015]: 2025-11-29 08:41:37.990025502 +0000 UTC m=+0.076301055 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:41:38 np0005539552 podman[314017]: 2025-11-29 08:41:38.051515547 +0000 UTC m=+0.124698527 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:41:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:38.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:38 np0005539552 nova_compute[233724]: 2025-11-29 08:41:38.860 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:41:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:38.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:41:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:41:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2986380162' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:41:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:41:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2986380162' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:41:39 np0005539552 nova_compute[233724]: 2025-11-29 08:41:39.394 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:40.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:41.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:41 np0005539552 nova_compute[233724]: 2025-11-29 08:41:41.811 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:42 np0005539552 nova_compute[233724]: 2025-11-29 08:41:42.179 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:42 np0005539552 nova_compute[233724]: 2025-11-29 08:41:42.198 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:42.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:43.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:43 np0005539552 nova_compute[233724]: 2025-11-29 08:41:43.405 233728 DEBUG oslo_concurrency.lockutils [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:43 np0005539552 nova_compute[233724]: 2025-11-29 08:41:43.406 233728 DEBUG oslo_concurrency.lockutils [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:43 np0005539552 nova_compute[233724]: 2025-11-29 08:41:43.424 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405688.4231048, ee3de57a-2652-4819-880a-6217c00a67a0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:41:43 np0005539552 nova_compute[233724]: 2025-11-29 08:41:43.425 233728 INFO nova.compute.manager [-] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:41:43 np0005539552 nova_compute[233724]: 2025-11-29 08:41:43.436 233728 DEBUG nova.compute.manager [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:41:43 np0005539552 nova_compute[233724]: 2025-11-29 08:41:43.483 233728 DEBUG nova.compute.manager [None req-b0cbe789-55ed-435c-a543-58b8941fb42a - - - - - -] [instance: ee3de57a-2652-4819-880a-6217c00a67a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:41:43 np0005539552 nova_compute[233724]: 2025-11-29 08:41:43.549 233728 DEBUG oslo_concurrency.lockutils [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:43 np0005539552 nova_compute[233724]: 2025-11-29 08:41:43.550 233728 DEBUG oslo_concurrency.lockutils [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:43 np0005539552 nova_compute[233724]: 2025-11-29 08:41:43.556 233728 DEBUG nova.virt.hardware [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:41:43 np0005539552 nova_compute[233724]: 2025-11-29 08:41:43.557 233728 INFO nova.compute.claims [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:41:43 np0005539552 nova_compute[233724]: 2025-11-29 08:41:43.662 233728 DEBUG nova.scheduler.client.report [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Refreshing inventories for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:41:43 np0005539552 nova_compute[233724]: 2025-11-29 08:41:43.697 233728 DEBUG nova.scheduler.client.report [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Updating ProviderTree inventory for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:41:43 np0005539552 nova_compute[233724]: 2025-11-29 08:41:43.697 233728 DEBUG nova.compute.provider_tree [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Updating inventory in ProviderTree for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 03:41:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:43 np0005539552 nova_compute[233724]: 2025-11-29 08:41:43.720 233728 DEBUG nova.scheduler.client.report [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Refreshing aggregate associations for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:41:43 np0005539552 nova_compute[233724]: 2025-11-29 08:41:43.756 233728 DEBUG nova.scheduler.client.report [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Refreshing trait associations for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371, traits: HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_MMX,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_FDC,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:41:43 np0005539552 nova_compute[233724]: 2025-11-29 08:41:43.797 233728 DEBUG oslo_concurrency.processutils [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:41:44 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:41:44 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1877248172' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:41:44 np0005539552 nova_compute[233724]: 2025-11-29 08:41:44.245 233728 DEBUG oslo_concurrency.processutils [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:41:44 np0005539552 nova_compute[233724]: 2025-11-29 08:41:44.253 233728 DEBUG nova.compute.provider_tree [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:41:44 np0005539552 nova_compute[233724]: 2025-11-29 08:41:44.291 233728 DEBUG nova.scheduler.client.report [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:41:44 np0005539552 nova_compute[233724]: 2025-11-29 08:41:44.319 233728 DEBUG oslo_concurrency.lockutils [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:44 np0005539552 nova_compute[233724]: 2025-11-29 08:41:44.320 233728 DEBUG nova.compute.manager [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:41:44 np0005539552 nova_compute[233724]: 2025-11-29 08:41:44.383 233728 DEBUG nova.compute.manager [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:41:44 np0005539552 nova_compute[233724]: 2025-11-29 08:41:44.383 233728 DEBUG nova.network.neutron [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:41:44 np0005539552 nova_compute[233724]: 2025-11-29 08:41:44.397 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:44 np0005539552 nova_compute[233724]: 2025-11-29 08:41:44.405 233728 INFO nova.virt.libvirt.driver [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:41:44 np0005539552 nova_compute[233724]: 2025-11-29 08:41:44.421 233728 DEBUG nova.compute.manager [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:41:44 np0005539552 nova_compute[233724]: 2025-11-29 08:41:44.517 233728 DEBUG nova.compute.manager [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:41:44 np0005539552 nova_compute[233724]: 2025-11-29 08:41:44.518 233728 DEBUG nova.virt.libvirt.driver [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:41:44 np0005539552 nova_compute[233724]: 2025-11-29 08:41:44.519 233728 INFO nova.virt.libvirt.driver [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Creating image(s)#033[00m
Nov 29 03:41:44 np0005539552 nova_compute[233724]: 2025-11-29 08:41:44.548 233728 DEBUG nova.storage.rbd_utils [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:41:44 np0005539552 nova_compute[233724]: 2025-11-29 08:41:44.577 233728 DEBUG nova.storage.rbd_utils [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:41:44 np0005539552 nova_compute[233724]: 2025-11-29 08:41:44.610 233728 DEBUG nova.storage.rbd_utils [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:41:44 np0005539552 nova_compute[233724]: 2025-11-29 08:41:44.618 233728 DEBUG oslo_concurrency.processutils [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:41:44 np0005539552 nova_compute[233724]: 2025-11-29 08:41:44.708 233728 DEBUG oslo_concurrency.processutils [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:41:44 np0005539552 nova_compute[233724]: 2025-11-29 08:41:44.710 233728 DEBUG oslo_concurrency.lockutils [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:44 np0005539552 nova_compute[233724]: 2025-11-29 08:41:44.711 233728 DEBUG oslo_concurrency.lockutils [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:44 np0005539552 nova_compute[233724]: 2025-11-29 08:41:44.711 233728 DEBUG oslo_concurrency.lockutils [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:44.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:44 np0005539552 nova_compute[233724]: 2025-11-29 08:41:44.756 233728 DEBUG nova.storage.rbd_utils [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:41:44 np0005539552 nova_compute[233724]: 2025-11-29 08:41:44.763 233728 DEBUG oslo_concurrency.processutils [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:41:44 np0005539552 nova_compute[233724]: 2025-11-29 08:41:44.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:41:44 np0005539552 nova_compute[233724]: 2025-11-29 08:41:44.969 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:44 np0005539552 nova_compute[233724]: 2025-11-29 08:41:44.970 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:44 np0005539552 nova_compute[233724]: 2025-11-29 08:41:44.970 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:44 np0005539552 nova_compute[233724]: 2025-11-29 08:41:44.970 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:41:44 np0005539552 nova_compute[233724]: 2025-11-29 08:41:44.970 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:41:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:45.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:45 np0005539552 nova_compute[233724]: 2025-11-29 08:41:45.085 233728 DEBUG oslo_concurrency.processutils [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.322s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:41:45 np0005539552 nova_compute[233724]: 2025-11-29 08:41:45.166 233728 DEBUG nova.storage.rbd_utils [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] resizing rbd image 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:41:45 np0005539552 nova_compute[233724]: 2025-11-29 08:41:45.268 233728 DEBUG nova.policy [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4774e2851bc6407cb0fcde15bd24d1b3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0471b9b208874403aa3f0fbe7504ad19', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:41:45 np0005539552 nova_compute[233724]: 2025-11-29 08:41:45.278 233728 DEBUG nova.objects.instance [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lazy-loading 'migration_context' on Instance uuid 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:41:45 np0005539552 nova_compute[233724]: 2025-11-29 08:41:45.309 233728 DEBUG nova.virt.libvirt.driver [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:41:45 np0005539552 nova_compute[233724]: 2025-11-29 08:41:45.310 233728 DEBUG nova.virt.libvirt.driver [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Ensure instance console log exists: /var/lib/nova/instances/120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:41:45 np0005539552 nova_compute[233724]: 2025-11-29 08:41:45.311 233728 DEBUG oslo_concurrency.lockutils [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:45 np0005539552 nova_compute[233724]: 2025-11-29 08:41:45.311 233728 DEBUG oslo_concurrency.lockutils [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:45 np0005539552 nova_compute[233724]: 2025-11-29 08:41:45.312 233728 DEBUG oslo_concurrency.lockutils [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:45 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:41:45 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2731129497' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:41:45 np0005539552 nova_compute[233724]: 2025-11-29 08:41:45.432 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:41:45 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e399 e399: 3 total, 3 up, 3 in
Nov 29 03:41:45 np0005539552 nova_compute[233724]: 2025-11-29 08:41:45.618 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:41:45 np0005539552 nova_compute[233724]: 2025-11-29 08:41:45.619 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4172MB free_disk=20.897335052490234GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:41:45 np0005539552 nova_compute[233724]: 2025-11-29 08:41:45.619 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:45 np0005539552 nova_compute[233724]: 2025-11-29 08:41:45.620 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:45 np0005539552 nova_compute[233724]: 2025-11-29 08:41:45.673 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:41:45 np0005539552 nova_compute[233724]: 2025-11-29 08:41:45.674 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:41:45 np0005539552 nova_compute[233724]: 2025-11-29 08:41:45.674 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:41:45 np0005539552 nova_compute[233724]: 2025-11-29 08:41:45.708 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:41:46 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:41:46 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1058972033' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:41:46 np0005539552 nova_compute[233724]: 2025-11-29 08:41:46.172 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:41:46 np0005539552 nova_compute[233724]: 2025-11-29 08:41:46.180 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:41:46 np0005539552 nova_compute[233724]: 2025-11-29 08:41:46.195 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:41:46 np0005539552 nova_compute[233724]: 2025-11-29 08:41:46.218 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:41:46 np0005539552 nova_compute[233724]: 2025-11-29 08:41:46.218 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:46.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:47.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:47 np0005539552 nova_compute[233724]: 2025-11-29 08:41:47.183 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:47 np0005539552 nova_compute[233724]: 2025-11-29 08:41:47.191 233728 DEBUG nova.network.neutron [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Successfully updated port: a02576c9-0b5e-4f90-83fd-ced6081e0046 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:41:47 np0005539552 nova_compute[233724]: 2025-11-29 08:41:47.210 233728 DEBUG oslo_concurrency.lockutils [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "refresh_cache-120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:41:47 np0005539552 nova_compute[233724]: 2025-11-29 08:41:47.211 233728 DEBUG oslo_concurrency.lockutils [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquired lock "refresh_cache-120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:41:47 np0005539552 nova_compute[233724]: 2025-11-29 08:41:47.211 233728 DEBUG nova.network.neutron [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:41:47 np0005539552 nova_compute[233724]: 2025-11-29 08:41:47.305 233728 DEBUG nova.compute.manager [req-0699dc62-7b77-41dc-a000-6401518adad8 req-03e1b8f9-9a27-4e9f-970c-43b6da46c90b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Received event network-changed-a02576c9-0b5e-4f90-83fd-ced6081e0046 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:41:47 np0005539552 nova_compute[233724]: 2025-11-29 08:41:47.306 233728 DEBUG nova.compute.manager [req-0699dc62-7b77-41dc-a000-6401518adad8 req-03e1b8f9-9a27-4e9f-970c-43b6da46c90b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Refreshing instance network info cache due to event network-changed-a02576c9-0b5e-4f90-83fd-ced6081e0046. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:41:47 np0005539552 nova_compute[233724]: 2025-11-29 08:41:47.307 233728 DEBUG oslo_concurrency.lockutils [req-0699dc62-7b77-41dc-a000-6401518adad8 req-03e1b8f9-9a27-4e9f-970c-43b6da46c90b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:41:47 np0005539552 nova_compute[233724]: 2025-11-29 08:41:47.401 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:47.402 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=57, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=56) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:41:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:47.403 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:41:47 np0005539552 nova_compute[233724]: 2025-11-29 08:41:47.436 233728 DEBUG nova.network.neutron [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:41:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:48.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:49.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:49 np0005539552 nova_compute[233724]: 2025-11-29 08:41:49.367 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405694.3659832, dfa97913-59ab-4f13-87e6-48d356827763 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:41:49 np0005539552 nova_compute[233724]: 2025-11-29 08:41:49.368 233728 INFO nova.compute.manager [-] [instance: dfa97913-59ab-4f13-87e6-48d356827763] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:41:49 np0005539552 nova_compute[233724]: 2025-11-29 08:41:49.397 233728 DEBUG nova.compute.manager [None req-d30681bb-65f2-49e2-bd61-b4bc9b4937ec - - - - - -] [instance: dfa97913-59ab-4f13-87e6-48d356827763] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:41:49 np0005539552 nova_compute[233724]: 2025-11-29 08:41:49.400 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:49 np0005539552 nova_compute[233724]: 2025-11-29 08:41:49.580 233728 DEBUG nova.network.neutron [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Updating instance_info_cache with network_info: [{"id": "a02576c9-0b5e-4f90-83fd-ced6081e0046", "address": "fa:16:3e:9b:ac:79", "network": {"id": "18d68586-48a1-4a55-b48c-8e6a0493b378", "bridge": "br-int", "label": "tempest-network-smoke--1460638043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa02576c9-0b", "ovs_interfaceid": "a02576c9-0b5e-4f90-83fd-ced6081e0046", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:41:49 np0005539552 nova_compute[233724]: 2025-11-29 08:41:49.608 233728 DEBUG oslo_concurrency.lockutils [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Releasing lock "refresh_cache-120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:41:49 np0005539552 nova_compute[233724]: 2025-11-29 08:41:49.609 233728 DEBUG nova.compute.manager [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Instance network_info: |[{"id": "a02576c9-0b5e-4f90-83fd-ced6081e0046", "address": "fa:16:3e:9b:ac:79", "network": {"id": "18d68586-48a1-4a55-b48c-8e6a0493b378", "bridge": "br-int", "label": "tempest-network-smoke--1460638043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa02576c9-0b", "ovs_interfaceid": "a02576c9-0b5e-4f90-83fd-ced6081e0046", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:41:49 np0005539552 nova_compute[233724]: 2025-11-29 08:41:49.609 233728 DEBUG oslo_concurrency.lockutils [req-0699dc62-7b77-41dc-a000-6401518adad8 req-03e1b8f9-9a27-4e9f-970c-43b6da46c90b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:41:49 np0005539552 nova_compute[233724]: 2025-11-29 08:41:49.610 233728 DEBUG nova.network.neutron [req-0699dc62-7b77-41dc-a000-6401518adad8 req-03e1b8f9-9a27-4e9f-970c-43b6da46c90b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Refreshing network info cache for port a02576c9-0b5e-4f90-83fd-ced6081e0046 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:41:49 np0005539552 nova_compute[233724]: 2025-11-29 08:41:49.615 233728 DEBUG nova.virt.libvirt.driver [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Start _get_guest_xml network_info=[{"id": "a02576c9-0b5e-4f90-83fd-ced6081e0046", "address": "fa:16:3e:9b:ac:79", "network": {"id": "18d68586-48a1-4a55-b48c-8e6a0493b378", "bridge": "br-int", "label": "tempest-network-smoke--1460638043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa02576c9-0b", "ovs_interfaceid": "a02576c9-0b5e-4f90-83fd-ced6081e0046", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:41:49 np0005539552 nova_compute[233724]: 2025-11-29 08:41:49.622 233728 WARNING nova.virt.libvirt.driver [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:41:49 np0005539552 nova_compute[233724]: 2025-11-29 08:41:49.627 233728 DEBUG nova.virt.libvirt.host [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:41:49 np0005539552 nova_compute[233724]: 2025-11-29 08:41:49.628 233728 DEBUG nova.virt.libvirt.host [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:41:49 np0005539552 nova_compute[233724]: 2025-11-29 08:41:49.637 233728 DEBUG nova.virt.libvirt.host [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:41:49 np0005539552 nova_compute[233724]: 2025-11-29 08:41:49.639 233728 DEBUG nova.virt.libvirt.host [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:41:49 np0005539552 nova_compute[233724]: 2025-11-29 08:41:49.640 233728 DEBUG nova.virt.libvirt.driver [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:41:49 np0005539552 nova_compute[233724]: 2025-11-29 08:41:49.641 233728 DEBUG nova.virt.hardware [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:41:49 np0005539552 nova_compute[233724]: 2025-11-29 08:41:49.642 233728 DEBUG nova.virt.hardware [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:41:49 np0005539552 nova_compute[233724]: 2025-11-29 08:41:49.642 233728 DEBUG nova.virt.hardware [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:41:49 np0005539552 nova_compute[233724]: 2025-11-29 08:41:49.643 233728 DEBUG nova.virt.hardware [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:41:49 np0005539552 nova_compute[233724]: 2025-11-29 08:41:49.643 233728 DEBUG nova.virt.hardware [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:41:49 np0005539552 nova_compute[233724]: 2025-11-29 08:41:49.644 233728 DEBUG nova.virt.hardware [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:41:49 np0005539552 nova_compute[233724]: 2025-11-29 08:41:49.644 233728 DEBUG nova.virt.hardware [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:41:49 np0005539552 nova_compute[233724]: 2025-11-29 08:41:49.645 233728 DEBUG nova.virt.hardware [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:41:49 np0005539552 nova_compute[233724]: 2025-11-29 08:41:49.645 233728 DEBUG nova.virt.hardware [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:41:49 np0005539552 nova_compute[233724]: 2025-11-29 08:41:49.646 233728 DEBUG nova.virt.hardware [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:41:49 np0005539552 nova_compute[233724]: 2025-11-29 08:41:49.646 233728 DEBUG nova.virt.hardware [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:41:49 np0005539552 nova_compute[233724]: 2025-11-29 08:41:49.652 233728 DEBUG oslo_concurrency.processutils [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:41:50 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:41:50 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3566495403' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:41:50 np0005539552 nova_compute[233724]: 2025-11-29 08:41:50.163 233728 DEBUG oslo_concurrency.processutils [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:41:50 np0005539552 nova_compute[233724]: 2025-11-29 08:41:50.202 233728 DEBUG nova.storage.rbd_utils [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:41:50 np0005539552 nova_compute[233724]: 2025-11-29 08:41:50.206 233728 DEBUG oslo_concurrency.processutils [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:41:50 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:50.405 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '57'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:50 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:41:50 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1750381294' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:41:50 np0005539552 nova_compute[233724]: 2025-11-29 08:41:50.680 233728 DEBUG oslo_concurrency.processutils [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:41:50 np0005539552 nova_compute[233724]: 2025-11-29 08:41:50.682 233728 DEBUG nova.virt.libvirt.vif [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:41:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1105356949',display_name='tempest-TestNetworkBasicOps-server-1105356949',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1105356949',id=193,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNqEYciRULp74zjg3snXvxh7woJYi3NZ4rX0ANZIImrhM1eju93i1R1OjYtKkB6EvWbwfKF3IThDx5Ws9/WhrZ+NQjmQ3Cjp/LZkjwp20L++PGi0tpsKdtiAZiAwdtitPw==',key_name='tempest-TestNetworkBasicOps-920075302',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-wxrwgub1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:41:44Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a02576c9-0b5e-4f90-83fd-ced6081e0046", "address": "fa:16:3e:9b:ac:79", "network": {"id": "18d68586-48a1-4a55-b48c-8e6a0493b378", "bridge": "br-int", "label": "tempest-network-smoke--1460638043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa02576c9-0b", "ovs_interfaceid": "a02576c9-0b5e-4f90-83fd-ced6081e0046", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:41:50 np0005539552 nova_compute[233724]: 2025-11-29 08:41:50.683 233728 DEBUG nova.network.os_vif_util [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "a02576c9-0b5e-4f90-83fd-ced6081e0046", "address": "fa:16:3e:9b:ac:79", "network": {"id": "18d68586-48a1-4a55-b48c-8e6a0493b378", "bridge": "br-int", "label": "tempest-network-smoke--1460638043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa02576c9-0b", "ovs_interfaceid": "a02576c9-0b5e-4f90-83fd-ced6081e0046", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:41:50 np0005539552 nova_compute[233724]: 2025-11-29 08:41:50.683 233728 DEBUG nova.network.os_vif_util [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:ac:79,bridge_name='br-int',has_traffic_filtering=True,id=a02576c9-0b5e-4f90-83fd-ced6081e0046,network=Network(18d68586-48a1-4a55-b48c-8e6a0493b378),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa02576c9-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:41:50 np0005539552 nova_compute[233724]: 2025-11-29 08:41:50.684 233728 DEBUG nova.objects.instance [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lazy-loading 'pci_devices' on Instance uuid 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:41:50 np0005539552 nova_compute[233724]: 2025-11-29 08:41:50.696 233728 DEBUG nova.virt.libvirt.driver [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:41:50 np0005539552 nova_compute[233724]:  <uuid>120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b</uuid>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:  <name>instance-000000c1</name>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:41:50 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:      <nova:name>tempest-TestNetworkBasicOps-server-1105356949</nova:name>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:41:49</nova:creationTime>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:41:50 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:        <nova:user uuid="4774e2851bc6407cb0fcde15bd24d1b3">tempest-TestNetworkBasicOps-828399474-project-member</nova:user>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:        <nova:project uuid="0471b9b208874403aa3f0fbe7504ad19">tempest-TestNetworkBasicOps-828399474</nova:project>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:        <nova:port uuid="a02576c9-0b5e-4f90-83fd-ced6081e0046">
Nov 29 03:41:50 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:      <entry name="serial">120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b</entry>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:      <entry name="uuid">120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b</entry>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:41:50 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b_disk">
Nov 29 03:41:50 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:41:50 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:41:50 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b_disk.config">
Nov 29 03:41:50 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:41:50 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:41:50 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:9b:ac:79"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:      <target dev="tapa02576c9-0b"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:41:50 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b/console.log" append="off"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:41:50 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:41:50 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:41:50 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:41:50 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:41:50 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 03:41:50 np0005539552 nova_compute[233724]: 2025-11-29 08:41:50.697 233728 DEBUG nova.compute.manager [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Preparing to wait for external event network-vif-plugged-a02576c9-0b5e-4f90-83fd-ced6081e0046 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 03:41:50 np0005539552 nova_compute[233724]: 2025-11-29 08:41:50.697 233728 DEBUG oslo_concurrency.lockutils [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:41:50 np0005539552 nova_compute[233724]: 2025-11-29 08:41:50.697 233728 DEBUG oslo_concurrency.lockutils [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:41:50 np0005539552 nova_compute[233724]: 2025-11-29 08:41:50.697 233728 DEBUG oslo_concurrency.lockutils [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:41:50 np0005539552 nova_compute[233724]: 2025-11-29 08:41:50.698 233728 DEBUG nova.virt.libvirt.vif [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:41:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1105356949',display_name='tempest-TestNetworkBasicOps-server-1105356949',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1105356949',id=193,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNqEYciRULp74zjg3snXvxh7woJYi3NZ4rX0ANZIImrhM1eju93i1R1OjYtKkB6EvWbwfKF3IThDx5Ws9/WhrZ+NQjmQ3Cjp/LZkjwp20L++PGi0tpsKdtiAZiAwdtitPw==',key_name='tempest-TestNetworkBasicOps-920075302',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-wxrwgub1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:41:44Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a02576c9-0b5e-4f90-83fd-ced6081e0046", "address": "fa:16:3e:9b:ac:79", "network": {"id": "18d68586-48a1-4a55-b48c-8e6a0493b378", "bridge": "br-int", "label": "tempest-network-smoke--1460638043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": 
{}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa02576c9-0b", "ovs_interfaceid": "a02576c9-0b5e-4f90-83fd-ced6081e0046", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:41:50 np0005539552 nova_compute[233724]: 2025-11-29 08:41:50.698 233728 DEBUG nova.network.os_vif_util [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "a02576c9-0b5e-4f90-83fd-ced6081e0046", "address": "fa:16:3e:9b:ac:79", "network": {"id": "18d68586-48a1-4a55-b48c-8e6a0493b378", "bridge": "br-int", "label": "tempest-network-smoke--1460638043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa02576c9-0b", "ovs_interfaceid": "a02576c9-0b5e-4f90-83fd-ced6081e0046", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:41:50 np0005539552 nova_compute[233724]: 2025-11-29 08:41:50.699 233728 DEBUG nova.network.os_vif_util [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:ac:79,bridge_name='br-int',has_traffic_filtering=True,id=a02576c9-0b5e-4f90-83fd-ced6081e0046,network=Network(18d68586-48a1-4a55-b48c-8e6a0493b378),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa02576c9-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:41:50 np0005539552 nova_compute[233724]: 2025-11-29 08:41:50.699 233728 DEBUG os_vif [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:ac:79,bridge_name='br-int',has_traffic_filtering=True,id=a02576c9-0b5e-4f90-83fd-ced6081e0046,network=Network(18d68586-48a1-4a55-b48c-8e6a0493b378),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa02576c9-0b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 03:41:50 np0005539552 nova_compute[233724]: 2025-11-29 08:41:50.700 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:41:50 np0005539552 nova_compute[233724]: 2025-11-29 08:41:50.701 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:41:50 np0005539552 nova_compute[233724]: 2025-11-29 08:41:50.701 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:41:50 np0005539552 nova_compute[233724]: 2025-11-29 08:41:50.705 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:41:50 np0005539552 nova_compute[233724]: 2025-11-29 08:41:50.705 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa02576c9-0b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:41:50 np0005539552 nova_compute[233724]: 2025-11-29 08:41:50.705 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa02576c9-0b, col_values=(('external_ids', {'iface-id': 'a02576c9-0b5e-4f90-83fd-ced6081e0046', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9b:ac:79', 'vm-uuid': '120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:41:50 np0005539552 nova_compute[233724]: 2025-11-29 08:41:50.707 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:41:50 np0005539552 NetworkManager[48926]: <info>  [1764405710.7084] manager: (tapa02576c9-0b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/384)
Nov 29 03:41:50 np0005539552 nova_compute[233724]: 2025-11-29 08:41:50.710 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 03:41:50 np0005539552 nova_compute[233724]: 2025-11-29 08:41:50.718 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:41:50 np0005539552 nova_compute[233724]: 2025-11-29 08:41:50.719 233728 INFO os_vif [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:ac:79,bridge_name='br-int',has_traffic_filtering=True,id=a02576c9-0b5e-4f90-83fd-ced6081e0046,network=Network(18d68586-48a1-4a55-b48c-8e6a0493b378),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa02576c9-0b')
Nov 29 03:41:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:50.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:50 np0005539552 nova_compute[233724]: 2025-11-29 08:41:50.777 233728 DEBUG nova.virt.libvirt.driver [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:41:50 np0005539552 nova_compute[233724]: 2025-11-29 08:41:50.777 233728 DEBUG nova.virt.libvirt.driver [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:41:50 np0005539552 nova_compute[233724]: 2025-11-29 08:41:50.778 233728 DEBUG nova.virt.libvirt.driver [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No VIF found with MAC fa:16:3e:9b:ac:79, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 03:41:50 np0005539552 nova_compute[233724]: 2025-11-29 08:41:50.779 233728 INFO nova.virt.libvirt.driver [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Using config drive
Nov 29 03:41:50 np0005539552 nova_compute[233724]: 2025-11-29 08:41:50.822 233728 DEBUG nova.storage.rbd_utils [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:41:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:51.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:51 np0005539552 nova_compute[233724]: 2025-11-29 08:41:51.218 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:41:51 np0005539552 nova_compute[233724]: 2025-11-29 08:41:51.218 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:41:51 np0005539552 nova_compute[233724]: 2025-11-29 08:41:51.524 233728 INFO nova.virt.libvirt.driver [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Creating config drive at /var/lib/nova/instances/120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b/disk.config
Nov 29 03:41:51 np0005539552 nova_compute[233724]: 2025-11-29 08:41:51.533 233728 DEBUG oslo_concurrency.processutils [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpau93i9ek execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:41:51 np0005539552 nova_compute[233724]: 2025-11-29 08:41:51.690 233728 DEBUG oslo_concurrency.processutils [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpau93i9ek" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:41:51 np0005539552 nova_compute[233724]: 2025-11-29 08:41:51.742 233728 DEBUG nova.storage.rbd_utils [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:41:51 np0005539552 nova_compute[233724]: 2025-11-29 08:41:51.749 233728 DEBUG oslo_concurrency.processutils [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b/disk.config 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:41:51 np0005539552 nova_compute[233724]: 2025-11-29 08:41:51.946 233728 DEBUG oslo_concurrency.processutils [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b/disk.config 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:41:51 np0005539552 nova_compute[233724]: 2025-11-29 08:41:51.948 233728 INFO nova.virt.libvirt.driver [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Deleting local config drive /var/lib/nova/instances/120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b/disk.config because it was imported into RBD.
Nov 29 03:41:52 np0005539552 kernel: tapa02576c9-0b: entered promiscuous mode
Nov 29 03:41:52 np0005539552 NetworkManager[48926]: <info>  [1764405712.0234] manager: (tapa02576c9-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/385)
Nov 29 03:41:52 np0005539552 nova_compute[233724]: 2025-11-29 08:41:52.024 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:41:52 np0005539552 ovn_controller[133798]: 2025-11-29T08:41:52Z|00877|binding|INFO|Claiming lport a02576c9-0b5e-4f90-83fd-ced6081e0046 for this chassis.
Nov 29 03:41:52 np0005539552 ovn_controller[133798]: 2025-11-29T08:41:52Z|00878|binding|INFO|a02576c9-0b5e-4f90-83fd-ced6081e0046: Claiming fa:16:3e:9b:ac:79 10.100.0.9
Nov 29 03:41:52 np0005539552 nova_compute[233724]: 2025-11-29 08:41:52.037 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:41:52 np0005539552 nova_compute[233724]: 2025-11-29 08:41:52.045 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:41:52 np0005539552 nova_compute[233724]: 2025-11-29 08:41:52.053 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:41:52 np0005539552 NetworkManager[48926]: <info>  [1764405712.0549] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/386)
Nov 29 03:41:52 np0005539552 NetworkManager[48926]: <info>  [1764405712.0571] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/387)
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:52.057 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:ac:79 10.100.0.9'], port_security=['fa:16:3e:9b:ac:79 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-666232535', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18d68586-48a1-4a55-b48c-8e6a0493b378', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-666232535', 'neutron:project_id': '0471b9b208874403aa3f0fbe7504ad19', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'e80d8f2f-092d-4879-a27f-82d3e7dad8a3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.231'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ce74417-6704-4d65-b5df-ee9ba219986e, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=a02576c9-0b5e-4f90-83fd-ced6081e0046) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:52.058 143400 INFO neutron.agent.ovn.metadata.agent [-] Port a02576c9-0b5e-4f90-83fd-ced6081e0046 in datapath 18d68586-48a1-4a55-b48c-8e6a0493b378 bound to our chassis
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:52.060 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 18d68586-48a1-4a55-b48c-8e6a0493b378
Nov 29 03:41:52 np0005539552 systemd-udevd[314640]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:41:52 np0005539552 systemd-machined[196379]: New machine qemu-89-instance-000000c1.
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:52.073 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[1b4df242-3ec1-46f6-8df3-db1824d1f783]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:52.074 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap18d68586-41 in ovnmeta-18d68586-48a1-4a55-b48c-8e6a0493b378 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:52.077 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap18d68586-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:52.078 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a34d4c75-63e3-43af-9ad5-5c8c0c543df3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:52.078 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e9beebdc-1225-413a-9952-436745fee4e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:41:52 np0005539552 NetworkManager[48926]: <info>  [1764405712.0894] device (tapa02576c9-0b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:41:52 np0005539552 NetworkManager[48926]: <info>  [1764405712.0909] device (tapa02576c9-0b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:52.093 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[76787620-7351-43da-a9fb-7ef178b59059]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:41:52 np0005539552 systemd[1]: Started Virtual Machine qemu-89-instance-000000c1.
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:52.123 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3fecec07-095b-4ac6-9be3-7e4e4c134e2f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:41:52 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:41:52 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:41:52 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:52.180 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[65097a79-83fb-4688-8979-a0af132a1874]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:52.188 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d00a7bbe-b923-4078-b74c-fa52c6ba8804]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:41:52 np0005539552 systemd-udevd[314645]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:41:52 np0005539552 NetworkManager[48926]: <info>  [1764405712.1923] manager: (tap18d68586-40): new Veth device (/org/freedesktop/NetworkManager/Devices/388)
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:52.233 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[59714a12-2ae2-4938-9ca8-4b4607a15e77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:52.236 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[5032388a-c507-4212-8ed0-0df4bdf5e6c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:41:52 np0005539552 NetworkManager[48926]: <info>  [1764405712.2621] device (tap18d68586-40): carrier: link connected
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:52.268 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[efae93b0-c07c-4447-93ea-fff9fb352056]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:52.285 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2736ba40-6585-44dd-8e2d-aa50929a45a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap18d68586-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ce:86:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 258], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 875795, 'reachable_time': 16720, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314674, 'error': None, 'target': 'ovnmeta-18d68586-48a1-4a55-b48c-8e6a0493b378', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:52.303 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[18814f68-523d-408d-96ef-1f352852c80f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fece:8668'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 875795, 'tstamp': 875795}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314675, 'error': None, 'target': 'ovnmeta-18d68586-48a1-4a55-b48c-8e6a0493b378', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:52 np0005539552 nova_compute[233724]: 2025-11-29 08:41:52.339 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:52 np0005539552 nova_compute[233724]: 2025-11-29 08:41:52.341 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:52 np0005539552 nova_compute[233724]: 2025-11-29 08:41:52.368 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:52 np0005539552 ovn_controller[133798]: 2025-11-29T08:41:52Z|00879|binding|INFO|Setting lport a02576c9-0b5e-4f90-83fd-ced6081e0046 ovn-installed in OVS
Nov 29 03:41:52 np0005539552 ovn_controller[133798]: 2025-11-29T08:41:52Z|00880|binding|INFO|Setting lport a02576c9-0b5e-4f90-83fd-ced6081e0046 up in Southbound
Nov 29 03:41:52 np0005539552 nova_compute[233724]: 2025-11-29 08:41:52.380 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:52 np0005539552 nova_compute[233724]: 2025-11-29 08:41:52.414 233728 DEBUG nova.network.neutron [req-0699dc62-7b77-41dc-a000-6401518adad8 req-03e1b8f9-9a27-4e9f-970c-43b6da46c90b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Updated VIF entry in instance network info cache for port a02576c9-0b5e-4f90-83fd-ced6081e0046. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:41:52 np0005539552 nova_compute[233724]: 2025-11-29 08:41:52.414 233728 DEBUG nova.network.neutron [req-0699dc62-7b77-41dc-a000-6401518adad8 req-03e1b8f9-9a27-4e9f-970c-43b6da46c90b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Updating instance_info_cache with network_info: [{"id": "a02576c9-0b5e-4f90-83fd-ced6081e0046", "address": "fa:16:3e:9b:ac:79", "network": {"id": "18d68586-48a1-4a55-b48c-8e6a0493b378", "bridge": "br-int", "label": "tempest-network-smoke--1460638043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa02576c9-0b", "ovs_interfaceid": "a02576c9-0b5e-4f90-83fd-ced6081e0046", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:41:52 np0005539552 nova_compute[233724]: 2025-11-29 08:41:52.431 233728 DEBUG oslo_concurrency.lockutils [req-0699dc62-7b77-41dc-a000-6401518adad8 req-03e1b8f9-9a27-4e9f-970c-43b6da46c90b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:52.438 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[197c5a4b-bda1-4661-bc69-9c078b87895a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap18d68586-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ce:86:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 258], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 875795, 'reachable_time': 16720, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314676, 'error': None, 'target': 'ovnmeta-18d68586-48a1-4a55-b48c-8e6a0493b378', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:52.475 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c150e6f5-6815-4c22-bdab-cfbd35b5cb6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:52 np0005539552 nova_compute[233724]: 2025-11-29 08:41:52.542 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405712.5417309, 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:41:52 np0005539552 nova_compute[233724]: 2025-11-29 08:41:52.542 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] VM Started (Lifecycle Event)#033[00m
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:52.565 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[eeb40ecd-68d6-452c-89fe-686c621e3da3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:52.567 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18d68586-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:52.568 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:52.568 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap18d68586-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:52 np0005539552 nova_compute[233724]: 2025-11-29 08:41:52.571 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:52 np0005539552 kernel: tap18d68586-40: entered promiscuous mode
Nov 29 03:41:52 np0005539552 NetworkManager[48926]: <info>  [1764405712.5724] manager: (tap18d68586-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/389)
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:52.576 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap18d68586-40, col_values=(('external_ids', {'iface-id': 'd5416c31-8c19-445b-b33f-f0e9a55b063e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:52 np0005539552 ovn_controller[133798]: 2025-11-29T08:41:52Z|00881|binding|INFO|Releasing lport d5416c31-8c19-445b-b33f-f0e9a55b063e from this chassis (sb_readonly=0)
Nov 29 03:41:52 np0005539552 nova_compute[233724]: 2025-11-29 08:41:52.608 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:41:52 np0005539552 nova_compute[233724]: 2025-11-29 08:41:52.612 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:52.613 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/18d68586-48a1-4a55-b48c-8e6a0493b378.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/18d68586-48a1-4a55-b48c-8e6a0493b378.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:41:52 np0005539552 nova_compute[233724]: 2025-11-29 08:41:52.615 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405712.543316, 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:52.615 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a85a8503-b389-4591-8a5d-356a7db78cea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:52.616 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-18d68586-48a1-4a55-b48c-8e6a0493b378
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/18d68586-48a1-4a55-b48c-8e6a0493b378.pid.haproxy
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:41:52 np0005539552 nova_compute[233724]: 2025-11-29 08:41:52.616 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 18d68586-48a1-4a55-b48c-8e6a0493b378
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:41:52 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:52.618 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-18d68586-48a1-4a55-b48c-8e6a0493b378', 'env', 'PROCESS_TAG=haproxy-18d68586-48a1-4a55-b48c-8e6a0493b378', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/18d68586-48a1-4a55-b48c-8e6a0493b378.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:41:52 np0005539552 nova_compute[233724]: 2025-11-29 08:41:52.647 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:41:52 np0005539552 nova_compute[233724]: 2025-11-29 08:41:52.652 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:41:52 np0005539552 nova_compute[233724]: 2025-11-29 08:41:52.673 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:41:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:52.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:52 np0005539552 nova_compute[233724]: 2025-11-29 08:41:52.753 233728 DEBUG nova.compute.manager [req-9f1da170-ecbc-40c9-a8dd-aa8e9864b1cc req-bff4c1d9-31b2-41fd-bd9b-97459aa23376 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Received event network-vif-plugged-a02576c9-0b5e-4f90-83fd-ced6081e0046 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:41:52 np0005539552 nova_compute[233724]: 2025-11-29 08:41:52.754 233728 DEBUG oslo_concurrency.lockutils [req-9f1da170-ecbc-40c9-a8dd-aa8e9864b1cc req-bff4c1d9-31b2-41fd-bd9b-97459aa23376 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:52 np0005539552 nova_compute[233724]: 2025-11-29 08:41:52.754 233728 DEBUG oslo_concurrency.lockutils [req-9f1da170-ecbc-40c9-a8dd-aa8e9864b1cc req-bff4c1d9-31b2-41fd-bd9b-97459aa23376 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:52 np0005539552 nova_compute[233724]: 2025-11-29 08:41:52.754 233728 DEBUG oslo_concurrency.lockutils [req-9f1da170-ecbc-40c9-a8dd-aa8e9864b1cc req-bff4c1d9-31b2-41fd-bd9b-97459aa23376 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:52 np0005539552 nova_compute[233724]: 2025-11-29 08:41:52.754 233728 DEBUG nova.compute.manager [req-9f1da170-ecbc-40c9-a8dd-aa8e9864b1cc req-bff4c1d9-31b2-41fd-bd9b-97459aa23376 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Processing event network-vif-plugged-a02576c9-0b5e-4f90-83fd-ced6081e0046 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:41:52 np0005539552 nova_compute[233724]: 2025-11-29 08:41:52.755 233728 DEBUG nova.compute.manager [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:41:52 np0005539552 nova_compute[233724]: 2025-11-29 08:41:52.759 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405712.758994, 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:41:52 np0005539552 nova_compute[233724]: 2025-11-29 08:41:52.759 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:41:52 np0005539552 nova_compute[233724]: 2025-11-29 08:41:52.773 233728 DEBUG nova.virt.libvirt.driver [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:41:52 np0005539552 nova_compute[233724]: 2025-11-29 08:41:52.777 233728 INFO nova.virt.libvirt.driver [-] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Instance spawned successfully.#033[00m
Nov 29 03:41:52 np0005539552 nova_compute[233724]: 2025-11-29 08:41:52.777 233728 DEBUG nova.virt.libvirt.driver [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:41:52 np0005539552 nova_compute[233724]: 2025-11-29 08:41:52.783 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:41:52 np0005539552 nova_compute[233724]: 2025-11-29 08:41:52.787 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:41:52 np0005539552 nova_compute[233724]: 2025-11-29 08:41:52.795 233728 DEBUG nova.virt.libvirt.driver [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:41:52 np0005539552 nova_compute[233724]: 2025-11-29 08:41:52.796 233728 DEBUG nova.virt.libvirt.driver [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:41:52 np0005539552 nova_compute[233724]: 2025-11-29 08:41:52.796 233728 DEBUG nova.virt.libvirt.driver [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:41:52 np0005539552 nova_compute[233724]: 2025-11-29 08:41:52.796 233728 DEBUG nova.virt.libvirt.driver [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:41:52 np0005539552 nova_compute[233724]: 2025-11-29 08:41:52.797 233728 DEBUG nova.virt.libvirt.driver [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:41:52 np0005539552 nova_compute[233724]: 2025-11-29 08:41:52.797 233728 DEBUG nova.virt.libvirt.driver [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:41:52 np0005539552 nova_compute[233724]: 2025-11-29 08:41:52.803 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:41:52 np0005539552 nova_compute[233724]: 2025-11-29 08:41:52.866 233728 INFO nova.compute.manager [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Took 8.35 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:41:52 np0005539552 nova_compute[233724]: 2025-11-29 08:41:52.867 233728 DEBUG nova.compute.manager [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:41:52 np0005539552 nova_compute[233724]: 2025-11-29 08:41:52.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:41:52 np0005539552 nova_compute[233724]: 2025-11-29 08:41:52.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:41:52 np0005539552 nova_compute[233724]: 2025-11-29 08:41:52.950 233728 INFO nova.compute.manager [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Took 9.44 seconds to build instance.#033[00m
Nov 29 03:41:52 np0005539552 nova_compute[233724]: 2025-11-29 08:41:52.963 233728 DEBUG oslo_concurrency.lockutils [None req-b73c56c4-6d91-4c06-acff-7b0f5d60aebf 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:53.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:53 np0005539552 podman[314751]: 2025-11-29 08:41:53.093810827 +0000 UTC m=+0.053761068 container create a3a78952027a8bb2c83cf88328ee15b5d10d069e42d0e8515221f1f5cb8f0d61 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18d68586-48a1-4a55-b48c-8e6a0493b378, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:41:53 np0005539552 systemd[1]: Started libpod-conmon-a3a78952027a8bb2c83cf88328ee15b5d10d069e42d0e8515221f1f5cb8f0d61.scope.
Nov 29 03:41:53 np0005539552 podman[314751]: 2025-11-29 08:41:53.066965905 +0000 UTC m=+0.026916156 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:41:53 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:41:53 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/902a2879d44d864b75874ef5f839a70955b60e85dbb83a308f0bd701233c898b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:41:53 np0005539552 podman[314751]: 2025-11-29 08:41:53.194029805 +0000 UTC m=+0.153980086 container init a3a78952027a8bb2c83cf88328ee15b5d10d069e42d0e8515221f1f5cb8f0d61 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18d68586-48a1-4a55-b48c-8e6a0493b378, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 03:41:53 np0005539552 podman[314751]: 2025-11-29 08:41:53.204941539 +0000 UTC m=+0.164891810 container start a3a78952027a8bb2c83cf88328ee15b5d10d069e42d0e8515221f1f5cb8f0d61 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18d68586-48a1-4a55-b48c-8e6a0493b378, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:41:53 np0005539552 neutron-haproxy-ovnmeta-18d68586-48a1-4a55-b48c-8e6a0493b378[314765]: [NOTICE]   (314769) : New worker (314771) forked
Nov 29 03:41:53 np0005539552 neutron-haproxy-ovnmeta-18d68586-48a1-4a55-b48c-8e6a0493b378[314765]: [NOTICE]   (314769) : Loading success.
Nov 29 03:41:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:53 np0005539552 nova_compute[233724]: 2025-11-29 08:41:53.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:41:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:54.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:54 np0005539552 nova_compute[233724]: 2025-11-29 08:41:54.909 233728 DEBUG nova.compute.manager [req-a485511c-2f2a-4da6-a08b-9aaa71319359 req-37c147e1-c69c-4aad-b1db-1e2c0cfa1c60 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Received event network-vif-plugged-a02576c9-0b5e-4f90-83fd-ced6081e0046 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:41:54 np0005539552 nova_compute[233724]: 2025-11-29 08:41:54.910 233728 DEBUG oslo_concurrency.lockutils [req-a485511c-2f2a-4da6-a08b-9aaa71319359 req-37c147e1-c69c-4aad-b1db-1e2c0cfa1c60 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:54 np0005539552 nova_compute[233724]: 2025-11-29 08:41:54.910 233728 DEBUG oslo_concurrency.lockutils [req-a485511c-2f2a-4da6-a08b-9aaa71319359 req-37c147e1-c69c-4aad-b1db-1e2c0cfa1c60 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:54 np0005539552 nova_compute[233724]: 2025-11-29 08:41:54.911 233728 DEBUG oslo_concurrency.lockutils [req-a485511c-2f2a-4da6-a08b-9aaa71319359 req-37c147e1-c69c-4aad-b1db-1e2c0cfa1c60 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:54 np0005539552 nova_compute[233724]: 2025-11-29 08:41:54.911 233728 DEBUG nova.compute.manager [req-a485511c-2f2a-4da6-a08b-9aaa71319359 req-37c147e1-c69c-4aad-b1db-1e2c0cfa1c60 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] No waiting events found dispatching network-vif-plugged-a02576c9-0b5e-4f90-83fd-ced6081e0046 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:41:54 np0005539552 nova_compute[233724]: 2025-11-29 08:41:54.911 233728 WARNING nova.compute.manager [req-a485511c-2f2a-4da6-a08b-9aaa71319359 req-37c147e1-c69c-4aad-b1db-1e2c0cfa1c60 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Received unexpected event network-vif-plugged-a02576c9-0b5e-4f90-83fd-ced6081e0046 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:41:54 np0005539552 nova_compute[233724]: 2025-11-29 08:41:54.918 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:41:54 np0005539552 nova_compute[233724]: 2025-11-29 08:41:54.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:41:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:55.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:55 np0005539552 nova_compute[233724]: 2025-11-29 08:41:55.708 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:55 np0005539552 nova_compute[233724]: 2025-11-29 08:41:55.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:41:56 np0005539552 nova_compute[233724]: 2025-11-29 08:41:56.032 233728 DEBUG oslo_concurrency.lockutils [None req-af389d9c-747d-494d-addf-c152f156e5f2 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:56 np0005539552 nova_compute[233724]: 2025-11-29 08:41:56.033 233728 DEBUG oslo_concurrency.lockutils [None req-af389d9c-747d-494d-addf-c152f156e5f2 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:56 np0005539552 nova_compute[233724]: 2025-11-29 08:41:56.034 233728 DEBUG oslo_concurrency.lockutils [None req-af389d9c-747d-494d-addf-c152f156e5f2 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:56 np0005539552 nova_compute[233724]: 2025-11-29 08:41:56.034 233728 DEBUG oslo_concurrency.lockutils [None req-af389d9c-747d-494d-addf-c152f156e5f2 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:56 np0005539552 nova_compute[233724]: 2025-11-29 08:41:56.035 233728 DEBUG oslo_concurrency.lockutils [None req-af389d9c-747d-494d-addf-c152f156e5f2 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:56 np0005539552 nova_compute[233724]: 2025-11-29 08:41:56.037 233728 INFO nova.compute.manager [None req-af389d9c-747d-494d-addf-c152f156e5f2 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Terminating instance#033[00m
Nov 29 03:41:56 np0005539552 nova_compute[233724]: 2025-11-29 08:41:56.041 233728 DEBUG nova.compute.manager [None req-af389d9c-747d-494d-addf-c152f156e5f2 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:41:56 np0005539552 kernel: tapa02576c9-0b (unregistering): left promiscuous mode
Nov 29 03:41:56 np0005539552 NetworkManager[48926]: <info>  [1764405716.0958] device (tapa02576c9-0b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:41:56 np0005539552 ovn_controller[133798]: 2025-11-29T08:41:56Z|00882|binding|INFO|Releasing lport a02576c9-0b5e-4f90-83fd-ced6081e0046 from this chassis (sb_readonly=0)
Nov 29 03:41:56 np0005539552 ovn_controller[133798]: 2025-11-29T08:41:56Z|00883|binding|INFO|Setting lport a02576c9-0b5e-4f90-83fd-ced6081e0046 down in Southbound
Nov 29 03:41:56 np0005539552 ovn_controller[133798]: 2025-11-29T08:41:56Z|00884|binding|INFO|Removing iface tapa02576c9-0b ovn-installed in OVS
Nov 29 03:41:56 np0005539552 nova_compute[233724]: 2025-11-29 08:41:56.112 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:56.120 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:ac:79 10.100.0.9'], port_security=['fa:16:3e:9b:ac:79 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-666232535', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18d68586-48a1-4a55-b48c-8e6a0493b378', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-666232535', 'neutron:project_id': '0471b9b208874403aa3f0fbe7504ad19', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'e80d8f2f-092d-4879-a27f-82d3e7dad8a3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.231', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ce74417-6704-4d65-b5df-ee9ba219986e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=a02576c9-0b5e-4f90-83fd-ced6081e0046) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:41:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:56.121 143400 INFO neutron.agent.ovn.metadata.agent [-] Port a02576c9-0b5e-4f90-83fd-ced6081e0046 in datapath 18d68586-48a1-4a55-b48c-8e6a0493b378 unbound from our chassis#033[00m
Nov 29 03:41:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:56.122 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 18d68586-48a1-4a55-b48c-8e6a0493b378, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:41:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:56.124 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c54942e9-4afa-434d-97e4-8d9c27500846]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:56.124 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-18d68586-48a1-4a55-b48c-8e6a0493b378 namespace which is not needed anymore#033[00m
Nov 29 03:41:56 np0005539552 nova_compute[233724]: 2025-11-29 08:41:56.151 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:56 np0005539552 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d000000c1.scope: Deactivated successfully.
Nov 29 03:41:56 np0005539552 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d000000c1.scope: Consumed 3.850s CPU time.
Nov 29 03:41:56 np0005539552 systemd-machined[196379]: Machine qemu-89-instance-000000c1 terminated.
Nov 29 03:41:56 np0005539552 NetworkManager[48926]: <info>  [1764405716.2636] manager: (tapa02576c9-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/390)
Nov 29 03:41:56 np0005539552 nova_compute[233724]: 2025-11-29 08:41:56.282 233728 INFO nova.virt.libvirt.driver [-] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Instance destroyed successfully.#033[00m
Nov 29 03:41:56 np0005539552 nova_compute[233724]: 2025-11-29 08:41:56.283 233728 DEBUG nova.objects.instance [None req-af389d9c-747d-494d-addf-c152f156e5f2 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lazy-loading 'resources' on Instance uuid 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:41:56 np0005539552 nova_compute[233724]: 2025-11-29 08:41:56.297 233728 DEBUG nova.virt.libvirt.vif [None req-af389d9c-747d-494d-addf-c152f156e5f2 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:41:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1105356949',display_name='tempest-TestNetworkBasicOps-server-1105356949',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1105356949',id=193,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNqEYciRULp74zjg3snXvxh7woJYi3NZ4rX0ANZIImrhM1eju93i1R1OjYtKkB6EvWbwfKF3IThDx5Ws9/WhrZ+NQjmQ3Cjp/LZkjwp20L++PGi0tpsKdtiAZiAwdtitPw==',key_name='tempest-TestNetworkBasicOps-920075302',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:41:52Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-wxrwgub1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:41:52Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a02576c9-0b5e-4f90-83fd-ced6081e0046", "address": "fa:16:3e:9b:ac:79", "network": {"id": "18d68586-48a1-4a55-b48c-8e6a0493b378", "bridge": "br-int", "label": "tempest-network-smoke--1460638043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa02576c9-0b", "ovs_interfaceid": "a02576c9-0b5e-4f90-83fd-ced6081e0046", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:41:56 np0005539552 nova_compute[233724]: 2025-11-29 08:41:56.298 233728 DEBUG nova.network.os_vif_util [None req-af389d9c-747d-494d-addf-c152f156e5f2 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "a02576c9-0b5e-4f90-83fd-ced6081e0046", "address": "fa:16:3e:9b:ac:79", "network": {"id": "18d68586-48a1-4a55-b48c-8e6a0493b378", "bridge": "br-int", "label": "tempest-network-smoke--1460638043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa02576c9-0b", "ovs_interfaceid": "a02576c9-0b5e-4f90-83fd-ced6081e0046", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:41:56 np0005539552 nova_compute[233724]: 2025-11-29 08:41:56.299 233728 DEBUG nova.network.os_vif_util [None req-af389d9c-747d-494d-addf-c152f156e5f2 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:ac:79,bridge_name='br-int',has_traffic_filtering=True,id=a02576c9-0b5e-4f90-83fd-ced6081e0046,network=Network(18d68586-48a1-4a55-b48c-8e6a0493b378),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa02576c9-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:41:56 np0005539552 nova_compute[233724]: 2025-11-29 08:41:56.300 233728 DEBUG os_vif [None req-af389d9c-747d-494d-addf-c152f156e5f2 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:ac:79,bridge_name='br-int',has_traffic_filtering=True,id=a02576c9-0b5e-4f90-83fd-ced6081e0046,network=Network(18d68586-48a1-4a55-b48c-8e6a0493b378),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa02576c9-0b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:41:56 np0005539552 nova_compute[233724]: 2025-11-29 08:41:56.303 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:56 np0005539552 nova_compute[233724]: 2025-11-29 08:41:56.303 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa02576c9-0b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:56 np0005539552 nova_compute[233724]: 2025-11-29 08:41:56.306 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:56 np0005539552 nova_compute[233724]: 2025-11-29 08:41:56.309 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:41:56 np0005539552 nova_compute[233724]: 2025-11-29 08:41:56.313 233728 INFO os_vif [None req-af389d9c-747d-494d-addf-c152f156e5f2 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:ac:79,bridge_name='br-int',has_traffic_filtering=True,id=a02576c9-0b5e-4f90-83fd-ced6081e0046,network=Network(18d68586-48a1-4a55-b48c-8e6a0493b378),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa02576c9-0b')#033[00m
Nov 29 03:41:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.005000135s ======
Nov 29 03:41:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:56.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.005000135s
Nov 29 03:41:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:57.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:57 np0005539552 neutron-haproxy-ovnmeta-18d68586-48a1-4a55-b48c-8e6a0493b378[314765]: [NOTICE]   (314769) : haproxy version is 2.8.14-c23fe91
Nov 29 03:41:57 np0005539552 neutron-haproxy-ovnmeta-18d68586-48a1-4a55-b48c-8e6a0493b378[314765]: [NOTICE]   (314769) : path to executable is /usr/sbin/haproxy
Nov 29 03:41:57 np0005539552 neutron-haproxy-ovnmeta-18d68586-48a1-4a55-b48c-8e6a0493b378[314765]: [WARNING]  (314769) : Exiting Master process...
Nov 29 03:41:57 np0005539552 neutron-haproxy-ovnmeta-18d68586-48a1-4a55-b48c-8e6a0493b378[314765]: [WARNING]  (314769) : Exiting Master process...
Nov 29 03:41:57 np0005539552 neutron-haproxy-ovnmeta-18d68586-48a1-4a55-b48c-8e6a0493b378[314765]: [ALERT]    (314769) : Current worker (314771) exited with code 143 (Terminated)
Nov 29 03:41:57 np0005539552 neutron-haproxy-ovnmeta-18d68586-48a1-4a55-b48c-8e6a0493b378[314765]: [WARNING]  (314769) : All workers exited. Exiting... (0)
Nov 29 03:41:57 np0005539552 systemd[1]: libpod-a3a78952027a8bb2c83cf88328ee15b5d10d069e42d0e8515221f1f5cb8f0d61.scope: Deactivated successfully.
Nov 29 03:41:57 np0005539552 nova_compute[233724]: 2025-11-29 08:41:57.135 233728 DEBUG nova.compute.manager [req-8b588dfc-0cc4-479e-91f9-04a2b364adad req-b52bf595-e3d6-458d-9b7c-ed0725cdf919 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Received event network-vif-unplugged-a02576c9-0b5e-4f90-83fd-ced6081e0046 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:41:57 np0005539552 nova_compute[233724]: 2025-11-29 08:41:57.136 233728 DEBUG oslo_concurrency.lockutils [req-8b588dfc-0cc4-479e-91f9-04a2b364adad req-b52bf595-e3d6-458d-9b7c-ed0725cdf919 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:57 np0005539552 nova_compute[233724]: 2025-11-29 08:41:57.137 233728 DEBUG oslo_concurrency.lockutils [req-8b588dfc-0cc4-479e-91f9-04a2b364adad req-b52bf595-e3d6-458d-9b7c-ed0725cdf919 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:57 np0005539552 podman[314805]: 2025-11-29 08:41:57.137495163 +0000 UTC m=+0.872364133 container died a3a78952027a8bb2c83cf88328ee15b5d10d069e42d0e8515221f1f5cb8f0d61 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18d68586-48a1-4a55-b48c-8e6a0493b378, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 03:41:57 np0005539552 nova_compute[233724]: 2025-11-29 08:41:57.137 233728 DEBUG oslo_concurrency.lockutils [req-8b588dfc-0cc4-479e-91f9-04a2b364adad req-b52bf595-e3d6-458d-9b7c-ed0725cdf919 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:57 np0005539552 nova_compute[233724]: 2025-11-29 08:41:57.138 233728 DEBUG nova.compute.manager [req-8b588dfc-0cc4-479e-91f9-04a2b364adad req-b52bf595-e3d6-458d-9b7c-ed0725cdf919 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] No waiting events found dispatching network-vif-unplugged-a02576c9-0b5e-4f90-83fd-ced6081e0046 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:41:57 np0005539552 nova_compute[233724]: 2025-11-29 08:41:57.138 233728 DEBUG nova.compute.manager [req-8b588dfc-0cc4-479e-91f9-04a2b364adad req-b52bf595-e3d6-458d-9b7c-ed0725cdf919 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Received event network-vif-unplugged-a02576c9-0b5e-4f90-83fd-ced6081e0046 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:41:57 np0005539552 nova_compute[233724]: 2025-11-29 08:41:57.138 233728 DEBUG nova.compute.manager [req-8b588dfc-0cc4-479e-91f9-04a2b364adad req-b52bf595-e3d6-458d-9b7c-ed0725cdf919 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Received event network-vif-plugged-a02576c9-0b5e-4f90-83fd-ced6081e0046 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:41:57 np0005539552 nova_compute[233724]: 2025-11-29 08:41:57.139 233728 DEBUG oslo_concurrency.lockutils [req-8b588dfc-0cc4-479e-91f9-04a2b364adad req-b52bf595-e3d6-458d-9b7c-ed0725cdf919 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:57 np0005539552 nova_compute[233724]: 2025-11-29 08:41:57.139 233728 DEBUG oslo_concurrency.lockutils [req-8b588dfc-0cc4-479e-91f9-04a2b364adad req-b52bf595-e3d6-458d-9b7c-ed0725cdf919 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:57 np0005539552 nova_compute[233724]: 2025-11-29 08:41:57.140 233728 DEBUG oslo_concurrency.lockutils [req-8b588dfc-0cc4-479e-91f9-04a2b364adad req-b52bf595-e3d6-458d-9b7c-ed0725cdf919 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:57 np0005539552 nova_compute[233724]: 2025-11-29 08:41:57.140 233728 DEBUG nova.compute.manager [req-8b588dfc-0cc4-479e-91f9-04a2b364adad req-b52bf595-e3d6-458d-9b7c-ed0725cdf919 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] No waiting events found dispatching network-vif-plugged-a02576c9-0b5e-4f90-83fd-ced6081e0046 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:41:57 np0005539552 nova_compute[233724]: 2025-11-29 08:41:57.141 233728 WARNING nova.compute.manager [req-8b588dfc-0cc4-479e-91f9-04a2b364adad req-b52bf595-e3d6-458d-9b7c-ed0725cdf919 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Received unexpected event network-vif-plugged-a02576c9-0b5e-4f90-83fd-ced6081e0046 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:41:57 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a3a78952027a8bb2c83cf88328ee15b5d10d069e42d0e8515221f1f5cb8f0d61-userdata-shm.mount: Deactivated successfully.
Nov 29 03:41:57 np0005539552 systemd[1]: var-lib-containers-storage-overlay-902a2879d44d864b75874ef5f839a70955b60e85dbb83a308f0bd701233c898b-merged.mount: Deactivated successfully.
Nov 29 03:41:57 np0005539552 podman[314805]: 2025-11-29 08:41:57.1923718 +0000 UTC m=+0.927240730 container cleanup a3a78952027a8bb2c83cf88328ee15b5d10d069e42d0e8515221f1f5cb8f0d61 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18d68586-48a1-4a55-b48c-8e6a0493b378, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 03:41:57 np0005539552 systemd[1]: libpod-conmon-a3a78952027a8bb2c83cf88328ee15b5d10d069e42d0e8515221f1f5cb8f0d61.scope: Deactivated successfully.
Nov 29 03:41:57 np0005539552 podman[314862]: 2025-11-29 08:41:57.287952763 +0000 UTC m=+0.060044397 container remove a3a78952027a8bb2c83cf88328ee15b5d10d069e42d0e8515221f1f5cb8f0d61 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18d68586-48a1-4a55-b48c-8e6a0493b378, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:41:57 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:57.298 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6941cac6-6d48-4264-89a8-2fdc87ff1fb9]: (4, ('Sat Nov 29 08:41:56 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-18d68586-48a1-4a55-b48c-8e6a0493b378 (a3a78952027a8bb2c83cf88328ee15b5d10d069e42d0e8515221f1f5cb8f0d61)\na3a78952027a8bb2c83cf88328ee15b5d10d069e42d0e8515221f1f5cb8f0d61\nSat Nov 29 08:41:57 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-18d68586-48a1-4a55-b48c-8e6a0493b378 (a3a78952027a8bb2c83cf88328ee15b5d10d069e42d0e8515221f1f5cb8f0d61)\na3a78952027a8bb2c83cf88328ee15b5d10d069e42d0e8515221f1f5cb8f0d61\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:57 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:57.301 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[86f83898-d5a0-47a0-b8e2-3ae2dc4296c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:57 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:57.302 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18d68586-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:57 np0005539552 kernel: tap18d68586-40: left promiscuous mode
Nov 29 03:41:57 np0005539552 nova_compute[233724]: 2025-11-29 08:41:57.305 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:57 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:57.312 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[976c9104-8c94-4f62-b071-3ba3d439aaae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:57 np0005539552 nova_compute[233724]: 2025-11-29 08:41:57.326 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:57 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:57.334 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[eba7f85c-0797-4f88-bb1c-e19af656862b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:57 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:57.335 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[746a057a-8cb2-48e8-b66b-509f9f1db860]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:57 np0005539552 nova_compute[233724]: 2025-11-29 08:41:57.342 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:57 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:57.360 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[11ca5618-013f-4558-81fc-9b2d4cb00224]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 875784, 'reachable_time': 30049, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314879, 'error': None, 'target': 'ovnmeta-18d68586-48a1-4a55-b48c-8e6a0493b378', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:57 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:57.363 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-18d68586-48a1-4a55-b48c-8e6a0493b378 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:41:57 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:41:57.364 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[f3bded95-bccd-4a17-8f98-85796c2638f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:57 np0005539552 systemd[1]: run-netns-ovnmeta\x2d18d68586\x2d48a1\x2d4a55\x2db48c\x2d8e6a0493b378.mount: Deactivated successfully.
Nov 29 03:41:57 np0005539552 nova_compute[233724]: 2025-11-29 08:41:57.826 233728 INFO nova.virt.libvirt.driver [None req-af389d9c-747d-494d-addf-c152f156e5f2 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Deleting instance files /var/lib/nova/instances/120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b_del#033[00m
Nov 29 03:41:57 np0005539552 nova_compute[233724]: 2025-11-29 08:41:57.827 233728 INFO nova.virt.libvirt.driver [None req-af389d9c-747d-494d-addf-c152f156e5f2 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Deletion of /var/lib/nova/instances/120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b_del complete#033[00m
Nov 29 03:41:57 np0005539552 nova_compute[233724]: 2025-11-29 08:41:57.883 233728 INFO nova.compute.manager [None req-af389d9c-747d-494d-addf-c152f156e5f2 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Took 1.84 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:41:57 np0005539552 nova_compute[233724]: 2025-11-29 08:41:57.884 233728 DEBUG oslo.service.loopingcall [None req-af389d9c-747d-494d-addf-c152f156e5f2 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:41:57 np0005539552 nova_compute[233724]: 2025-11-29 08:41:57.885 233728 DEBUG nova.compute.manager [-] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:41:57 np0005539552 nova_compute[233724]: 2025-11-29 08:41:57.885 233728 DEBUG nova.network.neutron [-] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:41:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:58.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:58 np0005539552 nova_compute[233724]: 2025-11-29 08:41:58.956 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:41:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:59.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:59 np0005539552 nova_compute[233724]: 2025-11-29 08:41:59.486 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:59 np0005539552 nova_compute[233724]: 2025-11-29 08:41:59.795 233728 DEBUG nova.network.neutron [-] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:41:59 np0005539552 nova_compute[233724]: 2025-11-29 08:41:59.820 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:59 np0005539552 nova_compute[233724]: 2025-11-29 08:41:59.823 233728 INFO nova.compute.manager [-] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Took 1.94 seconds to deallocate network for instance.#033[00m
Nov 29 03:41:59 np0005539552 nova_compute[233724]: 2025-11-29 08:41:59.881 233728 DEBUG oslo_concurrency.lockutils [None req-af389d9c-747d-494d-addf-c152f156e5f2 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:59 np0005539552 nova_compute[233724]: 2025-11-29 08:41:59.882 233728 DEBUG oslo_concurrency.lockutils [None req-af389d9c-747d-494d-addf-c152f156e5f2 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:00.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:00 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:42:00 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:42:00 np0005539552 nova_compute[233724]: 2025-11-29 08:42:00.804 233728 DEBUG oslo_concurrency.processutils [None req-af389d9c-747d-494d-addf-c152f156e5f2 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:00 np0005539552 nova_compute[233724]: 2025-11-29 08:42:00.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:00 np0005539552 nova_compute[233724]: 2025-11-29 08:42:00.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:42:00 np0005539552 nova_compute[233724]: 2025-11-29 08:42:00.925 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:42:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:01.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:01 np0005539552 nova_compute[233724]: 2025-11-29 08:42:01.111 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "refresh_cache-120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:42:01 np0005539552 nova_compute[233724]: 2025-11-29 08:42:01.111 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquired lock "refresh_cache-120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:42:01 np0005539552 nova_compute[233724]: 2025-11-29 08:42:01.112 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:42:01 np0005539552 nova_compute[233724]: 2025-11-29 08:42:01.112 233728 DEBUG nova.objects.instance [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:42:01 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:42:01 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/412094224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:42:01 np0005539552 nova_compute[233724]: 2025-11-29 08:42:01.307 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:01 np0005539552 nova_compute[233724]: 2025-11-29 08:42:01.343 233728 DEBUG oslo_concurrency.processutils [None req-af389d9c-747d-494d-addf-c152f156e5f2 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:01 np0005539552 nova_compute[233724]: 2025-11-29 08:42:01.346 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:42:01 np0005539552 nova_compute[233724]: 2025-11-29 08:42:01.355 233728 DEBUG nova.compute.provider_tree [None req-af389d9c-747d-494d-addf-c152f156e5f2 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:42:01 np0005539552 nova_compute[233724]: 2025-11-29 08:42:01.374 233728 DEBUG nova.scheduler.client.report [None req-af389d9c-747d-494d-addf-c152f156e5f2 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:42:01 np0005539552 nova_compute[233724]: 2025-11-29 08:42:01.409 233728 DEBUG oslo_concurrency.lockutils [None req-af389d9c-747d-494d-addf-c152f156e5f2 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.528s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:01 np0005539552 nova_compute[233724]: 2025-11-29 08:42:01.528 233728 INFO nova.scheduler.client.report [None req-af389d9c-747d-494d-addf-c152f156e5f2 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Deleted allocations for instance 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b#033[00m
Nov 29 03:42:01 np0005539552 nova_compute[233724]: 2025-11-29 08:42:01.604 233728 DEBUG oslo_concurrency.lockutils [None req-af389d9c-747d-494d-addf-c152f156e5f2 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.571s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:01 np0005539552 nova_compute[233724]: 2025-11-29 08:42:01.810 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:42:01 np0005539552 nova_compute[233724]: 2025-11-29 08:42:01.837 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Releasing lock "refresh_cache-120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:42:01 np0005539552 nova_compute[233724]: 2025-11-29 08:42:01.837 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:42:02 np0005539552 nova_compute[233724]: 2025-11-29 08:42:02.343 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:02.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:03.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:04.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:05.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:06 np0005539552 nova_compute[233724]: 2025-11-29 08:42:06.310 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:06.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:07.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:07 np0005539552 nova_compute[233724]: 2025-11-29 08:42:07.346 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:08.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:09 np0005539552 podman[314962]: 2025-11-29 08:42:09.008581593 +0000 UTC m=+0.079664555 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:42:09 np0005539552 podman[314964]: 2025-11-29 08:42:09.018322896 +0000 UTC m=+0.084139146 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 03:42:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:09.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:09 np0005539552 podman[314966]: 2025-11-29 08:42:09.055507447 +0000 UTC m=+0.120922906 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 03:42:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:10.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:11.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:11 np0005539552 nova_compute[233724]: 2025-11-29 08:42:11.280 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405716.2788618, 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:42:11 np0005539552 nova_compute[233724]: 2025-11-29 08:42:11.280 233728 INFO nova.compute.manager [-] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:42:11 np0005539552 nova_compute[233724]: 2025-11-29 08:42:11.308 233728 DEBUG nova.compute.manager [None req-739f1720-07db-49ee-bcb7-69ec294baf98 - - - - - -] [instance: 120c7ff2-c12c-42e3-b3c5-c2cb0b648e8b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:42:11 np0005539552 nova_compute[233724]: 2025-11-29 08:42:11.312 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:12 np0005539552 nova_compute[233724]: 2025-11-29 08:42:12.349 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:12.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:13.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:13 np0005539552 nova_compute[233724]: 2025-11-29 08:42:13.774 233728 DEBUG oslo_concurrency.lockutils [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Acquiring lock "d84a455a-c2a7-42a9-a238-d9d04a06df6a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:13 np0005539552 nova_compute[233724]: 2025-11-29 08:42:13.774 233728 DEBUG oslo_concurrency.lockutils [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lock "d84a455a-c2a7-42a9-a238-d9d04a06df6a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:13 np0005539552 nova_compute[233724]: 2025-11-29 08:42:13.791 233728 DEBUG nova.compute.manager [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:42:13 np0005539552 nova_compute[233724]: 2025-11-29 08:42:13.900 233728 DEBUG oslo_concurrency.lockutils [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:13 np0005539552 nova_compute[233724]: 2025-11-29 08:42:13.900 233728 DEBUG oslo_concurrency.lockutils [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:13 np0005539552 nova_compute[233724]: 2025-11-29 08:42:13.914 233728 DEBUG nova.virt.hardware [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:42:13 np0005539552 nova_compute[233724]: 2025-11-29 08:42:13.915 233728 INFO nova.compute.claims [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:42:14 np0005539552 nova_compute[233724]: 2025-11-29 08:42:14.007 233728 DEBUG oslo_concurrency.processutils [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:14 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:42:14 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3682938772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:42:14 np0005539552 nova_compute[233724]: 2025-11-29 08:42:14.500 233728 DEBUG oslo_concurrency.processutils [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:14 np0005539552 nova_compute[233724]: 2025-11-29 08:42:14.509 233728 DEBUG nova.compute.provider_tree [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:42:14 np0005539552 nova_compute[233724]: 2025-11-29 08:42:14.525 233728 DEBUG nova.scheduler.client.report [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:42:14 np0005539552 nova_compute[233724]: 2025-11-29 08:42:14.552 233728 DEBUG oslo_concurrency.lockutils [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:14 np0005539552 nova_compute[233724]: 2025-11-29 08:42:14.553 233728 DEBUG nova.compute.manager [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:42:14 np0005539552 nova_compute[233724]: 2025-11-29 08:42:14.609 233728 DEBUG nova.compute.manager [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:42:14 np0005539552 nova_compute[233724]: 2025-11-29 08:42:14.610 233728 DEBUG nova.network.neutron [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:42:14 np0005539552 nova_compute[233724]: 2025-11-29 08:42:14.644 233728 INFO nova.virt.libvirt.driver [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:42:14 np0005539552 nova_compute[233724]: 2025-11-29 08:42:14.667 233728 DEBUG nova.compute.manager [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:42:14 np0005539552 nova_compute[233724]: 2025-11-29 08:42:14.767 233728 DEBUG nova.compute.manager [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:42:14 np0005539552 nova_compute[233724]: 2025-11-29 08:42:14.769 233728 DEBUG nova.virt.libvirt.driver [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:42:14 np0005539552 nova_compute[233724]: 2025-11-29 08:42:14.769 233728 INFO nova.virt.libvirt.driver [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Creating image(s)#033[00m
Nov 29 03:42:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:14.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:14 np0005539552 nova_compute[233724]: 2025-11-29 08:42:14.816 233728 DEBUG nova.storage.rbd_utils [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] rbd image d84a455a-c2a7-42a9-a238-d9d04a06df6a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:42:14 np0005539552 nova_compute[233724]: 2025-11-29 08:42:14.851 233728 DEBUG nova.storage.rbd_utils [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] rbd image d84a455a-c2a7-42a9-a238-d9d04a06df6a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:42:14 np0005539552 nova_compute[233724]: 2025-11-29 08:42:14.881 233728 DEBUG nova.storage.rbd_utils [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] rbd image d84a455a-c2a7-42a9-a238-d9d04a06df6a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:42:14 np0005539552 nova_compute[233724]: 2025-11-29 08:42:14.884 233728 DEBUG oslo_concurrency.processutils [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:14 np0005539552 nova_compute[233724]: 2025-11-29 08:42:14.981 233728 DEBUG oslo_concurrency.processutils [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:14 np0005539552 nova_compute[233724]: 2025-11-29 08:42:14.982 233728 DEBUG oslo_concurrency.lockutils [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:14 np0005539552 nova_compute[233724]: 2025-11-29 08:42:14.982 233728 DEBUG oslo_concurrency.lockutils [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:14 np0005539552 nova_compute[233724]: 2025-11-29 08:42:14.983 233728 DEBUG oslo_concurrency.lockutils [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:15 np0005539552 nova_compute[233724]: 2025-11-29 08:42:15.010 233728 DEBUG nova.storage.rbd_utils [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] rbd image d84a455a-c2a7-42a9-a238-d9d04a06df6a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:42:15 np0005539552 nova_compute[233724]: 2025-11-29 08:42:15.013 233728 DEBUG oslo_concurrency.processutils [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 d84a455a-c2a7-42a9-a238-d9d04a06df6a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:15.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:15 np0005539552 nova_compute[233724]: 2025-11-29 08:42:15.364 233728 DEBUG oslo_concurrency.processutils [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 d84a455a-c2a7-42a9-a238-d9d04a06df6a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.351s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:15 np0005539552 nova_compute[233724]: 2025-11-29 08:42:15.426 233728 DEBUG nova.storage.rbd_utils [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] resizing rbd image d84a455a-c2a7-42a9-a238-d9d04a06df6a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:42:15 np0005539552 nova_compute[233724]: 2025-11-29 08:42:15.538 233728 DEBUG nova.objects.instance [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lazy-loading 'migration_context' on Instance uuid d84a455a-c2a7-42a9-a238-d9d04a06df6a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:42:15 np0005539552 nova_compute[233724]: 2025-11-29 08:42:15.554 233728 DEBUG nova.virt.libvirt.driver [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:42:15 np0005539552 nova_compute[233724]: 2025-11-29 08:42:15.555 233728 DEBUG nova.virt.libvirt.driver [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Ensure instance console log exists: /var/lib/nova/instances/d84a455a-c2a7-42a9-a238-d9d04a06df6a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:42:15 np0005539552 nova_compute[233724]: 2025-11-29 08:42:15.555 233728 DEBUG oslo_concurrency.lockutils [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:15 np0005539552 nova_compute[233724]: 2025-11-29 08:42:15.555 233728 DEBUG oslo_concurrency.lockutils [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:15 np0005539552 nova_compute[233724]: 2025-11-29 08:42:15.556 233728 DEBUG oslo_concurrency.lockutils [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:15 np0005539552 nova_compute[233724]: 2025-11-29 08:42:15.566 233728 DEBUG nova.network.neutron [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Successfully created port: 9630069d-a220-4b0b-b510-0bfaf2de646d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:42:16 np0005539552 nova_compute[233724]: 2025-11-29 08:42:16.311 233728 DEBUG nova.network.neutron [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Successfully updated port: 9630069d-a220-4b0b-b510-0bfaf2de646d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:42:16 np0005539552 nova_compute[233724]: 2025-11-29 08:42:16.315 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:16 np0005539552 nova_compute[233724]: 2025-11-29 08:42:16.338 233728 DEBUG oslo_concurrency.lockutils [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Acquiring lock "refresh_cache-d84a455a-c2a7-42a9-a238-d9d04a06df6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:42:16 np0005539552 nova_compute[233724]: 2025-11-29 08:42:16.338 233728 DEBUG oslo_concurrency.lockutils [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Acquired lock "refresh_cache-d84a455a-c2a7-42a9-a238-d9d04a06df6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:42:16 np0005539552 nova_compute[233724]: 2025-11-29 08:42:16.338 233728 DEBUG nova.network.neutron [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:42:16 np0005539552 nova_compute[233724]: 2025-11-29 08:42:16.434 233728 DEBUG nova.compute.manager [req-d6f7b927-7e24-42ac-9d98-2045c184055c req-201d4acf-1f5a-4670-ba96-7fb93bb85238 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Received event network-changed-9630069d-a220-4b0b-b510-0bfaf2de646d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:42:16 np0005539552 nova_compute[233724]: 2025-11-29 08:42:16.434 233728 DEBUG nova.compute.manager [req-d6f7b927-7e24-42ac-9d98-2045c184055c req-201d4acf-1f5a-4670-ba96-7fb93bb85238 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Refreshing instance network info cache due to event network-changed-9630069d-a220-4b0b-b510-0bfaf2de646d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:42:16 np0005539552 nova_compute[233724]: 2025-11-29 08:42:16.435 233728 DEBUG oslo_concurrency.lockutils [req-d6f7b927-7e24-42ac-9d98-2045c184055c req-201d4acf-1f5a-4670-ba96-7fb93bb85238 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-d84a455a-c2a7-42a9-a238-d9d04a06df6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:42:16 np0005539552 nova_compute[233724]: 2025-11-29 08:42:16.545 233728 DEBUG nova.network.neutron [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:42:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:16.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:17.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:17 np0005539552 nova_compute[233724]: 2025-11-29 08:42:17.353 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:17 np0005539552 nova_compute[233724]: 2025-11-29 08:42:17.435 233728 DEBUG nova.network.neutron [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Updating instance_info_cache with network_info: [{"id": "9630069d-a220-4b0b-b510-0bfaf2de646d", "address": "fa:16:3e:88:4b:27", "network": {"id": "379eb72a-ec90-4461-897a-adab6a88928f", "bridge": "br-int", "label": "tempest-TestServerMultinode-1337172465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df59ee3c04c04efabfae553312366b99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9630069d-a2", "ovs_interfaceid": "9630069d-a220-4b0b-b510-0bfaf2de646d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:42:17 np0005539552 nova_compute[233724]: 2025-11-29 08:42:17.451 233728 DEBUG oslo_concurrency.lockutils [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Releasing lock "refresh_cache-d84a455a-c2a7-42a9-a238-d9d04a06df6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:42:17 np0005539552 nova_compute[233724]: 2025-11-29 08:42:17.451 233728 DEBUG nova.compute.manager [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Instance network_info: |[{"id": "9630069d-a220-4b0b-b510-0bfaf2de646d", "address": "fa:16:3e:88:4b:27", "network": {"id": "379eb72a-ec90-4461-897a-adab6a88928f", "bridge": "br-int", "label": "tempest-TestServerMultinode-1337172465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df59ee3c04c04efabfae553312366b99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9630069d-a2", "ovs_interfaceid": "9630069d-a220-4b0b-b510-0bfaf2de646d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:42:17 np0005539552 nova_compute[233724]: 2025-11-29 08:42:17.451 233728 DEBUG oslo_concurrency.lockutils [req-d6f7b927-7e24-42ac-9d98-2045c184055c req-201d4acf-1f5a-4670-ba96-7fb93bb85238 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-d84a455a-c2a7-42a9-a238-d9d04a06df6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:42:17 np0005539552 nova_compute[233724]: 2025-11-29 08:42:17.451 233728 DEBUG nova.network.neutron [req-d6f7b927-7e24-42ac-9d98-2045c184055c req-201d4acf-1f5a-4670-ba96-7fb93bb85238 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Refreshing network info cache for port 9630069d-a220-4b0b-b510-0bfaf2de646d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:42:17 np0005539552 nova_compute[233724]: 2025-11-29 08:42:17.453 233728 DEBUG nova.virt.libvirt.driver [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Start _get_guest_xml network_info=[{"id": "9630069d-a220-4b0b-b510-0bfaf2de646d", "address": "fa:16:3e:88:4b:27", "network": {"id": "379eb72a-ec90-4461-897a-adab6a88928f", "bridge": "br-int", "label": "tempest-TestServerMultinode-1337172465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df59ee3c04c04efabfae553312366b99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9630069d-a2", "ovs_interfaceid": "9630069d-a220-4b0b-b510-0bfaf2de646d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:42:17 np0005539552 nova_compute[233724]: 2025-11-29 08:42:17.459 233728 WARNING nova.virt.libvirt.driver [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:42:17 np0005539552 nova_compute[233724]: 2025-11-29 08:42:17.464 233728 DEBUG nova.virt.libvirt.host [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:42:17 np0005539552 nova_compute[233724]: 2025-11-29 08:42:17.465 233728 DEBUG nova.virt.libvirt.host [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:42:17 np0005539552 nova_compute[233724]: 2025-11-29 08:42:17.469 233728 DEBUG nova.virt.libvirt.host [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:42:17 np0005539552 nova_compute[233724]: 2025-11-29 08:42:17.469 233728 DEBUG nova.virt.libvirt.host [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:42:17 np0005539552 nova_compute[233724]: 2025-11-29 08:42:17.470 233728 DEBUG nova.virt.libvirt.driver [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:42:17 np0005539552 nova_compute[233724]: 2025-11-29 08:42:17.470 233728 DEBUG nova.virt.hardware [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:42:17 np0005539552 nova_compute[233724]: 2025-11-29 08:42:17.471 233728 DEBUG nova.virt.hardware [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:42:17 np0005539552 nova_compute[233724]: 2025-11-29 08:42:17.471 233728 DEBUG nova.virt.hardware [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:42:17 np0005539552 nova_compute[233724]: 2025-11-29 08:42:17.471 233728 DEBUG nova.virt.hardware [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:42:17 np0005539552 nova_compute[233724]: 2025-11-29 08:42:17.471 233728 DEBUG nova.virt.hardware [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:42:17 np0005539552 nova_compute[233724]: 2025-11-29 08:42:17.471 233728 DEBUG nova.virt.hardware [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:42:17 np0005539552 nova_compute[233724]: 2025-11-29 08:42:17.472 233728 DEBUG nova.virt.hardware [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:42:17 np0005539552 nova_compute[233724]: 2025-11-29 08:42:17.472 233728 DEBUG nova.virt.hardware [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:42:17 np0005539552 nova_compute[233724]: 2025-11-29 08:42:17.472 233728 DEBUG nova.virt.hardware [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:42:17 np0005539552 nova_compute[233724]: 2025-11-29 08:42:17.472 233728 DEBUG nova.virt.hardware [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:42:17 np0005539552 nova_compute[233724]: 2025-11-29 08:42:17.472 233728 DEBUG nova.virt.hardware [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:42:17 np0005539552 nova_compute[233724]: 2025-11-29 08:42:17.475 233728 DEBUG oslo_concurrency.processutils [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:17 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:42:17 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/539662161' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:42:17 np0005539552 nova_compute[233724]: 2025-11-29 08:42:17.970 233728 DEBUG oslo_concurrency.processutils [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:18 np0005539552 nova_compute[233724]: 2025-11-29 08:42:18.015 233728 DEBUG nova.storage.rbd_utils [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] rbd image d84a455a-c2a7-42a9-a238-d9d04a06df6a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:42:18 np0005539552 nova_compute[233724]: 2025-11-29 08:42:18.021 233728 DEBUG oslo_concurrency.processutils [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:42:18 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2124172508' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:42:18 np0005539552 nova_compute[233724]: 2025-11-29 08:42:18.502 233728 DEBUG oslo_concurrency.processutils [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:18 np0005539552 nova_compute[233724]: 2025-11-29 08:42:18.505 233728 DEBUG nova.virt.libvirt.vif [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:42:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerMultinode-server-736945795',display_name='tempest-TestServerMultinode-server-736945795',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testservermultinode-server-736945795',id=195,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6519f321a4954567ab99a11cc07cc5ac',ramdisk_id='',reservation_id='r-mtmw80o0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerMultinode-1895688433',owner_user_name='tempest-TestServerMultinode-1895688433-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:42:14Z,user_data=None,user_id='dda7b3867e5c45a7bb78d049103bc095',uuid=d84a455a-c2a7-42a9-a238-d9d04a06df6a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9630069d-a220-4b0b-b510-0bfaf2de646d", "address": "fa:16:3e:88:4b:27", "network": {"id": "379eb72a-ec90-4461-897a-adab6a88928f", "bridge": "br-int", "label": "tempest-TestServerMultinode-1337172465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df59ee3c04c04efabfae553312366b99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9630069d-a2", "ovs_interfaceid": "9630069d-a220-4b0b-b510-0bfaf2de646d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:42:18 np0005539552 nova_compute[233724]: 2025-11-29 08:42:18.506 233728 DEBUG nova.network.os_vif_util [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Converting VIF {"id": "9630069d-a220-4b0b-b510-0bfaf2de646d", "address": "fa:16:3e:88:4b:27", "network": {"id": "379eb72a-ec90-4461-897a-adab6a88928f", "bridge": "br-int", "label": "tempest-TestServerMultinode-1337172465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df59ee3c04c04efabfae553312366b99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9630069d-a2", "ovs_interfaceid": "9630069d-a220-4b0b-b510-0bfaf2de646d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:42:18 np0005539552 nova_compute[233724]: 2025-11-29 08:42:18.507 233728 DEBUG nova.network.os_vif_util [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:4b:27,bridge_name='br-int',has_traffic_filtering=True,id=9630069d-a220-4b0b-b510-0bfaf2de646d,network=Network(379eb72a-ec90-4461-897a-adab6a88928f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9630069d-a2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:42:18 np0005539552 nova_compute[233724]: 2025-11-29 08:42:18.509 233728 DEBUG nova.objects.instance [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lazy-loading 'pci_devices' on Instance uuid d84a455a-c2a7-42a9-a238-d9d04a06df6a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:42:18 np0005539552 nova_compute[233724]: 2025-11-29 08:42:18.548 233728 DEBUG nova.virt.libvirt.driver [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:42:18 np0005539552 nova_compute[233724]:  <uuid>d84a455a-c2a7-42a9-a238-d9d04a06df6a</uuid>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:  <name>instance-000000c3</name>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:42:18 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:      <nova:name>tempest-TestServerMultinode-server-736945795</nova:name>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:42:17</nova:creationTime>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:42:18 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:        <nova:user uuid="dda7b3867e5c45a7bb78d049103bc095">tempest-TestServerMultinode-1895688433-project-admin</nova:user>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:        <nova:project uuid="6519f321a4954567ab99a11cc07cc5ac">tempest-TestServerMultinode-1895688433</nova:project>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:        <nova:port uuid="9630069d-a220-4b0b-b510-0bfaf2de646d">
Nov 29 03:42:18 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:      <entry name="serial">d84a455a-c2a7-42a9-a238-d9d04a06df6a</entry>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:      <entry name="uuid">d84a455a-c2a7-42a9-a238-d9d04a06df6a</entry>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:42:18 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/d84a455a-c2a7-42a9-a238-d9d04a06df6a_disk">
Nov 29 03:42:18 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:42:18 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:42:18 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/d84a455a-c2a7-42a9-a238-d9d04a06df6a_disk.config">
Nov 29 03:42:18 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:42:18 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:42:18 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:88:4b:27"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:      <target dev="tap9630069d-a2"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:42:18 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/d84a455a-c2a7-42a9-a238-d9d04a06df6a/console.log" append="off"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:42:18 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:42:18 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:42:18 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:42:18 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:42:18 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:42:18 np0005539552 nova_compute[233724]: 2025-11-29 08:42:18.552 233728 DEBUG nova.compute.manager [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Preparing to wait for external event network-vif-plugged-9630069d-a220-4b0b-b510-0bfaf2de646d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:42:18 np0005539552 nova_compute[233724]: 2025-11-29 08:42:18.552 233728 DEBUG oslo_concurrency.lockutils [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Acquiring lock "d84a455a-c2a7-42a9-a238-d9d04a06df6a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:18 np0005539552 nova_compute[233724]: 2025-11-29 08:42:18.553 233728 DEBUG oslo_concurrency.lockutils [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lock "d84a455a-c2a7-42a9-a238-d9d04a06df6a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:18 np0005539552 nova_compute[233724]: 2025-11-29 08:42:18.554 233728 DEBUG oslo_concurrency.lockutils [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lock "d84a455a-c2a7-42a9-a238-d9d04a06df6a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:18 np0005539552 nova_compute[233724]: 2025-11-29 08:42:18.556 233728 DEBUG nova.virt.libvirt.vif [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:42:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerMultinode-server-736945795',display_name='tempest-TestServerMultinode-server-736945795',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testservermultinode-server-736945795',id=195,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6519f321a4954567ab99a11cc07cc5ac',ramdisk_id='',reservation_id='r-mtmw80o0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerMultinode-1895688433',owner_user_name='tempest-TestServerMultinode-1895688433-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:42:14Z,user_data=None,user_id='dda7b3867e5c45a7bb78d049103bc095',uuid=d84a455a-c2a7-42a9-a238-d9d04a06df6a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9630069d-a220-4b0b-b510-0bfaf2de646d", "address": "fa:16:3e:88:4b:27", "network": {"id": "379eb72a-ec90-4461-897a-adab6a88928f", "bridge": "br-int", "label": "tempest-TestServerMultinode-1337172465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df59ee3c04c04efabfae553312366b99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9630069d-a2", "ovs_interfaceid": "9630069d-a220-4b0b-b510-0bfaf2de646d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:42:18 np0005539552 nova_compute[233724]: 2025-11-29 08:42:18.556 233728 DEBUG nova.network.os_vif_util [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Converting VIF {"id": "9630069d-a220-4b0b-b510-0bfaf2de646d", "address": "fa:16:3e:88:4b:27", "network": {"id": "379eb72a-ec90-4461-897a-adab6a88928f", "bridge": "br-int", "label": "tempest-TestServerMultinode-1337172465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df59ee3c04c04efabfae553312366b99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9630069d-a2", "ovs_interfaceid": "9630069d-a220-4b0b-b510-0bfaf2de646d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:42:18 np0005539552 nova_compute[233724]: 2025-11-29 08:42:18.558 233728 DEBUG nova.network.os_vif_util [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:4b:27,bridge_name='br-int',has_traffic_filtering=True,id=9630069d-a220-4b0b-b510-0bfaf2de646d,network=Network(379eb72a-ec90-4461-897a-adab6a88928f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9630069d-a2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:42:18 np0005539552 nova_compute[233724]: 2025-11-29 08:42:18.559 233728 DEBUG os_vif [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:4b:27,bridge_name='br-int',has_traffic_filtering=True,id=9630069d-a220-4b0b-b510-0bfaf2de646d,network=Network(379eb72a-ec90-4461-897a-adab6a88928f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9630069d-a2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:42:18 np0005539552 nova_compute[233724]: 2025-11-29 08:42:18.560 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:18 np0005539552 nova_compute[233724]: 2025-11-29 08:42:18.561 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:18 np0005539552 nova_compute[233724]: 2025-11-29 08:42:18.563 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:42:18 np0005539552 nova_compute[233724]: 2025-11-29 08:42:18.569 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:18 np0005539552 nova_compute[233724]: 2025-11-29 08:42:18.569 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9630069d-a2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:18 np0005539552 nova_compute[233724]: 2025-11-29 08:42:18.570 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9630069d-a2, col_values=(('external_ids', {'iface-id': '9630069d-a220-4b0b-b510-0bfaf2de646d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:88:4b:27', 'vm-uuid': 'd84a455a-c2a7-42a9-a238-d9d04a06df6a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:18 np0005539552 nova_compute[233724]: 2025-11-29 08:42:18.572 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:18 np0005539552 NetworkManager[48926]: <info>  [1764405738.5742] manager: (tap9630069d-a2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/391)
Nov 29 03:42:18 np0005539552 nova_compute[233724]: 2025-11-29 08:42:18.576 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:42:18 np0005539552 nova_compute[233724]: 2025-11-29 08:42:18.582 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:18 np0005539552 nova_compute[233724]: 2025-11-29 08:42:18.584 233728 INFO os_vif [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:4b:27,bridge_name='br-int',has_traffic_filtering=True,id=9630069d-a220-4b0b-b510-0bfaf2de646d,network=Network(379eb72a-ec90-4461-897a-adab6a88928f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9630069d-a2')#033[00m
Nov 29 03:42:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:18 np0005539552 nova_compute[233724]: 2025-11-29 08:42:18.717 233728 DEBUG nova.network.neutron [req-d6f7b927-7e24-42ac-9d98-2045c184055c req-201d4acf-1f5a-4670-ba96-7fb93bb85238 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Updated VIF entry in instance network info cache for port 9630069d-a220-4b0b-b510-0bfaf2de646d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:42:18 np0005539552 nova_compute[233724]: 2025-11-29 08:42:18.718 233728 DEBUG nova.network.neutron [req-d6f7b927-7e24-42ac-9d98-2045c184055c req-201d4acf-1f5a-4670-ba96-7fb93bb85238 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Updating instance_info_cache with network_info: [{"id": "9630069d-a220-4b0b-b510-0bfaf2de646d", "address": "fa:16:3e:88:4b:27", "network": {"id": "379eb72a-ec90-4461-897a-adab6a88928f", "bridge": "br-int", "label": "tempest-TestServerMultinode-1337172465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df59ee3c04c04efabfae553312366b99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9630069d-a2", "ovs_interfaceid": "9630069d-a220-4b0b-b510-0bfaf2de646d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:42:18 np0005539552 nova_compute[233724]: 2025-11-29 08:42:18.754 233728 DEBUG nova.virt.libvirt.driver [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:42:18 np0005539552 nova_compute[233724]: 2025-11-29 08:42:18.755 233728 DEBUG nova.virt.libvirt.driver [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:42:18 np0005539552 nova_compute[233724]: 2025-11-29 08:42:18.756 233728 DEBUG nova.virt.libvirt.driver [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] No VIF found with MAC fa:16:3e:88:4b:27, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:42:18 np0005539552 nova_compute[233724]: 2025-11-29 08:42:18.757 233728 INFO nova.virt.libvirt.driver [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Using config drive#033[00m
Nov 29 03:42:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:18.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:18 np0005539552 nova_compute[233724]: 2025-11-29 08:42:18.801 233728 DEBUG nova.storage.rbd_utils [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] rbd image d84a455a-c2a7-42a9-a238-d9d04a06df6a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:42:18 np0005539552 nova_compute[233724]: 2025-11-29 08:42:18.811 233728 DEBUG oslo_concurrency.lockutils [req-d6f7b927-7e24-42ac-9d98-2045c184055c req-201d4acf-1f5a-4670-ba96-7fb93bb85238 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-d84a455a-c2a7-42a9-a238-d9d04a06df6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:42:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:19.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:19 np0005539552 nova_compute[233724]: 2025-11-29 08:42:19.143 233728 INFO nova.virt.libvirt.driver [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Creating config drive at /var/lib/nova/instances/d84a455a-c2a7-42a9-a238-d9d04a06df6a/disk.config#033[00m
Nov 29 03:42:19 np0005539552 nova_compute[233724]: 2025-11-29 08:42:19.154 233728 DEBUG oslo_concurrency.processutils [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d84a455a-c2a7-42a9-a238-d9d04a06df6a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz5reki7d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:19 np0005539552 nova_compute[233724]: 2025-11-29 08:42:19.318 233728 DEBUG oslo_concurrency.processutils [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d84a455a-c2a7-42a9-a238-d9d04a06df6a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz5reki7d" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:19 np0005539552 nova_compute[233724]: 2025-11-29 08:42:19.359 233728 DEBUG nova.storage.rbd_utils [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] rbd image d84a455a-c2a7-42a9-a238-d9d04a06df6a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:42:19 np0005539552 nova_compute[233724]: 2025-11-29 08:42:19.365 233728 DEBUG oslo_concurrency.processutils [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d84a455a-c2a7-42a9-a238-d9d04a06df6a/disk.config d84a455a-c2a7-42a9-a238-d9d04a06df6a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:19 np0005539552 nova_compute[233724]: 2025-11-29 08:42:19.702 233728 DEBUG oslo_concurrency.processutils [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d84a455a-c2a7-42a9-a238-d9d04a06df6a/disk.config d84a455a-c2a7-42a9-a238-d9d04a06df6a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:19 np0005539552 nova_compute[233724]: 2025-11-29 08:42:19.703 233728 INFO nova.virt.libvirt.driver [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Deleting local config drive /var/lib/nova/instances/d84a455a-c2a7-42a9-a238-d9d04a06df6a/disk.config because it was imported into RBD.#033[00m
Nov 29 03:42:19 np0005539552 kernel: tap9630069d-a2: entered promiscuous mode
Nov 29 03:42:19 np0005539552 ovn_controller[133798]: 2025-11-29T08:42:19Z|00885|binding|INFO|Claiming lport 9630069d-a220-4b0b-b510-0bfaf2de646d for this chassis.
Nov 29 03:42:19 np0005539552 ovn_controller[133798]: 2025-11-29T08:42:19Z|00886|binding|INFO|9630069d-a220-4b0b-b510-0bfaf2de646d: Claiming fa:16:3e:88:4b:27 10.100.0.8
Nov 29 03:42:19 np0005539552 nova_compute[233724]: 2025-11-29 08:42:19.771 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:19 np0005539552 NetworkManager[48926]: <info>  [1764405739.7728] manager: (tap9630069d-a2): new Tun device (/org/freedesktop/NetworkManager/Devices/392)
Nov 29 03:42:19 np0005539552 nova_compute[233724]: 2025-11-29 08:42:19.774 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:42:19.794 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:4b:27 10.100.0.8'], port_security=['fa:16:3e:88:4b:27 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd84a455a-c2a7-42a9-a238-d9d04a06df6a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-379eb72a-ec90-4461-897a-adab6a88928f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6519f321a4954567ab99a11cc07cc5ac', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f7ac3748-9331-4cc2-bcd0-273842e7e38b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02c1f455-8d7e-4f22-83b6-df0a05597294, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=9630069d-a220-4b0b-b510-0bfaf2de646d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:42:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:42:19.795 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 9630069d-a220-4b0b-b510-0bfaf2de646d in datapath 379eb72a-ec90-4461-897a-adab6a88928f bound to our chassis#033[00m
Nov 29 03:42:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:42:19.797 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 379eb72a-ec90-4461-897a-adab6a88928f#033[00m
Nov 29 03:42:19 np0005539552 systemd-machined[196379]: New machine qemu-90-instance-000000c3.
Nov 29 03:42:19 np0005539552 systemd-udevd[315407]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:42:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:42:19.813 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[8fd222db-df20-4b44-8161-29579924ca60]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:42:19.814 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap379eb72a-e1 in ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:42:19 np0005539552 NetworkManager[48926]: <info>  [1764405739.8160] device (tap9630069d-a2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:42:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:42:19.816 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap379eb72a-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:42:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:42:19.816 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[5095b434-abb0-49ba-813a-2f5c8b7c3386]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:19 np0005539552 NetworkManager[48926]: <info>  [1764405739.8172] device (tap9630069d-a2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:42:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:42:19.817 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d93c476c-d2a8-47de-aecb-b8356f693c51]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:42:19.830 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[5539aa11-8119-4fd8-be9c-c4bc2f0ca0dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:19 np0005539552 nova_compute[233724]: 2025-11-29 08:42:19.839 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:19 np0005539552 systemd[1]: Started Virtual Machine qemu-90-instance-000000c3.
Nov 29 03:42:19 np0005539552 nova_compute[233724]: 2025-11-29 08:42:19.845 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:42:19.846 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0f4357a5-947f-4b10-a794-7d7e924532f0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:19 np0005539552 ovn_controller[133798]: 2025-11-29T08:42:19Z|00887|binding|INFO|Setting lport 9630069d-a220-4b0b-b510-0bfaf2de646d ovn-installed in OVS
Nov 29 03:42:19 np0005539552 ovn_controller[133798]: 2025-11-29T08:42:19Z|00888|binding|INFO|Setting lport 9630069d-a220-4b0b-b510-0bfaf2de646d up in Southbound
Nov 29 03:42:19 np0005539552 nova_compute[233724]: 2025-11-29 08:42:19.849 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:42:19.878 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[0222aeca-dfba-4853-9db9-bd04c23897ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:42:19.883 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[387a7174-2b24-4d26-babe-ec1b8b428dab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:19 np0005539552 NetworkManager[48926]: <info>  [1764405739.8843] manager: (tap379eb72a-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/393)
Nov 29 03:42:19 np0005539552 systemd-udevd[315410]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:42:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:42:19.918 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[a2be1566-a818-4966-b6bd-ad9905f6239f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:42:19.922 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[b42aa074-51f0-4ab3-98b3-cb9b35c2be8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:19 np0005539552 NetworkManager[48926]: <info>  [1764405739.9493] device (tap379eb72a-e0): carrier: link connected
Nov 29 03:42:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:42:19.955 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[ed5b9376-d63b-4a23-8439-bdefcdd71d96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:42:19.977 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6c5ff218-7a4a-4c9a-a7f5-bc4ecafb0c28]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap379eb72a-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:56:58:5f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 261], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 878563, 'reachable_time': 17325, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315440, 'error': None, 'target': 'ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:19 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:42:19.997 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e6e6b7fd-b375-46bd-8a16-ab44a1cdf07c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe56:585f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 878563, 'tstamp': 878563}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315441, 'error': None, 'target': 'ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:42:20.017 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[03824c0b-5844-45d4-a74b-adf530df06f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap379eb72a-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:56:58:5f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 261], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 878563, 'reachable_time': 17325, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 315442, 'error': None, 'target': 'ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:42:20.053 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a5f3a95e-a6e8-4903-a771-a4f72fcf8a79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:42:20.120 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f1404bf9-dd81-4e12-90a9-9ddd4d12862b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:42:20.122 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap379eb72a-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:42:20.122 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:42:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:42:20.123 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap379eb72a-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:20 np0005539552 NetworkManager[48926]: <info>  [1764405740.1261] manager: (tap379eb72a-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/394)
Nov 29 03:42:20 np0005539552 kernel: tap379eb72a-e0: entered promiscuous mode
Nov 29 03:42:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:42:20.128 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap379eb72a-e0, col_values=(('external_ids', {'iface-id': '788af420-8eef-4a56-95d5-80ebf4f9f71c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:20 np0005539552 nova_compute[233724]: 2025-11-29 08:42:20.125 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:20 np0005539552 ovn_controller[133798]: 2025-11-29T08:42:20Z|00889|binding|INFO|Releasing lport 788af420-8eef-4a56-95d5-80ebf4f9f71c from this chassis (sb_readonly=0)
Nov 29 03:42:20 np0005539552 nova_compute[233724]: 2025-11-29 08:42:20.129 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:20 np0005539552 nova_compute[233724]: 2025-11-29 08:42:20.147 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:42:20.148 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/379eb72a-ec90-4461-897a-adab6a88928f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/379eb72a-ec90-4461-897a-adab6a88928f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:42:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:42:20.149 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0ed62fc1-e4b1-4617-aacd-cc8ab6529eed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:42:20.150 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:42:20 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:42:20 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:42:20 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-379eb72a-ec90-4461-897a-adab6a88928f
Nov 29 03:42:20 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:42:20 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:42:20 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:42:20 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/379eb72a-ec90-4461-897a-adab6a88928f.pid.haproxy
Nov 29 03:42:20 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:42:20 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:42:20 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:42:20 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:42:20 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:42:20 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:42:20 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:42:20 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:42:20 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:42:20 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:42:20 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:42:20 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:42:20 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:42:20 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:42:20 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:42:20 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:42:20 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:42:20 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:42:20 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:42:20 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:42:20 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 379eb72a-ec90-4461-897a-adab6a88928f
Nov 29 03:42:20 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:42:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:42:20.151 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f', 'env', 'PROCESS_TAG=haproxy-379eb72a-ec90-4461-897a-adab6a88928f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/379eb72a-ec90-4461-897a-adab6a88928f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:42:20 np0005539552 nova_compute[233724]: 2025-11-29 08:42:20.207 233728 DEBUG nova.compute.manager [req-7e66b0d1-f8e5-49a2-8a5e-df24083545c9 req-8278bea3-e5b6-49cd-8954-6c573f5e1936 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Received event network-vif-plugged-9630069d-a220-4b0b-b510-0bfaf2de646d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:42:20 np0005539552 nova_compute[233724]: 2025-11-29 08:42:20.214 233728 DEBUG oslo_concurrency.lockutils [req-7e66b0d1-f8e5-49a2-8a5e-df24083545c9 req-8278bea3-e5b6-49cd-8954-6c573f5e1936 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d84a455a-c2a7-42a9-a238-d9d04a06df6a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:20 np0005539552 nova_compute[233724]: 2025-11-29 08:42:20.215 233728 DEBUG oslo_concurrency.lockutils [req-7e66b0d1-f8e5-49a2-8a5e-df24083545c9 req-8278bea3-e5b6-49cd-8954-6c573f5e1936 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d84a455a-c2a7-42a9-a238-d9d04a06df6a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:20 np0005539552 nova_compute[233724]: 2025-11-29 08:42:20.215 233728 DEBUG oslo_concurrency.lockutils [req-7e66b0d1-f8e5-49a2-8a5e-df24083545c9 req-8278bea3-e5b6-49cd-8954-6c573f5e1936 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d84a455a-c2a7-42a9-a238-d9d04a06df6a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:20 np0005539552 nova_compute[233724]: 2025-11-29 08:42:20.216 233728 DEBUG nova.compute.manager [req-7e66b0d1-f8e5-49a2-8a5e-df24083545c9 req-8278bea3-e5b6-49cd-8954-6c573f5e1936 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Processing event network-vif-plugged-9630069d-a220-4b0b-b510-0bfaf2de646d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:42:20 np0005539552 podman[315474]: 2025-11-29 08:42:20.60688251 +0000 UTC m=+0.065945456 container create bdb25ceafa5deceb79443814cab136b650286ff33f1becc2d1f72582a2bfb6f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:42:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:42:20.648 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:42:20.650 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:42:20.650 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:20 np0005539552 podman[315474]: 2025-11-29 08:42:20.569093083 +0000 UTC m=+0.028156109 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:42:20 np0005539552 systemd[1]: Started libpod-conmon-bdb25ceafa5deceb79443814cab136b650286ff33f1becc2d1f72582a2bfb6f4.scope.
Nov 29 03:42:20 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:42:20 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9fbc332185bb8f2e313e68a678bf3cf796ded1a95ee76213f2f4cb5e4ac91e7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:42:20 np0005539552 podman[315474]: 2025-11-29 08:42:20.725163174 +0000 UTC m=+0.184226150 container init bdb25ceafa5deceb79443814cab136b650286ff33f1becc2d1f72582a2bfb6f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:42:20 np0005539552 podman[315474]: 2025-11-29 08:42:20.732604045 +0000 UTC m=+0.191666991 container start bdb25ceafa5deceb79443814cab136b650286ff33f1becc2d1f72582a2bfb6f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:42:20 np0005539552 neutron-haproxy-ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f[315490]: [NOTICE]   (315509) : New worker (315512) forked
Nov 29 03:42:20 np0005539552 neutron-haproxy-ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f[315490]: [NOTICE]   (315509) : Loading success.
Nov 29 03:42:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:20.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:20 np0005539552 nova_compute[233724]: 2025-11-29 08:42:20.942 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405740.9419885, d84a455a-c2a7-42a9-a238-d9d04a06df6a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:42:20 np0005539552 nova_compute[233724]: 2025-11-29 08:42:20.942 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] VM Started (Lifecycle Event)#033[00m
Nov 29 03:42:20 np0005539552 nova_compute[233724]: 2025-11-29 08:42:20.945 233728 DEBUG nova.compute.manager [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:42:20 np0005539552 nova_compute[233724]: 2025-11-29 08:42:20.949 233728 DEBUG nova.virt.libvirt.driver [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:42:20 np0005539552 nova_compute[233724]: 2025-11-29 08:42:20.953 233728 INFO nova.virt.libvirt.driver [-] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Instance spawned successfully.#033[00m
Nov 29 03:42:20 np0005539552 nova_compute[233724]: 2025-11-29 08:42:20.953 233728 DEBUG nova.virt.libvirt.driver [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:42:20 np0005539552 nova_compute[233724]: 2025-11-29 08:42:20.970 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:42:20 np0005539552 nova_compute[233724]: 2025-11-29 08:42:20.980 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:42:20 np0005539552 nova_compute[233724]: 2025-11-29 08:42:20.987 233728 DEBUG nova.virt.libvirt.driver [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:42:20 np0005539552 nova_compute[233724]: 2025-11-29 08:42:20.988 233728 DEBUG nova.virt.libvirt.driver [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:42:20 np0005539552 nova_compute[233724]: 2025-11-29 08:42:20.989 233728 DEBUG nova.virt.libvirt.driver [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:42:20 np0005539552 nova_compute[233724]: 2025-11-29 08:42:20.990 233728 DEBUG nova.virt.libvirt.driver [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:42:20 np0005539552 nova_compute[233724]: 2025-11-29 08:42:20.991 233728 DEBUG nova.virt.libvirt.driver [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:42:20 np0005539552 nova_compute[233724]: 2025-11-29 08:42:20.991 233728 DEBUG nova.virt.libvirt.driver [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:42:21 np0005539552 nova_compute[233724]: 2025-11-29 08:42:21.020 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:42:21 np0005539552 nova_compute[233724]: 2025-11-29 08:42:21.021 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405740.9422302, d84a455a-c2a7-42a9-a238-d9d04a06df6a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:42:21 np0005539552 nova_compute[233724]: 2025-11-29 08:42:21.021 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:42:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:21.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:21 np0005539552 nova_compute[233724]: 2025-11-29 08:42:21.053 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:42:21 np0005539552 nova_compute[233724]: 2025-11-29 08:42:21.057 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405740.9472988, d84a455a-c2a7-42a9-a238-d9d04a06df6a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:42:21 np0005539552 nova_compute[233724]: 2025-11-29 08:42:21.057 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:42:21 np0005539552 nova_compute[233724]: 2025-11-29 08:42:21.068 233728 INFO nova.compute.manager [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Took 6.30 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:42:21 np0005539552 nova_compute[233724]: 2025-11-29 08:42:21.069 233728 DEBUG nova.compute.manager [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:42:21 np0005539552 nova_compute[233724]: 2025-11-29 08:42:21.079 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:42:21 np0005539552 nova_compute[233724]: 2025-11-29 08:42:21.082 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:42:21 np0005539552 nova_compute[233724]: 2025-11-29 08:42:21.102 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:42:21 np0005539552 nova_compute[233724]: 2025-11-29 08:42:21.135 233728 INFO nova.compute.manager [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Took 7.28 seconds to build instance.#033[00m
Nov 29 03:42:21 np0005539552 nova_compute[233724]: 2025-11-29 08:42:21.162 233728 DEBUG oslo_concurrency.lockutils [None req-08f64b26-69e5-4409-966c-e763c333e94e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lock "d84a455a-c2a7-42a9-a238-d9d04a06df6a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.388s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:22 np0005539552 nova_compute[233724]: 2025-11-29 08:42:22.356 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:22.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:23.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:23 np0005539552 nova_compute[233724]: 2025-11-29 08:42:23.573 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:24.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:25.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:25 np0005539552 nova_compute[233724]: 2025-11-29 08:42:25.503 233728 DEBUG nova.compute.manager [req-d02ade9d-ce51-45e1-b25d-92dc654967ef req-79ffe8d9-33a0-403d-b83d-b5ba95c8461a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Received event network-vif-plugged-9630069d-a220-4b0b-b510-0bfaf2de646d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:42:25 np0005539552 nova_compute[233724]: 2025-11-29 08:42:25.505 233728 DEBUG oslo_concurrency.lockutils [req-d02ade9d-ce51-45e1-b25d-92dc654967ef req-79ffe8d9-33a0-403d-b83d-b5ba95c8461a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d84a455a-c2a7-42a9-a238-d9d04a06df6a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:25 np0005539552 nova_compute[233724]: 2025-11-29 08:42:25.506 233728 DEBUG oslo_concurrency.lockutils [req-d02ade9d-ce51-45e1-b25d-92dc654967ef req-79ffe8d9-33a0-403d-b83d-b5ba95c8461a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d84a455a-c2a7-42a9-a238-d9d04a06df6a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:25 np0005539552 nova_compute[233724]: 2025-11-29 08:42:25.506 233728 DEBUG oslo_concurrency.lockutils [req-d02ade9d-ce51-45e1-b25d-92dc654967ef req-79ffe8d9-33a0-403d-b83d-b5ba95c8461a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d84a455a-c2a7-42a9-a238-d9d04a06df6a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:25 np0005539552 nova_compute[233724]: 2025-11-29 08:42:25.507 233728 DEBUG nova.compute.manager [req-d02ade9d-ce51-45e1-b25d-92dc654967ef req-79ffe8d9-33a0-403d-b83d-b5ba95c8461a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] No waiting events found dispatching network-vif-plugged-9630069d-a220-4b0b-b510-0bfaf2de646d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:42:25 np0005539552 nova_compute[233724]: 2025-11-29 08:42:25.507 233728 WARNING nova.compute.manager [req-d02ade9d-ce51-45e1-b25d-92dc654967ef req-79ffe8d9-33a0-403d-b83d-b5ba95c8461a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Received unexpected event network-vif-plugged-9630069d-a220-4b0b-b510-0bfaf2de646d for instance with vm_state active and task_state None.#033[00m
Nov 29 03:42:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:26.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:42:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:27.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:42:27 np0005539552 nova_compute[233724]: 2025-11-29 08:42:27.358 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:28 np0005539552 nova_compute[233724]: 2025-11-29 08:42:28.575 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:28.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:29.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:30.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:31.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:32 np0005539552 nova_compute[233724]: 2025-11-29 08:42:32.359 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:32.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:33.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:33 np0005539552 nova_compute[233724]: 2025-11-29 08:42:33.577 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:34 np0005539552 ovn_controller[133798]: 2025-11-29T08:42:34Z|00115|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:88:4b:27 10.100.0.8
Nov 29 03:42:34 np0005539552 ovn_controller[133798]: 2025-11-29T08:42:34Z|00116|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:88:4b:27 10.100.0.8
Nov 29 03:42:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:34.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:35.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:36.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:37.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:37 np0005539552 nova_compute[233724]: 2025-11-29 08:42:37.361 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:38 np0005539552 nova_compute[233724]: 2025-11-29 08:42:38.579 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:38.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:42:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1558720566' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:42:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:42:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1558720566' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:42:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:39.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:40 np0005539552 podman[315612]: 2025-11-29 08:42:40.009979943 +0000 UTC m=+0.087675751 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:42:40 np0005539552 podman[315613]: 2025-11-29 08:42:40.039703053 +0000 UTC m=+0.111283817 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 03:42:40 np0005539552 podman[315614]: 2025-11-29 08:42:40.080577803 +0000 UTC m=+0.146811903 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:42:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:40.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:41.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:41 np0005539552 nova_compute[233724]: 2025-11-29 08:42:41.162 233728 DEBUG oslo_concurrency.lockutils [None req-ce745e8b-593d-480f-a943-ccd61df74d6a dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Acquiring lock "d84a455a-c2a7-42a9-a238-d9d04a06df6a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:41 np0005539552 nova_compute[233724]: 2025-11-29 08:42:41.163 233728 DEBUG oslo_concurrency.lockutils [None req-ce745e8b-593d-480f-a943-ccd61df74d6a dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lock "d84a455a-c2a7-42a9-a238-d9d04a06df6a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:41 np0005539552 nova_compute[233724]: 2025-11-29 08:42:41.163 233728 DEBUG oslo_concurrency.lockutils [None req-ce745e8b-593d-480f-a943-ccd61df74d6a dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Acquiring lock "d84a455a-c2a7-42a9-a238-d9d04a06df6a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:41 np0005539552 nova_compute[233724]: 2025-11-29 08:42:41.164 233728 DEBUG oslo_concurrency.lockutils [None req-ce745e8b-593d-480f-a943-ccd61df74d6a dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lock "d84a455a-c2a7-42a9-a238-d9d04a06df6a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:41 np0005539552 nova_compute[233724]: 2025-11-29 08:42:41.164 233728 DEBUG oslo_concurrency.lockutils [None req-ce745e8b-593d-480f-a943-ccd61df74d6a dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lock "d84a455a-c2a7-42a9-a238-d9d04a06df6a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:41 np0005539552 nova_compute[233724]: 2025-11-29 08:42:41.166 233728 INFO nova.compute.manager [None req-ce745e8b-593d-480f-a943-ccd61df74d6a dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Terminating instance#033[00m
Nov 29 03:42:41 np0005539552 nova_compute[233724]: 2025-11-29 08:42:41.167 233728 DEBUG nova.compute.manager [None req-ce745e8b-593d-480f-a943-ccd61df74d6a dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:42:41 np0005539552 kernel: tap9630069d-a2 (unregistering): left promiscuous mode
Nov 29 03:42:41 np0005539552 NetworkManager[48926]: <info>  [1764405761.2628] device (tap9630069d-a2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:42:41 np0005539552 nova_compute[233724]: 2025-11-29 08:42:41.280 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:41 np0005539552 ovn_controller[133798]: 2025-11-29T08:42:41Z|00890|binding|INFO|Releasing lport 9630069d-a220-4b0b-b510-0bfaf2de646d from this chassis (sb_readonly=0)
Nov 29 03:42:41 np0005539552 ovn_controller[133798]: 2025-11-29T08:42:41Z|00891|binding|INFO|Setting lport 9630069d-a220-4b0b-b510-0bfaf2de646d down in Southbound
Nov 29 03:42:41 np0005539552 ovn_controller[133798]: 2025-11-29T08:42:41Z|00892|binding|INFO|Removing iface tap9630069d-a2 ovn-installed in OVS
Nov 29 03:42:41 np0005539552 nova_compute[233724]: 2025-11-29 08:42:41.282 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:41 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:42:41.292 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:4b:27 10.100.0.8'], port_security=['fa:16:3e:88:4b:27 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd84a455a-c2a7-42a9-a238-d9d04a06df6a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-379eb72a-ec90-4461-897a-adab6a88928f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6519f321a4954567ab99a11cc07cc5ac', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f7ac3748-9331-4cc2-bcd0-273842e7e38b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02c1f455-8d7e-4f22-83b6-df0a05597294, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=9630069d-a220-4b0b-b510-0bfaf2de646d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:42:41 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:42:41.295 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 9630069d-a220-4b0b-b510-0bfaf2de646d in datapath 379eb72a-ec90-4461-897a-adab6a88928f unbound from our chassis#033[00m
Nov 29 03:42:41 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:42:41.298 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 379eb72a-ec90-4461-897a-adab6a88928f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:42:41 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:42:41.300 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[fa63c718-cd77-418b-8164-e262a67df5bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:41 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:42:41.301 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f namespace which is not needed anymore#033[00m
Nov 29 03:42:41 np0005539552 nova_compute[233724]: 2025-11-29 08:42:41.319 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:41 np0005539552 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d000000c3.scope: Deactivated successfully.
Nov 29 03:42:41 np0005539552 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d000000c3.scope: Consumed 14.344s CPU time.
Nov 29 03:42:41 np0005539552 systemd-machined[196379]: Machine qemu-90-instance-000000c3 terminated.
Nov 29 03:42:41 np0005539552 nova_compute[233724]: 2025-11-29 08:42:41.410 233728 INFO nova.virt.libvirt.driver [-] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Instance destroyed successfully.#033[00m
Nov 29 03:42:41 np0005539552 nova_compute[233724]: 2025-11-29 08:42:41.411 233728 DEBUG nova.objects.instance [None req-ce745e8b-593d-480f-a943-ccd61df74d6a dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lazy-loading 'resources' on Instance uuid d84a455a-c2a7-42a9-a238-d9d04a06df6a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:42:41 np0005539552 nova_compute[233724]: 2025-11-29 08:42:41.427 233728 DEBUG nova.virt.libvirt.vif [None req-ce745e8b-593d-480f-a943-ccd61df74d6a dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:42:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerMultinode-server-736945795',display_name='tempest-TestServerMultinode-server-736945795',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testservermultinode-server-736945795',id=195,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:42:21Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6519f321a4954567ab99a11cc07cc5ac',ramdisk_id='',reservation_id='r-mtmw80o0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerMultinode-1895688433',owner_user_name='tempest-TestServerMultinode-1895688433-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:42:21Z,user_data=None,user_id='dda7b3867e5c45a7bb78d049103bc095',uuid=d84a455a-c2a7-42a9-a238-d9d04a06df6a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9630069d-a220-4b0b-b510-0bfaf2de646d", "address": "fa:16:3e:88:4b:27", "network": {"id": "379eb72a-ec90-4461-897a-adab6a88928f", "bridge": "br-int", "label": "tempest-TestServerMultinode-1337172465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df59ee3c04c04efabfae553312366b99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9630069d-a2", "ovs_interfaceid": "9630069d-a220-4b0b-b510-0bfaf2de646d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:42:41 np0005539552 nova_compute[233724]: 2025-11-29 08:42:41.428 233728 DEBUG nova.network.os_vif_util [None req-ce745e8b-593d-480f-a943-ccd61df74d6a dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Converting VIF {"id": "9630069d-a220-4b0b-b510-0bfaf2de646d", "address": "fa:16:3e:88:4b:27", "network": {"id": "379eb72a-ec90-4461-897a-adab6a88928f", "bridge": "br-int", "label": "tempest-TestServerMultinode-1337172465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df59ee3c04c04efabfae553312366b99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9630069d-a2", "ovs_interfaceid": "9630069d-a220-4b0b-b510-0bfaf2de646d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:42:41 np0005539552 nova_compute[233724]: 2025-11-29 08:42:41.429 233728 DEBUG nova.network.os_vif_util [None req-ce745e8b-593d-480f-a943-ccd61df74d6a dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:4b:27,bridge_name='br-int',has_traffic_filtering=True,id=9630069d-a220-4b0b-b510-0bfaf2de646d,network=Network(379eb72a-ec90-4461-897a-adab6a88928f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9630069d-a2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:42:41 np0005539552 nova_compute[233724]: 2025-11-29 08:42:41.429 233728 DEBUG os_vif [None req-ce745e8b-593d-480f-a943-ccd61df74d6a dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:4b:27,bridge_name='br-int',has_traffic_filtering=True,id=9630069d-a220-4b0b-b510-0bfaf2de646d,network=Network(379eb72a-ec90-4461-897a-adab6a88928f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9630069d-a2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:42:41 np0005539552 nova_compute[233724]: 2025-11-29 08:42:41.432 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:41 np0005539552 nova_compute[233724]: 2025-11-29 08:42:41.432 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9630069d-a2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:41 np0005539552 nova_compute[233724]: 2025-11-29 08:42:41.435 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:41 np0005539552 nova_compute[233724]: 2025-11-29 08:42:41.437 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:41 np0005539552 nova_compute[233724]: 2025-11-29 08:42:41.440 233728 INFO os_vif [None req-ce745e8b-593d-480f-a943-ccd61df74d6a dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:4b:27,bridge_name='br-int',has_traffic_filtering=True,id=9630069d-a220-4b0b-b510-0bfaf2de646d,network=Network(379eb72a-ec90-4461-897a-adab6a88928f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9630069d-a2')#033[00m
Nov 29 03:42:41 np0005539552 neutron-haproxy-ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f[315490]: [NOTICE]   (315509) : haproxy version is 2.8.14-c23fe91
Nov 29 03:42:41 np0005539552 neutron-haproxy-ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f[315490]: [NOTICE]   (315509) : path to executable is /usr/sbin/haproxy
Nov 29 03:42:41 np0005539552 neutron-haproxy-ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f[315490]: [WARNING]  (315509) : Exiting Master process...
Nov 29 03:42:41 np0005539552 neutron-haproxy-ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f[315490]: [WARNING]  (315509) : Exiting Master process...
Nov 29 03:42:41 np0005539552 neutron-haproxy-ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f[315490]: [ALERT]    (315509) : Current worker (315512) exited with code 143 (Terminated)
Nov 29 03:42:41 np0005539552 neutron-haproxy-ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f[315490]: [WARNING]  (315509) : All workers exited. Exiting... (0)
Nov 29 03:42:41 np0005539552 systemd[1]: libpod-bdb25ceafa5deceb79443814cab136b650286ff33f1becc2d1f72582a2bfb6f4.scope: Deactivated successfully.
Nov 29 03:42:41 np0005539552 podman[315710]: 2025-11-29 08:42:41.506192077 +0000 UTC m=+0.068955947 container died bdb25ceafa5deceb79443814cab136b650286ff33f1becc2d1f72582a2bfb6f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:42:41 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bdb25ceafa5deceb79443814cab136b650286ff33f1becc2d1f72582a2bfb6f4-userdata-shm.mount: Deactivated successfully.
Nov 29 03:42:41 np0005539552 systemd[1]: var-lib-containers-storage-overlay-f9fbc332185bb8f2e313e68a678bf3cf796ded1a95ee76213f2f4cb5e4ac91e7-merged.mount: Deactivated successfully.
Nov 29 03:42:41 np0005539552 podman[315710]: 2025-11-29 08:42:41.545782563 +0000 UTC m=+0.108546393 container cleanup bdb25ceafa5deceb79443814cab136b650286ff33f1becc2d1f72582a2bfb6f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:42:41 np0005539552 systemd[1]: libpod-conmon-bdb25ceafa5deceb79443814cab136b650286ff33f1becc2d1f72582a2bfb6f4.scope: Deactivated successfully.
Nov 29 03:42:41 np0005539552 podman[315755]: 2025-11-29 08:42:41.617251197 +0000 UTC m=+0.049504094 container remove bdb25ceafa5deceb79443814cab136b650286ff33f1becc2d1f72582a2bfb6f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:42:41 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:42:41.628 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[7f31aff5-2bca-450a-bd7d-a00aefb99069]: (4, ('Sat Nov 29 08:42:41 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f (bdb25ceafa5deceb79443814cab136b650286ff33f1becc2d1f72582a2bfb6f4)\nbdb25ceafa5deceb79443814cab136b650286ff33f1becc2d1f72582a2bfb6f4\nSat Nov 29 08:42:41 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f (bdb25ceafa5deceb79443814cab136b650286ff33f1becc2d1f72582a2bfb6f4)\nbdb25ceafa5deceb79443814cab136b650286ff33f1becc2d1f72582a2bfb6f4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:41 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:42:41.630 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[051974cd-200a-4dd9-a192-428e8a70573e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:41 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:42:41.632 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap379eb72a-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:41 np0005539552 nova_compute[233724]: 2025-11-29 08:42:41.634 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:41 np0005539552 kernel: tap379eb72a-e0: left promiscuous mode
Nov 29 03:42:41 np0005539552 nova_compute[233724]: 2025-11-29 08:42:41.652 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:41 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:42:41.656 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[700753aa-7dd8-427c-89c2-1402b7866f72]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:41 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:42:41.674 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b423a491-882b-4c7c-b8e9-33a146704414]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:41 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:42:41.675 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d7510ad8-36ed-4ada-892d-aa6e1ebb0486]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:41 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:42:41.695 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e63c8d02-5f73-4f7c-b75a-8dd35a77446a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 878556, 'reachable_time': 32828, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315770, 'error': None, 'target': 'ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:41 np0005539552 systemd[1]: run-netns-ovnmeta\x2d379eb72a\x2dec90\x2d4461\x2d897a\x2dadab6a88928f.mount: Deactivated successfully.
Nov 29 03:42:41 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:42:41.700 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:42:41 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:42:41.700 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[b3414a99-09f9-4328-97c0-af5b58e0db1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:41 np0005539552 nova_compute[233724]: 2025-11-29 08:42:41.880 233728 INFO nova.virt.libvirt.driver [None req-ce745e8b-593d-480f-a943-ccd61df74d6a dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Deleting instance files /var/lib/nova/instances/d84a455a-c2a7-42a9-a238-d9d04a06df6a_del#033[00m
Nov 29 03:42:41 np0005539552 nova_compute[233724]: 2025-11-29 08:42:41.881 233728 INFO nova.virt.libvirt.driver [None req-ce745e8b-593d-480f-a943-ccd61df74d6a dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Deletion of /var/lib/nova/instances/d84a455a-c2a7-42a9-a238-d9d04a06df6a_del complete#033[00m
Nov 29 03:42:41 np0005539552 nova_compute[233724]: 2025-11-29 08:42:41.956 233728 INFO nova.compute.manager [None req-ce745e8b-593d-480f-a943-ccd61df74d6a dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:42:41 np0005539552 nova_compute[233724]: 2025-11-29 08:42:41.957 233728 DEBUG oslo.service.loopingcall [None req-ce745e8b-593d-480f-a943-ccd61df74d6a dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:42:41 np0005539552 nova_compute[233724]: 2025-11-29 08:42:41.958 233728 DEBUG nova.compute.manager [-] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:42:41 np0005539552 nova_compute[233724]: 2025-11-29 08:42:41.958 233728 DEBUG nova.network.neutron [-] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:42:42 np0005539552 nova_compute[233724]: 2025-11-29 08:42:42.363 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:42 np0005539552 nova_compute[233724]: 2025-11-29 08:42:42.764 233728 DEBUG nova.compute.manager [req-518bdf54-a77c-44d5-a1dc-9a598e7c6d07 req-334160f1-3fcf-47cf-903d-2956b0edacdb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Received event network-vif-unplugged-9630069d-a220-4b0b-b510-0bfaf2de646d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:42:42 np0005539552 nova_compute[233724]: 2025-11-29 08:42:42.765 233728 DEBUG oslo_concurrency.lockutils [req-518bdf54-a77c-44d5-a1dc-9a598e7c6d07 req-334160f1-3fcf-47cf-903d-2956b0edacdb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d84a455a-c2a7-42a9-a238-d9d04a06df6a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:42 np0005539552 nova_compute[233724]: 2025-11-29 08:42:42.766 233728 DEBUG oslo_concurrency.lockutils [req-518bdf54-a77c-44d5-a1dc-9a598e7c6d07 req-334160f1-3fcf-47cf-903d-2956b0edacdb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d84a455a-c2a7-42a9-a238-d9d04a06df6a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:42 np0005539552 nova_compute[233724]: 2025-11-29 08:42:42.766 233728 DEBUG oslo_concurrency.lockutils [req-518bdf54-a77c-44d5-a1dc-9a598e7c6d07 req-334160f1-3fcf-47cf-903d-2956b0edacdb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d84a455a-c2a7-42a9-a238-d9d04a06df6a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:42 np0005539552 nova_compute[233724]: 2025-11-29 08:42:42.766 233728 DEBUG nova.compute.manager [req-518bdf54-a77c-44d5-a1dc-9a598e7c6d07 req-334160f1-3fcf-47cf-903d-2956b0edacdb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] No waiting events found dispatching network-vif-unplugged-9630069d-a220-4b0b-b510-0bfaf2de646d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:42:42 np0005539552 nova_compute[233724]: 2025-11-29 08:42:42.767 233728 DEBUG nova.compute.manager [req-518bdf54-a77c-44d5-a1dc-9a598e7c6d07 req-334160f1-3fcf-47cf-903d-2956b0edacdb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Received event network-vif-unplugged-9630069d-a220-4b0b-b510-0bfaf2de646d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:42:42 np0005539552 nova_compute[233724]: 2025-11-29 08:42:42.767 233728 DEBUG nova.compute.manager [req-518bdf54-a77c-44d5-a1dc-9a598e7c6d07 req-334160f1-3fcf-47cf-903d-2956b0edacdb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Received event network-vif-plugged-9630069d-a220-4b0b-b510-0bfaf2de646d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:42:42 np0005539552 nova_compute[233724]: 2025-11-29 08:42:42.768 233728 DEBUG oslo_concurrency.lockutils [req-518bdf54-a77c-44d5-a1dc-9a598e7c6d07 req-334160f1-3fcf-47cf-903d-2956b0edacdb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d84a455a-c2a7-42a9-a238-d9d04a06df6a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:42 np0005539552 nova_compute[233724]: 2025-11-29 08:42:42.768 233728 DEBUG oslo_concurrency.lockutils [req-518bdf54-a77c-44d5-a1dc-9a598e7c6d07 req-334160f1-3fcf-47cf-903d-2956b0edacdb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d84a455a-c2a7-42a9-a238-d9d04a06df6a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:42 np0005539552 nova_compute[233724]: 2025-11-29 08:42:42.768 233728 DEBUG oslo_concurrency.lockutils [req-518bdf54-a77c-44d5-a1dc-9a598e7c6d07 req-334160f1-3fcf-47cf-903d-2956b0edacdb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d84a455a-c2a7-42a9-a238-d9d04a06df6a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:42 np0005539552 nova_compute[233724]: 2025-11-29 08:42:42.769 233728 DEBUG nova.compute.manager [req-518bdf54-a77c-44d5-a1dc-9a598e7c6d07 req-334160f1-3fcf-47cf-903d-2956b0edacdb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] No waiting events found dispatching network-vif-plugged-9630069d-a220-4b0b-b510-0bfaf2de646d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:42:42 np0005539552 nova_compute[233724]: 2025-11-29 08:42:42.769 233728 WARNING nova.compute.manager [req-518bdf54-a77c-44d5-a1dc-9a598e7c6d07 req-334160f1-3fcf-47cf-903d-2956b0edacdb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Received unexpected event network-vif-plugged-9630069d-a220-4b0b-b510-0bfaf2de646d for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:42:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:42.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.003000081s ======
Nov 29 03:42:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:43.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000081s
Nov 29 03:42:43 np0005539552 nova_compute[233724]: 2025-11-29 08:42:43.366 233728 DEBUG nova.network.neutron [-] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:42:43 np0005539552 nova_compute[233724]: 2025-11-29 08:42:43.402 233728 INFO nova.compute.manager [-] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Took 1.44 seconds to deallocate network for instance.#033[00m
Nov 29 03:42:43 np0005539552 nova_compute[233724]: 2025-11-29 08:42:43.459 233728 DEBUG oslo_concurrency.lockutils [None req-ce745e8b-593d-480f-a943-ccd61df74d6a dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:43 np0005539552 nova_compute[233724]: 2025-11-29 08:42:43.460 233728 DEBUG oslo_concurrency.lockutils [None req-ce745e8b-593d-480f-a943-ccd61df74d6a dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:43 np0005539552 nova_compute[233724]: 2025-11-29 08:42:43.529 233728 DEBUG oslo_concurrency.processutils [None req-ce745e8b-593d-480f-a943-ccd61df74d6a dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:43 np0005539552 nova_compute[233724]: 2025-11-29 08:42:43.927 233728 DEBUG nova.compute.manager [req-523f97e6-e1c9-4e8d-931a-4a8ca77e0001 req-79b4fe5b-ecbb-460b-83d7-5168ec07dba1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Received event network-vif-deleted-9630069d-a220-4b0b-b510-0bfaf2de646d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:42:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:42:43 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3846536534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:42:43 np0005539552 nova_compute[233724]: 2025-11-29 08:42:43.962 233728 DEBUG oslo_concurrency.processutils [None req-ce745e8b-593d-480f-a943-ccd61df74d6a dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:43 np0005539552 nova_compute[233724]: 2025-11-29 08:42:43.970 233728 DEBUG nova.compute.provider_tree [None req-ce745e8b-593d-480f-a943-ccd61df74d6a dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:42:43 np0005539552 nova_compute[233724]: 2025-11-29 08:42:43.992 233728 DEBUG nova.scheduler.client.report [None req-ce745e8b-593d-480f-a943-ccd61df74d6a dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:42:44 np0005539552 nova_compute[233724]: 2025-11-29 08:42:44.012 233728 DEBUG oslo_concurrency.lockutils [None req-ce745e8b-593d-480f-a943-ccd61df74d6a dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.552s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:44 np0005539552 nova_compute[233724]: 2025-11-29 08:42:44.053 233728 INFO nova.scheduler.client.report [None req-ce745e8b-593d-480f-a943-ccd61df74d6a dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Deleted allocations for instance d84a455a-c2a7-42a9-a238-d9d04a06df6a#033[00m
Nov 29 03:42:44 np0005539552 nova_compute[233724]: 2025-11-29 08:42:44.106 233728 DEBUG oslo_concurrency.lockutils [None req-ce745e8b-593d-480f-a943-ccd61df74d6a dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lock "d84a455a-c2a7-42a9-a238-d9d04a06df6a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.943s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:42:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:44.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:42:44 np0005539552 nova_compute[233724]: 2025-11-29 08:42:44.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:44 np0005539552 nova_compute[233724]: 2025-11-29 08:42:44.962 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:44 np0005539552 nova_compute[233724]: 2025-11-29 08:42:44.963 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:44 np0005539552 nova_compute[233724]: 2025-11-29 08:42:44.963 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:44 np0005539552 nova_compute[233724]: 2025-11-29 08:42:44.964 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:42:44 np0005539552 nova_compute[233724]: 2025-11-29 08:42:44.964 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:45.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:45 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:42:45 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2763502750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:42:45 np0005539552 nova_compute[233724]: 2025-11-29 08:42:45.441 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:45 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #133. Immutable memtables: 0.
Nov 29 03:42:45 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:42:45.569045) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:42:45 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 133
Nov 29 03:42:45 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405765569138, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 1346, "num_deletes": 260, "total_data_size": 2614057, "memory_usage": 2652560, "flush_reason": "Manual Compaction"}
Nov 29 03:42:45 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #134: started
Nov 29 03:42:45 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405765588518, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 134, "file_size": 1722866, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65588, "largest_seqno": 66929, "table_properties": {"data_size": 1717243, "index_size": 2890, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13197, "raw_average_key_size": 20, "raw_value_size": 1705380, "raw_average_value_size": 2599, "num_data_blocks": 127, "num_entries": 656, "num_filter_entries": 656, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405670, "oldest_key_time": 1764405670, "file_creation_time": 1764405765, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:42:45 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 19525 microseconds, and 10342 cpu microseconds.
Nov 29 03:42:45 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:42:45 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:42:45.588575) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #134: 1722866 bytes OK
Nov 29 03:42:45 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:42:45.588594) [db/memtable_list.cc:519] [default] Level-0 commit table #134 started
Nov 29 03:42:45 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:42:45.590669) [db/memtable_list.cc:722] [default] Level-0 commit table #134: memtable #1 done
Nov 29 03:42:45 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:42:45.590687) EVENT_LOG_v1 {"time_micros": 1764405765590681, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:42:45 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:42:45.590708) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:42:45 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 2607603, prev total WAL file size 2607603, number of live WAL files 2.
Nov 29 03:42:45 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000130.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:42:45 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:42:45.591551) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323633' seq:72057594037927935, type:22 .. '6C6F676D0032353136' seq:0, type:0; will stop at (end)
Nov 29 03:42:45 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:42:45 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [134(1682KB)], [132(10MB)]
Nov 29 03:42:45 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405765591580, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [134], "files_L6": [132], "score": -1, "input_data_size": 13068327, "oldest_snapshot_seqno": -1}
Nov 29 03:42:45 np0005539552 nova_compute[233724]: 2025-11-29 08:42:45.665 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:42:45 np0005539552 nova_compute[233724]: 2025-11-29 08:42:45.667 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4147MB free_disk=20.850452423095703GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:42:45 np0005539552 nova_compute[233724]: 2025-11-29 08:42:45.667 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:45 np0005539552 nova_compute[233724]: 2025-11-29 08:42:45.668 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:45 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #135: 9609 keys, 12919122 bytes, temperature: kUnknown
Nov 29 03:42:45 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405765695822, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 135, "file_size": 12919122, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12856533, "index_size": 37452, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24069, "raw_key_size": 254055, "raw_average_key_size": 26, "raw_value_size": 12687088, "raw_average_value_size": 1320, "num_data_blocks": 1427, "num_entries": 9609, "num_filter_entries": 9609, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764405765, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 135, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:42:45 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:42:45 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:42:45.696839) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 12919122 bytes
Nov 29 03:42:45 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:42:45.698853) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 125.3 rd, 123.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 10.8 +0.0 blob) out(12.3 +0.0 blob), read-write-amplify(15.1) write-amplify(7.5) OK, records in: 10147, records dropped: 538 output_compression: NoCompression
Nov 29 03:42:45 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:42:45.698884) EVENT_LOG_v1 {"time_micros": 1764405765698871, "job": 84, "event": "compaction_finished", "compaction_time_micros": 104302, "compaction_time_cpu_micros": 41103, "output_level": 6, "num_output_files": 1, "total_output_size": 12919122, "num_input_records": 10147, "num_output_records": 9609, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:42:45 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:42:45 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405765700004, "job": 84, "event": "table_file_deletion", "file_number": 134}
Nov 29 03:42:45 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000132.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:42:45 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405765703060, "job": 84, "event": "table_file_deletion", "file_number": 132}
Nov 29 03:42:45 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:42:45.591468) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:42:45 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:42:45.703197) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:42:45 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:42:45.703202) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:42:45 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:42:45.703204) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:42:45 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:42:45.703206) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:42:45 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:42:45.703208) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:42:45 np0005539552 nova_compute[233724]: 2025-11-29 08:42:45.758 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:42:45 np0005539552 nova_compute[233724]: 2025-11-29 08:42:45.758 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:42:45 np0005539552 nova_compute[233724]: 2025-11-29 08:42:45.785 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:45 np0005539552 nova_compute[233724]: 2025-11-29 08:42:45.895 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:42:45.894 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=58, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=57) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:42:45 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:42:45.896 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:42:46 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:42:46 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/305714335' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:42:46 np0005539552 nova_compute[233724]: 2025-11-29 08:42:46.274 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:46 np0005539552 nova_compute[233724]: 2025-11-29 08:42:46.282 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:42:46 np0005539552 nova_compute[233724]: 2025-11-29 08:42:46.298 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:42:46 np0005539552 nova_compute[233724]: 2025-11-29 08:42:46.320 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:42:46 np0005539552 nova_compute[233724]: 2025-11-29 08:42:46.321 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:46 np0005539552 nova_compute[233724]: 2025-11-29 08:42:46.436 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:46.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:47.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:47 np0005539552 nova_compute[233724]: 2025-11-29 08:42:47.367 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:48 np0005539552 nova_compute[233724]: 2025-11-29 08:42:48.212 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:48.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:49.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:49 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:42:49 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/234268336' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:42:49 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:42:49 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/234268336' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:42:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:50.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:51.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:51 np0005539552 nova_compute[233724]: 2025-11-29 08:42:51.438 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:42:51.900 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '58'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:52 np0005539552 nova_compute[233724]: 2025-11-29 08:42:52.322 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:52 np0005539552 nova_compute[233724]: 2025-11-29 08:42:52.323 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:42:52 np0005539552 nova_compute[233724]: 2025-11-29 08:42:52.368 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:52.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:52 np0005539552 nova_compute[233724]: 2025-11-29 08:42:52.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:53.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:53 np0005539552 nova_compute[233724]: 2025-11-29 08:42:53.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:54.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:55.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:55 np0005539552 nova_compute[233724]: 2025-11-29 08:42:55.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:56 np0005539552 nova_compute[233724]: 2025-11-29 08:42:56.408 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405761.4066849, d84a455a-c2a7-42a9-a238-d9d04a06df6a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:42:56 np0005539552 nova_compute[233724]: 2025-11-29 08:42:56.408 233728 INFO nova.compute.manager [-] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:42:56 np0005539552 nova_compute[233724]: 2025-11-29 08:42:56.441 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:56 np0005539552 nova_compute[233724]: 2025-11-29 08:42:56.460 233728 DEBUG nova.compute.manager [None req-a591d1ef-ab28-4f3f-941f-43c0c1bf5567 - - - - - -] [instance: d84a455a-c2a7-42a9-a238-d9d04a06df6a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:42:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:56.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:56 np0005539552 nova_compute[233724]: 2025-11-29 08:42:56.920 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:56 np0005539552 nova_compute[233724]: 2025-11-29 08:42:56.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:57.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:57 np0005539552 nova_compute[233724]: 2025-11-29 08:42:57.371 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:57 np0005539552 nova_compute[233724]: 2025-11-29 08:42:57.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:58.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:42:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:59.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:43:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:00.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:43:00 np0005539552 nova_compute[233724]: 2025-11-29 08:43:00.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:43:00 np0005539552 nova_compute[233724]: 2025-11-29 08:43:00.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:43:00 np0005539552 nova_compute[233724]: 2025-11-29 08:43:00.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:43:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:01.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:01 np0005539552 nova_compute[233724]: 2025-11-29 08:43:01.121 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:43:01 np0005539552 nova_compute[233724]: 2025-11-29 08:43:01.443 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:02 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:43:02 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:43:02 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:43:02 np0005539552 nova_compute[233724]: 2025-11-29 08:43:02.373 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:43:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:02.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:43:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:03.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:04.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:43:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:05.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:43:06 np0005539552 nova_compute[233724]: 2025-11-29 08:43:06.445 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:06.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:07.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:07 np0005539552 nova_compute[233724]: 2025-11-29 08:43:07.375 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:07 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:43:07 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:43:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:43:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:08.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:43:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:09.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:09 np0005539552 nova_compute[233724]: 2025-11-29 08:43:09.118 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:43:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:10.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:11 np0005539552 podman[316139]: 2025-11-29 08:43:11.004487018 +0000 UTC m=+0.079683205 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 03:43:11 np0005539552 podman[316138]: 2025-11-29 08:43:11.021954319 +0000 UTC m=+0.097873716 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 29 03:43:11 np0005539552 podman[316140]: 2025-11-29 08:43:11.044440344 +0000 UTC m=+0.117671519 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 03:43:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:11.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:11 np0005539552 nova_compute[233724]: 2025-11-29 08:43:11.447 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:12 np0005539552 nova_compute[233724]: 2025-11-29 08:43:12.378 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:43:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:12.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:43:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:13.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:43:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:14.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:43:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:15.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:16 np0005539552 nova_compute[233724]: 2025-11-29 08:43:16.449 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:43:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:16.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:43:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:43:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:17.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:43:17 np0005539552 nova_compute[233724]: 2025-11-29 08:43:17.380 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:18.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:43:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:19.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:43:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:43:20.649 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:43:20.650 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:43:20.650 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:20.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:21.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:21 np0005539552 nova_compute[233724]: 2025-11-29 08:43:21.450 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:22 np0005539552 nova_compute[233724]: 2025-11-29 08:43:22.227 233728 DEBUG oslo_concurrency.lockutils [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "3a9139ee-300c-4b0c-897f-218b8cde7e38" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:22 np0005539552 nova_compute[233724]: 2025-11-29 08:43:22.228 233728 DEBUG oslo_concurrency.lockutils [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "3a9139ee-300c-4b0c-897f-218b8cde7e38" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:22 np0005539552 nova_compute[233724]: 2025-11-29 08:43:22.246 233728 DEBUG nova.compute.manager [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:43:22 np0005539552 nova_compute[233724]: 2025-11-29 08:43:22.327 233728 DEBUG oslo_concurrency.lockutils [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:22 np0005539552 nova_compute[233724]: 2025-11-29 08:43:22.327 233728 DEBUG oslo_concurrency.lockutils [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:22 np0005539552 nova_compute[233724]: 2025-11-29 08:43:22.334 233728 DEBUG nova.virt.hardware [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:43:22 np0005539552 nova_compute[233724]: 2025-11-29 08:43:22.335 233728 INFO nova.compute.claims [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:43:22 np0005539552 nova_compute[233724]: 2025-11-29 08:43:22.382 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:22 np0005539552 nova_compute[233724]: 2025-11-29 08:43:22.438 233728 DEBUG oslo_concurrency.processutils [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:43:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:22 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2418677118' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:43:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:22.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:22 np0005539552 nova_compute[233724]: 2025-11-29 08:43:22.889 233728 DEBUG oslo_concurrency.processutils [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:43:22 np0005539552 nova_compute[233724]: 2025-11-29 08:43:22.899 233728 DEBUG nova.compute.provider_tree [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:43:22 np0005539552 nova_compute[233724]: 2025-11-29 08:43:22.940 233728 DEBUG nova.scheduler.client.report [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:43:22 np0005539552 nova_compute[233724]: 2025-11-29 08:43:22.967 233728 DEBUG oslo_concurrency.lockutils [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:22 np0005539552 nova_compute[233724]: 2025-11-29 08:43:22.968 233728 DEBUG nova.compute.manager [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:43:23 np0005539552 nova_compute[233724]: 2025-11-29 08:43:23.022 233728 DEBUG nova.compute.manager [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:43:23 np0005539552 nova_compute[233724]: 2025-11-29 08:43:23.023 233728 DEBUG nova.network.neutron [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:43:23 np0005539552 nova_compute[233724]: 2025-11-29 08:43:23.048 233728 INFO nova.virt.libvirt.driver [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:43:23 np0005539552 nova_compute[233724]: 2025-11-29 08:43:23.076 233728 DEBUG nova.compute.manager [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:43:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:23.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:23 np0005539552 nova_compute[233724]: 2025-11-29 08:43:23.210 233728 DEBUG nova.compute.manager [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:43:23 np0005539552 nova_compute[233724]: 2025-11-29 08:43:23.212 233728 DEBUG nova.virt.libvirt.driver [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:43:23 np0005539552 nova_compute[233724]: 2025-11-29 08:43:23.213 233728 INFO nova.virt.libvirt.driver [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Creating image(s)#033[00m
Nov 29 03:43:23 np0005539552 nova_compute[233724]: 2025-11-29 08:43:23.251 233728 DEBUG nova.storage.rbd_utils [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 3a9139ee-300c-4b0c-897f-218b8cde7e38_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:43:23 np0005539552 nova_compute[233724]: 2025-11-29 08:43:23.289 233728 DEBUG nova.storage.rbd_utils [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 3a9139ee-300c-4b0c-897f-218b8cde7e38_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:43:23 np0005539552 nova_compute[233724]: 2025-11-29 08:43:23.328 233728 DEBUG nova.storage.rbd_utils [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 3a9139ee-300c-4b0c-897f-218b8cde7e38_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:43:23 np0005539552 nova_compute[233724]: 2025-11-29 08:43:23.333 233728 DEBUG oslo_concurrency.processutils [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:23 np0005539552 nova_compute[233724]: 2025-11-29 08:43:23.429 233728 DEBUG oslo_concurrency.processutils [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:43:23 np0005539552 nova_compute[233724]: 2025-11-29 08:43:23.431 233728 DEBUG oslo_concurrency.lockutils [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:23 np0005539552 nova_compute[233724]: 2025-11-29 08:43:23.432 233728 DEBUG oslo_concurrency.lockutils [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:23 np0005539552 nova_compute[233724]: 2025-11-29 08:43:23.433 233728 DEBUG oslo_concurrency.lockutils [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:23 np0005539552 nova_compute[233724]: 2025-11-29 08:43:23.468 233728 DEBUG nova.storage.rbd_utils [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 3a9139ee-300c-4b0c-897f-218b8cde7e38_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:43:23 np0005539552 nova_compute[233724]: 2025-11-29 08:43:23.473 233728 DEBUG oslo_concurrency.processutils [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 3a9139ee-300c-4b0c-897f-218b8cde7e38_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:23 np0005539552 nova_compute[233724]: 2025-11-29 08:43:23.515 233728 DEBUG nova.policy [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4774e2851bc6407cb0fcde15bd24d1b3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0471b9b208874403aa3f0fbe7504ad19', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:43:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:24 np0005539552 nova_compute[233724]: 2025-11-29 08:43:24.150 233728 DEBUG oslo_concurrency.processutils [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 3a9139ee-300c-4b0c-897f-218b8cde7e38_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.676s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:43:24 np0005539552 nova_compute[233724]: 2025-11-29 08:43:24.223 233728 DEBUG nova.storage.rbd_utils [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] resizing rbd image 3a9139ee-300c-4b0c-897f-218b8cde7e38_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:43:24 np0005539552 nova_compute[233724]: 2025-11-29 08:43:24.470 233728 DEBUG nova.objects.instance [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lazy-loading 'migration_context' on Instance uuid 3a9139ee-300c-4b0c-897f-218b8cde7e38 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:43:24 np0005539552 nova_compute[233724]: 2025-11-29 08:43:24.484 233728 DEBUG nova.virt.libvirt.driver [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:43:24 np0005539552 nova_compute[233724]: 2025-11-29 08:43:24.484 233728 DEBUG nova.virt.libvirt.driver [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Ensure instance console log exists: /var/lib/nova/instances/3a9139ee-300c-4b0c-897f-218b8cde7e38/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:43:24 np0005539552 nova_compute[233724]: 2025-11-29 08:43:24.484 233728 DEBUG oslo_concurrency.lockutils [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:24 np0005539552 nova_compute[233724]: 2025-11-29 08:43:24.485 233728 DEBUG oslo_concurrency.lockutils [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:24 np0005539552 nova_compute[233724]: 2025-11-29 08:43:24.485 233728 DEBUG oslo_concurrency.lockutils [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:24 np0005539552 nova_compute[233724]: 2025-11-29 08:43:24.544 233728 DEBUG nova.network.neutron [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Successfully created port: ffc8d5ab-9373-4bc4-b60e-cde39ecf014f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:43:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:43:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:24.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:43:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:25.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:25 np0005539552 nova_compute[233724]: 2025-11-29 08:43:25.647 233728 DEBUG nova.network.neutron [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Successfully updated port: ffc8d5ab-9373-4bc4-b60e-cde39ecf014f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:43:25 np0005539552 nova_compute[233724]: 2025-11-29 08:43:25.672 233728 DEBUG oslo_concurrency.lockutils [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "refresh_cache-3a9139ee-300c-4b0c-897f-218b8cde7e38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:43:25 np0005539552 nova_compute[233724]: 2025-11-29 08:43:25.672 233728 DEBUG oslo_concurrency.lockutils [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquired lock "refresh_cache-3a9139ee-300c-4b0c-897f-218b8cde7e38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:43:25 np0005539552 nova_compute[233724]: 2025-11-29 08:43:25.672 233728 DEBUG nova.network.neutron [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:43:25 np0005539552 nova_compute[233724]: 2025-11-29 08:43:25.768 233728 DEBUG nova.compute.manager [req-685ffc81-a573-407f-a01a-5b6fa7de4f1d req-7939f3ae-8dd1-4a0a-80e7-1a8710d71d17 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Received event network-changed-ffc8d5ab-9373-4bc4-b60e-cde39ecf014f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:43:25 np0005539552 nova_compute[233724]: 2025-11-29 08:43:25.769 233728 DEBUG nova.compute.manager [req-685ffc81-a573-407f-a01a-5b6fa7de4f1d req-7939f3ae-8dd1-4a0a-80e7-1a8710d71d17 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Refreshing instance network info cache due to event network-changed-ffc8d5ab-9373-4bc4-b60e-cde39ecf014f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:43:25 np0005539552 nova_compute[233724]: 2025-11-29 08:43:25.769 233728 DEBUG oslo_concurrency.lockutils [req-685ffc81-a573-407f-a01a-5b6fa7de4f1d req-7939f3ae-8dd1-4a0a-80e7-1a8710d71d17 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-3a9139ee-300c-4b0c-897f-218b8cde7e38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:43:25 np0005539552 nova_compute[233724]: 2025-11-29 08:43:25.855 233728 DEBUG nova.network.neutron [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:43:26 np0005539552 nova_compute[233724]: 2025-11-29 08:43:26.452 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:26.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:26 np0005539552 nova_compute[233724]: 2025-11-29 08:43:26.921 233728 DEBUG nova.network.neutron [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Updating instance_info_cache with network_info: [{"id": "ffc8d5ab-9373-4bc4-b60e-cde39ecf014f", "address": "fa:16:3e:cc:46:19", "network": {"id": "faff47d9-b197-488e-92f7-8e8d5ec1eec7", "bridge": "br-int", "label": "tempest-network-smoke--1652969305", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffc8d5ab-93", "ovs_interfaceid": "ffc8d5ab-9373-4bc4-b60e-cde39ecf014f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:43:26 np0005539552 nova_compute[233724]: 2025-11-29 08:43:26.943 233728 DEBUG oslo_concurrency.lockutils [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Releasing lock "refresh_cache-3a9139ee-300c-4b0c-897f-218b8cde7e38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:43:26 np0005539552 nova_compute[233724]: 2025-11-29 08:43:26.944 233728 DEBUG nova.compute.manager [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Instance network_info: |[{"id": "ffc8d5ab-9373-4bc4-b60e-cde39ecf014f", "address": "fa:16:3e:cc:46:19", "network": {"id": "faff47d9-b197-488e-92f7-8e8d5ec1eec7", "bridge": "br-int", "label": "tempest-network-smoke--1652969305", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffc8d5ab-93", "ovs_interfaceid": "ffc8d5ab-9373-4bc4-b60e-cde39ecf014f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:43:26 np0005539552 nova_compute[233724]: 2025-11-29 08:43:26.944 233728 DEBUG oslo_concurrency.lockutils [req-685ffc81-a573-407f-a01a-5b6fa7de4f1d req-7939f3ae-8dd1-4a0a-80e7-1a8710d71d17 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-3a9139ee-300c-4b0c-897f-218b8cde7e38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:43:26 np0005539552 nova_compute[233724]: 2025-11-29 08:43:26.945 233728 DEBUG nova.network.neutron [req-685ffc81-a573-407f-a01a-5b6fa7de4f1d req-7939f3ae-8dd1-4a0a-80e7-1a8710d71d17 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Refreshing network info cache for port ffc8d5ab-9373-4bc4-b60e-cde39ecf014f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:43:26 np0005539552 nova_compute[233724]: 2025-11-29 08:43:26.950 233728 DEBUG nova.virt.libvirt.driver [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Start _get_guest_xml network_info=[{"id": "ffc8d5ab-9373-4bc4-b60e-cde39ecf014f", "address": "fa:16:3e:cc:46:19", "network": {"id": "faff47d9-b197-488e-92f7-8e8d5ec1eec7", "bridge": "br-int", "label": "tempest-network-smoke--1652969305", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffc8d5ab-93", "ovs_interfaceid": "ffc8d5ab-9373-4bc4-b60e-cde39ecf014f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:43:26 np0005539552 nova_compute[233724]: 2025-11-29 08:43:26.957 233728 WARNING nova.virt.libvirt.driver [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:43:26 np0005539552 nova_compute[233724]: 2025-11-29 08:43:26.965 233728 DEBUG nova.virt.libvirt.host [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:43:26 np0005539552 nova_compute[233724]: 2025-11-29 08:43:26.965 233728 DEBUG nova.virt.libvirt.host [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:43:26 np0005539552 nova_compute[233724]: 2025-11-29 08:43:26.969 233728 DEBUG nova.virt.libvirt.host [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:43:26 np0005539552 nova_compute[233724]: 2025-11-29 08:43:26.970 233728 DEBUG nova.virt.libvirt.host [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:43:26 np0005539552 nova_compute[233724]: 2025-11-29 08:43:26.973 233728 DEBUG nova.virt.libvirt.driver [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:43:26 np0005539552 nova_compute[233724]: 2025-11-29 08:43:26.973 233728 DEBUG nova.virt.hardware [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:43:26 np0005539552 nova_compute[233724]: 2025-11-29 08:43:26.974 233728 DEBUG nova.virt.hardware [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:43:26 np0005539552 nova_compute[233724]: 2025-11-29 08:43:26.974 233728 DEBUG nova.virt.hardware [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:43:26 np0005539552 nova_compute[233724]: 2025-11-29 08:43:26.975 233728 DEBUG nova.virt.hardware [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:43:26 np0005539552 nova_compute[233724]: 2025-11-29 08:43:26.975 233728 DEBUG nova.virt.hardware [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:43:26 np0005539552 nova_compute[233724]: 2025-11-29 08:43:26.975 233728 DEBUG nova.virt.hardware [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:43:26 np0005539552 nova_compute[233724]: 2025-11-29 08:43:26.976 233728 DEBUG nova.virt.hardware [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:43:26 np0005539552 nova_compute[233724]: 2025-11-29 08:43:26.976 233728 DEBUG nova.virt.hardware [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:43:26 np0005539552 nova_compute[233724]: 2025-11-29 08:43:26.976 233728 DEBUG nova.virt.hardware [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:43:26 np0005539552 nova_compute[233724]: 2025-11-29 08:43:26.977 233728 DEBUG nova.virt.hardware [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:43:26 np0005539552 nova_compute[233724]: 2025-11-29 08:43:26.977 233728 DEBUG nova.virt.hardware [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:43:26 np0005539552 nova_compute[233724]: 2025-11-29 08:43:26.983 233728 DEBUG oslo_concurrency.processutils [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:27 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #58. Immutable memtables: 14.
Nov 29 03:43:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:27.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:27 np0005539552 nova_compute[233724]: 2025-11-29 08:43:27.384 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:43:27 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4203819229' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:43:27 np0005539552 nova_compute[233724]: 2025-11-29 08:43:27.501 233728 DEBUG oslo_concurrency.processutils [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:43:27 np0005539552 nova_compute[233724]: 2025-11-29 08:43:27.533 233728 DEBUG nova.storage.rbd_utils [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 3a9139ee-300c-4b0c-897f-218b8cde7e38_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:43:27 np0005539552 nova_compute[233724]: 2025-11-29 08:43:27.537 233728 DEBUG oslo_concurrency.processutils [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:43:27 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1624730267' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:43:28 np0005539552 nova_compute[233724]: 2025-11-29 08:43:28.099 233728 DEBUG oslo_concurrency.processutils [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:43:28 np0005539552 nova_compute[233724]: 2025-11-29 08:43:28.102 233728 DEBUG nova.virt.libvirt.vif [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:43:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-271286906',display_name='tempest-TestNetworkBasicOps-server-271286906',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-271286906',id=201,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC/6bzZlSe8/jL2wzb/LbfFMoJ0Go2sJh9V7k/5pnGSkLdFEr6m7Swr+/JX1lcABMOTiHa+dqs7Tbn8uamcNOWXDhw/Wnug8RQEdhnNhkG5pMjJwrIrNrkC9cj1YcSvhqw==',key_name='tempest-TestNetworkBasicOps-107713680',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-x774dwg7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:43:23Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=3a9139ee-300c-4b0c-897f-218b8cde7e38,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ffc8d5ab-9373-4bc4-b60e-cde39ecf014f", "address": "fa:16:3e:cc:46:19", "network": {"id": "faff47d9-b197-488e-92f7-8e8d5ec1eec7", "bridge": "br-int", "label": "tempest-network-smoke--1652969305", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffc8d5ab-93", "ovs_interfaceid": "ffc8d5ab-9373-4bc4-b60e-cde39ecf014f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:43:28 np0005539552 nova_compute[233724]: 2025-11-29 08:43:28.103 233728 DEBUG nova.network.os_vif_util [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "ffc8d5ab-9373-4bc4-b60e-cde39ecf014f", "address": "fa:16:3e:cc:46:19", "network": {"id": "faff47d9-b197-488e-92f7-8e8d5ec1eec7", "bridge": "br-int", "label": "tempest-network-smoke--1652969305", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffc8d5ab-93", "ovs_interfaceid": "ffc8d5ab-9373-4bc4-b60e-cde39ecf014f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:43:28 np0005539552 nova_compute[233724]: 2025-11-29 08:43:28.105 233728 DEBUG nova.network.os_vif_util [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:46:19,bridge_name='br-int',has_traffic_filtering=True,id=ffc8d5ab-9373-4bc4-b60e-cde39ecf014f,network=Network(faff47d9-b197-488e-92f7-8e8d5ec1eec7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffc8d5ab-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:43:28 np0005539552 nova_compute[233724]: 2025-11-29 08:43:28.107 233728 DEBUG nova.objects.instance [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3a9139ee-300c-4b0c-897f-218b8cde7e38 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:43:28 np0005539552 nova_compute[233724]: 2025-11-29 08:43:28.139 233728 DEBUG nova.virt.libvirt.driver [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:43:28 np0005539552 nova_compute[233724]:  <uuid>3a9139ee-300c-4b0c-897f-218b8cde7e38</uuid>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:  <name>instance-000000c9</name>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:43:28 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:      <nova:name>tempest-TestNetworkBasicOps-server-271286906</nova:name>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:43:26</nova:creationTime>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:43:28 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:        <nova:user uuid="4774e2851bc6407cb0fcde15bd24d1b3">tempest-TestNetworkBasicOps-828399474-project-member</nova:user>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:        <nova:project uuid="0471b9b208874403aa3f0fbe7504ad19">tempest-TestNetworkBasicOps-828399474</nova:project>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:        <nova:port uuid="ffc8d5ab-9373-4bc4-b60e-cde39ecf014f">
Nov 29 03:43:28 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:      <entry name="serial">3a9139ee-300c-4b0c-897f-218b8cde7e38</entry>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:      <entry name="uuid">3a9139ee-300c-4b0c-897f-218b8cde7e38</entry>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:43:28 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/3a9139ee-300c-4b0c-897f-218b8cde7e38_disk">
Nov 29 03:43:28 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:43:28 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:43:28 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/3a9139ee-300c-4b0c-897f-218b8cde7e38_disk.config">
Nov 29 03:43:28 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:43:28 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:43:28 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:cc:46:19"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:      <target dev="tapffc8d5ab-93"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:43:28 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/3a9139ee-300c-4b0c-897f-218b8cde7e38/console.log" append="off"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:43:28 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:43:28 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:43:28 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:43:28 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:43:28 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:43:28 np0005539552 nova_compute[233724]: 2025-11-29 08:43:28.141 233728 DEBUG nova.compute.manager [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Preparing to wait for external event network-vif-plugged-ffc8d5ab-9373-4bc4-b60e-cde39ecf014f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:43:28 np0005539552 nova_compute[233724]: 2025-11-29 08:43:28.141 233728 DEBUG oslo_concurrency.lockutils [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "3a9139ee-300c-4b0c-897f-218b8cde7e38-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:28 np0005539552 nova_compute[233724]: 2025-11-29 08:43:28.142 233728 DEBUG oslo_concurrency.lockutils [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "3a9139ee-300c-4b0c-897f-218b8cde7e38-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:28 np0005539552 nova_compute[233724]: 2025-11-29 08:43:28.142 233728 DEBUG oslo_concurrency.lockutils [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "3a9139ee-300c-4b0c-897f-218b8cde7e38-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:28 np0005539552 nova_compute[233724]: 2025-11-29 08:43:28.144 233728 DEBUG nova.virt.libvirt.vif [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:43:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-271286906',display_name='tempest-TestNetworkBasicOps-server-271286906',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-271286906',id=201,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC/6bzZlSe8/jL2wzb/LbfFMoJ0Go2sJh9V7k/5pnGSkLdFEr6m7Swr+/JX1lcABMOTiHa+dqs7Tbn8uamcNOWXDhw/Wnug8RQEdhnNhkG5pMjJwrIrNrkC9cj1YcSvhqw==',key_name='tempest-TestNetworkBasicOps-107713680',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-x774dwg7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:43:23Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=3a9139ee-300c-4b0c-897f-218b8cde7e38,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ffc8d5ab-9373-4bc4-b60e-cde39ecf014f", "address": "fa:16:3e:cc:46:19", "network": {"id": "faff47d9-b197-488e-92f7-8e8d5ec1eec7", "bridge": "br-int", "label": "tempest-network-smoke--1652969305", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffc8d5ab-93", "ovs_interfaceid": "ffc8d5ab-9373-4bc4-b60e-cde39ecf014f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:43:28 np0005539552 nova_compute[233724]: 2025-11-29 08:43:28.144 233728 DEBUG nova.network.os_vif_util [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "ffc8d5ab-9373-4bc4-b60e-cde39ecf014f", "address": "fa:16:3e:cc:46:19", "network": {"id": "faff47d9-b197-488e-92f7-8e8d5ec1eec7", "bridge": "br-int", "label": "tempest-network-smoke--1652969305", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffc8d5ab-93", "ovs_interfaceid": "ffc8d5ab-9373-4bc4-b60e-cde39ecf014f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:43:28 np0005539552 nova_compute[233724]: 2025-11-29 08:43:28.145 233728 DEBUG nova.network.os_vif_util [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:46:19,bridge_name='br-int',has_traffic_filtering=True,id=ffc8d5ab-9373-4bc4-b60e-cde39ecf014f,network=Network(faff47d9-b197-488e-92f7-8e8d5ec1eec7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffc8d5ab-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:43:28 np0005539552 nova_compute[233724]: 2025-11-29 08:43:28.146 233728 DEBUG os_vif [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:46:19,bridge_name='br-int',has_traffic_filtering=True,id=ffc8d5ab-9373-4bc4-b60e-cde39ecf014f,network=Network(faff47d9-b197-488e-92f7-8e8d5ec1eec7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffc8d5ab-93') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:43:28 np0005539552 nova_compute[233724]: 2025-11-29 08:43:28.147 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:28 np0005539552 nova_compute[233724]: 2025-11-29 08:43:28.148 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:43:28 np0005539552 nova_compute[233724]: 2025-11-29 08:43:28.149 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:43:28 np0005539552 nova_compute[233724]: 2025-11-29 08:43:28.154 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:28 np0005539552 nova_compute[233724]: 2025-11-29 08:43:28.154 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapffc8d5ab-93, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:43:28 np0005539552 nova_compute[233724]: 2025-11-29 08:43:28.155 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapffc8d5ab-93, col_values=(('external_ids', {'iface-id': 'ffc8d5ab-9373-4bc4-b60e-cde39ecf014f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cc:46:19', 'vm-uuid': '3a9139ee-300c-4b0c-897f-218b8cde7e38'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:43:28 np0005539552 NetworkManager[48926]: <info>  [1764405808.1594] manager: (tapffc8d5ab-93): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/395)
Nov 29 03:43:28 np0005539552 nova_compute[233724]: 2025-11-29 08:43:28.158 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:28 np0005539552 nova_compute[233724]: 2025-11-29 08:43:28.162 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:43:28 np0005539552 nova_compute[233724]: 2025-11-29 08:43:28.169 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:28 np0005539552 nova_compute[233724]: 2025-11-29 08:43:28.170 233728 INFO os_vif [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:46:19,bridge_name='br-int',has_traffic_filtering=True,id=ffc8d5ab-9373-4bc4-b60e-cde39ecf014f,network=Network(faff47d9-b197-488e-92f7-8e8d5ec1eec7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffc8d5ab-93')#033[00m
Nov 29 03:43:28 np0005539552 nova_compute[233724]: 2025-11-29 08:43:28.228 233728 DEBUG nova.virt.libvirt.driver [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:43:28 np0005539552 nova_compute[233724]: 2025-11-29 08:43:28.228 233728 DEBUG nova.virt.libvirt.driver [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:43:28 np0005539552 nova_compute[233724]: 2025-11-29 08:43:28.229 233728 DEBUG nova.virt.libvirt.driver [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No VIF found with MAC fa:16:3e:cc:46:19, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:43:28 np0005539552 nova_compute[233724]: 2025-11-29 08:43:28.229 233728 INFO nova.virt.libvirt.driver [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Using config drive#033[00m
Nov 29 03:43:28 np0005539552 nova_compute[233724]: 2025-11-29 08:43:28.262 233728 DEBUG nova.storage.rbd_utils [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 3a9139ee-300c-4b0c-897f-218b8cde7e38_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:43:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:28.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:29.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:29 np0005539552 nova_compute[233724]: 2025-11-29 08:43:29.705 233728 INFO nova.virt.libvirt.driver [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Creating config drive at /var/lib/nova/instances/3a9139ee-300c-4b0c-897f-218b8cde7e38/disk.config#033[00m
Nov 29 03:43:29 np0005539552 nova_compute[233724]: 2025-11-29 08:43:29.715 233728 DEBUG oslo_concurrency.processutils [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3a9139ee-300c-4b0c-897f-218b8cde7e38/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5zofncj7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:29 np0005539552 nova_compute[233724]: 2025-11-29 08:43:29.880 233728 DEBUG oslo_concurrency.processutils [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3a9139ee-300c-4b0c-897f-218b8cde7e38/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5zofncj7" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:43:29 np0005539552 nova_compute[233724]: 2025-11-29 08:43:29.917 233728 DEBUG nova.storage.rbd_utils [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 3a9139ee-300c-4b0c-897f-218b8cde7e38_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:43:29 np0005539552 nova_compute[233724]: 2025-11-29 08:43:29.922 233728 DEBUG oslo_concurrency.processutils [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3a9139ee-300c-4b0c-897f-218b8cde7e38/disk.config 3a9139ee-300c-4b0c-897f-218b8cde7e38_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:29 np0005539552 nova_compute[233724]: 2025-11-29 08:43:29.954 233728 DEBUG nova.network.neutron [req-685ffc81-a573-407f-a01a-5b6fa7de4f1d req-7939f3ae-8dd1-4a0a-80e7-1a8710d71d17 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Updated VIF entry in instance network info cache for port ffc8d5ab-9373-4bc4-b60e-cde39ecf014f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:43:29 np0005539552 nova_compute[233724]: 2025-11-29 08:43:29.955 233728 DEBUG nova.network.neutron [req-685ffc81-a573-407f-a01a-5b6fa7de4f1d req-7939f3ae-8dd1-4a0a-80e7-1a8710d71d17 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Updating instance_info_cache with network_info: [{"id": "ffc8d5ab-9373-4bc4-b60e-cde39ecf014f", "address": "fa:16:3e:cc:46:19", "network": {"id": "faff47d9-b197-488e-92f7-8e8d5ec1eec7", "bridge": "br-int", "label": "tempest-network-smoke--1652969305", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffc8d5ab-93", "ovs_interfaceid": "ffc8d5ab-9373-4bc4-b60e-cde39ecf014f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:43:29 np0005539552 nova_compute[233724]: 2025-11-29 08:43:29.981 233728 DEBUG oslo_concurrency.lockutils [req-685ffc81-a573-407f-a01a-5b6fa7de4f1d req-7939f3ae-8dd1-4a0a-80e7-1a8710d71d17 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-3a9139ee-300c-4b0c-897f-218b8cde7e38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:43:30 np0005539552 nova_compute[233724]: 2025-11-29 08:43:30.140 233728 DEBUG oslo_concurrency.processutils [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3a9139ee-300c-4b0c-897f-218b8cde7e38/disk.config 3a9139ee-300c-4b0c-897f-218b8cde7e38_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.218s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:43:30 np0005539552 nova_compute[233724]: 2025-11-29 08:43:30.141 233728 INFO nova.virt.libvirt.driver [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Deleting local config drive /var/lib/nova/instances/3a9139ee-300c-4b0c-897f-218b8cde7e38/disk.config because it was imported into RBD.#033[00m
Nov 29 03:43:30 np0005539552 kernel: tapffc8d5ab-93: entered promiscuous mode
Nov 29 03:43:30 np0005539552 nova_compute[233724]: 2025-11-29 08:43:30.221 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:30 np0005539552 ovn_controller[133798]: 2025-11-29T08:43:30Z|00893|binding|INFO|Claiming lport ffc8d5ab-9373-4bc4-b60e-cde39ecf014f for this chassis.
Nov 29 03:43:30 np0005539552 ovn_controller[133798]: 2025-11-29T08:43:30Z|00894|binding|INFO|ffc8d5ab-9373-4bc4-b60e-cde39ecf014f: Claiming fa:16:3e:cc:46:19 10.100.0.6
Nov 29 03:43:30 np0005539552 NetworkManager[48926]: <info>  [1764405810.2231] manager: (tapffc8d5ab-93): new Tun device (/org/freedesktop/NetworkManager/Devices/396)
Nov 29 03:43:30 np0005539552 nova_compute[233724]: 2025-11-29 08:43:30.236 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:43:30.243 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:46:19 10.100.0.6'], port_security=['fa:16:3e:cc:46:19 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3a9139ee-300c-4b0c-897f-218b8cde7e38', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-faff47d9-b197-488e-92f7-8e8d5ec1eec7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0471b9b208874403aa3f0fbe7504ad19', 'neutron:revision_number': '2', 'neutron:security_group_ids': '643f49e8-7bc0-48db-8ba9-730954386800', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=019e6a06-0d9a-4978-9f73-700cb716f877, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=ffc8d5ab-9373-4bc4-b60e-cde39ecf014f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:43:30.245 143400 INFO neutron.agent.ovn.metadata.agent [-] Port ffc8d5ab-9373-4bc4-b60e-cde39ecf014f in datapath faff47d9-b197-488e-92f7-8e8d5ec1eec7 bound to our chassis#033[00m
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:43:30.248 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network faff47d9-b197-488e-92f7-8e8d5ec1eec7#033[00m
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:43:30.272 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[fb0940b6-5aab-445c-a1a0-5df2d5860c52]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:43:30.273 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfaff47d9-b1 in ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:43:30 np0005539552 systemd-machined[196379]: New machine qemu-91-instance-000000c9.
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:43:30.279 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfaff47d9-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:43:30.280 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[eb553d0e-aebf-4797-b232-51c40863014a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:43:30.281 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4ae2ac66-40e1-4913-b5b1-f44d896f0d41]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:43:30.302 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[8e40b6bd-6acd-4d88-b623-045772121a93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:30 np0005539552 systemd[1]: Started Virtual Machine qemu-91-instance-000000c9.
Nov 29 03:43:30 np0005539552 nova_compute[233724]: 2025-11-29 08:43:30.332 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:30 np0005539552 ovn_controller[133798]: 2025-11-29T08:43:30Z|00895|binding|INFO|Setting lport ffc8d5ab-9373-4bc4-b60e-cde39ecf014f ovn-installed in OVS
Nov 29 03:43:30 np0005539552 ovn_controller[133798]: 2025-11-29T08:43:30Z|00896|binding|INFO|Setting lport ffc8d5ab-9373-4bc4-b60e-cde39ecf014f up in Southbound
Nov 29 03:43:30 np0005539552 nova_compute[233724]: 2025-11-29 08:43:30.337 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:43:30.339 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[9cbefa6a-b208-4d54-aa5f-d30494e85fe1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:30 np0005539552 systemd-udevd[316593]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:43:30 np0005539552 NetworkManager[48926]: <info>  [1764405810.3674] device (tapffc8d5ab-93): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:43:30 np0005539552 NetworkManager[48926]: <info>  [1764405810.3703] device (tapffc8d5ab-93): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:43:30.382 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[658a7d5b-2c6c-4d7a-babb-305caaf1918c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:43:30.391 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[5e656c5b-b28a-46c2-af36-bcfbfaa520e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:30 np0005539552 NetworkManager[48926]: <info>  [1764405810.3923] manager: (tapfaff47d9-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/397)
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:43:30.446 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[15b9959d-20da-40d2-bc7b-aeb430e800f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:43:30.449 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[521bf302-14a4-4ee0-8bc3-5f7c5c336044]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:30 np0005539552 NetworkManager[48926]: <info>  [1764405810.4913] device (tapfaff47d9-b0): carrier: link connected
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:43:30.508 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[7b4b93c9-997f-4b1c-9836-c23cca457981]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:43:30.537 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2739a9d1-aa68-4482-87b2-d9aab4bc352d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfaff47d9-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:83:20:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 264], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 885617, 'reachable_time': 43218, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316623, 'error': None, 'target': 'ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:43:30.563 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[186c6f6f-b024-4a03-b5b3-8a4a8a37ba26]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe83:20e5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 885617, 'tstamp': 885617}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316624, 'error': None, 'target': 'ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:43:30.583 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[27edf7ae-070b-4798-8f9c-9d6e370f29f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfaff47d9-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:83:20:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 264], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 885617, 'reachable_time': 43218, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 316625, 'error': None, 'target': 'ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:30 np0005539552 nova_compute[233724]: 2025-11-29 08:43:30.602 233728 DEBUG nova.compute.manager [req-c366e20d-73ad-453e-8431-7e5ccde6b6c4 req-87e79521-3cbd-4ee7-869f-951fccd2122e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Received event network-vif-plugged-ffc8d5ab-9373-4bc4-b60e-cde39ecf014f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:43:30 np0005539552 nova_compute[233724]: 2025-11-29 08:43:30.602 233728 DEBUG oslo_concurrency.lockutils [req-c366e20d-73ad-453e-8431-7e5ccde6b6c4 req-87e79521-3cbd-4ee7-869f-951fccd2122e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "3a9139ee-300c-4b0c-897f-218b8cde7e38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:30 np0005539552 nova_compute[233724]: 2025-11-29 08:43:30.603 233728 DEBUG oslo_concurrency.lockutils [req-c366e20d-73ad-453e-8431-7e5ccde6b6c4 req-87e79521-3cbd-4ee7-869f-951fccd2122e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3a9139ee-300c-4b0c-897f-218b8cde7e38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:30 np0005539552 nova_compute[233724]: 2025-11-29 08:43:30.603 233728 DEBUG oslo_concurrency.lockutils [req-c366e20d-73ad-453e-8431-7e5ccde6b6c4 req-87e79521-3cbd-4ee7-869f-951fccd2122e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3a9139ee-300c-4b0c-897f-218b8cde7e38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:30 np0005539552 nova_compute[233724]: 2025-11-29 08:43:30.604 233728 DEBUG nova.compute.manager [req-c366e20d-73ad-453e-8431-7e5ccde6b6c4 req-87e79521-3cbd-4ee7-869f-951fccd2122e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Processing event network-vif-plugged-ffc8d5ab-9373-4bc4-b60e-cde39ecf014f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:43:30.632 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4503f9e9-bd5f-442b-9360-1e1d979541d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:43:30.721 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b2f8ad9c-2fea-438e-88d7-512edd63dadd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:43:30.722 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfaff47d9-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:43:30.723 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:43:30.724 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfaff47d9-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:43:30 np0005539552 NetworkManager[48926]: <info>  [1764405810.7277] manager: (tapfaff47d9-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/398)
Nov 29 03:43:30 np0005539552 kernel: tapfaff47d9-b0: entered promiscuous mode
Nov 29 03:43:30 np0005539552 nova_compute[233724]: 2025-11-29 08:43:30.728 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:43:30.742 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfaff47d9-b0, col_values=(('external_ids', {'iface-id': '9a08cd83-c19d-4e8b-b8f6-ce380a01cac2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:43:30 np0005539552 ovn_controller[133798]: 2025-11-29T08:43:30Z|00897|binding|INFO|Releasing lport 9a08cd83-c19d-4e8b-b8f6-ce380a01cac2 from this chassis (sb_readonly=0)
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:43:30.748 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/faff47d9-b197-488e-92f7-8e8d5ec1eec7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/faff47d9-b197-488e-92f7-8e8d5ec1eec7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:43:30.749 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[be058bed-a7a6-437d-88e8-b99d08114eaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:43:30.751 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-faff47d9-b197-488e-92f7-8e8d5ec1eec7
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/faff47d9-b197-488e-92f7-8e8d5ec1eec7.pid.haproxy
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID faff47d9-b197-488e-92f7-8e8d5ec1eec7
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:43:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:43:30.752 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7', 'env', 'PROCESS_TAG=haproxy-faff47d9-b197-488e-92f7-8e8d5ec1eec7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/faff47d9-b197-488e-92f7-8e8d5ec1eec7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:43:30 np0005539552 nova_compute[233724]: 2025-11-29 08:43:30.755 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:30 np0005539552 nova_compute[233724]: 2025-11-29 08:43:30.781 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:30.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:31 np0005539552 nova_compute[233724]: 2025-11-29 08:43:31.097 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405811.0967214, 3a9139ee-300c-4b0c-897f-218b8cde7e38 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:43:31 np0005539552 nova_compute[233724]: 2025-11-29 08:43:31.098 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] VM Started (Lifecycle Event)#033[00m
Nov 29 03:43:31 np0005539552 nova_compute[233724]: 2025-11-29 08:43:31.101 233728 DEBUG nova.compute.manager [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:43:31 np0005539552 nova_compute[233724]: 2025-11-29 08:43:31.107 233728 DEBUG nova.virt.libvirt.driver [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:43:31 np0005539552 nova_compute[233724]: 2025-11-29 08:43:31.112 233728 INFO nova.virt.libvirt.driver [-] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Instance spawned successfully.#033[00m
Nov 29 03:43:31 np0005539552 nova_compute[233724]: 2025-11-29 08:43:31.115 233728 DEBUG nova.virt.libvirt.driver [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:43:31 np0005539552 nova_compute[233724]: 2025-11-29 08:43:31.119 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:43:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:31 np0005539552 nova_compute[233724]: 2025-11-29 08:43:31.126 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:43:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:31.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:31 np0005539552 nova_compute[233724]: 2025-11-29 08:43:31.143 233728 DEBUG nova.virt.libvirt.driver [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:43:31 np0005539552 nova_compute[233724]: 2025-11-29 08:43:31.143 233728 DEBUG nova.virt.libvirt.driver [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:43:31 np0005539552 nova_compute[233724]: 2025-11-29 08:43:31.144 233728 DEBUG nova.virt.libvirt.driver [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:43:31 np0005539552 nova_compute[233724]: 2025-11-29 08:43:31.145 233728 DEBUG nova.virt.libvirt.driver [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:43:31 np0005539552 nova_compute[233724]: 2025-11-29 08:43:31.145 233728 DEBUG nova.virt.libvirt.driver [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:43:31 np0005539552 nova_compute[233724]: 2025-11-29 08:43:31.146 233728 DEBUG nova.virt.libvirt.driver [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:43:31 np0005539552 nova_compute[233724]: 2025-11-29 08:43:31.154 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:43:31 np0005539552 nova_compute[233724]: 2025-11-29 08:43:31.154 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405811.0970442, 3a9139ee-300c-4b0c-897f-218b8cde7e38 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:43:31 np0005539552 nova_compute[233724]: 2025-11-29 08:43:31.155 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:43:31 np0005539552 nova_compute[233724]: 2025-11-29 08:43:31.190 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:43:31 np0005539552 nova_compute[233724]: 2025-11-29 08:43:31.195 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405811.1058524, 3a9139ee-300c-4b0c-897f-218b8cde7e38 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:43:31 np0005539552 nova_compute[233724]: 2025-11-29 08:43:31.195 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:43:31 np0005539552 nova_compute[233724]: 2025-11-29 08:43:31.216 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:43:31 np0005539552 nova_compute[233724]: 2025-11-29 08:43:31.220 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:43:31 np0005539552 nova_compute[233724]: 2025-11-29 08:43:31.223 233728 INFO nova.compute.manager [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Took 8.01 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:43:31 np0005539552 nova_compute[233724]: 2025-11-29 08:43:31.224 233728 DEBUG nova.compute.manager [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:43:31 np0005539552 podman[316700]: 2025-11-29 08:43:31.247058156 +0000 UTC m=+0.080836657 container create aa6fb338de06e16a885f29c3a80ec1cca8338b73435a58972609cedd4b03632f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 03:43:31 np0005539552 nova_compute[233724]: 2025-11-29 08:43:31.263 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:43:31 np0005539552 podman[316700]: 2025-11-29 08:43:31.210440581 +0000 UTC m=+0.044219092 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:43:31 np0005539552 systemd[1]: Started libpod-conmon-aa6fb338de06e16a885f29c3a80ec1cca8338b73435a58972609cedd4b03632f.scope.
Nov 29 03:43:31 np0005539552 nova_compute[233724]: 2025-11-29 08:43:31.322 233728 INFO nova.compute.manager [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Took 9.02 seconds to build instance.#033[00m
Nov 29 03:43:31 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:43:31 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d4f1f7fbce742442dc6c0ce1b76df61a4f463db1c6cc6dd9852a81522d6d1bd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:43:31 np0005539552 nova_compute[233724]: 2025-11-29 08:43:31.360 233728 DEBUG oslo_concurrency.lockutils [None req-f0a2f3b3-40f0-48e6-a1e0-457a3d6d4dc3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "3a9139ee-300c-4b0c-897f-218b8cde7e38" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.132s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:31 np0005539552 podman[316700]: 2025-11-29 08:43:31.377058336 +0000 UTC m=+0.210836887 container init aa6fb338de06e16a885f29c3a80ec1cca8338b73435a58972609cedd4b03632f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2)
Nov 29 03:43:31 np0005539552 podman[316700]: 2025-11-29 08:43:31.389680195 +0000 UTC m=+0.223458696 container start aa6fb338de06e16a885f29c3a80ec1cca8338b73435a58972609cedd4b03632f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:43:31 np0005539552 neutron-haproxy-ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7[316715]: [NOTICE]   (316719) : New worker (316721) forked
Nov 29 03:43:31 np0005539552 neutron-haproxy-ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7[316715]: [NOTICE]   (316719) : Loading success.
Nov 29 03:43:32 np0005539552 nova_compute[233724]: 2025-11-29 08:43:32.387 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:32 np0005539552 nova_compute[233724]: 2025-11-29 08:43:32.739 233728 DEBUG nova.compute.manager [req-0965d2ca-2cfa-43a4-8498-21a1f2e33957 req-14146bd1-e001-46d6-9d3e-93c974ab0369 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Received event network-vif-plugged-ffc8d5ab-9373-4bc4-b60e-cde39ecf014f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:43:32 np0005539552 nova_compute[233724]: 2025-11-29 08:43:32.739 233728 DEBUG oslo_concurrency.lockutils [req-0965d2ca-2cfa-43a4-8498-21a1f2e33957 req-14146bd1-e001-46d6-9d3e-93c974ab0369 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "3a9139ee-300c-4b0c-897f-218b8cde7e38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:32 np0005539552 nova_compute[233724]: 2025-11-29 08:43:32.740 233728 DEBUG oslo_concurrency.lockutils [req-0965d2ca-2cfa-43a4-8498-21a1f2e33957 req-14146bd1-e001-46d6-9d3e-93c974ab0369 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3a9139ee-300c-4b0c-897f-218b8cde7e38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:32 np0005539552 nova_compute[233724]: 2025-11-29 08:43:32.740 233728 DEBUG oslo_concurrency.lockutils [req-0965d2ca-2cfa-43a4-8498-21a1f2e33957 req-14146bd1-e001-46d6-9d3e-93c974ab0369 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3a9139ee-300c-4b0c-897f-218b8cde7e38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:32 np0005539552 nova_compute[233724]: 2025-11-29 08:43:32.741 233728 DEBUG nova.compute.manager [req-0965d2ca-2cfa-43a4-8498-21a1f2e33957 req-14146bd1-e001-46d6-9d3e-93c974ab0369 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] No waiting events found dispatching network-vif-plugged-ffc8d5ab-9373-4bc4-b60e-cde39ecf014f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:43:32 np0005539552 nova_compute[233724]: 2025-11-29 08:43:32.741 233728 WARNING nova.compute.manager [req-0965d2ca-2cfa-43a4-8498-21a1f2e33957 req-14146bd1-e001-46d6-9d3e-93c974ab0369 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Received unexpected event network-vif-plugged-ffc8d5ab-9373-4bc4-b60e-cde39ecf014f for instance with vm_state active and task_state None.#033[00m
Nov 29 03:43:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:32.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:33.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:33 np0005539552 nova_compute[233724]: 2025-11-29 08:43:33.160 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:34.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:35.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:43:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:36.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:43:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:37.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:37 np0005539552 nova_compute[233724]: 2025-11-29 08:43:37.390 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:38 np0005539552 NetworkManager[48926]: <info>  [1764405818.0824] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/399)
Nov 29 03:43:38 np0005539552 NetworkManager[48926]: <info>  [1764405818.0835] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/400)
Nov 29 03:43:38 np0005539552 nova_compute[233724]: 2025-11-29 08:43:38.081 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:38 np0005539552 nova_compute[233724]: 2025-11-29 08:43:38.163 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:38 np0005539552 nova_compute[233724]: 2025-11-29 08:43:38.377 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:38 np0005539552 ovn_controller[133798]: 2025-11-29T08:43:38Z|00898|binding|INFO|Releasing lport 9a08cd83-c19d-4e8b-b8f6-ce380a01cac2 from this chassis (sb_readonly=0)
Nov 29 03:43:38 np0005539552 nova_compute[233724]: 2025-11-29 08:43:38.401 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:38 np0005539552 nova_compute[233724]: 2025-11-29 08:43:38.470 233728 DEBUG nova.compute.manager [req-cf73a8a6-a9d6-43fd-9769-7b6d9600e089 req-a7e2dd49-cb0e-44b6-ab09-de405a183adf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Received event network-changed-ffc8d5ab-9373-4bc4-b60e-cde39ecf014f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:43:38 np0005539552 nova_compute[233724]: 2025-11-29 08:43:38.471 233728 DEBUG nova.compute.manager [req-cf73a8a6-a9d6-43fd-9769-7b6d9600e089 req-a7e2dd49-cb0e-44b6-ab09-de405a183adf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Refreshing instance network info cache due to event network-changed-ffc8d5ab-9373-4bc4-b60e-cde39ecf014f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:43:38 np0005539552 nova_compute[233724]: 2025-11-29 08:43:38.471 233728 DEBUG oslo_concurrency.lockutils [req-cf73a8a6-a9d6-43fd-9769-7b6d9600e089 req-a7e2dd49-cb0e-44b6-ab09-de405a183adf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-3a9139ee-300c-4b0c-897f-218b8cde7e38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:43:38 np0005539552 nova_compute[233724]: 2025-11-29 08:43:38.471 233728 DEBUG oslo_concurrency.lockutils [req-cf73a8a6-a9d6-43fd-9769-7b6d9600e089 req-a7e2dd49-cb0e-44b6-ab09-de405a183adf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-3a9139ee-300c-4b0c-897f-218b8cde7e38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:43:38 np0005539552 nova_compute[233724]: 2025-11-29 08:43:38.472 233728 DEBUG nova.network.neutron [req-cf73a8a6-a9d6-43fd-9769-7b6d9600e089 req-a7e2dd49-cb0e-44b6-ab09-de405a183adf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Refreshing network info cache for port ffc8d5ab-9373-4bc4-b60e-cde39ecf014f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:43:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:38.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:43:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1909201040' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:43:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:43:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1909201040' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:43:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:39.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:40.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:41.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:41 np0005539552 nova_compute[233724]: 2025-11-29 08:43:41.268 233728 DEBUG nova.network.neutron [req-cf73a8a6-a9d6-43fd-9769-7b6d9600e089 req-a7e2dd49-cb0e-44b6-ab09-de405a183adf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Updated VIF entry in instance network info cache for port ffc8d5ab-9373-4bc4-b60e-cde39ecf014f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:43:41 np0005539552 nova_compute[233724]: 2025-11-29 08:43:41.270 233728 DEBUG nova.network.neutron [req-cf73a8a6-a9d6-43fd-9769-7b6d9600e089 req-a7e2dd49-cb0e-44b6-ab09-de405a183adf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Updating instance_info_cache with network_info: [{"id": "ffc8d5ab-9373-4bc4-b60e-cde39ecf014f", "address": "fa:16:3e:cc:46:19", "network": {"id": "faff47d9-b197-488e-92f7-8e8d5ec1eec7", "bridge": "br-int", "label": "tempest-network-smoke--1652969305", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffc8d5ab-93", "ovs_interfaceid": "ffc8d5ab-9373-4bc4-b60e-cde39ecf014f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:43:41 np0005539552 nova_compute[233724]: 2025-11-29 08:43:41.297 233728 DEBUG oslo_concurrency.lockutils [req-cf73a8a6-a9d6-43fd-9769-7b6d9600e089 req-a7e2dd49-cb0e-44b6-ab09-de405a183adf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-3a9139ee-300c-4b0c-897f-218b8cde7e38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:43:42 np0005539552 podman[316741]: 2025-11-29 08:43:42.010938243 +0000 UTC m=+0.094803903 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 29 03:43:42 np0005539552 podman[316740]: 2025-11-29 08:43:42.015332501 +0000 UTC m=+0.092094660 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 03:43:42 np0005539552 podman[316742]: 2025-11-29 08:43:42.037179999 +0000 UTC m=+0.109056496 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 03:43:42 np0005539552 nova_compute[233724]: 2025-11-29 08:43:42.393 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:43:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:42.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:43:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:43:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:43.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:43:43 np0005539552 nova_compute[233724]: 2025-11-29 08:43:43.165 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:44 np0005539552 ovn_controller[133798]: 2025-11-29T08:43:44Z|00117|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cc:46:19 10.100.0.6
Nov 29 03:43:44 np0005539552 ovn_controller[133798]: 2025-11-29T08:43:44Z|00118|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cc:46:19 10.100.0.6
Nov 29 03:43:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:44.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:44 np0005539552 nova_compute[233724]: 2025-11-29 08:43:44.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:43:44 np0005539552 nova_compute[233724]: 2025-11-29 08:43:44.955 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:44 np0005539552 nova_compute[233724]: 2025-11-29 08:43:44.956 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:44 np0005539552 nova_compute[233724]: 2025-11-29 08:43:44.956 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:44 np0005539552 nova_compute[233724]: 2025-11-29 08:43:44.957 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:43:44 np0005539552 nova_compute[233724]: 2025-11-29 08:43:44.957 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:45.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:45 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:43:45 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3771839287' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:43:45 np0005539552 nova_compute[233724]: 2025-11-29 08:43:45.410 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:43:45 np0005539552 nova_compute[233724]: 2025-11-29 08:43:45.488 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-000000c9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:43:45 np0005539552 nova_compute[233724]: 2025-11-29 08:43:45.489 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-000000c9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:43:45 np0005539552 nova_compute[233724]: 2025-11-29 08:43:45.715 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:43:45 np0005539552 nova_compute[233724]: 2025-11-29 08:43:45.717 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3978MB free_disk=20.897357940673828GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:43:45 np0005539552 nova_compute[233724]: 2025-11-29 08:43:45.717 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:45 np0005539552 nova_compute[233724]: 2025-11-29 08:43:45.717 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:45 np0005539552 nova_compute[233724]: 2025-11-29 08:43:45.784 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 3a9139ee-300c-4b0c-897f-218b8cde7e38 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:43:45 np0005539552 nova_compute[233724]: 2025-11-29 08:43:45.785 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:43:45 np0005539552 nova_compute[233724]: 2025-11-29 08:43:45.785 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:43:45 np0005539552 nova_compute[233724]: 2025-11-29 08:43:45.828 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:46 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:43:46 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2826183560' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:43:46 np0005539552 nova_compute[233724]: 2025-11-29 08:43:46.276 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:43:46 np0005539552 nova_compute[233724]: 2025-11-29 08:43:46.284 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:43:46 np0005539552 nova_compute[233724]: 2025-11-29 08:43:46.303 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:43:46 np0005539552 nova_compute[233724]: 2025-11-29 08:43:46.332 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:43:46 np0005539552 nova_compute[233724]: 2025-11-29 08:43:46.332 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:43:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:46.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:43:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:47.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:47 np0005539552 nova_compute[233724]: 2025-11-29 08:43:47.396 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:47 np0005539552 ovn_controller[133798]: 2025-11-29T08:43:47Z|00899|binding|INFO|Releasing lport 9a08cd83-c19d-4e8b-b8f6-ce380a01cac2 from this chassis (sb_readonly=0)
Nov 29 03:43:47 np0005539552 nova_compute[233724]: 2025-11-29 08:43:47.561 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:48 np0005539552 nova_compute[233724]: 2025-11-29 08:43:48.167 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:48.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:49.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:43:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:50.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:43:50 np0005539552 nova_compute[233724]: 2025-11-29 08:43:50.955 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:51.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:51 np0005539552 nova_compute[233724]: 2025-11-29 08:43:51.300 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:43:51.300 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=59, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=58) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:43:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:43:51.302 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:43:52 np0005539552 nova_compute[233724]: 2025-11-29 08:43:52.333 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:43:52 np0005539552 nova_compute[233724]: 2025-11-29 08:43:52.334 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:43:52 np0005539552 nova_compute[233724]: 2025-11-29 08:43:52.399 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:43:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:52.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:43:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:53.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:53 np0005539552 nova_compute[233724]: 2025-11-29 08:43:53.169 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:53 np0005539552 nova_compute[233724]: 2025-11-29 08:43:53.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:43:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:54.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:55.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:55 np0005539552 nova_compute[233724]: 2025-11-29 08:43:55.704 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:55 np0005539552 nova_compute[233724]: 2025-11-29 08:43:55.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:43:55 np0005539552 nova_compute[233724]: 2025-11-29 08:43:55.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:43:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:56.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:57.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:57 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:43:57.305 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '59'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:43:57 np0005539552 nova_compute[233724]: 2025-11-29 08:43:57.402 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:57 np0005539552 nova_compute[233724]: 2025-11-29 08:43:57.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:43:57 np0005539552 nova_compute[233724]: 2025-11-29 08:43:57.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:43:58 np0005539552 nova_compute[233724]: 2025-11-29 08:43:58.171 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:58.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:58 np0005539552 nova_compute[233724]: 2025-11-29 08:43:58.920 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:43:58 np0005539552 nova_compute[233724]: 2025-11-29 08:43:58.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:43:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:43:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:43:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:59.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:44:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:00.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:00 np0005539552 nova_compute[233724]: 2025-11-29 08:44:00.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:44:00 np0005539552 nova_compute[233724]: 2025-11-29 08:44:00.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:44:00 np0005539552 nova_compute[233724]: 2025-11-29 08:44:00.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:44:01 np0005539552 nova_compute[233724]: 2025-11-29 08:44:01.129 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "refresh_cache-3a9139ee-300c-4b0c-897f-218b8cde7e38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:44:01 np0005539552 nova_compute[233724]: 2025-11-29 08:44:01.129 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquired lock "refresh_cache-3a9139ee-300c-4b0c-897f-218b8cde7e38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:44:01 np0005539552 nova_compute[233724]: 2025-11-29 08:44:01.130 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:44:01 np0005539552 nova_compute[233724]: 2025-11-29 08:44:01.130 233728 DEBUG nova.objects.instance [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3a9139ee-300c-4b0c-897f-218b8cde7e38 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:44:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:01.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:02 np0005539552 nova_compute[233724]: 2025-11-29 08:44:02.405 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:02.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:03.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:03 np0005539552 nova_compute[233724]: 2025-11-29 08:44:03.173 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:03 np0005539552 nova_compute[233724]: 2025-11-29 08:44:03.399 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Updating instance_info_cache with network_info: [{"id": "ffc8d5ab-9373-4bc4-b60e-cde39ecf014f", "address": "fa:16:3e:cc:46:19", "network": {"id": "faff47d9-b197-488e-92f7-8e8d5ec1eec7", "bridge": "br-int", "label": "tempest-network-smoke--1652969305", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffc8d5ab-93", "ovs_interfaceid": "ffc8d5ab-9373-4bc4-b60e-cde39ecf014f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:44:03 np0005539552 nova_compute[233724]: 2025-11-29 08:44:03.417 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Releasing lock "refresh_cache-3a9139ee-300c-4b0c-897f-218b8cde7e38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:44:03 np0005539552 nova_compute[233724]: 2025-11-29 08:44:03.418 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:44:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:44:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:04.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:44:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:05.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:06.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:07.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:07 np0005539552 nova_compute[233724]: 2025-11-29 08:44:07.407 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:08 np0005539552 nova_compute[233724]: 2025-11-29 08:44:08.175 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:08.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:09.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:44:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:10.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:44:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:11.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:11 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:44:11 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:44:11 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:44:12 np0005539552 nova_compute[233724]: 2025-11-29 08:44:12.411 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:12 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:44:12 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:44:12 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:44:12 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:44:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:12.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:13 np0005539552 podman[317096]: 2025-11-29 08:44:13.019994268 +0000 UTC m=+0.100617860 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:44:13 np0005539552 podman[317097]: 2025-11-29 08:44:13.032158425 +0000 UTC m=+0.109729875 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:44:13 np0005539552 podman[317098]: 2025-11-29 08:44:13.053299004 +0000 UTC m=+0.123380222 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 03:44:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:44:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:13.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:44:13 np0005539552 nova_compute[233724]: 2025-11-29 08:44:13.176 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:44:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:14.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:44:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:15.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:44:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:16.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:44:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:17.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:17 np0005539552 nova_compute[233724]: 2025-11-29 08:44:17.413 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:18 np0005539552 nova_compute[233724]: 2025-11-29 08:44:18.179 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:18.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:19 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:44:19 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:44:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:19.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:20.650 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:20.650 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:20.651 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:20.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:21.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:22 np0005539552 nova_compute[233724]: 2025-11-29 08:44:22.417 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:22 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e400 e400: 3 total, 3 up, 3 in
Nov 29 03:44:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:44:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:22.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:44:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:23.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:23 np0005539552 nova_compute[233724]: 2025-11-29 08:44:23.181 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:24.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:25.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:25 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e401 e401: 3 total, 3 up, 3 in
Nov 29 03:44:26 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e402 e402: 3 total, 3 up, 3 in
Nov 29 03:44:26 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #136. Immutable memtables: 0.
Nov 29 03:44:26 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:44:26.709021) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:44:26 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 136
Nov 29 03:44:26 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405866709139, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 1334, "num_deletes": 251, "total_data_size": 2882030, "memory_usage": 2920376, "flush_reason": "Manual Compaction"}
Nov 29 03:44:26 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #137: started
Nov 29 03:44:26 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405866725906, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 137, "file_size": 1879311, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 66934, "largest_seqno": 68263, "table_properties": {"data_size": 1873521, "index_size": 3120, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 13007, "raw_average_key_size": 20, "raw_value_size": 1861649, "raw_average_value_size": 2908, "num_data_blocks": 137, "num_entries": 640, "num_filter_entries": 640, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405765, "oldest_key_time": 1764405765, "file_creation_time": 1764405866, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:44:26 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 16953 microseconds, and 8784 cpu microseconds.
Nov 29 03:44:26 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:44:26 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:44:26.725980) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #137: 1879311 bytes OK
Nov 29 03:44:26 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:44:26.726007) [db/memtable_list.cc:519] [default] Level-0 commit table #137 started
Nov 29 03:44:26 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:44:26.728173) [db/memtable_list.cc:722] [default] Level-0 commit table #137: memtable #1 done
Nov 29 03:44:26 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:44:26.728196) EVENT_LOG_v1 {"time_micros": 1764405866728189, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:44:26 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:44:26.728219) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:44:26 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 2875657, prev total WAL file size 2875657, number of live WAL files 2.
Nov 29 03:44:26 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000133.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:44:26 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:44:26.729592) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Nov 29 03:44:26 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:44:26 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [137(1835KB)], [135(12MB)]
Nov 29 03:44:26 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405866729676, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [137], "files_L6": [135], "score": -1, "input_data_size": 14798433, "oldest_snapshot_seqno": -1}
Nov 29 03:44:26 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #138: 9728 keys, 12856309 bytes, temperature: kUnknown
Nov 29 03:44:26 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405866838403, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 138, "file_size": 12856309, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12793067, "index_size": 37849, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24325, "raw_key_size": 257384, "raw_average_key_size": 26, "raw_value_size": 12621627, "raw_average_value_size": 1297, "num_data_blocks": 1439, "num_entries": 9728, "num_filter_entries": 9728, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764405866, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 138, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:44:26 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:44:26 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:44:26.838790) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 12856309 bytes
Nov 29 03:44:26 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:44:26.840463) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 135.9 rd, 118.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 12.3 +0.0 blob) out(12.3 +0.0 blob), read-write-amplify(14.7) write-amplify(6.8) OK, records in: 10249, records dropped: 521 output_compression: NoCompression
Nov 29 03:44:26 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:44:26.840494) EVENT_LOG_v1 {"time_micros": 1764405866840480, "job": 86, "event": "compaction_finished", "compaction_time_micros": 108862, "compaction_time_cpu_micros": 56647, "output_level": 6, "num_output_files": 1, "total_output_size": 12856309, "num_input_records": 10249, "num_output_records": 9728, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:44:26 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:44:26 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405866841788, "job": 86, "event": "table_file_deletion", "file_number": 137}
Nov 29 03:44:26 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000135.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:44:26 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405866847088, "job": 86, "event": "table_file_deletion", "file_number": 135}
Nov 29 03:44:26 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:44:26.729455) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:44:26 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:44:26.847304) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:44:26 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:44:26.847314) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:44:26 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:44:26.847318) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:44:26 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:44:26.847323) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:44:26 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:44:26.847328) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:44:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:44:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:26.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:44:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:27.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:27 np0005539552 nova_compute[233724]: 2025-11-29 08:44:27.420 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e403 e403: 3 total, 3 up, 3 in
Nov 29 03:44:28 np0005539552 nova_compute[233724]: 2025-11-29 08:44:28.183 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:28.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:29.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:30.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:31.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:32 np0005539552 nova_compute[233724]: 2025-11-29 08:44:32.423 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:32 np0005539552 nova_compute[233724]: 2025-11-29 08:44:32.618 233728 INFO nova.compute.manager [None req-ea90a5c7-7ed8-49e7-8451-2ac378917dea 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Get console output#033[00m
Nov 29 03:44:32 np0005539552 nova_compute[233724]: 2025-11-29 08:44:32.628 279702 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:44:32 np0005539552 nova_compute[233724]: 2025-11-29 08:44:32.833 233728 DEBUG oslo_concurrency.lockutils [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Acquiring lock "4208aeda-c433-4ac5-8312-fb09ac9f4f83" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:32 np0005539552 nova_compute[233724]: 2025-11-29 08:44:32.833 233728 DEBUG oslo_concurrency.lockutils [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Lock "4208aeda-c433-4ac5-8312-fb09ac9f4f83" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:32 np0005539552 nova_compute[233724]: 2025-11-29 08:44:32.848 233728 DEBUG nova.compute.manager [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:44:32 np0005539552 nova_compute[233724]: 2025-11-29 08:44:32.931 233728 DEBUG oslo_concurrency.lockutils [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:32 np0005539552 nova_compute[233724]: 2025-11-29 08:44:32.931 233728 DEBUG oslo_concurrency.lockutils [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:32 np0005539552 nova_compute[233724]: 2025-11-29 08:44:32.941 233728 DEBUG nova.virt.hardware [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:44:32 np0005539552 nova_compute[233724]: 2025-11-29 08:44:32.941 233728 INFO nova.compute.claims [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:44:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:32.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:33 np0005539552 nova_compute[233724]: 2025-11-29 08:44:33.056 233728 DEBUG oslo_concurrency.processutils [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:44:33 np0005539552 nova_compute[233724]: 2025-11-29 08:44:33.185 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:33.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:44:33 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/110293649' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:44:33 np0005539552 nova_compute[233724]: 2025-11-29 08:44:33.506 233728 DEBUG oslo_concurrency.processutils [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:44:33 np0005539552 nova_compute[233724]: 2025-11-29 08:44:33.516 233728 DEBUG nova.compute.provider_tree [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:44:33 np0005539552 nova_compute[233724]: 2025-11-29 08:44:33.540 233728 DEBUG nova.scheduler.client.report [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:44:33 np0005539552 nova_compute[233724]: 2025-11-29 08:44:33.563 233728 DEBUG oslo_concurrency.lockutils [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:33 np0005539552 nova_compute[233724]: 2025-11-29 08:44:33.564 233728 DEBUG nova.compute.manager [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:44:33 np0005539552 nova_compute[233724]: 2025-11-29 08:44:33.633 233728 DEBUG nova.compute.manager [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:44:33 np0005539552 nova_compute[233724]: 2025-11-29 08:44:33.634 233728 DEBUG nova.network.neutron [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:44:33 np0005539552 nova_compute[233724]: 2025-11-29 08:44:33.659 233728 INFO nova.virt.libvirt.driver [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:44:33 np0005539552 nova_compute[233724]: 2025-11-29 08:44:33.667 233728 DEBUG nova.compute.manager [req-24848562-30b1-4cdf-92d2-94fa8f0641c3 req-ab14c9eb-6440-4442-b09b-a73de568968f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Received event network-changed-ffc8d5ab-9373-4bc4-b60e-cde39ecf014f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:44:33 np0005539552 nova_compute[233724]: 2025-11-29 08:44:33.668 233728 DEBUG nova.compute.manager [req-24848562-30b1-4cdf-92d2-94fa8f0641c3 req-ab14c9eb-6440-4442-b09b-a73de568968f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Refreshing instance network info cache due to event network-changed-ffc8d5ab-9373-4bc4-b60e-cde39ecf014f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:44:33 np0005539552 nova_compute[233724]: 2025-11-29 08:44:33.669 233728 DEBUG oslo_concurrency.lockutils [req-24848562-30b1-4cdf-92d2-94fa8f0641c3 req-ab14c9eb-6440-4442-b09b-a73de568968f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-3a9139ee-300c-4b0c-897f-218b8cde7e38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:44:33 np0005539552 nova_compute[233724]: 2025-11-29 08:44:33.669 233728 DEBUG oslo_concurrency.lockutils [req-24848562-30b1-4cdf-92d2-94fa8f0641c3 req-ab14c9eb-6440-4442-b09b-a73de568968f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-3a9139ee-300c-4b0c-897f-218b8cde7e38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:44:33 np0005539552 nova_compute[233724]: 2025-11-29 08:44:33.670 233728 DEBUG nova.network.neutron [req-24848562-30b1-4cdf-92d2-94fa8f0641c3 req-ab14c9eb-6440-4442-b09b-a73de568968f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Refreshing network info cache for port ffc8d5ab-9373-4bc4-b60e-cde39ecf014f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:44:33 np0005539552 nova_compute[233724]: 2025-11-29 08:44:33.694 233728 DEBUG nova.compute.manager [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:44:33 np0005539552 nova_compute[233724]: 2025-11-29 08:44:33.723 233728 DEBUG nova.compute.manager [req-0491f92e-128f-4e64-bab5-12496df76be0 req-ba0ac244-3c32-40ce-bbfd-7e16e247c7ef 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Received event network-vif-unplugged-ffc8d5ab-9373-4bc4-b60e-cde39ecf014f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:44:33 np0005539552 nova_compute[233724]: 2025-11-29 08:44:33.724 233728 DEBUG oslo_concurrency.lockutils [req-0491f92e-128f-4e64-bab5-12496df76be0 req-ba0ac244-3c32-40ce-bbfd-7e16e247c7ef 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "3a9139ee-300c-4b0c-897f-218b8cde7e38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:33 np0005539552 nova_compute[233724]: 2025-11-29 08:44:33.725 233728 DEBUG oslo_concurrency.lockutils [req-0491f92e-128f-4e64-bab5-12496df76be0 req-ba0ac244-3c32-40ce-bbfd-7e16e247c7ef 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3a9139ee-300c-4b0c-897f-218b8cde7e38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:33 np0005539552 nova_compute[233724]: 2025-11-29 08:44:33.725 233728 DEBUG oslo_concurrency.lockutils [req-0491f92e-128f-4e64-bab5-12496df76be0 req-ba0ac244-3c32-40ce-bbfd-7e16e247c7ef 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3a9139ee-300c-4b0c-897f-218b8cde7e38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:33 np0005539552 nova_compute[233724]: 2025-11-29 08:44:33.726 233728 DEBUG nova.compute.manager [req-0491f92e-128f-4e64-bab5-12496df76be0 req-ba0ac244-3c32-40ce-bbfd-7e16e247c7ef 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] No waiting events found dispatching network-vif-unplugged-ffc8d5ab-9373-4bc4-b60e-cde39ecf014f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:44:33 np0005539552 nova_compute[233724]: 2025-11-29 08:44:33.726 233728 WARNING nova.compute.manager [req-0491f92e-128f-4e64-bab5-12496df76be0 req-ba0ac244-3c32-40ce-bbfd-7e16e247c7ef 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Received unexpected event network-vif-unplugged-ffc8d5ab-9373-4bc4-b60e-cde39ecf014f for instance with vm_state active and task_state None.#033[00m
Nov 29 03:44:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:33 np0005539552 nova_compute[233724]: 2025-11-29 08:44:33.808 233728 DEBUG nova.compute.manager [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:44:33 np0005539552 nova_compute[233724]: 2025-11-29 08:44:33.810 233728 DEBUG nova.virt.libvirt.driver [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:44:33 np0005539552 nova_compute[233724]: 2025-11-29 08:44:33.811 233728 INFO nova.virt.libvirt.driver [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Creating image(s)#033[00m
Nov 29 03:44:33 np0005539552 nova_compute[233724]: 2025-11-29 08:44:33.851 233728 DEBUG nova.storage.rbd_utils [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] rbd image 4208aeda-c433-4ac5-8312-fb09ac9f4f83_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:44:33 np0005539552 nova_compute[233724]: 2025-11-29 08:44:33.885 233728 DEBUG nova.storage.rbd_utils [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] rbd image 4208aeda-c433-4ac5-8312-fb09ac9f4f83_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:44:33 np0005539552 nova_compute[233724]: 2025-11-29 08:44:33.917 233728 DEBUG nova.storage.rbd_utils [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] rbd image 4208aeda-c433-4ac5-8312-fb09ac9f4f83_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:44:33 np0005539552 nova_compute[233724]: 2025-11-29 08:44:33.920 233728 DEBUG oslo_concurrency.lockutils [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Acquiring lock "ddf838527dc45a22f443bf4d02e84183baa61858" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:33 np0005539552 nova_compute[233724]: 2025-11-29 08:44:33.921 233728 DEBUG oslo_concurrency.lockutils [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Lock "ddf838527dc45a22f443bf4d02e84183baa61858" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:33 np0005539552 nova_compute[233724]: 2025-11-29 08:44:33.929 233728 DEBUG nova.policy [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5ac9cfdaf51b4a5ab874f7e3571f88a0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '66ee3b60fb89476383201ba204858d4d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:44:34 np0005539552 nova_compute[233724]: 2025-11-29 08:44:34.162 233728 DEBUG nova.virt.libvirt.imagebackend [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Image locations are: [{'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/19c04997-1e5f-42a9-90ff-a53c93a49ed0/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/19c04997-1e5f-42a9-90ff-a53c93a49ed0/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 29 03:44:34 np0005539552 nova_compute[233724]: 2025-11-29 08:44:34.251 233728 DEBUG nova.virt.libvirt.imagebackend [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Selected location: {'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/19c04997-1e5f-42a9-90ff-a53c93a49ed0/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Nov 29 03:44:34 np0005539552 nova_compute[233724]: 2025-11-29 08:44:34.252 233728 DEBUG nova.storage.rbd_utils [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] cloning images/19c04997-1e5f-42a9-90ff-a53c93a49ed0@snap to None/4208aeda-c433-4ac5-8312-fb09ac9f4f83_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:44:34 np0005539552 nova_compute[233724]: 2025-11-29 08:44:34.421 233728 DEBUG oslo_concurrency.lockutils [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Lock "ddf838527dc45a22f443bf4d02e84183baa61858" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.500s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:34 np0005539552 nova_compute[233724]: 2025-11-29 08:44:34.639 233728 DEBUG nova.objects.instance [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Lazy-loading 'migration_context' on Instance uuid 4208aeda-c433-4ac5-8312-fb09ac9f4f83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:44:34 np0005539552 nova_compute[233724]: 2025-11-29 08:44:34.656 233728 DEBUG nova.virt.libvirt.driver [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:44:34 np0005539552 nova_compute[233724]: 2025-11-29 08:44:34.656 233728 DEBUG nova.virt.libvirt.driver [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Ensure instance console log exists: /var/lib/nova/instances/4208aeda-c433-4ac5-8312-fb09ac9f4f83/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:44:34 np0005539552 nova_compute[233724]: 2025-11-29 08:44:34.657 233728 DEBUG oslo_concurrency.lockutils [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:34 np0005539552 nova_compute[233724]: 2025-11-29 08:44:34.657 233728 DEBUG oslo_concurrency.lockutils [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:34 np0005539552 nova_compute[233724]: 2025-11-29 08:44:34.657 233728 DEBUG oslo_concurrency.lockutils [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:34 np0005539552 nova_compute[233724]: 2025-11-29 08:44:34.696 233728 INFO nova.compute.manager [None req-cb55a2b6-002c-45a8-9bfe-a1fe2fe52913 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Get console output#033[00m
Nov 29 03:44:34 np0005539552 nova_compute[233724]: 2025-11-29 08:44:34.701 279702 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:44:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:34.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:44:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:35.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:44:35 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e404 e404: 3 total, 3 up, 3 in
Nov 29 03:44:35 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:35.623 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=60, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=59) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:44:35 np0005539552 nova_compute[233724]: 2025-11-29 08:44:35.623 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:35 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:35.625 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:44:35 np0005539552 nova_compute[233724]: 2025-11-29 08:44:35.756 233728 DEBUG nova.network.neutron [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Successfully created port: 5e109659-0e3c-4696-9678-2c5962204b98 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:44:35 np0005539552 nova_compute[233724]: 2025-11-29 08:44:35.818 233728 DEBUG nova.compute.manager [req-0191ab75-ab73-4e99-b0ce-22c621bdf62b req-239865a6-72e6-458e-a911-048bd6b4fb6e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Received event network-vif-plugged-ffc8d5ab-9373-4bc4-b60e-cde39ecf014f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:44:35 np0005539552 nova_compute[233724]: 2025-11-29 08:44:35.819 233728 DEBUG oslo_concurrency.lockutils [req-0191ab75-ab73-4e99-b0ce-22c621bdf62b req-239865a6-72e6-458e-a911-048bd6b4fb6e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "3a9139ee-300c-4b0c-897f-218b8cde7e38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:35 np0005539552 nova_compute[233724]: 2025-11-29 08:44:35.819 233728 DEBUG oslo_concurrency.lockutils [req-0191ab75-ab73-4e99-b0ce-22c621bdf62b req-239865a6-72e6-458e-a911-048bd6b4fb6e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3a9139ee-300c-4b0c-897f-218b8cde7e38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:35 np0005539552 nova_compute[233724]: 2025-11-29 08:44:35.820 233728 DEBUG oslo_concurrency.lockutils [req-0191ab75-ab73-4e99-b0ce-22c621bdf62b req-239865a6-72e6-458e-a911-048bd6b4fb6e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3a9139ee-300c-4b0c-897f-218b8cde7e38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:35 np0005539552 nova_compute[233724]: 2025-11-29 08:44:35.820 233728 DEBUG nova.compute.manager [req-0191ab75-ab73-4e99-b0ce-22c621bdf62b req-239865a6-72e6-458e-a911-048bd6b4fb6e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] No waiting events found dispatching network-vif-plugged-ffc8d5ab-9373-4bc4-b60e-cde39ecf014f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:44:35 np0005539552 nova_compute[233724]: 2025-11-29 08:44:35.821 233728 WARNING nova.compute.manager [req-0191ab75-ab73-4e99-b0ce-22c621bdf62b req-239865a6-72e6-458e-a911-048bd6b4fb6e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Received unexpected event network-vif-plugged-ffc8d5ab-9373-4bc4-b60e-cde39ecf014f for instance with vm_state active and task_state None.#033[00m
Nov 29 03:44:36 np0005539552 nova_compute[233724]: 2025-11-29 08:44:36.000 233728 DEBUG nova.network.neutron [req-24848562-30b1-4cdf-92d2-94fa8f0641c3 req-ab14c9eb-6440-4442-b09b-a73de568968f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Updated VIF entry in instance network info cache for port ffc8d5ab-9373-4bc4-b60e-cde39ecf014f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:44:36 np0005539552 nova_compute[233724]: 2025-11-29 08:44:36.001 233728 DEBUG nova.network.neutron [req-24848562-30b1-4cdf-92d2-94fa8f0641c3 req-ab14c9eb-6440-4442-b09b-a73de568968f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Updating instance_info_cache with network_info: [{"id": "ffc8d5ab-9373-4bc4-b60e-cde39ecf014f", "address": "fa:16:3e:cc:46:19", "network": {"id": "faff47d9-b197-488e-92f7-8e8d5ec1eec7", "bridge": "br-int", "label": "tempest-network-smoke--1652969305", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffc8d5ab-93", "ovs_interfaceid": "ffc8d5ab-9373-4bc4-b60e-cde39ecf014f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:44:36 np0005539552 nova_compute[233724]: 2025-11-29 08:44:36.028 233728 DEBUG oslo_concurrency.lockutils [req-24848562-30b1-4cdf-92d2-94fa8f0641c3 req-ab14c9eb-6440-4442-b09b-a73de568968f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-3a9139ee-300c-4b0c-897f-218b8cde7e38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:44:36 np0005539552 nova_compute[233724]: 2025-11-29 08:44:36.464 233728 DEBUG nova.network.neutron [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Successfully updated port: 5e109659-0e3c-4696-9678-2c5962204b98 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:44:36 np0005539552 nova_compute[233724]: 2025-11-29 08:44:36.478 233728 DEBUG oslo_concurrency.lockutils [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Acquiring lock "refresh_cache-4208aeda-c433-4ac5-8312-fb09ac9f4f83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:44:36 np0005539552 nova_compute[233724]: 2025-11-29 08:44:36.479 233728 DEBUG oslo_concurrency.lockutils [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Acquired lock "refresh_cache-4208aeda-c433-4ac5-8312-fb09ac9f4f83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:44:36 np0005539552 nova_compute[233724]: 2025-11-29 08:44:36.479 233728 DEBUG nova.network.neutron [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:44:36 np0005539552 nova_compute[233724]: 2025-11-29 08:44:36.578 233728 INFO nova.compute.manager [None req-5a3f6260-f6b1-49b2-8e79-09673fab3dba 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Get console output#033[00m
Nov 29 03:44:36 np0005539552 nova_compute[233724]: 2025-11-29 08:44:36.583 279702 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:44:36 np0005539552 nova_compute[233724]: 2025-11-29 08:44:36.683 233728 DEBUG nova.network.neutron [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:44:36 np0005539552 nova_compute[233724]: 2025-11-29 08:44:36.698 233728 DEBUG nova.compute.manager [req-c8576655-c33c-4bc4-87bf-c223119b2bd3 req-a2d7500c-0cd2-4fda-be8e-2dc0f219cfed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Received event network-changed-5e109659-0e3c-4696-9678-2c5962204b98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:44:36 np0005539552 nova_compute[233724]: 2025-11-29 08:44:36.698 233728 DEBUG nova.compute.manager [req-c8576655-c33c-4bc4-87bf-c223119b2bd3 req-a2d7500c-0cd2-4fda-be8e-2dc0f219cfed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Refreshing instance network info cache due to event network-changed-5e109659-0e3c-4696-9678-2c5962204b98. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:44:36 np0005539552 nova_compute[233724]: 2025-11-29 08:44:36.698 233728 DEBUG oslo_concurrency.lockutils [req-c8576655-c33c-4bc4-87bf-c223119b2bd3 req-a2d7500c-0cd2-4fda-be8e-2dc0f219cfed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-4208aeda-c433-4ac5-8312-fb09ac9f4f83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:44:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:36.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:44:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:37.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:44:37 np0005539552 nova_compute[233724]: 2025-11-29 08:44:37.426 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:37 np0005539552 nova_compute[233724]: 2025-11-29 08:44:37.449 233728 DEBUG nova.network.neutron [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Updating instance_info_cache with network_info: [{"id": "5e109659-0e3c-4696-9678-2c5962204b98", "address": "fa:16:3e:88:43:29", "network": {"id": "c60162ec-f468-4b5f-bd91-89b0a1cb9fa1", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-514529391-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66ee3b60fb89476383201ba204858d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e109659-0e", "ovs_interfaceid": "5e109659-0e3c-4696-9678-2c5962204b98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:44:37 np0005539552 nova_compute[233724]: 2025-11-29 08:44:37.470 233728 DEBUG oslo_concurrency.lockutils [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Releasing lock "refresh_cache-4208aeda-c433-4ac5-8312-fb09ac9f4f83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:44:37 np0005539552 nova_compute[233724]: 2025-11-29 08:44:37.471 233728 DEBUG nova.compute.manager [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Instance network_info: |[{"id": "5e109659-0e3c-4696-9678-2c5962204b98", "address": "fa:16:3e:88:43:29", "network": {"id": "c60162ec-f468-4b5f-bd91-89b0a1cb9fa1", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-514529391-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66ee3b60fb89476383201ba204858d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e109659-0e", "ovs_interfaceid": "5e109659-0e3c-4696-9678-2c5962204b98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:44:37 np0005539552 nova_compute[233724]: 2025-11-29 08:44:37.471 233728 DEBUG oslo_concurrency.lockutils [req-c8576655-c33c-4bc4-87bf-c223119b2bd3 req-a2d7500c-0cd2-4fda-be8e-2dc0f219cfed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-4208aeda-c433-4ac5-8312-fb09ac9f4f83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:44:37 np0005539552 nova_compute[233724]: 2025-11-29 08:44:37.472 233728 DEBUG nova.network.neutron [req-c8576655-c33c-4bc4-87bf-c223119b2bd3 req-a2d7500c-0cd2-4fda-be8e-2dc0f219cfed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Refreshing network info cache for port 5e109659-0e3c-4696-9678-2c5962204b98 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:44:37 np0005539552 nova_compute[233724]: 2025-11-29 08:44:37.477 233728 DEBUG nova.virt.libvirt.driver [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Start _get_guest_xml network_info=[{"id": "5e109659-0e3c-4696-9678-2c5962204b98", "address": "fa:16:3e:88:43:29", "network": {"id": "c60162ec-f468-4b5f-bd91-89b0a1cb9fa1", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-514529391-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66ee3b60fb89476383201ba204858d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e109659-0e", "ovs_interfaceid": "5e109659-0e3c-4696-9678-2c5962204b98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-29T08:44:24Z,direct_url=<?>,disk_format='raw',id=19c04997-1e5f-42a9-90ff-a53c93a49ed0,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-1064327949',owner='66ee3b60fb89476383201ba204858d4d',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-29T08:44:29Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '19c04997-1e5f-42a9-90ff-a53c93a49ed0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:44:37 np0005539552 nova_compute[233724]: 2025-11-29 08:44:37.485 233728 WARNING nova.virt.libvirt.driver [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:44:37 np0005539552 nova_compute[233724]: 2025-11-29 08:44:37.494 233728 DEBUG nova.virt.libvirt.host [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:44:37 np0005539552 nova_compute[233724]: 2025-11-29 08:44:37.495 233728 DEBUG nova.virt.libvirt.host [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:44:37 np0005539552 nova_compute[233724]: 2025-11-29 08:44:37.501 233728 DEBUG nova.virt.libvirt.host [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:44:37 np0005539552 nova_compute[233724]: 2025-11-29 08:44:37.502 233728 DEBUG nova.virt.libvirt.host [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:44:37 np0005539552 nova_compute[233724]: 2025-11-29 08:44:37.504 233728 DEBUG nova.virt.libvirt.driver [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:44:37 np0005539552 nova_compute[233724]: 2025-11-29 08:44:37.504 233728 DEBUG nova.virt.hardware [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-29T08:44:24Z,direct_url=<?>,disk_format='raw',id=19c04997-1e5f-42a9-90ff-a53c93a49ed0,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-1064327949',owner='66ee3b60fb89476383201ba204858d4d',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-29T08:44:29Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:44:37 np0005539552 nova_compute[233724]: 2025-11-29 08:44:37.505 233728 DEBUG nova.virt.hardware [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:44:37 np0005539552 nova_compute[233724]: 2025-11-29 08:44:37.506 233728 DEBUG nova.virt.hardware [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:44:37 np0005539552 nova_compute[233724]: 2025-11-29 08:44:37.506 233728 DEBUG nova.virt.hardware [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:44:37 np0005539552 nova_compute[233724]: 2025-11-29 08:44:37.506 233728 DEBUG nova.virt.hardware [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:44:37 np0005539552 nova_compute[233724]: 2025-11-29 08:44:37.507 233728 DEBUG nova.virt.hardware [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:44:37 np0005539552 nova_compute[233724]: 2025-11-29 08:44:37.507 233728 DEBUG nova.virt.hardware [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:44:37 np0005539552 nova_compute[233724]: 2025-11-29 08:44:37.508 233728 DEBUG nova.virt.hardware [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:44:37 np0005539552 nova_compute[233724]: 2025-11-29 08:44:37.508 233728 DEBUG nova.virt.hardware [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:44:37 np0005539552 nova_compute[233724]: 2025-11-29 08:44:37.509 233728 DEBUG nova.virt.hardware [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:44:37 np0005539552 nova_compute[233724]: 2025-11-29 08:44:37.510 233728 DEBUG nova.virt.hardware [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:44:37 np0005539552 nova_compute[233724]: 2025-11-29 08:44:37.515 233728 DEBUG oslo_concurrency.processutils [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:44:37 np0005539552 nova_compute[233724]: 2025-11-29 08:44:37.964 233728 DEBUG nova.compute.manager [req-383cd0aa-9ee5-47a4-9040-900ff8ddfd06 req-f3206f20-3e7f-4f9d-9628-67462367dd7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Received event network-changed-ffc8d5ab-9373-4bc4-b60e-cde39ecf014f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:44:37 np0005539552 nova_compute[233724]: 2025-11-29 08:44:37.965 233728 DEBUG nova.compute.manager [req-383cd0aa-9ee5-47a4-9040-900ff8ddfd06 req-f3206f20-3e7f-4f9d-9628-67462367dd7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Refreshing instance network info cache due to event network-changed-ffc8d5ab-9373-4bc4-b60e-cde39ecf014f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:44:37 np0005539552 nova_compute[233724]: 2025-11-29 08:44:37.966 233728 DEBUG oslo_concurrency.lockutils [req-383cd0aa-9ee5-47a4-9040-900ff8ddfd06 req-f3206f20-3e7f-4f9d-9628-67462367dd7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-3a9139ee-300c-4b0c-897f-218b8cde7e38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:44:37 np0005539552 nova_compute[233724]: 2025-11-29 08:44:37.966 233728 DEBUG oslo_concurrency.lockutils [req-383cd0aa-9ee5-47a4-9040-900ff8ddfd06 req-f3206f20-3e7f-4f9d-9628-67462367dd7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-3a9139ee-300c-4b0c-897f-218b8cde7e38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:44:37 np0005539552 nova_compute[233724]: 2025-11-29 08:44:37.967 233728 DEBUG nova.network.neutron [req-383cd0aa-9ee5-47a4-9040-900ff8ddfd06 req-f3206f20-3e7f-4f9d-9628-67462367dd7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Refreshing network info cache for port ffc8d5ab-9373-4bc4-b60e-cde39ecf014f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:44:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:44:37 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3167465566' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:44:38 np0005539552 nova_compute[233724]: 2025-11-29 08:44:38.005 233728 DEBUG oslo_concurrency.processutils [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:44:38 np0005539552 nova_compute[233724]: 2025-11-29 08:44:38.050 233728 DEBUG nova.storage.rbd_utils [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] rbd image 4208aeda-c433-4ac5-8312-fb09ac9f4f83_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:44:38 np0005539552 nova_compute[233724]: 2025-11-29 08:44:38.057 233728 DEBUG oslo_concurrency.processutils [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:44:38 np0005539552 nova_compute[233724]: 2025-11-29 08:44:38.188 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:44:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1929266264' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:44:38 np0005539552 nova_compute[233724]: 2025-11-29 08:44:38.518 233728 DEBUG oslo_concurrency.processutils [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:44:38 np0005539552 nova_compute[233724]: 2025-11-29 08:44:38.521 233728 DEBUG nova.virt.libvirt.vif [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:44:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-767379208',display_name='tempest-TestSnapshotPattern-server-767379208',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-767379208',id=206,image_ref='19c04997-1e5f-42a9-90ff-a53c93a49ed0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGvmAjf7VDN0MoYAmHVgdY8l8+1v5wjwJNh4fBpCc/IwM7etIRNnxNIuXJ33y4wtb07HCVtVAHbkNdZ/qEkgOQyG3Oc8WVN/7z3fiAu47wM+5lJvW0Y+dOBmLvwkMU2fbA==',key_name='tempest-TestSnapshotPattern-945427268',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='66ee3b60fb89476383201ba204858d4d',ramdisk_id='',reservation_id='r-0d89fxqh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='da14f334-c7fe-428d-b6d2-32c2f4cc4054',image_min_disk='1',image_min_ram='0',image_owner_id='66ee3b60fb89476383201ba204858d4d',image_owner_project_name='tempest-TestSnapshotPattern-1740419556',image_owner_user_name='tempest-TestSnapshotPattern-1740419556-project-member',image_user_id='5ac9cfdaf51b4a5ab874f7e3571f88a0',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1740419556',owner_user_name='tempest-TestSnapshotPattern-1740419556-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:44:33Z,user_data=None,user_id='5ac9cfdaf51b4a5ab874f7e3571f88a0',uuid=4208aeda-c433-4ac5-8312-fb09ac9f4f83,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5e109659-0e3c-4696-9678-2c5962204b98", "address": "fa:16:3e:88:43:29", "network": {"id": "c60162ec-f468-4b5f-bd91-89b0a1cb9fa1", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-514529391-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66ee3b60fb89476383201ba204858d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e109659-0e", "ovs_interfaceid": "5e109659-0e3c-4696-9678-2c5962204b98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:44:38 np0005539552 nova_compute[233724]: 2025-11-29 08:44:38.521 233728 DEBUG nova.network.os_vif_util [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Converting VIF {"id": "5e109659-0e3c-4696-9678-2c5962204b98", "address": "fa:16:3e:88:43:29", "network": {"id": "c60162ec-f468-4b5f-bd91-89b0a1cb9fa1", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-514529391-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66ee3b60fb89476383201ba204858d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e109659-0e", "ovs_interfaceid": "5e109659-0e3c-4696-9678-2c5962204b98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:44:38 np0005539552 nova_compute[233724]: 2025-11-29 08:44:38.523 233728 DEBUG nova.network.os_vif_util [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:43:29,bridge_name='br-int',has_traffic_filtering=True,id=5e109659-0e3c-4696-9678-2c5962204b98,network=Network(c60162ec-f468-4b5f-bd91-89b0a1cb9fa1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e109659-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:44:38 np0005539552 nova_compute[233724]: 2025-11-29 08:44:38.525 233728 DEBUG nova.objects.instance [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Lazy-loading 'pci_devices' on Instance uuid 4208aeda-c433-4ac5-8312-fb09ac9f4f83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:44:38 np0005539552 nova_compute[233724]: 2025-11-29 08:44:38.546 233728 DEBUG nova.virt.libvirt.driver [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:44:38 np0005539552 nova_compute[233724]:  <uuid>4208aeda-c433-4ac5-8312-fb09ac9f4f83</uuid>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:  <name>instance-000000ce</name>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:44:38 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:      <nova:name>tempest-TestSnapshotPattern-server-767379208</nova:name>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:44:37</nova:creationTime>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:44:38 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:        <nova:user uuid="5ac9cfdaf51b4a5ab874f7e3571f88a0">tempest-TestSnapshotPattern-1740419556-project-member</nova:user>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:        <nova:project uuid="66ee3b60fb89476383201ba204858d4d">tempest-TestSnapshotPattern-1740419556</nova:project>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="19c04997-1e5f-42a9-90ff-a53c93a49ed0"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:        <nova:port uuid="5e109659-0e3c-4696-9678-2c5962204b98">
Nov 29 03:44:38 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:      <entry name="serial">4208aeda-c433-4ac5-8312-fb09ac9f4f83</entry>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:      <entry name="uuid">4208aeda-c433-4ac5-8312-fb09ac9f4f83</entry>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:44:38 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/4208aeda-c433-4ac5-8312-fb09ac9f4f83_disk">
Nov 29 03:44:38 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:44:38 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:44:38 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/4208aeda-c433-4ac5-8312-fb09ac9f4f83_disk.config">
Nov 29 03:44:38 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:44:38 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:44:38 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:88:43:29"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:      <target dev="tap5e109659-0e"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:44:38 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/4208aeda-c433-4ac5-8312-fb09ac9f4f83/console.log" append="off"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <input type="keyboard" bus="usb"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:44:38 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:44:38 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:44:38 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:44:38 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:44:38 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:44:38 np0005539552 nova_compute[233724]: 2025-11-29 08:44:38.548 233728 DEBUG nova.compute.manager [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Preparing to wait for external event network-vif-plugged-5e109659-0e3c-4696-9678-2c5962204b98 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:44:38 np0005539552 nova_compute[233724]: 2025-11-29 08:44:38.549 233728 DEBUG oslo_concurrency.lockutils [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Acquiring lock "4208aeda-c433-4ac5-8312-fb09ac9f4f83-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:38 np0005539552 nova_compute[233724]: 2025-11-29 08:44:38.550 233728 DEBUG oslo_concurrency.lockutils [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Lock "4208aeda-c433-4ac5-8312-fb09ac9f4f83-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:38 np0005539552 nova_compute[233724]: 2025-11-29 08:44:38.551 233728 DEBUG oslo_concurrency.lockutils [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Lock "4208aeda-c433-4ac5-8312-fb09ac9f4f83-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:38 np0005539552 nova_compute[233724]: 2025-11-29 08:44:38.552 233728 DEBUG nova.virt.libvirt.vif [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:44:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-767379208',display_name='tempest-TestSnapshotPattern-server-767379208',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-767379208',id=206,image_ref='19c04997-1e5f-42a9-90ff-a53c93a49ed0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGvmAjf7VDN0MoYAmHVgdY8l8+1v5wjwJNh4fBpCc/IwM7etIRNnxNIuXJ33y4wtb07HCVtVAHbkNdZ/qEkgOQyG3Oc8WVN/7z3fiAu47wM+5lJvW0Y+dOBmLvwkMU2fbA==',key_name='tempest-TestSnapshotPattern-945427268',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='66ee3b60fb89476383201ba204858d4d',ramdisk_id='',reservation_id='r-0d89fxqh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='da14f334-c7fe-428d-b6d2-32c2f4cc4054',image_min_disk='1',image_min_ram='0',image_owner_id='66ee3b60fb89476383201ba204858d4d',image_owner_project_name='tempest-TestSnapshotPattern-1740419556',image_owner_user_name='tempest-TestSnapshotPattern-1740419556-project-member',image_user_id='5ac9cfdaf51b4a5ab874f7e3571f88a0',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1740419556',owner_user_name='tempest-TestSnapshotPattern-1740419556-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:44:33Z,user_data=None,user_id='5ac9cfdaf51b4a5ab874f7e3571f88a0',uuid=4208aeda-c433-4ac5-8312-fb09ac9f4f83,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5e109659-0e3c-4696-9678-2c5962204b98", "address": "fa:16:3e:88:43:29", "network": {"id": "c60162ec-f468-4b5f-bd91-89b0a1cb9fa1", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-514529391-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66ee3b60fb89476383201ba204858d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e109659-0e", "ovs_interfaceid": "5e109659-0e3c-4696-9678-2c5962204b98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:44:38 np0005539552 nova_compute[233724]: 2025-11-29 08:44:38.553 233728 DEBUG nova.network.os_vif_util [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Converting VIF {"id": "5e109659-0e3c-4696-9678-2c5962204b98", "address": "fa:16:3e:88:43:29", "network": {"id": "c60162ec-f468-4b5f-bd91-89b0a1cb9fa1", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-514529391-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66ee3b60fb89476383201ba204858d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e109659-0e", "ovs_interfaceid": "5e109659-0e3c-4696-9678-2c5962204b98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:44:38 np0005539552 nova_compute[233724]: 2025-11-29 08:44:38.555 233728 DEBUG nova.network.os_vif_util [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:43:29,bridge_name='br-int',has_traffic_filtering=True,id=5e109659-0e3c-4696-9678-2c5962204b98,network=Network(c60162ec-f468-4b5f-bd91-89b0a1cb9fa1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e109659-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:44:38 np0005539552 nova_compute[233724]: 2025-11-29 08:44:38.556 233728 DEBUG os_vif [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:43:29,bridge_name='br-int',has_traffic_filtering=True,id=5e109659-0e3c-4696-9678-2c5962204b98,network=Network(c60162ec-f468-4b5f-bd91-89b0a1cb9fa1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e109659-0e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:44:38 np0005539552 nova_compute[233724]: 2025-11-29 08:44:38.557 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:38 np0005539552 nova_compute[233724]: 2025-11-29 08:44:38.558 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:44:38 np0005539552 nova_compute[233724]: 2025-11-29 08:44:38.559 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:44:38 np0005539552 nova_compute[233724]: 2025-11-29 08:44:38.565 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:38 np0005539552 nova_compute[233724]: 2025-11-29 08:44:38.566 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e109659-0e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:44:38 np0005539552 nova_compute[233724]: 2025-11-29 08:44:38.567 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5e109659-0e, col_values=(('external_ids', {'iface-id': '5e109659-0e3c-4696-9678-2c5962204b98', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:88:43:29', 'vm-uuid': '4208aeda-c433-4ac5-8312-fb09ac9f4f83'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:44:38 np0005539552 NetworkManager[48926]: <info>  [1764405878.5709] manager: (tap5e109659-0e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/401)
Nov 29 03:44:38 np0005539552 nova_compute[233724]: 2025-11-29 08:44:38.569 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:38 np0005539552 nova_compute[233724]: 2025-11-29 08:44:38.576 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:44:38 np0005539552 nova_compute[233724]: 2025-11-29 08:44:38.579 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:38 np0005539552 nova_compute[233724]: 2025-11-29 08:44:38.580 233728 INFO os_vif [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:43:29,bridge_name='br-int',has_traffic_filtering=True,id=5e109659-0e3c-4696-9678-2c5962204b98,network=Network(c60162ec-f468-4b5f-bd91-89b0a1cb9fa1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e109659-0e')#033[00m
Nov 29 03:44:38 np0005539552 nova_compute[233724]: 2025-11-29 08:44:38.640 233728 DEBUG nova.virt.libvirt.driver [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:44:38 np0005539552 nova_compute[233724]: 2025-11-29 08:44:38.641 233728 DEBUG nova.virt.libvirt.driver [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:44:38 np0005539552 nova_compute[233724]: 2025-11-29 08:44:38.641 233728 DEBUG nova.virt.libvirt.driver [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] No VIF found with MAC fa:16:3e:88:43:29, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:44:38 np0005539552 nova_compute[233724]: 2025-11-29 08:44:38.642 233728 INFO nova.virt.libvirt.driver [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Using config drive#033[00m
Nov 29 03:44:38 np0005539552 nova_compute[233724]: 2025-11-29 08:44:38.679 233728 DEBUG nova.storage.rbd_utils [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] rbd image 4208aeda-c433-4ac5-8312-fb09ac9f4f83_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:44:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:38.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:39 np0005539552 nova_compute[233724]: 2025-11-29 08:44:39.152 233728 INFO nova.virt.libvirt.driver [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Creating config drive at /var/lib/nova/instances/4208aeda-c433-4ac5-8312-fb09ac9f4f83/disk.config#033[00m
Nov 29 03:44:39 np0005539552 nova_compute[233724]: 2025-11-29 08:44:39.161 233728 DEBUG oslo_concurrency.processutils [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4208aeda-c433-4ac5-8312-fb09ac9f4f83/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl944iz92 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:44:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:39.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:39 np0005539552 nova_compute[233724]: 2025-11-29 08:44:39.323 233728 DEBUG oslo_concurrency.processutils [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4208aeda-c433-4ac5-8312-fb09ac9f4f83/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl944iz92" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:44:39 np0005539552 nova_compute[233724]: 2025-11-29 08:44:39.376 233728 DEBUG nova.storage.rbd_utils [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] rbd image 4208aeda-c433-4ac5-8312-fb09ac9f4f83_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:44:39 np0005539552 nova_compute[233724]: 2025-11-29 08:44:39.384 233728 DEBUG oslo_concurrency.processutils [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4208aeda-c433-4ac5-8312-fb09ac9f4f83/disk.config 4208aeda-c433-4ac5-8312-fb09ac9f4f83_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:44:39 np0005539552 nova_compute[233724]: 2025-11-29 08:44:39.608 233728 DEBUG oslo_concurrency.processutils [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4208aeda-c433-4ac5-8312-fb09ac9f4f83/disk.config 4208aeda-c433-4ac5-8312-fb09ac9f4f83_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.224s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:44:39 np0005539552 nova_compute[233724]: 2025-11-29 08:44:39.609 233728 INFO nova.virt.libvirt.driver [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Deleting local config drive /var/lib/nova/instances/4208aeda-c433-4ac5-8312-fb09ac9f4f83/disk.config because it was imported into RBD.#033[00m
Nov 29 03:44:39 np0005539552 kernel: tap5e109659-0e: entered promiscuous mode
Nov 29 03:44:39 np0005539552 NetworkManager[48926]: <info>  [1764405879.6887] manager: (tap5e109659-0e): new Tun device (/org/freedesktop/NetworkManager/Devices/402)
Nov 29 03:44:39 np0005539552 nova_compute[233724]: 2025-11-29 08:44:39.695 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:44:39Z|00900|binding|INFO|Claiming lport 5e109659-0e3c-4696-9678-2c5962204b98 for this chassis.
Nov 29 03:44:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:44:39Z|00901|binding|INFO|5e109659-0e3c-4696-9678-2c5962204b98: Claiming fa:16:3e:88:43:29 10.100.0.5
Nov 29 03:44:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:39.710 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:43:29 10.100.0.5'], port_security=['fa:16:3e:88:43:29 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '4208aeda-c433-4ac5-8312-fb09ac9f4f83', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '66ee3b60fb89476383201ba204858d4d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '291b52bf-18bb-41c3-a977-3e030dbdb988', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3b579620-af3c-4534-8cbc-29d18f0dd8a7, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=5e109659-0e3c-4696-9678-2c5962204b98) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:44:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:39.712 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 5e109659-0e3c-4696-9678-2c5962204b98 in datapath c60162ec-f468-4b5f-bd91-89b0a1cb9fa1 bound to our chassis#033[00m
Nov 29 03:44:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:39.716 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c60162ec-f468-4b5f-bd91-89b0a1cb9fa1#033[00m
Nov 29 03:44:39 np0005539552 nova_compute[233724]: 2025-11-29 08:44:39.728 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:44:39Z|00902|binding|INFO|Setting lport 5e109659-0e3c-4696-9678-2c5962204b98 ovn-installed in OVS
Nov 29 03:44:39 np0005539552 ovn_controller[133798]: 2025-11-29T08:44:39Z|00903|binding|INFO|Setting lport 5e109659-0e3c-4696-9678-2c5962204b98 up in Southbound
Nov 29 03:44:39 np0005539552 nova_compute[233724]: 2025-11-29 08:44:39.732 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:39.735 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c0a8f76e-fa7d-48cf-b3aa-231fd170efa0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:39.736 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc60162ec-f1 in ovnmeta-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:44:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:39.738 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc60162ec-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:44:39 np0005539552 systemd-machined[196379]: New machine qemu-92-instance-000000ce.
Nov 29 03:44:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:39.738 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c9df65e8-49eb-4a57-9151-cdf5733e892b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:39.740 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0daa93bd-cfa9-448f-bd4c-260bf7de20c3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:39 np0005539552 systemd[1]: Started Virtual Machine qemu-92-instance-000000ce.
Nov 29 03:44:39 np0005539552 systemd-udevd[317608]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:44:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:39.753 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[2d03db35-273b-4d35-8ad8-feba0f5cdd6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:39 np0005539552 NetworkManager[48926]: <info>  [1764405879.7640] device (tap5e109659-0e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:44:39 np0005539552 NetworkManager[48926]: <info>  [1764405879.7661] device (tap5e109659-0e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:44:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:39.772 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3b9c5e3e-5059-46b4-96f1-aadd81296c0b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:39.818 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[6b63d56e-3b06-4a13-8c8e-a25723dff488]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:39 np0005539552 NetworkManager[48926]: <info>  [1764405879.8276] manager: (tapc60162ec-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/403)
Nov 29 03:44:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:39.827 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[bbb6f6ec-97f5-41fc-befc-66aad2d86efc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:39 np0005539552 systemd-udevd[317612]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:44:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:39.863 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[7ae87851-d112-4aaa-9ee5-1dae63f0d5ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:39.867 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[bcd2cd87-0af4-4ebe-9dff-96c4c60ce8bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:39 np0005539552 NetworkManager[48926]: <info>  [1764405879.9022] device (tapc60162ec-f0): carrier: link connected
Nov 29 03:44:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:39.910 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[40af8df9-0660-427f-844d-5e226497d0a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:39.933 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6431afda-3910-4fdb-bf7e-218c1c103b4c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc60162ec-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:33:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 266], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 892558, 'reachable_time': 29905, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317640, 'error': None, 'target': 'ovnmeta-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:39.955 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[7db5c050-c933-405a-a9b4-ce0c8b9ddd94]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2a:33ca'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 892558, 'tstamp': 892558}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317641, 'error': None, 'target': 'ovnmeta-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:39 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:39.978 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ba1f2090-7ace-4889-a415-2adfd4d73c22]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc60162ec-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:33:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 266], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 892558, 'reachable_time': 29905, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 317642, 'error': None, 'target': 'ovnmeta-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:40.027 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[8063da9b-697c-49c4-8782-27d504a5c1de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:40.114 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[e4e80052-d052-4630-b581-e8e66bc077ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:40.115 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc60162ec-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:44:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:40.116 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:44:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:40.116 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc60162ec-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:44:40 np0005539552 nova_compute[233724]: 2025-11-29 08:44:40.118 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:40 np0005539552 NetworkManager[48926]: <info>  [1764405880.1189] manager: (tapc60162ec-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/404)
Nov 29 03:44:40 np0005539552 kernel: tapc60162ec-f0: entered promiscuous mode
Nov 29 03:44:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:40.122 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc60162ec-f0, col_values=(('external_ids', {'iface-id': '1cbe83e7-a1b1-4865-9743-7778d9db9685'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:44:40 np0005539552 nova_compute[233724]: 2025-11-29 08:44:40.124 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:40 np0005539552 ovn_controller[133798]: 2025-11-29T08:44:40Z|00904|binding|INFO|Releasing lport 1cbe83e7-a1b1-4865-9743-7778d9db9685 from this chassis (sb_readonly=0)
Nov 29 03:44:40 np0005539552 nova_compute[233724]: 2025-11-29 08:44:40.126 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:40.127 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c60162ec-f468-4b5f-bd91-89b0a1cb9fa1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c60162ec-f468-4b5f-bd91-89b0a1cb9fa1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:44:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:40.130 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[392725e4-f813-4219-8845-0e28ff57f396]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:40.131 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:44:40 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:44:40 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:44:40 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1
Nov 29 03:44:40 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:44:40 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:44:40 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:44:40 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/c60162ec-f468-4b5f-bd91-89b0a1cb9fa1.pid.haproxy
Nov 29 03:44:40 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:44:40 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:44:40 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:44:40 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:44:40 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:44:40 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:44:40 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:44:40 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:44:40 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:44:40 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:44:40 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:44:40 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:44:40 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:44:40 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:44:40 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:44:40 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:44:40 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:44:40 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:44:40 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:44:40 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:44:40 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID c60162ec-f468-4b5f-bd91-89b0a1cb9fa1
Nov 29 03:44:40 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:44:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:40.132 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1', 'env', 'PROCESS_TAG=haproxy-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c60162ec-f468-4b5f-bd91-89b0a1cb9fa1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:44:40 np0005539552 nova_compute[233724]: 2025-11-29 08:44:40.160 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:40 np0005539552 nova_compute[233724]: 2025-11-29 08:44:40.354 233728 DEBUG nova.network.neutron [req-383cd0aa-9ee5-47a4-9040-900ff8ddfd06 req-f3206f20-3e7f-4f9d-9628-67462367dd7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Updated VIF entry in instance network info cache for port ffc8d5ab-9373-4bc4-b60e-cde39ecf014f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:44:40 np0005539552 nova_compute[233724]: 2025-11-29 08:44:40.355 233728 DEBUG nova.network.neutron [req-383cd0aa-9ee5-47a4-9040-900ff8ddfd06 req-f3206f20-3e7f-4f9d-9628-67462367dd7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Updating instance_info_cache with network_info: [{"id": "ffc8d5ab-9373-4bc4-b60e-cde39ecf014f", "address": "fa:16:3e:cc:46:19", "network": {"id": "faff47d9-b197-488e-92f7-8e8d5ec1eec7", "bridge": "br-int", "label": "tempest-network-smoke--1652969305", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffc8d5ab-93", "ovs_interfaceid": "ffc8d5ab-9373-4bc4-b60e-cde39ecf014f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:44:40 np0005539552 nova_compute[233724]: 2025-11-29 08:44:40.380 233728 DEBUG oslo_concurrency.lockutils [req-383cd0aa-9ee5-47a4-9040-900ff8ddfd06 req-f3206f20-3e7f-4f9d-9628-67462367dd7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-3a9139ee-300c-4b0c-897f-218b8cde7e38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:44:40 np0005539552 nova_compute[233724]: 2025-11-29 08:44:40.381 233728 DEBUG nova.compute.manager [req-383cd0aa-9ee5-47a4-9040-900ff8ddfd06 req-f3206f20-3e7f-4f9d-9628-67462367dd7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Received event network-vif-plugged-ffc8d5ab-9373-4bc4-b60e-cde39ecf014f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:44:40 np0005539552 nova_compute[233724]: 2025-11-29 08:44:40.381 233728 DEBUG oslo_concurrency.lockutils [req-383cd0aa-9ee5-47a4-9040-900ff8ddfd06 req-f3206f20-3e7f-4f9d-9628-67462367dd7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "3a9139ee-300c-4b0c-897f-218b8cde7e38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:40 np0005539552 nova_compute[233724]: 2025-11-29 08:44:40.382 233728 DEBUG oslo_concurrency.lockutils [req-383cd0aa-9ee5-47a4-9040-900ff8ddfd06 req-f3206f20-3e7f-4f9d-9628-67462367dd7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3a9139ee-300c-4b0c-897f-218b8cde7e38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:40 np0005539552 nova_compute[233724]: 2025-11-29 08:44:40.382 233728 DEBUG oslo_concurrency.lockutils [req-383cd0aa-9ee5-47a4-9040-900ff8ddfd06 req-f3206f20-3e7f-4f9d-9628-67462367dd7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3a9139ee-300c-4b0c-897f-218b8cde7e38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:40 np0005539552 nova_compute[233724]: 2025-11-29 08:44:40.382 233728 DEBUG nova.compute.manager [req-383cd0aa-9ee5-47a4-9040-900ff8ddfd06 req-f3206f20-3e7f-4f9d-9628-67462367dd7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] No waiting events found dispatching network-vif-plugged-ffc8d5ab-9373-4bc4-b60e-cde39ecf014f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:44:40 np0005539552 nova_compute[233724]: 2025-11-29 08:44:40.382 233728 WARNING nova.compute.manager [req-383cd0aa-9ee5-47a4-9040-900ff8ddfd06 req-f3206f20-3e7f-4f9d-9628-67462367dd7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Received unexpected event network-vif-plugged-ffc8d5ab-9373-4bc4-b60e-cde39ecf014f for instance with vm_state active and task_state None.#033[00m
Nov 29 03:44:40 np0005539552 nova_compute[233724]: 2025-11-29 08:44:40.383 233728 DEBUG nova.compute.manager [req-383cd0aa-9ee5-47a4-9040-900ff8ddfd06 req-f3206f20-3e7f-4f9d-9628-67462367dd7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Received event network-vif-plugged-ffc8d5ab-9373-4bc4-b60e-cde39ecf014f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:44:40 np0005539552 nova_compute[233724]: 2025-11-29 08:44:40.383 233728 DEBUG oslo_concurrency.lockutils [req-383cd0aa-9ee5-47a4-9040-900ff8ddfd06 req-f3206f20-3e7f-4f9d-9628-67462367dd7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "3a9139ee-300c-4b0c-897f-218b8cde7e38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:40 np0005539552 nova_compute[233724]: 2025-11-29 08:44:40.383 233728 DEBUG oslo_concurrency.lockutils [req-383cd0aa-9ee5-47a4-9040-900ff8ddfd06 req-f3206f20-3e7f-4f9d-9628-67462367dd7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3a9139ee-300c-4b0c-897f-218b8cde7e38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:40 np0005539552 nova_compute[233724]: 2025-11-29 08:44:40.383 233728 DEBUG oslo_concurrency.lockutils [req-383cd0aa-9ee5-47a4-9040-900ff8ddfd06 req-f3206f20-3e7f-4f9d-9628-67462367dd7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3a9139ee-300c-4b0c-897f-218b8cde7e38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:40 np0005539552 nova_compute[233724]: 2025-11-29 08:44:40.384 233728 DEBUG nova.compute.manager [req-383cd0aa-9ee5-47a4-9040-900ff8ddfd06 req-f3206f20-3e7f-4f9d-9628-67462367dd7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] No waiting events found dispatching network-vif-plugged-ffc8d5ab-9373-4bc4-b60e-cde39ecf014f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:44:40 np0005539552 nova_compute[233724]: 2025-11-29 08:44:40.384 233728 WARNING nova.compute.manager [req-383cd0aa-9ee5-47a4-9040-900ff8ddfd06 req-f3206f20-3e7f-4f9d-9628-67462367dd7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Received unexpected event network-vif-plugged-ffc8d5ab-9373-4bc4-b60e-cde39ecf014f for instance with vm_state active and task_state None.#033[00m
Nov 29 03:44:40 np0005539552 nova_compute[233724]: 2025-11-29 08:44:40.427 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405880.4264777, 4208aeda-c433-4ac5-8312-fb09ac9f4f83 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:44:40 np0005539552 nova_compute[233724]: 2025-11-29 08:44:40.427 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] VM Started (Lifecycle Event)#033[00m
Nov 29 03:44:40 np0005539552 nova_compute[233724]: 2025-11-29 08:44:40.446 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:44:40 np0005539552 nova_compute[233724]: 2025-11-29 08:44:40.450 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405880.4266763, 4208aeda-c433-4ac5-8312-fb09ac9f4f83 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:44:40 np0005539552 nova_compute[233724]: 2025-11-29 08:44:40.451 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:44:40 np0005539552 nova_compute[233724]: 2025-11-29 08:44:40.470 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:44:40 np0005539552 nova_compute[233724]: 2025-11-29 08:44:40.473 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:44:40 np0005539552 nova_compute[233724]: 2025-11-29 08:44:40.493 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:44:40 np0005539552 podman[317716]: 2025-11-29 08:44:40.558437722 +0000 UTC m=+0.055125004 container create b60b86d0ff929e77aa7f18544c46c7639fd276ca1cccb0cc893c871148257820 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:44:40 np0005539552 systemd[1]: Started libpod-conmon-b60b86d0ff929e77aa7f18544c46c7639fd276ca1cccb0cc893c871148257820.scope.
Nov 29 03:44:40 np0005539552 podman[317716]: 2025-11-29 08:44:40.529936475 +0000 UTC m=+0.026623757 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:44:40 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:44:40 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0789c5d8da3dc1a101e4c54e832767b6ce33f3e4b9490627316e535550e72fe1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:44:40 np0005539552 podman[317716]: 2025-11-29 08:44:40.65827318 +0000 UTC m=+0.154960442 container init b60b86d0ff929e77aa7f18544c46c7639fd276ca1cccb0cc893c871148257820 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:44:40 np0005539552 podman[317716]: 2025-11-29 08:44:40.669580954 +0000 UTC m=+0.166268196 container start b60b86d0ff929e77aa7f18544c46c7639fd276ca1cccb0cc893c871148257820 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 03:44:40 np0005539552 nova_compute[233724]: 2025-11-29 08:44:40.672 233728 DEBUG nova.network.neutron [req-c8576655-c33c-4bc4-87bf-c223119b2bd3 req-a2d7500c-0cd2-4fda-be8e-2dc0f219cfed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Updated VIF entry in instance network info cache for port 5e109659-0e3c-4696-9678-2c5962204b98. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:44:40 np0005539552 nova_compute[233724]: 2025-11-29 08:44:40.673 233728 DEBUG nova.network.neutron [req-c8576655-c33c-4bc4-87bf-c223119b2bd3 req-a2d7500c-0cd2-4fda-be8e-2dc0f219cfed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Updating instance_info_cache with network_info: [{"id": "5e109659-0e3c-4696-9678-2c5962204b98", "address": "fa:16:3e:88:43:29", "network": {"id": "c60162ec-f468-4b5f-bd91-89b0a1cb9fa1", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-514529391-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66ee3b60fb89476383201ba204858d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e109659-0e", "ovs_interfaceid": "5e109659-0e3c-4696-9678-2c5962204b98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:44:40 np0005539552 nova_compute[233724]: 2025-11-29 08:44:40.687 233728 DEBUG oslo_concurrency.lockutils [req-c8576655-c33c-4bc4-87bf-c223119b2bd3 req-a2d7500c-0cd2-4fda-be8e-2dc0f219cfed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-4208aeda-c433-4ac5-8312-fb09ac9f4f83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:44:40 np0005539552 neutron-haproxy-ovnmeta-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1[317732]: [NOTICE]   (317736) : New worker (317738) forked
Nov 29 03:44:40 np0005539552 neutron-haproxy-ovnmeta-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1[317732]: [NOTICE]   (317736) : Loading success.
Nov 29 03:44:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:40.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:41.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:41 np0005539552 nova_compute[233724]: 2025-11-29 08:44:41.662 233728 DEBUG nova.compute.manager [req-5209dd7f-4555-4d5a-8787-d7dac05917f7 req-e530cb5e-ac82-4794-a384-d3debf72ac38 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Received event network-vif-plugged-5e109659-0e3c-4696-9678-2c5962204b98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:44:41 np0005539552 nova_compute[233724]: 2025-11-29 08:44:41.663 233728 DEBUG oslo_concurrency.lockutils [req-5209dd7f-4555-4d5a-8787-d7dac05917f7 req-e530cb5e-ac82-4794-a384-d3debf72ac38 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "4208aeda-c433-4ac5-8312-fb09ac9f4f83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:41 np0005539552 nova_compute[233724]: 2025-11-29 08:44:41.666 233728 DEBUG oslo_concurrency.lockutils [req-5209dd7f-4555-4d5a-8787-d7dac05917f7 req-e530cb5e-ac82-4794-a384-d3debf72ac38 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "4208aeda-c433-4ac5-8312-fb09ac9f4f83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:41 np0005539552 nova_compute[233724]: 2025-11-29 08:44:41.667 233728 DEBUG oslo_concurrency.lockutils [req-5209dd7f-4555-4d5a-8787-d7dac05917f7 req-e530cb5e-ac82-4794-a384-d3debf72ac38 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "4208aeda-c433-4ac5-8312-fb09ac9f4f83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:41 np0005539552 nova_compute[233724]: 2025-11-29 08:44:41.667 233728 DEBUG nova.compute.manager [req-5209dd7f-4555-4d5a-8787-d7dac05917f7 req-e530cb5e-ac82-4794-a384-d3debf72ac38 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Processing event network-vif-plugged-5e109659-0e3c-4696-9678-2c5962204b98 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:44:41 np0005539552 nova_compute[233724]: 2025-11-29 08:44:41.667 233728 DEBUG nova.compute.manager [req-5209dd7f-4555-4d5a-8787-d7dac05917f7 req-e530cb5e-ac82-4794-a384-d3debf72ac38 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Received event network-vif-plugged-5e109659-0e3c-4696-9678-2c5962204b98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:44:41 np0005539552 nova_compute[233724]: 2025-11-29 08:44:41.668 233728 DEBUG oslo_concurrency.lockutils [req-5209dd7f-4555-4d5a-8787-d7dac05917f7 req-e530cb5e-ac82-4794-a384-d3debf72ac38 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "4208aeda-c433-4ac5-8312-fb09ac9f4f83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:41 np0005539552 nova_compute[233724]: 2025-11-29 08:44:41.668 233728 DEBUG oslo_concurrency.lockutils [req-5209dd7f-4555-4d5a-8787-d7dac05917f7 req-e530cb5e-ac82-4794-a384-d3debf72ac38 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "4208aeda-c433-4ac5-8312-fb09ac9f4f83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:41 np0005539552 nova_compute[233724]: 2025-11-29 08:44:41.669 233728 DEBUG oslo_concurrency.lockutils [req-5209dd7f-4555-4d5a-8787-d7dac05917f7 req-e530cb5e-ac82-4794-a384-d3debf72ac38 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "4208aeda-c433-4ac5-8312-fb09ac9f4f83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:41 np0005539552 nova_compute[233724]: 2025-11-29 08:44:41.669 233728 DEBUG nova.compute.manager [req-5209dd7f-4555-4d5a-8787-d7dac05917f7 req-e530cb5e-ac82-4794-a384-d3debf72ac38 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] No waiting events found dispatching network-vif-plugged-5e109659-0e3c-4696-9678-2c5962204b98 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:44:41 np0005539552 nova_compute[233724]: 2025-11-29 08:44:41.669 233728 WARNING nova.compute.manager [req-5209dd7f-4555-4d5a-8787-d7dac05917f7 req-e530cb5e-ac82-4794-a384-d3debf72ac38 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Received unexpected event network-vif-plugged-5e109659-0e3c-4696-9678-2c5962204b98 for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:44:41 np0005539552 nova_compute[233724]: 2025-11-29 08:44:41.670 233728 DEBUG nova.compute.manager [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:44:41 np0005539552 nova_compute[233724]: 2025-11-29 08:44:41.675 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405881.6748097, 4208aeda-c433-4ac5-8312-fb09ac9f4f83 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:44:41 np0005539552 nova_compute[233724]: 2025-11-29 08:44:41.675 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:44:41 np0005539552 nova_compute[233724]: 2025-11-29 08:44:41.677 233728 DEBUG nova.virt.libvirt.driver [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:44:41 np0005539552 nova_compute[233724]: 2025-11-29 08:44:41.680 233728 INFO nova.virt.libvirt.driver [-] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Instance spawned successfully.#033[00m
Nov 29 03:44:41 np0005539552 nova_compute[233724]: 2025-11-29 08:44:41.681 233728 INFO nova.compute.manager [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Took 7.87 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:44:41 np0005539552 nova_compute[233724]: 2025-11-29 08:44:41.681 233728 DEBUG nova.compute.manager [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:44:41 np0005539552 nova_compute[233724]: 2025-11-29 08:44:41.715 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:44:41 np0005539552 nova_compute[233724]: 2025-11-29 08:44:41.720 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:44:41 np0005539552 nova_compute[233724]: 2025-11-29 08:44:41.779 233728 INFO nova.compute.manager [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Took 8.89 seconds to build instance.#033[00m
Nov 29 03:44:41 np0005539552 nova_compute[233724]: 2025-11-29 08:44:41.795 233728 DEBUG oslo_concurrency.lockutils [None req-6e2dc4c0-8dd7-4353-a22d-f81ab8c7686b 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Lock "4208aeda-c433-4ac5-8312-fb09ac9f4f83" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.962s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:42 np0005539552 nova_compute[233724]: 2025-11-29 08:44:42.429 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:42.627 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '60'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:44:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:42.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:43 np0005539552 nova_compute[233724]: 2025-11-29 08:44:43.171 233728 DEBUG nova.compute.manager [req-99438214-0f14-4333-b502-00559640df5a req-53e33c93-87eb-4e8e-a7c3-a70b67c62469 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Received event network-changed-ffc8d5ab-9373-4bc4-b60e-cde39ecf014f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:44:43 np0005539552 nova_compute[233724]: 2025-11-29 08:44:43.171 233728 DEBUG nova.compute.manager [req-99438214-0f14-4333-b502-00559640df5a req-53e33c93-87eb-4e8e-a7c3-a70b67c62469 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Refreshing instance network info cache due to event network-changed-ffc8d5ab-9373-4bc4-b60e-cde39ecf014f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:44:43 np0005539552 nova_compute[233724]: 2025-11-29 08:44:43.172 233728 DEBUG oslo_concurrency.lockutils [req-99438214-0f14-4333-b502-00559640df5a req-53e33c93-87eb-4e8e-a7c3-a70b67c62469 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-3a9139ee-300c-4b0c-897f-218b8cde7e38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:44:43 np0005539552 nova_compute[233724]: 2025-11-29 08:44:43.172 233728 DEBUG oslo_concurrency.lockutils [req-99438214-0f14-4333-b502-00559640df5a req-53e33c93-87eb-4e8e-a7c3-a70b67c62469 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-3a9139ee-300c-4b0c-897f-218b8cde7e38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:44:43 np0005539552 nova_compute[233724]: 2025-11-29 08:44:43.172 233728 DEBUG nova.network.neutron [req-99438214-0f14-4333-b502-00559640df5a req-53e33c93-87eb-4e8e-a7c3-a70b67c62469 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Refreshing network info cache for port ffc8d5ab-9373-4bc4-b60e-cde39ecf014f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:44:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:43.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:43 np0005539552 nova_compute[233724]: 2025-11-29 08:44:43.314 233728 DEBUG oslo_concurrency.lockutils [None req-c1e56cd3-994d-4c7f-aec6-a6a15c12d496 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "3a9139ee-300c-4b0c-897f-218b8cde7e38" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:43 np0005539552 nova_compute[233724]: 2025-11-29 08:44:43.314 233728 DEBUG oslo_concurrency.lockutils [None req-c1e56cd3-994d-4c7f-aec6-a6a15c12d496 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "3a9139ee-300c-4b0c-897f-218b8cde7e38" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:43 np0005539552 nova_compute[233724]: 2025-11-29 08:44:43.315 233728 DEBUG oslo_concurrency.lockutils [None req-c1e56cd3-994d-4c7f-aec6-a6a15c12d496 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "3a9139ee-300c-4b0c-897f-218b8cde7e38-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:43 np0005539552 nova_compute[233724]: 2025-11-29 08:44:43.316 233728 DEBUG oslo_concurrency.lockutils [None req-c1e56cd3-994d-4c7f-aec6-a6a15c12d496 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "3a9139ee-300c-4b0c-897f-218b8cde7e38-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:43 np0005539552 nova_compute[233724]: 2025-11-29 08:44:43.317 233728 DEBUG oslo_concurrency.lockutils [None req-c1e56cd3-994d-4c7f-aec6-a6a15c12d496 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "3a9139ee-300c-4b0c-897f-218b8cde7e38-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:43 np0005539552 nova_compute[233724]: 2025-11-29 08:44:43.318 233728 INFO nova.compute.manager [None req-c1e56cd3-994d-4c7f-aec6-a6a15c12d496 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Terminating instance#033[00m
Nov 29 03:44:43 np0005539552 nova_compute[233724]: 2025-11-29 08:44:43.320 233728 DEBUG nova.compute.manager [None req-c1e56cd3-994d-4c7f-aec6-a6a15c12d496 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:44:43 np0005539552 kernel: tapffc8d5ab-93 (unregistering): left promiscuous mode
Nov 29 03:44:43 np0005539552 nova_compute[233724]: 2025-11-29 08:44:43.393 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:43 np0005539552 NetworkManager[48926]: <info>  [1764405883.3955] device (tapffc8d5ab-93): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:44:43 np0005539552 ovn_controller[133798]: 2025-11-29T08:44:43Z|00905|binding|INFO|Releasing lport ffc8d5ab-9373-4bc4-b60e-cde39ecf014f from this chassis (sb_readonly=0)
Nov 29 03:44:43 np0005539552 ovn_controller[133798]: 2025-11-29T08:44:43Z|00906|binding|INFO|Setting lport ffc8d5ab-9373-4bc4-b60e-cde39ecf014f down in Southbound
Nov 29 03:44:43 np0005539552 ovn_controller[133798]: 2025-11-29T08:44:43Z|00907|binding|INFO|Removing iface tapffc8d5ab-93 ovn-installed in OVS
Nov 29 03:44:43 np0005539552 nova_compute[233724]: 2025-11-29 08:44:43.399 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:43.403 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:46:19 10.100.0.6'], port_security=['fa:16:3e:cc:46:19 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3a9139ee-300c-4b0c-897f-218b8cde7e38', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-faff47d9-b197-488e-92f7-8e8d5ec1eec7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0471b9b208874403aa3f0fbe7504ad19', 'neutron:revision_number': '8', 'neutron:security_group_ids': '643f49e8-7bc0-48db-8ba9-730954386800', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=019e6a06-0d9a-4978-9f73-700cb716f877, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=ffc8d5ab-9373-4bc4-b60e-cde39ecf014f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:44:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:43.405 143400 INFO neutron.agent.ovn.metadata.agent [-] Port ffc8d5ab-9373-4bc4-b60e-cde39ecf014f in datapath faff47d9-b197-488e-92f7-8e8d5ec1eec7 unbound from our chassis#033[00m
Nov 29 03:44:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:43.408 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network faff47d9-b197-488e-92f7-8e8d5ec1eec7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:44:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:43.409 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[410c9542-388c-4f13-9947-2d3993af8165]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:43.409 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7 namespace which is not needed anymore#033[00m
Nov 29 03:44:43 np0005539552 nova_compute[233724]: 2025-11-29 08:44:43.411 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:43 np0005539552 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d000000c9.scope: Deactivated successfully.
Nov 29 03:44:43 np0005539552 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d000000c9.scope: Consumed 18.737s CPU time.
Nov 29 03:44:43 np0005539552 systemd-machined[196379]: Machine qemu-91-instance-000000c9 terminated.
Nov 29 03:44:43 np0005539552 podman[317749]: 2025-11-29 08:44:43.518465499 +0000 UTC m=+0.087437615 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:44:43 np0005539552 podman[317752]: 2025-11-29 08:44:43.52259121 +0000 UTC m=+0.090825676 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:44:43 np0005539552 kernel: tapffc8d5ab-93: entered promiscuous mode
Nov 29 03:44:43 np0005539552 NetworkManager[48926]: <info>  [1764405883.5397] manager: (tapffc8d5ab-93): new Tun device (/org/freedesktop/NetworkManager/Devices/405)
Nov 29 03:44:43 np0005539552 nova_compute[233724]: 2025-11-29 08:44:43.540 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:43 np0005539552 ovn_controller[133798]: 2025-11-29T08:44:43Z|00908|binding|INFO|Claiming lport ffc8d5ab-9373-4bc4-b60e-cde39ecf014f for this chassis.
Nov 29 03:44:43 np0005539552 kernel: tapffc8d5ab-93 (unregistering): left promiscuous mode
Nov 29 03:44:43 np0005539552 ovn_controller[133798]: 2025-11-29T08:44:43Z|00909|binding|INFO|ffc8d5ab-9373-4bc4-b60e-cde39ecf014f: Claiming fa:16:3e:cc:46:19 10.100.0.6
Nov 29 03:44:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:43.552 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:46:19 10.100.0.6'], port_security=['fa:16:3e:cc:46:19 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3a9139ee-300c-4b0c-897f-218b8cde7e38', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-faff47d9-b197-488e-92f7-8e8d5ec1eec7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0471b9b208874403aa3f0fbe7504ad19', 'neutron:revision_number': '8', 'neutron:security_group_ids': '643f49e8-7bc0-48db-8ba9-730954386800', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=019e6a06-0d9a-4978-9f73-700cb716f877, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=ffc8d5ab-9373-4bc4-b60e-cde39ecf014f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:44:43 np0005539552 nova_compute[233724]: 2025-11-29 08:44:43.563 233728 INFO nova.virt.libvirt.driver [-] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Instance destroyed successfully.#033[00m
Nov 29 03:44:43 np0005539552 nova_compute[233724]: 2025-11-29 08:44:43.564 233728 DEBUG nova.objects.instance [None req-c1e56cd3-994d-4c7f-aec6-a6a15c12d496 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lazy-loading 'resources' on Instance uuid 3a9139ee-300c-4b0c-897f-218b8cde7e38 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:44:43 np0005539552 ovn_controller[133798]: 2025-11-29T08:44:43Z|00910|binding|INFO|Setting lport ffc8d5ab-9373-4bc4-b60e-cde39ecf014f ovn-installed in OVS
Nov 29 03:44:43 np0005539552 nova_compute[233724]: 2025-11-29 08:44:43.567 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:43 np0005539552 ovn_controller[133798]: 2025-11-29T08:44:43Z|00911|binding|INFO|Setting lport ffc8d5ab-9373-4bc4-b60e-cde39ecf014f up in Southbound
Nov 29 03:44:43 np0005539552 ovn_controller[133798]: 2025-11-29T08:44:43Z|00912|binding|INFO|Releasing lport ffc8d5ab-9373-4bc4-b60e-cde39ecf014f from this chassis (sb_readonly=1)
Nov 29 03:44:43 np0005539552 ovn_controller[133798]: 2025-11-29T08:44:43Z|00913|if_status|INFO|Dropped 2 log messages in last 751 seconds (most recently, 751 seconds ago) due to excessive rate
Nov 29 03:44:43 np0005539552 ovn_controller[133798]: 2025-11-29T08:44:43Z|00914|if_status|INFO|Not setting lport ffc8d5ab-9373-4bc4-b60e-cde39ecf014f down as sb is readonly
Nov 29 03:44:43 np0005539552 ovn_controller[133798]: 2025-11-29T08:44:43Z|00915|binding|INFO|Removing iface tapffc8d5ab-93 ovn-installed in OVS
Nov 29 03:44:43 np0005539552 nova_compute[233724]: 2025-11-29 08:44:43.571 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:43 np0005539552 nova_compute[233724]: 2025-11-29 08:44:43.572 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:43 np0005539552 ovn_controller[133798]: 2025-11-29T08:44:43Z|00916|binding|INFO|Releasing lport ffc8d5ab-9373-4bc4-b60e-cde39ecf014f from this chassis (sb_readonly=0)
Nov 29 03:44:43 np0005539552 ovn_controller[133798]: 2025-11-29T08:44:43Z|00917|binding|INFO|Setting lport ffc8d5ab-9373-4bc4-b60e-cde39ecf014f down in Southbound
Nov 29 03:44:43 np0005539552 nova_compute[233724]: 2025-11-29 08:44:43.577 233728 DEBUG nova.virt.libvirt.vif [None req-c1e56cd3-994d-4c7f-aec6-a6a15c12d496 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:43:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-271286906',display_name='tempest-TestNetworkBasicOps-server-271286906',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-271286906',id=201,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC/6bzZlSe8/jL2wzb/LbfFMoJ0Go2sJh9V7k/5pnGSkLdFEr6m7Swr+/JX1lcABMOTiHa+dqs7Tbn8uamcNOWXDhw/Wnug8RQEdhnNhkG5pMjJwrIrNrkC9cj1YcSvhqw==',key_name='tempest-TestNetworkBasicOps-107713680',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:43:31Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-x774dwg7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:43:31Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=3a9139ee-300c-4b0c-897f-218b8cde7e38,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ffc8d5ab-9373-4bc4-b60e-cde39ecf014f", "address": "fa:16:3e:cc:46:19", "network": {"id": "faff47d9-b197-488e-92f7-8e8d5ec1eec7", "bridge": "br-int", "label": "tempest-network-smoke--1652969305", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffc8d5ab-93", "ovs_interfaceid": "ffc8d5ab-9373-4bc4-b60e-cde39ecf014f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:44:43 np0005539552 nova_compute[233724]: 2025-11-29 08:44:43.577 233728 DEBUG nova.network.os_vif_util [None req-c1e56cd3-994d-4c7f-aec6-a6a15c12d496 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "ffc8d5ab-9373-4bc4-b60e-cde39ecf014f", "address": "fa:16:3e:cc:46:19", "network": {"id": "faff47d9-b197-488e-92f7-8e8d5ec1eec7", "bridge": "br-int", "label": "tempest-network-smoke--1652969305", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffc8d5ab-93", "ovs_interfaceid": "ffc8d5ab-9373-4bc4-b60e-cde39ecf014f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:44:43 np0005539552 nova_compute[233724]: 2025-11-29 08:44:43.578 233728 DEBUG nova.network.os_vif_util [None req-c1e56cd3-994d-4c7f-aec6-a6a15c12d496 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cc:46:19,bridge_name='br-int',has_traffic_filtering=True,id=ffc8d5ab-9373-4bc4-b60e-cde39ecf014f,network=Network(faff47d9-b197-488e-92f7-8e8d5ec1eec7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffc8d5ab-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:44:43 np0005539552 nova_compute[233724]: 2025-11-29 08:44:43.578 233728 DEBUG os_vif [None req-c1e56cd3-994d-4c7f-aec6-a6a15c12d496 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cc:46:19,bridge_name='br-int',has_traffic_filtering=True,id=ffc8d5ab-9373-4bc4-b60e-cde39ecf014f,network=Network(faff47d9-b197-488e-92f7-8e8d5ec1eec7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffc8d5ab-93') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:44:43 np0005539552 nova_compute[233724]: 2025-11-29 08:44:43.580 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:43 np0005539552 nova_compute[233724]: 2025-11-29 08:44:43.580 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapffc8d5ab-93, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:44:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:43.580 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:46:19 10.100.0.6'], port_security=['fa:16:3e:cc:46:19 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3a9139ee-300c-4b0c-897f-218b8cde7e38', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-faff47d9-b197-488e-92f7-8e8d5ec1eec7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0471b9b208874403aa3f0fbe7504ad19', 'neutron:revision_number': '8', 'neutron:security_group_ids': '643f49e8-7bc0-48db-8ba9-730954386800', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=019e6a06-0d9a-4978-9f73-700cb716f877, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=ffc8d5ab-9373-4bc4-b60e-cde39ecf014f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:44:43 np0005539552 neutron-haproxy-ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7[316715]: [NOTICE]   (316719) : haproxy version is 2.8.14-c23fe91
Nov 29 03:44:43 np0005539552 neutron-haproxy-ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7[316715]: [NOTICE]   (316719) : path to executable is /usr/sbin/haproxy
Nov 29 03:44:43 np0005539552 neutron-haproxy-ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7[316715]: [WARNING]  (316719) : Exiting Master process...
Nov 29 03:44:43 np0005539552 neutron-haproxy-ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7[316715]: [WARNING]  (316719) : Exiting Master process...
Nov 29 03:44:43 np0005539552 nova_compute[233724]: 2025-11-29 08:44:43.582 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:43 np0005539552 neutron-haproxy-ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7[316715]: [ALERT]    (316719) : Current worker (316721) exited with code 143 (Terminated)
Nov 29 03:44:43 np0005539552 neutron-haproxy-ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7[316715]: [WARNING]  (316719) : All workers exited. Exiting... (0)
Nov 29 03:44:43 np0005539552 nova_compute[233724]: 2025-11-29 08:44:43.587 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:44:43 np0005539552 systemd[1]: libpod-aa6fb338de06e16a885f29c3a80ec1cca8338b73435a58972609cedd4b03632f.scope: Deactivated successfully.
Nov 29 03:44:43 np0005539552 nova_compute[233724]: 2025-11-29 08:44:43.591 233728 INFO os_vif [None req-c1e56cd3-994d-4c7f-aec6-a6a15c12d496 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cc:46:19,bridge_name='br-int',has_traffic_filtering=True,id=ffc8d5ab-9373-4bc4-b60e-cde39ecf014f,network=Network(faff47d9-b197-488e-92f7-8e8d5ec1eec7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffc8d5ab-93')#033[00m
Nov 29 03:44:43 np0005539552 podman[317827]: 2025-11-29 08:44:43.595882712 +0000 UTC m=+0.067703743 container died aa6fb338de06e16a885f29c3a80ec1cca8338b73435a58972609cedd4b03632f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:44:43 np0005539552 podman[317753]: 2025-11-29 08:44:43.606756035 +0000 UTC m=+0.178817954 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true)
Nov 29 03:44:43 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-aa6fb338de06e16a885f29c3a80ec1cca8338b73435a58972609cedd4b03632f-userdata-shm.mount: Deactivated successfully.
Nov 29 03:44:43 np0005539552 systemd[1]: var-lib-containers-storage-overlay-9d4f1f7fbce742442dc6c0ce1b76df61a4f463db1c6cc6dd9852a81522d6d1bd-merged.mount: Deactivated successfully.
Nov 29 03:44:43 np0005539552 podman[317827]: 2025-11-29 08:44:43.641216273 +0000 UTC m=+0.113037304 container cleanup aa6fb338de06e16a885f29c3a80ec1cca8338b73435a58972609cedd4b03632f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:44:43 np0005539552 systemd[1]: libpod-conmon-aa6fb338de06e16a885f29c3a80ec1cca8338b73435a58972609cedd4b03632f.scope: Deactivated successfully.
Nov 29 03:44:43 np0005539552 nova_compute[233724]: 2025-11-29 08:44:43.685 233728 DEBUG nova.compute.manager [req-e042e38b-26cf-4494-a89b-d96d3de6aa12 req-1ae484da-2022-4236-9302-40eae6a1301d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Received event network-vif-unplugged-ffc8d5ab-9373-4bc4-b60e-cde39ecf014f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:44:43 np0005539552 nova_compute[233724]: 2025-11-29 08:44:43.685 233728 DEBUG oslo_concurrency.lockutils [req-e042e38b-26cf-4494-a89b-d96d3de6aa12 req-1ae484da-2022-4236-9302-40eae6a1301d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "3a9139ee-300c-4b0c-897f-218b8cde7e38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:43 np0005539552 nova_compute[233724]: 2025-11-29 08:44:43.686 233728 DEBUG oslo_concurrency.lockutils [req-e042e38b-26cf-4494-a89b-d96d3de6aa12 req-1ae484da-2022-4236-9302-40eae6a1301d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3a9139ee-300c-4b0c-897f-218b8cde7e38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:43 np0005539552 nova_compute[233724]: 2025-11-29 08:44:43.686 233728 DEBUG oslo_concurrency.lockutils [req-e042e38b-26cf-4494-a89b-d96d3de6aa12 req-1ae484da-2022-4236-9302-40eae6a1301d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3a9139ee-300c-4b0c-897f-218b8cde7e38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:43 np0005539552 nova_compute[233724]: 2025-11-29 08:44:43.686 233728 DEBUG nova.compute.manager [req-e042e38b-26cf-4494-a89b-d96d3de6aa12 req-1ae484da-2022-4236-9302-40eae6a1301d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] No waiting events found dispatching network-vif-unplugged-ffc8d5ab-9373-4bc4-b60e-cde39ecf014f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:44:43 np0005539552 nova_compute[233724]: 2025-11-29 08:44:43.686 233728 DEBUG nova.compute.manager [req-e042e38b-26cf-4494-a89b-d96d3de6aa12 req-1ae484da-2022-4236-9302-40eae6a1301d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Received event network-vif-unplugged-ffc8d5ab-9373-4bc4-b60e-cde39ecf014f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:44:43 np0005539552 podman[317883]: 2025-11-29 08:44:43.705877903 +0000 UTC m=+0.042901765 container remove aa6fb338de06e16a885f29c3a80ec1cca8338b73435a58972609cedd4b03632f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:44:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:43.716 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[9a2abae7-0357-4801-94cd-6bd8d60bb057]: (4, ('Sat Nov 29 08:44:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7 (aa6fb338de06e16a885f29c3a80ec1cca8338b73435a58972609cedd4b03632f)\naa6fb338de06e16a885f29c3a80ec1cca8338b73435a58972609cedd4b03632f\nSat Nov 29 08:44:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7 (aa6fb338de06e16a885f29c3a80ec1cca8338b73435a58972609cedd4b03632f)\naa6fb338de06e16a885f29c3a80ec1cca8338b73435a58972609cedd4b03632f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:43.718 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[bb475458-db35-43f9-824b-306a300418c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:43.719 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfaff47d9-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:44:43 np0005539552 nova_compute[233724]: 2025-11-29 08:44:43.720 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:43 np0005539552 kernel: tapfaff47d9-b0: left promiscuous mode
Nov 29 03:44:43 np0005539552 nova_compute[233724]: 2025-11-29 08:44:43.722 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:43.724 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0db9a5b7-6672-473d-ab38-12d21f4b083b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:43 np0005539552 nova_compute[233724]: 2025-11-29 08:44:43.744 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:43.749 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c81102c9-a159-44af-8b91-d8d52284db52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:43.751 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a8b8bd45-2363-4d38-af8e-a7ab3e611b03]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:43.768 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[34013aca-3d8f-4b93-9abf-505eb4079d73]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 885606, 'reachable_time': 17206, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317899, 'error': None, 'target': 'ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:43 np0005539552 systemd[1]: run-netns-ovnmeta\x2dfaff47d9\x2db197\x2d488e\x2d92f7\x2d8e8d5ec1eec7.mount: Deactivated successfully.
Nov 29 03:44:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:43.774 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:44:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:43.774 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[e9bb34f2-6bcc-449e-9a65-cfdca52c580b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:43.775 143400 INFO neutron.agent.ovn.metadata.agent [-] Port ffc8d5ab-9373-4bc4-b60e-cde39ecf014f in datapath faff47d9-b197-488e-92f7-8e8d5ec1eec7 unbound from our chassis#033[00m
Nov 29 03:44:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:43.777 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network faff47d9-b197-488e-92f7-8e8d5ec1eec7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:44:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:43.778 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a565e918-5514-4136-a1fb-14dce2814330]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:43.779 143400 INFO neutron.agent.ovn.metadata.agent [-] Port ffc8d5ab-9373-4bc4-b60e-cde39ecf014f in datapath faff47d9-b197-488e-92f7-8e8d5ec1eec7 unbound from our chassis#033[00m
Nov 29 03:44:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:43.781 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network faff47d9-b197-488e-92f7-8e8d5ec1eec7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:44:43 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:44:43.781 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[65ebc329-dc8b-4d58-b07e-9c74d67b4a4c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:43 np0005539552 nova_compute[233724]: 2025-11-29 08:44:43.989 233728 INFO nova.virt.libvirt.driver [None req-c1e56cd3-994d-4c7f-aec6-a6a15c12d496 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Deleting instance files /var/lib/nova/instances/3a9139ee-300c-4b0c-897f-218b8cde7e38_del#033[00m
Nov 29 03:44:43 np0005539552 nova_compute[233724]: 2025-11-29 08:44:43.990 233728 INFO nova.virt.libvirt.driver [None req-c1e56cd3-994d-4c7f-aec6-a6a15c12d496 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Deletion of /var/lib/nova/instances/3a9139ee-300c-4b0c-897f-218b8cde7e38_del complete#033[00m
Nov 29 03:44:44 np0005539552 nova_compute[233724]: 2025-11-29 08:44:44.059 233728 INFO nova.compute.manager [None req-c1e56cd3-994d-4c7f-aec6-a6a15c12d496 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Took 0.74 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:44:44 np0005539552 nova_compute[233724]: 2025-11-29 08:44:44.060 233728 DEBUG oslo.service.loopingcall [None req-c1e56cd3-994d-4c7f-aec6-a6a15c12d496 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:44:44 np0005539552 nova_compute[233724]: 2025-11-29 08:44:44.061 233728 DEBUG nova.compute.manager [-] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:44:44 np0005539552 nova_compute[233724]: 2025-11-29 08:44:44.061 233728 DEBUG nova.network.neutron [-] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:44:44 np0005539552 nova_compute[233724]: 2025-11-29 08:44:44.570 233728 DEBUG nova.network.neutron [req-99438214-0f14-4333-b502-00559640df5a req-53e33c93-87eb-4e8e-a7c3-a70b67c62469 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Updated VIF entry in instance network info cache for port ffc8d5ab-9373-4bc4-b60e-cde39ecf014f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:44:44 np0005539552 nova_compute[233724]: 2025-11-29 08:44:44.571 233728 DEBUG nova.network.neutron [req-99438214-0f14-4333-b502-00559640df5a req-53e33c93-87eb-4e8e-a7c3-a70b67c62469 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Updating instance_info_cache with network_info: [{"id": "ffc8d5ab-9373-4bc4-b60e-cde39ecf014f", "address": "fa:16:3e:cc:46:19", "network": {"id": "faff47d9-b197-488e-92f7-8e8d5ec1eec7", "bridge": "br-int", "label": "tempest-network-smoke--1652969305", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffc8d5ab-93", "ovs_interfaceid": "ffc8d5ab-9373-4bc4-b60e-cde39ecf014f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:44:44 np0005539552 nova_compute[233724]: 2025-11-29 08:44:44.594 233728 DEBUG oslo_concurrency.lockutils [req-99438214-0f14-4333-b502-00559640df5a req-53e33c93-87eb-4e8e-a7c3-a70b67c62469 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-3a9139ee-300c-4b0c-897f-218b8cde7e38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:44:44 np0005539552 nova_compute[233724]: 2025-11-29 08:44:44.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:44:44 np0005539552 nova_compute[233724]: 2025-11-29 08:44:44.954 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:44 np0005539552 nova_compute[233724]: 2025-11-29 08:44:44.955 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:44 np0005539552 nova_compute[233724]: 2025-11-29 08:44:44.955 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:44 np0005539552 nova_compute[233724]: 2025-11-29 08:44:44.955 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:44:44 np0005539552 nova_compute[233724]: 2025-11-29 08:44:44.955 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:44:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:44:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:44.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:44:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:45.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:45 np0005539552 nova_compute[233724]: 2025-11-29 08:44:45.353 233728 DEBUG nova.network.neutron [-] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:44:45 np0005539552 nova_compute[233724]: 2025-11-29 08:44:45.379 233728 INFO nova.compute.manager [-] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Took 1.32 seconds to deallocate network for instance.#033[00m
Nov 29 03:44:45 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:44:45 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/160701425' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:44:45 np0005539552 nova_compute[233724]: 2025-11-29 08:44:45.441 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:44:45 np0005539552 nova_compute[233724]: 2025-11-29 08:44:45.452 233728 DEBUG oslo_concurrency.lockutils [None req-c1e56cd3-994d-4c7f-aec6-a6a15c12d496 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:45 np0005539552 nova_compute[233724]: 2025-11-29 08:44:45.453 233728 DEBUG oslo_concurrency.lockutils [None req-c1e56cd3-994d-4c7f-aec6-a6a15c12d496 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:45 np0005539552 nova_compute[233724]: 2025-11-29 08:44:45.536 233728 DEBUG oslo_concurrency.processutils [None req-c1e56cd3-994d-4c7f-aec6-a6a15c12d496 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:44:45 np0005539552 nova_compute[233724]: 2025-11-29 08:44:45.583 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-000000ce as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:44:45 np0005539552 nova_compute[233724]: 2025-11-29 08:44:45.583 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-000000ce as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:44:45 np0005539552 nova_compute[233724]: 2025-11-29 08:44:45.694 233728 DEBUG nova.compute.manager [req-43143e9e-e341-4439-b561-b44e588ca7ff req-48636bc1-2767-4523-8862-78dd543792c2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Received event network-vif-deleted-ffc8d5ab-9373-4bc4-b60e-cde39ecf014f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:44:45 np0005539552 nova_compute[233724]: 2025-11-29 08:44:45.786 233728 DEBUG nova.compute.manager [req-8ea65061-d7f0-4dae-afa8-8c8f5124eaef req-06c5ba22-d7bc-415d-96ed-a314b506bc7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Received event network-vif-plugged-ffc8d5ab-9373-4bc4-b60e-cde39ecf014f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:44:45 np0005539552 nova_compute[233724]: 2025-11-29 08:44:45.787 233728 DEBUG oslo_concurrency.lockutils [req-8ea65061-d7f0-4dae-afa8-8c8f5124eaef req-06c5ba22-d7bc-415d-96ed-a314b506bc7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "3a9139ee-300c-4b0c-897f-218b8cde7e38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:45 np0005539552 nova_compute[233724]: 2025-11-29 08:44:45.787 233728 DEBUG oslo_concurrency.lockutils [req-8ea65061-d7f0-4dae-afa8-8c8f5124eaef req-06c5ba22-d7bc-415d-96ed-a314b506bc7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3a9139ee-300c-4b0c-897f-218b8cde7e38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:45 np0005539552 nova_compute[233724]: 2025-11-29 08:44:45.787 233728 DEBUG oslo_concurrency.lockutils [req-8ea65061-d7f0-4dae-afa8-8c8f5124eaef req-06c5ba22-d7bc-415d-96ed-a314b506bc7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3a9139ee-300c-4b0c-897f-218b8cde7e38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:45 np0005539552 nova_compute[233724]: 2025-11-29 08:44:45.787 233728 DEBUG nova.compute.manager [req-8ea65061-d7f0-4dae-afa8-8c8f5124eaef req-06c5ba22-d7bc-415d-96ed-a314b506bc7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] No waiting events found dispatching network-vif-plugged-ffc8d5ab-9373-4bc4-b60e-cde39ecf014f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:44:45 np0005539552 nova_compute[233724]: 2025-11-29 08:44:45.787 233728 WARNING nova.compute.manager [req-8ea65061-d7f0-4dae-afa8-8c8f5124eaef req-06c5ba22-d7bc-415d-96ed-a314b506bc7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Received unexpected event network-vif-plugged-ffc8d5ab-9373-4bc4-b60e-cde39ecf014f for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:44:45 np0005539552 nova_compute[233724]: 2025-11-29 08:44:45.788 233728 DEBUG nova.compute.manager [req-8ea65061-d7f0-4dae-afa8-8c8f5124eaef req-06c5ba22-d7bc-415d-96ed-a314b506bc7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Received event network-vif-plugged-ffc8d5ab-9373-4bc4-b60e-cde39ecf014f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:44:45 np0005539552 nova_compute[233724]: 2025-11-29 08:44:45.788 233728 DEBUG oslo_concurrency.lockutils [req-8ea65061-d7f0-4dae-afa8-8c8f5124eaef req-06c5ba22-d7bc-415d-96ed-a314b506bc7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "3a9139ee-300c-4b0c-897f-218b8cde7e38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:45 np0005539552 nova_compute[233724]: 2025-11-29 08:44:45.788 233728 DEBUG oslo_concurrency.lockutils [req-8ea65061-d7f0-4dae-afa8-8c8f5124eaef req-06c5ba22-d7bc-415d-96ed-a314b506bc7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3a9139ee-300c-4b0c-897f-218b8cde7e38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:45 np0005539552 nova_compute[233724]: 2025-11-29 08:44:45.788 233728 DEBUG oslo_concurrency.lockutils [req-8ea65061-d7f0-4dae-afa8-8c8f5124eaef req-06c5ba22-d7bc-415d-96ed-a314b506bc7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3a9139ee-300c-4b0c-897f-218b8cde7e38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:45 np0005539552 nova_compute[233724]: 2025-11-29 08:44:45.788 233728 DEBUG nova.compute.manager [req-8ea65061-d7f0-4dae-afa8-8c8f5124eaef req-06c5ba22-d7bc-415d-96ed-a314b506bc7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] No waiting events found dispatching network-vif-plugged-ffc8d5ab-9373-4bc4-b60e-cde39ecf014f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:44:45 np0005539552 nova_compute[233724]: 2025-11-29 08:44:45.788 233728 WARNING nova.compute.manager [req-8ea65061-d7f0-4dae-afa8-8c8f5124eaef req-06c5ba22-d7bc-415d-96ed-a314b506bc7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Received unexpected event network-vif-plugged-ffc8d5ab-9373-4bc4-b60e-cde39ecf014f for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:44:45 np0005539552 nova_compute[233724]: 2025-11-29 08:44:45.788 233728 DEBUG nova.compute.manager [req-8ea65061-d7f0-4dae-afa8-8c8f5124eaef req-06c5ba22-d7bc-415d-96ed-a314b506bc7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Received event network-vif-plugged-ffc8d5ab-9373-4bc4-b60e-cde39ecf014f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:44:45 np0005539552 nova_compute[233724]: 2025-11-29 08:44:45.789 233728 DEBUG oslo_concurrency.lockutils [req-8ea65061-d7f0-4dae-afa8-8c8f5124eaef req-06c5ba22-d7bc-415d-96ed-a314b506bc7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "3a9139ee-300c-4b0c-897f-218b8cde7e38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:45 np0005539552 nova_compute[233724]: 2025-11-29 08:44:45.789 233728 DEBUG oslo_concurrency.lockutils [req-8ea65061-d7f0-4dae-afa8-8c8f5124eaef req-06c5ba22-d7bc-415d-96ed-a314b506bc7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3a9139ee-300c-4b0c-897f-218b8cde7e38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:45 np0005539552 nova_compute[233724]: 2025-11-29 08:44:45.789 233728 DEBUG oslo_concurrency.lockutils [req-8ea65061-d7f0-4dae-afa8-8c8f5124eaef req-06c5ba22-d7bc-415d-96ed-a314b506bc7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3a9139ee-300c-4b0c-897f-218b8cde7e38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:45 np0005539552 nova_compute[233724]: 2025-11-29 08:44:45.789 233728 DEBUG nova.compute.manager [req-8ea65061-d7f0-4dae-afa8-8c8f5124eaef req-06c5ba22-d7bc-415d-96ed-a314b506bc7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] No waiting events found dispatching network-vif-plugged-ffc8d5ab-9373-4bc4-b60e-cde39ecf014f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:44:45 np0005539552 nova_compute[233724]: 2025-11-29 08:44:45.789 233728 WARNING nova.compute.manager [req-8ea65061-d7f0-4dae-afa8-8c8f5124eaef req-06c5ba22-d7bc-415d-96ed-a314b506bc7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Received unexpected event network-vif-plugged-ffc8d5ab-9373-4bc4-b60e-cde39ecf014f for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:44:45 np0005539552 nova_compute[233724]: 2025-11-29 08:44:45.789 233728 DEBUG nova.compute.manager [req-8ea65061-d7f0-4dae-afa8-8c8f5124eaef req-06c5ba22-d7bc-415d-96ed-a314b506bc7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Received event network-vif-unplugged-ffc8d5ab-9373-4bc4-b60e-cde39ecf014f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:44:45 np0005539552 nova_compute[233724]: 2025-11-29 08:44:45.789 233728 DEBUG oslo_concurrency.lockutils [req-8ea65061-d7f0-4dae-afa8-8c8f5124eaef req-06c5ba22-d7bc-415d-96ed-a314b506bc7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "3a9139ee-300c-4b0c-897f-218b8cde7e38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:45 np0005539552 nova_compute[233724]: 2025-11-29 08:44:45.790 233728 DEBUG oslo_concurrency.lockutils [req-8ea65061-d7f0-4dae-afa8-8c8f5124eaef req-06c5ba22-d7bc-415d-96ed-a314b506bc7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3a9139ee-300c-4b0c-897f-218b8cde7e38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:45 np0005539552 nova_compute[233724]: 2025-11-29 08:44:45.790 233728 DEBUG oslo_concurrency.lockutils [req-8ea65061-d7f0-4dae-afa8-8c8f5124eaef req-06c5ba22-d7bc-415d-96ed-a314b506bc7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3a9139ee-300c-4b0c-897f-218b8cde7e38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:45 np0005539552 nova_compute[233724]: 2025-11-29 08:44:45.790 233728 DEBUG nova.compute.manager [req-8ea65061-d7f0-4dae-afa8-8c8f5124eaef req-06c5ba22-d7bc-415d-96ed-a314b506bc7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] No waiting events found dispatching network-vif-unplugged-ffc8d5ab-9373-4bc4-b60e-cde39ecf014f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:44:45 np0005539552 nova_compute[233724]: 2025-11-29 08:44:45.790 233728 WARNING nova.compute.manager [req-8ea65061-d7f0-4dae-afa8-8c8f5124eaef req-06c5ba22-d7bc-415d-96ed-a314b506bc7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Received unexpected event network-vif-unplugged-ffc8d5ab-9373-4bc4-b60e-cde39ecf014f for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:44:45 np0005539552 nova_compute[233724]: 2025-11-29 08:44:45.790 233728 DEBUG nova.compute.manager [req-8ea65061-d7f0-4dae-afa8-8c8f5124eaef req-06c5ba22-d7bc-415d-96ed-a314b506bc7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Received event network-vif-plugged-ffc8d5ab-9373-4bc4-b60e-cde39ecf014f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:44:45 np0005539552 nova_compute[233724]: 2025-11-29 08:44:45.790 233728 DEBUG oslo_concurrency.lockutils [req-8ea65061-d7f0-4dae-afa8-8c8f5124eaef req-06c5ba22-d7bc-415d-96ed-a314b506bc7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "3a9139ee-300c-4b0c-897f-218b8cde7e38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:45 np0005539552 nova_compute[233724]: 2025-11-29 08:44:45.790 233728 DEBUG oslo_concurrency.lockutils [req-8ea65061-d7f0-4dae-afa8-8c8f5124eaef req-06c5ba22-d7bc-415d-96ed-a314b506bc7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3a9139ee-300c-4b0c-897f-218b8cde7e38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:45 np0005539552 nova_compute[233724]: 2025-11-29 08:44:45.791 233728 DEBUG oslo_concurrency.lockutils [req-8ea65061-d7f0-4dae-afa8-8c8f5124eaef req-06c5ba22-d7bc-415d-96ed-a314b506bc7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3a9139ee-300c-4b0c-897f-218b8cde7e38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:45 np0005539552 nova_compute[233724]: 2025-11-29 08:44:45.791 233728 DEBUG nova.compute.manager [req-8ea65061-d7f0-4dae-afa8-8c8f5124eaef req-06c5ba22-d7bc-415d-96ed-a314b506bc7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] No waiting events found dispatching network-vif-plugged-ffc8d5ab-9373-4bc4-b60e-cde39ecf014f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:44:45 np0005539552 nova_compute[233724]: 2025-11-29 08:44:45.791 233728 WARNING nova.compute.manager [req-8ea65061-d7f0-4dae-afa8-8c8f5124eaef req-06c5ba22-d7bc-415d-96ed-a314b506bc7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Received unexpected event network-vif-plugged-ffc8d5ab-9373-4bc4-b60e-cde39ecf014f for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:44:45 np0005539552 nova_compute[233724]: 2025-11-29 08:44:45.836 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:44:45 np0005539552 nova_compute[233724]: 2025-11-29 08:44:45.837 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3951MB free_disk=20.85114288330078GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:44:45 np0005539552 nova_compute[233724]: 2025-11-29 08:44:45.837 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:46 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:44:46 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3079687348' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:44:46 np0005539552 nova_compute[233724]: 2025-11-29 08:44:46.131 233728 DEBUG oslo_concurrency.processutils [None req-c1e56cd3-994d-4c7f-aec6-a6a15c12d496 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:44:46 np0005539552 nova_compute[233724]: 2025-11-29 08:44:46.137 233728 DEBUG nova.compute.provider_tree [None req-c1e56cd3-994d-4c7f-aec6-a6a15c12d496 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:44:46 np0005539552 nova_compute[233724]: 2025-11-29 08:44:46.152 233728 DEBUG nova.scheduler.client.report [None req-c1e56cd3-994d-4c7f-aec6-a6a15c12d496 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:44:46 np0005539552 nova_compute[233724]: 2025-11-29 08:44:46.173 233728 DEBUG oslo_concurrency.lockutils [None req-c1e56cd3-994d-4c7f-aec6-a6a15c12d496 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:46 np0005539552 nova_compute[233724]: 2025-11-29 08:44:46.177 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.339s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:46 np0005539552 ovn_controller[133798]: 2025-11-29T08:44:46Z|00918|binding|INFO|Releasing lport 1cbe83e7-a1b1-4865-9743-7778d9db9685 from this chassis (sb_readonly=0)
Nov 29 03:44:46 np0005539552 nova_compute[233724]: 2025-11-29 08:44:46.207 233728 INFO nova.scheduler.client.report [None req-c1e56cd3-994d-4c7f-aec6-a6a15c12d496 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Deleted allocations for instance 3a9139ee-300c-4b0c-897f-218b8cde7e38#033[00m
Nov 29 03:44:46 np0005539552 nova_compute[233724]: 2025-11-29 08:44:46.253 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 4208aeda-c433-4ac5-8312-fb09ac9f4f83 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:44:46 np0005539552 nova_compute[233724]: 2025-11-29 08:44:46.254 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:44:46 np0005539552 nova_compute[233724]: 2025-11-29 08:44:46.254 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:44:46 np0005539552 nova_compute[233724]: 2025-11-29 08:44:46.292 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:44:46 np0005539552 nova_compute[233724]: 2025-11-29 08:44:46.341 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:46 np0005539552 nova_compute[233724]: 2025-11-29 08:44:46.348 233728 DEBUG oslo_concurrency.lockutils [None req-c1e56cd3-994d-4c7f-aec6-a6a15c12d496 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "3a9139ee-300c-4b0c-897f-218b8cde7e38" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.033s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:46 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:44:46 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3146867387' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:44:46 np0005539552 nova_compute[233724]: 2025-11-29 08:44:46.870 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:44:46 np0005539552 nova_compute[233724]: 2025-11-29 08:44:46.877 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:44:46 np0005539552 nova_compute[233724]: 2025-11-29 08:44:46.888 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:44:46 np0005539552 nova_compute[233724]: 2025-11-29 08:44:46.906 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:44:46 np0005539552 nova_compute[233724]: 2025-11-29 08:44:46.906 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:44:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:46.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:44:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:47.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:47 np0005539552 nova_compute[233724]: 2025-11-29 08:44:47.431 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:47 np0005539552 nova_compute[233724]: 2025-11-29 08:44:47.901 233728 DEBUG nova.compute.manager [req-604460a6-2c87-41e6-817b-7a91cdc5462f req-eda7cb67-2ebf-4848-8272-7025c8811a92 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Received event network-changed-5e109659-0e3c-4696-9678-2c5962204b98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:44:47 np0005539552 nova_compute[233724]: 2025-11-29 08:44:47.902 233728 DEBUG nova.compute.manager [req-604460a6-2c87-41e6-817b-7a91cdc5462f req-eda7cb67-2ebf-4848-8272-7025c8811a92 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Refreshing instance network info cache due to event network-changed-5e109659-0e3c-4696-9678-2c5962204b98. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:44:47 np0005539552 nova_compute[233724]: 2025-11-29 08:44:47.903 233728 DEBUG oslo_concurrency.lockutils [req-604460a6-2c87-41e6-817b-7a91cdc5462f req-eda7cb67-2ebf-4848-8272-7025c8811a92 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-4208aeda-c433-4ac5-8312-fb09ac9f4f83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:44:47 np0005539552 nova_compute[233724]: 2025-11-29 08:44:47.904 233728 DEBUG oslo_concurrency.lockutils [req-604460a6-2c87-41e6-817b-7a91cdc5462f req-eda7cb67-2ebf-4848-8272-7025c8811a92 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-4208aeda-c433-4ac5-8312-fb09ac9f4f83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:44:47 np0005539552 nova_compute[233724]: 2025-11-29 08:44:47.904 233728 DEBUG nova.network.neutron [req-604460a6-2c87-41e6-817b-7a91cdc5462f req-eda7cb67-2ebf-4848-8272-7025c8811a92 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Refreshing network info cache for port 5e109659-0e3c-4696-9678-2c5962204b98 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:44:48 np0005539552 nova_compute[233724]: 2025-11-29 08:44:48.582 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:44:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:48.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:44:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:49.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:49 np0005539552 nova_compute[233724]: 2025-11-29 08:44:49.470 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:49 np0005539552 ovn_controller[133798]: 2025-11-29T08:44:49Z|00919|binding|INFO|Releasing lport 1cbe83e7-a1b1-4865-9743-7778d9db9685 from this chassis (sb_readonly=0)
Nov 29 03:44:49 np0005539552 nova_compute[233724]: 2025-11-29 08:44:49.813 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:50 np0005539552 nova_compute[233724]: 2025-11-29 08:44:50.485 233728 DEBUG nova.network.neutron [req-604460a6-2c87-41e6-817b-7a91cdc5462f req-eda7cb67-2ebf-4848-8272-7025c8811a92 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Updated VIF entry in instance network info cache for port 5e109659-0e3c-4696-9678-2c5962204b98. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:44:50 np0005539552 nova_compute[233724]: 2025-11-29 08:44:50.486 233728 DEBUG nova.network.neutron [req-604460a6-2c87-41e6-817b-7a91cdc5462f req-eda7cb67-2ebf-4848-8272-7025c8811a92 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Updating instance_info_cache with network_info: [{"id": "5e109659-0e3c-4696-9678-2c5962204b98", "address": "fa:16:3e:88:43:29", "network": {"id": "c60162ec-f468-4b5f-bd91-89b0a1cb9fa1", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-514529391-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66ee3b60fb89476383201ba204858d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e109659-0e", "ovs_interfaceid": "5e109659-0e3c-4696-9678-2c5962204b98", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:44:50 np0005539552 nova_compute[233724]: 2025-11-29 08:44:50.508 233728 DEBUG oslo_concurrency.lockutils [req-604460a6-2c87-41e6-817b-7a91cdc5462f req-eda7cb67-2ebf-4848-8272-7025c8811a92 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-4208aeda-c433-4ac5-8312-fb09ac9f4f83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:44:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:50.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:51.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:52 np0005539552 nova_compute[233724]: 2025-11-29 08:44:52.433 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:52 np0005539552 nova_compute[233724]: 2025-11-29 08:44:52.910 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:44:52 np0005539552 nova_compute[233724]: 2025-11-29 08:44:52.911 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:44:52 np0005539552 nova_compute[233724]: 2025-11-29 08:44:52.947 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:44:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:52.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:44:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:53.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:53 np0005539552 nova_compute[233724]: 2025-11-29 08:44:53.585 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:53 np0005539552 nova_compute[233724]: 2025-11-29 08:44:53.922 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:53 np0005539552 nova_compute[233724]: 2025-11-29 08:44:53.925 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:44:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:54.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:55.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:56 np0005539552 nova_compute[233724]: 2025-11-29 08:44:56.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:44:56 np0005539552 nova_compute[233724]: 2025-11-29 08:44:56.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:44:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:44:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:57.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:44:57 np0005539552 ovn_controller[133798]: 2025-11-29T08:44:57Z|00119|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.3 does not match offer 10.100.0.5
Nov 29 03:44:57 np0005539552 ovn_controller[133798]: 2025-11-29T08:44:57Z|00120|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:88:43:29 10.100.0.5
Nov 29 03:44:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:57.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:57 np0005539552 nova_compute[233724]: 2025-11-29 08:44:57.436 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:58 np0005539552 nova_compute[233724]: 2025-11-29 08:44:58.118 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:58 np0005539552 nova_compute[233724]: 2025-11-29 08:44:58.558 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405883.5567186, 3a9139ee-300c-4b0c-897f-218b8cde7e38 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:44:58 np0005539552 nova_compute[233724]: 2025-11-29 08:44:58.559 233728 INFO nova.compute.manager [-] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:44:58 np0005539552 nova_compute[233724]: 2025-11-29 08:44:58.586 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:58 np0005539552 nova_compute[233724]: 2025-11-29 08:44:58.596 233728 DEBUG nova.compute.manager [None req-ef553680-9caf-43d6-ac19-51e056150dd7 - - - - - -] [instance: 3a9139ee-300c-4b0c-897f-218b8cde7e38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:44:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:58 np0005539552 nova_compute[233724]: 2025-11-29 08:44:58.919 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:44:58 np0005539552 nova_compute[233724]: 2025-11-29 08:44:58.922 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:44:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:44:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:59.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:44:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:44:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:59.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:00 np0005539552 nova_compute[233724]: 2025-11-29 08:45:00.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:01.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:01 np0005539552 ovn_controller[133798]: 2025-11-29T08:45:01Z|00121|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.3 does not match offer 10.100.0.5
Nov 29 03:45:01 np0005539552 ovn_controller[133798]: 2025-11-29T08:45:01Z|00122|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:88:43:29 10.100.0.5
Nov 29 03:45:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:01.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:01 np0005539552 nova_compute[233724]: 2025-11-29 08:45:01.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:01 np0005539552 nova_compute[233724]: 2025-11-29 08:45:01.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:45:01 np0005539552 nova_compute[233724]: 2025-11-29 08:45:01.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:45:02 np0005539552 ovn_controller[133798]: 2025-11-29T08:45:02Z|00123|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:88:43:29 10.100.0.5
Nov 29 03:45:02 np0005539552 ovn_controller[133798]: 2025-11-29T08:45:02Z|00124|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:88:43:29 10.100.0.5
Nov 29 03:45:02 np0005539552 nova_compute[233724]: 2025-11-29 08:45:02.438 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:02 np0005539552 nova_compute[233724]: 2025-11-29 08:45:02.446 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "refresh_cache-4208aeda-c433-4ac5-8312-fb09ac9f4f83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:45:02 np0005539552 nova_compute[233724]: 2025-11-29 08:45:02.447 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquired lock "refresh_cache-4208aeda-c433-4ac5-8312-fb09ac9f4f83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:45:02 np0005539552 nova_compute[233724]: 2025-11-29 08:45:02.447 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:45:02 np0005539552 nova_compute[233724]: 2025-11-29 08:45:02.448 233728 DEBUG nova.objects.instance [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4208aeda-c433-4ac5-8312-fb09ac9f4f83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:45:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:03.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:03.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:03 np0005539552 nova_compute[233724]: 2025-11-29 08:45:03.587 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:05.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:05.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:06 np0005539552 nova_compute[233724]: 2025-11-29 08:45:06.876 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Updating instance_info_cache with network_info: [{"id": "5e109659-0e3c-4696-9678-2c5962204b98", "address": "fa:16:3e:88:43:29", "network": {"id": "c60162ec-f468-4b5f-bd91-89b0a1cb9fa1", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-514529391-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66ee3b60fb89476383201ba204858d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e109659-0e", "ovs_interfaceid": "5e109659-0e3c-4696-9678-2c5962204b98", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:45:06 np0005539552 nova_compute[233724]: 2025-11-29 08:45:06.897 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Releasing lock "refresh_cache-4208aeda-c433-4ac5-8312-fb09ac9f4f83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:45:06 np0005539552 nova_compute[233724]: 2025-11-29 08:45:06.897 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:45:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:07.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:07.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:07 np0005539552 nova_compute[233724]: 2025-11-29 08:45:07.441 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:08 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:45:08 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 13K writes, 68K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s#012Cumulative WAL: 13K writes, 13K syncs, 1.00 writes per sync, written: 0.14 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1681 writes, 8610 keys, 1681 commit groups, 1.0 writes per commit group, ingest: 16.52 MB, 0.03 MB/s#012Interval WAL: 1681 writes, 1681 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     26.5      3.22              0.28        43    0.075       0      0       0.0       0.0#012  L6      1/0   12.26 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   5.4     99.5     85.9      5.34              1.34        42    0.127    314K    22K       0.0       0.0#012 Sum      1/0   12.26 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.4     62.1     63.6      8.56              1.62        85    0.101    314K    22K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   9.1    116.6    116.7      0.81              0.37        14    0.058     69K   3684       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0     99.5     85.9      5.34              1.34        42    0.127    314K    22K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     26.9      3.17              0.28        42    0.076       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.044       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 5400.1 total, 600.0 interval#012Flush(GB): cumulative 0.083, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.53 GB write, 0.10 MB/s write, 0.52 GB read, 0.10 MB/s read, 8.6 seconds#012Interval compaction: 0.09 GB write, 0.16 MB/s write, 0.09 GB read, 0.16 MB/s read, 0.8 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560bf13bd1f0#2 capacity: 304.00 MB usage: 56.17 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000425 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3167,53.87 MB,17.7214%) FilterBlock(85,867.05 KB,0.278528%) IndexBlock(85,1.45 MB,0.477148%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 03:45:08 np0005539552 nova_compute[233724]: 2025-11-29 08:45:08.589 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:09.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:09.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:11.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:11.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:11 np0005539552 nova_compute[233724]: 2025-11-29 08:45:11.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:11 np0005539552 nova_compute[233724]: 2025-11-29 08:45:11.948 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:11 np0005539552 nova_compute[233724]: 2025-11-29 08:45:11.949 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 03:45:12 np0005539552 nova_compute[233724]: 2025-11-29 08:45:12.444 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:13.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:13.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:13 np0005539552 nova_compute[233724]: 2025-11-29 08:45:13.591 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:14 np0005539552 podman[318091]: 2025-11-29 08:45:14.010424173 +0000 UTC m=+0.089237193 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent)
Nov 29 03:45:14 np0005539552 podman[318090]: 2025-11-29 08:45:14.029986679 +0000 UTC m=+0.108450310 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd)
Nov 29 03:45:14 np0005539552 podman[318092]: 2025-11-29 08:45:14.07831988 +0000 UTC m=+0.151576111 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:45:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:15.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:15.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:17.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:17.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:17 np0005539552 nova_compute[233724]: 2025-11-29 08:45:17.447 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:18 np0005539552 nova_compute[233724]: 2025-11-29 08:45:18.593 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:19.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:19.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:19 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:45:19 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:45:19 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:45:20 np0005539552 nova_compute[233724]: 2025-11-29 08:45:20.406 233728 DEBUG nova.compute.manager [None req-9b679fd6-1b15-4014-909c-344ae955347d 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:45:20 np0005539552 nova_compute[233724]: 2025-11-29 08:45:20.470 233728 INFO nova.compute.manager [None req-9b679fd6-1b15-4014-909c-344ae955347d 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] instance snapshotting#033[00m
Nov 29 03:45:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:45:20.651 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:45:20.652 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:45:20.653 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:21.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:21 np0005539552 nova_compute[233724]: 2025-11-29 08:45:21.204 233728 INFO nova.virt.libvirt.driver [None req-9b679fd6-1b15-4014-909c-344ae955347d 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Beginning live snapshot process#033[00m
Nov 29 03:45:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:21.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:21 np0005539552 nova_compute[233724]: 2025-11-29 08:45:21.406 233728 DEBUG nova.storage.rbd_utils [None req-9b679fd6-1b15-4014-909c-344ae955347d 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] creating snapshot(296de05c4cc641a98863054d38a0d3c1) on rbd image(4208aeda-c433-4ac5-8312-fb09ac9f4f83_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:45:21 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e405 e405: 3 total, 3 up, 3 in
Nov 29 03:45:22 np0005539552 nova_compute[233724]: 2025-11-29 08:45:22.023 233728 DEBUG nova.storage.rbd_utils [None req-9b679fd6-1b15-4014-909c-344ae955347d 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] cloning vms/4208aeda-c433-4ac5-8312-fb09ac9f4f83_disk@296de05c4cc641a98863054d38a0d3c1 to images/ea9de80a-851e-4cb9-ab40-64d7952ffbd1 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:45:22 np0005539552 nova_compute[233724]: 2025-11-29 08:45:22.186 233728 DEBUG nova.storage.rbd_utils [None req-9b679fd6-1b15-4014-909c-344ae955347d 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] flattening images/ea9de80a-851e-4cb9-ab40-64d7952ffbd1 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 29 03:45:22 np0005539552 nova_compute[233724]: 2025-11-29 08:45:22.452 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:22 np0005539552 nova_compute[233724]: 2025-11-29 08:45:22.749 233728 DEBUG nova.storage.rbd_utils [None req-9b679fd6-1b15-4014-909c-344ae955347d 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] removing snapshot(296de05c4cc641a98863054d38a0d3c1) on rbd image(4208aeda-c433-4ac5-8312-fb09ac9f4f83_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:45:22 np0005539552 nova_compute[233724]: 2025-11-29 08:45:22.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e406 e406: 3 total, 3 up, 3 in
Nov 29 03:45:23 np0005539552 nova_compute[233724]: 2025-11-29 08:45:23.035 233728 DEBUG nova.storage.rbd_utils [None req-9b679fd6-1b15-4014-909c-344ae955347d 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] creating snapshot(snap) on rbd image(ea9de80a-851e-4cb9-ab40-64d7952ffbd1) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:45:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:23.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:23.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:23 np0005539552 nova_compute[233724]: 2025-11-29 08:45:23.596 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e406 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:24 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e407 e407: 3 total, 3 up, 3 in
Nov 29 03:45:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:25.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:25.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:26 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:45:26 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:45:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:27.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:27 np0005539552 nova_compute[233724]: 2025-11-29 08:45:27.145 233728 INFO nova.virt.libvirt.driver [None req-9b679fd6-1b15-4014-909c-344ae955347d 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Snapshot image upload complete#033[00m
Nov 29 03:45:27 np0005539552 nova_compute[233724]: 2025-11-29 08:45:27.146 233728 INFO nova.compute.manager [None req-9b679fd6-1b15-4014-909c-344ae955347d 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Took 6.67 seconds to snapshot the instance on the hypervisor.#033[00m
Nov 29 03:45:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:27.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:27 np0005539552 nova_compute[233724]: 2025-11-29 08:45:27.454 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e408 e408: 3 total, 3 up, 3 in
Nov 29 03:45:28 np0005539552 nova_compute[233724]: 2025-11-29 08:45:28.599 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e408 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:29.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:29.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:30 np0005539552 nova_compute[233724]: 2025-11-29 08:45:30.063 233728 DEBUG nova.compute.manager [req-a6b8e7eb-1614-44fa-b36e-0f62d5a7deb9 req-cef3aaf5-a496-4076-af84-03a51398b9e3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Received event network-changed-5e109659-0e3c-4696-9678-2c5962204b98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:45:30 np0005539552 nova_compute[233724]: 2025-11-29 08:45:30.064 233728 DEBUG nova.compute.manager [req-a6b8e7eb-1614-44fa-b36e-0f62d5a7deb9 req-cef3aaf5-a496-4076-af84-03a51398b9e3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Refreshing instance network info cache due to event network-changed-5e109659-0e3c-4696-9678-2c5962204b98. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:45:30 np0005539552 nova_compute[233724]: 2025-11-29 08:45:30.064 233728 DEBUG oslo_concurrency.lockutils [req-a6b8e7eb-1614-44fa-b36e-0f62d5a7deb9 req-cef3aaf5-a496-4076-af84-03a51398b9e3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-4208aeda-c433-4ac5-8312-fb09ac9f4f83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:45:30 np0005539552 nova_compute[233724]: 2025-11-29 08:45:30.064 233728 DEBUG oslo_concurrency.lockutils [req-a6b8e7eb-1614-44fa-b36e-0f62d5a7deb9 req-cef3aaf5-a496-4076-af84-03a51398b9e3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-4208aeda-c433-4ac5-8312-fb09ac9f4f83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:45:30 np0005539552 nova_compute[233724]: 2025-11-29 08:45:30.065 233728 DEBUG nova.network.neutron [req-a6b8e7eb-1614-44fa-b36e-0f62d5a7deb9 req-cef3aaf5-a496-4076-af84-03a51398b9e3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Refreshing network info cache for port 5e109659-0e3c-4696-9678-2c5962204b98 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:45:30 np0005539552 nova_compute[233724]: 2025-11-29 08:45:30.262 233728 DEBUG oslo_concurrency.lockutils [None req-98121fb0-814b-4110-8728-e9ba86b0d9df 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Acquiring lock "4208aeda-c433-4ac5-8312-fb09ac9f4f83" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:30 np0005539552 nova_compute[233724]: 2025-11-29 08:45:30.263 233728 DEBUG oslo_concurrency.lockutils [None req-98121fb0-814b-4110-8728-e9ba86b0d9df 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Lock "4208aeda-c433-4ac5-8312-fb09ac9f4f83" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:30 np0005539552 nova_compute[233724]: 2025-11-29 08:45:30.264 233728 DEBUG oslo_concurrency.lockutils [None req-98121fb0-814b-4110-8728-e9ba86b0d9df 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Acquiring lock "4208aeda-c433-4ac5-8312-fb09ac9f4f83-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:30 np0005539552 nova_compute[233724]: 2025-11-29 08:45:30.264 233728 DEBUG oslo_concurrency.lockutils [None req-98121fb0-814b-4110-8728-e9ba86b0d9df 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Lock "4208aeda-c433-4ac5-8312-fb09ac9f4f83-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:30 np0005539552 nova_compute[233724]: 2025-11-29 08:45:30.265 233728 DEBUG oslo_concurrency.lockutils [None req-98121fb0-814b-4110-8728-e9ba86b0d9df 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Lock "4208aeda-c433-4ac5-8312-fb09ac9f4f83-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:30 np0005539552 nova_compute[233724]: 2025-11-29 08:45:30.267 233728 INFO nova.compute.manager [None req-98121fb0-814b-4110-8728-e9ba86b0d9df 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Terminating instance#033[00m
Nov 29 03:45:30 np0005539552 nova_compute[233724]: 2025-11-29 08:45:30.270 233728 DEBUG nova.compute.manager [None req-98121fb0-814b-4110-8728-e9ba86b0d9df 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:45:30 np0005539552 kernel: tap5e109659-0e (unregistering): left promiscuous mode
Nov 29 03:45:30 np0005539552 NetworkManager[48926]: <info>  [1764405930.3353] device (tap5e109659-0e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:45:30 np0005539552 ovn_controller[133798]: 2025-11-29T08:45:30Z|00920|binding|INFO|Releasing lport 5e109659-0e3c-4696-9678-2c5962204b98 from this chassis (sb_readonly=0)
Nov 29 03:45:30 np0005539552 ovn_controller[133798]: 2025-11-29T08:45:30Z|00921|binding|INFO|Setting lport 5e109659-0e3c-4696-9678-2c5962204b98 down in Southbound
Nov 29 03:45:30 np0005539552 nova_compute[233724]: 2025-11-29 08:45:30.345 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:30 np0005539552 ovn_controller[133798]: 2025-11-29T08:45:30Z|00922|binding|INFO|Removing iface tap5e109659-0e ovn-installed in OVS
Nov 29 03:45:30 np0005539552 nova_compute[233724]: 2025-11-29 08:45:30.349 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:45:30.354 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:43:29 10.100.0.5'], port_security=['fa:16:3e:88:43:29 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '4208aeda-c433-4ac5-8312-fb09ac9f4f83', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '66ee3b60fb89476383201ba204858d4d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '291b52bf-18bb-41c3-a977-3e030dbdb988', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3b579620-af3c-4534-8cbc-29d18f0dd8a7, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=5e109659-0e3c-4696-9678-2c5962204b98) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:45:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:45:30.356 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 5e109659-0e3c-4696-9678-2c5962204b98 in datapath c60162ec-f468-4b5f-bd91-89b0a1cb9fa1 unbound from our chassis#033[00m
Nov 29 03:45:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:45:30.359 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c60162ec-f468-4b5f-bd91-89b0a1cb9fa1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:45:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:45:30.360 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c1e0046f-c7d9-4256-b353-943ec923c1f9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:45:30.361 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1 namespace which is not needed anymore#033[00m
Nov 29 03:45:30 np0005539552 nova_compute[233724]: 2025-11-29 08:45:30.383 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:30 np0005539552 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d000000ce.scope: Deactivated successfully.
Nov 29 03:45:30 np0005539552 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d000000ce.scope: Consumed 17.357s CPU time.
Nov 29 03:45:30 np0005539552 systemd-machined[196379]: Machine qemu-92-instance-000000ce terminated.
Nov 29 03:45:30 np0005539552 nova_compute[233724]: 2025-11-29 08:45:30.517 233728 INFO nova.virt.libvirt.driver [-] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Instance destroyed successfully.#033[00m
Nov 29 03:45:30 np0005539552 nova_compute[233724]: 2025-11-29 08:45:30.517 233728 DEBUG nova.objects.instance [None req-98121fb0-814b-4110-8728-e9ba86b0d9df 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Lazy-loading 'resources' on Instance uuid 4208aeda-c433-4ac5-8312-fb09ac9f4f83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:45:30 np0005539552 nova_compute[233724]: 2025-11-29 08:45:30.530 233728 DEBUG nova.virt.libvirt.vif [None req-98121fb0-814b-4110-8728-e9ba86b0d9df 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:44:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-767379208',display_name='tempest-TestSnapshotPattern-server-767379208',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-767379208',id=206,image_ref='19c04997-1e5f-42a9-90ff-a53c93a49ed0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGvmAjf7VDN0MoYAmHVgdY8l8+1v5wjwJNh4fBpCc/IwM7etIRNnxNIuXJ33y4wtb07HCVtVAHbkNdZ/qEkgOQyG3Oc8WVN/7z3fiAu47wM+5lJvW0Y+dOBmLvwkMU2fbA==',key_name='tempest-TestSnapshotPattern-945427268',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:44:41Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='66ee3b60fb89476383201ba204858d4d',ramdisk_id='',reservation_id='r-0d89fxqh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='da14f334-c7fe-428d-b6d2-32c2f4cc4054',image_min_disk='1',image_min_ram='0',image_owner_id='66ee3b60fb89476383201ba204858d4d',image_owner_project_name='tempest-TestSnapshotPattern-1740419556',image_owner_user_name='tempest-TestSnapshotPattern-1740419556-project-member',image_user_id='5ac9cfdaf51b4a5ab874f7e3571f88a0',image_version='8.0',owner_project_name='tempest-TestSnapshotPattern-1740419556',owner_user_name='tempest-TestSnapshotPattern-1740419556-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:45:27Z,user_data=None,user_id='5ac9cfdaf51b4a5ab874f7e3571f88a0',uuid=4208aeda-c433-4ac5-8312-fb09ac9f4f83,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5e109659-0e3c-4696-9678-2c5962204b98", "address": "fa:16:3e:88:43:29", "network": {"id": "c60162ec-f468-4b5f-bd91-89b0a1cb9fa1", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-514529391-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66ee3b60fb89476383201ba204858d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e109659-0e", "ovs_interfaceid": "5e109659-0e3c-4696-9678-2c5962204b98", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:45:30 np0005539552 nova_compute[233724]: 2025-11-29 08:45:30.531 233728 DEBUG nova.network.os_vif_util [None req-98121fb0-814b-4110-8728-e9ba86b0d9df 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Converting VIF {"id": "5e109659-0e3c-4696-9678-2c5962204b98", "address": "fa:16:3e:88:43:29", "network": {"id": "c60162ec-f468-4b5f-bd91-89b0a1cb9fa1", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-514529391-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66ee3b60fb89476383201ba204858d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e109659-0e", "ovs_interfaceid": "5e109659-0e3c-4696-9678-2c5962204b98", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:45:30 np0005539552 nova_compute[233724]: 2025-11-29 08:45:30.532 233728 DEBUG nova.network.os_vif_util [None req-98121fb0-814b-4110-8728-e9ba86b0d9df 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:88:43:29,bridge_name='br-int',has_traffic_filtering=True,id=5e109659-0e3c-4696-9678-2c5962204b98,network=Network(c60162ec-f468-4b5f-bd91-89b0a1cb9fa1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e109659-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:45:30 np0005539552 nova_compute[233724]: 2025-11-29 08:45:30.532 233728 DEBUG os_vif [None req-98121fb0-814b-4110-8728-e9ba86b0d9df 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:88:43:29,bridge_name='br-int',has_traffic_filtering=True,id=5e109659-0e3c-4696-9678-2c5962204b98,network=Network(c60162ec-f468-4b5f-bd91-89b0a1cb9fa1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e109659-0e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:45:30 np0005539552 nova_compute[233724]: 2025-11-29 08:45:30.534 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:30 np0005539552 nova_compute[233724]: 2025-11-29 08:45:30.535 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e109659-0e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:45:30 np0005539552 nova_compute[233724]: 2025-11-29 08:45:30.536 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:30 np0005539552 nova_compute[233724]: 2025-11-29 08:45:30.540 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:45:30 np0005539552 nova_compute[233724]: 2025-11-29 08:45:30.543 233728 INFO os_vif [None req-98121fb0-814b-4110-8728-e9ba86b0d9df 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:88:43:29,bridge_name='br-int',has_traffic_filtering=True,id=5e109659-0e3c-4696-9678-2c5962204b98,network=Network(c60162ec-f468-4b5f-bd91-89b0a1cb9fa1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e109659-0e')#033[00m
Nov 29 03:45:30 np0005539552 neutron-haproxy-ovnmeta-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1[317732]: [NOTICE]   (317736) : haproxy version is 2.8.14-c23fe91
Nov 29 03:45:30 np0005539552 neutron-haproxy-ovnmeta-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1[317732]: [NOTICE]   (317736) : path to executable is /usr/sbin/haproxy
Nov 29 03:45:30 np0005539552 neutron-haproxy-ovnmeta-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1[317732]: [WARNING]  (317736) : Exiting Master process...
Nov 29 03:45:30 np0005539552 neutron-haproxy-ovnmeta-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1[317732]: [ALERT]    (317736) : Current worker (317738) exited with code 143 (Terminated)
Nov 29 03:45:30 np0005539552 neutron-haproxy-ovnmeta-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1[317732]: [WARNING]  (317736) : All workers exited. Exiting... (0)
Nov 29 03:45:30 np0005539552 systemd[1]: libpod-b60b86d0ff929e77aa7f18544c46c7639fd276ca1cccb0cc893c871148257820.scope: Deactivated successfully.
Nov 29 03:45:30 np0005539552 podman[318506]: 2025-11-29 08:45:30.569127651 +0000 UTC m=+0.069033870 container died b60b86d0ff929e77aa7f18544c46c7639fd276ca1cccb0cc893c871148257820 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:45:30 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b60b86d0ff929e77aa7f18544c46c7639fd276ca1cccb0cc893c871148257820-userdata-shm.mount: Deactivated successfully.
Nov 29 03:45:30 np0005539552 systemd[1]: var-lib-containers-storage-overlay-0789c5d8da3dc1a101e4c54e832767b6ce33f3e4b9490627316e535550e72fe1-merged.mount: Deactivated successfully.
Nov 29 03:45:30 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e409 e409: 3 total, 3 up, 3 in
Nov 29 03:45:30 np0005539552 podman[318506]: 2025-11-29 08:45:30.626137025 +0000 UTC m=+0.126043244 container cleanup b60b86d0ff929e77aa7f18544c46c7639fd276ca1cccb0cc893c871148257820 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 03:45:30 np0005539552 systemd[1]: libpod-conmon-b60b86d0ff929e77aa7f18544c46c7639fd276ca1cccb0cc893c871148257820.scope: Deactivated successfully.
Nov 29 03:45:30 np0005539552 nova_compute[233724]: 2025-11-29 08:45:30.677 233728 DEBUG nova.compute.manager [req-07d573cf-1078-430d-a581-48b8c32d5606 req-2c2d5d26-a690-4a85-a747-96f20ae0cf41 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Received event network-vif-unplugged-5e109659-0e3c-4696-9678-2c5962204b98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:45:30 np0005539552 nova_compute[233724]: 2025-11-29 08:45:30.677 233728 DEBUG oslo_concurrency.lockutils [req-07d573cf-1078-430d-a581-48b8c32d5606 req-2c2d5d26-a690-4a85-a747-96f20ae0cf41 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "4208aeda-c433-4ac5-8312-fb09ac9f4f83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:30 np0005539552 nova_compute[233724]: 2025-11-29 08:45:30.678 233728 DEBUG oslo_concurrency.lockutils [req-07d573cf-1078-430d-a581-48b8c32d5606 req-2c2d5d26-a690-4a85-a747-96f20ae0cf41 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "4208aeda-c433-4ac5-8312-fb09ac9f4f83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:30 np0005539552 nova_compute[233724]: 2025-11-29 08:45:30.678 233728 DEBUG oslo_concurrency.lockutils [req-07d573cf-1078-430d-a581-48b8c32d5606 req-2c2d5d26-a690-4a85-a747-96f20ae0cf41 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "4208aeda-c433-4ac5-8312-fb09ac9f4f83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:30 np0005539552 nova_compute[233724]: 2025-11-29 08:45:30.678 233728 DEBUG nova.compute.manager [req-07d573cf-1078-430d-a581-48b8c32d5606 req-2c2d5d26-a690-4a85-a747-96f20ae0cf41 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] No waiting events found dispatching network-vif-unplugged-5e109659-0e3c-4696-9678-2c5962204b98 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:45:30 np0005539552 nova_compute[233724]: 2025-11-29 08:45:30.678 233728 DEBUG nova.compute.manager [req-07d573cf-1078-430d-a581-48b8c32d5606 req-2c2d5d26-a690-4a85-a747-96f20ae0cf41 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Received event network-vif-unplugged-5e109659-0e3c-4696-9678-2c5962204b98 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:45:30 np0005539552 podman[318565]: 2025-11-29 08:45:30.698126023 +0000 UTC m=+0.043287006 container remove b60b86d0ff929e77aa7f18544c46c7639fd276ca1cccb0cc893c871148257820 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:45:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:45:30.704 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2502dd41-a2e1-4e4e-936c-837487b44da1]: (4, ('Sat Nov 29 08:45:30 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1 (b60b86d0ff929e77aa7f18544c46c7639fd276ca1cccb0cc893c871148257820)\nb60b86d0ff929e77aa7f18544c46c7639fd276ca1cccb0cc893c871148257820\nSat Nov 29 08:45:30 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1 (b60b86d0ff929e77aa7f18544c46c7639fd276ca1cccb0cc893c871148257820)\nb60b86d0ff929e77aa7f18544c46c7639fd276ca1cccb0cc893c871148257820\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:45:30.707 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2645c6ac-ebaf-4b1b-8399-234019ac15c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:45:30.708 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc60162ec-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:45:30 np0005539552 nova_compute[233724]: 2025-11-29 08:45:30.711 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:30 np0005539552 kernel: tapc60162ec-f0: left promiscuous mode
Nov 29 03:45:30 np0005539552 nova_compute[233724]: 2025-11-29 08:45:30.724 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:45:30.727 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ac2167c6-c222-422a-b3a2-09863474ef10]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:45:30.739 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[36cde9ce-9d84-4937-bb3d-81a97903f6ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:45:30.740 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c2b931f1-22ce-4b90-83c5-d46486bb3867]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:45:30.757 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[941bbbca-aebd-474c-ab6b-c7168c53bb33]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 892549, 'reachable_time': 18487, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318580, 'error': None, 'target': 'ovnmeta-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:30 np0005539552 systemd[1]: run-netns-ovnmeta\x2dc60162ec\x2df468\x2d4b5f\x2dbd91\x2d89b0a1cb9fa1.mount: Deactivated successfully.
Nov 29 03:45:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:45:30.761 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c60162ec-f468-4b5f-bd91-89b0a1cb9fa1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:45:30 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:45:30.761 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[3357c3c8-33fc-499b-9512-124b060accf9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:31.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:31 np0005539552 nova_compute[233724]: 2025-11-29 08:45:31.072 233728 INFO nova.virt.libvirt.driver [None req-98121fb0-814b-4110-8728-e9ba86b0d9df 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Deleting instance files /var/lib/nova/instances/4208aeda-c433-4ac5-8312-fb09ac9f4f83_del#033[00m
Nov 29 03:45:31 np0005539552 nova_compute[233724]: 2025-11-29 08:45:31.073 233728 INFO nova.virt.libvirt.driver [None req-98121fb0-814b-4110-8728-e9ba86b0d9df 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Deletion of /var/lib/nova/instances/4208aeda-c433-4ac5-8312-fb09ac9f4f83_del complete#033[00m
Nov 29 03:45:31 np0005539552 nova_compute[233724]: 2025-11-29 08:45:31.139 233728 INFO nova.compute.manager [None req-98121fb0-814b-4110-8728-e9ba86b0d9df 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Took 0.87 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:45:31 np0005539552 nova_compute[233724]: 2025-11-29 08:45:31.140 233728 DEBUG oslo.service.loopingcall [None req-98121fb0-814b-4110-8728-e9ba86b0d9df 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:45:31 np0005539552 nova_compute[233724]: 2025-11-29 08:45:31.141 233728 DEBUG nova.compute.manager [-] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:45:31 np0005539552 nova_compute[233724]: 2025-11-29 08:45:31.141 233728 DEBUG nova.network.neutron [-] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:45:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:31.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:31 np0005539552 nova_compute[233724]: 2025-11-29 08:45:31.974 233728 DEBUG nova.network.neutron [-] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:45:32 np0005539552 nova_compute[233724]: 2025-11-29 08:45:32.000 233728 DEBUG nova.network.neutron [req-a6b8e7eb-1614-44fa-b36e-0f62d5a7deb9 req-cef3aaf5-a496-4076-af84-03a51398b9e3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Updated VIF entry in instance network info cache for port 5e109659-0e3c-4696-9678-2c5962204b98. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:45:32 np0005539552 nova_compute[233724]: 2025-11-29 08:45:32.000 233728 DEBUG nova.network.neutron [req-a6b8e7eb-1614-44fa-b36e-0f62d5a7deb9 req-cef3aaf5-a496-4076-af84-03a51398b9e3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Updating instance_info_cache with network_info: [{"id": "5e109659-0e3c-4696-9678-2c5962204b98", "address": "fa:16:3e:88:43:29", "network": {"id": "c60162ec-f468-4b5f-bd91-89b0a1cb9fa1", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-514529391-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66ee3b60fb89476383201ba204858d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e109659-0e", "ovs_interfaceid": "5e109659-0e3c-4696-9678-2c5962204b98", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:45:32 np0005539552 nova_compute[233724]: 2025-11-29 08:45:32.003 233728 INFO nova.compute.manager [-] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Took 0.86 seconds to deallocate network for instance.#033[00m
Nov 29 03:45:32 np0005539552 nova_compute[233724]: 2025-11-29 08:45:32.041 233728 DEBUG oslo_concurrency.lockutils [req-a6b8e7eb-1614-44fa-b36e-0f62d5a7deb9 req-cef3aaf5-a496-4076-af84-03a51398b9e3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-4208aeda-c433-4ac5-8312-fb09ac9f4f83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:45:32 np0005539552 nova_compute[233724]: 2025-11-29 08:45:32.080 233728 DEBUG oslo_concurrency.lockutils [None req-98121fb0-814b-4110-8728-e9ba86b0d9df 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:32 np0005539552 nova_compute[233724]: 2025-11-29 08:45:32.081 233728 DEBUG oslo_concurrency.lockutils [None req-98121fb0-814b-4110-8728-e9ba86b0d9df 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:32 np0005539552 nova_compute[233724]: 2025-11-29 08:45:32.157 233728 DEBUG oslo_concurrency.processutils [None req-98121fb0-814b-4110-8728-e9ba86b0d9df 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:45:32 np0005539552 nova_compute[233724]: 2025-11-29 08:45:32.456 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:32 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:45:32 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/128524030' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:45:32 np0005539552 nova_compute[233724]: 2025-11-29 08:45:32.622 233728 DEBUG oslo_concurrency.processutils [None req-98121fb0-814b-4110-8728-e9ba86b0d9df 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:45:32 np0005539552 nova_compute[233724]: 2025-11-29 08:45:32.628 233728 DEBUG nova.compute.provider_tree [None req-98121fb0-814b-4110-8728-e9ba86b0d9df 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:45:32 np0005539552 nova_compute[233724]: 2025-11-29 08:45:32.646 233728 DEBUG nova.scheduler.client.report [None req-98121fb0-814b-4110-8728-e9ba86b0d9df 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:45:32 np0005539552 nova_compute[233724]: 2025-11-29 08:45:32.671 233728 DEBUG oslo_concurrency.lockutils [None req-98121fb0-814b-4110-8728-e9ba86b0d9df 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:32 np0005539552 nova_compute[233724]: 2025-11-29 08:45:32.714 233728 INFO nova.scheduler.client.report [None req-98121fb0-814b-4110-8728-e9ba86b0d9df 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Deleted allocations for instance 4208aeda-c433-4ac5-8312-fb09ac9f4f83#033[00m
Nov 29 03:45:32 np0005539552 nova_compute[233724]: 2025-11-29 08:45:32.813 233728 DEBUG nova.compute.manager [req-9f3b3fd8-c588-4154-8be5-ac83aca2866b req-2a0a25bb-34e4-4f97-998f-760d89903260 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Received event network-vif-plugged-5e109659-0e3c-4696-9678-2c5962204b98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:45:32 np0005539552 nova_compute[233724]: 2025-11-29 08:45:32.814 233728 DEBUG oslo_concurrency.lockutils [req-9f3b3fd8-c588-4154-8be5-ac83aca2866b req-2a0a25bb-34e4-4f97-998f-760d89903260 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "4208aeda-c433-4ac5-8312-fb09ac9f4f83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:32 np0005539552 nova_compute[233724]: 2025-11-29 08:45:32.814 233728 DEBUG oslo_concurrency.lockutils [req-9f3b3fd8-c588-4154-8be5-ac83aca2866b req-2a0a25bb-34e4-4f97-998f-760d89903260 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "4208aeda-c433-4ac5-8312-fb09ac9f4f83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:32 np0005539552 nova_compute[233724]: 2025-11-29 08:45:32.815 233728 DEBUG oslo_concurrency.lockutils [req-9f3b3fd8-c588-4154-8be5-ac83aca2866b req-2a0a25bb-34e4-4f97-998f-760d89903260 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "4208aeda-c433-4ac5-8312-fb09ac9f4f83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:32 np0005539552 nova_compute[233724]: 2025-11-29 08:45:32.815 233728 DEBUG nova.compute.manager [req-9f3b3fd8-c588-4154-8be5-ac83aca2866b req-2a0a25bb-34e4-4f97-998f-760d89903260 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] No waiting events found dispatching network-vif-plugged-5e109659-0e3c-4696-9678-2c5962204b98 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:45:32 np0005539552 nova_compute[233724]: 2025-11-29 08:45:32.815 233728 WARNING nova.compute.manager [req-9f3b3fd8-c588-4154-8be5-ac83aca2866b req-2a0a25bb-34e4-4f97-998f-760d89903260 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Received unexpected event network-vif-plugged-5e109659-0e3c-4696-9678-2c5962204b98 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:45:32 np0005539552 nova_compute[233724]: 2025-11-29 08:45:32.818 233728 DEBUG oslo_concurrency.lockutils [None req-98121fb0-814b-4110-8728-e9ba86b0d9df 5ac9cfdaf51b4a5ab874f7e3571f88a0 66ee3b60fb89476383201ba204858d4d - - default default] Lock "4208aeda-c433-4ac5-8312-fb09ac9f4f83" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:33.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:33.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:33 np0005539552 nova_compute[233724]: 2025-11-29 08:45:33.408 233728 DEBUG nova.compute.manager [req-cf873eb1-2e8c-4103-9619-0fae76e7126a req-4247af95-2223-4b0b-ae6d-42ef306c21da 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Received event network-vif-deleted-5e109659-0e3c-4696-9678-2c5962204b98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:45:33 np0005539552 nova_compute[233724]: 2025-11-29 08:45:33.409 233728 INFO nova.compute.manager [req-cf873eb1-2e8c-4103-9619-0fae76e7126a req-4247af95-2223-4b0b-ae6d-42ef306c21da 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Neutron deleted interface 5e109659-0e3c-4696-9678-2c5962204b98; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 03:45:33 np0005539552 nova_compute[233724]: 2025-11-29 08:45:33.410 233728 DEBUG nova.network.neutron [req-cf873eb1-2e8c-4103-9619-0fae76e7126a req-4247af95-2223-4b0b-ae6d-42ef306c21da 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Nov 29 03:45:33 np0005539552 nova_compute[233724]: 2025-11-29 08:45:33.413 233728 DEBUG nova.compute.manager [req-cf873eb1-2e8c-4103-9619-0fae76e7126a req-4247af95-2223-4b0b-ae6d-42ef306c21da 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Detach interface failed, port_id=5e109659-0e3c-4696-9678-2c5962204b98, reason: Instance 4208aeda-c433-4ac5-8312-fb09ac9f4f83 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 29 03:45:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:34 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e410 e410: 3 total, 3 up, 3 in
Nov 29 03:45:34 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:45:34 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2920539502' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:45:34 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:45:34 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2920539502' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:45:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:35.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:35.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:35 np0005539552 nova_compute[233724]: 2025-11-29 08:45:35.538 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:35 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e411 e411: 3 total, 3 up, 3 in
Nov 29 03:45:36 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e412 e412: 3 total, 3 up, 3 in
Nov 29 03:45:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:37.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:45:37.142 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=61, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=60) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:45:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:45:37.142 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:45:37 np0005539552 nova_compute[233724]: 2025-11-29 08:45:37.143 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:37.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:37 np0005539552 nova_compute[233724]: 2025-11-29 08:45:37.458 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e413 e413: 3 total, 3 up, 3 in
Nov 29 03:45:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:39.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:39.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:40 np0005539552 nova_compute[233724]: 2025-11-29 08:45:40.543 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:40 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e414 e414: 3 total, 3 up, 3 in
Nov 29 03:45:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:41.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:41.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:42 np0005539552 nova_compute[233724]: 2025-11-29 08:45:42.055 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:42 np0005539552 nova_compute[233724]: 2025-11-29 08:45:42.057 233728 DEBUG oslo_concurrency.lockutils [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "1ba36bc8-b875-41d5-b9b4-95810b1a43d0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:42 np0005539552 nova_compute[233724]: 2025-11-29 08:45:42.058 233728 DEBUG oslo_concurrency.lockutils [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "1ba36bc8-b875-41d5-b9b4-95810b1a43d0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:42 np0005539552 nova_compute[233724]: 2025-11-29 08:45:42.082 233728 DEBUG nova.compute.manager [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:45:42 np0005539552 nova_compute[233724]: 2025-11-29 08:45:42.089 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:42 np0005539552 nova_compute[233724]: 2025-11-29 08:45:42.089 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 03:45:42 np0005539552 nova_compute[233724]: 2025-11-29 08:45:42.114 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 03:45:42 np0005539552 nova_compute[233724]: 2025-11-29 08:45:42.189 233728 DEBUG oslo_concurrency.lockutils [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:42 np0005539552 nova_compute[233724]: 2025-11-29 08:45:42.190 233728 DEBUG oslo_concurrency.lockutils [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:42 np0005539552 nova_compute[233724]: 2025-11-29 08:45:42.199 233728 DEBUG nova.virt.hardware [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:45:42 np0005539552 nova_compute[233724]: 2025-11-29 08:45:42.199 233728 INFO nova.compute.claims [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:45:42 np0005539552 nova_compute[233724]: 2025-11-29 08:45:42.319 233728 DEBUG oslo_concurrency.processutils [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:45:42 np0005539552 nova_compute[233724]: 2025-11-29 08:45:42.382 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:42 np0005539552 nova_compute[233724]: 2025-11-29 08:45:42.459 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:45:42 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3959673734' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:45:42 np0005539552 nova_compute[233724]: 2025-11-29 08:45:42.845 233728 DEBUG oslo_concurrency.processutils [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:45:42 np0005539552 nova_compute[233724]: 2025-11-29 08:45:42.852 233728 DEBUG nova.compute.provider_tree [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:45:42 np0005539552 nova_compute[233724]: 2025-11-29 08:45:42.871 233728 DEBUG nova.scheduler.client.report [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:45:42 np0005539552 nova_compute[233724]: 2025-11-29 08:45:42.892 233728 DEBUG oslo_concurrency.lockutils [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:42 np0005539552 nova_compute[233724]: 2025-11-29 08:45:42.893 233728 DEBUG nova.compute.manager [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:45:42 np0005539552 nova_compute[233724]: 2025-11-29 08:45:42.940 233728 DEBUG nova.compute.manager [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:45:42 np0005539552 nova_compute[233724]: 2025-11-29 08:45:42.940 233728 DEBUG nova.network.neutron [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:45:42 np0005539552 nova_compute[233724]: 2025-11-29 08:45:42.976 233728 INFO nova.virt.libvirt.driver [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:45:42 np0005539552 nova_compute[233724]: 2025-11-29 08:45:42.999 233728 DEBUG nova.compute.manager [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:45:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:43.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:43 np0005539552 nova_compute[233724]: 2025-11-29 08:45:43.107 233728 DEBUG nova.compute.manager [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:45:43 np0005539552 nova_compute[233724]: 2025-11-29 08:45:43.109 233728 DEBUG nova.virt.libvirt.driver [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:45:43 np0005539552 nova_compute[233724]: 2025-11-29 08:45:43.110 233728 INFO nova.virt.libvirt.driver [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Creating image(s)#033[00m
Nov 29 03:45:43 np0005539552 nova_compute[233724]: 2025-11-29 08:45:43.152 233728 DEBUG nova.storage.rbd_utils [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image 1ba36bc8-b875-41d5-b9b4-95810b1a43d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:45:43 np0005539552 nova_compute[233724]: 2025-11-29 08:45:43.195 233728 DEBUG nova.storage.rbd_utils [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image 1ba36bc8-b875-41d5-b9b4-95810b1a43d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:45:43 np0005539552 nova_compute[233724]: 2025-11-29 08:45:43.236 233728 DEBUG nova.storage.rbd_utils [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image 1ba36bc8-b875-41d5-b9b4-95810b1a43d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:45:43 np0005539552 nova_compute[233724]: 2025-11-29 08:45:43.241 233728 DEBUG oslo_concurrency.processutils [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:45:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:43.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:43 np0005539552 nova_compute[233724]: 2025-11-29 08:45:43.342 233728 DEBUG oslo_concurrency.processutils [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:45:43 np0005539552 nova_compute[233724]: 2025-11-29 08:45:43.344 233728 DEBUG oslo_concurrency.lockutils [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:43 np0005539552 nova_compute[233724]: 2025-11-29 08:45:43.345 233728 DEBUG oslo_concurrency.lockutils [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:43 np0005539552 nova_compute[233724]: 2025-11-29 08:45:43.346 233728 DEBUG oslo_concurrency.lockutils [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:43 np0005539552 nova_compute[233724]: 2025-11-29 08:45:43.388 233728 DEBUG nova.storage.rbd_utils [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image 1ba36bc8-b875-41d5-b9b4-95810b1a43d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:45:43 np0005539552 nova_compute[233724]: 2025-11-29 08:45:43.393 233728 DEBUG oslo_concurrency.processutils [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 1ba36bc8-b875-41d5-b9b4-95810b1a43d0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:45:43 np0005539552 nova_compute[233724]: 2025-11-29 08:45:43.642 233728 DEBUG nova.policy [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'de2965680b714b539553cf0792584e1e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '75423dfb570f4b2bbc2f8de4f3a65d18', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:45:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:43 np0005539552 nova_compute[233724]: 2025-11-29 08:45:43.748 233728 DEBUG oslo_concurrency.processutils [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 1ba36bc8-b875-41d5-b9b4-95810b1a43d0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.355s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:45:43 np0005539552 nova_compute[233724]: 2025-11-29 08:45:43.863 233728 DEBUG nova.storage.rbd_utils [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] resizing rbd image 1ba36bc8-b875-41d5-b9b4-95810b1a43d0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:45:44 np0005539552 nova_compute[233724]: 2025-11-29 08:45:44.009 233728 DEBUG nova.objects.instance [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lazy-loading 'migration_context' on Instance uuid 1ba36bc8-b875-41d5-b9b4-95810b1a43d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:45:44 np0005539552 nova_compute[233724]: 2025-11-29 08:45:44.046 233728 DEBUG nova.virt.libvirt.driver [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:45:44 np0005539552 nova_compute[233724]: 2025-11-29 08:45:44.047 233728 DEBUG nova.virt.libvirt.driver [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Ensure instance console log exists: /var/lib/nova/instances/1ba36bc8-b875-41d5-b9b4-95810b1a43d0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:45:44 np0005539552 nova_compute[233724]: 2025-11-29 08:45:44.048 233728 DEBUG oslo_concurrency.lockutils [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:44 np0005539552 nova_compute[233724]: 2025-11-29 08:45:44.049 233728 DEBUG oslo_concurrency.lockutils [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:44 np0005539552 nova_compute[233724]: 2025-11-29 08:45:44.049 233728 DEBUG oslo_concurrency.lockutils [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:45:44.144 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '61'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:45:44 np0005539552 ceph-mgr[77480]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1950343944
Nov 29 03:45:45 np0005539552 podman[318854]: 2025-11-29 08:45:45.017324118 +0000 UTC m=+0.092188163 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125)
Nov 29 03:45:45 np0005539552 podman[318853]: 2025-11-29 08:45:45.026492195 +0000 UTC m=+0.101398021 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Nov 29 03:45:45 np0005539552 podman[318855]: 2025-11-29 08:45:45.063346837 +0000 UTC m=+0.131620984 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:45:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:45.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:45.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:45 np0005539552 nova_compute[233724]: 2025-11-29 08:45:45.515 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405930.5134096, 4208aeda-c433-4ac5-8312-fb09ac9f4f83 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:45:45 np0005539552 nova_compute[233724]: 2025-11-29 08:45:45.516 233728 INFO nova.compute.manager [-] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:45:45 np0005539552 nova_compute[233724]: 2025-11-29 08:45:45.546 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:45 np0005539552 nova_compute[233724]: 2025-11-29 08:45:45.562 233728 DEBUG nova.compute.manager [None req-f1635dea-209b-45f1-9973-621c26e6cdac - - - - - -] [instance: 4208aeda-c433-4ac5-8312-fb09ac9f4f83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:45:45 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e415 e415: 3 total, 3 up, 3 in
Nov 29 03:45:45 np0005539552 nova_compute[233724]: 2025-11-29 08:45:45.950 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:45 np0005539552 nova_compute[233724]: 2025-11-29 08:45:45.975 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:45 np0005539552 nova_compute[233724]: 2025-11-29 08:45:45.975 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:45 np0005539552 nova_compute[233724]: 2025-11-29 08:45:45.975 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:45 np0005539552 nova_compute[233724]: 2025-11-29 08:45:45.976 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:45:45 np0005539552 nova_compute[233724]: 2025-11-29 08:45:45.976 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:45:46 np0005539552 nova_compute[233724]: 2025-11-29 08:45:46.185 233728 DEBUG nova.network.neutron [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Successfully created port: fe4613b0-28a8-493f-a1ef-3390aa9230e4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:45:46 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:45:46 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1832521577' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:45:46 np0005539552 nova_compute[233724]: 2025-11-29 08:45:46.444 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:45:46 np0005539552 nova_compute[233724]: 2025-11-29 08:45:46.685 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:45:46 np0005539552 nova_compute[233724]: 2025-11-29 08:45:46.688 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4078MB free_disk=20.865798950195312GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:45:46 np0005539552 nova_compute[233724]: 2025-11-29 08:45:46.688 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:46 np0005539552 nova_compute[233724]: 2025-11-29 08:45:46.689 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:46 np0005539552 nova_compute[233724]: 2025-11-29 08:45:46.779 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 1ba36bc8-b875-41d5-b9b4-95810b1a43d0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:45:46 np0005539552 nova_compute[233724]: 2025-11-29 08:45:46.780 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:45:46 np0005539552 nova_compute[233724]: 2025-11-29 08:45:46.780 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:45:46 np0005539552 nova_compute[233724]: 2025-11-29 08:45:46.854 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:45:47 np0005539552 nova_compute[233724]: 2025-11-29 08:45:47.050 233728 DEBUG nova.network.neutron [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Successfully updated port: fe4613b0-28a8-493f-a1ef-3390aa9230e4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:45:47 np0005539552 nova_compute[233724]: 2025-11-29 08:45:47.070 233728 DEBUG oslo_concurrency.lockutils [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "refresh_cache-1ba36bc8-b875-41d5-b9b4-95810b1a43d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:45:47 np0005539552 nova_compute[233724]: 2025-11-29 08:45:47.070 233728 DEBUG oslo_concurrency.lockutils [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquired lock "refresh_cache-1ba36bc8-b875-41d5-b9b4-95810b1a43d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:45:47 np0005539552 nova_compute[233724]: 2025-11-29 08:45:47.071 233728 DEBUG nova.network.neutron [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:45:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:47.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:47 np0005539552 nova_compute[233724]: 2025-11-29 08:45:47.162 233728 DEBUG nova.compute.manager [req-ef574411-a4d0-40cc-94fb-9e10f72db795 req-2cb393b8-56c7-48eb-941d-cbc3ade37519 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Received event network-changed-fe4613b0-28a8-493f-a1ef-3390aa9230e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:45:47 np0005539552 nova_compute[233724]: 2025-11-29 08:45:47.162 233728 DEBUG nova.compute.manager [req-ef574411-a4d0-40cc-94fb-9e10f72db795 req-2cb393b8-56c7-48eb-941d-cbc3ade37519 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Refreshing instance network info cache due to event network-changed-fe4613b0-28a8-493f-a1ef-3390aa9230e4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:45:47 np0005539552 nova_compute[233724]: 2025-11-29 08:45:47.163 233728 DEBUG oslo_concurrency.lockutils [req-ef574411-a4d0-40cc-94fb-9e10f72db795 req-2cb393b8-56c7-48eb-941d-cbc3ade37519 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-1ba36bc8-b875-41d5-b9b4-95810b1a43d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:45:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:47.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:47 np0005539552 nova_compute[233724]: 2025-11-29 08:45:47.313 233728 DEBUG nova.network.neutron [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:45:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:45:47 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2129145518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:45:47 np0005539552 nova_compute[233724]: 2025-11-29 08:45:47.365 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:45:47 np0005539552 nova_compute[233724]: 2025-11-29 08:45:47.375 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:45:47 np0005539552 nova_compute[233724]: 2025-11-29 08:45:47.398 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:45:47 np0005539552 nova_compute[233724]: 2025-11-29 08:45:47.439 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:45:47 np0005539552 nova_compute[233724]: 2025-11-29 08:45:47.440 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:47 np0005539552 nova_compute[233724]: 2025-11-29 08:45:47.461 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:45:47 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3676987304' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:45:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:45:47 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3676987304' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:45:48 np0005539552 nova_compute[233724]: 2025-11-29 08:45:48.738 233728 DEBUG nova.network.neutron [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Updating instance_info_cache with network_info: [{"id": "fe4613b0-28a8-493f-a1ef-3390aa9230e4", "address": "fa:16:3e:39:ab:cd", "network": {"id": "7c95871d-d156-4882-b0a0-97ff36c1744a", "bridge": "br-int", "label": "tempest-network-smoke--1036653926", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe4613b0-28", "ovs_interfaceid": "fe4613b0-28a8-493f-a1ef-3390aa9230e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:45:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:48 np0005539552 nova_compute[233724]: 2025-11-29 08:45:48.762 233728 DEBUG oslo_concurrency.lockutils [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Releasing lock "refresh_cache-1ba36bc8-b875-41d5-b9b4-95810b1a43d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:45:48 np0005539552 nova_compute[233724]: 2025-11-29 08:45:48.763 233728 DEBUG nova.compute.manager [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Instance network_info: |[{"id": "fe4613b0-28a8-493f-a1ef-3390aa9230e4", "address": "fa:16:3e:39:ab:cd", "network": {"id": "7c95871d-d156-4882-b0a0-97ff36c1744a", "bridge": "br-int", "label": "tempest-network-smoke--1036653926", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe4613b0-28", "ovs_interfaceid": "fe4613b0-28a8-493f-a1ef-3390aa9230e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:45:48 np0005539552 nova_compute[233724]: 2025-11-29 08:45:48.763 233728 DEBUG oslo_concurrency.lockutils [req-ef574411-a4d0-40cc-94fb-9e10f72db795 req-2cb393b8-56c7-48eb-941d-cbc3ade37519 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-1ba36bc8-b875-41d5-b9b4-95810b1a43d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:45:48 np0005539552 nova_compute[233724]: 2025-11-29 08:45:48.763 233728 DEBUG nova.network.neutron [req-ef574411-a4d0-40cc-94fb-9e10f72db795 req-2cb393b8-56c7-48eb-941d-cbc3ade37519 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Refreshing network info cache for port fe4613b0-28a8-493f-a1ef-3390aa9230e4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:45:48 np0005539552 nova_compute[233724]: 2025-11-29 08:45:48.768 233728 DEBUG nova.virt.libvirt.driver [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Start _get_guest_xml network_info=[{"id": "fe4613b0-28a8-493f-a1ef-3390aa9230e4", "address": "fa:16:3e:39:ab:cd", "network": {"id": "7c95871d-d156-4882-b0a0-97ff36c1744a", "bridge": "br-int", "label": "tempest-network-smoke--1036653926", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe4613b0-28", "ovs_interfaceid": "fe4613b0-28a8-493f-a1ef-3390aa9230e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:45:48 np0005539552 nova_compute[233724]: 2025-11-29 08:45:48.775 233728 WARNING nova.virt.libvirt.driver [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:45:48 np0005539552 nova_compute[233724]: 2025-11-29 08:45:48.780 233728 DEBUG nova.virt.libvirt.host [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:45:48 np0005539552 nova_compute[233724]: 2025-11-29 08:45:48.781 233728 DEBUG nova.virt.libvirt.host [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:45:48 np0005539552 nova_compute[233724]: 2025-11-29 08:45:48.791 233728 DEBUG nova.virt.libvirt.host [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:45:48 np0005539552 nova_compute[233724]: 2025-11-29 08:45:48.792 233728 DEBUG nova.virt.libvirt.host [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:45:48 np0005539552 nova_compute[233724]: 2025-11-29 08:45:48.794 233728 DEBUG nova.virt.libvirt.driver [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:45:48 np0005539552 nova_compute[233724]: 2025-11-29 08:45:48.794 233728 DEBUG nova.virt.hardware [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:45:48 np0005539552 nova_compute[233724]: 2025-11-29 08:45:48.795 233728 DEBUG nova.virt.hardware [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:45:48 np0005539552 nova_compute[233724]: 2025-11-29 08:45:48.796 233728 DEBUG nova.virt.hardware [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:45:48 np0005539552 nova_compute[233724]: 2025-11-29 08:45:48.796 233728 DEBUG nova.virt.hardware [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:45:48 np0005539552 nova_compute[233724]: 2025-11-29 08:45:48.797 233728 DEBUG nova.virt.hardware [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:45:48 np0005539552 nova_compute[233724]: 2025-11-29 08:45:48.797 233728 DEBUG nova.virt.hardware [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:45:48 np0005539552 nova_compute[233724]: 2025-11-29 08:45:48.798 233728 DEBUG nova.virt.hardware [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:45:48 np0005539552 nova_compute[233724]: 2025-11-29 08:45:48.799 233728 DEBUG nova.virt.hardware [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:45:48 np0005539552 nova_compute[233724]: 2025-11-29 08:45:48.799 233728 DEBUG nova.virt.hardware [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:45:48 np0005539552 nova_compute[233724]: 2025-11-29 08:45:48.800 233728 DEBUG nova.virt.hardware [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:45:48 np0005539552 nova_compute[233724]: 2025-11-29 08:45:48.800 233728 DEBUG nova.virt.hardware [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:45:48 np0005539552 nova_compute[233724]: 2025-11-29 08:45:48.806 233728 DEBUG oslo_concurrency.processutils [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:45:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:49.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:49.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:49 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:45:49 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1644570033' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:45:49 np0005539552 nova_compute[233724]: 2025-11-29 08:45:49.379 233728 DEBUG oslo_concurrency.processutils [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:45:49 np0005539552 nova_compute[233724]: 2025-11-29 08:45:49.420 233728 DEBUG nova.storage.rbd_utils [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image 1ba36bc8-b875-41d5-b9b4-95810b1a43d0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:45:49 np0005539552 nova_compute[233724]: 2025-11-29 08:45:49.426 233728 DEBUG oslo_concurrency.processutils [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:45:49 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:45:49 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2428061898' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:45:49 np0005539552 nova_compute[233724]: 2025-11-29 08:45:49.890 233728 DEBUG oslo_concurrency.processutils [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:45:49 np0005539552 nova_compute[233724]: 2025-11-29 08:45:49.892 233728 DEBUG nova.virt.libvirt.vif [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:45:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-gen-1-426634105',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-gen-1-426634105',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1136856573-ge',id=210,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIUCtFnN07vszlWcqLqc3OwtiaY5LGVJmT2ZWYrMbRKMkYGyWdO3eJxi7r32YSVdfdSMmHf98ntN1zt+jX0dvGmgoNoiyZKY2TvD4cve07jeq8QHwsvzbRI+YetMB/qunA==',key_name='tempest-TestSecurityGroupsBasicOps-182396490',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='75423dfb570f4b2bbc2f8de4f3a65d18',ramdisk_id='',reservation_id='r-9omxyghe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1136856573',owner_user_name='tempest-TestSecurityGroupsBasicOps-1136856573-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:45:43Z,user_data=None,user_id='de2965680b714b539553cf0792584e1e',uuid=1ba36bc8-b875-41d5-b9b4-95810b1a43d0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fe4613b0-28a8-493f-a1ef-3390aa9230e4", "address": "fa:16:3e:39:ab:cd", "network": {"id": "7c95871d-d156-4882-b0a0-97ff36c1744a", "bridge": "br-int", "label": "tempest-network-smoke--1036653926", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe4613b0-28", "ovs_interfaceid": "fe4613b0-28a8-493f-a1ef-3390aa9230e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:45:49 np0005539552 nova_compute[233724]: 2025-11-29 08:45:49.892 233728 DEBUG nova.network.os_vif_util [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Converting VIF {"id": "fe4613b0-28a8-493f-a1ef-3390aa9230e4", "address": "fa:16:3e:39:ab:cd", "network": {"id": "7c95871d-d156-4882-b0a0-97ff36c1744a", "bridge": "br-int", "label": "tempest-network-smoke--1036653926", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe4613b0-28", "ovs_interfaceid": "fe4613b0-28a8-493f-a1ef-3390aa9230e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:45:49 np0005539552 nova_compute[233724]: 2025-11-29 08:45:49.893 233728 DEBUG nova.network.os_vif_util [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:ab:cd,bridge_name='br-int',has_traffic_filtering=True,id=fe4613b0-28a8-493f-a1ef-3390aa9230e4,network=Network(7c95871d-d156-4882-b0a0-97ff36c1744a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfe4613b0-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:45:49 np0005539552 nova_compute[233724]: 2025-11-29 08:45:49.894 233728 DEBUG nova.objects.instance [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1ba36bc8-b875-41d5-b9b4-95810b1a43d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:45:49 np0005539552 nova_compute[233724]: 2025-11-29 08:45:49.926 233728 DEBUG nova.virt.libvirt.driver [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:45:49 np0005539552 nova_compute[233724]:  <uuid>1ba36bc8-b875-41d5-b9b4-95810b1a43d0</uuid>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:  <name>instance-000000d2</name>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:45:49 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-gen-1-426634105</nova:name>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:45:48</nova:creationTime>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:45:49 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:        <nova:user uuid="de2965680b714b539553cf0792584e1e">tempest-TestSecurityGroupsBasicOps-1136856573-project-member</nova:user>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:        <nova:project uuid="75423dfb570f4b2bbc2f8de4f3a65d18">tempest-TestSecurityGroupsBasicOps-1136856573</nova:project>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:        <nova:port uuid="fe4613b0-28a8-493f-a1ef-3390aa9230e4">
Nov 29 03:45:49 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:      <entry name="serial">1ba36bc8-b875-41d5-b9b4-95810b1a43d0</entry>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:      <entry name="uuid">1ba36bc8-b875-41d5-b9b4-95810b1a43d0</entry>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:45:49 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/1ba36bc8-b875-41d5-b9b4-95810b1a43d0_disk">
Nov 29 03:45:49 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:45:49 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:45:49 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/1ba36bc8-b875-41d5-b9b4-95810b1a43d0_disk.config">
Nov 29 03:45:49 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:45:49 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:45:49 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:39:ab:cd"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:      <target dev="tapfe4613b0-28"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:45:49 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/1ba36bc8-b875-41d5-b9b4-95810b1a43d0/console.log" append="off"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:45:49 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:45:49 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:45:49 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:45:49 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:45:49 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:45:49 np0005539552 nova_compute[233724]: 2025-11-29 08:45:49.928 233728 DEBUG nova.compute.manager [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Preparing to wait for external event network-vif-plugged-fe4613b0-28a8-493f-a1ef-3390aa9230e4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:45:49 np0005539552 nova_compute[233724]: 2025-11-29 08:45:49.929 233728 DEBUG oslo_concurrency.lockutils [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "1ba36bc8-b875-41d5-b9b4-95810b1a43d0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:49 np0005539552 nova_compute[233724]: 2025-11-29 08:45:49.929 233728 DEBUG oslo_concurrency.lockutils [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "1ba36bc8-b875-41d5-b9b4-95810b1a43d0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:49 np0005539552 nova_compute[233724]: 2025-11-29 08:45:49.930 233728 DEBUG oslo_concurrency.lockutils [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "1ba36bc8-b875-41d5-b9b4-95810b1a43d0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:49 np0005539552 nova_compute[233724]: 2025-11-29 08:45:49.931 233728 DEBUG nova.virt.libvirt.vif [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:45:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-gen-1-426634105',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-gen-1-426634105',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1136856573-ge',id=210,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIUCtFnN07vszlWcqLqc3OwtiaY5LGVJmT2ZWYrMbRKMkYGyWdO3eJxi7r32YSVdfdSMmHf98ntN1zt+jX0dvGmgoNoiyZKY2TvD4cve07jeq8QHwsvzbRI+YetMB/qunA==',key_name='tempest-TestSecurityGroupsBasicOps-182396490',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='75423dfb570f4b2bbc2f8de4f3a65d18',ramdisk_id='',reservation_id='r-9omxyghe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1136856573',owner_user_name='tempest-TestSecurityGroupsBasicOps-1136856573-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:45:43Z,user_data=None,user_id='de2965680b714b539553cf0792584e1e',uuid=1ba36bc8-b875-41d5-b9b4-95810b1a43d0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fe4613b0-28a8-493f-a1ef-3390aa9230e4", "address": "fa:16:3e:39:ab:cd", "network": {"id": "7c95871d-d156-4882-b0a0-97ff36c1744a", "bridge": "br-int", "label": "tempest-network-smoke--1036653926", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe4613b0-28", "ovs_interfaceid": "fe4613b0-28a8-493f-a1ef-3390aa9230e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:45:49 np0005539552 nova_compute[233724]: 2025-11-29 08:45:49.932 233728 DEBUG nova.network.os_vif_util [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Converting VIF {"id": "fe4613b0-28a8-493f-a1ef-3390aa9230e4", "address": "fa:16:3e:39:ab:cd", "network": {"id": "7c95871d-d156-4882-b0a0-97ff36c1744a", "bridge": "br-int", "label": "tempest-network-smoke--1036653926", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe4613b0-28", "ovs_interfaceid": "fe4613b0-28a8-493f-a1ef-3390aa9230e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:45:49 np0005539552 nova_compute[233724]: 2025-11-29 08:45:49.933 233728 DEBUG nova.network.os_vif_util [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:ab:cd,bridge_name='br-int',has_traffic_filtering=True,id=fe4613b0-28a8-493f-a1ef-3390aa9230e4,network=Network(7c95871d-d156-4882-b0a0-97ff36c1744a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfe4613b0-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:45:49 np0005539552 nova_compute[233724]: 2025-11-29 08:45:49.934 233728 DEBUG os_vif [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:ab:cd,bridge_name='br-int',has_traffic_filtering=True,id=fe4613b0-28a8-493f-a1ef-3390aa9230e4,network=Network(7c95871d-d156-4882-b0a0-97ff36c1744a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfe4613b0-28') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 03:45:49 np0005539552 nova_compute[233724]: 2025-11-29 08:45:49.935 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:45:49 np0005539552 nova_compute[233724]: 2025-11-29 08:45:49.936 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:45:49 np0005539552 nova_compute[233724]: 2025-11-29 08:45:49.937 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:45:49 np0005539552 nova_compute[233724]: 2025-11-29 08:45:49.942 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:45:49 np0005539552 nova_compute[233724]: 2025-11-29 08:45:49.942 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfe4613b0-28, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:45:49 np0005539552 nova_compute[233724]: 2025-11-29 08:45:49.943 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfe4613b0-28, col_values=(('external_ids', {'iface-id': 'fe4613b0-28a8-493f-a1ef-3390aa9230e4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:39:ab:cd', 'vm-uuid': '1ba36bc8-b875-41d5-b9b4-95810b1a43d0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:45:49 np0005539552 nova_compute[233724]: 2025-11-29 08:45:49.945 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:45:49 np0005539552 NetworkManager[48926]: <info>  [1764405949.9471] manager: (tapfe4613b0-28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/406)
Nov 29 03:45:49 np0005539552 nova_compute[233724]: 2025-11-29 08:45:49.950 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 03:45:49 np0005539552 nova_compute[233724]: 2025-11-29 08:45:49.952 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:45:49 np0005539552 nova_compute[233724]: 2025-11-29 08:45:49.953 233728 INFO os_vif [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:ab:cd,bridge_name='br-int',has_traffic_filtering=True,id=fe4613b0-28a8-493f-a1ef-3390aa9230e4,network=Network(7c95871d-d156-4882-b0a0-97ff36c1744a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfe4613b0-28')
Nov 29 03:45:50 np0005539552 nova_compute[233724]: 2025-11-29 08:45:50.033 233728 DEBUG nova.virt.libvirt.driver [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:45:50 np0005539552 nova_compute[233724]: 2025-11-29 08:45:50.034 233728 DEBUG nova.virt.libvirt.driver [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:45:50 np0005539552 nova_compute[233724]: 2025-11-29 08:45:50.036 233728 DEBUG nova.virt.libvirt.driver [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] No VIF found with MAC fa:16:3e:39:ab:cd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 03:45:50 np0005539552 nova_compute[233724]: 2025-11-29 08:45:50.036 233728 INFO nova.virt.libvirt.driver [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Using config drive
Nov 29 03:45:50 np0005539552 nova_compute[233724]: 2025-11-29 08:45:50.084 233728 DEBUG nova.storage.rbd_utils [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image 1ba36bc8-b875-41d5-b9b4-95810b1a43d0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:45:50 np0005539552 nova_compute[233724]: 2025-11-29 08:45:50.579 233728 DEBUG nova.network.neutron [req-ef574411-a4d0-40cc-94fb-9e10f72db795 req-2cb393b8-56c7-48eb-941d-cbc3ade37519 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Updated VIF entry in instance network info cache for port fe4613b0-28a8-493f-a1ef-3390aa9230e4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:45:50 np0005539552 nova_compute[233724]: 2025-11-29 08:45:50.580 233728 DEBUG nova.network.neutron [req-ef574411-a4d0-40cc-94fb-9e10f72db795 req-2cb393b8-56c7-48eb-941d-cbc3ade37519 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Updating instance_info_cache with network_info: [{"id": "fe4613b0-28a8-493f-a1ef-3390aa9230e4", "address": "fa:16:3e:39:ab:cd", "network": {"id": "7c95871d-d156-4882-b0a0-97ff36c1744a", "bridge": "br-int", "label": "tempest-network-smoke--1036653926", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe4613b0-28", "ovs_interfaceid": "fe4613b0-28a8-493f-a1ef-3390aa9230e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:45:50 np0005539552 nova_compute[233724]: 2025-11-29 08:45:50.603 233728 DEBUG oslo_concurrency.lockutils [req-ef574411-a4d0-40cc-94fb-9e10f72db795 req-2cb393b8-56c7-48eb-941d-cbc3ade37519 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-1ba36bc8-b875-41d5-b9b4-95810b1a43d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:45:50 np0005539552 nova_compute[233724]: 2025-11-29 08:45:50.693 233728 INFO nova.virt.libvirt.driver [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Creating config drive at /var/lib/nova/instances/1ba36bc8-b875-41d5-b9b4-95810b1a43d0/disk.config
Nov 29 03:45:50 np0005539552 nova_compute[233724]: 2025-11-29 08:45:50.702 233728 DEBUG oslo_concurrency.processutils [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1ba36bc8-b875-41d5-b9b4-95810b1a43d0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfrvc3a3b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:45:50 np0005539552 nova_compute[233724]: 2025-11-29 08:45:50.857 233728 DEBUG oslo_concurrency.processutils [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1ba36bc8-b875-41d5-b9b4-95810b1a43d0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfrvc3a3b" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:45:50 np0005539552 nova_compute[233724]: 2025-11-29 08:45:50.903 233728 DEBUG nova.storage.rbd_utils [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image 1ba36bc8-b875-41d5-b9b4-95810b1a43d0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:45:50 np0005539552 nova_compute[233724]: 2025-11-29 08:45:50.908 233728 DEBUG oslo_concurrency.processutils [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1ba36bc8-b875-41d5-b9b4-95810b1a43d0/disk.config 1ba36bc8-b875-41d5-b9b4-95810b1a43d0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:45:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:51.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:51.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:51 np0005539552 nova_compute[233724]: 2025-11-29 08:45:51.318 233728 DEBUG oslo_concurrency.processutils [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1ba36bc8-b875-41d5-b9b4-95810b1a43d0/disk.config 1ba36bc8-b875-41d5-b9b4-95810b1a43d0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:45:51 np0005539552 nova_compute[233724]: 2025-11-29 08:45:51.319 233728 INFO nova.virt.libvirt.driver [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Deleting local config drive /var/lib/nova/instances/1ba36bc8-b875-41d5-b9b4-95810b1a43d0/disk.config because it was imported into RBD.
Nov 29 03:45:51 np0005539552 kernel: tapfe4613b0-28: entered promiscuous mode
Nov 29 03:45:51 np0005539552 NetworkManager[48926]: <info>  [1764405951.3755] manager: (tapfe4613b0-28): new Tun device (/org/freedesktop/NetworkManager/Devices/407)
Nov 29 03:45:51 np0005539552 ovn_controller[133798]: 2025-11-29T08:45:51Z|00923|binding|INFO|Claiming lport fe4613b0-28a8-493f-a1ef-3390aa9230e4 for this chassis.
Nov 29 03:45:51 np0005539552 ovn_controller[133798]: 2025-11-29T08:45:51Z|00924|binding|INFO|fe4613b0-28a8-493f-a1ef-3390aa9230e4: Claiming fa:16:3e:39:ab:cd 10.100.0.7
Nov 29 03:45:51 np0005539552 nova_compute[233724]: 2025-11-29 08:45:51.378 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:45:51 np0005539552 nova_compute[233724]: 2025-11-29 08:45:51.383 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:45:51 np0005539552 nova_compute[233724]: 2025-11-29 08:45:51.389 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:45:51 np0005539552 nova_compute[233724]: 2025-11-29 08:45:51.394 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:45:51 np0005539552 NetworkManager[48926]: <info>  [1764405951.3952] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/408)
Nov 29 03:45:51 np0005539552 NetworkManager[48926]: <info>  [1764405951.3960] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/409)
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:45:51.402 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:ab:cd 10.100.0.7'], port_security=['fa:16:3e:39:ab:cd 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1ba36bc8-b875-41d5-b9b4-95810b1a43d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c95871d-d156-4882-b0a0-97ff36c1744a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '75423dfb570f4b2bbc2f8de4f3a65d18', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e84628c0-0159-4dba-85f8-ba5fad9cdcdb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cfd6ff47-3ad6-4cdd-b1fa-0e77564ed30b, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=fe4613b0-28a8-493f-a1ef-3390aa9230e4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:45:51.404 143400 INFO neutron.agent.ovn.metadata.agent [-] Port fe4613b0-28a8-493f-a1ef-3390aa9230e4 in datapath 7c95871d-d156-4882-b0a0-97ff36c1744a bound to our chassis
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:45:51.407 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7c95871d-d156-4882-b0a0-97ff36c1744a
Nov 29 03:45:51 np0005539552 systemd-machined[196379]: New machine qemu-93-instance-000000d2.
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:45:51.420 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[193de957-cf64-43a1-a101-6bf0ff0a671b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:45:51.421 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7c95871d-d1 in ovnmeta-7c95871d-d156-4882-b0a0-97ff36c1744a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:45:51.422 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7c95871d-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:45:51.423 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f8164173-296b-45d2-a65f-0393f7f2ba57]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:45:51.424 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[1eee0d6a-f6ff-4bb9-ba03-9dc6a3f202d1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:45:51 np0005539552 systemd[1]: Started Virtual Machine qemu-93-instance-000000d2.
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:45:51.435 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[88f7324c-3738-4b17-9443-852097599165]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:45:51 np0005539552 systemd-udevd[319155]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:45:51 np0005539552 NetworkManager[48926]: <info>  [1764405951.4552] device (tapfe4613b0-28): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:45:51 np0005539552 NetworkManager[48926]: <info>  [1764405951.4562] device (tapfe4613b0-28): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:45:51.466 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4879b201-7940-4d72-aac6-f824a1bf313b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:45:51.494 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[fe33a0a9-6f40-4751-95eb-caa59afce489]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:45:51 np0005539552 systemd-udevd[319158]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:45:51 np0005539552 NetworkManager[48926]: <info>  [1764405951.5049] manager: (tap7c95871d-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/410)
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:45:51.504 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[52b3f8d0-9c50-46d4-9ee4-c45b78eeef56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:45:51.533 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[0a4c3aa4-83b5-41dc-8d5a-1f96735f7319]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:45:51.536 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[d156a518-916c-4363-a8b7-6c1f2f8de83a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:45:51 np0005539552 NetworkManager[48926]: <info>  [1764405951.5579] device (tap7c95871d-d0): carrier: link connected
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:45:51.561 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[e00dcb16-37fe-44f8-b32a-e529b400e0c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:45:51.575 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ae5a5296-6770-4b72-bf3f-0bbc9a977b6f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c95871d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:9a:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 270], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 899724, 'reachable_time': 31710, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319186, 'error': None, 'target': 'ovnmeta-7c95871d-d156-4882-b0a0-97ff36c1744a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:45:51.590 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a974cd27-8aec-4573-b332-fc0b8d10f5aa]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed6:9ac4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 899724, 'tstamp': 899724}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319187, 'error': None, 'target': 'ovnmeta-7c95871d-d156-4882-b0a0-97ff36c1744a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:45:51.606 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[1d8546df-911e-4f0a-8865-e569ac1d7a96]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c95871d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:9a:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 270], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 899724, 'reachable_time': 31710, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 319188, 'error': None, 'target': 'ovnmeta-7c95871d-d156-4882-b0a0-97ff36c1744a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:45:51 np0005539552 nova_compute[233724]: 2025-11-29 08:45:51.623 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:45:51 np0005539552 nova_compute[233724]: 2025-11-29 08:45:51.641 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:45:51.644 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[483484a6-5d9e-4799-aba3-9ee392d1550c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:45:51 np0005539552 ovn_controller[133798]: 2025-11-29T08:45:51Z|00925|binding|INFO|Setting lport fe4613b0-28a8-493f-a1ef-3390aa9230e4 ovn-installed in OVS
Nov 29 03:45:51 np0005539552 ovn_controller[133798]: 2025-11-29T08:45:51Z|00926|binding|INFO|Setting lport fe4613b0-28a8-493f-a1ef-3390aa9230e4 up in Southbound
Nov 29 03:45:51 np0005539552 nova_compute[233724]: 2025-11-29 08:45:51.652 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:45:51.707 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[cff0c197-5d65-4a14-83e0-66c4399443ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:45:51.708 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c95871d-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:45:51.709 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:45:51.709 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c95871d-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:45:51 np0005539552 nova_compute[233724]: 2025-11-29 08:45:51.710 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:45:51 np0005539552 NetworkManager[48926]: <info>  [1764405951.7117] manager: (tap7c95871d-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/411)
Nov 29 03:45:51 np0005539552 kernel: tap7c95871d-d0: entered promiscuous mode
Nov 29 03:45:51 np0005539552 nova_compute[233724]: 2025-11-29 08:45:51.714 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:45:51.715 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7c95871d-d0, col_values=(('external_ids', {'iface-id': '7e0f1264-3298-4910-af53-2aeef940fbc6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:45:51 np0005539552 nova_compute[233724]: 2025-11-29 08:45:51.716 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:51 np0005539552 ovn_controller[133798]: 2025-11-29T08:45:51Z|00927|binding|INFO|Releasing lport 7e0f1264-3298-4910-af53-2aeef940fbc6 from this chassis (sb_readonly=0)
Nov 29 03:45:51 np0005539552 nova_compute[233724]: 2025-11-29 08:45:51.735 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:45:51.736 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7c95871d-d156-4882-b0a0-97ff36c1744a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7c95871d-d156-4882-b0a0-97ff36c1744a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:45:51.737 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[1913bdca-8fe2-492d-bd8d-b2c1080c1fa3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:45:51.738 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-7c95871d-d156-4882-b0a0-97ff36c1744a
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/7c95871d-d156-4882-b0a0-97ff36c1744a.pid.haproxy
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 7c95871d-d156-4882-b0a0-97ff36c1744a
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:45:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:45:51.739 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7c95871d-d156-4882-b0a0-97ff36c1744a', 'env', 'PROCESS_TAG=haproxy-7c95871d-d156-4882-b0a0-97ff36c1744a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7c95871d-d156-4882-b0a0-97ff36c1744a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:45:51 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:45:51 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.3 total, 600.0 interval#012Cumulative writes: 74K writes, 299K keys, 74K commit groups, 1.0 writes per commit group, ingest: 0.30 GB, 0.06 MB/s#012Cumulative WAL: 74K writes, 27K syncs, 2.70 writes per sync, written: 0.30 GB, 0.06 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 11K writes, 47K keys, 11K commit groups, 1.0 writes per commit group, ingest: 54.15 MB, 0.09 MB/s#012Interval WAL: 11K writes, 4590 syncs, 2.57 writes per sync, written: 0.05 GB, 0.09 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 03:45:51 np0005539552 nova_compute[233724]: 2025-11-29 08:45:51.872 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405951.8721623, 1ba36bc8-b875-41d5-b9b4-95810b1a43d0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:45:51 np0005539552 nova_compute[233724]: 2025-11-29 08:45:51.873 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] VM Started (Lifecycle Event)#033[00m
Nov 29 03:45:51 np0005539552 nova_compute[233724]: 2025-11-29 08:45:51.898 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:45:51 np0005539552 nova_compute[233724]: 2025-11-29 08:45:51.902 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405951.8723779, 1ba36bc8-b875-41d5-b9b4-95810b1a43d0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:45:51 np0005539552 nova_compute[233724]: 2025-11-29 08:45:51.902 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:45:51 np0005539552 nova_compute[233724]: 2025-11-29 08:45:51.928 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:45:51 np0005539552 nova_compute[233724]: 2025-11-29 08:45:51.933 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:45:51 np0005539552 nova_compute[233724]: 2025-11-29 08:45:51.954 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:45:52 np0005539552 podman[319262]: 2025-11-29 08:45:52.152523829 +0000 UTC m=+0.057582641 container create 620e406afa232a34074e06d5d42ee5c3de23fc52dcd9755bca5144b4bd040a38 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c95871d-d156-4882-b0a0-97ff36c1744a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:45:52 np0005539552 systemd[1]: Started libpod-conmon-620e406afa232a34074e06d5d42ee5c3de23fc52dcd9755bca5144b4bd040a38.scope.
Nov 29 03:45:52 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:45:52 np0005539552 podman[319262]: 2025-11-29 08:45:52.125113981 +0000 UTC m=+0.030172863 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:45:52 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dab351d6e8f457bc873b75fb9b47bdbdaee6df46e56d1bdd68792f44516aa0bc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:45:52 np0005539552 podman[319262]: 2025-11-29 08:45:52.233771296 +0000 UTC m=+0.138830118 container init 620e406afa232a34074e06d5d42ee5c3de23fc52dcd9755bca5144b4bd040a38 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c95871d-d156-4882-b0a0-97ff36c1744a, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:45:52 np0005539552 podman[319262]: 2025-11-29 08:45:52.238813482 +0000 UTC m=+0.143872304 container start 620e406afa232a34074e06d5d42ee5c3de23fc52dcd9755bca5144b4bd040a38 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c95871d-d156-4882-b0a0-97ff36c1744a, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:45:52 np0005539552 neutron-haproxy-ovnmeta-7c95871d-d156-4882-b0a0-97ff36c1744a[319278]: [NOTICE]   (319282) : New worker (319284) forked
Nov 29 03:45:52 np0005539552 neutron-haproxy-ovnmeta-7c95871d-d156-4882-b0a0-97ff36c1744a[319278]: [NOTICE]   (319282) : Loading success.
Nov 29 03:45:52 np0005539552 nova_compute[233724]: 2025-11-29 08:45:52.464 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:52 np0005539552 nova_compute[233724]: 2025-11-29 08:45:52.683 233728 DEBUG nova.compute.manager [req-95c2540f-2e32-4e83-a7c1-0e56c59ac479 req-5ac5ccde-91ad-4352-a529-3079b047b857 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Received event network-vif-plugged-fe4613b0-28a8-493f-a1ef-3390aa9230e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:45:52 np0005539552 nova_compute[233724]: 2025-11-29 08:45:52.684 233728 DEBUG oslo_concurrency.lockutils [req-95c2540f-2e32-4e83-a7c1-0e56c59ac479 req-5ac5ccde-91ad-4352-a529-3079b047b857 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "1ba36bc8-b875-41d5-b9b4-95810b1a43d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:52 np0005539552 nova_compute[233724]: 2025-11-29 08:45:52.684 233728 DEBUG oslo_concurrency.lockutils [req-95c2540f-2e32-4e83-a7c1-0e56c59ac479 req-5ac5ccde-91ad-4352-a529-3079b047b857 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1ba36bc8-b875-41d5-b9b4-95810b1a43d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:52 np0005539552 nova_compute[233724]: 2025-11-29 08:45:52.685 233728 DEBUG oslo_concurrency.lockutils [req-95c2540f-2e32-4e83-a7c1-0e56c59ac479 req-5ac5ccde-91ad-4352-a529-3079b047b857 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1ba36bc8-b875-41d5-b9b4-95810b1a43d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:52 np0005539552 nova_compute[233724]: 2025-11-29 08:45:52.686 233728 DEBUG nova.compute.manager [req-95c2540f-2e32-4e83-a7c1-0e56c59ac479 req-5ac5ccde-91ad-4352-a529-3079b047b857 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Processing event network-vif-plugged-fe4613b0-28a8-493f-a1ef-3390aa9230e4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:45:52 np0005539552 nova_compute[233724]: 2025-11-29 08:45:52.687 233728 DEBUG nova.compute.manager [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:45:52 np0005539552 nova_compute[233724]: 2025-11-29 08:45:52.693 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764405952.692884, 1ba36bc8-b875-41d5-b9b4-95810b1a43d0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:45:52 np0005539552 nova_compute[233724]: 2025-11-29 08:45:52.693 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:45:52 np0005539552 nova_compute[233724]: 2025-11-29 08:45:52.697 233728 DEBUG nova.virt.libvirt.driver [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:45:52 np0005539552 nova_compute[233724]: 2025-11-29 08:45:52.702 233728 INFO nova.virt.libvirt.driver [-] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Instance spawned successfully.#033[00m
Nov 29 03:45:52 np0005539552 nova_compute[233724]: 2025-11-29 08:45:52.703 233728 DEBUG nova.virt.libvirt.driver [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:45:52 np0005539552 nova_compute[233724]: 2025-11-29 08:45:52.732 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:45:52 np0005539552 nova_compute[233724]: 2025-11-29 08:45:52.739 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:45:52 np0005539552 nova_compute[233724]: 2025-11-29 08:45:52.745 233728 DEBUG nova.virt.libvirt.driver [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:45:52 np0005539552 nova_compute[233724]: 2025-11-29 08:45:52.745 233728 DEBUG nova.virt.libvirt.driver [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:45:52 np0005539552 nova_compute[233724]: 2025-11-29 08:45:52.746 233728 DEBUG nova.virt.libvirt.driver [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:45:52 np0005539552 nova_compute[233724]: 2025-11-29 08:45:52.746 233728 DEBUG nova.virt.libvirt.driver [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:45:52 np0005539552 nova_compute[233724]: 2025-11-29 08:45:52.747 233728 DEBUG nova.virt.libvirt.driver [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:45:52 np0005539552 nova_compute[233724]: 2025-11-29 08:45:52.747 233728 DEBUG nova.virt.libvirt.driver [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:45:52 np0005539552 nova_compute[233724]: 2025-11-29 08:45:52.782 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:45:52 np0005539552 nova_compute[233724]: 2025-11-29 08:45:52.842 233728 INFO nova.compute.manager [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Took 9.73 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:45:52 np0005539552 nova_compute[233724]: 2025-11-29 08:45:52.843 233728 DEBUG nova.compute.manager [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:45:52 np0005539552 nova_compute[233724]: 2025-11-29 08:45:52.932 233728 INFO nova.compute.manager [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Took 10.79 seconds to build instance.#033[00m
Nov 29 03:45:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:45:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:53.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:45:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:53.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:53 np0005539552 nova_compute[233724]: 2025-11-29 08:45:53.630 233728 DEBUG oslo_concurrency.lockutils [None req-b0ba8605-21c9-486a-ace2-98c27e09a547 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "1ba36bc8-b875-41d5-b9b4-95810b1a43d0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.572s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:54 np0005539552 ovn_controller[133798]: 2025-11-29T08:45:54Z|00928|binding|INFO|Releasing lport 7e0f1264-3298-4910-af53-2aeef940fbc6 from this chassis (sb_readonly=0)
Nov 29 03:45:54 np0005539552 nova_compute[233724]: 2025-11-29 08:45:54.231 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:54 np0005539552 nova_compute[233724]: 2025-11-29 08:45:54.414 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:54 np0005539552 nova_compute[233724]: 2025-11-29 08:45:54.415 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:54 np0005539552 nova_compute[233724]: 2025-11-29 08:45:54.416 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:45:54 np0005539552 ovn_controller[133798]: 2025-11-29T08:45:54Z|00929|binding|INFO|Releasing lport 7e0f1264-3298-4910-af53-2aeef940fbc6 from this chassis (sb_readonly=0)
Nov 29 03:45:54 np0005539552 nova_compute[233724]: 2025-11-29 08:45:54.730 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:54 np0005539552 nova_compute[233724]: 2025-11-29 08:45:54.793 233728 DEBUG nova.compute.manager [req-9a10294e-2848-4301-8c0d-8fea5f53e36e req-d53fab3a-effc-4aa2-a311-463d0704c381 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Received event network-vif-plugged-fe4613b0-28a8-493f-a1ef-3390aa9230e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:45:54 np0005539552 nova_compute[233724]: 2025-11-29 08:45:54.794 233728 DEBUG oslo_concurrency.lockutils [req-9a10294e-2848-4301-8c0d-8fea5f53e36e req-d53fab3a-effc-4aa2-a311-463d0704c381 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "1ba36bc8-b875-41d5-b9b4-95810b1a43d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:54 np0005539552 nova_compute[233724]: 2025-11-29 08:45:54.794 233728 DEBUG oslo_concurrency.lockutils [req-9a10294e-2848-4301-8c0d-8fea5f53e36e req-d53fab3a-effc-4aa2-a311-463d0704c381 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1ba36bc8-b875-41d5-b9b4-95810b1a43d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:54 np0005539552 nova_compute[233724]: 2025-11-29 08:45:54.794 233728 DEBUG oslo_concurrency.lockutils [req-9a10294e-2848-4301-8c0d-8fea5f53e36e req-d53fab3a-effc-4aa2-a311-463d0704c381 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1ba36bc8-b875-41d5-b9b4-95810b1a43d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:54 np0005539552 nova_compute[233724]: 2025-11-29 08:45:54.795 233728 DEBUG nova.compute.manager [req-9a10294e-2848-4301-8c0d-8fea5f53e36e req-d53fab3a-effc-4aa2-a311-463d0704c381 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] No waiting events found dispatching network-vif-plugged-fe4613b0-28a8-493f-a1ef-3390aa9230e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:45:54 np0005539552 nova_compute[233724]: 2025-11-29 08:45:54.795 233728 WARNING nova.compute.manager [req-9a10294e-2848-4301-8c0d-8fea5f53e36e req-d53fab3a-effc-4aa2-a311-463d0704c381 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Received unexpected event network-vif-plugged-fe4613b0-28a8-493f-a1ef-3390aa9230e4 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:45:54 np0005539552 nova_compute[233724]: 2025-11-29 08:45:54.946 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:55.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:55.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:56 np0005539552 ovn_controller[133798]: 2025-11-29T08:45:56Z|00930|binding|INFO|Releasing lport 7e0f1264-3298-4910-af53-2aeef940fbc6 from this chassis (sb_readonly=0)
Nov 29 03:45:56 np0005539552 nova_compute[233724]: 2025-11-29 08:45:56.695 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:56 np0005539552 nova_compute[233724]: 2025-11-29 08:45:56.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:57.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:57.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:57 np0005539552 nova_compute[233724]: 2025-11-29 08:45:57.465 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:57 np0005539552 nova_compute[233724]: 2025-11-29 08:45:57.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:58 np0005539552 ovn_controller[133798]: 2025-11-29T08:45:58Z|00931|binding|INFO|Releasing lport 7e0f1264-3298-4910-af53-2aeef940fbc6 from this chassis (sb_readonly=0)
Nov 29 03:45:58 np0005539552 nova_compute[233724]: 2025-11-29 08:45:58.231 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:58 np0005539552 nova_compute[233724]: 2025-11-29 08:45:58.309 233728 DEBUG nova.compute.manager [req-8790459d-668b-4b02-8cc7-1a3016256ac0 req-cda6c1d7-941e-414c-a497-61651750e46a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Received event network-changed-fe4613b0-28a8-493f-a1ef-3390aa9230e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:45:58 np0005539552 nova_compute[233724]: 2025-11-29 08:45:58.309 233728 DEBUG nova.compute.manager [req-8790459d-668b-4b02-8cc7-1a3016256ac0 req-cda6c1d7-941e-414c-a497-61651750e46a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Refreshing instance network info cache due to event network-changed-fe4613b0-28a8-493f-a1ef-3390aa9230e4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:45:58 np0005539552 nova_compute[233724]: 2025-11-29 08:45:58.310 233728 DEBUG oslo_concurrency.lockutils [req-8790459d-668b-4b02-8cc7-1a3016256ac0 req-cda6c1d7-941e-414c-a497-61651750e46a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-1ba36bc8-b875-41d5-b9b4-95810b1a43d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:45:58 np0005539552 nova_compute[233724]: 2025-11-29 08:45:58.311 233728 DEBUG oslo_concurrency.lockutils [req-8790459d-668b-4b02-8cc7-1a3016256ac0 req-cda6c1d7-941e-414c-a497-61651750e46a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-1ba36bc8-b875-41d5-b9b4-95810b1a43d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:45:58 np0005539552 nova_compute[233724]: 2025-11-29 08:45:58.311 233728 DEBUG nova.network.neutron [req-8790459d-668b-4b02-8cc7-1a3016256ac0 req-cda6c1d7-941e-414c-a497-61651750e46a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Refreshing network info cache for port fe4613b0-28a8-493f-a1ef-3390aa9230e4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:45:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:58 np0005539552 nova_compute[233724]: 2025-11-29 08:45:58.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:59.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:45:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:59.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:59 np0005539552 nova_compute[233724]: 2025-11-29 08:45:59.949 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:00 np0005539552 nova_compute[233724]: 2025-11-29 08:46:00.443 233728 DEBUG nova.compute.manager [req-98ca5b10-bd01-48ca-b53d-465715805bd9 req-2224b1bb-6ba7-472e-83f5-e2574ceb5b4d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Received event network-changed-fe4613b0-28a8-493f-a1ef-3390aa9230e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:46:00 np0005539552 nova_compute[233724]: 2025-11-29 08:46:00.444 233728 DEBUG nova.compute.manager [req-98ca5b10-bd01-48ca-b53d-465715805bd9 req-2224b1bb-6ba7-472e-83f5-e2574ceb5b4d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Refreshing instance network info cache due to event network-changed-fe4613b0-28a8-493f-a1ef-3390aa9230e4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:46:00 np0005539552 nova_compute[233724]: 2025-11-29 08:46:00.444 233728 DEBUG oslo_concurrency.lockutils [req-98ca5b10-bd01-48ca-b53d-465715805bd9 req-2224b1bb-6ba7-472e-83f5-e2574ceb5b4d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-1ba36bc8-b875-41d5-b9b4-95810b1a43d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:46:00 np0005539552 nova_compute[233724]: 2025-11-29 08:46:00.576 233728 DEBUG nova.network.neutron [req-8790459d-668b-4b02-8cc7-1a3016256ac0 req-cda6c1d7-941e-414c-a497-61651750e46a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Updated VIF entry in instance network info cache for port fe4613b0-28a8-493f-a1ef-3390aa9230e4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:46:00 np0005539552 nova_compute[233724]: 2025-11-29 08:46:00.577 233728 DEBUG nova.network.neutron [req-8790459d-668b-4b02-8cc7-1a3016256ac0 req-cda6c1d7-941e-414c-a497-61651750e46a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Updating instance_info_cache with network_info: [{"id": "fe4613b0-28a8-493f-a1ef-3390aa9230e4", "address": "fa:16:3e:39:ab:cd", "network": {"id": "7c95871d-d156-4882-b0a0-97ff36c1744a", "bridge": "br-int", "label": "tempest-network-smoke--1036653926", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe4613b0-28", "ovs_interfaceid": "fe4613b0-28a8-493f-a1ef-3390aa9230e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:46:00 np0005539552 nova_compute[233724]: 2025-11-29 08:46:00.608 233728 DEBUG oslo_concurrency.lockutils [req-8790459d-668b-4b02-8cc7-1a3016256ac0 req-cda6c1d7-941e-414c-a497-61651750e46a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-1ba36bc8-b875-41d5-b9b4-95810b1a43d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:46:00 np0005539552 nova_compute[233724]: 2025-11-29 08:46:00.609 233728 DEBUG oslo_concurrency.lockutils [req-98ca5b10-bd01-48ca-b53d-465715805bd9 req-2224b1bb-6ba7-472e-83f5-e2574ceb5b4d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-1ba36bc8-b875-41d5-b9b4-95810b1a43d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:46:00 np0005539552 nova_compute[233724]: 2025-11-29 08:46:00.609 233728 DEBUG nova.network.neutron [req-98ca5b10-bd01-48ca-b53d-465715805bd9 req-2224b1bb-6ba7-472e-83f5-e2574ceb5b4d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Refreshing network info cache for port fe4613b0-28a8-493f-a1ef-3390aa9230e4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:46:00 np0005539552 nova_compute[233724]: 2025-11-29 08:46:00.919 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:46:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:01.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:46:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:01.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:46:02 np0005539552 nova_compute[233724]: 2025-11-29 08:46:02.468 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:02 np0005539552 nova_compute[233724]: 2025-11-29 08:46:02.744 233728 DEBUG nova.network.neutron [req-98ca5b10-bd01-48ca-b53d-465715805bd9 req-2224b1bb-6ba7-472e-83f5-e2574ceb5b4d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Updated VIF entry in instance network info cache for port fe4613b0-28a8-493f-a1ef-3390aa9230e4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:46:02 np0005539552 nova_compute[233724]: 2025-11-29 08:46:02.745 233728 DEBUG nova.network.neutron [req-98ca5b10-bd01-48ca-b53d-465715805bd9 req-2224b1bb-6ba7-472e-83f5-e2574ceb5b4d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Updating instance_info_cache with network_info: [{"id": "fe4613b0-28a8-493f-a1ef-3390aa9230e4", "address": "fa:16:3e:39:ab:cd", "network": {"id": "7c95871d-d156-4882-b0a0-97ff36c1744a", "bridge": "br-int", "label": "tempest-network-smoke--1036653926", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe4613b0-28", "ovs_interfaceid": "fe4613b0-28a8-493f-a1ef-3390aa9230e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:46:02 np0005539552 nova_compute[233724]: 2025-11-29 08:46:02.764 233728 DEBUG oslo_concurrency.lockutils [req-98ca5b10-bd01-48ca-b53d-465715805bd9 req-2224b1bb-6ba7-472e-83f5-e2574ceb5b4d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-1ba36bc8-b875-41d5-b9b4-95810b1a43d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:46:02 np0005539552 nova_compute[233724]: 2025-11-29 08:46:02.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:46:02 np0005539552 nova_compute[233724]: 2025-11-29 08:46:02.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:46:02 np0005539552 nova_compute[233724]: 2025-11-29 08:46:02.925 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:46:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:46:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:03.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:46:03 np0005539552 nova_compute[233724]: 2025-11-29 08:46:03.164 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "refresh_cache-1ba36bc8-b875-41d5-b9b4-95810b1a43d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:46:03 np0005539552 nova_compute[233724]: 2025-11-29 08:46:03.165 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquired lock "refresh_cache-1ba36bc8-b875-41d5-b9b4-95810b1a43d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:46:03 np0005539552 nova_compute[233724]: 2025-11-29 08:46:03.165 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:46:03 np0005539552 nova_compute[233724]: 2025-11-29 08:46:03.166 233728 DEBUG nova.objects.instance [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1ba36bc8-b875-41d5-b9b4-95810b1a43d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:46:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:03.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:03 np0005539552 nova_compute[233724]: 2025-11-29 08:46:03.967 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:04 np0005539552 nova_compute[233724]: 2025-11-29 08:46:04.631 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Updating instance_info_cache with network_info: [{"id": "fe4613b0-28a8-493f-a1ef-3390aa9230e4", "address": "fa:16:3e:39:ab:cd", "network": {"id": "7c95871d-d156-4882-b0a0-97ff36c1744a", "bridge": "br-int", "label": "tempest-network-smoke--1036653926", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe4613b0-28", "ovs_interfaceid": "fe4613b0-28a8-493f-a1ef-3390aa9230e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:46:04 np0005539552 nova_compute[233724]: 2025-11-29 08:46:04.647 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Releasing lock "refresh_cache-1ba36bc8-b875-41d5-b9b4-95810b1a43d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:46:04 np0005539552 nova_compute[233724]: 2025-11-29 08:46:04.647 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:46:04 np0005539552 nova_compute[233724]: 2025-11-29 08:46:04.647 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:46:04 np0005539552 nova_compute[233724]: 2025-11-29 08:46:04.772 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:46:04 np0005539552 nova_compute[233724]: 2025-11-29 08:46:04.789 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Triggering sync for uuid 1ba36bc8-b875-41d5-b9b4-95810b1a43d0 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 29 03:46:04 np0005539552 nova_compute[233724]: 2025-11-29 08:46:04.789 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "1ba36bc8-b875-41d5-b9b4-95810b1a43d0" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:04 np0005539552 nova_compute[233724]: 2025-11-29 08:46:04.789 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "1ba36bc8-b875-41d5-b9b4-95810b1a43d0" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:04 np0005539552 nova_compute[233724]: 2025-11-29 08:46:04.828 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "1ba36bc8-b875-41d5-b9b4-95810b1a43d0" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.038s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:04 np0005539552 nova_compute[233724]: 2025-11-29 08:46:04.952 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:05.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:05.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:06 np0005539552 nova_compute[233724]: 2025-11-29 08:46:06.169 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:06 np0005539552 ovn_controller[133798]: 2025-11-29T08:46:06Z|00125|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:39:ab:cd 10.100.0.7
Nov 29 03:46:06 np0005539552 ovn_controller[133798]: 2025-11-29T08:46:06Z|00126|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:39:ab:cd 10.100.0.7
Nov 29 03:46:06 np0005539552 nova_compute[233724]: 2025-11-29 08:46:06.835 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:46:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:07.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:07.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:07 np0005539552 nova_compute[233724]: 2025-11-29 08:46:07.470 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:09.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:09.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:09 np0005539552 nova_compute[233724]: 2025-11-29 08:46:09.955 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:11.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:11.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:12 np0005539552 nova_compute[233724]: 2025-11-29 08:46:12.472 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:12 np0005539552 nova_compute[233724]: 2025-11-29 08:46:12.784 233728 DEBUG oslo_concurrency.lockutils [None req-d4ae4803-4045-4b1a-9f20-b5bec85d42a3 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "1ba36bc8-b875-41d5-b9b4-95810b1a43d0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:12 np0005539552 nova_compute[233724]: 2025-11-29 08:46:12.785 233728 DEBUG oslo_concurrency.lockutils [None req-d4ae4803-4045-4b1a-9f20-b5bec85d42a3 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "1ba36bc8-b875-41d5-b9b4-95810b1a43d0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:12 np0005539552 nova_compute[233724]: 2025-11-29 08:46:12.785 233728 DEBUG oslo_concurrency.lockutils [None req-d4ae4803-4045-4b1a-9f20-b5bec85d42a3 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "1ba36bc8-b875-41d5-b9b4-95810b1a43d0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:12 np0005539552 nova_compute[233724]: 2025-11-29 08:46:12.786 233728 DEBUG oslo_concurrency.lockutils [None req-d4ae4803-4045-4b1a-9f20-b5bec85d42a3 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "1ba36bc8-b875-41d5-b9b4-95810b1a43d0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:12 np0005539552 nova_compute[233724]: 2025-11-29 08:46:12.787 233728 DEBUG oslo_concurrency.lockutils [None req-d4ae4803-4045-4b1a-9f20-b5bec85d42a3 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "1ba36bc8-b875-41d5-b9b4-95810b1a43d0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:12 np0005539552 nova_compute[233724]: 2025-11-29 08:46:12.789 233728 INFO nova.compute.manager [None req-d4ae4803-4045-4b1a-9f20-b5bec85d42a3 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Terminating instance#033[00m
Nov 29 03:46:12 np0005539552 nova_compute[233724]: 2025-11-29 08:46:12.791 233728 DEBUG nova.compute.manager [None req-d4ae4803-4045-4b1a-9f20-b5bec85d42a3 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:46:12 np0005539552 kernel: tapfe4613b0-28 (unregistering): left promiscuous mode
Nov 29 03:46:12 np0005539552 NetworkManager[48926]: <info>  [1764405972.8551] device (tapfe4613b0-28): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:46:12 np0005539552 nova_compute[233724]: 2025-11-29 08:46:12.869 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:12 np0005539552 ovn_controller[133798]: 2025-11-29T08:46:12Z|00932|binding|INFO|Releasing lport fe4613b0-28a8-493f-a1ef-3390aa9230e4 from this chassis (sb_readonly=0)
Nov 29 03:46:12 np0005539552 ovn_controller[133798]: 2025-11-29T08:46:12Z|00933|binding|INFO|Setting lport fe4613b0-28a8-493f-a1ef-3390aa9230e4 down in Southbound
Nov 29 03:46:12 np0005539552 ovn_controller[133798]: 2025-11-29T08:46:12Z|00934|binding|INFO|Removing iface tapfe4613b0-28 ovn-installed in OVS
Nov 29 03:46:12 np0005539552 nova_compute[233724]: 2025-11-29 08:46:12.872 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:12 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:46:12.877 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:ab:cd 10.100.0.7', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1ba36bc8-b875-41d5-b9b4-95810b1a43d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c95871d-d156-4882-b0a0-97ff36c1744a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '75423dfb570f4b2bbc2f8de4f3a65d18', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cfd6ff47-3ad6-4cdd-b1fa-0e77564ed30b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=fe4613b0-28a8-493f-a1ef-3390aa9230e4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:46:12 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:46:12.879 143400 INFO neutron.agent.ovn.metadata.agent [-] Port fe4613b0-28a8-493f-a1ef-3390aa9230e4 in datapath 7c95871d-d156-4882-b0a0-97ff36c1744a unbound from our chassis#033[00m
Nov 29 03:46:12 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:46:12.881 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7c95871d-d156-4882-b0a0-97ff36c1744a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:46:12 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:46:12.882 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[fc66907c-0f30-4cb7-a084-34c96bd19071]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:12 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:46:12.884 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7c95871d-d156-4882-b0a0-97ff36c1744a namespace which is not needed anymore#033[00m
Nov 29 03:46:12 np0005539552 nova_compute[233724]: 2025-11-29 08:46:12.902 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:12 np0005539552 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d000000d2.scope: Deactivated successfully.
Nov 29 03:46:12 np0005539552 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d000000d2.scope: Consumed 14.088s CPU time.
Nov 29 03:46:12 np0005539552 systemd-machined[196379]: Machine qemu-93-instance-000000d2 terminated.
Nov 29 03:46:13 np0005539552 nova_compute[233724]: 2025-11-29 08:46:13.037 233728 INFO nova.virt.libvirt.driver [-] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Instance destroyed successfully.#033[00m
Nov 29 03:46:13 np0005539552 nova_compute[233724]: 2025-11-29 08:46:13.038 233728 DEBUG nova.objects.instance [None req-d4ae4803-4045-4b1a-9f20-b5bec85d42a3 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lazy-loading 'resources' on Instance uuid 1ba36bc8-b875-41d5-b9b4-95810b1a43d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:46:13 np0005539552 neutron-haproxy-ovnmeta-7c95871d-d156-4882-b0a0-97ff36c1744a[319278]: [NOTICE]   (319282) : haproxy version is 2.8.14-c23fe91
Nov 29 03:46:13 np0005539552 neutron-haproxy-ovnmeta-7c95871d-d156-4882-b0a0-97ff36c1744a[319278]: [NOTICE]   (319282) : path to executable is /usr/sbin/haproxy
Nov 29 03:46:13 np0005539552 neutron-haproxy-ovnmeta-7c95871d-d156-4882-b0a0-97ff36c1744a[319278]: [WARNING]  (319282) : Exiting Master process...
Nov 29 03:46:13 np0005539552 neutron-haproxy-ovnmeta-7c95871d-d156-4882-b0a0-97ff36c1744a[319278]: [WARNING]  (319282) : Exiting Master process...
Nov 29 03:46:13 np0005539552 neutron-haproxy-ovnmeta-7c95871d-d156-4882-b0a0-97ff36c1744a[319278]: [ALERT]    (319282) : Current worker (319284) exited with code 143 (Terminated)
Nov 29 03:46:13 np0005539552 neutron-haproxy-ovnmeta-7c95871d-d156-4882-b0a0-97ff36c1744a[319278]: [WARNING]  (319282) : All workers exited. Exiting... (0)
Nov 29 03:46:13 np0005539552 systemd[1]: libpod-620e406afa232a34074e06d5d42ee5c3de23fc52dcd9755bca5144b4bd040a38.scope: Deactivated successfully.
Nov 29 03:46:13 np0005539552 nova_compute[233724]: 2025-11-29 08:46:13.050 233728 DEBUG nova.virt.libvirt.vif [None req-d4ae4803-4045-4b1a-9f20-b5bec85d42a3 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:45:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-gen-1-426634105',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-gen-1-426634105',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1136856573-ge',id=210,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIUCtFnN07vszlWcqLqc3OwtiaY5LGVJmT2ZWYrMbRKMkYGyWdO3eJxi7r32YSVdfdSMmHf98ntN1zt+jX0dvGmgoNoiyZKY2TvD4cve07jeq8QHwsvzbRI+YetMB/qunA==',key_name='tempest-TestSecurityGroupsBasicOps-182396490',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:45:52Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='75423dfb570f4b2bbc2f8de4f3a65d18',ramdisk_id='',reservation_id='r-9omxyghe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1136856573',owner_user_name='tempest-TestSecurityGroupsBasicOps-1136856573-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:45:52Z,user_data=None,user_id='de2965680b714b539553cf0792584e1e',uuid=1ba36bc8-b875-41d5-b9b4-95810b1a43d0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fe4613b0-28a8-493f-a1ef-3390aa9230e4", "address": "fa:16:3e:39:ab:cd", "network": {"id": "7c95871d-d156-4882-b0a0-97ff36c1744a", "bridge": "br-int", "label": "tempest-network-smoke--1036653926", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe4613b0-28", "ovs_interfaceid": "fe4613b0-28a8-493f-a1ef-3390aa9230e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:46:13 np0005539552 nova_compute[233724]: 2025-11-29 08:46:13.051 233728 DEBUG nova.network.os_vif_util [None req-d4ae4803-4045-4b1a-9f20-b5bec85d42a3 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Converting VIF {"id": "fe4613b0-28a8-493f-a1ef-3390aa9230e4", "address": "fa:16:3e:39:ab:cd", "network": {"id": "7c95871d-d156-4882-b0a0-97ff36c1744a", "bridge": "br-int", "label": "tempest-network-smoke--1036653926", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe4613b0-28", "ovs_interfaceid": "fe4613b0-28a8-493f-a1ef-3390aa9230e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:46:13 np0005539552 nova_compute[233724]: 2025-11-29 08:46:13.052 233728 DEBUG nova.network.os_vif_util [None req-d4ae4803-4045-4b1a-9f20-b5bec85d42a3 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:39:ab:cd,bridge_name='br-int',has_traffic_filtering=True,id=fe4613b0-28a8-493f-a1ef-3390aa9230e4,network=Network(7c95871d-d156-4882-b0a0-97ff36c1744a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfe4613b0-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:46:13 np0005539552 nova_compute[233724]: 2025-11-29 08:46:13.053 233728 DEBUG os_vif [None req-d4ae4803-4045-4b1a-9f20-b5bec85d42a3 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:39:ab:cd,bridge_name='br-int',has_traffic_filtering=True,id=fe4613b0-28a8-493f-a1ef-3390aa9230e4,network=Network(7c95871d-d156-4882-b0a0-97ff36c1744a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfe4613b0-28') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:46:13 np0005539552 nova_compute[233724]: 2025-11-29 08:46:13.055 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:13 np0005539552 nova_compute[233724]: 2025-11-29 08:46:13.055 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfe4613b0-28, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:46:13 np0005539552 podman[319384]: 2025-11-29 08:46:13.059496742 +0000 UTC m=+0.056550083 container died 620e406afa232a34074e06d5d42ee5c3de23fc52dcd9755bca5144b4bd040a38 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c95871d-d156-4882-b0a0-97ff36c1744a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:46:13 np0005539552 nova_compute[233724]: 2025-11-29 08:46:13.057 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:13 np0005539552 nova_compute[233724]: 2025-11-29 08:46:13.059 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:13 np0005539552 nova_compute[233724]: 2025-11-29 08:46:13.064 233728 INFO os_vif [None req-d4ae4803-4045-4b1a-9f20-b5bec85d42a3 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:39:ab:cd,bridge_name='br-int',has_traffic_filtering=True,id=fe4613b0-28a8-493f-a1ef-3390aa9230e4,network=Network(7c95871d-d156-4882-b0a0-97ff36c1744a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfe4613b0-28')#033[00m
Nov 29 03:46:13 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-620e406afa232a34074e06d5d42ee5c3de23fc52dcd9755bca5144b4bd040a38-userdata-shm.mount: Deactivated successfully.
Nov 29 03:46:13 np0005539552 systemd[1]: var-lib-containers-storage-overlay-dab351d6e8f457bc873b75fb9b47bdbdaee6df46e56d1bdd68792f44516aa0bc-merged.mount: Deactivated successfully.
Nov 29 03:46:13 np0005539552 podman[319384]: 2025-11-29 08:46:13.102759546 +0000 UTC m=+0.099812867 container cleanup 620e406afa232a34074e06d5d42ee5c3de23fc52dcd9755bca5144b4bd040a38 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c95871d-d156-4882-b0a0-97ff36c1744a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:46:13 np0005539552 systemd[1]: libpod-conmon-620e406afa232a34074e06d5d42ee5c3de23fc52dcd9755bca5144b4bd040a38.scope: Deactivated successfully.
Nov 29 03:46:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:13.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:13 np0005539552 podman[319441]: 2025-11-29 08:46:13.188234617 +0000 UTC m=+0.058350382 container remove 620e406afa232a34074e06d5d42ee5c3de23fc52dcd9755bca5144b4bd040a38 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c95871d-d156-4882-b0a0-97ff36c1744a, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:46:13 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:46:13.199 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[6b9bb4fc-50c8-4af2-ab74-1f5d0298839c]: (4, ('Sat Nov 29 08:46:12 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7c95871d-d156-4882-b0a0-97ff36c1744a (620e406afa232a34074e06d5d42ee5c3de23fc52dcd9755bca5144b4bd040a38)\n620e406afa232a34074e06d5d42ee5c3de23fc52dcd9755bca5144b4bd040a38\nSat Nov 29 08:46:13 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7c95871d-d156-4882-b0a0-97ff36c1744a (620e406afa232a34074e06d5d42ee5c3de23fc52dcd9755bca5144b4bd040a38)\n620e406afa232a34074e06d5d42ee5c3de23fc52dcd9755bca5144b4bd040a38\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:13 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:46:13.202 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[16ab2558-9a00-4cd5-9646-30ec17628978]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:13 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:46:13.204 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c95871d-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:46:13 np0005539552 nova_compute[233724]: 2025-11-29 08:46:13.208 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:13 np0005539552 kernel: tap7c95871d-d0: left promiscuous mode
Nov 29 03:46:13 np0005539552 nova_compute[233724]: 2025-11-29 08:46:13.228 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:13 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:46:13.233 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c6544005-30de-45a8-84ce-fc527d05ec17]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:13 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:46:13.250 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[022ce5a0-2ed4-45ed-acb8-acb918c52df5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:13 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:46:13.251 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[722007a8-0040-49bc-bf70-b48b84b6bf8b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:13 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:46:13.275 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c1dafa15-353c-45ae-93e1-ec7556a9e2ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 899717, 'reachable_time': 18488, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319455, 'error': None, 'target': 'ovnmeta-7c95871d-d156-4882-b0a0-97ff36c1744a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:13 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:46:13.278 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7c95871d-d156-4882-b0a0-97ff36c1744a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:46:13 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:46:13.279 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[b1db1f36-45ca-49d2-b37a-fc81e26c08c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:13 np0005539552 systemd[1]: run-netns-ovnmeta\x2d7c95871d\x2dd156\x2d4882\x2db0a0\x2d97ff36c1744a.mount: Deactivated successfully.
Nov 29 03:46:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:13.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:13 np0005539552 nova_compute[233724]: 2025-11-29 08:46:13.578 233728 INFO nova.virt.libvirt.driver [None req-d4ae4803-4045-4b1a-9f20-b5bec85d42a3 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Deleting instance files /var/lib/nova/instances/1ba36bc8-b875-41d5-b9b4-95810b1a43d0_del#033[00m
Nov 29 03:46:13 np0005539552 nova_compute[233724]: 2025-11-29 08:46:13.579 233728 INFO nova.virt.libvirt.driver [None req-d4ae4803-4045-4b1a-9f20-b5bec85d42a3 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Deletion of /var/lib/nova/instances/1ba36bc8-b875-41d5-b9b4-95810b1a43d0_del complete#033[00m
Nov 29 03:46:13 np0005539552 nova_compute[233724]: 2025-11-29 08:46:13.629 233728 INFO nova.compute.manager [None req-d4ae4803-4045-4b1a-9f20-b5bec85d42a3 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Took 0.84 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:46:13 np0005539552 nova_compute[233724]: 2025-11-29 08:46:13.629 233728 DEBUG oslo.service.loopingcall [None req-d4ae4803-4045-4b1a-9f20-b5bec85d42a3 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:46:13 np0005539552 nova_compute[233724]: 2025-11-29 08:46:13.630 233728 DEBUG nova.compute.manager [-] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:46:13 np0005539552 nova_compute[233724]: 2025-11-29 08:46:13.630 233728 DEBUG nova.network.neutron [-] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:46:13 np0005539552 nova_compute[233724]: 2025-11-29 08:46:13.703 233728 DEBUG nova.compute.manager [req-183b9285-5855-4a84-b6f9-ff01e8fceacd req-d89162db-8868-4928-9a41-44b32888bc7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Received event network-vif-unplugged-fe4613b0-28a8-493f-a1ef-3390aa9230e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:46:13 np0005539552 nova_compute[233724]: 2025-11-29 08:46:13.704 233728 DEBUG oslo_concurrency.lockutils [req-183b9285-5855-4a84-b6f9-ff01e8fceacd req-d89162db-8868-4928-9a41-44b32888bc7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "1ba36bc8-b875-41d5-b9b4-95810b1a43d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:13 np0005539552 nova_compute[233724]: 2025-11-29 08:46:13.704 233728 DEBUG oslo_concurrency.lockutils [req-183b9285-5855-4a84-b6f9-ff01e8fceacd req-d89162db-8868-4928-9a41-44b32888bc7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1ba36bc8-b875-41d5-b9b4-95810b1a43d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:13 np0005539552 nova_compute[233724]: 2025-11-29 08:46:13.705 233728 DEBUG oslo_concurrency.lockutils [req-183b9285-5855-4a84-b6f9-ff01e8fceacd req-d89162db-8868-4928-9a41-44b32888bc7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1ba36bc8-b875-41d5-b9b4-95810b1a43d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:13 np0005539552 nova_compute[233724]: 2025-11-29 08:46:13.705 233728 DEBUG nova.compute.manager [req-183b9285-5855-4a84-b6f9-ff01e8fceacd req-d89162db-8868-4928-9a41-44b32888bc7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] No waiting events found dispatching network-vif-unplugged-fe4613b0-28a8-493f-a1ef-3390aa9230e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:46:13 np0005539552 nova_compute[233724]: 2025-11-29 08:46:13.705 233728 DEBUG nova.compute.manager [req-183b9285-5855-4a84-b6f9-ff01e8fceacd req-d89162db-8868-4928-9a41-44b32888bc7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Received event network-vif-unplugged-fe4613b0-28a8-493f-a1ef-3390aa9230e4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:46:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:14 np0005539552 nova_compute[233724]: 2025-11-29 08:46:14.188 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:14 np0005539552 nova_compute[233724]: 2025-11-29 08:46:14.503 233728 DEBUG nova.network.neutron [-] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:46:14 np0005539552 nova_compute[233724]: 2025-11-29 08:46:14.535 233728 INFO nova.compute.manager [-] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Took 0.90 seconds to deallocate network for instance.#033[00m
Nov 29 03:46:14 np0005539552 nova_compute[233724]: 2025-11-29 08:46:14.597 233728 DEBUG nova.compute.manager [req-79eeb39c-d750-4b29-9ac1-3688e11b7c9c req-e9a45d5f-156b-4bd0-a2a8-2a2abb0ccd95 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Received event network-vif-deleted-fe4613b0-28a8-493f-a1ef-3390aa9230e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:46:14 np0005539552 nova_compute[233724]: 2025-11-29 08:46:14.605 233728 DEBUG oslo_concurrency.lockutils [None req-d4ae4803-4045-4b1a-9f20-b5bec85d42a3 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:14 np0005539552 nova_compute[233724]: 2025-11-29 08:46:14.605 233728 DEBUG oslo_concurrency.lockutils [None req-d4ae4803-4045-4b1a-9f20-b5bec85d42a3 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:14 np0005539552 nova_compute[233724]: 2025-11-29 08:46:14.674 233728 DEBUG oslo_concurrency.processutils [None req-d4ae4803-4045-4b1a-9f20-b5bec85d42a3 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:46:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:15.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:15 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:46:15 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3902951159' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:46:15 np0005539552 nova_compute[233724]: 2025-11-29 08:46:15.160 233728 DEBUG oslo_concurrency.processutils [None req-d4ae4803-4045-4b1a-9f20-b5bec85d42a3 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:46:15 np0005539552 nova_compute[233724]: 2025-11-29 08:46:15.169 233728 DEBUG nova.compute.provider_tree [None req-d4ae4803-4045-4b1a-9f20-b5bec85d42a3 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:46:15 np0005539552 nova_compute[233724]: 2025-11-29 08:46:15.194 233728 DEBUG nova.scheduler.client.report [None req-d4ae4803-4045-4b1a-9f20-b5bec85d42a3 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:46:15 np0005539552 nova_compute[233724]: 2025-11-29 08:46:15.220 233728 DEBUG oslo_concurrency.lockutils [None req-d4ae4803-4045-4b1a-9f20-b5bec85d42a3 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:15 np0005539552 nova_compute[233724]: 2025-11-29 08:46:15.249 233728 INFO nova.scheduler.client.report [None req-d4ae4803-4045-4b1a-9f20-b5bec85d42a3 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Deleted allocations for instance 1ba36bc8-b875-41d5-b9b4-95810b1a43d0#033[00m
Nov 29 03:46:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:15.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:15 np0005539552 nova_compute[233724]: 2025-11-29 08:46:15.340 233728 DEBUG oslo_concurrency.lockutils [None req-d4ae4803-4045-4b1a-9f20-b5bec85d42a3 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "1ba36bc8-b875-41d5-b9b4-95810b1a43d0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:15 np0005539552 nova_compute[233724]: 2025-11-29 08:46:15.978 233728 DEBUG nova.compute.manager [req-de8b95c6-9803-48c5-90b4-742a94c4d618 req-dbbdaee6-267a-4c96-b38c-58b803a8184b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Received event network-vif-plugged-fe4613b0-28a8-493f-a1ef-3390aa9230e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:46:15 np0005539552 nova_compute[233724]: 2025-11-29 08:46:15.979 233728 DEBUG oslo_concurrency.lockutils [req-de8b95c6-9803-48c5-90b4-742a94c4d618 req-dbbdaee6-267a-4c96-b38c-58b803a8184b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "1ba36bc8-b875-41d5-b9b4-95810b1a43d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:15 np0005539552 nova_compute[233724]: 2025-11-29 08:46:15.979 233728 DEBUG oslo_concurrency.lockutils [req-de8b95c6-9803-48c5-90b4-742a94c4d618 req-dbbdaee6-267a-4c96-b38c-58b803a8184b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1ba36bc8-b875-41d5-b9b4-95810b1a43d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:15 np0005539552 nova_compute[233724]: 2025-11-29 08:46:15.979 233728 DEBUG oslo_concurrency.lockutils [req-de8b95c6-9803-48c5-90b4-742a94c4d618 req-dbbdaee6-267a-4c96-b38c-58b803a8184b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1ba36bc8-b875-41d5-b9b4-95810b1a43d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:15 np0005539552 nova_compute[233724]: 2025-11-29 08:46:15.979 233728 DEBUG nova.compute.manager [req-de8b95c6-9803-48c5-90b4-742a94c4d618 req-dbbdaee6-267a-4c96-b38c-58b803a8184b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] No waiting events found dispatching network-vif-plugged-fe4613b0-28a8-493f-a1ef-3390aa9230e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:46:15 np0005539552 nova_compute[233724]: 2025-11-29 08:46:15.980 233728 WARNING nova.compute.manager [req-de8b95c6-9803-48c5-90b4-742a94c4d618 req-dbbdaee6-267a-4c96-b38c-58b803a8184b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Received unexpected event network-vif-plugged-fe4613b0-28a8-493f-a1ef-3390aa9230e4 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:46:16 np0005539552 podman[319481]: 2025-11-29 08:46:16.005370717 +0000 UTC m=+0.085689448 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:46:16 np0005539552 podman[319480]: 2025-11-29 08:46:16.009290672 +0000 UTC m=+0.085045600 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 03:46:16 np0005539552 podman[319482]: 2025-11-29 08:46:16.02554577 +0000 UTC m=+0.102060928 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:46:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:17.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:17.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:17 np0005539552 nova_compute[233724]: 2025-11-29 08:46:17.475 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:17 np0005539552 nova_compute[233724]: 2025-11-29 08:46:17.640 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:18 np0005539552 nova_compute[233724]: 2025-11-29 08:46:18.058 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:19.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:19.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:46:20.652 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:46:20.653 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:46:20.653 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:21.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:21.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:22 np0005539552 nova_compute[233724]: 2025-11-29 08:46:22.477 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:23 np0005539552 nova_compute[233724]: 2025-11-29 08:46:23.061 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:23.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:23.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:24 np0005539552 nova_compute[233724]: 2025-11-29 08:46:24.639 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:24 np0005539552 nova_compute[233724]: 2025-11-29 08:46:24.852 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:25.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:25.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:27.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:27.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:27 np0005539552 nova_compute[233724]: 2025-11-29 08:46:27.479 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:28 np0005539552 nova_compute[233724]: 2025-11-29 08:46:28.036 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405973.0342255, 1ba36bc8-b875-41d5-b9b4-95810b1a43d0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:46:28 np0005539552 nova_compute[233724]: 2025-11-29 08:46:28.037 233728 INFO nova.compute.manager [-] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:46:28 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:28 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:28 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:28 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:28 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:28 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:28 np0005539552 nova_compute[233724]: 2025-11-29 08:46:28.064 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:46:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:29.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:46:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:29.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:30 np0005539552 nova_compute[233724]: 2025-11-29 08:46:30.543 233728 DEBUG nova.compute.manager [None req-8a4e62ee-2e0f-4c6b-958f-f014bcb35f6c - - - - - -] [instance: 1ba36bc8-b875-41d5-b9b4-95810b1a43d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:46:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:31.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:31 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:31 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:31 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:31 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:31 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:46:31 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:31 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:46:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:31.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:32 np0005539552 nova_compute[233724]: 2025-11-29 08:46:32.482 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:33 np0005539552 nova_compute[233724]: 2025-11-29 08:46:33.067 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:33.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:33.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:35.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:35.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:37.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:37 np0005539552 radosgw[83248]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Nov 29 03:46:37 np0005539552 radosgw[83248]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Nov 29 03:46:37 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:37 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:37 np0005539552 radosgw[83248]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Nov 29 03:46:37 np0005539552 radosgw[83248]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Nov 29 03:46:37 np0005539552 radosgw[83248]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Nov 29 03:46:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:37.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:37 np0005539552 nova_compute[233724]: 2025-11-29 08:46:37.484 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:38 np0005539552 nova_compute[233724]: 2025-11-29 08:46:38.069 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:46:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3099110687' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:46:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:46:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1868373205' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:46:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:46:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1868373205' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:46:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:39.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:39.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:40 np0005539552 nova_compute[233724]: 2025-11-29 08:46:40.134 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:46:40.136 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=62, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=61) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:46:40 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:46:40.137 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:46:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:41.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:41.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:42 np0005539552 nova_compute[233724]: 2025-11-29 08:46:42.486 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:43 np0005539552 nova_compute[233724]: 2025-11-29 08:46:43.072 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:43.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:43.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:45.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:45.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:45 np0005539552 nova_compute[233724]: 2025-11-29 08:46:45.942 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:46:45 np0005539552 nova_compute[233724]: 2025-11-29 08:46:45.966 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:45 np0005539552 nova_compute[233724]: 2025-11-29 08:46:45.966 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:45 np0005539552 nova_compute[233724]: 2025-11-29 08:46:45.967 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:45 np0005539552 nova_compute[233724]: 2025-11-29 08:46:45.967 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:46:45 np0005539552 nova_compute[233724]: 2025-11-29 08:46:45.968 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:46:46 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:46:46 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2909315477' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:46:46 np0005539552 nova_compute[233724]: 2025-11-29 08:46:46.419 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:46:46 np0005539552 podman[319934]: 2025-11-29 08:46:46.561609733 +0000 UTC m=+0.091047192 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:46:46 np0005539552 podman[319935]: 2025-11-29 08:46:46.570154083 +0000 UTC m=+0.101096152 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:46:46 np0005539552 podman[319936]: 2025-11-29 08:46:46.634474704 +0000 UTC m=+0.149637579 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:46:46 np0005539552 nova_compute[233724]: 2025-11-29 08:46:46.639 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:46:46 np0005539552 nova_compute[233724]: 2025-11-29 08:46:46.640 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4139MB free_disk=20.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:46:46 np0005539552 nova_compute[233724]: 2025-11-29 08:46:46.641 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:46 np0005539552 nova_compute[233724]: 2025-11-29 08:46:46.641 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:46 np0005539552 nova_compute[233724]: 2025-11-29 08:46:46.810 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:46:46 np0005539552 nova_compute[233724]: 2025-11-29 08:46:46.810 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:46:46 np0005539552 nova_compute[233724]: 2025-11-29 08:46:46.831 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Refreshing inventories for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:46:46 np0005539552 nova_compute[233724]: 2025-11-29 08:46:46.852 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Updating ProviderTree inventory for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:46:46 np0005539552 nova_compute[233724]: 2025-11-29 08:46:46.853 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Updating inventory in ProviderTree for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 03:46:46 np0005539552 nova_compute[233724]: 2025-11-29 08:46:46.866 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Refreshing aggregate associations for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:46:46 np0005539552 nova_compute[233724]: 2025-11-29 08:46:46.889 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Refreshing trait associations for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371, traits: HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_MMX,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_FDC,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:46:46 np0005539552 nova_compute[233724]: 2025-11-29 08:46:46.911 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:46:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:47.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:47.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:46:47 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/155322200' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:46:47 np0005539552 nova_compute[233724]: 2025-11-29 08:46:47.349 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:46:47 np0005539552 nova_compute[233724]: 2025-11-29 08:46:47.357 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:46:47 np0005539552 nova_compute[233724]: 2025-11-29 08:46:47.369 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:46:47 np0005539552 nova_compute[233724]: 2025-11-29 08:46:47.396 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:46:47 np0005539552 nova_compute[233724]: 2025-11-29 08:46:47.397 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:47 np0005539552 nova_compute[233724]: 2025-11-29 08:46:47.488 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:48 np0005539552 nova_compute[233724]: 2025-11-29 08:46:48.075 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:49 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:46:49.139 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '62'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:46:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:49.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:49.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:51.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:51.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:52 np0005539552 nova_compute[233724]: 2025-11-29 08:46:52.490 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:53 np0005539552 nova_compute[233724]: 2025-11-29 08:46:53.077 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:53.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:53.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:53 np0005539552 nova_compute[233724]: 2025-11-29 08:46:53.693 233728 DEBUG oslo_concurrency.lockutils [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "81f8ea10-4440-4354-acb8-7c0026e214f2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:53 np0005539552 nova_compute[233724]: 2025-11-29 08:46:53.694 233728 DEBUG oslo_concurrency.lockutils [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "81f8ea10-4440-4354-acb8-7c0026e214f2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:53 np0005539552 nova_compute[233724]: 2025-11-29 08:46:53.715 233728 DEBUG nova.compute.manager [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:46:53 np0005539552 nova_compute[233724]: 2025-11-29 08:46:53.815 233728 DEBUG oslo_concurrency.lockutils [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:53 np0005539552 nova_compute[233724]: 2025-11-29 08:46:53.815 233728 DEBUG oslo_concurrency.lockutils [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:53 np0005539552 nova_compute[233724]: 2025-11-29 08:46:53.821 233728 DEBUG nova.virt.hardware [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:46:53 np0005539552 nova_compute[233724]: 2025-11-29 08:46:53.821 233728 INFO nova.compute.claims [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:46:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:53 np0005539552 nova_compute[233724]: 2025-11-29 08:46:53.951 233728 DEBUG oslo_concurrency.processutils [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:46:54 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:46:54 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/795802068' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:46:54 np0005539552 nova_compute[233724]: 2025-11-29 08:46:54.535 233728 DEBUG oslo_concurrency.processutils [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:46:54 np0005539552 nova_compute[233724]: 2025-11-29 08:46:54.543 233728 DEBUG nova.compute.provider_tree [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:46:54 np0005539552 nova_compute[233724]: 2025-11-29 08:46:54.558 233728 DEBUG nova.scheduler.client.report [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:46:54 np0005539552 nova_compute[233724]: 2025-11-29 08:46:54.581 233728 DEBUG oslo_concurrency.lockutils [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:54 np0005539552 nova_compute[233724]: 2025-11-29 08:46:54.582 233728 DEBUG nova.compute.manager [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:46:54 np0005539552 nova_compute[233724]: 2025-11-29 08:46:54.630 233728 DEBUG nova.compute.manager [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:46:54 np0005539552 nova_compute[233724]: 2025-11-29 08:46:54.630 233728 DEBUG nova.network.neutron [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:46:54 np0005539552 nova_compute[233724]: 2025-11-29 08:46:54.656 233728 INFO nova.virt.libvirt.driver [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:46:54 np0005539552 nova_compute[233724]: 2025-11-29 08:46:54.670 233728 DEBUG nova.compute.manager [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:46:54 np0005539552 nova_compute[233724]: 2025-11-29 08:46:54.799 233728 DEBUG nova.compute.manager [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:46:54 np0005539552 nova_compute[233724]: 2025-11-29 08:46:54.801 233728 DEBUG nova.virt.libvirt.driver [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:46:54 np0005539552 nova_compute[233724]: 2025-11-29 08:46:54.802 233728 INFO nova.virt.libvirt.driver [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Creating image(s)#033[00m
Nov 29 03:46:54 np0005539552 nova_compute[233724]: 2025-11-29 08:46:54.841 233728 DEBUG nova.storage.rbd_utils [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image 81f8ea10-4440-4354-acb8-7c0026e214f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:46:54 np0005539552 nova_compute[233724]: 2025-11-29 08:46:54.886 233728 DEBUG nova.storage.rbd_utils [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image 81f8ea10-4440-4354-acb8-7c0026e214f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:46:54 np0005539552 nova_compute[233724]: 2025-11-29 08:46:54.922 233728 DEBUG nova.storage.rbd_utils [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image 81f8ea10-4440-4354-acb8-7c0026e214f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:46:54 np0005539552 nova_compute[233724]: 2025-11-29 08:46:54.926 233728 DEBUG oslo_concurrency.processutils [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:46:55 np0005539552 nova_compute[233724]: 2025-11-29 08:46:55.010 233728 DEBUG oslo_concurrency.processutils [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:46:55 np0005539552 nova_compute[233724]: 2025-11-29 08:46:55.012 233728 DEBUG oslo_concurrency.lockutils [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:55 np0005539552 nova_compute[233724]: 2025-11-29 08:46:55.013 233728 DEBUG oslo_concurrency.lockutils [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:55 np0005539552 nova_compute[233724]: 2025-11-29 08:46:55.014 233728 DEBUG oslo_concurrency.lockutils [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:55 np0005539552 nova_compute[233724]: 2025-11-29 08:46:55.056 233728 DEBUG nova.storage.rbd_utils [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image 81f8ea10-4440-4354-acb8-7c0026e214f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:46:55 np0005539552 nova_compute[233724]: 2025-11-29 08:46:55.061 233728 DEBUG oslo_concurrency.processutils [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 81f8ea10-4440-4354-acb8-7c0026e214f2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:46:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:55.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:55.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:55 np0005539552 nova_compute[233724]: 2025-11-29 08:46:55.365 233728 DEBUG oslo_concurrency.processutils [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 81f8ea10-4440-4354-acb8-7c0026e214f2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.304s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:46:55 np0005539552 nova_compute[233724]: 2025-11-29 08:46:55.398 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:46:55 np0005539552 nova_compute[233724]: 2025-11-29 08:46:55.398 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:46:55 np0005539552 nova_compute[233724]: 2025-11-29 08:46:55.399 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:46:55 np0005539552 nova_compute[233724]: 2025-11-29 08:46:55.441 233728 DEBUG nova.storage.rbd_utils [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] resizing rbd image 81f8ea10-4440-4354-acb8-7c0026e214f2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:46:55 np0005539552 nova_compute[233724]: 2025-11-29 08:46:55.483 233728 DEBUG nova.policy [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'de2965680b714b539553cf0792584e1e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '75423dfb570f4b2bbc2f8de4f3a65d18', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:46:55 np0005539552 nova_compute[233724]: 2025-11-29 08:46:55.557 233728 DEBUG nova.objects.instance [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lazy-loading 'migration_context' on Instance uuid 81f8ea10-4440-4354-acb8-7c0026e214f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:46:55 np0005539552 nova_compute[233724]: 2025-11-29 08:46:55.570 233728 DEBUG nova.virt.libvirt.driver [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:46:55 np0005539552 nova_compute[233724]: 2025-11-29 08:46:55.570 233728 DEBUG nova.virt.libvirt.driver [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Ensure instance console log exists: /var/lib/nova/instances/81f8ea10-4440-4354-acb8-7c0026e214f2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:46:55 np0005539552 nova_compute[233724]: 2025-11-29 08:46:55.571 233728 DEBUG oslo_concurrency.lockutils [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:55 np0005539552 nova_compute[233724]: 2025-11-29 08:46:55.571 233728 DEBUG oslo_concurrency.lockutils [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:55 np0005539552 nova_compute[233724]: 2025-11-29 08:46:55.572 233728 DEBUG oslo_concurrency.lockutils [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:57.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:57 np0005539552 nova_compute[233724]: 2025-11-29 08:46:57.264 233728 DEBUG nova.network.neutron [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Successfully created port: 976a5775-a543-44c7-9edd-4356ec9e3e5f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:46:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:57.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:57 np0005539552 nova_compute[233724]: 2025-11-29 08:46:57.492 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:57 np0005539552 nova_compute[233724]: 2025-11-29 08:46:57.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:46:58 np0005539552 nova_compute[233724]: 2025-11-29 08:46:58.079 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:58 np0005539552 nova_compute[233724]: 2025-11-29 08:46:58.518 233728 DEBUG nova.network.neutron [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Successfully updated port: 976a5775-a543-44c7-9edd-4356ec9e3e5f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:46:58 np0005539552 nova_compute[233724]: 2025-11-29 08:46:58.534 233728 DEBUG oslo_concurrency.lockutils [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "refresh_cache-81f8ea10-4440-4354-acb8-7c0026e214f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:46:58 np0005539552 nova_compute[233724]: 2025-11-29 08:46:58.534 233728 DEBUG oslo_concurrency.lockutils [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquired lock "refresh_cache-81f8ea10-4440-4354-acb8-7c0026e214f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:46:58 np0005539552 nova_compute[233724]: 2025-11-29 08:46:58.535 233728 DEBUG nova.network.neutron [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:46:58 np0005539552 nova_compute[233724]: 2025-11-29 08:46:58.632 233728 DEBUG nova.compute.manager [req-3088b422-ee1e-405a-b969-d364b954a5b7 req-2733664f-fab8-4496-875d-3ca521c4383b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Received event network-changed-976a5775-a543-44c7-9edd-4356ec9e3e5f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:46:58 np0005539552 nova_compute[233724]: 2025-11-29 08:46:58.633 233728 DEBUG nova.compute.manager [req-3088b422-ee1e-405a-b969-d364b954a5b7 req-2733664f-fab8-4496-875d-3ca521c4383b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Refreshing instance network info cache due to event network-changed-976a5775-a543-44c7-9edd-4356ec9e3e5f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:46:58 np0005539552 nova_compute[233724]: 2025-11-29 08:46:58.633 233728 DEBUG oslo_concurrency.lockutils [req-3088b422-ee1e-405a-b969-d364b954a5b7 req-2733664f-fab8-4496-875d-3ca521c4383b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-81f8ea10-4440-4354-acb8-7c0026e214f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:46:58 np0005539552 nova_compute[233724]: 2025-11-29 08:46:58.782 233728 DEBUG nova.network.neutron [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:46:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:59.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:46:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:59.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:59 np0005539552 nova_compute[233724]: 2025-11-29 08:46:59.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:46:59 np0005539552 nova_compute[233724]: 2025-11-29 08:46:59.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:47:00 np0005539552 nova_compute[233724]: 2025-11-29 08:47:00.538 233728 DEBUG nova.network.neutron [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Updating instance_info_cache with network_info: [{"id": "976a5775-a543-44c7-9edd-4356ec9e3e5f", "address": "fa:16:3e:1c:2e:f0", "network": {"id": "21923162-fe0c-4f50-88b5-19d1d684fafc", "bridge": "br-int", "label": "tempest-network-smoke--1457068308", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap976a5775-a5", "ovs_interfaceid": "976a5775-a543-44c7-9edd-4356ec9e3e5f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:47:00 np0005539552 nova_compute[233724]: 2025-11-29 08:47:00.572 233728 DEBUG oslo_concurrency.lockutils [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Releasing lock "refresh_cache-81f8ea10-4440-4354-acb8-7c0026e214f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:47:00 np0005539552 nova_compute[233724]: 2025-11-29 08:47:00.573 233728 DEBUG nova.compute.manager [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Instance network_info: |[{"id": "976a5775-a543-44c7-9edd-4356ec9e3e5f", "address": "fa:16:3e:1c:2e:f0", "network": {"id": "21923162-fe0c-4f50-88b5-19d1d684fafc", "bridge": "br-int", "label": "tempest-network-smoke--1457068308", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap976a5775-a5", "ovs_interfaceid": "976a5775-a543-44c7-9edd-4356ec9e3e5f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:47:00 np0005539552 nova_compute[233724]: 2025-11-29 08:47:00.573 233728 DEBUG oslo_concurrency.lockutils [req-3088b422-ee1e-405a-b969-d364b954a5b7 req-2733664f-fab8-4496-875d-3ca521c4383b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-81f8ea10-4440-4354-acb8-7c0026e214f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:47:00 np0005539552 nova_compute[233724]: 2025-11-29 08:47:00.574 233728 DEBUG nova.network.neutron [req-3088b422-ee1e-405a-b969-d364b954a5b7 req-2733664f-fab8-4496-875d-3ca521c4383b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Refreshing network info cache for port 976a5775-a543-44c7-9edd-4356ec9e3e5f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:47:00 np0005539552 nova_compute[233724]: 2025-11-29 08:47:00.577 233728 DEBUG nova.virt.libvirt.driver [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Start _get_guest_xml network_info=[{"id": "976a5775-a543-44c7-9edd-4356ec9e3e5f", "address": "fa:16:3e:1c:2e:f0", "network": {"id": "21923162-fe0c-4f50-88b5-19d1d684fafc", "bridge": "br-int", "label": "tempest-network-smoke--1457068308", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap976a5775-a5", "ovs_interfaceid": "976a5775-a543-44c7-9edd-4356ec9e3e5f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'encrypted': False, 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:47:00 np0005539552 nova_compute[233724]: 2025-11-29 08:47:00.583 233728 WARNING nova.virt.libvirt.driver [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:47:00 np0005539552 nova_compute[233724]: 2025-11-29 08:47:00.587 233728 DEBUG nova.virt.libvirt.host [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:47:00 np0005539552 nova_compute[233724]: 2025-11-29 08:47:00.588 233728 DEBUG nova.virt.libvirt.host [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:47:00 np0005539552 nova_compute[233724]: 2025-11-29 08:47:00.595 233728 DEBUG nova.virt.libvirt.host [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:47:00 np0005539552 nova_compute[233724]: 2025-11-29 08:47:00.596 233728 DEBUG nova.virt.libvirt.host [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:47:00 np0005539552 nova_compute[233724]: 2025-11-29 08:47:00.597 233728 DEBUG nova.virt.libvirt.driver [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:47:00 np0005539552 nova_compute[233724]: 2025-11-29 08:47:00.598 233728 DEBUG nova.virt.hardware [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:47:00 np0005539552 nova_compute[233724]: 2025-11-29 08:47:00.598 233728 DEBUG nova.virt.hardware [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:47:00 np0005539552 nova_compute[233724]: 2025-11-29 08:47:00.598 233728 DEBUG nova.virt.hardware [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:47:00 np0005539552 nova_compute[233724]: 2025-11-29 08:47:00.599 233728 DEBUG nova.virt.hardware [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:47:00 np0005539552 nova_compute[233724]: 2025-11-29 08:47:00.599 233728 DEBUG nova.virt.hardware [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:47:00 np0005539552 nova_compute[233724]: 2025-11-29 08:47:00.599 233728 DEBUG nova.virt.hardware [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:47:00 np0005539552 nova_compute[233724]: 2025-11-29 08:47:00.599 233728 DEBUG nova.virt.hardware [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:47:00 np0005539552 nova_compute[233724]: 2025-11-29 08:47:00.600 233728 DEBUG nova.virt.hardware [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:47:00 np0005539552 nova_compute[233724]: 2025-11-29 08:47:00.600 233728 DEBUG nova.virt.hardware [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:47:00 np0005539552 nova_compute[233724]: 2025-11-29 08:47:00.600 233728 DEBUG nova.virt.hardware [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:47:00 np0005539552 nova_compute[233724]: 2025-11-29 08:47:00.601 233728 DEBUG nova.virt.hardware [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:47:00 np0005539552 nova_compute[233724]: 2025-11-29 08:47:00.604 233728 DEBUG oslo_concurrency.processutils [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:47:01 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:47:01 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1144153927' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:47:01 np0005539552 nova_compute[233724]: 2025-11-29 08:47:01.072 233728 DEBUG oslo_concurrency.processutils [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:47:01 np0005539552 nova_compute[233724]: 2025-11-29 08:47:01.114 233728 DEBUG nova.storage.rbd_utils [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image 81f8ea10-4440-4354-acb8-7c0026e214f2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:47:01 np0005539552 nova_compute[233724]: 2025-11-29 08:47:01.121 233728 DEBUG oslo_concurrency.processutils [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:47:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:01.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:01.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:01 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:47:01 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/588040058' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:47:01 np0005539552 nova_compute[233724]: 2025-11-29 08:47:01.550 233728 DEBUG oslo_concurrency.processutils [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:47:01 np0005539552 nova_compute[233724]: 2025-11-29 08:47:01.551 233728 DEBUG nova.virt.libvirt.vif [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:46:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-access_point-1831388980',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-access_point-1831388980',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1136856573-ac',id=213,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIqztyNxcFNGX+Jw/TzqSaf7IKcldRL0GvIZInM3adtkD/2hkuxV9cfJutZL0j7me7Di9qNueMjWPCgJZ95kQGG++Pk0vOKucPX3FrXhBYJOxZ/WNUp453mg0OmkvApWRQ==',key_name='tempest-TestSecurityGroupsBasicOps-1327998220',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='75423dfb570f4b2bbc2f8de4f3a65d18',ramdisk_id='',reservation_id='r-0eftkbzr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1136856573',owner_user_name='tempest-TestSecurityGroupsBasicOps-1136856573-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:46:54Z,user_data=None,user_id='de2965680b714b539553cf0792584e1e',uuid=81f8ea10-4440-4354-acb8-7c0026e214f2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "976a5775-a543-44c7-9edd-4356ec9e3e5f", "address": "fa:16:3e:1c:2e:f0", "network": {"id": "21923162-fe0c-4f50-88b5-19d1d684fafc", "bridge": "br-int", "label": "tempest-network-smoke--1457068308", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap976a5775-a5", "ovs_interfaceid": "976a5775-a543-44c7-9edd-4356ec9e3e5f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:47:01 np0005539552 nova_compute[233724]: 2025-11-29 08:47:01.552 233728 DEBUG nova.network.os_vif_util [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Converting VIF {"id": "976a5775-a543-44c7-9edd-4356ec9e3e5f", "address": "fa:16:3e:1c:2e:f0", "network": {"id": "21923162-fe0c-4f50-88b5-19d1d684fafc", "bridge": "br-int", "label": "tempest-network-smoke--1457068308", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap976a5775-a5", "ovs_interfaceid": "976a5775-a543-44c7-9edd-4356ec9e3e5f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:47:01 np0005539552 nova_compute[233724]: 2025-11-29 08:47:01.553 233728 DEBUG nova.network.os_vif_util [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1c:2e:f0,bridge_name='br-int',has_traffic_filtering=True,id=976a5775-a543-44c7-9edd-4356ec9e3e5f,network=Network(21923162-fe0c-4f50-88b5-19d1d684fafc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap976a5775-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:47:01 np0005539552 nova_compute[233724]: 2025-11-29 08:47:01.555 233728 DEBUG nova.objects.instance [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lazy-loading 'pci_devices' on Instance uuid 81f8ea10-4440-4354-acb8-7c0026e214f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:47:01 np0005539552 nova_compute[233724]: 2025-11-29 08:47:01.572 233728 DEBUG nova.virt.libvirt.driver [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:47:01 np0005539552 nova_compute[233724]:  <uuid>81f8ea10-4440-4354-acb8-7c0026e214f2</uuid>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:  <name>instance-000000d5</name>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:47:01 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-access_point-1831388980</nova:name>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:47:00</nova:creationTime>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:47:01 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:        <nova:user uuid="de2965680b714b539553cf0792584e1e">tempest-TestSecurityGroupsBasicOps-1136856573-project-member</nova:user>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:        <nova:project uuid="75423dfb570f4b2bbc2f8de4f3a65d18">tempest-TestSecurityGroupsBasicOps-1136856573</nova:project>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:        <nova:port uuid="976a5775-a543-44c7-9edd-4356ec9e3e5f">
Nov 29 03:47:01 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:      <entry name="serial">81f8ea10-4440-4354-acb8-7c0026e214f2</entry>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:      <entry name="uuid">81f8ea10-4440-4354-acb8-7c0026e214f2</entry>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:47:01 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/81f8ea10-4440-4354-acb8-7c0026e214f2_disk">
Nov 29 03:47:01 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:47:01 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:47:01 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/81f8ea10-4440-4354-acb8-7c0026e214f2_disk.config">
Nov 29 03:47:01 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:47:01 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:47:01 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:1c:2e:f0"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:      <target dev="tap976a5775-a5"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:47:01 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/81f8ea10-4440-4354-acb8-7c0026e214f2/console.log" append="off"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:47:01 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:47:01 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:47:01 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:47:01 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:47:01 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:47:01 np0005539552 nova_compute[233724]: 2025-11-29 08:47:01.574 233728 DEBUG nova.compute.manager [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Preparing to wait for external event network-vif-plugged-976a5775-a543-44c7-9edd-4356ec9e3e5f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:47:01 np0005539552 nova_compute[233724]: 2025-11-29 08:47:01.574 233728 DEBUG oslo_concurrency.lockutils [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "81f8ea10-4440-4354-acb8-7c0026e214f2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:47:01 np0005539552 nova_compute[233724]: 2025-11-29 08:47:01.575 233728 DEBUG oslo_concurrency.lockutils [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "81f8ea10-4440-4354-acb8-7c0026e214f2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:47:01 np0005539552 nova_compute[233724]: 2025-11-29 08:47:01.575 233728 DEBUG oslo_concurrency.lockutils [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "81f8ea10-4440-4354-acb8-7c0026e214f2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:47:01 np0005539552 nova_compute[233724]: 2025-11-29 08:47:01.577 233728 DEBUG nova.virt.libvirt.vif [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:46:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-access_point-1831388980',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-access_point-1831388980',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1136856573-ac',id=213,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIqztyNxcFNGX+Jw/TzqSaf7IKcldRL0GvIZInM3adtkD/2hkuxV9cfJutZL0j7me7Di9qNueMjWPCgJZ95kQGG++Pk0vOKucPX3FrXhBYJOxZ/WNUp453mg0OmkvApWRQ==',key_name='tempest-TestSecurityGroupsBasicOps-1327998220',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='75423dfb570f4b2bbc2f8de4f3a65d18',ramdisk_id='',reservation_id='r-0eftkbzr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1136856573',owner_user_name='tempest-TestSecurityGroupsBasicOps-1136856573-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:46:54Z,user_data=None,user_id='de2965680b714b539553cf0792584e1e',uuid=81f8ea10-4440-4354-acb8-7c0026e214f2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "976a5775-a543-44c7-9edd-4356ec9e3e5f", "address": "fa:16:3e:1c:2e:f0", "network": {"id": "21923162-fe0c-4f50-88b5-19d1d684fafc", "bridge": "br-int", "label": "tempest-network-smoke--1457068308", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap976a5775-a5", "ovs_interfaceid": "976a5775-a543-44c7-9edd-4356ec9e3e5f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:47:01 np0005539552 nova_compute[233724]: 2025-11-29 08:47:01.577 233728 DEBUG nova.network.os_vif_util [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Converting VIF {"id": "976a5775-a543-44c7-9edd-4356ec9e3e5f", "address": "fa:16:3e:1c:2e:f0", "network": {"id": "21923162-fe0c-4f50-88b5-19d1d684fafc", "bridge": "br-int", "label": "tempest-network-smoke--1457068308", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap976a5775-a5", "ovs_interfaceid": "976a5775-a543-44c7-9edd-4356ec9e3e5f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:47:01 np0005539552 nova_compute[233724]: 2025-11-29 08:47:01.578 233728 DEBUG nova.network.os_vif_util [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1c:2e:f0,bridge_name='br-int',has_traffic_filtering=True,id=976a5775-a543-44c7-9edd-4356ec9e3e5f,network=Network(21923162-fe0c-4f50-88b5-19d1d684fafc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap976a5775-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:47:01 np0005539552 nova_compute[233724]: 2025-11-29 08:47:01.579 233728 DEBUG os_vif [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1c:2e:f0,bridge_name='br-int',has_traffic_filtering=True,id=976a5775-a543-44c7-9edd-4356ec9e3e5f,network=Network(21923162-fe0c-4f50-88b5-19d1d684fafc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap976a5775-a5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:47:01 np0005539552 nova_compute[233724]: 2025-11-29 08:47:01.580 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:01 np0005539552 nova_compute[233724]: 2025-11-29 08:47:01.581 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:47:01 np0005539552 nova_compute[233724]: 2025-11-29 08:47:01.581 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:47:01 np0005539552 nova_compute[233724]: 2025-11-29 08:47:01.586 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:01 np0005539552 nova_compute[233724]: 2025-11-29 08:47:01.587 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap976a5775-a5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:47:01 np0005539552 nova_compute[233724]: 2025-11-29 08:47:01.588 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap976a5775-a5, col_values=(('external_ids', {'iface-id': '976a5775-a543-44c7-9edd-4356ec9e3e5f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1c:2e:f0', 'vm-uuid': '81f8ea10-4440-4354-acb8-7c0026e214f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:47:01 np0005539552 nova_compute[233724]: 2025-11-29 08:47:01.590 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:01 np0005539552 NetworkManager[48926]: <info>  [1764406021.5912] manager: (tap976a5775-a5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/412)
Nov 29 03:47:01 np0005539552 nova_compute[233724]: 2025-11-29 08:47:01.594 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:47:01 np0005539552 nova_compute[233724]: 2025-11-29 08:47:01.599 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:01 np0005539552 nova_compute[233724]: 2025-11-29 08:47:01.601 233728 INFO os_vif [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1c:2e:f0,bridge_name='br-int',has_traffic_filtering=True,id=976a5775-a543-44c7-9edd-4356ec9e3e5f,network=Network(21923162-fe0c-4f50-88b5-19d1d684fafc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap976a5775-a5')#033[00m
Nov 29 03:47:01 np0005539552 nova_compute[233724]: 2025-11-29 08:47:01.655 233728 DEBUG nova.virt.libvirt.driver [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:47:01 np0005539552 nova_compute[233724]: 2025-11-29 08:47:01.656 233728 DEBUG nova.virt.libvirt.driver [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:47:01 np0005539552 nova_compute[233724]: 2025-11-29 08:47:01.656 233728 DEBUG nova.virt.libvirt.driver [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] No VIF found with MAC fa:16:3e:1c:2e:f0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:47:01 np0005539552 nova_compute[233724]: 2025-11-29 08:47:01.657 233728 INFO nova.virt.libvirt.driver [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Using config drive#033[00m
Nov 29 03:47:01 np0005539552 nova_compute[233724]: 2025-11-29 08:47:01.693 233728 DEBUG nova.storage.rbd_utils [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image 81f8ea10-4440-4354-acb8-7c0026e214f2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:47:02 np0005539552 nova_compute[233724]: 2025-11-29 08:47:02.494 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:02 np0005539552 nova_compute[233724]: 2025-11-29 08:47:02.780 233728 INFO nova.virt.libvirt.driver [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Creating config drive at /var/lib/nova/instances/81f8ea10-4440-4354-acb8-7c0026e214f2/disk.config#033[00m
Nov 29 03:47:02 np0005539552 nova_compute[233724]: 2025-11-29 08:47:02.789 233728 DEBUG oslo_concurrency.processutils [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/81f8ea10-4440-4354-acb8-7c0026e214f2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9nazm7kk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:47:02 np0005539552 nova_compute[233724]: 2025-11-29 08:47:02.842 233728 DEBUG nova.network.neutron [req-3088b422-ee1e-405a-b969-d364b954a5b7 req-2733664f-fab8-4496-875d-3ca521c4383b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Updated VIF entry in instance network info cache for port 976a5775-a543-44c7-9edd-4356ec9e3e5f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:47:02 np0005539552 nova_compute[233724]: 2025-11-29 08:47:02.843 233728 DEBUG nova.network.neutron [req-3088b422-ee1e-405a-b969-d364b954a5b7 req-2733664f-fab8-4496-875d-3ca521c4383b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Updating instance_info_cache with network_info: [{"id": "976a5775-a543-44c7-9edd-4356ec9e3e5f", "address": "fa:16:3e:1c:2e:f0", "network": {"id": "21923162-fe0c-4f50-88b5-19d1d684fafc", "bridge": "br-int", "label": "tempest-network-smoke--1457068308", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap976a5775-a5", "ovs_interfaceid": "976a5775-a543-44c7-9edd-4356ec9e3e5f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:47:02 np0005539552 nova_compute[233724]: 2025-11-29 08:47:02.865 233728 DEBUG oslo_concurrency.lockutils [req-3088b422-ee1e-405a-b969-d364b954a5b7 req-2733664f-fab8-4496-875d-3ca521c4383b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-81f8ea10-4440-4354-acb8-7c0026e214f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:47:02 np0005539552 nova_compute[233724]: 2025-11-29 08:47:02.919 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:47:02 np0005539552 nova_compute[233724]: 2025-11-29 08:47:02.922 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:47:02 np0005539552 nova_compute[233724]: 2025-11-29 08:47:02.946 233728 DEBUG oslo_concurrency.processutils [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/81f8ea10-4440-4354-acb8-7c0026e214f2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9nazm7kk" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:47:02 np0005539552 nova_compute[233724]: 2025-11-29 08:47:02.989 233728 DEBUG nova.storage.rbd_utils [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image 81f8ea10-4440-4354-acb8-7c0026e214f2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:47:02 np0005539552 nova_compute[233724]: 2025-11-29 08:47:02.994 233728 DEBUG oslo_concurrency.processutils [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/81f8ea10-4440-4354-acb8-7c0026e214f2/disk.config 81f8ea10-4440-4354-acb8-7c0026e214f2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:47:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:03.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:03 np0005539552 nova_compute[233724]: 2025-11-29 08:47:03.231 233728 DEBUG oslo_concurrency.processutils [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/81f8ea10-4440-4354-acb8-7c0026e214f2/disk.config 81f8ea10-4440-4354-acb8-7c0026e214f2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.237s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:47:03 np0005539552 nova_compute[233724]: 2025-11-29 08:47:03.232 233728 INFO nova.virt.libvirt.driver [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Deleting local config drive /var/lib/nova/instances/81f8ea10-4440-4354-acb8-7c0026e214f2/disk.config because it was imported into RBD.#033[00m
Nov 29 03:47:03 np0005539552 kernel: tap976a5775-a5: entered promiscuous mode
Nov 29 03:47:03 np0005539552 ovn_controller[133798]: 2025-11-29T08:47:03Z|00935|binding|INFO|Claiming lport 976a5775-a543-44c7-9edd-4356ec9e3e5f for this chassis.
Nov 29 03:47:03 np0005539552 ovn_controller[133798]: 2025-11-29T08:47:03Z|00936|binding|INFO|976a5775-a543-44c7-9edd-4356ec9e3e5f: Claiming fa:16:3e:1c:2e:f0 10.100.0.10
Nov 29 03:47:03 np0005539552 nova_compute[233724]: 2025-11-29 08:47:03.312 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:03 np0005539552 NetworkManager[48926]: <info>  [1764406023.3138] manager: (tap976a5775-a5): new Tun device (/org/freedesktop/NetworkManager/Devices/413)
Nov 29 03:47:03 np0005539552 nova_compute[233724]: 2025-11-29 08:47:03.322 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:03 np0005539552 nova_compute[233724]: 2025-11-29 08:47:03.326 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:47:03.337 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1c:2e:f0 10.100.0.10'], port_security=['fa:16:3e:1c:2e:f0 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '81f8ea10-4440-4354-acb8-7c0026e214f2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-21923162-fe0c-4f50-88b5-19d1d684fafc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '75423dfb570f4b2bbc2f8de4f3a65d18', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8aeea424-b712-4a7c-90b2-3d14349e7c5b fc98692a-6001-46c7-9683-7744fbf55ac5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fdba913e-28a6-4e48-bfcc-da07411da078, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=976a5775-a543-44c7-9edd-4356ec9e3e5f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:47:03.339 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 976a5775-a543-44c7-9edd-4356ec9e3e5f in datapath 21923162-fe0c-4f50-88b5-19d1d684fafc bound to our chassis#033[00m
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:47:03.342 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 21923162-fe0c-4f50-88b5-19d1d684fafc#033[00m
Nov 29 03:47:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:03.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:03 np0005539552 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 03:47:03 np0005539552 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:47:03.361 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[754aee97-1dc6-4bce-9d98-cbc6c2cea840]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:47:03.362 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap21923162-f1 in ovnmeta-21923162-fe0c-4f50-88b5-19d1d684fafc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:47:03.365 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap21923162-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:47:03.365 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[65c63e98-21e9-4cab-9b1e-b2b923efb662]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:47:03.367 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ae3b29ca-2e2d-47c6-8771-5dc28a148202]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:03 np0005539552 systemd-machined[196379]: New machine qemu-94-instance-000000d5.
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:47:03.387 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[8bd8ec09-4a85-49dc-9762-0d41458ff61e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:03 np0005539552 nova_compute[233724]: 2025-11-29 08:47:03.388 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:03 np0005539552 ovn_controller[133798]: 2025-11-29T08:47:03Z|00937|binding|INFO|Setting lport 976a5775-a543-44c7-9edd-4356ec9e3e5f ovn-installed in OVS
Nov 29 03:47:03 np0005539552 ovn_controller[133798]: 2025-11-29T08:47:03Z|00938|binding|INFO|Setting lport 976a5775-a543-44c7-9edd-4356ec9e3e5f up in Southbound
Nov 29 03:47:03 np0005539552 systemd[1]: Started Virtual Machine qemu-94-instance-000000d5.
Nov 29 03:47:03 np0005539552 nova_compute[233724]: 2025-11-29 08:47:03.391 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:03 np0005539552 systemd-udevd[320414]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:47:03 np0005539552 NetworkManager[48926]: <info>  [1764406023.4296] device (tap976a5775-a5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:47:03.427 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2b5dd27d-7544-43b1-8f35-c76226b56ef4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:03 np0005539552 NetworkManager[48926]: <info>  [1764406023.4304] device (tap976a5775-a5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:47:03.474 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[07c6ee67-4a93-4230-aad9-79cd75e7f45a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:47:03.480 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4d5fec45-76f2-4b09-9518-7916e249a1f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:03 np0005539552 NetworkManager[48926]: <info>  [1764406023.4820] manager: (tap21923162-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/414)
Nov 29 03:47:03 np0005539552 systemd-udevd[320416]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:47:03.525 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[ee99582a-707a-499d-9a91-336b267d7342]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:47:03.530 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[d7cdabc3-1f4e-40b7-b4be-de2a36e99385]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:03 np0005539552 NetworkManager[48926]: <info>  [1764406023.5642] device (tap21923162-f0): carrier: link connected
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:47:03.572 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[4a932245-4c86-4e57-8b07-4e565b7f799b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:47:03.599 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[0b792ec7-d5ea-47e1-b825-e1253c922168]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap21923162-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:39:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 273], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 906925, 'reachable_time': 18290, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320444, 'error': None, 'target': 'ovnmeta-21923162-fe0c-4f50-88b5-19d1d684fafc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:47:03.627 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[878866fa-0d6a-4cf2-9c44-1d98c8fae139]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7b:399b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 906925, 'tstamp': 906925}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320445, 'error': None, 'target': 'ovnmeta-21923162-fe0c-4f50-88b5-19d1d684fafc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:47:03.655 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ca37ab37-463a-40df-8c2a-29ed53f9411c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap21923162-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:39:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 273], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 906925, 'reachable_time': 18290, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 320446, 'error': None, 'target': 'ovnmeta-21923162-fe0c-4f50-88b5-19d1d684fafc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:47:03.701 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[ac7df8f0-aba8-4d40-8cfa-cffe1872b2d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:47:03.785 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[fdae75a0-cf02-41c3-9071-21d13e0870eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:47:03.787 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap21923162-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:47:03.787 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:47:03.788 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap21923162-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:47:03 np0005539552 NetworkManager[48926]: <info>  [1764406023.7903] manager: (tap21923162-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/415)
Nov 29 03:47:03 np0005539552 kernel: tap21923162-f0: entered promiscuous mode
Nov 29 03:47:03 np0005539552 nova_compute[233724]: 2025-11-29 08:47:03.789 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:47:03.792 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap21923162-f0, col_values=(('external_ids', {'iface-id': '19e88d44-d740-439a-9106-0dfa712bfe7d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:47:03 np0005539552 ovn_controller[133798]: 2025-11-29T08:47:03Z|00939|binding|INFO|Releasing lport 19e88d44-d740-439a-9106-0dfa712bfe7d from this chassis (sb_readonly=0)
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:47:03.795 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/21923162-fe0c-4f50-88b5-19d1d684fafc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/21923162-fe0c-4f50-88b5-19d1d684fafc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:47:03.805 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[4849725f-3353-48f7-b59d-33e8588439d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:47:03.805 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-21923162-fe0c-4f50-88b5-19d1d684fafc
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/21923162-fe0c-4f50-88b5-19d1d684fafc.pid.haproxy
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 21923162-fe0c-4f50-88b5-19d1d684fafc
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:47:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:47:03.807 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-21923162-fe0c-4f50-88b5-19d1d684fafc', 'env', 'PROCESS_TAG=haproxy-21923162-fe0c-4f50-88b5-19d1d684fafc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/21923162-fe0c-4f50-88b5-19d1d684fafc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:47:03 np0005539552 nova_compute[233724]: 2025-11-29 08:47:03.810 233728 DEBUG nova.compute.manager [req-24fe0dc4-0820-4d3e-94fc-d5c97c96d234 req-ee6859dd-1a4f-4ffd-9622-d9a70bdd3060 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Received event network-vif-plugged-976a5775-a543-44c7-9edd-4356ec9e3e5f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:47:03 np0005539552 nova_compute[233724]: 2025-11-29 08:47:03.811 233728 DEBUG oslo_concurrency.lockutils [req-24fe0dc4-0820-4d3e-94fc-d5c97c96d234 req-ee6859dd-1a4f-4ffd-9622-d9a70bdd3060 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "81f8ea10-4440-4354-acb8-7c0026e214f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:47:03 np0005539552 nova_compute[233724]: 2025-11-29 08:47:03.812 233728 DEBUG oslo_concurrency.lockutils [req-24fe0dc4-0820-4d3e-94fc-d5c97c96d234 req-ee6859dd-1a4f-4ffd-9622-d9a70bdd3060 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "81f8ea10-4440-4354-acb8-7c0026e214f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:47:03 np0005539552 nova_compute[233724]: 2025-11-29 08:47:03.813 233728 DEBUG oslo_concurrency.lockutils [req-24fe0dc4-0820-4d3e-94fc-d5c97c96d234 req-ee6859dd-1a4f-4ffd-9622-d9a70bdd3060 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "81f8ea10-4440-4354-acb8-7c0026e214f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:47:03 np0005539552 nova_compute[233724]: 2025-11-29 08:47:03.813 233728 DEBUG nova.compute.manager [req-24fe0dc4-0820-4d3e-94fc-d5c97c96d234 req-ee6859dd-1a4f-4ffd-9622-d9a70bdd3060 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Processing event network-vif-plugged-976a5775-a543-44c7-9edd-4356ec9e3e5f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:47:03 np0005539552 nova_compute[233724]: 2025-11-29 08:47:03.814 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:03 np0005539552 nova_compute[233724]: 2025-11-29 08:47:03.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:47:03 np0005539552 nova_compute[233724]: 2025-11-29 08:47:03.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:47:03 np0005539552 nova_compute[233724]: 2025-11-29 08:47:03.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:47:03 np0005539552 nova_compute[233724]: 2025-11-29 08:47:03.943 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 03:47:03 np0005539552 nova_compute[233724]: 2025-11-29 08:47:03.943 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:47:04 np0005539552 podman[320478]: 2025-11-29 08:47:04.174761513 +0000 UTC m=+0.054575920 container create ccd941e5137b6785064230b594bec785e42f9933dd8e8fff2fae9358bbda2861 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-21923162-fe0c-4f50-88b5-19d1d684fafc, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:47:04 np0005539552 systemd[1]: Started libpod-conmon-ccd941e5137b6785064230b594bec785e42f9933dd8e8fff2fae9358bbda2861.scope.
Nov 29 03:47:04 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:47:04 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/150682837b8f98102a5d6b265b41765ab31da735d2c1c62fe3bdb3e725d48b72/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:47:04 np0005539552 podman[320478]: 2025-11-29 08:47:04.148078564 +0000 UTC m=+0.027893021 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:47:04 np0005539552 podman[320478]: 2025-11-29 08:47:04.248650102 +0000 UTC m=+0.128464539 container init ccd941e5137b6785064230b594bec785e42f9933dd8e8fff2fae9358bbda2861 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-21923162-fe0c-4f50-88b5-19d1d684fafc, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:47:04 np0005539552 podman[320478]: 2025-11-29 08:47:04.257457399 +0000 UTC m=+0.137271806 container start ccd941e5137b6785064230b594bec785e42f9933dd8e8fff2fae9358bbda2861 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-21923162-fe0c-4f50-88b5-19d1d684fafc, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:47:04 np0005539552 neutron-haproxy-ovnmeta-21923162-fe0c-4f50-88b5-19d1d684fafc[320493]: [NOTICE]   (320497) : New worker (320499) forked
Nov 29 03:47:04 np0005539552 neutron-haproxy-ovnmeta-21923162-fe0c-4f50-88b5-19d1d684fafc[320493]: [NOTICE]   (320497) : Loading success.
Nov 29 03:47:05 np0005539552 nova_compute[233724]: 2025-11-29 08:47:05.171 233728 DEBUG nova.compute.manager [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:47:05 np0005539552 nova_compute[233724]: 2025-11-29 08:47:05.173 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764406025.1724064, 81f8ea10-4440-4354-acb8-7c0026e214f2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:47:05 np0005539552 nova_compute[233724]: 2025-11-29 08:47:05.174 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] VM Started (Lifecycle Event)#033[00m
Nov 29 03:47:05 np0005539552 nova_compute[233724]: 2025-11-29 08:47:05.178 233728 DEBUG nova.virt.libvirt.driver [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:47:05 np0005539552 nova_compute[233724]: 2025-11-29 08:47:05.183 233728 INFO nova.virt.libvirt.driver [-] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Instance spawned successfully.#033[00m
Nov 29 03:47:05 np0005539552 nova_compute[233724]: 2025-11-29 08:47:05.183 233728 DEBUG nova.virt.libvirt.driver [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:47:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:05.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:05 np0005539552 nova_compute[233724]: 2025-11-29 08:47:05.205 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:47:05 np0005539552 nova_compute[233724]: 2025-11-29 08:47:05.215 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:47:05 np0005539552 nova_compute[233724]: 2025-11-29 08:47:05.221 233728 DEBUG nova.virt.libvirt.driver [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:47:05 np0005539552 nova_compute[233724]: 2025-11-29 08:47:05.222 233728 DEBUG nova.virt.libvirt.driver [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:47:05 np0005539552 nova_compute[233724]: 2025-11-29 08:47:05.223 233728 DEBUG nova.virt.libvirt.driver [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:47:05 np0005539552 nova_compute[233724]: 2025-11-29 08:47:05.224 233728 DEBUG nova.virt.libvirt.driver [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:47:05 np0005539552 nova_compute[233724]: 2025-11-29 08:47:05.225 233728 DEBUG nova.virt.libvirt.driver [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:47:05 np0005539552 nova_compute[233724]: 2025-11-29 08:47:05.226 233728 DEBUG nova.virt.libvirt.driver [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:47:05 np0005539552 nova_compute[233724]: 2025-11-29 08:47:05.255 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:47:05 np0005539552 nova_compute[233724]: 2025-11-29 08:47:05.256 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764406025.17269, 81f8ea10-4440-4354-acb8-7c0026e214f2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:47:05 np0005539552 nova_compute[233724]: 2025-11-29 08:47:05.257 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:47:05 np0005539552 nova_compute[233724]: 2025-11-29 08:47:05.301 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:47:05 np0005539552 nova_compute[233724]: 2025-11-29 08:47:05.308 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764406025.1751783, 81f8ea10-4440-4354-acb8-7c0026e214f2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:47:05 np0005539552 nova_compute[233724]: 2025-11-29 08:47:05.308 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:47:05 np0005539552 nova_compute[233724]: 2025-11-29 08:47:05.318 233728 INFO nova.compute.manager [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Took 10.52 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:47:05 np0005539552 nova_compute[233724]: 2025-11-29 08:47:05.318 233728 DEBUG nova.compute.manager [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:47:05 np0005539552 nova_compute[233724]: 2025-11-29 08:47:05.329 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:47:05 np0005539552 nova_compute[233724]: 2025-11-29 08:47:05.334 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:47:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:05.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:05 np0005539552 nova_compute[233724]: 2025-11-29 08:47:05.435 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:47:05 np0005539552 nova_compute[233724]: 2025-11-29 08:47:05.464 233728 INFO nova.compute.manager [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Took 11.68 seconds to build instance.#033[00m
Nov 29 03:47:05 np0005539552 nova_compute[233724]: 2025-11-29 08:47:05.490 233728 DEBUG oslo_concurrency.lockutils [None req-340c6413-c9b2-46dc-9711-e48e7a9c3b16 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "81f8ea10-4440-4354-acb8-7c0026e214f2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.796s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:47:06 np0005539552 nova_compute[233724]: 2025-11-29 08:47:06.116 233728 DEBUG nova.compute.manager [req-01273256-b301-4146-a672-61508e8b781c req-9fcbad4f-3cab-4ae9-bc95-f234ce659277 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Received event network-vif-plugged-976a5775-a543-44c7-9edd-4356ec9e3e5f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:47:06 np0005539552 nova_compute[233724]: 2025-11-29 08:47:06.116 233728 DEBUG oslo_concurrency.lockutils [req-01273256-b301-4146-a672-61508e8b781c req-9fcbad4f-3cab-4ae9-bc95-f234ce659277 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "81f8ea10-4440-4354-acb8-7c0026e214f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:47:06 np0005539552 nova_compute[233724]: 2025-11-29 08:47:06.117 233728 DEBUG oslo_concurrency.lockutils [req-01273256-b301-4146-a672-61508e8b781c req-9fcbad4f-3cab-4ae9-bc95-f234ce659277 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "81f8ea10-4440-4354-acb8-7c0026e214f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:47:06 np0005539552 nova_compute[233724]: 2025-11-29 08:47:06.117 233728 DEBUG oslo_concurrency.lockutils [req-01273256-b301-4146-a672-61508e8b781c req-9fcbad4f-3cab-4ae9-bc95-f234ce659277 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "81f8ea10-4440-4354-acb8-7c0026e214f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:47:06 np0005539552 nova_compute[233724]: 2025-11-29 08:47:06.118 233728 DEBUG nova.compute.manager [req-01273256-b301-4146-a672-61508e8b781c req-9fcbad4f-3cab-4ae9-bc95-f234ce659277 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] No waiting events found dispatching network-vif-plugged-976a5775-a543-44c7-9edd-4356ec9e3e5f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:47:06 np0005539552 nova_compute[233724]: 2025-11-29 08:47:06.118 233728 WARNING nova.compute.manager [req-01273256-b301-4146-a672-61508e8b781c req-9fcbad4f-3cab-4ae9-bc95-f234ce659277 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Received unexpected event network-vif-plugged-976a5775-a543-44c7-9edd-4356ec9e3e5f for instance with vm_state active and task_state None.#033[00m
Nov 29 03:47:06 np0005539552 nova_compute[233724]: 2025-11-29 08:47:06.591 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:07.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:07.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:07 np0005539552 nova_compute[233724]: 2025-11-29 08:47:07.496 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 03:47:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:09.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 03:47:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:09.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:09 np0005539552 NetworkManager[48926]: <info>  [1764406029.7949] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/416)
Nov 29 03:47:09 np0005539552 NetworkManager[48926]: <info>  [1764406029.7957] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/417)
Nov 29 03:47:09 np0005539552 nova_compute[233724]: 2025-11-29 08:47:09.794 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:09 np0005539552 nova_compute[233724]: 2025-11-29 08:47:09.946 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:09 np0005539552 ovn_controller[133798]: 2025-11-29T08:47:09Z|00940|binding|INFO|Releasing lport 19e88d44-d740-439a-9106-0dfa712bfe7d from this chassis (sb_readonly=0)
Nov 29 03:47:09 np0005539552 nova_compute[233724]: 2025-11-29 08:47:09.958 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:10 np0005539552 nova_compute[233724]: 2025-11-29 08:47:10.122 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:10 np0005539552 nova_compute[233724]: 2025-11-29 08:47:10.197 233728 DEBUG nova.compute.manager [req-77f43e98-c3f1-4ab5-b8cc-9bf455e49dd4 req-754a1f85-cecf-4de0-8795-29f9ebbbca29 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Received event network-changed-976a5775-a543-44c7-9edd-4356ec9e3e5f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:47:10 np0005539552 nova_compute[233724]: 2025-11-29 08:47:10.198 233728 DEBUG nova.compute.manager [req-77f43e98-c3f1-4ab5-b8cc-9bf455e49dd4 req-754a1f85-cecf-4de0-8795-29f9ebbbca29 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Refreshing instance network info cache due to event network-changed-976a5775-a543-44c7-9edd-4356ec9e3e5f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:47:10 np0005539552 nova_compute[233724]: 2025-11-29 08:47:10.198 233728 DEBUG oslo_concurrency.lockutils [req-77f43e98-c3f1-4ab5-b8cc-9bf455e49dd4 req-754a1f85-cecf-4de0-8795-29f9ebbbca29 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-81f8ea10-4440-4354-acb8-7c0026e214f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:47:10 np0005539552 nova_compute[233724]: 2025-11-29 08:47:10.198 233728 DEBUG oslo_concurrency.lockutils [req-77f43e98-c3f1-4ab5-b8cc-9bf455e49dd4 req-754a1f85-cecf-4de0-8795-29f9ebbbca29 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-81f8ea10-4440-4354-acb8-7c0026e214f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:47:10 np0005539552 nova_compute[233724]: 2025-11-29 08:47:10.198 233728 DEBUG nova.network.neutron [req-77f43e98-c3f1-4ab5-b8cc-9bf455e49dd4 req-754a1f85-cecf-4de0-8795-29f9ebbbca29 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Refreshing network info cache for port 976a5775-a543-44c7-9edd-4356ec9e3e5f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:47:10 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #139. Immutable memtables: 0.
Nov 29 03:47:10 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:47:10.661171) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:47:10 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 139
Nov 29 03:47:10 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406030661283, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 2168, "num_deletes": 258, "total_data_size": 4930071, "memory_usage": 5015848, "flush_reason": "Manual Compaction"}
Nov 29 03:47:10 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #140: started
Nov 29 03:47:10 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406030677792, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 140, "file_size": 2049691, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 68268, "largest_seqno": 70431, "table_properties": {"data_size": 2042770, "index_size": 3674, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 18455, "raw_average_key_size": 21, "raw_value_size": 2027416, "raw_average_value_size": 2376, "num_data_blocks": 161, "num_entries": 853, "num_filter_entries": 853, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405867, "oldest_key_time": 1764405867, "file_creation_time": 1764406030, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:47:10 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 16727 microseconds, and 8999 cpu microseconds.
Nov 29 03:47:10 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:47:10 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:47:10.677904) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #140: 2049691 bytes OK
Nov 29 03:47:10 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:47:10.677929) [db/memtable_list.cc:519] [default] Level-0 commit table #140 started
Nov 29 03:47:10 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:47:10.679356) [db/memtable_list.cc:722] [default] Level-0 commit table #140: memtable #1 done
Nov 29 03:47:10 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:47:10.679376) EVENT_LOG_v1 {"time_micros": 1764406030679369, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:47:10 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:47:10.679397) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:47:10 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 4920325, prev total WAL file size 4920325, number of live WAL files 2.
Nov 29 03:47:10 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000136.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:47:10 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:47:10.681369) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323534' seq:72057594037927935, type:22 .. '6D6772737461740032353036' seq:0, type:0; will stop at (end)
Nov 29 03:47:10 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:47:10 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [140(2001KB)], [138(12MB)]
Nov 29 03:47:10 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406030681444, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [140], "files_L6": [138], "score": -1, "input_data_size": 14906000, "oldest_snapshot_seqno": -1}
Nov 29 03:47:10 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #141: 10121 keys, 12178064 bytes, temperature: kUnknown
Nov 29 03:47:10 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406030773961, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 141, "file_size": 12178064, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12114353, "index_size": 37294, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25349, "raw_key_size": 265983, "raw_average_key_size": 26, "raw_value_size": 11938317, "raw_average_value_size": 1179, "num_data_blocks": 1420, "num_entries": 10121, "num_filter_entries": 10121, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764406030, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 141, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:47:10 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:47:10 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:47:10.774239) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 12178064 bytes
Nov 29 03:47:10 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:47:10.777324) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.0 rd, 131.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 12.3 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(13.2) write-amplify(5.9) OK, records in: 10581, records dropped: 460 output_compression: NoCompression
Nov 29 03:47:10 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:47:10.777345) EVENT_LOG_v1 {"time_micros": 1764406030777335, "job": 88, "event": "compaction_finished", "compaction_time_micros": 92574, "compaction_time_cpu_micros": 56038, "output_level": 6, "num_output_files": 1, "total_output_size": 12178064, "num_input_records": 10581, "num_output_records": 10121, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:47:10 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:47:10 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406030777843, "job": 88, "event": "table_file_deletion", "file_number": 140}
Nov 29 03:47:10 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000138.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:47:10 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406030780431, "job": 88, "event": "table_file_deletion", "file_number": 138}
Nov 29 03:47:10 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:47:10.681215) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:47:10 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:47:10.780495) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:47:10 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:47:10.780502) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:47:10 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:47:10.780506) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:47:10 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:47:10.780509) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:47:10 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:47:10.780512) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:47:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:11.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:11.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:11 np0005539552 nova_compute[233724]: 2025-11-29 08:47:11.593 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:11 np0005539552 nova_compute[233724]: 2025-11-29 08:47:11.711 233728 DEBUG nova.network.neutron [req-77f43e98-c3f1-4ab5-b8cc-9bf455e49dd4 req-754a1f85-cecf-4de0-8795-29f9ebbbca29 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Updated VIF entry in instance network info cache for port 976a5775-a543-44c7-9edd-4356ec9e3e5f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:47:11 np0005539552 nova_compute[233724]: 2025-11-29 08:47:11.712 233728 DEBUG nova.network.neutron [req-77f43e98-c3f1-4ab5-b8cc-9bf455e49dd4 req-754a1f85-cecf-4de0-8795-29f9ebbbca29 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Updating instance_info_cache with network_info: [{"id": "976a5775-a543-44c7-9edd-4356ec9e3e5f", "address": "fa:16:3e:1c:2e:f0", "network": {"id": "21923162-fe0c-4f50-88b5-19d1d684fafc", "bridge": "br-int", "label": "tempest-network-smoke--1457068308", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap976a5775-a5", "ovs_interfaceid": "976a5775-a543-44c7-9edd-4356ec9e3e5f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:47:11 np0005539552 nova_compute[233724]: 2025-11-29 08:47:11.736 233728 DEBUG oslo_concurrency.lockutils [req-77f43e98-c3f1-4ab5-b8cc-9bf455e49dd4 req-754a1f85-cecf-4de0-8795-29f9ebbbca29 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-81f8ea10-4440-4354-acb8-7c0026e214f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:47:11 np0005539552 nova_compute[233724]: 2025-11-29 08:47:11.939 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:47:12 np0005539552 nova_compute[233724]: 2025-11-29 08:47:12.498 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:13.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:13.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:13 np0005539552 nova_compute[233724]: 2025-11-29 08:47:13.803 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:14 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e416 e416: 3 total, 3 up, 3 in
Nov 29 03:47:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:15.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:15.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:16 np0005539552 nova_compute[233724]: 2025-11-29 08:47:16.596 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:16 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:47:16.974 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=63, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=62) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:47:16 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:47:16.976 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:47:16 np0005539552 nova_compute[233724]: 2025-11-29 08:47:16.977 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:17 np0005539552 podman[320613]: 2025-11-29 08:47:17.021481866 +0000 UTC m=+0.097390903 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:47:17 np0005539552 podman[320612]: 2025-11-29 08:47:17.052654815 +0000 UTC m=+0.131632845 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3)
Nov 29 03:47:17 np0005539552 podman[320614]: 2025-11-29 08:47:17.065565182 +0000 UTC m=+0.135161409 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 03:47:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:17.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:17.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:17 np0005539552 nova_compute[233724]: 2025-11-29 08:47:17.500 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:18 np0005539552 ovn_controller[133798]: 2025-11-29T08:47:18Z|00127|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1c:2e:f0 10.100.0.10
Nov 29 03:47:18 np0005539552 ovn_controller[133798]: 2025-11-29T08:47:18Z|00128|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1c:2e:f0 10.100.0.10
Nov 29 03:47:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:19.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:19.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:19 np0005539552 nova_compute[233724]: 2025-11-29 08:47:19.905 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:47:20.653 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:47:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:47:20.654 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:47:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:47:20.654 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:47:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:21.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:21.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:21 np0005539552 nova_compute[233724]: 2025-11-29 08:47:21.598 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:22 np0005539552 nova_compute[233724]: 2025-11-29 08:47:22.504 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:22 np0005539552 nova_compute[233724]: 2025-11-29 08:47:22.841 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:23.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:23.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:47:23.979 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '63'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:47:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:25.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:25.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:26 np0005539552 nova_compute[233724]: 2025-11-29 08:47:26.601 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:26 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:47:26 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1339059768' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:47:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:27.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:27.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:27 np0005539552 nova_compute[233724]: 2025-11-29 08:47:27.507 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:29.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:29.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:31.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:31.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:31 np0005539552 nova_compute[233724]: 2025-11-29 08:47:31.603 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:32 np0005539552 nova_compute[233724]: 2025-11-29 08:47:32.509 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:33.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:33.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:35.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:35.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:36 np0005539552 nova_compute[233724]: 2025-11-29 08:47:36.605 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:37 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #142. Immutable memtables: 0.
Nov 29 03:47:37 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:47:37.126746) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:47:37 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 142
Nov 29 03:47:37 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406057126814, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 529, "num_deletes": 251, "total_data_size": 755354, "memory_usage": 766312, "flush_reason": "Manual Compaction"}
Nov 29 03:47:37 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #143: started
Nov 29 03:47:37 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406057136746, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 143, "file_size": 498197, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70436, "largest_seqno": 70960, "table_properties": {"data_size": 495437, "index_size": 795, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6778, "raw_average_key_size": 19, "raw_value_size": 489836, "raw_average_value_size": 1379, "num_data_blocks": 35, "num_entries": 355, "num_filter_entries": 355, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406030, "oldest_key_time": 1764406030, "file_creation_time": 1764406057, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:47:37 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 10058 microseconds, and 2219 cpu microseconds.
Nov 29 03:47:37 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:47:37 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:47:37.136812) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #143: 498197 bytes OK
Nov 29 03:47:37 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:47:37.136837) [db/memtable_list.cc:519] [default] Level-0 commit table #143 started
Nov 29 03:47:37 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:47:37.139385) [db/memtable_list.cc:722] [default] Level-0 commit table #143: memtable #1 done
Nov 29 03:47:37 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:47:37.139437) EVENT_LOG_v1 {"time_micros": 1764406057139404, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:47:37 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:47:37.139460) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:47:37 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 752252, prev total WAL file size 752252, number of live WAL files 2.
Nov 29 03:47:37 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000139.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:47:37 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:47:37.140290) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Nov 29 03:47:37 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:47:37 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [143(486KB)], [141(11MB)]
Nov 29 03:47:37 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406057140334, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [143], "files_L6": [141], "score": -1, "input_data_size": 12676261, "oldest_snapshot_seqno": -1}
Nov 29 03:47:37 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #144: 9962 keys, 10769694 bytes, temperature: kUnknown
Nov 29 03:47:37 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406057221189, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 144, "file_size": 10769694, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10708302, "index_size": 35359, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24965, "raw_key_size": 263399, "raw_average_key_size": 26, "raw_value_size": 10536203, "raw_average_value_size": 1057, "num_data_blocks": 1331, "num_entries": 9962, "num_filter_entries": 9962, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764406057, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 144, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:47:37 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:47:37 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:47:37.221540) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 10769694 bytes
Nov 29 03:47:37 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:47:37.223065) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 156.6 rd, 133.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 11.6 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(47.1) write-amplify(21.6) OK, records in: 10476, records dropped: 514 output_compression: NoCompression
Nov 29 03:47:37 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:47:37.223098) EVENT_LOG_v1 {"time_micros": 1764406057223084, "job": 90, "event": "compaction_finished", "compaction_time_micros": 80935, "compaction_time_cpu_micros": 53530, "output_level": 6, "num_output_files": 1, "total_output_size": 10769694, "num_input_records": 10476, "num_output_records": 9962, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:47:37 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:47:37 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406057223443, "job": 90, "event": "table_file_deletion", "file_number": 143}
Nov 29 03:47:37 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000141.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:47:37 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406057227815, "job": 90, "event": "table_file_deletion", "file_number": 141}
Nov 29 03:47:37 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:47:37.140155) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:47:37 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:47:37.227894) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:47:37 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:47:37.227902) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:47:37 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:47:37.227904) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:47:37 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:47:37.227906) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:47:37 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:47:37.227908) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:47:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:37.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:37.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:37 np0005539552 nova_compute[233724]: 2025-11-29 08:47:37.512 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e417 e417: 3 total, 3 up, 3 in
Nov 29 03:47:38 np0005539552 podman[321009]: 2025-11-29 08:47:38.172252802 +0000 UTC m=+0.045093455 container create ff65ff1a8ff585709647fd33e458b117408223dfe41138df65cd40555ca338ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lehmann, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:47:38 np0005539552 systemd[1]: Started libpod-conmon-ff65ff1a8ff585709647fd33e458b117408223dfe41138df65cd40555ca338ec.scope.
Nov 29 03:47:38 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:47:38 np0005539552 podman[321009]: 2025-11-29 08:47:38.153477917 +0000 UTC m=+0.026318600 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:47:38 np0005539552 podman[321009]: 2025-11-29 08:47:38.249381488 +0000 UTC m=+0.122222151 container init ff65ff1a8ff585709647fd33e458b117408223dfe41138df65cd40555ca338ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lehmann, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 03:47:38 np0005539552 podman[321009]: 2025-11-29 08:47:38.256054168 +0000 UTC m=+0.128894821 container start ff65ff1a8ff585709647fd33e458b117408223dfe41138df65cd40555ca338ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:47:38 np0005539552 podman[321009]: 2025-11-29 08:47:38.258964436 +0000 UTC m=+0.131805089 container attach ff65ff1a8ff585709647fd33e458b117408223dfe41138df65cd40555ca338ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lehmann, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 03:47:38 np0005539552 gallant_lehmann[321025]: 167 167
Nov 29 03:47:38 np0005539552 systemd[1]: libpod-ff65ff1a8ff585709647fd33e458b117408223dfe41138df65cd40555ca338ec.scope: Deactivated successfully.
Nov 29 03:47:38 np0005539552 conmon[321025]: conmon ff65ff1a8ff585709647 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ff65ff1a8ff585709647fd33e458b117408223dfe41138df65cd40555ca338ec.scope/container/memory.events
Nov 29 03:47:38 np0005539552 podman[321009]: 2025-11-29 08:47:38.262253395 +0000 UTC m=+0.135094048 container died ff65ff1a8ff585709647fd33e458b117408223dfe41138df65cd40555ca338ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 03:47:38 np0005539552 systemd[1]: var-lib-containers-storage-overlay-720a9aec633aed19537f1547d553646ec98d60f6fb1a07c4e759bab57729d573-merged.mount: Deactivated successfully.
Nov 29 03:47:38 np0005539552 podman[321009]: 2025-11-29 08:47:38.301563303 +0000 UTC m=+0.174403956 container remove ff65ff1a8ff585709647fd33e458b117408223dfe41138df65cd40555ca338ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lehmann, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 03:47:38 np0005539552 systemd[1]: libpod-conmon-ff65ff1a8ff585709647fd33e458b117408223dfe41138df65cd40555ca338ec.scope: Deactivated successfully.
Nov 29 03:47:38 np0005539552 podman[321050]: 2025-11-29 08:47:38.469115753 +0000 UTC m=+0.048984150 container create 80e6c77a03c2038afa75a1de2b7101fa6660c06e45fded567213d3734b7abbb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_allen, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:47:38 np0005539552 systemd[1]: Started libpod-conmon-80e6c77a03c2038afa75a1de2b7101fa6660c06e45fded567213d3734b7abbb4.scope.
Nov 29 03:47:38 np0005539552 podman[321050]: 2025-11-29 08:47:38.450377799 +0000 UTC m=+0.030246216 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:47:38 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:47:38 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/881df29684c0b7802fbea74b25f5cced2e032929a0a7d86a042bec1b34ba32d5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:47:38 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/881df29684c0b7802fbea74b25f5cced2e032929a0a7d86a042bec1b34ba32d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:47:38 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/881df29684c0b7802fbea74b25f5cced2e032929a0a7d86a042bec1b34ba32d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:47:38 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/881df29684c0b7802fbea74b25f5cced2e032929a0a7d86a042bec1b34ba32d5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:47:38 np0005539552 podman[321050]: 2025-11-29 08:47:38.565470867 +0000 UTC m=+0.145339284 container init 80e6c77a03c2038afa75a1de2b7101fa6660c06e45fded567213d3734b7abbb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:47:38 np0005539552 podman[321050]: 2025-11-29 08:47:38.572793694 +0000 UTC m=+0.152662091 container start 80e6c77a03c2038afa75a1de2b7101fa6660c06e45fded567213d3734b7abbb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_allen, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:47:38 np0005539552 podman[321050]: 2025-11-29 08:47:38.575580099 +0000 UTC m=+0.155448676 container attach 80e6c77a03c2038afa75a1de2b7101fa6660c06e45fded567213d3734b7abbb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 03:47:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:47:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/168909624' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:47:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:47:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/168909624' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:47:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:39.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:39.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:39 np0005539552 awesome_allen[321066]: [
Nov 29 03:47:39 np0005539552 awesome_allen[321066]:    {
Nov 29 03:47:39 np0005539552 awesome_allen[321066]:        "available": false,
Nov 29 03:47:39 np0005539552 awesome_allen[321066]:        "ceph_device": false,
Nov 29 03:47:39 np0005539552 awesome_allen[321066]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 03:47:39 np0005539552 awesome_allen[321066]:        "lsm_data": {},
Nov 29 03:47:39 np0005539552 awesome_allen[321066]:        "lvs": [],
Nov 29 03:47:39 np0005539552 awesome_allen[321066]:        "path": "/dev/sr0",
Nov 29 03:47:39 np0005539552 awesome_allen[321066]:        "rejected_reasons": [
Nov 29 03:47:39 np0005539552 awesome_allen[321066]:            "Insufficient space (<5GB)",
Nov 29 03:47:39 np0005539552 awesome_allen[321066]:            "Has a FileSystem"
Nov 29 03:47:39 np0005539552 awesome_allen[321066]:        ],
Nov 29 03:47:39 np0005539552 awesome_allen[321066]:        "sys_api": {
Nov 29 03:47:39 np0005539552 awesome_allen[321066]:            "actuators": null,
Nov 29 03:47:39 np0005539552 awesome_allen[321066]:            "device_nodes": "sr0",
Nov 29 03:47:39 np0005539552 awesome_allen[321066]:            "devname": "sr0",
Nov 29 03:47:39 np0005539552 awesome_allen[321066]:            "human_readable_size": "482.00 KB",
Nov 29 03:47:39 np0005539552 awesome_allen[321066]:            "id_bus": "ata",
Nov 29 03:47:39 np0005539552 awesome_allen[321066]:            "model": "QEMU DVD-ROM",
Nov 29 03:47:39 np0005539552 awesome_allen[321066]:            "nr_requests": "2",
Nov 29 03:47:39 np0005539552 awesome_allen[321066]:            "parent": "/dev/sr0",
Nov 29 03:47:39 np0005539552 awesome_allen[321066]:            "partitions": {},
Nov 29 03:47:39 np0005539552 awesome_allen[321066]:            "path": "/dev/sr0",
Nov 29 03:47:39 np0005539552 awesome_allen[321066]:            "removable": "1",
Nov 29 03:47:39 np0005539552 awesome_allen[321066]:            "rev": "2.5+",
Nov 29 03:47:39 np0005539552 awesome_allen[321066]:            "ro": "0",
Nov 29 03:47:39 np0005539552 awesome_allen[321066]:            "rotational": "1",
Nov 29 03:47:39 np0005539552 awesome_allen[321066]:            "sas_address": "",
Nov 29 03:47:39 np0005539552 awesome_allen[321066]:            "sas_device_handle": "",
Nov 29 03:47:39 np0005539552 awesome_allen[321066]:            "scheduler_mode": "mq-deadline",
Nov 29 03:47:39 np0005539552 awesome_allen[321066]:            "sectors": 0,
Nov 29 03:47:39 np0005539552 awesome_allen[321066]:            "sectorsize": "2048",
Nov 29 03:47:39 np0005539552 awesome_allen[321066]:            "size": 493568.0,
Nov 29 03:47:39 np0005539552 awesome_allen[321066]:            "support_discard": "2048",
Nov 29 03:47:39 np0005539552 awesome_allen[321066]:            "type": "disk",
Nov 29 03:47:39 np0005539552 awesome_allen[321066]:            "vendor": "QEMU"
Nov 29 03:47:39 np0005539552 awesome_allen[321066]:        }
Nov 29 03:47:39 np0005539552 awesome_allen[321066]:    }
Nov 29 03:47:39 np0005539552 awesome_allen[321066]: ]
Nov 29 03:47:39 np0005539552 systemd[1]: libpod-80e6c77a03c2038afa75a1de2b7101fa6660c06e45fded567213d3734b7abbb4.scope: Deactivated successfully.
Nov 29 03:47:39 np0005539552 systemd[1]: libpod-80e6c77a03c2038afa75a1de2b7101fa6660c06e45fded567213d3734b7abbb4.scope: Consumed 1.213s CPU time.
Nov 29 03:47:39 np0005539552 podman[321050]: 2025-11-29 08:47:39.79791986 +0000 UTC m=+1.377788247 container died 80e6c77a03c2038afa75a1de2b7101fa6660c06e45fded567213d3734b7abbb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:47:39 np0005539552 systemd[1]: var-lib-containers-storage-overlay-881df29684c0b7802fbea74b25f5cced2e032929a0a7d86a042bec1b34ba32d5-merged.mount: Deactivated successfully.
Nov 29 03:47:39 np0005539552 podman[321050]: 2025-11-29 08:47:39.867811551 +0000 UTC m=+1.447679958 container remove 80e6c77a03c2038afa75a1de2b7101fa6660c06e45fded567213d3734b7abbb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_allen, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:47:39 np0005539552 systemd[1]: libpod-conmon-80e6c77a03c2038afa75a1de2b7101fa6660c06e45fded567213d3734b7abbb4.scope: Deactivated successfully.
Nov 29 03:47:40 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:47:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:41.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:41.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:41 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:47:41 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:47:41 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:47:41 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:47:41 np0005539552 nova_compute[233724]: 2025-11-29 08:47:41.608 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:42 np0005539552 nova_compute[233724]: 2025-11-29 08:47:42.515 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:43.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:43.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:45.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:45.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:45 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e418 e418: 3 total, 3 up, 3 in
Nov 29 03:47:46 np0005539552 nova_compute[233724]: 2025-11-29 08:47:46.611 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:46 np0005539552 nova_compute[233724]: 2025-11-29 08:47:46.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:47:46 np0005539552 nova_compute[233724]: 2025-11-29 08:47:46.948 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:47:46 np0005539552 nova_compute[233724]: 2025-11-29 08:47:46.949 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:47:46 np0005539552 nova_compute[233724]: 2025-11-29 08:47:46.949 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:47:46 np0005539552 nova_compute[233724]: 2025-11-29 08:47:46.950 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:47:46 np0005539552 nova_compute[233724]: 2025-11-29 08:47:46.950 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:47:47 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:47:47 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:47:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:47.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:47.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:47:47 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3291252116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:47:47 np0005539552 nova_compute[233724]: 2025-11-29 08:47:47.439 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:47:47 np0005539552 nova_compute[233724]: 2025-11-29 08:47:47.516 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:47 np0005539552 podman[322427]: 2025-11-29 08:47:47.574124636 +0000 UTC m=+0.074009083 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 03:47:47 np0005539552 podman[322425]: 2025-11-29 08:47:47.583440867 +0000 UTC m=+0.090189419 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:47:47 np0005539552 podman[322428]: 2025-11-29 08:47:47.626440724 +0000 UTC m=+0.115864750 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 03:47:47 np0005539552 nova_compute[233724]: 2025-11-29 08:47:47.768 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-000000d5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:47:47 np0005539552 nova_compute[233724]: 2025-11-29 08:47:47.769 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-000000d5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:47:47 np0005539552 nova_compute[233724]: 2025-11-29 08:47:47.947 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:47:47 np0005539552 nova_compute[233724]: 2025-11-29 08:47:47.948 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3943MB free_disk=20.900936126708984GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:47:47 np0005539552 nova_compute[233724]: 2025-11-29 08:47:47.948 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:47:47 np0005539552 nova_compute[233724]: 2025-11-29 08:47:47.949 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:47:48 np0005539552 nova_compute[233724]: 2025-11-29 08:47:48.112 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 81f8ea10-4440-4354-acb8-7c0026e214f2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:47:48 np0005539552 nova_compute[233724]: 2025-11-29 08:47:48.112 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:47:48 np0005539552 nova_compute[233724]: 2025-11-29 08:47:48.112 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:47:48 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #59. Immutable memtables: 15.
Nov 29 03:47:48 np0005539552 nova_compute[233724]: 2025-11-29 08:47:48.291 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:47:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:47:48 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1600270025' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:47:48 np0005539552 nova_compute[233724]: 2025-11-29 08:47:48.761 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:47:48 np0005539552 nova_compute[233724]: 2025-11-29 08:47:48.769 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:47:48 np0005539552 nova_compute[233724]: 2025-11-29 08:47:48.789 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:47:48 np0005539552 nova_compute[233724]: 2025-11-29 08:47:48.812 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:47:48 np0005539552 nova_compute[233724]: 2025-11-29 08:47:48.812 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.864s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:47:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:49.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:49.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:51.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:51.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:51 np0005539552 nova_compute[233724]: 2025-11-29 08:47:51.612 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:52 np0005539552 nova_compute[233724]: 2025-11-29 08:47:52.520 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:53.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:53.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:55.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:55.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:56 np0005539552 nova_compute[233724]: 2025-11-29 08:47:56.616 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:56 np0005539552 nova_compute[233724]: 2025-11-29 08:47:56.813 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:47:56 np0005539552 nova_compute[233724]: 2025-11-29 08:47:56.813 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:47:56 np0005539552 nova_compute[233724]: 2025-11-29 08:47:56.814 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:47:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:57.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:57.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:57 np0005539552 nova_compute[233724]: 2025-11-29 08:47:57.522 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:57 np0005539552 nova_compute[233724]: 2025-11-29 08:47:57.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:47:58 np0005539552 nova_compute[233724]: 2025-11-29 08:47:58.818 233728 DEBUG oslo_concurrency.lockutils [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "8e6ec5c0-7d7c-41e0-9f48-9e526008b00c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:47:58 np0005539552 nova_compute[233724]: 2025-11-29 08:47:58.818 233728 DEBUG oslo_concurrency.lockutils [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "8e6ec5c0-7d7c-41e0-9f48-9e526008b00c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:47:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:58 np0005539552 nova_compute[233724]: 2025-11-29 08:47:58.836 233728 DEBUG nova.compute.manager [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:47:58 np0005539552 nova_compute[233724]: 2025-11-29 08:47:58.921 233728 DEBUG oslo_concurrency.lockutils [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:47:58 np0005539552 nova_compute[233724]: 2025-11-29 08:47:58.922 233728 DEBUG oslo_concurrency.lockutils [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:47:58 np0005539552 nova_compute[233724]: 2025-11-29 08:47:58.930 233728 DEBUG nova.virt.hardware [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:47:58 np0005539552 nova_compute[233724]: 2025-11-29 08:47:58.931 233728 INFO nova.compute.claims [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:47:59 np0005539552 nova_compute[233724]: 2025-11-29 08:47:59.055 233728 DEBUG oslo_concurrency.processutils [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:47:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:59.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:47:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:59.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:59 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:47:59 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/78909111' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:47:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:47:59.525 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=64, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=63) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:47:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:47:59.526 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:47:59 np0005539552 nova_compute[233724]: 2025-11-29 08:47:59.526 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:59 np0005539552 nova_compute[233724]: 2025-11-29 08:47:59.553 233728 DEBUG oslo_concurrency.processutils [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:47:59 np0005539552 nova_compute[233724]: 2025-11-29 08:47:59.563 233728 DEBUG nova.compute.provider_tree [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:47:59 np0005539552 nova_compute[233724]: 2025-11-29 08:47:59.582 233728 DEBUG nova.scheduler.client.report [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:47:59 np0005539552 nova_compute[233724]: 2025-11-29 08:47:59.612 233728 DEBUG oslo_concurrency.lockutils [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:47:59 np0005539552 nova_compute[233724]: 2025-11-29 08:47:59.613 233728 DEBUG nova.compute.manager [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:47:59 np0005539552 nova_compute[233724]: 2025-11-29 08:47:59.667 233728 DEBUG nova.compute.manager [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:47:59 np0005539552 nova_compute[233724]: 2025-11-29 08:47:59.668 233728 DEBUG nova.network.neutron [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:47:59 np0005539552 nova_compute[233724]: 2025-11-29 08:47:59.689 233728 INFO nova.virt.libvirt.driver [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:47:59 np0005539552 nova_compute[233724]: 2025-11-29 08:47:59.708 233728 DEBUG nova.compute.manager [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:47:59 np0005539552 nova_compute[233724]: 2025-11-29 08:47:59.765 233728 INFO nova.virt.block_device [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Booting with volume bf7e0110-334f-4129-a24d-43eb68f20833 at /dev/vda#033[00m
Nov 29 03:47:59 np0005539552 nova_compute[233724]: 2025-11-29 08:47:59.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:47:59 np0005539552 nova_compute[233724]: 2025-11-29 08:47:59.925 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:48:00 np0005539552 nova_compute[233724]: 2025-11-29 08:48:00.209 233728 DEBUG os_brick.utils [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:48:00 np0005539552 nova_compute[233724]: 2025-11-29 08:48:00.212 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:00 np0005539552 nova_compute[233724]: 2025-11-29 08:48:00.223 243418 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:48:00 np0005539552 nova_compute[233724]: 2025-11-29 08:48:00.224 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[6e9182c8-34e6-414e-9085-ba6f9780fd36]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:00 np0005539552 nova_compute[233724]: 2025-11-29 08:48:00.226 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:00 np0005539552 nova_compute[233724]: 2025-11-29 08:48:00.239 233728 DEBUG nova.policy [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b576a51181b5425aa6e44a0eb0a22803', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b7ffcb23bac14ee49474df9aee5f7dae', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:48:00 np0005539552 nova_compute[233724]: 2025-11-29 08:48:00.237 243418 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:48:00 np0005539552 nova_compute[233724]: 2025-11-29 08:48:00.237 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[fa150a3f-06cc-4b63-bfcd-4204e7805a72]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0e73df9328', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:00 np0005539552 nova_compute[233724]: 2025-11-29 08:48:00.245 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:00 np0005539552 nova_compute[233724]: 2025-11-29 08:48:00.254 243418 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:48:00 np0005539552 nova_compute[233724]: 2025-11-29 08:48:00.254 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[6a7fd43f-e3b5-4dd0-b7c8-0736de0a4614]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:00 np0005539552 nova_compute[233724]: 2025-11-29 08:48:00.256 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[d609bdd5-f019-4668-9864-a057491960bd]: (4, '6fbde64a-d978-4f1a-a29d-a77e1f5a1987') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:00 np0005539552 nova_compute[233724]: 2025-11-29 08:48:00.257 233728 DEBUG oslo_concurrency.processutils [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:00 np0005539552 nova_compute[233724]: 2025-11-29 08:48:00.292 233728 DEBUG oslo_concurrency.processutils [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CMD "nvme version" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:48:00 np0005539552 nova_compute[233724]: 2025-11-29 08:48:00.296 233728 DEBUG os_brick.initiator.connectors.lightos [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:48:00 np0005539552 nova_compute[233724]: 2025-11-29 08:48:00.296 233728 DEBUG os_brick.initiator.connectors.lightos [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:48:00 np0005539552 nova_compute[233724]: 2025-11-29 08:48:00.297 233728 DEBUG os_brick.initiator.connectors.lightos [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:48:00 np0005539552 nova_compute[233724]: 2025-11-29 08:48:00.298 233728 DEBUG os_brick.utils [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] <== get_connector_properties: return (87ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0e73df9328', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '6fbde64a-d978-4f1a-a29d-a77e1f5a1987', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:48:00 np0005539552 nova_compute[233724]: 2025-11-29 08:48:00.298 233728 DEBUG nova.virt.block_device [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Updating existing volume attachment record: cb358295-d686-40bf-92e5-eaba4abcec9a _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:48:00 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:48:00 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3487480216' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:48:01 np0005539552 nova_compute[233724]: 2025-11-29 08:48:01.017 233728 DEBUG nova.network.neutron [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Successfully created port: cdeb0376-cedf-4745-964f-897685f6d3de _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:48:01 np0005539552 nova_compute[233724]: 2025-11-29 08:48:01.225 233728 DEBUG nova.compute.manager [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:48:01 np0005539552 nova_compute[233724]: 2025-11-29 08:48:01.228 233728 DEBUG nova.virt.libvirt.driver [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:48:01 np0005539552 nova_compute[233724]: 2025-11-29 08:48:01.229 233728 INFO nova.virt.libvirt.driver [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Creating image(s)#033[00m
Nov 29 03:48:01 np0005539552 nova_compute[233724]: 2025-11-29 08:48:01.229 233728 DEBUG nova.virt.libvirt.driver [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:48:01 np0005539552 nova_compute[233724]: 2025-11-29 08:48:01.230 233728 DEBUG nova.virt.libvirt.driver [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Ensure instance console log exists: /var/lib/nova/instances/8e6ec5c0-7d7c-41e0-9f48-9e526008b00c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:48:01 np0005539552 nova_compute[233724]: 2025-11-29 08:48:01.231 233728 DEBUG oslo_concurrency.lockutils [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:01 np0005539552 nova_compute[233724]: 2025-11-29 08:48:01.231 233728 DEBUG oslo_concurrency.lockutils [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:01 np0005539552 nova_compute[233724]: 2025-11-29 08:48:01.232 233728 DEBUG oslo_concurrency.lockutils [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:01.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:01.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:01 np0005539552 nova_compute[233724]: 2025-11-29 08:48:01.619 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:02 np0005539552 nova_compute[233724]: 2025-11-29 08:48:02.523 233728 DEBUG nova.network.neutron [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Successfully updated port: cdeb0376-cedf-4745-964f-897685f6d3de _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:48:02 np0005539552 nova_compute[233724]: 2025-11-29 08:48:02.525 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:02 np0005539552 nova_compute[233724]: 2025-11-29 08:48:02.545 233728 DEBUG oslo_concurrency.lockutils [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "refresh_cache-8e6ec5c0-7d7c-41e0-9f48-9e526008b00c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:48:02 np0005539552 nova_compute[233724]: 2025-11-29 08:48:02.545 233728 DEBUG oslo_concurrency.lockutils [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquired lock "refresh_cache-8e6ec5c0-7d7c-41e0-9f48-9e526008b00c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:48:02 np0005539552 nova_compute[233724]: 2025-11-29 08:48:02.545 233728 DEBUG nova.network.neutron [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:48:02 np0005539552 nova_compute[233724]: 2025-11-29 08:48:02.796 233728 DEBUG nova.network.neutron [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:48:02 np0005539552 nova_compute[233724]: 2025-11-29 08:48:02.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:48:02 np0005539552 nova_compute[233724]: 2025-11-29 08:48:02.973 233728 DEBUG nova.compute.manager [req-c8cd8795-aec5-4f47-8dd8-47996b47eae8 req-3abf2551-a312-492a-81fe-82f104043b02 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Received event network-changed-cdeb0376-cedf-4745-964f-897685f6d3de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:48:02 np0005539552 nova_compute[233724]: 2025-11-29 08:48:02.973 233728 DEBUG nova.compute.manager [req-c8cd8795-aec5-4f47-8dd8-47996b47eae8 req-3abf2551-a312-492a-81fe-82f104043b02 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Refreshing instance network info cache due to event network-changed-cdeb0376-cedf-4745-964f-897685f6d3de. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:48:02 np0005539552 nova_compute[233724]: 2025-11-29 08:48:02.974 233728 DEBUG oslo_concurrency.lockutils [req-c8cd8795-aec5-4f47-8dd8-47996b47eae8 req-3abf2551-a312-492a-81fe-82f104043b02 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-8e6ec5c0-7d7c-41e0-9f48-9e526008b00c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:48:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e419 e419: 3 total, 3 up, 3 in
Nov 29 03:48:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:03.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:03.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:03 np0005539552 nova_compute[233724]: 2025-11-29 08:48:03.454 233728 DEBUG nova.compute.manager [req-14b49e52-dd97-4162-9490-3d85b33ef3e9 req-4fc26d52-0e39-4b46-a203-7ce1d787ff11 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Received event network-changed-976a5775-a543-44c7-9edd-4356ec9e3e5f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:48:03 np0005539552 nova_compute[233724]: 2025-11-29 08:48:03.455 233728 DEBUG nova.compute.manager [req-14b49e52-dd97-4162-9490-3d85b33ef3e9 req-4fc26d52-0e39-4b46-a203-7ce1d787ff11 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Refreshing instance network info cache due to event network-changed-976a5775-a543-44c7-9edd-4356ec9e3e5f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:48:03 np0005539552 nova_compute[233724]: 2025-11-29 08:48:03.455 233728 DEBUG oslo_concurrency.lockutils [req-14b49e52-dd97-4162-9490-3d85b33ef3e9 req-4fc26d52-0e39-4b46-a203-7ce1d787ff11 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-81f8ea10-4440-4354-acb8-7c0026e214f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:48:03 np0005539552 nova_compute[233724]: 2025-11-29 08:48:03.456 233728 DEBUG oslo_concurrency.lockutils [req-14b49e52-dd97-4162-9490-3d85b33ef3e9 req-4fc26d52-0e39-4b46-a203-7ce1d787ff11 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-81f8ea10-4440-4354-acb8-7c0026e214f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:48:03 np0005539552 nova_compute[233724]: 2025-11-29 08:48:03.456 233728 DEBUG nova.network.neutron [req-14b49e52-dd97-4162-9490-3d85b33ef3e9 req-4fc26d52-0e39-4b46-a203-7ce1d787ff11 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Refreshing network info cache for port 976a5775-a543-44c7-9edd-4356ec9e3e5f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:48:03 np0005539552 nova_compute[233724]: 2025-11-29 08:48:03.521 233728 DEBUG oslo_concurrency.lockutils [None req-89cec421-9bef-4e5e-8a6e-aa88e2a588cc de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "81f8ea10-4440-4354-acb8-7c0026e214f2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:03 np0005539552 nova_compute[233724]: 2025-11-29 08:48:03.521 233728 DEBUG oslo_concurrency.lockutils [None req-89cec421-9bef-4e5e-8a6e-aa88e2a588cc de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "81f8ea10-4440-4354-acb8-7c0026e214f2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:03 np0005539552 nova_compute[233724]: 2025-11-29 08:48:03.522 233728 DEBUG oslo_concurrency.lockutils [None req-89cec421-9bef-4e5e-8a6e-aa88e2a588cc de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "81f8ea10-4440-4354-acb8-7c0026e214f2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:03 np0005539552 nova_compute[233724]: 2025-11-29 08:48:03.522 233728 DEBUG oslo_concurrency.lockutils [None req-89cec421-9bef-4e5e-8a6e-aa88e2a588cc de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "81f8ea10-4440-4354-acb8-7c0026e214f2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:03 np0005539552 nova_compute[233724]: 2025-11-29 08:48:03.522 233728 DEBUG oslo_concurrency.lockutils [None req-89cec421-9bef-4e5e-8a6e-aa88e2a588cc de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "81f8ea10-4440-4354-acb8-7c0026e214f2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:03 np0005539552 nova_compute[233724]: 2025-11-29 08:48:03.524 233728 INFO nova.compute.manager [None req-89cec421-9bef-4e5e-8a6e-aa88e2a588cc de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Terminating instance#033[00m
Nov 29 03:48:03 np0005539552 nova_compute[233724]: 2025-11-29 08:48:03.526 233728 DEBUG nova.compute.manager [None req-89cec421-9bef-4e5e-8a6e-aa88e2a588cc de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:48:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:03.528 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '64'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:03 np0005539552 kernel: tap976a5775-a5 (unregistering): left promiscuous mode
Nov 29 03:48:03 np0005539552 NetworkManager[48926]: <info>  [1764406083.6563] device (tap976a5775-a5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:48:03 np0005539552 nova_compute[233724]: 2025-11-29 08:48:03.666 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:03 np0005539552 ovn_controller[133798]: 2025-11-29T08:48:03Z|00941|binding|INFO|Releasing lport 976a5775-a543-44c7-9edd-4356ec9e3e5f from this chassis (sb_readonly=0)
Nov 29 03:48:03 np0005539552 ovn_controller[133798]: 2025-11-29T08:48:03Z|00942|binding|INFO|Setting lport 976a5775-a543-44c7-9edd-4356ec9e3e5f down in Southbound
Nov 29 03:48:03 np0005539552 ovn_controller[133798]: 2025-11-29T08:48:03Z|00943|binding|INFO|Removing iface tap976a5775-a5 ovn-installed in OVS
Nov 29 03:48:03 np0005539552 nova_compute[233724]: 2025-11-29 08:48:03.669 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:03 np0005539552 nova_compute[233724]: 2025-11-29 08:48:03.670 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:03.675 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1c:2e:f0 10.100.0.10'], port_security=['fa:16:3e:1c:2e:f0 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '81f8ea10-4440-4354-acb8-7c0026e214f2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-21923162-fe0c-4f50-88b5-19d1d684fafc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '75423dfb570f4b2bbc2f8de4f3a65d18', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8aeea424-b712-4a7c-90b2-3d14349e7c5b fc98692a-6001-46c7-9683-7744fbf55ac5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fdba913e-28a6-4e48-bfcc-da07411da078, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=976a5775-a543-44c7-9edd-4356ec9e3e5f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:48:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:03.677 143400 INFO neutron.agent.ovn.metadata.agent [-] Port 976a5775-a543-44c7-9edd-4356ec9e3e5f in datapath 21923162-fe0c-4f50-88b5-19d1d684fafc unbound from our chassis#033[00m
Nov 29 03:48:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:03.679 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 21923162-fe0c-4f50-88b5-19d1d684fafc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:48:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:03.681 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[3da508ff-488e-4bc2-bf25-4b8d1d553ebe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:03 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:03.682 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-21923162-fe0c-4f50-88b5-19d1d684fafc namespace which is not needed anymore#033[00m
Nov 29 03:48:03 np0005539552 nova_compute[233724]: 2025-11-29 08:48:03.693 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:03 np0005539552 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d000000d5.scope: Deactivated successfully.
Nov 29 03:48:03 np0005539552 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d000000d5.scope: Consumed 17.731s CPU time.
Nov 29 03:48:03 np0005539552 systemd-machined[196379]: Machine qemu-94-instance-000000d5 terminated.
Nov 29 03:48:03 np0005539552 nova_compute[233724]: 2025-11-29 08:48:03.751 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:03 np0005539552 nova_compute[233724]: 2025-11-29 08:48:03.758 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:03 np0005539552 nova_compute[233724]: 2025-11-29 08:48:03.766 233728 INFO nova.virt.libvirt.driver [-] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Instance destroyed successfully.#033[00m
Nov 29 03:48:03 np0005539552 nova_compute[233724]: 2025-11-29 08:48:03.767 233728 DEBUG nova.objects.instance [None req-89cec421-9bef-4e5e-8a6e-aa88e2a588cc de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lazy-loading 'resources' on Instance uuid 81f8ea10-4440-4354-acb8-7c0026e214f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:48:03 np0005539552 nova_compute[233724]: 2025-11-29 08:48:03.810 233728 DEBUG nova.virt.libvirt.vif [None req-89cec421-9bef-4e5e-8a6e-aa88e2a588cc de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:46:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-access_point-1831388980',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-access_point-1831388980',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1136856573-ac',id=213,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIqztyNxcFNGX+Jw/TzqSaf7IKcldRL0GvIZInM3adtkD/2hkuxV9cfJutZL0j7me7Di9qNueMjWPCgJZ95kQGG++Pk0vOKucPX3FrXhBYJOxZ/WNUp453mg0OmkvApWRQ==',key_name='tempest-TestSecurityGroupsBasicOps-1327998220',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:47:05Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='75423dfb570f4b2bbc2f8de4f3a65d18',ramdisk_id='',reservation_id='r-0eftkbzr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1136856573',owner_user_name='tempest-TestSecurityGroupsBasicOps-1136856573-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:47:05Z,user_data=None,user_id='de2965680b714b539553cf0792584e1e',uuid=81f8ea10-4440-4354-acb8-7c0026e214f2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "976a5775-a543-44c7-9edd-4356ec9e3e5f", "address": "fa:16:3e:1c:2e:f0", "network": {"id": "21923162-fe0c-4f50-88b5-19d1d684fafc", "bridge": "br-int", "label": "tempest-network-smoke--1457068308", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap976a5775-a5", "ovs_interfaceid": "976a5775-a543-44c7-9edd-4356ec9e3e5f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:48:03 np0005539552 nova_compute[233724]: 2025-11-29 08:48:03.811 233728 DEBUG nova.network.os_vif_util [None req-89cec421-9bef-4e5e-8a6e-aa88e2a588cc de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Converting VIF {"id": "976a5775-a543-44c7-9edd-4356ec9e3e5f", "address": "fa:16:3e:1c:2e:f0", "network": {"id": "21923162-fe0c-4f50-88b5-19d1d684fafc", "bridge": "br-int", "label": "tempest-network-smoke--1457068308", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap976a5775-a5", "ovs_interfaceid": "976a5775-a543-44c7-9edd-4356ec9e3e5f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:48:03 np0005539552 nova_compute[233724]: 2025-11-29 08:48:03.812 233728 DEBUG nova.network.os_vif_util [None req-89cec421-9bef-4e5e-8a6e-aa88e2a588cc de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1c:2e:f0,bridge_name='br-int',has_traffic_filtering=True,id=976a5775-a543-44c7-9edd-4356ec9e3e5f,network=Network(21923162-fe0c-4f50-88b5-19d1d684fafc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap976a5775-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:48:03 np0005539552 nova_compute[233724]: 2025-11-29 08:48:03.812 233728 DEBUG os_vif [None req-89cec421-9bef-4e5e-8a6e-aa88e2a588cc de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1c:2e:f0,bridge_name='br-int',has_traffic_filtering=True,id=976a5775-a543-44c7-9edd-4356ec9e3e5f,network=Network(21923162-fe0c-4f50-88b5-19d1d684fafc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap976a5775-a5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:48:03 np0005539552 nova_compute[233724]: 2025-11-29 08:48:03.813 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:03 np0005539552 nova_compute[233724]: 2025-11-29 08:48:03.814 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap976a5775-a5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:03 np0005539552 nova_compute[233724]: 2025-11-29 08:48:03.815 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:03 np0005539552 nova_compute[233724]: 2025-11-29 08:48:03.817 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:03 np0005539552 nova_compute[233724]: 2025-11-29 08:48:03.820 233728 INFO os_vif [None req-89cec421-9bef-4e5e-8a6e-aa88e2a588cc de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1c:2e:f0,bridge_name='br-int',has_traffic_filtering=True,id=976a5775-a543-44c7-9edd-4356ec9e3e5f,network=Network(21923162-fe0c-4f50-88b5-19d1d684fafc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap976a5775-a5')#033[00m
Nov 29 03:48:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:03 np0005539552 neutron-haproxy-ovnmeta-21923162-fe0c-4f50-88b5-19d1d684fafc[320493]: [NOTICE]   (320497) : haproxy version is 2.8.14-c23fe91
Nov 29 03:48:03 np0005539552 neutron-haproxy-ovnmeta-21923162-fe0c-4f50-88b5-19d1d684fafc[320493]: [NOTICE]   (320497) : path to executable is /usr/sbin/haproxy
Nov 29 03:48:03 np0005539552 neutron-haproxy-ovnmeta-21923162-fe0c-4f50-88b5-19d1d684fafc[320493]: [WARNING]  (320497) : Exiting Master process...
Nov 29 03:48:03 np0005539552 neutron-haproxy-ovnmeta-21923162-fe0c-4f50-88b5-19d1d684fafc[320493]: [ALERT]    (320497) : Current worker (320499) exited with code 143 (Terminated)
Nov 29 03:48:03 np0005539552 neutron-haproxy-ovnmeta-21923162-fe0c-4f50-88b5-19d1d684fafc[320493]: [WARNING]  (320497) : All workers exited. Exiting... (0)
Nov 29 03:48:03 np0005539552 systemd[1]: libpod-ccd941e5137b6785064230b594bec785e42f9933dd8e8fff2fae9358bbda2861.scope: Deactivated successfully.
Nov 29 03:48:03 np0005539552 conmon[320493]: conmon ccd941e5137b67850642 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ccd941e5137b6785064230b594bec785e42f9933dd8e8fff2fae9358bbda2861.scope/container/memory.events
Nov 29 03:48:03 np0005539552 podman[322634]: 2025-11-29 08:48:03.908720311 +0000 UTC m=+0.067244081 container died ccd941e5137b6785064230b594bec785e42f9933dd8e8fff2fae9358bbda2861 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-21923162-fe0c-4f50-88b5-19d1d684fafc, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 03:48:03 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ccd941e5137b6785064230b594bec785e42f9933dd8e8fff2fae9358bbda2861-userdata-shm.mount: Deactivated successfully.
Nov 29 03:48:03 np0005539552 systemd[1]: var-lib-containers-storage-overlay-150682837b8f98102a5d6b265b41765ab31da735d2c1c62fe3bdb3e725d48b72-merged.mount: Deactivated successfully.
Nov 29 03:48:03 np0005539552 nova_compute[233724]: 2025-11-29 08:48:03.982 233728 DEBUG nova.network.neutron [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Updating instance_info_cache with network_info: [{"id": "cdeb0376-cedf-4745-964f-897685f6d3de", "address": "fa:16:3e:d2:c2:41", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdeb0376-ce", "ovs_interfaceid": "cdeb0376-cedf-4745-964f-897685f6d3de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:48:04 np0005539552 podman[322634]: 2025-11-29 08:48:04.046384537 +0000 UTC m=+0.204908237 container cleanup ccd941e5137b6785064230b594bec785e42f9933dd8e8fff2fae9358bbda2861 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-21923162-fe0c-4f50-88b5-19d1d684fafc, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 03:48:04 np0005539552 systemd[1]: libpod-conmon-ccd941e5137b6785064230b594bec785e42f9933dd8e8fff2fae9358bbda2861.scope: Deactivated successfully.
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.061 233728 DEBUG oslo_concurrency.lockutils [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Releasing lock "refresh_cache-8e6ec5c0-7d7c-41e0-9f48-9e526008b00c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.062 233728 DEBUG nova.compute.manager [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Instance network_info: |[{"id": "cdeb0376-cedf-4745-964f-897685f6d3de", "address": "fa:16:3e:d2:c2:41", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdeb0376-ce", "ovs_interfaceid": "cdeb0376-cedf-4745-964f-897685f6d3de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.063 233728 DEBUG oslo_concurrency.lockutils [req-c8cd8795-aec5-4f47-8dd8-47996b47eae8 req-3abf2551-a312-492a-81fe-82f104043b02 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-8e6ec5c0-7d7c-41e0-9f48-9e526008b00c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.064 233728 DEBUG nova.network.neutron [req-c8cd8795-aec5-4f47-8dd8-47996b47eae8 req-3abf2551-a312-492a-81fe-82f104043b02 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Refreshing network info cache for port cdeb0376-cedf-4745-964f-897685f6d3de _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.070 233728 DEBUG nova.virt.libvirt.driver [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Start _get_guest_xml network_info=[{"id": "cdeb0376-cedf-4745-964f-897685f6d3de", "address": "fa:16:3e:d2:c2:41", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdeb0376-ce", "ovs_interfaceid": "cdeb0376-cedf-4745-964f-897685f6d3de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-bf7e0110-334f-4129-a24d-43eb68f20833', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'bf7e0110-334f-4129-a24d-43eb68f20833', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '8e6ec5c0-7d7c-41e0-9f48-9e526008b00c', 'attached_at': '', 'detached_at': '', 'volume_id': 'bf7e0110-334f-4129-a24d-43eb68f20833', 'serial': 'bf7e0110-334f-4129-a24d-43eb68f20833'}, 'delete_on_termination': True, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'attachment_id': 'cb358295-d686-40bf-92e5-eaba4abcec9a', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.078 233728 WARNING nova.virt.libvirt.driver [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.090 233728 DEBUG nova.virt.libvirt.host [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.091 233728 DEBUG nova.virt.libvirt.host [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.097 233728 DEBUG nova.virt.libvirt.host [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.098 233728 DEBUG nova.virt.libvirt.host [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.099 233728 DEBUG nova.virt.libvirt.driver [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.099 233728 DEBUG nova.virt.hardware [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.100 233728 DEBUG nova.virt.hardware [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.100 233728 DEBUG nova.virt.hardware [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.100 233728 DEBUG nova.virt.hardware [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.101 233728 DEBUG nova.virt.hardware [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.101 233728 DEBUG nova.virt.hardware [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.102 233728 DEBUG nova.virt.hardware [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.102 233728 DEBUG nova.virt.hardware [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.102 233728 DEBUG nova.virt.hardware [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.102 233728 DEBUG nova.virt.hardware [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.103 233728 DEBUG nova.virt.hardware [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.137 233728 DEBUG nova.storage.rbd_utils [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] rbd image 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.143 233728 DEBUG oslo_concurrency.processutils [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:04 np0005539552 podman[322682]: 2025-11-29 08:48:04.215404486 +0000 UTC m=+0.139498356 container remove ccd941e5137b6785064230b594bec785e42f9933dd8e8fff2fae9358bbda2861 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-21923162-fe0c-4f50-88b5-19d1d684fafc, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:48:04 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:04.224 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a236770a-f8d3-4534-b191-a92a32d40511]: (4, ('Sat Nov 29 08:48:03 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-21923162-fe0c-4f50-88b5-19d1d684fafc (ccd941e5137b6785064230b594bec785e42f9933dd8e8fff2fae9358bbda2861)\nccd941e5137b6785064230b594bec785e42f9933dd8e8fff2fae9358bbda2861\nSat Nov 29 08:48:04 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-21923162-fe0c-4f50-88b5-19d1d684fafc (ccd941e5137b6785064230b594bec785e42f9933dd8e8fff2fae9358bbda2861)\nccd941e5137b6785064230b594bec785e42f9933dd8e8fff2fae9358bbda2861\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:04 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:04.225 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[5586d76d-b914-4920-b6a4-187cb43c1359]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:04 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:04.226 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap21923162-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:04 np0005539552 kernel: tap21923162-f0: left promiscuous mode
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.228 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:04 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:04.248 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2b243253-9e2f-47ed-b147-a9119486a5f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.251 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:04 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e420 e420: 3 total, 3 up, 3 in
Nov 29 03:48:04 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:04.268 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c3f97384-13f1-4a08-89a1-68894e89beff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:04 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:04.269 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[dc3ac7f6-7840-4eac-895b-c3be8a5a0bf5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:04 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:04.287 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[2be676a4-6792-44ba-af75-da7eba210b30]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 906915, 'reachable_time': 21822, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322727, 'error': None, 'target': 'ovnmeta-21923162-fe0c-4f50-88b5-19d1d684fafc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:04 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:04.291 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-21923162-fe0c-4f50-88b5-19d1d684fafc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:48:04 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:04.291 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[be8b5ac2-17a2-487d-bc8a-b862525c8ba7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:04 np0005539552 systemd[1]: run-netns-ovnmeta\x2d21923162\x2dfe0c\x2d4f50\x2d88b5\x2d19d1d684fafc.mount: Deactivated successfully.
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.549 233728 INFO nova.virt.libvirt.driver [None req-89cec421-9bef-4e5e-8a6e-aa88e2a588cc de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Deleting instance files /var/lib/nova/instances/81f8ea10-4440-4354-acb8-7c0026e214f2_del#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.551 233728 INFO nova.virt.libvirt.driver [None req-89cec421-9bef-4e5e-8a6e-aa88e2a588cc de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Deletion of /var/lib/nova/instances/81f8ea10-4440-4354-acb8-7c0026e214f2_del complete#033[00m
Nov 29 03:48:04 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:48:04 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/466539900' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.643 233728 DEBUG oslo_concurrency.processutils [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.698 233728 DEBUG nova.virt.libvirt.vif [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:47:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-616866148',display_name='tempest-TestVolumeBootPattern-volume-backed-server-616866148',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-616866148',id=217,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNg2ulNScxkLomT3KlqY7WgNFKNe+0JBK36+ZAjL1G+wn5E1JZQQEO2iNWUZiebxVVxIzMjtJPgpywxuPu+VOiXPU2+U2uS7TbUlakOcF8J45n6B4bN1PXCw5j3OOEq1iw==',key_name='tempest-keypair-1098426915',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b7ffcb23bac14ee49474df9aee5f7dae',ramdisk_id='',reservation_id='r-h27l0p4u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1614567902',owner_user_name='tempest-TestVolumeBootPattern-1614567902-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:47:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b576a51181b5425aa6e44a0eb0a22803',uuid=8e6ec5c0-7d7c-41e0-9f48-9e526008b00c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cdeb0376-cedf-4745-964f-897685f6d3de", "address": "fa:16:3e:d2:c2:41", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdeb0376-ce", "ovs_interfaceid": "cdeb0376-cedf-4745-964f-897685f6d3de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.699 233728 DEBUG nova.network.os_vif_util [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Converting VIF {"id": "cdeb0376-cedf-4745-964f-897685f6d3de", "address": "fa:16:3e:d2:c2:41", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdeb0376-ce", "ovs_interfaceid": "cdeb0376-cedf-4745-964f-897685f6d3de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.700 233728 DEBUG nova.network.os_vif_util [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:c2:41,bridge_name='br-int',has_traffic_filtering=True,id=cdeb0376-cedf-4745-964f-897685f6d3de,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcdeb0376-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.701 233728 DEBUG nova.objects.instance [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lazy-loading 'pci_devices' on Instance uuid 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.707 233728 INFO nova.compute.manager [None req-89cec421-9bef-4e5e-8a6e-aa88e2a588cc de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Took 1.18 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.708 233728 DEBUG oslo.service.loopingcall [None req-89cec421-9bef-4e5e-8a6e-aa88e2a588cc de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.709 233728 DEBUG nova.compute.manager [-] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.709 233728 DEBUG nova.network.neutron [-] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.716 233728 DEBUG nova.virt.libvirt.driver [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:48:04 np0005539552 nova_compute[233724]:  <uuid>8e6ec5c0-7d7c-41e0-9f48-9e526008b00c</uuid>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:  <name>instance-000000d9</name>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:48:04 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:      <nova:name>tempest-TestVolumeBootPattern-volume-backed-server-616866148</nova:name>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:48:04</nova:creationTime>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:48:04 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:        <nova:user uuid="b576a51181b5425aa6e44a0eb0a22803">tempest-TestVolumeBootPattern-1614567902-project-member</nova:user>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:        <nova:project uuid="b7ffcb23bac14ee49474df9aee5f7dae">tempest-TestVolumeBootPattern-1614567902</nova:project>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:        <nova:port uuid="cdeb0376-cedf-4745-964f-897685f6d3de">
Nov 29 03:48:04 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:      <entry name="serial">8e6ec5c0-7d7c-41e0-9f48-9e526008b00c</entry>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:      <entry name="uuid">8e6ec5c0-7d7c-41e0-9f48-9e526008b00c</entry>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:48:04 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/8e6ec5c0-7d7c-41e0-9f48-9e526008b00c_disk.config">
Nov 29 03:48:04 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:48:04 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:48:04 np0005539552 nova_compute[233724]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="volumes/volume-bf7e0110-334f-4129-a24d-43eb68f20833">
Nov 29 03:48:04 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:48:04 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:      <serial>bf7e0110-334f-4129-a24d-43eb68f20833</serial>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:48:04 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:d2:c2:41"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:      <target dev="tapcdeb0376-ce"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:48:04 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/8e6ec5c0-7d7c-41e0-9f48-9e526008b00c/console.log" append="off"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:48:04 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:48:04 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:48:04 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:48:04 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:48:04 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.718 233728 DEBUG nova.compute.manager [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Preparing to wait for external event network-vif-plugged-cdeb0376-cedf-4745-964f-897685f6d3de prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.718 233728 DEBUG oslo_concurrency.lockutils [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "8e6ec5c0-7d7c-41e0-9f48-9e526008b00c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.718 233728 DEBUG oslo_concurrency.lockutils [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "8e6ec5c0-7d7c-41e0-9f48-9e526008b00c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.719 233728 DEBUG oslo_concurrency.lockutils [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "8e6ec5c0-7d7c-41e0-9f48-9e526008b00c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.719 233728 DEBUG nova.virt.libvirt.vif [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:47:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-616866148',display_name='tempest-TestVolumeBootPattern-volume-backed-server-616866148',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-616866148',id=217,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNg2ulNScxkLomT3KlqY7WgNFKNe+0JBK36+ZAjL1G+wn5E1JZQQEO2iNWUZiebxVVxIzMjtJPgpywxuPu+VOiXPU2+U2uS7TbUlakOcF8J45n6B4bN1PXCw5j3OOEq1iw==',key_name='tempest-keypair-1098426915',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b7ffcb23bac14ee49474df9aee5f7dae',ramdisk_id='',reservation_id='r-h27l0p4u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1614567902',owner_user_name='tempest-TestVolumeBootPattern-1614567902-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:47:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b576a51181b5425aa6e44a0eb0a22803',uuid=8e6ec5c0-7d7c-41e0-9f48-9e526008b00c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cdeb0376-cedf-4745-964f-897685f6d3de", "address": "fa:16:3e:d2:c2:41", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdeb0376-ce", "ovs_interfaceid": "cdeb0376-cedf-4745-964f-897685f6d3de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.720 233728 DEBUG nova.network.os_vif_util [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Converting VIF {"id": "cdeb0376-cedf-4745-964f-897685f6d3de", "address": "fa:16:3e:d2:c2:41", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdeb0376-ce", "ovs_interfaceid": "cdeb0376-cedf-4745-964f-897685f6d3de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.720 233728 DEBUG nova.network.os_vif_util [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:c2:41,bridge_name='br-int',has_traffic_filtering=True,id=cdeb0376-cedf-4745-964f-897685f6d3de,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcdeb0376-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.721 233728 DEBUG os_vif [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:c2:41,bridge_name='br-int',has_traffic_filtering=True,id=cdeb0376-cedf-4745-964f-897685f6d3de,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcdeb0376-ce') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.722 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.722 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.723 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.727 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.727 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcdeb0376-ce, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.728 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcdeb0376-ce, col_values=(('external_ids', {'iface-id': 'cdeb0376-cedf-4745-964f-897685f6d3de', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d2:c2:41', 'vm-uuid': '8e6ec5c0-7d7c-41e0-9f48-9e526008b00c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:04 np0005539552 NetworkManager[48926]: <info>  [1764406084.7305] manager: (tapcdeb0376-ce): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/418)
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.730 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.735 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.736 233728 INFO os_vif [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:c2:41,bridge_name='br-int',has_traffic_filtering=True,id=cdeb0376-cedf-4745-964f-897685f6d3de,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcdeb0376-ce')#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.820 233728 DEBUG nova.virt.libvirt.driver [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.820 233728 DEBUG nova.virt.libvirt.driver [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.820 233728 DEBUG nova.virt.libvirt.driver [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] No VIF found with MAC fa:16:3e:d2:c2:41, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.821 233728 INFO nova.virt.libvirt.driver [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Using config drive#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.852 233728 DEBUG nova.storage.rbd_utils [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] rbd image 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.919 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.922 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.923 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:48:04 np0005539552 nova_compute[233724]: 2025-11-29 08:48:04.923 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:48:05 np0005539552 nova_compute[233724]: 2025-11-29 08:48:05.053 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 03:48:05 np0005539552 nova_compute[233724]: 2025-11-29 08:48:05.054 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Nov 29 03:48:05 np0005539552 nova_compute[233724]: 2025-11-29 08:48:05.054 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:48:05 np0005539552 nova_compute[233724]: 2025-11-29 08:48:05.058 233728 DEBUG nova.compute.manager [req-cc17d464-59dd-4476-b76c-31c793552ee8 req-0e46897f-df55-4e0c-a918-9529d506e1b0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Received event network-vif-unplugged-976a5775-a543-44c7-9edd-4356ec9e3e5f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:48:05 np0005539552 nova_compute[233724]: 2025-11-29 08:48:05.058 233728 DEBUG oslo_concurrency.lockutils [req-cc17d464-59dd-4476-b76c-31c793552ee8 req-0e46897f-df55-4e0c-a918-9529d506e1b0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "81f8ea10-4440-4354-acb8-7c0026e214f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:05 np0005539552 nova_compute[233724]: 2025-11-29 08:48:05.059 233728 DEBUG oslo_concurrency.lockutils [req-cc17d464-59dd-4476-b76c-31c793552ee8 req-0e46897f-df55-4e0c-a918-9529d506e1b0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "81f8ea10-4440-4354-acb8-7c0026e214f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:05 np0005539552 nova_compute[233724]: 2025-11-29 08:48:05.059 233728 DEBUG oslo_concurrency.lockutils [req-cc17d464-59dd-4476-b76c-31c793552ee8 req-0e46897f-df55-4e0c-a918-9529d506e1b0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "81f8ea10-4440-4354-acb8-7c0026e214f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:05 np0005539552 nova_compute[233724]: 2025-11-29 08:48:05.060 233728 DEBUG nova.compute.manager [req-cc17d464-59dd-4476-b76c-31c793552ee8 req-0e46897f-df55-4e0c-a918-9529d506e1b0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] No waiting events found dispatching network-vif-unplugged-976a5775-a543-44c7-9edd-4356ec9e3e5f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:48:05 np0005539552 nova_compute[233724]: 2025-11-29 08:48:05.060 233728 DEBUG nova.compute.manager [req-cc17d464-59dd-4476-b76c-31c793552ee8 req-0e46897f-df55-4e0c-a918-9529d506e1b0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Received event network-vif-unplugged-976a5775-a543-44c7-9edd-4356ec9e3e5f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:48:05 np0005539552 nova_compute[233724]: 2025-11-29 08:48:05.060 233728 DEBUG nova.compute.manager [req-cc17d464-59dd-4476-b76c-31c793552ee8 req-0e46897f-df55-4e0c-a918-9529d506e1b0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Received event network-vif-plugged-976a5775-a543-44c7-9edd-4356ec9e3e5f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:48:05 np0005539552 nova_compute[233724]: 2025-11-29 08:48:05.061 233728 DEBUG oslo_concurrency.lockutils [req-cc17d464-59dd-4476-b76c-31c793552ee8 req-0e46897f-df55-4e0c-a918-9529d506e1b0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "81f8ea10-4440-4354-acb8-7c0026e214f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:05 np0005539552 nova_compute[233724]: 2025-11-29 08:48:05.061 233728 DEBUG oslo_concurrency.lockutils [req-cc17d464-59dd-4476-b76c-31c793552ee8 req-0e46897f-df55-4e0c-a918-9529d506e1b0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "81f8ea10-4440-4354-acb8-7c0026e214f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:05 np0005539552 nova_compute[233724]: 2025-11-29 08:48:05.062 233728 DEBUG oslo_concurrency.lockutils [req-cc17d464-59dd-4476-b76c-31c793552ee8 req-0e46897f-df55-4e0c-a918-9529d506e1b0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "81f8ea10-4440-4354-acb8-7c0026e214f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:05 np0005539552 nova_compute[233724]: 2025-11-29 08:48:05.062 233728 DEBUG nova.compute.manager [req-cc17d464-59dd-4476-b76c-31c793552ee8 req-0e46897f-df55-4e0c-a918-9529d506e1b0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] No waiting events found dispatching network-vif-plugged-976a5775-a543-44c7-9edd-4356ec9e3e5f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:48:05 np0005539552 nova_compute[233724]: 2025-11-29 08:48:05.063 233728 WARNING nova.compute.manager [req-cc17d464-59dd-4476-b76c-31c793552ee8 req-0e46897f-df55-4e0c-a918-9529d506e1b0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Received unexpected event network-vif-plugged-976a5775-a543-44c7-9edd-4356ec9e3e5f for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:48:05 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e421 e421: 3 total, 3 up, 3 in
Nov 29 03:48:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:48:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:05.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:48:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:05.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:05 np0005539552 nova_compute[233724]: 2025-11-29 08:48:05.430 233728 INFO nova.virt.libvirt.driver [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Creating config drive at /var/lib/nova/instances/8e6ec5c0-7d7c-41e0-9f48-9e526008b00c/disk.config#033[00m
Nov 29 03:48:05 np0005539552 nova_compute[233724]: 2025-11-29 08:48:05.441 233728 DEBUG oslo_concurrency.processutils [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8e6ec5c0-7d7c-41e0-9f48-9e526008b00c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwc4aci0h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:05 np0005539552 nova_compute[233724]: 2025-11-29 08:48:05.480 233728 DEBUG nova.network.neutron [req-14b49e52-dd97-4162-9490-3d85b33ef3e9 req-4fc26d52-0e39-4b46-a203-7ce1d787ff11 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Updated VIF entry in instance network info cache for port 976a5775-a543-44c7-9edd-4356ec9e3e5f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:48:05 np0005539552 nova_compute[233724]: 2025-11-29 08:48:05.481 233728 DEBUG nova.network.neutron [req-14b49e52-dd97-4162-9490-3d85b33ef3e9 req-4fc26d52-0e39-4b46-a203-7ce1d787ff11 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Updating instance_info_cache with network_info: [{"id": "976a5775-a543-44c7-9edd-4356ec9e3e5f", "address": "fa:16:3e:1c:2e:f0", "network": {"id": "21923162-fe0c-4f50-88b5-19d1d684fafc", "bridge": "br-int", "label": "tempest-network-smoke--1457068308", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap976a5775-a5", "ovs_interfaceid": "976a5775-a543-44c7-9edd-4356ec9e3e5f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:48:05 np0005539552 nova_compute[233724]: 2025-11-29 08:48:05.595 233728 DEBUG oslo_concurrency.processutils [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8e6ec5c0-7d7c-41e0-9f48-9e526008b00c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwc4aci0h" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:48:05 np0005539552 nova_compute[233724]: 2025-11-29 08:48:05.651 233728 DEBUG nova.storage.rbd_utils [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] rbd image 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:48:05 np0005539552 nova_compute[233724]: 2025-11-29 08:48:05.658 233728 DEBUG oslo_concurrency.processutils [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8e6ec5c0-7d7c-41e0-9f48-9e526008b00c/disk.config 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:05 np0005539552 nova_compute[233724]: 2025-11-29 08:48:05.701 233728 DEBUG oslo_concurrency.lockutils [req-14b49e52-dd97-4162-9490-3d85b33ef3e9 req-4fc26d52-0e39-4b46-a203-7ce1d787ff11 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-81f8ea10-4440-4354-acb8-7c0026e214f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:48:05 np0005539552 nova_compute[233724]: 2025-11-29 08:48:05.728 233728 DEBUG nova.network.neutron [-] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:48:05 np0005539552 nova_compute[233724]: 2025-11-29 08:48:05.749 233728 INFO nova.compute.manager [-] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Took 1.04 seconds to deallocate network for instance.#033[00m
Nov 29 03:48:05 np0005539552 nova_compute[233724]: 2025-11-29 08:48:05.890 233728 DEBUG nova.network.neutron [req-c8cd8795-aec5-4f47-8dd8-47996b47eae8 req-3abf2551-a312-492a-81fe-82f104043b02 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Updated VIF entry in instance network info cache for port cdeb0376-cedf-4745-964f-897685f6d3de. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:48:05 np0005539552 nova_compute[233724]: 2025-11-29 08:48:05.891 233728 DEBUG nova.network.neutron [req-c8cd8795-aec5-4f47-8dd8-47996b47eae8 req-3abf2551-a312-492a-81fe-82f104043b02 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Updating instance_info_cache with network_info: [{"id": "cdeb0376-cedf-4745-964f-897685f6d3de", "address": "fa:16:3e:d2:c2:41", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdeb0376-ce", "ovs_interfaceid": "cdeb0376-cedf-4745-964f-897685f6d3de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:48:05 np0005539552 nova_compute[233724]: 2025-11-29 08:48:05.990 233728 DEBUG nova.compute.manager [req-98b74d8f-cc26-404d-93f4-5f36a86418d1 req-48b8dd79-4d2f-447b-b87c-58bb70878efd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Received event network-vif-deleted-976a5775-a543-44c7-9edd-4356ec9e3e5f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:48:05 np0005539552 nova_compute[233724]: 2025-11-29 08:48:05.994 233728 DEBUG oslo_concurrency.lockutils [None req-89cec421-9bef-4e5e-8a6e-aa88e2a588cc de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:05 np0005539552 nova_compute[233724]: 2025-11-29 08:48:05.996 233728 DEBUG oslo_concurrency.lockutils [None req-89cec421-9bef-4e5e-8a6e-aa88e2a588cc de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:06 np0005539552 nova_compute[233724]: 2025-11-29 08:48:05.999 233728 DEBUG oslo_concurrency.lockutils [req-c8cd8795-aec5-4f47-8dd8-47996b47eae8 req-3abf2551-a312-492a-81fe-82f104043b02 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-8e6ec5c0-7d7c-41e0-9f48-9e526008b00c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:48:06 np0005539552 nova_compute[233724]: 2025-11-29 08:48:06.073 233728 DEBUG oslo_concurrency.processutils [None req-89cec421-9bef-4e5e-8a6e-aa88e2a588cc de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:06 np0005539552 nova_compute[233724]: 2025-11-29 08:48:06.109 233728 DEBUG oslo_concurrency.processutils [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8e6ec5c0-7d7c-41e0-9f48-9e526008b00c/disk.config 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:48:06 np0005539552 nova_compute[233724]: 2025-11-29 08:48:06.111 233728 INFO nova.virt.libvirt.driver [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Deleting local config drive /var/lib/nova/instances/8e6ec5c0-7d7c-41e0-9f48-9e526008b00c/disk.config because it was imported into RBD.#033[00m
Nov 29 03:48:06 np0005539552 kernel: tapcdeb0376-ce: entered promiscuous mode
Nov 29 03:48:06 np0005539552 systemd-udevd[322606]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:48:06 np0005539552 NetworkManager[48926]: <info>  [1764406086.1723] manager: (tapcdeb0376-ce): new Tun device (/org/freedesktop/NetworkManager/Devices/419)
Nov 29 03:48:06 np0005539552 ovn_controller[133798]: 2025-11-29T08:48:06Z|00944|binding|INFO|Claiming lport cdeb0376-cedf-4745-964f-897685f6d3de for this chassis.
Nov 29 03:48:06 np0005539552 ovn_controller[133798]: 2025-11-29T08:48:06Z|00945|binding|INFO|cdeb0376-cedf-4745-964f-897685f6d3de: Claiming fa:16:3e:d2:c2:41 10.100.0.11
Nov 29 03:48:06 np0005539552 nova_compute[233724]: 2025-11-29 08:48:06.177 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:06 np0005539552 NetworkManager[48926]: <info>  [1764406086.1901] device (tapcdeb0376-ce): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:48:06 np0005539552 NetworkManager[48926]: <info>  [1764406086.1912] device (tapcdeb0376-ce): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:48:06 np0005539552 ovn_controller[133798]: 2025-11-29T08:48:06Z|00946|binding|INFO|Setting lport cdeb0376-cedf-4745-964f-897685f6d3de ovn-installed in OVS
Nov 29 03:48:06 np0005539552 nova_compute[233724]: 2025-11-29 08:48:06.197 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:06 np0005539552 nova_compute[233724]: 2025-11-29 08:48:06.201 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:06 np0005539552 systemd-machined[196379]: New machine qemu-95-instance-000000d9.
Nov 29 03:48:06 np0005539552 systemd[1]: Started Virtual Machine qemu-95-instance-000000d9.
Nov 29 03:48:06 np0005539552 ovn_controller[133798]: 2025-11-29T08:48:06Z|00947|binding|INFO|Setting lport cdeb0376-cedf-4745-964f-897685f6d3de up in Southbound
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:06.318 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:c2:41 10.100.0.11'], port_security=['fa:16:3e:d2:c2:41 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8e6ec5c0-7d7c-41e0-9f48-9e526008b00c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7ffcb23bac14ee49474df9aee5f7dae', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b482e439-6d52-4603-bdd4-81ab3cca06ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2432be5b-087b-4981-ab5e-ea6b1be12111, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=cdeb0376-cedf-4745-964f-897685f6d3de) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:06.319 143400 INFO neutron.agent.ovn.metadata.agent [-] Port cdeb0376-cedf-4745-964f-897685f6d3de in datapath 3d510715-dc99-4870-8ae9-ff599ae1a9c2 bound to our chassis#033[00m
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:06.321 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3d510715-dc99-4870-8ae9-ff599ae1a9c2#033[00m
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:06.337 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d4066434-e103-495c-bc61-b796ce7b0a1b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:06.338 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3d510715-d1 in ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:06.340 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3d510715-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:06.340 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[abb9cb29-6599-4dc8-b7de-d5cd13c60096]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:06.341 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[00c8622a-f227-408e-864e-4790f60a10fb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:06.356 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[43f2b537-a163-415f-b6af-ac48c8bd4217]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:06.386 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[18c53c25-3cbe-4181-84bd-249d5c0e0f3d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:06.413 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[886ecd0b-6317-48ac-b999-de1bd5472b23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:06 np0005539552 NetworkManager[48926]: <info>  [1764406086.4208] manager: (tap3d510715-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/420)
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:06.422 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[d4b51dda-0e7c-4abd-8e6e-5110dcb311ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:06.459 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[5068bb86-8591-47a6-80dd-a2cddcaee2a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:06.462 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[2354668a-c9a8-4639-b243-5af7bb5ea861]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:06 np0005539552 NetworkManager[48926]: <info>  [1764406086.4784] device (tap3d510715-d0): carrier: link connected
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:06.482 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[fe8264df-5d5d-4966-8b4c-3b698c920fde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:06.496 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[7d2d51d7-1b93-48f3-b9ce-95e03bd8a311]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3d510715-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:61:90'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 276], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 913216, 'reachable_time': 31613, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322863, 'error': None, 'target': 'ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:06.509 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f3034651-11eb-486c-996c-bdefcb6bb865]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe63:6190'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 913216, 'tstamp': 913216}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322864, 'error': None, 'target': 'ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:06 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:48:06 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1326377028' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:06.524 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b96143de-1187-4504-85fb-8ab26fb995ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3d510715-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:61:90'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 276], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 913216, 'reachable_time': 31613, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 322867, 'error': None, 'target': 'ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:06 np0005539552 nova_compute[233724]: 2025-11-29 08:48:06.532 233728 DEBUG oslo_concurrency.processutils [None req-89cec421-9bef-4e5e-8a6e-aa88e2a588cc de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:48:06 np0005539552 nova_compute[233724]: 2025-11-29 08:48:06.537 233728 DEBUG nova.compute.provider_tree [None req-89cec421-9bef-4e5e-8a6e-aa88e2a588cc de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:06.557 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[9d85628d-17b7-40b8-98c0-61b8e3af7821]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:06 np0005539552 nova_compute[233724]: 2025-11-29 08:48:06.591 233728 DEBUG nova.scheduler.client.report [None req-89cec421-9bef-4e5e-8a6e-aa88e2a588cc de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:06.625 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a19215a0-964f-4b53-afef-4c0439c0de44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:06.626 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d510715-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:06 np0005539552 nova_compute[233724]: 2025-11-29 08:48:06.626 233728 DEBUG oslo_concurrency.lockutils [None req-89cec421-9bef-4e5e-8a6e-aa88e2a588cc de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:06.627 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:06.627 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3d510715-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:06 np0005539552 NetworkManager[48926]: <info>  [1764406086.6297] manager: (tap3d510715-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/421)
Nov 29 03:48:06 np0005539552 nova_compute[233724]: 2025-11-29 08:48:06.630 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:06 np0005539552 kernel: tap3d510715-d0: entered promiscuous mode
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:06.634 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3d510715-d0, col_values=(('external_ids', {'iface-id': '9b7ae33f-c1c7-4a13-97b3-0ae6cb40a1db'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:06 np0005539552 nova_compute[233724]: 2025-11-29 08:48:06.635 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:06 np0005539552 ovn_controller[133798]: 2025-11-29T08:48:06Z|00948|binding|INFO|Releasing lport 9b7ae33f-c1c7-4a13-97b3-0ae6cb40a1db from this chassis (sb_readonly=0)
Nov 29 03:48:06 np0005539552 nova_compute[233724]: 2025-11-29 08:48:06.653 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:06.654 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3d510715-dc99-4870-8ae9-ff599ae1a9c2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3d510715-dc99-4870-8ae9-ff599ae1a9c2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:06.656 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a2aa3c91-0d15-43fe-9e3b-25ec3775a09c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:06.658 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-3d510715-dc99-4870-8ae9-ff599ae1a9c2
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/3d510715-dc99-4870-8ae9-ff599ae1a9c2.pid.haproxy
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 3d510715-dc99-4870-8ae9-ff599ae1a9c2
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:48:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:06.658 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'env', 'PROCESS_TAG=haproxy-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3d510715-dc99-4870-8ae9-ff599ae1a9c2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:48:06 np0005539552 nova_compute[233724]: 2025-11-29 08:48:06.758 233728 INFO nova.scheduler.client.report [None req-89cec421-9bef-4e5e-8a6e-aa88e2a588cc de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Deleted allocations for instance 81f8ea10-4440-4354-acb8-7c0026e214f2#033[00m
Nov 29 03:48:06 np0005539552 nova_compute[233724]: 2025-11-29 08:48:06.924 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764406086.923481, 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:48:06 np0005539552 nova_compute[233724]: 2025-11-29 08:48:06.925 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] VM Started (Lifecycle Event)#033[00m
Nov 29 03:48:06 np0005539552 nova_compute[233724]: 2025-11-29 08:48:06.986 233728 DEBUG oslo_concurrency.lockutils [None req-89cec421-9bef-4e5e-8a6e-aa88e2a588cc de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "81f8ea10-4440-4354-acb8-7c0026e214f2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.465s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:07 np0005539552 podman[322940]: 2025-11-29 08:48:07.006710341 +0000 UTC m=+0.033293497 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:48:07 np0005539552 nova_compute[233724]: 2025-11-29 08:48:07.268 233728 DEBUG nova.compute.manager [req-6201f7e8-cb65-459a-8e59-928e4034ffa2 req-bf9d44ba-5118-469c-bc25-2205887ba846 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Received event network-vif-plugged-cdeb0376-cedf-4745-964f-897685f6d3de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:48:07 np0005539552 nova_compute[233724]: 2025-11-29 08:48:07.269 233728 DEBUG oslo_concurrency.lockutils [req-6201f7e8-cb65-459a-8e59-928e4034ffa2 req-bf9d44ba-5118-469c-bc25-2205887ba846 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8e6ec5c0-7d7c-41e0-9f48-9e526008b00c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:07 np0005539552 nova_compute[233724]: 2025-11-29 08:48:07.270 233728 DEBUG oslo_concurrency.lockutils [req-6201f7e8-cb65-459a-8e59-928e4034ffa2 req-bf9d44ba-5118-469c-bc25-2205887ba846 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8e6ec5c0-7d7c-41e0-9f48-9e526008b00c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:07 np0005539552 nova_compute[233724]: 2025-11-29 08:48:07.271 233728 DEBUG oslo_concurrency.lockutils [req-6201f7e8-cb65-459a-8e59-928e4034ffa2 req-bf9d44ba-5118-469c-bc25-2205887ba846 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8e6ec5c0-7d7c-41e0-9f48-9e526008b00c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:07 np0005539552 nova_compute[233724]: 2025-11-29 08:48:07.271 233728 DEBUG nova.compute.manager [req-6201f7e8-cb65-459a-8e59-928e4034ffa2 req-bf9d44ba-5118-469c-bc25-2205887ba846 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Processing event network-vif-plugged-cdeb0376-cedf-4745-964f-897685f6d3de _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:48:07 np0005539552 nova_compute[233724]: 2025-11-29 08:48:07.272 233728 DEBUG nova.compute.manager [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:48:07 np0005539552 nova_compute[233724]: 2025-11-29 08:48:07.281 233728 DEBUG nova.virt.libvirt.driver [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:48:07 np0005539552 nova_compute[233724]: 2025-11-29 08:48:07.287 233728 INFO nova.virt.libvirt.driver [-] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Instance spawned successfully.#033[00m
Nov 29 03:48:07 np0005539552 nova_compute[233724]: 2025-11-29 08:48:07.287 233728 DEBUG nova.virt.libvirt.driver [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:48:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:07.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:07 np0005539552 nova_compute[233724]: 2025-11-29 08:48:07.343 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:48:07 np0005539552 nova_compute[233724]: 2025-11-29 08:48:07.349 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:48:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:07.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:07 np0005539552 nova_compute[233724]: 2025-11-29 08:48:07.454 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:48:07 np0005539552 nova_compute[233724]: 2025-11-29 08:48:07.455 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764406086.9245558, 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:48:07 np0005539552 nova_compute[233724]: 2025-11-29 08:48:07.455 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:48:07 np0005539552 podman[322940]: 2025-11-29 08:48:07.461368069 +0000 UTC m=+0.487951195 container create ab74ab9efdfc8067766b0f9dd4e1ea89c0b8e476ea98c79d00f72fe66f3a7517 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS)
Nov 29 03:48:07 np0005539552 nova_compute[233724]: 2025-11-29 08:48:07.463 233728 DEBUG nova.virt.libvirt.driver [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:48:07 np0005539552 nova_compute[233724]: 2025-11-29 08:48:07.463 233728 DEBUG nova.virt.libvirt.driver [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:48:07 np0005539552 nova_compute[233724]: 2025-11-29 08:48:07.464 233728 DEBUG nova.virt.libvirt.driver [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:48:07 np0005539552 nova_compute[233724]: 2025-11-29 08:48:07.465 233728 DEBUG nova.virt.libvirt.driver [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:48:07 np0005539552 nova_compute[233724]: 2025-11-29 08:48:07.465 233728 DEBUG nova.virt.libvirt.driver [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:48:07 np0005539552 nova_compute[233724]: 2025-11-29 08:48:07.466 233728 DEBUG nova.virt.libvirt.driver [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:48:07 np0005539552 nova_compute[233724]: 2025-11-29 08:48:07.528 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:07 np0005539552 nova_compute[233724]: 2025-11-29 08:48:07.618 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:48:07 np0005539552 nova_compute[233724]: 2025-11-29 08:48:07.623 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764406087.2784305, 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:48:07 np0005539552 nova_compute[233724]: 2025-11-29 08:48:07.624 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:48:07 np0005539552 systemd[1]: Started libpod-conmon-ab74ab9efdfc8067766b0f9dd4e1ea89c0b8e476ea98c79d00f72fe66f3a7517.scope.
Nov 29 03:48:07 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:48:07 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/853f6491efbfa599397c83579570cb7247e8205e358b52dbdd102880dd950411/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:48:07 np0005539552 podman[322940]: 2025-11-29 08:48:07.729479196 +0000 UTC m=+0.756062312 container init ab74ab9efdfc8067766b0f9dd4e1ea89c0b8e476ea98c79d00f72fe66f3a7517 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:48:07 np0005539552 podman[322940]: 2025-11-29 08:48:07.73517381 +0000 UTC m=+0.761756906 container start ab74ab9efdfc8067766b0f9dd4e1ea89c0b8e476ea98c79d00f72fe66f3a7517 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 03:48:07 np0005539552 nova_compute[233724]: 2025-11-29 08:48:07.758 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:48:07 np0005539552 nova_compute[233724]: 2025-11-29 08:48:07.763 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:48:07 np0005539552 nova_compute[233724]: 2025-11-29 08:48:07.769 233728 INFO nova.compute.manager [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Took 6.54 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:48:07 np0005539552 nova_compute[233724]: 2025-11-29 08:48:07.770 233728 DEBUG nova.compute.manager [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:48:07 np0005539552 nova_compute[233724]: 2025-11-29 08:48:07.786 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:48:07 np0005539552 neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2[322956]: [NOTICE]   (322960) : New worker (322962) forked
Nov 29 03:48:07 np0005539552 neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2[322956]: [NOTICE]   (322960) : Loading success.
Nov 29 03:48:07 np0005539552 nova_compute[233724]: 2025-11-29 08:48:07.988 233728 INFO nova.compute.manager [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Took 9.10 seconds to build instance.#033[00m
Nov 29 03:48:08 np0005539552 nova_compute[233724]: 2025-11-29 08:48:08.013 233728 DEBUG oslo_concurrency.lockutils [None req-6ea5850d-335e-409c-b61d-73554cbe08b6 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "8e6ec5c0-7d7c-41e0-9f48-9e526008b00c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.195s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:09.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:09.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:09 np0005539552 nova_compute[233724]: 2025-11-29 08:48:09.730 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:09 np0005539552 nova_compute[233724]: 2025-11-29 08:48:09.802 233728 DEBUG nova.compute.manager [req-67bc295a-5ab9-4f1a-9126-9d3413f96e17 req-2202bbf6-d33c-4a3e-bbba-f5aa7a1f9a14 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Received event network-vif-plugged-cdeb0376-cedf-4745-964f-897685f6d3de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:48:09 np0005539552 nova_compute[233724]: 2025-11-29 08:48:09.803 233728 DEBUG oslo_concurrency.lockutils [req-67bc295a-5ab9-4f1a-9126-9d3413f96e17 req-2202bbf6-d33c-4a3e-bbba-f5aa7a1f9a14 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8e6ec5c0-7d7c-41e0-9f48-9e526008b00c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:09 np0005539552 nova_compute[233724]: 2025-11-29 08:48:09.803 233728 DEBUG oslo_concurrency.lockutils [req-67bc295a-5ab9-4f1a-9126-9d3413f96e17 req-2202bbf6-d33c-4a3e-bbba-f5aa7a1f9a14 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8e6ec5c0-7d7c-41e0-9f48-9e526008b00c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:09 np0005539552 nova_compute[233724]: 2025-11-29 08:48:09.803 233728 DEBUG oslo_concurrency.lockutils [req-67bc295a-5ab9-4f1a-9126-9d3413f96e17 req-2202bbf6-d33c-4a3e-bbba-f5aa7a1f9a14 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8e6ec5c0-7d7c-41e0-9f48-9e526008b00c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:09 np0005539552 nova_compute[233724]: 2025-11-29 08:48:09.804 233728 DEBUG nova.compute.manager [req-67bc295a-5ab9-4f1a-9126-9d3413f96e17 req-2202bbf6-d33c-4a3e-bbba-f5aa7a1f9a14 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] No waiting events found dispatching network-vif-plugged-cdeb0376-cedf-4745-964f-897685f6d3de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:48:09 np0005539552 nova_compute[233724]: 2025-11-29 08:48:09.804 233728 WARNING nova.compute.manager [req-67bc295a-5ab9-4f1a-9126-9d3413f96e17 req-2202bbf6-d33c-4a3e-bbba-f5aa7a1f9a14 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Received unexpected event network-vif-plugged-cdeb0376-cedf-4745-964f-897685f6d3de for instance with vm_state active and task_state None.#033[00m
Nov 29 03:48:10 np0005539552 ovn_controller[133798]: 2025-11-29T08:48:10Z|00949|binding|INFO|Releasing lport 9b7ae33f-c1c7-4a13-97b3-0ae6cb40a1db from this chassis (sb_readonly=0)
Nov 29 03:48:10 np0005539552 nova_compute[233724]: 2025-11-29 08:48:10.502 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:10 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e422 e422: 3 total, 3 up, 3 in
Nov 29 03:48:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:11.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:11.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:12 np0005539552 nova_compute[233724]: 2025-11-29 08:48:12.041 233728 DEBUG nova.compute.manager [req-969d520e-ea4b-42f8-832c-2019b025048f req-12718549-50b9-42c3-9e37-9e06ee874967 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Received event network-changed-cdeb0376-cedf-4745-964f-897685f6d3de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:48:12 np0005539552 nova_compute[233724]: 2025-11-29 08:48:12.041 233728 DEBUG nova.compute.manager [req-969d520e-ea4b-42f8-832c-2019b025048f req-12718549-50b9-42c3-9e37-9e06ee874967 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Refreshing instance network info cache due to event network-changed-cdeb0376-cedf-4745-964f-897685f6d3de. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:48:12 np0005539552 nova_compute[233724]: 2025-11-29 08:48:12.041 233728 DEBUG oslo_concurrency.lockutils [req-969d520e-ea4b-42f8-832c-2019b025048f req-12718549-50b9-42c3-9e37-9e06ee874967 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-8e6ec5c0-7d7c-41e0-9f48-9e526008b00c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:48:12 np0005539552 nova_compute[233724]: 2025-11-29 08:48:12.041 233728 DEBUG oslo_concurrency.lockutils [req-969d520e-ea4b-42f8-832c-2019b025048f req-12718549-50b9-42c3-9e37-9e06ee874967 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-8e6ec5c0-7d7c-41e0-9f48-9e526008b00c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:48:12 np0005539552 nova_compute[233724]: 2025-11-29 08:48:12.042 233728 DEBUG nova.network.neutron [req-969d520e-ea4b-42f8-832c-2019b025048f req-12718549-50b9-42c3-9e37-9e06ee874967 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Refreshing network info cache for port cdeb0376-cedf-4745-964f-897685f6d3de _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:48:12 np0005539552 nova_compute[233724]: 2025-11-29 08:48:12.530 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:13.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:13.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:14 np0005539552 nova_compute[233724]: 2025-11-29 08:48:14.560 233728 DEBUG nova.network.neutron [req-969d520e-ea4b-42f8-832c-2019b025048f req-12718549-50b9-42c3-9e37-9e06ee874967 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Updated VIF entry in instance network info cache for port cdeb0376-cedf-4745-964f-897685f6d3de. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:48:14 np0005539552 nova_compute[233724]: 2025-11-29 08:48:14.560 233728 DEBUG nova.network.neutron [req-969d520e-ea4b-42f8-832c-2019b025048f req-12718549-50b9-42c3-9e37-9e06ee874967 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Updating instance_info_cache with network_info: [{"id": "cdeb0376-cedf-4745-964f-897685f6d3de", "address": "fa:16:3e:d2:c2:41", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdeb0376-ce", "ovs_interfaceid": "cdeb0376-cedf-4745-964f-897685f6d3de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:48:14 np0005539552 nova_compute[233724]: 2025-11-29 08:48:14.601 233728 DEBUG oslo_concurrency.lockutils [req-969d520e-ea4b-42f8-832c-2019b025048f req-12718549-50b9-42c3-9e37-9e06ee874967 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-8e6ec5c0-7d7c-41e0-9f48-9e526008b00c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:48:14 np0005539552 nova_compute[233724]: 2025-11-29 08:48:14.733 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:15.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:15.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:17.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:17.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:17 np0005539552 nova_compute[233724]: 2025-11-29 08:48:17.531 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:18 np0005539552 podman[323027]: 2025-11-29 08:48:18.004227275 +0000 UTC m=+0.078363150 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:48:18 np0005539552 podman[323026]: 2025-11-29 08:48:18.005562131 +0000 UTC m=+0.075990016 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 29 03:48:18 np0005539552 podman[323028]: 2025-11-29 08:48:18.036564146 +0000 UTC m=+0.091024031 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 29 03:48:18 np0005539552 nova_compute[233724]: 2025-11-29 08:48:18.766 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764406083.7645397, 81f8ea10-4440-4354-acb8-7c0026e214f2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:48:18 np0005539552 nova_compute[233724]: 2025-11-29 08:48:18.766 233728 INFO nova.compute.manager [-] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:48:18 np0005539552 nova_compute[233724]: 2025-11-29 08:48:18.801 233728 DEBUG nova.compute.manager [None req-e2937f17-400b-4d43-87c5-51df4da5d7ee - - - - - -] [instance: 81f8ea10-4440-4354-acb8-7c0026e214f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:48:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:19.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:19.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:19 np0005539552 nova_compute[233724]: 2025-11-29 08:48:19.737 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:20.655 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:20.655 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:20.657 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:21 np0005539552 ovn_controller[133798]: 2025-11-29T08:48:21Z|00129|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d2:c2:41 10.100.0.11
Nov 29 03:48:21 np0005539552 ovn_controller[133798]: 2025-11-29T08:48:21Z|00130|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d2:c2:41 10.100.0.11
Nov 29 03:48:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:21.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:21.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:21 np0005539552 nova_compute[233724]: 2025-11-29 08:48:21.687 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:22 np0005539552 nova_compute[233724]: 2025-11-29 08:48:22.534 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:23.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:23.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:24 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e423 e423: 3 total, 3 up, 3 in
Nov 29 03:48:24 np0005539552 nova_compute[233724]: 2025-11-29 08:48:24.739 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:25.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:25.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:27.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:27 np0005539552 nova_compute[233724]: 2025-11-29 08:48:27.352 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:27.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:27 np0005539552 nova_compute[233724]: 2025-11-29 08:48:27.537 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:29.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:29.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:29 np0005539552 nova_compute[233724]: 2025-11-29 08:48:29.743 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:30 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e424 e424: 3 total, 3 up, 3 in
Nov 29 03:48:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:31.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:31.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:31 np0005539552 nova_compute[233724]: 2025-11-29 08:48:31.481 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:32 np0005539552 nova_compute[233724]: 2025-11-29 08:48:32.540 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:33.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:33.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:34 np0005539552 nova_compute[233724]: 2025-11-29 08:48:34.746 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:35.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:35.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:37.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:37.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:37 np0005539552 nova_compute[233724]: 2025-11-29 08:48:37.544 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:39 np0005539552 nova_compute[233724]: 2025-11-29 08:48:39.146 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:39.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:39.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:39 np0005539552 nova_compute[233724]: 2025-11-29 08:48:39.748 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:41.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:41.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:42.147 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=65, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=64) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:48:42 np0005539552 nova_compute[233724]: 2025-11-29 08:48:42.148 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:42 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:42.149 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:48:42 np0005539552 nova_compute[233724]: 2025-11-29 08:48:42.545 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:43.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:43.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:44 np0005539552 nova_compute[233724]: 2025-11-29 08:48:44.751 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:44 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e425 e425: 3 total, 3 up, 3 in
Nov 29 03:48:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:45.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:45.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:46 np0005539552 nova_compute[233724]: 2025-11-29 08:48:46.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:48:46 np0005539552 nova_compute[233724]: 2025-11-29 08:48:46.947 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:46 np0005539552 nova_compute[233724]: 2025-11-29 08:48:46.948 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:46 np0005539552 nova_compute[233724]: 2025-11-29 08:48:46.948 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:46 np0005539552 nova_compute[233724]: 2025-11-29 08:48:46.949 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:48:46 np0005539552 nova_compute[233724]: 2025-11-29 08:48:46.949 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:47.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:48:47 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3089685775' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:48:47 np0005539552 nova_compute[233724]: 2025-11-29 08:48:47.409 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:48:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:47.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:47 np0005539552 nova_compute[233724]: 2025-11-29 08:48:47.490 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-000000d9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:48:47 np0005539552 nova_compute[233724]: 2025-11-29 08:48:47.491 233728 DEBUG nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] skipping disk for instance-000000d9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:48:47 np0005539552 nova_compute[233724]: 2025-11-29 08:48:47.547 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:47 np0005539552 nova_compute[233724]: 2025-11-29 08:48:47.712 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:48:47 np0005539552 nova_compute[233724]: 2025-11-29 08:48:47.714 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3893MB free_disk=20.97724151611328GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:48:47 np0005539552 nova_compute[233724]: 2025-11-29 08:48:47.714 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:47 np0005539552 nova_compute[233724]: 2025-11-29 08:48:47.714 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:47 np0005539552 nova_compute[233724]: 2025-11-29 08:48:47.818 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Instance 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:48:47 np0005539552 nova_compute[233724]: 2025-11-29 08:48:47.819 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:48:47 np0005539552 nova_compute[233724]: 2025-11-29 08:48:47.819 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:48:47 np0005539552 nova_compute[233724]: 2025-11-29 08:48:47.851 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:48 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:48:48 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:48:48 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:48:48 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:48:48 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:48:48 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:48:48 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 03:48:48 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 03:48:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:48:48 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3127193095' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:48:48 np0005539552 nova_compute[233724]: 2025-11-29 08:48:48.285 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:48:48 np0005539552 nova_compute[233724]: 2025-11-29 08:48:48.295 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:48:48 np0005539552 nova_compute[233724]: 2025-11-29 08:48:48.318 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:48:48 np0005539552 nova_compute[233724]: 2025-11-29 08:48:48.345 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:48:48 np0005539552 nova_compute[233724]: 2025-11-29 08:48:48.346 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:48 np0005539552 podman[323457]: 2025-11-29 08:48:48.79191521 +0000 UTC m=+0.119438776 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 03:48:48 np0005539552 podman[323456]: 2025-11-29 08:48:48.7930354 +0000 UTC m=+0.121690076 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 03:48:48 np0005539552 podman[323458]: 2025-11-29 08:48:48.838971517 +0000 UTC m=+0.164527010 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:48:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:49 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:48:49 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:48:49 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:48:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:49.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:49.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:49 np0005539552 nova_compute[233724]: 2025-11-29 08:48:49.755 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:50 np0005539552 nova_compute[233724]: 2025-11-29 08:48:50.634 233728 DEBUG oslo_concurrency.lockutils [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "4ae60306-1870-41f5-b85a-a161040086d0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:50 np0005539552 nova_compute[233724]: 2025-11-29 08:48:50.635 233728 DEBUG oslo_concurrency.lockutils [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "4ae60306-1870-41f5-b85a-a161040086d0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:50 np0005539552 nova_compute[233724]: 2025-11-29 08:48:50.652 233728 DEBUG nova.compute.manager [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:48:50 np0005539552 nova_compute[233724]: 2025-11-29 08:48:50.718 233728 DEBUG oslo_concurrency.lockutils [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:50 np0005539552 nova_compute[233724]: 2025-11-29 08:48:50.719 233728 DEBUG oslo_concurrency.lockutils [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:50 np0005539552 nova_compute[233724]: 2025-11-29 08:48:50.730 233728 DEBUG nova.virt.hardware [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:48:50 np0005539552 nova_compute[233724]: 2025-11-29 08:48:50.731 233728 INFO nova.compute.claims [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:48:50 np0005539552 nova_compute[233724]: 2025-11-29 08:48:50.894 233728 DEBUG oslo_concurrency.processutils [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:51.151 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '65'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:51.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:51 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:48:51 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4187156660' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:48:51 np0005539552 nova_compute[233724]: 2025-11-29 08:48:51.389 233728 DEBUG oslo_concurrency.processutils [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:48:51 np0005539552 nova_compute[233724]: 2025-11-29 08:48:51.397 233728 DEBUG nova.compute.provider_tree [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:48:51 np0005539552 nova_compute[233724]: 2025-11-29 08:48:51.409 233728 DEBUG nova.scheduler.client.report [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:48:51 np0005539552 nova_compute[233724]: 2025-11-29 08:48:51.431 233728 DEBUG oslo_concurrency.lockutils [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:51 np0005539552 nova_compute[233724]: 2025-11-29 08:48:51.432 233728 DEBUG nova.compute.manager [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:48:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:51.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:51 np0005539552 nova_compute[233724]: 2025-11-29 08:48:51.476 233728 INFO nova.virt.libvirt.driver [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:48:51 np0005539552 nova_compute[233724]: 2025-11-29 08:48:51.478 233728 DEBUG nova.compute.manager [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:48:51 np0005539552 nova_compute[233724]: 2025-11-29 08:48:51.478 233728 DEBUG nova.network.neutron [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:48:51 np0005539552 nova_compute[233724]: 2025-11-29 08:48:51.495 233728 DEBUG nova.compute.manager [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:48:51 np0005539552 nova_compute[233724]: 2025-11-29 08:48:51.533 233728 INFO nova.virt.block_device [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Booting with volume snapshot efcdc2f4-6579-4824-ad08-d6752dc23fc5 at /dev/vda#033[00m
Nov 29 03:48:51 np0005539552 nova_compute[233724]: 2025-11-29 08:48:51.750 233728 DEBUG nova.policy [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b576a51181b5425aa6e44a0eb0a22803', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b7ffcb23bac14ee49474df9aee5f7dae', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:48:52 np0005539552 nova_compute[233724]: 2025-11-29 08:48:52.551 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:53.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:53.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:54 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:48:54 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:48:54 np0005539552 nova_compute[233724]: 2025-11-29 08:48:54.642 233728 DEBUG nova.network.neutron [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Successfully created port: e338bd74-7e99-4c3b-b207-58250b25c38f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:48:54 np0005539552 nova_compute[233724]: 2025-11-29 08:48:54.759 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:55.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:55.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:55 np0005539552 nova_compute[233724]: 2025-11-29 08:48:55.912 233728 DEBUG os_brick.utils [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:48:55 np0005539552 nova_compute[233724]: 2025-11-29 08:48:55.913 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:55 np0005539552 nova_compute[233724]: 2025-11-29 08:48:55.931 243418 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:48:55 np0005539552 nova_compute[233724]: 2025-11-29 08:48:55.932 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[bc6303be-6353-46fa-95d6-bd6f9624cfff]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:55 np0005539552 nova_compute[233724]: 2025-11-29 08:48:55.933 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:55 np0005539552 nova_compute[233724]: 2025-11-29 08:48:55.947 243418 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:48:55 np0005539552 nova_compute[233724]: 2025-11-29 08:48:55.948 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[7afb28d5-adf1-49cc-be79-95a836c5a8af]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0e73df9328', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:55 np0005539552 nova_compute[233724]: 2025-11-29 08:48:55.951 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:55 np0005539552 nova_compute[233724]: 2025-11-29 08:48:55.965 243418 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:48:55 np0005539552 nova_compute[233724]: 2025-11-29 08:48:55.965 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[07234721-35fb-44ff-96c1-97caf475d604]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:55 np0005539552 nova_compute[233724]: 2025-11-29 08:48:55.966 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[2418e461-9155-44c7-87d6-4fad808792cb]: (4, '6fbde64a-d978-4f1a-a29d-a77e1f5a1987') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:55 np0005539552 nova_compute[233724]: 2025-11-29 08:48:55.967 233728 DEBUG oslo_concurrency.processutils [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:56 np0005539552 nova_compute[233724]: 2025-11-29 08:48:56.018 233728 DEBUG oslo_concurrency.processutils [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CMD "nvme version" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:48:56 np0005539552 nova_compute[233724]: 2025-11-29 08:48:56.022 233728 DEBUG os_brick.initiator.connectors.lightos [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:48:56 np0005539552 nova_compute[233724]: 2025-11-29 08:48:56.023 233728 DEBUG os_brick.initiator.connectors.lightos [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:48:56 np0005539552 nova_compute[233724]: 2025-11-29 08:48:56.023 233728 DEBUG os_brick.initiator.connectors.lightos [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:48:56 np0005539552 nova_compute[233724]: 2025-11-29 08:48:56.024 233728 DEBUG os_brick.utils [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] <== get_connector_properties: return (111ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0e73df9328', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '6fbde64a-d978-4f1a-a29d-a77e1f5a1987', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:48:56 np0005539552 nova_compute[233724]: 2025-11-29 08:48:56.025 233728 DEBUG nova.virt.block_device [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Updating existing volume attachment record: d9f8bb1c-749b-4658-aafc-a7734a467182 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:48:56 np0005539552 nova_compute[233724]: 2025-11-29 08:48:56.157 233728 DEBUG nova.network.neutron [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Successfully updated port: e338bd74-7e99-4c3b-b207-58250b25c38f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:48:56 np0005539552 nova_compute[233724]: 2025-11-29 08:48:56.184 233728 DEBUG oslo_concurrency.lockutils [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "refresh_cache-4ae60306-1870-41f5-b85a-a161040086d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:48:56 np0005539552 nova_compute[233724]: 2025-11-29 08:48:56.185 233728 DEBUG oslo_concurrency.lockutils [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquired lock "refresh_cache-4ae60306-1870-41f5-b85a-a161040086d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:48:56 np0005539552 nova_compute[233724]: 2025-11-29 08:48:56.185 233728 DEBUG nova.network.neutron [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:48:56 np0005539552 nova_compute[233724]: 2025-11-29 08:48:56.241 233728 DEBUG nova.compute.manager [req-2323f61b-effb-4e75-baa7-1258f1d71897 req-84e12fa2-d661-4db5-9959-19d76a311cdc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Received event network-changed-e338bd74-7e99-4c3b-b207-58250b25c38f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:48:56 np0005539552 nova_compute[233724]: 2025-11-29 08:48:56.242 233728 DEBUG nova.compute.manager [req-2323f61b-effb-4e75-baa7-1258f1d71897 req-84e12fa2-d661-4db5-9959-19d76a311cdc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Refreshing instance network info cache due to event network-changed-e338bd74-7e99-4c3b-b207-58250b25c38f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:48:56 np0005539552 nova_compute[233724]: 2025-11-29 08:48:56.242 233728 DEBUG oslo_concurrency.lockutils [req-2323f61b-effb-4e75-baa7-1258f1d71897 req-84e12fa2-d661-4db5-9959-19d76a311cdc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-4ae60306-1870-41f5-b85a-a161040086d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:48:56 np0005539552 nova_compute[233724]: 2025-11-29 08:48:56.312 233728 DEBUG nova.network.neutron [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:48:57 np0005539552 nova_compute[233724]: 2025-11-29 08:48:57.161 233728 DEBUG nova.compute.manager [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:48:57 np0005539552 nova_compute[233724]: 2025-11-29 08:48:57.164 233728 DEBUG nova.virt.libvirt.driver [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:48:57 np0005539552 nova_compute[233724]: 2025-11-29 08:48:57.164 233728 INFO nova.virt.libvirt.driver [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Creating image(s)#033[00m
Nov 29 03:48:57 np0005539552 nova_compute[233724]: 2025-11-29 08:48:57.165 233728 DEBUG nova.virt.libvirt.driver [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:48:57 np0005539552 nova_compute[233724]: 2025-11-29 08:48:57.165 233728 DEBUG nova.virt.libvirt.driver [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Ensure instance console log exists: /var/lib/nova/instances/4ae60306-1870-41f5-b85a-a161040086d0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:48:57 np0005539552 nova_compute[233724]: 2025-11-29 08:48:57.165 233728 DEBUG oslo_concurrency.lockutils [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:57 np0005539552 nova_compute[233724]: 2025-11-29 08:48:57.166 233728 DEBUG oslo_concurrency.lockutils [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:57 np0005539552 nova_compute[233724]: 2025-11-29 08:48:57.166 233728 DEBUG oslo_concurrency.lockutils [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:57 np0005539552 nova_compute[233724]: 2025-11-29 08:48:57.346 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:48:57 np0005539552 nova_compute[233724]: 2025-11-29 08:48:57.347 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:48:57 np0005539552 nova_compute[233724]: 2025-11-29 08:48:57.347 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:48:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:57.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:57.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:57 np0005539552 nova_compute[233724]: 2025-11-29 08:48:57.537 233728 DEBUG nova.network.neutron [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Updating instance_info_cache with network_info: [{"id": "e338bd74-7e99-4c3b-b207-58250b25c38f", "address": "fa:16:3e:86:33:b7", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape338bd74-7e", "ovs_interfaceid": "e338bd74-7e99-4c3b-b207-58250b25c38f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:48:57 np0005539552 nova_compute[233724]: 2025-11-29 08:48:57.553 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:57 np0005539552 nova_compute[233724]: 2025-11-29 08:48:57.558 233728 DEBUG oslo_concurrency.lockutils [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Releasing lock "refresh_cache-4ae60306-1870-41f5-b85a-a161040086d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:48:57 np0005539552 nova_compute[233724]: 2025-11-29 08:48:57.558 233728 DEBUG nova.compute.manager [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Instance network_info: |[{"id": "e338bd74-7e99-4c3b-b207-58250b25c38f", "address": "fa:16:3e:86:33:b7", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape338bd74-7e", "ovs_interfaceid": "e338bd74-7e99-4c3b-b207-58250b25c38f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:48:57 np0005539552 nova_compute[233724]: 2025-11-29 08:48:57.559 233728 DEBUG oslo_concurrency.lockutils [req-2323f61b-effb-4e75-baa7-1258f1d71897 req-84e12fa2-d661-4db5-9959-19d76a311cdc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-4ae60306-1870-41f5-b85a-a161040086d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:48:57 np0005539552 nova_compute[233724]: 2025-11-29 08:48:57.559 233728 DEBUG nova.network.neutron [req-2323f61b-effb-4e75-baa7-1258f1d71897 req-84e12fa2-d661-4db5-9959-19d76a311cdc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Refreshing network info cache for port e338bd74-7e99-4c3b-b207-58250b25c38f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:48:57 np0005539552 nova_compute[233724]: 2025-11-29 08:48:57.565 233728 DEBUG nova.virt.libvirt.driver [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Start _get_guest_xml network_info=[{"id": "e338bd74-7e99-4c3b-b207-58250b25c38f", "address": "fa:16:3e:86:33:b7", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape338bd74-7e", "ovs_interfaceid": "e338bd74-7e99-4c3b-b207-58250b25c38f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='d41d8cd98f00b204e9800998ecf8427e',container_format='bare',created_at=2025-11-29T08:48:44Z,direct_url=<?>,disk_format='qcow2',id=ad73a146-3d66-454f-a206-ab6de9891225,min_disk=1,min_ram=0,name='tempest-TestVolumeBootPatternsnapshot-481397479',owner='b7ffcb23bac14ee49474df9aee5f7dae',properties=ImageMetaProps,protected=<?>,size=0,status='active',tags=<?>,updated_at=2025-11-29T08:48:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-325ae6fa-18bb-47ad-9db3-2501a8dfae90', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '325ae6fa-18bb-47ad-9db3-2501a8dfae90', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '4ae60306-1870-41f5-b85a-a161040086d0', 'attached_at': '', 'detached_at': '', 'volume_id': '325ae6fa-18bb-47ad-9db3-2501a8dfae90', 'serial': '325ae6fa-18bb-47ad-9db3-2501a8dfae90'}, 'delete_on_termination': True, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'attachment_id': 'd9f8bb1c-749b-4658-aafc-a7734a467182', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:48:57 np0005539552 nova_compute[233724]: 2025-11-29 08:48:57.572 233728 WARNING nova.virt.libvirt.driver [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:48:57 np0005539552 nova_compute[233724]: 2025-11-29 08:48:57.582 233728 DEBUG nova.virt.libvirt.host [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:48:57 np0005539552 nova_compute[233724]: 2025-11-29 08:48:57.584 233728 DEBUG nova.virt.libvirt.host [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:48:57 np0005539552 nova_compute[233724]: 2025-11-29 08:48:57.589 233728 DEBUG nova.virt.libvirt.host [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:48:57 np0005539552 nova_compute[233724]: 2025-11-29 08:48:57.590 233728 DEBUG nova.virt.libvirt.host [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:48:57 np0005539552 nova_compute[233724]: 2025-11-29 08:48:57.591 233728 DEBUG nova.virt.libvirt.driver [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:48:57 np0005539552 nova_compute[233724]: 2025-11-29 08:48:57.592 233728 DEBUG nova.virt.hardware [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='d41d8cd98f00b204e9800998ecf8427e',container_format='bare',created_at=2025-11-29T08:48:44Z,direct_url=<?>,disk_format='qcow2',id=ad73a146-3d66-454f-a206-ab6de9891225,min_disk=1,min_ram=0,name='tempest-TestVolumeBootPatternsnapshot-481397479',owner='b7ffcb23bac14ee49474df9aee5f7dae',properties=ImageMetaProps,protected=<?>,size=0,status='active',tags=<?>,updated_at=2025-11-29T08:48:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:48:57 np0005539552 nova_compute[233724]: 2025-11-29 08:48:57.592 233728 DEBUG nova.virt.hardware [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:48:57 np0005539552 nova_compute[233724]: 2025-11-29 08:48:57.593 233728 DEBUG nova.virt.hardware [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:48:57 np0005539552 nova_compute[233724]: 2025-11-29 08:48:57.593 233728 DEBUG nova.virt.hardware [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:48:57 np0005539552 nova_compute[233724]: 2025-11-29 08:48:57.593 233728 DEBUG nova.virt.hardware [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:48:57 np0005539552 nova_compute[233724]: 2025-11-29 08:48:57.594 233728 DEBUG nova.virt.hardware [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:48:57 np0005539552 nova_compute[233724]: 2025-11-29 08:48:57.594 233728 DEBUG nova.virt.hardware [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:48:57 np0005539552 nova_compute[233724]: 2025-11-29 08:48:57.594 233728 DEBUG nova.virt.hardware [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:48:57 np0005539552 nova_compute[233724]: 2025-11-29 08:48:57.595 233728 DEBUG nova.virt.hardware [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:48:57 np0005539552 nova_compute[233724]: 2025-11-29 08:48:57.595 233728 DEBUG nova.virt.hardware [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:48:57 np0005539552 nova_compute[233724]: 2025-11-29 08:48:57.595 233728 DEBUG nova.virt.hardware [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:48:57 np0005539552 nova_compute[233724]: 2025-11-29 08:48:57.645 233728 DEBUG nova.storage.rbd_utils [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] rbd image 4ae60306-1870-41f5-b85a-a161040086d0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:48:57 np0005539552 nova_compute[233724]: 2025-11-29 08:48:57.650 233728 DEBUG oslo_concurrency.processutils [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:57 np0005539552 nova_compute[233724]: 2025-11-29 08:48:57.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:48:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:48:58 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4054265647' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:48:58 np0005539552 nova_compute[233724]: 2025-11-29 08:48:58.148 233728 DEBUG oslo_concurrency.processutils [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:48:58 np0005539552 nova_compute[233724]: 2025-11-29 08:48:58.188 233728 DEBUG nova.virt.libvirt.vif [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:48:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-2032979511',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-2032979511',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-2032979511',id=219,image_ref='ad73a146-3d66-454f-a206-ab6de9891225',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHenUT6iGrnE9AB32DOrySXAPbsGPEtql9EKmrnf6MOAPWmH02lhXICxW/2e7K4ydF7kYn3naX0r/9C2daNU92VDgvTLNX5UyvyBfPhzHHIeWdGcXa5BKyhPMXQSaxAflw==',key_name='tempest-keypair-35824695',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b7ffcb23bac14ee49474df9aee5f7dae',ramdisk_id='',reservation_id='r-6h3dk8dq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-1614567902',image_owner_user_name='tempest-TestVolumeBootPattern-1614567902-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1614567902',owner_user_name='tempest-TestVolumeBootPattern-1614567902-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:48:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b576a51181b5425aa6e44a0eb0a22803',uuid=4ae60306-1870-41f5-b85a-a161040086d0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') 
vif={"id": "e338bd74-7e99-4c3b-b207-58250b25c38f", "address": "fa:16:3e:86:33:b7", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape338bd74-7e", "ovs_interfaceid": "e338bd74-7e99-4c3b-b207-58250b25c38f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:48:58 np0005539552 nova_compute[233724]: 2025-11-29 08:48:58.188 233728 DEBUG nova.network.os_vif_util [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Converting VIF {"id": "e338bd74-7e99-4c3b-b207-58250b25c38f", "address": "fa:16:3e:86:33:b7", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape338bd74-7e", "ovs_interfaceid": "e338bd74-7e99-4c3b-b207-58250b25c38f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:48:58 np0005539552 nova_compute[233724]: 2025-11-29 08:48:58.190 233728 DEBUG nova.network.os_vif_util [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:86:33:b7,bridge_name='br-int',has_traffic_filtering=True,id=e338bd74-7e99-4c3b-b207-58250b25c38f,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape338bd74-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:48:58 np0005539552 nova_compute[233724]: 2025-11-29 08:48:58.192 233728 DEBUG nova.objects.instance [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lazy-loading 'pci_devices' on Instance uuid 4ae60306-1870-41f5-b85a-a161040086d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:48:58 np0005539552 nova_compute[233724]: 2025-11-29 08:48:58.210 233728 DEBUG nova.virt.libvirt.driver [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:48:58 np0005539552 nova_compute[233724]:  <uuid>4ae60306-1870-41f5-b85a-a161040086d0</uuid>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:  <name>instance-000000db</name>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:48:58 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:      <nova:name>tempest-TestVolumeBootPattern-image-snapshot-server-2032979511</nova:name>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:48:57</nova:creationTime>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:48:58 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:        <nova:user uuid="b576a51181b5425aa6e44a0eb0a22803">tempest-TestVolumeBootPattern-1614567902-project-member</nova:user>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:        <nova:project uuid="b7ffcb23bac14ee49474df9aee5f7dae">tempest-TestVolumeBootPattern-1614567902</nova:project>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:      <nova:root type="image" uuid="ad73a146-3d66-454f-a206-ab6de9891225"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:        <nova:port uuid="e338bd74-7e99-4c3b-b207-58250b25c38f">
Nov 29 03:48:58 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:      <entry name="serial">4ae60306-1870-41f5-b85a-a161040086d0</entry>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:      <entry name="uuid">4ae60306-1870-41f5-b85a-a161040086d0</entry>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:48:58 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/4ae60306-1870-41f5-b85a-a161040086d0_disk.config">
Nov 29 03:48:58 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:48:58 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:48:58 np0005539552 nova_compute[233724]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="volumes/volume-325ae6fa-18bb-47ad-9db3-2501a8dfae90">
Nov 29 03:48:58 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:48:58 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:      <serial>325ae6fa-18bb-47ad-9db3-2501a8dfae90</serial>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:48:58 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:86:33:b7"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:      <target dev="tape338bd74-7e"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:48:58 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/4ae60306-1870-41f5-b85a-a161040086d0/console.log" append="off"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <input type="keyboard" bus="usb"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:48:58 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:48:58 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:48:58 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:48:58 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:48:58 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:48:58 np0005539552 nova_compute[233724]: 2025-11-29 08:48:58.210 233728 DEBUG nova.compute.manager [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Preparing to wait for external event network-vif-plugged-e338bd74-7e99-4c3b-b207-58250b25c38f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:48:58 np0005539552 nova_compute[233724]: 2025-11-29 08:48:58.211 233728 DEBUG oslo_concurrency.lockutils [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "4ae60306-1870-41f5-b85a-a161040086d0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:58 np0005539552 nova_compute[233724]: 2025-11-29 08:48:58.211 233728 DEBUG oslo_concurrency.lockutils [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "4ae60306-1870-41f5-b85a-a161040086d0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:58 np0005539552 nova_compute[233724]: 2025-11-29 08:48:58.212 233728 DEBUG oslo_concurrency.lockutils [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "4ae60306-1870-41f5-b85a-a161040086d0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:58 np0005539552 nova_compute[233724]: 2025-11-29 08:48:58.213 233728 DEBUG nova.virt.libvirt.vif [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:48:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-2032979511',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-2032979511',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-2032979511',id=219,image_ref='ad73a146-3d66-454f-a206-ab6de9891225',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHenUT6iGrnE9AB32DOrySXAPbsGPEtql9EKmrnf6MOAPWmH02lhXICxW/2e7K4ydF7kYn3naX0r/9C2daNU92VDgvTLNX5UyvyBfPhzHHIeWdGcXa5BKyhPMXQSaxAflw==',key_name='tempest-keypair-35824695',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b7ffcb23bac14ee49474df9aee5f7dae',ramdisk_id='',reservation_id='r-6h3dk8dq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-1614567902',image_owner_user_name='tempest-TestVolumeBootPattern-1614567902-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1614567902',owner_user_name='tempest-TestVolumeBootPattern-1614567902-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:48:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b576a51181b5425aa6e44a0eb0a22803',uuid=4ae60306-1870-41f5-b85a-a161040086d0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e338bd74-7e99-4c3b-b207-58250b25c38f", "address": "fa:16:3e:86:33:b7", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape338bd74-7e", "ovs_interfaceid": "e338bd74-7e99-4c3b-b207-58250b25c38f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:48:58 np0005539552 nova_compute[233724]: 2025-11-29 08:48:58.213 233728 DEBUG nova.network.os_vif_util [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Converting VIF {"id": "e338bd74-7e99-4c3b-b207-58250b25c38f", "address": "fa:16:3e:86:33:b7", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape338bd74-7e", "ovs_interfaceid": "e338bd74-7e99-4c3b-b207-58250b25c38f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:48:58 np0005539552 nova_compute[233724]: 2025-11-29 08:48:58.214 233728 DEBUG nova.network.os_vif_util [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:86:33:b7,bridge_name='br-int',has_traffic_filtering=True,id=e338bd74-7e99-4c3b-b207-58250b25c38f,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape338bd74-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:48:58 np0005539552 nova_compute[233724]: 2025-11-29 08:48:58.215 233728 DEBUG os_vif [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:33:b7,bridge_name='br-int',has_traffic_filtering=True,id=e338bd74-7e99-4c3b-b207-58250b25c38f,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape338bd74-7e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:48:58 np0005539552 nova_compute[233724]: 2025-11-29 08:48:58.216 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:58 np0005539552 nova_compute[233724]: 2025-11-29 08:48:58.217 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:58 np0005539552 nova_compute[233724]: 2025-11-29 08:48:58.217 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:48:58 np0005539552 nova_compute[233724]: 2025-11-29 08:48:58.222 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:58 np0005539552 nova_compute[233724]: 2025-11-29 08:48:58.223 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape338bd74-7e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:58 np0005539552 nova_compute[233724]: 2025-11-29 08:48:58.223 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape338bd74-7e, col_values=(('external_ids', {'iface-id': 'e338bd74-7e99-4c3b-b207-58250b25c38f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:86:33:b7', 'vm-uuid': '4ae60306-1870-41f5-b85a-a161040086d0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:58 np0005539552 nova_compute[233724]: 2025-11-29 08:48:58.226 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:58 np0005539552 NetworkManager[48926]: <info>  [1764406138.2271] manager: (tape338bd74-7e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/422)
Nov 29 03:48:58 np0005539552 nova_compute[233724]: 2025-11-29 08:48:58.229 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:48:58 np0005539552 nova_compute[233724]: 2025-11-29 08:48:58.233 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:58 np0005539552 nova_compute[233724]: 2025-11-29 08:48:58.235 233728 INFO os_vif [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:33:b7,bridge_name='br-int',has_traffic_filtering=True,id=e338bd74-7e99-4c3b-b207-58250b25c38f,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape338bd74-7e')#033[00m
Nov 29 03:48:58 np0005539552 nova_compute[233724]: 2025-11-29 08:48:58.302 233728 DEBUG nova.virt.libvirt.driver [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:48:58 np0005539552 nova_compute[233724]: 2025-11-29 08:48:58.302 233728 DEBUG nova.virt.libvirt.driver [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:48:58 np0005539552 nova_compute[233724]: 2025-11-29 08:48:58.303 233728 DEBUG nova.virt.libvirt.driver [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] No VIF found with MAC fa:16:3e:86:33:b7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:48:58 np0005539552 nova_compute[233724]: 2025-11-29 08:48:58.303 233728 INFO nova.virt.libvirt.driver [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Using config drive#033[00m
Nov 29 03:48:58 np0005539552 nova_compute[233724]: 2025-11-29 08:48:58.345 233728 DEBUG nova.storage.rbd_utils [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] rbd image 4ae60306-1870-41f5-b85a-a161040086d0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:48:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:58 np0005539552 nova_compute[233724]: 2025-11-29 08:48:58.856 233728 INFO nova.virt.libvirt.driver [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Creating config drive at /var/lib/nova/instances/4ae60306-1870-41f5-b85a-a161040086d0/disk.config#033[00m
Nov 29 03:48:58 np0005539552 nova_compute[233724]: 2025-11-29 08:48:58.870 233728 DEBUG oslo_concurrency.processutils [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4ae60306-1870-41f5-b85a-a161040086d0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpazbe_69f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:59 np0005539552 nova_compute[233724]: 2025-11-29 08:48:59.033 233728 DEBUG oslo_concurrency.processutils [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4ae60306-1870-41f5-b85a-a161040086d0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpazbe_69f" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:48:59 np0005539552 nova_compute[233724]: 2025-11-29 08:48:59.088 233728 DEBUG nova.storage.rbd_utils [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] rbd image 4ae60306-1870-41f5-b85a-a161040086d0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:48:59 np0005539552 nova_compute[233724]: 2025-11-29 08:48:59.095 233728 DEBUG oslo_concurrency.processutils [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4ae60306-1870-41f5-b85a-a161040086d0/disk.config 4ae60306-1870-41f5-b85a-a161040086d0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:59 np0005539552 nova_compute[233724]: 2025-11-29 08:48:59.294 233728 DEBUG oslo_concurrency.processutils [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4ae60306-1870-41f5-b85a-a161040086d0/disk.config 4ae60306-1870-41f5-b85a-a161040086d0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.199s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:48:59 np0005539552 nova_compute[233724]: 2025-11-29 08:48:59.295 233728 INFO nova.virt.libvirt.driver [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Deleting local config drive /var/lib/nova/instances/4ae60306-1870-41f5-b85a-a161040086d0/disk.config because it was imported into RBD.#033[00m
Nov 29 03:48:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:59 np0005539552 kernel: tape338bd74-7e: entered promiscuous mode
Nov 29 03:48:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:59.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:59 np0005539552 NetworkManager[48926]: <info>  [1764406139.3721] manager: (tape338bd74-7e): new Tun device (/org/freedesktop/NetworkManager/Devices/423)
Nov 29 03:48:59 np0005539552 ovn_controller[133798]: 2025-11-29T08:48:59Z|00950|binding|INFO|Claiming lport e338bd74-7e99-4c3b-b207-58250b25c38f for this chassis.
Nov 29 03:48:59 np0005539552 ovn_controller[133798]: 2025-11-29T08:48:59Z|00951|binding|INFO|e338bd74-7e99-4c3b-b207-58250b25c38f: Claiming fa:16:3e:86:33:b7 10.100.0.10
Nov 29 03:48:59 np0005539552 nova_compute[233724]: 2025-11-29 08:48:59.374 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:59.381 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:33:b7 10.100.0.10'], port_security=['fa:16:3e:86:33:b7 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4ae60306-1870-41f5-b85a-a161040086d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7ffcb23bac14ee49474df9aee5f7dae', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'feea90f3-6a6c-4015-9759-137780c8bc82', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2432be5b-087b-4981-ab5e-ea6b1be12111, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=e338bd74-7e99-4c3b-b207-58250b25c38f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:48:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:59.384 143400 INFO neutron.agent.ovn.metadata.agent [-] Port e338bd74-7e99-4c3b-b207-58250b25c38f in datapath 3d510715-dc99-4870-8ae9-ff599ae1a9c2 bound to our chassis#033[00m
Nov 29 03:48:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:59.388 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3d510715-dc99-4870-8ae9-ff599ae1a9c2#033[00m
Nov 29 03:48:59 np0005539552 ovn_controller[133798]: 2025-11-29T08:48:59Z|00952|binding|INFO|Setting lport e338bd74-7e99-4c3b-b207-58250b25c38f ovn-installed in OVS
Nov 29 03:48:59 np0005539552 ovn_controller[133798]: 2025-11-29T08:48:59Z|00953|binding|INFO|Setting lport e338bd74-7e99-4c3b-b207-58250b25c38f up in Southbound
Nov 29 03:48:59 np0005539552 nova_compute[233724]: 2025-11-29 08:48:59.412 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:59.413 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[324d9bab-77df-4d17-9869-5171096880ab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:59 np0005539552 nova_compute[233724]: 2025-11-29 08:48:59.418 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:59 np0005539552 systemd-udevd[323773]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:48:59 np0005539552 systemd-machined[196379]: New machine qemu-96-instance-000000db.
Nov 29 03:48:59 np0005539552 systemd[1]: Started Virtual Machine qemu-96-instance-000000db.
Nov 29 03:48:59 np0005539552 NetworkManager[48926]: <info>  [1764406139.4471] device (tape338bd74-7e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:48:59 np0005539552 NetworkManager[48926]: <info>  [1764406139.4482] device (tape338bd74-7e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:48:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:48:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:59.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:59.466 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[65e62b46-8e49-4962-b475-90f53b73436e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:59.472 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[da1c1a1d-127a-41e7-b668-b344cba8f29f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:59 np0005539552 nova_compute[233724]: 2025-11-29 08:48:59.476 233728 DEBUG nova.network.neutron [req-2323f61b-effb-4e75-baa7-1258f1d71897 req-84e12fa2-d661-4db5-9959-19d76a311cdc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Updated VIF entry in instance network info cache for port e338bd74-7e99-4c3b-b207-58250b25c38f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:48:59 np0005539552 nova_compute[233724]: 2025-11-29 08:48:59.477 233728 DEBUG nova.network.neutron [req-2323f61b-effb-4e75-baa7-1258f1d71897 req-84e12fa2-d661-4db5-9959-19d76a311cdc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Updating instance_info_cache with network_info: [{"id": "e338bd74-7e99-4c3b-b207-58250b25c38f", "address": "fa:16:3e:86:33:b7", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape338bd74-7e", "ovs_interfaceid": "e338bd74-7e99-4c3b-b207-58250b25c38f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:48:59 np0005539552 nova_compute[233724]: 2025-11-29 08:48:59.492 233728 DEBUG oslo_concurrency.lockutils [req-2323f61b-effb-4e75-baa7-1258f1d71897 req-84e12fa2-d661-4db5-9959-19d76a311cdc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-4ae60306-1870-41f5-b85a-a161040086d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:48:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:59.516 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[e7d8616d-a7c3-4d60-812f-4a47cc6238b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:59.542 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[cf09c39c-5e2a-4221-8a46-16a63d813fd9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3d510715-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:61:90'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 276], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 913216, 'reachable_time': 31613, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323784, 'error': None, 'target': 'ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:59.561 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[fcc3159d-a466-42e1-b0fc-e3bd67f8c0da]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3d510715-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 913227, 'tstamp': 913227}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323786, 'error': None, 'target': 'ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3d510715-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 913230, 'tstamp': 913230}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323786, 'error': None, 'target': 'ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:59.563 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d510715-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:59 np0005539552 nova_compute[233724]: 2025-11-29 08:48:59.565 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:59 np0005539552 nova_compute[233724]: 2025-11-29 08:48:59.566 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:59.568 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3d510715-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:59.569 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:48:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:59.570 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3d510715-d0, col_values=(('external_ids', {'iface-id': '9b7ae33f-c1c7-4a13-97b3-0ae6cb40a1db'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:48:59.571 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:48:59 np0005539552 nova_compute[233724]: 2025-11-29 08:48:59.866 233728 DEBUG nova.compute.manager [req-0e474ebd-2750-479d-b593-af23329a991f req-c0a21819-ec0d-4c36-8ac0-d4c76fb737d5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Received event network-vif-plugged-e338bd74-7e99-4c3b-b207-58250b25c38f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:48:59 np0005539552 nova_compute[233724]: 2025-11-29 08:48:59.867 233728 DEBUG oslo_concurrency.lockutils [req-0e474ebd-2750-479d-b593-af23329a991f req-c0a21819-ec0d-4c36-8ac0-d4c76fb737d5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "4ae60306-1870-41f5-b85a-a161040086d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:59 np0005539552 nova_compute[233724]: 2025-11-29 08:48:59.867 233728 DEBUG oslo_concurrency.lockutils [req-0e474ebd-2750-479d-b593-af23329a991f req-c0a21819-ec0d-4c36-8ac0-d4c76fb737d5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "4ae60306-1870-41f5-b85a-a161040086d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:59 np0005539552 nova_compute[233724]: 2025-11-29 08:48:59.868 233728 DEBUG oslo_concurrency.lockutils [req-0e474ebd-2750-479d-b593-af23329a991f req-c0a21819-ec0d-4c36-8ac0-d4c76fb737d5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "4ae60306-1870-41f5-b85a-a161040086d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:59 np0005539552 nova_compute[233724]: 2025-11-29 08:48:59.868 233728 DEBUG nova.compute.manager [req-0e474ebd-2750-479d-b593-af23329a991f req-c0a21819-ec0d-4c36-8ac0-d4c76fb737d5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Processing event network-vif-plugged-e338bd74-7e99-4c3b-b207-58250b25c38f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:48:59 np0005539552 nova_compute[233724]: 2025-11-29 08:48:59.925 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:48:59 np0005539552 nova_compute[233724]: 2025-11-29 08:48:59.926 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:49:00 np0005539552 nova_compute[233724]: 2025-11-29 08:49:00.656 233728 DEBUG nova.compute.manager [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:49:00 np0005539552 nova_compute[233724]: 2025-11-29 08:49:00.657 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764406140.6553323, 4ae60306-1870-41f5-b85a-a161040086d0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:49:00 np0005539552 nova_compute[233724]: 2025-11-29 08:49:00.657 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] VM Started (Lifecycle Event)#033[00m
Nov 29 03:49:00 np0005539552 nova_compute[233724]: 2025-11-29 08:49:00.662 233728 DEBUG nova.virt.libvirt.driver [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:49:00 np0005539552 nova_compute[233724]: 2025-11-29 08:49:00.667 233728 INFO nova.virt.libvirt.driver [-] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Instance spawned successfully.#033[00m
Nov 29 03:49:00 np0005539552 nova_compute[233724]: 2025-11-29 08:49:00.668 233728 INFO nova.compute.manager [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Took 3.51 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:49:00 np0005539552 nova_compute[233724]: 2025-11-29 08:49:00.668 233728 DEBUG nova.compute.manager [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:49:00 np0005539552 nova_compute[233724]: 2025-11-29 08:49:00.705 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:49:00 np0005539552 nova_compute[233724]: 2025-11-29 08:49:00.709 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:49:00 np0005539552 nova_compute[233724]: 2025-11-29 08:49:00.734 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:49:00 np0005539552 nova_compute[233724]: 2025-11-29 08:49:00.734 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764406140.65709, 4ae60306-1870-41f5-b85a-a161040086d0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:49:00 np0005539552 nova_compute[233724]: 2025-11-29 08:49:00.735 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:49:00 np0005539552 nova_compute[233724]: 2025-11-29 08:49:00.750 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:49:00 np0005539552 nova_compute[233724]: 2025-11-29 08:49:00.752 233728 INFO nova.compute.manager [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Took 10.06 seconds to build instance.#033[00m
Nov 29 03:49:00 np0005539552 nova_compute[233724]: 2025-11-29 08:49:00.758 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764406140.6601684, 4ae60306-1870-41f5-b85a-a161040086d0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:49:00 np0005539552 nova_compute[233724]: 2025-11-29 08:49:00.758 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:49:00 np0005539552 nova_compute[233724]: 2025-11-29 08:49:00.771 233728 DEBUG oslo_concurrency.lockutils [None req-6530f5f4-6aef-4ac2-b7f6-3aef4a990457 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "4ae60306-1870-41f5-b85a-a161040086d0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.137s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:00 np0005539552 nova_compute[233724]: 2025-11-29 08:49:00.775 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:49:00 np0005539552 nova_compute[233724]: 2025-11-29 08:49:00.779 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:49:01 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:49:01 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1873307481' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:49:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:01.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:01.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:02 np0005539552 nova_compute[233724]: 2025-11-29 08:49:02.022 233728 DEBUG nova.compute.manager [req-1568eb8c-cfc8-4090-9ac0-36255dd55a68 req-2548690c-cd68-418e-b626-7e629300a69f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Received event network-vif-plugged-e338bd74-7e99-4c3b-b207-58250b25c38f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:49:02 np0005539552 nova_compute[233724]: 2025-11-29 08:49:02.022 233728 DEBUG oslo_concurrency.lockutils [req-1568eb8c-cfc8-4090-9ac0-36255dd55a68 req-2548690c-cd68-418e-b626-7e629300a69f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "4ae60306-1870-41f5-b85a-a161040086d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:02 np0005539552 nova_compute[233724]: 2025-11-29 08:49:02.023 233728 DEBUG oslo_concurrency.lockutils [req-1568eb8c-cfc8-4090-9ac0-36255dd55a68 req-2548690c-cd68-418e-b626-7e629300a69f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "4ae60306-1870-41f5-b85a-a161040086d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:02 np0005539552 nova_compute[233724]: 2025-11-29 08:49:02.023 233728 DEBUG oslo_concurrency.lockutils [req-1568eb8c-cfc8-4090-9ac0-36255dd55a68 req-2548690c-cd68-418e-b626-7e629300a69f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "4ae60306-1870-41f5-b85a-a161040086d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:02 np0005539552 nova_compute[233724]: 2025-11-29 08:49:02.023 233728 DEBUG nova.compute.manager [req-1568eb8c-cfc8-4090-9ac0-36255dd55a68 req-2548690c-cd68-418e-b626-7e629300a69f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] No waiting events found dispatching network-vif-plugged-e338bd74-7e99-4c3b-b207-58250b25c38f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:49:02 np0005539552 nova_compute[233724]: 2025-11-29 08:49:02.024 233728 WARNING nova.compute.manager [req-1568eb8c-cfc8-4090-9ac0-36255dd55a68 req-2548690c-cd68-418e-b626-7e629300a69f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Received unexpected event network-vif-plugged-e338bd74-7e99-4c3b-b207-58250b25c38f for instance with vm_state active and task_state None.#033[00m
Nov 29 03:49:02 np0005539552 nova_compute[233724]: 2025-11-29 08:49:02.561 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:03 np0005539552 nova_compute[233724]: 2025-11-29 08:49:03.226 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:03.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:03.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:03 np0005539552 nova_compute[233724]: 2025-11-29 08:49:03.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:49:04 np0005539552 nova_compute[233724]: 2025-11-29 08:49:04.944 233728 DEBUG nova.compute.manager [req-cdeb45e5-8d7d-4e93-964d-1e4187fb74b8 req-c1d20a0e-8c70-4dff-8b3a-d127f5dc0083 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Received event network-changed-e338bd74-7e99-4c3b-b207-58250b25c38f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:49:04 np0005539552 nova_compute[233724]: 2025-11-29 08:49:04.945 233728 DEBUG nova.compute.manager [req-cdeb45e5-8d7d-4e93-964d-1e4187fb74b8 req-c1d20a0e-8c70-4dff-8b3a-d127f5dc0083 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Refreshing instance network info cache due to event network-changed-e338bd74-7e99-4c3b-b207-58250b25c38f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:49:04 np0005539552 nova_compute[233724]: 2025-11-29 08:49:04.945 233728 DEBUG oslo_concurrency.lockutils [req-cdeb45e5-8d7d-4e93-964d-1e4187fb74b8 req-c1d20a0e-8c70-4dff-8b3a-d127f5dc0083 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-4ae60306-1870-41f5-b85a-a161040086d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:49:04 np0005539552 nova_compute[233724]: 2025-11-29 08:49:04.946 233728 DEBUG oslo_concurrency.lockutils [req-cdeb45e5-8d7d-4e93-964d-1e4187fb74b8 req-c1d20a0e-8c70-4dff-8b3a-d127f5dc0083 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-4ae60306-1870-41f5-b85a-a161040086d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:49:04 np0005539552 nova_compute[233724]: 2025-11-29 08:49:04.946 233728 DEBUG nova.network.neutron [req-cdeb45e5-8d7d-4e93-964d-1e4187fb74b8 req-c1d20a0e-8c70-4dff-8b3a-d127f5dc0083 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Refreshing network info cache for port e338bd74-7e99-4c3b-b207-58250b25c38f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:49:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:05.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:05.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:06 np0005539552 nova_compute[233724]: 2025-11-29 08:49:06.919 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:49:06 np0005539552 nova_compute[233724]: 2025-11-29 08:49:06.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:49:06 np0005539552 nova_compute[233724]: 2025-11-29 08:49:06.923 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:49:06 np0005539552 nova_compute[233724]: 2025-11-29 08:49:06.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:49:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:07.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:07.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:07 np0005539552 nova_compute[233724]: 2025-11-29 08:49:07.556 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "refresh_cache-8e6ec5c0-7d7c-41e0-9f48-9e526008b00c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:49:07 np0005539552 nova_compute[233724]: 2025-11-29 08:49:07.557 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquired lock "refresh_cache-8e6ec5c0-7d7c-41e0-9f48-9e526008b00c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:49:07 np0005539552 nova_compute[233724]: 2025-11-29 08:49:07.557 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:49:07 np0005539552 nova_compute[233724]: 2025-11-29 08:49:07.557 233728 DEBUG nova.objects.instance [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:49:07 np0005539552 nova_compute[233724]: 2025-11-29 08:49:07.562 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:07 np0005539552 nova_compute[233724]: 2025-11-29 08:49:07.630 233728 DEBUG nova.network.neutron [req-cdeb45e5-8d7d-4e93-964d-1e4187fb74b8 req-c1d20a0e-8c70-4dff-8b3a-d127f5dc0083 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Updated VIF entry in instance network info cache for port e338bd74-7e99-4c3b-b207-58250b25c38f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:49:07 np0005539552 nova_compute[233724]: 2025-11-29 08:49:07.631 233728 DEBUG nova.network.neutron [req-cdeb45e5-8d7d-4e93-964d-1e4187fb74b8 req-c1d20a0e-8c70-4dff-8b3a-d127f5dc0083 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Updating instance_info_cache with network_info: [{"id": "e338bd74-7e99-4c3b-b207-58250b25c38f", "address": "fa:16:3e:86:33:b7", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape338bd74-7e", "ovs_interfaceid": "e338bd74-7e99-4c3b-b207-58250b25c38f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:49:07 np0005539552 nova_compute[233724]: 2025-11-29 08:49:07.654 233728 DEBUG oslo_concurrency.lockutils [req-cdeb45e5-8d7d-4e93-964d-1e4187fb74b8 req-c1d20a0e-8c70-4dff-8b3a-d127f5dc0083 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-4ae60306-1870-41f5-b85a-a161040086d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:49:08 np0005539552 nova_compute[233724]: 2025-11-29 08:49:08.229 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:09.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:09.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:09 np0005539552 nova_compute[233724]: 2025-11-29 08:49:09.600 233728 DEBUG nova.network.neutron [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Updating instance_info_cache with network_info: [{"id": "cdeb0376-cedf-4745-964f-897685f6d3de", "address": "fa:16:3e:d2:c2:41", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdeb0376-ce", "ovs_interfaceid": "cdeb0376-cedf-4745-964f-897685f6d3de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:49:09 np0005539552 nova_compute[233724]: 2025-11-29 08:49:09.618 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Releasing lock "refresh_cache-8e6ec5c0-7d7c-41e0-9f48-9e526008b00c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:49:09 np0005539552 nova_compute[233724]: 2025-11-29 08:49:09.619 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:49:10 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #145. Immutable memtables: 0.
Nov 29 03:49:10 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:49:10.697228) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:49:10 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 145
Nov 29 03:49:10 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406150697276, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 1406, "num_deletes": 258, "total_data_size": 2987008, "memory_usage": 3029248, "flush_reason": "Manual Compaction"}
Nov 29 03:49:10 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #146: started
Nov 29 03:49:10 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406150712562, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 146, "file_size": 1936669, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70965, "largest_seqno": 72366, "table_properties": {"data_size": 1930463, "index_size": 3408, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13718, "raw_average_key_size": 20, "raw_value_size": 1917747, "raw_average_value_size": 2845, "num_data_blocks": 149, "num_entries": 674, "num_filter_entries": 674, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406058, "oldest_key_time": 1764406058, "file_creation_time": 1764406150, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:49:10 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 15394 microseconds, and 6868 cpu microseconds.
Nov 29 03:49:10 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:49:10 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:49:10.712602) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #146: 1936669 bytes OK
Nov 29 03:49:10 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:49:10.712639) [db/memtable_list.cc:519] [default] Level-0 commit table #146 started
Nov 29 03:49:10 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:49:10.714702) [db/memtable_list.cc:722] [default] Level-0 commit table #146: memtable #1 done
Nov 29 03:49:10 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:49:10.714715) EVENT_LOG_v1 {"time_micros": 1764406150714711, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:49:10 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:49:10.714730) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:49:10 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 2980280, prev total WAL file size 2980280, number of live WAL files 2.
Nov 29 03:49:10 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000142.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:49:10 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:49:10.715534) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353135' seq:72057594037927935, type:22 .. '6C6F676D0032373636' seq:0, type:0; will stop at (end)
Nov 29 03:49:10 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:49:10 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [146(1891KB)], [144(10MB)]
Nov 29 03:49:10 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406150715568, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [146], "files_L6": [144], "score": -1, "input_data_size": 12706363, "oldest_snapshot_seqno": -1}
Nov 29 03:49:10 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #147: 10099 keys, 12541062 bytes, temperature: kUnknown
Nov 29 03:49:10 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406150796814, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 147, "file_size": 12541062, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12476616, "index_size": 38093, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25285, "raw_key_size": 267366, "raw_average_key_size": 26, "raw_value_size": 12300016, "raw_average_value_size": 1217, "num_data_blocks": 1444, "num_entries": 10099, "num_filter_entries": 10099, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764406150, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 147, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:49:10 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:49:10 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:49:10.797046) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 12541062 bytes
Nov 29 03:49:10 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:49:10.798388) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 156.3 rd, 154.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 10.3 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(13.0) write-amplify(6.5) OK, records in: 10636, records dropped: 537 output_compression: NoCompression
Nov 29 03:49:10 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:49:10.798406) EVENT_LOG_v1 {"time_micros": 1764406150798397, "job": 92, "event": "compaction_finished", "compaction_time_micros": 81308, "compaction_time_cpu_micros": 43441, "output_level": 6, "num_output_files": 1, "total_output_size": 12541062, "num_input_records": 10636, "num_output_records": 10099, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:49:10 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:49:10 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406150798825, "job": 92, "event": "table_file_deletion", "file_number": 146}
Nov 29 03:49:10 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000144.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:49:10 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406150800889, "job": 92, "event": "table_file_deletion", "file_number": 144}
Nov 29 03:49:10 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:49:10.715473) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:49:10 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:49:10.800914) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:49:10 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:49:10.800918) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:49:10 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:49:10.800919) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:49:10 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:49:10.800921) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:49:10 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:49:10.800922) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:49:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:11.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:11.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:12 np0005539552 nova_compute[233724]: 2025-11-29 08:49:12.564 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:13 np0005539552 nova_compute[233724]: 2025-11-29 08:49:13.232 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:13.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:13.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:14 np0005539552 nova_compute[233724]: 2025-11-29 08:49:14.615 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:49:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:15.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:15.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:17.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:17.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:17 np0005539552 nova_compute[233724]: 2025-11-29 08:49:17.564 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:18 np0005539552 nova_compute[233724]: 2025-11-29 08:49:18.235 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:19 np0005539552 podman[323892]: 2025-11-29 08:49:19.033018076 +0000 UTC m=+0.098286287 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:49:19 np0005539552 podman[323891]: 2025-11-29 08:49:19.044125275 +0000 UTC m=+0.108675697 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:49:19 np0005539552 podman[323893]: 2025-11-29 08:49:19.08407454 +0000 UTC m=+0.141617263 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible)
Nov 29 03:49:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:19.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:49:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:19.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:49:19 np0005539552 ovn_controller[133798]: 2025-11-29T08:49:19Z|00131|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.11 does not match offer 10.100.0.10
Nov 29 03:49:19 np0005539552 ovn_controller[133798]: 2025-11-29T08:49:19Z|00132|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:86:33:b7 10.100.0.10
Nov 29 03:49:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:49:20.656 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:49:20.656 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:49:20.656 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:49:21.125 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=66, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=65) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:49:21 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:49:21.127 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:49:21 np0005539552 nova_compute[233724]: 2025-11-29 08:49:21.152 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:49:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:21.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:49:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:21.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:22 np0005539552 nova_compute[233724]: 2025-11-29 08:49:22.567 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:22 np0005539552 ovn_controller[133798]: 2025-11-29T08:49:22Z|00133|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.11 does not match offer 10.100.0.10
Nov 29 03:49:22 np0005539552 ovn_controller[133798]: 2025-11-29T08:49:22Z|00134|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:86:33:b7 10.100.0.10
Nov 29 03:49:23 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:49:23.130 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '66'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:49:23 np0005539552 nova_compute[233724]: 2025-11-29 08:49:23.237 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:23.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:23.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:24 np0005539552 ovn_controller[133798]: 2025-11-29T08:49:24Z|00135|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:86:33:b7 10.100.0.10
Nov 29 03:49:24 np0005539552 ovn_controller[133798]: 2025-11-29T08:49:24Z|00136|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:86:33:b7 10.100.0.10
Nov 29 03:49:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:25.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:25.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:27.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:27.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:27 np0005539552 nova_compute[233724]: 2025-11-29 08:49:27.571 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:28 np0005539552 nova_compute[233724]: 2025-11-29 08:49:28.240 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:29.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:29.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:31.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:31.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:32 np0005539552 ovn_controller[133798]: 2025-11-29T08:49:32Z|00954|binding|INFO|Releasing lport 9b7ae33f-c1c7-4a13-97b3-0ae6cb40a1db from this chassis (sb_readonly=0)
Nov 29 03:49:32 np0005539552 nova_compute[233724]: 2025-11-29 08:49:32.628 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:33 np0005539552 nova_compute[233724]: 2025-11-29 08:49:33.242 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:33.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:33.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:35.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:35.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:37.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:37.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:37 np0005539552 nova_compute[233724]: 2025-11-29 08:49:37.630 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:37 np0005539552 nova_compute[233724]: 2025-11-29 08:49:37.741 233728 DEBUG oslo_concurrency.lockutils [None req-baf11db7-a95a-451b-8fa0-c3434c67f21e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "4ae60306-1870-41f5-b85a-a161040086d0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:37 np0005539552 nova_compute[233724]: 2025-11-29 08:49:37.742 233728 DEBUG oslo_concurrency.lockutils [None req-baf11db7-a95a-451b-8fa0-c3434c67f21e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "4ae60306-1870-41f5-b85a-a161040086d0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:37 np0005539552 nova_compute[233724]: 2025-11-29 08:49:37.742 233728 DEBUG oslo_concurrency.lockutils [None req-baf11db7-a95a-451b-8fa0-c3434c67f21e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "4ae60306-1870-41f5-b85a-a161040086d0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:37 np0005539552 nova_compute[233724]: 2025-11-29 08:49:37.743 233728 DEBUG oslo_concurrency.lockutils [None req-baf11db7-a95a-451b-8fa0-c3434c67f21e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "4ae60306-1870-41f5-b85a-a161040086d0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:37 np0005539552 nova_compute[233724]: 2025-11-29 08:49:37.743 233728 DEBUG oslo_concurrency.lockutils [None req-baf11db7-a95a-451b-8fa0-c3434c67f21e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "4ae60306-1870-41f5-b85a-a161040086d0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:37 np0005539552 nova_compute[233724]: 2025-11-29 08:49:37.744 233728 INFO nova.compute.manager [None req-baf11db7-a95a-451b-8fa0-c3434c67f21e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Terminating instance#033[00m
Nov 29 03:49:37 np0005539552 nova_compute[233724]: 2025-11-29 08:49:37.746 233728 DEBUG nova.compute.manager [None req-baf11db7-a95a-451b-8fa0-c3434c67f21e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:49:37 np0005539552 kernel: tape338bd74-7e (unregistering): left promiscuous mode
Nov 29 03:49:37 np0005539552 NetworkManager[48926]: <info>  [1764406177.8053] device (tape338bd74-7e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:49:37 np0005539552 ovn_controller[133798]: 2025-11-29T08:49:37Z|00955|binding|INFO|Releasing lport e338bd74-7e99-4c3b-b207-58250b25c38f from this chassis (sb_readonly=0)
Nov 29 03:49:37 np0005539552 ovn_controller[133798]: 2025-11-29T08:49:37Z|00956|binding|INFO|Setting lport e338bd74-7e99-4c3b-b207-58250b25c38f down in Southbound
Nov 29 03:49:37 np0005539552 nova_compute[233724]: 2025-11-29 08:49:37.817 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:37 np0005539552 ovn_controller[133798]: 2025-11-29T08:49:37Z|00957|binding|INFO|Removing iface tape338bd74-7e ovn-installed in OVS
Nov 29 03:49:37 np0005539552 nova_compute[233724]: 2025-11-29 08:49:37.824 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:37 np0005539552 ovn_controller[133798]: 2025-11-29T08:49:37Z|00958|binding|INFO|Releasing lport 9b7ae33f-c1c7-4a13-97b3-0ae6cb40a1db from this chassis (sb_readonly=0)
Nov 29 03:49:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:49:37.831 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:33:b7 10.100.0.10'], port_security=['fa:16:3e:86:33:b7 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4ae60306-1870-41f5-b85a-a161040086d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7ffcb23bac14ee49474df9aee5f7dae', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'feea90f3-6a6c-4015-9759-137780c8bc82', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.231'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2432be5b-087b-4981-ab5e-ea6b1be12111, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=e338bd74-7e99-4c3b-b207-58250b25c38f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:49:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:49:37.832 143400 INFO neutron.agent.ovn.metadata.agent [-] Port e338bd74-7e99-4c3b-b207-58250b25c38f in datapath 3d510715-dc99-4870-8ae9-ff599ae1a9c2 unbound from our chassis#033[00m
Nov 29 03:49:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:49:37.834 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3d510715-dc99-4870-8ae9-ff599ae1a9c2#033[00m
Nov 29 03:49:37 np0005539552 nova_compute[233724]: 2025-11-29 08:49:37.852 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:49:37.861 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[5d593c63-308d-4aa9-a295-789cef2cf62e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:49:37.895 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[e263eeea-ebf5-402a-9c8e-778c3e33ca82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:49:37.899 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[fbc40979-1ab4-4d91-b3ee-748ee2656203]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:37 np0005539552 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d000000db.scope: Deactivated successfully.
Nov 29 03:49:37 np0005539552 nova_compute[233724]: 2025-11-29 08:49:37.905 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:37 np0005539552 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d000000db.scope: Consumed 20.573s CPU time.
Nov 29 03:49:37 np0005539552 systemd-machined[196379]: Machine qemu-96-instance-000000db terminated.
Nov 29 03:49:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:49:37.936 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[054abbeb-9170-426c-b292-7c5c71ad31d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:49:37.956 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[34d41f46-612e-4586-aef4-adabe213d5cc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3d510715-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:61:90'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 276], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 913216, 'reachable_time': 31613, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324027, 'error': None, 'target': 'ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:49:37.983 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[112b4324-2ff4-4d81-a35c-02e6d2091ece]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3d510715-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 913227, 'tstamp': 913227}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324030, 'error': None, 'target': 'ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3d510715-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 913230, 'tstamp': 913230}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324030, 'error': None, 'target': 'ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:49:37.986 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d510715-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:49:37 np0005539552 nova_compute[233724]: 2025-11-29 08:49:37.988 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:37 np0005539552 nova_compute[233724]: 2025-11-29 08:49:37.989 233728 INFO nova.virt.libvirt.driver [-] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Instance destroyed successfully.#033[00m
Nov 29 03:49:37 np0005539552 nova_compute[233724]: 2025-11-29 08:49:37.989 233728 DEBUG nova.objects.instance [None req-baf11db7-a95a-451b-8fa0-c3434c67f21e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lazy-loading 'resources' on Instance uuid 4ae60306-1870-41f5-b85a-a161040086d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:49:37 np0005539552 nova_compute[233724]: 2025-11-29 08:49:37.993 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:49:37.994 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3d510715-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:49:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:49:37.994 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:49:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:49:37.995 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3d510715-d0, col_values=(('external_ids', {'iface-id': '9b7ae33f-c1c7-4a13-97b3-0ae6cb40a1db'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:49:37 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:49:37.995 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:49:38 np0005539552 nova_compute[233724]: 2025-11-29 08:49:38.001 233728 DEBUG nova.virt.libvirt.vif [None req-baf11db7-a95a-451b-8fa0-c3434c67f21e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:48:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-2032979511',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-2032979511',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-2032979511',id=219,image_ref='ad73a146-3d66-454f-a206-ab6de9891225',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHenUT6iGrnE9AB32DOrySXAPbsGPEtql9EKmrnf6MOAPWmH02lhXICxW/2e7K4ydF7kYn3naX0r/9C2daNU92VDgvTLNX5UyvyBfPhzHHIeWdGcXa5BKyhPMXQSaxAflw==',key_name='tempest-keypair-35824695',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:49:00Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b7ffcb23bac14ee49474df9aee5f7dae',ramdisk_id='',reservation_id='r-6h3dk8dq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-1614567902',image_owner_user_name='tempest-TestVolumeBootPattern-1614567902-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1614567902',owner_user_name='tempest-TestVolumeBootPattern-1614567902-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:49:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b576a51181b5425aa6e44a0eb0a22803',uuid=4ae60306-1870-41f5-b85a-a161040086d0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e338bd74-7e99-4c3b-b207-58250b25c38f", "address": "fa:16:3e:86:33:b7", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape338bd74-7e", "ovs_interfaceid": "e338bd74-7e99-4c3b-b207-58250b25c38f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:49:38 np0005539552 nova_compute[233724]: 2025-11-29 08:49:38.002 233728 DEBUG nova.network.os_vif_util [None req-baf11db7-a95a-451b-8fa0-c3434c67f21e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Converting VIF {"id": "e338bd74-7e99-4c3b-b207-58250b25c38f", "address": "fa:16:3e:86:33:b7", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape338bd74-7e", "ovs_interfaceid": "e338bd74-7e99-4c3b-b207-58250b25c38f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:49:38 np0005539552 nova_compute[233724]: 2025-11-29 08:49:38.003 233728 DEBUG nova.network.os_vif_util [None req-baf11db7-a95a-451b-8fa0-c3434c67f21e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:86:33:b7,bridge_name='br-int',has_traffic_filtering=True,id=e338bd74-7e99-4c3b-b207-58250b25c38f,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape338bd74-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:49:38 np0005539552 nova_compute[233724]: 2025-11-29 08:49:38.004 233728 DEBUG os_vif [None req-baf11db7-a95a-451b-8fa0-c3434c67f21e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:86:33:b7,bridge_name='br-int',has_traffic_filtering=True,id=e338bd74-7e99-4c3b-b207-58250b25c38f,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape338bd74-7e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:49:38 np0005539552 nova_compute[233724]: 2025-11-29 08:49:38.006 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:38 np0005539552 nova_compute[233724]: 2025-11-29 08:49:38.006 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape338bd74-7e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:49:38 np0005539552 nova_compute[233724]: 2025-11-29 08:49:38.008 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:38 np0005539552 nova_compute[233724]: 2025-11-29 08:49:38.010 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:49:38 np0005539552 nova_compute[233724]: 2025-11-29 08:49:38.015 233728 INFO os_vif [None req-baf11db7-a95a-451b-8fa0-c3434c67f21e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:86:33:b7,bridge_name='br-int',has_traffic_filtering=True,id=e338bd74-7e99-4c3b-b207-58250b25c38f,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape338bd74-7e')#033[00m
Nov 29 03:49:38 np0005539552 nova_compute[233724]: 2025-11-29 08:49:38.067 233728 DEBUG nova.compute.manager [req-ffbd3a5e-c9b5-4205-a015-4d8165cfdb38 req-8f904b30-e801-42f9-9ba7-e55eb60ea3e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Received event network-vif-unplugged-e338bd74-7e99-4c3b-b207-58250b25c38f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:49:38 np0005539552 nova_compute[233724]: 2025-11-29 08:49:38.067 233728 DEBUG oslo_concurrency.lockutils [req-ffbd3a5e-c9b5-4205-a015-4d8165cfdb38 req-8f904b30-e801-42f9-9ba7-e55eb60ea3e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "4ae60306-1870-41f5-b85a-a161040086d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:38 np0005539552 nova_compute[233724]: 2025-11-29 08:49:38.068 233728 DEBUG oslo_concurrency.lockutils [req-ffbd3a5e-c9b5-4205-a015-4d8165cfdb38 req-8f904b30-e801-42f9-9ba7-e55eb60ea3e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "4ae60306-1870-41f5-b85a-a161040086d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:38 np0005539552 nova_compute[233724]: 2025-11-29 08:49:38.068 233728 DEBUG oslo_concurrency.lockutils [req-ffbd3a5e-c9b5-4205-a015-4d8165cfdb38 req-8f904b30-e801-42f9-9ba7-e55eb60ea3e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "4ae60306-1870-41f5-b85a-a161040086d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:38 np0005539552 nova_compute[233724]: 2025-11-29 08:49:38.068 233728 DEBUG nova.compute.manager [req-ffbd3a5e-c9b5-4205-a015-4d8165cfdb38 req-8f904b30-e801-42f9-9ba7-e55eb60ea3e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] No waiting events found dispatching network-vif-unplugged-e338bd74-7e99-4c3b-b207-58250b25c38f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:49:38 np0005539552 nova_compute[233724]: 2025-11-29 08:49:38.069 233728 DEBUG nova.compute.manager [req-ffbd3a5e-c9b5-4205-a015-4d8165cfdb38 req-8f904b30-e801-42f9-9ba7-e55eb60ea3e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Received event network-vif-unplugged-e338bd74-7e99-4c3b-b207-58250b25c38f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:49:38 np0005539552 nova_compute[233724]: 2025-11-29 08:49:38.220 233728 INFO nova.virt.libvirt.driver [None req-baf11db7-a95a-451b-8fa0-c3434c67f21e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Deleting instance files /var/lib/nova/instances/4ae60306-1870-41f5-b85a-a161040086d0_del#033[00m
Nov 29 03:49:38 np0005539552 nova_compute[233724]: 2025-11-29 08:49:38.221 233728 INFO nova.virt.libvirt.driver [None req-baf11db7-a95a-451b-8fa0-c3434c67f21e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Deletion of /var/lib/nova/instances/4ae60306-1870-41f5-b85a-a161040086d0_del complete#033[00m
Nov 29 03:49:38 np0005539552 nova_compute[233724]: 2025-11-29 08:49:38.292 233728 INFO nova.compute.manager [None req-baf11db7-a95a-451b-8fa0-c3434c67f21e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Took 0.55 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:49:38 np0005539552 nova_compute[233724]: 2025-11-29 08:49:38.293 233728 DEBUG oslo.service.loopingcall [None req-baf11db7-a95a-451b-8fa0-c3434c67f21e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:49:38 np0005539552 nova_compute[233724]: 2025-11-29 08:49:38.294 233728 DEBUG nova.compute.manager [-] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:49:38 np0005539552 nova_compute[233724]: 2025-11-29 08:49:38.294 233728 DEBUG nova.network.neutron [-] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:49:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:49:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1297540775' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:49:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:49:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1297540775' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:49:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:39.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:39.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:39 np0005539552 nova_compute[233724]: 2025-11-29 08:49:39.522 233728 DEBUG nova.network.neutron [-] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:49:39 np0005539552 nova_compute[233724]: 2025-11-29 08:49:39.544 233728 INFO nova.compute.manager [-] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Took 1.25 seconds to deallocate network for instance.#033[00m
Nov 29 03:49:39 np0005539552 nova_compute[233724]: 2025-11-29 08:49:39.644 233728 DEBUG nova.compute.manager [req-063c492f-08c9-439b-a3d6-230fc7ece68d req-58da8d0d-275b-4a0e-aa88-2b5a4b74c62b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Received event network-vif-deleted-e338bd74-7e99-4c3b-b207-58250b25c38f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:49:39 np0005539552 nova_compute[233724]: 2025-11-29 08:49:39.888 233728 INFO nova.compute.manager [None req-baf11db7-a95a-451b-8fa0-c3434c67f21e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Took 0.34 seconds to detach 1 volumes for instance.#033[00m
Nov 29 03:49:39 np0005539552 nova_compute[233724]: 2025-11-29 08:49:39.889 233728 DEBUG nova.compute.manager [None req-baf11db7-a95a-451b-8fa0-c3434c67f21e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Deleting volume: 325ae6fa-18bb-47ad-9db3-2501a8dfae90 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Nov 29 03:49:40 np0005539552 nova_compute[233724]: 2025-11-29 08:49:40.152 233728 DEBUG oslo_concurrency.lockutils [None req-baf11db7-a95a-451b-8fa0-c3434c67f21e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:40 np0005539552 nova_compute[233724]: 2025-11-29 08:49:40.153 233728 DEBUG oslo_concurrency.lockutils [None req-baf11db7-a95a-451b-8fa0-c3434c67f21e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:40 np0005539552 nova_compute[233724]: 2025-11-29 08:49:40.299 233728 DEBUG oslo_concurrency.processutils [None req-baf11db7-a95a-451b-8fa0-c3434c67f21e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:49:40 np0005539552 nova_compute[233724]: 2025-11-29 08:49:40.596 233728 DEBUG nova.compute.manager [req-69bf5bef-06ef-493e-87c0-23e5090360de req-ce71c186-f941-46e6-94ed-53b2b079b752 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Received event network-vif-plugged-e338bd74-7e99-4c3b-b207-58250b25c38f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:49:40 np0005539552 nova_compute[233724]: 2025-11-29 08:49:40.598 233728 DEBUG oslo_concurrency.lockutils [req-69bf5bef-06ef-493e-87c0-23e5090360de req-ce71c186-f941-46e6-94ed-53b2b079b752 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "4ae60306-1870-41f5-b85a-a161040086d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:40 np0005539552 nova_compute[233724]: 2025-11-29 08:49:40.598 233728 DEBUG oslo_concurrency.lockutils [req-69bf5bef-06ef-493e-87c0-23e5090360de req-ce71c186-f941-46e6-94ed-53b2b079b752 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "4ae60306-1870-41f5-b85a-a161040086d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:40 np0005539552 nova_compute[233724]: 2025-11-29 08:49:40.599 233728 DEBUG oslo_concurrency.lockutils [req-69bf5bef-06ef-493e-87c0-23e5090360de req-ce71c186-f941-46e6-94ed-53b2b079b752 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "4ae60306-1870-41f5-b85a-a161040086d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:40 np0005539552 nova_compute[233724]: 2025-11-29 08:49:40.599 233728 DEBUG nova.compute.manager [req-69bf5bef-06ef-493e-87c0-23e5090360de req-ce71c186-f941-46e6-94ed-53b2b079b752 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] No waiting events found dispatching network-vif-plugged-e338bd74-7e99-4c3b-b207-58250b25c38f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:49:40 np0005539552 nova_compute[233724]: 2025-11-29 08:49:40.600 233728 WARNING nova.compute.manager [req-69bf5bef-06ef-493e-87c0-23e5090360de req-ce71c186-f941-46e6-94ed-53b2b079b752 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Received unexpected event network-vif-plugged-e338bd74-7e99-4c3b-b207-58250b25c38f for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:49:40 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:49:40 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3789119876' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:49:40 np0005539552 nova_compute[233724]: 2025-11-29 08:49:40.781 233728 DEBUG oslo_concurrency.processutils [None req-baf11db7-a95a-451b-8fa0-c3434c67f21e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:49:40 np0005539552 nova_compute[233724]: 2025-11-29 08:49:40.789 233728 DEBUG nova.compute.provider_tree [None req-baf11db7-a95a-451b-8fa0-c3434c67f21e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:49:40 np0005539552 nova_compute[233724]: 2025-11-29 08:49:40.825 233728 DEBUG nova.scheduler.client.report [None req-baf11db7-a95a-451b-8fa0-c3434c67f21e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:49:40 np0005539552 nova_compute[233724]: 2025-11-29 08:49:40.880 233728 DEBUG oslo_concurrency.lockutils [None req-baf11db7-a95a-451b-8fa0-c3434c67f21e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:40 np0005539552 nova_compute[233724]: 2025-11-29 08:49:40.919 233728 INFO nova.scheduler.client.report [None req-baf11db7-a95a-451b-8fa0-c3434c67f21e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Deleted allocations for instance 4ae60306-1870-41f5-b85a-a161040086d0#033[00m
Nov 29 03:49:40 np0005539552 nova_compute[233724]: 2025-11-29 08:49:40.999 233728 DEBUG oslo_concurrency.lockutils [None req-baf11db7-a95a-451b-8fa0-c3434c67f21e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "4ae60306-1870-41f5-b85a-a161040086d0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.257s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:41 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:49:41 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/923483127' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:49:41 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:49:41 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/923483127' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:49:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:41.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:41.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:42 np0005539552 nova_compute[233724]: 2025-11-29 08:49:42.632 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:43 np0005539552 nova_compute[233724]: 2025-11-29 08:49:43.007 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:43.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:43.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e426 e426: 3 total, 3 up, 3 in
Nov 29 03:49:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:44 np0005539552 nova_compute[233724]: 2025-11-29 08:49:44.211 233728 DEBUG oslo_concurrency.lockutils [None req-b3d4ac6f-3a57-4a93-a527-c2f8ebc6ceb1 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "8e6ec5c0-7d7c-41e0-9f48-9e526008b00c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:44 np0005539552 nova_compute[233724]: 2025-11-29 08:49:44.212 233728 DEBUG oslo_concurrency.lockutils [None req-b3d4ac6f-3a57-4a93-a527-c2f8ebc6ceb1 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "8e6ec5c0-7d7c-41e0-9f48-9e526008b00c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:44 np0005539552 nova_compute[233724]: 2025-11-29 08:49:44.212 233728 DEBUG oslo_concurrency.lockutils [None req-b3d4ac6f-3a57-4a93-a527-c2f8ebc6ceb1 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "8e6ec5c0-7d7c-41e0-9f48-9e526008b00c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:44 np0005539552 nova_compute[233724]: 2025-11-29 08:49:44.212 233728 DEBUG oslo_concurrency.lockutils [None req-b3d4ac6f-3a57-4a93-a527-c2f8ebc6ceb1 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "8e6ec5c0-7d7c-41e0-9f48-9e526008b00c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:44 np0005539552 nova_compute[233724]: 2025-11-29 08:49:44.213 233728 DEBUG oslo_concurrency.lockutils [None req-b3d4ac6f-3a57-4a93-a527-c2f8ebc6ceb1 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "8e6ec5c0-7d7c-41e0-9f48-9e526008b00c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:44 np0005539552 nova_compute[233724]: 2025-11-29 08:49:44.214 233728 INFO nova.compute.manager [None req-b3d4ac6f-3a57-4a93-a527-c2f8ebc6ceb1 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Terminating instance#033[00m
Nov 29 03:49:44 np0005539552 nova_compute[233724]: 2025-11-29 08:49:44.215 233728 DEBUG nova.compute.manager [None req-b3d4ac6f-3a57-4a93-a527-c2f8ebc6ceb1 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:49:44 np0005539552 kernel: tapcdeb0376-ce (unregistering): left promiscuous mode
Nov 29 03:49:44 np0005539552 NetworkManager[48926]: <info>  [1764406184.2814] device (tapcdeb0376-ce): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:49:44 np0005539552 nova_compute[233724]: 2025-11-29 08:49:44.296 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:44 np0005539552 ovn_controller[133798]: 2025-11-29T08:49:44Z|00959|binding|INFO|Releasing lport cdeb0376-cedf-4745-964f-897685f6d3de from this chassis (sb_readonly=0)
Nov 29 03:49:44 np0005539552 ovn_controller[133798]: 2025-11-29T08:49:44Z|00960|binding|INFO|Setting lport cdeb0376-cedf-4745-964f-897685f6d3de down in Southbound
Nov 29 03:49:44 np0005539552 ovn_controller[133798]: 2025-11-29T08:49:44Z|00961|binding|INFO|Removing iface tapcdeb0376-ce ovn-installed in OVS
Nov 29 03:49:44 np0005539552 nova_compute[233724]: 2025-11-29 08:49:44.299 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:49:44.311 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:c2:41 10.100.0.11'], port_security=['fa:16:3e:d2:c2:41 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8e6ec5c0-7d7c-41e0-9f48-9e526008b00c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7ffcb23bac14ee49474df9aee5f7dae', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b482e439-6d52-4603-bdd4-81ab3cca06ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.200'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2432be5b-087b-4981-ab5e-ea6b1be12111, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=cdeb0376-cedf-4745-964f-897685f6d3de) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:49:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:49:44.313 143400 INFO neutron.agent.ovn.metadata.agent [-] Port cdeb0376-cedf-4745-964f-897685f6d3de in datapath 3d510715-dc99-4870-8ae9-ff599ae1a9c2 unbound from our chassis#033[00m
Nov 29 03:49:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:49:44.316 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3d510715-dc99-4870-8ae9-ff599ae1a9c2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:49:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:49:44.317 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[27fbe064-842b-4dcb-b0e8-adf4148408f2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:49:44.318 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2 namespace which is not needed anymore#033[00m
Nov 29 03:49:44 np0005539552 nova_compute[233724]: 2025-11-29 08:49:44.338 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:44 np0005539552 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d000000d9.scope: Deactivated successfully.
Nov 29 03:49:44 np0005539552 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d000000d9.scope: Consumed 18.000s CPU time.
Nov 29 03:49:44 np0005539552 systemd-machined[196379]: Machine qemu-95-instance-000000d9 terminated.
Nov 29 03:49:44 np0005539552 nova_compute[233724]: 2025-11-29 08:49:44.456 233728 INFO nova.virt.libvirt.driver [-] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Instance destroyed successfully.#033[00m
Nov 29 03:49:44 np0005539552 nova_compute[233724]: 2025-11-29 08:49:44.456 233728 DEBUG nova.objects.instance [None req-b3d4ac6f-3a57-4a93-a527-c2f8ebc6ceb1 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lazy-loading 'resources' on Instance uuid 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:49:44 np0005539552 nova_compute[233724]: 2025-11-29 08:49:44.469 233728 DEBUG nova.virt.libvirt.vif [None req-b3d4ac6f-3a57-4a93-a527-c2f8ebc6ceb1 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:47:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-616866148',display_name='tempest-TestVolumeBootPattern-volume-backed-server-616866148',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-616866148',id=217,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNg2ulNScxkLomT3KlqY7WgNFKNe+0JBK36+ZAjL1G+wn5E1JZQQEO2iNWUZiebxVVxIzMjtJPgpywxuPu+VOiXPU2+U2uS7TbUlakOcF8J45n6B4bN1PXCw5j3OOEq1iw==',key_name='tempest-keypair-1098426915',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:48:07Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b7ffcb23bac14ee49474df9aee5f7dae',ramdisk_id='',reservation_id='r-h27l0p4u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1614567902',owner_user_name='tempest-TestVolumeBootPattern-1614567902-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:48:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b576a51181b5425aa6e44a0eb0a22803',uuid=8e6ec5c0-7d7c-41e0-9f48-9e526008b00c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cdeb0376-cedf-4745-964f-897685f6d3de", "address": "fa:16:3e:d2:c2:41", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 
4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdeb0376-ce", "ovs_interfaceid": "cdeb0376-cedf-4745-964f-897685f6d3de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:49:44 np0005539552 nova_compute[233724]: 2025-11-29 08:49:44.470 233728 DEBUG nova.network.os_vif_util [None req-b3d4ac6f-3a57-4a93-a527-c2f8ebc6ceb1 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Converting VIF {"id": "cdeb0376-cedf-4745-964f-897685f6d3de", "address": "fa:16:3e:d2:c2:41", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdeb0376-ce", "ovs_interfaceid": "cdeb0376-cedf-4745-964f-897685f6d3de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:49:44 np0005539552 nova_compute[233724]: 2025-11-29 08:49:44.471 233728 DEBUG nova.network.os_vif_util [None req-b3d4ac6f-3a57-4a93-a527-c2f8ebc6ceb1 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d2:c2:41,bridge_name='br-int',has_traffic_filtering=True,id=cdeb0376-cedf-4745-964f-897685f6d3de,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcdeb0376-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:49:44 np0005539552 nova_compute[233724]: 2025-11-29 08:49:44.472 233728 DEBUG os_vif [None req-b3d4ac6f-3a57-4a93-a527-c2f8ebc6ceb1 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d2:c2:41,bridge_name='br-int',has_traffic_filtering=True,id=cdeb0376-cedf-4745-964f-897685f6d3de,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcdeb0376-ce') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:49:44 np0005539552 nova_compute[233724]: 2025-11-29 08:49:44.473 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:44 np0005539552 nova_compute[233724]: 2025-11-29 08:49:44.473 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcdeb0376-ce, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:49:44 np0005539552 nova_compute[233724]: 2025-11-29 08:49:44.475 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:44 np0005539552 nova_compute[233724]: 2025-11-29 08:49:44.478 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:44 np0005539552 nova_compute[233724]: 2025-11-29 08:49:44.480 233728 INFO os_vif [None req-b3d4ac6f-3a57-4a93-a527-c2f8ebc6ceb1 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d2:c2:41,bridge_name='br-int',has_traffic_filtering=True,id=cdeb0376-cedf-4745-964f-897685f6d3de,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcdeb0376-ce')#033[00m
Nov 29 03:49:44 np0005539552 neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2[322956]: [NOTICE]   (322960) : haproxy version is 2.8.14-c23fe91
Nov 29 03:49:44 np0005539552 neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2[322956]: [NOTICE]   (322960) : path to executable is /usr/sbin/haproxy
Nov 29 03:49:44 np0005539552 neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2[322956]: [WARNING]  (322960) : Exiting Master process...
Nov 29 03:49:44 np0005539552 neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2[322956]: [WARNING]  (322960) : Exiting Master process...
Nov 29 03:49:44 np0005539552 neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2[322956]: [ALERT]    (322960) : Current worker (322962) exited with code 143 (Terminated)
Nov 29 03:49:44 np0005539552 neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2[322956]: [WARNING]  (322960) : All workers exited. Exiting... (0)
Nov 29 03:49:44 np0005539552 systemd[1]: libpod-ab74ab9efdfc8067766b0f9dd4e1ea89c0b8e476ea98c79d00f72fe66f3a7517.scope: Deactivated successfully.
Nov 29 03:49:44 np0005539552 podman[324114]: 2025-11-29 08:49:44.521233523 +0000 UTC m=+0.064847497 container died ab74ab9efdfc8067766b0f9dd4e1ea89c0b8e476ea98c79d00f72fe66f3a7517 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 03:49:44 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ab74ab9efdfc8067766b0f9dd4e1ea89c0b8e476ea98c79d00f72fe66f3a7517-userdata-shm.mount: Deactivated successfully.
Nov 29 03:49:44 np0005539552 systemd[1]: var-lib-containers-storage-overlay-853f6491efbfa599397c83579570cb7247e8205e358b52dbdd102880dd950411-merged.mount: Deactivated successfully.
Nov 29 03:49:44 np0005539552 podman[324114]: 2025-11-29 08:49:44.583061257 +0000 UTC m=+0.126675261 container cleanup ab74ab9efdfc8067766b0f9dd4e1ea89c0b8e476ea98c79d00f72fe66f3a7517 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:49:44 np0005539552 nova_compute[233724]: 2025-11-29 08:49:44.586 233728 DEBUG nova.compute.manager [req-a68fc4c5-e59a-4701-9d92-6401e4421b38 req-4c9b6f7a-1c5c-4a9b-9fe0-d89edca8751f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Received event network-vif-unplugged-cdeb0376-cedf-4745-964f-897685f6d3de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:49:44 np0005539552 nova_compute[233724]: 2025-11-29 08:49:44.586 233728 DEBUG oslo_concurrency.lockutils [req-a68fc4c5-e59a-4701-9d92-6401e4421b38 req-4c9b6f7a-1c5c-4a9b-9fe0-d89edca8751f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8e6ec5c0-7d7c-41e0-9f48-9e526008b00c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:44 np0005539552 nova_compute[233724]: 2025-11-29 08:49:44.587 233728 DEBUG oslo_concurrency.lockutils [req-a68fc4c5-e59a-4701-9d92-6401e4421b38 req-4c9b6f7a-1c5c-4a9b-9fe0-d89edca8751f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8e6ec5c0-7d7c-41e0-9f48-9e526008b00c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:44 np0005539552 nova_compute[233724]: 2025-11-29 08:49:44.587 233728 DEBUG oslo_concurrency.lockutils [req-a68fc4c5-e59a-4701-9d92-6401e4421b38 req-4c9b6f7a-1c5c-4a9b-9fe0-d89edca8751f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8e6ec5c0-7d7c-41e0-9f48-9e526008b00c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:44 np0005539552 nova_compute[233724]: 2025-11-29 08:49:44.587 233728 DEBUG nova.compute.manager [req-a68fc4c5-e59a-4701-9d92-6401e4421b38 req-4c9b6f7a-1c5c-4a9b-9fe0-d89edca8751f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] No waiting events found dispatching network-vif-unplugged-cdeb0376-cedf-4745-964f-897685f6d3de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:49:44 np0005539552 nova_compute[233724]: 2025-11-29 08:49:44.587 233728 DEBUG nova.compute.manager [req-a68fc4c5-e59a-4701-9d92-6401e4421b38 req-4c9b6f7a-1c5c-4a9b-9fe0-d89edca8751f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Received event network-vif-unplugged-cdeb0376-cedf-4745-964f-897685f6d3de for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:49:44 np0005539552 systemd[1]: libpod-conmon-ab74ab9efdfc8067766b0f9dd4e1ea89c0b8e476ea98c79d00f72fe66f3a7517.scope: Deactivated successfully.
Nov 29 03:49:44 np0005539552 podman[324170]: 2025-11-29 08:49:44.677578321 +0000 UTC m=+0.056895212 container remove ab74ab9efdfc8067766b0f9dd4e1ea89c0b8e476ea98c79d00f72fe66f3a7517 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:49:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:49:44.684 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[1e181ac6-0ed6-41a7-a82d-5da06b97a0e7]: (4, ('Sat Nov 29 08:49:44 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2 (ab74ab9efdfc8067766b0f9dd4e1ea89c0b8e476ea98c79d00f72fe66f3a7517)\nab74ab9efdfc8067766b0f9dd4e1ea89c0b8e476ea98c79d00f72fe66f3a7517\nSat Nov 29 08:49:44 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2 (ab74ab9efdfc8067766b0f9dd4e1ea89c0b8e476ea98c79d00f72fe66f3a7517)\nab74ab9efdfc8067766b0f9dd4e1ea89c0b8e476ea98c79d00f72fe66f3a7517\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:49:44.686 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f3e374f9-e143-474d-9f2d-5da3724d1d35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:49:44.686 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d510715-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:49:44 np0005539552 kernel: tap3d510715-d0: left promiscuous mode
Nov 29 03:49:44 np0005539552 nova_compute[233724]: 2025-11-29 08:49:44.689 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:44 np0005539552 nova_compute[233724]: 2025-11-29 08:49:44.706 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:44 np0005539552 nova_compute[233724]: 2025-11-29 08:49:44.707 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:49:44.710 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[5690a788-81e8-4e67-98b2-ce61dfe4b011]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:49:44.726 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[92bb3646-8634-4311-862f-9633b5732714]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:49:44.728 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[8fc2e030-919c-4775-bb58-1d1fae694e92]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:49:44.743 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[5158e481-6d07-4401-9274-07c693260970]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 913209, 'reachable_time': 36733, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324184, 'error': None, 'target': 'ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:49:44.746 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:49:44 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:49:44.746 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[7f1b9bda-148d-4143-8cbe-cf332ccdb39d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:44 np0005539552 systemd[1]: run-netns-ovnmeta\x2d3d510715\x2ddc99\x2d4870\x2d8ae9\x2dff599ae1a9c2.mount: Deactivated successfully.
Nov 29 03:49:44 np0005539552 nova_compute[233724]: 2025-11-29 08:49:44.750 233728 INFO nova.virt.libvirt.driver [None req-b3d4ac6f-3a57-4a93-a527-c2f8ebc6ceb1 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Deleting instance files /var/lib/nova/instances/8e6ec5c0-7d7c-41e0-9f48-9e526008b00c_del#033[00m
Nov 29 03:49:44 np0005539552 nova_compute[233724]: 2025-11-29 08:49:44.750 233728 INFO nova.virt.libvirt.driver [None req-b3d4ac6f-3a57-4a93-a527-c2f8ebc6ceb1 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Deletion of /var/lib/nova/instances/8e6ec5c0-7d7c-41e0-9f48-9e526008b00c_del complete#033[00m
Nov 29 03:49:44 np0005539552 nova_compute[233724]: 2025-11-29 08:49:44.817 233728 INFO nova.compute.manager [None req-b3d4ac6f-3a57-4a93-a527-c2f8ebc6ceb1 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Took 0.60 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:49:44 np0005539552 nova_compute[233724]: 2025-11-29 08:49:44.819 233728 DEBUG oslo.service.loopingcall [None req-b3d4ac6f-3a57-4a93-a527-c2f8ebc6ceb1 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:49:44 np0005539552 nova_compute[233724]: 2025-11-29 08:49:44.820 233728 DEBUG nova.compute.manager [-] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:49:44 np0005539552 nova_compute[233724]: 2025-11-29 08:49:44.820 233728 DEBUG nova.network.neutron [-] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:49:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:45.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:45.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:45 np0005539552 nova_compute[233724]: 2025-11-29 08:49:45.884 233728 DEBUG nova.network.neutron [-] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:49:45 np0005539552 nova_compute[233724]: 2025-11-29 08:49:45.908 233728 INFO nova.compute.manager [-] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Took 1.09 seconds to deallocate network for instance.#033[00m
Nov 29 03:49:46 np0005539552 nova_compute[233724]: 2025-11-29 08:49:46.014 233728 DEBUG nova.compute.manager [req-637057c5-23fc-46db-82a3-0a680ba10462 req-1b677aa4-3864-4d31-b222-4c13e567e036 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Received event network-vif-deleted-cdeb0376-cedf-4745-964f-897685f6d3de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:49:46 np0005539552 nova_compute[233724]: 2025-11-29 08:49:46.168 233728 INFO nova.compute.manager [None req-b3d4ac6f-3a57-4a93-a527-c2f8ebc6ceb1 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Took 0.26 seconds to detach 1 volumes for instance.#033[00m
Nov 29 03:49:46 np0005539552 nova_compute[233724]: 2025-11-29 08:49:46.170 233728 DEBUG nova.compute.manager [None req-b3d4ac6f-3a57-4a93-a527-c2f8ebc6ceb1 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Deleting volume: bf7e0110-334f-4129-a24d-43eb68f20833 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Nov 29 03:49:46 np0005539552 nova_compute[233724]: 2025-11-29 08:49:46.431 233728 DEBUG oslo_concurrency.lockutils [None req-b3d4ac6f-3a57-4a93-a527-c2f8ebc6ceb1 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:46 np0005539552 nova_compute[233724]: 2025-11-29 08:49:46.431 233728 DEBUG oslo_concurrency.lockutils [None req-b3d4ac6f-3a57-4a93-a527-c2f8ebc6ceb1 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:46 np0005539552 nova_compute[233724]: 2025-11-29 08:49:46.479 233728 DEBUG oslo_concurrency.processutils [None req-b3d4ac6f-3a57-4a93-a527-c2f8ebc6ceb1 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:49:46 np0005539552 nova_compute[233724]: 2025-11-29 08:49:46.721 233728 DEBUG nova.compute.manager [req-977030ab-b777-40d7-aec2-083d71b332eb req-4b568e8a-3ce2-424a-8587-f325ec50abb5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Received event network-vif-plugged-cdeb0376-cedf-4745-964f-897685f6d3de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:49:46 np0005539552 nova_compute[233724]: 2025-11-29 08:49:46.722 233728 DEBUG oslo_concurrency.lockutils [req-977030ab-b777-40d7-aec2-083d71b332eb req-4b568e8a-3ce2-424a-8587-f325ec50abb5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8e6ec5c0-7d7c-41e0-9f48-9e526008b00c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:46 np0005539552 nova_compute[233724]: 2025-11-29 08:49:46.723 233728 DEBUG oslo_concurrency.lockutils [req-977030ab-b777-40d7-aec2-083d71b332eb req-4b568e8a-3ce2-424a-8587-f325ec50abb5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8e6ec5c0-7d7c-41e0-9f48-9e526008b00c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:46 np0005539552 nova_compute[233724]: 2025-11-29 08:49:46.723 233728 DEBUG oslo_concurrency.lockutils [req-977030ab-b777-40d7-aec2-083d71b332eb req-4b568e8a-3ce2-424a-8587-f325ec50abb5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8e6ec5c0-7d7c-41e0-9f48-9e526008b00c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:46 np0005539552 nova_compute[233724]: 2025-11-29 08:49:46.724 233728 DEBUG nova.compute.manager [req-977030ab-b777-40d7-aec2-083d71b332eb req-4b568e8a-3ce2-424a-8587-f325ec50abb5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] No waiting events found dispatching network-vif-plugged-cdeb0376-cedf-4745-964f-897685f6d3de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:49:46 np0005539552 nova_compute[233724]: 2025-11-29 08:49:46.724 233728 WARNING nova.compute.manager [req-977030ab-b777-40d7-aec2-083d71b332eb req-4b568e8a-3ce2-424a-8587-f325ec50abb5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Received unexpected event network-vif-plugged-cdeb0376-cedf-4745-964f-897685f6d3de for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:49:46 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:49:46 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1453827235' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:49:46 np0005539552 nova_compute[233724]: 2025-11-29 08:49:46.956 233728 DEBUG oslo_concurrency.processutils [None req-b3d4ac6f-3a57-4a93-a527-c2f8ebc6ceb1 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:49:46 np0005539552 nova_compute[233724]: 2025-11-29 08:49:46.965 233728 DEBUG nova.compute.provider_tree [None req-b3d4ac6f-3a57-4a93-a527-c2f8ebc6ceb1 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:49:46 np0005539552 nova_compute[233724]: 2025-11-29 08:49:46.983 233728 DEBUG nova.scheduler.client.report [None req-b3d4ac6f-3a57-4a93-a527-c2f8ebc6ceb1 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:49:47 np0005539552 nova_compute[233724]: 2025-11-29 08:49:47.015 233728 DEBUG oslo_concurrency.lockutils [None req-b3d4ac6f-3a57-4a93-a527-c2f8ebc6ceb1 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:47 np0005539552 nova_compute[233724]: 2025-11-29 08:49:47.042 233728 INFO nova.scheduler.client.report [None req-b3d4ac6f-3a57-4a93-a527-c2f8ebc6ceb1 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Deleted allocations for instance 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c#033[00m
Nov 29 03:49:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:49:47 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1482592701' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:49:47 np0005539552 nova_compute[233724]: 2025-11-29 08:49:47.104 233728 DEBUG oslo_concurrency.lockutils [None req-b3d4ac6f-3a57-4a93-a527-c2f8ebc6ceb1 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "8e6ec5c0-7d7c-41e0-9f48-9e526008b00c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.892s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:49:47 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1482592701' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:49:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:49:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:47.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:49:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:47.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:47 np0005539552 nova_compute[233724]: 2025-11-29 08:49:47.635 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:47 np0005539552 nova_compute[233724]: 2025-11-29 08:49:47.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:49:47 np0005539552 nova_compute[233724]: 2025-11-29 08:49:47.958 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:47 np0005539552 nova_compute[233724]: 2025-11-29 08:49:47.958 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:47 np0005539552 nova_compute[233724]: 2025-11-29 08:49:47.959 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:47 np0005539552 nova_compute[233724]: 2025-11-29 08:49:47.959 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:49:47 np0005539552 nova_compute[233724]: 2025-11-29 08:49:47.959 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:49:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:49:48 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/438301169' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:49:48 np0005539552 nova_compute[233724]: 2025-11-29 08:49:48.416 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:49:48 np0005539552 nova_compute[233724]: 2025-11-29 08:49:48.617 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:49:48 np0005539552 nova_compute[233724]: 2025-11-29 08:49:48.619 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4110MB free_disk=20.988113403320312GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:49:48 np0005539552 nova_compute[233724]: 2025-11-29 08:49:48.619 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:48 np0005539552 nova_compute[233724]: 2025-11-29 08:49:48.619 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:48 np0005539552 nova_compute[233724]: 2025-11-29 08:49:48.725 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:49:48 np0005539552 nova_compute[233724]: 2025-11-29 08:49:48.725 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:49:48 np0005539552 nova_compute[233724]: 2025-11-29 08:49:48.753 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:49:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:49 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:49:49 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2996722326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:49:49 np0005539552 nova_compute[233724]: 2025-11-29 08:49:49.233 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:49:49 np0005539552 nova_compute[233724]: 2025-11-29 08:49:49.241 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:49:49 np0005539552 nova_compute[233724]: 2025-11-29 08:49:49.261 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:49:49 np0005539552 nova_compute[233724]: 2025-11-29 08:49:49.292 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:49:49 np0005539552 nova_compute[233724]: 2025-11-29 08:49:49.293 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:49 np0005539552 nova_compute[233724]: 2025-11-29 08:49:49.478 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:49.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:49.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:49 np0005539552 podman[324255]: 2025-11-29 08:49:49.993357088 +0000 UTC m=+0.078302359 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible)
Nov 29 03:49:50 np0005539552 podman[324256]: 2025-11-29 08:49:50.000316885 +0000 UTC m=+0.074318661 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 03:49:50 np0005539552 podman[324257]: 2025-11-29 08:49:50.052432138 +0000 UTC m=+0.122679343 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 29 03:49:50 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e427 e427: 3 total, 3 up, 3 in
Nov 29 03:49:50 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e428 e428: 3 total, 3 up, 3 in
Nov 29 03:49:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:51.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:51.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:52 np0005539552 nova_compute[233724]: 2025-11-29 08:49:52.639 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:52 np0005539552 nova_compute[233724]: 2025-11-29 08:49:52.986 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764406177.9844103, 4ae60306-1870-41f5-b85a-a161040086d0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:49:52 np0005539552 nova_compute[233724]: 2025-11-29 08:49:52.986 233728 INFO nova.compute.manager [-] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:49:53 np0005539552 nova_compute[233724]: 2025-11-29 08:49:53.006 233728 DEBUG nova.compute.manager [None req-57144e77-fcc5-4c48-ae28-f266097a579e - - - - - -] [instance: 4ae60306-1870-41f5-b85a-a161040086d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:49:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:53.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:53.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:54 np0005539552 nova_compute[233724]: 2025-11-29 08:49:54.482 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:55.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:55.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:55 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e429 e429: 3 total, 3 up, 3 in
Nov 29 03:49:56 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:49:56 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:49:56 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:49:57 np0005539552 nova_compute[233724]: 2025-11-29 08:49:57.294 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:49:57 np0005539552 nova_compute[233724]: 2025-11-29 08:49:57.295 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:49:57 np0005539552 nova_compute[233724]: 2025-11-29 08:49:57.295 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:49:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:57.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:57.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:57 np0005539552 nova_compute[233724]: 2025-11-29 08:49:57.641 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:59 np0005539552 nova_compute[233724]: 2025-11-29 08:49:59.454 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764406184.4527314, 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:49:59 np0005539552 nova_compute[233724]: 2025-11-29 08:49:59.455 233728 INFO nova.compute.manager [-] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:49:59 np0005539552 nova_compute[233724]: 2025-11-29 08:49:59.485 233728 DEBUG nova.compute.manager [None req-63ea5d63-0d87-4737-a101-f562f3a53324 - - - - - -] [instance: 8e6ec5c0-7d7c-41e0-9f48-9e526008b00c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:49:59 np0005539552 nova_compute[233724]: 2025-11-29 08:49:59.485 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:49:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965576f0 =====
Nov 29 03:49:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:59.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965576f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:59 np0005539552 radosgw[83248]: beast: 0x7fec965576f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:59.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:59 np0005539552 nova_compute[233724]: 2025-11-29 08:49:59.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:49:59 np0005539552 nova_compute[233724]: 2025-11-29 08:49:59.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:01.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965576f0 =====
Nov 29 03:50:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965576f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:01 np0005539552 radosgw[83248]: beast: 0x7fec965576f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:01.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:01 np0005539552 ceph-mon[77121]: overall HEALTH_OK
Nov 29 03:50:01 np0005539552 nova_compute[233724]: 2025-11-29 08:50:01.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:02 np0005539552 nova_compute[233724]: 2025-11-29 08:50:02.645 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965576f0 =====
Nov 29 03:50:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:03.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965576f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:03 np0005539552 radosgw[83248]: beast: 0x7fec965576f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:03.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:04 np0005539552 nova_compute[233724]: 2025-11-29 08:50:04.487 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:04 np0005539552 nova_compute[233724]: 2025-11-29 08:50:04.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:05 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:50:05 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:50:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:05.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:05.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:07.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:07.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:07 np0005539552 nova_compute[233724]: 2025-11-29 08:50:07.650 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:07 np0005539552 nova_compute[233724]: 2025-11-29 08:50:07.919 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:08 np0005539552 nova_compute[233724]: 2025-11-29 08:50:08.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:08 np0005539552 nova_compute[233724]: 2025-11-29 08:50:08.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:50:08 np0005539552 nova_compute[233724]: 2025-11-29 08:50:08.946 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:50:09 np0005539552 nova_compute[233724]: 2025-11-29 08:50:09.490 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:09.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:09.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:10 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:50:10 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1063182756' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:50:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:11.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:11.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:50:11.612 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=67, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=66) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:50:11 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:50:11.612 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:50:11 np0005539552 nova_compute[233724]: 2025-11-29 08:50:11.615 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:12 np0005539552 nova_compute[233724]: 2025-11-29 08:50:12.652 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:13.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:13.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:14 np0005539552 nova_compute[233724]: 2025-11-29 08:50:14.492 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:15.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:15.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:17.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:17.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:17 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:50:17.614 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '67'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:50:17 np0005539552 nova_compute[233724]: 2025-11-29 08:50:17.655 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:19 np0005539552 nova_compute[233724]: 2025-11-29 08:50:19.494 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:19.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:19.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:50:20.657 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:50:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:50:20.657 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:50:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:50:20.658 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:50:21 np0005539552 podman[324624]: 2025-11-29 08:50:21.006440454 +0000 UTC m=+0.093539709 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:50:21 np0005539552 podman[324623]: 2025-11-29 08:50:21.014669706 +0000 UTC m=+0.098874863 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:50:21 np0005539552 podman[324625]: 2025-11-29 08:50:21.043390649 +0000 UTC m=+0.127052531 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 03:50:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:21.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:21.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:22 np0005539552 nova_compute[233724]: 2025-11-29 08:50:22.657 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:23.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:23.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:24 np0005539552 nova_compute[233724]: 2025-11-29 08:50:24.496 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:24 np0005539552 nova_compute[233724]: 2025-11-29 08:50:24.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:24 np0005539552 nova_compute[233724]: 2025-11-29 08:50:24.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 03:50:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:25.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:25.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:25 np0005539552 nova_compute[233724]: 2025-11-29 08:50:25.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:27.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:27.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:27 np0005539552 nova_compute[233724]: 2025-11-29 08:50:27.659 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:29 np0005539552 nova_compute[233724]: 2025-11-29 08:50:29.498 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:29.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:29.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:31.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:31.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:32 np0005539552 nova_compute[233724]: 2025-11-29 08:50:32.662 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:33.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:33.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:34 np0005539552 nova_compute[233724]: 2025-11-29 08:50:34.500 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:35.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:35.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:35 np0005539552 nova_compute[233724]: 2025-11-29 08:50:35.866 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:36 np0005539552 nova_compute[233724]: 2025-11-29 08:50:36.109 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:37.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:37.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:37 np0005539552 nova_compute[233724]: 2025-11-29 08:50:37.663 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:39 np0005539552 nova_compute[233724]: 2025-11-29 08:50:39.504 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:39.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:39.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:41.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:41.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:42 np0005539552 nova_compute[233724]: 2025-11-29 08:50:42.666 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:43.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:43.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:44 np0005539552 nova_compute[233724]: 2025-11-29 08:50:44.506 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:45.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 03:50:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:45.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 03:50:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:47.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:47.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:47 np0005539552 nova_compute[233724]: 2025-11-29 08:50:47.668 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:48 np0005539552 nova_compute[233724]: 2025-11-29 08:50:48.943 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:48 np0005539552 nova_compute[233724]: 2025-11-29 08:50:48.968 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:50:48 np0005539552 nova_compute[233724]: 2025-11-29 08:50:48.968 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:50:48 np0005539552 nova_compute[233724]: 2025-11-29 08:50:48.969 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:50:48 np0005539552 nova_compute[233724]: 2025-11-29 08:50:48.969 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:50:48 np0005539552 nova_compute[233724]: 2025-11-29 08:50:48.970 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:50:49 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:50:49 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1989317683' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:50:49 np0005539552 nova_compute[233724]: 2025-11-29 08:50:49.412 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:50:49 np0005539552 nova_compute[233724]: 2025-11-29 08:50:49.509 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:49.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:49.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:49 np0005539552 nova_compute[233724]: 2025-11-29 08:50:49.660 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:50:49 np0005539552 nova_compute[233724]: 2025-11-29 08:50:49.662 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4133MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:50:49 np0005539552 nova_compute[233724]: 2025-11-29 08:50:49.663 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:50:49 np0005539552 nova_compute[233724]: 2025-11-29 08:50:49.663 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:50:49 np0005539552 nova_compute[233724]: 2025-11-29 08:50:49.758 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:50:49 np0005539552 nova_compute[233724]: 2025-11-29 08:50:49.758 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:50:49 np0005539552 nova_compute[233724]: 2025-11-29 08:50:49.790 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:50:50 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:50:50 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4108711389' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:50:50 np0005539552 nova_compute[233724]: 2025-11-29 08:50:50.280 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:50:50 np0005539552 nova_compute[233724]: 2025-11-29 08:50:50.288 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:50:50 np0005539552 nova_compute[233724]: 2025-11-29 08:50:50.309 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:50:50 np0005539552 nova_compute[233724]: 2025-11-29 08:50:50.336 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:50:50 np0005539552 nova_compute[233724]: 2025-11-29 08:50:50.337 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:50:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:50:51.221 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=68, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=67) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:50:51 np0005539552 nova_compute[233724]: 2025-11-29 08:50:51.222 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:51 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:50:51.223 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:50:51 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #148. Immutable memtables: 0.
Nov 29 03:50:51 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:50:51.289138) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:50:51 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 148
Nov 29 03:50:51 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406251289230, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 1292, "num_deletes": 253, "total_data_size": 2724089, "memory_usage": 2758928, "flush_reason": "Manual Compaction"}
Nov 29 03:50:51 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #149: started
Nov 29 03:50:51 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406251305703, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 149, "file_size": 1796726, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 72371, "largest_seqno": 73658, "table_properties": {"data_size": 1791102, "index_size": 2954, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12452, "raw_average_key_size": 20, "raw_value_size": 1779702, "raw_average_value_size": 2893, "num_data_blocks": 131, "num_entries": 615, "num_filter_entries": 615, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406151, "oldest_key_time": 1764406151, "file_creation_time": 1764406251, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:50:51 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 16650 microseconds, and 8095 cpu microseconds.
Nov 29 03:50:51 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:50:51 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:50:51.305791) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #149: 1796726 bytes OK
Nov 29 03:50:51 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:50:51.305818) [db/memtable_list.cc:519] [default] Level-0 commit table #149 started
Nov 29 03:50:51 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:50:51.307708) [db/memtable_list.cc:722] [default] Level-0 commit table #149: memtable #1 done
Nov 29 03:50:51 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:50:51.307730) EVENT_LOG_v1 {"time_micros": 1764406251307723, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:50:51 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:50:51.307754) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:50:51 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 2717948, prev total WAL file size 2717948, number of live WAL files 2.
Nov 29 03:50:51 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000145.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:50:51 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:50:51.309211) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Nov 29 03:50:51 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:50:51 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [149(1754KB)], [147(11MB)]
Nov 29 03:50:51 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406251309319, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [149], "files_L6": [147], "score": -1, "input_data_size": 14337788, "oldest_snapshot_seqno": -1}
Nov 29 03:50:51 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #150: 10191 keys, 12486857 bytes, temperature: kUnknown
Nov 29 03:50:51 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406251428092, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 150, "file_size": 12486857, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12421714, "index_size": 38534, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25541, "raw_key_size": 270060, "raw_average_key_size": 26, "raw_value_size": 12243468, "raw_average_value_size": 1201, "num_data_blocks": 1458, "num_entries": 10191, "num_filter_entries": 10191, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764406251, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 150, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:50:51 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:50:51 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:50:51.428468) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 12486857 bytes
Nov 29 03:50:51 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:50:51.430065) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 120.6 rd, 105.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 12.0 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(14.9) write-amplify(6.9) OK, records in: 10714, records dropped: 523 output_compression: NoCompression
Nov 29 03:50:51 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:50:51.430085) EVENT_LOG_v1 {"time_micros": 1764406251430074, "job": 94, "event": "compaction_finished", "compaction_time_micros": 118844, "compaction_time_cpu_micros": 65166, "output_level": 6, "num_output_files": 1, "total_output_size": 12486857, "num_input_records": 10714, "num_output_records": 10191, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:50:51 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:50:51 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406251430638, "job": 94, "event": "table_file_deletion", "file_number": 149}
Nov 29 03:50:51 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000147.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:50:51 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406251433131, "job": 94, "event": "table_file_deletion", "file_number": 147}
Nov 29 03:50:51 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:50:51.308939) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:50:51 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:50:51.433238) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:50:51 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:50:51.433243) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:50:51 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:50:51.433244) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:50:51 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:50:51.433246) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:50:51 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:50:51.433247) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:50:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:51.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:50:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:51.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:50:51 np0005539552 podman[324797]: 2025-11-29 08:50:51.99325504 +0000 UTC m=+0.072970975 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Nov 29 03:50:52 np0005539552 podman[324796]: 2025-11-29 08:50:52.04785511 +0000 UTC m=+0.122492858 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 29 03:50:52 np0005539552 podman[324798]: 2025-11-29 08:50:52.055962388 +0000 UTC m=+0.124894733 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:50:52 np0005539552 nova_compute[233724]: 2025-11-29 08:50:52.670 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:53.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:53.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:54 np0005539552 nova_compute[233724]: 2025-11-29 08:50:54.513 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:55.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:55.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:55 np0005539552 nova_compute[233724]: 2025-11-29 08:50:55.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:55 np0005539552 nova_compute[233724]: 2025-11-29 08:50:55.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:55 np0005539552 nova_compute[233724]: 2025-11-29 08:50:55.925 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 03:50:55 np0005539552 nova_compute[233724]: 2025-11-29 08:50:55.944 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 03:50:56 np0005539552 nova_compute[233724]: 2025-11-29 08:50:56.943 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:56 np0005539552 nova_compute[233724]: 2025-11-29 08:50:56.944 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:50:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:57.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:57.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:57 np0005539552 nova_compute[233724]: 2025-11-29 08:50:57.672 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:59 np0005539552 nova_compute[233724]: 2025-11-29 08:50:59.516 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:59.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:50:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:59.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:00 np0005539552 nova_compute[233724]: 2025-11-29 08:51:00.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:51:01 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:51:01.224 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '68'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:51:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:01.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:01.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:01 np0005539552 nova_compute[233724]: 2025-11-29 08:51:01.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:51:02 np0005539552 nova_compute[233724]: 2025-11-29 08:51:02.674 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:03.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:03.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:03 np0005539552 nova_compute[233724]: 2025-11-29 08:51:03.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:51:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:04 np0005539552 nova_compute[233724]: 2025-11-29 08:51:04.518 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:05.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:05 np0005539552 podman[325091]: 2025-11-29 08:51:05.59636526 +0000 UTC m=+0.077375454 container exec 2e449e743ed8b622f7d948671af16b72c407124db8f8583030f534a9757aa05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:51:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:05.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:05 np0005539552 podman[325091]: 2025-11-29 08:51:05.718119587 +0000 UTC m=+0.199129781 container exec_died 2e449e743ed8b622f7d948671af16b72c407124db8f8583030f534a9757aa05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 03:51:05 np0005539552 nova_compute[233724]: 2025-11-29 08:51:05.922 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:51:06 np0005539552 podman[325246]: 2025-11-29 08:51:06.493369505 +0000 UTC m=+0.149650389 container exec fbc97e64df6fe35a26c07fd8827c68b935db0d4237087be449d0fd015b40e005 (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-2-efzvmt)
Nov 29 03:51:06 np0005539552 podman[325246]: 2025-11-29 08:51:06.500547929 +0000 UTC m=+0.156828773 container exec_died fbc97e64df6fe35a26c07fd8827c68b935db0d4237087be449d0fd015b40e005 (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-2-efzvmt)
Nov 29 03:51:06 np0005539552 podman[325311]: 2025-11-29 08:51:06.83755202 +0000 UTC m=+0.064871987 container exec 6c20d6486ea31b9565e62504d583e2d8c9512b707e8e2015e3cef66c15f3b3e6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, distribution-scope=public, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, vcs-type=git, io.openshift.expose-services=)
Nov 29 03:51:06 np0005539552 podman[325311]: 2025-11-29 08:51:06.854449125 +0000 UTC m=+0.081769122 container exec_died 6c20d6486ea31b9565e62504d583e2d8c9512b707e8e2015e3cef66c15f3b3e6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, architecture=x86_64, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Nov 29 03:51:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:07.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:07.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:07 np0005539552 nova_compute[233724]: 2025-11-29 08:51:07.676 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:07 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:51:07 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:51:07 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 03:51:07 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:51:07 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:51:07 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:51:08 np0005539552 nova_compute[233724]: 2025-11-29 08:51:08.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:51:08 np0005539552 nova_compute[233724]: 2025-11-29 08:51:08.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:51:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:08 np0005539552 nova_compute[233724]: 2025-11-29 08:51:08.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:51:08 np0005539552 nova_compute[233724]: 2025-11-29 08:51:08.941 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:51:09 np0005539552 nova_compute[233724]: 2025-11-29 08:51:09.520 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:09.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:09.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:09 np0005539552 nova_compute[233724]: 2025-11-29 08:51:09.936 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:51:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:11.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:11.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:12 np0005539552 nova_compute[233724]: 2025-11-29 08:51:12.678 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:13.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:13.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:14 np0005539552 nova_compute[233724]: 2025-11-29 08:51:14.522 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:15.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:15.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:16 np0005539552 nova_compute[233724]: 2025-11-29 08:51:16.918 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:51:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:17.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:17.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:17 np0005539552 nova_compute[233724]: 2025-11-29 08:51:17.679 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:18 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:51:18 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:51:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:19 np0005539552 nova_compute[233724]: 2025-11-29 08:51:19.525 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:19.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:19.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:51:20.659 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:51:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:51:20.660 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:51:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:51:20.660 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:51:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:21.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:21.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:22 np0005539552 nova_compute[233724]: 2025-11-29 08:51:22.680 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:22 np0005539552 podman[325598]: 2025-11-29 08:51:22.793099263 +0000 UTC m=+0.079050438 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:51:22 np0005539552 podman[325597]: 2025-11-29 08:51:22.806416292 +0000 UTC m=+0.092913232 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=multipathd)
Nov 29 03:51:22 np0005539552 podman[325599]: 2025-11-29 08:51:22.865476651 +0000 UTC m=+0.148355994 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:51:23 np0005539552 ovn_controller[133798]: 2025-11-29T08:51:23Z|00962|memory_trim|INFO|Detected inactivity (last active 30000 ms ago): trimming memory
Nov 29 03:51:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:23.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:23.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:23 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:24 np0005539552 nova_compute[233724]: 2025-11-29 08:51:24.527 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:25.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:25.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:27.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:27.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:27 np0005539552 nova_compute[233724]: 2025-11-29 08:51:27.682 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:29 np0005539552 nova_compute[233724]: 2025-11-29 08:51:29.529 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:29.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:29.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:31.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:31.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:32 np0005539552 nova_compute[233724]: 2025-11-29 08:51:32.684 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:33.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:33.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:34 np0005539552 nova_compute[233724]: 2025-11-29 08:51:34.531 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:35.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:35.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:37.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:37.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:37 np0005539552 nova_compute[233724]: 2025-11-29 08:51:37.686 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:39 np0005539552 nova_compute[233724]: 2025-11-29 08:51:39.534 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:39.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:39.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:41.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:41.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:42 np0005539552 nova_compute[233724]: 2025-11-29 08:51:42.688 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:43.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:43.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:44 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e430 e430: 3 total, 3 up, 3 in
Nov 29 03:51:44 np0005539552 nova_compute[233724]: 2025-11-29 08:51:44.536 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:44 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:45.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:45.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:47.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:47.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:47 np0005539552 nova_compute[233724]: 2025-11-29 08:51:47.690 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:49 np0005539552 nova_compute[233724]: 2025-11-29 08:51:49.539 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:49 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:49.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:49.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:49 np0005539552 nova_compute[233724]: 2025-11-29 08:51:49.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:51:49 np0005539552 nova_compute[233724]: 2025-11-29 08:51:49.970 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:51:49 np0005539552 nova_compute[233724]: 2025-11-29 08:51:49.971 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:51:49 np0005539552 nova_compute[233724]: 2025-11-29 08:51:49.971 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:51:49 np0005539552 nova_compute[233724]: 2025-11-29 08:51:49.971 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:51:49 np0005539552 nova_compute[233724]: 2025-11-29 08:51:49.973 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:51:50 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:51:50 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3109596458' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:51:50 np0005539552 nova_compute[233724]: 2025-11-29 08:51:50.463 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:51:50 np0005539552 nova_compute[233724]: 2025-11-29 08:51:50.668 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:51:50 np0005539552 nova_compute[233724]: 2025-11-29 08:51:50.669 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4131MB free_disk=20.98813247680664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:51:50 np0005539552 nova_compute[233724]: 2025-11-29 08:51:50.669 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:51:50 np0005539552 nova_compute[233724]: 2025-11-29 08:51:50.669 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:51:50 np0005539552 nova_compute[233724]: 2025-11-29 08:51:50.745 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:51:50 np0005539552 nova_compute[233724]: 2025-11-29 08:51:50.745 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:51:50 np0005539552 nova_compute[233724]: 2025-11-29 08:51:50.758 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Refreshing inventories for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:51:50 np0005539552 nova_compute[233724]: 2025-11-29 08:51:50.804 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Updating ProviderTree inventory for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:51:50 np0005539552 nova_compute[233724]: 2025-11-29 08:51:50.805 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Updating inventory in ProviderTree for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 03:51:50 np0005539552 nova_compute[233724]: 2025-11-29 08:51:50.844 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Refreshing aggregate associations for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:51:50 np0005539552 nova_compute[233724]: 2025-11-29 08:51:50.894 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Refreshing trait associations for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371, traits: HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_MMX,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_FDC,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:51:50 np0005539552 nova_compute[233724]: 2025-11-29 08:51:50.926 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:51:51 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:51:51 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3141537053' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:51:51 np0005539552 nova_compute[233724]: 2025-11-29 08:51:51.404 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:51:51 np0005539552 nova_compute[233724]: 2025-11-29 08:51:51.410 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:51:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:51.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:51.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:51 np0005539552 nova_compute[233724]: 2025-11-29 08:51:51.722 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:51:51 np0005539552 nova_compute[233724]: 2025-11-29 08:51:51.726 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:51:51 np0005539552 nova_compute[233724]: 2025-11-29 08:51:51.726 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:51:52 np0005539552 nova_compute[233724]: 2025-11-29 08:51:52.692 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:53 np0005539552 podman[325777]: 2025-11-29 08:51:53.015648117 +0000 UTC m=+0.091774261 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:51:53 np0005539552 podman[325778]: 2025-11-29 08:51:53.020763465 +0000 UTC m=+0.095790089 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=ovn_metadata_agent)
Nov 29 03:51:53 np0005539552 podman[325779]: 2025-11-29 08:51:53.030843716 +0000 UTC m=+0.108301496 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_controller)
Nov 29 03:51:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:53.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:53.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:54 np0005539552 nova_compute[233724]: 2025-11-29 08:51:54.542 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:54 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:55.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:55.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:57.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:57.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:57 np0005539552 nova_compute[233724]: 2025-11-29 08:51:57.695 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:57 np0005539552 nova_compute[233724]: 2025-11-29 08:51:57.727 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:51:57 np0005539552 nova_compute[233724]: 2025-11-29 08:51:57.728 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:51:57 np0005539552 nova_compute[233724]: 2025-11-29 08:51:57.728 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:51:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:51:58 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/715180415' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:51:59 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e431 e431: 3 total, 3 up, 3 in
Nov 29 03:51:59 np0005539552 nova_compute[233724]: 2025-11-29 08:51:59.545 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:59.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:59 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:51:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:59.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:00 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e432 e432: 3 total, 3 up, 3 in
Nov 29 03:52:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:01.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:01.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:01 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e433 e433: 3 total, 3 up, 3 in
Nov 29 03:52:01 np0005539552 nova_compute[233724]: 2025-11-29 08:52:01.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:52:02 np0005539552 nova_compute[233724]: 2025-11-29 08:52:02.697 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e434 e434: 3 total, 3 up, 3 in
Nov 29 03:52:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:03.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:03.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:03 np0005539552 nova_compute[233724]: 2025-11-29 08:52:03.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:52:03 np0005539552 nova_compute[233724]: 2025-11-29 08:52:03.925 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:52:04 np0005539552 nova_compute[233724]: 2025-11-29 08:52:04.548 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:04 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:05.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:05.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:06 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e435 e435: 3 total, 3 up, 3 in
Nov 29 03:52:06 np0005539552 nova_compute[233724]: 2025-11-29 08:52:06.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:52:07 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:52:07.016 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=69, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=68) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:52:07 np0005539552 nova_compute[233724]: 2025-11-29 08:52:07.016 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:07 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:52:07.017 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:52:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:07.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:07.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:07 np0005539552 nova_compute[233724]: 2025-11-29 08:52:07.699 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:09 np0005539552 nova_compute[233724]: 2025-11-29 08:52:09.550 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:09.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:09 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:09.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:09 np0005539552 nova_compute[233724]: 2025-11-29 08:52:09.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:52:09 np0005539552 nova_compute[233724]: 2025-11-29 08:52:09.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:52:09 np0005539552 nova_compute[233724]: 2025-11-29 08:52:09.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:52:10 np0005539552 nova_compute[233724]: 2025-11-29 08:52:10.001 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:52:10 np0005539552 nova_compute[233724]: 2025-11-29 08:52:10.996 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:52:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:11.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:52:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:11.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:52:12 np0005539552 nova_compute[233724]: 2025-11-29 08:52:12.701 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:13.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:52:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:13.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:52:14 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:52:14.019 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '69'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:52:14 np0005539552 nova_compute[233724]: 2025-11-29 08:52:14.553 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:14 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:15.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:52:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:15.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:52:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:17.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:17 np0005539552 nova_compute[233724]: 2025-11-29 08:52:17.703 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:17.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:19 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:52:19 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:52:19 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:52:19 np0005539552 nova_compute[233724]: 2025-11-29 08:52:19.555 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:19.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:19 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:19.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:20 np0005539552 nova_compute[233724]: 2025-11-29 08:52:20.126 233728 DEBUG oslo_concurrency.lockutils [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Acquiring lock "16db2bb6-d5b2-4f00-8d36-e1ce814cd722" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:20 np0005539552 nova_compute[233724]: 2025-11-29 08:52:20.126 233728 DEBUG oslo_concurrency.lockutils [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Lock "16db2bb6-d5b2-4f00-8d36-e1ce814cd722" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:20 np0005539552 nova_compute[233724]: 2025-11-29 08:52:20.141 233728 DEBUG nova.compute.manager [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:52:20 np0005539552 nova_compute[233724]: 2025-11-29 08:52:20.248 233728 DEBUG oslo_concurrency.lockutils [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:20 np0005539552 nova_compute[233724]: 2025-11-29 08:52:20.249 233728 DEBUG oslo_concurrency.lockutils [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:20 np0005539552 nova_compute[233724]: 2025-11-29 08:52:20.264 233728 DEBUG nova.virt.hardware [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:52:20 np0005539552 nova_compute[233724]: 2025-11-29 08:52:20.264 233728 INFO nova.compute.claims [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 29 03:52:20 np0005539552 nova_compute[233724]: 2025-11-29 08:52:20.397 233728 DEBUG oslo_concurrency.processutils [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:52:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:52:20.660 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:52:20.661 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:52:20.661 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:52:20 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:52:20 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/248894761' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:52:20 np0005539552 nova_compute[233724]: 2025-11-29 08:52:20.795 233728 DEBUG oslo_concurrency.processutils [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.399s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:52:20 np0005539552 nova_compute[233724]: 2025-11-29 08:52:20.804 233728 DEBUG nova.compute.provider_tree [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:52:20 np0005539552 nova_compute[233724]: 2025-11-29 08:52:20.825 233728 DEBUG nova.scheduler.client.report [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:52:20 np0005539552 nova_compute[233724]: 2025-11-29 08:52:20.858 233728 DEBUG oslo_concurrency.lockutils [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:52:20 np0005539552 nova_compute[233724]: 2025-11-29 08:52:20.859 233728 DEBUG nova.compute.manager [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:52:20 np0005539552 nova_compute[233724]: 2025-11-29 08:52:20.916 233728 DEBUG nova.compute.manager [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:52:20 np0005539552 nova_compute[233724]: 2025-11-29 08:52:20.916 233728 DEBUG nova.network.neutron [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:52:20 np0005539552 nova_compute[233724]: 2025-11-29 08:52:20.940 233728 INFO nova.virt.libvirt.driver [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:52:20 np0005539552 nova_compute[233724]: 2025-11-29 08:52:20.962 233728 DEBUG nova.compute.manager [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:52:21 np0005539552 nova_compute[233724]: 2025-11-29 08:52:21.010 233728 INFO nova.virt.block_device [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Booting with volume 08dcfaf9-1c01-4432-a48d-de20b4481b2d at /dev/vda#033[00m
Nov 29 03:52:21 np0005539552 nova_compute[233724]: 2025-11-29 08:52:21.162 233728 DEBUG os_brick.utils [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:52:21 np0005539552 nova_compute[233724]: 2025-11-29 08:52:21.164 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:52:21 np0005539552 nova_compute[233724]: 2025-11-29 08:52:21.181 243418 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:52:21 np0005539552 nova_compute[233724]: 2025-11-29 08:52:21.182 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[14ee2a2e-a449-4450-95c8-7bee7cccc0d5]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:21 np0005539552 nova_compute[233724]: 2025-11-29 08:52:21.183 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:52:21 np0005539552 nova_compute[233724]: 2025-11-29 08:52:21.195 243418 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:52:21 np0005539552 nova_compute[233724]: 2025-11-29 08:52:21.195 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[9703ae92-b547-4a24-88f5-5e937e8b4985]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0e73df9328', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:21 np0005539552 nova_compute[233724]: 2025-11-29 08:52:21.197 243418 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:52:21 np0005539552 nova_compute[233724]: 2025-11-29 08:52:21.210 243418 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:52:21 np0005539552 nova_compute[233724]: 2025-11-29 08:52:21.211 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[917c0e3f-2cab-4af3-ace3-ea3cd10149c9]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:21 np0005539552 nova_compute[233724]: 2025-11-29 08:52:21.212 243418 DEBUG oslo.privsep.daemon [-] privsep: reply[8c95d4f8-309d-4079-ad28-2abef3a1d061]: (4, '6fbde64a-d978-4f1a-a29d-a77e1f5a1987') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:21 np0005539552 nova_compute[233724]: 2025-11-29 08:52:21.213 233728 DEBUG oslo_concurrency.processutils [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:52:21 np0005539552 nova_compute[233724]: 2025-11-29 08:52:21.264 233728 DEBUG oslo_concurrency.processutils [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] CMD "nvme version" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:52:21 np0005539552 nova_compute[233724]: 2025-11-29 08:52:21.268 233728 DEBUG os_brick.initiator.connectors.lightos [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:52:21 np0005539552 nova_compute[233724]: 2025-11-29 08:52:21.269 233728 DEBUG os_brick.initiator.connectors.lightos [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:52:21 np0005539552 nova_compute[233724]: 2025-11-29 08:52:21.269 233728 DEBUG os_brick.initiator.connectors.lightos [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:52:21 np0005539552 nova_compute[233724]: 2025-11-29 08:52:21.270 233728 DEBUG os_brick.utils [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] <== get_connector_properties: return (106ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0e73df9328', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '6fbde64a-d978-4f1a-a29d-a77e1f5a1987', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:52:21 np0005539552 nova_compute[233724]: 2025-11-29 08:52:21.270 233728 DEBUG nova.virt.block_device [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Updating existing volume attachment record: d1df4608-df42-4250-8634-2a455e298fce _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:52:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:21.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:52:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:21.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:52:21 np0005539552 nova_compute[233724]: 2025-11-29 08:52:21.766 233728 DEBUG nova.policy [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c3778685080b4955bd80ff7056a1c9f2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '676f70280bf945cd90578b14202243e2', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:52:21 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:52:21 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4023178244' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:52:22 np0005539552 nova_compute[233724]: 2025-11-29 08:52:22.325 233728 DEBUG nova.compute.manager [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:52:22 np0005539552 nova_compute[233724]: 2025-11-29 08:52:22.326 233728 DEBUG nova.virt.libvirt.driver [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:52:22 np0005539552 nova_compute[233724]: 2025-11-29 08:52:22.327 233728 INFO nova.virt.libvirt.driver [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Creating image(s)#033[00m
Nov 29 03:52:22 np0005539552 nova_compute[233724]: 2025-11-29 08:52:22.327 233728 DEBUG nova.virt.libvirt.driver [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:52:22 np0005539552 nova_compute[233724]: 2025-11-29 08:52:22.328 233728 DEBUG nova.virt.libvirt.driver [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Ensure instance console log exists: /var/lib/nova/instances/16db2bb6-d5b2-4f00-8d36-e1ce814cd722/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:52:22 np0005539552 nova_compute[233724]: 2025-11-29 08:52:22.328 233728 DEBUG oslo_concurrency.lockutils [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:22 np0005539552 nova_compute[233724]: 2025-11-29 08:52:22.329 233728 DEBUG oslo_concurrency.lockutils [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:22 np0005539552 nova_compute[233724]: 2025-11-29 08:52:22.329 233728 DEBUG oslo_concurrency.lockutils [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:52:22 np0005539552 nova_compute[233724]: 2025-11-29 08:52:22.707 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:22 np0005539552 nova_compute[233724]: 2025-11-29 08:52:22.878 233728 DEBUG nova.network.neutron [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Successfully created port: ba9de23c-de81-495d-839f-d8ccb6604f76 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:52:23 np0005539552 nova_compute[233724]: 2025-11-29 08:52:23.564 233728 DEBUG nova.network.neutron [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Successfully updated port: ba9de23c-de81-495d-839f-d8ccb6604f76 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:52:23 np0005539552 nova_compute[233724]: 2025-11-29 08:52:23.580 233728 DEBUG oslo_concurrency.lockutils [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Acquiring lock "refresh_cache-16db2bb6-d5b2-4f00-8d36-e1ce814cd722" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:52:23 np0005539552 nova_compute[233724]: 2025-11-29 08:52:23.580 233728 DEBUG oslo_concurrency.lockutils [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Acquired lock "refresh_cache-16db2bb6-d5b2-4f00-8d36-e1ce814cd722" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:52:23 np0005539552 nova_compute[233724]: 2025-11-29 08:52:23.580 233728 DEBUG nova.network.neutron [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:52:23 np0005539552 nova_compute[233724]: 2025-11-29 08:52:23.675 233728 DEBUG nova.compute.manager [req-5f11f505-b60b-4124-ba38-d980687e4bf7 req-5bd482f6-9785-48bb-820c-148fd3ce1279 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Received event network-changed-ba9de23c-de81-495d-839f-d8ccb6604f76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:52:23 np0005539552 nova_compute[233724]: 2025-11-29 08:52:23.676 233728 DEBUG nova.compute.manager [req-5f11f505-b60b-4124-ba38-d980687e4bf7 req-5bd482f6-9785-48bb-820c-148fd3ce1279 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Refreshing instance network info cache due to event network-changed-ba9de23c-de81-495d-839f-d8ccb6604f76. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:52:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:23 np0005539552 nova_compute[233724]: 2025-11-29 08:52:23.676 233728 DEBUG oslo_concurrency.lockutils [req-5f11f505-b60b-4124-ba38-d980687e4bf7 req-5bd482f6-9785-48bb-820c-148fd3ce1279 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-16db2bb6-d5b2-4f00-8d36-e1ce814cd722" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:52:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:23.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:23.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:23 np0005539552 nova_compute[233724]: 2025-11-29 08:52:23.759 233728 DEBUG nova.network.neutron [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:52:24 np0005539552 podman[326117]: 2025-11-29 08:52:23.998761654 +0000 UTC m=+0.078165925 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Nov 29 03:52:24 np0005539552 podman[326116]: 2025-11-29 08:52:24.017801996 +0000 UTC m=+0.098963125 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 29 03:52:24 np0005539552 podman[326118]: 2025-11-29 08:52:24.049682974 +0000 UTC m=+0.122590300 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Nov 29 03:52:24 np0005539552 nova_compute[233724]: 2025-11-29 08:52:24.558 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:24 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:24 np0005539552 nova_compute[233724]: 2025-11-29 08:52:24.995 233728 DEBUG nova.network.neutron [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Updating instance_info_cache with network_info: [{"id": "ba9de23c-de81-495d-839f-d8ccb6604f76", "address": "fa:16:3e:c8:0f:c4", "network": {"id": "66c5f903-8626-4025-8a8f-4e0589b5aac5", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1405316817-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "676f70280bf945cd90578b14202243e2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba9de23c-de", "ovs_interfaceid": "ba9de23c-de81-495d-839f-d8ccb6604f76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.019 233728 DEBUG oslo_concurrency.lockutils [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Releasing lock "refresh_cache-16db2bb6-d5b2-4f00-8d36-e1ce814cd722" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.020 233728 DEBUG nova.compute.manager [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Instance network_info: |[{"id": "ba9de23c-de81-495d-839f-d8ccb6604f76", "address": "fa:16:3e:c8:0f:c4", "network": {"id": "66c5f903-8626-4025-8a8f-4e0589b5aac5", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1405316817-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "676f70280bf945cd90578b14202243e2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba9de23c-de", "ovs_interfaceid": "ba9de23c-de81-495d-839f-d8ccb6604f76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.021 233728 DEBUG oslo_concurrency.lockutils [req-5f11f505-b60b-4124-ba38-d980687e4bf7 req-5bd482f6-9785-48bb-820c-148fd3ce1279 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-16db2bb6-d5b2-4f00-8d36-e1ce814cd722" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.021 233728 DEBUG nova.network.neutron [req-5f11f505-b60b-4124-ba38-d980687e4bf7 req-5bd482f6-9785-48bb-820c-148fd3ce1279 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Refreshing network info cache for port ba9de23c-de81-495d-839f-d8ccb6604f76 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.028 233728 DEBUG nova.virt.libvirt.driver [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Start _get_guest_xml network_info=[{"id": "ba9de23c-de81-495d-839f-d8ccb6604f76", "address": "fa:16:3e:c8:0f:c4", "network": {"id": "66c5f903-8626-4025-8a8f-4e0589b5aac5", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1405316817-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "676f70280bf945cd90578b14202243e2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba9de23c-de", "ovs_interfaceid": "ba9de23c-de81-495d-839f-d8ccb6604f76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-08dcfaf9-1c01-4432-a48d-de20b4481b2d', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '08dcfaf9-1c01-4432-a48d-de20b4481b2d', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '16db2bb6-d5b2-4f00-8d36-e1ce814cd722', 'attached_at': '', 'detached_at': '', 'volume_id': '08dcfaf9-1c01-4432-a48d-de20b4481b2d', 'serial': '08dcfaf9-1c01-4432-a48d-de20b4481b2d'}, 'delete_on_termination': False, 'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'attachment_id': 'd1df4608-df42-4250-8634-2a455e298fce', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.035 233728 WARNING nova.virt.libvirt.driver [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.041 233728 DEBUG nova.virt.libvirt.host [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.043 233728 DEBUG nova.virt.libvirt.host [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.048 233728 DEBUG nova.virt.libvirt.host [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.048 233728 DEBUG nova.virt.libvirt.host [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.050 233728 DEBUG nova.virt.libvirt.driver [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.051 233728 DEBUG nova.virt.hardware [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.052 233728 DEBUG nova.virt.hardware [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.052 233728 DEBUG nova.virt.hardware [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.053 233728 DEBUG nova.virt.hardware [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.053 233728 DEBUG nova.virt.hardware [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.054 233728 DEBUG nova.virt.hardware [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.054 233728 DEBUG nova.virt.hardware [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.055 233728 DEBUG nova.virt.hardware [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.055 233728 DEBUG nova.virt.hardware [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.056 233728 DEBUG nova.virt.hardware [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.056 233728 DEBUG nova.virt.hardware [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.104 233728 DEBUG nova.storage.rbd_utils [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] rbd image 16db2bb6-d5b2-4f00-8d36-e1ce814cd722_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.110 233728 DEBUG oslo_concurrency.processutils [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:52:25 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:52:25 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3934000887' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.591 233728 DEBUG oslo_concurrency.processutils [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.628 233728 DEBUG nova.virt.libvirt.vif [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:52:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-1124550516',display_name='tempest-TestVolumeBackupRestore-server-1124550516',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-1124550516',id=224,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM2GphRUms02m5j0ijPtPJ6dSAhJBYmhnLZQ34iieerlpfPtQ5Ielz3rGVcqEiO64fmIWRizL2oz7oz/HOrDliMGUWSt6M5a2mbVISLPL0UKAsnk/sS4qQNWCAUsKUE9Lw==',key_name='tempest-TestVolumeBackupRestore-1353448143',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='676f70280bf945cd90578b14202243e2',ramdisk_id='',reservation_id='r-tt9a7f24',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBackupRestore-35370002',owner_user_name='tempest-TestVolumeBackupRestore-35370002-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:52:20Z,user_data=None,user_id='c3778685080b4955bd80ff7056a1c9f2',uuid=16db2bb6-d5b2-4f00-8d36-e1ce814cd722,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ba9de23c-de81-495d-839f-d8ccb6604f76", "address": "fa:16:3e:c8:0f:c4", "network": {"id": "66c5f903-8626-4025-8a8f-4e0589b5aac5", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1405316817-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"676f70280bf945cd90578b14202243e2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba9de23c-de", "ovs_interfaceid": "ba9de23c-de81-495d-839f-d8ccb6604f76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.629 233728 DEBUG nova.network.os_vif_util [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Converting VIF {"id": "ba9de23c-de81-495d-839f-d8ccb6604f76", "address": "fa:16:3e:c8:0f:c4", "network": {"id": "66c5f903-8626-4025-8a8f-4e0589b5aac5", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1405316817-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "676f70280bf945cd90578b14202243e2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba9de23c-de", "ovs_interfaceid": "ba9de23c-de81-495d-839f-d8ccb6604f76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.630 233728 DEBUG nova.network.os_vif_util [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c8:0f:c4,bridge_name='br-int',has_traffic_filtering=True,id=ba9de23c-de81-495d-839f-d8ccb6604f76,network=Network(66c5f903-8626-4025-8a8f-4e0589b5aac5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba9de23c-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.631 233728 DEBUG nova.objects.instance [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 16db2bb6-d5b2-4f00-8d36-e1ce814cd722 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.649 233728 DEBUG nova.virt.libvirt.driver [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:52:25 np0005539552 nova_compute[233724]:  <uuid>16db2bb6-d5b2-4f00-8d36-e1ce814cd722</uuid>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:  <name>instance-000000e0</name>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:  <memory>131072</memory>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:  <vcpu>1</vcpu>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:  <metadata>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:52:25 np0005539552 nova_compute[233724]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:      <nova:name>tempest-TestVolumeBackupRestore-server-1124550516</nova:name>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:      <nova:creationTime>2025-11-29 08:52:25</nova:creationTime>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:      <nova:flavor name="m1.nano">
Nov 29 03:52:25 np0005539552 nova_compute[233724]:        <nova:memory>128</nova:memory>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:        <nova:disk>1</nova:disk>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:        <nova:swap>0</nova:swap>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:      </nova:flavor>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:      <nova:owner>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:        <nova:user uuid="c3778685080b4955bd80ff7056a1c9f2">tempest-TestVolumeBackupRestore-35370002-project-member</nova:user>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:        <nova:project uuid="676f70280bf945cd90578b14202243e2">tempest-TestVolumeBackupRestore-35370002</nova:project>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:      </nova:owner>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:      <nova:ports>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:        <nova:port uuid="ba9de23c-de81-495d-839f-d8ccb6604f76">
Nov 29 03:52:25 np0005539552 nova_compute[233724]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:        </nova:port>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:      </nova:ports>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    </nova:instance>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:  </metadata>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:  <sysinfo type="smbios">
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    <system>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:      <entry name="serial">16db2bb6-d5b2-4f00-8d36-e1ce814cd722</entry>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:      <entry name="uuid">16db2bb6-d5b2-4f00-8d36-e1ce814cd722</entry>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    </system>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:  </sysinfo>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:  <os>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    <boot dev="hd"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    <smbios mode="sysinfo"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:  </os>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:  <features>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    <acpi/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    <apic/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    <vmcoreinfo/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:  </features>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:  <clock offset="utc">
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    <timer name="hpet" present="no"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:  </clock>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:  <cpu mode="custom" match="exact">
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    <model>Nehalem</model>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:  </cpu>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:  <devices>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    <disk type="network" device="cdrom">
Nov 29 03:52:25 np0005539552 nova_compute[233724]:      <driver type="raw" cache="none"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="vms/16db2bb6-d5b2-4f00-8d36-e1ce814cd722_disk.config">
Nov 29 03:52:25 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:52:25 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:      <target dev="sda" bus="sata"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    <disk type="network" device="disk">
Nov 29 03:52:25 np0005539552 nova_compute[233724]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:      <source protocol="rbd" name="volumes/volume-08dcfaf9-1c01-4432-a48d-de20b4481b2d">
Nov 29 03:52:25 np0005539552 nova_compute[233724]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:      </source>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:      <auth username="openstack">
Nov 29 03:52:25 np0005539552 nova_compute[233724]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:      </auth>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:      <target dev="vda" bus="virtio"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:      <serial>08dcfaf9-1c01-4432-a48d-de20b4481b2d</serial>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    </disk>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    <interface type="ethernet">
Nov 29 03:52:25 np0005539552 nova_compute[233724]:      <mac address="fa:16:3e:c8:0f:c4"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:      <mtu size="1442"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:      <target dev="tapba9de23c-de"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    </interface>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    <serial type="pty">
Nov 29 03:52:25 np0005539552 nova_compute[233724]:      <log file="/var/lib/nova/instances/16db2bb6-d5b2-4f00-8d36-e1ce814cd722/console.log" append="off"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    </serial>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    <video>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:      <model type="virtio"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    </video>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    <input type="tablet" bus="usb"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    <rng model="virtio">
Nov 29 03:52:25 np0005539552 nova_compute[233724]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    </rng>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    <controller type="usb" index="0"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    <memballoon model="virtio">
Nov 29 03:52:25 np0005539552 nova_compute[233724]:      <stats period="10"/>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:    </memballoon>
Nov 29 03:52:25 np0005539552 nova_compute[233724]:  </devices>
Nov 29 03:52:25 np0005539552 nova_compute[233724]: </domain>
Nov 29 03:52:25 np0005539552 nova_compute[233724]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.649 233728 DEBUG nova.compute.manager [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Preparing to wait for external event network-vif-plugged-ba9de23c-de81-495d-839f-d8ccb6604f76 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.650 233728 DEBUG oslo_concurrency.lockutils [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Acquiring lock "16db2bb6-d5b2-4f00-8d36-e1ce814cd722-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.650 233728 DEBUG oslo_concurrency.lockutils [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Lock "16db2bb6-d5b2-4f00-8d36-e1ce814cd722-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.650 233728 DEBUG oslo_concurrency.lockutils [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Lock "16db2bb6-d5b2-4f00-8d36-e1ce814cd722-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.651 233728 DEBUG nova.virt.libvirt.vif [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:52:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-1124550516',display_name='tempest-TestVolumeBackupRestore-server-1124550516',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-1124550516',id=224,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM2GphRUms02m5j0ijPtPJ6dSAhJBYmhnLZQ34iieerlpfPtQ5Ielz3rGVcqEiO64fmIWRizL2oz7oz/HOrDliMGUWSt6M5a2mbVISLPL0UKAsnk/sS4qQNWCAUsKUE9Lw==',key_name='tempest-TestVolumeBackupRestore-1353448143',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='676f70280bf945cd90578b14202243e2',ramdisk_id='',reservation_id='r-tt9a7f24',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBackupRestore-35370002',owner_user_name='tempest-TestVolumeBackupRestore-35370002-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:52:20Z,user_data=None,user_id='c3778685080b4955bd80ff7056a1c9f2',uuid=16db2bb6-d5b2-4f00-8d36-e1ce814cd722,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ba9de23c-de81-495d-839f-d8ccb6604f76", "address": "fa:16:3e:c8:0f:c4", "network": {"id": "66c5f903-8626-4025-8a8f-4e0589b5aac5", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1405316817-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"676f70280bf945cd90578b14202243e2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba9de23c-de", "ovs_interfaceid": "ba9de23c-de81-495d-839f-d8ccb6604f76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.651 233728 DEBUG nova.network.os_vif_util [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Converting VIF {"id": "ba9de23c-de81-495d-839f-d8ccb6604f76", "address": "fa:16:3e:c8:0f:c4", "network": {"id": "66c5f903-8626-4025-8a8f-4e0589b5aac5", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1405316817-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "676f70280bf945cd90578b14202243e2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba9de23c-de", "ovs_interfaceid": "ba9de23c-de81-495d-839f-d8ccb6604f76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.652 233728 DEBUG nova.network.os_vif_util [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c8:0f:c4,bridge_name='br-int',has_traffic_filtering=True,id=ba9de23c-de81-495d-839f-d8ccb6604f76,network=Network(66c5f903-8626-4025-8a8f-4e0589b5aac5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba9de23c-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.652 233728 DEBUG os_vif [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:0f:c4,bridge_name='br-int',has_traffic_filtering=True,id=ba9de23c-de81-495d-839f-d8ccb6604f76,network=Network(66c5f903-8626-4025-8a8f-4e0589b5aac5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba9de23c-de') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.653 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.653 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.654 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.658 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.658 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapba9de23c-de, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.659 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapba9de23c-de, col_values=(('external_ids', {'iface-id': 'ba9de23c-de81-495d-839f-d8ccb6604f76', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c8:0f:c4', 'vm-uuid': '16db2bb6-d5b2-4f00-8d36-e1ce814cd722'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.660 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.662 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:52:25 np0005539552 NetworkManager[48926]: <info>  [1764406345.6626] manager: (tapba9de23c-de): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/424)
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.669 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.669 233728 INFO os_vif [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:0f:c4,bridge_name='br-int',has_traffic_filtering=True,id=ba9de23c-de81-495d-839f-d8ccb6604f76,network=Network(66c5f903-8626-4025-8a8f-4e0589b5aac5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba9de23c-de')#033[00m
Nov 29 03:52:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:25.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:52:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:25.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.730 233728 DEBUG nova.virt.libvirt.driver [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.730 233728 DEBUG nova.virt.libvirt.driver [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.731 233728 DEBUG nova.virt.libvirt.driver [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] No VIF found with MAC fa:16:3e:c8:0f:c4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.731 233728 INFO nova.virt.libvirt.driver [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Using config drive#033[00m
Nov 29 03:52:25 np0005539552 nova_compute[233724]: 2025-11-29 08:52:25.766 233728 DEBUG nova.storage.rbd_utils [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] rbd image 16db2bb6-d5b2-4f00-8d36-e1ce814cd722_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:52:26 np0005539552 nova_compute[233724]: 2025-11-29 08:52:26.144 233728 INFO nova.virt.libvirt.driver [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Creating config drive at /var/lib/nova/instances/16db2bb6-d5b2-4f00-8d36-e1ce814cd722/disk.config#033[00m
Nov 29 03:52:26 np0005539552 nova_compute[233724]: 2025-11-29 08:52:26.154 233728 DEBUG oslo_concurrency.processutils [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/16db2bb6-d5b2-4f00-8d36-e1ce814cd722/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpao2fa80q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:52:26 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:52:26 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:52:26 np0005539552 nova_compute[233724]: 2025-11-29 08:52:26.316 233728 DEBUG oslo_concurrency.processutils [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/16db2bb6-d5b2-4f00-8d36-e1ce814cd722/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpao2fa80q" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:52:26 np0005539552 nova_compute[233724]: 2025-11-29 08:52:26.366 233728 DEBUG nova.storage.rbd_utils [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] rbd image 16db2bb6-d5b2-4f00-8d36-e1ce814cd722_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:52:26 np0005539552 nova_compute[233724]: 2025-11-29 08:52:26.372 233728 DEBUG oslo_concurrency.processutils [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/16db2bb6-d5b2-4f00-8d36-e1ce814cd722/disk.config 16db2bb6-d5b2-4f00-8d36-e1ce814cd722_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:52:26 np0005539552 nova_compute[233724]: 2025-11-29 08:52:26.574 233728 DEBUG oslo_concurrency.processutils [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/16db2bb6-d5b2-4f00-8d36-e1ce814cd722/disk.config 16db2bb6-d5b2-4f00-8d36-e1ce814cd722_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.202s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:52:26 np0005539552 nova_compute[233724]: 2025-11-29 08:52:26.575 233728 INFO nova.virt.libvirt.driver [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Deleting local config drive /var/lib/nova/instances/16db2bb6-d5b2-4f00-8d36-e1ce814cd722/disk.config because it was imported into RBD.#033[00m
Nov 29 03:52:26 np0005539552 kernel: tapba9de23c-de: entered promiscuous mode
Nov 29 03:52:26 np0005539552 NetworkManager[48926]: <info>  [1764406346.6598] manager: (tapba9de23c-de): new Tun device (/org/freedesktop/NetworkManager/Devices/425)
Nov 29 03:52:26 np0005539552 ovn_controller[133798]: 2025-11-29T08:52:26Z|00963|binding|INFO|Claiming lport ba9de23c-de81-495d-839f-d8ccb6604f76 for this chassis.
Nov 29 03:52:26 np0005539552 ovn_controller[133798]: 2025-11-29T08:52:26Z|00964|binding|INFO|ba9de23c-de81-495d-839f-d8ccb6604f76: Claiming fa:16:3e:c8:0f:c4 10.100.0.4
Nov 29 03:52:26 np0005539552 nova_compute[233724]: 2025-11-29 08:52:26.660 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:26 np0005539552 nova_compute[233724]: 2025-11-29 08:52:26.671 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:26 np0005539552 nova_compute[233724]: 2025-11-29 08:52:26.677 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:52:26.694 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c8:0f:c4 10.100.0.4'], port_security=['fa:16:3e:c8:0f:c4 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '16db2bb6-d5b2-4f00-8d36-e1ce814cd722', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-66c5f903-8626-4025-8a8f-4e0589b5aac5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '676f70280bf945cd90578b14202243e2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5d0274c4-ca7d-485b-8cef-af2050593c6e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7abd7f17-e2fc-4653-8b0a-6623396c53a0, chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=ba9de23c-de81-495d-839f-d8ccb6604f76) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:52:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:52:26.696 143400 INFO neutron.agent.ovn.metadata.agent [-] Port ba9de23c-de81-495d-839f-d8ccb6604f76 in datapath 66c5f903-8626-4025-8a8f-4e0589b5aac5 bound to our chassis#033[00m
Nov 29 03:52:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:52:26.698 143400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 66c5f903-8626-4025-8a8f-4e0589b5aac5#033[00m
Nov 29 03:52:26 np0005539552 systemd-udevd[326342]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:52:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:52:26.721 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[fccbed9e-b7ee-4784-ac3f-22ad025f8716]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:52:26.723 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap66c5f903-81 in ovnmeta-66c5f903-8626-4025-8a8f-4e0589b5aac5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:52:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:52:26.725 239589 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap66c5f903-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:52:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:52:26.725 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[17b266d1-8ecf-4525-850c-bfbe5af88545]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:52:26.726 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[116fe827-957c-440c-b728-94b0dc6d05c7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:26 np0005539552 systemd-machined[196379]: New machine qemu-97-instance-000000e0.
Nov 29 03:52:26 np0005539552 NetworkManager[48926]: <info>  [1764406346.7395] device (tapba9de23c-de): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:52:26 np0005539552 NetworkManager[48926]: <info>  [1764406346.7422] device (tapba9de23c-de): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:52:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:52:26.749 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[c483fc05-bbe8-489f-885d-24f49bd55e95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:26 np0005539552 nova_compute[233724]: 2025-11-29 08:52:26.766 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:26 np0005539552 systemd[1]: Started Virtual Machine qemu-97-instance-000000e0.
Nov 29 03:52:26 np0005539552 ovn_controller[133798]: 2025-11-29T08:52:26Z|00965|binding|INFO|Setting lport ba9de23c-de81-495d-839f-d8ccb6604f76 ovn-installed in OVS
Nov 29 03:52:26 np0005539552 ovn_controller[133798]: 2025-11-29T08:52:26Z|00966|binding|INFO|Setting lport ba9de23c-de81-495d-839f-d8ccb6604f76 up in Southbound
Nov 29 03:52:26 np0005539552 nova_compute[233724]: 2025-11-29 08:52:26.770 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:52:26.782 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[c91ea837-e7ac-45fa-aebe-9fd177a4ca6c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:52:26.829 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[32fc6ec6-fbbe-48cd-a182-fbeeab8009d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:52:26.834 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[961a74e0-82a3-4977-bb3c-ab70489ff195]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:26 np0005539552 systemd-udevd[326346]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:52:26 np0005539552 NetworkManager[48926]: <info>  [1764406346.8379] manager: (tap66c5f903-80): new Veth device (/org/freedesktop/NetworkManager/Devices/426)
Nov 29 03:52:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:52:26.882 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[e7197895-96c2-4008-bbab-0974223599fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:52:26.886 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[e577bbc0-2c97-48c5-9d0e-8870966acc17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:26 np0005539552 NetworkManager[48926]: <info>  [1764406346.9209] device (tap66c5f903-80): carrier: link connected
Nov 29 03:52:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:52:26.931 239674 DEBUG oslo.privsep.daemon [-] privsep: reply[9a77c2a6-3edb-4575-8d92-bdb4dbc10a41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:52:26.958 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[a04cc8c8-6ca5-496e-b8a2-a7cbce19137b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap66c5f903-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:7b:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 281], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 939260, 'reachable_time': 25729, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326376, 'error': None, 'target': 'ovnmeta-66c5f903-8626-4025-8a8f-4e0589b5aac5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:26 np0005539552 nova_compute[233724]: 2025-11-29 08:52:26.963 233728 DEBUG nova.network.neutron [req-5f11f505-b60b-4124-ba38-d980687e4bf7 req-5bd482f6-9785-48bb-820c-148fd3ce1279 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Updated VIF entry in instance network info cache for port ba9de23c-de81-495d-839f-d8ccb6604f76. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:52:26 np0005539552 nova_compute[233724]: 2025-11-29 08:52:26.963 233728 DEBUG nova.network.neutron [req-5f11f505-b60b-4124-ba38-d980687e4bf7 req-5bd482f6-9785-48bb-820c-148fd3ce1279 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Updating instance_info_cache with network_info: [{"id": "ba9de23c-de81-495d-839f-d8ccb6604f76", "address": "fa:16:3e:c8:0f:c4", "network": {"id": "66c5f903-8626-4025-8a8f-4e0589b5aac5", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1405316817-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "676f70280bf945cd90578b14202243e2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba9de23c-de", "ovs_interfaceid": "ba9de23c-de81-495d-839f-d8ccb6604f76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:52:26 np0005539552 nova_compute[233724]: 2025-11-29 08:52:26.977 233728 DEBUG oslo_concurrency.lockutils [req-5f11f505-b60b-4124-ba38-d980687e4bf7 req-5bd482f6-9785-48bb-820c-148fd3ce1279 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-16db2bb6-d5b2-4f00-8d36-e1ce814cd722" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:52:26 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:52:26.983 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[bc3b6d73-12f6-4cf7-909c-53f8474050c1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe46:7bbf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 939260, 'tstamp': 939260}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326377, 'error': None, 'target': 'ovnmeta-66c5f903-8626-4025-8a8f-4e0589b5aac5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:52:26.999 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[fcd18d78-9149-4bef-b0cc-a3473a03d9ee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap66c5f903-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:7b:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 281], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 939260, 'reachable_time': 25729, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 326378, 'error': None, 'target': 'ovnmeta-66c5f903-8626-4025-8a8f-4e0589b5aac5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:27 np0005539552 nova_compute[233724]: 2025-11-29 08:52:27.002 233728 DEBUG nova.compute.manager [req-684bce80-3575-407a-8e29-a952fb16796c req-f82208e1-ffbe-4973-9203-949606f737fb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Received event network-vif-plugged-ba9de23c-de81-495d-839f-d8ccb6604f76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:52:27 np0005539552 nova_compute[233724]: 2025-11-29 08:52:27.003 233728 DEBUG oslo_concurrency.lockutils [req-684bce80-3575-407a-8e29-a952fb16796c req-f82208e1-ffbe-4973-9203-949606f737fb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "16db2bb6-d5b2-4f00-8d36-e1ce814cd722-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:27 np0005539552 nova_compute[233724]: 2025-11-29 08:52:27.003 233728 DEBUG oslo_concurrency.lockutils [req-684bce80-3575-407a-8e29-a952fb16796c req-f82208e1-ffbe-4973-9203-949606f737fb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "16db2bb6-d5b2-4f00-8d36-e1ce814cd722-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:27 np0005539552 nova_compute[233724]: 2025-11-29 08:52:27.003 233728 DEBUG oslo_concurrency.lockutils [req-684bce80-3575-407a-8e29-a952fb16796c req-f82208e1-ffbe-4973-9203-949606f737fb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "16db2bb6-d5b2-4f00-8d36-e1ce814cd722-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:52:27 np0005539552 nova_compute[233724]: 2025-11-29 08:52:27.003 233728 DEBUG nova.compute.manager [req-684bce80-3575-407a-8e29-a952fb16796c req-f82208e1-ffbe-4973-9203-949606f737fb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Processing event network-vif-plugged-ba9de23c-de81-495d-839f-d8ccb6604f76 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:52:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:52:27.042 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[9c4294d4-6ceb-42de-9c07-ca149c8a81f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:52:27.111 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[864638f9-d20e-4f61-9056-2eb61f095cd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:52:27.113 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap66c5f903-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:52:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:52:27.113 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:52:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:52:27.114 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap66c5f903-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:52:27 np0005539552 nova_compute[233724]: 2025-11-29 08:52:27.115 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:27 np0005539552 kernel: tap66c5f903-80: entered promiscuous mode
Nov 29 03:52:27 np0005539552 NetworkManager[48926]: <info>  [1764406347.1167] manager: (tap66c5f903-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/427)
Nov 29 03:52:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:52:27.119 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap66c5f903-80, col_values=(('external_ids', {'iface-id': '66a16424-da27-4199-8a91-be26bf876405'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:52:27 np0005539552 ovn_controller[133798]: 2025-11-29T08:52:27Z|00967|binding|INFO|Releasing lport 66a16424-da27-4199-8a91-be26bf876405 from this chassis (sb_readonly=0)
Nov 29 03:52:27 np0005539552 nova_compute[233724]: 2025-11-29 08:52:27.120 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:27 np0005539552 nova_compute[233724]: 2025-11-29 08:52:27.135 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:52:27.136 143400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/66c5f903-8626-4025-8a8f-4e0589b5aac5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/66c5f903-8626-4025-8a8f-4e0589b5aac5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:52:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:52:27.137 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[b73c92ec-a470-4751-be0e-7e50c562b763]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:52:27.138 143400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:52:27 np0005539552 ovn_metadata_agent[143394]: global
Nov 29 03:52:27 np0005539552 ovn_metadata_agent[143394]:    log         /dev/log local0 debug
Nov 29 03:52:27 np0005539552 ovn_metadata_agent[143394]:    log-tag     haproxy-metadata-proxy-66c5f903-8626-4025-8a8f-4e0589b5aac5
Nov 29 03:52:27 np0005539552 ovn_metadata_agent[143394]:    user        root
Nov 29 03:52:27 np0005539552 ovn_metadata_agent[143394]:    group       root
Nov 29 03:52:27 np0005539552 ovn_metadata_agent[143394]:    maxconn     1024
Nov 29 03:52:27 np0005539552 ovn_metadata_agent[143394]:    pidfile     /var/lib/neutron/external/pids/66c5f903-8626-4025-8a8f-4e0589b5aac5.pid.haproxy
Nov 29 03:52:27 np0005539552 ovn_metadata_agent[143394]:    daemon
Nov 29 03:52:27 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:52:27 np0005539552 ovn_metadata_agent[143394]: defaults
Nov 29 03:52:27 np0005539552 ovn_metadata_agent[143394]:    log global
Nov 29 03:52:27 np0005539552 ovn_metadata_agent[143394]:    mode http
Nov 29 03:52:27 np0005539552 ovn_metadata_agent[143394]:    option httplog
Nov 29 03:52:27 np0005539552 ovn_metadata_agent[143394]:    option dontlognull
Nov 29 03:52:27 np0005539552 ovn_metadata_agent[143394]:    option http-server-close
Nov 29 03:52:27 np0005539552 ovn_metadata_agent[143394]:    option forwardfor
Nov 29 03:52:27 np0005539552 ovn_metadata_agent[143394]:    retries                 3
Nov 29 03:52:27 np0005539552 ovn_metadata_agent[143394]:    timeout http-request    30s
Nov 29 03:52:27 np0005539552 ovn_metadata_agent[143394]:    timeout connect         30s
Nov 29 03:52:27 np0005539552 ovn_metadata_agent[143394]:    timeout client          32s
Nov 29 03:52:27 np0005539552 ovn_metadata_agent[143394]:    timeout server          32s
Nov 29 03:52:27 np0005539552 ovn_metadata_agent[143394]:    timeout http-keep-alive 30s
Nov 29 03:52:27 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:52:27 np0005539552 ovn_metadata_agent[143394]: 
Nov 29 03:52:27 np0005539552 ovn_metadata_agent[143394]: listen listener
Nov 29 03:52:27 np0005539552 ovn_metadata_agent[143394]:    bind 169.254.169.254:80
Nov 29 03:52:27 np0005539552 ovn_metadata_agent[143394]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:52:27 np0005539552 ovn_metadata_agent[143394]:    http-request add-header X-OVN-Network-ID 66c5f903-8626-4025-8a8f-4e0589b5aac5
Nov 29 03:52:27 np0005539552 ovn_metadata_agent[143394]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:52:27 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:52:27.138 143400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-66c5f903-8626-4025-8a8f-4e0589b5aac5', 'env', 'PROCESS_TAG=haproxy-66c5f903-8626-4025-8a8f-4e0589b5aac5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/66c5f903-8626-4025-8a8f-4e0589b5aac5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:52:27 np0005539552 nova_compute[233724]: 2025-11-29 08:52:27.417 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764406347.4169884, 16db2bb6-d5b2-4f00-8d36-e1ce814cd722 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:52:27 np0005539552 nova_compute[233724]: 2025-11-29 08:52:27.417 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] VM Started (Lifecycle Event)#033[00m
Nov 29 03:52:27 np0005539552 nova_compute[233724]: 2025-11-29 08:52:27.419 233728 DEBUG nova.compute.manager [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:52:27 np0005539552 nova_compute[233724]: 2025-11-29 08:52:27.423 233728 DEBUG nova.virt.libvirt.driver [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:52:27 np0005539552 nova_compute[233724]: 2025-11-29 08:52:27.426 233728 INFO nova.virt.libvirt.driver [-] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Instance spawned successfully.#033[00m
Nov 29 03:52:27 np0005539552 nova_compute[233724]: 2025-11-29 08:52:27.426 233728 DEBUG nova.virt.libvirt.driver [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:52:27 np0005539552 nova_compute[233724]: 2025-11-29 08:52:27.438 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:52:27 np0005539552 nova_compute[233724]: 2025-11-29 08:52:27.442 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:52:27 np0005539552 nova_compute[233724]: 2025-11-29 08:52:27.452 233728 DEBUG nova.virt.libvirt.driver [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:52:27 np0005539552 nova_compute[233724]: 2025-11-29 08:52:27.452 233728 DEBUG nova.virt.libvirt.driver [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:52:27 np0005539552 nova_compute[233724]: 2025-11-29 08:52:27.453 233728 DEBUG nova.virt.libvirt.driver [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:52:27 np0005539552 nova_compute[233724]: 2025-11-29 08:52:27.453 233728 DEBUG nova.virt.libvirt.driver [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:52:27 np0005539552 nova_compute[233724]: 2025-11-29 08:52:27.453 233728 DEBUG nova.virt.libvirt.driver [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:52:27 np0005539552 nova_compute[233724]: 2025-11-29 08:52:27.454 233728 DEBUG nova.virt.libvirt.driver [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:52:27 np0005539552 nova_compute[233724]: 2025-11-29 08:52:27.477 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:52:27 np0005539552 nova_compute[233724]: 2025-11-29 08:52:27.477 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764406347.417195, 16db2bb6-d5b2-4f00-8d36-e1ce814cd722 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:52:27 np0005539552 nova_compute[233724]: 2025-11-29 08:52:27.477 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:52:27 np0005539552 nova_compute[233724]: 2025-11-29 08:52:27.501 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:52:27 np0005539552 nova_compute[233724]: 2025-11-29 08:52:27.504 233728 DEBUG nova.virt.driver [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] Emitting event <LifecycleEvent: 1764406347.422368, 16db2bb6-d5b2-4f00-8d36-e1ce814cd722 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:52:27 np0005539552 nova_compute[233724]: 2025-11-29 08:52:27.504 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:52:27 np0005539552 podman[326452]: 2025-11-29 08:52:27.508208779 +0000 UTC m=+0.060685634 container create 5cb3202d54d5d9e02fbc011d9640ace817e21e247b745fff3c0c3ea5ffeec16e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-66c5f903-8626-4025-8a8f-4e0589b5aac5, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:52:27 np0005539552 nova_compute[233724]: 2025-11-29 08:52:27.526 233728 INFO nova.compute.manager [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Took 5.20 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:52:27 np0005539552 nova_compute[233724]: 2025-11-29 08:52:27.527 233728 DEBUG nova.compute.manager [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:52:27 np0005539552 nova_compute[233724]: 2025-11-29 08:52:27.528 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:52:27 np0005539552 nova_compute[233724]: 2025-11-29 08:52:27.534 233728 DEBUG nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:52:27 np0005539552 systemd[1]: Started libpod-conmon-5cb3202d54d5d9e02fbc011d9640ace817e21e247b745fff3c0c3ea5ffeec16e.scope.
Nov 29 03:52:27 np0005539552 podman[326452]: 2025-11-29 08:52:27.474338218 +0000 UTC m=+0.026815103 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:52:27 np0005539552 nova_compute[233724]: 2025-11-29 08:52:27.572 233728 INFO nova.compute.manager [None req-7f977b5e-e514-459e-8cf3-3841b8c0cfac - - - - - -] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:52:27 np0005539552 nova_compute[233724]: 2025-11-29 08:52:27.599 233728 INFO nova.compute.manager [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Took 7.40 seconds to build instance.#033[00m
Nov 29 03:52:27 np0005539552 systemd[1]: Started libcrun container.
Nov 29 03:52:27 np0005539552 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fccaf5d3ff08d6677d9c9810f173387b1a1a23a51727c84f3e0ab617f01d5457/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:52:27 np0005539552 podman[326452]: 2025-11-29 08:52:27.64381941 +0000 UTC m=+0.196296285 container init 5cb3202d54d5d9e02fbc011d9640ace817e21e247b745fff3c0c3ea5ffeec16e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-66c5f903-8626-4025-8a8f-4e0589b5aac5, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 03:52:27 np0005539552 podman[326452]: 2025-11-29 08:52:27.651864286 +0000 UTC m=+0.204341141 container start 5cb3202d54d5d9e02fbc011d9640ace817e21e247b745fff3c0c3ea5ffeec16e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-66c5f903-8626-4025-8a8f-4e0589b5aac5, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:52:27 np0005539552 neutron-haproxy-ovnmeta-66c5f903-8626-4025-8a8f-4e0589b5aac5[326468]: [NOTICE]   (326472) : New worker (326474) forked
Nov 29 03:52:27 np0005539552 neutron-haproxy-ovnmeta-66c5f903-8626-4025-8a8f-4e0589b5aac5[326468]: [NOTICE]   (326472) : Loading success.
Nov 29 03:52:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:27.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:27 np0005539552 nova_compute[233724]: 2025-11-29 08:52:27.681 233728 DEBUG oslo_concurrency.lockutils [None req-1a3255ce-9b0f-44b3-8550-af2911893a74 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Lock "16db2bb6-d5b2-4f00-8d36-e1ce814cd722" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:52:27 np0005539552 nova_compute[233724]: 2025-11-29 08:52:27.707 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:27.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:29 np0005539552 nova_compute[233724]: 2025-11-29 08:52:29.115 233728 DEBUG nova.compute.manager [req-6a72fa49-0729-4490-a3a5-2cd938d5c92b req-c39d3d29-1594-4834-b2ce-b4f3eb97da3b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Received event network-vif-plugged-ba9de23c-de81-495d-839f-d8ccb6604f76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:52:29 np0005539552 nova_compute[233724]: 2025-11-29 08:52:29.115 233728 DEBUG oslo_concurrency.lockutils [req-6a72fa49-0729-4490-a3a5-2cd938d5c92b req-c39d3d29-1594-4834-b2ce-b4f3eb97da3b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "16db2bb6-d5b2-4f00-8d36-e1ce814cd722-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:29 np0005539552 nova_compute[233724]: 2025-11-29 08:52:29.116 233728 DEBUG oslo_concurrency.lockutils [req-6a72fa49-0729-4490-a3a5-2cd938d5c92b req-c39d3d29-1594-4834-b2ce-b4f3eb97da3b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "16db2bb6-d5b2-4f00-8d36-e1ce814cd722-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:29 np0005539552 nova_compute[233724]: 2025-11-29 08:52:29.116 233728 DEBUG oslo_concurrency.lockutils [req-6a72fa49-0729-4490-a3a5-2cd938d5c92b req-c39d3d29-1594-4834-b2ce-b4f3eb97da3b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "16db2bb6-d5b2-4f00-8d36-e1ce814cd722-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:52:29 np0005539552 nova_compute[233724]: 2025-11-29 08:52:29.117 233728 DEBUG nova.compute.manager [req-6a72fa49-0729-4490-a3a5-2cd938d5c92b req-c39d3d29-1594-4834-b2ce-b4f3eb97da3b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] No waiting events found dispatching network-vif-plugged-ba9de23c-de81-495d-839f-d8ccb6604f76 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:52:29 np0005539552 nova_compute[233724]: 2025-11-29 08:52:29.117 233728 WARNING nova.compute.manager [req-6a72fa49-0729-4490-a3a5-2cd938d5c92b req-c39d3d29-1594-4834-b2ce-b4f3eb97da3b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Received unexpected event network-vif-plugged-ba9de23c-de81-495d-839f-d8ccb6604f76 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:52:29 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:29.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:29.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:30 np0005539552 nova_compute[233724]: 2025-11-29 08:52:30.662 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:52:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:31.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:52:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:31.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:31 np0005539552 nova_compute[233724]: 2025-11-29 08:52:31.955 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:31 np0005539552 NetworkManager[48926]: <info>  [1764406351.9564] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/428)
Nov 29 03:52:31 np0005539552 NetworkManager[48926]: <info>  [1764406351.9572] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/429)
Nov 29 03:52:32 np0005539552 nova_compute[233724]: 2025-11-29 08:52:32.100 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:32 np0005539552 ovn_controller[133798]: 2025-11-29T08:52:32Z|00968|binding|INFO|Releasing lport 66a16424-da27-4199-8a91-be26bf876405 from this chassis (sb_readonly=0)
Nov 29 03:52:32 np0005539552 nova_compute[233724]: 2025-11-29 08:52:32.117 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:32 np0005539552 nova_compute[233724]: 2025-11-29 08:52:32.234 233728 DEBUG nova.compute.manager [req-a27ed02c-136d-4c64-aefd-3436020b3653 req-52a2e4c6-3027-40c1-92fd-d0e3f8c6b4c0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Received event network-changed-ba9de23c-de81-495d-839f-d8ccb6604f76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:52:32 np0005539552 nova_compute[233724]: 2025-11-29 08:52:32.234 233728 DEBUG nova.compute.manager [req-a27ed02c-136d-4c64-aefd-3436020b3653 req-52a2e4c6-3027-40c1-92fd-d0e3f8c6b4c0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Refreshing instance network info cache due to event network-changed-ba9de23c-de81-495d-839f-d8ccb6604f76. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:52:32 np0005539552 nova_compute[233724]: 2025-11-29 08:52:32.235 233728 DEBUG oslo_concurrency.lockutils [req-a27ed02c-136d-4c64-aefd-3436020b3653 req-52a2e4c6-3027-40c1-92fd-d0e3f8c6b4c0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-16db2bb6-d5b2-4f00-8d36-e1ce814cd722" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:52:32 np0005539552 nova_compute[233724]: 2025-11-29 08:52:32.235 233728 DEBUG oslo_concurrency.lockutils [req-a27ed02c-136d-4c64-aefd-3436020b3653 req-52a2e4c6-3027-40c1-92fd-d0e3f8c6b4c0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-16db2bb6-d5b2-4f00-8d36-e1ce814cd722" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:52:32 np0005539552 nova_compute[233724]: 2025-11-29 08:52:32.235 233728 DEBUG nova.network.neutron [req-a27ed02c-136d-4c64-aefd-3436020b3653 req-52a2e4c6-3027-40c1-92fd-d0e3f8c6b4c0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Refreshing network info cache for port ba9de23c-de81-495d-839f-d8ccb6604f76 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:52:32 np0005539552 nova_compute[233724]: 2025-11-29 08:52:32.710 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:33.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:33.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:34 np0005539552 nova_compute[233724]: 2025-11-29 08:52:34.168 233728 DEBUG nova.network.neutron [req-a27ed02c-136d-4c64-aefd-3436020b3653 req-52a2e4c6-3027-40c1-92fd-d0e3f8c6b4c0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Updated VIF entry in instance network info cache for port ba9de23c-de81-495d-839f-d8ccb6604f76. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:52:34 np0005539552 nova_compute[233724]: 2025-11-29 08:52:34.169 233728 DEBUG nova.network.neutron [req-a27ed02c-136d-4c64-aefd-3436020b3653 req-52a2e4c6-3027-40c1-92fd-d0e3f8c6b4c0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Updating instance_info_cache with network_info: [{"id": "ba9de23c-de81-495d-839f-d8ccb6604f76", "address": "fa:16:3e:c8:0f:c4", "network": {"id": "66c5f903-8626-4025-8a8f-4e0589b5aac5", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1405316817-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "676f70280bf945cd90578b14202243e2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba9de23c-de", "ovs_interfaceid": "ba9de23c-de81-495d-839f-d8ccb6604f76", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:52:34 np0005539552 nova_compute[233724]: 2025-11-29 08:52:34.186 233728 DEBUG oslo_concurrency.lockutils [req-a27ed02c-136d-4c64-aefd-3436020b3653 req-52a2e4c6-3027-40c1-92fd-d0e3f8c6b4c0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-16db2bb6-d5b2-4f00-8d36-e1ce814cd722" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:52:34 np0005539552 nova_compute[233724]: 2025-11-29 08:52:34.368 233728 DEBUG nova.compute.manager [req-9c39efd2-40ee-4131-8c5d-3feffa256249 req-cdc7129b-acac-4d43-8d3b-9abd4f5f2a37 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Received event network-changed-ba9de23c-de81-495d-839f-d8ccb6604f76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:52:34 np0005539552 nova_compute[233724]: 2025-11-29 08:52:34.368 233728 DEBUG nova.compute.manager [req-9c39efd2-40ee-4131-8c5d-3feffa256249 req-cdc7129b-acac-4d43-8d3b-9abd4f5f2a37 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Refreshing instance network info cache due to event network-changed-ba9de23c-de81-495d-839f-d8ccb6604f76. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:52:34 np0005539552 nova_compute[233724]: 2025-11-29 08:52:34.369 233728 DEBUG oslo_concurrency.lockutils [req-9c39efd2-40ee-4131-8c5d-3feffa256249 req-cdc7129b-acac-4d43-8d3b-9abd4f5f2a37 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-16db2bb6-d5b2-4f00-8d36-e1ce814cd722" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:52:34 np0005539552 nova_compute[233724]: 2025-11-29 08:52:34.369 233728 DEBUG oslo_concurrency.lockutils [req-9c39efd2-40ee-4131-8c5d-3feffa256249 req-cdc7129b-acac-4d43-8d3b-9abd4f5f2a37 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-16db2bb6-d5b2-4f00-8d36-e1ce814cd722" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:52:34 np0005539552 nova_compute[233724]: 2025-11-29 08:52:34.369 233728 DEBUG nova.network.neutron [req-9c39efd2-40ee-4131-8c5d-3feffa256249 req-cdc7129b-acac-4d43-8d3b-9abd4f5f2a37 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Refreshing network info cache for port ba9de23c-de81-495d-839f-d8ccb6604f76 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:52:34 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:35 np0005539552 nova_compute[233724]: 2025-11-29 08:52:35.666 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:35.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:35.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:35 np0005539552 nova_compute[233724]: 2025-11-29 08:52:35.756 233728 DEBUG nova.network.neutron [req-9c39efd2-40ee-4131-8c5d-3feffa256249 req-cdc7129b-acac-4d43-8d3b-9abd4f5f2a37 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Updated VIF entry in instance network info cache for port ba9de23c-de81-495d-839f-d8ccb6604f76. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:52:35 np0005539552 nova_compute[233724]: 2025-11-29 08:52:35.756 233728 DEBUG nova.network.neutron [req-9c39efd2-40ee-4131-8c5d-3feffa256249 req-cdc7129b-acac-4d43-8d3b-9abd4f5f2a37 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Updating instance_info_cache with network_info: [{"id": "ba9de23c-de81-495d-839f-d8ccb6604f76", "address": "fa:16:3e:c8:0f:c4", "network": {"id": "66c5f903-8626-4025-8a8f-4e0589b5aac5", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1405316817-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "676f70280bf945cd90578b14202243e2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba9de23c-de", "ovs_interfaceid": "ba9de23c-de81-495d-839f-d8ccb6604f76", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:52:35 np0005539552 nova_compute[233724]: 2025-11-29 08:52:35.770 233728 DEBUG oslo_concurrency.lockutils [req-9c39efd2-40ee-4131-8c5d-3feffa256249 req-cdc7129b-acac-4d43-8d3b-9abd4f5f2a37 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-16db2bb6-d5b2-4f00-8d36-e1ce814cd722" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:52:35 np0005539552 nova_compute[233724]: 2025-11-29 08:52:35.770 233728 DEBUG nova.compute.manager [req-9c39efd2-40ee-4131-8c5d-3feffa256249 req-cdc7129b-acac-4d43-8d3b-9abd4f5f2a37 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Received event network-changed-ba9de23c-de81-495d-839f-d8ccb6604f76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:52:35 np0005539552 nova_compute[233724]: 2025-11-29 08:52:35.770 233728 DEBUG nova.compute.manager [req-9c39efd2-40ee-4131-8c5d-3feffa256249 req-cdc7129b-acac-4d43-8d3b-9abd4f5f2a37 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Refreshing instance network info cache due to event network-changed-ba9de23c-de81-495d-839f-d8ccb6604f76. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:52:35 np0005539552 nova_compute[233724]: 2025-11-29 08:52:35.770 233728 DEBUG oslo_concurrency.lockutils [req-9c39efd2-40ee-4131-8c5d-3feffa256249 req-cdc7129b-acac-4d43-8d3b-9abd4f5f2a37 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-16db2bb6-d5b2-4f00-8d36-e1ce814cd722" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:52:35 np0005539552 nova_compute[233724]: 2025-11-29 08:52:35.770 233728 DEBUG oslo_concurrency.lockutils [req-9c39efd2-40ee-4131-8c5d-3feffa256249 req-cdc7129b-acac-4d43-8d3b-9abd4f5f2a37 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-16db2bb6-d5b2-4f00-8d36-e1ce814cd722" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:52:35 np0005539552 nova_compute[233724]: 2025-11-29 08:52:35.771 233728 DEBUG nova.network.neutron [req-9c39efd2-40ee-4131-8c5d-3feffa256249 req-cdc7129b-acac-4d43-8d3b-9abd4f5f2a37 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Refreshing network info cache for port ba9de23c-de81-495d-839f-d8ccb6604f76 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:52:36 np0005539552 nova_compute[233724]: 2025-11-29 08:52:36.930 233728 DEBUG nova.network.neutron [req-9c39efd2-40ee-4131-8c5d-3feffa256249 req-cdc7129b-acac-4d43-8d3b-9abd4f5f2a37 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Updated VIF entry in instance network info cache for port ba9de23c-de81-495d-839f-d8ccb6604f76. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:52:36 np0005539552 nova_compute[233724]: 2025-11-29 08:52:36.931 233728 DEBUG nova.network.neutron [req-9c39efd2-40ee-4131-8c5d-3feffa256249 req-cdc7129b-acac-4d43-8d3b-9abd4f5f2a37 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Updating instance_info_cache with network_info: [{"id": "ba9de23c-de81-495d-839f-d8ccb6604f76", "address": "fa:16:3e:c8:0f:c4", "network": {"id": "66c5f903-8626-4025-8a8f-4e0589b5aac5", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1405316817-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "676f70280bf945cd90578b14202243e2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba9de23c-de", "ovs_interfaceid": "ba9de23c-de81-495d-839f-d8ccb6604f76", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:52:36 np0005539552 nova_compute[233724]: 2025-11-29 08:52:36.955 233728 DEBUG oslo_concurrency.lockutils [req-9c39efd2-40ee-4131-8c5d-3feffa256249 req-cdc7129b-acac-4d43-8d3b-9abd4f5f2a37 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-16db2bb6-d5b2-4f00-8d36-e1ce814cd722" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:52:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:37.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:37 np0005539552 nova_compute[233724]: 2025-11-29 08:52:37.712 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:37.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:52:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/443804662' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:52:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:52:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/443804662' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:52:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:39.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:39.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:40 np0005539552 ovn_controller[133798]: 2025-11-29T08:52:40Z|00137|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c8:0f:c4 10.100.0.4
Nov 29 03:52:40 np0005539552 ovn_controller[133798]: 2025-11-29T08:52:40Z|00138|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c8:0f:c4 10.100.0.4
Nov 29 03:52:40 np0005539552 nova_compute[233724]: 2025-11-29 08:52:40.670 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:41.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:41.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:42 np0005539552 nova_compute[233724]: 2025-11-29 08:52:42.716 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:43.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:43.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:44 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:45 np0005539552 nova_compute[233724]: 2025-11-29 08:52:45.674 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:52:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:45.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:52:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:52:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:45.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:52:47 np0005539552 nova_compute[233724]: 2025-11-29 08:52:47.230 233728 DEBUG nova.compute.manager [req-f38db5ba-5368-43d7-909d-48790a435f80 req-63ec6072-28e9-4f9e-87ab-36be4f6ef4d2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Received event network-changed-ba9de23c-de81-495d-839f-d8ccb6604f76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:52:47 np0005539552 nova_compute[233724]: 2025-11-29 08:52:47.231 233728 DEBUG nova.compute.manager [req-f38db5ba-5368-43d7-909d-48790a435f80 req-63ec6072-28e9-4f9e-87ab-36be4f6ef4d2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Refreshing instance network info cache due to event network-changed-ba9de23c-de81-495d-839f-d8ccb6604f76. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:52:47 np0005539552 nova_compute[233724]: 2025-11-29 08:52:47.231 233728 DEBUG oslo_concurrency.lockutils [req-f38db5ba-5368-43d7-909d-48790a435f80 req-63ec6072-28e9-4f9e-87ab-36be4f6ef4d2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-16db2bb6-d5b2-4f00-8d36-e1ce814cd722" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:52:47 np0005539552 nova_compute[233724]: 2025-11-29 08:52:47.232 233728 DEBUG oslo_concurrency.lockutils [req-f38db5ba-5368-43d7-909d-48790a435f80 req-63ec6072-28e9-4f9e-87ab-36be4f6ef4d2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-16db2bb6-d5b2-4f00-8d36-e1ce814cd722" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:52:47 np0005539552 nova_compute[233724]: 2025-11-29 08:52:47.232 233728 DEBUG nova.network.neutron [req-f38db5ba-5368-43d7-909d-48790a435f80 req-63ec6072-28e9-4f9e-87ab-36be4f6ef4d2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Refreshing network info cache for port ba9de23c-de81-495d-839f-d8ccb6604f76 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:52:47 np0005539552 nova_compute[233724]: 2025-11-29 08:52:47.297 233728 DEBUG oslo_concurrency.lockutils [None req-37b9310b-9474-47f1-8f1e-1e92c154cea9 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Acquiring lock "16db2bb6-d5b2-4f00-8d36-e1ce814cd722" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:47 np0005539552 nova_compute[233724]: 2025-11-29 08:52:47.298 233728 DEBUG oslo_concurrency.lockutils [None req-37b9310b-9474-47f1-8f1e-1e92c154cea9 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Lock "16db2bb6-d5b2-4f00-8d36-e1ce814cd722" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:47 np0005539552 nova_compute[233724]: 2025-11-29 08:52:47.298 233728 DEBUG oslo_concurrency.lockutils [None req-37b9310b-9474-47f1-8f1e-1e92c154cea9 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Acquiring lock "16db2bb6-d5b2-4f00-8d36-e1ce814cd722-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:47 np0005539552 nova_compute[233724]: 2025-11-29 08:52:47.298 233728 DEBUG oslo_concurrency.lockutils [None req-37b9310b-9474-47f1-8f1e-1e92c154cea9 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Lock "16db2bb6-d5b2-4f00-8d36-e1ce814cd722-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:47 np0005539552 nova_compute[233724]: 2025-11-29 08:52:47.299 233728 DEBUG oslo_concurrency.lockutils [None req-37b9310b-9474-47f1-8f1e-1e92c154cea9 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Lock "16db2bb6-d5b2-4f00-8d36-e1ce814cd722-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:52:47 np0005539552 nova_compute[233724]: 2025-11-29 08:52:47.301 233728 INFO nova.compute.manager [None req-37b9310b-9474-47f1-8f1e-1e92c154cea9 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Terminating instance#033[00m
Nov 29 03:52:47 np0005539552 nova_compute[233724]: 2025-11-29 08:52:47.303 233728 DEBUG nova.compute.manager [None req-37b9310b-9474-47f1-8f1e-1e92c154cea9 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:52:47 np0005539552 kernel: tapba9de23c-de (unregistering): left promiscuous mode
Nov 29 03:52:47 np0005539552 NetworkManager[48926]: <info>  [1764406367.3665] device (tapba9de23c-de): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:52:47 np0005539552 ovn_controller[133798]: 2025-11-29T08:52:47Z|00969|binding|INFO|Releasing lport ba9de23c-de81-495d-839f-d8ccb6604f76 from this chassis (sb_readonly=0)
Nov 29 03:52:47 np0005539552 ovn_controller[133798]: 2025-11-29T08:52:47Z|00970|binding|INFO|Setting lport ba9de23c-de81-495d-839f-d8ccb6604f76 down in Southbound
Nov 29 03:52:47 np0005539552 nova_compute[233724]: 2025-11-29 08:52:47.371 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:47 np0005539552 ovn_controller[133798]: 2025-11-29T08:52:47Z|00971|binding|INFO|Removing iface tapba9de23c-de ovn-installed in OVS
Nov 29 03:52:47 np0005539552 nova_compute[233724]: 2025-11-29 08:52:47.375 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:52:47.381 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c8:0f:c4 10.100.0.4'], port_security=['fa:16:3e:c8:0f:c4 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '16db2bb6-d5b2-4f00-8d36-e1ce814cd722', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-66c5f903-8626-4025-8a8f-4e0589b5aac5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '676f70280bf945cd90578b14202243e2', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5d0274c4-ca7d-485b-8cef-af2050593c6e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7abd7f17-e2fc-4653-8b0a-6623396c53a0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>], logical_port=ba9de23c-de81-495d-839f-d8ccb6604f76) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fddc701a6d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:52:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:52:47.384 143400 INFO neutron.agent.ovn.metadata.agent [-] Port ba9de23c-de81-495d-839f-d8ccb6604f76 in datapath 66c5f903-8626-4025-8a8f-4e0589b5aac5 unbound from our chassis#033[00m
Nov 29 03:52:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:52:47.387 143400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 66c5f903-8626-4025-8a8f-4e0589b5aac5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:52:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:52:47.389 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[328f41e2-dc5b-4708-a62a-96b238e2ef2a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:52:47.390 143400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-66c5f903-8626-4025-8a8f-4e0589b5aac5 namespace which is not needed anymore#033[00m
Nov 29 03:52:47 np0005539552 nova_compute[233724]: 2025-11-29 08:52:47.413 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:47 np0005539552 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d000000e0.scope: Deactivated successfully.
Nov 29 03:52:47 np0005539552 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d000000e0.scope: Consumed 14.027s CPU time.
Nov 29 03:52:47 np0005539552 systemd-machined[196379]: Machine qemu-97-instance-000000e0 terminated.
Nov 29 03:52:47 np0005539552 nova_compute[233724]: 2025-11-29 08:52:47.528 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:47 np0005539552 nova_compute[233724]: 2025-11-29 08:52:47.536 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:47 np0005539552 nova_compute[233724]: 2025-11-29 08:52:47.545 233728 INFO nova.virt.libvirt.driver [-] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Instance destroyed successfully.#033[00m
Nov 29 03:52:47 np0005539552 nova_compute[233724]: 2025-11-29 08:52:47.546 233728 DEBUG nova.objects.instance [None req-37b9310b-9474-47f1-8f1e-1e92c154cea9 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Lazy-loading 'resources' on Instance uuid 16db2bb6-d5b2-4f00-8d36-e1ce814cd722 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:52:47 np0005539552 neutron-haproxy-ovnmeta-66c5f903-8626-4025-8a8f-4e0589b5aac5[326468]: [NOTICE]   (326472) : haproxy version is 2.8.14-c23fe91
Nov 29 03:52:47 np0005539552 neutron-haproxy-ovnmeta-66c5f903-8626-4025-8a8f-4e0589b5aac5[326468]: [NOTICE]   (326472) : path to executable is /usr/sbin/haproxy
Nov 29 03:52:47 np0005539552 neutron-haproxy-ovnmeta-66c5f903-8626-4025-8a8f-4e0589b5aac5[326468]: [WARNING]  (326472) : Exiting Master process...
Nov 29 03:52:47 np0005539552 neutron-haproxy-ovnmeta-66c5f903-8626-4025-8a8f-4e0589b5aac5[326468]: [ALERT]    (326472) : Current worker (326474) exited with code 143 (Terminated)
Nov 29 03:52:47 np0005539552 neutron-haproxy-ovnmeta-66c5f903-8626-4025-8a8f-4e0589b5aac5[326468]: [WARNING]  (326472) : All workers exited. Exiting... (0)
Nov 29 03:52:47 np0005539552 nova_compute[233724]: 2025-11-29 08:52:47.581 233728 DEBUG nova.virt.libvirt.vif [None req-37b9310b-9474-47f1-8f1e-1e92c154cea9 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:52:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-1124550516',display_name='tempest-TestVolumeBackupRestore-server-1124550516',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-1124550516',id=224,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM2GphRUms02m5j0ijPtPJ6dSAhJBYmhnLZQ34iieerlpfPtQ5Ielz3rGVcqEiO64fmIWRizL2oz7oz/HOrDliMGUWSt6M5a2mbVISLPL0UKAsnk/sS4qQNWCAUsKUE9Lw==',key_name='tempest-TestVolumeBackupRestore-1353448143',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:52:27Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='676f70280bf945cd90578b14202243e2',ramdisk_id='',reservation_id='r-tt9a7f24',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBackupRestore-35370002',owner_user_name='tempest-TestVolumeBackupRestore-35370002-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:52:27Z,user_data=None,user_id='c3778685080b4955bd80ff7056a1c9f2',uuid=16db2bb6-d5b2-4f00-8d36-e1ce814cd722,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ba9de23c-de81-495d-839f-d8ccb6604f76", "address": "fa:16:3e:c8:0f:c4", "network": {"id": "66c5f903-8626-4025-8a8f-4e0589b5aac5", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1405316817-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "676f70280bf945cd90578b14202243e2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba9de23c-de", "ovs_interfaceid": "ba9de23c-de81-495d-839f-d8ccb6604f76", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:52:47 np0005539552 nova_compute[233724]: 2025-11-29 08:52:47.582 233728 DEBUG nova.network.os_vif_util [None req-37b9310b-9474-47f1-8f1e-1e92c154cea9 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Converting VIF {"id": "ba9de23c-de81-495d-839f-d8ccb6604f76", "address": "fa:16:3e:c8:0f:c4", "network": {"id": "66c5f903-8626-4025-8a8f-4e0589b5aac5", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1405316817-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "676f70280bf945cd90578b14202243e2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba9de23c-de", "ovs_interfaceid": "ba9de23c-de81-495d-839f-d8ccb6604f76", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:52:47 np0005539552 nova_compute[233724]: 2025-11-29 08:52:47.583 233728 DEBUG nova.network.os_vif_util [None req-37b9310b-9474-47f1-8f1e-1e92c154cea9 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c8:0f:c4,bridge_name='br-int',has_traffic_filtering=True,id=ba9de23c-de81-495d-839f-d8ccb6604f76,network=Network(66c5f903-8626-4025-8a8f-4e0589b5aac5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba9de23c-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:52:47 np0005539552 nova_compute[233724]: 2025-11-29 08:52:47.583 233728 DEBUG os_vif [None req-37b9310b-9474-47f1-8f1e-1e92c154cea9 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c8:0f:c4,bridge_name='br-int',has_traffic_filtering=True,id=ba9de23c-de81-495d-839f-d8ccb6604f76,network=Network(66c5f903-8626-4025-8a8f-4e0589b5aac5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba9de23c-de') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:52:47 np0005539552 systemd[1]: libpod-5cb3202d54d5d9e02fbc011d9640ace817e21e247b745fff3c0c3ea5ffeec16e.scope: Deactivated successfully.
Nov 29 03:52:47 np0005539552 nova_compute[233724]: 2025-11-29 08:52:47.585 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:47 np0005539552 nova_compute[233724]: 2025-11-29 08:52:47.586 233728 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapba9de23c-de, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:52:47 np0005539552 nova_compute[233724]: 2025-11-29 08:52:47.587 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:47 np0005539552 nova_compute[233724]: 2025-11-29 08:52:47.590 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:47 np0005539552 podman[326573]: 2025-11-29 08:52:47.592148755 +0000 UTC m=+0.063607523 container died 5cb3202d54d5d9e02fbc011d9640ace817e21e247b745fff3c0c3ea5ffeec16e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-66c5f903-8626-4025-8a8f-4e0589b5aac5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:52:47 np0005539552 nova_compute[233724]: 2025-11-29 08:52:47.594 233728 INFO os_vif [None req-37b9310b-9474-47f1-8f1e-1e92c154cea9 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c8:0f:c4,bridge_name='br-int',has_traffic_filtering=True,id=ba9de23c-de81-495d-839f-d8ccb6604f76,network=Network(66c5f903-8626-4025-8a8f-4e0589b5aac5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba9de23c-de')#033[00m
Nov 29 03:52:47 np0005539552 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5cb3202d54d5d9e02fbc011d9640ace817e21e247b745fff3c0c3ea5ffeec16e-userdata-shm.mount: Deactivated successfully.
Nov 29 03:52:47 np0005539552 systemd[1]: var-lib-containers-storage-overlay-fccaf5d3ff08d6677d9c9810f173387b1a1a23a51727c84f3e0ab617f01d5457-merged.mount: Deactivated successfully.
Nov 29 03:52:47 np0005539552 podman[326573]: 2025-11-29 08:52:47.642947472 +0000 UTC m=+0.114406240 container cleanup 5cb3202d54d5d9e02fbc011d9640ace817e21e247b745fff3c0c3ea5ffeec16e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-66c5f903-8626-4025-8a8f-4e0589b5aac5, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 03:52:47 np0005539552 systemd[1]: libpod-conmon-5cb3202d54d5d9e02fbc011d9640ace817e21e247b745fff3c0c3ea5ffeec16e.scope: Deactivated successfully.
Nov 29 03:52:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:47.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:47 np0005539552 podman[326628]: 2025-11-29 08:52:47.713236864 +0000 UTC m=+0.045012712 container remove 5cb3202d54d5d9e02fbc011d9640ace817e21e247b745fff3c0c3ea5ffeec16e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-66c5f903-8626-4025-8a8f-4e0589b5aac5, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:52:47 np0005539552 nova_compute[233724]: 2025-11-29 08:52:47.717 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:52:47.721 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[428f5b64-0f6b-4c1b-9c3c-88d3cd660b74]: (4, ('Sat Nov 29 08:52:47 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-66c5f903-8626-4025-8a8f-4e0589b5aac5 (5cb3202d54d5d9e02fbc011d9640ace817e21e247b745fff3c0c3ea5ffeec16e)\n5cb3202d54d5d9e02fbc011d9640ace817e21e247b745fff3c0c3ea5ffeec16e\nSat Nov 29 08:52:47 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-66c5f903-8626-4025-8a8f-4e0589b5aac5 (5cb3202d54d5d9e02fbc011d9640ace817e21e247b745fff3c0c3ea5ffeec16e)\n5cb3202d54d5d9e02fbc011d9640ace817e21e247b745fff3c0c3ea5ffeec16e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:52:47.723 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[26663095-a743-44bf-8971-c097357799cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:52:47.723 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap66c5f903-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:52:47 np0005539552 kernel: tap66c5f903-80: left promiscuous mode
Nov 29 03:52:47 np0005539552 nova_compute[233724]: 2025-11-29 08:52:47.727 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:52:47.729 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[1a380d5f-b9ae-4c5f-8ef9-44da61c9b4b7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:47 np0005539552 nova_compute[233724]: 2025-11-29 08:52:47.741 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:52:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:47.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:52:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:52:47.751 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[7f56f421-c949-4941-a577-4d19d4258bcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:52:47.752 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[1a976786-b485-4c6d-af44-0211d9b59121]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:52:47.769 239589 DEBUG oslo.privsep.daemon [-] privsep: reply[f38e545d-987f-42b9-ae0f-a193cc91e57c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 939250, 'reachable_time': 38178, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326645, 'error': None, 'target': 'ovnmeta-66c5f903-8626-4025-8a8f-4e0589b5aac5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:52:47.771 143510 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-66c5f903-8626-4025-8a8f-4e0589b5aac5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:52:47 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:52:47.771 143510 DEBUG oslo.privsep.daemon [-] privsep: reply[751dba4c-2ae1-49dd-9f2e-532816342870]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:47 np0005539552 systemd[1]: run-netns-ovnmeta\x2d66c5f903\x2d8626\x2d4025\x2d8a8f\x2d4e0589b5aac5.mount: Deactivated successfully.
Nov 29 03:52:47 np0005539552 nova_compute[233724]: 2025-11-29 08:52:47.819 233728 INFO nova.virt.libvirt.driver [None req-37b9310b-9474-47f1-8f1e-1e92c154cea9 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Deleting instance files /var/lib/nova/instances/16db2bb6-d5b2-4f00-8d36-e1ce814cd722_del#033[00m
Nov 29 03:52:47 np0005539552 nova_compute[233724]: 2025-11-29 08:52:47.820 233728 INFO nova.virt.libvirt.driver [None req-37b9310b-9474-47f1-8f1e-1e92c154cea9 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Deletion of /var/lib/nova/instances/16db2bb6-d5b2-4f00-8d36-e1ce814cd722_del complete#033[00m
Nov 29 03:52:47 np0005539552 nova_compute[233724]: 2025-11-29 08:52:47.878 233728 INFO nova.compute.manager [None req-37b9310b-9474-47f1-8f1e-1e92c154cea9 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Took 0.57 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:52:47 np0005539552 nova_compute[233724]: 2025-11-29 08:52:47.879 233728 DEBUG oslo.service.loopingcall [None req-37b9310b-9474-47f1-8f1e-1e92c154cea9 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:52:47 np0005539552 nova_compute[233724]: 2025-11-29 08:52:47.879 233728 DEBUG nova.compute.manager [-] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:52:47 np0005539552 nova_compute[233724]: 2025-11-29 08:52:47.879 233728 DEBUG nova.network.neutron [-] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:52:47 np0005539552 nova_compute[233724]: 2025-11-29 08:52:47.895 233728 DEBUG nova.compute.manager [req-68af8905-58af-4aaf-b9ac-72cf5f2ae65e req-b46a5f14-3ade-4a7d-8e5c-9b35a9104fe9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Received event network-vif-unplugged-ba9de23c-de81-495d-839f-d8ccb6604f76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:52:47 np0005539552 nova_compute[233724]: 2025-11-29 08:52:47.896 233728 DEBUG oslo_concurrency.lockutils [req-68af8905-58af-4aaf-b9ac-72cf5f2ae65e req-b46a5f14-3ade-4a7d-8e5c-9b35a9104fe9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "16db2bb6-d5b2-4f00-8d36-e1ce814cd722-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:47 np0005539552 nova_compute[233724]: 2025-11-29 08:52:47.896 233728 DEBUG oslo_concurrency.lockutils [req-68af8905-58af-4aaf-b9ac-72cf5f2ae65e req-b46a5f14-3ade-4a7d-8e5c-9b35a9104fe9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "16db2bb6-d5b2-4f00-8d36-e1ce814cd722-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:47 np0005539552 nova_compute[233724]: 2025-11-29 08:52:47.897 233728 DEBUG oslo_concurrency.lockutils [req-68af8905-58af-4aaf-b9ac-72cf5f2ae65e req-b46a5f14-3ade-4a7d-8e5c-9b35a9104fe9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "16db2bb6-d5b2-4f00-8d36-e1ce814cd722-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:52:47 np0005539552 nova_compute[233724]: 2025-11-29 08:52:47.897 233728 DEBUG nova.compute.manager [req-68af8905-58af-4aaf-b9ac-72cf5f2ae65e req-b46a5f14-3ade-4a7d-8e5c-9b35a9104fe9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] No waiting events found dispatching network-vif-unplugged-ba9de23c-de81-495d-839f-d8ccb6604f76 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:52:47 np0005539552 nova_compute[233724]: 2025-11-29 08:52:47.897 233728 DEBUG nova.compute.manager [req-68af8905-58af-4aaf-b9ac-72cf5f2ae65e req-b46a5f14-3ade-4a7d-8e5c-9b35a9104fe9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Received event network-vif-unplugged-ba9de23c-de81-495d-839f-d8ccb6604f76 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:52:48 np0005539552 nova_compute[233724]: 2025-11-29 08:52:48.805 233728 DEBUG nova.network.neutron [-] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:52:48 np0005539552 nova_compute[233724]: 2025-11-29 08:52:48.831 233728 INFO nova.compute.manager [-] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Took 0.95 seconds to deallocate network for instance.#033[00m
Nov 29 03:52:48 np0005539552 nova_compute[233724]: 2025-11-29 08:52:48.912 233728 DEBUG nova.compute.manager [req-98790f14-47ec-45cd-a077-4471429e7b78 req-90899062-f083-4fb8-a85a-d9646c3a1157 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Received event network-vif-deleted-ba9de23c-de81-495d-839f-d8ccb6604f76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:52:48 np0005539552 nova_compute[233724]: 2025-11-29 08:52:48.947 233728 DEBUG nova.network.neutron [req-f38db5ba-5368-43d7-909d-48790a435f80 req-63ec6072-28e9-4f9e-87ab-36be4f6ef4d2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Updated VIF entry in instance network info cache for port ba9de23c-de81-495d-839f-d8ccb6604f76. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:52:48 np0005539552 nova_compute[233724]: 2025-11-29 08:52:48.948 233728 DEBUG nova.network.neutron [req-f38db5ba-5368-43d7-909d-48790a435f80 req-63ec6072-28e9-4f9e-87ab-36be4f6ef4d2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Updating instance_info_cache with network_info: [{"id": "ba9de23c-de81-495d-839f-d8ccb6604f76", "address": "fa:16:3e:c8:0f:c4", "network": {"id": "66c5f903-8626-4025-8a8f-4e0589b5aac5", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1405316817-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "676f70280bf945cd90578b14202243e2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba9de23c-de", "ovs_interfaceid": "ba9de23c-de81-495d-839f-d8ccb6604f76", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:52:48 np0005539552 nova_compute[233724]: 2025-11-29 08:52:48.960 233728 DEBUG oslo_concurrency.lockutils [req-f38db5ba-5368-43d7-909d-48790a435f80 req-63ec6072-28e9-4f9e-87ab-36be4f6ef4d2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-16db2bb6-d5b2-4f00-8d36-e1ce814cd722" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:52:49 np0005539552 nova_compute[233724]: 2025-11-29 08:52:49.008 233728 INFO nova.compute.manager [None req-37b9310b-9474-47f1-8f1e-1e92c154cea9 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Took 0.18 seconds to detach 1 volumes for instance.#033[00m
Nov 29 03:52:49 np0005539552 nova_compute[233724]: 2025-11-29 08:52:49.046 233728 DEBUG oslo_concurrency.lockutils [None req-37b9310b-9474-47f1-8f1e-1e92c154cea9 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:49 np0005539552 nova_compute[233724]: 2025-11-29 08:52:49.046 233728 DEBUG oslo_concurrency.lockutils [None req-37b9310b-9474-47f1-8f1e-1e92c154cea9 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:49 np0005539552 nova_compute[233724]: 2025-11-29 08:52:49.226 233728 DEBUG oslo_concurrency.processutils [None req-37b9310b-9474-47f1-8f1e-1e92c154cea9 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:52:49 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:49 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:52:49 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2233152570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:52:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:49.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:49 np0005539552 nova_compute[233724]: 2025-11-29 08:52:49.709 233728 DEBUG oslo_concurrency.processutils [None req-37b9310b-9474-47f1-8f1e-1e92c154cea9 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:52:49 np0005539552 nova_compute[233724]: 2025-11-29 08:52:49.718 233728 DEBUG nova.compute.provider_tree [None req-37b9310b-9474-47f1-8f1e-1e92c154cea9 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:52:49 np0005539552 nova_compute[233724]: 2025-11-29 08:52:49.745 233728 DEBUG nova.scheduler.client.report [None req-37b9310b-9474-47f1-8f1e-1e92c154cea9 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:52:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:49.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:49 np0005539552 nova_compute[233724]: 2025-11-29 08:52:49.772 233728 DEBUG oslo_concurrency.lockutils [None req-37b9310b-9474-47f1-8f1e-1e92c154cea9 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:52:49 np0005539552 nova_compute[233724]: 2025-11-29 08:52:49.903 233728 INFO nova.scheduler.client.report [None req-37b9310b-9474-47f1-8f1e-1e92c154cea9 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Deleted allocations for instance 16db2bb6-d5b2-4f00-8d36-e1ce814cd722#033[00m
Nov 29 03:52:49 np0005539552 nova_compute[233724]: 2025-11-29 08:52:49.971 233728 DEBUG oslo_concurrency.lockutils [None req-37b9310b-9474-47f1-8f1e-1e92c154cea9 c3778685080b4955bd80ff7056a1c9f2 676f70280bf945cd90578b14202243e2 - - default default] Lock "16db2bb6-d5b2-4f00-8d36-e1ce814cd722" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:52:49 np0005539552 nova_compute[233724]: 2025-11-29 08:52:49.995 233728 DEBUG nova.compute.manager [req-a2351bbe-4af1-40fb-8a3b-82677e47a70b req-95e3ec60-4281-4328-b895-593609b89fed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Received event network-vif-plugged-ba9de23c-de81-495d-839f-d8ccb6604f76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:52:49 np0005539552 nova_compute[233724]: 2025-11-29 08:52:49.996 233728 DEBUG oslo_concurrency.lockutils [req-a2351bbe-4af1-40fb-8a3b-82677e47a70b req-95e3ec60-4281-4328-b895-593609b89fed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "16db2bb6-d5b2-4f00-8d36-e1ce814cd722-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:49 np0005539552 nova_compute[233724]: 2025-11-29 08:52:49.996 233728 DEBUG oslo_concurrency.lockutils [req-a2351bbe-4af1-40fb-8a3b-82677e47a70b req-95e3ec60-4281-4328-b895-593609b89fed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "16db2bb6-d5b2-4f00-8d36-e1ce814cd722-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:49 np0005539552 nova_compute[233724]: 2025-11-29 08:52:49.997 233728 DEBUG oslo_concurrency.lockutils [req-a2351bbe-4af1-40fb-8a3b-82677e47a70b req-95e3ec60-4281-4328-b895-593609b89fed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "16db2bb6-d5b2-4f00-8d36-e1ce814cd722-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:52:49 np0005539552 nova_compute[233724]: 2025-11-29 08:52:49.997 233728 DEBUG nova.compute.manager [req-a2351bbe-4af1-40fb-8a3b-82677e47a70b req-95e3ec60-4281-4328-b895-593609b89fed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] No waiting events found dispatching network-vif-plugged-ba9de23c-de81-495d-839f-d8ccb6604f76 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:52:49 np0005539552 nova_compute[233724]: 2025-11-29 08:52:49.998 233728 WARNING nova.compute.manager [req-a2351bbe-4af1-40fb-8a3b-82677e47a70b req-95e3ec60-4281-4328-b895-593609b89fed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Received unexpected event network-vif-plugged-ba9de23c-de81-495d-839f-d8ccb6604f76 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:52:50 np0005539552 nova_compute[233724]: 2025-11-29 08:52:50.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:52:50 np0005539552 nova_compute[233724]: 2025-11-29 08:52:50.952 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:50 np0005539552 nova_compute[233724]: 2025-11-29 08:52:50.952 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:50 np0005539552 nova_compute[233724]: 2025-11-29 08:52:50.953 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:52:50 np0005539552 nova_compute[233724]: 2025-11-29 08:52:50.953 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:52:50 np0005539552 nova_compute[233724]: 2025-11-29 08:52:50.953 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:52:51 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:52:51 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1868105818' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:52:51 np0005539552 nova_compute[233724]: 2025-11-29 08:52:51.458 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:52:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:51.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:51.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:51 np0005539552 nova_compute[233724]: 2025-11-29 08:52:51.752 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:52:51 np0005539552 nova_compute[233724]: 2025-11-29 08:52:51.753 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4102MB free_disk=20.98798370361328GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:52:51 np0005539552 nova_compute[233724]: 2025-11-29 08:52:51.754 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:51 np0005539552 nova_compute[233724]: 2025-11-29 08:52:51.754 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:51 np0005539552 nova_compute[233724]: 2025-11-29 08:52:51.874 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:52:51 np0005539552 nova_compute[233724]: 2025-11-29 08:52:51.874 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:52:51 np0005539552 nova_compute[233724]: 2025-11-29 08:52:51.896 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:52:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e436 e436: 3 total, 3 up, 3 in
Nov 29 03:52:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:52:52 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2326875002' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:52:52 np0005539552 nova_compute[233724]: 2025-11-29 08:52:52.395 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:52:52 np0005539552 nova_compute[233724]: 2025-11-29 08:52:52.401 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:52:52 np0005539552 nova_compute[233724]: 2025-11-29 08:52:52.424 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:52:52 np0005539552 nova_compute[233724]: 2025-11-29 08:52:52.447 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:52:52 np0005539552 nova_compute[233724]: 2025-11-29 08:52:52.448 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:52:52 np0005539552 nova_compute[233724]: 2025-11-29 08:52:52.588 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:52:52 np0005539552 nova_compute[233724]: 2025-11-29 08:52:52.721 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:52:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:52:52 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3259074578' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:52:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:52:52 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3259074578' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:52:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:53.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:53.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:54 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e437 e437: 3 total, 3 up, 3 in
Nov 29 03:52:54 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:55 np0005539552 podman[326718]: 2025-11-29 08:52:55.015672937 +0000 UTC m=+0.098083061 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, config_id=multipathd)
Nov 29 03:52:55 np0005539552 podman[326719]: 2025-11-29 08:52:55.021074613 +0000 UTC m=+0.086530091 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 03:52:55 np0005539552 podman[326726]: 2025-11-29 08:52:55.023157059 +0000 UTC m=+0.090791815 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Nov 29 03:52:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:52:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:55.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:52:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:55.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:57 np0005539552 nova_compute[233724]: 2025-11-29 08:52:57.591 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:52:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:57.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:57 np0005539552 nova_compute[233724]: 2025-11-29 08:52:57.723 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:52:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:52:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:57.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:52:58 np0005539552 nova_compute[233724]: 2025-11-29 08:52:58.449 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:52:58 np0005539552 nova_compute[233724]: 2025-11-29 08:52:58.449 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:52:58 np0005539552 nova_compute[233724]: 2025-11-29 08:52:58.450 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:52:59 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:52:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:59.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:52:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:52:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:59.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:00 np0005539552 nova_compute[233724]: 2025-11-29 08:53:00.045 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:53:00 np0005539552 nova_compute[233724]: 2025-11-29 08:53:00.278 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:53:00 np0005539552 nova_compute[233724]: 2025-11-29 08:53:00.815 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:53:00 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:53:00.815 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=70, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=69) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:53:00 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:53:00.816 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:53:01 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 e438: 3 total, 3 up, 3 in
Nov 29 03:53:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:01.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:01 np0005539552 nova_compute[233724]: 2025-11-29 08:53:01.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:53:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:53:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:01.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:53:02 np0005539552 nova_compute[233724]: 2025-11-29 08:53:02.543 233728 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764406367.5423255, 16db2bb6-d5b2-4f00-8d36-e1ce814cd722 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:53:02 np0005539552 nova_compute[233724]: 2025-11-29 08:53:02.544 233728 INFO nova.compute.manager [-] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] VM Stopped (Lifecycle Event)
Nov 29 03:53:02 np0005539552 nova_compute[233724]: 2025-11-29 08:53:02.569 233728 DEBUG nova.compute.manager [None req-9703d2f0-e378-4cd8-aa7e-6cd0ad203626 - - - - - -] [instance: 16db2bb6-d5b2-4f00-8d36-e1ce814cd722] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:53:02 np0005539552 nova_compute[233724]: 2025-11-29 08:53:02.596 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:53:02 np0005539552 nova_compute[233724]: 2025-11-29 08:53:02.726 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:53:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:53:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:03.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:53:03 np0005539552 nova_compute[233724]: 2025-11-29 08:53:03.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:53:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:53:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:03.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:53:04 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:05.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:05 np0005539552 nova_compute[233724]: 2025-11-29 08:53:05.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:53:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:05.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:06 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:53:06.818 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '70'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:53:06 np0005539552 nova_compute[233724]: 2025-11-29 08:53:06.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:53:07 np0005539552 nova_compute[233724]: 2025-11-29 08:53:07.599 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:53:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:07.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:07 np0005539552 nova_compute[233724]: 2025-11-29 08:53:07.728 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:53:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:07.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:09 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:09.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:09.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:10 np0005539552 nova_compute[233724]: 2025-11-29 08:53:10.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:53:10 np0005539552 nova_compute[233724]: 2025-11-29 08:53:10.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:53:10 np0005539552 nova_compute[233724]: 2025-11-29 08:53:10.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 03:53:10 np0005539552 nova_compute[233724]: 2025-11-29 08:53:10.951 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 03:53:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:11.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:11 np0005539552 nova_compute[233724]: 2025-11-29 08:53:11.946 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:53:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:11.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:12 np0005539552 nova_compute[233724]: 2025-11-29 08:53:12.602 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:53:12 np0005539552 nova_compute[233724]: 2025-11-29 08:53:12.730 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:53:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:13.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:13.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:14 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:15.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:15.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:17 np0005539552 nova_compute[233724]: 2025-11-29 08:53:17.605 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:53:17 np0005539552 nova_compute[233724]: 2025-11-29 08:53:17.733 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:53:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:17.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:17.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:19 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:19.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:53:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:19.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:53:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:53:20.662 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:53:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:53:20.662 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:53:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:53:20.662 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:53:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:21.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:21 np0005539552 nova_compute[233724]: 2025-11-29 08:53:21.918 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:53:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:53:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:21.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:53:22 np0005539552 nova_compute[233724]: 2025-11-29 08:53:22.608 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:53:22 np0005539552 nova_compute[233724]: 2025-11-29 08:53:22.735 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:53:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:23.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:23.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:24 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:25.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:53:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:25.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:53:26 np0005539552 podman[326908]: 2025-11-29 08:53:26.014599229 +0000 UTC m=+0.088522684 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 03:53:26 np0005539552 podman[326907]: 2025-11-29 08:53:26.014934058 +0000 UTC m=+0.090774635 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 29 03:53:26 np0005539552 podman[326909]: 2025-11-29 08:53:26.034492874 +0000 UTC m=+0.111279316 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251125, config_id=ovn_controller)
Nov 29 03:53:27 np0005539552 nova_compute[233724]: 2025-11-29 08:53:27.610 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:27 np0005539552 nova_compute[233724]: 2025-11-29 08:53:27.737 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:27.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:27.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:28 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:53:28 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:53:28 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:53:29 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:29.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:29.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:31.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:31.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:32 np0005539552 nova_compute[233724]: 2025-11-29 08:53:32.613 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:32 np0005539552 nova_compute[233724]: 2025-11-29 08:53:32.739 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:33.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:33.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:34 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:53:34 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:53:34 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:35.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:35.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:37 np0005539552 nova_compute[233724]: 2025-11-29 08:53:37.616 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:37 np0005539552 nova_compute[233724]: 2025-11-29 08:53:37.741 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:37.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:37.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:53:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/541695828' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:53:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:53:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/541695828' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:53:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:53:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:39.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:53:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:53:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:39.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:53:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:41.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:42.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:42 np0005539552 nova_compute[233724]: 2025-11-29 08:53:42.620 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:42 np0005539552 nova_compute[233724]: 2025-11-29 08:53:42.743 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:43.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:53:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:44.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:53:44 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:45.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:46.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:47 np0005539552 nova_compute[233724]: 2025-11-29 08:53:47.622 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:47 np0005539552 nova_compute[233724]: 2025-11-29 08:53:47.745 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:47 np0005539552 ovn_controller[133798]: 2025-11-29T08:53:47Z|00972|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Nov 29 03:53:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:53:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:47.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:53:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:48.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:49 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:53:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:49.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:53:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:53:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:50.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:53:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:51.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:51 np0005539552 nova_compute[233724]: 2025-11-29 08:53:51.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:53:51 np0005539552 nova_compute[233724]: 2025-11-29 08:53:51.963 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:53:51 np0005539552 nova_compute[233724]: 2025-11-29 08:53:51.964 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:53:51 np0005539552 nova_compute[233724]: 2025-11-29 08:53:51.964 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:53:51 np0005539552 nova_compute[233724]: 2025-11-29 08:53:51.965 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:53:51 np0005539552 nova_compute[233724]: 2025-11-29 08:53:51.965 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:53:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:53:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:52.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:53:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:53:52 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1023740060' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:53:52 np0005539552 nova_compute[233724]: 2025-11-29 08:53:52.444 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:53:52 np0005539552 nova_compute[233724]: 2025-11-29 08:53:52.624 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:52 np0005539552 nova_compute[233724]: 2025-11-29 08:53:52.631 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:53:52 np0005539552 nova_compute[233724]: 2025-11-29 08:53:52.632 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4106MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:53:52 np0005539552 nova_compute[233724]: 2025-11-29 08:53:52.632 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:53:52 np0005539552 nova_compute[233724]: 2025-11-29 08:53:52.633 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:53:52 np0005539552 nova_compute[233724]: 2025-11-29 08:53:52.748 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:53 np0005539552 nova_compute[233724]: 2025-11-29 08:53:53.559 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:53:53 np0005539552 nova_compute[233724]: 2025-11-29 08:53:53.561 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:53:53 np0005539552 nova_compute[233724]: 2025-11-29 08:53:53.581 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:53:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:53.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:54.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:54 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:53:54 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2518460647' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:53:54 np0005539552 nova_compute[233724]: 2025-11-29 08:53:54.058 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:53:54 np0005539552 nova_compute[233724]: 2025-11-29 08:53:54.064 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:53:54 np0005539552 nova_compute[233724]: 2025-11-29 08:53:54.092 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:53:54 np0005539552 nova_compute[233724]: 2025-11-29 08:54:54.114 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:53:54 np0005539552 nova_compute[233724]: 2025-11-29 08:53:54.115 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.482s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:53:54 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:55.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:56.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:57 np0005539552 podman[327314]: 2025-11-29 08:53:57.011110164 +0000 UTC m=+0.082427510 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, tcib_managed=true)
Nov 29 03:53:57 np0005539552 podman[327313]: 2025-11-29 08:53:57.053535396 +0000 UTC m=+0.129718533 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:53:57 np0005539552 podman[327315]: 2025-11-29 08:53:57.069813524 +0000 UTC m=+0.136904356 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 29 03:53:57 np0005539552 nova_compute[233724]: 2025-11-29 08:53:57.627 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:53:57 np0005539552 nova_compute[233724]: 2025-11-29 08:53:57.750 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:53:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:57.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:58.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:59 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:53:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:59.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:01 np0005539552 nova_compute[233724]: 2025-11-29 08:54:01.115 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:54:01 np0005539552 nova_compute[233724]: 2025-11-29 08:54:01.116 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:54:01 np0005539552 nova_compute[233724]: 2025-11-29 08:54:01.116 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:54:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:01.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:01.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:02 np0005539552 nova_compute[233724]: 2025-11-29 08:54:02.630 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:54:02 np0005539552 nova_compute[233724]: 2025-11-29 08:54:02.752 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:54:02 np0005539552 nova_compute[233724]: 2025-11-29 08:54:02.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:54:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:03.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:03.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:04 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:05.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:05.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:05 np0005539552 nova_compute[233724]: 2025-11-29 08:54:05.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:54:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:07.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:07 np0005539552 nova_compute[233724]: 2025-11-29 08:54:07.633 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:54:07 np0005539552 nova_compute[233724]: 2025-11-29 08:54:07.754 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:54:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:07.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:07 np0005539552 nova_compute[233724]: 2025-11-29 08:54:07.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:54:07 np0005539552 nova_compute[233724]: 2025-11-29 08:54:07.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:54:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:09.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:09 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:09.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:11.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:11.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:11 np0005539552 nova_compute[233724]: 2025-11-29 08:54:11.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:54:11 np0005539552 nova_compute[233724]: 2025-11-29 08:54:11.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:54:11 np0005539552 nova_compute[233724]: 2025-11-29 08:54:11.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 03:54:12 np0005539552 nova_compute[233724]: 2025-11-29 08:54:12.019 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 03:54:12 np0005539552 nova_compute[233724]: 2025-11-29 08:54:12.636 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:54:12 np0005539552 nova_compute[233724]: 2025-11-29 08:54:12.755 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:54:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:13.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:13.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:14 np0005539552 nova_compute[233724]: 2025-11-29 08:54:14.013 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:54:14 np0005539552 systemd-logind[788]: New session 71 of user zuul.
Nov 29 03:54:14 np0005539552 systemd[1]: Started Session 71 of User zuul.
Nov 29 03:54:14 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:15.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:15.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:17.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:17 np0005539552 nova_compute[233724]: 2025-11-29 08:54:17.639 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:54:17 np0005539552 nova_compute[233724]: 2025-11-29 08:54:17.757 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:54:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:17.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:18 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 29 03:54:18 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3347350086' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 03:54:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:19.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:19 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:19.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:54:20.663 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:54:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:54:20.664 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:54:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:54:20.664 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:54:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:54:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:21.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:54:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:54:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:21.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:54:22 np0005539552 ovs-vsctl[327728]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 29 03:54:22 np0005539552 nova_compute[233724]: 2025-11-29 08:54:22.641 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:54:22 np0005539552 nova_compute[233724]: 2025-11-29 08:54:22.759 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:54:22 np0005539552 virtqemud[233098]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 29 03:54:23 np0005539552 virtqemud[233098]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 29 03:54:23 np0005539552 virtqemud[233098]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 29 03:54:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:23.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:23 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati asok_command: cache status {prefix=cache status} (starting...)
Nov 29 03:54:23 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati Can't run that command on an inactive MDS!
Nov 29 03:54:23 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati asok_command: client ls {prefix=client ls} (starting...)
Nov 29 03:54:23 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati Can't run that command on an inactive MDS!
Nov 29 03:54:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:23.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:23 np0005539552 lvm[328075]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 03:54:23 np0005539552 lvm[328075]: VG ceph_vg0 finished
Nov 29 03:54:24 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati asok_command: damage ls {prefix=damage ls} (starting...)
Nov 29 03:54:24 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati Can't run that command on an inactive MDS!
Nov 29 03:54:24 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati asok_command: dump loads {prefix=dump loads} (starting...)
Nov 29 03:54:24 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati Can't run that command on an inactive MDS!
Nov 29 03:54:24 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 29 03:54:24 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2379448411' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 03:54:24 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 29 03:54:24 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati Can't run that command on an inactive MDS!
Nov 29 03:54:24 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:24 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 29 03:54:24 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati Can't run that command on an inactive MDS!
Nov 29 03:54:24 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:54:24 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2755794948' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:54:24 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 29 03:54:24 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati Can't run that command on an inactive MDS!
Nov 29 03:54:25 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 29 03:54:25 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati Can't run that command on an inactive MDS!
Nov 29 03:54:25 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Nov 29 03:54:25 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/821656351' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 29 03:54:25 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 29 03:54:25 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati Can't run that command on an inactive MDS!
Nov 29 03:54:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:25.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:25 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 29 03:54:25 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati Can't run that command on an inactive MDS!
Nov 29 03:54:25 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati asok_command: ops {prefix=ops} (starting...)
Nov 29 03:54:25 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati Can't run that command on an inactive MDS!
Nov 29 03:54:25 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Nov 29 03:54:25 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3083253098' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 29 03:54:25 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Nov 29 03:54:25 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2560809309' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 29 03:54:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:25.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:26 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 29 03:54:26 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2469990892' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 03:54:26 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #151. Immutable memtables: 0.
Nov 29 03:54:26 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:54:26.198604) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:54:26 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 151
Nov 29 03:54:26 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406466198691, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 2459, "num_deletes": 254, "total_data_size": 5902651, "memory_usage": 6001200, "flush_reason": "Manual Compaction"}
Nov 29 03:54:26 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #152: started
Nov 29 03:54:26 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406466223173, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 152, "file_size": 3872042, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 73663, "largest_seqno": 76117, "table_properties": {"data_size": 3861851, "index_size": 6492, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21680, "raw_average_key_size": 20, "raw_value_size": 3841300, "raw_average_value_size": 3707, "num_data_blocks": 282, "num_entries": 1036, "num_filter_entries": 1036, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406252, "oldest_key_time": 1764406252, "file_creation_time": 1764406466, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:54:26 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 24592 microseconds, and 6110 cpu microseconds.
Nov 29 03:54:26 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:54:26 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:54:26.223220) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #152: 3872042 bytes OK
Nov 29 03:54:26 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:54:26.223236) [db/memtable_list.cc:519] [default] Level-0 commit table #152 started
Nov 29 03:54:26 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:54:26.224908) [db/memtable_list.cc:722] [default] Level-0 commit table #152: memtable #1 done
Nov 29 03:54:26 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:54:26.224919) EVENT_LOG_v1 {"time_micros": 1764406466224916, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:54:26 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:54:26.224933) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:54:26 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 5891700, prev total WAL file size 5891700, number of live WAL files 2.
Nov 29 03:54:26 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000148.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:54:26 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:54:26.226069) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Nov 29 03:54:26 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:54:26 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [152(3781KB)], [150(11MB)]
Nov 29 03:54:26 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406466226144, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [152], "files_L6": [150], "score": -1, "input_data_size": 16358899, "oldest_snapshot_seqno": -1}
Nov 29 03:54:26 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #153: 10702 keys, 14433523 bytes, temperature: kUnknown
Nov 29 03:54:26 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406466351396, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 153, "file_size": 14433523, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14363266, "index_size": 42368, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26821, "raw_key_size": 281874, "raw_average_key_size": 26, "raw_value_size": 14174378, "raw_average_value_size": 1324, "num_data_blocks": 1616, "num_entries": 10702, "num_filter_entries": 10702, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764406466, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 153, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:54:26 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:54:26 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:54:26.351609) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 14433523 bytes
Nov 29 03:54:26 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:54:26.352900) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 130.5 rd, 115.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 11.9 +0.0 blob) out(13.8 +0.0 blob), read-write-amplify(8.0) write-amplify(3.7) OK, records in: 11227, records dropped: 525 output_compression: NoCompression
Nov 29 03:54:26 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:54:26.352914) EVENT_LOG_v1 {"time_micros": 1764406466352907, "job": 96, "event": "compaction_finished", "compaction_time_micros": 125308, "compaction_time_cpu_micros": 42960, "output_level": 6, "num_output_files": 1, "total_output_size": 14433523, "num_input_records": 11227, "num_output_records": 10702, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:54:26 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:54:26 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406466353569, "job": 96, "event": "table_file_deletion", "file_number": 152}
Nov 29 03:54:26 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000150.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:54:26 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406466355651, "job": 96, "event": "table_file_deletion", "file_number": 150}
Nov 29 03:54:26 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:54:26.225967) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:54:26 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:54:26.355709) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:54:26 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:54:26.355715) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:54:26 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:54:26.355717) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:54:26 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:54:26.355718) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:54:26 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:54:26.355720) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:54:26 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati asok_command: session ls {prefix=session ls} (starting...)
Nov 29 03:54:26 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati Can't run that command on an inactive MDS!
Nov 29 03:54:26 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati asok_command: status {prefix=status} (starting...)
Nov 29 03:54:26 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 29 03:54:26 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2269703779' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 03:54:26 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 29 03:54:26 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2285971093' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 03:54:26 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 29 03:54:26 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/444400518' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 03:54:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:27.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 03:54:27 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2046714407' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 03:54:27 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Nov 29 03:54:27 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/492556511' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 29 03:54:27 np0005539552 nova_compute[233724]: 2025-11-29 08:54:27.644 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:27 np0005539552 nova_compute[233724]: 2025-11-29 08:54:27.761 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:27.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:27 np0005539552 podman[328610]: 2025-11-29 08:54:27.860456152 +0000 UTC m=+0.065857544 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:54:27 np0005539552 podman[328605]: 2025-11-29 08:54:27.872217038 +0000 UTC m=+0.075992246 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd)
Nov 29 03:54:27 np0005539552 podman[328611]: 2025-11-29 08:54:27.891369244 +0000 UTC m=+0.093723304 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 03:54:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 29 03:54:28 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2693086653' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 29 03:54:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Nov 29 03:54:28 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/554335364' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 29 03:54:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 29 03:54:28 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2086354131' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 03:54:28 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Nov 29 03:54:28 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/65089122' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 29 03:54:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:29.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:29 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 29 03:54:29 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/277496061' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 03:54:29 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:29 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 29 03:54:29 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/534659552' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 03:54:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:29.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 435937280 unmapped: 77357056 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 heartbeat osd_stat(store_statfs(0x19dc78000/0x0/0x1bfc00000, data 0x61cebb6/0x63a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbaf71dc00 session 0x55cbaee5bc20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbaf71f000 session 0x55cbaf799e00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbaf71fc00 session 0x55cbaf54c3c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbbbb89000 session 0x55cbafb82d20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbbbb89000 session 0x55cbacf723c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbaf71dc00 session 0x55cbadb090e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbade17000 session 0x55cbadd8e000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 435871744 unmapped: 77422592 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbaf71f000 session 0x55cbacc34d20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbaf71fc00 session 0x55cbafb2ed20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbade17000 session 0x55cbaf40dc20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbaf71dc00 session 0x55cbafb90b40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbaf71f000 session 0x55cbacc9fa40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbbbb89000 session 0x55cbafb0d2c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5076107 data_alloc: 251658240 data_used: 35995648
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 435888128 unmapped: 77406208 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 heartbeat osd_stat(store_statfs(0x19d2ad000/0x0/0x1bfc00000, data 0x6b96c28/0x6d70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 435896320 unmapped: 77398016 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbb6511000 session 0x55cbacdbde00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbb1691400 session 0x55cbafb90d20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 435904512 unmapped: 77389824 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 heartbeat osd_stat(store_statfs(0x19d2ad000/0x0/0x1bfc00000, data 0x6b96c28/0x6d70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbaf71dc00 session 0x55cbadb085a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbaf71f000 session 0x55cbacf2be00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbb6511000 session 0x55cbaf40cf00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 435904512 unmapped: 77389824 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbbbb89000 session 0x55cbacc35e00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.252713203s of 10.648158073s, submitted: 121
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbaf71dc00 session 0x55cbbadf4f00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbbbb89000 session 0x55cbaf7981e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbaf71f000 session 0x55cbafb2e3c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbb1691400 session 0x55cbaf40c5a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbb6511000 session 0x55cbaee65680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 435920896 unmapped: 77373440 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbb6511000 session 0x55cbaee65a40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbaf71dc00 session 0x55cbad92c960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5115627 data_alloc: 251658240 data_used: 41590784
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbaf71f000 session 0x55cbafaf03c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 435920896 unmapped: 77373440 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbb1691400 session 0x55cbafb0da40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbbbb89000 session 0x55cbaf40c1e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbaf71c800 session 0x55cbafb2f860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 436240384 unmapped: 77053952 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbaeee5400 session 0x55cbbadf50e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbb3952800 session 0x55cbacdc1c20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbbe338c00 session 0x55cbad892f00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbadeb3400 session 0x55cbaf54d860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbaeee5400 session 0x55cbad03be00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbaf71c800 session 0x55cbaee803c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbb3952800 session 0x55cbacf732c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 heartbeat osd_stat(store_statfs(0x19d260000/0x0/0x1bfc00000, data 0x6bded05/0x6dbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbbe338c00 session 0x55cbadb09e00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbaf1b1000 session 0x55cbaf5a41e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 436469760 unmapped: 76824576 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 438484992 unmapped: 74809344 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbaf71dc00 session 0x55cbacf2a1e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbaf71f000 session 0x55cbb0b43c20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbb1691400 session 0x55cbacdbd680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 438484992 unmapped: 74809344 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 heartbeat osd_stat(store_statfs(0x19c270000/0x0/0x1bfc00000, data 0x77bed3e/0x799e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5347697 data_alloc: 268435456 data_used: 54755328
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 438550528 unmapped: 74743808 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbb3952800 session 0x55cbaf54d2c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 heartbeat osd_stat(store_statfs(0x19c297000/0x0/0x1bfc00000, data 0x779aca9/0x7977000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 438607872 unmapped: 74686464 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 438607872 unmapped: 74686464 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbbe338c00 session 0x55cbaf5a4960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbaf71dc00 session 0x55cbacf1c1e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbaf71f000 session 0x55cbacffb2c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbb1691400 session 0x55cbaf5a43c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbb3952800 session 0x55cbafb0c1e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbbe338c00 session 0x55cbacc35e00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbaf71dc00 session 0x55cbacc34d20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbaeee5400 session 0x55cbad03b860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbaf71c800 session 0x55cbaf5a4b40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 440082432 unmapped: 73211904 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbaf71f000 session 0x55cbad92c960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.419441223s of 10.017598152s, submitted: 181
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbb1691400 session 0x55cbadd8e960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbaf71c800 session 0x55cbad03bc20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbaeee5400 session 0x55cbafb803c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 heartbeat osd_stat(store_statfs(0x19aa7e000/0x0/0x1bfc00000, data 0x8fb2c96/0x918f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 442990592 unmapped: 70303744 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5559045 data_alloc: 268435456 data_used: 56418304
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 443949056 unmapped: 69345280 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 451985408 unmapped: 61308928 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbaf71dc00 session 0x55cbad92cf00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 ms_handle_reset con 0x55cbaf71f000 session 0x55cbac60e780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 450920448 unmapped: 62373888 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 354 handle_osd_map epochs [354,355], i have 354, src has [1,355]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 355 ms_handle_reset con 0x55cbb0672c00 session 0x55cbaf5a5a40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 355 ms_handle_reset con 0x55cbaeee5400 session 0x55cbad92d4a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 355 ms_handle_reset con 0x55cbb3952800 session 0x55cbaee5b0e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 355 ms_handle_reset con 0x55cbb4ebc400 session 0x55cbad0305a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 448741376 unmapped: 64552960 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 355 ms_handle_reset con 0x55cbaf71c800 session 0x55cbafb83a40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 355 ms_handle_reset con 0x55cbaf71f000 session 0x55cbac6112c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 355 ms_handle_reset con 0x55cbaf71dc00 session 0x55cbacfde5a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 355 ms_handle_reset con 0x55cbaeee5400 session 0x55cbaeb50b40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 447168512 unmapped: 66125824 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 355 heartbeat osd_stat(store_statfs(0x19d007000/0x0/0x1bfc00000, data 0x7a668c8/0x7c3f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 355 ms_handle_reset con 0x55cbade17000 session 0x55cbb0b434a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 355 ms_handle_reset con 0x55cbaeee6400 session 0x55cbaee64f00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5330017 data_alloc: 268435456 data_used: 46698496
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 448176128 unmapped: 65118208 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 355 ms_handle_reset con 0x55cbb3952800 session 0x55cbb0b43c20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 444088320 unmapped: 69206016 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 355 ms_handle_reset con 0x55cbad90fc00 session 0x55cbadab3a40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 355 ms_handle_reset con 0x55cbaf71d800 session 0x55cbad892000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 444088320 unmapped: 69206016 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 355 ms_handle_reset con 0x55cbb3952800 session 0x55cbaf54c960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 444096512 unmapped: 69197824 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 444096512 unmapped: 69197824 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.651811600s of 11.478851318s, submitted: 343
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5200174 data_alloc: 268435456 data_used: 50016256
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 444096512 unmapped: 69197824 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 355 heartbeat osd_stat(store_statfs(0x19e326000/0x0/0x1bfc00000, data 0x674f895/0x6926000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 355 handle_osd_map epochs [355,356], i have 355, src has [1,356]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 444096512 unmapped: 69197824 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 356 ms_handle_reset con 0x55cbade17000 session 0x55cbaf7bc000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 356 ms_handle_reset con 0x55cbb08a5c00 session 0x55cbbadf54a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 356 ms_handle_reset con 0x55cbb6a92400 session 0x55cbaf54c780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 444096512 unmapped: 69197824 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 356 ms_handle_reset con 0x55cbb6511000 session 0x55cbacfde000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 356 ms_handle_reset con 0x55cbade12000 session 0x55cbafb905a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 444563456 unmapped: 68730880 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 356 ms_handle_reset con 0x55cbade17000 session 0x55cbb2c1ad20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 356 handle_osd_map epochs [356,357], i have 356, src has [1,357]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 357 ms_handle_reset con 0x55cbaf71d800 session 0x55cbaf7bcf00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 357 ms_handle_reset con 0x55cbade12000 session 0x55cbafb0cf00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 357 ms_handle_reset con 0x55cbade17000 session 0x55cbad92d4a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 357 ms_handle_reset con 0x55cbaf71d800 session 0x55cbadd8e960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 357 ms_handle_reset con 0x55cbad90fc00 session 0x55cbaf40dc20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 444686336 unmapped: 68608000 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4939170 data_alloc: 251658240 data_used: 39583744
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 444686336 unmapped: 68608000 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 444686336 unmapped: 68608000 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 357 heartbeat osd_stat(store_statfs(0x19f6c2000/0x0/0x1bfc00000, data 0x4ed8231/0x50b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 357 ms_handle_reset con 0x55cbb6511000 session 0x55cbacf64b40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 444686336 unmapped: 68608000 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 357 ms_handle_reset con 0x55cbb6511000 session 0x55cbafb80000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 448118784 unmapped: 65175552 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 357 handle_osd_map epochs [357,358], i have 357, src has [1,358]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 448397312 unmapped: 64897024 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbade12000 session 0x55cbaee81680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.825048447s of 10.257137299s, submitted: 194
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x19f336000/0x0/0x1bfc00000, data 0x5735e3a/0x590f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbad90fc00 session 0x55cbac611680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5009194 data_alloc: 251658240 data_used: 39628800
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 448405504 unmapped: 64888832 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbade17000 session 0x55cbbadf4780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 443129856 unmapped: 70164480 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 443146240 unmapped: 70148096 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 443990016 unmapped: 69304320 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 443949056 unmapped: 69345280 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5067875 data_alloc: 251658240 data_used: 45498368
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 443949056 unmapped: 69345280 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbb3952800 session 0x55cbacdc01e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x19f242000/0x0/0x1bfc00000, data 0x5829e5d/0x5a04000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaeee5400 session 0x55cbad03b860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x19f242000/0x0/0x1bfc00000, data 0x5829e5d/0x5a04000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 443957248 unmapped: 69337088 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 443973632 unmapped: 69320704 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x19f24a000/0x0/0x1bfc00000, data 0x5829e5d/0x5a04000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 443973632 unmapped: 69320704 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 443973632 unmapped: 69320704 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4997927 data_alloc: 251658240 data_used: 42045440
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 443973632 unmapped: 69320704 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.960361481s of 11.228503227s, submitted: 93
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 443973632 unmapped: 69320704 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 443973632 unmapped: 69320704 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x19f6c1000/0x0/0x1bfc00000, data 0x53b3dfb/0x558d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 443973632 unmapped: 69320704 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 443990016 unmapped: 69304320 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbb3952800 session 0x55cbaee643c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5004759 data_alloc: 251658240 data_used: 42700800
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 443990016 unmapped: 69304320 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 443990016 unmapped: 69304320 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 443998208 unmapped: 69296128 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 443998208 unmapped: 69296128 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x19f6e6000/0x0/0x1bfc00000, data 0x538fdd8/0x5568000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 443998208 unmapped: 69296128 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5017879 data_alloc: 251658240 data_used: 44437504
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 443998208 unmapped: 69296128 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 443998208 unmapped: 69296128 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.301399231s of 10.361037254s, submitted: 28
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbad90fc00 session 0x55cbadab2b40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 443998208 unmapped: 69296128 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x19f6e4000/0x0/0x1bfc00000, data 0x538fe4a/0x556a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 443998208 unmapped: 69296128 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x19f6e4000/0x0/0x1bfc00000, data 0x538fe4a/0x556a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaf71d800 session 0x55cbafaf0b40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbb6a92400 session 0x55cbac60f2c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbade12000 session 0x55cbacf1d0e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 438583296 unmapped: 74711040 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4802497 data_alloc: 251658240 data_used: 31567872
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 438583296 unmapped: 74711040 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x1a05b3000/0x0/0x1bfc00000, data 0x4272dd8/0x444b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbad90fc00 session 0x55cbaeb51680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 438591488 unmapped: 74702848 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaf71d800 session 0x55cbacf64960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 438394880 unmapped: 74899456 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbb3952800 session 0x55cbacdbd860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaf71c800 session 0x55cbaf5a5860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaf71f000 session 0x55cbafaf0d20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaeee5400 session 0x55cbafaf0b40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbad90fc00 session 0x55cbaf40d680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 438697984 unmapped: 74596352 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaf71d800 session 0x55cbb2c1bc20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 438697984 unmapped: 74596352 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbb3952800 session 0x55cbbadf50e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4827573 data_alloc: 251658240 data_used: 35254272
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x1a07de000/0x0/0x1bfc00000, data 0x4296dfb/0x4470000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 438599680 unmapped: 74694656 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbad90fc00 session 0x55cbaee643c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 438599680 unmapped: 74694656 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaeee5400 session 0x55cbbadf4780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.942741394s of 10.204481125s, submitted: 93
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 438747136 unmapped: 74547200 heap: 513294336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaf71d800 session 0x55cbac611680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaeee6400 session 0x55cbacfde5a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaf71dc00 session 0x55cbafb905a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbad90fc00 session 0x55cbafb90b40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 438788096 unmapped: 78708736 heap: 517496832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 438804480 unmapped: 78692352 heap: 517496832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4927778 data_alloc: 251658240 data_used: 37027840
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 438804480 unmapped: 78692352 heap: 517496832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x19fcba000/0x0/0x1bfc00000, data 0x4db9dfb/0x4f93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 438804480 unmapped: 78692352 heap: 517496832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x19fcba000/0x0/0x1bfc00000, data 0x4db9dfb/0x4f93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 438804480 unmapped: 78692352 heap: 517496832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 438804480 unmapped: 78692352 heap: 517496832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 438804480 unmapped: 78692352 heap: 517496832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4927778 data_alloc: 251658240 data_used: 37027840
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 438804480 unmapped: 78692352 heap: 517496832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 442351616 unmapped: 75145216 heap: 517496832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.665019989s of 10.125677109s, submitted: 170
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 443383808 unmapped: 74113024 heap: 517496832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x19f13d000/0x0/0x1bfc00000, data 0x5937dfb/0x5b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 443604992 unmapped: 73891840 heap: 517496832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 443629568 unmapped: 73867264 heap: 517496832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5035975 data_alloc: 251658240 data_used: 38207488
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 443580416 unmapped: 73916416 heap: 517496832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x19f0a6000/0x0/0x1bfc00000, data 0x59cedfb/0x5ba8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaeee5400 session 0x55cbaeb501e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 444096512 unmapped: 73400320 heap: 517496832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 444784640 unmapped: 72712192 heap: 517496832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbb4ebc400 session 0x55cbacf2a1e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbb0672c00 session 0x55cbad03bc20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbb6a92c00 session 0x55cbadb094a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbb6a92c00 session 0x55cbadd8fe00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbad90fc00 session 0x55cbb2c1b0e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 446324736 unmapped: 71172096 heap: 517496832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 446324736 unmapped: 71172096 heap: 517496832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5232975 data_alloc: 268435456 data_used: 50884608
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 446324736 unmapped: 71172096 heap: 517496832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaeee5400 session 0x55cbaf40c1e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x19e56d000/0x0/0x1bfc00000, data 0x6506e5d/0x66e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 446431232 unmapped: 71065600 heap: 517496832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbb0672c00 session 0x55cbadb09e00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbb4ebc400 session 0x55cbacc354a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.848834038s of 10.313329697s, submitted: 143
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbb4ebc400 session 0x55cbaf798000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 447496192 unmapped: 70000640 heap: 517496832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 447528960 unmapped: 69967872 heap: 517496832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x19e524000/0x0/0x1bfc00000, data 0x654ee6d/0x672a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,1] op hist [0,0,0,0,1])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbb0672c00 session 0x55cbb0b43e00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbb6a92c00 session 0x55cbac60e780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbad910400 session 0x55cbacdc0d20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbbbb86400 session 0x55cbb2c1b2c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbad910400 session 0x55cbacc9fa40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 452599808 unmapped: 69099520 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5416466 data_alloc: 268435456 data_used: 60407808
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x19d88c000/0x0/0x1bfc00000, data 0x71e5ecf/0x73c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 452599808 unmapped: 69099520 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x19d88c000/0x0/0x1bfc00000, data 0x71e5ecf/0x73c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbb3952800 session 0x55cbaf54de00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbade12000 session 0x55cbacf2b2c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 452599808 unmapped: 69099520 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 452599808 unmapped: 69099520 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbacf42c00 session 0x55cbb0b423c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 452599808 unmapped: 69099520 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbb4ebc400 session 0x55cbad893680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 454737920 unmapped: 66961408 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbb6a92c00 session 0x55cbb0b42f00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5493692 data_alloc: 268435456 data_used: 60436480
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 456032256 unmapped: 65667072 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x19cd6d000/0x0/0x1bfc00000, data 0x7cfeecf/0x7edb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbb6a92c00 session 0x55cbadd8f0e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbad910400 session 0x55cbaee65860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 456155136 unmapped: 65544192 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.538712502s of 10.042143822s, submitted: 195
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbade12000 session 0x55cbadb09c20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 456212480 unmapped: 65486848 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 456368128 unmapped: 65331200 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 460251136 unmapped: 61448192 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5562664 data_alloc: 285212672 data_used: 68255744
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 467419136 unmapped: 54280192 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 467451904 unmapped: 54247424 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x19c860000/0x0/0x1bfc00000, data 0x7df7edf/0x7fd5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbade17000 session 0x55cbacfa3860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaf723c00 session 0x55cbad031860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 467492864 unmapped: 54206464 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaf1b0800 session 0x55cbb2c1a960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaf723c00 session 0x55cbaf5a5860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 467525632 unmapped: 54173696 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 467525632 unmapped: 54173696 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaf71f000 session 0x55cbacf1cd20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbb6511000 session 0x55cbaee64f00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5345917 data_alloc: 268435456 data_used: 63823872
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 467525632 unmapped: 54173696 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x19d0cc000/0x0/0x1bfc00000, data 0x6924ebf/0x6b00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 467525632 unmapped: 54173696 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.581184387s of 10.059178352s, submitted: 215
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbad90fc00 session 0x55cbadb09860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaeee5400 session 0x55cbafaf0960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 467525632 unmapped: 54173696 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbad910400 session 0x55cbacc35e00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x19dd3e000/0x0/0x1bfc00000, data 0x6924ebf/0x6b00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 467533824 unmapped: 54165504 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 467533824 unmapped: 54165504 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5454247 data_alloc: 268435456 data_used: 63881216
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 468221952 unmapped: 53477376 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaf1b0800 session 0x55cbbadf4780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 471138304 unmapped: 50561024 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaf71f000 session 0x55cbacf2af00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbade12000 session 0x55cbad030000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 471457792 unmapped: 50241536 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbade12000 session 0x55cbaf54d680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 470802432 unmapped: 50896896 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x19c6da000/0x0/0x1bfc00000, data 0x7f88ebf/0x8164000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbad910400 session 0x55cbad8932c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x19c6da000/0x0/0x1bfc00000, data 0x7f88ebf/0x8164000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 470818816 unmapped: 50880512 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5522224 data_alloc: 285212672 data_used: 65458176
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 470818816 unmapped: 50880512 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaeee5400 session 0x55cbb0b434a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaf1b0800 session 0x55cbade8af00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 478453760 unmapped: 43245568 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbb6a92c00 session 0x55cbaf7bcb40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbad910400 session 0x55cbade8ab40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbade12000 session 0x55cbacdbd860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaeee5400 session 0x55cbaf54d0e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.366607666s of 10.063769341s, submitted: 225
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaf1b0800 session 0x55cbaf798000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbb6a92c00 session 0x55cbacc9fa40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbad910400 session 0x55cbaf40cd20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 479633408 unmapped: 42065920 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbade12000 session 0x55cbafb0dc20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaeee5400 session 0x55cbacc9fa40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaf1b0800 session 0x55cbacdbd860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x19db1f000/0x0/0x1bfc00000, data 0x6b3cea9/0x6d19000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [0,1] op hist [1])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbade17000 session 0x55cbafaf05a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbad910400 session 0x55cbaf5a4b40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbade12000 session 0x55cbacffb4a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 479674368 unmapped: 42024960 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaeee5400 session 0x55cbadd6a780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaf1b0800 session 0x55cbad92c5a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 479682560 unmapped: 42016768 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5395217 data_alloc: 285212672 data_used: 67510272
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbb5fe2400 session 0x55cbac60eb40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 479682560 unmapped: 42016768 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 479682560 unmapped: 42016768 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 481697792 unmapped: 40001536 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x19db45000/0x0/0x1bfc00000, data 0x6b1bef2/0x6cf9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 482869248 unmapped: 38830080 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 484016128 unmapped: 37683200 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5470197 data_alloc: 285212672 data_used: 74481664
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 484024320 unmapped: 37675008 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaefa3800 session 0x55cbade8ab40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 484024320 unmapped: 37675008 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.743839264s of 10.006694794s, submitted: 82
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 487497728 unmapped: 34201600 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x19c0ff000/0x0/0x1bfc00000, data 0x73b9ef2/0x7597000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 487497728 unmapped: 34201600 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaeee5400 session 0x55cbb2c1ba40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 486842368 unmapped: 34856960 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x19c107000/0x0/0x1bfc00000, data 0x73b9ef2/0x7597000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5546927 data_alloc: 285212672 data_used: 75759616
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 486858752 unmapped: 34840576 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 486883328 unmapped: 34816000 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 486883328 unmapped: 34816000 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #55. Immutable memtables: 11.
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x19c102000/0x0/0x1bfc00000, data 0x73beef2/0x759c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,1] op hist [0,0,0,2,9,7])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 495173632 unmapped: 26525696 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 492388352 unmapped: 29310976 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaf1b0800 session 0x55cbade8af00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbb1690400 session 0x55cbb0b434a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbb4ebd800 session 0x55cbaf54d680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaeee5400 session 0x55cbad030000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5690435 data_alloc: 285212672 data_used: 77111296
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 492388352 unmapped: 29310976 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaefa3800 session 0x55cbacf2af00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaf1b0800 session 0x55cbaee64f00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbb1690400 session 0x55cbacf1cd20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 493666304 unmapped: 28033024 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaeec3800 session 0x55cbaf5a5860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaeee5400 session 0x55cbad031860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.940672874s of 10.130501747s, submitted: 192
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 493076480 unmapped: 28622848 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x199343000/0x0/0x1bfc00000, data 0x8fd9f02/0x91b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 493076480 unmapped: 28622848 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 493076480 unmapped: 28622848 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbad910400 session 0x55cbad893e00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbade12000 session 0x55cbacf2a960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaefa3800 session 0x55cbb2c1a000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5789851 data_alloc: 285212672 data_used: 77225984
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 493084672 unmapped: 28614656 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaf1b0800 session 0x55cbacf1c1e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x199346000/0x0/0x1bfc00000, data 0x8fd9f02/0x91b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbade12000 session 0x55cbacfa3860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 483016704 unmapped: 38682624 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaeee5400 session 0x55cbadb09c20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbad910400 session 0x55cbb25385a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaefa3800 session 0x55cbade8ba40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbb1690400 session 0x55cbacdc1680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbad910400 session 0x55cbacdbd0e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbade12000 session 0x55cbadab2f00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 483016704 unmapped: 38682624 heap: 521699328 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaeee5400 session 0x55cbacffa1e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaefa3800 session 0x55cbac60f2c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbb2c3dc00 session 0x55cbadd8f0e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbad910400 session 0x55cbb0b423c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaeee5400 session 0x55cbacffa1e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 483049472 unmapped: 47046656 heap: 530096128 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaefa3800 session 0x55cbacdc1680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 483057664 unmapped: 47038464 heap: 530096128 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x19a5bc000/0x0/0x1bfc00000, data 0x7d62f02/0x7f41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbb6aa0000 session 0x55cbade8ba40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5578740 data_alloc: 285212672 data_used: 66404352
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 483057664 unmapped: 47038464 heap: 530096128 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaf721400 session 0x55cbb25385a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x19a5bc000/0x0/0x1bfc00000, data 0x7d62f02/0x7f41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbad910400 session 0x55cbacfa3860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 483065856 unmapped: 47030272 heap: 530096128 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 483065856 unmapped: 47030272 heap: 530096128 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaf723c00 session 0x55cbafb91680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.009448051s of 10.575535774s, submitted: 100
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbb6511000 session 0x55cbacc35680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 482689024 unmapped: 47407104 heap: 530096128 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 484564992 unmapped: 45531136 heap: 530096128 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbb6aa0000 session 0x55cbadb094a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x19b95b000/0x0/0x1bfc00000, data 0x69c3f25/0x6ba3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5472708 data_alloc: 285212672 data_used: 74412032
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 485588992 unmapped: 44507136 heap: 530096128 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 485588992 unmapped: 44507136 heap: 530096128 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 485588992 unmapped: 44507136 heap: 530096128 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaeee5400 session 0x55cbacf2a960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaefa3800 session 0x55cbaf40cd20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 482336768 unmapped: 47759360 heap: 530096128 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbad910400 session 0x55cbaf54d680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 482336768 unmapped: 47759360 heap: 530096128 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x19c121000/0x0/0x1bfc00000, data 0x61ffe80/0x63db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5357375 data_alloc: 285212672 data_used: 66404352
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 482336768 unmapped: 47759360 heap: 530096128 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 482336768 unmapped: 47759360 heap: 530096128 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 482336768 unmapped: 47759360 heap: 530096128 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 483385344 unmapped: 46710784 heap: 530096128 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.435277939s of 11.334116936s, submitted: 84
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 483737600 unmapped: 50561024 heap: 534298624 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x19c10d000/0x0/0x1bfc00000, data 0x6214ea9/0x63f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaf723c00 session 0x55cbacf2b680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x19c10d000/0x0/0x1bfc00000, data 0x6214ea9/0x63f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,1] op hist [0,0,2,4,2])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5541664 data_alloc: 285212672 data_used: 66408448
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 486178816 unmapped: 48119808 heap: 534298624 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbb6511000 session 0x55cbbadf4780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 489144320 unmapped: 45154304 heap: 534298624 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbb6aa0000 session 0x55cbacffb4a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbad910400 session 0x55cbbadf5e00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x19aa24000/0x0/0x1bfc00000, data 0x78fdee2/0x7ada000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 488824832 unmapped: 45473792 heap: 534298624 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 490160128 unmapped: 44138496 heap: 534298624 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 490176512 unmapped: 44122112 heap: 534298624 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x199d73000/0x0/0x1bfc00000, data 0x85a7f44/0x8785000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x199d73000/0x0/0x1bfc00000, data 0x85a7f44/0x8785000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5693150 data_alloc: 285212672 data_used: 68403200
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 490209280 unmapped: 44089344 heap: 534298624 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaf723c00 session 0x55cbac610d20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbb6511000 session 0x55cbafb2f680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaf71d400 session 0x55cbafb0c780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbacf43400 session 0x55cbafaf0960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaefa3800 session 0x55cbbadf43c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaf71d400 session 0x55cbaf5a4960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaf723c00 session 0x55cbafb0cf00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbb6511000 session 0x55cbacf1cd20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbade16c00 session 0x55cbaee81a40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 490446848 unmapped: 43851776 heap: 534298624 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbad910400 session 0x55cbacf2a1e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbade16c00 session 0x55cbaee810e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaefa3800 session 0x55cbade8a1e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaf723c00 session 0x55cbacf72960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbb6511000 session 0x55cbaf7bc780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 490536960 unmapped: 43761664 heap: 534298624 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaf71d400 session 0x55cbb0b42780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbad910400 session 0x55cbaee654a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaefa3800 session 0x55cbad0301e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbb6a92000 session 0x55cbacc352c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x19956f000/0x0/0x1bfc00000, data 0x8daefd9/0x8f8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaf721000 session 0x55cbacf72960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 490545152 unmapped: 43753472 heap: 534298624 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaf721000 session 0x55cbaee810e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x19956e000/0x0/0x1bfc00000, data 0x8daeffc/0x8f90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.288807869s of 10.129709244s, submitted: 292
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 491675648 unmapped: 42622976 heap: 534298624 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5859418 data_alloc: 285212672 data_used: 79540224
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 492142592 unmapped: 42156032 heap: 534298624 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 495149056 unmapped: 39149568 heap: 534298624 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbb6a92000 session 0x55cbb0b423c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 495198208 unmapped: 39100416 heap: 534298624 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaf71f800 session 0x55cbaf54d680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 495214592 unmapped: 39084032 heap: 534298624 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbb6511400 session 0x55cbacffb4a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x199547000/0x0/0x1bfc00000, data 0x8dd5ffc/0x8fb7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaeee5800 session 0x55cbbadf4780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 495239168 unmapped: 39059456 heap: 534298624 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x199547000/0x0/0x1bfc00000, data 0x8dd5ffc/0x8fb7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5907455 data_alloc: 301989888 data_used: 86003712
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 495247360 unmapped: 39051264 heap: 534298624 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaf71f800 session 0x55cbacfde000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaf721000 session 0x55cbacf1d0e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 495517696 unmapped: 38780928 heap: 534298624 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 499081216 unmapped: 35217408 heap: 534298624 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 499089408 unmapped: 35209216 heap: 534298624 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 499089408 unmapped: 35209216 heap: 534298624 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.236516953s of 11.107275009s, submitted: 18
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5945588 data_alloc: 301989888 data_used: 90710016
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 499286016 unmapped: 35012608 heap: 534298624 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x199546000/0x0/0x1bfc00000, data 0x8dd601f/0x8fb8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 509272064 unmapped: 25026560 heap: 534298624 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 511000576 unmapped: 23298048 heap: 534298624 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 512851968 unmapped: 21446656 heap: 534298624 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 513269760 unmapped: 21028864 heap: 534298624 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6247766 data_alloc: 301989888 data_used: 92831744
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 514318336 unmapped: 19980288 heap: 534298624 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 514334720 unmapped: 19963904 heap: 534298624 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaefa3800 session 0x55cbafb0c780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaf71d400 session 0x55cbaf54c5a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x1970d2000/0x0/0x1bfc00000, data 0xb24801f/0xb42a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 514359296 unmapped: 19939328 heap: 534298624 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 518561792 unmapped: 15736832 heap: 534298624 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 518578176 unmapped: 15720448 heap: 534298624 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x1972a9000/0x0/0x1bfc00000, data 0xb06001f/0xb242000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6227914 data_alloc: 301989888 data_used: 91602944
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 516562944 unmapped: 17735680 heap: 534298624 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.139681339s of 10.301508904s, submitted: 500
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 515923968 unmapped: 18374656 heap: 534298624 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaeee4c00 session 0x55cbad893680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x197225000/0x0/0x1bfc00000, data 0xb0f701f/0xb2d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaddc1800 session 0x55cbaf54d0e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbade12000 session 0x55cbadab3860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbb6511400 session 0x55cbafaf1c20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 516030464 unmapped: 18268160 heap: 534298624 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaefa3800 session 0x55cbb0b42000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaf71d400 session 0x55cbafb0c000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaf71d400 session 0x55cbacf72000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 517332992 unmapped: 24322048 heap: 541655040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaddc1800 session 0x55cbaf7bcb40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 517341184 unmapped: 24313856 heap: 541655040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbade12000 session 0x55cbaf799680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x196926000/0x0/0x1bfc00000, data 0xb9d101f/0xbbb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaefa3800 session 0x55cbacdbcf00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6302673 data_alloc: 301989888 data_used: 92168192
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 517341184 unmapped: 24313856 heap: 541655040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbb6511400 session 0x55cbafb2ed20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbb6511400 session 0x55cbaf40c000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 517341184 unmapped: 24313856 heap: 541655040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x196941000/0x0/0x1bfc00000, data 0xb9db01f/0xbbbd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 517349376 unmapped: 24305664 heap: 541655040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaddc1800 session 0x55cbaf5a5c20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 517349376 unmapped: 24305664 heap: 541655040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x196921000/0x0/0x1bfc00000, data 0xb9fb01f/0xbbdd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 517349376 unmapped: 24305664 heap: 541655040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.057971
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1157627904 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1090519040 meta_used: 6364277 data_alloc: 301989888 data_used: 99790848
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522567680 unmapped: 19087360 heap: 541655040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x196921000/0x0/0x1bfc00000, data 0xb9fb01f/0xbbdd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaefa3800 session 0x55cbadd6b2c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522600448 unmapped: 19054592 heap: 541655040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 heartbeat osd_stat(store_statfs(0x196921000/0x0/0x1bfc00000, data 0xb9fb01f/0xbbdd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522600448 unmapped: 19054592 heap: 541655040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 ms_handle_reset con 0x55cbaf71d400 session 0x55cbb2c1b4a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 358 handle_osd_map epochs [358,359], i have 358, src has [1,359]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.689538956s of 12.024483681s, submitted: 99
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 359 ms_handle_reset con 0x55cbaf71f800 session 0x55cbafb2e960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 359 ms_handle_reset con 0x55cbade12000 session 0x55cbade8ad20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 359 ms_handle_reset con 0x55cbaddc1800 session 0x55cbadd6a780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 359 ms_handle_reset con 0x55cbaefa3800 session 0x55cbacf2a780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 359 heartbeat osd_stat(store_statfs(0x19691d000/0x0/0x1bfc00000, data 0xb9fcd42/0xbbe0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522625024 unmapped: 19030016 heap: 541655040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 359 ms_handle_reset con 0x55cbb6511400 session 0x55cbac60e780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 359 heartbeat osd_stat(store_statfs(0x19691a000/0x0/0x1bfc00000, data 0xb9ffd42/0xbbe3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522657792 unmapped: 18997248 heap: 541655040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6245897 data_alloc: 301989888 data_used: 91312128
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 521895936 unmapped: 19759104 heap: 541655040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 359 ms_handle_reset con 0x55cbaf71d400 session 0x55cbade8ad20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 521904128 unmapped: 19750912 heap: 541655040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 523231232 unmapped: 18423808 heap: 541655040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 523321344 unmapped: 22536192 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 359 ms_handle_reset con 0x55cbaddc1800 session 0x55cbadab3860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 523321344 unmapped: 22536192 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 359 heartbeat osd_stat(store_statfs(0x19669c000/0x0/0x1bfc00000, data 0xbc7dd6b/0xbe62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6317754 data_alloc: 301989888 data_used: 92504064
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 523321344 unmapped: 22536192 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 359 ms_handle_reset con 0x55cbade12000 session 0x55cbacf1d0e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 523321344 unmapped: 22536192 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 359 ms_handle_reset con 0x55cbaefa3800 session 0x55cbbadf4780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 359 heartbeat osd_stat(store_statfs(0x196698000/0x0/0x1bfc00000, data 0xbc80e06/0xbe66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 523329536 unmapped: 22528000 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 359 heartbeat osd_stat(store_statfs(0x196698000/0x0/0x1bfc00000, data 0xbc80e06/0xbe66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 359 ms_handle_reset con 0x55cbb6511400 session 0x55cbaf54d680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 523329536 unmapped: 22528000 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 523337728 unmapped: 22519808 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.751555443s of 11.978743553s, submitted: 61
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 359 ms_handle_reset con 0x55cbbbb89400 session 0x55cbacf730e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6319700 data_alloc: 301989888 data_used: 92495872
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 523337728 unmapped: 22519808 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 359 handle_osd_map epochs [359,360], i have 359, src has [1,360]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 360 ms_handle_reset con 0x55cbb1ed4c00 session 0x55cbb0b423c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 360 ms_handle_reset con 0x55cbaddc1800 session 0x55cbaee810e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 360 ms_handle_reset con 0x55cbaefa3800 session 0x55cbb2c1a960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 523337728 unmapped: 22519808 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 360 ms_handle_reset con 0x55cbb6511400 session 0x55cbac610d20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 360 heartbeat osd_stat(store_statfs(0x19668b000/0x0/0x1bfc00000, data 0xbc88baf/0xbe71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 523345920 unmapped: 22511616 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 360 ms_handle_reset con 0x55cbb1691000 session 0x55cbaee654a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 523517952 unmapped: 22339584 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 360 heartbeat osd_stat(store_statfs(0x19668d000/0x0/0x1bfc00000, data 0xbc88baf/0xbe71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 526344192 unmapped: 19513344 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 360 heartbeat osd_stat(store_statfs(0x19668d000/0x0/0x1bfc00000, data 0xbc88baf/0xbe71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.057971
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1157627904 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6419527 data_alloc: 301989888 data_used: 98254848
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 526352384 unmapped: 19505152 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 526352384 unmapped: 19505152 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 526352384 unmapped: 19505152 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 360 handle_osd_map epochs [360,361], i have 360, src has [1,361]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 526368768 unmapped: 19488768 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 526368768 unmapped: 19488768 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.857947350s of 10.039239883s, submitted: 68
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 361 heartbeat osd_stat(store_statfs(0x196687000/0x0/0x1bfc00000, data 0xbc8d7b8/0xbe77000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 361 ms_handle_reset con 0x55cbade12000 session 0x55cbacc352c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.057971
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1157627904 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6420849 data_alloc: 301989888 data_used: 98275328
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 526368768 unmapped: 19488768 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 361 heartbeat osd_stat(store_statfs(0x196687000/0x0/0x1bfc00000, data 0xbc8d7b8/0xbe77000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d6ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 526368768 unmapped: 19488768 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 361 ms_handle_reset con 0x55cbaefa3800 session 0x55cbb0b43e00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 526368768 unmapped: 19488768 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 361 ms_handle_reset con 0x55cbb1ed4c00 session 0x55cbacf2a1e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 361 ms_handle_reset con 0x55cbb6511400 session 0x55cbaee65860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 526589952 unmapped: 19267584 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 361 ms_handle_reset con 0x55cbaeee5800 session 0x55cbbadf43c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 361 ms_handle_reset con 0x55cbaeee5800 session 0x55cbb0b42780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522272768 unmapped: 23584768 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6425510 data_alloc: 301989888 data_used: 92950528
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 361 heartbeat osd_stat(store_statfs(0x196510000/0x0/0x1bfc00000, data 0xb9f5785/0xbbdd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 523845632 unmapped: 22011904 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 523845632 unmapped: 22011904 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 361 heartbeat osd_stat(store_statfs(0x195c0b000/0x0/0x1bfc00000, data 0xc2fa785/0xc4e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,1] op hist [0,0,1])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 523862016 unmapped: 21995520 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 361 ms_handle_reset con 0x55cbade12000 session 0x55cbadd6a780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 523862016 unmapped: 21995520 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 361 ms_handle_reset con 0x55cbaefa3800 session 0x55cbac60eb40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 523862016 unmapped: 21995520 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.623912811s of 10.161190033s, submitted: 166
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 361 ms_handle_reset con 0x55cbad910400 session 0x55cbaee5ba40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 361 ms_handle_reset con 0x55cbb6a92000 session 0x55cbaf54cd20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 361 ms_handle_reset con 0x55cbad910400 session 0x55cbaeb50d20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 361 heartbeat osd_stat(store_statfs(0x195b61000/0x0/0x1bfc00000, data 0xc3a5785/0xc58d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 361 ms_handle_reset con 0x55cbade12000 session 0x55cbaf40cf00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6441911 data_alloc: 301989888 data_used: 92958720
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 523796480 unmapped: 22061056 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 361 ms_handle_reset con 0x55cbaefa3800 session 0x55cbb0b42960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 361 ms_handle_reset con 0x55cbb1ed4c00 session 0x55cbadb09c20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 523804672 unmapped: 22052864 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 361 heartbeat osd_stat(store_statfs(0x195b5d000/0x0/0x1bfc00000, data 0xc3a8795/0xc591000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 361 ms_handle_reset con 0x55cbaeee8400 session 0x55cbafb83680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 524361728 unmapped: 21495808 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 361 ms_handle_reset con 0x55cbad910400 session 0x55cbacdbd2c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 525418496 unmapped: 20439040 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 361 ms_handle_reset con 0x55cbade12000 session 0x55cbafb914a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 524369920 unmapped: 21487616 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 361 ms_handle_reset con 0x55cbaeee8400 session 0x55cbafb90000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.057971
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1157627904 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6516173 data_alloc: 301989888 data_used: 98103296
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 524369920 unmapped: 21487616 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 524369920 unmapped: 21487616 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 361 ms_handle_reset con 0x55cbaddc1800 session 0x55cbaf40d4a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 361 ms_handle_reset con 0x55cbb076e400 session 0x55cbadd8fe00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 361 heartbeat osd_stat(store_statfs(0x195b36000/0x0/0x1bfc00000, data 0xc3d26c1/0xc5b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 519700480 unmapped: 26157056 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 361 ms_handle_reset con 0x55cbad910400 session 0x55cbacfdfe00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 361 heartbeat osd_stat(store_statfs(0x196a60000/0x0/0x1bfc00000, data 0xaf4462d/0xb127000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 361 ms_handle_reset con 0x55cbade12000 session 0x55cbadb09680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 361 handle_osd_map epochs [361,362], i have 361, src has [1,362]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 519716864 unmapped: 26140672 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 362 ms_handle_reset con 0x55cbaddc1800 session 0x55cbafb832c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 519733248 unmapped: 26124288 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.194842339s of 10.705588341s, submitted: 87
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6357277 data_alloc: 301989888 data_used: 87298048
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 519749632 unmapped: 26107904 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 519766016 unmapped: 26091520 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 362 ms_handle_reset con 0x55cbade16c00 session 0x55cbadb085a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 362 ms_handle_reset con 0x55cbaf723c00 session 0x55cbaf799c20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 519766016 unmapped: 26091520 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 514908160 unmapped: 30949376 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 362 ms_handle_reset con 0x55cbad910400 session 0x55cbadab2b40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 362 heartbeat osd_stat(store_statfs(0x1964c8000/0x0/0x1bfc00000, data 0xa46f32d/0xa652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 362 ms_handle_reset con 0x55cbaddc1800 session 0x55cbacc9e5a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 362 heartbeat osd_stat(store_statfs(0x1964c8000/0x0/0x1bfc00000, data 0xa46f32d/0xa652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 514760704 unmapped: 31096832 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6125040 data_alloc: 285212672 data_used: 77049856
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 514768896 unmapped: 31088640 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 362 handle_osd_map epochs [362,363], i have 362, src has [1,363]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 363 ms_handle_reset con 0x55cbb1ed4c00 session 0x55cbad030000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 363 ms_handle_reset con 0x55cbade12000 session 0x55cbade8ab40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 363 ms_handle_reset con 0x55cbade16c00 session 0x55cbaf799680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 514801664 unmapped: 31055872 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 363 ms_handle_reset con 0x55cbad910400 session 0x55cbad893680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 514809856 unmapped: 31047680 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 363 heartbeat osd_stat(store_statfs(0x1981f2000/0x0/0x1bfc00000, data 0x9d18042/0x9efb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 514809856 unmapped: 31047680 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 514809856 unmapped: 31047680 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6067992 data_alloc: 285212672 data_used: 77025280
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 514834432 unmapped: 31023104 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 363 heartbeat osd_stat(store_statfs(0x1981cf000/0x0/0x1bfc00000, data 0x9d3c042/0x9f1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 514834432 unmapped: 31023104 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 363 handle_osd_map epochs [363,364], i have 363, src has [1,364]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.637764931s of 12.201923370s, submitted: 217
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 514834432 unmapped: 31023104 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 ms_handle_reset con 0x55cbaddc1800 session 0x55cbacf64b40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 514842624 unmapped: 31014912 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 514842624 unmapped: 31014912 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6089874 data_alloc: 285212672 data_used: 78557184
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 heartbeat osd_stat(store_statfs(0x1981ca000/0x0/0x1bfc00000, data 0x9d3dc5b/0x9f23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 514842624 unmapped: 31014912 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 ms_handle_reset con 0x55cbade12000 session 0x55cbafb0d0e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 514842624 unmapped: 31014912 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 ms_handle_reset con 0x55cbb1ed4c00 session 0x55cbaf7bdc20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 ms_handle_reset con 0x55cbb08a4400 session 0x55cbaf5a4000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 ms_handle_reset con 0x55cbad910400 session 0x55cbacc9f2c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 ms_handle_reset con 0x55cbaddc1800 session 0x55cbafb823c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 514842624 unmapped: 31014912 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 ms_handle_reset con 0x55cbade12000 session 0x55cbaf7bcd20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 ms_handle_reset con 0x55cbb1ed4c00 session 0x55cbacdbda40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 ms_handle_reset con 0x55cbb7cc4000 session 0x55cbadb09e00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 ms_handle_reset con 0x55cbad910400 session 0x55cbaf54d860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 ms_handle_reset con 0x55cbaddc1800 session 0x55cbaee65e00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 514867200 unmapped: 30990336 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 514867200 unmapped: 30990336 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 heartbeat osd_stat(store_statfs(0x1979a5000/0x0/0x1bfc00000, data 0xa560d2f/0xa749000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6163295 data_alloc: 285212672 data_used: 78557184
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 514867200 unmapped: 30990336 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 514867200 unmapped: 30990336 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 heartbeat osd_stat(store_statfs(0x1979a5000/0x0/0x1bfc00000, data 0xa560d2f/0xa749000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 514867200 unmapped: 30990336 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 heartbeat osd_stat(store_statfs(0x1979a2000/0x0/0x1bfc00000, data 0xa563d2f/0xa74c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 514867200 unmapped: 30990336 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 ms_handle_reset con 0x55cbade12000 session 0x55cbacf732c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 514867200 unmapped: 30990336 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 heartbeat osd_stat(store_statfs(0x1979a2000/0x0/0x1bfc00000, data 0xa563d2f/0xa74c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 ms_handle_reset con 0x55cbb1ed4c00 session 0x55cbafb2e780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 ms_handle_reset con 0x55cbac759000 session 0x55cbaf5a45a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.849233627s of 13.018359184s, submitted: 57
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 ms_handle_reset con 0x55cbb6510800 session 0x55cbafb0c000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6162135 data_alloc: 285212672 data_used: 78561280
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 514867200 unmapped: 30990336 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 ms_handle_reset con 0x55cbac759000 session 0x55cbad03b860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 514875392 unmapped: 30982144 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 heartbeat osd_stat(store_statfs(0x1979a1000/0x0/0x1bfc00000, data 0xa563d52/0xa74d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 ms_handle_reset con 0x55cbaddc1800 session 0x55cbafb2ef00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 ms_handle_reset con 0x55cbaefa3800 session 0x55cbacf2ad20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 ms_handle_reset con 0x55cbade12000 session 0x55cbafb82000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 ms_handle_reset con 0x55cbac759000 session 0x55cbacf2b0e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 516112384 unmapped: 29745152 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 ms_handle_reset con 0x55cbaddc1800 session 0x55cbaee80f00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 ms_handle_reset con 0x55cbaefa3800 session 0x55cbacf2ad20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 516112384 unmapped: 29745152 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 516120576 unmapped: 29736960 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6260462 data_alloc: 301989888 data_used: 85737472
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 516120576 unmapped: 29736960 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 ms_handle_reset con 0x55cbb6510800 session 0x55cbad03b860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 ms_handle_reset con 0x55cbb1ed4c00 session 0x55cbacf732c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 516120576 unmapped: 29736960 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 ms_handle_reset con 0x55cbac759000 session 0x55cbadb09e00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 516120576 unmapped: 29736960 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 ms_handle_reset con 0x55cbaddc1800 session 0x55cbacdbda40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 heartbeat osd_stat(store_statfs(0x197664000/0x0/0x1bfc00000, data 0xa89fdb4/0xaa8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 ms_handle_reset con 0x55cbaefa3800 session 0x55cbaf7bcd20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 516128768 unmapped: 29728768 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 heartbeat osd_stat(store_statfs(0x197664000/0x0/0x1bfc00000, data 0xa89fdb4/0xaa8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 513826816 unmapped: 32030720 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6273378 data_alloc: 301989888 data_used: 87695360
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 513826816 unmapped: 32030720 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 ms_handle_reset con 0x55cbaf715c00 session 0x55cbacc9f2c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.318463326s of 10.513891220s, submitted: 78
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 ms_handle_reset con 0x55cbb6510000 session 0x55cbaf5a4000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 heartbeat osd_stat(store_statfs(0x197664000/0x0/0x1bfc00000, data 0xa89fdb4/0xaa8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 ms_handle_reset con 0x55cbac759000 session 0x55cbaeb51e00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 514007040 unmapped: 31850496 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 ms_handle_reset con 0x55cbaddc1800 session 0x55cbad03be00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 ms_handle_reset con 0x55cbaefa3800 session 0x55cbadb09a40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 514031616 unmapped: 31825920 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 514031616 unmapped: 31825920 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 517062656 unmapped: 28794880 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6390853 data_alloc: 301989888 data_used: 87961600
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 518782976 unmapped: 27074560 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 heartbeat osd_stat(store_statfs(0x19675a000/0x0/0x1bfc00000, data 0xb7a8db4/0xb993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 518717440 unmapped: 27140096 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 518750208 unmapped: 27107328 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 518750208 unmapped: 27107328 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 518750208 unmapped: 27107328 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6452283 data_alloc: 301989888 data_used: 88322048
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522731520 unmapped: 23126016 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.747689247s of 10.418425560s, submitted: 305
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 523517952 unmapped: 22339584 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 ms_handle_reset con 0x55cbaf715c00 session 0x55cbafb2fc20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 heartbeat osd_stat(store_statfs(0x195f22000/0x0/0x1bfc00000, data 0xbfdbdb4/0xc1c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,1] op hist [1])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 ms_handle_reset con 0x55cbb6510000 session 0x55cbaf40d2c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 524566528 unmapped: 21291008 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 ms_handle_reset con 0x55cbac759000 session 0x55cbaf7994a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 ms_handle_reset con 0x55cbaddc1800 session 0x55cbade8a000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 heartbeat osd_stat(store_statfs(0x195e83000/0x0/0x1bfc00000, data 0xc07adb4/0xc265000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 524746752 unmapped: 21110784 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 524746752 unmapped: 21110784 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6495883 data_alloc: 301989888 data_used: 91234304
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 524820480 unmapped: 21037056 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 364 handle_osd_map epochs [364,365], i have 364, src has [1,365]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 365 handle_osd_map epochs [365,365], i have 365, src has [1,365]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 365 ms_handle_reset con 0x55cbb6510000 session 0x55cbacdbc780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 365 ms_handle_reset con 0x55cbafafb000 session 0x55cbacc34d20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 524828672 unmapped: 21028864 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 365 heartbeat osd_stat(store_statfs(0x195e5f000/0x0/0x1bfc00000, data 0xc0a0c6a/0xc28e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 365 heartbeat osd_stat(store_statfs(0x195e5f000/0x0/0x1bfc00000, data 0xc0a0c6a/0xc28e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 524828672 unmapped: 21028864 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 365 ms_handle_reset con 0x55cbaefa3800 session 0x55cbacf64780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 365 ms_handle_reset con 0x55cbaf715c00 session 0x55cbafb91e00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 365 ms_handle_reset con 0x55cbac759000 session 0x55cbacfdfc20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 524828672 unmapped: 21028864 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 524828672 unmapped: 21028864 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6494546 data_alloc: 301989888 data_used: 91242496
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 524828672 unmapped: 21028864 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 365 heartbeat osd_stat(store_statfs(0x195e63000/0x0/0x1bfc00000, data 0xc09dc47/0xc28a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.949028015s of 10.213718414s, submitted: 101
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 365 ms_handle_reset con 0x55cbaddc1800 session 0x55cbafb832c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 524828672 unmapped: 21028864 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 365 handle_osd_map epochs [365,366], i have 365, src has [1,366]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 366 handle_osd_map epochs [366,366], i have 366, src has [1,366]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 366 ms_handle_reset con 0x55cbafafb000 session 0x55cbaf54c000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 366 ms_handle_reset con 0x55cbb6510000 session 0x55cbafb0d4a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 366 heartbeat osd_stat(store_statfs(0x195e60000/0x0/0x1bfc00000, data 0xc09f9be/0xc28d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 366 ms_handle_reset con 0x55cbac759000 session 0x55cbaf54de00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 525090816 unmapped: 20766720 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 366 handle_osd_map epochs [366,367], i have 366, src has [1,367]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 367 ms_handle_reset con 0x55cbaddc1800 session 0x55cbaee65860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 525131776 unmapped: 20725760 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 367 ms_handle_reset con 0x55cbaf715c00 session 0x55cbafb0c1e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 367 ms_handle_reset con 0x55cbafafb000 session 0x55cbacf1c1e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 525377536 unmapped: 20480000 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 367 ms_handle_reset con 0x55cbaeee8400 session 0x55cbacf72f00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6476862 data_alloc: 301989888 data_used: 89636864
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 525385728 unmapped: 20471808 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 367 ms_handle_reset con 0x55cbac759000 session 0x55cbacc9f4a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 525402112 unmapped: 20455424 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 367 heartbeat osd_stat(store_statfs(0x196f21000/0x0/0x1bfc00000, data 0xaa066ae/0xabf2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 525402112 unmapped: 20455424 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 367 ms_handle_reset con 0x55cbaddc1800 session 0x55cbacffaf00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 525402112 unmapped: 20455424 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 367 heartbeat osd_stat(store_statfs(0x196f21000/0x0/0x1bfc00000, data 0xaa066ae/0xabf2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 367 ms_handle_reset con 0x55cbaf721000 session 0x55cbafb910e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 367 ms_handle_reset con 0x55cbb4ebd000 session 0x55cbaf5a4780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 525418496 unmapped: 20439040 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5964104 data_alloc: 285212672 data_used: 68300800
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 517160960 unmapped: 28696576 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 367 ms_handle_reset con 0x55cbaf715c00 session 0x55cbafb0d680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 517160960 unmapped: 28696576 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 367 handle_osd_map epochs [367,368], i have 367, src has [1,368]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.360505104s of 10.961854935s, submitted: 292
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 368 heartbeat osd_stat(store_statfs(0x198e3c000/0x0/0x1bfc00000, data 0x90c868e/0x92b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 368 ms_handle_reset con 0x55cbb6510800 session 0x55cbad92d0e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 517160960 unmapped: 28696576 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 368 heartbeat osd_stat(store_statfs(0x198e39000/0x0/0x1bfc00000, data 0x90ca2cf/0x92b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 368 ms_handle_reset con 0x55cbac759000 session 0x55cbafb910e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 509394944 unmapped: 36462592 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 368 ms_handle_reset con 0x55cbaddc1800 session 0x55cbaf7994a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 509493248 unmapped: 36364288 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 368 heartbeat osd_stat(store_statfs(0x199a6e000/0x0/0x1bfc00000, data 0x849526d/0x867f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 368 ms_handle_reset con 0x55cbaf721000 session 0x55cbaf40d2c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 368 heartbeat osd_stat(store_statfs(0x199a6e000/0x0/0x1bfc00000, data 0x849526d/0x867f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5842493 data_alloc: 285212672 data_used: 64978944
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 368 ms_handle_reset con 0x55cbaf71f000 session 0x55cbadb08f00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 509493248 unmapped: 36364288 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 368 heartbeat osd_stat(store_statfs(0x199a6e000/0x0/0x1bfc00000, data 0x849526d/0x867f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 368 ms_handle_reset con 0x55cbaf71f000 session 0x55cbadb09c20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 507109376 unmapped: 38748160 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 507109376 unmapped: 38748160 heap: 545857536 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 368 ms_handle_reset con 0x55cbac759000 session 0x55cbacc35c20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 368 ms_handle_reset con 0x55cbaddc1800 session 0x55cbbadf45a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 368 heartbeat osd_stat(store_statfs(0x19b0cc000/0x0/0x1bfc00000, data 0x6e3824a/0x7021000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 368 ms_handle_reset con 0x55cbaf721000 session 0x55cbb2c1b4a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 368 ms_handle_reset con 0x55cbb6510800 session 0x55cbaee643c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 368 ms_handle_reset con 0x55cbb6511400 session 0x55cbaf7985a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 368 ms_handle_reset con 0x55cbaeee5800 session 0x55cbafb82d20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 368 ms_handle_reset con 0x55cbac759000 session 0x55cbad893680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 368 ms_handle_reset con 0x55cbaddc1800 session 0x55cbafb823c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 368 ms_handle_reset con 0x55cbaf71f000 session 0x55cbbadf4780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 368 ms_handle_reset con 0x55cbb3952800 session 0x55cbacdbd4a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 368 ms_handle_reset con 0x55cbb4ebc400 session 0x55cbac60f4a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 368 ms_handle_reset con 0x55cbaf71f000 session 0x55cbacfa3680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 368 ms_handle_reset con 0x55cbac759000 session 0x55cbacf2a1e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 503644160 unmapped: 52715520 heap: 556359680 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 368 ms_handle_reset con 0x55cbaddc1800 session 0x55cbacdc0780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 496713728 unmapped: 59645952 heap: 556359680 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5396922 data_alloc: 251658240 data_used: 45666304
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 496713728 unmapped: 59645952 heap: 556359680 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 368 handle_osd_map epochs [368,369], i have 368, src has [1,369]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 369 ms_handle_reset con 0x55cbac759000 session 0x55cbad893c20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 369 ms_handle_reset con 0x55cbb3952800 session 0x55cbb0b42000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 369 ms_handle_reset con 0x55cbaf71f000 session 0x55cbad92c5a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 369 heartbeat osd_stat(store_statfs(0x19a46e000/0x0/0x1bfc00000, data 0x64f71c5/0x66de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 369 ms_handle_reset con 0x55cbb4ebc400 session 0x55cbafb2e5a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 511442944 unmapped: 44916736 heap: 556359680 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 369 heartbeat osd_stat(store_statfs(0x199f7c000/0x0/0x1bfc00000, data 0x6de9ee8/0x6fd2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 369 handle_osd_map epochs [369,370], i have 369, src has [1,370]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 370 ms_handle_reset con 0x55cbaeee5800 session 0x55cbafb2fe00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.271806717s of 10.021796227s, submitted: 296
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 511442944 unmapped: 44916736 heap: 556359680 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 370 ms_handle_reset con 0x55cbaeee6400 session 0x55cbacdbd680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 370 ms_handle_reset con 0x55cbaf71d800 session 0x55cbaf7981e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 370 ms_handle_reset con 0x55cbaf71f000 session 0x55cbadab3860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 370 handle_osd_map epochs [370,371], i have 370, src has [1,371]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 371 ms_handle_reset con 0x55cbac759000 session 0x55cbacffa960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 371 ms_handle_reset con 0x55cbb3952800 session 0x55cbaf5a5c20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 511451136 unmapped: 44908544 heap: 556359680 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 371 ms_handle_reset con 0x55cbaeee6400 session 0x55cbaf5a5e00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 371 ms_handle_reset con 0x55cbaf71f000 session 0x55cbaf7bda40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 371 ms_handle_reset con 0x55cbb4ebc400 session 0x55cbafb2e1e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 511451136 unmapped: 44908544 heap: 556359680 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5557404 data_alloc: 268435456 data_used: 62844928
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 511475712 unmapped: 44883968 heap: 556359680 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 371 heartbeat osd_stat(store_statfs(0x199f96000/0x0/0x1bfc00000, data 0x6dc9a23/0x6fb6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 371 heartbeat osd_stat(store_statfs(0x199f96000/0x0/0x1bfc00000, data 0x6dc9a23/0x6fb6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1ecaf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 511492096 unmapped: 44867584 heap: 556359680 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 371 ms_handle_reset con 0x55cbb6511400 session 0x55cbacf2b4a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 371 ms_handle_reset con 0x55cbaf721000 session 0x55cbacc34f00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 501358592 unmapped: 55001088 heap: 556359680 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 484270080 unmapped: 72089600 heap: 556359680 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 371 ms_handle_reset con 0x55cbaf721000 session 0x55cbaf40c1e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 484278272 unmapped: 72081408 heap: 556359680 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5056069 data_alloc: 251658240 data_used: 33705984
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 484278272 unmapped: 72081408 heap: 556359680 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 484278272 unmapped: 72081408 heap: 556359680 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 371 heartbeat osd_stat(store_statfs(0x19c6c3000/0x0/0x1bfc00000, data 0x3f199b1/0x4104000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f0bf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 371 handle_osd_map epochs [371,372], i have 371, src has [1,372]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 484212736 unmapped: 72146944 heap: 556359680 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 484229120 unmapped: 72130560 heap: 556359680 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 484229120 unmapped: 72130560 heap: 556359680 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 372 heartbeat osd_stat(store_statfs(0x19ca36000/0x0/0x1bfc00000, data 0x3f1b5ba/0x4107000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f0bf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.3 total, 600.0 interval#012Cumulative writes: 63K writes, 252K keys, 63K commit groups, 1.0 writes per commit group, ingest: 0.25 GB, 0.05 MB/s#012Cumulative WAL: 63K writes, 23K syncs, 2.72 writes per sync, written: 0.25 GB, 0.05 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 14K writes, 54K keys, 14K commit groups, 1.0 writes per commit group, ingest: 58.23 MB, 0.10 MB/s#012Interval WAL: 14K writes, 5564 syncs, 2.58 writes per sync, written: 0.06 GB, 0.10 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5060227 data_alloc: 251658240 data_used: 33718272
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 484245504 unmapped: 72114176 heap: 556359680 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.030743599s of 14.384038925s, submitted: 187
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 484384768 unmapped: 71974912 heap: 556359680 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 487645184 unmapped: 68714496 heap: 556359680 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 487645184 unmapped: 68714496 heap: 556359680 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 372 heartbeat osd_stat(store_statfs(0x19c0f7000/0x0/0x1bfc00000, data 0x48535ba/0x4a3f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f0bf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 487645184 unmapped: 68714496 heap: 556359680 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5140565 data_alloc: 251658240 data_used: 34066432
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 486653952 unmapped: 69705728 heap: 556359680 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 486653952 unmapped: 69705728 heap: 556359680 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 486653952 unmapped: 69705728 heap: 556359680 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 372 heartbeat osd_stat(store_statfs(0x19c0f3000/0x0/0x1bfc00000, data 0x485f5ba/0x4a4b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f0bf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 486653952 unmapped: 69705728 heap: 556359680 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 486662144 unmapped: 69697536 heap: 556359680 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5141541 data_alloc: 251658240 data_used: 34324480
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 486662144 unmapped: 69697536 heap: 556359680 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 372 ms_handle_reset con 0x55cbac759000 session 0x55cbacf2b680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 372 ms_handle_reset con 0x55cbaf71d800 session 0x55cbadb090e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 486563840 unmapped: 69795840 heap: 556359680 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.057139397s of 10.227844238s, submitted: 64
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 475586560 unmapped: 80773120 heap: 556359680 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 372 ms_handle_reset con 0x55cbaeee6400 session 0x55cbacffa960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 475594752 unmapped: 80764928 heap: 556359680 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 372 heartbeat osd_stat(store_statfs(0x19d57a000/0x0/0x1bfc00000, data 0x3323587/0x350d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f0bf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 475594752 unmapped: 80764928 heap: 556359680 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4892989 data_alloc: 234881024 data_used: 21250048
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 475594752 unmapped: 80764928 heap: 556359680 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 475594752 unmapped: 80764928 heap: 556359680 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 372 ms_handle_reset con 0x55cbaf71f000 session 0x55cbacfdfe00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 372 ms_handle_reset con 0x55cbaf71f000 session 0x55cbaee801e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 372 ms_handle_reset con 0x55cbac759000 session 0x55cbac60e780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 372 ms_handle_reset con 0x55cbaeee6400 session 0x55cbacf2a780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 477765632 unmapped: 78594048 heap: 556359680 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 372 ms_handle_reset con 0x55cbaf71d800 session 0x55cbafb2e3c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 372 ms_handle_reset con 0x55cbaf721000 session 0x55cbacc35e00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 372 ms_handle_reset con 0x55cbaf721000 session 0x55cbb2c1b2c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 372 ms_handle_reset con 0x55cbac759000 session 0x55cbaf5a52c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 372 ms_handle_reset con 0x55cbaeee6400 session 0x55cbafb91c20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 372 ms_handle_reset con 0x55cbaf71d800 session 0x55cbacc354a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 372 ms_handle_reset con 0x55cbb4ebc400 session 0x55cbacf730e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 372 ms_handle_reset con 0x55cbaf71f000 session 0x55cbaf5a5860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 476176384 unmapped: 80183296 heap: 556359680 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 372 ms_handle_reset con 0x55cbac759000 session 0x55cbad892f00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 372 ms_handle_reset con 0x55cbaeee6400 session 0x55cbb0b42d20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 372 ms_handle_reset con 0x55cbaf71d800 session 0x55cbaf799c20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 372 ms_handle_reset con 0x55cbaf721000 session 0x55cbb2c1a960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 372 heartbeat osd_stat(store_statfs(0x19cb3c000/0x0/0x1bfc00000, data 0x3e165f9/0x4002000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f0bf9c6), peers [0,1] op hist [0,0,0,0,0,1,1,4])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 372 ms_handle_reset con 0x55cbaf721000 session 0x55cbaee641e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 488505344 unmapped: 86310912 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 372 ms_handle_reset con 0x55cbac759000 session 0x55cbaf40cf00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 372 ms_handle_reset con 0x55cbaeee6400 session 0x55cbacdc1860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5233675 data_alloc: 251658240 data_used: 29368320
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 488505344 unmapped: 86310912 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 372 ms_handle_reset con 0x55cbafafb000 session 0x55cbafb91a40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 372 ms_handle_reset con 0x55cbaf71d800 session 0x55cbacf1cd20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 372 ms_handle_reset con 0x55cbaf71d800 session 0x55cbadd6a000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 485376000 unmapped: 89440256 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.087786674s of 10.154989243s, submitted: 104
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 372 ms_handle_reset con 0x55cbad910400 session 0x55cbadd8f0e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 372 ms_handle_reset con 0x55cbaeee6400 session 0x55cbacf2af00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 372 ms_handle_reset con 0x55cbb6511400 session 0x55cbaee641e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 482254848 unmapped: 92561408 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 372 ms_handle_reset con 0x55cbaf721000 session 0x55cbafb914a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 372 handle_osd_map epochs [372,373], i have 372, src has [1,373]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 373 ms_handle_reset con 0x55cbaf721000 session 0x55cbacdc0960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 373 ms_handle_reset con 0x55cbad910400 session 0x55cbaf54dc20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 373 ms_handle_reset con 0x55cbafafb000 session 0x55cbadd6a000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 476872704 unmapped: 97943552 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 373 heartbeat osd_stat(store_statfs(0x19ced7000/0x0/0x1bfc00000, data 0x3a792bc/0x3c65000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f0bf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 476872704 unmapped: 97943552 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4993812 data_alloc: 234881024 data_used: 25276416
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 476872704 unmapped: 97943552 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 476872704 unmapped: 97943552 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 477208576 unmapped: 97607680 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 373 heartbeat osd_stat(store_statfs(0x19ced7000/0x0/0x1bfc00000, data 0x3a792bc/0x3c65000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f0bf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 477339648 unmapped: 97476608 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 373 heartbeat osd_stat(store_statfs(0x19ced7000/0x0/0x1bfc00000, data 0x3a792bc/0x3c65000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f0bf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 477339648 unmapped: 97476608 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5066932 data_alloc: 251658240 data_used: 35540992
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 477339648 unmapped: 97476608 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 477339648 unmapped: 97476608 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 373 ms_handle_reset con 0x55cbaf71f000 session 0x55cbacfde5a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 373 ms_handle_reset con 0x55cbac759000 session 0x55cbbadf5680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 373 handle_osd_map epochs [373,374], i have 373, src has [1,374]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.331307411s of 10.487876892s, submitted: 66
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 477339648 unmapped: 97476608 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 477339648 unmapped: 97476608 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 477356032 unmapped: 97460224 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 374 heartbeat osd_stat(store_statfs(0x19ced5000/0x0/0x1bfc00000, data 0x3a7aec5/0x3c68000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f0bf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5069554 data_alloc: 251658240 data_used: 35540992
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 477356032 unmapped: 97460224 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 374 heartbeat osd_stat(store_statfs(0x19ced5000/0x0/0x1bfc00000, data 0x3a7aec5/0x3c68000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f0bf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 477356032 unmapped: 97460224 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 374 ms_handle_reset con 0x55cbac759000 session 0x55cbafb0cf00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 374 ms_handle_reset con 0x55cbad910400 session 0x55cbafb2eb40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 374 ms_handle_reset con 0x55cbaf71f000 session 0x55cbadb085a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 374 ms_handle_reset con 0x55cbaf721000 session 0x55cbaf40d4a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #56. Immutable memtables: 12.
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 374 ms_handle_reset con 0x55cbafafb000 session 0x55cbade8b2c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 478822400 unmapped: 95993856 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 374 ms_handle_reset con 0x55cbafafb000 session 0x55cbafb0cd20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 374 ms_handle_reset con 0x55cbac759000 session 0x55cbadb09e00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 374 ms_handle_reset con 0x55cbad910400 session 0x55cbaf7bd2c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 374 ms_handle_reset con 0x55cbaf71f000 session 0x55cbaeb51680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 485261312 unmapped: 89554944 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 374 heartbeat osd_stat(store_statfs(0x19a9cc000/0x0/0x1bfc00000, data 0x4de2f37/0x4fd2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2025f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 486645760 unmapped: 88170496 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 374 heartbeat osd_stat(store_statfs(0x19a085000/0x0/0x1bfc00000, data 0x5729f37/0x5919000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2025f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5308413 data_alloc: 251658240 data_used: 37081088
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 486719488 unmapped: 88096768 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 374 handle_osd_map epochs [374,375], i have 374, src has [1,375]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 375 ms_handle_reset con 0x55cbb6511400 session 0x55cbaf5a4960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 485294080 unmapped: 89522176 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 375 heartbeat osd_stat(store_statfs(0x19a055000/0x0/0x1bfc00000, data 0x5756c7d/0x5948000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2025f9c6), peers [0,1] op hist [0,1])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 375 ms_handle_reset con 0x55cbb6511400 session 0x55cbbadf4960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.448025703s of 10.102033615s, submitted: 203
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 485482496 unmapped: 89333760 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 375 heartbeat osd_stat(store_statfs(0x19a02a000/0x0/0x1bfc00000, data 0x5780ca0/0x5973000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2025f9c6), peers [0,1] op hist [0,0,1])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 485490688 unmapped: 89325568 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 488841216 unmapped: 85975040 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 375 ms_handle_reset con 0x55cbafafb000 session 0x55cbacdbc5a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 375 ms_handle_reset con 0x55cbb4ebd000 session 0x55cbacf2b2c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5437429 data_alloc: 268435456 data_used: 50675712
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 375 ms_handle_reset con 0x55cbb1ed5c00 session 0x55cbacf2ab40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 491896832 unmapped: 82919424 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 375 ms_handle_reset con 0x55cbaeee7c00 session 0x55cbacf2ad20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 491929600 unmapped: 82886656 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 375 ms_handle_reset con 0x55cbafafb000 session 0x55cbacf2be00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 375 heartbeat osd_stat(store_statfs(0x19a028000/0x0/0x1bfc00000, data 0x5783ca0/0x5976000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2025f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 375 ms_handle_reset con 0x55cbaeee6400 session 0x55cbad892f00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 375 ms_handle_reset con 0x55cbaf71d800 session 0x55cbadab3860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 375 ms_handle_reset con 0x55cbb1ed5c00 session 0x55cbacf2bc20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 492978176 unmapped: 81838080 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 375 ms_handle_reset con 0x55cbb4ebd000 session 0x55cbacf2a000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 375 ms_handle_reset con 0x55cbaeee6400 session 0x55cbaf40de00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 375 ms_handle_reset con 0x55cbaf71d800 session 0x55cbaf40cb40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 375 ms_handle_reset con 0x55cbafafb000 session 0x55cbaf40d860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 375 ms_handle_reset con 0x55cbb1ed5c00 session 0x55cbadab3a40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 375 ms_handle_reset con 0x55cbb6511400 session 0x55cbacfa2780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 489168896 unmapped: 85647360 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 489168896 unmapped: 85647360 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5262795 data_alloc: 251658240 data_used: 38645760
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 489168896 unmapped: 85647360 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 375 ms_handle_reset con 0x55cbaeee6400 session 0x55cbacf72960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 375 ms_handle_reset con 0x55cbaf71d800 session 0x55cbadb08f00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 375 ms_handle_reset con 0x55cbafafb000 session 0x55cbaf5a5860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 489168896 unmapped: 85647360 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 375 ms_handle_reset con 0x55cbb1ed5c00 session 0x55cbaf5a5e00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 375 ms_handle_reset con 0x55cbb6511400 session 0x55cbacf645a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 375 ms_handle_reset con 0x55cbb6511400 session 0x55cbaf7bda40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 375 ms_handle_reset con 0x55cbaeee6400 session 0x55cbadd6a960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 375 heartbeat osd_stat(store_statfs(0x19ad1d000/0x0/0x1bfc00000, data 0x4a92bbc/0x4c81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2025f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 489168896 unmapped: 85647360 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.311878204s of 10.267891884s, submitted: 339
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 375 ms_handle_reset con 0x55cbaf71d800 session 0x55cbb2c1be00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 489168896 unmapped: 85647360 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 375 ms_handle_reset con 0x55cbac759000 session 0x55cbb0b43e00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 375 ms_handle_reset con 0x55cbaf721000 session 0x55cbad92c000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 489168896 unmapped: 85647360 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5338873 data_alloc: 268435456 data_used: 49045504
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 492511232 unmapped: 82305024 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 375 ms_handle_reset con 0x55cbaf721000 session 0x55cbacffb4a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 497106944 unmapped: 77709312 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 375 heartbeat osd_stat(store_statfs(0x19be98000/0x0/0x1bfc00000, data 0x38e5bac/0x3ad3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2025f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 497115136 unmapped: 77701120 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 375 ms_handle_reset con 0x55cbac759000 session 0x55cbaf40dc20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 497795072 unmapped: 77021184 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 375 heartbeat osd_stat(store_statfs(0x19bea6000/0x0/0x1bfc00000, data 0x3909bbc/0x3af8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2025f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 497795072 unmapped: 77021184 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5148926 data_alloc: 251658240 data_used: 42061824
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 497795072 unmapped: 77021184 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 497795072 unmapped: 77021184 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 497795072 unmapped: 77021184 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 497795072 unmapped: 77021184 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.073127747s of 11.343592644s, submitted: 163
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 497803264 unmapped: 77012992 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 375 heartbeat osd_stat(store_statfs(0x19bea6000/0x0/0x1bfc00000, data 0x3909bbc/0x3af8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2025f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5147176 data_alloc: 251658240 data_used: 42061824
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 375 handle_osd_map epochs [375,376], i have 375, src has [1,376]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 375 handle_osd_map epochs [376,376], i have 376, src has [1,376]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 497803264 unmapped: 77012992 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 376 ms_handle_reset con 0x55cbaf71d800 session 0x55cbaee5a780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 497803264 unmapped: 77012992 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 504840192 unmapped: 69976064 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 504053760 unmapped: 70762496 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 504053760 unmapped: 70762496 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 376 heartbeat osd_stat(store_statfs(0x19b05b000/0x0/0x1bfc00000, data 0x474f8df/0x493f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2025f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5269529 data_alloc: 251658240 data_used: 43163648
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 504061952 unmapped: 70754304 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 504160256 unmapped: 70656000 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 376 heartbeat osd_stat(store_statfs(0x19b056000/0x0/0x1bfc00000, data 0x47548df/0x4944000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2025f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 504160256 unmapped: 70656000 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 504160256 unmapped: 70656000 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 504160256 unmapped: 70656000 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 376 heartbeat osd_stat(store_statfs(0x19b00a000/0x0/0x1bfc00000, data 0x478b8df/0x497b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2025f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5284573 data_alloc: 251658240 data_used: 43380736
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 504160256 unmapped: 70656000 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 504160256 unmapped: 70656000 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 376 heartbeat osd_stat(store_statfs(0x19b00a000/0x0/0x1bfc00000, data 0x478b8df/0x497b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2025f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.364234924s of 13.063589096s, submitted: 134
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 504160256 unmapped: 70656000 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 376 heartbeat osd_stat(store_statfs(0x19b020000/0x0/0x1bfc00000, data 0x478e8df/0x497e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2025f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 504168448 unmapped: 70647808 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 376 ms_handle_reset con 0x55cbb6511400 session 0x55cbacf2a3c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 376 ms_handle_reset con 0x55cbb1690800 session 0x55cbaf54c780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 376 ms_handle_reset con 0x55cbb1690800 session 0x55cbad92da40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 376 ms_handle_reset con 0x55cbac759000 session 0x55cbacdbd0e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 504389632 unmapped: 70426624 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 376 heartbeat osd_stat(store_statfs(0x19b020000/0x0/0x1bfc00000, data 0x478e8df/0x497e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2025f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 376 ms_handle_reset con 0x55cbaf721000 session 0x55cbaf798f00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 376 ms_handle_reset con 0x55cbaf71d800 session 0x55cbadb090e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 376 ms_handle_reset con 0x55cbb6511400 session 0x55cbac60e5a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 376 ms_handle_reset con 0x55cbb6511400 session 0x55cbad03ba40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5356320 data_alloc: 251658240 data_used: 43384832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 376 ms_handle_reset con 0x55cbac759000 session 0x55cbaeb503c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 376 ms_handle_reset con 0x55cbaf71d800 session 0x55cbafb0cf00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 504389632 unmapped: 70426624 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 504397824 unmapped: 70418432 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 376 heartbeat osd_stat(store_statfs(0x19a7fe000/0x0/0x1bfc00000, data 0x4fad961/0x51a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2025f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 504397824 unmapped: 70418432 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 376 ms_handle_reset con 0x55cbb1690800 session 0x55cbac60f680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 376 ms_handle_reset con 0x55cbb6aa1000 session 0x55cbafb2e3c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 504406016 unmapped: 70410240 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 376 ms_handle_reset con 0x55cbb6aa1000 session 0x55cbac611860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 376 ms_handle_reset con 0x55cbac759000 session 0x55cbacf2a000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 376 ms_handle_reset con 0x55cbb1690800 session 0x55cbad92d0e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 376 ms_handle_reset con 0x55cbaf721000 session 0x55cbb2c1b2c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 376 handle_osd_map epochs [376,377], i have 376, src has [1,377]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 377 handle_osd_map epochs [377,377], i have 377, src has [1,377]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 504569856 unmapped: 70246400 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 377 ms_handle_reset con 0x55cbaf71d800 session 0x55cbacf2ad20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5391119 data_alloc: 251658240 data_used: 46166016
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 505176064 unmapped: 69640192 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 377 ms_handle_reset con 0x55cbad910400 session 0x55cbafb0c960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 377 ms_handle_reset con 0x55cbaf71f000 session 0x55cbadb09a40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 377 ms_handle_reset con 0x55cbac759000 session 0x55cbac610d20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 377 ms_handle_reset con 0x55cbafafb000 session 0x55cbacf723c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 377 ms_handle_reset con 0x55cbb1ed5c00 session 0x55cbafb910e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 377 ms_handle_reset con 0x55cbaeee6400 session 0x55cbacf72f00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 377 ms_handle_reset con 0x55cbac759000 session 0x55cbaee643c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 502112256 unmapped: 72704000 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 377 ms_handle_reset con 0x55cbad910400 session 0x55cbacc352c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 377 ms_handle_reset con 0x55cbaf71f000 session 0x55cbacf2af00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 377 ms_handle_reset con 0x55cbafafb000 session 0x55cbacfde5a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.410407066s of 10.045905113s, submitted: 146
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 377 ms_handle_reset con 0x55cbac759000 session 0x55cbaf40cf00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 377 ms_handle_reset con 0x55cbad910400 session 0x55cbaf7bd860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 377 ms_handle_reset con 0x55cbaeee6400 session 0x55cbafaf01e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 377 heartbeat osd_stat(store_statfs(0x19bf5c000/0x0/0x1bfc00000, data 0x384f666/0x3a41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2025f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 377 ms_handle_reset con 0x55cbaf71f000 session 0x55cbac610960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 377 ms_handle_reset con 0x55cbaf721000 session 0x55cbafb2ef00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 377 ms_handle_reset con 0x55cbaf721000 session 0x55cbafb2fe00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 502358016 unmapped: 72458240 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 502358016 unmapped: 72458240 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 502358016 unmapped: 72458240 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5199800 data_alloc: 251658240 data_used: 35479552
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 377 ms_handle_reset con 0x55cbaf71f000 session 0x55cbafb0d2c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 502366208 unmapped: 72450048 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 377 ms_handle_reset con 0x55cbb1690800 session 0x55cbaf5a4000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 502366208 unmapped: 72450048 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 377 ms_handle_reset con 0x55cbb6aa1000 session 0x55cbac6114a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 377 ms_handle_reset con 0x55cbb5fe3800 session 0x55cbb2c1a5a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 502554624 unmapped: 72261632 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 377 heartbeat osd_stat(store_statfs(0x19b828000/0x0/0x1bfc00000, data 0x3f82699/0x4176000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2025f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,1])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 502554624 unmapped: 72261632 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 377 handle_osd_map epochs [377,378], i have 377, src has [1,378]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 502562816 unmapped: 72253440 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5248038 data_alloc: 251658240 data_used: 40550400
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 502562816 unmapped: 72253440 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 502562816 unmapped: 72253440 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.506701469s of 10.345028877s, submitted: 94
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 506724352 unmapped: 68091904 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 378 ms_handle_reset con 0x55cbb1690800 session 0x55cbafb2f680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 378 ms_handle_reset con 0x55cbb6aa1000 session 0x55cbaf798000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 378 heartbeat osd_stat(store_statfs(0x19acea000/0x0/0x1bfc00000, data 0x4ab92a2/0x4cae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2025f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 378 ms_handle_reset con 0x55cbaddbf800 session 0x55cbacfde000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 378 ms_handle_reset con 0x55cbaddc0c00 session 0x55cbacdc1a40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 507330560 unmapped: 67485696 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 378 ms_handle_reset con 0x55cbaf71ac00 session 0x55cbafb83860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 378 ms_handle_reset con 0x55cbaddbf800 session 0x55cbadab2d20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 378 ms_handle_reset con 0x55cbaddc0c00 session 0x55cbad92d2c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 378 ms_handle_reset con 0x55cbb1690800 session 0x55cbac60f2c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 378 ms_handle_reset con 0x55cbb6aa1000 session 0x55cbb0b43a40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 507617280 unmapped: 67198976 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5413739 data_alloc: 251658240 data_used: 42668032
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 507625472 unmapped: 67190784 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 507633664 unmapped: 67182592 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 378 heartbeat osd_stat(store_statfs(0x19a57e000/0x0/0x1bfc00000, data 0x52282b1/0x541e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2025f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 507641856 unmapped: 67174400 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 507641856 unmapped: 67174400 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 507641856 unmapped: 67174400 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 378 ms_handle_reset con 0x55cbadeb2000 session 0x55cbb0b42b40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5430807 data_alloc: 251658240 data_used: 44474368
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 378 ms_handle_reset con 0x55cbaddbf800 session 0x55cbaf54cf00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 507641856 unmapped: 67174400 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 378 ms_handle_reset con 0x55cbaddc0c00 session 0x55cbac610b40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 509714432 unmapped: 65101824 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 378 ms_handle_reset con 0x55cbb1690800 session 0x55cbafb0d4a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.089967728s of 10.211197853s, submitted: 253
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 509779968 unmapped: 65036288 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 378 heartbeat osd_stat(store_statfs(0x199934000/0x0/0x1bfc00000, data 0x5e6a2e4/0x6062000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2025f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,1,0,2])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 511672320 unmapped: 63143936 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 510812160 unmapped: 64004096 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5587975 data_alloc: 268435456 data_used: 52076544
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 511025152 unmapped: 63791104 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 511025152 unmapped: 63791104 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 511025152 unmapped: 63791104 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 378 heartbeat osd_stat(store_statfs(0x19949d000/0x0/0x1bfc00000, data 0x5ef82e4/0x60f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2066f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 378 heartbeat osd_stat(store_statfs(0x19949d000/0x0/0x1bfc00000, data 0x5ef82e4/0x60f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2066f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 511025152 unmapped: 63791104 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 511025152 unmapped: 63791104 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 378 heartbeat osd_stat(store_statfs(0x19949e000/0x0/0x1bfc00000, data 0x5ef82e4/0x60f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2066f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5592161 data_alloc: 268435456 data_used: 52232192
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 511025152 unmapped: 63791104 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 378 heartbeat osd_stat(store_statfs(0x19949e000/0x0/0x1bfc00000, data 0x5ef82e4/0x60f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2066f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 511033344 unmapped: 63782912 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 511033344 unmapped: 63782912 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 511033344 unmapped: 63782912 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 378 heartbeat osd_stat(store_statfs(0x199496000/0x0/0x1bfc00000, data 0x5efd2e4/0x60f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2066f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 511033344 unmapped: 63782912 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.505875587s of 13.038665771s, submitted: 50
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5593093 data_alloc: 268435456 data_used: 52224000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 378 ms_handle_reset con 0x55cbaddc0400 session 0x55cbacf1c1e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 511074304 unmapped: 63741952 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 378 heartbeat osd_stat(store_statfs(0x198b8f000/0x0/0x1bfc00000, data 0x68072e4/0x69ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2066f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 515317760 unmapped: 59498496 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 515563520 unmapped: 59252736 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 516308992 unmapped: 58507264 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 516308992 unmapped: 58507264 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5769015 data_alloc: 268435456 data_used: 54546432
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 516308992 unmapped: 58507264 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 378 ms_handle_reset con 0x55cbbe336400 session 0x55cbafb82b40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 516464640 unmapped: 58351616 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 378 heartbeat osd_stat(store_statfs(0x198247000/0x0/0x1bfc00000, data 0x714f2e4/0x7347000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2066f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 517160960 unmapped: 57655296 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 378 heartbeat osd_stat(store_statfs(0x198247000/0x0/0x1bfc00000, data 0x714f2e4/0x7347000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2066f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 518078464 unmapped: 56737792 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 518078464 unmapped: 56737792 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 378 ms_handle_reset con 0x55cbaddc0400 session 0x55cbacf2a780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5819642 data_alloc: 268435456 data_used: 60051456
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 518078464 unmapped: 56737792 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 518078464 unmapped: 56737792 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.640688896s of 11.398405075s, submitted: 145
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 378 ms_handle_reset con 0x55cbad910400 session 0x55cbaf5a50e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 518086656 unmapped: 56729600 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 378 ms_handle_reset con 0x55cbaddc0c00 session 0x55cbafb0c000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 518111232 unmapped: 56705024 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 378 heartbeat osd_stat(store_statfs(0x198225000/0x0/0x1bfc00000, data 0x7171336/0x7369000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2066f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 378 ms_handle_reset con 0x55cbb1690800 session 0x55cbadd8e000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 378 ms_handle_reset con 0x55cbac759000 session 0x55cbacffa1e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 378 ms_handle_reset con 0x55cbaf71f000 session 0x55cbaf799a40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 378 ms_handle_reset con 0x55cbaeee6400 session 0x55cbadd6a000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 378 ms_handle_reset con 0x55cbaf721000 session 0x55cbaee64b40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 518111232 unmapped: 56705024 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5482284 data_alloc: 251658240 data_used: 43999232
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 378 ms_handle_reset con 0x55cbad910400 session 0x55cbacf64000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 514768896 unmapped: 60047360 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 378 ms_handle_reset con 0x55cbac759000 session 0x55cbac610d20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 378 handle_osd_map epochs [378,379], i have 378, src has [1,379]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 378 handle_osd_map epochs [379,379], i have 379, src has [1,379]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 379 ms_handle_reset con 0x55cbac759000 session 0x55cbbadf52c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 514777088 unmapped: 60039168 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 379 ms_handle_reset con 0x55cbaeee6400 session 0x55cbaf7990e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 379 ms_handle_reset con 0x55cbad910400 session 0x55cbafb901e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 525295616 unmapped: 49520640 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 379 ms_handle_reset con 0x55cbaf71f000 session 0x55cbafb832c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 379 handle_osd_map epochs [379,380], i have 379, src has [1,380]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 380 handle_osd_map epochs [380,380], i have 380, src has [1,380]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 380 heartbeat osd_stat(store_statfs(0x19a6a9000/0x0/0x1bfc00000, data 0x5d2b088/0x5f23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 380 ms_handle_reset con 0x55cbaf721000 session 0x55cbacf2b860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 523706368 unmapped: 51109888 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 380 handle_osd_map epochs [380,381], i have 380, src has [1,381]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 381 ms_handle_reset con 0x55cbaf721000 session 0x55cbb0b42b40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 518963200 unmapped: 55853056 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 381 ms_handle_reset con 0x55cbaddc0400 session 0x55cbbadf4000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5668797 data_alloc: 268435456 data_used: 54992896
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 381 ms_handle_reset con 0x55cbad0de400 session 0x55cbb2c1ab40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 381 ms_handle_reset con 0x55cbadeb3800 session 0x55cbaf54d4a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 381 ms_handle_reset con 0x55cbaeee7400 session 0x55cbaf7994a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 518987776 unmapped: 55828480 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 518782976 unmapped: 56033280 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.179440498s of 10.007057190s, submitted: 256
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 518791168 unmapped: 56025088 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 381 handle_osd_map epochs [381,382], i have 381, src has [1,382]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 382 heartbeat osd_stat(store_statfs(0x199756000/0x0/0x1bfc00000, data 0x6c80aea/0x6e78000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 382 ms_handle_reset con 0x55cbaeee7400 session 0x55cbaee801e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 510353408 unmapped: 64462848 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 382 ms_handle_reset con 0x55cbad0de400 session 0x55cbadb090e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 510377984 unmapped: 64438272 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5463958 data_alloc: 251658240 data_used: 35680256
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 510377984 unmapped: 64438272 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 382 ms_handle_reset con 0x55cbaddc0400 session 0x55cbaf7bcb40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 510386176 unmapped: 64430080 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 382 handle_osd_map epochs [382,383], i have 382, src has [1,383]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 511434752 unmapped: 63381504 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 383 heartbeat osd_stat(store_statfs(0x19ab05000/0x0/0x1bfc00000, data 0x58d0453/0x5ac8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 383 ms_handle_reset con 0x55cbaddbf800 session 0x55cbafb82960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 383 ms_handle_reset con 0x55cbbe336400 session 0x55cbade8a000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 383 ms_handle_reset con 0x55cbbe336400 session 0x55cbacdc1860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 383 ms_handle_reset con 0x55cbad0de400 session 0x55cbafb0c3c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 510386176 unmapped: 64430080 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 383 ms_handle_reset con 0x55cbaddbf800 session 0x55cbacf730e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 383 heartbeat osd_stat(store_statfs(0x19ab05000/0x0/0x1bfc00000, data 0x58d0453/0x5ac8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 383 ms_handle_reset con 0x55cbadeb3800 session 0x55cbafb91680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 383 ms_handle_reset con 0x55cbaf721000 session 0x55cbafaf03c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 504037376 unmapped: 70778880 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 383 ms_handle_reset con 0x55cbaddc0400 session 0x55cbaf54dc20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 383 ms_handle_reset con 0x55cbad0de400 session 0x55cbafb0c3c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5277968 data_alloc: 251658240 data_used: 32718848
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 505159680 unmapped: 69656576 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 383 ms_handle_reset con 0x55cbaddbf800 session 0x55cbade8a000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 383 ms_handle_reset con 0x55cbaeee7400 session 0x55cbaf5a4960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 383 ms_handle_reset con 0x55cbadeb3800 session 0x55cbb2c1ab40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 505724928 unmapped: 69091328 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.185317993s of 10.702354431s, submitted: 165
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 383 ms_handle_reset con 0x55cbb6aa1000 session 0x55cbafb2f0e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 383 ms_handle_reset con 0x55cbb68f8800 session 0x55cbafaf14a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 383 ms_handle_reset con 0x55cbad0de400 session 0x55cbb0b42b40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 505733120 unmapped: 69083136 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 383 ms_handle_reset con 0x55cbaddc0400 session 0x55cbbadf52c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 505724928 unmapped: 69091328 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 383 ms_handle_reset con 0x55cbaeee7400 session 0x55cbacffa1e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 383 heartbeat osd_stat(store_statfs(0x19b9e1000/0x0/0x1bfc00000, data 0x49f44b5/0x4bed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [0,1] op hist [0,0,0,0,0,1])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 383 ms_handle_reset con 0x55cbb6511400 session 0x55cbafb83680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 383 ms_handle_reset con 0x55cbaf72b000 session 0x55cbaee65e00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 503701504 unmapped: 71114752 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 383 heartbeat osd_stat(store_statfs(0x19cc31000/0x0/0x1bfc00000, data 0x37a44c9/0x399d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [0,1] op hist [0,0,0,0,1,1])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 383 ms_handle_reset con 0x55cbaddc0400 session 0x55cbacf64960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 383 ms_handle_reset con 0x55cbb68f8800 session 0x55cbadab30e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 383 ms_handle_reset con 0x55cbb6aa1000 session 0x55cbacdc0780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 383 ms_handle_reset con 0x55cbaeee7400 session 0x55cbbadf45a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 383 heartbeat osd_stat(store_statfs(0x19cc31000/0x0/0x1bfc00000, data 0x37a44c9/0x399d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [0,1] op hist [0,0,0,0,0,1,0,1])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 383 ms_handle_reset con 0x55cbaddc0400 session 0x55cbbadf4780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4928792 data_alloc: 234881024 data_used: 18243584
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 499408896 unmapped: 75407360 heap: 574816256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 383 ms_handle_reset con 0x55cbb6511400 session 0x55cbaf54d860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 383 ms_handle_reset con 0x55cbaf72b000 session 0x55cbaf798000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 383 ms_handle_reset con 0x55cbaddbf800 session 0x55cbacf2b860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 383 ms_handle_reset con 0x55cbb79ad000 session 0x55cbaf799680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 383 ms_handle_reset con 0x55cbaddbf800 session 0x55cbb2c1a000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 383 ms_handle_reset con 0x55cbad0de400 session 0x55cbafb82b40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 383 ms_handle_reset con 0x55cbaddc0400 session 0x55cbad92c000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 383 ms_handle_reset con 0x55cbaeee7400 session 0x55cbac611680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 383 ms_handle_reset con 0x55cbad0de400 session 0x55cbacdbd680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 506298368 unmapped: 76275712 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 383 ms_handle_reset con 0x55cbaddbf800 session 0x55cbaf40c3c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 506298368 unmapped: 76275712 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 383 ms_handle_reset con 0x55cbaddc0400 session 0x55cbadab3a40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 383 heartbeat osd_stat(store_statfs(0x19cc04000/0x0/0x1bfc00000, data 0x37d3496/0x39ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 383 ms_handle_reset con 0x55cbb79ad000 session 0x55cbaee643c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 383 ms_handle_reset con 0x55cbaf72b000 session 0x55cbadd6b2c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 506667008 unmapped: 75907072 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 506675200 unmapped: 75898880 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 383 ms_handle_reset con 0x55cbb79ad000 session 0x55cbacf64b40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5173627 data_alloc: 251658240 data_used: 32755712
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 506691584 unmapped: 75882496 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 383 ms_handle_reset con 0x55cbb6511400 session 0x55cbade8af00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 383 ms_handle_reset con 0x55cbb1691400 session 0x55cbbadf5e00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 383 handle_osd_map epochs [383,384], i have 383, src has [1,384]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 500473856 unmapped: 82100224 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 384 ms_handle_reset con 0x55cbaddc0400 session 0x55cbafaf0d20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 500473856 unmapped: 82100224 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 384 heartbeat osd_stat(store_statfs(0x19d304000/0x0/0x1bfc00000, data 0x30d0230/0x32c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 384 ms_handle_reset con 0x55cbaeec3c00 session 0x55cbafb82000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 500473856 unmapped: 82100224 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.027677536s of 11.400251389s, submitted: 189
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 384 ms_handle_reset con 0x55cbaddc0400 session 0x55cbac610960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 500473856 unmapped: 82100224 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 384 ms_handle_reset con 0x55cbb1691400 session 0x55cbac60eb40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 384 ms_handle_reset con 0x55cbb6511400 session 0x55cbade8af00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5119421 data_alloc: 251658240 data_used: 29986816
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 500473856 unmapped: 82100224 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 384 heartbeat osd_stat(store_statfs(0x19d306000/0x0/0x1bfc00000, data 0x30d0220/0x32c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 500473856 unmapped: 82100224 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 500473856 unmapped: 82100224 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 500473856 unmapped: 82100224 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 500473856 unmapped: 82100224 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5248709 data_alloc: 251658240 data_used: 31068160
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 507584512 unmapped: 74989568 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 384 heartbeat osd_stat(store_statfs(0x19c2c9000/0x0/0x1bfc00000, data 0x4105220/0x42fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 506732544 unmapped: 75841536 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 384 handle_osd_map epochs [384,385], i have 384, src has [1,385]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 506757120 unmapped: 75816960 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 506757120 unmapped: 75816960 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 506757120 unmapped: 75816960 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 385 heartbeat osd_stat(store_statfs(0x19c28d000/0x0/0x1bfc00000, data 0x4146e29/0x4340000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5302181 data_alloc: 251658240 data_used: 34516992
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 506757120 unmapped: 75816960 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 506757120 unmapped: 75816960 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.678215981s of 13.062757492s, submitted: 171
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 506765312 unmapped: 75808768 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 506642432 unmapped: 75931648 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 385 ms_handle_reset con 0x55cbb79ad000 session 0x55cbacfa3860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 385 ms_handle_reset con 0x55cbaf71a000 session 0x55cbaee81a40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 385 ms_handle_reset con 0x55cbaf71a000 session 0x55cbaee803c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 385 ms_handle_reset con 0x55cbaddc0400 session 0x55cbacdc0f00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 385 ms_handle_reset con 0x55cbb1691400 session 0x55cbacf72000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 385 ms_handle_reset con 0x55cbb6511400 session 0x55cbad0301e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 385 ms_handle_reset con 0x55cbb79ad000 session 0x55cbafaf0000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 385 ms_handle_reset con 0x55cbb79ad000 session 0x55cbafaf01e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 385 ms_handle_reset con 0x55cbaddc0400 session 0x55cbb0b42d20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 506781696 unmapped: 75792384 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5391092 data_alloc: 251658240 data_used: 34549760
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 506789888 unmapped: 75784192 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 385 heartbeat osd_stat(store_statfs(0x19b886000/0x0/0x1bfc00000, data 0x4b4de39/0x4d48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 385 ms_handle_reset con 0x55cbad0de400 session 0x55cbaf798d20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 506699776 unmapped: 75874304 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 506699776 unmapped: 75874304 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 385 ms_handle_reset con 0x55cbb68f8800 session 0x55cbacfde000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 385 ms_handle_reset con 0x55cbbe336400 session 0x55cbb2c1b4a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 385 ms_handle_reset con 0x55cbaeec3c00 session 0x55cbad03be00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 385 ms_handle_reset con 0x55cbaf72b000 session 0x55cbade8ba40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 385 ms_handle_reset con 0x55cbaddbf800 session 0x55cbadd8e000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 506707968 unmapped: 75866112 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 385 heartbeat osd_stat(store_statfs(0x19b882000/0x0/0x1bfc00000, data 0x4b50e39/0x4d4b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 385 ms_handle_reset con 0x55cbad0de400 session 0x55cbacf64000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 505479168 unmapped: 77094912 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 385 ms_handle_reset con 0x55cbbe336400 session 0x55cbafb2ed20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5140472 data_alloc: 234881024 data_used: 21975040
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 505577472 unmapped: 76996608 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 505577472 unmapped: 76996608 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 385 heartbeat osd_stat(store_statfs(0x19cc79000/0x0/0x1bfc00000, data 0x375adc7/0x3953000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 505577472 unmapped: 76996608 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 505577472 unmapped: 76996608 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 505577472 unmapped: 76996608 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5153112 data_alloc: 234881024 data_used: 23678976
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 505430016 unmapped: 77144064 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 385 heartbeat osd_stat(store_statfs(0x19cc79000/0x0/0x1bfc00000, data 0x375adc7/0x3953000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 505372672 unmapped: 77201408 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 505372672 unmapped: 77201408 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.189364433s of 16.860044479s, submitted: 133
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 505372672 unmapped: 77201408 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 385 heartbeat osd_stat(store_statfs(0x19cc79000/0x0/0x1bfc00000, data 0x375adc7/0x3953000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 385 ms_handle_reset con 0x55cbaeec3c00 session 0x55cbaee652c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 385 ms_handle_reset con 0x55cbaf72b000 session 0x55cbafb90000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 505372672 unmapped: 77201408 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 385 ms_handle_reset con 0x55cbaf71a000 session 0x55cbafaf0f00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 385 handle_osd_map epochs [385,386], i have 385, src has [1,386]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 handle_osd_map epochs [386,386], i have 386, src has [1,386]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 ms_handle_reset con 0x55cbb1691400 session 0x55cbacdbc960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 ms_handle_reset con 0x55cbaddbf800 session 0x55cbafb0c960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 ms_handle_reset con 0x55cbaddbf800 session 0x55cbac60f2c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 ms_handle_reset con 0x55cbaeec3c00 session 0x55cbbadf4960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 ms_handle_reset con 0x55cbaf71a000 session 0x55cbaf40c780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 ms_handle_reset con 0x55cbaf72b000 session 0x55cbade8b2c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 ms_handle_reset con 0x55cbb1691400 session 0x55cbacc35680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5274300 data_alloc: 251658240 data_used: 31039488
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 505405440 unmapped: 77168640 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 ms_handle_reset con 0x55cbaddbf800 session 0x55cbaf7bc3c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 ms_handle_reset con 0x55cbaeec3c00 session 0x55cbadb09860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 ms_handle_reset con 0x55cbaf71a000 session 0x55cbacf645a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 ms_handle_reset con 0x55cbaf72b000 session 0x55cbaf7bcb40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 ms_handle_reset con 0x55cbb1691400 session 0x55cbaee81680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 505430016 unmapped: 77144064 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 ms_handle_reset con 0x55cbaddbf800 session 0x55cbaeb505a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 505495552 unmapped: 77078528 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 heartbeat osd_stat(store_statfs(0x19c567000/0x0/0x1bfc00000, data 0x3e64c50/0x4065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 505495552 unmapped: 77078528 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 ms_handle_reset con 0x55cbaeec3c00 session 0x55cbacfa2780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 heartbeat osd_stat(store_statfs(0x19c567000/0x0/0x1bfc00000, data 0x3e64c50/0x4065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 ms_handle_reset con 0x55cbaf71a000 session 0x55cbad92d0e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 505495552 unmapped: 77078528 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 heartbeat osd_stat(store_statfs(0x19c569000/0x0/0x1bfc00000, data 0x3e64c50/0x4065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 ms_handle_reset con 0x55cbaf72b000 session 0x55cbaee5a780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5300964 data_alloc: 251658240 data_used: 32813056
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 505503744 unmapped: 77070336 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 ms_handle_reset con 0x55cbb6511400 session 0x55cbacc9f4a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 ms_handle_reset con 0x55cbaeee8800 session 0x55cbacfa3860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 ms_handle_reset con 0x55cbade16800 session 0x55cbad92c000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 ms_handle_reset con 0x55cbaddbf800 session 0x55cbacffa5a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 505708544 unmapped: 76865536 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 ms_handle_reset con 0x55cbaeec3c00 session 0x55cbaf7bcd20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 ms_handle_reset con 0x55cbaf71a000 session 0x55cbbadf5860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 508149760 unmapped: 74424320 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.155324936s of 10.000619888s, submitted: 141
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 ms_handle_reset con 0x55cbade16800 session 0x55cbb2c1a000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 508657664 unmapped: 73916416 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 ms_handle_reset con 0x55cbaeec3c00 session 0x55cbaee652c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 ms_handle_reset con 0x55cbaeee8800 session 0x55cbaee81e00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 509607936 unmapped: 72966144 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 ms_handle_reset con 0x55cbaf72b000 session 0x55cbade8ba40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 ms_handle_reset con 0x55cbacfab800 session 0x55cbacdc0960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 ms_handle_reset con 0x55cbade16800 session 0x55cbad893c20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 heartbeat osd_stat(store_statfs(0x19a3d5000/0x0/0x1bfc00000, data 0x4e4bd0e/0x5050000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x207cf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5521770 data_alloc: 251658240 data_used: 40062976
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 510099456 unmapped: 72474624 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 ms_handle_reset con 0x55cbaeee8800 session 0x55cbb2c1ba40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 ms_handle_reset con 0x55cbaf72b000 session 0x55cbaee645a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 ms_handle_reset con 0x55cbaeec3c00 session 0x55cbafb814a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 510099456 unmapped: 72474624 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 heartbeat osd_stat(store_statfs(0x19a01e000/0x0/0x1bfc00000, data 0x520ccd5/0x540f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x207cf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 ms_handle_reset con 0x55cbb315f800 session 0x55cbacc354a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 ms_handle_reset con 0x55cbbe337800 session 0x55cbacf72f00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 ms_handle_reset con 0x55cbb315f800 session 0x55cbb2c1b860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 ms_handle_reset con 0x55cbade16800 session 0x55cbaf5a5a40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 510337024 unmapped: 72237056 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 510337024 unmapped: 72237056 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 510533632 unmapped: 72040448 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5658301 data_alloc: 251658240 data_used: 40062976
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 510533632 unmapped: 72040448 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 heartbeat osd_stat(store_statfs(0x198ed2000/0x0/0x1bfc00000, data 0x6357d47/0x655c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x207cf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 heartbeat osd_stat(store_statfs(0x198ed2000/0x0/0x1bfc00000, data 0x6357d47/0x655c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x207cf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 510533632 unmapped: 72040448 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 510541824 unmapped: 72032256 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 510541824 unmapped: 72032256 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 heartbeat osd_stat(store_statfs(0x198ed2000/0x0/0x1bfc00000, data 0x6357d47/0x655c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x207cf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 510541824 unmapped: 72032256 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5658301 data_alloc: 251658240 data_used: 40062976
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.302827835s of 12.173152924s, submitted: 128
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 510787584 unmapped: 71786496 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 ms_handle_reset con 0x55cbaeec3c00 session 0x55cbadb09a40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 ms_handle_reset con 0x55cbaeee8800 session 0x55cbb0b42780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 514154496 unmapped: 68419584 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 ms_handle_reset con 0x55cbaeec3c00 session 0x55cbacdc1680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #57. Immutable memtables: 13.
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 515637248 unmapped: 66936832 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 518774784 unmapped: 63799296 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 heartbeat osd_stat(store_statfs(0x197269000/0x0/0x1bfc00000, data 0x6a0cd47/0x6c11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 518774784 unmapped: 63799296 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 ms_handle_reset con 0x55cbb315f800 session 0x55cbafb814a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5805633 data_alloc: 268435456 data_used: 51154944
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 518774784 unmapped: 63799296 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 518774784 unmapped: 63799296 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 ms_handle_reset con 0x55cbbe337800 session 0x55cbaee645a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 518782976 unmapped: 63791104 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 518782976 unmapped: 63791104 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 heartbeat osd_stat(store_statfs(0x197267000/0x0/0x1bfc00000, data 0x6a11d57/0x6c17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 ms_handle_reset con 0x55cbb08a4c00 session 0x55cbafb82b40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 ms_handle_reset con 0x55cbafafb800 session 0x55cbaee81e00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 518791168 unmapped: 63782912 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5868846 data_alloc: 268435456 data_used: 57823232
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 523894784 unmapped: 58679296 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.098799706s of 10.459867477s, submitted: 104
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 528244736 unmapped: 54329344 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 530956288 unmapped: 51617792 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 530956288 unmapped: 51617792 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 heartbeat osd_stat(store_statfs(0x197241000/0x0/0x1bfc00000, data 0x6a37d57/0x6c3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 531013632 unmapped: 51560448 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5941322 data_alloc: 285212672 data_used: 67854336
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 ms_handle_reset con 0x55cbaddbf800 session 0x55cbaee65680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 533438464 unmapped: 49135616 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 ms_handle_reset con 0x55cbbe337800 session 0x55cbad92d0e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 ms_handle_reset con 0x55cbb68f8800 session 0x55cbaeb51a40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 535273472 unmapped: 47300608 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 handle_osd_map epochs [386,387], i have 386, src has [1,387]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 386 handle_osd_map epochs [387,387], i have 387, src has [1,387]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 387 ms_handle_reset con 0x55cbb0675c00 session 0x55cbaee81680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 387 ms_handle_reset con 0x55cbaf71b000 session 0x55cbaee65860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 387 ms_handle_reset con 0x55cbb5fe2000 session 0x55cbb0b42780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 535461888 unmapped: 47112192 heap: 582574080 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 561332224 unmapped: 32595968 heap: 593928192 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 387 heartbeat osd_stat(store_statfs(0x19584e000/0x0/0x1bfc00000, data 0x8428a7a/0x862f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1,1,3,2,2])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 387 ms_handle_reset con 0x55cbaddbf800 session 0x55cbacdbdc20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 538312704 unmapped: 59465728 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 387 ms_handle_reset con 0x55cbaf71b000 session 0x55cbacdc1680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 387 ms_handle_reset con 0x55cbb0675c00 session 0x55cbad893680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6254599 data_alloc: 285212672 data_used: 76455936
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 539467776 unmapped: 58310656 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 5.993648529s of 10.002191544s, submitted: 183
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 387 handle_osd_map epochs [387,388], i have 387, src has [1,388]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 388 ms_handle_reset con 0x55cbb68f8800 session 0x55cbad03b860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 388 ms_handle_reset con 0x55cbaddc0400 session 0x55cbb2c1a3c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 388 ms_handle_reset con 0x55cbb79ad000 session 0x55cbafaf14a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 539951104 unmapped: 57827328 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 547422208 unmapped: 50356224 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 388 heartbeat osd_stat(store_statfs(0x1947b7000/0x0/0x1bfc00000, data 0x94b87f1/0x96c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,4,9])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 388 heartbeat osd_stat(store_statfs(0x1947b7000/0x0/0x1bfc00000, data 0x94b87f1/0x96c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [0,0,0,0,0,0,1,0,0,0,0,13])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 388 handle_osd_map epochs [388,389], i have 388, src has [1,389]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 389 ms_handle_reset con 0x55cbb0675c00 session 0x55cbafb82d20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 389 ms_handle_reset con 0x55cbaddbf800 session 0x55cbafaf1e00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 389 ms_handle_reset con 0x55cbb5fe2000 session 0x55cbade8a000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 389 ms_handle_reset con 0x55cbb5fe2000 session 0x55cbaf7bcf00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 536346624 unmapped: 61431808 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 389 ms_handle_reset con 0x55cbaddbf800 session 0x55cbac60e780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 389 ms_handle_reset con 0x55cbaddc0400 session 0x55cbafb2f0e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 389 heartbeat osd_stat(store_statfs(0x1953e4000/0x0/0x1bfc00000, data 0x8572530/0x877b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [0,0,0,0,0,1])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 538722304 unmapped: 59056128 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 389 ms_handle_reset con 0x55cbaf71b000 session 0x55cbacc35c20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 389 ms_handle_reset con 0x55cbb0675c00 session 0x55cbac60fe00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6180408 data_alloc: 285212672 data_used: 71872512
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 389 ms_handle_reset con 0x55cbaddbf800 session 0x55cbafb2f860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 538722304 unmapped: 59056128 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 389 ms_handle_reset con 0x55cbaddc0400 session 0x55cbad03ba40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 538722304 unmapped: 59056128 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 389 heartbeat osd_stat(store_statfs(0x1956f5000/0x0/0x1bfc00000, data 0x858150d/0x8789000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 389 handle_osd_map epochs [389,390], i have 389, src has [1,390]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 538722304 unmapped: 59056128 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 390 ms_handle_reset con 0x55cbaf71b000 session 0x55cbaf40cb40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 390 ms_handle_reset con 0x55cbb5fe2000 session 0x55cbafb0de00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 538738688 unmapped: 59039744 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 390 heartbeat osd_stat(store_statfs(0x1956d9000/0x0/0x1bfc00000, data 0x859a116/0x87a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 390 handle_osd_map epochs [390,391], i have 390, src has [1,391]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 390 handle_osd_map epochs [391,391], i have 391, src has [1,391]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 391 ms_handle_reset con 0x55cbbe337800 session 0x55cbacdbde00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 538787840 unmapped: 58990592 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 391 ms_handle_reset con 0x55cbaddbf800 session 0x55cbacffa1e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 391 ms_handle_reset con 0x55cbb79ad000 session 0x55cbadd8f0e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 391 ms_handle_reset con 0x55cbaf72b000 session 0x55cbacffa960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 391 ms_handle_reset con 0x55cbaddc0400 session 0x55cbacf723c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5962050 data_alloc: 268435456 data_used: 53080064
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 534847488 unmapped: 62930944 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 391 ms_handle_reset con 0x55cbade17000 session 0x55cbacffb4a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 5.128280640s of 10.215465546s, submitted: 330
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 391 ms_handle_reset con 0x55cbaddbf800 session 0x55cbadab2b40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 534888448 unmapped: 62889984 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 391 heartbeat osd_stat(store_statfs(0x1966aa000/0x0/0x1bfc00000, data 0x75cbff8/0x77d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 534888448 unmapped: 62889984 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 391 ms_handle_reset con 0x55cbb315f800 session 0x55cbacf2a780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 536576000 unmapped: 61202432 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 391 ms_handle_reset con 0x55cbaddc0400 session 0x55cbadab3a40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 536576000 unmapped: 61202432 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 391 ms_handle_reset con 0x55cbade17000 session 0x55cbad92d4a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6013138 data_alloc: 268435456 data_used: 60469248
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 536576000 unmapped: 61202432 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 391 heartbeat osd_stat(store_statfs(0x1968ac000/0x0/0x1bfc00000, data 0x73cadf8/0x75d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 536576000 unmapped: 61202432 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 391 handle_osd_map epochs [391,392], i have 391, src has [1,392]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 536576000 unmapped: 61202432 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 536584192 unmapped: 61194240 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 392 heartbeat osd_stat(store_statfs(0x1968a3000/0x0/0x1bfc00000, data 0x73d1a01/0x75da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 536584192 unmapped: 61194240 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 392 ms_handle_reset con 0x55cbaf72b000 session 0x55cbacf2b0e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 392 ms_handle_reset con 0x55cbaddbf800 session 0x55cbadb085a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 392 ms_handle_reset con 0x55cbaddc0400 session 0x55cbafb2e1e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6016892 data_alloc: 268435456 data_used: 60477440
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 536584192 unmapped: 61194240 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 392 ms_handle_reset con 0x55cbade17000 session 0x55cbad8932c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 392 ms_handle_reset con 0x55cbb79ad000 session 0x55cbb2c1b2c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 392 ms_handle_reset con 0x55cbb315f800 session 0x55cbacf2a3c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 392 ms_handle_reset con 0x55cbaddbf800 session 0x55cbacdc0000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 392 ms_handle_reset con 0x55cbaddc0400 session 0x55cbadd6a960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 392 ms_handle_reset con 0x55cbade17000 session 0x55cbaee803c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 392 ms_handle_reset con 0x55cbb79ad000 session 0x55cbafb2f2c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.930759430s of 10.084986687s, submitted: 47
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 392 ms_handle_reset con 0x55cbaddbfc00 session 0x55cbad03be00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 551075840 unmapped: 46702592 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 551075840 unmapped: 46702592 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 392 ms_handle_reset con 0x55cbaddbfc00 session 0x55cbaf7bc780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 551444480 unmapped: 46333952 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 553492480 unmapped: 44285952 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 392 heartbeat osd_stat(store_statfs(0x1952db000/0x0/0x1bfc00000, data 0x8998a32/0x8ba3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [0,0,0,0,1])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6275807 data_alloc: 301989888 data_used: 83369984
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 392 handle_osd_map epochs [392,393], i have 392, src has [1,393]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 550305792 unmapped: 47472640 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 393 ms_handle_reset con 0x55cbb79ad000 session 0x55cbadd8e960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 544849920 unmapped: 52928512 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 393 heartbeat osd_stat(store_statfs(0x196598000/0x0/0x1bfc00000, data 0x76da747/0x78e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 544849920 unmapped: 52928512 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 544866304 unmapped: 52912128 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 544866304 unmapped: 52912128 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6115293 data_alloc: 285212672 data_used: 67870720
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 544866304 unmapped: 52912128 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 544866304 unmapped: 52912128 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 544866304 unmapped: 52912128 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 393 heartbeat osd_stat(store_statfs(0x196594000/0x0/0x1bfc00000, data 0x76df747/0x78ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 544866304 unmapped: 52912128 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 544866304 unmapped: 52912128 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.009008408s of 13.678808212s, submitted: 107
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6114365 data_alloc: 285212672 data_used: 67874816
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 544866304 unmapped: 52912128 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 544866304 unmapped: 52912128 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 393 heartbeat osd_stat(store_statfs(0x196592000/0x0/0x1bfc00000, data 0x76e0747/0x78eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 393 handle_osd_map epochs [393,394], i have 393, src has [1,394]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 393 handle_osd_map epochs [394,394], i have 394, src has [1,394]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545087488 unmapped: 52690944 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 394 heartbeat osd_stat(store_statfs(0x196589000/0x0/0x1bfc00000, data 0x76e8350/0x78f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545292288 unmapped: 52486144 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545292288 unmapped: 52486144 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 394 heartbeat osd_stat(store_statfs(0x196589000/0x0/0x1bfc00000, data 0x76e8350/0x78f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6153235 data_alloc: 285212672 data_used: 70410240
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545308672 unmapped: 52469760 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545308672 unmapped: 52469760 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 394 heartbeat osd_stat(store_statfs(0x196589000/0x0/0x1bfc00000, data 0x76e8350/0x78f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545349632 unmapped: 52428800 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545382400 unmapped: 52396032 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 394 heartbeat osd_stat(store_statfs(0x196585000/0x0/0x1bfc00000, data 0x76ed350/0x78f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 394 heartbeat osd_stat(store_statfs(0x196585000/0x0/0x1bfc00000, data 0x76ed350/0x78f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545382400 unmapped: 52396032 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6149047 data_alloc: 285212672 data_used: 70406144
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545382400 unmapped: 52396032 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545382400 unmapped: 52396032 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.206936836s of 12.290732384s, submitted: 43
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545628160 unmapped: 52150272 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545660928 unmapped: 52117504 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbaddbf800 session 0x55cbaeb51a40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 394 heartbeat osd_stat(store_statfs(0x19637e000/0x0/0x1bfc00000, data 0x78f1550/0x7afe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545660928 unmapped: 52117504 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbb6a90c00 session 0x55cbaeb503c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbad910800 session 0x55cbacdc1860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6204428 data_alloc: 285212672 data_used: 70393856
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545767424 unmapped: 52011008 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbad910800 session 0x55cbadb09860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545775616 unmapped: 52002816 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545783808 unmapped: 51994624 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbb5fe3400 session 0x55cbadb090e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545783808 unmapped: 51994624 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 394 heartbeat osd_stat(store_statfs(0x1960c5000/0x0/0x1bfc00000, data 0x7bab2ee/0x7db6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbaddbf800 session 0x55cbaf7bcb40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbaddbfc00 session 0x55cbafb825a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545783808 unmapped: 51994624 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbb6a90c00 session 0x55cbadd8f2c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbad910800 session 0x55cbaf799c20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6190146 data_alloc: 285212672 data_used: 70373376
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545783808 unmapped: 51994624 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbaddbf800 session 0x55cbacf2af00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbaddbfc00 session 0x55cbafb82b40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545783808 unmapped: 51994624 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.875689507s of 10.106302261s, submitted: 86
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545783808 unmapped: 51994624 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 548405248 unmapped: 49373184 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 394 heartbeat osd_stat(store_statfs(0x1960c4000/0x0/0x1bfc00000, data 0x7bae311/0x7dba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 548413440 unmapped: 49364992 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbaddc0400 session 0x55cbacf730e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbade17000 session 0x55cbafaf0b40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6045295 data_alloc: 285212672 data_used: 65011712
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbade17000 session 0x55cbbadf52c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545021952 unmapped: 52756480 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545021952 unmapped: 52756480 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545021952 unmapped: 52756480 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545021952 unmapped: 52756480 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbaf71b000 session 0x55cbadd8f0e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbb5fe2000 session 0x55cbafb0de00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 394 heartbeat osd_stat(store_statfs(0x196d27000/0x0/0x1bfc00000, data 0x6f4c2e0/0x7156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545046528 unmapped: 52731904 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbaddbfc00 session 0x55cbaf7985a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbaddbf800 session 0x55cbaee64f00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbaddc0400 session 0x55cbaf7bc5a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5848268 data_alloc: 268435456 data_used: 56311808
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 534716416 unmapped: 63062016 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbaddbfc00 session 0x55cbaf40c5a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbade17000 session 0x55cbacf1cd20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 534716416 unmapped: 63062016 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 394 heartbeat osd_stat(store_statfs(0x197e95000/0x0/0x1bfc00000, data 0x5ddf2e0/0x5fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 534716416 unmapped: 63062016 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 534716416 unmapped: 63062016 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 534716416 unmapped: 63062016 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 394 heartbeat osd_stat(store_statfs(0x197e96000/0x0/0x1bfc00000, data 0x5ddf27e/0x5fe8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.425072670s of 12.876208305s, submitted: 82
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5909247 data_alloc: 268435456 data_used: 56352768
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 538656768 unmapped: 59121664 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 394 heartbeat osd_stat(store_statfs(0x197c6f000/0x0/0x1bfc00000, data 0x600627e/0x620f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [0,0,1])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 538656768 unmapped: 59121664 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 539738112 unmapped: 58040320 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 539738112 unmapped: 58040320 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbbe336400 session 0x55cbaf7990e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbad0de400 session 0x55cbaf40c960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 394 heartbeat osd_stat(store_statfs(0x197723000/0x0/0x1bfc00000, data 0x654927e/0x6752000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbaddbfc00 session 0x55cbb0b421e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 529760256 unmapped: 68018176 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5676237 data_alloc: 251658240 data_used: 46186496
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 529760256 unmapped: 68018176 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 529760256 unmapped: 68018176 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 394 heartbeat osd_stat(store_statfs(0x198b65000/0x0/0x1bfc00000, data 0x511124b/0x5318000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbaddc0400 session 0x55cbac60f680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 529768448 unmapped: 68009984 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 394 heartbeat osd_stat(store_statfs(0x198b63000/0x0/0x1bfc00000, data 0x511224b/0x5319000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbad910800 session 0x55cbaf799c20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 529776640 unmapped: 68001792 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 394 handle_osd_map epochs [394,395], i have 394, src has [1,395]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 395 ms_handle_reset con 0x55cbade17000 session 0x55cbacfa2780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 395 ms_handle_reset con 0x55cbaf71b000 session 0x55cbbadf4f00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 529776640 unmapped: 68001792 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 395 ms_handle_reset con 0x55cbbe336400 session 0x55cbacfa3860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5680674 data_alloc: 251658240 data_used: 46305280
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.071531296s of 10.524563789s, submitted: 143
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 395 ms_handle_reset con 0x55cbaf71b000 session 0x55cbad92c780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 529776640 unmapped: 68001792 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 395 handle_osd_map epochs [395,396], i have 395, src has [1,396]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 396 ms_handle_reset con 0x55cbad910800 session 0x55cbaf40cd20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 530833408 unmapped: 66945024 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 396 ms_handle_reset con 0x55cbaddbf800 session 0x55cbaf54d860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 396 ms_handle_reset con 0x55cbaddbfc00 session 0x55cbacdbde00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 396 ms_handle_reset con 0x55cbaddbfc00 session 0x55cbaf798000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 396 heartbeat osd_stat(store_statfs(0x198b57000/0x0/0x1bfc00000, data 0x5119c73/0x5321000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 530833408 unmapped: 66945024 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 396 ms_handle_reset con 0x55cbad910800 session 0x55cbad892f00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 396 ms_handle_reset con 0x55cbaddbf800 session 0x55cbacc35c20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 530833408 unmapped: 66945024 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 530833408 unmapped: 66945024 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5684084 data_alloc: 251658240 data_used: 46305280
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 530833408 unmapped: 66945024 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 396 heartbeat osd_stat(store_statfs(0x198b57000/0x0/0x1bfc00000, data 0x5119c73/0x5321000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 530833408 unmapped: 66945024 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 396 handle_osd_map epochs [396,397], i have 396, src has [1,397]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 530841600 unmapped: 66936832 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 heartbeat osd_stat(store_statfs(0x198b59000/0x0/0x1bfc00000, data 0x511b87c/0x5324000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 530849792 unmapped: 66928640 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbaf71b000 session 0x55cbafb82d20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 heartbeat osd_stat(store_statfs(0x198b59000/0x0/0x1bfc00000, data 0x511b87c/0x5324000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 530849792 unmapped: 66928640 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5687156 data_alloc: 251658240 data_used: 46305280
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.944666862s of 10.055811882s, submitted: 32
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbbe336400 session 0x55cbafaf14a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 530857984 unmapped: 66920448 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 530857984 unmapped: 66920448 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbad910800 session 0x55cbacdc0f00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbaddbf800 session 0x55cbad03b680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 531988480 unmapped: 65789952 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbaeec3c00 session 0x55cbacffa5a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbb08a4c00 session 0x55cbaf7981e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 heartbeat osd_stat(store_statfs(0x19838e000/0x0/0x1bfc00000, data 0x58e787c/0x5af0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 531996672 unmapped: 65781760 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbaddbfc00 session 0x55cbafb901e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 532004864 unmapped: 65773568 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 heartbeat osd_stat(store_statfs(0x1983b2000/0x0/0x1bfc00000, data 0x58c387c/0x5acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbad910800 session 0x55cbacc9e5a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5739375 data_alloc: 251658240 data_used: 46194688
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 532004864 unmapped: 65773568 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbaddbf800 session 0x55cbafaf0960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbaeec3c00 session 0x55cbb2c1b680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbb08a4c00 session 0x55cbaf7bd860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbaf71b000 session 0x55cbad92cf00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbaf71b000 session 0x55cbb2c1be00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbaddbf800 session 0x55cbac60eb40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 532291584 unmapped: 65486848 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbb1690000 session 0x55cbacdc0960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbaeec3c00 session 0x55cbadb09c20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbad910800 session 0x55cbacdbd0e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbb6a91400 session 0x55cbafb82780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbad910800 session 0x55cbafaf0960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbaddbf800 session 0x55cbafb901e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbaeec3c00 session 0x55cbad03b680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbaf71b000 session 0x55cbad892f00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbaf71b000 session 0x55cbaf54d860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 532307968 unmapped: 65470464 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 532316160 unmapped: 65462272 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbaeec3c00 session 0x55cbaf7bcb40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 heartbeat osd_stat(store_statfs(0x197e52000/0x0/0x1bfc00000, data 0x5e218ee/0x602c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbb6a91400 session 0x55cbadb090e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 532316160 unmapped: 65462272 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbaeee8800 session 0x55cbaeb51860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbade16800 session 0x55cbadd8f680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbaeec3c00 session 0x55cbadb09860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5852778 data_alloc: 268435456 data_used: 54374400
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbaeee8800 session 0x55cbacdc1860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.583312988s of 10.015307426s, submitted: 97
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbade16800 session 0x55cbaf7992c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 532324352 unmapped: 65454080 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 532324352 unmapped: 65454080 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 532332544 unmapped: 65445888 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbb1690000 session 0x55cbacdc0d20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 heartbeat osd_stat(store_statfs(0x197e75000/0x0/0x1bfc00000, data 0x5dfd8fe/0x6009000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [0,1])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 529317888 unmapped: 68460544 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbb08a4c00 session 0x55cbafaf1e00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 529317888 unmapped: 68460544 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbb08a4c00 session 0x55cbadd8f0e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbade16800 session 0x55cbaee64f00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5552550 data_alloc: 251658240 data_used: 39444480
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 529317888 unmapped: 68460544 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbaf71b000 session 0x55cbacc354a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbb6a91400 session 0x55cbadd8e960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 heartbeat osd_stat(store_statfs(0x198cd3000/0x0/0x1bfc00000, data 0x4f9e88c/0x51a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbaeee8800 session 0x55cbaf7bc780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 heartbeat osd_stat(store_statfs(0x1993a0000/0x0/0x1bfc00000, data 0x44d188c/0x46db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbade16800 session 0x55cbad03be00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbaeec3c00 session 0x55cbacdbde00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 517742592 unmapped: 80035840 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 517742592 unmapped: 80035840 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 517742592 unmapped: 80035840 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 handle_osd_map epochs [397,398], i have 397, src has [1,398]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 397 handle_osd_map epochs [398,398], i have 398, src has [1,398]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 518529024 unmapped: 79249408 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 398 heartbeat osd_stat(store_statfs(0x19b0d5000/0x0/0x1bfc00000, data 0x28a17a8/0x2aa7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 398 ms_handle_reset con 0x55cbaf71b000 session 0x55cbafb821e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5269632 data_alloc: 234881024 data_used: 24211456
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 517472256 unmapped: 80306176 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 398 heartbeat osd_stat(store_statfs(0x19aef5000/0x0/0x1bfc00000, data 0x2d8250f/0x2f88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.311944008s of 10.724480629s, submitted: 151
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 517472256 unmapped: 80306176 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 398 heartbeat osd_stat(store_statfs(0x19aee8000/0x0/0x1bfc00000, data 0x2d8f50f/0x2f95000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 517472256 unmapped: 80306176 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 517472256 unmapped: 80306176 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 517472256 unmapped: 80306176 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 398 heartbeat osd_stat(store_statfs(0x19aee8000/0x0/0x1bfc00000, data 0x2d8f50f/0x2f95000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5278122 data_alloc: 234881024 data_used: 24236032
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 517472256 unmapped: 80306176 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 398 ms_handle_reset con 0x55cbb08a4c00 session 0x55cbacf730e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 398 ms_handle_reset con 0x55cbb6a91400 session 0x55cbaf54c780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 398 ms_handle_reset con 0x55cbb6a91400 session 0x55cbaf54c960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 398 ms_handle_reset con 0x55cbade16800 session 0x55cbaf54dc20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 398 ms_handle_reset con 0x55cbaeec3c00 session 0x55cbacf730e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 398 ms_handle_reset con 0x55cbaf71b000 session 0x55cbacc354a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 398 ms_handle_reset con 0x55cbb08a4c00 session 0x55cbaee64f00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 517931008 unmapped: 79847424 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 398 ms_handle_reset con 0x55cbade16800 session 0x55cbafaf1e00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 398 ms_handle_reset con 0x55cbaeec3c00 session 0x55cbacdc0d20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 398 handle_osd_map epochs [398,399], i have 398, src has [1,399]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 517931008 unmapped: 79847424 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x19a2b6000/0x0/0x1bfc00000, data 0x39bf128/0x3bc7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 517931008 unmapped: 79847424 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 517931008 unmapped: 79847424 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaf71b000 session 0x55cbacdc1860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb6a91400 session 0x55cbadb09860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb1690000 session 0x55cbadd8f680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5384906 data_alloc: 234881024 data_used: 24244224
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeec3c00 session 0x55cbacf2a3c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 517799936 unmapped: 79978496 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaf71b000 session 0x55cbad8932c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb6a91400 session 0x55cbacf2b0e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaddc0400 session 0x55cbafb0cf00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbade17000 session 0x55cbaf7985a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaddc0400 session 0x55cbad03ab40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 517857280 unmapped: 79921152 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbade17000 session 0x55cbafaf1680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.473110199s of 10.862102509s, submitted: 99
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbade16800 session 0x55cbaeb51860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeec3c00 session 0x55cbad892f00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 517857280 unmapped: 79921152 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaf71b000 session 0x55cbad03b680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaf71b000 session 0x55cbafaf0960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x199424000/0x0/0x1bfc00000, data 0x485019a/0x4a5a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 518004736 unmapped: 79773696 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaddbf800 session 0x55cbbadf4f00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbad910800 session 0x55cbad92c780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 518004736 unmapped: 79773696 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbade17000 session 0x55cbb0b425a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5410732 data_alloc: 234881024 data_used: 22941696
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeec3c00 session 0x55cbac60e780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 511631360 unmapped: 86147072 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbad910800 session 0x55cbaee643c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 511631360 unmapped: 86147072 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaddbf800 session 0x55cbaf54cd20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaddc0400 session 0x55cbb2c1be00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbade16800 session 0x55cbaee801e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 511631360 unmapped: 86147072 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x19a0b5000/0x0/0x1bfc00000, data 0x3bbd1cd/0x3dc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 511639552 unmapped: 86138880 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbade17000 session 0x55cbafb0c5a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaf71b000 session 0x55cbafb0cd20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 510623744 unmapped: 87154688 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5263850 data_alloc: 234881024 data_used: 15843328
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x19ac67000/0x0/0x1bfc00000, data 0x2f6b1ad/0x3175000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 510623744 unmapped: 87154688 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 510607360 unmapped: 87171072 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 510484480 unmapped: 87293952 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb5fe3400 session 0x55cbaf7983c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb79ad000 session 0x55cbacffaf00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.106935501s of 11.330339432s, submitted: 72
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaddbf800 session 0x55cbadd8e000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 510492672 unmapped: 87285760 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 510492672 unmapped: 87285760 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5240513 data_alloc: 234881024 data_used: 26144768
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 510492672 unmapped: 87285760 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x19b92d000/0x0/0x1bfc00000, data 0x234817a/0x2550000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 510492672 unmapped: 87285760 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 510492672 unmapped: 87285760 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x19b92d000/0x0/0x1bfc00000, data 0x234817a/0x2550000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 510492672 unmapped: 87285760 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 510492672 unmapped: 87285760 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x19b92d000/0x0/0x1bfc00000, data 0x234817a/0x2550000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5240513 data_alloc: 234881024 data_used: 26144768
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 510492672 unmapped: 87285760 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 510492672 unmapped: 87285760 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 510492672 unmapped: 87285760 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.731653214s of 10.003818512s, submitted: 100
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 514678784 unmapped: 83099648 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 514760704 unmapped: 83017728 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x19abfd000/0x0/0x1bfc00000, data 0x2c6917a/0x2e71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5331677 data_alloc: 234881024 data_used: 27185152
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 514760704 unmapped: 83017728 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaddc0400 session 0x55cbaf7bcd20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaddbf800 session 0x55cbb0b42960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaf71b000 session 0x55cbb0b42b40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb5fe3400 session 0x55cbb2c1ab40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 526884864 unmapped: 74571776 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb79ad000 session 0x55cbaee81a40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbade16800 session 0x55cbbadf4960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaddbf800 session 0x55cbad92c000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaf71b000 session 0x55cbacc34d20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb5fe3400 session 0x55cbacf2a960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 515047424 unmapped: 86409216 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb79ad000 session 0x55cbaee643c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb6a91400 session 0x55cbacdc0d20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaddbf800 session 0x55cbadb09e00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaf71b000 session 0x55cbacf2be00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 515047424 unmapped: 86409216 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb5fe3400 session 0x55cbacdc1c20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x19967e000/0x0/0x1bfc00000, data 0x41e71dc/0x43f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb79ad000 session 0x55cbaee801e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 515244032 unmapped: 86212608 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb5fe2000 session 0x55cbb0b434a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5535977 data_alloc: 234881024 data_used: 27185152
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 519135232 unmapped: 82321408 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaf71b000 session 0x55cbb2c1b4a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaddbf800 session 0x55cbafb82780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb5fe3400 session 0x55cbafb2e1e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb79ad000 session 0x55cbb2c1a3c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb2c3d800 session 0x55cbacfdfe00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaddbf800 session 0x55cbaee805a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaf71b000 session 0x55cbaee64f00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 519233536 unmapped: 82223104 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb5fe3400 session 0x55cbafb0cd20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb79ad000 session 0x55cbafb0d0e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeec3400 session 0x55cbad03be00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaddbf800 session 0x55cbacf72000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaf71b000 session 0x55cbadd8fe00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb0550c00 session 0x55cbafb834a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb1dd6800 session 0x55cbaf40d860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 515612672 unmapped: 85843968 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 516751360 unmapped: 84705280 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.082863808s of 10.739765167s, submitted: 171
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb7cc5000 session 0x55cbb0b42960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x1984e9000/0x0/0x1bfc00000, data 0x5379271/0x5585000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 516915200 unmapped: 84541440 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5799410 data_alloc: 251658240 data_used: 42762240
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 518610944 unmapped: 82845696 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522133504 unmapped: 79323136 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaf71b000 session 0x55cbaf40cf00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522133504 unmapped: 79323136 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb1dd6800 session 0x55cbbadf45a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x1984e9000/0x0/0x1bfc00000, data 0x5379271/0x5585000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522133504 unmapped: 79323136 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeee4000 session 0x55cbaf7bd860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 525082624 unmapped: 76374016 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeee6800 session 0x55cbacffa5a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeab1400 session 0x55cbafb0c780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeab1400 session 0x55cbafb82d20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5875503 data_alloc: 268435456 data_used: 52334592
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 525254656 unmapped: 76201984 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaddc0800 session 0x55cbb0b42780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb0550c00 session 0x55cbbadf4d20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 525271040 unmapped: 76185600 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaf71b000 session 0x55cbaf799c20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x1985ab000/0x0/0x1bfc00000, data 0x4fb52a4/0x51c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 527613952 unmapped: 73842688 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 527613952 unmapped: 73842688 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.840907097s of 10.001074791s, submitted: 46
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 536084480 unmapped: 65372160 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6032979 data_alloc: 268435456 data_used: 60067840
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x19781f000/0x0/0x1bfc00000, data 0x603d2a4/0x624b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 538886144 unmapped: 62570496 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 538902528 unmapped: 62554112 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x19777f000/0x0/0x1bfc00000, data 0x60cb2a4/0x62d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 544604160 unmapped: 56852480 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaddbf800 session 0x55cbb2c1ab40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545333248 unmapped: 56123392 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaddbf800 session 0x55cbaee81c20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545333248 unmapped: 56123392 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5893419 data_alloc: 251658240 data_used: 49594368
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545333248 unmapped: 56123392 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545333248 unmapped: 56123392 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb5fe3400 session 0x55cbaf54d860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb79ad000 session 0x55cbafb90960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545333248 unmapped: 56123392 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x19816d000/0x0/0x1bfc00000, data 0x56f32a4/0x5901000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [0,0,1])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeab1400 session 0x55cbaf54cf00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x19816d000/0x0/0x1bfc00000, data 0x56f32a4/0x5901000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [0,0,0,0,1])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 536043520 unmapped: 65413120 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.705724716s of 10.454721451s, submitted: 370
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 535977984 unmapped: 65478656 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x199e3b000/0x0/0x1bfc00000, data 0x3a2720f/0x3c32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaddc0800 session 0x55cbaf5a4d20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5545662 data_alloc: 234881024 data_used: 30437376
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 537288704 unmapped: 64167936 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x1997f9000/0x0/0x1bfc00000, data 0x406120f/0x426c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbade17000 session 0x55cbadab3860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbad910800 session 0x55cbafb0c5a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 540704768 unmapped: 60751872 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaddbf800 session 0x55cbad92c960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522182656 unmapped: 79273984 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x199950000/0x0/0x1bfc00000, data 0x2d7516a/0x2f7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522190848 unmapped: 79265792 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522190848 unmapped: 79265792 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x19994a000/0x0/0x1bfc00000, data 0x2d7b16a/0x2f82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5360857 data_alloc: 234881024 data_used: 22663168
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522199040 unmapped: 79257600 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaddc0800 session 0x55cbac610b40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeab1400 session 0x55cbaf5a4b40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522207232 unmapped: 79249408 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522207232 unmapped: 79249408 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522207232 unmapped: 79249408 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522207232 unmapped: 79249408 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5358729 data_alloc: 234881024 data_used: 22667264
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522207232 unmapped: 79249408 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x199949000/0x0/0x1bfc00000, data 0x2d7e16a/0x2f85000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522207232 unmapped: 79249408 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.413434029s of 13.200506210s, submitted: 176
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeee4000 session 0x55cbb0b42b40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeee6800 session 0x55cbbadf5c20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeab1400 session 0x55cbb2c1ad20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522231808 unmapped: 79224832 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522231808 unmapped: 79224832 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522231808 unmapped: 79224832 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x19a260000/0x0/0x1bfc00000, data 0x246815a/0x266e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5255618 data_alloc: 218103808 data_used: 18554880
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522231808 unmapped: 79224832 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522231808 unmapped: 79224832 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522231808 unmapped: 79224832 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522231808 unmapped: 79224832 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522231808 unmapped: 79224832 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5255618 data_alloc: 218103808 data_used: 18554880
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522231808 unmapped: 79224832 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x19a260000/0x0/0x1bfc00000, data 0x246815a/0x266e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522231808 unmapped: 79224832 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbad910800 session 0x55cbacdbd2c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaddbf800 session 0x55cbbadf4960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbad910800 session 0x55cbad92d2c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeab1400 session 0x55cbacdc1c20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.313246727s of 10.420093536s, submitted: 40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 521584640 unmapped: 79872000 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeee4000 session 0x55cbb2c1a3c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 519397376 unmapped: 82059264 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 519397376 unmapped: 82059264 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5283884 data_alloc: 218103808 data_used: 18554880
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x199f8c000/0x0/0x1bfc00000, data 0x273c15a/0x2942000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 519397376 unmapped: 82059264 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x199f8c000/0x0/0x1bfc00000, data 0x273c15a/0x2942000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 519397376 unmapped: 82059264 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeee6800 session 0x55cbaee64d20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaddc0800 session 0x55cbaf7983c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 519405568 unmapped: 82051072 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbad910800 session 0x55cbac60eb40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeab1400 session 0x55cbaf54c1e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 519405568 unmapped: 82051072 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 519405568 unmapped: 82051072 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5294522 data_alloc: 218103808 data_used: 19697664
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 519397376 unmapped: 82059264 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 519397376 unmapped: 82059264 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x199f8b000/0x0/0x1bfc00000, data 0x273c16a/0x2943000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 519397376 unmapped: 82059264 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.298153877s of 10.742022514s, submitted: 11
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 519397376 unmapped: 82059264 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 519397376 unmapped: 82059264 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5306910 data_alloc: 218103808 data_used: 21381120
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 519397376 unmapped: 82059264 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 519397376 unmapped: 82059264 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 519397376 unmapped: 82059264 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x199f89000/0x0/0x1bfc00000, data 0x273d16a/0x2944000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 519397376 unmapped: 82059264 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x199f89000/0x0/0x1bfc00000, data 0x273d16a/0x2944000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbade17000 session 0x55cbad893680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb5fe3400 session 0x55cbacdc01e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb79ad000 session 0x55cbaf798000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb79ad000 session 0x55cbacf64960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 519405568 unmapped: 82051072 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbad910800 session 0x55cbafb90960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbade17000 session 0x55cbacdc1c20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeab1400 session 0x55cbaf5a45a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5372099 data_alloc: 218103808 data_used: 21381120
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb5fe3400 session 0x55cbacf2a960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb5fe3400 session 0x55cbafb0d860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbad910800 session 0x55cbafb83a40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x1997e1000/0x0/0x1bfc00000, data 0x2ee41dc/0x30ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 519413760 unmapped: 82042880 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 524247040 unmapped: 77209600 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 524263424 unmapped: 77193216 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #58. Immutable memtables: 14.
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x198b74000/0x0/0x1bfc00000, data 0x3b501ec/0x3d5a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [1,0,0,1])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbade17000 session 0x55cbacdbcb40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 526614528 unmapped: 74842112 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeab1400 session 0x55cbacf1cd20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb79ad000 session 0x55cbafb834a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb79ad000 session 0x55cbacc34f00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbad910800 session 0x55cbacc9fa40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbade17000 session 0x55cbaf7bd0e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.667600632s of 11.097998619s, submitted: 137
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeab1400 session 0x55cbb0b43c20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 526442496 unmapped: 76627968 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5551963 data_alloc: 234881024 data_used: 22306816
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb5fe3400 session 0x55cbacc9f2c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 526442496 unmapped: 76627968 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbad910800 session 0x55cbadb090e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 526450688 unmapped: 76619776 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 526327808 unmapped: 76742656 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x1971a5000/0x0/0x1bfc00000, data 0x437e1fb/0x4589000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb79ad000 session 0x55cbafb2e1e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 526295040 unmapped: 76775424 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaf71b000 session 0x55cbac60e780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 526295040 unmapped: 76775424 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeee6800 session 0x55cbbadf4780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeee4000 session 0x55cbad92c780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5612472 data_alloc: 234881024 data_used: 29851648
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 527351808 unmapped: 75718656 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x19818d000/0x0/0x1bfc00000, data 0x33961fb/0x35a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 527360000 unmapped: 75710464 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeee4000 session 0x55cbafaf1860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbad910800 session 0x55cbadd6b2c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeee6800 session 0x55cbafaf1c20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 527360000 unmapped: 75710464 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 527360000 unmapped: 75710464 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb79ad000 session 0x55cbaf7bc3c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb0550c00 session 0x55cbaee652c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbad910800 session 0x55cbac60f680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeee4000 session 0x55cbac60fe00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 527360000 unmapped: 75710464 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.365715027s of 10.577359200s, submitted: 53
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeee6800 session 0x55cbafb82b40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb79ad000 session 0x55cbaf7bc3c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb68f9800 session 0x55cbafaf1860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbad910800 session 0x55cbbadf4780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5498166 data_alloc: 234881024 data_used: 26230784
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeee6800 session 0x55cbafb0d0e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 527523840 unmapped: 75546624 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb1dd6800 session 0x55cbacdbc960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 527572992 unmapped: 75497472 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x197dbb000/0x0/0x1bfc00000, data 0x376622e/0x3973000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x197dbb000/0x0/0x1bfc00000, data 0x376622e/0x3973000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 527581184 unmapped: 75489280 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb79ad000 session 0x55cbaee654a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 527581184 unmapped: 75489280 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbacbdd800 session 0x55cbaf799c20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbad910800 session 0x55cbaf799680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 529006592 unmapped: 74063872 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5406708 data_alloc: 234881024 data_used: 27930624
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 529006592 unmapped: 74063872 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeee6800 session 0x55cbafb0da40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x198b57000/0x0/0x1bfc00000, data 0x29be1a9/0x2bc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 529006592 unmapped: 74063872 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 530644992 unmapped: 72425472 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb79ad000 session 0x55cbbadf4f00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb1dd6800 session 0x55cbaf7bc960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 529047552 unmapped: 74022912 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x1984a5000/0x0/0x1bfc00000, data 0x30741dc/0x3281000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 529047552 unmapped: 74022912 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5478205 data_alloc: 234881024 data_used: 28155904
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 529055744 unmapped: 74014720 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 529055744 unmapped: 74014720 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.036070824s of 12.483406067s, submitted: 129
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 532774912 unmapped: 70295552 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x1984a5000/0x0/0x1bfc00000, data 0x30741dc/0x3281000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb08a5c00 session 0x55cbaee643c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 534011904 unmapped: 69058560 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 534323200 unmapped: 68747264 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5578405 data_alloc: 234881024 data_used: 33165312
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 534339584 unmapped: 68730880 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x197cf8000/0x0/0x1bfc00000, data 0x38231dc/0x3a30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 535912448 unmapped: 67158016 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 535912448 unmapped: 67158016 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 535920640 unmapped: 67149824 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 535920640 unmapped: 67149824 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5636853 data_alloc: 251658240 data_used: 39923712
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x197cdc000/0x0/0x1bfc00000, data 0x383d1dc/0x3a4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 535920640 unmapped: 67149824 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 535920640 unmapped: 67149824 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.068302155s of 10.354182243s, submitted: 105
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 535920640 unmapped: 67149824 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 535920640 unmapped: 67149824 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 535920640 unmapped: 67149824 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x197ce4000/0x0/0x1bfc00000, data 0x383d1dc/0x3a4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5631953 data_alloc: 251658240 data_used: 39940096
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 535945216 unmapped: 67125248 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 535945216 unmapped: 67125248 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x19793e000/0x0/0x1bfc00000, data 0x3be31dc/0x3df0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 535945216 unmapped: 67125248 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 541581312 unmapped: 63111168 heap: 604692480 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x19689b000/0x0/0x1bfc00000, data 0x4c861dc/0x4e93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [0,0,3,7])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 542744576 unmapped: 61947904 heap: 604692480 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeee6800 session 0x55cbacdc0960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5807829 data_alloc: 251658240 data_used: 41074688
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 542973952 unmapped: 61718528 heap: 604692480 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543145984 unmapped: 61546496 heap: 604692480 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x1967c1000/0x0/0x1bfc00000, data 0x4d601dc/0x4f6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543145984 unmapped: 61546496 heap: 604692480 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.174084663s of 10.545331955s, submitted: 135
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x1967c1000/0x0/0x1bfc00000, data 0x4d601dc/0x4f6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543219712 unmapped: 61472768 heap: 604692480 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543219712 unmapped: 61472768 heap: 604692480 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5849323 data_alloc: 251658240 data_used: 43442176
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb1dd6800 session 0x55cbaf5a4000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543219712 unmapped: 61472768 heap: 604692480 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 542916608 unmapped: 61775872 heap: 604692480 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 544317440 unmapped: 60375040 heap: 604692480 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x19679c000/0x0/0x1bfc00000, data 0x4d841ff/0x4f92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 544317440 unmapped: 60375040 heap: 604692480 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 544317440 unmapped: 60375040 heap: 604692480 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x19679c000/0x0/0x1bfc00000, data 0x4d841ff/0x4f92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5866080 data_alloc: 251658240 data_used: 46317568
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 544489472 unmapped: 60203008 heap: 604692480 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 544489472 unmapped: 60203008 heap: 604692480 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeee4800 session 0x55cbad893680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 544489472 unmapped: 60203008 heap: 604692480 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 399 handle_osd_map epochs [399,400], i have 399, src has [1,400]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.959457397s of 10.100363731s, submitted: 61
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 400 ms_handle_reset con 0x55cbb08a4800 session 0x55cbafb2ed20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 544505856 unmapped: 60186624 heap: 604692480 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 544555008 unmapped: 60137472 heap: 604692480 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 400 heartbeat osd_stat(store_statfs(0x196796000/0x0/0x1bfc00000, data 0x4d87f22/0x4f97000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5872762 data_alloc: 251658240 data_used: 46297088
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 400 ms_handle_reset con 0x55cbb0550400 session 0x55cbaf5a43c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 544555008 unmapped: 60137472 heap: 604692480 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 400 handle_osd_map epochs [400,401], i have 400, src has [1,401]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 401 ms_handle_reset con 0x55cbb0550400 session 0x55cbadd6b2c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 401 ms_handle_reset con 0x55cbaeee6800 session 0x55cbafb2f2c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 401 ms_handle_reset con 0x55cbaeee4800 session 0x55cbacf2b860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 401 ms_handle_reset con 0x55cbb1dd6800 session 0x55cbacdc1860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 401 ms_handle_reset con 0x55cbb08a4800 session 0x55cbacdc01e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 552828928 unmapped: 63340544 heap: 616169472 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 401 ms_handle_reset con 0x55cbb076f400 session 0x55cbacf2af00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 401 ms_handle_reset con 0x55cbaeee4800 session 0x55cbad03b680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 401 handle_osd_map epochs [401,402], i have 401, src has [1,402]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 402 handle_osd_map epochs [402,402], i have 402, src has [1,402]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 402 ms_handle_reset con 0x55cbaeee6800 session 0x55cbad8925a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 402 ms_handle_reset con 0x55cbb0550400 session 0x55cbaf7bd0e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 558546944 unmapped: 65331200 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 402 heartbeat osd_stat(store_statfs(0x194d4a000/0x0/0x1bfc00000, data 0x67cfa80/0x69e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [0,0,0,0,5])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 402 handle_osd_map epochs [402,403], i have 402, src has [1,403]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 403 ms_handle_reset con 0x55cbb6a92c00 session 0x55cbafb832c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 403 ms_handle_reset con 0x55cbb1dd6800 session 0x55cbb2c1ba40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 555393024 unmapped: 68485120 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 403 ms_handle_reset con 0x55cbaeee4800 session 0x55cbb2c1b680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 403 ms_handle_reset con 0x55cbaeee6800 session 0x55cbb2c1b2c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 403 ms_handle_reset con 0x55cbb0550400 session 0x55cbb2c1a960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 403 ms_handle_reset con 0x55cbb076f400 session 0x55cbb2c1b860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 403 ms_handle_reset con 0x55cbb076f400 session 0x55cbad0305a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 403 ms_handle_reset con 0x55cbaeee4800 session 0x55cbacf2b0e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 555409408 unmapped: 68468736 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6170750 data_alloc: 251658240 data_used: 53186560
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 556638208 unmapped: 67239936 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 403 ms_handle_reset con 0x55cbb0550400 session 0x55cbaf40dc20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 403 ms_handle_reset con 0x55cbaeee6800 session 0x55cbb0b42d20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 403 ms_handle_reset con 0x55cbb1dd6800 session 0x55cbb0b430e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 556638208 unmapped: 67239936 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 556638208 unmapped: 67239936 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.424963951s of 10.320842743s, submitted: 248
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 555974656 unmapped: 67903488 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 403 heartbeat osd_stat(store_statfs(0x19445d000/0x0/0x1bfc00000, data 0x70bb821/0x72d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 403 ms_handle_reset con 0x55cbb1dd6800 session 0x55cbafb0d2c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 403 ms_handle_reset con 0x55cbaeee4800 session 0x55cbafb0c960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 403 ms_handle_reset con 0x55cbaeee6800 session 0x55cbac60e960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 555974656 unmapped: 67903488 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 403 ms_handle_reset con 0x55cbb0550400 session 0x55cbacc9e5a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 403 ms_handle_reset con 0x55cbaeee4c00 session 0x55cbafb0cb40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 403 ms_handle_reset con 0x55cbb076f400 session 0x55cbad92c5a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 403 ms_handle_reset con 0x55cbaeee4800 session 0x55cbafaf0d20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 403 ms_handle_reset con 0x55cbaeee6800 session 0x55cbaf7bc000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6169690 data_alloc: 251658240 data_used: 53256192
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 403 ms_handle_reset con 0x55cbb0550400 session 0x55cbacfdfc20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 403 ms_handle_reset con 0x55cbb1dd6800 session 0x55cbaee81860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 403 ms_handle_reset con 0x55cbb1dd6800 session 0x55cbaee81680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 555974656 unmapped: 67903488 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 403 ms_handle_reset con 0x55cbaeee4800 session 0x55cbb2c1b0e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 403 ms_handle_reset con 0x55cbaeee6800 session 0x55cbad8925a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 403 handle_osd_map epochs [403,404], i have 403, src has [1,404]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbb076f400 session 0x55cbacdc0780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbb0550400 session 0x55cbaf7bd0e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbb0550400 session 0x55cbafb832c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbaeee4800 session 0x55cbaf5a4d20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbaeee6800 session 0x55cbacdbcf00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 555982848 unmapped: 67895296 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 555982848 unmapped: 67895296 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbad910800 session 0x55cbacf64960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbb08a5c00 session 0x55cbacf64780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbad910800 session 0x55cbac6114a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 556007424 unmapped: 67870720 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbb08a5c00 session 0x55cbade8b2c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 heartbeat osd_stat(store_statfs(0x194457000/0x0/0x1bfc00000, data 0x70bd49c/0x72d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbaeee4800 session 0x55cbaf40d4a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbb79ad000 session 0x55cbb0b42f00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 556007424 unmapped: 67870720 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbaeee6800 session 0x55cbacf1cd20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbaeee6800 session 0x55cbafb2fe00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5814598 data_alloc: 251658240 data_used: 39763968
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbb0550400 session 0x55cbacfdfe00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 551477248 unmapped: 72400896 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbb08a5c00 session 0x55cbacdc01e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 551796736 unmapped: 72081408 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 551796736 unmapped: 72081408 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 heartbeat osd_stat(store_statfs(0x19610b000/0x0/0x1bfc00000, data 0x4ff84cf/0x5213000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [1])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 551821312 unmapped: 72056832 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.301959991s of 10.479634285s, submitted: 84
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbade17000 session 0x55cbaf7bd860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbaeab1400 session 0x55cbaf7992c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbade17000 session 0x55cbade8ba40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546512896 unmapped: 77365248 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5741841 data_alloc: 234881024 data_used: 37466112
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546537472 unmapped: 77340672 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546537472 unmapped: 77340672 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 heartbeat osd_stat(store_statfs(0x196a09000/0x0/0x1bfc00000, data 0x463544e/0x484d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546537472 unmapped: 77340672 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546537472 unmapped: 77340672 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546537472 unmapped: 77340672 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5756401 data_alloc: 251658240 data_used: 39563264
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 heartbeat osd_stat(store_statfs(0x196a09000/0x0/0x1bfc00000, data 0x463544e/0x484d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546537472 unmapped: 77340672 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546537472 unmapped: 77340672 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 heartbeat osd_stat(store_statfs(0x196a09000/0x0/0x1bfc00000, data 0x463544e/0x484d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546537472 unmapped: 77340672 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546537472 unmapped: 77340672 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546537472 unmapped: 77340672 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.244743347s of 11.347191811s, submitted: 38
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5763340 data_alloc: 251658240 data_used: 39784448
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549462016 unmapped: 74416128 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549543936 unmapped: 74334208 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 heartbeat osd_stat(store_statfs(0x196776000/0x0/0x1bfc00000, data 0x499044e/0x4ba8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549543936 unmapped: 74334208 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 heartbeat osd_stat(store_statfs(0x196776000/0x0/0x1bfc00000, data 0x499044e/0x4ba8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549543936 unmapped: 74334208 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 heartbeat osd_stat(store_statfs(0x196776000/0x0/0x1bfc00000, data 0x499044e/0x4ba8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549543936 unmapped: 74334208 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5822256 data_alloc: 251658240 data_used: 41963520
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549543936 unmapped: 74334208 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbaeee6800 session 0x55cbaee81680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbb0550400 session 0x55cbaf5a4d20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 heartbeat osd_stat(store_statfs(0x196776000/0x0/0x1bfc00000, data 0x499044e/0x4ba8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbb08a5c00 session 0x55cbb0b42f00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbb1dd6800 session 0x55cbad03b680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbade17000 session 0x55cbaf54c3c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546365440 unmapped: 77512704 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbaeee6800 session 0x55cbacf2af00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbb0550400 session 0x55cbad92d2c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbb08a5c00 session 0x55cbaf40c780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbb68f9000 session 0x55cbadb09c20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546365440 unmapped: 77512704 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 heartbeat osd_stat(store_statfs(0x195eb2000/0x0/0x1bfc00000, data 0x525345e/0x546c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546365440 unmapped: 77512704 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546365440 unmapped: 77512704 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5889166 data_alloc: 251658240 data_used: 42364928
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546365440 unmapped: 77512704 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 heartbeat osd_stat(store_statfs(0x195eb2000/0x0/0x1bfc00000, data 0x525345e/0x546c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546365440 unmapped: 77512704 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546365440 unmapped: 77512704 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546365440 unmapped: 77512704 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546373632 unmapped: 77504512 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5889166 data_alloc: 251658240 data_used: 42364928
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbade17000 session 0x55cbaeb51c20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546373632 unmapped: 77504512 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 heartbeat osd_stat(store_statfs(0x195eb2000/0x0/0x1bfc00000, data 0x525345e/0x546c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbaeee6800 session 0x55cbaf7bde00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546373632 unmapped: 77504512 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbb0550400 session 0x55cbafaf1a40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.031064987s of 17.208616257s, submitted: 30
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbb08a5c00 session 0x55cbafaf0000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546381824 unmapped: 77496320 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 heartbeat osd_stat(store_statfs(0x195e8d000/0x0/0x1bfc00000, data 0x527746d/0x5491000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [1])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 553263104 unmapped: 70615040 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbb5fe3c00 session 0x55cbacf2a960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbade17000 session 0x55cbad03a960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbaeee6800 session 0x55cbaee805a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbb0550400 session 0x55cbad03bc20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbb08a5c00 session 0x55cbaeb501e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 547479552 unmapped: 76398592 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6022662 data_alloc: 251658240 data_used: 48254976
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549560320 unmapped: 74317824 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 heartbeat osd_stat(store_statfs(0x1955cc000/0x0/0x1bfc00000, data 0x5b3846d/0x5d52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 552304640 unmapped: 71573504 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 552304640 unmapped: 71573504 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 heartbeat osd_stat(store_statfs(0x195398000/0x0/0x1bfc00000, data 0x5d6946d/0x5f83000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 552304640 unmapped: 71573504 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 550281216 unmapped: 73596928 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbb6aa0800 session 0x55cbafb2e3c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6058607 data_alloc: 251658240 data_used: 50548736
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 heartbeat osd_stat(store_statfs(0x19539a000/0x0/0x1bfc00000, data 0x5d69490/0x5f84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 550281216 unmapped: 73596928 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbaeee6800 session 0x55cbafb0da40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 551378944 unmapped: 72499200 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 404 handle_osd_map epochs [404,405], i have 404, src has [1,405]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.871469498s of 10.079981804s, submitted: 45
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 405 ms_handle_reset con 0x55cbb0550400 session 0x55cbb0b42b40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 405 ms_handle_reset con 0x55cbb6aa1000 session 0x55cbac6114a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 405 ms_handle_reset con 0x55cbb08a5c00 session 0x55cbacf64960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 405 heartbeat osd_stat(store_statfs(0x19539a000/0x0/0x1bfc00000, data 0x5d69490/0x5f84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 557375488 unmapped: 66502656 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 405 ms_handle_reset con 0x55cbaeee6c00 session 0x55cbacdbd680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 405 handle_osd_map epochs [405,406], i have 405, src has [1,406]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 406 ms_handle_reset con 0x55cbaeee6800 session 0x55cbaf40d680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 556244992 unmapped: 71835648 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 406 handle_osd_map epochs [406,407], i have 406, src has [1,407]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 407 ms_handle_reset con 0x55cbb0550400 session 0x55cbad892000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 557318144 unmapped: 70762496 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6320057 data_alloc: 268435456 data_used: 64593920
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 407 heartbeat osd_stat(store_statfs(0x193b8e000/0x0/0x1bfc00000, data 0x756eccb/0x778e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 559570944 unmapped: 68509696 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 407 heartbeat osd_stat(store_statfs(0x193043000/0x0/0x1bfc00000, data 0x80b3ccb/0x82d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [0,0,1])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 407 ms_handle_reset con 0x55cbb6aa1000 session 0x55cbacf1d0e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 560832512 unmapped: 67248128 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 407 ms_handle_reset con 0x55cbaeee9800 session 0x55cbad03ab40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 407 ms_handle_reset con 0x55cbb08a6000 session 0x55cbaf40cb40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 564174848 unmapped: 63905792 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 407 handle_osd_map epochs [407,408], i have 407, src has [1,408]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 563585024 unmapped: 64495616 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 408 ms_handle_reset con 0x55cbb08a5c00 session 0x55cbafb2ed20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 408 ms_handle_reset con 0x55cbb08a6000 session 0x55cbafb2e000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 564658176 unmapped: 63422464 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 408 heartbeat osd_stat(store_statfs(0x19469a000/0x0/0x1bfc00000, data 0x6a629fc/0x6c82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6283380 data_alloc: 268435456 data_used: 66387968
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 408 ms_handle_reset con 0x55cbad910800 session 0x55cbb2c1a1e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 408 ms_handle_reset con 0x55cbaeee4800 session 0x55cbb2c1a000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 564658176 unmapped: 63422464 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 408 handle_osd_map epochs [408,409], i have 408, src has [1,409]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 409 handle_osd_map epochs [409,409], i have 409, src has [1,409]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 409 ms_handle_reset con 0x55cbb79ad000 session 0x55cbafb2f2c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 409 ms_handle_reset con 0x55cbb076f400 session 0x55cbac610b40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 409 ms_handle_reset con 0x55cbaeee6800 session 0x55cbb2c1b2c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 409 ms_handle_reset con 0x55cbad910800 session 0x55cbafb82b40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 564690944 unmapped: 63389696 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 564699136 unmapped: 63381504 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.496852875s of 10.482481003s, submitted: 310
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 409 heartbeat osd_stat(store_statfs(0x194e78000/0x0/0x1bfc00000, data 0x6286559/0x64a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567771136 unmapped: 60309504 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 409 ms_handle_reset con 0x55cbb08a5c00 session 0x55cbaf7bcb40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568074240 unmapped: 60006400 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 409 ms_handle_reset con 0x55cbb08a6000 session 0x55cbad030000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 409 handle_osd_map epochs [409,410], i have 409, src has [1,410]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6321417 data_alloc: 268435456 data_used: 63418368
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 410 ms_handle_reset con 0x55cbaeee4800 session 0x55cbacc9fa40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568082432 unmapped: 59998208 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 410 heartbeat osd_stat(store_statfs(0x19493d000/0x0/0x1bfc00000, data 0x676021c/0x697b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 410 handle_osd_map epochs [410,411], i have 410, src has [1,411]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568107008 unmapped: 59973632 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 411 handle_osd_map epochs [411,412], i have 411, src has [1,412]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 412 ms_handle_reset con 0x55cbad910800 session 0x55cbafaf0d20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567296000 unmapped: 60784640 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 412 ms_handle_reset con 0x55cbb08a7000 session 0x55cbad92cf00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 412 ms_handle_reset con 0x55cbb315f000 session 0x55cbafaf1e00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 412 ms_handle_reset con 0x55cbaeee6800 session 0x55cbafb83680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 412 heartbeat osd_stat(store_statfs(0x1954c5000/0x0/0x1bfc00000, data 0x55d9b56/0x57f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [0,0,1])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 412 ms_handle_reset con 0x55cbaeee4800 session 0x55cbaeb503c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567320576 unmapped: 60760064 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 412 handle_osd_map epochs [412,413], i have 412, src has [1,413]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 413 ms_handle_reset con 0x55cbad910800 session 0x55cbaee654a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567328768 unmapped: 60751872 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5921184 data_alloc: 251658240 data_used: 43716608
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567328768 unmapped: 60751872 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 413 handle_osd_map epochs [413,414], i have 413, src has [1,414]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 414 heartbeat osd_stat(store_statfs(0x195c84000/0x0/0x1bfc00000, data 0x4e1a8a6/0x5035000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567336960 unmapped: 60743680 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 414 ms_handle_reset con 0x55cbaeee4000 session 0x55cbb0b43c20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 414 ms_handle_reset con 0x55cbaf71b000 session 0x55cbafaf14a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567336960 unmapped: 60743680 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.324023247s of 10.062682152s, submitted: 268
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 414 ms_handle_reset con 0x55cbb08a7000 session 0x55cbad03be00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 414 ms_handle_reset con 0x55cbb08a7000 session 0x55cbaf40c1e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 414 ms_handle_reset con 0x55cbad910800 session 0x55cbadd6a960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 414 ms_handle_reset con 0x55cbaeee4000 session 0x55cbaf5a5a40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 414 ms_handle_reset con 0x55cbaeee4800 session 0x55cbad031860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 554483712 unmapped: 73596928 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 414 ms_handle_reset con 0x55cbaf71b000 session 0x55cbaf7990e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 414 ms_handle_reset con 0x55cbaf71b000 session 0x55cbaf54c780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 414 ms_handle_reset con 0x55cbad910800 session 0x55cbafb83a40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 414 ms_handle_reset con 0x55cbaeee4000 session 0x55cbad03b860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 414 ms_handle_reset con 0x55cbaeee4800 session 0x55cbadd6a000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 554950656 unmapped: 73129984 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5812632 data_alloc: 234881024 data_used: 30158848
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 554950656 unmapped: 73129984 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 414 handle_osd_map epochs [414,415], i have 414, src has [1,415]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 554950656 unmapped: 73129984 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x196831000/0x0/0x1bfc00000, data 0x48d0105/0x4aec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbb08a7000 session 0x55cbafb0d4a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbade17000 session 0x55cbbadf52c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 554958848 unmapped: 73121792 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbb08a7000 session 0x55cbafaf1860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbad910800 session 0x55cbaf54cd20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x196832000/0x0/0x1bfc00000, data 0x48d00f5/0x4aeb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549445632 unmapped: 78635008 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaeee4000 session 0x55cbacdc1c20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x197eb1000/0x0/0x1bfc00000, data 0x32510d2/0x346b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549445632 unmapped: 78635008 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5585426 data_alloc: 218103808 data_used: 21164032
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x197eb1000/0x0/0x1bfc00000, data 0x32510d2/0x346b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaeee4800 session 0x55cbad03a780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549445632 unmapped: 78635008 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaeee4800 session 0x55cbafb0d4a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbad910800 session 0x55cbadd6a000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549593088 unmapped: 78487552 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.3 total, 600.0 interval#012Cumulative writes: 74K writes, 299K keys, 74K commit groups, 1.0 writes per commit group, ingest: 0.30 GB, 0.06 MB/s#012Cumulative WAL: 74K writes, 27K syncs, 2.70 writes per sync, written: 0.30 GB, 0.06 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 11K writes, 47K keys, 11K commit groups, 1.0 writes per commit group, ingest: 54.15 MB, 0.09 MB/s#012Interval WAL: 11K writes, 4590 syncs, 2.57 writes per sync, written: 0.05 GB, 0.09 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549593088 unmapped: 78487552 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549593088 unmapped: 78487552 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549462016 unmapped: 78618624 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x197e8f000/0x0/0x1bfc00000, data 0x32750d2/0x348f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5650616 data_alloc: 234881024 data_used: 29179904
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549462016 unmapped: 78618624 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549462016 unmapped: 78618624 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549462016 unmapped: 78618624 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.950669289s of 15.333333969s, submitted: 141
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549462016 unmapped: 78618624 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x197e8d000/0x0/0x1bfc00000, data 0x32760d2/0x3490000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549462016 unmapped: 78618624 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5651740 data_alloc: 234881024 data_used: 29241344
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549462016 unmapped: 78618624 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549462016 unmapped: 78618624 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549462016 unmapped: 78618624 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x197e8d000/0x0/0x1bfc00000, data 0x32760d2/0x3490000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549462016 unmapped: 78618624 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: mgrc ms_handle_reset ms_handle_reset con 0x55cbaf1b1400
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1950343944
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1950343944,v1:192.168.122.100:6801/1950343944]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: mgrc handle_mgr_configure stats_period=5
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 551968768 unmapped: 76111872 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x197d25000/0x0/0x1bfc00000, data 0x33df0d2/0x35f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [0,0,1])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5736426 data_alloc: 234881024 data_used: 29388800
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 554901504 unmapped: 73179136 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 555212800 unmapped: 72867840 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x19729c000/0x0/0x1bfc00000, data 0x3e670d2/0x4081000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 555212800 unmapped: 72867840 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x19729c000/0x0/0x1bfc00000, data 0x3e670d2/0x4081000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 555237376 unmapped: 72843264 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 555237376 unmapped: 72843264 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5756062 data_alloc: 234881024 data_used: 29851648
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 555237376 unmapped: 72843264 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.927907944s of 13.271549225s, submitted: 115
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x19727c000/0x0/0x1bfc00000, data 0x3e880d2/0x40a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 555040768 unmapped: 73039872 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x19727c000/0x0/0x1bfc00000, data 0x3e880d2/0x40a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 555040768 unmapped: 73039872 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbade17000 session 0x55cbafb83a40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaeee4000 session 0x55cbad031860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbb08a7000 session 0x55cbafb83680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbad910800 session 0x55cbaf40dc20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbade17000 session 0x55cbaf54d860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaeee4800 session 0x55cbadb085a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaf71b000 session 0x55cbacdbc960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbb315f000 session 0x55cbacc9f4a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaeee4000 session 0x55cbaf54c960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbb315f000 session 0x55cbb0b432c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549650432 unmapped: 78430208 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbad910800 session 0x55cbafb91a40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbade17000 session 0x55cbaee645a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549650432 unmapped: 78430208 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5561920 data_alloc: 218103808 data_used: 20439040
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549650432 unmapped: 78430208 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549650432 unmapped: 78430208 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x19831a000/0x0/0x1bfc00000, data 0x2d1e0d2/0x2f38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaf72a800 session 0x55cbbadf43c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbbe336c00 session 0x55cbadb09e00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbad910800 session 0x55cbadd8f680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549650432 unmapped: 78430208 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbade17000 session 0x55cbaeb505a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaeee4000 session 0x55cbacf64780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 542007296 unmapped: 86073344 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaf72a800 session 0x55cbacdc0780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaf72a800 session 0x55cbaf40da40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x199770000/0x0/0x1bfc00000, data 0x19950c3/0x1bae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 542326784 unmapped: 85753856 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5337530 data_alloc: 218103808 data_used: 10829824
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 542326784 unmapped: 85753856 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 542326784 unmapped: 85753856 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 542326784 unmapped: 85753856 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 542326784 unmapped: 85753856 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 542285824 unmapped: 85794816 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.337329865s of 13.674113274s, submitted: 121
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5372354 data_alloc: 218103808 data_used: 15384576
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x199770000/0x0/0x1bfc00000, data 0x19950c3/0x1bae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 542285824 unmapped: 85794816 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaeee4000 session 0x55cbac60eb40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543342592 unmapped: 84738048 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbad910800 session 0x55cbaf5a50e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbade17000 session 0x55cbaf7bcf00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 542294016 unmapped: 85786624 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x199770000/0x0/0x1bfc00000, data 0x19950c3/0x1bae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 542294016 unmapped: 85786624 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 542294016 unmapped: 85786624 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5371867 data_alloc: 218103808 data_used: 15384576
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 542294016 unmapped: 85786624 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 542294016 unmapped: 85786624 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x199770000/0x0/0x1bfc00000, data 0x19950c3/0x1bae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 542294016 unmapped: 85786624 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 542294016 unmapped: 85786624 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 542294016 unmapped: 85786624 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5371867 data_alloc: 218103808 data_used: 15384576
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.320820808s of 10.345089912s, submitted: 7
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 542294016 unmapped: 85786624 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x199770000/0x0/0x1bfc00000, data 0x19950c3/0x1bae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543358976 unmapped: 84721664 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543416320 unmapped: 84664320 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543440896 unmapped: 84639744 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543473664 unmapped: 84606976 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5372363 data_alloc: 218103808 data_used: 15388672
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x199770000/0x0/0x1bfc00000, data 0x19950c3/0x1bae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543490048 unmapped: 84590592 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543490048 unmapped: 84590592 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543490048 unmapped: 84590592 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbbe336c00 session 0x55cbafb823c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbb315f000 session 0x55cbade8ad20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543490048 unmapped: 84590592 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543490048 unmapped: 84590592 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5372027 data_alloc: 218103808 data_used: 15388672
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x199770000/0x0/0x1bfc00000, data 0x19950c3/0x1bae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543490048 unmapped: 84590592 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543490048 unmapped: 84590592 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbad910800 session 0x55cbacdbd2c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543490048 unmapped: 84590592 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbade17000 session 0x55cbbadf50e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x199770000/0x0/0x1bfc00000, data 0x19950c3/0x1bae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.134310722s of 13.202730179s, submitted: 246
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543490048 unmapped: 84590592 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbb0672c00 session 0x55cbacdbc000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaf71b000 session 0x55cbafb83860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543490048 unmapped: 84590592 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbb076f400 session 0x55cbafb0da40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5378533 data_alloc: 218103808 data_used: 15388672
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543645696 unmapped: 84434944 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543645696 unmapped: 84434944 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543645696 unmapped: 84434944 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaeee4000 session 0x55cbaf40d680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaf72a800 session 0x55cbacdbd2c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545996800 unmapped: 82083840 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x1988b3000/0x0/0x1bfc00000, data 0x24410e6/0x265b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24cef9c6), peers [0,1] op hist [0,1])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaf71b000 session 0x55cbafaf1860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543481856 unmapped: 84598784 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbb315f000 session 0x55cbaee641e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbb08a5c00 session 0x55cbaf40c000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaeee4000 session 0x55cbb0b423c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaf71b000 session 0x55cbafb0c780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5306622 data_alloc: 218103808 data_used: 10833920
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaf72a800 session 0x55cbacdbd860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbb315f000 session 0x55cbade8ad20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaeee9800 session 0x55cbacf2ad20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaeee9800 session 0x55cbb0b43860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaeee4000 session 0x55cbafb2ef00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543498240 unmapped: 84582400 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x199757000/0x0/0x1bfc00000, data 0x159e0d6/0x17b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24cef9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543498240 unmapped: 84582400 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543498240 unmapped: 84582400 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbad910800 session 0x55cbacdc1680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbade17000 session 0x55cbaf7bcf00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543498240 unmapped: 84582400 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.711401939s of 10.419783592s, submitted: 152
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaf71b000 session 0x55cbafaf1a40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543457280 unmapped: 84623360 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x1995cd000/0x0/0x1bfc00000, data 0x17280d6/0x1941000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24cef9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5329199 data_alloc: 218103808 data_used: 10825728
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543408128 unmapped: 84672512 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbad910800 session 0x55cbadab3860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbade17000 session 0x55cbaeb503c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaeee9800 session 0x55cbaf40d860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543383552 unmapped: 84697088 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaeee4000 session 0x55cbacdc0780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543383552 unmapped: 84697088 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaf72a800 session 0x55cbaee652c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaf72a800 session 0x55cbacf645a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x1995cd000/0x0/0x1bfc00000, data 0x17280c3/0x1941000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24cef9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543539200 unmapped: 84541440 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543539200 unmapped: 84541440 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5333477 data_alloc: 218103808 data_used: 10829824
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543334400 unmapped: 84746240 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543129600 unmapped: 84951040 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543129600 unmapped: 84951040 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaeee4000 session 0x55cbacc34f00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543145984 unmapped: 84934656 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x1995cd000/0x0/0x1bfc00000, data 0x17280c3/0x1941000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24cef9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.894233704s of 10.058961868s, submitted: 49
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaeee9800 session 0x55cbacdbd4a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543145984 unmapped: 84934656 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5349621 data_alloc: 218103808 data_used: 13037568
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbb315f000 session 0x55cbacdbdc20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x1995cd000/0x0/0x1bfc00000, data 0x17280c3/0x1941000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24cef9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543145984 unmapped: 84934656 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543145984 unmapped: 84934656 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbb0550400 session 0x55cbaf5a5680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543145984 unmapped: 84934656 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaeee4000 session 0x55cbacf2af00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaeee9800 session 0x55cbafb82b40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaf72a800 session 0x55cbaee64d20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543506432 unmapped: 84574208 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543506432 unmapped: 84574208 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 415 handle_osd_map epochs [415,416], i have 415, src has [1,416]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbb315f000 session 0x55cbadd6a960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 416 heartbeat osd_stat(store_statfs(0x19a0a1000/0x0/0x1bfc00000, data 0x1c93125/0x1ead000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23caf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5404768 data_alloc: 218103808 data_used: 13045760
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543473664 unmapped: 84606976 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543473664 unmapped: 84606976 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 416 heartbeat osd_stat(store_statfs(0x19a09d000/0x0/0x1bfc00000, data 0x1c94e48/0x1eb0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23caf9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 547831808 unmapped: 80248832 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546889728 unmapped: 81190912 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546889728 unmapped: 81190912 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5508996 data_alloc: 218103808 data_used: 14086144
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546889728 unmapped: 81190912 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546889728 unmapped: 81190912 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 416 heartbeat osd_stat(store_statfs(0x198464000/0x0/0x1bfc00000, data 0x2726e48/0x2942000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.202218056s of 13.709792137s, submitted: 167
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbb7cc5400 session 0x55cbacc34f00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbb6aa1000 session 0x55cbade8ab40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546799616 unmapped: 81281024 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546799616 unmapped: 81281024 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546799616 unmapped: 81281024 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5506968 data_alloc: 218103808 data_used: 14086144
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546807808 unmapped: 81272832 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546807808 unmapped: 81272832 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbb7cc5400 session 0x55cbacdbcb40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbaeee4000 session 0x55cbacf723c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546807808 unmapped: 81272832 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 416 heartbeat osd_stat(store_statfs(0x19846b000/0x0/0x1bfc00000, data 0x2726eaa/0x2943000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbaeee9800 session 0x55cbacc9e5a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbaf72a800 session 0x55cbafb2ed20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546807808 unmapped: 81272832 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbaeee4000 session 0x55cbaf798960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbb6aa1000 session 0x55cbadab3860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbb7cc5400 session 0x55cbafaf1a40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbaf71e400 session 0x55cbacdc1680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 547086336 unmapped: 80994304 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbade12000 session 0x55cbafb2ef00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbaeee4000 session 0x55cbafb0c780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbaf71e400 session 0x55cbb0b423c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbb6aa1000 session 0x55cbaee641e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbb7cc5400 session 0x55cbafaf1860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5595395 data_alloc: 218103808 data_used: 14757888
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 547119104 unmapped: 80961536 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbb0550800 session 0x55cbaf40d680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbaeee4000 session 0x55cbbadf50e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbaeee9800 session 0x55cbacdc0d20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbb315f000 session 0x55cbafaf0b40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 547151872 unmapped: 84606976 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 416 heartbeat osd_stat(store_statfs(0x196fc2000/0x0/0x1bfc00000, data 0x3bccf2c/0x3dec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [0,1])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbb6aa1000 session 0x55cbacc9f2c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbaf71e400 session 0x55cbbadf5c20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbaeee4000 session 0x55cbacc35680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 547176448 unmapped: 84582400 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 547184640 unmapped: 84574208 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 416 heartbeat osd_stat(store_statfs(0x196fe7000/0x0/0x1bfc00000, data 0x3ba8f1c/0x3dc7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbaeee9800 session 0x55cbacdbd2c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.957546234s of 12.299890518s, submitted: 90
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbb315f000 session 0x55cbacc34b40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 547201024 unmapped: 84557824 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5721147 data_alloc: 218103808 data_used: 18468864
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 416 heartbeat osd_stat(store_statfs(0x196fe5000/0x0/0x1bfc00000, data 0x3ba8f4f/0x3dc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 547201024 unmapped: 84557824 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbaeee9400 session 0x55cbacdc10e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbb076e800 session 0x55cbaee80f00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 547217408 unmapped: 84541440 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbaeee4000 session 0x55cbafb83a40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbaeee9800 session 0x55cbaf40c960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 547356672 unmapped: 84402176 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 416 handle_osd_map epochs [416,417], i have 416, src has [1,417]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 416 handle_osd_map epochs [417,417], i have 417, src has [1,417]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 417 ms_handle_reset con 0x55cbaeee9400 session 0x55cbadab3a40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 417 heartbeat osd_stat(store_statfs(0x196fe1000/0x0/0x1bfc00000, data 0x3baac87/0x3dcc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 417 ms_handle_reset con 0x55cbb315f000 session 0x55cbac610d20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 547397632 unmapped: 84361216 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 417 ms_handle_reset con 0x55cbafafa800 session 0x55cbb0b42d20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 551469056 unmapped: 80289792 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5792626 data_alloc: 234881024 data_used: 34205696
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 417 heartbeat osd_stat(store_statfs(0x19754d000/0x0/0x1bfc00000, data 0x363fc25/0x3860000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 551469056 unmapped: 80289792 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 551469056 unmapped: 80289792 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 551469056 unmapped: 80289792 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 417 heartbeat osd_stat(store_statfs(0x19754d000/0x0/0x1bfc00000, data 0x363fc25/0x3860000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 417 ms_handle_reset con 0x55cbaeee9400 session 0x55cbafb83680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 417 heartbeat osd_stat(store_statfs(0x19754d000/0x0/0x1bfc00000, data 0x363fc25/0x3860000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 417 ms_handle_reset con 0x55cbaeee9800 session 0x55cbbadf5c20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 551477248 unmapped: 80281600 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 551477248 unmapped: 80281600 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.157319069s of 10.465465546s, submitted: 102
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 417 ms_handle_reset con 0x55cbb315f000 session 0x55cbacdc10e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 417 heartbeat osd_stat(store_statfs(0x19754e000/0x0/0x1bfc00000, data 0x363fc25/0x3860000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5794918 data_alloc: 234881024 data_used: 34267136
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 417 handle_osd_map epochs [417,418], i have 417, src has [1,418]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 551485440 unmapped: 80273408 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 551485440 unmapped: 80273408 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 418 heartbeat osd_stat(store_statfs(0x197549000/0x0/0x1bfc00000, data 0x364183e/0x3864000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 418 ms_handle_reset con 0x55cbaddbf800 session 0x55cbaf40dc20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 418 ms_handle_reset con 0x55cbade16000 session 0x55cbaf5a5680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 551493632 unmapped: 80265216 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #59. Immutable memtables: 15.
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 418 ms_handle_reset con 0x55cbaddbf800 session 0x55cbaee805a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 418 ms_handle_reset con 0x55cbaeee9400 session 0x55cbacdc0960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 560357376 unmapped: 71401472 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 561561600 unmapped: 70197248 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6028589 data_alloc: 234881024 data_used: 36491264
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 564879360 unmapped: 66879488 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 418 heartbeat osd_stat(store_statfs(0x193ea4000/0x0/0x1bfc00000, data 0x5b3e8a0/0x5d62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 564895744 unmapped: 66863104 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 564895744 unmapped: 66863104 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 564895744 unmapped: 66863104 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 563970048 unmapped: 67788800 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 418 heartbeat osd_stat(store_statfs(0x193e99000/0x0/0x1bfc00000, data 0x5b498a0/0x5d6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6118457 data_alloc: 234881024 data_used: 37609472
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 418 heartbeat osd_stat(store_statfs(0x193e99000/0x0/0x1bfc00000, data 0x5b498a0/0x5d6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 563970048 unmapped: 67788800 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 563970048 unmapped: 67788800 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.216553688s of 11.862694740s, submitted: 307
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 563986432 unmapped: 67772416 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 418 ms_handle_reset con 0x55cbaeee4000 session 0x55cbacdbd2c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 418 ms_handle_reset con 0x55cbaeee9800 session 0x55cbad92cf00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 558514176 unmapped: 73244672 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 558514176 unmapped: 73244672 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5847102 data_alloc: 234881024 data_used: 23478272
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 418 ms_handle_reset con 0x55cbb6aa1000 session 0x55cbafb2e1e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 418 ms_handle_reset con 0x55cbb7cc5400 session 0x55cbaee801e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 558514176 unmapped: 73244672 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 418 heartbeat osd_stat(store_statfs(0x194d1d000/0x0/0x1bfc00000, data 0x439681b/0x45b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 418 ms_handle_reset con 0x55cbaddbf800 session 0x55cbaf7990e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 558514176 unmapped: 73244672 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 418 heartbeat osd_stat(store_statfs(0x194d1d000/0x0/0x1bfc00000, data 0x439681b/0x45b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 558514176 unmapped: 73244672 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 418 handle_osd_map epochs [418,419], i have 418, src has [1,419]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 419 handle_osd_map epochs [419,419], i have 419, src has [1,419]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 419 ms_handle_reset con 0x55cbaeee4000 session 0x55cbbadf4000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 419 ms_handle_reset con 0x55cbaeee9800 session 0x55cbacdbc5a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 419 ms_handle_reset con 0x55cbaeee9400 session 0x55cbbadf5a40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 419 ms_handle_reset con 0x55cbad910800 session 0x55cbac6112c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 419 ms_handle_reset con 0x55cbade17000 session 0x55cbafaf1e00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 419 ms_handle_reset con 0x55cbaddbf800 session 0x55cbafaf10e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568983552 unmapped: 76800000 heap: 645783552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 419 ms_handle_reset con 0x55cbb7cc5400 session 0x55cbafb0c780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 419 handle_osd_map epochs [419,420], i have 419, src has [1,420]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 420 ms_handle_reset con 0x55cbaeee9800 session 0x55cbaeb505a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 420 ms_handle_reset con 0x55cbaeee4000 session 0x55cbb0b42f00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 420 ms_handle_reset con 0x55cbad910800 session 0x55cbafb0da40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569016320 unmapped: 76767232 heap: 645783552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 420 handle_osd_map epochs [420,421], i have 420, src has [1,421]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 421 handle_osd_map epochs [421,421], i have 421, src has [1,421]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 421 ms_handle_reset con 0x55cbaeee9800 session 0x55cbadab2d20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5920924 data_alloc: 234881024 data_used: 36601856
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 421 ms_handle_reset con 0x55cbaddbf800 session 0x55cbafb0d2c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569040896 unmapped: 76742656 heap: 645783552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 421 ms_handle_reset con 0x55cbade17000 session 0x55cbaf54d680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 421 ms_handle_reset con 0x55cbade17000 session 0x55cbb0b434a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 421 ms_handle_reset con 0x55cbad910800 session 0x55cbb0b43680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 421 ms_handle_reset con 0x55cbaeee4000 session 0x55cbb0b42b40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 421 heartbeat osd_stat(store_statfs(0x195319000/0x0/0x1bfc00000, data 0x46cd069/0x48f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569057280 unmapped: 76726272 heap: 645783552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569065472 unmapped: 76718080 heap: 645783552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569778176 unmapped: 76005376 heap: 645783552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.811343193s of 12.625464439s, submitted: 167
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565067776 unmapped: 80715776 heap: 645783552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5651521 data_alloc: 218103808 data_used: 19386368
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 421 handle_osd_map epochs [421,422], i have 421, src has [1,422]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 421 handle_osd_map epochs [422,422], i have 422, src has [1,422]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 422 ms_handle_reset con 0x55cbb7cc5400 session 0x55cbafb825a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565084160 unmapped: 80699392 heap: 645783552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 422 heartbeat osd_stat(store_statfs(0x196c93000/0x0/0x1bfc00000, data 0x2d54c2f/0x2f79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565084160 unmapped: 80699392 heap: 645783552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565084160 unmapped: 80699392 heap: 645783552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565084160 unmapped: 80699392 heap: 645783552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565084160 unmapped: 80699392 heap: 645783552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5654407 data_alloc: 218103808 data_used: 19390464
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565084160 unmapped: 80699392 heap: 645783552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565084160 unmapped: 80699392 heap: 645783552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 422 heartbeat osd_stat(store_statfs(0x196c93000/0x0/0x1bfc00000, data 0x2d54c2f/0x2f79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565084160 unmapped: 80699392 heap: 645783552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 422 ms_handle_reset con 0x55cbb6aa1000 session 0x55cbafb0cd20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567312384 unmapped: 78471168 heap: 645783552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 422 ms_handle_reset con 0x55cbad910800 session 0x55cbaf5a4d20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 422 ms_handle_reset con 0x55cbade17000 session 0x55cbaf5a5c20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 422 ms_handle_reset con 0x55cbaeee4000 session 0x55cbaf5a41e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 422 ms_handle_reset con 0x55cbb315f000 session 0x55cbafb2eb40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 422 ms_handle_reset con 0x55cbb7cc5400 session 0x55cbafb2f860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 422 ms_handle_reset con 0x55cbad910800 session 0x55cbac611860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 422 heartbeat osd_stat(store_statfs(0x196c93000/0x0/0x1bfc00000, data 0x2d54c2f/0x2f79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 422 ms_handle_reset con 0x55cbade17000 session 0x55cbaf54cd20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 422 ms_handle_reset con 0x55cbaeee4000 session 0x55cbb2c1ab40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 422 ms_handle_reset con 0x55cbb315f000 session 0x55cbbadf50e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.170543671s of 10.137235641s, submitted: 143
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 576528384 unmapped: 69255168 heap: 645783552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 422 ms_handle_reset con 0x55cbbe336400 session 0x55cbafb914a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5943804 data_alloc: 234881024 data_used: 38084608
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 422 ms_handle_reset con 0x55cbbe336400 session 0x55cbadd6a000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573972480 unmapped: 77668352 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 422 ms_handle_reset con 0x55cbad910800 session 0x55cbacc9fa40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 422 heartbeat osd_stat(store_statfs(0x1952a3000/0x0/0x1bfc00000, data 0x473dc3f/0x4963000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573972480 unmapped: 77668352 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 422 ms_handle_reset con 0x55cbade17000 session 0x55cbafaf0d20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573980672 unmapped: 77660160 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 422 ms_handle_reset con 0x55cbaeee4000 session 0x55cbaf5a4d20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 574136320 unmapped: 77504512 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 422 handle_osd_map epochs [422,423], i have 422, src has [1,423]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 422 handle_osd_map epochs [423,423], i have 423, src has [1,423]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 423 ms_handle_reset con 0x55cbb4ebdc00 session 0x55cbafb91e00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565010432 unmapped: 86630400 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5807206 data_alloc: 234881024 data_used: 24678400
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565010432 unmapped: 86630400 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565010432 unmapped: 86630400 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 423 heartbeat osd_stat(store_statfs(0x19629a000/0x0/0x1bfc00000, data 0x374c977/0x3973000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565010432 unmapped: 86630400 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565010432 unmapped: 86630400 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 423 heartbeat osd_stat(store_statfs(0x19629a000/0x0/0x1bfc00000, data 0x374c977/0x3973000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565010432 unmapped: 86630400 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5807382 data_alloc: 234881024 data_used: 24678400
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 423 handle_osd_map epochs [423,424], i have 423, src has [1,424]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.432376862s of 10.768292427s, submitted: 118
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565010432 unmapped: 86630400 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565010432 unmapped: 86630400 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 424 heartbeat osd_stat(store_statfs(0x196297000/0x0/0x1bfc00000, data 0x374e580/0x3976000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565010432 unmapped: 86630400 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565010432 unmapped: 86630400 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565010432 unmapped: 86630400 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5820080 data_alloc: 234881024 data_used: 25268224
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565035008 unmapped: 86605824 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565035008 unmapped: 86605824 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 424 heartbeat osd_stat(store_statfs(0x196298000/0x0/0x1bfc00000, data 0x374e580/0x3976000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565035008 unmapped: 86605824 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565035008 unmapped: 86605824 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 424 heartbeat osd_stat(store_statfs(0x196298000/0x0/0x1bfc00000, data 0x374e580/0x3976000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565035008 unmapped: 86605824 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5828560 data_alloc: 234881024 data_used: 26120192
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.081981659s of 10.156385422s, submitted: 29
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 564903936 unmapped: 86736896 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 564903936 unmapped: 86736896 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 424 ms_handle_reset con 0x55cbb315f000 session 0x55cbacc34b40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 424 ms_handle_reset con 0x55cbb7cc4000 session 0x55cbaf40d680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565952512 unmapped: 85688320 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 424 ms_handle_reset con 0x55cbad910800 session 0x55cbad892f00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 424 heartbeat osd_stat(store_statfs(0x196292000/0x0/0x1bfc00000, data 0x3754580/0x397c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 564568064 unmapped: 87072768 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 424 handle_osd_map epochs [424,425], i have 424, src has [1,425]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbade17000 session 0x55cbaf7983c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbaeee4000 session 0x55cbaf7990e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 564559872 unmapped: 87080960 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5648825 data_alloc: 234881024 data_used: 21270528
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 heartbeat osd_stat(store_statfs(0x1973c1000/0x0/0x1bfc00000, data 0x26243d0/0x284b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 564559872 unmapped: 87080960 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 heartbeat osd_stat(store_statfs(0x1973c1000/0x0/0x1bfc00000, data 0x26243d0/0x284b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 564559872 unmapped: 87080960 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbad910800 session 0x55cbad92c000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbade17000 session 0x55cbafb2e3c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbb315f000 session 0x55cbb2c1a780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbb7cc4000 session 0x55cbb0b421e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 564568064 unmapped: 87072768 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbbe336400 session 0x55cbaf799c20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbad910800 session 0x55cbaf40d4a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbade17000 session 0x55cbafb82d20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbb315f000 session 0x55cbacc34d20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbb7cc4000 session 0x55cbacffb4a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 heartbeat osd_stat(store_statfs(0x1973c1000/0x0/0x1bfc00000, data 0x26243d0/0x284b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 564772864 unmapped: 86867968 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 564772864 unmapped: 86867968 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5745458 data_alloc: 234881024 data_used: 21278720
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.762641907s of 10.110259056s, submitted: 116
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbacfaa400 session 0x55cbacffaf00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbacfaa400 session 0x55cbaee5a780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 564797440 unmapped: 86843392 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbad910800 session 0x55cbaee5a960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbade17000 session 0x55cbaee5b0e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 heartbeat osd_stat(store_statfs(0x196793000/0x0/0x1bfc00000, data 0x355b442/0x347b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 564822016 unmapped: 86818816 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbb7cc4000 session 0x55cbaee5b860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbb315f000 session 0x55cbaee5bc20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbb315f000 session 0x55cbac60e780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbacfaa400 session 0x55cbb2c1a960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbad910800 session 0x55cbadb090e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 564822016 unmapped: 86818816 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 564822016 unmapped: 86818816 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbb4ebc800 session 0x55cbac60f680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbb0551400 session 0x55cbacf72f00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 564822016 unmapped: 86818816 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbacfaa400 session 0x55cbaf40de00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5800563 data_alloc: 234881024 data_used: 21397504
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 heartbeat osd_stat(store_statfs(0x196768000/0x0/0x1bfc00000, data 0x35854a4/0x34a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbad910800 session 0x55cbacf72000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566124544 unmapped: 85516288 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567107584 unmapped: 84533248 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 heartbeat osd_stat(store_statfs(0x196482000/0x0/0x1bfc00000, data 0x386b4a4/0x378c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567107584 unmapped: 84533248 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbb315f000 session 0x55cbaee81a40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566984704 unmapped: 84656128 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566984704 unmapped: 84656128 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5883605 data_alloc: 234881024 data_used: 32722944
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567001088 unmapped: 84639744 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567001088 unmapped: 84639744 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 heartbeat osd_stat(store_statfs(0x196481000/0x0/0x1bfc00000, data 0x386b4c7/0x378d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567001088 unmapped: 84639744 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567001088 unmapped: 84639744 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567001088 unmapped: 84639744 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5885205 data_alloc: 234881024 data_used: 33001472
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567001088 unmapped: 84639744 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.597194672s of 15.748700142s, submitted: 40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568762368 unmapped: 82878464 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbb4ebd000 session 0x55cbafb2fa40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568893440 unmapped: 82747392 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 heartbeat osd_stat(store_statfs(0x195eb3000/0x0/0x1bfc00000, data 0x3a504c7/0x3945000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569360384 unmapped: 82280448 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569458688 unmapped: 82182144 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5955807 data_alloc: 234881024 data_used: 34934784
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570089472 unmapped: 81551360 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570089472 unmapped: 81551360 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 heartbeat osd_stat(store_statfs(0x195e16000/0x0/0x1bfc00000, data 0x3ae44c7/0x39d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 heartbeat osd_stat(store_statfs(0x195e16000/0x0/0x1bfc00000, data 0x3ae44c7/0x39d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570089472 unmapped: 81551360 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 heartbeat osd_stat(store_statfs(0x195e05000/0x0/0x1bfc00000, data 0x3b044c7/0x39f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570089472 unmapped: 81551360 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570089472 unmapped: 81551360 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 heartbeat osd_stat(store_statfs(0x195e05000/0x0/0x1bfc00000, data 0x3b044c7/0x39f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5946127 data_alloc: 234881024 data_used: 34942976
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570089472 unmapped: 81551360 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.067697525s of 10.294413567s, submitted: 88
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570105856 unmapped: 81534976 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570105856 unmapped: 81534976 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbade17000 session 0x55cbaf54d680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbb7cc4000 session 0x55cbaf54da40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570105856 unmapped: 81534976 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbacfaa400 session 0x55cbaf5a4780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570114048 unmapped: 81526784 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5756880 data_alloc: 234881024 data_used: 25755648
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 heartbeat osd_stat(store_statfs(0x196cae000/0x0/0x1bfc00000, data 0x2c5c455/0x2b4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571760640 unmapped: 79880192 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572252160 unmapped: 79388672 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572252160 unmapped: 79388672 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572252160 unmapped: 79388672 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572260352 unmapped: 79380480 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5841448 data_alloc: 234881024 data_used: 26992640
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572260352 unmapped: 79380480 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 heartbeat osd_stat(store_statfs(0x196437000/0x0/0x1bfc00000, data 0x34d3455/0x33c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572260352 unmapped: 79380480 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.288564682s of 10.684764862s, submitted: 124
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572260352 unmapped: 79380480 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572268544 unmapped: 79372288 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 heartbeat osd_stat(store_statfs(0x196417000/0x0/0x1bfc00000, data 0x34f4455/0x33e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572268544 unmapped: 79372288 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5844308 data_alloc: 234881024 data_used: 27033600
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572268544 unmapped: 79372288 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbaf726800 session 0x55cbafb0cf00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 heartbeat osd_stat(store_statfs(0x1963f2000/0x0/0x1bfc00000, data 0x3519455/0x340c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572268544 unmapped: 79372288 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572276736 unmapped: 79364096 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572276736 unmapped: 79364096 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572276736 unmapped: 79364096 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5844308 data_alloc: 234881024 data_used: 27033600
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbad910800 session 0x55cbad892000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572276736 unmapped: 79364096 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572276736 unmapped: 79364096 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.988227844s of 10.064723969s, submitted: 18
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbb4ebc800 session 0x55cbad893c20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbaf729000 session 0x55cbaf40dc20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbacfaa400 session 0x55cbacf2a000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 heartbeat osd_stat(store_statfs(0x1963f2000/0x0/0x1bfc00000, data 0x3519455/0x340c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572227584 unmapped: 79413248 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572227584 unmapped: 79413248 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 heartbeat osd_stat(store_statfs(0x1963f2000/0x0/0x1bfc00000, data 0x3519432/0x340b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572227584 unmapped: 79413248 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbad910800 session 0x55cbbadf4f00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5840280 data_alloc: 234881024 data_used: 27025408
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbaf726800 session 0x55cbbadf5860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572235776 unmapped: 79405056 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572235776 unmapped: 79405056 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbb7cc4000 session 0x55cbacf72960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572243968 unmapped: 79396864 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 425 handle_osd_map epochs [425,426], i have 425, src has [1,426]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 426 ms_handle_reset con 0x55cbacfaa400 session 0x55cbafaf1e00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 426 heartbeat osd_stat(store_statfs(0x19640c000/0x0/0x1bfc00000, data 0x31c9147/0x33f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 426 ms_handle_reset con 0x55cbaddbf800 session 0x55cbacdc0000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 426 ms_handle_reset con 0x55cbaeee9800 session 0x55cbafb90b40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572252160 unmapped: 79388672 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 426 ms_handle_reset con 0x55cbad910800 session 0x55cbaf40cb40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572252160 unmapped: 79388672 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5826202 data_alloc: 234881024 data_used: 27025408
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572252160 unmapped: 79388672 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 426 ms_handle_reset con 0x55cbaf726800 session 0x55cbafb83860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 426 ms_handle_reset con 0x55cbaf726800 session 0x55cbaee65c20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565846016 unmapped: 85794816 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565846016 unmapped: 85794816 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 426 heartbeat osd_stat(store_statfs(0x1971d7000/0x0/0x1bfc00000, data 0x208b0b2/0x22b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.160833359s of 11.446761131s, submitted: 98
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 426 ms_handle_reset con 0x55cbad910800 session 0x55cbb0b42f00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 426 heartbeat osd_stat(store_statfs(0x1971d7000/0x0/0x1bfc00000, data 0x208b0b2/0x22b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565846016 unmapped: 85794816 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 426 heartbeat osd_stat(store_statfs(0x1971d7000/0x0/0x1bfc00000, data 0x208b0b2/0x22b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 426 handle_osd_map epochs [426,427], i have 426, src has [1,427]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565846016 unmapped: 85794816 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 427 ms_handle_reset con 0x55cbacfaa400 session 0x55cbaf7bd0e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 427 handle_osd_map epochs [427,428], i have 427, src has [1,428]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5594847 data_alloc: 218103808 data_used: 13410304
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565854208 unmapped: 85786624 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565854208 unmapped: 85786624 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 428 heartbeat osd_stat(store_statfs(0x197546000/0x0/0x1bfc00000, data 0x208ea6f/0x22b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565854208 unmapped: 85786624 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565854208 unmapped: 85786624 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565854208 unmapped: 85786624 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 428 handle_osd_map epochs [428,429], i have 428, src has [1,429]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5597741 data_alloc: 218103808 data_used: 13451264
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565854208 unmapped: 85786624 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565862400 unmapped: 85778432 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x197544000/0x0/0x1bfc00000, data 0x2090678/0x22b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565862400 unmapped: 85778432 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 ms_handle_reset con 0x55cbaeee9800 session 0x55cbac60f4a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 ms_handle_reset con 0x55cbaf729000 session 0x55cbafaf0780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565870592 unmapped: 85770240 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.307628632s of 11.402697563s, submitted: 48
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 ms_handle_reset con 0x55cbacfaa400 session 0x55cbaf7bc5a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565870592 unmapped: 85770240 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5598335 data_alloc: 218103808 data_used: 13451264
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x197545000/0x0/0x1bfc00000, data 0x2090678/0x22b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565870592 unmapped: 85770240 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565878784 unmapped: 85762048 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 ms_handle_reset con 0x55cbad910800 session 0x55cbb2c1b860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 ms_handle_reset con 0x55cbaeee9800 session 0x55cbafb910e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565911552 unmapped: 85729280 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 ms_handle_reset con 0x55cbaf726800 session 0x55cbbadf4f00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 ms_handle_reset con 0x55cbb315f000 session 0x55cbacf72960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566992896 unmapped: 84647936 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566992896 unmapped: 84647936 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5718972 data_alloc: 218103808 data_used: 14905344
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566992896 unmapped: 84647936 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x19687c000/0x0/0x1bfc00000, data 0x2d586da/0x2f82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566992896 unmapped: 84647936 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x19687c000/0x0/0x1bfc00000, data 0x2d586da/0x2f82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566992896 unmapped: 84647936 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566992896 unmapped: 84647936 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566992896 unmapped: 84647936 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5718972 data_alloc: 218103808 data_used: 14905344
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x19687c000/0x0/0x1bfc00000, data 0x2d586da/0x2f82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566992896 unmapped: 84647936 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566992896 unmapped: 84647936 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566992896 unmapped: 84647936 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566992896 unmapped: 84647936 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x19687c000/0x0/0x1bfc00000, data 0x2d586da/0x2f82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x19687c000/0x0/0x1bfc00000, data 0x2d586da/0x2f82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566992896 unmapped: 84647936 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5718972 data_alloc: 218103808 data_used: 14905344
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566992896 unmapped: 84647936 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x19687c000/0x0/0x1bfc00000, data 0x2d586da/0x2f82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.328927994s of 16.574344635s, submitted: 69
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 ms_handle_reset con 0x55cbb315f000 session 0x55cbafb82000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566861824 unmapped: 84779008 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566755328 unmapped: 84885504 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x19687b000/0x0/0x1bfc00000, data 0x2d586fd/0x2f83000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569442304 unmapped: 82198528 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569442304 unmapped: 82198528 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5805093 data_alloc: 234881024 data_used: 26980352
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569442304 unmapped: 82198528 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569442304 unmapped: 82198528 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x19687b000/0x0/0x1bfc00000, data 0x2d586fd/0x2f83000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569442304 unmapped: 82198528 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569442304 unmapped: 82198528 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x19687b000/0x0/0x1bfc00000, data 0x2d586fd/0x2f83000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x19687b000/0x0/0x1bfc00000, data 0x2d586fd/0x2f83000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569548800 unmapped: 82092032 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5808293 data_alloc: 234881024 data_used: 27578368
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 ms_handle_reset con 0x55cbaddbf800 session 0x55cbaee5a5a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569548800 unmapped: 82092032 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 ms_handle_reset con 0x55cbad910800 session 0x55cbafb2e960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569548800 unmapped: 82092032 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 ms_handle_reset con 0x55cbaeee9800 session 0x55cbacc35e00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569548800 unmapped: 82092032 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.822283745s of 11.913968086s, submitted: 16
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 ms_handle_reset con 0x55cbaf726800 session 0x55cbacc34d20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 563961856 unmapped: 87678976 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x1971b3000/0x0/0x1bfc00000, data 0x24216da/0x264b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [0,0,1,0,2,2])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567222272 unmapped: 84418560 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5766327 data_alloc: 234881024 data_used: 23625728
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567287808 unmapped: 84353024 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567574528 unmapped: 84066304 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567582720 unmapped: 84058112 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x196831000/0x0/0x1bfc00000, data 0x2d946da/0x2fbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567582720 unmapped: 84058112 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567582720 unmapped: 84058112 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5790443 data_alloc: 234881024 data_used: 24244224
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567582720 unmapped: 84058112 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567582720 unmapped: 84058112 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567582720 unmapped: 84058112 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x196831000/0x0/0x1bfc00000, data 0x2d946da/0x2fbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567582720 unmapped: 84058112 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.886643410s of 11.222091675s, submitted: 145
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 ms_handle_reset con 0x55cbacfaa400 session 0x55cbbadf54a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567590912 unmapped: 84049920 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x19680c000/0x0/0x1bfc00000, data 0x2dc76da/0x2ff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5786943 data_alloc: 234881024 data_used: 24264704
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567590912 unmapped: 84049920 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 ms_handle_reset con 0x55cbaf726800 session 0x55cbade8ba40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567590912 unmapped: 84049920 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x19680d000/0x0/0x1bfc00000, data 0x2dc76b7/0x2ff0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567590912 unmapped: 84049920 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567590912 unmapped: 84049920 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567590912 unmapped: 84049920 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5782023 data_alloc: 234881024 data_used: 24260608
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567590912 unmapped: 84049920 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567590912 unmapped: 84049920 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x19680d000/0x0/0x1bfc00000, data 0x2dc76b7/0x2ff0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567590912 unmapped: 84049920 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567590912 unmapped: 84049920 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567590912 unmapped: 84049920 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5782023 data_alloc: 234881024 data_used: 24260608
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567590912 unmapped: 84049920 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x19680d000/0x0/0x1bfc00000, data 0x2dc76b7/0x2ff0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567590912 unmapped: 84049920 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567599104 unmapped: 84041728 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567599104 unmapped: 84041728 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x19680d000/0x0/0x1bfc00000, data 0x2dc76b7/0x2ff0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567599104 unmapped: 84041728 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5782023 data_alloc: 234881024 data_used: 24260608
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567607296 unmapped: 84033536 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 ms_handle_reset con 0x55cbad910800 session 0x55cbacdc1c20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567607296 unmapped: 84033536 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 ms_handle_reset con 0x55cbaddbf800 session 0x55cbaf5a4b40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567607296 unmapped: 84033536 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 ms_handle_reset con 0x55cbaeee9800 session 0x55cbaf7985a0
Nov 29 03:54:30 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 29 03:54:30 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2256066129' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567607296 unmapped: 84033536 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.616521835s of 19.702690125s, submitted: 37
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 ms_handle_reset con 0x55cbacfaa400 session 0x55cbafb82b40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567607296 unmapped: 84033536 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5785182 data_alloc: 234881024 data_used: 24260608
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x19680d000/0x0/0x1bfc00000, data 0x2dc76c7/0x2ff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568672256 unmapped: 82968576 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568688640 unmapped: 82952192 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568688640 unmapped: 82952192 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568688640 unmapped: 82952192 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568688640 unmapped: 82952192 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5787206 data_alloc: 234881024 data_used: 24428544
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x19680d000/0x0/0x1bfc00000, data 0x2dc76c7/0x2ff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568688640 unmapped: 82952192 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568688640 unmapped: 82952192 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568688640 unmapped: 82952192 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568688640 unmapped: 82952192 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568688640 unmapped: 82952192 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5787206 data_alloc: 234881024 data_used: 24428544
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x19680d000/0x0/0x1bfc00000, data 0x2dc76c7/0x2ff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568688640 unmapped: 82952192 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.876774788s of 12.909799576s, submitted: 9
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568688640 unmapped: 82952192 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568696832 unmapped: 82944000 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568713216 unmapped: 82927616 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568713216 unmapped: 82927616 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5799914 data_alloc: 234881024 data_used: 25272320
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568713216 unmapped: 82927616 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x19680b000/0x0/0x1bfc00000, data 0x2dc76c7/0x2ff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x19680b000/0x0/0x1bfc00000, data 0x2dc76c7/0x2ff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568713216 unmapped: 82927616 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568713216 unmapped: 82927616 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568713216 unmapped: 82927616 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568713216 unmapped: 82927616 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5799914 data_alloc: 234881024 data_used: 25272320
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x19680b000/0x0/0x1bfc00000, data 0x2dc76c7/0x2ff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568713216 unmapped: 82927616 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568713216 unmapped: 82927616 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568713216 unmapped: 82927616 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568713216 unmapped: 82927616 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568713216 unmapped: 82927616 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5799914 data_alloc: 234881024 data_used: 25272320
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568713216 unmapped: 82927616 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x19680b000/0x0/0x1bfc00000, data 0x2dc76c7/0x2ff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568713216 unmapped: 82927616 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.725801468s of 15.923392296s, submitted: 22
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569761792 unmapped: 81879040 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569761792 unmapped: 81879040 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569761792 unmapped: 81879040 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5798742 data_alloc: 234881024 data_used: 25268224
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569761792 unmapped: 81879040 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569761792 unmapped: 81879040 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x196805000/0x0/0x1bfc00000, data 0x2dcc6c7/0x2ff6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569761792 unmapped: 81879040 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569761792 unmapped: 81879040 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x196805000/0x0/0x1bfc00000, data 0x2dcc6c7/0x2ff6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569769984 unmapped: 81870848 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5798742 data_alloc: 234881024 data_used: 25268224
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569769984 unmapped: 81870848 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569769984 unmapped: 81870848 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x196805000/0x0/0x1bfc00000, data 0x2dcc6c7/0x2ff6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569769984 unmapped: 81870848 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569769984 unmapped: 81870848 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.484737396s of 11.501891136s, submitted: 12
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569769984 unmapped: 81870848 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5802422 data_alloc: 234881024 data_used: 26091520
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 ms_handle_reset con 0x55cbaf726800 session 0x55cbacf2b0e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569769984 unmapped: 81870848 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 ms_handle_reset con 0x55cbb315f000 session 0x55cbaf40d2c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569769984 unmapped: 81870848 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x196808000/0x0/0x1bfc00000, data 0x2dcc6c7/0x2ff6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569769984 unmapped: 81870848 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x196808000/0x0/0x1bfc00000, data 0x2dcc6c7/0x2ff6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 handle_osd_map epochs [430,430], i have 429, src has [1,430]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 handle_osd_map epochs [429,430], i have 430, src has [1,430]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 429 handle_osd_map epochs [430,430], i have 430, src has [1,430]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569786368 unmapped: 81854464 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 430 heartbeat osd_stat(store_statfs(0x196803000/0x0/0x1bfc00000, data 0x2dce3fa/0x2ffa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1,0,1])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569786368 unmapped: 81854464 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5807214 data_alloc: 234881024 data_used: 26099712
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 430 ms_handle_reset con 0x55cbaeee9c00 session 0x55cbbadf4000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 430 ms_handle_reset con 0x55cbb4ebd000 session 0x55cbadd6a960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569786368 unmapped: 81854464 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 430 heartbeat osd_stat(store_statfs(0x196804000/0x0/0x1bfc00000, data 0x2dce3fa/0x2ffa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569786368 unmapped: 81854464 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 430 ms_handle_reset con 0x55cbacfaa400 session 0x55cbaf40cd20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 430 ms_handle_reset con 0x55cbb4ebd000 session 0x55cbbadf5a40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569794560 unmapped: 81846272 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 430 ms_handle_reset con 0x55cbaeee9c00 session 0x55cbafb0da40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569802752 unmapped: 81838080 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 5.845624447s of 10.049188614s, submitted: 29
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 430 ms_handle_reset con 0x55cbaf726800 session 0x55cbac60f4a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569802752 unmapped: 81838080 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5806497 data_alloc: 234881024 data_used: 26103808
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569810944 unmapped: 81829888 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 430 heartbeat osd_stat(store_statfs(0x196804000/0x0/0x1bfc00000, data 0x2dce3fa/0x2ffa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569819136 unmapped: 81821696 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 430 ms_handle_reset con 0x55cbb315f000 session 0x55cbb0b43680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 430 ms_handle_reset con 0x55cbacfaa400 session 0x55cbaf5a52c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569827328 unmapped: 81813504 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 430 heartbeat osd_stat(store_statfs(0x196038000/0x0/0x1bfc00000, data 0x359a3fa/0x37c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 430 heartbeat osd_stat(store_statfs(0x196038000/0x0/0x1bfc00000, data 0x359a3fa/0x37c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569827328 unmapped: 81813504 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569827328 unmapped: 81813504 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5867613 data_alloc: 234881024 data_used: 26107904
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569835520 unmapped: 81805312 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569835520 unmapped: 81805312 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569835520 unmapped: 81805312 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 430 heartbeat osd_stat(store_statfs(0x196038000/0x0/0x1bfc00000, data 0x359a3fa/0x37c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 430 handle_osd_map epochs [430,431], i have 430, src has [1,431]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569835520 unmapped: 81805312 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 431 heartbeat osd_stat(store_statfs(0x196034000/0x0/0x1bfc00000, data 0x359c11d/0x37c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 431 handle_osd_map epochs [431,432], i have 431, src has [1,432]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.457994461s of 10.397141457s, submitted: 19
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 432 ms_handle_reset con 0x55cbaf726800 session 0x55cbacf2ab40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569851904 unmapped: 81788928 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5879925 data_alloc: 234881024 data_used: 26116096
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 432 handle_osd_map epochs [432,433], i have 432, src has [1,433]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573767680 unmapped: 77873152 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 433 ms_handle_reset con 0x55cbb5fe3000 session 0x55cbafb903c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 433 handle_osd_map epochs [433,434], i have 433, src has [1,434]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 434 ms_handle_reset con 0x55cbb4ebd000 session 0x55cbb0b43a40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 434 ms_handle_reset con 0x55cbb076fc00 session 0x55cbaf798780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 434 ms_handle_reset con 0x55cbaeee9c00 session 0x55cbacdc01e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566845440 unmapped: 84795392 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566845440 unmapped: 84795392 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 434 ms_handle_reset con 0x55cbacfaa400 session 0x55cbafb825a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 434 heartbeat osd_stat(store_statfs(0x1955ee000/0x0/0x1bfc00000, data 0x3fd9f67/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 434 ms_handle_reset con 0x55cbaf726800 session 0x55cbaf798d20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 434 heartbeat osd_stat(store_statfs(0x1955ef000/0x0/0x1bfc00000, data 0x401ffc9/0x420f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566861824 unmapped: 84779008 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 434 ms_handle_reset con 0x55cbb4ebd000 session 0x55cbacf64780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 434 ms_handle_reset con 0x55cbaf721000 session 0x55cbbadf4b40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 434 ms_handle_reset con 0x55cbacfab800 session 0x55cbaf40c5a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 434 ms_handle_reset con 0x55cbacfaa400 session 0x55cbaf5a5860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566009856 unmapped: 85630976 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 434 ms_handle_reset con 0x55cbb08a6c00 session 0x55cbaf40cd20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5998362 data_alloc: 234881024 data_used: 26120192
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 434 ms_handle_reset con 0x55cbaeee9c00 session 0x55cbac60e780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 434 heartbeat osd_stat(store_statfs(0x1955ef000/0x0/0x1bfc00000, data 0x401ffc9/0x420f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 434 handle_osd_map epochs [434,435], i have 434, src has [1,435]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 435 ms_handle_reset con 0x55cbb315ec00 session 0x55cbafb90000
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 435 ms_handle_reset con 0x55cbacfaa400 session 0x55cbafb0c1e0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 435 ms_handle_reset con 0x55cbacfab800 session 0x55cbbadf5680
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 435 ms_handle_reset con 0x55cbb5fe3000 session 0x55cbad0305a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566493184 unmapped: 85147648 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566517760 unmapped: 85123072 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566525952 unmapped: 85114880 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566525952 unmapped: 85114880 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566525952 unmapped: 85114880 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 435 heartbeat osd_stat(store_statfs(0x195300000/0x0/0x1bfc00000, data 0x430ad96/0x44fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6017676 data_alloc: 234881024 data_used: 26181632
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566525952 unmapped: 85114880 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566525952 unmapped: 85114880 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 435 heartbeat osd_stat(store_statfs(0x195300000/0x0/0x1bfc00000, data 0x430ad96/0x44fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566525952 unmapped: 85114880 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.260416031s of 13.620246887s, submitted: 103
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566329344 unmapped: 85311488 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566329344 unmapped: 85311488 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6015596 data_alloc: 234881024 data_used: 26177536
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566329344 unmapped: 85311488 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566329344 unmapped: 85311488 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 435 heartbeat osd_stat(store_statfs(0x195301000/0x0/0x1bfc00000, data 0x430ad96/0x44fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566329344 unmapped: 85311488 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566919168 unmapped: 84721664 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566919168 unmapped: 84721664 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6037728 data_alloc: 234881024 data_used: 27869184
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566919168 unmapped: 84721664 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566919168 unmapped: 84721664 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 435 heartbeat osd_stat(store_statfs(0x195241000/0x0/0x1bfc00000, data 0x43cad96/0x45bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566919168 unmapped: 84721664 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.571662903s of 10.598692894s, submitted: 6
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566919168 unmapped: 84721664 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 435 heartbeat osd_stat(store_statfs(0x195241000/0x0/0x1bfc00000, data 0x43cad96/0x45bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566919168 unmapped: 84721664 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6038080 data_alloc: 234881024 data_used: 27869184
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 435 ms_handle_reset con 0x55cbb315ec00 session 0x55cbafb82b40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566919168 unmapped: 84721664 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566919168 unmapped: 84721664 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 435 heartbeat osd_stat(store_statfs(0x195241000/0x0/0x1bfc00000, data 0x43cad96/0x45bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566919168 unmapped: 84721664 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566919168 unmapped: 84721664 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566919168 unmapped: 84721664 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6058429 data_alloc: 234881024 data_used: 30396416
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566919168 unmapped: 84721664 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566919168 unmapped: 84721664 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566919168 unmapped: 84721664 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 435 heartbeat osd_stat(store_statfs(0x195241000/0x0/0x1bfc00000, data 0x43cad96/0x45bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566919168 unmapped: 84721664 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566919168 unmapped: 84721664 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6058077 data_alloc: 234881024 data_used: 30396416
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566919168 unmapped: 84721664 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566919168 unmapped: 84721664 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 435 heartbeat osd_stat(store_statfs(0x195241000/0x0/0x1bfc00000, data 0x43cad96/0x45bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.164316177s of 14.216246605s, submitted: 13
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567386112 unmapped: 84254720 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 435 heartbeat osd_stat(store_statfs(0x195241000/0x0/0x1bfc00000, data 0x43cad96/0x45bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573087744 unmapped: 78553088 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572989440 unmapped: 78651392 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6222717 data_alloc: 234881024 data_used: 32174080
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 435 heartbeat osd_stat(store_statfs(0x1943bd000/0x0/0x1bfc00000, data 0x54f6d96/0x543d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572989440 unmapped: 78651392 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 435 heartbeat osd_stat(store_statfs(0x1943bd000/0x0/0x1bfc00000, data 0x54f6d96/0x543d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572989440 unmapped: 78651392 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572989440 unmapped: 78651392 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572989440 unmapped: 78651392 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572989440 unmapped: 78651392 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6220577 data_alloc: 234881024 data_used: 32186368
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572989440 unmapped: 78651392 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 435 ms_handle_reset con 0x55cbb3953800 session 0x55cbade8ba40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 435 ms_handle_reset con 0x55cbaf716400 session 0x55cbacc34d20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 435 ms_handle_reset con 0x55cbacfaa400 session 0x55cbaee5af00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 435 heartbeat osd_stat(store_statfs(0x19439c000/0x0/0x1bfc00000, data 0x551ad96/0x5461000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573005824 unmapped: 78635008 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 435 ms_handle_reset con 0x55cbaeee9c00 session 0x55cbafb914a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 435 ms_handle_reset con 0x55cbb08a6c00 session 0x55cbafb2ef00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.559671402s of 10.041211128s, submitted: 174
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573005824 unmapped: 78635008 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 435 ms_handle_reset con 0x55cbacfab800 session 0x55cbafb90780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 435 heartbeat osd_stat(store_statfs(0x19439d000/0x0/0x1bfc00000, data 0x551ad96/0x5461000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573014016 unmapped: 78626816 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573014016 unmapped: 78626816 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6211869 data_alloc: 234881024 data_used: 32317440
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 435 ms_handle_reset con 0x55cbb08a6c00 session 0x55cbafaf1c20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573014016 unmapped: 78626816 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 435 handle_osd_map epochs [435,436], i have 435, src has [1,436]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 436 handle_osd_map epochs [436,436], i have 436, src has [1,436]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 436 ms_handle_reset con 0x55cbaf716400 session 0x55cbafb2eb40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 436 ms_handle_reset con 0x55cbacfaa400 session 0x55cbaee81a40
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 436 ms_handle_reset con 0x55cbb315ec00 session 0x55cbaf7bd2c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 436 ms_handle_reset con 0x55cbaeee9c00 session 0x55cbb2c1b4a0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 436 ms_handle_reset con 0x55cbb5fe3000 session 0x55cbadd8f2c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 436 ms_handle_reset con 0x55cbaeee9c00 session 0x55cbacc34f00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 436 ms_handle_reset con 0x55cbb08a7c00 session 0x55cbaf7bc3c0
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572301312 unmapped: 79339520 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 436 ms_handle_reset con 0x55cbacfaa400 session 0x55cbaf54c780
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 436 heartbeat osd_stat(store_statfs(0x19659d000/0x0/0x1bfc00000, data 0x2ff157a/0x31df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572325888 unmapped: 79314944 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 436 handle_osd_map epochs [436,437], i have 436, src has [1,437]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 436 handle_osd_map epochs [437,437], i have 437, src has [1,437]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 437 ms_handle_reset con 0x55cbaf716400 session 0x55cbacf2ad20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 437 heartbeat osd_stat(store_statfs(0x1967f1000/0x0/0x1bfc00000, data 0x2e1f57a/0x300d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572309504 unmapped: 79331328 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 437 heartbeat osd_stat(store_statfs(0x1967ef000/0x0/0x1bfc00000, data 0x2ddb30d/0x300e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572309504 unmapped: 79331328 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5878214 data_alloc: 234881024 data_used: 27750400
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 437 ms_handle_reset con 0x55cbad910800 session 0x55cbac60e960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 437 ms_handle_reset con 0x55cbaddbf800 session 0x55cbaf798960
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572317696 unmapped: 79323136 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 437 ms_handle_reset con 0x55cbacfaa400 session 0x55cbaf54cf00
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 437 heartbeat osd_stat(store_statfs(0x1967f1000/0x0/0x1bfc00000, data 0x2ddb2fd/0x300d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572334080 unmapped: 79306752 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572334080 unmapped: 79306752 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572334080 unmapped: 79306752 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572334080 unmapped: 79306752 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5875573 data_alloc: 234881024 data_used: 27754496
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 437 handle_osd_map epochs [437,438], i have 437, src has [1,438]
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.934582710s of 12.426671028s, submitted: 170
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572334080 unmapped: 79306752 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 438 ms_handle_reset con 0x55cbaeee9c00 session 0x55cbafb91860
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572342272 unmapped: 79298560 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 438 ms_handle_reset con 0x55cbb08a7c00 session 0x55cbacdc0d20
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571326464 unmapped: 80314368 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571326464 unmapped: 80314368 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571326464 unmapped: 80314368 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571326464 unmapped: 80314368 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571326464 unmapped: 80314368 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571334656 unmapped: 80306176 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571334656 unmapped: 80306176 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571334656 unmapped: 80306176 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571342848 unmapped: 80297984 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571342848 unmapped: 80297984 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571342848 unmapped: 80297984 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571342848 unmapped: 80297984 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571342848 unmapped: 80297984 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571342848 unmapped: 80297984 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571342848 unmapped: 80297984 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571342848 unmapped: 80297984 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571351040 unmapped: 80289792 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571351040 unmapped: 80289792 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571351040 unmapped: 80289792 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571351040 unmapped: 80289792 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571351040 unmapped: 80289792 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571351040 unmapped: 80289792 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571351040 unmapped: 80289792 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571351040 unmapped: 80289792 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571351040 unmapped: 80289792 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571351040 unmapped: 80289792 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571351040 unmapped: 80289792 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571351040 unmapped: 80289792 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571351040 unmapped: 80289792 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571359232 unmapped: 80281600 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571359232 unmapped: 80281600 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571367424 unmapped: 80273408 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571375616 unmapped: 80265216 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571375616 unmapped: 80265216 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571375616 unmapped: 80265216 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571375616 unmapped: 80265216 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571375616 unmapped: 80265216 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571383808 unmapped: 80257024 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571383808 unmapped: 80257024 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571383808 unmapped: 80257024 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571383808 unmapped: 80257024 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571383808 unmapped: 80257024 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571383808 unmapped: 80257024 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571383808 unmapped: 80257024 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571383808 unmapped: 80257024 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571392000 unmapped: 80248832 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571392000 unmapped: 80248832 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571400192 unmapped: 80240640 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571400192 unmapped: 80240640 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571400192 unmapped: 80240640 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571400192 unmapped: 80240640 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571400192 unmapped: 80240640 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571400192 unmapped: 80240640 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571400192 unmapped: 80240640 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571400192 unmapped: 80240640 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571408384 unmapped: 80232448 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571408384 unmapped: 80232448 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571408384 unmapped: 80232448 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571408384 unmapped: 80232448 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571408384 unmapped: 80232448 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571408384 unmapped: 80232448 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571416576 unmapped: 80224256 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571416576 unmapped: 80224256 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571416576 unmapped: 80224256 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571416576 unmapped: 80224256 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571416576 unmapped: 80224256 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571416576 unmapped: 80224256 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571416576 unmapped: 80224256 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571416576 unmapped: 80224256 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571424768 unmapped: 80216064 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571424768 unmapped: 80216064 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 80207872 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 80207872 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 80207872 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 80207872 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 80207872 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 80207872 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571441152 unmapped: 80199680 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571441152 unmapped: 80199680 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571441152 unmapped: 80199680 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571441152 unmapped: 80199680 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571441152 unmapped: 80199680 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571441152 unmapped: 80199680 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: do_command 'config diff' '{prefix=config diff}'
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571424768 unmapped: 80216064 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: do_command 'config show' '{prefix=config show}'
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: do_command 'counter dump' '{prefix=counter dump}'
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: do_command 'counter schema' '{prefix=counter schema}'
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570761216 unmapped: 80879616 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570114048 unmapped: 81526784 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:30 np0005539552 ceph-osd[79800]: do_command 'log dump' '{prefix=log dump}'
Nov 29 03:54:30 np0005539552 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 03:54:30 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 03:54:30 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/688136817' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 03:54:31 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 29 03:54:31 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2633095153' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 03:54:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:31.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:31 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Nov 29 03:54:31 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2925551841' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 29 03:54:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:31.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:32 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Nov 29 03:54:32 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/886373980' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 29 03:54:32 np0005539552 nova_compute[233724]: 2025-11-29 08:54:32.647 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:32 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Nov 29 03:54:32 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1825314028' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 29 03:54:32 np0005539552 nova_compute[233724]: 2025-11-29 08:54:32.765 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Nov 29 03:54:33 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2565874475' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 29 03:54:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:33.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Nov 29 03:54:33 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2755746502' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 29 03:54:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Nov 29 03:54:33 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3229509102' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 29 03:54:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Nov 29 03:54:33 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/708775938' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 29 03:54:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:33.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:33 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Nov 29 03:54:33 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/414392398' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 29 03:54:34 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Nov 29 03:54:34 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1699856678' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 29 03:54:34 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Nov 29 03:54:34 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3753681445' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 29 03:54:34 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Nov 29 03:54:34 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2173611324' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 29 03:54:34 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Nov 29 03:54:34 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3912907191' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 29 03:54:34 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:35 np0005539552 systemd[1]: Starting Hostname Service...
Nov 29 03:54:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:35.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:54:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:35.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:54:35 np0005539552 systemd[1]: Started Hostname Service.
Nov 29 03:54:36 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 29 03:54:36 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1061645470' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 03:54:36 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 03:54:36 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 03:54:36 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Nov 29 03:54:36 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2798010391' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 29 03:54:36 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Nov 29 03:54:36 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3667126432' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 29 03:54:36 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Nov 29 03:54:36 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3155968211' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 29 03:54:37 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:54:37 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:54:37 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:54:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:37.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Nov 29 03:54:37 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1489636657' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 29 03:54:37 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 03:54:37 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 03:54:37 np0005539552 nova_compute[233724]: 2025-11-29 08:54:37.649 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:37 np0005539552 nova_compute[233724]: 2025-11-29 08:54:37.767 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:37.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:38 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Nov 29 03:54:38 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4231412265' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 29 03:54:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Nov 29 03:54:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3920069934' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 29 03:54:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:54:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/482518862' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:54:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:54:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/482518862' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:54:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:39.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 29 03:54:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2741961315' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 29 03:54:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:39.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Nov 29 03:54:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4113946838' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 29 03:54:40 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 03:54:40 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 03:54:40 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0) v1
Nov 29 03:54:40 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1095811152' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 29 03:54:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:41.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:41 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Nov 29 03:54:41 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1476298789' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 29 03:54:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:41.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df"} v 0) v1
Nov 29 03:54:42 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2385460006' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 29 03:54:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Nov 29 03:54:42 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/579284683' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 29 03:54:42 np0005539552 nova_compute[233724]: 2025-11-29 08:54:42.652 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:42 np0005539552 nova_compute[233724]: 2025-11-29 08:54:42.769 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Nov 29 03:54:43 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/922074869' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 29 03:54:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:43.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:43 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:54:43 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:54:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:43.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:43 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Nov 29 03:54:43 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3186071932' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 29 03:54:44 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Nov 29 03:54:44 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/336420110' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 29 03:54:44 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:45 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Nov 29 03:54:45 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2301565548' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 29 03:54:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:45.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:45.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:46 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Nov 29 03:54:46 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/433186858' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 29 03:54:46 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Nov 29 03:54:46 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4078287116' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 29 03:54:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:47.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:47 np0005539552 ovs-appctl[332102]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 29 03:54:47 np0005539552 ovs-appctl[332108]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 29 03:54:47 np0005539552 ovs-appctl[332112]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 29 03:54:47 np0005539552 nova_compute[233724]: 2025-11-29 08:54:47.654 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:47 np0005539552 nova_compute[233724]: 2025-11-29 08:54:47.769 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:47.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Nov 29 03:54:47 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3971358213' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 29 03:54:48 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Nov 29 03:54:48 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3191343939' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 29 03:54:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:49.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:49 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 29 03:54:49 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3232428732' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 03:54:49 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:49.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:50 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0) v1
Nov 29 03:54:50 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/140723316' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 29 03:54:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:51.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:51.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:51 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0) v1
Nov 29 03:54:51 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2884220711' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 29 03:54:52 np0005539552 nova_compute[233724]: 2025-11-29 08:54:52.656 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 29 03:54:52 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4257651961' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 29 03:54:52 np0005539552 nova_compute[233724]: 2025-11-29 08:54:52.771 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0) v1
Nov 29 03:54:53 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/346729375' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Nov 29 03:54:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:53.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0) v1
Nov 29 03:54:53 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/934821776' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Nov 29 03:54:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:54:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:53.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:54:53 np0005539552 nova_compute[233724]: 2025-11-29 08:54:53.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:54:53 np0005539552 nova_compute[233724]: 2025-11-29 08:54:53.984 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:54:53 np0005539552 nova_compute[233724]: 2025-11-29 08:54:53.985 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:54:53 np0005539552 nova_compute[233724]: 2025-11-29 08:54:53.986 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:54:53 np0005539552 nova_compute[233724]: 2025-11-29 08:54:53.986 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:54:53 np0005539552 nova_compute[233724]: 2025-11-29 08:54:53.986 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:54:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0) v1
Nov 29 03:54:54 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4055679448' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Nov 29 03:54:54 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:54:54 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4089695785' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:54:54 np0005539552 nova_compute[233724]: 2025-11-29 08:54:54.484 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:54:54 np0005539552 nova_compute[233724]: 2025-11-29 08:54:54.675 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:54:54 np0005539552 nova_compute[233724]: 2025-11-29 08:54:54.676 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3924MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:54:54 np0005539552 nova_compute[233724]: 2025-11-29 08:54:54.676 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:54:54 np0005539552 nova_compute[233724]: 2025-11-29 08:54:54.676 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:54:54 np0005539552 nova_compute[233724]: 2025-11-29 08:54:54.823 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:54:54 np0005539552 nova_compute[233724]: 2025-11-29 08:54:54.823 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:54:54 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:54 np0005539552 nova_compute[233724]: 2025-11-29 08:54:54.845 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:54:54 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0) v1
Nov 29 03:54:54 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2873767858' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Nov 29 03:54:55 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:54:55 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/64560436' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:54:55 np0005539552 nova_compute[233724]: 2025-11-29 08:54:55.299 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:54:55 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0) v1
Nov 29 03:54:55 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4089090962' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Nov 29 03:54:55 np0005539552 nova_compute[233724]: 2025-11-29 08:54:55.305 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:54:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:55.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:55 np0005539552 nova_compute[233724]: 2025-11-29 08:54:55.378 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:54:55 np0005539552 nova_compute[233724]: 2025-11-29 08:54:55.381 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:54:55 np0005539552 nova_compute[233724]: 2025-11-29 08:54:55.382 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:54:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:55.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:56 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0) v1
Nov 29 03:54:56 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/987406356' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Nov 29 03:54:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:57.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:57 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0) v1
Nov 29 03:54:57 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3730687348' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Nov 29 03:54:57 np0005539552 nova_compute[233724]: 2025-11-29 08:54:57.660 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:57 np0005539552 nova_compute[233724]: 2025-11-29 08:54:57.774 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:57.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:58 np0005539552 podman[333926]: 2025-11-29 08:54:58.020526086 +0000 UTC m=+0.086866289 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:54:58 np0005539552 podman[333927]: 2025-11-29 08:54:58.163485865 +0000 UTC m=+0.236101647 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller)
Nov 29 03:54:58 np0005539552 podman[333925]: 2025-11-29 08:54:58.205373402 +0000 UTC m=+0.271793507 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Nov 29 03:54:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0) v1
Nov 29 03:54:58 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2653878282' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Nov 29 03:54:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:59.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:59 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:54:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:59.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:59 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 29 03:54:59 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2013202972' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 29 03:55:00 np0005539552 virtqemud[233098]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 29 03:55:00 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0) v1
Nov 29 03:55:00 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3616578668' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Nov 29 03:55:01 np0005539552 systemd[1]: Starting Time & Date Service...
Nov 29 03:55:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:01.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:01 np0005539552 nova_compute[233724]: 2025-11-29 08:55:01.383 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:55:01 np0005539552 nova_compute[233724]: 2025-11-29 08:55:01.384 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:55:01 np0005539552 nova_compute[233724]: 2025-11-29 08:55:01.385 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:55:01 np0005539552 systemd[1]: Started Time & Date Service.
Nov 29 03:55:01 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 03:55:01 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2043495551' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 03:55:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:01.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:01 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0) v1
Nov 29 03:55:01 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/623629289' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Nov 29 03:55:02 np0005539552 nova_compute[233724]: 2025-11-29 08:55:02.664 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:02 np0005539552 nova_compute[233724]: 2025-11-29 08:55:02.777 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:03.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:03.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:03 np0005539552 nova_compute[233724]: 2025-11-29 08:55:03.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:55:04 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:05.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:05.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:07.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:07 np0005539552 nova_compute[233724]: 2025-11-29 08:55:07.667 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:07 np0005539552 nova_compute[233724]: 2025-11-29 08:55:07.778 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:07.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:07 np0005539552 nova_compute[233724]: 2025-11-29 08:55:07.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:55:08 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:55:08 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval#012Cumulative writes: 14K writes, 76K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s#012Cumulative WAL: 14K writes, 14K syncs, 1.00 writes per sync, written: 0.15 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1642 writes, 8204 keys, 1642 commit groups, 1.0 writes per commit group, ingest: 17.00 MB, 0.03 MB/s#012Interval WAL: 1642 writes, 1642 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     28.8      3.30              0.31        48    0.069       0      0       0.0       0.0#012  L6      1/0   13.76 MB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   5.5    102.6     88.7      5.84              1.60        47    0.124    368K    25K       0.0       0.0#012 Sum      1/0   13.76 MB   0.0      0.6     0.1      0.5       0.6      0.1       0.0   6.5     65.6     67.1      9.14              1.91        95    0.096    368K    25K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.1    116.2    118.8      0.58              0.29        10    0.058     53K   2559       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) 
Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   0.0    102.6     88.7      5.84              1.60        47    0.124    368K    25K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     29.1      3.26              0.31        47    0.069       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.044       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 600.0 interval#012Flush(GB): cumulative 0.093, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.60 GB write, 0.10 MB/s write, 0.59 GB read, 0.10 MB/s read, 9.1 seconds#012Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.11 MB/s read, 0.6 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560bf13bd1f0#2 capacity: 304.00 MB usage: 66.08 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.000412 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(4113,63.42 MB,20.8606%) FilterBlock(95,1012.86 KB,0.325369%) IndexBlock(95,1.67 MB,0.55012%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 03:55:08 np0005539552 nova_compute[233724]: 2025-11-29 08:55:08.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:55:08 np0005539552 nova_compute[233724]: 2025-11-29 08:55:08.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:55:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:09.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:09 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:09.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:11.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:11.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:12 np0005539552 nova_compute[233724]: 2025-11-29 08:55:12.671 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:12 np0005539552 nova_compute[233724]: 2025-11-29 08:55:12.779 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:12 np0005539552 nova_compute[233724]: 2025-11-29 08:55:12.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:55:12 np0005539552 nova_compute[233724]: 2025-11-29 08:55:12.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:55:12 np0005539552 nova_compute[233724]: 2025-11-29 08:55:12.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:55:12 np0005539552 nova_compute[233724]: 2025-11-29 08:55:12.948 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:55:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:13.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:13.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:14 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:14 np0005539552 nova_compute[233724]: 2025-11-29 08:55:14.943 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:55:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:15.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:15.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:17.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:17 np0005539552 nova_compute[233724]: 2025-11-29 08:55:17.674 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:17 np0005539552 nova_compute[233724]: 2025-11-29 08:55:17.782 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:17.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:19.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:19 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:19.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:55:20.665 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:55:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:55:20.665 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:55:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:55:20.666 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:55:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:21.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:21.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:22 np0005539552 nova_compute[233724]: 2025-11-29 08:55:22.677 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:22 np0005539552 nova_compute[233724]: 2025-11-29 08:55:22.783 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:23.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:23.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:24 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:25.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:25.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:26 np0005539552 nova_compute[233724]: 2025-11-29 08:55:26.919 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:55:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:27.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:27 np0005539552 nova_compute[233724]: 2025-11-29 08:55:27.681 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:27 np0005539552 nova_compute[233724]: 2025-11-29 08:55:27.784 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:27.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:28 np0005539552 podman[334689]: 2025-11-29 08:55:28.973884413 +0000 UTC m=+0.056339967 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:55:28 np0005539552 podman[334688]: 2025-11-29 08:55:28.98080496 +0000 UTC m=+0.062411981 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 03:55:29 np0005539552 podman[334690]: 2025-11-29 08:55:29.009739779 +0000 UTC m=+0.084767443 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:55:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:29.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:29 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:29.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:31.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:31 np0005539552 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 29 03:55:31 np0005539552 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 03:55:31 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #154. Immutable memtables: 0.
Nov 29 03:55:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:55:31.764420) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:55:31 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 154
Nov 29 03:55:31 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406531764520, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 1223, "num_deletes": 251, "total_data_size": 2217627, "memory_usage": 2253192, "flush_reason": "Manual Compaction"}
Nov 29 03:55:31 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #155: started
Nov 29 03:55:31 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406531772271, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 155, "file_size": 1050502, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 76122, "largest_seqno": 77340, "table_properties": {"data_size": 1045003, "index_size": 2509, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 17712, "raw_average_key_size": 23, "raw_value_size": 1032203, "raw_average_value_size": 1354, "num_data_blocks": 108, "num_entries": 762, "num_filter_entries": 762, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406467, "oldest_key_time": 1764406467, "file_creation_time": 1764406531, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:55:31 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 7884 microseconds, and 3693 cpu microseconds.
Nov 29 03:55:31 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:55:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:55:31.772316) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #155: 1050502 bytes OK
Nov 29 03:55:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:55:31.772331) [db/memtable_list.cc:519] [default] Level-0 commit table #155 started
Nov 29 03:55:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:55:31.773884) [db/memtable_list.cc:722] [default] Level-0 commit table #155: memtable #1 done
Nov 29 03:55:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:55:31.773894) EVENT_LOG_v1 {"time_micros": 1764406531773891, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:55:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:55:31.773909) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:55:31 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 2210843, prev total WAL file size 2210843, number of live WAL files 2.
Nov 29 03:55:31 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000151.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:55:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:55:31.774669) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353035' seq:72057594037927935, type:22 .. '6D6772737461740032373537' seq:0, type:0; will stop at (end)
Nov 29 03:55:31 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:55:31 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [155(1025KB)], [153(13MB)]
Nov 29 03:55:31 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406531775359, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [155], "files_L6": [153], "score": -1, "input_data_size": 15484025, "oldest_snapshot_seqno": -1}
Nov 29 03:55:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:31.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:31 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #156: 10976 keys, 12038354 bytes, temperature: kUnknown
Nov 29 03:55:31 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406531880817, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 156, "file_size": 12038354, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11970314, "index_size": 39511, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27461, "raw_key_size": 289658, "raw_average_key_size": 26, "raw_value_size": 11780649, "raw_average_value_size": 1073, "num_data_blocks": 1496, "num_entries": 10976, "num_filter_entries": 10976, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764406531, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 156, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:55:31 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:55:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:55:31.881060) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 12038354 bytes
Nov 29 03:55:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:55:31.883468) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 146.8 rd, 114.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 13.8 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(26.2) write-amplify(11.5) OK, records in: 11464, records dropped: 488 output_compression: NoCompression
Nov 29 03:55:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:55:31.883488) EVENT_LOG_v1 {"time_micros": 1764406531883479, "job": 98, "event": "compaction_finished", "compaction_time_micros": 105507, "compaction_time_cpu_micros": 55318, "output_level": 6, "num_output_files": 1, "total_output_size": 12038354, "num_input_records": 11464, "num_output_records": 10976, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:55:31 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:55:31 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406531883816, "job": 98, "event": "table_file_deletion", "file_number": 155}
Nov 29 03:55:31 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000153.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:55:31 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406531886664, "job": 98, "event": "table_file_deletion", "file_number": 153}
Nov 29 03:55:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:55:31.774447) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:55:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:55:31.886741) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:55:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:55:31.886748) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:55:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:55:31.886753) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:55:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:55:31.886757) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:55:31 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:55:31.886761) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:55:31 np0005539552 nova_compute[233724]: 2025-11-29 08:55:31.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:55:31 np0005539552 nova_compute[233724]: 2025-11-29 08:55:31.923 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 03:55:32 np0005539552 nova_compute[233724]: 2025-11-29 08:55:32.684 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:32 np0005539552 nova_compute[233724]: 2025-11-29 08:55:32.786 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:33.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 03:55:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:33.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 03:55:33 np0005539552 nova_compute[233724]: 2025-11-29 08:55:33.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:55:34 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:35.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:35.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:37.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:37 np0005539552 nova_compute[233724]: 2025-11-29 08:55:37.687 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:37 np0005539552 nova_compute[233724]: 2025-11-29 08:55:37.786 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:37.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:38 np0005539552 systemd-logind[788]: Session 71 logged out. Waiting for processes to exit.
Nov 29 03:55:38 np0005539552 systemd[1]: session-71.scope: Deactivated successfully.
Nov 29 03:55:38 np0005539552 systemd[1]: session-71.scope: Consumed 2min 42.462s CPU time, 956.1M memory peak, read 380.6M from disk, written 299.5M to disk.
Nov 29 03:55:38 np0005539552 systemd-logind[788]: Removed session 71.
Nov 29 03:55:39 np0005539552 systemd-logind[788]: New session 72 of user zuul.
Nov 29 03:55:39 np0005539552 systemd[1]: Started Session 72 of User zuul.
Nov 29 03:55:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:55:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1036308638' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:55:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:55:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1036308638' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:55:39 np0005539552 systemd[1]: session-72.scope: Deactivated successfully.
Nov 29 03:55:39 np0005539552 systemd-logind[788]: Session 72 logged out. Waiting for processes to exit.
Nov 29 03:55:39 np0005539552 systemd-logind[788]: Removed session 72.
Nov 29 03:55:39 np0005539552 systemd-logind[788]: New session 73 of user zuul.
Nov 29 03:55:39 np0005539552 systemd[1]: Started Session 73 of User zuul.
Nov 29 03:55:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:39.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:39 np0005539552 systemd[1]: session-73.scope: Deactivated successfully.
Nov 29 03:55:39 np0005539552 systemd-logind[788]: Session 73 logged out. Waiting for processes to exit.
Nov 29 03:55:39 np0005539552 systemd-logind[788]: Removed session 73.
Nov 29 03:55:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:39.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:41.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:41.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:42 np0005539552 nova_compute[233724]: 2025-11-29 08:55:42.690 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:42 np0005539552 nova_compute[233724]: 2025-11-29 08:55:42.788 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:43.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:43.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:44 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:55:44 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:55:44 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:55:44 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:45.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:45.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:47.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:47 np0005539552 nova_compute[233724]: 2025-11-29 08:55:47.694 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:47 np0005539552 nova_compute[233724]: 2025-11-29 08:55:47.790 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:47.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:49.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:49 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:49.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:51.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:51 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:55:51 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.3 total, 600.0 interval#012Cumulative writes: 80K writes, 319K keys, 80K commit groups, 1.0 writes per commit group, ingest: 0.32 GB, 0.06 MB/s#012Cumulative WAL: 80K writes, 29K syncs, 2.68 writes per sync, written: 0.32 GB, 0.06 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5435 writes, 20K keys, 5435 commit groups, 1.0 writes per commit group, ingest: 21.68 MB, 0.04 MB/s#012Interval WAL: 5435 writes, 2202 syncs, 2.47 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.3 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55cbab63d350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.3 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55cbab63d350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.3 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 me
Nov 29 03:55:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:51.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:52 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:55:52 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:55:52 np0005539552 nova_compute[233724]: 2025-11-29 08:55:52.698 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:52 np0005539552 nova_compute[233724]: 2025-11-29 08:55:52.791 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:53.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:53.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:54 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:55.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:55.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:56 np0005539552 nova_compute[233724]: 2025-11-29 08:55:56.031 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:55:56 np0005539552 nova_compute[233724]: 2025-11-29 08:55:56.066 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:55:56 np0005539552 nova_compute[233724]: 2025-11-29 08:55:56.067 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:55:56 np0005539552 nova_compute[233724]: 2025-11-29 08:55:56.068 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:55:56 np0005539552 nova_compute[233724]: 2025-11-29 08:55:56.068 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:55:56 np0005539552 nova_compute[233724]: 2025-11-29 08:55:56.069 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:55:56 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:55:56 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1318026155' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:55:56 np0005539552 nova_compute[233724]: 2025-11-29 08:55:56.506 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:55:56 np0005539552 nova_compute[233724]: 2025-11-29 08:55:56.722 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:55:56 np0005539552 nova_compute[233724]: 2025-11-29 08:55:56.724 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4059MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:55:56 np0005539552 nova_compute[233724]: 2025-11-29 08:55:56.725 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:55:56 np0005539552 nova_compute[233724]: 2025-11-29 08:55:56.725 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:55:56 np0005539552 nova_compute[233724]: 2025-11-29 08:55:56.832 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:55:56 np0005539552 nova_compute[233724]: 2025-11-29 08:55:56.832 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:55:56 np0005539552 nova_compute[233724]: 2025-11-29 08:55:56.857 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:55:57 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:55:57 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3704820395' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:55:57 np0005539552 nova_compute[233724]: 2025-11-29 08:55:57.375 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:55:57 np0005539552 nova_compute[233724]: 2025-11-29 08:55:57.382 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:55:57 np0005539552 nova_compute[233724]: 2025-11-29 08:55:57.402 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:55:57 np0005539552 nova_compute[233724]: 2025-11-29 08:55:57.403 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:55:57 np0005539552 nova_compute[233724]: 2025-11-29 08:55:57.404 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:55:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:57.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:57 np0005539552 nova_compute[233724]: 2025-11-29 08:55:57.701 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:57 np0005539552 nova_compute[233724]: 2025-11-29 08:55:57.793 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:57.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:57 np0005539552 nova_compute[233724]: 2025-11-29 08:55:57.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:55:57 np0005539552 nova_compute[233724]: 2025-11-29 08:55:57.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 03:55:57 np0005539552 nova_compute[233724]: 2025-11-29 08:55:57.940 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 03:55:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:59.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:59 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:55:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:59.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:59 np0005539552 podman[335158]: 2025-11-29 08:55:59.985452726 +0000 UTC m=+0.071279130 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Nov 29 03:55:59 np0005539552 podman[335157]: 2025-11-29 08:55:59.990690377 +0000 UTC m=+0.078244487 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:56:00 np0005539552 podman[335159]: 2025-11-29 08:56:00.017762596 +0000 UTC m=+0.100538098 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=ovn_controller)
Nov 29 03:56:00 np0005539552 nova_compute[233724]: 2025-11-29 08:56:00.940 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:56:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:01.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:01.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:01 np0005539552 nova_compute[233724]: 2025-11-29 08:56:01.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:56:01 np0005539552 nova_compute[233724]: 2025-11-29 08:56:01.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:56:02 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #157. Immutable memtables: 0.
Nov 29 03:56:02 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:56:02.089035) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:56:02 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 157
Nov 29 03:56:02 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406562089128, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 557, "num_deletes": 255, "total_data_size": 833512, "memory_usage": 843896, "flush_reason": "Manual Compaction"}
Nov 29 03:56:02 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #158: started
Nov 29 03:56:02 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406562096656, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 158, "file_size": 550047, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 77345, "largest_seqno": 77897, "table_properties": {"data_size": 547097, "index_size": 921, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6826, "raw_average_key_size": 18, "raw_value_size": 541200, "raw_average_value_size": 1478, "num_data_blocks": 40, "num_entries": 366, "num_filter_entries": 366, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406532, "oldest_key_time": 1764406532, "file_creation_time": 1764406562, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:56:02 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 7668 microseconds, and 4194 cpu microseconds.
Nov 29 03:56:02 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:56:02 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:56:02.096710) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #158: 550047 bytes OK
Nov 29 03:56:02 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:56:02.096727) [db/memtable_list.cc:519] [default] Level-0 commit table #158 started
Nov 29 03:56:02 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:56:02.099352) [db/memtable_list.cc:722] [default] Level-0 commit table #158: memtable #1 done
Nov 29 03:56:02 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:56:02.099365) EVENT_LOG_v1 {"time_micros": 1764406562099360, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:56:02 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:56:02.099378) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:56:02 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 830282, prev total WAL file size 830282, number of live WAL files 2.
Nov 29 03:56:02 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000154.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:56:02 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:56:02.099769) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373635' seq:72057594037927935, type:22 .. '6C6F676D0033303136' seq:0, type:0; will stop at (end)
Nov 29 03:56:02 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:56:02 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [158(537KB)], [156(11MB)]
Nov 29 03:56:02 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406562099801, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [158], "files_L6": [156], "score": -1, "input_data_size": 12588401, "oldest_snapshot_seqno": -1}
Nov 29 03:56:02 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #159: 10820 keys, 12447276 bytes, temperature: kUnknown
Nov 29 03:56:02 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406562196466, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 159, "file_size": 12447276, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12379317, "index_size": 39814, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27077, "raw_key_size": 287363, "raw_average_key_size": 26, "raw_value_size": 12191450, "raw_average_value_size": 1126, "num_data_blocks": 1506, "num_entries": 10820, "num_filter_entries": 10820, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764406562, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 159, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:56:02 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:56:02 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:56:02.197025) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 12447276 bytes
Nov 29 03:56:02 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:56:02.198362) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 129.8 rd, 128.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 11.5 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(45.5) write-amplify(22.6) OK, records in: 11342, records dropped: 522 output_compression: NoCompression
Nov 29 03:56:02 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:56:02.198391) EVENT_LOG_v1 {"time_micros": 1764406562198380, "job": 100, "event": "compaction_finished", "compaction_time_micros": 97009, "compaction_time_cpu_micros": 29993, "output_level": 6, "num_output_files": 1, "total_output_size": 12447276, "num_input_records": 11342, "num_output_records": 10820, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:56:02 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:56:02 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406562198573, "job": 100, "event": "table_file_deletion", "file_number": 158}
Nov 29 03:56:02 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000156.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:56:02 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406562200486, "job": 100, "event": "table_file_deletion", "file_number": 156}
Nov 29 03:56:02 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:56:02.099702) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:56:02 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:56:02.200633) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:56:02 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:56:02.200637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:56:02 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:56:02.200639) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:56:02 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:56:02.200640) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:56:02 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:56:02.200642) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:56:02 np0005539552 nova_compute[233724]: 2025-11-29 08:56:02.706 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:02 np0005539552 nova_compute[233724]: 2025-11-29 08:56:02.795 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:03.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:03.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:04 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:05.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:05.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:05 np0005539552 nova_compute[233724]: 2025-11-29 08:56:05.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:56:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:56:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:07.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:56:07 np0005539552 nova_compute[233724]: 2025-11-29 08:56:07.709 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:07 np0005539552 nova_compute[233724]: 2025-11-29 08:56:07.797 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:07.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:07 np0005539552 nova_compute[233724]: 2025-11-29 08:56:07.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:56:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:09.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:09 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:09.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:09 np0005539552 nova_compute[233724]: 2025-11-29 08:56:09.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:56:10 np0005539552 nova_compute[233724]: 2025-11-29 08:56:10.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:56:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:11.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:11.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:12 np0005539552 nova_compute[233724]: 2025-11-29 08:56:12.712 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:12 np0005539552 nova_compute[233724]: 2025-11-29 08:56:12.798 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:13.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:13 np0005539552 nova_compute[233724]: 2025-11-29 08:56:13.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:56:13 np0005539552 nova_compute[233724]: 2025-11-29 08:56:13.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:56:13 np0005539552 nova_compute[233724]: 2025-11-29 08:56:13.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:56:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:13.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:13 np0005539552 nova_compute[233724]: 2025-11-29 08:56:13.953 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:56:14 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:15.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:56:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:15.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:56:15 np0005539552 nova_compute[233724]: 2025-11-29 08:56:15.948 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:56:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:17.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:17 np0005539552 nova_compute[233724]: 2025-11-29 08:56:17.714 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:17 np0005539552 nova_compute[233724]: 2025-11-29 08:56:17.799 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:17.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:19.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:19 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:19.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:56:20.666 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:56:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:56:20.667 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:56:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:56:20.667 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:56:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:21.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:21.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:22 np0005539552 nova_compute[233724]: 2025-11-29 08:56:22.718 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:22 np0005539552 nova_compute[233724]: 2025-11-29 08:56:22.801 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:23.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:23.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:24 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:25.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:25 np0005539552 nova_compute[233724]: 2025-11-29 08:56:25.772 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:56:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:25.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:27.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:27 np0005539552 nova_compute[233724]: 2025-11-29 08:56:27.722 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:27 np0005539552 nova_compute[233724]: 2025-11-29 08:56:27.803 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:27.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:29.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:29 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:29.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:30 np0005539552 podman[335285]: 2025-11-29 08:56:30.977441807 +0000 UTC m=+0.056196992 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:56:30 np0005539552 podman[335286]: 2025-11-29 08:56:30.978056004 +0000 UTC m=+0.049622206 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true)
Nov 29 03:56:31 np0005539552 podman[335290]: 2025-11-29 08:56:31.038458028 +0000 UTC m=+0.099697592 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 29 03:56:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:31.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:31.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:32 np0005539552 nova_compute[233724]: 2025-11-29 08:56:32.725 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:32 np0005539552 nova_compute[233724]: 2025-11-29 08:56:32.805 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:33.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:33.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:34 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:35.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:35.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:37 np0005539552 radosgw[83248]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Nov 29 03:56:37 np0005539552 radosgw[83248]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Nov 29 03:56:37 np0005539552 radosgw[83248]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Nov 29 03:56:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:37.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:37 np0005539552 radosgw[83248]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Nov 29 03:56:37 np0005539552 radosgw[83248]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Nov 29 03:56:37 np0005539552 nova_compute[233724]: 2025-11-29 08:56:37.727 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:37 np0005539552 nova_compute[233724]: 2025-11-29 08:56:37.806 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:37.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:56:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1230995328' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:56:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:56:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1230995328' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:56:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:39.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:39.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:41.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:41.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:42 np0005539552 nova_compute[233724]: 2025-11-29 08:56:42.731 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:42 np0005539552 nova_compute[233724]: 2025-11-29 08:56:42.808 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:43.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:43.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:44 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:56:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:45.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:56:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:45.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:47.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:47 np0005539552 nova_compute[233724]: 2025-11-29 08:56:47.735 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:47 np0005539552 nova_compute[233724]: 2025-11-29 08:56:47.810 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:47.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:49.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:49 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:49.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:51.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:51 np0005539552 systemd[1]: Starting dnf makecache...
Nov 29 03:56:51 np0005539552 dnf[335433]: Metadata cache refreshed recently.
Nov 29 03:56:51 np0005539552 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 29 03:56:51 np0005539552 systemd[1]: Finished dnf makecache.
Nov 29 03:56:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:51.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:52 np0005539552 nova_compute[233724]: 2025-11-29 08:56:52.739 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:52 np0005539552 nova_compute[233724]: 2025-11-29 08:56:52.811 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:53.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:53.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:54 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:56:54 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:56:54 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:56:54 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:56:55 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:55.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:55 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:56:55 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:56:55 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:56:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:55.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:57.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:57 np0005539552 nova_compute[233724]: 2025-11-29 08:56:57.741 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:57 np0005539552 nova_compute[233724]: 2025-11-29 08:56:57.813 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:57 np0005539552 nova_compute[233724]: 2025-11-29 08:56:57.922 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:56:57 np0005539552 nova_compute[233724]: 2025-11-29 08:56:57.966 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:56:57 np0005539552 nova_compute[233724]: 2025-11-29 08:56:57.966 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:56:57 np0005539552 nova_compute[233724]: 2025-11-29 08:56:57.967 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:56:57 np0005539552 nova_compute[233724]: 2025-11-29 08:56:57.967 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:56:57 np0005539552 nova_compute[233724]: 2025-11-29 08:56:57.968 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:56:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:57.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:56:58 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1169295365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:56:58 np0005539552 nova_compute[233724]: 2025-11-29 08:56:58.484 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:56:58 np0005539552 nova_compute[233724]: 2025-11-29 08:56:58.634 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:56:58 np0005539552 nova_compute[233724]: 2025-11-29 08:56:58.635 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4071MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:56:58 np0005539552 nova_compute[233724]: 2025-11-29 08:56:58.635 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:56:58 np0005539552 nova_compute[233724]: 2025-11-29 08:56:58.635 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:56:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:59.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:56:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:59.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:00 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:00 np0005539552 nova_compute[233724]: 2025-11-29 08:57:00.093 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:57:00 np0005539552 nova_compute[233724]: 2025-11-29 08:57:00.094 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:57:00 np0005539552 nova_compute[233724]: 2025-11-29 08:57:00.142 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Refreshing inventories for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:57:00 np0005539552 nova_compute[233724]: 2025-11-29 08:57:00.182 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Updating ProviderTree inventory for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:57:00 np0005539552 nova_compute[233724]: 2025-11-29 08:57:00.182 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Updating inventory in ProviderTree for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 03:57:00 np0005539552 nova_compute[233724]: 2025-11-29 08:57:00.202 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Refreshing aggregate associations for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:57:00 np0005539552 nova_compute[233724]: 2025-11-29 08:57:00.228 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Refreshing trait associations for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371, traits: HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_MMX,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_FDC,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:57:00 np0005539552 nova_compute[233724]: 2025-11-29 08:57:00.247 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:57:00 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:57:00 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3105733722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:57:00 np0005539552 nova_compute[233724]: 2025-11-29 08:57:00.731 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:57:00 np0005539552 nova_compute[233724]: 2025-11-29 08:57:00.738 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:57:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:01.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:01.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:01 np0005539552 podman[335644]: 2025-11-29 08:57:01.993613493 +0000 UTC m=+0.081500993 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 29 03:57:02 np0005539552 podman[335643]: 2025-11-29 08:57:02.01878157 +0000 UTC m=+0.099983450 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd)
Nov 29 03:57:02 np0005539552 podman[335645]: 2025-11-29 08:57:02.053493764 +0000 UTC m=+0.131667413 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 03:57:02 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:57:02 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:57:02 np0005539552 nova_compute[233724]: 2025-11-29 08:57:02.655 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:57:02 np0005539552 nova_compute[233724]: 2025-11-29 08:57:02.658 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:57:02 np0005539552 nova_compute[233724]: 2025-11-29 08:57:02.659 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 4.024s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:57:02 np0005539552 nova_compute[233724]: 2025-11-29 08:57:02.744 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:02 np0005539552 nova_compute[233724]: 2025-11-29 08:57:02.815 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:03.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:03.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:05 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:05.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:05.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:06 np0005539552 nova_compute[233724]: 2025-11-29 08:57:06.661 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:57:06 np0005539552 nova_compute[233724]: 2025-11-29 08:57:06.662 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:57:06 np0005539552 nova_compute[233724]: 2025-11-29 08:57:06.663 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:57:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:07.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:07 np0005539552 nova_compute[233724]: 2025-11-29 08:57:07.748 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:07 np0005539552 nova_compute[233724]: 2025-11-29 08:57:07.817 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:07 np0005539552 nova_compute[233724]: 2025-11-29 08:57:07.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:57:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:07.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:09.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:09 np0005539552 nova_compute[233724]: 2025-11-29 08:57:09.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:57:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:09.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:10 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:11.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:11 np0005539552 nova_compute[233724]: 2025-11-29 08:57:11.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:57:11 np0005539552 nova_compute[233724]: 2025-11-29 08:57:11.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:57:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:11.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:12 np0005539552 nova_compute[233724]: 2025-11-29 08:57:12.751 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:12 np0005539552 nova_compute[233724]: 2025-11-29 08:57:12.820 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:57:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:13.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:57:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:13.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:15 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:15.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:15 np0005539552 nova_compute[233724]: 2025-11-29 08:57:15.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:57:15 np0005539552 nova_compute[233724]: 2025-11-29 08:57:15.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:57:15 np0005539552 nova_compute[233724]: 2025-11-29 08:57:15.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:57:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:15.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:16 np0005539552 nova_compute[233724]: 2025-11-29 08:57:16.009 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:57:17 np0005539552 nova_compute[233724]: 2025-11-29 08:57:17.005 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:57:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:17.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:17 np0005539552 nova_compute[233724]: 2025-11-29 08:57:17.753 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:17 np0005539552 nova_compute[233724]: 2025-11-29 08:57:17.822 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:17.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:19.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:19.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:20 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:57:20.668 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:57:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:57:20.668 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:57:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:57:20.669 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:57:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:21.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:21.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:22 np0005539552 nova_compute[233724]: 2025-11-29 08:57:22.757 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:22 np0005539552 nova_compute[233724]: 2025-11-29 08:57:22.825 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:23.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:24.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:25 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:25.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:26.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:27.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:27 np0005539552 nova_compute[233724]: 2025-11-29 08:57:27.759 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:27 np0005539552 nova_compute[233724]: 2025-11-29 08:57:27.827 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:28.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:29.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:29 np0005539552 nova_compute[233724]: 2025-11-29 08:57:29.919 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:57:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:30.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:30 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:31.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:32.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:32 np0005539552 nova_compute[233724]: 2025-11-29 08:57:32.762 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:32 np0005539552 nova_compute[233724]: 2025-11-29 08:57:32.828 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:32 np0005539552 podman[335825]: 2025-11-29 08:57:32.96881517 +0000 UTC m=+0.057973320 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 03:57:32 np0005539552 podman[335826]: 2025-11-29 08:57:32.991514201 +0000 UTC m=+0.079599302 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:57:33 np0005539552 podman[335827]: 2025-11-29 08:57:33.020326146 +0000 UTC m=+0.095679885 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 03:57:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:33.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:34.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:35 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:35.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:36.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:37.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:37 np0005539552 nova_compute[233724]: 2025-11-29 08:57:37.763 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:37 np0005539552 nova_compute[233724]: 2025-11-29 08:57:37.830 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:38.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:39.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:40.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:40 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:41.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:42.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:42 np0005539552 nova_compute[233724]: 2025-11-29 08:57:42.766 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:42 np0005539552 nova_compute[233724]: 2025-11-29 08:57:42.831 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:43.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:44.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:45 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:45.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:46.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:47.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:47 np0005539552 nova_compute[233724]: 2025-11-29 08:57:47.770 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:47 np0005539552 nova_compute[233724]: 2025-11-29 08:57:47.835 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:48.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:49.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:50.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:50 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:51.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:52.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:52 np0005539552 nova_compute[233724]: 2025-11-29 08:57:52.773 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:52 np0005539552 nova_compute[233724]: 2025-11-29 08:57:52.836 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:53.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:54.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:55.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:55 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:56.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:57.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:57 np0005539552 nova_compute[233724]: 2025-11-29 08:57:57.777 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:57 np0005539552 nova_compute[233724]: 2025-11-29 08:57:57.839 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:57 np0005539552 nova_compute[233724]: 2025-11-29 08:57:57.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:57:57 np0005539552 nova_compute[233724]: 2025-11-29 08:57:57.957 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:57:57 np0005539552 nova_compute[233724]: 2025-11-29 08:57:57.958 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:57:57 np0005539552 nova_compute[233724]: 2025-11-29 08:57:57.958 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:57:57 np0005539552 nova_compute[233724]: 2025-11-29 08:57:57.959 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:57:57 np0005539552 nova_compute[233724]: 2025-11-29 08:57:57.959 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:57:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:58.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:57:58 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1164050437' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:57:58 np0005539552 nova_compute[233724]: 2025-11-29 08:57:58.418 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:57:58 np0005539552 nova_compute[233724]: 2025-11-29 08:57:58.673 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:57:58 np0005539552 nova_compute[233724]: 2025-11-29 08:57:58.675 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4074MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:57:58 np0005539552 nova_compute[233724]: 2025-11-29 08:57:58.675 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:57:58 np0005539552 nova_compute[233724]: 2025-11-29 08:57:58.675 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:57:58 np0005539552 nova_compute[233724]: 2025-11-29 08:57:58.947 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:57:58 np0005539552 nova_compute[233724]: 2025-11-29 08:57:58.948 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:57:59 np0005539552 nova_compute[233724]: 2025-11-29 08:57:59.059 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:57:59 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:57:59 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4119893673' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:57:59 np0005539552 nova_compute[233724]: 2025-11-29 08:57:59.530 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:57:59 np0005539552 nova_compute[233724]: 2025-11-29 08:57:59.536 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:57:59 np0005539552 nova_compute[233724]: 2025-11-29 08:57:59.553 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:57:59 np0005539552 nova_compute[233724]: 2025-11-29 08:57:59.554 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:57:59 np0005539552 nova_compute[233724]: 2025-11-29 08:57:59.554 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.879s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:57:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:57:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:59.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:00.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:00 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #160. Immutable memtables: 0.
Nov 29 03:58:00 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:58:00.655057) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:58:00 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 160
Nov 29 03:58:00 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406680655120, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 1369, "num_deletes": 251, "total_data_size": 3108816, "memory_usage": 3136368, "flush_reason": "Manual Compaction"}
Nov 29 03:58:00 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #161: started
Nov 29 03:58:00 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406680676305, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 161, "file_size": 2040821, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 77902, "largest_seqno": 79266, "table_properties": {"data_size": 2034972, "index_size": 3179, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12429, "raw_average_key_size": 19, "raw_value_size": 2023247, "raw_average_value_size": 3242, "num_data_blocks": 141, "num_entries": 624, "num_filter_entries": 624, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406562, "oldest_key_time": 1764406562, "file_creation_time": 1764406680, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:58:00 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 21348 microseconds, and 9625 cpu microseconds.
Nov 29 03:58:00 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:58:00 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:58:00.676399) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #161: 2040821 bytes OK
Nov 29 03:58:00 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:58:00.676427) [db/memtable_list.cc:519] [default] Level-0 commit table #161 started
Nov 29 03:58:00 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:58:00.678535) [db/memtable_list.cc:722] [default] Level-0 commit table #161: memtable #1 done
Nov 29 03:58:00 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:58:00.678558) EVENT_LOG_v1 {"time_micros": 1764406680678550, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:58:00 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:58:00.678581) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:58:00 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 3102426, prev total WAL file size 3102426, number of live WAL files 2.
Nov 29 03:58:00 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000157.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:58:00 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:58:00.680144) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Nov 29 03:58:00 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:58:00 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [161(1992KB)], [159(11MB)]
Nov 29 03:58:00 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406680680180, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [161], "files_L6": [159], "score": -1, "input_data_size": 14488097, "oldest_snapshot_seqno": -1}
Nov 29 03:58:00 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:00 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #162: 10927 keys, 12556321 bytes, temperature: kUnknown
Nov 29 03:58:00 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406680816475, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 162, "file_size": 12556321, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12487775, "index_size": 40145, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27333, "raw_key_size": 290222, "raw_average_key_size": 26, "raw_value_size": 12297998, "raw_average_value_size": 1125, "num_data_blocks": 1515, "num_entries": 10927, "num_filter_entries": 10927, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764406680, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 162, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:58:00 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:58:00 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:58:00.816920) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 12556321 bytes
Nov 29 03:58:00 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:58:00.829339) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 106.1 rd, 92.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 11.9 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(13.3) write-amplify(6.2) OK, records in: 11444, records dropped: 517 output_compression: NoCompression
Nov 29 03:58:00 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:58:00.829369) EVENT_LOG_v1 {"time_micros": 1764406680829356, "job": 102, "event": "compaction_finished", "compaction_time_micros": 136524, "compaction_time_cpu_micros": 58557, "output_level": 6, "num_output_files": 1, "total_output_size": 12556321, "num_input_records": 11444, "num_output_records": 10927, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:58:00 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:58:00 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406680830383, "job": 102, "event": "table_file_deletion", "file_number": 161}
Nov 29 03:58:00 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000159.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:58:00 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406680834599, "job": 102, "event": "table_file_deletion", "file_number": 159}
Nov 29 03:58:00 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:58:00.680038) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:58:00 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:58:00.834738) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:58:00 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:58:00.834744) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:58:00 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:58:00.834747) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:58:00 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:58:00.834750) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:58:00 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-08:58:00.834753) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:58:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:01.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:02.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:02 np0005539552 nova_compute[233724]: 2025-11-29 08:58:02.780 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:02 np0005539552 nova_compute[233724]: 2025-11-29 08:58:02.840 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:03.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:03 np0005539552 podman[336180]: 2025-11-29 08:58:03.992766837 +0000 UTC m=+0.074228157 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:58:04 np0005539552 podman[336181]: 2025-11-29 08:58:04.000980658 +0000 UTC m=+0.071008611 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:58:04 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:58:04 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:58:04 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:58:04 np0005539552 podman[336182]: 2025-11-29 08:58:04.039444512 +0000 UTC m=+0.104568813 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:58:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:04.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:04 np0005539552 nova_compute[233724]: 2025-11-29 08:58:04.555 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:58:04 np0005539552 nova_compute[233724]: 2025-11-29 08:58:04.556 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:58:04 np0005539552 nova_compute[233724]: 2025-11-29 08:58:04.557 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:58:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:05.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:05 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:06.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:07.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:07 np0005539552 nova_compute[233724]: 2025-11-29 08:58:07.785 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:07 np0005539552 nova_compute[233724]: 2025-11-29 08:58:07.842 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 03:58:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:08.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 03:58:08 np0005539552 nova_compute[233724]: 2025-11-29 08:58:08.925 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:58:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:09.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:10.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:10 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:58:10 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:58:10 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:11.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:11 np0005539552 nova_compute[233724]: 2025-11-29 08:58:11.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:58:11 np0005539552 nova_compute[233724]: 2025-11-29 08:58:11.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:58:11 np0005539552 nova_compute[233724]: 2025-11-29 08:58:11.925 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:58:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:12.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:12 np0005539552 nova_compute[233724]: 2025-11-29 08:58:12.788 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:12 np0005539552 nova_compute[233724]: 2025-11-29 08:58:12.844 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:13.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:14.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:15.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:15 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:15 np0005539552 nova_compute[233724]: 2025-11-29 08:58:15.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:58:15 np0005539552 nova_compute[233724]: 2025-11-29 08:58:15.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:58:15 np0005539552 nova_compute[233724]: 2025-11-29 08:58:15.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:58:15 np0005539552 nova_compute[233724]: 2025-11-29 08:58:15.942 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:58:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:16.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:17.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:17 np0005539552 nova_compute[233724]: 2025-11-29 08:58:17.791 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:17 np0005539552 nova_compute[233724]: 2025-11-29 08:58:17.846 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:17 np0005539552 nova_compute[233724]: 2025-11-29 08:58:17.937 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:58:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:18.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:19.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:20.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:58:20.669 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:58:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:58:20.670 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:58:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:58:20.670 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:58:20 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:21.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:22.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:22 np0005539552 nova_compute[233724]: 2025-11-29 08:58:22.795 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:22 np0005539552 nova_compute[233724]: 2025-11-29 08:58:22.849 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:23.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:24.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:25.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:25 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:26.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:27.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:27 np0005539552 nova_compute[233724]: 2025-11-29 08:58:27.798 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:27 np0005539552 nova_compute[233724]: 2025-11-29 08:58:27.850 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:28.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:29.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:30.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:30 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:31.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:32.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:32 np0005539552 nova_compute[233724]: 2025-11-29 08:58:32.801 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:32 np0005539552 nova_compute[233724]: 2025-11-29 08:58:32.851 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:33.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:34.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:35 np0005539552 podman[336363]: 2025-11-29 08:58:35.012512378 +0000 UTC m=+0.089162539 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 29 03:58:35 np0005539552 podman[336362]: 2025-11-29 08:58:35.023181505 +0000 UTC m=+0.102121687 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Nov 29 03:58:35 np0005539552 podman[336364]: 2025-11-29 08:58:35.063939301 +0000 UTC m=+0.129852043 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 29 03:58:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:35.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:35 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:36.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:58:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:37.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:58:37 np0005539552 nova_compute[233724]: 2025-11-29 08:58:37.804 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:37 np0005539552 nova_compute[233724]: 2025-11-29 08:58:37.854 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:38.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:39.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:40.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:40 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:41.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:42.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:42 np0005539552 nova_compute[233724]: 2025-11-29 08:58:42.807 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:42 np0005539552 nova_compute[233724]: 2025-11-29 08:58:42.856 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:43.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:44.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:45.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:45 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:46.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:47.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:47 np0005539552 nova_compute[233724]: 2025-11-29 08:58:47.811 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:47 np0005539552 nova_compute[233724]: 2025-11-29 08:58:47.858 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:48.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:49.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:50.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:50 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:51 np0005539552 nova_compute[233724]: 2025-11-29 08:58:51.410 233728 DEBUG oslo_concurrency.processutils [None req-ff64d45f-ac0e-4f0e-bb3f-545ad70e0b2f 07d8fdc1f04d4769b5744eeac3a6f5f4 313f5427e3624aa189013c3cc05bee02 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:58:51 np0005539552 nova_compute[233724]: 2025-11-29 08:58:51.448 233728 DEBUG oslo_concurrency.processutils [None req-ff64d45f-ac0e-4f0e-bb3f-545ad70e0b2f 07d8fdc1f04d4769b5744eeac3a6f5f4 313f5427e3624aa189013c3cc05bee02 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.038s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:58:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:51.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:52.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:52 np0005539552 nova_compute[233724]: 2025-11-29 08:58:52.813 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:52 np0005539552 nova_compute[233724]: 2025-11-29 08:58:52.860 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:53.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:54.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:55.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:55 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:56.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:58:56.453 143400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=71, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=70) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:58:56 np0005539552 nova_compute[233724]: 2025-11-29 08:58:56.452 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:56 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:58:56.454 143400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:58:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:57.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:57 np0005539552 nova_compute[233724]: 2025-11-29 08:58:57.817 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:57 np0005539552 nova_compute[233724]: 2025-11-29 08:58:57.863 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:58.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:59 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:58:59.455 143400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=479f969f-dbf7-4938-8979-b8532eb113f6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '71'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:58:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:58:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:59.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:59 np0005539552 nova_compute[233724]: 2025-11-29 08:58:59.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:58:59 np0005539552 nova_compute[233724]: 2025-11-29 08:58:59.959 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:58:59 np0005539552 nova_compute[233724]: 2025-11-29 08:58:59.960 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:58:59 np0005539552 nova_compute[233724]: 2025-11-29 08:58:59.960 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:58:59 np0005539552 nova_compute[233724]: 2025-11-29 08:58:59.961 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:58:59 np0005539552 nova_compute[233724]: 2025-11-29 08:58:59.961 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:59:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:00.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:00 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:59:00 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2238622254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:59:00 np0005539552 nova_compute[233724]: 2025-11-29 08:59:00.421 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:59:00 np0005539552 nova_compute[233724]: 2025-11-29 08:59:00.610 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:59:00 np0005539552 nova_compute[233724]: 2025-11-29 08:59:00.613 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4084MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:59:00 np0005539552 nova_compute[233724]: 2025-11-29 08:59:00.613 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:59:00 np0005539552 nova_compute[233724]: 2025-11-29 08:59:00.614 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:59:00 np0005539552 nova_compute[233724]: 2025-11-29 08:59:00.679 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:59:00 np0005539552 nova_compute[233724]: 2025-11-29 08:59:00.680 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:59:00 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:00 np0005539552 nova_compute[233724]: 2025-11-29 08:59:00.760 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:59:01 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:59:01 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3975654025' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:59:01 np0005539552 nova_compute[233724]: 2025-11-29 08:59:01.189 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:59:01 np0005539552 nova_compute[233724]: 2025-11-29 08:59:01.195 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:59:01 np0005539552 nova_compute[233724]: 2025-11-29 08:59:01.215 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:59:01 np0005539552 nova_compute[233724]: 2025-11-29 08:59:01.217 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:59:01 np0005539552 nova_compute[233724]: 2025-11-29 08:59:01.217 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:59:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:01.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:02.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:02 np0005539552 nova_compute[233724]: 2025-11-29 08:59:02.823 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:02 np0005539552 nova_compute[233724]: 2025-11-29 08:59:02.865 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:03.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:04.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:05.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:05 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:06 np0005539552 podman[336589]: 2025-11-29 08:59:06.022842244 +0000 UTC m=+0.103138255 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 03:59:06 np0005539552 podman[336590]: 2025-11-29 08:59:06.02305671 +0000 UTC m=+0.097615056 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:59:06 np0005539552 podman[336591]: 2025-11-29 08:59:06.06694298 +0000 UTC m=+0.134340354 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:59:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:06.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:06 np0005539552 nova_compute[233724]: 2025-11-29 08:59:06.219 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:59:06 np0005539552 nova_compute[233724]: 2025-11-29 08:59:06.220 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:59:06 np0005539552 nova_compute[233724]: 2025-11-29 08:59:06.220 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:59:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 03:59:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:07.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 03:59:07 np0005539552 nova_compute[233724]: 2025-11-29 08:59:07.827 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:07 np0005539552 nova_compute[233724]: 2025-11-29 08:59:07.866 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:08.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:08 np0005539552 nova_compute[233724]: 2025-11-29 08:59:08.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:59:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:09.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:10.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:10 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:11.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:12 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:59:12 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:59:12 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:59:12 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:59:12 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:59:12 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:59:12 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 03:59:12 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 03:59:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:12.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:12 np0005539552 nova_compute[233724]: 2025-11-29 08:59:12.830 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:12 np0005539552 nova_compute[233724]: 2025-11-29 08:59:12.868 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:12 np0005539552 nova_compute[233724]: 2025-11-29 08:59:12.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:59:12 np0005539552 nova_compute[233724]: 2025-11-29 08:59:12.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:59:13 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:59:13 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:59:13 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:59:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:13.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:13 np0005539552 nova_compute[233724]: 2025-11-29 08:59:13.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:59:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:14.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:15 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:15.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:15 np0005539552 nova_compute[233724]: 2025-11-29 08:59:15.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:59:15 np0005539552 nova_compute[233724]: 2025-11-29 08:59:15.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:59:15 np0005539552 nova_compute[233724]: 2025-11-29 08:59:15.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:59:15 np0005539552 nova_compute[233724]: 2025-11-29 08:59:15.940 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:59:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:16.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:17.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:17 np0005539552 nova_compute[233724]: 2025-11-29 08:59:17.833 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:17 np0005539552 nova_compute[233724]: 2025-11-29 08:59:17.871 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:17 np0005539552 nova_compute[233724]: 2025-11-29 08:59:17.934 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:59:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:18.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:18 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:59:18 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:59:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:19.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:20.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:59:20.670 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:59:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:59:20.670 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:59:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 08:59:20.671 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:59:20 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:21.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:22.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:22 np0005539552 nova_compute[233724]: 2025-11-29 08:59:22.836 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:22 np0005539552 nova_compute[233724]: 2025-11-29 08:59:22.872 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:23.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:24.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:25 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:25.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:26.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:59:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:27.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:59:27 np0005539552 nova_compute[233724]: 2025-11-29 08:59:27.839 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:27 np0005539552 nova_compute[233724]: 2025-11-29 08:59:27.874 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:28.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:29.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:30.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:30 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:31.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:32.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:32 np0005539552 nova_compute[233724]: 2025-11-29 08:59:32.842 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:32 np0005539552 nova_compute[233724]: 2025-11-29 08:59:32.877 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:33.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:33 np0005539552 nova_compute[233724]: 2025-11-29 08:59:33.918 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:59:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:34.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:35 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:35.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:36.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:37 np0005539552 podman[337023]: 2025-11-29 08:59:37.025404888 +0000 UTC m=+0.090944787 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:59:37 np0005539552 podman[337022]: 2025-11-29 08:59:37.030876305 +0000 UTC m=+0.100641808 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:59:37 np0005539552 podman[337024]: 2025-11-29 08:59:37.066967666 +0000 UTC m=+0.130116211 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Nov 29 03:59:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:37.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:37 np0005539552 nova_compute[233724]: 2025-11-29 08:59:37.843 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:37 np0005539552 nova_compute[233724]: 2025-11-29 08:59:37.878 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:38.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:59:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3346117843' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:59:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:59:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3346117843' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:59:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:39.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:40.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:40 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:41.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:42.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:42 np0005539552 nova_compute[233724]: 2025-11-29 08:59:42.847 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:42 np0005539552 nova_compute[233724]: 2025-11-29 08:59:42.880 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:43.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:44.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:45 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:45.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:46.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:47.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:47 np0005539552 nova_compute[233724]: 2025-11-29 08:59:47.849 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:47 np0005539552 nova_compute[233724]: 2025-11-29 08:59:47.881 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:48.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:49.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:50.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:50 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:51.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:52.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:52 np0005539552 nova_compute[233724]: 2025-11-29 08:59:52.852 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:52 np0005539552 nova_compute[233724]: 2025-11-29 08:59:52.883 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:53.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:54.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:55 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:55.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:56.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:57.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:57 np0005539552 nova_compute[233724]: 2025-11-29 08:59:57.854 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:57 np0005539552 nova_compute[233724]: 2025-11-29 08:59:57.885 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:58.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 03:59:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:59.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:00.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:00 np0005539552 ceph-mon[77121]: overall HEALTH_OK
Nov 29 04:00:00 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:00 np0005539552 nova_compute[233724]: 2025-11-29 09:00:00.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:00:01 np0005539552 nova_compute[233724]: 2025-11-29 09:00:01.029 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:00:01 np0005539552 nova_compute[233724]: 2025-11-29 09:00:01.030 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:00:01 np0005539552 nova_compute[233724]: 2025-11-29 09:00:01.030 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:00:01 np0005539552 nova_compute[233724]: 2025-11-29 09:00:01.031 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:00:01 np0005539552 nova_compute[233724]: 2025-11-29 09:00:01.031 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:00:01 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:00:01 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3808977757' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:00:01 np0005539552 nova_compute[233724]: 2025-11-29 09:00:01.543 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:00:01 np0005539552 nova_compute[233724]: 2025-11-29 09:00:01.694 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:00:01 np0005539552 nova_compute[233724]: 2025-11-29 09:00:01.695 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4092MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:00:01 np0005539552 nova_compute[233724]: 2025-11-29 09:00:01.695 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:00:01 np0005539552 nova_compute[233724]: 2025-11-29 09:00:01.695 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:00:01 np0005539552 nova_compute[233724]: 2025-11-29 09:00:01.760 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:00:01 np0005539552 nova_compute[233724]: 2025-11-29 09:00:01.760 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:00:01 np0005539552 nova_compute[233724]: 2025-11-29 09:00:01.772 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:00:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:01.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:02.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:00:02 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1608190485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:00:02 np0005539552 nova_compute[233724]: 2025-11-29 09:00:02.246 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:00:02 np0005539552 nova_compute[233724]: 2025-11-29 09:00:02.252 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:00:02 np0005539552 nova_compute[233724]: 2025-11-29 09:00:02.267 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:00:02 np0005539552 nova_compute[233724]: 2025-11-29 09:00:02.268 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:00:02 np0005539552 nova_compute[233724]: 2025-11-29 09:00:02.269 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:00:02 np0005539552 nova_compute[233724]: 2025-11-29 09:00:02.857 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:02 np0005539552 nova_compute[233724]: 2025-11-29 09:00:02.887 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:03.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:04.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:05 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:05.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:06.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:07 np0005539552 nova_compute[233724]: 2025-11-29 09:00:07.270 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:00:07 np0005539552 nova_compute[233724]: 2025-11-29 09:00:07.270 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:00:07 np0005539552 nova_compute[233724]: 2025-11-29 09:00:07.271 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:00:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:07.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:07 np0005539552 nova_compute[233724]: 2025-11-29 09:00:07.860 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:07 np0005539552 nova_compute[233724]: 2025-11-29 09:00:07.888 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:07 np0005539552 podman[337249]: 2025-11-29 09:00:07.967879354 +0000 UTC m=+0.049204424 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 04:00:08 np0005539552 podman[337248]: 2025-11-29 09:00:08.000013478 +0000 UTC m=+0.083993720 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible)
Nov 29 04:00:08 np0005539552 podman[337250]: 2025-11-29 09:00:08.001371805 +0000 UTC m=+0.072474720 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2)
Nov 29 04:00:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:08.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:08 np0005539552 nova_compute[233724]: 2025-11-29 09:00:08.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:00:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:00:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:09.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:00:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:10.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:10 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:11.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:12.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:12 np0005539552 nova_compute[233724]: 2025-11-29 09:00:12.862 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:12 np0005539552 nova_compute[233724]: 2025-11-29 09:00:12.892 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:12 np0005539552 nova_compute[233724]: 2025-11-29 09:00:12.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:00:12 np0005539552 nova_compute[233724]: 2025-11-29 09:00:12.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:00:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:13.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:14.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:15 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:00:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:15.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:00:15 np0005539552 nova_compute[233724]: 2025-11-29 09:00:15.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:00:15 np0005539552 nova_compute[233724]: 2025-11-29 09:00:15.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:00:15 np0005539552 nova_compute[233724]: 2025-11-29 09:00:15.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:00:15 np0005539552 nova_compute[233724]: 2025-11-29 09:00:15.938 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:00:15 np0005539552 nova_compute[233724]: 2025-11-29 09:00:15.938 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:00:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:16.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:00:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:17.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:00:17 np0005539552 nova_compute[233724]: 2025-11-29 09:00:17.864 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:17 np0005539552 nova_compute[233724]: 2025-11-29 09:00:17.896 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:17 np0005539552 nova_compute[233724]: 2025-11-29 09:00:17.934 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:00:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:18.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:18 np0005539552 auditd[702]: Audit daemon rotating log files
Nov 29 04:00:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:00:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:19.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:00:20 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:00:20 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:00:20 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:00:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:00:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:20.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:00:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 09:00:20.671 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:00:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 09:00:20.672 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:00:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 09:00:20.672 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:00:20 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:21.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:22.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:22 np0005539552 nova_compute[233724]: 2025-11-29 09:00:22.866 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:22 np0005539552 nova_compute[233724]: 2025-11-29 09:00:22.899 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:00:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:23.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:00:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:00:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:24.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:00:25 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:00:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:25.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:00:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:26.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:26 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:00:26 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:00:27 np0005539552 nova_compute[233724]: 2025-11-29 09:00:27.869 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:00:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:27.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:00:27 np0005539552 nova_compute[233724]: 2025-11-29 09:00:27.901 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:28.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:00:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:29.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:00:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:00:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:30.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:00:30 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:31.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:32.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:32 np0005539552 nova_compute[233724]: 2025-11-29 09:00:32.871 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:32 np0005539552 nova_compute[233724]: 2025-11-29 09:00:32.903 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:33.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:00:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:34.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:00:34 np0005539552 nova_compute[233724]: 2025-11-29 09:00:34.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:00:35 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:00:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:35.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:00:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:00:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:36.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:00:37 np0005539552 nova_compute[233724]: 2025-11-29 09:00:37.874 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:37.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:37 np0005539552 nova_compute[233724]: 2025-11-29 09:00:37.904 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:00:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:38.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:00:38 np0005539552 nova_compute[233724]: 2025-11-29 09:00:38.939 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:00:38 np0005539552 nova_compute[233724]: 2025-11-29 09:00:38.939 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 04:00:38 np0005539552 podman[337563]: 2025-11-29 09:00:38.986352126 +0000 UTC m=+0.069824829 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:00:38 np0005539552 podman[337561]: 2025-11-29 09:00:38.995744669 +0000 UTC m=+0.074244638 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 04:00:39 np0005539552 podman[337564]: 2025-11-29 09:00:39.042363783 +0000 UTC m=+0.112832236 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 04:00:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 04:00:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571236471' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 04:00:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 04:00:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571236471' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 04:00:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:39.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:00:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:40.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:00:40 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:41.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:00:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:42.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:00:42 np0005539552 nova_compute[233724]: 2025-11-29 09:00:42.878 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:42 np0005539552 nova_compute[233724]: 2025-11-29 09:00:42.905 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:43.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:44.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:44 np0005539552 ceph-mgr[77480]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1950343944
Nov 29 04:00:45 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:45.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:00:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:46.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:00:47 np0005539552 nova_compute[233724]: 2025-11-29 09:00:47.881 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:47 np0005539552 nova_compute[233724]: 2025-11-29 09:00:47.907 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:47.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:00:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:48.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:00:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:49.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:00:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:50.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:00:50 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:51.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:52.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:52 np0005539552 nova_compute[233724]: 2025-11-29 09:00:52.884 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:52 np0005539552 nova_compute[233724]: 2025-11-29 09:00:52.910 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:00:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:53.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:00:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:54.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:55 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:00:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:55.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:00:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:00:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:56.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:00:57 np0005539552 nova_compute[233724]: 2025-11-29 09:00:57.887 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:57 np0005539552 nova_compute[233724]: 2025-11-29 09:00:57.913 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:00:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:57.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:00:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:58.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:00:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:59.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:00.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:00 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 04:01:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:01.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 04:01:01 np0005539552 nova_compute[233724]: 2025-11-29 09:01:01.944 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:01:01 np0005539552 nova_compute[233724]: 2025-11-29 09:01:01.990 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:01:01 np0005539552 nova_compute[233724]: 2025-11-29 09:01:01.991 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:01:01 np0005539552 nova_compute[233724]: 2025-11-29 09:01:01.992 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:01:01 np0005539552 nova_compute[233724]: 2025-11-29 09:01:01.992 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:01:01 np0005539552 nova_compute[233724]: 2025-11-29 09:01:01.993 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:01:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:02.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:01:02 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3612330712' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:01:02 np0005539552 nova_compute[233724]: 2025-11-29 09:01:02.484 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:01:02 np0005539552 nova_compute[233724]: 2025-11-29 09:01:02.707 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:01:02 np0005539552 nova_compute[233724]: 2025-11-29 09:01:02.709 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4088MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:01:02 np0005539552 nova_compute[233724]: 2025-11-29 09:01:02.709 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:01:02 np0005539552 nova_compute[233724]: 2025-11-29 09:01:02.709 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:01:02 np0005539552 nova_compute[233724]: 2025-11-29 09:01:02.890 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:02 np0005539552 nova_compute[233724]: 2025-11-29 09:01:02.914 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:02 np0005539552 nova_compute[233724]: 2025-11-29 09:01:02.945 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:01:02 np0005539552 nova_compute[233724]: 2025-11-29 09:01:02.945 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:01:02 np0005539552 nova_compute[233724]: 2025-11-29 09:01:02.995 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:01:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:01:03 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/867894359' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:01:03 np0005539552 nova_compute[233724]: 2025-11-29 09:01:03.447 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:01:03 np0005539552 nova_compute[233724]: 2025-11-29 09:01:03.454 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:01:03 np0005539552 nova_compute[233724]: 2025-11-29 09:01:03.501 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:01:03 np0005539552 nova_compute[233724]: 2025-11-29 09:01:03.504 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:01:03 np0005539552 nova_compute[233724]: 2025-11-29 09:01:03.504 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:01:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:03.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:04.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:05 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:05.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:06.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:07 np0005539552 nova_compute[233724]: 2025-11-29 09:01:07.893 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:07 np0005539552 nova_compute[233724]: 2025-11-29 09:01:07.916 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:07.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:01:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:08.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:01:08 np0005539552 nova_compute[233724]: 2025-11-29 09:01:08.485 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:01:08 np0005539552 nova_compute[233724]: 2025-11-29 09:01:08.485 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:01:08 np0005539552 nova_compute[233724]: 2025-11-29 09:01:08.486 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:01:09 np0005539552 nova_compute[233724]: 2025-11-29 09:01:09.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:01:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:09.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:09 np0005539552 podman[337801]: 2025-11-29 09:01:09.986490574 +0000 UTC m=+0.062141613 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:01:10 np0005539552 podman[337800]: 2025-11-29 09:01:10.015554075 +0000 UTC m=+0.090615168 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 04:01:10 np0005539552 podman[337802]: 2025-11-29 09:01:10.074710486 +0000 UTC m=+0.137598201 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 04:01:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:10.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:10 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:10 np0005539552 nova_compute[233724]: 2025-11-29 09:01:10.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:01:10 np0005539552 nova_compute[233724]: 2025-11-29 09:01:10.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 04:01:10 np0005539552 nova_compute[233724]: 2025-11-29 09:01:10.954 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 04:01:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:11.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:12.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:12 np0005539552 nova_compute[233724]: 2025-11-29 09:01:12.897 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:12 np0005539552 nova_compute[233724]: 2025-11-29 09:01:12.919 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:12 np0005539552 nova_compute[233724]: 2025-11-29 09:01:12.953 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:01:13 np0005539552 nova_compute[233724]: 2025-11-29 09:01:13.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:01:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:13.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:14.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:15 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:15 np0005539552 nova_compute[233724]: 2025-11-29 09:01:15.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:01:15 np0005539552 nova_compute[233724]: 2025-11-29 09:01:15.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:01:15 np0005539552 nova_compute[233724]: 2025-11-29 09:01:15.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:01:15 np0005539552 nova_compute[233724]: 2025-11-29 09:01:15.945 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:01:15 np0005539552 nova_compute[233724]: 2025-11-29 09:01:15.945 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:01:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:15.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:16.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:17 np0005539552 nova_compute[233724]: 2025-11-29 09:01:17.898 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:17 np0005539552 nova_compute[233724]: 2025-11-29 09:01:17.920 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:17.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:18.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:18 np0005539552 nova_compute[233724]: 2025-11-29 09:01:18.940 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:01:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:19.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:20.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 09:01:20.671 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:01:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 09:01:20.672 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:01:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 09:01:20.672 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:01:20 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:21.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:22.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:22 np0005539552 nova_compute[233724]: 2025-11-29 09:01:22.902 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:22 np0005539552 nova_compute[233724]: 2025-11-29 09:01:22.922 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:23.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:24.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:25 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:25.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:26.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:26 np0005539552 podman[338097]: 2025-11-29 09:01:26.639523755 +0000 UTC m=+0.072128761 container exec 2e449e743ed8b622f7d948671af16b72c407124db8f8583030f534a9757aa05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:01:26 np0005539552 podman[338097]: 2025-11-29 09:01:26.760016766 +0000 UTC m=+0.192621712 container exec_died 2e449e743ed8b622f7d948671af16b72c407124db8f8583030f534a9757aa05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-2, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 04:01:27 np0005539552 podman[338252]: 2025-11-29 09:01:27.708918817 +0000 UTC m=+0.087352750 container exec fbc97e64df6fe35a26c07fd8827c68b935db0d4237087be449d0fd015b40e005 (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-2-efzvmt)
Nov 29 04:01:27 np0005539552 podman[338252]: 2025-11-29 09:01:27.750217718 +0000 UTC m=+0.128651561 container exec_died fbc97e64df6fe35a26c07fd8827c68b935db0d4237087be449d0fd015b40e005 (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-2-efzvmt)
Nov 29 04:01:27 np0005539552 nova_compute[233724]: 2025-11-29 09:01:27.903 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:27 np0005539552 nova_compute[233724]: 2025-11-29 09:01:27.923 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:27.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:28 np0005539552 podman[338319]: 2025-11-29 09:01:28.046306232 +0000 UTC m=+0.072823130 container exec 6c20d6486ea31b9565e62504d583e2d8c9512b707e8e2015e3cef66c15f3b3e6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, release=1793, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Nov 29 04:01:28 np0005539552 podman[338319]: 2025-11-29 09:01:28.065911869 +0000 UTC m=+0.092428727 container exec_died 6c20d6486ea31b9565e62504d583e2d8c9512b707e8e2015e3cef66c15f3b3e6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-2-gntzbr, com.redhat.component=keepalived-container, io.openshift.expose-services=, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, version=2.2.4, architecture=x86_64, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., description=keepalived for Ceph, release=1793, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20)
Nov 29 04:01:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:28.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:28 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:01:28 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:01:29 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 04:01:29 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:01:29 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:01:29 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:01:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:29.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:30.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:30 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:31.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:32.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:32 np0005539552 nova_compute[233724]: 2025-11-29 09:01:32.907 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:32 np0005539552 nova_compute[233724]: 2025-11-29 09:01:32.927 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:33.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:34.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:35 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:35 np0005539552 nova_compute[233724]: 2025-11-29 09:01:35.919 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:01:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:35.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:36.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:36 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #163. Immutable memtables: 0.
Nov 29 04:01:36 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:01:36.978230) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 04:01:36 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 163
Nov 29 04:01:36 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406896978302, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 2350, "num_deletes": 251, "total_data_size": 5884774, "memory_usage": 5959984, "flush_reason": "Manual Compaction"}
Nov 29 04:01:36 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #164: started
Nov 29 04:01:37 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406897005683, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 164, "file_size": 3828938, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 79271, "largest_seqno": 81616, "table_properties": {"data_size": 3819374, "index_size": 6057, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19260, "raw_average_key_size": 20, "raw_value_size": 3800423, "raw_average_value_size": 4004, "num_data_blocks": 265, "num_entries": 949, "num_filter_entries": 949, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406681, "oldest_key_time": 1764406681, "file_creation_time": 1764406896, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:01:37 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 27650 microseconds, and 8990 cpu microseconds.
Nov 29 04:01:37 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:01:37 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:01:37.005871) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #164: 3828938 bytes OK
Nov 29 04:01:37 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:01:37.005906) [db/memtable_list.cc:519] [default] Level-0 commit table #164 started
Nov 29 04:01:37 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:01:37.007909) [db/memtable_list.cc:722] [default] Level-0 commit table #164: memtable #1 done
Nov 29 04:01:37 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:01:37.007952) EVENT_LOG_v1 {"time_micros": 1764406897007944, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 04:01:37 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:01:37.007976) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 04:01:37 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 5874493, prev total WAL file size 5874493, number of live WAL files 2.
Nov 29 04:01:37 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000160.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:01:37 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:01:37.009265) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Nov 29 04:01:37 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 04:01:37 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [164(3739KB)], [162(11MB)]
Nov 29 04:01:37 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406897009329, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [164], "files_L6": [162], "score": -1, "input_data_size": 16385259, "oldest_snapshot_seqno": -1}
Nov 29 04:01:37 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #165: 11357 keys, 14379618 bytes, temperature: kUnknown
Nov 29 04:01:37 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406897133344, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 165, "file_size": 14379618, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14306731, "index_size": 43398, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28421, "raw_key_size": 299804, "raw_average_key_size": 26, "raw_value_size": 14107978, "raw_average_value_size": 1242, "num_data_blocks": 1650, "num_entries": 11357, "num_filter_entries": 11357, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764406897, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 165, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:01:37 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:01:37 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:01:37.133566) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 14379618 bytes
Nov 29 04:01:37 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:01:37.135399) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 132.3 rd, 116.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 12.0 +0.0 blob) out(13.7 +0.0 blob), read-write-amplify(8.0) write-amplify(3.8) OK, records in: 11876, records dropped: 519 output_compression: NoCompression
Nov 29 04:01:37 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:01:37.135413) EVENT_LOG_v1 {"time_micros": 1764406897135407, "job": 104, "event": "compaction_finished", "compaction_time_micros": 123893, "compaction_time_cpu_micros": 43852, "output_level": 6, "num_output_files": 1, "total_output_size": 14379618, "num_input_records": 11876, "num_output_records": 11357, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 04:01:37 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:01:37 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406897136057, "job": 104, "event": "table_file_deletion", "file_number": 164}
Nov 29 04:01:37 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000162.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:01:37 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406897138009, "job": 104, "event": "table_file_deletion", "file_number": 162}
Nov 29 04:01:37 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:01:37.009206) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:01:37 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:01:37.138183) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:01:37 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:01:37.138190) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:01:37 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:01:37.138192) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:01:37 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:01:37.138195) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:01:37 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:01:37.138197) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:01:37 np0005539552 nova_compute[233724]: 2025-11-29 09:01:37.909 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:37 np0005539552 nova_compute[233724]: 2025-11-29 09:01:37.928 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:37.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:38 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:01:38 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:01:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:38.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 04:01:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3898665479' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 04:01:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 04:01:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3898665479' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 04:01:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:39.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:40.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:40 np0005539552 podman[338564]: 2025-11-29 09:01:40.411451127 +0000 UTC m=+0.076360985 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 04:01:40 np0005539552 podman[338563]: 2025-11-29 09:01:40.442473331 +0000 UTC m=+0.106955787 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 04:01:40 np0005539552 podman[338565]: 2025-11-29 09:01:40.442443571 +0000 UTC m=+0.108380876 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 04:01:40 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:41.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:42.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:42 np0005539552 nova_compute[233724]: 2025-11-29 09:01:42.910 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:42 np0005539552 nova_compute[233724]: 2025-11-29 09:01:42.929 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:43.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:44.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:45 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:46.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:46.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:47 np0005539552 nova_compute[233724]: 2025-11-29 09:01:47.912 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:47 np0005539552 nova_compute[233724]: 2025-11-29 09:01:47.932 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:48.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:48.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:50.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:50.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:50 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:52.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:52.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:52 np0005539552 nova_compute[233724]: 2025-11-29 09:01:52.915 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:52 np0005539552 nova_compute[233724]: 2025-11-29 09:01:52.934 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:54.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:54.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:55 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:56.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:56.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:57 np0005539552 nova_compute[233724]: 2025-11-29 09:01:57.918 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:57 np0005539552 nova_compute[233724]: 2025-11-29 09:01:57.937 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:58.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:01:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:58.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:00.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:00.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:00 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:01 np0005539552 nova_compute[233724]: 2025-11-29 09:02:01.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:02:01 np0005539552 nova_compute[233724]: 2025-11-29 09:02:01.946 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:02:01 np0005539552 nova_compute[233724]: 2025-11-29 09:02:01.946 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:02:01 np0005539552 nova_compute[233724]: 2025-11-29 09:02:01.946 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:02:01 np0005539552 nova_compute[233724]: 2025-11-29 09:02:01.946 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:02:01 np0005539552 nova_compute[233724]: 2025-11-29 09:02:01.947 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:02:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:02.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:02.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:02:02 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2530900665' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:02:02 np0005539552 nova_compute[233724]: 2025-11-29 09:02:02.371 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:02:02 np0005539552 nova_compute[233724]: 2025-11-29 09:02:02.554 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:02:02 np0005539552 nova_compute[233724]: 2025-11-29 09:02:02.555 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4085MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:02:02 np0005539552 nova_compute[233724]: 2025-11-29 09:02:02.556 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:02:02 np0005539552 nova_compute[233724]: 2025-11-29 09:02:02.556 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:02:02 np0005539552 nova_compute[233724]: 2025-11-29 09:02:02.642 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:02:02 np0005539552 nova_compute[233724]: 2025-11-29 09:02:02.642 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:02:02 np0005539552 nova_compute[233724]: 2025-11-29 09:02:02.663 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Refreshing inventories for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 04:02:02 np0005539552 nova_compute[233724]: 2025-11-29 09:02:02.681 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Updating ProviderTree inventory for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 04:02:02 np0005539552 nova_compute[233724]: 2025-11-29 09:02:02.682 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Updating inventory in ProviderTree for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 04:02:02 np0005539552 nova_compute[233724]: 2025-11-29 09:02:02.705 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Refreshing aggregate associations for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 04:02:02 np0005539552 nova_compute[233724]: 2025-11-29 09:02:02.730 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Refreshing trait associations for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371, traits: HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_MMX,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_FDC,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 04:02:02 np0005539552 nova_compute[233724]: 2025-11-29 09:02:02.746 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:02:02 np0005539552 nova_compute[233724]: 2025-11-29 09:02:02.920 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:02 np0005539552 nova_compute[233724]: 2025-11-29 09:02:02.936 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:02:03 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3095606248' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:02:03 np0005539552 nova_compute[233724]: 2025-11-29 09:02:03.187 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:02:03 np0005539552 nova_compute[233724]: 2025-11-29 09:02:03.193 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:02:03 np0005539552 nova_compute[233724]: 2025-11-29 09:02:03.221 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:02:03 np0005539552 nova_compute[233724]: 2025-11-29 09:02:03.223 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:02:03 np0005539552 nova_compute[233724]: 2025-11-29 09:02:03.224 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:02:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:04.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:04.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:05 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:06.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:06.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:07 np0005539552 nova_compute[233724]: 2025-11-29 09:02:07.923 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:07 np0005539552 nova_compute[233724]: 2025-11-29 09:02:07.938 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:08.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:08.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:09 np0005539552 nova_compute[233724]: 2025-11-29 09:02:09.225 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:02:09 np0005539552 nova_compute[233724]: 2025-11-29 09:02:09.225 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:02:09 np0005539552 nova_compute[233724]: 2025-11-29 09:02:09.226 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:02:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 04:02:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:10.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 04:02:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:10.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:10 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:10 np0005539552 nova_compute[233724]: 2025-11-29 09:02:10.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:02:11 np0005539552 podman[338766]: 2025-11-29 09:02:11.007724854 +0000 UTC m=+0.084153544 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 04:02:11 np0005539552 podman[338764]: 2025-11-29 09:02:11.010592521 +0000 UTC m=+0.085917722 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 04:02:11 np0005539552 podman[338767]: 2025-11-29 09:02:11.04550663 +0000 UTC m=+0.117443679 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 29 04:02:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:12.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:12.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:12 np0005539552 nova_compute[233724]: 2025-11-29 09:02:12.925 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:12 np0005539552 nova_compute[233724]: 2025-11-29 09:02:12.940 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:13 np0005539552 nova_compute[233724]: 2025-11-29 09:02:13.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:02:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:14.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:14.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:14 np0005539552 nova_compute[233724]: 2025-11-29 09:02:14.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:02:15 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:15 np0005539552 nova_compute[233724]: 2025-11-29 09:02:15.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:02:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:16.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:02:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:16.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:02:16 np0005539552 nova_compute[233724]: 2025-11-29 09:02:16.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:02:16 np0005539552 nova_compute[233724]: 2025-11-29 09:02:16.925 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:02:16 np0005539552 nova_compute[233724]: 2025-11-29 09:02:16.925 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:02:16 np0005539552 nova_compute[233724]: 2025-11-29 09:02:16.941 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:02:17 np0005539552 nova_compute[233724]: 2025-11-29 09:02:17.929 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:17 np0005539552 nova_compute[233724]: 2025-11-29 09:02:17.942 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:18.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:18.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:18 np0005539552 nova_compute[233724]: 2025-11-29 09:02:18.935 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:02:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:20.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:20.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 09:02:20.672 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:02:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 09:02:20.673 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:02:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 09:02:20.673 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:02:20 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:22.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:22.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:22 np0005539552 nova_compute[233724]: 2025-11-29 09:02:22.930 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:22 np0005539552 nova_compute[233724]: 2025-11-29 09:02:22.943 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:24.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:24.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:25 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:26.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:26.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:27 np0005539552 nova_compute[233724]: 2025-11-29 09:02:27.933 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:27 np0005539552 nova_compute[233724]: 2025-11-29 09:02:27.946 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:28.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:28.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:30.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:30.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:30 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:32.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:32.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:32 np0005539552 nova_compute[233724]: 2025-11-29 09:02:32.936 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:32 np0005539552 nova_compute[233724]: 2025-11-29 09:02:32.951 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:34.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:34.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:35 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:36.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:36.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:37 np0005539552 nova_compute[233724]: 2025-11-29 09:02:37.938 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:37 np0005539552 nova_compute[233724]: 2025-11-29 09:02:37.951 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:38.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:38.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 04:02:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1041180484' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 04:02:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 04:02:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1041180484' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 04:02:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:40.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:40 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:02:40 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:02:40 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:02:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:40.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:40 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:41 np0005539552 podman[339078]: 2025-11-29 09:02:41.152958541 +0000 UTC m=+0.089701993 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 04:02:41 np0005539552 podman[339077]: 2025-11-29 09:02:41.187891461 +0000 UTC m=+0.128121407 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:02:41 np0005539552 podman[339115]: 2025-11-29 09:02:41.299055741 +0000 UTC m=+0.113530355 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 04:02:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:42.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:42.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:42 np0005539552 nova_compute[233724]: 2025-11-29 09:02:42.940 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:42 np0005539552 nova_compute[233724]: 2025-11-29 09:02:42.954 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:44.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:44.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:45 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:46.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:46.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:47 np0005539552 nova_compute[233724]: 2025-11-29 09:02:47.943 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:47 np0005539552 nova_compute[233724]: 2025-11-29 09:02:47.956 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:48.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:48.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:50.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:50.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:50 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:02:50 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:02:50 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:52.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:52.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:52 np0005539552 nova_compute[233724]: 2025-11-29 09:02:52.946 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:52 np0005539552 nova_compute[233724]: 2025-11-29 09:02:52.959 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:54.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:54.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:55 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:56.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:56.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:57 np0005539552 nova_compute[233724]: 2025-11-29 09:02:57.949 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:57 np0005539552 nova_compute[233724]: 2025-11-29 09:02:57.961 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:58.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:02:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:58.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:00.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:00.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:00 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:02.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:02.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:02 np0005539552 nova_compute[233724]: 2025-11-29 09:03:02.952 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:02 np0005539552 nova_compute[233724]: 2025-11-29 09:03:02.964 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:03 np0005539552 nova_compute[233724]: 2025-11-29 09:03:03.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:03:03 np0005539552 nova_compute[233724]: 2025-11-29 09:03:03.957 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:03:03 np0005539552 nova_compute[233724]: 2025-11-29 09:03:03.958 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:03:03 np0005539552 nova_compute[233724]: 2025-11-29 09:03:03.958 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:03:03 np0005539552 nova_compute[233724]: 2025-11-29 09:03:03.958 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:03:03 np0005539552 nova_compute[233724]: 2025-11-29 09:03:03.959 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:03:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:04.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:03:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:04.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:03:04 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:03:04 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/925124862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:03:04 np0005539552 nova_compute[233724]: 2025-11-29 09:03:04.437 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:03:04 np0005539552 nova_compute[233724]: 2025-11-29 09:03:04.672 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:03:04 np0005539552 nova_compute[233724]: 2025-11-29 09:03:04.673 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4096MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:03:04 np0005539552 nova_compute[233724]: 2025-11-29 09:03:04.673 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:03:04 np0005539552 nova_compute[233724]: 2025-11-29 09:03:04.674 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:03:04 np0005539552 nova_compute[233724]: 2025-11-29 09:03:04.950 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:03:04 np0005539552 nova_compute[233724]: 2025-11-29 09:03:04.951 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:03:05 np0005539552 nova_compute[233724]: 2025-11-29 09:03:05.058 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:03:05 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:03:05 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/967275674' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:03:05 np0005539552 nova_compute[233724]: 2025-11-29 09:03:05.503 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:03:05 np0005539552 nova_compute[233724]: 2025-11-29 09:03:05.511 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:03:05 np0005539552 nova_compute[233724]: 2025-11-29 09:03:05.532 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:03:05 np0005539552 nova_compute[233724]: 2025-11-29 09:03:05.535 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:03:05 np0005539552 nova_compute[233724]: 2025-11-29 09:03:05.536 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.862s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:03:05 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:06.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:06.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:07 np0005539552 nova_compute[233724]: 2025-11-29 09:03:07.955 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:07 np0005539552 nova_compute[233724]: 2025-11-29 09:03:07.966 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:08.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:08.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:10.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:10.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:10 np0005539552 nova_compute[233724]: 2025-11-29 09:03:10.537 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:03:10 np0005539552 nova_compute[233724]: 2025-11-29 09:03:10.537 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:03:10 np0005539552 nova_compute[233724]: 2025-11-29 09:03:10.537 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:03:10 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:11 np0005539552 nova_compute[233724]: 2025-11-29 09:03:11.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:03:12 np0005539552 podman[339304]: 2025-11-29 09:03:12.019217687 +0000 UTC m=+0.091340557 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 04:03:12 np0005539552 podman[339303]: 2025-11-29 09:03:12.025502106 +0000 UTC m=+0.101946953 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 04:03:12 np0005539552 podman[339305]: 2025-11-29 09:03:12.067734142 +0000 UTC m=+0.135479685 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 04:03:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:12.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:12.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:12 np0005539552 nova_compute[233724]: 2025-11-29 09:03:12.958 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:12 np0005539552 nova_compute[233724]: 2025-11-29 09:03:12.971 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:13 np0005539552 nova_compute[233724]: 2025-11-29 09:03:13.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:03:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:14.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:03:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:14.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:03:15 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:15 np0005539552 nova_compute[233724]: 2025-11-29 09:03:15.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:03:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:16.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 04:03:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:16.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 04:03:16 np0005539552 nova_compute[233724]: 2025-11-29 09:03:16.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:03:16 np0005539552 nova_compute[233724]: 2025-11-29 09:03:16.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:03:16 np0005539552 nova_compute[233724]: 2025-11-29 09:03:16.925 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:03:16 np0005539552 nova_compute[233724]: 2025-11-29 09:03:16.943 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:03:16 np0005539552 nova_compute[233724]: 2025-11-29 09:03:16.944 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:03:17 np0005539552 nova_compute[233724]: 2025-11-29 09:03:17.961 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:17 np0005539552 nova_compute[233724]: 2025-11-29 09:03:17.971 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:18.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:18.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:18 np0005539552 nova_compute[233724]: 2025-11-29 09:03:18.938 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:03:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:20.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:20.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 09:03:20.673 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:03:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 09:03:20.674 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:03:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 09:03:20.674 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:03:20 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:22.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 04:03:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:22.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 04:03:22 np0005539552 nova_compute[233724]: 2025-11-29 09:03:22.963 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:22 np0005539552 nova_compute[233724]: 2025-11-29 09:03:22.973 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:24.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:24.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:25 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:26.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:26.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:27 np0005539552 nova_compute[233724]: 2025-11-29 09:03:27.966 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:27 np0005539552 nova_compute[233724]: 2025-11-29 09:03:27.975 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:28.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:28.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:30.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:30.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:30 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:32 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #166. Immutable memtables: 0.
Nov 29 04:03:32 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:03:32.123533) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 04:03:32 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 105] Flushing memtable with next log file: 166
Nov 29 04:03:32 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407012123564, "job": 105, "event": "flush_started", "num_memtables": 1, "num_entries": 1303, "num_deletes": 259, "total_data_size": 2948454, "memory_usage": 2988768, "flush_reason": "Manual Compaction"}
Nov 29 04:03:32 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 105] Level-0 flush table #167: started
Nov 29 04:03:32 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407012140906, "cf_name": "default", "job": 105, "event": "table_file_creation", "file_number": 167, "file_size": 1946407, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 81621, "largest_seqno": 82919, "table_properties": {"data_size": 1940819, "index_size": 2982, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11788, "raw_average_key_size": 19, "raw_value_size": 1929555, "raw_average_value_size": 3178, "num_data_blocks": 133, "num_entries": 607, "num_filter_entries": 607, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406898, "oldest_key_time": 1764406898, "file_creation_time": 1764407012, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:03:32 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 105] Flush lasted 17415 microseconds, and 6745 cpu microseconds.
Nov 29 04:03:32 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:03:32 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:03:32.140947) [db/flush_job.cc:967] [default] [JOB 105] Level-0 flush table #167: 1946407 bytes OK
Nov 29 04:03:32 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:03:32.140965) [db/memtable_list.cc:519] [default] Level-0 commit table #167 started
Nov 29 04:03:32 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:03:32.142704) [db/memtable_list.cc:722] [default] Level-0 commit table #167: memtable #1 done
Nov 29 04:03:32 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:03:32.142718) EVENT_LOG_v1 {"time_micros": 1764407012142713, "job": 105, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 04:03:32 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:03:32.142736) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 04:03:32 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 105] Try to delete WAL files size 2942302, prev total WAL file size 2942302, number of live WAL files 2.
Nov 29 04:03:32 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000163.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:03:32 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:03:32.143692) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303135' seq:72057594037927935, type:22 .. '6C6F676D0033323730' seq:0, type:0; will stop at (end)
Nov 29 04:03:32 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 106] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 04:03:32 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 105 Base level 0, inputs: [167(1900KB)], [165(13MB)]
Nov 29 04:03:32 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407012143756, "job": 106, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [167], "files_L6": [165], "score": -1, "input_data_size": 16326025, "oldest_snapshot_seqno": -1}
Nov 29 04:03:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:32.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:32 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 106] Generated table #168: 11433 keys, 16184833 bytes, temperature: kUnknown
Nov 29 04:03:32 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407012300322, "cf_name": "default", "job": 106, "event": "table_file_creation", "file_number": 168, "file_size": 16184833, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16109346, "index_size": 45806, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28613, "raw_key_size": 302311, "raw_average_key_size": 26, "raw_value_size": 15907162, "raw_average_value_size": 1391, "num_data_blocks": 1751, "num_entries": 11433, "num_filter_entries": 11433, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764407012, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 168, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:03:32 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:03:32 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:03:32.300735) [db/compaction/compaction_job.cc:1663] [default] [JOB 106] Compacted 1@0 + 1@6 files to L6 => 16184833 bytes
Nov 29 04:03:32 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:03:32.302557) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 104.2 rd, 103.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 13.7 +0.0 blob) out(15.4 +0.0 blob), read-write-amplify(16.7) write-amplify(8.3) OK, records in: 11964, records dropped: 531 output_compression: NoCompression
Nov 29 04:03:32 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:03:32.302586) EVENT_LOG_v1 {"time_micros": 1764407012302573, "job": 106, "event": "compaction_finished", "compaction_time_micros": 156662, "compaction_time_cpu_micros": 66166, "output_level": 6, "num_output_files": 1, "total_output_size": 16184833, "num_input_records": 11964, "num_output_records": 11433, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 04:03:32 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:03:32 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407012303369, "job": 106, "event": "table_file_deletion", "file_number": 167}
Nov 29 04:03:32 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000165.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:03:32 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407012307918, "job": 106, "event": "table_file_deletion", "file_number": 165}
Nov 29 04:03:32 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:03:32.143561) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:03:32 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:03:32.308029) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:03:32 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:03:32.308033) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:03:32 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:03:32.308035) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:03:32 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:03:32.308037) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:03:32 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:03:32.308039) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:03:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:32.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:32 np0005539552 nova_compute[233724]: 2025-11-29 09:03:32.968 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:32 np0005539552 nova_compute[233724]: 2025-11-29 09:03:32.977 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:34.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:34.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:35 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:36.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:36.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:37 np0005539552 nova_compute[233724]: 2025-11-29 09:03:37.970 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:37 np0005539552 nova_compute[233724]: 2025-11-29 09:03:37.979 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:38.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:38.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:38 np0005539552 nova_compute[233724]: 2025-11-29 09:03:38.919 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:03:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 04:03:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/317442418' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 04:03:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 04:03:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/317442418' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 04:03:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:40.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:40.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:40 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:42.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:42.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:42 np0005539552 nova_compute[233724]: 2025-11-29 09:03:42.973 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:42 np0005539552 nova_compute[233724]: 2025-11-29 09:03:42.979 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:43 np0005539552 podman[339485]: 2025-11-29 09:03:43.004832365 +0000 UTC m=+0.087621128 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 04:03:43 np0005539552 podman[339486]: 2025-11-29 09:03:43.013368364 +0000 UTC m=+0.097476862 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 29 04:03:43 np0005539552 podman[339487]: 2025-11-29 09:03:43.038081929 +0000 UTC m=+0.125623630 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 04:03:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:44.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:44.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:45 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:46.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:46.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:47 np0005539552 nova_compute[233724]: 2025-11-29 09:03:47.975 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:47 np0005539552 nova_compute[233724]: 2025-11-29 09:03:47.982 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:48.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:48.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:50.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:50.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:50 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:51 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:03:51 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:03:51 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:03:52 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #169. Immutable memtables: 0.
Nov 29 04:03:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:03:52.024569) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 04:03:52 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 107] Flushing memtable with next log file: 169
Nov 29 04:03:52 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407032024692, "job": 107, "event": "flush_started", "num_memtables": 1, "num_entries": 451, "num_deletes": 250, "total_data_size": 523098, "memory_usage": 530920, "flush_reason": "Manual Compaction"}
Nov 29 04:03:52 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 107] Level-0 flush table #170: started
Nov 29 04:03:52 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407032029493, "cf_name": "default", "job": 107, "event": "table_file_creation", "file_number": 170, "file_size": 283158, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 82924, "largest_seqno": 83370, "table_properties": {"data_size": 280773, "index_size": 484, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6618, "raw_average_key_size": 20, "raw_value_size": 275863, "raw_average_value_size": 854, "num_data_blocks": 22, "num_entries": 323, "num_filter_entries": 323, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764407012, "oldest_key_time": 1764407012, "file_creation_time": 1764407032, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:03:52 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 107] Flush lasted 5077 microseconds, and 2090 cpu microseconds.
Nov 29 04:03:52 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:03:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:03:52.029655) [db/flush_job.cc:967] [default] [JOB 107] Level-0 flush table #170: 283158 bytes OK
Nov 29 04:03:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:03:52.029716) [db/memtable_list.cc:519] [default] Level-0 commit table #170 started
Nov 29 04:03:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:03:52.031433) [db/memtable_list.cc:722] [default] Level-0 commit table #170: memtable #1 done
Nov 29 04:03:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:03:52.031455) EVENT_LOG_v1 {"time_micros": 1764407032031448, "job": 107, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 04:03:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:03:52.031475) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 04:03:52 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 107] Try to delete WAL files size 520303, prev total WAL file size 520303, number of live WAL files 2.
Nov 29 04:03:52 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000166.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:03:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:03:52.032708) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373536' seq:72057594037927935, type:22 .. '6D6772737461740033303037' seq:0, type:0; will stop at (end)
Nov 29 04:03:52 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 108] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 04:03:52 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 107 Base level 0, inputs: [170(276KB)], [168(15MB)]
Nov 29 04:03:52 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407032032794, "job": 108, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [170], "files_L6": [168], "score": -1, "input_data_size": 16467991, "oldest_snapshot_seqno": -1}
Nov 29 04:03:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:52.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:52.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:52 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 108] Generated table #171: 11249 keys, 12694867 bytes, temperature: kUnknown
Nov 29 04:03:52 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407032628005, "cf_name": "default", "job": 108, "event": "table_file_creation", "file_number": 171, "file_size": 12694867, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12625253, "index_size": 40381, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28165, "raw_key_size": 298689, "raw_average_key_size": 26, "raw_value_size": 12430888, "raw_average_value_size": 1105, "num_data_blocks": 1525, "num_entries": 11249, "num_filter_entries": 11249, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764407032, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 171, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:03:52 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:03:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:03:52.628325) [db/compaction/compaction_job.cc:1663] [default] [JOB 108] Compacted 1@0 + 1@6 files to L6 => 12694867 bytes
Nov 29 04:03:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:03:52.630769) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 27.7 rd, 21.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 15.4 +0.0 blob) out(12.1 +0.0 blob), read-write-amplify(103.0) write-amplify(44.8) OK, records in: 11756, records dropped: 507 output_compression: NoCompression
Nov 29 04:03:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:03:52.630787) EVENT_LOG_v1 {"time_micros": 1764407032630779, "job": 108, "event": "compaction_finished", "compaction_time_micros": 595293, "compaction_time_cpu_micros": 40644, "output_level": 6, "num_output_files": 1, "total_output_size": 12694867, "num_input_records": 11756, "num_output_records": 11249, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 04:03:52 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000170.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:03:52 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407032630957, "job": 108, "event": "table_file_deletion", "file_number": 170}
Nov 29 04:03:52 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000168.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:03:52 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407032634023, "job": 108, "event": "table_file_deletion", "file_number": 168}
Nov 29 04:03:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:03:52.032566) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:03:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:03:52.634111) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:03:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:03:52.634117) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:03:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:03:52.634119) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:03:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:03:52.634121) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:03:52 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:03:52.634124) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:03:52 np0005539552 nova_compute[233724]: 2025-11-29 09:03:52.978 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:52 np0005539552 nova_compute[233724]: 2025-11-29 09:03:52.986 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:54.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:54.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:56 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965576f0 =====
Nov 29 04:03:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965576f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:56 np0005539552 radosgw[83248]: beast: 0x7fec965576f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:56.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:56.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:57 np0005539552 nova_compute[233724]: 2025-11-29 09:03:57.981 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:57 np0005539552 nova_compute[233724]: 2025-11-29 09:03:57.986 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:58.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:03:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:58.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:00.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:00.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:01 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:02 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:04:02 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:04:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:02.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:02.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:02 np0005539552 nova_compute[233724]: 2025-11-29 09:04:02.983 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:02 np0005539552 nova_compute[233724]: 2025-11-29 09:04:02.988 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:04.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:04.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:04 np0005539552 nova_compute[233724]: 2025-11-29 09:04:04.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:04:04 np0005539552 nova_compute[233724]: 2025-11-29 09:04:04.983 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:04:04 np0005539552 nova_compute[233724]: 2025-11-29 09:04:04.984 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:04:04 np0005539552 nova_compute[233724]: 2025-11-29 09:04:04.984 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:04:04 np0005539552 nova_compute[233724]: 2025-11-29 09:04:04.984 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:04:04 np0005539552 nova_compute[233724]: 2025-11-29 09:04:04.984 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:04:05 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:04:05 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2412378632' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:04:05 np0005539552 nova_compute[233724]: 2025-11-29 09:04:05.802 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.818s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:04:06 np0005539552 nova_compute[233724]: 2025-11-29 09:04:06.057 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:04:06 np0005539552 nova_compute[233724]: 2025-11-29 09:04:06.060 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4088MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:04:06 np0005539552 nova_compute[233724]: 2025-11-29 09:04:06.061 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:04:06 np0005539552 nova_compute[233724]: 2025-11-29 09:04:06.062 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:04:06 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:06.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:06.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:06 np0005539552 nova_compute[233724]: 2025-11-29 09:04:06.855 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:04:06 np0005539552 nova_compute[233724]: 2025-11-29 09:04:06.855 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:04:06 np0005539552 nova_compute[233724]: 2025-11-29 09:04:06.906 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:04:07 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:04:07 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/228918309' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:04:07 np0005539552 nova_compute[233724]: 2025-11-29 09:04:07.349 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:04:07 np0005539552 nova_compute[233724]: 2025-11-29 09:04:07.354 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:04:07 np0005539552 nova_compute[233724]: 2025-11-29 09:04:07.432 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:04:07 np0005539552 nova_compute[233724]: 2025-11-29 09:04:07.434 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:04:07 np0005539552 nova_compute[233724]: 2025-11-29 09:04:07.434 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.373s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:04:07 np0005539552 nova_compute[233724]: 2025-11-29 09:04:07.985 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:07 np0005539552 nova_compute[233724]: 2025-11-29 09:04:07.991 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:08.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:08.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:10.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:10.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:11 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:11 np0005539552 nova_compute[233724]: 2025-11-29 09:04:11.434 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:04:11 np0005539552 nova_compute[233724]: 2025-11-29 09:04:11.435 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:04:11 np0005539552 nova_compute[233724]: 2025-11-29 09:04:11.435 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:04:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:12.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:12.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:12 np0005539552 nova_compute[233724]: 2025-11-29 09:04:12.988 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:12 np0005539552 nova_compute[233724]: 2025-11-29 09:04:12.992 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:13 np0005539552 nova_compute[233724]: 2025-11-29 09:04:13.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:04:13 np0005539552 nova_compute[233724]: 2025-11-29 09:04:13.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:04:13 np0005539552 podman[339845]: 2025-11-29 09:04:13.974818544 +0000 UTC m=+0.063011276 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd)
Nov 29 04:04:13 np0005539552 podman[339846]: 2025-11-29 09:04:13.97541046 +0000 UTC m=+0.061409813 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 29 04:04:14 np0005539552 podman[339847]: 2025-11-29 09:04:14.001397229 +0000 UTC m=+0.088188193 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:04:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:14.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:14.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:16 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:16.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:16.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:16 np0005539552 nova_compute[233724]: 2025-11-29 09:04:16.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:04:17 np0005539552 nova_compute[233724]: 2025-11-29 09:04:17.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:04:17 np0005539552 nova_compute[233724]: 2025-11-29 09:04:17.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:04:17 np0005539552 nova_compute[233724]: 2025-11-29 09:04:17.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:04:17 np0005539552 nova_compute[233724]: 2025-11-29 09:04:17.939 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:04:17 np0005539552 nova_compute[233724]: 2025-11-29 09:04:17.940 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:04:17 np0005539552 nova_compute[233724]: 2025-11-29 09:04:17.989 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:17 np0005539552 nova_compute[233724]: 2025-11-29 09:04:17.993 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:18.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:18.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:20.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 09:04:20.675 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:04:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 09:04:20.675 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:04:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 09:04:20.675 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:04:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:04:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:20.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:04:20 np0005539552 nova_compute[233724]: 2025-11-29 09:04:20.934 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:04:21 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:22.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:22.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:22 np0005539552 nova_compute[233724]: 2025-11-29 09:04:22.992 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:22 np0005539552 nova_compute[233724]: 2025-11-29 09:04:22.995 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:24.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:24.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:26 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:26.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:26.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:27 np0005539552 nova_compute[233724]: 2025-11-29 09:04:27.994 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:27 np0005539552 nova_compute[233724]: 2025-11-29 09:04:27.996 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:28.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:28.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:30.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:30.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:31 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:32.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:32.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:32 np0005539552 nova_compute[233724]: 2025-11-29 09:04:32.997 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:33 np0005539552 nova_compute[233724]: 2025-11-29 09:04:32.999 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:34.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:34.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:36 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:36.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:36.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:38 np0005539552 nova_compute[233724]: 2025-11-29 09:04:38.000 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:38.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:38.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 04:04:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/330214179' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 04:04:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 04:04:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/330214179' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 04:04:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:40.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:40.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:41 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:42.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:42.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:43 np0005539552 nova_compute[233724]: 2025-11-29 09:04:43.002 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:04:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:44.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:44.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:44 np0005539552 podman[340027]: 2025-11-29 09:04:44.991412784 +0000 UTC m=+0.071452383 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 04:04:45 np0005539552 podman[340029]: 2025-11-29 09:04:45.014077323 +0000 UTC m=+0.076746575 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 04:04:45 np0005539552 podman[340035]: 2025-11-29 09:04:45.076696987 +0000 UTC m=+0.134702953 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 04:04:46 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:46.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:46.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:48 np0005539552 nova_compute[233724]: 2025-11-29 09:04:48.004 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:04:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:48.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:48.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:50.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:50.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:51 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:52.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:52.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:53 np0005539552 nova_compute[233724]: 2025-11-29 09:04:53.006 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:04:53 np0005539552 nova_compute[233724]: 2025-11-29 09:04:53.007 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:53 np0005539552 nova_compute[233724]: 2025-11-29 09:04:53.008 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 29 04:04:53 np0005539552 nova_compute[233724]: 2025-11-29 09:04:53.008 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 04:04:53 np0005539552 nova_compute[233724]: 2025-11-29 09:04:53.008 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 04:04:53 np0005539552 nova_compute[233724]: 2025-11-29 09:04:53.009 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:54.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:54.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:56 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:56.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:56.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:58 np0005539552 nova_compute[233724]: 2025-11-29 09:04:58.009 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:58 np0005539552 nova_compute[233724]: 2025-11-29 09:04:58.010 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:58.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:04:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:58.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:00.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:00.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:01 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:02.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:02.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:03 np0005539552 nova_compute[233724]: 2025-11-29 09:05:03.012 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:05:03 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:05:03 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:05:03 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:05:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:05:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:04.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:05:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:04.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:05 np0005539552 nova_compute[233724]: 2025-11-29 09:05:05.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:05:05 np0005539552 nova_compute[233724]: 2025-11-29 09:05:05.955 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:05:05 np0005539552 nova_compute[233724]: 2025-11-29 09:05:05.955 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:05:05 np0005539552 nova_compute[233724]: 2025-11-29 09:05:05.956 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:05:05 np0005539552 nova_compute[233724]: 2025-11-29 09:05:05.956 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:05:05 np0005539552 nova_compute[233724]: 2025-11-29 09:05:05.956 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:05:06 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:05:06 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4127895751' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:05:06 np0005539552 nova_compute[233724]: 2025-11-29 09:05:06.406 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:05:06 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:06 np0005539552 nova_compute[233724]: 2025-11-29 09:05:06.563 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:05:06 np0005539552 nova_compute[233724]: 2025-11-29 09:05:06.564 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4084MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:05:06 np0005539552 nova_compute[233724]: 2025-11-29 09:05:06.565 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:05:06 np0005539552 nova_compute[233724]: 2025-11-29 09:05:06.565 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:05:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:06.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:05:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:06.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:05:06 np0005539552 nova_compute[233724]: 2025-11-29 09:05:06.786 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:05:06 np0005539552 nova_compute[233724]: 2025-11-29 09:05:06.786 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:05:06 np0005539552 nova_compute[233724]: 2025-11-29 09:05:06.801 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:05:07 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:05:07 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1547739287' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:05:07 np0005539552 nova_compute[233724]: 2025-11-29 09:05:07.259 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:05:07 np0005539552 nova_compute[233724]: 2025-11-29 09:05:07.265 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:05:07 np0005539552 nova_compute[233724]: 2025-11-29 09:05:07.281 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:05:07 np0005539552 nova_compute[233724]: 2025-11-29 09:05:07.282 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:05:07 np0005539552 nova_compute[233724]: 2025-11-29 09:05:07.283 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:05:08 np0005539552 nova_compute[233724]: 2025-11-29 09:05:08.013 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:08 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 04:05:08 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.1 total, 600.0 interval#012Cumulative writes: 16K writes, 84K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s#012Cumulative WAL: 16K writes, 16K syncs, 1.00 writes per sync, written: 0.17 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1451 writes, 7181 keys, 1451 commit groups, 1.0 writes per commit group, ingest: 15.06 MB, 0.03 MB/s#012Interval WAL: 1451 writes, 1451 syncs, 1.00 writes per sync, written: 0.01 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     30.8      3.39              0.35        54    0.063       0      0       0.0       0.0#012  L6      1/0   12.11 MB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   5.7     97.3     84.3      7.06              1.89        53    0.133    438K    28K       0.0       0.0#012 Sum      1/0   12.11 MB   0.0      0.7     0.1      0.6       0.7      0.1       0.0   6.7     65.8     66.9     10.45              2.24       107    0.098    438K    28K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   9.3     67.2     65.9      1.30              0.33        12    0.108     69K   3084       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) 
Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   0.0     97.3     84.3      7.06              1.89        53    0.133    438K    28K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     31.2      3.34              0.35        53    0.063       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.044       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6600.1 total, 600.0 interval#012Flush(GB): cumulative 0.102, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.68 GB write, 0.11 MB/s write, 0.67 GB read, 0.10 MB/s read, 10.4 seconds#012Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.09 GB read, 0.15 MB/s read, 1.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560bf13bd1f0#2 capacity: 304.00 MB usage: 72.27 MB table_size: 0 occupancy: 18446744073709551615 collections: 12 last_copies: 0 last_secs: 0.000355 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(4458,69.14 MB,22.7441%) FilterBlock(107,1.19 MB,0.390339%) IndexBlock(107,1.94 MB,0.637878%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 04:05:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:08.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:08.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:10 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:05:10 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:05:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:10.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:10.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:11 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:12 np0005539552 nova_compute[233724]: 2025-11-29 09:05:12.284 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:05:12 np0005539552 nova_compute[233724]: 2025-11-29 09:05:12.284 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:05:12 np0005539552 nova_compute[233724]: 2025-11-29 09:05:12.285 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:05:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:12.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:12.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:13 np0005539552 nova_compute[233724]: 2025-11-29 09:05:13.015 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:13 np0005539552 nova_compute[233724]: 2025-11-29 09:05:13.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:05:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:14.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:14.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:15 np0005539552 nova_compute[233724]: 2025-11-29 09:05:15.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:05:15 np0005539552 podman[340389]: 2025-11-29 09:05:15.986674819 +0000 UTC m=+0.061331601 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 04:05:16 np0005539552 podman[340390]: 2025-11-29 09:05:16.02576022 +0000 UTC m=+0.098800378 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 29 04:05:16 np0005539552 podman[340388]: 2025-11-29 09:05:16.02575228 +0000 UTC m=+0.098590983 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:05:16 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:16.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:16.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:16 np0005539552 nova_compute[233724]: 2025-11-29 09:05:16.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:05:18 np0005539552 nova_compute[233724]: 2025-11-29 09:05:18.016 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:05:18 np0005539552 nova_compute[233724]: 2025-11-29 09:05:18.018 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:18.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:18.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:19 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #172. Immutable memtables: 0.
Nov 29 04:05:19 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:05:19.746275) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 04:05:19 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 109] Flushing memtable with next log file: 172
Nov 29 04:05:19 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407119746368, "job": 109, "event": "flush_started", "num_memtables": 1, "num_entries": 1092, "num_deletes": 251, "total_data_size": 2317975, "memory_usage": 2339872, "flush_reason": "Manual Compaction"}
Nov 29 04:05:19 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 109] Level-0 flush table #173: started
Nov 29 04:05:19 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407119757407, "cf_name": "default", "job": 109, "event": "table_file_creation", "file_number": 173, "file_size": 1518411, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 83375, "largest_seqno": 84462, "table_properties": {"data_size": 1513656, "index_size": 2342, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10532, "raw_average_key_size": 19, "raw_value_size": 1503997, "raw_average_value_size": 2811, "num_data_blocks": 104, "num_entries": 535, "num_filter_entries": 535, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764407032, "oldest_key_time": 1764407032, "file_creation_time": 1764407119, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 173, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:05:19 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 109] Flush lasted 11193 microseconds, and 5521 cpu microseconds.
Nov 29 04:05:19 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:05:19 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:05:19.757470) [db/flush_job.cc:967] [default] [JOB 109] Level-0 flush table #173: 1518411 bytes OK
Nov 29 04:05:19 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:05:19.757504) [db/memtable_list.cc:519] [default] Level-0 commit table #173 started
Nov 29 04:05:19 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:05:19.759928) [db/memtable_list.cc:722] [default] Level-0 commit table #173: memtable #1 done
Nov 29 04:05:19 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:05:19.759954) EVENT_LOG_v1 {"time_micros": 1764407119759947, "job": 109, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 04:05:19 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:05:19.759975) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 04:05:19 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 109] Try to delete WAL files size 2312660, prev total WAL file size 2312660, number of live WAL files 2.
Nov 29 04:05:19 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000169.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:05:19 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:05:19.761338) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037303238' seq:72057594037927935, type:22 .. '7061786F730037323830' seq:0, type:0; will stop at (end)
Nov 29 04:05:19 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 110] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 04:05:19 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 109 Base level 0, inputs: [173(1482KB)], [171(12MB)]
Nov 29 04:05:19 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407119761387, "job": 110, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [173], "files_L6": [171], "score": -1, "input_data_size": 14213278, "oldest_snapshot_seqno": -1}
Nov 29 04:05:19 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 110] Generated table #174: 11269 keys, 12138675 bytes, temperature: kUnknown
Nov 29 04:05:19 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407119872934, "cf_name": "default", "job": 110, "event": "table_file_creation", "file_number": 174, "file_size": 12138675, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12069743, "index_size": 39635, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28229, "raw_key_size": 299759, "raw_average_key_size": 26, "raw_value_size": 11875792, "raw_average_value_size": 1053, "num_data_blocks": 1485, "num_entries": 11269, "num_filter_entries": 11269, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764407119, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 174, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:05:19 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:05:19 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:05:19.873324) [db/compaction/compaction_job.cc:1663] [default] [JOB 110] Compacted 1@0 + 1@6 files to L6 => 12138675 bytes
Nov 29 04:05:19 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:05:19.875215) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 127.3 rd, 108.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 12.1 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(17.4) write-amplify(8.0) OK, records in: 11784, records dropped: 515 output_compression: NoCompression
Nov 29 04:05:19 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:05:19.875252) EVENT_LOG_v1 {"time_micros": 1764407119875236, "job": 110, "event": "compaction_finished", "compaction_time_micros": 111649, "compaction_time_cpu_micros": 55289, "output_level": 6, "num_output_files": 1, "total_output_size": 12138675, "num_input_records": 11784, "num_output_records": 11269, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 04:05:19 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000173.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:05:19 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407119876011, "job": 110, "event": "table_file_deletion", "file_number": 173}
Nov 29 04:05:19 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000171.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:05:19 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407119880829, "job": 110, "event": "table_file_deletion", "file_number": 171}
Nov 29 04:05:19 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:05:19.761207) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:05:19 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:05:19.880885) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:05:19 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:05:19.880891) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:05:19 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:05:19.880895) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:05:19 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:05:19.880897) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:05:19 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:05:19.880899) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:05:19 np0005539552 nova_compute[233724]: 2025-11-29 09:05:19.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:05:19 np0005539552 nova_compute[233724]: 2025-11-29 09:05:19.925 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:05:19 np0005539552 nova_compute[233724]: 2025-11-29 09:05:19.925 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:05:19 np0005539552 nova_compute[233724]: 2025-11-29 09:05:19.940 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:05:19 np0005539552 nova_compute[233724]: 2025-11-29 09:05:19.940 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:05:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 09:05:20.677 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:05:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 09:05:20.677 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:05:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 09:05:20.677 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:05:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:20.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:20.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:21 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:21 np0005539552 nova_compute[233724]: 2025-11-29 09:05:21.934 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:05:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:22.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:22.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:23 np0005539552 nova_compute[233724]: 2025-11-29 09:05:23.019 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:24.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:24.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:26 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:26.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:26.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:28 np0005539552 nova_compute[233724]: 2025-11-29 09:05:28.020 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:05:28 np0005539552 nova_compute[233724]: 2025-11-29 09:05:28.021 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:28 np0005539552 nova_compute[233724]: 2025-11-29 09:05:28.021 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 29 04:05:28 np0005539552 nova_compute[233724]: 2025-11-29 09:05:28.021 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 04:05:28 np0005539552 nova_compute[233724]: 2025-11-29 09:05:28.021 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 04:05:28 np0005539552 nova_compute[233724]: 2025-11-29 09:05:28.023 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:28.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:28.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:30.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:30.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:31 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:32.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:32.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:33 np0005539552 nova_compute[233724]: 2025-11-29 09:05:33.024 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:05:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:34.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:34.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:36 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:36.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:36.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:38 np0005539552 nova_compute[233724]: 2025-11-29 09:05:38.025 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:38 np0005539552 nova_compute[233724]: 2025-11-29 09:05:38.027 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:38.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:05:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:38.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:05:38 np0005539552 nova_compute[233724]: 2025-11-29 09:05:38.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:05:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 04:05:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1384451587' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 04:05:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 04:05:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1384451587' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 04:05:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:40.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:40.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:41 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:42.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:42.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:43 np0005539552 nova_compute[233724]: 2025-11-29 09:05:43.028 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:05:43 np0005539552 nova_compute[233724]: 2025-11-29 09:05:43.029 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:43 np0005539552 nova_compute[233724]: 2025-11-29 09:05:43.029 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 29 04:05:43 np0005539552 nova_compute[233724]: 2025-11-29 09:05:43.030 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 04:05:43 np0005539552 nova_compute[233724]: 2025-11-29 09:05:43.030 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 04:05:43 np0005539552 nova_compute[233724]: 2025-11-29 09:05:43.032 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:43 np0005539552 nova_compute[233724]: 2025-11-29 09:05:43.958 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:05:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:44.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:44.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:44 np0005539552 nova_compute[233724]: 2025-11-29 09:05:44.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:05:44 np0005539552 nova_compute[233724]: 2025-11-29 09:05:44.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 04:05:45 np0005539552 nova_compute[233724]: 2025-11-29 09:05:45.942 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:05:45 np0005539552 nova_compute[233724]: 2025-11-29 09:05:45.943 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:05:45 np0005539552 nova_compute[233724]: 2025-11-29 09:05:45.943 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:05:45 np0005539552 nova_compute[233724]: 2025-11-29 09:05:45.944 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:05:45 np0005539552 nova_compute[233724]: 2025-11-29 09:05:45.944 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:05:45 np0005539552 nova_compute[233724]: 2025-11-29 09:05:45.945 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:05:45 np0005539552 nova_compute[233724]: 2025-11-29 09:05:45.945 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:05:45 np0005539552 nova_compute[233724]: 2025-11-29 09:05:45.962 233728 DEBUG nova.virt.libvirt.imagecache [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100#033[00m
Nov 29 04:05:45 np0005539552 nova_compute[233724]: 2025-11-29 09:05:45.970 233728 DEBUG nova.virt.libvirt.imagecache [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Nov 29 04:05:45 np0005539552 nova_compute[233724]: 2025-11-29 09:05:45.970 233728 WARNING nova.virt.libvirt.imagecache [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488#033[00m
Nov 29 04:05:45 np0005539552 nova_compute[233724]: 2025-11-29 09:05:45.970 233728 WARNING nova.virt.libvirt.imagecache [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/d8f87e6814a39f74799532642e7be3e998da5505#033[00m
Nov 29 04:05:45 np0005539552 nova_compute[233724]: 2025-11-29 09:05:45.970 233728 WARNING nova.virt.libvirt.imagecache [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de#033[00m
Nov 29 04:05:45 np0005539552 nova_compute[233724]: 2025-11-29 09:05:45.970 233728 WARNING nova.virt.libvirt.imagecache [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/18b0193a1678e1adf0aa298b46c4af424203b75c#033[00m
Nov 29 04:05:45 np0005539552 nova_compute[233724]: 2025-11-29 09:05:45.971 233728 INFO nova.virt.libvirt.imagecache [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Removable base files: /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 /var/lib/nova/instances/_base/d8f87e6814a39f74799532642e7be3e998da5505 /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de /var/lib/nova/instances/_base/18b0193a1678e1adf0aa298b46c4af424203b75c#033[00m
Nov 29 04:05:45 np0005539552 nova_compute[233724]: 2025-11-29 09:05:45.971 233728 INFO nova.virt.libvirt.imagecache [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488#033[00m
Nov 29 04:05:45 np0005539552 nova_compute[233724]: 2025-11-29 09:05:45.971 233728 INFO nova.virt.libvirt.imagecache [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/d8f87e6814a39f74799532642e7be3e998da5505#033[00m
Nov 29 04:05:45 np0005539552 nova_compute[233724]: 2025-11-29 09:05:45.971 233728 INFO nova.virt.libvirt.imagecache [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de#033[00m
Nov 29 04:05:45 np0005539552 nova_compute[233724]: 2025-11-29 09:05:45.971 233728 INFO nova.virt.libvirt.imagecache [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/18b0193a1678e1adf0aa298b46c4af424203b75c#033[00m
Nov 29 04:05:45 np0005539552 nova_compute[233724]: 2025-11-29 09:05:45.972 233728 DEBUG nova.virt.libvirt.imagecache [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Nov 29 04:05:45 np0005539552 nova_compute[233724]: 2025-11-29 09:05:45.972 233728 DEBUG nova.virt.libvirt.imagecache [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Nov 29 04:05:45 np0005539552 nova_compute[233724]: 2025-11-29 09:05:45.972 233728 DEBUG nova.virt.libvirt.imagecache [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Nov 29 04:05:45 np0005539552 nova_compute[233724]: 2025-11-29 09:05:45.972 233728 INFO nova.virt.libvirt.imagecache [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66#033[00m
Nov 29 04:05:46 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:46.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:46.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:47 np0005539552 podman[340567]: 2025-11-29 09:05:47.003389079 +0000 UTC m=+0.093463915 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 04:05:47 np0005539552 podman[340568]: 2025-11-29 09:05:47.019589015 +0000 UTC m=+0.107588795 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 04:05:47 np0005539552 podman[340569]: 2025-11-29 09:05:47.033415246 +0000 UTC m=+0.113212925 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2)
Nov 29 04:05:48 np0005539552 nova_compute[233724]: 2025-11-29 09:05:48.030 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:48 np0005539552 nova_compute[233724]: 2025-11-29 09:05:48.032 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:48.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:48.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:50.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:50.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:51 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:51 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 04:05:51 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.4 total, 600.0 interval#012Cumulative writes: 80K writes, 319K keys, 80K commit groups, 1.0 writes per commit group, ingest: 0.32 GB, 0.05 MB/s#012Cumulative WAL: 80K writes, 30K syncs, 2.68 writes per sync, written: 0.32 GB, 0.05 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 468 writes, 727 keys, 468 commit groups, 1.0 writes per commit group, ingest: 0.24 MB, 0.00 MB/s#012Interval WAL: 468 writes, 228 syncs, 2.05 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 04:05:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:52.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:05:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:52.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:05:53 np0005539552 nova_compute[233724]: 2025-11-29 09:05:53.031 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:53 np0005539552 nova_compute[233724]: 2025-11-29 09:05:53.033 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 04:05:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:54.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 04:05:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:54.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:56 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:56.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:56.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:58 np0005539552 nova_compute[233724]: 2025-11-29 09:05:58.034 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:58.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:05:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:58.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:00.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:00.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:01 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:02.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:02.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:03 np0005539552 nova_compute[233724]: 2025-11-29 09:06:03.034 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:03 np0005539552 nova_compute[233724]: 2025-11-29 09:06:03.037 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:04.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:04.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:06 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:06.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:06.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:07 np0005539552 nova_compute[233724]: 2025-11-29 09:06:07.953 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:06:07 np0005539552 nova_compute[233724]: 2025-11-29 09:06:07.989 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:06:07 np0005539552 nova_compute[233724]: 2025-11-29 09:06:07.989 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:06:07 np0005539552 nova_compute[233724]: 2025-11-29 09:06:07.990 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:06:07 np0005539552 nova_compute[233724]: 2025-11-29 09:06:07.990 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:06:07 np0005539552 nova_compute[233724]: 2025-11-29 09:06:07.990 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:06:08 np0005539552 nova_compute[233724]: 2025-11-29 09:06:08.038 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:06:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:06:08 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1860618842' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:06:08 np0005539552 nova_compute[233724]: 2025-11-29 09:06:08.457 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:06:08 np0005539552 nova_compute[233724]: 2025-11-29 09:06:08.631 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:06:08 np0005539552 nova_compute[233724]: 2025-11-29 09:06:08.632 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4079MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:06:08 np0005539552 nova_compute[233724]: 2025-11-29 09:06:08.632 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:06:08 np0005539552 nova_compute[233724]: 2025-11-29 09:06:08.632 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:06:08 np0005539552 nova_compute[233724]: 2025-11-29 09:06:08.706 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:06:08 np0005539552 nova_compute[233724]: 2025-11-29 09:06:08.706 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:06:08 np0005539552 nova_compute[233724]: 2025-11-29 09:06:08.733 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:06:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:08.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:08.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:09 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:06:09 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/588467193' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:06:09 np0005539552 nova_compute[233724]: 2025-11-29 09:06:09.169 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:06:09 np0005539552 nova_compute[233724]: 2025-11-29 09:06:09.177 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:06:09 np0005539552 nova_compute[233724]: 2025-11-29 09:06:09.197 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:06:09 np0005539552 nova_compute[233724]: 2025-11-29 09:06:09.201 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:06:09 np0005539552 nova_compute[233724]: 2025-11-29 09:06:09.201 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.569s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:06:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:10.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:10.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:11 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:11 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:06:11 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:06:11 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:06:12 np0005539552 nova_compute[233724]: 2025-11-29 09:06:12.172 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:06:12 np0005539552 nova_compute[233724]: 2025-11-29 09:06:12.173 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:06:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:12.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:12.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:12 np0005539552 nova_compute[233724]: 2025-11-29 09:06:12.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:06:13 np0005539552 nova_compute[233724]: 2025-11-29 09:06:13.042 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:06:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:14.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:14.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:15 np0005539552 nova_compute[233724]: 2025-11-29 09:06:15.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:06:16 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:16.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:16.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:17 np0005539552 nova_compute[233724]: 2025-11-29 09:06:17.922 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:06:17 np0005539552 nova_compute[233724]: 2025-11-29 09:06:17.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:06:17 np0005539552 podman[340873]: 2025-11-29 09:06:17.978390452 +0000 UTC m=+0.060426496 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 04:06:17 np0005539552 podman[340872]: 2025-11-29 09:06:17.996992063 +0000 UTC m=+0.087061123 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 04:06:18 np0005539552 podman[340874]: 2025-11-29 09:06:18.018545722 +0000 UTC m=+0.099125537 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:06:18 np0005539552 nova_compute[233724]: 2025-11-29 09:06:18.043 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:18 np0005539552 nova_compute[233724]: 2025-11-29 09:06:18.045 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:18.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:18.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:19 np0005539552 nova_compute[233724]: 2025-11-29 09:06:19.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:06:19 np0005539552 nova_compute[233724]: 2025-11-29 09:06:19.925 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:06:19 np0005539552 nova_compute[233724]: 2025-11-29 09:06:19.925 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:06:19 np0005539552 nova_compute[233724]: 2025-11-29 09:06:19.951 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:06:20 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:06:20 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:06:20 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:06:20 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2671369690' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:06:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 09:06:20.679 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:06:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 09:06:20.680 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:06:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 09:06:20.680 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:06:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 04:06:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:20.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 04:06:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:20.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:20 np0005539552 nova_compute[233724]: 2025-11-29 09:06:20.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:06:21 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:22.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:22.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:22 np0005539552 nova_compute[233724]: 2025-11-29 09:06:22.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:06:22 np0005539552 nova_compute[233724]: 2025-11-29 09:06:22.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 04:06:22 np0005539552 nova_compute[233724]: 2025-11-29 09:06:22.940 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 04:06:23 np0005539552 nova_compute[233724]: 2025-11-29 09:06:23.044 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:23 np0005539552 nova_compute[233724]: 2025-11-29 09:06:23.046 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:23 np0005539552 nova_compute[233724]: 2025-11-29 09:06:23.935 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:06:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:24.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:24.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:26 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:26.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:26.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:28 np0005539552 nova_compute[233724]: 2025-11-29 09:06:28.047 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:28.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:28.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:30.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:30.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:31 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:32.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:32.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:33 np0005539552 nova_compute[233724]: 2025-11-29 09:06:33.049 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:34.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:34.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:36 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:36.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:36.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:38 np0005539552 nova_compute[233724]: 2025-11-29 09:06:38.051 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:06:38 np0005539552 radosgw[83248]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Nov 29 04:06:38 np0005539552 radosgw[83248]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Nov 29 04:06:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:38.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:38.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 04:06:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1786742018' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 04:06:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 04:06:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1786742018' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 04:06:39 np0005539552 radosgw[83248]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Nov 29 04:06:39 np0005539552 radosgw[83248]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Nov 29 04:06:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:40.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:40.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:41 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:42.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:42.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:43 np0005539552 nova_compute[233724]: 2025-11-29 09:06:43.053 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:06:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:44.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:44.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:46 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:46.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:46.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:48 np0005539552 nova_compute[233724]: 2025-11-29 09:06:48.055 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:48.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:48.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:48 np0005539552 podman[341106]: 2025-11-29 09:06:48.964960294 +0000 UTC m=+0.056559753 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 29 04:06:48 np0005539552 podman[341107]: 2025-11-29 09:06:48.972865606 +0000 UTC m=+0.059226014 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Nov 29 04:06:49 np0005539552 podman[341108]: 2025-11-29 09:06:49.00236812 +0000 UTC m=+0.083455946 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 29 04:06:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:50.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:50.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:51 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:51 np0005539552 nova_compute[233724]: 2025-11-29 09:06:51.772 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:06:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:52.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:06:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:52.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:06:53 np0005539552 nova_compute[233724]: 2025-11-29 09:06:53.056 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:53 np0005539552 nova_compute[233724]: 2025-11-29 09:06:53.058 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:54.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:54.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:56 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:56.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:56.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:58 np0005539552 nova_compute[233724]: 2025-11-29 09:06:58.058 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:58.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:06:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:58.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:00.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:00.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:01 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:02.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:02.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:03 np0005539552 nova_compute[233724]: 2025-11-29 09:07:03.059 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:04.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:04.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:06 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:06.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:06.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:08 np0005539552 nova_compute[233724]: 2025-11-29 09:07:08.061 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:07:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:08.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:08.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:09 np0005539552 nova_compute[233724]: 2025-11-29 09:07:09.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:07:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:10.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:10.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:11 np0005539552 nova_compute[233724]: 2025-11-29 09:07:11.518 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:07:11 np0005539552 nova_compute[233724]: 2025-11-29 09:07:11.519 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:07:11 np0005539552 nova_compute[233724]: 2025-11-29 09:07:11.519 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:07:11 np0005539552 nova_compute[233724]: 2025-11-29 09:07:11.520 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:07:11 np0005539552 nova_compute[233724]: 2025-11-29 09:07:11.521 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:07:11 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:11 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:07:11 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/732315077' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:07:11 np0005539552 nova_compute[233724]: 2025-11-29 09:07:11.958 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:07:12 np0005539552 nova_compute[233724]: 2025-11-29 09:07:12.100 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:07:12 np0005539552 nova_compute[233724]: 2025-11-29 09:07:12.102 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4081MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:07:12 np0005539552 nova_compute[233724]: 2025-11-29 09:07:12.102 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:07:12 np0005539552 nova_compute[233724]: 2025-11-29 09:07:12.102 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:07:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:12.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:12 np0005539552 nova_compute[233724]: 2025-11-29 09:07:12.872 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:07:12 np0005539552 nova_compute[233724]: 2025-11-29 09:07:12.873 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:07:12 np0005539552 nova_compute[233724]: 2025-11-29 09:07:12.892 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Refreshing inventories for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 04:07:12 np0005539552 nova_compute[233724]: 2025-11-29 09:07:12.927 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Updating ProviderTree inventory for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 04:07:12 np0005539552 nova_compute[233724]: 2025-11-29 09:07:12.928 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Updating inventory in ProviderTree for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 04:07:12 np0005539552 nova_compute[233724]: 2025-11-29 09:07:12.958 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Refreshing aggregate associations for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 04:07:12 np0005539552 nova_compute[233724]: 2025-11-29 09:07:12.980 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Refreshing trait associations for resource provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371, traits: HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_MMX,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_FDC,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 04:07:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:12.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:12 np0005539552 nova_compute[233724]: 2025-11-29 09:07:12.999 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:07:13 np0005539552 nova_compute[233724]: 2025-11-29 09:07:13.064 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:13 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:07:13 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1086158134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:07:13 np0005539552 nova_compute[233724]: 2025-11-29 09:07:13.489 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:07:13 np0005539552 nova_compute[233724]: 2025-11-29 09:07:13.497 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:07:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:14.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:14.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:15 np0005539552 nova_compute[233724]: 2025-11-29 09:07:15.198 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:07:15 np0005539552 nova_compute[233724]: 2025-11-29 09:07:15.199 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:07:15 np0005539552 nova_compute[233724]: 2025-11-29 09:07:15.200 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.097s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:07:16 np0005539552 nova_compute[233724]: 2025-11-29 09:07:16.200 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:07:16 np0005539552 nova_compute[233724]: 2025-11-29 09:07:16.201 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:07:16 np0005539552 nova_compute[233724]: 2025-11-29 09:07:16.202 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:07:16 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:16.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:16.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:17 np0005539552 nova_compute[233724]: 2025-11-29 09:07:17.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:07:18 np0005539552 nova_compute[233724]: 2025-11-29 09:07:18.066 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:18.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:18 np0005539552 nova_compute[233724]: 2025-11-29 09:07:18.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:07:18 np0005539552 nova_compute[233724]: 2025-11-29 09:07:18.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:07:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:18.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:19 np0005539552 podman[341305]: 2025-11-29 09:07:19.867676851 +0000 UTC m=+0.063814198 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 04:07:19 np0005539552 podman[341304]: 2025-11-29 09:07:19.888969993 +0000 UTC m=+0.078722998 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 04:07:19 np0005539552 podman[341306]: 2025-11-29 09:07:19.903740161 +0000 UTC m=+0.091840561 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 04:07:19 np0005539552 nova_compute[233724]: 2025-11-29 09:07:19.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:07:19 np0005539552 nova_compute[233724]: 2025-11-29 09:07:19.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:07:19 np0005539552 nova_compute[233724]: 2025-11-29 09:07:19.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:07:19 np0005539552 nova_compute[233724]: 2025-11-29 09:07:19.952 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:07:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 09:07:20.679 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:07:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 09:07:20.680 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:07:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 09:07:20.680 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:07:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:20.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:20.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:21 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:22.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:22 np0005539552 nova_compute[233724]: 2025-11-29 09:07:22.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:07:22 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:07:22 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:07:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:07:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:22.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:07:23 np0005539552 nova_compute[233724]: 2025-11-29 09:07:23.068 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:23 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:07:23 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:07:23 np0005539552 nova_compute[233724]: 2025-11-29 09:07:23.919 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:07:24 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:07:24 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:07:24 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:07:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:24.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:25.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:26 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:26.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:27.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:28 np0005539552 nova_compute[233724]: 2025-11-29 09:07:28.069 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:07:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:28.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:29.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:30.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:31.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:31 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:32.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:33.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:33 np0005539552 nova_compute[233724]: 2025-11-29 09:07:33.072 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:07:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:34.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:35.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:36.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:36 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:37.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:38 np0005539552 nova_compute[233724]: 2025-11-29 09:07:38.072 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:07:38 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:07:38 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:07:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:38.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:07:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:39.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:07:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 04:07:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2904963048' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 04:07:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 04:07:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2904963048' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 04:07:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:40.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:41.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:41 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:42.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:43.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:43 np0005539552 nova_compute[233724]: 2025-11-29 09:07:43.073 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:07:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:44.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:45.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:46.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:46 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:47.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:47 np0005539552 nova_compute[233724]: 2025-11-29 09:07:47.918 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:07:48 np0005539552 nova_compute[233724]: 2025-11-29 09:07:48.075 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 04:07:48 np0005539552 nova_compute[233724]: 2025-11-29 09:07:48.076 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 04:07:48 np0005539552 nova_compute[233724]: 2025-11-29 09:07:48.077 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 29 04:07:48 np0005539552 nova_compute[233724]: 2025-11-29 09:07:48.077 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 29 04:07:48 np0005539552 nova_compute[233724]: 2025-11-29 09:07:48.077 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 29 04:07:48 np0005539552 nova_compute[233724]: 2025-11-29 09:07:48.078 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:07:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:48.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:49.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:49 np0005539552 podman[341643]: 2025-11-29 09:07:49.989691809 +0000 UTC m=+0.074353011 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 04:07:49 np0005539552 podman[341642]: 2025-11-29 09:07:49.990176612 +0000 UTC m=+0.075309496 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 04:07:50 np0005539552 podman[341680]: 2025-11-29 09:07:50.100308584 +0000 UTC m=+0.079078138 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 29 04:07:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:50.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:51.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:51 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:52.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:53.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:53 np0005539552 nova_compute[233724]: 2025-11-29 09:07:53.078 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 04:07:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:54.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:55.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:56.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:56 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:57.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:58 np0005539552 nova_compute[233724]: 2025-11-29 09:07:58.081 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 04:07:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:07:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:58.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:07:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:07:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:59.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:00.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:01.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:01 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:02.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:03.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:03 np0005539552 nova_compute[233724]: 2025-11-29 09:08:03.082 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:08:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:04.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:05.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:06.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:06 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:07.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:08 np0005539552 nova_compute[233724]: 2025-11-29 09:08:08.085 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 04:08:08 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:08 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 04:08:08 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:08.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 04:08:09 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:09 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:09 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:09.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:10 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:10 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:10 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:10.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:11 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:11 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:11 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:11.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:11 np0005539552 nova_compute[233724]: 2025-11-29 09:08:11.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:08:11 np0005539552 nova_compute[233724]: 2025-11-29 09:08:11.923 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 04:08:11 np0005539552 nova_compute[233724]: 2025-11-29 09:08:11.923 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:08:11 np0005539552 nova_compute[233724]: 2025-11-29 09:08:11.959 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 04:08:11 np0005539552 nova_compute[233724]: 2025-11-29 09:08:11.960 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 04:08:11 np0005539552 nova_compute[233724]: 2025-11-29 09:08:11.960 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 04:08:11 np0005539552 nova_compute[233724]: 2025-11-29 09:08:11.961 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 04:08:11 np0005539552 nova_compute[233724]: 2025-11-29 09:08:11.961 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 04:08:11 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:12 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:08:12 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1567599661' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:08:12 np0005539552 nova_compute[233724]: 2025-11-29 09:08:12.443 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:08:12 np0005539552 nova_compute[233724]: 2025-11-29 09:08:12.631 233728 WARNING nova.virt.libvirt.driver [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:08:12 np0005539552 nova_compute[233724]: 2025-11-29 09:08:12.633 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4077MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:08:12 np0005539552 nova_compute[233724]: 2025-11-29 09:08:12.633 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:08:12 np0005539552 nova_compute[233724]: 2025-11-29 09:08:12.634 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:08:12 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:12 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:12 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:12.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:13 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:13 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:13 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:13.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:13 np0005539552 nova_compute[233724]: 2025-11-29 09:08:13.087 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:08:13 np0005539552 nova_compute[233724]: 2025-11-29 09:08:13.404 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:08:13 np0005539552 nova_compute[233724]: 2025-11-29 09:08:13.405 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:08:13 np0005539552 nova_compute[233724]: 2025-11-29 09:08:13.790 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:08:14 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:08:14 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1898643229' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:08:14 np0005539552 nova_compute[233724]: 2025-11-29 09:08:14.246 233728 DEBUG oslo_concurrency.processutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:08:14 np0005539552 nova_compute[233724]: 2025-11-29 09:08:14.256 233728 DEBUG nova.compute.provider_tree [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 29c97280-aaf3-4c7f-a78a-1c9e8d025371 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:08:14 np0005539552 nova_compute[233724]: 2025-11-29 09:08:14.282 233728 DEBUG nova.scheduler.client.report [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Inventory has not changed for provider 29c97280-aaf3-4c7f-a78a-1c9e8d025371 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:08:14 np0005539552 nova_compute[233724]: 2025-11-29 09:08:14.285 233728 DEBUG nova.compute.resource_tracker [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:08:14 np0005539552 nova_compute[233724]: 2025-11-29 09:08:14.286 233728 DEBUG oslo_concurrency.lockutils [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:08:14 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:14 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:14 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:14.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:15 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:15 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:15 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:15.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:16 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:16 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:16 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:16.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:16 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:17 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:17 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:17 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:17.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:18 np0005539552 nova_compute[233724]: 2025-11-29 09:08:18.091 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:08:18 np0005539552 nova_compute[233724]: 2025-11-29 09:08:18.092 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:18 np0005539552 nova_compute[233724]: 2025-11-29 09:08:18.092 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 29 04:08:18 np0005539552 nova_compute[233724]: 2025-11-29 09:08:18.092 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 04:08:18 np0005539552 nova_compute[233724]: 2025-11-29 09:08:18.092 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 04:08:18 np0005539552 nova_compute[233724]: 2025-11-29 09:08:18.094 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:18 np0005539552 nova_compute[233724]: 2025-11-29 09:08:18.288 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:08:18 np0005539552 nova_compute[233724]: 2025-11-29 09:08:18.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:08:18 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:18 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:18 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:18.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:19 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:19 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:19 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:19.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:19 np0005539552 nova_compute[233724]: 2025-11-29 09:08:19.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:08:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 09:08:20.681 143400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:08:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 09:08:20.681 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:08:20 np0005539552 ovn_metadata_agent[143394]: 2025-11-29 09:08:20.681 143400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:08:20 np0005539552 nova_compute[233724]: 2025-11-29 09:08:20.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:08:20 np0005539552 nova_compute[233724]: 2025-11-29 09:08:20.924 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:08:20 np0005539552 nova_compute[233724]: 2025-11-29 09:08:20.925 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:08:20 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:20 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:20 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:20.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:20 np0005539552 nova_compute[233724]: 2025-11-29 09:08:20.945 233728 DEBUG nova.compute.manager [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:08:20 np0005539552 nova_compute[233724]: 2025-11-29 09:08:20.945 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:08:20 np0005539552 podman[341824]: 2025-11-29 09:08:20.994635886 +0000 UTC m=+0.085284425 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 04:08:21 np0005539552 podman[341825]: 2025-11-29 09:08:21.014443829 +0000 UTC m=+0.094719619 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:08:21 np0005539552 podman[341826]: 2025-11-29 09:08:21.045385311 +0000 UTC m=+0.121831008 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 04:08:21 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:21 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:21 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:21.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:21 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:22 np0005539552 nova_compute[233724]: 2025-11-29 09:08:22.924 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:08:22 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:22 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:22 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:22.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:23 np0005539552 nova_compute[233724]: 2025-11-29 09:08:23.092 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:23 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:23 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:23 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:23.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:23 np0005539552 nova_compute[233724]: 2025-11-29 09:08:23.094 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:23 np0005539552 nova_compute[233724]: 2025-11-29 09:08:23.919 233728 DEBUG oslo_service.periodic_task [None req-02d81399-d62a-44d0-ab1a-f8d382ed25f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:08:24 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:24 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:24 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:24.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:25 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:25 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:25 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:25.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:26 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:26 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:26 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:26.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:26 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:27 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:27 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:27 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:27.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:28 np0005539552 nova_compute[233724]: 2025-11-29 09:08:28.094 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:28 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:28 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:28 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:28.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:29 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:29 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:29 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:29.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:30 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:30 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:30 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:30.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:31 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:31 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:31 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:31.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:31 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:32 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:32 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:32 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:32.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:33 np0005539552 nova_compute[233724]: 2025-11-29 09:08:33.096 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:33 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:33 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:33 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:33.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:34 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:34 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:34 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:34.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:35 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:35 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:35 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:35.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:36 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:36 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:36 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:36.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:37 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:37 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:37 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:37 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:37.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:38 np0005539552 nova_compute[233724]: 2025-11-29 09:08:38.098 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:38 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:38 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:38 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:38.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:39 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:39 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:39 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:39.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 04:08:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/118252547' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 04:08:39 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 04:08:39 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/118252547' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 04:08:39 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:08:39 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:08:39 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:08:40 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:40 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:40 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:40.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:41 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:41 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:41 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:41.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:41 np0005539552 systemd-logind[788]: New session 74 of user zuul.
Nov 29 04:08:41 np0005539552 systemd[1]: Started Session 74 of User zuul.
Nov 29 04:08:42 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:42 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:42 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:42 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:42.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:43 np0005539552 nova_compute[233724]: 2025-11-29 09:08:43.101 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:43 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:43 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:43 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:43.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:44 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:44 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:44 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:44.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:45 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:45 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:45 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:45.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:45 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 29 04:08:45 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/415105541' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 04:08:46 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:46 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:46 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:46.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:47 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:47 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:47 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:47 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:47.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:47 np0005539552 ovs-vsctl[342418]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 29 04:08:48 np0005539552 nova_compute[233724]: 2025-11-29 09:08:48.102 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:48 np0005539552 virtqemud[233098]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 29 04:08:48 np0005539552 virtqemud[233098]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 29 04:08:48 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:48 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:48 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:48.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:48 np0005539552 virtqemud[233098]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 29 04:08:49 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:49 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:49 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:49.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:49 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati asok_command: cache status {prefix=cache status} (starting...)
Nov 29 04:08:49 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati Can't run that command on an inactive MDS!
Nov 29 04:08:49 np0005539552 lvm[342751]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 04:08:49 np0005539552 lvm[342751]: VG ceph_vg0 finished
Nov 29 04:08:49 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati asok_command: client ls {prefix=client ls} (starting...)
Nov 29 04:08:49 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati Can't run that command on an inactive MDS!
Nov 29 04:08:50 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati asok_command: damage ls {prefix=damage ls} (starting...)
Nov 29 04:08:50 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati Can't run that command on an inactive MDS!
Nov 29 04:08:50 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 29 04:08:50 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/463274311' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 04:08:50 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati asok_command: dump loads {prefix=dump loads} (starting...)
Nov 29 04:08:50 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati Can't run that command on an inactive MDS!
Nov 29 04:08:50 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:08:50 np0005539552 ceph-mon[77121]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:08:50 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 29 04:08:50 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati Can't run that command on an inactive MDS!
Nov 29 04:08:50 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 29 04:08:50 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati Can't run that command on an inactive MDS!
Nov 29 04:08:50 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:08:50 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1634044299' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:08:50 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 29 04:08:50 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati Can't run that command on an inactive MDS!
Nov 29 04:08:50 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 29 04:08:50 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati Can't run that command on an inactive MDS!
Nov 29 04:08:50 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:50 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:50 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:50.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:51 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Nov 29 04:08:51 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/819835617' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 29 04:08:51 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 29 04:08:51 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati Can't run that command on an inactive MDS!
Nov 29 04:08:51 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:51 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:51 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:51.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:51 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 29 04:08:51 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati Can't run that command on an inactive MDS!
Nov 29 04:08:51 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati asok_command: ops {prefix=ops} (starting...)
Nov 29 04:08:51 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati Can't run that command on an inactive MDS!
Nov 29 04:08:51 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Nov 29 04:08:51 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/798364396' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 29 04:08:51 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Nov 29 04:08:51 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2048757827' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 29 04:08:51 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 29 04:08:51 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2621918831' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 04:08:51 np0005539552 podman[343150]: 2025-11-29 09:08:51.991164989 +0000 UTC m=+0.067065224 container health_status 44acf3e434c84a1b4fa12cfb84a76d103154db939e9a33618bdbf1be2e3b93ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent)
Nov 29 04:08:51 np0005539552 podman[343148]: 2025-11-29 09:08:51.991314863 +0000 UTC m=+0.072331186 container health_status 1b1c961e3f22327abf9467bc58bc9498a1fefd106f90e36990a72b71c78aaca0 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 04:08:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:52 np0005539552 podman[343151]: 2025-11-29 09:08:52.016434749 +0000 UTC m=+0.094540904 container health_status 6495859db7e5497375c22e7d02ed56b31a8bf25ec02701b99436a380fa250535 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Nov 29 04:08:52 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati asok_command: session ls {prefix=session ls} (starting...)
Nov 29 04:08:52 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati Can't run that command on an inactive MDS!
Nov 29 04:08:52 np0005539552 ceph-mds[83636]: mds.cephfs.compute-2.mmoati asok_command: status {prefix=status} (starting...)
Nov 29 04:08:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 29 04:08:52 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1111632099' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 04:08:52 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 29 04:08:52 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1246151770' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 04:08:52 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:52 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:52 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:52.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:53 np0005539552 nova_compute[233724]: 2025-11-29 09:08:53.103 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:53 np0005539552 nova_compute[233724]: 2025-11-29 09:08:53.106 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 29 04:08:53 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4140034784' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 04:08:53 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:53 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:53 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:53.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Nov 29 04:08:53 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/88922052' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 29 04:08:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 04:08:53 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2887102471' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 04:08:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 29 04:08:53 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/718279318' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 29 04:08:53 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Nov 29 04:08:53 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2014720520' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 29 04:08:54 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 29 04:08:54 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3869381312' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 04:08:54 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Nov 29 04:08:54 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2280420758' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 29 04:08:54 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:54 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:54 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:54.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:55 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:55 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:55 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:55.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:55 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 29 04:08:55 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4261497345' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 04:08:56 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 29 04:08:56 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1604524500' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 389 ms_handle_reset con 0x55cbaf71b000 session 0x55cbacc35c20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 389 ms_handle_reset con 0x55cbb0675c00 session 0x55cbac60fe00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6180408 data_alloc: 285212672 data_used: 71872512
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 389 ms_handle_reset con 0x55cbaddbf800 session 0x55cbafb2f860
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 538722304 unmapped: 59056128 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 389 ms_handle_reset con 0x55cbaddc0400 session 0x55cbad03ba40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 538722304 unmapped: 59056128 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 389 heartbeat osd_stat(store_statfs(0x1956f5000/0x0/0x1bfc00000, data 0x858150d/0x8789000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 389 handle_osd_map epochs [389,390], i have 389, src has [1,390]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 538722304 unmapped: 59056128 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 390 ms_handle_reset con 0x55cbaf71b000 session 0x55cbaf40cb40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 390 ms_handle_reset con 0x55cbb5fe2000 session 0x55cbafb0de00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 538738688 unmapped: 59039744 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 390 heartbeat osd_stat(store_statfs(0x1956d9000/0x0/0x1bfc00000, data 0x859a116/0x87a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 390 handle_osd_map epochs [390,391], i have 390, src has [1,391]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 390 handle_osd_map epochs [391,391], i have 391, src has [1,391]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 391 ms_handle_reset con 0x55cbbe337800 session 0x55cbacdbde00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 538787840 unmapped: 58990592 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 391 ms_handle_reset con 0x55cbaddbf800 session 0x55cbacffa1e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 391 ms_handle_reset con 0x55cbb79ad000 session 0x55cbadd8f0e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 391 ms_handle_reset con 0x55cbaf72b000 session 0x55cbacffa960
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 391 ms_handle_reset con 0x55cbaddc0400 session 0x55cbacf723c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5962050 data_alloc: 268435456 data_used: 53080064
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 534847488 unmapped: 62930944 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 391 ms_handle_reset con 0x55cbade17000 session 0x55cbacffb4a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 5.128280640s of 10.215465546s, submitted: 330
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 391 ms_handle_reset con 0x55cbaddbf800 session 0x55cbadab2b40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 534888448 unmapped: 62889984 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 391 heartbeat osd_stat(store_statfs(0x1966aa000/0x0/0x1bfc00000, data 0x75cbff8/0x77d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 534888448 unmapped: 62889984 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 391 ms_handle_reset con 0x55cbb315f800 session 0x55cbacf2a780
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 536576000 unmapped: 61202432 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 391 ms_handle_reset con 0x55cbaddc0400 session 0x55cbadab3a40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 536576000 unmapped: 61202432 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 391 ms_handle_reset con 0x55cbade17000 session 0x55cbad92d4a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6013138 data_alloc: 268435456 data_used: 60469248
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 536576000 unmapped: 61202432 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 391 heartbeat osd_stat(store_statfs(0x1968ac000/0x0/0x1bfc00000, data 0x73cadf8/0x75d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 536576000 unmapped: 61202432 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 391 handle_osd_map epochs [391,392], i have 391, src has [1,392]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 536576000 unmapped: 61202432 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 536584192 unmapped: 61194240 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 392 heartbeat osd_stat(store_statfs(0x1968a3000/0x0/0x1bfc00000, data 0x73d1a01/0x75da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 536584192 unmapped: 61194240 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 392 ms_handle_reset con 0x55cbaf72b000 session 0x55cbacf2b0e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 392 ms_handle_reset con 0x55cbaddbf800 session 0x55cbadb085a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 392 ms_handle_reset con 0x55cbaddc0400 session 0x55cbafb2e1e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6016892 data_alloc: 268435456 data_used: 60477440
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 536584192 unmapped: 61194240 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 392 ms_handle_reset con 0x55cbade17000 session 0x55cbad8932c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 392 ms_handle_reset con 0x55cbb79ad000 session 0x55cbb2c1b2c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 392 ms_handle_reset con 0x55cbb315f800 session 0x55cbacf2a3c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 392 ms_handle_reset con 0x55cbaddbf800 session 0x55cbacdc0000
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 392 ms_handle_reset con 0x55cbaddc0400 session 0x55cbadd6a960
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 392 ms_handle_reset con 0x55cbade17000 session 0x55cbaee803c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 392 ms_handle_reset con 0x55cbb79ad000 session 0x55cbafb2f2c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.930759430s of 10.084986687s, submitted: 47
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 392 ms_handle_reset con 0x55cbaddbfc00 session 0x55cbad03be00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 551075840 unmapped: 46702592 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 551075840 unmapped: 46702592 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 392 ms_handle_reset con 0x55cbaddbfc00 session 0x55cbaf7bc780
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 551444480 unmapped: 46333952 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 553492480 unmapped: 44285952 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 392 heartbeat osd_stat(store_statfs(0x1952db000/0x0/0x1bfc00000, data 0x8998a32/0x8ba3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [0,0,0,0,1])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6275807 data_alloc: 301989888 data_used: 83369984
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 392 handle_osd_map epochs [392,393], i have 392, src has [1,393]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 550305792 unmapped: 47472640 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 393 ms_handle_reset con 0x55cbb79ad000 session 0x55cbadd8e960
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 544849920 unmapped: 52928512 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 393 heartbeat osd_stat(store_statfs(0x196598000/0x0/0x1bfc00000, data 0x76da747/0x78e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 544849920 unmapped: 52928512 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 544866304 unmapped: 52912128 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 544866304 unmapped: 52912128 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6115293 data_alloc: 285212672 data_used: 67870720
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 544866304 unmapped: 52912128 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 544866304 unmapped: 52912128 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 544866304 unmapped: 52912128 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 393 heartbeat osd_stat(store_statfs(0x196594000/0x0/0x1bfc00000, data 0x76df747/0x78ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 544866304 unmapped: 52912128 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 544866304 unmapped: 52912128 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.009008408s of 13.678808212s, submitted: 107
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6114365 data_alloc: 285212672 data_used: 67874816
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 544866304 unmapped: 52912128 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 544866304 unmapped: 52912128 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 393 heartbeat osd_stat(store_statfs(0x196592000/0x0/0x1bfc00000, data 0x76e0747/0x78eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 393 handle_osd_map epochs [393,394], i have 393, src has [1,394]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 393 handle_osd_map epochs [394,394], i have 394, src has [1,394]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545087488 unmapped: 52690944 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 394 heartbeat osd_stat(store_statfs(0x196589000/0x0/0x1bfc00000, data 0x76e8350/0x78f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545292288 unmapped: 52486144 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545292288 unmapped: 52486144 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 394 heartbeat osd_stat(store_statfs(0x196589000/0x0/0x1bfc00000, data 0x76e8350/0x78f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6153235 data_alloc: 285212672 data_used: 70410240
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545308672 unmapped: 52469760 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545308672 unmapped: 52469760 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 394 heartbeat osd_stat(store_statfs(0x196589000/0x0/0x1bfc00000, data 0x76e8350/0x78f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545349632 unmapped: 52428800 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545382400 unmapped: 52396032 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 394 heartbeat osd_stat(store_statfs(0x196585000/0x0/0x1bfc00000, data 0x76ed350/0x78f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 394 heartbeat osd_stat(store_statfs(0x196585000/0x0/0x1bfc00000, data 0x76ed350/0x78f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545382400 unmapped: 52396032 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6149047 data_alloc: 285212672 data_used: 70406144
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545382400 unmapped: 52396032 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545382400 unmapped: 52396032 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.206936836s of 12.290732384s, submitted: 43
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545628160 unmapped: 52150272 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545660928 unmapped: 52117504 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbaddbf800 session 0x55cbaeb51a40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 394 heartbeat osd_stat(store_statfs(0x19637e000/0x0/0x1bfc00000, data 0x78f1550/0x7afe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545660928 unmapped: 52117504 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbb6a90c00 session 0x55cbaeb503c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbad910800 session 0x55cbacdc1860
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6204428 data_alloc: 285212672 data_used: 70393856
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545767424 unmapped: 52011008 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbad910800 session 0x55cbadb09860
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545775616 unmapped: 52002816 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545783808 unmapped: 51994624 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbb5fe3400 session 0x55cbadb090e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545783808 unmapped: 51994624 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 394 heartbeat osd_stat(store_statfs(0x1960c5000/0x0/0x1bfc00000, data 0x7bab2ee/0x7db6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbaddbf800 session 0x55cbaf7bcb40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbaddbfc00 session 0x55cbafb825a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545783808 unmapped: 51994624 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbb6a90c00 session 0x55cbadd8f2c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbad910800 session 0x55cbaf799c20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6190146 data_alloc: 285212672 data_used: 70373376
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545783808 unmapped: 51994624 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbaddbf800 session 0x55cbacf2af00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbaddbfc00 session 0x55cbafb82b40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545783808 unmapped: 51994624 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.875689507s of 10.106302261s, submitted: 86
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545783808 unmapped: 51994624 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 548405248 unmapped: 49373184 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 394 heartbeat osd_stat(store_statfs(0x1960c4000/0x0/0x1bfc00000, data 0x7bae311/0x7dba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 548413440 unmapped: 49364992 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbaddc0400 session 0x55cbacf730e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbade17000 session 0x55cbafaf0b40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6045295 data_alloc: 285212672 data_used: 65011712
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbade17000 session 0x55cbbadf52c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545021952 unmapped: 52756480 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545021952 unmapped: 52756480 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545021952 unmapped: 52756480 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545021952 unmapped: 52756480 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbaf71b000 session 0x55cbadd8f0e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbb5fe2000 session 0x55cbafb0de00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 394 heartbeat osd_stat(store_statfs(0x196d27000/0x0/0x1bfc00000, data 0x6f4c2e0/0x7156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545046528 unmapped: 52731904 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbaddbfc00 session 0x55cbaf7985a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbaddbf800 session 0x55cbaee64f00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbaddc0400 session 0x55cbaf7bc5a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5848268 data_alloc: 268435456 data_used: 56311808
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 534716416 unmapped: 63062016 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbaddbfc00 session 0x55cbaf40c5a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbade17000 session 0x55cbacf1cd20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 534716416 unmapped: 63062016 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 394 heartbeat osd_stat(store_statfs(0x197e95000/0x0/0x1bfc00000, data 0x5ddf2e0/0x5fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 534716416 unmapped: 63062016 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 534716416 unmapped: 63062016 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 534716416 unmapped: 63062016 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 394 heartbeat osd_stat(store_statfs(0x197e96000/0x0/0x1bfc00000, data 0x5ddf27e/0x5fe8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.425072670s of 12.876208305s, submitted: 82
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5909247 data_alloc: 268435456 data_used: 56352768
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 538656768 unmapped: 59121664 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 394 heartbeat osd_stat(store_statfs(0x197c6f000/0x0/0x1bfc00000, data 0x600627e/0x620f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [0,0,1])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 538656768 unmapped: 59121664 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 539738112 unmapped: 58040320 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 539738112 unmapped: 58040320 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbbe336400 session 0x55cbaf7990e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbad0de400 session 0x55cbaf40c960
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 394 heartbeat osd_stat(store_statfs(0x197723000/0x0/0x1bfc00000, data 0x654927e/0x6752000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbaddbfc00 session 0x55cbb0b421e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 529760256 unmapped: 68018176 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5676237 data_alloc: 251658240 data_used: 46186496
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 529760256 unmapped: 68018176 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 529760256 unmapped: 68018176 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 394 heartbeat osd_stat(store_statfs(0x198b65000/0x0/0x1bfc00000, data 0x511124b/0x5318000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbaddc0400 session 0x55cbac60f680
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 529768448 unmapped: 68009984 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 394 heartbeat osd_stat(store_statfs(0x198b63000/0x0/0x1bfc00000, data 0x511224b/0x5319000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 394 ms_handle_reset con 0x55cbad910800 session 0x55cbaf799c20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 529776640 unmapped: 68001792 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 394 handle_osd_map epochs [394,395], i have 394, src has [1,395]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 395 ms_handle_reset con 0x55cbade17000 session 0x55cbacfa2780
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 395 ms_handle_reset con 0x55cbaf71b000 session 0x55cbbadf4f00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 529776640 unmapped: 68001792 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 395 ms_handle_reset con 0x55cbbe336400 session 0x55cbacfa3860
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5680674 data_alloc: 251658240 data_used: 46305280
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.071531296s of 10.524563789s, submitted: 143
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 395 ms_handle_reset con 0x55cbaf71b000 session 0x55cbad92c780
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 529776640 unmapped: 68001792 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 395 handle_osd_map epochs [395,396], i have 395, src has [1,396]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 396 ms_handle_reset con 0x55cbad910800 session 0x55cbaf40cd20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 530833408 unmapped: 66945024 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 396 ms_handle_reset con 0x55cbaddbf800 session 0x55cbaf54d860
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 396 ms_handle_reset con 0x55cbaddbfc00 session 0x55cbacdbde00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 396 ms_handle_reset con 0x55cbaddbfc00 session 0x55cbaf798000
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 396 heartbeat osd_stat(store_statfs(0x198b57000/0x0/0x1bfc00000, data 0x5119c73/0x5321000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 530833408 unmapped: 66945024 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 396 ms_handle_reset con 0x55cbad910800 session 0x55cbad892f00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 396 ms_handle_reset con 0x55cbaddbf800 session 0x55cbacc35c20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 530833408 unmapped: 66945024 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 530833408 unmapped: 66945024 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5684084 data_alloc: 251658240 data_used: 46305280
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 530833408 unmapped: 66945024 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 396 heartbeat osd_stat(store_statfs(0x198b57000/0x0/0x1bfc00000, data 0x5119c73/0x5321000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 530833408 unmapped: 66945024 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 396 handle_osd_map epochs [396,397], i have 396, src has [1,397]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 530841600 unmapped: 66936832 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 heartbeat osd_stat(store_statfs(0x198b59000/0x0/0x1bfc00000, data 0x511b87c/0x5324000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 530849792 unmapped: 66928640 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbaf71b000 session 0x55cbafb82d20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 heartbeat osd_stat(store_statfs(0x198b59000/0x0/0x1bfc00000, data 0x511b87c/0x5324000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 530849792 unmapped: 66928640 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5687156 data_alloc: 251658240 data_used: 46305280
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.944666862s of 10.055811882s, submitted: 32
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbbe336400 session 0x55cbafaf14a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 530857984 unmapped: 66920448 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 530857984 unmapped: 66920448 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbad910800 session 0x55cbacdc0f00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbaddbf800 session 0x55cbad03b680
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 531988480 unmapped: 65789952 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbaeec3c00 session 0x55cbacffa5a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbb08a4c00 session 0x55cbaf7981e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 heartbeat osd_stat(store_statfs(0x19838e000/0x0/0x1bfc00000, data 0x58e787c/0x5af0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 531996672 unmapped: 65781760 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbaddbfc00 session 0x55cbafb901e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 532004864 unmapped: 65773568 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 heartbeat osd_stat(store_statfs(0x1983b2000/0x0/0x1bfc00000, data 0x58c387c/0x5acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbad910800 session 0x55cbacc9e5a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5739375 data_alloc: 251658240 data_used: 46194688
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 532004864 unmapped: 65773568 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbaddbf800 session 0x55cbafaf0960
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbaeec3c00 session 0x55cbb2c1b680
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbb08a4c00 session 0x55cbaf7bd860
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbaf71b000 session 0x55cbad92cf00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbaf71b000 session 0x55cbb2c1be00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbaddbf800 session 0x55cbac60eb40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 532291584 unmapped: 65486848 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbb1690000 session 0x55cbacdc0960
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbaeec3c00 session 0x55cbadb09c20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbad910800 session 0x55cbacdbd0e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbb6a91400 session 0x55cbafb82780
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbad910800 session 0x55cbafaf0960
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbaddbf800 session 0x55cbafb901e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbaeec3c00 session 0x55cbad03b680
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbaf71b000 session 0x55cbad892f00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbaf71b000 session 0x55cbaf54d860
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 532307968 unmapped: 65470464 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 532316160 unmapped: 65462272 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbaeec3c00 session 0x55cbaf7bcb40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 heartbeat osd_stat(store_statfs(0x197e52000/0x0/0x1bfc00000, data 0x5e218ee/0x602c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbb6a91400 session 0x55cbadb090e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 532316160 unmapped: 65462272 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbaeee8800 session 0x55cbaeb51860
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbade16800 session 0x55cbadd8f680
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbaeec3c00 session 0x55cbadb09860
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5852778 data_alloc: 268435456 data_used: 54374400
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbaeee8800 session 0x55cbacdc1860
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.583312988s of 10.015307426s, submitted: 97
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbade16800 session 0x55cbaf7992c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 532324352 unmapped: 65454080 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 532324352 unmapped: 65454080 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 532332544 unmapped: 65445888 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbb1690000 session 0x55cbacdc0d20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 heartbeat osd_stat(store_statfs(0x197e75000/0x0/0x1bfc00000, data 0x5dfd8fe/0x6009000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [0,1])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 529317888 unmapped: 68460544 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbb08a4c00 session 0x55cbafaf1e00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 529317888 unmapped: 68460544 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbb08a4c00 session 0x55cbadd8f0e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbade16800 session 0x55cbaee64f00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5552550 data_alloc: 251658240 data_used: 39444480
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 529317888 unmapped: 68460544 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbaf71b000 session 0x55cbacc354a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbb6a91400 session 0x55cbadd8e960
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 heartbeat osd_stat(store_statfs(0x198cd3000/0x0/0x1bfc00000, data 0x4f9e88c/0x51a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbaeee8800 session 0x55cbaf7bc780
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 heartbeat osd_stat(store_statfs(0x1993a0000/0x0/0x1bfc00000, data 0x44d188c/0x46db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbade16800 session 0x55cbad03be00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 ms_handle_reset con 0x55cbaeec3c00 session 0x55cbacdbde00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 517742592 unmapped: 80035840 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 517742592 unmapped: 80035840 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 517742592 unmapped: 80035840 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 handle_osd_map epochs [397,398], i have 397, src has [1,398]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 397 handle_osd_map epochs [398,398], i have 398, src has [1,398]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 518529024 unmapped: 79249408 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 398 heartbeat osd_stat(store_statfs(0x19b0d5000/0x0/0x1bfc00000, data 0x28a17a8/0x2aa7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 398 ms_handle_reset con 0x55cbaf71b000 session 0x55cbafb821e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5269632 data_alloc: 234881024 data_used: 24211456
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 517472256 unmapped: 80306176 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 398 heartbeat osd_stat(store_statfs(0x19aef5000/0x0/0x1bfc00000, data 0x2d8250f/0x2f88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.311944008s of 10.724480629s, submitted: 151
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 517472256 unmapped: 80306176 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 398 heartbeat osd_stat(store_statfs(0x19aee8000/0x0/0x1bfc00000, data 0x2d8f50f/0x2f95000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 517472256 unmapped: 80306176 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 517472256 unmapped: 80306176 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 517472256 unmapped: 80306176 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 398 heartbeat osd_stat(store_statfs(0x19aee8000/0x0/0x1bfc00000, data 0x2d8f50f/0x2f95000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5278122 data_alloc: 234881024 data_used: 24236032
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 517472256 unmapped: 80306176 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 398 ms_handle_reset con 0x55cbb08a4c00 session 0x55cbacf730e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 398 ms_handle_reset con 0x55cbb6a91400 session 0x55cbaf54c780
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 398 ms_handle_reset con 0x55cbb6a91400 session 0x55cbaf54c960
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 398 ms_handle_reset con 0x55cbade16800 session 0x55cbaf54dc20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 398 ms_handle_reset con 0x55cbaeec3c00 session 0x55cbacf730e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 398 ms_handle_reset con 0x55cbaf71b000 session 0x55cbacc354a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 398 ms_handle_reset con 0x55cbb08a4c00 session 0x55cbaee64f00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 517931008 unmapped: 79847424 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 398 ms_handle_reset con 0x55cbade16800 session 0x55cbafaf1e00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 398 ms_handle_reset con 0x55cbaeec3c00 session 0x55cbacdc0d20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 398 handle_osd_map epochs [398,399], i have 398, src has [1,399]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 517931008 unmapped: 79847424 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x19a2b6000/0x0/0x1bfc00000, data 0x39bf128/0x3bc7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 517931008 unmapped: 79847424 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 517931008 unmapped: 79847424 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaf71b000 session 0x55cbacdc1860
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb6a91400 session 0x55cbadb09860
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb1690000 session 0x55cbadd8f680
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5384906 data_alloc: 234881024 data_used: 24244224
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeec3c00 session 0x55cbacf2a3c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 517799936 unmapped: 79978496 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaf71b000 session 0x55cbad8932c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb6a91400 session 0x55cbacf2b0e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaddc0400 session 0x55cbafb0cf00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbade17000 session 0x55cbaf7985a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaddc0400 session 0x55cbad03ab40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 517857280 unmapped: 79921152 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbade17000 session 0x55cbafaf1680
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.473110199s of 10.862102509s, submitted: 99
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbade16800 session 0x55cbaeb51860
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeec3c00 session 0x55cbad892f00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 517857280 unmapped: 79921152 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaf71b000 session 0x55cbad03b680
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaf71b000 session 0x55cbafaf0960
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x199424000/0x0/0x1bfc00000, data 0x485019a/0x4a5a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 518004736 unmapped: 79773696 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaddbf800 session 0x55cbbadf4f00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbad910800 session 0x55cbad92c780
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 518004736 unmapped: 79773696 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbade17000 session 0x55cbb0b425a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5410732 data_alloc: 234881024 data_used: 22941696
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeec3c00 session 0x55cbac60e780
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 511631360 unmapped: 86147072 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbad910800 session 0x55cbaee643c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 511631360 unmapped: 86147072 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaddbf800 session 0x55cbaf54cd20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaddc0400 session 0x55cbb2c1be00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbade16800 session 0x55cbaee801e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 511631360 unmapped: 86147072 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x19a0b5000/0x0/0x1bfc00000, data 0x3bbd1cd/0x3dc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 511639552 unmapped: 86138880 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbade17000 session 0x55cbafb0c5a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaf71b000 session 0x55cbafb0cd20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 510623744 unmapped: 87154688 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5263850 data_alloc: 234881024 data_used: 15843328
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x19ac67000/0x0/0x1bfc00000, data 0x2f6b1ad/0x3175000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 510623744 unmapped: 87154688 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 510607360 unmapped: 87171072 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 510484480 unmapped: 87293952 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb5fe3400 session 0x55cbaf7983c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb79ad000 session 0x55cbacffaf00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.106935501s of 11.330339432s, submitted: 72
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaddbf800 session 0x55cbadd8e000
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 510492672 unmapped: 87285760 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 510492672 unmapped: 87285760 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5240513 data_alloc: 234881024 data_used: 26144768
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 510492672 unmapped: 87285760 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x19b92d000/0x0/0x1bfc00000, data 0x234817a/0x2550000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 510492672 unmapped: 87285760 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 510492672 unmapped: 87285760 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x19b92d000/0x0/0x1bfc00000, data 0x234817a/0x2550000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 510492672 unmapped: 87285760 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 510492672 unmapped: 87285760 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x19b92d000/0x0/0x1bfc00000, data 0x234817a/0x2550000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x21d7f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5240513 data_alloc: 234881024 data_used: 26144768
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 510492672 unmapped: 87285760 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 510492672 unmapped: 87285760 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 510492672 unmapped: 87285760 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.731653214s of 10.003818512s, submitted: 100
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 514678784 unmapped: 83099648 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 514760704 unmapped: 83017728 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x19abfd000/0x0/0x1bfc00000, data 0x2c6917a/0x2e71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5331677 data_alloc: 234881024 data_used: 27185152
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 514760704 unmapped: 83017728 heap: 597778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaddc0400 session 0x55cbaf7bcd20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaddbf800 session 0x55cbb0b42960
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaf71b000 session 0x55cbb0b42b40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb5fe3400 session 0x55cbb2c1ab40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 526884864 unmapped: 74571776 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb79ad000 session 0x55cbaee81a40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbade16800 session 0x55cbbadf4960
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaddbf800 session 0x55cbad92c000
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaf71b000 session 0x55cbacc34d20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb5fe3400 session 0x55cbacf2a960
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 515047424 unmapped: 86409216 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb79ad000 session 0x55cbaee643c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb6a91400 session 0x55cbacdc0d20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaddbf800 session 0x55cbadb09e00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaf71b000 session 0x55cbacf2be00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 515047424 unmapped: 86409216 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb5fe3400 session 0x55cbacdc1c20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x19967e000/0x0/0x1bfc00000, data 0x41e71dc/0x43f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb79ad000 session 0x55cbaee801e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 515244032 unmapped: 86212608 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb5fe2000 session 0x55cbb0b434a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5535977 data_alloc: 234881024 data_used: 27185152
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 519135232 unmapped: 82321408 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaf71b000 session 0x55cbb2c1b4a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaddbf800 session 0x55cbafb82780
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb5fe3400 session 0x55cbafb2e1e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb79ad000 session 0x55cbb2c1a3c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb2c3d800 session 0x55cbacfdfe00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaddbf800 session 0x55cbaee805a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaf71b000 session 0x55cbaee64f00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 519233536 unmapped: 82223104 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb5fe3400 session 0x55cbafb0cd20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb79ad000 session 0x55cbafb0d0e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeec3400 session 0x55cbad03be00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaddbf800 session 0x55cbacf72000
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaf71b000 session 0x55cbadd8fe00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb0550c00 session 0x55cbafb834a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb1dd6800 session 0x55cbaf40d860
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 515612672 unmapped: 85843968 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 516751360 unmapped: 84705280 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.082863808s of 10.739765167s, submitted: 171
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb7cc5000 session 0x55cbb0b42960
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x1984e9000/0x0/0x1bfc00000, data 0x5379271/0x5585000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 516915200 unmapped: 84541440 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5799410 data_alloc: 251658240 data_used: 42762240
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 518610944 unmapped: 82845696 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522133504 unmapped: 79323136 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaf71b000 session 0x55cbaf40cf00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522133504 unmapped: 79323136 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb1dd6800 session 0x55cbbadf45a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x1984e9000/0x0/0x1bfc00000, data 0x5379271/0x5585000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522133504 unmapped: 79323136 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeee4000 session 0x55cbaf7bd860
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 525082624 unmapped: 76374016 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeee6800 session 0x55cbacffa5a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeab1400 session 0x55cbafb0c780
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeab1400 session 0x55cbafb82d20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5875503 data_alloc: 268435456 data_used: 52334592
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 525254656 unmapped: 76201984 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaddc0800 session 0x55cbb0b42780
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb0550c00 session 0x55cbbadf4d20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 525271040 unmapped: 76185600 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaf71b000 session 0x55cbaf799c20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x1985ab000/0x0/0x1bfc00000, data 0x4fb52a4/0x51c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 527613952 unmapped: 73842688 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 527613952 unmapped: 73842688 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.840907097s of 10.001074791s, submitted: 46
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 536084480 unmapped: 65372160 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6032979 data_alloc: 268435456 data_used: 60067840
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x19781f000/0x0/0x1bfc00000, data 0x603d2a4/0x624b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 538886144 unmapped: 62570496 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 538902528 unmapped: 62554112 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x19777f000/0x0/0x1bfc00000, data 0x60cb2a4/0x62d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 544604160 unmapped: 56852480 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaddbf800 session 0x55cbb2c1ab40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545333248 unmapped: 56123392 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaddbf800 session 0x55cbaee81c20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545333248 unmapped: 56123392 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5893419 data_alloc: 251658240 data_used: 49594368
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545333248 unmapped: 56123392 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545333248 unmapped: 56123392 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb5fe3400 session 0x55cbaf54d860
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb79ad000 session 0x55cbafb90960
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545333248 unmapped: 56123392 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x19816d000/0x0/0x1bfc00000, data 0x56f32a4/0x5901000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [0,0,1])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeab1400 session 0x55cbaf54cf00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x19816d000/0x0/0x1bfc00000, data 0x56f32a4/0x5901000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [0,0,0,0,1])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 536043520 unmapped: 65413120 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.705724716s of 10.454721451s, submitted: 370
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 535977984 unmapped: 65478656 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x199e3b000/0x0/0x1bfc00000, data 0x3a2720f/0x3c32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaddc0800 session 0x55cbaf5a4d20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5545662 data_alloc: 234881024 data_used: 30437376
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 537288704 unmapped: 64167936 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x1997f9000/0x0/0x1bfc00000, data 0x406120f/0x426c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbade17000 session 0x55cbadab3860
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbad910800 session 0x55cbafb0c5a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 540704768 unmapped: 60751872 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaddbf800 session 0x55cbad92c960
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522182656 unmapped: 79273984 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x199950000/0x0/0x1bfc00000, data 0x2d7516a/0x2f7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522190848 unmapped: 79265792 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522190848 unmapped: 79265792 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x19994a000/0x0/0x1bfc00000, data 0x2d7b16a/0x2f82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5360857 data_alloc: 234881024 data_used: 22663168
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522199040 unmapped: 79257600 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaddc0800 session 0x55cbac610b40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeab1400 session 0x55cbaf5a4b40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522207232 unmapped: 79249408 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522207232 unmapped: 79249408 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522207232 unmapped: 79249408 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522207232 unmapped: 79249408 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5358729 data_alloc: 234881024 data_used: 22667264
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522207232 unmapped: 79249408 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x199949000/0x0/0x1bfc00000, data 0x2d7e16a/0x2f85000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522207232 unmapped: 79249408 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.413434029s of 13.200506210s, submitted: 176
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeee4000 session 0x55cbb0b42b40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeee6800 session 0x55cbbadf5c20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeab1400 session 0x55cbb2c1ad20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522231808 unmapped: 79224832 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522231808 unmapped: 79224832 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522231808 unmapped: 79224832 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x19a260000/0x0/0x1bfc00000, data 0x246815a/0x266e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5255618 data_alloc: 218103808 data_used: 18554880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522231808 unmapped: 79224832 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522231808 unmapped: 79224832 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522231808 unmapped: 79224832 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522231808 unmapped: 79224832 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522231808 unmapped: 79224832 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5255618 data_alloc: 218103808 data_used: 18554880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522231808 unmapped: 79224832 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x19a260000/0x0/0x1bfc00000, data 0x246815a/0x266e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 522231808 unmapped: 79224832 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbad910800 session 0x55cbacdbd2c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaddbf800 session 0x55cbbadf4960
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbad910800 session 0x55cbad92d2c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeab1400 session 0x55cbacdc1c20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.313246727s of 10.420093536s, submitted: 40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 521584640 unmapped: 79872000 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeee4000 session 0x55cbb2c1a3c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 519397376 unmapped: 82059264 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 519397376 unmapped: 82059264 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5283884 data_alloc: 218103808 data_used: 18554880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x199f8c000/0x0/0x1bfc00000, data 0x273c15a/0x2942000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 519397376 unmapped: 82059264 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x199f8c000/0x0/0x1bfc00000, data 0x273c15a/0x2942000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 519397376 unmapped: 82059264 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeee6800 session 0x55cbaee64d20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaddc0800 session 0x55cbaf7983c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 519405568 unmapped: 82051072 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbad910800 session 0x55cbac60eb40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeab1400 session 0x55cbaf54c1e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 519405568 unmapped: 82051072 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 519405568 unmapped: 82051072 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5294522 data_alloc: 218103808 data_used: 19697664
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 519397376 unmapped: 82059264 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 519397376 unmapped: 82059264 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x199f8b000/0x0/0x1bfc00000, data 0x273c16a/0x2943000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 519397376 unmapped: 82059264 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.298153877s of 10.742022514s, submitted: 11
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 519397376 unmapped: 82059264 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 519397376 unmapped: 82059264 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5306910 data_alloc: 218103808 data_used: 21381120
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 519397376 unmapped: 82059264 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 519397376 unmapped: 82059264 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 519397376 unmapped: 82059264 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x199f89000/0x0/0x1bfc00000, data 0x273d16a/0x2944000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 519397376 unmapped: 82059264 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x199f89000/0x0/0x1bfc00000, data 0x273d16a/0x2944000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbade17000 session 0x55cbad893680
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb5fe3400 session 0x55cbacdc01e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb79ad000 session 0x55cbaf798000
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb79ad000 session 0x55cbacf64960
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 519405568 unmapped: 82051072 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbad910800 session 0x55cbafb90960
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbade17000 session 0x55cbacdc1c20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeab1400 session 0x55cbaf5a45a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5372099 data_alloc: 218103808 data_used: 21381120
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb5fe3400 session 0x55cbacf2a960
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb5fe3400 session 0x55cbafb0d860
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbad910800 session 0x55cbafb83a40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x1997e1000/0x0/0x1bfc00000, data 0x2ee41dc/0x30ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 519413760 unmapped: 82042880 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 524247040 unmapped: 77209600 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 524263424 unmapped: 77193216 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #58. Immutable memtables: 14.
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x198b74000/0x0/0x1bfc00000, data 0x3b501ec/0x3d5a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [1,0,0,1])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbade17000 session 0x55cbacdbcb40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 526614528 unmapped: 74842112 heap: 601456640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeab1400 session 0x55cbacf1cd20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb79ad000 session 0x55cbafb834a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb79ad000 session 0x55cbacc34f00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbad910800 session 0x55cbacc9fa40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbade17000 session 0x55cbaf7bd0e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.667600632s of 11.097998619s, submitted: 137
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeab1400 session 0x55cbb0b43c20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 526442496 unmapped: 76627968 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5551963 data_alloc: 234881024 data_used: 22306816
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb5fe3400 session 0x55cbacc9f2c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 526442496 unmapped: 76627968 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbad910800 session 0x55cbadb090e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 526450688 unmapped: 76619776 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 526327808 unmapped: 76742656 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x1971a5000/0x0/0x1bfc00000, data 0x437e1fb/0x4589000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb79ad000 session 0x55cbafb2e1e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 526295040 unmapped: 76775424 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaf71b000 session 0x55cbac60e780
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 526295040 unmapped: 76775424 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeee6800 session 0x55cbbadf4780
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeee4000 session 0x55cbad92c780
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5612472 data_alloc: 234881024 data_used: 29851648
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 527351808 unmapped: 75718656 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x19818d000/0x0/0x1bfc00000, data 0x33961fb/0x35a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 527360000 unmapped: 75710464 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeee4000 session 0x55cbafaf1860
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbad910800 session 0x55cbadd6b2c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeee6800 session 0x55cbafaf1c20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 527360000 unmapped: 75710464 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 527360000 unmapped: 75710464 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb79ad000 session 0x55cbaf7bc3c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb0550c00 session 0x55cbaee652c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbad910800 session 0x55cbac60f680
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeee4000 session 0x55cbac60fe00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 527360000 unmapped: 75710464 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.365715027s of 10.577359200s, submitted: 53
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeee6800 session 0x55cbafb82b40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb79ad000 session 0x55cbaf7bc3c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb68f9800 session 0x55cbafaf1860
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbad910800 session 0x55cbbadf4780
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5498166 data_alloc: 234881024 data_used: 26230784
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeee6800 session 0x55cbafb0d0e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 527523840 unmapped: 75546624 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb1dd6800 session 0x55cbacdbc960
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 527572992 unmapped: 75497472 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x197dbb000/0x0/0x1bfc00000, data 0x376622e/0x3973000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x197dbb000/0x0/0x1bfc00000, data 0x376622e/0x3973000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 527581184 unmapped: 75489280 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb79ad000 session 0x55cbaee654a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 527581184 unmapped: 75489280 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbacbdd800 session 0x55cbaf799c20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbad910800 session 0x55cbaf799680
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 529006592 unmapped: 74063872 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5406708 data_alloc: 234881024 data_used: 27930624
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 529006592 unmapped: 74063872 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeee6800 session 0x55cbafb0da40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x198b57000/0x0/0x1bfc00000, data 0x29be1a9/0x2bc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 529006592 unmapped: 74063872 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 530644992 unmapped: 72425472 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb79ad000 session 0x55cbbadf4f00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb1dd6800 session 0x55cbaf7bc960
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 529047552 unmapped: 74022912 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x1984a5000/0x0/0x1bfc00000, data 0x30741dc/0x3281000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 529047552 unmapped: 74022912 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5478205 data_alloc: 234881024 data_used: 28155904
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 529055744 unmapped: 74014720 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 529055744 unmapped: 74014720 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.036070824s of 12.483406067s, submitted: 129
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 532774912 unmapped: 70295552 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x1984a5000/0x0/0x1bfc00000, data 0x30741dc/0x3281000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb08a5c00 session 0x55cbaee643c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 534011904 unmapped: 69058560 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 534323200 unmapped: 68747264 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5578405 data_alloc: 234881024 data_used: 33165312
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 534339584 unmapped: 68730880 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x197cf8000/0x0/0x1bfc00000, data 0x38231dc/0x3a30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 535912448 unmapped: 67158016 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 535912448 unmapped: 67158016 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 535920640 unmapped: 67149824 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 535920640 unmapped: 67149824 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5636853 data_alloc: 251658240 data_used: 39923712
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x197cdc000/0x0/0x1bfc00000, data 0x383d1dc/0x3a4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 535920640 unmapped: 67149824 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 535920640 unmapped: 67149824 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.068302155s of 10.354182243s, submitted: 105
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 535920640 unmapped: 67149824 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 535920640 unmapped: 67149824 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 535920640 unmapped: 67149824 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x197ce4000/0x0/0x1bfc00000, data 0x383d1dc/0x3a4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5631953 data_alloc: 251658240 data_used: 39940096
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 535945216 unmapped: 67125248 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 535945216 unmapped: 67125248 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x19793e000/0x0/0x1bfc00000, data 0x3be31dc/0x3df0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 535945216 unmapped: 67125248 heap: 603070464 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 541581312 unmapped: 63111168 heap: 604692480 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x19689b000/0x0/0x1bfc00000, data 0x4c861dc/0x4e93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [0,0,3,7])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 542744576 unmapped: 61947904 heap: 604692480 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeee6800 session 0x55cbacdc0960
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5807829 data_alloc: 251658240 data_used: 41074688
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 542973952 unmapped: 61718528 heap: 604692480 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543145984 unmapped: 61546496 heap: 604692480 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x1967c1000/0x0/0x1bfc00000, data 0x4d601dc/0x4f6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543145984 unmapped: 61546496 heap: 604692480 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.174084663s of 10.545331955s, submitted: 135
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x1967c1000/0x0/0x1bfc00000, data 0x4d601dc/0x4f6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543219712 unmapped: 61472768 heap: 604692480 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543219712 unmapped: 61472768 heap: 604692480 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5849323 data_alloc: 251658240 data_used: 43442176
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbb1dd6800 session 0x55cbaf5a4000
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543219712 unmapped: 61472768 heap: 604692480 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 542916608 unmapped: 61775872 heap: 604692480 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 544317440 unmapped: 60375040 heap: 604692480 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x19679c000/0x0/0x1bfc00000, data 0x4d841ff/0x4f92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 544317440 unmapped: 60375040 heap: 604692480 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 544317440 unmapped: 60375040 heap: 604692480 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 heartbeat osd_stat(store_statfs(0x19679c000/0x0/0x1bfc00000, data 0x4d841ff/0x4f92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5866080 data_alloc: 251658240 data_used: 46317568
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 544489472 unmapped: 60203008 heap: 604692480 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 544489472 unmapped: 60203008 heap: 604692480 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 ms_handle_reset con 0x55cbaeee4800 session 0x55cbad893680
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 544489472 unmapped: 60203008 heap: 604692480 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 399 handle_osd_map epochs [399,400], i have 399, src has [1,400]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.959457397s of 10.100363731s, submitted: 61
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 400 ms_handle_reset con 0x55cbb08a4800 session 0x55cbafb2ed20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 544505856 unmapped: 60186624 heap: 604692480 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 544555008 unmapped: 60137472 heap: 604692480 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 400 heartbeat osd_stat(store_statfs(0x196796000/0x0/0x1bfc00000, data 0x4d87f22/0x4f97000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5872762 data_alloc: 251658240 data_used: 46297088
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 400 ms_handle_reset con 0x55cbb0550400 session 0x55cbaf5a43c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 544555008 unmapped: 60137472 heap: 604692480 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 400 handle_osd_map epochs [400,401], i have 400, src has [1,401]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 401 ms_handle_reset con 0x55cbb0550400 session 0x55cbadd6b2c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 401 ms_handle_reset con 0x55cbaeee6800 session 0x55cbafb2f2c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 401 ms_handle_reset con 0x55cbaeee4800 session 0x55cbacf2b860
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 401 ms_handle_reset con 0x55cbb1dd6800 session 0x55cbacdc1860
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 401 ms_handle_reset con 0x55cbb08a4800 session 0x55cbacdc01e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 552828928 unmapped: 63340544 heap: 616169472 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 401 ms_handle_reset con 0x55cbb076f400 session 0x55cbacf2af00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 401 ms_handle_reset con 0x55cbaeee4800 session 0x55cbad03b680
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 401 handle_osd_map epochs [401,402], i have 401, src has [1,402]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 402 handle_osd_map epochs [402,402], i have 402, src has [1,402]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 402 ms_handle_reset con 0x55cbaeee6800 session 0x55cbad8925a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 402 ms_handle_reset con 0x55cbb0550400 session 0x55cbaf7bd0e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 558546944 unmapped: 65331200 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 402 heartbeat osd_stat(store_statfs(0x194d4a000/0x0/0x1bfc00000, data 0x67cfa80/0x69e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [0,0,0,0,5])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 402 handle_osd_map epochs [402,403], i have 402, src has [1,403]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 403 ms_handle_reset con 0x55cbb6a92c00 session 0x55cbafb832c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 403 ms_handle_reset con 0x55cbb1dd6800 session 0x55cbb2c1ba40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 555393024 unmapped: 68485120 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 403 ms_handle_reset con 0x55cbaeee4800 session 0x55cbb2c1b680
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 403 ms_handle_reset con 0x55cbaeee6800 session 0x55cbb2c1b2c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 403 ms_handle_reset con 0x55cbb0550400 session 0x55cbb2c1a960
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 403 ms_handle_reset con 0x55cbb076f400 session 0x55cbb2c1b860
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 403 ms_handle_reset con 0x55cbb076f400 session 0x55cbad0305a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 403 ms_handle_reset con 0x55cbaeee4800 session 0x55cbacf2b0e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 555409408 unmapped: 68468736 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6170750 data_alloc: 251658240 data_used: 53186560
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 556638208 unmapped: 67239936 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 403 ms_handle_reset con 0x55cbb0550400 session 0x55cbaf40dc20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 403 ms_handle_reset con 0x55cbaeee6800 session 0x55cbb0b42d20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 403 ms_handle_reset con 0x55cbb1dd6800 session 0x55cbb0b430e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 556638208 unmapped: 67239936 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 556638208 unmapped: 67239936 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.424963951s of 10.320842743s, submitted: 248
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 555974656 unmapped: 67903488 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 403 heartbeat osd_stat(store_statfs(0x19445d000/0x0/0x1bfc00000, data 0x70bb821/0x72d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 403 ms_handle_reset con 0x55cbb1dd6800 session 0x55cbafb0d2c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 403 ms_handle_reset con 0x55cbaeee4800 session 0x55cbafb0c960
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 403 ms_handle_reset con 0x55cbaeee6800 session 0x55cbac60e960
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 555974656 unmapped: 67903488 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 403 ms_handle_reset con 0x55cbb0550400 session 0x55cbacc9e5a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 403 ms_handle_reset con 0x55cbaeee4c00 session 0x55cbafb0cb40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 403 ms_handle_reset con 0x55cbb076f400 session 0x55cbad92c5a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 403 ms_handle_reset con 0x55cbaeee4800 session 0x55cbafaf0d20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 403 ms_handle_reset con 0x55cbaeee6800 session 0x55cbaf7bc000
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6169690 data_alloc: 251658240 data_used: 53256192
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 403 ms_handle_reset con 0x55cbb0550400 session 0x55cbacfdfc20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 403 ms_handle_reset con 0x55cbb1dd6800 session 0x55cbaee81860
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 403 ms_handle_reset con 0x55cbb1dd6800 session 0x55cbaee81680
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 555974656 unmapped: 67903488 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 403 ms_handle_reset con 0x55cbaeee4800 session 0x55cbb2c1b0e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 403 ms_handle_reset con 0x55cbaeee6800 session 0x55cbad8925a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 403 handle_osd_map epochs [403,404], i have 403, src has [1,404]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbb076f400 session 0x55cbacdc0780
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbb0550400 session 0x55cbaf7bd0e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbb0550400 session 0x55cbafb832c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbaeee4800 session 0x55cbaf5a4d20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbaeee6800 session 0x55cbacdbcf00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 555982848 unmapped: 67895296 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 555982848 unmapped: 67895296 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbad910800 session 0x55cbacf64960
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbb08a5c00 session 0x55cbacf64780
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbad910800 session 0x55cbac6114a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 556007424 unmapped: 67870720 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbb08a5c00 session 0x55cbade8b2c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 heartbeat osd_stat(store_statfs(0x194457000/0x0/0x1bfc00000, data 0x70bd49c/0x72d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x244cf9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbaeee4800 session 0x55cbaf40d4a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbb79ad000 session 0x55cbb0b42f00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 556007424 unmapped: 67870720 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbaeee6800 session 0x55cbacf1cd20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbaeee6800 session 0x55cbafb2fe00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5814598 data_alloc: 251658240 data_used: 39763968
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbb0550400 session 0x55cbacfdfe00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 551477248 unmapped: 72400896 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbb08a5c00 session 0x55cbacdc01e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 551796736 unmapped: 72081408 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 551796736 unmapped: 72081408 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 heartbeat osd_stat(store_statfs(0x19610b000/0x0/0x1bfc00000, data 0x4ff84cf/0x5213000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [1])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 551821312 unmapped: 72056832 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.301959991s of 10.479634285s, submitted: 84
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbade17000 session 0x55cbaf7bd860
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbaeab1400 session 0x55cbaf7992c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbade17000 session 0x55cbade8ba40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546512896 unmapped: 77365248 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5741841 data_alloc: 234881024 data_used: 37466112
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546537472 unmapped: 77340672 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546537472 unmapped: 77340672 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 heartbeat osd_stat(store_statfs(0x196a09000/0x0/0x1bfc00000, data 0x463544e/0x484d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546537472 unmapped: 77340672 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546537472 unmapped: 77340672 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546537472 unmapped: 77340672 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5756401 data_alloc: 251658240 data_used: 39563264
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 heartbeat osd_stat(store_statfs(0x196a09000/0x0/0x1bfc00000, data 0x463544e/0x484d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546537472 unmapped: 77340672 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546537472 unmapped: 77340672 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 heartbeat osd_stat(store_statfs(0x196a09000/0x0/0x1bfc00000, data 0x463544e/0x484d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546537472 unmapped: 77340672 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546537472 unmapped: 77340672 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546537472 unmapped: 77340672 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.244743347s of 11.347191811s, submitted: 38
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5763340 data_alloc: 251658240 data_used: 39784448
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549462016 unmapped: 74416128 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549543936 unmapped: 74334208 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 heartbeat osd_stat(store_statfs(0x196776000/0x0/0x1bfc00000, data 0x499044e/0x4ba8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549543936 unmapped: 74334208 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 heartbeat osd_stat(store_statfs(0x196776000/0x0/0x1bfc00000, data 0x499044e/0x4ba8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549543936 unmapped: 74334208 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 heartbeat osd_stat(store_statfs(0x196776000/0x0/0x1bfc00000, data 0x499044e/0x4ba8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549543936 unmapped: 74334208 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5822256 data_alloc: 251658240 data_used: 41963520
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549543936 unmapped: 74334208 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbaeee6800 session 0x55cbaee81680
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbb0550400 session 0x55cbaf5a4d20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 heartbeat osd_stat(store_statfs(0x196776000/0x0/0x1bfc00000, data 0x499044e/0x4ba8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbb08a5c00 session 0x55cbb0b42f00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbb1dd6800 session 0x55cbad03b680
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbade17000 session 0x55cbaf54c3c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546365440 unmapped: 77512704 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbaeee6800 session 0x55cbacf2af00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbb0550400 session 0x55cbad92d2c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbb08a5c00 session 0x55cbaf40c780
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbb68f9000 session 0x55cbadb09c20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546365440 unmapped: 77512704 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 heartbeat osd_stat(store_statfs(0x195eb2000/0x0/0x1bfc00000, data 0x525345e/0x546c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546365440 unmapped: 77512704 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546365440 unmapped: 77512704 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5889166 data_alloc: 251658240 data_used: 42364928
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546365440 unmapped: 77512704 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 heartbeat osd_stat(store_statfs(0x195eb2000/0x0/0x1bfc00000, data 0x525345e/0x546c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546365440 unmapped: 77512704 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546365440 unmapped: 77512704 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546365440 unmapped: 77512704 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546373632 unmapped: 77504512 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5889166 data_alloc: 251658240 data_used: 42364928
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbade17000 session 0x55cbaeb51c20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546373632 unmapped: 77504512 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 heartbeat osd_stat(store_statfs(0x195eb2000/0x0/0x1bfc00000, data 0x525345e/0x546c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbaeee6800 session 0x55cbaf7bde00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546373632 unmapped: 77504512 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbb0550400 session 0x55cbafaf1a40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.031064987s of 17.208616257s, submitted: 30
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbb08a5c00 session 0x55cbafaf0000
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546381824 unmapped: 77496320 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 heartbeat osd_stat(store_statfs(0x195e8d000/0x0/0x1bfc00000, data 0x527746d/0x5491000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [1])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 553263104 unmapped: 70615040 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbb5fe3c00 session 0x55cbacf2a960
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbade17000 session 0x55cbad03a960
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbaeee6800 session 0x55cbaee805a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbb0550400 session 0x55cbad03bc20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbb08a5c00 session 0x55cbaeb501e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 547479552 unmapped: 76398592 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6022662 data_alloc: 251658240 data_used: 48254976
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549560320 unmapped: 74317824 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 heartbeat osd_stat(store_statfs(0x1955cc000/0x0/0x1bfc00000, data 0x5b3846d/0x5d52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 552304640 unmapped: 71573504 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 552304640 unmapped: 71573504 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 heartbeat osd_stat(store_statfs(0x195398000/0x0/0x1bfc00000, data 0x5d6946d/0x5f83000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 552304640 unmapped: 71573504 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 550281216 unmapped: 73596928 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbb6aa0800 session 0x55cbafb2e3c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6058607 data_alloc: 251658240 data_used: 50548736
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 heartbeat osd_stat(store_statfs(0x19539a000/0x0/0x1bfc00000, data 0x5d69490/0x5f84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 550281216 unmapped: 73596928 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 ms_handle_reset con 0x55cbaeee6800 session 0x55cbafb0da40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 551378944 unmapped: 72499200 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 404 handle_osd_map epochs [404,405], i have 404, src has [1,405]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.871469498s of 10.079981804s, submitted: 45
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 405 ms_handle_reset con 0x55cbb0550400 session 0x55cbb0b42b40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 405 ms_handle_reset con 0x55cbb6aa1000 session 0x55cbac6114a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 405 ms_handle_reset con 0x55cbb08a5c00 session 0x55cbacf64960
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 405 heartbeat osd_stat(store_statfs(0x19539a000/0x0/0x1bfc00000, data 0x5d69490/0x5f84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 557375488 unmapped: 66502656 heap: 623878144 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 405 ms_handle_reset con 0x55cbaeee6c00 session 0x55cbacdbd680
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 405 handle_osd_map epochs [405,406], i have 405, src has [1,406]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 406 ms_handle_reset con 0x55cbaeee6800 session 0x55cbaf40d680
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 556244992 unmapped: 71835648 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 406 handle_osd_map epochs [406,407], i have 406, src has [1,407]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 407 ms_handle_reset con 0x55cbb0550400 session 0x55cbad892000
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 557318144 unmapped: 70762496 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6320057 data_alloc: 268435456 data_used: 64593920
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 407 heartbeat osd_stat(store_statfs(0x193b8e000/0x0/0x1bfc00000, data 0x756eccb/0x778e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 559570944 unmapped: 68509696 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 407 heartbeat osd_stat(store_statfs(0x193043000/0x0/0x1bfc00000, data 0x80b3ccb/0x82d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [0,0,1])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 407 ms_handle_reset con 0x55cbb6aa1000 session 0x55cbacf1d0e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 560832512 unmapped: 67248128 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 407 ms_handle_reset con 0x55cbaeee9800 session 0x55cbad03ab40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 407 ms_handle_reset con 0x55cbb08a6000 session 0x55cbaf40cb40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 564174848 unmapped: 63905792 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 407 handle_osd_map epochs [407,408], i have 407, src has [1,408]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 563585024 unmapped: 64495616 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 408 ms_handle_reset con 0x55cbb08a5c00 session 0x55cbafb2ed20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 408 ms_handle_reset con 0x55cbb08a6000 session 0x55cbafb2e000
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 564658176 unmapped: 63422464 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 408 heartbeat osd_stat(store_statfs(0x19469a000/0x0/0x1bfc00000, data 0x6a629fc/0x6c82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6283380 data_alloc: 268435456 data_used: 66387968
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 408 ms_handle_reset con 0x55cbad910800 session 0x55cbb2c1a1e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 408 ms_handle_reset con 0x55cbaeee4800 session 0x55cbb2c1a000
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 564658176 unmapped: 63422464 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 408 handle_osd_map epochs [408,409], i have 408, src has [1,409]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 409 handle_osd_map epochs [409,409], i have 409, src has [1,409]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 409 ms_handle_reset con 0x55cbb79ad000 session 0x55cbafb2f2c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 409 ms_handle_reset con 0x55cbb076f400 session 0x55cbac610b40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 409 ms_handle_reset con 0x55cbaeee6800 session 0x55cbb2c1b2c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 409 ms_handle_reset con 0x55cbad910800 session 0x55cbafb82b40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 564690944 unmapped: 63389696 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 564699136 unmapped: 63381504 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.496852875s of 10.482481003s, submitted: 310
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 409 heartbeat osd_stat(store_statfs(0x194e78000/0x0/0x1bfc00000, data 0x6286559/0x64a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567771136 unmapped: 60309504 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 409 ms_handle_reset con 0x55cbb08a5c00 session 0x55cbaf7bcb40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568074240 unmapped: 60006400 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 409 ms_handle_reset con 0x55cbb08a6000 session 0x55cbad030000
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 409 handle_osd_map epochs [409,410], i have 409, src has [1,410]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 6321417 data_alloc: 268435456 data_used: 63418368
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 410 ms_handle_reset con 0x55cbaeee4800 session 0x55cbacc9fa40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568082432 unmapped: 59998208 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 410 heartbeat osd_stat(store_statfs(0x19493d000/0x0/0x1bfc00000, data 0x676021c/0x697b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 410 handle_osd_map epochs [410,411], i have 410, src has [1,411]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568107008 unmapped: 59973632 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 411 handle_osd_map epochs [411,412], i have 411, src has [1,412]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 412 ms_handle_reset con 0x55cbad910800 session 0x55cbafaf0d20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567296000 unmapped: 60784640 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 412 ms_handle_reset con 0x55cbb08a7000 session 0x55cbad92cf00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 412 ms_handle_reset con 0x55cbb315f000 session 0x55cbafaf1e00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 412 ms_handle_reset con 0x55cbaeee6800 session 0x55cbafb83680
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 412 heartbeat osd_stat(store_statfs(0x1954c5000/0x0/0x1bfc00000, data 0x55d9b56/0x57f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [0,0,1])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 412 ms_handle_reset con 0x55cbaeee4800 session 0x55cbaeb503c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567320576 unmapped: 60760064 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 412 handle_osd_map epochs [412,413], i have 412, src has [1,413]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 413 ms_handle_reset con 0x55cbad910800 session 0x55cbaee654a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567328768 unmapped: 60751872 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5921184 data_alloc: 251658240 data_used: 43716608
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567328768 unmapped: 60751872 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 413 handle_osd_map epochs [413,414], i have 413, src has [1,414]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 414 heartbeat osd_stat(store_statfs(0x195c84000/0x0/0x1bfc00000, data 0x4e1a8a6/0x5035000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567336960 unmapped: 60743680 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 414 ms_handle_reset con 0x55cbaeee4000 session 0x55cbb0b43c20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 414 ms_handle_reset con 0x55cbaf71b000 session 0x55cbafaf14a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567336960 unmapped: 60743680 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.324023247s of 10.062682152s, submitted: 268
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 414 ms_handle_reset con 0x55cbb08a7000 session 0x55cbad03be00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 414 ms_handle_reset con 0x55cbb08a7000 session 0x55cbaf40c1e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 414 ms_handle_reset con 0x55cbad910800 session 0x55cbadd6a960
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 414 ms_handle_reset con 0x55cbaeee4000 session 0x55cbaf5a5a40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 414 ms_handle_reset con 0x55cbaeee4800 session 0x55cbad031860
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 554483712 unmapped: 73596928 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 414 ms_handle_reset con 0x55cbaf71b000 session 0x55cbaf7990e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 414 ms_handle_reset con 0x55cbaf71b000 session 0x55cbaf54c780
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 414 ms_handle_reset con 0x55cbad910800 session 0x55cbafb83a40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 414 ms_handle_reset con 0x55cbaeee4000 session 0x55cbad03b860
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 414 ms_handle_reset con 0x55cbaeee4800 session 0x55cbadd6a000
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 554950656 unmapped: 73129984 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5812632 data_alloc: 234881024 data_used: 30158848
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 554950656 unmapped: 73129984 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 414 handle_osd_map epochs [414,415], i have 414, src has [1,415]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 554950656 unmapped: 73129984 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x196831000/0x0/0x1bfc00000, data 0x48d0105/0x4aec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbb08a7000 session 0x55cbafb0d4a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbade17000 session 0x55cbbadf52c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 554958848 unmapped: 73121792 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbb08a7000 session 0x55cbafaf1860
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbad910800 session 0x55cbaf54cd20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x196832000/0x0/0x1bfc00000, data 0x48d00f5/0x4aeb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549445632 unmapped: 78635008 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaeee4000 session 0x55cbacdc1c20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x197eb1000/0x0/0x1bfc00000, data 0x32510d2/0x346b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549445632 unmapped: 78635008 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5585426 data_alloc: 218103808 data_used: 21164032
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x197eb1000/0x0/0x1bfc00000, data 0x32510d2/0x346b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaeee4800 session 0x55cbad03a780
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549445632 unmapped: 78635008 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaeee4800 session 0x55cbafb0d4a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbad910800 session 0x55cbadd6a000
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549593088 unmapped: 78487552 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 5400.3 total, 600.0 interval
Cumulative writes: 74K writes, 299K keys, 74K commit groups, 1.0 writes per commit group, ingest: 0.30 GB, 0.06 MB/s
Cumulative WAL: 74K writes, 27K syncs, 2.70 writes per sync, written: 0.30 GB, 0.06 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 11K writes, 47K keys, 11K commit groups, 1.0 writes per commit group, ingest: 54.15 MB, 0.09 MB/s
Interval WAL: 11K writes, 4590 syncs, 2.57 writes per sync, written: 0.05 GB, 0.09 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549593088 unmapped: 78487552 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549593088 unmapped: 78487552 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549462016 unmapped: 78618624 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x197e8f000/0x0/0x1bfc00000, data 0x32750d2/0x348f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5650616 data_alloc: 234881024 data_used: 29179904
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549462016 unmapped: 78618624 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549462016 unmapped: 78618624 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549462016 unmapped: 78618624 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.950669289s of 15.333333969s, submitted: 141
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549462016 unmapped: 78618624 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x197e8d000/0x0/0x1bfc00000, data 0x32760d2/0x3490000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549462016 unmapped: 78618624 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5651740 data_alloc: 234881024 data_used: 29241344
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549462016 unmapped: 78618624 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549462016 unmapped: 78618624 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549462016 unmapped: 78618624 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x197e8d000/0x0/0x1bfc00000, data 0x32760d2/0x3490000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549462016 unmapped: 78618624 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: mgrc ms_handle_reset ms_handle_reset con 0x55cbaf1b1400
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1950343944
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1950343944,v1:192.168.122.100:6801/1950343944]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: mgrc handle_mgr_configure stats_period=5
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 551968768 unmapped: 76111872 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x197d25000/0x0/0x1bfc00000, data 0x33df0d2/0x35f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [0,0,1])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5736426 data_alloc: 234881024 data_used: 29388800
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 554901504 unmapped: 73179136 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 555212800 unmapped: 72867840 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x19729c000/0x0/0x1bfc00000, data 0x3e670d2/0x4081000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 555212800 unmapped: 72867840 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x19729c000/0x0/0x1bfc00000, data 0x3e670d2/0x4081000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 555237376 unmapped: 72843264 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 555237376 unmapped: 72843264 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5756062 data_alloc: 234881024 data_used: 29851648
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 555237376 unmapped: 72843264 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.927907944s of 13.271549225s, submitted: 115
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x19727c000/0x0/0x1bfc00000, data 0x3e880d2/0x40a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 555040768 unmapped: 73039872 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x19727c000/0x0/0x1bfc00000, data 0x3e880d2/0x40a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 555040768 unmapped: 73039872 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbade17000 session 0x55cbafb83a40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaeee4000 session 0x55cbad031860
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbb08a7000 session 0x55cbafb83680
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbad910800 session 0x55cbaf40dc20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbade17000 session 0x55cbaf54d860
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaeee4800 session 0x55cbadb085a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaf71b000 session 0x55cbacdbc960
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbb315f000 session 0x55cbacc9f4a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaeee4000 session 0x55cbaf54c960
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbb315f000 session 0x55cbb0b432c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549650432 unmapped: 78430208 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbad910800 session 0x55cbafb91a40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbade17000 session 0x55cbaee645a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549650432 unmapped: 78430208 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5561920 data_alloc: 218103808 data_used: 20439040
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549650432 unmapped: 78430208 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549650432 unmapped: 78430208 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x19831a000/0x0/0x1bfc00000, data 0x2d1e0d2/0x2f38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaf72a800 session 0x55cbbadf43c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbbe336c00 session 0x55cbadb09e00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbad910800 session 0x55cbadd8f680
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 549650432 unmapped: 78430208 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbade17000 session 0x55cbaeb505a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaeee4000 session 0x55cbacf64780
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 542007296 unmapped: 86073344 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaf72a800 session 0x55cbacdc0780
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaf72a800 session 0x55cbaf40da40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x199770000/0x0/0x1bfc00000, data 0x19950c3/0x1bae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 542326784 unmapped: 85753856 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5337530 data_alloc: 218103808 data_used: 10829824
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 542326784 unmapped: 85753856 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 542326784 unmapped: 85753856 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 542326784 unmapped: 85753856 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 542326784 unmapped: 85753856 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 542285824 unmapped: 85794816 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.337329865s of 13.674113274s, submitted: 121
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5372354 data_alloc: 218103808 data_used: 15384576
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x199770000/0x0/0x1bfc00000, data 0x19950c3/0x1bae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 542285824 unmapped: 85794816 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaeee4000 session 0x55cbac60eb40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543342592 unmapped: 84738048 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbad910800 session 0x55cbaf5a50e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbade17000 session 0x55cbaf7bcf00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 542294016 unmapped: 85786624 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x199770000/0x0/0x1bfc00000, data 0x19950c3/0x1bae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 542294016 unmapped: 85786624 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 542294016 unmapped: 85786624 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5371867 data_alloc: 218103808 data_used: 15384576
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 542294016 unmapped: 85786624 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 542294016 unmapped: 85786624 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x199770000/0x0/0x1bfc00000, data 0x19950c3/0x1bae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 542294016 unmapped: 85786624 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 542294016 unmapped: 85786624 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 542294016 unmapped: 85786624 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5371867 data_alloc: 218103808 data_used: 15384576
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.320820808s of 10.345089912s, submitted: 7
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 542294016 unmapped: 85786624 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x199770000/0x0/0x1bfc00000, data 0x19950c3/0x1bae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543358976 unmapped: 84721664 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543416320 unmapped: 84664320 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543440896 unmapped: 84639744 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543473664 unmapped: 84606976 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5372363 data_alloc: 218103808 data_used: 15388672
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x199770000/0x0/0x1bfc00000, data 0x19950c3/0x1bae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543490048 unmapped: 84590592 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543490048 unmapped: 84590592 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543490048 unmapped: 84590592 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbbe336c00 session 0x55cbafb823c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbb315f000 session 0x55cbade8ad20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543490048 unmapped: 84590592 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543490048 unmapped: 84590592 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5372027 data_alloc: 218103808 data_used: 15388672
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x199770000/0x0/0x1bfc00000, data 0x19950c3/0x1bae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543490048 unmapped: 84590592 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543490048 unmapped: 84590592 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbad910800 session 0x55cbacdbd2c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543490048 unmapped: 84590592 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbade17000 session 0x55cbbadf50e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x199770000/0x0/0x1bfc00000, data 0x19950c3/0x1bae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.134310722s of 13.202730179s, submitted: 246
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543490048 unmapped: 84590592 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbb0672c00 session 0x55cbacdbc000
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaf71b000 session 0x55cbafb83860
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543490048 unmapped: 84590592 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbb076f400 session 0x55cbafb0da40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5378533 data_alloc: 218103808 data_used: 15388672
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543645696 unmapped: 84434944 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543645696 unmapped: 84434944 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543645696 unmapped: 84434944 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaeee4000 session 0x55cbaf40d680
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaf72a800 session 0x55cbacdbd2c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 545996800 unmapped: 82083840 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x1988b3000/0x0/0x1bfc00000, data 0x24410e6/0x265b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24cef9c6), peers [0,1] op hist [0,1])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaf71b000 session 0x55cbafaf1860
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543481856 unmapped: 84598784 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbb315f000 session 0x55cbaee641e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbb08a5c00 session 0x55cbaf40c000
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaeee4000 session 0x55cbb0b423c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaf71b000 session 0x55cbafb0c780
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5306622 data_alloc: 218103808 data_used: 10833920
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaf72a800 session 0x55cbacdbd860
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbb315f000 session 0x55cbade8ad20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaeee9800 session 0x55cbacf2ad20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaeee9800 session 0x55cbb0b43860
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaeee4000 session 0x55cbafb2ef00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543498240 unmapped: 84582400 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x199757000/0x0/0x1bfc00000, data 0x159e0d6/0x17b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24cef9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543498240 unmapped: 84582400 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543498240 unmapped: 84582400 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbad910800 session 0x55cbacdc1680
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbade17000 session 0x55cbaf7bcf00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543498240 unmapped: 84582400 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.711401939s of 10.419783592s, submitted: 152
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaf71b000 session 0x55cbafaf1a40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543457280 unmapped: 84623360 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x1995cd000/0x0/0x1bfc00000, data 0x17280d6/0x1941000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24cef9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5329199 data_alloc: 218103808 data_used: 10825728
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543408128 unmapped: 84672512 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbad910800 session 0x55cbadab3860
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbade17000 session 0x55cbaeb503c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaeee9800 session 0x55cbaf40d860
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543383552 unmapped: 84697088 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaeee4000 session 0x55cbacdc0780
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543383552 unmapped: 84697088 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaf72a800 session 0x55cbaee652c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaf72a800 session 0x55cbacf645a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x1995cd000/0x0/0x1bfc00000, data 0x17280c3/0x1941000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24cef9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543539200 unmapped: 84541440 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543539200 unmapped: 84541440 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5333477 data_alloc: 218103808 data_used: 10829824
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543334400 unmapped: 84746240 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543129600 unmapped: 84951040 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543129600 unmapped: 84951040 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaeee4000 session 0x55cbacc34f00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543145984 unmapped: 84934656 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x1995cd000/0x0/0x1bfc00000, data 0x17280c3/0x1941000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24cef9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.894233704s of 10.058961868s, submitted: 49
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaeee9800 session 0x55cbacdbd4a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543145984 unmapped: 84934656 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5349621 data_alloc: 218103808 data_used: 13037568
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbb315f000 session 0x55cbacdbdc20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 heartbeat osd_stat(store_statfs(0x1995cd000/0x0/0x1bfc00000, data 0x17280c3/0x1941000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24cef9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543145984 unmapped: 84934656 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543145984 unmapped: 84934656 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbb0550400 session 0x55cbaf5a5680
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543145984 unmapped: 84934656 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaeee4000 session 0x55cbacf2af00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaeee9800 session 0x55cbafb82b40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 ms_handle_reset con 0x55cbaf72a800 session 0x55cbaee64d20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543506432 unmapped: 84574208 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543506432 unmapped: 84574208 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 415 handle_osd_map epochs [415,416], i have 415, src has [1,416]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbb315f000 session 0x55cbadd6a960
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 416 heartbeat osd_stat(store_statfs(0x19a0a1000/0x0/0x1bfc00000, data 0x1c93125/0x1ead000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23caf9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5404768 data_alloc: 218103808 data_used: 13045760
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543473664 unmapped: 84606976 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 543473664 unmapped: 84606976 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 416 heartbeat osd_stat(store_statfs(0x19a09d000/0x0/0x1bfc00000, data 0x1c94e48/0x1eb0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x23caf9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 547831808 unmapped: 80248832 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546889728 unmapped: 81190912 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546889728 unmapped: 81190912 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5508996 data_alloc: 218103808 data_used: 14086144
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546889728 unmapped: 81190912 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546889728 unmapped: 81190912 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 416 heartbeat osd_stat(store_statfs(0x198464000/0x0/0x1bfc00000, data 0x2726e48/0x2942000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.202218056s of 13.709792137s, submitted: 167
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbb7cc5400 session 0x55cbacc34f00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbb6aa1000 session 0x55cbade8ab40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546799616 unmapped: 81281024 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546799616 unmapped: 81281024 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546799616 unmapped: 81281024 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5506968 data_alloc: 218103808 data_used: 14086144
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546807808 unmapped: 81272832 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546807808 unmapped: 81272832 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbb7cc5400 session 0x55cbacdbcb40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbaeee4000 session 0x55cbacf723c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546807808 unmapped: 81272832 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 416 heartbeat osd_stat(store_statfs(0x19846b000/0x0/0x1bfc00000, data 0x2726eaa/0x2943000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbaeee9800 session 0x55cbacc9e5a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbaf72a800 session 0x55cbafb2ed20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 546807808 unmapped: 81272832 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbaeee4000 session 0x55cbaf798960
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbb6aa1000 session 0x55cbadab3860
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbb7cc5400 session 0x55cbafaf1a40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbaf71e400 session 0x55cbacdc1680
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 547086336 unmapped: 80994304 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbade12000 session 0x55cbafb2ef00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbaeee4000 session 0x55cbafb0c780
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbaf71e400 session 0x55cbb0b423c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbb6aa1000 session 0x55cbaee641e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbb7cc5400 session 0x55cbafaf1860
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5595395 data_alloc: 218103808 data_used: 14757888
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 547119104 unmapped: 80961536 heap: 628080640 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbb0550800 session 0x55cbaf40d680
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbaeee4000 session 0x55cbbadf50e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbaeee9800 session 0x55cbacdc0d20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbb315f000 session 0x55cbafaf0b40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 547151872 unmapped: 84606976 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 416 heartbeat osd_stat(store_statfs(0x196fc2000/0x0/0x1bfc00000, data 0x3bccf2c/0x3dec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [0,1])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbb6aa1000 session 0x55cbacc9f2c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbaf71e400 session 0x55cbbadf5c20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbaeee4000 session 0x55cbacc35680
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 547176448 unmapped: 84582400 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 547184640 unmapped: 84574208 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 416 heartbeat osd_stat(store_statfs(0x196fe7000/0x0/0x1bfc00000, data 0x3ba8f1c/0x3dc7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbaeee9800 session 0x55cbacdbd2c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.957546234s of 12.299890518s, submitted: 90
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbb315f000 session 0x55cbacc34b40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 547201024 unmapped: 84557824 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5721147 data_alloc: 218103808 data_used: 18468864
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 416 heartbeat osd_stat(store_statfs(0x196fe5000/0x0/0x1bfc00000, data 0x3ba8f4f/0x3dc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 547201024 unmapped: 84557824 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbaeee9400 session 0x55cbacdc10e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbb076e800 session 0x55cbaee80f00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 547217408 unmapped: 84541440 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbaeee4000 session 0x55cbafb83a40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 416 ms_handle_reset con 0x55cbaeee9800 session 0x55cbaf40c960
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 547356672 unmapped: 84402176 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 416 handle_osd_map epochs [416,417], i have 416, src has [1,417]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 416 handle_osd_map epochs [417,417], i have 417, src has [1,417]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 417 ms_handle_reset con 0x55cbaeee9400 session 0x55cbadab3a40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 417 heartbeat osd_stat(store_statfs(0x196fe1000/0x0/0x1bfc00000, data 0x3baac87/0x3dcc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 417 ms_handle_reset con 0x55cbb315f000 session 0x55cbac610d20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 547397632 unmapped: 84361216 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 417 ms_handle_reset con 0x55cbafafa800 session 0x55cbb0b42d20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 551469056 unmapped: 80289792 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5792626 data_alloc: 234881024 data_used: 34205696
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 417 heartbeat osd_stat(store_statfs(0x19754d000/0x0/0x1bfc00000, data 0x363fc25/0x3860000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 551469056 unmapped: 80289792 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 551469056 unmapped: 80289792 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 551469056 unmapped: 80289792 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 417 heartbeat osd_stat(store_statfs(0x19754d000/0x0/0x1bfc00000, data 0x363fc25/0x3860000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 417 ms_handle_reset con 0x55cbaeee9400 session 0x55cbafb83680
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 417 heartbeat osd_stat(store_statfs(0x19754d000/0x0/0x1bfc00000, data 0x363fc25/0x3860000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 417 ms_handle_reset con 0x55cbaeee9800 session 0x55cbbadf5c20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 551477248 unmapped: 80281600 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 551477248 unmapped: 80281600 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.157319069s of 10.465465546s, submitted: 102
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 417 ms_handle_reset con 0x55cbb315f000 session 0x55cbacdc10e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 417 heartbeat osd_stat(store_statfs(0x19754e000/0x0/0x1bfc00000, data 0x363fc25/0x3860000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5794918 data_alloc: 234881024 data_used: 34267136
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 417 handle_osd_map epochs [417,418], i have 417, src has [1,418]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 551485440 unmapped: 80273408 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 551485440 unmapped: 80273408 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 418 heartbeat osd_stat(store_statfs(0x197549000/0x0/0x1bfc00000, data 0x364183e/0x3864000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 418 ms_handle_reset con 0x55cbaddbf800 session 0x55cbaf40dc20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 418 ms_handle_reset con 0x55cbade16000 session 0x55cbaf5a5680
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 551493632 unmapped: 80265216 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #59. Immutable memtables: 15.
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 418 ms_handle_reset con 0x55cbaddbf800 session 0x55cbaee805a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 418 ms_handle_reset con 0x55cbaeee9400 session 0x55cbacdc0960
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 560357376 unmapped: 71401472 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 561561600 unmapped: 70197248 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6028589 data_alloc: 234881024 data_used: 36491264
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 564879360 unmapped: 66879488 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 418 heartbeat osd_stat(store_statfs(0x193ea4000/0x0/0x1bfc00000, data 0x5b3e8a0/0x5d62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 564895744 unmapped: 66863104 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 564895744 unmapped: 66863104 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 564895744 unmapped: 66863104 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 563970048 unmapped: 67788800 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 418 heartbeat osd_stat(store_statfs(0x193e99000/0x0/0x1bfc00000, data 0x5b498a0/0x5d6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6118457 data_alloc: 234881024 data_used: 37609472
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 418 heartbeat osd_stat(store_statfs(0x193e99000/0x0/0x1bfc00000, data 0x5b498a0/0x5d6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 563970048 unmapped: 67788800 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 563970048 unmapped: 67788800 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.216553688s of 11.862694740s, submitted: 307
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 563986432 unmapped: 67772416 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 418 ms_handle_reset con 0x55cbaeee4000 session 0x55cbacdbd2c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 418 ms_handle_reset con 0x55cbaeee9800 session 0x55cbad92cf00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 558514176 unmapped: 73244672 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 558514176 unmapped: 73244672 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5847102 data_alloc: 234881024 data_used: 23478272
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 418 ms_handle_reset con 0x55cbb6aa1000 session 0x55cbafb2e1e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 418 ms_handle_reset con 0x55cbb7cc5400 session 0x55cbaee801e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 558514176 unmapped: 73244672 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 418 heartbeat osd_stat(store_statfs(0x194d1d000/0x0/0x1bfc00000, data 0x439681b/0x45b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 418 ms_handle_reset con 0x55cbaddbf800 session 0x55cbaf7990e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 558514176 unmapped: 73244672 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 418 heartbeat osd_stat(store_statfs(0x194d1d000/0x0/0x1bfc00000, data 0x439681b/0x45b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 558514176 unmapped: 73244672 heap: 631758848 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 418 handle_osd_map epochs [418,419], i have 418, src has [1,419]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 419 handle_osd_map epochs [419,419], i have 419, src has [1,419]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 419 ms_handle_reset con 0x55cbaeee4000 session 0x55cbbadf4000
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 419 ms_handle_reset con 0x55cbaeee9800 session 0x55cbacdbc5a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 419 ms_handle_reset con 0x55cbaeee9400 session 0x55cbbadf5a40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 419 ms_handle_reset con 0x55cbad910800 session 0x55cbac6112c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 419 ms_handle_reset con 0x55cbade17000 session 0x55cbafaf1e00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 419 ms_handle_reset con 0x55cbaddbf800 session 0x55cbafaf10e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568983552 unmapped: 76800000 heap: 645783552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 419 ms_handle_reset con 0x55cbb7cc5400 session 0x55cbafb0c780
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 419 handle_osd_map epochs [419,420], i have 419, src has [1,420]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 420 ms_handle_reset con 0x55cbaeee9800 session 0x55cbaeb505a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 420 ms_handle_reset con 0x55cbaeee4000 session 0x55cbb0b42f00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 420 ms_handle_reset con 0x55cbad910800 session 0x55cbafb0da40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569016320 unmapped: 76767232 heap: 645783552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 420 handle_osd_map epochs [420,421], i have 420, src has [1,421]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 421 handle_osd_map epochs [421,421], i have 421, src has [1,421]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 421 ms_handle_reset con 0x55cbaeee9800 session 0x55cbadab2d20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5920924 data_alloc: 234881024 data_used: 36601856
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 421 ms_handle_reset con 0x55cbaddbf800 session 0x55cbafb0d2c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569040896 unmapped: 76742656 heap: 645783552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 421 ms_handle_reset con 0x55cbade17000 session 0x55cbaf54d680
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 421 ms_handle_reset con 0x55cbade17000 session 0x55cbb0b434a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 421 ms_handle_reset con 0x55cbad910800 session 0x55cbb0b43680
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 421 ms_handle_reset con 0x55cbaeee4000 session 0x55cbb0b42b40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 421 heartbeat osd_stat(store_statfs(0x195319000/0x0/0x1bfc00000, data 0x46cd069/0x48f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569057280 unmapped: 76726272 heap: 645783552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569065472 unmapped: 76718080 heap: 645783552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569778176 unmapped: 76005376 heap: 645783552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.811343193s of 12.625464439s, submitted: 167
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565067776 unmapped: 80715776 heap: 645783552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5651521 data_alloc: 218103808 data_used: 19386368
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 421 handle_osd_map epochs [421,422], i have 421, src has [1,422]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 421 handle_osd_map epochs [422,422], i have 422, src has [1,422]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 422 ms_handle_reset con 0x55cbb7cc5400 session 0x55cbafb825a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565084160 unmapped: 80699392 heap: 645783552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 422 heartbeat osd_stat(store_statfs(0x196c93000/0x0/0x1bfc00000, data 0x2d54c2f/0x2f79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565084160 unmapped: 80699392 heap: 645783552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565084160 unmapped: 80699392 heap: 645783552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565084160 unmapped: 80699392 heap: 645783552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565084160 unmapped: 80699392 heap: 645783552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5654407 data_alloc: 218103808 data_used: 19390464
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565084160 unmapped: 80699392 heap: 645783552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565084160 unmapped: 80699392 heap: 645783552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 422 heartbeat osd_stat(store_statfs(0x196c93000/0x0/0x1bfc00000, data 0x2d54c2f/0x2f79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565084160 unmapped: 80699392 heap: 645783552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 422 ms_handle_reset con 0x55cbb6aa1000 session 0x55cbafb0cd20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567312384 unmapped: 78471168 heap: 645783552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 422 ms_handle_reset con 0x55cbad910800 session 0x55cbaf5a4d20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 422 ms_handle_reset con 0x55cbade17000 session 0x55cbaf5a5c20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 422 ms_handle_reset con 0x55cbaeee4000 session 0x55cbaf5a41e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 422 ms_handle_reset con 0x55cbb315f000 session 0x55cbafb2eb40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 422 ms_handle_reset con 0x55cbb7cc5400 session 0x55cbafb2f860
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 422 ms_handle_reset con 0x55cbad910800 session 0x55cbac611860
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 422 heartbeat osd_stat(store_statfs(0x196c93000/0x0/0x1bfc00000, data 0x2d54c2f/0x2f79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 422 ms_handle_reset con 0x55cbade17000 session 0x55cbaf54cd20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 422 ms_handle_reset con 0x55cbaeee4000 session 0x55cbb2c1ab40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 422 ms_handle_reset con 0x55cbb315f000 session 0x55cbbadf50e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.170543671s of 10.137235641s, submitted: 143
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 576528384 unmapped: 69255168 heap: 645783552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 422 ms_handle_reset con 0x55cbbe336400 session 0x55cbafb914a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5943804 data_alloc: 234881024 data_used: 38084608
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 422 ms_handle_reset con 0x55cbbe336400 session 0x55cbadd6a000
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573972480 unmapped: 77668352 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 422 ms_handle_reset con 0x55cbad910800 session 0x55cbacc9fa40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 422 heartbeat osd_stat(store_statfs(0x1952a3000/0x0/0x1bfc00000, data 0x473dc3f/0x4963000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573972480 unmapped: 77668352 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 422 ms_handle_reset con 0x55cbade17000 session 0x55cbafaf0d20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573980672 unmapped: 77660160 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 422 ms_handle_reset con 0x55cbaeee4000 session 0x55cbaf5a4d20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 574136320 unmapped: 77504512 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 422 handle_osd_map epochs [422,423], i have 422, src has [1,423]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 422 handle_osd_map epochs [423,423], i have 423, src has [1,423]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 423 ms_handle_reset con 0x55cbb4ebdc00 session 0x55cbafb91e00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565010432 unmapped: 86630400 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5807206 data_alloc: 234881024 data_used: 24678400
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565010432 unmapped: 86630400 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565010432 unmapped: 86630400 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 423 heartbeat osd_stat(store_statfs(0x19629a000/0x0/0x1bfc00000, data 0x374c977/0x3973000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565010432 unmapped: 86630400 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565010432 unmapped: 86630400 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 423 heartbeat osd_stat(store_statfs(0x19629a000/0x0/0x1bfc00000, data 0x374c977/0x3973000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565010432 unmapped: 86630400 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5807382 data_alloc: 234881024 data_used: 24678400
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 423 handle_osd_map epochs [423,424], i have 423, src has [1,424]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.432376862s of 10.768292427s, submitted: 118
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565010432 unmapped: 86630400 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565010432 unmapped: 86630400 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 424 heartbeat osd_stat(store_statfs(0x196297000/0x0/0x1bfc00000, data 0x374e580/0x3976000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565010432 unmapped: 86630400 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565010432 unmapped: 86630400 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565010432 unmapped: 86630400 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5820080 data_alloc: 234881024 data_used: 25268224
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565035008 unmapped: 86605824 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565035008 unmapped: 86605824 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 424 heartbeat osd_stat(store_statfs(0x196298000/0x0/0x1bfc00000, data 0x374e580/0x3976000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565035008 unmapped: 86605824 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565035008 unmapped: 86605824 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 424 heartbeat osd_stat(store_statfs(0x196298000/0x0/0x1bfc00000, data 0x374e580/0x3976000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565035008 unmapped: 86605824 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5828560 data_alloc: 234881024 data_used: 26120192
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.081981659s of 10.156385422s, submitted: 29
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 564903936 unmapped: 86736896 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 564903936 unmapped: 86736896 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 424 ms_handle_reset con 0x55cbb315f000 session 0x55cbacc34b40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 424 ms_handle_reset con 0x55cbb7cc4000 session 0x55cbaf40d680
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565952512 unmapped: 85688320 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 424 ms_handle_reset con 0x55cbad910800 session 0x55cbad892f00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 424 heartbeat osd_stat(store_statfs(0x196292000/0x0/0x1bfc00000, data 0x3754580/0x397c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 564568064 unmapped: 87072768 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 424 handle_osd_map epochs [424,425], i have 424, src has [1,425]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbade17000 session 0x55cbaf7983c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbaeee4000 session 0x55cbaf7990e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 564559872 unmapped: 87080960 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5648825 data_alloc: 234881024 data_used: 21270528
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 heartbeat osd_stat(store_statfs(0x1973c1000/0x0/0x1bfc00000, data 0x26243d0/0x284b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 564559872 unmapped: 87080960 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 heartbeat osd_stat(store_statfs(0x1973c1000/0x0/0x1bfc00000, data 0x26243d0/0x284b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 564559872 unmapped: 87080960 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbad910800 session 0x55cbad92c000
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbade17000 session 0x55cbafb2e3c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbb315f000 session 0x55cbb2c1a780
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbb7cc4000 session 0x55cbb0b421e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 564568064 unmapped: 87072768 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbbe336400 session 0x55cbaf799c20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbad910800 session 0x55cbaf40d4a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbade17000 session 0x55cbafb82d20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbb315f000 session 0x55cbacc34d20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbb7cc4000 session 0x55cbacffb4a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 heartbeat osd_stat(store_statfs(0x1973c1000/0x0/0x1bfc00000, data 0x26243d0/0x284b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 564772864 unmapped: 86867968 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 564772864 unmapped: 86867968 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5745458 data_alloc: 234881024 data_used: 21278720
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.762641907s of 10.110259056s, submitted: 116
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbacfaa400 session 0x55cbacffaf00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbacfaa400 session 0x55cbaee5a780
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 564797440 unmapped: 86843392 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbad910800 session 0x55cbaee5a960
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbade17000 session 0x55cbaee5b0e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 heartbeat osd_stat(store_statfs(0x196793000/0x0/0x1bfc00000, data 0x355b442/0x347b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 564822016 unmapped: 86818816 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbb7cc4000 session 0x55cbaee5b860
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbb315f000 session 0x55cbaee5bc20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbb315f000 session 0x55cbac60e780
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbacfaa400 session 0x55cbb2c1a960
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbad910800 session 0x55cbadb090e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 564822016 unmapped: 86818816 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 564822016 unmapped: 86818816 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbb4ebc800 session 0x55cbac60f680
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbb0551400 session 0x55cbacf72f00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 564822016 unmapped: 86818816 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbacfaa400 session 0x55cbaf40de00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5800563 data_alloc: 234881024 data_used: 21397504
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 heartbeat osd_stat(store_statfs(0x196768000/0x0/0x1bfc00000, data 0x35854a4/0x34a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbad910800 session 0x55cbacf72000
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566124544 unmapped: 85516288 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567107584 unmapped: 84533248 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 heartbeat osd_stat(store_statfs(0x196482000/0x0/0x1bfc00000, data 0x386b4a4/0x378c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567107584 unmapped: 84533248 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbb315f000 session 0x55cbaee81a40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566984704 unmapped: 84656128 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566984704 unmapped: 84656128 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5883605 data_alloc: 234881024 data_used: 32722944
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567001088 unmapped: 84639744 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567001088 unmapped: 84639744 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 heartbeat osd_stat(store_statfs(0x196481000/0x0/0x1bfc00000, data 0x386b4c7/0x378d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25fef9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567001088 unmapped: 84639744 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567001088 unmapped: 84639744 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567001088 unmapped: 84639744 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5885205 data_alloc: 234881024 data_used: 33001472
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567001088 unmapped: 84639744 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.597194672s of 15.748700142s, submitted: 40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568762368 unmapped: 82878464 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbb4ebd000 session 0x55cbafb2fa40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568893440 unmapped: 82747392 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 heartbeat osd_stat(store_statfs(0x195eb3000/0x0/0x1bfc00000, data 0x3a504c7/0x3945000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569360384 unmapped: 82280448 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569458688 unmapped: 82182144 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5955807 data_alloc: 234881024 data_used: 34934784
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570089472 unmapped: 81551360 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570089472 unmapped: 81551360 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 heartbeat osd_stat(store_statfs(0x195e16000/0x0/0x1bfc00000, data 0x3ae44c7/0x39d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 heartbeat osd_stat(store_statfs(0x195e16000/0x0/0x1bfc00000, data 0x3ae44c7/0x39d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570089472 unmapped: 81551360 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 heartbeat osd_stat(store_statfs(0x195e05000/0x0/0x1bfc00000, data 0x3b044c7/0x39f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570089472 unmapped: 81551360 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570089472 unmapped: 81551360 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 heartbeat osd_stat(store_statfs(0x195e05000/0x0/0x1bfc00000, data 0x3b044c7/0x39f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5946127 data_alloc: 234881024 data_used: 34942976
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570089472 unmapped: 81551360 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.067697525s of 10.294413567s, submitted: 88
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570105856 unmapped: 81534976 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570105856 unmapped: 81534976 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbade17000 session 0x55cbaf54d680
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbb7cc4000 session 0x55cbaf54da40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570105856 unmapped: 81534976 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbacfaa400 session 0x55cbaf5a4780
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570114048 unmapped: 81526784 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5756880 data_alloc: 234881024 data_used: 25755648
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 heartbeat osd_stat(store_statfs(0x196cae000/0x0/0x1bfc00000, data 0x2c5c455/0x2b4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571760640 unmapped: 79880192 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572252160 unmapped: 79388672 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572252160 unmapped: 79388672 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572252160 unmapped: 79388672 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572260352 unmapped: 79380480 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5841448 data_alloc: 234881024 data_used: 26992640
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572260352 unmapped: 79380480 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 heartbeat osd_stat(store_statfs(0x196437000/0x0/0x1bfc00000, data 0x34d3455/0x33c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572260352 unmapped: 79380480 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.288564682s of 10.684764862s, submitted: 124
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572260352 unmapped: 79380480 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572268544 unmapped: 79372288 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 heartbeat osd_stat(store_statfs(0x196417000/0x0/0x1bfc00000, data 0x34f4455/0x33e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572268544 unmapped: 79372288 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5844308 data_alloc: 234881024 data_used: 27033600
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572268544 unmapped: 79372288 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbaf726800 session 0x55cbafb0cf00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 heartbeat osd_stat(store_statfs(0x1963f2000/0x0/0x1bfc00000, data 0x3519455/0x340c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572268544 unmapped: 79372288 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572276736 unmapped: 79364096 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572276736 unmapped: 79364096 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572276736 unmapped: 79364096 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5844308 data_alloc: 234881024 data_used: 27033600
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbad910800 session 0x55cbad892000
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572276736 unmapped: 79364096 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572276736 unmapped: 79364096 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.988227844s of 10.064723969s, submitted: 18
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbb4ebc800 session 0x55cbad893c20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbaf729000 session 0x55cbaf40dc20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbacfaa400 session 0x55cbacf2a000
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 heartbeat osd_stat(store_statfs(0x1963f2000/0x0/0x1bfc00000, data 0x3519455/0x340c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572227584 unmapped: 79413248 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572227584 unmapped: 79413248 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 heartbeat osd_stat(store_statfs(0x1963f2000/0x0/0x1bfc00000, data 0x3519432/0x340b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572227584 unmapped: 79413248 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbad910800 session 0x55cbbadf4f00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5840280 data_alloc: 234881024 data_used: 27025408
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbaf726800 session 0x55cbbadf5860
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572235776 unmapped: 79405056 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572235776 unmapped: 79405056 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 ms_handle_reset con 0x55cbb7cc4000 session 0x55cbacf72960
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572243968 unmapped: 79396864 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 425 handle_osd_map epochs [425,426], i have 425, src has [1,426]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 426 ms_handle_reset con 0x55cbacfaa400 session 0x55cbafaf1e00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 426 heartbeat osd_stat(store_statfs(0x19640c000/0x0/0x1bfc00000, data 0x31c9147/0x33f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 426 ms_handle_reset con 0x55cbaddbf800 session 0x55cbacdc0000
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 426 ms_handle_reset con 0x55cbaeee9800 session 0x55cbafb90b40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572252160 unmapped: 79388672 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 426 ms_handle_reset con 0x55cbad910800 session 0x55cbaf40cb40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572252160 unmapped: 79388672 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5826202 data_alloc: 234881024 data_used: 27025408
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572252160 unmapped: 79388672 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 426 ms_handle_reset con 0x55cbaf726800 session 0x55cbafb83860
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 426 ms_handle_reset con 0x55cbaf726800 session 0x55cbaee65c20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565846016 unmapped: 85794816 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565846016 unmapped: 85794816 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 426 heartbeat osd_stat(store_statfs(0x1971d7000/0x0/0x1bfc00000, data 0x208b0b2/0x22b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.160833359s of 11.446761131s, submitted: 98
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 426 ms_handle_reset con 0x55cbad910800 session 0x55cbb0b42f00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 426 heartbeat osd_stat(store_statfs(0x1971d7000/0x0/0x1bfc00000, data 0x208b0b2/0x22b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565846016 unmapped: 85794816 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 426 heartbeat osd_stat(store_statfs(0x1971d7000/0x0/0x1bfc00000, data 0x208b0b2/0x22b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 426 handle_osd_map epochs [426,427], i have 426, src has [1,427]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565846016 unmapped: 85794816 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 427 ms_handle_reset con 0x55cbacfaa400 session 0x55cbaf7bd0e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 427 handle_osd_map epochs [427,428], i have 427, src has [1,428]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5594847 data_alloc: 218103808 data_used: 13410304
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565854208 unmapped: 85786624 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565854208 unmapped: 85786624 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 428 heartbeat osd_stat(store_statfs(0x197546000/0x0/0x1bfc00000, data 0x208ea6f/0x22b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565854208 unmapped: 85786624 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565854208 unmapped: 85786624 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565854208 unmapped: 85786624 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 428 handle_osd_map epochs [428,429], i have 428, src has [1,429]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5597741 data_alloc: 218103808 data_used: 13451264
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565854208 unmapped: 85786624 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565862400 unmapped: 85778432 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x197544000/0x0/0x1bfc00000, data 0x2090678/0x22b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565862400 unmapped: 85778432 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 ms_handle_reset con 0x55cbaeee9800 session 0x55cbac60f4a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 ms_handle_reset con 0x55cbaf729000 session 0x55cbafaf0780
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565870592 unmapped: 85770240 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.307628632s of 11.402697563s, submitted: 48
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 ms_handle_reset con 0x55cbacfaa400 session 0x55cbaf7bc5a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565870592 unmapped: 85770240 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5598335 data_alloc: 218103808 data_used: 13451264
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x197545000/0x0/0x1bfc00000, data 0x2090678/0x22b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565870592 unmapped: 85770240 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565878784 unmapped: 85762048 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 ms_handle_reset con 0x55cbad910800 session 0x55cbb2c1b860
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 ms_handle_reset con 0x55cbaeee9800 session 0x55cbafb910e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 565911552 unmapped: 85729280 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 ms_handle_reset con 0x55cbaf726800 session 0x55cbbadf4f00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 ms_handle_reset con 0x55cbb315f000 session 0x55cbacf72960
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566992896 unmapped: 84647936 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566992896 unmapped: 84647936 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5718972 data_alloc: 218103808 data_used: 14905344
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566992896 unmapped: 84647936 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x19687c000/0x0/0x1bfc00000, data 0x2d586da/0x2f82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566992896 unmapped: 84647936 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x19687c000/0x0/0x1bfc00000, data 0x2d586da/0x2f82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566992896 unmapped: 84647936 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566992896 unmapped: 84647936 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566992896 unmapped: 84647936 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5718972 data_alloc: 218103808 data_used: 14905344
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x19687c000/0x0/0x1bfc00000, data 0x2d586da/0x2f82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566992896 unmapped: 84647936 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566992896 unmapped: 84647936 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566992896 unmapped: 84647936 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566992896 unmapped: 84647936 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x19687c000/0x0/0x1bfc00000, data 0x2d586da/0x2f82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x19687c000/0x0/0x1bfc00000, data 0x2d586da/0x2f82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566992896 unmapped: 84647936 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5718972 data_alloc: 218103808 data_used: 14905344
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566992896 unmapped: 84647936 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x19687c000/0x0/0x1bfc00000, data 0x2d586da/0x2f82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.328927994s of 16.574344635s, submitted: 69
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 ms_handle_reset con 0x55cbb315f000 session 0x55cbafb82000
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566861824 unmapped: 84779008 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566755328 unmapped: 84885504 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x19687b000/0x0/0x1bfc00000, data 0x2d586fd/0x2f83000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569442304 unmapped: 82198528 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569442304 unmapped: 82198528 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5805093 data_alloc: 234881024 data_used: 26980352
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569442304 unmapped: 82198528 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569442304 unmapped: 82198528 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x19687b000/0x0/0x1bfc00000, data 0x2d586fd/0x2f83000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569442304 unmapped: 82198528 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569442304 unmapped: 82198528 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x19687b000/0x0/0x1bfc00000, data 0x2d586fd/0x2f83000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x19687b000/0x0/0x1bfc00000, data 0x2d586fd/0x2f83000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569548800 unmapped: 82092032 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5808293 data_alloc: 234881024 data_used: 27578368
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 ms_handle_reset con 0x55cbaddbf800 session 0x55cbaee5a5a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569548800 unmapped: 82092032 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 ms_handle_reset con 0x55cbad910800 session 0x55cbafb2e960
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569548800 unmapped: 82092032 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 ms_handle_reset con 0x55cbaeee9800 session 0x55cbacc35e00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569548800 unmapped: 82092032 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.822283745s of 11.913968086s, submitted: 16
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 ms_handle_reset con 0x55cbaf726800 session 0x55cbacc34d20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 563961856 unmapped: 87678976 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x1971b3000/0x0/0x1bfc00000, data 0x24216da/0x264b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [0,0,1,0,2,2])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567222272 unmapped: 84418560 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5766327 data_alloc: 234881024 data_used: 23625728
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567287808 unmapped: 84353024 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567574528 unmapped: 84066304 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567582720 unmapped: 84058112 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x196831000/0x0/0x1bfc00000, data 0x2d946da/0x2fbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567582720 unmapped: 84058112 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567582720 unmapped: 84058112 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5790443 data_alloc: 234881024 data_used: 24244224
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567582720 unmapped: 84058112 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567582720 unmapped: 84058112 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567582720 unmapped: 84058112 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x196831000/0x0/0x1bfc00000, data 0x2d946da/0x2fbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567582720 unmapped: 84058112 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.886643410s of 11.222091675s, submitted: 145
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 ms_handle_reset con 0x55cbacfaa400 session 0x55cbbadf54a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567590912 unmapped: 84049920 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x19680c000/0x0/0x1bfc00000, data 0x2dc76da/0x2ff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5786943 data_alloc: 234881024 data_used: 24264704
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567590912 unmapped: 84049920 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 ms_handle_reset con 0x55cbaf726800 session 0x55cbade8ba40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567590912 unmapped: 84049920 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x19680d000/0x0/0x1bfc00000, data 0x2dc76b7/0x2ff0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567590912 unmapped: 84049920 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567590912 unmapped: 84049920 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567590912 unmapped: 84049920 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5782023 data_alloc: 234881024 data_used: 24260608
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567590912 unmapped: 84049920 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567590912 unmapped: 84049920 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x19680d000/0x0/0x1bfc00000, data 0x2dc76b7/0x2ff0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567590912 unmapped: 84049920 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567590912 unmapped: 84049920 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567590912 unmapped: 84049920 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5782023 data_alloc: 234881024 data_used: 24260608
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567590912 unmapped: 84049920 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x19680d000/0x0/0x1bfc00000, data 0x2dc76b7/0x2ff0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567590912 unmapped: 84049920 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567599104 unmapped: 84041728 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567599104 unmapped: 84041728 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x19680d000/0x0/0x1bfc00000, data 0x2dc76b7/0x2ff0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567599104 unmapped: 84041728 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5782023 data_alloc: 234881024 data_used: 24260608
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567607296 unmapped: 84033536 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 ms_handle_reset con 0x55cbad910800 session 0x55cbacdc1c20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567607296 unmapped: 84033536 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 ms_handle_reset con 0x55cbaddbf800 session 0x55cbaf5a4b40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567607296 unmapped: 84033536 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 ms_handle_reset con 0x55cbaeee9800 session 0x55cbaf7985a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567607296 unmapped: 84033536 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.616521835s of 19.702690125s, submitted: 37
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 ms_handle_reset con 0x55cbacfaa400 session 0x55cbafb82b40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567607296 unmapped: 84033536 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5785182 data_alloc: 234881024 data_used: 24260608
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x19680d000/0x0/0x1bfc00000, data 0x2dc76c7/0x2ff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568672256 unmapped: 82968576 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568688640 unmapped: 82952192 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568688640 unmapped: 82952192 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568688640 unmapped: 82952192 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568688640 unmapped: 82952192 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5787206 data_alloc: 234881024 data_used: 24428544
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x19680d000/0x0/0x1bfc00000, data 0x2dc76c7/0x2ff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568688640 unmapped: 82952192 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568688640 unmapped: 82952192 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568688640 unmapped: 82952192 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568688640 unmapped: 82952192 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568688640 unmapped: 82952192 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5787206 data_alloc: 234881024 data_used: 24428544
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x19680d000/0x0/0x1bfc00000, data 0x2dc76c7/0x2ff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568688640 unmapped: 82952192 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.876774788s of 12.909799576s, submitted: 9
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568688640 unmapped: 82952192 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568696832 unmapped: 82944000 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568713216 unmapped: 82927616 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568713216 unmapped: 82927616 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5799914 data_alloc: 234881024 data_used: 25272320
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568713216 unmapped: 82927616 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x19680b000/0x0/0x1bfc00000, data 0x2dc76c7/0x2ff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x19680b000/0x0/0x1bfc00000, data 0x2dc76c7/0x2ff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568713216 unmapped: 82927616 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568713216 unmapped: 82927616 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568713216 unmapped: 82927616 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568713216 unmapped: 82927616 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5799914 data_alloc: 234881024 data_used: 25272320
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x19680b000/0x0/0x1bfc00000, data 0x2dc76c7/0x2ff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568713216 unmapped: 82927616 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568713216 unmapped: 82927616 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568713216 unmapped: 82927616 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568713216 unmapped: 82927616 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568713216 unmapped: 82927616 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5799914 data_alloc: 234881024 data_used: 25272320
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568713216 unmapped: 82927616 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x19680b000/0x0/0x1bfc00000, data 0x2dc76c7/0x2ff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 568713216 unmapped: 82927616 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.725801468s of 15.923392296s, submitted: 22
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569761792 unmapped: 81879040 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569761792 unmapped: 81879040 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569761792 unmapped: 81879040 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5798742 data_alloc: 234881024 data_used: 25268224
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569761792 unmapped: 81879040 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569761792 unmapped: 81879040 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x196805000/0x0/0x1bfc00000, data 0x2dcc6c7/0x2ff6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569761792 unmapped: 81879040 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569761792 unmapped: 81879040 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x196805000/0x0/0x1bfc00000, data 0x2dcc6c7/0x2ff6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569769984 unmapped: 81870848 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5798742 data_alloc: 234881024 data_used: 25268224
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569769984 unmapped: 81870848 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569769984 unmapped: 81870848 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x196805000/0x0/0x1bfc00000, data 0x2dcc6c7/0x2ff6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569769984 unmapped: 81870848 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569769984 unmapped: 81870848 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.484737396s of 11.501891136s, submitted: 12
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569769984 unmapped: 81870848 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5802422 data_alloc: 234881024 data_used: 26091520
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 ms_handle_reset con 0x55cbaf726800 session 0x55cbacf2b0e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569769984 unmapped: 81870848 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 ms_handle_reset con 0x55cbb315f000 session 0x55cbaf40d2c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569769984 unmapped: 81870848 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x196808000/0x0/0x1bfc00000, data 0x2dcc6c7/0x2ff6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569769984 unmapped: 81870848 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 heartbeat osd_stat(store_statfs(0x196808000/0x0/0x1bfc00000, data 0x2dcc6c7/0x2ff6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 handle_osd_map epochs [430,430], i have 429, src has [1,430]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 handle_osd_map epochs [429,430], i have 430, src has [1,430]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 429 handle_osd_map epochs [430,430], i have 430, src has [1,430]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569786368 unmapped: 81854464 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 430 heartbeat osd_stat(store_statfs(0x196803000/0x0/0x1bfc00000, data 0x2dce3fa/0x2ffa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1,0,1])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569786368 unmapped: 81854464 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5807214 data_alloc: 234881024 data_used: 26099712
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 430 ms_handle_reset con 0x55cbaeee9c00 session 0x55cbbadf4000
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 430 ms_handle_reset con 0x55cbb4ebd000 session 0x55cbadd6a960
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569786368 unmapped: 81854464 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 430 heartbeat osd_stat(store_statfs(0x196804000/0x0/0x1bfc00000, data 0x2dce3fa/0x2ffa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569786368 unmapped: 81854464 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 430 ms_handle_reset con 0x55cbacfaa400 session 0x55cbaf40cd20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 430 ms_handle_reset con 0x55cbb4ebd000 session 0x55cbbadf5a40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569794560 unmapped: 81846272 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 430 ms_handle_reset con 0x55cbaeee9c00 session 0x55cbafb0da40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569802752 unmapped: 81838080 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 5.845624447s of 10.049188614s, submitted: 29
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 430 ms_handle_reset con 0x55cbaf726800 session 0x55cbac60f4a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569802752 unmapped: 81838080 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5806497 data_alloc: 234881024 data_used: 26103808
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569810944 unmapped: 81829888 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 430 heartbeat osd_stat(store_statfs(0x196804000/0x0/0x1bfc00000, data 0x2dce3fa/0x2ffa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569819136 unmapped: 81821696 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 430 ms_handle_reset con 0x55cbb315f000 session 0x55cbb0b43680
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 430 ms_handle_reset con 0x55cbacfaa400 session 0x55cbaf5a52c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569827328 unmapped: 81813504 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 430 heartbeat osd_stat(store_statfs(0x196038000/0x0/0x1bfc00000, data 0x359a3fa/0x37c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 430 heartbeat osd_stat(store_statfs(0x196038000/0x0/0x1bfc00000, data 0x359a3fa/0x37c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569827328 unmapped: 81813504 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569827328 unmapped: 81813504 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5867613 data_alloc: 234881024 data_used: 26107904
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569835520 unmapped: 81805312 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569835520 unmapped: 81805312 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569835520 unmapped: 81805312 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 430 heartbeat osd_stat(store_statfs(0x196038000/0x0/0x1bfc00000, data 0x359a3fa/0x37c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 430 handle_osd_map epochs [430,431], i have 430, src has [1,431]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569835520 unmapped: 81805312 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 431 heartbeat osd_stat(store_statfs(0x196034000/0x0/0x1bfc00000, data 0x359c11d/0x37c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 431 handle_osd_map epochs [431,432], i have 431, src has [1,432]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.457994461s of 10.397141457s, submitted: 19
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 432 ms_handle_reset con 0x55cbaf726800 session 0x55cbacf2ab40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 569851904 unmapped: 81788928 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5879925 data_alloc: 234881024 data_used: 26116096
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 432 handle_osd_map epochs [432,433], i have 432, src has [1,433]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573767680 unmapped: 77873152 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 433 ms_handle_reset con 0x55cbb5fe3000 session 0x55cbafb903c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 433 handle_osd_map epochs [433,434], i have 433, src has [1,434]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 434 ms_handle_reset con 0x55cbb4ebd000 session 0x55cbb0b43a40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 434 ms_handle_reset con 0x55cbb076fc00 session 0x55cbaf798780
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 434 ms_handle_reset con 0x55cbaeee9c00 session 0x55cbacdc01e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566845440 unmapped: 84795392 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566845440 unmapped: 84795392 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 434 ms_handle_reset con 0x55cbacfaa400 session 0x55cbafb825a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 434 heartbeat osd_stat(store_statfs(0x1955ee000/0x0/0x1bfc00000, data 0x3fd9f67/0x420c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 434 ms_handle_reset con 0x55cbaf726800 session 0x55cbaf798d20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 434 heartbeat osd_stat(store_statfs(0x1955ef000/0x0/0x1bfc00000, data 0x401ffc9/0x420f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566861824 unmapped: 84779008 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 434 ms_handle_reset con 0x55cbb4ebd000 session 0x55cbacf64780
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 434 ms_handle_reset con 0x55cbaf721000 session 0x55cbbadf4b40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 434 ms_handle_reset con 0x55cbacfab800 session 0x55cbaf40c5a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 434 ms_handle_reset con 0x55cbacfaa400 session 0x55cbaf5a5860
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566009856 unmapped: 85630976 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 434 ms_handle_reset con 0x55cbb08a6c00 session 0x55cbaf40cd20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5998362 data_alloc: 234881024 data_used: 26120192
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 434 ms_handle_reset con 0x55cbaeee9c00 session 0x55cbac60e780
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 434 heartbeat osd_stat(store_statfs(0x1955ef000/0x0/0x1bfc00000, data 0x401ffc9/0x420f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 434 handle_osd_map epochs [434,435], i have 434, src has [1,435]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 435 ms_handle_reset con 0x55cbb315ec00 session 0x55cbafb90000
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 435 ms_handle_reset con 0x55cbacfaa400 session 0x55cbafb0c1e0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 435 ms_handle_reset con 0x55cbacfab800 session 0x55cbbadf5680
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 435 ms_handle_reset con 0x55cbb5fe3000 session 0x55cbad0305a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566493184 unmapped: 85147648 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566517760 unmapped: 85123072 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566525952 unmapped: 85114880 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566525952 unmapped: 85114880 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566525952 unmapped: 85114880 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 435 heartbeat osd_stat(store_statfs(0x195300000/0x0/0x1bfc00000, data 0x430ad96/0x44fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6017676 data_alloc: 234881024 data_used: 26181632
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566525952 unmapped: 85114880 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566525952 unmapped: 85114880 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 435 heartbeat osd_stat(store_statfs(0x195300000/0x0/0x1bfc00000, data 0x430ad96/0x44fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566525952 unmapped: 85114880 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.260416031s of 13.620246887s, submitted: 103
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566329344 unmapped: 85311488 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566329344 unmapped: 85311488 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6015596 data_alloc: 234881024 data_used: 26177536
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566329344 unmapped: 85311488 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566329344 unmapped: 85311488 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 435 heartbeat osd_stat(store_statfs(0x195301000/0x0/0x1bfc00000, data 0x430ad96/0x44fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566329344 unmapped: 85311488 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566919168 unmapped: 84721664 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566919168 unmapped: 84721664 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6037728 data_alloc: 234881024 data_used: 27869184
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566919168 unmapped: 84721664 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566919168 unmapped: 84721664 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 435 heartbeat osd_stat(store_statfs(0x195241000/0x0/0x1bfc00000, data 0x43cad96/0x45bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566919168 unmapped: 84721664 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.571662903s of 10.598692894s, submitted: 6
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566919168 unmapped: 84721664 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 435 heartbeat osd_stat(store_statfs(0x195241000/0x0/0x1bfc00000, data 0x43cad96/0x45bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566919168 unmapped: 84721664 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6038080 data_alloc: 234881024 data_used: 27869184
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 435 ms_handle_reset con 0x55cbb315ec00 session 0x55cbafb82b40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566919168 unmapped: 84721664 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566919168 unmapped: 84721664 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 435 heartbeat osd_stat(store_statfs(0x195241000/0x0/0x1bfc00000, data 0x43cad96/0x45bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566919168 unmapped: 84721664 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566919168 unmapped: 84721664 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566919168 unmapped: 84721664 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6058429 data_alloc: 234881024 data_used: 30396416
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566919168 unmapped: 84721664 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566919168 unmapped: 84721664 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566919168 unmapped: 84721664 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 435 heartbeat osd_stat(store_statfs(0x195241000/0x0/0x1bfc00000, data 0x43cad96/0x45bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566919168 unmapped: 84721664 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566919168 unmapped: 84721664 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6058077 data_alloc: 234881024 data_used: 30396416
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566919168 unmapped: 84721664 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 566919168 unmapped: 84721664 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 435 heartbeat osd_stat(store_statfs(0x195241000/0x0/0x1bfc00000, data 0x43cad96/0x45bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.164316177s of 14.216246605s, submitted: 13
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 567386112 unmapped: 84254720 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 435 heartbeat osd_stat(store_statfs(0x195241000/0x0/0x1bfc00000, data 0x43cad96/0x45bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573087744 unmapped: 78553088 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572989440 unmapped: 78651392 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6222717 data_alloc: 234881024 data_used: 32174080
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 435 heartbeat osd_stat(store_statfs(0x1943bd000/0x0/0x1bfc00000, data 0x54f6d96/0x543d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572989440 unmapped: 78651392 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 435 heartbeat osd_stat(store_statfs(0x1943bd000/0x0/0x1bfc00000, data 0x54f6d96/0x543d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572989440 unmapped: 78651392 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572989440 unmapped: 78651392 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572989440 unmapped: 78651392 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572989440 unmapped: 78651392 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6220577 data_alloc: 234881024 data_used: 32186368
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572989440 unmapped: 78651392 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 435 ms_handle_reset con 0x55cbb3953800 session 0x55cbade8ba40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 435 ms_handle_reset con 0x55cbaf716400 session 0x55cbacc34d20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 435 ms_handle_reset con 0x55cbacfaa400 session 0x55cbaee5af00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 435 heartbeat osd_stat(store_statfs(0x19439c000/0x0/0x1bfc00000, data 0x551ad96/0x5461000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573005824 unmapped: 78635008 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 435 ms_handle_reset con 0x55cbaeee9c00 session 0x55cbafb914a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 435 ms_handle_reset con 0x55cbb08a6c00 session 0x55cbafb2ef00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.559671402s of 10.041211128s, submitted: 174
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573005824 unmapped: 78635008 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 435 ms_handle_reset con 0x55cbacfab800 session 0x55cbafb90780
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 435 heartbeat osd_stat(store_statfs(0x19439d000/0x0/0x1bfc00000, data 0x551ad96/0x5461000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573014016 unmapped: 78626816 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573014016 unmapped: 78626816 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6211869 data_alloc: 234881024 data_used: 32317440
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 435 ms_handle_reset con 0x55cbb08a6c00 session 0x55cbafaf1c20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573014016 unmapped: 78626816 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 435 handle_osd_map epochs [435,436], i have 435, src has [1,436]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 436 handle_osd_map epochs [436,436], i have 436, src has [1,436]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 436 ms_handle_reset con 0x55cbaf716400 session 0x55cbafb2eb40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 436 ms_handle_reset con 0x55cbacfaa400 session 0x55cbaee81a40
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 436 ms_handle_reset con 0x55cbb315ec00 session 0x55cbaf7bd2c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 436 ms_handle_reset con 0x55cbaeee9c00 session 0x55cbb2c1b4a0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 436 ms_handle_reset con 0x55cbb5fe3000 session 0x55cbadd8f2c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 436 ms_handle_reset con 0x55cbaeee9c00 session 0x55cbacc34f00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 436 ms_handle_reset con 0x55cbb08a7c00 session 0x55cbaf7bc3c0
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572301312 unmapped: 79339520 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 436 ms_handle_reset con 0x55cbacfaa400 session 0x55cbaf54c780
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 436 heartbeat osd_stat(store_statfs(0x19659d000/0x0/0x1bfc00000, data 0x2ff157a/0x31df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572325888 unmapped: 79314944 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 436 handle_osd_map epochs [436,437], i have 436, src has [1,437]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 436 handle_osd_map epochs [437,437], i have 437, src has [1,437]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 437 ms_handle_reset con 0x55cbaf716400 session 0x55cbacf2ad20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 437 heartbeat osd_stat(store_statfs(0x1967f1000/0x0/0x1bfc00000, data 0x2e1f57a/0x300d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572309504 unmapped: 79331328 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 437 heartbeat osd_stat(store_statfs(0x1967ef000/0x0/0x1bfc00000, data 0x2ddb30d/0x300e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572309504 unmapped: 79331328 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5878214 data_alloc: 234881024 data_used: 27750400
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 437 ms_handle_reset con 0x55cbad910800 session 0x55cbac60e960
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 437 ms_handle_reset con 0x55cbaddbf800 session 0x55cbaf798960
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572317696 unmapped: 79323136 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 437 ms_handle_reset con 0x55cbacfaa400 session 0x55cbaf54cf00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 437 heartbeat osd_stat(store_statfs(0x1967f1000/0x0/0x1bfc00000, data 0x2ddb2fd/0x300d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572334080 unmapped: 79306752 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572334080 unmapped: 79306752 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572334080 unmapped: 79306752 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572334080 unmapped: 79306752 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5875573 data_alloc: 234881024 data_used: 27754496
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 437 handle_osd_map epochs [437,438], i have 437, src has [1,438]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.934582710s of 12.426671028s, submitted: 170
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572334080 unmapped: 79306752 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 ms_handle_reset con 0x55cbaeee9c00 session 0x55cbafb91860
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572342272 unmapped: 79298560 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 ms_handle_reset con 0x55cbb08a7c00 session 0x55cbacdc0d20
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571326464 unmapped: 80314368 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571326464 unmapped: 80314368 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571326464 unmapped: 80314368 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571326464 unmapped: 80314368 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571326464 unmapped: 80314368 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571334656 unmapped: 80306176 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571334656 unmapped: 80306176 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571334656 unmapped: 80306176 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571342848 unmapped: 80297984 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571342848 unmapped: 80297984 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571342848 unmapped: 80297984 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571342848 unmapped: 80297984 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571342848 unmapped: 80297984 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571342848 unmapped: 80297984 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571342848 unmapped: 80297984 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571342848 unmapped: 80297984 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571351040 unmapped: 80289792 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571351040 unmapped: 80289792 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571351040 unmapped: 80289792 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571351040 unmapped: 80289792 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571351040 unmapped: 80289792 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571351040 unmapped: 80289792 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571351040 unmapped: 80289792 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571351040 unmapped: 80289792 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571351040 unmapped: 80289792 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571351040 unmapped: 80289792 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571351040 unmapped: 80289792 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571351040 unmapped: 80289792 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571351040 unmapped: 80289792 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571359232 unmapped: 80281600 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571359232 unmapped: 80281600 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571367424 unmapped: 80273408 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571375616 unmapped: 80265216 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571375616 unmapped: 80265216 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571375616 unmapped: 80265216 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571375616 unmapped: 80265216 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571375616 unmapped: 80265216 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571383808 unmapped: 80257024 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571383808 unmapped: 80257024 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571383808 unmapped: 80257024 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571383808 unmapped: 80257024 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571383808 unmapped: 80257024 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571383808 unmapped: 80257024 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571383808 unmapped: 80257024 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571383808 unmapped: 80257024 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571392000 unmapped: 80248832 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571392000 unmapped: 80248832 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571400192 unmapped: 80240640 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571400192 unmapped: 80240640 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571400192 unmapped: 80240640 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571400192 unmapped: 80240640 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571400192 unmapped: 80240640 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571400192 unmapped: 80240640 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571400192 unmapped: 80240640 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571400192 unmapped: 80240640 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571408384 unmapped: 80232448 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571408384 unmapped: 80232448 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571408384 unmapped: 80232448 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571408384 unmapped: 80232448 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571408384 unmapped: 80232448 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571408384 unmapped: 80232448 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571416576 unmapped: 80224256 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571416576 unmapped: 80224256 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571416576 unmapped: 80224256 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571416576 unmapped: 80224256 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571416576 unmapped: 80224256 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571416576 unmapped: 80224256 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571416576 unmapped: 80224256 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571416576 unmapped: 80224256 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571424768 unmapped: 80216064 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571424768 unmapped: 80216064 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 80207872 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 80207872 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 80207872 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 80207872 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 80207872 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571432960 unmapped: 80207872 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571441152 unmapped: 80199680 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571441152 unmapped: 80199680 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571441152 unmapped: 80199680 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571441152 unmapped: 80199680 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571441152 unmapped: 80199680 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571441152 unmapped: 80199680 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: do_command 'config diff' '{prefix=config diff}'
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571424768 unmapped: 80216064 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: do_command 'config show' '{prefix=config show}'
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: do_command 'counter dump' '{prefix=counter dump}'
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: do_command 'counter schema' '{prefix=counter schema}'
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570761216 unmapped: 80879616 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570114048 unmapped: 81526784 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: do_command 'log dump' '{prefix=log dump}'
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570114048 unmapped: 81526784 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: do_command 'perf dump' '{prefix=perf dump}'
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: do_command 'perf schema' '{prefix=perf schema}'
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570458112 unmapped: 81182720 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570458112 unmapped: 81182720 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570458112 unmapped: 81182720 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570458112 unmapped: 81182720 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570458112 unmapped: 81182720 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570458112 unmapped: 81182720 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570466304 unmapped: 81174528 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570466304 unmapped: 81174528 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570466304 unmapped: 81174528 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570466304 unmapped: 81174528 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570474496 unmapped: 81166336 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570474496 unmapped: 81166336 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570474496 unmapped: 81166336 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570474496 unmapped: 81166336 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570474496 unmapped: 81166336 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570474496 unmapped: 81166336 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570482688 unmapped: 81158144 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570482688 unmapped: 81158144 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570482688 unmapped: 81158144 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570482688 unmapped: 81158144 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570490880 unmapped: 81149952 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570490880 unmapped: 81149952 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570490880 unmapped: 81149952 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570490880 unmapped: 81149952 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570499072 unmapped: 81141760 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570499072 unmapped: 81141760 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570499072 unmapped: 81141760 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570499072 unmapped: 81141760 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570499072 unmapped: 81141760 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570499072 unmapped: 81141760 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570507264 unmapped: 81133568 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570507264 unmapped: 81133568 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570507264 unmapped: 81133568 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570507264 unmapped: 81133568 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570507264 unmapped: 81133568 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570507264 unmapped: 81133568 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570507264 unmapped: 81133568 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570507264 unmapped: 81133568 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570515456 unmapped: 81125376 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570515456 unmapped: 81125376 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570515456 unmapped: 81125376 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570515456 unmapped: 81125376 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570515456 unmapped: 81125376 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570515456 unmapped: 81125376 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570515456 unmapped: 81125376 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570515456 unmapped: 81125376 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570523648 unmapped: 81117184 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570523648 unmapped: 81117184 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570531840 unmapped: 81108992 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570531840 unmapped: 81108992 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570531840 unmapped: 81108992 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570531840 unmapped: 81108992 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570531840 unmapped: 81108992 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570531840 unmapped: 81108992 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570540032 unmapped: 81100800 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570540032 unmapped: 81100800 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570540032 unmapped: 81100800 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570548224 unmapped: 81092608 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570548224 unmapped: 81092608 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570548224 unmapped: 81092608 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570548224 unmapped: 81092608 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570548224 unmapped: 81092608 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570556416 unmapped: 81084416 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570556416 unmapped: 81084416 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570564608 unmapped: 81076224 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570564608 unmapped: 81076224 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570564608 unmapped: 81076224 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570564608 unmapped: 81076224 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570564608 unmapped: 81076224 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570564608 unmapped: 81076224 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570572800 unmapped: 81068032 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570572800 unmapped: 81068032 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570572800 unmapped: 81068032 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570572800 unmapped: 81068032 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570572800 unmapped: 81068032 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570572800 unmapped: 81068032 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570572800 unmapped: 81068032 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570580992 unmapped: 81059840 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570580992 unmapped: 81059840 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570580992 unmapped: 81059840 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6000.3 total, 600.0 interval
Cumulative writes: 80K writes, 319K keys, 80K commit groups, 1.0 writes per commit group, ingest: 0.32 GB, 0.06 MB/s
Cumulative WAL: 80K writes, 29K syncs, 2.68 writes per sync, written: 0.32 GB, 0.06 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 5435 writes, 20K keys, 5435 commit groups, 1.0 writes per commit group, ingest: 21.68 MB, 0.04 MB/s
Interval WAL: 5435 writes, 2202 syncs, 2.47 writes per sync, written: 0.02 GB, 0.04 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.3 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55cbab63d350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.3 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55cbab63d350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.3 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 me
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570580992 unmapped: 81059840 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570580992 unmapped: 81059840 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570580992 unmapped: 81059840 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570580992 unmapped: 81059840 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570580992 unmapped: 81059840 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570580992 unmapped: 81059840 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570589184 unmapped: 81051648 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570589184 unmapped: 81051648 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570605568 unmapped: 81035264 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570605568 unmapped: 81035264 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570605568 unmapped: 81035264 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570605568 unmapped: 81035264 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570605568 unmapped: 81035264 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570605568 unmapped: 81035264 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570613760 unmapped: 81027072 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570613760 unmapped: 81027072 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570621952 unmapped: 81018880 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570621952 unmapped: 81018880 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570621952 unmapped: 81018880 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570621952 unmapped: 81018880 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570621952 unmapped: 81018880 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570621952 unmapped: 81018880 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570630144 unmapped: 81010688 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570630144 unmapped: 81010688 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570638336 unmapped: 81002496 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570638336 unmapped: 81002496 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570638336 unmapped: 81002496 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570638336 unmapped: 81002496 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570646528 unmapped: 80994304 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570646528 unmapped: 80994304 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570654720 unmapped: 80986112 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570654720 unmapped: 80986112 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570654720 unmapped: 80986112 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570662912 unmapped: 80977920 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570662912 unmapped: 80977920 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570662912 unmapped: 80977920 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570662912 unmapped: 80977920 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570662912 unmapped: 80977920 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570671104 unmapped: 80969728 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570679296 unmapped: 80961536 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570679296 unmapped: 80961536 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569405 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570687488 unmapped: 80953344 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980cd000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570687488 unmapped: 80953344 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570687488 unmapped: 80953344 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 213.610794067s of 213.710220337s, submitted: 64
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570695680 unmapped: 80945152 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570703872 unmapped: 80936960 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980ce000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [0,0,0,0,0,0,1,0,1])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570736640 unmapped: 80904192 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570769408 unmapped: 80871424 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570777600 unmapped: 80863232 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570785792 unmapped: 80855040 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570785792 unmapped: 80855040 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569373 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x1980ce000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x263ff9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570802176 unmapped: 80838656 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570810368 unmapped: 80830464 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570810368 unmapped: 80830464 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570810368 unmapped: 80830464 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570810368 unmapped: 80830464 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570810368 unmapped: 80830464 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570810368 unmapped: 80830464 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570810368 unmapped: 80830464 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570810368 unmapped: 80830464 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570810368 unmapped: 80830464 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570810368 unmapped: 80830464 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570810368 unmapped: 80830464 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570810368 unmapped: 80830464 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570818560 unmapped: 80822272 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570818560 unmapped: 80822272 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570818560 unmapped: 80822272 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570818560 unmapped: 80822272 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570818560 unmapped: 80822272 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570826752 unmapped: 80814080 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570826752 unmapped: 80814080 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570826752 unmapped: 80814080 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570826752 unmapped: 80814080 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570826752 unmapped: 80814080 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570826752 unmapped: 80814080 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570834944 unmapped: 80805888 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570834944 unmapped: 80805888 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570834944 unmapped: 80805888 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570834944 unmapped: 80805888 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570834944 unmapped: 80805888 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570843136 unmapped: 80797696 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570843136 unmapped: 80797696 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570851328 unmapped: 80789504 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570851328 unmapped: 80789504 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570851328 unmapped: 80789504 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570851328 unmapped: 80789504 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570851328 unmapped: 80789504 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570851328 unmapped: 80789504 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570859520 unmapped: 80781312 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570859520 unmapped: 80781312 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570859520 unmapped: 80781312 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570859520 unmapped: 80781312 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570859520 unmapped: 80781312 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570859520 unmapped: 80781312 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570859520 unmapped: 80781312 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570859520 unmapped: 80781312 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570867712 unmapped: 80773120 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570867712 unmapped: 80773120 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570867712 unmapped: 80773120 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570867712 unmapped: 80773120 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570867712 unmapped: 80773120 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570884096 unmapped: 80756736 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570884096 unmapped: 80756736 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570884096 unmapped: 80756736 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570884096 unmapped: 80756736 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570884096 unmapped: 80756736 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570884096 unmapped: 80756736 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570884096 unmapped: 80756736 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570884096 unmapped: 80756736 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570892288 unmapped: 80748544 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570892288 unmapped: 80748544 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570892288 unmapped: 80748544 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570892288 unmapped: 80748544 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570892288 unmapped: 80748544 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570892288 unmapped: 80748544 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570892288 unmapped: 80748544 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570892288 unmapped: 80748544 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570900480 unmapped: 80740352 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570900480 unmapped: 80740352 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570916864 unmapped: 80723968 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570916864 unmapped: 80723968 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570916864 unmapped: 80723968 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570916864 unmapped: 80723968 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570916864 unmapped: 80723968 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570916864 unmapped: 80723968 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570925056 unmapped: 80715776 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570925056 unmapped: 80715776 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570925056 unmapped: 80715776 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570925056 unmapped: 80715776 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570925056 unmapped: 80715776 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570925056 unmapped: 80715776 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570925056 unmapped: 80715776 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570925056 unmapped: 80715776 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570933248 unmapped: 80707584 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570933248 unmapped: 80707584 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570941440 unmapped: 80699392 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570957824 unmapped: 80683008 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570957824 unmapped: 80683008 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570957824 unmapped: 80683008 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570957824 unmapped: 80683008 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570957824 unmapped: 80683008 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570957824 unmapped: 80683008 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570957824 unmapped: 80683008 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570966016 unmapped: 80674816 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570966016 unmapped: 80674816 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570966016 unmapped: 80674816 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570966016 unmapped: 80674816 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570966016 unmapped: 80674816 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570966016 unmapped: 80674816 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570974208 unmapped: 80666624 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570974208 unmapped: 80666624 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570974208 unmapped: 80666624 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570982400 unmapped: 80658432 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570990592 unmapped: 80650240 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570990592 unmapped: 80650240 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570990592 unmapped: 80650240 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570990592 unmapped: 80650240 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570990592 unmapped: 80650240 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570990592 unmapped: 80650240 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570998784 unmapped: 80642048 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570998784 unmapped: 80642048 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570998784 unmapped: 80642048 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570998784 unmapped: 80642048 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570998784 unmapped: 80642048 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 570998784 unmapped: 80642048 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571006976 unmapped: 80633856 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571006976 unmapped: 80633856 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571023360 unmapped: 80617472 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571023360 unmapped: 80617472 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571023360 unmapped: 80617472 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571023360 unmapped: 80617472 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571023360 unmapped: 80617472 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571023360 unmapped: 80617472 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571031552 unmapped: 80609280 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571031552 unmapped: 80609280 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571031552 unmapped: 80609280 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571031552 unmapped: 80609280 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571031552 unmapped: 80609280 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571031552 unmapped: 80609280 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571031552 unmapped: 80609280 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571031552 unmapped: 80609280 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571039744 unmapped: 80601088 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571039744 unmapped: 80601088 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571047936 unmapped: 80592896 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571047936 unmapped: 80592896 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571056128 unmapped: 80584704 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571056128 unmapped: 80584704 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571056128 unmapped: 80584704 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571056128 unmapped: 80584704 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571064320 unmapped: 80576512 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571064320 unmapped: 80576512 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571080704 unmapped: 80560128 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571080704 unmapped: 80560128 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571080704 unmapped: 80560128 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571080704 unmapped: 80560128 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571080704 unmapped: 80560128 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571080704 unmapped: 80560128 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571080704 unmapped: 80560128 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571080704 unmapped: 80560128 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571088896 unmapped: 80551936 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571088896 unmapped: 80551936 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571088896 unmapped: 80551936 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571088896 unmapped: 80551936 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571097088 unmapped: 80543744 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571097088 unmapped: 80543744 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571105280 unmapped: 80535552 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571105280 unmapped: 80535552 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571105280 unmapped: 80535552 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571105280 unmapped: 80535552 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571105280 unmapped: 80535552 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571105280 unmapped: 80535552 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571105280 unmapped: 80535552 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571105280 unmapped: 80535552 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571113472 unmapped: 80527360 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571113472 unmapped: 80527360 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571121664 unmapped: 80519168 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571121664 unmapped: 80519168 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571121664 unmapped: 80519168 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571121664 unmapped: 80519168 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571129856 unmapped: 80510976 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571129856 unmapped: 80510976 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571138048 unmapped: 80502784 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571138048 unmapped: 80502784 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571154432 unmapped: 80486400 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571154432 unmapped: 80486400 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571154432 unmapped: 80486400 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571154432 unmapped: 80486400 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571154432 unmapped: 80486400 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571154432 unmapped: 80486400 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571162624 unmapped: 80478208 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571162624 unmapped: 80478208 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571162624 unmapped: 80478208 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571162624 unmapped: 80478208 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571162624 unmapped: 80478208 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571162624 unmapped: 80478208 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571162624 unmapped: 80478208 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571162624 unmapped: 80478208 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571179008 unmapped: 80461824 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571179008 unmapped: 80461824 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571195392 unmapped: 80445440 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571195392 unmapped: 80445440 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571195392 unmapped: 80445440 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571195392 unmapped: 80445440 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571195392 unmapped: 80445440 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571195392 unmapped: 80445440 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571195392 unmapped: 80445440 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571195392 unmapped: 80445440 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571203584 unmapped: 80437248 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571203584 unmapped: 80437248 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571203584 unmapped: 80437248 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571203584 unmapped: 80437248 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571203584 unmapped: 80437248 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571211776 unmapped: 80429056 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571219968 unmapped: 80420864 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571219968 unmapped: 80420864 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571228160 unmapped: 80412672 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571228160 unmapped: 80412672 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571228160 unmapped: 80412672 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571228160 unmapped: 80412672 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571228160 unmapped: 80412672 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571228160 unmapped: 80412672 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571228160 unmapped: 80412672 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571228160 unmapped: 80412672 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571228160 unmapped: 80412672 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571228160 unmapped: 80412672 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571228160 unmapped: 80412672 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571228160 unmapped: 80412672 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571228160 unmapped: 80412672 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571244544 unmapped: 80396288 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571260928 unmapped: 80379904 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571260928 unmapped: 80379904 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571269120 unmapped: 80371712 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571269120 unmapped: 80371712 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571269120 unmapped: 80371712 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571269120 unmapped: 80371712 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571269120 unmapped: 80371712 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571269120 unmapped: 80371712 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571269120 unmapped: 80371712 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571269120 unmapped: 80371712 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571277312 unmapped: 80363520 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571277312 unmapped: 80363520 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571277312 unmapped: 80363520 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571277312 unmapped: 80363520 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571277312 unmapped: 80363520 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571285504 unmapped: 80355328 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571285504 unmapped: 80355328 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571293696 unmapped: 80347136 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571301888 unmapped: 80338944 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571301888 unmapped: 80338944 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571301888 unmapped: 80338944 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571301888 unmapped: 80338944 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571301888 unmapped: 80338944 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571310080 unmapped: 80330752 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571310080 unmapped: 80330752 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571310080 unmapped: 80330752 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571318272 unmapped: 80322560 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571318272 unmapped: 80322560 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571318272 unmapped: 80322560 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571318272 unmapped: 80322560 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571318272 unmapped: 80322560 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571318272 unmapped: 80322560 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571318272 unmapped: 80322560 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571334656 unmapped: 80306176 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571351040 unmapped: 80289792 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571351040 unmapped: 80289792 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571351040 unmapped: 80289792 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571351040 unmapped: 80289792 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571351040 unmapped: 80289792 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571367424 unmapped: 80273408 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571367424 unmapped: 80273408 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571367424 unmapped: 80273408 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571367424 unmapped: 80273408 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571367424 unmapped: 80273408 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571367424 unmapped: 80273408 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571367424 unmapped: 80273408 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571367424 unmapped: 80273408 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571383808 unmapped: 80257024 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571383808 unmapped: 80257024 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571383808 unmapped: 80257024 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571383808 unmapped: 80257024 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571392000 unmapped: 80248832 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571392000 unmapped: 80248832 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571392000 unmapped: 80248832 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571392000 unmapped: 80248832 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571400192 unmapped: 80240640 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571400192 unmapped: 80240640 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571416576 unmapped: 80224256 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571416576 unmapped: 80224256 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571416576 unmapped: 80224256 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571416576 unmapped: 80224256 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571416576 unmapped: 80224256 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571416576 unmapped: 80224256 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571424768 unmapped: 80216064 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571424768 unmapped: 80216064 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571424768 unmapped: 80216064 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571424768 unmapped: 80216064 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571424768 unmapped: 80216064 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571424768 unmapped: 80216064 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571424768 unmapped: 80216064 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571424768 unmapped: 80216064 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571441152 unmapped: 80199680 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571449344 unmapped: 80191488 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571449344 unmapped: 80191488 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571449344 unmapped: 80191488 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571449344 unmapped: 80191488 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571449344 unmapped: 80191488 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571449344 unmapped: 80191488 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571449344 unmapped: 80191488 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571457536 unmapped: 80183296 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571457536 unmapped: 80183296 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571465728 unmapped: 80175104 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571465728 unmapped: 80175104 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571473920 unmapped: 80166912 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571473920 unmapped: 80166912 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571473920 unmapped: 80166912 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571473920 unmapped: 80166912 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 ms_handle_reset con 0x55cbaeee4800 session 0x55cbafaf0f00
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571482112 unmapped: 80158720 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571482112 unmapped: 80158720 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571490304 unmapped: 80150528 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571490304 unmapped: 80150528 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571490304 unmapped: 80150528 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571490304 unmapped: 80150528 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571490304 unmapped: 80150528 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571490304 unmapped: 80150528 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571498496 unmapped: 80142336 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571498496 unmapped: 80142336 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571506688 unmapped: 80134144 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571506688 unmapped: 80134144 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571514880 unmapped: 80125952 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571514880 unmapped: 80125952 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571514880 unmapped: 80125952 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571514880 unmapped: 80125952 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571514880 unmapped: 80125952 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571514880 unmapped: 80125952 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571523072 unmapped: 80117760 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571523072 unmapped: 80117760 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571523072 unmapped: 80117760 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571523072 unmapped: 80117760 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571523072 unmapped: 80117760 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571523072 unmapped: 80117760 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571531264 unmapped: 80109568 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571531264 unmapped: 80109568 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571547648 unmapped: 80093184 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571547648 unmapped: 80093184 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571547648 unmapped: 80093184 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571547648 unmapped: 80093184 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571547648 unmapped: 80093184 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571547648 unmapped: 80093184 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571547648 unmapped: 80093184 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571547648 unmapped: 80093184 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571555840 unmapped: 80084992 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571555840 unmapped: 80084992 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571555840 unmapped: 80084992 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571555840 unmapped: 80084992 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571564032 unmapped: 80076800 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571564032 unmapped: 80076800 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571580416 unmapped: 80060416 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571580416 unmapped: 80060416 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571580416 unmapped: 80060416 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571580416 unmapped: 80060416 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571588608 unmapped: 80052224 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571588608 unmapped: 80052224 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571588608 unmapped: 80052224 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571588608 unmapped: 80052224 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571588608 unmapped: 80052224 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571588608 unmapped: 80052224 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571596800 unmapped: 80044032 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571596800 unmapped: 80044032 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571596800 unmapped: 80044032 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571596800 unmapped: 80044032 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571596800 unmapped: 80044032 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571604992 unmapped: 80035840 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571613184 unmapped: 80027648 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571613184 unmapped: 80027648 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571613184 unmapped: 80027648 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571613184 unmapped: 80027648 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571621376 unmapped: 80019456 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571621376 unmapped: 80019456 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571621376 unmapped: 80019456 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571621376 unmapped: 80019456 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571637760 unmapped: 80003072 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571637760 unmapped: 80003072 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571637760 unmapped: 80003072 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571637760 unmapped: 80003072 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571637760 unmapped: 80003072 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571637760 unmapped: 80003072 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571637760 unmapped: 80003072 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571637760 unmapped: 80003072 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571645952 unmapped: 79994880 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571654144 unmapped: 79986688 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571662336 unmapped: 79978496 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571662336 unmapped: 79978496 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571662336 unmapped: 79978496 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571662336 unmapped: 79978496 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571662336 unmapped: 79978496 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571662336 unmapped: 79978496 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571670528 unmapped: 79970304 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571670528 unmapped: 79970304 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571670528 unmapped: 79970304 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571670528 unmapped: 79970304 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571670528 unmapped: 79970304 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571670528 unmapped: 79970304 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571670528 unmapped: 79970304 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571670528 unmapped: 79970304 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571695104 unmapped: 79945728 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571695104 unmapped: 79945728 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571695104 unmapped: 79945728 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571695104 unmapped: 79945728 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571695104 unmapped: 79945728 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571695104 unmapped: 79945728 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571695104 unmapped: 79945728 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571695104 unmapped: 79945728 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571703296 unmapped: 79937536 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571703296 unmapped: 79937536 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571711488 unmapped: 79929344 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571711488 unmapped: 79929344 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571711488 unmapped: 79929344 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571711488 unmapped: 79929344 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571719680 unmapped: 79921152 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571719680 unmapped: 79921152 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571719680 unmapped: 79921152 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571719680 unmapped: 79921152 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571727872 unmapped: 79912960 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571727872 unmapped: 79912960 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571727872 unmapped: 79912960 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571727872 unmapped: 79912960 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571727872 unmapped: 79912960 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571727872 unmapped: 79912960 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571744256 unmapped: 79896576 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571744256 unmapped: 79896576 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571752448 unmapped: 79888384 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571752448 unmapped: 79888384 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571752448 unmapped: 79888384 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571752448 unmapped: 79888384 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571752448 unmapped: 79888384 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571760640 unmapped: 79880192 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571768832 unmapped: 79872000 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571768832 unmapped: 79872000 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571777024 unmapped: 79863808 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571777024 unmapped: 79863808 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571777024 unmapped: 79863808 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571777024 unmapped: 79863808 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571777024 unmapped: 79863808 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571777024 unmapped: 79863808 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571777024 unmapped: 79863808 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571777024 unmapped: 79863808 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571785216 unmapped: 79855616 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571785216 unmapped: 79855616 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571785216 unmapped: 79855616 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571785216 unmapped: 79855616 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571793408 unmapped: 79847424 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571793408 unmapped: 79847424 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571809792 unmapped: 79831040 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571809792 unmapped: 79831040 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571809792 unmapped: 79831040 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571817984 unmapped: 79822848 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571817984 unmapped: 79822848 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571817984 unmapped: 79822848 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571817984 unmapped: 79822848 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571817984 unmapped: 79822848 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571817984 unmapped: 79822848 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571817984 unmapped: 79822848 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571826176 unmapped: 79814656 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571826176 unmapped: 79814656 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571826176 unmapped: 79814656 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571826176 unmapped: 79814656 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571834368 unmapped: 79806464 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571834368 unmapped: 79806464 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571842560 unmapped: 79798272 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571842560 unmapped: 79798272 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571858944 unmapped: 79781888 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571858944 unmapped: 79781888 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571858944 unmapped: 79781888 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571858944 unmapped: 79781888 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571858944 unmapped: 79781888 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571858944 unmapped: 79781888 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571858944 unmapped: 79781888 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571858944 unmapped: 79781888 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571867136 unmapped: 79773696 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571867136 unmapped: 79773696 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571875328 unmapped: 79765504 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571875328 unmapped: 79765504 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571875328 unmapped: 79765504 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571875328 unmapped: 79765504 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571883520 unmapped: 79757312 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571883520 unmapped: 79757312 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571899904 unmapped: 79740928 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571899904 unmapped: 79740928 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571899904 unmapped: 79740928 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571899904 unmapped: 79740928 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571899904 unmapped: 79740928 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571899904 unmapped: 79740928 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571899904 unmapped: 79740928 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571899904 unmapped: 79740928 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571908096 unmapped: 79732736 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571908096 unmapped: 79732736 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571908096 unmapped: 79732736 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571908096 unmapped: 79732736 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571908096 unmapped: 79732736 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571916288 unmapped: 79724544 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571924480 unmapped: 79716352 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571932672 unmapped: 79708160 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571932672 unmapped: 79708160 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571940864 unmapped: 79699968 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571940864 unmapped: 79699968 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571940864 unmapped: 79699968 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571940864 unmapped: 79699968 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571940864 unmapped: 79699968 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571949056 unmapped: 79691776 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571949056 unmapped: 79691776 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571957248 unmapped: 79683584 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571957248 unmapped: 79683584 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571957248 unmapped: 79683584 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571957248 unmapped: 79683584 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571957248 unmapped: 79683584 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571965440 unmapped: 79675392 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571965440 unmapped: 79675392 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571965440 unmapped: 79675392 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571973632 unmapped: 79667200 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571973632 unmapped: 79667200 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571973632 unmapped: 79667200 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571973632 unmapped: 79667200 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571973632 unmapped: 79667200 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571973632 unmapped: 79667200 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571981824 unmapped: 79659008 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571981824 unmapped: 79659008 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571990016 unmapped: 79650816 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571990016 unmapped: 79650816 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571990016 unmapped: 79650816 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571998208 unmapped: 79642624 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 571998208 unmapped: 79642624 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572006400 unmapped: 79634432 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572014592 unmapped: 79626240 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572014592 unmapped: 79626240 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572030976 unmapped: 79609856 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572030976 unmapped: 79609856 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572030976 unmapped: 79609856 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572030976 unmapped: 79609856 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572030976 unmapped: 79609856 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572039168 unmapped: 79601664 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572039168 unmapped: 79601664 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572039168 unmapped: 79601664 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572039168 unmapped: 79601664 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572047360 unmapped: 79593472 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572047360 unmapped: 79593472 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572047360 unmapped: 79593472 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572047360 unmapped: 79593472 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572055552 unmapped: 79585280 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572055552 unmapped: 79585280 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572055552 unmapped: 79585280 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572055552 unmapped: 79585280 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572055552 unmapped: 79585280 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572055552 unmapped: 79585280 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572055552 unmapped: 79585280 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572055552 unmapped: 79585280 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572063744 unmapped: 79577088 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572063744 unmapped: 79577088 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572071936 unmapped: 79568896 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.4 total, 600.0 interval#012Cumulative writes: 80K writes, 319K keys, 80K commit groups, 1.0 writes per commit group, ingest: 0.32 GB, 0.05 MB/s#012Cumulative WAL: 80K writes, 30K syncs, 2.68 writes per sync, written: 0.32 GB, 0.05 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 468 writes, 727 keys, 468 commit groups, 1.0 writes per commit group, ingest: 0.24 MB, 0.00 MB/s#012Interval WAL: 468 writes, 228 syncs, 2.05 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572080128 unmapped: 79560704 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572080128 unmapped: 79560704 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572080128 unmapped: 79560704 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572080128 unmapped: 79560704 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572080128 unmapped: 79560704 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572088320 unmapped: 79552512 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572088320 unmapped: 79552512 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572088320 unmapped: 79552512 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572096512 unmapped: 79544320 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572096512 unmapped: 79544320 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572104704 unmapped: 79536128 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572104704 unmapped: 79536128 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572104704 unmapped: 79536128 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572121088 unmapped: 79519744 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572121088 unmapped: 79519744 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572121088 unmapped: 79519744 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572121088 unmapped: 79519744 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572121088 unmapped: 79519744 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572121088 unmapped: 79519744 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572121088 unmapped: 79519744 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572121088 unmapped: 79519744 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572129280 unmapped: 79511552 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572129280 unmapped: 79511552 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572129280 unmapped: 79511552 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572129280 unmapped: 79511552 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572137472 unmapped: 79503360 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572137472 unmapped: 79503360 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572137472 unmapped: 79503360 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572137472 unmapped: 79503360 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572145664 unmapped: 79495168 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572145664 unmapped: 79495168 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572153856 unmapped: 79486976 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572153856 unmapped: 79486976 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572153856 unmapped: 79486976 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572153856 unmapped: 79486976 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572153856 unmapped: 79486976 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572162048 unmapped: 79478784 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572162048 unmapped: 79478784 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572170240 unmapped: 79470592 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572178432 unmapped: 79462400 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572178432 unmapped: 79462400 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572178432 unmapped: 79462400 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572186624 unmapped: 79454208 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572186624 unmapped: 79454208 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 593.813598633s of 600.227172852s, submitted: 240
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572186624 unmapped: 79454208 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572186624 unmapped: 79454208 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572211200 unmapped: 79429632 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572219392 unmapped: 79421440 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573325312 unmapped: 78315520 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573341696 unmapped: 78299136 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573374464 unmapped: 78266368 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573374464 unmapped: 78266368 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573374464 unmapped: 78266368 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573374464 unmapped: 78266368 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573374464 unmapped: 78266368 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573374464 unmapped: 78266368 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573374464 unmapped: 78266368 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573374464 unmapped: 78266368 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573374464 unmapped: 78266368 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573374464 unmapped: 78266368 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573374464 unmapped: 78266368 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573374464 unmapped: 78266368 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573374464 unmapped: 78266368 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573374464 unmapped: 78266368 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573374464 unmapped: 78266368 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573374464 unmapped: 78266368 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573374464 unmapped: 78266368 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573374464 unmapped: 78266368 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573374464 unmapped: 78266368 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573382656 unmapped: 78258176 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573382656 unmapped: 78258176 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573382656 unmapped: 78258176 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573382656 unmapped: 78258176 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573382656 unmapped: 78258176 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573382656 unmapped: 78258176 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573382656 unmapped: 78258176 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573382656 unmapped: 78258176 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573390848 unmapped: 78249984 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573390848 unmapped: 78249984 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573390848 unmapped: 78249984 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573390848 unmapped: 78249984 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573390848 unmapped: 78249984 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573390848 unmapped: 78249984 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573390848 unmapped: 78249984 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573390848 unmapped: 78249984 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573399040 unmapped: 78241792 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573399040 unmapped: 78241792 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573407232 unmapped: 78233600 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573407232 unmapped: 78233600 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573407232 unmapped: 78233600 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573415424 unmapped: 78225408 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573415424 unmapped: 78225408 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573415424 unmapped: 78225408 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573415424 unmapped: 78225408 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573415424 unmapped: 78225408 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573423616 unmapped: 78217216 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573423616 unmapped: 78217216 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573423616 unmapped: 78217216 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573423616 unmapped: 78217216 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573423616 unmapped: 78217216 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573423616 unmapped: 78217216 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573423616 unmapped: 78217216 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573423616 unmapped: 78217216 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573431808 unmapped: 78209024 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573431808 unmapped: 78209024 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573431808 unmapped: 78209024 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573431808 unmapped: 78209024 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573431808 unmapped: 78209024 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573440000 unmapped: 78200832 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573448192 unmapped: 78192640 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573448192 unmapped: 78192640 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573448192 unmapped: 78192640 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573448192 unmapped: 78192640 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573448192 unmapped: 78192640 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573448192 unmapped: 78192640 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573448192 unmapped: 78192640 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573448192 unmapped: 78192640 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573456384 unmapped: 78184448 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573456384 unmapped: 78184448 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573456384 unmapped: 78184448 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573456384 unmapped: 78184448 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573464576 unmapped: 78176256 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573472768 unmapped: 78168064 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573472768 unmapped: 78168064 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573472768 unmapped: 78168064 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573472768 unmapped: 78168064 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573472768 unmapped: 78168064 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573480960 unmapped: 78159872 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573480960 unmapped: 78159872 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573480960 unmapped: 78159872 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573480960 unmapped: 78159872 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573480960 unmapped: 78159872 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573480960 unmapped: 78159872 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573480960 unmapped: 78159872 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573480960 unmapped: 78159872 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573489152 unmapped: 78151680 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573489152 unmapped: 78151680 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573489152 unmapped: 78151680 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573489152 unmapped: 78151680 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573489152 unmapped: 78151680 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573489152 unmapped: 78151680 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573505536 unmapped: 78135296 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573505536 unmapped: 78135296 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573513728 unmapped: 78127104 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573513728 unmapped: 78127104 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573513728 unmapped: 78127104 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573513728 unmapped: 78127104 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573513728 unmapped: 78127104 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573513728 unmapped: 78127104 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573521920 unmapped: 78118912 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573521920 unmapped: 78118912 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573530112 unmapped: 78110720 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573530112 unmapped: 78110720 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573530112 unmapped: 78110720 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573538304 unmapped: 78102528 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573538304 unmapped: 78102528 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573538304 unmapped: 78102528 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573538304 unmapped: 78102528 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573538304 unmapped: 78102528 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573546496 unmapped: 78094336 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573546496 unmapped: 78094336 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573546496 unmapped: 78094336 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573546496 unmapped: 78094336 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573546496 unmapped: 78094336 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573546496 unmapped: 78094336 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573554688 unmapped: 78086144 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573554688 unmapped: 78086144 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573554688 unmapped: 78086144 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573554688 unmapped: 78086144 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573554688 unmapped: 78086144 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573562880 unmapped: 78077952 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573562880 unmapped: 78077952 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573562880 unmapped: 78077952 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573579264 unmapped: 78061568 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573579264 unmapped: 78061568 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573579264 unmapped: 78061568 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573579264 unmapped: 78061568 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573579264 unmapped: 78061568 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573579264 unmapped: 78061568 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573579264 unmapped: 78061568 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: osd.2 438 heartbeat osd_stat(store_statfs(0x197cbe000/0x0/0x1bfc00000, data 0x14fdedc/0x1730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: do_command 'config diff' '{prefix=config diff}'
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573669376 unmapped: 77971456 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: do_command 'config show' '{prefix=config show}'
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: do_command 'counter dump' '{prefix=counter dump}'
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: do_command 'counter schema' '{prefix=counter schema}'
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 573513728 unmapped: 78127104 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: prioritycache tune_memory target: 4294967296 mapped: 572964864 unmapped: 78675968 heap: 651640832 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: bluestore.MempoolThread(0x55cbab71bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5569229 data_alloc: 218103808 data_used: 10874880
Nov 29 04:08:56 np0005539552 ceph-osd[79800]: do_command 'log dump' '{prefix=log dump}'
Nov 29 04:08:56 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 29 04:08:56 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/161660436' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 04:08:56 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 04:08:56 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2128318705' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 04:08:56 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:56 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:56 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:56.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:57 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:57 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:57 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:57 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:57.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:57 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 29 04:08:57 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/428928874' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 04:08:57 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Nov 29 04:08:57 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1939120068' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 29 04:08:58 np0005539552 nova_compute[233724]: 2025-11-29 09:08:58.105 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:08:58 np0005539552 nova_compute[233724]: 2025-11-29 09:08:58.110 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:08:58 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Nov 29 04:08:58 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3058344630' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 29 04:08:58 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:58 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:58 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:58.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:59 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:08:59 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:59 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:59.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:59 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Nov 29 04:08:59 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/574202613' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 29 04:08:59 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Nov 29 04:08:59 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/319665089' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 29 04:08:59 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Nov 29 04:08:59 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2608874418' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 29 04:09:00 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Nov 29 04:09:00 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3173161959' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 29 04:09:00 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Nov 29 04:09:00 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2989489749' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 29 04:09:00 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:09:00 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:09:00 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:00.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:09:01 np0005539552 systemd[1]: Starting Hostname Service...
Nov 29 04:09:01 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:09:01 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:09:01 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:01.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:09:01 np0005539552 systemd[1]: Started Hostname Service.
Nov 29 04:09:01 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Nov 29 04:09:01 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2230525528' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 29 04:09:01 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Nov 29 04:09:01 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1732207971' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 29 04:09:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:09:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Nov 29 04:09:02 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2455739261' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 29 04:09:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Nov 29 04:09:02 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2698641814' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 29 04:09:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Nov 29 04:09:02 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/341243842' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 29 04:09:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 29 04:09:02 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2947509329' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 04:09:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Nov 29 04:09:02 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/458619485' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 29 04:09:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Nov 29 04:09:02 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2194772314' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 29 04:09:02 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 29 04:09:02 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3808616842' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 04:09:02 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:09:02 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.003000081s ======
Nov 29 04:09:02 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:02.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000081s
Nov 29 04:09:03 np0005539552 nova_compute[233724]: 2025-11-29 09:09:03.107 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:09:03 np0005539552 nova_compute[233724]: 2025-11-29 09:09:03.113 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:09:03 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:09:03 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:03 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:03.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Nov 29 04:09:03 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3377755616' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 29 04:09:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Nov 29 04:09:03 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4153528042' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 29 04:09:03 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #175. Immutable memtables: 0.
Nov 29 04:09:03 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:09:03.561481) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 04:09:03 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:856] [default] [JOB 111] Flushing memtable with next log file: 175
Nov 29 04:09:03 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407343561559, "job": 111, "event": "flush_started", "num_memtables": 1, "num_entries": 2512, "num_deletes": 251, "total_data_size": 6022745, "memory_usage": 6120184, "flush_reason": "Manual Compaction"}
Nov 29 04:09:03 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:885] [default] [JOB 111] Level-0 flush table #176: started
Nov 29 04:09:03 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407343591074, "cf_name": "default", "job": 111, "event": "table_file_creation", "file_number": 176, "file_size": 3941526, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 84467, "largest_seqno": 86974, "table_properties": {"data_size": 3930997, "index_size": 6638, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2821, "raw_key_size": 23583, "raw_average_key_size": 21, "raw_value_size": 3909351, "raw_average_value_size": 3518, "num_data_blocks": 288, "num_entries": 1111, "num_filter_entries": 1111, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764407120, "oldest_key_time": 1764407120, "file_creation_time": 1764407343, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 176, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:09:03 np0005539552 ceph-mon[77121]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 111] Flush lasted 29649 microseconds, and 8179 cpu microseconds.
Nov 29 04:09:03 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:09:03 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:09:03.591136) [db/flush_job.cc:967] [default] [JOB 111] Level-0 flush table #176: 3941526 bytes OK
Nov 29 04:09:03 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:09:03.591158) [db/memtable_list.cc:519] [default] Level-0 commit table #176 started
Nov 29 04:09:03 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:09:03.598563) [db/memtable_list.cc:722] [default] Level-0 commit table #176: memtable #1 done
Nov 29 04:09:03 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:09:03.598586) EVENT_LOG_v1 {"time_micros": 1764407343598579, "job": 111, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 04:09:03 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:09:03.598606) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 04:09:03 np0005539552 ceph-mon[77121]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 111] Try to delete WAL files size 6011330, prev total WAL file size 6011330, number of live WAL files 2.
Nov 29 04:09:03 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000172.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:09:03 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:09:03.600148) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037323739' seq:72057594037927935, type:22 .. '7061786F730037353331' seq:0, type:0; will stop at (end)
Nov 29 04:09:03 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 112] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 04:09:03 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 111 Base level 0, inputs: [176(3849KB)], [174(11MB)]
Nov 29 04:09:03 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407343600173, "job": 112, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [176], "files_L6": [174], "score": -1, "input_data_size": 16080201, "oldest_snapshot_seqno": -1}
Nov 29 04:09:03 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Nov 29 04:09:03 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3484172485' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 29 04:09:03 np0005539552 ceph-mon[77121]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 112] Generated table #177: 11863 keys, 14092246 bytes, temperature: kUnknown
Nov 29 04:09:03 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407343675540, "cf_name": "default", "job": 112, "event": "table_file_creation", "file_number": 177, "file_size": 14092246, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14017890, "index_size": 43587, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29701, "raw_key_size": 313772, "raw_average_key_size": 26, "raw_value_size": 13812131, "raw_average_value_size": 1164, "num_data_blocks": 1647, "num_entries": 11863, "num_filter_entries": 11863, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400508, "oldest_key_time": 0, "file_creation_time": 1764407343, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb0bb604-2277-410b-a16a-74e952f23481", "db_session_id": "AE0I8NBVSMFLPQVY2DCL", "orig_file_number": 177, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:09:03 np0005539552 ceph-mon[77121]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:09:03 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:09:03.675790) [db/compaction/compaction_job.cc:1663] [default] [JOB 112] Compacted 1@0 + 1@6 files to L6 => 14092246 bytes
Nov 29 04:09:03 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:09:03.678254) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 213.2 rd, 186.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 11.6 +0.0 blob) out(13.4 +0.0 blob), read-write-amplify(7.7) write-amplify(3.6) OK, records in: 12380, records dropped: 517 output_compression: NoCompression
Nov 29 04:09:03 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:09:03.678270) EVENT_LOG_v1 {"time_micros": 1764407343678262, "job": 112, "event": "compaction_finished", "compaction_time_micros": 75427, "compaction_time_cpu_micros": 29578, "output_level": 6, "num_output_files": 1, "total_output_size": 14092246, "num_input_records": 12380, "num_output_records": 11863, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 04:09:03 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000176.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:09:03 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407343678928, "job": 112, "event": "table_file_deletion", "file_number": 176}
Nov 29 04:09:03 np0005539552 ceph-mon[77121]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000174.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:09:03 np0005539552 ceph-mon[77121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407343680946, "job": 112, "event": "table_file_deletion", "file_number": 174}
Nov 29 04:09:03 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:09:03.600069) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:09:03 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:09:03.680997) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:09:03 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:09:03.681004) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:09:03 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:09:03.681006) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:09:03 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:09:03.681008) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:09:03 np0005539552 ceph-mon[77121]: rocksdb: (Original Log Time 2025/11/29-09:09:03.681010) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:09:04 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Nov 29 04:09:04 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2921893938' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 29 04:09:04 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Nov 29 04:09:04 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1710411712' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 29 04:09:04 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:09:04 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:04 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:04.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:05 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:09:05 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:05 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:05.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:05 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Nov 29 04:09:05 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/947602147' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 29 04:09:05 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 04:09:05 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 04:09:05 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 29 04:09:05 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/352458304' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 29 04:09:06 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Nov 29 04:09:06 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2780353900' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 29 04:09:06 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 04:09:06 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 04:09:06 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 04:09:06 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 04:09:06 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:09:06 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:06 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:06.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:07 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:09:07 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0) v1
Nov 29 04:09:07 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1058461162' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 29 04:09:07 np0005539552 radosgw[83248]: ====== starting new request req=0x7fec965d86f0 =====
Nov 29 04:09:07 np0005539552 radosgw[83248]: ====== req done req=0x7fec965d86f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:07 np0005539552 radosgw[83248]: beast: 0x7fec965d86f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:07.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:07 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Nov 29 04:09:07 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1610929696' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 29 04:09:08 np0005539552 nova_compute[233724]: 2025-11-29 09:09:08.109 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:09:08 np0005539552 nova_compute[233724]: 2025-11-29 09:09:08.114 233728 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:09:08 np0005539552 ceph-mon[77121]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df"} v 0) v1
Nov 29 04:09:08 np0005539552 ceph-mon[77121]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2852379421' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
